Subscribe: Psychological Methods - Vol 22, Iss 1
http://content.apa.org/journals/met.rss
Language: English



Psychological Methods - Vol 22, Iss 1



Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and data analysis to the psychological community.



Last Build Date: Mon, 29 May 2017 19:00:15 EST

Copyright: Copyright 2017 American Psychological Association
 



The making of Psychological Methods.

2017-03-02

Psychological Methods celebrated its 20-year anniversary recently, having published its first quarterly issue in March 1996. It seemed time to provide a brief overview of the history, the highlights over the years, and the current state of the journal, along with tips for submissions. The article is organized to discuss (a) the background and development of the journal; (b) the top articles, authors, and topics over the years; (c) an overview of the journal today; and (d) a summary of the features of successful articles that usually entail rigorous and novel methodology described in clear and understandable writing and that can be applied in meaningful and relevant areas of psychological research. (PsycINFO Database Record (c) 2017 APA, all rights reserved)



Two-condition within-participant statistical mediation analysis: A path-analytic framework.

2016-06-30

Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.’s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.’s method requires, because it relies only on an inference about the product of paths—the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
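As a rough illustration of the difference-score, path-analytic formulation described above, the sketch below estimates the indirect effect as a*b (the mean mediator difference times the coefficient of that difference in a regression of the outcome difference), with a percentile bootstrap confidence interval. The simulated data, names, and helper function are illustrative assumptions; this is not the authors' SPSS/SAS/Mplus macros.

```python
# a = mean difference in the mediator across conditions,
# b = coefficient of that difference in a regression of the outcome difference,
# indirect effect = a * b, with a percentile bootstrap CI over participants.
import numpy as np

rng = np.random.default_rng(1)
n = 120
# Simulated repeated measures: mediator and outcome under conditions 1 and 2.
m1 = rng.normal(0, 1, n)
m2 = m1 + 0.5 + rng.normal(0, 1, n)           # the manipulation shifts the mediator
y1 = 0.4 * m1 + rng.normal(0, 1, n)
y2 = 0.4 * m2 + 0.2 + rng.normal(0, 1, n)     # the outcome depends on the mediator

def indirect_effect(m1, m2, y1, y2):
    m_diff, y_diff = m2 - m1, y2 - y1
    a = m_diff.mean()                          # path a: condition -> mediator
    m_mean = (m1 + m2) / 2                     # centered mean mediator as covariate
    X = np.column_stack([np.ones(len(m_diff)), m_diff, m_mean - m_mean.mean()])
    coefs, *_ = np.linalg.lstsq(X, y_diff, rcond=None)
    b = coefs[1]                               # path b: mediator -> outcome difference
    return a * b

est = indirect_effect(m1, m2, y1, y2)
boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)                  # resample participants with replacement
    boot.append(indirect_effect(m1[i], m2[i], y1[i], y2[i]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {est:.3f}, 95% bootstrap CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```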



A parsimonious weight function for modeling publication bias.

2017-03-02

Quantitative research literature is often biased because studies that fail to find a significant effect (or that demonstrate effects in an undesired or unexpected direction) are less likely to be published. This phenomenon, termed publication bias, can cause problems when researchers attempt to synthesize results using meta-analytic methods. Various techniques exist that attempt to estimate and correct meta-analyses for publication bias. However, there is no single method that can (a) account for continuous moderators by including them within the model, (b) allow for substantial data heterogeneity, (c) produce an adjusted mean effect size, (d) include a formal test for publication bias, and (e) allow for correction when only a small number of effects is included in the analysis. This article describes a method that we believe helps fill that gap. The model uses the beta density as a weight function that represents the selection process and provides adjusted parameter estimates that account for publication bias. Use of the beta density allows us to represent selection using fewer parameters than similar models so that the proposed model is suitable for meta-analyses that include relatively few studies. We explain the model and its rationale, illustrate its use with a real data set, and describe the results of a simulation study that shows the model’s utility. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
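The following sketch illustrates the general idea of a weighted-likelihood selection model with a beta-density weight on each study's one-tailed p-value, w(p; a, b) = p^(a-1) (1-p)^(b-1). It is not the published implementation; the data, starting values, and optimizer are assumptions made for illustration.

```python
# Random-effects meta-analysis adjusted for selection: each study's normal
# density is multiplied by a beta-density weight on its one-tailed p-value and
# renormalized, and the weighted likelihood is maximized numerically.
import numpy as np
from scipy import stats, integrate, optimize

y = np.array([0.41, 0.30, 0.52, 0.18, 0.62, 0.27])   # observed effects (made up)
v = np.array([0.04, 0.06, 0.03, 0.09, 0.02, 0.05])   # sampling variances (made up)

def neg_loglik(params):
    mu, log_tau2, log_a, log_b = params
    tau2, a, b = np.exp(log_tau2), np.exp(log_a), np.exp(log_b)
    ll = 0.0
    for yi, vi in zip(y, v):
        sd = np.sqrt(tau2 + vi)

        def w(x):
            # beta-density weight on the study's one-tailed p-value
            p = np.clip(stats.norm.sf(x / np.sqrt(vi)), 1e-12, 1 - 1e-12)
            return p ** (a - 1) * (1 - p) ** (b - 1)

        num = stats.norm.pdf(yi, mu, sd) * w(yi)
        # normalizing constant: expected weight under the unselected density
        denom, _ = integrate.quad(lambda x: stats.norm.pdf(x, mu, sd) * w(x),
                                  -np.inf, np.inf)
        ll += np.log(num) - np.log(denom)
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.3, np.log(0.01), 0.0, 0.0],
                        method="Nelder-Mead")
mu_hat, tau2_hat = res.x[0], np.exp(res.x[1])
print(f"bias-adjusted mean effect = {mu_hat:.3f}, tau^2 = {tau2_hat:.3f}")
```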



Plausibility and influence in selection models: A comment on Citkowicz and Vevea (2017).

2017-03-02

I discuss how methods that adjust for publication selection involve implicit or explicit selection models. Such models describe the relation between the studies conducted and those actually observed. I argue that the evaluation of selection models should include an evaluation of the plausibility of the empirical implications of that model. This includes how many studies would have had to exist to yield the observed sample of studies. I also argue that it is important to understand how much influence one study, or a small number of studies, might have on the overall results. (PsycINFO Database Record (c) 2017 APA, all rights reserved)



Effects of parceling on model selection: Parcel-allocation variability in model ranking.

2016-01-25

Research interest often lies in comparing structural model specifications implying different relationships among latent factors. In this context parceling is commonly accepted, assuming the item-level measurement structure is well known and, conservatively, assuming items are unidimensional in the population. Under these assumptions, researchers compare competing structural models, each specified using the same parcel-level measurement model. However, little is known about consequences of parceling for model selection in this context—including whether and when model ranking could vary across alternative item-to-parcel allocations within-sample. This article first provides a theoretical framework that predicts the occurrence of parcel-allocation variability (PAV) in model selection index values and its consequences for PAV in ranking of competing structural models. These predictions are then investigated via simulation. We show that conditions known to manifest PAV in absolute fit of a single model may or may not manifest PAV in model ranking. Thus, one cannot assume that low PAV in absolute fit implies a lack of PAV in ranking, and vice versa. PAV in ranking is shown to occur under a variety of conditions, including large samples. To provide an empirically supported strategy for selecting a model when PAV in ranking exists, we draw on relationships between structural model rankings in parcel- versus item-solutions. This strategy employs the across-allocation modal ranking. We developed software tools for implementing this strategy in practice, and illustrate them with an example. Even if a researcher has substantive reason to prefer one particular allocation, investigating PAV in ranking within-sample still provides an informative sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
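The sketch below shows only the allocation-and-ranking machinery implied by the strategy described above: draw many random item-to-parcel allocations, rank the competing structural models for each, and take the modal ranking across allocations. The rank_models function is a hypothetical stand-in for fitting the two structural models in SEM software and ordering them by a selection index such as BIC.

```python
# Across-allocation modal ranking: repeatedly re-allocate items to parcels at
# random, rank the competing structural models for each allocation, and report
# the most frequent ranking.
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)

def random_allocation(n_items, n_parcels):
    """Randomly assign item indices to parcels of (near-)equal size."""
    items = rng.permutation(n_items)
    return np.array_split(items, n_parcels)

def parcel_scores(item_data, allocation):
    """Average the items within each parcel (one column per parcel)."""
    return np.column_stack([item_data[:, idx].mean(axis=1) for idx in allocation])

def rank_models(parcel_data):
    # Hypothetical stand-in: in practice, fit Model A and Model B to
    # `parcel_data` and return whichever ordering the selection index implies.
    return rng.choice(["A over B", "B over A"])   # placeholder ranking

item_data = rng.normal(size=(300, 12))            # 300 cases, 12 items (illustrative)
rankings = [rank_models(parcel_scores(item_data, random_allocation(12, 4)))
            for _ in range(100)]                  # 100 random item-to-parcel allocations
modal_ranking, count = Counter(rankings).most_common(1)[0]
print(f"modal ranking across allocations: {modal_ranking} ({count}/100)")
```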



Measuring response styles in Likert items.

2016-11-28

The recently proposed class of item response tree models provides a flexible framework for modeling multiple response processes. This feature is particularly attractive for understanding how response styles may affect answers to attitudinal questions. Facilitating the dissociation of response styles and attitudinal traits, item response tree models can provide powerful process tests of how different response formats may affect the measurement of substantive traits. In an empirical study, 3 response formats were used to measure the 2-dimensional Personal Need for Structure traits. Different item response tree models are proposed to capture the response styles for each of the response formats. These models show that the response formats give rise to similar trait measures but different response-style effects. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
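For concreteness, here is a minimal sketch of one common item response tree decomposition of a 5-point Likert response into binary pseudo-items for midpoint responding, direction, and extreme responding (a Böckenholt-style tree). The subsequent multidimensional IRT step, and the specific trees used in the article, are not shown.

```python
# Map 1-5 Likert responses to three binary pseudo-items; responses at the
# midpoint leave the direction and extremity nodes undefined (NaN).
import numpy as np
import pandas as pd

def irtree_pseudo_items(resp):
    """Return (midpoint, direction, extreme) pseudo-items for 1-5 responses."""
    resp = np.asarray(resp, dtype=float)
    midpoint = np.where(resp == 3, 1.0, 0.0)                # neutral category chosen?
    direction = np.where(resp == 3, np.nan,                 # agree (4,5) vs disagree (1,2)
                         np.where(resp > 3, 1.0, 0.0))
    extreme = np.where(resp == 3, np.nan,                   # endpoint (1,5) vs moderate (2,4)
                       np.where(np.isin(resp, [1, 5]), 1.0, 0.0))
    return pd.DataFrame({"midpoint": midpoint, "direction": direction, "extreme": extreme})

print(irtree_pseudo_items([1, 2, 3, 4, 5]))
```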



Inferences about competing measures based on patterns of binary significance tests are questionable.

2016-12-12

An important step in demonstrating the validity of a new measure is to show that it is a better predictor of outcomes than existing measures—often called incremental validity. Investigators can use regression methods to argue for the incremental validity of new measures, while adjusting for competing or existing measures. The argument is often based on patterns of binary significance tests (BST): (a) both measures are significantly related to the outcome, (b) when adjusted for the new measure the competing measure is no longer significantly related to the outcome, but (c) when adjusted for the competing measure the new measure is still significantly related to the outcome. We show that the BST argument can lead to false conclusions up to 30% of the time when the validity study has modest statistical power. We review alternate methods for making strong inferences about validity and illustrate these with data on construal level in the context of relationships.

Researchers often present results in black and white terms using statistical significance tests; the conclusions from such results can be misleading. We focus on a special case of this style of reporting whereby a new measure is said to be as good as, or better than, another measure because it is significantly related to an outcome whereas the other measure is not significant when both measures are tested jointly. In our tutorial on inference in regression, we show that arguments based on binary (black and white) patterns can lead to incorrect conclusions more than a third of the time, and we explain why this result is obtained. We further distinguish 3 situations where 2 measures are compared and show better ways of making arguments: (a) when 2 measures are thought to be literally equivalent, (b) when the new measure is thought to be better than the other, and (c) when the new measure adds information to the other, even if it is not equivalent or superior. We illustrate the statistical arguments with data on a new measure of construal level (specific vs. general thinking) in the context of relationships. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
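The simulation sketch below illustrates the core point with assumed parameter values (not the authors' study design): when two measures are exactly equivalent in the population and power is modest, the BST pattern in the joint regression (the "new" measure significant, the competing measure not) still occurs at a nontrivial rate.

```python
# Two equally valid, correlated measures predict the same outcome; count how
# often the binary-significance-test pattern occurs under exact equivalence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, r_xx, validity, reps = 100, 0.6, 0.3, 2000
hits = 0
for _ in range(reps):
    cov = np.array([[1.0, r_xx], [r_xx, 1.0]])
    x = rng.multivariate_normal([0, 0], cov, size=n)          # old and new measure
    y = validity * (x[:, 0] + x[:, 1]) + rng.normal(0, 1, n)
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    p_old, p_new = fit.pvalues[1], fit.pvalues[2]
    hits += (p_new < 0.05) and (p_old >= 0.05)                # the BST pattern
print(f"BST pattern rate under exact equivalence: {hits / reps:.1%}")
```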



Estimating the standardized mean difference with minimum risk: Maximizing accuracy and minimizing cost with sequential estimation.

2016-09-08

The standardized mean difference is a widely used effect size measure. In this article, we develop a general theory for estimating the population standardized mean difference by minimizing both the mean square error of the estimator and the total sampling cost. Fixed sample size methods, when sample size is planned before the start of a study, cannot simultaneously minimize both the mean square error of the estimator and the total sampling cost. To overcome this limitation of the current state of affairs, this article develops a purely sequential sampling procedure, which provides an estimate of the sample size required to achieve a sufficiently accurate estimate with minimum expected sampling cost. Performance of the purely sequential procedure is examined via a simulation study to show that our analytic developments are highly accurate. Additionally, we provide freely available functions in R to implement the algorithm of the purely sequential procedure. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
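The sketch below shows the general shape of a purely sequential minimum-risk procedure, not the authors' exact stopping rule: keep sampling until the per-group sample size exceeds a plug-in estimate of the risk-minimizing size. The large-sample approximation Var(d) ≈ 2/n + d²/(4n) and the risk A·MSE + c·(total observations) are assumptions made for the illustration.

```python
# Generic purely sequential minimum-risk sketch: after a pilot sample, keep
# adding one observation per group until n >= sqrt(A * (2 + d_hat**2 / 4) / (2c)),
# the value that minimizes A*(2 + d^2/4)/n + 2*c*n in n.
import numpy as np

rng = np.random.default_rng(42)
A, c, n0 = 400.0, 1.0, 10          # loss weight, per-observation cost, pilot size (assumed)
mu1, mu2, sigma = 0.0, 0.5, 1.0    # data-generating values (illustrative)

def draw_pair():
    return rng.normal(mu1, sigma), rng.normal(mu2, sigma)

g1, g2 = [], []
for _ in range(n0):                 # pilot sample
    obs1, obs2 = draw_pair()
    g1.append(obs1)
    g2.append(obs2)

while True:
    n = len(g1)
    s_pooled = np.sqrt((np.var(g1, ddof=1) + np.var(g2, ddof=1)) / 2)
    d_hat = (np.mean(g2) - np.mean(g1)) / s_pooled
    n_star = np.sqrt(A * (2 + d_hat**2 / 4) / (2 * c))   # plug-in optimal per-group n
    if n >= n_star:
        break
    obs1, obs2 = draw_pair()                             # otherwise keep sampling
    g1.append(obs1)
    g2.append(obs2)

print(f"stopped at n = {n} per group, d_hat = {d_hat:.3f}")
```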



On the unnecessary ubiquity of hierarchical linear modeling.

2016-05-05

In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
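The sketch below fits the same simulated clustered dataset three ways, as contrasted in the abstract: a random-intercept model (HLM), GEE with an exchangeable working correlation, and ordinary regression with cluster-robust standard errors. The data-generating values are illustrative.

```python
# Three treatments of clustered data on one simulated dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_clusters, n_per = 30, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0, 0.7, n_clusters)[cluster]               # cluster random intercepts
x = rng.normal(size=n_clusters * n_per)
y = 1.0 + 0.5 * x + u + rng.normal(size=n_clusters * n_per)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

hlm = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
gee = smf.gee("y ~ x", groups="cluster", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
ols_cr = smf.ols("y ~ x", df).fit(cov_type="cluster",
                                  cov_kwds={"groups": df["cluster"]})

for name, res in [("HLM", hlm), ("GEE", gee), ("OLS + cluster-robust SE", ols_cr)]:
    print(f"{name:>24}: b_x = {res.params['x']:.3f}, SE = {res.bse['x']:.3f}")
```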



Multiple imputation of missing data in multilevel designs: A comparison of different strategies.

2016-09-08

Multiple imputation is a widely recommended means of addressing the problem of missing data in psychological research. An often-neglected requirement of this approach is that the imputation model used to generate the imputed values must be at least as general as the analysis model. For multilevel designs in which lower level units (e.g., students) are nested within higher level units (e.g., classrooms), this means that the multilevel structure must be taken into account in the imputation model. In the present article, we compare different strategies for multiply imputing incomplete multilevel data using mathematical derivations and computer simulations. We show that ignoring the multilevel structure in the imputation may lead to substantial negative bias in estimates of intraclass correlations as well as biased estimates of regression coefficients in multilevel models. We also demonstrate that an ad hoc strategy that includes dummy indicators in the imputation model to represent the multilevel structure may be problematic under certain conditions (e.g., small groups, low intraclass correlations). Imputation based on a multivariate linear mixed effects model was the only strategy to produce valid inferences under most of the conditions investigated in the simulation study. Data from an educational psychology research project are also used to illustrate the impact of the various multiple imputation strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
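As a small illustration of the bias described above, the sketch below uses a single stochastic regression imputation that ignores the grouping (a simplified stand-in for a full single-level multiple-imputation procedure) and shows the estimated intraclass correlation being pulled toward zero relative to the complete data.

```python
# Ignoring the cluster structure when imputing a two-level outcome attenuates
# the estimated intraclass correlation (ICC).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_groups, n_per = 50, 15
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
y = 0.5 * x + rng.normal(0, 1.0, n_groups)[g] + rng.normal(0, 1.0, g.size)  # residual ICC = 0.5
df = pd.DataFrame({"y": y, "x": x, "g": g})

def icc(data):
    res = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
    tau2, sigma2 = res.cov_re.iloc[0, 0], res.scale          # between and within variance
    return tau2 / (tau2 + sigma2)

print("complete-data ICC:", round(icc(df), 3))

# Delete 40% of y completely at random, then impute from a single-level
# regression that ignores the grouping (the problematic strategy).
miss = rng.random(df.shape[0]) < 0.4
fit = smf.ols("y ~ x", df.loc[~miss]).fit()
imp = df.copy()
pred = fit.predict(df.loc[miss]) + rng.normal(0, np.sqrt(fit.scale), miss.sum())
imp.loc[miss, "y"] = pred.to_numpy()
print("ICC after single-level imputation:", round(icc(imp), 3))
```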



The impact of total and partial inclusion or exclusion of active and inactive time invariant covariates in growth mixture models.

2016-09-19

This article evaluates the impact of partial or total covariate inclusion or exclusion on the class enumeration performance of growth mixture models (GMMs). Study 1 examines the effect of including an inactive covariate when the population model is specified without covariates. Study 2 examines the case in which the population model is specified with 2 covariates influencing only the class membership. Study 3 examines a population model including 2 covariates influencing the class membership and the growth factors. In all studies, we contrast the accuracy of various indicators to correctly identify the number of latent classes as a function of different design conditions (sample size, mixing ratio, invariance or noninvariance of the variance-covariance matrix, class separation, and correlations between the covariates in Studies 2 and 3) and covariate specification (exclusion, partial or total inclusion as influencing class membership only, and partial or total inclusion as influencing both class membership and the growth factors in a class-invariant or class-varying manner). The accuracy of the indicators shows important variation across studies, indicators, design conditions, and specification of the covariates effects. However, the results suggest that the GMM class enumeration process should be conducted without covariates, and should rely mostly on the Bayesian information criterion (BIC) and consistent Akaike information criterion (CAIC) as the most reliable indicators under conditions of high class separation (as indicated by higher entropy), versus the sample size adjusted BIC or CAIC (SBIC, SCAIC) and bootstrapped likelihood ratio test (BLRT) under conditions of low class separation (indicated by lower entropy). (PsycINFO Database Record (c) 2017 APA, all rights reserved)
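The sketch below is a simplified stand-in for the unconditional class-enumeration step recommended above: a finite normal mixture over the repeated measures (no covariates, no growth-structure constraints), compared across candidate numbers of classes with BIC and CAIC. A true growth mixture model would constrain the class-specific means and covariances to a growth form; that constraint is omitted here.

```python
# BIC/CAIC class enumeration with an unconditional finite mixture as a crude
# proxy for a growth mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
t = np.arange(4)                                    # 4 measurement occasions
n1, n2 = 200, 100
low = 1.0 + 0.2 * t + rng.normal(0, 0.5, (n1, 4))   # class 1: shallow growth
high = 2.5 + 0.8 * t + rng.normal(0, 0.5, (n2, 4))  # class 2: steeper growth
X = np.vstack([low, high])
n, d = X.shape

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, covariance_type="full",
                         n_init=5, random_state=0).fit(X)
    loglik = gm.score(X) * n                        # total log-likelihood
    n_par = k * d + k * d * (d + 1) // 2 + (k - 1)  # means + covariances + weights
    bic = -2 * loglik + n_par * np.log(n)
    caic = -2 * loglik + n_par * (np.log(n) + 1)
    print(f"k={k}: BIC={bic:.1f}, CAIC={caic:.1f}")
```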



Item response theory scoring and the detection of curvilinear relationships.

2016-11-07

Psychologists are increasingly positing theories of behavior that suggest psychological constructs are curvilinearly related to outcomes. However, results from empirical tests for such curvilinear relations have been mixed. We propose that correctly identifying the response process underlying responses to measures is important for the accuracy of these tests. Indeed, past research has indicated that item responses to many self-report measures follow an ideal point response process—wherein respondents agree only to items that reflect their own standing on the measured variable—as opposed to a dominance process, wherein stronger agreement, regardless of item content, is always indicative of higher standing on the construct. We test whether item response theory (IRT) scoring appropriate for the underlying response process to self-report measures results in more accurate tests for curvilinearity. In 2 simulation studies, we show that, regardless of the underlying response process used to generate the data, using the traditional sum-score generally results in high Type 1 error rates or low power for detecting curvilinearity, depending on the distribution of item locations. With few exceptions, appropriate power and Type 1 error rates are achieved when dominance-based and ideal point-based IRT scoring are correctly used to score dominance and ideal point response data, respectively. We conclude that (a) researchers should be theory-guided when hypothesizing and testing for curvilinear relations; (b) correctly identifying whether responses follow an ideal point versus dominance process, particularly when items are not extreme, is critical; and (c) IRT model-based scoring is crucial for accurate tests of curvilinearity. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
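The sketch below illustrates only the scoring step under an assumed dominance (2PL) response process with known (simulated) item parameters: EAP scores are computed on a quadrature grid with a standard normal prior and could then replace sum scores in a quadratic regression test for curvilinearity. Ideal point scoring (e.g., a GGUM response function) would substitute a different item response function and is omitted.

```python
# EAP scoring under a 2PL model via numerical integration over a theta grid.
import numpy as np

rng = np.random.default_rng(8)
n_persons, n_items = 1000, 12
a = rng.uniform(0.8, 2.0, n_items)                 # discriminations (illustrative)
b = rng.uniform(-2.0, 2.0, n_items)                # item locations (illustrative)
theta = rng.normal(0, 1, n_persons)

p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))    # 2PL response probabilities
resp = (rng.random((n_persons, n_items)) < p).astype(int)

grid = np.linspace(-4, 4, 81)                      # quadrature points
pg = 1 / (1 + np.exp(-a * (grid[:, None] - b)))    # item probabilities at each point
loglik = resp @ np.log(pg).T + (1 - resp) @ np.log(1 - pg).T   # persons x grid
post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * np.exp(-grid**2 / 2)
eap = (post * grid).sum(axis=1) / post.sum(axis=1)

sum_score = resp.sum(axis=1)
print("corr(EAP score, theta) =", round(np.corrcoef(eap, theta)[0, 1], 3))
print("corr(sum score, theta) =", round(np.corrcoef(sum_score, theta)[0, 1], 3))
```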



Why the resistance to statistical innovations? A comment on Sharpe (2013).

2016-06-02

Sharpe’s (2013) article considered reasons for the apparent resistance of substantive researchers to the adoption of newer statistical methods recommended by quantitative methodologists, and possible ways to reduce that resistance, focusing on improved communication. The important point that Sharpe missed, however, is that because research methods vary radically from one subarea of psychology to another, a particular statistical innovation may be much better suited to some subareas than others. Although there may be some psychological or logistical explanations that account for resistance to innovation in general, to fully understand the resistance to any particular innovation, it is necessary to consider how that innovation impacts specific subareas of psychology. In this comment, I focus on the movement to replace null hypothesis significance testing (NHST) with reports of effect sizes and/or confidence intervals, and consider its possible impact on research in which only the direction of the effect is meaningful, and there is no basis for predicting specific effect sizes (and very large samples are rarely used). There are numerous examples of these studies in social psychology, for instance, such as those that deal with priming effects. I use a study in support of terror management theory as my main example. I conclude that the degree to which statistical reformers have overgeneralized their criticisms of NHST, and have failed to tailor their recommendations to different types of research, may explain some of the resistance to abandoning NHST. Finally, I offer suggestions for improved communication to supplement those presented by Sharpe. (PsycINFO Database Record (c) 2017 APA, all rights reserved)