Psychological Methods - Vol 21, Iss 3

Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative data analysis in psychology.

Last Build Date: Wed, 26 Oct 2016 23:00:16 EST

Copyright: Copyright 2016 American Psychological Association

Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.


The Pearson product–moment correlation coefficient (rp) and the Spearman rank correlation coefficient (rs) are widely used in psychological research. We compare rp and rs on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes, we show that, for normally distributed variables, rp and rs have similar expected values, but rs is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, rp is more variable than rs. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, rp had lower variability than rs in the psychometric dataset. In the survey dataset with heavy-tailed variables in particular, rs had lower variability than rp and often corresponded more accurately to the population Pearson correlation coefficient (Rp) than rp did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing rs instead of rp. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of rs and rp. In conclusion, rp is suitable for light-tailed distributions, whereas rs is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
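The simulation design described above can be sketched in a few lines. The following is an illustrative Python reimplementation, not the authors' code: the sample size, population correlation, and the cubing transformation used to induce high kurtosis are assumptions chosen for the demo (cubing a standard-normal score is leptokurtic and preserves ranks, so it leaves the Spearman statistic's target unchanged).

```python
import numpy as np
from scipy import stats

def compare_correlations(n=50, rho=0.5, reps=2000, heavy_tails=False, seed=0):
    """Empirical SDs of Pearson r_p and Spearman r_s across Monte Carlo
    replications of bivariate samples with population correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    rp = np.empty(reps)
    rs = np.empty(reps)
    for i in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        if heavy_tails:
            xy = xy ** 3  # induces high kurtosis but preserves rank order
        rp[i] = stats.pearsonr(xy[:, 0], xy[:, 1])[0]
        rs[i] = stats.spearmanr(xy[:, 0], xy[:, 1])[0]
    return rp.std(ddof=1), rs.std(ddof=1)

sd_rp, sd_rs = compare_correlations()                      # normal margins
sd_rp_h, sd_rs_h = compare_correlations(heavy_tails=True)  # heavy-tailed margins
```

Under the normal condition rs is the more variable statistic, while under the heavy-tailed condition the ordering reverses, mirroring the pattern the abstract reports.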

Dynamical correlation: A new method for quantifying synchrony with multivariate intensive longitudinal data.


In this article, we introduce dynamical correlation, a new method for quantifying synchrony between 2 variables with intensive longitudinal data. Dynamical correlation is a functional data analysis technique developed to measure the similarity of 2 curves. It has advantages over existing methods for studying synchrony, such as multilevel modeling. In particular, it is a nonparametric approach that does not require a prespecified functional form, and it places no assumption on homogeneity of the sample. Dynamical correlation can be easily estimated with irregularly spaced observations and tested to draw population-level inferences. We illustrate this flexible statistical technique with a simulation example and empirical data from an experiment examining interpersonal physiological synchrony between romantic partners. We discuss the advantages and limitations of the method, and how it can be extended and applied in psychological research. We also provide R code for other researchers to estimate and test for dynamical correlation. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
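The article supplies R code; purely as an illustration, a stripped-down Python version of the core idea (center each curve over time, scale it to unit size, then average the product of the standardized curves) might look like the sketch below. It assumes a uniform, shared time grid, whereas the published method also accommodates irregularly spaced observations via smoothing.

```python
import numpy as np

def dynamical_correlation(x, y, t):
    """Per-dyad dynamical correlation between two curves observed on a
    common, uniformly spaced time grid t (simplified sketch)."""
    def standardize(c):
        c = np.asarray(c, dtype=float)
        c = c - c.mean()                    # center the curve over time
        return c / np.sqrt(np.mean(c * c))  # scale to unit time-average size
    # Time-averaged product of the standardized curves; lies in [-1, 1].
    return float(np.mean(standardize(x) * standardize(y)))
```

Population-level inference, as described in the abstract, would then average these per-dyad values across dyads and test that mean.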

The shifted Wald distribution for response time data analysis.


We propose and demonstrate the shifted Wald (SW) distribution as both a useful measurement tool and intraindividual process model for psychological response time (RT) data. Furthermore, we develop a methodology and fitting approach that readers can easily access. As a measurement tool, the SW provides a detailed quantification of the RT data that is more sophisticated than mean and SD comparisons. As an intraindividual process model, the SW provides a cognitive model for the response process in terms of signal accumulation and the threshold needed to respond. The details and importance of both of these features are developed, and we show how the approach can be easily generalized to a variety of experimental domains. The versatility and usefulness of the approach are demonstrated on 3 published data sets, each with a different canonical mode of responding: manual, vocal, and oculomotor. In addition, model-fitting code is included with the article. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
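The SW distribution is an inverse Gaussian (Wald) shifted by a constant. As a hedged sketch of the measurement-tool use, not the article's own fitting code, one can fit it with SciPy's three-parameter inverse Gaussian, where `loc` plays the role of the shift; the simulated parameter values are arbitrary, and the drift/threshold mapping in the comments follows the usual SW parameterization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate 500 RTs: a Wald-distributed decision time plus a constant
# shift theta for non-decision processes (illustrative values).
theta = 0.2  # shift, in seconds
rt = theta + stats.invgauss.rvs(mu=0.5, scale=1.0, size=500, random_state=rng)

# Maximum-likelihood fit of the three-parameter inverse Gaussian;
# loc_hat estimates the shift theta.
mu_hat, loc_hat, scale_hat = stats.invgauss.fit(rt)

# Mapping to the usual SW process parameters (threshold alpha, drift gamma):
# SciPy's IG(mu, scale) has mean mu * scale and shape lambda = scale, while
# the SW decision time has mean alpha / gamma and lambda = alpha ** 2.
alpha_hat = np.sqrt(scale_hat)
gamma_hat = 1.0 / (mu_hat * alpha_hat)
```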

A flexible full-information approach to the modeling of response styles.


We present a flexible full-information approach to modeling multiple user-defined response styles across multiple constructs of interest. The model is based on a novel parameterization of the multidimensional nominal response model that separates estimation of overall item slopes from the scoring functions (indicating the order of categories) for each item and latent trait. This feature allows the definition of response styles to vary across items, and it allows overall item slopes to vary across items for both substantive and response-style dimensions. We compared the model with similar approaches using examples from the smoking initiative of the Patient-Reported Outcomes Measurement Information System. A small set of simulations showed that the estimation approach is able to recover model parameters, factor scores, and reasonable estimates of standard errors. Furthermore, these simulations suggest that failing to include response style factors (when present in the data-generating model) has adverse consequences for substantive trait factor score recovery. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
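The separation of an overall item slope from a category scoring function can be illustrated with the category-probability formula of a nominal-model family. This is a generic one-item, one-trait sketch with made-up names and values, not the article's full multidimensional parameterization:

```python
import numpy as np

def nrm_probs(theta, slope, scoring, intercepts):
    """Category probabilities for one item under a nominal-response-type
    model with an overall slope and a category scoring function.

    theta      : latent trait value
    slope      : overall item slope a
    scoring    : scoring-function values s_k (their order encodes the
                 order of the categories)
    intercepts : category intercepts c_k
    """
    z = slope * np.asarray(scoring, float) * theta + np.asarray(intercepts, float)
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()       # softmax over categories

# Example: 4 ordered categories scored 0..3 for a respondent with theta = 1
p = nrm_probs(theta=1.0, slope=1.2, scoring=[0, 1, 2, 3],
              intercepts=[0.0, 0.5, 0.3, -0.2])
```

Letting `scoring` differ by item and by dimension is what permits item-specific response-style definitions in the approach described above.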

Population performance of SEM parceling strategies under measurement and structural model misspecification.


Previous research has suggested that the use of item parcels in structural equation modeling can lead to biased structural coefficient estimates and low power to detect model misspecification. The present article describes the population performance of items, parcels, and scales under a range of model misspecifications, examining structural path coefficient accuracy, power, and population fit indices. Results revealed that, under measurement model misspecification, any parceling scheme typically results in more accurate structural parameters, but less power to detect the misspecification. When the structural model is misspecified, parcels do not affect parameter accuracy, but they do substantially elevate power to detect the misspecification. Under particular, known measurement model misspecifications, a parceling scheme can be chosen to produce the most accurate estimates. The root mean square error of approximation and the standardized root mean square residual are more sensitive to measurement model misspecification in parceled models than the likelihood ratio test statistic. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
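For concreteness, parceling itself is just aggregating designated items into composite indicators before fitting the SEM; a minimal helper (the assignment scheme here is arbitrary, and choosing it well is exactly the issue studied above) looks like:

```python
import numpy as np

def make_parcels(items, assignment):
    """Average item columns into parcels.

    items      : (n_obs, n_items) array of item scores
    assignment : list of column-index lists, one per parcel
    """
    return np.column_stack([items[:, idx].mean(axis=1) for idx in assignment])

rng = np.random.default_rng(0)
items = rng.normal(size=(100, 6))          # 6 simulated items
parcels = make_parcels(items, [[0, 1], [2, 3], [4, 5]])  # 3 two-item parcels
```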

The performance of ML, DWLS, and ULS estimation with robust corrections in structural equation models with ordinal variables.


Three estimation methods with robust corrections—maximum likelihood (ML) using the sample covariance matrix, unweighted least squares (ULS) using a polychoric correlation matrix, and diagonally weighted least squares (DWLS) using a polychoric correlation matrix—have been proposed in the literature, and are considered to be superior to normal theory-based maximum likelihood when observed variables in latent variable models are ordinal. A Monte Carlo simulation study was carried out to compare the performance of ML, DWLS, and ULS in estimating model parameters, and their robust corrections to standard errors and chi-square statistics, in a structural equation model with ordinal observed variables. Eighty-four conditions, characterized by different ordinal observed distribution shapes, numbers of response categories, and sample sizes, were investigated. Results reveal that (a) DWLS and ULS yield more accurate factor loading estimates than ML across all conditions; (b) DWLS and ULS produce more accurate interfactor correlation estimates than ML in almost every condition; (c) structural coefficient estimates from DWLS and ULS outperform ML estimates in nearly all asymmetric data conditions; (d) robust standard errors of parameter estimates obtained with robust ML are more accurate than those produced by DWLS and ULS across most conditions; and (e) regarding robust chi-square statistics, robust ML is inferior to DWLS and ULS in controlling Type I error in almost every condition, unless a large sample is used (N = 1,000). Finally, implications of the findings are discussed, as are the limitations of this study and potential directions for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
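The data-generating setup behind such studies typically treats each ordinal variable as a discretized latent normal variable, which is also the model under which polychoric correlations are defined. A small sketch (the correlation, thresholds, and sample size are illustrative choices) shows the attenuation of the ordinary Pearson correlation that motivates polychoric-based estimators such as DWLS and ULS:

```python
import numpy as np

def ordinalize(latent, thresholds):
    """Discretize continuous latent scores into ordered categories 0..K
    by thresholding."""
    return np.searchsorted(np.asarray(thresholds, float), latent)

rng = np.random.default_rng(2)
rho = 0.6  # latent (polychoric) correlation
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=1000)

# Asymmetric thresholds produce skewed 4-category observed distributions,
# one of the condition types varied in studies like the one above.
cuts = [-0.5, 0.5, 1.5]
x = ordinalize(z[:, 0], cuts)
y = ordinalize(z[:, 1], cuts)
r_observed = np.corrcoef(x, y)[0, 1]
```

The Pearson correlation of the categorized scores systematically underestimates the latent correlation; a polychoric estimate instead targets `rho` directly.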

A taxonomy of path-related goodness-of-fit indices and recommended criterion values.


Almost all goodness-of-fit indexes (GFIs) for latent variable structural equation models are global GFIs that simultaneously assess the fits of the measurement and structural portions of the model. In one sense, this is an elegant feature of overall model GFIs, but in another sense, it is unfortunate, as the fits of the 2 different portions of the model cannot be assessed independently. We (a) review the developing literature on this issue, (b) propose 6 new GFIs that are designed to evaluate the structural portion of latent variable models independently of the measurement model, (c) couch these GFIs within a general taxonomy based on James, Mulaik, and Brett’s (1982) Conditions 9 and 10 for causal inference from nonexperimental data, (d) conduct a Monte Carlo simulation of the usefulness of these 6 new GFIs for model selection, and (e) on the basis of simulation results, provide recommended criteria for 4 of them. Supplemental analyses also compare 2 of the new GFIs to 2 other structural model selection strategies currently in use. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
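Global GFIs of the kind this taxonomy starts from are simple functions of the model chi-square; for instance, the standard sample RMSEA formula (a common global index, not one of the article's six new path-related indices) can be computed as:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square
    statistic, its degrees of freedom, and the sample size. Values at or
    below roughly .05-.06 are conventionally read as close fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

val = rmsea(chi2=85.0, df=40, n=300)  # made-up fit statistics
```

Because chi2 pools misfit from the measurement and structural portions alike, any such global index inherits the confounding that motivates the separate structural GFIs proposed above.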

Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.


Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the chi-square statistic for the current model is not significant at the .05 level, one moves on to testing the next, more restricted, model using a chi-square-difference statistic. This article argues that this established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the application of equivalence testing to multigroup SEM. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
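The conventional procedure under critique is the nested-model chi-square-difference test. A minimal sketch (the fit statistics are made-up numbers) makes the mechanics, and the hidden assumption, explicit: the difference is referred to a chi-square distribution that is only valid when the less restricted base model already holds in the population.

```python
from scipy import stats

def chisq_diff_test(chi2_restricted, df_restricted, chi2_base, df_base):
    """Classical chi-square-difference test between nested models.
    Returns the difference statistic, its df, and the p value."""
    d_chi2 = chi2_restricted - chi2_base
    d_df = df_restricted - df_base
    p = stats.chi2.sf(d_chi2, d_df)  # upper-tail probability
    return d_chi2, d_df, p

# Hypothetical fit values for a metric-invariance model nested in a
# configural-invariance model.
d, ddf, p = chisq_diff_test(chi2_restricted=112.4, df_restricted=54,
                            chi2_base=98.1, df_base=48)
```

Equivalence testing, as proposed above, replaces this accept-if-nonsignificant logic with a test that the misspecification is smaller than a prespecified bound.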

Propensity score analysis with missing data.


Propensity score analysis is a method that equates treatment and control groups on a comprehensive set of measured confounders in observational (nonrandomized) studies. A successful propensity score analysis reduces bias in the estimate of the average treatment effect in a nonrandomized study, making the estimate more comparable with that obtained from a randomized experiment. This article reviews and discusses an important practical issue in propensity score analysis: the case in which the baseline covariates (potential confounders) and the outcome have missing values (are incompletely observed). We review the statistical theory of propensity score analysis and estimation methods for propensity scores with incompletely observed covariates. Traditional logistic regression and modern machine learning methods (e.g., random forests, generalized boosted modeling) are reviewed as estimation methods for incompletely observed covariates. Balance diagnostics and equating methods for incompletely observed covariates are briefly described. Using an empirical example, the propensity score estimation methods for incompletely observed covariates are illustrated and compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
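As a hedged end-to-end sketch, the following combines single mean imputation of an incompletely observed covariate, logistic-regression propensity estimation by maximum likelihood, and an inverse-probability-weighted (IPW) treatment-effect estimate. All data and the imputation choice are illustrative; the article reviews more sophisticated estimators (e.g., random forests, generalized boosted modeling) and balance diagnostics.

```python
import numpy as np
from scipy.optimize import minimize

def fit_propensity(X, t):
    """Propensity scores from a logistic regression of treatment t on
    covariates X, fit by maximizing the Bernoulli log-likelihood."""
    Xd = np.column_stack([np.ones(len(X)), X])  # add intercept

    def nll(b):
        z = Xd @ b
        # np.logaddexp(0, z) is a numerically stable log(1 + exp(z))
        return np.sum(np.logaddexp(0.0, z) - t * z)

    b = minimize(nll, np.zeros(Xd.shape[1]), method="BFGS").x
    return 1.0 / (1.0 + np.exp(-(Xd @ b)))

def ipw_ate(y, t, ps):
    """Inverse-probability-weighted average treatment effect estimate."""
    return np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))

# Simulated observational data with a confounder x0 that is missing for
# about 20% of units (illustrative missing-at-random mechanism).
rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=(n, 2))
x_obs = x.copy()
x_obs[rng.random(n) < 0.2, 0] = np.nan
t = (rng.random(n) < 1.0 / (1.0 + np.exp(-x[:, 0]))).astype(float)
y = 1.0 * t + x[:, 0] + rng.normal(size=n)  # true average treatment effect = 1

# Simple mean imputation before propensity estimation -- one (crude)
# strategy for incompletely observed covariates among those reviewed above.
col_means = np.nanmean(x_obs, axis=0)
x_imp = np.where(np.isnan(x_obs), col_means, x_obs)
ps = fit_propensity(x_imp, t)
ate = ipw_ate(y, t, ps)
```

Mean imputation leaves some residual confounding for the imputed units, which is precisely why the more principled methods the article reviews are of interest.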