
Psychological Assessment - Vol 29, Iss 4



Psychological Assessment publishes mainly empirical articles concerning clinical assessment. Papers that fall within the domain of the journal include research on the development, validation, application, and evaluation of psychological assessment instruments.



Last Build Date: Fri, 24 Mar 2017 17:00:35 EST

Copyright: Copyright 2017 American Psychological Association
 



The Center for Epidemiologic Studies Depression Scale: Invariance across heterosexual men, heterosexual women, gay men, and lesbians.

2016-06-30

The present study examined measurement invariance of the Center for Epidemiologic Studies Depression Scale (CES–D) in community groups of Australian heterosexual men (N = 1106), heterosexual women (N = 2111), gay men (N = 527), and lesbians (N = 712). Confirmatory factor analysis of CES–D item scores supported the theorized oblique 4-factor model. There was support for full measurement invariance across the 4 groups, based on differences in approximate fit indices. In contrast, there was support for only partial invariance when the chi-square difference test was applied. Lack of invariance was mostly for depressed affect and somatic symptom items, with noninvariant somatic symptom items showing consistently higher factor loadings and thresholds among lesbians compared with the other groups. The findings are discussed in relation to the use of the CES–D, the relevance of different depression symptoms to how depression is experienced by the different gender and sexual orientation groups, and gender role socialization and minority sexual orientation theories. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
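
A minimal sketch of the two decision rules contrasted in this abstract: the chi-square difference test and the change in an approximate fit index (delta CFI). The fit statistics below are hypothetical placeholders, not the CES-D results, and the scaled corrections required for ordinal or robust estimators are omitted.

from scipy.stats import chi2

# Fit statistics for a less constrained (configural) and a more constrained
# (scalar) invariance model -- hypothetical values for illustration only
chisq_configural, df_configural, cfi_configural = 2415.3, 480, 0.962
chisq_scalar, df_scalar, cfi_scalar = 2499.8, 534, 0.960

# Chi-square difference test (sensitive to sample size)
delta_chisq = chisq_scalar - chisq_configural
delta_df = df_scalar - df_configural
p_value = chi2.sf(delta_chisq, delta_df)

# Change in an approximate fit index; |delta CFI| <= .01 is a common cutoff
delta_cfi = cfi_configural - cfi_scalar

print(f"delta chi2 = {delta_chisq:.1f}, delta df = {delta_df}, p = {p_value:.4f}")
print(f"delta CFI = {delta_cfi:.3f} -> "
      f"{'supports' if delta_cfi <= 0.01 else 'does not support'} invariance")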



Measurement properties of the Center for Epidemiologic Studies Depression Scale (CES-D 10): Findings from HCHS/SOL.

2016-06-13

The Center for Epidemiologic Studies Depression Scale (CES-D) is a widely used self-report measure of depression symptomatology. This study evaluated the reliability, validity, and measurement invariance of the CES-D 10 in a diverse cohort of Hispanics/Latinos from the Hispanic Community Health Study/Study of Latinos (HCHS/SOL). The sample consisted of 16,415 Hispanic/Latino adults recruited from 4 field centers (Miami, FL; San Diego, CA; Bronx, NY; Chicago, IL). Participants completed interview-administered measures in English or Spanish. The CES-D 10 was examined for internal consistency, test–retest reliability, convergent validity, and measurement invariance. The total score for the CES-D 10 displayed acceptable internal consistencies (Cronbach’s alphas = .80–.86) and test–retest reliability (r values = .41–.70) across the total sample, language groups, and ethnic background groups. The total CES-D 10 scores correlated in a theoretically consistent manner with the Spielberger State–Trait Anxiety Inventory, r = .72, p < .001, the Patient Health Questionnaire-9 depression measure, r = .80, p < .001, the Short Form-12’s Mental Component Summary, r = −.65, p < .001, and Physical Component Summary score, r = −.25, p < .001. A confirmatory factor analysis showed that a 1-factor model fit the CES-D 10 data well (CFI = .986, RMSEA = .047) after correlating 1 pair of item residual variances. Multiple group analyses showed the 1-factor structure to be invariant across English and Spanish speaking responders and partially invariant across Hispanic/Latino background groups. The total score of the CES-D 10 can be recommended for use with Hispanics/Latinos in English and Spanish. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
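
Cronbach's alpha, reported above for the CES-D 10 total score, can be computed directly from an item-score matrix. A minimal sketch with simulated data; the column names and sample size are hypothetical.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
common = rng.normal(size=500)                       # shared "depression" signal
data = pd.DataFrame(
    {f"cesd_{i}": common + rng.normal(scale=1.0, size=500) for i in range(1, 11)}
)
print(f"Cronbach's alpha = {cronbach_alpha(data):.2f}")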



Individuals at high risk for suicide are categorically distinct from those at low risk.

2016-06-09

Although suicide risk is often thought of as existing on a graded continuum, its latent structure (i.e., whether it is categorical or dimensional) has not been empirically determined. Knowledge about the latent structure of suicide risk holds implications for suicide risk assessments, targeted suicide interventions, and suicide research. Our objectives were to determine whether suicide risk can best be understood as a categorical (i.e., taxonic) or dimensional entity, and to validate the nature of any obtained taxon. We conducted taxometric analyses of cross-sectional, baseline data from 16 independent studies funded by the Military Suicide Research Consortium. Participants (N = 1,773) primarily consisted of military personnel, and most had a history of suicidal behavior. The Comparison Curve Fit Index values for MAMBAC (.85), MAXEIG (.77), and L-Mode (.62) all strongly supported categorical (i.e., taxonic) structure for suicide risk. Follow-up analyses comparing the taxon and complement groups revealed substantially larger effect sizes for the variables most conceptually similar to suicide risk compared with variables indicating general distress. Pending replication and establishment of the predictive validity of the taxon, our results suggest the need for a fundamental shift in suicide risk assessment, treatment, and research. Specifically, suicide risk assessments could be shortened without sacrificing validity, the most potent suicide interventions could be allocated to individuals in the high-risk group, and research should generally be conducted on individuals in the high-risk group. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
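
The sketch below illustrates the MAMBAC idea (mean above minus below a cut), one of the three taxometric procedures named in the abstract. The data are simulated and the code is not the consortium's analysis pipeline; indicator names and mixture proportions are invented.

import numpy as np

rng = np.random.default_rng(1)

# Simulate a taxonic mixture: 20% high-risk "taxon", 80% complement
taxon = rng.random(2000) < 0.20
input_ind = rng.normal(loc=np.where(taxon, 1.5, 0.0))    # indicator used for the cuts
output_ind = rng.normal(loc=np.where(taxon, 1.5, 0.0))   # indicator averaged above/below each cut

order = np.argsort(input_ind)
output_sorted = output_ind[order]

# Slide a cut along the sorted cases; record mean(above) - mean(below)
cuts = np.arange(50, len(output_sorted) - 50, 25)
mambac_curve = [output_sorted[c:].mean() - output_sorted[:c].mean() for c in cuts]

# A peaked curve suggests a categorical (taxonic) structure; a dish-shaped
# curve suggests a dimensional one.
peak_cut = cuts[int(np.argmax(mambac_curve))]
print(f"MAMBAC curve peaks at case {peak_cut} of {len(output_sorted)}")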



Exploratory and hierarchical factor analysis of the WJ-IV Cognitive at school age.

2016-06-09

Exploratory and confirmatory factor analytic studies were not reported in the Technical Manual for the Woodcock-Johnson, 4th ed. Cognitive (WJ IV Cognitive; Schrank, McGrew, & Mather, 2014b). Instead, the internal structure of the WJ IV Cognitive was extrapolated from analyses based on the full WJ IV test battery (Schrank, McGrew, & Mather, 2014b). Even if the veracity of extrapolating from the WJ IV full battery were accepted, there were shortcomings in the choices of analyses used and only limited information regarding those analyses was presented in the WJ IV Technical Manual (McGrew, LaForte, & Schrank, 2014). The present study examined the structure of the WJ IV Cognitive using exploratory factor analysis procedures (principal axis factoring with oblique [promax] rotation followed by application of the Schmid–Leiman [1957] procedure) applied to standardization sample correlation matrices for 2 school age groups (ages 9–13; 14–19). Four factors emerged for both the 9–13 and 14–19 age groups in contrast to the publisher’s proposed 7 factors. Results of these analyses indicated a robust manifestation of general intelligence (g) that exceeded the variance attributed to the lower-order factors. Model-based reliability estimates supported interpretation of the higher-order factor (i.e., g). Additional analyses were conducted by forcing extraction of the 7 theoretically posited factors; however, the resulting solution was only partially aligned (i.e., Gs, Gwm) with the theoretical structure promoted in the Technical Manual and suggested the preeminence of the higher-order factor. Results challenge the hypothesized structure of the WJ IV Cognitive and raise concerns about its alignment with Cattell-Horn-Carroll theory. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
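
The Schmid-Leiman (1957) procedure re-expresses a higher-order solution as direct loadings on a general factor plus residualized group-factor loadings. A minimal sketch with invented loadings (not WJ IV values):

import numpy as np

# First-order pattern matrix: 8 subtests on 4 oblique factors (hypothetical)
F1 = np.array([
    [.70, .00, .00, .00],
    [.65, .00, .00, .00],
    [.00, .72, .00, .00],
    [.00, .68, .00, .00],
    [.00, .00, .75, .00],
    [.00, .00, .66, .00],
    [.00, .00, .00, .71],
    [.00, .00, .00, .69],
])
# Second-order loadings of the 4 factors on a general factor (hypothetical)
g2 = np.array([.80, .75, .70, .60])

# General-factor loadings for each subtest
g_loadings = F1 @ g2
# Residualized group-factor loadings: first-order loadings scaled by the
# square root of the second-order uniqueness
group_loadings = F1 * np.sqrt(1 - g2**2)

print("g loadings:", np.round(g_loadings, 2))
print("residualized group loadings:\n", np.round(group_loadings, 2))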



Universal happiness? Cross-cultural measurement invariance of scales assessing positive mental health.

2016-06-20

Research into positive aspects of the psyche is growing as psychologists learn more about the protective role of positive processes in the development and course of mental disorders, and about their substantial role in promoting mental health. With increasing globalization, there is strong interest in studies examining positive constructs across cultures. To obtain valid cross-cultural comparisons, measurement invariance for the scales assessing positive constructs has to be established. The current study aims to assess the cross-cultural measurement invariance of questionnaires for 6 positive constructs: Social Support (Fydrich, Sommer, Tydecks, & Brähler, 2009), Happiness (Subjective Happiness Scale; Lyubomirsky & Lepper, 1999), Life Satisfaction (Diener, Emmons, Larsen, & Griffin, 1985), Positive Mental Health Scale (Lukat, Margraf, Lutz, van der Veld, & Becker, 2016), Optimism (revised Life Orientation Test [LOT-R]; Scheier, Carver, & Bridges, 1994) and Resilience (Schumacher, Leppert, Gunzelmann, Strauss, & Brähler, 2004). Participants included German (n = 4,453), Russian (n = 3,806), and Chinese (n = 12,524) university students. Confirmatory factor analyses and measurement invariance testing demonstrated at least partial strong measurement invariance for all scales except the LOT-R and Subjective Happiness Scale. The latent mean comparisons of the constructs indicated differences between national groups. Potential methodological and cultural explanations for the intergroup differences are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)



Graphical representations of adolescents’ psychophysiological reactivity to social stressor tasks: Reliability and validity of the Chernoff Face approach and person-centered profiles for clinical use.

2016-07-18

Low-cost methods exist for measuring physiology when clinically assessing adolescent social anxiety. Two barriers to widespread use involve lack of (a) physiological expertise among mental health professionals, and (b) techniques for modeling individual-level physiological profiles. We require a “bridge approach” for interpreting physiology that does not require users to have a physiological background to make judgments, and is amenable to developing individual-level physiological profiles. One method—Chernoff Faces—involves graphically representing data using human facial features (eyes, nose, mouth, face shape), thus capitalizing on humans’ abilities to detect even subtle variations among facial features. We examined 327 adolescents from the Tracking Adolescents’ Individual Lives Survey (TRAILS) study who completed baseline social anxiety self-reports and physiological assessments within the social scenarios of the Groningen Social Stressor Task (GSST). Using heart rate (HR) norms and Chernoff Faces, 2 naïve coders made judgments about graphically represented HR data and HR norms. For each adolescent, coders made 4 judgments about the features of 2 Chernoff Faces: (a) HR within the GSST and (b) age-matched HR norms. Coders’ judgments reliably and accurately identified elevated HR relative to norms. Using latent class analyses, we identified 3 profiles of Chernoff Face judgments: (a) consistently below HR norms across scenarios (n = 193); (b) above HR norms mainly when speech making (n = 35); or (c) consistently above HR norms across scenarios (n = 99). Chernoff Face judgments displayed validity evidence in relation to self-reported social anxiety and resting HR variability. This study has important implications for implementing physiology within adolescent social anxiety assessments. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
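
A rough illustration of the Chernoff Face idea with matplotlib: a standardized heart-rate value drives facial features (face shape, eye size, mouth width) so that two faces can be compared side by side. The feature mapping and values below are arbitrary choices for illustration, not the scheme used in the TRAILS/GSST study.

import matplotlib.pyplot as plt
from matplotlib.patches import Arc, Ellipse
import numpy as np

def chernoff_face(ax, hr_z, title):
    """Draw a simple face whose features encode a heart-rate z-score."""
    s = float(np.clip((hr_z + 2) / 4, 0, 1))                                  # rescale roughly [-2, 2] -> [0, 1]
    ax.add_patch(Ellipse((0, 0), 2.0, 2.2 + 0.6 * s, fill=False))             # face shape
    for x in (-0.4, 0.4):
        ax.add_patch(Ellipse((x, 0.3), 0.15 + 0.35 * s, 0.2, fill=False))     # eye size
    ax.plot([0, 0], [-0.1, 0.15], color="k")                                  # nose
    ax.add_patch(Arc((0, -0.6), 0.3 + 0.9 * s, 0.4, theta1=180, theta2=360))  # mouth width
    ax.set_xlim(-1.5, 1.5)
    ax.set_ylim(-2.0, 2.0)
    ax.set_aspect("equal")
    ax.axis("off")
    ax.set_title(title)

fig, axes = plt.subplots(1, 2)
chernoff_face(axes[0], hr_z=0.0, title="Age-matched HR norm")
chernoff_face(axes[1], hr_z=1.6, title="Adolescent HR during speech")
plt.show()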



Factor mixture modeling of intolerance of uncertainty.

2016-07-14

Intolerance of uncertainty (IU) is a multidimensional construct that has been proposed as an important transdiagnostic risk factor across various anxiety and mood disorders. Recent work found support for IU having a continuous latent structure when utilizing taxometric methods. However, taxometrics may not be ideally suited to examine the latent structure of constructs such as IU given the methodological shortcomings associated with this technique. The current study applied factor mixture modeling, a statistical technique that overcomes shortcomings of prior work, to examine the latent structure of IU in a sample of 371 individuals presenting at an outpatient clinic. Findings indicated that the best fitting solution was a 3-class model with 1 class consisting of individuals with high levels of IU (High IU; n = 55) and 1 containing individuals with low levels of IU (Low IU; n = 206). Our third class, labeled Moderate IU, consisted of 110 individuals with levels of IU between those of the High IU and Low IU groups. There were also significant differences across the 3 IU classes, including the relations between IU classes and anxiety-related and depressive disorders. The current investigation was the first to find evidence of IU having a categorical latent structure. Implications for research and clinical utility are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
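
Factor mixture modeling combines continuous latent factors with latent classes; the sketch below illustrates only the class-enumeration step (fitting 1 to 4 classes and comparing BIC), using scikit-learn's GaussianMixture as a stand-in rather than a true factor mixture model. The simulated "IU subscale" scores, class sizes, and means are hypothetical.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
low = rng.normal(loc=[1.5, 1.6], scale=0.4, size=(206, 2))
moderate = rng.normal(loc=[2.6, 2.5], scale=0.4, size=(110, 2))
high = rng.normal(loc=[3.8, 3.7], scale=0.4, size=(55, 2))
scores = np.vstack([low, moderate, high])   # two hypothetical IU subscale scores

# Lower BIC indicates a better trade-off between fit and parsimony
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(f"{k} class(es): BIC = {gmm.bic(scores):.1f}")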



Cognitive discrepancy models for specific learning disabilities identification: Simulations of psychometric limitations.

2016-08-08

Few studies have investigated specific learning disabilities (SLD) identification methods based on the identification of patterns of processing strengths and weaknesses (PSW). We investigated the reliability of SLD identification decisions emanating from different achievement test batteries for 1 method to operationalize the PSW approach: the concordance/discordance model (C/DM; Hale & Fiorello, 2004). Two studies examined the level of agreement for SLD identification decisions between 2 different simulated, highly correlated achievement test batteries. Study 1 simulated achievement and cognitive data across a wide range of potential latent correlations between an achievement deficit, a cognitive strength, and a cognitive weakness. Latent correlations permitted simulation of case-level data at specified reliabilities for cognitive abilities and 2 achievement observations. C/DM criteria were applied and resulting SLD classifications from the 2 achievement test batteries were compared for agreement. Overall agreement and negative agreement were high, but positive agreement was low (0.33–0.59) across all conditions. Study 2 isolated the effects of reduced test reliability on agreement for SLD identification decisions resulting from different test batteries. Reductions in reliability of the 2 achievement tests resulted in average decreases in positive agreement of 0.13. Conversely, reductions in reliability of cognitive measures resulted in small average increases in positive agreement (0.0–0.06). Findings from both studies are consistent with prior research demonstrating the inherent instability of classifications based on C/DM criteria. Within complex ipsative SLD identification models like the C/DM, small variations in test selection can have deleterious effects on classification reliability. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
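
A simplified sketch of the agreement logic: simulate two parallel achievement tests at a chosen reliability, flag low scorers on each, and compute positive agreement. The reliability, cut score, and single-criterion flagging rule are illustrative and far simpler than the full C/DM criteria.

import numpy as np

rng = np.random.default_rng(3)
n, reliability = 100_000, 0.90

true_score = rng.normal(size=n)
error_sd = np.sqrt((1 - reliability) / reliability)   # error SD giving the chosen reliability
test_a = true_score + rng.normal(scale=error_sd, size=n)
test_b = true_score + rng.normal(scale=error_sd, size=n)

cut = np.quantile(test_a, 0.10)      # flag roughly the lowest 10% as an "achievement deficit"
flag_a, flag_b = test_a < cut, test_b < cut

both = np.sum(flag_a & flag_b)
only_a = np.sum(flag_a & ~flag_b)
only_b = np.sum(~flag_a & flag_b)
positive_agreement = 2 * both / (2 * both + only_a + only_b)
print(f"Positive agreement between the two batteries = {positive_agreement:.2f}")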



Structural validity of the Wechsler Intelligence Scale for Children–Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests.

2016-07-21

The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved)
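
For readers unfamiliar with the ω coefficients mentioned above, the sketch below computes ω-total and ω-hierarchical from a bifactor loading matrix. The loadings are invented for illustration and are not WISC-V values.

import numpy as np

# Rows = subtests; column 0 = general factor, columns 1-2 = two group factors
loadings = np.array([
    [.75, .40, .00],
    [.70, .35, .00],
    [.72, .30, .00],
    [.68, .00, .45],
    [.65, .00, .40],
    [.60, .00, .35],
])
uniqueness = 1 - (loadings ** 2).sum(axis=1)

general_var = loadings[:, 0].sum() ** 2
group_var = sum(loadings[:, j].sum() ** 2 for j in range(1, loadings.shape[1]))
total_var = general_var + group_var + uniqueness.sum()

omega_total = (general_var + group_var) / total_var          # all common variance
omega_hierarchical = general_var / total_var                 # variance due to g alone
print(f"omega total = {omega_total:.2f}, omega hierarchical = {omega_hierarchical:.2f}")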



The impact of underreporting and overreporting on the validity of the Personality Inventory for DSM–5 (PID-5): A simulation analog design investigation.

2016-07-14

The Personality Inventory for DSM–5 (PID-5) is a 220-item self-report instrument that assesses the alternative model of personality psychopathology in Section III (Emerging Measures and Models) of DSM–5. Despite its relatively recent introduction, the PID-5 has generated an impressive accumulation of studies examining its psychometric properties, and the instrument is also already widely and frequently used in research studies. Although the PID-5 is psychometrically sound overall, reviews of this instrument express concern that this scale does not possess validity scales to detect invalidating levels of response bias, such as underreporting and overreporting. McGee Ng et al. (2016), using a “known-groups” (partial) criterion design, demonstrated that both underreporting and overreporting grossly affect mean scores on PID-5 scales. In the current investigation, we replicate these findings using an analog simulation design. An important extension to this replication study was the finding that the construct validity of the PID-5 was also significantly compromised by response bias, with statistically significant attenuation noted in validity coefficients of the PID-5 domain scales with scales from other instruments measuring congruent constructs. This attenuation was found for underreporting and overreporting bias. We believe there is a need to develop validity scales to screen for data-distorting response bias in research contexts and in clinical assessments where response bias is likely or otherwise suspected. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
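
A small simulation illustrating the attenuation mechanism described in this abstract: adding a response-bias component that is unrelated to the target construct lowers the observed validity coefficient between two congruent scales. All variable names and values are hypothetical, not drawn from the study.

import numpy as np

rng = np.random.default_rng(4)
n = 10_000
construct = rng.normal(size=n)                              # target personality construct

honest_scale = construct + rng.normal(scale=0.6, size=n)    # self-report scale, honest responding
criterion = construct + rng.normal(scale=0.6, size=n)       # congruent scale from another instrument

bias = np.abs(rng.normal(scale=1.2, size=n))                # underreporting tendency, unrelated to the construct
biased_scale = honest_scale - bias                          # scores pulled downward by underreporting

r_honest = np.corrcoef(honest_scale, criterion)[0, 1]
r_biased = np.corrcoef(biased_scale, criterion)[0, 1]
print(f"Validity coefficient: honest = {r_honest:.2f}, biased = {r_biased:.2f}")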