
Psychological Review - Vol 123, Iss 5

Psychological Review publishes articles that make important theoretical contributions to any area of scientific psychology.

Last Build Date: Fri, 21 Oct 2016 14:00:26 EST

Copyright: Copyright 2016 American Psychological Association

The discovery of processing stages: Extension of Sternberg’s method.


We introduce a method for measuring the number and durations of processing stages from the electroencephalographic signal and apply it to the study of associative recognition. Extending past research that combines multivariate pattern analysis with hidden semi-Markov models, the approach identifies, on a trial-by-trial basis, where brief sinusoidal peaks (called bumps) are added to the ongoing electroencephalographic signal. We propose that these bumps mark the onset of critical cognitive stages in processing. The results of the analysis can be used to guide the development of detailed process models. Applied to the associative recognition task, the hidden semi-Markov model multivariate pattern analysis (HSMM-MVPA) method indicates that the effects of associative strength and probe type are localized to a memory retrieval stage and a decision stage. This is in line with a previously developed process model of the task in the adaptive control of thought–rational (ACT-R) architecture. As a test of the generalizability of our method, we also apply it to a data set on the Sternberg working memory task collected by Jacobs, Hwang, Curran, and Kahana (2006). The analysis generalizes robustly and localizes the typical set-size effect to a late comparison/decision stage. In addition to providing information about the number and durations of stages in associative recognition, our analysis sheds light on the event-related potential components implicated in the study of recognition memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
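
To make the bump idea concrete, the sketch below illustrates, in heavily simplified form, the kind of analysis the HSMM-MVPA approach performs: a 50-ms half-sine "bump" template is matched against a (here simulated) single-channel trial, and a small number of non-overlapping bump onsets is chosen by how well each candidate position matches the template. This is not the authors' implementation; the sampling rate, the number of bumps, the greedy placement (instead of the full hidden semi-Markov fit across channels and trials), and the simulated data are all assumptions for illustration only.

```python
# Minimal, illustrative sketch of bump detection (NOT the published HSMM-MVPA
# code). Assumptions: 100-Hz sampling, 3 bumps, greedy template matching on a
# simulated single-channel trial instead of a multichannel HSMM fit.
import numpy as np

SAMPLING_HZ = 100   # assumed sampling rate (10-ms samples)
BUMP_MS = 50        # width of the half-sine bump
N_BUMPS = 3         # assumed number of stage-onset bumps to locate

def half_sine_bump(width_ms=BUMP_MS, fs=SAMPLING_HZ):
    """Brief half-sine peak used as the bump template."""
    n = int(round(width_ms / 1000 * fs))
    return np.sin(np.pi * np.arange(1, n + 1) / (n + 1))

def find_bumps(trial, n_bumps=N_BUMPS):
    """Greedy stand-in for the HSMM fit: score every possible onset by its
    match to the bump template, then keep the best non-overlapping onsets."""
    bump = half_sine_bump()
    w = len(bump)
    scores = np.array([np.dot(trial[t:t + w], bump)
                       for t in range(len(trial) - w)])
    onsets = []
    used = np.zeros(len(scores), dtype=bool)
    for _ in range(n_bumps):
        order = np.argsort(scores)[::-1]          # best-matching onsets first
        t = next(t for t in order
                 if not used[max(0, t - w):t + w].any())
        onsets.append(int(t))
        used[t] = True
    return sorted(onsets)

# Simulated trial: noise with three embedded bumps at samples 20, 55, and 90.
rng = np.random.default_rng(0)
trial = rng.normal(0, 0.3, 120)
for true_onset in (20, 55, 90):
    trial[true_onset:true_onset + 5] += half_sine_bump()
print(find_bumps(trial))   # estimated bump onsets, in samples
```

In the full method, bump placements are constrained by stage-duration distributions and spatial topographies estimated across trials, so this sketch should be read only as an intuition pump for how detected bump onsets segment a trial into processing stages.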

Cooperative inference: Features, objects, and collections.


Cooperation plays a central role in theories of development, learning, cultural evolution, and education. We argue that existing models of learning from cooperative informants have fundamental limitations that prevent them from explaining how cooperation benefits learning. First, existing models are shown to be computationally intractable, suggesting that they cannot apply to realistic learning problems. Second, existing models assume a priori agreement about which concepts are favored in learning, which leads to a conundrum: Learning fails without precise agreement on bias, yet there is no single rational choice. We introduce cooperative inference, a novel framework for cooperation in concept learning, which resolves these limitations. Cooperative inference generalizes the notion of cooperation used in previous models from the omission of labeled objects to the omission of values of features, labels for objects, and labels for collections of objects. The result is an approach that is computationally tractable, does not require a priori agreement about biases, applies to both Boolean and first-order concepts, and begins to approximate the richness of real-world concept learning problems. We conclude by discussing relations to and implications for existing theories of cognition, cognitive development, and cultural evolution. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
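
For readers unfamiliar with models of cooperative concept learning, the toy sketch below shows one standard teacher-learner recursion that such frameworks build on: the teacher prefers examples in proportion to how strongly a learner would infer the intended concept from them, and the learner reasons assuming the teacher chose that way, iterating toward a fixed point. This is an illustration of the general idea rather than the paper's algorithm; the small consistency matrix, the implicit uniform prior, and the fixed iteration count are assumptions.

```python
# Toy teacher-learner recursion for cooperative concept learning (an
# illustration of the general idea, NOT the paper's algorithm). M[h, d] = 1
# when example d is consistent with concept h; the uniform prior and fixed
# iteration count are assumptions for illustration.
import numpy as np

M = np.array([[1, 1, 0, 0],    # concept A covers examples 0 and 1
              [1, 0, 1, 0],    # concept B covers examples 0 and 2
              [0, 0, 1, 1]],   # concept C covers examples 2 and 3
             dtype=float)

teacher = M.copy()                                          # unnormalized P_T(d | h)
for _ in range(50):                                         # iterate toward a fixed point
    teacher = teacher / teacher.sum(axis=1, keepdims=True)  # teacher picks data per concept
    learner = teacher / teacher.sum(axis=0, keepdims=True)  # learner infers concept from data
    teacher = learner                                       # teacher adapts to the learner

np.set_printoptions(precision=2, suppress=True)
print("P_L(h | d), rows = concepts, columns = examples:")
print(learner)   # examples unique to a concept become maximally diagnostic;
                 # a cooperative teacher increasingly avoids shared examples
```

The point of the toy is only that mutual adaptation between teacher and learner sharpens inferences beyond what either party could achieve from consistency information alone.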

Tool use and affordance: Manipulation-based versus reasoning-based approaches.


Tool use is a defining feature of the human species. A fundamental issue, therefore, is to understand the cognitive bases of human tool use. Given that people cannot use tools without manipulating them, proponents of the manipulation-based approach have argued that tool use might be supported by the simulation of past sensorimotor experiences, also sometimes called affordances. In the meantime, however, evidence has accumulated demonstrating the critical role of mechanical knowledge in tool use (i.e., the reasoning-based approach). The major goal of the present article is to examine the validity of the assumptions derived from the manipulation-based versus the reasoning-based approach. To do so, we identified 3 key issues on which the 2 approaches differ, namely, (a) the reference frame issue, (b) the intention issue, and (c) the action domain issue. These issues are addressed in light of studies in experimental psychology and neuropsychology that have provided valuable contributions to the topic (i.e., tool-use interaction, orientation effect, object-size effect, utilization behavior and anarchic hand, tool use and perception, apraxia of tool use, transport vs. use actions). To anticipate our conclusions, the reasoning-based approach appears promising for understanding the current literature, even if it is not fully satisfactory, because a number of findings are easier to interpret within the manipulation-based approach. A new avenue for future research might be to develop a framework accommodating both approaches, thereby shedding new light on the cognitive bases of human tool use and affordances. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

A multi-rater framework for studying personality: The trait-reputation-identity model.


Personality and social psychology have historically been divided between personality researchers who study the impact of traits and social–cognitive researchers who study errors in trait judgments. However, a broader view of personality incorporates not only individual differences in underlying traits but also individual differences in the distinct ways a person’s personality is construed by oneself and by others. Such unique insights are likely to appear in the idiosyncratic personality judgments that raters make and are likely to have etiologies and causal force independent of trait perceptions shared across raters. Drawing on the logic of the Johari window (Luft & Ingham, 1955), the Self–Other Knowledge Asymmetry Model (Vazire, 2010), and Socioanalytic Theory (Hogan, 1996; Hogan & Blickle, 2013), we present a new model that separates personality variance into consensus about underlying traits (Trait), unique self-perceptions (Identity), and impressions conveyed to others that are distinct from self-perceptions (Reputation). We provide three demonstrations of how this Trait-Reputation-Identity (TRI) Model can be used to understand (a) consensus and discrepancies across rating sources, (b) personality’s links with self-evaluation and self-presentation, and (c) gender differences in traits. We conclude by discussing how researchers can use the TRI Model to achieve a more sophisticated view of personality’s impact on life outcomes, developmental trajectories, genetic origins, person–situation interactions, and stereotyped judgments. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Thinking outside the box when reading aloud: Between (localist) module connection strength as a source of word frequency effects.


The frequency with which words appear in print is a powerful predictor of the time to read monosyllabic words aloud, and consequently all models of reading aloud provide an explanation for this effect. The entire class of localist accounts assumes that the effect of word frequency arises because the mental lexicon is organized around frequency of occurrence (the action is inside the lexical boxes). We propose instead that the frequency-of-occurrence effect is better understood in terms of the hypothesis that the strength of between-module connections varies as a function of word frequency. Findings from 3 different lines of investigation (experimental and computational) are difficult to understand in terms of the “within lexicon” account, but are consistent with the between-module connection-strength account. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
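
As a toy illustration of the hypothesis (not the authors' model), the sketch below scales a single between-module connection weight with log word frequency, so that activation arriving at a phonological output unit crosses a naming threshold sooner for higher-frequency words. All constants, names, and the leaky-accumulation rule are assumptions chosen only to make the qualitative point.

```python
# Toy illustration of the between-module connection-strength hypothesis (NOT
# the authors' model): the weight from an orthographic unit to a phonological
# unit grows with log frequency, so high-frequency words reach the naming
# threshold in fewer cycles. All constants are illustrative assumptions.
import math

THRESHOLD = 0.9      # assumed activation level that triggers naming
BASE_WEIGHT = 0.02   # assumed frequency-independent connection strength
FREQ_SCALE = 0.01    # assumed boost to the weight per unit of log frequency

def cycles_to_threshold(freq_per_million):
    weight = BASE_WEIGHT + FREQ_SCALE * math.log(1 + freq_per_million)
    activation, cycles = 0.0, 0
    while activation < THRESHOLD:
        activation += weight * (1 - activation)   # leaky accumulation toward 1
        cycles += 1
    return cycles

for freq in (1, 10, 100, 1000):
    print(f"frequency {freq:>4}/million -> {cycles_to_threshold(freq)} cycles")
```

Nothing inside the "lexicon" is frequency-ordered in this toy; the latency difference comes entirely from the strength of the connection between modules, which is the contrast the article draws.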

The practical and principled problems with educational neuroscience.


The core claim of educational neuroscience is that neuroscience can improve teaching in the classroom. Many strong claims are made about the successes and the promise of this new discipline. By contrast, I show that there are no current examples of neuroscience motivating new and effective teaching methods, and argue that neuroscience is unlikely to improve teaching in the future. The reasons are twofold. First, in practice, it is easier to characterize the cognitive capacities of children on the basis of behavioral measures than on the basis of brain measures. As a consequence, neuroscience rarely offers insights into instruction above and beyond psychology. Second, in principle, the theoretical motivations underpinning educational neuroscience are misguided, and this makes it difficult to design or assess new teaching methods on the basis of neuroscience. Regarding the design of instruction, it is widely assumed that remedial instruction should target the underlying deficits associated with learning disorders, and neuroscience is used to characterize the deficit. However, the most effective forms of instruction may often rely on developing compensatory (nonimpaired) skills. Neuroscience cannot determine whether instruction should target impaired or nonimpaired skills. More importantly, regarding the assessment of instruction, the only relevant issue is whether the child learns, as reflected in behavior. Evidence that the brain changed in response to instruction is irrelevant. At the same time, an important goal for neuroscience is to characterize how the brain changes in response to learning, and this includes learning in the classroom. Neuroscientists cannot help educators, but educators can help neuroscientists. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

The promise of educational neuroscience: Comment on Bowers (2016).


Bowers (2016) argues that there are practical and principled problems with how educational neuroscience may contribute to education, including a lack of direct influences on teaching in the classroom. Some of the arguments made are convincing, including the critique of unsubstantiated claims about the impact of educational neuroscience and the reminder that the primary outcomes of education are behavioral, such as skill in reading or mathematics. Bowers’ analysis falls short in 3 major respects. First, educational neuroscience is a basic science that has made unique contributions to basic education research; it is not part of applied classroom instruction. Second, educational neuroscience contributes to ideas about education practices and policies beyond classroom curriculum that are important for helping vulnerable students. Third, educational neuroscience studies using neuroimaging have not only revealed for the first time the brain basis of neurodevelopmental differences that have profound influences on educational outcomes, but have also identified individual brain differences that predict which students learn more or learn less from various curricula. In several cases, the brain measures significantly improved upon, or vastly outperformed, conventional behavioral measures in predicting what works for individual children. These findings indicate that educational neuroscience, at a minimum, has provided novel insights into the possibilities of individualized education for students, rather than the current practice of discovering only through failure that a curriculum did not support a student. In the best approach to improving education, educational neuroscience ought to contribute to basic research addressing the needs of students and teachers. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

The principles and practices of educational neuroscience: Comment on Bowers (2016).


In his recent critique of Educational Neuroscience, Bowers argues that neuroscience has no role to play in informing education, which he equates with classroom teaching. Neuroscience, he suggests, adds nothing to what we can learn from psychology. In this commentary, we argue that Bowers’ assertions misrepresent the nature and aims of the work in this new field. We suggest that, by contrast, psychological and neural levels of explanation complement rather than compete with each other. Bowers’ analysis also fails to include a role for educational expertise—a guiding principle of our new field. On this basis, we conclude that his critique is potentially misleading. We set out the well-documented goals of research in Educational Neuroscience, and show how, in collaboration with educators, significant progress has already been achieved, with the prospect of even greater progress in the future. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Psychology, not educational neuroscience, is the way forward for improving educational outcomes for all children: Reply to Gabrieli (2016) and Howard-Jones et al. (2016).


In Bowers (2016), I argued that there are (a) practical problems with educational neuroscience (EN) that explain why there are no examples of EN improving teaching and (b) principled problems with the logic motivating EN that explain why it is likely that there never will be. In the following article, I consider the main responses raised by both Gabrieli (2016) and Howard-Jones et al. (2016) and find them all unconvincing. Following this exchange, there are still no examples of EN providing new insights into teaching in the classroom, there are still no examples of EN providing new insights into remedial instruction for individuals, and, as I detail in this article, there is no evidence that EN is useful for the diagnosis of learning difficulties. The authors have also failed to address the reasons why EN is unlikely to benefit educational outcomes in the future. Psychology, by contrast, can make (and has made) important discoveries that can (and should) be used to improve teaching and diagnostic tests for learning difficulties. This is not a debate about whether science is relevant to education; rather, it is about what sort of science is relevant. (PsycINFO Database Record (c) 2016 APA, all rights reserved)