
Psychological Review - Vol 124, Iss 3

Psychological Review publishes articles that make important theoretical contributions to any area of scientific psychology.

Last Build Date: Tue, 25 Apr 2017 12:00:26 EST

Copyright: Copyright 2017 American Psychological Association

How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis.


People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in four main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture’s ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between the self-oriented and communicative functions of gestures. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

LATEST: A model of saccadic decisions in space and time.


Many of our actions require visual information, and for this it is important to direct the eyes to the right place at the right time. Two or three times every second, we must decide both when and where to direct our gaze. Understanding these decisions can reveal the moment-to-moment information priorities of the visual system and the strategies for information sampling employed by the brain to serve ongoing behavior. Most theoretical frameworks and models of gaze control assume that the spatial and temporal aspects of fixation point selection depend on different mechanisms. We present a single model that can simultaneously account for both when and where we look. Underpinning this model is the theoretical assertion that each decision to move the eyes is an evaluation of the relative benefit expected from moving the eyes to a new location compared with that expected by continuing to fixate the current target. The eyes move when the evidence that favors moving to a new location outweighs that favoring staying at the present location. Our model provides not only an account of when the eyes move, but also of what will be fixated. That is, an analysis of saccade timing alone enables us to predict where people look in a scene. Indeed, our model accounts for fixation selection as well as (and often better than) current computational models of fixation selection in scene viewing.
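The rise-to-threshold idea behind this family of models (LATEST builds on the earlier LATER framework) can be illustrated with a minimal simulation: on each trial a decision signal climbs linearly toward a fixed threshold at a rate drawn from a Gaussian, and latency is the time to reach threshold. This is a sketch under illustrative parameter values, not the authors' fitted model; `simulate_latencies` and all numeric settings are assumptions for demonstration.

```python
import random

def simulate_latencies(n_trials=10000, threshold=1.0,
                       mean_rate=5.0, sd_rate=1.0, seed=0):
    """LATER-style sketch: a decision signal rises linearly to a
    threshold at a rate drawn fresh on each trial from a Gaussian.
    Latency = threshold / rate; trials with a non-positive rate
    (no drift toward threshold) produce no saccade and are skipped.
    Parameter values are illustrative, not fitted to data."""
    rng = random.Random(seed)
    latencies = []
    for _ in range(n_trials):
        rate = rng.gauss(mean_rate, sd_rate)
        if rate > 0:
            latencies.append(threshold / rate)
    return latencies

lat = simulate_latencies()
print(sum(lat) / len(lat))  # mean latency, roughly threshold / mean_rate
```

Because latency is the reciprocal of a Gaussian rate, the simulated distribution is right-skewed, which is the characteristic shape of empirical saccadic latency distributions that motivates this model class.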

Formalizing Neurath’s ship: Approximate algorithms for online causal learning.


Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath’s ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people’s limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments.
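The core algorithmic idea (maintain one global causal hypothesis and revise it by small local edits rather than re-scoring all possible graphs) can be sketched on a toy problem. This is an illustrative stochastic local search, not the authors' actual algorithm; the edge representation, scoring rule, and `local_causal_search` name are assumptions for demonstration.

```python
import random

def local_causal_search(data, variables, n_steps=2000, seed=1):
    """Toy sketch of single-hypothesis causal structure learning.
    `data` is a list of (intervened_var, outcomes) observations, where
    outcomes maps each variable to 0/1. The learner holds ONE edge set,
    proposes flipping a single edge at a time, and keeps the proposal
    only if it explains the evidence at least as well (a conservative,
    Neurath's-ship-style local revision; not the authors' algorithm)."""
    rng = random.Random(seed)
    edges = set()  # the single global hypothesis

    def score(hyp):
        # count correctly predicted intervention outcomes
        s = 0
        for intervened, outcomes in data:
            for v in variables:
                if v == intervened:
                    continue
                predicted = 1 if (intervened, v) in hyp else 0
                s += 1 if predicted == outcomes[v] else 0
        return s

    current = score(edges)
    for _ in range(n_steps):
        a, b = rng.sample(variables, 2)
        proposal = set(edges)
        if (a, b) in proposal:
            proposal.discard((a, b))  # local edit: drop one edge
        else:
            proposal.add((a, b))      # local edit: add one edge
        new = score(proposal)
        if new >= current:            # never re-score the whole space
            edges, current = proposal, new
    return edges
```

On noiseless toy data generated from a known graph, this hill-climbing variant recovers the generating edges; the full model in the article is stochastic and can also retreat to worse hypotheses, which is what produces the gradual, path-dependent theory change the metaphor describes.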

Model flexibility analysis does not measure the persuasiveness of a fit.


Recently, Veksler, Myers, and Gluck (2015) proposed model flexibility analysis as a method that “aids model evaluation by providing a metric for gauging the persuasiveness of a given fit” (p. 755). Model flexibility analysis measures the complexity of a model in terms of the proportion of all possible data patterns it can predict. We show that this measure does not provide a reliable way to gauge complexity, which prevents model flexibility analysis from fulfilling either of the 2 aims outlined by Veksler et al. (2015): absolute and relative model evaluation. We also show that model flexibility analysis can fail to correctly quantify complexity even in the most clear-cut case, with nested models. We advocate instead for the use of well-established model-evaluation techniques, such as Bayes factors, normalized maximum likelihood, or cross-validation, and against the use of model flexibility analysis. In the discussion, we explore 2 issues relevant to the area of model evaluation: the completeness of current model selection methods and the philosophical debate of absolute versus relative model evaluation.
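The metric at issue (the proportion of all possible data patterns a model can produce somewhere in its parameter space) can be made concrete with a toy version. This sketch is in the spirit of the measure under discussion, not the published implementation: the models, the reduction of predictions to ordinal patterns, and the parameter ranges are illustrative assumptions.

```python
import itertools
import random

def flexibility(model, n_param_samples=5000, n_points=3, seed=2):
    """Toy flexibility-style metric: sample parameter vectors, reduce
    each prediction to an ordinal pattern (rank order of the predicted
    values over n_points conditions), and report the fraction of all
    possible orderings the model ever produces. Illustrative only."""
    rng = random.Random(seed)
    xs = list(range(1, n_points + 1))
    seen = set()
    for _ in range(n_param_samples):
        params = [rng.uniform(-2, 2) for _ in range(2)]
        ys = [model(x, params) for x in xs]
        pattern = tuple(sorted(range(n_points), key=lambda i: ys[i]))
        seen.add(pattern)
    total = len(list(itertools.permutations(range(n_points))))
    return len(seen) / total

# Two hypothetical toy models: a free slope can produce both rising and
# falling patterns; a fixed positive slope can produce only one.
linear = lambda x, p: p[0] * x + p[1]
fixed_slope = lambda x, p: 2 * x + p[1]
print(flexibility(linear), flexibility(fixed_slope))
```

The toy already hints at the article's worry: reducing predictions to reachable patterns collapses information about *how much* of parameter space yields each pattern, so two models with very different fitting propensities can receive similar flexibility scores.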

Learning, remembering, and predicting how to use tools: Distributed neurocognitive mechanisms: Comment on Osiurak and Badets (2016).


The reasoning-based approach championed by François Osiurak and Arnaud Badets (Osiurak & Badets, 2016) denies the existence of sensory-motor memories of tool use except in limited circumstances, and suggests instead that most tool use is subserved solely by online technical reasoning about tool properties. In this commentary, I highlight the strengths and limitations of the reasoning-based approach and review a number of lines of evidence that manipulation knowledge is in fact used in tool action tasks. In addition, I present a “two-route” neurocognitive model of tool use called the “Two Action Systems Plus (2AS+)” framework that posits a complementary role for online and stored information and specifies the neurocognitive substrates of task-relevant action selection. This framework, unlike the reasoning-based approach, has the potential to integrate the existing psychological and functional neuroanatomic data in the tool use domain.

Use of tools and misuse of embodied cognition: Reply to Buxbaum (2017).


Osiurak and Badets (2016) examined the validity of the manipulation-based versus the reasoning-based approaches to tool use in light of studies in experimental psychology and neuropsychology. They concluded that the reasoning-based approach seems to be more promising than the manipulation-based approach for understanding the current literature. Buxbaum (2017) questioned this conclusion and raised certain theoretical limitations with regard to the reasoning-based approach. She also suggested that this approach is not well-equipped to integrate the existing psychological and neuroanatomical data in the tool use domain. In this context, she presented a neurocognitive model—the “Two Action Systems Plus” (2AS+) framework—deeply anchored in the embodied cognition approach. In this reply, we address the key points raised by Buxbaum, leading us to draw 2 new conclusions. The first is that the reasoning-based approach integrates the existing psychological and neuroanatomical data not only in the tool use domain, but also in the motor control domain. As a matter of fact, it is even better equipped than the 2AS+ to account for recent neuroscience data. The second is that the 2AS+ suffers from epistemological and theoretical limitations, generating confusion as to what manipulation knowledge—a core concept in this model—precisely is. To sum up, 2AS+ illustrates potential misuse of embodied cognition, viewing tool use mainly as a matter of manipulation and not of understanding mechanical actions between tools and objects.