Journal of Experimental Psychology: Human Perception and Performance - Vol 43, Iss 2

The Journal of Experimental Psychology: Human Perception and Performance publishes studies on perception, control of action, and related cognitive processes.

Last Build Date: Tue, 21 Feb 2017 14:00:11 EST

Copyright: Copyright 2017 American Psychological Association

Signal enhancement, not active suppression, follows the contingent capture of visual attention.


Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Competition in saccade target selection reveals attentional guidance by simultaneously active working memory representations.


The content of visual working memory (VWM) guides attention, but whether this interaction is limited to a single VWM representation or functional for multiple VWM representations is under debate. To test this issue, we developed a gaze-contingent search paradigm to directly manipulate selection history and examine the competition between multiple cue-matching saccade target objects. Participants first saw a dual-color cue followed by two pairs of colored objects presented sequentially. For each pair, participants selectively fixated an object that matched one of the cued colors. Critically, for the second pair, the cued color from the first pair was presented either with a new distractor color or with the second cued color. In the latter case, if two cued colors in VWM interact with selection simultaneously, we expected the second cued color object to generate substantial competition for selection, even though the first cued color was used to guide attention in the immediately previous pair. Indeed, in the second pair, selection probability of the first cued color was substantially reduced in the presence of the second cued color. This competition between cue-matching objects provides strong evidence that both VWM representations interacted simultaneously with selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Learning to perceive haptic distance-to-break in the presence of friction.


Two experiments employed attunement and calibration training to investigate whether observers are able to identify material break points in compliant materials through haptic force application. The task required participants to attune to a recently identified haptic invariant, distance-to-break (DTB), rather than haptic stimulation not related to the invariant, including friction. In the first experiment participants probed simulated force-displacement relationships (materials) under 3 levels of friction with the aim of pushing as far as possible into the materials without breaking them. In a second experiment a different set of participants pulled on the materials. Results revealed that participants are sensitive to DTB for both pushing and pulling, even in the presence of varying levels of friction, and this sensitivity can be improved through training. The results suggest that the simultaneous presence of friction may assist participants in perceiving DTB. Potential applications include the development of haptic training programs for minimally invasive (laparoscopic) surgery to reduce accidental tissue damage. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Information foraging for perceptual decisions.


We tested an information foraging framework to characterize the mechanisms that drive active (visual) sampling behavior in decision problems that involve multiple sources of information. Experiments 1 through 3 involved participants making an absolute judgment about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between 2 motion patterns that could only be sampled sequentially. Our results show that: (a) Information (about noisy motion information) grows to an asymptotic level that depends on the quality of the information source; (b) The limited growth is attributable to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (c) Little information is lost once a new source of information is being sampled; and (d) The point at which the observer switches from 1 source to another is governed by online monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Using a dichoptic moving window presentation technique to investigate binocular advantages during reading.


Reading comes with a clear binocular advantage, expressed in shorter fixation times and fewer regressions in binocular relative to monocular visual presentations. Little is known, however, about whether the cost associated with monocular viewing derives primarily from the encoding of foveal information or from obtaining a preview benefit from upcoming parafoveal text. In the present sentence reading eye tracking experiment, the authors used a novel dichoptic binocular gaze-contingent moving window technique to selectively manipulate the amount of text made available to the reader both binocularly and monocularly in the fovea and parafovea on a fixation-by-fixation basis. This technique allowed the authors to quantify disruption to reading caused by prevention of binocular fusion during direct fixation of words and parafoveal preprocessing of upcoming text. Sentences were presented (a) binocularly; (b) monocularly; (c) with monocular text to the left of fixation; (d) with monocular text to the right of fixation; or (e) with all words other than the fixated word presented binocularly. A robust binocular advantage occurred for average fixation duration and regressions. Also, while there was a limited cost associated with monocular foveal processing, the restriction of parafoveal processing to monocular information was particularly disruptive. The findings demonstrate the critical importance of a unified binocular input for the efficient preprocessing of text to the right of fixation. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Individual differences in adaptive norm-based coding and holistic coding are associated yet each contributes uniquely to unfamiliar face recognition ability.


We can discriminate and recognize many faces, despite their visual similarity. Individual differences in this ability have been linked to 2 face coding mechanisms: adaptive norm-based coding of identity and holistic coding. However, it is not yet known whether these mechanisms are distinct. Nor is it known whether they make unique contributions to face recognition ability, because no studies have measured the operation of both these mechanisms in the same individuals. We measured individual differences in both the strength of adaptive norm-based coding (with a face identity aftereffect task) and holistic coding (with a composite face task). For the first time, we show that these 2 mechanisms are positively and moderately associated and that each makes significant unique contributions to unfamiliar face recognition ability (Cambridge Face Memory Test [CFMT]). Importantly, these relationships were face-specific. We also show that the combined contribution of these mechanisms to face recognition performance is significantly larger than the contribution of nonface recognition memory, consistent with the view that face recognition relies on the operation of face-sensitive mechanisms. Overall, our results raise intriguing questions regarding what these mechanisms may have in common, and what other mechanisms support face recognition performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Culture shapes spatial frequency tuning for face identification.


Many studies have revealed cultural differences in the way Easterners and Westerners attend to their visual world. It has been proposed that these cultural differences reflect the utilization of different processes, namely holistic processes by Easterners and analytical processes by Westerners. In the face processing literature, eye movement studies have revealed different fixation patterns for Easterners and Westerners that are congruent with a broader spread of attention by Easterners: compared with Westerners, Easterners tend to fixate more toward the center of the face even if they need the information provided by the eyes and mouth. Although this cultural difference could reflect an impact of culture on the visual mechanisms underlying face processing, this interpretation has been questioned by the finding that Easterners and Westerners do not differ on the location of their initial fixations, that is, those that have been shown as being sufficient for face recognition. Because a broader spread of attention is typically linked with reduced sensitivity to higher spatial frequencies, the present study directly compared the spatial frequency tuning of Easterners (Chinese) and Westerners (Canadians) in 2 face recognition tasks (Experiments 1 and 2), along with their general low-level sensitivity to spatial frequencies (Experiment 3). Consistent with our hypothesis, Chinese participants were tuned toward lower spatial frequencies than Canadian participants during the face recognition tasks, despite comparable low-level contrast sensitivity functions. These results strongly support the hypothesis that culture impacts the nature of the visual information extracted during face recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Contributions of perceptual and motor experience of an observed action to anticipating its result.


To gain deeper insight into the respective contributions of perceptual and motor experience of an observed action to anticipating its result, we examined the perceptual anticipation of players with different action roles in striking sports. Baseball pitchers and batters at both advanced and intermediate levels were asked to make a decision about whether to swing the bat when viewing a series of videos showing incomplete sequences of a model pitcher throwing a strike or a ball. The results revealed that the first 100 ms of ball flight could discriminate advanced batters from intermediate pitchers and batters (with no difference between intermediate pitchers and batters). Particularly, advanced batters (perceptual experts with regard to pitching action) were statistically more accurate and less uncertain in making decisions than were intermediate players, whereas advanced pitchers (motor experts) only showed this tendency without reaching a statistically significant level. Moreover, advanced batters demonstrated greater perceptual sensitivity in discriminating when to swing at strikes over balls than all other players. Our findings suggested that when players were above intermediate level, perceptual experience of an observed action facilitated perceptual anticipation to a greater extent than motor experience of producing it. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Phasic alertness and residual switch costs in task switching.


Residual switch costs are deficits in task-switching performance that occur despite considerable time to prepare for a task switch. In the present study, the author investigated whether increased phasic alertness modulates residual switch costs. In 2 experiments involving the task-cuing procedure, subjects performed numerical categorization tasks on target digits, with and without an alerting stimulus presented shortly before the target (alert and no-alert trials, respectively). Switch costs were obtained that decreased with a longer cue–target interval, indicating subjects engaged in preparation, but large residual switch costs remained. Alerting effects were obtained in the form of faster overall performance on alert than on no-alert trials, indicating the alerting stimuli increased phasic alertness. Critically, residual switch costs were similar on alert and no-alert trials in both experiments, unaffected by manipulations of alert type, alert availability, and alert–target interval. Implications of the results for understanding the relationship between phasic alertness and cognitive control in task switching are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

The power of words: On item-specific stimulus–response associations formed in the absence of action.


Research on stimulus–response (S-R) associations as the basis of behavioral automaticity has a long history. Traditionally, it was assumed that S-R associations are formed as a consequence of the (repeated) co-occurrence of stimulus and response, that is, when participants act upon stimuli. Here, we demonstrate that S-R associations can also be established in the absence of action. In an item-specific priming paradigm, participants either classified everyday objects by performing a left or right key press (task-set execution) or they were verbally presented with information regarding an object’s class and associated action while they passively viewed the object (verbal coding). Both S-R associations created by task-set execution and by verbal coding led to the later retrieval of both the stimulus–action component and the stimulus–classification component of S-R associations. Furthermore, our data indicate that associations created both by execution and by verbal coding are temporally stable and rather resilient against overwriting. The automaticity of S-R associations formed in the absence of action reveals the striking adaptability of human action control. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Expert players accurately detect an opponent’s movement intentions through sound alone.


Sounds offer a rich source of information about events taking place in our physical and social environment. However, outside the domains of speech and music, little is known about whether humans can recognize and act upon the intentions of another agent’s actions detected through auditory information alone. In this study we assessed whether intention can be inferred from the sound an action makes, and in turn, whether this information can be used to prospectively guide movement. In 2 experiments, experienced and novice basketball players had to virtually intercept an attacker by listening to audio recordings of that player’s movements. In the first experiment participants moved a slider, whereas in the second they moved their whole body, to block the perceived passage of the attacker as they would in a real basketball game. Combinations of deceptive and nondeceptive movements were used to see if novice and/or experienced listeners could perceive the attacker’s intentions through sound alone. We showed that basketball players were able to predict final running direction more accurately than nonplayers, particularly in the second experiment, when the interceptive action was more basketball specific. We suggest that athletes achieve better action anticipation by being able to pick up and use the relevant kinematic features of deceptive movement from event-related sounds alone. This result suggests that action intention can be perceived through the sound a movement makes and that the ability to determine another person’s action intention from the information conveyed through sound is honed through practice. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Body posture differentially impacts on visual attention towards tool, graspable, and non-graspable objects.


Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to the actions afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants’ eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by 2 widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants’ fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, the restriction of the participants’ action possibilities led to a significant reduction of this effect and significantly improved the model prediction of the participants’ gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information in the generation of priority maps of fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Auditory compensation for head rotation is incomplete.


Hearing is confronted by a similar problem to vision when the observer moves. The image motion that is created remains ambiguous until the observer knows the velocity of eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information. These “extraretinal signals” compensate for self-movement, converting image motion into head-centered coordinates, although not always perfectly. We investigated whether the auditory system also transforms coordinates by examining the degree of compensation for head rotation when judging a moving sound. Real-time recordings of head motion were used to change the “movement gain” relating head movement to source movement across a loudspeaker array. We then determined psychophysically the gain that corresponded to a perceptually stationary source. Experiment 1 showed that the gain was small and positive for a wide range of trained head speeds. Hence, listeners perceived a stationary source as moving slightly opposite to the head rotation, in much the same way that observers see stationary visual objects move against a smooth pursuit eye movement. Experiment 2 showed the degree of compensation remained the same for sounds presented at different azimuths, although the precision of performance declined when the sound was eccentric. We discuss two possible explanations for incomplete compensation, one based on differences in the accuracy of signals encoding image motion and self-movement and one concerning statistical optimization that sacrifices accuracy for precision. We then consider the degree to which such explanations can be applied to auditory motion perception in moving listeners. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

A link between attentional function, effective eye movements, and driving ability.


The misallocation of driver visual attention has been suggested as a major contributing factor to vehicle accidents. One possible reason is that the relatively high cognitive demands of driving limit the ability to efficiently allocate gaze. We present an experiment that explores the relationship between attentional function and visual performance when driving. Drivers performed 2 variations of a multiple-object tracking task targeting aspects of cognition including sustained attention, dual-tasking, covert attention, and visuomotor skill. They also drove a number of courses in a driving simulator. Eye movements were recorded throughout. We found that individuals who performed better in the cognitive tasks exhibited more effective eye movement strategies when driving, such as scanning more of the road, and they also exhibited better driving performance. We discuss the potential link between an individual’s attentional function, effective eye movements, and driving ability. We also discuss the use of a visuomotor task in assessing driving behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Spontaneous rereading within sentences: Eye movement control and visual sampling.


Three experiments examine the role of previously read text in sentence comprehension and the control of eye movements during spontaneous rereading. Spontaneous rereading begins with a regressive saccade and involves reinspection of previously read text. All 3 experiments employed the gaze-contingent change technique to modulate the availability of previously read text. In Experiment 1, previously read text was permanently masked either immediately to the left of the fixated word (beyond word_n) or more than 1 word to the left (beyond word_n-1). The results of Experiment 1 indicate that the availability of the word immediately to the left (word_n-1) is important for comprehension. Experiments 2 and 3 further explored the role of previously read text beyond word_n-1. In these studies, text beyond word_n-1 was replaced, retaining only word length information, or word length and shape information. Following a regression back within a sentence, meaningful text either reappeared or remained unavailable during rereading. The experiments show that the visual format of text beyond word_n-1 (the parafoveal postview) is important for triggering regressions. The results also indicate that, at least for more complex sentences, the availability of meaningful text is important in driving eye movement control during rereading. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Perceptual adaptation of vowels generalizes across the phonology and does not require local context.


Listeners usually understand without difficulty even speech that sounds atypical. When they encounter noncanonical realizations of speech sounds, listeners can make short-term adjustments of their long-term representations of those sounds. Previous research, focusing mostly on adaptation in consonants, has suggested that for perceptual adaptation to take place some local cues (lexical, phonotactic, or visual) have to guide listeners’ interpretation of the atypical sounds. In the present experiment we investigated perceptual adaptation in vowels. Our first aim was to show whether perceptual adaptation generalizes to unexposed but phonologically related vowels. To this end, we exposed Greek listeners to words or nonwords containing manipulated /i/ or /e/, and tested whether they adapted their perception of the /i/-/e/ contrast, as well as the unexposed /u/-/o/ contrast, which represents the same phonological height distinction. Our second aim was to test whether perceptual adaptation in vowels requires local context. Thus, half of our listeners heard the manipulated vowels in real Greek words, while the other half heard them in nonwords providing no phonotactic cues on vowel identity. The results showed similar adjustment of /i/-/e/ categorization and of /u/-/o/ categorization, which indicates generalization of perceptual adaptation across phonologically related vowels. Furthermore, adaptation occurred irrespective of whether local context cues were present or not, suggesting that, at least in vowels, adaptation can be based on the distribution of auditory properties in the input. Our findings, confirming that fast perceptual adaptation in adult listeners occurs even for vowels, highlight the role of phonological abstraction in speech perception. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

“Should I stop or should I go? The role of associations and expectancies”: Correction to Best et al. (2016).


Reports an error in "Should I stop or should I go? The role of associations and expectancies" by Maisy Best, Natalia S. Lawrence, Gordon D. Logan, Ian P. L. McLaren and Frederick Verbruggen (Journal of Experimental Psychology: Human Perception and Performance, 2016[Jan], Vol 42[1], 115-137). In the article, there is an error in Table 3 of the Results and the third paragraph of the Results section labeled Test phase. In Experiment 4, the study performed an exploratory post-hoc test of the go reaction times in the training phase, contrasting stop-associated and go-associated items. Control items were excluded. Instead of reporting the results of the full analysis (with all three item types included), the authors incorrectly reported the results of this post-hoc analysis in Table 3 and in the main text. The correct analysis is presented below. Note that all other analyses reported in the tables and main text are correct. The R code shared via the Open Research Exeter data repository (http://hdl.handle.net/10871/17735) is also correct. The interaction between image type and block is no longer significant when control items are included (p = .094; p = .037 for the post-hoc test). (The following abstract of the original article appeared in record 2015-40003-001.) Following exposure to consistent stimulus–stop mappings, response inhibition can become automatized with practice. What is learned is less clear, even though this has important theoretical and practical implications. A recent analysis indicates that stimuli can become associated with a stop signal or with a stop goal. Furthermore, expectancy may play an important role. Previous studies that have used stop or no-go signals to manipulate stimulus–stop learning cannot distinguish between stimulus-signal and stimulus-goal associations, and expectancy has not been measured properly.
In the present study, participants performed a task that combined features of the go/no-go task and the stop-signal task in which the stop-signal rule changed at the beginning of each block. The go and stop signals were superimposed over 40 task-irrelevant images. Our results show that participants can learn direct associations between images and the stop goal without mediation via the stop signal. Exposure to the image–stop associations influenced task performance during training, as well as expectancies measured following task completion or within the task. But, despite this, we found an effect of stimulus–stop learning on test performance only when the task increased the task-relevance of the images. This could indicate that the influence of stimulus–stop learning on go performance is strongly influenced by attention to both task-relevant and task-irrelevant stimulus features. More generally, our findings suggest a strong interplay between automatic and controlled processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved)