Journal of Experimental Psychology: Human Perception and Performance - Vol 42, Iss 10

The Journal of Experimental Psychology: Human Perception and Performance publishes studies on perception, control of action, and related cognitive processes.

Last Build Date: Fri, 30 Sep 2016 08:00:09 EST

Copyright: Copyright 2016 American Psychological Association

You think you know where you looked? You better look again.


People are surprisingly bad at knowing where they have looked in a scene. We tested participants’ ability to recall their own eye movements in 2 experiments using natural or artificial scenes. In each experiment, participants performed a change-detection (Experiment 1) or search (Experiment 2) task. On 25% of trials, after 3 seconds of viewing the scene, participants were asked to indicate where they thought they had just fixated. They responded by making mouse clicks on 12 locations in the unchanged scene. After 135 trials, observers saw 10 new scenes and were asked to put 12 clicks where they thought someone else would have looked. Although observers located their own fixations more successfully than a random model, their performance was no better than when they were guessing someone else’s fixations. Performance with artificial scenes was worse, though judging one’s own fixations was slightly superior. Even after repeating the fixation-location task on 30 scenes immediately after scene viewing, performance was far from the prediction of an ideal observer. Memory for our own fixation locations appears to add next to nothing beyond what common sense tells us about the likely fixations of others. These results have important implications for socially important visual search tasks. For example, a radiologist might think he has looked at “everything” in an image, but eye tracking data suggest that this is not so. Such shortcomings might be avoided by providing observers with better insight into where they have looked. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Holistic processing of face configurations and components.


Although many researchers agree that faces are processed holistically, we know relatively little about what information holistic processing captures from a face. Most studies that assess the nature of holistic processing do so with changes to the face affecting many different aspects of face information (e.g., different identities). Does holistic processing affect every aspect of a face? We used the composite task, a common means of examining the strength of holistic processing, with participants making same–different judgments about configuration changes or component changes to 1 portion of a face. Configuration changes involved changes in spatial position of the eyes, whereas component changes involved lightening or darkening the eyebrows. Composites were either aligned or misaligned, and were presented either upright or inverted. Both configuration judgments and component judgments showed evidence of holistic processing, and in both cases it was strongest for upright face composites. These results suggest that holistic processing captures a broad range of information about the face, including both configuration-based and component-based information. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Perceiving a continuous visual world across voluntary eye blinks.


People blink their eyes every few seconds, but the changes in retinal illumination that accompany eyeblinks are hardly noticed. Furthermore, despite the loss of visual input, visual experience remains continuous across eyeblinks. Two hypotheses were investigated to account for these phenomena. The first proposes that perceptual information is maintained across a blink, whereas the second proposes that perceptual information is not maintained but rather postblink perceptual experience is antedated to the beginning of the blink. Two experiments found no evidence for temporal antedating of a stimulus presented during a voluntary eyeblink. In a third experiment, subjects compared the temporal duration of a stimulus that was interrupted by a voluntary eyeblink with that of a stimulus presented while the eyes were open. The duration of stimuli that were interrupted by eyeblinks was judged to be 117 ms shorter than that of stimuli presented while the eyes remained open, indicating that blink duration was not accounted for in the perception of stimulus duration. This suggests that perceptual experience is neither maintained nor antedated across eyeblinks, but rather is ignored, perhaps in response to the extraretinal signal that accompanies the eyeblink. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

The time-limited visual statistician.


The visual system can calculate summary statistics over time. For example, the multiple frames of a movie showing a dynamically changing disk can be collapsed to form a single representation of that disk’s mean size. Summary representations of dynamic information may engage online updating processes that establish a running average of the mean by continuously adjusting the persisting representation of the average in tandem with the arrival of incoming information. Alternatively, summary representations may involve subsampling strategies that reflect limitations in the degree to which the visual system can integrate information over time. Observers watched movies of a disk that changed size smoothly at different rates and then reported the disk’s average size by adjusting the diameter of a response disk. Critically, the movie varied in duration. Size estimates depended on the duration of the movie. They were constant and fairly accurate for movie durations up to approximately 600 ms, at which point accuracy decreased with increasing duration to imprecise levels by about 1,000 ms. Summary statistics established over time are unlikely to be updated continuously and may instead be restricted by subsampling processes, such as limited temporal windows of integration. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
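The two hypotheses contrasted in this abstract can be caricatured computationally. The following is a minimal illustrative sketch, not the authors’ model: the function names are invented, frame sizes are assumed to arrive as a plain list, and the window length is a made-up stand-in for the roughly 600-ms integration limit the study reports.

```python
def running_average(sizes):
    """Online-updating hypothesis: every frame adjusts a persisting
    estimate of the mean, so all frames contribute equally."""
    estimate = 0.0
    for n, size in enumerate(sizes, start=1):
        estimate += (size - estimate) / n  # incremental mean update
    return estimate

def windowed_subsample(sizes, window=6):
    """Subsampling hypothesis: only the most recent frames (a limited
    temporal window of integration) inform the estimate."""
    recent = sizes[-window:]
    return sum(recent) / len(recent)
```

On a short movie the two estimates coincide; once the movie outlasts the window, only the windowed observer’s estimate tracks the recent frames, which is the signature pattern the duration manipulation was designed to detect.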

Contextual within-trial adaptation of cognitive control: Evidence from the combination of conflict tasks.


It is assumed that we recruit cognitive control (i.e., attentional adjustment and/or inhibition) to resolve 2 conflicts at a time, such as driving toward a red traffic light while attending to a nearby ambulance. A few studies have addressed this issue by combining a Simon task (which required responding with a left or right key-press to a stimulus presented on the left or right side of the screen) with either a Stroop task (which required identifying the color of color words) or a Flanker task (which required identifying the target character among flankers). In most studies, the results revealed no interaction between the conflict tasks. However, these studies used a small stimulus set, and participants might have learned the stimulus-response mappings for each stimulus. Thus, it is possible that participants relied more on episodic memory than on cognitive control to perform the task. In 5 experiments, we combined the 3 tasks pairwise, and we increased the stimulus set size to circumvent episodic memory contributions. The results revealed an interaction between the conflict tasks: Irrespective of task combination, the congruency effect of 1 task was smaller when the stimulus was incongruent for the other task. This suggests that when 2 conflicts are presented concurrently, the control processes induced by 1 conflict source can affect the control processes induced by the other conflict source. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Visuospatial working memory mediates inhibitory and facilitatory guidance in preview search.


Visual search is faster and more accurate when a subset of distractors is presented before the display containing the target. This “preview benefit” has been attributed to separate inhibitory and facilitatory guidance mechanisms during search. In the preview task the temporal cues thought to elicit inhibition and facilitation provide complementary sources of information about the likely location of the target. In this study, we use a Bayesian observer model to compare sensitivity when the temporal cues eliciting inhibition and facilitation produce complementary, and competing, sources of information. Observers searched for T-shaped targets among L-shaped distractors in 2 standard and 2 preview conditions. In the standard conditions, all the objects in the display appeared at the same time. In the preview conditions, the initial subset of distractors either stayed on the screen or disappeared before the onset of the search display, which contained the target when present. In the latter, the synchronous onset of old and new objects negates the predictive utility of stimulus-driven capture during search. The results indicate observers combine memory-driven inhibition and sensory-driven capture to reduce spatial uncertainty about the target’s likely location during search. In the absence of spatially predictive onsets, memory-driven inhibition at old locations persists despite irrelevant sensory change at previewed locations. This result is consistent with a bias toward unattended objects during search via the active suppression of irrelevant capture at previously attended locations. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
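The general logic of a Bayesian observer that combines two guidance cues can be sketched abstractly. This is a hypothetical illustration only, not the paper’s actual model: the four candidate locations, the cue weights, and the function name are all invented for the example.

```python
def combine_cues(prior, cue_a, cue_b):
    """Treat the two cues as independent evidence: multiply
    per-location values and renormalize to a posterior."""
    unnorm = [p * a * b for p, a, b in zip(prior, cue_a, cue_b)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Four candidate target locations. "Inhibition" downweights old
# (previewed) locations; "capture" upweights a location with a
# sensory onset. All numbers are made up.
prior = [0.25, 0.25, 0.25, 0.25]
inhibition = [0.2, 0.2, 1.0, 1.0]   # previewed locations suppressed
capture = [1.0, 1.0, 1.0, 3.0]      # onset at the fourth location
posterior = combine_cues(prior, inhibition, capture)
```

When the two cues agree (as in the standard preview condition), spatial uncertainty about the target shrinks sharply; removing the predictive onset flattens the `capture` vector and leaves only the inhibition-driven reduction, which is the contrast the study exploits.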

Asymmetries in the perception of Mandarin tones: Evidence from mismatch negativity.


Most investigations of the representation and processing of speech sounds focus on their segmental representations, and considerably less is known about the representation of suprasegmental phenomena (e.g., Mandarin tones). Here we examine the mismatch negativity (MMN) response to the contrast between Mandarin Tone 3 (T3) and other tones using a passive oddball paradigm. Because the MMN response has been shown to be sensitive to the featural contents of speech sounds in a way that is compatible with underspecification theories of phonological representations, here, we test the predictions of such theories regarding suprasegmental phenomena. Assuming T3 to be underspecified in Mandarin (because it has variable surface representations and low pitch), we predicted that an asymmetric MMN response would be elicited when T3 is contrasted with another tone. In 2 of our 3 experiments, this was observed, but in non-Mandarin-speaking participants as well as native speakers, suggesting that the locus of the effect was perceptual (acoustic or phonetic) rather than phonological. In a third experiment, the predicted asymmetry was limited to native speakers. These results highlight the importance of distinguishing phonological and perceptual contributions to MMN asymmetries, but also demonstrate a role of abstract phonological representations in which certain information is underspecified in long-term memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Category-based guidance of spatial attention during visual search for feature conjunctions.


The question of whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Conjoint influence of mind-wandering and sleepiness on task performance.


Recent research suggests that sleepiness and mind-wandering—the experience of thoughts that are both stimulus-independent and task-unrelated—frequently co-occur and are both associated with poorer cognitive functioning. Whether these two phenomena have distinguishable effects on task performance remains unknown, however. To investigate this question, we used the online experience sampling of mind-wandering episodes and subjective sleepiness during a laboratory task (the Sustained Attention to Response Task; SART), and also assessed mind-wandering frequency and sleep-related disturbances in daily life using self-report questionnaires. The results revealed that the tendency to experience mind-wandering episodes during the SART and daily life was associated with higher levels of daytime sleepiness and sleep-related disturbances. More important, however, mind-wandering and sleepiness were independent predictors of SART performance at both the within- and between-individuals levels. These findings demonstrate that, although mind-wandering and sleepiness frequently co-occur, these two phenomena have distinguishable and additive effects on task performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Acting and anticipating: Impact of outcome-compatible distractor depends on response selection efficiency.


Action selection is thought to involve selection of the action’s sensory outcomes. This notion is supported by findings that encountering a distractor resembling a learned response–outcome biases response selection. Some evidence, however, suggests that a larger contribution of stimulus-based response selection leaves little role for outcome-based selection, especially in forced-choice tasks with easily identifiable target stimuli. In the present study, we asked whether the contribution of outcome-based selection depends on the ease and efficiency of stimulus-based selection. If so, then efficient stimulus-based response selection should reduce the impact of an irrelevant distractor that resembles a response–outcome. We manipulated efficiency of stimulus-based selection by varying the spatial relationship between stimulus and response (Experiment 1) and by varying stimulus discriminability (Experiment 2). We hypothesized that with efficient stimulus-based selection, outcome-based processes will play a weaker role in response selection, and performance will be less susceptible to outcome-compatible or -incompatible distractors. By contrast, when stimulus-based selection is relatively inefficient, outcome-based processes will play a stronger role in response selection, and performance should be more susceptible to outcome-compatible or -incompatible distractors. Confirming our predictions, our results showed stronger impact of the distractors when stimulus-based response selection was relatively inefficient. Finally, results of a control experiment (Experiment 3) suggested that learning the consistent response–outcome mapping is necessary for obtaining the effect of these distractors. We conclude that outcome-based processes do contribute to response selection in forced-choice tasks, and that this contribution varies with the efficiency of stimulus-based response selection. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Evidence for dual mechanisms of action prediction dependent on acquired visual-motor experiences.


To test mechanisms underpinning action prediction, we directly controlled experience in a dart-throwing training study. A motor-visual group physically practiced throwing darts and a perceptual training group learned to associate dart throw actions (occluded video clips) with landing outcomes. A final control group did not practice. Accuracy was assessed on related prediction tests before and after practice (involving temporally occluded video clips). These tests were performed while participants additionally performed simple, action-incongruent secondary motor tasks with either the right (the observed throwing arm) or the left effector, or an attention control task. Motor proficiency tests were also performed. Although both trained groups improved their prediction accuracy after training, only the motor-visual group showed interference associated with the right-arm secondary motor task after practice. No interference was shown for the left-arm motor task. These effects were evident regardless of whether predictions were made in response to video stimuli or static clips. Moreover, improvements on the motor proficiency test were only shown for the motor-visual group. These results show evidence in support of motor simulation processes during action prediction among observers with motor experience. Prediction accuracy can be achieved via nonmotor processes (for the perceptual group), but there was no evidence that physically experienced performers could effectively switch processes to maintain prediction accuracy. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Hierarchical nesting of affordances in a tool use task.


In studying the perception of affordances, researchers have typically identified a single affordance and designed experiments to evaluate the perception of that affordance. Yet in daily life, multiple affordances always exist. One consequence of this is that there may be higher order, means-ends relations between different affordances. In 4 experiments, we created situations in which lower order, subordinate affordances could affect the realization of higher order, superordinate affordances, and we asked whether participants were sensitive to these hierarchical, nested relations. Participants wielded tools that varied in length, mass, and mass distribution. In Experiments 1 and 2, we asked them to evaluate these tools in terms of their suitability for executing specific interactions with target objects (striking vs. poking) that were positioned at different distances. In Experiments 3 and 4, we asked participants to select rods and masses and then to assemble them into tools that could be used to execute specific interactions with target objects at different distances. The results were compatible with the hypothesis that participants were simultaneously sensitive to affordances for tool assembly and affordances for tool use. We argue that the nesting of affordances is characteristic of many situations in daily life and that, consequently, sensitivity to hierarchical, means-ends relations among affordances may be an essential characteristic of perceptually guided action. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

Explicit spatial compatibility is not critical to the object handle effect.


In object perception studies, a response advantage arises when the handle of an object is congruent with the responding hand. This handle effect is thought to reflect increased motor activation of the hand most suited to grasp the object, consistent with affordance theories of object representation. An alternative explanation has been proposed, however, which suggests that the handle effect is related to a simple spatial compatibility effect (the Simon effect). In 3 experiments, we determined whether the handle effect would emerge in the absence of explicit spatial compatibility between handle and response. Stimulus and response locations were varied vertically, and participants made horizontally orthogonal, bimanual responses to objects’ kitchen/garage category, color (as in a traditional Simon effect), or upright/inverted orientation. Categorization and inversion tasks, which relied on object knowledge, elicited a handle effect and a vertical Simon effect regarding stimulus and response locations. When participants judged object color, as per standard Simon effect paradigms, the handle effect disappeared but the Simon effect strengthened. These data demonstrate a dissociation between affordance and spatial compatibility effects and prove that affordance plays an important role in the handle effect. Models that incorporate both affordance and spatial compatibility mechanisms are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

GSDT: An integrative model of visual search.


We present a new quantitative process model (GSDT) of visual search that seeks to integrate various processing mechanisms suggested by previous studies within a single, coherent conceptual frame. It incorporates and combines 4 distinct model components: guidance (G), a serial (S) item inspection process, diffusion (D) modeling of individual item inspections, and a strategic termination (T) rule. For this model, we derive explicit closed-form results for response probability and mean search time (reaction time [RT]) as a function of display size and target presence/absence. The fit of the model is compared in detail to data from 4 visual search experiments in which the effects of target/distractor discriminability and of target prevalence on performance (present/absent display size functions for mean RT and error rate) are studied. We describe how GSDT accounts for various detailed features of our results such as the probabilities of hits and correct rejections and their mean RTs; we also apply the model to explain further aspects of the data, such as RT variance and mean miss RT. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
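For readers unfamiliar with the baseline that models like GSDT elaborate on, the textbook serial self-terminating search account predicts the present/absent display-size functions mentioned above. The sketch below implements that simplified baseline, not GSDT’s actual closed-form results; the base time and per-item inspection time are invented placeholder values.

```python
def mean_rt_present(n_items, base=400.0, per_item=50.0):
    """Target-present trials: on average (N + 1) / 2 items are
    inspected before the target is found (self-terminating search)."""
    return base + per_item * (n_items + 1) / 2

def mean_rt_absent(n_items, base=400.0, per_item=50.0):
    """Target-absent trials: all N items must be inspected before
    the search can be terminated."""
    return base + per_item * n_items
```

This baseline yields the classic 2:1 ratio of absent to present display-size slopes; guidance, diffusion-based inspections, and a strategic termination rule are the components GSDT adds to account for the finer-grained patterns (error rates, RT variance, miss RTs) that this simple model cannot.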

Listeners lengthen phrase boundaries in self-paced music.


Previous work has shown that musicians tend to slow down as they approach phrase boundaries (phrase-final lengthening). In the present experiments, we used a paradigm from the action perception literature, the dwell time paradigm (Hard, Recchia, & Tversky, 2011), to investigate whether participants engage in phrase boundary lengthening when self-pacing through musical sequences. When participants used a key press to produce each successive chord of Bach chorales, they dwelled longer on boundary chords than nonboundary chords in both the original chorales and atonal manipulations of the chorales. When a novel musical sequence was composed that controlled for metrical and melodic contour cues to boundaries, the dwell time difference between boundaries and nonboundaries was greater in the tonal condition than in the atonal condition. Furthermore, similar results were found for a group of nonmusicians, suggesting that phrase-final lengthening in musical production is not dependent on musical training and can be evoked by harmonic cues. (PsycINFO Database Record (c) 2016 APA, all rights reserved)