Journal of Experimental Psychology: General - Vol 146, Iss 1

The Journal of Experimental Psychology: General publishes articles describing empirical work that bridges the traditional interests of two or more communities of psychology.

Last Build Date: Wed, 18 Jan 2017 21:00:11 EST

Copyright: Copyright 2017 American Psychological Association

The dynamic effect of incentives on postreward task engagement.


Although incentives can be a powerful motivator of behavior when they are available, an influential body of research has suggested that rewards can persistently reduce engagement after they end. This research has resulted in widespread skepticism among practitioners and academics alike about using incentives to motivate behavior change. However, recent field studies looking at the longer term effects of temporary incentives have not found such detrimental behavior. We design an experimental framework to study dynamic behavior under temporary rewards, and find that although there is a robust decrease in engagement immediately after the incentive ends, engagement returns to a postreward baseline that is equal to or exceeds the initial baseline. As a result, the net effect of temporary incentives on behavior is strongly positive. The decrease in postreward engagement is not on account of a reduction in intrinsic motivation, but is instead driven by a desire to take a “break,” consistent with maintaining a balance between goals with primarily immediate and primarily delayed benefits. Further supporting this interpretation, the initial decrease in postreward engagement is reduced by contextual factors (such as lower task difficulty and higher-magnitude incentives) that reduce the imbalance between effort and leisure. These findings are contrary to the predictions of major established accounts and have important implications for designing effective incentive policies to motivate behavior change. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Overdistribution illusions: Categorical judgments produce them, confidence ratings reduce them.


Overdistribution is a form of memory distortion in which an event is remembered as belonging to too many episodic states, states that are logically or empirically incompatible with each other. We investigated a response formatting method of suppressing 2 basic types of overdistribution, disjunction and conjunction illusions, which parallel some classic illusions in the judgment and decision making literature. In this method, subjects respond to memory probes by rating their confidence that test cues belong to specific episodic states (e.g., presented on List 1, presented on List 2), rather than by making the usual categorical judgments about those states. The central prediction, which was derived from the task calibration principle of fuzzy-trace theory, was that confidence ratings should reduce overdistribution by diminishing subjects’ reliance on noncompensatory gist memories. The data of 3 experiments agreed with that prediction. In Experiment 1, there were reliable disjunction illusions with categorical judgments but not with confidence ratings. In Experiment 2, both response formats produced reliable disjunction illusions, but those for confidence ratings were much smaller than those for categorical judgments. In Experiment 3, there were reliable conjunction illusions with categorical judgments but not with confidence ratings. Apropos of recent controversies over confidence-accuracy correlations in memory, such correlations were positive for hits, negative for correct rejections, and the 2 types of correlations were of equal magnitude. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

On the failure to notice that White people are White: Generating and testing hypotheses in the celebrity guessing game.


Drawing together social psychologists’ concerns with equality and cognitive psychologists’ concerns with scientific inference, 6 studies (N = 841) showed how implicit category norms make the generation and testing of hypotheses about race highly asymmetric. Having shown that Whiteness is the default race of celebrity actors (Study 1), Study 2 used a variant of Wason’s (1960) rule discovery task to demonstrate greater difficulty in discovering rules that require specifying that race is shared by White celebrity actors than by Black celebrity actors. Clues to the Whiteness of White actors from analogous problems had little effect on hypothesis formation or rule discovery (Studies 3 and 4). Rather, across Studies 2 and 4, feedback about negative cases—non-White celebrities—facilitated the discovery that White actors shared a race, whether participants or experimenters generated the negative cases. These category norms were little affected by making White actors’ Whiteness more informative (Study 5). Although participants understood that discovering that White actors are White would be harder than discovering that Black actors are Black, they showed limited insight into the information contained in negative cases (Study 6). Category norms render some identities as implicit defaults, making hypothesis formation and generalization about real social groups asymmetric in ways that have implications for scientific reasoning and social equality. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Who “believes” in the Gambler’s Fallacy and why?


Humans possess a remarkable ability to discriminate structure from randomness in the environment. However, this ability appears to be systematically biased. This is nowhere more evident than in the Gambler’s Fallacy (GF)—the mistaken belief that observing an increasingly long sequence of “heads” from an unbiased coin makes the occurrence of “tails” on the next trial ever more likely. Although the GF appears to provide evidence of “cognitive bias,” a recent theoretical account (Hahn & Warren, 2009) has suggested the GF might be understandable if constraints on actual experience of random sources (such as attention and short-term memory) are taken into account. Here we test this experiential account by exposing participants to 200 outcomes from a genuinely random (p = .5) Bernoulli process. All participants saw the same overall sequence; however, we manipulated experience across groups such that the sequence was divided into chunks of length 100, 10, or 5. Both before and after the exposure, participants (a) generated random sequences and (b) judged the randomness of presented sequences. In contrast to other accounts in the literature, the experiential account suggests that this manipulation will lead to systematic differences in postexposure behavior. Our data were strongly in line with this prediction and provide support for a general account of randomness perception in which biases are actually apt reflections of environmental statistics under experiential constraints. This suggests that deeper insight into human cognition may be gained if, instead of dismissing apparent biases as failings, we assume humans are rational under constraints. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Entrainment to an auditory signal: Is attention involved?


Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain, however, is unclear. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict that rhythm entrainment also influences memory for visual stimuli. In 2 pseudoword memory experiments, we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Try to look on the bright side: Children and adults can (sometimes) override their tendency to prioritize negative faces.


We used eye tracking to examine 4- to 10-year-olds’ and adults’ (N = 173) visual attention to negative (anger, fear, sadness, disgust) and neutral faces when paired with happy faces in 2 experimental conditions: free-viewing (“look at the faces”) and directed (“look only at the happy faces”). Regardless of instruction, all age groups more often looked first to negative versus positive faces (no age differences), suggesting that initial orienting is driven by bottom-up processes. In contrast, biases in more sustained attention—last looks and looking duration—varied by age and could be modified by top-down instruction. On the free-viewing task, all age groups exhibited a negativity bias which attenuated with age and remained stable across trials. When told to look only at happy faces (directed task), all age groups shifted to a positivity bias, with linear age-related improvements. This ability to implement the “look only at the happy faces” instruction, however, fatigued over time, with the decrement stronger for children. Controlling for age, individual differences in executive function (working memory and inhibitory control) had no relation to the free-viewing task; however, these variables explained substantial variance on the directed task, with children and adults higher in executive function showing better skill at looking last and looking longer at happy faces. Greater anxiety predicted more first looks to angry faces on the directed task. These findings advance theory and research on normative development and individual differences in the bias to prioritize negative information, including contributions of bottom-up salience and top-down control. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Face-blind for other-race faces: Individual differences in other-race recognition impairments.


We report the existence of a previously undescribed group of people, namely individuals who are so poor at recognition of other-race faces that they meet criteria for clinical-level impairment (i.e., they are “face-blind” for other-race faces). Testing 550 participants, and using the well-validated Cambridge Face Memory Test for diagnosing face blindness, results show the rate of other-race face blindness to be nontrivial, specifically 8.1% of Caucasians and Asians raised in majority own-race countries. Results also show risk factors for other-race face blindness to include: a lack of interracial contact; and being at the lower end of the normal range of general face recognition ability (i.e., even for own-race faces); but not applying less individuating effort to other-race than own-race faces. Findings provide a potential resolution of contradictory evidence concerning the importance of the other-race effect (ORE), by explaining how it is possible for the mean ORE to be modest in size (suggesting a genuine but minor problem), and simultaneously for individuals to suffer major functional consequences in the real world (e.g., eyewitness misidentification of other-race offenders leading to wrongful imprisonment). Findings imply that, in legal settings, evaluating an eyewitness’s chance of having made an other-race misidentification requires information about the underlying face recognition abilities of the individual witness. Additionally, analogy with prosopagnosia (inability to recognize even own-race faces) suggests everyday social interactions with other-race people, such as those between colleagues in the workplace, will be seriously impacted by the ORE in some people. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Two paths to blame: Intentionality directs moral information processing along two distinct tracks.


There is broad consensus that features such as causality, mental states, and preventability are key inputs to moral judgments of blame. What is not clear is exactly how people process these inputs to arrive at such judgments. Three studies provide evidence that early judgments of whether or not a norm violation is intentional direct information processing along 1 of 2 tracks: if the violation is deemed intentional, blame processing relies on information about the agent’s reasons for committing the violation; if the violation is deemed unintentional, blame processing relies on information about how preventable the violation was. Owing to these processing commitments, when new information requires perceivers to switch tracks, they must reconfigure their judgments, which results in measurable processing costs indicated by reaction time (RT) delays. These findings offer support for a new theory of moral judgment (the Path Model of Blame) and advance the study of moral cognition as hierarchical information processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

Imagining wrong: Fictitious contexts mitigate condemnation of harm more than impurity.


Over 5 experiments, we test the fictive pass asymmetry hypothesis. Following observations of ethics and public reactions to media, we propose that fictional contexts, such as film, imagination, and virtual environments, will mitigate people’s moral condemnation of harm violations more so than purity violations. That is, imagining a purely harmful act is given a “fictive pass” in moral judgment, whereas imagining an abnormal act involving the body is evaluated more negatively because it is seen as more diagnostic of bad character. For Experiment 1, an undergraduate sample (N = 250) evaluated 9 vignettes depicting an agent committing either violations of harm or purity in real life, watching them in films, or imagining them. For Experiments 2 and 3, online participants (N = 375 and N = 321, respectively) evaluated a single vignette depicting an agent committing a violation of harm or purity that either occurred in real life, was imagined, was watched in a film, or was performed in a video game. Experiment 4 (N = 348) used an analysis of moderated mediation to demonstrate that the perceived wrongness of fictional purity violations is explained both by the extent to which they are seen as a cue to, and a cause of, a poor moral character. Lastly, Experiment 5 (N = 484) validated our manipulations and included the presumption of desire as an additional mediator of the fictive pass asymmetry effects. We discuss implications for moral theories of act and character, anger and disgust, and for media use and regulation. (PsycINFO Database Record (c) 2017 APA, all rights reserved)