MPRL Research

Our lab aims to organize the astonishing complexity of moral judgment around basic functional principles. Much of our work is motivated by a simple idea: Because we use punishments and rewards to modify others' behavior, one function of morality is to teach others how to behave, while a complementary function is to learn appropriate patterns of behavior ourselves. Our research may be divided into a few broad categories:

 

Teaching: Moral condemnation and the logic of luck

Consider two friends who share beers over football on a snowy Sunday and then drive home separately. Both fall asleep at the wheel and run off the road. The "lucky" one careens into a snow bank, while the "unlucky" one hits and kills a person. Our laws treat these two individuals dramatically differently. In Massachusetts, for instance, the first would receive a small fine and perhaps a point off her license, while the second would face 2.5 to 15 years in prison. Moral luck is not just a peculiar feature of our laws; it also shapes the punishment judgments of ordinary people (Cushman, 2008), as well as their behavior in economic games (Cushman, Dreber, Yang & Costa, 2009), and it does so from a young age (Cushman, Sheketoff, Wharton & Carey, 2013).

Past theories have treated moral luck as a very general heuristic, mistake, or emotional bias, but our research points toward a different conclusion. We have shown that moral luck is a special feature of punishment judgments, one that does not apply to judgments of moral wrongness or character (Cushman, 2008; Cushman et al., 2013), as it would if it reflected a general cognitive or emotional bias.

How might the distinctive architecture of punishment judgments reflect their ultimate function? We propose that moral luck is, in fact, well suited to the purpose of modifying the behavior of social partners. Put simply, an accident is a teachable moment. This perspective on the functional design of punishment opens many new directions for research. For instance, we have explored how the demands of behavioral modification can lead to peculiar forms of collective retribution in cultures of honor (Cushman, Durwin & Lively, 2012). More generally, it implies that the structure of human punishment should exhibit a close fit to the capacities and constraints of human learning. In this respect, it dovetails with a second major topic of our research.

 

Learning: Moral behavior and the architecture of decision-making

It is widely accepted that humans are averse to performing harmful actions and that this emotional response explains important properties of moral judgment and behavior. The specific nature of this emotional response, however, remains poorly understood. Some past models have assumed that the aversion stems from empathy, that is, from anticipating the suffering of a victim. But in recent research, we have demonstrated that another powerful contributor is an intrinsic aversion to certain canonically harmful actions themselves, extending even to their sensory-motor properties. This line of research reveals deep and promising connections between the study of moral behavior and current neurocomputational models of learning and choice (Cushman, 2013). At its heart is a distinction between psychological systems that learn the value of outcomes and those that learn the value of actions.

In one study, participants were asked to perform a series of pretend harmful actions toward an experimenter—for instance, shooting him in the face with a disabled handgun (Cushman, Gray, Gaffey & Mendes, 2012). This preserved the sensory-motor properties of an action while removing any expectation of an actual harmful outcome. Participants exhibited a large increase in peripheral vasoconstriction, a physiological state associated with an aversive emotional response. Critically, this response was significantly weaker if the participant merely watched an experimenter perform the action on another experimenter. These results point toward an intrinsic aversion to the sensory-motor properties of a canonically harmful act, independent of the expected outcome of that act.

In another study, we asked participants how averse they would be to performing different methods of mercy killing: giving a poison pill, shooting, suffocating, and so on (Miller, Hannikainen & Cushman, in press). We found that their reported aversion was not significantly predicted by the amount of suffering they expected the victim to experience (an outcome), but was almost perfectly predicted by their reported aversion to pretending to kill a person in the specified manner as part of a theatrical performance (preserving the sensory-motor properties of the action but without any harmful outcome or suffering).

Although these findings expose an apparent irrationality in the structure of our moral emotions, they are readily explained on the assumption that moral behavior is influenced by a specific class of ordinary, domain-general learning mechanisms (Cushman, 2013). Current computational and neurobiological theories of associative learning distinguish between two basic mechanisms for learning and deciding. One assigns value directly to stimulus-response associations ("pointing gun at face = bad") and functions primarily through the midbrain dopamine system and basal ganglia. This system provides a natural model for understanding the aversion to harmful actions based on sensory-motor properties.

Equally important, however, is another system that derives value from a causal model of expected outcomes ("pulling a trigger causes shooting, which causes harm, which is bad") and draws on a network of cortical brain areas. We have proposed that both of these systems contribute to moral judgment and behavior and that conflict between the systems provides a natural dual-process model for the moral domain as well as for cognition generally, with substantial advantages over the more traditional contrast between emotion and reasoning. Consistent with this interpretation, moral judgments of trolley-type dilemmas exhibit sensitivity to both action-based and outcome-based value representations, which are themselves dissociable (Miller et al., in press).
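The contrast between these two systems can be made concrete with a minimal sketch in Python. It is purely illustrative: the class names (ModelFreeValues, ModelBasedValues), the toy "gun" scenario, and all numerical values are hypothetical and not drawn from the cited papers. One component caches value directly on stimulus-action pairs through prediction-error updates; the other derives value by passing an action through a causal model of its expected outcomes. The point is that a pretend harmful act can remain strongly aversive to the first system even when the second predicts no harm at all.

```python
# Illustrative sketch of the two value systems described above.
# Names and values are hypothetical, not an implementation from the cited work.

class ModelFreeValues:
    """Assigns value directly to stimulus-action pairs ("pointing gun at face = bad")."""
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.values = {}  # (stimulus, action) -> cached value

    def update(self, stimulus, action, feedback):
        # Prediction-error learning: nudge the cached value toward the feedback.
        key = (stimulus, action)
        old = self.values.get(key, 0.0)
        self.values[key] = old + self.learning_rate * (feedback - old)

    def evaluate(self, stimulus, action):
        return self.values.get((stimulus, action), 0.0)


class ModelBasedValues:
    """Derives value from a causal model of expected outcomes."""
    def __init__(self, causal_model, outcome_values):
        self.causal_model = causal_model      # action -> list of (outcome, probability)
        self.outcome_values = outcome_values  # outcome -> value

    def evaluate(self, action):
        # Expected value over the outcomes the causal model predicts for this action.
        return sum(p * self.outcome_values.get(outcome, 0.0)
                   for outcome, p in self.causal_model.get(action, []))


# Pretending to shoot with a disabled gun: the model-based system predicts no harm,
# but the model-free system still carries negative value learned for the action itself.
mf = ModelFreeValues()
for _ in range(20):
    mf.update("gun at face", "pull trigger", feedback=-1.0)  # past aversive pairings

mb = ModelBasedValues(
    causal_model={"pull trigger (disabled gun)": [("no harm", 1.0)]},
    outcome_values={"harm": -1.0, "no harm": 0.0},
)

print(mf.evaluate("gun at face", "pull trigger"))    # strongly negative
print(mb.evaluate("pull trigger (disabled gun)"))    # roughly zero
```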

Notably, these studies imply that one's own aversion to performing an action ultimately contributes to the moral condemnation of third-party actions. In current research, we are testing a model for this first-person to third-person transfer, which we call "evaluative simulation." Past research has emphasized the role that simulation may play in describing, explaining and predicting others' behavior (i.e., theory of mind). We propose a parallel process by which simulation is used to evaluate another person's behavior (Miller & Cushman, 2013). In other words, a common way of answering the question "Was it wrong for her to do it?" is instead to ask, "How would it make me feel to do it?"

 

Constructing: The origins of moral principles and errors of induction

Much of our moral knowledge takes the form of intuitions—judgments about particular cases that arise spontaneously and without conscious awareness of the relevant cognitive processing (Cushman, Young & Hauser, 2006). But, from the courtroom to the church to the classroom, human morality is also structured around explicit rules. In a series of studies, we have shown that these moral principles are often generalized from patterns of intuitive judgment in particular cases and also that this process is highly error-prone. This body of research offers a productive case study of one of the most fundamental issues in social psychology: the processes by which we develop explicit theories based on observations of our own implicit attitudes and automatic behaviors.

One demonstration of an errant moral induction comes from a study of several hundred professional philosophers (Schwitzgebel & Cushman, 2012). By manipulating the order in which these professors judged a series of well-known moral dilemmas, we amplified or muted salient contrasts between them. Not only did this affect the philosophers' judgments of the cases; it also produced a large shift in the proportion of philosophers who professed allegiance to prominent moral principles widely debated in the literature. These effects were significantly larger among philosophers than non-philosophers, and largest of all among specialists in ethics. In other words, philosophical training apparently makes people especially adept at generalizing principles from patterns of intuition, but in a manner that can be blind to the underlying psychological causes of those intuitions. Consequently, philosophers' theories are susceptible to psychological influences that they themselves would reject upon reflection.

Order effects can be produced in laboratory environments, but what are the influences on moral judgment that shape our explicit moral principles in more ordinary circumstances? From the U.S. Supreme Court to the American Medical Association to the average person on the street, many people endorse the view that there is a morally significant distinction between actively killing a person and passively allowing a person to die when the death could have been prevented (Cushman et al., 2006). Our research indicates that this explicit principle may represent another error of induction. It arises in part from a general bias, not specific to the moral domain, to regard actions as more causal and intentional than omissions (Cushman & Young, 2011).

Evidence from functional neuroimaging indicates that this bias has a natural interpretation in terms of automatic versus controlled processing (Cushman et al., 2011): Judging harmful omissions recruits greater activation in the frontoparietal control network than judging harmful actions, and individuals who exhibit the greatest levels of activation in this network also show the least difference in their judgments of actions and omissions. Put in simple terms, institutions like the AMA and Supreme Court may explicitly endorse the view that active harm is morally worse than passive harm because harmful actions automatically generate robust causal and mental-state attributions, whereas controlled processing is necessary to generate the same attributions in cases of harmful omissions.