
Cognitive Psychology

Volume 77, March 2015, Pages 42-76

Do the right thing: The assumption of optimality in lay decision theory and causal judgment

https://doi.org/10.1016/j.cogpsych.2015.01.003

Highlights

  • We investigated lay theories of decision-making.

  • The quality of rejected decision options is used in assigning causal responsibility.

  • Lay decision theorists hold an optimality theory of decision-making.

  • Both global optimality for the agent’s overall utility and local optimality for the focal goal are used.

  • These results inform theories of social cognition and of strategic interaction.

Abstract

Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent’s perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent’s actual and counterfactual choices (Experiments 1–3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5–6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take.

Introduction

Psychologists, economists, and philosophers are united in their disagreements over the question of human rationality. Some psychologists focus on the fallibility of the heuristics we use and the systematic biases that result (Kahneman & Tversky, 1996), while others are impressed by the excellent performance of heuristics in the right environment (Gigerenzer & Goldstein, 1996). Economists spar over the appropriateness of rationality assumptions in economic models, with favorable views among classically-oriented economists (Friedman, 1953) and unfavorable views among behavioral theorists (Simon, 1986). Meanwhile, philosophers studying decision theory struggle to characterize what kind of behavior is rational, given multifaceted priorities, indeterminate probabilities, and pervasive ignorance (Jeffrey, 1965).

Although decision scientists have debated sophisticated theories of rationality, less is known about people’s lay theories of decision-making. Understanding how people predict and make sense of others’ decision-making has both basic and applied value, just as research on lay theories of biology (e.g., Shtulman, 2006), psychiatry (e.g., Ahn, Proctor, & Flanagan, 2009), and personality (e.g., Haslam, Bastian, & Bissett, 2004) has led to both theoretical and practical progress. The study of lay decision theory can illuminate aspects of our social cognition and reveal the assumptions we make when interacting with others.

In this article, we argue that people use an optimality theory in thinking about others’ behavior, and we show that this optimality assumption guides the attribution of causal responsibility. In the remainder of this introduction, we first describe game theory research on optimality assumptions, then lay out the connections to causal attribution research. Finally, we derive predictions for several competing theoretical views, and preview our empirical strategy.

Psychologists are well-versed in the evidence against human rationality (e.g., Shafir & LeBoeuf, 2002; the collected works of Kahneman and Tversky). Nonetheless, optimality assumptions have a venerable pedigree in economics (Friedman, 1953; Muth, 1961; Smith, 1982/1776), and are incorporated into some game-theoretic models. In fact, classical game theory assumes not only first-order optimality (i.e., behaving optimally relative to one’s self-interest) but also second-order optimality (assuming that others will behave optimally relative to their own self-interest), third-order optimality (assuming that others will assume that others will behave optimally), and so on ad infinitum (von Neumann & Morgenstern, 1944). Understanding the nature of our assumptions about others’ decision-making is thus a foundational issue in behavioral game theory—the empirical study of strategic interaction (Camerer, 2003; Colman, 2003).

Because people are neither infinitely wise nor infinitely selfish, rational self-interest models of economic behavior break down even in simple experimental settings (Camerer & Fehr, 2006). For example, in the beauty contest game (Ho et al., 1998; Moulin, 1986; Nagel, 1995), each player in a group picks a number between 0 and 100, and the player whose number is closest to 2/3 of the group average wins a fixed monetary payoff. The Nash equilibrium for this game is that every player chooses 0 (i.e., only if every player chooses 0 is it the case that no player can benefit by changing strategy). If others played the game without any guidance from rationality, choosing randomly, their mean choice would be 50, so the best response would be around 33. But if others followed that exact reasoning, their average response would be 33, and the best response to 33 is about 22. Applying this logic repeatedly leads to the conclusion that the equilibrium guess should be 0. Yet average guesses fall between 20 and 40, depending on the subject pool, with more analytic populations (such as Caltech undergraduates) tending to guess lower (Camerer, 2003). Which assumption or assumptions of classical game theory are being violated here? Are people miscalculating the equilibrium? Are they assuming that others will miscalculate, or assuming that others will assume miscalculations from others? Are they making a perspective-taking error, or assuming that others will make perspective-taking errors?
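
To make the iterated reasoning concrete, here is a minimal sketch in Python (our own illustration, not from the paper); the assumption that level-0 players choose randomly with mean 50 follows the text:

```python
# A sketch of iterated best-response ("level-k") reasoning in the beauty
# contest game, assuming level-0 players guess randomly with mean 50 and
# each higher level best-responds to the level below it.

def best_response(mean_guess: float, fraction: float = 2 / 3) -> float:
    """Best response when the winning number is `fraction` of the mean guess."""
    return fraction * mean_guess

def level_k_guesses(levels: int, level0_mean: float = 50.0) -> list[float]:
    """Guesses for levels 0..levels, each best-responding to the previous level."""
    guesses = [level0_mean]
    for _ in range(levels):
        guesses.append(best_response(guesses[-1]))
    return guesses

print([round(g, 1) for g in level_k_guesses(5)])
# [50.0, 33.3, 22.2, 14.8, 9.9, 6.6]: the sequence converges to the Nash
# equilibrium of 0; empirical averages of 20-40 correspond to only one or
# two steps of this iteration.
```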

One approach toward answering such questions is to build an econometric model of each player’s behavior, interpreting the parameter estimates as evidence concerning the players’ underlying psychology (e.g., Camerer et al., 2004; Stahl & Wilson, 1995). This approach has led to important advances, but the mathematical models often underdetermine the players’ thinking, because a variety of mental representations and cognitive failures can often produce identical behavior. In this paper, we approach the problem of what assumptions people make about others’ behavior using a different set of tools—those of experimental psychology.

Two key assumptions of mathematical game theory—perfect self-interest and perfect rationality—are not empirically plausible (Camerer, 2003). However, a third assumption—that people assume (first-order) optimality in others’ decision-making—may be more plausible. To test this possibility, we studied how people assign causal responsibility to agents for the outcomes of their decisions: How do people evaluate Angie’s responsibility for an outcome, given Angie’s choice of a means for achieving it? Our key prediction is that if people use an optimality theory, agents should be seen as more responsible for outcomes flowing from their actions when those actions led optimally to the outcome.

We hypothesized this connection between lay decision theory and perceived responsibility because (a) rational behavior is a cue to agency (Gao & Scholl, 2011; Gergely & Csibra, 2003), and (b) agents are perceived as more responsible than non-agents (Alicke, 1992; Hart & Honoré, 1959; Hilton et al., 2010; Lagnado & Channon, 2008). Putting these two findings together, a lay decision theorist should assign higher responsibility to others to the extent that those others conform to her theory of rational decision-making (see Gerstenberg, Ullman, Kleiman-Weiner, Lagnado, & Tenenbaum, 2014, for related computational work). Conversely, decision-making that contradicts her theory could result in attenuated responsibility assignment, on the grounds that the decision-maker is not operating in a fully rational way. In extreme cases, murderers may even be acquitted on grounds of mental defect when their decision-making mechanism is perceived as wildly discrepant from rational behavior (see Sinnott-Armstrong & Levy, 2011), overriding the strong motivation to punish morally objectionable actions (Alicke, 2000).

Studying attributions of responsibility also has methodological and practical advantages. Responsibility attributions can be used to test inferences not only about agents’ actual choices, but also about their counterfactual choices—the options that were available but not taken. Intuitively, responsibility attributions are a way of assigning “ownership” of an outcome to one or more individuals after a fully specified outcome has occurred (Hart & Honoré, 1959; Zultan et al., 2012). This method allows us to independently vary the quality of the actual and counterfactual decision options. Further, attributions of causal responsibility have real-life consequences. They affect our willingness to cooperate (Falk & Fischbacher, 2006), our predictions about behavior (McArthur, 1972; Meyer, 1980), and our moral evaluations (Cushman, 2008). For this reason, understanding how people assign responsibility for outcomes has been a recurring theme in social cognition research (e.g., Heider, 1958; Kelley, 1967; Weiner, 1995; Zultan et al., 2012).

In this article, we argue that perceived responsibility depends on the optimality of an action—that people behave like lay classical economists in the tradition of Adam Smith. People believe a decision-maker is responsible for an outcome if the decision-maker’s choice is the best of all available options. However, optimality is not the only rule people could adopt in evaluating decisions, and in what follows, we compare optimality to three alternative strategies.

To compare the alternative strategies, suppose that Angie wants the flowers of her cherished shrub to turn red, and faces a decision as to which fertilizer to purchase—Ever-Gro or Green-Scream. Suppose she purchases Ever-Gro, which has a 50% chance of making her flowers turn red. We abbreviate this probability as PACT, where PACT = P(Outcome | Actual Choice). In this case, PACT = .5. Suppose, too, that the rejected option, Green-Scream, has a 30% chance of making her flowers turn red; we abbreviate this as PALT = P(Outcome | Alternative Choice). Since PACT > PALT, Angie’s choice was optimal. However, if the rejected option, Green-Scream, had instead had a 70% chance of making the flowers turn red, then PALT > PACT, and Angie’s choice of Ever-Gro would have been suboptimal. Finally, if both fertilizers had a 50% chance of producing red flowers, then PACT = PALT, and there would have been no uniquely optimal decision. Supposing that the fertilizer of her choice does cause the flowers to turn red, is Angie responsible for the successful completion of her goal—for the flowers turning red?
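
In code, this classification amounts to comparing the two conditional probabilities (a minimal sketch of our own; the names follow the text’s PACT/PALT notation, but the function itself is illustrative):

```python
# A sketch of the optimality classification described above, where
# p_act = P(Outcome | Actual Choice) and p_alt = P(Outcome | Alternative Choice).

def classify_choice(p_act: float, p_alt: float) -> str:
    """Label a two-option choice as optimal, suboptimal, or tied."""
    if p_act > p_alt:
        return "optimal"
    if p_act < p_alt:
        return "suboptimal"
    return "no uniquely optimal option"

print(classify_choice(0.5, 0.3))  # Ever-Gro vs. inferior Green-Scream: "optimal"
print(classify_choice(0.5, 0.7))  # vs. superior Green-Scream: "suboptimal"
print(classify_choice(0.5, 0.5))  # equal efficacy: "no uniquely optimal option"
```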

One possibility is that the quality of the rejected options is not relevant to Angie’s responsibility. What does it matter if Angie might have made a choice more likely to fulfill her goal, given that she actually did fulfill it? People are ordinarily more likely to generate “upward” counterfactuals in cases of failure than “downward” counterfactuals in cases of success (e.g., Mandel & Lehman, 1996), and on some accounts, the primary function of counterfactual reasoning is to elicit corrective thinking in response to negative episodes (Roese, 1997). So people may not deem counterfactual actions relevant if the actual choice led to a success (see Belnap, Perloff, & Xu, 2001 for a different rationale for such a pattern). If people do not view Angie’s rejected options as relevant to evaluating her actual (successful) decision, then they would follow a strategy we call alternative-insensitive: For a given value of PACT, there would be no relationship between attributions of responsibility and PALT. Table 1 summarizes this possibility by showing that this view predicts that people will assign Angie responsibility (indicated by + in the table) as long as (a) she chooses an option that has a nonzero probability of leading to the desired outcome and (b) that outcome actually occurs.

A quite different pattern would appear if people assume that agents are optimizers. Although much of the time people do not themselves behave optimally (e.g., Simon, 1956), the assumption of optimal decision-making might be useful for predicting and explaining behavior (Davidson, 1967; Dennett, 1987) and is built into game theory models of strategic interaction (e.g., von Neumann & Morgenstern, 1944). If optimality of this sort underlies our lay decision theories, the perceived responsibility of other decision-makers should depend on whether they select the highest quality option available (i.e., on whether PACT > PALT). For example, given Angie’s choice of Ever-Gro (PACT = .5), Angie might be seen as more responsible for the flowers turning red if the rejected option of Green-Scream is inferior (PALT = .3) than if it is superior (PALT = .7). According to this account, the size of the difference between PACT and PALT should have little impact on responsibility ratings. That is, if PACT = .5, Angie would be seen as equally responsible for the flowers turning red, regardless of whether the rejected option is only somewhat worse (PALT = .3) or is much worse (PALT = .1), because she chose optimally either way. Likewise, Angie’s (non-)responsibility for the outcome would be similar whether the rejected option is only somewhat better (PALT = .7) or much better (PALT = .9), because she chose suboptimally either way.

The prediction that responsibility ratings would be insensitive to the magnitude of [PACT − PALT] is an especially strong test of optimality, because in other contexts, people often judge the strength of a cause to be proportional to the size of the difference the cause made to the probability of the outcome (Cheng & Novick, 1992; Spellman, 1997). The canonical measure of probabilistic difference-making is ΔP (Allan, 1980), which is equal to [P(Effect | Cause) − P(Effect | ∼Cause)]. One might expect, based on those previous results, that responsibility ratings would be sensitive to the magnitude of [PACT − PALT], which is equivalent to ΔP if one interprets the actual decision as the cause and the rejected option as the absence of the cause (i.e., ∼Cause). We refer to this strategy as ΔP dependence.
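
The divergence between the two strategies can be seen by holding PACT at .5 and varying PALT, as in Angie’s case above (again a sketch of our own, not the paper’s materials):

```python
# Delta-P dependence predicts responsibility proportional to p_act - p_alt;
# optimality predicts responsibility depends only on the sign of that difference.

def delta_p(p_act: float, p_alt: float) -> float:
    """Delta-P, treating the rejected option as the absence of the cause."""
    return p_act - p_alt

for p_alt in (0.1, 0.3, 0.7, 0.9):
    dp = delta_p(0.5, p_alt)
    print(f"P_ALT = {p_alt}: Delta-P = {dp:+.1f}, optimal choice = {dp > 0}")
# Delta-P dependence predicts four distinct responsibility levels here;
# optimality predicts just two (high for P_ALT = .1 and .3, low for .7 and .9).
```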

The final strategy we consider is positive difference-making. If more than two alternatives are available, the difference-making status of any one of them is best evaluated against a common baseline. For example, if Angie can choose to apply Ever-Gro, Green-Scream, or neither, then we can calculate ΔP separately for each fertilizer relative to the do-nothing option. Suppose, for example, that Angie’s plant has a 10% chance of developing red flowers even if she does not add any fertilizer, and that Angie’s choice of Ever-Gro was suboptimal (PACT = .5) relative to the rejected choice of Green-Scream (PALT = .7). Now, ΔP is positive both for her actual (suboptimal) choice (ΔPACT = .4) and for the rejected option (ΔPALT = .6). If people simply assign higher responsibility ratings when ΔP > 0 than when ΔP < 0—in contrast to both the ΔP dependence and the optimality strategies—then Angie would be seen as highly responsible for the outcome, despite her suboptimal choice.

Table 1 compares these four methods of assigning responsibility. Suppose the decision-maker has three options, A, B, and C. For illustration, we will assume that PA [ = P(outcome | choice of A)] = .5, PB = .3, and PC = .1. Then, as Table 1 shows, optimizing implies that the decision-maker is responsible (indicated by a +) only if she chooses A, whereas positive difference-making implies that she is responsible if she chooses either A or B (assuming that ΔP is calculated relative to the worst option, C). A pure ΔP strategy (i.e., responsibility is directly proportional to ΔP) also assigns responsibility to A and B, but more strongly for the former. Finally, if people are insensitive to alternative choices, then so long as a positive outcome occurred, the decision-maker would be credited with responsibility if she chooses any of A, B, or C.
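
A short sketch (our own paraphrase of the four accounts, not code from the paper) reproduces these Table 1 predictions for PA = .5, PB = .3, PC = .1, with ΔP computed relative to the worst option, C, as the text assumes:

```python
# Predicted responsibility under the four strategies for options A, B, C.
probs = {"A": 0.5, "B": 0.3, "C": 0.1}
baseline = min(probs.values())  # the worst option, C, serves as the baseline

def optimality(choice: str) -> bool:
    return probs[choice] == max(probs.values())  # credited only if best option

def positive_difference(choice: str) -> bool:
    return probs[choice] - baseline > 0          # credited whenever Delta-P > 0

def pure_delta_p(choice: str) -> float:
    return probs[choice] - baseline              # graded in proportion to Delta-P

def alternative_insensitive(choice: str) -> bool:
    return probs[choice] > 0                     # credited given any chance of success

for c in probs:
    print(c, optimality(c), positive_difference(c),
          round(pure_delta_p(c), 1), alternative_insensitive(c))
# A True  True  0.4 True   <- only A is credited under optimality
# B False True  0.2 True   <- A and B under positive difference-making
# C False False 0.0 True   <- all three if insensitive to alternatives
```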

These issues are explored in seven experiments. Experiments 1 and 2 distinguish the predictions of the four accounts summarized in Table 1 by varying the quality of the decision-makers’ rejected options (PALT). Experiment 3 then turns to how people combine information about the quality of both the actual and rejected options (PACT and PALT) in forming responsibility judgments, and Experiment 4 looks at individual differences in assignment strategies. Experiments 5 and 6 then examine how people conceptualize trade-offs among multiple goals, testing whether perceived responsibility for a goal tracks optimality for that goal or optimality relative to the agent’s overall utility. Finally, Experiment 7 uses more naturalistic decision problems to see how people spontaneously assign responsibility when the probabilities are supplied by background knowledge rather than by the experimenter.

Section snippets

Experiment 1: The influence of rejected options

In Experiment 1, we ask whether people typically use an optimality assumption to guide their attributions of responsibility, or whether instead they follow a linear ΔP or alternative-insensitive strategy (see Table 1). To do so, we examine how agents’ perceived responsibility for a desired outcome depends on the quality of a counterfactual choice—that is, an option they rejected. Participants read about agents who made decisions leading to an outcome with probability PACT (always .5), but could

Experiments 2A and 2B: Optimizing vs. positive difference making

Although the results of Experiment 1 are consistent with optimality, they could also be explained by participants attributing responsibility to the agent whenever ΔP > 0. This is the positive difference-making strategy of Table 1. The ΔP > 0 relationship occurs when the decision makes a positive difference to the outcome, relative to some reference point. If participants were computing ΔP relative to the worst available option, then the optimal choice in every condition of Experiment 1 was also the

Experiments 3A and 3B: Varying the quality of the actual choice

Causes with higher probabilities of bringing about their effects are ordinarily assigned higher causal strength than causes with lower probabilities (e.g., Cheng, 1997). Therefore, holding optimality constant, one might expect a positive relationship between responsibility judgments and PACT. In Experiment 3, we measure the effect of PACT, both for decisions where PALT < PACT and the decision was therefore optimal (Experiment 3A) and for decisions where PALT = PACT and the decision was not optimal

Experiment 4: Individual differences in responsibility assignment

We have so far described our findings at the group level, averaging across participants and comparing means across conditions. The possibility remains, however, that some of our findings reflect a mix of strategies at the individual level. For example, a subset of participants who were insensitive to counterfactuals and a subset of participants using the optimality principle could lead to a pattern of group means like that in Fig. 1. This possibility is particularly plausible in light of

Experiment 5: Local and global optimality

Sometimes an agent has multifaceted priorities, and the optimal means toward some particular end may not maximize the agent’s overall utility. An example from Audi (1993) illustrates this point:

Suppose I want to save money, purely as a means to furthering my daughter’s education…. I might discover that I can save money by not buying her certain books which are available at a library, and then, in order to save money, and with no thought of my ultimate reason for wanting to do so, decline to buy

Experiment 6: Varying the probability of the non-focal goal

In the present experiment, we manipulate global optimality in a more subtle way than we did in the previous experiment, this time asking whether responsibility for a focal goal is sensitive to manipulations of the choice’s efficacy for non-focal goals. More concretely, consider how Jill’s response to her newest conundrum could affect her responsibility for her hair smelling like apples (where PACT of the non-focal goal is varied across conditions as indicated in brackets):

  • Jill is shopping for a

Experiments 7A and 7B: Tacit knowledge of conditional probabilities

We have so far explored lay decision theory using controlled vignettes that indicated optimal choice with exact probabilities. Unfortunately, life seldom wears probabilities on its sleeve: Would optimality assumptions also extend to more realistic decision environments that do not explicitly quantify uncertainty?

Experiment 7 addressed this question by using vignettes about decisions for which participants might have prior beliefs, omitting explicit mention of decision efficacies. We began by

General discussion

These experiments examined lay theories of decision-making, asking how the quality of decision-makers’ actual and rejected options influences the perceived quality of their decisions, as measured by attributions of responsibility. Two main results emerged consistently across these studies.

First, an assumption of optimality guides attributions of responsibility. In Experiments 1 and 4 (as well as a near-exact replication of Experiment 1), attributions of responsibility depended qualitatively on

Acknowledgments

This research was partially supported by funds awarded to the first author by the Yale University Department of Psychology. Experiments 1 and 2 were presented at the 35th Annual Meeting of the Cognitive Science Society. We thank the conference attendees and reviewers for their extremely helpful suggestions. We thank Andy Jin for assistance with stimuli development, Fabrizio Cariani, Winston Chang, Angie Johnston, Frank Keil, Doug Medin, Emily Morson, Axel Mueller, Eyal Sagi, and Laurie Santos

References (71)

  • I. Ritov et al. Outcome knowledge, regret, and omission bias. Organizational Behavior and Human Decision Processes (1995)
  • A. Shtulman. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology (2006)
  • D.O. Stahl et al. On players’ models of other players: Theory and experimental evidence. Games and Economic Behavior (1995)
  • R. Zultan et al. Finding fault: Causality and counterfactuals in group attributions. Cognition (2012)
  • W. Ahn et al. Mental health clinicians’ beliefs about the biological, psychological, and environmental bases of mental disorders. Cognitive Science (2009)
  • M.D. Alicke. Culpable causation. Journal of Personality and Social Psychology (1992)
  • M.D. Alicke. Culpable control and the psychology of blame. Psychological Bulletin (2000)
  • L.G. Allan. A note on measurement of contingency between two binary variables in judgment tasks. Bulletin of the Psychonomic Society (1980)
  • K.J. Arrow. Rationality of self and others in an economic system. Journal of Business (1986)
  • R. Audi. Action, intention, and reason (1993)
  • N. Belnap et al. Facing the future: Agents and choices in our indeterminist world (2001)
  • C. Camerer. Behavioral game theory: Experiments in strategic interaction (2003)
  • C.F. Camerer et al. When does ‘Economic Man’ dominate social behavior? Science (2006)
  • C.F. Camerer et al. A cognitive hierarchy model of games. Quarterly Journal of Economics (2004)
  • S. Carey. The origin of concepts (2009)
  • P.W. Cheng. From covariation to causation: A causal power theory. Psychological Review (1997)
  • P.W. Cheng et al. Covariation in natural causal induction. Psychological Review (1992)
  • M.T.H. Chi et al. Misconceived causal explanations for emergent processes. Cognitive Science (2012)
  • A.W. Colman. Cooperation, psychological game theory, and limitations of rationality in social interaction. Behavioral and Brain Sciences (2003)
  • D. Davidson. Truth and meaning. Synthese (1967)
  • D. Davidson. Paradoxes of irrationality (1982)
  • J. De Freitas & S.G.B. Johnson. Behaviorist thinking in judgments of wrongness,... (submitted for publication)
  • D.C. Dennett. The intentional stance (1987)
  • Z. Dienes. Bayesian versus orthodox statistics: Which side are you on? Perspectives on Psychological Science (2011)
  • M. Friedman. Essays on positive economics (1953)