Pushing away from representative advice: Advice taking, anchoring, and adjustment

https://doi.org/10.1016/j.obhdp.2015.05.004

Highlights

  • We explore how the sequence of advice impacts how much advice is used.

  • We call those who receive advice before forming their own opinion “dependent”.

  • Dependent advisees adjusted away from median advice (“push-away effect”).

  • Our studies also found a push-away effect using a classic anchoring paradigm.

  • We discuss when push-away effects occur in advice taking and anchoring studies.

Abstract

Five studies compare the effects of forming an independent judgment prior to receiving advice with the effects of receiving advice before forming one’s own opinion. We call these the independent-then-revise sequence and the dependent sequence, respectively. We found that dependent participants adjusted away from advice, leading to fewer estimates close to the advice compared to independent-then-revise participants (Studies 1–5). This “push-away” effect was mediated by confidence in the advice (Study 2), with dependent participants more likely to evaluate advice unfavorably and to search for additional cues than independent-then-revise participants (Study 3). Study 4 tested how the advice sequences affect accuracy. Study 5 found that classic anchoring paradigms also show the push-away effect for median advice. Overall, the research shows that people adjust away from representative (median) advice. The paper concludes by discussing when push-away effects occur in advice taking and anchoring studies and the value of independent distributions for observing these effects.

Introduction

People often have to make decisions about topics on which they are not well informed, such as retirement, health care, or new work projects. Therefore, using advice from other people is an important life skill (Heath & Heath, 2013). Yet a large literature shows that people do not take advice particularly well, often overweighting their own opinions (Harvey and Fischer, 1997, Mannes, 2009, Yaniv and Kleinberger, 2000) or ignoring the advice that they receive (Soll & Larrick, 2009). In this paper we ask whether changing the way the advice is provided changes how much people use that advice. Specifically, we manipulate when the advice is received, relative to exposure to the decision problem, to test whether the timing of advice has an important influence on how much people take advice and on the accuracy of their final judgments.

The degree to which people take advice has important implications for judgmental accuracy. First, egocentric bias may cause people to underweight the opinions of others who are more accurate than they are (Yaniv & Kleinberger, 2000). Second, when individual abilities are not too different from one another, averaging quantitative judgments is typically superior to relying on one person’s opinion (Armstrong, 2001, Clemen, 1989, Hastie, 1986, Yaniv, 2004). This benefit occurs for quantitative estimates because errors cancel out when estimates bracket the truth (i.e., fall on both sides of the truth). As long as bracketing is sufficiently frequent, averaging is a very powerful way to reduce judgmental error (Larrick and Soll, 2006, Soll and Larrick, 2009). By underweighting or ignoring advice, as the literature shows is common, people lose out on benefitting from the knowledge of others.
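
To make the bracketing argument concrete, here is a minimal simulation sketch (not from the paper; the error distributions and parameters are hypothetical) showing that a 50/50 average of two independent estimates has lower average error than a single judge, with the gain driven largely by trials in which the two estimates bracket the truth.

```python
# Hypothetical simulation: a judge and an advisor with independent, unbiased errors.
import random

random.seed(0)
truth, trials = 100.0, 10_000
err_single = err_avg = brackets = 0.0

for _ in range(trials):
    judge = truth + random.gauss(0, 15)     # the judge's own estimate
    advisor = truth + random.gauss(0, 15)   # the advisor's independent estimate
    err_single += abs(judge - truth)
    err_avg += abs((judge + advisor) / 2 - truth)
    brackets += (judge - truth) * (advisor - truth) < 0  # estimates straddle the truth

print(f"mean abs error, judge alone:   {err_single / trials:.2f}")
print(f"mean abs error, 50/50 average: {err_avg / trials:.2f}")
print(f"bracketing rate:               {brackets / trials:.2%}")
```

When the two estimates fall on the same side of the truth, averaging merely splits the difference; when they bracket it, the errors partially cancel, which is why sufficiently frequent bracketing makes averaging such a powerful error-reduction strategy.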

Studies of advice taking typically ask participants to form their own independent opinion on the decision problem before seeing the opinion of their advisor, after which they are given a chance to revise by using the advice however they wish (see review by Bonaccio & Dalal, 2006). We call this sequence of receiving advice the independent-then-revise advice sequence (Fig. 1). Most advice taking studies employ this sequence and use tasks in which participants answer numerical, fact-based questions, such as dates in history or the weights of people in photographs. This allows the researcher to calculate continuous measures of both the amount of advice taking and the accuracy of initial and revised judgments. The independent-then-revise sequence has the advantage of helping judges avoid any “mental contamination” (Wilson & Brekke, 1994) from an advisor when forming their opinion. Seeing the advisor’s answer first could cause errors to be correlated, decrease the chances of bracketing, and thereby decrease the potential benefit of combining opinions with an advisor.

A number of core findings in the advice taking literature have emerged from this standard independent-then-revise paradigm. People tend to discount the opinions of others, with average weights of 70% on their own estimate and 30% on the advice (Harvey and Fischer, 1997, Yaniv and Kleinberger, 2000). Notably, this average weight arises from a multi-modal distribution of weights in which people often ignore advice entirely, occasionally average, and more rarely fully accept advice (Minson et al., 2011, Soll and Larrick, 2009, Soll and Mannes, 2011). A number of moderators of advice taking have also been identified. For example, people take more advice the more they trust the advisor (Gino & Schweitzer, 2008) or when they pay for the advice (Gino, 2008). People take less advice when they are primed with power (See et al., 2011, Tost et al., 2012, Tost et al., 2013) or are induced to experience certain emotions such as anger (Gino & Schweitzer, 2008).
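
The 70/30 weighting reported above is typically quantified with the weight-of-advice (WOA) measure standard in this literature; the sketch below shows the computation, using hypothetical numbers.

```python
def weight_of_advice(initial, advice, final):
    """Weight of advice: 0 = advice ignored, 0.5 = equal averaging,
    1 = advice fully adopted. Undefined when the advice equals the
    initial estimate."""
    if advice == initial:
        return None
    return (final - initial) / (advice - initial)

# Hypothetical example: initial guess 50, advice 70, revised answer 56.
print(weight_of_advice(initial=50, advice=70, final=56))  # 0.3, i.e., 30% weight on advice
```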

Although most research results have been obtained with the independent-then-revise sequence, in many common advice-taking situations people receive advice before they have an opportunity to form their own opinion on a question—advice comes first, followed by an estimate. For example, subordinates may make recommendations to their managers about spending in categories that the manager had not previously considered, such as, “We should budget $1500 to send me to a conference in Hawaii.” When working on the conference budget, the manager will be forming an estimate of the appropriate allocation after receiving the subordinate’s advice. We call this the dependent advice sequence, because the judgment is likely to be influenced by, and therefore dependent upon, the advice.

We are interested in two main questions about the independent-then-revise and dependent advice sequences: When do people take more advice? When are they more accurate? The natural prediction from the perspective of decades of anchoring research (Chapman and Johnson, 1999, Mussweiler and Strack, 1999, Tversky and Kahneman, 1974) would be that people take more advice in dependent advice sequences, and in fact the handful of studies that have looked at this question found such a result (Koehler and Beauregard, 2006, Sniezek and Buckley, 1995, Yaniv and Choshen-Hillel, 2012, Study 3). Although the logic behind such a prediction is compelling and the published data supports it, we will suggest that there are situations in which the opposite can happen such that answers are more distant from advice in dependent vs. independent-then-revise sequences.

To understand the effects of dependence on advice taking, we consider the perspective of anchoring research (Chapman and Johnson, 1999, Mussweiler and Strack, 1999, Tversky and Kahneman, 1974), given that the advice is likely to act as an anchor for dependent participants because they see advice before they form an opinion. A critical difference between research on anchoring and on advice taking is that anchoring studies typically provide participants with anchors that are near the extremes of what people might answer independently (e.g., Jacowitz & Kahneman, 1995 used anchors from the 15th and 85th percentiles of an independent distribution). In contrast, studies of advice taking often sample advice representatively from the distribution of unaided guesses (Bonaccio & Dalal, 2006). Providing extreme anchors is helpful for detecting anchoring effects because it maximizes the probable effect size. However, in everyday advice taking situations we expect that people will rarely encounter extreme advice (because by definition, extreme advice comes from the tails of the distribution of all possible advice and is therefore less likely to occur); more often they will see advice relatively close to the center of the distribution of independent answers (but see Gino, Brooks, & Schweitzer, 2012 for an advice taking experiment using extreme advice). Central advice, in particular, can frequently match (or nearly match) what people would have said independently if they were in the independent-then-revise sequence rather than the dependent sequence. For example, in an age estimation task, if many people independently think that a target person is 63 years old, then in many cases the advice given will be age 63 and the answer that would have been estimated independently is also age 63. Precisely how often such matches occur depends on the variance and shape of the distribution of independent estimates. For instance, matches will be particularly likely when the distribution has a tall peak at the median. The anchoring literature is mute on what happens in the case of central advice (i.e., median advice), which is critical because central advice is the norm in everyday opportunities to receive advice rather than the exception. From the perspective of how well people use advice, these are important circumstances to understand.
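
As an illustration of how advice centrality can be operationalized, the sketch below draws low (15th percentile), median, and high (85th percentile) advice from a distribution of unaided estimates, in the spirit of Jacowitz and Kahneman (1995); the estimate values themselves are hypothetical.

```python
# Hypothetical distribution of independent (unaided) estimates of a target's age.
import statistics

independent_estimates = [55, 58, 60, 61, 62, 63, 63, 64, 65, 66, 68, 70, 72, 75, 80]

# statistics.quantiles with n=100 returns the 1st..99th percentile cut points.
cuts = statistics.quantiles(independent_estimates, n=100, method="inclusive")
low_advice, median_advice, high_advice = cuts[14], cuts[49], cuts[84]
print(low_advice, median_advice, high_advice)  # 15th, 50th, and 85th percentiles
```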

Although studies of anchoring have not looked at what happens when advice matches what people would have said on their own, the theory of anchoring does speak to this question, at least implicitly. The most prominent and widely-accepted anchoring theory that applies in this context is anchoring-as-accessibility, because the anchor in advice taking is provided by an external source (Epley, 2004). The theory posits that the anchor either primes anchor-consistent information in memory (Mussweiler & Strack, 1999), or more generally causes people to focus first on anchor-consistent features of the target (Chapman & Johnson, 1999). Although the anchor may be rejected as the answer, the anchor-consistent information remains active, and therefore pulls judgment in the direction of the anchor. Based solely on accessibility, one might hypothesize that a central anchor would boost evidentiary support for answers near the center of the distribution, leading to a strong anchoring effect in dependent sequences.

However, this interpretation of anchoring as solely accessibility neglects the potential role of adjustment in the judgment process. Although it has been proposed that effortful adjustment away from an anchor only applies to internally generated anchors (Epley, 2004), recent evidence with externally-generated anchors suggests that both accessibility and adjustment operate together in a multi-stage process (Simmons, LeBoeuf, & Nelson, 2010). For example, upon seeing the advice that a target person is 63 years old in an age estimation task, the judge may initially focus on consistent cues such as the target’s baldness (which illustrates selective search prompted by the anchor). Following this, the judge may consider whether the balance of remaining cues favors a higher or lower answer, and adjust in that direction. For extreme advice, the initial consideration of evidence will cause the judge to start at an extreme answer, and insufficient adjustment will likely arrive at an answer close to the extreme anchor and quite distant from what would have been said independently. However, for central advice the accessibility stage of the process will cause many judges to notice evidence that they would have noticed anyway. In other words, even if they had not seen the advice, they would have on their own started with an answer close to it.

What happens next? We posit that the judge tests the advice by implicitly asking the question that anchoring studies ask explicitly—Is the answer higher or lower than that? (Simmons et al., 2010). This internal framing of the problem will often lead the judge to identify evidence in one direction (“he has a lot of wrinkles around his eyes”), and additional evidence may then be recruited that favors answers on that same side of the advice and not the other side (“and his hair is pretty thin”). The result of this will be a push-away effect: Judges in a dependent sequence will systematically give answers that deviate from the advice. Of course, we cannot know how a specific participant would have responded in the absence of advice. Even so, we can infer that the push-away effect exists if the distribution of answers in the dependent sequence exhibits a “hole” at the location of the advice, when compared to a distribution of independent answers.

Whereas the dependent sequence is likely to prompt judges to engage in additional recruitment and search for information upon seeing advice, judges in an independent-then-revise sequence have already completed a search and reported an answer before they see the advice. Moreover, the independent-then-revise sequence now makes available an additional cue, which is the extent to which the advice agrees or disagrees with their independent answer. When the advice matches a person’s initial, independent opinion, the person is likely to infer from the observed consensus that the answer is fairly accurate (Budescu & Yu, 2007), express greater confidence in that answer, and therefore stay with it. When there is a mismatch such that the initial answers disagree, people will occasionally accept the advice to some extent (Soll & Larrick, 2009). Putting these effects together, we expect more responses close to median advice in an independent-then-revise sequence, compared to a distribution of unaided, independent judgments. In terms of the influence of advice, therefore, our discussion suggests that dependent and independent-then-revise advice sequences are likely to have different effects on the distribution of estimates. Although differences in accessibility may cause greater assimilation to advice in dependent sequences, the adjustment phase can actually lead to a “push-away” effect, leaving judgments further from advice in the dependent sequence than in the independent-then-revise sequence. Whether or not this push-away can be detected depends on a variety of factors; we will uncover one factor in the studies that follow (use of median advice) and will speculate on others in Section 7. We suggest that the push-away effect has not been observed in previous work because the necessary factors were not present – for example, previous work did not use median advice (cf. Koehler and Beauregard, 2006, Sniezek and Buckley, 1995, Yaniv and Choshen-Hillel, 2012).

We present the results of five studies addressing the question of whether people take more advice when they have first formed their own independent judgment (independent-then-revise sequence) or have no prior opinion (dependent sequence). Study 1 provides an initial demonstration of the push-away effect. Using median advice, we found that dependent estimates were less likely to be close to the advice than revised estimates (from the independent-then-revise sequence) and were, on average, further from the advice.

Next, Studies 2 and 3 explore the mechanism behind the push-away effect. We first tested whether confidence mediates the effect. Because participants in a dependent sequence cannot observe the instances where the answer they would have given independently matches or nearly matches the advice, they are not as confident in such advice as they would be otherwise. We propose that confidence determines whether people accept the advice quickly or pursue extended deliberation by asking whether the answer is higher or lower. Once they recruit initial evidence in one direction, evidentiary search is biased in that direction, leading to adjustment and a hole in the distribution of estimates. In Study 2, we found that confidence in the advice indeed mediates the push-away effect, and Study 3 deepened our understanding of the process using a verbal protocol task in which participants talked aloud as they made their decisions, providing corroborating evidence for the confidence account.

Study 4 investigates the implications of dependence for accuracy. Using a wide span of advice covering the range of what people might plausibly encounter, we found that both advice sequences are beneficial compared to the accuracy of independent judgments. Consistent with anchoring-as-accessibility, we also found that when advice is very extreme, dependent estimates are less accurate than revised estimates, because dependent estimates are pulled more strongly toward bad advice. And consistent with the push-away effect, we found that a situation in which the dependent sequence should produce highly accurate judgments (exposure to median advice) yields no gain in accuracy over the independent-then-revise sequence, because people adjust too far away from good advice.

Finally, Study 5 examines how the dependent sequence performs when implemented as a standard anchoring paradigm (varying whether the source of information was social or not and whether a comparative “higher or lower” judgment preceded the estimate). The dependent sequence gave the same results when configured in this way, suggesting that similar processes underlie judgments in the dependent sequence in both the advice taking and anchoring paradigms. In Section 7, we explore the similarity between anchoring and dependent advice taking at greater length.

In the studies that follow, we compare three types of responses: the dependent estimate in the dependent sequence, the independent estimate in the independent-then-revise sequence, and the revised estimate in the independent-then-revise sequence. These responses are compared on two dependent variables: the percentage of answers that fall close to the advice, and the absolute distance between the advice and the participant’s answer. With median advice, we expect participants in the dependent sequence to push away from the advice, whereas participants in the independent-then-revise sequence are likely to stick with their independent answer or revise toward the advice. In terms of our dependent variables, the dependent sequence should therefore show a lower percentage of answers close to the advice and a greater absolute distance between the advice and the final answer than the revised responses in the independent-then-revise sequence.
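
A minimal sketch of these two dependent variables follows, computed for hypothetical answers to a single question; the ±2-unit closeness window is an illustrative choice, not a parameter taken from the paper.

```python
def proportion_close(answers, advice, window=2):
    """Share of final answers within `window` units of the advice."""
    return sum(abs(a - advice) <= window for a in answers) / len(answers)

def mean_abs_distance(answers, advice):
    """Average absolute distance between the advice and each final answer."""
    return sum(abs(a - advice) for a in answers) / len(answers)

advice = 63
hypothetical = {
    "dependent": [55, 58, 68, 70, 71, 74],   # push-away pattern: a hole near 63
    "revised":   [62, 63, 63, 64, 66, 69],   # clustered near the advice
}
for label, answers in hypothetical.items():
    print(label, proportion_close(answers, advice), mean_abs_distance(answers, advice))
```

On this logic, a lower proportion of close answers and a higher mean absolute distance in the dependent condition than in the revised condition would indicate the push-away pattern described above.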

Following the recommendations of Simmons, Nelson, and Simonsohn (2012), for each study we report how we determined our sample size, all data exclusions, all manipulations, and all measures.

Section snippets

Study 1

Study 1 investigated the basic question of how estimates differ when people see advice first (dependent advice sequence) vs. when they give an independent answer first and then revise it (independent-then-revise advice sequence). We varied whether participants saw the dependent or the independent-then-revise advice sequence and crossed that with advice centrality—whether they saw advice that was low (15th percentile), high (85th percentile), or at the median of independent judgments.

Study 2

In the next study we sought to better understand how median advice can have opposite effects in the dependent and independent-then-revise sequences, creating a push-away pattern in the case of dependent estimates and pulling in revised answers in the independent-then-revise sequence. We propose that confidence in advice can explain this seemingly paradoxical result. In the dependent advice sequence, participants evaluate the advice near the beginning of a search process, and therefore are …

Study 3

In this study participants were asked to “talk aloud” as they made their decisions. By asking participants to talk aloud, we hoped to get a clearer indication for whether dependent participants tended to ask themselves whether the answer is higher or lower than the advice, recruit evidence in the favored direction, and adjust accordingly. We hypothesized that when giving dependent estimates, participants would be less likely to make positive remarks about the advice than when giving revised …

Study 4

Study 4 investigated the impact of advice sequence on accuracy, using a wider span of advice centrality. Previous work has reached differing conclusions about accuracy. Sniezek and Buckley (1995) found that participants were less accurate when in the dependent sequence, perhaps because they were biased toward confirming evidence, whereas Yaniv and Choshen-Hillel (2012) found that participants were more accurate in the dependent sequence because they gave more equal weight to the advice as …

Study 5

In this final study, we explore the similarities between the dependent condition and the traditional anchoring paradigm as well as test a boundary condition on the push-away effect. Our reasoning about the push-away effect posited that upon seeing advice, people in the dependent condition implicitly ask themselves a comparative question – Is the answer higher or lower than that? This is precisely the question that is asked explicitly in most anchoring studies. In Study 5, we tested the effect …

General discussion

In a series of five studies we found that individuals in a dependent advice-taking sequence gave fewer estimates close to the advice compared to individuals who first formed independent opinions before seeing advice. Whereas participants in the independent-then-revise sequence tended to move toward advice, participants in the dependent sequence gave answers that were, ironically, sometimes further away from the advice than independent answers given by people who did not see advice at all …

Acknowledgements

The authors would like to thank Francesca Gino, Julia Minson, and two anonymous reviewers as well as Lalin Anik, H. Min Bang, and the Management and Organizations seminar at Duke for their helpful comments on this work. We are also grateful to Daniel C. Feiler for his assistance collecting a portion of the data and the Fuqua School of Business for providing the funding for this work.

References (55)

  • J.A. Sniezek et al. (1995). Cueing and cognitive conflict in judge-advisor decision making. Organizational Behavior and Human Decision Processes.

  • J.B. Soll et al. (2011). Judgmental aggregation strategies depend on whether the self is involved. International Journal of Forecasting.

  • L.P. Tost et al. (2012). Power, competitiveness, and advice taking: Why the powerful don’t listen. Organizational Behavior and Human Decision Processes.

  • I. Yaniv et al. (2000). Advice taking in decision making: Egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes.

  • J.S. Armstrong. Combining forecasts.

  • J.W. Brehm (1966). A theory of psychological reactance.

  • D.V. Budescu et al. (2007). Aggregation of opinions based on correlated cues and advisors. Journal of Behavioral Decision Making.

  • G. Chapman et al. (1994). The limits of anchoring. Journal of Behavioral Decision Making.

  • G. Chapman et al. Incorporating the irrelevant: Anchors in judgments of belief and value.

  • A. Chernev (2011). Semantic anchoring in sequential evaluations of vices and virtues. Journal of Consumer Research.

  • H. Einhorn et al. (1977). Quality of group judgment. Psychological Bulletin.

  • N. Epley et al. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science.

  • N. Epley et al. (2006). The anchoring-and-adjustment heuristic. Psychological Science.

  • N. Epley. A tale of tuned decks? Anchoring as accessibility and anchoring as adjustment.

  • K.A. Ericsson et al. (1993). Protocol analysis.

  • S.W. Frederick et al. (2012). A scale distortion theory of anchoring. Journal of Experimental Psychology: General.

  • G. Gigerenzer et al. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review.