Disagreement and Epistemic Utility-Based Compromise

Abstract

Epistemic utility theory seeks to establish epistemic norms by combining principles from decision theory and social choice theory with ways of determining the epistemic utility of agents’ attitudes. Recently, Moss (Mind, 120(480), 1053–69, 2011) has applied this strategy to the problem of finding epistemic compromises between disagreeing agents. She shows that the norm “form compromises by maximizing average expected epistemic utility”, when applied to agents who share the same proper epistemic utility function, yields the result that agents must form compromises by splitting the difference between their credence functions. However, this “split the difference” norm conflicts with conditionalization, since applications of the two norms don’t commute. A common response in the literature seems to be to abandon the procedure of splitting the difference in favor of compromise strategies that avoid non-commutativity. This would also entail abandoning Moss’ norm. I explore whether a different response is feasible. If agents can use epistemic utility-based considerations to agree on an order in which they will apply the two norms, they might be able to avoid diachronic incoherence. I show that this response can’t save Moss’ norm, because the agreements it generates concerning the order of compromising and updating are not stable over time, and hence cannot prevent diachronic incoherence. I also show that a variant of Moss’ norm, which requires that the weights given to each agent’s epistemic utility change in a way that ensures commutativity, cannot be justified on epistemological grounds.
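
To make the non-commutativity claim concrete, here is a minimal numerical sketch (my illustration, with invented credence functions that are not taken from the paper): two agents who compromise by equal-weight averaging and update by conditionalization end up with different joint credences depending on which step they perform first.

```python
# Two hypothetical credence functions over the four state descriptions of p and q.
agent_a = {"pq": 0.3, "p~q": 0.3, "~pq": 0.3, "~p~q": 0.1}
agent_b = {"pq": 0.1, "p~q": 0.4, "~pq": 0.2, "~p~q": 0.3}

def average(c1, c2):
    """Equal-weight linear average ("splitting the difference") of two credence functions."""
    return {s: (c1[s] + c2[s]) / 2 for s in c1}

def credence_in_q_given_p(c):
    """Conditionalize c on p and read off the resulting credence in q."""
    return c["pq"] / (c["pq"] + c["p~q"])

# Compromise first, then update on p:
print(credence_in_q_given_p(average(agent_a, agent_b)))                       # 0.2 / 0.55 ≈ 0.364

# Update on p first, then compromise:
print((credence_in_q_given_p(agent_a) + credence_in_q_given_p(agent_b)) / 2)  # (0.5 + 0.2) / 2 = 0.35
```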

Notes

  1. The problem I am interested in here is how the agents should form a degree of belief that represents their joint opinion. Answering this question does not automatically settle whether, and how, the agents should change their individual credences when they learn about the disagreement.

  2. The Brier score is a proper scoring rule that can be used to measure the epistemic utility of an agent’s credence, or credence function, at a given world. It is essentially a way of measuring how far away the agent’s credences are from the truth – the larger the difference between an agent’s credence in some proposition p and the truth value of p at that world, the higher the inaccuracy, or epistemic disutility, of that credence. More formally: suppose c is a credence function defined over a set of propositions F, and the function \(I_{w}\) indicates the truth values of the propositions in F at world w by mapping them onto {0,1}. Then the following function gives us the Brier score of c at w: \(Brier(c,w)=\sum \limits _{A\in F} (c(A)-I_{w}(A))^{2}\)
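
    As a further illustration (mine, not part of the paper), the Brier score just defined can be computed as follows; the propositions and numbers are hypothetical:

```python
def brier_score(credences, truth_values):
    """Brier score of a credence function at a world.

    credences: dict mapping propositions to credences in [0, 1]
    truth_values: dict mapping the same propositions to their truth values (0 or 1) at the world
    """
    return sum((credences[a] - truth_values[a]) ** 2 for a in credences)

# Hypothetical example: credence 0.6 in p and 0.3 in q, evaluated at a world where p is true and q is false.
print(brier_score({"p": 0.6, "q": 0.3}, {"p": 1, "q": 0}))  # (0.6 - 1)^2 + (0.3 - 0)^2 = 0.25
```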

  3. To be precise, she says that it is not mandatory that both agents’ attitudes get weighted equally, but doing so is the default strategy unless one of the agents has more expertise on the issue in question.

  4. As an anonymous referee helpfully points out, Moss’ norm is not an exact analogue of consequentialism: standard consequentialism calculates which action has the highest expected utility based on a single agent’s credences, whereas Moss’ norm factors in each agent’s expected epistemic utilities, and hence more than one agent’s credences.

  5. See Russell, Hawthorne, & Buchak, Groupthink, (manuscript), for an example of such a Dutch book.

  6. See, for example: Russell, Hawthorne & Buchak, Groupthink, (manuscript); [1, 2, 7, 8], and others.

  7. This is of course not the only possible reaction to the data about non-commutativity. An alternative response would be, for instance, to reject conditionalization and/or to embrace diachronic incoherence. Usually, conditionalization is defined and defended as a norm that applies to a particular agent’s credences. Hence, one might argue that the joint credences of groups are importantly different from the credences of a single agent, and that conditionalization is a norm governing the latter, but not the former. However, it seems plausible to me that if a group wants to find a joint opinion in order to act as a single, unified epistemic agent, then the same norms apply to their joint opinion that govern an individual agent’s credences. Hence, it seems difficult to reject conditionalization for joint opinions while accepting it for individual opinions. Moreover, given the variety and strength of the arguments for conditionalization, rejecting the rule altogether is generally perceived as unattractive, and it would be deeply revisionary.

  8. This is because a coherent agent will always expect her own credences to have a lower Brier score than any alternative credences.
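
    In the single-proposition case this is easy to verify (a standard propriety check that the paper does not spell out): by the lights of a credence c in a proposition, the expected Brier penalty of adopting credence x in that proposition is

    $$c(x-1)^{2}+(1-c)(x-0)^{2},$$

    which has derivative 2(x - c) with respect to x and is therefore uniquely minimized at x = c. So a coherent agent expects any alternative credence to incur a higher Brier penalty than her own.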

  9. Instead of using the increases in Brier score, we could also just work with the expected Brier scores for the different values of q. Since the expected Brier score of the agent’s own updated credence enters both comparisons as the same baseline, the ranking of the two strategies comes out exactly the same.

  10. Let’s walk through the calculations for Alma, to make things more concrete (numbers are rounded): If Alma learns that p, her new credence in q will be 0.416. She will then expect the following Brier score for her credence in q:

    $$0.416(0.416-1)^{2}+0.584(0.416-0)^{2} = 0.2429$$

    She can compare this to the Brier scores she expects for the compromise credences the two strategies yield for q.

    Compromise first:

    $$0.416(0.607-1)^{2}+0.584(0.607-0)^{2} = 0.2794$$

    Expected Brier penalty increase: 0.0365 (=x)

    Update first:

    $$0.416(0.583-1)^{2}+0.584(0.583-0)^{2} = 0.2708$$

    Expected Brier penalty increase: 0.0279 (=m)

    If Alma learns that ~p, her new credence in q will be 0.625. She will then expect the following Brier score for her credence in q:

    $$0.625(0.625-1)^{2}+0.375(0.625-0)^{2} = 0.2344$$

    She can compare this to the Brier scores she expects for the compromise credences the two strategies yield for q.

    Compromise first:

    $$0.625(0.583-1)^{2}+0.375(0.583-0)^{2} = 0.2361$$

    Expected Brier penalty increase: 0.00176 (=y)

    Update first:

    $$0.625(0.563-1)^{2}+0.375(0.563-0)^{2} = 0.2383$$

    Expected Brier penalty increase: 0.00391 (=n)

    Now, Alma can calculate which strategy she expects to have a lower increase in expected Brier penalty:

    $$\text{Compromise first:}\quad \text{Cr}(p)\,x+\text{Cr}(\sim p)\,y = 0.6\times 0.0365+0.4\times 0.00176=0.0226$$
    $$\text{Update first:}\quad \text{Cr}(p)\,m+\text{Cr}(\sim p)\,n = 0.6\times 0.0279+0.4\times 0.00391=0.0183$$

    As we can see, Alma expects updating first to have a lower increase in expected Brier score, so this is the strategy she prefers based on individual expected epistemic utility calculations. The calculations for Berta proceed in the same fashion, and show that she prefers compromising first:

    $$\text{Compromise first:}\quad 0.8\times 0.02045+0.2\times 0.00689=0.01774$$
    $$\text{Update first:}\quad 0.8\times 0.02789+0.2\times 0.00391=0.02309$$
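
    The calculations in this note can be reproduced with the following short Python sketch (my illustration, not part of the paper). It takes the rounded credences reported above as inputs, so its outputs match the figures above only up to rounding (e.g. it yields roughly 0.0038 rather than 0.00391 for n):

```python
def expected_brier(cr_q, x):
    """Expected Brier score, by the lights of credence cr_q in q, of holding credence x in q."""
    return cr_q * (x - 1) ** 2 + (1 - cr_q) * x ** 2

def penalty_increase(cr_q, compromise):
    """Expected increase in Brier penalty from replacing credence cr_q in q with the compromise credence."""
    return expected_brier(cr_q, compromise) - expected_brier(cr_q, cr_q)

cr_p = 0.6  # Alma's credence in p

# Rounded figures from above: Alma's updated credence in q, and the compromise
# credence in q that each strategy yields, for each way the evidence could go.
x = penalty_increase(0.416, 0.607)  # learn p,  compromise first  -> ~0.0365
m = penalty_increase(0.416, 0.583)  # learn p,  update first      -> ~0.0279
y = penalty_increase(0.625, 0.583)  # learn ~p, compromise first  -> ~0.0018
n = penalty_increase(0.625, 0.563)  # learn ~p, update first      -> ~0.0038

print(cr_p * x + (1 - cr_p) * y)  # compromise first: ~0.0226
print(cr_p * m + (1 - cr_p) * n)  # update first:     ~0.0183 (lower, so Alma prefers updating first)
```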
  11. A similar point was made by Leon Leontyev in a talk entitled “Conciliating with Conditionalisation”, given at the Australian National University in September 2013.

  12. An anonymous referee is more optimistic than I am about the changing-weights version of Moss’ norm. The referee suggests that in light of how difficult it is to find an appropriate compromising strategy, this proposal might be among the more promising available options. S/he suggests that if we individuate areas of expertise very narrowly, the quick shifts in weight that can happen on this view might be justifiable. While I agree that finding a promising compromise rule is difficult, I still think it is a problem that the changing-weights version of Moss’ norm is only sensitive to how close an agent’s credence was to the truth value of the proposition in question, but not to how justified the agent’s credence was.

References

  1. Fitelson, B., & Jehle, D. (2009). What is the “equal weight view”? Episteme, 6(3), 280–293.

  2. Genest, C., & Zidek, J. (1986). Combining probability distributions: a critique and annotated bibliography. Statistical Science, 1(1), 114–135.

  3. Greaves, H., & Wallace, D. (2006). Justifying conditionalization: conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–632.

  4. Moss, S. (2011). Scoring rules and epistemic compromise. Mind, 120(480), 1053–69.

  5. Pettigrew, R. (2013). Epistemic utility and norms for credences. Philosophy Compass (forthcoming).

  6. Raiffa, H. (1968). Decision analysis. Reading: Addison-Wesley.

  7. Wagner, C. (1985). On the formal properties of weighted averaging as a method of aggregation. Synthese, 62(1), 97–108.

  8. Wilson, A. (2010). Disagreement, equal weight, and commutativity. Philosophical Studies, 149(3), 321–326.


Acknowledgments

A substantial part of this paper was written while I was a postdoctoral fellow at the Australian National University, thanks to the Australian Research Council Grant for the Discovery Project ‘The Objects of Probabilities’, DP 1097075. For helpful comments and discussion of the paper, I would like to thank Jacob Ross, Kenny Easwaran, Brian Talbot, Dom Bailey, Leon Leontyev, Daniel Nolan, Paul Bartha, Yoaav Isaacs, Sharon Berry, and the audiences at the Central APA Meeting in 2013, the AAP Meeting in 2013, and the University of Sydney.

Author information

Correspondence to Julia Staffel.

Cite this article

Staffel, J. Disagreement and Epistemic Utility-Based Compromise. J Philos Logic 44, 273–286 (2015). https://doi.org/10.1007/s10992-014-9318-6
