
Agreement and Updating For Self-Locating Belief

Abstract

In this paper, I argue that some plausible principles concerning which credences are rationally permissible for agents given information about one another’s epistemic and credal states have some surprising consequences for which credences an agent ought to have in light of self-locating information. I provide a framework that allows us to state these constraints and draw out these consequences precisely. I then consider and assess the prospects for rejecting these prima facie plausible principles.

Notes

  1. A caveat. Strictly speaking this may be too quick. [23] provides a convincing argument that introspection and ideal rationality are not sufficient to ensure that certain facts will be common knowledge amongst a group of agents given that such agents have communicated. (See Section 2 for a characterization of the notion of common knowledge.) Assuming that Lederman’s arguments are sound (and I’m inclined to think they are), then strictly speaking we require certain additional idealizing assumptions to ensure that such common knowledge is achieved via communication. For present purposes, though, we can ignore this potential complication. For, given such additional assumptions, all of the arguments that follow may proceed mutatis mutandis.

  2. We need not assume that W is a set of maximally specific possible worlds. Instead, we can think of each member of W as corresponding to a class of such possibilities. The members of any such class will differ only over matters between which our agents are unable to distinguish.

  3. One can think of the members of A as ordered pairs of non-time-bound individuals and times. To keep notational clutter to a minimum, however, we’ll suppress this additional structure and work with a single parameter.

  4. It is, perhaps, worth saying something briefly about what form a Bayesian theory of rational updating should take when we consider individuals who are able to entertain self-locating propositions, and how such a Bayesian picture may be represented within the framework for rational updating given above.

    According to a standard Bayesian picture, rational individuals should update their credences by conditionalization. It’s important, however, to note that there are two natural ways of understanding what it is for an individual to update by conditionalization.

    According to the first interpretation, for an individual to “update by conditionalization” between times \(t_1\) and \(t_2\) is for the individual to adjust their credences by conditionalizing their \(t_1\) credences on their epistemic state at \(t_2\). Given this understanding, it is simply not plausible that an individual who is able to entertain self-locating propositions should always update by conditionalization. For there may be some self-locating proposition (say, the proposition that it is now \(t_1\)) to which the individual rationally assigns credence 1 at \(t_1\), but in which the individual may rationally lower their credence between \(t_1\) and \(t_2\). Such a change is impossible, however, if the individual simply conditionalizes their \(t_1\) credences. See, e.g., [1] for a development of this point.

    According to a second interpretation, there is a set of probability functions that constitute the “rational priors”. For an individual to “update by conditionalization” is for there to be some rational prior \(Pr(\cdot)\) such that, at each time t in their epistemic life, they set their credences by conditionalizing \(Pr(\cdot)\) on their epistemic state at t.

    Now, if we understand the injunction to update by conditionalization in this latter manner, then we don’t face the same sorts of problems as with the former when we consider individuals who are able to entertain self-locating propositions. For if the agent’s epistemic state at \(t_1\) entails the self-locating proposition that it is now \(t_1\), but their epistemic state at \(t_2\) does not, then by setting their credences, at each time, by conditionalizing a rational prior \(Pr(\cdot)\) on their current epistemic state, they may assign this proposition credence 1 at \(t_1\) and credence less than 1 at \(t_2\).

    In what follows, then, I’ll take Bayesianism to be the view that individuals should conditionalize in the above sense. Within our framework we can think of such a view as claiming that the set of rational update functions is determined, in a particular manner, by a certain set of probability functions. This is spelled out in more detail in Sections 2 and 3.
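
    To make the contrast between the two readings concrete, here is a minimal sketch in Python. The toy worlds, times, uniform prior, and helper names are my own illustrative assumptions, not anything from the paper; the point is only that conditionalizing a fixed prior on one’s current epistemic state lets the credence in “it is now t1” drop, while conditionalizing one’s earlier credences cannot.

```python
# A minimal sketch (illustrative model, not the paper's): two readings of
# "update by conditionalization" for self-locating belief.
from itertools import product

worlds, times = ["wA", "wB"], ["t1", "t2"]
C = list(product(worlds, times))          # centered worlds: (world, time) pairs

prior = {c: 1 / len(C) for c in C}        # a uniform "rational prior" Pr

def conditionalize(pr, evidence):
    """Conditionalize the probability function pr on a set of centered worlds."""
    total = sum(pr[c] for c in evidence)
    return {c: (pr[c] / total if c in evidence else 0.0) for c in C}

def credence(pr, proposition):
    return sum(pr[c] for c in proposition)

NOW_T1 = {c for c in C if c[1] == "t1"}   # the centered proposition "it is now t1"

K_t1 = {c for c in C if c[1] == "t1"}     # at t1 the agent knows it is t1
K_t2 = set(C)                             # at t2 she has lost track of the time

# Second interpretation: conditionalize the *prior* on the current epistemic state.
cr_t1 = conditionalize(prior, K_t1)
cr_t2 = conditionalize(prior, K_t2)
print(credence(cr_t1, NOW_T1))            # 1.0 -- certainty that it is t1
print(credence(cr_t2, NOW_T1))            # 0.5 -- rationally lowered later

# First interpretation: conditionalize the *t1 credences* on the t2 state.
# Since cr_t1 is already concentrated inside K_t2, nothing changes, so
# "it is now t1" stays stuck at credence 1 -- the problem noted above.
print(credence(conditionalize(cr_t1, K_t2), NOW_T1))   # 1.0
```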

  5. One, though not the only, way of thinking about the argument that follows is as showing that some prima facie plausible principles concerning how an agent should update their credences given self-locating information have the implausible consequence that there may be evidence for uncentered propositions that is, in principle, incommunicable. For discussion related to the phenomenon of apparently incommunicable information in the context of self-locating beliefs see, for example, [6, 7, 25, 29], and [34].

  6. In what follows, I’ll assume that there is a relation of evidential support that holds between propositions, and that is independent of any particular individual’s background credences. I will, however, make only very minimal assumptions about the structure of the evidential support relation and how evidential support relates to rational credences. In particular, all I will assume is that whatever structure this relation has and however such evidential support relates to rational credences, it is at least rationally permissible for agents to assign the same credence to a proposition ϕ given that their respective epistemic states provide the same evidential support for ϕ.

    It is worth stressing that this leaves open a number of possible positions concerning the structure of the evidential support relation and how evidential support relates to rational credences. For example, it leaves open whether the evidential support relation may be represented by a conditional probability function, or by something that imposes less structure, such as a set of conditional probability functions. It also leaves open whether it is rationally required, or merely permissible, for an agent to have credences that, in some appropriately defined sense, respect the evidential support relation.

    Thus, the following (amongst other views) are all consistent with the assumptions that I’ll make: (i) the evidential support relation is given by a particular conditional probability function and it is rationally required that one have a credence in ϕ that matches the evidential support that one’s epistemic state provides ϕ, (ii) the evidential support relation is given by a particular conditional probability function and it is merely rationally permissible that one have a credence in ϕ that matches the evidential support that one’s epistemic state provides ϕ, (iii) the evidential support relation is given by a set of conditional probability functions and it is rationally required that one have a credence in ϕ that matches the probability assigned to ϕ, conditional on one’s epistemic state, by some member of this set, (iv) the evidential support relation is given by a set of conditional probability functions and it is merely rationally permissible that one have a credence in ϕ that matches the probability assigned to ϕ, conditional on one’s epistemic state, by some member of this set.

    It is also worth noting that much of the argumentation that follows would go through even if there is no relation of evidential support, given that, if there is no relation of evidential support, then trivially \(K_1\) and \(K_2\) provide the same evidential support for ϕ. For example, given this fact, there is a natural reading on which Confidant Symmetry would still hold, as a trivial matter, if there is no relation of evidential support. I will, however, leave it to the interested reader to determine, in particular cases, whether an argument may go through given this sort of trivial understanding of evidential symmetry.

  7. See, for example, [2, 27], and [4] for discussion and formal characterizations of common knowledge.

  8. As we’ll see, this isn’t true for agents whose epistemic and credal states are defined over algebras that include centered propositions.

  9. In the case in which ϕ is an uncentered proposition, we can take Factivity to say: If an agent i knows ϕ, then ϕ is true. However, if ϕ is a centered proposition, we can’t say that such a proposition is true or false simpliciter. Instead, such a proposition will be true relative to some agents and false relative to others. Our formulation, then, is meant to apply when ϕ is an uncentered proposition and when ϕ is a centered proposition. Where ϕ is a centered proposition, i will be mistaken in virtue of believing ϕ just in case ϕ is false relative to i.

  10. See [10] for the justification of these claims.

  11. Strictly speaking, we could simplify this by saying that a group of agents \(G \subseteq \textbf{A}\), whose epistemic states are representable in an agreement frame, are epistemic confidants at a world w just in case \(\text{Com}_{G}([K_{i} = K_{i}(w)])\) holds at w, for each \(i \in G\). For given that \(\text{Com}_{G}([K_{i} = K_{i}(w)])\) holds at w, for each \(i \in G\), it follows that \(\text{Com}_{G}(\text{Hom}[K_{i}(\cdot)])\) holds at w, for each \(i \in G\).
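
    For readers who would like a concrete handle on the common-knowledge operator invoked here, the following is a small partition-based sketch in the style of [2], with common knowledge computed via the meet of the agents’ partitions. The two-agent model, the world labels, and the helper names are my own illustrative assumptions, and the setting is uncentered, so it is only an analogue of the agreement-frame machinery used in the text.

```python
# A toy partition model of common knowledge (illustrative; not the paper's frames).
from itertools import chain

partitions = {                       # each agent's information partition over worlds 1-4
    "i": [{1, 2}, {3}, {4}],
    "j": [{1}, {2, 3}, {4}],
}

def cell(partition, w):
    """The cell of the partition containing world w."""
    return next(c for c in partition if w in c)

def meet_cell(w):
    """The member of the meet (finest common coarsening) containing w:
    close w's cell under every agent's partition until it stabilizes."""
    component = {w}
    while True:
        expanded = set(chain.from_iterable(
            cell(p, v) for p in partitions.values() for v in component))
        if expanded == component:
            return component
        component = expanded

def common_knowledge(event, w):
    """An event is common knowledge at w iff the meet-cell at w lies inside it."""
    return meet_cell(w) <= event

print(common_knowledge({1, 2, 3}, 1))   # True: every world reachable from 1 is in the event
print(common_knowledge({1, 2}, 1))      # False: world 3 is reachable via j's cell {2, 3}

# Analogue of the condition above, in this simple uncentered setting: the event
# "[K_i = K_i(w)]" is just i's cell at w, so the agents count as confidants at w
# only if each agent's own cell at w is common knowledge there.
print(common_knowledge(cell(partitions["i"], 1), 1))   # False: not confidants at world 1
```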

  12. For the proof of this result, see the Second Agreement Theorem in [10].

  13. Both Strong Centered Bayesianism and Strong Centered Anti-Bayesianism are controversial theses, at least when we are dealing with agents whose credal states are defined over algebras that include centered propositions. See, for example, [30] for a view that endorses Strong Centered Anti-Bayesianism. For an endorsement of Strong Centered Bayesianism see [13]. I take it, then, that it’s a mark in favor of Centered Confidants, considered as a precisification of the notion of a group of epistemic confidants, that Permissible Agreement, so construed, is compatible with each of these claims as well as their negations.

  14. We’ll assume that for each \(a \in \textbf{A}\), there is some \(\vec{x} \in \textbf{C}\) such that \(a_{x} = a\). Thus \(\textbf{A} = \{a_{x}: \; \text{for some} \; \vec{x} \in \textbf{C} \}\). Similarly, we assume that, for every \(w \in \textbf{W}\), there is some \(\vec{x} \in \textbf{C}\) such that \(w_{x} = w\). We’ll assume, further, that for each \(w \in \textbf{W}\) and each \(a \in \textbf{A}\), a exists in w just in case there is some \(\vec{x} \in \textbf{C}\) such that \(w_{x} = w\) and \(a_{x} = a\).

  15. Again, see [10] for the justification of these claims.

  16. See [10].

  17. See the Second Centered Agreement Theorem in [10].

  18. Here’s an even weaker thesis that is, perhaps, worth mentioning. Say that a probability function is subjectively center-indifferent just in case, whenever \(w_{x} = w_{q}\) and \(a_{x}\) and \(a_{q}\) are subjectively indistinguishable, \(Pr(\vec{x}) = Pr(\vec{q})\). One might, then, maintain that any rational update function should be determined by a subjectively center-indifferent probability function. (Note that this view has affinities with the position endorsed by [15], according to which a rational agent should assign the same credence to subjectively indistinguishable centered worlds that share a world coordinate. Elga’s principle is, however, neutral on the update procedure that would generate this result.) It’s worth noting, though, that the example that shows that Mandatory Center-Indifference is incompatible with Permissible Agreement and Centered Confidants also shows that the latter principles are incompatible with this weaker indifference principle. The key fact is that in the following model \(a_{1}\) and \(a_{2}\) are subjectively indistinguishable.
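
    As a quick illustration of the subjectively center-indifferent condition, here is a small sketch; the tiny two-center model and the helper names are my own assumptions, not the frame discussed in the text.

```python
# Illustrative check of subjective center-indifference (toy model, not the paper's).
def center_indifferent(pr, indistinguishable):
    """pr maps centered worlds (world, agent) to probabilities; indistinguishable
    says whether two agent coordinates are subjectively alike. Returns True iff
    centers that share a world and are indistinguishable get equal probability."""
    centers = list(pr)
    return all(
        pr[(w1, a1)] == pr[(w2, a2)]
        for (w1, a1) in centers for (w2, a2) in centers
        if w1 == w2 and indistinguishable(a1, a2)
    )

def indis(a, b):
    # Hypothetical: a1 and a2 are the subjectively indistinguishable agents.
    return {a, b} <= {"a1", "a2"}

print(center_indifferent({("w", "a1"): 0.5, ("w", "a2"): 0.5}, indis))   # True
print(center_indifferent({("w", "a1"): 0.7, ("w", "a2"): 0.3}, indis))   # False
```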

  19. Let \(I^{c}(\langle w_{1}, a_{1} \rangle) = I^{c}(\langle w_{1}, a_{2} \rangle) = I^{c}(\langle w_{1}, a_{3} \rangle) = x\) and \(I^{c}(\langle w_{2}, a_{1} \rangle) = I^{c}(\langle w_{2}, a_{2} \rangle) = I^{c}(\langle w_{2}, a_{3} \rangle) = y\). Then \(I^{c}(\phi | K_{a_{1}}) = \frac{2y}{x + 2y}\) and \(I^{c}(\phi | K_{a_{3}}) = \frac{y}{x + y}\). But, in general, we have \(\frac{2y}{x + 2y} \neq \frac{y}{x + y}\) if x, y ≠ 0. One way of seeing this is to note that if \(\frac{2y}{x + 2y} = \frac{y}{x + y}\), then we have that \(\frac{x}{x + 2y} = \frac{x}{x + y}\). But given that y ≠ 0, we have (x + 2y) ≠ (x + y). And, in general, we have \(\frac{m}{q} \neq \frac{m}{r}\) if m, q, r ≠ 0 and q ≠ r.
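
    A quick numeric spot-check of the inequality above, for a few arbitrary nonzero values of x and y (the particular values are illustrative, not from the paper):

```python
# Spot-check of footnote 19's inequality with exact rational arithmetic.
from fractions import Fraction

for x, y in [(1, 1), (1, 2), (3, 5)]:
    lhs = Fraction(2 * y, x + 2 * y)   # I^c(phi | K_{a_1})
    rhs = Fraction(y, x + y)           # I^c(phi | K_{a_3})
    print(x, y, lhs, rhs, lhs != rhs)  # the two conditional values always differ
```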

  20. This problem was introduced into the philosophical literature in [14]. Related puzzles appear earlier in [3] and [33]. The literature on this puzzle is now vast. For treatments of this and related puzzles and further references see, e.g., [8, 12, 14, 17, 28, 30, 36] and [31].

  21. For a defense of this position see [28]. A related line of thought is also advocated in [29].

  22. For a defense of this position see [14]. See also [12].

  23. To see this, note that if \(s \in \mathcal {S}\) is such that, for some subintervals [l, r), [r, q] of [0, n], if x ∈ [l, r), then \(s(x) = K_{e}^{m_{0}}\) and if x ∈ [r, q], then \(s(x) = K_{e}^{m_{1}}\), then, given Confidant Half, we have that if u is a rational update function, then, if x ∈ [l, r), then \(\textbf {u}(s)(x) = Cr_{e}^{m_{0}}(\cdot )\), such that \(Cr_{e}^{m_{0}}(\textsf {Heads}) = 1/2\), while if x ∈ [r, q], then \(\textbf {u}(s)(x) = Cr_{e}^{m_{1}}(\cdot )\), such that \(Cr_{e}^{m_{1}}(\textsf {Heads}) = 2/3\). While if \(s \in \mathcal {S}\) is such that, for some subintervals [l, r), [r, q] of [0, n], if x ∈ [l, r), then \(s(x) = K_{a}^{m_{0}}\) and if x ∈ [r, q], then \(s(x) = K_{a}^{m_{1}}\), then, given Confidant Half, we have that if u is a rational update function, then, if x ∈ [l, r), then \(\textbf {u}(s)(x) = Cr_{a}^{m_{0}}(\cdot )\), such that \(Cr_{a}^{m_{0}}(\textsf {Heads}) = 1/2\), while if x ∈ [r, q], then \(\textbf {u}(s)(x) = Cr_{a}^{m_{1}}(\cdot )\), such that \(Cr_{a}^{m_{1}}(\textsf {Heads}) = 1/2\).

  24. To see this, note that if \(s \in \mathcal {S}\) is such that, for some subintervals [l, r), [r, q] of [0, n], if x ∈ [l, r), then \(s(x) = K_{e}^{m_{0}}\) and if x ∈ [r, q], then \(s(x) = K_{e}^{m_{1}}\), then, given Confidant Third, we have that if u is a rational update function, then, if x ∈ [l, r), then \(\textbf {u}(s)(x) = Cr_{e}^{m_{0}}(\cdot )\), such that \(Cr_{e}^{m_{0}}(\textsf {Heads}) = 1/3\), while if x ∈ [r, q], then \(\textbf {u}(s)(x) = Cr_{e}^{m_{1}}(\cdot )\), such that \(Cr_{e}^{m_{1}}(\textsf {Heads}) = 1/2\). And, if \(s \in \mathcal {S}\) is such that, for some subintervals [l, r), [r, q] of [0, n], if x ∈ [l, r), then \(s(x) = K_{a}^{m_{0}}\) and if x ∈ [r, q], then \(s(x) = K_{a}^{m_{1}}\), then, given Confidant Third, we have that if u is a rational update function, then, if x ∈ [l, r), then \(\textbf {u}(s)(x) = Cr_{a}^{m_{0}}(\cdot )\), such that \(Cr_{a}^{m_{0}}(\textsf {Heads}) = 1/2\), while if x ∈ [r, q], then \(\textbf {u}(s)(x) = Cr_{a}^{m_{1}}(\cdot )\), such that \(Cr_{a}^{m_{1}}(\textsf {Heads}) = 1/2\).
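
    The piecewise assignments in this footnote and the previous one can be summarized with a small sketch; the rendering as Python functions and the sample interval endpoints are my own illustrative choices, while the credence values are those given above.

```python
# Schematic rendering of the piecewise update profiles in footnotes 23 and 24.
def confidant_half_credence(agent, x, l, r, q):
    """Esme's ("e") and Adam's ("a") credence in Heads at parameter x, per
    Confidant Half: on [l, r) the state is K^{m_0}; on [r, q] it is K^{m_1}."""
    if l <= x < r:
        return 1 / 2                              # both start at 1/2
    if r <= x <= q:
        return 2 / 3 if agent == "e" else 1 / 2   # only Esme moves, to 2/3
    raise ValueError("x outside [l, q]")

def confidant_third_credence(agent, x, l, r, q):
    """The analogous assignment per Confidant Third."""
    if l <= x < r:
        return 1 / 3 if agent == "e" else 1 / 2   # Esme starts at 1/3
    if r <= x <= q:
        return 1 / 2                              # both end at 1/2
    raise ValueError("x outside [l, q]")

# Example with l, r, q = 0, 1, 2: under Confidant Half Esme's credence jumps
# from 1/2 to 2/3, while Adam's stays fixed at 1/2 throughout.
print(confidant_half_credence("e", 0.5, 0, 1, 2), confidant_half_credence("e", 1.5, 0, 1, 2))
print(confidant_half_credence("a", 0.5, 0, 1, 2), confidant_half_credence("a", 1.5, 0, 1, 2))
```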

  25. For a defense of this position see [30] and [17]. A related account is developed in [18].

  26. A few points that are perhaps worth highlighting. First, Group Introspection only provides a necessary condition on a group of agents being epistemic confidants. The proponent of Group Introspection will, I take it, accept that the conditions imposed by Centered Confidants are also necessary for any group of agents to be epistemic confidants. They will, however, deny that these conditions are sufficient.

    Second, insofar as one is attracted to Group Introspection, there are further conditions involving first-personal higher-order knowledge of one’s own and others’ epistemic states that one may naturally be inclined to require of a group of epistemic confidants. For example, given Group Introspection, it is also natural to require of a group of epistemic confidants that each \(a \in G\) be such that, if their epistemic state is characterized by K, then, for any n iterations of \(\textsf {M}^{0}_{G} K_{a_{z}}\), \(K_{a}(\{\vec {z}: (\textsf {M}^{0}_{G} K_{a_{z}})^{n} \{\vec {x} : K_{a_{x}}(\vec {x}) = K\}\})\) holds. That is, if a set of agents G are epistemic confidants, then if some member a’s epistemic state is characterized by K, then a will know, in a first-personal way, that each member of G knows that they know that their epistemic state is characterized by K, and a will know, in a first-personal way, that each member of G knows that they know that each member of G knows that they know that their epistemic state is characterized by K, and so on.

    The points that I’ll make about Group Introspection, in what follows, also apply, mutatis mutandis, to proposals that would require additional first-personal higher-order knowledge conditions such as this. I’ll leave the details, however, to the interested reader.

  27. Of course, if the duplicate exists, then her epistemic state is given by \(\{ \langle w_{h}, e_{m_{0}} \rangle , \langle w_{t}, e_{m_{0}} \rangle , \langle w_{h}, e^{d}_{m_{0}} \rangle \}\). And Adam knows this. But if the duplicate doesn’t exist, then she doesn’t have any epistemic state. And Adam knows this.

  28. It’s worth noting, though, that endorsing Group Introspection will not suffice to block the argument that Permissible Agreement is incompatible with Beauty Half. For, letting \(G = \{e_{m_{1}}, a_{m_{1}}\}\), given the epistemic states represented in Frame 2, both Esme, upon opening her eyes, and Adam, upon Esme’s opening her eyes, are first-personally G-introspective.

  29. Of course, if Esme’s duplicate exists, then she too will be responding, at the exact same time, in the exact same manner as Esme. But this need not prevent Esme and Adam from conversing, via the computerized medium, in a perfectly natural, fluid manner.

  30. For discussion of how epistemic utility should be measured see, for example, [20, 21, 24] and [26].

  31. For discussion of the merits of the Brier score see [20] and [21]. For further discussion see [35] and [32]. For an argument for Beauty Third that appeals to the Brier score see [22].

  32. This is the function mapping members of \(\mathcal {P}(\textbf {C})\) to the value 1 just in case they are true relative to \(\vec {z}\), and to the value 0 otherwise.

  33. Or at least it does in cases in which the truth of ϕ does not depend in any constitutive way on the agent’s credence in ϕ. For cases in which such dependence arises, see [9]. Such dependence, however, is absent in the sorts of cases we’re considering, so we can safely ignore this complication.

  34. Justification: First, we’ll justify the claim that, for all x ∈ ℝ such that x ≠ 1/3, \(\mathcal {V}(\textsf {Heads}, 1/3, K_{e}^{m_{0}}) > \mathcal {V}(\textsf {Heads}, x, K_{e}^{m_{0}})\). To see this, note that \(\mathcal {V}(\textsf {Heads}, y, K_{e}^{m_{0}}) = 1 - [1/3 {\sum }_{\vec {z} \in \textbf {C}_{K_{e}^{m_{0}}}} (y - \vec {z}(\textsf {Heads}))^{2}]\). Now, since we have that \(\textbf {C}_{K_{e}^{m_{0}}} = K_{e}^{m_{0}} = \{ \langle w_{h}, e_{m_{0}} \rangle , \langle w_{t}, e_{m_{0}} \rangle , \langle w_{h}, e^{d}_{m_{0}} \rangle \}\), and that \(\textsf {Heads} = \{\vec {z}: w_{z} = w_{h}\}\), it follows that \(\mathcal {V}(\textsf {Heads}, y, K_{e}^{m_{0}}) = 1 - [1/3[(y - 1)^{2} + (y - 0)^{2} + (y - 0)^{2}]] = 1 - [1/3[3y^{2} - 2y + 1]]\). And since \(1/3[3y^{2} - 2y + 1]\) attains a unique minimum value at y = 1/3, we have that \(1 - [1/3[3y^{2} - 2y + 1]]\) attains a unique maximum value at y = 1/3.

    Next, we’ll justify the claim that, for all x ∈ ℝ such that x ≠ 1/2, \(\mathcal {V}(\textsf {Heads}, 1/2, K_{a}^{m_{0}}) > \mathcal {V}(\textsf {Heads}, x, K_{a}^{m_{0}})\). To see this, note that \(\mathcal {V}(\textsf {Heads}, y, K_{a}^{m_{0}}) = 1 - [1/2 {\sum }_{\vec {z} \in \textbf {C}_{K_{a}^{m_{0}}}} (y - \vec {z}(\textsf {Heads}))^{2}]\). Now, since we have that \(\textbf {C}_{K_{a}^{m_{0}}} = K_{a}^{m_{0}} = \{ \langle w_{h}, a_{m_{0}} \rangle , \langle w_{t}, a_{m_{0}} \rangle \}\), and that \(\textsf {Heads} = \{\vec {z}: w_{z} = w_{h}\}\), it follows that \(\mathcal {V}(\textsf {Heads}, y, K_{a}^{m_{0}}) = 1 - [1/2[(y - 1)^{2} + (y - 0)^{2}]] = 1 - [1/2[2y^{2} - 2y + 1]]\). And since \(1/2[2y^{2} - 2y + 1]\) attains a unique minimum value at y = 1/2, we have that \(1 - [1/2[2y^{2} - 2y + 1]]\) attains a unique maximum value at y = 1/2.
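
    A minimal numeric check of the two claims in this footnote. The truth-value profiles below simply mirror the sums as expanded above (Heads true at one of Esme’s three centers and at one of Adam’s two); the grid search and helper names are my own illustrative choices.

```python
# Numeric check of footnote 34: the accuracy scores peak at 1/3 and 1/2.
def V(truth_values, y):
    """1 minus the average Brier penalty of credence y in Heads across the
    centered worlds compatible with the given epistemic state."""
    return 1 - sum((y - t) ** 2 for t in truth_values) / len(truth_values)

ys = [i / 10000 for i in range(10001)]

esme = [1, 0, 0]   # Heads' truth values across the three centers in K_e^{m_0}
adam = [1, 0]      # ...and across the two centers in K_a^{m_0}

print(max(ys, key=lambda y: V(esme, y)))   # 0.3333 -- the unique maximum is at 1/3
print(max(ys, key=lambda y: V(adam, y)))   # 0.5    -- the unique maximum is at 1/2
```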

  35. It’s worth noting that while an appeal to Credal Rule Consequentialism may be used by proponents of Beauty Third to block the preceding arguments, the same is not true for the proponent of Beauty Half. For, it’s easy to show that \(\mathcal {V}(\textsf {Heads}, \cdot , K_{e}^{m_{1}})\) and \(\mathcal {V}(\textsf {Heads}, \cdot , K_{a}^{m_{1}})\) are both uniquely maximized at 1/2.

  36. For discussion of so-called epistemic bribes see: [5, 16, 19] and [11].

  37. See [9] for this sort of case.

References

  1. Arntzenius, F. (2003). Some problems for conditionalization and reflection. Journal of Philosophy, 356–370.

  2. Aumann, R. (1976). Agreeing to disagree. The Annals of Statistics, 1236–1239.

  3. Aumann, R., Hart, S., & Perry, M. (1997). The forgetful passenger. Games and Economic Behavior, 20(1), 117–120.

  4. Barwise, J. (1988). Three views of common knowledge. In Proceedings of the 2nd Conference on Theoretical Aspects of Reasoning about Knowledge (pp. 365–379). Morgan Kaufmann Publishers Inc.

  5. Berker, S. (2013). Epistemic teleology and the separateness of propositions. The Philosophical Review, 122, 337–393.

  6. Bostrom, N. (2000). Observer-relative chances in anthropic reasoning. Erkenntnis, 52, 93–108.

  7. Bostrom, N. (2002). Anthropic Bias. Routledge.

  8. Briggs, R. (2010). Putting a value on beauty. In Gendler, T. S., & Hawthorne, J. (Eds.), Oxford Studies in Epistemology (Vol. 3). Oxford University Press.

  9. Caie, M. (2013). Rational probabilistic incoherence. The Philosophical Review, 122(4), 527–575.

  10. Caie, M. (2016). Agreement theorems for self-locating belief. The Review of Symbolic Logic, 9(2), 380–407.

  11. Carr, J. (forthcoming). Accuracy or coherence? Philosophy and Phenomenological Research.

  12. Dorr, C. (2002). Sleeping Beauty: In defence of Elga. Analysis, 62(276), 292–296.

  13. Dorr, C., & Arntzenius, F. (2017). Self-locating priors and cosmological measures. In Chamcham, K., Barrow, J, Saunders, S., & Silk, J. (Eds.), The Philosophy of Cosmology. Cambridge University Press.

  14. Elga, A. (2000). Self-locating belief and the Sleeping Beauty problem. Analysis, 60(266), 143–147.

  15. Elga, A. (2004). Defeating Dr. Evil with self-locating belief. Philosophy and Phenomenological Research, 69(2), 383–396.

  16. Greaves, H. (2013). Epistemic decision theory. Mind, 122(488), 915–952.

  17. Halpern, J. (2004). Sleeping Beauty reconsidered: Conditioning and reflection in asynchronous systems. In Proceedings of the Twentieth Conference on Uncertainty in AI (pp. 226–234).

  18. Halpern, J., & Tuttle, M. (1993). Knowledge, probability and adversaries. Journal of the ACM, 40(4), 917–960.

  19. Jenkins, C. S. (2007). Entitlement and rationality. Synthese, 157, 25–45.

  20. Joyce, J. (1998). A non-pragmatic vindication of probabilism. Philosophy of Science, 65, 575–603.

  21. Joyce, J. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In Huber, F., & Schmidt-Petri, C. (Eds.), Degrees of Belief. Synthese Library.

  22. Kierland, B., & Monton, B. (2005). Minimizing inaccuracy for self-locating beliefs. Philosophy and Phenomenological Research, 70(2), 384–395.

  23. Lederman, H. (forthcoming). Uncommon knowledge. Mind.

  24. Leitgeb, H., & Pettigrew, R. (2010). An objective justification of bayesianism I: Measuring inaccuracy. Philosophy of Science, 77.

  25. Leslie, J. (1997). Observer-relative chances and the doomsday argument. Inquiry, 40, 427–436.

  26. Levinstein, B. (2012). Leitgeb and Pettigrew on accuracy and updating. Philosophy of Science, 79(3), 413–424.

  27. Lewis, D. (1969). Convention. Cambridge: Cambridge University Press.

  28. Lewis, D. (2001). Sleeping Beauty: Reply to Elga. Analysis, 61(271), 171–176.

  29. Lewis, D. (2004). How many lives has Schrödinger’s cat?. Australasian Journal of Philosophy, 82(1), 3–22.

  30. Meacham, C. (2008). Sleeping Beauty and the dynamics of de se beliefs. Philosophical Studies, 138(2), 245–269.

  31. Moss, S. (2012). Updating as communication. Philosophy and Phenomenological Research, 85(2), 225–248.

  32. Pettigrew, R. (2016). Accuracy and the Laws of Credence. Oxford: Oxford University Press.

  33. Piccione, M., & Rubinstein, A. (1997). On the interpretation of decision problems with imperfect recall. Games and Economic Behavior, 20(1), 3–24.

  34. Pittard, J. (2015). When beauties disagree: Why halfers should affirm robust perspectivalism. Oxford Studies in Epistemology, 5.

  35. Predd, J., Kulkarni, S., Seiringer, R., Lieb, E. H., Osherson, D., & Poor, H. V. (2009). Probabilistic coherence and proper scoring rules. IEEE Transactions on Information Theory, 55(10), 4786–4792.

  36. Titelbaum, M. (2008). The relevance of self-locating beliefs. The Philosophical Review, 117(4), 555–606.

Acknowledgments

Thanks to the audiences at NYU, the Bristol-Groningen Conference in Formal Epistemology and Decisions, Games and Logic 2016, and to Harvey Lederman and an anonymous referee for this journal for helpful feedback on earlier drafts of this work.

Cite this article

Caie, M. Agreement and Updating For Self-Locating Belief. J Philos Logic 47, 513–547 (2018). https://doi.org/10.1007/s10992-017-9437-y
