
Abstract

This chapter provides an axiomatic foundation for a particular type of preference shock model, called the random discounting representation, in which a decision maker believes that her discount factors change randomly over time. For this purpose, we formulate an infinite horizon extension of Dekel, Lipman, and Rustichini (Econometrica 69:891–934, 2001), and identify the behavior that reduces all subjective uncertainties to those about future discount factors. We also show uniqueness of the subjective belief about discount factors. Moreover, a behavioral comparison in terms of preference for flexibility characterizes the condition that one decision maker’s subjective belief second-order stochastically dominates the other’s. Finally, the resulting model is applied to a consumption-savings problem.

The original article first appeared in the Journal of Economic Theory 144:1015–1053, 2009. A newly written addendum has been added to this book chapter.


Notes

  1. See Mehra and Sah (2002, Section 1.1, pp. 871–873) for more examples of fluctuations in subjective parameters.

  2. The set \(\mathcal{K}(\varDelta (C \times \mathcal{Z}))\) is endowed with the Hausdorff metric. Details are relegated to Appendix section “Hausdorff Metric”.

  3. DLR consider preference over \(\mathcal{K}(\varDelta (C))\) with a finite set C.

  4. A sophisticated DM, who is fully aware of the time-inconsistency caused by hyperbolic discounting, may be viewed as a limiting case of their model, where the DM never exercises self-control at the moment of choice.

  5. The DM may care about the timing of resolution of risk and prefer earlier or later resolution of multistage lotteries. Such a distinction is examined in Kreps and Porteus (1978). Epstein et al. (2007) argue against Timing Indifference and provide a model with nonlinear future preferences.

  6. Their Nondegeneracy axiom requires the existence of menus x, y with \(x \succ y\) and \(x \subset y\). That is, this axiom captures preference for commitment: a DM may prefer a smaller menu.

  7. A sophisticated DM with hyperbolic discounting exhibits preference for commitment rather than for flexibility. Thus, such a DM is excluded by this axiom.

  8. Dekel et al. (2007) fill a gap in DLR surrounding this representation result.

  9. See Gul and Pesendorfer (2004, p. 125, footnote 7) for more details.

  10. To prevent arbitrary manipulations, DLR (p. 912) suggest that probability measures can be identified if some aspect of the ex post utility functions is state-independent. Such a condition is satisfied in our model.

  11. Strategic Rationality implies Monotonicity. Indeed, assume \(y \subset x\). Arguing by contradiction, suppose \(y \succ x\). Strategic Rationality implies \(x = x \cup y \sim y \succ x\), which is a contradiction.

  12. In the literature on ambiguity in the Savage-type model, Epstein (1999) and Ghirardato and Marinacci (2002) adopt closely related conditions to capture comparative attitudes toward ambiguity aversion. They compare an arbitrary act with an unambiguous act instead of comparing an arbitrary menu with a commitment menu.

  13. Notice that continuity is not redundant, because a concave function is guaranteed to be continuous only in the interior of its domain. In the original definition by Rothschild and Stiglitz (1970), continuity is not imposed.

  14. Their argument for this equivalence works even when continuity is imposed on v.

  15. Since u is CRRA, \(\phi_{i}\) is independent of s.

  16. They fix the argument of DLR (Lemma 12, p. 929).

  17. In their model, C is assumed to be finite.

  18. In the supplement to Krishna and Sadowski (2014), Krishna and Sadowski (2013) show a similar result for a preference \(\succapprox\) on \(\mathcal{Z}\simeq \mathcal{K}(\varDelta (C \times \mathcal{Z}))\).

References

  • Ahn DS (2008) Ambiguity without a state space. Rev Econ Stud 75:3–28

  • Atkeson A, Lucas RE Jr (1992) On efficient distribution with private information. Rev Econ Stud 59:427–453

  • Becker R (1980) On the long run steady state in a simple equilibrium with heterogeneous households. Quart J Econ 90:375–382

  • Becker GS, Mulligan CB (1997) The endogenous determination of time preference. Quart J Econ 112:729–758

  • Bertsekas DP, Shreve SE (1978) Stochastic optimal control: the discrete time case. Academic, New York

  • Blanchard OJ (1985) Debt, deficits, and finite horizons. J Polit Econ 93:223–247

  • Chatterjee S, Corbae D, Nakajima M, Rios-Rull JV (2007) A quantitative theory of unsecured consumer credit with risk of default. Econometrica 75:1525–1589

  • Dekel E, Lipman B, Rustichini A (2001) Representing preferences with a unique subjective state space. Econometrica 69:891–934

  • Dekel E, Lipman B, Rustichini A, Sarver T (2007) Representing preferences with a unique subjective state space: a corrigendum. Econometrica 75:591–600

  • Dutta J, Michel P (1998) The distribution of wealth with imperfect altruism. J Econ Theory 82:379–404

  • Epstein LG (1999) A definition of uncertainty aversion. Rev Econ Stud 66:579–608

  • Epstein LG, Zin S (1989) Substitution, risk aversion, and the temporal behavior of consumption and asset returns: a theoretical framework. Econometrica 57:937–969

  • Epstein LG, Marinacci M, Seo K (2007) Coarse contingencies and ambiguity. Theor Econ 2:355–394

  • Farhi E, Werning I (2007) Inequality and social discounting. J Polit Econ 115:365–402

  • Ghirardato P, Marinacci M (2002) Ambiguity made precise: a comparative foundation. J Econ Theory 102:251–289

  • Goldman SM (1974) Flexibility and the demand for money. J Econ Theory 9:203–222

  • Gul F, Pesendorfer W (2001) Temptation and self-control. Econometrica 69:1403–1435

  • Gul F, Pesendorfer W (2004) Self-control and the theory of consumption. Econometrica 72:119–158

  • Higashi Y, Hyogo K, Takeoka N (2009) Subjective random discounting and intertemporal choice. J Econ Theory 144:1015–1053

  • Higashi Y, Hyogo K, Takeoka N (2014) Stochastic endogenous time preference. J Math Econ 51:77–92

  • Higashi Y, Hyogo K, Takeoka N, Tanaka H (2014) Comparative impatience under random discounting. Working paper

  • Karni E, Zilcha I (2000) Saving behavior in stationary equilibrium with random discounting. Econ Theory 15:551–564

  • Koopmans TC (1964) On flexibility of future preference. In: Shelley MW, Bryan GL (eds) Human judgments and optimality, chap 13. Academic, New York

  • Kraus A, Sagi JS (2006) Inter-temporal preference for flexibility and risky choice. J Math Econ 42:698–709

  • Kreps DM (1979) A representation theorem for preference for flexibility. Econometrica 47:565–578

  • Kreps DM (1992) Static choice and unforeseen contingencies. In: Dasgupta P, Gale D, Hart O, Maskin E (eds) Economic analysis of markets and games: essays in honor of Frank Hahn. MIT, Cambridge, pp 259–281

  • Kreps DM, Porteus EL (1978) Temporal resolution of uncertainty and dynamic choice theory. Econometrica 46:185–200

  • Krishna RV, Sadowski P (2013) Supplement to dynamic preference for flexibility: unobservable persistent taste shocks. Working paper

  • Krishna RV, Sadowski P (2014) Dynamic preference for flexibility. Econometrica 82:655–703

  • Krusell P, Smith A (1998) Income and wealth heterogeneity in the macroeconomy. J Polit Econ 106:867–896

  • Levhari D, Srinivasan TN (1969) Optimal savings under uncertainty. Rev Econ Stud 36:153–163

  • Mehra R, Sah R (2002) Mood fluctuations, projection bias, and volatility of equity prices. J Econ Dyn Control 26:869–887

  • Rothschild M, Stiglitz JE (1970) Increasing risk I: a definition. J Econ Theory 2:225–243

  • Rothschild M, Stiglitz JE (1971) Increasing risk II: its economic consequences. J Econ Theory 3:66–84

  • Rustichini A (2002) Preference for flexibility in infinite horizon problems. Econ Theory 20:677–702

  • Salanié F, Treich N (2006) Over-savings and hyperbolic discounting. Eur Econ Rev 50:1557–1570

  • Sandmo A (1970) The effect of uncertainty on saving decisions. Rev Econ Stud 37:353–360

  • Sarver T (2008) Anticipating regret: why fewer options may be better. Econometrica 76:263–305

  • Takeoka N (2007) Subjective probability over a subjective decision tree. J Econ Theory 136:536–571

  • Yaari ME (1965) Uncertain lifetime, life insurance, and the theory of the consumer. Rev Econ Stud 32:137–150


Acknowledgements

We would like to thank Larry Epstein for his illuminating guidance and constant support. We also thank Árpád Ábrahám, Larry Blume, Atsushi Kajii, Tomoyuki Nakajima, Jean-Marc Tallon, Katsutoshi Wakai, and the audiences of the 2005 JEA Spring Meeting, CETC 2006, RUD 2006, and seminars at Hosei, Keio, Kobe, Kyoto, Osaka, and Shiga Universities for useful comments. Detailed suggestions by an anonymous associate editor and two referees led to substantial improvements. Takeoka gratefully acknowledges financial support from a Grant-in-Aid for Scientific Research and MEXT.OPENRESEARCH (2004–2008). All remaining errors are the authors’ responsibility.

Author information

Correspondence to Norio Takeoka.


Appendices

Hausdorff Metric

Let X be a compact metric space with a metric d. Let \(\mathcal{K}(X)\) be the set of all non-empty compact subsets of X. For x ∈ X and \(A,B \in \mathcal{K}(X)\), let

$$\displaystyle{ d(x,B) \equiv \min _{x^{{\prime}}\in B}d(x,x^{{\prime}}),\ d(A,B) \equiv \max _{ x\in A}d(x,B). }$$

For all \(A,B \in \mathcal{K}(X)\), define the Hausdorff metric d H by

$$\displaystyle{ d_{H}(A,B) \equiv \max [d(A,B),d(B,A)]. }$$

Then \(d_{H}\) satisfies (i) \(d_{H}(A,B) \geq 0\), (ii) \(A = B \Leftrightarrow d_{H}(A,B) = 0\), (iii) \(d_{H}(A,B) = d_{H}(B,A)\), and (iv) \(d_{H}(A,B) \leq d_{H}(A,C) + d_{H}(C,B)\). Moreover, \(\mathcal{K}(X)\) is compact under the Hausdorff metric.
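As a concrete illustration (added here; it is not part of the chapter), the two-step definition above is straightforward to compute for finite sets. A minimal Python sketch, with arbitrary point sets in the plane and the Euclidean base metric chosen purely for the example:

```python
# Minimal numerical illustration of the Hausdorff metric d_H on finite subsets
# of a metric space (here: points in the plane with the Euclidean metric d).
# The sets A and B are arbitrary examples, not objects from the chapter.
import math

def d(p, q):
    """Base metric d on X = R^2 (Euclidean distance)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_point_to_set(x, B):
    """d(x, B) = min over x' in B of d(x, x')."""
    return min(d(x, b) for b in B)

def d_directed(A, B):
    """d(A, B) = max over x in A of d(x, B)."""
    return max(d_point_to_set(a, B) for a in A)

def d_H(A, B):
    """d_H(A, B) = max[d(A, B), d(B, A)]: symmetric and zero iff A = B."""
    return max(d_directed(A, B), d_directed(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.1), (1.0, 0.0), (2.0, 0.0)]
print(d_H(A, B))  # 1.0, driven by the point (2, 0) of B, which is far from A
```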

Perfect Commitment Menus

We follow the construction of menus by Gul and Pesendorfer (2004, Appendix A) (hereafter GP) and define the set \(\mathcal{L}\) of perfect commitment menus. Then we show that \(\mathcal{L}\) is homeomorphic to \(\varDelta (C \times \mathcal{L})\). That is, a perfect commitment menu can be viewed as a multistage lottery.

Let C denote the outcome space (consumption set), which is a compact metric space. We define the set of one-period consumption problems as \(\mathcal{Z}_{1} \equiv \mathcal{K}\left (\varDelta \left (C\right )\right )\). For t > 1, define the set of t-period consumption problems inductively as \(\mathcal{Z}_{t} \equiv \mathcal{K}\left (\varDelta \left (C \times \mathcal{Z}_{t-1}\right )\right )\). Let \(\mathcal{Z}^{{\ast}}\equiv \varPi _{t=1}^{\infty }\mathcal{Z}_{t}\). A menu is a consistent element of \(\mathcal{Z}^{{\ast}}\).

Formally, define \(G_{1}: C \times \mathcal{Z}_{1} \rightarrow C\), \(F_{1}:\varDelta (C \times \mathcal{Z}_{1}) \rightarrow \varDelta (C)\), and \(\overline{F}_{1}: \mathcal{K}(\varDelta (C \times \mathcal{Z}_{1})) \rightarrow \mathcal{K}(\varDelta (C))\) as follows:

$$\displaystyle\begin{array}{rcl} G_{1}(c,z_{1}) \equiv c,\ F_{1}(\mu _{2})(E) \equiv \mu _{2}(G_{1}^{-1}(E))\ \text{ and }\ \overline{F}_{ 1}(z_{2}) \equiv \{ F_{1}(\mu _{2})\,\vert \,\mu _{2} \in z_{2}\},& & {}\\ \end{array}$$

for E in the Borel \(\sigma\)-algebra of C. For t > 1, we define inductively \(G_{t}: C \times \mathcal{Z}_{t} \rightarrow C \times \mathcal{Z}_{t-1}\), \(F_{t}:\varDelta (C \times \mathcal{Z}_{t}) \rightarrow \varDelta (C \times \mathcal{Z}_{t-1})\), and \(\overline{F}_{t}: \mathcal{K}(\varDelta (C \times \mathcal{Z}_{t})) \rightarrow \mathcal{K}(\varDelta (C \times \mathcal{Z}_{t-1}))\) by

$$\displaystyle\begin{array}{rcl} & & G_{t}(c,z_{t}) \equiv (c,\overline{F}_{t-1}(z_{t})),\ F_{t}(\mu _{t+1})(E) \equiv \mu _{t+1}(G_{t}^{-1}(E)),\ \text{ and } {}\\ & & \overline{F}_{t}(z_{t+1}) \equiv \{ F_{t}(\mu _{t+1})\,\vert \,\mu _{t+1} \in z_{t+1}\}, {}\\ \end{array}$$

for E in the Borel \(\sigma\)-algebra of \(C \times \mathcal{Z}_{t-1}\). Finally, we say that \(\{z_{t}\}_{t=1}^{\infty }\in \mathcal{Z}^{{\ast}}\) is consistent if \(z_{t-1} = \overline{F}_{t-1}(z_{t})\) for every t > 1.

We identify a singleton menu with its only element by slightly abusing notation. Let \(\mathcal{L}_{1} \equiv \varDelta (C) \subset \mathcal{Z}_{1}\). An element of \(\mathcal{L}_{1}\) is a one-period “commitment” consumption problem. For t > 1, we define \(\mathcal{L}_{t}\) inductively as \(\mathcal{L}_{t} \equiv \varDelta (C \times \mathcal{L}_{t-1}) \subset \mathcal{Z}_{t}\). An element of \(\mathcal{L}_{t}\) is a t-period “commitment” consumption problem. Let \(\mathcal{L}^{{\ast}}\equiv \varPi _{t=1}^{\infty }\mathcal{L}_{t}\). We define the set of perfect commitment menus as \(\mathcal{L}\equiv \mathcal{Z}\cap \mathcal{L}^{{\ast}}\). Thus, an element in \(\mathcal{L}\) is a menu in which the DM is committed in every period.

Proposition 2

\(\mathcal{L}\) is homeomorphic to \(\varDelta (C \times \mathcal{L})\) .

Proof

GP construct a homeomorphism \(f: \mathcal{Z}\rightarrow \mathcal{K}(\varDelta (C \times \mathcal{Z}))\). Note that \(\mathcal{L}\) is compact since \(\mathcal{L}_{t}\) is compact for every t. It is sufficient to check that \(f(\mathcal{L}) =\varDelta (C \times \mathcal{L})\).

Definition 8

Let \(Y _{1} \equiv \hat{ L}_{1} \equiv \varDelta (C)\) and for t > 1 let \(Y _{t} \equiv \varDelta (C \times \varPi _{n=1}^{t-1}\mathcal{Z}_{n})\) and \(\hat{L}_{t} \equiv \varDelta (C \times \varPi _{n=1}^{t-1}\mathcal{L}_{n})\). Define \(Y ^{\mathit{kc}} \equiv \{\{\hat{\mu _{t}}\} \in \varPi _{t=1}^{\infty }Y _{t}\,\vert \,\text{marg}_{C\times \varPi _{n=1}^{t-1}\mathcal{Z}_{n}}\hat{\mu }_{t+1} =\hat{\mu } _{t}\}\). Let \(\hat{L}^{\mathit{kc}} = Y ^{\mathit{kc}} \cap \varPi _{t=1}^{\infty }\hat{L}_{t}\).

GP show that for every \(\{\hat{\mu }_{t}\} \in Y ^{\mathit{kc}}\) there exists a unique \(\mu \in \varDelta (C \times \mathcal{Z}^{{\ast}})\) such that \(\text{marg}_{C}\mu =\hat{\mu } _{1}\) and \(\text{marg}_{C\times \varPi _{n=1}^{t-1}\mathcal{Z}_{n}}\mu =\hat{\mu } _{t}\) for every t > 1. Then they define \(\psi: Y ^{\mathit{kc}} \rightarrow \varDelta (C \times \mathcal{Z}^{{\ast}})\) as the mapping that associates this μ with the corresponding \(\{\hat{\mu }_{t}\}\).

Step 1::

\(\psi (\hat{L}^{\mathit{kc}}) =\varDelta (C \times \mathcal{L}^{{\ast}})\).

Note that, for a sequence \(\{\hat{l}_{t}\} \in \hat{ L}^{\mathit{kc}}\), it holds that

$$\displaystyle{\text{marg}_{C\times \varPi _{n=1}^{t-1}\mathcal{L}_{n}}\hat{l}_{t+1} = \text{marg}_{C\times \varPi _{n=1}^{t-1}\mathcal{Z}_{n}}\hat{l}_{t+1} =\hat{ l}_{t}.}$$

The same argument as in Lemma 3 of GP shows that there exists a homeomorphism \(\psi ^{{\prime}}:\hat{ L}^{\mathit{kc}} \rightarrow \varDelta (C \times \mathcal{L}^{{\ast}})\) such that \(\text{marg}_{C}\psi ^{{\prime}}(\{\hat{l}_{t}\}) =\hat{ l}_{1}\) and \(\text{marg}_{C\times \varPi _{n=1}^{t-1}\mathcal{L}_{n}}\psi ^{{\prime}}(\{\hat{l}_{t}\}) =\hat{ l}_{t}\). The uniqueness part of Kolmogorov’s Existence Theorem implies that \(\psi ^{{\prime}} =\psi \vert _{\hat{L}^{\mathit{kc}}}\). Step 1 thus follows.

Definition 9

Let \(D_{t} \equiv \{ (z_{1},\ldots,z_{t}) \in \varPi _{n=1}^{t}\mathcal{Z}_{n}\,\vert \,z_{k} = \overline{F}_{k}(z_{k+1}),\,\forall k = 1,\ldots,t - 1\}\) and \(D_{t}^{L} \equiv D_{t} \cap \varPi _{n=1}^{t}\mathcal{L}_{n}\). Define \(M^{c} \equiv \{\{\mu _{t}\} \in \varDelta (C) \times \varPi _{t=1}^{\infty }\varDelta (C \times \mathcal{Z}_{t})\,\vert \,F_{t}(\mu _{t+1}) =\mu _{t},\ \forall t \geq 1\}\). Let \(Y ^{c} \equiv \{\{\hat{\mu }_{t}\} \in Y ^{\mathit{kc}}\,\vert \,\hat{\mu }_{t+1}(C \times D_{t}) = 1,\forall t \geq 1\}\) and \(\hat{L}^{c} \equiv Y ^{c} \cap \hat{ L}^{\mathit{kc}}\).

Note that \(\mathcal{L} = M^{c} \cap \mathcal{L}^{{\ast}}\). GP show that for every \(\{\mu _{t}\} \in M^{c}\) there exists a unique \(\{\hat{\mu }_{t}\} \in Y ^{c}\) such that \(\hat{\mu }_{1} =\mu _{1}\) and \(\text{marg}_{C\times \mathcal{Z}_{t-1}}\hat{\mu }_{t} =\mu _{t}\) for every t ≥ 2. Then they define \(\phi: M^{c} \rightarrow Y ^{c}\) as the mapping that associates this \(\{\mu _{t}\}\) with the corresponding \(\{\hat{\mu }_{t}\}\).

Step 2::

\(\phi (\mathcal{L}) =\hat{ L}^{c}\).

It is straightforward from the definition of ϕ that \(\phi (\mathcal{L}) \supset \hat{ L}^{c}\). We show \(\phi (\mathcal{L}) \subset \hat{ L}^{c}\) or \(\phi (\mathcal{L}) \subset \varPi _{t=1}^{\infty }\hat{L}_{t}\) by mathematical induction. Take \(\{l_{t}\} \in \mathcal{L}\) and let \(\{\hat{\mu }_{t}\} \equiv \phi (\{l_{t}\}) \in Y ^{c}\). By definition, \(\hat{\mu }_{1} = l_{1} \in \varDelta (C) =\hat{ L}_{1}\) and \(\hat{\mu }_{2} = \text{marg}_{C\times \mathcal{Z}_{1}}\hat{\mu }_{2} = l_{2} \in \varDelta (C \times \mathcal{L}_{1}) =\hat{ L}_{2}\).

Suppose that \(\hat{\mu }_{k} \in \hat{ L}_{k}\) for every \(k = 1,2,\ldots,t\). Since \(\{\hat{\mu }_{t}\}\) is a Kolmogorov consistent sequence, \(\text{marg}_{C\times \varPi _{n=1}^{t-1}\mathcal{Z}_{n}}\hat{\mu }_{t+1} =\hat{\mu } _{t} \in \hat{ L}_{t}\). Thus, \(\hat{\mu }_{t+1} \in \varDelta (C \times \varPi _{n=1}^{t-1}\mathcal{L}_{n} \times \mathcal{Z}_{t})\). The definition of ϕ implies that \(\text{marg}_{C\times \mathcal{Z}_{t}}\hat{\mu }_{t+1} = l_{t+1} \in \varDelta (C \times \mathcal{L}_{t})\). Therefore, \(\hat{\mu }_{t+1} \in \hat{ L}_{t+1} =\varDelta (C \times \varPi _{n=1}^{t}\mathcal{L}_{n})\).

Step 3::

\(\psi (\hat{L}^{c}) =\{ l \in \varDelta (C \times \mathcal{L}^{{\ast}})\,\vert \,l(C \times \mathcal{L}) = 1\}\).

Since \(\hat{L}^{c} =\{\{\hat{ l}_{t}\} \in \hat{ L}^{\mathit{kc}}\,\vert \,\hat{l}_{t+1}(C \times D_{t}^{L}) = 1,\forall t \geq 1\}\), Step 3 follows from the same argument as in Lemma 5 of GP.

GP define \(\xi: \mathcal{Z}\rightarrow \mathcal{K}(M^{c})\) as \(\xi (z) \equiv \{\{\mu _{t}\} \in M^{c}\,\vert \,\mu _{t} \in z_{t},\ \forall t \geq 1\}\). Note that \(\xi\) is the identity on \(\mathcal{L}\). Finally, the homeomorphism \(f: \mathcal{Z}\rightarrow \mathcal{K}(\varDelta (C \times \mathcal{Z}))\) is given by \(f(z) =\psi \circ \phi (\xi (z))\). Then the above steps imply that \(f(\mathcal{L}) =\psi \circ \phi (\xi (\mathcal{L})) =\psi \circ \phi (\mathcal{L}) =\varDelta (C \times \mathcal{L})\). □

Proof of Theorem 1

4.1 Necessity

Necessity of the axioms is routine. We show that for any (u, μ) there exists U satisfying the functional equation.

Let \(\mathcal{U}\) be the Banach space of all real-valued continuous functions on \(\mathcal{Z}\) with the sup-norm metric. Define the operator \(T: \mathcal{U}\rightarrow \mathcal{U}\) by

$$\displaystyle{ T(U)(x) \equiv \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right )\mathrm{d}\mu (\alpha ). }$$

Since T(U) is continuous, the operator T is well-defined. To show T is a contraction mapping, it suffices to verify that (i) T is monotonic, that is, T(U) ≥ T(V ) whenever U ≥ V, and (ii) T satisfies the discounting property, that is, there exists δ ∈ [0, 1) such that for any U and \(c \in \mathbb{R}\), T(U + c) = T(U) +δ c.

Step 1::

T is monotonic.

Take any \(U,V \in \mathcal{U}\) with U ≥ V. Since \(\int U(z)\,\mathrm{d}l_{z} \geq \int V (z)\,\mathrm{d}l_{z}\) for all \(l \in \varDelta (C \times \mathcal{Z})\), we have

$$\displaystyle{\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right ) \geq \max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}V (z)\,\mathrm{d}l_{z}\right )}$$

for all x and α, and hence T(U)(x) ≥ T(V )(x) as desired.

Step 2::

T satisfies the discounting property.

Let \(\delta \equiv \bar{\alpha }\). By assumption, δ ∈ [0, 1). For any \(U \in \mathcal{U}\) and \(c \in \mathbb{R}\),

$$\displaystyle\begin{array}{rcl} T(U + c)(x)& =& \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}(U(z) + c)\,\mathrm{d}l_{z}\right )\mathrm{d}\mu (\alpha ) {}\\ & =& \int _{[0,1]}\left [\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right ) +\alpha c\right ]\mathrm{d}\mu (\alpha ) {}\\ & =& T(U)(x) +\bar{\alpha }c = T(U)(x) +\delta c. {}\\ \end{array}$$

By Steps 1 and 2, T is a contraction mapping. Thus, the contraction mapping fixed point theorem (see Bertsekas and Shreve 1978, p. 55) ensures that there exists a unique \(U^{{\ast}}\in \mathcal{U}\) satisfying \(U^{{\ast}} = T(U^{{\ast}})\). This \(U^{{\ast}}\) satisfies Eq. (20.1).
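As a purely illustrative aside (not part of the original proof), the fixed point \(U^{{\ast}} = T(U^{{\ast}})\) can be computed by successive approximation once the model is specialized. The Python sketch below does this for a toy consumption-savings instance; the wealth grid, gross return, CRRA utility, and two-point distribution over discount factors are all assumptions made only for the example:

```python
# A minimal numerical sketch (not from the chapter) of computing the fixed
# point U* = T(U*) by successive approximation. The model is specialized to a
# toy consumption-savings problem in which wealth w is split between
# consumption and savings after the discount factor alpha is realized. The
# wealth grid, gross return R, CRRA utility u, and the two-point distribution
# mu over discount factors are all illustrative assumptions.
import numpy as np

wealth = np.linspace(0.1, 10.0, 100)       # wealth grid
R = 1.02                                   # gross return on savings
alphas = np.array([0.90, 0.98])            # support of mu
probs = np.array([0.5, 0.5])               # mu assigns 1/2 to each discount factor
u = np.sqrt                                # CRRA utility (relative risk aversion 1/2)

def T(V):
    """One application of the operator: at each wealth level the DM first
    learns alpha, then chooses savings s on the grid to maximize
    (1 - alpha) * u(w - s) + alpha * V(R * s); finally, average over mu."""
    TV = np.empty_like(V)
    for i, w in enumerate(wealth):
        s = wealth[wealth <= w]                      # feasible savings levels
        cont = np.interp(R * s, wealth, V)           # continuation value V(R s)
        TV[i] = sum(p * np.max((1 - a) * u(w - s) + a * cont)
                    for a, p in zip(alphas, probs))
    return TV

# Successive approximation: the discounting property holds here with
# delta = mean of mu = 0.94 < 1, so T is a contraction and the iterates
# converge to the unique fixed point.
V = np.zeros_like(wealth)
for _ in range(2000):
    V_new = T(V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
```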

4.2 Sufficiency

Lemma 1

Commitment Independence, Stationarity, and Timing Indifference imply Independence, that is,

$$\displaystyle{ x \succ y\ \Rightarrow \ \lambda x + (1-\lambda )z \succ \lambda y + (1-\lambda )z, }$$

for all \(x,y,z \in \mathcal{Z}\) and \(\lambda \in (0,1)\) .

Proof

Let \(x \succ y\). From Stationarity, \(\{(c,x)\} \succ \{ (c,y)\}\). For any \(\lambda \in (0,1)\), Commitment Independence implies \(\{\lambda \circ (c,x) + (1-\lambda ) \circ (c,z)\} \succ \{\lambda \circ (c,y) + (1-\lambda ) \circ (c,z)\}\). From Timing Indifference, \(\{(c,\lambda x + (1-\lambda )z)\} \succ \{ (c,\lambda y + (1-\lambda )z)\}\). Again, from Stationarity, \(\lambda x + (1-\lambda )z \succ \lambda y + (1-\lambda )z\). □ 

Let \(\overline{\mathrm{co}}(x)\) denote the closed convex hull of x. As in DLR, Order, Continuity, and Independence imply \(x \sim \overline{\mathrm{co}}(x)\). Hence we can restrict attention to the sub-domain

$$\displaystyle{ \mathcal{Z}_{1} \equiv \{ x \in \mathcal{Z}\vert x = \overline{\mathrm{co}}(x)\}. }$$

Since \(\mathcal{Z}_{1}\) is a mixture space, Order, Continuity, and Independence ensure that \(\succapprox\) can be represented by a mixture linear function \(U: \mathcal{Z}_{1} \rightarrow \mathbb{R}\). Nondegeneracy implies U is not constant. Since \(C \times \mathcal{Z}\) is compact, there exist a maximal and a minimal lottery \(\bar{l}\), \(\underline{l} \in \varDelta (C \times \mathcal{Z})\) with respect to U({⋅ }). Without loss of generality, assume \(U(\{\bar{l}\}) = 1\) and \(U(\{\underline{l}\}) = 0\).

Define \(u:\varDelta (C) \rightarrow \mathbb{R}\) and \(W:\varDelta (\mathcal{Z}) \rightarrow \mathbb{R}\) by

$$\displaystyle{ u(l_{c}) \equiv U(\{l_{c} \otimes \underline{ l}_{z}\}),\ W(l_{z}) \equiv U(\{\underline{l}_{c} \otimes l_{z}\}), }$$

where \(\underline{l}_{c}\) and \(\underline{l}_{z}\) are the marginal distributions of \(\underline{l}\) on C and \(\mathcal{Z}\), respectively.

Lemma 2

  1. (i)

    For any \(l_{c},l_{c}^{{\prime}}\in \varDelta (C)\) and \(l_{z},l_{z}^{{\prime}}\in \varDelta (\mathcal{Z})\) ,

    $$\displaystyle\begin{array}{rcl} & & u(l_{c}) \geq u(l_{c}^{{\prime}})\ \Leftrightarrow \ U(\{l_{c} \otimes l_{z}\}) \geq U(\{l_{c}^{{\prime}}\otimes l_{z}\}), {}\\ & & W(l_{z}) \geq W(l_{z}^{{\prime}})\ \Leftrightarrow \ U(\{l_{c} \otimes l_{z}\}) \geq U(\{l_{c} \otimes l_{z}^{{\prime}}\}). {}\\ \end{array}$$
  2. (ii)

    u and W are mixture linear.

Proof

  1. (i)

    Consider the restriction of U on \(\varDelta (C \times \mathcal{Z})\). Let \(U(c,z) \equiv U(\{(c,z)\})\). First we will claim that there exist \(\bar{u}: C \rightarrow \mathbb{R}\) and \(\bar{W}: \mathcal{Z}\rightarrow \mathbb{R}\) such that \(U(c,z) =\bar{ u}(c) +\bar{ W}(z)\). Since

    $$\displaystyle{O\left (\frac{1} {2} \circ (c,z) + \frac{1} {2} \circ (c^{{\prime}},z^{{\prime}})\right ) = O\left (\frac{1} {2} \circ (c^{{\prime}},z) + \frac{1} {2} \circ (c,z^{{\prime}})\right ),}$$

    Marginal Dominance implies

    $$\displaystyle{U\left (\left \{\frac{1} {2} \circ (c,z) + \frac{1} {2} \circ (c^{{\prime}},z^{{\prime}})\right \}\right ) = U\left (\left \{\frac{1} {2} \circ (c^{{\prime}},z) + \frac{1} {2} \circ (c,z^{{\prime}})\right \}\right ).}$$

    Mixture linearity of U implies

    $$\displaystyle{U(c,z) + U(c^{{\prime}},z^{{\prime}}) = U(c^{{\prime}},z) + U(c,z^{{\prime}}).}$$

    Define \(\bar{u}(c) \equiv U(c,z^{{\prime}})\) and \(\bar{W}(z) \equiv U(c^{{\prime}},z) - U(c^{{\prime}},z^{{\prime}})\) for an arbitrarily fixed \((c^{{\prime}},z^{{\prime}})\). Then, \(U(c,z) =\bar{ u}(c) +\bar{ W}(z)\).

By the above claim, for any \(l \in \varDelta (C \times \mathcal{Z})\),

$$\displaystyle\begin{array}{rcl} U(\{l\})& =& \int U(c,z)\,\mathrm{d}l(c,z) =\int (\bar{u}(c) +\bar{ W}(z))\,\mathrm{d}l(c,z) =\int \bar{ u}(c)\,\mathrm{d}l_{c}(c) {}\\ & & +\int \bar{W}(z)\,\mathrm{d}l_{z}(z). {}\\ \end{array}$$

Thus,

$$\displaystyle\begin{array}{rcl} u(l_{c}) \geq u(l_{c}^{{\prime}})& \Leftrightarrow & U(\{l_{ c} \otimes \underline{ l}_{z}\}) \geq U(\{l_{c}^{{\prime}}\otimes \underline{ l}_{ z}\}) {}\\ & \Leftrightarrow & \int \bar{u}(c)\,\mathrm{d}l_{c}(c) +\int \bar{ W}(z)\,\mathrm{d}\underline{l}_{z}(z) \geq \int \bar{ u}(c)\,\mathrm{d}l_{c}^{{\prime}}(c) +\int \bar{ W}(z)\,\mathrm{d}\underline{l}_{ z}(z) {}\\ & \Leftrightarrow & \int \bar{u}(c)\,\mathrm{d}l_{c}(c) \geq \int \bar{ u}(c)\,\mathrm{d}l_{c}^{{\prime}}(c) {}\\ & \Leftrightarrow & \int \bar{u}(c)\,\mathrm{d}l_{c}(c) +\int \bar{ W}(z)\,\mathrm{d}l_{z}(z) \geq \int \bar{ u}(c)\,\mathrm{d}l_{c}^{{\prime}}(c) +\int \bar{ W}(z)\,\mathrm{d}l_{ z}(z) {}\\ & \Leftrightarrow & U(\{l_{c} \otimes l_{z}\}) \geq U(\{l_{c}^{{\prime}}\otimes l_{ z}\}). {}\\ \end{array}$$

The symmetric argument can be applied to W.

  1. (ii)

    We want to show \(u(\lambda l_{c} + (1-\lambda )l_{c}^{{\prime}}) =\lambda u(l_{c}) + (1-\lambda )u(l_{c}^{{\prime}})\) for any \(l_{c},l_{c}^{{\prime}}\) and \(\lambda \in [0,1]\). Since

    $$\displaystyle{O((\lambda l_{c} + (1-\lambda )l_{c}^{{\prime}}) \otimes l_{ z}) = O(\lambda l_{c} \otimes l_{z} + (1-\lambda )l_{c}^{{\prime}}\otimes l_{ z}),}$$

    Marginal Dominance implies

    $$\displaystyle{U(\{(\lambda l_{c} + (1-\lambda )l_{c}^{{\prime}}) \otimes l_{ z}\}) = U(\{\lambda l_{c} \otimes l_{z} + (1-\lambda )l_{c}^{{\prime}}\otimes l_{ z}\}).}$$

    Since U({⋅ }) is mixture linear,

    $$\displaystyle\begin{array}{rcl} u(\lambda l_{c} + (1-\lambda )l_{c}^{{\prime}})& =& U(\{(\lambda l_{c} + (1-\lambda )l_{c}^{{\prime}}) \otimes \underline{ l}_{z}\}) {}\\ & =& U(\{\lambda l_{c} \otimes \underline{ l}_{z} + (1-\lambda )l_{c}^{{\prime}}\otimes \underline{ l}_{z}\}) {}\\ & =& \lambda U(\{l_{c} \otimes \underline{ l}_{z}\}) + (1-\lambda )U(\{l_{c}^{{\prime}}\otimes \underline{ l}_{z}\}) {}\\ & =& \lambda u(l_{c}) + (1-\lambda )u(l_{c}^{{\prime}}). {}\\ \end{array}$$

    By the symmetric argument, we can show that W is mixture linear. □ 

Next we show several properties of the Marginal Dominance operator.

Lemma 3

  1. (i)

    For any \(x \in \mathcal{Z}\) , \(O(x) \in \mathcal{Z}\) .

  2. (ii)

    If x is convex, so is O(x).

  3. (iii)

    \(O: \mathcal{Z}\rightarrow \mathcal{Z}\) is Hausdorff continuous.

Proof

  1. (i)

    Since \(\varDelta (C \times \mathcal{Z})\) is compact, it suffices to show that O(x) is a closed subset of \(\varDelta (C \times \mathcal{Z})\). Let \(l^{n} \rightarrow l\) with \(l^{n} \in O(x)\). By definition, there exists a sequence \(\{\bar{l}^{n}\}\) with \(\bar{l}^{n} \in x\) such that \(\{\bar{l}_{c}^{n} \otimes l_{z}^{n}\} \succapprox \{ l_{c}^{n} \otimes l_{z}^{n}\}\) and \(\{l_{c}^{n} \otimes \bar{ l}_{z}^{n}\} \succapprox \{ l_{c}^{n} \otimes l_{z}^{n}\}\). Since x is compact, without loss of generality, we can assume that \(\{\bar{l}^{n}\}\) converges to a limit \(\bar{l} \in x\). Since \(l_{c}^{n} \rightarrow l_{c}\), \(l_{z}^{n} \rightarrow l_{z}\), \(\bar{l}_{c}^{n} \rightarrow \bar{ l}_{c}\), and \(\bar{l}_{z}^{n} \rightarrow \bar{ l}_{z}\), Continuity implies \(\{\bar{l}_{c} \otimes l_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\) and \(\{l_{c} \otimes \bar{ l}_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\). Hence, l ∈ O(x).

  2. (ii)

    Take \(l,l^{{\prime}}\in O(x)\) and \(\lambda \in [0,1]\). Let \(l^{\lambda } \equiv \lambda l + (1-\lambda )l^{{\prime}}\). We want to show \(l^{\lambda } \in O(x)\). By definition, there exist \(\bar{l},\bar{l}^{{\prime}}\in x\) such that \(\{\bar{l}_{c} \otimes l_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\), \(\{l_{c} \otimes \bar{ l}_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\), \(\{\bar{l}_{c}^{{\prime}}\otimes l_{z}^{{\prime}}\}\succapprox \{ l_{c}^{{\prime}}\otimes l_{z}^{{\prime}}\}\), and \(\{l_{c}^{{\prime}}\otimes \bar{ l}_{z}^{{\prime}}\}\succapprox \{ l_{c}^{{\prime}}\otimes l_{z}^{{\prime}}\}\). Let \(\bar{l}^{\lambda } \equiv \lambda \bar{ l} + (1-\lambda )\bar{l}^{{\prime}}\in x\). From Commitment Independence,

    $$\displaystyle{\{\lambda \bar{l}_{c} \otimes l_{z} + (1-\lambda )\bar{l}_{c}^{{\prime}}\otimes l_{ z}^{{\prime}}\}\succapprox \{\lambda l_{ c} \otimes l_{z} + (1-\lambda )l_{c}^{{\prime}}\otimes l_{ z}^{{\prime}}\}.}$$

    Since \(O(l_{c} \otimes l_{z}) = O(l)\), Marginal Dominance implies \(\{l_{c} \otimes l_{z}\} \sim \{ l\}\). For the same reason, \(\{l_{c}^{{\prime}}\otimes l_{z}^{{\prime}}\}\sim \{ l^{{\prime}}\}\), \(\{l_{c}^{\lambda } \otimes l_{z}^{\lambda }\} \sim \{ l^{\lambda }\}\), and

    $$\displaystyle{\{(\lambda \bar{l}_{c} + (1-\lambda )\bar{l}_{c}^{{\prime}}) \otimes (\lambda l_{ z} + (1-\lambda )l_{z}^{{\prime}})\} \sim \{\lambda \bar{ l}_{ c} \otimes l_{z} + (1-\lambda )\bar{l}_{c}^{{\prime}}\otimes l_{ z}^{{\prime}}\}.}$$

    Thus,

    $$\displaystyle\begin{array}{rcl} \{\bar{l}_{c}^{\lambda } \otimes l_{ z}^{\lambda }\}& =& \{(\lambda \bar{l} + (1-\lambda )\bar{l}^{{\prime}})_{ c} \otimes (\lambda \bar{l} + (1-\lambda )\bar{l}^{{\prime}})_{ z}\} {}\\ & =& \{(\lambda \bar{l}_{c} + (1-\lambda )\bar{l}_{c}^{{\prime}}) \otimes (\lambda l_{ z} + (1-\lambda )l_{z}^{{\prime}})\} {}\\ & \sim &\{\lambda \bar{l}_{c} \otimes l_{z} + (1-\lambda )\bar{l}_{c}^{{\prime}}\otimes l_{ z}^{{\prime}}\}\succapprox \{\lambda l_{ c} \otimes l_{z} + (1-\lambda )l_{c}^{{\prime}}\otimes l_{ z}^{{\prime}}\} {}\\ &\sim &\{\lambda l + (1-\lambda )l^{{\prime}}\}\sim \{ l_{ c}^{\lambda } \otimes l_{ z}^{\lambda }\}. {}\\ \end{array}$$

    Similarly, \(\{l_{c}^{\lambda } \otimes \bar{ l}_{z}^{\lambda }\} \succapprox \{ l_{c}^{\lambda } \otimes l_{z}^{\lambda }\}\). Hence, \(l^{\lambda } \in O(x)\).

  3. (iii)

    Let \(x^{n} \rightarrow x\). We want to show \(O(x^{n}) \rightarrow O(x)\). By contradiction, suppose otherwise. Then, there exists a neighborhood \(\mathcal{U}\) of O(x) such that \(O(x^{\ell})\notin \mathcal{U}\) for infinitely many \(\ell\). Let \(\{x^{\ell}\}_{\ell=1}^{\infty }\) be the corresponding subsequence of \(\{x^{n}\}_{n=1}^{\infty }\). Since \(x^{n} \rightarrow x\), \(\{x^{\ell}\}_{\ell=1}^{\infty }\) also converges to x. Since \(\{O(x^{\ell})\}_{\ell=1}^{\infty }\) is a sequence in a compact metric space \(\mathcal{Z}\), there exists a convergent subsequence \(\{O(x^{m})\}_{m=1}^{\infty }\) with a limit \(y\neq O(x)\). As a result, we now have \(x^{m} \rightarrow x\) and \(O(x^{m}) \rightarrow y\). In the following argument, we will show that y = O(x), which is a contradiction.

Step 1::

\(O(x) \subset y\).

Take any l ∈ O(x). Then, there exists \(\bar{l} \in x\) such that \(\{\bar{l}_{c} \otimes l_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\) and \(\{l_{c} \otimes \bar{ l}_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\). Since \(x^{m} \rightarrow x\), we can find a sequence \(\{\bar{l}^{m}\}_{m=1}^{\infty }\) such that \(\bar{l}^{m} \in x^{m}\) and \(\bar{l}^{m} \rightarrow \bar{ l}\).

Now we will construct a sequence \(\{l^{m}\}_{m=1}^{\infty }\) with \(l^{m} \in O(x^{m})\) satisfying \(l^{m} \rightarrow l\). Let \(l_{c}^{-}\in \varDelta (C)\) be a worst element with respect to u and \(l_{z}^{-}\in \varDelta (\mathcal{Z})\) be a worst element with respect to W. For all sufficiently large k, let \(B_{k}(l)\) be the 1∕k-neighborhood of l with respect to the weak convergence topology. There exists \(0 <\lambda ^{k} <1\) such that \(l^{k} \equiv \lambda ^{k}l + (1 -\lambda ^{k})(l_{c}^{-}\otimes l_{z}^{-}) \in B_{k}(l)\). By construction, \(l^{k} \rightarrow l\). Since u is mixture linear from Lemma 2 (ii), \(u(l_{c})> u(l_{c}^{k})\) if \(u(l_{c})> u(l_{c}^{-})\), and \(u(l_{c}) = u(l_{c}^{k})\) if \(u(l_{c}) = u(l_{c}^{-})\). In the former case, since \(\bar{l}^{m} \rightarrow \bar{ l}\), by Continuity, there exists \(m_{1}^{k}\) such that for all \(m \geq m_{1}^{k}\), \(u(\bar{l}_{c}^{m})> u(l_{c}^{k})\). In the latter case, for all m, \(u(\bar{l}_{c}^{m}) \geq u(l_{c}^{-}) = u(l_{c}^{k})\). In both cases, we have \(u(\bar{l}_{c}^{m}) \geq u(l_{c}^{k})\) for all \(m \geq m_{1}^{k}\). Since W is mixture linear from Lemma 2 (ii), by the same argument, there exists \(m_{2}^{k}\) such that for all \(m \geq m_{2}^{k}\), \(W(\bar{l}_{z}^{m}) \geq W(l_{z}^{k})\). Therefore, for all \(m \geq m^{k} \equiv \max [m_{1}^{k},m_{2}^{k}]\), \(u(\bar{l}_{c}^{m}) \geq u(l_{c}^{k})\) and \(W(\bar{l}_{z}^{m}) \geq W(l_{z}^{k})\), that is, \(\{\bar{l}_{c}^{m} \otimes l_{z}^{k}\} \succapprox \{ l_{c}^{k} \otimes l_{z}^{k}\}\) and \(\{l_{c}^{k} \otimes \bar{ l}_{z}^{m}\} \succapprox \{ l_{c}^{k} \otimes l_{z}^{k}\}\). Hence, we have \(l^{k} \in O(\bar{l}^{m}) \subset O(x^{m})\) for all \(m \geq m^{k}\). Since \(m^{k+1} \geq m^{k}\) for all k, define \(l^{m} \equiv l^{k}\) for all m satisfying \(m^{k} \leq m <m^{k+1}\). Then, \(\{l^{m}\}_{m=1}^{\infty }\) is the desired sequence.

Since \(l^{m} \rightarrow l\) and \(O(x^{m}) \rightarrow y\) with \(l^{m} \in O(x^{m})\), we have l ∈ y. Thus, \(O(x) \subset y\).

Step 2::

\(y \subset O(x)\).

Take any l ∈ y. Since \(O(x^{m}) \rightarrow y\), we can find a sequence \(l^{m} \in O(x^{m})\) with \(l^{m} \rightarrow l\). By definition, there is \(\bar{l}^{m} \in x^{m}\) such that \(\{\bar{l}_{c}^{m} \otimes l_{z}^{m}\} \succapprox \{ l_{c}^{m} \otimes l_{z}^{m}\}\) and \(\{l_{c}^{m} \otimes \bar{ l}_{z}^{m}\} \succapprox \{ l_{c}^{m} \otimes l_{z}^{m}\}\). Since \(\varDelta (C \times \mathcal{Z})\) is compact, we can assume \(\{\bar{l}^{m}\}\) converges to a limit \(\bar{l} \in \varDelta (C \times \mathcal{Z})\). Since \(\bar{l}^{m} \rightarrow \bar{ l}\) and \(x^{m} \rightarrow x\) with \(\bar{l}^{m} \in x^{m}\), we have \(\bar{l} \in x\). From Continuity, \(\{\bar{l}_{c} \otimes l_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\) and \(\{l_{c} \otimes \bar{ l}_{z}\} \succapprox \{ l_{c} \otimes l_{z}\}\). Thus, l ∈ O(x). □

From Marginal Dominance, \(x \sim O(x)\). Hence we can restrict attention to the sub-domain,

$$\displaystyle{ \mathcal{Z}_{2} \equiv \{ x \in \mathcal{Z}_{1}\vert x = O(x)\}. }$$

From Lemma 3 (iii), \(\mathcal{Z}_{2}\) is compact. Moreover, Lemma 3 (i) and (ii) imply that any \(x \in \mathcal{Z}_{2}\) is compact and convex.

For each \(x \in \mathcal{Z}_{2}\) and α ∈ [0, 1], define

$$\displaystyle{ \sigma _{x}(\alpha ) \equiv \max _{l\in x}{\Bigl ((1-\alpha )u(l_{c}) +\alpha W(l_{z})\Bigr )}. }$$
(20.14)

Let \(\mathcal{C}([0,1])\) be the set of real-valued continuous functions on [0, 1] with the sup-norm. The above formulation (20.14) defines the mapping \(\sigma: \mathcal{Z}_{2} \rightarrow \mathcal{C}([0,1])\).
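To make formula (20.14) concrete (an illustration added here, not part of the chapter), note that for a finite menu described only through the utility pairs \((u(l_{c}),W(l_{z}))\) of its lotteries, \(\sigma _{x}\) is the upper envelope of finitely many affine functions of α. A minimal Python sketch with arbitrary numerical values:

```python
# Illustrative evaluation of sigma_x(alpha) from Eq. (20.14) for a finite menu.
# The menu is represented only through the pairs (u(l_c), W(l_z)) of its
# lotteries; the three pairs below are arbitrary numbers for illustration.
import numpy as np

menu_uw = np.array([[1.0, 0.2],    # lottery favouring current consumption
                    [0.3, 0.9],    # lottery favouring the continuation value
                    [0.6, 0.6]])   # an intermediate lottery

def sigma(menu_uw, alpha):
    """sigma_x(alpha) = max over l in x of (1 - alpha) u(l_c) + alpha W(l_z)."""
    return np.max((1 - alpha) * menu_uw[:, 0] + alpha * menu_uw[:, 1])

print([round(sigma(menu_uw, a), 3) for a in np.linspace(0.0, 1.0, 5)])
# As a pointwise maximum of affine functions of alpha, sigma_x is convex and
# continuous on [0, 1], so it is indeed an element of C([0, 1]).
```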

Lemma 4

  1. (i)

    \(\sigma\) is continuous.

  2. (ii)

    For all \(x,y \in \mathcal{Z}_{2}\) and \(\lambda \in [0,1]\) , \(\lambda \sigma _{x} + (1-\lambda )\sigma _{y} =\sigma _{O(\lambda x+(1-\lambda )y)}\) .

  3. (iii)

    \(\sigma\) is injective.

Proof

  1. (i)

    Let

    $$\displaystyle{V (x) \equiv \{ (u,w)\vert u = u(l_{c}),w = W(l_{z}),l \in x\} \subset \mathbb{R}^{2}.}$$

    Since u and W are continuous and \(C \times \mathcal{Z}\) is compact, there exists a compact set \(L \subset \mathbb{R}^{2}\) such that \(V (x) \subset L\) for all x. Hence, V (x) is also compact and, moreover, convex because u and W are mixture linear. Let \(\mathcal{K}(L)\) be the set of non-empty compact subsets of L with the Hausdorff metric.

Step 1::

The map \(V: \mathcal{Z}_{2} \ni x\mapsto V (x) \in \mathcal{K}(L)\) is Hausdorff continuous.

Take a sequence \(x^{n} \rightarrow x\) with \(x^{n},x \in \mathcal{Z}_{2}\). We want to show that \(V (x^{n}) \rightarrow V (x)\). By contradiction, suppose otherwise. Then, there exists a neighborhood \(\mathcal{U}\) of V (x) such that \(V (x^{m})\notin \mathcal{U}\) for infinitely many m. Let \(\{x^{m}\}_{m=1}^{\infty }\) be the corresponding subsequence of \(\{x^{n}\}_{n=1}^{\infty }\). Since \(x^{n} \rightarrow x\), \(\{x^{m}\}_{m=1}^{\infty }\) also converges to x. Since \(\{V (x^{m})\}_{m=1}^{\infty }\) is a sequence in a compact metric space \(\mathcal{K}(L)\), there exists a convergent subsequence \(\{V (x^{\ell})\}_{\ell=1}^{\infty }\) with a limit \(z\neq V (x)\). As a result, we now have \(x^{\ell} \rightarrow x\) and \(V (x^{\ell}) \rightarrow z\).

In the following argument, we will show that z = V (x), which is a contradiction. Take any \((\bar{u},\bar{w}) \in V (x)\). There exists \(\bar{l} \in x\) such that \(\bar{u} = u(\bar{l}_{c})\) and \(\bar{w} = W(\bar{l}_{z})\). Since \(x^{\ell} \rightarrow x\), we can find \(\{l^{\ell}\}_{\ell=1}^{\infty }\) such that \(l^{\ell} \rightarrow \bar{ l}\) with \(l^{\ell} \in x^{\ell}\). Let \((u^{\ell},w^{\ell}) \equiv (u(l_{c}^{\ell}),W(l_{z}^{\ell})) \in V (x^{\ell})\). The conditions \((u^{\ell},w^{\ell}) \rightarrow (\bar{u},\bar{w})\) and \(V (x^{\ell}) \rightarrow z\) with \((u^{\ell},w^{\ell}) \in V (x^{\ell})\) imply \((\bar{u},\bar{w}) \in z\). Thus, \(V (x) \subset z\).

For the other direction, take any \((\bar{u},\bar{w}) \in z\). Since \(V (x^{\ell}) \rightarrow z\), we can find \(\{(u^{\ell},w^{\ell})\}_{\ell=1}^{\infty }\) such that \((u^{\ell},w^{\ell}) \rightarrow (\bar{u},\bar{w})\) with \((u^{\ell},w^{\ell}) \in V (x^{\ell})\). There exists \(l^{\ell} \in x^{\ell}\) satisfying \((u^{\ell},w^{\ell}) = (u(l_{c}^{\ell}),W(l_{z}^{\ell}))\). Since \(\varDelta (C \times \mathcal{Z})\) is compact, there exists a convergent subsequence \(\{l^{k}\}_{k=1}^{\infty }\) with a limit \(\bar{l}\). By continuity of u and W, \((u(\bar{l}_{c}),W(\bar{l}_{z})) = (\bar{u},\bar{w})\). Moreover, since \(l^{k} \rightarrow \bar{ l}\) and \(x^{k} \rightarrow x\) with \(l^{k} \in x^{k}\), we have \(\bar{l} \in x\). Thus \((\bar{u},\bar{w}) \in V (x)\), which implies \(z \subset V (x)\).

Step 2::

\(d_{\mathrm{supnorm}}(\sigma _{x},\sigma _{y}) \leq d_{\mathrm{Hausdorff}}(V (x),V (y))\).

For any α ∈ [0, 1], by definition,

$$\displaystyle\begin{array}{rcl} \left \vert \sigma _{x}(\alpha )-\sigma _{y}(\alpha )\right \vert & =& \left \vert \max _{l\in x}{\Bigl ((1-\alpha )u(l_{c})+\alpha W(l_{z})\Bigr )}-\max _{l\in y}{\Bigl ((1-\alpha )u(l_{c})+\alpha W(l_{z})\Bigr )}\right \vert {}\\ & =& \left \vert \max _{(u,w)\in V (x)}((1-\alpha )u +\alpha w) -\max _{(u,w)\in V (y)}((1-\alpha )u +\alpha w)\right \vert. {}\\ \end{array}$$

Let \((u^{\alpha x},w^{\alpha x}) \in V (x)\) and \((u^{\alpha y},w^{\alpha y}) \in V (y)\) be maximizers for the maximization problems, respectively. Without loss of generality, assume

$$\displaystyle{(1-\alpha )u^{\alpha x} +\alpha w^{\alpha x} \geq (1-\alpha )u^{\alpha y} +\alpha w^{\alpha y}.}$$

Let

$$\displaystyle{H^{\alpha y} \equiv \{ (u,w)\vert (1-\alpha )u +\alpha w = (1-\alpha )u^{\alpha y} +\alpha w^{\alpha y}\}}$$

and \((u^{{\ast}},w^{{\ast}}) \in H^{\alpha y}\) be a point solving

$$\displaystyle{\min _{(u,w)\in H^{\alpha y}}\|(u,w) - (u^{\alpha x},w^{\alpha x})\|.}$$

Then, by the Schwarz inequality,

$$\displaystyle\begin{array}{rcl} & & \left \vert \max _{(u,w)\in V (x)}((1-\alpha )u +\alpha w) -\max _{(u,w)\in V (y)}((1-\alpha )u +\alpha w)\right \vert {}\\ & =& \left \vert ((1-\alpha )u^{\alpha x} +\alpha w^{\alpha x}) - ((1-\alpha )u^{\alpha y} +\alpha w^{\alpha y})\right \vert {}\\ & =& \vert ((1-\alpha )u^{\alpha x} +\alpha w^{\alpha x}) - ((1-\alpha )u^{{\ast}} +\alpha w^{{\ast}})\vert {}\\ & =& \vert (1-\alpha )(u^{\alpha x} - u^{{\ast}}) +\alpha (w^{\alpha x} - w^{{\ast}})\vert {}\\ & \leq & \|(u^{\alpha x} - u^{{\ast}},w^{\alpha x} - w^{{\ast}})\|\|(1-\alpha,\alpha )\| \leq \| (u^{\alpha x} - u^{{\ast}},w^{\alpha x} - w^{{\ast}})\| {}\\ & \leq & \min _{(u,w)\in V (y)}\|(u^{\alpha x},w^{\alpha x}) - (u,w)\| \leq d_{\mathrm{ Hausdorff}}(V (x),V (y)). {}\\ \end{array}$$

Since this inequality holds for all α,

$$\displaystyle{d_{\mathrm{supnorm}}(\sigma _{x},\sigma _{y}) =\sup _{\alpha \in [0,1]}\left \vert \sigma _{x}(\alpha ) -\sigma _{y}(\alpha )\right \vert \leq d_{\mathrm{Hausdorff}}(V (x),V (y)).}$$

From Steps 1 and 2, \(\sigma\) is continuous.

  1. (ii)

    Fix α ∈ [0, 1]. Let \(l^{x} \in x\) and \(l^{y} \in y\) satisfy

    $$\displaystyle\begin{array}{rcl} (1-\alpha )u(l_{c}^{x}) +\alpha W(l_{ z}^{x})& =& \max _{ l\in x}((1-\alpha )u(l_{c}) +\alpha W(l_{z})), {}\\ (1-\alpha )u(l_{c}^{y}) +\alpha W(l_{ z}^{y})& =& \max _{ l\in y}((1-\alpha )u(l_{c}) +\alpha W(l_{z})). {}\\ \end{array}$$

    Since \(\lambda l^{x} + (1-\lambda )l^{y} \in \lambda x + (1-\lambda )y\), mixture linearity of u and W implies

    $$\displaystyle\begin{array}{rcl} & & \lambda \sigma _{x}(\alpha ) + (1-\lambda )\sigma _{y}(\alpha ) {}\\ & =& \lambda ((1-\alpha )u(l_{c}^{x}) +\alpha W(l_{ z}^{x})) + (1-\lambda )((1-\alpha )u(l_{ c}^{y}) +\alpha W(l_{ z}^{y})) {}\\ & =& (1-\alpha )u(\lambda l_{c}^{x} + (1-\lambda )l_{ c}^{y}) +\alpha W(\lambda l_{ z}^{x} + (1-\lambda )l_{ z}^{y}) {}\\ & =& \max _{l\in \lambda x+(1-\lambda )y}((1-\alpha )u(l_{c}) +\alpha W(l_{z})) {}\\ & =& \max _{l\in O(\lambda x+(1-\lambda )y)}((1-\alpha )u(l_{c}) +\alpha W(l_{z})) =\sigma _{O(\lambda x+(1-\lambda )y)}(\alpha ). {}\\ \end{array}$$
  2. (iii)

    Take \(x,x^{{\prime}}\in \mathcal{Z}_{2}\) with \(x\neq x^{{\prime}}\). Without loss of generality, assume \(x\not\subset x^{{\prime}}\). Take \(\tilde{l} \in x\setminus x^{{\prime}}\). Let \(\tilde{u} = u(\tilde{l}_{c})\) and \(\tilde{w} = W(\tilde{l}_{z})\). Let

    $$\displaystyle{V ^{{\prime}}\equiv \{ (u,w)\vert u = u(l_{ c}),w = W(l_{z}),l \in x^{{\prime}}\}\subset \mathbb{R}^{2}.}$$

    We will claim that \((\{(\tilde{u},\tilde{w})\} + \mathbb{R}_{+}^{2}) \cap V ^{{\prime}} =\emptyset\). Suppose otherwise. Then, there exists \(l^{{\prime}}\in x^{{\prime}}\) such that \(u(l_{c}^{{\prime}}) \geq \tilde{ u}\) and \(W(l_{z}^{{\prime}}) \geq \tilde{ w}\). That is, \(U(\{l_{c}^{{\prime}}\otimes \underline{ l}_{z}\}) \geq U(\{\tilde{l}_{c} \otimes \underline{ l}_{z}\})\) and \(U(\{\underline{l}_{c} \otimes l_{z}^{{\prime}}\}) \geq U(\{\underline{l}_{c} \otimes \tilde{ l}_{z}\})\). From Lemma 2 (i), \(U(\{l_{c}^{{\prime}}\otimes \tilde{ l}_{z}\}) \geq U(\{\tilde{l}_{c} \otimes \tilde{ l}_{z}\})\) and \(U(\{\tilde{l}_{c} \otimes l_{z}^{{\prime}}\}) \geq U(\{\tilde{l}_{c} \otimes \tilde{ l}_{z}\})\). Thus, \(\tilde{l} \in O(l^{{\prime}}) \subset O(x^{{\prime}})\). Since \(O(x^{{\prime}}) = x^{{\prime}}\), this is a contradiction.

Since the above claim holds, by the separating hyperplane theorem, there exists α ∈ [0, 1] and \(\gamma \in \mathbb{R}\) such that \((1-\alpha )\tilde{u} +\alpha \tilde{ w}>\gamma> (1-\alpha )u^{{\prime}} +\alpha w^{{\prime}}\) for all \((u^{{\prime}},w^{{\prime}}) \in V ^{{\prime}}\). Equivalently,

$$\displaystyle{(1-\alpha )u(\tilde{l}_{c}) +\alpha W(\tilde{l}_{z})>\gamma> (1-\alpha )u(l_{c}^{{\prime}}) +\alpha W(l_{ z}^{{\prime}}),}$$

for all \(l^{{\prime}}\in x^{{\prime}}\). Hence,

$$\displaystyle\begin{array}{rcl} \sigma _{x}(\alpha )& =& \max _{l\in x}((1-\alpha )u(l_{c}) +\alpha W(l_{z})) \geq (1-\alpha )u(\tilde{l}_{c}) +\alpha W(\tilde{l}_{z}) {}\\ &>& \max _{l^{{\prime}}\in x^{{\prime}}}((1-\alpha )u(l_{c}^{{\prime}}) +\alpha W(l_{ z}^{{\prime}})) =\sigma _{ x^{{\prime}}}(\alpha ). {}\\ \end{array}$$

Therefore, \(\sigma _{x}\neq \sigma _{x^{{\prime}}}\). □ 
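The inequality \(d_{\mathrm{supnorm}}(\sigma _{x},\sigma _{y}) \leq d_{\mathrm{Hausdorff}}(V (x),V (y))\) obtained in Step 2 of part (i) can also be checked numerically. The following Python sketch (an illustration added here; the random finite sets merely stand in for V (x) and V (y)) does so:

```python
# Numerical spot-check (illustration only) of Step 2 in the proof of Lemma 4:
# the sup-norm distance between sigma_x and sigma_y is bounded by the Hausdorff
# distance between the sets V(x) and V(y) of utility pairs. The random finite
# sets below merely stand in for V(x) and V(y).
import numpy as np

rng = np.random.default_rng(0)
Vx, Vy = rng.random((5, 2)), rng.random((7, 2))      # points (u, w) in [0, 1]^2
alphas = np.linspace(0.0, 1.0, 1001)

def sigma(V, a):
    """Support-function value max over (u, w) in V of (1 - a) u + a w."""
    return np.max((1 - a) * V[:, 0] + a * V[:, 1])

def hausdorff(A, B):
    """Hausdorff distance between finite point sets under the Euclidean norm."""
    directed = lambda P, Q: max(min(np.linalg.norm(p - q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

sup_gap = max(abs(sigma(Vx, a) - sigma(Vy, a)) for a in alphas)
assert sup_gap <= hausdorff(Vx, Vy) + 1e-12          # the Step 2 inequality
print(sup_gap, hausdorff(Vx, Vy))
```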

Let \(C \subset \mathcal{C}([0,1])\) be the range of \(\sigma\).

Lemma 5

  1. (i)

    C is convex.

  2. (ii)

    The zero function belongs to C.

  3. (iii)

    The constant function equal to a positive number c > 0 belongs to C.

  4. (iv)

    The supremum of any two points \(f,f^{{\prime}}\in C\) belongs to C. That is, the function \(\alpha \mapsto \max [f(\alpha ),f^{{\prime}}(\alpha )]\) belongs to C.

  5. (v)

    For all f ∈ C, f ≥ 0.

Proof

  1. (i)

    Take any \(f,f^{{\prime}}\in C\) and \(\lambda \in [0,1]\). There are \(x,x^{{\prime}}\in \mathcal{Z}_{2}\) satisfying \(f =\sigma _{x}\) and \(f^{{\prime}} =\sigma _{x^{{\prime}}}\). From Lemma 4 (ii),

    $$\displaystyle{\lambda f + (1-\lambda )f^{{\prime}} =\lambda \sigma _{x} + (1-\lambda )\sigma _{x^{{\prime}}} =\sigma _{O(\lambda x+(1-\lambda )x^{{\prime}})} \in C.}$$

    Hence, C is convex.

  2. (ii)

    Let \(x \equiv O(\underline{l}) \in \mathcal{Z}_{2}\). Then, for all α,

    $$\displaystyle\begin{array}{rcl} \sigma _{x}(\alpha )& =& \max _{l\in O(\underline{l})}(1-\alpha )u(l_{c}) +\alpha W(l_{z}) = (1-\alpha )u(\underline{l}_{c}) +\alpha W(\underline{l}_{z}) {}\\ & =& (1-\alpha )U(\{\underline{l}_{c} \otimes \underline{ l}_{z}\}) +\alpha U(\{\underline{l}_{c} \otimes \underline{ l}_{z}\}) = 0. {}\\ \end{array}$$
  3. (iii)

    Recall that \(\overline{l}\) is a maximal element of U({⋅ }). Without loss of generality, assume \(u(\overline{l}_{c}) \geq W(\overline{l}_{z})\). From Nondegeneracy, there exists \(l_{z}^{{\ast}}\) such that \(W(l_{z}^{{\ast}})> W(\underline{l}_{z}) = 0\). Since \(u(\overline{l}_{c}) \geq W(l_{z}^{{\ast}})> 0 = u(\underline{l}_{c})\), continuity of u implies that there exists \(l_{c}^{{\ast}}\) such that \(u(l_{c}^{{\ast}}) = W(l_{z}^{{\ast}})\). Let \(c \equiv W(l_{z}^{{\ast}})> 0\) and \(x \equiv O(l_{c}^{{\ast}}\otimes l_{z}^{{\ast}}) \in \mathcal{Z}_{2}\). Then, for all α,

    $$\displaystyle\begin{array}{rcl} \sigma _{x}(\alpha )& =& \max _{l\in O(l_{c}^{{\ast}}\otimes l_{z}^{{\ast}})}(1-\alpha )u(l_{c}) +\alpha W(l_{z}) = (1-\alpha )u(l_{c}^{{\ast}}) +\alpha W(l_{ z}^{{\ast}}) = c. {}\\ \end{array}$$
  4. (iv)

    There exist \(x^{{\prime}},x \in \mathcal{Z}_{2}\) such that \(f =\sigma _{x}\) and \(f^{{\prime}} =\sigma _{x^{{\prime}}}\). Let \(f^{{\prime\prime}}\equiv \sigma _{O(\mathrm{co}(x\cup x^{{\prime}}))} \in C\). Then, \(f^{{\prime\prime}}(\alpha ) =\max [\sigma _{x}(\alpha ),\sigma _{x^{{\prime}}}(\alpha )]\).

  5. (v)

    There exists \(x \in \mathcal{Z}_{2}\) such that \(f =\sigma _{x}\). Since \(O(\underline{l}) \subset x\), Lemma 5 (ii) implies \(f(\alpha ) =\sigma _{x}(\alpha ) \geq \sigma _{O(\underline{l})}(\alpha ) = 0\), for any α. □ 

Define \(T: C \rightarrow \mathbb{R}\) by \(T(f) \equiv U(\sigma ^{-1}(f))\). Notice that T(0) = 0 and T(c) = c, where 0 and c are identified with the zero function and the constant function equal to c > 0, respectively. Since U and \(\sigma\) are continuous and mixture linear, so is T.

Lemma 6

\(T(\beta f +\gamma f^{{\prime}}) =\beta T(f) +\gamma T(f^{{\prime}})\) as long as \(f,f^{{\prime}},\beta f +\gamma f^{{\prime}}\in C\) , where \(\beta,\gamma \in \mathbb{R}_{+}\) .

Proof

For any β ∈ [0, 1], T(β f) = T(β f + (1 −β)0) = β T(f) + (1 −β)T(0) = β T(f), where 0 is the zero function. For any β > 1, let \(f^{{\prime\prime}}\equiv \beta f\). Since \(T\left (\frac{1} {\beta } f^{{\prime\prime}}\right ) = \frac{1} {\beta } T(f^{{\prime\prime}})\), β T(f) = T(β f). Additivity follows from \(T(f + f^{{\prime}}) = 2T\left (\frac{1} {2}f + \frac{1} {2}f^{{\prime}}\right ) = T(f) + T(f^{{\prime}})\). □ 

By the same argument as in DLR, we will extend T to \(\mathcal{C}([0,1])\) step by step. For any r ≥ 0, let \(\mathit{rC} \equiv \{\mathit{rf }\vert f \in C\}\) and \(H \equiv \cup _{r\geq 0}\mathit{rC}\). For any \(f \in H\setminus \{0\}\), there is r > 0 satisfying (1∕r)f ∈ C. Define \(T(f) \equiv \mathit{rT}((1/r)f)\). By linearity of T on C, T(f) is well-defined. That is, even if there is another \(r^{{\prime}}> 0\) satisfying \((1/r^{{\prime}})f \in C\), \(\mathit{rT}((1/r)f) = r^{{\prime}}T((1/r^{{\prime}})f)\). It is easy to see that T on H is mixture linear. By the same argument as in Lemma 6, T is also linear.

Let

$$\displaystyle{ H^{{\ast}}\equiv H - H =\{ f_{ 1} - f_{2} \in \mathcal{C}([0,1])\vert f_{1},f_{2} \in H\}. }$$

For any \(f \in H^{{\ast}}\), there are \(f_{1},f_{2} \in H\) satisfying \(f = f_{1} - f_{2}\). Define \(T(f) \equiv T(f_{1}) - T(f_{2})\). We can verify that \(T: H^{{\ast}}\rightarrow \mathbb{R}\) is well-defined. Indeed, suppose that \(f_{1},f_{2},f_{3}\) and \(f_{4}\) in H satisfy \(f = f_{1} - f_{2} = f_{3} - f_{4}\). Since \(f_{1} + f_{4} = f_{2} + f_{3}\), \(T(f_{1}) + T(f_{4}) = T(f_{2}) + T(f_{3})\) by linearity of T on H.

Lemma 7

\(H^{{\ast}}\) is dense in \(\mathcal{C}([0,1])\) .

Proof

From the Stone-Weierstrass theorem, it is enough to show that (i) \(H^{{\ast}}\) is a vector sublattice, (ii) \(H^{{\ast}}\) separates the points of [0, 1]; that is, for any two distinct points \(\alpha,\alpha ^{{\prime}}\in [0,1]\), there exists \(f \in H^{{\ast}}\) with \(f(\alpha )\neq f(\alpha ^{{\prime}})\), and (iii) \(H^{{\ast}}\) contains the constant function equal to one. By exactly the same argument as Lemma 11 (p. 928) in DLR, (i) holds. To verify condition (ii), take \(\alpha,\alpha ^{{\prime}}\in [0,1]\) with \(\alpha \neq \alpha ^{{\prime}}\). Without loss of generality, \(\alpha ^{{\prime}}>\alpha\). Let \(x \equiv O(\overline{l}_{c} \otimes \underline{ l}_{z})\). Then, \(\sigma _{x} \in C \subset H^{{\ast}}\). Since \(u(\overline{l}_{c})> 0\) and \(W(\underline{l}_{z}) = 0\),

$$\displaystyle\begin{array}{rcl} \sigma _{x}(\alpha )& =& (1-\alpha )u(\overline{l}_{c}) +\alpha W(\underline{l}_{z})> (1 -\alpha ^{{\prime}})u(\overline{l}_{ c}) +\alpha ^{{\prime}}W(\underline{l}_{ z}) =\sigma _{x}(\alpha ^{{\prime}}). {}\\ \end{array}$$

Finally, condition (iii) directly follows from Lemma 5 (iii) and the definition of H. □ 

Lemma 8

There exists a constant K > 0 such that \(T(f) \leq K\|f\|\) for any \(f \in H^{{\ast}}\) .

Proof

We use the same argument as in Theorem 2 of Dekel et al. (2007) (see footnote 16). First, we claim that T is increasing in the pointwise order. Indeed, take any \(g^{{\prime}},g \in H^{{\ast}}\) with \(g^{{\prime}}\geq g\). Since \(H^{{\ast}}\) is a vector space, \(g^{{\prime}}- g \in H^{{\ast}}\). Hence there exist \(f,f^{{\prime}}\in C\) and r > 0 such that \(r(f^{{\prime}}- f) = g^{{\prime}}- g \geq 0\). Thus \(f^{{\prime}}\geq f\) pointwise. Since \(T(f^{{\prime}}) \geq T(f)\) by Monotonicity, \(T(r(f^{{\prime}}- f)) \geq T(0) = 0\) implies \(T(g^{{\prime}}- g) \geq 0\). That is, we have \(T(g^{{\prime}}) \geq T(g)\).

For all \(f \in H^{{\ast}}\), we have \(f \leq \| f\|\mathbf{1}\), where \(\mathbf{1} \in H^{{\ast}}\) is the function identically equal to 1. Since T is increasing, \(T(f) \leq \| f\|T(\mathbf{1})\). Thus \(K \equiv T(\mathbf{1})\) is the desired object. □

By Lemma 8 and the Hahn-Banach theorem, we can extend \(T: H^{{\ast}}\rightarrow \mathbb{R}\) to \(\overline{T}: \mathcal{C}([0,1]) \rightarrow \mathbb{R}\) in a linear, continuous, and increasing way. Since \(H^{{\ast}}\) is dense in \(\mathcal{C}([0,1])\) by Lemma 7, this extension is unique.

By construction, we now have the commutative relation \(U = \overline{T}\circ \sigma\) on \(\mathcal{Z}_{2}\).

Since \(\overline{T}\) is a positive linear functional on \(\mathcal{C}([0,1])\), the Riesz representation theorem ensures that there exists a unique countably additive probability measure μ on [0, 1] satisfying

$$\displaystyle{ \overline{T}(f) =\int _{[0,1]}f(\alpha )\mathrm{d}\mu (\alpha ), }$$

for all \(f \in \mathcal{C}([0,1])\). Thus we have

$$\displaystyle{ U(x) = \overline{T}(\sigma (x)) =\int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha W(l_{z})\right )\mathrm{d}\mu (\alpha ). }$$
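As a numerical illustration (added here; both the finite menu and the choice of measure are assumptions made only for this example), the recovered representation can be evaluated directly once u, W, and μ are specified, for instance with μ taken to be a Beta(2, 5) distribution:

```python
# Numerical illustration of the recovered representation
#   U(x) = int over [0,1] of max_{l in x} [(1-alpha) u(l_c) + alpha W(l_z)] d mu(alpha).
# The finite menu (given through its utility pairs) and the choice
# mu = Beta(2, 5), with density 30 * a * (1 - a)**4 on [0, 1], are
# assumptions made for this example only.
import numpy as np

menu_uw = [(1.0, 0.2), (0.3, 0.9), (0.6, 0.6)]       # pairs (u(l_c), W(l_z))
grid = np.linspace(0.0, 1.0, 10001)                  # integration grid for alpha
density = 30 * grid * (1 - grid) ** 4                # Beta(2, 5) density

best = np.max([(1 - grid) * u_c + grid * w_z for u_c, w_z in menu_uw], axis=0)
U_menu = np.trapz(best * density, grid)

# Flexibility has value: because the max is taken inside the integral
# (after alpha is realized), U of the menu weakly exceeds U of each singleton.
singletons = [np.trapz(((1 - grid) * u_c + grid * w_z) * density, grid)
              for u_c, w_z in menu_uw]
print(round(U_menu, 4), [round(v, 4) for v in singletons])
```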

For any \(x \in \mathcal{Z}\), let \(\delta _{x}\) be the degenerate measure at x. Denote \(W(\delta _{x})\) by W(x).

Lemma 9

\(U(x) \geq U(y)\ \Leftrightarrow \ W(x) \geq W(y)\) .

Proof

First of all,

$$\displaystyle{W(l_{z}) =\int _{\varDelta (\mathcal{Z})}W(x)\mathrm{d}l_{z}(x).}$$

Since

$$\displaystyle\begin{array}{rcl} U(\{(c,x)\}) =\int \left ((1-\alpha )u(c) +\alpha W(x)\right )\mathrm{d}\mu (\alpha ) = (1-\bar{\alpha })u(c) +\bar{\alpha } W(x),& & {}\\ \end{array}$$

Stationarity implies that \(U(x) \geq U(y)\ \Leftrightarrow \ U(\{(c,x)\}) \geq U(\{(c,y)\})\ \Leftrightarrow W(x) \geq W(y)\). □ 

Lemma 10

There exist β > 0 and \(\zeta \in \mathbb{R}\) such that W(x) = βU(x) + ζ.

Proof

Since U is mixture linear, there exists \((\underline{c},\underline{z}) \in C \times \mathcal{Z}\) such that \(\{\underline{l}\} \sim \{ (\underline{c},\underline{z})\}\). Thus \(W(l_{z}) = U(\{\underline{c} \otimes l_{z}\})\). We have

$$\displaystyle\begin{array}{rcl} U(\{\lambda \circ (\underline{c},x) + (1-\lambda ) \circ (\underline{c},y)\})& =& U(\{\underline{c} \otimes (\lambda \circ x + (1-\lambda ) \circ y)\}) {}\\ & =& W(\lambda \circ x + (1-\lambda ) \circ y), {}\\ \end{array}$$

and \(U(\{(\underline{c},\lambda x + (1-\lambda )y)\}) = W(\lambda x + (1-\lambda )y)\). Since W is mixture linear over \(\varDelta (\mathcal{Z})\), Timing Indifference implies

$$\displaystyle{\lambda W(x) + (1-\lambda )W(y) = W(\lambda \circ x + (1-\lambda ) \circ y) = W(\lambda x + (1-\lambda )y).}$$

Hence, W is mixture linear over \(\mathcal{Z}_{1}\). From Lemma 9, we know U(x) and W(x) represent the same preference. Since both functions are mixture linear, there exist β > 0 and \(\zeta \in \mathbb{R}\) such that W(x) = β U(x) +ζ. □ 

We will claim that β can be normalized to one. Define \(W^{{\ast}}:\varDelta (\mathcal{Z}) \rightarrow \mathbb{R}\) by \(W^{{\ast}}(l_{z}) = W(l_{z})/\beta\). For any \(x \in \mathcal{Z}_{2}\), define \(\sigma _{x}^{{\ast}}: [0,1] \rightarrow \mathbb{R}\) by

$$\displaystyle{ \sigma _{x}^{{\ast}}(\alpha ) \equiv \max _{ l\in x}{\Bigl ((1-\alpha )u(l_{c}) +\alpha W^{{\ast}}(l_{ z})\Bigr )}. }$$

Since \(W^{{\ast}}\) is continuous and mixture linear, the same arguments up to Lemma 9 apply to \(\sigma ^{{\ast}}\). Thus, there exists a probability measure \(\mu ^{{\ast}}\) on [0, 1] such that

$$\displaystyle{ U(x) =\int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha W^{{\ast}}(l_{ z})\right )\,\mathrm{d}\mu ^{{\ast}}(\alpha ). }$$

By definition, \(W^{{\ast}}(z) = U(z) +\zeta /\beta\).

Lemma 11

\(\bar{\alpha }<1\), where \(\bar{\alpha }\) is the mean of \(\mu ^{{\ast}}\).

Proof

Since U is not constant, there exist x and \(x^{{\prime}}\) such that \(U(x)> U(x^{{\prime}})\). For any fixed c, let

$$\displaystyle{x^{t} \equiv \{ (c,\{(c,\{\cdots \{(c,x)\}\cdots \,\})\})\},\ x^{{{\prime}}^{t}} \equiv \{ (c,\{(c,\{\cdots \{(c,x^{{\prime}})\}\cdots \,\})\})\}.}$$

Then,

$$\displaystyle\begin{array}{rcl} U(x^{t}) - U(x^{{{\prime}}^{t}})& =& \bar{\alpha }^{t}U(x) -\bar{\alpha }^{t}U(x^{{\prime}}) =\bar{\alpha }^{t}(U(x) - U(x^{{\prime}})). {}\\ \end{array}$$

Since Continuity requires \(U(x^{t}) - U(x^{{{\prime}}^{t}}) \rightarrow 0\) as \(t \rightarrow \infty\), we must have \(\bar{\alpha }<1\). □ 

Define \(\zeta ^{{\ast}}\equiv \zeta /\beta\) and

$$\displaystyle{ u^{{\ast}}(l_{ c}) \equiv u(l_{c}) + \frac{\bar{\alpha }} {1-\bar{\alpha }}\zeta ^{{\ast}}. }$$

Then

$$\displaystyle\begin{array}{rcl} U(x)& =& \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}W^{{\ast}}(z)\,\mathrm{d}l_{ z}(z)\right )\mathrm{d}\mu ^{{\ast}}(\alpha ) {}\\ & =& \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}(U(z) +\zeta ^{{\ast}})\,\mathrm{d}l_{ z}(z)\right )\mathrm{d}\mu ^{{\ast}}(\alpha ) {}\\ & =& \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )\left (u(l_{c}) + \frac{\bar{\alpha }} {1-\bar{\alpha }}\zeta ^{{\ast}}\right ) +\alpha \int _{ \mathcal{Z}}U(z)\,\mathrm{d}l_{z}(z)\right )\mathrm{d}\mu ^{{\ast}}(\alpha ) {}\\ & =& \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u^{{\ast}}(l_{ c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}(z)\right )\mathrm{d}\mu ^{{\ast}}(\alpha ). {}\\ \end{array}$$

Therefore the functional form U with components \((u^{{\ast}},\mu ^{{\ast}})\) is the required representation.

Proof of Theorem 2

  (i)

    Since the mixture linear functions u and \(u^{{\prime}}\) represent the same conditional preference over Δ(C), the standard argument shows that \(u^{{\prime}}\) is a positive affine transformation of u. That is, u and \(u^{{\prime}}\) are cardinally equivalent.

  (ii)

    From (i), there exist γ > 0 and \(\zeta \in \mathbb{R}\) such that \(u^{{\prime}} =\gamma u+\zeta\). Since U and \(U^{{\prime}}\) are mixture linear functions representing the same preference, there exist \(\gamma ^{{\ast}}> 0\) and \(\zeta ^{{\ast}}\in \mathbb{R}\) such that \(U^{{\prime}} =\gamma ^{{\ast}}U +\zeta ^{{\ast}}\). Let \(x_{c}\) be the perfect commitment menu for c, that is, \(x_{c} \equiv \{ (c,\{(c,\{\cdots \,\})\})\}\). Since \(U(x_{c}) = u(c)\) and \(U^{{\prime}}(x_{c}) = u^{{\prime}}(c)\), we have \(U^{{\prime}}(x_{c}) =\gamma U(x_{c})+\zeta\), which implies \(\gamma ^{{\ast}} =\gamma\) and \(\zeta ^{{\ast}} =\zeta\). Now we have

    $$\displaystyle\begin{array}{rcl} U^{{\prime}}(x)& =& \int _{ [0,1]}\max _{l\in x}\left ((1-\alpha )u^{{\prime}}(l_{ c}) +\alpha \int _{\mathcal{Z}}U^{{\prime}}(z)\mathrm{d}l_{ z}\right )\mathrm{d}\mu ^{{\prime}}(\alpha ) {}\\ & =& \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )(\gamma u(l_{c})+\zeta ) +\alpha \int _{\mathcal{Z}}(\gamma U(z)+\zeta )\mathrm{d}l_{z}\right )\mathrm{d}\mu ^{{\prime}}(\alpha ) {}\\ & =& \gamma \int _{[0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right )\mathrm{d}\mu ^{{\prime}}(\alpha ) +\zeta. {}\\ \end{array}$$

    Hence,

    $$\displaystyle{ U^{{\prime\prime}}(x) \equiv \int _{ [0,1]}\max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right )\mathrm{d}\mu ^{{\prime}}(\alpha ) }$$

    also represents the same preference. Since \(U^{{\prime}} =\gamma U+\zeta\) and \(U^{{\prime}} =\gamma U^{{\prime\prime}}+\zeta\), we must have \(U(x) = U^{{\prime\prime}}(x)\) for all x. For all \(x \in \mathcal{Z}\) and α ∈ [0, 1], let

    $$\displaystyle{ \sigma _{x}(\alpha ) \equiv \max _{l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int U(z)\,\mathrm{d}l_{z}(z)\right ). }$$

    Then,

    $$\displaystyle{ U(x) =\int \sigma _{x}(\alpha )\,\mathrm{d}\mu (\alpha ) =\int \sigma _{x}(\alpha )\,\mathrm{d}\mu ^{{\prime}}(\alpha ) = U^{{\prime\prime}}(x). }$$
    (20.15)

    If x is convex, \(\sigma _{x}\) is its support function. Equation (20.15) also holds when \(\sigma _{x}\) is replaced with \(a\sigma _{x} - b\sigma _{y}\) for any convex menus x, y and a, b ≥ 0. From Lemma C.6, the set of all such functions is a dense subset of the set of real-valued continuous functions over [0, 1]. Hence, Eq. (20.15) holds when \(\sigma _{x}\) is replaced with any real-valued continuous function, and the Riesz representation theorem therefore implies \(\mu =\mu ^{{\prime}}\).

Proofs of Corollary 1 and Proposition 1

First we show Proposition 1. For all x, let \(l^{x}\) denote a best element in x with respect to the commitment ranking.

Lemma 12

\(\succapprox\) satisfies Dominance if and only if, for all x, \(x \sim \{ l^{x}\}\) .

Proof

If \(\succapprox\) satisfies Dominance, \(\{l^{x}\} \sim O^{{\ast}}(l^{x}) = O^{{\ast}}(x) \sim x\). Conversely, by definition of \(O^{{\ast}}(x)\), \(l^{x}\) is a best element in \(O^{{\ast}}(x)\). Thus \(x \sim \{ l^{x}\} \sim O^{{\ast}}(x)\). □ 

  (i)

    By Lemma 12, it suffices to show that Strategic Rationality is equivalent to the condition that \(x \sim \{ l^{x}\}\) for all x. First suppose that \(x \sim \{ l^{x}\}\) for all x. If \(x \succapprox y\), then \(\{l^{x}\} \succapprox \{ l^{y}\}\), so \(l^{x}\) is a best element of \(x \cup y\) with respect to the commitment ranking. Hence \(x \sim \{ l^{x}\} \sim x \cup y\).

Next suppose \(\succapprox\) satisfies Strategic Rationality. Take any finite menu x, denoted by \(\{l_{1},l_{2},\cdots \,,l_{N}\}\). Without loss of generality, let \(l^{x} = l_{1}\). Since \(\{l_{1}\} \succapprox \{ l_{2}\}\), Strategic Rationality implies \(\{l_{1},l_{2}\} \sim \{ l_{1}\}\). Since \(\{l_{1},l_{2}\} \sim \{ l_{1}\} \succapprox \{ l_{3}\}\), again by Strategic Rationality, \(\{l_{1},l_{2},l_{3}\} \sim \{ l_{1},l_{2}\} \sim \{ l_{1}\}\). Repeating the same argument finitely many times, \(x \sim \{ l^{x}\}\). For any menu x, Lemma 0 of Gul and Pesendorfer (2001, p. 1421) shows that there exists a sequence of finite subsets \(x^{n}\) of x converging to x in the sense of the Hausdorff metric. Since \(l^{x}\) is a best element of x and \(x^{n} \subset x\), applying the above claim to the finite menu \(x^{n} \cup \{ l^{x}\}\), we obtain \(x^{n} \cup \{ l^{x}\} \sim \{ l^{x}\}\). Since \(x^{n} \cup \{ l^{x}\} \rightarrow x \cup \{ l^{x}\} = x\) as \(n \rightarrow \infty\), Continuity implies \(x \sim \{ l^{x}\}\).

  (ii)

    For all x, choose any l ∈ O(x). By definition, there exists \(l^{{\prime}}\in x\) such that \(l \in O(l^{{\prime}})\). From part (i), preference satisfies Monotonicity. Applying Commitment Marginal Dominance and Monotonicity, we have \(\{l^{x}\} \succapprox \{ l^{{\prime}}\}\sim O(l^{{\prime}}) \succapprox \{ l\}\). Hence, l x is a best element in O(x). Therefore, by Lemma 12, \(x \sim \{ l^{x}\} \sim O(x)\).

We now turn to the proof of Corollary 1. If part: the representation takes the form

$$\displaystyle{ U(x) =\max _{l\in x}\left \{(1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right \}, }$$

for some α ∈ [0, 1). Thus it is easy to verify that U(x) ≥ U(y) implies \(U(x) = U(x \cup y)\).
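Indeed, since the maximum over a union of menus is the larger of the two maxima,

$$\displaystyle{ U(x \cup y) =\max _{l\in x\cup y}\left \{(1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right \} =\max \{ U(x),U(y)\}, }$$

so U(x) ≥ U(y) gives \(U(x \cup y) = U(x)\), that is, \(x \cup y \sim x\).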

Only-if part: From Proposition 1, \(\succapprox\) satisfies all the axioms of Theorem 1. Hence \(\succapprox\) admits a random discounting representation U with components (u, μ). By contradiction, suppose #supp(μ) ≠ 1. Then, there exist \(\alpha ^{{\prime}},\alpha ^{{\prime\prime}}\in \mathrm{ supp}(\mu )\) with \(\alpha ^{{\prime\prime}}>\alpha ^{{\prime}}\). Let u(Δ(C)) denote the image of Δ(C) under u. Let \(U(\mathcal{L})\) denote the image of \(\mathcal{L}\subset \mathcal{Z}\) under U. Since \(U(\mathcal{L})\) and u(Δ(C)) are non-degenerate intervals of \(\mathbb{R}_{+}\), take \(p_{1} \in u(\varDelta (C))\) and \(p_{2} \in U(\mathcal{L})\) from their relative interiors. Take two points \((p_{1}^{{\prime}},p_{2}^{{\prime}}),(p_{1}^{{\prime\prime}},p_{2}^{{\prime\prime}}) \in \mathbb{R}_{+}^{2}\) such that \(p_{1}^{{\prime\prime}}> p_{1}> p_{1}^{{\prime}}\), \(p_{2}^{{\prime}}> p_{2}> p_{2}^{{\prime\prime}}\), and

$$\displaystyle{ (1 -\alpha ^{{\prime}})p_{ 1}^{{\prime}} +\alpha ^{{\prime}}p_{ 2}^{{\prime}} = (1 -\alpha ^{{\prime}})p_{ 1} +\alpha ^{{\prime}}p_{ 2},\ \text{and}\ (1 -\alpha ^{{\prime\prime}})p_{ 1}^{{\prime\prime}} +\alpha ^{{\prime\prime}}p_{ 2}^{{\prime\prime}} = (1 -\alpha ^{{\prime\prime}})p_{ 1} +\alpha ^{{\prime\prime}}p_{ 2}. }$$
(20.16)

Since \(p_{1}\) belongs to the relative interior of u(Δ(C)), \(p_{1}^{{\prime}},p_{1}^{{\prime\prime}}\) can be taken to be in u(Δ(C)). Similarly, we can assume \(p_{2}^{{\prime}},p_{2}^{{\prime\prime}}\) belong to \(U(\mathcal{L})\). Then we have

$$\displaystyle{ (1 -\alpha ^{{\prime}})p_{ 1}^{{\prime}} +\alpha ^{{\prime}}p_{ 2}^{{\prime}}> (1 -\alpha ^{{\prime}})p_{ 1}^{{\prime\prime}} +\alpha ^{{\prime}}p_{ 2}^{{\prime\prime}},\ \text{and}\ (1 -\alpha ^{{\prime\prime}})p_{ 1}^{{\prime\prime}} +\alpha ^{{\prime\prime}}p_{ 2}^{{\prime\prime}}> (1 -\alpha ^{{\prime\prime}})p_{ 1}^{{\prime}} +\alpha ^{{\prime\prime}}p_{ 2}^{{\prime}}. }$$
(20.17)

Indeed, by contradiction, suppose \((1 -\alpha ^{{\prime}})p_{1}^{{\prime\prime}} +\alpha ^{{\prime}}p_{2}^{{\prime\prime}}\geq (1 -\alpha ^{{\prime}})p_{1}^{{\prime}} +\alpha ^{{\prime}}p_{2}^{{\prime}}\). By (20.16), \((1 -\alpha ^{{\prime}})p_{1}^{{\prime\prime}} +\alpha ^{{\prime}}p_{2}^{{\prime\prime}}\geq (1 -\alpha ^{{\prime}})p_{1} +\alpha ^{{\prime}}p_{2}\). Since \(p_{1}^{{\prime\prime}}> p_{1}\), \(p_{2}^{{\prime\prime}} <p_{2}\), and \(\alpha ^{{\prime\prime}}>\alpha ^{{\prime}}\), we have \((1 -\alpha ^{{\prime\prime}})p_{1}^{{\prime\prime}} +\alpha ^{{\prime\prime}}p_{2}^{{\prime\prime}}> (1 -\alpha ^{{\prime\prime}})p_{1} +\alpha ^{{\prime\prime}}p_{2}\), which contradicts (20.16). The same argument can be applied to the other case. Now take lotteries \(l_{c}^{{\prime}},l_{c}^{{\prime\prime}}\in \varDelta (C)\) and \(l^{{\prime}},l^{{\prime\prime}}\in \mathcal{L}\) such that \(u(l_{c}^{{\prime}}) = p_{1}^{{\prime}}\),\(u(l_{c}^{{\prime\prime}}) = p_{1}^{{\prime\prime}}\), \(U(\{l^{{\prime}}\}) = p_{2}^{{\prime}}\), and \(U(\{l^{{\prime\prime}}\}) = p_{2}^{{\prime\prime}}\). Taking (20.17) and continuity of the inner product together, there exist open neighborhoods \(B(\alpha ^{{\prime}})\) and \(B(\alpha ^{{\prime\prime}})\) satisfying

$$\displaystyle\begin{array}{rcl} & & (1-\alpha )u(l_{c}^{{\prime}}) +\alpha U(\{l^{{\prime}}\})> (1-\alpha )u(l_{ c}^{{\prime\prime}}) +\alpha U(\{l^{{\prime\prime}}\}),\ \text{and} \\ & & (1-\tilde{\alpha })u(l_{c}^{{\prime\prime}}) +\tilde{\alpha } U(\{l^{{\prime\prime}}\})> (1-\tilde{\alpha })u(l_{ c}^{{\prime}}) +\tilde{\alpha } U(\{l^{{\prime}}\}),{}\end{array}$$
(20.18)

for all \(\alpha \in B(\alpha ^{{\prime}})\) and \(\tilde{\alpha }\in B(\alpha ^{{\prime\prime}})\). Since \(\alpha ^{{\prime}},\alpha ^{{\prime\prime}}\) belong to the support of μ, \(\mu (B(\alpha ^{{\prime}}))> 0\) and \(\mu (B(\alpha ^{{\prime\prime}}))> 0\). Thus, by (20.18) and the representation,

$$\displaystyle{ U(\{l_{c}^{{\prime}}\otimes \{ l^{{\prime}}\},l_{ c}^{{\prime\prime}}\otimes \{ l^{{\prime\prime}}\}\})>\max [U(\{l_{ c}^{{\prime}}\otimes \{ l^{{\prime}}\}\}),U(\{l_{ c}^{{\prime\prime}}\otimes \{ l^{{\prime\prime}}\}\})], }$$

which contradicts Strategic Rationality.

Proof of Theorem 3

((a)\(\Rightarrow\)(b)) Since \(\succapprox ^{1}\) and \(\succapprox ^{2}\) are equivalent on \(\mathcal{L}\), we have condition (i). Let \(u^{i}(\varDelta (C))\) denote the image of Δ(C) under \(u^{i}\), and let \(U^{i}(\mathcal{L})\) denote the image of \(\mathcal{L}\subset \mathcal{Z}\) under \(U^{i}\). Let \(l_{c}^{+}\) and \(l_{c}^{-}\) be a maximal and a minimal lottery with respect to \(u^{i}\). Since \(u^{i}(l_{c}^{+}) \geq U^{i}(\{l\}) \geq u^{i}(l_{c}^{-})\) for all \(l \in \mathcal{L}\), we have \(U^{1}(\mathcal{L}) = u^{1}(\varDelta (C)) = u^{2}(\varDelta (C)) = U^{2}(\mathcal{L})\). Let \(\mathrm{supp}(\mu ^{i})\) denote the support of \(\mu ^{i}\). By contradiction, suppose that there exists \(\alpha ^{{\ast}}\in \mathrm{ supp}(\mu ^{1})\) with \(\alpha ^{{\ast}}\not\in \mathrm{supp}(\mu ^{2})\). Since \(\mathrm{supp}(\mu ^{2})\) is a relatively closed subset of [0, 1], there exists a relatively open interval \((\alpha ^{a},\alpha ^{b})\) containing \(\alpha ^{{\ast}}\) such that \((\alpha ^{a},\alpha ^{b}) \cap \mathrm{ supp}(\mu ^{2}) =\emptyset\). Since \(u^{1}(\varDelta (C))\) is a non-degenerate interval of \(\mathbb{R}_{+}\), take \(p_{1} \in u^{1}(\varDelta (C))\) and \(p_{2} \in U^{1}(\mathcal{L})\) from their relative interiors. Take \((p_{1}^{a},p_{2}^{a})\) and \((p_{1}^{b},p_{2}^{b})\) such that \(p_{1}^{a}> p_{1}> p_{1}^{b}\), \(p_{2}^{b}> p_{2}> p_{2}^{a}\),

$$\displaystyle\begin{array}{rcl} & & (1 -\alpha ^{a})p_{ 1}^{a} +\alpha ^{a}p_{ 2}^{a} = (1 -\alpha ^{a})p_{ 1} +\alpha ^{a}p_{ 2},\ \text{and} {}\\ & & (1 -\alpha ^{b})p_{ 1}^{b} +\alpha ^{b}p_{ 2}^{b} = (1 -\alpha ^{b})p_{ 1} +\alpha ^{b}p_{ 2}. {}\\ \end{array}$$

Then we have

$$\displaystyle\begin{array}{rcl} & & (1-\alpha )p_{1}^{b} +\alpha p_{ 2}^{b}>\max [(1-\alpha )p_{ 1} +\alpha p_{2},\ (1-\alpha )p_{1}^{a} +\alpha p_{ 2}^{a}]\mbox{ for all $\alpha>\alpha ^{b}$}, {}\\ & & (1-\alpha )p_{1} +\alpha p_{2}>\max [(1-\alpha )p_{1}^{a} +\alpha p_{ 2}^{a},\ (1-\alpha )p_{ 1}^{b} +\alpha p_{ 2}^{b}]\mbox{ for all $\alpha \in (\alpha ^{a},\alpha ^{b})$}, {}\\ & & (1-\alpha )p_{1}^{a} +\alpha p_{ 2}^{a}>\max [(1-\alpha )p_{ 1} +\alpha p_{2},\ (1-\alpha )p_{1}^{b} +\alpha p_{ 2}^{b}]\mbox{ for all $\alpha <\alpha ^{a}$}. {}\\ \end{array}$$

Since \((p_{1}^{a},p_{2}^{a})\) and \((p_{1}^{b},p_{2}^{b})\) can be chosen sufficiently close to \((p_{1},p_{2})\), assume without loss of generality that \(p_{1}^{a},p_{1}^{b} \in u^{1}(\varDelta (C))\) and \(p_{2}^{a},p_{2}^{b} \in U^{1}(\mathcal{L})\). Thus there exist \(l_{c},l_{c}^{a},l_{c}^{b} \in \varDelta (C)\) and \(l,l^{a},l^{b} \in \mathcal{L}\) such that \(u^{i}(l_{c}) = p_{1},u^{i}(l_{c}^{a}) = p_{1}^{a},u^{i}(l_{c}^{b}) = p_{1}^{b}\), \(U^{i}(\{l\}) = p_{2},U^{i}(\{l^{a}\}) = p_{2}^{a}\), and \(U^{i}(\{l^{b}\}) = p_{2}^{b}\). Define \(x \equiv \{ l_{c} \otimes \{ l\},l_{c}^{a} \otimes \{ l^{a}\},l_{c}^{b} \otimes \{ l^{b}\}\} \in \mathcal{Z}^{1}\) and \(y \equiv \{ l_{c}^{a} \otimes \{ l^{a}\},l_{c}^{b} \otimes \{ l^{b}\}\} \in \mathcal{Z}^{1}\). Since \((\alpha ^{a},\alpha ^{b}) \cap \mathrm{ supp}(\mu ^{2}) =\emptyset\), \(U^{2}(x) = U^{2}(y)\). On the other hand, since \(\mu ^{1}((\alpha ^{a},\alpha ^{b}))> 0\), \(U^{1}(x)> U^{1}(y)\). This contradicts the assumption that \(\succapprox ^{2}\) desires more flexibility in the two-period model than \(\succapprox ^{1}\).
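The three-line configuration displayed above can be checked numerically. The following sketch is only an illustration under hypothetical values of \(p_{1}\), \(p_{2}\), \(\alpha ^{a}\), \(\alpha ^{b}\), and a hypothetical perturbation size; it is not part of the proof.

```python
import numpy as np

# Hypothetical values (illustrative only): a commitment utility p1 and a
# continuation utility p2 from the relative interiors, and an interval
# (alpha_a, alpha_b) of discount factors.
p1, p2 = 2.0, 3.0
alpha_a, alpha_b = 0.3, 0.7
eps = 0.5  # hypothetical perturbation size

# Choose (p1a, p2a) and (p1b, p2b) so that the equalities in the proof hold:
# each perturbed pair lies on the same indifference line as (p1, p2) at
# alpha_a and alpha_b, respectively.
p1a, p2a = p1 + eps, p2 - (1 - alpha_a) * eps / alpha_a
p1b, p2b = p1 - eps, p2 + (1 - alpha_b) * eps / alpha_b

def value(q1, q2, a):
    """Ex ante utility (1 - a)*q1 + a*q2 of a menu item at discount factor a."""
    return (1 - a) * q1 + a * q2

for a in np.linspace(0.0, 1.0, 1001):
    if abs(a - alpha_a) < 1e-9 or abs(a - alpha_b) < 1e-9:
        continue  # skip the kink points, where two lines tie
    va, v0, vb = value(p1a, p2a, a), value(p1, p2, a), value(p1b, p2b, a)
    if a > alpha_b:            # the "b" item is the unique maximizer
        assert vb > max(v0, va)
    elif a > alpha_a:          # the middle item is the unique maximizer
        assert v0 > max(va, vb)
    else:                      # the "a" item is the unique maximizer
        assert va > max(v0, vb)

print("three-line configuration verified on a grid of discount factors")
```

Because the maximizing item switches exactly at \(\alpha ^{a}\) and \(\alpha ^{b}\), a belief whose support meets \((\alpha ^{a},\alpha ^{b})\) strictly values the three-element menu above its two-element submenu, which is the flexibility gap exploited in the proof.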

((b)\(\Rightarrow\)(a)) Assume that \(u^{1} = u^{2}\) and \(\mathrm{supp}(\mu ^{1}) \subset \mathrm{ supp}(\mu ^{2})\). Since \(\succapprox ^{i}\), i = 1, 2, are equivalent on \(\mathcal{L}\), we have \(\overline{\alpha }^{1} = \overline{\alpha }^{2}\). Consequently, \(U^{1}(\{l\}) = U^{2}(\{l\})\) for all \(l \in \mathcal{L}\). Now take any \(x,y \in \mathcal{Z}^{1}\) with \(y \subset x\), and assume \(x \succ ^{1}y\). There exists \(\alpha ^{{\ast}}\in \mathrm{supp}(\mu ^{1})\) such that

$$\displaystyle{ \max _{l\in x}(1 -\alpha ^{{\ast}})u^{1}(l_{ c}) +\alpha ^{{\ast}}U^{1}(\{l_{ L}\})>\max _{l\in y}(1 -\alpha ^{{\ast}})u^{1}(l_{ c}) +\alpha ^{{\ast}}U^{1}(\{l_{ L}\}). }$$
(20.19)

By continuity of the representation, there exists an open neighborhood \(O \subset [0,1]\) of \(\alpha ^{{\ast}}\) such that the strict inequality (20.19) holds for all α ∈ O. Since \(\alpha ^{{\ast}}\in \mathrm{ supp}(\mu ^{1}) \subset \mathrm{ supp}(\mu ^{2})\), \(\mu ^{2}(O)> 0\). Moreover, since \(u^{1} = u^{2}\) and \(U^{1}(\{l\}) = U^{2}(\{l\})\) for all \(l \in \mathcal{L}\),

$$\displaystyle{ \max _{l\in x}(1-\alpha )u^{2}(l_{ c}) +\alpha U^{2}(\{l_{ L}\})>\max _{l\in y}(1-\alpha )u^{2}(l_{ c}) +\alpha U^{2}(\{l_{ L}\}) }$$

for all α ∈ O, which implies \(U^{2}(x)> U^{2}(y)\).

Proof of Theorem 4

By definition, (a) implies (b). We show that (b)\(\Rightarrow\)(c) and (c)\(\Rightarrow\)(a). To show (b)\(\Rightarrow\)(c), we prepare two lemmas.

Lemma 13

Suppose that \(\succapprox ^{i}\) satisfies all the axioms of Theorem 1 . If agent 2 is more averse to commitment in the two-period model than agent 1, then \(x \succapprox ^{1}\{l\} \Rightarrow x \succapprox ^{2}\{l\}\) for all \(x \in \mathcal{Z}^{1}\) and \(l \in \mathcal{L}\) .

Proof

It suffices to show that \(x \sim ^{1}\{l\} \Rightarrow x \succapprox ^{2}\{l\}\). If agent 1 strictly prefers l to the worst lottery \(\underline{l}\), then \(x \succ ^{1}\{\lambda l + (1-\lambda )\underline{l}\}\) for all \(\lambda \in (0,1)\). By assumption, \(x \succ ^{2}\{\lambda l + (1-\lambda )\underline{l}\}\). Thus Continuity implies \(x \succapprox ^{2}\{l\}\) as \(\lambda \rightarrow 1\). If l is indifferent to \(\underline{l}\), consider the best lottery \(\overline{l}\). Since \(\{\overline{l}\} \succapprox x\) for all \(x \in \mathcal{Z}^{1}\), mixture linearity of the representation implies \(U^{1}(\lambda x + (1-\lambda )\{\overline{l}\})> U^{1}(\{l\})\) for all \(\lambda \in (0,1)\). By assumption, \(\lambda x + (1-\lambda )\{\overline{l}\} \succ ^{2}\{l\}\). Thus Continuity implies \(x \succapprox ^{2}\{l\}\) as \(\lambda \rightarrow 1\). □ 

Lemma 14

Agent 2 is more averse to commitment in the two-period model than agent 1 if and only if there exist random discounting representations \(U^{i}\) with \((u^{i},\mu ^{i})\), i = 1, 2, such that (i) \(u^{1} = u^{2}\) and \(\bar{\alpha }^{1} =\bar{\alpha } ^{2}\), and (ii) \(U^{1}(x) \leq U^{2}(x)\) for all \(x \in \mathcal{Z}^{1}\).

Proof

Necessity follows because, whenever \(x \succ ^{1}\{l\}\), conditions (i) and (ii) give \(U^{2}(x) \geq U^{1}(x)> U^{1}(\{l\}) = U^{2}(\{l\})\), and hence \(x \succ ^{2}\{l\}\). We prove sufficiency. By Lemma 13, \(\succapprox ^{1}\) and \(\succapprox ^{2}\) are equivalent on \(\mathcal{L}\). Thus there exist random discounting representations satisfying (i), and hence \(U^{1}(\{l\}) = U^{2}(\{l\})\) for all \(l \in \mathcal{L}\). Note that for all \(x \in \mathcal{Z}^{1}\) there exists \(l \in \mathcal{L}\) such that \(x \sim ^{1}\{l\}\), that is, \(U^{1}(x) = U^{1}(\{l\})\). By Lemma 13, \(x \sim ^{1}\{l\}\) implies \(x \succapprox ^{2}\{l\}\), that is, \(U^{2}(x) \geq U^{2}(\{l\}) = U^{1}(\{l\}) = U^{1}(x)\). □ 

We show that, for all continuous and convex functions v of α, there is a sequence \(\{v_{n}\}\) of functions of the form (20.8) such that \(v \geq v_{n}\),

$$\displaystyle\begin{array}{rcl} \sup _{\alpha }\vert v(\alpha ) - v_{n}(\alpha )\vert <\frac{1} {n},\text{ and }\int v_{n}(\alpha )\,\mathrm{d}\mu ^{1}(\alpha ) \leq \int v_{ n}(\alpha )\,\mathrm{d}\mu ^{2}(\alpha )& & {}\\ \end{array}$$

for all n = 1, 2, ⋯ . Then the result follows from the dominated convergence theorem.

Let \(v: [0,1] \rightarrow \mathbb{R}\) be a continuous convex function. Then, for every \(\hat{\alpha }\in [0,1]\), there exists a vector \(p_{\hat{\alpha }} \in \mathbb{R}^{2}\) such that for all α ∈ [0, 1],

$$\displaystyle\begin{array}{rcl} v(\alpha ) \geq (1-\alpha )p_{\hat{\alpha },1} +\alpha p_{\hat{\alpha },2}& & {}\\ \end{array}$$

with equality for \(\hat{\alpha }\). Fix n. Since \(v(\alpha ) -\{ (1-\alpha )p_{\hat{\alpha },1} +\alpha p_{\hat{\alpha },2}\}\) is continuous with respect to α, there exists an open neighborhood \(B(\hat{\alpha })\) of \(\hat{\alpha }\) such that for every \(\alpha \in B(\hat{\alpha })\)

$$\displaystyle\begin{array}{rcl} 0 \leq v(\alpha ) -\{ (1-\alpha )p_{\hat{\alpha },1} +\alpha p_{\hat{\alpha },2}\} <\frac{1} {n}.& & {}\\ \end{array}$$

It follows from the compactness of [0, 1] that there exists a finite set \(\{\hat{\alpha }_{i}\}_{i=1}^{M} \subset [0,1]\) such that \(\{B(\hat{\alpha }_{i})\}_{i=1}^{M}\) is a covering of [0, 1].

We define \(v_{n}: [0,1] \rightarrow \mathbb{R}\) by

$$\displaystyle\begin{array}{rcl} v_{n}(\alpha ) =\max _{i}[(1-\alpha )p_{\hat{\alpha }_{i},1} +\alpha p_{\hat{\alpha }_{i},2}].& & {}\\ \end{array}$$

Then it is straightforward that v(α) ≥ v n (α) for every α ∈ [0, 1]. Moreover, we see that

$$\displaystyle\begin{array}{rcl} \sup _{\alpha }\vert v(\alpha ) - v_{n}(\alpha )\vert <\frac{1} {n}.& & {}\\ \end{array}$$

In fact, pick an arbitrary α ∈ [0, 1]. Then there is \(j \in \{ 1,\cdots \,,M\}\) such that \(\alpha \in B(\hat{\alpha }_{j})\). This implies

$$\displaystyle\begin{array}{rcl} 0 \leq v(\alpha ) - v_{n}(\alpha ) \leq v(\alpha ) -\{ (1-\alpha )p_{\hat{\alpha }_{j},1} +\alpha p_{\hat{\alpha }_{j},2}\} <\frac{1} {n}.& & {}\\ \end{array}$$

Finally we see that

$$\displaystyle\begin{array}{rcl} \int v_{n}(\alpha )\,\mathrm{d}\mu ^{1}(\alpha ) \leq \int v_{ n}(\alpha )\,\mathrm{d}\mu ^{2}(\alpha ).& & {}\\ \end{array}$$

Since u(Δ(C)) and \(U^{1}(\mathcal{L}) = U^{2}(\mathcal{L})\) are closed intervals, we can assume, without loss of generality, that there exist \(\{l_{c,i}\}_{i=1}^{M} \subset \varDelta (C)\) and \(\{l_{i}\}_{i=1}^{M} \subset \mathcal{L}\) satisfying

$$\displaystyle\begin{array}{rcl} u^{1}(l_{ c,i}) = u^{2}(l_{ c,i}) = p_{\hat{\alpha }_{i},1},\text{ and }U^{1}(\{l_{ i}\}) = U^{2}(\{l_{ i}\}) = p_{\hat{\alpha }_{i},2}.& & {}\\ \end{array}$$

Thus we can rewrite \(v_{n}\) as

$$\displaystyle\begin{array}{rcl} v_{n}(\alpha )& =& \max _{i}(1-\alpha )u^{1}(l_{ c,i}) +\alpha U^{1}(\{l_{ i}\}) =\max _{i}(1-\alpha )u^{2}(l_{ c,i}) +\alpha U^{2}(\{l_{ i}\}). {}\\ \end{array}$$

Consider the menu \(x^{n} =\{ l_{c,i} \otimes \{ l_{i}\}\vert i = 1,\cdots \,,M\} \in \mathcal{Z}^{1}\). Then it follows from Lemma 14 that

$$\displaystyle\begin{array}{rcl} \int v_{n}(\alpha )\,\mathrm{d}\mu ^{1}(\alpha ) = U^{1}(x^{n}) \leq U^{2}(x^{n}) =\int v_{ n}(\alpha )\,\mathrm{d}\mu ^{2}(\alpha ),& & {}\\ \end{array}$$

which completes the proof of (b)\(\Rightarrow\)(c).
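To illustrate the approximation step, the following sketch builds \(v_{n}\) as a maximum of supporting lines for a hypothetical convex function and checks the resulting integral inequality for two hypothetical discrete beliefs, where \(\mu ^{1}\) second-order stochastically dominates \(\mu ^{2}\) (a mean-preserving spread); all numerical choices are illustrative assumptions, not part of the proof.

```python
import numpy as np

def v(a):
    """A hypothetical continuous convex function of the discount factor."""
    return (a - 0.5) ** 2

def v_prime(a):
    return 2.0 * (a - 0.5)

def supporting_line(a_hat):
    """Supporting line of v at a_hat, written as (1-a)*p1 + a*p2
    (p1 is its value at a = 0 and p2 its value at a = 1)."""
    p1 = v(a_hat) + v_prime(a_hat) * (0.0 - a_hat)
    p2 = v(a_hat) + v_prime(a_hat) * (1.0 - a_hat)
    return p1, p2

M = 20                                    # number of tangency points
lines = np.array([supporting_line(h) for h in np.linspace(0.0, 1.0, M)])

def v_n(a):
    """Maximum of the supporting lines: a convex minorant of v."""
    return np.max((1 - a) * lines[:, 0] + a * lines[:, 1])

grid = np.linspace(0.0, 1.0, 2001)
gap = np.array([v(a) - v_n(a) for a in grid])
assert np.all(gap >= -1e-12)              # v >= v_n everywhere
print("sup |v - v_n| =", gap.max())       # shrinks as M grows

# Two hypothetical discrete beliefs: mu2 is a mean-preserving spread of mu1,
# so mu1 second-order stochastically dominates mu2 and integrals of the
# convex function v_n are ordered accordingly.
mu1 = {0.5: 1.0}
mu2 = {0.3: 0.5, 0.7: 0.5}
int1 = sum(p * v_n(a) for a, p in mu1.items())
int2 = sum(p * v_n(a) for a, p in mu2.items())
assert int1 <= int2
print("integral under mu1 =", int1, "<= integral under mu2 =", int2)
```

In the proof, the integral inequality for \(v_{n}\) is obtained from Lemma 14 via the menu \(x^{n}\); here it is verified directly for the illustrative beliefs, and letting the number of tangency points grow mirrors the \(1/n\) bound above.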

((c)\(\Rightarrow\)(a)) Let \(\mathcal{U}\) be the Banach space of all real-valued continuous functions on \(\mathcal{Z}\). Define the operator \(T^{i}: \mathcal{U}\rightarrow \mathcal{U}\) by

$$\displaystyle\begin{array}{rcl} T^{i}(U)(x) \equiv \int \max _{ l\in x}\left ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\right )\mathrm{d}\mu ^{i}(\alpha ).& & {}\\ \end{array}$$

Pick \(x \in \mathcal{Z}\) arbitrarily. Note that, for all \(U \in \mathcal{U}\), \(\max _{l\in x}{\bigl ((1-\alpha )u(l_{c}) +\alpha \int _{\mathcal{Z}}U(z)\,\mathrm{d}l_{z}\bigr )}\) is continuous and convex with respect to α. Hence, by condition (c), it holds that \(T^{1}(U)(x) \leq T^{2}(U)(x)\) for all \(x \in \mathcal{Z}\).

For i = 1, 2, let \(T^{i,n}\) denote the operator defined by n iterations of \(T^{i}\). We show, by mathematical induction, that \(T^{1,n}(U)(x) \leq T^{2,n}(U)(x)\) for all \(x \in \mathcal{Z}\) and n = 1, 2, ⋯ ; the case n = 1 was shown above. Assume that \(T^{1,k}(U)(x^{{\prime}}) \leq T^{2,k}(U)(x^{{\prime}})\) for all \(x^{{\prime}}\in \mathcal{Z}\). Pick \(x \in \mathcal{Z}\) arbitrarily. Since \(T^{2}\) is monotone, the induction hypothesis implies \(T^{2}(T^{1,k}(U))(x) \leq T^{2}(T^{2,k}(U))(x)\). Moreover, since \(T^{1,k}(U)\) is in \(\mathcal{U}\), we have \(T^{1}(T^{1,k}(U))(x) \leq T^{2}(T^{1,k}(U))(x)\). These together imply \(T^{1,k+1}(U)(x) \leq T^{2,k+1}(U)(x)\) for all \(x \in \mathcal{Z}\). Since \(T^{i,n}(U)\) converges to \(U^{i}\), it follows that \(U^{1}(x) \leq U^{2}(x)\) for all \(x \in \mathcal{Z}\). The desired result follows because, whenever \(x \succ ^{1}\{l\}\) for \(x \in \mathcal{Z}\) and \(l \in \mathcal{L}\), we have \(U^{2}(x) \geq U^{1}(x)> U^{1}(\{l\}) = U^{2}(\{l\})\), and hence \(x \succ ^{2}\{l\}\).

Proof of Theorem 5

  (i)

    We can solve (20.11) by the guess-and-verify method. Let

    $$\displaystyle{ V _{\mu }(s,\alpha ) \equiv A_{\mu }(\alpha )\frac{s^{1-\sigma }} {1-\sigma }. }$$
    (20.20)

    Considering the F.O.C. of

    $$\displaystyle{ \max _{s^{{\prime}}}\left ((1-\alpha )\frac{((1 + r)s - s^{{\prime}})^{1-\sigma }} {1-\sigma } +\alpha \int \left (A_{\mu }(\alpha ^{{\prime}})\frac{s^{{{\prime}}^{1-\sigma }}} {1-\sigma } \right )\mathrm{d}\mu (\alpha ^{{\prime}})\right ), }$$
    (20.21)

    we have

    $$\displaystyle{ (1-\alpha )((1 + r)s - s^{{\prime}})^{-\sigma } =\alpha \overline{A}_{\mu }s^{{{\prime}}^{-\sigma }}, }$$

    where \(\overline{A}_{\mu } \equiv \int A_{\mu }(\alpha ^{{\prime}})\,\mathrm{d}\mu (\alpha ^{{\prime}})\). By rearrangement, we can obtain the savings function

    $$\displaystyle{ s^{{\prime}} = \mathit{SR}(\alpha,\overline{A}_{\mu })(1 + r)s,\ \text{where}\ \mathit{SR}(\alpha,\overline{A}_{\mu }) \equiv \frac{(\alpha \overline{A}_{\mu })^{\frac{1} {\sigma } }} {(1-\alpha )^{\frac{1} {\sigma } } + (\alpha \overline{A}_{\mu })^{\frac{1} {\sigma } }}. }$$
    (20.22)

Substituting (20.22) into (20.21) and comparing the coefficients with (20.20),

$$\displaystyle\begin{array}{rcl} A_{\mu }(\alpha )& =& {\Bigl ((1-\alpha )^{\frac{1} {\sigma } } + (\alpha \overline{A}_{\mu })^{\frac{1} {\sigma } }\Bigr )}^{\sigma }(1 + r)^{1-\sigma }.{}\end{array}$$
(20.23)

For all \(\alpha \in [0,1]\) and A ≥ 0, define f(α, A) and F(A) as

$$\displaystyle{ f(\alpha,A) \equiv {\Bigl ( (1-\alpha )^{\frac{1} {\sigma } } + (\alpha A)^{\frac{1} {\sigma } }\Bigr )}^{\sigma }(1 + r)^{1-\sigma },\ F(A) \equiv \int f(\alpha,A)\,\mathrm{d}\mu (\alpha ). }$$
(20.24)

From (20.23), \(\overline{A}_{\mu }\) is characterized as a solution of \(\overline{A}_{\mu } = F(\overline{A}_{\mu })\). We want to show that there exists a unique \(\overline{A}> 0\) satisfying this equation. Note first that \(F(0) = (1-\bar{\alpha })(1 + r)^{1-\sigma }> 0\). Since \(F(A) \rightarrow \infty\) as \(A\rightarrow \infty\), L'Hôpital's rule implies

$$\displaystyle{ \lim _{A\rightarrow \infty }\frac{F(A)} {A} =\lim _{A\rightarrow \infty }F^{{\prime}}(A). }$$

Since

$$\displaystyle\begin{array}{rcl} F^{{\prime}}(A)& =& \int \frac{\partial f} {\partial A}\,\mathrm{d}\mu (\alpha ) =\int (1 + r)^{1-\sigma }\alpha ^{\frac{1} {\sigma } }{\Bigl ((1-\alpha )^{\frac{1} {\sigma } } + (\alpha A)^{\frac{1} {\sigma } }\Bigr )}^{\sigma -1}A^{\frac{1} {\sigma } -1}\mathrm{d}\mu (\alpha ) {}\\ & =& \int (1 + r)^{1-\sigma }\alpha ^{\frac{1} {\sigma } }\left (\left (\frac{1-\alpha } {A} \right )^{\frac{1} {\sigma } } +\alpha ^{\frac{1} {\sigma } }\right )^{\sigma -1}\mathrm{d}\mu (\alpha ), {}\\ \end{array}$$

we have \(\lim _{A\rightarrow \infty }F^{{\prime}}(A) =\bar{\alpha } (1 + r)^{1-\sigma } <1\). Hence, there exists a sufficiently large number \(\tilde{A}\) such that \(F(\tilde{A}) <\tilde{ A}\). By continuity of F, there exists \(\overline{A}> 0\) such that \(\overline{A} = F(\overline{A})\). Finally, since

$$\displaystyle\begin{array}{rcl} F^{{\prime\prime}}(A)& =& \int \frac{\partial ^{2}f} {\partial A^{2}}\,\mathrm{d}\mu (\alpha ) {}\\ & =& \frac{1-\sigma } {\sigma } (1 + r)^{1-\sigma }\int \alpha ^{\frac{1} {\sigma } }(1-\alpha )^{\frac{1} {\sigma } }{\Bigl ((1-\alpha )^{\frac{1} {\sigma } } + (\alpha A)^{\frac{1} {\sigma } }\Bigr )}^{\sigma -2}A^{\frac{1} {\sigma } -2}\,\mathrm{d}\mu (\alpha ), {}\\ \end{array}$$

F is either strictly convex or concave depending on \(\sigma \lessgtr 1\). Since \(\lim _{A\rightarrow \infty }F^{{\prime}}(A) <1\), \(\overline{A}\) must be unique.

  (ii)

    From (20.22), it is easy to verify that \(\frac{\partial \mathit{SR}(\alpha,A)} {\partial A}> 0\). Thus it suffices to show that \(\overline{A}_{\mu ^{1}} \lessgtr \overline{A}_{\mu ^{2}}\) if \(\sigma \lessgtr 1\). Note first that f defined as (20.24) is strictly convex or strictly concave in α according as \(\sigma <1\) or \(\sigma> 1\). Indeed, for any α ∈ (0, 1) and A > 0,

    $$\displaystyle\begin{array}{rcl} \frac{\partial ^{2}f} {\partial \alpha ^{2}} & =& (1 + r)^{1-\sigma }\frac{1-\sigma } {\sigma } {\Bigl ((1-\alpha )^{\frac{1} {\sigma } } + (\alpha A)^{\frac{1} {\sigma } }\Bigr )}^{\sigma -2} {}\\ & & \times {\Bigl (2(\alpha (1-\alpha ))^{\frac{1} {\sigma } -1}A^{\frac{1} {\sigma } } + (1-\alpha )^{\frac{1} {\sigma } -2}(\alpha A)^{\frac{1} {\sigma } } +\alpha ^{\frac{1} {\sigma } -2}((1-\alpha )A)^{\frac{1} {\sigma } }\Bigr )} \gtrless 0 {}\\ \end{array}$$

    whenever \(\sigma \lessgtr 1\). Since \(\mu ^{1}\) second-order stochastically dominates \(\mu ^{2}\),

    $$\displaystyle{ \overline{A}_{\mu ^{2}} =\int f(\alpha,\overline{A}_{\mu ^{2}})\,\mathrm{d}\mu ^{2}(\alpha ) \gtrless \int f(\alpha,\overline{A}_{\mu ^{ 2}})\,\mathrm{d}\mu ^{1}(\alpha ) }$$
    (20.25)

    depending on \(\sigma \lessgtr 1\). Let \(F^{1}(A) \equiv \int f(\alpha,A)\,\mathrm{d}\mu ^{1}(\alpha )\). We know from the proof of part (i) that \(F^{1}(0)> 0\) and that \(\overline{A}_{\mu ^{1}}\) is the unique positive solution of \(F^{1}(A) = A\). Hence, \(F^{1}(A) \gtrless A\ \mbox{ if $A \lessgtr \overline{A}_{\mu ^{1}}$}\). Taking this observation and (20.25) together, \(\overline{A}_{\mu ^{2}} \gtrless \overline{A}_{\mu ^{1}}\) if \(\sigma \lessgtr 1\).
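Both parts of the proof can be illustrated numerically. The sketch below iterates on the fixed-point equation \(\overline{A} = F(\overline{A})\) from (20.24) and evaluates the savings rate (20.22); the interest rate, the values of σ, and the two discrete beliefs (a degenerate \(\mu ^{1}\) and a mean-preserving spread \(\mu ^{2}\)) are hypothetical choices for illustration only.

```python
import numpy as np

def solve_A_bar(mu, sigma, r, tol=1e-12, max_iter=10_000):
    """Iterate on A = F(A), with F as in (20.24); the iteration starts at
    F(0) > 0 and increases monotonically toward the unique fixed point."""
    alphas = np.array(list(mu.keys()))
    probs = np.array(list(mu.values()))

    def F(A):
        return np.sum(
            probs
            * ((1 - alphas) ** (1 / sigma) + (alphas * A) ** (1 / sigma)) ** sigma
            * (1 + r) ** (1 - sigma)
        )

    A = F(0.0)
    for _ in range(max_iter):
        A_next = F(A)
        if abs(A_next - A) < tol:
            break
        A = A_next
    return A

def savings_rate(alpha, A_bar, sigma):
    """SR(alpha, A_bar) as in (20.22)."""
    num = (alpha * A_bar) ** (1 / sigma)
    return num / ((1 - alpha) ** (1 / sigma) + num)

r = 0.05
mu1 = {0.5: 1.0}                # degenerate belief about the discount factor
mu2 = {0.3: 0.5, 0.7: 0.5}      # mean-preserving spread of mu1

for sigma in (0.5, 2.0):        # sigma < 1 and sigma > 1
    A1 = solve_A_bar(mu1, sigma, r)
    A2 = solve_A_bar(mu2, sigma, r)
    print(f"sigma = {sigma}:  A_bar(mu1) = {A1:.4f}, A_bar(mu2) = {A2:.4f}, "
          f"SR(0.5, mu1) = {savings_rate(0.5, A1, sigma):.4f}, "
          f"SR(0.5, mu2) = {savings_rate(0.5, A2, sigma):.4f}")
```

Under these assumptions the output exhibits the pattern of part (ii): the more dispersed belief yields a larger \(\overline{A}\), and hence a higher savings rate, when σ < 1, and a smaller one when σ > 1.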

Addendum: Recent Developments

This addendum has been newly written for this book chapter.

If a DM is uncertain about her future preference, she may prefer to leave some options open rather than choose a completely spelled-out future plan. This behavior is called preference for flexibility and is recognized as an important aspect of sequential decision making. Kreps (1979, 1992) and Dekel et al. (2001) provide a behavioral foundation for preference for flexibility and derive the set of future preferences, called the subjective state space, from observable choice behavior. Although preference for flexibility arises inherently in a dynamic setup, the derivation of the subjective state space has been considered within a two-period model. Higashi et al. (2009) extend their model to an infinite-horizon setting and specify the subjective state space to be the set of sequences of discount factors.

In Higashi et al. (2009), the belief about future discount factors is assumed to be constant, and hence the DM's attitude toward flexibility is the same over time. Some recent works consider more general models of preference for flexibility in a dynamic setup. Krishna and Sadowski (2014) provide a complementary result to ours. To introduce their model, let S be a finite set of objective states. A state-contingent infinite-horizon consumption problem (S-IHCP) is a function specifying, for each s ∈ S, an opportunity set of lotteries over pairs of current consumption and an S-IHCP in the next period. They show that there exists a compact metric space \(\mathcal{F}\) that is linearly homeomorphic to \(\mathcal{H}(\mathcal{K}(\varDelta (C \times \mathcal{F})))\), where \(\mathcal{H}(X)\) denotes the set of functions from S to a compact metric space X. A generic element of \(\mathcal{F}\) is denoted by f, and for a lottery \(l \in \varDelta (C \times \mathcal{F})\) its marginals on C and \(\mathcal{F}\) are denoted by \(l_{c}\) and \(l_{f}\), respectively. A preference \(\succapprox\) is defined on \(\mathcal{F}\simeq \mathcal{H}(\mathcal{K}(\varDelta (C \times \mathcal{F})))\).

Krishna and Sadowski (2014) consider the following representation. The DM has a subjective belief about the objective states S captured by a Markov process, that is, a pair consisting of a transition probability Π: S × S → [0, 1] and a stationary distribution (or initial prior) π over S. The subjective state space of this model is the set of all vN-M utility functions over C, denoted by \(\mathcal{U}:= \left \{u \in \mathbb{R}^{C}:\sum u_{i} = 0\right \}\). A belief about subjective states depends on the objective state, that is, for each s ∈ S, \(\mu _{s}\) is a probability measure on \(\mathcal{U}\). Finally, let δ ∈ (0, 1) be a discount factor. A preference \(\succapprox\) on \(\mathcal{F}\) admits a representation of Dynamic Preference for Flexibility (a DPF representation) with components \(((\varPi,\pi ),(\mu _{s})_{s\in S},\delta )\) if \(V _{0}(f) \equiv \sum _{s}V (f,s)\pi (s)\) represents \(\succapprox\), where \(V (\cdot,s): \mathcal{F}\rightarrow \mathbb{R}\) is defined as

$$\displaystyle{ V (f,s) =\sum _{s^{{\prime}}\in S}\varPi (s,s^{{\prime}})\left [\int _{ \mathcal{U}}\max _{l\in f(s^{{\prime}})}[u(l_{c}) +\delta V (l_{f},s^{{\prime}})]\,\mathrm{d}\mu _{ s^{{\prime}}}(u)\right ], }$$

and \(V (l_{f},s^{{\prime}}) \equiv \int V (g,s^{{\prime}})\,\mathrm{d}l_{f}(g)\). Since π is a stationary distribution, which satisfies \(\pi (s^{{\prime}}) =\sum _{s}\pi (s)\varPi (s,s^{{\prime}})\), the representation is rewritten as

$$\displaystyle{ V _{0}(f) =\sum _{s}\pi (s)\left [\int _{\mathcal{U}}\max _{l\in f(s)}[u(l_{c}) +\delta V (l_{f},s)]\,\mathrm{d}\mu _{s}(u)\right ]. }$$

In this representation, the Markov process with transition probability Π captures persistent shocks to the objective states, and the probability measures \((\mu _{s})_{s\in S}\) correspond to unobservable transitory shocks to future utilities. Therefore, the attitude toward flexibility may change with the realization of the objective state. Krishna and Sadowski prove that a preference \(\succapprox\) satisfies suitable axioms if and only if it has a DPF representation. Moreover, they prove that \((\mu _{s})_{s\in S}\) are unique up to a common scaling, and that (Π, π) and δ are unique.
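To make the recursion concrete, here is a minimal value-iteration sketch for a heavily simplified, stationary special case. The objective states, consumption levels, transition matrix, the finite taste draws standing in for \(\mu _{s}\), and the single recursive problem with a degenerate continuation are all hypothetical simplifications introduced for illustration; they are not constructions from Krishna and Sadowski (2014).

```python
import numpy as np

# Hypothetical finite primitives: two objective states, a transition matrix
# Pi, and for each state a finite set of taste draws (probability, utility
# function over consumption) standing in for mu_s.
S = [0, 1]
delta = 0.9
Pi = np.array([[0.8, 0.2],
               [0.3, 0.7]])
mu = {
    0: [(0.5, lambda c: 1.0 * c), (0.5, lambda c: -1.0 * c)],  # volatile tastes
    1: [(1.0, lambda c: 0.5 * c)],                             # stable taste
}

# A single recursive problem f: in each state a menu of consumption levels,
# with the continuation problem equal to f itself (a stationary toy case).
menu = {0: [0, 1, 2], 1: [1]}

def value_iteration(tol=1e-10, max_iter=10_000):
    V = np.zeros(len(S))
    for _ in range(max_iter):
        # Flexibility value in state s': the expected maximum over the menu,
        # taken after the taste draw u is realized.
        flex = np.array([
            sum(p * max(u(c) + delta * V[sp] for c in menu[sp]) for p, u in mu[sp])
            for sp in S
        ])
        V_next = Pi @ flex          # average over next-period states s'
        if np.max(np.abs(V_next - V)) < tol:
            return V_next
        V = V_next
    return V

print("V(f, s) by state:", value_iteration())
```

Restricting attention to a problem whose continuation is the problem itself keeps the fixed point finite-dimensional; the general representation instead iterates over menus of lotteries on \(C \times \mathcal{F}\) and pairs the Markov shocks with state-dependent taste uncertainty, which is what allows the attitude toward flexibility to vary with the objective state.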

There are two remarks related to our study (Higashi et al. 2009). First, their DPF representation allows persistent shocks to subjective states, while preference shocks are i.i.d. in our model. Second, a DPF representation can capture random discounting by specifying \(\mu _{s}(\{\lambda \bar{u}:\lambda \geq 0\}) = 1\) for some fixed \(\bar{u} \in \mathcal{U}\); as a special case, a random discount factor representation is thus behaviorally characterized.

Another attempt to accommodate a changing preference for flexibility is made by Higashi et al. (2014). In this paper, we extend the previous model to allow a prior action to affect the future attitude toward flexibility. For example, a DM who has invested in self-improvement, such as health or education, is more likely to expect new information about her future preferences and hence may have a greater demand for flexibility. More formally, we incorporate histories of past consumption, \(h = (c_{-T},\cdots \,,c_{-1})\), into Higashi et al. (2009) and consider a set of preferences \(\{\succapprox _{h}\}_{h\in H}\). The following recursive representation is axiomatized: there exist a non-constant continuous function \(u: C \rightarrow \mathbb{R}\) and a history-dependent probability measure \(\mu _{h}\) on the set [0, 1] of discount factors such that, for all h, \(\succapprox _{h}\) on \(\mathcal{Z}\) is represented by

$$\displaystyle{ V (x,h) =\int _{[0,1]}\max _{l\in x}\int _{C\times \mathcal{Z}}{\Bigl ((1-\alpha )u(c) +\alpha V (z,\mathit{hc})\Bigr )}\mathrm{d}l(c,z)\,\mathrm{d}\mu _{h}(\alpha ), }$$

where \(\mathit{hc} = (c_{-T+1},\cdots \,,c_{-1},c)\) denotes the history obtained by updating \(h = (c_{-T},\cdots \,,c_{-1})\) with current consumption c. This representation can capture how past consumption changes the future attitude toward flexibility.

As an application of random discounting, Higashi et al. (2014) investigate impatience comparisons within the random discounting model. Time preference has traditionally been measured by the magnitude of the discount factor, elicited from choices among consumption streams involving a time trade-off. This elicitation implicitly assumes that choices are made under commitment. In sequential decision making, however, the degree of impatience may be affected by two potentially conflicting effects: one is pure time preference, a preference for early consumption, and the other is preference for flexibility, the attitude of keeping one's options open for the future.

In this paper, we consider preference over menus of consumption streams in two periods and provide behavioral definitions for impatience comparisons among menus having a time trade-off. If one menu includes more options allowing earlier consumption than another menu (such as x = { (100, 0),  (70, 35)} vs y = { (50, 60),  (0, 120)}), an agent expecting to be more impatient in the future will tend to choose the former. Thus, if agent 2 is more impatient than agent 1, we require that

$$\displaystyle{x \succapprox ^{1}y\ \Rightarrow \ x \succapprox ^{2}y.}$$

This is a natural extension of impatience comparisons made under commitment. We show that in the random discounting model, the relative degree of impatience is measured as a probability shift in the monotone likelihood ratio order (MLR), which is characterized via behavioral comparisons among menus defined as above.
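The example menus above can be evaluated directly in a two-period version of the random discounting model. The sketch below assumes linear consumption utility and two hypothetical two-point beliefs over the weight on second-period consumption, with agent 2's belief shifted toward impatience in the monotone likelihood ratio order; the numbers are illustrative only.

```python
# Menus of two-period consumption streams from the text:
# x contains the options allowing earlier consumption, y the later ones.
x = [(100, 0), (70, 35)]
y = [(50, 60), (0, 120)]

def menu_value(menu, mu):
    """Two-period random discounting value with linear consumption utility:
    integrate the indirect utility max over streams of (1-a)*c1 + a*c2."""
    return sum(p * max((1 - a) * c1 + a * c2 for c1, c2 in menu)
               for a, p in mu.items())

# Hypothetical beliefs over the weight a on second-period consumption;
# mu2 shifts probability toward low a (more impatient) relative to mu1,
# which on a common two-point support is a monotone likelihood ratio shift.
mu1 = {0.2: 0.3, 0.8: 0.7}   # relatively patient
mu2 = {0.2: 0.7, 0.8: 0.3}   # relatively impatient

for name, mu in (("agent 1", mu1), ("agent 2", mu2)):
    print(name, "U(x) =", menu_value(x, mu), "U(y) =", menu_value(y, mu))
```

With these beliefs the relatively patient agent ranks y above x, while the relatively impatient agent ranks x above y, which is the tendency that the behavioral comparison \(x \succapprox ^{1}y\ \Rightarrow \ x \succapprox ^{2}y\) is meant to capture.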

Copyright information

© 2016 Springer Japan

Cite this chapter

Higashi, Y., Hyogo, K., Takeoka, N. (2016). Subjective Random Discounting and Intertemporal Choice. In: Ikeda, S., Kato, H., Ohtake, F., Tsutsui, Y. (eds) Behavioral Economics of Preferences, Choices, and Happiness. Springer, Tokyo. https://doi.org/10.1007/978-4-431-55402-8_20
