1 Introduction

We consider multi-player perfect information games, without chance moves, which have arbitrary action spaces and infinite duration. As a consequence of Flesch et al. (2010) and Purves and Sudderth (2011), these games admit a subgame-perfect \(\epsilon \)-equilibrium, \(\epsilon \)-SPE for brevity, for every \(\epsilon >0\), provided that every player’s payoff function is bounded and continuous on the whole set of plays. Here, continuity is meant with respect to the product topology on the set of plays, with the set of actions given its discrete topology.

The question naturally arises as to what happens in games in which continuity of the payoffs is not satisfied everywhere, but only on a subset of plays. More precisely, on which subsets of plays can discontinuity be allowed without losing the existence of an \(\epsilon \)-SPE for every \(\epsilon >0\)? We show that discontinuities can be allowed on a sigma-discrete set of plays, i.e., on a countable union of discrete sets. In the special case when the action space is countable, a set of plays is sigma-discrete if and only if it is countable.

Our main findings are as follows:

  [1] If the set of discontinuities is sigma-discrete, then an \(\epsilon \)-SPE exists for every \(\epsilon >0\).

  [2] Given a set of plays that is not sigma-discrete, there exists a game in which the payoff functions are continuous outside of the given set, and the game admits no subgame-perfect \(\epsilon \)-equilibrium for small \(\epsilon > 0\). This is achieved by embedding a game given in Flesch et al. (2014), which has no \(\epsilon \)-SPE for small \(\epsilon >0\).

The structure of the paper is as follows. In Sect. 2, we define the model. Then, in Sect. 3, we present the main results and discuss the main ideas of the proof. Sections 4 and 5 contain the proofs, and Sect. 6 provides some concluding remarks.

2 The model and preliminaries

The game Let \(N = \{1,\ldots , n\}\) denote the set of players and let \(A\) be an arbitrary non-empty set. Let \({\mathbb {N}} = \{0,1,2,\ldots \}\). We denote by \(H\) the set of all finite sequences of elements of \(A\), including the empty sequence ø, and we denote by \(P = A^{\mathbb {N}}\) the set of all infinite sequences of elements of \(A\). The elements of \(A\) are called actions, the elements of \(H\) are called histories, and the elements of \(P\) are called plays. There is a function \(\iota : H \rightarrow N\) which assigns an active player to each history. Further, each player \(i\in N\) is given a so-called payoff function \(u_{i} : P \rightarrow {\mathbb {R}}\).

The game is played as follows: At period 0, player \(\iota \)(ø) chooses an action \(a_{0}\). In general, suppose that up to period \(t \in {\mathbb {N}}\) of the game, the sequence \(h = (a_{0},\ldots , a_{t})\) of actions has been chosen. Then, at period \(t+1\), player \(\iota (h)\) chooses an action \(a_{t + 1}\). The chosen action is observed by all players. Continuing this way, the players generate a play \(p = (a_{0}, a_{1},\ldots )\), and finally, each player \(i\in N\) receives payoff \(u_{i}(p)\).

We remark that this setup encompasses all games of finite duration. Another important special case is the situation when the players receive instantaneous payoffs at every period of the game and then aggregate them into one payoff, for example, by taking the discounted sum.

The topological structure We endow the set \(A\) with the discrete topology and \(P=A^{{\mathbb {N}}}\) with the product topology \({\mathcal {T}}\). A basis for \((P,{\mathcal {T}})\) is formed by the cylinder sets \(P(h)= \{p \in P: h \prec p\}\) for \(h \in H\), where for a history \(h \in H\) and a play \(p \in P\) we write \(h \prec p\) if \(h\) is an initial segment of \(p\). In this topology, a sequence of plays \((p_n)_{n\in {\mathbb {N}}}\) converges to a play \(p\) precisely when for every \(k\in {\mathbb {N}}\) there exists an \(N_k\in {\mathbb {N}}\) such that for every \(n\ge N_k\), the first \(k\) coordinates of \(p_n\) coincide with those of \(p\). Note that \((P,{\mathcal {T}})\) is completely metrizable, and moreover, it is separable (and hence Polish) if and only if \(A\) is countable.
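
For concreteness, one standard complete metric inducing \({\mathcal {T}}\) is given, for plays \(p=(a_0,a_1,\ldots )\) and \(q=(b_0,b_1,\ldots )\), by

$$\begin{aligned} d(p,q) = \left\{ \begin{array}{ll} 0 &{}\quad \text {if } p = q,\\ 2^{-\min \{t \in {\mathbb {N}}:\, a_{t} \ne b_{t}\}} &{}\quad \text {if } p \ne q; \end{array} \right. \end{aligned}$$

the open ball of radius \(2^{-t}\) around \(p\) under this metric is exactly the cylinder set of the prefix of \(p\) of length \(t+1\).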

A function \(f : P \rightarrow {\mathbb {R}}\) is said to be continuous at a play \(p\in P\) if, for every sequence of plays \((p_n)_{n\in {\mathbb {N}}}\) converging to \(p\), we have \(\lim _{n\rightarrow \infty } f(p_n) = f(p)\). Thus, \(f\) is continuous at \(p\) precisely when for every \(\delta >0\) there is a history \(h\prec p\) such that \(|f(p)-f(q)|\le \delta \) for every \(q \in P(h)\). Intuitively, continuity at \(p\) means that, after following \(p\) for a large number of periods, further actions have little effect on the value of \(f\). The function \(f\) is said to be discontinuous at a play \(p\) if it is not continuous at \(p\). Further, \(f\) is said to be continuous if it is continuous at each play in \(P\).
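
As a simple illustration of this criterion (an illustrative example of ours), let \(A = \{s,c\}\) and consider the function

$$\begin{aligned} f(p) = \left\{ \begin{array}{ll} 1 &{}\quad \text {if } p = (c,c,c,\ldots ),\\ 0 &{}\quad \text {otherwise}. \end{array} \right. \end{aligned}$$

Then \(f\) is discontinuous at the play \((c,c,c,\ldots )\): every cylinder \(P(h)\) with \(h \prec (c,c,c,\ldots )\) contains plays \(q\) with \(f(q)=0\), so no history satisfies the criterion above with \(\delta < 1\). At every other play, \(f\) is constant on the cylinder of any prefix that contains the first occurrence of \(s\), and hence continuous.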

Subgames For the concatenation of histories and actions, we use the following notation. For a history \(h=(a_0,\ldots ,a_t) \in H\) and a finite sequence of actions \((b_0,\ldots ,b_m)\) in \(A\), let \((h,b_0,\ldots ,b_m)=(a_0,\ldots ,a_t,b_0,\ldots ,b_m)\), whereas for an infinite sequence of actions \((b_0,b_1,\ldots )\) in \(A\), let \((h,b_0,b_1,\ldots )=(a_0,\ldots ,a_t,b_0,b_1,\ldots )\).

Consider a history \(h = (a_{0}, \ldots , a_{t})\) for some \(t\in {\mathbb {N}}\). The subgame \(G(h)\) corresponding to \(h\) is played as follows: First, player \(\iota (h)\) chooses an action \(a_{t+1}\). In general, suppose that the sequence of actions \((a_{t+1},\ldots ,a_{t+m})\) has been chosen. Then, player \(\iota (h,a_{t+1},\ldots ,a_{t+m})\) chooses an action \(a_{t +m+ 1}\). Continuing this way, the players generate a play \(p = (h,a_{t+1}, a_{t+2}, \ldots )\) in \(P(h)\), and finally, each player \(i\in N\) receives payoff \(u_{i}(p)\).

Strategies A strategy for player \(i\) is a function \(\sigma _{i} : \iota ^{-1}(i) \rightarrow \Delta _c(A)\), where \(\iota ^{-1}(i)\) is the set of histories where player \(i\) moves and where \(\Delta _c(A)\) is the set of probability measures on \(A\) with a countable support. The interpretation is that if a history \(h\in \iota ^{-1}(i)\) arises, then \(\sigma _i\) prescribes player \(i\) to choose an action according to \(\sigma _i(h)\). A strategy is called pure if it always places probability one on a single action. A strategy profile is a tuple \((\sigma _{1}, \ldots , \sigma _{n})\) where \(\sigma _{i}\) is a strategy for player \(i\). Given a strategy profile \(\sigma = (\sigma _{1}, \ldots , \sigma _{n})\) and a strategy \(\eta _{i}\) for player \(i\), we write \(\sigma /\eta _{i}\) to denote the strategy profile obtained from \(\sigma \) by replacing \(\sigma _{i}\) with \(\eta _{i}\).

Every strategy profile \(\sigma \) induces a probability measure \(\mu _{\sigma }\) on the Borel sets of \(P\). Similarly, by considering the subgame for a history \(h\in H\), every strategy profile \(\sigma \) induces a probability measure \(\mu _{\sigma ,h}\) on the Borel sets of \(P\), such that \(\mu _{\sigma ,h}(P(h))=1\). We denote the expected payoff for player \(i\in N\) at the beginning of the game by \(u_i(\sigma )\) and in the subgame starting at \(h\) by \(u_i(\sigma ,h)\).
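
As an illustration of how a strategy profile induces expected payoffs, the following sketch estimates \(u_{i}(\sigma )\) by simulation in the special case, mentioned above, in which payoffs are discounted sums of instantaneous rewards. This is a minimal sketch of ours; the function names, the truncation horizon, the discount factor, and the toy profile are illustrative choices and not part of the model.

```python
import random

# Minimal simulation sketch (illustrative, not from the paper): estimate
# expected payoffs by Monte Carlo when payoffs are discounted sums of
# instantaneous rewards. For simplicity, each strategy is defined at every
# history and maps it to a finitely supported lottery over actions.

def sample_play(iota, sigma, horizon):
    """Sample the first `horizon` actions of a play under the profile sigma."""
    history = ()
    for _ in range(horizon):
        player = iota(history)
        lottery = sigma[player](history)            # dict: action -> probability
        actions = list(lottery)
        weights = [lottery[a] for a in actions]
        history += (random.choices(actions, weights=weights, k=1)[0],)
    return history

def estimate_payoff(iota, sigma, reward, delta, horizon=200, samples=2000):
    """Monte Carlo estimate of the expected discounted payoff under sigma.

    Truncating after `horizon` periods changes the discounted sum by at most
    delta**horizon / (1 - delta) times a bound on the rewards, which is
    precisely why discounted payoffs are continuous on the set of plays."""
    total = 0.0
    for _ in range(samples):
        play = sample_play(iota, sigma, horizon)
        total += sum(delta ** t * reward(play[: t + 1]) for t in range(horizon))
    return total / samples

# Toy example: players 1 and 2 alternate, both mix uniformly over actions 0
# and 1, and the instantaneous reward is the action just chosen.
iota = lambda h: 1 + len(h) % 2
sigma = {1: lambda h: {0: 0.5, 1: 0.5}, 2: lambda h: {0: 0.5, 1: 0.5}}
print(estimate_payoff(iota, sigma, reward=lambda h: h[-1], delta=0.9))
```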

Subgame-perfect \(\epsilon \)-equilibrium Let \(\epsilon \ge 0\) be an error term. A strategy profile \(\sigma \) is called an \(\epsilon \)-equilibrium if no player can gain more than \(\epsilon \) by a unilateral deviation, i.e., if for each player \(i \in N\) and for each strategy \(\sigma _{i}^{\prime }\) of player \(i\), it holds that

$$\begin{aligned} u_{i}(\sigma ) \ge u_{i}(\sigma /\sigma _{i}^{\prime }) - \epsilon . \end{aligned}$$

A stronger concept arises if we require that the strategy profile induces an \(\epsilon \)-equilibrium in every subgame. A strategy profile \(\sigma \) is called a subgame-perfect \(\epsilon \)-equilibrium, \(\epsilon \)-SPE for brevity, if for each history \(h \in H\), each player \(i \in N\), and each strategy \(\sigma _{i}^{\prime }\) of player \(i\), it holds that

$$\begin{aligned} u_{i}(\sigma ,h) \ge u_{i}(\sigma /\sigma _{i}^{\prime }, h) - \epsilon . \end{aligned}$$

A 0-equilibrium is simply called an equilibrium, and a 0-SPE is simply called an SPE.

Existence of \(\epsilon \)-SPE An \(\epsilon \)-SPE exists for every \(\epsilon >0\) provided that each player’s payoff function is bounded and continuous. This follows from more general results in Flesch et al. (2010) and Purves and Sudderth (2011). Carmona (2005) shows that for every \(\epsilon >0\), an \(\epsilon \)-SPE exists under the assumption that the payoff functions are bounded and continuous at infinity. Continuity at infinity implies continuity but not vice versa. Flesch et al. (2014) describe a game in which every player’s payoff function is bounded and Borel measurable, yet the game admits no \(\epsilon \)-SPE for small \(\epsilon >0\).

Sigma-discrete and perfect sets Let \(X\) be a topological space. Consider a subset \(D \subseteq X\). A point \(x\in D\) is called an isolated point of \(D\) if \(x\) has an open neighborhood that contains no point of \(D\setminus \{x\}\). The set \(D\) is discrete if every point of \(D\) is isolated, and it is sigma-discrete if it is a countable union of discrete sets. If \(X\) is completely metrizable and separable (i.e., Polish), then \(D\) is sigma-discrete if and only if it is countable.

A non-empty subset of \(X\) is said to be perfect if it is closed and has no isolated points. By Koumoullis (1984), if \(X\) is completely metrizable, then a non-empty Borel subset \(D\) of \(X\) is sigma-discrete if and only if \(D\) does not contain a perfect subset of \(X\). Since the set of plays \(P\) is completely metrizable, these results apply to Borel subsets of \(P\).

3 Main results

In this section, we present our main results.

Theorem 1

Consider a perfect information game with bounded payoff functions. Suppose that for every player \(i\) the payoff function \(u_{i}\) is continuous outside a sigma-discrete subset of \(P\). Then, for every \(\epsilon >0\), the game admits an \(\epsilon \)-SPE.

In Appendix 7.3, we prove that the payoff functions satisfying the condition of Theorem 1 are Borel measurable. Let \(D_{i}\) denote the set of plays \(p\) such that \(u_{i}\) is not continuous at \(p\), and let \(D = \cup _{i \in N}D_{i}\). Under the hypothesis of Theorem 1, the set \(D_{i}\) is sigma-discrete for each \(i \in N\). Since the set of players \(N\) is finite, this is obviously equivalent to the requirement that the set \(D\) be sigma-discrete.

If \(D\) is a countable set, it is sigma-discrete. Moreover, if the set of actions \(A\) is countable, then \(D\) is sigma-discrete if and only if it is countable. The set of eventually constant plays is an example of a sigma-discrete set that is not discrete. Another such example is the set of eventually periodic plays.
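
To see that the set of eventually constant plays is indeed sigma-discrete (a short verification added for completeness), write it as

$$\begin{aligned} D = \bigcup _{n \in {\mathbb {N}}} D_{n}, \qquad D_{n} = \{p = (a_{0},a_{1},\ldots ) \in P : a_{k} = a_{n} \text { for all } k \ge n\}. \end{aligned}$$

Each \(D_{n}\) is discrete: if \(p \in D_{n}\) and \(h\) is the prefix of \(p\) of length \(n+1\), then every play in \(D_{n} \cap P(h)\) agrees with \(p\) up to period \(n\) and is constant from period \(n\) onwards, and hence equals \(p\). The set of eventually periodic plays can be handled similarly, by decomposing it according to the preperiod and the period.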

The proof of Theorem 1 is carried out in three steps. As a first step, we discretize the payoffs in the following sense: We show that there exist payoff functions \(\bar{u}_{1},\ldots ,\bar{u}_{n}\) such that each \(\bar{u}_{i}\) has finite range and is \(\epsilon \)-close to \(u_{i}\), and the set of discontinuity points \(\bar{D}_{i}\) of \(\bar{u}_{i}\) is contained in \(D_{i}\). This approach has the advantage that for each play \(p\) that is a continuity point of each function \(\bar{u}_{i}\) there exists a history \(h \prec p\) such that each \(\bar{u}_{i}\) is constant on the set \(P(h)\). Obviously, the subgame starting at history \(h\) has a \(\delta \)-SPE for every \(\delta > 0\). In contrast, the existence of such a history in the game with the original payoff functions is not trivial to establish. It is for that reason that working with discretized payoffs is crucial for our method.

As a second step, we prove Theorem 1 for the so-called stopping games with finite payoff range. Finally, we prove Theorem 1 in general along the following lines. We define \(H^{*}\) as the set of histories \(h \in H\) such that the subgame \(G(h)\) has an \(\epsilon \)-SPE for each \(\epsilon > 0\). We prove that \(H^{*} = H\) by contradiction: assuming that \(H^* \ne H\), we show that there is a history \(h \in H \setminus H^{*}\) such that \(G(h)\) can be reduced to a stopping game and thus shown to have an \(\epsilon \)-SPE for each \(\epsilon > 0\).

Theorem 1 cannot be strengthened to conclude the existence of a pure \(\epsilon \)-SPE; a counterexample is readily provided by Solan and Vieille (2003). The game is a two-player stopping game in which the two players move alternately, and each player can either stop the game or continue:

[Figure a: the two-player stopping game of Solan and Vieille (2003)]

The game admits no pure \(\epsilon \)-SPE for \(\epsilon < 1\), but does admit an \(\epsilon \)-SPE in randomized strategies: Player 1 always stops with probability 1, while player 2 always stops with probability \(\epsilon \). The main idea is that randomization by player 2 makes it impossible for player 1 to ever reach the only point of discontinuity in the game, the play \((c,c,c,\ldots )\). This conveys much of the intuition for how randomized strategies will be used in the proof of Theorem 1.
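
To spell out this intuition (a small computation added here): if \(\sigma _2\) denotes the strategy of player 2 that stops with probability \(\epsilon \) at each of his turns, then for every history \(h\) and every strategy \(\tau _1\) of player 1,

$$\begin{aligned} \mu _{(\tau _1,\sigma _2),h}\bigl (\{p \in P(h) : \text {the action } s \text { is never played after } h\}\bigr ) \le \lim _{k \rightarrow \infty }(1-\epsilon )^{k} = 0, \end{aligned}$$

since along such a play player 2 is active infinitely often and continues at each of his turns. Hence, in every subgame, the expected payoffs are determined by plays at which the payoff functions are continuous.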

We also prove a partial converse to Theorem 1. Recall that the game is specified by the set of players, the set of actions, the assignment of the active players to histories, and the payoff functions.

Theorem 2

Let \(A\) be any set of actions, and let \(D\) be a subset of \(P = A^{{\mathbb {N}}}\) that is not sigma-discrete. Then, there exists a two-player game with action set \(A\) and with payoff functions that are continuous outside of \(D\), such that the game admits no \(\epsilon \)-SPE for small \(\epsilon >0\).

The proof makes use of a game in Flesch et al. (2014) that admits no \(\epsilon \)-SPE for small \(\epsilon >0\). We embed this game into \(D\) and define the payoffs elsewhere in such a way that the large game admits no \(\epsilon \)-SPE for small \(\epsilon >0\) either.

The question that motivates the paper is on which subsets of plays the condition of payoff continuity can be dropped without losing the existence of an \(\epsilon \)-SPE. As shown above, the answer is: on sigma-discrete subsets of plays. Surprisingly, the answer is not strongly related to the topological notion of denseness, as illustrated by the following two examples.

Take \(D\) to be the set of eventually constant plays. Then, \(D\) is dense. However, \(D\) is sigma-discrete, and hence, Theorem 1 implies that any game with payoffs continuous outside of \(D\) has an \(\epsilon \)-SPE for each \(\epsilon > 0\). Conversely, let the action set be \(A = \{0,1,2\}\), and let \(D\) be the set \(\{1,2\}^{{\mathbb {N}}}\). Then, \(D\) is nowhere dense: it is closed, and it has empty interior since every cylinder set contains plays that use the action \(0\). It is also perfect (closed and without isolated points) and thus is not sigma-discrete. Hence, Theorem 2 implies that there is a game with payoff functions that are continuous outside of \(D\) which has no \(\epsilon \)-SPE for small \(\epsilon > 0\).

4 The proof of Theorem 1

4.1 A reduction to a finite range of payoffs

In this section, we argue that it suffices to prove Theorem 1 for games with payoff functions having finite range.

Lemma 3

Let \(f : P \rightarrow [-r,r]\) be a Borel measurable function and let \(\epsilon > 0\). Then, there exists a Borel measurable function \(\bar{f} : P \rightarrow [-r,r]\) such that [1] \(\bar{f}\) has finite range, [2] for every \(p \in P\), \(|\bar{f}(p) - f(p)| < \epsilon \), and [3] if \(f\) is continuous at a play \(p \in P\), then so is \(\bar{f}\).

The proof of Lemma 3 can be found in the Appendix.

We say that \(\bar{f}\) is an \(\epsilon \)-discretization of the function \(f\).

Now let \(G\) be a game with payoff functions \(u_{1}, \ldots , u_{n}\) satisfying the assumptions of Theorem 1. Fix an \(\epsilon >0\), and let \(\bar{u}_{1}, \ldots , \bar{u}_{n}\) be \(\epsilon \)-discretizations of the respective payoff functions. Then, the set of discontinuity points of the function \(\bar{u}_{i}\) is contained in the set of discontinuity points of the function \(u_{i}\) and is therefore sigma-discrete. Moreover, any \(\epsilon \)-SPE of the game with the payoff functions \(\bar{u}_{1}, \ldots , \bar{u}_{n}\) is a 2\(\epsilon \)-SPE of the game with the payoff functions \(u_{1}, \ldots , u_{n}\).

This shows that it is sufficient to prove Theorem 1 for games with payoff functions having finite range. In view of this result, we henceforth restrict our attention to games with a finite range of payoffs.

4.2 Stopping games with a finite range of payoffs

A game \(G\) is said to be a stopping game if the action space is \(A = \{s, c\}\), where \(s\) stands for “stop”, \(c\) stands for “continue”, and if for each \(t \in {\mathbb {N}}\) and each player \(i\), the payoff function \(u_{i}\) is constant on the set \(P(c^{t},s)\). Here, we write \((c^{t},s)\) to denote the history where action \(c\) has been played successively \(t\) times followed by the action \(s\). So, in a stopping game, the payoffs are fixed once the active player decides to play action “stop”.

For each \(t \in {\mathbb {N}}\) and each player \(i\), let \(\iota ^{t} = \iota (c^{t})\), let \(r_{i}^{t}\) denote player \(i\)’s constant payoff on \(P(c^{t},s)\), and let \(r_{i}^{\infty }\) denote \(u_{i}(c^{\infty })\). A stopping game can be represented as follows:

[Figure b: schematic representation of a stopping game with active players \(\iota ^{t}\), stopping payoffs \(r^{t}\), and continuation payoff \(r^{\infty }\)]
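
The defining feature of a stopping game can also be recorded as a small sketch (ours, with made-up payoff data): the payoff of a play depends only on the first period, if any, at which the action \(s\) is played.

```python
# Illustrative encoding (ours, with made-up data) of the payoffs of a
# stopping game: they depend on a play only through the first period at
# which the action "s" is played.

def stopping_payoff(play, r, r_inf):
    """Payoff vector of a play in a stopping game.

    `play` is a finite sequence over {"c", "s"} that is long enough to show
    the first "s" if the play ever stops; if no "s" occurs, the play is
    treated as the all-continue play c^infinity. `r(t)` is the payoff vector
    on P(c^t, s), and `r_inf` is the payoff vector of c^infinity."""
    for t, action in enumerate(play):
        if action == "s":
            return r(t)      # payoffs are fixed once somebody stops at period t
    return r_inf

# Made-up two-player data with a finite range of payoffs; note that the
# payoff functions are discontinuous at c^infinity whenever r(t) does not
# converge to r_inf.
r = lambda t: ((-1) ** t, (t + 1) % 2)
r_inf = (0, 0)
print(stopping_payoff(("c", "c", "s", "c"), r, r_inf))   # -> (1, 1)
```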

Lemma 4

A stopping game with a finite range of payoffs admits an \(\epsilon \)-SPE for each \(\epsilon > 0\).

This result follows from a more general theorem in Mashiah-Yaakovi (2014). For the sake of completeness, we give a direct proof of the lemma; see the Appendix.

4.3 Games with a finite range of payoffs

In this section, we prove Theorem 1 for games with a finite range of payoffs. Let \(G\) be a game satisfying the conditions of Theorem 1 such that, moreover, the payoff functions \(u_{1}, \ldots , u_{n}\) all have finite range. Let \(D_{i}\) denote the set of plays \(p\) such that \(u_{i}\) is not continuous at \(p\), and let \(D = \cup _{i \in N}D_{i}\). Then, for each \(p \in P \setminus D\), there exists a history \(h \prec p\) such that each \(u_{i}\) is constant on the set \(P(h)\). Consider

$$\begin{aligned} H^{*} = \{h \in H: \text {for each }\epsilon > 0\text { the game }G(h)\text { has an }\epsilon \text {-SPE}\}. \end{aligned}$$

The following lemma is straightforward.

Lemma 5

Let \(h \in H\). Then, \(h \in H^{*}\) if and only if for each \(a \in A\), \((h,a) \in H^{*}\).

Define the set

$$\begin{aligned} Q = \{q \in P:\text {for each }h \prec q\text { it holds that }h \notin H^{*}\}. \end{aligned}$$

Lemma 6

The set \(Q\) has the following properties:

  [1] \(Q\) is closed,

  [2] \(Q \subseteq D\),

  [3] \(H^{*} = \{h \in H: P(h) \cap Q = \emptyset \}\),

  [4] \(Q\) is empty if and only if \(H^{*} = H\).

Proof

  [1] To prove that \(Q\) is closed, we show that the complement of \(Q\) is open. Thus, take \(p \in P \setminus Q\). Then, there is an \(h \prec p\) such that \(h \in H^{*}\), and hence, \(P(h) \subseteq P \setminus Q\).

  [2] To prove that \(Q \subseteq D\), take a play \(p \in P \setminus D\). Since \(p\) is a continuity point of \(u_{1}, \ldots , u_{n}\), and since each payoff function has finite range, there is an \(h \prec p\) such that for every \(i \in N\) the function \(u_{i}\) is constant on \(P(h)\). But then \(h \in H^{*}\), as any strategy profile is an SPE of \(G(h)\). Hence, \(p \in P \setminus Q\).

  [3] The inclusion \(H^{*} \subseteq \{h \in H: P(h) \cap Q = \emptyset \}\) is trivial. We prove the converse. Thus, take \(h \in H\) such that \(P(h) \cap Q = \emptyset \). Suppose that \(h \in H \setminus H^{*}\). We recursively define a sequence \(h_{0} \prec h_{1} \prec \cdots \) of elements of \(H \setminus H^{*}\), as follows: Let \(h_{0} = h\). Suppose we have defined \(h_{k} \in H \setminus H^{*}\). By Lemma 5, there is an action \(a_{k} \in A\) such that \((h_{k},a_{k})\in H \setminus H^{*}\). Let \(h_{k+1} = (h_{k},a_{k})\). Define \(p = (h, a_{0},a_{1},\ldots )\). Every prefix of \(p\) extending \(h\) is one of the histories \(h_{k}\), and by Lemma 5 every prefix of \(h\) lies in \(H \setminus H^{*}\) as well, since a history in \(H^{*}\) has all of its extensions in \(H^{*}\). Then, \(p \in P(h) \cap Q\), which is a contradiction.

  [4] This follows immediately from [3]. \(\square \)

To prove Theorem 1, we need to show that \(H = H^{*}\). Suppose not. Then, \(Q\) is non-empty. If \(Q\) had no isolated points, then by the above lemma it would be a perfect set contained in \(D\), contradicting the assumption of Theorem 1 that \(D\) is sigma-discrete. Therefore, \(Q\) has an isolated point, say \(p \in Q\). Choose an \(h \in H\) such that \(P(h) \cap Q = \{p\}\). We argue that \(h \in H^{*}\), thus obtaining a contradiction to item [3] of the lemma above.

Thus, take an \(\epsilon > 0\). We show that \(G(h)\) has an \(\epsilon \)-SPE.

For this purpose, we argue that \(G(h)\) can be reduced to a stopping game. First, we rename the actions so that \(p = (c,c,c,\ldots )\). Hence, \(h = c^{k}\) for some \(k \in {\mathbb {N}}\). Take an \(m \ge k\), and consider the history \(c^{m}\). If \(A\) is a singleton, there is nothing to prove, so we assume that \(A\) contains at least two actions. For each \(a \ne c\), the cylinder \(P(c^{m},a)\) does not contain \(p\) and is contained in \(P(h)\). As \(P(h) \cap Q = \{p\}\), we have \(P(c^{m},a) \cap Q =\emptyset \). Therefore, by item [3] of the previous lemma, \((c^{m},a) \in H^{*}\). This allows us to replace the subgame \(G(c^{m},a)\) by the payoff vector of some \(\tfrac{1}{2}\epsilon \)-SPE of \(G(c^{m},a)\), denoted \((w_{1}^{m}(a),\ldots ,w_{n}^{m}(a))\). Choose an action \(a^{m} \in A \setminus \{c\}\) so as to maximize \(w_{i}^{m}(a)\) for the player \(i\) active at the history \(c^{m}\). We can rename the actions so that \(a^{m} = s\) for each \(m \ge k\).

We thus obtain a game \(G^{\prime }(h)\) where there are two distinguished actions, \(s\) and \(c\). Playing any action other than \(c\) terminates the game (in the sense that subsequent actions do not affect the payoffs). Should a player choose to terminate the game, action \(s\) yields the highest payoff. It is clear that all actions other than \(c\) and \(s\) can be safely ignored, so that \(G^{\prime }(h)\) becomes a stopping game.
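
The reduction described above can be recorded as a simple data transformation. The following sketch is ours; the dictionaries w and iota_on_spine are placeholders for the \(\tfrac{1}{2}\epsilon \)-SPE payoff vectors \(w^{m}(a)\) and for the players active along the spine, and are not computed from an actual game.

```python
# Sketch (illustrative) of the reduction of G(h) to a stopping game, recorded
# as a data transformation. The inputs are placeholders and not computed here.

def reduce_to_stopping_game(w, iota_on_spine):
    """For each period m on the spine, pick the terminating action that is
    best for the active player and record the resulting stopping payoffs.

    `w[m]` maps each action a != c to a payoff vector (indexed by player),
    and `iota_on_spine[m]` is the player active at the history c^m."""
    stopping_data = {}
    for m, payoffs_by_action in w.items():
        i = iota_on_spine[m]
        best = max(payoffs_by_action, key=lambda a: payoffs_by_action[a][i])
        stopping_data[m] = (i, payoffs_by_action[best])   # the action renamed "s"
    return stopping_data

# Toy input with two periods on the spine and players indexed 0 and 1.
w = {0: {"a": (3, 0), "b": (1, 5)}, 1: {"a": (2, 2), "b": (0, 4)}}
iota_on_spine = {0: 0, 1: 1}
print(reduce_to_stopping_game(w, iota_on_spine))
# -> {0: (0, (3, 0)), 1: (1, (0, 4))}
```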

By Lemma 4, the game \(G^{\prime }(h)\) has a \(\tfrac{1}{2}\epsilon \)-SPE. It is clear that the combination of this \(\tfrac{1}{2}\epsilon \)-SPE in \(G^{\prime }(h)\) with the chosen \(\tfrac{1}{2}\epsilon \)-SPE in the subgames \(G(c^{m},a)\), for \(m \ge k\) and \(a \in A \setminus \{c\}\), constitutes an \(\epsilon \)-SPE of \(G(h)\).

5 Proof of Theorem 2

In this section, we prove Theorem 2. The proof makes use of the following game in Flesch et al. (2014) that does not admit an \(\epsilon \)-SPE for small \(\epsilon >0\): There are two players, and the set of actions is \(A = \{1, 2\}\). Player 1 starts the game. The active player decides who the next active player is by choosing the corresponding action. The payoffs are \((-1,2)\) if player 2 is active only finitely many times, \((-2,1)\) if player 1 is active only finitely many times, and \((0,0)\) if both players are active infinitely many times. In this game, the payoff functions are Borel measurable, but discontinuous at every play. Flesch et al. (2014) prove that this game admits no \(\epsilon \)-SPE for \(\epsilon \in (0,0.1]\).

Proof of Theorem 2 Let \(A\) be any set of actions, and let \(D\) be a subset of \(P = A^{{\mathbb {N}}}\) that is not sigma-discrete. We construct a two-player game \(G\) with action set \(A\) and with payoff functions that are continuous outside of \(D\), such that the game admits no \(\epsilon \)-SPE for small \(\epsilon >0\).

Step 1: Construction of the game \(G\). Since \(D\) is not sigma-discrete, it contains a perfect subset \(P^{*}\) of the set of plays \(P\). We first construct a subset \(D^*\) of \(P^*\) on which we can define a game that is strategically equivalent to the game in Flesch et al. (2014).

Let \(S\) denote the set of finite sequences of elements of \(\{1,2\}\). Define a function \(\varphi : S \rightarrow H\) recursively on the length of the sequence \(s\). Let \(\varphi \)(ø) = ø. Suppose that \(\varphi (s)\) has been defined so that \(P(\varphi (s)) \cap P^{*} \ne \emptyset \). Since \(P^{*}\) is perfect, the set \(P(\varphi (s)) \cap P^{*}\) contains at least two distinct plays, say \(p\) and \(p^{\prime }\). Let \(h\) be the longest common prefix of \(p\) and \(p^{\prime }\), and let \(a\) and \(a^{\prime }\) be the unique actions such that \((h,a) \prec p\) and \((h, a^{\prime }) \prec p^{\prime }\). Define \(\varphi (s,1) = (h,a)\) and \(\varphi (s,2) = (h,a^{\prime })\).

Now define the function \(f : \{1,2\}^{{\mathbb {N}}} \rightarrow P^*\) by letting \(f(x)\) be the unique play that extends the history \(\varphi (x_{0},\ldots ,x_{m})\) for each \(m \in {\mathbb {N}}\). Let \(D^{*}\) be the image of \(f\). Thus, \(D^*\subseteq P^*\subseteq D\subseteq P\).
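
To illustrate the recursion, the following sketch (ours) instantiates \(\varphi \) for the concrete perfect set \(P^{*} = \{1,2\}^{{\mathbb {N}}}\) inside \(\{0,1,2\}^{{\mathbb {N}}}\) from Sect. 3; the helper pick_two_plays stands in for the choice of the two distinct plays \(p\) and \(p^{\prime }\), and the names are our own.

```python
# Sketch (illustrative) of the recursion defining phi, instantiated for the
# perfect set P* = {1,2}^N inside P = {0,1,2}^N. A play of this P* is
# represented as a pair (prefix, tail_action): the finite prefix followed by
# the constant tail. pick_two_plays stands in for the choice of two distinct
# plays of P* extending a given history.

def pick_two_plays(history):
    """Two distinct plays of {1,2}^N extending `history` (all-1 / all-2 tails)."""
    return (history, 1), (history, 2)

def coordinate(play, k):
    prefix, tail = play
    return prefix[k] if k < len(prefix) else tail

def split(play, other_play):
    """Longest common prefix h of two distinct plays, and the actions a, a'
    by which the two plays continue after h."""
    k = 0
    while coordinate(play, k) == coordinate(other_play, k):
        k += 1
    h = tuple(coordinate(play, j) for j in range(k))
    return h, coordinate(play, k), coordinate(other_play, k)

def phi(s):
    """phi(s) for a finite sequence s over {1,2}, following the recursion."""
    history = ()
    for step in s:
        p, p_prime = pick_two_plays(history)
        h, a, a_prime = split(p, p_prime)
        history = h + (a,) if step == 1 else h + (a_prime,)
    return history

print(phi((1, 2, 2)))   # -> (1, 2, 2): for this P*, phi is essentially the identity
```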

Let \(H^*\) denote the set of histories that are consistent with \(D^*\), i.e., the histories \(h\) for which there is a play \(p\in D^*\) such that \(h\prec p\). At every \(h\in H^*\), let \(A^*(h)\) denote the set of actions that keep play consistent with \(D^*\), i.e., \(A^*(h)=\{a\in A: (h,a)\in H^*\}\).

By construction, for every \(h\in H^*\), it holds that

  • \(A^*(h)\) is either a singleton or it contains exactly two actions,

  • there exists a history \(\overline{h}\in H^*\) such that \(\overline{h}\succeq h\) and \(A^*(\overline{h})\) contains exactly two actions.

Define the game \(G\) as follows: The set of players is \(\{1, 2\}\). The function \(\iota : H \rightarrow \{1,2\}\) is defined recursively as follows. Let \(\iota \)(ø) \(= 1\). Suppose that \(\iota (h)\) has been defined for \(h \in H\). If \(h \in H^{*}\) and \(A^{*}(h)\) is the singleton \(\{a\}\), let \(\iota (h,a) = \iota (h)\). In this case, we rename the action \(a\) into \(\iota (h)\). If \(h \in H^{*}\) and \(A^{*}(h)\) consists of two actions \(a\) and \(a^{\prime }\), take the unique \(s \in S\) such that \(\varphi (s,1) = (h,a)\) and \(\varphi (s,2) = (h,a^{\prime })\). Let \(\iota (h,a) = 1\) and \(\iota (h,a^{\prime }) = 2\). In this case, we rename \(a\) into \(1\) and \(a^{\prime }\) into \(2\). If \(h \in H \setminus H^{*}\), then \(\iota (h,a)\) can be arbitrary, say equal to \(1\), for each \(a \in A\).

The payoffs on \(D^{*}\) are defined exactly as in the game of Flesch et al. (2014). It remains to define the payoffs on the complement of \(D^{*}\). Take a play \(p \in P \setminus D^{*}\). Since \(\{1,2\}^{{\mathbb {N}}}\) is compact and \(f\) is continuous, the set \(D^{*}\) is closed, and hence \(p\) has a prefix that is not consistent with \(D^{*}\). Let \(h\) be the shortest prefix of \(p\) at which the active player, say player \(i\), chooses an action \(a\) such that \((h,a) \notin H^*\). In that case, regardless of future play, we define the payoff for player \(i\) to be \(-10\), and for the other player to be \(5\). Notice that the payoffs are continuous outside of \(D^*\), and hence outside of \(D\) as well.

Thus, the game \(G\) proceeds just as the game in Flesch et al. (2014), unless an active player chooses an action outside of \(A^{*}(h)\), in which case that player is punished and the other player is rewarded.

Step 2: Proving that \(G\) has no \(\epsilon \)-SPE for small \(\epsilon >0\). We let \(H_{i}^{*}\) be the set of histories in \(H^{*}\) where player \(i\) is active. Let \(S_1\) denote the set of plays with payoff \((-1,2)\); \(S_2\) the set of plays with payoff \((-2,1)\); \(Q_1\) the set of plays with payoff \((-10,5)\); \(Q_2\) the set of plays with payoff \((5,-10)\); and \(R\) the set of plays with payoff \((0,0)\).

Let \(\sigma =(\sigma _1,\sigma _2)\) be an \(\epsilon \)-SPE where \(\epsilon \in (0,\tfrac{1}{7})\). We first argue that

$$\begin{aligned} \mu _{\sigma ,h}(Q_{1}) \le \frac{\epsilon }{9},\quad \mu _{\sigma ,h}(Q_{2}) \le \frac{\epsilon }{9}\quad \text { for each }h \in H^{*}. \end{aligned}$$
(5.1)

To prove the first inequality, consider the following strategy \(\tau _{1}\) for player 1: Follow \(\sigma _{1}\), except that whenever the lottery \(\sigma _{1}(h)\) at a history \(h \in H_{1}^{*}\) selects an action outside of \(A^{*}(h)\), player 1 plays action \(1\) instead and keeps playing action \(1\) for the rest of the game. Recall that playing an action outside of \(A^{*}(h)\) gives player 1 a payoff of \(-10\), whereas playing action \(1\) forever yields at least \(-1\). Then, for each \(h \in H^{*}\)

$$\begin{aligned} \epsilon \ge u_{1}((\tau _{1},\sigma _{2}), h) - u_{1}((\sigma _{1},\sigma _{2}), h) \ge 9\cdot \mu _{\sigma ,h}(Q_{1}), \end{aligned}$$

from which the first inequality follows. The proof of the second inequality is similar.

The rest of the proof follows very closely the analysis in Flesch et al. (2014).

We argue that

$$\begin{aligned} \mu _{\sigma ,h}(R) = 0\quad \text { for each }h \in H^{*}. \end{aligned}$$
(5.2)

Otherwise, by Lévy’s zero-one law, there is a history \(h \in H^{*}\) where \(\mu _{\sigma ,h}(R) \ge \frac{19}{20}\) (the conditional probabilities of \(R\) converge to \(1\) along almost every play in \(R\), and the prefixes of such plays lie in \(H^{*}\) because \(R \subseteq D^{*}\)). This implies that \(u_{2}(\sigma ,h) \le 10\cdot \frac{1}{20} = \frac{1}{2} < 1 - \epsilon \), which is impossible since at \(h\) player 2 can guarantee a payoff of \(1\) by always playing action 2 whenever it is his turn.

At any history \(h \in H_{1}^{*}\), player 1 can guarantee a payoff of \(-1\) by always playing action \(1\), hence

$$\begin{aligned} u_{1}(\sigma ,h) \ge -1-\epsilon \quad \text { for each }h \in H_{1}^{*}. \end{aligned}$$
(5.3)

Now

$$\begin{aligned} u_1(\sigma ,h) = \;\mu _{\sigma ,h}(S_1)\cdot (-1)+\mu _{\sigma ,h}(S_2)\cdot (-2)+\mu _{\sigma ,h}(Q_1)\cdot (-10)+\mu _{\sigma ,h}(Q_2)\cdot 5. \end{aligned}$$
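
For completeness, we spell out the arithmetic behind the next step, writing \(\mu \) for \(\mu _{\sigma ,h}\). By (5.1), (5.2), and \(\mu (P(h)) = 1\), we have \(\mu (S_{2}) \ge 1 - \mu (S_{1}) - \tfrac{2\epsilon }{9}\), so that for each \(h \in H^{*}\)

$$\begin{aligned} u_{1}(\sigma ,h)&\le -\mu (S_{1}) - 2\mu (S_{2}) + 5\cdot \frac{\epsilon }{9} \\&\le -\mu (S_{1}) - 2\Bigl (1 - \mu (S_{1}) - \frac{2\epsilon }{9}\Bigr ) + \frac{5\epsilon }{9} = \mu (S_{1}) - 2 + \epsilon . \end{aligned}$$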

Combined with (5.3), this yields \(\mu _{\sigma ,h}(S_{1}) \ge 1 - 2\epsilon \) for every \(h \in H_{1}^{*}\). This last inequality together with (5.1) gives \(u_2(\sigma ,h) \ge 2-6\epsilon \) for each \(h \in H_{1}^{*}\). It is then easy to conclude that \(u_2(\sigma ,h) \ge 2-7\epsilon \) for each \(h \in H^{*}\). Since \(u_2(\sigma ,h) \le \mu _{\sigma ,h}(S_{2}) + 10(1 - \mu _{\sigma ,h}(S_{2}))\), we obtain

$$\begin{aligned} \mu _{\sigma ,h}(S_{2}) \le \frac{8 + 7\epsilon }{9} < 1\quad \text { for each }h \in H^{*}. \end{aligned}$$

Therefore, applying Lévy’s zero-one law yields \(\mu _{\sigma ,h}(S_{2}) = 0\) for each \(h \in H^{*}\). Then, for each \(h \in H^{*}\), we have \(u_{1}(\sigma ,h) \le -\mu _{\sigma ,h}(S_{1}) + 10\mu _{\sigma ,h}(Q_{2}) \le -1+4\epsilon < -\epsilon \). On the other hand, player 1 can guarantee a payoff of \(0\) at any history in \(H^{*}\) by playing action \(2\) whenever it is available. We arrive at a contradiction. \(\square \)

6 Concluding remarks

An interesting direction for future research is extending the results of this paper to the case where the set of players is infinite. Consider a game with infinitely many players.

First suppose that the payoff functions are all continuous. If the set of actions is finite, the existence of a pure SPE follows by the truncation approach of Fudenberg and Levine (1983). When the set of actions is infinite, an SPE need not exist, but one could follow the approach in Flesch and Predtetchinski (2015) to obtain existence of a pure \(\epsilon \)-SPE, for all \(\epsilon >0\).

Much more challenging is the case where the continuity of the payoff functions is only assumed outside of a sigma-discrete set. In this case, it remains an open question whether an \(\epsilon \)-SPE exists for all \(\epsilon >0\). We conjecture that existence can be established at least in the special case when, on every play, infinitely many players are active. The intuition is that we can require a player who appears for the first time to randomize and place probability at least \(\delta \) on at least two actions, where \(\delta \) is a small positive number. In this way, the set of discontinuities can be avoided almost surely. A similar idea appeared in Cingiz et al. (2015).