1 Introduction

Literature review. Evolutionary Game Theory made its debut in 1973 with John Maynard Smith’s and George R. Price’s work on the formalisation of animal contests, thus successfully using Classical Game Theory to create a new framework that could predict the evolutionary outcomes of the interaction between competing individuals. Ever since, it has been widely used to study myriad questions in various disciplines like Evolutionary Biology, Ecology, Physics, Sociology and Computer Science, including what the mechanisms underlying the emergence and stability of cooperation are (Nowak 2006) and how to mitigate climate change and its risks (Santos and Pacheco 2011).

Cooperation is the act of paying a cost in order to convey a benefit to somebody else. Although it initially seems at odds with the Darwinian theory of natural selection, cooperation has been, is, and will be a vital part of life, from cellular clusters to bees to humans (Sigmund 2010; Nowak and Highfield 2011). Several mechanisms for promoting the evolution of cooperation have been identified, including kin selection, direct reciprocity, indirect reciprocity, network reciprocity, group selection and different forms of incentives (Nowak 2006; Sigmund 2010; Perc et al. 2017; Rand and Nowak 2013; Van Lange et al. 2014). The current work focuses on institutional incentives (Sasaki et al. 2012; Sigmund et al. 2010; Wang et al. 2019; Duong and Han 2021; Cimpeanu et al. 2021; Sun et al. 2021; Van Lange et al. 2014; Gürerk et al. 2006), i.e., plans of action involving the use of reward (increasing the payoff of cooperators) and punishment (reducing the payoff of defectors) by an external decision-maker. In particular, we study how reward and punishment can be combined (i.e., hybrid or mixed incentives) in a cost-efficient way to maximise the level of cooperative behaviour in a well-mixed, finite population of self-regarding individuals.

Promoting and implementing cooperation via incentives is costly to the incentive-providing institution, such as the United Nations or the European Union (Ostrom 2005; Van Lange et al. 2014). Thus, it is crucial to understand how to minimise this cost while ensuring a desirable standard of cooperation. In this work, players interact via cooperation dilemmas, both the pairwise Donation Game (DG) and its multi-player version, the Public Goods Game (PGG) (Sigmund 2010; Nowak 2006). These games have been widely adopted to model social dilemma situations in which individual rationality leads to collective irrationality.

Several theoretical models have studied how to combine institutional reward and punishment for enhancing the emergence and stability of cooperation (Chen and Perc 2014; Góis et al. 2019; Berenji et al. 2014; Hilbe and Sigmund 2010; Sun et al. 2021; Gürerk et al. 2006). However, little attention has been given to the cost optimisation of providing incentives. Sasaki et al. (2015) looked at an incentive policy that switches from reward to punishment when the frequency of cooperators exceeds a certain threshold. This policy establishes cooperation at a lower cost and under a wider range of conditions than either reward or punishment alone, in both well-mixed and spatial populations. Others have applied the ‘carrot and stick’ idea to criminal recidivism and rehabilitation, as the justice system increasingly focuses on reintegrating wrongdoers into society after the penalty has been served. Berenji et al.’s work (Berenji et al. 2014) studied a game where players decide to permanently reform or to continue engaging in criminal activity, eventually reaching a state where they are considered incorrigible. Since resources may be limited, they fixed the combined rehabilitation and penalty costs per crime. The most successful strategy in reducing crime is to optimally allocate resources so that, after having served the penalty, criminals are reintroduced into society via impactful programs. Wang et al. (2019) explored the optimal incentive that not only minimises the total cost but also guarantees a sufficient level of cooperation in an infinite, well-mixed population via optimal control theory.

That work, however, does not take into account various stochastic effects of evolutionary dynamics such as mutation and non-deterministic behavioural update. In a deterministic system of cooperators and defectors, once the latter disappear, there is no further change in the system and hence no further interference is needed. When mutation is present, defectors can reappear and become numerous over time, requiring the external institution to spend more on providing further incentives. Moreover, the intensity of selection - how strongly an individual bases their decision to copy another individual’s strategy on their fitness difference - is overlooked. When selection is weak, providing incentives makes little difference in causing behavioural change, as no individual is motivated to copy another and all changes in strategy are due to noise. When selection is strong, incentives that ensure a minimum fitness advantage to cooperators guarantee a positive behavioural change, as the players are more likely to copy one another. Recently, Zhang et al. (2022), simulating the weak prisoner’s dilemma in finite populations, found that a combination of appropriate punishment and reward mechanisms can promote the prosperity of cooperation regardless of how large or small the temptation to defect is.

Whilst the above works were concerned with well-mixed populations, the following two studies deal with spatial ones. Szolnoki and Perc (2013) looked at whether there are evolutionary advantages in correlating positive and negative reciprocity, as opposed to adopting only reward or punishment. Their work supports empirical studies in which the data fail to support the central assumption of the strong reciprocity model, namely that negative and positive reciprocity are correlated. In a different work (Szolnoki and Perc 2017), the authors showed how second-order free-riding on antisocial punishment restores the effectiveness of prosocial punishment, providing an evolutionary escape from the adverse effects of antisocial punishment. Both works use Monte Carlo simulations.

Moreover, several works have provided insights into how best to promote the emergence of collective behaviours such as cooperation and fairness while also considering the institutional costs of providing incentives (Liu et al. 2018; Chen and Perc 2014; Duong and Han 2021; Cimpeanu et al. 2021, 2019; Han and Tran-Thanh 2018; Cimpeanu et al. 2023), see also (Wang et al. 2021) for a recent survey on these papers. Indeed, most relevant to our work are Duong and Han (2021) and Han and Tran-Thanh (2018), which derived analytical conditions for which a general incentive scheme can guarantee a given level of cooperation while at the same time minimising the total cost of investment. These results are highly sensitive to the intensity of selection. They also studied a class of incentive strategies that make an investment whenever the number of players with a desired behaviour reaches a certain threshold \(t\in \{1,\ldots ,N-1\}\) (N is the population size), showing that there is a wide range of values for the threshold that outperforms standard institutional incentive strategies - those which invest in all players, i.e. the threshold is \(t=N-1\) (Sasaki et al. 2015). These works however did not study the cost-efficiency of the mixed incentive scheme, which is the focus of the present work.

Overview of the contribution of this paper. As mentioned above, in this work we consider a well-mixed, finite population of self-regarding individuals where players interact via cooperation dilemmas (DG and PGG), and we rigorously study the problem of cost optimisation of hybrid institutional incentives (a combination of reward and punishment) for maximising the level of cooperative behaviour (or guaranteeing at least a certain level of cooperation). This problem is challenging due to the number of parameters involved, such as the number of individuals in the population, the strength of selection, the game-specific quantities, as well as the efficiency ratios of providing the corresponding incentive. In particular, the Markov chain modelling the evolutionary process is of order equal to the population size, which is large but finite. The calculation of the entries of the corresponding fundamental matrix is intricate, both analytically and computationally.

Our present work provides a rigorous and delicate analysis of this problem, combining techniques from different branches of Mathematics including Markov chain theory, polynomial theory, and numerical analysis. The main results of the paper can be summarised as follows (detailed and precise statements are provided in Sect. 2).

  1. (i)

    We show that a mixed incentive scheme can offer a more cost-efficient approach for providing incentives while ensuring the same level or standard of cooperation in the long-run.

  2. (ii)

    We obtain the asymptotic behaviour of the cost function in the limits of neutral drift, strong selection as well as infinite population sizes.

  3. (iii)

    We prove the existence of a phase transition for the cost function, obtaining the critical threshold of the strength of selection at which the monotonicity of the cost function changes and finding the optimal value of the individual incentive cost.

Furthermore, we provide numerical simulations to illustrate the analytical results.

1.1 Organisation of the paper

The rest of the paper is organised as follows. In Sect. 2 we present the model, methods, and the main results. Our main results include Proposition 1 on the efficiency of the combined reward and punishment incentives compared to implementing them separately, Theorem 1 on the asymptotic behaviour (neutral drift, strong selection, and infinite population limits) of the cost function, and Theorem 2 on the optimal incentive. In Sect. 3 we provide detailed computations and proofs of Theorem 2. Proof of Theorem 1 is given in Sect. 4. Summary and further discussions are provided in Sect. 5. Finally, Sect. 6 contains the proof of Proposition 1, detailed computations, and proofs for the technical results.

2 Model, methods, and main results

In this section, we present the model, methods, and main results of the paper. We first introduce the class of games, namely cooperation dilemmas, that we are interested in throughout this paper.

2.1 Cooperation dilemmas

We consider a well-mixed, finite population of N self-regarding individuals (players) who engage with one another via one of the following one-shot (i.e. non-repeated) cooperation dilemmas: the Donation Game (DG) or its multi-player version, the Public Goods Game (PGG). In terms of strategies, each player can choose to either cooperate (C) or defect (D).

Let \(\Pi _C(j)\) be the average payoff of a C player (cooperator) and \(\Pi _D(j)\) that of a D player (defector), in a population with j C players and \((N-j)\) D players. As can be seen below, the difference in payoffs \(\delta = \Pi _C(j) - \Pi _D(j)\) in both games does not depend on j. For the two cooperation dilemmas considered in this paper, namely the Donation Game and the Public Goods Game, it is always the case that \(\delta < 0\). This does not cover some weak social dilemmas such as the Snowdrift Game, where \(\delta >0\) for some j, or games such as the general prisoners’ dilemma and the collective risk game (Sun et al. 2021), where \(\delta \) depends on j. We will investigate these games in future research (see Sect. 5 for further discussion).

2.1.1 Donation game (DG)

The Donation Game is a form of Prisoners’ Dilemma in which cooperation corresponds to offering the other player a benefit B at a personal cost c, with \(B > c\). Defection means offering nothing. The payoff matrix of the DG (for the row player) is given as follows

$$\begin{aligned} \bordermatrix {~&C&D\\C&B-c&-c \\D&B&0 \\}. \end{aligned}$$

Denoting \(\pi _{X,Y}\) the payoff of a strategist X when playing with a strategist Y from the payoff matrix above, we obtain

$$\begin{aligned} \begin{aligned} \Pi _C(j)&=\frac{(j-1)\pi _{C,C} + (N-j)\pi _{C,D}}{N-1} = \frac{(j-1) (B-c) + (N-j) (-c)}{N-1},\\ \Pi _D(j)&=\frac{j\pi _{D,C} + (N-j-1)\pi _{D,D}}{N-1} =\frac{j B}{N-1}. \end{aligned} \end{aligned}$$

Thus,

$$\begin{aligned} \delta = \Pi _C(j) - \Pi _D(j) = -\Big (c + \frac{B}{N-1}\Big ). \end{aligned}$$
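As a quick numerical sanity check (not part of the paper's analysis), the payoff expressions above can be evaluated with exact rational arithmetic; the values of N, B and c below are illustrative only.

```python
from fractions import Fraction

def dg_payoffs(j, N, B, c):
    """Average DG payoffs Pi_C(j), Pi_D(j) in a population with j cooperators."""
    pi_C = Fraction((j - 1) * (B - c) + (N - j) * (-c), N - 1)
    pi_D = Fraction(j * B, N - 1)
    return pi_C, pi_D

# The payoff difference is independent of j and equals -(c + B/(N-1))
N, B, c = 10, 2, 1
delta = -Fraction(c * (N - 1) + B, N - 1)
for j in range(1, N):
    pi_C, pi_D = dg_payoffs(j, N, B, c)
    assert pi_C - pi_D == delta
```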

2.1.2 Public goods game (PGG)

In a Public Goods Game, players interact in a group of size n, where they decide either to cooperate, contributing an amount \(c > 0\) to a common pool, or to defect, contributing nothing to the pool. The total contribution in a group is multiplied by a factor r, where \(1< r < n\) (for the PGG to be a social dilemma), and then shared equally among all members of the group, regardless of their strategy. Intuitively, a player who contributes nothing therefore receives a higher payoff after redistribution.

The average payoffs, \(\Pi _C(j)\) and \(\Pi _D(j)\), are calculated based on the assumption that the groups engaging in a public goods game are given by multivariate hypergeometric sampling. Thereby, for transitions between two pure states, this reduces to sampling, without replacement, from a hypergeometric distribution. More precisely, we obtain (Hauert et al. 2007)

$$\begin{aligned} \begin{aligned} \Pi _C(j)&= \sum ^{n-1}_{i=0}\frac{\genfrac(){0.0pt}0{j-1}{i}\genfrac(){0.0pt}0{N-j}{n-1-i}}{ \genfrac(){0.0pt}0{N-1}{n-1}} \ \left( \frac{(i+1)rc}{n} - c\right) \\ {}&=\frac{rc}{n}\left( 1 + (j-1)\frac{n-1}{N-1}\right) - c,\\ \Pi _D(j)&=\sum ^{n-1}_{i=0}\frac{\genfrac(){0.0pt}0{j}{i}\genfrac(){0.0pt}0{N-1-j}{n-1-i}}{ \genfrac(){0.0pt}0{N-1}{n-1}} \ \frac{irc}{n} =\frac{rc(n-1)}{n(N-1)}j. \end{aligned} \end{aligned}$$

Thus,

$$\begin{aligned} \delta = \Pi _C(j) - \Pi _D(j) = -c \left( 1 - \frac{r(N-n)}{n(N-1)} \right) . \end{aligned}$$
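The hypergeometric averages above can be checked directly against the closed-form expressions. A minimal Python sketch (the parameter values are illustrative, not taken from the paper):

```python
from math import comb

def pgg_payoffs(j, N, n, r, c):
    """Average PGG payoffs via hypergeometric sampling of groups of size n."""
    denom = comb(N - 1, n - 1)
    # a cooperator meets i other cooperators among its n-1 co-players
    pi_C = sum(comb(j - 1, i) * comb(N - j, n - 1 - i) / denom
               * ((i + 1) * r * c / n - c) for i in range(n))
    # a defector meets i cooperators among its n-1 co-players
    pi_D = sum(comb(j, i) * comb(N - 1 - j, n - 1 - i) / denom
               * (i * r * c / n) for i in range(n))
    return pi_C, pi_D

N, n, r, c = 20, 5, 3.0, 1.0
for j in range(1, N):
    pi_C, pi_D = pgg_payoffs(j, N, n, r, c)
    assert abs(pi_C - (r * c / n * (1 + (j - 1) * (n - 1) / (N - 1)) - c)) < 1e-12
    assert abs(pi_D - r * c * (n - 1) / (n * (N - 1)) * j) < 1e-12
```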

2.2 Cost of institutional reward and punishment

To reward a cooperator (respectively, punish a defector), the institution has to pay an amount \(\theta /a\) (resp., \(\theta /b\)) so that the cooperator’s (defector’s) payoff increases (decreases) by \(\theta \), where \(a, b > 0\) are constants representing the efficiency ratios of providing the corresponding incentive.

In an institutional enforcement setting, we assume that the institution has full information about the population composition or statistics at the time of decision-making. That is, given the well-mixed population setting, we assume that the number j of cooperators in the population is known. Thus, if both reward and punishment are feasible options (i.e., mixed incentives), the institution can minimise its cost by choosing the minimum of \(\frac{j}{a}\) and \(\frac{N-j}{b}\). Thus, the key question here is: what is the optimal value of the individual incentive cost \(\theta \) that ensures a sufficient desired level of cooperation in the population (in the long-run) while minimising the total cost spent by the institution?

Note that, as discussed above, this mixed incentive, also known as the ‘carrot and stick’ approach, has been shown efficient for promoting cooperation in both pairwise and multi-player interactions (Sasaki et al. 2015; Hilbe and Sigmund 2010; Sun et al. 2021; Góis et al. 2019; Gürerk et al. 2006). However, these works have not studied cost optimisation and have not shown whether this approach is actually more cost-efficient and by how much.

2.2.1 Deriving the expected cost of providing institutional incentives

In this model, we adopt the finite population dynamics with the Fermi strategy update rule (Traulsen and Nowak 2006), stating that a player X with fitness \(f_X\) adopts the strategy of another player Y with fitness \(f_Y\) with a probability given by \(P_{X,Y}=\left( 1 + e^{-\beta (f_Y-f_X)}\right) ^{-1}\), where \(\beta \) represents the intensity of selection. We compute the expected number of times the population contains j C players, \(1 \le j \le N-1\). For that, we consider an absorbing Markov chain of \((N+1)\) states, \(\{S_0,..., S_N\}\), where \(S_j\) represents a population with j C players. \(S_0\) and \(S_N\) are absorbing states. Let \(U = \{u_{ij}\}_{i,j = 1}^{N-1}\) denote the transition matrix between the \(N-1\) transient states, \(\{S_1,..., S_{N-1}\}\). The transition probabilities can be defined as follows, for \(1\le i \le N-1\):

$$\begin{aligned} u_{i,i\pm k}&= 0 \qquad \text { for all } k \ge 2, \nonumber \\ u_{i,i\pm 1}&= \frac{N-i}{N} \frac{i}{N} \left( 1 + e^{\mp \beta [\Pi _C(i) - \Pi _D(i)+\theta ]}\right) ^{-1},\nonumber \\ u_{i,i}&= 1 - u_{i,i+1} -u_{i,i-1}. \end{aligned}$$
(1)

The entries \(n_{ik}\) of the so-called fundamental matrix \({\mathcal {N}}=(n_{ik})_{i,k=1}^{N-1}= (I-U)^{-1}\) of the absorbing Markov chain give the expected number of times the population is in the state \(S_k\) if it is started in the transient state \(S_i\) (Kemeny 1976). As a mutant can randomly occur either at \(S_0\) or \(S_N\), the expected number of visits at state \(S_i\) is thus \(\frac{1}{2} (n_{1i} + n_{N-1,i})\). The total cost per generation, when the population is in state \(S_j\), is

$$\begin{aligned} \theta _j = \min \Big (\frac{j}{a}, \frac{N-j}{b}\Big ) \times \theta . \end{aligned}$$

Hence, the expected total cost of interference for mixed reward and punishment is

$$\begin{aligned} E_{mix}(\theta ) = \frac{\theta }{2} \sum _{j=1}^{N-1}(n_{1j} + n_{N-1,j}) \min \Big (\frac{j}{a}, \frac{N-j}{b}\Big ). \end{aligned}$$
(2)
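Formulas (1)-(2) can be implemented directly; a sketch follows (parameter values are illustrative), exploiting that \(\Pi _C(i)-\Pi _D(i)=\delta \) for both games. As a consistency check, it recovers the neutral drift limit \(\theta N^2 H_{N,a,b}\) stated later in Theorem 1, with \(H_{N,a,b}\) defined in (8).

```python
import numpy as np

def expected_cost(theta, beta, delta, N, a, b):
    """E_mix(theta) of Eq. (2), via the fundamental matrix of the chain (1)."""
    U = np.zeros((N - 1, N - 1))
    for row, i in enumerate(range(1, N)):
        drift = i * (N - i) / N**2
        up = drift / (1 + np.exp(-beta * (delta + theta)))    # u_{i,i+1}
        down = drift / (1 + np.exp(beta * (delta + theta)))   # u_{i,i-1}
        if row + 1 <= N - 2:
            U[row, row + 1] = up
        if row - 1 >= 0:
            U[row, row - 1] = down
        U[row, row] = 1 - up - down                           # u_{i,i}
    F = np.linalg.inv(np.eye(N - 1) - U)    # fundamental matrix (I - U)^{-1}
    return theta / 2 * sum((F[0, j - 1] + F[N - 2, j - 1])
                           * min(j / a, (N - j) / b) for j in range(1, N))

# neutral drift (beta = 0): the cost equals theta * N^2 * H_{N,a,b}, Eq. (8)
N, a, b, theta = 8, 1.0, 1.5, 2.0
H_N = sum(min(j / a, (N - j) / b) / (j * (N - j)) for j in range(1, N))
assert abs(expected_cost(theta, 0.0, -1.2, N, a, b) - theta * N**2 * H_N) < 1e-8
```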

As a comparison, we recall the cost for reward and punishment incentives, \(E_r\) and \(E_p\), respectively, when being used separately (Duong and Han 2021)

$$\begin{aligned} E_{r}(\theta ) = \frac{\theta }{2}\sum _{j=1}^{N-1}(n_{1j}+n_{N-1,j})\frac{j }{a},\quad E_{p}(\theta )=\frac{\theta }{2}\sum _{j=1}^{N-1}(n_{1j}+n_{N-1,j})\frac{N-j}{b}. \end{aligned}$$
(3)

By comparing (2) and (3) one clearly expects that the efficiency ratios a and b strongly affect the incentive cost. In the cost functions \(E_r\) and \(E_p\), they are just scaling factors and do not affect the analysis of these functions. This is not the case in the combined incentive. One of the main objectives of this paper is to reveal explicitly the influence of a and b on the cost function. From a mathematical point of view, the presence and interplay of a and b make the analysis of the combined incentive much harder than that of the separate ones.

Remark

(On the interference scheme) In the mixed incentive setting considered in this paper, the institution either rewards every cooperator or punishes every defector, depending on which one is less costly. Although rather unsophisticated, this incentive strategy is typically considered in the literature on modelling institutional incentives. However, other interference schemes are also investigated in many works; for instance, the institution only rewards C players whenever their frequency or number in the population does not exceed a given threshold t, where \(1\le t\le N-1\). The scheme studied in this paper corresponds to the case where \(t=N-1\). We refer the reader to Han and Tran-Thanh (2018) and references therein for more information about different interference schemes. We plan to generalise the results of this paper to more complicated incentive strategies in future work, see Sect. 5 for further discussion.

2.2.2 Cooperation frequency

Since the population consists of only two strategies, the fixation probabilities of a C (D) player in a homogeneous population of D (C) players when the interference scheme is carried out are, respectively (Nowak 2006),

$$\begin{aligned} \begin{aligned} \rho _{D,C}&= \left( 1+\sum _{i = 1}^{N-1} \prod _{k = 1}^i \frac{1+e^{-\beta (\Pi _C(k)-\Pi _D(k) + \theta )}}{1+e^{\beta (\Pi _C(k)-\Pi _D(k)+\theta )}} \right) ^{-1}, \\ \rho _{C,D}&= \left( 1+\sum _{i = 1}^{N-1} \prod _{k = 1}^i \frac{1+e^{-\beta (\Pi _D(k)-\Pi _C(k) - \theta )}}{1+e^{\beta (\Pi _D(k)-\Pi _C(k)-\theta )}} \right) ^{-1}. \end{aligned} \end{aligned}$$

Computing the stationary distribution using these fixation probabilities, we obtain the frequency of cooperation

$$\begin{aligned} \frac{\rho _{D,C}}{\rho _{D,C}+\rho _{C,D}}. \end{aligned}$$

Hence, this frequency of cooperation can be maximised by maximising

$$\begin{aligned} \max _{\theta } \left( \rho _{D,C}/\rho _{C,D}\right) . \end{aligned}$$
(4)

The fraction in Equation (4) can be simplified as follows (Nowak 2006)

$$\begin{aligned} \frac{\rho _{D,C}}{\rho _{C,D}}= & {} \prod _{k = 1}^{N-1} \frac{u_{k,k+1}}{u_{k,k-1}} =\prod _{k = 1}^{N-1} \frac{1 + e^{\beta [\Pi _C(k)-\Pi _D(k) + \theta ]}}{1 + e^{-\beta [\Pi _C(k)-\Pi _D(k) + \theta ]}} \nonumber \\= & {} e^{\beta \sum _{k = 1}^{N-1} \left( \Pi _C(k)-\Pi _D(k) + \theta \right) } \nonumber \\= & {} e^{\beta (N-1)(\delta + \theta )}. \end{aligned}$$
(5)

In the above transformation, \(u_{k,k+1}\) and \(u_{k,k-1}\) are the probabilities to increase or decrease the number of C players (i.e. k) by one in each time step, respectively.
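Identity (5) can be verified numerically from the fixation probabilities. Since \(\Pi _C(k)-\Pi _D(k)=\delta \) for the games considered here, the products telescope into geometric sums; the parameter values below are illustrative.

```python
import math

def fixation_ratio(beta, delta, theta, N):
    """rho_{D,C} / rho_{C,D} for a constant payoff difference delta."""
    f = delta + theta                       # Pi_C(k) - Pi_D(k) + theta
    rho_DC = 1 / sum(math.exp(-beta * f * i) for i in range(N))
    rho_CD = 1 / sum(math.exp(beta * f * i) for i in range(N))
    return rho_DC / rho_CD

# the ratio collapses to exp(beta * (N-1) * (delta + theta)), Eq. (5)
beta, delta, theta, N = 0.7, -1.5, 2.0, 12
assert abs(fixation_ratio(beta, delta, theta, N)
           / math.exp(beta * (N - 1) * (delta + theta)) - 1) < 1e-10
```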

We consider non-neutral selection, i.e. \(\beta > 0\) (under neutral selection, there is no need to use incentives as no player is likely to copy another player and any changes in strategy that happen are due to noise as opposed to payoffs). Assuming that we desire to obtain at least an \(\omega \in [0,1]\) fraction of cooperation, i.e. \(\frac{\rho _{D,C}}{\rho _{D,C}+\rho _{C,D}} \ge \omega \), it follows from equation (5) that

$$\begin{aligned} \theta \ge \theta _0(\omega ) = \frac{1}{(N-1)\beta } \log \left( \frac{\omega }{1-\omega }\right) - \delta . \end{aligned}$$
(6)

Therefore it is guaranteed that if \(\theta \ge \theta _0(\omega )\), at least an \(\omega \) fraction of cooperation can be expected. This condition implies that the lower bound of \(\theta \) monotonically depends on \(\beta \). Namely, when \(\omega \ge 0.5\), it increases with \(\beta \) and when \(\omega < 0.5\), it decreases with \(\beta \).
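Condition (6) can be checked directly: at \(\theta = \theta _0(\omega )\) the long-run cooperation frequency equals \(\omega \) exactly. A small sketch (the parameter values match the DG example with \(B = 2, c = 1, N = 3\), for which \(\delta = -2\)):

```python
import math

def theta_0(omega, beta, delta, N):
    """Threshold incentive of Eq. (6)."""
    return math.log(omega / (1 - omega)) / ((N - 1) * beta) - delta

omega, beta, delta, N = 0.7, 1.0, -2.0, 3
t0 = theta_0(omega, beta, delta, N)
ratio = math.exp(beta * (N - 1) * (delta + t0))   # Eq. (5)
coop_freq = ratio / (1 + ratio)                   # stationary C frequency
assert abs(coop_freq - omega) < 1e-12
```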

To summarise, we obtain the following constrained minimisation problem

$$\begin{aligned} \min _{\theta \ge \theta _0} E_{mix}(\theta ). \end{aligned}$$
(7)

Remark

(On the formula of the cost of the mixed incentive) In the derivation of the cost function (2), we assumed that the population is equally likely to start in the homogeneous state \(S_0\) as well as in the homogeneous state \(S_N\). However, in general, this might not be correct. For example, if cooperators are very likely to fixate in a population of defectors, but defectors are unlikely to fixate in a population of cooperators, mutants are on average more likely to appear in the homogeneous cooperative population (that is in \(S_N\)). Similarly, the population might also be likely to appear in \(S_0\) rather than \(S_N\). In general, in the long-run, the population will start at \(i = 0\) (\(i = N\), respectively) with probability equal to the frequency of D (C) computed at the equilibrium, \(f_D = 1/(r+1)\) (\(f_C = r/(r+1)\), respectively), where \(r = e^{\beta (N-1)(\delta + \theta )}\). Thus generally, the expected number of visits at state \(S_i\) will be \( f_D n_{1i} + f_C n_{N-1,i}\). Therefore, instead of (2), in the general setting the formula for the cost function should be

$$\begin{aligned} E_{mix}= \sum _{j=1}^{N-1}(f_D n_{1j} + f_C n_{N-1,j}) \min \Big (\frac{j}{a}, \frac{N-j}{b}\Big ). \end{aligned}$$

In practice, in many works based on agent-based simulations (Chen and Perc 2014; Cimpeanu et al. 2023; Szolnoki and Perc 2017; Han et al. 2018; Sasaki et al. 2015), it is often assumed that mutation is negligible and simulations end whenever the population fixates in a homogeneous state. Moreover, these simulations usually assume that the initial population starts at a homogeneous state or has a uniform distribution of different types. In this work, we thus assume an equal likelihood that the population starts at one of the homogeneous states and our formula (2) captures such scenarios. This assumption enables us to analytically study the cost function and its behaviour. As will be clear in the subsequent sections, the analysis is already very complicated in this simplified setting. Our results encapsulate the intermediate-run dynamics, an approximation that is valid if the time-scale is long enough for one type to reach fixation, but too short for the next mutant to appear. Our findings might thus be more practically useful for the optimisation of the institutional budget for providing incentives on an intermediate timescale.

We will study this problem in the most general case, where the initial population can start at an arbitrary state, in future work. The cost function can be obtained from extensive agent-based simulations of the evolutionary process. However, this approach is very computationally expensive especially when one wants to analytically study the cost function as a function of the individual incentive cost for large population sizes, which is the focus of this paper. Previous works have already shown that outcomes from our adopted evolutionary processes (small-mutation limit) are in line with extensive agent-based simulations, e.g. in Han (2013); Van Segbroeck et al. (2012); Hauert et al. (2007); Sigmund et al. (2010).

2.3 Main results of the present paper

Proposition 1

(Combined incentives vs separate ones) It is always more cost-efficient to use the mixed incentive approach than a separate incentive, reward or punishment,

$$\begin{aligned} E_{mix}\le \min \{E_r, E_p\}. \end{aligned}$$

If \(\frac{b}{a}\le \frac{1}{N-1}\), then \(E_{mix}(\theta )=E_r(\theta )\). If \(\frac{b}{a}\ge N-1\), then \(E_{mix}(\theta )=E_p(\theta )\). That is, if providing reward to a cooperator is much more cost-efficient for the institution than punishing a defector, i.e., when \(a/b \ge N-1\), then it is optimal to use reward entirely. Symmetrically, if punishment is much more efficient, i.e. \(b/a \ge N-1\), then it is optimal to use punishment entirely. Otherwise, a mixed approach is more cost-efficient. Note however that the mixed approach requires the institution to be able to observe the population composition (i.e. the number of cooperators in the population, j).
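The inequality can be seen pointwise: for every state j, \(\min (j/a,(N-j)/b)\) is bounded by both \(j/a\) and \((N-j)/b\), and all three cost functions weight these terms by the same nonnegative expected visit counts. A small sketch illustrating this with arbitrary nonnegative stand-in weights (random placeholders, not the actual visit counts):

```python
import random

random.seed(0)
N, a, b = 25, 1.3, 0.8
states = range(1, N)
w = [random.random() for _ in states]   # stand-in nonnegative visit weights
E_r = sum(wj * j / a for wj, j in zip(w, states))
E_p = sum(wj * (N - j) / b for wj, j in zip(w, states))
E_mix = sum(wj * min(j / a, (N - j) / b) for wj, j in zip(w, states))
assert E_mix <= min(E_r, E_p)

# boundary case b/a = 1/(N-1): the minimum selects the reward term for every
# interior state (with a tie at j = N-1), so E_mix collapses to E_r
b2 = a / (N - 1)
assert all(min(j / a, (N - j) / b2) == j / a for j in range(1, N - 1))
```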

The proof of this Proposition will be given in Sect. 6.2; see Fig. 2 for an illustration.

The following number is central to the analysis of this paper

$$\begin{aligned} H_{N,a,b}=\sum \limits _{j=1}^{N-1}\frac{1}{j(N-j)}\min \Big (\frac{j}{a},\frac{N-j}{b}\Big ). \end{aligned}$$
(8)

It plays a role similar to that of the harmonic number \(H_N\) in Duong and Han (2021), where a similar cost optimisation problem, but for a separate reward or punishment incentive, is studied. However, unlike the harmonic number, which grows like \(\ln N+\gamma \) (where \(\gamma \) is the Euler-Mascheroni constant) as \(N\rightarrow +\infty \), we will show that \(H_{N,a,b}\) is always bounded and its asymptotic behaviour is given by (see more details in Proposition 4)

$$\begin{aligned} H_{a,b}:=\lim \limits _{N\rightarrow +\infty } H_{N,a,b}=\frac{1}{a}\ln \Big (\frac{a+b}{b}\Big )+\frac{1}{b}\ln \Big (\frac{a+b}{a}\Big ). \end{aligned}$$
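The boundedness of \(H_{N,a,b}\) and the limit \(H_{a,b}\) are easy to observe numerically; a sketch (the values of a, b and the tolerances are illustrative):

```python
import math

def H_N(N, a, b):
    """The quantity H_{N,a,b} of Eq. (8)."""
    return sum(min(j / a, (N - j) / b) / (j * (N - j)) for j in range(1, N))

a, b = 2.0, 3.0
H_limit = math.log((a + b) / b) / a + math.log((a + b) / a) / b
# H_{N,a,b} stays bounded in N and approaches H_{a,b} as N grows
assert H_N(100, a, b) < H_limit + 0.1
assert abs(H_N(10_000, a, b) - H_limit) < 1e-2
```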

Now, our second main result below studies the asymptotic behaviour (neutral drift limit, strong selection limit, and infinite population limit) of the cost function \(E_{mix}(\theta )\).

Theorem 1

(Asymptotic behaviour of the cost function)

  1. (i)

    (Growth of the cost function) The cost function satisfies the following lower and upper bound estimates

    $$\begin{aligned}{} & {} \frac{N^2\theta }{2}\Big (H_{N,a,b}+\frac{1}{\max (a,b)(N-1)}\Big )\le E_{mix}(\theta )\\{} & {} \quad \le N(N-1)\theta \Big (H_{N,a,b}+\frac{1}{\min (a,b)\lfloor \frac{(N-1)}{2} \rfloor }\Big ). \end{aligned}$$

    In particular, since \(H_{N,a,b}\) is uniformly bounded (with respect to N), it follows that the cost function grows quadratically with respect to N.

  2. (ii)

    Neutral drift limit:

    $$\begin{aligned} \lim \limits _{\beta \rightarrow 0}E_{mix}(\theta )=\theta N^2 H_{N,a,b}. \end{aligned}$$
  3. (iii)

    Strong selection limit:

    $$\begin{aligned} \lim \limits _{\beta \rightarrow +\infty }E_{mix}(\theta )={\left\{ \begin{array}{ll} \frac{N^2\theta }{2}\Big (H_{N,a,b} + \frac{1}{a(N-1)}\Big ), \quad \text {for}\quad \theta <-\delta ,\\ \frac{N A}{2}\Big [2N H_{N,a,b}+\frac{1}{a(N-1)}+\frac{1}{b(N-1)}\\ \quad -\frac{\min (2/a, (N-2)/b)}{2(N-2)}-\frac{\min ((N-1)/a,1/b)}{N-1}\Big ], \quad \text {for}\quad \theta =-\delta ,\\ \frac{N^2\theta }{2}\bigg [H_{N,a,b}+\frac{1}{b(N-1)}\bigg ] \quad \text {for}\quad \theta >-\delta . \end{array}\right. } \end{aligned}$$
  4. (iv)

    Infinite population limit:

    $$\begin{aligned} \lim _{N\rightarrow +\infty }\frac{E_{mix}(\theta )}{\frac{N^2\theta }{2}H_{a,b}}={\left\{ \begin{array}{ll} 1+e^{-\beta |\theta -c|} \quad \text {for DG},\\ 1+ e^{-\beta |\theta -c(1-\frac{r}{n})|}\quad \text {for PGG}. \end{array}\right. } \end{aligned}$$
    (9)

It is worth noting that the neutral drift and strong selection limits of the cost function do not depend on the underlying games, but the infinite population limit does.

The proof of this Theorem will be given in Sect. 4. Figures 4 and 5 provide numerical simulations of the neutral drift and strong selection limits and the infinite population one.

The following result provides a detailed analysis of the minimisation problem (7) for \(a=b\). Note that since \(N\ge 2\), this case belongs to the interesting regime where \(\frac{1}{N-1}\le \frac{b}{a}\le N-1\), see Proposition 1 above. Mathematically, this case is distinguished since it gives rise to many useful and beautiful symmetry properties and cancellations, see Sect. 3. We also numerically investigate the case \(a\ne b\) and conjecture that the result still holds true.

Theorem 2

(Optimisation problems and phase transition phenomenon)

  1. 1.

    (Phase transition phenomena and behaviour under the threshold) For \(a=b\), there exists a threshold value \(\beta ^*\) given by

    $$\begin{aligned} \beta ^*=-\frac{F^*}{\delta }>0, \end{aligned}$$

    with

    $$\begin{aligned} F^*=\min \{F(u):\quad u>1\}, \end{aligned}$$

    where \(F(u):=\frac{Q(u)}{uP(u)}-\log (u)\) for

    $$\begin{aligned} P(u)&:=(1+u)\Bigg [\Big (\sum _{j=0}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) u^j\Big )\Big (\sum _{j=1}^{N-1} j u^{j-1}\Big )\nonumber \nonumber \\ {}&\qquad \qquad -\Big (\sum _{j=1}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) j u^{j-1}\Big )\Big (\sum _{j=0}^{N-1}u^j\Big )\Bigg ]\nonumber \nonumber \\&\qquad \qquad -\Big (\sum _{j=0}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) u^j\Big )\Big (\sum _{j=0}^{N-1}u^j\Big ) \end{aligned}$$
    (10)

    and

    $$\begin{aligned} Q(u)&:=(1+u)\Big (\sum _{j=0}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) u^j\Big )\Big (\sum _{j=0}^{N-1} u^{j}\Big ), \end{aligned}$$
    (11)

    with \(u:=e^x\), such that \(\theta \mapsto E_{mix}(\theta )\) is non-decreasing for all \(\beta \le \beta ^*\) and it is non-monotonic when \(\beta >\beta ^*\). As a consequence, for \(\beta \le \beta ^*\)

    $$\begin{aligned} \min \limits _{\theta \ge \theta _0}E_{mix}(\theta )=E_{mix}(\theta _0). \end{aligned}$$
    (12)
  2. 2.

    (Behaviour above the threshold value) For \(\beta >\beta ^*\), the number of changes of sign of \(E_{mix}'(\theta )\) is at least two for all N and there exists an \(N_0\) such that the number of changes is exactly 2 for \(N\le N_0\). As a consequence, for \(N\le N_0\), there exist \(\theta _1<\theta _2\) such that for \(\beta >\beta ^*\), \(E_{mix}(\theta )\) is increasing when \(\theta < \theta _1\), decreasing when \(\theta _1<\theta <\theta _2\), and increasing when \(\theta >\theta _2\). Thus, for \(N\le N_0\),

    $$\begin{aligned} \min \limits _{\theta \ge \theta _0}E_{mix}(\theta )=\min \{E_{mix}(\theta _0),E_{mix}(\theta _2)\}. \end{aligned}$$

The proof of this Theorem is detailed in Sect. 3.

Fig. 1

We use Algorithm 1 to find the optimal cost per capita \(\theta \), denoted by \(\theta ^*\) (represented by the red line in the figures), that minimises \(E_{mix}(\theta )\) while ensuring a minimum level of cooperation \(\omega \), for DG with \(N=3\), \(B = 2\), \(c = 1\). The left panel illustrates the behaviour of the cost function for \(\beta = 1\), the middle one for \(\beta =5\), and the right one for \(\beta = 10\). The critical threshold value for the strength of selection is \(\beta ^* = 3.67\) and the desired level of cooperation is \(\omega =0.7\). The numerical results are in accordance with Theorem 2: for \(\beta \le \beta ^*\), the cost function is increasing, while for \(\beta >\beta ^*\) it is not monotonic (colour figure online)

Fig. 2

Comparison of the total costs for mixed incentives (blue), only reward (orange), and only punishment (green) as a function of the cost per capita \(\theta \), for different values of N and \(\beta \) for DG with \(B = 2, c = 1\). The first row illustrates the behaviour of the different cost functions when \(N=3\) with \(\beta = 0.01, 0.5, 1\) respectively, while the second row presents the comparison between the cost functions for \(N=10\) with \(\beta = 0.01, 0.5, 1\), respectively. As proven in Proposition 1, mixed incentives are less costly than either reward or punishment (colour figure online)

2.4 Algorithms

Based on the results above, we describe below an algorithm for computing the critical threshold \(\beta ^*\) of the strength of selection and the optimal incentive cost \(\theta ^*\).

Algorithm 1

(Finding the optimal cost of incentive \(\theta ^*\))

Inputs: i) \(N\le N_0\): population size, ii) \(\beta \): intensity of selection, iii) game and parameters: DG (c and B) or PGG (c, r and n), iv) \(\omega \): minimum desired cooperation level, v) a and b: reward and punishment efficiency ratios.

  (1)

    Compute \(\delta \) (in DG: \(\delta = - \big (c + \frac{B}{N-1}\big )\); in PGG: \(\delta = -c \left( 1 - \frac{r(N-n)}{n(N-1)} \right) \)).

  (2)

    Compute \(\theta _0 = \frac{1}{(N-1)\beta } \log \left( \frac{\omega }{1-\omega }\right) - \delta \);

  (3)

    Compute

    $$\begin{aligned} F^*=\min \{F(u): u>1\}, \end{aligned}$$

    where F(u) is defined in (31).

  (4)

    Compute \(\beta ^*=-\frac{F^*}{\delta }\).

  (5)

    If \(\beta \le \beta ^*\):

    $$\begin{aligned} \theta ^*=\theta _0,\quad \min E_{mix}(\theta )=E_{mix}(\theta _0). \end{aligned}$$
  (6)

    Otherwise (i.e. if \(\beta >\beta ^*\))

    (a)

      Compute \(u_2\), the largest root of the equation \(F(u)+\beta \delta =0\).

    (b)

      Compute \(\theta _2=\frac{\log u_2}{\beta }-\delta \).

      • If \(\theta _2 \le \theta _0\): \(\theta ^*=\theta _0,\quad \min E_{mix}(\theta )=E_{mix}(\theta _0)\);

      • Otherwise (if \(\theta _2 > \theta _0\)):

        • If \(E_{mix}(\theta _0)\le E_{mix}(\theta _2)\): \(\theta ^*=\theta _0,\quad \min E_{mix}(\theta )=E_{mix}(\theta _0)\);

        • If \(E_{mix}(\theta _2)< E_{mix}(\theta _0)\): \(\theta ^*=\theta _2,\quad \min E_{mix}(\theta )=E_{mix}(\theta _2)\).

Output: \(\theta ^*\) and \(E_{mix}(\theta ^*)\).
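The steps above can be sketched in code. The following is a minimal numerical illustration, not the authors' implementation: it finds \(F^*\) and the largest root of \(F(u)+\beta \delta =0\) by a crude grid search, and the function name, the grid bounds and the extra return of \(\beta ^*\) alongside the two outputs are our own choices.

```python
import math

def optimal_theta(N, beta, delta, omega, a, b, u_max=50.0, grid=20000):
    """Grid-search sketch of Algorithm 1 (intended for a = b).
    Returns (theta_star, cost, beta_star). The grid bounds are illustrative
    choices; they must cover the sign change of F + beta*delta."""
    H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
    h = [min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]

    def S(u):   # sum_{j=0}^{N-2} (H_{N,a,b} + h_j) u^j
        return sum((H + h[j])*u**j for j in range(N-1))

    def Su(u):  # dS/du
        return sum(j*(H + h[j])*u**(j-1) for j in range(1, N-1))

    def g(u):   # 1 + u + ... + u^{N-1}
        return sum(u**j for j in range(N))

    def gu(u):  # dg/du
        return sum(j*u**(j-1) for j in range(1, N))

    def F(u):   # F(u) = Q(u)/(u P(u)) - log(u)
        P = (1 + u)*(S(u)*gu(u) - Su(u)*g(u)) - S(u)*g(u)
        Q = (1 + u)*S(u)*g(u)
        return Q/(u*P) - math.log(u)

    def E(theta):  # cost function E_mix(theta), explicit form of Lemma 2
        u = math.exp(beta*(theta + delta))
        return N*N*theta/2*(1 + u)*S(u)/g(u)

    # Steps (2)-(4): theta_0, F* (grid minimum over u > 1) and beta*.
    theta0 = math.log(omega/(1 - omega))/((N - 1)*beta) - delta
    us = [1 + (u_max - 1)*k/grid for k in range(1, grid + 1)]
    F_star = min(F(u) for u in us)
    beta_star = -F_star/delta

    if beta <= beta_star:                                  # Step (5)
        return theta0, E(theta0), beta_star
    u2 = max(u for u in us if F(u) + beta*delta <= 0)      # Step (6a)
    theta2 = math.log(u2)/beta - delta                     # Step (6b)
    if theta2 <= theta0 or E(theta0) <= E(theta2):
        return theta0, E(theta0), beta_star
    return theta2, E(theta2), beta_star
```

For the DG setting of Fig. 1 (\(N=3\), \(B=2\), \(c=1\), hence \(\delta =-2\), with \(\omega =0.7\)), this sketch reproduces the threshold \(\beta ^*\approx 3.67\) reported there.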

3 The expected cost function, phase transition, and the minimisation problem

In this section, we study in detail the cost function, establishing the phase transition phenomena and solving the minimisation problem (7) of finding the optimal incentive, thus proving Theorem 2.

We consider the case

$$\begin{aligned} \frac{1}{N-1}<\frac{b}{a}< N-1, \end{aligned}$$

and focus on the most important case, \(a=b\). This restriction is justified by Proposition 1: when \(\frac{b}{a}\) is not in this interval, the mixed incentive problem reduces to either the sole reward or the sole punishment problem, which has been studied in Duong and Han (2021).

3.1 The cost function and its derivative

In this section, we provide the explicit computation for the cost function and its derivative. The class of cooperation dilemmas introduced in Sect. 2, namely DG and PGG, is of crucial importance in the analysis of this paper. This is because, as already mentioned, the difference in payoffs between a cooperator and a defector, \(\delta =\Pi _C(i)-\Pi _D(i)\), does not depend on the state i, which gives rise to explicit and analytically tractable formulas for the entries of the fundamental matrix \({\mathcal {N}}\) of the absorbing Markov chain describing the population dynamics. These entries are given in the following lemma, whose detailed proof can be found in Duong and Han (2021).

Lemma 1

The entries \(n_{1,i}\) and \(n_{N-1,i}\), \(i=1,\ldots , N-1\), of the fundamental matrix \({\mathcal {N}}\) (see its definition underneath (1)) are given by

$$\begin{aligned} n_{1,i}=\frac{N^2}{(N-i)i}\frac{(e^{(N-1)x}-e^{(i-1)x})(1+e^x)}{e^{Nx}-1},\quad n_{N-1,i}=\frac{N^2}{(N-i)i}\frac{(e^{ix}-1)(1+e^x)}{e^{Nx}-1}, \end{aligned}$$

where \(x=x(\theta ):=\beta (\theta +\delta )\).

Using Lemma 1, we obtain a more explicit formula for the cost function as follows:

$$\begin{aligned} E_{mix}(\theta )&= \frac{\theta }{2}\sum _{j=1}^{N-1}(n_{1j}+n_{N-1,j}) \min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )\\&=\frac{\theta }{2} \sum _{j=1}^{N-1} \frac{N^2}{(N-j)j} \Big ((W^{-1})_{1j} +(W^{-1})_{N-1,j}\Big ) \min \Big (\frac{j}{a}, \frac{N-j}{b}\Big ) \\&= \frac{N^2 \theta (1+e^x)}{2(e^{Nx}-1)} \sum _{j=1}^{N-1} \frac{e^{(N-1)x} - e^{(j-1)x} + e^{jx} - 1 }{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big ) \\&=\frac{N^2 \theta (1+e^x)}{2(e^{Nx}-1)}\Big [\big (e^{(N-1)x}-1\big )\sum _{j=1}^{N-1}\frac{1}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )+ \\&\quad + (e^x-1)\sum _{j=1}^{N-1}\frac{e^{(j-1) x}}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )\Big ] \\&=\frac{N^2 \theta (1+e^x)}{2(1+e^x+\ldots +e^{(N-1)x})}\Big [\big (1+e^x+\ldots \\&\quad +e^{(N-2)x}\big )\sum _{j=1}^{N-1}\frac{1}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )+ \\ {}&\quad +\sum _{j=1}^{N-1}\frac{e^{(j-1) x}}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )\Big ]. \end{aligned}$$

To study the monotonicity of the cost function, in the following lemma, we compute the derivative of \(E_{mix}(\theta )\) with respect to \(\theta \).

Lemma 2

The total cost of interference for the mixed institutional incentive is given by

$$\begin{aligned} E_{mix}(\theta )&=\frac{N^2 \theta (1+e^x)}{2(1+e^x+\ldots +e^{(N-1)x})}\Big [\big (1+e^x+\ldots +e^{(N-2)x}\big )\\ {}&\qquad \sum _{j=1}^{N-1}\frac{1}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big ) \\ {}&\qquad +\sum _{j=1}^{N-1}\frac{e^{(j-1) x}}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )\Big ]. \end{aligned}$$

Its derivative is given by

$$\begin{aligned} E_{mix}'(\theta )=\frac{N^2}{2g(x)^2}\left[ Q(u)-(x-\beta \delta )uP(u)\right] , \end{aligned}$$

where \(P(u)=f(x)g'(x)-f'(x)g(x)\) and \(Q(u)=f(x)g(x)\) for \(f(x)=(1+u)\sum _{j=0}^{N-2} u^j\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big )\) and \(g(x)=\sum _{j=0}^{N-1} u^j\) with \(u=e^x\).

See Sect. 6.3 for a proof of this lemma.
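As a quick sanity check of Lemma 2, the closed-form derivative can be compared against a central finite difference of \(E_{mix}\). The sketch below is our own illustration; the parameter values (N, a, b, \(\beta \), \(\delta \)) are not taken from the paper.

```python
import math

# Illustrative parameters (not taken from the paper).
N, a, b = 5, 1.0, 1.0
beta, delta = 2.0, -1.5

H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
h = [min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]

def S(u):  return sum((H + h[j])*u**j for j in range(N-1))
def Su(u): return sum(j*(H + h[j])*u**(j-1) for j in range(1, N-1))
def g(u):  return sum(u**j for j in range(N))
def gu(u): return sum(j*u**(j-1) for j in range(1, N))

def E(theta):
    # E_mix(theta) = (N^2 theta/2)(1+u) S(u)/g(u), with u = e^{beta(theta+delta)}
    u = math.exp(beta*(theta + delta))
    return N*N*theta/2*(1 + u)*S(u)/g(u)

def dE(theta):
    # Lemma 2: E'_mix = N^2/(2 g^2) [Q(u) - (x - beta*delta) u P(u)]
    x = beta*(theta + delta)
    u = math.exp(x)
    P = (1 + u)*(S(u)*gu(u) - Su(u)*g(u)) - S(u)*g(u)
    Q = (1 + u)*S(u)*g(u)
    return N*N/(2*g(u)**2)*(Q - (x - beta*delta)*u*P)
```

Here P and Q are taken in their u-derivative form (29) and (30); note that \(x-\beta \delta =\beta \theta \).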

3.2 The polynomial P

This section contains details about

$$\begin{aligned} P(u)&:=(1+u)\Bigg [\Big (\sum _{j=0}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) u^j\Big )\Big (\sum _{j=1}^{N-1} j u^{j-1}\Big ) \\ {}&\qquad \qquad -\Big (\sum _{j=1}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) j u^{j-1}\Big )\Big (\sum _{j=0}^{N-1}u^j\Big )\Bigg ] \\ {}&\qquad \qquad -\Big (\sum _{j=0}^{N-2}\Big (H_{N,a,b} + \frac{\min (\frac{j+1}{a},\frac{N-j-1}{b})}{(j+1)(N-j-1)}\Big ) u^j\Big )\Big (\sum _{j=0}^{N-1}u^j\Big ). \end{aligned}$$

The following proposition studies the properties of P.

Proposition 2

Let P(u) be the polynomial defined in (29). Then it is a polynomial of degree \(2N-4\),

$$\begin{aligned} P(u)= \sum _{k=0}^{2N-4} p_k u^k, \end{aligned}$$

where the leading coefficient \(p_{2N-4}\) is positive. When \(a=b\), the coefficients of P are anti-symmetric, that is we have

$$\begin{aligned} p_k=-p_{2N-4-k}<0 \quad \text {for}\quad k=0,\ldots , N-3, \quad \text {and}\quad p_{N-2}=0. \end{aligned}$$

As a consequence, when \(a=b\), P has exactly one positive root, which is equal to 1.

The proof of this proposition is lengthy and delicate. The case \(a=b\) is special since it gives rise to many useful symmetric properties and nice cancellations. To focus on the main points here, we postpone the proof to the Appendix, see Sect. 6.4.
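For small N, the claimed anti-symmetry can be verified directly with coefficient-list polynomial arithmetic. The sketch below is our own check, with the illustrative choice \(N=6\), \(a=b=1\); it builds P exactly as in the definition above.

```python
# Coefficient-list polynomials; list index = power of u.
N, a, b = 6, 1.0, 1.0

H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
S = [H + min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]
g = [1.0]*N                       # 1 + u + ... + u^{N-1}

def mul(p, q):
    r = [0.0]*(len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i+j] += pi*qj
    return r

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0.0]*(n - len(p)); q = q + [0.0]*(n - len(q))
    return [x - y for x, y in zip(p, q)]

def deriv(p):                     # d/du of a coefficient list
    return [k*p[k] for k in range(1, len(p))]

# P = (1+u)(S g' - S' g) - S g; the coefficient of u^{2N-3} cancels,
# leaving a polynomial of degree 2N-4.
P = sub(mul([1.0, 1.0], sub(mul(S, deriv(g)), mul(deriv(S), g))), mul(S, g))
assert abs(P[2*N - 3]) < 1e-9
P = P[:2*N - 3]
```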

3.3 The derivative of F

In this section, we study the derivative of the function F defined in (31). The analysis of this section will play an important role in the study of the phase transition of the cost function in the next section.

We have

$$\begin{aligned} F'(u)=\frac{u Q'(u)P(u)-Q(u)(P(u)+uP'(u))}{u^2P(u)^2}-\frac{1}{u}=:\frac{M(u)}{u^2P(u)^2}, \end{aligned}$$

where

$$\begin{aligned} M(u):=u Q'(u)P(u)-Q(u)(P(u)+uP'(u))-uP(u)^2. \end{aligned}$$
(13)

The sign of \(F'(u)\) (thus, the monotonicity of F) is the same as that of the polynomial M. The next proposition presents some properties of M.

Proposition 3

The following statements hold

  1.

    M is a polynomial of degree \(4N-6\),

    $$\begin{aligned} M(u)=\sum _{i=0}^{4N-6} m_i u^{i}, \end{aligned}$$

    where the leading coefficient is

    $$\begin{aligned} m_{4N-6}=a_{N-2}a_{N-3}>0. \end{aligned}$$
  2.

    When \(a=b\), the coefficients of M are symmetric, that is for all \(i=0,\ldots , 4N-6\)

    $$\begin{aligned} m_{i}=m_{4N-6-i}. \end{aligned}$$
  3.

    M has at least two positive roots, one less than 1 and the other bigger than 1. For sufficiently small N, namely \(N\le N_0\), M has exactly two positive roots, \(u_1\) and \(u_2\), with \(u_1<1<u_2\). As a consequence, \(F'(u)<0\) for \(1<u<u_2\), so F is decreasing there, while \(F'(u)>0\) for \(u>u_2\), so F is increasing there.

The proof of this proposition is presented in Sect. 6.5. We conjecture that the sequence of coefficients of M has exactly two changes of sign for all N, and thus that M has exactly two positive roots.
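The symmetry of M can be probed the same way. The sketch below, our own check with the illustrative choice \(N=6\), \(a=b=1\), assembles M from (13) and tests the first two statements of Proposition 3 numerically.

```python
N, a, b = 6, 1.0, 1.0

H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
S = [H + min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]
g = [1.0]*N

def mul(p, q):
    r = [0.0]*(len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i+j] += pi*qj
    return r

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0.0]*(n - len(p)); q = q + [0.0]*(n - len(q))
    return [x - y for x, y in zip(p, q)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0]*(n - len(p)); q = q + [0.0]*(n - len(q))
    return [x + y for x, y in zip(p, q)]

def deriv(p):
    return [k*p[k] for k in range(1, len(p))]

# P = (1+u)(S g' - S' g) - S g, trimmed to its true degree 2N-4.
P = sub(mul([1.0, 1.0], sub(mul(S, deriv(g)), mul(deriv(S), g))), mul(S, g))
assert abs(P[2*N - 3]) < 1e-9
P = P[:2*N - 3]
Q = mul([1.0, 1.0], mul(S, g))    # Q = (1+u) S g

# M = u Q' P - Q (P + u P') - u P^2, cf. (13); [0.0]+p multiplies p by u.
M = sub(sub(mul([0.0] + deriv(Q), P), mul(Q, add(P, [0.0] + deriv(P)))),
        [0.0] + mul(P, P))
```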

3.4 The phase transition and the minimisation problem

In this section, we study the phase transition problem, which describes the change in the behaviour of the cost function when varying the strength of selection \(\beta \), and the optimal incentive problem, thus proving Theorem 2. We focus on the case \(a=b\).

Proof of Theorem 2

It follows from Proposition 2 that for \(u>0\), \(P(u)>0\) if and only if \(u>1\).

Thus according to the argument at the end of Sect. 3.1, if \(u\le 1\) then \(E'_{mix}>0\) (thus \(E_{mix}\) is increasing); and for \(u> 1\) we have (see (32))

$$\begin{aligned} E_{mix}'(\theta )=\frac{N^2}{2g(x)^2}(uP(u))\Big (\frac{Q(u)}{uP(u)}-\log (u) + \beta \delta \Big )=\frac{N^2}{2g(x)^2}(uP(u))(F(u) + \beta \delta ), \end{aligned}$$

where the function F is (see (31))

$$\begin{aligned} F(u)= \frac{Q(u)}{uP(u)}-\log (u). \end{aligned}$$

Since Q(u) is a polynomial of degree \(2N-2\) and \(uP(u)\) is a polynomial of degree \(2N-3\), both with positive leading coefficients, and since \(P(1)=0\) while \(Q(1)>0\),

$$\begin{aligned} \lim _{u\rightarrow +\infty } F(u)=+\infty =\lim _{u\rightarrow 1^+} F(u). \end{aligned}$$

From this, together with the fact that F is smooth on \((1,+\infty )\), we deduce that F attains a global minimum on the interval \((1,+\infty )\),

$$\begin{aligned} F^*:=\min \{F(u),~u>1\}. \end{aligned}$$

Let

$$\begin{aligned} \beta ^*:=-\frac{F^*}{\delta }. \end{aligned}$$

Then it follows from the above formula of \(E'_{mix}(\theta )\) that for \(\beta \le \beta ^*\), \(E_{mix}'(\theta )\ge 0\); thus for \(\beta \le \beta ^*\), \(E_{mix}(\theta )\) is always increasing. For \(\beta >\beta ^*\), the sign of \(E_{mix}'(\theta )\) depends on the sign of the term \(F(u)+\beta \delta \). For arbitrary N, the equation \(F(u)=-\beta \delta \) has at least two roots, thus the sign of the term \(F(u)+\beta \delta \) changes at least twice, and therefore \(E_{mix}\) is not monotonic. In particular, for \(N\le N_0\), since F is decreasing in \((1,u_2)\) and increasing in \((u_2,+\infty )\), where \(F^*=F(u_2)<-\beta \delta \), the equation \(F(u)=-\beta \delta \) has two roots \(1<{\bar{u}}_1<u_2<{\bar{u}}_2\), and \(F(u)+\beta \delta <0\) when \({\bar{u}}_1< u< {\bar{u}}_2\) while \(F(u)+\beta \delta \ge 0\) when \(u\in (1,{\bar{u}}_1]\cup [{\bar{u}}_2,+\infty )\), see Fig. 3 for an illustration.

Fig. 3

Behaviour of F and determination of the critical threshold \(\beta ^*\). For sufficiently small N, i.e. \(N\le N_0\), since F is decreasing in \((1,u_2)\) and increasing in \((u_2,+\infty )\), where \(F^*=F(u_2)<-\beta \delta \), the equation \(F(u)=-\beta \delta \) has two roots \(1<\bar{u_1}<u_2<{\bar{u}}_2\), and \(F(u)+\beta \delta <0\) when \({\bar{u}}_1< u< {\bar{u}}_2\) while \(F(u)+\beta \delta \ge 0\) when \(u\in (1,{\bar{u}}_1]\cup [{\bar{u}}_2,+\infty )\)

Hence \(E_{mix}'(\theta )<0\) when \({\bar{u}}_1< u< {\bar{u}}_2\) while \(E'_{mix}(\theta )\ge 0\) when \(u\in (1,{\bar{u}}_1]\cup [{\bar{u}}_2,+\infty )\). Thus \(E_{mix}(\theta )\) is increasing in \((1,{\bar{u}}_1)\), decreasing in \(({\bar{u}}_1,{\bar{u}}_2)\), and increasing again in \(({\bar{u}}_2,+\infty )\). In terms of the variable \(\theta \), \(E_{mix}(\theta )\) is increasing in \((-\delta ,\theta _1)\), decreasing in \((\theta _1,\theta _2)\), and increasing again in \((\theta _2,+\infty )\), where

$$\begin{aligned} \theta _1:=\frac{\log ({\bar{u}}_1)}{\beta } - \delta ,\quad \theta _2:=\frac{\log ({\bar{u}}_2)}{\beta }- \delta . \end{aligned}$$

As a consequence, for \(N\le N_0\),

$$\begin{aligned} \min _{\theta \ge \theta _0} E_{mix}(\theta )=\min \{E_{mix}(\theta _0), E_{mix}(\theta _2)\}. \end{aligned}$$

This completes the proof of this theorem. \(\square \)

4 Asymptotic behaviour of the expected cost function

In this section, we study the asymptotic behaviour (neutral drift, strong selection, and infinite population limits) of the cost function, proving the main results in Theorem 1. To this end, we will first need some auxiliary technical lemmas.

4.1 Some auxiliary lemmas

The first auxiliary lemma is an elementary inequality which will be used to estimate the cost function. Its proof can be found in Duong and Han (2021).

Lemma 3

For all \(x\in {\mathbb {R}}\), we have

$$\begin{aligned} 0\le \frac{e^x+\ldots +e^{(N-2)x}}{1+e^x+\ldots +e^{(N-1)x}}\le \frac{N-2}{N}. \end{aligned}$$
(14)

In the next lemma, we provide lower and upper bounds for the number \(H_{N,a,b}\) defined in (8). As mentioned in the introduction, this number plays a similar role as the harmonic number \(H_n\) in Duong and Han (2021). However, unlike the harmonic number, we will show that \(H_{N,a,b}\) is always bounded and has a finite limit as \(N\rightarrow +\infty \).

Lemma 4

It holds that

$$\begin{aligned} \frac{2(\ln {2}+\frac{1}{2N-1}-\frac{1}{N+1})}{\max (a,b)}\le H_{N,a,b}\le \frac{2(\ln {2}+\frac{1}{2N+1}-\frac{1}{N-1})}{\min (a,b)}. \end{aligned}$$

The proof of this lemma is given in Sect. 6.6.

The following proposition characterises the asymptotic limit of \(H_{N,a,b}\) as \(N\rightarrow +\infty \), which will be used later to obtain asymptotic limits of the cost function.

Proposition 4

It holds that

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty } H_{N,a,b}=H_{a,b},\quad \text {where}\quad H_{a,b}=\frac{1}{a}\ln \Big (\frac{a+b}{b}\Big )+\frac{1}{b}\ln \Big (\frac{a+b}{a}\Big ). \end{aligned}$$

Proof

We recall that \(H_{N,a,b}=\sum \limits _{j=1}^{N-1}\frac{1}{j(N-j)}\min (\frac{j}{a},\frac{N-j}{b})\). As in the proof of the above lemma, we will link \(H_{N,a,b}\) to the harmonic number \(H_N\) by splitting the sum at an appropriate point, which allows us to determine the minimum between \(j/a\) and \((N-j)/b\) explicitly; the splitting point is given by

$$\begin{aligned} N_{a,b}=\left\lfloor \frac{N}{\frac{b}{a}+1} \right\rfloor . \end{aligned}$$

We have

$$\begin{aligned} H_{N,a,b}= \sum \limits _{j=1}^{N_{a,b}}\frac{\frac{j}{a}}{j(N-j)} + \sum \limits _{j=N_{a,b}+1}^{N-1}\frac{\frac{N-j}{b}}{j(N-j)} = \frac{1}{a}\sum \limits _{j=1}^{N_{a,b}}\frac{1}{N-j} + \frac{1}{b}\sum \limits _{j=N_{a,b}+1}^{N-1}\frac{1}{j}. \end{aligned}$$

Let \({\hat{j}}=N-j\). Thus:

$$\begin{aligned} H_{N,a,b}= \frac{1}{a}\sum \limits _{{\hat{j}}=N-N_{a,b}}^{N-1}\frac{1}{{\hat{j}}} + \frac{1}{b}\sum \limits _{j=N_{a,b}+1}^{N-1}\frac{1}{j}= \frac{1}{a}\Big (H_{N}-H_{N-N_{a,b}}\Big ) + \frac{1}{b}\Big (H_{N}-H_{N_{a,b}}\Big ). \end{aligned}$$

Note that, up to rounding, \(N-N_{a,b}=N(1-\frac{a}{a+b})=N\frac{b}{a+b}\) and \(N_{a,b}=N\frac{a}{a+b}\). Thus both \(N_{a,b}\) and \(N-N_{a,b}\) tend to \(+\infty \) as \(N\rightarrow +\infty \). Then, using the well-known asymptotic behaviour of the harmonic number (see (52))

$$\begin{aligned} H_N=\ln N+\gamma +o(1)\quad \text {as}\quad N\rightarrow +\infty , \end{aligned}$$

we obtain the following asymptotic behaviour of \(H_{N,a,b}\)

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty } H_{N,a,b}=\frac{1}{a}\ln \Big (\frac{a+b}{b}\Big )+\frac{1}{b}\ln \Big (\frac{a+b}{a}\Big ). \end{aligned}$$

This completes the proof of the proposition. \(\square \)
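A quick numerical probe of this convergence (with the illustrative choice \(a=1\), \(b=2\)):

```python
import math

def H_Nab(N, a, b):
    # H_{N,a,b} = sum_{j=1}^{N-1} min(j/a, (N-j)/b) / (j (N-j))
    return sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))

a, b = 1.0, 2.0                    # illustrative efficiency ratios
H_limit = math.log((a + b)/b)/a + math.log((a + b)/a)/b
```

For \(a=1\), \(b=2\) the limit is \(\ln (3/2)+\tfrac{1}{2}\ln 3\approx 0.9548\), and \(H_{N,a,b}\) reaches it at rate roughly O(1/N).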

The following lemma provides lower and upper bounds for the cost function, which show that it grows quadratically with respect to the population size.

Lemma 5

It holds that

$$\begin{aligned} \frac{N^2\theta }{2}\Big (H_{N,a,b}+m\Big )\le E_{mix}(\theta )\le N(N-1)\theta \Big (H_{N,a,b}+M\Big ), \end{aligned}$$
(15)

where

$$\begin{aligned} m=\min _{i}\frac{\min (\frac{i+1}{a},\frac{N-i-1}{b})}{(i+1)(N-i-1)},\quad M=\max _{i}\frac{\min (\frac{i+1}{a},\frac{N-i-1}{b})}{(i+1)(N-i-1)}. \end{aligned}$$
(16)

Proof

Recall that \(H_{N,a,b}=\sum \limits _{j=1}^{N-1}\frac{1}{j(N-j)}\min (\frac{j}{a},\frac{N-j}{b})\). We have

$$\begin{aligned} \frac{(1+e^x)(1+e^x+\ldots +e^{(N-2)x})}{1+e^x+\ldots +e^{(N-1)x}}=1+\frac{e^x+\ldots +e^{(N-2)x}}{1+e^x+\ldots +e^{(N-1)x}}. \end{aligned}$$

Using Lemma 3, we get

$$\begin{aligned} 1\le \frac{(1+e^x)(1+e^x+\ldots +e^{(N-2)x})}{1+e^x+\ldots +e^{(N-1)x}}\le \frac{2(N-1)}{N}. \end{aligned}$$

Let \(m=\min \limits _{i}\frac{\min (\frac{i+1}{a},\frac{N-i-1}{b})}{(i+1)(N-i-1)}\) and \(M=\max \limits _{i}\frac{\min (\frac{i+1}{a},\frac{N-i-1}{b})}{(i+1)(N-i-1)}\). Since

$$\begin{aligned} m\sum _{j=0}^{N-2}e^{jx}\le \sum _{j=1}^{N-1}\frac{e^{(j-1) x}}{j(N-j)}\min \Big (\frac{j}{a},\frac{N-j}{b}\Big )\le M\sum _{j=0}^{N-2}e^{jx}. \end{aligned}$$

we obtain the following estimates

$$\begin{aligned} m&\le m\frac{(1+e^x)(1+e^x+\ldots +e^{(N-2)x})}{1+e^x+\ldots +e^{(N-1)x}}\\ {}&\le \frac{(1+e^x)}{1+e^x+\ldots +e^{(N-1)x}}\sum _{j=1}^{N-1}\frac{e^{(j-1)x}}{j(N-j)}\min \Big (\frac{j}{a},\frac{N-j}{b}\Big ) \\ {}&\le (H_{N,a,b}+M)\frac{(1+e^x)(1+e^x+\ldots +e^{(N-2)x})}{1+e^x+\ldots +e^{(N-1)x}}\le \frac{2(N-1)(H_{N,a,b}+M)}{N}. \end{aligned}$$

Thus for \(\theta >0\) we have

$$\begin{aligned} \frac{N^2\theta }{2}\Big (H_{N,a,b}+m\Big )\le E_{mix}(\theta )\le N(N-1)\theta \Big (H_{N,a,b}+M\Big ), \end{aligned}$$

which completes the proof of the lemma. \(\square \)
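The bounds of Lemma 5 are easy to spot-check; the parameter values below are illustrative only:

```python
import math

# Illustrative parameters; Lemma 5 assumes theta > 0.
N, a, b = 8, 1.0, 2.0
beta, delta, theta = 1.0, -1.2, 0.8

H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
h = [min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]

u = math.exp(beta*(theta + delta))
# E_mix via the explicit formula of Lemma 2
E = N*N*theta/2*(1 + u)*sum((H + h[j])*u**j for j in range(N-1))/sum(u**j for j in range(N))

lower = N*N*theta/2*(H + min(h))       # (N^2 theta / 2)(H_{N,a,b} + m)
upper = N*(N - 1)*theta*(H + max(h))   # N(N-1) theta (H_{N,a,b} + M)
```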

We can further estimate m and M in (16) from below and above respectively in terms of the population size N as follows.

Lemma 6

For m and M defined in (16), it holds that

$$\begin{aligned} m\ge \frac{1}{\max (a,b)(N-1)},\quad M\le \frac{1}{\min (a,b)\lfloor \frac{(N-1)}{2} \rfloor }. \end{aligned}$$

As a consequence, we have

$$\begin{aligned} \frac{N^2\theta }{2}\Big (H_{N,a,b}+\frac{1}{\max (a,b)(N-1)}\Big )\le E_{mix}(\theta )\le N(N-1)\theta \Big (H_{N,a,b}+\frac{1}{\min (a,b)\lfloor \frac{N-1}{2} \rfloor }\Big ). \end{aligned}$$

The proof of this lemma is given in Sect. 6.7.

4.2 Neutral drift limit

In this section, we study the neutral drift limit of the cost function, proving the second part of Theorem 1.

Proposition 5

(neutral drift limit) It holds that

$$\begin{aligned} \lim \limits _{\beta \rightarrow 0}E_{mix}(\theta )=\theta N^2 H_{N,a,b}. \end{aligned}$$

It is worth noting that the neutral drift limit of the cost function depends on the population size N, the reward and punishment efficiency ratios a and b through the number \(H_{N,a,b}\), but does not depend on the underlying game.

Proof

We recall that \(x=\beta (\theta +\delta )\). Since \(\beta \rightarrow 0\) implies \(x\rightarrow 0\), it follows from the formula of \(E_{mix}(\theta )\) that

$$\begin{aligned} \lim _{\beta \rightarrow 0}E_{mix}(\theta )&=\lim _{\beta \rightarrow 0} \frac{N^2 \theta (1+e^x)}{2(1+e^x+\ldots +e^{(N-1)x})}\\ {}&\quad \bigg [\big (1+e^x+\ldots +e^{(N-2)x}\big )\sum _{j=1}^{N-1}\frac{1}{j(N-j)}\min \Big (\frac{j}{a},\frac{N-j}{b}\Big )\\&\quad +\sum _{j=1}^{N-1}\frac{e^{(j-1) x}}{j(N-j)}\min \Big (\frac{j}{a}, \frac{N-j}{b}\Big )\bigg ]\\&=\theta N[(N-1)H_{N,a,b}+H_{N,a,b}]\\&=\theta N^2H_{N,a,b}. \end{aligned}$$

\(\square \)
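A numerical illustration of the neutral drift limit (parameter values are again illustrative):

```python
import math

# Illustrative parameters for the neutral drift limit.
N, a, b = 7, 1.5, 1.0
theta, delta = 0.6, -1.0

H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
h = [min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]

def E_mix(beta):
    # Explicit form of Lemma 2, with u = e^{beta(theta+delta)}
    u = math.exp(beta*(theta + delta))
    return N*N*theta/2*(1 + u)*sum((H + h[j])*u**j for j in range(N-1))/sum(u**j for j in range(N))
```

For very small \(\beta \) the value approaches \(\theta N^2 H_{N,a,b}\), while for moderate \(\beta \) it visibly departs from it.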

4.3 Strong selection limits

In this section, we study the strong selection limits of the cost function, proving the third statement of Theorem 1.

Proposition 6

(strong selection limits)

$$\begin{aligned} \lim \limits _{\beta \rightarrow +\infty }E_{mix}(\theta )={\left\{ \begin{array}{ll} \frac{N^2\theta }{2}\Big (H_{N,a,b} + \frac{1}{a(N-1)}\Big ), &{}\quad \text {for}\quad \theta <-\delta ,\\ -N^2\delta H_{N,a,b}, &{}\quad \text {for}\quad \theta =-\delta ,\\ \frac{N^2\theta }{2}\Big (H_{N,a,b}+\frac{1}{b(N-1)}\Big ), &{}\quad \text {for}\quad \theta >-\delta . \end{array}\right. } \end{aligned}$$

Similarly to the neutral drift limit, the strong selection limit of the cost function depends on the population size N, the reward and punishment efficiency ratios a and b, but does not depend on the underlying game.

Proof

To establish the strong selection limit \(\lim \limits _{\beta \rightarrow +\infty } E_{mix}(\theta )\), we rewrite \(E_{mix}(\theta )\) in a more convenient form as follows:

$$\begin{aligned} E_{mix}(\theta )&=\frac{\frac{N^2\theta }{2}(1+e^x)}{\sum \nolimits _{j=0}^{N-1}e^{jx}}\left[ \sum \limits _{j=0}^{N-2} e^{jx}H_{N,a,b}+ \sum \limits _{j=0}^{N-2} \frac{\min \Big (\frac{j+1}{a}, \frac{N-j-1}{b}\Big )}{(j+1)(N-j-1)}\,e^{jx}\right] \\&=\frac{\frac{N^2\theta }{2}(1+e^x)}{\sum \nolimits _{j=0}^{N-1}e^{jx}}\Bigg [ \sum \limits _{j=0}^{N-2} e^{jx}(H_{N,a,b} + h_j)\Bigg ], \end{aligned}$$

where \(h_j=\frac{\min (\frac{j+1}{a}, \frac{N-j-1}{b})}{(j+1)(N-j-1)}\).

Now, note that:

$$\begin{aligned} (1+e^x)\sum \limits _{j=0}^{N-2} e^{jx}(H_{N,a,b} + h_j)&= \sum \limits _{j=0}^{N-2} e^{jx}(H_{N,a,b} + h_j) + \sum \limits _{j=0}^{N-2} e^{(j+1)x}(H_{N,a,b} + h_j) \\ {}&= \sum \limits _{j=0}^{N-1}\eta _j e^{jx}, \end{aligned}$$

where

$$\begin{aligned}&\eta _0=H_{N,a,b}+h_0=H_{N,a,b}+\frac{\min (\frac{1}{a}, \frac{N-1}{b})}{N-1}=H_{N,a,b}+\frac{1}{a(N-1)},\\&\eta _{N-1}=H_{N,a,b}+h_{N-2}=H_{N,a,b}+\frac{1}{b(N-1)},\\&\eta _j=H_{N,a,b}+h_j+H_{N,a,b}+h_{j-1}=2H_{N,a,b}+h_j+h_{j-1}, \; \text {for}\; 1\le j\le N-2. \end{aligned}$$

By putting all of the above together, we obtain that:

$$\begin{aligned} E_{mix}(\theta ) = \frac{N^2\theta }{2\sum \nolimits _{j=0}^{N-1}e^{jx}}\sum \limits _{j=0}^{N-1}\eta _j e^{jx}. \end{aligned}$$

Recall that \(x=\beta (\theta +\delta )\) with \(\delta \) being the difference of the average payoffs between a cooperator and a defector. We study 3 cases:

  1.

    If \(\theta <-\delta \), then \(j(\theta +\delta )<0\) for all \(1\le j \le N-1\), so \(e^{jx}=\Big [e^{j(\theta +\delta )}\Big ]^{\beta }\) and thus \(\lim \limits _{\beta \rightarrow +\infty } e^{jx}= 0\) for all \(1\le j \le N-1\). Therefore,

    $$\begin{aligned} \lim \limits _{\beta \rightarrow +\infty }E_{mix}(\theta )&=\frac{N^2\theta }{2}\,\eta _0 =\frac{N^2\theta }{2}\bigg (H_{N,a,b}+\frac{1}{a(N-1)}\bigg ). \end{aligned}$$
  2.

    If \(\theta =-\delta \), then \(x=0\). So \(E_{mix}(\theta )=E_{mix}(-\delta )=-\frac{N^2\delta }{2N}\sum \limits _{j=0}^{N-1} \eta _j\). We have

    $$\begin{aligned} \sum \limits _{j=0}^{N-1} \eta _j&=\eta _0 + \sum _{j=1}^{N-2}\eta _j+\eta _{N-1} \\ {}&=(H_{N,a,b}+h_0)+(H_{N,a,b}+h_{N-2})+\sum _{j=1}^{N-2}\Big (2 H_{N,a,b}+h_j+h_{j-1}\Big ) \\ {}&=2(N-1) H_{N,a,b}+h_0+h_{N-2}+\Big (\sum _{j=0}^{N-2}h_j-h_0\Big )+\Big (\sum _{j=0}^{N-2}h_j-h_{N-2}\Big ) \\ {}&= 2N H_{N,a,b}, \end{aligned}$$

    where we used \(\sum _{j=0}^{N-2}h_j=H_{N,a,b}\). Therefore,

    $$\begin{aligned} E_{mix}(-\delta )= -\frac{N \delta }{2}\cdot 2N H_{N,a,b}=-N^2\delta H_{N,a,b}. \end{aligned}$$
  3.

    If \(\theta >-\delta \), then we obtain

    $$\begin{aligned} \lim \limits _{\beta \rightarrow +\infty } E_{mix}(\theta )&= \frac{N^2\theta }{2} \lim \nolimits _{\beta \rightarrow +\infty } \frac{ \sum \nolimits _{j=0}^{N-1} \eta _j e^{jx}}{\sum \nolimits _{j=0}^{N-1} e^{jx}} \\&=\frac{N^2\theta }{2}\eta _{N-1} \\&=\frac{N^2\theta }{2}\bigg [H_{N,a,b}+\frac{1}{b(N-1)}\bigg ], \end{aligned}$$

    since

    $$\begin{aligned} \frac{ \sum \nolimits _{j=0}^{N-1} \eta _j e^{jx}}{\sum \nolimits _{j=0}^{N-1} e^{jx}}&=\frac{e^{(N-1)x}\sum \nolimits _{j=0}^{N-1} \eta _j e^{(j-(N-1))x}}{e^{(N-1)x}\sum \nolimits _{j=0}^{N-1} e^{(j-(N-1))x}} =\frac{\sum \nolimits _{j=0}^{N-1} \eta _j e^{(j-(N-1))x}}{\sum \nolimits _{j=0}^{N-1} e^{(j-(N-1))x}}, \end{aligned}$$

    and \(e^{(j-(N-1))x}=e^{(j-(N-1))\beta (\theta +\delta )}=\bigg [ e^{(j-(N-1))(\theta +\delta )}\bigg ]^{\beta }\rightarrow 0\) as \(\beta \rightarrow +\infty \) for all \(0\le j \le N-2\), the exponent \((j-(N-1))(\theta +\delta )\) being negative.\(\square \)
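The \(\theta >-\delta \) limit can be checked numerically without overflow by factoring \(e^{(N-1)x}\) out of both sums, exactly as in the proof; the parameters below are illustrative:

```python
import math

# Illustrative parameters with theta + delta > 0 (the theta > -delta case).
N, a, b = 6, 1.0, 1.0
theta, delta = 1.0, -0.4

H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
h = [min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)) for j in range(N-1)]

# eta_0 = H + h_0, eta_j = 2H + h_j + h_{j-1} (1 <= j <= N-2), eta_{N-1} = H + h_{N-2}
eta = [H + h[0]] + [2*H + h[j] + h[j-1] for j in range(1, N-1)] + [H + h[N-2]]

def E_mix(beta):
    # E = (N^2 theta/2) * sum_j eta_j e^{jx} / sum_j e^{jx}; shifting every
    # exponent by -(N-1)x keeps all exponents <= 0 when x > 0 (no overflow).
    x = beta*(theta + delta)
    w = [math.exp((j - (N - 1))*x) for j in range(N)]
    return N*N*theta/2*sum(e*wj for e, wj in zip(eta, w))/sum(w)

limit = N*N*theta/2*(H + 1/(b*(N - 1)))   # Proposition 6, theta > -delta case
```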

Fig. 4

Neutral drift and strong selection limits. We calculate numerically the expected total cost for the mixed incentive, varying the intensity of selection \(\beta \). The dashed red lines represent the corresponding theoretical limiting values obtained in Proposition 5 for the neutral drift limit, while the dashed green lines represent the corresponding theoretical limiting values obtained in Proposition 6 for the strong selection limit. We observe that numerical results are in close accordance with those obtained theoretically. Results are obtained for DG with \(N = 2500, B = 2, c = 1\) (left) and for PGG with \(N = 2500, c = 1, r = 3, n = 5\) (right) (colour figure online)

4.4 Infinite population limits

In this section, we establish the infinite population limits of the cost function, proving the fourth statement of Theorem 1.

Proposition 7

(infinite population limits) We have

$$\begin{aligned} \lim _{N\rightarrow +\infty }\frac{E_{mix}(\theta )}{\frac{N^2\theta }{2}H_{a,b}}={\left\{ \begin{array}{ll} 1+e^{-\beta |\theta -c|} \quad \text {for DG},\\ 1+ e^{-\beta |\theta -c(1-\frac{r}{n})|}\quad \text {for PGG}. \end{array}\right. } \end{aligned}$$
(17)

Unlike the neutral drift and strong selection limits, the infinite population limit of the cost function strongly depends on the underlying game. The strength of selection, \(\beta \), is fixed. See Fig. 5 below for an illustration of these limits.
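Before turning to the proof, here is a quick numerical probe of the DG case of (17); the game parameters (B, c, \(\beta \), \(\theta \)) below are illustrative:

```python
import math

a = b = 1.0
B, c, beta, theta = 2.0, 1.0, 1.0, 0.5   # illustrative DG parameters

def ratio(N):
    # E_mix(theta) / ((N^2 theta / 2) H_{a,b}) for the Donation Game
    delta = -(c + B/(N - 1))
    H = sum(min(j/a, (N-j)/b)/(j*(N-j)) for j in range(1, N))
    u = math.exp(beta*(theta + delta))
    S = sum((H + min((j+1)/a, (N-j-1)/b)/((j+1)*(N-j-1)))*u**j for j in range(N-1))
    E = N*N*theta/2*(1 + u)*S/sum(u**j for j in range(N))
    H_ab = math.log((a + b)/b)/a + math.log((a + b)/a)/b
    return E/(N*N*theta/2*H_ab)

target = 1 + math.exp(-beta*abs(theta - c))
```

As N grows the ratio approaches \(1+e^{-\beta |\theta -c|}\), in line with (17).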

Proof

Using

$$\begin{aligned} E_{mix}(\theta )=\frac{\frac{N^2\theta }{2}(1+e^x)}{\sum \nolimits _{j=0}^{N-1}e^{jx}}\Big [ \sum \limits _{j=0}^{N-2} e^{jx}(H_{N,a,b} + h_j)\Big ]\; \text {where}\; h_j=\frac{\min (\frac{j+1}{a}, \frac{N-j-1}{b})}{(j+1)(N-j-1)}, \end{aligned}$$

we can estimate

$$\begin{aligned} \frac{(1+e^x)(1+\ldots +e^{(N-2)x})}{1+\ldots +e^{(N-1)x}}\Big (\frac{H_{N,a,b}}{H_{a,b}}+\frac{{\underline{h}}}{H_{a,b}}\Big )\le \frac{E_{mix}(\theta )}{\frac{N^2\theta }{2} H_{a,b}}\nonumber \\ \le \frac{(1+e^x)(1+\ldots +e^{(N-2)x})}{1+\ldots +e^{(N-1)x}}\Big (\frac{H_{N,a,b}}{H_{a,b}}+\frac{{\overline{h}}}{H_{a,b}}\Big ), \end{aligned}$$
(18)

where \({\underline{h}}=\min h_j\) and \({\overline{h}}=\max h_j\).

We recall that \(x=\beta (\theta +\delta )\), where

$$\begin{aligned} \delta =\delta (N)={\left\{ \begin{array}{ll} -(c+\frac{B}{N-1})\quad \text {for DG},\\ -c(1-\frac{r(N-n)}{n(N-1)}) \quad \text {for PGG}. \end{array}\right. } \end{aligned}$$

Let

$$\begin{aligned} w(\theta ):=\frac{(1+e^x)(1+\ldots +e^{(N-2)x})}{1+\ldots +e^{(N-1)x}}. \end{aligned}$$

Then we have

$$\begin{aligned} w(\theta )=1+\frac{e^x+\ldots +e^{(N-2)x}}{1+e^x+\ldots +e^{(N-1)x}}={\left\{ \begin{array}{ll} \frac{2(N-1)}{N}&{}\quad \text {if}\quad x=0,\\ 1+e^x\frac{1-e^{(N-2)x}}{1-e^{N x}}&{}\quad \text {if}\quad x\ne 0. \end{array}\right. } \end{aligned}$$
(19)

Since for fixed \(\theta \), x equals 0 for only one value of N, we can assume \(x\ne 0\) when we study the limit \(N\rightarrow +\infty \). We have

$$\begin{aligned} \lim _{N\rightarrow +\infty } e^x=\lim _{N\rightarrow +\infty }e^{\beta (\theta +\delta (N))}={\left\{ \begin{array}{ll} e^{\beta (\theta -c)}\quad \text {for DG},\\ e^{\beta (\theta -c(1-\frac{r}{n}))}\quad \text {for PGG}. \end{array}\right. } \end{aligned}$$
(20)

In addition, for DG, we have

$$\begin{aligned} \lim _{N\rightarrow +\infty }\frac{1-e^{(N-2)x}}{1-e^{N x}}= \lim _{N\rightarrow +\infty }\frac{1-e^{-\beta B(N-2)/(N-1)}e^{\beta (\theta -c)(N-2)}}{1-e^{-\beta B N/(N-1)}e^{\beta (\theta -c)N}}= {\left\{ \begin{array}{ll} 1&{}\quad \text {if}\quad \theta \le c,\\ e^{-2\beta (\theta -c)}&{}\quad \text {if}\quad \theta >c. \end{array}\right. } \end{aligned}$$
(21)

While for PGG, we have

$$\begin{aligned} \frac{1-e^{(N-2)x}}{1-e^{N x}}&= \frac{1 - e^{(N-2)\beta [\theta -c(1-\frac{r}{n})-\frac{cr}{n}\frac{n-1}{N-1}]}}{1-e^{N\beta [\theta -c(1-\frac{r}{n})-\frac{cr}{n}\frac{n-1}{N-1}]}} \\&= \frac{1 - e^{(N-2)\beta (\theta -c(1-\frac{r}{n}))}e^{-\beta (N-2)\frac{cr}{n}\frac{n-1}{N-1}}}{1-e^{N\beta (\theta -c(1-\frac{r}{n}))} e^{-N\beta \frac{cr}{n}\frac{n-1}{N-1}}}. \end{aligned}$$
(22)

If \(\theta -c(1-\frac{r}{n}) < 0\), then

$$\begin{aligned}&\lim _{N\rightarrow +\infty } e^{(N-2)\beta (\theta -c(1-\frac{r}{n}))} = \lim _{N\rightarrow +\infty } e^{N\beta (\theta -c(1-\frac{r}{n}))} = 0, \\&\lim _{N\rightarrow +\infty } e^{-\beta (N-2)c \frac{r(n-1)}{n(N-1)}} = e^{-\beta c \frac{r(n-1)}{n}}, \\&\lim _{N\rightarrow +\infty } e^{-N\beta c \frac{r(n-1)}{n(N-1)}} = e^{-\beta c \frac{r(n-1)}{n}}. \end{aligned}$$

Substituting these limits in (22) (in the boundary case \(\theta =c(1-\frac{r}{n})\), both the numerator and the denominator of (22) tend to \(1-e^{-\beta c\frac{r(n-1)}{n}}\)), it follows that, if \(\theta -c(1-\frac{r}{n}) \le 0\), we have

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty }\frac{1-e^{(N-2)x}}{1-e^{N x}} = 1. \end{aligned}$$

If \(\theta -c(1-\frac{r}{n})>0\), then we have

$$\begin{aligned} \frac{1 - e^{(N-2)\beta (\theta -c(1-\frac{r}{n}))}e^{-\beta (N-2)\frac{cr}{n}\frac{n-1}{N-1}}}{1-e^{N\beta (\theta -c(1-\frac{r}{n}))} e^{-N\beta \frac{cr}{n}\frac{n-1}{N-1}}} = \frac{\frac{1}{e^{N\beta (\theta -c(1-\frac{r}{n}))}} - e^{-2\beta (\theta -c(1-\frac{r}{n}))} e^{-\beta (N-2)\frac{cr}{n}\frac{n-1}{N-1}}}{\frac{1}{e^{N\beta (\theta -c(1-\frac{r}{n}))}}-e^{-N\beta \frac{cr}{n}\frac{n-1}{N-1}}}. \end{aligned}$$
(23)

We have

$$\begin{aligned}&\lim \limits _{N\rightarrow +\infty }\frac{1}{e^{N\beta (\theta -c(1-\frac{r}{n}))}}=0, \\ {}&\lim \limits _{N\rightarrow +\infty } e^{-\beta (N-2)\frac{cr}{n}\frac{n-1}{N-1}}=e^{-\beta c \frac{r(n-1)}{n}}, \\ {}&\lim _{N\rightarrow +\infty } e^{-N\beta c \frac{r(n-1)}{n(N-1)}} = e^{-\beta c \frac{r(n-1)}{n}}. \end{aligned}$$

From the above limits, (22) and (23) we obtain

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty }\frac{1-e^{(N-2)x}}{1-e^{N x}}&= \lim _{N\rightarrow +\infty } \frac{\frac{1}{e^{N\beta (\theta -c(1-\frac{r}{n}))}} - e^{-2\beta (\theta -c(1-\frac{r}{n}))} e^{-\beta (N-2)\frac{cr}{n}\frac{n-1}{N-1}}}{\frac{1}{e^{N\beta (\theta -c(1-\frac{r}{n}))}}-e^{-N\beta \frac{cr}{n}\frac{n-1}{N-1}}} \\ {}&=\frac{- e^{-2\beta (\theta -c(1-\frac{r}{n}))} e^{-\beta \frac{cr(n-1)}{n}}}{-e^{-\beta \frac{cr(n-1)}{n}}} \\ {}&= e^{-2\beta (\theta -c(1-\frac{r}{n}))}. \end{aligned}$$

Hence, for PGG,

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty }\frac{1-e^{(N-2)x}}{1-e^{N x}}={\left\{ \begin{array}{ll} 1,\quad \text {if}\quad \theta \le c(1-\frac{r}{n}),\\ e^{-2\beta (\theta -c(1-\frac{r}{n}))},\quad \text {if}\quad \theta > c(1-\frac{r}{n}). \end{array}\right. } \end{aligned}$$
(24)
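The limit in (24) can also be checked numerically. The sketch below (illustrative parameter values; it assumes, as in the proof, that \(x=\beta (\theta -c(1-\frac{r}{n}))-\beta \frac{cr(n-1)}{n(N-1)}\)) evaluates the ratio at a large but finite \(N\) in both regimes.

```python
import math

beta, c, r, n = 1.0, 1.0, 3.0, 5  # illustrative values; c(1 - r/n) = 0.4 here

def ratio(theta, N):
    # x as defined in the proof, including the N-dependent correction term.
    x = beta * (theta - c * (1 - r / n)) - beta * c * r * (n - 1) / (n * (N - 1))
    return (1 - math.exp((N - 2) * x)) / (1 - math.exp(N * x))

# Case theta <= c(1 - r/n): the limit is 1.
print(abs(ratio(0.2, 1000) - 1.0) < 1e-6)             # True
# Case theta > c(1 - r/n): the limit is e^{-2 beta (theta - c(1 - r/n))}.
print(abs(ratio(1.0, 1000) - math.exp(-1.2)) < 5e-3)  # True
```

For even larger \(N\) the direct evaluation of \(e^{Nx}\) overflows in double precision when \(x>0\); in that regime the rewritten form (23) is the numerically stable one.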

From (19), (20), and (21), we obtain, for DG

$$\begin{aligned} \lim _{N\rightarrow +\infty }w(\theta )&={\left\{ \begin{array}{ll} 1+e^{\beta (\theta -c)}\quad \text {if}\quad \theta \le c,\\ 1+e^{-\beta (\theta -c)}\quad \text {if}\quad \theta >c \end{array}\right. }\nonumber \\&=1+e^{-\beta |\theta -c|}, \end{aligned}$$
(25)

and similarly, from (19), (20), and (24), we get, for PGG

$$\begin{aligned} \lim _{N\rightarrow +\infty }w(\theta )&={\left\{ \begin{array}{ll} 1+e^{\beta (\theta -c)}e^{\beta c\frac{r}{n}}\quad \text {if}\quad \theta \le c(1-\frac{r}{n}),\\ 1+e^{-\beta (\theta -c)}e^{-\beta c\frac{r}{n}}\quad \text {if}\quad \theta >c(1-\frac{r}{n}) \end{array}\right. }\nonumber \\ {}&=1+ e^{-\beta |\theta -c(1-\frac{r}{n})|}. \end{aligned}$$
(26)
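As a small consistency check (ours, with illustrative parameter values), one can confirm numerically that the two branches in (26) collapse to the single expression \(1+e^{-\beta |\theta -c(1-\frac{r}{n})|}\).

```python
import math

beta, c, r, n = 1.0, 1.0, 3.0, 5  # illustrative values
thr = c * (1 - r / n)             # the threshold c(1 - r/n), here 0.4

# Test theta below, at, and above the threshold.
for theta in (0.1, 0.4, 0.9):
    if theta <= thr:
        branch = 1 + math.exp(beta * (theta - c)) * math.exp(beta * c * r / n)
    else:
        branch = 1 + math.exp(-beta * (theta - c)) * math.exp(-beta * c * r / n)
    unified = 1 + math.exp(-beta * abs(theta - thr))
    print(abs(branch - unified) < 1e-12)  # True for each theta
```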

Now we study the limit of \(\frac{H_{N,a,b}+{\underline{h}}}{H_{a,b}}\) and of \(\frac{H_{N,a,b}+{\overline{h}}}{H_{a,b}}\) as \(N\rightarrow +\infty \).

For \(i=1,\ldots , N_{a,b}\), \(h_i=\frac{1}{a(N-i)}\le \frac{1}{a(N-N_{a,b})}\) since \(N-i\ge N-N_{a,b}\). Since \(\lim \limits _{N\rightarrow +\infty }\frac{1}{a(N-N_{a,b})}=0\), it follows that \(\lim \limits _{N\rightarrow +\infty } h_i=0\).

For \(i=N_{a,b}+1,\ldots , N-2\), \(h_i=\frac{1}{b(i+1)}\le \frac{1}{b(N_{a,b}+2)}\) since \(i\ge N_{a,b}+1\). Since \(\lim \limits _{N\rightarrow +\infty }\frac{1}{b(N_{a,b}+2)}=0\), it follows that \(\lim \limits _{N\rightarrow +\infty } h_i=0\).

As \(N_{a,b}=\lfloor \frac{N}{\frac{b}{a}+1} \rfloor \rightarrow +\infty \) as \(N\rightarrow +\infty \), we deduce that \(h_j \rightarrow 0\) as \(N\rightarrow +\infty \) for every j. Hence

$$\begin{aligned} \lim _{N\rightarrow +\infty }\frac{H_{N,a,b}+{\underline{h}}}{H_{a,b}}=\lim _{N\rightarrow +\infty }\frac{H_{N,a,b}+{\overline{h}}}{H_{a,b}}=1. \end{aligned}$$
(27)
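To illustrate (27) concretely, the following sketch (not part of the proof; the values of a and b are illustrative) uses the piecewise definition of \(h_i\) and \(N_{a,b}=\lfloor \frac{N}{\frac{b}{a}+1}\rfloor \) from above to show that \(\max _i h_i\) vanishes as \(N\) grows.

```python
import math

a, b = 1.0, 2.0  # illustrative incentive parameters

def max_h(N):
    # N_ab = floor(N / (b/a + 1)), as in the proof.
    N_ab = math.floor(N / (b / a + 1))
    # h_i = 1/(a(N-i)) for i = 1,...,N_ab and h_i = 1/(b(i+1)) for i = N_ab+1,...,N-2.
    terms = [1 / (a * (N - i)) for i in range(1, N_ab + 1)]
    terms += [1 / (b * (i + 1)) for i in range(N_ab + 1, N - 1)]
    return max(terms)

print(max_h(100) > max_h(1000) > max_h(10000))  # True: sup_i h_i decreases towards 0
```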

Therefore, from (18), (25), and (27) we obtain, for DG:

$$\begin{aligned} \lim _{N\rightarrow +\infty }\frac{E_{mix}(\theta )}{\frac{N^2\theta }{2}H_{N,a,b}}=1+e^{-\beta |\theta -c|}, \end{aligned}$$

and from (18), (26) and (27), for PGG:

$$\begin{aligned} \lim _{N\rightarrow +\infty }\frac{E_{mix}(\theta )}{\frac{N^2\theta }{2}H_{N,a,b}}=1+ e^{-\beta |\theta -c(1-\frac{r}{n})|}. \end{aligned}$$

\(\square \)

Fig. 5

Large population size limit. We calculate numerically the expected total cost for the mixed incentive, varying the population size N. The dashed orange lines represent the corresponding theoretical limiting values obtained in Proposition 7 for the large population size limit, \(N\rightarrow +\infty \). We observe that numerical results are in close accordance with those obtained theoretically. Results are obtained for DG with \(N = 2500, B = 2, c = 1, \beta = 1, \theta = 1\) (left) and for PGG with \(N = 2500, c = 1, r = 3, n = 5, \beta = 1, \theta = 1\) (right) (colour figure online)

5 Discussion

The use of institutional incentives such as reward and punishment is an effective tool for promoting cooperation in social dilemmas, as demonstrated by both theoretical (Hauert et al. 2007; Sasaki et al. 2015; Sigmund et al. 2010; Han 2022; Duong and Han 2021) and experimental results (Ostrom 2005; Van Lange et al. 2014; Jia-Jia et al. 2014). Past works that employed mixed incentives, however, overlooked the problem of minimising the cost of incentives while ensuring a minimum level of cooperation. Moreover, the existing works that do consider this question usually omit the stochastic effects that drive population dynamics, in particular the role of the strength of selection, \(\beta \).

In this work, we used a stochastic evolutionary game theoretic approach for finite, well-mixed populations and obtained theoretical results for the optimal cost of mixed incentives that ensures a desired level of cooperation, for a given intensity of selection, \(\beta \). We showed that this cost depends strongly on the value of \(\beta \), owing to the existence of a phase transition in the cost function for providing mixed incentives. This behaviour is absent in works that adopt a deterministic evolutionary approach (Wang et al. 2019). We also characterised the asymptotic behaviour of the cost function and showed that mixed incentives are always more cost-efficient than using either reward or punishment alone. In particular, we successfully obtained the infinite population limit, as well as the limits of neutral drift and strong selection.

For the mathematical analysis of the mixed incentive cost function to be possible, we made some assumptions. Firstly, in order to derive the analytical formula for the frequency of cooperation, we assumed the small mutation limit (Rockenbach and Milinski 2007; Nowak et al. 2004; Sigmund 2010). Despite this simplifying assumption, the small mutation limit approach applies widely to scenarios that go well beyond the strict limit of very small mutation rates (Zisis et al. 2015; Hauert et al. 2007; Sigmund et al. 2010; Rand et al. 2013). Were we to relax this assumption, deriving a closed form for the frequency of cooperation would become intractable. Secondly, we focused on two important cooperation dilemmas, the Donation Game and the Public Goods Game. Both share the property that the difference in average payoffs between a cooperator and a defector does not depend on the population composition. This special property allowed us to simplify the fundamental matrix of the Markov chain to a tridiagonal form and to apply techniques of matrix analysis to obtain a closed form of its inverse. In games with more complex payoff matrices, such as the general prisoner's dilemma and the collective risk game (Sun et al. 2021), this property no longer holds (e.g. in the former case the payoff difference, \(\Pi _C(i)-\Pi _D(i)\), depends additively on i) and the technique in this paper cannot be applied directly. In such scenarios, one might consider other approaches to approximate the inverse matrix, exploiting its block structure.

More recent works have examined the effect of indirect exclusion on cooperation, finding that its introduction can induce either the stable coexistence of cooperators and defectors or the dominance of cooperators, thereby successfully promoting cooperation (Liu and Chen 2022). Requiring a prior agreement before an interaction takes place has also been shown to increase cooperation: individuals choose whether to take part in a social dilemma, and those who do so are rewarded (Ogbo et al. 2022; Han 2022). Our future work will consider cost-efficiency in these different forms of institutional incentives, including the problem of providing incentives only when the frequency or number of cooperators (or defectors) in the population does not exceed a given threshold.

We furthermore aim to investigate the optimisation of incentives such as reward, punishment, and exclusion in complex networks. Little attention has been paid to providing analytical results for cost-efficient incentives in structured populations or in more complex games, so this would also be an interesting research avenue. Finally, since time is a precious resource, we intend to explore how long the system needs in order to achieve a desired level of cooperation.