
Asymptotic equivalence between frequentist and Bayesian prediction limits for the Poisson distribution

  • Research Article
  • Published in the Journal of the Korean Statistical Society

Abstract

Bayesian prediction limits are constructed based on some maximum allowed probability of wrong prediction. However, the long-run frequency of wrong prediction often exceeds this probability. The literature on frequentist and Bayesian prediction limits and their interpretation is sparse; more attention has been given to prediction intervals obtained from parameter estimates or empirical studies. Under the Poisson distribution, we investigate frequentist properties of Bayesian prediction limits derived from conjugate priors, using the frequency of wrong prediction as the criterion for comparison. Bayesian prediction based on the uniform and Jeffreys' noninformative priors yields one-sided prediction limits that can be interpreted in a frequentist context. It is shown here, by proving a theorem, that the Bayesian lower prediction limit derived from Jeffreys' noninformative prior is the only optimal (largest) Bayesian lower prediction limit that possesses frequentist properties. In addition, it is concluded as a corollary that there is no prior distribution from which the Bayesian upper and lower prediction limits both coincide with their respective frequentist prediction limits. Our results are based on asymptotic considerations. An example with real data is included, and the sensitivity of the Bayesian prediction limits with respect to conjugate priors is explored numerically through simulations.



Availability of data and material

Data consist of NOAA records (www.aoml.noaa.gov/hrd/tcfaq/E11.html) on annual occurrences of Atlantic tropical storms from 1851 through 2018. The latest observations were included using publicly available records.

Notes

  1. https://www.aoml.noaa.gov/hrd/hurdat/comparison_table.html.

References

  • Aitchison, J., & Dunsmore, I. R. (1975). Statistical prediction analysis. Cambridge: Cambridge University Press.

  • Aitchison, J., & Sculthorpe, D. (1965). Some problems of statistical prediction. Biometrika, 52(3–4), 469–483.

  • Bain, L. J., & Patel, J. K. (1993). Prediction intervals based on partial observations for some discrete distributions. IEEE Transactions on Reliability, 42(3), 459–463.

  • Barndorff-Nielsen, O. E., & Cox, D. R. (1996). Prediction and asymptotics. Bernoulli, 2(4), 319–340.

  • Bejleri, V., & Nandram, B. (2018). Bayesian and frequentist prediction limits for the Poisson distribution. Communications in Statistics-Theory and Methods, 47(17), 4254–4271.

  • Bejleri, V., Sartore, L., & Nandram, B. (2021). plpoisson: Prediction limits for Poisson distribution. R package version 0.2.0.

  • Cox, D. R. (1975). Prediction intervals and empirical Bayes confidence intervals. Journal of Applied Probability, 12(S1), 47–55.

  • Cox, D. R., & Hinkley, D. V. (1974). Theoretical statistics. London: Chapman & Hall.

  • Dunsmore, I. R. (1976). A note on Faulkenberry's method of obtaining prediction intervals. Journal of the American Statistical Association, 71(353), 193–194. https://doi.org/10.1080/01621459.1976.10481513

  • Elsner, J. B., & Bossak, B. H. (2001). Bayesian analysis of US hurricane climate. Journal of Climate, 14(23), 4341–4350.

  • Faulkenberry, G. D. (1973). A method of obtaining prediction intervals. Journal of the American Statistical Association, 68(342), 433–435.

  • Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC Press.

  • Hahn, G. J., & Nelson, W. (1973). A survey of prediction intervals and their applications. Journal of Quality Technology, 5(4), 178–188.

  • Hall, P., Peng, L., & Tajvidi, N. (1999). On prediction intervals based on predictive likelihood or bootstrap methods. Biometrika, 86(4), 871–880.

  • Kass, R. E., & Wasserman, L. (1996). The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91(435), 1343–1370.

  • Khuri, A. I. (1993). Advanced calculus with applications in statistics. Wiley.

  • Knuth, D. E. (1973). The art of computer programming, Vol. 3: Sorting and searching. Reading, MA: Addison-Wesley.

  • Krishnamoorthy, K., & Peng, J. (2011). Improved closed-form prediction intervals for binomial and Poisson distributions. Journal of Statistical Planning and Inference, 141(5), 1709–1718.

  • Kvam, P. H., & Miller, J. G. (2002). Discrete predictive analysis in probabilistic safety assessment. Journal of Quality Technology, 34(1), 106–117.

  • Lawless, J. F., & Fredette, M. (2005). Frequentist prediction intervals and predictive distributions. Biometrika, 92(3), 529–542.

  • Lehmann, E. L., & Casella, G. (2006). Theory of point estimation. Springer.

  • Lejeune, M., & Faulkenberry, G. D. (1982). A simple predictive density function. Journal of the American Statistical Association, 77(379), 654–657.

  • Minka, T. P. (2000). Beyond Newton's method. Tech. Rep., Microsoft Research, Cambridge, UK.

  • Minka, T. P. (2002). Estimating a gamma distribution. Tech. Rep., Microsoft Research, Cambridge, UK.

  • Nadarajah, S., Alizadeh, M., & Bagheri, S. F. (2015). Bayesian and non-Bayesian interval estimators for the Poisson mean. REVSTAT-Statistical Journal, 13(3), 245–262.

  • Nelson, W. (1970). Confidence intervals for the ratio of two Poisson means and Poisson predictor intervals. IEEE Transactions on Reliability, 19(2), 42–49.

  • Possolo, A., & Iyer, H. K. (2017). Invited article: Concepts and tools for the evaluation of measurement uncertainty. Review of Scientific Instruments, 88(1).

  • Schafer, R. E., & Angus, J. E. (1977). Predicting the confidence of passing life tests. IEEE Transactions on Reliability, 26(2), 141–143.

  • Shah, B. V. (1969). On predicting failures in a future time period from known observations. IEEE Transactions on Reliability, 18(4), 203–204.

  • Taylor, H. M., & Karlin, S. (1998). An introduction to stochastic modeling (3rd ed.). Academic Press.

  • Thatcher, A. R. (1964). Relationships between Bayesian and confidence limits for predictions. Journal of the Royal Statistical Society: Series B (Methodological), 26(2), 176–192.

  • Wang, H. (2008). Coverage probability of prediction intervals for discrete random variables. Computational Statistics & Data Analysis, 53(1), 17–26.

  • Wang, H. (2010). Closed form prediction intervals applied for disease counts. The American Statistician, 64(3), 250–256.

  • Weiss, L. (1955). A note on confidence sets for random variables. The Annals of Mathematical Statistics, 26(1), 142–144.

  • Weiss, L. (1961). Statistical decision theory. New York: McGraw-Hill.

  • Winkler, R. L. (2003). An introduction to Bayesian inference and decision (2nd ed.). Gainesville: Probabilistic Publishing.

  • Woolhouse, M. (2011). How to make predictions about future infectious disease risks. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1573), 2045–2054.

  • Xie, M., & Singh, K. (2013). Confidence distribution, the frequentist distribution estimator of a parameter: A review. International Statistical Review, 81(1), 3–39.


Acknowledgements

We would like to thank two anonymous reviewers for their valuable comments, which considerably improved this article.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Valbona Bejleri.

Ethics declarations

Conflict of interest

The findings and conclusions in this article are those of the authors, have not been formally disseminated by the U.S. Department of Agriculture and should not be construed to represent any Agency determination or policy.

Code availability

R statistical software was used to compute the frequentist and Bayesian prediction limits (Bejleri et al., 2021).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Definition of a Poisson process based on Taylor and Karlin (1998)

Definition 1

A Poisson process of intensity (or rate) \(\lambda > 0\) is an integer-valued stochastic process \(\left\{ W\left( s \right) ; s \ge 0 \right\}\) satisfying:

  1. (i)

For any time points \(s_{0} = 0< s_{1}< s_{2}< \ldots < s_{n}\), the process increments

    $$\begin{aligned} W\left( s_{1} \right) - W\left( s_{0} \right) , ~ W\left( s_{2} \right) - W\left( s_{1} \right) , ~\ldots , ~ W\left( s_{n} \right) - W\left( s_{n - 1} \right) \end{aligned}$$

    are independent random variables.

  2. (ii)

    For \(s \ge 0\) and \(t > 0\), the random variable \(X\left( t \right) = W\left( s + t \right) - W\left( s \right)\) has Poisson distribution

    $$\begin{aligned} \Pr \big \{ X\left( t \right) = k \big \} = \Pr \big \{ W\left( s + t \right) - W\left( s \right) = k \big \} = \frac{\left( \lambda t \right) ^{k}e^{- \lambda t}}{k!}. \end{aligned}$$
  3. (iii)

    \(W\left( 0 \right) = 0\).

Note that t in Part (ii) denotes the length of a time interval. Parts (ii) and (iii) of the definition yield that \(X\left( t \right)\) has a Poisson distribution of rate \(\lambda > 0\). The mean of X is \(\mathrm {E}\left[X\left( t \right) \right]= \lambda t\) and the variance is \(\mathrm {Var}\left[X\left( t \right) \right]= \lambda t\). The Poisson process is stationary when the rate parameter \(\lambda\) remains constant over time.
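To make the definition concrete, here is a minimal R sketch (ours, not from the paper) that simulates a Poisson process through independent exponential interarrival times and checks that the count over an interval of length t has mean and variance close to \(\lambda t\); the values of lambda, t_len, and n_rep are illustrative.

```r
# Simulate W(t) for a Poisson process of rate lambda by accumulating
# exponential interarrival times, and count the events in [0, t_len].
set.seed(1)
lambda <- 2; t_len <- 5; n_rep <- 10000   # illustrative values
count_events <- function(lambda, t_len) {
  n <- 0
  s <- rexp(1, rate = lambda)             # time of the first event
  while (s <= t_len) {
    n <- n + 1
    s <- s + rexp(1, rate = lambda)       # next interarrival time
  }
  n
}
x <- replicate(n_rep, count_events(lambda, t_len))
c(mean = mean(x), var = var(x), lambda_t = lambda * t_len)
# Both the sample mean and variance should be close to lambda * t_len = 10.
```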

The limit of \(p(\ell (z; t) | z)\)

We have

$$\begin{aligned} 0 < p\left( \ell \left( z;t \right) |z \right) = \frac{\displaystyle \int _{0}^{\infty }{\frac{\left( \lambda t \right) ^{\ell \left( z;t \right) }e^{- \lambda t}}{ \ell \left( z;t \right) !}\frac{\left( \lambda ns \right) ^{z}e^{- \lambda ns}}{z!} p\left( \lambda \right) \mathrm {d}\lambda } }{\displaystyle \int _{0}^{\infty }{\frac{\left( \lambda ns \right) ^{z}e^{- \lambda ns}}{z!} p\left( \lambda \right) \mathrm {d}\lambda } }, \end{aligned}$$
(22)

and

$$\begin{aligned} \frac{\left( \lambda t \right) ^{\ell \left( z;t \right) }e^{- \lambda t}}{ \ell \left( z;t \right) !} = e^{- \lambda t}\left\{ \sum _{y = 0}^{\ell \left( z;t \right) } \frac{\left( \lambda t \right) ^{y}}{y!} - \sum _{y = 0}^{\ell \left( z;t \right) - 1} \frac{\left( \lambda t \right) ^{y}}{y!} \right\} . \end{aligned}$$
(23)

Under Assumption 1, \(\lim _{t \rightarrow \infty } \ell \left( z;t \right) t^{-1}\) exists and is finite. Hence,

$$\begin{aligned}\lim _{t \rightarrow \infty } \ell \left( z;t \right) = \lim _{t \rightarrow \infty } \left[\left( \ell \left( z;t \right) t^{-1} \right) t \right]= \infty .\end{aligned}$$

Then, both sums in (23) are partial sums of the convergent series \(\sum _{y = 0}^{\infty } \frac{\left( \lambda t \right) ^{y}}{y!} = e^{\lambda t}\), for every \(t~>~0\). Therefore,

$$\begin{aligned} \frac{\left( \lambda t \right) ^{\ell \left( z; t \right) } e^{- \lambda t}}{ \ell \left( z; t \right) !} = e^{- \lambda t}\left\{ \sum _{y = 0}^{\ell \left( z; t \right) } \frac{\left( \lambda t \right) ^{y}}{y!} - \sum _{y = 0}^{\ell \left( z;t \right) - 1} \frac{\left( \lambda t \right) ^{y}}{y!} \right\} = e^{- \lambda t}\left[e^{\lambda t} - e^{\lambda t} \right]= 0,~\text {as }{t \rightarrow \infty }. \end{aligned}$$

Hence, the numerator in (22) goes to 0, and \(\lim _{t \rightarrow \infty } p\left( \ell \left( z;t \right) | z \right) = 0.\)
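This limit is easy to see numerically. A small R sketch, assuming for illustration only that \(\ell (z;t)\) grows like kt with k = 1 and \(\lambda = 1.5\) (hypothetical values):

```r
# The Poisson probability at l(z;t) vanishes as t grows when l(z;t)/t -> k.
lambda <- 1.5; k <- 1                    # illustrative values
for (t in c(10, 100, 1000)) {
  l <- round(k * t)
  cat("t =", t, "  P(X = l) =", dpois(l, lambda * t), "\n")
}
# The printed probabilities decrease toward zero, as argued above.
```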

Differentiating the left-hand sides of (14) and (16) with respect to \(\beta\)

Based on Theorem 6.4.8 in Khuri (1993): Let \(f:D \rightarrow {\mathbb {R}}\), where \(D \subset {\mathbb {R}}^{2}\) contains the two-dimensional cell \(c_{2}\left( a,b \right) = \left\{ \left( x_{1},x_{2} \right) |\ a_{1}{\le x}_{1} \le b_{1},\ a_{2} \le x_{2} \le b_{2} \right\}\). Suppose that f is continuous and has continuous first-order partial derivatives with respect to \(x_{2}\) in D. Furthermore, let \(\theta (x_{2})\) and \(k(x_{2})\) be functions defined, with continuous derivatives, on \(\left[a_{2},b_{2} \right]\), such that

$$\begin{aligned} a_{1} \le \theta (x_{2}) \le k(x_{2}) \le b_{1}, \end{aligned}$$

for all \(x_{2}\) in \(\left[a_{2},b_{2} \right]\). Then the function \(G:\left[a_{2},b_{2} \right]\rightarrow {\mathbb {R}}\) defined by \(G\left( x_{2} \right) = \int _{\theta (x_{2})}^{k(x_{2})} {f(x_{1},x_{2}) \mathrm {d}x_{1}}\) is differentiable for \(x_{2} \in [a_{2}, b_{2}]\), and

$$\begin{aligned} \frac{\partial }{ \partial x_{2}} G = \int _{\theta (x_{2})}^{k(x_{2})} {\frac{\partial f(x_{1},x_{2})}{\partial x_{2}} \mathrm {d}x_{1} + k'\left( x_{2} \right) f\left( k\left( x_{2} \right) , x_{2} \right) - \theta '(x_{2})f(\theta (x_{2}),x_{2})}.\end{aligned}$$

Taking \(x_{1} = \lambda\), and \(x_{2} = \beta\), we can write \(G\left( \beta \right) = \int _{\theta (\beta )}^{k(\beta )}{f\left( \lambda ,\beta \right) \mathrm {d}\lambda }\) and

$$\begin{aligned} \frac{\partial }{\partial \beta } G = \int _{\theta (\beta )}^{k(\beta )} {\frac{\partial f\left( \lambda , \beta \right) }{\partial \beta }\mathrm {d}\lambda } + k'\left( \beta \right) f\left( k\left( \beta \right) ,\beta \right) - \theta '(\beta ) f\left( \theta (\beta ),\beta \right) , \end{aligned}$$
(24)

where \(k\left( \beta \right) = \lim _{t \rightarrow \infty } \ell \left( z; t \right) t^{-1}\), and \(\theta \left( \beta \right)\) is any constant function of \(\beta\), yielding

$$\begin{aligned} \theta '\left( \beta \right) ~ = 0.\end{aligned}$$

Then,

  1. 1.

    For \(f\left( \lambda ,\beta \right) = \lambda ^{z} e^{- n\lambda } p\left( \lambda \right)\), \(\frac{\partial }{\partial \beta } f\left( \lambda ,\beta \right) = 0\), and the definite integral

    $$\begin{aligned} \int _{\theta (\beta )}^{k(\beta )}{\frac{\partial }{\partial \beta } f\left( \lambda ,\beta \right) \mathrm {d}\lambda } = 0. \end{aligned}$$

    Hence,

    $$\begin{aligned} \frac{\partial }{\partial \beta } G = k'(\beta ) f\left( k(\beta ),\beta \right) , \end{aligned}$$
    (25)

    and, for \(\theta \left( \beta \right) = 0\),

    $$\begin{aligned} \frac{\partial }{\partial \beta } \int _{0}^{k\left( \beta \right) }{\lambda ^{z}e^{- n\lambda } p\left( \lambda \right) \mathrm {d}\lambda } = k'\left( \beta \right) {k\left( \beta \right) }^{z} e^{- n k\left( \beta \right) } p\left( k\left( \beta \right) \right) . \end{aligned}$$
    (26)
  2. 2.

    For \(f\left( \lambda ,\beta \right) = \lambda ^{z - 1} e^{- n\lambda }\), \(\frac{\partial }{\partial \beta } f\left( \lambda ,\beta \right) = 0\), and the definite integral

    $$\begin{aligned} \int _{0}^{k(\beta )} {\frac{\partial }{\partial \beta } f\left( \lambda ,\beta \right) \mathrm {d}\lambda } = 0. \end{aligned}$$

Hence, as in Part 1, equation (25) holds, and

    $$\begin{aligned} \frac{\partial }{\partial \beta } \int _{0}^{k\left( \beta \right) } {\lambda ^{z - 1} e^{- n\lambda } \mathrm {d}\lambda } = k'\left( \beta \right) {k\left( \beta \right) }^{z - 1} e^{- n k\left( \beta \right) }. \end{aligned}$$
    (27)

For more on differentiation under the integral sign, see Theorem 6.4.8 in Khuri (1993). A numerical check of (26) is sketched below.
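As a sanity check of (26), one can compare a numerical derivative of G with the right-hand side for toy choices of the ingredients. In this R sketch, \(p(\lambda ) \equiv 1\), \(k(\beta ) = \beta\), z = 2, and n = 3 are arbitrary illustrative assumptions, not quantities from the paper.

```r
# Numerical check of (26): the derivative of
#   G(beta) = integral_0^{k(beta)} lambda^z exp(-n*lambda) p(lambda) d lambda
# should equal k'(beta) * k(beta)^z * exp(-n*k(beta)) * p(k(beta)).
z <- 2; n <- 3
p <- function(lambda) rep(1, length(lambda))  # toy prior, p(lambda) = 1
k <- function(beta) beta                      # toy limit function, k'(beta) = 1
G <- function(beta)
  integrate(function(l) l^z * exp(-n * l) * p(l), 0, k(beta))$value
beta <- 0.4; h <- 1e-6
lhs <- (G(beta + h) - G(beta - h)) / (2 * h)           # central difference
rhs <- 1 * k(beta)^z * exp(-n * k(beta)) * p(k(beta))  # right side of (26)
c(numeric = lhs, formula = rhs)  # the two numbers should agree closely
```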

Evaluating \(k(\beta )\) as \(t \rightarrow \infty\) for \(z = 1\)

In the case when \(z = 1\), equation (16) reduces to \(\int _{0}^{k\left( \beta \right) }{e^{- n\lambda }\mathrm {d}\lambda } = \beta n^{-1}\). Then, from (15),

$$\begin{aligned} \int _{0}^{k\left( \beta \right) } {e^{- n\lambda }\mathrm {d}\lambda }&= \beta \int _{0}^{\infty } {e^{- n\lambda }\mathrm {d}\lambda } \\&= \beta \left[\frac{- e^{- n\lambda }}{n} \right]_0^\infty = \beta \left[- 0 + n^{-1} e^{- 0} \right]= \beta n^{-1}. \end{aligned}$$

Hence,

$$\begin{aligned}\int _{0}^{k\left( \beta \right) }{e^{- n\lambda }\mathrm {d}\lambda } = \left[\frac{- e^{- n\lambda }}{n} \right]_0^{k\left( \beta \right) } = \left[\frac{- e^{- nk\left( \beta \right) }}{n} + \frac{e^{0}}{n} \right]= \frac{1}{n} - \frac{e^{- n k\left( \beta \right) }}{n} = \frac{\beta }{n}.\end{aligned}$$

Thus,

$$\begin{aligned} e^{- n k\left( \beta \right) }&= 1 - \beta ,\\ - n k\left( \beta \right)&= \ln \left( 1 - \beta \right) ,\\ k\left( \beta \right)&= \frac{1}{n} \ln \left( \frac{1}{1 - \beta } \right) , \end{aligned}$$

where \(k\left( \beta \right) = \lim _{t \rightarrow \infty } \ell \left( 1; t \right) t^{-1}\).
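A quick numerical confirmation of this closed form (our sketch; the values of n and \(\beta\) are illustrative):

```r
# Check that integral_0^{k(beta)} exp(-n*lambda) d lambda equals beta / n.
n <- 4; beta <- 0.3                       # illustrative values
k_beta <- log(1 / (1 - beta)) / n         # closed form derived above
integrate(function(l) exp(-n * l), 0, k_beta)$value
beta / n                                  # both outputs should match
```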

Proving that the second term in equation (11) equals zero

In what follows, we state some mathematical and probabilistic facts that hold under the conditions of Theorem 1 and Assumption 1, based on Lemma 3.

Fact 1: \({\lim _{t \rightarrow \infty }\ell \left( z; t \right) t^{-1}}\) exists and is finite; denote it by k.

Fact 2: The following implications are true (based on Lemma 3):

$$\begin{aligned} \lambda > {k}\Longrightarrow \lim _{t \rightarrow \infty } \sum _{y = 0}^{\ell \left( z; t \right) } \frac{\left( \lambda t \right) ^{y}e^{- \lambda t}}{y!} = 0. \end{aligned}$$

and

$$\begin{aligned} \lambda \le {k}\Longrightarrow \lim _{t \rightarrow \infty } \sum _{y = 0}^{\ell \left( z; t \right) } \frac{\left( \lambda t \right) ^{y}e^{- \lambda t}}{y!} = 1. \end{aligned}$$

Fact 3: For an observed sample, i.e., when s, n, and z are known, the expression \(\left( n \lambda s \right) ^z e^{- n\lambda s} p\left( \lambda \right)\) is well defined at every point of \([{k}, \infty )\), where it takes positive values. Furthermore, \(\displaystyle \int _{k}^{\infty }{\left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda }\) exists and is greater than zero for any given s, n, and z; denote its value by c:

$$\begin{aligned}\displaystyle \int _{k}^{\infty }{\left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda } = {c}\end{aligned}$$

Now, rewrite Eq. (11) from the main text as a sum of two fractions,

$$\begin{aligned} \sum _{y = 0}^{\ell \left( z;t \right) }{p\left( y|z \right) }&= \frac{\displaystyle \int _{0}^{k} {\left[\sum _{y = 0}^{\ell \left( z;t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \frac{\left( n \lambda s \right) ^{z} e^{- n\lambda s}}{z!} p\left( \lambda \right) \mathrm {d}\lambda }}{\displaystyle \int _{0}^{\infty }{\frac{\left( n \lambda s \right) ^z e^{- n\lambda s}}{z!} p\left( \lambda \right) \mathrm {d}\lambda } } \\&\qquad + \frac{\displaystyle \int _{k}^{\infty }{\left[\sum _{y = 0}^{\ell \left( z; t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \frac{\left( n \lambda s \right) ^{z} e^{- n\lambda s}}{z!} p\left( \lambda \right) \mathrm {d}\lambda }}{\displaystyle \int _{0}^{\infty }{\frac{\left( n \lambda s \right) ^z e^{- n\lambda s}}{z!} p\left( \lambda \right) \mathrm {d}\lambda } }. \end{aligned}$$

In this last expression, the factor z! cancels in both ratios, since we integrate with respect to \(\lambda\). Hence, in order to reduce equation (11) to (12), we need to prove that the second ratio goes to zero as \(t \rightarrow \infty\), which we do by showing that its numerator goes to zero:

$$\begin{aligned} \lim _{t \rightarrow \infty } \displaystyle \int _{k}^{\infty }{\left[\sum _{y = 0}^{\ell \left( z; t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda }=0. \end{aligned}$$
(28)

To do so, we must show that for any given \(\epsilon >0\) there exists \({t^{*}>0}\) such that, for all \({t > t^*}\),

$$\begin{aligned}\left| \int _{k}^{\infty }{\left[\sum _{y = 0}^{\ell \left( z; t \right) } \frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda } \right| \le \epsilon . \end{aligned}$$

For a given \(\epsilon >0\), take \(\epsilon ^{'}= \epsilon /c >0\), where c is introduced in Fact 3. Next, for this \({\epsilon ^{'} > 0}\), choose \(t^* = \max \left\{ t_{1},t_{2},t_{3},t_{4} \right\}\), where \(t_{1},t_{2},t_{3}\), and \(t_{4}\) were introduced in Lemmas 1–3. Based on Facts 1 and 2, for all \(t >t^{*}\),

$$\begin{aligned}-\epsilon ^{'}\le \sum _{y = 0}^{\ell \left( z; t \right) } \frac{\left( \lambda t \right) ^{y}e^{-\lambda t}}{y!}\le \epsilon ^{'}.\end{aligned}$$

Also, \(\left( n\lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) > {0}\), for any given s, n, z and any \(\lambda > {k}\). Hence,

$$\begin{aligned} -\epsilon ^{'} \left( n\lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \le \left[\sum _{y = 0}^{\ell \left( z;t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\left( n\lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \le \epsilon ^{'} \left( n\lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) . \end{aligned}$$
(29)

Integrating all sides of (29) over \([k, \infty )\) and applying Fact 3, one obtains:

$$\begin{aligned}-\epsilon ^{'} {c} \le \int _{k}^{\infty }{\left[\sum _{y = 0}^{\ell \left( z; t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda }\le \epsilon ^{'} {c}, \end{aligned}$$

or

$$\begin{aligned}\frac{-\epsilon }{c}\, {c} \le \int _{k}^{\infty }{\left[\sum _{y = 0}^{\ell \left( z; t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda }\le \frac{\epsilon }{c}\, {c}. \end{aligned}$$

Hence, for every \(t >t^{*}\),

$$\begin{aligned}-\epsilon \le \int _{k}^{\infty }{\left[\sum _{y = 0}^{\ell \left( z; t \right) }\frac{\left( \lambda t \right) ^{y} e^{- \lambda t}}{y!} \right]\ \left( n \lambda s \right) ^{z} e^{- n\lambda s} p\left( \lambda \right) \mathrm {d}\lambda }\le \epsilon . \end{aligned}$$

This proves (28); therefore, the second fraction in equation (11) vanishes as \(t \rightarrow \infty\).
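The dichotomy in Fact 2, on which the proof rests, can also be observed numerically. A minimal R sketch with illustrative values k = 1 and \(\ell \left( z;t \right)\) = round(kt):

```r
# Fact 2 numerically: the Poisson CDF at l = round(k*t) tends to 0 when
# lambda > k and tends to 1 when lambda < k, as t grows.
k <- 1; t <- 500; l <- round(k * t)       # illustrative values
ppois(l, 1.2 * t)                         # lambda = 1.2 > k: close to 0
ppois(l, 0.8 * t)                         # lambda = 0.8 < k: close to 1
```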

Algorithms to compute frequentist PLs

Any algorithm designed to compute the frequentist PLs searches for the optimal integer values satisfying the properties described in Bejleri and Nandram (2018). In particular, the lower PL, \(\ell _*\), is the largest integer that satisfies the following inequality:

$$\begin{aligned}\sum _{i = 0}^z \left( {\begin{array}{c}z + \ell _*\\ i\end{array}}\right) \pi ^i (1 - \pi )^{z + \ell _* - i} > 1 - \beta ; \end{aligned}$$

and, on the other hand, the upper PL, \(\ell ^*\), is the smallest integer that satisfies the following inequality:

$$\begin{aligned} \sum _{i = 0}^z \left( {\begin{array}{c}z + \ell ^*\\ i\end{array}}\right) \pi ^i (1 - \pi )^{z + \ell ^* - i} < \beta .\end{aligned}$$

The sequential search (Knuth, 1973) proposed in Bejleri and Nandram (2018) is conducted over all values in the set \(\{0, 1, \ldots , q\}\), where q is a predetermined \(1 - \epsilon\) quantile of a Poisson distribution. To reduce the computational complexity from \(O(\lambda )\) to \(O(\log (\lambda ))\), we present Algorithm 1 and Algorithm 2 for computing the lower and upper frequentist PLs, respectively. These two algorithms are based on binary search (Knuth, 1973); a minimal sketch of the idea follows the algorithm listings below.

[Algorithm 1: binary search for the lower frequentist PL]
[Algorithm 2: binary search for the upper frequentist PL]
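As an illustration only, and not the authors' plpoisson implementation: the binomial sum above equals pbinom(z, z + l, p) in base R, and it is decreasing in \(\ell\), so each inequality can be resolved with \(O(\log q)\) evaluations.

```r
# Binomial sum from the inequalities above, computed with the base-R CDF.
binom_sum <- function(l, z, p) pbinom(z, size = z + l, prob = p)

# Lower PL: largest l in {0, ..., q} with binom_sum(l, z, p) > 1 - beta.
lower_pl <- function(z, p, beta, q) {
  lo <- 0L; hi <- as.integer(q); ans <- NA_integer_
  while (lo <= hi) {
    mid <- (lo + hi) %/% 2L
    if (binom_sum(mid, z, p) > 1 - beta) {  # mid satisfies the inequality;
      ans <- mid; lo <- mid + 1L            # look for a larger one
    } else hi <- mid - 1L
  }
  ans
}

# Upper PL: smallest l in {0, ..., q} with binom_sum(l, z, p) < beta.
upper_pl <- function(z, p, beta, q) {
  lo <- 0L; hi <- as.integer(q); ans <- NA_integer_
  while (lo <= hi) {
    mid <- (lo + hi) %/% 2L
    if (binom_sum(mid, z, p) < beta) {      # mid satisfies the inequality;
      ans <- mid; hi <- mid - 1L            # look for a smaller one
    } else lo <- mid + 1L
  }
  ans
}
```

Each call evaluates the binomial sum \(O(\log q)\) times, which is the complexity reduction described above.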


About this article


Cite this article

Bejleri, V., Sartore, L. & Nandram, B. Asymptotic equivalence between frequentist and Bayesian prediction limits for the Poisson distribution. J. Korean Stat. Soc. 51, 633–665 (2022). https://doi.org/10.1007/s42952-021-00157-x

