Abstract

Applying piecewise deterministic Markov processes theory, we obtain the probability generating function of a Cox process whose claim intensity is a shot noise process. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption on the shot noise process. Based on this Laplace transform and on the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval of a Cox process with shot noise intensity for insurance claims and its moments, that is, its mean and variance.

1. Introduction

In insurance modeling, the Poisson process has been used as a claim arrival process. Extensive discussion of the Poisson process, from both applied and theoretical viewpoints, can be found in [1–6]. However, there is a significant volume of literature questioning the suitability of the Poisson process in insurance modeling [7, 8]. From a practical point of view, there is no doubt that the insurance industry needs a more suitable claim arrival process than the Poisson process, whose intensity is deterministic.

As an alternative point process to generate the claim arrivals, we can employ a Cox process or a doubly stochastic Poisson process [9–15]. An important reference on Cox processes is the book by Bening and Korolev [16], where applications in both insurance and finance are discussed. A Cox process provides us with the flexibility to allow the intensity not only to depend on time but also to be a stochastic process. Dassios and Jang [17] demonstrated how a Cox process with shot noise intensity could be used in the pricing of catastrophe reinsurance and derivatives.

It is important to measure the time interval between the claims in insurance. Thus in this paper, we examine the distribution of the interval of a Cox process with shot noise intensity for insurance claims. The result of this paper can be used or easily modified in computer science/telecommunications modeling, electrical engineering, and queueing theory.

We start by defining the quantity of interest: a doubly stochastic point process of claim arrivals with shot noise intensity. Then, we derive the probability generating function of a Cox process with shot noise intensity using piecewise deterministic Markov processes (PDMPs) theory, for which see the appendix. The piecewise deterministic Markov processes theory is a powerful mathematical tool for examining nondiffusion models; for details, we refer the reader to [17–25]. In Section 3, we derive the Laplace transform of the distribution of the shot noise process at claim times, using the stationarity assumption on the shot noise process. Using this Laplace transform within the probability generating function of a Cox process with shot noise intensity, we derive the distribution of the interval between events of a Cox process with shot noise intensity; these events can be, for example, insurance claims. We also derive the first two moments of this distribution. Section 4 contains some concluding remarks.

2. A Cox Process and the Shot Noise Process

A Cox process (or a doubly stochastic Poisson process) can be viewed as a two-step randomisation procedure. A process $\lambda_t$ is used to generate another process $N_t$ by acting as its intensity. That is, $N_t$ is a Poisson process conditional on $\lambda_t$, which itself is a stochastic process (if $\lambda_t$ is deterministic then $N_t$ is a Poisson process). Many alternative definitions of a doubly stochastic Poisson process can be given. We will offer the one adopted by Brémaud [15].

Definition 2.1. Let $(\Omega,\mathcal{F},P)$ be a probability space with information structure given by $\mathcal{F}=\{\Im_t,\ t\in[0,T]\}$. Let $N_t$ be a point process adapted to $\mathcal{F}$. Let $\lambda_t$ be a nonnegative process adapted to $\mathcal{F}$ such that
$$\int_0^t\lambda_s\,ds<\infty\quad\text{almost surely (no explosions)}. \tag{2.1}$$
If for all $0\le t_1\le t_2$ and $u\in\Re$
$$E\left\{e^{iu\left(N_{t_2}-N_{t_1}\right)}\mid\Im^{\lambda}_{t_2}\right\}=\exp\left\{\left(e^{iu}-1\right)\int_{t_1}^{t_2}\lambda_s\,ds\right\}, \tag{2.2}$$
then $N_t$ is called a $\Im_t$-doubly stochastic Poisson process with intensity $\lambda_t$, where $\Im^{\lambda}_t$ is the $\sigma$-algebra generated by $\lambda$ up to time $t$, that is, $\Im^{\lambda}_t=\sigma\{\lambda_s;\ s\le t\}$.

Equation (2.2) gives us
$$\Pr\left\{N_{t_2}-N_{t_1}=k\mid\lambda_s;\ t_1\le s\le t_2\right\}=\frac{\exp\left(-\int_{t_1}^{t_2}\lambda_s\,ds\right)\left(\int_{t_1}^{t_2}\lambda_s\,ds\right)^k}{k!}, \tag{2.3}$$
$$\Pr\left\{\tau_2>t\mid\lambda_s;\ t_1\le s\le t_2\right\}=\Pr\left\{N_{t_2}-N_{t_1}=0\mid\lambda_s;\ t_1\le s\le t_2\right\}=\exp\left(-\int_{t_1}^{t_2}\lambda_s\,ds\right), \tag{2.4}$$
where $\tau_k=\inf\{t>0:N_t=k\}$. Therefore, from (2.4), we can easily find that
$$\Pr\left(\tau_2\le t\right)=E\left\{1-\exp\left(-\int_{t_1}^{t_2}\lambda_s\,ds\right)\right\}. \tag{2.5}$$
If we consider the process $\Lambda_t=\int_0^t\lambda_s\,ds$ (the aggregated process), then from (2.3) we can also easily find that
$$E\left(\theta^{N_{t_2}-N_{t_1}}\right)=E\left\{e^{-(1-\theta)\left(\Lambda_{t_2}-\Lambda_{t_1}\right)}\right\}, \tag{2.6}$$
where $\theta$ is a constant between 0 and 1. Equation (2.6) suggests that the problem of finding the distribution of $N_t$, the point process, is equivalent to the problem of finding the distribution of $\Lambda_t$, the aggregated process. It means that we just have to find the probability generating function (p.g.f.) of $N_t$ to retrieve the moment generating function (m.g.f.) of $\Lambda_t$ and vice versa.

One of the processes that can be used to measure the impact of primary events is the shot noise process [26–28]. The shot noise process is particularly useful within the claim arrival process, as it measures the frequency, magnitude, and time period needed to determine the effect of primary events. As time passes, the shot noise process decreases as more and more claims are settled. This decrease continues until another event occurs, which results in a positive jump in the shot noise process. Therefore the shot noise process can be used as the parameter of a doubly stochastic Poisson process to measure the number of claims due to primary events; that is, we will use it as a claim intensity function to generate the Cox process. We will adopt the shot noise process used by Cox and Isham [26]:
$$\lambda_t=\lambda_0e^{-\delta t}+\sum_{i=1}^{M_t}Y_ie^{-\delta\left(t-S_i\right)}, \tag{2.7}$$
where
(i) $\lambda_0$ is the initial value of $\lambda_t$;
(ii) $\{Y_i\}_{i=1,2,\dots}$ is a sequence of independent and identically distributed random variables with distribution function $G(y)$, $y>0$, and $E\left(Y_i\right)=\mu_1$;
(iii) $\{S_i\}_{i=1,2,\dots}$ is the sequence of event times of a Poisson process $M_t$ with constant intensity $\rho$;
(iv) $\delta$ is the rate of exponential decay.
We assume that the Poisson process $M_t$ and the sequence $\{Y_i\}_{i=1,2,\dots}$ are independent of each other. Figure 1 illustrates the shot noise process, and Figure 2 illustrates a Cox process with shot noise intensity.
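To make (2.7) concrete, the short Python sketch below simulates one path of the shot noise intensity and of the Cox claim process it drives. It is only an illustration, not part of the paper's derivations: the exponential jump-size distribution, the thinning scheme used to draw the claim times, and all parameter values (rho, delta, mu1, lambda0, T) are assumptions chosen for the example.

```python
# Minimal simulation sketch of the shot noise intensity (2.7) and of the
# Cox claim process it drives.  Parameter values and the exponential jump
# distribution are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
rho, delta, mu1, lambda0, T = 0.5, 1.0, 2.0, 1.0, 50.0

def simulate_shot_noise(rho, delta, mu1, lambda0, T, rng):
    """Return primary-event times S_i and jump sizes Y_i on [0, T]."""
    n_events = rng.poisson(rho * T)
    S = np.sort(rng.uniform(0.0, T, n_events))      # event times of M_t
    Y = rng.exponential(mu1, n_events)              # i.i.d. jumps with E(Y) = mu1
    return S, Y

def intensity(t, S, Y, lambda0, delta):
    """lambda_t = lambda_0 e^{-delta t} + sum_{S_i <= t} Y_i e^{-delta (t - S_i)}."""
    past = S <= t
    return lambda0 * np.exp(-delta * t) + np.sum(Y[past] * np.exp(-delta * (t - S[past])))

def simulate_claims(S, Y, lambda0, delta, T, rng):
    """Claim times of the Cox process by thinning: between primary events the
    intensity decays, so its current value bounds it until the next jump."""
    claims, t = [], 0.0
    while t < T:
        lam_bar = intensity(t, S, Y, lambda0, delta) + 1e-12
        next_jump = S[S > t][0] if np.any(S > t) else T
        t_prop = t + rng.exponential(1.0 / lam_bar)
        if t_prop > next_jump:
            t = next_jump                           # bound changes at the jump
            continue
        t = t_prop
        if rng.uniform() <= intensity(t, S, Y, lambda0, delta) / lam_bar:
            claims.append(t)
    return np.array(claims)

S, Y = simulate_shot_noise(rho, delta, mu1, lambda0, T, rng)
claims = simulate_claims(S, Y, lambda0, delta, T, rng)
print(f"{len(S)} primary events produced {len(claims)} claims on [0, {T}]")
```

The thinning step is valid here precisely because the intensity (2.7) is decreasing between primary events, so its value just after the most recent jump is a local upper bound.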

The generator of the process $\left(\Lambda_t,\lambda_t,t\right)$ acting on a function $f(\Lambda,\lambda,t)$ belonging to its domain is given by
$$\mathcal{A}f(\Lambda,\lambda,t)=\frac{\partial f}{\partial t}+\lambda\frac{\partial f}{\partial\Lambda}-\delta\lambda\frac{\partial f}{\partial\lambda}+\rho\left[\int_0^\infty f(\Lambda,\lambda+y,t)\,dG(y)-f(\Lambda,\lambda,t)\right]. \tag{2.8}$$
For $f(\Lambda,\lambda,t)$ to belong to the domain of the generator $\mathcal{A}$, it is sufficient that $f(\Lambda,\lambda,t)$ is differentiable with respect to $\Lambda$, $\lambda$, and $t$ for all $\Lambda$, $\lambda$, and $t$, and that $\left|\int_0^\infty f(\cdot,\lambda+y,\cdot)\,dG(y)-f(\cdot,\lambda,\cdot)\right|<\infty$.

Let us find a suitable martingale in order to derive the probability generating function (p.g.f.) of $N_t$ at time $t$.

Theorem 2.2. Let us assume that $\Lambda_t$ and $\lambda_t$ evolve up to a fixed time $t^*$. Consider constants $k_1$ and $k_2$ such that $k_1\ge0$ and $k_2\ge-k_1e^{-\delta t^*}$. Then
$$\exp\left(-k_1\delta\Lambda_t\right)\exp\left\{-\left(k_1+k_2e^{\delta t}\right)\lambda_t\right\}\exp\left[\rho\int_0^t\left\{1-\hat g\left(k_1+k_2e^{\delta s}\right)\right\}ds\right] \tag{2.9}$$
is a martingale, where $\hat g(u)=\int_0^\infty e^{-uy}\,dG(y)$.

Proof. Define $W_t=\delta\Lambda_t+\lambda_t$ and $Z_t=\lambda_te^{\delta t}$; then the generator of the process $\left(W_t,Z_t,t\right)$ acting on a function $f(w,z,t)$ is given by
$$\mathcal{A}f(w,z,t)=\frac{\partial f}{\partial t}+\rho\left[\int_0^\infty f\left(w+y,z+ye^{\delta t},t\right)dG(y)-f(w,z,t)\right], \tag{2.10}$$
and $f(w,z,t)$ has to satisfy $\mathcal{A}f=0$ for $f\left(W_t,Z_t,t\right)$ to be a martingale. We try a solution of the form $e^{-k_1w}e^{-k_2z}h(t)$, where $h$ is a differentiable function. Then we get the equation
$$h'(t)-\rho\left[1-\hat g\left(k_1+k_2e^{\delta t}\right)\right]h(t)=0. \tag{2.11}$$
The function $e^{-k_1w}e^{-k_2z}h(t)$ belongs to the domain of the generator because of our choice of $k_1$ and $k_2$: it is bounded for all $t\le t^*$, and our process evolves up to time $t^*$ only. Solving (2.11),
$$h(t)=Ke^{\rho\int_0^t\left\{1-\hat g\left(k_1+k_2e^{\delta s}\right)\right\}ds}, \tag{2.12}$$
where $K$ is an arbitrary constant. Therefore
$$e^{-k_1W_t}e^{-k_2Z_t}e^{\rho\int_0^t\left\{1-\hat g\left(k_1+k_2e^{\delta s}\right)\right\}ds} \tag{2.13}$$
is a martingale, and hence the result follows.

Corollary 2.3. Let $\nu_1\ge0$, $\nu_2\ge0$, $\nu\ge0$, $0\le\theta\le1$, and let $t_1\le t_2$ be fixed times. Then
$$E\left\{e^{-\nu_1\left(\Lambda_{t_2}-\Lambda_{t_1}\right)}e^{-\nu_2\lambda_{t_2}}\mid\Lambda_{t_1},\lambda_{t_1}\right\}=\exp\left[-\left\{\frac{\nu_1}{\delta}+\left(\nu_2-\frac{\nu_1}{\delta}\right)e^{-\delta\left(t_2-t_1\right)}\right\}\lambda_{t_1}\right]\times\exp\left[-\rho\int_0^{t_2-t_1}\left[1-\hat g\left\{\frac{\nu_1}{\delta}+\left(\nu_2-\frac{\nu_1}{\delta}\right)e^{-\delta s}\right\}\right]ds\right], \tag{2.14}$$
$$E\left\{\theta^{N_{t_2}-N_{t_1}}e^{-\nu\lambda_{t_2}}\mid N_{t_1},\lambda_{t_1}\right\}=\exp\left[-\left\{\frac{1-\theta}{\delta}+\left(\nu-\frac{1-\theta}{\delta}\right)e^{-\delta\left(t_2-t_1\right)}\right\}\lambda_{t_1}\right]\times\exp\left[-\rho\int_0^{t_2-t_1}\left[1-\hat g\left\{\frac{1-\theta}{\delta}+\left(\nu-\frac{1-\theta}{\delta}\right)e^{-\delta s}\right\}\right]ds\right]. \tag{2.15}$$

Proof. We set $k_1=\nu_1/\delta$, $k_2=\left(\nu_2-\nu_1/\delta\right)e^{-\delta t_2}$, and $t^*\ge t_2$ in Theorem 2.2, and (2.14) follows immediately. Equation (2.15) follows from (2.14) and (2.6).

Now we can easily derive the probability generating function (p.g.f.) of $N_t$ and the Laplace transform of $\lambda_t$ using Corollary 2.3.

Corollary 2.4. The probability generating function of $N_t$ is given by
$$E\left\{\theta^{N_{t_2}-N_{t_1}}\mid\lambda_{t_1}\right\}=\exp\left[-\frac{1-\theta}{\delta}\left\{1-e^{-\delta\left(t_2-t_1\right)}\right\}\lambda_{t_1}\right]\times\exp\left[-\rho\int_0^{t_2-t_1}\left[1-\hat g\left\{\frac{1-\theta}{\delta}\left(1-e^{-\delta s}\right)\right\}\right]ds\right], \tag{2.16}$$
the Laplace transform of the distribution of $\lambda_t$ is given by
$$E\left\{e^{-\nu\lambda_t}\mid\lambda_0\right\}=\exp\left(-\nu\lambda_0e^{-\delta t}\right)\exp\left[-\rho\int_0^t\left\{1-\hat g\left(\nu e^{-\delta s}\right)\right\}ds\right], \tag{2.17}$$
and if $\lambda_t$ is asymptotic (stationary), it is given by
$$E\left(e^{-\nu\lambda_t}\right)=\exp\left[-\rho\int_0^\infty\left\{1-\hat g\left(\nu e^{-\delta s}\right)\right\}ds\right], \tag{2.18}$$
which can also be written as
$$E\left(e^{-\nu\lambda_t}\right)=\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\}, \tag{2.19}$$
where $\hat{\bar G}(u)=\left(1-\hat g(u)\right)/u$.

Proof. If we set $\nu=0$ in (2.15), then (2.16) follows. Equation (2.17) follows if we either set $\nu_1=0$ in (2.14) or set $\theta=1$ in (2.15). Letting $t\to\infty$ in (2.17) gives (2.18), and (2.19) follows from (2.18) by the change of variable $u=\nu e^{-\delta s}$.

Theorem 2.2 and Corollaries 2.3 and 2.4 can be found in [17, 19], but they have been included here for completeness and for comparison purposes.
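As a numerical sanity check of Corollary 2.4 (a sketch, not part of the paper), the following SciPy snippet evaluates the two stationary forms (2.18) and (2.19) under the assumption of exponentially distributed jump sizes with rate $\alpha$; in that case both reduce to the closed form $\left(\alpha/(\alpha+\nu)\right)^{\rho/\delta}$, the Laplace transform of a Gamma$(\rho/\delta,\alpha)$ distribution.

```python
# Cross-check of the stationary forms (2.18) and (2.19) of E[e^{-nu lambda}].
# Exponential jump sizes with rate alpha are an illustrative assumption:
# g_hat(u) = alpha/(alpha + u) and G_bar_hat(u) = (1 - g_hat(u))/u = 1/(alpha + u).
import numpy as np
from scipy.integrate import quad

rho, delta, alpha = 0.5, 1.0, 0.8
g_hat = lambda u: alpha / (alpha + u)
G_bar_hat = lambda u: 1.0 / (alpha + u)

def lt_218(nu):   # equation (2.18)
    return np.exp(-rho * quad(lambda s: 1.0 - g_hat(nu * np.exp(-delta * s)), 0.0, np.inf)[0])

def lt_219(nu):   # equation (2.19)
    return np.exp(-(rho / delta) * quad(G_bar_hat, 0.0, nu)[0])

for nu in (0.1, 0.5, 2.0):
    closed_form = (alpha / (alpha + nu)) ** (rho / delta)   # Gamma(rho/delta, alpha) transform
    print(nu, lt_218(nu), lt_219(nu), closed_form)          # all three agree
```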

If we differentiate (2.17) and (2.19) with respect to $\nu$ and put $\nu=0$, we can easily obtain the first moments of $\lambda_t$, that is,
$$E\left(\lambda_t\mid\lambda_0\right)=\frac{\mu_1\rho}{\delta}+\left(\lambda_0-\frac{\mu_1\rho}{\delta}\right)e^{-\delta t}, \tag{2.20}$$
$$E\left(\lambda_t\right)=\frac{\mu_1\rho}{\delta}. \tag{2.21}$$
The higher moments can be obtained by differentiating further; in particular, the second moments give
$$\operatorname{Var}\left(\lambda_t\mid\lambda_0\right)=\left(1-e^{-2\delta t}\right)\frac{\mu_2\rho}{2\delta},\qquad\operatorname{Var}\left(\lambda_t\right)=\frac{\mu_2\rho}{2\delta}, \tag{2.22}$$
where $\mu_2=E\left(Y^2\right)=\int_0^\infty y^2\,dG(y)$.
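The conditional mean (2.20) can likewise be checked numerically by differentiating (2.17) at $\nu=0$; the sketch below does this with a central difference, again assuming exponential jump sizes (so $\mu_1=1/\alpha$) and illustrative parameter values.

```python
# Check of the conditional mean (2.20): -d/dnu E[e^{-nu lambda_t}] at nu = 0 equals
# E[lambda_t | lambda_0].  Exponential jumps with rate alpha are an assumption.
import numpy as np
from scipy.integrate import quad

rho, delta, alpha, lambda0, t = 0.5, 1.0, 0.8, 3.0, 1.7
mu1 = 1.0 / alpha
g_hat = lambda u: alpha / (alpha + u)

def lt_217(nu):   # E[e^{-nu lambda_t} | lambda_0], equation (2.17)
    integral = quad(lambda s: 1.0 - g_hat(nu * np.exp(-delta * s)), 0.0, t)[0]
    return np.exp(-nu * lambda0 * np.exp(-delta * t)) * np.exp(-rho * integral)

h = 1e-5
mean_numeric = -(lt_217(h) - lt_217(-h)) / (2.0 * h)        # central difference at nu = 0
mean_formula = mu1 * rho / delta + (lambda0 - mu1 * rho / delta) * np.exp(-delta * t)
print(mean_numeric, mean_formula)                           # the two agree closely
```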

3. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity and Its Moments

Let us examine the Laplace transform of the distribution of the shot noise intensity at claim times. To do so, let us denote the time of the $n$th claim of $N_t$ by $\tau_n$, and denote the value of $\lambda_t$ when $N_t$ takes the value $n$ for the first time by $\lambda_{\tau_n}$. Since a claim occurs at time $\tau$, the intensity at claim times, $\lambda_\tau$, should be higher than the intensity at an arbitrary time, $\lambda_t$. Therefore the distribution of $\lambda_\tau$ should not be the same as the distribution of $\lambda_t$, which will become clear from Theorem 3.2.

Let us start with the following lemma in order to obtain the Laplace transform of the distribution of the shot noise intensity at claim times. We assume that the claims and the jumps (or primary events) in the shot noise intensity do not occur at the same time.

Lemma 3.1. Let $N_t$ be a Cox process with shot noise intensity $\lambda_t$. Let $\mathcal{A}$ be the generator of the process $\lambda_t$ and suppose that $f(\lambda)$ is a function belonging to its domain which furthermore satisfies
$$\lim_{t\to\infty}E\left\{f\left(\lambda_t\right)\exp\left(-\int_0^t\lambda_s\,ds\right)\mid\lambda_0\right\}=0. \tag{3.1}$$
If $h(\lambda)$ is such that
$$\lambda\left\{h(\lambda)-f(\lambda)\right\}+\mathcal{A}f(\lambda)=0, \tag{3.2}$$
then
$$E\left\{h\left(\lambda_{\tau_1}\right)\mid\lambda_0\right\}=f\left(\lambda_0\right). \tag{3.3}$$

Proof. From (3.2),
$$f\left(\lambda_t\right)+\int_0^t\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds \tag{3.4}$$
is a martingale. Since $\tau_1^t=\tau_1\wedge t$ is a bounded stopping time (note that $\Pr\left(\tau_1\le s\right)=\Pr\left(N_s>0\right)$), we have
$$E\left\{f\left(\lambda_{\tau_1^t}\right)\mid\lambda_0\right\}+E\left[\int_0^{\tau_1^t}\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\;\Big|\;\lambda_0\right]=f\left(\lambda_0\right). \tag{3.5}$$
Conditioning on the realisation $\lambda_v$, $0\le v\le t$, the stopped time $\tau_1^t$ is distributed with density
$$\lambda_r\exp\left(-\int_0^r\lambda_u\,du\right) \tag{3.6}$$
on $(0,t)$ and has a mass $\exp\left(-\int_0^t\lambda_u\,du\right)$ at $t$. Hence,
$$E\left\{f\left(\lambda_{\tau_1^t}\right)\mid\lambda_v,\ 0\le v\le t\right\}=\int_0^tf\left(\lambda_r\right)\lambda_r\exp\left(-\int_0^r\lambda_u\,du\right)dr+f\left(\lambda_t\right)\exp\left(-\int_0^t\lambda_u\,du\right), \tag{3.7}$$
$$E\left[\int_0^{\tau_1^t}\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\;\Big|\;\lambda_v,\ 0\le v\le t\right]=\int_0^t\left[\int_0^r\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\right]\lambda_r\exp\left(-\int_0^r\lambda_u\,du\right)dr+\left[\int_0^t\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\right]\exp\left(-\int_0^t\lambda_u\,du\right). \tag{3.8}$$
Changing the order of integration in the first term of (3.8), the right-hand side of (3.8) becomes
$$\int_0^t\left[\int_s^t\lambda_r\exp\left(-\int_0^r\lambda_u\,du\right)dr\right]\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds+\int_0^t\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\,\exp\left(-\int_0^t\lambda_u\,du\right)$$
$$=\int_0^t\left\{\exp\left(-\int_0^s\lambda_u\,du\right)-\exp\left(-\int_0^t\lambda_u\,du\right)\right\}\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds+\int_0^t\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\,\exp\left(-\int_0^t\lambda_u\,du\right)$$
$$=\int_0^t\exp\left(-\int_0^s\lambda_u\,du\right)\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds. \tag{3.9}$$
Adding (3.7) and (3.9), we notice that more terms cancel and we get
$$E\left\{f\left(\lambda_{\tau_1^t}\right)+\int_0^{\tau_1^t}\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\;\Big|\;\lambda_v,\ 0\le v\le t\right\}=\int_0^t\exp\left(-\int_0^s\lambda_u\,du\right)\lambda_sh\left(\lambda_s\right)ds+f\left(\lambda_t\right)\exp\left(-\int_0^t\lambda_u\,du\right)=E\left\{h\left(\lambda_{\tau_1}\right)1_{\{\tau_1\le t\}}\mid\lambda_v,\ 0\le v\le t\right\}+f\left(\lambda_t\right)\exp\left(-\int_0^t\lambda_u\,du\right), \tag{3.10}$$
and hence
$$E\left\{f\left(\lambda_{\tau_1^t}\right)+\int_0^{\tau_1^t}\lambda_s\left\{h\left(\lambda_s\right)-f\left(\lambda_s\right)\right\}ds\;\Big|\;\lambda_0\right\}=E\left\{h\left(\lambda_{\tau_1}\right)1_{\{\tau_1\le t\}}+f\left(\lambda_t\right)\exp\left(-\int_0^t\lambda_u\,du\right)\;\Big|\;\lambda_0\right\}. \tag{3.11}$$
From (3.5), we then have
$$E\left\{h\left(\lambda_{\tau_1}\right)1_{\{\tau_1\le t\}}+f\left(\lambda_t\right)\exp\left(-\int_0^t\lambda_u\,du\right)\;\Big|\;\lambda_0\right\}=f\left(\lambda_0\right), \tag{3.12}$$
and setting $t\to\infty$ (using (3.1)), we get (3.3).

Assuming that the shot noise process $\lambda_t$ is stationary, let us now derive the Laplace transform of the distribution of the shot noise process at claim times, $\lambda_\tau$.

Theorem 3.2. If the shot noise process $\lambda_t$ is stationary, the Laplace transform of the distribution of the shot noise process at claim times is given by
$$E\left(e^{-\nu\lambda_{\tau_i}}\right)=\frac{\hat{\bar G}(\nu)}{\mu_1}\cdot\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\}, \tag{3.13}$$
where $\hat{\bar G}(u)=\left(1-\hat g(u)\right)/u$ and $\hat g(u)=\int_0^\infty e^{-uy}\,dG(y)$.

Proof. From Lemma 3.1, if $f(\lambda)$ and $h(\lambda)$ are such that
$$\lambda\left\{h(\lambda)-f(\lambda)\right\}-\delta\lambda f'(\lambda)+\rho\left\{\int_0^\infty f(\lambda+y)\,dG(y)-f(\lambda)\right\}=0 \tag{3.14}$$
and (3.1) is satisfied, then
$$E\left\{h\left(\lambda_{\tau_{i+1}}\right)\mid\lambda_{\tau_i}\right\}=f\left(\lambda_{\tau_i}\right), \tag{3.15}$$
by starting the process from $\tau_i$. Employing $f(\lambda)=\left\{\lambda-\hat g'(\nu)/\left(1-\hat g(\nu)\right)\right\}e^{-\nu\lambda}$, the function clearly satisfies (3.1), and substituting it into (3.14) we have
$$\lambda\left\{h(\lambda)-\lambda e^{-\nu\lambda}+\frac{\hat g'(\nu)}{1-\hat g(\nu)}e^{-\nu\lambda}\right\}+\delta\nu\lambda\left\{\lambda-\frac{\hat g'(\nu)}{1-\hat g(\nu)}\right\}e^{-\nu\lambda}-\delta\lambda e^{-\nu\lambda}=-\rho\lambda e^{-\nu\lambda}\left\{\hat g(\nu)-1\right\}. \tag{3.16}$$
Dividing by $\lambda$ and simplifying, we have
$$h(\lambda)=\lambda e^{-\nu\lambda}(1-\delta\nu)+\delta e^{-\nu\lambda}-(1-\delta\nu)\frac{\hat g'(\nu)}{1-\hat g(\nu)}e^{-\nu\lambda}+\rho e^{-\nu\lambda}\left\{1-\hat g(\nu)\right\}. \tag{3.17}$$
From (3.15), it is given that
$$E\left\{h\left(\lambda_{\tau_{i+1}}\right)\right\}=E\left[E\left\{h\left(\lambda_{\tau_{i+1}}\right)\mid\lambda_{\tau_i}\right\}\right]=E\left\{f\left(\lambda_{\tau_i}\right)\right\}. \tag{3.18}$$
So, putting (3.17) into (3.18),
$$E\left[\lambda_{\tau_{i+1}}e^{-\nu\lambda_{\tau_{i+1}}}(1-\delta\nu)+\delta e^{-\nu\lambda_{\tau_{i+1}}}-(1-\delta\nu)\frac{\hat g'(\nu)}{1-\hat g(\nu)}e^{-\nu\lambda_{\tau_{i+1}}}+\rho e^{-\nu\lambda_{\tau_{i+1}}}\left\{1-\hat g(\nu)\right\}\right]=E\left[\lambda_{\tau_i}e^{-\nu\lambda_{\tau_i}}-\frac{\hat g'(\nu)}{1-\hat g(\nu)}e^{-\nu\lambda_{\tau_i}}\right]. \tag{3.19}$$
When the process $\lambda_t$ is stationary, $\lambda_{\tau_{i+1}}$ and $\lambda_{\tau_i}$ have the same distribution, whose Laplace transform we denote by $H(\nu)=E\left(e^{-\nu\lambda_{\tau_i}}\right)$. Therefore from (3.19) we have
$$-(1-\delta\nu)H'(\nu)-(1-\delta\nu)\frac{\hat g'(\nu)}{1-\hat g(\nu)}H(\nu)+\left\{\delta+\rho\left(1-\hat g(\nu)\right)\right\}H(\nu)=-H'(\nu)-\frac{\hat g'(\nu)}{1-\hat g(\nu)}H(\nu). \tag{3.20}$$
Dividing both sides of (3.20) by $\delta\nu$, we have
$$H'(\nu)+\frac{\hat g'(\nu)}{1-\hat g(\nu)}H(\nu)+\left\{\frac{1}{\nu}+\frac{\rho}{\delta}\frac{1-\hat g(\nu)}{\nu}\right\}H(\nu)=0. \tag{3.21}$$
Solving (3.21), subject to
$$H(0)=1, \tag{3.22}$$
we obtain
$$H(\nu)=K\,\frac{1-\hat g(\nu)}{\nu}\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\}, \tag{3.23}$$
where $K$ is a constant; since $\left(1-\hat g(\nu)\right)/\nu\to\mu_1$ as $\nu\to0$, the boundary condition (3.22) gives $K=1/\mu_1$. Therefore the Laplace transform of the distribution of the shot noise process at claim times is given by
$$H(\nu)=\frac{1}{\mu_1}\frac{1-\hat g(\nu)}{\nu}\cdot\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\}=\frac{\hat{\bar G}(\nu)}{\mu_1}\cdot\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\}. \tag{3.24}$$
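As a small symbolic check of the proof (a sketch, not part of the paper), the SymPy snippet below verifies that (3.24) satisfies the ODE (3.21) when the jump sizes are exponentially distributed with rate $\alpha$, in which case $\hat g(\nu)=\alpha/(\alpha+\nu)$, $\mu_1=1/\alpha$, and (3.24) reduces to $H(\nu)=\left(\alpha/(\alpha+\nu)\right)^{\rho/\delta+1}$.

```python
# Symbolic check that the claim-time transform (3.24) solves the ODE (3.21),
# under the illustrative assumption of exponentially distributed jumps.
import sympy as sp

nu, rho, delta, alpha = sp.symbols('nu rho delta alpha', positive=True)
g_hat = alpha / (alpha + nu)                         # Laplace transform of G
H = (alpha / (alpha + nu)) ** (rho / delta + 1)      # (3.24) in closed form for this G

ode = sp.diff(H, nu) + sp.diff(g_hat, nu) / (1 - g_hat) * H \
      + (1 / nu + (rho / delta) * (1 - g_hat) / nu) * H   # left-hand side of (3.21)
print(sp.simplify(ode))                              # prints 0, so (3.24) satisfies (3.21)
```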

Equation (3.24) provides us with an interesting result. The distribution defined by the Laplace transform (3.24) (equivalently, (3.13)) is the same as the distribution of the sum of two independent random variables: one having the stationary distribution of $\lambda_t$ (see Corollary 2.4) and the other having density $\bar G(y)/\mu_1$, where $\bar G(y)=1-G(y)$. Comparing it with the distribution of the shot noise process $\lambda_t$ at an arbitrary time, we can easily find that
$$\frac{\hat{\bar G}(\nu)}{\mu_1}\cdot\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\}<\exp\left\{-\frac{\rho}{\delta}\int_0^\nu\hat{\bar G}(u)\,du\right\},\qquad\nu>0. \tag{3.25}$$
It is therefore the case that $\lambda_\tau$ is stochastically larger than $\lambda_t$. In other words, the intensity at claim times is higher than the intensity at an arbitrary time.
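For example, under the same exponential-jump assumption as in the sketches above, the stationary transform (2.19) is $\left(\alpha/(\alpha+\nu)\right)^{\rho/\delta}$, a Gamma$(\rho/\delta,\alpha)$ law, while (3.24) is $\left(\alpha/(\alpha+\nu)\right)^{\rho/\delta+1}$, a Gamma$(\rho/\delta+1,\alpha)$ law; the snippet below compares the two numerically, in line with (3.25).

```python
# Numerical illustration of (3.25) for exponentially distributed jumps (an assumption):
# the transform at claim times lies strictly below the stationary transform.
import numpy as np

rho, delta, alpha = 0.5, 1.0, 0.8
nu = np.linspace(0.1, 5.0, 50)

lt_stationary = (alpha / (alpha + nu)) ** (rho / delta)          # from (2.19)
lt_at_claims = (alpha / (alpha + nu)) ** (rho / delta + 1.0)     # from (3.24)
print(np.all(lt_at_claims < lt_stationary))                      # True, as in (3.25)
print(rho / (delta * alpha), rho / (delta * alpha) + 1.0 / alpha)  # stationary vs claim-time mean
```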

Now let us derive the distribution of the interval of a Cox process with shot noise intensity for insurance claims using Theorem 3.2.

Corollary 3.3. Assume that 0 is the time at which a claim of $N_t$ has occurred and that stationarity of $\lambda_t$ has been achieved. Then the tail of the distribution of the interval of a Cox process with shot noise intensity is given by
$$\Pr(\tau>t)=\frac{\hat{\bar G}\left(\tfrac{1}{\delta}-\tfrac{1}{\delta}e^{-\delta t}\right)}{\mu_1}\exp\left\{-\frac{\rho}{\delta}\int_0^t\hat{\bar G}\left(\tfrac{1}{\delta}-\tfrac{1}{\delta}e^{-\delta s}\right)ds\right\}. \tag{3.26}$$

Proof. From (2.16), the probability generating function of $N_t$ is given by
$$E\left(\theta^{N_t}\mid\lambda_0\right)=\exp\left[-\frac{1-\theta}{\delta}\left(1-e^{-\delta t}\right)\lambda_0\right]\exp\left[-\rho\int_0^t\left[1-\hat g\left\{\frac{1-\theta}{\delta}\left(1-e^{-\delta s}\right)\right\}\right]ds\right]. \tag{3.27}$$
Set $\theta=0$ in (3.27) and take the expectation; then the tail of the distribution of $\tau$ is given by
$$\Pr(\tau>t)=\exp\left[-\rho\int_0^t\left[1-\hat g\left\{\frac{1-e^{-\delta s}}{\delta}\right\}\right]ds\right]E\left[\exp\left\{-\left(\frac{1-e^{-\delta t}}{\delta}\right)\lambda_0\right\}\right]. \tag{3.28}$$
Substituting (3.13) into (3.28), the result follows, since 0 is the time at which a claim has occurred and $\lambda_t$ is stationary; the two exponents combine into the single integral in (3.26) after the change of variable $u=\left(1-e^{-\delta s}\right)/\delta$ and the identity $1-\hat g(u)=u\,\hat{\bar G}(u)$.
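The tail (3.26) is straightforward to evaluate numerically; the sketch below does so for exponentially distributed jump sizes (so $\hat{\bar G}(u)=1/(\alpha+u)$ and $\mu_1=1/\alpha$), with all parameter values chosen purely for illustration.

```python
# Evaluation of the tail (3.26) under the illustrative exponential-jump assumption.
import numpy as np
from scipy.integrate import quad

rho, delta, alpha = 0.5, 1.0, 0.8
mu1 = 1.0 / alpha
G_bar_hat = lambda u: 1.0 / (alpha + u)     # (1 - g_hat(u))/u for exponential jumps

def tail(t):
    """Pr(tau > t) from (3.26)."""
    z = lambda s: (1.0 - np.exp(-delta * s)) / delta
    inner = quad(lambda s: G_bar_hat(z(s)), 0.0, t)[0]
    return G_bar_hat(z(t)) / mu1 * np.exp(-(rho / delta) * inner)

print([round(tail(t), 4) for t in (0.0, 0.5, 1.0, 2.0, 5.0)])   # decreases from 1
```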

Corollary 3.4. The expectation and variance of the interval between claims are given by
$$E(\tau)=\int_0^\infty\Pr(\tau>t)\,dt=\frac{\delta}{\mu_1\rho}, \tag{3.29}$$
$$\operatorname{Var}(\tau)=2\int_0^\infty u\,\frac{\hat{\bar G}\left(\tfrac{1}{\delta}-\tfrac{1}{\delta}e^{-\delta u}\right)}{\mu_1}\exp\left\{-\frac{\rho}{\delta}\int_0^u\hat{\bar G}\left(\tfrac{1}{\delta}-\tfrac{1}{\delta}e^{-\delta s}\right)ds\right\}du-\left(\frac{\delta}{\mu_1\rho}\right)^2. \tag{3.30}$$

Proof. Integrating (3.26), (3.29) follows. Equation (3.30) is obtained from
$$E\left(\tau^2\right)=\int_0^\infty t^2f(t)\,dt=2\int_0^\infty u\,\frac{\hat{\bar G}\left(\tfrac{1}{\delta}-\tfrac{1}{\delta}e^{-\delta u}\right)}{\mu_1}\exp\left\{-\frac{\rho}{\delta}\int_0^u\hat{\bar G}\left(\tfrac{1}{\delta}-\tfrac{1}{\delta}e^{-\delta s}\right)ds\right\}du, \tag{3.31}$$
where $f(t)$ denotes the density of $\tau$.

An interesting result that follows from (3.29) and (2.21) is that the expected interval between claims is the reciprocal of the expected number of claims per unit time, where the number of claims follows a Cox process with shot noise intensity; this is also the case for a Poisson process.
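The following sketch illustrates this remark numerically: integrating the tail (3.26) over $[0,\infty)$ recovers $E(\tau)=\delta/(\mu_1\rho)$, the reciprocal of the stationary claim rate $\mu_1\rho/\delta$ of (2.21), again under the illustrative exponential-jump assumption.

```python
# Numerical check that the integral of the tail (3.26) equals delta/(mu1 rho),
# assuming exponential jump sizes with rate alpha (illustrative values only).
import numpy as np
from scipy.integrate import quad

rho, delta, alpha = 0.5, 1.0, 0.8
mu1 = 1.0 / alpha
G_bar_hat = lambda u: 1.0 / (alpha + u)

def tail(t):   # Pr(tau > t) from (3.26)
    z = lambda s: (1.0 - np.exp(-delta * s)) / delta
    inner = quad(lambda s: G_bar_hat(z(s)), 0.0, t)[0]
    return G_bar_hat(z(t)) / mu1 * np.exp(-(rho / delta) * inner)

expected_tau = quad(tail, 0.0, np.inf, limit=200)[0]   # E(tau) = integral of the tail
print(expected_tau, delta / (mu1 * rho))               # both close to 1.6
```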

4. Conclusion

We started by deriving the probability generating function of a Cox process with shot noise intensity, employing piecewise deterministic Markov processes theory. It was necessary to obtain the distribution of the shot noise process at claim times, as it is not the same as the distribution of the shot noise process at an arbitrary time. Assuming that the shot noise process is stationary, we derived the distribution of the interval of a Cox process with shot noise intensity for insurance claims, together with its moments, from its probability generating function. The results of this paper can be used, or easily modified, in computer science/telecommunications modeling, electrical engineering, and queueing theory as an alternative counting process to the Poisson process.

Appendix

This appendix explains the basic definition of a piecewise deterministic Markov process (PDMP) that is adopted from [20]. A detailed discussion can also be found in [18, 24].

A PDMP is a Markov process $X_t$ with two components $X_t=\left(\eta_t,\xi_t\right)$, where $\eta_t$ takes values in a discrete set $K$ and, given $\eta_t=n\in K$, $\xi_t$ takes values in an open set $M_n\subset\Re^{d(n)}$ for some function $d\colon K\to\mathbb{N}$. The state space of $X_t$ is equal to $E=\{(n,z)\colon n\in K,\ z\in M_n\}$. We further assume that for every point $x=(n,z)\in E$, there is a unique, deterministic integral curve $\phi_n(t,z)\subset M_n$, determined by a differential operator $\chi_n$ on $\Re^{d(n)}$, such that $z\in\phi_n(t,z)$. If for some $t_0\in\Re_+$, $X_{t_0}=\left(n_0,z_0\right)\in E$, then for $t\ge t_0$, $\xi_t$ follows $\phi_{n_0}\left(t,z_0\right)$ until either $t=T_0$, some random time with hazard rate function $\rho$, or until $\xi_t$ reaches $\partial M_{n_0}$, the boundary of $M_{n_0}$. In both cases, the process $X_t$ jumps, according to a Markov transition measure $Q$ on $E$, to a point $\left(n_1,z_1\right)\in E$. Then $\xi_t$ again follows the deterministic path $\phi_{n_1}$ till a random time $T_1$ (independent of $T_0$) or till $\xi_t$ reaches $\partial M_{n_1}$, and so forth. The jump times $T_i$ are assumed to satisfy the following condition:
$$\forall t>0,\quad E\left(\sum_iI_{\{T_i\le t\}}\right)<\infty. \tag{A.1}$$

The stochastic calculus that will enable us to analyse various models rests on the notion of the (extended) generator $\mathcal{A}$ of $X_t$. Let $\Gamma=\{(n,z)\colon n\in K,\ z\in\partial M_n\}$ denote the set of boundary points of $E$, and let $\mathcal{A}$ be an operator acting on measurable functions $f\colon E\cup\Gamma\to\Re$ satisfying the following.
(i) The function $t\mapsto f\left(n,\phi_n(t,z)\right)$ is absolutely continuous for $t\in[0,t(n,z))$ for all $(n,z)\in E$.
(ii) For all $x\in\Gamma$, $f(x)=\int_Ef(y)\,Q(x;dy)$ (boundary condition).
(iii) For all $t\ge0$, $E\left\{\sum_{T_i\le t}\left|f\left(X_{T_i}\right)-f\left(X_{T_i^-}\right)\right|\right\}<\infty$.

Hence, the set of measurable functions satisfying (i), (ii), and (iii) forms a subset of the domain of the extended generator $\mathcal{A}$, denoted by $D(\mathcal{A})$. Now, for piecewise deterministic Markov processes, we can explicitly calculate $\mathcal{A}$ by [18, Theorem 5.5]:
$$\forall f\in D(\mathcal{A})\colon\quad\mathcal{A}f(x)=\chi f(x)+\rho(x)\int_E\left\{f(y)-f(x)\right\}Q(x;dy). \tag{A.2}$$
In some cases, it is important to have time $t$ as an explicit component of the PDMP. In those cases $\mathcal{A}$ can be decomposed as $\partial/\partial t+\mathcal{A}_t$, where $\mathcal{A}_t$ is given by (A.2) with possibly time-dependent coefficients.
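As an illustration of (A.2), and merely restating what Section 2 already uses, consider the process $X_t=\left(\Lambda_t,\lambda_t,t\right)$ of Section 2. Between jumps it moves along a deterministic flow with $\chi f=\partial f/\partial t+\lambda\,\partial f/\partial\Lambda-\delta\lambda\,\partial f/\partial\lambda$, jumps occur at the constant rate $\rho(x)=\rho$, and the transition measure $Q$ shifts $\lambda$ to $\lambda+y$ with $y$ drawn from $G$. Substituting these ingredients into (A.2) gives
$$\mathcal{A}f(\Lambda,\lambda,t)=\frac{\partial f}{\partial t}+\lambda\frac{\partial f}{\partial\Lambda}-\delta\lambda\frac{\partial f}{\partial\lambda}+\rho\left[\int_0^\infty f(\Lambda,\lambda+y,t)\,dG(y)-f(\Lambda,\lambda,t)\right],$$
which is exactly the generator (2.8) used in Section 2.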

An application of Dynkin's formula provides us with the following important results (martingales will always be with respect to the natural filtration $\sigma\{X_s\colon s\le t\}$).
(a) If, for all $t$, $f(\cdot,t)$ belongs to the domain of $\mathcal{A}_t$ and $(\partial/\partial t)f(x,t)+\mathcal{A}_tf(x,t)=0$, then the process $f\left(X_t,t\right)$ is a martingale.
(b) If $f$ belongs to the domain of $\mathcal{A}$ and $\mathcal{A}f(x)=0$, then $f\left(X_t\right)$ is a martingale.
The generator $\mathcal{A}$ of the process $X_t$ acting on a function $f\left(X_t\right)$ belonging to its domain as described above is also given by
$$\mathcal{A}f\left(X_t\right)=\lim_{h\downarrow0}\frac{E\left\{f\left(X_{t+h}\right)\mid X_t=x\right\}-f\left(X_t\right)}{h}. \tag{A.3}$$
In other words, $\mathcal{A}f\left(X_t\right)h$ approximates the expected increment of the process $f\left(X_t\right)$ between $t$ and $t+h$, given the history of $X_t$ at time $t$. From this interpretation the following inversion formula is plausible:
$$E\left\{f\left(X_{t+h}\right)\mid X_t=x\right\}-f(x)=\int_0^hE\left\{\mathcal{A}f\left(X_{t+s}\right)\mid X_t=x\right\}ds, \tag{A.4}$$
which is Dynkin's formula.
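Finally, the interpretation (A.3) can be checked numerically for the shot noise intensity itself (a sketch, not part of the paper): for $f(\lambda)=e^{-\nu\lambda}$ the generator gives $\mathcal{A}f(\lambda)=\delta\nu\lambda e^{-\nu\lambda}-\rho\left(1-\hat g(\nu)\right)e^{-\nu\lambda}$, and the finite-difference quotient of the exact transform (2.17) over a small time step should approach this value. Exponential jump sizes and all parameter values are illustrative assumptions.

```python
# Finite-difference check of (A.3) for the shot noise intensity, using the exact
# Laplace transform (2.17); exponential jumps with rate alpha are an assumption.
import numpy as np
from scipy.integrate import quad

rho, delta, alpha, lam0, nu = 0.5, 1.0, 0.8, 2.0, 0.7
g_hat = lambda u: alpha / (alpha + u)

def lt_217(t):   # E[exp(-nu*lambda_t) | lambda_0 = lam0], equation (2.17)
    integral = quad(lambda s: 1.0 - g_hat(nu * np.exp(-delta * s)), 0.0, t)[0]
    return np.exp(-nu * lam0 * np.exp(-delta * t)) * np.exp(-rho * integral)

h = 1e-6
finite_diff = (lt_217(h) - lt_217(0.0)) / h
generator = delta * nu * lam0 * np.exp(-nu * lam0) - rho * (1.0 - g_hat(nu)) * np.exp(-nu * lam0)
print(finite_diff, generator)    # the two agree to several decimal places
```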