Abstract
Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299–2361, 2012). It was proved that when the degrees obey a power law with exponent \(\tau \in (3,4)\), the sequence of clusters ordered in decreasing size and multiplied through by \(n^{-(\tau -2)/(\tau -1)}\) converges as \(n\rightarrow \infty \) to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237–269, 2001) for the Erdős–Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.
1 Introduction
The Erdős–Rényi random graph G(n, p) on the vertex set \([n]:=\{1,\ldots ,n\}\) is constructed by including each of the \({n\atopwithdelims ()2}\) possible edges with probability p, independently of all other edges. Erdős and Rényi discovered the double-jump phenomenon: the size of the largest component was shown to be, in probability, of order \(\log n\), \(n^{2/3}\), or n, depending on whether the average vertex degree was less than, close to, or more than one. In 1984 Bollobás [10] and subsequently Łuczak [28] showed for the scaling window \(p=(1+\lambda n^{-1/3})/n\) that the largest component is of the order \(n^{2/3}\). Since then, the critical, or near-critical, behavior of random graphs has received tremendous attention (see [2, 4, 9, 18, 27]). Let \(({\mathcal {C}}_{\scriptscriptstyle (i)})_{i\ge 1}\) denote the connected components of G(n, p), ordered in size, i.e., \(|{\mathcal {C}}_{\mathrm{max}}|=|{\mathcal {C}}_{\scriptscriptstyle (1)}|\ge |{\mathcal {C}}_{\scriptscriptstyle (2)}|\ge \cdots \). Aldous [2] proved the following result:
Theorem 1.1
(Aldous [2]). For \(p=(1+\lambda n^{-1/3})/n\), \(\lambda \in \mathbb {R}\) fixed, and \(n\rightarrow \infty \),
where \(\gamma _1(\lambda )>\gamma _2(\lambda )>\cdots \) are the ordered excursions of the reflected version of the process \((W^\lambda _t)_{t\ge 0} \equiv (W_t + \lambda t -t^2/2)_{t\ge 0}\) with \((W_t)_{t\ge 0}\) a standard Wiener process.
Theorem 1.1 says that the ordered connected components in the critical Erdős–Rényi random graph are described by the ordered excursions of the reflected version of \((W^\lambda _t)_{t\ge 0}\). The strict inequalities between the scaling limits of the ordered clusters follow from the local limit theorem proved in [23], see also [25, 29]. Pittel [31, Eq. (1.12)] derived an exact formula for the distribution function of the limiting variable \(\gamma _1(\lambda )\) (of the largest component) and various asymptotic results were obtained, including
As pointed out in [32, 33], the constant \(\sqrt{9\pi /8}\) was mistakenly reported in [31, Eq. (1.12)] as \(\sqrt{2\pi }\) due to a small oversight in the derivation. The result in (1.2) gives sharp asymptotics for the largest component in the critical Erdős–Rényi graph. It was rederived and extended in [33] using the original approach in [31]. Another generalization of (1.2) was obtained in [24] by studying the excursions of the scaling limit of the exploration process that is used to describe the limits in Theorem 1.1. In this paper, we follow a similar path, but then for a class of inhomogeneous random graphs and its scaling limit, and extend (1.2) to this setting.
Several recent works have studied inhomogeneity in random graphs and how it changes the critical nature. In our model, the vertices have a weight associated to them, and the weight of a vertex moderates its degree. Therefore, by choosing these weights appropriately, we can generate random graphs with highly variable degrees. For our class of random graphs, it is shown in [22, Theorem 1.1] that when the weights do not vary too much, the critical behavior is similar to the one in the Erdős–Rényi random graph. See in particular the recent works [6, 34], where it was shown that if the degrees have finite third moment, then the scaling limit for the largest critical components in the critical window are essentially the same (up to a trivial rescaling that we explain in more detail below) as for the Erdős–Rényi random graph in Theorem 1.1.
When the degrees have infinite third moment, instead, it was shown in [22, Theorem 1.2] that the sizes of the largest critical clusters are quite different. In [7] scaling limits were obtained for the sizes of the largest components at criticality for rank-1 inhomogeneous random graphs with power-law degrees with power-law exponent \(\tau \in (3,4)\). For \(\tau \in (3,4)\), the degrees have finite variance but infinite third moment. It was shown that the sizes of the largest components, rescaled by \(n^{-(\tau -2)/(\tau -1)}\), converge to hitting times of a thinned Lévy process. The latter is a special case of the general multiplicative coalescents studied by Aldous and Limic in [2] and [3]. We next discuss these results in more detail.
1.1 Inhomogeneous Random Graphs
In our random graph model, vertices have weights, and the edges are independent, with edge probabilities being approximately equal to the rescaled product of the weights of the two end vertices of the edge. While there are many different versions of such random graphs (see below), it will be convenient for us to work with the so-called Poissonian random graph or Norros–Reittu model [30]. To define the model, we consider the vertex set \([n]:=\{1,2,\ldots , n\}\) and suppose each vertex is assigned a weight, vertex i having weight \(w_i\). Now, attach an edge between vertices i and j with probability
Different edges are independent. In this model, the average degree of vertex i is close to \(w_i\), thus incorporating inhomogeneity in the model.
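To make the construction concrete, here is a minimal sketch (in Python, with illustrative homogeneous weights) of sampling a Norros–Reittu graph, assuming the Poissonian edge probability \(p_{ij}=1-{\mathrm e}^{-w_iw_j/\ell _n}\) with \(\ell _n=\sum _{k\in [n]}w_k\) as in (1.3); the function name and parameters are ours, not the paper's:

```python
import numpy as np

def norros_reittu_graph(w, rng):
    """Sample a Norros-Reittu graph on [n]: edge {i,j} is present with
    probability 1 - exp(-w_i * w_j / l_n), l_n = sum of all weights,
    independently over pairs (cf. (1.3))."""
    n = len(w)
    l_n = w.sum()
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 1.0 - np.exp(-w[i] * w[j] / l_n):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(1)
w = np.ones(50)          # homogeneous weights: a quick sanity check
E = norros_reittu_graph(w, rng)
```

With constant weights this reduces (asymptotically) to an Erdős–Rényi graph, consistent with the discussion of the homogeneous case below.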
There are many adaptations of this model, for which equivalent results hold. Indeed, the model considered here is a special case of the so-called rank-1 inhomogeneous random graph introduced in great generality by Bollobás et al. [11]. It is asymptotically equivalent with many related models, such as the random graph with prescribed expected degrees or Chung–Lu model, where instead
and which has been studied intensively by Chung and Lu (see [13,14,15,16,17]). A further adaptation is the generalized random graph introduced by Britton et al. [12], for which
See Janson [26] for conditions under which these random graphs are asymptotically equivalent, meaning that all events have asymptotically equal probabilities. As discussed in more detail in [22, Sect. 1.3], these conditions apply in the setting to be studied in this paper. Therefore, all results proved here also hold for these related rank-1 models. We refer the interested reader to [22, Sect. 1.3] for more details.
Having specified the edge probabilities as functions of the vertex weights \(\varvec{w}= (w_i)_{i\in [n]}\) in (1.3), we now explain how we choose the vertex weights. Let the weight sequence \(\varvec{w}= (w_i)_{i\in [n]}\) be defined by
where F is a distribution function on \([0,\infty )\) for which we assume that there exists a \(\tau \in (3,4)\) and \(0<c_{\scriptscriptstyle F}<\infty \) such that
and where \([1-F]^{-1}\) is the generalized inverse function of \(1-F\) defined, for \(u\in (0,1)\), by
By convention, we set \([1-F]^{-1}(1)=0\). Note that our inhomogeneity is chosen in such a way that the vertex weights \(i\mapsto w_i\) are decreasing, with \(w_1\) being the largest vertex weight.
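As an illustration of the definition in (1.6), the following sketch computes the weight sequence for the pure power law \(1-F(x)=c_{\scriptscriptstyle F}x^{-(\tau -1)}\), an exact version of the tail assumption (1.7) used here only for the example:

```python
import numpy as np

tau, c_F = 3.5, 1.0      # illustrative choices; (1.7) requires tau in (3, 4)

def inv_survival(u):
    """Generalized inverse [1 - F]^{-1}(u) for the pure power law
    1 - F(x) = c_F * x^{-(tau - 1)}, namely (c_F / u)^{1/(tau - 1)}."""
    return (c_F / u) ** (1.0 / (tau - 1))

n = 1000
i = np.arange(1, n + 1)
w = inv_survival(i / n)  # w_i = [1 - F]^{-1}(i / n), decreasing in i
# The largest weight is w_1 = (c_F * n)^{1/(tau - 1)}, i.e. of order n^alpha
# with alpha = 1/(tau - 1), as used in the exploration heuristics below.
```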
For the setting in (1.3) and (1.6), by [11, Theorem 3.13], the number of vertices with degree k, which we denote by \(N_k\), satisfies
where \({\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathbb {P}}}}\) denotes convergence in probability, and where W has distribution function F appearing in (1.6). We recognize the limiting distribution as a so-called mixed Poisson distribution with mixing distribution F, i.e., conditionally on \(W=w\), the distribution is Poisson with mean w. As discussed in more detail in [22], since a Poisson random variable with large parameter w is closely concentrated around its mean w, the tail behavior of the degrees in our random graph is close to that of the distribution F. As a result, when (1.7) holds, and with \(D_n\) the degree of a uniformly chosen vertex in [n], \(\limsup _{n\rightarrow \infty } \mathbb {E}[D^a_n]<\infty \) when \(a<\tau -1\) and \(\limsup _{n\rightarrow \infty } \mathbb {E}[D^a_n]=\infty \) when \(a\ge \tau -1\). In particular, the degree of a uniformly chosen vertex in [n] has finite second, but infinite third moment when (1.7) holds with \(\tau \in (3,4)\).
Under the key assumption in (1.7),
and the third moment of the degrees tends to infinity, i.e., with \(W\sim F\), we have \(\mathbb {E}[W^3]=\infty \). Define
so that, again by (1.7), \(\nu <\infty \). Then, by [11, Theorem 3.1] (see also [11, Sect. 16.4] for a detailed discussion on rank-1 inhomogeneous random graphs, of which our random graph is an example), when \(\nu > 1\), there is one giant component of size proportional to n, while all other components are of smaller size o(n), and when \(\nu \le 1\), the largest connected component contains a proportion of vertices that converges to zero in probability. Thus, the critical value of the model is \(\nu =1\). The main goal of this paper is to investigate what happens close to the critical point, i.e., when \(\nu =1\).
With the definition of the weights in (1.6) and for F such that \(\nu =1\), we write \(\mathcal {G}_n^0(\varvec{w})\) for the graph constructed with the probabilities in (1.3), while, for any fixed \(\lambda \in {\mathbb {R}}\), we write \(\mathcal {G}_n^\lambda (\varvec{w})\) when we use the weight sequence
We shall assume that n is so large that \(1+\lambda n^{-(\tau -3)/(\tau -1)}\ge 0\), so that \({w_i(\lambda )\ge 0}\) for all \(i\in [n]\). When \(\tau >4\), so that \(\mathbb {E}[W^3]<\infty \), it was shown in [6, 22, 34] that the scaling limits of the random graphs studied here are (apart from a trivial rescaling of time and \(\lambda \)) equal to the scaling limit of the ordered connected components in the Erdős–Rényi random graph in Theorem 1.1. The rescaling of time and \(\lambda \) is due to the variance of the step distribution of the cluster exploration process being unequal to 1 (see Sect. 1.2 below for more details on what we mean by ‘cluster exploration’). For the Erdős–Rényi random graph, the step distribution is that of a Poisson random variable with parameter 1, minus one. When \(\tau \in (3,4)\) the situation is entirely different, as discussed next.
Throughout this paper, we make use of the following standard notation. We let \({\mathop {\longrightarrow }\limits ^{d}}\) denote convergence in distribution, and \({\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathbb {P}}}}\) convergence in probability. For a sequence of random variables \((X_n)_{n\ge 1}\), we write \(X_n=o_{\scriptscriptstyle \mathbb {P}}(b_n)\) when \(|X_n|/b_n{\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathbb {P}}}}0\) as \(n\rightarrow \infty \). For a non-negative function \(n\mapsto g(n)\), we write \(f(n)=O(g(n))\) when \(|f(n)|/g(n)\) is uniformly bounded, and \(f(n)=o(g(n))\) when \(\lim _{n\rightarrow \infty } f(n)/g(n)=0\). Furthermore, we write \(f(n)=\Theta (g(n))\) if \(f(n)=O(g(n))\) and \(g(n)=O(f(n))\). Finally, we abbreviate
1.2 The Scaling Limit for \(\tau \in (3,4)\)
We next recall two key results that we recently established in [7]:
Theorem 1.2
(Weak convergence of the ordered critical clusters for \(\tau \in (3,4)\) [7]) Fix the Norros–Reittu random graph with weights \(\varvec{w}(\lambda )\) defined in (1.6) and (1.12). Assume that \(\nu =1\) and that (1.7) holds. Then, for all \(\lambda \in {\mathbb {R}}\),
in the product topology, for some non-degenerate limit \((\gamma _i(\lambda ))_{i\ge 1}\).
In order to further specify the scaling limit \((\gamma _i(\lambda ))_{i\ge 1}\), we need to introduce a continuous-time process \((\mathcal {S}_t)_{t\ge 0}\), referred to as a thinned Lévy process, and defined as
where a, b, c have been identified in [7, Theorem 2.4] as \(a=c_{\scriptscriptstyle F}^{\alpha }/\mathbb {E}[W]\), \(b=c_{\scriptscriptstyle F}^{\alpha }\) and \(c=\theta =\lambda + \zeta \) with
the constant given in [7, (2.18)]. The process \((\mathcal {S}_t)_{t\ge 0}\) starts out positive. It can subsequently be positive or negative, and we will be interested in the first time \((\mathcal {S}_t)_{t\ge 0}\) hits zero.
Further, here we use the notation
where \((T_i)_{i\ge 2}\) are independent exponential random variables with mean
The term thinned Lévy process refers to the fact that \(\mathcal {I}_i(t)\) can be interpreted as \(\mathbb {1}_{\{N_i(t)\ge 1\}}\), where \((N_i(t))_{i\ge 1}\) are independent Poisson processes with rate \(a/i^{\alpha }\). If we replace \(\mathbb {1}_{\{N_i(t)\ge 1\}}\) by \(N_i(t)\) in this representation, then the corresponding process is a Lévy process. In \((\mathcal {S}_t)_{t\ge 0}\), only the first point in these Poisson processes is counted, thus we can think about the Poisson processes as being thinned. See below for more details on the interpretation of \((\mathcal {S}_t)_{t\ge 0}\).
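The thinning interpretation can be sketched as follows, truncating the infinite family \(i\ge 2\) at a finite N and with illustrative values of a and \(\tau \) (all names below are ours):

```python
import numpy as np

tau = 3.5
alpha = 1.0 / (tau - 1)
a = 1.0                       # illustrative rate constant
rng = np.random.default_rng(2)

N = 10_000                    # truncation of the infinite family i >= 2
i = np.arange(2, N + 2)
T = rng.exponential(scale=i**alpha / a)  # T_i ~ Exp with rate a / i^alpha

def thinned_indicators(t):
    """I_i(t) = 1{T_i <= t} = 1{N_i(t) >= 1}: only the first point of the
    rate-(a / i^alpha) Poisson process N_i is counted ('thinning')."""
    return (T <= t).astype(float)

def poisson_counts(t):
    """The un-thinned Levy counterpart: the full counts N_i(t)."""
    return rng.poisson(a * t / i**alpha)
```

Each indicator is non-decreasing in t and jumps at most once, which is precisely what distinguishes the thinned process from its Lévy counterpart.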
Let \(H_1(0)\) denote the first hitting time of 0 of the process \((\mathcal {S}_t)_{t\ge 0}\), i.e.,
and \({\mathcal {C}}(1)\) the connected component to which vertex 1 (with the largest weight) belongs. We recall from [7, Theorem 2.1 and Proposition 3.7] that also \(|{\mathcal {C}}(1)|n^{-\rho }\) converges in distribution:
Theorem 1.3
(Weak convergence of the cluster of vertex 1 for \(\tau \in (3,4)\)). Fix the Norros–Reittu random graph with weights \(\varvec{w}(\lambda )\) defined in (1.6) and (1.12). Assume that \(\nu =1\) and that (1.7) holds. Then, for all \(\lambda \in {\mathbb {R}}\),
with \(H_1^{a}(0)\) the hitting time of 0 of \((\mathcal {S}_t)_{t\ge 0}\) with \(a=c_{\scriptscriptstyle F}^{\alpha }/\mathbb {E}[W]\), \(b=c_{\scriptscriptstyle F}^{\alpha }\), \(c=\theta \).
Let us informally describe how the process \((\mathcal {S}_t)_{t\ge 0}\) arises through a cluster exploration, and how it is linked to \(H_1^{a}(0)\) in (1.20) as well as \((\gamma _i(\lambda ))_{i\ge 1}\) in (1.14). In Theorem 1.3, we explore the connected component of vertex 1 one vertex at a time in a breadth-first way, and keep track of the number of active vertices, which are vertices that are found to be in \({\mathcal {C}}(1)\), but whose neighbors have not yet been inspected to determine whether they are in \({\mathcal {C}}(1)\). Let \(\mathcal {S}_k^{\scriptscriptstyle (n)}\) be the number of active vertices after k steps, so that \(\mathcal {S}_0^{\scriptscriptstyle (n)}=1.\) Obviously, \(|{\mathcal {C}}(1)|=\inf \{k :\mathcal {S}_k^{\scriptscriptstyle (n)}=0\}\), since we are done with the exploration of a cluster when there are no unexplored vertices left, and we explore one vertex at a time. By construction, \(\mathcal {S}_1^{\scriptscriptstyle (n)}\) is the number of neighbors of vertex 1, which can be seen to be close to \(w_1\approx b n^{\alpha }\). Thus, the exploration process can be expected to be of order \(n^{\alpha }\), and we will rescale \(\mathcal {S}_1^{\scriptscriptstyle (n)}\) by a factor \(n^{-\alpha }\).
As explained in more detail in [7] and for the edge-probabilities in (1.3), the exploration can be performed rather effectively in terms of a marked branching process with mixed Poisson offspring distribution. Here, an unexplored vertex in the branching process, v, first draws a mark \(M_v\) for which \(\mathbb {P}(M_v=i)=w_i/\ell _n\), and after this, it draws a Poisson number of children with mean \(w_{M_v}\). The connection to the cluster exploration in the graph is obtained by thinning all vertices whose mark has appeared earlier. Here, we can think of the mark \(M_v=i\) as indicating that the vertex v in the branching process is being mapped to vertex i in the graph.
The largest weights correspond to the small values of \(i\in [n]\). The amount of time it takes us to draw a mark corresponding to vertex i is of the order \(\ell _n/w_i\), which is of order \(n^{\rho } a/i^\alpha \), which suggests that \((n^{-\alpha }\mathcal {S}_{tn^\rho }^{\scriptscriptstyle (n)})_{t\ge 0}\) converges in distribution to some process \((\mathcal {S}_t)_{t\ge 0}\). Further, the first time that \(i\ge 2\) is chosen, \(T_i^{\scriptscriptstyle (n)}\), satisfies \(n^{-\rho } T_i^{\scriptscriptstyle (n)}{\mathop {\longrightarrow }\limits ^{d}}T_i\), where \(T_i\) is exponential with rate \(a/i^\alpha \). Further, vertex i has roughly \(w_i\approx n^{\alpha } b/i^\alpha \) neighbors, so that \(\mathcal {S}_k^{\scriptscriptstyle (n)}\) makes a jump of order \(n^{\alpha } b/i^\alpha \) when i is found to be in \({\mathcal {C}}(1)\). This informally explains the process (1.15)–(1.18), while (1.19) explains that when the exploration process hits zero, then the cluster is fully explored. Turning this into a formal proof was one of the main steps in [7].
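The breadth-first exploration and its walk \(\mathcal {S}_k^{\scriptscriptstyle (n)}\) can be sketched on an arbitrary adjacency structure; the graph below is a hypothetical toy example, not one from the paper:

```python
from collections import deque

def exploration_walk(adj, start):
    """Breadth-first exploration of the component of `start`, tracking
    S_k = number of active (found-but-unexplored) vertices after k steps,
    with S_0 = 1 and |C(start)| = inf{k : S_k = 0}."""
    active = deque([start])
    seen = {start}
    walk = [1]
    while active:
        v = active.popleft()                  # explore one vertex per step
        new = [u for u in adj[v] if u not in seen]
        seen.update(new)
        active.extend(new)
        walk.append(walk[-1] - 1 + len(new))  # lose v, gain its unseen neighbours
    return walk

# Hypothetical toy graph: a path 0-1-2 plus an isolated vertex 3.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
S = exploration_walk(adj, 0)
```

Here the walk first hits 0 at step 3, which is indeed \(|{\mathcal {C}}(0)|\) for this toy graph.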
The above description does not yet describe the scaling limit \((\gamma _i(\lambda ))_{i\ge 1}\) in (1.14). For this, we note that after the exploration of \({\mathcal {C}}(1)\), we need to explore the clusters of the (high-weight) vertices that are not part of \({\mathcal {C}}(1)\). We do this by taking the vertex with largest weight that is not in \({\mathcal {C}}(1)\), which in the scaling limit corresponds to the smallest i for which \(\mathcal {I}_i(H_1(0))=0\), and start exploring the cluster of that vertex. This is again done by using processes similar to \((\mathcal {S}_t)_{t\ge 0}\), but changes arise due to the depletion-of-points effect. Indeed, since \({\mathcal {C}}(1)\) is fully explored, in later explorations those vertices cannot arise again. We refrain from describing this in more detail, as it is not needed in this paper. We repeat this procedure, and explore the connected components of unexplored vertices of the highest weights one by one. After performing these explorations infinitely often, we obtain \((\gamma _i(\lambda ))_{i\ge 1}\) as the ordered vector of hitting times of zero of these cluster exploration processes. Some more details are given in Sect. 2.4.
By scaling, for given a, b, c, the rescaled hitting time \(H_1^{a}(0)/a\) has the same distribution as the hitting time \(H_1(0)\) obtained by taking \(a'=b'=1\) and \(c'=c/(ab)=(\lambda +\zeta )/(ab)\). We shall therefore reparametrize to \(a'=b'=1\) and let
where we set
and have used the notation
and where \(\mathcal {I}_i(t)\) is defined in (1.17)–(1.18) with a replaced by \(a'=1\). This scaling is convenient, as it reduces the clutter in our notation.
1.3 Main Results
In this section we state our main results. Recall \(\gamma _1(\lambda )\) from (1.14). Our Main Theorem establishes a generalization of Pittel’s result in (1.2) to our rank-1 inhomogeneous random graph with power-law exponents \(\tau \in (3,4)\):
Main Theorem
(Tail behavior scaling limit for \(\tau \in (3,4)\)). When \(u\rightarrow \infty \), there exist \(I>0\), independent of \(\lambda \), and \(A=A(\lambda ), \kappa _{i,j}=\kappa _{i,j}(\lambda )\) such that
The constants I, A and \(\kappa _{ij}\) are specified in Sect. 2. By scaling, these constants only depend on a, b through \(c'=c/(ab)=(\lambda +\zeta )/(ab)\); any other dependence disappears, since the law of \(H_1(0)\) only depends on \(c'\). Since \(\tau \in (3,4)\), the sum over i, j such that \(i+j\ge 1\) is in fact finite, as we can ignore all terms for which \(\tau -1-i(\tau -2)-j(\tau -3)<0\). We also see that the Main Theorem connects up nicely with Pittel’s result in (1.2), which arises for \(\tau =4\): for example, the exponent of u in the exponential is equal to 3 for \(\tau =4\), and the exponent in the power of u in the prefactor is equal to 3/2, as in (1.2). That these powers depend sensitively on \(\tau \) is a manifestation of the importance of the inhomogeneity, which we will see throughout this paper.
Aside from the Main Theorem, we prove two further theorems about the structure of the largest connected component when it is large. The first theorem concerns the probability that \(H_1^a(0)>u\) for some \(u>0\) large, where \(H_1^a(0)\) is the weak limit of \(n^{-\rho } |{\mathcal {C}}(1)|\) identified in Theorem 1.3. This is achieved by investigating the hitting time \(H_1(0)\) of 0 of the process \((\mathcal {S}_t)_{t\ge 0}\) in (1.21).
Theorem 1.4
(Tail behavior scaling limit cluster vertex 1 for \(\tau \in (3,4)\)). When \(u\rightarrow \infty \), there exist \(I>0\), independent of \(\tilde{\beta }\), and \(A=A(\tilde{\beta })\), \(\kappa _{ij}(\tilde{\beta })\in {\mathbb R}\) such that
The constants I, A and \(\kappa _{ij}\) are equal to those in the Main Theorem. Comparing the Main Theorem and Theorem 1.4, we see that \(\mathbb {P}(H_1(0)>u)=\mathbb {P}(\gamma _1(\lambda )>a u) \cdot (1+o(1))\).
This has the interpretation that vertex 1, which is the vertex with the largest weight in our rank-1 inhomogeneous random graph, is with overwhelming probability in the largest connected component when this largest connected component is quite large.
We can even go one step further and study the optimal trajectory the process \(t\mapsto \mathcal {S}_t\) takes in order to achieve the unlikely event that \(H_1(0)>u\) when u is large. In order to describe this trajectory, we need to introduce some further notation. In the proof, it will be crucial to tilt the distribution, i.e., to investigate the measure \(\tilde{\mathbb {P}}\) with Radon–Nikodym derivative \({\mathrm {e}}^{\theta u \mathcal {S}_{u}}/\mathbb {E}[{\mathrm {e}}^{\theta u \mathcal {S}_{u}}]\), for some appropriately chosen \(\theta \). The selection of an appropriate \(\theta \) for the thinned Lévy process \((\mathcal {S}_t)_{t\ge 0}\) is quite subtle, and has been the main topic of our paper [1]. The main results from [1] are reported in Sect. 2, and will play an important role in the present analysis. We refer to below (2.13) for the definition of \(\theta ^*\) that appears in the description of the optimal trajectory that is identified in the following theorem (Fig. 1):
Theorem 1.5
(Optimal trajectory). For \(p\in [0,1]\), define
with \(\theta ^*\) as defined below (2.13). Then, for \(u\rightarrow \infty \) and for any \(\varepsilon >0\),
Our Main Theorem follows by combining Theorems 1.4 and 1.5, and showing that, for u large, the probability that \(1\in {\mathcal {C}}_{\scriptscriptstyle (1)}\) is overwhelmingly large. This argument is carried out in detail in Sect. 2.4.
Brownian motion on a parabola. Note that substituting \(\tau =4\) into (1.24) yields \(\frac{A}{u^{3/2}}{\mathrm {e}}^{-Iu^{3}+\kappa _{01} u^2+(\kappa _{10}+\kappa _{02})u}(1+o(1))\), which agrees with the result of Pittel in (1.2). This suggests a smooth transition from the case \(\tau \in (3,4)\) to the case \(\tau >4\). We next further explore this relation.
Consider the process \((W^\lambda _t)_{t\ge 0}=(W_t + \lambda t -t^2/2)_{t\ge 0}\) with \((W_t)_{t\ge 0}\) a standard Wiener process as mentioned in Theorem 1.1. We now apply the technique of exponential change of measure to this process. First note that the moment generating function of \(W_u^{\lambda }\) can be computed as
and let \(\theta ^{*}_u=\arg \min _{\vartheta } \log \phi (u; \vartheta )\), which is given by
The main term is
Noting that
we see that this upper bound agrees to leading order with the result of Pittel in (1.2). In order to derive the full asymptotics in (1.2), one can define the measure
rewrite
and then deduce the asymptotics of the latter expectation in full detail. Our analysis will be based on this intuition, now applied to a more involved, so-called thinned Lévy, stochastic process.
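The tilting computation for the Brownian case can be checked symbolically. The sketch below uses that \(W_u\sim N(0,u)\), so that \(\log \phi (u;\vartheta )=\vartheta (\lambda u-u^2/2)+\vartheta ^2u/2\); the symbol names are ours:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
lam, th = sp.symbols('lambda vartheta', real=True)

# log E[exp(vartheta * W_u^lambda)] with W_u^lambda = W_u + lambda*u - u**2/2
# and W_u ~ N(0, u):
log_phi = th * (lam * u - u**2 / 2) + th**2 * u / 2

theta_star = sp.solve(sp.diff(log_phi, th), th)[0]  # minimiser over vartheta
min_log_phi = sp.expand(log_phi.subs(th, theta_star))
# theta_star = u/2 - lambda, and the minimal exponent works out to
# -u**3/8 + lambda*u**2/2 - lambda**2*u/2, whose leading term exp(-u^3/8)
# is the cubic decay of the Chernoff upper bound discussed above.
```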
2 Overview of the Proofs
In this section, we give an overview of the proofs of Theorems 1.4 and 1.5. The point of departure for our proofs is the conjecture that \(\mathbb {P}(H_1(0)>u)\approx \mathbb {P}(\mathcal {S}_u>0)\) for large u. The event \(\{H_1(0)>u\}\) obviously implies \(\{\mathcal {S}_u>0\}\), but because of the strong downward drift of the process \((\mathcal {S}_t)_{t\ge 0}\), it seems plausible that both events are roughly equivalent.
In [1] a detailed study was presented on the large deviations behavior of the process \((\mathcal {S}_t)_{t\ge 0}\). Using exponential tilting of measure the following two theorems were proved.
Theorem 2.1
(Exact asymptotics tail \(\mathcal {S}_u\) [1, Theorem 1.1]). There exist \(I, D>0\) and \(\kappa _{ij}\in {\mathbb R}\) such that, as \(u\rightarrow \infty \),
Theorem 2.2
(Sample path large deviations [1, Theorem 1.2]). There exists a function \(p\mapsto I_{\scriptscriptstyle E}(p)\) on [0, 1] such that, for any \(\varepsilon >0\) and \(p\in [0,1)\),
In [1] it is explained that specific challenges arise in the identification of a tilted measure due to the power-law nature of \((\mathcal {S}_t)_{t\ge 0}\). General principles prescribe that the tilt should follow from a variational problem, but in the case of \((\mathcal {S}_t)_{t\ge 0}\) this involves a Riemann sum that is hard to control. In [1] this Riemann sum is approximated by its limiting integral, and it is proved that the tilt that follows from the corresponding approximate variational problem is sufficient to establish the large deviations results in Theorems 2.1 and 2.2. Details about this tilted measure are presented in Sect. 2.1.
It is clear that Theorems 2.1 and 2.2 for the event \(\{\mathcal {S}_u>0\}\) are the counterparts of Theorems 1.4 and 1.5 for \(\{H_1(0)>u\}\). Let us now sketch how we make the conjecture that \(\mathbb {P}(H_1(0)>u)\approx \mathbb {P}(\mathcal {S}_u>0)\) for large u formal. We show that \(\mathbb {P}(H_1(0)>u)\) has the same asymptotic behavior as \(\mathbb {P}(\mathcal {S}_u>0)\) in (2.1), with the same constants except for the constant D. Despite the similarity of this result, the proof method we shall use is entirely different from the exponential tilting in [1]. In order to establish the asymptotics for \(\mathbb {P}(H_1(0)>u)\), we establish sample path large deviations, not conditioned on the event \(\{\mathcal {S}_u>0\}\), but on the event \(\{H_1(0)>u\}\). This is much harder, since we have to investigate the probability that \(\mathcal {S}_t>0\) for all \(t\in [0,u]\). However, this is also more important, as only the hitting times \(H_1(0)\) give us asymptotics of the limiting cluster sizes. In order to prove these strong sample-path properties, we first prove that, under the tilted measure, \(\mathcal {S}_t\) is close to its expected value for a finite, but large, number of t’s, followed by a proof that the path cannot deviate much in the small time intervals between these times.
Now here is our strategy for the proofs. We extend the conjecture \(\mathbb {P}(H_1(0)>u)\approx \mathbb {P}(\mathcal {S}_u>0)\) by a conjectured sample path behavior that says that, under the tilted measure, the typical sample path of \((\mathcal {S}_t)_{t\ge 0}\) that leads to the event \(\{\mathcal {S}_u>0\}\) remains positive and hence implies \(\{H_1(0)>u\}\). To be more specific, we divide up this likely sample path into three parts: the early part, the middle part, and the end part. Our proof consists of treating each of these parts separately. We shall prove consecutively that with high probability the process:
-
(i)
Does not cross zero in the initial part of the trajectory (‘no early hits’);
-
(ii)
Is high up in the state space in the middle part of the trajectory, while experiencing small fluctuations, and therefore does not hit zero (‘no middle ground’);
-
(iii)
Is forced to remain positive until the very end.
In the last step, we have to be very careful, and it is in this step that it will turn out that the constant D arising in the asymptotics of \(\mathbb {P}(\mathcal {S}_u>0)\) in (2.1) is different from the constant A arising in the asymptotics of \(\mathbb {P}(H_1(0)>u)\) in (1.25). This is due to the fact that even when \(\mathcal {S}_u>0\), the path could dip below zero right before time u and does so with non-vanishing probability. The proof reveals that then it will do so in the time interval \([u-Tu^{-(\tau -2)},u]\) for some large T.
We next summarize the technique of exponential tilting developed in [1] for the thinned Lévy process \((\mathcal {S}_t)_{t\ge 0}\) with \(\tau \in (3,4)\), which allows us to give more details about how we shall establish the conjectured sample path behavior for each of the three parts described above.
2.1 Tilting and Properties of the Tilted Process
All results presented in this subsection are proved in [1].
Exponential tilting. Parts of this section are taken almost verbatim from [1]. We use the notion of exponential tilting of measure in order to rewrite
where \(\vartheta \) is chosen later on. For every event E, define the measure \(\widetilde{\mathbb {P}}_{\vartheta }\) with corresponding expectation \(\widetilde{\mathbb {E}}_{\vartheta }\) by means of the equality
with normalizing constant \(\phi (u;\vartheta )\) given by
In terms of this notation, we are interested in
where we write \(\{\mathcal {S}_{[0,u]}>0\}=\{\mathcal {S}_{t}>0\ \forall \, t\in [0,u]\}\).
We now explain in more detail how to choose a good \(\vartheta \). The independence of the indicators \((\mathcal {I}_i(u))_{i\ge 2}\) yields
with
The function \(x\mapsto f(x;\vartheta )\) is integrable at \(x=0\) and at \(x=\infty \), so the above sum can be approximated by the integral
for some error term \(u\mapsto e_\vartheta (u)\) given by
where \(\alpha =1/(\tau -1)\) and the Riemann zeta function \(\zeta (\cdot )\) is defined as
where \({\mathrm {Re}}(s)\) denotes the real part of \(s\in {\mathbb C}\). Equation (2.11) follows from Euler–Maclaurin summation [21, p. 333]. The error term in (2.10) converges to 0 uniformly for \(\vartheta \) in compact sets bounded away from zero. As a result,
Let \(\theta ^{*}_u\) be the solution of
Moreover, let \(\theta ^{*}\) be the value of \(\vartheta \) where \(\vartheta \mapsto \Lambda (\vartheta )\) is minimal. It follows easily that \(I\equiv -\Lambda (\theta ^{*})>0\) and that \(\theta ^{*}\) is unique. In [1, Lemma 3.6], we have seen that \(\theta ^{*}_u=\theta ^{*}+o(1)\). Further, \(\theta ^{*}>0\) by [1, Lemma 3.5]. Set \(\phi (u)=\phi (u;\theta ^{*}_u)\). The asymptotics of \(\phi (u)\) are as follows.
Proposition 2.3
(Asymptotics of main term [1, Proposition 2.1]). As \(u\rightarrow \infty \), and with \(I=-\min _{\vartheta \ge 0} \Lambda (\vartheta )>0\), there exist \(\kappa _{ij}\in {\mathbb R}\) such that
Properties of the process under the tilted measure. In what follows, take \(\vartheta =\theta ^{*}_u\), and let \(\widetilde{\mathbb {P}}=\widetilde{\mathbb {P}}_{\theta ^{*}_u}\) with corresponding expectation \(\widetilde{\mathbb {E}}=\widetilde{\mathbb {E}}_{\theta ^{*}_u}\). Abbreviate \(\theta =\theta ^{*}_u\). Under this new measure, the rare event of \(\mathcal {S}_u\) being positive becomes quite likely. To describe these results, let us introduce some notation. Recall from (1.26) that, for \(p\in [0,1]\),
where we take \(\vartheta =\theta ^{*}\), which turns out to be the limit of \(\theta ^{*}_u\) as \(u\rightarrow \infty \) (see, e.g., [1, Lemma 3.6]). The asymptotic mean of the process \(p\mapsto \mathcal {S}_{pu}\) conditionally on \(\mathcal {S}_u > 0\) can be described with the help of the function \(p\mapsto I_{\scriptscriptstyle E}(p)\), cf. Theorem 2.2. One easily checks that
the latter by definition of \(\theta ^{*}\), as \(0 = \Lambda '(\theta ^{*}) = I_{\scriptscriptstyle E}(1)\). Finally,
and
Lemma 2.4
(Expectation of \(\mathcal {S}_t\) [1, Lemma 2.2]). As \(u\rightarrow \infty \),
(a) \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}] = u^{\tau -2}I_{\scriptscriptstyle E}(t/u)+ O(1+t + t |\theta ^*- \theta _u^*| u^{\tau -3})\) uniformly in \(t\in [0,u]\).
(b) \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}-\mathcal {S}_u] = u^{\tau -2}I_{\scriptscriptstyle E}(t/u)+ O(u-t + u^{-1} + |\theta ^*- \theta _u^*| u^{\tau -2})\) uniformly in \(t\in [u/2,u]\).
(c) \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}-\mathcal {S}_u] = u^{\tau -3}I_{\scriptscriptstyle E}'(1)(t-u)(1+o(1))+ O(u^{-1})\) when \(u-t=o(u)\).
(d) \(u\widetilde{\mathbb {E}}[\mathcal {S}_u]=o(1)\) when \(u\rightarrow \infty \).
We will also need some consequences of the asymptotic properties of \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}]\). This is stated in the following corollary:
Corollary 2.5
As \(u\rightarrow \infty \),
(a) \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}]\ge \underline{c} t u^{\tau -3}\) and \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}]\le \overline{c} t u^{\tau -3}\) uniformly for \(t\in [\varepsilon ,u/2]\), where \(0<\underline{c}<\overline{c}<\infty \);
(b) \(\widetilde{\mathbb {E}}[\mathcal {S}_{u-t}-\mathcal {S}_u]\ge \underline{c} t u^{\tau -3}\) and \(\widetilde{\mathbb {E}}[\mathcal {S}_{u-t}-\mathcal {S}_u]\le \overline{c} t u^{\tau -3}\) uniformly for \(t\in [T u^{-(\tau -2)},u/2]\), where \(0<\underline{c}<\overline{c}<\infty \);
(c) \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}]=\widetilde{\mathbb {E}}[\mathcal {S}_{t_1}](1+o(1))\) for \(t\in [t_1,t_2]\) and \(t_1\in [\varepsilon , u/2]\) and \(t_2-t_1=O(u^{-(\tau -2)})\);
(d) \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}]=\widetilde{\mathbb {E}}[\mathcal {S}_{t_1}](1+o_{\scriptscriptstyle T}(1))\) for \(t\in [t_1,t_{2}]\) and \(t_1\in [u/2, u-Tu^{-(\tau -2)}]\), where \(o_{\scriptscriptstyle T}(1)\) denotes a quantity c(T, u) such that \(\lim _{T\rightarrow \infty } \limsup _{u\rightarrow \infty }c(T,u)=0\) and \(t_2-t_1=O(u^{-(\tau -2)})\).
Proof
Part (a) for \(t\in [\varepsilon ,\varepsilon u]\) for \(\varepsilon >0\) sufficiently small follows from Lemma 2.4(a) together with the facts that \(I_{\scriptscriptstyle E}(0)=0\), \(I_{\scriptscriptstyle E}'(0)>0\), and that \(1+t + t |\theta ^*- \theta _u^*| u^{\tau -3}=o(t u^{\tau -3})\). The fact that \(I_{\scriptscriptstyle E}'(0)>0\) also implies that \(\underline{c}\) can be taken to be strictly positive. For \(t \in [\varepsilon u, u/2]\), Part (a) follows from the fact that \(I_{\scriptscriptstyle E}(p)>0\) for all \(p\in [\varepsilon ,1/2]\) and that \(1+t + t |\theta ^*- \theta _u^*| u^{\tau -3}=o(u^{\tau -2})\).
Part (b) follows in the same way as Part (a), now using Lemma 2.4(b) together with the facts that \(I_{\scriptscriptstyle E}(1)=0\) and \(I_{\scriptscriptstyle E}'(1)<0\).
Part (c) follows from Lemma 2.4(a), by subtracting the two terms. Note that the error term \(O(1+t_1 + t_1 |\theta ^*- \theta _u^*| u^{\tau -3})\) is \(o(t_1 u^{\tau -3})\) since \(t_1\ge \varepsilon \), while \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_1}]=\Theta (t_1 u^{\tau -3})\) by Part (a) of this corollary. Further, note that
which is \(o(1)\widetilde{\mathbb {E}}[\mathcal {S}_{t_1}]\).
Part (d) follows again from Lemma 2.4(a) by subtracting the two terms. Note again that the error term \(O(1+t_1 + t_1 |\theta ^*- \theta _u^*| u^{\tau -3})\) is \(o(t_1 u^{\tau -3})\), while \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_1}]=\Theta (t_1 u^{\tau -3})\) by part (b) of this corollary and Lemma 2.4(a). Further, note that
which is \(o_{\scriptscriptstyle T}(1)\widetilde{\mathbb {E}}[\mathcal {S}_{t_1}]\). \(\square \)
The next lemma gives asymptotic properties of the variance of \(\mathcal {S}_{t}\). Define, for \(p\in [0,1]\),
and
Again, it is not hard to check that
and
Lemma 2.6
(Covariance structure of \(\mathcal {S}_t\) [1, Lemma 2.3]). As \(u\rightarrow \infty \),
(a) \(\widetilde{\mathrm{Var}}[\mathcal {S}_{t}] = u^{\tau -3}I_{\scriptscriptstyle V}(t/u) + O(1 + t |\theta ^* - \theta _u^*| u^{\tau -4})\) uniformly in \(t\in [0,u]\).
(b) \(\widetilde{\mathrm{Var}}[\mathcal {S}_{t}-\mathcal {S}_u] = u^{\tau -3}J_{\scriptscriptstyle V}(t/u) + O((u-t)u^{-1} + (u-t) |\theta ^* - \theta _u^*| u^{\tau -4})\) uniformly in \(t\in [0,u]\).
(c) \(\widetilde{\mathrm{Cov}}[\mathcal {S}_t,\mathcal {S}_u-\mathcal {S}_t] = -u^{\tau -3}G_{\scriptscriptstyle V}(t/u)+ O((u-t)u^{-1} + (u-t) |\theta ^* - \theta _u^*| u^{\tau -4})\) uniformly in \(t\in [0,u]\).
The next result bounds the Laplace transform of the couple \((\mathcal {S}_t,\mathcal {S}_u)\):
Proposition 2.7
(Joint moment generating function of \((\mathcal {S}_t,\mathcal {S}_u)\) [1, Proposition 2.4]). (a) As \(u\rightarrow \infty \),
where \(|\Theta |\le o_u(1)\) as \(u\rightarrow \infty \) uniformly in \(t\in [u/2,u]\) and \(\lambda \) in a compact set.
(b) Fix \(\varepsilon >0\) small. As \(u\rightarrow \infty \), for any \(\lambda _1, \lambda _2 \in \mathbb {R}\),
where \(|\Theta |\le o_u(1) + O(t^{3(3-\tau )/2})\) uniformly in \(t\in [\varepsilon ,u-u^{-(\tau -5/2)}]\) and \(\lambda _1, \lambda _2\) in a compact set.
Proposition 2.7 and the fact that \(u\widetilde{\mathbb {E}}[\mathcal {S}_u]=o(1)\) (see [1, Lemma 4.1]) can be combined to show that \(u^{-(\tau -3)/2}\mathcal {S}_u\) converges in distribution to a normal random variable with mean 0 and variance \(I_{\scriptscriptstyle V}(1)\). Moreover, as we see below, the density of \(\mathcal {S}_{u}\) close to zero behaves like \(\left( 2\pi I_{\scriptscriptstyle V}(1)\right) ^{-1/2}u^{-(\tau -3)/2}\):
Proposition 2.8
(Density of \(\mathcal {S}_u\) near zero [1, Proposition 2.5]) Uniformly in \(s=o(u^{(\tau -3)/2})\), the density \(\widetilde{f}_{\mathcal {S}_u}\) of \(\mathcal {S}_u\) satisfies
with \(B= \left( 2\pi I_{\scriptscriptstyle V}(1)\right) ^{-1/2}\) and \(I_{\scriptscriptstyle V}(p)\) defined in (2.21). Moreover, \(\widetilde{f}_{\mathcal {S}_t}(s)\) is uniformly bounded by a constant times \(u^{-(\tau -3)/2}\) for all s, u and \(t \in [u/2,u]\).
There are three more results from [1] that will be used in this paper. The first describes the distribution of the indicator processes \((\mathcal {I}_i(t))_{t\ge 0}\) under the tilted measure \(\widetilde{\mathbb {P}}\): the processes, which are independent under \(\mathbb {P}\), remain independent under \(\widetilde{\mathbb {P}}\):
Lemma 2.9
(Indicator processes under the tilted measure [1, Lemma 4.2]) Under the measure \(\widetilde{\mathbb {P}}\), the distribution of the indicator processes \((\mathcal {I}_i(t))_{t\ge 0}\) is that of independent indicator processes. More precisely,
where \((T_i)_{i\ge 2}\) are independent random variables with distribution
The second lemma describes what happens to the variances for small p or for p close to 1:
Lemma 2.10
(Asymptotic variance near extremities [1, Lemma 4.3(b)]). As \(p\rightarrow 1\), \(J_{\scriptscriptstyle V}(p) = -(1-p)J_{\scriptscriptstyle V}'(1)(1+o(1))\) with \(J_{\scriptscriptstyle V}'(1)<0\), while, as \(p\rightarrow 0\),
Consequently, there exist \(0<\underline{c}<\overline{c}<\infty \) such that, for every \(p\in [0,\varepsilon ]\) with \(\varepsilon >0\) sufficiently small,
We finally rely on the following corollary that allows us to compute sums that we will encounter frequently:
Corollary 2.11
(Replacing sums by integrals in general [1, Corollary 3.3]). For every \(a > \tau -1\) and \(b>0\), there exists a constant c(a, b) such that
2.2 No Early Hits and Middle Ground
In this section, we prove that the tilted process is unlikely to hit 0 until a time that is very close to u. We start by investigating the early hits.
No early hits In this step, we prove that it is unlikely that the process hits zero early on, i.e., in the first time interval \([0,\varepsilon ]\) for some \(\varepsilon >0\) sufficiently small. In its statement, we write \(0\in \mathcal {S}_{[0,t]}\) for the event that \(\mathcal {S}_s=0\) for some \(s\in [0,t]\), so that \(\mathbb {P}(H_1(0)>u)=\mathbb {P}(0\not \in \mathcal {S}_{[0,u]})\).
Lemma 2.12
(No early hits). For every \(u\in [0,\infty )\), as \(\varepsilon \downarrow 0\),
where \(o_\varepsilon (1)\) denotes a function that converges to zero as \(\varepsilon \downarrow 0\), uniformly in u.
The proof of Lemma 2.12 follows from a straightforward application of the FKG-inequality for independent random variables (see [19], or [20, Theorem 2.4, p. 34]). The standard versions of the FKG-inequality hold for independent indicator random variables, and in our case we need it for independent exponentials. It is not hard to prove that the FKG-inequality we need holds by an approximation argument.
Proof
We note that the process \((\mathcal {S}_t)_{t\ge 0}\) is a deterministic function of the exponential random variables \((T_i)_{i\ge 2}\) (recall (1.15), (1.17) and (1.18)). Now, the event \(\{0\in \mathcal {S}_{[0,\varepsilon ]}\}\) is increasing in terms of the random variables \((T_i)_{i\ge 2}\) (use that \(\mathcal {S}_t\) only has positive jumps). Here we say that an event A is increasing when, if A occurs for a realization \((t_i)_{i\ge 2}\) of \((T_i)_{i\ge 2}\), and if \((t_i')_{i\ge 2}\) is coordinatewise larger than \((t_i)_{i\ge 2}\), then A also occurs for \((t_i')_{i\ge 2}\). Clearly, the event \(\{\mathcal {S}_{u}>0\}\) is decreasing (for a definition, change the role of \(t_i\) and \(t'_i\) in the definition of an increasing event), so that the FKG-inequality implies that these events are negatively correlated:
We conclude the proof by noting that \(\mathbb {P}(0\in \mathcal {S}_{[0,\varepsilon ]})=o_\varepsilon (1)\) independently of u. \(\square \)
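As a numerical aside (not part of the formal argument), the negative correlation of an increasing and a decreasing event of independent exponential random variables can be observed by Monte Carlo. The two events below are hypothetical stand-ins, much simpler than \(\{0\in \mathcal {S}_{[0,\varepsilon ]}\}\) and \(\{\mathcal {S}_u>0\}\), chosen only to exhibit the FKG mechanism:

```python
import random

# Monte Carlo illustration of FKG-type negative correlation between an
# increasing event A and a decreasing event B of independent exponentials.
# A and B are hypothetical stand-ins, not the events from the proof above.
random.seed(42)

n_samples = 200_000
count_a = count_b = count_ab = 0
for _ in range(n_samples):
    t2 = random.expovariate(1.0)
    t3 = random.expovariate(1.0)
    a = t2 + t3 > 2.0          # increasing in (t2, t3)
    b = min(t2, t3) < 1.0      # decreasing in (t2, t3)
    count_a += a
    count_b += b
    count_ab += a and b

p_a = count_a / n_samples
p_b = count_b / n_samples
p_ab = count_ab / n_samples
# FKG predicts P(A and B) <= P(A) * P(B).
```

The gap between \(\mathbb {P}(A\cap B)\) and \(\mathbb {P}(A)\mathbb {P}(B)\) is clearly visible already at this sample size.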
The key to our proof of Theorem 1.4 will be to show that \(\mathbb {P}(H_1(0)>u)=\Theta (\mathbb {P}(\mathcal {S}_{u}>0))\), so that Lemma 2.12 and the known asymptotics of \(\mathbb {P}(\mathcal {S}_{u}>0)\) imply that it is unlikely to have an early hit of zero.
No middle ground By (2.4) (recall that \(\phi (u)=\phi (u;\theta )\) with \(\theta =\theta ^{*}_u\)), Lemma 2.12 and Theorem 2.1,
For \(M>0\) arbitrarily fixed, we split
By Proposition 2.8, we can bound
As a result, we arrive at
where \(o_{\scriptscriptstyle M}(1)\) denotes a quantity c(M, u) such that \(\limsup _{M\rightarrow \infty } \limsup _{u\rightarrow \infty } c(M,u)=0\).
We continue to prove that the dominant contribution to the expectation of the right-hand side of (2.39) originates from paths that remain positive until time \(u-t\) for \(t=Tu^{-(\tau -2)}\), with \(T>0\) arbitrarily fixed.
Proposition 2.13
(No middle ground). For every \(u\in [0,\infty )\) and \(\varepsilon , M>0\) fixed,
where we recall that \(o_{\scriptscriptstyle T}(1)\) denotes a quantity c(T, u) such that \(\lim _{T\rightarrow \infty } \limsup _{u\rightarrow \infty }c(T,u)=0\).
We prove Proposition 2.13 in Sect. 3.
By (2.39) and Proposition 2.13,
Since \(\varepsilon \), M and T are arbitrary, it now suffices to identify the asymptotics of the expectation appearing on the right-hand side of (2.41).
2.3 Remaining Positive Near the End
To prove Theorem 1.4, by Proposition 2.3 and Eq. (2.41), it suffices to prove that, with \(\gamma =(\tau -1)/2\),
where \(T>0\) fixed. In the above expectation, we see two terms. The term \({\mathrm {e}}^{-\theta u \mathcal {S}_u}\) forces \(\mathcal {S}_u\) to be small, more precisely, \(\mathcal {S}_u=\Theta (1/u)\) for u large, while the term \(\mathbb {1}_{\{\mathcal {S}_{[u-Tu^{-(\tau -2)},u]}>0\}}\) forces the path to remain positive until time u. We now study these two effects.
We start by highlighting the ideas behind the analysis of the process \((\mathcal {S}_t)_{t\in [u-T u^{-(\tau -2)},u]}\). Comparing Theorem 1.4 to Theorem 2.1, we see that they are identical, except for the precise constant, which is A in Theorem 1.4 and \(D>A\) in Theorem 2.1. This difference is due to the fact that, conditionally on \(\mathcal {S}_u>0\), the process has a probability of not hitting zero in the interval \( [u-T u^{-(\tau -2)},u]\) that is strictly positive and bounded away from zero. In order to analyse this probability, we identify the scaling limit of the process \((u \mathcal {S}_{u-tu^{-(\tau -2)}} - u\mathcal {S}_u)_{t \ge 0}\) as \(u\rightarrow \infty \) conditionally on \(u\mathcal {S}_u=v\), and relate it to a certain Lévy process. The ratio \(A/D\) is closely related to the probability that this limiting process is bounded below by \(-v\), integrated over v. Let us now give the details.
In order to investigate the probability that \(\mathcal {S}_{[u-Tu^{-(\tau -2)},u]}>0\), we proceed as follows. Let
denote the set of indices for which \(T_j\le u\). We condition on the set \(\mathcal {J}(u)\). Note that \(\mathcal {S}_u\) is measurable with respect to \(\mathcal {J}(u)\). We now rewrite \(\mathcal {S}_{u-t}\) in a convenient form. For this, recall (1.21) and write
Thus, with
we have that \(\mathcal {S}_{u-t}>0\) precisely when \(Q_u(t) > -t-(u-t)\mathcal {S}_u\). We rewrite
Note that, for any \(t=o(u)\),
We aim to use dominated convergence on the above integral, and we start by proving pointwise convergence. By Proposition 2.8, \(\widetilde{f}_{\mathcal {S}_u}(v/u)=B u^{-(\tau -3)/2}(1+o(1))\) pointwise in v (in fact, even when \(v=o(u^{(\tau -1)/2})\)). This leads us to study, for all \(v>0\),
We split
where
Thus, \((A_u(t))_{t\in [0,u]}\) is deterministic given \(\mathcal {J}(u)\), while \((B_u(t))_{t\in [0,u]}\) is random given \(\mathcal {J}(u)\). The main result for the near-end regime is the following proposition, which proves that \(g_u(v)\) converges pointwise.
Proposition 2.14
(Weak conditional convergence of time-reversed process). (a) As \(u\rightarrow \infty \), conditionally on \(u\mathcal {S}_{u}=v\),
where \(\kappa \in (0,\infty )\) is given by
(b) As \(u\rightarrow \infty \), conditionally on \(u\mathcal {S}_{u}=v\),
where \((-L_t)_{t\ge 0}\) is a Lévy process with no positive jumps and with Laplace transform
and characteristic measure
Proposition 2.14 is proved in Sect. 5, and determines the precise constant A from (1.25), as we now explain in more detail.
We proceed by investigating some properties of the supremum of the Lévy process from (2.53) that we need later on. Note in particular that the distribution of \(L_s\) in (2.54) does not depend on v. With a slight abuse of notation, also the probability law describing the limiting process \((L_s)_{s\ge 0}\) shall be denoted by \(\mathbb {P}\).
Lemma 2.15
(Supremum of the Lévy process). Let \(I_\infty \equiv \inf _{t \ge 0} (-L_t+\kappa t)\). Then
where \(\mathcal {W}:[0,\infty ) \rightarrow [0,\infty )\) is the unique continuous increasing function that has Laplace transform
where the Laplace exponent \(\psi \) is given by \(\mathbb {E}[{\mathrm {e}}^{a (\kappa t-L_t)}]={\mathrm {e}}^{t\psi (a)}\) and is computed in (2.58) below, while \(\Psi (0)\) is the largest solution of the equation \(\psi (a)=0\), and \(\mathcal {W}(\infty )=1/\psi '(0)=1/\kappa \) is a constant.
Proof
We rewrite (2.54) to see that \(X_s \equiv -L_s+\kappa s\) is a Lévy process with no positive jumps and Laplace exponent
with
as defined in [5, Sect. VII.1]. Indeed, recall from [5, Sect. VII.1] that \(\mathbb {E}[{\mathrm {e}}^{aX_s}] = {\mathrm {e}}^{s \psi (a)}\) and note that our \(\beta '\) corresponds to a in [5]. Also note from (2.52) that \(\kappa >0\). Thus \(\psi '(0+) = \kappa >0\) and [5, Corollary 2(ii) in Sect. VII.1] yields that \(X_s\) drifts to \(\infty \) (for a definition, see [5, Theorem 12(ii) in Sect. VI.3]). This in turn implies (see [5, Proof of Theorem 8, in Sect. VII.2])
where \(\mathcal {W}\) is given in the statement of [5, Theorem 8, in Sect. VII.2]. For the definition of \(\Psi \) see before [5, Theorem 1 of Sect. VII.1]. Also note from the second equation of the proof of [5, Proof of Theorem 8, in Sect. VII.2] that \(\mathcal {W}(\infty )>0\). To see that \(\mathcal {W}(\infty )=1/\psi '(0)\), note that if \(a\downarrow 0\),
Now, \(\psi (0)=0\), so that \(1/\psi (a)=1/(a\psi '(0))(1+o(1))\) as \(a\downarrow 0\), which identifies \(\mathcal {W}(\infty )=1/\psi '(0)=1/\kappa \). \(\square \)
By Proposition 2.14 and the continuity of \(\mathcal {W}\) in Lemma 2.15, with \(\mathcal {M}_T=\sup _{0\le s\le T} (L_s-\kappa s)\), we obtain, for each \(v\ge 0\) and for \(t=Tu^{-(\tau -2)}\), as \(u \rightarrow \infty \),
Further, as \(T\rightarrow \infty \),
Now we are ready to complete the proofs of our main results.
2.4 Completion of the Proofs
Completion of the Proof of Theorem 1.4 We start by completing the proof of Theorem 1.4. Recall that it remains to prove (2.42) with \(\gamma =(\tau -1)/2\). By (2.47) and (2.48), we need to compute
where \(t=Tu^{-(\tau -2)}\). A similar problem was encountered in [1, Proof of Theorem 1.1], which is restated here as Theorem 2.1, apart from the fact that there the function \(g_{u,t}(v)\) was absent.
We wish to use bounded convergence. For this, we note that \(u^{(\tau -3)/2}\widetilde{f}_{\mathcal {S}_u}(v/u)\rightarrow B\) by Proposition 2.8 for each v (in fact, for all \(v=o(u)\)), while, by (2.62)–(2.63), \(g_{u,t}(v)\rightarrow g_T(v)\), which, in turn, converges to g(v) as \(T\rightarrow \infty \). Further, since \(g_{u,t}(v)\le 1\) and \(u^{(\tau -3)/2}\widetilde{f}_{\mathcal {S}_u}(v/u)\) is uniformly bounded (see Proposition 2.8), the integrand \({\mathrm {e}}^{-\theta v}g_{u,t}(v)\big [u^{(\tau -3)/2}\widetilde{f}_{\mathcal {S}_u}(v/u)\big ]\) is uniformly bounded by a constant. Thus, by the Bounded Convergence Theorem,
This identifies (recall (2.42), (2.56), (2.57) and (2.62))
Recall that D is the constant appearing in Theorem 2.1. Since \(D=B/\theta \) by [1, (7.4)] and \(\mathbb {P}\big (\mathcal {M}\le v\big )<1\) for every v, we also immediately obtain that \(A\in (0,D)\). This completes the proof of Theorem 1.4. \(\square \)
Path properties: Proof of Theorem 1.5 We bound, using that \(\{H_1(0)>u\}\subseteq \{\mathcal {S}_u>0\}\),
By Theorems 2.1 and 1.4, the ratio of probabilities converges to \(D/A\in (0,\infty )\), while, by Theorem 2.2, the conditional probability converges to 0. This completes the proof of Theorem 1.5. \(\square \)
Completion of the Proof of the Main Theorem We finally complete the proof of the scaling of the critical clusters in the Main Theorem using Theorem 1.4 and recalling (1.22). For this, we go back to the random graph setting. We start with some introductory remarks.
The process \((\mathcal {S}_t)_{t\ge 0}\) in (1.21) arises when exploring a cluster in the Norros–Reittu random graph with weights \(\varvec{w}(\lambda )\) defined in (1.6) and (1.12), as described informally in Sect. 1.2. Recall Theorem 1.3. Here \(\mathcal {S}_t\) denotes the scaling limit of \(n^{-1/(\tau -1)}=n^{-\alpha }\) times the number of vertices found at time \(tn^{(\tau -2)/(\tau -1)}=tn^{\rho }\).
The key idea is that each time that a vertex, say \(j\in [n]\), is being explored, we have a chance \((1+\lambda n^{-(\tau -3)/(\tau -1)})w_iw_j/\ell _n\) that the edge to the vertex i with the \(i\hbox {th}\) largest weight is present. As it turns out (see e.g., [6, Lemma 1.3]), the vertices are found in a size-biased reordered way, meaning that the \(k\hbox {th}\) vertex found is \(v_{\scriptscriptstyle (k)}\), where (here the factor \((1+\lambda n^{-(\tau -3)/(\tau -1)})\) cancels)
Thus, the average weight of the \(k\hbox {th}\) vertex found is
which informally corresponds to the graph being close to critical (as made more precise in [7]). By (2.68), the probability that at the \(k\hbox {th}\) exploration we find the vertex i with the \(i\hbox {th}\) largest weight is close to \(w_i/\ell _n\). By (1.10) and (1.6),
so the probability of finding i is close to \((c_{\scriptscriptstyle F}/i)^{1/(\tau -1)} n^{(2-\tau )/(\tau -1)}/\mathbb {E}[D]\approx a i^{-1/(\tau -1)}n^{-\rho }\) by the definitions below (1.15) and (1.13). If this occurs, then the number of vertices added to the exploration process is close to \(w_i(1+\lambda n^{-(\tau -3)/(\tau -1)})\approx (c_{\scriptscriptstyle F}/i)^{1/(\tau -1)}n^{1/(\tau -1)}=bi^{-1/(\tau -1)} n^{\alpha }.\) Further, the probability that vertex i is not found in the time interval \([0,t]n^{\rho }\) is close to \({\mathrm {e}}^{-t a i^{-1/(\tau -1)}}=\mathbb {P}(\mathcal {I}_i(t)=0)\). It is not hard to see that these events are weakly dependent, so that the scaling limits of the times that the high-weight vertices are found are close to independent exponential random variables with rate \(a i^{-1/(\tau -1)}\). This explains the random variables arising in (1.21). The restriction to \(i\ge 2\) in (1.21) arises since we explore the cluster of vertex 1. The cluster is fully explored when there are no more active vertices waiting to be explored. This corresponds to \(\mathcal {S}_t=0\) for the first time, which is \(H_1^{a}(0)\) and explains the result in Theorem 1.3. Recall the informal description in Sect. 1.2 here as well.
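The exponential races behind the size-biased order can be illustrated numerically: if vertex i carries an independent exponential clock with rate proportional to its weight, then the first vertex found is vertex i with probability proportional to its weight. The following is a minimal sketch with a hypothetical finite truncation to \(n=50\) vertices and \(\tau =3.5\); the weights \(w_i=i^{-1/(\tau -1)}\) mimic the power-law weights of the text, up to constants:

```python
import random

# Exponential races behind the size-biased exploration order: vertex i is
# found at time E_i / w_i with E_i standard exponential, so the first
# vertex found is i with probability w_i / sum_j w_j.  The truncation to
# n = 50 vertices and tau = 3.5 are hypothetical illustrative choices.
random.seed(7)

tau = 3.5
n = 50
w = [i ** (-1.0 / (tau - 1.0)) for i in range(1, n + 1)]

runs = 50_000
first_is_largest = 0
for _ in range(runs):
    times = [random.expovariate(wi) for wi in w]  # clock of vertex i has rate w_i
    if times.index(min(times)) == 0:
        first_is_largest += 1

p_hat = first_is_largest / runs
p_exact = w[0] / sum(w)  # largest-weight vertex wins the race with this probability
```

The empirical frequency with which the largest-weight vertex is found first matches \(w_1/\sum _j w_j\), the discrete analogue of the size-biased probabilities in (2.68).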
Next, we claim that when a particularly large cluster is found, then, since the weight \(w_1\) is the largest of all weights, the maximal cluster is whp the cluster of vertex 1. This explains why the asymptotics in the Main Theorem for the maximal cluster is identical to the asymptotics in Theorem 1.4 for the cluster of vertex 1. We next make this heuristic precise by showing that it is unlikely for an unusually large cluster to be found that does not contain vertex 1, for which we introduce the exploration process of the cluster of vertex i for a general \(i\ge 1\).
Denote
where \(\tilde{\beta }_i=(\lambda +\zeta )/(ab)-c_i^2\) (see [7, Remark 3.9] and recall \(a',b',c'\) from above (1.21)). The intuition for the above formula is that
where we slightly abuse notation to now set \(\mathcal {I}_i(0)=1\) for the process \((\mathcal {S}_t^{\scriptscriptstyle (i)})_{t\ge 0}\) since vertex i is almost surely in the cluster of vertex i. Since \((\mathcal {S}_t^{\scriptscriptstyle (i)})_{t\ge 0}\) describes the scaling limit of the exploration process of the cluster of vertex \(i\ge 1\), while \(\mathcal {I}_j(t)\) has the interpretation as the indicator that vertex j is found in the exploration before time t, it is reasonable to set \(\mathcal {I}_i(0)=1\) for \((\mathcal {S}_t^{\scriptscriptstyle (i)})_{t\ge 0}\). Again recall the informal description of the exploration process in Sect. 1.2.
We continue to show that it is highly unlikely that the cluster of vertex i is large, while vertex 1 is not in it. For this, we define
Then, \(H_1(0)=H^{\scriptscriptstyle (1)}(0)\) and \(H^{\scriptscriptstyle (i)}(0)\) denotes (an appropriate multiple of) the scaling limit of the cluster of vertex i: we know from [7, (3.78)] and the scaling explained around (1.21) that \(n^{-\rho }|{\mathcal {C}}(i)|{\mathop {\longrightarrow }\limits ^{d}}a \cdot H^{\scriptscriptstyle (i)}(0)\) for each \(i\ge 1\) with \(\rho =(\tau -2)/(\tau -1)\) (cf. (1.13)), where \({\mathcal {C}}(i)\) denotes the connected component to which vertex i belongs. Further, let \({\mathcal {C}}_{\scriptscriptstyle \le }(i)\) be the set \({\mathcal {C}}(i)\) if none of the vertices \(j\in [i-1]=\{1,\ldots ,i-1\}\) belongs to \({\mathcal {C}}(i)\), and the empty set \(\varnothing \) otherwise. Finally, denote
Then, by [7, (3.79)], \(n^{-\rho }|{\mathcal {C}}_{\scriptscriptstyle \le }(i)| {\mathop {\longrightarrow }\limits ^{d}}a \cdot H_i(0)\). This provides us with the appropriate background to complete the proof of the Main Theorem.
We start with the lower bound. By construction, \(\gamma _1(\lambda )\ge a \cdot H_1(0)\) (see [7, Theorems 1.1 and 2.1] and recall that \({\mathcal {C}}_{\scriptscriptstyle (i)}\) denotes the \(i\hbox {th}\)-largest connected component). Therefore,
and thus the lower bound follows from Theorem 1.4.
For the upper bound, we use that (cf. [7, Theorem 1.1])
Here we have used the fact that there are with high probability only finitely many clusters that are larger than \(\varepsilon n^{\rho }\) (as proved in [7, Theorem 1.6]).
By the weak convergence of \( n^{-\rho }|{\mathcal {C}}_{\scriptscriptstyle \le }(i)|\), it holds that \(\lim _{n\rightarrow \infty } \mathbb {P}(n^{-\rho }|{\mathcal {C}}_{\scriptscriptstyle \le }(i)|\ge au)=\mathbb {P}(H_i(0)>u)\) for all \(i\ge 1\), so that we arrive at
The first term is the main term, and we now prove that \(\sum _{i\ge 2} \mathbb {P}(H_i(0)>u)=o(\mathbb {P}(H_1(0)>u))\).
For this, we note that
We can rewrite, on the event \(\{\mathcal {I}_j(u)=0 \, \forall j\in [i-1]\}\), and using that \(c_1\ge c_i\) for every \(i\ge 1\),
Therefore,
where in the last equality, we use that, conditionally on \(\mathcal {I}_j(u)=0\) for all \(j\in [i]\setminus \{1\}\), the equality
holds.
The event \(\big \{\mathcal {I}_j(u)=0\ \forall j\in [i]\setminus \{1\}\big \}\) is decreasing (recall the notions used in the proof of Lemma 2.12) in the random variables \((T_i)_{i\ge 2}\), while the event \(\{\mathcal {S}_{[0,u]}^{\scriptscriptstyle (1)}>0\}\) is increasing. Thus, by the FKG-inequality,
We can identify
Combining (2.77), (2.81)–(2.82) we arrive at
Since \(c_j=j^{-\alpha }\) with \(\alpha \in (1/3,1/2)\), \(\sum _{j=1}^{i-1} c_j\ge (i-1) c_{i-1}=(i-1)^{1-\alpha }\). Therefore,
This completes the proof of the Main Theorem. \(\square \)
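The elementary bound \(\sum _{j=1}^{i-1} c_j\ge (i-1)^{1-\alpha }\) used in the final step holds because each of the \(i-1\) summands is at least \(c_{i-1}=(i-1)^{-\alpha }\); a quick numerical check, with the hypothetical value \(\alpha =0.4\in (1/3,1/2)\):

```python
# Check of sum_{j=1}^{i-1} j^{-alpha} >= (i-1)^{1-alpha}: each of the
# i-1 summands j^{-alpha} is at least (i-1)^{-alpha}.  The value
# alpha = 0.4 is a hypothetical choice in the allowed range (1/3, 1/2).
alpha = 0.4

def bound_holds(i):
    lhs = sum(j ** (-alpha) for j in range(1, i))
    return lhs >= (i - 1) ** (1.0 - alpha)

all_hold = all(bound_holds(i) for i in range(2, 2001))
```

Equality holds only for \(i=2\); for larger i the inequality is strict.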
3 No Middle Ground: Proof of Proposition 2.13
In this section, we show that the probability to hit zero in the time interval \([\varepsilon ,u-Tu^{-(\tau -2)}]\), where T is a constant, becomes negligible as \(T\rightarrow \infty \).
The strategy of proof is as follows. We start in Proposition 3.2 by investigating the value of \(\mathcal {S}_t\) at some discrete times \((t_k)_{k\ge 1}\) in [0, u] and show that with high probability \(\mathcal {S}_t\) does not deviate far from its mean. Next, in Proposition 3.3, we show that it is unlikely for the process \((\mathcal {S}_t)_{t\ge 0}\) to make a substantial deviation in the interval \([t_k,t_{k+1}]\) from its value in \(t_k\).
We start with a preparatory lemma that will allow us to give bounds on the asymptotic parameters appearing in the upcoming proofs:
Lemma 3.1
(Asymptotics of parameters). There exists \(K\ge 1\) such that
and, for all \(|\lambda | \le \delta u\) with \(\delta >0\) sufficiently small, there exists \(K>0\) such that
Proof
We use the second moment method. With Lemma 2.9 we compute that
Split the sum into i with \(c_i u \le 1\) and \(c_i u>1\). For the first, we bound \(1-{\mathrm {e}}^{-c_i u}\le O(1) c_iu\), for the latter, we bound \(1-{\mathrm {e}}^{-c_i u}\le 1\), to obtain
the latter by an explicit computation using that \(c_i=i^{-1/(\tau -1)}\).
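As a numerical sanity check of this computation (outside the proof), one can verify, for a hypothetical \(\tau =3.5\) and with \(c_i=i^{-1/(\tau -1)}\), that \(\sum _i c_i^2(1-{\mathrm {e}}^{-c_iu})\) indeed scales like \(u^{\tau -3}\), which is the order appearing in (3.1):

```python
import math

# Sanity check (hypothetical tau = 3.5) that, with c_i = i^{-1/(tau-1)},
# sum_i c_i^2 (1 - e^{-c_i u}) grows like u^{tau-3}.  Truncating the sum
# at N terms biases it slightly, so only a rough exponent match is expected.
tau = 3.5
alpha = 1.0 / (tau - 1.0)
N = 500_000

def second_moment_proxy(u):
    return sum(i ** (-2.0 * alpha) * (1.0 - math.exp(-u * i ** (-alpha)))
               for i in range(2, N + 1))

s4 = second_moment_proxy(4.0)
s16 = second_moment_proxy(16.0)
est_exponent = math.log(s16 / s4) / math.log(4.0)
# est_exponent should be roughly tau - 3 = 0.5
```

The estimated exponent is only approximate at these moderate values of u, but is clearly separated from both 0 and 1.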
Further, with Corollary 2.11
The Chebyshev inequality now proves (3.1).
For (3.2), we again compute
Thus, for \(|\lambda |\le \delta u\) and again using Corollary 2.11, we obtain
Further,
Again the claim follows from the Chebyshev inequality. \(\square \)
We continue to show that the probability for \(\mathcal {S}_t\) to deviate far from its mean at some discrete times in the time interval \([\varepsilon ,u-Tu^{-(\tau -2)}]\) is small when T is large enough:
Proposition 3.2
(Probability to deviate far from mean at discrete times). Let \(\eta >0\) and \(\delta _u=u^{-(\tau -2)}\). For any \(\varepsilon >0\) and \(M>0\),
where we recall the definition of \(o_{\scriptscriptstyle T}(1)\) from Proposition 2.13.
Proof
The proof is split into the cases \(t\in [\varepsilon ,u/2]\), \(t\in [u/2, u-\varepsilon ]\) and \(t\in [u-\varepsilon , u-Tu^{-(\tau -2)}]\), where \(t=k\delta _u\) and \(\varepsilon >0\) is some arbitrary constant.
Proof for \(t\in [\varepsilon ,u/2]\). We start by proving the proposition for \(t\in [\varepsilon ,u/2]\), for which we use Proposition 2.7 with \(\lambda _1=\pm 1\) and \(\lambda _2=0\) to see that, for any \(x>0\),
where we note that the \({\mathrm {e}}^{\Theta }\) error term can be put inside the constant c since \(|\Theta |\le o_u(1)+O(t^{3(3-\tau )/2})\) and \(t\ge \varepsilon \) is strictly positive. By (2.32) in Lemma 2.10, \(I_{\scriptscriptstyle V}(p) \le c p^{\tau -3}\) for all \(p\in [0,1/2]\). Applying this to \(p=t/u\) yields
By Corollary 2.5(a), we have \(\widetilde{\mathbb {E}}[\mathcal {S}_t]/(tu^{\tau -3}) \in [\underline{c},\overline{c}]\) for \(t\in [\varepsilon ,u/2]\) and some constants \(\underline{c},\overline{c}>0\). Therefore, taking \(x=a \eta t^{\frac{1}{2} (5-\tau )}u^{\tau -3}\) for some \(a>0\) chosen appropriately,
We take \(t=k\delta _u\) for \(k\delta _u \in [\varepsilon ,u/2]\), so that there are at most \(u/\delta _u=u^{\tau -1}\) possible values of k. Thus,
This proves the proposition for \(k\delta _u \in [\varepsilon ,u/2]\).
Proof for \(t\in [u/2,u-\varepsilon ]\). We continue by proving the proposition for \(t\in [u/2,u-\varepsilon ]\), for which we again use Proposition 2.7 with \(\lambda _1=\pm 1\) and \(\lambda _2=0\) to see that, for any \(x>0\),
By Lemma 2.10 and the fact that \(I_{\scriptscriptstyle V}(p)>0\) for every \(p\in (0,1)\), we obtain that there exists a constant \(c>0\) such that \(I_{\scriptscriptstyle V}(p) \ge c\) for all \(p\in [1/2, 1-\varepsilon ]\). Applying this to \(p=t/u\) yields
By Lemma 2.4(d) and Corollary 2.5(b), we have \(\widetilde{\mathbb {E}}[\mathcal {S}_t] u^{-(\tau -2)} \in [\underline{c},\overline{c}]\) for all \(t\in [u/2,u-\varepsilon ]\) and some constants \(\underline{c}=\underline{c}(\varepsilon ),\overline{c}=\overline{c}(\varepsilon )>0\). Therefore, taking \(x=a \eta u^{(\tau -1)/2}\) for some \(a>0\) chosen appropriately,
We take \(t=t_k=k\delta _u\) for \(k\delta _u \in [u/2, u-\varepsilon ]\), so that there are at most \(u/\delta _u=u^{\tau -1}\) possible values of k. Thus,
This proves the proposition for \(k\delta _u \in [u/2,u-\varepsilon ]\).
Proof for \(t\in [u-\varepsilon ,u-Tu^{-(\tau -2)}]\): Rewrite. The proof for \(t\in [u-\varepsilon ,u-Tu^{-(\tau -2)}]\) is the hardest, and is split into three steps. We start by rewriting the event of interest. We define \(s=u-t\) and investigate \(\mathcal {S}_{u-s}\) in what follows, so that now \(s\in [Tu^{-(\tau -2)},\varepsilon ]\).
Recall the definition of \(Q_u(s)\) in (2.45),
so that \(|\mathcal {S}_{u-s}-\widetilde{\mathbb {E}}[\mathcal {S}_{u-s}]|>\eta \widetilde{\mathbb {E}}[\mathcal {S}_{u-s}]\) precisely when
When \(\mathcal {S}_u\in [0,M/u]\) and using that \(u\widetilde{\mathbb {E}}[\mathcal {S}_u]=o(1)\) by Lemma 2.4(d), we therefore obtain that if (3.19) holds, then
By Lemma 2.4(d) and Corollary 2.5(b), we have that \(\widetilde{\mathbb {E}}[\mathcal {S}_{u-s}]\ge c su^{\tau -3}\) for some \(c>0\). Therefore, \(\eta u \widetilde{\mathbb {E}}[\mathcal {S}_{u-s}]\ge c \eta T\), so that, by taking \(T=T(M)\) sufficiently large, we obtain that
We condition on \(\mathcal {J}(u)\) from (2.43), and note that \(\mathcal {S}_u\) is measurable with respect to \(\mathcal {J}(u)\), to obtain
This is the starting point of our analysis. We split, writing \(\eta '=\eta /2\),
We conclude using the union bound that
We will bound both contributions separately, and start by setting the stage. We compute that
where we abbreviate
It turns out that both contributions in (3.24) can be expressed in terms of \(p_{i,u}(s)\), and we continue our analysis by studying this quantity in more detail.
Proof for \(t\in [u-\varepsilon ,u-Tu^{-(\tau -2)}]\): Analysis of \(p_{i,u}(s)\). We next analyse the conditional probability \(p_{i,u}(s)\). We compute (recall (1.23), (2.29) and (2.43))
Using the distribution of \(T_i\) formulated in Lemma 2.9, we obtain, for any \(s\in [0,u]\),
so that
We start by bounding \(p_{i,u}(s)\), for \(s\in [0,\varepsilon ]\), by
Moreover, for u sufficiently large,
Proof for \(t\in [u-\varepsilon ,u-Tu^{-(\tau -2)}]\): Completion first term (3.24). For the first term in (3.24), we use Markov’s inequality in the form \(\mathbb {P}(|X-\mathbb {E}[X]|>a)\le a^{-4} \mathbb {E}[(X-\mathbb {E}[X])^4]\) to obtain
and recall from (3.25) that
The summands are conditionally independent given \(\mathcal {J}(u)\) and identically 0 when \(\mathcal {I}_i(u)=0\), so that
By the second bound in (3.30) and Corollary 2.11, the first term is at most
By (3.1) in Lemma 3.1, we may assume that \(\sum _{i=2}^{\infty } c_i^2 \mathcal {I}_i(u)\le K u^{\tau -3}\), since the complement has a probability that is \(o(u^{-(\tau -1)/2})\). Then, in a similar way, using the first bound in (3.30), the second term is at most
As a result,
Since \(s\ge Tu^{-(\tau -2)}\), this can be simplified to
We conclude using (3.32) that, on the event that \(\{\sum _{i=2}^{\infty } c_i^2 \mathcal {I}_i(u)\le K u^{\tau -3}\},\)
so that, also using that \(\widetilde{\mathbb {P}}(\mathcal {S}_u\in [0,M/u]) = O(u^{-(\tau -1)/2})\) by Proposition 2.8,
This bound is true for any \(s\in [Tu^{-(\tau -2)},\varepsilon ]\). Taking \(s=s_k=k u^{-(\tau -2)}\) and summing out over \(k\ge T\) leads to
when we take \(T=T(\eta )\) sufficiently large, as required.
Proof for \(t\in [u-\varepsilon ,u-Tu^{-(\tau -2)}]\): Completion of the second term in (3.24). For the second term in (3.24), we need to bound
We compute using (3.18)
while
As a result, using (3.26),
where with (3.29)
As a result,
For both terms, we use the Chebyshev inequality.
For X, as \(\widetilde{\mathbb {E}}[X]=0\), this leads to
We use Lemma 2.9 to see that \(\widetilde{\mathbb {P}}(i\in \mathcal {J}(u))=\frac{1-{\mathrm {e}}^{-c_i u}}{1-{\mathrm {e}}^{-c_iu} +{\mathrm {e}}^{-c_iu(1+\theta )}}\), so that
since \(1-{\mathrm {e}}^{-x}+{\mathrm {e}}^{-x(1+\theta )}\) is uniformly bounded from below away from 0 for all \(x\ge 0\). We use this together with Corollary 2.11 to compute that
Therefore,
as required below.
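The uniform lower bound on \(1-{\mathrm {e}}^{-x}+{\mathrm {e}}^{-x(1+\theta )}\) invoked above can be verified in two regimes; a sketch, with \(f(x):=1-{\mathrm {e}}^{-x}+{\mathrm {e}}^{-x(1+\theta )}\) and \(\theta >0\) fixed:

```latex
% Both summands 1-\mathrm{e}^{-x} and \mathrm{e}^{-x(1+\theta)} are non-negative, so
f(x) \;\ge\; \max\bigl(1-\mathrm{e}^{-x},\ \mathrm{e}^{-x(1+\theta)}\bigr)
     \;\ge\;
     \begin{cases}
       \mathrm{e}^{-(1+\theta)}, & 0\le x\le 1,\\
       1-\mathrm{e}^{-1},        & x\ge 1,
     \end{cases}
% whence \inf_{x\ge 0} f(x) \ge \min\bigl(\mathrm{e}^{-(1+\theta)},\,1-\mathrm{e}^{-1}\bigr)>0.
```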
For the term involving Y(s), we start by using the union bound to obtain
Then, by the Chebyshev inequality and as \(\widetilde{\mathbb {E}}[Y(s_k)]=0\),
where, using (3.49), \({\mathrm {e}}^{c_is}-1-c_is=O(s^2c_i^2)\) and \({\mathrm {e}}^{c_iu}-1\ge c_iu\),
where we used Corollary 2.11 in the last equality. Substituting this into (3.52) and (3.53), we arrive at
since \(\tau \in (3,4)\). Combining (3.51) and (3.55) in (3.47) completes the proof. \(\square \)
We now know that, with high probability, the process does not deviate much from its mean when observed at the discrete times \(k\delta _u \in [\varepsilon ,u-T\delta _u]\). We next show that this actually holds with high probability on the whole interval \([\varepsilon ,u-T\delta _u]\). We complete the preparations for the proof of Proposition 2.13 by proving that it is unlikely for the process to deviate far from its mean at any time \(t\in [\varepsilon , u-T\delta _u]\):
Proposition 3.3
(Probability to deviate far from mean at some time). For every \(\eta >0\) and \(M>0\),
Proof
Fix \(T>0\) and recall that \(\delta _u=u^{-(\tau -2)}\). Let
where we take \(\lambda =\delta u\) with \(\delta >0\) sufficiently small and \(K \ge 1\) as in Lemma 3.1. We first give a bound on \(\widetilde{\mathbb {P}}(E_u^c \cap \{\mathcal {S}_u\in [0,M/u]\})\). We apply (3.1) in Lemma 3.1 to obtain that
which is contained in the error term in (3.56). Further, by (3.2) in Lemma 3.1
Combined with Proposition 3.2, this ensures that
As a result, we are left to control the fluctuations of the process on any interval \(I_k=[k\delta _u,(k+1)\delta _u]\). We use Boole’s inequality to bound
Let \(t_k=k\delta _u\), so that \(I_k=[t_{k},t_{k+1}]\). We split the analysis into four cases, depending on whether \(t_k\le u/2\) or not, and on whether \(\mathcal {S}_t-\widetilde{\mathbb {E}}[\mathcal {S}_t]\ge 10\eta \widetilde{\mathbb {E}}[\mathcal {S}_t]\) or \(\mathcal {S}_t-\widetilde{\mathbb {E}}[\mathcal {S}_t]\le -10\eta \widetilde{\mathbb {E}}[\mathcal {S}_t]\), which we refer to as ‘large upper’ and ‘large lower’ deviations, respectively.
In all of the four cases, we take advantage of the following observations concerning the law of our indicator processes under \(\widetilde{\mathbb {P}}\). By (1.21),
For fixed \(k\), respectively fixed \(t_k\), and \(t \ge 0\), let \(\Delta _i^k(t) = \mathcal {I}_i(t+t_k)-\mathcal {I}_i(t_k) = [1-\mathcal {I}_i(t_k)] [\mathcal {I}_i(t+t_k)-\mathcal {I}_i(t_k) ] \in \{0,1\}\). By (2.30),
and
As a result,
Let \(( \mathcal {T}_i ^k)_{i \ge 2}\) be a sequence of independent exponential random variables with mean \(1/c_i\) (under \(\widetilde{\mathbb {P}}\)) that are independent of \(\mathcal {F}_{t_k}\), the \(\sigma \)-algebra generated by \((\mathcal {S}_t)_{t\in [0,t_k]}\). Let \((B_i^k(t))_{0 \le t \le \delta _u}=(B_i^k(t;t_k,u))_{0 \le t \le \delta _u}\) be a sequence of processes that are independent in i (and also independent of all randomness so far), non-decreasing, taking values in \(\{0,1\}\) and with success probability at time \(t \in [0,\delta _u]\) of
Thus, conditionally on \(\mathcal {I}_i(t_k)=0\),
We can therefore without loss of generality assume that under \(\widetilde{\mathbb {P}}\), the sequence of processes \((\mathcal {I}_i(t): t \ge 0)_{i \ge 2}\) in (3.62) is constructed inductively as follows. Recall that \(t_{k+1}=t_k+\delta _u\). Conditional on \((\mathcal {I}_i(t_k))_{i \ge 2}\), for \(0 \le t \le \delta _u\),
For lower deviations (see Parts 2 and 4 below), we will use as a lower bound in (3.62)
For upper deviations (see Parts 1 and 3 below), we require an upper bound instead. In a first step, we replace \(\Delta _i^k(t)\) by \(\mathbb {1}_{\{\mathcal {T}_i^k \le t\}}\) and show that the resulting error is sufficiently small when \(t_k \le u/2\) (see Part 1). Indeed, let
and define, for \(t\in I_k=[0,\delta _u]\),
Then, we obtain that
By Lemma 3.1 and for \(t\le \delta _u\), the term \(\mathcal {B}_{t_k,t}^-\) is, with probability at least \(1-Cu^{-(\tau -1)}\), bounded by \(\delta _u K u^{\tau -3}=K/u\), which is \(o(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}])\) for \(t_k\in [\varepsilon ,u/2]\) and \(o_{\scriptscriptstyle T}(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}])\) for \(t_k\in [u/2,u-Tu^{-(\tau -2)}]\), by Corollary 2.5(a), (b) and Lemma 2.4(d). Further, in Part 5 below, we will prove that
We first complete the proof subject to (3.73), which is proved in Part 5. There, we will rely on the sharp bounds on the middle term in (3.72) that are obtained in what follows by careful domination arguments in terms of Lévy processes.
Part 1: The case \(t_k\le u/2\) and a large upper deviation. We start by bounding the probability that there exists a \(t\in I_k=[t_{k},t_{k+1}]\), \(\varepsilon \le t_k \le u/2\) such that \(\mathcal {S}_t-\widetilde{\mathbb {E}}[\mathcal {S}_t] \ge 10\eta \widetilde{\mathbb {E}}[\mathcal {S}_t]\). Using that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t}]=\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}](1+o(1))\) for any \(t\in I_k\) by Corollary 2.5(c), we bound
By (3.72),
which can be stochastically dominated by the process \(\tilde{\beta }t + \mathcal {R}_t + \mathcal {B}_{t_k,t}^+\) with \(\mathcal {R}_t \equiv \sum _{i=2}^{\infty } c_i [N_i(t)-c_i t],\) where \((N_i(t))_{t\ge 0}\) is a Poisson process with rate \(c_i\). As a result, with (3.73),
Since \((\mathcal {R}_t)_{t\ge 0}\) is a finite-variance Lévy process, it is well-concentrated. In more detail, for \(\lambda \in {\mathbb {R}}\), we define the exponential martingale
Then, for every \(\lambda \ge 0\), using that \(\phi (\lambda )\ge 0\) and by Doob’s inequality,
We apply this inequality to \(x=\frac{\eta }{2}\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\), \(t=\delta _u\) and \(\lambda =1\), and Corollary 2.5(a) implies that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\ge c t_k u^{\tau -3} =ck/u\) for \(t_k=k\delta _u\in [\varepsilon ,u/2]\). Therefore (using \(t_k=k\delta _u\))
which is small even when summed out over k as above.
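The last two displays combine the Chernoff method with Doob's maximal inequality; for the reader's convenience, here is the standard sketch, under the normalization \(\widetilde{\mathbb {E}}[{\mathrm {e}}^{\lambda \mathcal {R}_t}]={\mathrm {e}}^{t\phi (\lambda )}\) that we take to be implicit in the definition of the exponential martingale above:

```latex
% M_s := \exp\{\lambda\mathcal{R}_s - s\,\phi(\lambda)\} is a non-negative martingale, M_0=1.
% On \{\sup_{s\le t}\mathcal{R}_s\ge x\} we have \sup_{s\le t}M_s \ge \mathrm{e}^{\lambda x-t\phi(\lambda)},
% using \phi(\lambda)\ge 0, so that, by Doob's maximal inequality,
\widetilde{\mathbb{P}}\Bigl(\sup_{s\le t}\mathcal{R}_s\ge x\Bigr)
  \;\le\; \widetilde{\mathbb{P}}\Bigl(\sup_{s\le t}M_s\ge \mathrm{e}^{\lambda x-t\phi(\lambda)}\Bigr)
  \;\le\; \mathrm{e}^{-\lambda x+t\phi(\lambda)}\,\widetilde{\mathbb{E}}[M_t]
  \;=\; \mathrm{e}^{-\lambda x+t\phi(\lambda)}.
```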
Part 2: The case \(t_k\le u/2\) and a large lower deviation. We continue by bounding the probability that there exists a \(t\le \delta _u\) and \(\varepsilon \le t_k \le u/2\) such that \(\mathcal {S}_{t_k+t}-\widetilde{\mathbb {E}}[\mathcal {S}_{t_k+t}]\le -10\eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_k+t}]\), which is slightly more involved. Here we can use that \(\mathcal {B}_{t_k,t}^+\ge 0\). Again using that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k+t}]=\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}](1+o(1))\) for any \(t\le \delta _u\) by Corollary 2.5(c), we bound
Further, using (3.62) and (3.69), as well as the fact that \(\mathbb {1}_{\{\mathcal{T}_i^k\le t\}}=\mathbb {1}_{\{N_i(t)\ge 1\}}\), we obtain
where we set
and
Thus, conditionally on \((\mathcal {I}_i(t_k))_{i\ge 2}\), the process \((\mathcal {R}_t')_{t\ge 0}\) is a Lévy process similar to the one investigated in Part 1 above, and \(\mathcal {D}_t\) is the contribution due to those i for which \(N_i(t)\ge 2\). We deal with the two terms one by one (recalling that we have already dealt with \(\mathcal {B}_{t_k,t}^-\) above (3.73)), starting with \((\mathcal {R}_t')_{t\ge 0}\). As in the previous part, we show that
is small enough even when summed out over k such that \(t_k\in [\varepsilon ,u/2]\). This again follows by Doob’s inequality and the bound that for any \(\lambda \ge 0\), and with \(\mathcal {F}_{t_k}\) the \(\sigma \)-algebra generated by \((\mathcal {S}_t)_{t\in [0,t_k]}\),
where
We compute that
We now follow the same steps as in Part 1, using that \(0 \le \phi '(-2) \le \text{ const. }\) We continue to bound \(\mathcal {D}_t\) by bounding
since the process \(t\mapsto \mathcal {D}_t\) is non-decreasing. By the Markov inequality,
Applying this to \(x=2\eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\) with \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}] \ge c t_k u^{\tau -3}\) and \(t=\delta _u=u^{-(\tau -2)}\) yields
When summing this out over k such that \(t_k=k\delta _u\in [\varepsilon ,u/2]\) we obtain a bound \(c(\log {u}) u^{-(2\tau -5)}=o(u^{-(\tau -1)/2}),\) since \((2\tau -5)>(\tau -1)/2\) precisely when \(\tau >3\). This proves that
as required. Collecting terms completes Part 2.
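The exponent comparison \((2\tau -5)>(\tau -1)/2\) for \(\tau >3\) used when summing over k above is elementary algebra; the following sketch double-checks it numerically (the grid of \(\tau \)-values is illustrative only):

```python
# Verify the exponent comparison (2*tau - 5) > (tau - 1)/2 for tau in (3, 4).
# Algebraically: 2*(2*tau - 5) > tau - 1  <=>  3*tau > 9  <=>  tau > 3,
# and the gap equals 3*(tau - 3)/2, vanishing exactly at tau = 3.

def exponent_gap(tau: float) -> float:
    """Return (2*tau - 5) - (tau - 1)/2, which is positive iff tau > 3."""
    return (2 * tau - 5) - (tau - 1) / 2

taus = [3 + k / 100 for k in range(1, 100)]  # grid inside (3, 4)
assert all(exponent_gap(t) > 0 for t in taus)
assert abs(exponent_gap(3.0)) < 1e-12            # gap vanishes at tau = 3
assert abs(exponent_gap(3.5) - 0.75) < 1e-12     # 3*(3.5 - 3)/2 = 0.75
```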
Part 3: The case \(t_k\ge u/2\) and a large upper deviation. This proof is more subtle. We fix k such that \(t_k\in [u/2, u-T\delta _u]\) and condition on \(\mathcal {F}_{t_k}\), the \(\sigma \)-algebra generated by \((\mathcal {S}_t)_{t\le t_k}\), to write (recall (3.57))
First observe that on \(\{ |\mathcal {S}_{t_k}-\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]| \le \eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\}\), we have
by using that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k+t}]=\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}](1+o_{\scriptscriptstyle T}(1))\) for any \(t\le \delta _u\) by Corollary 2.5(d). Using (3.72) similarly to (3.81), we bound
where we note that \(\mathcal {R}_t'\) is as in Part 2, (3.81). Conditionally on \(\mathcal {F}_{t_k}\), the process \((\mathcal {R}_t')_{t\ge 0}\) is a Lévy process, and we use
where we recall Eqs. (3.86) and (3.87). Since \({\mathrm {e}}^{\lambda c_i}-1-\lambda c_i\ge 0\) for every \(\lambda \in {\mathbb {R}}\), and since \(1-\mathcal {I}_i(t_k)\le 1-\mathcal {I}_i(u/2)\) for every \(t_k\ge u/2\), a.s.
On the event \(\{\sum _{i=2}^{\infty } c_i [1-\mathcal {I}_i(u/2)]\big ({\mathrm {e}}^{\lambda c_i}-1-\lambda c_i\big ) \le K\lambda ^2u^{\tau -4}\}\) (recall (3.92)), we have that \(\phi '(\lambda )\le K\lambda ^2u^{\tau -4}\), so that we can further bound, choosing \(\lambda =\delta u\) and \(t=\delta _u=u^{-(\tau -2)}\),
We take \(x=\frac{\eta }{2}\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\) and note that Corollary 2.5(b) and Lemma 2.4(d) yield that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\ge c(u-t_k) u^{\tau -3}\) for \(t_k\in [u/2,u-T\delta _u]\). Then,
Summing over k with \(t_k=k \delta _u\in [u/2, u-T\delta _u]\) and \(\delta _u=u^{-(\tau -2)}\), using Proposition 2.8 and \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\le c(u-t_k) u^{\tau -3}\) by Corollary 2.5(b) and Lemma 2.4(d) yields as an upper bound (recall also (3.92) and the definition of \(E_u\) from (3.57))
as required.
Part 4: The case \(t_k\ge u/2\) and a large lower deviation. We again start from (3.81), and note that \(\mathcal {B}_{t_k,t}^+\ge 0\), and that the bound on \(\mathcal {D}_t\) proved in Part 2, as well as that on \(\mathcal {B}_{t_k,t}^{-}\) proved around (3.73), still apply, now using that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]\ge c(u-t_k) u^{\tau -3}\) for \(t_k\in [u/2, u-T\delta _u]\) with \(\delta _u=u^{-(\tau -2)}\), by Corollary 2.5(b) and Lemma 2.4(d), as below (3.89). The exponential martingale bound for \(\mathcal {R}_t'\) performed in Part 3 can easily be adapted to deal with a large lower deviation as well. We omit further details. \(\square \)
Part 5: The error term \(\mathcal {B}_{t_k,t}^+\). Recall the definition of \(\mathcal {B}_{t_k,t}^+\) in (3.71), and the bound that we need to prove in (3.73). Write
We first use the first moment method to obtain the estimate
Indeed, note that \(\sup _{t\le \delta _u}\mathcal {B}_{t_k,t}^{+,2}=\mathcal {B}_{t_k,\delta _u}^{+,2}\) by the fact that \(t\mapsto B_i^k(t)\) is non-decreasing. By the independence of \(\mathcal {I}_i(t_k) \in \{0,1\}, \mathcal {T}_i^k\) and \(B_i^k(t) \in \{0,1\}\) (cf. below (3.65)),
Use (3.64) and (3.66) to bound this from above by
using Corollary 2.11. As a result, using Markov’s inequality,
as required.
We continue with \(\mathcal {B}_{t_k,t}^{+,1}\), which we bound as
again by the fact that \(t\mapsto B_i^k(t)\) is non-decreasing. Thus, we can write, using (3.72),
Using that \(\widetilde{\mathbb {E}}[\mathcal {S}_{t_{k+1}}]=\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}](1+o(1))\) by Corollary 2.5(c), we can bound this by
We write
By the analysis in Parts 1–4, as well as (3.101), we know that (with a possibly different value for \(\eta \) for the last term)
Indeed, for the bound on \(\mathcal {B}_{t_k,\delta _u}^-\), see the argument below (3.72). The last term is bounded in terms of Lévy processes in each of the different parts. We conclude that it suffices to investigate \(\mathcal {B}_{t_k,\delta _u}^{+,1}\) on the event \(E_u\cap F_u\).
On \(E_u\), \(|\mathcal {S}_{t_{k+1}}-\widetilde{\mathbb {E}}[\mathcal {S}_{t_{k+1}}]|\le \eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_{k+1}}]\le 2\eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_{k}}]\) and \(|\mathcal {S}_{t_k}-\widetilde{\mathbb {E}}[\mathcal {S}_{t_k}]|\le \eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_{k}}]\). On \(F_u\), the last three terms in (3.107) are bounded by \(\eta \widetilde{\mathbb {E}}[\mathcal {S}_{t_{k}}]\) as well. Thus, we obtain, on \(E_u\cap F_u\) with probability at least \(1-o_{\scriptscriptstyle T}(u^{-(\tau -1)/2})\),
as required. \(\square \)
Proof of Proposition 2.13
The proof follows by Proposition 3.3. Indeed, choose \(\eta =1/12\) and observe that \(\widetilde{\mathbb {E}}[\mathcal {S}_t] > 0\) on \([\varepsilon ,u-T\delta _u]\), using that \(\widetilde{\mathbb {E}}[\mathcal {S}_t]\ge c t u^{\tau -3}\) for \(t\in [\varepsilon ,u/2]\) and \(\widetilde{\mathbb {E}}[\mathcal {S}_t]\ge c(u-t)u^{\tau -3}\) for all \(t \in [u/2,u-T \delta _u]\) by Corollary 2.5(a),(b) and Lemma 2.4(d). \(\square \)
4 Conditional Expectations Given \(u\mathcal {S}_u=v\)
A major difficulty in the proof of Proposition 2.14 is that, while the summands in the definition of \(Q_u(t)\) in (2.45) are independent, this independence is lost once we condition on \(\mathcal {S}_u\). The following lemma allows us to deal with such conditional expectations:
Lemma 4.1
(Conditional expectations given a continuous random variable). Let \(G((\mathcal {S}_s)_{s\ge 0})\) be a functional of the process \((\mathcal {S}_s)_{s\ge 0}\) such that \(G((\mathcal {S}_s)_{s\ge 0})\ge 0\) \(\widetilde{\mathbb {P}}\)-a.s., and \(0<\widetilde{\mathbb {E}}[G((\mathcal {S}_s)_{s\ge 0})]<\infty \). Then, for every \(w\in {\mathbb {R}}\),
where \({\mathrm {i}}\) denotes the imaginary unit.
For \(G((\mathcal {S}_s)_{s\ge 0})=1\), (4.1) is just the usual Fourier inversion theorem applied to the (continuous) random variable \(\mathcal {S}_u\). The expectation \(\widetilde{\mathbb {E}}\big [G((\mathcal {S}_s)_{s\ge 0}){\mathrm {e}}^{{\mathrm {i}}k\mathcal {S}_u}]\) factorizes when \(G((\mathcal {S}_s)_{s\ge 0})\) is of product form in the underlying random variables \((\mathcal {I}_i(s))_{s\ge 0}\). In our applications, \(\widetilde{\mathbb {E}}\big [G((\mathcal {S}_s)_{s\ge 0})\mid \mathcal {S}_u=w\big ]\) will be close to constant in w. Then, in order to compute its asymptotics, it suffices to check that the computation in the proof of Proposition 2.8 is hardly affected by the presence of \(G((\mathcal {S}_s)_{s\ge 0})\).
Proof
Define the measure \(\widetilde{\mathbb {P}}^{\scriptscriptstyle G}\) by
Under the measure \(\widetilde{\mathbb {P}}^{\scriptscriptstyle G}\), the random variable \(\mathcal {S}_u\) is again continuous, since \(0<\widetilde{\mathbb {E}}[G((\mathcal {S}_s)_{s\ge 0})] <\infty \). Let \(\widetilde{f}_{\mathcal {S}_u}^{\scriptscriptstyle G}\) denote the density of \(\mathcal {S}_u\) under the measure \(\widetilde{\mathbb {P}}^{\scriptscriptstyle G}\). Then, we obtain, by the Fourier inversion theorem applied to \(\widetilde{\mathbb {P}}^{\scriptscriptstyle G}\), that
Now, by (4.2),
while
Therefore, substituting both sides in (4.3) and multiplying through by \(\widetilde{\mathbb {E}}\big [G((\mathcal {S}_s)_{s\ge 0})\big ]\) proves the claim. \(\square \)
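In summary, the proof rests on the following chain of identities; the display below is a reconstruction paraphrasing the steps (4.2)–(4.3) above, not a quotation of the statement (4.1):

```latex
% Tilting: d\widetilde{\mathbb{P}}^{G} = \frac{G}{\widetilde{\mathbb{E}}[G]}\,d\widetilde{\mathbb{P}},
% with G = G((\mathcal{S}_s)_{s\ge 0}). Fourier inversion under \widetilde{\mathbb{P}}^{G} gives
\widetilde f^{\,G}_{\mathcal{S}_u}(w)
  \;=\; \frac{1}{2\pi}\int_{\mathbb{R}}\mathrm{e}^{-\mathrm{i}kw}\,
        \widetilde{\mathbb{E}}^{G}\bigl[\mathrm{e}^{\mathrm{i}k\mathcal{S}_u}\bigr]\,\mathrm{d}k
  \;=\; \frac{1}{2\pi\,\widetilde{\mathbb{E}}[G]}\int_{\mathbb{R}}\mathrm{e}^{-\mathrm{i}kw}\,
        \widetilde{\mathbb{E}}\bigl[G\,\mathrm{e}^{\mathrm{i}k\mathcal{S}_u}\bigr]\,\mathrm{d}k,
% while, by definition of the tilted density,
\widetilde f^{\,G}_{\mathcal{S}_u}(w)
  \;=\; \frac{\widetilde{\mathbb{E}}\bigl[G\mid \mathcal{S}_u=w\bigr]\,
              \widetilde f_{\mathcal{S}_u}(w)}{\widetilde{\mathbb{E}}[G]}.
% Equating the two expressions and multiplying through by \widetilde{\mathbb{E}}[G]
% yields the claimed inversion formula.
```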
Let \(\widetilde{\mathbb {P}}_v\) denote \(\widetilde{\mathbb {P}}\) conditionally on \(u\mathcal {S}_u=v\), so that Lemma 4.1 implies that
In many cases, it will prove convenient to rewrite the above using
since the random variables \((T_i)_{i\in \mathcal {J}(u)}\) are, conditionally on \(\mathcal {J}(u)\), independent with
In the following lemma, we investigate the effect on \(\mathbb {P}(i\in \mathcal {J}(u))\) of conditioning on \(\mathcal {S}_u=w\):
Lemma 4.2
(The set \(\mathcal {J}(u)\) conditionally on \(\mathcal {S}_u=w\)). There exists a constant \(d>0\) such that for any i and \(w=o(u^{(\tau -3)/2})\),
Proof
By Lemma 4.1 (for the second term use \(G\equiv 1\))
Recall Lemma 2.9. Under the measure \(\widetilde{\mathbb {P}}\), the distribution of the indicator processes \((\mathcal {I}_j(t))_{t\ge 0}\) is that of independent indicator processes. Define \(\mathcal {S}_u^{\scriptscriptstyle (j)}=\mathcal {S}_u-c_j(\mathcal {I}_j(u)-c_ju)\). By (1.21) and (2.43), the random variables \(\mathcal {I}_j(u)\) and \(\mathcal {S}_u^{\scriptscriptstyle (j)}\) are independent under \(\widetilde{\mathbb {P}}\). This yields
Next we claim that there exist constants \(C_1, C_2\) such that for all \(j \ge 2\)
Indeed, for \(\mathcal {S}_u^{\scriptscriptstyle (j)}\) replaced by \(\mathcal {S}_u\), the result was derived in the proof of Proposition 2.8 in [1]. To prove the same for \(\mathcal {S}_u^{\scriptscriptstyle (j)}\) with arbitrary \(j \ge 2\), we follow the approach in [1] and obtain, for \(\frac{k}{2\pi } u^{-(\tau -3)/2-1} \le 1/8\), the bound
while for \(y_k=8\frac{k}{2\pi }u^{-(\tau -3)/2}>u\),
Substituting (4.12) in (4.11) yields
We further have
which yields
For \(w=o(u^{(\tau -3)/2})\) and by Proposition 2.8, \(u^{(\tau -3)/2} \widetilde{f}_{\mathcal {S}_u}(w)=B(1+o(1))\) uniformly in w, and the claim in (i) follows. \(\square \)
Corollary 4.3
There exists a constant \(C>0\) such that for any i and \(w=o(u^{(\tau -3)/2})\),
Proof
The bound by 1 is obvious. The bound by \(Cc_iu\) follows once we recall (2.30) and observe that for \(c_j\le 1/u\), \(\widetilde{\mathbb {P}}(T_j \le u) = \widetilde{\mathbb {P}}(j\in \mathcal {J}(u)) \le C(\tau ) c_j u\). Now use Lemma 4.2(i). \(\square \)
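The intermediate bound \(\widetilde{\mathbb {P}}(T_j\le u)\le C(\tau )c_ju\) for \(c_j\le 1/u\) can be traced through the explicit formula from Lemma 2.9; a sketch, writing \(x=c_ju\le 1\) (the constant absorbs the dependence on \(\theta \)):

```latex
\widetilde{\mathbb{P}}(j\in\mathcal{J}(u))
  \;=\; \frac{1-\mathrm{e}^{-x}}{1-\mathrm{e}^{-x}+\mathrm{e}^{-x(1+\theta)}}
  \;\le\; \frac{x}{\inf_{y\ge 0}\bigl(1-\mathrm{e}^{-y}+\mathrm{e}^{-y(1+\theta)}\bigr)}
  \;\le\; C(\tau)\,c_j u,
% using 1-\mathrm{e}^{-x}\le x and the uniform lower bound on the denominator,
% which is bounded away from 0 for all y\ge 0.
```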
5 The Near-End Ground: Proof of Proposition 2.14
In this section, we prove Proposition 2.14. The proof is divided into several key parts. In Sect. 5.1, we show convergence of the mean process \(A_u\) in Proposition 2.14(a). In Sect. 5.2, we prove the convergence of \(B_u\) in Proposition 2.14(b).
5.1 Convergence of the Mean Process \(A_u\)
Recall the definition of \(A_u\) from (2.50). By (4.8),
We use that \(|{\mathrm {e}}^x-1-x| \le {\mathrm {e}}^D x^2/2\) for \(0 \le x \le D\) with \(x=c_jtu^{-(\tau -2)}\), where for \(0 \le t \le T\), \(c_jtu^{-(\tau -2)} \le tu^{-(\tau -2)} \le \text{ const. }\), to obtain
with an error term \(E_u(t)\) bounded by
uniformly in \(t \le T\). Since \(\sum _{j \ge 2} c_j^3 < \infty \) and \(u^{-2(\tau -2)+1}=u^{5-2\tau }=o(1)\), the first term vanishes. Further, by Corollary 4.3 with \(w=v/u\),
so that also the second term is \(o_{\scriptscriptstyle \widetilde{\mathbb {P}}_v}(1)\).
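The estimate \(|{\mathrm {e}}^x-1-x|\le {\mathrm {e}}^D x^2/2\) for \(0\le x\le D\) used above is Taylor's theorem with Lagrange remainder:

```latex
% For 0 \le x \le D there exists \xi \in (0,x) such that
\mathrm{e}^{x}-1-x \;=\; \frac{x^2}{2}\,\mathrm{e}^{\xi} \;\le\; \frac{x^2}{2}\,\mathrm{e}^{D},
% and the left-hand side is non-negative for x\ge 0, so
% |\mathrm{e}^{x}-1-x| \le \mathrm{e}^{D}x^2/2.
```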
In the above proof, we see that it is useful to split a sum over \(j \in \mathcal {J}(u)\) according to whether \(c_j>1/u\) or \(c_j \le 1/u\). We then use upper bounds similar to the ones in Corollary 4.3 to bound the arising sums. We will follow this strategy often below.
We further rewrite (5.2) into
Note that \(0 \le q_j(u) \le 1\) for u large. Below, we will frequently rely on the bounds
and, using (2.30) for \(t=u\),
By (5.3), to prove the claim of Proposition 2.14(a), it is enough to show that
For this, we compute the Laplace transform of \(\kappa _u\) under the measure \(\widetilde{\mathbb {P}}_v\) using Lemma 4.1 and a change of variable. For \(a\ge 0\),
By Proposition 2.8, for each \(v>0\), \(u^{(\tau -3)/2} \widetilde{f}_{\mathcal {S}_u}(v/u)\rightarrow B\). We aim to use dominated convergence on the integral appearing in (5.9), for which we have to prove (a) pointwise convergence for each \(k\in {\mathbb {R}}\); and (b) a uniform bound that is integrable. We start by proving pointwise convergence:
Lemma 5.1
(Pointwise convergence). For \(a \ge 0\) arbitrary, \(v=o(u^{(\tau -1)/2})\), and with \(\kappa _u\) as in (5.8),
Proof
Trivially, \({\mathrm {e}}^{-{\mathrm {i}}k vu^{-(\tau -1)/2}}\rightarrow 1\) pointwise when \(v=o(u^{(\tau -1)/2})\). To compute \(\widetilde{\mathbb {E}}\big [{\mathrm {e}}^{-a \kappa _u}{\mathrm {e}}^{{\mathrm {i}}k u^{-(\tau -3)/2}\mathcal {S}_u}\big ]\), recall the definition of \(\mathcal {S}_u\) from (1.21) and recall that the indicator processes \(\mathcal {I}_j(t)=\mathbb {1}_{\{T_j\le t\}}\) are independent under the measure \(\widetilde{\mathbb {P}}\) (cf. Lemma 2.9), to see that
The remainder of the proof proceeds in three steps.
Step 1: Asymptotic factorization. We start by proving that
To this end, we first use
to get (recall that \(q_j(u)\ge 0\))
where we abbreviate \(q \equiv -a q_j(u) \le 0\) such that
To bound \(\Delta _j(q)\), we write \({\mathrm {e}}^{{\mathrm {i}}k u^{-(\tau -3)/2} c_j}=1+({\mathrm {e}}^{{\mathrm {i}}k u^{-(\tau -3)/2} c_j}-1)\) and use the triangle inequality to bound each summand by
We can bound
which gives a bound \(|q|{\mathrm {e}}^{(q\vee 0)}|k| u^{-(\tau -3)/2} c_j \widetilde{\mathbb {P}}(T_j \le u)\) on the last line of (5.16).
To bound the first line of (5.16), we apply the error bound \(|{\mathrm {e}}^{-x}-1+x| \le |x|^2\), valid for all \(x\ge 0\), to each of the exponential functions in it, to obtain
Together, this leads us to
To prove (5.12), by (5.14) and (5.19) it is enough to show that \(\sum _{j \ge 2} \Xi _j= o(1).\) Consider the sum over \(c_j> 1/u\) first. By (5.6),
where we have used that \(\sum _j c_j^3<\infty \) and \(\tau >3\) in the last equality. For \(c_j\le 1/u\) and by (5.6), we similarly get
This completes the proof that \(\sum _{j \ge 2} \Xi _j= o(1)\) and thus of the claim in (5.12). \(\square \)
Step 2: The limit of \(\widetilde{\mathbb {E}}[\kappa _u]\). We proceed by showing that \(\lim _{u \rightarrow \infty } \widetilde{\mathbb {E}}[\kappa _u] = \kappa \) with \(\kappa >0\) as in (2.52). By definition of \(\kappa _u\) in (5.8), \(q_j(u)\) in (5.5) and \(\widetilde{\mathbb {P}}(T_j \le u)\) in (2.30),
with \(\Delta =u^{-(\tau -1)}\) and \(x_j=j \Delta , \ j \ge 2\). Here we used that the integrand in the last line of (5.22) is continuous and integrable over \((0,\infty )\). Set \(-x^{-\alpha }=z\) to get the representation (2.52) for \(\kappa \). \(\square \)
Step 3: Completion of the proof. By Proposition 2.7, we know that
Therefore, Steps 1-2 and (5.23) complete the proof of pointwise convergence in Lemma 5.1. \(\square \)
To show that the dominated convergence theorem can be applied, it remains to show that the integrand in (5.9) has an integrable dominating function:
Lemma 5.2
(Domination by an integrable function).
Proof
By definition of \(\mathcal {S}_u\) from (1.21) and the independence in Lemma 2.9,
We can rewrite each factor as
since \(q_j(u)\ge 0\). We then use \(\log (1+x) \le x\) for \(x \ge -1\) to obtain
The latter equals
with an overall error term (using that \(\sup _j q_j(u)\) is arbitrarily small for u big enough)
Applying (5.7), we get
where we have used the bounds
whose proof is straightforward.
Together with (5.27) and (5.28), we obtain
As all summands are nonpositive, we obtain, together with (2.30),
Following the proof of [1, Proposition 2.5, (6.7)-(6.10)], we obtain
and integrability of \(| \widetilde{\mathbb {E}}[ {\mathrm {e}}^{-a \kappa _u} {\mathrm {e}}^{{\mathrm {i}}k u^{-(\tau -3)/2} \mathcal {S}_u}]|\) against k uniformly in u follows. \(\square \)
Completion of the proof of Proposition 2.14(a). By the dominated convergence theorem, Lemmas 5.1 and 5.2 complete the proof of Proposition 2.14(a). \(\square \)
5.2 Convergence of the Process \(B_u\)
In this section, we investigate the convergence of the \(B_u\) process and prove Proposition 2.14(b). Since the limit is a random process, this part is more involved than the previous section. We first note that
and the processes \((\mathbb {1}_{\{T_i\in (u-tu^{-(\tau -2)},u]\}})_{t\ge 0}\) are, conditionally on \(\mathcal {J}(u)\), independent. Thus, \((B_u(tu^{-(\tau -2)}))_{t\ge 0}\) is, conditionally on \(\mathcal {J}(u)\), a sum of (conditionally) independent processes having zero mean. We make crucial use of this observation, as well as the technique in Lemma 4.1, to compute expectations of various functionals of the process \((B_u(tu^{-(\tau -2)}))_{t\ge 0}\).
In order to prove the stated convergence in distribution, we follow the usual path of first proving weak convergence of the one-dimensional marginals, followed by the weak convergence of all finite-dimensional distributions, and complete the proof by showing tightness. We now discuss each of these steps in more detail.
5.2.1 Convergence of the One-Dimensional Marginal of \(B_u\)
We start by computing the one-dimensional marginal of \(B_u(tu^{-(\tau -2)})\) (recall (5.35)) and show that it is consistent with the claimed Lévy process limit. We achieve this by computing the Laplace transform
and proving that it converges to the Laplace transform of the claimed Lévy process limit at time t. The main result in this section is the following proposition:
Proposition 5.3
(One-time marginal of \(B_u(tu^{-(\tau -2)})\)). There exists a measure \(\Pi \) such that, for every \(v, a>0\) fixed and as \(u\rightarrow \infty \),
which is the Laplace transform of a Lévy process \((L_s)_{s\ge 0}\) with non-negative jumps and characteristic measure \(\Pi \)
Therefore, the one-dimensional marginals of the process \((B_u(su^{-(\tau -2)}))_{s\ge 0}\) converge to those of \((L_s)_{s\ge 0}\).
The remainder of this section is devoted to the proof of Proposition 5.3. As for \(A_u\), we use Lemma 4.1 and a change of variables to rewrite
where
and where we abbreviate
by (4.8). We again wish to use dominated convergence on the integral in (5.39).
We proceed along the lines of the proof of the convergence of the mean process \(A_u\). Basically, in the proof below, we replace \(-a \kappa _u\) in (5.9) (recall the definition of \(\kappa _u\) and \(q_j(u)\) from (5.8) and (5.5)) by \(\sum _{j\in \mathcal {J}(u)} r_{j,t}^u\), where we define
In what follows, we frequently make use of the bounds
and
We again start by proving pointwise convergence:
Lemma 5.4
(Pointwise convergence revisited) For \(a \ge 0\) arbitrary, \(v=o(u^{(\tau -1)/2})\),
Proof
The first factor on the left-hand side of (5.45) converges to 1. We identify the limit of the expectation in the following steps that mimic the pointwise convergence proof in Lemma 5.1. It will be convenient to split the asymptotic factorization in Step 1 of that proof into two parts, denoted by Steps 1(a) and 1(b). We start by showing that we can simplify \(\psi _{\scriptscriptstyle \mathcal {J}}(a)\):
Step 1(a): Simplification of \(\psi _{\scriptscriptstyle \mathcal {J}}(a)\). As a first step towards the identification of the pointwise limit, we show that we can simplify the expectation in (5.45) as follows:
To prove (5.46), we denote the difference in (5.46) by
so that
Using the first line of (5.13) and applying the error bound \(|{\mathrm {e}}^x-(1+x)| \le |x|^2\) for \(|x| \le 1\) to the differences \(|a_j-b_j|\), we can bound the error of the approximation by
Next use that \(1-x \le {\mathrm {e}}^{-x}\) for \(x \ge 0\) to obtain as a further bound to the above
For \(t \le T\) with \(T>0\) fixed, we further have by (5.43) that \({\mathrm {e}}^{a c_ju p_{j,t}^u} \le C(a,T)\). Together with \({\mathrm {e}}^{-x}-1+x\ge 0\) for \(x \ge 0\), we obtain
The bound \({\mathrm {e}}^{-x}-1+x\le x^2/2, \,\forall x \ge 0\) yields
We first bound the sum in (5.52). With \(1-{\mathrm {e}}^{-x} \le x\) for \(x \ge 0\) and by (5.43) we obtain
This yields as an upper bound for (5.46) (recall (5.48)),
where we have used (5.7) in the last line.
The claim (5.46) follows once we show that \(\widetilde{\mathbb {E}}\Big [ {\mathrm {e}}^{ \frac{a^2}{2} \sum _{i\in \mathcal {J}(u)} (c_iu)^2 p_{i,t}^u}\Big ]\) is bounded. To prove this, consider first the sum over \(c_i>1/u\) only. By (5.43) and (5.31),
Using (5.43) once more, it remains to show the boundedness of
which is equivalent to bounding
appropriately. Here we used that \(\log (1+x) \le x\) for \(x \ge 0\). Next, bound \(\widetilde{\mathbb {P}}(T_i\le u) \le C c_iu\) in the above to obtain that for \(c_i\le 1/u\) we have \(C(a) (c_iu)^2 t u^{-(\tau -1)} \le C(a,T) u^{-(\tau -1)} \le \log (2)\) for u large enough. Hence we can use that \({\mathrm {e}}^x-1 \le 2x\) for \(0 \le x \le \log (2)\) and thus get as a further upper bound to (5.57)
The last inequality follows from (5.31). This completes the proof of (5.46). \(\square \)
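Steps 1(a)–(b) lean repeatedly on a handful of elementary exponential inequalities; the following sketch spot-checks the ones quoted above numerically (the grids are illustrative only):

```python
import math

# Elementary exponential bounds quoted in Steps 1(a)-(b):
#   (i)   1 - x <= exp(-x)                  for x >= 0
#   (ii)  0 <= exp(-x) - 1 + x <= x**2 / 2  for x >= 0
#   (iii) exp(x) - 1 <= 2 * x               for 0 <= x <= log(2)
#   (iv)  log(1 + x) <= x                   for x >= 0

EPS = 1e-12  # numerical slack for floating-point comparisons

def check_bounds(x: float) -> bool:
    """Check inequalities (i), (ii) and (iv) at a single point x >= 0."""
    i_ok = 1 - x <= math.exp(-x) + EPS
    ii_ok = -EPS <= math.exp(-x) - 1 + x <= x * x / 2 + EPS
    iv_ok = math.log(1 + x) <= x + EPS
    return i_ok and ii_ok and iv_ok

xs = [k / 250 for k in range(1250)]                 # grid in [0, 5)
assert all(check_bounds(x) for x in xs)

ys = [math.log(2) * k / 1000 for k in range(1001)]  # grid in [0, log 2]
assert all(math.exp(y) - 1 <= 2 * y + EPS for y in ys)  # inequality (iii)
```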
Step 1(b): Asymptotic factorization. We next show that
To prove (5.59), we note that, by the definition of \(r_{j,t}^u\) in (5.42),
As in the calculations of the Laplace transform of \(A_u\) in (5.14), we now apply (5.13). Note that here we cannot apply the second bound of (5.13) as \(\sup _j (|a_j| \vee |b_j|)\) is not bounded by 1 (recall that \(r_{j,t}^u \ge 0\)). Instead, we get
We proceed to prove that the first and the third product are bounded by constants. Indeed, we can bound the third product using (5.7) by
For the first product in (5.61), we obtain as an upper bound
As \(r_{j,t}^u\) is uniformly bounded for u large enough, the above is again bounded by (5.63).
Hence, it suffices to bound the middle part of (5.61), that is, it remains to show that
where we recall the definition of \(\Delta _j(q)\) in (5.15). By (5.19),
for \(u=u(k)\) large enough. The bounds on \(r_{j,t}^u\) in (5.44) equal \(C(a, T)\) times the bounds on \(q_j(u)\) in (5.6). The remaining calculations for \(A_u\) in (5.19)–(5.21) therefore carry over directly, so that (5.65) follows. \(\square \)
Step 2: The limit of \(\mathbb {E}[\sum _{j\in \mathcal {J}(u)} r_{j,t}^u ]\). In this step, we identify the limit of \(\mathbb {E}[\sum _{j\in \mathcal {J}(u)} r_{j,t}^u ]\). For this, we use that by definition of \(r_{j,t}^u\) in (5.42), that of \(p_{j,t}^u\) in (5.41), and (2.30) with \(t=u\),
The convergence of the sum to the integral follows as in (5.22). Next set \(-x^{-\alpha }=z\) to get
with \(\Pi (dz)\) as in (5.38), respectively, (2.55). For \(\Pi \) to be the Lévy measure of a real-valued Lévy process with no positive jumps as in [5, Sect. V.1], by the Lévy-Khintchine formula in [5, Sect. 0.2 and Theorem 1 in Sect. I.1], we have to check that \(\Pi \) is a measure on \((-\infty ,0)\) that satisfies \(\int \Pi (dz) (1 \wedge z^2) < \infty \). Indeed, close to 0, \(z^2 \Pi (dz)\) behaves like \((\tau -1) z^{-(\tau -3)} dz\), which is integrable at 0, while for \(z \rightarrow \infty \), \(\Pi (dz)\) behaves like \({\mathrm {e}}^{-z} (\tau -1) z^{-(\tau -1)} dz\), whose moments of all orders \(n \in \mathbb {N}\) are finite.
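The integrability condition \(\int \Pi (dz)(1\wedge z^2)<\infty \) can be checked directly from the two asymptotic regimes just described; a sketch (written in terms of \(|z|\), since \(\Pi \) lives on \((-\infty ,0)\)):

```latex
% Near the origin: (1 \wedge z^2)\,\Pi(dz) \asymp (\tau-1)\,|z|^{-(\tau-3)}\,dz,
% and \tau-3 \in (0,1) for \tau \in (3,4), so
\int_{0<|z|\le 1} (\tau-1)\,|z|^{-(\tau-3)}\,dz \;=\; \frac{\tau-1}{4-\tau} \;<\; \infty.
% Away from the origin: \Pi(dz) \asymp \mathrm{e}^{-|z|}(\tau-1)\,|z|^{-(\tau-1)}\,dz
% \le (\tau-1)\,\mathrm{e}^{-|z|}\,dz for |z|\ge 1, which integrates to a finite constant.
% Together, \int \Pi(dz)\,(1 \wedge z^2) < \infty.
```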
Step 3: Completion of the proof. The convergence of \(\widetilde{\mathbb {E}}\big [{\mathrm {e}}^{{\mathrm {i}}k u^{-(\tau -3)/2} \mathcal {S}_u}\big ]\rightarrow {\mathrm {e}}^{-k^2 I_{\scriptscriptstyle V}(1)/2}\) is already proved in (5.23). Therefore, Steps 1(a)–1(b) and 2, together with (5.23), complete the proof of pointwise convergence in Lemma 5.4. \(\square \)
To show that the dominated convergence theorem can be applied, it again remains to show that the integrand has an integrable dominating function:
Lemma 5.5
(Domination by an integrable function).
Proof
This follows in a similar way as in the proof of Lemma 5.2. We compute
This is identical to the bound appearing in (5.25), apart from the fact that the term \(e^{-aq_j(u)}\) in (5.25) is replaced with \(b_{j,t}(u)={\mathrm {e}}^{a c_ju p_{j,t}^u}\big (1+({\mathrm {e}}^{-a c_ju}-1)p_{j,t}^u\big )\) in the above. Proceeding as in (5.25) to (5.28), we finally obtain
where the additional first term in comparison to (5.27) arises because \(b_{j,t}(u)\le 1\) no longer holds. Indeed, since \({\mathrm {e}}^{xp}(1+({\mathrm {e}}^{-x}-1)p)\ge 1\) for \(x\ge 0\) and \(p\in [0,1]\), we have that \(b_{j,t}(u)\ge 1\). Further,
The first part of the sum in (5.71) can, by (5.72) and since \({\mathrm {e}}^x-1 \le 2x\) for \(0 \le x \le \log 2\) (the map \(x\mapsto 2x-({\mathrm {e}}^x-1)\) vanishes at 0 and has nonnegative derivative \(2-{\mathrm {e}}^x\) on this interval), be bounded by
Now we can apply (5.44) and (5.31) to get as a further bound
For the second part of the sum (5.71), we proceed as in (5.27)–(5.34) to split it as
By a second-order Taylor expansion and the fact that \(r_{j,t}^u\) is bounded, there exists a constant C such that \(b_{j,t}(u)-1\le C r_{j,t}^u.\) Now we can proceed as in (5.27)–(5.34), where we again take advantage of being able to dominate the bounds on \(r_{j,t}^u\) in (5.44) by the bounds on \(q_j(u)\) in (5.6). Integrability of \(| \widetilde{\mathbb {E}}[\psi _{\scriptscriptstyle \mathcal {J}}(a) {\mathrm {e}}^{{\mathrm {i}}k u^{-(\tau -3)/2} \mathcal {S}_u} ] |\) against k follows. \(\square \)
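Two elementary inequalities carry the domination argument above: the bound \({\mathrm {e}}^x-1 \le 2x\) on \([0,\log 2]\), and the bound \({\mathrm {e}}^{xp}(1+({\mathrm {e}}^{-x}-1)p)\ge 1\) behind \(b_{j,t}(u)\ge 1\). A minimal numerical sketch (grid ranges chosen for illustration) confirms both:

```python
import math

# Numerical check of two elementary inequalities used in the proof:
#   (i)  e^x - 1 <= 2x                       for 0 <= x <= log 2,
#   (ii) e^{xp} (1 + (e^{-x} - 1) p) >= 1    for x >= 0 and p in [0, 1],
# the latter being the reason b_{j,t}(u) >= 1.

xs = [k * math.log(2) / 1000 for k in range(1001)]          # x in [0, log 2]
ok_i = all(math.exp(x) - 1 <= 2 * x + 1e-12 for x in xs)

grid_x = [k * 0.05 for k in range(201)]                     # x in [0, 10]
grid_p = [k * 0.01 for k in range(101)]                     # p in [0, 1]
ok_ii = all(
    math.exp(x * p) * (1 + (math.exp(-x) - 1) * p) >= 1 - 1e-12
    for x in grid_x for p in grid_p
)
```

Inequality (i) follows since \(2x-({\mathrm {e}}^x-1)\) vanishes at 0 and is nondecreasing up to \(\log 2\); inequality (ii) follows since \(p\mapsto xp+\log (1+({\mathrm {e}}^{-x}-1)p)\) is concave and vanishes at both endpoints \(p=0\) and \(p=1\).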
Proof of Proposition 5.3. The claim follows from Lemmas 5.4, 5.5 and the dominated convergence theorem. \(\square \)
5.2.2 Convergence of the Finite-Dimensional Distributions of \(B_u\)
In this section, the convergence of the one-dimensional marginals of the process \((B_u(tu^{-(\tau -2)}))_{t\ge 0}\) is extended to convergence of its finite-dimensional distributions. In the same way as above, it can be shown that, for \(0<t_1<\cdots <t_n\), the increments \((B_u(t_iu^{-(\tau -2)})-B_u(t_{i-1}u^{-(\tau -2)}))_{i=1}^n\) (where, by convention, \(t_0=0\)) converge in distribution, under \(\widetilde{\mathbb {P}}_v\), to independent Lévy random variables with the correct distribution.
In what follows, we only outline some minor changes in the proof. Instead of (5.40), we fix \(n \in \mathbb {N}\), \(\mathbf {a} \in (\mathbb {R}^+)^n\) and \(0 = t_0< t_1< \cdots < t_n \le T\) and consider
with (the two-point analogue of (5.43))
for \(0 \le s \le t \le T\), using (4.8). Then, clearly, (5.43) is replaced with
We follow Steps 1(a)–(b) to Step 3 in the proof of convergence of the one-time marginal.
Similarly to Step 1(a), one can show that
We then continue to reason as from (5.39) onwards, where \(r_{j,t}^u\) in (5.42) gets replaced by
The remaining calculations are analogous to the one-dimensional case. The asymptotic factorization in Step 1(b) is replaced with
and we calculate the limit of \(\widetilde{\mathbb {E}}[ \sum _{j\in {\mathcal J}(u)} r_{j,\mathbf {t}}^u]\) in a similar way as in Step 2 in the previous subsection as
Finally, we note that
where we have used that, by definition, Lévy processes have independent and stationary increments. This completes the proof of convergence of the finite-dimensional distributions of \((B_u(tu^{-(\tau -2)}))_{t\ge 0}\). \(\square \)
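For completeness, the factorization used here is the standard one for Lévy processes (notation as in [5]): writing \(\psi \) for the characteristic exponent, so that \(\mathbb {E}[{\mathrm {e}}^{{\mathrm {i}}k L_t}]={\mathrm {e}}^{t\psi (k)}\), independence and stationarity of the increments give

```latex
% Joint characteristic function of the increments of a Levy process (L_t):
% for 0 = t_0 < t_1 < \cdots < t_n and k_1, \ldots, k_n \in \mathbb{R},
\mathbb{E}\Big[\exp\Big({\mathrm{i}}\textstyle\sum_{i=1}^n k_i\,\big(L_{t_i}-L_{t_{i-1}}\big)\Big)\Big]
  = \prod_{i=1}^n \mathbb{E}\big[{\mathrm{e}}^{{\mathrm{i}} k_i (L_{t_i}-L_{t_{i-1}})}\big]
  = \prod_{i=1}^n {\mathrm{e}}^{(t_i-t_{i-1})\,\psi(k_i)}.
```

This is exactly the product structure matched by the limits of the increments of \(B_u\) above.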
5.2.3 Tightness of \(B_u\)
We next turn to tightness of the process \((B_u(tu^{-(\tau -2)}))_{t\ge 0}\). For this, we use the following tightness criterion:
Proposition 5.6
(Tightness criterion [8, Theorem 15.6 and the comment following it]). The sequence \(\{X_n\}\) is tight in \(D([0,T],{\mathbb {R}}^d)\) if the limiting process X has a.s. no discontinuity at \(t=T\) and there exist constants \(C>0\), \(r>0\) and \(a>1\) such that, for \(0\le t_1<t_2<t_3\le T\) and for all n,
\(\mathbb {E}\big [|X_n(t_2)-X_n(t_1)|^{r}\,|X_n(t_3)-X_n(t_2)|^{r}\big ]\le C\,(t_3-t_1)^{a}.\)
Let
We show tightness of \(V^{\scriptscriptstyle (u)}(t)\) given \(u\mathcal {S}_u=v\). In what follows, we therefore bound
First observe that with
we have \(p_{i,s,t}^u = \widetilde{\mathbb {E}}[ \mathcal {I}_i^u(s,t) \mid T_i \le u ]\) (recall (5.77)) and
By the independence of the processes conditionally on \(\mathcal {J}(u)\) (recall the comment preceding (4.8)), and as we subtract their respective expectations, we obtain
We can bound this from above by
By (5.78),
For the first sum, note that \((c_iu)^4 u^{-2(\tau -1)} =c_i^4 u^{-2(\tau -3)}\), so that the sum is \(o(1)\), since \(\sum _i c_i^3<\infty \) and \(\tau >3\). For the second sum in (5.91), we note that the sum over i such that \(c_i>1/u\) is clearly bounded, since it is bounded by
which converges to a constant as \(u\rightarrow \infty \), since it is a Riemann approximation of a finite integral. For the contributions due to \(c_i\le 1/u\), we bound the expectation as
by (5.31). Hence, we get with (5.86) and (5.91)–(5.93),
as required. \(\square \)
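The moment bound just established is an instance of the criterion in Proposition 5.6. As a standalone sanity check of the criterion itself, the sketch below substitutes standard Brownian motion for the process at hand (an assumption purely for illustration): there \(\mathbb {E}[|B_{t_2}-B_{t_1}|^2\,|B_{t_3}-B_{t_2}|^2]=(t_2-t_1)(t_3-t_2)\le (t_3-t_1)^2\), so the criterion holds with \(r=2\), \(a=2\) and \(C=1\).

```python
import math
import random

# Monte Carlo illustration (Brownian motion, not the paper's process) of the
# tightness criterion in Proposition 5.6:
#   E[|B_{t2}-B_{t1}|^2 |B_{t3}-B_{t2}|^2] = (t2-t1)(t3-t2) <= (t3-t1)^2,
# i.e. the criterion with r = 2, a = 2 and C = 1.

random.seed(0)
t1, t2, t3 = 0.2, 0.5, 0.9
n = 200_000
acc = 0.0
for _ in range(n):
    d1 = random.gauss(0.0, math.sqrt(t2 - t1))  # increment on [t1, t2]
    d2 = random.gauss(0.0, math.sqrt(t3 - t2))  # independent increment on [t2, t3]
    acc += (d1 * d1) * (d2 * d2)
estimate = acc / n

exact = (t2 - t1) * (t3 - t2)   # = 0.12, by independence of the increments
bound = (t3 - t1) ** 2          # = 0.49, the right-hand side of the criterion
```

The Monte Carlo estimate concentrates around the exact product of variances, comfortably below the bound \((t_3-t_1)^2\).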
5.2.4 Completion of the Proof of Proposition 2.14(b)
The convergence of the finite-dimensional distributions together with tightness yields \((B_u(tu^{-(\tau -2)}))_{t\ge 0}{\mathop {\longrightarrow }\limits ^{d}}(L_t)_{t\ge 0}\) by [8, Theorem 5.1].
Notes
There is a typo in [7, Theorem 2.4], in which \(c=\theta -ab\) should read \(c=\theta =\lambda + \zeta \).
We take this opportunity to correct some typos in [7]. In [7, (3.76)], the term \(-abti^{-\alpha }=-abtc_i\) should be replaced by \(-abti^{-2\alpha }=-abtc_i^2.\) This corresponds to the choice of \(\tilde{\beta }_i=(\lambda +\zeta )/(ab)-c_i^2\) here. Further, in [7, (3.79)], the product over \(j\in [i-1]\) should be over \(q\in [i-1]\).
References
Aïdékon, E., van der Hofstad, R., Kliem, S., van Leeuwaarden, J.S.H.: Large deviations for power-law thinned Lévy processes. Stoch. Process. Appl. 126(5), 1353–1384 (2016)
Aldous, D.: Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab. 25(2), 812–854 (1997)
Aldous, D., Limic, V.: The entrance boundary of the multiplicative coalescent. Electron. J. Probab. 3(3), 59 pp. (1998) (electronic)
Alon, N., Spencer, J.: The probabilistic method. Wiley-Interscience Series in Discrete Mathematics and Optimization, 2nd edn. Wiley, New York (2000)
Bertoin, J.: Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge (1996)
Bhamidi, S., van der Hofstad, R., van Leeuwaarden, J.S.H.: Scaling limits for critical inhomogeneous random graphs with finite third moments. Electron. J. Probab. 15, 1682–1702 (2010)
Bhamidi, S., van der Hofstad, R., van Leeuwaarden, J.S.H.: Novel scaling limits for critical inhomogeneous random graphs. Ann. Probab. 40, 2299–2361 (2012)
Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Mathematical Statistics, 2nd edn. Wiley, New York (1999)
Bollobás, B.: Random graphs, volume 73 of Cambridge Studies in Advanced Mathematics, 2nd edn. Cambridge University Press, Cambridge (2001)
Bollobás, B.: The evolution of random graphs. Trans. Am. Math. Soc. 286(1), 257–274 (1984)
Bollobás, B., Janson, S., Riordan, O.: The phase transition in inhomogeneous random graphs. Random Struct. Algorithms 31(1), 3–122 (2007)
Britton, T., Deijfen, M., Martin-Löf, A.: Generating simple random graphs with prescribed degree distribution. J. Stat. Phys. 124(6), 1377–1397 (2006)
Chung, F., Lu, L.: Complex Graphs and Networks, volume 107 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC (2006)
Chung, F., Lu, L.: The average distances in random graphs with given expected degrees. Proc. Natl. Acad. Sci. USA 99(25), 15879–15882 (2002). (electronic)
Chung, F., Lu, L.: Connected components in random graphs with given expected degree sequences. Ann. Comb. 6(2), 125–145 (2002)
Chung, F., Lu, L.: The average distance in a random graph with given expected degrees. Internet Math. 1(1), 91–113 (2003)
Chung, F., Lu, L.: The volume of the giant component of a random graph with given expected degrees. SIAM J. Discret. Math. 20, 395–411 (2006)
Durrett, R.: Random graph dynamics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (2007)
Fortuin, C.M., Kasteleyn, P.W., Ginibre, J.: Correlation inequalities on some partially ordered sets. Commun. Math. Phys. 22, 89–103 (1971)
Grimmett, G.: Percolation, 2nd edn. Springer, Berlin (1999)
Hardy, G.H.: Divergent Series. Clarendon (Oxford University) Press, Oxford (1949)
van der Hofstad, R.: Critical behavior in inhomogeneous random graphs. Random Struct. Algorithms 42(4), 480–508 (2013)
van der Hofstad, R., Kager, W., Müller, T.: A local limit theorem for the critical random graph. Electron. Commun. Probab. 14, 122–131 (2009)
van der Hofstad, R., Janssen, A.J.E.M., van Leeuwaarden, J.S.H.: Critical epidemics, random graphs and Brownian motion with a parabolic drift. Adv. Appl. Probab. 42, 1187–1206 (2010)
Janson, S., Knuth, D.E., Łuczak, T., Pittel, B.: The birth of the giant component. Random Struct. Algorithms 4(3), 231–358 (1993). With an introduction by the editors
Janson, S.: Asymptotic equivalence and contiguity of some random graphs. Random Struct. Algorithms 36(1), 26–45 (2010)
Janson, S., Łuczak, T., Rucinski, A.: Random graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York (2000)
Łuczak, T.: Component behavior near the critical point of the random graph process. Random Struct. Algorithms 1(3), 287–310 (1990)
Łuczak, T., Pittel, B., Wierman, J.: The structure of a random graph at the point of the phase transition. Trans. Am. Math. Soc. 341(2), 721–748 (1994)
Norros, I., Reittu, H.: On a conditionally Poissonian graph process. Adv. Appl. Probab. 38(1), 59–75 (2006)
Pittel, B.: On the largest component of the random graph at a nearcritical stage. J. Combin. Theory Ser. B 82(2), 237–269 (2001)
Roberts, M.I., Sengul, B.: Exceptional times of the critical dynamical Erdős–Rényi graph. arXiv:1610.06000, Preprint (2016)
Roberts, M.I.: The probability of unusually large components in the near-critical Erdős–Rényi graph. arXiv:1610.05485, Preprint (2016)
Turova, T.: Diffusion approximation for the components in critical inhomogeneous random graphs of rank 1. Random Struct. Algorithms 43(4), 486–539 (2013)
Acknowledgements
The work of RvdH, JvL and SK was supported in part by the Netherlands Organisation for Scientific Research (NWO). The work of JvL was supported by the European Research Council (ERC). We thank Elie Aïdékon for numerous discussions. We thank the referee for careful reading of the paper, and for making numerous helpful suggestions to improve the presentation, as well as spotting an error in the proof of Proposition 3.3.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
van der Hofstad, R., Kliem, S. & van Leeuwaarden, J.S.H. Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs. J Stat Phys 171, 38–95 (2018). https://doi.org/10.1007/s10955-018-1978-0
Keywords
- Critical random graphs
- Power-law degrees
- Inhomogeneous networks
- Thinned Lévy processes
- Exponential tilting
- Large deviations