1 Introduction and results

Throughout this paper we denote \({\mathcal {H}}_N(I)\) the set of hermitian matrices of size \(N=1,2,\dots \) with eigenvalues in the interval \(I\subseteq {\mathbb {R}}\). In particular \({\mathcal {H}}_N(I)\) can be endowed with the Lebesgue measure

$$\begin{aligned} \mathrm dX=\prod _{i=1}^N\mathrm dX_{ii}\prod _{1\le i<j\le N}\mathrm d\mathrm {Re}\,X_{ij}\,\mathrm d\mathrm {Im}\,X_{ij}. \end{aligned}$$

The Jacobi Unitary Ensemble (JUE) is defined by the following measure on \({\mathcal {H}}_N(0,1)\)

$$\begin{aligned} \mathrm dm_N^{\mathsf{J}}(X)=\frac{1}{C_N^{\mathsf{J}}}{\det }^\alpha (X){\det }^\beta ({\mathbf {1}}-X)\mathrm dX, \end{aligned}$$
(1.1)

with parameters \(\alpha ,\beta \) satisfying \(\mathrm {Re}\,\alpha ,\mathrm {Re}\,\beta >-1\). The normalizing constant

$$\begin{aligned} C_N^{\mathsf{J}}=\int _{{\mathcal {H}}_N(0,1)}{\det }^\alpha (X){\det }^\beta ({\mathbf {1}}-X)\mathrm dX=\pi ^{\frac{N(N-1)}{2}} \prod _{k=0}^{N-1} \frac{\Gamma (\alpha +k+1) \Gamma (\beta +k+1)}{\Gamma (\alpha +\beta +2N-k)}, \end{aligned}$$

ensures that \(\mathrm dm_N^{\mathsf{J}}\) has total mass 1; the above integral can be computed by a standard formula [19] in terms of the norming constants \(h_\ell ^{\mathsf{J}}\) of the monic Jacobi polynomials, see (4.2).

If \(\alpha \) and \(\beta \) are non-negative integers, so that \(M_\alpha =\alpha +N\ge N\) and \(M_\beta =\beta +N\ge N\), the probability measure (1.1) describes the distribution of the matrix \(X=(W_A+W_B)^{-1/2}W_A(W_A+W_B)^{-1/2}\in {\mathcal {H}}_N(0,1)\), where \(W_A=A^\dagger A\) and \(W_B=B^\dagger B\) are the Wishart matrices associated with the random matrices \(A,B\) of size \(M_\alpha \times N\), \(M_\beta \times N\) respectively, with i.i.d. Gaussian entries [30].

Given positive integers \(k_1,\dots ,k_\ell \) we shall consider the expectation values

$$\begin{aligned} \left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{\pm k_j}\right\rangle :=\int _{{\mathcal {H}}_N(0,1)}\left( \prod _{j=1}^\ell \mathrm {tr}\,X^{\pm k_j}\right) \mathrm dm_N^{\mathsf{J}}(X), \end{aligned}$$
(1.2)

which we term (respectively, positive and negative) JUE correlators.
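For concreteness, when \(\alpha ,\beta \) are non-negative integers the Wishart construction recalled above gives a direct way to sample from (1.1), and hence to estimate the correlators (1.2) by Monte Carlo. The following minimal Python sketch (the helper names `sample_jue` and `estimate_correlator` are ours and purely illustrative) assumes this integer-parameter setting; the exact value used for comparison, \(\left\langle \mathrm {tr}\,X\right\rangle =N(\alpha +N)/(\alpha +\beta +2N)\), can be read off the expansion of \({\mathscr {F}}_{1,\infty }\) in Corollary 1.6 below.

```python
# Monte Carlo sketch for JUE correlators (illustrative, integer alpha and beta).
import numpy as np

rng = np.random.default_rng(0)

def sample_jue(N, alpha, beta):
    """One sample of X = (W_A+W_B)^{-1/2} W_A (W_A+W_B)^{-1/2}."""
    A = rng.standard_normal((alpha + N, N)) + 1j * rng.standard_normal((alpha + N, N))
    B = rng.standard_normal((beta + N, N)) + 1j * rng.standard_normal((beta + N, N))
    WA, WB = A.conj().T @ A, B.conj().T @ B          # Wishart matrices
    w, U = np.linalg.eigh(WA + WB)                   # (W_A+W_B)^{-1/2} via the spectral theorem
    S = U @ np.diag(w ** -0.5) @ U.conj().T
    return S @ WA @ S

def estimate_correlator(ks, N, alpha, beta, samples=20000):
    """Monte Carlo estimate of < prod_j tr X^{k_j} >, with k_j nonzero integers."""
    acc = 0.0
    for _ in range(samples):
        evals = np.linalg.eigvalsh(sample_jue(N, alpha, beta))
        acc += np.prod([np.sum(evals ** float(k)) for k in ks])
    return acc / samples

if __name__ == "__main__":
    N, alpha, beta = 3, 2, 1
    print(estimate_correlator([1], N, alpha, beta))   # ~ N(alpha+N)/(alpha+beta+2N)
    print(N * (alpha + N) / (alpha + beta + 2 * N))   # = 1.666...
```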

Remark 1.1

Although (1.2) is defined only for \(\mathrm {Re}\,\alpha \pm \sum _{i=1}^\ell k_i>-1,\mathrm {Re}\,\beta >-1\), it will be clear from the formulæ of Corollary 1.6 below that the JUE correlators extend to rational functions of \(N,\alpha ,\beta \).

1.1 JUE correlators and Hurwitz numbers

Our first result gives a combinatorial interpretation for the large N topological expansion [25, 26, 28] of JUE correlators (1.2). This provides an analogue of the classical result of Bessis, Itzykson and Zuber [14] expressing correlators of the Gaussian Unitary Ensemble as a generating function counting ribbon graphs weighted according to their genus, see also [25]. At the same time, it is closer in spirit to (and actually a generalization of, see Remark 1.10) the analogous result for the Laguerre Unitary Ensemble, whose correlators are expressed in terms of double monotone Hurwitz numbers [17] and (for a specific value of the parameter) in terms of Hodge integrals [20, 22, 31]; in particular, in [31] we provide an ELSV-type formula [24] for weighted double monotone Hurwitz numbers in terms of Hodge integrals.

Our description of the JUE correlators involves triple monotone Hurwitz numbers, which we promptly define; to this end let us recall that a partition is a sequence \(\lambda =(\lambda _1,\dots ,\lambda _{\ell })\) of integers \(\lambda _1\ge \dots \ge \lambda _\ell >0\), termed parts of \(\lambda \); the number \(\ell \) is called the length of the partition, denoted in general \(\ell (\lambda )\), and the number \(|\lambda |=\sum _{j=1}^{\ell }\lambda _j\) is called the weight of the partition. We shall use the notation \(\lambda \vdash n\) to indicate that \(\lambda \) is a partition of n, i.e. \(|\lambda |=n\).

We denote by \({\mathfrak {S}}_n\) the group of permutations of \(\{1,\dots ,n\}\); for any \(\lambda \vdash n\) let \(\mathrm {cyc}(\lambda )\subset {\mathfrak {S}}_n\) be the conjugacy class of permutations of cycle-type \(\lambda \). It is worth recalling that the centralizer of any permutation in \(\mathrm {cyc}(\lambda )\) has order

$$\begin{aligned} z_\lambda =\prod _{i\ge 1}i^{m_i}(m_i!),\qquad m_i=\left| \left\{ j:\ \lambda _j=i\right\} \right| , \end{aligned}$$
(1.3)

where the symbol \(|\cdot |\) denotes the cardinality of the set.

Hurwitz numbers were introduced by Hurwitz to count the number of non-equivalent branched coverings of the Riemann sphere with a given set of branch points and branch profile [37]. This problem is essentially equivalent to counting factorizations in the symmetric group with permutations of assigned cycle-type and, possibly, other constraints. It is a problem of long-standing interest in combinatorics, geometry, and physics [3, 4, 32,33,34,35,36, 47].

The type of Hurwitz numbers relevant to our study is defined as follows.

Definition 1.2

Given \(n\ge 0\), three partitions \(\lambda ,\mu ,\nu \vdash n\) and an integer \(g\ge 0\), we define \(h_g(\lambda ,\mu ,\nu )\) to be the number of tuples \((\pi _1,\pi _2,\tau _1,\dots ,\tau _r)\) of permutations in \({\mathfrak {S}}_n\) such that

  1.

    \(r=2g-2-n+\ell (\mu )+\ell (\nu )+\ell (\lambda )\),

  2.

    \(\pi _1\in \mathrm {cyc}(\mu )\), \(\pi _2\in \mathrm {cyc}(\nu )\),

  3.

    \(\tau _i=(a_i,b_i)\) are transpositions, with \(a_i<b_i\) and \(b_1\le \cdots \le b_r\), and

  4.

    \(\pi _1\pi _2\tau _1\cdots \tau _r\in \mathrm {cyc}(\lambda )\).
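For very small n, Definition 1.2 can be instantiated by brute force, which is a convenient way to reproduce small Hurwitz numbers such as those tabulated in Example 1.9 below. The following Python sketch is purely illustrative (it composes permutations right to left); the optional flag `connected` imposes the transitivity constraint of the connected numbers introduced in Remark 1.4 below.

```python
# Brute-force enumeration of the triple monotone Hurwitz numbers of Definition 1.2
# (illustrative sketch, feasible only for very small n).
from itertools import permutations, product

def cycle_type(p):
    n, seen, lengths = len(p), [False] * len(p), []
    for i in range(n):
        if not seen[i]:
            j, c = i, 0
            while not seen[j]:
                seen[j] = True
                j, c = p[j], c + 1
            lengths.append(c)
    return tuple(sorted(lengths, reverse=True))

def compose(s, t):
    """(s o t)(i) = s(t(i)): t is applied first."""
    return tuple(s[t[i]] for i in range(len(t)))

def transposition(a, b, n):
    img = list(range(n))
    img[a], img[b] = b, a
    return tuple(img)

def is_transitive(gens, n):
    """Does the subgroup generated by `gens` act transitively on {0,...,n-1}?"""
    reached, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for g in gens:
            if g[i] not in reached:
                reached.add(g[i])
                frontier.append(g[i])
    return len(reached) == n

def h(g, lam, mu, nu, connected=False):
    """h_g(lam,mu,nu) of Definition 1.2; h_g^c(lam,mu,nu) if connected=True."""
    n = sum(lam)
    assert sum(mu) == n == sum(nu)
    r = 2 * g - 2 - n + len(mu) + len(nu) + len(lam)
    if r < 0:
        return 0
    lam_t = tuple(sorted(lam, reverse=True))
    perms = list(permutations(range(n)))
    cyc_mu = [p for p in perms if cycle_type(p) == tuple(sorted(mu, reverse=True))]
    cyc_nu = [p for p in perms if cycle_type(p) == tuple(sorted(nu, reverse=True))]
    T = [((a, b), transposition(a, b, n)) for b in range(n) for a in range(b)]
    count = 0
    for p1 in cyc_mu:
        for p2 in cyc_nu:
            for taus in product(T, repeat=r):
                if any(taus[i][0][1] > taus[i + 1][0][1] for i in range(r - 1)):
                    continue                     # monotonicity b_1 <= ... <= b_r fails
                w = compose(p1, p2)
                for _, t in taus:
                    w = compose(w, t)            # product pi_1 pi_2 tau_1 ... tau_r
                if cycle_type(w) != lam_t:
                    continue
                if connected and not is_transitive([p1, p2] + [t for _, t in taus], n):
                    continue
                count += 1
    return count

if __name__ == "__main__":
    parts3 = [(3,), (2, 1), (1, 1, 1)]
    for mu in parts3:
        print(mu, [h(0, (1, 1, 1), mu, nu, connected=True) for nu in parts3])
    # expected rows 2 6 4 / 6 18 12 / 4 12 8, cf. the table in Example 1.9
```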

The relation of these Hurwitz numbers to the JUE is expressed by the following result.

Theorem 1.3

Under the re-scaling \(\alpha =(c_\alpha -1)N\), \(\beta =(c_\beta -1)N\), for any partition \(\lambda \) we have the following Laurent expansions as \(N\rightarrow \infty \);

$$\begin{aligned}&(-1)^{|\lambda |}N^{\ell (\lambda )}\frac{|\lambda |!}{z_\lambda }\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{\lambda _j}\right\rangle \\&\quad = \sum _{g\ge 0}\frac{1}{N^{2g-2}}\sum _{\mu ,\nu \vdash |\lambda |} \frac{c_\alpha ^{\ell (\nu )}}{(-c_\alpha -c_\beta )^{\ell (\mu )+\ell (\nu )+\ell (\lambda )+2g-2}} h_g(\lambda ,\mu ,\nu ),\\&(-1)^{|\lambda |}N^{\ell (\lambda )}\frac{|\lambda |!}{z_\lambda }\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{-\lambda _j}\right\rangle \\&\quad = \sum _{g\ge 0}\frac{1}{N^{2g-2}}\sum _{\mu ,\nu \vdash |\lambda |} \frac{\left( 1-c_\alpha -c_\beta \right) ^{\ell (\nu )}}{(c_\alpha -1)^{\ell (\mu )+\ell (\nu )+\ell (\lambda )+2g-2}} h_g(\lambda ,\mu ,\nu ), \end{aligned}$$

where \(z_\lambda \) is given in (1.3) and \(h_g(\lambda ,\mu ,\nu )\) are the monotone triple Hurwitz numbers of Definition 1.2.

The proof is in Sect. 2. There is a similar result for the Laguerre Unitary Ensemble (LUE) [17] which is recovered by the limit explained in Remark 1.10. However, the proof presented in this paper uses substantially different methods than those employed in [17]; in particular our proof is completely self-contained and uses the notion of multiparametric weighted Hurwitz numbers, see, e.g. [36] and Sect. 2.1.

Remark 1.4

(Connected correlators and connected Hurwitz numbers) By standard combinatorial methods [49] it is possible to conclude from Theorem 1.3 that the connected JUE correlators

$$\begin{aligned} \left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{\pm \lambda _j}\right\rangle ^{\mathsf{c}}=\sum _{{\mathcal {P}}\text { partition of }\{1,\dots ,\ell \}}(-1)^{|{\mathcal {P}}|-1}(|{\mathcal {P}}|-1)!\prod _{A\in {\mathcal {P}}}\left\langle \prod _{a\in A}\mathrm {tr}\,X^{\pm \lambda _a}\right\rangle \end{aligned}$$

admit the same large N expansions as in Theorem 1.3, with the Hurwitz numbers \(h_g(\lambda ,\mu ,\nu )\) replaced by their connected counterparts \(h_g^{\mathsf{c}}(\lambda ,\mu ,\nu )\). The latter are defined as the number of tuples \((\pi _1,\pi _2,\tau _1,\dots ,\tau _r)\) satisfying (1)–(4) in Definition 1.2 and the additional constraint that the subgroup generated by \(\pi _1,\pi _2,\tau _1,\dots ,\tau _r\) acts transitively on \(\{1,\dots ,n\}\).
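The passage between correlators and their connected versions is a purely combinatorial inversion over set partitions and is convenient to have as a small routine. In the following illustrative Python sketch, `corr` is a stand-in for any rule computing the plain correlator attached to a tuple of exponents; the toy check uses a deterministic matrix, for which every connected correlator of two or more traces must vanish.

```python
# Connected correlators from plain correlators via the set-partition formula above
# (illustrative sketch).
from math import factorial

def set_partitions(items):
    """All partitions of the list `items` into nonempty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        yield [[first]] + partition                     # `first` in its own block
        for i in range(len(partition)):                  # or inserted into an existing block
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]

def connected(corr, exponents):
    """< prod_a tr X^{exponents[a]} >^c computed from the plain correlators `corr`."""
    total = 0
    for blocks in set_partitions(list(exponents)):
        weight = (-1) ** (len(blocks) - 1) * factorial(len(blocks) - 1)
        term = 1
        for block in blocks:
            term *= corr(tuple(block))
        total += weight * term
    return total

if __name__ == "__main__":
    a, b = 0.3, 0.7          # eigenvalues of a fixed (deterministic) 2x2 matrix
    def corr(ks):
        out = 1.0
        for k in ks:
            out *= a ** k + b ** k
        return out
    print(connected(corr, (2,)))       # = a^2 + b^2
    print(connected(corr, (1, 2)))     # = 0 (no randomness)
    print(connected(corr, (1, 1, 2)))  # = 0
```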

1.2 Computing correlators of hermitian models

To provide an effective computation of the JUE correlators we first consider the general case of a measure on \({\mathcal {H}}_N(I)\) of the form

$$\begin{aligned} \mathrm dm_N(X)=\frac{1}{C_N}\exp \mathrm {tr}\,V(X)\mathrm dX, \end{aligned}$$
(1.4)

with normalizing constant \(C_N=\int _{{\mathcal {H}}_N(I)}\exp \mathrm {tr}\,V(X)\mathrm dX\). Here V(x) is a continuous function of \(x\in I^\circ \) (the interior of I) and we assume that \(\exp V(x)={\mathcal {O}}\left( |x-x_0|^{-1+\varepsilon }\right) \) for some \(\varepsilon >0\) as \(x\in I^\circ \) approaches a finite endpoint \(x_0\) of I; further, if I extends to \(\pm \infty \) we assume that \(V(x)\rightarrow -\infty \) fast enough as \(x\rightarrow \pm \infty \) in order for the measure (1.4) to have finite moments of all orders, so that the associated orthogonal polynomials exist. The expression \(\mathrm {tr}\,V(X)\) in (1.4) for an hermitian matrix X is defined via the spectral theorem. The JUE is recovered for \(I=[0,1]\) and \(V(x)=\alpha \log x+\beta \log (1-x)\), \(\mathrm {Re}\,\alpha ,\mathrm {Re}\,\beta >-1\).

Introduce the cumulant functions

$$\begin{aligned} {\mathscr {C}}_{\ell }(z_1,\dots ,z_\ell ):=\int _{{\mathcal {H}}_N(I)}\prod _{i=1}^\ell \mathrm {tr}\,\left[ \left( z_i-X\right) ^{-1}\right] \mathrm dm_N(X),\qquad \ell \ge 1, \end{aligned}$$
(1.5)

which are analytic functions of \(z_1,\dots ,z_\ell \in {\mathbb {C}}\setminus I\), symmetric in the variables \(z_1,\dots ,z_\ell \). To simplify the analysis it is convenient to introduce the connected cumulant functions

$$\begin{aligned} {\mathscr {C}}_{\ell }^{\mathsf{c}}(z_1,\dots ,z_\ell )= \sum _{{\mathcal {P}}\text { partition of }\{1,\dots ,\ell \}}(-1)^{|{\mathcal {P}}|-1}(|{\mathcal {P}}|-1)! \prod _{A\in {\mathcal {P}}}{\mathscr {C}}_{|A|}(\{z_a\}_{a\in A}),\qquad \end{aligned}$$
(1.6)

from which the cumulant functions can be recovered by

$$\begin{aligned} {\mathscr {C}}_{\ell }(z_1,\dots ,z_\ell )=\sum _{{\mathcal {P}}\text { partition of }\{1,\dots ,\ell \}}\prod _{A\in {\mathcal {P}}}{\mathscr {C}}_{|A|}^{\mathsf{c}}(\{z_a\}_{a\in A}). \end{aligned}$$
(1.7)

For example, \({\mathscr {C}}_1(z)={\mathscr {C}}_1^{\mathsf{c}}(z)\), \({\mathscr {C}}_{2}^{\mathsf{c}}(z_1,z_2)=\mathscr {C}_{2}(z_1,z_2)-{\mathscr {C}}_{1}(z_1){\mathscr {C}}_1(z_2)\),

$$\begin{aligned} {\mathscr {C}}_{3}^{\mathsf{c}}(z_1,z_2,z_3)={}&\mathscr {C}_{3}(z_1,z_2,z_3)-{\mathscr {C}}_{2}(z_1,z_2){\mathscr {C}}_1(z_3)-\mathscr {C}_{2}(z_2,z_3){\mathscr {C}}_1(z_1) \\&-{\mathscr {C}}_{2}(z_1,z_3){\mathscr {C}}_1(z_2)+2\,\mathscr {C}_{1}(z_1){\mathscr {C}}_1(z_2){\mathscr {C}}_1(z_3). \end{aligned}$$

We now express the connected cumulant functions in terms of the monic orthogonal polynomials \(P_\ell (z)=z^\ell +\dots \) uniquely defined by

$$\begin{aligned} \int _{I} P_\ell (x)P_m(x)\mathrm {e}^{V(x)}\mathrm dx=h_\ell \delta _{\ell ,m}, \end{aligned}$$
(1.8)

and of the \(2\times 2\) matrix

$$\begin{aligned} Y_N(z):= \left( \begin{array}{cc} P_N(z) &{} \frac{1}{2\pi \mathrm {i}}\int _IP_N(x)\mathrm {e}^{V(x)}\frac{\mathrm dx}{x-z} \\ -\frac{2\pi \mathrm {i}}{h_{N-1}}P_{N-1}(z) &{} -\frac{1}{h_{N-1}}\int _IP_{N-1}(x)\mathrm {e}^{V(x)}\frac{\mathrm dx}{x-z} \end{array}\right) , \end{aligned}$$
(1.9)

which is the well-known solution to the Riemann-Hilbert problem of orthogonal polynomials [29]; it is an analytic function of \(z\in {\mathbb {C}}\setminus I\).

Theorem 1.5

Let

$$\begin{aligned} R(z):=Y_N(z)\begin{pmatrix}1 &{} 0\\ 0 &{}0 \end{pmatrix}Y_N^{-1}(z)\,, \end{aligned}$$

with \(Y_N(z)\) as in (1.9). Then the connected cumulant functions (1.6) are given by

$$\begin{aligned} {\mathscr {C}}_1^{\mathsf{c}}(z)&=\mathrm {tr}\,\left( Y^{-1}_N(z)Y_N'(z)\frac{\sigma _3}{2}\right) , \quad \sigma _3=\begin{pmatrix} 1&{} 0 \\ 0 &{} -1 \end{pmatrix}\,,\\ {\mathscr {C}}_2^{\mathsf{c}}(z_1,z_2)&=\frac{\mathrm {tr}\,\left( R(z_1)R(z_2)\right) -1}{(z_1-z_2)^2},\\ {\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )&=-\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}( (\ell ) )}\frac{\mathrm {tr}\,\left( R(z_{i_1})\dots R(z_{i_\ell })\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_\ell }-z_{i_1})},\quad \ell \ge 3, \end{aligned}$$

where the prime in the first formula denotes the derivative with respect to z and \(\mathrm {cyc}((\ell ))\) in the last formula is the set of \(\ell \)-cycles in the symmetric group \({\mathfrak {S}}_\ell \).

The proof is given in Sect. 3. Formulæ of this sort for correlators of hermitian matrix models have been recently discussed in the literature, see, e.g. [21, 27]. They are directly related to the theory of tau functions (formal [21] and isomonodromic [9, 31]) and to topological recursion theory [5, 6, 15, 28]. Incidentally, similar formulæ also appear for matrix models with external source [7, 8, 11,12,13, 41]. In Sect. 3 we provide an extremely direct derivation based solely on the Riemann-Hilbert characterization of the matrix \(Y_N(z)\).

We can apply these formulæ to the Jacobi measure \(\mathrm dm_N^{\mathsf{J}}\), see (1.1), for which the support is \(I=[0,1]\). Therefore we can expand the cumulants near the points \(z=0\) or \(z=\infty \); the expansion at \(z=1\) could be considered but we omit it as it is recovered from the one at \(z=0\) by exchanging \(\alpha ,\beta \) and \(z \mapsto 1-z\), see (1.1). Using the definitions (1.5) and (1.6), we obtain the generating functions for the JUE connected correlators (1.2), namely

$$\begin{aligned}&{\mathscr {C}}_1(z)\overset{z\rightarrow \infty }{\sim } {\mathscr {F}}_{1,\infty }^{\mathsf{c}}(z)+\frac{N}{z},\quad {\mathscr {C}}_1(z)\overset{z\rightarrow 0}{\sim }{\mathscr {F}}_{1,0}^{{\mathsf{c}}}(z),\\&{\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )\overset{z_1,\dots ,z_\ell \rightarrow p}{\sim }{\mathscr {F}}_{\ell ,p}^{\mathsf{c}}(z_1,\dots ,z_\ell )\quad (p=0,\infty ),\ \ell \ge 2, \end{aligned}$$

where

$$\begin{aligned} {\mathscr {F}}_{\ell ,\infty }^{\mathsf{c}}(z_1,\dots ,z_\ell )&:=\sum _{k_1,\dots ,k_\ell \ge 1}\frac{\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{k_j}\right\rangle ^{\mathsf{c}}}{z_1^{k_1+1}\cdots z_\ell ^{k_\ell +1}}, \nonumber \\ {\mathscr {F}}_{\ell ,0}^{\mathsf{c}}(z_1,\dots ,z_\ell )&:=(-1)^\ell \sum _{k_1,\dots ,k_\ell \ge 1}\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{-k_j}\right\rangle ^{\mathsf{c}} z_1^{k_1-1}\cdots z_\ell ^{k_\ell -1}. \end{aligned}$$
(1.10)

On the other hand, performing the same expansion on the right hand side of the expressions for the cumulants in Theorem 1.5, we have an explicit tool to compute the correlators.

For the specific case of Jacobi polynomials, we prove in Sect. 4 (Proposition 4.1) that at \(z=\infty \) the matrix R(z) has the Taylor expansion (valid for \(|z|>1\)) of the form \(R(z) =T^{-1}R^{[\infty ]}(z)T\) and the Poincaré asymptotic expansion \(R(z) \sim T^{-1} R^{[0]}(z)T \) at \(z=0\) valid in the sector \(0<\arg z<2\pi \). Here T is the constant matrix

$$\begin{aligned} T= \begin{pmatrix} 1 &{} 0 \\ 0 &{} \frac{h_{N-1}^{\mathsf{J}}}{2 \pi i} \frac{1}{(\alpha +\beta +2N)(\alpha +\beta +2N-1)} \end{pmatrix}, \end{aligned}$$
(1.11)

with \(h_\ell ^{\mathsf{J}}\) given in (4.2), and the series \(R^{[\infty ]}(z),R^{[0]}(z)\) are

$$\begin{aligned}&R^{[\infty ]}(z)\nonumber \\&\quad =\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}+\sum _{\ell \ge 0}\frac{1}{z^{\ell +1}} \frac{1}{\alpha +\beta +2N}\begin{pmatrix} \ell A_{\ell }(N) &{} N (\alpha +N) (\beta +N) (\alpha +\beta +N) B_\ell (N+1) \\ - B_\ell (N) &{} -\ell A_{\ell }(N) \end{pmatrix}, \end{aligned}$$
(1.12)
$$\begin{aligned}&R^{[0]}(z)\nonumber \\&\quad =\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}+\sum _{\ell \ge 0} \frac{z^\ell }{\alpha +\beta +2N}\begin{pmatrix} (\ell +1) \widetilde{A}_{\ell }(N) &{} -N (\alpha +N) (\beta +N) (\alpha +\beta +N) \widetilde{B}_\ell (N+1) \\ \widetilde{B}_\ell (N) &{} -(\ell +1) \widetilde{A}_{\ell }(N) \end{pmatrix}, \end{aligned}$$
(1.13)

where

(1.14)

and

$$\begin{aligned}&\widetilde{A}_\ell (N)=\frac{(\alpha +\beta +2N-\ell )_{2\ell +1}}{(\alpha -\ell )_{2\ell +1}}A_\ell (N),\nonumber \\&\widetilde{B}_\ell (N)=\frac{(\alpha +\beta +2N-1-\ell )_{2\ell +1}}{(\alpha -\ell )_{2\ell +1}} B_\ell (N),\quad \ell \ge 0. \end{aligned}$$
(1.15)

Here \({}_4F_3\) is the generalized hypergeometric function, and we use the rising factorial

$$\begin{aligned} (s)_k=s(s+1)\cdots (s+k-1). \end{aligned}$$
(1.16)

For example, the first few terms read

$$\begin{aligned} A_1(N)&= \frac{N (\alpha +N) (\beta +N) (\alpha +\beta +N)}{(\alpha +\beta +2N -1) (\alpha +\beta +2N) (\alpha +\beta +2N +1)}, \\ B_0(N)&= \frac{1}{\alpha +\beta +2 N-1}, \\ B_1(N)&= \frac{ (\alpha -1) (\alpha +\beta )+2N (\alpha +\beta -1)+2N^2}{(\alpha +\beta +2N -2) (\alpha +\beta +2N -1) (\alpha +\beta +2 N )}. \end{aligned}$$

Since the constant conjugation by T in (1.11) of the matrix R(z) does not affect the formulæ of Theorem 1.5 (see also Sect. 4) we obtain the following corollary, which provides explicit formulæ for the generating functions of the correlators.

Corollary 1.6

Let \(R^{[\infty ]}(z)\) and \(R^{[0]}(z)\) be the explicit series given in (1.12) and (1.13). The one-point generating functions (1.10) of the JUE are

$$\begin{aligned} {\mathscr {F}}_{1,\infty }(z)&=\frac{\alpha +\beta +2N}{z(1-z)}\int _\infty ^z\left( 1-R_{1,1}^{[\infty ]}(w)\right) \mathrm dw-\frac{N(\alpha +N)}{z(1-z)(\alpha +\beta +2N)}, \\ {\mathscr {F}}_{1,0}(z)&=\frac{\alpha +\beta +2N}{z(1-z)}\int _0^z\left( 1-R_{1,1}^{[0]}(w)\right) \mathrm dw-\frac{N}{1-z}, \end{aligned}$$

where \(R_{1,1}^{[\infty ]},R_{1,1}^{[0]}\) denote the (1, 1)-entry of \(R^{[\infty ]},R^{[0]}\) respectively. The multi-point generating functions (1.10) are

$$\begin{aligned} {\mathscr {F}}_{2,p}^{{\mathsf{c}}}(z_1,z_2)&=\frac{\mathrm {tr}\,\left( R^{[p]}(z_1)R^{[p]}(z_2)\right) -1}{(z_1-z_2)^2},\\ {\mathscr {F}}_{\ell ,p}^{{\mathsf{c}}}(z_1,\dots ,z_\ell )&=-\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}((\ell ))}\frac{\mathrm {tr}\,\left( R^{[p]}(z_{i_1})\dots R^{[p]}(z_{i_\ell })\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_\ell }-z_{i_1})},\quad \ell \ge 3,\qquad p=0,\infty . \end{aligned}$$

The proof is in Sect. 4.2, and is obtained from the formulæ of Theorem 1.5 by expansion at \(z_i\rightarrow \infty ,0\). In this corollary, the formulæ on the right hand side are interpreted as power series expansions at \(z=0\) or \(z=\infty \); to this end we remark that for \(\ell \ge 2\) these series are well defined, as it follows from the fact that the corresponding analytic functions are holomorphic in \(({\mathbb {C}}\setminus I)^\ell \) and in particular regular along the diagonals \(z_a=z_b\) for \(a\not =b\) (see also Remark 3.2 and Lemma 3.3).

The coefficients of \(R^{[0]}(z)\) and \(R^{[\infty ]}(z)\) are rational functions of \(N,\alpha ,\beta \) and we conclude that JUE correlators extend to rational functions of \(N,\alpha ,\beta \).

Examining more closely the formula for \({\mathscr {F}}_{1,\infty }\) we see that

$$\begin{aligned} (1-z){\mathscr {F}}_{1,\infty }(z)=-\frac{N}{z}+\sum _{k\ge 0}\frac{\left\langle \mathrm {tr}\,X^k\right\rangle -\left\langle \mathrm {tr}\,X^{k+1}\right\rangle }{z^{k+1}}, \end{aligned}$$

which by the explicit expansion of \(R^{[\infty ]}(z)\) in (1.12) implies

$$\begin{aligned} \left\langle \mathrm {tr}\,X^k\right\rangle -\left\langle \mathrm {tr}\,X^{k+1}\right\rangle =A_k(N), \end{aligned}$$
(1.17)

where \(A_k(N)\) is defined in (1.14). Reasoning in the same way for \({\mathscr {F}}_{1,0}(z)\) we obtain

$$\begin{aligned} \left\langle \mathrm {tr}\,X^{-k-1}\right\rangle -\left\langle \mathrm {tr}\,X^{-k}\right\rangle =\widetilde{A}_k (N) = \frac{(\alpha +\beta +2N-k)_{2k+1}}{(\alpha -k)_{2k+1}} \left( \left\langle \mathrm {tr}\,X^k\right\rangle -\left\langle \mathrm {tr}\,X^{k+1}\right\rangle \right) ,\nonumber \\ \end{aligned}$$
(1.18)

where \(\widetilde{A}_k(N)\) is defined in (1.15). Equations (1.17) and (1.18) agree with the results of [18].
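Relation (1.17) is easy to test numerically for small k: for integer \(\alpha ,\beta \) one can sample the JUE through the Wishart construction of Sect. 1 and compare the Monte Carlo estimate of \(\left\langle \mathrm {tr}\,X\right\rangle -\left\langle \mathrm {tr}\,X^2\right\rangle \) with the explicit expression for \(A_1(N)\) displayed above. A minimal illustrative sketch:

```python
# Monte Carlo check of (1.17) for k = 1 (illustrative, integer alpha and beta).
import numpy as np

def jue_eigenvalues(N, alpha, beta, rng):
    A = rng.standard_normal((alpha + N, N)) + 1j * rng.standard_normal((alpha + N, N))
    B = rng.standard_normal((beta + N, N)) + 1j * rng.standard_normal((beta + N, N))
    WA, WB = A.conj().T @ A, B.conj().T @ B
    w, U = np.linalg.eigh(WA + WB)
    S = U @ np.diag(w ** -0.5) @ U.conj().T
    return np.linalg.eigvalsh(S @ WA @ S)

N, alpha, beta = 4, 3, 2
rng = np.random.default_rng(1)
evs = np.array([jue_eigenvalues(N, alpha, beta, rng) for _ in range(40000)])
mc = np.mean(np.sum(evs, axis=1) - np.sum(evs ** 2, axis=1))   # <tr X> - <tr X^2>

s = alpha + beta + 2 * N
A1 = N * (alpha + N) * (beta + N) * (alpha + beta + N) / ((s - 1) * s * (s + 1))
print(mc, A1)   # the two values should agree within the Monte Carlo error
```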

Remark 1.7

The coefficients \(A_\ell (N),B_\ell (N)\) can be expressed in terms of Wilson polynomials [40, 50], which are defined by

(1.19)

for more details see Proposition 4.4. Thus the formulæ of Corollary 1.6 extend the connection between JUE moments \(\left\langle \mathrm {tr}\,X^k\right\rangle \) and Wilson polynomials described in [18] (see also [42, 43]) to the JUE multi-point correlators \(\left\langle \mathrm {tr}\,X^{k_1}\cdots \mathrm {tr}\,X^{k_\ell }\right\rangle \).

Remark 1.8

(JUE mixed correlators) We could consider more general generating functions as follows; take \(q,r,s\ge 0\) with \(q+r+s>0\) and expand the cumulant function

$$\begin{aligned} {\mathscr {C}}_{q+r+s}(z_1,\dots ,z_q,w_1,\dots ,w_r,y_1,\dots ,y_{s}) \end{aligned}$$

for the Jacobi measure as \(z_i\rightarrow \infty ,w_i\rightarrow 0,y_i\rightarrow 1\), to obtain the generating function

$$\begin{aligned}&\sum _{{\begin{matrix}k_1,\dots ,k_q\ge 1\\ i_1,\dots ,i_r\ge 1\\ j_1,\dots ,j_s\ge 1\end{matrix}}}\int _{{\mathcal {H}}_N(0,1)}\mathrm {tr}\,X^{k_1}\cdots \mathrm {tr}\,X^{k_q}\mathrm {tr}\,X^{-i_1}\cdots \mathrm {tr}\,X^{-i_r}\mathrm {tr}\,({\mathbf {1}}-X)^{j_1}\cdots \mathrm {tr}\,({\mathbf {1}}-X)^{j_s}\mathrm dm_N^{\mathsf{J}}(X) \\&\quad \times \frac{w_1^{i_1-1}\cdots w_r^{i_r-1}(y_1-1)^{j_1-1}\cdots (y_s-1)^{j_s-1}}{z_1^{k_1+1}\cdots z_q^{k_q+1}}. \end{aligned}$$

It is then clear that we can compute the coefficients of such series in terms of the matrix series \(R^{[0]},R^{[\infty ]}\), and thus of Wilson polynomials, by the formulæ of Theorem 1.5; note that the expansion of R(z) at \(z=1\) is obtained from \(R^{[0]}\) by exchanging \(\alpha \) with \(\beta \) and z with \(1-z\).

Example 1.9

From the formulæ of Corollary 1.6 we can compute

$$\begin{aligned}&\left\langle (\mathrm {tr}\,X)^3 \right\rangle ^{\mathsf{c}}\\&= \frac{2N(\alpha +\beta )(\beta -\alpha )(\alpha +N)(\beta +N)(\alpha +\beta +N)}{(\alpha +\beta +2N-2)(\alpha +\beta +2N-1)(\alpha +\beta +2N)^3(\alpha +\beta +2N+1)(\alpha +\beta +2N+2)}. \end{aligned}$$

With the substitution \(\alpha =(c_\alpha -1)N\) and \(\beta =(c_\beta -1)N\) we have the large N expansion

$$\begin{aligned} \left\langle \left( \mathrm {tr}\,X\right) ^3 \right\rangle ^{\mathsf{c}}&= \frac{1}{N}\left[ c_\alpha \left( \frac{2}{(c_\alpha +c_\beta )^3}-\frac{6}{(c_\alpha +c_\beta )^4}+\frac{4}{(c_\alpha +c_\beta )^5}\right) \right. \\&+c_\alpha ^2 \left( -\frac{6}{(c_\alpha +c_\beta )^4}+\frac{18}{(c_\alpha +c_\beta )^5}-\frac{12}{(c_\alpha +c_\beta )^6}\right) \\&\left. + c_\alpha ^3 \left( \frac{4}{(c_\alpha +c_\beta )^5}-\frac{12}{(c_\alpha +c_\beta )^6}+\frac{8}{(c_\alpha +c_\beta )^7}\right) \right] +{\mathcal {O}}\left( \frac{1}{N^3}\right) . \end{aligned}$$

Matching the coefficients as in Theorem 1.3 we get the values for \(h_{g=0}^{\mathsf{c}}(\lambda =(1,1,1),\mu ,\nu )\) (the connected Hurwitz numbers defined in Remark 1.4) reported in the following table;

$$\begin{aligned}{}\begin{array}[t]{c|c|c|c} &{} \nu =(3) &{} \nu =(2,1) &{} \nu =(1,1,1)\\ \hline \mu =(3) &{} 2 &{} 6 &{} 4\\ \hline \mu =(2,1) &{} 6 &{} 18 &{} 12 \\ \hline \mu =(1,1,1) &{} 4 &{} 12 &{} 8\\ \end{array} \end{aligned}$$

For example, the numbers in the first row (\(\mu =(3)\)) can be read from the following factorizations in \({\mathfrak {S}}_3\). To list them let us first note that we have \(\mathrm {cyc}(\lambda )=\{\mathrm{Id}\}\) and \(\mathrm {cyc}(\mu )=\{(123),(132)\}\); therefore for \(\nu =(3)\) we have 2 factorizations (\(r=\text{ number } \text{ of } \text{ transpositions }=0\))

$$\begin{aligned} (123) (132) = \mathrm{Id}, \qquad (132) (123) = \mathrm{Id}, \end{aligned}$$

for \(\nu =(2,1)\) (\(\mathrm {cyc}(\nu )=\{(12),(23),(13)\}\)) we have 6 factorizations (\(r=1\))

$$\begin{aligned}&(123)(12)(13)=\mathrm{Id},&(123)(13)(23)=\mathrm{Id},&(123)(23)(12)=\mathrm{Id},\\&(132)(13)(12)=\mathrm{Id},&(132)(12)(23)=\mathrm{Id},&(132)(23)(13)=\mathrm{Id}, \end{aligned}$$

and for \(\nu =(1,1,1)\) we have the 4 factorizations (\(r=2\), here the monotone condition plays a role)

$$\begin{aligned}&(123) \mathrm{Id}(12)(13) = \mathrm{Id},&(123) \mathrm{Id}(13)(23) = \mathrm{Id}, \\&(132) \mathrm{Id}(12)(23) = \mathrm{Id},&(132) \mathrm{Id}(23)(13) = \mathrm{Id}. \end{aligned}$$

Similarly we can compute from Corollary 1.6

$$\begin{aligned} \left\langle \left( \mathrm {tr}\,X^{-1}\right) ^3\right\rangle ^{\mathsf{c}}&= \frac{2 N (\alpha +N) (\alpha +2 N) (\beta +N) (\alpha +\beta +N) (\alpha +2\beta +2N)}{(\alpha -2) (\alpha -1) \alpha ^3 (\alpha +1) (\alpha +2)}\\&=\frac{1}{N}\left[ \left( \frac{2}{(c_\alpha -1)^3}+\frac{6}{(c_\alpha -1)^4}+\frac{4}{(c_\alpha -1)^5}\right) (c_\alpha +c_\beta -1)\right. \\&\qquad -\left( \frac{6}{(c_\alpha -1)^4}+\frac{18}{(c_\alpha -1)^5}+\frac{12}{(c_\alpha -1)^6}\right) (c_\alpha +c_\beta -1)^2\\&\qquad \left. +\left( \frac{4}{(c_\alpha -1)^5}+\frac{12}{(c_\alpha -1)^6}+\frac{8}{(c_\alpha -1)^7}\right) (c_\alpha +c_\beta -1)^3 \right] +{\mathcal {O}}\left( \frac{1}{N^3}\right) \end{aligned}$$

and from Theorem 1.3 we recognize the connected Hurwitz numbers tabulated above.
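As a quick sanity check of the first closed formula of this example, note that \(\left\langle (\mathrm {tr}\,X)^3\right\rangle ^{\mathsf{c}}\) is the third cumulant of the random variable \(\mathrm {tr}\,X\), which can again be estimated by Monte Carlo through the Wishart construction (integer \(\alpha ,\beta \)); an illustrative sketch:

```python
# Monte Carlo check of the closed formula for <(tr X)^3>^c (illustrative).
import numpy as np

def sample_trace(N, alpha, beta, rng):
    A = rng.standard_normal((alpha + N, N)) + 1j * rng.standard_normal((alpha + N, N))
    B = rng.standard_normal((beta + N, N)) + 1j * rng.standard_normal((beta + N, N))
    WA, WB = A.conj().T @ A, B.conj().T @ B
    w, U = np.linalg.eigh(WA + WB)
    S = U @ np.diag(w ** -0.5) @ U.conj().T
    return np.trace(S @ WA @ S).real

N, alpha, beta = 3, 2, 1
rng = np.random.default_rng(2)
traces = np.array([sample_trace(N, alpha, beta, rng) for _ in range(200000)])
mc = np.mean((traces - traces.mean()) ** 3)          # third cumulant of tr X

s = alpha + beta + 2 * N
exact = (2 * N * (alpha + beta) * (beta - alpha) * (alpha + N) * (beta + N)
         * (alpha + beta + N)) / ((s - 2) * (s - 1) * s ** 3 * (s + 1) * (s + 2))
print(mc, exact)   # both should be about -4.8e-4 for these parameters
```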

Remark 1.10

(Laguerre limit) There is a scaling limit of the JUE correlators to the LUE correlators; if \(k_1,\cdots ,k_\ell \) are arbitrary integers we have

$$\begin{aligned} \lim _{\beta \rightarrow +\infty }\beta ^{k_1+\cdots +k_\ell }\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{k_j}\right\rangle =\frac{ \int _{{\mathcal {H}}_N(0,+\infty )}\left( \prod _{j=1}^\ell \mathrm {tr}\,X^{k_j}\right) {\det }^\alpha (X)\exp (-\mathrm {tr}\,X)\mathrm dX}{\int _{{\mathcal {H}}_N(0,+\infty )}{\det }^\alpha (X)\exp (-\mathrm {tr}\,X)\mathrm dX}. \end{aligned}$$

Therefore the results of the present work about the JUE directly imply analogous results for the LUE; these results are already known from [17, 31]. See also Remark 2.9.

2 JUE Correlators and Hurwitz numbers

In this section we prove Theorem 1.3. For the proof we will consider the so-called multiparametric weighted Hurwitz numbers; this far-reaching generalization of classical Hurwitz numbers was introduced and related to tau functions of integrable systems in several works by Harnad, Orlov [36], and Guay-Paquet [35], after the impetus of the seminal work of Okounkov [47].

2.1 Multiparametric weighted Hurwitz numbers

Let \({\mathbb {C}}[{\mathfrak {S}}_n]\) be the group algebra of the symmetric group \({\mathfrak {S}}_n\); namely, \({\mathbb {C}}[{\mathfrak {S}}_n]\) consists of formal linear combinations with complex coefficients of permutations of \(\{1,\dots ,n\}\). We shall need two important types of elements of \({\mathbb {C}}[{\mathfrak {S}}_n]\), which we now introduce. For any \(\lambda \vdash n\) denote

$$\begin{aligned} {\mathcal {C}}_\lambda :=\sum _{\pi \in \mathrm {cyc}(\lambda )}\pi , \end{aligned}$$
(2.1)

where we recall that \(\mathrm {cyc}(\lambda )\subset {\mathfrak {S}}_n\) is the conjugacy class of permutations of cycle-type \(\lambda \). It is well known [48] that the set of \({\mathcal {C}}_\lambda \) for \(\lambda \vdash n\) forms a linear basis of the center \(Z({\mathbb {C}}[{\mathfrak {S}}_n])\) of the group algebra.

The second class of elements consists of the Young-Jucys-Murphy (YJM) elements [39, 46] \({\mathcal {J}}_a\), for \(a=1,\dots ,n\), defined as

$$\begin{aligned} {\mathcal {J}}_1=0,\quad {\mathcal {J}}_a=(1,a)+(2,a)+\dots +(a-1,a),\ 2\le a\le n, \end{aligned}$$

denoting by \((a,b)\) (with \(a<b\)) the transposition of \(\{1,\dots ,n\}\) switching \(a,b\) and fixing everything else.

Although the YJM elements are not individually central, they commute amongst themselves, and symmetric polynomials in n variables evaluated at \({\mathcal {J}}_1,\dots ,{\mathcal {J}}_n\) generate \(Z({\mathbb {C}}[{\mathfrak {S}}_n])\). Indeed the following relation [39] takes place in \(Z({\mathbb {C}}[{\mathfrak {S}}_n])[\epsilon ]\);

$$\begin{aligned} \prod _{a=1}^n(1+\epsilon {\mathcal {J}}_a)=\sum _{\lambda \vdash n}\epsilon ^{n-\ell (\lambda )}{\mathcal {C}}_\lambda . \end{aligned}$$
(2.2)
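Identity (2.2) can be verified directly for small n by expanding the left hand side in the group algebra. The following illustrative Python sketch represents elements of \({\mathbb {C}}[{\mathfrak {S}}_n]\) as dictionaries mapping permutations to polynomial coefficients in \(\epsilon \) and checks that every permutation \(\sigma \) appears with coefficient \(\epsilon ^{n-c(\sigma )}\), \(c(\sigma )\) being its number of cycles.

```python
# Direct check of identity (2.2) in the group algebra of S_n (illustrative, small n).
from itertools import permutations
import sympy as sp

def compose(s, t):                    # apply t first, then s
    return tuple(s[t[i]] for i in range(len(t)))

def multiply(x, y):                   # product in the group algebra C[S_n]
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            r = compose(p, q)
            out[r] = sp.expand(out.get(r, 0) + cp * cq)
    return out

def num_cycles(p):
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            j = i
            while not seen[j]:
                seen[j], j = True, p[j]
    return c

n, eps = 4, sp.Symbol("epsilon")
identity = tuple(range(n))
element = {identity: sp.Integer(1)}
for a in range(1, n):                 # J_1 = 0, so its factor (1 + eps*J_1) = 1 is skipped
    factor = {identity: sp.Integer(1)}
    for b in range(a):                # J_a = (1,a) + ... + (a-1,a), here 0-indexed
        img = list(range(n))
        img[a], img[b] = b, a
        factor[tuple(img)] = eps
    element = multiply(element, factor)

# (2.2): the coefficient of each permutation p equals eps^(n - number of cycles of p)
assert len(element) == sp.factorial(n)
assert all(sp.simplify(c - eps ** (n - num_cycles(p))) == 0 for p, c in element.items())
print("identity (2.2) verified for n =", n)
```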

With these preliminaries we are ready to introduce the class of multiparametric Hurwitz numbers [10, 35, 36] which we need. Fix the real parameters \(\gamma _1,\dots ,\gamma _L\) and \(\delta _1,\dots ,\delta _M\) (\(L,M\ge 0\)) and collect them into the rational function

$$\begin{aligned} G(z):=\frac{\prod _{i=1}^L(1+\gamma _iz)}{\prod _{j=1}^M(1-\delta _jz)}. \end{aligned}$$
(2.3)

Then, the (rationally weighted) multiparametric (single) Hurwitz numbers \(H_G^{d}(\lambda )\), associated to the function G in (2.3) and labeled by the integer \(d\ge 1\) and by the partition \(\lambda \vdash n\), are defined by

$$\begin{aligned} H_G^{d}(\lambda ):=\frac{1}{z_\lambda }[\epsilon ^d {\mathcal {C}}_\lambda ]\prod _{a=1}^nG\left( \epsilon {\mathcal {J}}_a\right) , \end{aligned}$$
(2.4)

where the notation \([\epsilon ^d {\mathcal {C}}_\lambda ]\) denotes the coefficient in front of \(\epsilon ^d{\mathcal {C}}_\lambda \) in the expansion of \(\prod _{a=1}^nG\left( \epsilon {\mathcal {J}}_a\right) \in Z({\mathbb {C}}[{\mathfrak {S}}_n])[[\epsilon ]]\) in the basis \(\{{\mathcal {C}}_\lambda \}\); to compute the expression \(G\left( \epsilon {\mathcal {J}}_a\right) \in {\mathbb {C}}[{\mathfrak {S}}_n][[\epsilon ]]\), the denominators in (2.3) are to be understood as \((1-\delta _jz)^{-1}=\sum _{k\ge 0}\delta _j^kz^k\).

2.2 Generating functions of multiparametric Hurwitz numbers in the Schur basis

The following result (see [36]) expresses the generating functions of multiparametric weighted Hurwitz numbers in the Schur basis. In this context, the latter is regarded as the basis \(\{s_\lambda ({\mathbf {t}})\}\) (\(\lambda \) running in the set of all partitions) of the space of weighted homogeneous polynomials in \({\mathbf {t}}=(t_1,t_2,\dots )\), with \(\deg t_k=k\), whose elements are

$$\begin{aligned} s_\lambda ({\mathbf {t}})=\det \left[ h_{\lambda _i-i+j}({\mathbf {t}})\right] _{i,j=1}^{\ell (\lambda )}, \end{aligned}$$
(2.5)

where the complete homogeneous symmetric polynomials \(h_k({\mathbf {t}})\) are defined by the generating series

$$\begin{aligned} \sum _{k\ge 0}w^kh_k({\mathbf {t}})=\exp \left( \sum _{k\ge 1}\frac{t_k}{k}w^k\right) . \end{aligned}$$

In the following we shall denote \({\mathcal {P}}\) the set of all partitions.
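For explicit computations it is convenient to have the Schur polynomials (2.5) at hand. The following illustrative Python sketch reads the \(h_k({\mathbf {t}})\) off the generating series above and evaluates the determinant (2.5):

```python
# Schur polynomials s_lambda(t) from formula (2.5) (illustrative sketch).
import sympy as sp

def complete_homogeneous(max_deg, t):
    """h_0(t), ..., h_{max_deg}(t) from the generating series exp(sum t_k w^k / k)."""
    w = sp.Symbol("w")
    gen = sp.exp(sum(t[k] * w ** k / k for k in range(1, max_deg + 1)))
    poly = sp.series(gen, w, 0, max_deg + 1).removeO()
    return [sp.expand(poly.coeff(w, k)) for k in range(max_deg + 1)]

def schur(lam, t):
    """s_lambda(t) = det[h_{lambda_i - i + j}(t)], i,j = 1,...,len(lambda)."""
    ell, n = len(lam), sum(lam)
    h = complete_homogeneous(n, t)
    def entry(i, j):
        k = lam[i] - i + j            # = lambda_{i+1} - (i+1) + (j+1), 0-based i, j
        return h[k] if k >= 0 else sp.Integer(0)
    return sp.expand(sp.Matrix(ell, ell, entry).det())

t = {k: sp.Symbol(f"t{k}") for k in range(1, 8)}
print(schur((1,), t))      # t1
print(schur((2,), t))      # t1**2/2 + t2/2
print(schur((1, 1), t))    # t1**2/2 - t2/2
print(schur((2, 1), t))    # t1**3/3 - t3/3
```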

Proposition 2.1

([36]) The generating function

$$\begin{aligned} \tau _G(\epsilon ;{\mathbf {t}})=\sum _{d\ge 1}\epsilon ^d\sum _{\lambda \in {\mathcal {P}}}H_G^d(\lambda )\prod _{i=1}^{\ell (\lambda )}t_{\lambda _i} \end{aligned}$$
(2.6)

of multiparametric weighted Hurwitz numbers (2.4) associated to the rational function (2.3) is equivalently expressed as

$$\begin{aligned} \tau _G(\epsilon ;{\mathbf {t}})=\sum _{\lambda \in {\mathcal {P}}}\frac{\dim \lambda }{|\lambda |!}r^{(G,\epsilon )}_\lambda s_\lambda ({\mathbf {t}}), \end{aligned}$$
(2.7)

where \(s_\lambda ({\mathbf {t}})\) are the Schur polynomials (2.5) and the coefficients are given explicitly by

$$\begin{aligned} r^{(G,\epsilon )}_\lambda =\prod _{(i,j)\in \lambda }G(\epsilon (j-i)), \end{aligned}$$
(2.8)

\(\dim \lambda \) being the dimension of the irreducible representation of \({\mathfrak {S}}_{|\lambda |}\) associated with \(\lambda \).

Before the proof we give a couple of remarks.

  1.

    In (2.8) and below we use the notation \((i,j)\in \lambda \) where the partition \(\lambda \) is identified with its diagram, i.e. the set of \((i,j)\in {\mathbb {Z}}^2\) satisfying \(1\le i\le \ell (\lambda )\), \(1\le j\le \lambda _i\). For example, the diagram of the partition \(\lambda =(4,2,2,1)\vdash 9\) is depicted below;

    $$\begin{aligned} \begin{array}{c|cccc} &{} j=1 &{} j=2 &{} j=3 &{} j=4 \\ \hline i=1 &{} \bullet &{}\bullet &{}\bullet &{}\bullet \\ i=2 &{} \bullet &{}\bullet &{}&{} \\ i=3 &{} \bullet &{}\bullet &{}&{} \\ i=4 &{} \bullet &{}&{}&{} \end{array} \end{aligned}$$
    (2.9)
  2.

    There exist several equivalent formulæ for \(\dim \lambda \), including the well-known hook-length formula; for later convenience we recall the expression

    $$\begin{aligned} \frac{\dim \lambda }{|\lambda |!}= \frac{\prod _{1 \le i < j \le N}(\lambda _i-\lambda _j+j-i)}{\prod _{k=1}^N(\lambda _k-k+N)!}, \end{aligned}$$
    (2.10)

    valid for all \(N\ge \ell (\lambda )\), setting \(\lambda _i=0\) for all \(\ell (\lambda )<i\le N\).
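Both the contents \(j-i\) entering (2.8) and the ratio \(\dim \lambda /|\lambda |!\) in (2.10) are immediate to compute; the following illustrative Python sketch does so and checks the classical identity \(\sum _{\lambda \vdash n}(\dim \lambda )^2=n!\) as well as the contents of the diagram (2.9).

```python
# dim(lambda)/|lambda|! via (2.10) and the contents j - i of a diagram (illustrative).
from math import factorial
from fractions import Fraction

def partitions(n, largest=None):
    """All partitions of n as non-increasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dim_over_factorial(lam, N=None):
    """dim(lambda)/|lambda|! from formula (2.10), valid for any N >= len(lambda)."""
    N = len(lam) if N is None else N
    lam = list(lam) + [0] * (N - len(lam))
    num, den = 1, 1
    for i in range(N):
        for j in range(i + 1, N):
            num *= lam[i] - lam[j] + j - i
    for k in range(N):
        den *= factorial(lam[k] - (k + 1) + N)
    return Fraction(num, den)

def contents(lam):
    """The contents j - i of the cells (i,j) of the diagram of lambda (1-based)."""
    return [j - i for i, row in enumerate(lam, start=1) for j in range(1, row + 1)]

n = 6
dims = [dim_over_factorial(lam) * factorial(n) for lam in partitions(n)]
assert sum(d * d for d in dims) == factorial(n)
print(contents((4, 2, 2, 1)))    # [0, 1, 2, 3, -1, 0, -2, -1, -3], cf. (2.9)
```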

Proof of Proposition 2.1

We need a few preliminaries. First we recall that \(Z\left( {\mathbb {C}}[{\mathfrak {S}}_n]\right) \) is a semi-simple commutative algebra; a basis of idempotents is given by (see, e.g. [48])

$$\begin{aligned} {\mathcal {E}}_\lambda =\frac{\dim \lambda }{|\lambda |!}\sum _{\mu \vdash n}\chi _\lambda ^\mu {\mathcal {C}}_\mu , \end{aligned}$$

where \(\chi _\lambda ^\mu \) are the characters of the symmetric group and \({\mathcal {C}}_\mu \) are given in (2.1). Namely

$$\begin{aligned} {\mathcal {E}}_\lambda {\mathcal {E}}_{\lambda '}={\left\{ \begin{array}{ll} {\mathcal {E}}_\lambda &{}\lambda =\lambda '\\ 0&{}\lambda \not =\lambda '. \end{array}\right. } \end{aligned}$$
(2.11)

For any symmetric polynomial \(p(y_1,\dots ,y_n)\) in n variables we have already mentioned that \(p({\mathcal {J}}_1,\dots ,{\mathcal {J}}_n)\) belongs to \(Z\left( {\mathbb {C}}[{\mathfrak {S}}_n]\right) \); central elements are diagonal on the basis of idempotents and it is proven in [39] that

$$\begin{aligned} p({\mathcal {J}}_1,\dots ,{\mathcal {J}}_n)\mathcal E_\lambda =p\left( \{j-i\}_{(i,j)\in \lambda }\right) {\mathcal {E}}_\lambda , \end{aligned}$$
(2.12)

where in the right hand side we denote \(p\left( \{j-i\}_{(i,j)\in \lambda }\right) \) the evaluation of the symmetric polynomial p at the n values of \(j-i\) for \((i,j)\in {\mathbb {Z}}^2\) in the diagram of \(\lambda \vdash n\); in the example \(\lambda =(4,2,2,1)\vdash 9\) above, see (2.9), this denotes the evaluation \(p(0,1,2,3,-1,0,-2,-1,-3)\).

We are ready for the proof proper. First note that by (2.12) and (2.8) we have

$$\begin{aligned} \left[ \prod _{a=1}^nG(\epsilon {\mathcal {J}}_a)\right] \mathcal E_\lambda =r_\lambda ^{(G,\epsilon )}{\mathcal {E}}_\lambda , \end{aligned}$$

which implies, using (2.11), that

$$\begin{aligned} \prod _{a=1}^nG(\epsilon {\mathcal {J}}_a)=\sum _{\lambda \vdash n}r_\lambda ^{(G,\epsilon )}{\mathcal {E}}_\lambda . \end{aligned}$$

By the definition of \(H_G^d(\mu )\) in (2.4) we can rewrite the last identity as

$$\begin{aligned} \sum _{\mu \vdash n}\sum _{d\ge 1}\epsilon ^d z_\mu H_G^d(\mu ){\mathcal {C}}_\mu =\sum _{\lambda \vdash n}r_\lambda ^{(G,\epsilon )}\mathcal E_\lambda =\sum _{\lambda ,\mu \vdash n}\frac{\dim \lambda }{|\lambda |!}r_\lambda ^{(G,\epsilon )}\chi _\lambda ^\mu {\mathcal {C}}_\mu . \end{aligned}$$

Since \({\mathcal {C}}_\mu \) form a basis of \(Z\left( {\mathbb {C}}[{\mathfrak {S}}_n]\right) \) we get that for any partition \(\mu \)

$$\begin{aligned} \sum _{d\ge 1}\epsilon ^d H_G^d(\mu )=\sum _{\lambda \vdash |\mu |}\frac{\dim \lambda }{|\lambda |!}r_\lambda ^{(G,\epsilon )}\frac{\chi _\lambda ^\mu }{z_\mu }. \end{aligned}$$

Multiplying this identity by \(\prod _{i=1}^{\ell (\mu )}t_{\mu _i}\) and summing over all partitions \(\mu \), on the left we obtain (2.6) and on the right, thanks to the well-known identity [45]

$$\begin{aligned} s_\lambda ({\mathbf {t}})=\sum _{\mu \vdash |\lambda |}\frac{\chi ^\mu _\lambda }{z_\mu }\prod _{i=1}^{\ell (\mu )}t_{\mu _i}, \end{aligned}$$

we obtain (2.7). The proof is complete. \(\square \)

Remark 2.2

This result is used by the authors of [36] to prove that the generating function \(\tau _G(\epsilon ;{\mathbf {t}})\) is a one-parameter family in \(\epsilon \) of Kadomtsev-Petviashvili tau functions in the times \({\mathbf {t}}\); a tau function such that the coefficients of the Schur expansion have the form (2.8) is termed hypergeometric tau function. It is also worth remarking that the theorem stated here is a reduction of a more general result, proved in [35], dealing with generating functions of double (weighted) Hurwitz numbers. In this general setting, the corresponding integrable hierarchy is the 2D Toda hierarchy.

2.3 JUE partition functions

Let us introduce the formal generating functions

$$\begin{aligned} Z_N^\pm ({\mathbf {u}})&:=\int _{{\mathcal {H}}_N(0,1)}\exp \left( \sum _{k\ge 1}\frac{u_k}{k}\mathrm {tr}\,X^{\pm k}\right) \mathrm dm^{\mathsf{J}}_N(X)=\sum _{\lambda \in {\mathcal {P}}}\frac{\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{\pm \lambda _j}\right\rangle }{z_\lambda }\prod _{i=1}^{\ell (\lambda )}u_{\lambda _i}, \end{aligned}$$
(2.13)

of JUE correlators; the sum on the right hand side runs over all partitions \(\lambda \) and is a formal power series in \(\mathbf{u}\), with the combinatorial factor \(z_\lambda \) defined in (1.3). We call \(Z^+_N({\mathbf {u}})\) (resp. \(Z^-_N({\mathbf {u}})\)) the positive (resp. negative) JUE partition function. Although it will not be needed in the following, we mention that these partition functions are Toda tau functions in the times \(u_1,u_2,\dots \) [1, 2, 19].

Our goal in this paragraph is to show that the JUE partition functions can be expressed in the form (2.7) for appropriate choices of G (see Corollary 2.7).

The first step is to expand the JUE partition functions in the Schur basis; this is achieved by the following well-known general lemma, whose proof we report for the reader’s convenience. The idea of expanding a hermitian matrix model partition function over the Schur basis has been recently used in the computation of correlators in [38].

We first introduce the following notations

$$\begin{aligned} \Delta (\underline{x})=\prod _{1\le i< j\le N}(x_i-x_j)=\det \left( x^{N-j}_i\right) _{i,j=1}^N \end{aligned}$$

for the Vandermonde determinant and

$$\begin{aligned} \chi _\lambda (\underline{x}):=\frac{\det \left[ x^{N-i+\lambda _i}_j\right] _{i,j=1}^N}{\Delta (\underline{x})} \end{aligned}$$
(2.14)

for the characters of \(\mathrm{GL}_N\); again, we set \(\lambda _i=0\) for all \(\ell (\lambda )<i\le N\).

Lemma 2.3

For any potential V(x) (\(x\in I\)) we have

$$\begin{aligned} \frac{\int _{{\mathcal {H}}_N(I)}\exp \mathrm {tr}\,\left( V(X)+\sum _{k\ge 1}\frac{u_k}{k}X^{\pm k}\right) \mathrm dX}{\int _{{\mathcal {H}}_N(I)}\exp \mathrm {tr}\,\left( V(X)\right) \mathrm dX} =\sum _{\lambda \in {\mathcal {P}}:\,\ell (\lambda )\le N}c_{\lambda ,N}^{\pm }s_\lambda (\mathbf{u}), \end{aligned}$$

where the Schur polynomials are defined in (2.5) and the coefficients are

$$\begin{aligned} c_{\lambda ,N}^\pm =\frac{\int _{I^N} \chi _\lambda ({\underline{x}}^{\pm 1}) \Delta ^2({\underline{x}}) \prod _{a=1}^N\exp \left[ V(x_a)\right] \mathrm d^N{\underline{x}}}{\int _{I^N}\Delta ^2(\underline{x})\prod _{a=1}^N\exp \left[ V(x_a)\right] \mathrm d^N {\underline{x}}}. \end{aligned}$$
(2.15)

Here \({\underline{x}}=(x_1,\dots ,x_N)\) and \(\underline{x}^{-1}=(x_1^{-1},\dots ,x_N^{-1})\).

Proof

We have

$$\begin{aligned}&\frac{\int _{{\mathcal {H}}_N(I)}\exp \mathrm {tr}\,\left( V(X)+\sum _{k\ge 1}\frac{u_k}{k}X^{\pm k}\right) \mathrm dX}{\int _{{\mathcal {H}}_N(I)}\exp \mathrm {tr}\,\left( V(X)\right) \mathrm dX}\nonumber \\&\quad =\frac{\int _{I^N}\Delta ^2(\underline{x})\prod _{a=1}^N\exp \left[ V(x_a)+\sum _{k\ge 1}\frac{u_k}{k}x_a^{\pm k}\right] \mathrm d^N {\underline{x}}}{\int _{I^N}\Delta ^2(\underline{x})\prod _{a=1}^N\exp \left[ V(x_a)\right] \mathrm d^N {\underline{x}}}, \end{aligned}$$
(2.16)

where we use the standard decomposition \(\mathrm dX=\Delta ^2(\underline{x})\mathrm d^N{\underline{x}}\mathrm dU\) of the Lebesgue measure into eigenvalues \({\underline{x}}=(x_1,\dots ,x_N)\) and eigenvectors \(U\in \mathrm {U}_N\) of the hermitian matrix \(X=U\,\mathrm {diag}(x_1,\dots ,x_N)\,U^\dagger \), with \(\mathrm dU\) a Haar measure on \(\mathrm {U}_N\) (whose normalization is irrelevant as it cancels in (2.16) between numerator and denominator). The proof follows by an application of the identity

$$\begin{aligned} \exp \left[ \sum _{k\ge 1}\frac{u_k}{k}\left( x_1^{\pm k}+\dots +x_N^{\pm k}\right) \right] =\sum _{\lambda \in {\mathcal {P}}:\,\ell (\lambda )\le N}\chi _\lambda (\underline{x}^{\pm 1})s_\lambda ({\mathbf {u}}), \end{aligned}$$

which is nothing but a form of Cauchy identity, see, e.g. [49]. \(\square \)

Remark 2.4

By applying Andreief identity

$$\begin{aligned} \int _{I^N}\det \left[ f_i(x_j)\right] _{i,j=1}^N\det \left[ g_i(x_j)\right] _{i,j=1}^N\mathrm d^N\underline{x}=N!\det \left[ \int _I f_i(x)g_j(x)\mathrm dx\right] _{i,j=1}^N \end{aligned}$$

it is straightforward to show that the coefficients \(c_{\lambda ,N}\) in (2.15) can also be expressed as

$$\begin{aligned} c_{\lambda ,N}^\pm =\frac{\det \left[ \mathcal M^\pm _{\lambda _i+N-i,N-j}\right] _{i,j=1}^N}{\det \left[ \mathcal M^\pm _{N-i,N-j}\right] _{i,j=1}^N}, \qquad \mathcal M_{i,j}^\pm =\int _I x^{\pm (i+j)}\mathrm {e}^{V(x)}\mathrm dx, \end{aligned}$$

see also [38]. However, for our purposes it is more convenient to work with the representation (2.15).

Applying this general lemma to \(I=[0,1]\) and \(V(x)=\alpha \log x+\beta \log (1-x)\) we can expand the positive and negative JUE partition functions in the Schur basis as

$$\begin{aligned} Z_N^{\pm }({\mathbf {u}})=\sum _{\lambda \in {\mathcal {P}}:\,\ell (\lambda )\le N}c_{\lambda ,N}^\pm s_\lambda ({\mathbf {u}}), \end{aligned}$$
(2.17)

where

$$\begin{aligned} c_{\lambda ,N}^\pm =\frac{\int _{(0,1)^N} \chi _\lambda ({\underline{x}}^{\pm 1}) \Delta ^2({\underline{x}}) \prod _{a=1}^Nx_a^\alpha (1-x_a)^\beta \mathrm d^N\underline{x}}{\int _{(0,1)^N}\Delta ^2(\underline{x})\prod _{a=1}^Nx_a^\alpha (1-x_a)^\beta \mathrm d^N{\underline{x}}}. \end{aligned}$$
(2.18)

For the negative coefficients \(c_{\lambda ,N}^-\) we shall use the following elementary lemma.

Lemma 2.5

For any partition \(\lambda =(\lambda _1,\dots ,\lambda _\ell )\) of length \(\ell \le N\) we have

$$\begin{aligned} \chi _\lambda (\underline{x}^{-1})=\left( \prod _{a=1}^Nx_a^{-\lambda _1}\right) \chi _{\widehat{\lambda }}(\underline{x}), \end{aligned}$$

where \(\widehat{\lambda }\) is the partition of length \(<N\) whose parts are \(\widehat{\lambda }_j=\lambda _1-\lambda _{N-j+1}\).

Proof

The proof follows from the following chain of equalities;

$$\begin{aligned} \chi _\lambda (\underline{x}^{-1})&=\frac{\det \left[ x_i^{-N+j-\lambda _j}\right] _{i,j=1}^N}{\det \left[ x^{-N+j}_i\right] _{i,j=1}^N} =\frac{\det \left[ x_i^{1-j-\lambda _{N-j+1}}\right] _{i,j=1}^N}{\det \left[ x^{1-j}_i\right] _{i,j=1}^N} \\&=\left( \prod _{a=1}^Nx_a^{-\lambda _1}\right) \frac{\det \left[ x_i^{N-j+\lambda _1-\lambda _{N-j+1}}\right] _{i,j=1}^N}{\det \left[ x^{N-j}_i\right] _{i,j=1}^N} =\left( \prod _{a=1}^Nx_a^{-\lambda _1}\right) \chi _{\widehat{\lambda }}({\underline{x}}). \end{aligned}$$

In the first step we have shuffled the columns as \(j\mapsto N-j+1\), then we have multiplied both numerator and denominator by \((x_1\cdots x_N)^{N-1+\lambda _1}\) and factored \(\prod _{a=1}^Nx_a^{\lambda _1}\) out of the denominator, and finally we have applied the definition (2.14). \(\square \)

For the simplification of the coefficients (2.18) we rely on the following Schur-Selberg integral

$$\begin{aligned}&\int _{(0,1)^N}\chi _\lambda (\underline{x})\Delta ^2({\underline{x}})\prod _{a=1}^Nx_a^\alpha (1-x_a)^\beta \mathrm d^N\underline{x}\nonumber \\&\quad = N! \prod _{1 \le i < j \le N}(\lambda _i-\lambda _j+j-i) \prod _{k=1}^N \frac{\Gamma (\beta +k)\Gamma (\alpha +N+\lambda _k-k+1)}{\Gamma (\alpha +\beta +2N+\lambda _k-k+1)}, \end{aligned}$$
(2.19)

for which we refer, e.g. to [45, page 385]. The above allows us to prove the following proposition.

Proposition 2.6

We have

$$\begin{aligned}&c_{\lambda ,N}^+=\frac{\dim \lambda }{|\lambda |!}\prod _{(i,j) \in \lambda } \frac{(N-i+j) (\alpha +N-i+j)}{(\alpha +\beta +2N-i+j)} ,\nonumber \\&c_{\lambda ,N}^-=\frac{\dim \lambda }{|\lambda |!}\prod _{(i,j) \in \lambda } \frac{(N-i+j) (\alpha +\beta +N+i-j)}{(\alpha +i-j)}. \end{aligned}$$
(2.20)

Proof

We start with \(c_{\lambda ,N}^+\); using (2.18), (2.19), and (2.10) we compute

$$\begin{aligned} c_{\lambda ,N}^+&= \frac{\prod _{1 \le i< j \le N}(\lambda _i-\lambda _j+j-i) }{\prod _{1 \le i < j \le N}(j-i)} \prod _{k=1}^N\frac{\Gamma (\alpha +N+\lambda _k-k+1)\Gamma (\alpha +\beta +2N-k+1)}{\Gamma (\alpha +\beta +2N+\lambda _k-k+1)\Gamma (\alpha +N-k+1)}\\&=\frac{\dim \lambda }{|\lambda |!} \prod _{k=1}^{N} \frac{(N-k+1)_{\lambda _k}(\alpha +N-k+1)_{\lambda _k}}{(\alpha +\beta +2N-k+1)_{\lambda _k}}\\&=\frac{\dim \lambda }{|\lambda |!}\prod _{(i,j) \in \lambda } \frac{(N-i+j) (\alpha +N-i+j)}{(\alpha +\beta +2N-i+j)}. \end{aligned}$$

We remind that we are using the notation (1.16) for the rising factorial. For \(c_{\lambda ,N}^-\) we first note that, thanks to Lemma 2.5 and (2.19), we have

$$\begin{aligned}&\int _{(0,1)^N}\chi _\lambda ({\underline{x}}^{-1})\Delta ^2(\underline{x})\prod _{a=1}^Nx_a^\alpha (1-x_a)^\beta \mathrm d^N{\underline{x}}\\&\quad = N!\prod _{1 \le i < j \le N}(\lambda _i-\lambda _j+j-i) \prod _{k=1}^N\frac{\Gamma (\beta +k)\Gamma (\alpha -\lambda _{k}+k)}{\Gamma (\alpha +\beta +N-\lambda _{k}+k)}, \end{aligned}$$

then with similar computations as above we obtain

$$\begin{aligned} c_{\lambda ,N}^-&= \frac{\prod _{1 \le i< j \le N}(\lambda _i-\lambda _j+j-i) }{\prod _{1 \le i < j \le N}(j-i)} \prod _{k=1}^N\frac{\Gamma (\alpha -\lambda _k+k)\Gamma (\alpha +\beta +N+k)}{\Gamma (\alpha +\beta +N-\lambda _k+k)\Gamma (\alpha +k)}\\&=\frac{\dim \lambda }{|\lambda |!} \prod _{k=1}^{N} \frac{(N-k+1)_{\lambda _k}(\alpha +\beta +N-\lambda _k+k)_{\lambda _k}}{(\alpha -\lambda _k+k)_{\lambda _k}}\\&=\frac{\dim \lambda }{|\lambda |!}\prod _{(i,j) \in \lambda } \frac{(N-i+j) (\alpha +\beta +N+i-j)}{(\alpha +i-j)}. \end{aligned}$$

\(\square \)
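The closed formulæ (2.20) can also be cross-checked against the determinantal representation of Remark 2.4: for the Jacobi weight the moments are Beta functions, \({\mathcal {M}}^\pm _{i,j}=B(\alpha \pm (i+j)+1,\beta +1)\), so for integer \(\alpha ,\beta \) both sides are exact rational numbers. An illustrative Python sketch (the hook-length formula mentioned above is used for \(\dim \lambda /|\lambda |!\)):

```python
# Cross-check of (2.20) against the determinant formula of Remark 2.4 (illustrative).
import sympy as sp

def B(a, b):                                          # Euler Beta function, exact here
    return sp.gamma(a) * sp.gamma(b) / sp.gamma(a + b)

def c_det(lam, N, alpha, beta, sign):
    """c^{+/-}_{lambda,N} from the moment determinants of Remark 2.4."""
    lam = list(lam) + [0] * (N - len(lam))
    moment = lambda i, j: B(alpha + sign * (i + j) + 1, beta + 1)
    num = sp.Matrix(N, N, lambda i, j: moment(lam[i] + N - 1 - i, N - 1 - j))
    den = sp.Matrix(N, N, lambda i, j: moment(N - 1 - i, N - 1 - j))
    return sp.simplify(num.det() / den.det())

def c_product(lam, N, alpha, beta, sign):
    """c^{+/-}_{lambda,N} from the content products (2.20)."""
    cells = [(i, j) for i, row in enumerate(lam, start=1) for j in range(1, row + 1)]
    out = sp.Integer(1)
    for (i, j) in cells:                              # dim(lambda)/|lambda|! by hook lengths
        arm = lam[i - 1] - j
        leg = sum(1 for k in range(i, len(lam)) if lam[k] >= j)
        out /= arm + leg + 1
    for (i, j) in cells:
        if sign == +1:
            out *= sp.Integer(N - i + j) * (alpha + N - i + j) / (alpha + beta + 2 * N - i + j)
        else:
            out *= sp.Integer(N - i + j) * (alpha + beta + N + i - j) / (alpha + i - j)
    return sp.simplify(out)

lam, N, alpha, beta = (3, 1), 3, 12, 2                # alpha large enough for the '-' case
for sign in (+1, -1):
    lhs, rhs = c_det(lam, N, alpha, beta, sign), c_product(lam, N, alpha, beta, sign)
    print(sign, lhs, rhs)
    assert sp.simplify(lhs - rhs) == 0
```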

This proposition enables us to identify the Jacobi generating function (2.13) with the generating function of multiparametric weighted Hurwitz numbers in (2.6). Indeed we have the following result.

Corollary 2.7

Let \(c_\alpha :=1+\alpha /N\) and \(c_\beta :=1+\beta /N\); then the Jacobi formal partition functions in (2.13) take the form

$$\begin{aligned} Z_N^+(\mathbf{u})&=\tau _{G^+}\left( \epsilon =\frac{1}{N},{\mathbf {t}}\right) ,\qquad G^+(z)=\frac{(1+z)\left( 1+\frac{z}{c_\alpha }\right) }{1+\frac{z}{c_\alpha +c_\beta }},\\&\quad t_k=\left( \frac{c_\alpha N}{c_\alpha +c_\beta }\right) ^ku_k, \\ Z_N^-(\mathbf{u})&=\tau _{G^-}\left( \epsilon =\frac{1}{N},{\mathbf {t}}\right) ,\qquad G^-(z) =\frac{(1+z)\left( 1-\frac{z}{c_\alpha +c_\beta -1}\right) }{1-\frac{z}{c_\alpha -1}},\\&\quad t_k=\left( \frac{(c_\alpha +c_\beta -1) N}{c_\alpha -1}\right) ^ku_k, \end{aligned}$$

where \(\tau _G\) is introduced in Proposition 2.1.

Proof

We first note that we can rewrite the expansion (2.17) as

$$\begin{aligned} Z_N^\pm ({\mathbf {u}})=\sum _{\lambda \in {\mathcal {P}}}c_{\lambda ,N}^\pm s_\lambda ({\mathbf {u}}), \end{aligned}$$

with the sum over all partitions \({\mathcal {P}}\) and no longer restricted to \(\ell (\lambda )\le N\); this is clear as \(c_{\lambda ,N}^\pm =0\) whenever \(N=0,1,2,\dots \) and \(\ell (\lambda )>N\). Then the proof is immediate by the formula (2.8) for the coefficients \(r_\lambda ^{(G,\epsilon )}\), since (2.20) can be rewritten as

$$\begin{aligned} c_{\lambda ,N}^+&= \frac{\dim \lambda }{|\lambda |!} \left( \frac{c_\alpha N}{c_\alpha +c_\beta }\right) ^{|\lambda |} \prod _{(i,j) \in \lambda }\frac{\left( 1+\frac{1}{N}(j-i)\right) \left( 1+\frac{1}{c_\alpha N}(j-i)\right) }{1+\frac{1}{(c_\alpha +c_\beta )N}(j-i)},\\ c_{\lambda ,N}^-&= \frac{\dim \lambda }{|\lambda |!} \left( \frac{(c_\alpha +c_\beta -1) N}{c_\alpha -1}\right) ^{|\lambda |} \prod _{(i,j) \in \lambda } \frac{\left( 1+\frac{1}{N}(j-i)\right) \left( 1-\frac{1}{(c_\alpha +c_\beta -1)N}(j-i)\right) }{1-\frac{1}{(c_\alpha -1)N}(j-i)}. \end{aligned}$$

\(\square \)

2.4 Hurwitz numbers \(h_g(\lambda ,\mu ,\nu )\) and multiparametric Hurwitz numbers

We now connect the multiparametric Hurwitz numbers (2.4) for the functions \(G^\pm (z)\), appearing in Corollary 2.7, with the counting problem in Definition 1.2.

Proposition 2.8

If  \(G(z)=\frac{(1+z)(1+\gamma z)}{1-\delta z}\), with \(\gamma \) and \(\delta \) parameters, then for all partitions \(\lambda \vdash n\) and all integers \(g\ge 0\) we have

$$\begin{aligned} H_G^{2g-2+n+\ell (\lambda )}(\lambda )&= \frac{1}{n!} \sum _{\mu ,\nu \vdash n}\gamma ^{n-\ell (\nu )}\delta ^{\ell (\mu )+\ell (\nu )+\ell (\lambda )+2g-2-n}h_g(\lambda ,\mu ,\nu ), \end{aligned}$$

where the triple monotone Hurwitz number \(h_g(\lambda ,\mu ,\nu )\) has been introduced in Definition 1.2.

Proof

We apply (2.2) to the first two factors of the following to get

$$\begin{aligned}&\prod _{a=1}^nG\left( \epsilon J_a\right) =\prod _{a=1}^n(1+\epsilon {\mathcal {J}}_a)(1+\epsilon \gamma {\mathcal {J}}_a)\frac{1}{1-\epsilon \delta {\mathcal {J}}_a}\\&\quad =\left( \sum _{\mu \vdash n}\epsilon ^{n-\ell (\mu )}{\mathcal {C}}_\mu \right) \left( \sum _{\nu \vdash n}(\epsilon \gamma )^{n-\ell (\nu )}{\mathcal {C}}_\nu \right) \left( \sum _{r\ge 0}(\epsilon \delta )^r\sum _{1\le a_1\le \dots \le a_r\le n}{\mathcal {J}}_{a_1}\cdots {\mathcal {J}}_{a_r}\right) . \end{aligned}$$

By definition (2.4), extracting the coefficient of \(\epsilon ^d{\mathcal {C}}_\lambda \) and dividing by \(z_\lambda \) we obtain \(H^d_G(\lambda )\); therefore

$$\begin{aligned} H_{G}^{d}(\lambda )= \frac{1}{z_\lambda |\mathrm {cyc}(\lambda )|} \sum _{\mu ,\nu \vdash n}\gamma ^{n-\ell (\nu )}\delta ^{r} h_g(\lambda ,\mu ,\nu ), \end{aligned}$$

where \(d,r,g\) in this identity are related via

$$\begin{aligned} r=\ell (\lambda )+\ell (\mu )+\ell (\nu )+2g-2-n,\qquad d=2n-\ell (\mu )-\ell (\nu )+r. \end{aligned}$$

The proof is complete by the identity \(z_\lambda |\mathrm {cyc}(\lambda )|=n!\). \(\square \)

2.5 Proof of Theorem 1.3

From Corollary 2.7 we have, with the scaling \(\alpha =(c_\alpha -1)N\), \(\beta =(c_\beta -1)N\),

$$\begin{aligned} Z^+_N({\mathbf {u}})&=\sum _{d\ge 1}\frac{1}{N^d}\sum _{\lambda \in {\mathcal {P}}} \left( \frac{c_\alpha N}{c_\alpha +c_\beta }\right) ^{|\lambda |} H^d_{G^+}(\lambda )\prod _{i=1}^{\ell (\lambda )}u_{\lambda _i},\\ Z^-_N({\mathbf {u}})&=\sum _{d\ge 1}\frac{1}{N^d}\sum _{\lambda \in {\mathcal {P}}} \left( \frac{(c_\alpha +c_\beta -1) N}{c_\alpha -1}\right) ^{|\lambda |} H^d_{G^-}(\lambda )\prod _{i=1}^{\ell (\lambda )}u_{\lambda _i}, \end{aligned}$$

where we have used Proposition 2.1. It follows from (2.13) that

$$\begin{aligned} \frac{\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{\lambda _j}\right\rangle }{z_\lambda }&=\sum _{d\ge 1}N^{|\lambda |-d}\left( \frac{c_\alpha }{c_\alpha +c_\beta }\right) ^{|\lambda |}H_{G^+}^d(\lambda ),\\ \frac{\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{-\lambda _j}\right\rangle }{z_\lambda }&=\sum _{d\ge 1}N^{|\lambda |-d}\left( \frac{c_\alpha +c_\beta -1}{c_\alpha -1}\right) ^{|\lambda |}H_{G^-}^d(\lambda ), \end{aligned}$$

and using finally Proposition 2.8 we have

$$\begin{aligned} \frac{\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{\lambda _j}\right\rangle }{z_\lambda }&= \frac{1}{|\lambda |!}\sum _{g\ge 0}N^{2-2g-\ell (\lambda )}\sum _{\mu ,\nu \vdash |\lambda |}(-1)^{|\lambda |} \frac{c_\alpha ^{\ell (\nu )}}{(-c_\alpha -c_\beta )^{\ell (\mu )+\ell (\nu )+\ell (\lambda )+2g-2}}h_g(\lambda ,\mu ,\nu ),\\ \frac{\left\langle \prod _{j=1}^\ell \mathrm {tr}\,X^{-\lambda _j}\right\rangle }{z_\lambda }&= \frac{1}{|\lambda |!}\sum _{g\ge 0}N^{2-2g-\ell (\lambda )} \sum _{\mu ,\nu \vdash n}(-1)^{|\lambda |} \frac{\left( 1-c_\alpha -c_\beta \right) ^{\ell (\nu )}}{\left( c_\alpha -1\right) ^{\ell (\mu )+\ell (\nu )+\ell (\lambda )+2g-2}}h_g(\lambda ,\mu ,\nu ). \end{aligned}$$

The proof is complete.\(\square \)

Remark 2.9

Let us note that letting \(c_\beta \rightarrow \infty \) in the functions \(G^\pm \) of Corollary 2.7 we have \(G^+(z)\rightarrow (1+z)(1+z/c_\alpha )\) and \(G^-(z)\rightarrow (1+z)/(1-z/(c_\alpha -1))\). The Hurwitz numbers corresponding to these limit functions can be identified as in Proposition 2.8 in terms of double strictly (\(+\)) or weakly (−) monotone Hurwitz numbers, respectively. Thus, bearing in mind the scaling limit for \(\beta \rightarrow \infty \) of JUE correlators to the correlators of the Laguerre Unitary Ensemble of Remark 1.10, Theorem 1.3 recovers the results of [17].

3 Computing correlators of Hermitian models

In this section we prove Theorem 1.5. First of all, we introduce a few notations and recall some standard facts about orthogonal polynomials. We denote with \(P_\ell (z)\) the monic orthogonal polynomials, \(h_\ell =\int _IP^2_\ell (x)\mathrm {e}^{V(x)}\mathrm dx\), see (1.8), and

$$\begin{aligned} \widehat{P}_\ell (z) := \frac{1}{2 \pi \mathrm {i}}\int _IP_\ell (x)\mathrm {e}^{V(x)}\frac{\mathrm dx}{x-z} \end{aligned}$$
(3.1)

their Cauchy transforms. The matrix

$$\begin{aligned} Y_N(z):= \left( \begin{array}{cc} P_N(z) &{} \widehat{P}_N(z) \\ -\frac{2\pi \mathrm {i}}{h_{N-1}} P_{N-1}(z) &{} -\frac{2\pi \mathrm {i}}{h_{N-1}}\widehat{P}_{N-1}(z) \end{array}\right) , \end{aligned}$$
(3.2)

introduced in (1.9), is an analytic function of \(z\in {\mathbb {C}}\setminus I\). It satisfies the jump condition

$$\begin{aligned} Y_{N,+}(x)=Y_{N,-}(x)\begin{pmatrix} 1 &{} \mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix}, \qquad x\in I^\circ , \end{aligned}$$
(3.3)

where we use the notation

$$\begin{aligned} Y_{N,\pm }(x)=\lim _{\epsilon \rightarrow 0_+}Y_N(x\pm \mathrm {i}\epsilon ),\qquad x\in I^\circ , \end{aligned}$$

and \(I^\circ \) is the interior of the interval I. As \(z\rightarrow \infty \) we have

$$\begin{aligned} Y_{N}(z)=\left( {\mathbf {1}}+{\mathcal {O}}(z^{-1})\right) z^{N\sigma _3}, \end{aligned}$$
(3.4)

where we introduce also the standard notation \(\sigma _3=\begin{pmatrix} 1&{} 0 \\ 0 &{} -1 \end{pmatrix}\).

It is well known [19] that

$$\begin{aligned} {\mathscr {C}}_\ell (z_1,\dots ,z_\ell )=\int _{I^\ell }\frac{\rho _\ell (x_1,\dots ,x_\ell )}{(z_1-x_1)\cdots (z_\ell -x_\ell )}\mathrm dx_1\cdots \mathrm dx_\ell , \end{aligned}$$
(3.5)

where

$$\begin{aligned} \rho _\ell (x_1,\dots ,x_\ell )=\det \left[ k_N(x_i,x_j)\right] _{i,j=1}^\ell ,\qquad x_1,\dots ,x_\ell \in I, \end{aligned}$$
(3.6)

and \(k_N(x,y)\) is the Christoffel-Darboux kernel

$$\begin{aligned} k_N(x,y)=\frac{\mathrm {e}^{\frac{V(x)+V(y)}{2}}}{h_{N-1}}\frac{P_N(x)P_{N-1}(y)-P_{N-1}(x)P_{N}(y)}{x-y}\,, \end{aligned}$$

with \(P_N(x)\) the monic orthogonal polynomials. Using the matrix entries of \(Y_N(z)\) in (3.2), the above expression can be conveniently rewritten as

$$\begin{aligned} k_N(x,y)= -\frac{\mathrm {e}^{\frac{V(x)+V(y)}{2}}}{2\pi \mathrm {i}(x-y)}\begin{pmatrix}0&1 \end{pmatrix}Y^{-1}_N(x)Y_N(y)\begin{pmatrix}1 \\ 0\end{pmatrix}, \end{aligned}$$
(3.7)

which is independent of the choice of boundary value of \(Y_N\). Let us finally note that the connected cumulant functions can be computed as

$$\begin{aligned} {\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )=\int _{I^\ell }\frac{\rho _\ell ^{\mathsf{c}}(x_1,\dots ,x_\ell )}{(z_1-x_1)\cdots (z_\ell -x_\ell )}\mathrm dx_1\cdots \mathrm dx_\ell , \end{aligned}$$
(3.8)

where (1.6), (1.7) and (3.6) imply

$$\begin{aligned} \rho _\ell ^{\mathsf{c}}(x_1,\dots ,x_\ell )=(-1)^{\ell -1}\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}((\ell ))} k_N(x_{i_1},x_{i_2}) \cdots k_N(x_{i_\ell },x_{i_1}), \end{aligned}$$

and the sum extends over the transitive permutations of \(\{1,\dots ,\ell \}\), i.e. cycles of length \(\ell \) in \({\mathfrak {S}}_\ell \).

3.1 Case \(\ell =1\)

In this case, it follows from (3.7) that

$$\begin{aligned} \rho _1(x)=k_N(x,x)=\lim _{y\rightarrow x}k_N(x,y)=\frac{\mathrm {e}^{V(x)}}{2\pi \mathrm {i}}\begin{pmatrix}0&1 \end{pmatrix}Y^{-1}_N(x)Y_N'(x)\begin{pmatrix}1 \\ 0\end{pmatrix}. \end{aligned}$$
(3.9)
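As an elementary check of (3.9) and of the Christoffel-Darboux kernel introduced above, one can take the simplest weight \(\mathrm {e}^{V(x)}\equiv 1\) on \(I=[0,1]\) (monic shifted Legendre polynomials): the confluent limit must coincide with \(\sum _{j=0}^{N-1}P_j^2(x)/h_j\), and \(\rho _1\) must integrate to N. An illustrative symbolic sketch:

```python
# Symbolic check of (3.9) for the weight e^{V} = 1 on [0,1] (illustrative).
import sympy as sp

x = sp.Symbol("x")
ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))      # inner product with V = 0

def monic_orthogonal(n_max):
    """Monic orthogonal polynomials P_0,...,P_{n_max} and norms h_j on [0,1]."""
    P, h = [], []
    for n in range(n_max + 1):
        p = x ** n
        for q, hq in zip(P, h):
            p -= ip(x ** n, q) / hq * q               # Gram-Schmidt on monomials
        p = sp.expand(p)
        P.append(p)
        h.append(ip(p, p))
    return P, h

N = 4
P, h = monic_orthogonal(N)

# rho_1(x) = k_N(x,x) from the confluent Christoffel-Darboux formula ...
rho1 = sp.expand((sp.diff(P[N], x) * P[N - 1] - P[N] * sp.diff(P[N - 1], x)) / h[N - 1])
# ... must equal the sum of squares of the normalized orthogonal polynomials
assert sp.simplify(rho1 - sum(P[j] ** 2 / h[j] for j in range(N))) == 0
assert sp.integrate(rho1, (x, 0, 1)) == N
print("rho_1 integrates to N =", N)
```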

In the following we shall use the notation

$$\begin{aligned} \Delta [f(x)]:=f_+(x)-f_-(x),\qquad x\in I^\circ , \end{aligned}$$

for the jump of a function f across I, namely \(f_\pm (x)=\lim _{\epsilon \rightarrow 0_+}f(x\pm \mathrm {i}\epsilon )\).

The following lemma is well known, see, e.g. [16], and it is proven here for the reader’s convenience.

Lemma 3.1

We have

$$\begin{aligned} \rho _1(x)=-\frac{1}{2\pi \mathrm {i}}\Delta \left[ \mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \right] . \end{aligned}$$

Proof

It follows from the jump condition (3.3) for \(Y_N\) that

$$\begin{aligned} Y'_{N,+}(x)=Y'_{N,-}(x)\begin{pmatrix} 1 &{} \mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix}+Y_{N,-}(x)\begin{pmatrix} 0 &{} V'(x)\mathrm {e}^{V(x)} \\ 0 &{} 0 \end{pmatrix}, \qquad x\in I^\circ . \end{aligned}$$

Therefore we compute

$$\begin{aligned}&\Delta \left[ \mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \right] =\mathrm {tr}\,\left( Y^{-1}_{N,+}(x)Y_{N,+}'(x)\frac{\sigma _3}{2}\right) -\mathrm {tr}\,\left( Y^{-1}_{N,-}(x)Y_{N,-}'(x)\frac{\sigma _3}{2}\right) \\&\quad =\mathrm {tr}\,\left( \begin{pmatrix} 1 &{} -\mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix} Y^{-1}_{N,-}(x)Y_{N,-}'(x) \begin{pmatrix} 1 &{} \mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix}\frac{\sigma _3}{2}\right) - \mathrm {tr}\,\left( Y^{-1}_{N,-}(x)Y_{N,-}'(x)\frac{\sigma _3}{2}\right) \\&\qquad + \mathrm {tr}\,\left( \begin{pmatrix} 1 &{} -\mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix} \begin{pmatrix} 0 &{} V'(x)\mathrm {e}^{V(x)} \\ 0 &{} 0 \end{pmatrix}\frac{\sigma _3}{2}\right) \end{aligned}$$

The last term vanishes and so, by the cyclic property of the trace, we have

$$\begin{aligned} \Delta \left[ \mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \right] =\mathrm {tr}\,\left[ Y^{-1}_{N,-}(x)Y_{N,-}'(x)\left( \begin{pmatrix} 1 &{} \mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix}\frac{\sigma _3}{2}\begin{pmatrix} 1 &{} -\mathrm {e}^{V(x)} \\ 0 &{} 1 \end{pmatrix}-\frac{\sigma _3}{2}\right) \right] \end{aligned}$$

which is equivalent to relation (3.9): indeed the matrix in round brackets equals \(\begin{pmatrix} 0 &{} -\mathrm {e}^{V(x)} \\ 0 &{} 0 \end{pmatrix}\), so that the resulting expression involves only the (2, 1)-entry of \(Y_{N}^{-1}(x)Y_N'(x)\), which depends on the first (polynomial) column of \(Y_N\) only and hence has no jump across I. \(\square \)

From this lemma and (3.5) we get

$$\begin{aligned}&{\mathscr {C}}_1(z)=\int _I\frac{\rho _1(x)}{z-x}\mathrm dx=-\frac{1}{2\pi \mathrm {i}}\int _I\Delta \left[ \mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \right] \frac{\mathrm dx}{z-x}\\&\quad =\frac{1}{2\pi \mathrm {i}}\int _\Gamma \mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \frac{\mathrm dx}{z-x}, \end{aligned}$$

where \(\Gamma \) is a smooth contour enclosing I, oriented counterclockwise (namely, the interval I is always to the left of \(\Gamma \)), and leaving z outside (namely z is to the right of \(\Gamma \)). Using the Cauchy residue theorem we have

$$\begin{aligned} {\mathscr {C}}_1(z)=-\mathop {\mathrm {res}}\limits _{x=\infty }\mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \frac{\mathrm dx}{z-x}-\mathop {\mathrm {res}}\limits _{x=z}\mathrm {tr}\,\left( Y^{-1}_N(x)Y_N'(x)\frac{\sigma _3}{2}\right) \frac{\mathrm dx}{z-x}. \end{aligned}$$

The first residue vanishes due to (3.4), while the second one is readily computed to give

$$\begin{aligned} {\mathscr {C}}_1(z)=\mathrm {tr}\,\left( Y^{-1}_N(z)Y_N'(z)\frac{\sigma _3}{2}\right) . \end{aligned}$$

3.2 Case \(\ell =2\)

In this case, using (3.7),

$$\begin{aligned}&\rho _2^{\mathsf{c}}(x_1,x_2)=-k_N(x_1,x_2)k_N(x_2,x_1)\\&\quad =-\frac{\mathrm {e}^{V(x_1)+V(x_2)}}{(2\pi \mathrm {i})^2(x_1-x_2)(x_2-x_1)}\begin{pmatrix}0&1 \end{pmatrix}Y^{-1}_N(x_1)Y_N(x_2)\begin{pmatrix}1 \\ 0\end{pmatrix}\begin{pmatrix}0&1 \end{pmatrix}Y^{-1}_N(x_2)Y_N(x_1)\begin{pmatrix} 1 \\ 0\end{pmatrix} \\&\quad =\frac{\mathrm {tr}\,\left( Y_N(x_1)\begin{pmatrix} 0 &{} \mathrm {e}^{V(x_1)} \\ 0 &{} 0 \end{pmatrix}Y_N^{-1}(x_1)Y_N(x_2)\begin{pmatrix} 0 &{} \mathrm {e}^{V(x_2)} \\ 0 &{} 0 \end{pmatrix}Y_N^{-1}(x_2)\right) }{(2\pi \mathrm {i})^2(x_1-x_2)^2}. \end{aligned}$$

Introduce, as in the statement of Theorem 1.5,

$$\begin{aligned} R(z):=Y_N(z)\left( \begin{array}{cc}1 &{} 0 \\ 0 &{} 0 \end{array}\right) Y_N^{-1}(z). \end{aligned}$$
(3.10)

It is an analytic function of \(z\in {\mathbb {C}}\setminus I\), satisfying

$$\begin{aligned} \Delta R(x)=R_+(x)-R_-(x)=Y_N(x)\left( \begin{array}{cc} 0 &{} -\mathrm {e}^{V(x)} \\ 0 &{} 0 \end{array}\right) Y_N^{-1}(x),\quad x\in I^\circ , \end{aligned}$$

as it follows from (3.3). Furthermore \(\Delta R(x)\) is nilpotent:

$$\begin{aligned} \left[ \Delta R(x)\right] ^2=0\,, \end{aligned}$$

so that the right hand side of the expression

$$\begin{aligned} \rho _2^{\mathsf{c}}(x_1,x_2)=\frac{\mathrm {tr}\,(\Delta R(x_1)\Delta R(x_2))}{(2\pi \mathrm {i})^2(x_1-x_2)^2} \end{aligned}$$

is regular on the diagonal. Therefore we have, from (3.8),

$$\begin{aligned} {\mathscr {C}}_2^{\mathsf{c}}(z_1,z_2)&=\frac{1}{(2\pi \mathrm {i})^2}\int _{I^2}\frac{\mathrm {tr}\,\left( \Delta R(x_1)\Delta R(x_2)\right) \mathrm dx_1\mathrm dx_2}{(z_1-x_1)(z_2-x_2)(x_1-x_2)^2} \\&=-\mathrm {tr}\,\left[ \frac{1}{2\pi \mathrm {i}}\int _{I}\frac{\Delta R(x_2)}{(z_2-x_2)}\left( \frac{1}{2\pi \mathrm {i}}\int _\Gamma \frac{ R(x_1)\mathrm dx_1}{(z_1-x_1)(x_1-x_2)^2}\right) \mathrm dx_2\right] , \end{aligned}$$

where \(\Gamma \) is a counterclockwise contour encircling I and we assume that both \(z_1,z_2\) lie outside \(\Gamma \). The inner integral can then be computed by the Cauchy residue theorem

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _\Gamma \frac{ R(x_1)\mathrm dx_1}{(z_1-x_1)(x_1-x_2)^2}=-\mathop {\mathrm {res}}\limits _{x_1=\infty }\frac{ R(x_1)\mathrm dx_1}{(z_1-x_1)(x_1-x_2)^2} -\mathop {\mathrm {res}}\limits _{x_1=z_1}\frac{R(x_1)\mathrm dx_1}{(z_1-x_1)(x_1-x_2)^2}. \end{aligned}$$

The residue at infinity vanishes, as from (3.4) we see that

$$\begin{aligned} R(z)=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}+{\mathcal {O}}(z^{-1}) \text{ as } z\rightarrow \infty , \end{aligned}$$
(3.11)

and the one at \(z_1\) is readily computed as

$$\begin{aligned} \mathop {\mathrm {res}}\limits _{x_1=z_1}\frac{R(x_1)\mathrm dx_1}{(z_1-x_1)(x_1-x_2)^2}=-\frac{R(z_1)}{(z_1-x_2)^2}. \end{aligned}$$

Therefore

$$\begin{aligned} {\mathscr {C}}_2^{\mathsf{c}}(z_1,z_2)= & {} -\mathrm {tr}\,\left[ \frac{1}{2\pi \mathrm {i}}\int _{I}\frac{R(z_1)\Delta R(x_2)\mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}\right] \\= & {} \mathrm {tr}\,\left[ \frac{1}{2\pi \mathrm {i}}\int _{\Gamma }\frac{R(z_1)R(x_2)\mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}\right] \end{aligned}$$

with the same contour \(\Gamma \). Again by the Cauchy residue theorem (both \(z_1,z_2\) are outside \(\Gamma \))

$$\begin{aligned} {\mathscr {C}}_2^{\mathsf{c}}(z_1,z_2)=- & {} \mathop {\mathrm {res}}\limits _{x_2=\infty }\frac{\mathrm {tr}\,\left( R(z_1)R(x_2)\right) \mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}-\mathop {\mathrm {res}}\limits _{x_2=z_1}\frac{\mathrm {tr}\,\left( R(z_1)R(x_2)\right) \mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}\\- & {} \mathop {\mathrm {res}}\limits _{x_2=z_2}\frac{\mathrm {tr}\,\left( R(z_1)R(x_2)\right) \mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}. \end{aligned}$$

The residue at infinity vanishes again because of (3.11) and the remaining ones are computed as follows

$$\begin{aligned} \mathop {\mathrm {res}}\limits _{x_2=z_2}\frac{\mathrm {tr}\,\left( R(z_1)R(x_2)\right) \mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}&=-\frac{\mathrm {tr}\,\left( R(z_1)R(z_2)\right) }{(z_1-z_2)^2} \\ \mathop {\mathrm {res}}\limits _{x_2=z_1}\frac{\mathrm {tr}\,\left( R(z_1)R(x_2)\right) \mathrm dx_2}{(z_2-x_2)(z_1-x_2)^2}&=\left. \frac{\partial }{\partial x_2}\frac{\mathrm {tr}\,\left( R(z_1)R(x_2)\right) }{(z_2-x_2)}\right| _{x_2=z_1}\\&=\frac{\mathrm {tr}\,\left( R^2(z_1)\right) }{(z_2-z_1)^2}+\frac{\mathrm {tr}\,\left( R(z_1)R'(z_1)\right) }{z_2-z_1}=\frac{1}{(z_1-z_2)^2} \end{aligned}$$

where in the last step we have used \(\mathrm {tr}\,R^2(z)=1\) and its derivative \(\mathrm {tr}\,\left( R(z)R'(z)\right) =0\), which follow directly from the definition (3.10) of R(z), since \(R^2(z)=R(z)\). The theorem is proved also for \(\ell =2\).

Remark 3.2

The function \(\left[ \mathrm {tr}\,(R(z_1)R(z_2))-1\right] /(z_1-z_2)^2\) is regular at \(z_1=z_2\), as \({\mathscr {C}}_2^{\mathsf{c}}(z_1,z_2)\) is. To verify this concretely, it suffices to note that \(\mathrm {tr}\,R^2(z)=1\) from the definition (3.10) of R(z), which implies that the numerator \(\left[ \mathrm {tr}\,(R(z_1)R(z_2))-1\right] \) vanishes at second order at \(z_1=z_2\).
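Both facts used here, namely \(\mathrm {tr}\,R^2(z)=1\) and the nilpotency of \(\Delta R(x)\), are elementary consequences of R being conjugate to the rank-one projector \(E_{1,1}\). A minimal sympy sketch, assuming only a generic unimodular \(2\times 2\) matrix Y, verifies them symbolically:

```python
import sympy as sp

y11, y12, y21, y22, w = sp.symbols('y11 y12 y21 y22 w')
Y = sp.Matrix([[y11, y12], [y21, y22]])
adjY = Y.adjugate()                               # equals Y^{-1} whenever det Y = 1
E11 = sp.Matrix([[1, 0], [0, 0]])
R = Y * E11 * adjY                                # R as in (3.10), up to det Y
d = Y.det()

assert sp.expand(R.trace() - d) == 0                              # tr R = det Y (= 1)
assert (R * R - d * R).applyfunc(sp.expand) == sp.zeros(2, 2)     # R^2 = R, so tr R^2 = 1
DR = Y * sp.Matrix([[0, -w], [0, 0]]) * adjY      # the jump Delta R, up to det Y
assert (DR * DR).applyfunc(sp.expand) == sp.zeros(2, 2)           # [Delta R]^2 = 0
```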

3.3 Case \(\ell \ge 3\)

Using (3.7) we write

$$\begin{aligned}&(-1)^{\ell -1}k_N(x_{i_1},x_{i_2})\dots k_N(x_{i_\ell },x_{i_1})\\&\qquad =-\frac{\prod _{i=1}^\ell \mathrm {e}^{V(x_i)}}{(2\pi \mathrm {i})^\ell (x_{i_1}-x_{i_2})\cdots (x_{i_\ell }-x_{i_1})}\begin{pmatrix}0&1 \end{pmatrix}Y^{-1}_N(x_{i_1})Y_N(x_{i_2})\begin{pmatrix}1 \\ 0\end{pmatrix}\\&\qquad \cdots \begin{pmatrix}0&1 \end{pmatrix}Y^{-1}_N(x_{i_\ell })Y_N(x_{i_1})\begin{pmatrix}1 \\ 0\end{pmatrix} \\&\qquad =-\frac{\mathrm {tr}\,\left( Y_N(x_{i_1})\begin{pmatrix} 0 &{} \mathrm {e}^{V(x_{i_1})} \\ 0 &{} 0 \end{pmatrix}Y^{-1}_N(x_{i_1})\cdots Y_N(x_{i_\ell })\begin{pmatrix} 0 &{} \mathrm {e}^{V(x_{i_\ell })} \\ 0 &{} 0 \end{pmatrix}Y^{-1}_N(x_{i_\ell })\right) }{(2\pi \mathrm {i})^\ell (x_{i_1}-x_{i_2})\cdots (x_{i_\ell }-x_{i_1})} \\&\qquad =(-1)^{\ell -1}\frac{\mathrm {tr}\,\left( \Delta R(x_{i_1})\cdots \Delta R(x_{i_\ell })\right) }{(2\pi \mathrm {i})^\ell (x_{i_1}-x_{i_2})\cdots (x_{i_\ell }-x_{i_1})} \end{aligned}$$

so that from (3.8) we obtain

$$\begin{aligned}&{\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )\\&\quad =(-1)^{\ell -1}\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}((\ell ))}\int _{I^\ell }\frac{\mathrm {tr}\,\left( \Delta R(x_{i_1})\cdots \Delta R(x_{i_\ell })\right) \mathrm dx_1\cdots \mathrm dx_\ell }{(2\pi \mathrm {i})^\ell (x_{i_1}-x_{i_2})\cdots (x_{i_\ell }-x_{i_1})(z_1-x_1)\cdots (z_\ell -x_\ell )} \\&\quad =-\frac{1}{(2\pi \mathrm {i})^\ell }\int _{\Gamma ^\ell }\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}((\ell ))}\frac{\mathrm {tr}\,\left( R(x_{i_1})\cdots R(x_{i_\ell })\right) }{(x_{i_1}-x_{i_2})\cdots (x_{i_\ell }-x_{i_1})}\frac{\mathrm dx_1\cdots \mathrm dx_\ell }{(z_1-x_1)\cdots (z_\ell -x_\ell )}, \end{aligned}$$

where, as before, \(\Gamma \) is a contour enclosing I in the counterclockwise sense and leaving \(z_1,\dots ,z_\ell \) outside (namely I is to the left of \(\Gamma \) and \(z_1,\dots ,z_\ell \) are to the right of \(\Gamma \)).

Lemma 3.3

For all \(\ell \ge 3\) the function

$$\begin{aligned} S_\ell (z_1,\dots ,z_\ell ):=\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}((\ell ))}\frac{\mathrm {tr}\,\left( R(z_{i_1})\cdots R(z_{i_\ell })\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_\ell }-z_{i_1})} \end{aligned}$$

is holomorphic for \((z_1,\dots ,z_\ell )\in ({\mathbb {C}}\setminus I)^\ell \), in particular it is regular on the diagonals \(z_a=z_b\) for all \(a\not =b\). Moreover, \(S_\ell (z_1,\dots ,z_\ell )={\mathcal {O}}(1/z_j)\) as \(z_j\rightarrow \infty \), for any \(j=1,\dots ,\ell \).

Proof

For the first statement, the denominators in S vanish at \(z_{a}=z_b\) only for \(\ell \)-cycles of the form \((i_1,\dots ,i_{\ell -2},a,b)\) and \((i_1,\dots ,i_{\ell -2},b,a)\); these terms have simple poles at \(z_a=z_b\) of the form

$$\begin{aligned}&\frac{\mathrm {tr}\,\left( R(z_{i_1})\cdots R(z_{i_{\ell -2}})R(z_{a})R(z_{b})\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_{\ell -2}}-z_{a})(z_a-z_b)(z_b-z_{i_1})}\\&\quad = \frac{1}{z_a-z_b}\left[ \frac{\mathrm {tr}\,\left( R(z_{i_1})\cdots R(z_{i_{\ell -2}})R^2(z_{a})\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_{\ell -2}}-z_{a})(z_a-z_{i_1})}+{\mathcal {O}}(z_a-z_b)\right] \end{aligned}$$

and

$$\begin{aligned}&\frac{\mathrm {tr}\,\left( R(z_{i_1})\cdots R(z_{i_{\ell -2}})R(z_{b})R(z_{a})\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_{\ell -2}}-z_{b})(z_b-z_a)(z_a-z_{i_1})}\\&\quad = \frac{1}{z_b-z_a}\left[ \frac{\mathrm {tr}\,\left( R(z_{i_1})\cdots R(z_{i_{\ell -2}})R^2(z_{a})\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_{\ell -2}}-z_{a})(z_a-z_{i_1})}+{\mathcal {O}}(z_a-z_b)\right] , \end{aligned}$$

so that the polar parts at \(z_a=z_b\) cancel each other in the summation. The second statement follows directly from (3.11). \(\square \)

Using this lemma (writing \(S=S_\ell \) for brevity) we can complete our computation,

$$\begin{aligned}&{\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )=-\frac{1}{(2\pi \mathrm {i})^\ell }\int _{\Gamma ^\ell }S(x_1,\dots ,x_\ell )\frac{\mathrm dx_1\cdots \mathrm dx_\ell }{(z_1-x_1)\cdots (z_\ell -x_\ell )} \\&\quad =-\frac{1}{(2\pi \mathrm {i})^{\ell -1}}\int _{\Gamma ^{\ell -1}}\left( \frac{1}{2\pi \mathrm {i}}\int _\Gamma S(x_1,\dots ,x_\ell )\frac{\mathrm dx_1}{(z_1-x_1)}\right) \frac{\mathrm dx_2\cdots \mathrm dx_\ell }{(z_2-x_2)\cdots (z_\ell -x_\ell )} \\&\quad =-\frac{1}{(2\pi \mathrm {i})^{\ell -1}}\int _{\Gamma ^{\ell -1}}\left( -\mathop {\mathrm {res}}\limits _{x_1=z_1}S(x_1,\dots ,x_\ell )\frac{\mathrm dx_1}{(z_1-x_1)}\right) \frac{\mathrm dx_2\cdots \mathrm dx_\ell }{(z_2-x_2)\cdots (z_\ell -x_\ell )} \end{aligned}$$

because \(S(x_1,\dots ,x_\ell )\frac{\mathrm dx_1}{(z_1-x_1)}\) has no residue at infinity. Thus

$$\begin{aligned}&{\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )=-\frac{1}{(2\pi \mathrm {i})^{\ell -1}}\int _{\Gamma ^{\ell -1}}S(z_1,x_2,\dots ,x_\ell )\frac{\mathrm dx_2\cdots \mathrm dx_\ell }{(z_2-x_2)\cdots (z_\ell -x_\ell )} \\&\quad =-\frac{1}{(2\pi \mathrm {i})^{\ell -2}}\int _{\Gamma ^{\ell -2}}\left( \frac{1}{2\pi \mathrm {i}}\int _\Gamma S(z_1,x_2,\dots ,x_\ell )\frac{\mathrm dx_2}{(z_2-x_2)}\right) \frac{\mathrm dx_3\cdots \mathrm dx_\ell }{(z_3-x_3)\cdots (z_\ell -x_\ell )} \\&\quad =-\frac{1}{(2\pi \mathrm {i})^{\ell -2}}\int _{\Gamma ^{\ell -2}}\left( -\mathop {\mathrm {res}}\limits _{x_2=z_2}S(z_1,x_2,\dots ,x_\ell )\frac{\mathrm dx_2}{(z_2-x_2)}\right) \frac{\mathrm dx_3\cdots \mathrm dx_\ell }{(z_3-x_3)\cdots (z_\ell -x_\ell )} \end{aligned}$$

because \(S(z_1,x_2,\dots ,x_\ell )\frac{\mathrm dx_2}{(z_2-x_2)}\) has no residue at infinity (and no residue at \(x_2=z_1\) either, because S is regular along the diagonals). Then

$$\begin{aligned} {\mathscr {C}}_\ell ^{{\mathsf{c}}}(z_1,\dots ,z_\ell )= -\frac{1}{(2\pi \mathrm {i})^{\ell -2}}\int _{\Gamma ^{\ell -2}}S(z_1,z_2,x_3,\dots ,x_\ell )\frac{\mathrm dx_3\cdots \mathrm dx_\ell }{(z_3-x_3)\cdots (z_\ell -x_\ell )}. \end{aligned}$$

Iterating this argument we arrive at

$$\begin{aligned} {\mathscr {C}}_\ell ^{{\mathsf{c}}}(z_1,\dots ,z_\ell )=-S(z_1,\dots ,z_\ell ), \end{aligned}$$

which proves the theorem also in the case \(\ell \ge 3\).

Remark 3.4

We note here that since R(z) is a rank one matrix, the formulae of Theorem 1.5 for \(\mathscr {C}^{\mathsf{c}}_\ell \), \(\ell \ge 2\), can be expressed in terms of the scalar quantities

$$\begin{aligned} w(x,y)=\frac{2\pi \mathrm {i}}{h_{N-1}}\frac{P_N(x)\widehat{P}_{N-1}(y)-P_{N-1}(x)\widehat{P}_{N}(y)}{x-y} \end{aligned}$$

as

$$\begin{aligned} {\mathscr {C}}_\ell ^{{\mathsf{c}}}(z_1,\dots ,z_\ell )=-\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}( (\ell ) )}w(z_{i_1},z_{i_2})\cdots w(z_{i_{\ell -1}},z_{i_\ell })w(z_{i_\ell },z_{i_1}),\qquad \ell > 2, \end{aligned}$$

compare for instance with [23, 51].
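The reduction to the scalar quantities w rests only on the rank-one structure of R. A two-line sympy illustration of this generic linear-algebra mechanism (with arbitrary rank-one \(2\times 2\) matrices, an assumption unrelated to the specific Jacobi data) is the following.

```python
import sympy as sp

u = [sp.Matrix(sp.symbols(f'u{i}1 u{i}2')) for i in range(3)]   # generic column vectors
v = [sp.Matrix(sp.symbols(f'v{i}1 v{i}2')) for i in range(3)]
R = [u[i] * v[i].T for i in range(3)]                           # rank-one 2x2 matrices

lhs = (R[0] * R[1] * R[2]).trace()
rhs = v[0].dot(u[1]) * v[1].dot(u[2]) * v[2].dot(u[0])
assert sp.expand(lhs - rhs) == 0
```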

4 JUE correlators and Wilson Polynomials

In this section we prove Corollary 1.6. This is done by expanding the general formulæ of Theorem 1.5 as \(z_i\rightarrow 0,\infty \). To this end we consider the monic orthogonal polynomials for the Jacobi measure, which are the classical (monic) Jacobi polynomials

$$\begin{aligned} P^{{\mathsf{J}}}_\ell (z) = \frac{\ell !}{(\alpha +\beta +\ell +1)_\ell }\sum _{k=0}^{\ell } {{\ell +\alpha } \atopwithdelims ()k}{{\ell +\beta } \atopwithdelims (){\ell -k}}\left( z-1 \right) ^k z^{\ell -k}, \end{aligned}$$
(4.1)

satisfying the orthogonality property

$$\begin{aligned}&\int _0^1 P^{\mathsf{J}}_\ell (x)P^{{\mathsf{J}}}_m(x)x^\alpha (1-x)^\beta \mathrm dx=h_\ell ^{\mathsf{J}}\delta _{\ell ,m},\nonumber \\&h_\ell ^{{\mathsf{J}}}=\frac{\ell !\,\Gamma (\alpha +\ell +1) \Gamma (\beta +\ell +1) \Gamma (\alpha +\beta +\ell +1)}{\Gamma (\alpha +\beta +2\ell +1) \Gamma (\alpha +\beta +2\ell +2)}. \end{aligned}$$
(4.2)
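As a sanity check, the orthogonality relation and the norming constants (4.2) can be verified numerically from the explicit formula (4.1). A minimal Python sketch, assuming the sample parameter values \(\alpha =1.3\), \(\beta =0.7\):

```python
from math import gamma
from scipy.integrate import quad

alpha, beta = 1.3, 0.7                # sample parameters (an assumption)

def binom(a, k):                      # generalized binomial coefficient
    return gamma(a + 1) / (gamma(k + 1) * gamma(a - k + 1))

def poch(a, n):                       # Pochhammer symbol (a)_n
    return gamma(a + n) / gamma(a)

def P(l, x):                          # monic Jacobi polynomial (4.1) on [0,1]
    return gamma(l + 1) / poch(alpha + beta + l + 1, l) * sum(
        binom(l + alpha, k) * binom(l + beta, l - k) * (x - 1)**k * x**(l - k)
        for k in range(l + 1))

def h(l):                             # norming constant (4.2)
    return (gamma(l + 1) * gamma(alpha + l + 1) * gamma(beta + l + 1)
            * gamma(alpha + beta + l + 1)
            / (gamma(alpha + beta + 2*l + 1) * gamma(alpha + beta + 2*l + 2)))

for l in range(4):
    for m in range(4):
        val, _ = quad(lambda x: P(l, x) * P(m, x) * x**alpha * (1 - x)**beta, 0, 1)
        assert abs(val - (h(l) if l == m else 0.0)) < 1e-7
print("orthogonality and norming constants (4.2) verified numerically")
```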

4.1 Expansion of the matrix R

This paragraph is devoted to the proof of the following proposition.

Proposition 4.1

We have the Taylor expansion at \(z=\infty \)

$$\begin{aligned} R(z)=T^{-1}R^{[\infty ]}(z)T,\qquad |z|>1, \end{aligned}$$

where T is the constant matrix (1.11) and \(R^{[\infty ]}(z)\) is the matrix-valued power series in \(z^{-1}\) in (1.12). We have the Poincaré asymptotic expansion at \(z=0\) uniformly within the sector \(0<\arg z<2\pi \)

$$\begin{aligned} R(z)\sim T^{-1}R^{[0]}(z)T, \end{aligned}$$

where T is the constant matrix (1.11) and \(R^{[0]}(z)\) is the matrix-valued (formal) power series in z in (1.12).

Looking back at the definition (3.10) for the matrix R(z),

$$\begin{aligned} R(z)=Y_N(z)\left( \begin{array}{cc}1 &{} 0 \\ 0 &{} 0 \end{array}\right) Y_N^{-1}(z) = \left( \begin{array}{cc} -\frac{2 \pi \mathrm {i}}{h_{N-1}} P^{\mathsf{J}}_N(z) {\widehat{P}}^{\mathsf{J}}_{N-1}(z) &{} - P^{\mathsf{J}}_N(z) {\widehat{P}}^{\mathsf{J}}_N(z) \\ \left( \frac{2 \pi \mathrm {i}}{h_{N-1}} \right) ^2 P^{\mathsf{J}}_{N-1}(z) {\widehat{P}}^{\mathsf{J}}_{N-1}(z) &{} \frac{2 \pi \mathrm {i}}{h_{N-1}} P^{\mathsf{J}}_{N-1}(z) {\widehat{P}}^{\mathsf{J}}_N(z) \end{array}\right) , \end{aligned}$$

we notice that it is sufficient to compute the expansions of the products of the Jacobi polynomials with their Cauchy transforms at the prescribed points. To this end, recall the explicit formula (4.1) for the monic Jacobi orthogonal polynomials, which can be rewritten as Rodrigues' formula

$$\begin{aligned} P^{\mathsf{J}}_\ell (z) =\frac{(-1)^\ell }{(\alpha +\beta +\ell +1)_\ell } z^{-\alpha } (1-z)^{-\beta } \frac{\mathrm d^\ell }{\mathrm dz^\ell }\left[ z^{\alpha +\ell } (1-z)^{\beta +\ell } \right] . \end{aligned}$$
(4.3)
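The equivalence of (4.1) and (4.3) can be confirmed symbolically for the first few degrees; a short sympy sketch, assuming sample rational parameter values, is the following.

```python
import sympy as sp

z = sp.symbols('z', positive=True)
a, b = sp.Rational(3, 4), sp.Rational(5, 3)        # sample rational parameters (assumption)

def P_sum(l):      # explicit formula (4.1)
    return (sp.factorial(l) / sp.rf(a + b + l + 1, l)
            * sum(sp.binomial(l + a, k) * sp.binomial(l + b, l - k)
                  * (z - 1)**k * z**(l - k) for k in range(l + 1)))

def P_rod(l):      # Rodrigues' formula (4.3)
    return ((-1)**l / sp.rf(a + b + l + 1, l) * z**(-a) * (1 - z)**(-b)
            * sp.diff(z**(a + l) * (1 - z)**(b + l), z, l))

for l in range(4):
    assert sp.simplify(P_sum(l) - P_rod(l)) == 0
print("(4.1) and (4.3) agree for l = 0, 1, 2, 3")
```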

The Cauchy transforms \(\widehat{P}_\ell ^{\mathsf{J}}(z)\) defined in (3.1) can be expanded as stated below.

Lemma 4.2

The following relations hold true:

$$\begin{aligned} {\widehat{P}}^{{\mathsf{J}}}_\ell (z)&=-\frac{1}{2\pi \mathrm {i}(\alpha +\beta +\ell +1)_\ell } \sum _{j\ge 0}\frac{1}{z^{j+\ell +1}}(j+1)_\ell \frac{\Gamma (\alpha +\ell +j+1)\Gamma (\beta +\ell +1)}{\Gamma (\alpha +\beta +2\ell +j+2)}, \end{aligned}$$
(4.4)
$$\begin{aligned} {\widehat{P}}^{\mathsf{J}}_\ell (z)&\overset{z \rightarrow 0}{\sim }\, (-1)^\ell \frac{1}{2\pi \mathrm {i}(\alpha +\beta +\ell +1)_\ell } \sum _{j\ge 0} z^j (j+1)_\ell \frac{\Gamma (\alpha -j)\Gamma (\beta +\ell +1)}{\Gamma (\alpha +\beta +\ell -j+1)}, \end{aligned}$$
(4.5)

where the first relation is a genuine Taylor expansion at \(z=\infty \), valid for all \(|z| > 1\), whilst the second one is a Poincaré asymptotic expansion at \(z=0\) uniform in the sector \(0<\arg z<2\pi \).

Proof

We start with the expansion (4.4) at \(z=\infty \), which is computed as follows;

$$\begin{aligned} \widehat{P^{\mathsf{J}}_\ell }(z)&=\frac{1}{2\pi \mathrm {i}}\int _0^{1} P^{\mathsf{J}}_\ell (x)x^{\alpha }(1-x)^{\beta }\frac{\mathrm dx}{x-z}\\&\overset{(i)}{=}-\frac{1}{2\pi \mathrm {i}}\sum _{j\ge 0}\frac{1}{z^{j+1}}\int _0^{1}P^{\mathsf{J}}_\ell (x)x^{\alpha +j}(1-x)^{\beta }\mathrm dx\\&\overset{(ii)}{=}-\frac{1}{2\pi \mathrm {i}} \sum _{j\ge 0}\frac{1}{z^{j+\ell +1}}\int _0^{1}P^{\mathsf{J}}_\ell (x)x^{\alpha +j+\ell }(1-x)^{\beta }\mathrm dx\\&\overset{(iii)}{=}-\frac{1}{2\pi \mathrm {i}} \frac{(-1)^\ell }{(\alpha +\beta +\ell +1)_\ell } \sum _{j\ge 0}\frac{1}{z^{j+\ell +1}}\int _0^{1} \left( \frac{\mathrm d^\ell }{\mathrm dx^\ell }x^{\alpha +\ell } (1-x)^{\beta +\ell }\right) x^{j+\ell }\mathrm dx\\&\overset{(iv)}{=}-\frac{1}{2\pi \mathrm {i}} \frac{1}{(\alpha +\beta +\ell +1)_\ell } \sum _{j\ge 0}\frac{1}{z^{j+\ell +1}}\int _0^{1}x^{\alpha +\ell } (1-x)^{\beta +\ell } \frac{\mathrm d^\ell }{\mathrm dx^\ell }(x^{j+\ell })\mathrm dx\\&\overset{(v)}{=}-\frac{1}{2\pi \mathrm {i}} \frac{1}{(\alpha +\beta +\ell +1)_\ell } \sum _{j\ge 0}\frac{(j+1)_\ell }{z^{j+\ell +1}}\int _0^{1} x^{\alpha +\ell +j} (1-x)^{\beta +\ell } \mathrm dx\\&\overset{(vi)}{=}-\frac{1}{2\pi \mathrm {i}} \frac{1}{(\alpha +\beta +\ell +1)_\ell } \sum _{j\ge 0}\frac{1}{z^{j+\ell +1}}(j+1)_\ell \frac{\Gamma (\alpha +\ell +j+1)\Gamma (\beta +\ell +1)}{\Gamma (\alpha +\beta +2\ell +j+2)}. \end{aligned}$$

In (i) we have expanded the geometric series and exchanged sum and integral by Fubini's theorem, in (ii) we use that \(P^{\mathsf{J}}_\ell (z)\) is orthogonal to \(z^j\) for \(j<\ell \), in (iii) we use Rodrigues' formula (4.3), in (iv) we integrate by parts, in (v) we compute the derivative, and finally in (vi) we use the Euler beta integral. The computation at \(z=0\) is completely analogous, with the only difference that in (i) it is not legitimate to exchange sum and integral, so this step holds only in the sense of a Poincaré asymptotic series. \(\square \)
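The expansion (4.4) is also easy to test numerically: the following sketch (with sample values \(\alpha =1.3\), \(\beta =0.7\), \(z=2.5\), which are assumptions) compares \(2\pi \mathrm {i}\,\widehat{P}^{\mathsf{J}}_\ell (z)\), computed by direct quadrature, with the truncated series.

```python
from math import gamma
from scipy.integrate import quad

alpha, beta, zval = 1.3, 0.7, 2.5     # sample values (an assumption); note |z| > 1

def binom(a, k): return gamma(a + 1) / (gamma(k + 1) * gamma(a - k + 1))
def poch(a, n):  return gamma(a + n) / gamma(a)

def P(l, x):                          # monic Jacobi polynomial (4.1)
    return gamma(l + 1) / poch(alpha + beta + l + 1, l) * sum(
        binom(l + alpha, k) * binom(l + beta, l - k) * (x - 1)**k * x**(l - k)
        for k in range(l + 1))

for l in range(3):
    # left-hand side: 2*pi*i * hat{P}_l(z), i.e. the Cauchy transform integral itself
    lhs, _ = quad(lambda x: P(l, x) * x**alpha * (1 - x)**beta / (x - zval), 0, 1)
    # right-hand side: truncation of the series (4.4), multiplied by 2*pi*i
    rhs = -sum(zval**(-(j + l + 1)) * poch(j + 1, l)
               * gamma(alpha + l + j + 1) * gamma(beta + l + 1)
               / (poch(alpha + beta + l + 1, l) * gamma(alpha + beta + 2*l + j + 2))
               for j in range(80))
    assert abs(lhs - rhs) < 1e-8
print("the expansion (4.4) matches the quadrature values")
```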

The next step is to compute the expansions of the products of the Jacobi polynomials and their Cauchy transforms. To this end it is convenient to study the properties of R(z) in more detail.

Proposition 4.3

The matrix \(\Psi _N(z):=Y_N(z)z^{\alpha \sigma _3/2}(1-z)^{\beta \sigma _3/2}\) satisfies the following linear differential equation

$$\begin{aligned} \partial _z\Psi _N(z) = U(z)\Psi _N(z) \end{aligned}$$
(4.6)

and the matrix R(z) satisfies the following Lax differential equation,

$$\begin{aligned} \partial _z R(z)=[U(z),R(z)]. \end{aligned}$$
(4.7)

Here the matrix U(z) is explicitly given as

$$\begin{aligned} U(z)=\frac{U_0}{z}+\frac{U_1}{1-z}, \end{aligned}$$
(4.8)

with

$$\begin{aligned} U_0&= \begin{pmatrix} \frac{2 N (\alpha +\beta +N)+\alpha (\alpha +\beta )}{2 (\alpha +\beta +2N)} &{} -\frac{h_N^{\mathsf{J}}}{2 \pi \mathrm {i}} (\alpha +\beta +2N+1) \\ \frac{2 \pi \mathrm {i}}{h_{N-1}^{\mathsf{J}}} (\alpha +\beta +2N-1) &{} -\frac{2 N (\alpha +\beta +N)+\alpha (\alpha +\beta )}{2 (\alpha +\beta +2N)} \end{pmatrix}, \\ U_1&= \begin{pmatrix} -\frac{2 N (\alpha +\beta +N)+\beta (\alpha +\beta )}{2 (\alpha +\beta +2N)} &{}-\frac{h_N^{\mathsf{J}}}{2 \pi \mathrm {i}} (\alpha +\beta +2N+1) \\ \frac{2 \pi \mathrm {i}}{h_{N-1}^{\mathsf{J}}} (\alpha +\beta +2N-1) &{} \frac{2 N (\alpha +\beta +N)+\beta (\alpha +\beta )}{2 (\alpha +\beta +2N)} \end{pmatrix}. \end{aligned}$$

Proof

From the definition (3.10) we obtain \(R(z)=\Psi _N(z)\begin{pmatrix}1 &{} 0 \\ 0 &{} 0 \end{pmatrix}\Psi _N^{-1}(z)\); therefore the Lax equation (4.7) follows from (4.6). The latter is a classical property of Jacobi orthogonal polynomials [44]. \(\square \)

Proving Proposition 4.1 is equivalent to proving that \(\widetilde{R}(z)\sim R^{[p]}(z)\) for \(p=\infty ,0\), where

$$\begin{aligned} \widetilde{R}(z)=TR(z)T^{-1}. \end{aligned}$$
(4.9)

It follows from the previous proposition that \(\widetilde{R}(z)\) satisfies

$$\begin{aligned} \frac{\partial }{\partial z}\widetilde{R}(z)=[\widetilde{U}(z),\widetilde{R}(z)] \end{aligned}$$
(4.10)

where \(\widetilde{U}(z)=TU(z)T^{-1}=\widetilde{U}_0/z+\widetilde{U}_1/(1-z)\), with

$$\begin{aligned} \widetilde{U}_0&=T U_0 T^{-1} = \frac{1}{\alpha +\beta +2N} \begin{pmatrix} \frac{2 N (\alpha +\beta +N)+\alpha (\alpha +\beta )}{2} &{} -N(\alpha +N)(\beta +N)(\alpha +\beta +N) \\ 1 &{} - \frac{2 N (\alpha +\beta +N)+\alpha (\alpha +\beta )}{2} \end{pmatrix},\\ \widetilde{U}_1&= T U_1 T^{-1} = \frac{1}{\alpha +\beta +2N} \begin{pmatrix} - \frac{2 N (\alpha +\beta +N)+\beta (\alpha +\beta )}{2} &{} -N(\alpha +N)(\beta +N)(\alpha +\beta +N) \\ 1 &{} \frac{2 N (\alpha +\beta +N)+\beta (\alpha +\beta )}{2} \end{pmatrix}. \end{aligned}$$

Introduce the matrices

$$\begin{aligned} \sigma _3=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{}-1 \end{array} \right) ,\qquad \sigma _+=\left( \begin{array}{cc} 0 &{} 1 \\ 0 &{}0 \end{array} \right) ,\qquad \sigma _-=\left( \begin{array}{cc} 0 &{} 0 \\ 1 &{}0 \end{array} \right) , \end{aligned}$$

and write

$$\begin{aligned} \widetilde{R}(z)=\frac{1}{2} {\mathbf {1}}+r_3\sigma _3+r_+\sigma _++r_-\sigma _-,\qquad \widetilde{U}(z)=u_3\sigma _3+u_+\sigma _++u_-\sigma _-, \end{aligned}$$

where we used that \(\mathrm {tr}\,R(z)= 1, \mathrm {tr}\,U(z)=0\). For the sake of brevity we omit the dependence on z in the \(\mathfrak {sl}_2\) components. The Lax equation (4.10) yields the coupled first-order linear ODEs

$$\begin{aligned} \partial _z r_3=u_+r_--u_-r_+,\qquad \partial _z r_+=2(u_3r_+-u_+r_3),\qquad \partial _z r_-=2(u_-r_3-u_3r_-), \end{aligned}$$

which are equivalent to three decoupled third-order linear ODEs, one for \(\partial _zr_3\)

$$\begin{aligned}&3 \left[ 2N(\alpha +\beta +N)+\alpha (\alpha +\beta )-2 -z \left( (\alpha +\beta +2 N)^2-4\right) \right] \partial _z r_3\nonumber \\&\quad - \left[ \alpha ^2-4-2 z\left( 2 N (\alpha +\beta +N)+\alpha (\alpha +\beta )-12\right) +z^2 \left( (\alpha +\beta +2 N)^2-24\right) \right] \partial _z^2 r_3\nonumber \\&\quad - 5 z(z-1) (1-2 z) \partial _z^3 r_3 +z^2(z-1)^2 \partial _z^4 r_3 =0, \end{aligned}$$
(4.11)

and for \(r_\pm \)

$$\begin{aligned}&\left[ 2 N (\alpha +\beta +N\pm 1)+(\alpha \pm 1) (\alpha +\beta )-z (\alpha +\beta +2 N\pm 2) (\alpha +\beta +2 N)\right] r_\pm \nonumber \\&\quad - \left[ \alpha ^2-1 -z \left( -4 N (\alpha +\beta +N\pm 1)-2(\alpha +\beta )(\alpha \pm 1)+6 \right) \right. \nonumber \\&\quad \left. + z^2 \left( 4 N (\alpha +\beta +N\pm 1)+(\alpha +\beta )(\alpha +\beta \pm 2)-6\right) \right] \partial _z r_\pm \nonumber \\&\quad +3z(z-1)(2z-1)\partial _z^2 r_\pm +z^2 (z-1)^2\partial _z^3 r_\pm =0. \end{aligned}$$
(4.12)

The following ansatz is quite natural in view of our previous work [31] on the Laguerre Unitary Ensemble (see also [21] for the Gaussian Unitary Ensemble); namely, we write the expansions of the entries of \(\widetilde{R}(z)\) at \(z=\infty \) as

$$\begin{aligned} r_3(z)&\sim \frac{1}{2}+\frac{1}{\alpha +\beta +2N}\sum _{\ell \ge 0} \frac{1}{z^{\ell +1}} \ell A_\ell (N),\nonumber \\ r_+(z)&\sim \frac{1}{\alpha +\beta +2N}\sum _{\ell \ge 0} \frac{N(\alpha +N)(\beta +N)(\alpha +\beta +N)}{z^{\ell +1}} B_\ell (N+1), \nonumber \\ r_-(z)&\sim -\frac{1}{\alpha +\beta +2N}\sum _{\ell \ge 0} \frac{1}{z^{\ell +1}} B_\ell (N), \end{aligned}$$
(4.13)

for some coefficients \(A_\ell (N)=A_\ell (N,\alpha ,\beta )\) and \(B_\ell (N)=B_\ell (N,\alpha ,\beta )\). By substituting into (4.11) and (4.12) we see that the ansatz is consistent with them; in particular we get the following three-term recurrence relations for \(A_\ell (N),B_\ell (N)\):

$$\begin{aligned}&(2\ell +1)\left( \alpha (\alpha +\beta )-\ell (\ell +1)+2N(\alpha +\beta +N)\right) A_{\ell }(N)\nonumber \\&\qquad +(\ell -1)(\ell ^2-\alpha ^2)A_{\ell -1}(N)+(\ell +2)\left( (\ell +1)^2-(\alpha +\beta +2N)^2\right) A_{\ell +1}(N)=0, \end{aligned}$$
(4.14)
$$\begin{aligned}&(2\ell +1)\left( (\alpha - 1)(\alpha +\beta )-\ell (\ell +1)+2N(\alpha +\beta +N-1)\right) B_{\ell }(N)\nonumber \\&\qquad +\ell (\ell ^2-\alpha ^2)B_{\ell -1}(N)+(\ell +1)\left( (\ell +1)^2-(\alpha +\beta +2N-1)^2\right) B_{\ell +1}(N)=0, \end{aligned}$$
(4.15)

for \(\ell \ge 1\), together with the initial conditions

$$\begin{aligned} A_0(N,\alpha ,\beta )&=\frac{N(\beta +N)}{\alpha +\beta +2N},&A_1(N,\alpha ,\beta )=\frac{N(\alpha +N)(\beta +N)(\alpha +\beta +N)}{(\alpha +\beta +2 N-1) (\alpha +\beta +2 N) (\alpha +\beta +2 N+1)}, \\ B_0(N,\alpha ,\beta )&= \frac{1}{(\alpha +\beta +2N-1)},&B_1(N,\alpha ,\beta )= \frac{(\alpha -1) (\alpha +\beta )+2 N (\alpha +\beta +N-1)}{(\alpha +\beta +2 N-2) (\alpha +\beta +2 N-1) (\alpha +\beta +2 N)}. \end{aligned}$$

The initial conditions are obtained from (4.1) and (4.4). It can be checked that the recurrence relation for the coefficients of \(r_+(z)\) is actually that for the coefficients of \(r_-(z)\), up to the shift \(N\mapsto N+1\), as claimed in (4.13).
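For concreteness, here is a short sympy sketch of how (4.14) is iterated from the initial data \(A_0,A_1\) to produce \(A_2,A_3,\dots \) as rational functions of \(N,\alpha ,\beta \) (the variable names are ours):

```python
import sympy as sp

N, a, b = sp.symbols('N alpha beta')
S = a + b + 2*N

A = {0: N*(b + N)/S,
     1: N*(a + N)*(b + N)*(a + b + N)/((S - 1)*S*(S + 1))}

def next_A(l):
    # solve (4.14) for A_{l+1} in terms of A_l, A_{l-1}
    c_mid  = (2*l + 1)*(a*(a + b) - l*(l + 1) + 2*N*(a + b + N))
    c_down = (l - 1)*(l**2 - a**2)
    c_up   = (l + 2)*((l + 1)**2 - S**2)
    return sp.simplify(-(c_mid*A[l] + c_down*A[l - 1]) / c_up)

for l in range(1, 4):
    A[l + 1] = next_A(l)

print(sp.factor(A[2]))    # A_2 as a rational function of N, alpha, beta
```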

The three term recurrence relations (4.14) and (4.15) can be solved in terms of Wilson polynomials (1.19).

Proposition 4.4

The coefficients \(A_\ell (N,\alpha ,\beta )\) and \(B_\ell (N,\alpha ,\beta )\) can be expressed in terms of Wilson Polynomials, defined in (1.19), as

$$\begin{aligned}&A_\ell (N,\alpha ,\beta )=\frac{(-1)^{N-1}(\alpha +\ell )!(\alpha +\beta +N)!(\beta +N)}{(N-1)!(\alpha +N-1)!(\alpha +\beta +2N+\ell )!}\\&\quad W_{N-1}\left( -\left( \ell + \frac{1}{2} \right) ^2;\frac{3}{2}, \frac{1}{2},\alpha +\frac{1}{2},\frac{1}{2}-\alpha -\beta -2N\right) , \\&B_\ell (N,\alpha ,\beta )=\frac{(-1)^{N-1}(\alpha +\ell )!(\alpha +\beta +N-1)!}{(N-1)!(\alpha +N-1)!(\alpha +\beta +2N+\ell -1)!}\\&\quad W_{N-1}\left( -\left( \ell + \frac{1}{2} \right) ^2;\frac{1}{2}, \frac{1}{2},\alpha +\frac{1}{2},\frac{3}{2}-\alpha -\beta -2N\right) . \end{aligned}$$

This is equivalent to an explicit hypergeometric representation for \(A_\ell \) and \(B_\ell \), which follows from the hypergeometric form (1.19) of the Wilson polynomials; see the end of the proof below.

Proof

The identification with the Wilson polynomials is obtained by comparing the recurrence relations (4.14) and (4.15) with the difference equation for this family of orthogonal polynomials, which reads

$$\begin{aligned} n(n+a+b+c+d-1)w(k)=C(k)w(k+\mathrm {i})-\left[ C(k)+D(k)\right] w(k)+D(k)w(k-\mathrm {i}), \end{aligned}$$

where \(w(k)=W_n(k^2;a,b,c,d)\) and

$$\begin{aligned} C(k)=\frac{(a-\mathrm {i}k)(b-\mathrm {i}k)(c-\mathrm {i}k)(d-\mathrm {i}k)}{2\mathrm {i}k (2 \mathrm {i}k -1)},\quad D(k)=\frac{(a+\mathrm {i}k)(b+\mathrm {i}k)(c+\mathrm {i}k)(d+\mathrm {i}k)}{2\mathrm {i}k (2 \mathrm {i}k +1)}. \end{aligned}$$

The hypergeometric representation of \(A_\ell ,B_\ell \) then directly follows from that of the Wilson polynomials in (1.19). \(\square \)
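As a numerical cross-check, the closed formula for \(A_\ell \) of Proposition 4.4, implemented with the standard terminating \({}_4F_3\) form of the Wilson polynomials (assumed to agree with (1.19)), satisfies the recurrence (4.14); the sample parameter values below are assumptions.

```python
from math import gamma

alpha, beta, N = 1.3, 0.6, 3          # sample values (an assumption)

def poch(a, n):
    out = 1.0
    for j in range(n):
        out *= a + j
    return out

def wilson(n, l, a, b, c, d):
    """W_n(x^2; a,b,c,d) at x^2 = -(l+1/2)^2, via the terminating 4F3 sum
    (standard hypergeometric form, assumed to agree with (1.19))."""
    ix = l + 0.5
    s = sum(poch(-n, k) * poch(n + a + b + c + d - 1, k)
            * poch(a + ix, k) * poch(a - ix, k)
            / (poch(a + b, k) * poch(a + c, k) * poch(a + d, k) * gamma(k + 1))
            for k in range(n + 1))
    return poch(a + b, n) * poch(a + c, n) * poch(a + d, n) * s

def A(l):                             # the closed formula of Proposition 4.4
    pref = ((-1)**(N - 1) * gamma(alpha + l + 1) * gamma(alpha + beta + N + 1)
            * (beta + N)
            / (gamma(N) * gamma(alpha + N) * gamma(alpha + beta + 2*N + l + 1)))
    return pref * wilson(N - 1, l, 1.5, 0.5, alpha + 0.5, 0.5 - alpha - beta - 2*N)

S = alpha + beta + 2*N
for l in range(1, 5):
    res = ((2*l + 1)*(alpha*(alpha + beta) - l*(l + 1) + 2*N*(alpha + beta + N))*A(l)
           + (l - 1)*(l**2 - alpha**2)*A(l - 1)
           + (l + 2)*((l + 1)**2 - S**2)*A(l + 1))
    assert abs(res) < 1e-8 * (abs(A(l)) + abs(A(l + 1)) + 1.0)
print("Proposition 4.4 is compatible with the recurrence (4.14)")
```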

The above Proposition, together with the expansions (4.13), yields the first part of Proposition 4.1. The asymptotics of R(z) at \(z=0\) are obtained in a similar way. More precisely, we claim that the expansion at \(z=0\) of the entries of \(\widetilde{R}(z)\) reads

$$\begin{aligned} r_3(z)&\sim \frac{1}{2}+\frac{1}{\alpha +\beta +2N} \sum _{\ell \ge 0} \frac{(\alpha +\beta +2N-\ell )_{2\ell +1}}{(\alpha -\ell )_{2\ell +1}} (\ell +1) A_\ell (N,\alpha ,\beta ) z^\ell , \nonumber \\ r_+(z)&\sim -\frac{1}{\alpha +\beta +2N} \sum _{\ell \ge 0} N(\beta +N)(\alpha +N)(\alpha +\beta +N)\nonumber \\&\frac{(\alpha +\beta +2N+1-\ell )_{2\ell +1}}{(\alpha -\ell )_{2\ell +1}} B_\ell (N+1,\alpha ,\beta ) z^\ell , \nonumber \\ r_-(z)&\sim \frac{1}{\alpha +\beta +2N} \sum _{\ell \ge 0} \frac{(\alpha +\beta +2N-1-\ell )_{2\ell +1}}{(\alpha -\ell )_{2\ell +1}} B_\ell (N,\alpha ,\beta ) z^\ell . \end{aligned}$$
(4.16)

This can be proven by checking that, plugging the formulæ (4.16) into the equations (4.11), (4.12), one obtains the same recurrence relations (4.14) and (4.15). The associated initial conditions can again be computed from (4.1) and (4.5). This concludes the proof of Proposition 4.1.\(\square \)

4.2 Proof of Corollary 1.6

4.2.1 Case \(\ell =1\)

From Theorem 1.5 we write the formula for \(\mathscr {C}_1(z)\) by using the differential equation (4.6) as

$$\begin{aligned}&{\mathscr {C}}_1(z) =\mathrm {tr}\,\left( Y^{-1}_N(z)Y_N'(z)\frac{\sigma _3}{2}\right) =\mathrm {tr}\,\left( Y^{-1}_N(z)Y_N'(z) E_{1,1}\right) \nonumber \\&=\mathrm {tr}\,(U(z)R(z))-\frac{1}{2}\left( \frac{\alpha }{z}-\frac{\beta }{1-z}\right) , \end{aligned}$$
(4.17)

where we denote \(E_{1,1}=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}\); in the first step we used that \(\mathrm {tr}\,(Y^{-1}_N(z)Y_N'(z))=\mathrm {tr}\,U(z)=0\) and in the second one we used the definition of \(R(z)=Y_N(z)E_{1,1}Y_N(z)^{-1}\), the cyclic property of the trace and the equation

$$\begin{aligned} Y'_N(z)=U(z)Y_N(z)-\left( \frac{\alpha }{z}-\frac{\beta }{1-z}\right) Y_N(z)\frac{\sigma _3}{2}, \end{aligned}$$

which follows from (4.6).

Lemma 4.5

We have

$$\begin{aligned} \partial _z\left[ z(1-z) \mathrm {tr}\,(U(z)R(z))\right] = -(\alpha +\beta +2N) \mathrm {tr}\,\left( R(z)\frac{\sigma _3}{2}\right) . \end{aligned}$$

Proof

We compute

$$\begin{aligned}&\partial _z\left[ z(1-z) \mathrm {tr}\,(U(z)R(z))\right] =(1-2z)\mathrm {tr}\,(U(z)R(z))\\&+z(1-z)\mathrm {tr}\,(U'(z)R(z))+z(1-z)\mathrm {tr}\,(U(z)R'(z)). \end{aligned}$$

The last term vanishes due to the Lax equation (4.7), because \(\mathrm {tr}\,(U(z)[U(z),R(z)])=0\) by the cyclic property of the trace. Then we use the identity

$$\begin{aligned} (1-2z)U(z)+z(1-z)U'(z)= - \frac{\alpha +\beta +2N}{2}\sigma _3, \end{aligned}$$

which can be checked directly from (4.8). The proof is complete. \(\square \)
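The matrix identity above can also be verified symbolically. In the following sympy sketch the symbols p and q stand for the scalar constants \(h_N^{\mathsf{J}}/(2\pi \mathrm {i})\) and \(2\pi \mathrm {i}/h_{N-1}^{\mathsf{J}}\) appearing in the off-diagonal entries of (4.8):

```python
import sympy as sp

z, N, a, b, p, q = sp.symbols('z N alpha beta p q')
S = a + b + 2*N

U0 = sp.Matrix([[ (2*N*(a + b + N) + a*(a + b))/(2*S), -p*(S + 1)],
                [ q*(S - 1), -(2*N*(a + b + N) + a*(a + b))/(2*S)]])
U1 = sp.Matrix([[-(2*N*(a + b + N) + b*(a + b))/(2*S), -p*(S + 1)],
                [ q*(S - 1),  (2*N*(a + b + N) + b*(a + b))/(2*S)]])
U = U0/z + U1/(1 - z)                              # the matrix U(z) of (4.8)

sigma3 = sp.Matrix([[1, 0], [0, -1]])
lhs = (1 - 2*z)*U + z*(1 - z)*U.diff(z)
assert (lhs + S/2*sigma3).applyfunc(sp.simplify) == sp.zeros(2, 2)
print("(1-2z) U(z) + z(1-z) U'(z) = -(alpha+beta+2N)/2 * sigma_3 verified")
```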

By this lemma and (4.17) we obtain

$$\begin{aligned}&\partial _z[z(1-z)\mathscr {C}_1(z)]=-(\alpha +\beta +2N) \left( R_{1,1}(z)-\frac{1}{2}\right) \\&+\frac{\alpha +\beta }{2}=-(\alpha +\beta +2N) \left( R_{1,1}(z)-1\right) -N, \end{aligned}$$

where we use that \(\mathrm {tr}\,R(z)=1\) to compute \(\mathrm {tr}\,(R(z)\sigma _3)=2R_{1,1}(z)-1\), and we denote by \(R_{1,1}\) the (1, 1)-entry of R. Integrating this identity, we obtain, for any \(p\in {\mathbb {C}}\setminus [0,1]\),

$$\begin{aligned}&z(1-z){\mathscr {C}}_1(z)-p(1-p)\mathscr {C}_1(p)=(\alpha +\beta +2N)\nonumber \\&\int _p^z\left( 1-R_{1,1}(w)\right) \mathrm dw+N(p-z). \end{aligned}$$
(4.18)

Letting \(p\rightarrow 0\) in (4.18) we have \(p(1-p){\mathscr {C}}_1(p)\rightarrow 0\) and so

$$\begin{aligned} {\mathscr {C}}_1(z)=\frac{(\alpha +\beta +2N)}{z(1-z)}\int _0^z\left( 1-R_{1,1}(w)\right) \mathrm dw-\frac{N}{1-z}. \end{aligned}$$

Expanding this identity at \(z=0\) we get at the left hand side

$$\begin{aligned} {\mathscr {C}}_1(z)\sim -\sum _{k\ge 0}\left\langle \mathrm {tr}\,X^{-k-1}\right\rangle z^{k}={\mathscr {F}}_{1,0}(z), \end{aligned}$$

and using Proposition 4.1 (note that \((TR(z)T^{-1})_{1,1}=R_{1,1}(z)\) because T is diagonal) the formula for \({\mathscr {F}}_{1,0}(z)\) is proved.

Letting instead \(p\rightarrow \infty \) we have \(p(1-p){\mathscr {C}}_1(p)\sim (1-p)N-\left\langle \mathrm {tr}\,X\right\rangle +{\mathcal {O}}(1/p)\) and therefore from (4.18) we have (noting that \(R_{1,1}(w)=1+{\mathcal {O}}(w^{-2})\) so the integral is well defined)

$$\begin{aligned} z(1-z){\mathscr {C}}_1(z)=(\alpha +\beta +2N)\int _\infty ^z\left( 1-R_{1,1}(w)\right) \mathrm dw+(1-z)N-\left\langle \mathrm {tr}\,X\right\rangle . \end{aligned}$$

We can compute

$$\begin{aligned} \left\langle \mathrm {tr}\,X\right\rangle =\frac{N(\alpha +N)}{\alpha +\beta +2N} \end{aligned}$$

by expanding the general formula \(\mathscr {C}_1(z)=\mathrm {tr}\,\left( Y_N^{-1}(z)Y_N'(z)\sigma _3/2\right) \) at \(z=\infty \), using (4.1) and the first few terms in (4.4). We finally obtain

$$\begin{aligned} {\mathscr {C}}_1(z)=\frac{\alpha +\beta +2N}{z(1-z)}\int _\infty ^z\left( 1-R_{1,1}(w)\right) \mathrm dw+\frac{N}{z} -\frac{N(\alpha +N)}{z(1-z)(\alpha +\beta +2N)}. \end{aligned}$$

Expanding this identity at \(z=\infty \) we get at the left hand side

$$\begin{aligned} {\mathscr {C}}_1(z)\sim \sum _{k\ge 0}\frac{\left\langle \mathrm {tr}\,X^{k}\right\rangle }{z^{k+1}}=\frac{N}{z}+{\mathscr {F}}_{1,\infty }(z) \end{aligned}$$

and using Proposition 4.1 (again note that \((TR(z)T^{-1})_{1,1}=R_{1,1}(z)\) because T is diagonal) the formula for \({\mathscr {F}}_{1,\infty }(z)\) is also proved.
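As a cross-check of the value of \(\left\langle \mathrm {tr}\,X\right\rangle \) (and of the normalization \(\int _0^1\rho _1(x)\mathrm dx=N\)), one can integrate the one-point density numerically. The sketch below uses the standard sum form \(k_N(x,x)=\mathrm {e}^{V(x)}\sum _{j<N}P_j(x)^2/h_j\) of the Christoffel-Darboux kernel, equivalent to the ratio form of Section 3, with sample parameters as an assumption.

```python
from math import gamma
from scipy.integrate import quad

alpha, beta, N = 1.3, 0.7, 3          # sample values (an assumption)

def binom(a, k): return gamma(a + 1) / (gamma(k + 1) * gamma(a - k + 1))
def poch(a, n):  return gamma(a + n) / gamma(a)

def P(l, x):                          # monic Jacobi polynomial (4.1)
    return gamma(l + 1) / poch(alpha + beta + l + 1, l) * sum(
        binom(l + alpha, k) * binom(l + beta, l - k) * (x - 1)**k * x**(l - k)
        for k in range(l + 1))

def h(l):                             # norming constant (4.2)
    return (gamma(l + 1) * gamma(alpha + l + 1) * gamma(beta + l + 1)
            * gamma(alpha + beta + l + 1)
            / (gamma(alpha + beta + 2*l + 1) * gamma(alpha + beta + 2*l + 2)))

def rho1(x):                          # one-point density k_N(x,x), sum form of the kernel
    return x**alpha * (1 - x)**beta * sum(P(j, x)**2 / h(j) for j in range(N))

total, _ = quad(rho1, 0, 1)
mean, _ = quad(lambda x: x * rho1(x), 0, 1)
assert abs(total - N) < 1e-6
assert abs(mean - N*(alpha + N)/(alpha + beta + 2*N)) < 1e-6
print("<tr X> and the normalization of rho_1 verified numerically")
```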

4.2.2 Case \(\ell \ge 2\)

In this case we note that

$$\begin{aligned} {\mathscr {C}}_\ell ^{{\mathsf{c}}}(z_1,\dots ,z_\ell )=-\sum _{(i_1,\dots ,i_\ell )\in \mathrm {cyc}((\ell ))}\frac{\mathrm {tr}\,\left( \widetilde{R}(z_{i_1})\cdots \widetilde{R}(z_{i_\ell })\right) }{(z_{i_1}-z_{i_2})\cdots (z_{i_\ell }-z_{i_1})}-\frac{\delta _{\ell ,2}}{(z_1-z_2)^2},\qquad \ell \ge 2 \end{aligned}$$

where \(\widetilde{R}(z)=TR(z)T^{-1}\) as in (4.9). We now expand both sides of this identity at \(z=0,\infty \). The expansion of the right hand side follows from Proposition 4.1 which asserts that \(\widetilde{R}(z)\sim R^{[0]}(z),R^{[\infty ]}(z)\) as \(z\rightarrow 0,\infty \), respectively. For the left hand side instead, at \(z\rightarrow 0\) we have

$$\begin{aligned} {\mathscr {C}}_\ell ^{{\mathsf{c}}}(z_1,\dots ,z_\ell )\sim (-1)^\ell \sum _{k_1,\dots ,k_\ell \ge 0}\left\langle \mathrm {tr}\,X^{-k_1-1}\cdots \mathrm {tr}\,X^{-k_\ell -1}\right\rangle ^{\mathsf{c}}z_1^{k_1}\cdots z_\ell ^{k_\ell }={\mathscr {F}}_{\ell ,0}^{\mathsf{c}}(z_1,\dots ,z_\ell ), \end{aligned}$$

while at \(z\rightarrow \infty \) we have

$$\begin{aligned} {\mathscr {C}}_\ell ^{\mathsf{c}}(z_1,\dots ,z_\ell )\sim \sum _{k_1,\dots ,k_\ell \ge 0}\left\langle \mathrm {tr}\,X^{k_1}\cdots \mathrm {tr}\,X^{k_\ell }\right\rangle ^{\mathsf{c}}z_1^{-k_1-1}\cdots z_\ell ^{-k_\ell -1}=\mathscr {F}_{\ell ,\infty }^{\mathsf{c}}(z_1,\dots ,z_\ell ), \end{aligned}$$

where in the last identity we use that terms with \(k_i=0\) for some i do not contribute to the sum; indeed the connected correlator \(\left\langle \mathrm {tr}\,X^{k_1}\cdots \mathrm {tr}\,X^{k_\ell }\right\rangle ^{\mathsf{c}}\) vanishes whenever \(k_i=0\) for some i. The proof is complete.\(\square \)