1 Introduction

The Robinson-Schensted-Knuth (RSK) correspondence is a combinatorial mapping which plays an important role in the theory of Young tableaux, symmetric functions and representation theory [23, 45]. It is deeply connected with Schur functions and provides a combinatorial framework for understanding the Cauchy-Littlewood identity and Schur measures on integer partitions. It is also the basic structure which lies behind the solvability of a particular family of combinatorial models in probability and statistical physics including longest increasing subsequence problems, directed last passage percolation in 1+1 dimensions and the totally asymmetric simple exclusion process, see for example [1, 3, 33, 40].

The RSK map is defined on matrices with non-negative integer coefficients and can be described by expressions in the max-plus semi-ring. This was extended to matrices with real entries by Berenstein and Kirillov [10]. Replacing these expressions by their analogues in the usual algebra, A.N. Kirillov [35] introduced a geometric lifting of the Berenstein-Kirillov correspondence which he called the ‘tropical RSK correspondence’, in honor of M.-P. Schützenberger (1920–1996). However, for many readers nowadays the word ‘tropical’ indicates just the opposite, so to avoid confusion we will refer to Kirillov’s construction as the geometric RSK (gRSK) correspondence, as in the theory of geometric crystals [8, 9], which is closely related.

The geometric RSK correspondence is a birational mapping from \(({\mathbb{R}}_{>0})^{n\times m}\) onto itself. It was introduced by Kirillov [35] for square matrices (n=m) and generalized to the rectangular setting by Noumi and Yamada [37]. In the paper [19] it was shown that there is a fundamental connection between the gRSK correspondence and \(\mathit{GL}(n,{\mathbb{R}})\)-Whittaker functions, analogous to the well-known connection between the RSK correspondence and Schur functions. In particular, it is explained there that the analogue of the Cauchy-Littlewood identity in the setting of gRSK can be seen as a generalization of a Whittaker integral identity which was originally conjectured by Bump [15] and later proved by Stade [43]. The connection to Whittaker functions gives rise to a natural family of measures (Whittaker measures) which play a similar role in this setting to Schur measures on integer partitions. It also has applications to random polymers. In the paper [19], an explicit integral formula is obtained for the Laplace transform of the law of the partition function associated with a random directed polymer model on the two-dimensional lattice with log-gamma weights introduced in [42]. For related recent developments, see [11, 12, 38].

In the present work, we first provide further insight into the results of [19] by showing:

  (a) the gRSK mapping is volume preserving with respect to the product measure \(\prod_{ij} dx_{ij}/x_{ij}\) on \(({\mathbb{R}}_{>0})^{n\times m}\), and

  (b) the integrand in Givental’s integral formula for \(\mathit{GL}(n,{\mathbb{R}})\)-Whittaker functions [26, 32] arises naturally through the application of the gRSK map (see Theorem 3.2 below).

The volume preserving property can be seen as a consequence of a new description of the gRSK map as a composition of local moves which we introduce in this paper. This description is a re-formulation of the geometric row-insertion algorithm introduced by Noumi and Yamada in [37]. Combining (a) and (b) gives a direct ‘combinatorial’ proof of Stade’s identity (with some restrictions on the parameters) analogous to the bijective proof of the Cauchy-Littlewood identity via the classical RSK correspondence (see, for example, Fulton [23, §4.3]).

The second aim of this paper is to initiate a program of understanding the gRSK mapping in the presence of symmetry constraints in much the same spirit as the work of Baik and Rains [2, 4, 5] on longest increasing subsequence and last passage percolation problems. Here we consider one particular symmetry, namely the restriction of gRSK to symmetric matrices. We show that the volume preserving property continues to hold in this setting and deduce the analogue of the Whittaker measure. The corresponding Whittaker integral identity (Corollary 5.5) involves only a single Whittaker function, and turns out to be equivalent to a formula for a certain Mellin transform of the \(\mathit{GL}(n,{\mathbb{R}})\)-Whittaker function which was conjectured by Bump and Friedberg [16] and proved by Stade [44], again with some restrictions on the parameters. We also consider a degeneration of this model in which the diagonal entries of the input matrix vanish and the gRSK map rescales to a new version of gRSK defined on triangles. This model has a surprising and non-trivial connection to the symmetric case (see remarks at the end of Sect. 6 below).

One particular motivation for our study of the gRSK mapping is the analysis of directed polymer models. The basic directed polymer in a random environment is a model from statistical physics, introduced by Huse and Henley [29], that couples a random path with an environment of random weights. Given random positive weights \(\{w_{i,j}\}\) indexed by the two-dimensional lattice, each directed lattice path π from (1,1) to (n,m) is given the quenched probability

$$Q_{nm}(\pi)= Z_{nm}^{-1} \prod _{(i,j)\in\pi} w_{i,j} $$

where the normalization, also called the partition function, is

$$ Z_{nm}=\sum_{\pi\in\varPi_{nm}} \prod_{(i,j)\in\pi} w_{i,j} $$
(1.1)

and \(\varPi_{nm}\) is the set of such paths. A great deal of work in probability and statistical physics has been devoted to understanding the large-scale behavior of the random path π and the partition function \(Z_{nm}\), but the subject is far from complete. The reader is referred to [17, 18, 20] for reviews. The connection with gRSK is that the partition function appears as an entry in the output matrix (equation (3.9) below).
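
We note in passing that the partition functions satisfy the simple recursion \(Z_{ij}=w_{ij}(Z_{i-1,j}+Z_{i,j-1})\), with \(Z_{11}=w_{11}\) and the convention that \(Z_{ij}=0\) when i=0 or j=0, since every path ending at (i,j) passes last through (i−1,j) or (i,j−1).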

As an application of our gRSK results we determine the law of the partition function of a family of random polymer models with inverse gamma weights that are constrained to be symmetric about the main diagonal. (The model with inverse gamma weights is also called the log-gamma polymer because conventionally the weights are written as exponentials to create a Gibbs-like measure.) We also consider a degeneration of this model in which the polymer paths are constrained to stay below the diagonal. This can be seen as a discrete version of the continuum random polymer above a hard wall, which appeared recently in the physics literature [28]. Formally, our results yield integral formulae for the Laplace transforms of these laws which we anticipate will be made rigorous in future work and then used as a starting point for further asymptotic development. Similar integral formulae obtained in [19] for the polymer model without symmetry were used in [12] to prove Tracy-Widom GUE asymptotics for the law of the partition function. The polymer models we consider also give rise to a positive temperature version of the interpolating ensembles of Baik and Rains [2, 4]. In the KPZ scaling limit they should correspond to the KPZ equation on the half-line with mixed boundary conditions at zero and narrow wedge initial condition.

The outline of the paper is as follows.

  • In the next section we give some background on Whittaker functions, introduce a generalization of these functions and explain how these functions can be regarded as generating functions for patterns. This interpretation can be seen as a generalization of Givental’s integral formula [24, 26, 32] and is analogous to the combinatorial interpretation of Schur functions as generating functions for semistandard Young tableaux.

  • In Sect. 3 we give a new description of the gRSK map as a composition of local moves (based on Noumi and Yamada’s dynamical description of gRSK) and use this to establish several basic results. In particular, we show that the gRSK mapping is volume-preserving with respect to a natural product measure on \(({\mathbb{R}}_{>0})^{n\times m}\) and establish a fundamental identity (Theorem 3.2) which provides an elementary explanation of the appearance of Whittaker functions in this setting. This gives further insight into earlier results from [19] and yields a new proof and generalization of two of Stade’s Whittaker integral identities (Theorems 7.1 and 7.3).

  • In Sect. 4 we explain the relationship between the local-moves description of gRSK and the geometric row-insertion algorithm of Noumi and Yamada [37].

  • In Sect. 5 we consider the restriction of gRSK to symmetric matrices. We show that the volume preserving property continues to hold in this setting and deduce several consequences, including a new proof (with some restriction on the parameters) of the Whittaker integral identity (Theorem 7.5) involving a single Whittaker function due to Stade [44].

  • In Sect. 6 we introduce gRSK for triangular arrays. Again we prove a fundamental identity and the volume preserving property, and deduce the probability distribution of the shape vector of the output array under inverse gamma distributed initial weights. The polymer version of the problem describes paths restricted to lie below a hard wall.

  • In Sect. 7 we explain how the results of this paper relate to some of the Whittaker integral identities which have appeared previously in the automorphic forms literature.

  • In Sect. 8 we explain how the Berenstein-Kirillov extension of the RSK correspondence can be recovered by taking a limit (tropicalization). In statistical physics terminology this is a zero-temperature limit that takes polymer partition functions to last-passage percolation values. By analogy with Sect. 3, we give a description of the Berenstein-Kirillov mapping in terms of local moves which shows that this map is also volume preserving. Under exponentially distributed weights the probability distribution of the shape vector of the resulting pair of Gelfand-Tsetlin patterns is given by a non-central Laguerre ensemble. This connection to random matrix theory has had important applications to last passage percolation models [4, 13, 21, 22, 33].

2 Whittaker functions and patterns

For \(\lambda\in {\mathbb{C}}\), \(x,y\in({\mathbb{R}}_{>0})^{n}\), define

$$Q^{n}_\lambda(x,y)= \Biggl(\prod _{i=1}^n\frac{y_i}{x_i} \Biggr)^\lambda \exp \Biggl( -\sum_{i=1}^n \frac{y_i}{x_i} -\sum_{i=1}^{n-1} \frac {x_{i+1}}{y_i} \Biggr) . $$

For \(\lambda\in {\mathbb{C}}\), \(x\in({\mathbb{R}}_{>0})^{n}\) and \(y\in({\mathbb{R}}_{>0})^{n-1}\), define

$$Q^{n,n-1}_\lambda(x,y)= \biggl(\frac{\prod_{i=1}^{n-1} y_i}{\prod_{i=1}^n x_i} \biggr)^\lambda \exp \Biggl( -\sum_{i=1}^{n-1} \frac{y_i}{x_i} -\sum_{i=1}^{n-1} \frac{x_{i+1}}{y_i} \Biggr) . $$

We regard these as integral operators: for suitable test functions,

$$\begin{aligned} Q^n_\lambda f (x) &= \int_{({\mathbb{R}}_{>0})^n} Q^{n}_\lambda(x,y) f(y) \prod_{i=1}^n \frac{dy_i}{y_i},\\ Q^{n,n-1}_\lambda f (x)& = \int_{({\mathbb{R}}_{>0})^{n-1}} Q^{n,n-1}_\lambda (x,y) f(y) \prod_{i=1}^{n-1} \frac{dy_i}{y_i}. \end{aligned}$$

These operators were introduced in the papers [24, 25]. We remark that, in those papers, they are referred to as Baxter Q-type operators by analogy with similar operators that were originally introduced by Baxter (see, for example, [6, 7]) as a tool for solving the eight-vertex model.

Define \(\varPsi^{n}_{\lambda}(x)\) for \(\lambda\in {\mathbb{C}}^{n}\), \(x\in({\mathbb{R}}_{>0})^{n}\) recursively as follows. For n=1, \(\lambda\in {\mathbb{C}}\) and \(x\in {\mathbb{R}}_{>0}\) we set \(\varPsi^{1}_{\lambda}(x)=x^{-\lambda}\). For n≥2 and \(\lambda=(\lambda_{1},\dotsc,\lambda_{n})\in {\mathbb{C}}^{n}\),

$$ \varPsi^n_\lambda\equiv\varPsi^n_{\lambda_1,\ldots,\lambda_n} = Q^{n,n-1}_{\lambda_n} \varPsi^{n-1}_{\lambda_1,\ldots,\lambda_{n-1}}. $$
(2.1)

We note here, for later reference, some identities which follow easily from the definitions. For a>0 we have

$$ \varPsi^n_\alpha(ax)=a^{-\sum_i\alpha_i} \varPsi^n_\alpha(x). $$
(2.2)

If \(\alpha_{i}'=\alpha_{i}+c\) for some \(c\in {\mathbb{C}}\) then, writing \(x=(x_1,\ldots,x_n)\),

$$ \varPsi^n_{\alpha'}(x)= \biggl(\prod _i x_i^{-c} \biggr) \varPsi^n_\alpha(x). $$
(2.3)

Finally, if we set \(x_{i}'=1/x_{n-i+1}\), then

$$ \varPsi^n_{\lambda}(x)=\varPsi^n_{-\lambda} \bigl(x'\bigr). $$
(2.4)

The functions \(\varPsi^{n}_{\lambda}\) are \(\mathit{GL}(n,{\mathbb{R}})\)-Whittaker functions [24, 25] (see also [26, 32]). These functions were first introduced by Jacquet [31]. They play an important role in the theory of automorphic forms [14–16, 27, 30, 43, 44] and the quantum Toda lattice [24–26, 32, 34, 36, 41]. In the latter literature they arise as eigenfunctions of the open quantum Toda chain with n particles with Hamiltonian given by

$$H=-\sum_i \frac{\partial^2}{\partial x_i^2} + 2 \sum _{i=1}^{n-1} e^{x_{i+1}-x_i}. $$

If we define \(\psi^{n}_{\lambda}(x)=\varPsi^{n}_{-\lambda}(z)\), where \(x_i=\log z_i\) for i=1,…,n, then

$$H\psi^n_\lambda=- \biggl(\sum_i \lambda_i^2 \biggr) \psi^n_\lambda. $$

See, for example, [24] for more details.
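
For example, when n=1 we have \(\psi^1_\lambda(x)=e^{\lambda x}\) and \(H=-d^2/dx^2\), so that \(H\psi^1_\lambda=-\lambda^2\psi^1_\lambda\), in agreement with the above.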

In the automorphic forms literature the standard ‘normalization’ is slightly different. In particular, in the notation of the paper [30], we have the relation, for n≥2:

$$ \varPsi^n_{-\lambda}(x)= \biggl(\prod _i x_i \biggr)^{(1/n)\sum_i\lambda _i} \Biggl(\prod _{j=1}^{n-1} y_j^{-j(n-j)/2} \Biggr) W_{n,a}(y), $$
(2.5)

where \(a_k=\lambda_k-(1/n)\sum_i\lambda_i\) for k=1,…,n and \(\pi y_{j} =\sqrt{x_{n-j+1}/x_{n-j}}\), for j=1,…,n−1. This is easily verified by comparing the recursion (2.1) with a similar recursion obtained by Ishii and Stade [30] for the functions \(W_{n,a}(y)\), and using the elementary relation (2.3). Indeed, first note that, by (2.3), we only need to check this for λ=a, that is, when \(\sum_i\lambda_i=0\). In the case n=2 we have, writing \(a=(a,-a)\) and \(y_1=y\),

$$W_{2,a}(y)=2\sqrt{y} K_{2a}(2\pi y)=\sqrt{y} \varPsi^{2}_{a}(x_1,x_2) $$

where \(\pi y=\sqrt{x_{2}/x_{1}}\) and \(K_\nu\) is the Macdonald function

$$K_\nu(z)=\frac{1}{2}\int_0^\infty t^{\nu-1} e^{-\frac{z}{2}(t+t^{-1})} dt . $$

For n≥3, in [30] it is shown that

$$\begin{aligned} W_{n,a}(y)&=\prod_{j=1}^{n-1} y_j^{(n-j)/2+2a_1(n-j)/(n-1)} \int_{({\mathbb{R}}_{>0})^{n-1} } e^{-\pi\sum_{j=1}^{n-1} (y_j^2 u_j +1/u_j)}\\ &\quad {}\times\prod_{j=1}^{n-1} u_j^{(n-2j)/4+na_1/(n-1)} W_{n-1,b} \Biggl( y_2\sqrt{\frac{u_2}{u_1}},\ldots, y_{n-1}\sqrt {\frac{u_{n-1}}{u_{n-2}}} \,\Biggr) \\ &\quad {}\times\frac{du_1}{u_1}\cdots \frac{du_{n-1}}{u_{n-1}} , \end{aligned}$$

where

$$b= \biggl( a_2+\frac{a_1}{n-1},\ldots,a_n+ \frac{a_1}{n-1} \biggr). $$

Making the change of variables

$$\pi y_j =\sqrt{\frac{x_{n-j+1}}{x_{n-j}} },\qquad\frac{\pi}{u_j} = \frac{x_{n-j}}{z_{n-j}}, \qquad \pi y_j^2 u_j = \frac{z_{n-j+1}}{x_{n-j}} , $$

for j=1,…,n−1, and using (2.5) above, we see that this is equivalent to the recursion

$$\varPsi^n_{-a}(x)=\int_{({\mathbb{R}}_{>0})^{n-1} } Q_{-a_1}^{n,n-1}(x,z) \varPsi ^{n-1}_{-a_2,\ldots,-a_n} (z) \frac{dz_1}{z_1}\cdots\frac{dz_{n-1}}{z_{n-1}} , $$

which agrees with (2.1) above.
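
As a numerical sanity check of the n=2 case, the following Python sketch (ours, assuming NumPy and SciPy are available; it is not part of the original text) evaluates \(\varPsi^2_{(a,-a)}\) directly from the recursion (2.1) and compares it with the Macdonald function via \(\varPsi^2_{(a,-a)}(x_1,x_2)=2K_{2a}(2\sqrt{x_2/x_1})\), which is the relation displayed above with \(\pi y=\sqrt{x_2/x_1}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # the Macdonald function K_nu

def psi2(a, x1, x2):
    """Psi^2_{(a,-a)}(x1,x2) via (2.1): apply Q^{2,1}_{-a} to Psi^1_a(y) = y^(-a)."""
    integrand = lambda y: (y / (x1 * x2))**(-a) * np.exp(-y / x1 - x2 / y) * y**(-a - 1)
    val, _ = quad(integrand, 0, np.inf)
    return val

a, x1, x2 = 0.3, 1.7, 0.9
print(psi2(a, x1, x2), 2 * kv(2 * a, 2 * np.sqrt(x2 / x1)))  # the two values agree
```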

We will also consider the following generalization of the functions \(\varPsi^{n}_{\lambda}\). For \(\lambda\in {\mathbb{C}}^{n}\), \(x\in({\mathbb{R}}_{>0})^{n}\) and \(s\in {\mathbb{C}}\), define

$$ \varPsi^n_{\lambda;s}(x)=e^{-s/x_n} \varPsi_\lambda^n(x); $$
(2.6)

for \(\lambda\in {\mathbb{C}}^{n+k}\), k≥1, and ℜs>0, define

$$ \varPsi^n_{\lambda;s} = Q^n_{\lambda_{n+k}} Q^n_{\lambda_{n+k-1}} \cdots Q^n_{\lambda_{n+1}} \varPsi^n_{\lambda_1,\ldots,\lambda_n;s} . $$
(2.7)

It is straightforward to see that \(\varPsi^{n}_{\lambda;s}(x)\) is well-defined, as an absolutely convergent integral, for each \(x\in({\mathbb{R}}_{>0})^{n}\). The functions \(\varPsi^{n}_{\lambda;s}\) can be regarded as generating functions for ‘patterns’, as we shall now explain.

Let \(x\in({\mathbb{R}}_{>0})^{n}\). We define a pattern P with shape \(\operatorname{sh}P=x\) and height \(h\ge n\) to be an array of positive real numbers

$$P=\bigl(z_{ij},\ (i,j)\in L(n,h)\bigr)$$
(2.8)

with bottom row \(z_{h\centerdot}=x\). The range of indices is

$$L(n,h)=\bigl\{(i,j):\ 1\le i\le h,\ 1\le j\le i\wedge n\bigr\}. $$

If h=n then P is a triangle in the sense of Kirillov [35]. Fix a pattern P as above. Set \(\rho_0=1\) and, for \(1\le i\le h\), \(\rho_{i}=\prod_{j=1}^{i\wedge n} z_{ij}\) and \(\tau_i=\rho_i/\rho_{i-1}\). We shall refer to τ as the type of P and write \(\tau=\operatorname{type}P\). For \(\alpha\in {\mathbb{C}}^{h}\) define

$$\begin{aligned} P^\alpha= \prod_{i=1}^h \tau_i^{\alpha_i}. \end{aligned}$$
(2.9)

For \(s\in {\mathbb{C}}\), define

$$ {\mathcal{F}}_s(P)=\frac{s}{z_{nn}} + \sum _{(i,j)\in L(n,h)} \frac {z_{i-1,j}+z_{i+1,j+1}}{z_{ij}} $$
(2.10)

with the convention that \(z_{ij}=0\) if (i,j)∉L(n,h). Denote by \(\varPi^h(x)\) the set of patterns with shape x and height h. Then, for \(\lambda\in {\mathbb{C}}^{h}\) and ℜs>0 (this condition is only required if h>n),

$$ \varPsi^n_{\lambda;s}(x)=\int_{\varPi^h(x)} P^{-\lambda} e^{-{\mathcal{F}}_s(P)} dP $$
(2.11)

where

$$dP = \prod_{(i,j)\in L(n,h-1)} \frac{dz_{ij}}{z_{ij}} . $$

This formula is just a re-writing of the above definition (2.7) of \(\varPsi^{n}_{\lambda;s}\).
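
For instance, when n=1 and h=2, a pattern consists of a single entry \(z_{11}=y\) above the shape \(z_{21}=x\), with \(\tau=(y,x/y)\) and \({\mathcal{F}}_s(P)=s/y+y/x\), so that (2.11) reads

$$\varPsi^1_{\lambda_1,\lambda_2;s}(x)=\int_0^\infty y^{-\lambda_1}\biggl(\frac{x}{y}\biggr)^{-\lambda_2} e^{-s/y-y/x}\,\frac{dy}{y},$$

which is precisely (2.7) with the single operator \(Q^1_{\lambda_2}\) applied to \(\varPsi^1_{\lambda_1;s}\).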

We remark that, although it is not obvious from the above definition, the function \(\varPsi^{n}_{\lambda}\) is invariant under permutations of the indices \(\lambda_1,\ldots,\lambda_n\) [25, 34]. In fact, the same is true of the function \(\varPsi^{n}_{\lambda;s}\), where \(\lambda\in {\mathbb{C}}^{n+k}\), k≥1 and ℜs>0. That is, \(\varPsi^{n}_{\lambda;s}\) is invariant under permutations of the indices \(\lambda_1,\ldots,\lambda_{n+k}\). This follows from the definition (2.7), using the relation

$$ Q^n_aR^n_sQ^n_b=Q^n_bR^n_sQ^n_a, $$
(2.12)

where \(R^n_s\) denotes multiplication by the function \(e^{-s/x_{n}}\), and the invariance of \(\varPsi^{n}_{\lambda_{1},\ldots,\lambda_{n}}\) under permutations of \(\lambda_1,\ldots,\lambda_n\). The relation (2.12) is a straightforward extension of the commutativity property \(Q^{n}_{a}Q^{n}_{b}=Q^{n}_{b}Q^{n}_{a}\) obtained in [25, Theorem 2.3].

There is a Plancherel theorem for the Whittaker functions [34, 41, 46], which states that the integral transform

$$\hat{f}(\lambda) = \int_{({\mathbb{R}}_{>0})^n} f(x) \varPsi^n_\lambda(x) \prod_{i=1}^n \frac{dx_i}{x_i} $$

defines an isometry from \(L_{2}(({\mathbb{R}}_{>0})^{n}, \prod_{i=1}^{n} dx_{i}/x_{i})\) onto \(L^{sym}_{2}(\iota {\mathbb{R}}^{n},s_{n}(\lambda)d\lambda)\), where \(L_{2}^{sym}\) is the space of L 2 functions which are symmetric in their variables, \(\iota=\sqrt{-1}\) and

$$s_n(\lambda)=\frac{1}{(2\pi\iota)^n n!} \prod _{i\ne j} \varGamma (\lambda_i-\lambda_j)^{-1}, $$

is the Sklyanin measure.

3 Geometric RSK correspondence

The geometric RSK (gRSK) correspondence is a bijective mapping

$$T:({\mathbb{R}}_{>0})^{n\times m}\to({\mathbb{R}}_{>0})^{n\times m}. $$

It is also birational in the sense that both T and its inverse are rational maps. It was introduced by Kirillov [35] as a geometric lifting of the Berenstein-Kirillov extension of the RSK correspondence and further studied by Noumi and Yamada [37]. We will define it here via a sequence of ‘local moves’ on matrix elements. This is essentially a reformulation of the row-insertion procedure introduced in [37], as will be explained in Sect. 4 below.

For each 2≤i≤n and 2≤j≤m define a mapping \(l_{ij}\) which takes as input a matrix \(X=(x_{ij})\in({\mathbb{R}}_{>0})^{n\times m}\) and replaces the submatrix

$$ \begin{pmatrix} x_{i-1,j-1}& x_{i-1,j}\\ x_{i,j-1}& x_{ij} \end{pmatrix} $$

of X by its image under the map

$$ \begin{pmatrix} a& b\\ c& d \end{pmatrix} \mapsto \begin{pmatrix} bc/(ab+ac) & b\\ c& d(b+c) \end{pmatrix} , $$
(3.1)

and leaves the other elements unchanged. For 2≤i≤n, define \(l_{i1}\) to be the mapping that replaces the element \(x_{i1}\) by \(x_{i-1,1}x_{i1}\) and, for 2≤j≤m, define \(l_{1j}\) to be the mapping that replaces the element \(x_{1j}\) by \(x_{1,j-1}x_{1j}\). For convenience define \(l_{11}\) to be the identity map. For 1≤i≤n and 1≤j≤m, set

$$\pi^j_i=l_{ij}\circ\cdots\circ l_{i1}, $$

and, for 1≤i≤n,

$$ R_i = \begin{cases} \pi_1^{m-i+1}\circ\cdots\circ\pi^m_i, & i\le m,\\ \pi^1_{i-m+1} \circ\cdots\circ\pi^m_i, & i\ge m . \end{cases} $$
(3.2)

The mapping T is defined by

$$ T=R_n\circ\cdots\circ R_1. $$
(3.3)

For example, suppose n=m=2. Then

$$R_1=\pi^2_1=l_{12}\circ l_{11}=l_{12},\qquad R_2=\pi^1_1 \circ\pi ^2_2=l_{11}\circ l_{22} \circ l_{21}=l_{22}\circ l_{21} $$

and so

$$T=R_2\circ R_1=l_{22}\circ l_{21} \circ l_{12}. $$

Here is an illustration:

$$\begin{aligned} T: \begin{pmatrix} a&b\\c&d \end{pmatrix} \stackrel{l_{12}}{\longmapsto} \begin{pmatrix} a&ab\\c&d \end{pmatrix} \stackrel{l_{21}}{\longmapsto} \begin{pmatrix} a&ab\\ac&d \end{pmatrix} \stackrel{l_{22}}{\longmapsto} \begin{pmatrix} bc/(b+c)&ab\\ac&ad(b+c) \end{pmatrix} . \end{aligned}$$

Note that each \(l_{ij}\) is birational. For example, the inverse of the map (3.1) is given by

$$ \begin{pmatrix} a& b\\ c& d \end{pmatrix} \mapsto \begin{pmatrix} bc/(ab+ac) & b\\ c& d/(b+c) \end{pmatrix} . $$

The birational property of T can thus be seen directly from the above definition.
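
The local-moves definition translates directly into code. The following Python sketch is ours (the names local_move and grsk are not from the paper); it implements the maps \(l_{ij}\) and the composition (3.2)–(3.3), using the fact that both branches of (3.2) apply the maps \(\pi^{m-k}_{i-k}\) for k=0,…,(i∧m)−1, and checks the 2×2 illustration above on random input:

```python
import numpy as np

def local_move(X, i, j):
    """Apply l_{ij} (1-based indices) to the matrix X in place; cf. (3.1)."""
    if i == 1 and j == 1:
        return                                    # l_{11} is the identity
    if i == 1:
        X[0, j - 1] *= X[0, j - 2]                # x_{1j} <- x_{1,j-1} x_{1j}
    elif j == 1:
        X[i - 1, 0] *= X[i - 2, 0]                # x_{i1} <- x_{i-1,1} x_{i1}
    else:
        a, b = X[i - 2, j - 2], X[i - 2, j - 1]
        c, d = X[i - 1, j - 2], X[i - 1, j - 1]
        X[i - 2, j - 2] = b * c / (a * (b + c))   # a <- bc/(ab+ac)
        X[i - 1, j - 1] = d * (b + c)             # d <- d(b+c)

def grsk(W):
    """The gRSK map T = R_n o ... o R_1 of (3.3)."""
    n, m = W.shape
    X = np.array(W, dtype=float)
    for i in range(1, n + 1):
        for k in range(min(i, m)):                # R_i applies pi^{m-k}_{i-k}, k = 0, ..., (i ^ m) - 1
            for j in range(1, m - k + 1):         # pi^j_i = l_{ij} o ... o l_{i1}
                local_move(X, i - k, j)
    return X

# check the 2x2 illustration above on random input
a, b, c, d = np.random.rand(4) + 0.5
T = grsk(np.array([[a, b], [c, d]]))
expected = np.array([[b * c / (b + c), a * b],
                     [a * c, a * d * (b + c)]])
print(np.allclose(T, expected))                   # True
```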

Each matrix \(X\in({\mathbb{R}}_{>0})^{n\times m}\) can be identified with a pair of patterns (P,Q) with respective heights m and n, and common shape

$$\operatorname{sh}P=\operatorname{sh} Q =(x_{nm},x_{n-1,m-1}, \ldots,x_{n-p+1,m-p+1}), $$

where \(p=n\wedge m\), as illustrated in the following example:

$$X = \begin{array}{cccc} & & x_{31} & \\ & x_{21} & & x_{32} \\ x_{11} & & x_{22} & \\ & x_{12} & & \end{array} $$
$$P = \begin{array}{ccc} & x_{31} & \\ x_{21} & & x_{32} \end{array} , \qquad Q = \begin{array}{cccc} & x_{12} & & \\ x_{11} & & x_{22} & \\ & x_{21} & & x_{32} \end{array} $$
$$\operatorname{sh}P=\operatorname{sh} Q = (x_{32},x_{21}). $$

In the following, we will simply write X=(P,Q) to indicate that X is identified with the pair (P,Q).

The mappings R i defined above can also be written as

$$R_i=\rho^i_m\circ\cdots\circ \rho^i_2\circ\rho^i_1 $$

where

$$\begin{aligned} \rho^i_j= \begin{cases} l_{1,j-i+1}\circ\cdots\circ l_{i-1,j-1}\circ l_{ij}, & i\le j,\\ l_{i-j+1,1}\circ\cdots\circ l_{i-1,j-1}\circ l_{ij}, & i\ge j. \end{cases} \end{aligned}$$
(3.4)

Here we are just using the obvious fact that \(l_{ij}\circ l_{i'j'}=l_{i'j'}\circ l_{ij}\) whenever \(|i-i'|+|j-j'|>2\). This representation is closely related to the Bender-Knuth transformations, as we shall now explain. For each 1≤i≤n and 1≤j≤m, denote by \(b_{ij}\) the map on \(({\mathbb{R}}_{>0})^{n\times m}\) which takes a matrix \(X=(x_{qr})\) and replaces the entry \(x_{ij}\) by

$$ x'_{ij} = \frac{1}{x_{ij}} (x_{i,j-1}+x_{i-1,j}) \biggl( \frac{1}{x_{i+1,j}}+ \frac{1}{x_{i,j+1}} \biggr)^{-1}, $$
(3.5)

leaving the other entries unchanged, with the conventions that \(x_{0j}=x_{i0}=0\), \(x_{n+1,j}=x_{i,m+1}=\infty\) for 1<i<n and 1<j<m, but \(x_{10}+x_{01}=x_{n+1,m}^{-1}+x_{n,m+1}^{-1}=1\). Denote by \(r_j\) the map which replaces the entry \(x_{nj}\) by \(x_{n,j+1}/x_{nj}\) if j<m and by \(1/x_{nm}\) if j=m, leaving the other entries unchanged. For 1≤j≤m, define

$$ h_j= \begin{cases} b_{n-j+1,1}\circ\cdots\circ b_{n-1,j-1}\circ b_{nj}, & j\le n,\\ b_{1,j-n+1}\circ\cdots\circ b_{n-1,j-1}\circ b_{nj}, & j\ge n. \end{cases} $$
(3.6)

It is straightforward from the definitions to see that \(\rho^{n}_{j}=h_{j}\circ r_{j}\). Now, observe that if X=(P,Q), then for each j<m, \(h_j(X)=(t_j(P),Q)\), where \(t_j\) is defined by this relation. It is easy to see that the mappings \(b_{ij}\), \(h_j\) and \(t_j\) are all involutions.

In the case n=m, the mappings \(t_1,\ldots,t_{n-1}\) are the analogues of the Bender-Knuth transformations in this setting, as discussed in [35]. In this case, if we define, for i<n,

$$ q_{i}=t_1\circ(t_2\circ t_1)\circ\cdots\circ(t_i\circ\cdots\circ t_1), $$
(3.7)

then, as explained in [35], the involutions \(s_i=q_i\circ t_1\circ q_i\), i<n, satisfy the braid relations \((s_is_{i+1})^3=\operatorname{Id}\), and hence define an action of \(S_n\) on the set of triangles of height n. The mapping \(q_{n-1}\) is the analogue of Schützenberger’s involution in this setting.

An immediate consequence of the above re-formulation of gRSK is the following volume preserving property. Denote the input matrix by \(W=(w_{ij})\in({\mathbb{R}}_{>0})^{n\times m}\) and the output matrix by \(T=T(W)=(t_{ij})\in({\mathbb{R}}_{>0})^{n\times m}\).

Theorem 3.1

The gRSK mapping in logarithmic variables

$$(\log w_{ij},\ 1\le i\le n, 1\le j\le m)\mapsto(\log t_{ij},\ 1\le i\le n, 1\le j\le m) $$

has Jacobian ±1.

Proof

It is easy to see that the Jacobians of the mappings \(l_{ij}\) in logarithmic variables are all ±1. This follows from the fact that the mappings

$$\begin{aligned} (\log a, \log b) &\mapsto(\log a, \log a + \log b)\\ (\log a,\log b,\log c,\log d)&\mapsto\bigl(\log\bigl(bc/(ab+ac)\bigr),\log b,\log c,\log(db+dc)\bigr) \end{aligned}$$

each have Jacobian ±1. The result follows from the definition (3.3) of T. □

We remark that, by a similar argument, it can be seen that the involutions \(q_i\), i<n, on the set of triangles of height n all have Jacobian ±1 in logarithmic variables.
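
For instance, in the 2×2 case one can check Theorem 3.1 numerically by differentiating the closed-form map from the illustration following (3.3). This sketch (ours, using central finite differences) returns a determinant of approximately −1:

```python
import numpy as np

def log_T(u):
    """The 2x2 map of the illustration above, written in logarithmic variables."""
    a, b, c, d = np.exp(u)
    return np.log([b * c / (b + c), a * b, a * c, a * d * (b + c)])

u0 = np.log(np.random.rand(4) + 0.5)
eps = 1e-6
# Jacobian matrix by central finite differences, one column per coordinate
J = np.column_stack([(log_T(u0 + eps * e) - log_T(u0 - eps * e)) / (2 * eps)
                     for e in np.eye(4)])
print(np.linalg.det(J))                           # approximately -1
```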

We recall here some basic properties of the gRSK map T, which are either obvious from the definitions or proved in the papers [36, 37]. Suppose \(W\in({\mathbb{R}}_{>0})^{n\times m}\) and T=T(W)=(P,Q). If we define row and column products \(R_i=\prod_j w_{ij}\) and \(C_j=\prod_i w_{ij}\), then \(\operatorname{type}Q=R\) and \(\operatorname{type}P=C\). Note that this implies, for \(\lambda\in {\mathbb{C}}^{m}\) and \(\nu\in {\mathbb{C}}^{n}\),

$$ \prod_{ij} w_{ij}^{-\nu_i-\lambda_j} = \prod_i R_i^{-\nu_i} \prod _j C_j^{-\lambda_j} = P^{-\lambda} Q^{-\nu}. $$
(3.8)

Also, the following symmetries hold:

  • \(T(W^t)=T(W)^t\);

  • \(T(W)=(P,Q)\Leftrightarrow T(W^t)=(Q,P)\);

  • W is symmetric ⇔ T is symmetric ⇔ P=Q;

  • W is symmetric across the anti-diagonal ⇔ \(Q=q_{n-1}(P)\).

The connection to directed polymers is via the following formula for \(t_{nm}\):

$$ t_{nm}=Z_{n,m}=\sum _{\pi\in\varPi _{n,m}} \prod_{(i,j)\in\pi} w_{ij}, $$
(3.9)

where \(Z_{n,m}\) is the partition function that appeared in (1.1). Recall that \(\varPi_{n,m}\) is the set of directed nearest-neighbor lattice paths in \({\mathbb{Z}}^{2}\) from (1,1) to (n,m), that is, the set of paths π={π(1),π(2),…,π(n+m−1)} such that π(1)=(1,1), π(n+m−1)=(n,m) and π(k+1)−π(k)∈{(1,0),(0,1)} for 1≤k<n+m−1. We shall refer to the variable \(t_{nm}\) as the polymer partition function. In this context it is natural to refer to the \(w_{ij}\) as weights and to W as the weight matrix. In fact, the remaining entries of T=(P,Q) can also be expressed in terms of similar partition functions, as follows. For 1≤k≤m and 1≤r≤n∧k,

$$ t_{n-r+1,k-r+1}\dotsm t_{n-1,k-1} t_{nk} = \sum _{(\pi_1,\ldots,\pi_r)\in\varPi^{(r)}_{n,k}} \prod_{(i,j)\in \pi_1\cup\cdots\cup\pi_r} w_{ij}, $$
(3.10)

where \(\varPi^{(r)}_{n,k}\) denotes the set of r-tuples of non-intersecting directed nearest-neighbor lattice paths \(\pi_1,\ldots,\pi_r\) starting at positions (1,1),(1,2),…,(1,r) and ending at positions (n,k−r+1),…,(n,k−1),(n,k). (See Fig. 1. When we use the path representation we draw the weight matrix in Cartesian coordinates.) This determines the entries of P. The entries of Q are given by similar formulae using \(T(W^t)=(Q,P)\). We note here the following identity, which follows from the above lattice path representation for T: setting \(p=n\wedge m\), we have

$$ \sum_{i=1}^p \frac{1}{w_{i,p-i+1}} = \frac{1}{t_{11}}. $$
(3.11)

To see this if n≤m, take the ratio of (3.10) for \(\varPi^{(n-1)}_{n,n}\) and \(\varPi^{(n)}_{n,n}\). In the opposite case apply the same to \(W^t\).
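
These identities are easy to test numerically. The sketch below is ours and assumes the grsk helper from the earlier sketch in this section; the partition function is computed by the recursion \(Z_{ij}=w_{ij}(Z_{i-1,j}+Z_{i,j-1})\) noted in Sect. 1:

```python
import numpy as np
# Assumes the grsk(W) helper from the sketch following (3.3) above.

def partition_function(W):
    """Z_{nm} of (1.1)/(3.9): Z_ij = w_ij (Z_{i-1,j} + Z_{i,j-1}), Z_11 = w_11."""
    n, m = W.shape
    Z = np.zeros((n, m))
    Z[0, 0] = W[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            Z[i, j] = W[i, j] * ((Z[i - 1, j] if i > 0 else 0.0) +
                                 (Z[i, j - 1] if j > 0 else 0.0))
    return Z[-1, -1]

W = np.random.rand(3, 4) + 0.5
T = grsk(W)
p = min(W.shape)
print(np.isclose(T[-1, -1], partition_function(W)))         # checks (3.9)
print(np.isclose(sum(1.0 / W[i, p - 1 - i] for i in range(p)),
                 1.0 / T[0, 0]))                            # checks (3.11)
```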

Fig. 1 Three paths \((\pi_1,\pi_2,\pi_3)\) of a particular 3-tuple in \(\varPi^{(3)}_{n,k}\) in an n×m weight matrix. Note that the picture is in Cartesian coordinates. The paths start at the lower left at (1,1), (1,2) and (1,3) and end at the upper right at (n,k−2), (n,k−1), (n,k)

Now, for \(X\in({\mathbb{R}}_{>0})^{n\times m}\) and \(s\in {\mathbb{C}}\), define

$$ {\mathcal{E}}_s(X)=\frac{s}{x_{11}}+\sum _{ij} \frac{x_{i-1,j}+x_{i,j-1}}{x_{ij}}, $$
(3.12)

where the summation is over 1≤i≤n, 1≤j≤m with the conventions \(x_{ij}=0\) for i=0 or j=0. Note that, if X=(P,Q), then

$${\mathcal{E}}_s(X)= \begin{cases} {\mathcal{F}}_0(P)+{\mathcal{F}}_s(Q), & n\ge m,\\ {\mathcal{F}}_s(P)+{\mathcal{F}}_0(Q), & n\le m, \end{cases} $$

where \({\mathcal{F}}_{s}\) is defined by (2.10). An important property of the maps \(b_{ij}\) defined by (3.5) above is that they preserve the quantity \({\mathcal{E}}_{0}(X)\), that is, \({\mathcal{E}}_{0}\circ b_{ij}={\mathcal{E}}_{0}\). To see this, recall that the map \(b_{ij}\) takes a matrix \(X=(x_{qr})\) and replaces the entry \(x_{ij}\) by

$$x'_{ij} = \frac{1}{x_{ij}} (x_{i,j-1}+x_{i-1,j}) \biggl( \frac{1}{x_{i+1,j}}+ \frac{1}{x_{i,j+1}} \biggr)^{-1}, $$

leaving the other entries unchanged, with the conventions that \(x_{0j}=x_{i0}=0\), \(x_{n+1,j}=x_{i,m+1}=\infty\) for 1<i<n and 1<j<m, and \(x_{10}+x_{01}=x_{n+1,m}^{-1}+x_{n,m+1}^{-1}=1\). It is readily verified that

$$ \frac{x'_{i,j-1}+x'_{i-1,j}}{x'_{ij}}+\frac {x'_{ij}}{x'_{i+1,j}}+\frac{x'_{ij}}{x'_{i,j+1}} = \frac{x_{i,j-1}+x_{i-1,j}}{x_{ij}}+\frac{x_{ij}}{x_{i+1,j}}+\frac {x_{ij}}{x_{i,j+1}} $$
(3.13)

with the conventions that \(x_{0j}=x_{i0}=x'_{0j}=x'_{i0}=0\) and \(x_{n+1,j}=x_{i,m+1}=x'_{n+1,j}=x'_{i,m+1}=\infty\) for each i and j. This implies \({\mathcal{E}}_{0}(b_{ij}(X))={\mathcal{E}}_{0}(X)\). We remark that, in particular, this implies \({\mathcal{E}}_{0}\circ h_{j}={\mathcal{E}}_{0}\), \({\mathcal{F}}_{0}\circ t_{j}={\mathcal{F}}_{0}\) for all j<m and, in the case m=n, \({\mathcal{F}}_{0}\circ q_{n-1}={\mathcal{F}}_{0}\), where q n−1 is the geometric analogue of Schützenberger’s involution defined by (3.7).

The cornerstone of the present paper is the following identity which, combined with Theorem 3.1, explains the appearance of \(\mathit{GL}(n,{\mathbb{R}})\)-Whittaker functions in the context of geometric RSK.

Theorem 3.2

Let \(W\in({\mathbb{R}}_{>0})^{n\times m}\), T=T(W) and \(s\in {\mathbb{C}}\). Then

$$\sum_{i=1}^p \frac{s}{w_{i,p-i+1}} + \sum {}^{'} \frac{1}{w_{ij}} = {\mathcal{E}}_s(T), $$

where \(p=n\wedge m\) and ∑′ denotes the sum over 1≤i≤n, 1≤j≤m such that \(j\ne p-i+1\).

Proof

From the identity (3.11), we can assume without loss of generality that s=1. We will prove the theorem by induction on n and m. The statement is immediate in the case n=m=1. Write \(R_{i}=R_{i}^{n,m}\), \(T=T^{n,m}\) and \({\mathcal{E}}_{s}^{n,m}\) for the mappings defined above. Recall that \(T^{m,n}(W^t)=[T^{n,m}(W)]^t\), for any values of m and n. It therefore suffices to show that the proposition holds for \(T^{n,m}\), assuming that n≥m and that the proposition holds for \(T^{n-1,m}\).

Write \(W_{n-1,m}=(w_{ij},\ 1\le i\le n-1,\ 1\le j\le m)\), \(S=T^{n-1,m}(W_{n-1,m})\) and \(T=T^{n,m}(W)\). Then

$$T=R_n^{n,m} \begin{pmatrix} S \\ w_{n1}\ \ldots\ w_{nm} \end{pmatrix} , $$

and we are required to show that

$${\mathcal{E}}_1^{n,m}(T)={\mathcal{E}}_1^{n-1,m}(S)+\sum _{j=1}^m \frac{1}{w_{nj}}. $$

Now,

$$R^{n,m}_n = \rho^n_m\circ\cdots \circ\rho^n_2\circ\rho^n_1 $$

where

$$\rho^n_k= h_k \circ r_{k}=b_{n-k+1,1} \circ\cdots\circ b_{nk}\circ r_{k}. $$

Set

$$T^{(0)}= \begin{pmatrix} S \\ w_{n1}\ \ldots\ w_{nm} \end{pmatrix} $$

and, for k=1,…,m,

$$T^{(k)}=\rho^n_k\circ\cdots\circ \rho^n_2\circ\rho^n_1 \bigl(T^{(0)}\bigr). $$

For \(X\in({\mathbb{R}}_{>0})^{n\times m}\) and 0≤k≤m, define

$${\mathcal{E}}^{n,m;k}(X)=\frac{1}{x_{11}}+ \sum_{ij}^{(k)} \frac {x_{i-1,j}+x_{i,j-1}}{x_{ij}}+\sum_{j=k+1}^m \frac{1}{x_{nj}}, $$

where \(X=(x_{ij})\) and the first summation is over pairs of indices (i,j) such that either 1≤i<n and 1≤j≤m, or i=n and 1≤j≤k, with the conventions \(x_{ij}=0\) for i=0 or j=0. Note that

$${\mathcal{E}}^{n,m;0}\bigl(T^{(0)}\bigr)={\mathcal{E}}_1^{n-1,m}(S)+ \sum_{j=1}^m \frac{1}{w_{nj}}, \qquad {\mathcal{E}}^{n,m;m}\bigl(T^{(m)}\bigr)={\mathcal{E}}_1^{n,m}(T). $$

We will show that

$$ {\mathcal{E}}^{n,m;k}\circ\rho^n_k= {\mathcal{E}}^{n,m;k-1} $$
(3.14)

for each k=1,…,m. Note that this implies

$${\mathcal{E}}^{n,m;k}\bigl(T^{(k)}\bigr)={\mathcal{E}}^{n,m;k-1} \bigl(T^{(k-1)}\bigr) $$

for each k=1,…,m, and the statement of the theorem follows.

Let \(X=(x_{ij})\in({\mathbb{R}}_{>0})^{n\times m}\) and write

$$X'=\bigl(x'_{ij}\bigr)= \rho^n_k(X)= h_k\circ r_k (X). $$

Note that \(x'_{ij}=x_{ij}\) for all (i,j) except (n−q+1,k−q+1), 1≤q≤k. Applying \(b_{nk}\circ r_{k}\) gives the relation

$$\frac{x'_{n,k-1}+x'_{n-1,k}}{x'_{nk}}= \frac{1}{x_{nk}}. $$

The next three relations follow from the invariance of \({\mathcal{E}}_{0}\) under the \(b_{ij}\) mappings as discussed earlier, see (3.13). If (i,j)=(n−q+1,k−q+1) for some 1<q<k, then

$$\frac{x'_{i,j-1}+x'_{i-1,j}}{x'_{ij}}+\frac {x'_{ij}}{x'_{i+1,j}}+\frac{x'_{ij}}{x'_{i,j+1}} = \frac{x_{i,j-1}+x_{i-1,j}}{x_{ij}}+ \frac{x_{ij}}{x_{i+1,j}}+\frac {x_{ij}}{x_{i,j+1}}. $$

If k<n, then

$$\frac{x'_{n-k,1}}{x'_{n-k+1,1}}+\frac {x'_{n-k+1,1}}{x'_{n-k+2,1}}+\frac{x'_{n-k+1,1}}{x'_{n-k+1,2}} = \frac{x_{n-k,1}}{x_{n-k+1,1}}+ \frac{x_{n-k+1,1}}{x_{n-k+2,1}}+\frac {x_{n-k+1,1}}{x_{n-k+1,2}}; $$

If k=n (this can only occur if m=n), then

$$\frac{1}{x'_{11}}+\frac{x'_{11}}{x'_{21}}+\frac{x'_{11}}{x'_{12}} = \frac{1}{x_{11}}+ \frac{x_{11}}{x_{21}}+\frac{x_{11}}{x_{12}}. $$

It follows that \({\mathcal{E}}^{n,m;k}(X')={\mathcal{E}}^{n,m;k-1}(X)\), as required. □

Let s>0 and consider the measure on input matrices \((w_{ij})\) defined by

$$\nu_{\hat{\theta},\theta;s}(dw) = \prod_{ij} w_{ij}^{-\hat{\theta}_i-\theta_j} \exp \Biggl( - \sum _{i=1}^p \frac{s}{w_{i,p-i+1}} - \sum {}^{'} \frac{1}{w_{ij}} \Biggr) \prod _{ij} \frac{dw_{ij}}{w_{ij}}, $$

where \(\hat{\theta}_{i}+\theta_{j}>0\) for each i and j. Note that

$$\int_{({\mathbb{R}}_{>0})^{n\times m} }\nu_{\hat{\theta},\theta;s}(dw) = s^{-\sum_{i=1}^p (\hat{\theta}_i+\theta_{p-i+1})} \prod _{ij}\varGamma (\hat{\theta}_i+ \theta_j) . $$
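
Each factor on the right comes from the elementary integral

$$\int_0^\infty w^{-a} e^{-c/w}\,\frac{dw}{w} = c^{-a}\varGamma(a), \qquad a,c>0,$$

obtained by substituting u=c/w; the anti-diagonal entries contribute with c=s and all other entries with c=1.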

Suppose \(W\in({\mathbb{R}}_{>0})^{n\times m}\) and T=T(W)=(P,Q). Define a mapping \(\sigma: ({\mathbb{R}}_{>0})^{n\times m} \to({\mathbb{R}}_{>0})^{p}\) by

$$ \sigma(W)=\operatorname{sh}P=\operatorname{sh}Q =(t_{nm},t_{n-1,m-1}, \ldots ,t_{n-p+1,m-p+1}), $$
(3.15)

where \(p=n\wedge m\). The next two corollaries are essentially a re-formulation of two of the main results in [19].

Corollary 3.3

The push-forward of the measure \(\nu_{\hat{\theta},\theta;s}\) under the geometric RSK map T is given by

$$\nu_{\hat{\theta},\theta;s}\circ T^{-1} (dt) = P^{-\theta} Q^{-\hat{\theta}} e^{ - {\mathcal{E}}_s(T) } \prod_{ij} \frac{dt_{ij}}{t_{ij}} . $$

Proof

This follows immediately from Theorems 3.1, 3.2 and the relation (3.8). □

Corollary 3.4

The push-forward of \(\nu_{\hat{\theta},\theta;s}\) under σ is given by

$$\nu_{\hat{\theta},\theta;s}\circ\sigma^{-1} (dx) = \begin{cases} \varPsi^p_\theta(x) \varPsi^p_{\hat{\theta};s}(x)\prod_{i=1}^p \frac {dx_i}{x_i}, &n\ge m ,\\ \varPsi^p_{\theta;s}(x) \varPsi^p_{\hat{\theta}}(x)\prod_{i=1}^p \frac {dx_i}{x_i}, & n\le m. \end{cases} $$

Proof

This follows from Corollary 3.3 and the integral formula (2.11) for \(\varPsi^{p}_{\lambda;s}\). □

We also obtain from Theorems 3.1 and 3.2 the following integral identity. This is the analogue of the Cauchy-Littlewood identity in this setting.

Corollary 3.5

Suppose s>0, \(\lambda\in {\mathbb{C}}^{m}\) and \(\nu\in {\mathbb{C}}^{n}\), where n≥m and \(\Re(\lambda_i+\nu_j)>0\) for all i and j. Then

$$ \int_{({\mathbb{R}}_{>0})^m } \varPsi^m_{\nu;s}(x) \varPsi^m_\lambda(x) \prod_{i=1}^m \frac{dx_i}{x_i} = s^{-\sum_{i=1}^m (\nu_i+\lambda_i)} \prod_{ij} \varGamma(\nu _i+\lambda_j) . $$
(3.16)

Proof

From the definitions (2.11), (3.12), the identity (3.8), Theorems 3.1 and 3.2, and Fubini’s theorem,

$$\begin{aligned} & s^{-\sum_{i=1}^m (\nu_i+\lambda_i)} \prod_{ij}\varGamma(\nu _i+\lambda_j) \\ &\quad = \int_{({\mathbb{R}}_{>0})^{n\times m} } \prod_{ij} w_{ij}^{-\nu_i-\lambda_j-1} \exp \Biggl( - \sum _{i=1}^m \frac{s}{w_{i,m-i+1}} - \sum {}^{'} \frac{1}{w_{ij}} \Biggr) \prod _{ij} dw_{ij} \\ &\quad = \int_{({\mathbb{R}}_{>0})^{n\times m} } P^{-\lambda} Q^{-\nu} e^{ - {\mathcal{E}}_s(T) } \prod_{ij} \frac {dt_{ij}}{t_{ij}} \\ &\quad = \int_{({\mathbb{R}}_{>0})^m } \biggl( \int_{\varPi^n(x)} Q^{-\nu} e^{-{\mathcal{F}}_s(Q)} dQ \biggr) \biggl( \int_{\varPi^m(x)} P^{-\lambda} e^{-{\mathcal{F}}_0(P)} dP \biggr) \prod_{i=1}^m \frac{dx_i}{x_i} \\ &\quad = \int_{({\mathbb{R}}_{>0})^m } \varPsi^m_{\nu;s}(x) \varPsi^m_\lambda(x) \prod_{i=1}^m \frac{dx_i}{x_i} , \end{aligned}$$

as required. □

When m=n−1 this is equivalent to an integral identity which was conjectured by Bump [15] and proved by Stade [44, Theorem 3.4], see Theorem 7.4 below. We note that in this case the identity is proved in [44] without assuming the condition \(\Re(\lambda_i+\nu_j)>0\) for all i and j. In this case, the integral is associated with Archimedean L-factors of automorphic L-functions on \(\mathit{GL}(n-1,{\mathbb{R}})\times \mathit{GL}(n,{\mathbb{R}})\).
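
As a quick sanity check, in the simplest case n=m=1 the identity (3.16) reduces, via the substitution u=s/x, to

$$\int_0^\infty x^{-\nu-\lambda} e^{-s/x}\,\frac{dx}{x} = s^{-\nu-\lambda}\varGamma(\nu+\lambda).$$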

When n=m, (3.16) becomes:

Corollary 3.6

Suppose s>0 and \(\lambda,\nu\in {\mathbb{C}}^{n}\), where \(\Re(\lambda_i+\nu_j)>0\) for all i and j. Then

$$\int_{({\mathbb{R}}_{>0})^n } e^{-s/x_n} \varPsi^n_\nu(x) \varPsi^n_{\lambda }(x)\prod_{i=1}^n \frac{dx_i}{x_i} = s^{-\sum_{i=1}^n (\nu_i+\lambda_i)} \prod_{ij} \varGamma(\nu _i+\lambda_j) . $$

Using (2.4), this is equivalent to the following integral identity for \(\mathit{GL}(n,{\mathbb{R}})\)-Whittaker functions, due to Stade [43], see Theorem 7.2 below.

Corollary 3.7

Stade

Suppose r>0 and \(\lambda,\nu\in {\mathbb{C}}^{n}\), where \(\Re(\lambda_i+\nu_j)>0\) for all i and j. Then

$$\int_{({\mathbb{R}}_{>0})^n } e^{-r x_1} \varPsi^n_{-\nu}(x) \varPsi^n_{-\lambda }(x)\prod_{i=1}^n \frac{dx_i}{x_i} = r^{-\sum_{i=1}^n (\nu_i+\lambda_i)} \prod_{ij} \varGamma(\nu _i+\lambda_j). $$

Again, we note that this identity is proved in [43] without assuming the condition \(\Re(\lambda_i+\nu_j)>0\) for all i and j. In this case, the integral is associated, via the Rankin-Selberg method, with Archimedean L-factors of automorphic L-functions on \(\mathit{GL}(n,{\mathbb{R}})\times \mathit{GL}(n,{\mathbb{R}})\).

Corollary 3.8

Suppose s>0 and \(\nu\in {\mathbb{C}}^{n}\) with \(\Re\nu_i>0\) for each i. Then, for each m≤n, the function \(\varPsi^{m}_{\nu;s}\) is in \(L_{2}(({\mathbb{R}}_{>0})^{m}, \prod_{i=1}^{m} dx_{i}/x_{i})\), and the function \(e^{-s x_{1}} \varPsi^{n}_{-\nu}(x)\) is in \(L_{2}(({\mathbb{R}}_{>0})^{n}, \prod_{i=1}^{n} dx_{i}/x_{i})\).

Proof

The first claim follows from Corollary 3.5 and the Plancherel theorem, as follows. We first note that, under the above hypotheses,

$$\hat{\varPsi}^m_{\nu;s} (\lambda) = s^{-\sum_{i=1}^m (\nu_i+\lambda_i)} \prod _{ij}\varGamma(\nu _i+ \lambda_j) \in L_2\bigl(\iota {\mathbb{R}}^m, s_m(\lambda)d\lambda\bigr). $$

This is easily verified using Stirling’s approximation

$$\lim_{b\to\infty}\bigl|\varGamma(a+\iota b)\bigr| e^{\frac{\pi }{2}|b|}|b|^{\frac{1}{2}-a}= \sqrt{2\pi}, \quad a,b \in\mathbb{R} . $$

Now, suppose \(f\in L_{2}(({\mathbb{R}}_{>0})^{m}, \prod_{i=1}^{m} dx_{i}/x_{i})\) such that \(\hat{f}\) is continuous and compactly supported on \(\iota {\mathbb{R}}^{m}\). By the Plancherel theorem, such functions are dense in \(L_{2}(({\mathbb{R}}_{>0})^{m}, \prod_{i=1}^{m} dx_{i}/x_{i})\) and, moreover, satisfy

$$ f(x)=\int_{\iota {\mathbb{R}}^m} \hat{f}(\lambda) \varPsi^m_{\lambda}(x) s_m(\lambda) d\lambda $$
(3.17)

almost everywhere. Indeed, for any \(g\in L_{2}(({\mathbb{R}}_{>0})^{m}, \prod_{i=1}^{m} dx_{i}/x_{i})\) which is continuous and compactly supported we have, by Fubini’s theorem,

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^m} \biggl( \int_{\iota {\mathbb{R}}^m} \hat{f}( \lambda) \varPsi ^m_{\lambda}(x) s_m(\lambda) d \lambda \biggr) \overline{g(x)} \prod_{i=1}^m \frac{dx_i}{x_i} \\ &\quad =\int_{\iota {\mathbb{R}}^m} \hat{f}(\lambda) \overline{ \hat{g}(\lambda)} s_m(\lambda) d\lambda\\ &\quad = \int_{({\mathbb{R}}_{>0})^m} f(x) \overline{g(x)} \prod _{i=1}^m \frac {dx_i}{x_i} . \end{aligned}$$

This implies (3.17). Now, by Corollary 3.5,

$$\int_{({\mathbb{R}}_{>0})^m} \bigl|\varPsi^m_{\nu;s}(x) \varPsi^m_{\lambda}(x)\bigr| \prod_{i=1}^m \frac{dx_i}{x_i} \le\int_{({\mathbb{R}}_{>0})^m} \varPsi^m_{\Re\nu;s}(x) \varPsi^m_0(x) \prod_{i=1}^m \frac{dx_i}{x_i} <\infty. $$

It follows that, for \(f\in L_{2}(({\mathbb{R}}_{>0})^{m}, \prod_{i=1}^{m} dx_{i}/x_{i})\) such that \(\hat{f}\) is continuous and compactly supported on \(\iota {\mathbb{R}}^{m}\), the integral

$$\int_{({\mathbb{R}}_{>0})^m}\int_{\iota {\mathbb{R}}^m} \varPsi^m_{\nu;s}(x) \overline {\hat{f}(\lambda)} \varPsi^m_{\lambda}(x) s_m(\lambda)\, d\lambda \prod_{i=1}^m \frac{dx_i}{x_i} $$

is absolutely convergent, and so, by Fubini’s theorem,

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^m} \varPsi^m_{\nu;s}(x) \overline{f(x)} \prod_{i=1}^m \frac{dx_i}{x_i} \\ &\quad = \int_{({\mathbb{R}}_{>0})^m} \varPsi^m_{\nu;s}(x) \biggl( \int_{\iota {\mathbb{R}}^m} \overline{\hat{f}(\lambda)} \varPsi^m_{\lambda}(x) s_m(\lambda) d\lambda \biggr) \prod_{i=1}^m \frac{dx_i}{x_i} \\ &\quad =\int_{\iota {\mathbb{R}}^m} \hat{\varPsi}^m_{\nu;s}( \lambda) \overline{\hat{f}(\lambda)} s_m(\lambda) d\lambda. \end{aligned}$$

Hence, using the Cauchy-Schwarz inequality,

$$\begin{aligned} & \Biggl \vert \int_{({\mathbb{R}}_{>0})^m} \varPsi^m_{\nu;s}(x) \overline{f(x)} \prod_{i=1}^m \frac{dx_i}{x_i} \Biggr \vert \\ &\quad = \biggl \vert \int_{\iota {\mathbb{R}}^m} \hat{\varPsi}^m_{\nu;s}(\lambda) \overline{\hat{f}(\lambda)} s_m(\lambda) d\lambda\biggr \vert \\ &\quad \le \biggl( \int_{\iota {\mathbb{R}}^m} \bigl| \hat{\varPsi}^m_{\nu;s}( \lambda)\bigr|^2 s_m(\lambda) d\lambda \biggr)^{1/2} \biggl( \int_{\iota {\mathbb{R}}^m} \bigl|\hat{f}(\lambda)\bigr|^2 s_m(\lambda) d\lambda \biggr)^{1/2} . \end{aligned}$$

This proves the first claim. The second claim follows from the first, letting m=n and using (2.4). □

Consider the probability measure on input matrices W defined by

$$\tilde{\nu}_{\hat{\theta},\theta;s}(dw)= Z_{\hat{\theta},\theta ;s}^{-1} \nu_{\hat{\theta},\theta;s}(dw) $$

where

$$Z_{\hat{\theta},\theta;s} = s^{-\sum_{i=1}^p (\hat{\theta}_i+\theta _i)} \prod_{ij} \varGamma(\hat{\theta}_i+\theta_j). $$

The following result was obtained in [19].

Corollary 3.9

Suppose \(\hat{\theta}_{i}+\theta_{j}>0\) for each i and j, and (w.l.o.g.) that n≥m, \(\theta_i<0\) for each i and \(\hat{\theta}_j>0\) for each j. Then, the Laplace transform of the law \(\tilde{\nu}_{\hat{\theta},\theta;s}\circ t_{nm}^{-1}\) of the polymer partition function \(t_{nm}\) under \(\tilde{\nu}_{\hat{\theta},\theta;s}\) is given by

$$\begin{aligned} & \int e^{-r t_{nm}} \tilde{\nu}_{\hat{\theta},\theta;s}(dw)\\ &\quad = \int_{\iota {\mathbb{R}}^m} (rs)^{\sum_{i=1}^m (\theta_i-\lambda_i)} \prod_{ij}\varGamma( \lambda_i-\theta_j) \prod_{ij} \frac{\varGamma(\hat{\theta}_i+\lambda_j)}{\varGamma(\hat{\theta}_i+\theta_j)} s_m(\lambda) d\lambda. \end{aligned}$$

Proof

By Corollary 3.4,

$$\int e^{-r t_{nm}} \tilde{\nu}_{\hat{\theta},\theta;s}(dw) = Z_{\hat{\theta},\theta;s}^{-1} \int_{({\mathbb{R}}_{>0})^m } e^{-rx_1} \varPsi^m_\theta(x) \varPsi^m_{\hat{\theta};s}(x)\prod_{i=1}^m \frac{dx_i}{x_i} . $$

By Corollary 3.8, the functions \(e^{-rx_{1}} \varPsi^{m}_{\theta}(x)\) and \(\varPsi^{m}_{\hat{\theta};s}(x)\) are in the space \(L_{2}(({\mathbb{R}}_{>0})^{m}, \prod_{i=1}^{m} dx_{i}/x_{i})\). The result follows, by Corollaries 3.6, 3.7 and the Plancherel theorem. □

4 Equivalence of old and new descriptions of geometric RSK

We explain here the equivalence of the Noumi-Yamada row insertion construction [37] and the definition of geometric RSK given in Sect. 3. The input weight matrix \((w_{ij})\) is n×m, where m is fixed and n represents time. After n time steps the Noumi-Yamada process gives two patterns \(P=\{z_{k\ell}\}\) and \(Q=\{z'_{ij}\}\). P has height m, Q has height n, and their common shape vector \(z_{m\centerdot}=z'_{n\centerdot}\) is of length \(p=m\wedge n\). The rows of Q indexed by s=1,…,n from top to bottom are the successive shape vectors (bottom rows) \(z_{m\centerdot}(s)=(z_{m,\ell}(s))_{1\le\ell\le m\wedge s}\) of the temporal evolution {z(s):1≤s≤n} of the P pattern. Thus, as in classical RSK, the Q pattern serves as a recording pattern.

The Noumi-Yamada process begins with an empty pattern at time n=0. Then the following step is repeated for n=1,2,3,….

Noumi-Yamada construction for time step n−1→n.

Let z=z(n−1) denote the P pattern obtained after n−1 steps. Insertion of the row \(w_{n\centerdot}\) of weights into z transforms z into \(\check{z}=z(n)\) as follows.

  (i) If n≥m+1 (in other words, the triangle was filled by time n−1), then

    $$ \begin{aligned} &a_{k,1}=w_{n,k} &&\textrm{for } 1\le k\leq m, \\ &a_{k+1,\ell+1}=a_{k+1,\ell} \frac{z_{k+1,\ell} \check{z}_{k,\ell}}{\check{z}_{k+1,\ell} z_{k,\ell}} &&\textrm{for } 1\le\ell\leq k<m, \\ & \check{z}_{k,\ell}= a_{k,\ell}(z_{k,\ell}+\check{z}_{k-1,\ell}) &&\textrm{for } 1\le\ell<k\leq m, \\ & \check{z}_{k,k}= a_{k,k}z_{k,k} &&\textrm{for } 1\le k\leq m. \end{aligned} $$
    (4.1)

  (ii) If n≤m, then the equations above define \(\check{z}_{k,\ell}\) for \(1\le\ell\le k\wedge(n-1)\). Set

    $$ \check{z}_{k,n}=a_{n,n}\dotsm a_{k,n} \quad\text{for $k=n,\dotsc,m$,} $$
    (4.2)

    while \(\check{z}_{k,\ell}\) for \(\ell\ge n+1\) remain undefined.

Proposition 4.1

Let (w ij ) be an n×m weight matrix and T=T(W) defined by (3.3). Then the output T is equivalent to the patterns (P,Q) obtained from n steps of the Noumi-Yamada evolution, through these equations:

$$\begin{aligned} P\textit{ pattern:} \quad z_{k\ell}&= t_{n-\ell+1, k-\ell+1} , \quad 1\le\ell\le k\wedge n, 1\le k\le m, \end{aligned}$$
(4.3)
$$\begin{aligned} Q\textit{ pattern:} \quad z'_{s\ell}&= t_{s-\ell+1, m-\ell+1} , \quad 1\le\ell\le m\wedge s, 1\le s\le n. \end{aligned}$$
(4.4)

Note in particular the common shape vector

$$z_{m\centerdot}=z'_{n\centerdot}=(t_{n-\ell+1,m-\ell+1})_{1\le \ell\le p}. $$

Here is an illustration for n×m=3×6.

$$ T= \begin{bmatrix} z_{33} &z_{43} &z_{53} &z_{63}=z'_{33} &z'_{22} &z'_{11} \\ z_{22} &z_{32} &z_{42} &z_{52} &z_{62}=z'_{32} &z'_{21} \\ z_{11} &z_{21} &z_{31} &z_{41} &z_{51} &z_{61}=z'_{31} \end{bmatrix}. $$
(4.5)

Proof of Proposition 4.1

We keep m fixed and do induction on n. In the case n=1, the m-vector \(\check{z}_{\centerdot 1}\) described by (4.2) is the same as that obtained by applying \(R_{1}=\pi^{m}_{1}=l_{1m}\circ\dotsm\circ l_{11}\) to the top row \(w_{1\centerdot}\) of the weight matrix.

Suppose the statement is true for \(T^{n-1,m}\). Add the nth weight row \(w_{n\centerdot}\) to \(T^{n-1,m}\) and call the resulting n×m matrix \(\widetilde{T}^{n,m}\). Then \(T^{n,m}=R_{n}(\widetilde{T}^{n,m})\). From the definition of \(R_n\) we see that on row i∈{1,…,n−1} it alters only elements \(\tilde{t}_{ij}\) for \(j\le i-n+m\). Consequently after the application of \(R_n\), the induction assumption implies that (4.4) remains in force for 1≤s≤n−1. It only remains to check that (4.3) holds after the application of \(R_n\).

Again we do induction, starting from the bottom row of \(T^{n,m}\) and moving up row by row. This corresponds to executing \(R_{n}= \pi^{(m-n)\vee0+1}_{(n-m)\vee0+1}\circ\dotsm\circ\pi ^{m-1}_{n-1}\circ\pi^{m}_{n}\) step by step.

Before applying \(\pi^{m}_{n}\), the two bottom rows of \(\widetilde{T}^{n,m}\) are

$$\widetilde{T}^{n,m} = \begin{bmatrix} &\dotsm &\dotsm & \\ z_{11} &z_{21} &\dotsm &z_{m1} \\ w_{n1} &w_{n2} &\dotsm &w_{nm} \end{bmatrix} = \begin{bmatrix} &\dotsm &\dotsm & \\ z_{11} &z_{21} &\dotsm &z_{m1} \\ a_{11} &a_{21} &\dotsm &a_{m1} \end{bmatrix} $$

where we used the first row of (4.1). Apply \(\pi^{m}_{n}=l_{nm}\circ l_{n,m-1}\circ\dotsm\circ l_{n1}\). Only the bottom two rows are impacted. Use the notation from (4.1).

$$\begin{aligned} & \begin{bmatrix} z_{11} &z_{21} &z_{31} &\dotsm &z_{m1} \\ a_{11} &a_{21} &a_{31} &\dotsm &a_{m1} \end{bmatrix} \\ &\quad \stackrel{l_{n1}}{ \longmapsto} \begin{bmatrix} z_{11} &z_{21} &z_{31} &\dotsm &z_{m1} \\ \check{z}_{11} &a_{21} &a_{31} &\dotsm &a_{m1} \end{bmatrix} \\ &\quad \stackrel{l_{n2}}{\longmapsto} \begin{bmatrix} a_{22} &z_{21} &z_{31} &\dotsm &z_{m1} \\ \check{z}_{11} &\check{z}_{21} &a_{31} &\dotsm &a_{m1} \end{bmatrix} \stackrel{l_{n3}}{\longmapsto} \begin{bmatrix} a_{22} &a_{32} &z_{31} &\dotsm &z_{m1} \\ \check{z}_{11} &\check{z}_{21} &\check{z}_{31} &\dotsm &a_{m1} \end{bmatrix} \\ &\quad \stackrel{l_{n4}}{\longmapsto} \dotsm \stackrel {l_{nm}}{\longmapsto} \begin{bmatrix} a_{22} &a_{32} &a_{42} &\dotsm&a_{m2} &z_{m1} \\ \check{z}_{11} &\check{z}_{21} &\check{z}_{31} &\dotsm &\check{z}_{m-1,1} &\check{z}_{m1} \end{bmatrix} . \end{aligned}$$

Now the bottom row of \(T^{n,m}\) is in place. Note that the transformations above left in place \(z_{m1}=z'_{n1}\), as they should, for this entry is in accordance with (4.4).

Next, an application of \(\pi^{m-1}_{n-1}=l_{n-1,m-1}\circ l_{n-1,m-2}\circ\dotsm\circ l_{n-1,1}\) transforms rows n−2 and n−1 in this manner:

$$\begin{aligned} & \begin{bmatrix} z_{22} &z_{32} &z_{42} &\dotsm&z_{m-1,2} & z'_{n2} &z'_{n-1,1} \\ a_{22} &a_{32} &a_{42} &\dotsm&a_{m-1,2} &a_{m2} &z'_{n1} \\ \check{z}_{11} &\check{z}_{21} &\check{z}_{31} &\dotsm &\check{z}_{m-2,1} &\check{z}_{m-1,1} &\check{z}_{m1} \end{bmatrix}\\ &\quad \stackrel{\pi^{m-1}_{n-1}}{\longmapsto} \begin{bmatrix} a_{33} &a_{43} &a_{53} &\dotsm&a_{m-2,3} & z'_{n2} &z'_{n-1,1} \\ \check{z}_{22} &\check{z}_{32} &\check{z}_{42} &\dotsm&\check{z}_{m-1,2} &\check{z}_{m2} &z'_{n1} \\ \check{z}_{11} &\check{z}_{21} &\check{z}_{31} &\dotsm &\check{z}_{m-2,1} &\check{z}_{m-1,1} &\check{z}_{m1} \\ \end{bmatrix} . \end{aligned}$$

The bottom two rows of \(T^{n,m}\) are in place. These steps continue until we arrive at \(T^{n,m}\). □

5 Symmetric input matrix

As will be needed in the following, we write \(R_{i}^{n,m}\) and \(T=T^{n,m}\) for the mappings defined in (3.2)–(3.3), and note the following recursive structure. Let \(W=(w_{ij})\in({\mathbb{R}}_{>0})^{n\times m}\) and write \(W_{k,m}=(w_{ij},\ 1\le i\le k,\ 1\le j\le m)\). Recall that

$$T^{n,m}=R^{n,m}_n\circ R^{n,m}_{n-1} \circ\cdots\circ R^{n,m}_1. $$

Now, for each i≤n, the mapping \(R^{n,m}_{i}\) acts only on the first i rows of W and leaves the remaining rows of W unchanged. In fact, for each i≤k≤n, we have

$$R^{n,m}_i (W) = \begin{pmatrix} R^{k,m}_i(W_{k,m}) \\ W_{k,m}^c \end{pmatrix} , $$

where \(W_{k,m}^{c}=(w_{ij},\ k+1\le i\le n,\ 1\le j\le m)\). This property is immediate from the definitions. This gives the basic recursion

$$ T^{n,m}(W)=R_n^{n,m} \begin{pmatrix} T^{n-1,m}(W_{n-1,m}) \\ w_{n1}\ \ldots\ w_{nm} \end{pmatrix} . $$
(5.1)

Recall that

$$ T^{m,n}\bigl(W^t\bigr)=\bigl[T^{n,m}(W) \bigr]^t. $$
(5.2)

In particular, if n=m and W is symmetric, then T n,n(W) is also symmetric.

Lemma 5.1

Suppose that n=m and W is symmetric.

(a) The following recursion holds:

$$ T^{n,n}(W) = R^{n,n}_n \begin{pmatrix} \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W_{n-1,n-1}) \\ w_{1n}\ \ldots\ w_{n-1,n} \end{pmatrix} \right]^t \\ w_{1n}\ \ldots\ w_{nn} \end{pmatrix} . $$
(5.3)

Moreover, if we denote by (s ij ) the elements of the (n−1)×n matrix

$$ S= \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W_{n-1,n-1})\\ w_{1n}\ \ldots\ w_{n-1,n} \end{pmatrix} \right]^t $$
(5.4)

and by \((t_{ij})\) the elements of \(T^{n,n}(W)\), then

$$ \begin{array}{l@{\quad}l} t_{ij}=s_{ij} & \textit{for }1\le i<j\le n, \\ t_{11}=s_{12}/2s_{11}, & \\ t_{ii}=s_{i,i+1}s_{i-1,i}/s_{ii}, & \textit{for } 2\le i\le n-1,\\ t_{nn}=2s_{n-1,n}w_{nn}. & \end{array} $$
(5.5)

(b) For n≥1 we have this identity:

$$ 4^{\lfloor{n/2}\rfloor} \prod_{i=1}^n w_{ii} = \frac{\prod_{j=0}^{\lfloor{\frac {n-1}{2}}\rfloor} t_{n-2j, n-2j}}{ \prod_{j=0}^{\lfloor{\frac{n-2}{2}}\rfloor} t_{n-1-2j, n-1-2j}}= \frac{\prod_{i\ \mathrm{odd}} z_{ni}}{\prod_{i\ \mathrm{even}} z_{ni}}. $$
(5.6)

Proof

Part (a). Using (5.1), (5.2) and the fact that W is symmetric,

$$\begin{aligned} T^{n,n}(W) &= R^{n,n}_n \begin{pmatrix} T^{n-1,n}(W_{n-1,n}) \\ w_{n1}\ \ldots\ w_{nn} \end{pmatrix} \\ &= R^{n,n}_n \begin{pmatrix} [ T^{n,n-1}([W_{n-1,n}]^t) ]^t \\ w_{n1}\ \ldots\ w_{nn} \end{pmatrix} \\ &= R^{n,n}_n \begin{pmatrix} [ T^{n,n-1}(W_{n,n-1}) ]^t \\ w_{n1}\ \ldots\ w_{nn} \end{pmatrix} \\ &= R^{n,n}_n \begin{pmatrix} \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W_{n-1,n-1}) \\ w_{n1}\ \ldots\ w_{n,n-1} \end{pmatrix} \right]^t \\ w_{n1}\ \ldots\ w_{nn} \end{pmatrix} \\ &= R^{n,n}_n \begin{pmatrix} \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W_{n-1,n-1}) \\ w_{1n}\ \ldots\ w_{n-1,n} \end{pmatrix} \right]^t \\ w_{1n}\ \ldots\ w_{nn} \end{pmatrix} . \end{aligned}$$

This proves the first claim. So we have

$$T^{n,n}(W) = R^{n,n}_n \begin{pmatrix} S \\ w_{1n}\ \ldots\ w_{nn} \end{pmatrix} , $$

where \(S\in({\mathbb{R}}_{>0})^{(n-1)\times n}\). To prove the second claim, first note that the mapping \(R^{n,n}_{n}\) leaves the elements of its input matrix which are strictly above the diagonal unchanged. Thus, \(t_{ij}=s_{ij}\) for 1≤i<j≤n. Using this, the symmetry of T, and recalling how the row insertion procedure works (see Sect. 4), we see that

$$\begin{aligned} t_{nn}&=w_{nn} (t_{n-1,n}+s_{n-1,n})=2s_{n-1,n}w_{nn},\\ t_{n-1,n-1} &= \frac{t_{n-1,n} s_{n-1,n}}{s_{n-1,n-1}(t_{n-1,n}+s_{n-1,n})} (t_{n-2,n-1}+s_{n-2,n-1})\\ &=s_{n-1,n} s_{n-2,n-1} / s_{n-1,n-1} , \end{aligned}$$

and so on; for 2≤i≤n−1 we have \(t_{ii}=s_{i,i+1}s_{i-1,i}/s_{ii}\) and then finally,

$$t_{11}=\frac{t_{12}s_{12}}{s_{11}(t_{12}+s_{12})}=s_{12}/2s_{11} , $$

as required.

Part (b). The second equality in (5.6) is a consequence of (4.3). The first equality is proved by induction on n. Cases n=2 and n=3 are checked by hand.

Suppose (5.6) is true for n−1. Observe first from the definition of the mappings that \(R^{n,n-1}_{n}\), operating on the matrix appearing in (5.4), does not alter the diagonal \(\{t^{n-1}_{ii}\}_{1\le i\le n-1}\) of \(T^{n-1,n-1}\). Consequently (5.4) implies that \(s_{ii}=t^{n-1}_{ii}\) for 1≤i≤n−1.

Suppose n is even. Then the middle fraction of (5.6) develops as follows, through equations (5.5), \(s_{ii}=t^{n-1}_{ii}\) and the induction hypothesis:

$$\begin{aligned} \frac{t_{nn}t_{n-2,n-2}\dotsm t_{22}}{t_{n-1,n-1}t_{n-3,n-3}\dotsm t_{11}} &=\frac{ 2s_{n-1,n}w_{nn} \cdot\frac{s_{n-2,n-1} s_{n-3,n-2}}{s_{n-2,n-2}} \dotsm\frac{s_{23} s_{12}}{s_{22}} }{ \frac{s_{n-1,n} s_{n-2,n-1}}{s_{n-1,n-1}} \cdot \frac{s_{n-3,n-2} s_{n-4,n-3}}{s_{n-3,n-3}} \dotsm \frac {s_{12}}{2s_{11}} } \\ &= 4w_{nn} \cdot\frac{s_{n-1,n-1}s_{n-3,n-3}\dotsm s_{11}}{s_{n-2,n-2}s_{n-4,n-4}\dotsm s_{22}} \\ &= 4w_{nn} \cdot4^{\frac{n}{2}-1} \prod_{i=1}^{n-1} w_{ii} = 4^{\lfloor{n/2}\rfloor} \prod_{i=1}^n w_{ii} . \end{aligned}$$

The case of odd n develops similarly except that now the product in the numerator finishes with \(s_{12}/2s_{11}\) and consequently the factors of 2 cancel each other. □
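
Parts (a) and (b) can be checked numerically; the following sketch is ours and again assumes the grsk helper from the Sect. 3 sketch:

```python
import numpy as np
# Assumes the grsk(W) helper from the sketch in Sect. 3.

n = 4
A = np.random.rand(n, n) + 0.5
W = (A + A.T) / 2                       # a random symmetric input matrix
T = grsk(W)
print(np.allclose(T, T.T))              # T is again symmetric

lhs = 4 ** (n // 2) * np.prod(np.diag(W))
diag = np.diag(T)                       # (t_11, ..., t_nn)
num = np.prod(diag[n - 1::-2])          # t_nn t_{n-2,n-2} ...
den = np.prod(diag[n - 2::-2])          # t_{n-1,n-1} t_{n-3,n-3} ...
print(np.isclose(lhs, num / den))       # the identity (5.6)
```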

Theorem 5.2

Suppose that n=m and W is symmetric. Then \(T=T(W)=(t_{ij})\) is also symmetric, and the Jacobian of the map

$$(\log w_{ij}, 1\le i\le j\le n)\mapsto(\log t_{ij}, 1\le i\le j\le n) $$

is ±1.

Proof

We prove this by induction on n. When n=2, we have \(t_{11}=w_{12}/2\), \(t_{12}=w_{11}w_{12}\), \(t_{22}=2w_{11}w_{12}w_{22}\) and the result is immediate. Now, by the previous lemma,

$$T=R^{n,n}_n \begin{pmatrix} \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W_{n-1,n-1})\\ w_{1n}\ \ldots\ w_{n-1,n} \end{pmatrix} \right]^t \\ w_{1n}\ \ldots\ w_{n-1,n}\ w_{nn} \end{pmatrix} . $$

Denoting by \((s_{ij})\) the elements of the matrix

$$S= \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W_{n-1,n-1})\\ w_{1n}\ \ldots\ w_{n-1,n} \end{pmatrix} \right]^t, $$

we have, by the previous lemma,

$$ \begin{array}{l@{\quad}l} t_{ij}=s_{ij} & \mbox{for } 1\le i<j\le n, \\ t_{11}=s_{12}/2s_{11}, & \\ t_{ii}=s_{i,i+1}s_{i-1,i}/s_{ii} & \mbox{for } 2\le i\le n-1,\\ t_{nn}=2s_{n-1,n}w_{nn}. & \end{array} $$
(5.7)

This expresses the n(n+1)/2 variables \(t_{ij}\), 1≤i≤j≤n, as a function, which we shall denote by F, of the n(n+1)/2 variables \(s_{ij}\), 1≤i<j≤n, and \(s_{11},\ldots,s_{n-1,n-1},w_{nn}\).

Denote by \(t^{n-1}_{ij}\) the elements of the symmetric matrix \(T^{n-1,n-1}(W_{n-1,n-1})\). By the induction hypothesis, the map

$$(\log w_{ij}, 1\le i\le j\le n-1)\mapsto\bigl(\log t^{n-1}_{ij}, 1\le i\le j\le n-1\bigr) $$

has Jacobian ±1. The mapping \(R^{n,n-1}_{n}\) on the whole of \(({\mathbb{R}}_{>0})^{n\times(n-1)}\) is a composition of \(l_{ij}\)-maps and hence has Jacobian ±1 in logarithmic variables; since it leaves matrix elements above the diagonal unchanged, its restriction to the space of matrix elements on and below the diagonal also has Jacobian ±1 in logarithmic variables. It follows that the mapping

$$\begin{aligned} & (\log w_{ij}, 1\le i\le j <n; \log w_{in}, 1\le i<n) \\ &\quad \mapsto(\log s_{ij}, 1\le i<j\le n;\ \log s_{ii}, 1\le i <n) \end{aligned}$$

has Jacobian ±1. It therefore remains only to show that the Jacobian submatrix of the map F (in logarithmic variables) which corresponds to the variables \((\log s_{11},\ldots,\log s_{n-1,n-1},\log w_{nn})\) and \((\log t_{11},\ldots,\log t_{nn})\) has determinant ±1. From (5.7), this submatrix is given by

$$\bordermatrix{& \log s_{11}\!\!\!\! & \log s_{22}\!\!\!\! & \ldots\!\!\!\!& \log s_{n-1,n-1}\!\!\!\! & \log w_{nn} \cr \log t_{11} & -1 &&&&\cr \log t_{22} & &-1&&&\cr \vdots&&&\ddots&&\cr \log t_{n-1,n-1} & &&&-1&\cr \log t_{n,n} & &&&&1\cr }, $$

which completes the proof. □

Consider the measure on symmetric input matrices with positive entries defined by

$$\begin{aligned} \nu_{\alpha,\zeta }(dw) = \prod_{i<j} w_{ij}^{-\alpha_i-\alpha_j} \prod_i w_{ii}^{-\alpha_i-\zeta } \exp \biggl(-\sum_{i<j} \frac{1}{w_{ij}} - \sum_i \frac{1}{2w_{ii}} \biggr) \prod _{i\le j} \frac{dw_{ij}}{w_{ij}} , \end{aligned}$$
(5.8)

where \(\alpha\in {\mathbb{R}}^{n}\) and \(\zeta \in {\mathbb{R}}\) satisfy \(\alpha_i+\zeta>0\) for each i and \(\alpha_i+\alpha_j>0\) for \(i\ne j\). Note that

$$\int_{({\mathbb{R}}_{>0})^{n(n+1)/2} }\nu_{\alpha,\zeta }(dw) = 2^{\sum_{i=1}^n(\alpha_i+\zeta )} \prod _i \varGamma(\alpha_i+\zeta ) \prod _{i<j}\varGamma(\alpha_i+ \alpha_j) . $$
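Since the entries \(w_{ij}\), i≤j, are independent under \(\nu_{\alpha,\zeta}\), this normalization is simply a product of one-dimensional integrals: each off-diagonal entry contributes a factor \(\varGamma(\alpha_i+\alpha_j)\) and each diagonal entry a factor \(2^{\alpha_i+\zeta}\varGamma(\alpha_i+\zeta)\). A minimal numerical check of the two factor types (a generic exponent a>0 stands in for \(\alpha_i+\alpha_j\) or \(\alpha_i+\zeta\); uses mpmath):

```python
import mpmath as mp

a = 1.7  # stands in for alpha_i + alpha_j (off-diagonal) or alpha_i + zeta (diagonal)

# off-diagonal factor: int_0^oo w^{-a} e^{-1/w} dw/w = Gamma(a)
off = mp.quad(lambda w: w**(-a) * mp.exp(-1/w) / w, [0, mp.inf])
# diagonal factor:     int_0^oo w^{-a} e^{-1/(2w)} dw/w = 2^a Gamma(a)
diag = mp.quad(lambda w: w**(-a) * mp.exp(-1/(2*w)) / w, [0, mp.inf])

print(off, mp.gamma(a))          # these agree
print(diag, 2**a * mp.gamma(a))  # and so do these
```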

In this setting we have R=C and so, using (3.8) and Lemma 5.1(b),

$$\prod_{i<j} w_{ij}^{-\alpha_i-\alpha_j} \prod _i w_{ii}^{-\alpha _i-\zeta } = 4^{\lfloor n/2\rfloor \zeta }\prod_i z_{ni}^{(-1)^i\zeta }R^{-\alpha}. $$

Thus, by Theorems 3.2 and 5.2, we obtain the following result.

Corollary 5.3

The push-forward of \(\nu_{\alpha,\zeta }\) under σ is given by

$$ \nu_{\alpha,\zeta }\circ\sigma^{-1} (dx) = 4^{\lfloor n/2\rfloor \zeta } f(x)^\zeta e^{-\frac{1}{2x_n}} \varPsi^n_\alpha(x) \prod_{i=1}^n \frac {dx_i}{x_i} , $$
(5.9)

where

$$f(x)=\prod_i x_i^{(-1)^i}. $$

If \(\lambda\in {\mathbb{C}}^{n}\) and \(\gamma\in {\mathbb{C}}\) satisfy \(\Re(\lambda_i+\gamma)>0\) for each i, and \(\Re(\lambda_i+\lambda_j)>0\) for \(i\ne j\), then

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} f(x)^\gamma e^{-\frac{1}{2x_n}} \varPsi^n_\lambda (x) \prod_{i=1}^n \frac{dx_i}{x_i} \\ &\quad = 4^{-\lfloor n/2\rfloor\gamma} 2^{\sum_{i=1}^n(\lambda_i+\gamma)} \prod_i \varGamma(\lambda_i+\gamma ) \prod_{i<j} \varGamma(\lambda_i+\lambda_j) . \end{aligned}$$

Now, using (2.2) we can strengthen this to:

Corollary 5.4

Suppose \(\lambda\in {\mathbb{C}}^{n}\) and \(\gamma\in {\mathbb{C}}\) satisfy \(\Re(\lambda_i+\gamma)>0\) for each i, and \(\Re(\lambda_i+\lambda_j)>0\) for \(i\ne j\). Then, for s>0,

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} f(x)^\gamma e^{-s/x_n} \varPsi^n_\lambda(x) \prod_{i=1}^n \frac{dx_i}{x_i} \\ &\quad = c_n(s,\gamma) s^{-\sum_{i=1}^n\lambda_i} \prod _i \varGamma(\lambda_i+\gamma) \prod _{i<j}\varGamma(\lambda _i+\lambda_j), \end{aligned}$$

where

$$c_n(s,\gamma)= \begin{cases} 1 & \textit{if }n\textit{ is even},\\ s^{-\gamma} & \textit{if }n\textit{ is odd}. \end{cases} $$
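For n=1 the statement can be checked by hand: reading the conventions of Sect. 2 as \(\varPsi^1_\lambda(x)=x^{-\lambda}\) (an assumption on our part) and \(f(x)=x^{-1}\), the left-hand side is \(\int_0^\infty x^{-\gamma-\lambda}e^{-s/x}\,dx/x=s^{-\lambda-\gamma}\varGamma(\lambda+\gamma)\), which matches the right-hand side since \(c_1(s,\gamma)=s^{-\gamma}\). A numerical confirmation in Python:

```python
import mpmath as mp

lam, gam, s = 0.9, 0.6, 1.3   # arbitrary values with lam + gam > 0 and s > 0

# n = 1: f(x) = x^{-1}, and we read Psi^1_lam(x) = x^{-lam}  [assumption]
lhs = mp.quad(lambda x: x**(-gam) * mp.exp(-s/x) * x**(-lam) / x, [0, mp.inf])
rhs = s**(-gam) * s**(-lam) * mp.gamma(lam + gam)   # c_1(s, gam) = s^{-gam}
print(lhs, rhs)  # agree
```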

By (2.4) this is equivalent to the following identity, which in turn is equivalent to an integral identity conjectured by Bump and Friedberg [16] and proved by Stade [44, Theorem 3.3]; see Theorem 7.5 below. We note that in [44] the corresponding statement is proved without any restrictions on the parameters. This integral is associated with an Archimedean L-factor of an exterior square automorphic L-function on \(\mathit{GL}(n,{\mathbb{R}})\).

Corollary 5.5

(Stade)

Suppose \(\lambda\in {\mathbb{C}}^{n}\) and \(\gamma \in {\mathbb{C}}\) satisfy \(\Re(\lambda_i+\gamma)>0\) for each i, and \(\Re(\lambda_i+\lambda_j)>0\) for \(i\ne j\). Then, for s>0,

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} f\bigl(x'\bigr)^\gamma e^{-sx_1} \varPsi^n_{-\lambda}(x) \prod _{i=1}^n \frac{dx_i}{x_i} \\ &\quad = c_n(s,\gamma) s^{-\sum_{i=1}^n\lambda_i} \prod _i \varGamma(\lambda_i+\gamma) \prod _{i<j}\varGamma(\lambda _i+\lambda_j), \end{aligned}$$

where \(x'_{i}=1/x_{n-i+1}\).

Note that f(x′)=f(x) if n is even and f(x′)=1/f(x) if n is odd.

Now, consider the probability measure on symmetric matrices with positive entries defined by

$$ \tilde{\nu}_{\alpha,\zeta }(dw)=Z_{\alpha,\zeta }^{-1} \nu_{\alpha,\zeta }(dw), $$
(5.10)

where

$$Z_{\alpha,\zeta }=2^{\sum_{i=1}^n(\alpha_i+\zeta )} \prod_i \varGamma (\alpha_i+\zeta ) \prod_{i<j} \varGamma(\alpha_i+\alpha_j). $$

From Corollary 5.3, we obtain:

Corollary 5.6

The Laplace transform of the law of the polymer partition function \(t_{nn}\) under \(\tilde{\nu}_{\alpha,\zeta }\) is given for r>0 by

$$\int e^{-r t_{nn}} \tilde{\nu}_{\alpha,\zeta }(dw)= 4^{\lfloor n/2\rfloor \zeta } Z_{\alpha,\zeta }^{-1} \int_{({\mathbb{R}}_{>0})^n} f(x)^\zeta e^{-rx_1-\frac{1}{2x_n}} \varPsi^n_\alpha(x) \prod_{i=1}^n \frac{dx_i}{x_i}. $$

Remark

(A formal computation)

In the following, we formally rewrite the above formula as a multiple contour integral which we expect to be valid, at least in some suitably regularized sense. Let ϵ>0 and set \(\alpha_{i}'=\alpha_{i}+\epsilon\). It follows from Corollary 3.6 (or 3.8) that the function \(e^{-\frac{1}{2x_{n}}} \varPsi^{n}_{\alpha'}(x)\) is in \(L_{2}(({\mathbb{R}}_{>0})^{n},\prod_{i=1}^{n} dx_{i}/x_{i})\). Moreover, by Corollary 3.6, for \(\lambda\in\iota {\mathbb{R}}^{n}\),

$$ \int_{({\mathbb{R}}_{>0})^n} e^{-\frac{1}{2x_n}} \varPsi^n_{\alpha'}(x) \varPsi ^n_\lambda(x) \prod_{i=1}^n \frac{dx_i}{x_i} = 2^{\sum_i(\lambda_i+\alpha_i+\epsilon)} \prod_{i,j} \varGamma (\alpha_i+\lambda_j+\epsilon). $$
(5.11)

Thus, by the Plancherel theorem, for any \(g\in L_{2}(({\mathbb{R}}_{>0})^{n},\prod_{i=1}^{n} dx_{i}/x_{i})\) we can write

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} \overline{g(x)} e^{-\frac{1}{2x_n}} \varPsi ^n_{\alpha'}(x) \prod_{i=1}^n \frac{dx_i}{x_i} \\ &\quad = \int_{\iota {\mathbb{R}}^n} \overline{ \hat{g}(\lambda) } 2^{\sum_i(\lambda_i+\alpha_i+\epsilon)} \prod_{i,j} \varGamma(\alpha _i+\lambda_j+\epsilon) s_n(\lambda) d\lambda. \end{aligned}$$
(5.12)

Suppose n is even. By Corollary 5.5, if r>0 and \(\Re\lambda_i>0\) for each i,

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} f(x)^\zeta e^{-rx_1} \varPsi^n_{-\lambda}(x) \prod_{i=1}^n \frac{dx_i}{x_i}\\ &\quad = r^{-\sum_{i=1}^n\lambda_i} \prod_i \varGamma( \lambda_i+\zeta ) \prod_{i<j}\varGamma( \lambda_i+\lambda_j). \end{aligned}$$

By (2.3) it follows that, for ϵ>0 and \(\lambda\in \iota {\mathbb{R}}^{n}\),

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} f(x)^\zeta e^{-rx_1} \biggl(\prod_i x_i^\epsilon \biggr) \varPsi^n_{-\lambda}(x) \prod _{i=1}^n \frac {dx_i}{x_i} \\ &\quad = r^{-\sum_{i=1}^n(\lambda_i+\epsilon)} \prod_i \varGamma(\lambda _i+\zeta +\epsilon) \prod_{i<j} \varGamma(\lambda_i+\lambda_j+2\epsilon). \end{aligned}$$
(5.13)

Formally, combining (5.11), (5.13) and (5.12) yields the following integral formula for the Laplace transform of the law of the polymer partition function t nn under the probability measure \(\tilde{\nu}_{\alpha,\zeta }\):

$$\begin{aligned} & \int e^{-r t_{nn}} \tilde{\nu}_{\alpha,\zeta }(dw) \\ &\quad = \int _{\iota {\mathbb{R}}^n} \biggl(\frac{r}{2} \biggr)^{-\sum_i(\lambda _i+\epsilon)} \prod_i \frac{\varGamma(\lambda_i+\zeta +\epsilon)}{\varGamma(\alpha _i+\zeta )} \prod _{i,j} \varGamma(\alpha_i+\lambda_j+ \epsilon) \\ &\qquad {} \times\prod_{i<j} \frac{\varGamma(\lambda_i+\lambda_j+2\epsilon)}{ \varGamma(\alpha_i+\alpha_j)} s_n(\lambda) d\lambda \end{aligned}$$
(5.14)

or, equivalently,

$$\begin{aligned} & \int e^{-r t_{nn}} \tilde{\nu}_{\alpha,\zeta }(dw) \\ &\quad = \int \biggl(\frac{r}{2} \biggr)^{-\sum_i\lambda_i} \prod _i \frac {\varGamma(\lambda_i+\zeta )}{\varGamma(\alpha_i+\zeta )} \prod_{i,j} \varGamma(\alpha_i+\lambda_j) \prod _{i<j} \frac{\varGamma (\lambda_i+\lambda_j)}{\varGamma(\alpha_i+\alpha_j)} s_n(\lambda) d\lambda \end{aligned}$$
(5.15)

where the integration is along vertical lines with \(\Re\lambda_i>0\) for each i. If n is odd, we similarly formally obtain, this time using Theorem 7.4 instead of Corollary 5.5 because in this case \(f(x')^{\zeta }=f(x)^{-\zeta }\) and ζ>0,

$$\begin{aligned} & \int e^{-r t_{nn}} \tilde{\nu}_{\alpha,\zeta }(dw) \\ &\quad = \int \biggl(\frac{r}{2} \biggr)^{-\sum_i\lambda_i} \prod _i \frac {\varGamma(\lambda_i-\zeta )}{\varGamma(\alpha_i+\zeta )} \prod_{i,j} \varGamma(\alpha_i+\lambda_j) \prod _{i<j} \frac{\varGamma (\lambda_i+\lambda_j)}{\varGamma(\alpha_i+\alpha_j)} s_n(\lambda) d\lambda \end{aligned}$$
(5.16)

where the integration is along vertical lines with ℜλ i >0 for each i. It seems reasonable to expect the integral formulas (5.15) and (5.16) to be valid, at least in some suitably regularized sense.

6 Geometric RSK for triangular arrays and paths below a hard wall

In this section we introduce a birational, geometric RSK type mapping \(T^{\Delta }_{n}\) that maps triangular arrays \(X_n=(x_{ij},\ 1\le j<i\le n)\) to triangular arrays \(T=(t_{ij},\ 1\le j<i\le n)\), both with positive real entries. The motivation comes from the symmetric polymer of Sect. 5, with a (de)pinning parameter ζ that tends to infinity. This will become clear later on in Proposition 6.4 (see also the remarks at the end of the section). Notions like the type and the shape can be defined also for this mapping. We prove that it satisfies a version of the fundamental identity (Theorem 3.2) and preserves volume in logarithmic variables. Moreover, we can relate the shape to partition functions of nonintersecting paths below a “hard wall”, that is, paths restricted to {(i,j):j<i}.

For n=2 the mapping is defined by

$$\begin{aligned} T^\Delta _2 ( x_{21} )=x_{21}. \end{aligned}$$
(6.1)

For n≥3 we define inductively

$$\begin{aligned} T^\Delta _n(X_n)=R^\Delta _n \begin{pmatrix} T^\Delta _{n-1}(X_{n-1}) \\ x_{n1}\ \ldots\ x_{n,n-1} \end{pmatrix} , \end{aligned}$$
(6.2)

with \(X_{n-1}=(x_{ij},\ 1\le j<i\le n-1)\) and

$$\begin{aligned} R^\Delta _n= \rho^{\Delta ,n}_{n-1} \circ\cdots\circ\rho^{\Delta ,n}_1 \end{aligned}$$
(6.3)

where

$$ \begin{aligned} \rho^{\Delta ,n}_j&= \rho^{n}_j\quad \mathrm{for}\ j=1,\ldots,n-2,\quad \text{and}\\ \rho^{\Delta ,n}_{n-1}&=b^{\Delta , n}_{2,1} \circ\cdots\circ b^{\Delta , n}_{n-1,n-2}\circ b^{\Delta , n}_{n,n-1} \circ r^\Delta _{n,n-1}, \end{aligned} $$
(6.4)

and \(\rho^{n}_{j}\) is defined in (3.4). To complete the definition of \(T^{\Delta }_{n}\) we define the mappings \(b^{\Delta , n}_{j,j-1}\) and \(r^{\Delta }_{n,n-1}\) on a triangular array \(X_n=(x_{ij},\ 1\le j<i\le n)\). This is done as follows. The mapping \(r^{\Delta }_{n,n-1}\) replaces \(x_{n,n-1}\) by \(1/x_{n,n-1}\). With the conventions \(x_{i0}=x_{n+1,n-1}=1\), the mappings \(b^{\Delta , n}_{j,j-1}\) are defined as follows:

  • For \(k=0,1,2,\dotsc,\lfloor{\frac{n}{2}}\rfloor -1\), \(b^{\Delta , n}_{n-2k,n-2k-1}\) replaces \(x_{n-2k,n-2k-1}\) with

    $$\begin{aligned} x'_{n-2k,n-2k-1} =&\frac{x_{n-2k+1,n-2k-1} x_{n-2k,n-2k-2}}{x_{n-2k,n-2k-1}}. \end{aligned}$$
    (6.5)
  • For \(k=1,2,\dotsc,\lfloor{\frac{n-1}{2}}\rfloor\), \(b^{\Delta , n}_{n-2k+1,n-2k}\) is the identity mapping.

We present explicitly the cases n=3,4 to clarify the definitions. For n=3,

$$\begin{aligned} T^\Delta _3\left ( \begin{array}{ccc} x_{21}\\ x_{31}& x_{32}\\ \end{array} \right ) &= \rho^{\Delta ,3}_2\circ\rho^{\Delta ,3}_1 \begin{pmatrix} T^\Delta _{2}(x_{21}) \\ x_{31}\,\,\,\,\,\, x_{32} \end{pmatrix} = \rho^{\Delta ,3}_2\circ \rho^{\Delta ,3}_1 \left ( \begin{array}{cc} x_{21} & \\ x_{31} & x_{32} \end{array} \right ) \\ &= \rho^{\Delta ,3}_2 \left ( \begin{array}{cc} x_{21} & \\ x_{21}x_{31} & x_{32} \end{array} \right ) = \left ( \begin{array}{cc} x_{21} & \\ x_{21}x_{31} & x_{21}x_{31}x_{32} \end{array} \right ). \end{aligned}$$

For n=4,

$$\begin{aligned} &T^\Delta _4\left ( \begin{array}{ccc} x_{21}\\ x_{31}& x_{32}\\ x_{41}& x_{42} & x_{43}\\ \end{array} \right )\\ &\quad = \rho^{\Delta ,4}_3\circ\rho^{\Delta ,4}_2 \circ\rho^{\Delta ,4}_1 \begin{pmatrix} T^\Delta _3 \left ( \begin{array}{cc} x_{21}& \\ x_{31}& x_{32}\\ \end{array} \right ) \\ x_{41}\,\,\,\,\,\, x_{42}\,\,\,\,\,\, x_{43} \end{pmatrix} \\ &\quad = \rho^{\Delta ,4}_3\circ\rho^{\Delta ,4}_2 \circ\rho^{\Delta ,4}_1 \left ( \begin{array}{ccc} x_{21}\\ x_{21}x_{31}& x_{21}x_{31}x_{32}\\ x_{41}& x_{42} & x_{43}\\ \end{array} \right ) \\ &\quad= \rho^{\Delta ,4}_3\circ\rho^{\Delta ,4}_2 \left ( \begin{array}{ccc} x_{21}\\ x_{21}x_{31}& x_{21}x_{31}x_{32}\\ x_{21}x_{31}x_{41}& x_{42} & x_{43}\\ \end{array} \right ) \\ &\quad= \rho^{\Delta ,4}_3 \left ( \begin{array}{ccc} x_{21}\\ \frac{x_{21}x_{32}x_{41}}{x_{32}+x_{41}}& x_{21}x_{31}x_{32}\\ x_{21}x_{31}x_{41}& x_{21}x_{31}x_{42}(x_{32}+x_{41}) & x_{43}\\ \end{array} \right ) \\ &\quad= \left ( \begin{array}{ccc} \frac{x_{32}x_{41}}{x_{32}+x_{41}}\\ \frac{x_{21}x_{32}x_{41}}{x_{32}+x_{41}}& x_{21}x_{31}x_{32}\\ x_{21}x_{31}x_{41}& x_{21}x_{31}x_{42}(x_{32}+x_{41}) & x_{21}x_{31}x_{42}x_{43}(x_{32}+x_{41}) \\ \end{array} \right ). \end{aligned}$$

For a triangular array \(X=(x_{ij},\ 1\le j<i\le n)\) define

$${\mathcal{E}}^\Delta (X)=\frac{1}{x_{21}}+\sum_{j\leq i-1} \frac {x_{i-1,j}+x_{i,j-1}}{x_{ij}}, $$

with the convention that \(x_{i0}=x_{ii}=0\) for i=1,…,n. Here is the analogue of Theorem 3.2 for triangular arrays.

Theorem 6.1

Let \(W_n=(w_{ij},\ 1\le j<i\le n)\) with \(w_{ij}\in\mathbb{R}_{>0}\). Then the output array \(T_{n}=T^{\Delta }_{n}(W_n)\) satisfies

$$\begin{aligned} {\mathcal{E}}^\Delta (T_n)=\sum _{1\leq j<i\leq n}\frac{1}{w_{ij}}. \end{aligned}$$
(6.6)

Proof

We will show that

$${\mathcal{E}}^{\Delta }(T_n)={\mathcal{E}}^{\Delta }\bigl(T^\Delta _{n-1}(W_{n-1}) \bigr)+\sum_{j=1}^{n-1}\frac{1}{w_{nj}}. $$

To this end, let \(T^{0}=\begin{pmatrix} T^{\Delta }_{n-1}(W_{n-1})\\ w_{n1}\ \ldots\ w_{n,n-1} \end{pmatrix}\) and \(T^{k}=\rho^{\Delta ,n}_{k}\circ\cdots\circ\rho^{\Delta ,n}_{1}(T^{0})\) for k=1,2,…,n−1. For a triangular array X define

$${\mathcal{E}}^{\Delta ,n,k}(X)=\frac{1}{x_{21}}+\sum_{i,j}^{(k)} \frac{x_{i-1,j}+x_{i,j-1}}{x_{ij}}+\sum_{j=k+1}^{n-1} \frac{1}{x_{nj}}, $$

where the summation \(\sum_{i,j}^{(k)}\) is over all indices (i,j) such that \(1\le j<i\le n\), but (i,j)≠(n,k+1),…,(n,n−1). The boundary conventions \(x_{i0}=x_{ii}=0\) are still in force. We will show that

$${\mathcal{E}}^{\Delta ,n,k}\bigl(T^k\bigr)={\mathcal{E}}^{\Delta ,n,k-1} \bigl(T^{k-1}\bigr)\quad\text{for} \,\, k=1,2,\dotsc,n-1, $$

and this will conclude the proof. Notice that for k=1,2,…,n−2 this fact is already included in the proof of Theorem 3.2, since \(\rho^{\Delta ,n}_{i}=\rho^{n}_{i}\) for \(i\le n-2\). To check the case k=n−1, let \(X=T^{n-2}\) and \(X'=\rho^{\Delta ,n}_{n-1}(X)=T^{n-1}\). Since \(\rho^{\Delta ,n}_{n-1}\) alters only the elements \(x_{i,i-1}\), i=2,…,n, and leaves the rest unchanged,

$$\begin{aligned} {\mathcal{E}}^{\Delta ,n,n-1}\bigl(X'\bigr)&={\mathcal{E}}^{\Delta ,n,n-1}\bigl( \rho^{\Delta ,n}_{n-1}(X)\bigr) \\ & = \frac{1}{x'_{2,1}}+\sum _{j<i}\frac {x'_{i-1,j}+x'_{i,j-1}}{x'_{ij}} \\ &= \mathop{\tilde{\sum}}\limits_{ j<i-1}\frac{x_{i-1,j}+x_{i,j-1}}{x_{ij}} \\ &\quad{}+ \frac{1}{x'_{2,1}} + \frac{x'_{2,1}}{x_{3,1}}+ \sum _{i=3}^{n-1} \biggl( \frac{x_{i,i-2}}{x'_{i,i-1}} + \frac {x'_{i,i-1}}{x_{i+1,i-1}} \biggr) +\frac{x_{n,n-2}}{x'_{n,n-1}} \end{aligned}$$
(6.7)

where in the summation \(\tilde{\sum}\) we set appearances of the terms \(x_{i,i-1}\), i=2,…,n, equal to zero. Consider the three parts of line (6.7).

First

$$\frac{1}{x'_{21}}+\frac{x'_{21}}{x_{31}}=\frac{1}{x_{21}}+\frac {x_{21}}{x_{31}} $$

because either n is odd and \(x'_{21}=x_{21}\), or n is even and \(x'_{21}=x_{31}/x_{21}\). The middle terms satisfy

$$\frac{x_{i,i-2}}{x'_{i,i-1}} + \frac{x'_{i,i-1}}{x_{i+1,i-1}} = \frac{x_{i,i-2}}{x_{i,i-1}} + \frac{x_{i,i-1}}{x_{i+1,i-1}}\,, $$

either by virtue of (6.5) if i=n−2k, or because \(x'_{i,i-1}=x_{i,i-1}\) when i=n−2k+1. Finally,

$$\begin{aligned} \frac{x'_{n,n-2}}{x'_{n,n-1}}=\frac{1}{x_{n,n-1}} \end{aligned}$$

by (6.5) and the definition of \(r^{\Delta }_{n,n-1}\). Making these substitutions on line (6.7) converts \({\mathcal{E}}^{\Delta ,n,n-1}(X')\) into \({\mathcal{E}}^{\Delta ,n,n-2}(X)\) and completes the proof. □
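The identity (6.6) is also easy to test against the closed form of \(T^{\Delta}_4\) computed in the worked example above. The following Python sketch (ours, not part of the proof) draws a random triangular array, builds the n=4 output from that closed form, and compares the two sides of (6.6):

```python
import random

def energy(T, n):
    # E^Delta(X) = 1/x_{21} + sum_{j <= i-1} (x_{i-1,j} + x_{i,j-1}) / x_{ij},
    # with the conventions x_{i0} = x_{ii} = 0
    get = lambda i, j: T.get((i, j), 0.0)
    return 1.0 / T[(2, 1)] + sum((get(i-1, j) + get(i, j-1)) / T[(i, j)]
                                 for i in range(2, n+1) for j in range(1, i))

w = {(i, j): random.uniform(0.5, 2.0) for i in range(2, 5) for j in range(1, i)}
x21, x31, x32 = w[(2, 1)], w[(3, 1)], w[(3, 2)]
x41, x42, x43 = w[(4, 1)], w[(4, 2)], w[(4, 3)]

# closed form of T^Delta_4 from the worked example above
T4 = {(2, 1): x32*x41/(x32 + x41),
      (3, 1): x21*x32*x41/(x32 + x41), (3, 2): x21*x31*x32,
      (4, 1): x21*x31*x41, (4, 2): x21*x31*x42*(x32 + x41),
      (4, 3): x21*x31*x42*x43*(x32 + x41)}

print(energy(T4, 4), sum(1.0/v for v in w.values()))  # the two sides of (6.6)
```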

The following theorem states the volume preserving property of the map \(T^{\Delta }_{n}\). It follows from the volume preservation of the individual steps in (6.4).

Theorem 6.2

Let \(W=(w_{ij},\ 1\le j<i\le n)\in({\mathbb{R}}_{>0})^{n(n-1)/2}\) as above, and consider the mapping \(W\mapsto T^{\Delta }_{n}(W)=(t_{ij},\ 1\le j<i\le n)\). In logarithmic variables

$$(\log w_{ij}, 1\leq j < i\leq n) \mapsto (\log t_{ij}, 1 \leq j < i\leq n) $$

has Jacobian equal to ±1.

Consider a triangular array \(W=(w_{ij},\ 1\le j<i\le n)\) and the output pattern \(P^{\Delta }=T^{\Delta }_{n}(W)=(t_{ij},\ 1\le j<i\le n)\). The shape of the pattern \(P^{\Delta }\) is defined as

$$\begin{aligned} \operatorname{sh} P^\Delta =&\operatorname{sh} T^\Delta _n(W)= (t_{n,n-1},t_{n-1,n-2},\dotsc,t_{21}). \end{aligned}$$

Our next goal is to relate the shape to ratios of partition functions. Let \(\varPi^{(r)}_{n}\) be the collection of r-tuples of non-intersecting nearest-neighbor lattice paths \(\pi_1,\ldots,\pi_r\) that start at positions (2,1),(3,2),…,(r+1,r), end at positions (n,n−1),(n−1,n−2),…,(n−r+1,n−r), and stay strictly below the diagonal in the matrix picture, i.e. never leave the set \(\{(i,j):1\le j<i\le n\}\). See Fig. 2. Naturally \(1\le r\le n/2\). Denote the partition sums by

$$\begin{aligned} z_r = \sum_{(\pi_1,\ldots,\pi_r)\in\varPi^{(r)}_{n}} \prod _{(i,j)\in\pi_1\cup\cdots\cup\pi_r} w_{ij}. \end{aligned}$$
(6.8)

The definition includes the case of a path consisting of a single point, which happens when n is even and r=n/2.

Fig. 2  A pair \((\pi_{1},\pi_{2})\in\varPi^{(2)}_{8}\), drawn in matrix representation (as opposed to Cartesian coordinates). On the upper left the paths begin at (2,1) and (3,2); on the lower right they end at (8,7) and (7,6). The diagonal (dashed line) runs from (1,1) to (8,8)

The next theorem states that the odd coordinates of the shape vector \(\operatorname{sh} P^{\Delta }\) are given by ratios of partition functions.

Theorem 6.3

Consider a triangular array \(W=(w_{ij},\ 1\leq j<i\leq n)\in({\mathbb{R}}_{>0})^{n(n-1)/2}\), the output pattern \(P^{\Delta }=T^{\Delta }_{n}(W)=(t_{ij},\ 1\leq j<i\leq n)\) and the partition functions \(z_r\), r=1,2,…,⌊n/2⌋, as defined in (6.8). Then

$$\begin{aligned} & (t_{n,n-1}, t_{n-2,n-3},\dotsc,t_{n-2\lfloor{n/2}\rfloor +2,n-2\lfloor{n/2}\rfloor+1})\\ &\quad =(z_1, z_2/z_1, \dotsc, z_{\lfloor{n/2}\rfloor}/z_{\lfloor {n/2}\rfloor-1}). \end{aligned}$$
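For n=4 the theorem can be verified by brute force, enumerating the path systems in \(\varPi^{(r)}_{4}\) directly and comparing with the closed form of \(T^{\Delta}_4\) computed earlier. A sketch (the indexing below is our own transcription):

```python
import random
from itertools import product

n = 4
w = {(i, j): random.uniform(0.5, 2.0) for i in range(2, n+1) for j in range(1, i)}

def paths(p, end):
    # nearest-neighbour paths from p to end staying strictly below the diagonal
    if p == end:
        return [[p]]
    i, j = p
    out = []
    for q in [(i+1, j), (i, j+1)]:
        if q[0] <= end[0] and q[1] <= end[1] and q[1] < q[0]:
            out += [[p] + rest for rest in paths(q, end)]
    return out

def z(r):
    # partition function (6.8): r-tuples of pairwise disjoint paths,
    # with pi_k running from (k+1, k) to (n-k+1, n-k)
    total = 0.0
    for tup in product(*[paths((k+1, k), (n-k+1, n-k)) for k in range(1, r+1)]):
        cells = [c for p in tup for c in p]
        if len(set(cells)) == len(cells):           # non-intersecting
            weight = 1.0
            for c in cells:
                weight *= w[c]
            total += weight
    return total

x21, x31, x32, x41, x42, x43 = (w[k] for k in
                                [(2, 1), (3, 1), (3, 2), (4, 1), (4, 2), (4, 3)])
t43 = x21*x31*x42*x43*(x32 + x41)   # from the closed form of T^Delta_4
t21 = x32*x41/(x32 + x41)
print(t43, z(1))         # t_{43} = z_1
print(t21, z(2)/z(1))    # t_{21} = z_2/z_1
```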

The proof of this theorem will be presented after Proposition 6.4 below. Define an operator \(\varLambda^{\varepsilon }_{n}\) acting on n×n matrices W by

$$\begin{array}{l@{\quad}l} w_{ij} \mapsto \varepsilon w_{ij}, & i\neq j,\\ w_{n-2i,n-2i}\mapsto 2 \varepsilon ^2 w_{n-2i,n-2i}, & i=0,1,\dotsc,\lfloor{n/2}\rfloor-1,\\ w_{n-2i+1,n-2i+1} \mapsto \frac{1}{2} w_{n-2i+1,n-2i+1}, & i=1,2,\dotsc,\lfloor{n/2}\rfloor-1, \end{array} $$

and \(w_{11}\mapsto\frac{1}{2}w_{11}\) if n is even, while \(w_{11}\mapsto\varepsilon w_{11}\) if n is odd.

Let \(W_{n}^{\Delta }=(w_{ij}^{\Delta ,n},\ 1\leq j< i\leq n)\) be a given triangular array. Let \(W_{n}^{\varepsilon}\) be the symmetric n×n matrix with \(w_{ii}^{\varepsilon }=\varepsilon \) for \(1\le i\le n\) and \(w_{ij}^{\varepsilon }=w_{ij}^{\Delta ,n}\) for \(1\le j<i\le n\). Finally, denote by \(T^{\diamond }_{n}(W_{n}^{\Delta })=(w^{\diamond ,n}_{ij},\ 1\leq i,j\leq n)\) a symmetric n×n output matrix whose lower triangular part \((t_{ij},\ 1\le j<i\le n)\) agrees with the output array \(T^{\Delta }_{n}(W^{\Delta }_{n})\), while the diagonal elements \((t_{ii})_{i=1,\ldots,n}\) are determined by

$$ \begin{aligned} &t_{n-2k,n-2k}=t_{n-2k-1,n-2k-1}=t_{n-2k,n-2k-1} \quad\text{for} \ k=0,1,\dotsc \quad \text{and} \\ &t_{11}=1 \quad\mathrm{if}\ n\ \mathrm{is}\ \mathrm{odd}. \end{aligned} $$
(6.9)

Proposition 6.4

Let \(T^{n,n}\) be the geometric RSK mapping on n×n matrices with positive entries, defined in (3.3), and \(W^{\varepsilon }_{n},\,\varLambda ^{\varepsilon }_{n},\, T^{\diamond }_{n},\,W^{\Delta }_{n}\) as above. Then, as ε↘0,

$$\begin{aligned} T^{n,n}\bigl(W^\varepsilon _n\bigr)= \varLambda^\varepsilon _n\circ T^\diamond _n \bigl(W^\Delta _n\bigr)+ S_n^\varepsilon \end{aligned}$$
(6.10)

where \(S_{n}^{\varepsilon }\) is an n×n matrix of lower order terms, specifically

$$ \begin{aligned} & \bigl(S_n^\varepsilon \bigr)_{ij}=o(\varepsilon ) , \qquad \qquad \qquad i\ne j, \\ & \bigl(S_n^\varepsilon \bigr)_{n-2i,n-2i}= o\bigl( \varepsilon ^2\bigr), \qquad\ \ i=0,1,\dotsc,\lfloor {n/2}\rfloor-1, \\ & \bigl(S_n^\varepsilon \bigr)_{n-2i+1,n-2i+1}= o(1), \quad i=1,2,\dotsc,\lfloor {n/2}\rfloor-1, \\ & \bigl(S_n^\varepsilon \bigr)_{1,1}= \begin{cases} o(\varepsilon ), &n\textit{ is odd}, \\ o(1),& n\textit{ is even}. \end{cases} \end{aligned} $$
(6.11)

Proof

From (5.3) we have this recursion:

$$ T^{n,n}\bigl(W^\varepsilon _n\bigr) = \rho^n_n\circ\bigl(\rho^n_{n-1}\circ \dotsm\circ \rho^n_1\bigr) \begin{pmatrix} \left[ R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W^\varepsilon _{n-1}) \\ w_{n1}\ \ldots\ w_{n,n-1} \end{pmatrix} \right]^t \, \\ w_{1n}\ \ldots\ w_{n-1,n}\ \ \varepsilon \end{pmatrix} . $$
(6.12)

Symmetry of \(W^{\varepsilon }_{n}\) makes \(T^{n,n}(W^{\varepsilon }_{n})\) also symmetric. Since \(\rho^{n}_{n}\) alters only diagonal elements, the matrix must be symmetric just before the last application of \(\rho^{n}_{n}\). The mappings \(\rho^{n}_{n-1}\circ\dotsm\circ\rho^{n}_{1}\) alter only entries strictly below the diagonal. Consequently we can skip the steps \(\rho^{n}_{n-1}\circ\dotsm\circ\rho^{n}_{1}\) if we simply take the upper triangular part of the matrix just before and extend it to a symmetric matrix. We insert one extra transposition and then keep the lower triangular instead of the upper triangular part. In other words, let

$$\begin{aligned} W'=R^{n,n-1}_n \begin{pmatrix} T^{n-1,n-1}(W^\varepsilon _{n-1}) \\ w_{n1}\ \ldots\ w_{n,n-1} \end{pmatrix} \quad \text{(an $n\times(n-1)$ matrix)} \end{aligned}$$

and define the symmetric matrix \(\tilde{W}=\{\tilde{w}_{ij},\, 1\leq i,j\leq n\}\) by \(\tilde{w}_{ij}=w'_{ij}\) for 1≤ji∧(n−1) and \(\tilde{w}_{nn} =\varepsilon \). Then \(T^{n,n}(W^{\varepsilon }_{n}) =\rho ^{n}_{n}(\tilde{W})\). In particular, the part of \(T^{n,n}(W^{\varepsilon }_{n})\) strictly below the diagonal is already present in W′.

We prove (6.10) by induction on n. Case n=2 begins with \(W_{2}^{\Delta }=(w_{21})\), from which

$$\varLambda^\varepsilon _2\circ T^\diamond _2 \bigl(W^\Delta _2\bigr)= \begin{pmatrix} \frac{1}{2}w_{21} & \varepsilon w_{21}\\ \varepsilon w_{21} & 2\varepsilon ^2 w_{21} \end{pmatrix} = T^{2,2} \begin{pmatrix} \varepsilon & w_{21}\\ w_{21} & \varepsilon \end{pmatrix} = T^{2,2} \bigl(W^\varepsilon _{2}\bigr). $$

Assume that

$$T^{n-1,n-1}\bigl(W^\varepsilon _{n-1}\bigr)= \varLambda^\varepsilon _{n-1}\circ T^\diamond _{n-1} \bigl(W^\Delta _{n-1}\bigr) +S_{n-1}^\varepsilon . $$

Abbreviate \(T^{\varepsilon }=(t^{\varepsilon }_{ij},\,1\leq i,j\leq n-1)=T^{n-1,n-1}(W^{\varepsilon }_{n-1})\) so that the induction assumption reads:

$$\begin{aligned} t^\varepsilon _{ij} =&\varepsilon w^{\diamond ,n-1}_{ij}+o( \varepsilon ),\quad i\neq j,\\ t^\varepsilon _{n-2i-1,n-2i-1} =&2\varepsilon ^2 w^{\diamond ,n-1}_{n-2i-1,n-2i-1}+o \bigl(\varepsilon ^2\bigr),\quad i=0,1,\dotsc, \\ t^\varepsilon _{n-2i,n-2i} =& \frac{1}{2}w^{\diamond ,n-1}_{n-2i,n-2i}+o(1), \quad i=1,\dotsc, \\ t^\varepsilon _{11} =&\left\{\begin{array}{l@{\quad}l} \varepsilon w^{\diamond ,n-1}_{11}+o( \varepsilon ),& \text{if}\ (n-1)\ \text{is odd},\\ \frac{1}{2}w^{\diamond ,n-1}_{11}+o(1), & \text{if}\ (n-1) \ \text{is even}. \end{array}\right. \end{aligned}$$

We now perform the mapping

$$W' = \rho^n_{n-1}\circ\cdots\circ \rho^n_1 \begin{pmatrix} T^{n-1,n-1}(W^\varepsilon _{n-1}) \\ w_{n1}\ \ldots\ w_{n,n-1} \end{pmatrix} $$

inductively. Assume that we have applied the transformations

$$\rho^n_{k-1}\circ\cdots\circ\rho^n_1, \quad k<n-1, $$

and this has resulted in output entries

$$w'_{ij}=\varepsilon w^{\diamond ,n}_{ij}+o( \varepsilon ), \quad1\leq j < k-(n-i),\, \, n-k+1< i\leq n, $$

where \(w^{\diamond ,n}_{ij}\) denotes the entries of the matrix \(T^{\diamond }_{n}(W^{\Delta }_{n})\) (recall that the lower triangular part of \(T^{\diamond }_{n}(W^{\Delta }_{n})\) is identical to \(T^{\Delta }_{n}(W^{\Delta }_{n})\)). This is readily checked when k−1=1. We will show that this is also true after the transformation \(\rho^{n}_{k}\). To this end, using the relations (3.6) and (3.5), we have that

$$\begin{aligned} w'_{nk} =&w_{nk}\bigl(w'_{n,k-1}+t^\varepsilon _{n-1,k} \bigr)=\varepsilon w_{nk} \bigl(w^{\diamond ,n}_{n,k-1}+w^{\diamond ,n-1}_{n-1,k} \bigr) +o(\varepsilon )\\ =&\varepsilon w^{\diamond ,n}_{n,k}+o(\varepsilon ),\\ w'_{n-j,k-j} =&\frac{w'_{n+1-j,k-j}\,t^\varepsilon _{n-j,k-j+1}}{t^\varepsilon _{n-j,k-j}} \frac{w'_{n-j,k-j-1}+t^\varepsilon _{n-j-1,k-j}}{w'_{n+1-j,k-j}+t^\varepsilon _{n-j,k-j+1}}\\ =& \frac{(\varepsilon w^{\diamond ,n}_{n+1-j,k-j}+o(\varepsilon ))\,(\varepsilon w^{\diamond ,n-1}_{n-j,k-j+1}+o(\varepsilon ))}{\varepsilon w^{\diamond ,n-1}_{n-j,k-j}+o(\varepsilon )}\\ &{}\times \frac{\varepsilon (w^{\diamond ,n}_{n-j,k-j-1}+w^{\diamond ,n-1}_{n-j-1,k-j})+o(\varepsilon )}{\varepsilon (w^{\diamond ,n}_{n+1-j,k-j}+w^{\diamond ,n-1}_{n-j,k-j+1})+o(\varepsilon )}\\ =& \varepsilon \frac{w^{\diamond ,n}_{n+1-j,k-j}\,w^{\diamond ,n-1}_{n-j,k-j+1}}{w^{\diamond ,n-1}_{n-j,k-j}}\,\, \,\, \frac{w^{\diamond ,n}_{n-j,k-j-1}+w^{\diamond ,n-1}_{n-j-1,k-j}}{w^{\diamond ,n}_{n+1-j,k-j}+w^{\diamond ,n-1}_{n-j,k-j+1}}+o(\varepsilon ) \\ =& \varepsilon w^{\diamond ,n}_{n-j,k-j}+o(\varepsilon ), \end{aligned}$$

and this verifies the proposition for the above entries. The next step is to confirm that \(w'_{n-j,n-j-1}=\varepsilon w^{\diamond ,n}_{n-j,n-j-1}+o(\varepsilon )\) for j=0,…,n−2. To this end, assume that we have performed the transformations \(\rho^{n}_{n-2}\circ\cdots\circ\rho^{n}_{1}\) and then we operate with \(\rho^{n}_{n-1}\). First for j=0,

$$\begin{aligned} w'_{n,n-1} =&w_{n,n-1}\bigl(w'_{n,n-2}+t^\varepsilon _{n-1,n-1} \bigr) \\ =& w_{n,n-1}\bigl(\varepsilon w^{\diamond ,n}_{n,n-2}+2 \varepsilon ^2 w^{\diamond ,n-1}_{n-1,n-1}+o(\varepsilon )\bigr) \\ =& \varepsilon w_{n,n-1}w^{\diamond ,n}_{n,n-2}+o(\varepsilon ) \\ =& \varepsilon w^{\diamond ,n}_{n,n-1}+o(\varepsilon ). \end{aligned}$$

For j>0

$$\begin{aligned} w'_{n-j,n-j-1}= \frac{w'_{n+1-j,n-j-1}\,t^\varepsilon _{n-j,n-j}}{t^\varepsilon _{n-j,n-j-1}}\, \frac{w'_{n-j,n-j-2}+t^\varepsilon _{n-j-1,n-j-1}}{w'_{n+1-j,n-j-1}+t^\varepsilon _{n-j,n-j}}. \end{aligned}$$

To develop this further we distinguish between odd and even j. For even j,

$$\begin{aligned} w'_{n-j,n-j-1} =&\frac{(\varepsilon w^{\diamond ,n}_{n+1-j,n-j-1}+o(\varepsilon ) )\, \, (\frac{1}{2} w^{\diamond ,n-1}_{n-j,n-j}+o(1)) }{\varepsilon w^{\diamond ,n-1}_{n-j,n-j-1}+o(\varepsilon )} \\ &{}\times\frac{\varepsilon w^{\diamond ,n}_{n-j,n-j-2}+o(\varepsilon ) + 2\varepsilon ^2 w^{\diamond ,n-1}_{n-j-1,n-j-1}+o(\varepsilon ^2)}{\varepsilon w^{\diamond ,n}_{n+1-j,n-j-1}+o(\varepsilon )+\frac{1}{2} w^{\diamond ,n-1}_{n-j,n-j}+o(1)} \\ =&\varepsilon \frac{w^{\diamond ,n}_{n+1-j,n-j-1}\,\,w^{\diamond ,n}_{n-j,n-j-2}}{w^{\diamond ,n-1}_{n-j,n-j-1}}+o(\varepsilon ) \\ =& \varepsilon w^{\diamond ,n}_{n-j,n-j-1}+o(\varepsilon ) \end{aligned}$$

where the last step came from (6.5). In the odd case

$$\begin{aligned} w'_{n-j,n-j-1} =&\frac{(\varepsilon w^{\diamond ,n}_{n+1-j,n-j-1}+o(\varepsilon ) )\, \, (2\varepsilon ^2 w^{\diamond ,n-1}_{n-j,n-j}+o(\varepsilon ^2)) }{\varepsilon w^{\diamond ,n-1}_{n-j,n-j-1}+o(\varepsilon )} \\ &{}\times\frac{\varepsilon w^{\diamond ,n}_{n-j,n-j-2}+o(\varepsilon ) + \frac {1}{2} w^{\diamond ,n-1}_{n-j-1,n-j-1}+o(1)}{\varepsilon w^{\diamond ,n}_{n+1-j,n-j-1}+o(\varepsilon )+2\varepsilon ^2 w^{\diamond ,n-1}_{n-j,n-j}+o(\varepsilon ^2)} \\ =&\varepsilon \frac{w^{\diamond ,n-1}_{n-j,n-j}\,\,w^{\diamond ,n-1}_{n-j-1,n-j-1}}{w^{\diamond ,n-1}_{n-j,n-j-1}}+o(\varepsilon ) \\ =& \varepsilon w^{\diamond ,n-1}_{n-j,n-j-1}+o(\varepsilon ) = \varepsilon w^{\diamond ,n}_{n-j,n-j-1}+o(\varepsilon ). \end{aligned}$$

The second last equality follows from the fact that \(T^{\diamond }_{n-1}(W_{n-1}^{\Delta })\) satisfies (6.9) with n replaced by n−1. The last equality comes from the definition of \(b^{\Delta ,n}_{n-j,n-j-1}\) as the identity mapping (see the second bullet below (6.5)). In the case (n−j,n−j−1)=(2,1) we need to distinguish between the cases n even and n odd. In the even case we have

$$\begin{aligned} w'_{21} =&t^\varepsilon _{11} \frac{w'_{31}t^\varepsilon _{22}}{t^\varepsilon _{21}(w'_{31}+t^\varepsilon _{22})} \\ =&\bigl(\varepsilon w^{\diamond ,{n-1}}_{11}+o(\varepsilon )\bigr) \frac{(\varepsilon w^{\diamond ,n}_{31}+o(\varepsilon ))(\frac{1}{2}w^{\diamond ,n-1}_{22}+o(1))}{(\varepsilon w^{\diamond ,n-1}_{21}+o(\varepsilon ))(\varepsilon w^{\diamond ,n}_{31}+o(\varepsilon )+\frac {1}{2}w^{\diamond ,n-1}_{22}+o(1) ) } \\ =& \varepsilon w^{\diamond ,{n-1}}_{11} \frac{ w^{\diamond ,n}_{31}}{w^{\diamond ,n-1}_{21}}+o(\varepsilon )= \varepsilon \frac{ w^{\diamond ,n}_{31}}{w^{\diamond ,n-1}_{21}}+o(\varepsilon ) =\varepsilon w^{\diamond ,n}_{21}+o( \varepsilon ), \end{aligned}$$

where the second to last equality follows from (6.9), since (n−1) is odd and therefore \(w^{\diamond ,{n-1}}_{11} =1\). The case that n is odd follows similarly.

To complete the construction of \(T^{n,n}(W^{\varepsilon }_{n})\), extend W′ to the symmetric matrix \(\tilde{W}\) as explained above and define \(W'' =\rho^{n}_{n}(\tilde{W})\). By computations similar to the ones above and by symmetry, the diagonal elements \((w''_{ii})_{i=1,\dotsc,n}\) satisfy \(w''_{n-2k,n-2k}=2\varepsilon ^{2} w^{\diamond ,n}_{n-2k,n-2k-1}+o(\varepsilon^2)\) and \(w''_{n-2k-1,n-2k-1}=\frac{1}{2}w^{\diamond ,n}_{n-2k,n-2k-1}+o(1)\) for k=0,1,…. The proof is then complete. □

Proof of Theorem 6.3

Consider a symmetric n×n matrix \(W^{\varepsilon }_{n}\) with diagonal weights \(w_{ii}=\varepsilon\), i=1,2,…,n. Let \(v_r\) denote the partition sum introduced in (3.10) with k=m=n:

$$ v_r = \sum_{(\pi_1,\ldots,\pi_r)\in\varPi ^{(r)}_{n,n}} \prod _{(i,j)\in\pi_1\cup\cdots\cup\pi_r} w_{ij}. $$
(6.13)

The key observation is the following. For 1≤k≤⌊n/2⌋,

$$ \begin{aligned} v_{2k}= \Biggl( \prod _{i=1}^k w_{ii} w_{n-i+1, n-i+1} \Biggr) z_k^2 + V(2k+1) \end{aligned} $$
(6.14)

and

$$ \begin{aligned} v_{2k-1}= \Biggl( \prod _{i=1}^k w_{ii} w_{n-i+1, n-i+1} \Biggr) 2z_{k-1}z_k + V(2k+1) \end{aligned} $$
(6.15)

where \(z_0=1\), \(z_r\) is defined by (6.8), and the unspecified notation \(V(\ell)\) represents any sum of products of weights in which each term contains at least \(\ell\) diagonal weights \(w_{ii}\).

To see the origin of (6.14)–(6.15), consider first \(v_1\), the sum of products \(\prod_{(i,j)\in\pi} w_{ij}\) over all paths π from (1,1) to (n,n). Those products that contain only the weights \(w_{11}w_{nn}\) from the diagonal correspond to paths that stay either strictly above or strictly below the diagonal, except at the points (1,1) and (n,n). By the symmetry of the weights this gives two copies of \(z_1\). Similarly for \(v_2\): pairs \((\pi_1,\pi_2)\) that intersect the diagonal only at {(1,1),(n,n)} correspond to pairs such that \(\pi_2\) connects (1,2) to (n−1,n) above the diagonal and \(\pi_1\) connects (2,1) to (n,n−1) below the diagonal. Weights of paths are multiplied, and so symmetry gives \(z_{1}^{2}\). The higher cases work the same way.

For the symmetric weight matrix the shape vector \(x=(x_1,\ldots,x_n)\) is given by

$$ x_1=v_1, \qquad x_i= z_{n,i}= \frac{v_i}{v_{i-1}} \quad\text{for }2\le i\le n. $$
(6.16)

Here we recalled that the shape vector is the bottom row \(z_n\) of the P pattern, see (2.8), and combined (3.10) with (4.3).

Since \(w_{ii}=\varepsilon\), (6.14)–(6.16) combine to give the following asymptotics for k=1,2,…,⌊n/2⌋ as ε↘0:

$$\begin{aligned} x_{1} =&v_1=2\varepsilon ^2 z_1+o\bigl(\varepsilon ^2\bigr),\\ x_{2k} =&\frac{v_{2k}}{v_{2k-1}}=\frac{\varepsilon ^{2k} z_k^2+o(\varepsilon ^{2k})}{\varepsilon ^{2k}\,\,2z_{k-1}z_k+o(\varepsilon ^{2k})} =\frac{1}{2} \frac{z_k}{z_{k-1}}+o(1),\\ x_{2k+1} =&\frac{v_{2k+1}}{v_{2k}}=\frac{\varepsilon ^{2(k+1)} 2z_kz_{k+1}+o(\varepsilon ^{2(k+1)})}{\varepsilon ^{2k}\,z_k^2+o(\varepsilon ^{2k})} =2 \varepsilon ^2\,\,\frac{z_{k+1}}{z_k}+o\bigl(\varepsilon ^2\bigr). \end{aligned}$$

The proof can be now completed by comparing to (6.10) and using (6.9). □

For a triangular array \(X=(x_{ij},\ 1\leq j< i\leq n)\in(\mathbb{R}_{>0})^{n(n-1)/2}\) we define its type, \(\tau=(\tau^{n}_{j})_{1\le j\le n-1}=\text{type}\,X\), as the vector with entries

$$\begin{aligned} \tau^n_{j}(X)=\frac{D_{nj}(X)}{D_{n,j-1}(X)} \end{aligned}$$

where

$$\begin{aligned} D_{n0}(X) =&1\quad \text{and}\\ D_{nj}(X) =&x_{nj}x_{n-1,j-1}\cdots x_{n-j+1,1} ,\quad j=1,2,\dotsc,n-1. \end{aligned}$$

Proposition 6.5

Let \(W_n=(w_{ij},\ 1\le j<i\le n)\) with \(w_{ij}\in\mathbb{R}_{>0}\). We have

$$\begin{aligned} \tau^n_{j}\bigl(T^\Delta _n(W_n) \bigr)= \prod_{\ell=1}^{j-1} w_{j,\ell} \prod_{k=j+1}^{n} w_{kj},\quad1 \leq j\leq n-1. \end{aligned}$$
(6.17)

Proof

Let us first notice that if \(X=(x_{ij},\ 1\le j<i\le n)\) is a triangular array and \(X'=\rho^{\Delta ,n}_{j}(X)\), then

$$\begin{aligned} \frac{x'_{nj}\cdots x'_{n-j+1,1}}{x'_{n,j-1}\cdots x'_{n-j+2,1}}=x_{nj}\, \frac{x_{n-1,j}\cdots x_{n-j,1}}{x_{n-1,j-1}\cdots x_{n-j+1,1}},\quad j< n-1. \end{aligned}$$
(6.18)

To check this we notice that \(\rho^{\Delta ,n}_{j}=\rho^{n}_{j}=h_{j}\circ r_{j}\), where \(h_j\) and \(r_j\) are defined in (3.6) via the Bender-Knuth transformations. Let us recall that

$$\begin{aligned} \bigl(b_{ij}(X)\bigr)_{ij}=x'_{ij}= \frac{x_{i+1,j}\,x_{i,j+1}}{x_{ij}}\frac {x_{i,j-1}+x_{i-1,j}}{x_{i+1,j}+x_{i,j+1}}, \end{aligned}$$
(6.19)

with the same convention as in (3.5). Multiplying the various relations (6.19) for (i,j)=(n,j),…,(nj+1,1) leads to (6.18). Iterating this leads to

$$\begin{aligned} \tau^n_{j}\bigl(T^\Delta _n(W_n) \bigr) =&w_{n,j} \tau^{n-1}_{j} \bigl(T^\Delta _{n-1}(W_{n-1})\bigr) \\ =&w_{n,j}\cdots w_{j+2,j}\tau^{j+1}_{j} \bigl(T^\Delta _{j+1}(W_{j+1})\bigr). \end{aligned}$$
(6.20)

Denoting by \(w'_{ij}\) the elements of \(T^{\Delta }_{j+1}(W_{j+1})\), by \(w^{\dagger}_{ij}\) the elements of \(T^{\Delta }_{j}(W_{j})\) and using the transformations in (6.5) we obtain that

$$\begin{aligned} \tau^{j+1}_{j}\bigl(T^\Delta _{j+1}(W_{j+1}) \bigr) =&\frac{w'_{j+1,j}\cdots w'_{21}}{w'_{j+1,j-1}\cdots w'_{31}} \\ =&w_{j+1,j}\,\,\frac{\prod_{\ell=0}^{\lfloor{j/2}\rfloor-1} w^\dagger_{j-2\ell,j-2\ell-1} }{\prod_{\ell=0}^{\lfloor {(j-1)/2}\rfloor-1}w^\dagger_{j-2\ell-1,j-2\ell-2}}. \end{aligned}$$

Using Theorem 6.3 we have that

$$\prod_{\ell=0}^{\lfloor{j/2}\rfloor-1}w^\dagger_{j-2\ell,j-2\ell -1} =\prod_{1\leq\ell<k \leq j}w_{k\ell}. $$

The definition below (6.5) implies that

$$w^\dagger_{j-2\ell -1,j-2\ell-2} =T^\Delta _{j-1}(W_{j-1})_{(j-1)-2\ell,(j-1)-2\ell-1} $$

for \(\ell=0,\ldots,\lfloor(j-1)/2\rfloor-1\), and using again Theorem 6.3 we obtain

$$\prod_{\ell=0}^{\lfloor{(j-1)/2}\rfloor-1} w^\dagger_{j-2\ell -1,j-2\ell-2} = \prod_{1\leq\ell< k \leq j-1} w_{k\ell}. $$

Combining the last three relations gives

$$\tau^{j+1}_{j}\bigl(T^\Delta _{j+1}(W_{j+1}) \bigr)= w_{j+1,j} \prod_{1\leq\ell< j} w_{j\ell}, $$

and this completes the proof. □
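For n=4 the formula (6.17) can again be checked against the worked example: the closed form of \(T^{\Delta}_4\) gives \(\tau^4_1=w_{21}w_{31}w_{41}\), \(\tau^4_2=w_{21}w_{32}w_{42}\) and \(\tau^4_3=w_{31}w_{32}w_{43}\). A short numerical confirmation (ours):

```python
import random

x21, x31, x32, x41, x42, x43 = (random.uniform(0.5, 2.0) for _ in range(6))

# closed form of T^Delta_4 from the worked example of this section
t21 = x32*x41/(x32 + x41)
t31, t32 = x21*t21, x21*x31*x32
t41, t42 = x21*x31*x41, x21*x31*x42*(x32 + x41)
t43 = t42*x43

D = [1.0, t41, t42*t31, t43*t32*t21]                # D_{40}, D_{41}, D_{42}, D_{43}
tau = [D[j]/D[j-1] for j in (1, 2, 3)]
expected = [x21*x31*x41, x21*x32*x42, x31*x32*x43]  # right-hand side of (6.17)
for a, b in zip(tau, expected):
    print(a, b)   # each pair agrees
```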

By combining Theorems 6.1 and 6.2 and Proposition 6.5 we identify the probability distribution of the shape vector of the triangular array under inverse gamma weights. The mapping that gives the shape vector is \(\sigma^{\Delta }: ({\mathbb{R}}_{>0})^{n(n-1)/2} \to({\mathbb{R}}_{>0})^{n-1}\) defined by

$$\begin{aligned} \sigma^\Delta (W)=\operatorname{sh}T^\Delta _n(W) =&(t_{n,n-1},t_{n-1,n-2}, \ldots, t_{2,1} ). \end{aligned}$$
(6.21)

Consider the probability measure

$$\begin{aligned} \lambda_{\alpha}(dw) = Z_\alpha^{-1} \prod_{1\leq j<i\leq n} w_{ij}^{-\alpha_i-\alpha_j} \exp \biggl(-\sum_{1\leq j<i\leq n} \frac{1}{w_{ij}} \biggr) \prod_{1\leq j< i\leq n} \frac{dw_{ij}}{w_{ij}} \end{aligned}$$
(6.22)

on the space of triangular arrays \((w_{ij},\ 1\le j<i\le n)\in({\mathbb{R}}_{>0})^{n(n-1)/2}\), where \(\alpha=(\alpha_1,\ldots,\alpha_n)\) with \(\alpha_i+\alpha_j>0\), and the normalization is

$$Z_\alpha= \prod_{1\le j<i\le n} \varGamma( \alpha_i+\alpha_j). $$

Corollary 6.6

For the λ α -distributed triangular array of weights, the distribution of the shape vector is given by

$$\begin{aligned} &\lambda_{\alpha}\circ\bigl(\sigma^\Delta \bigr)^{-1}(dt)\\ &\quad = \prod_{1\le j<i\le n} \varGamma(\alpha_i+ \alpha_j)^{-1} \biggl(\frac{\prod_{\ell=0}^{\lfloor{\frac{n-1}{2}}\rfloor-1 }t_{n-2\ell-1,n-2\ell-2}}{\prod_{\ell=0}^{\lfloor{\frac {n}{2}}\rfloor-1}t_{n-2\ell,n-2\ell-1} } \biggr)^{\alpha_n}\\ &\qquad {}\times e^{-\, \frac{1}{t_{2,1}}} \varPsi^{n-1}_{\alpha'}(t) \prod_{0\leq i \leq n-2} \frac{dt_{n-i,n-i-1}}{t_{n-i,n-i-1}} \end{aligned}$$

where \(\alpha'=(\alpha_1,\ldots,\alpha_{n-1})\).

Proof

Let \(T=(t_{ij},\ 1\le j<i\le n)=T^{\Delta }_{n}(W)\). We convert the density (6.22) into the \(t_{ij}\) variables. By Proposition 6.5,

$$\begin{aligned} \prod_{j<i} w_{ij}^{-\alpha_i-\alpha_j} =& \prod_{j=1}^{n} \Biggl(\prod _{\ell=1}^{j-1} w_{j\ell} \cdot \prod _{k=j+1}^n w_{kj} \Biggr)^{-\alpha_j} = \prod_{j=1}^{n-1} \bigl(\tau^n_j\bigr)^{-\alpha_j} \cdot \Biggl(\prod _{j=1}^{n-1} w_{nj} \Biggr)^{-\alpha_n}. \end{aligned}$$

From the proof of Proposition 6.5 (after relation (6.20)),

$$\begin{aligned} \prod_{\ell=0}^{\lfloor{n/2}\rfloor-1} t_{n-2\ell,n-2\ell -1} =& \prod_{1\leq j<i\leq n} w_{ij}\quad \text{and} \\ \prod_{\ell=0}^{\lfloor{(n-1)/2}\rfloor-1} t_{n-2\ell-1,n-2\ell-2} =& \prod_{1\leq j<i\leq n-1} w_{ij}. \end{aligned}$$

Combine these with Theorem 6.1 to obtain

$$\begin{aligned} & \prod_{j<i} w_{ij}^{-\alpha_i-\alpha_j} \exp \biggl(-\sum_{j<i} \frac{1}{w_{ij}} \biggr)\\ &\quad = \biggl(\frac{\prod_{\ell=0}^{\lfloor{\frac{n-1}{2}}\rfloor-1 }t_{n-2\ell-1,n-2\ell-2}}{\prod_{\ell=0}^{\lfloor{\frac {n}{2}}\rfloor-1}t_{n-2\ell,n-2\ell-1} } \biggr)^{\alpha_n} \,\, \prod _{j=1}^{n-1}\bigl(\tau^n_j \bigr)^{-\alpha_j} \,\,e^{-\mathcal{E^\Delta }(T)}. \end{aligned}$$

By the volume preserving property of the map \(W\mapsto T\) (Theorem 6.2),

$$\begin{aligned} \lambda_{\alpha}\circ\bigl(T^\Delta \bigr)^{-1}(dt) =& \biggl(\frac{\prod_{\ell=0}^{\lfloor{\frac{n-1}{2}}\rfloor-1 }t_{n-2\ell-1,n-2\ell-2}}{\prod_{\ell=0}^{\lfloor{\frac {n}{2}}\rfloor-1}t_{n-2\ell,n-2\ell-1} } \biggr)^{\alpha_n} \\ &{}\times \prod _{j=1}^{n-1}\bigl(\tau^n_j \bigr)^{-\alpha_j} \,\, e^{-\mathcal{E^\Delta }(T)} \prod_{j<i} \frac{dt_{ij}}{t_{ij}}. \end{aligned}$$

The result then follows by integrating over the variables \((t_{ij},\ 1\le j<i-1,\ 1\le i\le n)\) and the definition of the Whittaker function. □

As a further corollary we record the distribution of the vector \((z_1,z_2/z_1,\dotsc,z_{\lfloor n/2\rfloor}/z_{\lfloor n/2\rfloor-1})\) of ratios of the partition functions \(z_r\) defined by (6.8). The result comes by combining Corollary 6.6 with Theorem 6.3.

Corollary 6.7

Let the array of weights \((w_{ij},\ 1\le j<i\le n)\) have distribution \(\lambda_\alpha\) of (6.22), and as before \(\alpha=(\alpha_1,\ldots,\alpha_n)=(\alpha',\alpha_n)\). Then the distribution of the vector \((z_1,z_2/z_1,\dotsc,z_{\lfloor n/2\rfloor}/z_{\lfloor n/2\rfloor-1})\), with the partition functions \(z_r\) defined in (6.8), is given as follows in terms of the integral of a bounded Borel function φ:

$$\begin{aligned} &\int\varphi(z_1, z_2/z_1, \dotsc, z_{\lfloor{n/2}\rfloor }/z_{\lfloor{n/2}\rfloor-1})\, \lambda_\alpha(dw) \\ &\quad =\prod_{1\le j<i\le n} \!\!\varGamma(\alpha_i+ \alpha_j)^{-1} \int _{({\mathbb{R}}_{>0})^{\lfloor{\frac{n}{2}}\rfloor}} \prod _{0\le i\le n-2:\atop i \mathit{even}} \frac {dt_{n-i,n-i-1}}{t_{n-i,n-i-1}} \\ &\qquad {}\times\varphi( t_{n,n-1}, t_{n-2,n-3}, \dotsc, t_{n-2\lfloor{\frac {n}{2}}\rfloor+2, n-2\lfloor{\frac{n}{2}}\rfloor+1}) \\ &\qquad{}\times\int _{({\mathbb{R}}_{>0})^{\lceil{n/2}\rceil-1}} \, \biggl(\frac{\prod_{k=0}^{\lfloor{\frac{n-1}{2}}\rfloor-1 }t_{n-2k-1,n-2k-2}}{\prod_{k=0}^{\lfloor{\frac{n}{2}}\rfloor -1}t_{n-2k,n-2k-1} } \biggr)^{\alpha_n} e^{-\,\frac{1}{t_{2,1}}} \varPsi ^{n-1}_{\alpha'}(t) \\ &\qquad {}\times\prod _{1\le i\le n-2:\atop i\mathit{odd}} \frac {dt_{n-i,n-i-1}}{t_{n-i,n-i-1}}. \end{aligned}$$
(6.23)

The results above are related to those for symmetric weight matrices in several ways.

(i) Replace n with n−1 in Corollary 5.3 and consider a symmetric (n−1)×(n−1) weight matrix with distribution (5.10), and set \(\zeta=\alpha_n\). Let \(\sigma_1=t_{n-1,n-1}\) be the polymer partition function of the symmetric matrix, or equivalently, the front element of its shape vector. Then a comparison of (6.23) and (5.9) reveals that the distribution of the partition function \(z_1\) is identical to the distribution of \(2t_{n-1,n-1}\).

(ii) Corollary 6.6 can be obtained as the ζ→∞ limit of Corollary 5.3. Using the recursive structure (2.1) of Whittaker functions, namely \(\varPsi^{n}_{\alpha}= Q^{n,n-1}_{\alpha_{n}}\varPsi^{n-1}_{\alpha'}\), one can show that

$$\tilde{\nu}_{\alpha,\zeta }\circ\sigma^{-1} \Rightarrow \lambda_{\alpha}\circ\bigl(\sigma^\Delta \bigr)^{-1} \quad \text{as}\ \zeta \to\infty, $$

where “⇒” denotes weak convergence of probability measures. Under the measure \(\tilde{\nu}_{\alpha,\zeta }\) the diagonal element \(w_{ii}\) of the symmetric input matrix has probability distribution

$$\rho_{ii}(du)=\frac{ u^{-\alpha_i-\zeta } \, e^{-1/(2u)}}{2^{\alpha _i+\zeta }\varGamma(\alpha_i+\zeta ) } \cdot\frac{du}{u} \quad\mathrm{on}\ 0<u<\infty, $$

and hence its reciprocal \(w_{ii}^{-1}\) is twice a gamma variable with parameter \(\alpha_i+\zeta\). Consequently \(\zeta w_{ii}\to1/2\) almost everywhere as ζ→∞. Thus \(w_{ii}\) decays as \((1/2)\zeta^{-1}\). This corresponds to the appearance, in our proof, of triangular arrays with diagonal elements ε→0.

(iii) From a physical point of view, the limit ζ→∞, or equivalently ε→0, introduces a depinning effect on the polymer, which is responsible for the appearance of the hard wall phenomenon.

7 Whittaker integral identities

In this section, we recall three integral identities for Whittaker functions which were proved in the papers [43, 44], and explain how they are equivalent to (and in fact generalized by) those which have appeared naturally in the context of the present paper (Corollaries 3.5, 3.7 and 5.5). We first note that the functions \(W_{n,a}(y)\) introduced in Sect. 2 are denoted by \(W_{n,2a}(y)\) in the papers [43, 44]. The following identity was conjectured by Bump [15] and proved by Stade [43, Theorem 1.1].

Theorem 7.1

(Stade)

For \(s\in {\mathbb{C}}\), \(a,b\in {\mathbb{C}}^{n}\) with \(\sum_i a_i=\sum_i b_i=0\),

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^{n-1} } W_{n,a}(y) W_{n,b}(y) \prod_{j=1}^{n-1} (\pi y_j)^{2js} \bigl(2 y_j^{-j(n-j)}\bigr) \frac {dy_j}{y_j} \\ &\quad = \varGamma(ns)^{-1} \prod_{j,k} \varGamma(s+a_j+b_k) . \end{aligned}$$
(7.1)

This integral is associated, via the Rankin-Selberg method, with Archimedean L-factors of automorphic L-functions on \(\mathit{GL}(n,{\mathbb{R}})\times \mathit{GL}(n,{\mathbb{R}})\). Using (2.5), it is straightforward to see that this is equivalent to:

Theorem 7.2

(Stade)

Suppose r>0 and \(\lambda,\nu\in {\mathbb{C}}^{n}\). Then

$$ \int_{({\mathbb{R}}_{>0})^n } e^{-r x_1} \varPsi^n_{-\nu}(x) \varPsi^n_{-\lambda }(x) \prod_{i=1}^n \frac{dx_i}{x_i} = r^{-\sum_{i=1}^n (\nu_i+\lambda_i)} \prod_{ij}\varGamma(\nu _i+\lambda_j). $$
(7.2)

Indeed, if we let

$$a_j=\lambda _j-(1/n)\sum_i \lambda _i,\qquad b_j=\nu_j-(1/n)\sum _i\nu_i $$

and \(s=(1/n)\sum_i(\lambda_i+\nu_i)\) then, using (2.5) and (2.3), (7.1) becomes

$$\varGamma(ns) \int_{({\mathbb{R}}_{>0})^{n-1} } \varPsi^n_{-\nu}(x) \varPsi ^n_{-\lambda}(x) x_1^{-ns} 2^{n-1} \prod_{j=1}^{n-1} \frac{dy_j}{y_j} = \prod_{ij}\varGamma(\nu _i+\lambda_j), $$

where \(\pi y_{j} =\sqrt{x_{n-j+1}/x_{n-j}}\) for j=1,…,n−1. It is important to note here that we are regarding \(\varPsi^{n}_{-\nu}(x) \varPsi^{n}_{-\lambda}(x) x_{1}^{-ns}\) as a function of \(y_1,\ldots,y_{n-1}\).

$$\varGamma(ns)=\int_0^\infty x_1^{ns} e^{-x_1} \frac{dx_1}{x_1} $$

we can absorb this into the integral, changing variables from \(y_1,\ldots,y_{n-1},x_1\) to \(x_1,\ldots,x_n\), to obtain

$$\int_{({\mathbb{R}}_{>0})^n } e^{-x_1} \varPsi^n_{-\nu}(x) \varPsi^n_{-\lambda }(x)\prod_{i=1}^n \frac{dx_i}{x_i} = \prod_{ij}\varGamma( \nu_i+\lambda_j). $$

The identity (7.2) follows, using (2.2).
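For n=1 the identity reduces to the definition of the gamma function: reading the conventions of Sect. 2 as \(\varPsi^1_{-\lambda}(x)=x^{\lambda}\) (an assumption on our part), (7.2) becomes \(\int_0^\infty e^{-rx}x^{\nu+\lambda}\,dx/x=r^{-(\nu+\lambda)}\varGamma(\nu+\lambda)\). A numerical sanity check:

```python
import mpmath as mp

nu, lam, r = 0.8, 1.1, 2.0   # arbitrary values with nu + lam > 0 and r > 0

# n = 1, reading Psi^1_{-lam}(x) = x^lam  [assumption]
lhs = mp.quad(lambda x: mp.exp(-r*x) * x**nu * x**lam / x, [0, mp.inf])
rhs = r**(-(nu + lam)) * mp.gamma(nu + lam)
print(lhs, rhs)  # agree
```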

The second identity is a formula for the Mellin transform

$$\begin{aligned} N_{n,b,a}(s)&=\int_{({\mathbb{R}}_{>0})^{n-1} } W_{n,a}(y_1, \ldots,y_{n-1}) W_{n-1,b}(y_1,\ldots,y_{n-2}) \\ &\quad {}\times \prod_{j=1}^{n-1} (\pi y_j)^{2js} \bigl(2 y_j^{-j(n-j-1/2)}\bigr) \frac {dy_j}{y_j} , \end{aligned}$$

for \(s\in {\mathbb{C}}\), n≥3 and \(a\in {\mathbb{C}}^{n}\), \(b\in {\mathbb{C}}^{n-1}\) with \(\sum_i a_i=\sum_j b_j=0\). This integral is associated with Archimedean L-factors of automorphic L-functions on \(\mathit{GL}(n-1,{\mathbb{R}})\times \mathit{GL}(n,{\mathbb{R}})\). The following identity was conjectured by Bump [15] and proved by Stade [44, Theorem 3.4].

Theorem 7.3

(Stade)

$$N_{n,b,a}(s)=\prod_{i,j} \varGamma(s+a_i+b_j). $$

Now, for \(\lambda\in {\mathbb{C}}^{n}\) and r>0,

$$\varPsi^{n-1}_{\lambda;r}(x_1,\ldots,x_{n-1})=r^{\lambda_n} \varPsi ^n_\lambda(x_1,\ldots,x_{n-1},r). $$

Using this, and the relations (2.5) and (2.4), it is straightforward to see that Theorem 7.3 is equivalent to:

Theorem 7.4

(Stade)

Let r>0, \(\lambda\in {\mathbb{C}}^{n-1}\) and \(\nu\in {\mathbb{C}}^{n}\). Then

$$ \int_{({\mathbb{R}}_{>0})^{n-1} } \varPsi^{n-1}_{\nu;r}(x) \varPsi^{n-1}_\lambda (x) \prod_{i=1}^{n-1} \frac{dx_i}{x_i} = r^{-\sum_{i=1}^{n-1} (\nu_i+\lambda_i)} \prod_{ij} \varGamma(\nu _i+\lambda_j) . $$
(7.3)

The third identity is a formula for the Mellin transform

$$M_{n,a}(s) = \int_{({\mathbb{R}}_{>0})^{n-1} } W_{n,a}(y) \prod _{j=1}^{n-1} (\pi y_j)^{2s_j} \bigl(2 y_j^{-j(n-j)/2}\bigr) \frac {dy_j}{y_j} , $$

for particular values of \(s=(s_1,\ldots,s_{n-1})\) lying on a two-dimensional subspace of \({\mathbb{C}}^{n-1}\). This integral is associated with an Archimedean L-factor of an exterior square automorphic L-function on \(\mathit{GL}(n,{\mathbb{R}})\). The following identity was conjectured by Bump and Friedberg [16] and proved by Stade [44, Theorem 3.3].

Theorem 7.5

(Stade)

Let \(s_{1},s_{2}\in {\mathbb{C}}\) and \(a\in {\mathbb{C}}^{n}\) with \(\sum_i a_i=0\). Suppose that, for \(2<j\le n-1\), \(s_j=\epsilon(j)s_1+(j-\epsilon(j))s_2/2\), where ϵ(j)=1 if j is odd and 0 otherwise. Set \(s_n=\epsilon(n)s_1+(n-\epsilon(n))s_2/2\). Then for \(s=(s_1,\ldots,s_{n-1})\),

$$ \varGamma(s_n) M_{n,a}(s) = \prod _i \varGamma(s_1+a_i) \prod _{i<j} \varGamma (s_2+a_i+a_j). $$
(7.4)

In terms of the \(\varPsi^{n}_{\lambda}\), this is equivalent to the following identity, which is straightforward to verify using (2.5) and (2.2) as above.

Theorem 7.6

(Stade)

Suppose r>0, \(\lambda\in {\mathbb{C}}^{n}\) and \(\gamma\in {\mathbb{C}}\). Then

$$\begin{aligned} & \int_{({\mathbb{R}}_{>0})^n} f\bigl(x'\bigr)^\gamma e^{-r x_1} \varPsi^n_{-\lambda}(x) \prod _{i=1}^n \frac{dx_i}{x_i} \\ &\quad = c_n(r,\gamma) r^{-\sum_{i=1}^n\lambda_i} \prod _i \varGamma(\lambda_i+\gamma) \prod _{i<j}\varGamma(\lambda _i+\lambda_j) , \end{aligned}$$

where \(x'_{i}=1/x_{n-i+1}\), \(f(x)=\prod_{i} x_{i}^{(-1)^{i}}\) and

$$c_n(r,\gamma)= \begin{cases} 1 & \textit{if }n\textit{ is even},\\ r^{-\gamma} & \textit{if }n\textit{ is odd}. \end{cases} $$

Note that f(x′)=f(x) if n is even and f(x′)=1/f(x) if n is odd.

8 Tropicalization, last passage percolation and random matrices

The geometric RSK correspondence is a geometric lifting of the (Berenstein-Kirillov extension of the) RSK correspondence. Going the other way, let \(x^{\epsilon}_{ij}=e^{y_{ij}/\epsilon}\) where \(Y=(y_{ij})\in {\mathbb{R}}^{n\times m}\) and ϵ>0. Let \(X^{\epsilon}=(x^{\epsilon}_{ij})\) and \(T^{\epsilon}=(t^{\epsilon}_{ij})=T(X^{\epsilon})\). Then the mapping \(U:{\mathbb{R}}^{n\times m}\to {\mathbb{R}}^{n\times m}\) defined by \(U(Y)=(u_{ij})\) where \(u_{ij}=\lim_{\epsilon\to0} \epsilon\log t^{\epsilon}_{ij}\) is the extension of the RSK mapping to matrices with real entries introduced by Berenstein and Kirillov [10]. We identify the output U(Y) with a pair of patterns as before, but now the entries are allowed to take real values. In this context, we define a real pattern of height h and shape \(x\in {\mathbb{R}}^{n}\) as an array of real numbers \(R=(r_{ij},\ (i,j)\in L(n,h))\), with rows \(r^i=(r_{ij},\ 1\le j\le i\wedge n)\) and bottom row \(r^h=x\). The range of indices is

$$L(n,h)=\bigl\{(i,j): 1\le i\le h,\ 1\le j\le i\wedge n\bigr\}. $$

Fix a real pattern R as above. Set \(s_0=0\) and, for \(1\le i\le h\), \(s_{i}=\sum_{j=1}^{i\wedge n} r_{ij}\) and \(c_i=s_i-s_{i-1}\). We shall refer to c as the type of R and write c=type(R). Denote by \(\varSigma^h(x)\) the set of real patterns with shape x and height h. We say that a real pattern R is a (generalized) Gelfand-Tsetlin pattern if \(r_{nn}\ge0\) and it satisfies the interlacing property \(r_{i+1,j+1}\le r_{ij}\le r_{i+1,j}\) for all \((i,j)\in L(n,h)\) with i<h, with the conventions \(r_{i+1,n+1}=0\) for i=n,…,h−1. Denote the set of generalized Gelfand-Tsetlin patterns with height h and shape \(x\in {\mathbb{R}}_{+}^{n}\) by \(GT^h(x)\). This is a Euclidean polytope of dimension \(d=n(n-1)/2+(h-n)n\). Denote the corresponding Euclidean measure by dR. The analogue of the Whittaker functions in this setting are the functions \(J_\lambda(x)\) defined, for \(\lambda\in {\mathbb{C}}^{h}\) and \(x\in {\mathbb{R}}_{+}^{n}\), by

$$J_\lambda(x)=\int_{GT^h(x)} e^{-\lambda\cdot\mathrm{type}(R)} dR. $$

Note that, from (2.11), we have

$$\lim_{\epsilon\to0} \epsilon^{d} \varPsi^n_{\epsilon\lambda;1} \bigl( e^{x/\epsilon}\bigr)=J_\lambda(x). $$

If h=n then \(J_{\lambda}(x)=\det(e^{-\lambda_{i} x_{j}})/\Delta(\lambda )\) where Δ(λ)=∏ i>j (λ i λ j ) (see, for example, [39]).
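For h=n=2 the determinantal formula can be checked directly from the definition: \(GT^2(x)\) has a single free entry r with \(x_2\le r\le x_1\) and \(\operatorname{type}(R)=(r,\,x_1+x_2-r)\). The following sketch compares the defining integral with \(\det(e^{-\lambda_i x_j})/\Delta(\lambda)\):

```python
import mpmath as mp

lam = (0.7, 1.9)
x = (2.5, 1.0)               # shape with x1 >= x2 >= 0

# direct integral over GT^2(x): one free entry r with x2 <= r <= x1,
# and type(R) = (r, x1 + x2 - r)
direct = mp.quad(lambda r: mp.exp(-lam[0]*r - lam[1]*(x[0] + x[1] - r)),
                 [x[1], x[0]])

det = (mp.exp(-lam[0]*x[0] - lam[1]*x[1])
       - mp.exp(-lam[0]*x[1] - lam[1]*x[0]))
print(direct, det / (lam[1] - lam[0]))   # Delta(lam) = lam_2 - lam_1; agree
```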

The analogue of Theorem 3.2 in this setting is the following. This result can be inferred directly from results of [10] (see Property 8 after the statement of Theorem 1.1) or seen as a consequence of Theorem 3.2. We identify the output U(Y) with a pair of real patterns (R,S) of respective heights m and n, and common shape \((u_{nm},\ldots,u_{n-p+1,m-p+1})\), where \(p=n\wedge m\).

Corollary 8.1

The output U(Y)=(R,S) is a pair of generalized Gelfand-Tsetlin patterns if, and only if, all of the entries of Y are non-negative.

We note that the corresponding statement for matrices with integer entries follows as a particular case. If Y has non-negative integer entries then the pair of generalized Gelfand-Tsetlin patterns obtained can be interpreted in the usual way as the pair of semistandard tableaux obtained via the RSK correspondence.

The Berenstein-Kirillov [10] definition of U in terms of lattice paths is given as follows. For \(1\le k\le m\) and \(1\le r\le n\wedge k\),

$$ u_{n-r+1,k-r+1}+ \dotsm+ u_{n-1,k-1} + u_{nk} = \max _{(\pi_1,\ldots,\pi_r)\in\varPi^{(r)}_{n,k}} \sum_{(i,j)\in \pi_1\cup\cdots\cup\pi_r} y_{ij}, $$
(8.1)

where \(\varPi^{(r)}_{n,k}\) denotes the set of r-tuples of non-intersecting directed nearest-neighbor lattice paths \(\pi_1,\ldots,\pi_r\) starting at positions (1,1),(1,2),…,(1,r) and ending at positions (n,k−r+1),…,(n,k−1),(n,k) (see Fig. 1). This determines the entries of R. The entries of S are given by similar formulae using \(U(Y^t)=(S,R)\). In particular,

$$ u_{nm}=\max_{\pi\in\varPi^{(1)}_{n,m}} \sum _{(i,j)\in\pi} y_{ij}, $$
(8.2)

where \(\varPi^{(1)}_{n,m}\) is the set of directed nearest-neighbor lattice paths in \({\mathbb{Z}}^{2}\) from (1,1) to (n,m). This formula provides a connection to directed last passage percolation which we will discuss shortly. The formula (8.1) is the analogue of Greene’s theorem in this setting (see, for example, [23, §3.1]).

The local move description of Sect. 3 carries over to the tropical setting, as follows. For convenience and clarity we adopt the same notation as in the geometric setting. For each \(2\le i\le n\) and \(2\le j\le m\) define a mapping \(l_{ij}\) which takes as input a matrix \(Y=(y_{ij})\in {\mathbb{R}}^{n\times m}\) and replaces the submatrix

$$ \begin{pmatrix} y_{i-1,j-1}& y_{i-1,j}\\ y_{i,j-1}& y_{ij} \end{pmatrix} $$

of Y by its image under the map

$$ \begin{pmatrix} a& b\\ c& d \end{pmatrix} \mapsto \begin{pmatrix} b\wedge c-a & b\\ c& d+b\vee c \end{pmatrix} , $$
(8.3)

and leaves the other elements unchanged. For \(2\le i\le n\) and \(2\le j\le m\), define \(l_{i1}\) to be the mapping that replaces the element \(y_{i1}\) by \(y_{i-1,1}+y_{i1}\) and \(l_{1j}\) to be the mapping that replaces the element \(y_{1j}\) by \(y_{1,j-1}+y_{1j}\). As before we define \(l_{11}\) to be the identity map. For \(1\le i\le n\) and \(1\le j\le m\), set

$$\pi^j_i=l_{ij}\circ\cdots\circ l_{i1}, $$

and, for \(1\le i\le n\),

$$R_i = \begin{cases} \pi_1^{m-i+1}\circ\cdots\circ\pi^m_i, & i\le m,\\ \pi^1_{i-m+1} \circ\cdots\circ\pi^m_i, & i\ge m . \end{cases} $$

Then the Berenstein-Kirillov map is given by

$$ U=R_n\circ\cdots\circ R_1. $$
(8.4)
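The composition (8.4) is straightforward to implement, and (8.2) then provides a direct test: the bottom-right entry of U(Y) must equal the last passage time computed by dynamic programming. A Python sketch of this check (0-indexed; the loop structure is our own transcription of (8.4)):

```python
import random

def local_move(Y, i, j):
    # l_{ij} in 0-indexed coordinates: (8.3) in the bulk, the additive
    # boundary moves on row/column 0, and the identity at (0, 0)
    Y = [row[:] for row in Y]
    if i == 0 and j == 0:
        return Y
    if i == 0:
        Y[0][j] += Y[0][j-1]
        return Y
    if j == 0:
        Y[i][0] += Y[i-1][0]
        return Y
    a, b, c, d = Y[i-1][j-1], Y[i-1][j], Y[i][j-1], Y[i][j]
    Y[i-1][j-1], Y[i][j] = min(b, c) - a, d + max(b, c)
    return Y

def U(Y):
    n, m = len(Y), len(Y[0])
    def pi(Y, i, j):                   # pi^j_i = l_{ij} o ... o l_{i1}
        for jj in range(j + 1):
            Y = local_move(Y, i, jj)
        return Y
    for i in range(n):                 # U = R_n o ... o R_1
        k = 0                          # R_{i+1} applies pi^m_{i+1} first,
        while i - k >= 0 and m - 1 - k >= 0:   # then moves up and to the left
            Y = pi(Y, i - k, m - 1 - k)
            k += 1
    return Y

def last_passage(Y):                   # the right-hand side of (8.2)
    n, m = len(Y), len(Y[0])
    G = [[0.0]*m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prev = max([G[i-1][j]]*(i > 0) + [G[i][j-1]]*(j > 0), default=0.0)
            G[i][j] = prev + Y[i][j]
    return G[-1][-1]

Y = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(4)]
print(U(Y)[-1][-1], last_passage(Y))   # u_{nm} equals the last passage time
```

For a 2×2 input one can follow the moves by hand: U(Y) comes out as \(\bigl(\begin{smallmatrix} y_{12}\wedge y_{21} & y_{11}+y_{12}\\ y_{11}+y_{21} & y_{11}+y_{22}+y_{12}\vee y_{21}\end{smallmatrix}\bigr)\), whose corner entry is indeed the 2×2 last passage time.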

Now observe that each \(l_{ij}\) is invertible. Indeed, the inverse of the map (8.3) is given by

$$ \begin{pmatrix} a& b\\ c& d \end{pmatrix} \mapsto \begin{pmatrix} b\wedge c-a & b\\ c& d-b\vee c \end{pmatrix} , $$
(8.5)

and the boundary moves \(l_{1j}\), \(l_{i1}\) are clearly invertible. It follows that the map U is invertible. Moreover, U preserves the Lebesgue measure on \({\mathbb{R}}^{n\times m}\), since the Jacobians of the \(l_{ij}\) are clearly almost everywhere equal to ±1. Combining this with Corollary 8.1 we conclude that the restriction of U to \({\mathbb{R}}_{+}^{n\times m}\) is volume preserving with respect to the Euclidean measure, injective, and its image is given by the set of pairs of generalized Gelfand-Tsetlin patterns with respective heights m and n, having the same shape in

$$C^{(p)}=\bigl\{x\in {\mathbb{R}}_+^p:\ x_1\ge\cdots\ge x_p\bigr\}. $$

Finally, we recall the following straightforward fact. If we define row and column sums \(r_i=\sum_j y_{ij}\) and \(c_j=\sum_i y_{ij}\), then type(S)=r and type(R)=c. Note that this implies, for \(\lambda\in {\mathbb{C}}^{m}\) and \(\nu\in {\mathbb{C}}^{n}\),

$$ \sum_{ij} ( \nu_i+\lambda_j) y_{ij} = \sum _i \nu_i r_i+ \sum _j \lambda_j c_j. $$
(8.6)

The analogue of the Cauchy-Littlewood identity in this setting (cf. Corollary 3.5) is thus given as follows.

Proposition 8.2

Suppose \(\lambda\in {\mathbb{C}}^{m}\) and \(\nu\in {\mathbb{C}}^{n}\), where \(n\ge m\) and \(\Re(\lambda_i+\nu_j)>0\) for all i and j. Then

$$ \int_{C^{(m)} } J_\nu(x) J_\lambda(x) \prod_{i=1}^m dx_i = \prod_{ij}(\nu_i+ \lambda_j)^{-1}. $$
(8.7)
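In the simplest case m=n=1 one has \(J_\lambda(x)=e^{-\lambda x}\) (with the zero-dimensional integral over \(GT^1(x)\) read as evaluation at the unique pattern) and \(C^{(1)}={\mathbb{R}}_+\), so (8.7) is just \(\int_0^\infty e^{-(\nu+\lambda)x}\,dx=(\nu+\lambda)^{-1}\); numerically:

```python
import mpmath as mp

nu, lam = 0.9, 1.4
lhs = mp.quad(lambda x: mp.exp(-(nu + lam)*x), [0, mp.inf])
print(lhs, 1/(nu + lam))  # agree
```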

This basic structure has been exploited in the papers [4, 13, 21, 22, 33] to study last passage percolation models with exponential weights, as we shall now explain. We note that the development in those papers is via a discrete approximation and as such differs from the present framework, but the main ideas are the same. Let \(a\in {\mathbb{R}}^{n}\) and \(b\in {\mathbb{R}}^{m}\) such that a i +b j >0 for all i and j. Consider the measure on input matrices \((y_{ij})\in {\mathbb{R}}_{+}^{n\times m}\) defined by

$$\nu_{a,b}(dy)=\prod_{i,j} e^{-(a_i+b_j)y_{ij}} dy_{ij}. $$

From the above, it follows that the push-forward of ν a,b under the map U is given by

$$\nu_{a,b}\circ U^{-1} (du)=e^{-a\cdot\mathrm{type}(S)-b\cdot \mathrm{type}(R)} \prod _{i,j} du_{ij}. $$

Now, the variable \(u_{nm}\) defined by (8.2) has the interpretation as a last passage time in the percolation model on the lattice with weights given by the \(y_{ij}\). Choosing these weights at random so that they are independent and exponentially distributed with respective parameters \(a_i+b_j\) corresponds to choosing the input matrix \((y_{ij})\) according to the probability measure

$$\tilde{\nu}_{a,b}(dy)=\prod_{ij}(a_i+b_j) \nu_{a,b}(dy). $$

From the above, under this probability measure, the law of the random variable \(u_{nm}\) is the same, assuming \(n\ge m\), as the first marginal of the probability measure on \(C^{(m)}\) defined by

$$\mu_{a,b}(dx) = \prod_{ij}(a_i+b_j) J_a(x) J_b(x) \prod_{i=1}^m dx_i . $$

In other words, for bounded continuous f,

$$\int_{{\mathbb{R}}_+^{n\times m}} f(u_{nm}) \tilde{\nu}_{a,b}(dy) = \int_{C^{(m)}} f(x_1) \mu_{a,b}(dx) . $$

The probability measures \(\mu_{a,b}\) are non-central Laguerre (or complex Wishart) ensembles and the integrals (8.7) are the corresponding Selberg-type integrals [4, 13, 21, 22, 33].

Similarly, in the symmetric case, one arrives at the interpolating ensembles of Baik and Rains [2, 4]. These are probability measures on \({\mathbb{R}}_{+}^{n}\) defined for \(\alpha\in {\mathbb{R}}_{+}^{n}\) and \(\zeta\in {\mathbb{R}}_{+}\) by

$$\mu_{\alpha;\zeta}(dx)=\prod_{i<j}( \alpha_i+\alpha_j) J_\alpha (x) \prod _{i=1}^n (\alpha_i+\zeta) e^{(-1)^i\zeta x_i} dx_i . $$

We note that, in the notation of Sect. 5, as ϵ→0,

$$\tilde{\nu}_{\epsilon\alpha;\epsilon \zeta }\circ\sigma^{-1} \bigl(d e^{x/\epsilon}\bigr) \Rightarrow\mu_{\alpha;\zeta}(dx), $$

where “⇒” denotes weak convergence of probability measures. In this setting (see [4]) if the input matrix \((y_{ij})\in {\mathbb{R}}_{+}^{n\times n}\) is symmetric and chosen according to the probability measure

$$\prod_{i< j}(\alpha_i+ \alpha_j)e^{-( \alpha_i+\alpha_j) y_{ij}} dy_{ij} \prod _i (\alpha_i+\zeta) e^{-(\alpha_i+\zeta)y_{ii}} dy_{ii} $$

then the last passage time \(u_{nn}\) is distributed as the first marginal of \(\mu_{\alpha;\zeta}\).