1 Introduction

We consider undirected, connected graphs with no multiple edges and no self-loops. Each edge \((x,y)\) is given a positive weight \(c(x,y)\). A possible interpretation is that \((x,y)\) is a resistor with resistance \(1/c(x,y)\). The graph then becomes an electrical network.

More precisely, a (weighted) graph \(G = (V,c)\) consists of an at most countable set of vertices V and a weight function \(c : V \times V \rightarrow {\mathbb {R}}_{\ge 0}\) such that c is symmetric and for all \(x \in V\), we have \(c(x,x) = 0\) and

$$\begin{aligned} c_x := \sum _{y \in V} c(x,y) < \infty . \end{aligned}$$

We think of two vertices \(x,y \in V\) as being adjacent if \(c(x,y) > 0\). For \(x \in V\), let \(N(x) := \left\{ y \in V ~ | ~ c(x,y) > 0 \right\} \) be the set of neighbors of x. Throughout this work, we assume that every vertex has finite degree in G, i.e., \(|N(x)| < \infty \) for every \(x \in V\).

For \(x \in V\), let \(\mathbb {P}_x\) be the random walk on G starting at x. It is the Markov chain defined by the transition matrix

$$\begin{aligned} p(x,y) = \frac{c(x,y)}{c_x}~, x,y \in V \end{aligned}$$

and initial distribution \(\delta _x\). We will think of \(\mathbb {P}_x\) as a probability measure on \(\Omega = V^{{\mathbb {N}}_0}\) equipped with the \(\sigma \)-algebra \((2^V)^{\otimes {\mathbb {N}}_0}\). If not explicitly stated otherwise, we will from now on assume that every occurring graph is connected. In that case, \(\mathbb {P}_x\) is irreducible.

For a set of vertices \(A \subseteq V\) and \(\omega = (\omega _k)_{k \in {\mathbb {N}}_0} \in \Omega \), let

$$\begin{aligned} \tau _A(\omega )&:= \inf \left\{ k \ge 0 ~ | ~ \omega _k \in A \right\} \text { and}\\ \tau _A^+(\omega )&:= \inf \left\{ k \ge 1 ~ | ~ \omega _k \in A \right\} ~ (\inf \emptyset := \infty ), \end{aligned}$$

be hitting times of A. For \(x \in V\), we use the shorthand notation \(\tau _{\left\{ x \right\} } =: \tau _x\).
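For concreteness, the walk and the hitting times above can be simulated directly. The following sketch is illustrative and not part of the paper: it encodes the weight function as a dict of dicts, and the names `step`, `tau` and `tau_plus` are our own choices.

```python
import random

# Weighted graph as a dict of dicts: c[x][y] > 0 means x ~ y
# (an illustrative representation; the paper fixes no data structure).
c = {0: {1: 1.0, 2: 2.0}, 1: {0: 1.0, 2: 3.0}, 2: {0: 2.0, 1: 3.0}}

def step(x, rng):
    """Sample one step of the walk: move to y with probability c(x,y)/c_x."""
    nbrs, wts = zip(*c[x].items())
    return rng.choices(nbrs, weights=wts)[0]

def tau(omega, A):
    """tau_A: first k >= 0 with omega_k in A (None if A is never hit)."""
    return next((k for k, v in enumerate(omega) if v in A), None)

def tau_plus(omega, A):
    """tau_A^+: first k >= 1 with omega_k in A (None if A is never hit)."""
    return next((k for k, v in enumerate(omega) if k >= 1 and v in A), None)

# A finite trajectory of the walk started at 0.
rng = random.Random(0)
omega = [0]
for _ in range(20):
    omega.append(step(omega[-1], rng))
```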

Suppose that G is finite. Ohm’s Law states that the effective resistance \(R(x,y)\) between two vertices \(x,y \in V\) is the voltage drop needed to induce an electrical current of exactly 1 ampere from x to y.

The relationship between electrical currents and the random walk on G has been studied intensively [3, 5,6,7]. For finite graphs and \(x \ne y\), one has the following probabilistic representations:

$$\begin{aligned} R(x,y)&= \frac{1}{c_x} {\mathbb {E}}_x\left[ \sum _{k=0}^{\tau _y-1} \mathbbm {1}_x(\omega _k) \right] \end{aligned}$$
(1.1)
$$\begin{aligned}&= \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y \le \tau _x^+]} \end{aligned}$$
(1.2)
$$\begin{aligned}&= \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y < \tau _x^+]}~. \end{aligned}$$
(1.3)

Note that \((c_z)_{z \in V}\) is an invariant measure of p. A proof of the first equality in the unweighted case can be found in [7] and extends to our more general setting. To see that (1.1) equals (1.2), observe that \(\sum _{k=0}^{\tau _y-1} \mathbbm {1}_x(\omega _k)\) is geometrically distributed with success probability \(1 - \mathbb {P}_x[\tau _x^+ < \tau _y] = \mathbb {P}_x[\tau _y \le \tau _x^+]\), and hence has expectation \(1/\mathbb {P}_x[\tau _y \le \tau _x^+]\). For the last equality, use that any finite graph is recurrent and thus \(\mathbb {P}_x[\tau _x^+ =\tau _y = \infty ] = 0\).
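The representations above can be checked numerically on a small example. The following sketch is not from the paper and uses arbitrarily chosen weights: it computes \(R(x,y)\) via the Laplacian pseudo-inverse and compares it with \(1/(c_x \cdot \mathbb {P}_x[\tau _y < \tau _x^+])\), where the escape probability is obtained by first-step analysis.

```python
import numpy as np

# Triangle network with weights c(0,1)=1, c(0,2)=2, c(1,2)=3 (arbitrary choice).
C = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])
c = C.sum(axis=1)            # c_x = sum_y c(x,y)
P = C / c[:, None]           # transition matrix p(x,y) = c(x,y)/c_x

# Effective resistance via the Laplacian: R(x,y) = (e_x - e_y)^T L^+ (e_x - e_y).
L = np.diag(c) - C
e = np.array([1., -1., 0.])  # x = 0, y = 1
R = e @ np.linalg.pinv(L) @ e

# Escape probability P_0[tau_1 < tau_0^+]: h(v) = P_v[tau_1 < tau_0] is
# harmonic off {0,1} with h(0) = 0, h(1) = 1; the only interior vertex is 2.
h2 = P[2, 1] / (1.0 - P[2, 2])   # = p(2,1), since p(2,2) = 0
esc = P[0, 1] + P[0, 2] * h2     # P_0[tau_1 < tau_0^+]
```

Both quantities equal \(5/11\) here, consistent with (1.2) and (1.3), which coincide on finite graphs.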

The subject of effective resistance becomes considerably more complicated on infinite graphs, since these may admit several different notions of effective resistance. Recurrent graphs, however, have a property often referred to as unique currents [6] and consequently possess a unique effective resistance. In this case, the above representation holds [1, 8]. Indeed, [1, Lemma 2.61] states the more general inequalities

$$\begin{aligned} \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y \le \tau _x^+]} \le R^F(x,y) \le \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y < \tau _x^+]} \end{aligned}$$
(1.4)

for the free effective resistance \(R^F\) (see Sect. 2) of any infinite graph whose vertices have only finitely many neighbors. For the convenience of the reader, we include a proof of the result, see Lemma 2.4.

In [5, Corollaries 3.13 and 3.15], it is suggested that one has

$$\begin{aligned} R^F(x,y) = \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y < \tau _x^+]} \end{aligned}$$
(1.5)

on all transient graphs. However, this is false as our example in Sect. 3 shows.

The main result of this work (Corollary 6.3) states that the free effective resistance of a transient graph \(G = (V,c)\) admits the representation (1.5) for all \(x,y \in V\) if and only if G is a subgraph of an infinite line. Corollary 6.5 states that the lower bound in (1.4) is attained if and only if G is recurrent.

2 Free Effective Resistance

Let \(G = (V,c)\) be an infinite connected graph. For any \(W \subseteq V\), let \(G \mathord \restriction _W := (W, c \mathord \restriction _{W \times W})\) be the subgraph of G induced by W. We say a sequence \((V_n)_{n \in {\mathbb {N}}}\) of subsets of V is a finite exhaustion of V if \(|V_n|< \infty \), \(V_n \subseteq V_{n+1}\) for all \(n \in {\mathbb {N}}\), and \(V = \bigcup _{n \in {\mathbb {N}}} V_n\). Define \(G_n = (V_n, c_n):= G\mathord \restriction _{V_n}\).

Definition 2.1

Let \((V_n)_{n \in {\mathbb {N}}}\) be any finite exhaustion of V such that each \(G_n\) is connected. For \(x,y \in V\), the free effective resistance \(R^F(x,y)\) of G is defined by

$$\begin{aligned} R^F(x,y) = \lim _{n \rightarrow \infty } R_{G_n}(x,y). \end{aligned}$$

Remark 2.2

The fact that \(R_{G_n}(x,y)\) converges with the limit independent of the choice of \((V_n)_{n \in {\mathbb {N}}}\) is due to Rayleigh’s monotonicity principle (see, e.g., [2, 4]).
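To illustrate Definition 2.1 and Rayleigh monotonicity numerically, the sketch below (an illustration of ours, not taken from the paper) computes \(R_{G_n}(x,y)\) for \(G = {\mathbb {Z}}^2\) with unit weights, \(V_n = \{-n,\ldots ,n\}^2\), and the adjacent vertices \(x = (0,0)\), \(y = (1,0)\); the values decrease monotonically toward \(R^F(x,y) = 1/2\).

```python
import numpy as np

def box_resistance(n):
    """R_{G_n}(x,y) for G = Z^2 with unit weights, V_n = {-n,...,n}^2,
    x = (0,0), y = (1,0), via the Laplacian pseudo-inverse."""
    side = 2 * n + 1
    idx = {(i, j): (i + n) * side + (j + n)
           for i in range(-n, n + 1) for j in range(-n, n + 1)}
    L = np.zeros((side * side, side * side))
    for (i, j), a in idx.items():
        for v in ((i + 1, j), (i, j + 1)):   # each edge counted once
            if v in idx:
                b = idx[v]
                L[a, a] += 1; L[b, b] += 1
                L[a, b] -= 1; L[b, a] -= 1
    e = np.zeros(side * side)
    e[idx[(0, 0)]], e[idx[(1, 0)]] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

# R_{G_n}(x,y) along the exhaustion: decreasing by Rayleigh monotonicity.
vals = [box_resistance(n) for n in range(1, 5)]
```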

We denote by \(\mathbb {P}^n_x\) the random walk on \(G_n\) starting at x with transition matrix \(p_n\) associated with \(c_n\). Since we can extend \(p_n\) to a function on \(V \times V\) by setting \(p_n(x,y) = 0\) whenever \(x \notin V_n\) or \(y \notin V_n\), \(\mathbb {P}^n_x\) is a probability measure on \(\Omega = V^{{\mathbb {N}}_0}\) for each \(x \in V_n\), and we have

$$\begin{aligned} p_n(x,y) = \frac{c_n(x,y)}{(c_n)_x} = \frac{c(x,y)}{\sum _{w \in V_n} c(x,w)} \end{aligned}$$

for all \(x,y \in V_n\).

Remark 2.3

Note that for any \(x,y \in V_n\),

$$\begin{aligned} p_n(x,y) = p(x,y) \cdot \frac{c_x}{(c_n)_x} = p(x,y) \cdot \left( 1 + \frac{\sum _{v \notin V_n} c(x,v)}{\sum _{v \in V_n} c(x,v)}\right) \ge p(x,y). \end{aligned}$$

Lemma 2.4

([1, Lemma 2.61]). Let \(G = (V,c)\) be an infinite, connected graph such that \(|N(x)| < \infty \) for all \(x \in V\) and let \(R^F\) be the free effective resistance of G. Then,

$$\begin{aligned} \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y \le \tau _x^+]} \le R^F(x,y) \le \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y < \tau _x^+]} \end{aligned}$$

holds for all \(x,y \in V\) with \(x \ne y\).

Proof

For any \(v \in V\), we have \((c_n)_v \rightarrow c_v\) as \(n \rightarrow \infty \) and thus \(p_n(v,w) \rightarrow p(v,w)\) for all \(v,w \in V\). By the definition of \(R^F\) and (1.3), we know that

$$\begin{aligned} R^F(x,y)&= \lim _{n \rightarrow \infty } R_{G_n}(x,y) = \lim _{n \rightarrow \infty } \frac{1}{(c_n)_x \cdot \mathbb {P}_x^n[\tau _y< \tau _x^+]} \nonumber \\&= \frac{1}{c_x \cdot (\lim \limits _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+])} ~. \end{aligned}$$
(2.1)

In particular, \(\lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+]\) exists. Hence, the claim is equivalent to

$$\begin{aligned} \mathbb {P}_x[\tau _y< \tau _x^+] \le \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] \le \mathbb {P}_x[\tau _y \le \tau _x^+]. \end{aligned}$$

Consider the discrete topology on V and its product topology on \(\Omega = V^{{\mathbb {N}}_0}\). Since \(p_n \rightarrow p\) point-wise and \(\left\{ y \in V_n ~ | ~ c_n(x,y) > 0 \right\} \subset N(x)\) and \(|N(x)|<\infty \) for all \(n \in {\mathbb {N}}\) and all \(x \in V_n\), it follows that \((\mathbb {P}^n_x)_{n \in {\mathbb {N}}}\) converges weakly to \(\mathbb {P}_x\). In the product topology, the sets \(\left\{ \omega \in \Omega ~ | ~ \tau _y(\omega ) < \tau _x^+(\omega ) \right\} \) and \(\left\{ \omega \in \Omega ~ | ~ \tau _y(\omega ) \le \tau _x^+(\omega ) \right\} \) are open and closed, respectively. By the Portmanteau theorem, it follows that

$$\begin{aligned} \mathbb {P}_x[\tau _y< \tau _x^+] \le \liminf _{n \rightarrow \infty }\mathbb {P}^n_x[\tau _y< \tau _x^+] = \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] \le \limsup _{n \rightarrow \infty }\mathbb {P}^n_x[\tau _y \le \tau _x^+] \le \mathbb {P}_x[\tau _y \le \tau _x^+]. \end{aligned}$$

\(\square \)

In view of (2.1), equation (1.5) holds if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y< \tau _x^+] = \mathbb {P}_x[\tau _y < \tau _x^+]. \end{aligned}$$
(2.2)

Analogously, the lower bound of (1.4) is attained if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] = \mathbb {P}_x[\tau _y \le \tau _x^+]. \end{aligned}$$
(2.3)

3 The Transient Graph \(\mathcal {T}\)

We will now show that (1.5) does not hold in general. Consider the graph \(\mathcal {T}\) shown in Fig. 1. It is transient and we have \(R^F(B,T) = 2\). However,

$$\begin{aligned} \mathbb {P}_B[\tau _T< \tau _B^+]&= \mathbb {P}_0[\tau _T< \tau _B]\\&= 1 - \mathbb {P}_0[\tau _B \le \tau _T]\\&= 1 - \mathbb {P}_0[\tau _B < \tau _T] - \mathbb {P}_0[\tau _B = \tau _T = \infty ]. \end{aligned}$$

Due to the symmetry of \(\mathcal {T}\) we have \( \mathbb {P}_0[\tau _B< \tau _T] = \mathbb {P}_0[\tau _T < \tau _B]\). Together with the transience of \(\mathcal {T}\), this implies

$$\begin{aligned} \mathbb {P}_B[\tau _T< \tau _B^+] = \mathbb {P}_0[\tau _T< \tau _B] = \frac{1 - \mathbb {P}_0[\tau _B = \tau _T= \infty ]}{2} < \frac{1}{2} \end{aligned}$$

and

Fig. 1: The transient graph \(\mathcal {T}\)

$$\begin{aligned} \mathbb {P}_B[\tau _T \le \tau _B^+] = \mathbb {P}_0[\tau _T \le \tau _B] = \frac{1 + \mathbb {P}_0[\tau _B = \tau _T= \infty ]}{2} > \frac{1}{2}. \end{aligned}$$

More precisely, one can compute

$$\begin{aligned} \mathbb {P}_B[\tau _T < \tau _B^+] = \frac{2}{5} \text { and } \mathbb {P}_B[\tau _T \le \tau _B^+] = \frac{3}{5}~. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{1}{c_B \mathbb {P}_B[\tau _T < \tau _B^+]} \ne R^F(B,T) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{c_B}{\mathbb {E}}_B\left[ \sum _{k=0}^{\tau _T - 1} \mathbbm {1}_B(\omega _k) \right] = \frac{1}{c_B\mathbb {P}_B[\tau _T \le \tau _B^+]} \ne R^F(B,T). \end{aligned}$$

4 Probability of Paths

To check whether (2.2) holds, it is useful to write both sides as sums of probabilities of paths.

A sequence \(\gamma = (\gamma _0, \ldots , \gamma _n) \in V^{n+1}\) is called a path (of length n) in G if \(c(\gamma _k, \gamma _{k+1}) > 0\) for all \(k = 0, \ldots , n-1\). We denote by \(L(\gamma )\) the length of \(\gamma \) and by \(\Gamma _G\) the set of all paths in G. A path \(\gamma \) is called simple if it does not contain any vertex twice. The probability of \(\gamma \) with respect to \(\mathbb {P}_x\) is defined by

$$\begin{aligned} \mathbb {P}_x(\gamma ) := \mathbb {P}_x(\left\{ \gamma \right\} \times V^{{\mathbb {N}}}) = \mathbbm {1}_x(\gamma _0) \cdot \prod _{k=0}^{L(\gamma )-1} p(\gamma _k, \gamma _{k+1})~. \end{aligned}$$

We say \(\gamma \) is a path \(x \rightarrow y\) if \(\gamma _0 = x\), \(\gamma _{L(\gamma )} = y\) and \(\gamma _k \notin \{x,y\}\) for all \(k = 1,\ldots , L(\gamma ) - 1\). We denote by \(\Gamma _G(x,y)\) the set of all paths \(x \rightarrow y\) in G.

For \(A \subseteq V\), let

$$\begin{aligned} \Gamma _G(x,y;A) := \left\{ \gamma \in \Gamma _G(x,y) ~ | ~ \gamma _k \in A \text { for all } k = 0, \ldots , L(\gamma ) \right\} \end{aligned}$$

be the set of all paths \(x \rightarrow y\) in G that use only vertices in A.

Using this notion and \(\Gamma _{G_n}(x,y) = \Gamma _G(x,y;V_n)\), we see that (2.2) becomes

$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\gamma \in \Gamma _G(x,y;V_n)} \mathbb {P}^n_x(\gamma ) = \sum _{\gamma \in \Gamma _G(x,y)} \mathbb {P}_x(\gamma ). \end{aligned}$$
(4.1)

Since \(\Gamma _G(x,y;V_n)\) increases to \(\Gamma _G(x,y)\), this might look like an easy application of either the Monotone Convergence Theorem or the Dominated Convergence Theorem. However, neither is applicable since \(\mathbb {P}^n_x(\gamma )\) may be strictly greater than \(\mathbb {P}_x(\gamma )\).

To investigate when exactly (4.1) holds, we will introduce another random walk on V which can be considered an intermediary between \(\mathbb {P}^n_x\) and \(\mathbb {P}_x\).

5 Extended Finite Random Walk

The behavior of \(\mathbb {P}_x\) and \(\mathbb {P}^n_x\) differs only when \(\mathbb {P}_x\) leaves \(V_n\); \(\mathbb {P}^n_x\), by contrast, is essentially reflected back to a vertex in \(V_n\). We will now construct an intermediary random walk which still has a finite state space, models the behavior of stepping out of \(V_n\), and has the same transition probabilities as \(\mathbb {P}_x\) within \(V_n\). This is done by adding boundary vertices to \(G_n\) wherever there is an edge from \(V_n\) to \(V \setminus V_n\).

For any set \(A \subseteq V\), let

$$\begin{aligned} \partial _i A := \left\{ v \in A ~ | ~ \ \exists \ w \in V \setminus A : c(v,w) > 0 \right\} \end{aligned}$$

be the inner boundary and \(\partial _o A := \partial _i (V \setminus A)\) be the outer boundary of A in G.

For any \(v \in \partial _i A\), let \(\overline{v}\) be a copy of v. Define \(\overline{G_n} = (\overline{V_n}, \overline{c_n})\) where

$$\begin{aligned} \overline{V_n} = V_n \cup \left\{ \overline{v} ~ | ~ v \in \partial _i V_n \right\} , \end{aligned}$$

and \(\overline{c_n}\) is defined as follows. For \(x,y \in \overline{V_n}\), let

$$\begin{aligned} \overline{c_n}(x,y) = \overline{c_n}(y,x) = {\left\{ \begin{array}{ll} c(x,y) &{}, ~ x,y \in V_n\\ \sum _{z \notin V_n} c(x,z) &{} , ~ y = \overline{x}\\ 0 &{} , \text { otherwise} \end{array}\right. }. \end{aligned}$$

In particular, we have \((\overline{c_n})_x = c_x\) for all \(x \in V_n\). We denote by \(\overline{\mathbb {P}^n_x}\) the random walk on \(\overline{G_n}\) starting at x with transition matrix \(\overline{p_n}\) given by

$$\begin{aligned} \overline{p_n}(x,y) = \frac{\overline{c_n}(x,y)}{(\overline{c_n})_x}~. \end{aligned}$$

Furthermore, let \(V_n^* := \overline{V_n} \setminus V_n\).

Example 5.1

Let G be the lattice \({\mathbb {Z}}^2\) with unit weights, see Fig. 2. Furthermore, let \(V_n := \left\{ -n,\ldots ,0,\ldots ,n \right\} ^2\). \(G_1\) and \(\overline{G_1}\) are illustrated in Fig. 3. Note that \(\overline{c_1}((1,1), \overline{(1,1)}) = 2\) since (1, 1) has two edges leaving \(V_1\) in G.
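The construction of \(\overline{G_n}\) can be written down generically. In the sketch below (illustrative; the paper fixes no data structure), weights are dicts of dicts and the copy \(\overline{v}\) is encoded as the pair `('bar', v)`; we reproduce the boundary weight 2 at the corner (1, 1) from Example 5.1.

```python
def extend(c, Vn):
    """Weights of the extended graph: keep c inside Vn and, for every vertex
    on the inner boundary, add an edge to its copy ('bar', v) carrying the
    total weight leaving Vn."""
    cbar = {}
    for x in Vn:
        cbar[x] = {y: w for y, w in c[x].items() if y in Vn}
        out = sum(w for y, w in c[x].items() if y not in Vn)
        if out > 0:                        # x lies in the inner boundary of Vn
            cbar[x][('bar', x)] = out
            cbar[('bar', x)] = {x: out}
    return cbar

# G = Z^2 with unit weights, restricted to what extend() needs:
# the rows c[x] of all vertices x in V_1 = {-1,0,1}^2.
V1 = {(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)}
c = {x: {(x[0] + dx, x[1] + dy): 1.0
         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}
     for x in V1}
cbar1 = extend(c, V1)
```

As expected, the corner (1, 1) has two edges leaving \(V_1\), the center (0, 0) has no copy, and \((\overline{c_1})_x = c_x = 4\) for every \(x \in V_1\).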

Fig. 2: The lattice \({\mathbb {Z}}^2\)

Fig. 3: \(G_1\) (left) and \(\overline{G_1}\) (right) for \(G = {\mathbb {Z}}^2\)

Lemma 5.2

(Relation of \(p_n, \overline{p_n}\) and p). For \(x,y \in V_n\) we have

$$\begin{aligned} p_n(x,y) \ge p(x,y) = \overline{p_n}(x,y). \end{aligned}$$

For \(x,y \in V\) and \(m \in {\mathbb {N}}\) such that \(x,y \in V_m\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty } p_n(x,y) = p(x,y) = \overline{p_m}(x,y). \end{aligned}$$

Note that for \(n \in {\mathbb {N}}\) and \(x,y \in V_n\), we have

$$\begin{aligned} \Gamma _{G_n}(x,y) = \Gamma _{\overline{G_n}}(x,y;V_n) = \Gamma _G(x,y;V_n). \end{aligned}$$

By Lemma 5.2, the following holds for all \(x,y \in V_n\).

$$\begin{aligned} \ \forall \ m \ge n \ \forall \ \gamma \in \Gamma _G(x,y;V_m): \mathbb {P}_x(\gamma ) = \overline{\mathbb {P}^m_x}(\gamma ). \end{aligned}$$

The connection between \(\mathbb {P}_x^n(\gamma )\) and \(\overline{\mathbb {P}^n_x}(\gamma )\) is a bit more intricate. In order to investigate this connection, first consider what kind of paths exist in \(\overline{G_n}\). Let \(x,y \in V_n\), \(x \ne y\) and \(\overline{\gamma } \in \Gamma _{\overline{G_n}}(x,y)\) with \(L(\overline{\gamma }) \ge 2\). Then, by the definition of \(\overline{G_n}\), there exist \(l\in {\mathbb {N}}\) with \(l \ge 2\), \(v_1, \ldots , v_{l-1} \in V_n \setminus \left\{ x,y \right\} \) and \(k_1, \ldots , k_{l-1} \in {\mathbb {N}}_0\) such that \(k_j = 0\) for any \(j \in \left\{ 1, \ldots , l-1 \right\} \) with \(v_j \notin \partial _iV_n\) and

$$\begin{aligned} \overline{\gamma } = (x, (v_1)_{k_1}, \ldots , (v_{l-1})_{k_{l-1}}, y) \end{aligned}$$
(5.1)

where \((v)_k := (v, \underbrace{\overline{v}, v, \ldots , \overline{v}, v}_{k \text { times}})\) for \((v,k) \in ((\partial _iV_n)\times {\mathbb {N}}_0) \cup ((V_n \setminus \partial _iV_n) \times \left\{ 0 \right\} )\). Note that the representation (5.1) is unique for \(\overline{\gamma }\) since \(G_n\) does not contain any self-loops.

Definition 5.3

For \(x,y \in V_n\), \(x \ne y\), let \(\pi : \Gamma _{\overline{G_n}}(x,y) \rightarrow \Gamma _{G_n}(x,y)\) be the projection of \(\Gamma _{\overline{G_n}}(x,y)\) onto \(\Gamma _{G_n}(x,y)\) which replaces all occurrences of \((v, \overline{v}, v)\) for any \(v \in \partial _i V_n\) by (v).

More precisely, let \(\overline{\gamma } \in \Gamma _{\overline{G_n}}(x,y)\). If \(L(\overline{\gamma }) = 1\), then \(\overline{\gamma } = (x,y)\) and we define \(\pi (\overline{\gamma }) := (x,y)\). If \(L(\overline{\gamma }) \ge 2\), it is of the form (5.1) and we define

$$\begin{aligned} \pi (\overline{\gamma }) := (x, v_1, \ldots , v_{l-1}, y). \end{aligned}$$
(5.2)
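The projection \(\pi \) is easy to implement once copies are encoded, as in our sketches, as pairs `('bar', v)` (an assumption of this illustration, not of the paper): deleting each copy together with the forced return step collapses every excursion \((v, \overline{v}, v)\) to \((v)\).

```python
def project(path_bar):
    """pi: collapse every excursion (v, bar v, v) in a path of the extended
    graph to (v); barred copies are encoded as ('bar', v)."""
    out, i = [], 0
    while i < len(path_bar):
        v = path_bar[i]
        if isinstance(v, tuple) and len(v) == 2 and v[0] == 'bar':
            i += 2            # drop bar v and the forced return step to v
        else:
            out.append(v)
            i += 1
    return out
```

For example, the path \((x, v, \overline{v}, v, \overline{v}, v, y)\), i.e. \((x, (v)_2, y)\), projects to \((x, v, y)\).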

Lemma 5.4

For all \(x,y \in V_n\), \(x \ne y\) and \(\gamma \in \Gamma _{G_n}(x,y)\), we have

$$\begin{aligned} \mathbb {P}_x^n(\gamma ) = \frac{c_x}{(c_n)_x} \cdot \sum _{\overline{\gamma } \in \pi ^{-1}(\gamma )} \overline{\mathbb {P}_x^n}(\overline{\gamma }). \end{aligned}$$

Proof

For any \(v,w \in V_n\), we have

$$\begin{aligned} \frac{c_v}{(c_n)_v} \cdot \overline{p_n}(v,w) = \frac{c_v}{(c_n)_v} \cdot \frac{c(v,w)}{c_v} = p_n(v,w) \end{aligned}$$
(5.3)

and, if \(v \in \partial _i V_n\),

$$\begin{aligned} \sum _{k=0}^{\infty } (\overline{p_n}(v,\overline{v})\cdot \underbrace{\overline{p_n}(\overline{v},v)}_{=1})^k = \frac{1}{1 - \overline{p_n}(v, \overline{v})} = \frac{c_v}{c_v - \sum _{w \notin V_n} c(v,w)} = \frac{c_v}{(c_n)_v}~. \end{aligned}$$
(5.4)

For \( \gamma \in \Gamma _{G_n}(x,y)\) with \(L(\gamma ) = 1\), we have \(\gamma = (x,y)\). Since any \(\overline{\gamma } \in \pi ^{-1}(\gamma )\) visits x and y only once, \(\pi ^{-1}(\gamma ) = \left\{ \gamma \right\} \) holds and

$$\begin{aligned} \frac{c_x}{(c_n)_x} \cdot \sum _{\overline{\gamma } \in \pi ^{-1}(\gamma )} \overline{\mathbb {P}_x^n}(\overline{\gamma }) = \frac{c_x}{(c_n)_x} \cdot \overline{\mathbb {P}_x^n}(\gamma ) = \frac{c_x}{(c_n)_x} \cdot \overline{p_n}(x,y) = p_n(x,y) = \mathbb {P}_x^n(\gamma ). \end{aligned}$$

Now let \(\gamma = (x,v_1, \ldots , v_{l-1}, y) \in \Gamma _{G_n}(x,y)\) with \(l=L(\gamma ) \ge 2\). We define

$$\begin{aligned} A(\gamma ) := \left\{ (k_1, \ldots , k_{l-1}) \in ({\mathbb {N}}_0)^{l-1} ~ | ~ \text {for each } j \in \left\{ 1,\ldots , l-1 \right\} : k_j = 0 \text { if } v_j \notin \partial _i V_n \right\} . \end{aligned}$$

It follows that

$$\begin{aligned} \pi ^{-1}(\gamma )&= \left\{ \overline{\gamma } \in \Gamma _{\overline{G_n}}(x,y) ~ | ~ \pi (\overline{\gamma }) = \gamma \right\} \\&= \left\{ (x, (v_1)_{k_1}, \ldots , (v_{l-1})_{k_{l-1}},y) ~ | ~ (k_1,\ldots ,k_{l-1}) \in A(\gamma ) \right\} \end{aligned}$$

and we compute

$$\begin{aligned} \sum _{\overline{\gamma } \in \pi ^{-1}(\gamma )} \overline{\mathbb {P}^n_x}(\overline{\gamma })&= \sum _{(k_1,\ldots , k_{l-1}) \in A(\gamma )} \overline{\mathbb {P}^n_x}((x, (v_1)_{k_1}, \ldots , (v_{l-1})_{k_{l-1}},y))\\&= \sum _{(k_1,\ldots , k_{l-1}) \in A(\gamma )} \left[ \overline{\mathbb {P}^n_x}((x,v_1, \ldots , v_{l-1}, y)) \cdot \prod _{\begin{array}{c} j=1,\ldots , l-1\\ v_j \in \partial _i V_n \end{array}} \overline{p_n}(v_j, \overline{v_j})^{k_j}\right] \\&{\mathop { = }\limits ^{(5.4)}} \overline{\mathbb {P}^n_x}((x,v_1, \ldots , v_{l-1}, y)) \cdot \prod _{j=1}^{l-1} \frac{c_{v_j}}{(c_n)_{v_j}} \\&{\mathop { = }\limits ^{(5.3)}} \frac{(c_n)_x}{c_x} \cdot \mathbb {P}^n_x((x,v_1,\ldots , v_{l-1}, y)) = \frac{(c_n)_x}{c_x} \cdot \mathbb {P}^n_x(\gamma ). \end{aligned}$$

\(\square \)

Proposition 5.5

For \(x,y \in V_n\), \(x \ne y\), we have

$$\begin{aligned} \overline{\mathbb {P}^n_x}[\tau _y< \tau _x^+] = \frac{(c_n)_x}{c_x} \cdot \mathbb {P}^n_x[\tau _y < \tau _x^+]. \end{aligned}$$

Proof

Using

$$\begin{aligned} \Gamma _{\overline{G_n}}(x,y) = \bigsqcup _{\gamma \in \Gamma _{G_n}(x,y)} \pi ^{-1}(\gamma ), \end{aligned}$$

we compute

$$\begin{aligned} \mathbb {P}_x^n[\tau _y< \tau _x^+]&= \sum _{\gamma \in \Gamma _{G_n}(x,y)} \mathbb {P}_x^n(\gamma ) = \sum _{\gamma \in \Gamma _{G_n}(x,y)} \left( \frac{c_x}{(c_n)_x} \cdot \sum _{\overline{\gamma } \in \pi ^{-1}(\gamma )} \overline{\mathbb {P}_x^n}(\overline{\gamma }) \right) \\&= \frac{c_x}{(c_n)_x} \cdot \sum _{\overline{\gamma } \in \Gamma _{\overline{G_n}}(x,y)} \overline{\mathbb {P}_x^n}(\overline{\gamma }) = \frac{c_x}{(c_n)_x} \cdot \overline{\mathbb {P}_x^n}[\tau _y < \tau _x^+]. \end{aligned}$$

\(\square \)
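Proposition 5.5 can be verified numerically. The sketch below is an illustrative setup of ours, not from the paper: it takes \(G = {\mathbb {Z}}\) with unit weights, \(V_n = \{-2,\ldots ,2\}\), \(x = 2\) and \(y = 0\), so that \((c_n)_x/c_x = 1/2\), and computes both escape probabilities by first-step analysis.

```python
import numpy as np

def escape(C, x, y):
    """P_x[tau_y < tau_x^+] for the walk with symmetric weight matrix C,
    via first-step analysis with absorbing set {x, y}."""
    n = C.shape[0]
    P = C / C.sum(axis=1, keepdims=True)
    inter = [v for v in range(n) if v not in (x, y)]
    A = np.eye(len(inter)) - P[np.ix_(inter, inter)]
    h = np.linalg.solve(A, P[inter, y])   # h(v) = P_v[tau_y < tau_x]
    hfull = np.zeros(n)
    hfull[y] = 1.0
    hfull[inter] = h
    return float(P[x] @ hfull)

# G_n: the path graph on V_n = {-2,...,2}, indexed 0..4; x = index 4, y = index 2.
Cn = np.zeros((5, 5))
for a in range(4):
    Cn[a, a + 1] = Cn[a + 1, a] = 1.0

# Extended graph: copies of the boundary vertices -2 and 2 (indices 5 and 6),
# each attached with the weight leaving V_n, which equals 1 here.
Cbar = np.zeros((7, 7))
Cbar[:5, :5] = Cn
Cbar[0, 5] = Cbar[5, 0] = 1.0
Cbar[4, 6] = Cbar[6, 4] = 1.0

lhs = escape(Cbar, 4, 2)       # bar P^n_x[tau_y < tau_x^+]
rhs = 0.5 * escape(Cn, 4, 2)   # ((c_n)_x / c_x) * P^n_x[tau_y < tau_x^+]
```

In this instance \(\mathbb {P}^n_x[\tau _y < \tau _x^+] = 1/2\) while \(\overline{\mathbb {P}^n_x}[\tau _y < \tau _x^+] = 1/4\), matching the factor \((c_n)_x/c_x = 1/2\) of Proposition 5.5.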

Now that we have clarified the relation between \(\mathbb {P}^n_x\), \(\overline{\mathbb {P}^n_x}\) and \(\mathbb {P}_x\), we can return our attention to (2.2).

Proposition 5.6

For \(x,y \in V\), \(x \ne y\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y< \tau _x^+] = \mathbb {P}_x[\tau _y < \tau _x^+] \end{aligned}$$

if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = 0. \end{aligned}$$
(5.5)

Proof

We have

$$\begin{aligned} \mathbb {P}_x[\tau _y <\tau _x^+]&= \sum _{\gamma \in \Gamma _G(x,y)} \mathbb {P}_x(\gamma ) = \lim _{n \rightarrow \infty } \sum _{\gamma \in \Gamma _G(x,y;V_n)} \mathbb {P}_x(\gamma ) \\&= \lim _{n \rightarrow \infty } \sum _{\gamma \in \Gamma _G(x,y;V_n)} \overline{\mathbb {P}^n_x}(\gamma )\nonumber \end{aligned}$$
(5.6)

and

$$\begin{aligned} \mathbb {P}^n_x[\tau _y< \tau _x^+] = \frac{c_x}{(c_n)_x} \cdot \overline{\mathbb {P}^n_x}[\tau _y < \tau _x^+] = \frac{c_x}{(c_n)_x} \cdot \sum _{\gamma \in \Gamma _{\overline{G_n}}(x,y)} \overline{\mathbb {P}^n_x}(\gamma ). \end{aligned}$$
(5.7)

Since \(\Gamma _G(x,y;V_n) = \Gamma _{\overline{G_n}}(x,y;V_n)\) and \((c_n)_x \rightarrow c_x\), it follows that \(\mathbb {P}^n_x[\tau _y< \tau _x^+] \rightarrow \mathbb {P}_x[\tau _y < \tau _x^+]\) holds if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{ \begin{array}{c} \gamma \in \Gamma _{\overline{G_n}}(x,y)\\ \gamma \notin \Gamma _{\overline{G_n}}(x,y;V_n) \end{array}} \overline{\mathbb {P}^n_x}(\gamma )\ = 0. \end{aligned}$$

This is the same as

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = 0. \end{aligned}$$

\(\square \)

Using the same approach, we can also characterize when (2.3) holds.

Proposition 5.7

For \(x,y \in V\), \(x \ne y\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] = \mathbb {P}_x[\tau _y \le \tau _x^+] \end{aligned}$$

if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = \mathbb {P}_x[\tau _x^+ = \tau _y = \infty ] \end{aligned}$$
(5.8)

which in turn is equivalent to

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _x^+ < \tau _y] = 0. \end{aligned}$$
(5.9)

Proof

Using (5.6) and (5.7) from the proof of Proposition 5.6, we have

$$\begin{aligned} \mathbb {P}_x[\tau _y \le \tau _x^+] = \mathbb {P}_x[\tau _x^+ = \tau _y = \infty ] + \lim _{n \rightarrow \infty } \sum _{\gamma \in \Gamma _{\overline{G_n}}(x,y;V_n)} \overline{\mathbb {P}^n_x}(\gamma ) \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] = \lim _{n \rightarrow \infty } \sum _{\gamma \in \Gamma _{\overline{G_n}}(x,y)} \overline{\mathbb {P}^n_x}(\gamma ) \end{aligned}$$

provided either one of these two limits exists.

Hence, we have convergence as desired if and only if

$$\begin{aligned} \mathbb {P}_x[\tau _x^+ = \tau _y = \infty ] = \lim _{n \rightarrow \infty } \sum _{ \begin{array}{c} \gamma \in \Gamma _{\overline{G_n}}(x,y)\\ \gamma \notin \Gamma _{\overline{G_n}}(x,y;V_n) \end{array}} \overline{\mathbb {P}^n_x}(\gamma ) \ = \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+]. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \mathbb {P}_x[\tau _x^+ = \tau _y = \infty ]&= \lim _{n \rightarrow \infty } \mathbb {P}_x[\tau _{V \setminus V_n}< \min (\tau _x^+, \tau _y)]\nonumber \\&= \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \min (\tau _x^+, \tau _y)]\nonumber \\&= \lim _{n \rightarrow \infty } \left( \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _x^+< \tau _y] + \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] \right) \end{aligned}$$
(5.10)

which implies the second claim. \(\square \)

Remark 5.8

An equivalent approach would be to consider a lazy random walk on \(G_n\) which has the same transition probabilities \(p(v,w)\) as \(\mathbb {P}_x\) for \(v \ne w\) but stays at v with probability

$$\begin{aligned} \sum _{w \in V \setminus V_n} p(v,w) = \mathbb {P}_v[\omega _1 \in V \setminus V_n]. \end{aligned}$$

In that case, the notion of “stepping out of \(V_n\)” would be modeled by the walk staying at a vertex \(v \in V_n\).

6 Embedding \(\mathcal {T}\) into Transient Graphs

We will show that whenever a graph G is transient and not a subgraph of an infinite line, one can find a subgraph of G which is similar to \(\mathcal {T}\) from Sect. 3. We will also show that this suffices for (5.5) to fail.

Proposition 6.1

Let G be a transient, connected graph which is not a subgraph of a line. Then, there exist \(x,y,z \in V\) such that \(x \ne y\), (xzy) is a path in G and

$$\begin{aligned} \mathbb {P}_z [\tau _x = \tau _y = \infty ] > 0. \end{aligned}$$
Fig. 4: Embedding \(\mathcal {T}\) into a transient graph

Proof

Since G is transient, it is infinite. If G is not a subgraph of a line, then there exists some \(z \in V\) with at least three neighbors. Let F be a set of exactly three neighbors of z. Since G is transient and F is finite, there exists \(v \in \partial _o F\) such that

$$\begin{aligned} \mathbb {P}_v[\tau _{F} = \infty ] > 0. \end{aligned}$$

If \(v = z\), we can choose \(x,y \in F\), \(x \ne y\), and get

$$\begin{aligned} \mathbb {P}_z[\tau _x = \tau _y = \infty ] \ge \mathbb {P}_v[\tau _F = \infty ] > 0. \end{aligned}$$

If \(v \ne z\), then there exists \(v' \in F\) such that \((z,v',v)\) is a path in G. Let \(x,y \in V\) be such that \(F = \left\{ x,y,v' \right\} \), see Fig. 4. It follows that

$$\begin{aligned} \mathbb {P}_z[\tau _x = \tau _y = \infty ]&\ge \mathbb {P}_z[\omega _1 = v', \omega _2 = v, \tau _x = \tau _y = \infty ] \\&= p(z,v')\cdot p(v',v) \cdot \mathbb {P}_v[\tau _x = \tau _y = \infty ]\\&\ge p(z,v')\cdot p(v',v) \cdot \mathbb {P}_v[\tau _F = \infty ] > 0. \end{aligned}$$

\(\square \)

Theorem 6.2

Let G be a transient, connected graph. Then,

$$\begin{aligned} \ \forall \ x,y \in V, ~ x \ne y: \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = 0 \end{aligned}$$

holds if and only if G is a subgraph of an infinite line.

Proof

First, assume that G is a subgraph of an infinite line and let \(x,y \in V\), \(x \ne y\). Then, for any sufficiently large \(n \in {\mathbb {N}}\), we have

$$\begin{aligned} \Gamma _{\overline{G_n}}(x,y) \setminus \Gamma _{\overline{G_n}}(x,y;V_n) = \emptyset , \end{aligned}$$

i.e., there exists no path \(x \rightarrow y\) which leaves \(V_n\) before reaching y. Hence,

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = 0. \end{aligned}$$

To prove the converse direction, suppose that G is not a subgraph of a line. By Proposition 6.1, we know that there exist distinct vertices \(x,y,z \in V\) such that (xzy) is a path in G and \(\mathbb {P}_z[\tau _x = \tau _y = \infty ] > 0\). Hence,

$$\begin{aligned} 0&< \mathbb {P}_z[\tau _x = \tau _y = \infty ] \\&= \lim _{n \rightarrow \infty } \mathbb {P}_z[\tau _{\partial _o V_n}< \min (\tau _x, \tau _y)]\\&= \lim _{n \rightarrow \infty } \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \min (\tau _x,\tau _y)]\\&= \lim _{n \rightarrow \infty } \left( \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _x< \tau _y] + \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _y< \tau _x]\right) \\&\le \limsup _n \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _x< \tau _y] + \limsup _n \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _y < \tau _x]. \end{aligned}$$

Without loss of generality assume that \(\limsup _{n \rightarrow \infty } \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _y < \tau _x] > 0\). It follows that \(\limsup _{n \rightarrow \infty }\overline{\mathbb {P}_x^n}[\tau _{V_n^*}< \tau _y < \tau _x^+] > 0\) because for all \(n \in {\mathbb {N}}\) with \(\left\{ x,y,z \right\} \subseteq V_n\), we have

$$\begin{aligned} \overline{\mathbb {P}_x^n}[\tau _{V_n^*}< \tau _y< \tau _x^+]&\ge \overline{\mathbb {P}_x^n}[\tau _z< \tau _{V_n^*}< \tau _y< \tau _x^+]\nonumber \\&= \overline{\mathbb {P}_x^n}[\tau _z< \min (\tau _{V_n^*}, \tau _x^+, \tau _y)] \cdot \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _y< \tau _x]\nonumber \\&\ge p(x,z) \cdot \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _y < \tau _x]. \end{aligned}$$
(6.1)

\(\square \)

Corollary 6.3

Let G be a transient, connected graph with \(|N(x)| < \infty \) for any \(x \in V\). Then,

$$\begin{aligned} R^F(x,y) = \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y < \tau _x^+]} \end{aligned}$$

holds for all \(x,y \in V\) with \(x \ne y\) if and only if G is a subgraph of an infinite line.

Proof

As seen in (2.2), the desired probabilistic representation (1.5) holds if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}_x^n[\tau _y< \tau _x^+] = \mathbb {P}_x[\tau _y < \tau _x^+]. \end{aligned}$$

By Proposition 5.6, this is equivalent to

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = 0 \end{aligned}$$

and the claim follows by Theorem 6.2. \(\square \)

Theorem 6.4

Let G be an infinite, connected graph. If

$$\begin{aligned} \ \forall \ x,y \in V, ~ x \ne y: \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = \mathbb {P}_x[\tau _x^+ = \tau _y = \infty ] \end{aligned}$$

holds, then G is recurrent.

Proof

By Proposition 5.7, we have

$$\begin{aligned} \ \forall \ x,y \in V, ~ x \ne y: \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _x^+ < \tau _y] = 0. \end{aligned}$$
(6.2)

Suppose that G is transient and not a subgraph of a line. Using the same arguments as in the proof of Theorem 6.2, we see that there exist distinct vertices \(x,y,z \in V\) such that \((x,z,y) \in \Gamma _G(x,y)\) and

$$\begin{aligned} \limsup _n \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _x< \tau _y] + \limsup _n \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _y < \tau _x] > 0. \end{aligned}$$

Since the same argument as in (6.1) yields

$$\begin{aligned} \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _x^+< \tau _y] \ge p(x,z) \cdot \overline{\mathbb {P}^n_z}[\tau _{V_n^*}< \tau _x < \tau _y] \end{aligned}$$

for all \(n \in {\mathbb {N}}\) with \(\left\{ x,y,z \right\} \subseteq V_n\), it follows from (6.2) that

$$\begin{aligned} \limsup _n \overline{\mathbb {P}_z^n}[\tau _{V_n^*}< \tau _x < \tau _y] = 0 \end{aligned}$$

which implies

$$\begin{aligned} \limsup _{n \rightarrow \infty } \overline{\mathbb {P}^n_z}[\tau _{V_n^*}< \tau _y < \tau _x] > 0. \end{aligned}$$

However, we also have

$$\begin{aligned} \overline{\mathbb {P}^n_y}[\tau _{V_n^*}< \tau _y^+< \tau _x] \ge p(y,z) \cdot \overline{\mathbb {P}^n_z}[\tau _{V_n^*}< \tau _y < \tau _x] \end{aligned}$$

for all \(n \in {\mathbb {N}}\) with \(\left\{ x,y,z \right\} \subseteq V_n\) by the same argument as in (6.1), and it follows that

$$\begin{aligned} \limsup _n \overline{\mathbb {P}_y^n}[\tau _{V_n^*}< \tau _y^+ < \tau _x] > 0 \end{aligned}$$

which contradicts (6.2).

Hence, if G is transient, then it must be a subgraph of a line. In this case,

$$\begin{aligned} \lim _{n \rightarrow \infty } \overline{\mathbb {P}^n_x}[\tau _{V_n^*}< \tau _y < \tau _x^+] = 0 \end{aligned}$$

follows for all \(x,y \in V\) with \(x\ne y\) by Theorem 6.2. Together with (6.2) and (5.10), this implies

$$\begin{aligned} \mathbb {P}_x[\tau _x^+ = \tau _y = \infty ] = 0 \end{aligned}$$

for all \(x,y \in V\) with \(x\ne y\). However, this contradicts the transience of G. \(\square \)

Corollary 6.5

Let G be an infinite, connected graph with \(|N(x)| < \infty \) for any \(x \in V\). Then,

$$\begin{aligned} R^F(x,y) = \frac{1}{c_x \cdot \mathbb {P}_x[\tau _y \le \tau _x^+]} \end{aligned}$$
(6.3)

holds for all \(x,y \in V\) with \(x \ne y\) if and only if G is recurrent.

Proof

If G is recurrent, we have \(\mathbb {P}_x[\tau _x^+ = \tau _y = \infty ] = 0\) for all \(x,y \in V\). Hence,

$$\begin{aligned} \mathbb {P}_x[\tau _y < \tau _x^+] = \mathbb {P}_x[\tau _y \le \tau _x^+] \end{aligned}$$

and (1.4) implies the claim.

If (6.3) holds for all \(x,y \in V\) with \(x \ne y\), then by (2.3) we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}^n_x[\tau _y < \tau _x^+] = \mathbb {P}_x[\tau _y \le \tau _x^+] \end{aligned}$$

for all \(x,y \in V\) with \(x \ne y\), and Proposition 5.7 and Theorem 6.4 imply the recurrence of G. \(\square \)

This shows that for any transient graph \(G = (V,c)\), the lower bound in (1.4) is in fact a strict inequality for some \(x,y \in V\) with \(x \ne y\).