1 Introduction

The Wiener Sausage problem [15] is concerned with determining the volume traced out (the sausage) by a d-dimensional sphere attached to a Brownian particle in d dimensions. The problem is illustrated in Fig. 1 in dimension \(d=2\). It has been studied extensively in the literature [2, 9, 14, 16, 20, 23, 24, 26, 30, 32] from a probabilistic point of view and has a very wide range of applications, such as medical physics [6, for example], chemical engineering [11, for example] or ecology [33, for example]. On the lattice, the volume of the Sausage translates to the number of distinct sites visited [29]. In this work, we present an alternative, field-theoretic approach which is particularly flexible with respect to boundary conditions and observables, with the aim of characterising and resolving the technical challenges of such an undertaking, not of improving upon the existing theory of the Wiener Sausage.

The approach has the additional appeal that, somewhat similar to percolation [25] where all non-trivial features are due to the imposed definition of clusters as being composed of occupied sites connected via open bonds between nearest neighbours, the “interaction” in the present case is one imposed in retrospect. After all, the Brownian particle studied is free and not affected by any form of interaction. Yet, the observable requires us to discount returns, i.e. loops in the trajectory of the particle, thereby inducing an interaction between the particle’s past and present.

Fig. 1

Example of the Wiener Sausage problem in two dimensions. The blue area has been traced out by the Brownian particle attached to a disc shown in red (Color figure online)

Before describing the process to be analysed in further detail, we want to point out that some of the questions pursued in the following are common to the field-theoretic re-formulation of stochastic processes [4, 5, 8, 21, 27, 28]. Against the background of a field theory of the Manna Model [7, 19] one of us recently developed, the features we wanted to understand were: (1) “Fermionic”, “excluded volume” or “hard-core interaction” processes [13, for example], i.e. processes where lattice sites have a certain carrying capacity (unity in the present case) that cannot be exceeded. (2) Systems with boundaries, i.e. lack of momentum conservation in the vertices. (2’) Related to that, how different modes couple in finite, but translationally invariant systems (periodic boundary conditions). (3) The special characteristics of the propagator of the immobile species. (4) Observables that are spatial or spatio-temporal integrals of densities.

The Wiener Sausage incorporates all of the above and because it is exactly solvable or has been characterised by very different means [2, 9, 15, 31], it also gives access to a better understanding of the renormalisation process itself. In the following section we will describe the process we are investigating and contrast it with the original Wiener Sausage. In Sect. 3 we will introduce the field-theoretic description up to tree level, which is complemented by Sect. 4, where we perform a one-loop renormalisation procedure. It will turn out that there are no further corrections beyond one loop and our perturbative results may thus be regarded as exhaustive. Sections 4.3 and 4.4 are dedicated to calculations in finite systems. Section 5 contains a discussion of the results mostly from a field-theoretic point of view, with Sect. 5.1 however focusing on a summary of this work with regard to the original Wiener Sausage problem.

2 Model

Originally, the Wiener Sausage is concerned with the moments or, more generally, statistical properties as a function of time of the volume traced out by a sphere of fixed given (say, unit) radius, which is attached to a Brownian particle. This volume is thus the set of points within a certain distance of the particle’s trajectory. Our field-theoretic approach will not recover that process, but one that can reasonably be assumed to reside in the same universality class. One may take the view that the field-theoretic description is merely a different perspective on the same phenomenon, namely the Wiener Sausage.

To motivate the field theory and link it to the original problem, we will distinguish three different models: (i) The original Wiener Sausage in terms of a sphere dragged by a Brownian particle [15], (ii) a discrete time random walker on a lattice, where the Sausage becomes the set of distinct sites visited [29], (iii) a Brownian particle in the continuum that spawns immobile offspring with a finite rate and subject to a finite carrying capacity. In the following, we will first describe how the phenomenon on the lattice, (ii), relates to the original Wiener Sausage, (i), and then how the field theory, (iii), relates to the lattice model, (ii).

The asymptote in long times t of the number of distinct sites visited by a discrete time random walker on a lattice ((ii) above) is expected to converge to that of the volume V(t) over the volume \(V_0\) of the sphere in the original process ((i) above), provided the walker returns repeatedly, so that the shape of the sphere and the structure of the lattice respectively do not enter into the shape and size of the volume visited. Frequent returns are realised in the limit of long times and below \(d=2\) dimensions. In that case, the walker on the lattice becomes a discretised version of the original Wiener Sausage, as the particle drags a sphere that is small compared to the volume traced out. Indeed, in one dimension, \(d=1\), the expected volume of the Wiener Sausage in units of the volume of the sphere is dominated by \(V(t)/V_0 \sim \sqrt{4 tD/(\pi b^2)}\), where t is the time, D is the diffusion constant and b the radius of the sphere, whereas the expected number of distinct sites visited by a random walker after n steps is dominated by \(\sqrt{8n/\pi }\) [29]. The two expressions are identical for \(t=n\) and \(D=2b^2\), the effective diffusion constant of a random walker taking one step of distance 2b in one of the 2 directions in each time step. Above \(d=2\) dimensions, the walker is free, i.e. self-intersection of the trace becomes irrelevant on larger time scales. The number of distinct sites visited and the Wiener Sausage volume therefore both scale linearly in t and n respectively. However, the (non-universal) proportionality factor, e.g. \(\lim _{t\rightarrow \infty } V(t)/(V_0 t)\) for the original Wiener Sausage, is affected by the microscopic details such as the self-intersection of the sphere or the lattice structure of the random walker.
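As a quick numerical illustration of this correspondence, the expected number of distinct sites visited in \(d=1\) can be estimated by direct simulation and compared to the asymptote \(\sqrt{8n/\pi }\) quoted above. The following Python sketch is purely illustrative; the sample sizes and the helper name mean_distinct_sites are our own choices, not part of any reference implementation.

```python
import numpy as np

def mean_distinct_sites(n_steps, n_walks=500, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the expected number of distinct sites
    visited by a 1d simple random walk after n_steps steps."""
    total = 0
    for _ in range(n_walks):
        steps = rng.choice((-1, 1), size=n_steps)
        positions = np.concatenate(([0], np.cumsum(steps)))
        total += np.unique(positions).size
    return total / n_walks

for n in (10**3, 10**4, 10**5):
    estimate = mean_distinct_sites(n)
    asymptote = np.sqrt(8 * n / np.pi)  # leading-order asymptote of [29]
    print(n, estimate, asymptote, estimate / asymptote)
```

The ratio in the last column should approach unity as n grows, up to subleading corrections.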

We proceed to relate the process on the lattice (ii) to a Brownian particle spawning immobile offspring (iii). To this end, we first describe (ii) in the language of reaction and diffusion. In (ii), an “active” particle (species “A”, the active species) performs a random walk on a lattice. Simultaneously, the particle spawns immobile offspring particles (species “B”, the blue ink traces of A shown in Fig. 1, below sometimes referred to as a “substrate particle”) at every site visited, provided that the site is not already occupied by an immobile B particle. In other words, A spawns exactly one B at every newly visited site, so that the number of B particles deposited becomes a proxy for the number of distinct sites visited. In dimensions less than 2 the A particle will return to every site visited arbitrarily often in the limit of long times. A finite spawning probability will therefore change the number of B particles deposited only at the fringes of the set of sites visited, without, however, changing the asymptotics of the number of B particles in the system as a function of time. If \(\overline{n}_B(\mathbf {x},t)\) is the number of B particles at position \(\mathbf {x}\) on the lattice and time t, the probability with which B particles are spawned by an A particle may be written as \(\overline{\gamma }(1-\overline{n}_B(\mathbf {x},t))\), so that deposition occurs with probability \(\overline{\gamma }\) if no B particle is present and not at all otherwise.

At this stage, we may introduce a carrying capacity \(\overline{n}_0\), which determines the maximum number of B particles deposited on any site, by making the spawning probability drop from \(\overline{\gamma }\) to 0 linearly in the particle number \(\overline{n}_B(\mathbf {x},t)\), i.e. like

$$\begin{aligned} \overline{\gamma }\frac{\overline{n}_0-\overline{n}_B(\mathbf {x},t)}{\overline{n}_0} \ . \end{aligned}$$
(1)

In the process (ii) discussed so far, \(\overline{n}_0\) is unity, but from what has been discussed above, \(\overline{n}_0>1\) will result in each (frequently) revisited site carrying \(\overline{n}_0\) immobile B particles. The meaning of the carrying capacity in relation to the field theory is further discussed in Sect. 2.2.

To see the relation between the third process, (iii), and the discrete time, discrete space process (ii), we first introduce continuous time in the latter. Random hopping, which used to occur once in every time step, now becomes a Poisson process with a certain rate, say H, as does the spawning of immobile offspring, which now takes place with rate \(\gamma (\overline{n}_0-\overline{n}_B(\mathbf {x},t))/\overline{n}_0\). In the limit of \(\gamma \gg H\) all distinct sites visited will carry \(\overline{n}_0\) immobile offspring. However, in dimensions \(d<2\) sites are visited repeatedly, so that even a finite deposition (attempt) rate \(\gamma \) yields the same asymptotic occupation. In dimensions \(d>2\) the number of B particles deposited will, on the other hand, be proportional to the rate \(\gamma \).

The expression \(\gamma (\overline{n}_0-\overline{n}_B(\mathbf {x},t))/\overline{n}_0\) may be written as \(\gamma - \overline{\kappa }\overline{n}_B(\mathbf {x},t)\) where \(\overline{\kappa }=\gamma /\overline{n}_0\) is a discount rate. In this interpretation (the view adopted in the perturbation theory below), deposition takes place unhindered with rate \(\gamma \) while unlimited (and thus supposedly suppressed) deposition is discounted by \(\overline{\kappa }\overline{n}_B(\mathbf {x},t)\), i.e. with a rate proportional to the occupation and inversely proportional to the carrying capacity.

It remains to take the continuum (space) limit to arrive at process (iii) to be written as a field theory, where occupation numbers \(\overline{n}_B(\mathbf {x},t)\) and \(\overline{n}_A(\mathbf {x},t)\) for B and A particles respectively turn into occupation densities, i.e. fields, namely \(n_B(\mathbf {x},t)\) and \(n_A(\mathbf {x},t)\). Moreover, the carrying capacity \(\overline{n}_0\) turns into a carrying capacity density, \(n_0\), so that the discount mentioned above is now parameterised by \(\kappa =\gamma /n_0\) (a rate per density). The deposition thus occurs with rate

$$\begin{aligned} \gamma \left( 1-\frac{n_B(\mathbf {x},t)}{n_0}\right) = \gamma - \kappa n_B(\mathbf {x},t) \ . \end{aligned}$$
(2)

The random movement of the A particle is now parameterised by the diffusion constant D, which may be obtained as the hopping rate H times the squared lattice spacing in the limit of the latter going to 0.

2.1 Intermediate Summary

The long-winded discussion above serves as a justification as to why we expect the field theory of (iii) to produce a phenomenon in the same universality class as the original Wiener Sausage. Starting from the original Wiener Sausage (i), we have motivated why the process on the lattice, (ii), can be regarded as a discretised version of (i) and introduced (iii) as its continuum approximation. In the course of the justification, we made use of some of the details of the processes involved, such as repeated returns to sites in process (ii). The field theoretic description of the universality class of the Wiener Sausage, however, may be derived without recourse to these details, simply by observing that the volume traced out by the sausage is proportional to the length of the trajectory with multiple visits discounted, corresponding to the number of immobile B particles deposited by a Brownian particle (of species A), if its spawning rate is moderated down in the presence of B particles.

To summarise, process (iii), to be cast in a Liouvillian and thus a field theory below, is defined as follows: The Brownian particle A freely diffuses with diffusion constant D and possibly subject to boundary conditions. While diffusing, the particle can spawn offspring with Poissonian rate \(\gamma \) which, however, belong to an immobile second species B. If \(n_B(\mathbf {x},t)\) is the density of these particles, the deposition is linearly regulated down in their presence, according to \(\gamma -\kappa n_B(\mathbf {x},t)\) with \(\kappa =\gamma /n_0\). Here, \(\gamma \) is the deposition rate and \(n_0\) is the carrying (density) capacity.

It is convenient in the field theory to allow for spontaneous extinction with rate (or “mass”) r. Ignoring boundary conditions, the propagator of the Brownian particle (species A, the “activity”) takes the familiar form \(1/(-\imath \omega + D \mathbf {k}^2 + r)\) where \(\omega \) and \(\mathbf {k}\) parameterise frequency and momentum (wave number) coordinates, respectively. The propagator of the immobile species takes the form \(1/(-\imath \omega + \epsilon ')\), where \(\epsilon '\) is the rate of spontaneous extinction of B particles and the limit \(\epsilon '\rightarrow 0^+\) is implied to establish causality, as often done in field theories. The key observable, corresponding to the volume of the sausage, is the total number of immobile particles in the system after a given time t, i.e. the spatial integral over \(n_B(\mathbf {x},t)\).

The engineering dimension of \(\kappa \) is a rate per density, which in comparison to the engineering dimension of the diffusion constant, an area per time, reveals the upper critical dimension of 2. Alternatively, this can be seen from the density of unhindered deposition as a function of time, \((\gamma t)/(Dt)^{d/2}\), i.e. in the absence of discounts, \(\kappa =0\) or \(n_0\rightarrow \infty \).
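The dimensional counting can be made explicit; the following display is our paraphrase of the argument rather than an equation of the model,

$$\begin{aligned} \frac{\left[ \kappa \right] }{\left[ D\right] } = \frac{(\text {length})^d/(\text {time})}{(\text {length})^2/(\text {time})} = (\text {length})^{d-2} \ , \end{aligned}$$

so that the dimensionless coupling at momentum scale \(\mathbf {k}\) is \((\kappa /D)|\mathbf {k}|^{d-2}\), which vanishes in the infrared for \(d>2\) and grows for \(d<2\), placing the upper critical dimension at \(d_c=2\).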

In what follows, we will characterise process (iii) field theoretically. The Liouvillian of the process is split into a linear part, Eq. (11), discussed in Sect. 3.2, and a non-linear part Eq. (21), discussed in Sect. 3.3. The linear part of the Liouvillian can be constructed from the propagators mentioned above and vice versa. The Liouvillian will enter into a path-integral, which can be used to generate all correlation and vertex functions. The path integral itself is to be evaluated perturbatively in the non-linearity, which, for example, instantly indicates that the non-linearity has no bearing on the propagator of particle A.

Above two dimensions, the non-linearity causes an ultraviolet divergence, Eq. (56), which has its origin in the increasingly sharp short-time divergence of the density of a random walker at its origin. However, in dimensions above 2 the non-linearity is infrared irrelevant and so all long-time, long-range observables are covered by the tree-level. In dimensions below 2 no ultraviolet divergence occurs and the infrared can be regularised using finite masses r and \(\epsilon '\). We will therefore work in dimensions \(2-\epsilon \), with \(\epsilon >0\), known as dimensional regularisation (of the ultraviolet).

Initially, the density fields will be studied on an infinite domain without boundaries. However, in Sect. 4.3 we will also consider an infinite slab and in Sect. 4.4 an infinite cylinder. We will use Fourier transforms to write the fields in the infinite domain and suitably chosen Fourier series in the presence of open (Dirichlet) or cylindrical boundaries. These transforms and series are discussed in Sect. 3.1 and used later in the respective sections.

We will demonstrate in the following that the field theory recovers exact results of the original Wiener Sausage as far as universal exponents are concerned, but also with respect to some amplitudes (namely the leading order term of the volume of the sausage in one dimension as a function of time and the leading order of the volume as a function of the system size of the infinite slab). Firstly, the present results confirm that the logistic term Eq. (2) is capable of capturing the constraints due to the carrying capacity. At a more technical level, the calculations for (partially) finite systems (infinite slab and cylinder) involve different propagators, which under renormalisation can lead to new non-linearities. Similar to the classic case discussed in [18], this problem will, however, be avoided. The results for these more complicated boundary conditions show very interesting crossover behaviour. Finally, from a physical point of view, it is particularly interesting that the infrared regularisation of the immobile species, \(\epsilon '\), a necessary ingredient to preserve causality in the absence of diffusion, can in principle be used to regularise the theory as a whole, i.e. without the need of a particle mass r.

Before introducing the field theory of the present model in Sect. 3, we briefly discuss the intricacies of the fermionic nature of the B particles.

2.2 Finite Carrying Capacity

To fully understand the effect and consequences of the carrying capacity, it is best to reconsider the process on the lattice. A carrying capacity of \(\overline{n}_0=1\) in Eq. (1) switches off the deposition of B particles in their presence in a rather dramatic fashion, implementing a constraint that is normally referred to as fermionic, because there is never more than one B particle deposited on a site. Raising \(\overline{n}_0\) allows the spawning rate to drop linearly in the occupation in an otherwise bosonic setup. While this may raise suspicion and invite the criticism of a fudge, as demonstrated below, such a bosonic regularisation may be interpreted as the fermionic case on a lattice with a particular connectivity, i.e. the attempted regularisation is the original, fermionic case in disguise, suggesting that no such regularisation is needed.

Some authors [34, and references therein] avoid terms like Eqs. (1) or (2) by expanding a suitable expression for \(\delta _{1,\overline{n}_B(\mathbf {x},t)}\), a Kronecker \(\delta \)-function. Equations (1) and (2) are not leading order terms in an expansion. For \(\overline{n}_0=1\) and before taking any other approximation (e.g. continuous space and density or removing irrelevant terms in the field theory) a logistic term like (1) is a representation of the original process as exact as one involving the Kronecker \(\delta \)-function. For \(\overline{n}_0>1\) a logistic term gives rise to a model that may be strictly different compared to one with a sharp carrying capacity implemented by, say, a Heaviside step-function, \(\theta (\overline{n}_0-\overline{n}_B(\mathbf {x},t))\), but nonetheless one that may be of equal interest.

Large \(\overline{n}_0\), on the other hand, softens the cutoff, because spawning does not drop suddenly from \(\overline{\gamma }\) to 0 but is suppressed more and more gradually. One might therefore be inclined to study the problem in the limit of large \(\overline{n}_0\). On closer inspection, however, it turns out that such increased \(\overline{n}_0\) does not present a qualitative change of the problem: Having \(\overline{n}_0>1\) is as if each site was divided into \(\overline{n}_0\) spaces. When the Brownian particle jumps from site to site it arrives in one of those \(\overline{n}_0\) spaces, only \(\overline{n}_0-\overline{n}_B(\mathbf {n},t)\) of which are empty, so that an offspring can be left behind. The process with carrying capacity \(\overline{n}_0>1\) therefore corresponds to the process with a carrying capacity of unity per space on a lattice where \(\overline{n}_B(\mathbf {n},t)\) describes the number of immobile offspring in each “nest” or column of such spaces, as illustrated in Fig. 2. In effect, the carrying capacity \(\overline{n}_0>1\) is implemented per column, leaving the original fermionic constraint of at most one offspring per space (or site) in place. In other words, even when a carrying capacity \(\overline{n}_0\gg 1\) is introduced to smoothen the fermionic constraint, it is still nothing else but the original constraint \(\overline{n}_0=1\) on a different lattice. This led us to believe that there is no qualitative difference between \(\overline{n}_0=1\) and any other finite value of \(\overline{n}_0\). In the following, the field theory will retain the carrying capacity \(n_0\) because it is an interesting parameter (\(n_0\rightarrow \infty \) switches the interaction off) and a “marker” of the interaction. It may be set to any positive value.
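The insensitivity to \(\overline{n}_0\) is also easy to probe numerically. The following discrete-time sketch of process (iii) in \(d=1\) is our own illustration, with arbitrary rates and run lengths; for every finite \(\overline{n}_0\) the total number of deposited B particles grows like \(\sqrt{t}\) and only the amplitude changes.

```python
import numpy as np
from collections import defaultdict

def total_B(n_steps, gamma_bar=0.5, n0=1, seed=1):
    """Discrete-time caricature of process (iii) in d=1: at each step the
    walker deposits a B particle at its current site with probability
    gamma_bar * (n0 - n_B) / n0, cf. Eq. (1), and then hops left or right."""
    rng = np.random.default_rng(seed)
    steps = rng.choice((-1, 1), size=n_steps)
    attempts = rng.random(n_steps)
    n_B = defaultdict(int)  # immobile B occupation per site
    x = 0
    for step, u in zip(steps, attempts):
        if u < gamma_bar * (n0 - n_B[x]) / n0:
            n_B[x] += 1
        x += step
    return sum(n_B.values())

for n0 in (1, 4, 16):
    counts = [total_B(10**5, n0=n0, seed=s) for s in range(10)]
    # total number of B particles grows like sqrt(t); amplitude roughly
    # proportional to n0 deep inside the visited region
    print(n0, np.mean(counts), np.mean(counts) / np.sqrt(10**5))
```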

3 Field Theory

In order to cast the model introduced above in a field-theoretic language, we take the Doi–Peliti [8, 21] approach without going through too many technical details. There are a number of reviews and extremely useful tutorials available [4, 5].

Fig. 2

A one dimensional lattice of size L and carrying capacity \(\overline{n}_0=4\) corresponds to the lattice shown above, where the carrying capacity of the former is implemented by expanding each site into a column of \(\overline{n}_0\) sites. The Brownian particle can jump from every site to all sites in the neighbouring columns. In the new lattice, the carrying capacity per site is unity, the carrying capacity per column is \(\overline{n}_0\)

In the following the mobile particle is of species “A”, performing Brownian motion with (nearest neighbour) hopping rate H, which translates to diffusion constant \(D=H/(2d)\) on a d-dimensional hypercubic lattice. We expect universal scaling in the large time and space limit. To regularise the infrared, we also introduce an extinction rate r. A’s creation operator is , its annihilation operator is . The immobile species is “B”, spawned with rate \(\gamma \) by species A. Its creation operator is , its annihilation operator is , both commuting with the creation and annihilation operators of species A. The immobile species goes extinct with rate \(\epsilon '\), which allows us to have a Fourier transform and to restore causality (possible annihilation, i.e. existence, only after creation) even without spontaneous extinction, once we take the limit \(\epsilon '\rightarrow 0\).

3.1 Fourier Transform

After replacing the operators by real fields, the Gaussian (harmonic) part of the resulting path integral can be performed, once the fields have been Fourier transformed. We will use the sign and notational convention of

(3)

The field \(\phi (\mathbf {k},\omega )\) corresponds to the annihilator of the active particles, the field \(\tilde{\phi }(\mathbf {k},\omega )\) to the Doi-shifted creator . Correspondingly, \(\psi (\mathbf {k},\omega )\) and \(\tilde{\psi }(\mathbf {k},\omega )\) replace and , respectively.

It is instructive to consider a second set of orthogonal functions at this stage. Placing the process in a space that has a finite extension along one axis means that boundary conditions have to be met, which is more conveniently done in one eigensystem rather than another. Below, we will consider an infinite slab with finite thickness L, i.e. d-dimensional spaces which are infinitely extended (using the orthogonal functions and transforms introduced above) in \(\tilde{d}=d-1\) dimensions, while along one axis, the boundaries are open, i.e. the particle density of species A vanishes at the (two parallel, \(\tilde{d}\)-dimensional) boundaries and outside. This Dirichlet boundary condition is best met using eigenfunctions \(\sqrt{2/L} \sin (q_n z)\) with \(q_n=\pi n/L\) and \(n=1,2,\ldots \), which form a complete, orthonormal set because

$$\begin{aligned} \frac{2}{L} \int _0^L{\!\mathrm {d}z\,}\, \sin (q_n z) \sin (q_m z) = \delta _{n,m} \ . \end{aligned}$$
(4)

In passing, we have introduced the finite linear length of the space, L. Purely for ease of notation and in order to keep expressions in finite systems dimensionally as similar as possible to those in infinite ones, Eq. (3), we will transform as follows:

$$\begin{aligned} \phi (z)= & {} \frac{2}{L}\sum _{n=1}^\infty \sin (q_n z) \phi _n\end{aligned}$$
(5a)
$$\begin{aligned} \phi _n= & {} \int _0^L {\!\mathrm {d}z\,} \sin (q_n z) \phi (z) \end{aligned}$$
(5b)

using

$$\begin{aligned} \frac{2}{L}\sum _{n=1}^\infty \sin (q_n y) \sin (q_n z)= & {} \delta (z-y)\end{aligned}$$
(6a)
$$\begin{aligned} \int _0^L {\!\mathrm {d}z\,} \sin (q_n z) \sin (q_m z)= & {} \frac{L}{2} \delta _{m,n} \ , \end{aligned}$$
(6b)

where \(\delta (z-y)\) is the usual Dirac \(\delta \) function for \(z-y\in (0,L)\) but to be replaced by the periodic Dirac comb \(\sum _{m=-\infty }^\infty \delta (z-y+mL)\) for arbitrary \(z-y\). For ease of notation, we have omitted the time dependence of \(\phi (\mathbf {x},t)\) as well as \(\tilde{d}\) components other than z. The other fields, \(\tilde{\phi }\), as well as \(\psi \) and \(\tilde{\psi }\) transform correspondingly. The spatial transform of the latter is subject to some convenient choice, because the immobile species is not constrained by a boundary condition.

It will turn out that, as expected in finite size scaling, the lowest mode \(q_1=\pi /L\) plays the rôle of a temperature-like variable, controlling the distance to the critical point.

We will also briefly study systems which are infinitely extended in \(\tilde{d}\) dimensions and periodically closed in one. In the periodic dimension, the spectrum of conveniently chosen eigenfunctions \(\sqrt{1/L} {\text {exp}} \left( \imath k_n y\right) \) is discrete with \(k_n=2\pi n/L\) and \(n\in \mathbb {Z}\),

$$\begin{aligned} \frac{1}{L} \int _0^L{\!\mathrm {d}y\,}\, e^{\imath k_n y} e^{\imath k_m y} = \delta _{n+m,0} \ . \end{aligned}$$
(7)

Again, we transform slightly asymmetrically (in L),

$$\begin{aligned} {} \phi (z)= & {} \frac{1}{L}\sum _{n=-\infty }^\infty e^{\imath k_n z} \phi _n\end{aligned}$$
(8a)
$$\begin{aligned} \phi _n= & {} \int _0^L {\!\mathrm {d}z\,} e^{-\imath k_n z} \phi (z) \end{aligned}$$
(8b)

with

$$\begin{aligned} \frac{1}{L}\sum _{n=-\infty }^\infty e^{\imath k_n y} e^{-\imath k_n z}= & {} \delta (z-y) \end{aligned}$$
(9a)
$$\begin{aligned} \int _0^L {\!\mathrm {d}z\,} e^{\imath k_n z} e^{-\imath k_m z}= & {} L \delta _{m,n} \ , \end{aligned}$$
(9b)

where again \(\delta (z-y)\) is to be replaced by a Dirac comb if considered for \(z-y\notin (0,L)\). Again, time and \(\tilde{d}\) spatial coordinates were omitted. Similar transforms apply to the other fields.

There is a crucial difference between eigenfunctions \({\text {exp}} \left( \imath k_ny\right) \) and \(\sin (q_nz)\), as the former conserve momentum in the vertices, whereas the latter do not:

$$\begin{aligned} \int _0^L {\!\mathrm {d}y\,} e^{\imath k_n y} e^{\imath k_m y} e^{\imath k_{\ell } y} = L \delta _{n+m+\ell ,0} \end{aligned}$$
(10a)

while

$$\begin{aligned} \int _0^L {\!\mathrm {d}y\,} \sin (q_n y) \sin (q_m y) \sin (q_{\ell } y) = L \epsilon _{nm{\ell }}\ , \end{aligned}$$
(10b)

where

$$\begin{aligned} \epsilon _{nm{\ell }} = \left\{ \begin{array}{ll} \frac{1}{2\pi } \left( \frac{1}{n+m-\ell } +\frac{1}{n-m+\ell } +\frac{1}{-n+m+\ell } -\frac{1}{n+m+\ell } \right) &{}\quad \text {for}\quad \ell +m+n \quad \mathrm{odd}\\ 0 &{}\quad \text {otherwise} \end{array} \right. \end{aligned}$$
(10c)

with \(q_n=\pi n/L>0\), \(n\in \mathbb {N}^+\) and \(k_n=2\pi n/L\), \(n\in \mathbb {Z}\) (sign unconstrained) as introduced above.
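Equation (10c) lends itself to a direct numerical check; the short script below is our own sketch, with an arbitrary L and a handful of index triplets, comparing quadrature of the triple product of sines with \(L\epsilon _{nm\ell }\).

```python
import numpy as np
from scipy.integrate import quad

L = 2.7  # arbitrary slab width for the check

def eps(n, m, l):
    """epsilon_{nml} of Eq. (10c); vanishes unless n + m + l is odd."""
    if (n + m + l) % 2 == 0:
        return 0.0
    return (1 / (2 * np.pi)) * (1 / (n + m - l) + 1 / (n - m + l)
                                + 1 / (-n + m + l) - 1 / (n + m + l))

def triple_sine_integral(n, m, l):
    q = np.pi / L  # q_n = pi * n / L
    f = lambda y: np.sin(q * n * y) * np.sin(q * m * y) * np.sin(q * l * y)
    return quad(f, 0, L)[0]

for n, m, l in [(1, 1, 1), (1, 2, 3), (3, 5, 7), (2, 2, 3)]:
    # the two printed numbers agree, including the vanishing case n+m+l even
    print((n, m, l), triple_sine_integral(n, m, l), L * eps(n, m, l))
```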

Having made convenient choices such as Eq. (5), we will carry on using the Fourier transforms of the bulk Eq. (3), which is easily re-written for Dirichlet boundary conditions using Eq. (5), simply by replacing each integral over by \((2/L)\sum _n\) and similar for periodic boundary conditions, Eq. (8). Only the non-linearity, Sect. 3.3, is expected to require further careful analysis as \(\epsilon _{nm{\ell }}\) of Eq. (10b) is structurally far more demanding than \(\delta _{n+m+\ell ,0}\) of Eq. (10a).

3.2 Harmonic Part

Following the normal procedure [3, for example], the harmonic part \(\mathcal {L}_0\) of the Liouvillian \(\mathcal {L}=\mathcal {L}_0+\mathcal {L}_1\) reads

$$\begin{aligned} \mathcal {L}_0 = \tilde{\phi }\partial _t \phi + D \nabla \tilde{\phi }\nabla \phi + r \tilde{\phi }\phi + \tilde{\psi }\partial _t \psi + \epsilon ' \tilde{\psi }\psi \ . \end{aligned}$$
(11)

The non-linear part \(\mathcal {L}_1\), Eq. (21), is discussed in Sect. 3.3. The harmonic part, \(\mathcal {L}_0\), describes the diffusive evolution of the density field of A particles, represented by \(\phi \) and \(\tilde{\phi }\), which diffuse with diffusion constant D and go spontaneously extinct with rate r, as well as the evolution of the immobile B particles, represented by densities \(\psi \) and \(\tilde{\psi }\), which do not diffuse but go extinct with rate \(\epsilon '\).

After Fourier transforming and without further ado the harmonic part of the path integral

can be performed, producing the two bare propagators


where and . Below, we will refer to the propagator of the diffusive particles as the “activity propagator” and to the one for the immobile species as the “substrate propagator” (or “activity” and “substrate legs”, respectively). As the propagation of the active particles is unaffected by the deposition of immobile particles, the activity propagator does not renormalise, \(\left\langle \phi \tilde{\phi } \right\rangle =\left\langle \phi \tilde{\phi } \right\rangle _0\). The same is true for the immobile species which, although possibly spawned by active particles, remains inert once deposited, \(\left\langle \psi \tilde{\psi } \right\rangle =\left\langle \psi \tilde{\psi } \right\rangle _0\).

The Fourier transform Eq. (3) of the latter produces \(\delta (\mathbf {x}-\mathbf {x}')\theta (t-t')\) in the limit \(\epsilon '\rightarrow 0\), with \(\theta (x)\) denoting the Heaviside \(\theta \)-function as one would expect (with \(\mathbf {x},t\) being the position and time of “probing” and \(\mathbf {x}',t'\) position and time of creation). At this stage, there is no interaction and no transmutation, \(\left\langle \tilde{\psi }(\mathbf {k},\omega )\phi (\mathbf {k}',\omega ') \right\rangle =0\). Diffusing particles A happily co-exist with immobile ones.

3.3 Non-linearity

The harmonic part of the Liouvillian, \(\mathcal {L}_0\), discussed in the preceding section covers the diffusive motion and spontaneous extinction of A particles (fields \(\phi \) and \(\tilde{\phi }\)) and the spontaneous extinction of the resting B particles (fields \(\psi \) and \(\tilde{\psi }\)). In the following, we will discuss the non-linear (interacting) part of the Liouvillian, \(\mathcal {L}_1\), which introduces the spawning of B particles by the A particle, subject to the constraint of the finite carrying capacity, which establishes an effective interaction between previously deposited particles and any new particle to be deposited.

As discussed in Sect. 2.2, spawning is moderated down in the presence of B particles to \(\gamma (1-n_B(\mathbf {x},t)/n_0)\). At the level of a master equation, this conditional deposition gives a non-linear contribution of

(13)

where, for convenience, the problem is considered for individual lattice sites \(\mathbf {n}\) which contain \(n_A=n_A(\mathbf {n})\) particles of species A and \(n_B\) particles of species B. The contributions by harmonic terms, namely diffusion of A particles and spontaneous extinction of both, as discussed in the previous section, have been omitted. The first term in the sum describes the creation of a B particle in the presence of \(n_B-1\) of those to make up \(n_B\) in total, the second term makes the B particle number exceed \(n_B\), \(n_B\rightarrow n_B+1\). If

(14)

where the sum runs over all states of the entire lattice, then the conditional deposition produces the contribution

(15)

where we have used the commutator, and the Doi-shifted creation operator, , as well as the particle number operator .

Although using Doi-shifted operators throughout gives rise to a rather confusing set of six non-linear vertices, the resulting field theory does not turn out as messy as one may expect. However, we need to allow for these vertices renormalising differently, and therefore introduce six different couplings below.

Replacing by in the first term of the sum generates the bilinearity , which we will parameterise in the following by \(\tau \), corresponding to a transmutation of an active particle to an immobile one. Transmutation is obviously spurious; it does not actually take place but will allow us in the Doi-shifted setup (and thus with the corresponding left vacuum [4, 5]) to probe for substrate particles (using ) after creating an active one (using ) without having to probe for the latter (using ). There is no advantage in moving that term to the bilinear part \(\mathcal {L}_0\), because the determinant of the bilinear matrix M in

$$\begin{aligned} \mathcal {L}'_0= \left( \begin{array}{l} \tilde{\phi }\\ \tilde{\psi }\end{array} \right) ^T \underbrace{ \left( \begin{array}{cc} -\imath \omega + D \mathbf {k}^2 + r &{} 0 \\ -\tau &{} -\imath \omega + \epsilon ' \end{array} \right) }_{M} \left( \begin{array}{l} \phi \\ \psi \end{array} \right) \end{aligned}$$
(16)

is unaffected by \(\tau \ne 0\) and therefore none of the propagators mentioned above change. One may therefore treat all terms (including the bilinear transmutation) resulting from the interaction perturbatively, with transmutation

(17)

that is present regardless of the carrying capacity \(n_0\). At this stage it is worth noting that the sign of \(\tau \) (and of \(\sigma \) below) is positive, i.e. the perturbative expansion will generate terms with pre-factors \(\tau \) (and \(\sigma \) below).

The only other non-linearity independent of the carrying capacity \(n_0\) is the vertex (or \(\tilde{\psi }\tilde{\phi }\phi \)), in the following parameterised by the coupling constant \(\sigma \). Diagrammatically, it may be written as the (amputated) vertex

(18)

and can be thought of as spawning, rather than transmutation parameterised by \(\tau \).

According to Eq. (15), there are four non-linearities with bare-level couplings of \(\gamma /n_0\), generated by replacing the regular creation operators by their Doi-shifted counterparts, and , in . Each spawns at least one substrate particle, but more importantly, it also annihilates at least one substrate particle as it “probes for” its presence. The two simplest and most important (amputated) vertices are the ones introduced above with a “wriggly tail added”,

(19)

where we have also indicated their coupling. By mere inspection, it is clear that those two vertices can be strung together, renormalising the left one. In fact, \(\kappa \) is the one and only coupling that renormalises all non-linearities (\(\sigma ,\lambda ,\kappa ,\chi \), and \(\xi \)), including itself.

Two more vertices are generated,

(20)

which become important only for higher order correlation functions of the substrate particles, because there is no vertex annihilating more than one of them—correlations between substrate particles are present but not relevant for the dynamics. Notably, there is no vertex that has more incoming than outgoing substrate legs. Finally, we note that the sign with which \(\lambda \), \(\kappa \), \(\chi \) and \(\xi \) are generated in the perturbative expansion is negative.

For completeness, we state the interaction part of the Liouvillian (see Eq. (11))

$$\begin{aligned} \mathcal {L}_1 = - \tau \tilde{\psi }\phi - \sigma \tilde{\psi }\tilde{\phi }\phi + \lambda \tilde{\psi }\psi \phi + \kappa \tilde{\phi }\tilde{\psi }\psi \phi + \chi \tilde{\psi }^2 \psi \phi + \xi \tilde{\phi }\tilde{\psi }^2 \psi \phi \ , \end{aligned}$$
(21)

with

$$\begin{aligned} \tau =\sigma =\gamma \qquad \text {and} \qquad \lambda =\kappa =\chi =\xi =\gamma /n_0 \end{aligned}$$
(22)

at bare level.

3.4 Dimensional Analysis

Determining the engineering dimensions of the couplings introduced above is part of the “usual drill” and will allow us to determine the upper critical dimension and to remove irrelevant couplings. Without dwelling on details, analysis of the harmonic part, Eq. (11), reveals that \(\left[ D\right] =\mathtt {L}^2/\mathtt {T}\) (as expected for a diffusion constant) and \(\left[ r\right] =\left[ \epsilon '\right] =1/\mathtt {T}\) (as expected for all extinction rates), with \(\left[ x\right] =\mathtt {L}\), a length, and \(\left[ t\right] =\mathtt {T}\), a time. In real time and real space, \(\left[ \tilde{\phi }\phi \right] =\left[ \tilde{\psi }\psi \right] =\mathtt {L}^{-d}\).

Performing the Doi-shift in Eq. (15) first and introducing couplings for the non-linearities as outlined above allows for two further independent dimensions, say spawning \(\left[ \sigma \right] =\mathtt {A}\) and transmutation \(\left[ \tau \right] =\mathtt {B}\) (both originally equal to the rate \(\gamma \)), which implies \(\left[ \lambda \right] =\mathtt {A}^{-1}\mathtt {B}\mathtt {L}^d\mathtt {T}^{-1}\), \(\left[ \kappa \right] =\mathtt {L}^d\mathtt {T}^{-1}\), \(\left[ \chi \right] =\mathtt {L}^d\mathtt {B}\), \(\left[ \xi \right] =\mathtt {L}^d\mathtt {A}\), as well as \(\left[ \psi \right] =\mathtt {T}\mathtt {A}\mathtt {L}^{-d}\), \(\left[ \tilde{\psi }\right] =\mathtt {A}^{-1}\mathtt {T}^{-1}\), \(\left[ \phi \right] =\mathtt {A}\mathtt {B}^{-1}\mathtt {L}^{-d}\), \(\left[ \tilde{\phi }\right] =\mathtt {A}^{-1}\mathtt {B}\) in real space and time.

As far as the field theory is concerned, the only constraint is to retain the diffusion constant on large scales, which implies \(\mathtt {T}=\mathtt {L}^2\). As a result, the non-linear coupling \(\kappa \) (originally \(\gamma /n_0\)) becomes irrelevant in dimensions \(d>d_c\), as expected with upper critical dimension \(d_c=2\). The two independent engineering dimensions \(\mathtt {A}\) and \(\mathtt {B}\) will be used in the analysis below in order to maintain the existence of the associated processes of transmutation and spawning, which are expected to govern the tree level. If we were to argue that they become irrelevant above a certain upper critical dimension, the density of offspring and its correlations would necessarily vanish everywhere.

Even though we may want to exploit the ambiguity in the engineering dimensions [17, 28] in the scaling analysis (however, consistent with the results above), in the following section we will make explicit use of the Doi-shift when deriving observables, which means that both \(\tilde{\phi }\) and \(\tilde{\psi }\) are dimensionless (in real space and time), \(\left[ \tilde{\phi }\right] =\left[ \tilde{\psi }\right] =1\), which implies \(\mathtt {A}=\mathtt {T}^{-1}\) and \(\mathtt {A}=\mathtt {B}\). As expected, \(\tau \) is then a rate (namely the transmutation rate) and so is \(\sigma \), \(\left[ \tau \right] =\left[ \sigma \right] =1/\mathtt {T}\). Also not unexpectedly, the remaining four couplings all end up having the same engineering dimension, \(\left[ \lambda \right] =\left[ \kappa \right] =\left[ \chi \right] =\left[ \xi \right] =\mathtt {L}^d\mathtt {T}^{-1}\), as suggested by \(\gamma /n_0\), which is a rate per density (\(\gamma \) being the spawning rate and \(n_0\) turning into a carrying capacity density as we take the continuum limit).

3.5 Observables at Tree Level: Bulk

The aim of the present work is to characterise the volume of the Wiener Sausage field-theoretically. As discussed in Sect. 2, this is done not in terms of an actual spatial volume, but rather in terms of the number of spawned immobile offspring. In this section, we define the relevant observables in terms of the fields introduced above. This is best done at tree level, presented in the following, before considering loops and the subsequent renormalisation. While the tree level is the theory valid above the upper critical dimension, it is equivalently the theory valid in the absence of any physical interaction, i.e. the theory of \(n_0\rightarrow \infty \). We introduce the observables first in the presence of a mass r, which amounts to removing the particle after a time of 1/r on average.

If \(v^{(1)}(\mathbf {x};\mathbf {x}^*)\) is the density of substrate particles at \(\mathbf {x}\) in a particular realisation of the process at the end of the lifetime of the diffusive particle which started at \(\mathbf {x}^*\), the volume of the Sausage is \(V=\int {\!\mathrm {d}^dx\,} v^{(1)}(\mathbf {x};\mathbf {x}^*)\). The ensemble average is then just \(\left\langle V \right\rangle =\int {\!\mathrm {d}^dx\,} \left\langle v^{(1)}(\mathbf {x};\mathbf {x}^*) \right\rangle \), where \(\left\langle \bullet \right\rangle \) denotes the ensemble average of \(\bullet \) and the dependence on \(\mathbf {x}^*\) drops out in the bulk. Alternatively (as done below), one may consider a distribution \(d(\mathbf {x}^*)\) of initial starting points \(\mathbf {x}^*\), over which an additional expectation, denoted by an overline, \(\overline{\bullet }\), has to be taken.

Higher moments require higher order correlation functions

$$\begin{aligned} \left\langle V^n \right\rangle = \int {\!\mathrm {d}^dx_1\,}\ldots {\!\mathrm {d}^dx_n\,} \left\langle \overline{v}^{(n)}(\mathbf {x}_1,\ldots ,\mathbf {x}_n) \right\rangle \ , \end{aligned}$$
(23)

where

$$\begin{aligned} \left\langle \overline{v}^{(n)}(\mathbf {x}_1,\ldots ,\mathbf {x}_n) \right\rangle = \int {\!\mathrm {d}\mathbf {x}^*\,} d(\mathbf {x}^*) \left\langle v^{(n)}(\mathbf {x}_1,\ldots ,\mathbf {x}_n;\mathbf {x}^*) \right\rangle \end{aligned}$$
(24)

and \(\left\langle v^{(n)}(\mathbf {x}_1,\ldots ,\mathbf {x}_n;\mathbf {x}^*) \right\rangle \) denotes the n-point correlation function of the substrate particle density generated by a single diffusive particle started at \(\mathbf {x}^*\). Equivalently in \(\mathbf {k}\)-space

Given that is the particle density operator, that correlation function is the expectation

$$\begin{aligned}&\left\langle v^{(n)}(\mathbf {x}_1,\ldots ,\mathbf {x}_n;\mathbf {x}^*) \right\rangle = \lim _{t_1,t_2,\ldots ,t_n\rightarrow \infty } \left\langle \psi ^{\dagger }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1) \right. \nonumber \\&\quad \left. \times \psi ^{\dagger }(\mathbf {x}_2,t_2)\psi (\mathbf {x}_2,t_2) \times \ldots \times \psi ^{\dagger }(\mathbf {x}_n,t_n)\psi (\mathbf {x}_n,t_n) \times \phi ^{\dagger }(\mathbf {x}^*,t_0) \right\rangle \end{aligned}$$
(25)

with only a single initial diffusive particle started at \(\mathbf {x}^*,t_0\). The multiple limits on the right are needed so that we measure the deposition left behind by the active particle once its lifetime has elapsed. As the present phenomenon is time-homogeneous, \(t_0\) will not feature explicitly, but rather enter in the difference \(t_i-t_0\), each of which diverges as the limits are taken. In principle, only a single limit is needed, \(t=t_1=t_2=\ldots =t_n\rightarrow \infty \), but as discussed below, equal times leave some ambiguity that can be avoided.

For \(n=1\), the relevant correlation function is \(\left\langle \psi ^{\dagger }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1) \phi ^{\dagger }(\mathbf {x}^*,0) \right\rangle \), which leaves us with four terms after replacing by Doi-shifted creation operators,

$$\begin{aligned}&\left\langle \psi ^{\dagger }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1) \phi ^{\dagger }(\mathbf {x}^*,0) \right\rangle \nonumber \\&\quad =\left\langle \psi (\mathbf {x}_1,t_1) \right\rangle +\left\langle \tilde{\psi }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1) \right\rangle \nonumber \\&\qquad +\left\langle \psi (\mathbf {x}_1,t_1)\tilde{\phi }(\mathbf {x}^*,0) \right\rangle +\left\langle \tilde{\psi }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1)\tilde{\phi }(\mathbf {x}^*,0) \right\rangle \ . \end{aligned}$$
(26)

Pure annihilation, \(\left\langle \psi \right\rangle \), vanishes—it is the expected density of substrate particles in the vacuum, as no active particle has been created first. The expectation \(\left\langle \tilde{\psi }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1) \right\rangle \propto \theta (t_1-t_1)\) vanishes as well, for \(\theta (0)=0\) (effectively the Itō interpretation of the time derivatives, [27]) is needed in order to make the Doi–Peliti approach meaningful. The field \(\tilde{\psi }(\mathbf {x}_1,t_1)\) in the density \(\tilde{\psi }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1)\) is meant to re-create the particle annihilated by the operator corresponding to \(\psi (\mathbf {x}_1,t_1)\). For the same reason, \(\left\langle \tilde{\psi }(\mathbf {x}_1,t_1)\psi (\mathbf {x}_1,t_1)\tilde{\phi }(\mathbf {x}^*,0) \right\rangle \) vanishes, even when a vertex,

is available. In fact, to contribute, any occurrence of \(\tilde{\psi }(\mathbf {x}_1,t_1)\) requires an occurrence of \(\psi (\mathbf {x}_2,t_2)\) with \(t_2>t_1\). What remains of Eq. (26) is therefore only

(27)

Taking the Fourier transform of Eq. (17),

(28)

reveals the general mechanism of

(29)

provided \(g(\omega )\) itself has no pole at the origin, as otherwise additional residues that survive the limit \(t\rightarrow \infty \) would have to be considered.

In Eq. (28) the starting point of the walker still enters via \(\mathbf {k}_0\). If that “driving” is done with a distribution of initial starting points \(d(\mathbf {k}_0)\), the resulting deposition is given by

(30)

where the little circle on the right indicates the “driving” which “supplies” a certain momentum distribution. More specifically, an initial distribution of \(d(\mathbf {x})=\delta (\mathbf {x}-\mathbf {x}^*)\) has Fourier components

$$\begin{aligned} \int {\!\mathrm {d}^dx\,} d(\mathbf {x}) e^{-\imath \mathbf {k}_0 \mathbf {x}} =e^{-\imath \mathbf {k}_0 \mathbf {x}^*} =d(\mathbf {k}_0) \end{aligned}$$

and the resulting deposition is distributed according to

$$\begin{aligned} \left\langle \overline{v}^{(1)}(\mathbf {k};\mathbf {x}^*) \right\rangle = \tau e^{-\imath \mathbf {k}\mathbf {x}^*}/(D\mathbf {k}^2+r)\ . \end{aligned}$$

In an infinite system, the position of the initial driving should not and will not enter—to calculate the volume of the Sausage, we will evaluate at \(\mathbf {k}=0\). The same applies to the time at which the initial distribution of particles is deposited. In principle it would give rise to an additional factor of \({\text {exp}} \left( -\imath \omega t^*\right) \), but we will evaluate at \(\omega =0\).

Evaluating at \(\mathbf {k}=0\) in the bulk produces the volume integral over the offspring distribution, i.e. the expected volume V of the Sausage, in the absence of a limiting carrying capacity,

$$\begin{aligned} \left\langle V \right\rangle =\left\langle \overline{v}^{(1)}(\mathbf {k}=0;\mathbf {x}^*) \right\rangle =\frac{\tau }{r}\ , \end{aligned}$$
(31)

which corresponds to the naïve expectation of the (number) deposition rate \(\tau \) multiplied by the survival time of the random walker, 1/r. From this expression it is also clear that the “volume” calculated here is, as expected, dimensionless.
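Spelled out, this naïve expectation is the Poissonian deposition rate averaged over the exponentially distributed lifetime of the walker; the following line is our paraphrase,

$$\begin{aligned} \left\langle V \right\rangle = \int _0^\infty {\!\mathrm {d}t\,}\, r e^{-rt}\, \tau t = \frac{\tau }{r} \ . \end{aligned}$$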

Following similar arguments for \(n=2\), the relevant diagrams are

(32)

where the symbol represents \(\tilde{\psi }(\mathbf {x},t)\psi (\mathbf {x},t)\), which is a convolution in Fourier space,

(33)

so that

(34)

which in real space and time gives a \(\delta (\mathbf {x}_2-\mathbf {x}_1)\delta (\mathbf {x}_1-\mathbf {x}_0)\theta (t_2-t_1)\theta (t_1-t_0)\), corresponding to an immobile particle deposited at \(t_0\) and \(\mathbf {x}_0\), found later at time \(t_1>t_0\) and \(\mathbf {x}_1=\mathbf {x}_0\) and left there to be found again at time \(t_2>t_1\) and \(\mathbf {x}_2=\mathbf {x}_1=\mathbf {x}_0\).

The effect of taking the limits \(t_i\rightarrow \infty \) is the same as for the first moment, namely it results in \(\omega _i=0\), except that in diagrams containing the convolution the result depends on the order in which the limits are taken. This can be seen in the factor \(\theta (t_2-t_1)\theta (t_1-t_0)\), as one naturally expects from this diagram: the first probing must occur after creation and the second one after the first. A diagram like the second in Eq. (32) does not carry a constraint like that.

Each of the diagrams on the right hand side of Eq. (32) appears twice, as the external fields can be attached in two different ways. When evaluating at \(\mathbf {k}_1=\mathbf {k}_2=0\) this would lead to the same (effective) combinatorial factor of 2 for both diagrams. However, taking the time limits in a particular order means that one labelling of the first diagram results in a vanishing contribution. The resulting combinatorial factors are therefore 1 for and 2 for , i.e.

$$\begin{aligned} \left\langle V^2 \right\rangle =\frac{\tau }{r}\left( 1+2\frac{\sigma }{r}\right) \ , \end{aligned}$$
(35)

again dimensionless. Given that \(\tau =\sigma =\gamma \) initially, Eq. (15), the above may be written \(\gamma /r+2\gamma ^2/r^2\). Unsurprisingly, the moments correspond to those expected for a Poisson process with rate \(\gamma \) taking place during the exponentially distributed lifetime of the particle, which is terminated by a Poisson process with rate r. The resulting moment generating function is simply

$$\begin{aligned} \mathcal {M}(x)= \frac{r/\gamma }{r/\gamma +1-e^{x}} \end{aligned}$$
(36)

with \(\left\langle V^n \right\rangle =\left. \frac{\mathrm {d}^n}{\mathrm {d}x^n}\right| _{x=0} \mathcal {M}(x)\) reproducing all moments once \(\tau =\sigma =\gamma \).
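That Eq. (36) indeed reproduces the moments listed above and below is easily confirmed with a computer algebra system; the following sympy sketch is our own check, not part of the derivation, differentiating \(\mathcal {M}(x)\) and comparing with Eqs. (31), (35), (38) and (39a) after setting \(\tau =\sigma =\gamma \).

```python
import sympy as sp

x, g, r = sp.symbols('x gamma r', positive=True)
M = (r / g) / (r / g + 1 - sp.exp(x))  # moment generating function, Eq. (36)

moments = [sp.diff(M, x, n).subs(x, 0) for n in range(1, 5)]

expected = [
    g / r,                                                               # Eq. (31)
    (g / r) * (1 + 2 * g / r),                                           # Eq. (35)
    (g / r) * (1 + 6 * g / r + 6 * (g / r) ** 2),                        # Eq. (38)
    (g / r) * (1 + 14 * g / r + 36 * (g / r) ** 2 + 24 * (g / r) ** 3),  # Eq. (39a)
]

for n, (m, e) in enumerate(zip(moments, expected), start=1):
    print(n, sp.simplify(m - e))  # expect 0 for every n
```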

Carrying on with the diagrammatic expansion, higher order moments can be constructed correspondingly. At tree level (or \(n_0\rightarrow \infty \) equivalently), there are no further vertices contributing. Determining \(\left\langle v^{(n)}(\mathbf {k}_1,\ldots ,\mathbf {k}_n;\mathbf {k}_0) \right\rangle \) is therefore merely a matter of adding substrate legs, , either by adding a convolution, , or by branching with coupling \(\sigma \). For example,

(37)

Upon taking the limits, effective combinatorial factors become 1, 3, 3 and 6 respectively, so that

$$\begin{aligned} \left\langle V^3 \right\rangle = \frac{\tau }{r} \left( 1 + 6 \frac{\sigma }{r} + 6 \left( \frac{\sigma }{r}\right) ^2 \right) \ , \end{aligned}$$
(38)

and similarly

$$\begin{aligned} {} \left\langle V^4 \right\rangle= & {} \frac{\tau }{r} \left( 1 + 14 \frac{\sigma }{r} + 36 \left( \frac{\sigma }{r}\right) ^2 + 24 \left( \frac{\sigma }{r}\right) ^3 \right) \end{aligned}$$
(39a)
$$\begin{aligned} \left\langle V^5 \right\rangle= & {} \frac{\tau }{r} \left( 1 + 30 \frac{\sigma }{r} + 150 \left( \frac{\sigma }{r}\right) ^2 + 240 \left( \frac{\sigma }{r}\right) ^3 + 120 \left( \frac{\sigma }{r}\right) ^4 \right) \end{aligned}$$
(39b)
$$\begin{aligned} \left\langle V^6 \right\rangle= & {} \frac{\tau }{r} \left( 1 + 62 \frac{\sigma }{r} + 540 \left( \frac{\sigma }{r}\right) ^2 + 1560 \left( \frac{\sigma }{r}\right) ^3 + 1800 \left( \frac{\sigma }{r}\right) ^4 \right. \nonumber \\&\left. +\, 720 \left( \frac{\sigma }{r}\right) ^5 \right) \ . \end{aligned}$$
(39c)

In general, the leading order behaviour in small r at tree level in the bulk is dominated by diagrams with the largest number of branches, i.e. the largest power of \(\sigma \), like the right-most term in Eq. (37), so that

$$\begin{aligned} \left\langle V^m \right\rangle \propto m! \tau \sigma ^{m-1} r^{-m} \ , \end{aligned}$$
(40)

which is essentially determined by the time the active particle survives.
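In the Poissonian picture this leading order is transparent: for small r the lifetime of the walker is long, the conditional moments are dominated by \((\gamma t)^m\), and averaging over the exponential lifetime distribution gives (our paraphrase)

$$\begin{aligned} \left\langle V^m \right\rangle \simeq \gamma ^m \int _0^\infty {\!\mathrm {d}t\,}\, r e^{-rt}\, t^m = m!\, \gamma ^m r^{-m} \ , \end{aligned}$$

in agreement with Eq. (40) once \(\tau =\sigma =\gamma \).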

3.6 Observables at Tree Level: Open Boundary Conditions

Nothing changes diagrammatically when considering the observables introduced above in systems with open boundary conditions along one axis. As \(n_0\rightarrow \infty \) does not pose a constraint, it makes no difference whether the system is periodically closed (in \(d=2\) a finite cylinder) or infinitely extended (infinite slab) along the other axes—these directions simply do not matter for the observables studied, except when the diffusion constant enters. What makes the difference to the considerations in the bulk, Sect. 3.5, are open dimensions, in the following fixed to one, so that the number of infinite (or, at this stage equivalently, periodically closed) directions is \(\tilde{d}=d-1\); in the following \(\mathbf {k},\mathbf {k}'\in \mathbb {R}^{\tilde{d}}\).

While the diagrams obviously remain unchanged, their interpretation changes because of the orthogonality relations as stated in Eqs. (6b) and (10b) or, equivalently, the lack of momentum conservation due to the absence of translational invariance. Replacing the propagators by


where a single open dimension causes the appearance of the indices n and m, results in the one point function

where the index n refers to the Fourier-\(\sin \) component as discussed in Sect. 3.1. If driving (i.e. initial deposition) is uniform (homogeneous) along the open, finite axis, its Fourier transform is for odd n and vanishes otherwise. As for the periodic or infinite dimensions, the distribution of the driving does not enter into \(\left\langle V^n \right\rangle \), as momentum conservation implies that the only amplitudes of the driving that matter are those of the \(\mathbf {k}=0\) or \(k_0=0\) modes, Eqs. (3) and (7).

Integrating \((2/L)\sum _n \left\langle \overline{v}^{(1)}_n(\mathbf {k}) \right\rangle \sin (q_nz)\) over the interval [0, L] produces [35]

$$\begin{aligned} \left\langle V \right\rangle= & {} \frac{2}{L}\sum _{n\,\text {odd}} \frac{2}{q_n} \frac{\tau }{Dq_n^2+r} \frac{2}{Lq_n} \nonumber \\= & {} \frac{8\tau }{\pi ^4 D}L^2 \sum _{n\,\text {odd}} \frac{1}{n^2}\frac{1}{n^2+\frac{rL^2}{D\pi ^2}} =\frac{\tau }{r}\left( 1-\sqrt{\frac{4D}{rL^2}} \tanh \left( \sqrt{\frac{rL^2}{4D}}\right) \right) \ . \end{aligned}$$
(42)

In the limit of large L this result recovers Eq. (31), which would be less surprising if \(L\rightarrow \infty \) simply restored the bulk; this is, however, not the case, because the driving is uniform and some of it therefore always takes place “close to” the open boundaries. However, open boundaries matter only up to a distance of \(\sqrt{D/r}\) from the boundaries, i.e. the fraction of walkers affected by the open boundaries is of the order \(\sqrt{D/r}/L\).
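The resummation leading to the closed form in Eq. (42) may be verified numerically; the script below is our own check with arbitrary values of \(\tau \), D, r and L.

```python
import numpy as np

tau, D, r, L = 1.0, 1.0, 0.3, 5.0  # arbitrary parameters for the check

# truncated mode sum, middle expression of Eq. (42), odd modes only
n = np.arange(1, 20001, 2, dtype=float)
mode_sum = (8 * tau * L**2 / (np.pi**4 * D)) \
    * np.sum(1 / (n**2 * (n**2 + r * L**2 / (D * np.pi**2))))

# closed form, last expression of Eq. (42)
closed = (tau / r) * (1 - np.sqrt(4 * D / (r * L**2))
                      * np.tanh(np.sqrt(r * L**2 / (4 * D))))

print(mode_sum, closed)  # the two numbers agree to high precision
```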

The limit \(r\rightarrow 0\) gives \(\left\langle V \right\rangle =\tau L^2/(12 D)\), matching results for the average residence time of a random walker on a finite lattice with cylindrical boundary conditions using \(D=1/(2d)\) [22]. Sticking with \(r\rightarrow 0\), calculating higher order moments for uniform driving is straight-forward, although somewhat tedious. For example, the two diagrams contributing to \(\left\langle v^{(2)} \right\rangle \) are

(43)

and

(44)

Using

$$\begin{aligned} 2\pi \sum _{\begin{array}{c} nm\ell \\ \text {odd} \end{array}} \frac{1}{n^3} \frac{1}{m} \frac{1}{\ell } \epsilon _{nm{\ell }}= & {} \frac{1}{6} \left( \frac{\pi }{2}\right) ^6 \end{aligned}$$
(45a)
$$\begin{aligned} 2\pi \sum _{\begin{array}{c} nm\ell \\ \text {odd} \end{array}} \frac{1}{n^3} \frac{1}{m} \frac{1}{\ell ^3} \epsilon _{nm{\ell }}= & {} \frac{1}{15} \left( \frac{\pi }{2}\right) ^8\ , \end{aligned}$$
(45b)

where \(n,m,\ell \in \{1,3,5,\ldots \}\) (as driving is uniform and the sausage volume is an integral over the entire system), then produces

$$\begin{aligned} \left\langle V^2 \right\rangle =\frac{\tau L^2}{12D} \left( 1+\frac{\sigma L^2}{5D}\right) \ . \end{aligned}$$
(46)

This may be compared to the known expressions for the moments of the number of distinct sites visited by a random walker within n moves [29, in particular Eq. (A.14)], which contain logarithms even in three dimensions, where the present tree level results are valid. This is, apparently, caused by constraining the length of the Sausage by limiting the number of moves, rather than by a Poissonian death rate.

Performing the summations in Eq. (45) is straight-forward, but messy and tedious. The relevant sums converge rather quickly, for the third moment producing (by summing numerically over 200 terms for each index), for example

$$\begin{aligned} \left\langle V^3 \right\rangle= & {} \frac{\tau L^2}{12D} \left( 1.00002196165\cdots + 0.60000307652\cdots \frac{\sigma L^2}{D}\right. \nonumber \\&\quad \left. +\, 0.060714286977\cdots \frac{\sigma ^2 L^4}{D^2} \right) . \end{aligned}$$
(47)
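For reference, the brute-force evaluation of the sums in Eq. (45) requires only a few lines; the snippet below is our own sketch, truncating each index at 200 odd terms as described above, and recovers the closed forms \((\pi /2)^6/6\) and \((\pi /2)^8/15\).

```python
import numpy as np

odd = np.arange(1, 400, 2, dtype=float)  # 200 odd values per index
m, l = np.meshgrid(odd, odd, indexing='ij')

s_a = 0.0  # accumulates the sum of Eq. (45a)
s_b = 0.0  # accumulates the sum of Eq. (45b)
for n in odd:
    # epsilon_{nml} of Eq. (10c); n + m + l is odd since all indices are odd
    eps = (1 / (2 * np.pi)) * (1 / (n + m - l) + 1 / (n - m + l)
                               + 1 / (-n + m + l) - 1 / (n + m + l))
    s_a += np.sum(eps / (n**3 * m * l))
    s_b += np.sum(eps / (n**3 * m * l**3))

print(2 * np.pi * s_a, (np.pi / 2)**6 / 6)   # Eq. (45a)
print(2 * np.pi * s_b, (np.pi / 2)**8 / 15)  # Eq. (45b)
```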

Just like in the bulk for small r, Eq. (39), the diagrams dominating at large L are the tree-branch-like diagrams such as Eq. (44), with the highest power of \(\sigma \), rather than those involving convolutions, Eq. (43). Each new branch produces a factor \(L^2\), so in general

$$\begin{aligned} \left\langle V^m \right\rangle \propto \tau \sigma ^{m-1} L^{2m} D^{-m}\ , \end{aligned}$$
(48)

which, as in Eq. (40), is essentially determined by the time the particle stays on the lattice.

Similar to the bulk, the lack of interaction allows the volume moments of the Sausage to be determined on the basis of the underlying Poisson process. In the case of homogeneous drive, the mth moment of the residence time \(t_r\) of a Brownian particle diffusing on an open interval of length L is

$$\begin{aligned} \left\langle t_r^m \right\rangle = \frac{8 m!}{\pi ^{2(m+1)} D^m} L^{2m}\sum _{n\,\text {odd}} n^{-2(m+1)} \end{aligned}$$
(49)

and the moment generating function of the Poissonian deposition with rate \(\gamma \) is just \(\mathcal {M}(z)={\text {exp}} \left( -\gamma t_r (1-{\text {exp}} \left( z\right) )\right) \), so that \(\left\langle V^m \right\rangle =\left\langle \mathrm {d}^m \mathcal {M}(z)/\mathrm {d}z^m|_{z=0} \right\rangle \), reproducing the results above such as

$$\begin{aligned} \left\langle V^3 \right\rangle =\frac{\gamma L^2}{12D} \left( 1+ \frac{3 \gamma L^2}{5D}+ \frac{17 \gamma ^2 L^4}{280 D^2} \right) \ , \end{aligned}$$
(50)

confirming, in particular, the high accuracy of the leading order term in L, as \(17/280=0.06071428571428571428\ldots \).
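The rational coefficients in Eq. (50) are readily confirmed numerically. The following minimal Python sketch (a verification aid of ours, not part of the original derivation; the truncation at 200 terms per sum mirrors the procedure mentioned above) combines the residence-time moments of Eq. (49) with the raw Poisson moments \(\left\langle N \right\rangle =a\), \(\left\langle N^2 \right\rangle =a+a^2\) and \(\left\langle N^3 \right\rangle =a+3a^2+a^3\) at \(a=\gamma t_r\):

    from math import pi, factorial

    def t_moment(m, terms=200):
        # <t_r^m> in units of L^(2m)/D^m, Eq. (49), truncated after `terms' odd modes
        s = sum(n ** (-2 * (m + 1)) for n in range(1, 2 * terms, 2))
        return 8 * factorial(m) * s / pi ** (2 * (m + 1))

    t1, t2, t3 = t_moment(1), t_moment(2), t_moment(3)
    print(t1)            # -> 0.0833333... = 1/12, the overall prefactor in Eq. (50)
    print(3 * t2 / t1)   # -> 0.6000000... = 3/5
    print(t3 / t1)       # -> 0.0607142... = 17/280

The three printed values approach \(1/12\), \(3/5\) and \(17/280=0.0607142857\ldots \), i.e. the rational amplitudes in Eq. (50).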

4 Beyond Tree Level

Below \(d_c=2\) the additional vertices parameterised by \(\lambda \), \(\kappa \), \(\chi \) and \(\xi \), Eqs. (19) and (20) respectively, have to be taken into account. Because \(\kappa \) is the only vertex that has the same number of incoming and outgoing legs, it is immediately clear that its presence can, and in fact will, contribute to the renormalisation of all other vertices, say

(51)

but in particular itself:

(52)

Among the vertices introduced in Sect. 3.3, none has an outgoing activity leg if it does not have an incoming activity leg, and all have at least as many outgoing substrate legs as they have incoming substrate legs. Apart from \(\kappa \), each vertex has either more outgoing substrate legs than incoming ones or fewer outgoing activity legs than incoming ones. Combining them in any form will thus never result in a diagram contributing to the renormalisation of \(\kappa \), which has one leg of each kind.

Combinations of other vertices give rise to “cross-production”, say of \(\chi \) by \(\lambda \xi \), but none of these terms contains more than one loop without the involvement of \(\kappa \). As for the generation of higher order vertices, it is clear that the number of outgoing substrate legs (on the left) can never be decreased by combining vertices, because within every vertex the number of outgoing substrate legs is at least that of incoming substrate legs. In particular, a vertex with fewer outgoing than incoming substrate legs does not exist. Such a vertex, combined, say, with \(\sigma \) to form a bubble that renormalises the propagator, would suggest that the diffusive movement of active particles is affected by the presence of substrate particles. This is, by definition of the original problem, not the case.

Because no active particles are generated solely by a combination of substrate particles, none of the vertices has more outgoing than incoming activity legs. Denoting the tree level coupling of the proper vertex (with amputated legs)

(53)

of the correlation function

(54)

by , the topological conditions on the vertices can be summarised as \(a\ge b\), \(b\le 1\), \(n=1\), \(m\le 1\), \(m+a\ge 1\), which means that there are in fact only four different types of vertices, namely \(n=1\), \(b=0,1\) and \(m=0,1\), whereas a is hitherto undetermined. For future reference, we note

figure c

Dimensional analysis gives

Because diffusion is to be maintained, it follows that \(\mathtt {T}=\mathtt {L}^2\), yet, as indicated above, the dimensions of \(\mathtt {A}\) and \(\mathtt {B}\) are to some extent a matter of choice. Leaving them undetermined results in \(d(n+b-1)+2(a-b)\le 2\) for a vertex to be relevant in d dimensions. Setting, on the other hand, \(\mathtt {A}=\mathtt {B}=\mathtt {T}^{-1}\) (see above) results in \(d(n+b-1)\le 2\). As \(n=1\), this implies \((d-2)b+2a\le 2\) and \(db\le 2\), respectively. In both cases, the upper critical dimension for a vertex with \(b\ge 1\) and thus \(a\ge 1\) to be relevant is \(d_c=2\). On the other hand, no loop can be formed if \(b=0\), so above \(d=2\) (where \(b=1\) is irrelevant) there are no one-particle irreducible contributions to any of the proper vertices, and so the set of couplings introduced above, \(\tau \), \(\sigma \), \(\lambda \), \(\kappa \), \(\chi \) and \(\xi \), remains unchanged. As far as Sausage moments are concerned, \(\lambda \), \(\kappa \), \(\chi \) and \(\xi \) do not enter, as there is no vertex available to pair up the incoming substrate leg on the right. The tree level results discussed in Sect. 3.5 are therefore the complete theory in \(d>d_c=2\).
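As a quick check of this power counting, using only the leg content of \(\kappa \) stated above (one leg of each kind, i.e. \(n=a=b=1\)), the two criteria read

$$\begin{aligned} d(n+b-1)+2(a-b) = d \le 2 \qquad \text {and}\qquad d(n+b-1) = d \le 2 \ , \end{aligned}$$

so both choices of field dimensions place the upper critical dimension of \(\kappa \) at \(d_c=2\), in line with the statement above.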

Below \(d_c=2\), the dimensional analysis depends on the choice one makes for \(\mathtt {A}\) and \(\mathtt {B}\). If they remain independent, then the only relevant vertices that are topologically possible are those with \(a\le 1\), removing \(\chi \) and \(\xi \) from the problem. However, it is entirely consistent (and, one may argue, even necessary) to assume \(\mathtt {A}=\mathtt {B}=\mathtt {T}^{-1}\), resulting in no constraint on a at all. Not only are vertices for all a therefore relevant; what is worse, they are all generated as one-particle irreducibles. For example, the reducible diagram contributing to \(\left\langle v^{(2)} \right\rangle \) at tree level, Sect. 3.5, possesses, even at one loop, two one-particle irreducible counterparts in \(d<2\),

contributing to the corresponding proper vertex. Such diagrams exist for all a, so, in principle, all these couplings have to be allowed for in the Liouvillian and all have to be renormalised in their own right. The good news is, however, that the Z-factor of \(\kappa \) (see below) contains all infinities of all couplings exactly once, i.e. the renormalisation of all couplings can be related to that of \(\kappa \) by a diagrammatic vertex identity, see Sect. 4.1.1.

4.1 Renormalisation

Without further ado, we will therefore carry on with renormalising \(\kappa \) only. As suggested in Eq. (52), this can be done to all orders, in a geometric sum. The one and only relevant integral isFootnote 6

(56)

where \(\epsilon =2-d\) and we have indicated the total momentum \(\mathbf {k}\) (i.e. the sum of the momenta delivered by the two incoming legs) and the total frequency \(\omega \) going through it.Footnote 7 This integral has the remarkable property that it is independent of \(\mathbf {k}\), because of the \(\mathbf {k}\)-independence of the substrate propagator. While the latter conserves momentum in the bulk by virtue of the \(\delta (\mathbf {k}+\mathbf {k}')\) in Eq. (12b), its amplitude does not depend on \(\mathbf {k}\). Even if there were renormalisation of the activity propagator, it would therefore not affect its \(\mathbf {k}\)-dependence, i.e. \(\eta =0\), whereas its \(\omega \) dependence may be affected, i.e. \(z\ne 2\) would be possible.
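For orientation, the divergence in Eq. (56) is that of the standard one-loop integral: assuming the usual contour closure, the internal frequency integration combines the masses of the two propagators, after which the momentum integration is of the form

$$\begin{aligned} \int \frac{{\mathrm {d}}^d k'}{(2\pi )^d}\, \frac{1}{D\mathbf {k}'^2+r+\epsilon '-\imath \omega } = \frac{\Gamma \left( \frac{\epsilon }{2}\right) }{(4\pi )^{d/2}D} \left( \frac{r+\epsilon '-\imath \omega }{D}\right) ^{-\epsilon /2} \ , \end{aligned}$$

up to the vertex factors contained in Eq. (56); this matches the form of W introduced in Eq. (59) below.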

The expression \(((r+\epsilon '-\imath \omega )/D)^{1/2}\) can be identified as an inverse length; it is the infrared regularisation (or more precisely the normalisation point, \(R=1\), Eq. (74a)) that can, in the present case, be implemented either by considering finite time (\(\omega \ne 0\)), spontaneous extinction of activity (\(r>0\)) or, notably, spontaneous extinction (evaporation) of substrate particles (\(\epsilon '>0\)). In order to extract exponents, it is replaced by the arbitrary inverse length scale \(\mu \). We will return to the case \(\mu =\sqrt{-\imath \omega /D}\) in Sect. 4.2, e.g. Eq. (84). For the time being, the normalisation point is

$$\begin{aligned} \mu ^2=\frac{r}{D} \end{aligned}$$
(57)

with \(\epsilon '\rightarrow 0\), \(\omega \rightarrow 0\).

The renormalisation conditions are then (see Eq. (55))

figure d

where \(\{0,0\}\) indicates that the vertex is evaluated at vanishing momenta and frequencies. Defining \(Z=\kappa _{\mathscr {R}}/\kappa \) allows all renormalisation to be expressed in terms of Z, as detailed in Sect. 4.1.1.

Starting with only one loop, the renormalisation of \(\kappa \), Eq. (52), is therefore \(\kappa _{\mathscr {R}} = \kappa - \kappa ^2 W\) with

$$\begin{aligned} W = \frac{\Gamma \left( \frac{\epsilon }{2}\right) }{(4\pi )^{d/2}D} \mu ^{-\epsilon } \end{aligned}$$
(59)

or \(\kappa _{\mathscr {R}}=\kappa Z\) with \(Z=1-\kappa W\). Introducing the dimensionless coupling \(g=\kappa W/\Gamma (\epsilon /2)\) with \(g_{\mathscr {R}}=gZ\) gives \(Z=1-g\Gamma (\epsilon /2)\), which may be approximated to one loop by \(Z=1-g_{\mathscr {R}}\Gamma (\epsilon /2)\). Keeping, however, all loops in Eq. (52), this last expression is no longer an approximation: if all terms in Eq. (52) are retained, Z becomes a geometric sum in g,

$$\begin{aligned} Z=1-\kappa W+(\kappa W)^2-\cdots =\frac{1}{1+\kappa W} = \frac{1}{1+g\Gamma (\epsilon /2)} = 1-g_{\mathscr {R}}\Gamma (\epsilon /2) \ , \end{aligned}$$
(60)

incorporating all parquet diagrams [12]. The resulting \(\beta \)-function is \(\beta _g(g_{\mathscr {R}}) = \mathrm {d}g_{\mathscr {R}}/\mathrm {d}\ln \mu |_g = -\epsilon g_{\mathscr {R}} - \kappa W \beta _g\) and therefore

$$\begin{aligned} \beta _g(g) = \frac{-\epsilon g_{\mathscr {R}}}{1+\kappa W} = - \epsilon g_{\mathscr {R}} Z = -\epsilon g_{\mathscr {R}} \left( 1-g_{\mathscr {R}} \Gamma (\epsilon /2)\right) \ . \end{aligned}$$
(61)
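For completeness, two short checks using only the definitions above: since \(g_{\mathscr {R}}=g/(1+g\Gamma (\epsilon /2))\), one has \(1-g_{\mathscr {R}}\Gamma (\epsilon /2)=1/(1+g\Gamma (\epsilon /2))\), which is the last equality of Eq. (60); and differentiating \(g_{\mathscr {R}}(1+\kappa W)=g\) at fixed \(\kappa \), using \(\mathrm {d}(\kappa W)/\mathrm {d}\ln \mu =-\epsilon \kappa W\), gives

$$\begin{aligned} \beta _g \left( 1+\kappa W\right) = -\epsilon g+\epsilon \kappa W g_{\mathscr {R}} = -\epsilon g_{\mathscr {R}} \ , \end{aligned}$$

i.e. Eq. (61).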

Equation (61) is exact to all orders; the non-trivial fixed point in \(\epsilon >0\) is exactly \(g_{\mathscr {R}}^*=1/\Gamma (\epsilon /2)\approx \epsilon /2\), which is when the Z-factor vanishes (as g diverges in small \(\mu \)).
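Expanding the \(\Gamma \)-function (a textbook expansion, stated here only to make the approximation explicit, with \(\gamma _{\mathrm {E}}\) the Euler–Mascheroni constant), \(\Gamma (\epsilon /2)=2/\epsilon -\gamma _{\mathrm {E}}+\mathcal {O}(\epsilon )\), gives

$$\begin{aligned} g_{\mathscr {R}}^*=\frac{1}{\Gamma \left( \frac{\epsilon }{2}\right) } = \frac{\epsilon }{2}\left( 1+\gamma _{\mathrm {E}}\frac{\epsilon }{2}+\mathcal {O}(\epsilon ^2)\right) \ , \end{aligned}$$

so that \(g_{\mathscr {R}}^*\approx \epsilon /2\) to leading order, as stated above.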

4.1.1 Ward-Takahashi and Vertex Identities

Different vertices and therefore the renormalisation of different couplings can be related to each other by Ward-Takahashi identities. They are usually constructed by considering global symmetries [36], such as the invariance of the Liouvillian under [1]

$$\begin{aligned} \phi&\rightarrow \phi (1+\delta )&\qquad \quad \tilde{\phi }&\rightarrow \tilde{\phi }(1+\delta )^{-1} \end{aligned}$$
(62)
$$\begin{aligned} \psi&\rightarrow \psi (1+\delta )&\qquad \quad \tilde{\psi }&\rightarrow \tilde{\psi }(1+\delta )^{-1} \ , \end{aligned}$$
(63)

to be considered for small \(\delta \), which produces an identity on couplings involving an odd number of fields,

(64)

The identities derived in the following are certainly consistent with Eq. (64), but are derived at the diagrammatic level. To start with, we reiterate that Eq. (52) contains all contributions (and to all orders) to the renormalised vertex \(\kappa \). Repeating for \(\sigma \) and \(\lambda \) the diagrammatic, topological argument presented for \(\kappa \) after Eq. (52), it turns out that the diagrams contributing to their renormalisation are essentially identical to those contributing to \(\kappa \), as shown in Eq. (51). Using the same notation as in Eq. (58), we note that \(\kappa _{\mathscr {R}}=\kappa Z\) implies \(\sigma _{\mathscr {R}}=\sigma Z\) and \(\lambda _{\mathscr {R}}=\lambda Z\), i.e.

$$\begin{aligned} \lambda _{\mathscr {R}}=\frac{\lambda }{\kappa }\kappa _{\mathscr {R}} \qquad \sigma _{\mathscr {R}}=\frac{\sigma }{\kappa }\kappa _{\mathscr {R}} \ . \end{aligned}$$
(65)

The renormalisation of the coupling \(\tau \) breaks with that pattern as

$$\begin{aligned} \tau _{\mathscr {R}} = \tau \left( 1 + \frac{\sigma \lambda }{\kappa \tau } (Z-1)\right) \ , \end{aligned}$$
(66)

because the tree level contribution \(\tau \), Eq. (17), has higher order corrections which do not contain \(\tau \) itself, but rather the combination \(\lambda \sigma \). However, at bare level, \(\sigma =\tau \) and \(\lambda =\kappa \), so that in the present case

$$\begin{aligned} \tau _{\mathscr {R}} = \frac{\tau }{\kappa } \kappa _{\mathscr {R}} \ . \end{aligned}$$
(67)

A different issue affects the renormalisation of \(\chi \) and \(\xi \). For example, the latter acquires contributions from any of the diagrams shown in Eq. (52) by “growing an outgoing substrate leg” on any of the \(\kappa \) vertices,

(68)

whereas contributions generated by \(\sigma \mathrm {d}/\mathrm {d}r\) are UV finite and therefore dropped. Given that the diagrams of Eq. (68) are the only contributions to the renormalisation of \(\xi \), it reads

$$\begin{aligned} \xi _{\mathscr {R}}=2 \xi \frac{\mathrm {d}\kappa _{\mathscr {R}}}{\mathrm {d}\kappa } - \xi \frac{\kappa _{\mathscr {R}}}{\kappa } \end{aligned}$$
(69)

and correspondingly for the one-particle irreducible contributions to \(\chi _{\mathscr {R}}\)

$$\begin{aligned} \chi _{\mathscr {R}}= 2 \chi \frac{\mathrm {d}\kappa _{\mathscr {R}}}{\mathrm {d}\kappa } - \chi \frac{\kappa _{\mathscr {R}}}{\kappa } \ , \end{aligned}$$
(70)

where we have used \(\chi -\xi \lambda /\kappa =0\). From Sect. 4.1, namely \(\kappa _{\mathscr {R}}=\kappa /(1+\kappa W)\), it is straight-forward to show that

$$\begin{aligned} \frac{\mathrm {d}\kappa _{\mathscr {R}}}{\mathrm {d}\kappa } = Z^2 \end{aligned}$$
(71)

and we can therefore summarise

$$\begin{aligned} \tau _{\mathscr {R}}&= \tau Z&\qquad \quad \sigma _{\mathscr {R}}&= \sigma Z \end{aligned}$$
(72a)
$$\begin{aligned} \lambda _{\mathscr {R}}&= \lambda Z&\qquad \quad \kappa _{\mathscr {R}}&= \kappa Z \end{aligned}$$
(72b)
$$\begin{aligned} \chi _{\mathscr {R}}&= \chi (2Z^2-Z)&\qquad \quad \xi _{\mathscr {R}}&= \xi (2Z^2-Z) \end{aligned}$$
(72c)

In \(d<2\), the only proper vertices to consider are those with \(n=1\), \(b\le 1\), \(m\le 1\) and arbitrary a. The renormalisation of all of them can be traced back to that of \(\kappa \). It is a matter of straight-forward algebra to demonstrate this explicitly. As these couplings play no further rôle for the observables analysed henceforth, we spare the reader a detailed account.

4.2 Scaling

We are now in the position to determine the scaling of all couplings. For the time being, we will focus solely, however, on calculating the first moment of the Sausage volume.

We have noted earlier (Sect. 4), that the governing non-linearity is \(\kappa \) and have already introduced the corresponding dimensionless, renormalised coupling \(g_{\mathscr {R}}\) and found its fixed point value. Following the standard procedure [27], we define the finite, dimensionless, renormalised vertex functions

(73)

where \(\{\mathbf {k}, \omega \}\) denotes the entire set of momenta and frequency arguments and \(\mu \) is an arbitrary inverse scale. In principle there could be more bare couplings, and more are certainly generated, see Sect. 4.1.1. The vertex functions can immediately be related to their arguments via Eqs. (58) and (55):

$$\begin{aligned} R&= r D^{-1} \mu ^{-2}&\end{aligned}$$
(74a)
$$\begin{aligned} T_{\mathscr {R}}&= \tau Z D^{-1} \mu ^{-2}&\qquad \quad S_{\mathscr {R}}&= \sigma Z D^{-1} \mu ^{-2}\end{aligned}$$
(74b)
$$\begin{aligned} \ell _{\mathscr {R}}&= \lambda Z D^{-1} \mu ^{-\epsilon } (4\pi )^{d/2}&\qquad \quad g_{\mathscr {R}}&= \kappa Z D^{-1} \mu ^{-\epsilon } (4\pi )^{d/2}\end{aligned}$$
(74c)
$$\begin{aligned} c_{\mathscr {R}}&= \chi (2Z^2-Z) D^{-1} \mu ^{-\epsilon }&\qquad \quad x_{\mathscr {R}}&= \xi (2Z^2-Z) D^{-1} \mu ^{-\epsilon } \ , \end{aligned}$$
(74d)

where the normalisation point is \(R=1\). Because

$$\begin{aligned} \lim _{g_{\mathscr {R}}\rightarrow g_{\mathscr {R}}^*} \frac{\mathrm {d}}{\mathrm {d}\ln \mu } \ln Z = \epsilon , \end{aligned}$$

Z scales in \(\mu \) like \(Z\propto \mu ^{\epsilon }\). The asymptotic solution (of the Callan–Symanzik equation)

(75)

can be combined with the dimensional analysis of the renormalised vertex function, which gives

(76)

to give, using \(z^2=r\) and Eq. (73),

(77)

As far as scaling (but not amplitudes) is concerned, the tree level results apply to the right hand side as its mass r is finite, i.e.

(78)

and

(79)

so that following Eq. (31)

(80)

If \(r^{-1}\) is interpreted as the observation time t, the result

$$\begin{aligned} \left\langle V \right\rangle \propto t^{d/2} \end{aligned}$$
(81)

in \(d<2\) (and \(\left\langle V \right\rangle \propto t\) in \(d>2\), Eq. (31)) recovers the earlier result in [2], including the logarithmic corrections expected at the upper critical dimension. Eqs. (80) and (81) are the first two key results for the field theory of the Wiener Sausage reported in the present work. We will now further explore the results and their implications.

Fig. 3
figure 3

The volume of the Wiener Sausage in one dimension is the length covered by the Brownian particle (the set of all points actually visited) plus the volume \(V_0\) of the sphere the Brownian particle is dragging (indicated by the two rounded bumpers)

In \(d=1\), it is an exercise in complex analysis (albeit lengthy) to determine the amplitude of the first moment. To make contact with established results in the literature, we study the sausage in one dimension after finite time t. Following the tree level results Eqs. (27), (30) and (31) we now have

(82)

where the space integral is taken by setting \(\mathbf {k}=0\) and the driving has been evaluated to \(d(0)=1\), see Eq. (30). The Z-factor is given by Eq. (60), but \(\mu \) should be replaced by \(\sqrt{-\imath \omega /D}\), as we will consider the double limit \(r,\epsilon '\rightarrow 0\), but at finite \(\omega \), which is the total frequency flowing through the diagram, Eq. (56), so for \(d=1=\epsilon \)

$$\begin{aligned} Z= \frac{1}{1+\kappa \sqrt{\imath / (4 D \omega )}} \end{aligned}$$
(83)

which for small \(\omega \) and therefore large t (which we are interested in) is dominated by \(2 \sqrt{-\imath D \omega }/\kappa \). Keeping only that term, the integral in Eq. (82) can be performed and gives

$$\begin{aligned} \left\langle V \right\rangle (t) = \frac{\tau }{\kappa } 4 \sqrt{\frac{tD}{\pi }} \ . \end{aligned}$$
(84)

On the lattice, i.e. before taking the continuum limit, sites have no volume and the ratio \(\tau /\kappa \) is just the carrying capacity \(n_0\). Setting that to unity one recovers, up to the additive volume mentioned above, see Fig. 3, the result in the continuum by Berezhkovskii, Makhnovskii and Suris [2, Eq. (10)], which coincides with the asymptote on the lattice [20, 29]. Given the differences in the process and the route the field-theoretic treatment takes, in particular the continuum limit, one might argue that this is a mere coincidence. In fact, attempting a similar calculation for the amplitude of the second moment does not suggest that it can be recovered.
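As a cross-check against a classical result (the expected range of a Brownian path, not derived within the present framework): for a one-dimensional Brownian particle with \(\left\langle x^2(t) \right\rangle =2Dt\), the mean span covered after time t is

$$\begin{aligned} \left\langle \max _{0\le s\le t}x(s)-\min _{0\le s\le t}x(s) \right\rangle = 4\sqrt{\frac{tD}{\pi }} \ , \end{aligned}$$

which is Eq. (84) at \(\tau /\kappa =n_0=1\), up to the additive volume \(V_0\) of Fig. 3.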

As for higher moments of the volume, in addition to the two diagrams mentioned in Eq. (32), there is now also

(85)

and . However, as above, the second moment is dominated in small r by the second, tree-like term in Eq. (32), which gives to leading order

$$\begin{aligned} \left\langle V^2 \right\rangle \propto 2 \frac{\tau Z \sigma Z }{r^2} \propto 2 \tau \sigma r^{-d} \ , \end{aligned}$$
(86)

as \(Z\propto r^{\epsilon /2}\). Higher order moments follow that pattern \(\left\langle V^m \right\rangle \propto Z^m\), and as dimensional consistency is maintained by the dimensionless product \(r D^{d/\epsilon } \kappa ^{-2/\epsilon }\) entering Z, Eqs. (57), (59) and (60),

$$\begin{aligned} Z=\frac{1}{1+\Gamma \left( \frac{\epsilon }{2}\right) (4\pi )^{-d/2} (r D^{d/\epsilon } \kappa ^{-2/\epsilon })^{-\epsilon /2}} \end{aligned}$$

the general result is

$$\begin{aligned} \left\langle V^m \right\rangle \propto m! \tau \sigma ^{m-1} r^{-m} \left( \frac{r D^{d/\epsilon }}{\kappa ^{2/\epsilon }} \right) ^{\epsilon m/2} = m! \tau \sigma ^{m-1} r^{-md/2} \left( \frac{D^{d/2}}{\kappa } \right) ^{m} \ , \end{aligned}$$
(87)

for \(d<2\) with \(r\hat{=}1/t\). Compared to Eq. (40) the diffusion constant is present again, as the coverage depends not only on the survival time (determined by r), but also on the area explored during that time.

4.3 Infinite Slab

In the following, we study the renormalisation of the present field theory on an infinite slab, i.e. a lattice that is finite and open (Dirichlet boundary conditions) along one axis and infinite in \(\tilde{d}=d-1\) orthogonal dimensions. The same setup was considered at tree level in Sect. 3.6. Again, there are no diagrammatic changes, yet the renormalisation procedure itself requires closer attention.

Before carrying out the integration of the relevant loop, Eq. (56), we make a mild adjustment with respect to the set of orthogonal functions that we use for the substrate and the activity. While the latter is subject to Dirichlet boundary conditions in the present case, naturally leading to the set of \(\sin (q_n z)\) eigenfunctions introduced above, the former is not afflicted with such a constraint, i.e. in principle one may choose whatever set is most convenientFootnote 8 and suitable. As general as that statement is, there are, however, some subtle implications; to start with, whatever representation is used in the harmonic part of the Hamiltonian must result in the integrand factorising, so that the path integral over the Gaussian can be performed. In the presence of transmutation, that couples the choice of the set for one species to that for the other. With a suitable choice, all propagators fulfil orthogonality relations and therefore conserve momentum, i.e. they are proportional to \(\delta _{n,m}\) (in case of the basis \(\sin (q_nz)\)), \(\delta _{n,-m}\) (basis \({\text {exp}} \left( \imath k_nz\right) \)) and/or \(\delta (\mathbf {k}+\mathbf {k}')\) (basis \({\text {exp}} \left( \imath \mathbf {k}z\right) \)), which is obviously a welcome simplification of the diagrams and their corresponding integrals and sums.

This constraint can be relaxed by considering transmutation only perturbatively, i.e. removing it from the harmonic part. However, if different eigenfunctions are chosen for different species, transmutation vertices are no longer momentum conserving; if we choose, as we will below, \(\sin (q_n z)\) for the basis of the activity and \({\text {exp}} \left( \imath k_m z\right) \) for that of the substrate, then the proper vertex of \(\tau \) comes with

$$\begin{aligned} \int _0^L {\!\mathrm {d}z\,} e^{\imath k_n z} \sin (q_m z) = L \Delta _{n,m} \end{aligned}$$
(88)

and a summation of the n and m, connecting from the sides, Eq. (17), i.e.

(89)

where the \(m\in \mathbb {Z}\) refers to the index of the eigenfunction used for the substrate and \(n\in \mathbb {N}^+\) to the eigenfunction of the activity field. The fact that \(\Delta _{p,\ell }\) has off-diagonal elements indicates that momentum conservation is broken. Obviously, in the presence of boundaries, translational invariance is always broken, but that does not necessarily result in a lack of momentum conservation in bare propagators, as it does here. However, it always results in a lack of momentum conservation in vertices with more than two legs, as only exponential eigenfunctions have the property that their products are eigenfunctions as well. If propagators renormalise through these vertices, they will eventually inherit the non-conservation, i.e. allowing them to have off-diagonal elements from the start will become a necessity in the process of renormalisation.

While the transmutation vertex introduced above may appear unnecessarily messy, it does not renormalise and does not require much further attention. Rewriting the four-point vertex \(\kappa \) in terms of the two different sets of eigenfunctions, however, proves beneficial. Introducing

$$\begin{aligned} \int _0^L {\!\mathrm {d}z\,} \sin (q_n z) e^{\imath k_m z} e^{\imath k_k z} \sin (q_{\ell } z) = L U_{n,m+k,\ell } \end{aligned}$$
(90)

means that the relevant loop is

(91)

Contrary to Eq. (56), it is now of great importance to know with which couplings (here two \(\kappa \) couplings) this loop was formed, because different couplings require different “tensors”, like \(U_{n,m+k,\ell }\) in the present case. For example, the coupling \(\sigma \) comes with \(\int _0^L {\!\mathrm {d}z\,} \sin (q_n z) {\text {exp}} \left( \imath k_m z\right) \sin (q_{\ell } z)\). The actual technical difficulty to overcome, however, is the possible renormalisation of \(U_{n,m,\ell }\) itself, as there is no guarantee that the right hand side of Eq. (91) is proportional to \(U_{n,m,\ell }\). In other words, the sum Eq. (52) may be of the form \(\kappa (L U_{n,m+k,\ell } + \kappa W L U'_{n,m+k,\ell } + \kappa ^2 W^2 L U''_{n,m+k,\ell }+\cdots )\), with \(U'_{n,m+k,\ell }\ne U_{n,m+k,\ell }\) etc., rather than \(L U_{n,m+k,\ell } \kappa (1+\kappa W + \kappa ^2 W^2 + \cdots )\), which would spoil the renormalisation process.

Carrying on with that in mind, the integrals over \(\omega '\) and \(\mathbf {k}'\) are identical to the ones carried out in Eq. (56) and therefore straight-forward. The summation over \(m'\) is equally simple, because that index features only in \(U_{n,m,\ell }\) and Eq. (9a) implies

$$\begin{aligned}&\frac{1}{L} \sum _{m'} L^2 U_{n,m_2-m',n'} U_{n',m'+m_1,\ell } \nonumber \\&\quad = \int _0^L {\!\mathrm {d}z\,} \sin (q_n z) e^{\imath k_{m_2} z} e^{\imath k_{m_1} z} \sin (q_{\ell } z) \sin ^2(q_{n'} z) \ . \end{aligned}$$
(92)

Using that identity in Eq. (91) allows us to write

(93)

It is only that last sum that requires further investigation. In particular, if we were able to demonstrate that it is essentially independent of z, then the preceding integral becomes \(L U_{n,m_1+m_2,\ell }\) and this contribution to the renormalisation of \(\kappa U_{n,m_1+m_2,\ell }\) is proportional to \(U_{n,m_1+m_2,\ell }\).

The remaining summation in Eq. (93) can be performed [35] to leading order in the smallFootnote 9 dimensionless quantity \(\rho =L^2 \left( r + \epsilon ' -\imath \omega \right) /(\pi ^2 D)\),

$$\begin{aligned}&\sum _{n'=1}^{\infty } (n'^2 + \rho )^{\frac{d-3}{2}} \sin ^2(q_{n'} z) \nonumber \\&\quad = {\frac{1}{2}}\zeta (3-d) - {\frac{1}{4}}\left( \mathrm{Li}_{3-d}\left( e^{\frac{2\pi \imath z}{L}}\right) + \mathrm{Li}_{3-d}\left( e^{-\frac{2\pi \imath z}{L}}\right) \right) + \mathcal {O}(\rho ) \end{aligned}$$
(94)

with \(\zeta (3-d)=\zeta (1+\epsilon )=(1/\epsilon ) + \mathcal {O}(1)\), the Riemann \(\zeta \)-function, and \(\mathrm{Li}_s(z)\) the polylogarithm with [35]

$$\begin{aligned} \mathrm{Li}_{1+\epsilon }\left( e^{\frac{2\pi \imath z}{L}}\right) + \mathrm{Li}_{1+\epsilon }\left( e^{-\frac{2\pi \imath z}{L}}\right) =-\ln (4 \sin ^2(z\pi /L))+\mathcal {O}(\epsilon ) \ , \end{aligned}$$
(95)

so that the leading order behaviour in \(\epsilon \) of Eq. (93) is in fact

(96)

to leading order in \(\epsilon \), where we have used \(\Gamma ((3-d)/2)=\sqrt{\pi }+\mathcal {O}(\epsilon )\), anticipating no singularities around \(d=3\).

Approximating \(2\zeta (3-d)\approx \Gamma (\epsilon /2)\), the Z-factor for the renormalisation of \(\kappa \) in a system with open boundaries in one dimension is therefore unchanged, cf. Eqs. (56) and (96), provided \(\mu =\pi /L\). Of course, that result holds only as long as \(\rho \ll 1\), in particular \(r\ll D/L^2\), i.e. sudden death by extinction is rare compared to death by reaching the boundary. In the case of more frequent deaths by extinction, or, equivalently, taking the thermodynamic limit in the finite, open dimension, extinction is expected to take over eventually and the bulk results above apply, Sect. 4.2. Although there is an effective change of mechanism (bulk extinction versus reaching the edge), there is no dimensional crossover.
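The approximation \(2\zeta (3-d)\approx \Gamma (\epsilon /2)\) amounts to keeping the common pole only, as seen from the standard expansions

$$\begin{aligned} 2\zeta (1+\epsilon )=\frac{2}{\epsilon }+2\gamma _{\mathrm {E}}+\mathcal {O}(\epsilon ) \qquad \text {and}\qquad \Gamma \left( \frac{\epsilon }{2}\right) =\frac{2}{\epsilon }-\gamma _{\mathrm {E}}+\mathcal {O}(\epsilon ) \ , \end{aligned}$$

so the identification is exact at leading order in \(\epsilon \) but differs at order \(\epsilon ^0\), consistent with the caveat about amplitudes below.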

The renormalisation of \(\tau \) involves the \(\kappa \)-loops characterised above, as well as \(\sigma \) and \(\lambda \), which, in principle, have to be considered separately; after all, the loop they form has a structure that deviates from the one studied above, Eq. (96). In principle, there is (again) no guarantee that the diagrams contributing to the renormalisation of \(\tau \) all have the same dependence on the external indices, i.e. whether they are all proportional to \(\Delta _{n,m}\), Eq. (88). By the definition Eq. (90), however,

$$\begin{aligned} \frac{2}{L}\sum _{n=1,3,\ldots }^{\infty } L U_{n,m,\ell }\frac{2}{q_n} = L \Delta _{m,\ell } \ , \end{aligned}$$
(97)

i.e. one leg is removed by evaluating at \(m_1=0\) (see the diagram in Eq. (91)) and one by performing the summation. Applying this operation to all diagrams appearing in Eq. (52) produces all diagrams renormalising \(\tau \). Provided that \(\sigma =\tau \) and \(\lambda =\kappa \), the renormalisation of \(\tau \) is therefore linear in that of \(\kappa \) and Eq. (67) remains valid, i.e. the renormalisation procedure outlined above for \(\tau \) and \(\kappa \) remains intact.

In principle, further attention is required for the renormalisation of higher order vertices, but as long as only (external) substrate legs are attached, their index \(m_n\) can be absorbed into the sum of the indices of the substrate legs present: Just like any external leg can take up momentum or frequency, such new legs shift the index used in the internal summation such as the one in Eq. (91), but that does not affect the renormalisation provided that it is done at vanishing external momenta, so that the external momenta do not move the poles of the propagators involved.

We conclude that all diagrammatic vertex identities of Sect. 4.1.1 remain unchanged. As for the scaling of the Sausage volume, comparing Eqs. (96) to (56) and identifying \(\mu =\pi /L\) or \(r=\pi ^2 D/L^2\) means that now

$$\begin{aligned} \left\langle V^m \right\rangle \propto m! \tau \sigma ^{m-1} \left( \frac{L}{\pi }\right) ^{md} \kappa ^{-m} \end{aligned}$$
(98)

for \(d<2\), compared to Eq. (87). Noticeably, compared to the tree level Eq. (48), the diffusion constant is absent—in dimensions \(d<2\) each point is visited infinitely often, regardless of the diffusion constant. Even though the deposition in the present setup is Poissonian, what determines the volume of the sausage is not the time it takes the active particles to drop off the lattice, \(\propto L^2/D\), but the competition between deposition parameterised by \(\tau \) and \(\sigma \) and its inhibition by \(\kappa \).

The scaling \(\left\langle V^m \right\rangle \propto L^{md}\) for \(d<2\) suggests that the Wiener Sausage is a “compact” d dimensional object in dimensions \(d<2\), whereas \(\left\langle V^m \right\rangle \propto L^{2m}\) at tree level, \(d>2\), Sect. 3.6. The Wiener Sausage may therefore be seen as a two-dimensional object projected into a d-dimensional space.

The obvious interpretation of \(r=\pi ^2 D/L^2\) in Eq. (98) is that of \(\pi /L\) being the lowest mode in the denominator of the propagator Eq. (41a) in the presence of open boundaries compared to (effectively) \(\sqrt{r/D}\) at \(\mathbf {k}=0\) in Eq. (12a).

It is interesting to determine the amplitude of the scaling in L with one open, finite direction, not least in order to determine whether Eq. (84) coinciding with the result known in the literature is a mere coincidence. Technically, the route to take differs from Eq. (42), because in Sect. 3.6 both the substrate and the activity were represented in the \(\sin \) eigensystem. However, integrating over [0, L] (for uniform driving and in order to determine the volume) amounts to evaluating the matrix \(\Delta _{p,\ell }\) in Eq. (89) at \(p=0\), and in that case \(L \Delta _{p,\ell }=2/q_{\ell }\) for \(\ell \) odd and 0 otherwise, which reproduces Eq. (42) at \(r=0\), with \(\tau \) replaced by \(\tau _{\mathscr {R}}\):

$$\begin{aligned} \left\langle V \right\rangle =\frac{2}{L}\sum _{n\,\text {odd}} \frac{2}{q_n} \frac{\tau _{\mathscr {R}}}{Dq_n^2} \frac{2}{Lq_n} = \frac{8\tau _{\mathscr {R}}}{\pi ^4 D}L^2 \sum _{n\,\text {odd}} \frac{1}{n^4} = \frac{\tau _{\mathscr {R}}}{12 D}L^2 \ . \end{aligned}$$
(99)
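For reference, the last equality uses

$$\begin{aligned} \sum _{n\,\text {odd}} \frac{1}{n^4} = \left( 1-2^{-4}\right) \zeta (4) = \frac{15}{16}\,\frac{\pi ^4}{90} = \frac{\pi ^4}{96} \ . \end{aligned}$$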

To determine \(\tau _{\mathscr {R}}=\tau Z\) we replace W in Eqs. (56), (59) and (60) by \(2 (L/\pi )^{\epsilon } \zeta (3-d)\Gamma ((3-d)/2)/(\sqrt{\pi }D(4\pi )^{d/2})\), according to Eq. (96), so that asymptotically in large L

$$\begin{aligned} \left\langle V \right\rangle =\frac{\pi ^{(5-d)/2}2^d \tau }{24 \zeta (3-d) \Gamma \left( \frac{3-d}{2}\right) \kappa }L^d \end{aligned}$$
(100)

which for \(d=1\) reproduces the exact result (for uniform driving)

$$\begin{aligned} \left\langle V \right\rangle =\frac{ \tau }{2 \kappa }L \ , \end{aligned}$$
(101)

which is easily confirmed from first principles. However, repeating the calculation for driving at the centre, \(x^*=L/2\), gives \(d_n=(-1)^{(n-1)/2}\) for n odd and 0 otherwise, so that in \(d=1\) after some algebra

$$\begin{aligned} \left\langle V \right\rangle (x^*=L/2)=\frac{3 \tau }{4 \kappa }L \ , \end{aligned}$$
(102)

which is somewhat off the exact amplitude of \(\ln (2)=0.69314718\ldots \) compared to 3/4. This is apparently due to the renormalisation of \(U_{n,m,\ell }\) in Eq. (96) being correct only up to \(\mathcal {O}(\epsilon ^0)\), but that problem may require further investigation.

4.4 Infinite Cylinder: Crossover

At tree level, Sect. 3.6, it makes no technical difference to study the Sausage on a finite cylinder or an infinite slab, because the relevant observables require integration in space which amounts to evaluating at \(k_n=0\) or \(\mathbf {k}=0\) resulting in the same expression, e.g. Eq. (31) in both cases.

When including interaction, however, it does matter whether the lattice studied is infinite in \(d-1\) dimensions or periodically closed. Clearly a periodically closed axis has a 0-mode and does therefore not impose an effective cutoff in \(\mathbf {k}\). In that respect, periodic closure is identical to infinite extent, while physically it is not (just like at tree level). One may therefore wonder how periodic closure differs from infinite extent mathematically: How does a finite cylinder differ from an infinite strip? As a first step to assess the effect, we replace the open dimension (axis) by a periodically closed one. One may regard this as an unfortunate kludge—after all, what we are really interested in is a system that is finite in two dimensions, namely open in one and periodically closed in the other. However, if the aim is to study finite size scaling in \(2-\epsilon \) dimensions, then two finite dimensions are already \(\epsilon \) too many.

However, the setup of an infinitely long (in \(d-1\) dimensions) periodically closed tube with circumference L does address the problem in question, namely the difference between \(\mathbf {k}=0\) in an infinitely extended axis and \(k_n=0\) in a finite but periodically closed dimension. In addition, an infinite cylinder, compared to an infinite strip, has translational invariance restored in the periodic dimension, and the vertices are therefore dramatically simplified even for a finite system.

The physics of a d-dimensional system with one axis periodically closed is quite clear: At early times, or, equivalently, large extinction rates \(r\gg D/L^2\), the periodic closure is invisible and so the scaling is that of a d-dimensional (infinite) bulk system as described in Sect. 4.2, \(\left\langle V^m \right\rangle \propto r^{-md/2}\). But when the walker starts to re-encounter, due to the periodic closure, sites visited earlier, this “dimension will saturate” and so for very small r, it will display the scaling of an infinite \(d-1\)-dimensional lattice.

Just like for the setup in Sect. 3.5, it is most convenient to study the system for small but finite extinction rate r. The integrals to be performed are identical to Eq. (91), but both sums have a pre-factor of 1 / L, Eq. (8), (rather than one having 1 / L and the other 2 / L, Eq. (5)) and \(L U_{n,m,l}\) has the much simpler Kronecker form

$$\begin{aligned} \int _0^L {\!\mathrm {d}z\,} e^{\imath k_n z} e^{\imath k_m z} e^{\imath k_k z} e^{\imath k_{\ell } z} = L \tilde{U}_{n,m+k,\ell } = L \delta _{n+m+k+\ell ,0} \ . \end{aligned}$$
(103)

Most importantly, the expression corresponding to Eq. (92) sees \(\sin ^2(q_{n'}z)\) replaced by unity, because the bare propagator corresponding to Eq. (41a) carries a factor \(L\delta _{n+m,0}\), Eq. (7), rather than \(L\delta _{n,m}/2\), Eq. (4), which results in the \(n'\) of \(\tilde{U}_{n,m_2-m',n'}\) pairing up with \(-n'\) in \(\tilde{U}_{-n',m'+m_1,\ell }\). For easier comparison, we will keep \(L\tilde{U}_{n,m+k,\ell }\) in the following. We thus have (see Eq. (93))

(104)

Comparing Eqs. (104) to (93), (94) and (96) and re-arranging terms gives for small \(\tilde{\rho }=L^2 \left( r + \epsilon ' -\imath \omega \right) /(4 \pi ^2 D)\)

(105)

and for large \(\tilde{\rho }\)

(106)

using

$$\begin{aligned} \sum _{n'=-\infty }^{\infty } (n'^2 + \tilde{\rho })^{\frac{d-3}{2}} = \tilde{\rho }^{\frac{d-3}{2}} + 2 \zeta (3-d) + \mathcal {O}(\tilde{\rho })&\quad \mathrm{for}\quad \tilde{\rho }\ll 1 \end{aligned}$$
(107a)
$$\begin{aligned} \sum _{n'=-\infty }^{\infty } (n'^2 + \tilde{\rho })^{\frac{d-3}{2}} = \tilde{\rho }^{\frac{d-2}{2}} \frac{\sqrt{\pi } \Gamma \left( \frac{2-d}{2}\right) }{\Gamma \left( \frac{3-d}{2}\right) } + \mathcal {O}\left( \tilde{\rho }^{\frac{d-3}{2}}\right)&\quad \mathrm{for}\quad \tilde{\rho }\gg 1 . \end{aligned}$$
(107b)

The asymptotics above are responsible for all the interesting features to be discussed in the following. Firstly, intuition seems to play tricks: One may think that a small \(\tilde{\rho }\) in the sum on the left of Eq. (107) hardly matters, as \(\tilde{\rho }\) is always large compared to \(n'^2=0\) and always small compared to \(n'^2\rightarrow \infty \). In fact, one might think there is no difference at all between large or small \(\tilde{\rho }\) and be tempted to approximate the sum immediately by an integral, \(\sum (n'^2 + \tilde{\rho })^{(d-3)/2}\approx \int _{-\infty }^{\infty } {\!\mathrm {d}n'\,} (n'^2 + \tilde{\rho })^{(d-3)/2} = \tilde{\rho }^{\frac{d-2}{2}} \sqrt{\pi } \Gamma \left( \frac{2-d}{2}\right) /\Gamma \left( \frac{3-d}{2}\right) \). That, however, produces only the second line, Eq. (107b). The crucial difference is that in a sum each summand actually contributes, whereas in an integral the integrand is weighted by the integration mesh. So, the summand \((n'^2 + \tilde{\rho })^{(d-3)/2}\) has to be evaluated for \(n'=0\), producing \(\tilde{\rho }^{\frac{d-3}{2}}\) in Eq. (107a), which dominates the sum for \(d<2\) (even \(d<3\), but the series does not converge for \(2<d\), and, in fact, is not needed as no IR divergences appear in \(d>2\)) and \(\tilde{\rho }\rightarrow 0\). The remaining terms can actually be evaluated at \(\tilde{\rho }=0\), producing \(2\zeta (3-d)\). The integral, which the (Riemann) sum converges to for large \(\tilde{\rho }\), on the other hand, is strictly proportional to \(\tilde{\rho }^{\frac{d-2}{2}}\) and therefore much less divergent than the sum for small \(\tilde{\rho }\rightarrow 0\) and \(d<2\).
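The integral quoted above follows from the substitution \(n'=\sqrt{\tilde{\rho }}\,x\) and a standard Beta-function identity (valid for \(d<2\), where the integral converges),

$$\begin{aligned} \int _{-\infty }^{\infty } {\!\mathrm {d}n'\,} \left( n'^2+\tilde{\rho }\right) ^{\frac{d-3}{2}} = \tilde{\rho }^{\frac{d-2}{2}} \int _{-\infty }^{\infty } {\!\mathrm {d}x\,} \left( x^2+1\right) ^{\frac{d-3}{2}} = \tilde{\rho }^{\frac{d-2}{2}}\, \frac{\sqrt{\pi }\,\Gamma \left( \frac{2-d}{2}\right) }{\Gamma \left( \frac{3-d}{2}\right) } \ , \end{aligned}$$

which is Eq. (107b).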

Of the two regimes \(\tilde{\rho }\gg 1\) and \(\tilde{\rho }\ll 1\) the former is more easily analysed. Setting \(\epsilon '-\imath \omega =0\) for the time being, we notice that \(\tilde{\rho }\propto L^2 r\) suggests, somewhat counter-intuitively, that large r, which shortens the lifetime of the walker, has the same effect as large L, which prolongs the time it takes the walker to explore the system. Both effects are, however, of the same nature: They prevent the walker from “feeling” the periodicity of the system. In that case, the walker displays bulk behaviour and in fact, Eq. (106) is the same as Eq. (56).

The other regime, \(\tilde{\rho }\ll 1\), is richer. At \(d<2\) and fixed L, Eq. (105) displays a crossover between the two additive terms on the right hand side. Stretching the expansion (107a) beyond its stated limits, for intermediate values of r or L, \(\tilde{\rho }\approx 1\), the first term on the right hand side of Eq. (105) dominates and the scaling behaviour is that of an open infinite slab of linear extent L, Eq. (96). This is because at moderately large r (or, equally, short times t), the walker is not able to fully explore the infinitely extended directions. But rather than “falling off” as in the system with open boundaries, it starts crossing its own path due to the periodic boundary conditions, at which point the scaling like a d-dimensional bulk lattice (\(\tilde{\rho }\gg 1\)) ceases and turns into that of a d-dimensional open one (\(\tilde{\rho }\approx 1\)). The crossover can also be seen in Eq. (107a), which for \(d<2\) is dominated by \(2\zeta (3-d)\) for large \(\tilde{\rho }\) and by \(\tilde{\rho }^{(d-3)/2}\) for small \(\tilde{\rho }\).

As r gets even smaller (or t increases), \(\tilde{\rho }\rightarrow 0\), the scaling is dominated by the infinite dimensions, of which there are \(\tilde{d}=d-1\), i.e. the scaling is that of a bulk system with \(\tilde{d}\) dimensions as discussed in Sect. 4.1, in particular Eq. (56). In this setting, the walker explores an infinitely long, thin cylinder, which has effectively degenerated into an infinitely long line. While the (comparatively) small circumference of the cylinder remains accessible, it is fully explored very quickly compared to the progress in the infinite directions.

To emphasise the scaling of the last two regimes, one can re-write Eq. (105) as

(108)

with \(\tilde{\epsilon }=1+\epsilon =3-d\), \(\tilde{d}=d-1\). Here, the first term displays the behaviour of the infinite slab discussed above (Sect. 4.3, Eq. (96), \(\zeta (3-d)\propto 1/\epsilon \), but \(L/\pi \) there and \(L/(2\pi )\) here) and the second term that of a bulk-system with \(\tilde{d}\) dimensions, Eq. (56); the infrared singularity \((r+\epsilon '-\imath \omega )^{-\tilde{\epsilon }/2}\) is in fact accompanied by the corresponding ultraviolet singularity \(\Gamma (\tilde{\epsilon }/2)\), exactly as if the space dimension was reduced from d to \(\tilde{d}=d-1\).

The second term also reveals an additional factor 1 / L compared to (56).Footnote 10 This expression determines the factor W, which enters the Z-factor inversely, \(Z\propto L r^{\tilde{\epsilon }/2}\), Eq. (60), i.e. in the present setting, the Sausage volume scales like \((\tau /r) L r^{\tilde{\epsilon }/2} = \tau L r^{-\tilde{d}/2}\). The scaling in t is found by replacing r by 1 / t, or more precisely by \(\omega \) and Fourier transforming according to Eq. (82), which results in the scaling \(\left\langle V \right\rangle \propto L t^{1-\tilde{\epsilon }/2}=Lt^{\tilde{d}/2}\).

5 Summary and Discussion

Because the basic process analysed above is very well understood and has a long-standing history [2, 9, 14–16, 23, 24, 26, 30, 32], this work might add little to the understanding of the process itself, were it not for a field-theoretic re-formulation, which is particularly flexible and elegant. The price is a process that ultimately differs from the original model. In hindsight, the agreement of the original Wiener Sausage problem with the process used here to formulate the problem field-theoretically deserves further scrutiny. In the following, we first summarise our findings above with respect to the original Wiener Sausage problem, before discussing the field-theoretic insights in further detail.

5.1 Summary of Results in Relation to the Original Wiener Sausage

The original Wiener Sausage problem is concerned with the volume traced out by a finite sphere attached to a Brownian particle. In the present analysis, this has been replaced by a Brownian particle attempting to spawn immobile offspring at Poissonian rate \(\sigma \). The attempt fails if such immobile particles are present already. On the lattice, this process amounts to a variant of the number of distinct sites visited [29].

Above, the field-theoretic treatment has been carried out perturbatively to one loop for dimensions \(d<2=d_c\), but it turns out that there are no higher order loops to be considered. In any dimension, by construction and as a matter of universality, the large time and space asymptotes of the original Wiener Sausage, the process on the lattice and the field theory are expected to coincide at least as far as exponents are concerned.

The tree level of the field theory describes the phenomenon without interaction, i.e. ignoring returns. The resulting observables are the asymptotes of the Wiener Sausage volume in dimensions above \(d=2\). The moments found in the bulk, Eqs. (31), (35), (38), (39) and generally (40), \(\left\langle V^m \right\rangle \propto m! \tau \sigma ^{m-1} r^{-m}\), coincide with those from the exact moment generating function Eq. (36) of the process ignoring return, obtained by probabilistic considerations.

In the infinite slab, the field theory still produces exact results (of the process ignoring return), such as Eqs. (42) and (46), although higher moments are tedious to calculate in closed form, Eq. (47). Again, they are easily verified using generating functions, such as Eq. (49), which also confirms the general form Eq. (48), \(\left\langle V^m \right\rangle \propto \tau \sigma ^{m-1} L^{2m} D^{-m}\), determined field-theoretically.

Below two dimensions, infrared divergences occur in the perturbation theory, which need to be controlled by a finite extinction rate r (or \(\epsilon '\)). It turns out that all orders can be dealt with at once, because “parquet diagrams” [12] can be summed over in a geometric (Dyson) sum, such as Eq. (52). We can therefore expect exact universal exponents of asymptotes, whereas amplitudes are generally non-universal and can be affected by field-theoretically irrelevant terms. In the bulk, the asymptotes Eqs. (80), (81), (86) and generally (87), \(\left\langle V^m \right\rangle \propto m! \tau \sigma ^{m-1} r^{-md/2} (D^{d/2}/\kappa )^{m}\), reproduce the (leading order) exponents as known in the literature [2]. In one dimension, the first moment of the volume, Eq. (84), reproduces the asymptote (in large t) in the continuum [2] and on the lattice [20, 29]. Even the amplitude is reproduced correctly.

The bulk calculations can be modified to apply to the infinite slab, producing Eq. (98), \(\left\langle V^m \right\rangle \propto m! \tau \sigma ^{m-1} (L/\pi )^{md} \kappa ^{-m}\). However, the renormalisation in this case is correct only to leading order in \(\epsilon \), as terms of order \(\epsilon ^0\), such as Eq. (95), were omitted (whereas in the bulk, the Z-factor was exact, Eqs. (83) or (60)). In one dimension, i.e. when the walker can explore only a finite interval, the amplitude of the first moment for uniformly distributed initial starting points, Eq. (100) at \(d=1\), coincides with the exact result, Eq. (101). However, placing the particle initially at the centre results in an amplitude, (102), that differs from the exact result.

Unless one is prepared to allow for a space-dependent \(\kappa \) (whose space dependence is in fact irrelevant in the field-theoretic sense) as suggested in Eq. (93) for the infinite slab, one cannot expect the resulting amplitudes to recover the exact results. That Eq. (101) does so nevertheless, may be explained by the “averaging effect” of the uniform driving, given that

$$\begin{aligned} \int _0^L {\!\mathrm {d}z\,} \left( \mathrm{Li}_{1+\epsilon }\left( e^{\frac{2\pi \imath z}{L}}\right) + \mathrm{Li}_{1+\epsilon }\left( e^{-\frac{2\pi \imath z}{L}}\right) \right) = 0 \ , \end{aligned}$$

see (94).

As alluded to above, the field-theoretic description of the Wiener Sausage is very elegant, not least because the diagrams have an immediate interpretation. For example, one vertex corresponds to a substrate particle deposited while the active particle is propagating. Correspondingly, another corresponds to the suppression of a deposition as the active particle encounters an earlier deposition—the active particle returns to a place it has been before. All loops can therefore be contracted along the wavy line to produce a trajectory, illustrating that the loop integrals calculated above in fact capture the probability of a walker to return: \(W\propto \omega ^{-\epsilon /2}\), Eq. (59), which in the time domain gives \(t^{-d/2}\).

5.2 Original Motivation

The present study was motivated by a number of “technicalities” which were encountered by one of us during the study of a more complicated field theory. The first issue, as mentioned in the introduction, was the “fermionic” or excluded-volume interaction. In a first step, that was generalised to an arbitrary carrying capacity \(n_0\), whereby the deposition rate of immobile offspring varies smoothly in the occupation number until the carrying capacity is reached. It was argued above, Fig. 2, that the constraint to a finite but large carrying capacity \(n_0\), which may be conceived as less brutal than setting \(n_0=1\), can be understood as precisely the latter constraint, but on a more complicated lattice.

Even though the field theory was constructed in a straight-forward fashion, the perturbative implementation of the constraint, namely by effectively discounting depositions that should not have happened in the first place, makes it look like a minor miracle that it produces the correct scaling (and even the correct amplitudes in some cases). We conclude that the present approach is perfectly suitable to implement excluded volume constraints.

It is interesting to vary \(n_0\) in the expressions obtained for the volume moments. At first it may not be obvious that, for example, the first volume moments in one dimension, Eqs. (84) and (101), are linear in \(n_0\), because \(\kappa =\tau /n_0\), Eq. (22). Given that \(\kappa \) enters the mth moment \(\left\langle V^m \right\rangle \) as \(\kappa ^{-m}\), Eqs. (87) and (98), the carrying capacity therefore enters through \(\kappa =\gamma /n_0\) as \(n_0^{m}\). Even though the carrying capacity enters smoothly into the deposition rate (or, equivalently, the suppression of the deposition), in dimensions \(d<2\) each site is visited infinitely often and is therefore “filled up to the maximum” with offspring particles, as if the carrying capacity was a hard cutoff (i.e. as if the deposition rate were constant until the occupation reaches the carrying capacity). The volume of each sausage therefore increases by a factor \(n_0\) in dimensions \(d<2\) and is independent of it (as \(\kappa \) does not enter) in \(d>2\).

The second issue to be investigated was the presence of open boundaries. This is, obviously, not a new problem as far as field theory is concerned in general, but in the present case being able to change boundary conditions exploits the flexibility of the field-theoretical re-formulation of the Wiener Sausage and allows us to probe results in a very instructive way.

It is often said that translational invariance corresponds to momentum conservation in \(\mathbf {k}\)-space, but the present study highlights some subtleties. As far as bare propagators are concerned, open, periodic, or, in fact, reflecting boundary conditions all allow them to be written with a Kronecker \(\delta \)-function. In that sense, bare propagators do not lose momentum. Momentum, however, is generally not conserved in vertices, i.e. vertices with more than two legs do not come with a simple \(\delta _{n+m+\ell ,0}\), but rather in a form such as Eqs. (10c) or (90).

These more complicated expressions are present even at tree level, Eq. (46). This touches on an interesting feature, namely that non-linearities are present even in dimensions above the upper critical dimension—they have to be, as otherwise the tree level lacks a mechanism by which immobile offspring are deposited.

Below the upper critical dimension, the lack of momentum conservation has three major consequences: Firstly, each vertex comes with a summation and so a loop formed of two vertices, Eq. (91), requires not only one summation “around the loop” but a second one accounting for another index, which is no longer fixed by momentum conservation. This is a technicality, but one that requires more and potentially serious computation. Secondly, and more seriously, the very structure of the vertex might change. For example, at bare level \(\kappa \) comes with a factor \(LU_{n,m+k,\ell }\), but that \(U_{n,m+k,\ell }\) might change under renormalisation.

Finally, the third and probably most challenging consequence is the loss of momentum conservation in the propagator. While a lack of translational invariance may not be a problem at bare level, the presence of non-momentum conserving vertices can render the propagators themselves non-momentum conserving—provided the propagators renormalise at all (see the discussion after Eq. (89)), which they do not in the present case, as far as the two shown in Eq. (12a) are concerned. However, the mixed two-point function parameterised by \(\tau \) has every right to be called a propagator, and it does renormalise. Luckily, however, it never features within loops, so the complications arising from its new structure can be handled within observables and do not spoil the renormalisation process itself.

A consequence of the Dirichlet boundary conditions is the existence of a lowest, non-vanishing mode, \(q_1=\pi /L\), Eq. (98), which, in fact, turns out to play the rôle of the effective mass—just like the minimum of the inverse propagator, \((-\imath \omega +D\mathbf {k}^2+r)\), the “gap”, is r in the bulk, it is \(Dq_1^2+r\) in the presence of Dirichlet boundary conditions, and thus does not vanish even when \(r=0\). This is a nice narrative, which is challenged, however, when periodic boundary conditions are applied. At tree level, when the interaction is switched off, periodic boundaries cannot be distinguished from an infinite system, and so we would evaluate at tree level an infinite and a periodic system both at \(\mathbf {k}=0\) and \(k_n=0\) respectively, producing exactly the same expectation (for exactly the right reason).

The situation is different beyond tree level. Periodic or open, the system is finite. However, periodic boundaries do not drain active particles, so the lowest wave number vanishes, \(k_n=0\). To control the infrared (in the infinite directions), a finite extinction rate r is necessary, which effectively competes with the system size L via \(\tilde{\rho }\propto L^2 r/D\), Eqs. (105) and (106). If \(\tilde{\rho }\) is large, bulk behaviour \(\propto \tilde{\rho }^{-\epsilon /2}\) is recovered, Eq. (106), as is the case in the open system (see footnote before Eq. (94)). For moderately small values, \(\zeta (3-d)\propto 1/\epsilon \) dominates, Eq. (107a), a signature of a d-dimensional system with open boundaries, Eq. (96). In that case, scaling amplitudes are in fact \(\propto L^{\epsilon }\), Eq. (108). However, the presence of the 0-mode allows for a different asymptote: as \(\tilde{\rho }\) is lowered further, the bulk-like term governing the \(d-1=\tilde{d}\) infinite dimensions takes over, \(\propto L^{-1}((r+\epsilon '-\imath \omega )/D)^{-\tilde{\epsilon }/2}\). It is the appearance of that term and only that term which distinguishes periodic from open boundary conditions.

So, the narrative of “lowest wave number corresponds to mass” is essentially correct. In open systems, it dominates for all small masses. In periodic systems, the scaling of the lowest non-zero mode competes with that of a \(d-1\)-dimensional bulk system due to the presence of a 0-mode in the periodic dimension, which asymptotically drops out.

The third point to be addressed in the present work concerned the special properties of the propagator of an immobile species. The fact that the propagator is, apart from \(\delta (\mathbf {k}+\mathbf {k}')\), Eq. (12b), independent of the momentum is physically relevant as the particles deposited stay where they have been deposited and so the walker has to truly return to a previous spot in order to interact. Also, deposited particles are not themselves subject to any boundary conditions—this is the reason for the ambiguity of the eigenfunctions that can be used for the fields of the substrate particles. If deposited particles were to “fall off” the lattice, the volume of the sausage on a finite lattice could not be determined by taking the \(\omega \rightarrow 0\) limit.

It is interesting to see what happens to the crucial integral Eq. (56) when the immobile propagator is changed to \((-\imath \omega +\nu \mathbf {k}^2+\epsilon ')^{-1}\):

(109)

which at external momentum \(\mathbf {k}=0\) is Eq. (56) with D replaced by \(D+\nu \). The integral thus remains essentially unchanged, just that the effective diffusion constant is adjusted by \(D\rightarrow D+\nu \).

A slightly bigger surprise is the fact that \(\epsilon '\), the IR regulator of the substrate propagator, is just as good an IR regulator as r, the IR regulator of the activity propagator. The entire field theory, and thus all the physics discussed above, does not change when the “evaporation of walkers” is replaced by “evaporation of substrate particles”. Stationarity in infinite systems is in the two cases due to completely different processes, which, however, have the same effect on the moments of the Sausage volume: If r is finite, then a walker eventually disappears, leaving behind the trace of substrate particles, which stay indefinitely. If \(\epsilon '\) is finite, then stationarity is maintained as substrate particles disappear while new ones are produced by an ever wandering walker.

Finally, the fourth issue to be highlighted in the present work was that of observables which are spatial integrals of densities. These observables have a number of interesting features. As far as space is concerned, eigenfunctions with a 0-mode immediately give access to integrals over all space. However, open boundaries force us to perform a summation (and an awkward looking one, too, say Eq. (42)).

5.3 Future Work

Two interesting extensions of the present work deserve brief highlighting. Firstly, the Wiener Sausage may be studied on networks: Given a network or an ensemble thereof, how many distinct sites are visited as a function of time. The key ingredient in the analysis is the lattice Laplacian, which provides a mathematical tool to describe the diffusive motion of the walker. The contributions \(\mathbf {k}^2\) and \(q_n^2\) in the denominator of the propagator, Eqs. (12a) and (41a), are the squared eigenvalues of the Laplacian operator in the continuum and, in fact, of the lattice Laplacian, for, say, a square lattice. The integrals in \(\mathbf {k}\)-space and, equivalently, sums like Eqs. (5) and (42) should be seen as integrating over all eigenvalues \(\mathbf {k}^2\), whose density in d dimensions is proportional to \(|\mathbf {k}|^{d-1}\). It is that d which determines the scaling in, say, \(\left\langle V \right\rangle \propto t^{d/2}\) for \(d<2\). In other words, if \(|\mathbf {k}|^{d_s-1}\) is the density of eigenvalues (the density of states) of the lattice Laplacian, then the Wiener Sausage volume scales like \(t^{d_s/2}\) (and the probability of return like \(t^{-d_s/2}\)). Provided the propagator does not acquire an anomalous dimension, which could depend on \(d_s\) in a complicated way, the difference between a field theory on a regular lattice with dimension d and one on a complicated graph with spectral dimension \(d_s\) is captured by replacing d by \(d_s\) [10, p. 23]. We confirmed this finite size scaling of the Wiener Sausage on four different fractal lattices.

The second interesting extension is the addition of processes, such as branching of the walkers itself. In that case they interact not only with their past trace, but also with the trace of ancestors and successors. This field theory is primarily dominated by the branching ratio, say s, and \(\lambda \), whereas \(\kappa \), \(\chi \) and \(\xi \) are irrelevant. Preliminary results suggest that \(d_c=4\) [see also 31] in this case and again \(\left\langle V \right\rangle \propto L^{2-\epsilon }\), this time, however, with \(\epsilon =4-d\). Higher moments seem to follow \(\left\langle V^m \right\rangle \propto L^{(m-1)d+2-\epsilon }=L^{md-2}\). The latter result suggests that the dimension of the cluster formed of sites visited is that of the underlying lattice.