1 Introduction

The article is concerned with the following nonlinear stochastic Schrödinger equation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( -\mathrm {i}A u(t)-\mathrm {i}F(u(t))\right) \mathrm {d}t-\mathrm {i}B u(t) \circ \mathrm {d}W(t),\qquad t> 0,\\ u(0)&=u_0, \end{aligned}\right. \end{aligned}$$
(1.1)

in the energy space \(E_A:=\mathcal {D}(A^{\frac{1}{2}}),\) where A is a selfadjoint, non-negative operator with a compact resolvent in an \(L^2\)-space H, F is a nonlinearity, B is a linear bounded operator, W is a Wiener process and the equation is understood in the (multiplicative) Stratonovich sense.

Three basic examples of the operator A are

  • the negative Laplace–Beltrami operator \(-\,\Delta _g\) on a compact Riemannian manifold \((M,g)\) without boundary,

  • the negative Laplacian \(-\,\Delta \) on a bounded domain of \({\mathbb {R}^d}\) with Neumann or Dirichlet boundary conditions,

  • fractional powers of the first two examples.

The two basic model nonlinearities are

  • the defocusing power nonlinearity \(F_{\alpha }^+(u):=\vert u\vert ^{\alpha -1}u\) with subcritical exponents in the sense that the embedding \({E_A}\hookrightarrow L^{\alpha +1}\) is compact

  • and the focusing nonlinearity \(F_{\alpha }^-(u):=-\vert u\vert ^{\alpha -1}u\) with an additional restriction on the power \(\alpha .\)

The typical noise term has the form

$$\begin{aligned} -\mathrm {i}B u(t) \circ \mathrm {d}W(t) =-\mathrm {i}\sum _{m=1}^{\infty }e_m u(t) \circ \mathrm {d}\beta _m(t)=-\frac{1}{2}\sum _{m=1}^{\infty }e_m^2 u(t)\,\mathrm {d}t-\mathrm {i}\sum _{m=1}^{\infty }e_m u(t) \mathrm {d}\beta _m(t) \end{aligned}$$
(1.2)

with a sequence of independent standard real Brownian motions \(\left( \beta _m\right) _{m\in \mathbb {N}}\) and functions \(\left( e_m\right) _{m\in \mathbb {N}}\) satisfying certain regularity and decay conditions that guarantee the convergence of the series on the RHS of (1.2) in the space \({E_A}.\)
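To make the Stratonovich structure concrete, here is a minimal numerical sketch (ours, not from the paper) for a single noise mode with a constant real coefficient \(e\): the Itô form of the noise in (1.2) then reads \(\mathrm {d}u=-\frac{1}{2}e^2u\,\mathrm {d}t-\mathrm {i}eu\,\mathrm {d}\beta\), with exact solution \(u(t)=u_0e^{-\mathrm {i}e\beta (t)}\), so the modulus is conserved pathwise.

```python
import numpy as np

# Single-mode sketch (illustrative): Ito form of the Stratonovich noise,
#   du = -(1/2) e^2 u dt - i e u dbeta,
# has the exact solution u(t) = u(0) exp(-i e beta(t)), so |u(t)| = |u(0)|
# along every path. We compare Euler-Maruyama on the Ito form with the
# exact solution on one path.
rng = np.random.default_rng(0)
e, T, n = 1.0, 1.0, 50_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments

# Euler-Maruyama on the Ito form, u(0) = 1, written as a product of factors
u = np.prod(1.0 - 0.5 * e**2 * dt - 1j * e * dW)

u_exact = np.exp(-1j * e * dW.sum())        # exact solution at time T

print(abs(u), abs(u_exact))                 # both close to 1
```

The Stratonovich correction \(-\frac{1}{2}e^2u\,\mathrm {d}t\) is exactly what makes the modulus a conserved quantity; dropping it yields exponential growth of \(\mathbb {E}\vert u\vert ^2\).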

The main aim of this study is twofold. Firstly, we construct a martingale solution of problem (1.1) by a stochastic version of the compactness method. Secondly, we prove the uniqueness of solutions by means of stochastic Strichartz estimates. In this respect the paper differs from many previous works on stochastic nonlinear Schrödinger equations, notably [8, 18, 28] and the references therein, in which the proofs of both existence and uniqueness were obtained by means of appropriate stochastic Strichartz estimates. The compactness approach to the existence of solutions of 1-D stochastic Schrödinger equations in variational form has recently been used by Keller and Lisei [31]. Classical references for the construction of weak solutions of the deterministic NLSE by a combination of a compactness method and the Galerkin approximation are [23, 24] for intervals and [42] as well as [56] for domains of arbitrary dimension. Let us point out that Burq et al. [4] also used a compactness method in the proof of their Theorem 3, but instead of the Galerkin approximation they used an approximation by more regular solutions. In particular, we give a new proof of these results; we would like to emphasise, however, that the deterministic case is significantly simpler, since the spectral-theoretic methods we develop to construct the approximations of the noise term are not needed there.

In a technical sense, the present paper is motivated by the construction of a global solution of the cubic equation on compact 3d manifolds M, generalizing the existence part (Theorem 3) of Burq et al. [4] to the stochastic setting. In three dimensions, the fixed point argument from [8] is restricted to higher regularity, because it requires the Sobolev embedding \(H^{s,q} \hookrightarrow L^\infty ,\) which is more restrictive in 3D than in 2D. Hence, this approach only yields local solutions, which motivates constructing a global solution in \(H^1(M)\) by an approximation procedure based on the conservation laws of the NLSE, without using the dispersive properties of the Schrödinger group. We remark that in [4], the authors also prove uniqueness for the deterministic NLSE in 3D. For the equation with noise, this question will be addressed in a forthcoming paper.

In the present paper, we construct a martingale solution of problem (1.1) by a modified Faedo–Galerkin approximation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u_n(t)&= \left( -\mathrm {i}A u_n(t)-\mathrm {i}P_n F\left( u_n(t)\right) \right) \mathrm {d}t-\mathrm {i}S_n B(S_n u_n(t)) \circ \mathrm {d}W(t),\quad t>0, \\ u_n(0)&=P_n u_0, \end{aligned}\right. \end{aligned}$$
(1.3)

in finite dimensional subspaces \(H_n\) of H spanned by some eigenvectors of A. Here, \(P_n{:}\,H\rightarrow H_n\) are the standard orthogonal projections and \(S_n{:}\,H\rightarrow H_n\) are selfadjoint operators derived from the Littlewood–Paley decomposition associated to A. The reason for using the operators \(\left( S_n\right) _{n\in \mathbb {N}}\) lies in the uniform estimate

$$\begin{aligned} \sup _{n\in \mathbb {N}}\Vert S_n \Vert _{L^p\rightarrow L^p}<\infty , \quad 1<p<\infty , \end{aligned}$$

which turns out to be necessary in the estimates of the noise due to the \(L^p\)-structure of the energy, see (1.4) below, and which fails if one replaces \(S_n\) by \(P_n.\) Using the Littlewood–Paley decomposition via the operators \((S_n)_{n\in \mathbb {N}}\) can be viewed as one of the main analytical contributions of this paper. We remark that, in the meantime, a similar construction has been used in [29] to construct a solution of a stochastic nonlinear Maxwell equation by estimates in \(L^q\) for some \(q>2\). This indicates that our method has the potential to widen the field of application of the classical Faedo–Galerkin method significantly.
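The failure of the uniform \(L^p\)-bound for the sharp projections can already be seen in the simplest model case (our illustration; the paper works with a Littlewood–Paley decomposition for a general operator S): on the torus, the sharp Fourier cutoff is convolution with the Dirichlet kernel, whose \(L^1\)-norm, which equals its norm as a convolution operator on \(L^1\) and on \(L^\infty\), grows logarithmically in n, whereas a smooth (Fejér-type) cutoff has kernel norms bounded uniformly in n.

```python
import numpy as np

# Model-case illustration (torus, A = -d^2/dx^2; not the general S of the
# paper): the sharp spectral cutoff acts by convolution with the Dirichlet
# kernel D_n, a smoothed cutoff by convolution with the Fejer kernel K_n.
# The (1/2pi)-normalized L^1 norm of the kernel is the operator norm of the
# convolution on L^1 (and on L^infty).
N = 200_000
x = -np.pi + (np.arange(N) + 0.5) * (2 * np.pi / N)  # midpoint grid, avoids x = 0

def l1_norm(kernel):
    return np.abs(kernel).mean()          # midpoint rule for (1/2pi) * int |k|

def dirichlet(n):
    return np.sin((n + 0.5) * x) / np.sin(0.5 * x)

def fejer(n):
    return (np.sin(0.5 * n * x) / np.sin(0.5 * x)) ** 2 / n

norms_D = [l1_norm(dirichlet(n)) for n in (8, 32, 128)]   # grows like log n
norms_K = [l1_norm(fejer(n)) for n in (8, 32, 128)]       # stays equal to 1
print(norms_D, norms_K)
```

The Fejér kernel is non-negative with integral one, which is the elementary analogue of the uniform bound \(\sup _n\Vert S_n \Vert _{L^p\rightarrow L^p}<\infty\), while the growing Dirichlet norms mirror the failure of this bound for \(P_n\).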

On the other hand, the orthogonal projections \(P_n\) are used in the deterministic part, because they do not destroy the cancellation effects which lead to the mass and energy conservation

$$\begin{aligned} \Vert u \Vert _{L^2}^2={\text {const}}, \qquad \frac{1}{2}\Vert A^{\frac{1}{2}}u \Vert _{L^2}^2+{\hat{F}}(u)={\text {const}} \end{aligned}$$
(1.4)

for solutions u of problem (1.1) in the deterministic setting, where \({\hat{F}}\) denotes the antiderivative of the nonlinearity F. Note that in the case \(F_\alpha ^\pm (u)=\pm \vert u\vert ^{\alpha -1}u\), the antiderivative is given by \({\hat{F}}_\alpha ^\pm (u)=\pm \frac{1}{\alpha +1}\Vert u \Vert _{L^{\alpha +1}}^{\alpha +1}.\) In the stochastic case, the mass conservation \(\Vert u_n \Vert _{L^2}^2={\text {const}}\) for solutions of (1.3) holds almost surely due to the Stratonovich form of the noise. Moreover, the conservation of the energy carries over in the sense that a Gronwall type argument yields the uniform a priori estimates, for every \(T>0\),

$$\begin{aligned} \sup _{n\in \mathbb {N}}\mathbb {E}\Big [\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}^2\Big ]<\infty ,\qquad \sup _{n\in \mathbb {N}}\mathbb {E}\Big [\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\Big ]<\infty . \end{aligned}$$
(1.5)

Combined with the Aldous condition [A], see Definition 4.3, which is a stochastic version of equicontinuity, the estimates (1.5) lead to the tightness of the sequence \(\left( u_n\right) _{n\in \mathbb {N}}\) in the locally convex space

$$\begin{aligned} Z_T:={C([0,T],{E_A^*})}\cap {L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\cap C_w([0,T],{E_A}), \end{aligned}$$

where \(C_w([0,T],{E_A})\) denotes the space of continuous functions with respect to the weak topology in \({E_A}.\) The construction of a martingale solution is similar to [7] and employs a limit argument based on Jakubowski’s extension of the Skorohod Theorem to nonmetric spaces and the Martingale Representation Theorem from [21, chapter 8]. Our main result is the following Theorem.

Theorem 1.1

Let \(T>0\) and \(u_0\in E_A.\) Under the Assumptions 2.1, 2.4, 2.6 and 2.7, there exists a martingale solution \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u\right) \) of Eq. (1.1) (see Definition 2.9), which satisfies

$$\begin{aligned} u\in L^q({\tilde{\Omega }},{L^\infty (0,T;{E_A})}) \end{aligned}$$
(1.6)

for all \(q\in [1,\infty )\) and

$$\begin{aligned} \Vert u(t) \Vert _{L^2(M)}=\Vert u_0 \Vert _{L^2(M)}\qquad {\tilde{\mathbb {P}}}\text {-a.s. for all } t \in [0,T]. \end{aligned}$$

As an application of Theorem 1.1, we get the following Corollary. Note that an analogous result holds in the case of a bounded domain, see Corollary 3.4.

Corollary 1.2

Let \((M,g)\) be a compact d-dimensional Riemannian manifold without boundary. Let \(T>0\) and \(u_0\in H^1(M).\) Under Assumption 2.7 and either (i) or (ii)

  1. (i)

    \(F(u)= \vert u\vert ^{\alpha -1}u\) with \( \alpha \in \left( 1,1+\frac{4}{(d-2)_+}\right) ,\)

  2. (ii)

    \(F(u)= -\vert u\vert ^{\alpha -1}u\) with \( \alpha \in \left( 1,1+\frac{4}{d}\right) ,\)

the equation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( \mathrm {i}\Delta _g u(t)-\mathrm {i}F(u(t))\right) \mathrm {d}t-\mathrm {i}B u(t) \circ \mathrm {d}W(t)\quad \text {in} \quad H^1(M),\\ u(0)&=u_0, \end{aligned}\right. \end{aligned}$$
(1.7)

has a martingale solution with

$$\begin{aligned} u\in L^q({\tilde{\Omega }},{L^\infty (0,T;H^1(M))}), \end{aligned}$$
(1.8)

for all \(q\in [1,\infty )\) and

$$\begin{aligned} \Vert u(t) \Vert _{L^2(M)}=\Vert u_0 \Vert _{L^2(M)}\qquad {\tilde{\mathbb {P}}}\text {-a.s. for all } t \in [0,T]. \end{aligned}$$

Furthermore, we address the question of uniqueness of the solution from Corollary 1.2 in two dimensions.

Corollary 1.3

In the situation of Corollary 1.2 with \(d=2,\) there exists a unique strong solution of (1.7) in \(H^1(M)\) and the martingale solutions are unique in law.

We obtain pathwise uniqueness by an improvement of the regularity of solutions based on the Strichartz estimates by Bernicot and Samoyeau [13] and by Brzeźniak and Millet [8]. Ondreját showed in [44], in a quite general setting, that this is sufficient to obtain a strong solution. In fact, our uniqueness result is more general than formulated in Corollary 1.3. On the one hand, we allow possibly non-compact manifolds with bounded geometry. On the other hand, uniqueness holds in the strictly larger class \(L^{r}(\Omega ,L^\beta (0,T;H^s(M)))\) with \(r> \alpha ,\) \(\beta :=\max \left\{ 2,\alpha \right\} \) and

$$\begin{aligned} s \in {\left\{ \begin{array}{ll} (\frac{2\alpha -1}{2\alpha },1] &{} \text {for } \alpha \in (1,3], \\ (\frac{\alpha (\alpha -1)-1}{\alpha (\alpha -1)},1] &{} \text {for } \alpha >3. \end{array}\right. } \end{aligned}$$

For the details, we refer to Theorem 7.5.

Let us point out that stochastic nonlinear Schrödinger equations are used in fiber optics, nonlinear photonics and optical wave turbulence, see for instance the recent review paper [51] by Turitsyn et al. and the references therein. There is also an extensive literature on nonlinear Schrödinger equations on special manifolds, e.g. Schwarzschild manifolds, see the papers [1, 38, 40]. In these papers the Schrödinger equation is related to the corresponding nonlinear wave equation, which in turn appears in the theory of gravitational fields. Furthermore, we would like to mention the article [48], which deals with the derivation of the Schrödinger equation on manifolds. From a mathematical point of view, important questions are how the geometry of the manifold influences the qualitative behavior of solutions and how the geometry of the manifold and the external noise influence the well-posedness theory. Nonlinear Schrödinger equations on manifolds have been studied e.g. by Burq et al. [3, 4], see also the references therein. The motivation for these authors was “to evaluate the impact of geometry of the manifold on the well-posedness theory, having in mind the infinite propagation speed of the Schrödinger equation”.

The paper is organized as follows. In Sects. 2 and 3, we fix the notation, formulate our assumptions and present a number of typical examples of operators A, model nonlinearities F and noise coefficients B covered by our framework. In Sect. 4, we are concerned with the compactness results that we use later on. In Sect. 5, we formulate the Galerkin approximation equations and prove the a priori estimates which, in view of Sect. 4, are sufficient for compactness. Section 6 is devoted to the proof of Theorem 1.1, and in Sect. 7, we focus on uniqueness in the case of 2d manifolds with bounded geometry.

2 Notation and assumptions

In this section, we fix the notation, explain the assumptions and formulate an abstract framework for the stochastic nonlinear Schrödinger equation.

Let \(\left( X,\Sigma ,\mu \right) \) be a \(\sigma \)-finite measure space equipped with a metric \(\rho \) such that \(\mu (B(x,r))<\infty \) for all \(x\in X\) and \(r>0\) and such that \(\mu \) satisfies the doubling property, i.e.

$$\begin{aligned} \mu (B(x,2r))\lesssim \mu (B(x,r)). \end{aligned}$$
(2.1)

This estimate implies

$$\begin{aligned} \mu (B(x,tr))\lesssim t^d \mu (B(x,r)),\qquad x\in X,\quad r>0,\quad t\ge 1 \end{aligned}$$
(2.2)

and the number \(d\in \mathbb {N}\) is called the doubling dimension. Let \(M\subset X\) be an open subset with finite measure and \(L^q(M)\) for \(q\in [1,\infty ]\) the space of equivalence classes of \(\mathbb {C}\)-valued q-integrable functions. For \(q\in [1,\infty ],\) let \(q':=\frac{q}{q-1}\in [1,\infty ]\) denote the conjugate exponent, so that \(\frac{1}{q}+\frac{1}{q'}=1.\) We further abbreviate \({H}:=L^2(M)\). In the special case that M is a Riemannian manifold, \(H^{s,q}(M)\) denotes the fractional Sobolev space of regularity \(s\in \mathbb {R}\) and integrability \(q\in (1,\infty ),\) and we write \(H^s(M):=H^{s,2}(M)\) for short. For a definition of these spaces, we refer to Definition B.1.

If functions \(a,b\ge 0\) satisfy the inequality \(a\le C(A) b\) with a constant \(C(A)>0\) depending on the expression A, we write \(a \lesssim _A b.\) If we have \(a \lesssim _A b\) and \(b \lesssim _A a,\) we write \(a \eqsim _A b.\) For two Banach spaces EF, we denote by \(\mathcal {L}(E,F)\) the space of linear bounded operators \(B{:}\,E\rightarrow F\) and abbreviate \({\mathcal {L}}(E):={\mathcal {L}}(E,E).\) Furthermore, we write \(E\hookrightarrow F,\) if E is continuously embedded in F;  i.e. \(E\subset F\) with natural embedding \(j\in {\mathcal {L}}(E,F).\) The space \(C^{1,2}([0,T]\times E,F)\) consists of all functions \(\varPhi {:}\,[0,T]\times E\rightarrow F\) such that \(\varPhi (\cdot ,x)\in C^1([0,T],F)\) for every \(x\in E\) and \(\varPhi (t,\cdot )\in C^2(E,F)\) for every \(t\in [0,T].\) For two Hilbert spaces \(H_1\) and \(H_2,\) the space of Hilbert–Schmidt operators \(B{:}\,H_1\rightarrow H_2\) is abbreviated by \({\text {HS}}(H_1,H_2).\) The resolvent set of a densely defined linear operator \(A{:}\,E\supset {\mathcal {D}}(A)\rightarrow E\) on a Banach space E is denoted by \(\rho (A).\) For a probability space \(\left( \Omega , \mathcal {F}, \mathbb {P}\right) ,\) the law of a random variable \(X{:}\,\Omega \rightarrow E\) is denoted by \(\mathbb {P}^X\).

Assumption and Notation 2.1

We assume the following:

  1. (i)

    Let A be a non-negative selfadjoint operator on H with domain \({\mathcal {D}}(A).\)

  2. (ii)

    There is a strictly positive selfadjoint operator S on H with compact resolvent commuting with A which fulfills \(\mathcal {D}(S^k)\hookrightarrow E_A\) for sufficiently large k. Moreover, we assume that S has generalized Gaussian \((p_0,p_0')\)-bounds for some \(p_0\in [1,2),\) i.e.

    $$\begin{aligned} \Vert {\mathbf {1}}_{B(x,t^\frac{1}{m})}e^{-tS}{\mathbf {1}}_{B(y,t^\frac{1}{m})}\Vert _{{\mathcal {L}}(L^{p_0},L^{p_0'})} \le C{\mu (B(x,t^\frac{1}{m}))}^{\frac{1}{p_0'}-\frac{1}{p_0}} \exp \left\{ -c \left( \frac{\rho (x,y)^m}{t}\right) ^{\frac{1}{m-1}}\right\} , \end{aligned}$$
    (2.3)

    for all \(t>0\) and \((x,y)\in M\times M\) with constants \(c,C>0\) and \(m\ge 2.\)

  3. (iii)

    The Hilbert space \(E_A:=\mathcal {D}(A^{\frac{1}{2}})\) equipped with the inner product

    $$\begin{aligned} \big (u,v\big )_{{E_A}}:=\big (u,v\big )_{H}+\big (A^{\frac{1}{2}}u,A^{\frac{1}{2}}v\big )_{H},\qquad u,v\in {E_A}, \end{aligned}$$

    is called the energy space and the induced norm \(\Vert \cdot \Vert _{E_A}\) is called the energy norm associated to A. We denote the dual space of \(E_A\) by \(E_A^*\) and abbreviate the duality with \(\langle \cdot , \cdot \rangle := \langle \cdot , \cdot \rangle _{E_A^*,E_A},\) where the complex conjugation is taken over the second variable of the duality. Note that \(\left( E_A, H, E_A^*\right) \) is a Gelfand triple, i.e.

    $$\begin{aligned} E_A\hookrightarrow H \cong H^* \hookrightarrow E_A^*. \end{aligned}$$
  4. (iv)

    Let \(\alpha \in (1,p_0'-1)\) be such that \({E_A}\) is compactly embedded in \({L^{\alpha +1}(M)}.\) We set

    $$\begin{aligned} p_{\max }:= \sup \left\{ p\in (1,\infty ]{:}\,{E_A}\hookrightarrow L^p(M) \quad \text {is continuous}\right\} \end{aligned}$$

    and note that \(p_{\max }\in [\alpha +1,\infty ].\) In the case \(p_{\max }<\infty ,\) we assume that \({E_A}\hookrightarrow L^{p_{\max }}(M)\) is continuous, but not necessarily compact.
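For orientation, in the model case \(A=-\Delta _g\) on a compact d-dimensional manifold (see Sect. 3.2), where \(E_A=H^1(M)\), the Sobolev embedding theorem determines \(p_{\max }\) explicitly; the following is our illustration, not part of the assumptions:

```latex
% Model case E_A = H^1(M), M compact and d-dimensional:
p_{\max} =
\begin{cases}
  \infty, & d \in \{1,2\},\\[2pt]
  \dfrac{2d}{d-2}, & d \ge 3,
\end{cases}
\qquad\text{and } E_A \hookrightarrow L^{\alpha+1}(M)
\text{ is compact iff } \alpha < 1+\tfrac{4}{(d-2)_+},
% which is exactly the exponent range in Corollary 1.2(i).
```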

Remark 2.2

  1. (a)

    The operator S plays the role of an auxiliary operator to cover the different examples from Sect. 3 in a unified framework. Typical choices are \(S:=I+A,\) \(S:=A\) or \(S:=I+A^{1/\beta }\) for some \(\beta >0.\)

  2. (b)

    If \(p_0=1,\) then it is proved in [6] that (2.3) is equivalent to the usual upper Gaussian estimate, i.e. for all \(t>0\) there is a measurable function \(p(t,\cdot ,\cdot ){:}\,M\times M\rightarrow \mathbb {R}\) with

    $$\begin{aligned} (e^{-tS}f)(x)= \int _M p(t,x,y) f(y) \mu (\mathrm {d}y), \quad t> 0, \quad \text {for a.e. } x\in M \end{aligned}$$

    for all \(f\in H\) and

    $$\begin{aligned} \vert p(t,x,y)\vert \le \frac{C}{\mu (B(x,t^\frac{1}{m}))} \exp \left\{ -c \left( \frac{\rho (x,y)^m}{t}\right) ^{\frac{1}{m-1}}\right\} , \end{aligned}$$
    (2.4)

    for all \(t>0\) and almost all \((x,y)\in M\times M\) with constants \(c,C>0\) and \(m\ge 2.\)

  3. (c)

    The generalized Gaussian estimate (2.3) is used in the proof of Proposition 5.2, where spectral multiplier theorems for S in \(L^p(M)\) for \(p\in (p_0,p_0'),\) respectively a Mihlin \({\mathcal {M}}^\beta \) functional calculus of S for some \(\beta >0\) are employed. The Mihlin functional calculus is defined and studied in [32, 34]. For additional information about spectral multiplier theorems for operators with generalized Gaussian estimates, we refer to [33, 55]. Note that spectral multiplier results with different assumptions are also sufficient for our analysis below, see e.g. [20], where a result for the Laplace–Beltrami operator on a compact Riemannian manifold is explicitly stated without mentioning the doubling property in this particular case.
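To see the shape of the kernel bound (2.4) from part (b) at work, consider the simplest possible example (ours, purely illustrative and outside the finite-measure framework of this paper): for \(S=-\frac{\mathrm {d}^2}{\mathrm {d}x^2}\) on \(\mathbb {R}\), the heat kernel is \(p(t,x,y)=(4\pi t)^{-1/2}e^{-\vert x-y\vert ^2/(4t)}\), the ball measure is \(\mu (B(x,t^{1/2}))=2t^{1/2}\) and \(m=2\), so (2.4) holds with \(C=2(4\pi )^{-1/2}\) and \(c=\frac{1}{4}\), in fact with equality.

```python
import numpy as np

# Sanity check of the shape of the Gaussian bound (2.4) in the simplest
# setting (illustrative choice, not the paper's framework): the heat kernel
#   p(t, x, y) = (4 pi t)^(-1/2) exp(-|x - y|^2 / (4 t))
# on the real line, with mu(B(x, t^(1/2))) = 2 t^(1/2) and order m = 2.
rng = np.random.default_rng(1)
C, c, m = 2 / np.sqrt(4 * np.pi), 0.25, 2

ok = True
for _ in range(1000):
    t = rng.uniform(0.01, 10.0)
    x, y = rng.uniform(-5.0, 5.0, size=2)
    p = (4 * np.pi * t) ** -0.5 * np.exp(-abs(x - y) ** 2 / (4 * t))
    ball = 2 * t ** (1 / m)                      # mu(B(x, t^(1/m)))
    bound = (C / ball) * np.exp(-c * (abs(x - y) ** m / t) ** (1 / (m - 1)))
    ok = ok and p <= bound * (1 + 1e-12)         # tiny slack for rounding
print(ok)  # -> True
```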

We start with some conclusions which can be deduced from Assumption 2.1.

Lemma 2.3

  1. (a)

    There is a non-negative selfadjoint operator \({\hat{A}}\) on \(E_A^*\) with \(\mathcal {D}({\hat{A}})=E_A\) and \({\hat{A}}=A\) on \(\mathcal {D}(A).\)

  2. (b)

    The embedding \({E_A}\hookrightarrow {H}\) is compact.

  3. (c)

    There is an orthonormal basis \(\left( h_n\right) _{n\in \mathbb {N}}\) of H and a nondecreasing sequence \(\left( \lambda _n\right) _{n\in \mathbb {N}}\) with \(\lambda _n>0\) and \(\lambda _n\rightarrow \infty \) as \(n\rightarrow \infty \) such that

    $$\begin{aligned} S x=\sum _{n=1}^\infty \lambda _n \big (x,h_n\big )_{H} h_n, \quad x\in \mathcal {D}(S)=\left\{ x\in H{:}\,\sum _{n=1}^\infty \lambda _n^2 \vert \big (x,h_n\big )_{H}\vert ^2<\infty \right\} . \end{aligned}$$

Proof

(ad a) The operator \({\hat{A}}\) is defined by

$$\begin{aligned} \langle {\hat{A}}\varphi , \psi \rangle := \big (A^{\frac{1}{2}}\varphi ,A^{\frac{1}{2}}\psi \big )_{H}, \quad \varphi ,\psi \in E_A. \end{aligned}$$

The estimate

$$\begin{aligned} \vert \langle {\hat{A}}\varphi , \psi \rangle \vert \le \Vert A^{\frac{1}{2}}\varphi \Vert _H \Vert A^{\frac{1}{2}}\psi \Vert _H\le \Vert \varphi \Vert _{E_A} \Vert \psi \Vert _{E_A} \end{aligned}$$

shows that \({\hat{A}}\) is well-defined and a bounded operator from \(E_A\) to \(E_A^*\) with \(\Vert {\hat{A}} \Vert \le 1.\) Moreover, one can apply the Lax–Milgram-Theorem to see that \(I+{\hat{A}}\) is a surjective isometry from \({E_A}\) to \({E_A^*}.\) If one equips \({E_A^*}\) with the inner product

$$\begin{aligned} \big (f^*,g^*\big )_{{E_A^*}}:=\big ((I+{\hat{A}})^{-1}f^*,(I+{\hat{A}})^{-1}g^*\big )_{E_A}, \qquad f^*,g^*\in {E_A^*}, \end{aligned}$$

one can show the symmetry of \({\hat{A}}\) as an unbounded operator in \({E_A^*}.\) Hence, \({\hat{A}}\) is selfadjoint, because \(-1\in \rho ({\hat{A}}).\)

(ad b) The embedding \({E_A}\hookrightarrow {L^{\alpha +1}(M)}\) is compact by Assumption 2.1(iv) and \({L^{\alpha +1}(M)}\hookrightarrow {H}\) is continuous due to \(\mu (M)<\infty .\) Hence, \({E_A}\hookrightarrow H\) is compact. (ad c) Immediate consequence of the spectral theorem, since S has a compact resolvent. \(\square \)

In most cases where this does not cause ambiguity or confusion, we also use the notation A for \({\hat{A}}.\) We continue with the assumptions on the nonlinear part of our problem.

Assumption 2.4

Let \(\alpha \in (1,p_0'-1)\) be chosen as in Assumption 2.1. Then, we assume the following:

  1. (i)

    Let \(F{:}\,{L^{\alpha +1}(M)}\rightarrow {L^{\frac{\alpha +1}{\alpha }}(M)}\) be a function satisfying the following estimate

    $$\begin{aligned} \Vert F(u) \Vert _{L^{\frac{\alpha +1}{\alpha }}(M)}\lesssim \Vert u \Vert _{L^{\alpha +1}(M)}^\alpha ,\quad u\in {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.5)

    Note that this leads to \(F{:}\,{E_A}\rightarrow {E_A^*}\) by Assumption 2.1(iv), because \({E_A}\hookrightarrow {L^{\alpha +1}(M)}\) implies \(({L^{\alpha +1}(M)})^*={L^{\frac{\alpha +1}{\alpha }}(M)}\hookrightarrow {E_A^*}.\) We further assume \(F(0)=0\) and

    $$\begin{aligned} {\text {Re}}\langle \mathrm {i}u, F(u) \rangle =0, \quad u\in {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.6)
  2. (ii)

    The map \(F{:}\,{L^{\alpha +1}(M)}\rightarrow {L^{\frac{\alpha +1}{\alpha }}(M)}\) is continuously real Fréchet differentiable with

    $$\begin{aligned} \Vert F'[u]\Vert _{L^{\alpha +1}\rightarrow L^\frac{\alpha +1}{\alpha }} \lesssim \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha -1}, \quad u\in {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.7)
  3. (iii)

    The map F has a real antiderivative \({\hat{F}},\) i.e. there exists a Fréchet-differentiable map \({\hat{F}}{:}\,{L^{\alpha +1}(M)}\rightarrow \mathbb {R}\) with

    $$\begin{aligned} {\hat{F}}'[u]h={\text {Re}}\langle F(u), h \rangle ,\quad u,h\in {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.8)

By Assumption 2.4(ii) and the mean value theorem for Fréchet differentiable maps, we get

$$\begin{aligned} \Vert F(x)-F(y) \Vert _{{L^{\frac{\alpha +1}{\alpha }}(M)}}&\le \sup _{t\in [0,1]}\Vert F'[tx+(1-t)y] \Vert \Vert x-y \Vert _{{L^{\alpha +1}(M)}}\nonumber \\&\lesssim \left( \Vert x \Vert _{L^{\alpha +1}(M)}+\Vert y \Vert _{L^{\alpha +1}(M)}\right) ^{\alpha -1} \Vert x-y \Vert _{L^{\alpha +1}(M)}, \nonumber \\&\qquad x,y\in {L^{\alpha +1}(M)}, \end{aligned}$$
(2.9)

which means that the nonlinearity is Lipschitz on bounded sets of \({L^{\alpha +1}(M)}.\)

We will cover the following two standard types of nonlinearities.

Definition 2.5

Let F satisfy Assumption 2.4. Then, F is called defocusing, if \({\hat{F}}(u)\ge 0\) and focusing, if \({\hat{F}}(u)\le 0\) for all \(u\in {L^{\alpha +1}(M)}.\)

Assumption 2.6

We assume either (i) or (i\(^{\prime }\)):

  1. (i)

    Let F be defocusing and satisfy

    $$\begin{aligned} \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim {\hat{F}}(u), \quad u\in {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.10)
  2. (i’)

    Let F be focusing and satisfy

    $$\begin{aligned} -{\hat{F}}(u)\lesssim \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1}, \quad u\in {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.11)

    and there is \(\theta \in (0,\frac{2}{\alpha +1})\) with

    $$\begin{aligned} \left( {H},{E_A}\right) _{\theta ,1}\hookrightarrow {L^{\alpha +1}(M)}. \end{aligned}$$
    (2.12)

Here \(\left( \cdot ,\cdot \right) _{\theta ,1}\) denotes the real interpolation space and we remark that by [54, Lemma 1.10.1], (2.12) is equivalent to

$$\begin{aligned} \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim \Vert u \Vert _H^{\beta _1} \Vert u \Vert _{E_A}^{\beta _2},\quad u\in {E_A}, \end{aligned}$$
(2.13)

for some \(\beta _1>0\) and \(\beta _2 \in (0,2)\) with \(\alpha +1= \beta _1+\beta _2.\) Let us continue with the definitions and assumptions for the stochastic part.
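(For orientation: in the model case \(E_A=H^1(M)\) on a d-dimensional manifold, (2.13) is the Gagliardo–Nirenberg inequality; the exponents below are our illustration, and the condition \(\beta _2<2\) recovers the focusing exponent range of Corollary 1.2(ii).)

```latex
% Model case E_A = H^1(M), M d-dimensional (illustration): (2.13) reads
\|u\|_{L^{\alpha+1}}^{\alpha+1}
  \lesssim \|u\|_{L^2}^{\beta_1}\,\|u\|_{H^1}^{\beta_2},
\qquad
\beta_2 = \frac{d(\alpha-1)}{2},\quad \beta_1 = \alpha+1-\beta_2,
% and beta_2 < 2 holds if and only if alpha < 1 + 4/d.
```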

Assumption 2.7

We assume the following:

  1. (i)

    Let \((\Omega ,\mathcal {F},\mathbb {P})\) be a probability space, Y a separable real Hilbert space with ONB \((f_m)_{m\in \mathbb {N}}\) and W a Y-canonical cylindrical Wiener process adapted to a filtration \(\mathbb {F}\) satisfying the usual conditions.

  2. (ii)

    Let \(B{:}\,{H} \rightarrow {\text {HS}}(Y,{H})\) be a linear operator and set \(B_m u:=B(u)f_m\) for \(u\in {H}\) and \(m\in \mathbb {N}.\) Additionally, we assume that \(B_m\in {\mathcal {L}}(H)\) is selfadjoint for every \(m\in \mathbb {N}\) with

    $$\begin{aligned} \sum _{m=1}^{\infty }\Vert B_m \Vert _{{\mathcal {L}}({H})}^2<\infty \end{aligned}$$
    (2.14)

    and assume \(B_m\in {\mathcal {L}}({E_A})\) and \(B_m\in {\mathcal {L}}({L^{\alpha +1}(M)})\) for \(m\in \mathbb {N}\) and \(\alpha \in (1,p_0'-1)\) as in Assumption and Notation 2.1 with

    $$\begin{aligned} \sum _{m=1}^{\infty }\Vert B_m \Vert _{{\mathcal {L}}({E_A})}^2<\infty ,\quad \sum _{m=1}^{\infty }\Vert B_m \Vert _{{\mathcal {L}}(L^{\alpha +1})}^2<\infty . \end{aligned}$$
    (2.15)

For the special case in which the operators \(B_m\) are pointwise multiplication operators, see Sect. 3.5 below.

Remark 2.8

The estimates (2.14) and (2.15) imply

$$\begin{aligned} B\in {\mathcal {L}}({H},{\text {HS}}(Y,{H})),\quad B\in {\mathcal {L}}({E_A},{\text {HS}}(Y,{E_A})),\quad B\in {\mathcal {L}}({L^{\alpha +1}(M)},\gamma (Y,{L^{\alpha +1}(M)})), \end{aligned}$$

where \(\gamma (Y,{L^{\alpha +1}(M)})\) denotes the spaces of \(\gamma \)-radonifying operators from Y to \({L^{\alpha +1}(M)}.\)

Finally, we have sufficient background to formulate the problem which we want to solve. We investigate the following stochastic evolution equation in the Stratonovich form

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( -\mathrm {i}A u(t)-\mathrm {i}F(u(t))\right) \mathrm {d}t-\mathrm {i}B u(t) \circ \mathrm {d}W(t),\quad t\in (0,T),\\ u(0)&=u_0, \end{aligned}\right. \end{aligned}$$
(2.16)

where the stochastic differential is defined by

$$\begin{aligned} -\mathrm {i}B u(t) \circ \mathrm {d}W(t)=-\mathrm {i}B u(t) \mathrm {d}W(t)+\frac{1}{2} {\text {tr}}_Y\left( {\mathcal {M}}(u(t))\right) \mathrm {d}t, \end{aligned}$$
(2.17)

with the bilinear form \({\mathcal {M}}(u)\) on \(Y\times Y\) defined by

$$\begin{aligned} {\mathcal {M}}(u)(y_1,y_2):= -\mathrm {i}B'[u](-\mathrm {i}B(u)y_1)y_2, \quad u\in {H},\quad y_1,y_2\in Y. \end{aligned}$$

For the purpose of giving a rigorous definition of a solution to problem (2.16), it is useful to rewrite the equation in the Itô form. Therefore, we first compute

$$\begin{aligned} {\text {tr}}_Y\left( {\mathcal {M}}(u)\right)&=\sum _{m=1}^{\infty }-\mathrm {i}B'[u](-\mathrm {i}B(u)f_m)f_m= -\sum _{m=1}^{\infty }B\left( B(u)f_m\right) f_m\\&= -\sum _{m=1}^{\infty }B\left( B_m u\right) f_m=-\sum _{m=1}^{\infty }B_m^2 u. \end{aligned}$$

Hence, Eq. (2.16) will be understood in the following Itô form

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( -\mathrm {i}A u(t)-\mathrm {i}F(u(t))+ \mu \left( u(t)\right) \right) \mathrm {d}t-\mathrm {i}B u(t) \mathrm {d}W(t),\quad t\in (0,T),\\ u(0)&=u_0, \end{aligned}\right. \end{aligned}$$
(2.18)

where the linear operator \(\mu ,\) defined by

$$\begin{aligned} \mu (u) := -\frac{1}{2} \sum _{m=1}^{\infty }B_m^2 u,\qquad u\in {H}, \end{aligned}$$

is the Stratonovich correction term.
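The role of the correction term \(\mu \) can be seen by formally applying Itô's formula to \(\Vert u\Vert _H^2\) in (2.18) (a standard computation, sketched here for the reader):

```latex
% Formal sketch: Ito's formula for the mass, using (2.6), the
% selfadjointness of A and of each B_m, and the definition of mu:
\mathrm{d}\|u\|_H^2
  = 2\,\mathrm{Re}\,\bigl(u,\,-\mathrm{i}Au-\mathrm{i}F(u)+\mu(u)\bigr)_H\,\mathrm{d}t
    + 2\sum_{m=1}^\infty \mathrm{Re}\,\bigl(u,\,-\mathrm{i}B_m u\bigr)_H\,\mathrm{d}\beta_m
    + \sum_{m=1}^\infty \|B_m u\|_H^2\,\mathrm{d}t .
% The first two drift terms vanish since A is selfadjoint and by (2.6);
% the martingale part vanishes since each B_m is selfadjoint; and
% 2 Re (u, mu(u))_H = -sum_m ||B_m u||_H^2 cancels the last term, so
% d||u||_H^2 = 0, i.e. the mass is conserved pathwise.
```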

Most of our paper will be concerned with the construction of a martingale solution.

Definition 2.9

Let \(T>0\) and \(u_0\in E_A.\) A martingale solution of the Eq. (1.1) is a system \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u\right) \) consisting of

  • a probability space \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}}\right) ;\)

  • a Y-valued cylindrical Wiener process \({\tilde{W}}\) on \({\tilde{\Omega }};\)

  • a filtration \({\tilde{\mathbb {F}}}=\left( {\tilde{\mathcal {F}}}_t\right) _{t\in [0,T]}\) with the usual conditions;

  • a continuous, \({\tilde{\mathbb {F}}}\)-adapted, \({E_A^*}\)-valued process u such that \(u\in L^2({\tilde{\Omega }}\times [0,T],{E_A^*})\) and almost all paths are in \(C_w([0,T],{E_A})\),

such that the equality

$$\begin{aligned} u(t)= u_0+ \int _0^t \left[ -\mathrm {i}A u(s)-\mathrm {i}F(u(s))+\mu (u(s))\right] \mathrm {d}s- \mathrm {i}\int _0^t B u(s) \mathrm {d}{\tilde{W}}(s) \end{aligned}$$
(2.19)

holds almost surely in \({E_A^*}\) for all \(t\in [0,T].\)

3 Examples

In this section, we consider concrete situations and verify that they are covered by the general framework presented in the last section.

3.1 The model nonlinearities

The class of the general nonlinearities from the Assumptions 2.4 and 2.6 covers the standard focusing and defocusing power nonlinearity.

Proposition 3.1

Let \(\alpha \in (1,\infty )\) be chosen as in Assumption 2.1 and define

$$\begin{aligned} F_{\alpha }^\pm (u):=\pm \vert u\vert ^{\alpha -1}u, \qquad {\hat{F}}_\alpha ^\pm (u):=\pm \frac{1}{\alpha +1} \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1},\qquad u\in {L^{\alpha +1}(M)}. \end{aligned}$$

Then, \(F_{\alpha }^\pm \) satisfies Assumption 2.4 with antiderivative \({\hat{F}}_\alpha ^\pm .\)

Proof

Obviously, \(F_{\alpha }^\pm {:}\,{L^{\alpha +1}(M)}\rightarrow {L^{\frac{\alpha +1}{\alpha }}(M)}\) due to

$$\begin{aligned} \Vert F_{\alpha }^\pm (u) \Vert _{L^{\frac{\alpha +1}{\alpha }}(M)}=\Vert u \Vert _{L^{\alpha +1}(M)}^\alpha ,\qquad u\in {L^{\alpha +1}(M)}. \end{aligned}$$

Furthermore,

$$\begin{aligned} {\text {Re}}\langle \mathrm {i}v, F_{\alpha }^\pm (v) \rangle =\pm {\text {Re}}\int _M \mathrm {i}v \vert v\vert ^{\alpha -1}{\overline{v}}\mathrm {d}\mu =\pm {\text {Re}}\left[ \mathrm {i}\Vert v \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\right] =0. \end{aligned}$$

We can apply Lemma 3.2 below with \(p=\alpha +1\) and

$$\begin{aligned} \varPhi (a,b)=\left( a^2+b^2\right) ^\frac{\alpha -1}{2} \left( \begin{array}{l} a \\ b \end{array}\right) ,\qquad a,b\in \mathbb {R}, \end{aligned}$$

to obtain part (ii) and (iii) of Assumption 2.4. \(\square \)
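The two structural identities used in this proof are easy to check numerically for random data (an illustrative sketch of ours, with F evaluated pointwise on a discrete measure space with unit point masses):

```python
import numpy as np

# Illustrative check of Proposition 3.1 for F(u) = |u|^(alpha-1) u:
# equality in (2.5), i.e. ||F(u)||_{(alpha+1)/alpha} = ||u||_{alpha+1}^alpha,
# and the cancellation (2.6), i.e. Re <i u, F(u)> = 0.
rng = np.random.default_rng(2)
alpha = 3.0
u = rng.normal(size=100) + 1j * rng.normal(size=100)  # random complex data

F = np.abs(u) ** (alpha - 1) * u

norm_u = np.sum(np.abs(u) ** (alpha + 1)) ** (1 / (alpha + 1))
norm_F = np.sum(np.abs(F) ** ((alpha + 1) / alpha)) ** (alpha / (alpha + 1))
pairing = np.real(np.sum(1j * u * np.conj(F)))  # Re <i u, F(u)>

print(norm_F, norm_u ** alpha)   # equal up to rounding
print(pairing)                   # vanishes up to rounding
```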

The next lemma contains the differentiability properties of the nonlinearity. For a proof, we refer to the lecture notes [26, Lemma 9.1 and Lemma 9.2].

Lemma 3.2

Let \(\left( S,{\mathcal {A}},\mu \right) \) be a measure space and \(\alpha >1.\)

  1. (a)

    Let \(p>1.\) Then, the map \(G_1{:}\,L^p(S)\rightarrow \mathbb {R}\) defined by \(G_1(u):= \Vert u \Vert _{L^p(S)}^p\) is continuously Fréchet differentiable and for all \(u,h\in L^p(S),\) we have

    $$\begin{aligned} G_1'[u]h={\text {Re}}\int _S \vert u\vert ^{p-1}u {\overline{h}} \mathrm {d}\mu . \end{aligned}$$
  2. (b)

    Let \(p>\alpha \) and \(\varPhi =(\varPhi _1,\varPhi _2)\in C^1(\mathbb {R}^2,\mathbb {R}^2).\) Assume that there is \(C>0\) with

    $$\begin{aligned} \vert \varPhi (a,b) \vert \le C \left( a^2+b^2\right) ^\frac{\alpha }{2}, \qquad \vert \varPhi '(a,b) \vert \le C \left( a^2+b^2\right) ^\frac{\alpha -1}{2}, \qquad a,b\in \mathbb {R}. \end{aligned}$$

    Then, the map

    $$\begin{aligned} G{:}\,L^p(S) \rightarrow L^{\frac{p}{\alpha }}(S), \quad G(u):=\varPhi _1({\text {Re}}u,{\text {Im}}u)+\mathrm {i}\varPhi _2({\text {Re}}u,{\text {Im}}u) \end{aligned}$$

    is continuously Fréchet differentiable and for \(u,h\in L^p(S),\) we have

    $$\begin{aligned} G'[u]h=\nabla \varPhi _1({\text {Re}}u,{\text {Im}}u)\cdot \!\left( \begin{array}{l} {\text {Re}}h \\ {\text {Im}}h \end{array}\!\right) +\mathrm {i}\nabla \varPhi _2({\text {Re}}u,{\text {Im}}u)\cdot \!\left( \begin{array}{l} {\text {Re}}h \\ {\text {Im}}h \end{array}\!\right) \end{aligned}$$

    and

    $$\begin{aligned} \Vert G'[u]\Vert _{L^p \rightarrow L^{\frac{p}{\alpha }}}\le C \Vert u \Vert _{L^p}^{\alpha -1}. \end{aligned}$$
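For illustration (our sketch, not taken from [26]), part (a) can be tested on a finite set with counting measure, where the Fréchet derivative of \(G_1(u)=\Vert u \Vert _{L^p}^p\) becomes the finite sum \(p\,{\text {Re}}\sum _k \vert u_k\vert ^{p-2}u_k\overline{h_k}\).

```python
import random

p = 3.5  # illustrative exponent p > 1

def G1(u):
    # G_1(u) = sum_k |u_k|^p  (counting measure on finitely many points)
    return sum(abs(z) ** p for z in u)

def dG1(u, h):
    # Frechet derivative: p * Re sum_k |u_k|^{p-2} u_k * conj(h_k)
    return p * sum((abs(z) ** (p - 2) * z * w.conjugate()).real
                   for z, w in zip(u, h))

random.seed(1)
u = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(5)]
h = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]
t = 1e-6
fd = (G1([z + t * w for z, w in zip(u, h)])
      - G1([z - t * w for z, w in zip(u, h)])) / (2 * t)
# the central difference quotient matches the claimed derivative
assert abs(fd - dG1(u, h)) < 1e-6 * (1 + abs(dG1(u, h)))
```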

3.2 The Laplace–Beltrami operator on compact manifolds

In this subsection, we deduce Corollary 1.2 from Theorem 1.1. Let (Mg) be a compact d-dimensional Riemannian manifold without boundary and \(A:=-\Delta _g\) be the Laplace–Beltrami operator on M.

Proof of Corollary 1.2

Step 1. Let \(X=M\), \(\rho \) be the geodesic distance and \(\mu \) be the canonical volume measure on X. From [16, Section 4, p. 329], we obtain the local doubling property of X, i.e. there is \(C_1>0\) such that for all \(x\in X\) and \(r\in (0,1)\) we have

$$\begin{aligned} \mu (B(x,2r))\le C_1 \mu (B(x,r)). \end{aligned}$$
(3.1)

Dominated convergence implies that the function \(f{:}\,X\times [1,\max \{1,{\text {diam}}(M)\}]\rightarrow (0,\infty )\) defined by

$$\begin{aligned} f(x,r)=\mu (B(x,r)),\qquad x\in X,\quad r\in [1,\max \left\{ 1,{\text {diam}}(M)\right\} ], \end{aligned}$$

is continuous. Since \(X\times [1,\max \{1,{\text {diam}}(M)\}]\) is compact, we therefore obtain that

$$\begin{aligned} C_2:=\inf _{x\in X, r\in [1,\max \{1,{\text {diam}}(M)\}]}\mu (B(x,r))>0. \end{aligned}$$
(3.2)

In particular, this yields

$$\begin{aligned} \mu (B(x,2r))\le \frac{\mu (M)}{C_2} \mu (B(x,r)) \end{aligned}$$
(3.3)

for every \(x\in X\) and \(r\in [1,\max \{1,{\text {diam}}(M)\}]\). For \(x\in X\) and \(r>{\text {diam}}(M)\), we get

$$\begin{aligned} \mu (B(x,2r))=\mu (M)=\mu (B(x,r)). \end{aligned}$$
(3.4)

Combining (3.1), (3.3) and (3.4) implies the doubling property (2.1).
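For later reference, the three regimes can be merged into a single doubling constant; this merely restates Step 1 in one line, with the constants \(C_1\) and \(C_2\) defined above:

```latex
\mu(B(x,2r)) \;\le\; \max\left\{ C_1,\ \frac{\mu(M)}{C_2},\ 1 \right\} \mu(B(x,r)),
\qquad x \in X,\quad r > 0.
```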

Step 2 Let \(S:=I-\Delta _g.\) Then, S is selfadjoint, strictly positive and commutes with A. Moreover, S has a compact resolvent and \(\mathcal {D}(S^k)\hookrightarrow E_A\) holds for every \(k\in \mathbb {N}.\) Furthermore, S has upper Gaussian bounds by [25, Corollary 5.5 and Theorem 6.1], since these results imply

$$\begin{aligned} \vert p(t,x,y)\vert \le \frac{C}{t^{d/2}} e^{-t} \exp \left\{ -c \frac{\rho (x,y)^2}{t}\right\} , \qquad t>0,\quad (x,y)\in M\times M \end{aligned}$$

for the kernel p of the semigroup \(\left( e^{-tS}\right) _{t\ge 0}.\) This is sufficient for (2.4) since (2.2) implies

$$\begin{aligned} \frac{1}{t^{d/2}}\lesssim \frac{\mu (B(x,1))}{\mu (B(x,t^{1/2}))}\le \frac{\mu (M)}{\mu (B(x,t^{1/2}))},\qquad t>0. \end{aligned}$$

In particular, S has generalized Gaussian bounds with \(p_0=1\), see Remark 2.2. Next note that by Proposition B.2(a), the scale of Sobolev spaces on M is given by

$$\begin{aligned} H^s(M)=R\left( S^{-\frac{s}{2}}\right) =\mathcal {D}\left( S^{\frac{s}{2}}\right) =\mathcal {D}\left( (-\Delta _g)^{\frac{s}{2}}\right) , \quad s> 0, \end{aligned}$$

where the last identity can be deduced from the spectral theorem and \((1+\lambda )^s\eqsim _s 1+\lambda ^s.\) In particular, we have \(E_A=H^1(M).\) Let \(1<\alpha <1+\frac{4}{(d-2)_+}.\) Then, by Proposition B.2(c) and Lemma 2.3, the embeddings

$$\begin{aligned} E_A=H^1(M)\hookrightarrow H^{-1}(M)=E_A^*, \qquad E_A=H^1(M)\hookrightarrow {L^{\alpha +1}(M)}\end{aligned}$$

are compact. Hence, Assumption 2.1 holds with our choice of A and S.

Step 3 In view of Proposition 3.1, Assumption 2.4 holds. Next, we check Assumption 2.6. Obviously, \(F_\alpha ^+\) fulfills (i) for \( \alpha \in \left( 1,1+\frac{4}{(d-2)_+}\right) \). Let us consider \(F_\alpha ^-\) for \( \alpha \in \left( 1,1+\frac{4}{d}\right) .\)

Case 1 Let \(d\ge 3.\) Then, \(p_{\max }:=\frac{2d}{d-2}\) is the maximal exponent with \(H^1(M)\hookrightarrow L^{p_{\max }}(M).\) Since \(\alpha \in (1,p_{\max }-1),\) we can interpolate \({L^{\alpha +1}(M)}\) between H and \(L^{p_{\max }}(M)\) and get

$$\begin{aligned} \Vert u \Vert _{L^{\alpha +1}(M)}\le \Vert u \Vert _{L^2}^{1-\theta } \Vert u \Vert _{L^{p_{\max }}(M)}^\theta \lesssim \Vert u \Vert _{L^2}^{1-\theta } \Vert u \Vert _{H^1(M)}^\theta \end{aligned}$$

with \(\theta =\frac{d(\alpha -1)}{2(\alpha +1)}\in (0,1).\) The restriction \(\beta _2:=\theta (\alpha +1)<2\) from Assumption 2.6(i\(^{\prime }\)) is equivalent to \(\alpha <1+\frac{4}{d}.\)
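The exponent algebra in Case 1 can be verified with exact rational arithmetic; this is only a sanity check of the formulas above, not part of the proof (the helper `theta` is ours).

```python
from fractions import Fraction as F

def theta(d, alpha):
    # Case 1 interpolation parameter: theta = d(alpha-1) / (2(alpha+1))
    return d * (alpha - 1) / (2 * (alpha + 1))

for d in (3, 4, 5, 10):
    p_max = F(2 * d, d - 2)
    # sample exponents 1 < alpha < p_max - 1 (the compact-embedding range)
    for k in range(1, 20):
        alpha = 1 + k * (p_max - 2) / 20
        th = theta(d, alpha)
        assert 0 < th < 1
        # exact interpolation identity 1/(alpha+1) = (1-theta)/2 + theta/p_max
        assert (1 - th) / 2 + th / p_max == 1 / (alpha + 1)
        # beta_2 = theta*(alpha+1) < 2  is equivalent to  alpha < 1 + 4/d
        assert (th * (alpha + 1) < 2) == (alpha < 1 + F(4, d))
```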

Case 2 In the case \(d=2,\) Assumption 2.6(i\(^{\prime }\)) is guaranteed for \(\alpha \in (1,3).\) To see this, take \(p>\frac{4}{3-\alpha },\) which is equivalent to \(\theta (\alpha +1)<2\) when \(\theta \in (0,1)\) is chosen as

$$\begin{aligned} \theta =\frac{(\alpha -1)p}{(\alpha +1) (p-2)}. \end{aligned}$$

We have \(H^1(M)\hookrightarrow L^p(M)\) and as above, interpolation between H and \(L^p(M)\) yields

$$\begin{aligned} \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim \Vert u \Vert _{L^2}^{(\alpha +1)(1-\theta )} \Vert u \Vert _{{E_A}}^{(\alpha +1)\theta }. \end{aligned}$$

Case 3 Let \(d=1\) and fix \(\varepsilon \in (0,\frac{1}{2}).\) Proposition B.2 yields

$$\begin{aligned} H^{\frac{1}{2}+\varepsilon }(M)\hookrightarrow {L^\infty (M)},\qquad H^{\frac{1}{2}+\varepsilon }(M)=\left[ L^2(M),H^1(M)\right] _{\frac{1}{2}+\varepsilon }. \end{aligned}$$

Hence,

$$\begin{aligned} \Vert v \Vert _{L^{\alpha +1}}^{\alpha +1} \le \Vert v \Vert _{L^2}^2\Vert v \Vert _{L^\infty }^{\alpha -1} \lesssim \Vert v \Vert _{L^2}^2\Vert v \Vert _{H^{\frac{1}{2}+\varepsilon }}^{\alpha -1} \lesssim \Vert v \Vert _{L^2}^{2+(\frac{1}{2}-\varepsilon )(\alpha -1)} \Vert v \Vert _{H^1}^{(\frac{1}{2}+\varepsilon )(\alpha -1)}. \end{aligned}$$

The condition \((\frac{1}{2}+\varepsilon )(\alpha -1)<2\) is equivalent to \(\alpha <1+\frac{4}{1+2\varepsilon }.\) Choosing \(\varepsilon \) small enough, we see that Assumption 2.6(i\(^{\prime }\)) is true for \(\alpha \in (1,5).\)
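The analogous bookkeeping for Cases 2 and 3 can also be checked exactly (again a sanity check of ours; the helper names are not from the paper, and the interpolation itself additionally requires \(\theta \in (0,1)\)).

```python
from fractions import Fraction as F

def theta_2d(alpha, p):
    # Case 2 interpolation parameter for d = 2
    return (alpha - 1) * p / ((alpha + 1) * (p - 2))

def bound_1d(eps):
    # Case 3 upper bound 1 + 4/(1 + 2*eps) on alpha
    return 1 + F(4) / (1 + 2 * eps)

# Case 2: interpolation identity (algebraic, for any p > 2) and the
# equivalence theta*(alpha+1) < 2  <=>  p > 4/(3 - alpha) for 1 < alpha < 3
for alpha in (F(3, 2), F(2), F(5, 2)):
    for p in (F(3), F(5), F(20)):
        th = theta_2d(alpha, p)
        assert (1 - th) / 2 + th / p == 1 / (alpha + 1)
        assert (th * (alpha + 1) < 2) == (p > 4 / (3 - alpha))

# Case 3: the equivalence and the fact that the bound increases to 5
for eps in (F(1, 4), F(1, 10), F(1, 100)):
    for alpha in (F(2), F(4), F(49, 10)):
        assert ((F(1, 2) + eps) * (alpha - 1) < 2) == (alpha < bound_1d(eps))
assert bound_1d(F(1, 10)) < bound_1d(F(1, 100)) < 5
assert bound_1d(F(1, 10**6)) > F(499, 100)
```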

Step 4 Steps 1–3 and Theorem 1.1 complete the proof of Corollary 1.2. \(\square \)

Remark 3.3

Note that the three-dimensional case with a cubic defocusing nonlinearity, i.e.

$$\begin{aligned} d=\alpha =3,\quad F(u)=F_3^+(u)=\vert u\vert ^2 u \end{aligned}$$

is admissible in our framework. In the deterministic setting, i.e. \(B=0,\) a global unique weak solution to this problem in \(H^1(M)\) was constructed in [4, Theorem 3]. Uniqueness in the stochastic case will be proved in a forthcoming paper. In [8], the authors considered the stochastic problem, but only obtained global solutions in the 2-dimensional case.

3.3 Laplacians on bounded domains

We can apply Theorem 1.1 to the stochastic NLSE on bounded domains.

Corollary 3.4

Let \(M\subset {\mathbb {R}^d}\) be a bounded domain and \(\Delta \) be the Laplacian with Dirichlet or Neumann boundary conditions. In the Neumann case, we assume that \(\partial M\) is Lipschitz. Under Assumption 2.7 and either (i) or (ii)

  1. (i)

    \(F(u)= \vert u\vert ^{\alpha -1}u\) with \( \alpha \in \left( 1,1+\frac{4}{(d-2)_+}\right) \),

  2. (ii)

    \(F(u)= -\vert u\vert ^{\alpha -1}u\) with \( \alpha \in \left( 1,1+\frac{4}{d}\right) ,\)

the equation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( \mathrm {i}\Delta u(t)-\mathrm {i}F(u(t))\right) dt-\mathrm {i}B u(t) \circ \mathrm {d}W(t)\quad \text {in} \quad H^1(M),\\ u(0)&=u_0\in H^1(M), \end{aligned}\right. \end{aligned}$$
(3.5)

has a martingale solution which satisfies

$$\begin{aligned} u\in L^q({\tilde{\Omega }},L^\infty (0,T;H^1(M))) \end{aligned}$$

for all \(q\in [1,\infty ).\)

We remark that one could also treat uniformly elliptic operators and more general boundary conditions, but for the sake of simplicity, we concentrate on the present two examples.

Proof

In the setting of Sect. 2, we choose \(X={\mathbb {R}^d},\) so the doubling property is fulfilled. We consider the Dirichlet form \(a_V{:}\,V \times V \rightarrow \mathbb {C}\),

$$\begin{aligned} a_V(u,v)=\int _M \nabla u \cdot \nabla {\overline{v}} \,\mathrm {d}x, \quad u,v\in V, \end{aligned}$$

with associated operator \(\left( A_V,\mathcal {D}(A_V)\right) \) in the following two situations:

  1. (i)

    \(V=H^1_0(M)\)

  2. (ii)

    \(V=H^1(M)\) and M has a Lipschitz boundary.

The operator \(A_{H^1_0(M)}=-\Delta _D\) is the negative Dirichlet Laplacian and \(A_{H^1(M)}=-\Delta _N\) is the negative Neumann Laplacian. In both cases, \(V=E_{A_V}\) by the square root property (see [46, Theorem 8.1]) and the embedding \(E_{A_V}\hookrightarrow {L^{\alpha +1}(M)}\) is compact if and only if \(1<\alpha <p_{\max }-1\) with \(p_{\max }:=2+\frac{4}{(d-2)_+}.\) Hence, we obtain the same range of admissible powers \(\alpha \) for the focusing and the defocusing nonlinearity as in the case of a Riemannian manifold without boundary.

In the Dirichlet case, we choose \(S:=A=-\Delta _D,\) which is a strictly positive operator, and [46, Theorem 6.10] yields the Gaussian estimate for the associated semigroup. Hence, we can directly apply Theorem 1.1 to construct a martingale solution of problem (3.5).

In the Neumann case, we have \(0\in \sigma (-\Delta _N)\) and the kernel of the semigroup \(\left( e^{t\Delta _N}\right) _{t\ge 0}\) only satisfies the estimate

$$\begin{aligned} \vert p(t,x,y)\vert \le \frac{C_\varepsilon }{\mu (B(x,t^\frac{1}{m}))} e^{\varepsilon t}\exp \left\{ -c \left( \frac{\rho (x,y)^m}{t}\right) ^{\frac{1}{m-1}}\right\} \end{aligned}$$

for all \(t>0\) and almost all \((x,y)\in M\times M\) with an arbitrary \(\varepsilon >0,\) see [46, Theorem 6.10]. In order to get a strictly positive operator with the Gaussian bound from Remark 2.2, we fix \(\varepsilon >0\) and choose \(S:=\varepsilon I -\Delta _N.\) Finally, the computation of the admissible range of exponents \(\alpha \) in the focusing case is similar to the third step of the proof of Corollary 1.2. \(\square \)

3.4 The fractional NLSE

In this subsection, we show how the range of admissible nonlinearities changes when the Laplacians in the previous examples are replaced by their fractional powers \(\left( -\Delta \right) ^\beta \) for \(\beta >0.\) As an example, we treat the case of a compact Riemannian manifold without boundary; similar results hold for the Dirichlet and the Neumann Laplacian on a bounded domain. Let us point out that there is a vast literature on the fractional NLSE, apparently starting with the paper [36] by Laskin.

In the setting of Sect. 3.2, we look at the fractional Laplace–Beltrami operator given by \(A:=\left( -\Delta _g\right) ^\beta \) for \(\beta >0,\) which is also a selfadjoint, non-negative operator by the functional calculus, and once again, we choose \(S:=I-\Delta _g.\) We apply Theorem 1.1 with

$$\begin{aligned} {E_A}=\mathcal {D}(A^{\frac{1}{2}})=\mathcal {D}\left( \left( I-\Delta _g\right) ^\frac{\beta }{2}\right) =H^\beta (M), \end{aligned}$$

see Proposition B.2(a). Note that \(\mathcal {D}(S^k)\hookrightarrow E_A\) holds for every \(k\in \mathbb {N}\) with \(k\ge \frac{\beta }{2}.\) The range of admissible pairs \((\alpha ,\beta )\) in the defocusing case is given by

$$\begin{aligned} \beta >\frac{d}{2}-\frac{d}{\alpha +1} \quad \Leftrightarrow \quad \alpha \in \left( 1,1+\frac{4\beta }{(d-2\beta )_+}\right) , \end{aligned}$$

since this is exactly the range of \(\alpha \) and \(\beta \) for which the embedding \(E_A \hookrightarrow {L^{\alpha +1}(M)}\) is compact [see Proposition B.2(c)]. In the focusing case, calculations analogous to those in the third step of the proof of Corollary 1.2 (distinguishing the cases \(\beta > \frac{d}{2},\) \(\beta =\frac{d}{2}\) and \(\beta <\frac{d}{2}\)) imply that the range of exponents reduces to

$$\begin{aligned} \alpha \in \left( 1,1+\frac{4\beta }{d}\right) . \end{aligned}$$
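The stated equivalence between \(\beta >\frac{d}{2}-\frac{d}{\alpha +1}\) and \(\alpha <1+\frac{4\beta }{(d-2\beta )_+}\) (in the case \(\beta <\frac{d}{2}\)) can be checked with exact rational arithmetic; a sanity check of ours, with helper functions that are not from the paper.

```python
from fractions import Fraction as F

def subcritical(alpha, beta, d):
    # embedding condition beta > d/2 - d/(alpha+1)
    return beta > F(d, 2) - F(d) / (alpha + 1)

def upper_bound(beta, d):
    # claimed equivalent form 1 + 4*beta/(d - 2*beta), valid for beta < d/2
    return 1 + 4 * beta / (d - 2 * beta)

for d in (1, 2, 3, 4):
    for beta in (F(1, 4), F(1, 2), F(3, 4), F(5, 4)):
        if beta >= F(d, 2):
            continue  # for beta >= d/2 there is no upper restriction on alpha
        for alpha in (F(11, 10), F(3, 2), F(2), F(3), F(5), F(9)):
            assert subcritical(alpha, beta, d) == (alpha < upper_bound(beta, d))
```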

Hence, we get the following Corollary.

Corollary 3.5

Let (Mg) be a compact d-dimensional Riemannian manifold without boundary, \(\beta >0\) and \(u_0\in H^\beta (M).\) Under Assumption 2.7 and either (i) or (ii)

  1. (i)

    \(F(u)= \vert u\vert ^{\alpha -1}u\) with \( \alpha \in \left( 1,1+\frac{4\beta }{(d-2\beta )_+}\right) \),

  2. (ii)

    \(F(u)= -\vert u\vert ^{\alpha -1}u\) with \( \alpha \in \left( 1,1+\frac{4\beta }{d}\right) ,\)

the equation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u(t)&= \left( -\mathrm {i}\left( -\Delta _g\right) ^\beta u(t)-\mathrm {i}F(u(t))\right) dt-\mathrm {i}B u(t) \circ \mathrm {d}W(t),\quad t>0,\\ u(0)&=u_0\in H^\beta (M), \end{aligned}\right. \end{aligned}$$
(3.6)

has a martingale solution \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u\right) \) in \( H^\beta (M)\) with

$$\begin{aligned} u\in L^q\left( {\tilde{\Omega }},L^\infty (0,T;H^\beta (M))\right) \end{aligned}$$
(3.7)

for all \(q\in [1,\infty ).\)

3.5 The model noise

In Corollaries 1.2 and 3.4, we considered the general linear noise from Assumption 2.7. Let M be either a compact Riemannian manifold or a bounded domain and let us consider the following example. Let \(\left( B_m\right) _{m\in \mathbb {N}}\) be the multiplication operators given by

$$\begin{aligned} B_m u:= e_m u \end{aligned}$$

for \(u\in {H}\) with real-valued functions \(e_m,\) \(m\in \mathbb {N},\) that satisfy

$$\begin{aligned} e_m \in F:={\left\{ \begin{array}{ll} H^{1,d}(M) \cap {L^\infty (M)}, &{}\quad d\ge 3,\\ H^{1,q}(M),&{}\quad d=2,\\ H^{1}(M),&{}\quad d=1,\\ \end{array}\right. } \end{aligned}$$
(3.8)

for some \(q>2\) in the case \(d=2.\) Moreover, we assume

$$\begin{aligned} \sum _{m=1}^{\infty }\Vert e_m \Vert _{F}^2<\infty . \end{aligned}$$

We get

$$\begin{aligned} \Vert e_m u \Vert _{L^p}\le \Vert e_m \Vert _{L^\infty (M)}\Vert u \Vert _{L^p},\qquad u\in L^p(M), \end{aligned}$$

for \(p\in [1,\infty ].\) First, let \(d\ge 3.\) The Sobolev embedding \(H^1(M) \hookrightarrow L^{p_{\max }}(M)\) for \(p_{\max }=\frac{2d}{d-2}\) and the Hölder inequality with \(\frac{1}{2}=\frac{1}{d}+\frac{1}{p_{\max }}\) yield

$$\begin{aligned} \Vert \nabla \left( e_m u\right) \Vert _{L^2}&\le \Vert u \nabla e_m \Vert _{L^2}+ \Vert e_m \nabla u \Vert _{L^2} \le \Vert \nabla e_m \Vert _{L^d} \Vert u \Vert _{L^{p_{\max }}}+\Vert e_m \Vert _{L^\infty (M)}\Vert \nabla u \Vert _{L^2}\\&\lesssim \left( \Vert \nabla e_m \Vert _{L^d}+\Vert e_m \Vert _{L^\infty (M)}\right) \Vert u \Vert _{H^1},\qquad u\in H^1(M). \end{aligned}$$

Now, let \(d=2\) and \(q>2\) as in (3.8). Then, we have \(F \hookrightarrow {L^\infty (M)}.\) Furthermore, we choose \(p>2\) according to \(\frac{1}{2}=\frac{1}{q}+\frac{1}{p}\) and observe \(H^1(M)\hookrightarrow L^p(M).\) As above, we obtain

$$\begin{aligned} \Vert \nabla \left( e_m u\right) \Vert _{L^2}&\lesssim \left( \Vert \nabla e_m \Vert _{L^q}+\Vert e_m \Vert _{L^\infty (M)}\right) \Vert u \Vert _{H^1}\lesssim \Vert e_m \Vert _{H^{1,q}}\Vert u \Vert _{H^1},\qquad u\in H^1(M). \end{aligned}$$
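The Hölder exponent bookkeeping used in the two cases above can be verified exactly (a sanity check of ours, not part of the argument; the helper `p_max` is not from the paper).

```python
from fractions import Fraction as F

def p_max(d):
    # Sobolev exponent with H^1(M) -> L^{p_max}(M) for d >= 3
    return F(2 * d, d - 2)

# d >= 3: the Hoelder pairing 1/2 = 1/d + 1/p_max used for || u grad e_m ||_{L^2}
for d in (3, 4, 5, 10, 100):
    assert F(1, d) + 1 / p_max(d) == F(1, 2)

# d = 2: for q > 2, the conjugate p with 1/2 = 1/q + 1/p satisfies p > 2,
# so H^1(M) -> L^p(M) is still available
for q in (F(5, 2), F(3), F(10)):
    p = 1 / (F(1, 2) - 1 / q)
    assert p > 2
```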

Hence, we conclude in both cases

$$\begin{aligned} \Vert e_m u \Vert _{H^1}\lesssim \Vert e_m \Vert _F \Vert u \Vert _{H^1},\qquad m\in \mathbb {N},\quad u\in H^1(M). \end{aligned}$$

For \(d=1,\) this inequality directly follows from the embedding \(H^1(M) \hookrightarrow {L^\infty (M)}.\) Therefore, we obtain

$$\begin{aligned} \sum _{m=1}^{\infty }\Vert B_m \Vert _{{\mathcal {L}}({E_A})}^2<\infty \end{aligned}$$

for arbitrary dimension d. The corresponding properties of \(B_m\) as an element of \({\mathcal {L}}({L^{\alpha +1}(M)})\) and of \({\mathcal {L}}(L^2(M))\) can be deduced from the embedding \(F\hookrightarrow {L^\infty (M)}.\)

We close this section with remarks on natural generalizations of the linear, conservative noise considered in this paper. The details have been worked out in the second author’s dissertation [27].

Remark 3.6

As in [8, Section 8], it is possible to replace the linear Stratonovich noise in Theorem 1.1, see also Assumption 2.7, by a nonlinear one whose coefficients and Stratonovich correction term take the form

$$\begin{aligned} u\mapsto -\mathrm {i}B_m\left( g(\vert u\vert ^2)u\right) ,\qquad \mu (u):=-\frac{1}{2}\sum _{m=1}^{\infty }B_m^2 \left( g(\vert u\vert ^2)^2 u\right) , \end{aligned}$$

where we assume the Lipschitz and linear growth conditions

$$\begin{aligned}&\Vert g(\vert u\vert ^2)^j u \Vert _{E_A}\lesssim \Vert u \Vert _{E_A},\qquad \Vert g(\vert u\vert ^2)^j u \Vert _{L^p}\lesssim \Vert u \Vert _{L^p},\\&\quad \Vert g(\vert u\vert ^2)^j u-g(\vert v\vert ^2)^jv \Vert _{L^p}\lesssim \Vert u-v \Vert _{L^p} \end{aligned}$$

for \(j\in \left\{ 1,2\right\} \) and \(p\in \left\{ \alpha +1,2\right\} .\) In the case of \(H^1\)-based energy spaces, i.e. \(A=-\Delta \) on a bounded domain or \(A=-\Delta _g\) on a Riemannian manifold, one can take \(g\in C^2([0,\infty ),\mathbb {R})\) which satisfies the following conditions:

$$\begin{aligned} \sup _{r>0} \vert g(r)\vert<\infty ,\qquad \sup _{r>0} (1+r)\vert g'(r)\vert<\infty ,\qquad \sup _{r>0} (1+r^\frac{3}{2})\vert g''(r)\vert <\infty . \end{aligned}$$
(3.9)

This kind of nonlinearity is often called saturated and typical examples are given by

$$\begin{aligned} g_1(r)=\frac{r}{1+\sigma r},\qquad g_2(r)=\frac{r(2+\sigma r)}{(1+\sigma r)^2},\qquad g_3(r)=\frac{\log (1+\sigma r)}{1+\log (1+\sigma r)},\qquad r\in [0,\infty ), \end{aligned}$$

for a constant \(\sigma >0.\) For the Galerkin equation, we then take

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u_n&= \left( -\mathrm {i}A u_n-\mathrm {i}P_n F\left( u_n\right) -\frac{1}{2}\sum _{m=1}^{\infty }S_n B_m^2\left( g(\vert u_n\vert ^2)^2 u_n\right) \right) \mathrm {d}t-\mathrm {i}\sum _{m=1}^{\infty }S_n B_m( g(\vert u_n\vert ^2) u_n) \mathrm {d}\beta _m, \\ u_n(0)&=P_n u_0. \end{aligned}\right. \end{aligned}$$

Unfortunately, this approximation does not respect mass conservation, but one still has

$$\begin{aligned} \sup _{n\in \mathbb {N}}\mathbb {E}\left[ \sup _{t\in [0,T]}\Vert u_n(t) \Vert _H^2\right] \lesssim 1, \end{aligned}$$
(3.10)

which is enough for our purpose.

Remark 3.7

Another possible generalization of the noise is to drop the assumption that the operators \(B_m,\) \(m\in \mathbb {N},\) are selfadjoint. Then, the correction term \(\mu \) has the form

$$\begin{aligned} \mu (u):= -\frac{1}{2}\sum _{m=1}^{\infty }B_m^* B_m u. \end{aligned}$$

This kind of noise is called non-conservative and was considered in [12, 28]. The existence result is then based on the approximation

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u_n&= \left( -\mathrm {i}A u_n-\mathrm {i}P_n F\left( u_n\right) -\frac{1}{2}\sum _{m=1}^{\infty }S_n B_m^* B_m u_n\right) \mathrm {d}t-\mathrm {i}\sum _{m=1}^{\infty }S_n B_m u_n \mathrm {d}\beta _m, \\ u_n(0)&=P_n u_0, \end{aligned}\right. \end{aligned}$$

and the a priori estimates as well as the convergence results can be proved analogously. We only have to replace mass conservation by the estimate (3.10). The uniqueness result in Sect. 7, however, only holds for selfadjoint \(B_m,\) since this is the crucial assumption in Lemma 7.4.

4 Compactness and tightness criteria

This section is devoted to the compactness results which will be used to get a martingale solution of (1.1) by the Faedo–Galerkin method.

Let A and \(\alpha >1\) be chosen according to Assumption 2.1. We recall that the energy space \({E_A}\) is defined by \({E_A}:=\mathcal {D}(A^{\frac{1}{2}}).\) We start with a criterion for convergence of a sequence in \(C([0,T],\mathbb {B}_{{E_A}}^r),\) where the ball \(\mathbb {B}_{{E_A}}^r\) is equipped with the weak topology.

Lemma 4.1

Let \(r>0\) and \(\left( u_n\right) _{n\in \mathbb {N}} \subset {L^\infty (0,T;{E_A})}\) be a sequence with the properties

  1. (a)

    \(\sup _{n\in \mathbb {N}} \Vert u_n \Vert _{L^\infty (0,T;{E_A})}\le r\),

  2. (b)

    \(u_n\rightarrow u\) in \({C([0,T],{E_A^*})}\) for \(n\rightarrow \infty .\)

Then \(u_n,u\in C([0,T],\mathbb {B}_{{E_A}}^r)\) for all \(n\in \mathbb {N}\) and \(u_n \rightarrow u\) in \(C([0,T],\mathbb {B}_{{E_A}}^r)\) for \(n\rightarrow \infty .\)

Proof

The Strauss Lemma A.3 and the assumptions guarantee that

$$\begin{aligned} u_n \in {C([0,T],{E_A^*})}\cap {L^\infty (0,T;{E_A})}\subset C_w([0,T],{E_A})\end{aligned}$$

for all \(n\in \mathbb {N}\) and \(\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}\le r.\) Hence, we infer that \(u_n\in C([0,T],\mathbb {B}_{{E_A}}^r)\) for all \(n\in \mathbb {N}.\) For \(h\in {E_A},\) we have

$$\begin{aligned} \sup _{s\in [0,T]} \left| \langle u_n(s)-u(s), h \rangle \right| \le \Vert u_n-u \Vert _{C([0,T],{E_A^*})} \Vert h \Vert _{E_A}\rightarrow 0,\qquad n\rightarrow \infty . \end{aligned}$$

By (a) and the Banach–Alaoglu theorem, we get a subsequence \(\left( u_{n_k}\right) _{k\in \mathbb {N}}\) and \(v\in {L^\infty (0,T;{E_A})}\) with \(u_{n_k} \rightharpoonup ^* v\) in \({L^\infty (0,T;{E_A})}.\) By the uniqueness of the weak-star limit in \(L^\infty (0,T;{E_A^*}),\) we conclude \(u=v \in {L^\infty (0,T;{E_A})}\) with \(\Vert u \Vert _{L^\infty (0,T;{E_A})}\le r.\)

Let \(\varepsilon >0\) and \(h\in {E_A^*}.\) By the density of \({E_A}\) in \({E_A^*},\) we choose \(h_\varepsilon \in {E_A}\) with \(\Vert h-h_\varepsilon \Vert _{E_A^*}\le \frac{\varepsilon }{4 r}\) and obtain for large \(n\in \mathbb {N}\)

$$\begin{aligned} \left| \langle u_n(s)-u(s), h \rangle \right|&\le \left| \langle u_n(s)-u(s), h-h_\varepsilon \rangle \right| + \left| \langle u_n(s)-u(s), h_\varepsilon \rangle \right| \\&\le \Vert u_n(s)-u(s) \Vert _{E_A}\Vert h-h_\varepsilon \Vert _{E_A^*}+ \left| \langle u_n(s)-u(s), h_\varepsilon \rangle \right| \\&\le 2 r \frac{\varepsilon }{4 r}+\frac{\varepsilon }{2}=\varepsilon \end{aligned}$$

uniformly in \(s\in [0,T].\) This implies \(\sup _{s\in [0,T]} \left| \langle u_n(s)-u(s), h \rangle \right| \rightarrow 0\) for \(n\rightarrow \infty \) and all \(h\in {E_A^*},\) i.e. \(u_n \rightarrow u\) in \(C_w([0,T],{E_A}).\) By Lemma A.2, we obtain the assertion. \(\square \)

We define a Banach space \({\tilde{Z}}_T\) by

$$\begin{aligned} {\tilde{Z}}_T:={C([0,T],{E_A^*})}\cap {L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\end{aligned}$$

and a locally convex space \(Z_T\) by

$$\begin{aligned} Z_T:={\tilde{Z}}_T \cap C_w([0,T],{E_A}). \end{aligned}$$

The latter is equipped with the Borel \(\sigma \)-algebra, i.e. the \(\sigma \)-algebra generated by the open sets in the locally convex topology of \(Z_T.\) In the next Proposition, we give a criterion for compactness in \(Z_T.\)

Proposition 4.2

Let K be a subset of \(Z_T\) and \(r>0\) such that

  1. (a)

    \( \sup _{u\in K} \Vert u\Vert _{{L^\infty (0,T;{E_A})}}\le r ; \)

  2. (b)

    K is equicontinuous in \({C([0,T],{E_A^*})},\) i.e.

    $$\begin{aligned} \lim _{\delta \rightarrow 0} \sup _{u\in K} \sup _{\vert t-s\vert \le \delta } \Vert u(t)-u(s)\Vert _{{E_A^*}}=0. \end{aligned}$$

Then, K is relatively compact in \(Z_T.\)

Proof

Let K be a subset of \(Z_T\) such that assumptions (a) and (b) are fulfilled and let \(\left( z_n\right) _{n\in \mathbb {N}}\subset K.\) We want to construct a subsequence converging in \({L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})},\) \({C([0,T],{E_A^*})}\) and \(C_w([0,T],{E_A}).\)

Step 1 By (a), we can choose a constant \(C>0\) and for each \(n\in \mathbb {N}\) a null set \(I_n\) with \(\Vert z_n(t)\Vert _{E_A}\le C\) for all \(t\in [0,T]{\setminus } I_n.\) The set \(I:= \bigcup _{n\in \mathbb {N}} I_n\) is also a null set and for each \(t\in [0,T]{\setminus } I,\) the sequence \(\left( z_n(t)\right) _{n\in \mathbb {N}}\) is bounded in \({E_A}.\)

Let \(\left( t_j\right) _{j\in \mathbb {N}}\subset [0,T]{\setminus } I\) be a sequence which is dense in [0, T]. By Lemma 2.3, the embedding \({E_A}\hookrightarrow {H}\) is compact, so \({E_A}\hookrightarrow {E_A^*}\) is also compact. Therefore, for each \(j\in \mathbb {N},\) we can choose a subsequence such that \(\left( z_n(t_j)\right) _{n\in \mathbb {N}}\) is Cauchy in \({E_A^*}.\) By a diagonalisation argument, one obtains a common subsequence, again denoted by \(\left( z_{n}\right) _{n\in \mathbb {N}},\) such that \(\left( z_{n}(t_j)\right) _{n\in \mathbb {N}}\) is Cauchy for every \(j\in \mathbb {N}.\)

Let \(\varepsilon >0.\) Assumption (b) yields \(\delta >0\) with

$$\begin{aligned} \sup _{u\in K} \sup _{\vert t-s\vert \le \delta } \Vert u(t)-u(s)\Vert _{E_A^*}\le \frac{\varepsilon }{3}. \end{aligned}$$
(4.1)

Let us choose finitely many open balls \(U_\delta ^1,\dots , U_\delta ^L\) of radius \(\delta \) covering [0, T]. By density, each of these balls contains an element of the sequence \(\left( t_j\right) _{j\in \mathbb {N}},\) say \(t_{j_l}\in U_\delta ^l\) for \(l\in \left\{ 1,\dots , L\right\} .\) In particular, the sequence \(\left( z_{n}(t_{j_l})\right) _{n\in \mathbb {N}}\) is Cauchy for all \(l\in \left\{ 1,\dots , L\right\} .\) Hence,

$$\begin{aligned} \Vert z_n(t_{j_l})-z_m(t_{j_l})\Vert _{E_A^*}\le \frac{\varepsilon }{3},\qquad l=1,\dots ,L, \end{aligned}$$
(4.2)

if we choose \(m,n\in \mathbb {N}\) sufficiently large. Now, we fix \(t\in [0,T]\) and take \(l\in \{1,\dots , L\}\) with \(\vert t_{j_l}-t\vert \le \delta .\) We use (4.1) and (4.2) to get

$$\begin{aligned} \Vert z_n(t)-z_m(t)\Vert _{E_A^*}&\le \Vert z_n(t)-z_n(t_{j_l})\Vert _{E_A^*}+\Vert z_n(t_{j_l})-z_m(t_{j_l})\Vert _{E_A^*}+\Vert z_m(t_{j_l})-z_m(t)\Vert _{E_A^*}\le \varepsilon . \end{aligned}$$
(4.3)

This means that \(\left( z_n\right) _{n\in \mathbb {N}}\) is a Cauchy sequence in \({C([0,T],{E_A^*})}\) since the estimate (4.3) is uniform in \(t\in [0,T].\)

Step 2 The first step yields \(z\in {C([0,T],{E_A^*})}\) with \(z_n \rightarrow z\) in \({C([0,T],{E_A^*})}\) for \(n\rightarrow \infty ,\) and assumption (a) implies that there is \(r>0\) with \(\sup _{n\in \mathbb {N}} \Vert z_n \Vert _{L^\infty (0,T;{E_A})}\le r.\) Therefore, we obtain \(z\in C([0,T],\mathbb {B}_{{E_A}}^r)\) and \(z_n\rightarrow z\) in \(C([0,T],\mathbb {B}_{{E_A}}^r)\) for \(n\rightarrow \infty \) by Lemma 4.1. Hence, \(z_n\rightarrow z\) in \(C_w([0,T],{E_A}).\)

Step 3 We fix again \(\varepsilon >0.\) By the Lions Lemma A.4 with \(X_0={E_A},\) \(X={L^{\alpha +1}(M)},\) \(X_1={E_A^*},\) \(p=\alpha +1\) and \(\varepsilon _0=\frac{\varepsilon }{2 T (2C)^{\alpha +1}},\) we get

$$\begin{aligned} \Vert v \Vert _{L^{\alpha +1}(M)}^{\alpha +1} \le \varepsilon _0 \Vert v\Vert _{{E_A}}^{\alpha +1}+C_{\varepsilon _0} \Vert v\Vert _{{E_A^*}}^{\alpha +1} \end{aligned}$$
(4.4)

for all \(v\in {E_A}.\) The first step allows us to choose \(n,m\in \mathbb {N}\) large enough that

$$\begin{aligned} \Vert z_n-z_m\Vert _{C([0,T],{E_A^*})}^{\alpha +1}\le \frac{\varepsilon }{2 C_{\varepsilon _0}T} . \end{aligned}$$

The special choice \(v=z_n(t)-z_m(t)\) for \(t\in [0,T]\) in (4.4) and integration with respect to time yields

$$\begin{aligned} \Vert z_n-z_m \Vert _{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}^{\alpha +1}&\le {\varepsilon _0} \Vert z_n-z_m\Vert _{L^{\alpha +1}(0,T;{E_A})}^{\alpha +1}+C_{\varepsilon _0} \Vert z_n-z_m\Vert _{L^{\alpha +1}(0,T;{E_A^*})}^{\alpha +1}\\&\le {\varepsilon _0} T \Vert z_n-z_m\Vert _{L^\infty (0,T;{E_A})}^{\alpha +1}+C_{\varepsilon _0} T\Vert z_n-z_m\Vert _{C([0,T],{E_A^*})}^{\alpha +1}\\&\le {\varepsilon _0} T \left( 2 C\right) ^{\alpha +1}+C_{\varepsilon _0} T\Vert z_n-z_m\Vert _{C([0,T],{E_A^*})}^{\alpha +1}\\&\le \frac{\varepsilon }{2}+\frac{\varepsilon }{2}=\varepsilon . \end{aligned}$$

Hence, the sequence \(\left( z_n\right) _{n\in \mathbb {N}}\) is also Cauchy in \({L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\). \(\square \)

In the following, we want to obtain a criterion for tightness in \(Z_T.\) To this end, we introduce the Aldous condition.

Definition 4.3

Let \((X_n)_{n\in \mathbb {N}}\) be a sequence of stochastic processes in a Banach space E. Assume that for every \(\varepsilon >0\) and \(\eta >0\) there is \(\delta >0\) such that for every sequence \((\tau _n)_{n\in \mathbb {N}}\) of [0, T]-valued stopping times one has

$$\begin{aligned} \sup _{n\in \mathbb {N}} \sup _{0<\theta \le \delta } \mathbb {P}\left\{ \Vert X_n((\tau _n+\theta )\wedge T)-X_n(\tau _n)\Vert _E\ge \eta \right\} \le \varepsilon . \end{aligned}$$

In this case, we say that \((X_n)_{n\in \mathbb {N}}\) satisfies the Aldous condition [A].
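In applications, condition [A] is typically verified through moment estimates on the increments: if there are constants \(C,k,q>0,\) independent of \(n\) and of the stopping times, with \(\mathbb {E}\left[ \Vert X_n((\tau _n+\theta )\wedge T)-X_n(\tau _n)\Vert _E^q\right] \le C\theta ^{kq},\) then the Markov inequality gives

```latex
\mathbb{P}\left\{ \Vert X_n((\tau_n+\theta)\wedge T)-X_n(\tau_n)\Vert_E \ge \eta \right\}
\le \eta^{-q}\, \mathbb{E}\left[ \Vert X_n((\tau_n+\theta)\wedge T)-X_n(\tau_n)\Vert_E^q \right]
\le C\, \eta^{-q}\, \theta^{kq},
```

so \(\delta :=\left( \varepsilon \eta ^q/C\right) ^{1/(kq)}\) works in Definition 4.3. This is a standard observation, recorded here for convenience rather than taken from this section.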

The following lemma (see [41, Lemma A.7]) gives a useful consequence of the Aldous condition [A].

Lemma 4.4

Let \((X_n)_{n\in \mathbb {N}}\) be a sequence of continuous stochastic processes in a Banach space E,  which satisfies the Aldous condition [A]. Then, for every \(\varepsilon >0\) there exists a measurable subset \(A_\varepsilon \subset C([0,T],E)\) such that

$$\begin{aligned} \mathbb {P}^{X_n}(A_\varepsilon )\ge 1-\varepsilon ,\qquad \lim _{\delta \rightarrow 0} \sup _{u\in A_\varepsilon } \sup _{\vert t-s\vert \le \delta } \Vert u(t)-u(s)\Vert _E=0. \end{aligned}$$

The deterministic compactness result in Proposition 4.2 and the last lemma can be used to obtain the following criterion for tightness in \(Z_T.\)

Proposition 4.5

Let \((X_n)_{n\in \mathbb {N}}\) be a sequence of continuous adapted \({E_A^*}\)-valued processes satisfying the Aldous condition [A] in \({E_A^*}\) and

$$\begin{aligned} \sup _{n\in \mathbb {N}} \mathbb {E}\left[ \Vert X_n\Vert _{L^\infty (0,T;{E_A})}^2\right] <\infty . \end{aligned}$$

Then the sequence \(\left( {\mathbb {P}}^{X_n}\right) _{n\in \mathbb {N}}\) is tight in \(Z_T,\) i.e. for every \(\varepsilon >0\) there is a compact set \(K_\varepsilon \subset Z_T\) with

$$\begin{aligned} \mathbb {P}^{X_n}(K_\varepsilon )\ge 1- \varepsilon \end{aligned}$$

for all \(n\in \mathbb {N}.\)

Proof

Let \(\varepsilon >0.\) With \(R_1:= \left( \frac{2}{\varepsilon } \sup _{n\in \mathbb {N}} \mathbb {E}\left[ \Vert X_n\Vert _{L^\infty (0,T;{E_A})}^2\right] \right) ^{\frac{1}{2}},\) we obtain

$$\begin{aligned} \mathbb {P}\left\{ \Vert X_n\Vert _{L^\infty (0,T;{E_A})}> R_1\right\} \le \frac{1}{R_1^2}\mathbb {E}\left[ \Vert X_n\Vert _{L^\infty (0,T;{E_A})}^2\right] \le \frac{\varepsilon }{2}. \end{aligned}$$

By Lemma 4.4, one can use the Aldous condition [A] to get a Borel subset A of \({C([0,T],{E_A^*})}\) with

$$\begin{aligned} \mathbb {P}^{X_n}\left( A\right) \ge 1-\frac{\varepsilon }{2},\quad n\in \mathbb {N}, \qquad \lim _{\delta \rightarrow 0} \sup _{u\in A} \sup _{\vert t-s\vert \le \delta } \Vert u(t)-u(s)\Vert _{E_A^*}=0. \end{aligned}$$

We define \(K:= \overline{A\cap B}\) where \(B:= \left\{ u\in Z_T{:}\,\Vert u\Vert _{L^\infty (0,T;{E_A})}\le R_1 \right\} .\) This set K is compact by Proposition 4.2 and we can estimate

$$\begin{aligned} \mathbb {P}^{X_n}(K)\ge \mathbb {P}^{X_n}\left( A\cap B\right) \ge \mathbb {P}^{X_n}\left( A\right) -\mathbb {P}^{X_n}\left( B^c\right) \ge 1-\frac{\varepsilon }{2}-\frac{\varepsilon }{2}=1-\varepsilon \end{aligned}$$

for all \(n\in \mathbb {N}.\) \(\square \)

In metric spaces, one can apply the Prokhorov theorem (see [47, Theorem II.6.7]) and the Skorohod theorem (see [5, Theorem 6.7]) to deduce convergence from tightness. Since \(Z_T\) is only a locally convex space, we use the following generalization to nonmetric spaces.

Proposition 4.6

(Skorohod–Jakubowski) Let \({\mathcal {X}}\) be a topological space such that there is a sequence of continuous functions \(f_m{:}\,{\mathcal {X}}\rightarrow \mathbb {C}\) that separates points of \({\mathcal {X}}.\) Let \({\mathcal {A}}\) be the \(\sigma \)-algebra generated by \(\left( f_m\right) _m.\) Then, we have the following assertions:

  1. (a)

    Every compact set \(K\subset {\mathcal {X}}\) is metrizable.

  2. (b)

    Let \(\left( \mu _n\right) _{n\in \mathbb {N}}\) be a tight sequence of probability measures on \(\left( {\mathcal {X}}, {\mathcal {A}}\right) .\) Then, there are a subsequence \(\left( \mu _{n_k}\right) _{k\in \mathbb {N}},\) random variables \(X_k,\) \(X\) for \(k\in \mathbb {N}\) on a common probability space \(({\tilde{\Omega }},{\tilde{\mathbb {F}}},\tilde{\mathbb {P}})\) with \(\tilde{\mathbb {P}}^{X_k}=\mu _{n_k}\) for \(k\in \mathbb {N},\) and \(X_k \rightarrow X\) \(\tilde{\mathbb {P}}\)-almost surely for \(k\rightarrow \infty .\)

We stated Proposition 4.6 in the form of [9] (see also [30]) where it was first used to construct martingale solutions for stochastic evolution equations. We apply this result to the concrete situation and obtain the final result of this section.

Corollary 4.7

Let \((X_n)_{n\in \mathbb {N}}\) be a sequence of adapted \({E_A^*}\)-valued processes satisfying the Aldous condition [A] in \({E_A^*}\) and

$$\begin{aligned} \sup _{n\in \mathbb {N}} \mathbb {E}\left[ \Vert X_n\Vert _{L^\infty (0,T;{E_A})}^2\right] <\infty . \end{aligned}$$

Then, there are a subsequence \((X_{n_k})_{k\in \mathbb {N}}\) and random variables \({\tilde{X}}_k\), \({\tilde{X}}\) for \(k\in \mathbb {N}\) on a second probability space \(({\tilde{\Omega }},{\tilde{\mathbb {F}}},\tilde{\mathbb {P}})\) with \(\tilde{\mathbb {P}}^{{\tilde{X}}_k}=\mathbb {P}^{X_{n_k}}\) for \(k\in \mathbb {N},\) and \({\tilde{X}}_k \rightarrow {\tilde{X}}\) \(\tilde{\mathbb {P}}\)-almost surely in \(Z_T\) as \(k\rightarrow \infty .\)

Proof

We recall that \(Z_T={C([0,T],{E_A^*})}\cap {L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\cap C_w([0,T],{E_A})\) is a locally convex space. Therefore, the assertion follows by an application of Propositions 4.5 and 4.6 if for each of the spaces in the definition of \(Z_T\) we find a sequence \(f_m{:}\,Z_T \rightarrow \mathbb {R}\) of continuous functions separating points which generates the Borel \(\sigma \)-algebra. The separable Banach spaces \({C([0,T],{E_A^*})}\) and \({L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\) have this property.

Let \(\left\{ h_m{:}\,m\in \mathbb {N}\right\} \) be a dense subset of \({E_A^*}.\) Then, we define the countable set

\(F:=\left\{ f_{m,t}{:}\,m\in \mathbb {N}, t \in [0,T]\cap \mathbb {Q}\right\} \) of functionals on \(C_w([0,T],{E_A})\) by

$$\begin{aligned} f_{m,t}(u):= \langle u(t), h_m \rangle \end{aligned}$$

for \(m\in \mathbb {N},\) \(t\in [0,T]\cap \mathbb {Q}\) and \(u\in C_w([0,T],{E_A}).\) The set F separates points, since for \(u,v \in C_w([0,T],{E_A})\) with \(f_{m,t}(u)=f_{m,t}(v)\) for all \(m\in \mathbb {N}\) and \(t\in [0,T]\cap \mathbb {Q},\) we get \(\langle u, h_m \rangle =\langle v, h_m \rangle \) on [0, T] for all \(m\in \mathbb {N}\) by continuity and therefore \(u=v\) on [0, T].

Furthermore, the density of \(\{h_m{:}\,m\in \mathbb {N}\}\) and the definition of the locally convex topology yield that \(\left( f_{m,t}\right) _{m\in \mathbb {N}, t\in [0,T]\cap \mathbb {Q}}\) generate the Borel \(\sigma \)-algebra on \(C_w([0,T],{E_A}).\)\(\square \)

5 The Galerkin approximation

In this section, we introduce the Galerkin approximation, which will be used for the proof of the existence of a solution to (1.1). We prove the well-posedness of the approximated equation and uniform estimates for the solutions that are sufficient to apply Corollary 4.7.

By the functional calculus of the selfadjoint operator S from Assumption and Notation 2.1, we define the operators \(P_n{:}\,H\rightarrow H\) by \(P_n:={\mathbf {1}}_{(0,2^{n+1})}(S)\) for \(n\in \mathbb {N}_0.\) Recall from Lemma 2.3 that S has the representation

$$\begin{aligned} S x=\sum _{m=1}^\infty \lambda _m \big (x,h_m\big )_{H} h_m, \quad x\in \mathcal {D}(S)=\left\{ x\in H{:}\,\sum _{m=1}^\infty \lambda _m^2 \vert \big (x,h_m\big )_{H}\vert ^2<\infty \right\} , \end{aligned}$$

with an orthonormal basis \(\left( h_m\right) _{m\in \mathbb {N}}\) and eigenvalues \(\lambda _m>0\) such that \(\lambda _m\rightarrow \infty \) as \(m\rightarrow \infty .\) For \(n\in \mathbb {N}_0,\) we set

$$\begin{aligned} H_n:={\text {span}}\left\{ h_m{:}\,m\in \mathbb {N}, \lambda _m< 2^{n+1}\right\} \end{aligned}$$

and observe that \(P_n\) is the orthogonal projection from H to \(H_n.\) Moreover, we have

$$\begin{aligned} P_n x = \sum _{\lambda _m< 2^{n+1}} \big (x,h_m\big )_{H}h_m, \qquad x\in {H}. \end{aligned}$$
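To illustrate the truncation mechanism behind \(P_n={\mathbf {1}}_{(0,2^{n+1})}(S),\) here is a minimal numerical sketch on a toy model in which S is a diagonal operator, so that the eigenbasis \(\left( h_m\right) \) is the standard basis; the concrete spectrum and all names are hypothetical illustrations, not taken from the text:

```python
import numpy as np

def spectral_projection(x, eigvals, n):
    """P_n x = sum over eigenvalues below 2^(n+1) of (x, h_m) h_m.

    S is modelled as a diagonal operator, so the eigenbasis (h_m) is
    the standard basis and the projection simply zeroes all
    coefficients with lambda_m >= 2^(n+1).
    """
    mask = eigvals < 2.0 ** (n + 1)
    return np.where(mask, x, 0.0)

# Toy spectrum lambda_m -> infinity and a test vector.
eigvals = np.array([0.5, 1.0, 3.0, 5.0, 9.0, 17.0])
x = np.array([1.0, -2.0, 0.5, 1.5, -1.0, 2.0])

p1x = spectral_projection(x, eigvals, 1)   # keeps the modes with lambda_m < 4
```

On this toy model one checks directly that \(P_n\) is idempotent and a contraction on H, matching the properties of an orthogonal projection used in the text.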

Note that we have \(h_m\in \bigcap _{k\in \mathbb {N}}\mathcal {D}(S^k)\) for \(m\in \mathbb {N}\) and thus, we obtain by the assumption \(\mathcal {D}(S^k)\hookrightarrow E_A\) for some \(k\in \mathbb {N}\) that \(H_n\) is a closed subspace of \({E_A}\) for \(n\in \mathbb {N}_0.\) In particular, \(H_n\) is a closed subspace of \({E_A^*}.\) The fact that the operators S and A commute by Assumption 2.1 implies that \(P_n\) and \(A^\frac{1}{2}\) commute. We obtain

$$\begin{aligned}&\Vert P_n x \Vert _{{E_A}}^2=\Vert P_n x \Vert _{H}^2+\Vert A^{\frac{1}{2}}P_n x \Vert _{H}^2 =\Vert P_n x \Vert _{H}^2+\Vert P_n A^{\frac{1}{2}}x \Vert _{H}^2 \nonumber \\&\le \Vert x \Vert _{{E_A}}^2,\qquad x\in {E_A}, \end{aligned}$$
(5.1)

and

$$\begin{aligned} \Vert P_n v \Vert _{{E_A^*}}=\sup _{\Vert x \Vert _{E_A}\le 1}\vert \big (P_n v,x\big )_{H}\vert \le \Vert v \Vert _{E_A^*}\sup _{\Vert x \Vert _{E_A}\le 1} \Vert P_n x \Vert _{E_A}\le \Vert v \Vert _{E_A^*}. \end{aligned}$$

By density, we can extend \(P_n\) to an operator \(P_n{:}\,{E_A^*}\rightarrow H_n\) with \(\Vert P_n \Vert _{{E_A^*}\rightarrow {E_A^*}}\le 1\) and

$$\begin{aligned} \langle v, P_n v \rangle \in \mathbb {R}, \qquad \langle v, P_n w \rangle =\big (P_n v,w\big )_{H}, \qquad v\in {E_A^*}, \quad w\in {E_A}. \end{aligned}$$
(5.2)

Despite their nice behaviour as orthogonal projections, the operators \(P_n,\) \(n\in \mathbb {N},\) lack a crucial property needed in the proof of the a priori estimates for the stochastic terms: in general, they are not uniformly bounded from \({L^{\alpha +1}(M)}\) to \({L^{\alpha +1}(M)}.\) To overcome this deficiency, we construct another sequence \(\left( S_n\right) _{n\in \mathbb {N}}\) of operators \(S_n{:}\,H \rightarrow H_n\) using functional calculus techniques and the general Littlewood–Paley decomposition from [34].

We take a function \(\dot{\rho }\in C_c^\infty (0,\infty )\) with \({\text {supp}} \dot{\rho }\subset [\frac{1}{2},2]\) and \(\sum _{m\in \mathbb {Z}} \dot{\rho }(2^{-m} t)=1\) for all \(t>0.\) We define \( \rho _m:= \dot{\rho }(2^{-m} \cdot )\) for \(m\in \mathbb {N}\) and \(\rho _0:=\sum _{m=-\infty }^0 \dot{\rho }(2^{-m} \cdot ),\) so that we have \(\sum _{m=0}^\infty \rho _m(t)=1\) for all \(t>0.\) The sequence \(\left( \rho _m\right) _{m\in \mathbb {N}_0}\) is called a dyadic partition of unity.
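The telescoping construction of such a \(\dot{\rho }\) can be sketched numerically: starting from a cutoff \(\psi \) with \(\psi =1\) on [0, 1] and \(\psi =0\) on \([2,\infty ),\) the difference \(\dot{\rho }(t)=\psi (t)-\psi (2t)\) is supported in \([\frac{1}{2},2]\) and its dyadic dilates sum to 1. The sketch below uses a merely piecewise-smooth transition instead of a \(C_c^\infty \) bump, which suffices to check the partition-of-unity identity; all function names are hypothetical:

```python
import numpy as np

def psi(t):
    """Cutoff with psi = 1 on [0, 1] and psi = 0 on [2, inf).

    The text requires a C^infty bump; for this numerical sketch a
    smoothstep transition on [1, 2] is a hypothetical stand-in.
    """
    t = np.asarray(t, dtype=float)
    out = np.clip(2.0 - t, 0.0, 1.0)
    return np.where((t > 1.0) & (t < 2.0), 3 * out**2 - 2 * out**3, out)

def rho_dot(t):
    # supp(rho_dot) lies in [1/2, 2]; the dilates telescope to 1 on (0, inf)
    return psi(t) - psi(2.0 * np.asarray(t, dtype=float))

t = np.geomspace(1e-3, 1e3, 50)
total = sum(rho_dot(2.0 ** (-m) * t) for m in range(-30, 30))
```

Because the sum over m telescopes to \(\psi (2^{-29}t)-\psi (2^{31}t),\) it evaluates to exactly 1 on the tested range.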

Lemma 5.1

We have the norm equivalence

$$\begin{aligned} \Vert x \Vert _{{L^{\alpha +1}(M)}}\eqsim \sup _{\Vert a \Vert _{l^\infty (\mathbb {N}_0)}\le 1}\left\| \sum _{m=0}^\infty a_m \rho _m(S) x \right\| _{{L^{\alpha +1}(M)}}, \end{aligned}$$
(5.3)

where the operators \(\rho _m(S),\) \(m\in \mathbb {N}_0,\) are defined by the functional calculus for selfadjoint operators.

Proof

By Assumption 2.1(ii), we obtain that the restriction of \(\left( T(t)\right) _{t\ge 0}\) to \({L^{\alpha +1}(M)}\) defines a \(c_0\)-semigroup on \({L^{\alpha +1}(M)},\) see Theorem 7.1 in [46]. We denote the corresponding generator by \(S_{\alpha +1}.\) Lemma 6.1 in [34] implies that the operator \(S_{\alpha +1}\) is 0-sectorial and has a Mihlin \(M^\beta \)-calculus for some \(\beta >0.\) For a definition of these properties, we refer to [34, Section 2]. The estimate (5.3) follows from Theorem 4.1 in [34]. \(\square \)

In the next Proposition, we use the estimate from Lemma 5.1 to construct the sequence \(\left( S_n\right) _{n\in \mathbb {N}}\) which we will employ in our Galerkin approximation of the problem (1.1). For a more direct proof which employs spectral multiplier theorems from [33, 55] rather than the abstract Littlewood–Paley theory from [34], we refer to [27]. Moreover, we would like to remark that in the meantime, a similar construction has also been applied to use the Galerkin method in the context of stochastic Maxwell equations, see [29].

Proposition 5.2

There exists a sequence \(\left( S_n\right) _{n\in \mathbb {N}_0}\) of selfadjoint operators \(S_n{:}\,H \rightarrow H_n\) for \(n\in \mathbb {N}_0\) with \(S_n \psi \rightarrow \psi \) in \(E_A\) as \(n\rightarrow \infty \) for every \(\psi \in E_A\) and the uniform norm estimates

$$\begin{aligned} \sup _{n\in \mathbb {N}_0}\Vert S_n \Vert _{{{\mathcal {L}}(H)}}\le 1, \quad \sup _{n\in \mathbb {N}_0} \Vert S_n \Vert _{{\mathcal {L}}({E_A})}\le 1, \quad \sup _{n\in \mathbb {N}_0} \Vert S_n \Vert _{{\mathcal {L}}(L^{\alpha +1})}<\infty . \end{aligned}$$
(5.4)

Proof

We define the operators \(S_n{:}\,H \rightarrow H\) for \(n\in \mathbb {N}_0\) by \(S_n:= \sum _{m=0}^n \rho _m(S)\) via the functional calculus for selfadjoint operators. The operator \(\rho _m(S)\) is selfadjoint for each m, since \(\rho _m\) is real-valued. Hence, \(S_n\) is selfadjoint. By the convergence property of the functional calculus, we get \(S_n \varphi \rightarrow \varphi \) in \({E_A}\) for all \(\varphi \in {E_A}.\) A straightforward calculation using the properties of the dyadic partition of unity leads to

$$\begin{aligned} S_n x=\sum _{\lambda _m < 2^n} \big (x,h_m\big )_{H} h_m+ \sum _{\lambda _m \in [ 2^n,2^{n+1})} \rho _n(\lambda _m) \big (x,h_m\big )_{H} h_m,\quad x\in H. \end{aligned}$$

Therefore, \(S_n\) maps H to \(H_n\) and we have \(\sup _{n\in \mathbb {N}_0}\Vert S_n \Vert _{{{\mathcal {L}}(H)}}\le 1.\) The second estimate in (5.4) can be derived as in (5.1), since \(S_n\) and \(A^{\frac{1}{2}}\) commute. To prove the third estimate, we employ Lemma 5.1 with \(\left( a_m\right) _{m\in \mathbb {N}_0}\) given by \(a_m=1\) for \(m\le n\) and \(a_m=0\) for \(m>n\) and obtain for \(x\in {L^{\alpha +1}(M)}\)

$$\begin{aligned} \Vert S_n x \Vert _{L^{\alpha +1}(M)}&= \left\| \sum _{m=0}^\infty a_m \rho _m(S)x \right\| _{{L^{\alpha +1}(M)}}\le \sup _{\Vert a \Vert _{l^\infty (\mathbb {N}_0)}\le 1}\left\| \sum _{m=0}^\infty a_m \rho _m(S)x \right\| _{{L^{\alpha +1}(M)}} \lesssim \Vert x \Vert _{L^{\alpha +1}(M)}. \end{aligned}$$

\(\square \)
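On the toy diagonal model from the beginning of this section, \(S_n\) acts as a spectral multiplier which equals 1 below \(2^n,\) decays on \([2^n,2^{n+1})\) and vanishes above, exactly as in the representation derived in the proof. A hedged numerical sketch, in which the concrete decay profile is a hypothetical stand-in for \(\rho _n\):

```python
import numpy as np

def smoothstep(s):
    s = np.clip(s, 0.0, 1.0)
    return 3 * s**2 - 2 * s**3

def S_n_multiplier(lam, n):
    """Symbol of S_n = sum_{m=0}^n rho_m(S) at eigenvalue lam:
    1 below 2^n, a decay profile on [2^n, 2^(n+1)), 0 above.
    The smoothstep decay is an illustrative stand-in for rho_n."""
    lo, hi = 2.0 ** n, 2.0 ** (n + 1)
    return np.where(lam < lo, 1.0,
                    np.where(lam >= hi, 0.0, smoothstep((hi - lam) / lo)))

def apply_S_n(x, eigvals, n):
    # diagonal toy model: multiply each eigencoefficient by the symbol
    return S_n_multiplier(eigvals, n) * x

eigvals = np.array([0.5, 1.0, 3.0, 6.0, 12.0, 40.0])
x = np.ones(6)
y = apply_S_n(x, eigvals, 2)   # 2^n = 4: modes 0.5, 1, 3 kept, 6 damped, >= 8 dropped
```

Since the symbol takes values in [0, 1], one sees the contraction property on H, and taking n large recovers \(S_n x \rightarrow x.\)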

Using the operators \(P_n\) and \(S_n,\)\(n\in \mathbb {N},\) we approximate our original problem (1.1) by the stochastic differential equation in \(H_n\) given by

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u_n(t)&= \left( -\mathrm {i}A u_n(t)-\mathrm {i}P_n F\left( u_n(t)\right) \right) \mathrm {d}t-\mathrm {i}S_n B(S_n u_n(t)) \circ \mathrm {d}W(t), \\ u_n(0)&=P_n u_0. \end{aligned}\right. \end{aligned}$$

With the Stratonovich correction term

$$\begin{aligned} \mu _n := -\frac{1}{2} \sum _{m=1}^{\infty }\left( S_n B_m S_n\right) ^2, \end{aligned}$$

the approximated problem can also be written in the Itô form

$$\begin{aligned} \left\{ \begin{aligned} \mathrm {d}u_n(t)&= \left( -\mathrm {i}A u_n(t)-\mathrm {i}P_n F\left( u_n(t)\right) + \mu _n \left( u_n(t)\right) \right) \mathrm {d}t-\mathrm {i}S_n B (S_n u_n(t)) \mathrm {d}W(t), \\ u_n(0)&=P_n u_0. \end{aligned}\right. \nonumber \\ \end{aligned}$$
(5.5)

By the well-known theory of finite-dimensional stochastic differential equations with locally Lipschitz coefficients, we get a local well-posedness result for (5.5).

Proposition 5.3

For each \(n\in \mathbb {N},\) there is a unique local solution \(u_n\) of (5.5) with continuous paths in \(H_n\) and maximal existence time \(\tau _n,\) which is a blow-up time in the sense that we have \(\limsup _{ t \nearrow \tau _n(\omega )} \Vert u_n(t,\omega ) \Vert _{H_n}=\infty \) for almost all \(\omega \in \Omega \) with \(\tau _n(\omega )<\infty .\)

The global existence for Eq. (5.5) is based on the conservation of the \(L^2\)-norm of solutions.

Proposition 5.4

For each \(n\in \mathbb {N},\) there is a unique global solution \(u_n\) of (5.5) with continuous paths in \(H_n\) and we have the estimate

$$\begin{aligned} \Vert u_n(t) \Vert _{H_n}=\Vert u_n(t) \Vert _{H}=\Vert P_n u_0 \Vert _{H}\le \Vert u_0 \Vert _{H} \end{aligned}$$
(5.6)

almost surely for all \(t\ge 0.\)

Proof

Step 1 We fix \(n\in \mathbb {N}\) and take the unique maximal solution \((u_n,\tau _n)\) from Proposition 5.3. We show that the estimate (5.6) holds almost surely on \(\{t\le \tau _n\}.\) The function \(\varPhi {:}\,H_n \rightarrow \mathbb {R}\) defined by \(\varPhi (v):=\Vert v \Vert _{H}^2\) for \(v\in H_n\) is twice continuously Fréchet-differentiable with

$$\begin{aligned} \varPhi '[v]h_1&= 2 {\text {Re}}\big (v, h_1\big )_{H}, \qquad \varPhi ''[v] \left[ h_1,h_2\right] = 2 {\text {Re}}\big ( h_1,h_2\big )_{H} \end{aligned}$$

for \(v, h_1, h_2\in H_n.\) For the sequence \(\left( \tau _{n,k}\right) _{k\in \mathbb {N}}\) of stopping times

$$\begin{aligned} \tau _{n,k}:=\inf \left\{ t\in [0,\tau _n]{:}\,\Vert u_n(t) \Vert _{H_n}\ge k\right\} \wedge \tau _n,\qquad k\in \mathbb {N}, \end{aligned}$$

we have \(\tau _{n,k} \nearrow \tau _n\) almost surely and the Itô process \(u_n\) has the representation

$$\begin{aligned} u_n(t)= P_n u_0&+ \int _0^t \left[ -\mathrm {i}A u_n(s)-\mathrm {i}P_n F\left( u_n(s)\right) +\mu _n(u_n(s))\right] \mathrm {d}s- \mathrm {i}\int _0^t S_n B (S_n u_n(s)) \mathrm {d}W(s) \end{aligned}$$

almost surely on \(\{t\le \tau _{n,k}\}\) for all \(k\in \mathbb {N}.\) We fix \(k\in \mathbb {N}.\) Since we have

$$\begin{aligned}&{\text {tr}}\Big (\varPhi ''[u_n(s)]\left( -\mathrm {i}S_n B \left( S_n u_n(s)\right) ,-\mathrm {i}S_n B \left( S_n u_n(s)\right) \right) \Big ) \\&\quad = \sum _{m=1}^{\infty }2 {\text {Re}}\big (-\mathrm {i}S_n B \left( S_n u_n(s)\right) f_m,-\mathrm {i}S_n B \left( S_n u_n(s)\right) f_m\big )_{H} \\&\quad = 2 \sum _{m=1}^{\infty }\Vert S_n B_m S_n u_n(s) \Vert _{H}^2 \end{aligned}$$

for \(s\in \{t\le \tau _{n,k}\},\) the Itô lemma yields

$$\begin{aligned} \Vert u_n(t) \Vert _{H}^2&=\Vert P_n u_0 \Vert _{H}^2+2 \int _0^t {\text {Re}}\big ( u_n(s), -\mathrm {i}A u_n(s)-\mathrm {i}P_n F\left( u_n(s)\right) +\mu _n(u_n(s))\big )_{H} \mathrm {d}s \\&\quad +\,2 \int _0^t {\text {Re}}\big (u_n(s), -\mathrm {i}S_n B(S_n u_n(s)) \mathrm {d}W(s)\big )_{H} +\sum _{m=1}^{\infty }\int _0^t \Vert S_n B_m S_n u_n(s)\Vert _{H}^2\mathrm {d}s \end{aligned}$$

almost surely in \(\{t\le \tau _{n,k}\}.\) We fix \(v\in H_n\) and \(m\in \mathbb {N}\) and calculate

$$\begin{aligned} {\text {Re}}\big ( v, -\mathrm {i}A v\big )_{H}&={\text {Re}}\left[ \mathrm {i}\Vert A^{\frac{1}{2}}v\Vert _{H}^2\right] =0,\\ {\text {Re}}\big ( v, -\mathrm {i}P_n F\left( v\right) \big )_{H}&={\text {Re}}\langle \mathrm {i}v, F\left( v\right) \rangle =0, \\ 2{\text {Re}}\big ( v,\mu _n(v)\big )_{H}&= -\sum _{m=1}^{\infty }{\text {Re}}\big ( v,\left( S_n B_m S_n\right) ^2 v\big )_{H}=-\sum _{m=1}^{\infty }\Vert { S_n B_m S_n v}\Vert _{H}^2, \end{aligned}$$

where we used (5.2) and Assumption 2.4(i) for the second term and the fact that the operator \(S_n B_m S_n \) is selfadjoint for the third term. Analogously, we get

$$\begin{aligned} {\text {Re}}\big (v, -\mathrm {i}S_n B(S_n v)f_m\big )_{H}&={\text {Re}}\big (v, -\mathrm {i}S_n B_m S_n v\big )_{H}= {\text {Re}}\left[ \mathrm {i}\big ( v,S_n B_m S_n v\big )_{H}\right] =0. \end{aligned}$$

Thus, we obtain \( \Vert u_n(t) \Vert _{H}^2=\Vert P_n u_0 \Vert _{H}^2\le \Vert u_0 \Vert _{H}^2 \) almost surely in \(\{t\le \tau _{n,k}\}.\)

Step 2 To show \(\tau _n=\infty \) almost surely, we assume the contrary. Then there is \(\Omega _0 \in \mathcal {F}\) with \(\mathbb {P}(\Omega _0)>0\) such that \(\tau _n(\omega )<\infty \) and \(\tau _{n,k}(\omega )\nearrow \tau _n(\omega )\) for all \(\omega \in \Omega _0.\) Hence, \(\tau _{n,k}<\infty \) on \(\Omega _0\) and by the continuity of the paths of \(u_n\) and the definition of \(\tau _{n,k},\) we get \(\Vert u_n(\tau _{n,k}(\omega ),\omega ) \Vert _{H_n}=k\) for all \(\omega \in \Omega _0\) and \(k\in \mathbb {N}.\) This contradicts Step 1, where we obtained \(\Vert u_n(t) \Vert _{H} \le \Vert u_0 \Vert _{H}\) almost surely in \(\{t\le \tau _{n,k}\}.\) Therefore, \(u_n\) is a global solution and we have

$$\begin{aligned} \Vert u_n(t) \Vert _{H_n}=\Vert u_n(t) \Vert _{H}=\Vert P_n u_0 \Vert _{H}\le \Vert u_0 \Vert _{H} \end{aligned}$$

almost surely for all \(t\ge 0.\)\(\square \)
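The \(L^2\)-conservation of Proposition 5.4 can be observed numerically in the simplest scalar instance of the noise (1.2), \(\mathrm {d}u=-\mathrm {i}e u \circ \mathrm {d}\beta ,\) whose Itô form carries the correction term \(-\frac{e^2}{2}u\,\mathrm {d}t.\) A hedged simulation sketch; the scheme, step size and all parameter names are illustrative choices, not taken from the text:

```python
import numpy as np

def stratonovich_norm(e=1.0, T=1.0, n_steps=20_000, seed=0):
    """Simulate du = -i e u o dbeta (scalar toy case of the noise (1.2))
    in its Ito form du = -(e^2/2) u dt - i e u dbeta with Euler-Maruyama
    and return |u(T)|.  The exact Stratonovich solution
    u_0 exp(-i e beta(t)) has constant modulus, so |u(T)| should stay
    close to |u_0| = 1 up to discretization error.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    u = 1.0 + 0.0j
    for _ in range(n_steps):
        db = rng.normal(0.0, np.sqrt(dt))
        u = u + (-0.5 * e**2 * u) * dt + (-1j * e * u) * db
    return abs(u)

final_norm = stratonovich_norm()
```

Without the Stratonovich correction term the modulus would grow like \(e^{e^2 t/2}\) on average; with it, the discrete dynamics reproduce the pathwise conservation up to a small error.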

The next goal is to find uniform energy estimates for the global solutions of the Eq. (5.5). Recall that by Assumption 2.4, the nonlinearity F has a real antiderivative denoted by \({\hat{F}}.\)

Definition 5.5

We define the energy \(\mathcal {E}(u)\) of \(u\in {E_A}\) by

$$\begin{aligned} \mathcal {E}(u):= \frac{1}{2} \Vert A^{\frac{1}{2}} u \Vert _{H}^2+{\hat{F}}(u),\qquad u\in {E_A}. \end{aligned}$$
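For the model power nonlinearities \(F_{\alpha }^{\pm }(u)=\pm \vert u\vert ^{\alpha -1}u\) from the introduction, the antiderivative and hence the energy take the explicit form (a sketch for this particular choice of F):

```latex
\hat{F}(u) = \pm \frac{1}{\alpha+1} \int_M \vert u \vert^{\alpha+1}\,\mathrm{d}x,
\qquad
\mathcal{E}(u) = \frac{1}{2} \Vert A^{\frac{1}{2}} u \Vert_{H}^2
  \pm \frac{1}{\alpha+1} \Vert u \Vert_{L^{\alpha+1}(M)}^{\alpha+1}.
```

In the defocusing case the energy thus controls \(\Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1},\) which is the mechanism behind bounds of the type \(\Vert u\Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim {\hat{F}}(u)\) used later in this section.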

Note that \(\mathcal {E}(u)\) is well-defined by the embedding \({E_A}\hookrightarrow L^{\alpha +1}(M).\) In contrast to the uniform \(L^2\)-estimate on \([0,\infty ),\) we cannot exclude the growth of the energy on an infinite time interval. So, we fix \(T>0\) from now on. As a preparation, we formulate a lemma which simplifies the arguments when the Burkholder–Davis–Gundy inequality is used.

Lemma 5.6

Let \(r\in [1,\infty ),\)\(\varepsilon >0,\)\(T>0\) and \(X\in L^r(\Omega ,L^\infty (0,{T})).\) Then,

$$\begin{aligned} \Vert X \Vert _{L^r(\Omega ,L^2(0,t))}\le \varepsilon \Vert X \Vert _{L^r(\Omega ,L^\infty (0,t))} +\frac{1}{4\varepsilon }\int _0^t \Vert X \Vert _{L^r(\Omega ,L^\infty (0,s))} \mathrm {d}s,\qquad t\in [0,T]. \end{aligned}$$

Proof

By interpolation of \(L^2(0,t)\) between \(L^\infty (0,t)\) and \(L^1(0,t)\) and the elementary inequality \(\sqrt{a b}\le \varepsilon a+ \frac{1}{4\varepsilon }b\) for \(a,b\ge 0\) and \(\varepsilon >0,\) we obtain

$$\begin{aligned} \Vert X \Vert _{L^2(0,t)}\le \Vert X \Vert _{L^\infty (0,t)}^\frac{1}{2} \Vert X \Vert _{L^1(0,t)}^\frac{1}{2}\le \varepsilon \Vert X \Vert _{L^\infty (0,t)} +\frac{1}{4\varepsilon } \Vert X \Vert _{L^1(0,t)}. \end{aligned}$$

Now, we take the \(L^r(\Omega )\)-norm and apply Minkowski’s inequality to get

$$\begin{aligned} \Vert X \Vert _{L^r(\Omega ,L^2(0,t))}&\le \varepsilon \Vert X \Vert _{L^r(\Omega ,L^\infty (0,t))} +\frac{1}{4\varepsilon }\int _0^t \Vert X(s) \Vert _{L^r(\Omega )} \mathrm {d}s\\&\le \varepsilon \Vert X \Vert _{L^r(\Omega ,L^\infty (0,t))} +\frac{1}{4\varepsilon }\int _0^t \Vert X \Vert _{L^r(\Omega ,L^\infty (0,s))} \mathrm {d}s. \end{aligned}$$

\(\square \)
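The inequality of Lemma 5.6 can be checked on discretized sample paths, where it even holds exactly for the corresponding Riemann sums: \(\sum _i X_i^2 h \le \max _i \vert X_i\vert \sum _i \vert X_i\vert h\) combined with the elementary inequality \(\sqrt{ab}\le \varepsilon a+\frac{1}{4\varepsilon }b.\) A minimal sketch with hypothetical function and variable names:

```python
import numpy as np

def check_interpolation(X, t, eps):
    """Discrete check of the pointwise bound behind Lemma 5.6:
    ||X||_{L^2(0,t)} <= eps ||X||_{L^inf(0,t)} + (1/(4 eps)) ||X||_{L^1(0,t)},
    with all norms replaced by their Riemann-sum counterparts on a
    uniform grid of mesh h = t / len(X)."""
    h = t / len(X)
    l2 = np.sqrt(np.sum(X**2) * h)
    linf = np.max(np.abs(X))
    l1 = np.sum(np.abs(X)) * h
    return l2, eps * linf + l1 / (4 * eps)

rng = np.random.default_rng(1)
X = rng.standard_normal(1000)          # an arbitrary sample path
lhs, rhs = check_interpolation(X, t=2.0, eps=0.3)
```

Since \(l_2^2 \le l_\infty \, l_1\) holds exactly for the discrete sums, the bound is verified for every choice of \(\varepsilon >0.\)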

The next Proposition is the key step to show that we can apply Corollary 4.7 to the sequence of solutions \((u_n)_{n\in \mathbb {N}}\) of the Eq. (5.5) in the defocusing case.

Proposition 5.7

Under Assumption 2.6(i), the following assertions hold.

  1. (a)

    For all \(q\in [1,\infty )\) there is a constant \(C=C(q,\Vert u_0 \Vert _{{E_A}}, \alpha , F, \left( B_m\right) _{m\in \mathbb {N}},T)>0\) with

    $$\begin{aligned} \sup _{n\in \mathbb {N}}\mathbb {E}\Big [\sup _{t\in [0,T]} \left[ \Vert u_n(t) \Vert _{H}^2+\mathcal {E}(u_n(t))\right] ^q\Big ]\le C . \end{aligned}$$

    In particular, for all \(r\in [1,\infty )\) there is \(C_1=C_1(r,\Vert u_0 \Vert _{{E_A}}, \alpha , F, \left( B_m\right) _{m\in \mathbb {N}},T)>0\) with

    $$\begin{aligned} \sup _{n\in \mathbb {N}}\mathbb {E}\Big [\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}^{r}\Big ]\le C_1. \end{aligned}$$
  2. (b)

    The sequence \((u_n)_{n\in \mathbb {N}}\) satisfies the Aldous condition [A] in \({E_A^*}.\)

Proof

(ad a) By Assumption 2.4(ii) and (iii), the restriction \(\mathcal {E}{:}\,H_n \rightarrow \mathbb {R}\) of the energy to \(H_n\) is twice continuously Fréchet-differentiable with

$$\begin{aligned} \mathcal {E}'[v]h_1&={\text {Re}}\langle Av+F(v), h_1 \rangle ; \\ \mathcal {E}''[v] \left[ h_1,h_2\right]&= {\text {Re}}\big (A^{\frac{1}{2}}h_1,A^{\frac{1}{2}}h_2\big )_{H}+{\text {Re}}\langle F'[v]h_2, h_1 \rangle \end{aligned}$$

for \(v, h_1, h_2\in H_n.\) We compute

$$\begin{aligned}&{\text {tr}}\Big (\mathcal {E}''[u_n(s)] \left( -\mathrm {i}S_n B \left( S_n u_n(s)\right) ,-\mathrm {i}S_n B \left( S_n u_n(s)\right) \right) \Big )\\&\quad =\sum _{m=1}^{\infty }\mathcal {E}''[u_n(s)]\left( -\mathrm {i}S_n B_m S_n u_n(s),-\mathrm {i}S_n B_m S_n u_n(s)\right) \\&\quad = \sum _{m=1}^{\infty }\Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s)\Vert _{H}^2 +\sum _{m=1}^{\infty }{\text {Re}}\langle F'[u_n(s)] \left( S_n B_m S_n u_n(s)\right) , S_n B_m S_n u_n(s) \rangle \end{aligned}$$

and therefore, Itô’s formula and Proposition 5.4 lead to the identity

$$\begin{aligned}&\Vert u_n(t) \Vert _{H}^2+\mathcal {E}\left( u_n(t)\right) =\Vert P_n u_0 \Vert _{H}^2+\mathcal {E}\left( P_n u_0\right) \nonumber \\&\quad +\,\int _0^t{\text {Re}}\langle A u_n(s)+F(u_n(s)), -\mathrm {i}A u_n(s)-\mathrm {i}P_n F(u_n(s)) \rangle \mathrm {d}s\nonumber \\&\quad +\,\int _0^t {\text {Re}}\langle A u_n(s)+F(u_n(s)), \mu _n(u_n(s)) \rangle \mathrm {d}s\nonumber \\&\quad +\,\int _0^t {\text {Re}}\langle A u_n(s)+F(u_n(s)), -\mathrm {i}S_n B \left( S_n u_n(s)\right) \mathrm {d}W(s) \rangle \nonumber \\&\quad +\,\frac{1}{2}\sum _{m=1}^{\infty }\int _0^t \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s)\Vert _{H}^2\mathrm {d}s\nonumber \\&\quad +\,\frac{1}{2} \int _0^t \sum _{m=1}^{\infty }{\text {Re}}\langle F'[u_n(s)] \left( S_n B_m S_n u_n(s)\right) , S_n B_m S_n u_n(s) \rangle \mathrm {d}s \end{aligned}$$
(5.7)

almost surely for all \(t\in [0,T].\) We can use (5.2) for

$$\begin{aligned}&{\text {Re}}\langle F(v), -\mathrm {i}P_n F(v) \rangle ={\text {Re}}\left[ \mathrm {i}\langle F(v), P_n F(v) \rangle \right] =0;\\&{\text {Re}}\left[ \langle A v, -\mathrm {i}P_n F(v) \rangle +\langle F(v), -\mathrm {i}A v \rangle \right] ={\text {Re}}\left[ -\langle A v, \mathrm {i}F(v) \rangle +\overline{\langle A v, \mathrm {i}F(v) \rangle }\right] =0;\\&{\text {Re}}\big (A v, -\mathrm {i}A v\big )_{H}={\text {Re}}\left[ \mathrm {i}\Vert A v \Vert _{H}^2\right] =0 \end{aligned}$$

for all \(v\in H_n\) to simplify (5.7) and get

$$\begin{aligned} \Vert u_n(t) \Vert _{H}^2+\mathcal {E}\left( u_n(t)\right)&=\Vert P_n u_0 \Vert _{H}^2+\mathcal {E}\left( P_n u_0\right) +\int _0^t {\text {Re}}\langle A u_n(s)+F(u_n(s)), \mu _n(u_n(s)) \rangle \mathrm {d}s\nonumber \\&\quad +\,\int _0^t {\text {Re}}\langle A u_n(s)+F(u_n(s)), -\mathrm {i}S_n B \left( S_n u_n(s)\right) \mathrm {d}W(s) \rangle \nonumber \\&\quad +\,\frac{1}{2}\sum _{m=1}^{\infty }\int _0^t \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s)\Vert _{H}^2\mathrm {d}s\nonumber \\&\quad +\,\frac{1}{2}\int _0^t \sum _{m=1}^{\infty }{\text {Re}}\langle F'[u_n(s)] \left( S_n B_m S_n u_n(s)\right) , S_n B_m S_n u_n(s) \rangle \mathrm {d}s \end{aligned}$$
(5.8)

almost surely for all \(t\in [0,T].\) Next, we fix \(\delta >0,\)\(q>1\) and apply the Itô formula to the process on the LHS of (5.8) and the function \(\varPhi {:}\,(-\frac{\delta }{2},\infty )\rightarrow \mathbb {R}\) defined by \(\varPhi (x):=\left( x+\delta \right) ^q.\) The derivatives are given by

$$\begin{aligned} \varPhi '(x)=q \left( x+\delta \right) ^{q-1},\qquad \varPhi ''(x)=q (q-1)\left( x+\delta \right) ^{q-2},\quad x\in \left( -\frac{\delta }{2},\infty \right) . \end{aligned}$$

With the short notation

$$\begin{aligned} Y(s):=\delta +\Vert u_n(s) \Vert _{H}^2+\mathcal {E}\left( u_n(s)\right) ,\qquad s\in [0,T], \end{aligned}$$

we obtain

$$\begin{aligned} Y(t)^{q}&=\left[ \delta +\Vert P_n u_0 \Vert _{H}^2+\mathcal {E}\left( P_n u_0\right) \right] ^q +q\int _0^t Y(s)^{q-1} {\text {Re}}\langle A u_n(s)+F(u_n(s)), \mu _n(u_n(s)) \rangle \mathrm {d}s\nonumber \\&\quad +\,q\int _0^t Y(s)^{q-1} {\text {Re}}\langle A u_n(s)+F(u_n(s)), -\mathrm {i}S_n B \left( S_n u_n(s)\right) \mathrm {d}W(s) \rangle \nonumber \\&\quad +\,\frac{q}{2}\sum _{m=1}^{\infty }\int _0^t Y(s)^{q-1} \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s)\Vert _{H}^2\mathrm {d}s\nonumber \\&\quad +\,\frac{q}{2} \sum _{m=1}^{\infty }\int _0^t Y(s)^{q-1} {\text {Re}}\langle F'[u_n(s)] \left( S_n B_m S_n u_n(s)\right) , S_n B_m S_n u_n(s) \rangle \mathrm {d}s\nonumber \\&\quad +\,\frac{q}{2}(q-1)\sum _{m=1}^{\infty }\int _0^t Y(s)^{q-2} \left[ {\text {Re}}\langle A u_n(s)+F(u_n(s)), -\mathrm {i}S_n B_m S_n u_n(s) \rangle \right] ^2 \mathrm {d}s \end{aligned}$$
(5.9)

almost surely for all \(t\in [0,T].\) In order to treat the stochastic integral, we use Propositions 5.2 and 5.4 to estimate for fixed \(s\in [0,T]\)

$$\begin{aligned} \vert \big (A u_n(s),-\mathrm {i}S_n B_m S_n u_n(s)\big )_{H}\vert&\le \Vert A^{\frac{1}{2}}u_n(s)\Vert _{H} \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s) \Vert _{H}\nonumber \\&\le \Vert A^{\frac{1}{2}}u_n(s)\Vert _{H} \Vert S_n B_m S_n u_n(s)\Vert _{E_A}\nonumber \\&\le \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H} \Vert S_n \Vert _{{\mathcal {L}}({E_A})}^2\Vert B_m\Vert _{{{\mathcal {L}}({E_A})}} \Vert u_n(s)\Vert _{E_A}\nonumber \\&\le \left( \Vert u_n(s) \Vert _{H}^2+\Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^2\right) \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}\nonumber \\&\lesssim Y(s) \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}} \end{aligned}$$
(5.10)

and (2.5), (2.10) and Proposition 5.2 to estimate

$$\begin{aligned} \vert \langle F(u_n(s)), -\mathrm {i}S_n B_m S_n u_n(s) \rangle \vert&\le \Vert F(u_n(s))\Vert _{L^{\frac{\alpha +1}{\alpha }}(M)}\Vert S_n B_m S_n u_n(s) \Vert _{L^{\alpha +1}(M)}\nonumber \\&\le \Vert u_n(s)\Vert _{L^{\alpha +1}(M)}^{\alpha +1} \Vert S_n\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2 \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}\nonumber \\&\lesssim {\hat{F}}(u_n(s)) \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})} \nonumber \\&\lesssim Y(s) \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}. \end{aligned}$$
(5.11)

The Burkholder–Davis–Gundy inequality, the estimates (5.10) and (5.11), Assumption 2.7 and Lemma 5.6 applied to the process \(X=Y^q\) with \(r=1\) yield for any \(\varepsilon >0\)

$$\begin{aligned} \mathbb {E}&\left[ \sup _{s\in [0,t]} \left| \int _0^s Y(r)^{q-1}{\text {Re}}\langle A u_n(r)+F(u_n(r)), -\mathrm {i}S_n B \left( S_n u_n(r)\right) \mathrm {d}W(r) \rangle \right| \right] \nonumber \\&\lesssim \mathbb {E}\left[ \left( \int _0^t \sum _{m=1}^{\infty }\left| Y(r)^{q-1} \langle A u_n(r)+F(u_n(r)), -\mathrm {i}S_n B_m S_n u_n(r) \rangle \right| ^2 \mathrm {d}r\right) ^{\frac{1}{2}}\right] \nonumber \\&\lesssim \mathbb {E}\left[ \left( \int _0^t Y(r)^{2q} \mathrm {d}r\right) ^{\frac{1}{2}}\right] \le \varepsilon \mathbb {E}\left[ \sup _{s\in [0,{t}]} Y(s)^q\right] + \frac{1}{4 \varepsilon } \int _0^t \mathbb {E}\left[ \sup _{r\in [0,s]} Y(r)^q\right] \mathrm {d}s . \end{aligned}$$
(5.12)

The integrands of the deterministic integrals can be estimated by using the bounds (5.4), Proposition 5.4 for the linear part and (2.5) as well as (2.10) for the nonlinear part. We fix \(s\in [0,T]\) and get

$$\begin{aligned} {\text {Re}}\big (A u_n(s), \left( S_n B_m S_n \right) ^2 u_n(s)\big )_{H}\le & {} \Vert A^{\frac{1}{2}}u_n(s)\Vert _{H} \Vert A^{\frac{1}{2}}\left( S_n B_m S_n \right) ^2 u_n(s)\Vert _{H}\nonumber \\\le & {} \Vert A^{\frac{1}{2}}u_n(s)\Vert _{H} \Vert \left( S_n B_m S_n\right) ^2 u_n(s)\Vert _{E_A}\nonumber \\\le & {} \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H} \Vert S_n\Vert _{{{\mathcal {L}}({E_A})}}^4 \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}^2 \Vert u_n(s)\Vert _{E_A}\nonumber \\\le & {} \left( \Vert u_n(s) \Vert _{H}^2+\Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^2\right) \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}^2\nonumber \\\lesssim & {} Y(s) \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}^2; \end{aligned}$$
(5.13)
$$\begin{aligned} {\text {Re}}\langle F(u_n(s)), \left( S_n B_m S_n \right) ^2 u_n(s) \rangle\le & {} \Vert F(u_n(s))\Vert _{L^{\frac{\alpha +1}{\alpha }}(M)}\Vert \left( S_n B_m S_n\right) ^2 u_n(s) \Vert _{L^{\alpha +1}(M)}\nonumber \\\lesssim & {} \Vert u_n(s)\Vert _{L^{\alpha +1}(M)}^{\alpha +1} \Vert S_n\Vert _{{\mathcal {L}}(L^{\alpha +1})}^4 \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2\nonumber \\\lesssim & {} {\hat{F}}(u_n(s)) \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2 \lesssim Y(s) \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2; \end{aligned}$$
(5.14)
$$\begin{aligned} \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s)\Vert _{H}^2\le & {} \Vert S_n B_m S_n u_n(s)\Vert _{E_A}^2 \le \Vert S_n\Vert _{{{\mathcal {L}}({E_A})}}^4 \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}^2 \Vert u_n(s)\Vert _{E_A}^2\nonumber \\\le & {} \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}^2 \left( \Vert u_n(s) \Vert _{H}^2+\Vert A^{\frac{1}{2}}u_n(s)\Vert _{H}^2\right) \nonumber \\\lesssim & {} \Vert B_m\Vert _{{{\mathcal {L}}({E_A})}}^2 Y(s) \end{aligned}$$
(5.15)

for \(m\in \mathbb {N}\) and \(s\in [0,T].\) By the bounds (5.4) of \(S_n\) and the Assumptions (2.7) and (2.10) on the nonlinearity, we obtain

$$\begin{aligned} {\text {Re}}\langle F'[u_n(s)] \left( S_n B_m S_n u_n(s)\right) , S_n B_m S_n u_n(s) \rangle&\lesssim \Vert F'[u_n(s)] \Vert _{L^{\alpha +1}\rightarrow L^\frac{\alpha +1}{\alpha }} \Vert S_n B_m S_n u_n(s)\Vert _{L^{\alpha +1}(M)}^2\nonumber \\&\lesssim \Vert u_n(s)\Vert _{L^{\alpha +1}(M)}^{\alpha +1} \Vert S_n\Vert _{{\mathcal {L}}(L^{\alpha +1})}^4 \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2\nonumber \\&\lesssim {\hat{F}}(u_n(s)) \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2 \lesssim Y(s) \Vert B_m\Vert _{{\mathcal {L}}(L^{\alpha +1})}^2 . \end{aligned}$$
(5.16)

Substituting the inequalities (5.12) to (5.16), into the identity (5.9), we get for each \(t\in [0,T]\)

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\in [0,t]}Y(s)^q\right]&\lesssim _q \left[ \delta +\Vert P_n u_0 \Vert _{H}^2+\mathcal {E}(P_n u_0)\right] ^q +\mathbb {E}\int _0^t \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2 Y(s)^q\mathrm {d}s\nonumber \\&\quad +\,\mathbb {E}\int _0^t \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(L^{\alpha +1})}}^2Y(s)^q \mathrm {d}s \nonumber \\&\quad +\,\varepsilon \mathbb {E}\left[ \sup _{s\in [0,t]}Y(s)^q\right] +\frac{1}{4 \varepsilon } \int _0^t \mathbb {E}\left[ \sup _{s\in [0,r]} Y(s)^q\right] \mathrm {d}r\nonumber \\&\quad +\,\mathbb {E}\sum _{m=1}^{\infty }\int _0^t \Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2 Y(s)^q \mathrm {d}s + \mathbb {E}\int _0^t\sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(L^{\alpha +1})}}^2Y(s)^q \mathrm {d}s\nonumber \\&\quad +\,\mathbb {E}\int _0^t Y(s)^{q} \sum _{m=1}^{\infty }\max \{\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2,\Vert B_m \Vert _{{\mathcal {L}}(L^{\alpha +1})}^2\} \mathrm {d}s\nonumber \\&\lesssim \left[ \delta +\Vert u_0 \Vert _{H}^2+\mathcal {E}(P_n u_0)\right] ^q +\mathbb {E}\int _0^t Y(s)^q\mathrm {d}s\nonumber \\&\quad +\,\varepsilon \mathbb {E}\left[ \sup _{s\in [0,t]}Y(s)^q\right] +\frac{1}{4 \varepsilon } \int _0^t \mathbb {E}\left[ \sup _{s\in [0,r]} Y(s)^q\right] \mathrm {d}r\nonumber \\&\lesssim _T \left[ \delta +\Vert u_0 \Vert _{H}^2+\mathcal {E}(P_n u_0)\right] ^q +\varepsilon \mathbb {E}\left[ \sup _{s\in [0,t]}Y(s)^q\right] \nonumber \\&\quad + \int _0^t \mathbb {E}\left[ \sup _{s\in [0,r]} Y(s)^q\right] \mathrm {d}r. \end{aligned}$$
(5.17)

Choosing \(\varepsilon >0\) small enough in inequality (5.17), the Gronwall lemma yields

$$\begin{aligned} \mathbb {E}\big [\sup _{s\in [0,t]}Y(s)^q\big ]\le C\left[ \delta +\Vert u_0 \Vert _{H}^2+\mathcal {E}(P_n u_0)\right] ^q e^{C t},\qquad t\in [0,T], \end{aligned}$$

with a constant \(C>0\), which is uniform in \(n\in \mathbb {N}.\) Because of

$$\begin{aligned} \mathcal {E}(P_n u_0)\lesssim \Vert A^{\frac{1}{2}}P_n u_0 \Vert _{H}^2+\Vert P_n u_0 \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim \Vert P_n u_0 \Vert _{E_A}^2+\Vert P_n u_0 \Vert _{E_A}^{\alpha +1}\lesssim 1, \end{aligned}$$

we obtain the assertion of Proposition 5.7, part (a).
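The version of the Gronwall lemma invoked after (5.17) is the classical integral form, applied to \(\varphi (t)=\mathbb {E}\big [\sup _{s\in [0,t]}Y(s)^q\big ]\) once the \(\varepsilon \)-term has been absorbed into the left-hand side:

```latex
\varphi(t) \le a + C \int_0^t \varphi(s)\,\mathrm{d}s
\quad \text{for all } t \in [0,T]
\qquad \Longrightarrow \qquad
\varphi(t) \le a\, e^{C t}, \qquad t \in [0,T].
```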

(ad b) Now, we continue with the proof of the Aldous condition. We have

$$\begin{aligned} u_n(t)- P_n u_0&= -\mathrm {i}\int _0^t A u_n(s) \mathrm {d}s-\mathrm {i}\int _0^t P_n F(u_n(s)) \mathrm {d}s+\int _0^t \mu _n(u_n(s)) \mathrm {d}s\\&\quad - \mathrm {i}\int _0^t S_n B (S_n u_n(s)) \mathrm {d}W(s)\\&=:J_1(t)+J_2(t)+J_3(t)+J_4(t) \end{aligned}$$

in \(H_n\) almost surely for all \(t\in [0,T]\) and therefore

$$\begin{aligned} \Vert u_n((\tau _n+\theta )\wedge T)-u_n(\tau _n)\Vert _{E_A^*}\le \sum _{k=1}^4 \Vert J_k((\tau _n+\theta )\wedge T)-J_k(\tau _n)\Vert _{{E_A^*}} \end{aligned}$$

for each sequence \(\left( \tau _n\right) _{n\in \mathbb {N}}\) of stopping times and \(\theta >0.\) Hence, we get

$$\begin{aligned} \mathbb {P}\left\{ \Vert u_n((\tau _n+\theta )\wedge T)-u_n(\tau _n)\Vert _{E_A^*}\ge \eta \right\} \le \sum _{k=1}^4 \mathbb {P}\left\{ \Vert J_k((\tau _n+\theta )\wedge T)-J_k(\tau _n)\Vert _{{E_A^*}}\ge \frac{\eta }{4}\right\} \end{aligned}$$
(5.18)

for a fixed \(\eta >0\). We aim to apply Tschebyscheff’s inequality and estimate the expected value of each term in the sum. We use part a) for

$$\begin{aligned} \mathbb {E}\Vert J_1((\tau _n+\theta )\wedge T)-J_1(\tau _n)\Vert _{{E_A^*}}&\le \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \Vert A u_n(s)\Vert _{E_A^*}\mathrm {d}s \le \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \Vert A^{\frac{1}{2}}u_n(s)\Vert _{H} \mathrm {d}s\\&\lesssim \theta \mathbb {E}\left[ \sup _{s\in [0,T]} \Vert u_n(s) \Vert _{E_A}\right] \le \theta \mathbb {E}\left[ \sup _{s\in [0,T]} \Vert u_n(s) \Vert _{E_A}^2\right] ^{\frac{1}{2}}\le \theta C_1; \end{aligned}$$

the embedding \( {L^{\frac{\alpha +1}{\alpha }}(M)}\hookrightarrow {E_A^*}\) and the estimate (2.5) of the nonlinearity F for

$$\begin{aligned} \mathbb {E}\Vert J_2((\tau _n+\theta )\wedge T)-J_2(\tau _n)\Vert _{{E_A^*}}&\le \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \Vert P_n F(u_n(s))\Vert _{E_A^*}\mathrm {d}s\\&\le \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \Vert F(u_n(s))\Vert _{E_A^*}\mathrm {d}s\\&\lesssim \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \Vert F(u_n(s))\Vert _{L^{\frac{\alpha +1}{\alpha }}(M)}\mathrm {d}s\\&\lesssim \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \Vert u_n(s)\Vert _{L^{\alpha +1}(M)}^\alpha \mathrm {d}s \\&\lesssim \theta \mathbb {E}\big [\sup _{s\in [0,T]} \Vert u_n(s) \Vert _{E_A}^\alpha \big ] \le \theta C_2 \end{aligned}$$

and Propositions 5.2 and 5.4 for

$$\begin{aligned} \mathbb {E}\Vert J_3((\tau _n+\theta )\wedge T)-J_3(\tau _n)\Vert _{{E_A^*}}&= \frac{1}{2}\mathbb {E}\left\| \int _{\tau _n}^{(\tau _n+\theta )\wedge T}\sum _{m=1}^{\infty }\left( S_n B_m S_n\right) ^2 u_n(s) \mathrm {d}s \right\| _{E_A^*}\\&\le \frac{1}{2}\mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \sum _{m=1}^{\infty }\Vert \left( S_n B_m S_n\right) ^2 u_n(s)\Vert _{E_A^*}\mathrm {d}s\\&\lesssim \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T}\sum _{m=1}^{\infty }\Vert \left( S_n B_m S_n\right) ^2 u_n(s)\Vert _{H} \mathrm {d}s\\&\le \mathbb {E}\int _{\tau _n}^{(\tau _n+\theta )\wedge T} \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(H)}}^2\Vert u_n(s)\Vert _{H} \mathrm {d}s\\&\lesssim \theta \mathbb {E}\big [ \sup _{s\in [0,T]} \Vert u_n(s)\Vert _{H}\big ] =C_3 \theta . \end{aligned}$$

Finally, we use the Itô isometry and again the Propositions 5.2 and 5.4 for

$$\begin{aligned} \mathbb {E}\Vert J_4((\tau _n+\theta )\wedge T)-J_4(\tau _n)\Vert _{{E_A^*}}^2&\le \mathbb {E}\left\| \int _{\tau _n}^{(\tau _n+\theta )\wedge T} S_n B \left( S_n u_n(s)\right) \mathrm {d}W(s)\right\| _{H}^2\\&= \mathbb {E}\left[ \int _{\tau _n}^{(\tau _n+\theta )\wedge T}\Vert S_n B\left( S_n u_n(s)\right) \Vert _{{\text {HS}}(Y,{H})}^2 \mathrm {d}s\right] \\&= \mathbb {E}\left[ \int _{\tau _n}^{(\tau _n+\theta )\wedge T}\sum _{m=1}^{\infty }\Vert S_n B_m S_n u_n(s)\Vert _{H}^2 \mathrm {d}s\right] \\&\le \mathbb {E}\left[ \int _{\tau _n}^{(\tau _n+\theta )\wedge T}\sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(H)}}^2\Vert u_n(s)\Vert _{H}^2 \mathrm {d}s\right] \\&\lesssim \theta \mathbb {E}\big [\sup _{s\in [0,T]} \Vert u_n(s)\Vert _{H}^2\big ] =\theta C_4. \end{aligned}$$

By the Tschebyscheff inequality, we obtain for a given \(\eta >0\)

$$\begin{aligned} \mathbb {P}\left\{ \Vert J_k((\tau _n+\theta )\wedge T)-J_k(\tau _n)\Vert _{{E_A^*}}\ge \frac{\eta }{4}\right\} \le \frac{4}{\eta } \mathbb {E}\Vert J_k((\tau _n+\theta )\wedge T)-J_k(\tau _n)\Vert _{{E_A^*}}\le \frac{ 4C_k \theta }{\eta } \end{aligned}$$
(5.19)

for \(k\in \{1,2,3\}\) and

$$\begin{aligned} \mathbb {P}\left\{ \Vert J_4((\tau _n+\theta )\wedge T)-J_4(\tau _n)\Vert _{{E_A^*}}\ge \frac{\eta }{4}\right\} \le \frac{16}{\eta ^2} \mathbb {E}\Vert J_4((\tau _n+\theta )\wedge T)-J_4(\tau _n)\Vert _{{E_A^*}}^2\le \frac{16 C_4 \theta }{\eta ^2}. \end{aligned}$$
(5.20)

Let us fix \(\varepsilon >0.\) Due to estimates (5.19) and (5.20) we can choose \(\delta _1,\dots ,\delta _4>0\) (for instance \(\delta _k=\frac{\varepsilon \eta }{16 C_k}\) for \(k\in \{1,2,3\}\) and \(\delta _4=\frac{\varepsilon \eta ^2}{64 C_4}\)) such that

$$\begin{aligned} \mathbb {P}\left\{ \Vert J_k((\tau _n+\theta )\wedge T)-J_k(\tau _n)\Vert _{{E_A^*}}\ge \frac{\eta }{4}\right\} \le \frac{\varepsilon }{4} \end{aligned}$$

for \(0<\theta \le \delta _k\) and \(k=1,\dots ,4.\) With \(\delta := \min \left\{ \delta _1,\dots ,\delta _4\right\} ,\) using (5.18) we get

$$\begin{aligned} \mathbb {P}\left\{ \Vert u_n((\tau _n+\theta )\wedge T)-u_n(\tau _n)\Vert _{E_A^*}\ge \eta \right\} \le \varepsilon \end{aligned}$$

for all \(n\in \mathbb {N}\) and \(0<\theta \le \delta \) and therefore, the Aldous condition [A] holds in \(E_A^*.\)\(\square \)

We continue with the a priori estimate for solutions of (5.5) with a focusing nonlinearity. Note that this case is harder since the expression

$$\begin{aligned} \Vert v \Vert _{H}^2+\mathcal {E}(v):=\Vert v \Vert _{E_A}^2+{\hat{F}}(v), \qquad v\in H_n, \end{aligned}$$

does not dominate \(\Vert v \Vert _{E_A}^2,\) because \({\hat{F}}\) is negative.

Proposition 5.8

Under Assumption 2.6(i\(^{\prime })\), the following assertions hold:

  1. (a)

    For all \(r\in [1,\infty ),\) there is a constant \(C=C(r,\Vert u_0 \Vert _{E_A}, \alpha , F, \left( B_m\right) _{m\in \mathbb {N}},T)>0\) with

    $$\begin{aligned} \sup _{n\in \mathbb {N}}\mathbb {E}\left[ \sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}^{r}\right] \le C . \end{aligned}$$
  2. (b)

    The sequence \((u_n)_{n\in \mathbb {N}}\) satisfies the Aldous condition [A] in \({E_A^*}.\)

Proof

Let \(\varepsilon >0.\) Assumption 2.6(i\(^{\prime }\)) and Young’s inequality imply that there are \(\gamma >0\) and \(C_\varepsilon >0\) such that

$$\begin{aligned} \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1} \lesssim \varepsilon \Vert u \Vert _{E_A}^2+C_\varepsilon \Vert u \Vert _H^\gamma ,\qquad u\in {E_A}, \end{aligned}$$
(5.21)
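A sketch of how an estimate of the form (5.21) typically arises (this is an illustration; the precise form is dictated by Assumption 2.6(i\(^{\prime }\))): assume an interpolation inequality \(\Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim \Vert u \Vert _{E_A}^{\theta (\alpha +1)}\Vert u \Vert _{H}^{(1-\theta )(\alpha +1)}\) with an exponent \(\theta \in (0,1)\) satisfying \(\theta (\alpha +1)<2.\) Then Young's inequality \(ab\le \varepsilon a^p+C_\varepsilon b^{p'}\) with \(p=\frac{2}{\theta (\alpha +1)}>1\) yields

$$\begin{aligned} \Vert u \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\lesssim \varepsilon \Vert u \Vert _{E_A}^{2}+C_\varepsilon \Vert u \Vert _{H}^{\gamma },\qquad \gamma =\frac{2(1-\theta )(\alpha +1)}{2-\theta (\alpha +1)}. \end{aligned}$$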

and therefore by Proposition 5.4, we infer that

$$\begin{aligned} -{\hat{F}}(u_n(t))&\lesssim \Vert u_n(t) \Vert _{L^{\alpha +1}(M)}^{\alpha +1} \lesssim \varepsilon \Vert u_n(t) \Vert _{E_A}^2+C_\varepsilon \Vert u_n(t) \Vert _H^\gamma \nonumber \\&\lesssim \varepsilon \Vert A^{\frac{1}{2}}u_n(t) \Vert _{H}^2+\varepsilon \Vert u_0 \Vert _H^2+C_\varepsilon \Vert u_0 \Vert _H^\gamma ,\qquad t\in [0,T]. \end{aligned}$$
(5.22)

By the same calculations as in the proof of Proposition 5.7 we get

$$\begin{aligned}&\frac{1}{2} \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^2=\mathcal {E}(u_n(s))-{\hat{F}}(u_n(s))\nonumber \\&=-{\hat{F}}(u_n(s))+\mathcal {E}\left( P_n u_0\right) +\int _0^s {\text {Re}}\langle A u_n(r)+F(u_n(r)), \mu _n(u_n(r)) \rangle \mathrm {d}r\nonumber \\&\quad +\,\int _0^s {\text {Re}}\langle A u_n(r)+F(u_n(r)), -\mathrm {i}S_n B\left( S_n u_n(r)\right) \mathrm {d}W(r) \rangle \nonumber \\&\quad +\,\frac{1}{2}\sum _{m=1}^{\infty }\int _0^s \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(r)\Vert _{H}^2\mathrm {d}r\nonumber \\&\quad +\,\frac{1}{2}\int _0^s \sum _{m=1}^{\infty }{\text {Re}}\langle F'[u_n(r)] \left( S_n B_m S_n u_n(r)\right) , S_n B_m S_n u_n(r) \rangle \mathrm {d}r \end{aligned}$$
(5.23)

almost surely for all \(s\in [0,T].\) In the following, we fix \(q\in [1,\infty )\) and \(t\in (0,T]\) and take the \(L^q(\Omega ,L^\infty (0,t))\)-norm of the identity (5.23). We will use the notation

$$\begin{aligned} X(s):=\left[ \Vert u_0 \Vert _{H}^2+\Vert A^{\frac{1}{2}}u_n(s) \Vert _H^2+\Vert u_n(s) \Vert _{L^{\alpha +1}(M)}^{\alpha +1}\right] , \qquad s\in [0,T], \end{aligned}$$
(5.24)

and estimate the stochastic integral by the Burkholder–Davis–Gundy inequality and the estimates (5.10) and (5.11) as well as Lemma 5.6

$$\begin{aligned}&\left\| \int _0^\cdot {\text {Re}}\langle A u_n(r)+F(u_n(r)), -\mathrm {i}S_n B \left( S_n u_n(r)\right) \mathrm {d}W(r) \rangle \right\| _{L^q(\Omega ,L^\infty (0,t))}\nonumber \\&\quad \lesssim \left\| \left( \sum _{m=1}^{\infty }\vert \langle A u_n(r)+F(u_n(r)), -\mathrm {i}S_n B_m S_n u_n(r) \rangle \vert ^2\right) ^\frac{1}{2}\right\| _{L^q(\Omega ,L^2([0,t]))}\nonumber \\&\quad \lesssim \left\| X\right\| _{L^q(\Omega ,L^2([0,t]))}\nonumber \\&\quad \le \varepsilon \Vert X \Vert _{L^q(\Omega ,L^\infty (0,t))} +\frac{1}{4\varepsilon }\int _0^t \Vert X \Vert _{L^q(\Omega ,L^\infty (0,s))} \mathrm {d}s. \end{aligned}$$
(5.25)

By (5.22), we get

$$\begin{aligned} \Vert -{\hat{F}}(u_n) \Vert _{L^q(\Omega ,L^\infty (0,t))} \lesssim \varepsilon \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^2\right\| _{L^q(\Omega ,L^\infty (0,t))}+\varepsilon \Vert u_0 \Vert _H^2+C_\varepsilon \Vert u_0 \Vert _H^\gamma . \end{aligned}$$
(5.26)

For the following estimates, we will use (5.13)–(5.16) and the Minkowski inequality and obtain

$$\begin{aligned}&\left\| \int _0^\cdot {\text {Re}}\langle A u_n(s)+F(u_n(s)), \mu _n(u_n(s)) \rangle \mathrm {d}s \right\| _{L^q(\Omega ,L^\infty (0,t))}\nonumber \\&\quad \lesssim \left\| \int _0^t X(s) \mathrm {d}s\right\| _{L^q(\Omega )} \lesssim \int _0^t \Vert X(s) \Vert _{L^q(\Omega )} \mathrm {d}s; \end{aligned}$$
(5.27)
$$\begin{aligned}&\left\| \sum _{m=1}^{\infty }\int _0^\cdot \Vert A^{\frac{1}{2}}S_n B_m S_n u_n(s)\Vert _{H}^2\mathrm {d}s\right\| _{L^q(\Omega ,L^\infty (0,t))}\nonumber \\&\quad \lesssim \left\| \int _0^t X(s) \mathrm {d}s\right\| _{L^q(\Omega )}\lesssim \int _0^t \Vert X(s) \Vert _{L^q(\Omega )} \mathrm {d}s; \end{aligned}$$
(5.28)
$$\begin{aligned}&\left\| \int _0^\cdot \sum _{m=1}^{\infty }{\text {Re}}\langle F'[u_n(s)] \left( S_n B_m S_n u_n(s)\right) , S_n B_m S_n u_n(s) \rangle \mathrm {d}s \right\| _{L^q(\Omega ,L^\infty (0,t))}\nonumber \\&\quad \lesssim \left\| \int _0^t X(s) \mathrm {d}s\right\| _{L^q(\Omega )}\lesssim \int _0^t \Vert X(s) \Vert _{L^q(\Omega )} \mathrm {d}s. \end{aligned}$$
(5.29)

By (5.23) and the estimates (5.25)–(5.29), we get

$$\begin{aligned} \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,t))}&\lesssim \varepsilon \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^2\right\| _{L^q(\Omega ,L^\infty (0,t))}+\varepsilon \Vert u_0 \Vert _H^2\nonumber \\&\quad +C_\varepsilon \Vert u_0 \Vert _H^\gamma +\Vert u_0 \Vert _{E_A}\nonumber \\&\quad +\,\int _0^t \Vert X(s) \Vert _{L^q(\Omega )} \mathrm {d}s+\varepsilon \Vert X \Vert _{L^q(\Omega ,L^\infty (0,t))}\nonumber \\&\quad +\,\frac{1}{4\varepsilon }\int _0^t \Vert X \Vert _{L^q(\Omega ,L^\infty (0,s))} \mathrm {d}s \nonumber \\&\quad +\,\int _0^t \Vert X(s) \Vert _{L^q(\Omega )} \mathrm {d}s. \end{aligned}$$
(5.30)

In order to absorb the terms involving X into the LHS of (5.30), we exploit (5.21) to get

$$\begin{aligned} \Vert X \Vert _{L^q(\Omega ,L^\infty (0,t))}&\le \Vert u_0 \Vert _{H}^2+\mathbb {E}\left[ \sup _{s\in [0,t]} \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^{2q}\right] ^{\frac{1}{q}} +\mathbb {E}\left[ \sup _{s\in [0,t]} \Vert u_n(s) \Vert _{L^{\alpha +1}(M)}^{(\alpha +1)q}\right] ^{\frac{1}{q}}\\&\lesssim \Vert u_0 \Vert _{H}^2+\mathbb {E}\left[ \sup _{s\in [0,t]} \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^{2q}\right] ^{\frac{1}{q}}\\&\quad +\,\varepsilon \mathbb {E}\left[ \sup _{s\in [0,t]} \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^{2q}\right] ^{\frac{1}{q}}+\varepsilon \Vert u_0 \Vert _H^2+C_\varepsilon \Vert u_0 \Vert _H^\gamma \\&\lesssim \left\| \sup _{s\in [0,t]} \Vert A^{\frac{1}{2}}u_n(s) \Vert _{H}^{2}\right\| _{L^q(\Omega )}+ \Vert u_0 \Vert _H^2+ \Vert u_0 \Vert _H^\gamma . \end{aligned}$$

Hence, by (5.24), we obtain

$$\begin{aligned} \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,t))}&\lesssim \varepsilon \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^2\right\| _{L^q(\Omega ,L^\infty (0,t))}\\&\quad +\varepsilon \Vert u_0 \Vert _H^2+C_\varepsilon \Vert u_0 \Vert _H^\gamma +\Vert u_0 \Vert _{E_A}\\&\quad +\,\int _0^t \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,s))} \mathrm {d}s + t \Vert u_0 \Vert _H^2+ t \Vert u_0 \Vert _H^\gamma \\&\quad +\,\varepsilon \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,t))}+ \varepsilon \Vert u_0 \Vert _H^2+\varepsilon \Vert u_0 \Vert _H^\gamma . \end{aligned}$$

Choosing \(\varepsilon >0\) small enough, we get

$$\begin{aligned}&\left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,t))} \le C_1(\Vert u_0 \Vert _{E_A},T,q)+\int _0^t C_2(q)\left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,s))} \mathrm {d}s \end{aligned}$$

for all \(t\in [0,T]\), and thus the Gronwall lemma yields

$$\begin{aligned} \left\| \Vert A^{\frac{1}{2}}u_n \Vert _{H}^{2}\right\| _{L^q(\Omega ,L^\infty (0,t))}\le C_1(\Vert u_0 \Vert _{E_A},T,q) e^{C_2(q)t},\qquad t\in [0,T]. \end{aligned}$$

This implies that there is \(C>0\) with

$$\begin{aligned} \sup _{n\in \mathbb {N}}\mathbb {E}\Big [\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}^{2q}\Big ]\le C, \end{aligned}$$

since the H-norm is conserved by Proposition 5.4. Therefore, we obtain the assertion for \(r\ge 2.\) Finally, the case \(r\in [1,2)\) is an application of Hölder’s inequality.
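Explicitly, for \(r\in [1,2)\) one can apply Hölder's (or Jensen's) inequality on \(\Omega \) with the bound for \(r=2\) (i.e. \(q=1\)):

$$\begin{aligned} \mathbb {E}\Big [\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}^{r}\Big ]\le \mathbb {E}\Big [\sup _{t\in [0,T]} \Vert u_n(t) \Vert _{E_A}^{2}\Big ]^{\frac{r}{2}}\le C^{\frac{r}{2}}, \end{aligned}$$

which is again uniform in \(n\in \mathbb {N}.\)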

(ad b) Analogous to the proof of Proposition 5.7(b). \(\square \)

6 Construction of a martingale solution

The aim of this section is the construction of a solution of Eq. (1.1) by a suitable limiting process in the Galerkin equation (5.5) using the results from the previous sections. Let us recall that

$$\begin{aligned} Z_T:={C([0,T],{E_A^*})}\cap {L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\cap C_w([0,T],{E_A}). \end{aligned}$$

Proposition 6.1

Let \(\left( u_n\right) _{n\in \mathbb {N}}\) be the sequence of solutions to the Galerkin equation (5.5).

  1. (a)

    There are a subsequence \(\left( u_{n_k}\right) _{k\in \mathbb {N}}\), a probability space \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}}\right) \) and random variables \(v_k, v{:}\,{\tilde{\Omega }} \rightarrow Z_T\) with \({\tilde{\mathbb {P}}}^{v_k}=\mathbb {P}^{u_{n_k}}\) such that \(v_k\rightarrow v\)\({\tilde{\mathbb {P}}}\)-a.s. in \(Z_T\) for \(k\rightarrow \infty .\)

  2. (b)

    We have \(v_k \in C\left( [0,T],H_k\right) \)\({\tilde{\mathbb {P}}}\)-a.s. and for all \(r\in [1,\infty ),\) there is \(C>0\) with

    $$\begin{aligned} \sup _{k\in \mathbb {N}} \tilde{\mathbb {E}}\left[ \Vert v_k \Vert _{L^\infty (0,T;{E_A})}^r\right] \le C. \end{aligned}$$
  3. (c)

    For all \(r\in [1,\infty ),\) we have

    $$\begin{aligned} \tilde{\mathbb {E}}\left[ \Vert v \Vert _{L^\infty (0,T;{E_A})}^r\right] \le C \end{aligned}$$

    with the same constant \(C>0\) as in b).

For the precise dependence of the constants, we refer to the Propositions 5.7 and 5.8.

Proof

(ad a) The estimates to apply Corollary 4.7 are provided by Propositions 5.7 and 5.8.

(ad b) Since we have \(u_{n_k} \in C\left( [0,T],H_k\right) \)\(\mathbb {P}\)-a.s. and \(C\left( [0,T],H_k\right) \) is closed in \({C([0,T],{E_A^*})}\) and therefore a Borel set, we conclude \(v_k \in C\left( [0,T],H_k\right) \)\({\tilde{\mathbb {P}}}\)-a.s. by the identity of the laws. Furthermore, the map \(C\left( [0,T],H_k\right) \ni u \mapsto \Vert u \Vert _{{L^\infty (0,T;{E_A})}}^r\in [0,\infty )\) is continuous and therefore measurable, so that we can conclude that

$$\begin{aligned} \tilde{\mathbb {E}}\left[ \Vert v_k \Vert _{L^\infty (0,T;{E_A})}^r\right]&=\int _{C([0,T],H_k)}\Vert u \Vert _{L^\infty (0,T;{E_A})}^r \mathrm {d}\tilde{\mathbb {P}}^{v_k}(u)\\&=\int _{C([0,T],H_k)}\Vert u \Vert _{L^\infty (0,T;{E_A})}^r \mathrm {d}\mathbb {P}^{u_{n_k}}(u)\\&=\mathbb {E}\left[ \Vert u_{n_k} \Vert _{L^\infty (0,T;{E_A})}^r \right] . \end{aligned}$$

We use Proposition 5.7 in the defocusing case and Proposition 5.8 in the focusing case to get the assertion.

(ad c) We have \(v_n \rightarrow v\) almost surely in \({L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\) by part (a). From part (b) and the embedding \({L^\infty (0,T;{E_A})}\hookrightarrow {L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\), we obtain that the sequence \(\left( v_n\right) _{n\in \mathbb {N}}\) is bounded in \(L^{\alpha +1}({\tilde{\Omega }}\times [0,T]\times M).\) By Vitali’s Theorem (see [22, Theorem VI, 5.6]), we conclude

$$\begin{aligned} v_n \rightarrow v \quad \text {in} \quad L^2({\tilde{\Omega }},{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}) \end{aligned}$$

for \(n\rightarrow \infty .\) On the other hand, part b) yields the existence of \({\tilde{v}}\in L^r({\tilde{\Omega }},{L^\infty (0,T;{E_A})})\) for all \(r\in [1,\infty )\) with norm less than the constant \(C=C(\Vert u_0 \Vert _{E_A},T,r)>0\) and a subsequence \(\left( v_{n_k}\right) _{k\in \mathbb {N}}\) such that \(v_{n_k} \rightharpoonup ^* {\tilde{v}}\) for \(k\rightarrow \infty .\) In particular, \(v_{n_k} \rightharpoonup ^* {\tilde{v}}\) for \(k\rightarrow \infty \) in \(L^2({\tilde{\Omega }},{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})})\) and hence,

$$\begin{aligned} v={\tilde{v}}\in L^r({\tilde{\Omega }},{L^\infty (0,T;{E_A})}). \end{aligned}$$

\(\square \)

The next lemma shows how convergence in \(Z_T\) can be used to pass to the limit in the terms appearing in the Galerkin equation.

Lemma 6.2

Let \(z_n\in C([0,T],H_n)\) for \(n\in \mathbb {N}\) and \(z\in Z_T.\) Assume \(z_n \rightarrow z\) for \(n\rightarrow \infty \) in \(Z_T.\) Then, for \(t\in [0,T]\) and \(\psi \in {E_A}\) as \(n\rightarrow \infty \)

$$\begin{aligned}&\big (z_n(t),\psi \big )_{H} \rightarrow \langle z(t), \psi \rangle , \\&\int _0^t \big (A z_n(s),\psi \big )_{H} \mathrm {d}s \rightarrow \int _0^t \langle A z(s), \psi \rangle \mathrm {d}s, \\&\int _0^t \big (\mu _n\left( z_n(s)\right) ,\psi \big )_{H} \mathrm {d}s \rightarrow \int _0^t \langle \mu \left( z(s)\right) , \psi \rangle \mathrm {d}s,\\&\int _0^t \big (P_n F(z_n(s)),\psi \big )_{H} \mathrm {d}s \rightarrow \int _0^t \langle F( z(s)), \psi \rangle \mathrm {d}s. \end{aligned}$$

Proof

Step 1 We fix \(\psi \in {E_A}\) and \(t\in [0,T].\) Recall that the assumption implies \(z_n \rightarrow z\) for \(n\rightarrow \infty \) in \({C([0,T],{E_A^*})}.\) This can be used to deduce

$$\begin{aligned} \left| \big (z_n(t),\psi \big )_{H} - \langle z(t), \psi \rangle \right| \le \Vert z_n - z\Vert _{C([0,T],{E_A^*})}\Vert \psi \Vert _{E_A}\rightarrow 0. \end{aligned}$$

By \(z_n \rightarrow z\) in \(C_w([0,T],{E_A})\) we get \(\sup _{s\in [0,T]} \vert \langle z_n(s)-z(s), \varphi \rangle \vert \rightarrow 0\) for \(n\rightarrow \infty \) and all \(\varphi \in {E_A^*}.\) We plug in \(\varphi =A \psi \) and use \(\langle A z_n(s), \psi \rangle =\langle z_n(s), A \psi \rangle \) for \(n\in \mathbb {N}\) and \(s\in [0,t]\) to get

$$\begin{aligned}&\int _0^t \left| \big (A z_n(s),\psi \big )_{H}- \langle z(s), A \psi \rangle \right| \mathrm {d}s= \int _0^t \left| \langle z_n(s)-z(s), A \psi \rangle \right| \mathrm {d}s\\&\le T \sup _{s\in [0,T]} \vert \langle z_n(s)-z(s), A \psi \rangle \vert \rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$

Step 2 First, we fix \(m\in \mathbb {N}.\) Using that the operators \(B_m\) and \(S_n\) are selfadjoint, we get

$$\begin{aligned}&\int _0^t \left| \Big ((S_n B_m S_n)^2 z_n(s),\psi \Big )_{H} - \langle B_m^2 z(s), \psi \rangle \right| \mathrm {d}s \\&\quad \le \int _0^t \left| \Big ((S_n-I ) B_m S_n^2 B_m S_n z_n(s),\psi \Big )_{H} \right| \mathrm {d}s \\&+\int _0^t \left| \Big ( B_m (S_n^2-I ) B_m S_n z_n(s),\psi \Big )_{H} \right| \mathrm {d}s \\&\qquad +\,\int _0^t \left| \Big ( B_m^2 (S_n-I) z_n(s),\psi \Big )_{H} \right| \mathrm {d}s +\int _0^t \left| \langle B_m^2 \left( z_n(s)-z(s)\right) , \psi \rangle \right| \mathrm {d}s\\&\quad \le T \Vert z_n \Vert _{C([0,T],{E_A^*})}\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2 \Vert S_n \Vert _{{{\mathcal {L}}({E_A})}}^3 \Vert (S_n-I)\psi \Vert _{E_A}\\&\qquad +\,T \Vert z_n \Vert _{C([0,T],{E_A^*})}\Vert S_n \Vert _{{{\mathcal {L}}({E_A})}} \Vert B_m \Vert _{{{\mathcal {L}}({E_A})}} \Vert S_n+I \Vert _{{{\mathcal {L}}({E_A})}}\Vert (S_n-I) \left( B_m \psi \right) \Vert _{E_A}\\&\qquad +\,T \Vert z_n \Vert _{C([0,T],{E_A^*})}\Vert (S_n-I) \left( B_m^2 \psi \right) \Vert _{E_A}\\&\qquad +\,T \Vert z_n-z \Vert _{C([0,T],{E_A^*})}\Vert B_m^2 \Vert _{{{\mathcal {L}}({E_A})}} \Vert \psi \Vert _{E_A}\longrightarrow 0, \qquad n\rightarrow \infty , \end{aligned}$$

since \(S_n \varphi \rightarrow \varphi \) in \({E_A}\) for \(\varphi \in {E_A}\) by Proposition 5.2 and \(z_n \rightarrow z\) in \({C([0,T],{E_A^*})}.\) By the estimate

$$\begin{aligned}&\int _0^t \left| \Big ((S_n B_m S_n)^2 z_n(s),\psi \Big )_{H} - \langle B_m^2 z(s), \psi \rangle \right| \mathrm {d}s\\&\quad \le T \Vert \psi \Vert _{E_A}\left[ \Vert (S_n B_m S_n)^2 \Vert _{{{\mathcal {L}}({E_A})}} \Vert z_n \Vert _{C([0,T],{E_A^*})}+\Vert B_m^2 \Vert _{{{\mathcal {L}}({E_A})}} \Vert z \Vert _{C([0,T],{E_A^*})}\right] \\&\quad \lesssim _{T,\psi } \Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2 \in l^1(\mathbb {N}) \end{aligned}$$

and Lebesgue’s dominated convergence theorem, we obtain

$$\begin{aligned} \sum _{m=1}^\infty \int _0^t \left| \Big ((S_n B_m S_n)^2 z_n(s),\psi \Big )_{H} - \langle B_m^2 z(s), \psi \rangle \right| \mathrm {d}s\longrightarrow 0, \qquad n\rightarrow \infty , \end{aligned}$$

and therefore

$$\begin{aligned} \int _0^t \big (\mu _n\left( z_n(s)\right) ,\psi \big )_{H} \mathrm {d}s \rightarrow \int _0^t \langle \mu \left( z(s)\right) , \psi \rangle \mathrm {d}s,\qquad n\rightarrow \infty . \end{aligned}$$

Step 3 Before we prove the last assertion, we recall \(z_n \rightarrow z\) in \({L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\) for \(n\rightarrow \infty .\) We estimate

$$\begin{aligned}&\int _0^t \left| \big (P_n F(z_n(s)),\psi \big )_{H} - \langle F(z(s)), \psi \rangle \right| \mathrm {d}s\nonumber \\&\quad \le \int _0^t \left| \langle F(z_n(s)), (P_n-I)\psi \rangle \right| \mathrm {d}s+\int _0^t \left| \langle F(z_n(s))-F(z(s)), \psi \rangle \right| \mathrm {d}s \end{aligned}$$
(6.1)

where we used (5.2). For the first term in (6.1), we look at

$$\begin{aligned}&\int _0^t \left| \langle F(z_n(s)), (P_n-I)\psi \rangle \right| \mathrm {d}s \le \Vert F (z_n) \Vert _{L^1(0,T;{E_A^*})} \Vert (P_n-I)\psi \Vert _{E_A}\\&\lesssim \Vert F(z_n) \Vert _{L^1(0,T;{L^{\frac{\alpha +1}{\alpha }}(M)})} \Vert (P_n-I)\psi \Vert _{E_A}\\&\lesssim \Vert z_n \Vert _{L^\alpha (0,T;{L^{\alpha +1}(M)})}^\alpha \Vert (P_n-I)\psi \Vert _{E_A}\\&\lesssim \Vert z_n \Vert _{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}^\alpha \Vert (P_n-I)\psi \Vert _{E_A}\longrightarrow 0, \quad n\rightarrow \infty . \end{aligned}$$

By Assumption (2.4) [see (2.7)], we get

$$\begin{aligned} \Vert F(z_n(s))-F(z(s)) \Vert _{{L^{\frac{\alpha +1}{\alpha }}(M)}}&\le \left( \Vert z_n(s) \Vert _{L^{\alpha +1}(M)}+\Vert z(s) \Vert _{L^{\alpha +1}(M)}\right) ^{\alpha -1}\\&\Vert z_n(s)-z(s) \Vert _{L^{\alpha +1}(M)}\end{aligned}$$

for \(s\in [0,T].\) Now, we apply Hölder’s inequality in time with \(\frac{1}{\alpha +1}+\frac{1}{\alpha +1}+\frac{\alpha -1}{\alpha +1}=1\)

$$\begin{aligned}&\Vert F(z_n)-F(z) \Vert _{L^1(0,T; {L^{\frac{\alpha +1}{\alpha }}(M)})} \\&\le T^\frac{1}{\alpha +1}\left( \Vert z_n \Vert _{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}+ \Vert z \Vert _{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\right) ^{\alpha -1}\\&\quad \Vert z_n-z \Vert _{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}\rightarrow 0,\qquad n\rightarrow \infty . \end{aligned}$$

This leads to the last claim. \(\square \)

By the application of the Skorohod–Jakubowski Theorem, we have replaced the Galerkin solutions \(u_n\) by the processes \(v_n\) on \({\tilde{\Omega }}.\) Now, we want to transfer the properties given by the Galerkin equation (5.5). Therefore, we define the process \(N_n{:}\,{\tilde{\Omega }} \times [0,T] \rightarrow H_n\) by

$$\begin{aligned} N_n(t)=-v_n(t)+ P_n u_0+ \int _0^t \left[ -\mathrm {i}A v_n(s)-\mathrm {i}P_n F(v_n(s))+\mu _n(v_n(s))\right] \mathrm {d}s \end{aligned}$$

for \(n\in \mathbb {N}\) and \(t\in [0,T]\) and in the following lemma, we prove its martingale property. Note that in this section, we consider H as a real Hilbert space equipped with the real scalar product \({\text {Re}}\big (u,v\big )_{H}\) for \(u,v\in {H}\) in order to be consistent with the martingale theory from [21] we use.

Lemma 6.3

For each \(n\in \mathbb {N},\) the process \(N_n\) is an H-valued continuous square integrable martingale w.r.t the filtration \({\tilde{\mathcal {F}}}_{n,t}:=\sigma \left( v_n(s){:}\,s\le t\right) .\) The quadratic variation of \(N_n\) is given by

$$\begin{aligned} \langle \langle N_n \rangle \rangle _t\psi = \sum _{m=1}^{\infty }\int _0^t \mathrm {i}S_n B_m S_n v_n(s) {\text {Re}}\big (\mathrm {i}S_n B_m S_n v_n(s),\psi \big )_{H} \mathrm {d}s \end{aligned}$$

for all \(\psi \in {H}.\)

Proof

Fix \(n\in \mathbb {N}.\) We define \(M_n{:}\,\Omega \times [0,T] \rightarrow H_n\) by

$$\begin{aligned} M_n(t):=-u_n(t)+ P_n u_0+ \int _0^t \left[ -\mathrm {i}A u_n(s)-\mathrm {i}P_n F(u_n(s))+\mu _n(u_n(s))\right] \mathrm {d}s \end{aligned}$$

for \(t\in [0,T].\) Since \(u_n\) is a solution of the Galerkin equation (5.5), we obtain the representation

$$\begin{aligned} M_n(t)= \mathrm {i}\int _0^t S_n B (S_n u_n(s)) \mathrm {d}W(s) \end{aligned}$$

\(\mathbb {P}\)-a.s. for all \(t\in [0,T].\) The estimate

$$\begin{aligned} \mathbb {E}\left[ \sum _{m=1}^{\infty }\int _0^T \Vert S_n B_m S_n u_n(s) \Vert _{H}^2 \mathrm {d}s\right]&\le \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({H})}}^2 \mathbb {E}\left[ \int _0^T \Vert u_n(s) \Vert _{H}^2 \mathrm {d}s\right] \\&\le T \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({H})}}^2 \Vert u_0 \Vert _{H}^2 <\infty \end{aligned}$$

yields that \(M_n\) is a square integrable continuous martingale w.r.t. the filtration \(\left( \mathcal {F}_t\right) _{t\in [0,T]}.\) From the definition of \(M_n\) we get that for each \(t\in [0,T],\)\(M_n(t)\) is measurable w.r.t. the smaller \(\sigma \)-field \(\mathcal {F}_{n,t}:=\sigma \left( u_n(s){:}\,s\le t\right) .\)

The adjoint of the operator \(\varPhi _n(s):=\mathrm {i}S_n B(S_n u_n(s)){:}\,Y\rightarrow {H}\) for \(s\in [0,T]\) is given by \(\varPhi _n^*(s)\psi = \sum _{m=1}^{\infty }{\text {Re}}\big (\mathrm {i}S_n B_m S_n u_n(s),\psi \big )_{H} f_m\) for \(\psi \in {H}.\) Therefore

$$\begin{aligned} \varPhi _n(s)\varPhi _n^*(s)\psi = \sum _{m=1}^{\infty }{\text {Re}}\big (\mathrm {i}S_n B_m S_n u_n(s),\psi \big )_{H} \mathrm {i}S_n B_m S_n u_n(s) \end{aligned}$$

for \(\psi \in {H}\) and \(s\in [0,T].\) Hence, \(M_n\) is an \(\left( \mathcal {F}_{n,t}\right) \)-martingale with quadratic variation

$$\begin{aligned} \langle \langle M_n \rangle \rangle _t\psi = \sum _{m=1}^{\infty }\int _0^t \mathrm {i}S_n B_m S_n u_n(s) {\text {Re}}\big (\mathrm {i}S_n B_m S_n u_n(s),\psi \big )_{H} \mathrm {d}s \end{aligned}$$

for \(\psi \in {H}\) (see [21, Theorem 4.27]). This property can be rephrased as

$$\begin{aligned} \mathbb {E}\left[ {\text {Re}}\big (M_n(t)-M_n(s),\psi \big )_{H} h(u_n|_{[0,s]})\right] =0 \end{aligned}$$

and

$$\begin{aligned}&\mathbb {E}\Bigg [ \Bigg ({\text {Re}}\big (M_n(t),\psi \big )_{H}{\text {Re}}\big (M_n(t),\varphi \big )_{H}- {\text {Re}}\big (M_n(s),\psi \big )_{H}{\text {Re}}\big (M_n(s),\varphi \big )_{H}\\&\quad -\,\sum _{m=1}^{\infty }\int _s^t {\text {Re}}\big (\mathrm {i}S_n B_m S_n u_n(r),\psi \big )_{H} {\text {Re}}\big (\mathrm {i}S_n B_m S_n u_n(r),\varphi \big )_{H} \mathrm {d}r\Bigg ) h(u_n|_{[0,s]})\Bigg ]=0 \end{aligned}$$

for all \(\psi , \varphi \in {H}\) and bounded, continuous functions h on C([0, T], H).

We use the identity of the laws of \(u_n\) and \(v_n\) on \(C([0,T],H_n)\) to obtain

$$\begin{aligned} \tilde{\mathbb {E}}\left[ {\text {Re}}\big (N_n(t)-N_n(s),\psi \big )_{H} h(v_n|_{[0,s]})\right] =0 \end{aligned}$$

and

$$\begin{aligned}&\tilde{\mathbb {E}}\Bigg [ \Bigg ({\text {Re}}\big (N_n(t),\psi \big )_{H}{\text {Re}}\big (N_n(t),\varphi \big )_{H}- {\text {Re}}\big (N_n(s),\psi \big )_{H}{\text {Re}}\big (N_n(s),\varphi \big )_{H}\\&\qquad -\,\sum _{m=1}^{\infty }\int _s^t {\text {Re}}\big (\mathrm {i}S_n B_m S_n v_n(r),\psi \big )_{H} {\text {Re}}\big (\mathrm {i}S_n B_m S_n v_n(r),\varphi \big )_{H} \mathrm {d}r\Bigg ) h(v_n|_{[0,s]})\Bigg ]=0 \end{aligned}$$

for all \(\psi , \varphi \in {H}\) and bounded, continuous functions h on \(C([0,T],H_n).\) Hence, \(N_n\) is a continuous square integrable martingale w.r.t \({\tilde{\mathcal {F}}}_{n,t}:=\sigma \left( v_n(s){:}\,s\le t\right) \) and the quadratic variation is given as claimed in the lemma. \(\square \)

We define a process N on \({\tilde{\Omega }} \times [0,T]\) by

$$\begin{aligned} N(t):=-v(t)+ u_0+ \int _0^t \left[ -\mathrm {i}A v(s)-\mathrm {i}F(v(s))+\mu (v(s))\right] \mathrm {d}s, \quad t\in [0,T]. \end{aligned}$$

By Proposition 6.1, we infer that \(v\in {C([0,T],{E_A^*})}\) almost surely and

$$\begin{aligned}&\Vert F(v) \Vert _{L^\infty (0,T;{E_A^*})} \lesssim \Vert F(v) \Vert _{L^\infty (0,T;{L^{\frac{\alpha +1}{\alpha }}(M)})} = \Vert v \Vert _{L^\infty (0,T;{L^{\alpha +1}(M)})}^\alpha<\infty \quad \text {a.s.} \\&\Vert A v \Vert _{L^\infty (0,T;{E_A^*})}\le \Vert v \Vert _{L^\infty (0,T;{E_A})}<\infty \quad \text {a.s.} \end{aligned}$$

Because of \(\mu \in {\mathcal {L}}({E_A^*}),\) we infer that \(\mu (v)\in {C([0,T],{E_A^*})}\) almost surely. Hence, N has \({E_A^*}\)-valued continuous paths.

Let \(\iota {:}\,{E_A}\hookrightarrow {H}\) be the usual embedding, \(\iota ^*{:}\,{H} \rightarrow {E_A}\) its Hilbert-space-adjoint, i.e. \(\big (\iota u,v\big )_{H}=\big (u,\iota ^* v\big )_{E_A}\) for \(u\in {E_A}\) and \(v\in {H}.\) Further, we set \(L:=\left( \iota ^*\right) '{:}\,{E_A^*}\rightarrow {H}\) as the dual operator of \(\iota ^*\) with respect to the Gelfand triple \({E_A}\hookrightarrow H\eqsim H^*\hookrightarrow {E_A^*}.\)

In the next lemma, we use the martingale property of \(N_n\) for \(n\in \mathbb {N}\) and a limiting process based on Proposition 6.1 and Lemma 6.2 to conclude that LN is also an H-valued martingale.

Lemma 6.4

The process LN is an H-valued continuous square integrable martingale with respect to the filtration \({\tilde{\mathbb {F}}}=\left( {\tilde{\mathcal {F}}}_t\right) _{t\in [0,T]},\) where \({\tilde{\mathcal {F}}}_{t}:=\sigma \left( v(s){:}\,s\le t\right) .\) The quadratic variation is given by

$$\begin{aligned} \langle \langle { L N} \rangle \rangle _t\zeta = \sum _{m=1}^{\infty }\int _0^t\mathrm {i}L B_m v(s) {\text {Re}}\big (\mathrm {i}L B_m v(s),\zeta \big )_{H} \mathrm {d}s \end{aligned}$$

for all \(\zeta \in {H}.\)

Proof

Step 1 Let \(t\in [0,T].\) We will first show that \(\tilde{\mathbb {E}}\left[ \Vert N(t) \Vert _{E_A^*}^2\right] <\infty .\) By Lemma 6.2, we have \(N_n(t) \rightarrow N(t)\) almost surely in \({E_A^*}\) for \(n\rightarrow \infty .\) By the Davis inequality for continuous martingales (see [49]), Lemma 6.3 and Proposition 6.1, we conclude

$$\begin{aligned} \tilde{\mathbb {E}}\left[ \sup _{t\in [0,T]}\Vert N_n(t) \Vert _{H}^{\alpha +1}\right]&\lesssim \tilde{\mathbb {E}}\left[ \left( \sum _{m=1}^{\infty }\int _0^T \Vert S_n B_m S_n v_n(s) \Vert _{H}^2 \mathrm {d}s\right) ^{\frac{\alpha +1}{2}}\right] \nonumber \\&\le \left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({H})}}^2\right) ^{\frac{\alpha +1}{2}} \tilde{\mathbb {E}}\left[ \left( \int _0^T \Vert v_n(s) \Vert _{H}^2 \mathrm {d}s\right) ^{\frac{\alpha +1}{2}}\right] \nonumber \\&\lesssim \tilde{\mathbb {E}}\left[ \int _0^T \Vert v_n(s) \Vert _{H}^{\alpha +1} \mathrm {d}s\right] \lesssim \tilde{\mathbb {E}}\left[ \int _0^T \Vert v_n(s) \Vert _{L^{\alpha +1}(M)}^{\alpha +1} \mathrm {d}s\right] \nonumber \\&\le T \sup _{n\in \mathbb {N}}\tilde{\mathbb {E}}\left[ \Vert v_n \Vert _{L^\infty (0,T;{L^{\alpha +1}(M)})}^{\alpha +1}\right] \le T C. \end{aligned}$$
(6.2)

Since \(\alpha +1>2\), we deduce \(N(t)\in L^2({\tilde{\Omega }},{E_A^*})\) by the Vitali Theorem and \(N_n(t)\rightarrow N(t)\) in \(L^2({\tilde{\Omega }},{E_A^*})\) for \(n\rightarrow \infty .\)

Step 2 Let \(\psi , \varphi \in {E_A}\) and h be a bounded continuous function on \({C([0,T],{E_A^*})}.\)

For \(0\le s\le t\le T,\) we define the random variables

$$\begin{aligned}&f_n(t,s):={\text {Re}}\big (N_n(t)-N_n(s),\psi \big )_{H} h(v_n|_{[0,s]}),\\&f(t,s):={\text {Re}}\langle N(t)-N(s), \psi \rangle h(v|_{[0,s]}). \end{aligned}$$

The \(\tilde{\mathbb {P}}\)-a.s. convergence \(v_n \rightarrow v\) in \(Z_T\) as \(n\rightarrow \infty \) yields, by Lemma 6.2, that \(f_n(t,s)\rightarrow f(t,s)\) \(\tilde{\mathbb {P}}\)-a.s. for all \(0\le s\le t\le T.\) We use the inequality \(\left( a+b\right) ^p\le 2^{p-1} \left( a^p+b^p\right) \) for \(a,b\ge 0\) and \(p\ge 1\) together with the estimate (6.2) to obtain

$$\begin{aligned} \tilde{\mathbb {E}}\vert f_n(t,s)\vert ^{\alpha +1}&\le 2^\alpha \Vert h \Vert _\infty ^{\alpha +1} \Vert \psi \Vert _{H}^{\alpha +1} \tilde{\mathbb {E}}\left[ \Vert N_n(t) \Vert _{H}^{\alpha +1}+\Vert N_n(s) \Vert _{H}^{\alpha +1}\right] \\&\le 2^\alpha \Vert h \Vert _\infty ^{\alpha +1} \Vert \psi \Vert _{H}^{\alpha +1} 2 T C. \end{aligned}$$

In view of Vitali's theorem, we get

$$\begin{aligned} 0=\lim _{n\rightarrow \infty }\tilde{\mathbb {E}}f_n(t,s)= \tilde{\mathbb {E}}f(t,s), \qquad 0\le s\le t\le T. \end{aligned}$$

Step 3 For \(0\le s\le t\le T,\) we define

$$\begin{aligned} g_{1,n}(t,s):=\Big ({\text {Re}}\big (N_n(t),\psi \big )_{H}&{\text {Re}}\big (N_n(t), \varphi \big )_{H}-{\text {Re}}\big (N_n(s),\psi \big )_{H}{\text {Re}}\big (N_n(s),\varphi \big )_{H}\Big ) h(v_n|_{[0,s]}) \end{aligned}$$

and

$$\begin{aligned} g_1(t,s):=\Big ({\text {Re}}\langle N(t), \psi \rangle&{\text {Re}}\langle N(t), \varphi \rangle -{\text {Re}}\langle N(s), \psi \rangle {\text {Re}}\langle N(s), \varphi \rangle \Big ) h(v|_{[0,s]}). \end{aligned}$$

By Lemma 6.2, we obtain \(g_{1,n}(t,s)\rightarrow g_1(t,s)\)\(\tilde{\mathbb {P}}\)-a.s. for all \(0\le s\le t\le T.\) In order to get uniform integrability, we set \(r:={\frac{\alpha +1}{2}}>1\) and estimate

$$\begin{aligned}&\tilde{\mathbb {E}}\vert g_{1,n}(t,s)\vert ^r\le 2^{r} \Vert h \Vert _\infty ^{r} \tilde{\mathbb {E}}\left[ \vert {\text {Re}}\big (N_n(t),\psi \big )_{H}{\text {Re}}\big (N_n(t),\varphi \big )_{H} \vert ^r \right. \\&\left. \quad +\vert {\text {Re}}\big (N_n(s),\psi \big )_{H}{\text {Re}}\big (N_n(s),\varphi \big )_{H}\vert ^r\right] \\&\le 2^r \Vert h \Vert _\infty ^r \Vert \psi \Vert _{H}^r \Vert \varphi \Vert _{H}^r \tilde{\mathbb {E}}\left[ \Vert N_n(t) \Vert _{H}^{\alpha +1}+\Vert N_n(s) \Vert _{H}^{\alpha +1}\right] \le 2^r \Vert h \Vert _\infty ^r \Vert \psi \Vert _{H}^r \Vert \varphi \Vert _{H}^r 2 T C, \end{aligned}$$

where we used (6.2) again. As above, Vitali’s Theorem yields

$$\begin{aligned} 0=\lim _{n\rightarrow \infty }\tilde{\mathbb {E}}g_{1,n}(t,s)= \tilde{\mathbb {E}}g_1(t,s), \qquad 0\le s\le t\le T . \end{aligned}$$

Step 4 For \(0\le s\le t\le T,\) we define

$$\begin{aligned} g_{2,n}(t,s)&:=h(v_n|_{[0,s]}) \sum _{m=1}^{\infty }\int _s^t {\text {Re}}\big (S_n B_m S_n v_n(\tau ),\psi \big )_{H} {\text {Re}}\big (S_n B_m S_n v_n(\tau ),\varphi \big )_{H}\mathrm {d}\tau \\ g_{2}(t,s)&:=h(v|_{[0,s]}) \sum _{m=1}^{\infty }\int _s^t {\text {Re}}\langle B_m v(\tau ), \psi \rangle {\text {Re}}\langle B_m v(\tau ), \varphi \rangle \mathrm {d}\tau . \end{aligned}$$

Because of \(h(v_n |_{[0,s]}) \rightarrow h(v|_{[0,s]})\) \(\tilde{\mathbb {P}}\)-a.s. and the continuity of the inner product in \(L^2([s,t]\times \mathbb {N}),\) the convergence

$$\begin{aligned} {\text {Re}}\big (S_n B_m S_n v_n,\psi \big )_{H} \rightarrow {\text {Re}}\langle B_m v, \psi \rangle \end{aligned}$$

\(\tilde{\mathbb {P}}\)-a.s. in \(L^2([s,t]\times \mathbb {N})\) already implies \(g_{2,n}(t,s)\rightarrow g_2(t,s)\)\(\tilde{\mathbb {P}}\)-a.s. Therefore, we consider

$$\begin{aligned} \Vert&{\text {Re}}\big (S_n B_m S_n v_n,\psi \big )_{H} - {\text {Re}}\langle B_m v, \psi \rangle \Vert _{L^2([s,t]\times \mathbb {N})}\\&\le \Vert {\text {Re}}\big (B_m S_n v_n,\left( S_n-I\right) \psi \big )_{H} \Vert _{L^2([s,t]\times \mathbb {N})}+ \Vert {\text {Re}}\big ( v_n,\left( S_n-I\right) B_m \psi \big )_{H} \Vert _{L^2([s,t]\times \mathbb {N})}\\&\quad +\,\Vert {\text {Re}}\langle B_m \left( v_n-v\right) , \psi \rangle \Vert _{L^2([s,t]\times \mathbb {N})}\\&\le \Vert B_m S_n v_n \Vert _{L^2([s,t]\times \mathbb {N},{E_A^*})}\Vert \left( S_n-I\right) \psi \Vert _{E_A}+\Vert {\text {Re}}\big ( v_n,\left( S_n-I\right) B_m \psi \big )_{H} \Vert _{L^2([s,t]\times \mathbb {N})}\\&\quad +\,\Vert \psi \Vert _{E_A}\Vert B_m (v_n-v) \Vert _{L^2([s,t]\times \mathbb {N},{E_A^*})}\\&\le \left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2\right) ^{\frac{1}{2}} T^\frac{1}{2} \Vert v_n \Vert _{C([0,T],{E_A^*})}\Vert \left( S_n-I\right) \psi \Vert _{E_A}\\&\quad +\Vert {\text {Re}}\big ( v_n,\left( S_n-I\right) B_m \psi \big )_{H} \Vert _{L^2([s,t]\times \mathbb {N})}\\&\quad +\,\left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2\right) ^{\frac{1}{2}} T^\frac{1}{2} \Vert v_n-v \Vert _{C([0,T],{E_A^*})}\Vert \psi \Vert _{E_A}. \end{aligned}$$

The first and the third term tend to 0 as \(n\rightarrow \infty \) by Proposition 6.1 and for the second one, this follows by the estimate

$$\begin{aligned} \vert {\text {Re}}\big ( v_n(s),\left( S_n-I\right) B_m \psi \big )_{H}\vert ^2 \le 4 \Vert v_n(s) \Vert _{E_A^*}^2 \Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2 \Vert \psi \Vert _{E_A}^2 \in L^1([s,t]\times \mathbb {N}) \end{aligned}$$

and Lebesgue’s dominated convergence theorem. Hence, we conclude

$$\begin{aligned} \Vert {\text {Re}}\big (S_n B_m S_n v_n,\psi \big )_{H} - {\text {Re}}\langle B_m v, \psi \rangle \Vert _{L^2([s,t]\times \mathbb {N})}\rightarrow 0 \end{aligned}$$

\(\tilde{\mathbb {P}}\)-a.s. as \(n\rightarrow \infty .\) Furthermore, we estimate

$$\begin{aligned} \sum _{m=1}^{\infty }\int _s^t \vert {\text {Re}}\big (S_n B_m S_n v_n(\tau ),\psi \big )_{H} \vert ^2 \mathrm {d}\tau&\le \int _0^T\Vert v_n(\tau ) \Vert _{E_A^*}^2 \mathrm {d}\tau \Vert \psi \Vert _{E_A}^2 \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2 \end{aligned}$$

and continue with \(r:=\frac{\alpha +1}{2}>1\) and

$$\begin{aligned}&\tilde{\mathbb {E}}\vert g_{2,n}(t,s)\vert ^r \le \tilde{\mathbb {E}}\Big [\Vert {\text {Re}}\big (S_n B_m S_n v_n,\psi \big )_{H} \Vert _{L^2([s,t]\times \mathbb {N})} ^r \\&\quad \Vert {\text {Re}}\big (S_n B_m S_n v_n,\varphi \big )_{H} \Vert _{L^2([s,t]\times \mathbb {N})}^r \vert h(v_n|_{[0,s]})\vert ^r\Big ]\\&\le \tilde{\mathbb {E}}\left[ \left( \int _0^T\Vert v_n(\tau ) \Vert _{E_A^*}^2 \mathrm {d}\tau \right) ^r\right] \Vert \psi \Vert _{E_A}^r \Vert \varphi \Vert _{E_A}^r \left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2\right) ^r \Vert h \Vert _\infty ^r\\&\lesssim \tilde{\mathbb {E}}\left[ \int _0^T\Vert v_n(\tau ) \Vert _{E_A^*}^{\alpha +1} \mathrm {d}\tau \right] \lesssim \sup _{n\in \mathbb {N}} \tilde{\mathbb {E}}\left[ \Vert v_n \Vert _{L^{\alpha +1}(0,T;{L^{\alpha +1}(M)})}^{\alpha +1}\right] \le C T . \end{aligned}$$

Using Vitali’s Theorem, we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty } \tilde{\mathbb {E}}\left[ g_{2,n}(t,s)\right] =\tilde{\mathbb {E}}\left[ g_{2}(t,s)\right] ,\qquad 0\le s\le t\le T. \end{aligned}$$

Step 5 From step 2, we have

$$\begin{aligned} \tilde{\mathbb {E}}\left[ {\text {Re}}\langle N(t)-N(s), \psi \rangle h(v|_{[0,s]})\right] =0 \end{aligned}$$
(6.3)

and step 3, step 4 and Lemma 6.3 yield

$$\begin{aligned}&\tilde{\mathbb {E}}\Bigg [ \Bigg ({\text {Re}}\langle N(t), \psi \rangle {\text {Re}}\langle N(t), \varphi \rangle - {\text {Re}}\langle N(s), \psi \rangle {\text {Re}}\langle N(s), \varphi \rangle \nonumber \\&\quad +\sum _{m=1}^{\infty }\int _s^t {\text {Re}}\langle B_m v(\tau ), \psi \rangle {\text {Re}}\langle B_m v(\tau ), \varphi \rangle \mathrm {d}\tau \Bigg ) h(v|_{[0,s]})\Bigg ]=0. \end{aligned}$$
(6.4)

Now, let \(\eta ,\zeta \in {H}.\) Then \(\iota ^* \eta ,\iota ^* \zeta \in {E_A}\) and for all \(z\in {E_A^*},\) we have \({\text {Re}}\big (L z,\eta \big )_{H}={\text {Re}}\langle z, \iota ^* \eta \rangle .\) By the first step, LN is a continuous, square integrable process in H and the identities (6.3) and (6.4) imply

$$\begin{aligned} \tilde{\mathbb {E}}\left[ {\text {Re}}\big (L N(t)-L N(s),\eta \big )_{H} h(v|_{[0,s]})\right] =0 \end{aligned}$$

and

$$\begin{aligned}&\tilde{\mathbb {E}}\Bigg [ \Bigg ({\text {Re}}\big (L N(t),\eta \big )_{H} {\text {Re}}\big (L N(t),\zeta \big )_{H}-{\text {Re}}\big (L N(s),\eta \big )_{H}{\text {Re}}\big (L N(s),\zeta \big )_{H}\\&\quad +\,\sum _{m=1}^{\infty }\int _s^t {\text {Re}}\big (L B_m v(\tau ),\eta \big )_{H} {\text {Re}}\big (L B_m v(\tau ),\zeta \big )_{H} \mathrm {d}\tau \Bigg ) h(v|_{[0,s]})\Bigg ]=0. \end{aligned}$$

Hence, LN is a continuous, square integrable martingale in H with respect to the filtration \({\tilde{\mathcal {F}}}_{t}:=\sigma \left( v(s){:}\,s\le t\right) \) and its quadratic variation is given by

$$\begin{aligned} \langle \langle L N \rangle \rangle _t\zeta = \sum _{m=1}^{\infty }\int _0^t \mathrm {i}L B_m v(s) {\text {Re}}\big (\mathrm {i}L B_m v(s),\zeta \big )_{H} \mathrm {d}s \end{aligned}$$

for all \(\zeta \in {H}.\)\(\square \)

Finally, we can prove our main result, Theorem 1.1, using the Martingale Representation Theorem from [21, Theorem 8.2].

Proof of Theorem 1.1

We choose \(H=L^2(M),\)\(Q=I\) and \(\varPhi (s):= \mathrm {i}L B\left( v(s)\right) \) for all \(s\in [0,T].\) The adjoint \(\varPhi (s)^*\) is given by \(\varPhi (s)^*\zeta := \sum _{m=1}^{\infty }{\text {Re}}\big (\mathrm {i}L B_m v(s),\zeta \big )_{H} f_m\) and hence,

$$\begin{aligned} \left( \varPhi (s) Q^{\frac{1}{2}}\right) \left( \varPhi (s) Q^{\frac{1}{2}}\right) ^*\zeta =\varPhi (s)\varPhi (s)^*\zeta =\sum _{m=1}^{\infty }{\text {Re}}\big (\mathrm {i}L B_m v(s),\zeta \big )_{H}\mathrm {i}L B_m v(s) \end{aligned}$$

for \(\zeta \in {H}.\) Clearly, v is continuous in \({E_A^*}\) and adapted to the filtration \({\tilde{\mathbb {F}}}\) given by \(\tilde{{\mathcal {F}}}_t=\sigma \left( v(s){:}\,0\le s\le t\right) \) for \(t\in [0,T].\) Hence, \(\varPhi \) is continuous in H and adapted to \({\tilde{\mathbb {F}}}\) and therefore progressively measurable.

By an application of Theorem 8.2 in [21] to the process LN from Lemma 6.4, we obtain a cylindrical Wiener process \({\tilde{W}}\) on Y defined on a probability space

$$\begin{aligned} \left( \Omega ',\mathcal {F}',\mathbb {P}'\right) =\left( {\tilde{\Omega }} \times \tilde{{\tilde{\Omega }}}, {\tilde{\mathcal {F}}}\otimes \tilde{{\tilde{\mathcal {F}}}}, {\tilde{\mathbb {P}}}\otimes \tilde{{\tilde{\mathbb {P}}}}\right) \end{aligned}$$

with

$$\begin{aligned} L N(t)=\int _0^t \varPhi (s) \mathrm {d}{\tilde{W}}(s)=\int _0^t \mathrm {i}L B\left( v(s)\right) \mathrm {d}{\tilde{W}}(s) \end{aligned}$$

for \(t\in [0,T].\) The estimate

$$\begin{aligned}&\Vert B v \Vert _{L^2([0,T]\times \Omega ,{\text {HS}}(Y,{E_A^*}))}^2=\mathbb {E}\int _0^T \sum _{m=1}^{\infty }\Vert B_m v(s) \Vert _{E_A^*}^2 \mathrm {d}s \lesssim \mathbb {E}\int _0^T \sum _{m=1}^{\infty }\Vert B_m v(s) \Vert _{E_A}^2 \mathrm {d}s\\&\quad \le \mathbb {E}\int _0^T \left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({E_A})}}^2\right) \Vert v(s) \Vert _{E_A}^2 \mathrm {d}s \lesssim \mathbb {E}\int _0^T \Vert v(s) \Vert _{E_A}^2 \mathrm {d}s\\&\quad \le T \Vert v \Vert _{L^2(\Omega ,{L^\infty (0,T;{E_A})})}^2\le T C \end{aligned}$$

yields that the stochastic integral \(\int _0^\cdot B\left( v(s)\right) \mathrm {d}{\tilde{W}}(s)\) is a continuous martingale in \({E_A^*}\) and using the continuity of the operator L, we get

$$\begin{aligned} \int _0^t \mathrm {i}L B\left( v(s)\right) \mathrm {d}{\tilde{W}}(s)=L \left( \int _0^t \mathrm {i}B\left( v(s)\right) \mathrm {d}{\tilde{W}}(s)\right) \end{aligned}$$

for all \(t\in [0,T].\) The definition of N and the injectivity of L yield the equality

$$\begin{aligned} \int _0^t \mathrm {i}B v(s) \mathrm {d}{\tilde{W}}(s)=-v(t)+ u_0+ \int _0^t \left[ -\mathrm {i}A v(s)-\mathrm {i}F(v(s))+\mu (v(s))\right] \mathrm {d}s \end{aligned}$$
(6.5)

in \({E_A^*}\) for \(t\in [0,T].\) The weak continuity of the paths of v in \({E_A}\) and the estimates for property (1.6) have already been shown in Proposition 6.1. Hence, the system \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},v\right) \) is a martingale solution of equation (1.1). \(\square \)

It remains to prove the mass conservation from Theorem 1.1. In Proposition 5.4, we proved a similar result for the approximating equation. Since this property is not necessarily preserved under the limiting procedure from above, we have to repeat the calculation in infinite dimensions and justify it by a regularization procedure.

Proposition 6.5

Let \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u\right) \) be a martingale solution of (1.1). Then, we have \(\Vert u(t) \Vert _{L^2}=\Vert u_0 \Vert _{L^2}\) almost surely for all \(t\in [0,T].\)

Proof

Step 1 Given \(\lambda >0,\) we define \(R_\lambda :=\lambda \left( \lambda +A\right) ^{-1}.\) Using the series representation, one can verify

$$\begin{aligned}&R_\lambda f \rightarrow f \quad \text {in}\quad {X},\quad \lambda \rightarrow \infty , \quad f\in {X}\nonumber \\&\Vert R_\lambda \Vert _{{{\mathcal {L}}(X)}}\le 1 \end{aligned}$$
(6.6)

for \(X\in \left\{ {H}, {E_A}, {E_A^*}\right\} .\) Moreover, \(R_\lambda ({E_A^*})={E_A}\) and hence, the equation

$$\begin{aligned} R_\lambda u(t)= & {} R_\lambda u_0+\int _0^t \left[ -\mathrm {i}R_\lambda A u(s)-\mathrm {i}R_\lambda F(u(s))+R_\lambda \mu (u(s))\right] \mathrm {d}s \nonumber \\&- \mathrm {i}\int _0^t R_\lambda B u(s) \mathrm {d}{\tilde{W}}(s) \end{aligned}$$
(6.7)

holds almost surely in \({E_A}\) for all \(t\in [0,T].\) The function \(\mathcal {M}{:}\,{{H}} \rightarrow \mathbb {R}\) defined by \(\mathcal {M}(v):=\Vert v \Vert _{{H}}^2\) is twice continuously Fréchet-differentiable with

$$\begin{aligned} \mathcal {M}'[v]h_1&= 2 {\text {Re}}\big (v, h_1\big )_{H}, \qquad \mathcal {M}''[v] \left[ h_1,h_2\right] = 2 {\text {Re}}\big ( h_1,h_2\big )_{H} \end{aligned}$$

for \(v, h_1, h_2\in {{H}}.\) Therefore, we get

$$\begin{aligned}&\Vert R_\lambda u(t) \Vert _{{H}}^2=\Vert R_\lambda u_0 \Vert _{H}^2+2 \int _0^t {\text {Re}}\big (R_\lambda u(s),-\mathrm {i}R_\lambda A u(s) \nonumber \\&\quad -\mathrm {i}R_\lambda F(u(s))+R_\lambda \mu (u(s))\big )_{H} \mathrm {d}s\nonumber \\&\quad -\,2 \int _0^t {\text {Re}}\big (R_\lambda u(s),\mathrm {i}R_\lambda B u(s) \mathrm {d}{\tilde{W}}(s)\big )_{H} +\sum _{m=1}^{\infty }\int _0^t \Vert R_\lambda B_m u(s)\Vert _{{H}}^2\mathrm {d}s \end{aligned}$$
(6.8)

almost surely for all \(t\in [0,T].\)
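Before passing to the limit, let us sketch the verification of (6.6) from the series representation claimed above. Let \(\left( h_m\right) _{m\in \mathbb {N}}\) be an orthonormal basis of H consisting of eigenvectors of A with eigenvalues \(\lambda _m\ge 0\) (such a basis exists since A is selfadjoint, non-negative and has a compact resolvent). Then

```latex
$$\begin{aligned} R_\lambda f=\sum _{m=1}^{\infty }\frac{\lambda }{\lambda +\lambda _m}\big (f,h_m\big )_{H} h_m,\qquad \Vert R_\lambda f-f \Vert _{H}^2=\sum _{m=1}^{\infty }\left( \frac{\lambda _m}{\lambda +\lambda _m}\right) ^2\vert \big (f,h_m\big )_{H}\vert ^2. \end{aligned}$$
```

Each factor \(\frac{\lambda }{\lambda +\lambda _m}\) lies in (0, 1], which gives \(\Vert R_\lambda \Vert _{{{\mathcal {L}}({H})}}\le 1,\) and since \(\frac{\lambda _m}{\lambda +\lambda _m}\rightarrow 0\) as \(\lambda \rightarrow \infty \) for each m, dominated convergence for the series yields \(R_\lambda f\rightarrow f\) in H. The cases \(X={E_A}\) and \(X={E_A^*}\) follow from the same computation with the weights \((1+\lambda _m)^{\pm 1}\) inserted into the series.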

Step 2 In the following, we deal with the behaviour of the terms in (6.8) as \(\lambda \rightarrow \infty .\) Since \(R_\lambda \) and A commute, we get

$$\begin{aligned} {\text {Re}}\big (R_\lambda u(s),-\mathrm {i}R_\lambda A u(s)\big )_{H}={\text {Re}}\big (R_\lambda u(s),-\mathrm {i}A R_\lambda u(s)\big )_{H}=0,\quad s\in [0,T],\quad \lambda >0. \end{aligned}$$
(6.9)
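Indeed, (6.9) rests on the selfadjointness of A: writing \(w:=R_\lambda u(s)\in \mathcal {D}(A),\) the number \(\big (w,Aw\big )_{H}\) is real, so \(\big (w,-\mathrm {i}Aw\big )_{H}\) is purely imaginary, i.e.

```latex
$$\begin{aligned} \big (w,Aw\big )_{H}\in \mathbb {R}\quad \Longrightarrow \quad {\text {Re}}\big (w,-\mathrm {i}A w\big )_{H}=0. \end{aligned}$$
```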

For \(s\in [0,T],\) we have

$$\begin{aligned}&{\text {Re}}\big (R_\lambda u(s),-\mathrm {i}R_\lambda F(u(s))\big )_{H}\rightarrow {\text {Re}}\langle u(s), -\mathrm {i}F(u(s)) \rangle =0\nonumber \\&{\text {Re}}\big (R_\lambda u(s),R_\lambda \mu (u(s))\big )_{H}\rightarrow {\text {Re}}\big ( u(s),\mu (u(s))\big )_{H},\qquad \lambda \rightarrow \infty . \end{aligned}$$
(6.10)

by (6.6). In order to apply Lebesgue’s dominated convergence theorem, we estimate

$$\begin{aligned} \vert {\text {Re}}&\big (R_\lambda u(s),-\mathrm {i}R_\lambda F(u(s))+R_\lambda \mu (u(s))\big )_{H}\vert \\&\quad \le \Vert u(s) \Vert _{{E_A}} \left\| -\mathrm {i}F(u(s))+\mu (u(s)) \right\| _{{E_A^*}}\\&\quad \lesssim \Vert u(s) \Vert _{{E_A}} \left( \Vert F(u(s)) \Vert _{{L^{\frac{\alpha +1}{\alpha }}(M)}}+\sum _{m=1}^{\infty }\Vert B_m \Vert _{{\mathcal {L}}({H})}^2\Vert u(s) \Vert _{{H}}\right) \\&\quad \lesssim \Vert u(s) \Vert _{{E_A}} \left( \Vert u(s) \Vert _{{L^{\alpha +1}(M)}}^\alpha +\Vert u(s) \Vert _{{H}}\right) \\&\quad \lesssim \Vert u(s) \Vert _{E_A}^{\alpha +1}+\Vert u(s) \Vert _{E_A}^2 \end{aligned}$$

using (6.6) and the Sobolev embeddings \({L^{\frac{\alpha +1}{\alpha }}(M)}\hookrightarrow {E_A^*}\) and \({E_A}\hookrightarrow {L^{\alpha +1}(M)}.\)

Since \(u\in C_w([0,T],{E_A})\) almost surely and \(C_w([0,T],{E_A})\subset {L^\infty (0,T;{E_A})}\), we obtain

$$\begin{aligned}&\int _0^t {\text {Re}}\big (R_\lambda u(s),-\mathrm {i}R_\lambda F(u(s))+R_\lambda \mu (u(s))\big )_{H} \mathrm {d}s\\&\quad \rightarrow \int _0^t {\text {Re}}\big (u(s),\mu (u(s))\big )_{H}\mathrm {d}s ,\qquad \lambda \rightarrow \infty , \end{aligned}$$

almost surely for all \(t\in [0,T].\) Moreover, the pointwise convergence

$$\begin{aligned} \Vert R_\lambda B_m u(s)\Vert _{{H}} \rightarrow \Vert B_m u(s)\Vert _{{H}}, \qquad m\in \mathbb {N}, \quad \text {f.a.a. }s\in [0,T] \end{aligned}$$

and the estimate

$$\begin{aligned} \Vert R_\lambda B_m u(s)\Vert _{{H}}^2\le \Vert B_m\Vert _{{{\mathcal {L}}({H})}}^2 \Vert u(s)\Vert _{{H}}^2 \in L^1([0,T]\times \mathbb {N}) \end{aligned}$$

lead, by Lebesgue’s dominated convergence theorem, to

$$\begin{aligned} \sum _{m=1}^{\infty }\int _0^t \Vert R_\lambda B_m u(s)\Vert _{{H}}^2\mathrm {d}s \rightarrow \sum _{m=1}^{\infty }\int _0^t \Vert B_m u(s)\Vert _{{H}}^2\mathrm {d}s,\quad \lambda \rightarrow \infty \end{aligned}$$
(6.11)

almost surely for all \(t\in [0,T]\). For the stochastic term, we fix \(K\in \mathbb {N}\) and define a stopping time \(\tau _K\) by

$$\begin{aligned} \tau _K:=\inf \left\{ t\in [0,T]{:}\,\Vert u(t) \Vert _{{H}}>K\right\} . \end{aligned}$$

Then, we use the convergence

$$\begin{aligned} {\text {Re}}\big (R_\lambda u(s),\mathrm {i}R_\lambda B_m u(s)\big )_{H}\rightarrow {\text {Re}}\big ( u(s),\mathrm {i}B_m u(s)\big )_{H}=0 \quad \text {a.s.},\quad m\in \mathbb {N},\ s\in [0,T] \end{aligned}$$

and

$$\begin{aligned} {\mathbf {1}}_{[0,\tau _K]}(s) \vert {\text {Re}}\big (R_\lambda u(s),\mathrm {i}R_\lambda B_m u(s)\big )_{H}\vert ^2&\le {\mathbf {1}}_{[0,\tau _K]}(s) \Vert u(s) \Vert _{{H}}^4 \Vert B_m \Vert _{{{\mathcal {L}}({H})}}^2\\&\le K^4 \Vert B_m \Vert _{{{\mathcal {L}}({H})}}^2\in L^1({\tilde{\Omega }}\times [0,T]\times \mathbb {N}) \end{aligned}$$

to get

$$\begin{aligned} \tilde{\mathbb {E}}\sum _{m=1}^{\infty }\int _0^{\tau _K}\left[ {\text {Re}}\big (R_\lambda u(s),\mathrm {i}R_\lambda B_m u(s)\big )_{H}\right] ^2 \mathrm {d}s\rightarrow 0,\quad \lambda \rightarrow \infty , \end{aligned}$$

by Lebesgue’s dominated convergence theorem. The Itô isometry and the Doob inequality yield

$$\begin{aligned} \tilde{\mathbb {E}}\left[ \sup _{t\in [0,\tau _K]}\left| \int _0^t {\text {Re}}\big (R_\lambda u(s),\mathrm {i}R_\lambda B u(s) \mathrm {d}{\tilde{W}}(s)\big )_{H} \right| ^2 \right] \rightarrow 0,\quad \lambda \rightarrow \infty . \end{aligned}$$

After passing to a subsequence, we get

$$\begin{aligned} \int _0^t {\text {Re}}\big (R_\lambda u(s),\mathrm {i}R_\lambda B u(s) \mathrm {d}{\tilde{W}}(s)\big )_{H}\rightarrow 0,\quad \lambda \rightarrow \infty , \end{aligned}$$
(6.12)

almost surely on \(\left\{ t\le \tau _K\right\} .\) By

$$\begin{aligned} \bigcup _{K\in \mathbb {N}} \left\{ t\le \tau _K\right\} =[0,T]\qquad \text {a.s.}, \end{aligned}$$

we conclude that (6.12) holds almost surely on [0, T].

Step 3 Using (6.9), (6.11) and (6.12) in (6.8), we obtain

$$\begin{aligned} \Vert u(t) \Vert _{{H}}^2&=\Vert u_0 \Vert _{{H}}^2+2 \int _0^t {\text {Re}}\big ( u(s),\mu (u(s))\big )_{H} \mathrm {d}s +\sum _{m=1}^{\infty }\int _0^t \Vert B_m u(s)\Vert _{{H}}^2\mathrm {d}s \end{aligned}$$

almost surely for all \(t\in [0,T].\) By the selfadjointness of \(B_m,\)\(m\in \mathbb {N},\) we simplify

$$\begin{aligned} 2 {\text {Re}}\big (u(s),\mu (u(s))\big )_{H}=-\sum _{m=1}^{\infty }{\text {Re}}\big (u(s),B_m^2 u(s)\big )_{H} =-\sum _{m=1}^{\infty }\Vert B_m u(s) \Vert _{{H}}^2. \end{aligned}$$

Therefore, we have \(\Vert u(t) \Vert _{{H}}^2=\Vert u_0 \Vert _{{H}}^2\) almost surely for all \(t\in [0,T].\)\(\square \)

7 Regularity and uniqueness of solutions on 2d manifolds

In this section, we study the pathwise uniqueness of solutions to (1.1) in the case of a 2-dimensional Riemannian manifold M without boundary. We drop the assumption that M is compact and replace it by

$$\begin{aligned} M\,\text {is complete, has a positive injectivity radius and a bounded geometry.} \end{aligned}$$
(7.1)

We refer to [53, chapter 7], for the definitions of the notions above and background references on differential geometry. We equip M with the canonical volume \(\mu \) and suppose that M satisfies the doubling property: for all \(x\in M\) and \(r>0,\) we have \(\mu (B(x,r))<\infty \) and

$$\begin{aligned} \mu (B(x,2r))\lesssim \mu (B(x,r)). \end{aligned}$$
(7.2)

We emphasize that (7.1) is satisfied by compact manifolds. Examples of manifolds with the property (7.2) are compact manifolds and manifolds with non-negative Ricci curvature, see [16].

Let \(A=-\Delta _g\) be the Laplace–Beltrami operator and \(F=F_\alpha ^\pm \) the model nonlinearity from Sect. 3. The proof of uniqueness is based on additional regularity of the solution, which we obtain by applying the deterministic and the stochastic Strichartz estimates from [8, 13].

In two dimensions, the mapping properties of the nonlinearity improve, as we will see in the first Lemma.

Lemma 7.1

Let \(d=2,\)\(\alpha >1,\)\(s\in ( \frac{\alpha -1}{\alpha },1]\) and \({\tilde{s}}\in (0,1-\alpha +s\alpha ]\cap (0,1).\) Then, we have \(F_\alpha ^\pm {:}\,H^s(M)\rightarrow H^{{\tilde{s}}}(M)\) and

$$\begin{aligned} \Vert F_\alpha ^\pm (u) \Vert _{H^{{\tilde{s}}}}\lesssim \Vert u \Vert _{H^s}^{\alpha },\qquad u\in H^s(M). \end{aligned}$$

Proof

Step 1 First, we consider the case \(s=1.\) Take \(q\in [2,\infty )\) and \(r\in (1,2)\) with

$$\begin{aligned} q\ge \frac{2(\alpha -1)}{1-{\tilde{s}}},\qquad \frac{1}{r}=\frac{1}{2}+\frac{\alpha -1}{q}. \end{aligned}$$
(7.3)

Due to \(d=2,\) we have \(H^1(M)\hookrightarrow L^q(M)\) and by [10, Lemma, III. 1.4.], we get

$$\begin{aligned} \Vert F_\alpha ^\pm (u) \Vert _{H^{1,r}}\lesssim \Vert u \Vert _{H^1}^{\alpha },\qquad u\in H^1(M). \end{aligned}$$
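This estimate is of chain-rule type: pointwise, \(\vert \nabla F_\alpha ^\pm (u)\vert \lesssim \vert u\vert ^{\alpha -1}\vert \nabla u\vert ,\) so Hölder's inequality with the relation from (7.3) and the embedding \(H^1(M)\hookrightarrow L^q(M)\) give

```latex
$$\begin{aligned} \Vert \vert u\vert ^{\alpha -1}\nabla u \Vert _{L^r}\le \Vert u \Vert _{L^q}^{\alpha -1} \Vert \nabla u \Vert _{L^2}\lesssim \Vert u \Vert _{H^1}^{\alpha }. \end{aligned}$$
```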

The condition (7.3) yields

$$\begin{aligned} {\tilde{s}}-1\le -\frac{2(\alpha -1)}{q}=1-\frac{2}{r} \end{aligned}$$

and therefore, the assertion follows by applying the Sobolev embedding \(H^{1,r}(M)\hookrightarrow H^{{\tilde{s}}}(M).\)

Step 2 Next, we consider \(s\in (\frac{\alpha -1}{\alpha },1).\) Let \(r=\frac{2}{(1-s)\alpha +s}\in (1,2)\) and \(q=\frac{2}{1-s}\in (2\alpha ,\infty ).\) Then, we have \(\frac{1}{r}=\frac{1}{2}+\frac{\alpha -1}{q}.\) Thus, we can apply [17, Proposition 3.1], and obtain

$$\begin{aligned} \Vert \vert \nabla \vert ^s F_\alpha ^\pm (u) \Vert _{L^r}\lesssim \Vert u \Vert _{L^q}^{\alpha -1} \Vert \vert \nabla \vert ^s u \Vert _{L^2}. \end{aligned}$$
(7.4)
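The relation \(\frac{1}{r}=\frac{1}{2}+\frac{\alpha -1}{q}\) for the choices \(r=\frac{2}{(1-s)\alpha +s}\) and \(q=\frac{2}{1-s}\) is a direct computation:

```latex
$$\begin{aligned} \frac{1}{2}+\frac{\alpha -1}{q}=\frac{1}{2}+\frac{(\alpha -1)(1-s)}{2}=\frac{1+\alpha -\alpha s-1+s}{2}=\frac{(1-s)\alpha +s}{2}=\frac{1}{r}. \end{aligned}$$
```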

Furthermore, we have

$$\begin{aligned} s-1=-\frac{2}{q},\qquad s-1=\frac{s}{\alpha }-\frac{2}{r\alpha }\ge -\frac{2}{r\alpha }, \end{aligned}$$

which implies

$$\begin{aligned} H^s(\mathbb {R}^2)\hookrightarrow L^q(\mathbb {R}^2),\qquad H^s(\mathbb {R}^2)\hookrightarrow L^{r\alpha }(\mathbb {R}^2). \end{aligned}$$

Together with (7.4) and \(\Vert F_\alpha ^\pm (u) \Vert _{L^r}=\Vert u \Vert _{L^{r\alpha }}^\alpha \) for \(u\in L^{r\alpha }(\mathbb {R}^2),\) this implies

$$\begin{aligned} \Vert F_\alpha ^\pm (u) \Vert _{H^{s,r}(\mathbb {R}^2)}\lesssim \Vert u \Vert _{H^s(\mathbb {R}^2)}^{\alpha },\qquad u\in H^s(\mathbb {R}^2). \end{aligned}$$
(7.5)

Since we have the Sobolev embedding \(H^{s,r}(\mathbb {R}^2)\hookrightarrow H^{{\tilde{s}}}(\mathbb {R}^2)\) as a consequence of \(0<{\tilde{s}}\le 1-\alpha +s\alpha \le s,\) we obtain

$$\begin{aligned} \Vert F_\alpha ^\pm (u) \Vert _{H^{{\tilde{s}}}(\mathbb {R}^2)}\lesssim \Vert u \Vert _{H^s(\mathbb {R}^2)}^{\alpha },\qquad u\in H^s(\mathbb {R}^2). \end{aligned}$$

This completes the proof in the case \(M=\mathbb {R}^2.\) For a general manifold M, the estimate follows by the definition of fractional Sobolev spaces via charts, see “Appendix B”. \(\square \)

In the following Proposition, we reformulate problem (1.1) in a mild form and use this to show additional regularity properties of solutions of (1.1). Let us therefore recall the notation

$$\begin{aligned} \mu =-\frac{1}{2}\sum _{m=1}^{\infty }B_m^2. \end{aligned}$$

Proposition 7.2

Assume \(d=2\) and choose \(2<p,q<\infty \) with

$$\begin{aligned} \frac{2}{p}+\frac{2}{q}=1. \end{aligned}$$

Let \(\varepsilon \in (0,1),\)\(\alpha >1,\)\(s\in [1+\frac{1+\varepsilon }{q\alpha }-\frac{1}{\alpha },1],\)\(r>1\) and \(\beta := \max \{\alpha ,2\}.\) Let \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u\right) \) be a solution to (1.1) with \(F=F_\alpha ^\pm \) and assume

$$\begin{aligned} u\in L^{r\alpha }({\tilde{\Omega }},L^\beta (0,T;H^s(M))). \end{aligned}$$
(7.6)

Then, for each \({\tilde{s}}\in [\frac{1+\varepsilon }{q},1-\alpha +s\alpha ]\cap (0,1),\) we have

$$\begin{aligned} u\in L^r({\tilde{\Omega }}, C([0,T],H^{{\tilde{s}}}(M))\cap L^q(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M))) \end{aligned}$$
(7.7)

and almost surely in \(H^{{\tilde{s}}}(M)\) for all \(t\in [0,T]\)

$$\begin{aligned} \mathrm {i}u(t)&=\mathrm {i}e^{-\mathrm {i}tA}u_0+\int _0^t e^{-\mathrm {i}(t-\tau )A}F_\alpha ^\pm (u(\tau ))\mathrm {d}\tau \nonumber \\&\quad +\,\int _0^t e^{-\mathrm {i}(t-\tau )A}\mu (u(\tau ))\mathrm {d}\tau +\int _0^t e^{-\mathrm {i}(t-\tau )A}B(u(\tau ))\mathrm {d}W(\tau ). \end{aligned}$$
(7.8)

Remark 7.3

Of course, (7.7) also holds for \(\varepsilon \ge 1,\) but then \(u\in L^r({\tilde{\Omega }},L^q(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M)))\) would be trivial by the Sobolev embedding \(H^{{\tilde{s}}}(M)\hookrightarrow H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M).\) Being able to choose \(\varepsilon \in (0,1)\) means a gain of regularity which will be used below via \(H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M)\hookrightarrow {L^\infty (M)}\) for an appropriate choice of the parameters.
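To make the last sentence of the remark precise: in dimension \(d=2,\) the Sobolev embedding \(H^{\sigma ,p}(M)\hookrightarrow {L^\infty (M)}\) holds for \(\sigma >\frac{2}{p}=1-\frac{2}{q},\) so the embedding \(H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M)\hookrightarrow {L^\infty (M)}\) requires

```latex
$$\begin{aligned} {\tilde{s}}-\frac{1+\varepsilon }{q}>1-\frac{2}{q},\qquad \text {i.e.}\qquad {\tilde{s}}>\frac{1+\varepsilon }{q}+1-\frac{2}{q}, \end{aligned}$$
```

which is exactly the condition imposed on \({\tilde{s}}\) in the proof of Theorem 7.5.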

Proof of Proposition 7.2

Step 1 First, we will show that it is possible to rewrite Eq. (2.19) from the definition of solutions for (1.1) in the mild form (7.8).

We note that for each \(s_0<0\) the semigroup \(\left( e^{-\mathrm {i}t A}\right) _{t\ge 0}\) on \(L^2(M)\) extends to a semigroup \(\left( T_{s_0}(t)\right) _{t\ge 0}\) with the generator \(A_{s_0}\) that extends A to \(\mathcal {D}(A_{s_0})=H^{s_0+2}(M).\) To keep the notation simple, we also call this semigroup \(\left( e^{-\mathrm {i}t A}\right) _{t\ge 0}.\)

We apply the Itô formula to \(\varPhi \in C^{1,2}([0,t]\times H^{s-2}(M), H^{s-4}(M))\) defined by

$$\begin{aligned} \varPhi (\tau ,x):=e^{-\mathrm {i}(t-\tau )A}x,\qquad \tau \in [0,t],\quad x\in H^{s-2}(M) \end{aligned}$$

and obtain

$$\begin{aligned} \mathrm {i}u(t)&=\mathrm {i}e^{-\mathrm {i}tA}u_0+\int _0^t e^{-\mathrm {i}(t-\tau )A}F_\alpha ^\pm (u(\tau ))\mathrm {d}\tau +\int _0^t e^{-\mathrm {i}(t-\tau )A}\mu (u(\tau ))\mathrm {d}\tau \\&\quad +\,\int _0^t e^{-\mathrm {i}(t-\tau )A}B(u(\tau ))\mathrm {d}W(\tau ) \end{aligned}$$

almost surely in \(H^{s-4}(M)\) for all \(t\in [0,T].\)

Step 2 Using the Strichartz estimates from Lemma B.4, we deal with the free term and each convolution term on the right-hand side to get (7.7) and the identity (7.8) in \(H^{{\tilde{s}}}(M).\) For this purpose, we define

$$\begin{aligned} Y_T:=L^q(0,T; H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M))\cap L^\infty (0,T;H^{{\tilde{s}}}(M)). \end{aligned}$$

By (B.5) we obtain

$$\begin{aligned} \Vert e^{-\mathrm {i}tA}u_0 \Vert _{L^r({\tilde{\Omega }},Y_T)}\lesssim \Vert u_0 \Vert _{H^{{\tilde{s}}}}\lesssim \Vert u_0 \Vert _{H^s}<\infty \end{aligned}$$

and by (B.6) and Lemma 7.1, we get

$$\begin{aligned} \left\| \int _0^t e^{-\mathrm {i}(t-\tau )A}F_\alpha ^\pm (u(\tau ))\mathrm {d}\tau \right\| _{Y_T}\lesssim \Vert F_\alpha ^\pm (u) \Vert _{L^1(0,T; H^{{\tilde{s}}})}\lesssim \Vert u \Vert _{L^\alpha (0,T; H^s)}^\alpha . \end{aligned}$$

Integration over \({\tilde{\Omega }}\) and (7.6) yields

$$\begin{aligned} \left\| \int _0^t e^{-\mathrm {i}(t-\tau )A}F_\alpha ^\pm (u(\tau ))\mathrm {d}\tau \right\| _{L^r({\tilde{\Omega }},Y_T)}\lesssim \Vert u \Vert _{L^{r\alpha }({\tilde{\Omega }},L^\alpha (0,T; H^s))}^\alpha <\infty . \end{aligned}$$

To estimate the other convolutions, we need that \(\mu \) is bounded in \(H^{{\tilde{s}}}(M)\) and B is bounded from \(H^{{\tilde{s}}}(M)\) to \({\text {HS}}(Y,H^{{\tilde{s}}}(M)).\) This can be deduced from the following estimate, which follows from complex interpolation (see [39, Theorem 2.1.6]), Hölder’s inequality and Assumption 2.7:

$$\begin{aligned} \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(H^{{\tilde{s}}})}}^2&\le \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(H^1)}}^{2{{\tilde{s}}}} \Vert B_m \Vert _{{{\mathcal {L}}({H})}}^{2(1-{{\tilde{s}}})}\nonumber \\&\le \left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}(H^1)}}^{2}\right) ^{{{\tilde{s}}}} \left( \sum _{m=1}^{\infty }\Vert B_m \Vert _{{{\mathcal {L}}({H})}}^{2}\right) ^{1-{{\tilde{s}}}}<\infty . \end{aligned}$$
(7.9)

Therefore, by (B.6), (7.9) and (7.6)

$$\begin{aligned} \left\| \int _0^t e^{-\mathrm {i}(t-\tau )A}\mu (u(\tau ))\mathrm {d}\tau \right\| _{L^r({\tilde{\Omega }},Y_T)}&\lesssim \Vert \mu (u) \Vert _{L^r({\tilde{\Omega }}, L^1(0,T; H^{{\tilde{s}}}))} \lesssim \Vert u \Vert _{L^r({\tilde{\Omega }}, L^1(0,T; H^{{\tilde{s}}}))}\\&\lesssim \Vert u \Vert _{L^{r\alpha }({\tilde{\Omega }}, L^\beta (0,T; H^s))}<\infty . \end{aligned}$$

The estimates (B.7), (7.9) and (7.6) imply

$$\begin{aligned}&\left\| \int _0^t e^{-\mathrm {i}(t-\tau )A}B(u(\tau ))\mathrm {d}W(\tau ) \right\| _{L^r({\tilde{\Omega }},Y_T)} \lesssim \Vert B(u) \Vert _{L^r({\tilde{\Omega }}, L^2(0,T; {\text {HS}}(Y,H^{{\tilde{s}}})))}\\&\ \quad \lesssim \Vert u \Vert _{L^r({\tilde{\Omega }}, L^2(0,T; H^{{\tilde{s}}}))} \lesssim \Vert u \Vert _{L^{r\alpha }({\tilde{\Omega }}, L^\beta (0,T; H^s))}<\infty . \end{aligned}$$

Hence, the mild Eq. (7.8) holds almost surely in \(H^{{\tilde{s}}}(M)\) for each \(t\in [0,T]\) and thus, we get (7.7) by the pathwise continuity of deterministic and stochastic integrals. \(\square \)

As a preparation for the proof of pathwise uniqueness, we show a formula for the \(L^2\)-norm of the difference of two solutions of (1.1).

Lemma 7.4

Let \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u_j\right) ,\)\(j=1,2,\) be solutions of (1.1) with \(F=F_\alpha ^\pm \) for \(\alpha >1.\) Then,

$$\begin{aligned} \Vert u_1(t)-u_2(t) \Vert _{L^2}^2&=2 \int _0^t {\text {Re}}\big (u_1(\tau )-u_2(\tau ),-\mathrm {i}F_\alpha ^\pm (u_1(\tau ))+\mathrm {i}F_\alpha ^\pm (u_2(\tau ))\big )_{L^2} \mathrm {d}\tau \end{aligned}$$
(7.10)

almost surely for all \(t\in [0,T].\)

Proof

The proof is similar to that of Proposition 6.5. In fact, it is even simpler, since the regularity of \(F_\alpha ^\pm \) provided by Lemma 7.1 simplifies the proof of the convergence as \(\lambda \rightarrow \infty .\)\(\square \)

Finally, we are ready to prove the pathwise uniqueness of solutions to (1.1).

Theorem 7.5

Let \(d=2\) and \(F(u)=F_\alpha ^\pm (u)=\pm \vert u\vert ^{\alpha -1}u\) with \(\alpha \in (1,\infty ).\) Let \(r> \alpha ,\)\(\beta \ge \max \{\alpha ,2\}\) and

$$\begin{aligned} s \in {\left\{ \begin{array}{ll} (1-\frac{1}{2\alpha },1] &{}\quad \text {for}\,\alpha \in (1,3], \\ (1-\frac{1}{\alpha (\alpha -1)},1] &{}\quad \text {for}\, \alpha >3. \end{array}\right. } \end{aligned}$$

Then, solutions of problem (1.1) are pathwise unique in \(L^{r}({\tilde{\Omega }},L^\beta (0,T;H^s(M)))\), i.e. given two solutions \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u_j\right) \) with

$$\begin{aligned} u_j\in L^{r}({\tilde{\Omega }},L^\beta (0,T;H^s(M))), \end{aligned}$$

for \(j=1,2,\) we have \(u_1(t)=u_2(t)\) almost surely in \({L^2(M)}\) for all \(t\in [0,T].\)

Proof

Step 1 Take two solutions \(\left( {\tilde{\Omega }},{\tilde{\mathcal {F}}},{\tilde{\mathbb {P}}},{\tilde{W}},{\tilde{\mathbb {F}}},u_j\right) \) of (1.1) with \(u_j\in L^{r}({\tilde{\Omega }},L^\beta (0,T;H^s(M)))\) for \(j=1,2,\) and define \(w:=u_1-u_2.\) From Lemma 7.4, we conclude

$$\begin{aligned} \Vert w(t) \Vert _{L^2}^2&=2 \int _0^t {\text {Re}}\big (w(\tau ),-\mathrm {i}F(u_1(\tau ))+\mathrm {i}F(u_2(\tau ))\big )_{L^2} \mathrm {d}\tau \end{aligned}$$

almost surely for all \(t\in [0,T].\) The estimate

$$\begin{aligned} \vert F_\alpha ^\pm (z_1)-F_\alpha ^\pm (z_2)\vert \lesssim \left( \vert z_1\vert ^{\alpha -1}+\vert z_2\vert ^{\alpha -1}\right) \vert z_1-z_2\vert ,\qquad z_1,z_2\in \mathbb {C}, \end{aligned}$$

yields

$$\begin{aligned} \Vert w(t) \Vert _{L^2}^2&\lesssim \int _0^t \int _M \vert w(\tau ,x)\vert ^2 \left[ \vert {u_1(\tau ,x)}\vert ^{\alpha -1}+\vert {u_2(\tau ,x)}\vert ^{\alpha -1}\right] \mathrm {d}x \mathrm {d}\tau \nonumber \\&\le \int _0^t \Vert w(\tau ) \Vert _{L^2}^2 \left[ \Vert {u_1(\tau )}\Vert _{L^\infty (M)}^{\alpha -1}+\Vert {u_2(\tau )}\Vert _{L^\infty (M)}^{\alpha -1}\right] \mathrm {d}\tau \end{aligned}$$
(7.11)

almost surely for all \(t\in [0,T].\)
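To fix the version of Gronwall's lemma invoked below, we record a standard integral formulation (stated here for the reader's convenience; the precise variant used is immaterial as long as the factor \(b\) is integrable): for measurable \(\varphi \ge 0\) bounded on \([0,T]\) and \(0\le b\in L^1(0,T),\)

$$\begin{aligned} \varphi (t)\le C\int _0^t b(\tau )\varphi (\tau )\,\mathrm {d}\tau \quad \text {for all } t\in [0,T] \qquad \Longrightarrow \qquad \varphi \equiv 0 \text { on } [0,T]. \end{aligned}$$

Applied pathwise with \(\varphi (t)=\Vert w(t) \Vert _{L^2}^2,\) the estimate (7.11) thus reduces the uniqueness proof to showing that the bracketed factor is almost surely integrable in time.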

Step 2 First, we deal with the case \(\alpha \in (1,3].\) By \(s> 1-\frac{1}{2\alpha },\) we can choose \(q>2\) and \(\varepsilon \in (0,1)\) with

$$\begin{aligned} 1-\frac{1}{2 \alpha }<1-\frac{1}{2 \alpha }+\frac{q-2+2\varepsilon }{2q\alpha }=1-\frac{1}{q\alpha }+ \frac{\varepsilon }{q\alpha }<s. \end{aligned}$$

Hence, we have \(\frac{1+\varepsilon }{q}+1-\frac{2}{q}< 1-\alpha +s\alpha \) and in particular, there is \({\tilde{s}}\in (\frac{1+\varepsilon }{q}+1-\frac{2}{q}, 1-\alpha +s\alpha ).\) If we choose \(p>2\) according to \(\frac{2}{p}+\frac{2}{q}=1,\) Proposition B.2 leads to \(H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M)\hookrightarrow {L^\infty (M)}\) because of

$$\begin{aligned} \left( {\tilde{s}}-\frac{1+\varepsilon }{q}\right) -\frac{2}{p}={\tilde{s}}-\frac{1+\varepsilon }{q}+\frac{2}{q}-1= {\tilde{s}}-\left( \frac{1+\varepsilon }{q}+1-\frac{2}{q}\right) >0. \end{aligned}$$
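Here, Proposition B.2 is applied in the standard form of the fractional Sobolev embedding on the two-dimensional manifold \(M\) (we only use the supercritical case):

$$\begin{aligned} H^{\sigma ,p}(M)\hookrightarrow L^\infty (M) \qquad \text {whenever}\quad \sigma -\frac{d}{p}>0, \qquad d=\dim M=2, \end{aligned}$$

with \(\sigma ={\tilde{s}}-\frac{1+\varepsilon }{q},\) which is exactly the condition verified in the preceding display.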

Moreover, we have \(u_j\in L^q(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M))\) almost surely for \(j=1,2\) by Proposition 7.2. Hence, the process b defined by

$$\begin{aligned} b(\tau ):=\left[ \Vert u_1(\tau ) \Vert _{L^\infty }^{\alpha -1}+\Vert u_2( \tau ) \Vert _{L^\infty }^{\alpha -1}\right] ,\qquad \tau \in [0,T], \end{aligned}$$
(7.12)

satisfies

$$\begin{aligned} \Vert b \Vert _{L^1(0,T)}&\lesssim \Vert u_1 \Vert _{L^{q}(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p})}^{\alpha -1}+ \Vert u_2 \Vert _{L^{q}(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p})}^{\alpha -1} <\infty \quad \text {a.s.}, \end{aligned}$$
(7.13)

where we used \(q>2\ge \alpha -1\) and the Hölder inequality in time. Because of (7.11), we can apply Gronwall’s Lemma to get

$$\begin{aligned} u_1(t)=u_2(t)\quad \text {a.s. in }{L^2(M)} \text { for all }t \in [0,T]. \end{aligned}$$
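For completeness, the Hölder step behind (7.13) can be spelled out: since \(q>2\ge \alpha -1\) for \(\alpha \in (1,3],\) Hölder's inequality in time with exponent \(\frac{q}{\alpha -1}> 1\) and the embedding \(H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M)\hookrightarrow {L^\infty (M)}\) yield

$$\begin{aligned} \int _0^T \Vert u_j(\tau ) \Vert _{L^\infty }^{\alpha -1}\,\mathrm {d}\tau \lesssim \int _0^T \Vert u_j(\tau ) \Vert _{H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}}^{\alpha -1}\,\mathrm {d}\tau \le T^{1-\frac{\alpha -1}{q}} \Vert u_j \Vert _{L^{q}(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p})}^{\alpha -1} \end{aligned}$$

for \(j=1,2,\) and the right-hand side is finite almost surely by Proposition 7.2.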

Step 3 Now, let \(\alpha >3.\) Then, we set \(q:=\alpha -1\) and choose \(p>2\) with \(\frac{2}{p}+\frac{2}{q}=1.\) Using \(s> 1-\frac{1}{\alpha (\alpha -1)},\) we fix \(\varepsilon \in (0,1)\) with

$$\begin{aligned} 1-\frac{1}{\alpha (\alpha -1)}<1-\frac{1}{q\alpha }+\frac{\varepsilon }{q\alpha }<s. \end{aligned}$$

As above, we can choose \({\tilde{s}}\in (\frac{1+\varepsilon }{q}+1-\frac{2}{q}, 1-\alpha +s\alpha ).\) We therefore get \(H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M)\hookrightarrow {L^\infty (M)}\) and \(u_j\in L^q(0,T;H^{{\tilde{s}}-\frac{1+\varepsilon }{q},p}(M))\) almost surely for \(j=1,2.\) We obtain \(b\in L^1(0,T)\) almost surely for b from (7.12) and Gronwall’s Lemma implies

$$\begin{aligned} u_1(t)=u_2(t)\quad \text {a.s. in }{L^2(M)}\text { for all } t \in [0,T]. \end{aligned}$$

\(\square \)

Remark 7.6

In [8], Brzeźniak and Millet proved pathwise uniqueness of solutions in the space \(L^q(\Omega ,C([0,T],H^1(M))\cap L^q([0,T],H^{1-\frac{1}{q},p}(M)))\) with \(\frac{2}{q}+\frac{2}{p}=1\) and \(q>\alpha +1.\) Since they used the deterministic Strichartz estimates from [4] instead of [13], their result is restricted to compact manifolds M. Comparing the result in [8] with Theorem 7.5 of the present article, we see that the assumptions of Theorem 7.5 are weaker with respect to regularity in space and time. On the other hand, the assumption on the required moments is slightly weaker in [8].

Remark 7.7

A similar uniqueness theorem can also be proved on bounded domains in \(\mathbb {R}^2\) using the Strichartz inequalities by Blair, Smith and Sogge from [14]. We also want to mention the classical strategy by Vladimirov (see [15, 43, 45, 56]) for proving uniqueness of \(H^1\)-solutions via Trudinger-type inequalities, which can be seen as the limit case of Sobolev's embedding, see also [2, Theorem 8.27]. Since this proof relies only on formula (7.10) and on the solutions being in \(H^1,\) it can be transferred directly to the stochastic setting. This strategy does not use Strichartz estimates, but it is restricted to \(\alpha \in (1,3]\) and cannot be transferred to \(H^s\) for \(s<1.\)

Now, we give the definition of the concepts of strong solutions and uniqueness in law used in Corollary 1.3.

Definition 7.8

  1. (a)

    Let \(T>0\) and \(u_0\in E_A.\) Then, a strong solution of Eq. (1.1) is a continuous, \({\tilde{\mathbb {F}}}\)-adapted process u with values in \({E_A^*}\) such that \(u\in L^2(\Omega \times [0,T],{E_A^*})\) and almost all paths are in \(C_w([0,T],{E_A})\) with

    $$\begin{aligned} u(t)= u_0+ \int _0^t \left[ -\mathrm {i}A u(s)-\mathrm {i}F(u(s))+\mu (u(s))\right] \mathrm {d}s - \mathrm {i}\int _0^t B u(s) \mathrm {d}W(s) \end{aligned}$$

    almost surely in \({E_A^*}\) for all \(t\in [0,T].\)

  2. (b)

    The solutions of (1.7) are called unique in law if, for all martingale solutions \(\left( \Omega _j,\mathcal {F}_j,\mathbb {P}_j,W_j,\mathbb {F}_j,u_j\right) \) with \(u_j(0)=u_0\) for \(j=1,2,\) we have \(\mathbb {P}_1^{u_1}=\mathbb {P}_2^{u_2}\) on \(C([0,T],L^2(M)).\)

We finish this section with the proof of Corollary 1.3.

Proof of Corollary 1.3

The existence of a martingale solution from Corollary 1.2 and the pathwise uniqueness from Theorem 7.5 yield the assertion by [44, Theorem 2 and 12.1]. \(\square \)