1 Introduction

We consider the Cauchy problem of the system of nonlinear Schrödinger equations:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle (i\partial _{t}+\alpha \Delta )u=-(\nabla \cdot w )v, \quad t>0,\ x\in {\mathbb R}^d,\\ \displaystyle (i\partial _{t}+\beta \Delta )v=-(\nabla \cdot {\overline{w}})u, \quad t>0,\ x\in {\mathbb R}^d,\\ \displaystyle (i\partial _{t}+\gamma \Delta )w =\nabla (u\cdot {\overline{v}}), \quad t>0,\ x\in {\mathbb R}^d,\\ (u, v, w)|_{t=0}=(u_{0}, v_{0}, w_{0})\in (H^s({\mathbb R}^d))^d\times (H^s({\mathbb R}^d))^d\times (H^s({\mathbb R}^d))^d, \end{array}\right. } \end{aligned}$$
(1.1)

where \(\alpha \), \(\beta \), \(\gamma \in {\mathbb R}\backslash \{0\}\) and the unknown functions u, v, w are d-dimensional complex vector-valued. System (1.1) was introduced by Colin and Colin in [6] as a model of laser–plasma interaction. (See also [7, 8].) They also proved local existence of the solution to (1.1) in \(H^s({\mathbb R}^d)\) for \(s>\frac{d}{2}+3\). System (1.1) is invariant under the following scaling transformation:

$$\begin{aligned} A_{\lambda }(t,x)=\lambda ^{-1}A(\lambda ^{-2}t,\lambda ^{-1}x)\quad (A=(u,v,w) ), \end{aligned}$$
(1.2)

and the scaling critical regularity is \(s_{c}=\frac{d}{2}-1\), since \(\Vert A_{\lambda }(0,\cdot )\Vert _{{\dot{H}}^{s}}=\lambda ^{\frac{d}{2}-1-s}\Vert A(0,\cdot )\Vert _{{\dot{H}}^{s}}\). We put

$$\begin{aligned} \theta :=\alpha \beta \gamma \left( \frac{1}{\alpha }-\frac{1}{\beta }-\frac{1}{\gamma }\right) , \quad \kappa :=(\alpha -\beta )(\alpha -\gamma )(\beta +\gamma ). \end{aligned}$$
(1.3)

We note that \(\kappa =0\) does not occur when \(\theta \ge 0\) for \(\alpha \), \(\beta \), \(\gamma \in {\mathbb R}\backslash \{0\}\).
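Indeed, expanding \(\theta =\beta \gamma -\gamma \alpha -\alpha \beta \), the vanishing of any one factor of \(\kappa \) forces \(\theta <0\):

$$\begin{aligned} \alpha =\beta \ \Rightarrow \ \theta =-\beta ^{2}<0,\qquad \alpha =\gamma \ \Rightarrow \ \theta =-\gamma ^{2}<0,\qquad \beta =-\gamma \ \Rightarrow \ \theta =-\gamma ^{2}<0. \end{aligned}$$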

First, we introduce some known results for related problems. System (1.1) has quadratic nonlinear terms which contain a derivative. A derivative loss arising from the nonlinearity makes the problem difficult. In fact, Mizohata [21] considered the Schrödinger equation

$$\begin{aligned} {\left\{ \begin{array}{ll} i\partial _{t}u-\Delta u=(b_{1}(x)\cdot \nabla ) u, \quad t\in {\mathbb R},\ x\in {\mathbb R}^{d},\\ u(0,x)=u_{0}(x), \quad x\in {\mathbb R}^{d} \end{array}\right. } \end{aligned}$$

and proved that the uniform bound

$$\begin{aligned} \sup _{x\in {\mathbb R}^{d},\omega \in S^{d-1},R>0}\left| \mathrm{Re}\int _{0}^{R}b_{1}(x+r\omega )\cdot \omega \hbox {d}r\right| <\infty \end{aligned}$$

is a necessary condition for the \(L^{2} ({\mathbb R}^d)\) well-posedness. Furthermore, Christ [5] proved that the flow map of the nonlinear Schrödinger equation

$$\begin{aligned} {\left\{ \begin{array}{ll} i\partial _{t}u-\partial _{x}^{2}u=u\partial _{x}u, \quad t\in {\mathbb R},\ x\in {\mathbb R},\\ u(0,x)=u_{0}(x), \quad x\in {\mathbb R}\end{array}\right. } \end{aligned}$$
(1.4)

is not continuous on \(H^{s}({\mathbb R})\) for any \(s\in {\mathbb R}\). These results indicate that it is difficult in general to obtain well-posedness for quadratic derivative nonlinear Schrödinger equations. For systems of quadratic derivative nonlinear Schrödinger equations, however, well-posedness is known to hold. In [15], the first author proved the well-posedness of (1.1) in \(H^s({\mathbb R}^d)\), where s is given in Table 1.

Table 1 Well-posedness (WP for short) for (1.1) proved in [15]

Recently, in [16], the first and second authors have improved this result by using the generalization of the Loomis–Whitney inequality introduced in [2] and [3]. They proved the well-posedness of (1.1) in \(H^s({\mathbb R}^d)\) for \(s\ge \frac{1}{2}\) if \(d=2\) and \(s>\frac{1}{2}\) if \(d=3\), under the condition \(\kappa \ne 0\) and \(\theta < 0\). In [15], the first author also proved that the flow map is not \(C^2\) for \(s<1\) if \(\theta = 0\) and for \(s<\frac{1}{2}\) if \(\theta < 0\) and \(\kappa \ne 0\). Therefore, the well-posedness obtained in [15] and [16] is optimal, except for the case \(d=3\) and \(s=\frac{1}{2}\) (which is scaling critical), as long as we use the iteration argument. In particular, the optimal regularity is far from the scaling critical regularity if \(d\le 3\) and \(\theta \le 0\).

We point out that the results in [15, 16] do not contain the scattering of the solution for \(d\le 3\) under the condition \(\theta =0\) (and also \(\theta <0\)). In [17], Ikeda, Katayama, and Sunagawa considered the system of quadratic nonlinear Schrödinger equations

$$\begin{aligned} \left( i\partial _t+\frac{1}{2m_j}\Delta \right) u_j=F_j(u,\partial _xu), \quad t>0,\ x\in {\mathbb R}^d,\ j=1,2,3, \end{aligned}$$
(1.5)

under the mass resonance condition \(m_1+m_2=m_3\) (which corresponds to the condition \(\theta =0\) for (1.1)), where \(u=(u_1,u_2,u_3)\) is \({\mathbb C}^3\)-valued, \(m_1\), \(m_2\), \(m_3\in {\mathbb R}\backslash \{0\}\), and \(F_j\) is defined by

$$\begin{aligned} {\left\{ \begin{array}{ll} F_{1}(u,\partial _xu)=\sum _{|\alpha |, |\beta |\le 1} C_{1,\alpha ,\beta }(\overline{\partial ^{\alpha }u_2})(\partial ^{\beta }u_3),\\ F_{2}(u,\partial _xu)=\sum _{|\alpha |, |\beta |\le 1} C_{2,\alpha ,\beta }(\partial ^{\beta }u_3)(\overline{\partial ^{\alpha }u_1}),\\ F_{3}(u,\partial _xu)=\sum _{|\alpha |, |\beta |\le 1} C_{3,\alpha ,\beta }(\partial ^{\alpha }u_1)(\partial ^{\beta }u_2) \end{array}\right. } \end{aligned}$$
(1.6)

with some constants \(C_{1,\alpha ,\beta }\), \(C_{2,\alpha ,\beta }\), \(C_{3,\alpha ,\beta }\in {\mathbb C}\). They obtained the small data global existence and the scattering of the solution to (1.5) in the weighted Sobolev space for \(d=2\) under the mass resonance condition and the null condition for the nonlinear terms (1.6). They also proved the same result for \(d\ge 3\) without the null condition. In [18], Ikeda, Kishimoto, and Okamoto proved the small data global well-posedness and the scattering of the solution to (1.5) in \(H^s ({\mathbb R}^d)\) for \(d\ge 3\) and \(s\ge s_c\) under the mass resonance condition and the null condition for the nonlinear terms (1.6). They also proved the local well-posedness in \(H^s ({\mathbb R}^d)\) for \(d=1\) and \(s\ge 0\), \(d=2\) and \(s>s_c\), and \(d=3\) and \(s\ge s_c\) under the same conditions. (The results in [15] for \(d\le 3\) and \(\theta =0\) say that if the nonlinear terms do not satisfy the null condition, then \(s=1\) is the optimal regularity to obtain the well-posedness by using the iteration argument.)

Recently, in [23], Sakoda and Sunagawa have considered (1.5) for \(d=2\) and \(j=1,\ldots , N\) with

$$\begin{aligned} F_j(u,\partial _xu) =\sum _{|\alpha |, |\beta |\le 1}\sum _{1\le k,l\le 2N} C^{\alpha , \beta }_{j,k,l}(\partial _x^{\alpha }u_k^{\#})(\partial _x^{\beta }u_l^{\#}), \end{aligned}$$
(1.7)

where \(u_j^{\#}=u_j\) if \(j=1,\ldots ,N\), and \(u_j^{\#}=\overline{u_j}\) if \(j=N+1,\ldots ,2N\). They obtained the small data global existence and the time decay estimate for the solution under some conditions on \(m_1,\ldots ,m_N\) and the nonlinear terms (1.7), where the conditions contain (1.1) with \(\theta =0\). Blow-up solutions also exist for systems of nonlinear Schrödinger equations: Ozawa and Sunagawa [22] gave examples of derivative nonlinearities which cause the small data blow-up for a system of Schrödinger equations. There are also some known results for systems of nonlinear Schrödinger equations with no derivative nonlinearity [12,13,14].

The aim of the present paper is to improve the results in [15, 16] for radial initial data, under suitable conditions, in \({\mathbb R}^2\) and \({\mathbb R}^3\). Since the nonlinear terms of (1.1) are not of radial form, the only radial solution to (1.1) is the trivial one. Therefore, we rewrite (1.1) in a radial form. Here, we focus on \(d=2\). Let \({\mathcal {S}}({\mathbb R}^2)\) denote the Schwartz class. If \(w=(w_1,w_2)\in ({\mathcal {S}}({\mathbb R}^2))^2\) satisfies

$$\begin{aligned} \xi ^{\perp }\cdot {\widehat{w}}(\xi )=\xi _1\widehat{w_2}(\xi )-\xi _2\widehat{w_1}(\xi )=0, \quad x^{\perp }\cdot w(x)=x_1w_2(x)-x_2w_1(x)=0 \end{aligned}$$
(1.8)

for any \(\xi =(\xi _1,\xi _2)\in {\mathbb R}^2\) and \(x=(x_1,x_2)\in {\mathbb R}^2\), then there exists a scalar potential \(W\in C^1({\mathbb R}^2)\) satisfying

$$\begin{aligned} \nabla W (x)=w(x), \quad {}^{\forall }x\in {\mathbb R}^2 \end{aligned}$$
(1.9)

and

$$\begin{aligned} \frac{\partial }{\partial \vartheta }W(r\cos \vartheta , r\sin \vartheta )=0, \quad {}^{\forall }(r,\vartheta )\in [0,\infty )\times [0,2\pi ). \end{aligned}$$
(1.10)

Indeed, if we put

$$\begin{aligned} W(x):=\int _{a_1}^{x_1}w_1(y_1,x_2)\hbox {d}y_1+\int _{a_2}^{x_2}w_2(a_1,y_2)\hbox {d}y_2 \end{aligned}$$

for some \(a_1\), \(a_2\in {\mathbb R}\), then W satisfies (1.9) by the first equality in (1.8). Furthermore, W also satisfies (1.10) by the second equality in (1.8). We note that the first equality in (1.8) is equivalent to

$$\begin{aligned} \nabla ^{\perp }\cdot w(x)=\partial _1w_2(x)-\partial _2w_1(x)=0, \end{aligned}$$

which is the irrotational condition.
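For completeness, (1.10) follows from (1.9), the chain rule, and the second equality in (1.8) at the point \(x=(r\cos \vartheta ,r\sin \vartheta )\):

$$\begin{aligned} \frac{\partial }{\partial \vartheta }W(r\cos \vartheta , r\sin \vartheta ) =-r\sin \vartheta \,w_1(x)+r\cos \vartheta \,w_2(x) =x_1w_2(x)-x_2w_1(x)=0. \end{aligned}$$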

Remark 1.1

If \(d=3\), we can also obtain the radial scalar potential \(W\in C^1({\mathbb R}^3)\) of \(w=(w_1,w_2,w_3)\in ({\mathcal {S}}({\mathbb R}^3))^3\) by assuming the conditions

$$\begin{aligned} \xi \times {\widehat{w}}(\xi )=0,\ x\times w(x)=0 \end{aligned}$$
(1.11)

instead of (1.8).

Definition 1

We say \(f\in {\mathcal {S}}'({\mathbb R}^d)\) is radial if it holds that

$$\begin{aligned} \langle f,\varphi \circ R\rangle =\langle f,\varphi \rangle \end{aligned}$$

for any \(\varphi \in {\mathcal {S}}({\mathbb R}^d)\) and rotation \(R:{\mathbb R}^d\rightarrow {\mathbb R}^d\).

Remark 1.2

If \(f\in L^1_{\mathrm{loc}}({\mathbb R}^d)\), then Definition 1 is equivalent to

$$\begin{aligned} {}^\exists g:{\mathbb R}\rightarrow {\mathbb C}\ \mathrm{s.t.}\ f(x)=g(|x|), \quad {\hbox {a.e.}}\ x\in {\mathbb R}^d. \end{aligned}$$

Now, we consider the system of nonlinear Schrödinger equations:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle (i\partial _{t}+\alpha \Delta )u=-(\Delta W )v, \quad t>0,\ x\in {\mathbb R}^d,\\ \displaystyle (i\partial _{t}+\beta \Delta )v=-(\Delta {\overline{W}})u, \quad t>0,\ x\in {\mathbb R}^d,\\ \displaystyle (i\partial _{t}+\gamma \Delta )\nabla W =\nabla (u\cdot {\overline{v}}), \quad t>0,\ x\in {\mathbb R}^d,\\ (u, v, [W])|_{t=0}=(u_{0}, v_{0}, [W_{0}])\in {\mathcal {H}}^s({\mathbb R}^d) \end{array}\right. } \end{aligned}$$
(1.12)

instead of (1.1), where \(d=2\) or 3, and

$$\begin{aligned} \begin{aligned} {\mathcal {H}}^s({\mathbb R}^d)&:=(H^s_{\mathrm{rad}}({\mathbb R}^d))^d\times (H^s_{\mathrm{rad}}({\mathbb R}^d))^d\times {\widetilde{H}}^{s+1}_{\mathrm{rad}}({\mathbb R}^d),\\ H^s_{\mathrm{rad}}({\mathbb R}^d)&:=\{f\in H^s({\mathbb R}^d)|\ f\ \mathrm{is\ radial}\},\\ {\widetilde{H}}^{s+1}({\mathbb R}^d)&:=\{f\in {\mathcal {S}}'({\mathbb R}^d)| \ \nabla f\in (H^s({\mathbb R}^d))^d\}/{\mathcal {N}}_0,\\ {\mathcal {N}}_0&:= \{f\in {\mathcal {S}}'({\mathbb R}^d)|\ \nabla f=0 \}, \\ {\widetilde{H}}^{s+1}_{\mathrm{rad}}({\mathbb R}^d)&:=\{[f]\in {\widetilde{H}}^{s+1}({\mathbb R}^d)|\ f\ \mathrm{is\ radial}\}. \end{aligned} \end{aligned}$$

The norm for an equivalence class \([f]\in {\widetilde{H}}^{s+1}({\mathbb R}^d)\) is defined by

$$\begin{aligned} \Vert [f]\Vert _{{\widetilde{H}}^{s+1}}:=\Vert \nabla f\Vert _{(H^s)^d}\sim \Vert f\Vert _{{\dot{H}}^{s+1}} +\Vert f\Vert _{{\dot{H}}^1}, \end{aligned}$$

which is well defined since \(\nabla f\) does not depend on the choice of the representative \(f\) of \([f]\). System (1.12) is obtained by substituting \(w=\nabla W\) and \(w_0=\nabla W_0\) in (1.1).
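The norm equivalence in the display above can be checked on the Fourier side; a minimal computation, assuming \(s\ge 0\) so that \(\langle \xi \rangle ^{2s}\sim 1+|\xi |^{2s}\):

$$\begin{aligned} \Vert \nabla f\Vert _{(H^s)^d}^2 =\int _{{\mathbb R}^d}\langle \xi \rangle ^{2s}|\xi |^2|{\widehat{f}}(\xi )|^2\hbox {d}\xi \sim \int _{{\mathbb R}^d}\left( |\xi |^{2(s+1)}+|\xi |^2\right) |{\widehat{f}}(\xi )|^2\hbox {d}\xi =\Vert f\Vert _{{\dot{H}}^{s+1}}^2+\Vert f\Vert _{{\dot{H}}^1}^2. \end{aligned}$$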

Definition 2

We say \((u,v,[W])\in C([0,T];{\mathcal {H}}^s({\mathbb R}^d))\) is a solution to (1.12) if

$$\begin{aligned} \begin{aligned} u(t)&=e^{it\alpha \Delta }u_0+i\int _0^te^{i(t-t')\alpha \Delta }(\Delta W(t'))v(t')\hbox {d}t'\quad \mathrm{in}\ (H^s({\mathbb R}^d))^d,\\ v(t)&=e^{it\beta \Delta }v_0+i\int _0^te^{i(t-t')\beta \Delta }(\Delta \overline{W(t')})u(t')\hbox {d}t'\quad \mathrm{in}\ (H^s({\mathbb R}^d))^d,\\ \nabla W(t)&=e^{it\gamma \Delta }\nabla W_0-i\int _0^te^{i(t-t')\gamma \Delta }\nabla (u(t')\cdot \overline{v(t')})\hbox {d}t'\quad \mathrm{in}\ (H^s({\mathbb R}^d))^d \end{aligned} \end{aligned}$$

hold for any \(t\in [0,T]\). This definition does not depend on how we choose a representative W; indeed, two representatives differ by an element of \({\mathcal {N}}_0\), which changes neither \(\Delta W\) nor \(\nabla W\).

Now, we give the main results in this paper.

Theorem 1.1

Assume \(\kappa \ne 0\).

  (i)

    Let \(d=2\). Assume that \(s\ge \frac{1}{2}\) if \(\theta =0\) and \(s>0\) if \(\theta <0\). Then, (1.12) is locally well posed in \({\mathcal {H}}^s({\mathbb R}^2)\).

  (ii)

    Let \(d=3\). Assume that \(\theta \le 0\) and \(s\ge \frac{1}{2}\). Then, (1.12) is locally well posed in \({\mathcal {H}}^s({\mathbb R}^3)\).

  (iii)

    Let \(d=3\). Assume that \(\theta \le 0\) and \(s\ge \frac{1}{2}\). Then, (1.12) is globally well posed in \({\mathcal {H}}^s({\mathbb R}^3)\) for small data. Furthermore, the solution scatters in \({\mathcal {H}}^s({\mathbb R}^3)\).

Remark 1.3

\(s=0\) for \(d=2\) and \(s=\frac{1}{2}\) for \(d=3\) are the scaling critical regularities \(s_c=\frac{d}{2}-1\) for (1.1).

We obtain the following.

Theorem 1.2

Let \(d=2\) and \(\theta =0\). Then, the flow map of (1.12) is not \(C^2\) in \({\mathcal {H}}^s({\mathbb R}^2)\) for \(s<\frac{1}{2}\).

Remark 1.4

Theorem 1.2 says that the well-posedness in Theorem 1.1 for \(\theta =0\) is optimal as long as we use the iteration argument.

Remark 1.5

It is interesting that the result for 2D radial initial data is better than that for 1D initial data. Actually, the optimal regularity for 1D initial data is \(s= 1\) if \(\theta =0\), and \(s= \frac{1}{2}\) if \(\theta <0\) and \(\kappa \ne 0\), both of which are larger than the optimal regularity for 2D radial initial data. The reason is as follows: we use an angular decomposition, and each angularly localized piece has better properties; for radial functions, the angularly localized bounds lead back to an estimate for the original functions. (See (2.15).)

We note that if \(\nabla W_0=w_0\) holds and \((u,v,[W])\) is a solution to (1.12) with \((u,v,[W])|_{t=0}=(u_0,v_0,[W_0])\in {\mathcal {H}}^s({\mathbb R}^d)\), then \((u,v,\nabla W)\) is a solution to (1.1) with \((u,v,\nabla W)|_{t=0}=(u_0,v_0,w_0)\in (H^s_{\mathrm{rad}}({\mathbb R}^d))^d\times (H^s_{\mathrm{rad}}({\mathbb R}^d))^d\times (H^s({\mathbb R}^d))^d\). The existence of a scalar potential \(W_0\in {\widetilde{H}}^{s+1}_{\mathrm{rad}}({\mathbb R}^d)\) will be proved for \(w_0\in {\mathcal {A}}^s({\mathbb R}^d)\) with \(s>\frac{1}{2}\) (see Proposition 3.2), where

$$\begin{aligned} \begin{aligned} {\mathcal {A}}^s({\mathbb R}^2)&:=\left\{ f=(f_1,f_2)\in (H^s({\mathbb R}^2))^2\,|\ f\ \mathrm{satisfies}\ (1.8)\ \mathrm{for\ a.e.}\ x,\xi \in {\mathbb R}^2\right\} ,\\ {\mathcal {A}}^s({\mathbb R}^3)&:=\left\{ f=(f_1,f_2,f_3)\in (H^s({\mathbb R}^3))^3\,|\ f\ \mathrm{satisfies}\ (1.11)\ \mathrm{for\ a.e.}\ x,\xi \in {\mathbb R}^3\right\} . \end{aligned} \end{aligned}$$

Therefore, we obtain the following.

Theorem 1.3

Let \(d=2\) or 3. Assume that \(\theta = 0\) and \(s>\frac{1}{2}\). Then, (1.1) is locally well posed in \((H^s_{\mathrm{rad}}({\mathbb R}^d))^d\times (H^s_{\mathrm{rad}}({\mathbb R}^d))^d\times {\mathcal {A}}^s({\mathbb R}^d)\).

Remark 1.6

For \(d=3\), Theorem 1.1 can be obtained in almost the same way as in [15]. In Proposition 4.4 (i) of [15], the author used the Strichartz estimate

$$\begin{aligned} \Vert e^{it\Delta }P_{N}u_0\Vert _{L^q_tL^r_x({\mathbb R}\times {\mathbb R}^d)} \lesssim \Vert P_{N}u_0\Vert _{L^2} \end{aligned}$$

and

$$\begin{aligned} \left| N_{\mathrm{max}}\int _0^T\int _{{\mathbb R}^d}(P_{N_1}u_1)(P_{N_2}u_2)(P_{N_3}u_3)\hbox {d}x\hbox {d}t\right| \lesssim N_{\mathrm{max}}^{s_c}\prod _{j=1}^3\Vert P_{N_j}u_j\Vert _{L^q_tL^r_x} \end{aligned}$$

with an admissible pair \((q,r)=(3,\frac{6d}{3d-4})\) for \(d\ge 4\). But this trilinear estimate does not hold for \(d=3\), which is the reason why the well-posedness in \(H^{s_c}({\mathbb R}^3)\) could not be obtained in [15]. For radial functions \(u_0\in L^2({\mathbb R}^3)\), however, the improved Strichartz estimate ([24], Corollary 6.2)

$$\begin{aligned} \Vert e^{it\Delta }P_{N}u_0\Vert _{L^3_{t,x}({\mathbb R}\times {\mathbb R}^3)} \lesssim N^{-\frac{1}{6}}\Vert P_{N}u_0\Vert _{L^2} \end{aligned}$$

is known. By the Hölder inequality, it holds that

$$\begin{aligned} \left| N_{\mathrm{max}}\int _0^T\int _{{\mathbb R}^3}(P_{N_1}u_1)(P_{N_2}u_2)(P_{N_3}u_3)\hbox {d}x\hbox {d}t\right| \lesssim N_{\mathrm{max}}^{\frac{1}{2}}\prod _{j=1}^3N_j^{\frac{1}{6}}\Vert P_{N_j}u_j\Vert _{L^3_{t,x}} \end{aligned}$$

for \(N_1\sim N_2\sim N_3\ge 1\). Therefore, for \(d=3\), we can obtain the same estimate as in Proposition 4.4 (i) of [15].
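Combining the last two displays recovers the trilinear bound above; a sketch for free radial solutions \(u_j=e^{it\Delta }\varphi _j\) (the general case follows by the usual transference to \(X^{s,b}\) functions):

$$\begin{aligned} \left| N_{\max }\int _0^T\int _{{\mathbb R}^3}\prod _{j=1}^3P_{N_j}u_j\,\hbox {d}x\hbox {d}t\right| \lesssim N_{\max }^{\frac{1}{2}}\prod _{j=1}^3N_j^{\frac{1}{6}}\Vert e^{it\Delta }P_{N_j}\varphi _j\Vert _{L^3_{t,x}} \lesssim N_{\max }^{s_c}\prod _{j=1}^3\Vert P_{N_j}\varphi _j\Vert _{L^2}, \end{aligned}$$

since \(s_c=\frac{1}{2}\) for \(d=3\). For this reason, we omit the details of the proof for \(d=3\) and only consider \(d=2\) in the following sections.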

Notation. We denote the spatial Fourier transform by   \({\widehat{\cdot }}\)   or \({\mathcal {F}}_{x}\), the Fourier transform in time by \({\mathcal {F}}_{t}\) and the Fourier transform in all variables by   \({\widetilde{\cdot }}\)   or \({\mathcal {F}}_{tx}\). For \(\sigma \in {\mathbb R}\), the free evolution \(e^{it\sigma \Delta }\) on \(L^{2}\) is given as a Fourier multiplier

$$\begin{aligned} {\mathcal {F}}_{x}[e^{it\sigma \Delta }f](\xi )=e^{-it\sigma |\xi |^{2}}{\widehat{f}}(\xi ). \end{aligned}$$

We will use \(A\lesssim B\) to denote an estimate of the form \(A \le CB\) for some constant C and write \(A \sim B\) to mean \(A \lesssim B\) and \(B \lesssim A\). We will use the convention that capital letters denote dyadic numbers, e.g. \(N=2^{n}\) for \(n\in {\mathbb N}_0:={\mathbb N}\cup \{0\}\), and for a dyadic summation, we write \(\sum _{N}a_{N}:=\sum _{n\in {\mathbb N}_0}a_{2^{n}}\) and \(\sum _{N\ge M}a_{N}:=\sum _{n\in {\mathbb N}_0, 2^{n}\ge M}a_{2^{n}}\) for brevity. Let \(\chi \in C^{\infty }_{0}((-2,2))\) be an even, non-negative function such that \(\chi (t)=1\) for \(|t|\le 1\). We define \(\psi (t):=\chi (t)-\chi (2t)\), \(\psi _1(t):=\chi (t)\), and \(\psi _{N}(t):=\psi (N^{-1}t)\) for \(N\ge 2\). Then, \(\sum _{N}\psi _{N}(t)=1\). We define frequency and modulation projections

$$\begin{aligned} \widehat{P_{N}u}(\xi ):=\psi _{N}(\xi ){\widehat{u}}(\xi ),\ \widetilde{Q_{L}^{\sigma }u}(\tau ,\xi ):=\psi _{L}(\tau +\sigma |\xi |^{2}){\widetilde{u}}(\tau ,\xi ). \end{aligned}$$

Furthermore, we define \(Q_{\ge M}^{\sigma }:=\sum _{L\ge M}Q_{L}^{\sigma }\) and \(Q_{<M}^{\sigma }:=Id -Q_{\ge M}^{\sigma }\).
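The identity \(\sum _{N}\psi _{N}(t)=1\) noted above is the usual telescoping computation:

$$\begin{aligned} \sum _{N}\psi _{N}(t)=\chi (t)+\sum _{n=1}^{\infty }\left( \chi (2^{-n}t)-\chi (2^{-n+1}t)\right) =\lim _{n\rightarrow \infty }\chi (2^{-n}t)=1, \end{aligned}$$

since for each fixed t we have \(\chi (2^{-n}t)=1\) as soon as \(2^{n}\ge |t|\).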

The rest of this paper is planned as follows. In Section 2, we will give the bilinear estimates which will be used to prove the well-posedness. In Section 3, we will give the proof of Theorems 1.1 and 1.3. In Section 4, we will give the proof of Theorem 1.2.

2 Bilinear Estimates

In this section, we prove the bilinear estimates. First, we define the radial condition for space–time functions.

Definition 3

We say \(u\in {\mathcal {S}}'({\mathbb R}_t\times {\mathbb R}_x^2)\) is radial with respect to x if it holds that

$$\begin{aligned} \langle u,\varphi _R\rangle =\langle u,\varphi \rangle \end{aligned}$$

for any \(\varphi \in {\mathcal {S}}({\mathbb R}_t\times {\mathbb R}_x^2)\) and rotation \(R:{\mathbb R}^2\rightarrow {\mathbb R}^2\), where \(\varphi _R\in {\mathcal {S}}({\mathbb R}_t\times {\mathbb R}_x^2)\) is defined by \(\varphi _R(t,x)=\varphi (t,R(x))\).

Next, we define the Fourier restriction norm, which was introduced by Bourgain in [4].

Definition 4

Let \(s\in {\mathbb R}\), \(b\in {\mathbb R}\), \(\sigma \in {\mathbb R}\backslash \{0\}\).

  (i)

    We define \(X^{s,b}_{\sigma }:=\{u\in {\mathcal {S}}'({\mathbb R}_t\times {\mathbb R}_x^2)|\ \Vert u\Vert _{X^{s,b}_{\sigma }}<\infty \}\), where

    $$\begin{aligned} \begin{aligned} \Vert u\Vert _{X^{s,b}_{\sigma }}&:= \Vert \langle \xi \rangle ^s\langle \tau +\sigma |\xi |^2\rangle ^b{\widetilde{u}}(\tau ,\xi )\Vert _{L^2_{\tau \xi }} \sim \left( \sum _{N\ge 1} \sum _{L\ge 1}N^{2s}L^{2b}\Vert Q_{L}^{\sigma }P_{N}u\Vert _{L^2}^2\right) ^{\frac{1}{2}}. \end{aligned} \end{aligned}$$
  (ii)

    We define \({\widetilde{X}}^{s+1,b}_{\sigma }:=\{u\in {\mathcal {S}}'({\mathbb R}_t\times {\mathbb R}_x^2)|\ \nabla u\in X^{s,b}_{\sigma }\}/ {\mathcal {N}}\) with the norm

    $$\begin{aligned} \Vert [u]\Vert _{{\widetilde{X}}^{s+1,b}_{\sigma }}:=\Vert \nabla u\Vert _{X^{s,b}_{\sigma }}, \end{aligned}$$

    where \({\mathcal {N}}:= \{u\in {\mathcal {S}}'({\mathbb R}_t\times {\mathbb R}_x^2)|\ \nabla u =0 \}\).

  (iii)

    We define

    $$\begin{aligned} \begin{aligned} X^{s,b}_{\sigma ,\mathrm{rad}}&:= \{u\in X^{s,b}_{\sigma }|\ u\ \mathrm{is\ radial\ with\ respect\ to}\ x\},\\ {\widetilde{X}}^{s+1,b}_{\sigma ,\mathrm{rad}}&:= \{[u]\in {\widetilde{X}}^{s+1,b}_{\sigma }|\ u\ \mathrm{is\ radial\ with\ respect\ to}\ x\}. \end{aligned} \end{aligned}$$
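As a standard illustration of these norms (a direct computation with the Fourier conventions and the cutoff \(\chi \) fixed in the Notation): for a time-localized free solution,

$$\begin{aligned} {\mathcal {F}}_{tx}[\chi (t)e^{it\sigma \Delta }\varphi ](\tau ,\xi )={\mathcal {F}}_{t}[\chi ](\tau +\sigma |\xi |^{2})\,{\widehat{\varphi }}(\xi ),\quad \Vert \chi (t)e^{it\sigma \Delta }\varphi \Vert _{X^{s,b}_{\sigma }}=\Vert \langle \tau \rangle ^{b}{\mathcal {F}}_{t}[\chi ](\tau )\Vert _{L^2_{\tau }}\Vert \varphi \Vert _{H^s}, \end{aligned}$$

so time-localized free solutions belong to \(X^{s,b}_{\sigma }\) for every b (cf. Proposition 3.1 (1) below).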

We put

$$\begin{aligned} {\widetilde{\theta }}:=\sigma _1\sigma _2\sigma _3 \left( \frac{1}{\sigma _1}+\frac{1}{\sigma _2}+\frac{1}{\sigma _3}\right) , \quad {\widetilde{\kappa }}:=(\sigma _1+\sigma _2)(\sigma _2 +\sigma _3)(\sigma _3 +\sigma _1). \end{aligned}$$

We note that if \((\sigma _1,\sigma _2,\sigma _3)\in \{(\beta , \gamma , -\alpha ), (-\gamma , \alpha , -\beta ), (\alpha , -\beta , -\gamma )\}\), then it holds that \({\widetilde{\theta }}=\theta \) and \(|{\widetilde{\kappa }}|=|\kappa |\).
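For instance, with \((\sigma _1,\sigma _2,\sigma _3)=(\alpha ,-\beta ,-\gamma )\), a direct computation gives

$$\begin{aligned} {\widetilde{\theta }}=\sigma _1\sigma _2+\sigma _2\sigma _3+\sigma _3\sigma _1 =-\alpha \beta +\beta \gamma -\gamma \alpha =\theta ,\qquad {\widetilde{\kappa }}=(\alpha -\beta )(-\beta -\gamma )(\alpha -\gamma )=-\kappa , \end{aligned}$$

and the other two triples are checked in the same way.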

The following bilinear estimate plays a central role to show Theorem 1.1.

Proposition 2.1

Let \(\sigma _1\), \(\sigma _2\), \(\sigma _3\in {\mathbb R}\backslash \{0\}\) satisfy \({\widetilde{\kappa }}\ne 0\). Let \(s\ge \frac{1}{2}\) if \({\widetilde{\theta }}=0\) and \(s>0\) if \({\widetilde{\theta }}<0\). Then there exist \(b'\in (0,\frac{1}{2})\) and \(C>0\) such that

$$\begin{aligned} \Vert |\nabla |(uv)\Vert _{X^{s,-b'}_{-\sigma _3}}&\le C\Vert u\Vert _{X^{s,b'}_{\sigma _1}}\Vert v\Vert _{X^{s,b'}_{\sigma _2}}, \end{aligned}$$
(2.1)
$$\begin{aligned} \Vert (\Delta U)v\Vert _{X^{s,-b'}_{-\sigma _3}}&\le C(\Vert \partial _1 U\Vert _{X^{s,b'}_{\sigma _1}} +\Vert \partial _2 U\Vert _{X^{s,b'}_{\sigma _1}})\Vert v\Vert _{X^{s,b'}_{\sigma _2}} \end{aligned}$$
(2.2)

hold for any \(u\in X^{s,b'}_{\sigma _1,\mathrm{rad}}\), \(v\in X^{s,b'}_{\sigma _2,\mathrm{rad}}\), and \([U]\in {\widetilde{X}}^{s+1,b'}_{\sigma _1,\mathrm{rad}}\).

Remark 2.1

Since \(\Vert \partial _1(uv)\Vert _{X^{s,-b'}_{-\sigma _3}} +\Vert \partial _2(uv)\Vert _{X^{s,-b'}_{-\sigma _3}}\sim \Vert |\nabla |(uv)\Vert _{X^{s,-b'}_{-\sigma _3}}\), (2.1) implies

$$\begin{aligned} \Vert \partial _1(uv)\Vert _{X^{s,-b'}_{-\sigma _3}} +\Vert \partial _2(uv)\Vert _{X^{s,-b'}_{-\sigma _3}} \le C\Vert u\Vert _{X^{s,b'}_{\sigma _1}}\Vert v\Vert _{X^{s,b'}_{\sigma _2}}. \end{aligned}$$

To prove Proposition 2.1, we first give the Strichartz estimate.

Proposition 2.2

(Strichartz estimate (cf. [11, 19])). Let \(\sigma \in {\mathbb R}\backslash \{0\}\) and \((p,q)\) be an admissible pair of exponents for the 2D Schrödinger equation, i.e. \(p>2\), \(\frac{1}{p}+\frac{1}{q}=\frac{1}{2}\). Then, we have

$$\begin{aligned} \Vert e^{it\sigma \Delta }\varphi \Vert _{L_{t}^{p}L_{x}^{q}({\mathbb R}\times {\mathbb R}^2)}\lesssim \Vert \varphi \Vert _{L^{2}_{x}({\mathbb R}^2)} \end{aligned}$$

for any \(\varphi \in L^{2}({\mathbb R}^{2})\).

The Strichartz estimate implies the following. (See the proof of Lemma 2.3 in [10].)

Corollary 2.3

Let \(L\in 2^{{\mathbb N}_0}\), \(\sigma \in {\mathbb R}\backslash \{0\}\), and \((p,q)\) be an admissible pair of exponents for the Schrödinger equation. Then, we have

$$\begin{aligned} \Vert Q_{L}^{\sigma }u\Vert _{L_{t}^{p}L_{x}^{q}}\lesssim L^{\frac{1}{2}}\Vert Q_{L}^{\sigma }u\Vert _{L^{2}_{tx}} \end{aligned}$$
(2.3)

for any \(u \in L^{2}({\mathbb R}\times {\mathbb R}^{2})\).
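For the reader's convenience, we sketch the standard reduction behind (2.3) (up to harmless constants; see the proof of Lemma 2.3 in [10] for details). One writes \(Q_{L}^{\sigma }u\) as a superposition of modulated free solutions,

$$\begin{aligned} Q_{L}^{\sigma }u(t)=\int _{|\lambda |\sim L}e^{it\lambda }e^{it\sigma \Delta }u_{\lambda }\,\hbox {d}\lambda ,\quad \widehat{u_{\lambda }}(\xi ):={\widetilde{u}}(\lambda -\sigma |\xi |^{2},\xi ), \end{aligned}$$

so that Minkowski's inequality, Proposition 2.2, and the Cauchy–Schwarz inequality in \(\lambda \) give \(\Vert Q_{L}^{\sigma }u\Vert _{L^p_tL^q_x}\lesssim \int _{|\lambda |\sim L}\Vert u_{\lambda }\Vert _{L^2_x}\hbox {d}\lambda \lesssim L^{\frac{1}{2}}\Vert Q_{L}^{\sigma }u\Vert _{L^2_{tx}}\).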

Next, we give the bilinear Strichartz estimate.

Proposition 2.4

We assume that \(\sigma _{1}\), \(\sigma _{2}\in {\mathbb R}\backslash \{0\}\) satisfy \(\sigma _1+\sigma _2\ne 0\). For any dyadic numbers \(N_1\), \(N_2\), \(N_3\in 2^{{\mathbb N}_0}\) and \(L_1\), \(L_2\in 2^{{\mathbb N}_0}\), we have

$$\begin{aligned} \begin{aligned}&\Vert P_{N_3}(Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\cdot Q_{L_2}^{\sigma _2}P_{N_2}u_{2})\Vert _{L^{2}_{tx}({\mathbb R}\times {\mathbb R}^2)}\\&\quad \lesssim \left( \frac{N_{\min }}{N_{\max }}\right) ^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}} \Vert Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\Vert _{L^2_{tx}({\mathbb R}\times {\mathbb R}^2)}\Vert Q_{L_2}^{\sigma _2}P_{N_2}u_{2}\Vert _{L^2_{tx}({\mathbb R}\times {\mathbb R}^2)}, \end{aligned} \end{aligned}$$
(2.4)

where \(N_{\min }= \min \nolimits _{1\le i\le 3}N_i\), \(N_{\max }= \max \nolimits _{1\le i\le 3}N_i\).

Proposition 2.4 can be obtained in the same way as Lemma 1 in [9]. (See also Lemma 3.1 in [15].)

Corollary 2.5

Let \(b'\in (\frac{1}{4},\frac{1}{2})\), and \(\sigma _{1}\), \(\sigma _{2}\in {\mathbb R}\backslash \{0\}\) satisfy \(\sigma _1+\sigma _2\ne 0\). We put \(\delta =\frac{1}{2}-b'\). For any dyadic numbers \(N_1\), \(N_2\), \(N_3\in 2^{{\mathbb N}_0}\) and \(L_1\), \(L_2\in 2^{{\mathbb N}_0}\), we have

$$\begin{aligned} \begin{aligned}&\Vert P_{N_3}(Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\cdot Q_{L_2}^{\sigma _2}P_{N_2}u_{2})\Vert _{L^{2}_{tx}({\mathbb R}\times {\mathbb R}^2)}\\&\quad \lesssim N_{\min }^{4 \delta } \left( \frac{N_{\min }}{N_{\max }}\right) ^{\frac{1}{2}- 2\delta }L_1^{b'}L_2^{b'} \Vert Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\Vert _{L^2_{tx}({\mathbb R}\times {\mathbb R}^2)}\Vert Q_{L_2}^{\sigma _2}P_{N_2}u_{2}\Vert _{L^2_{tx}({\mathbb R}\times {\mathbb R}^2)}. \end{aligned} \end{aligned}$$
(2.5)

The proof is given in Corollary 2.5 in [16].

2.1 The Estimates for Low Modulation

In this subsection, we assume that \(L_{\text {max}} \ll N_{\max }^2\).

Lemma 2.6

We assume that \(\sigma _1\), \(\sigma _2\), \(\sigma _3 \in {\mathbb R}\setminus \{0 \}\) satisfy \({\widetilde{\kappa }}\ne 0\) and \((\tau _{1},\xi _{1})\), \((\tau _{2}, \xi _{2})\), \((\tau _{3}, \xi _{3})\in {\mathbb R}\times {\mathbb R}^{2}\) satisfy \(\tau _{1}+\tau _{2}+\tau _{3}=0\), \(\xi _{1}+\xi _{2}+\xi _{3}=0\). If \(\max \nolimits _{1\le j\le 3}|\tau _{j}+\sigma _{j}|\xi _{j}|^{2}| \ll \max \nolimits _{1\le j\le 3}|\xi _{j}|^{2}\), then we have

$$\begin{aligned} |\xi _1| \sim |\xi _2| \sim |\xi _3|. \end{aligned}$$

Since the above lemma is the contrapositive of the following lemma which was utilized in [15], we omit the proof.

Lemma 2.7

(Lemma 4.1 in [15]) We assume that \(\sigma _{1}\), \(\sigma _{2}\), \(\sigma _{3} \in {\mathbb R}\backslash \{0\}\) satisfy \({\widetilde{\kappa }}\ne 0\) and \((\tau _{1},\xi _{1})\), \((\tau _{2}, \xi _{2})\), \((\tau _{3}, \xi _{3})\in {\mathbb R}\times {\mathbb R}^{2}\) satisfy \(\tau _{1}+\tau _{2}+\tau _{3}=0\), \(\xi _{1}+\xi _{2}+\xi _{3}=0\). If there exist \(1\le i,j\le 3\) such that \(|\xi _{i}|\ll |\xi _{j}|\), then we have

$$\begin{aligned} \max _{1\le j\le 3}|\tau _{j}+\sigma _{j}|\xi _{j}|^{2}| \gtrsim \max _{1\le j\le 3}|\xi _{j}|^{2}. \end{aligned}$$
(2.6)

Lemma 2.6 shows that if \( \max \nolimits _{1\le j\le 3}|\tau _{j}+\sigma _{j}|\xi _{j}|^{2}| \ll \max \nolimits _{1\le j\le 3}|\xi _{j}|^{2}\), then \(|\xi _1|\sim |\xi _2|\sim |\xi _3|\), and hence we can assume

$$\begin{aligned} \max _{1 \le j\le 3} |\tau _{j}+\sigma _{j}|\xi _{j}|^{2}| \ll \min _{1\le j\le 3} |\xi _j|^2. \end{aligned}$$
(2.7)

We first introduce the angular frequency localization operators which were utilized in [1].

Definition 5

[1]. We define the angular decomposition of \({\mathbb R}^2\) in frequency. We define a partition of unity in \({\mathbb R}\),

$$\begin{aligned} 1 = \sum _{j \in {\mathbb Z}} \omega _j, \quad \omega _j (s) = \psi (s-j) \left( \sum _{k \in {\mathbb Z}} \psi (s-k) \right) ^{-1}. \end{aligned}$$

For a dyadic number \(A \ge 64\), we also define a partition of unity on the unit circle,

$$\begin{aligned} 1 = \sum _{j =0}^{A-1} \omega _j^A, \quad \omega _j^A (\vartheta ) = \omega _j \left( \frac{A\vartheta }{\pi } \right) + \omega _{j-A} \left( \frac{A\vartheta }{\pi } \right) . \end{aligned}$$

We observe that \(\omega _j^A\) is supported in

$$\begin{aligned} \Theta _j^A = \left[ \frac{\pi }{A} \, (j-2), \ \frac{\pi }{A} \, (j+2) \right] \cup \left[ -\pi + \frac{\pi }{A} \, (j-2), \ - \pi +\frac{\pi }{A} \, (j+2) \right] . \end{aligned}$$

We now define the angular frequency localization operators \(R_j^A\),

$$\begin{aligned} {\mathcal {F}}_x (R_j^A f)(\xi ) = \omega _j^A(\vartheta ) {\mathcal {F}}_x f(\xi ), \quad \text {where} \ \xi = |\xi | (\cos \vartheta , \sin \vartheta ). \end{aligned}$$

For any function \(u: \, {\mathbb R}\, \times \, {\mathbb R}^2 \, \rightarrow {\mathbb C}\), \((t,x) \mapsto u(t,x)\), we set \((R_j^A u ) (t, x) = (R_j^Au( t, \cdot )) (x)\). This operator localizes a function in frequency to the set

$$\begin{aligned} {{\mathfrak {D}}}_j^A = \left\{ (\tau , |\xi | \cos \vartheta , |\xi | \sin \vartheta ) \in {\mathbb R}\times {\mathbb R}^2 \, | \, \vartheta \in \Theta _j^A \right\} . \end{aligned}$$

Immediately, we can see

$$\begin{aligned} u = \sum _{j=0}^{A-1} R_j^A u. \end{aligned}$$
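We also record the almost orthogonality of this decomposition: since at each \(\vartheta \) only finitely many of the \(\omega _j^A\) are nonzero, Plancherel's theorem gives

$$\begin{aligned} \sum _{j=0}^{A-1}\Vert R_j^Au\Vert _{L^2_{tx}}^2\sim \Vert u\Vert _{L^2_{tx}}^2. \end{aligned}$$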

The next lemma will be used to obtain Proposition 2.1 for the case \({\widetilde{\theta }}=0\).

Lemma 2.8

Let \(N_1\), \(N_2\), \(N_3\), \(L_1\), \(L_2\), \(L_3\), \(A\in 2^{{\mathbb N}_0}\). We assume that \(\sigma _{1}\), \(\sigma _{2}\), \(\sigma _{3} \in {\mathbb R}\backslash \{0\}\) satisfy \({\widetilde{\theta }}=0\) and \((\tau _{1},\xi _{1})\), \((\tau _{2}, \xi _{2})\), \((\tau _{3}, \xi _{3})\in {\mathbb R}\times {\mathbb R}^{2}\) satisfy \(\tau _{1}+\tau _{2}+\tau _{3}=0\), \(\xi _{1}+\xi _{2}+\xi _{3}=0\), \(|\xi _i|\sim N_i\), \(|\tau _i+\sigma _i|\xi _i|^2|\sim L_i\), and \((\tau _i, \xi _i)\in {{\mathfrak {D}}}_{j_i}^A\) \((i=1,2,3)\) for some \(j_{1}\), \(j_{2}\), \(j_3\in \{0,1,\ldots ,A-1\}\). If \(N_1\sim N_2\sim N_3\), \(L_{\max }:=\max \nolimits _{1\le i\le 3}L_i\le N_{\max }^2 A^{-2}\), and \(A\gg 1\) hold, then we have \(\min \{|j_1-j_2|, |A-(j_{1}-j_{2})|\}\lesssim 1\), \(\min \{|j_2-j_3|, |A-(j_{2}-j_{3})|\}\lesssim 1\), and \(\min \{|j_1-j_3|, |A-(j_{1}-j_{3})|\}\lesssim 1\).

Proof

Because \(0={\widetilde{\theta }}=\sigma _1\sigma _2\sigma _3(\frac{1}{\sigma _1}+\frac{1}{\sigma _2}+\frac{1}{\sigma _3})=\sigma _1\sigma _2+\sigma _2\sigma _3+\sigma _3\sigma _1\), we have

$$\begin{aligned} (\sigma _1+\sigma _3)(\sigma _2+\sigma _3) =\sigma _1\sigma _2+\sigma _2\sigma _3+\sigma _3\sigma _1+\sigma _3^2 =\sigma _3^2>0. \end{aligned}$$

We put \(p:=\mathrm{sgn}(\sigma _1+\sigma _3)=\mathrm{sgn}(\sigma _2+\sigma _3)\), \(q:=\mathrm{sgn}(\sigma _3)\). Let \(\angle (\xi _{1},\xi _{2})\in [0,\pi ]\) denote the smaller angle between \(\xi _{1}\) and \(\xi _{2}\). Since

$$\begin{aligned} \begin{aligned} \frac{|\sigma _1+\sigma _3|^{\frac{1}{2}}|\sigma _2+\sigma _3|^{\frac{1}{2}}}{|\sigma _3|} =\sqrt{1+\frac{\sigma _1\sigma _2\sigma _3}{\sigma _3^2}\left( \frac{1}{\sigma _1}+\frac{1}{\sigma _2}+\frac{1}{\sigma _3}\right) }=1, \end{aligned} \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned}&N_{\max }^2 A^{-2}\ge L_{\max }\\&\quad \gtrsim |\sigma _1|\xi _1|^2+\sigma _2|\xi _2|^2+\sigma _3|\xi _1+\xi _2|^2|\\&\quad =|(\sigma _1+\sigma _3)|\xi _{1}|^2+(\sigma _2+\sigma _3)|\xi _{2}|^2 +2\sigma _3|\xi _{1}||\xi _{2}|\cos \angle (\xi _{1},\xi _{2})|\\&\quad =|p(|\sigma _1+\sigma _3|^{\frac{1}{2}}|\xi _{1}|-|\sigma _2+\sigma _3|^{\frac{1}{2}} |\xi _{2}|)^2 \\&\qquad +2|\xi _{1}||\xi _{2}| (p|\sigma _1+\sigma _3|^{\frac{1}{2}}|\sigma _2+\sigma _3|^{\frac{1}{2}} +q|\sigma _3|\cos \angle (\xi _{1},\xi _{2}))|\\&\quad =|(|\sigma _1+\sigma _3|^{\frac{1}{2}}|\xi _{1}|-|\sigma _2+\sigma _3|^{\frac{1}{2}} |\xi _{2}|)^2 +2|\sigma _3||\xi _{1}||\xi _{2}| (1+pq\cos \angle (\xi _{1},\xi _{2}))|\\&\quad \ge 2|\sigma _3||\xi _{1}||\xi _{2}| (1+pq\cos \angle (\xi _{1},\xi _{2})). \end{aligned} \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \begin{aligned}&1-\cos \angle (\xi _{1},\xi _{2})\lesssim A^{-2}\quad \mathrm{if}\ (\sigma _1+\sigma _3)\sigma _3<0,\\&1+\cos \angle (\xi _{1},\xi _{2})\lesssim A^{-2} \quad \mathrm{if}\ (\sigma _1+\sigma _3)\sigma _3>0. \end{aligned} \end{aligned}$$

This implies

$$\begin{aligned} \angle (\xi _{1},\xi _{2})\lesssim A^{-1}\ \mathrm{or}\ \pi -\angle (\xi _{1},\xi _{2})\lesssim A^{-1}. \end{aligned}$$

Therefore, we get \(\min \{|j_1-j_2|, |A-(j_{1}-j_{2})|\}\lesssim 1\). By the same argument, we also get \(\min \{|j_2-j_3|, |A-(j_{2}-j_{3})|\}\lesssim 1\) and \(\min \{|j_1-j_3|, |A-(j_{1}-j_{3})|\}\lesssim 1\). \(\square \)

Now we introduce the necessary bilinear estimates to obtain Proposition 2.1 for the case \({\widetilde{\theta }}<0\).

Theorem 2.1

(Theorem 2.8 in [16]) We assume that \(\sigma _{1}\), \(\sigma _{2}\), \(\sigma _{3} \in {\mathbb R}\backslash \{0\}\) satisfy \({\widetilde{\kappa }}\ne 0\) and \({\widetilde{\theta }}<0\). Let \(L_{\max }:= \max (L_1, L_2, L_3) \ll |{\widetilde{\theta }}| N_{\min }^2\), \(A \ge 64\), and \(|j_1 - j_2| \lesssim 1\). Then the following estimates hold:

$$\begin{aligned}&\Vert Q_{L_3}^{-\sigma _3} P_{N_3}(R_{j_1}^A Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\cdot R_{j_2}^A Q_{L_2}^{\sigma _2}P_{N_2}u_{2})\Vert _{L^{2}_{tx}} \nonumber \\&\quad \lesssim A^{-\frac{1}{2}} L_1^{\frac{1}{2}}L_2^{\frac{1}{2}} \Vert R_{j_1}^A Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\Vert _{L^2_{tx}} \Vert R_{j_2}^A Q_{L_2}^{\sigma _2}P_{N_2}u_{2}\Vert _{L^2_{tx}}, \end{aligned}$$
(2.8)
$$\begin{aligned}&\Vert R_{j_1}^A Q_{L_1}^{-\sigma _1} P_{N_1}(R_{j_2}^A Q_{L_2}^{\sigma _2}P_{N_2}u_{2}\cdot Q_{L_3}^{\sigma _3}P_{N_3}u_{3})\Vert _{L^{2}_{tx}} \nonumber \\&\quad \lesssim A^{-\frac{1}{2}} L_2^{\frac{1}{2}}L_3^{\frac{1}{2}} \Vert R_{j_2}^A Q_{L_2}^{\sigma _2}P_{N_2}u_{2}\Vert _{L^2_{tx}} \Vert Q_{L_3}^{\sigma _3}P_{N_3}u_{3}\Vert _{L^2_{tx}}, \end{aligned}$$
(2.9)
$$\begin{aligned}&\Vert R_{j_2}^A Q_{L_2}^{-\sigma _2} P_{N_2}( Q_{L_3}^{\sigma _3}P_{N_3}u_{3}\cdot R_{j_1}^A Q_{L_1}^{\sigma _1}P_{N_1}u_{1})\Vert _{L^{2}_{tx}} \nonumber \\&\quad \lesssim A^{-\frac{1}{2}} L_3^{\frac{1}{2}}L_1^{\frac{1}{2}} \Vert Q_{L_3}^{\sigma _3}P_{N_3}u_{3}\Vert _{L^2_{tx}} \Vert R_{j_1}^A Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\Vert _{L^2_{tx}}. \end{aligned}$$
(2.10)

Proposition 2.9

(Proposition 2.9 in [16]) We assume that \(\sigma _{1}\), \(\sigma _{2}\), \(\sigma _{3} \in {\mathbb R}\backslash \{0\}\) satisfy \({\widetilde{\kappa }}\ne 0\) and \({\widetilde{\theta }}<0\). Let \(L_{\max } \ll |{\widetilde{\theta }}| N_{\min }^2\), \(64 \le A \le N_{\max }\), and \(16 \le |j_1 - j_2 |\le 32\). Then the following estimate holds:

$$\begin{aligned} \begin{aligned}&\Vert Q_{L_3}^{-\sigma _3} P_{N_3}(R_{j_1}^A Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\cdot R_{j_2}^A Q_{L_2}^{\sigma _2}P_{N_2}u_{2})\Vert _{L^{2}_{tx}} \\&\quad \lesssim A^{\frac{1}{2}} N_1^{-1} L_1^{\frac{1}{2}}L_2^{\frac{1}{2}} L_3^{\frac{1}{2}} \Vert R_{j_1}^A Q_{L_1}^{\sigma _1}P_{N_1}u_{1}\Vert _{L^2_{tx}} \Vert R_{j_2}^A Q_{L_2}^{\sigma _2}P_{N_2}u_{2}\Vert _{L^2_{tx}}. \end{aligned} \end{aligned}$$
(2.11)

2.2 Proof of Proposition 2.1

By the duality argument, we have

$$\begin{aligned} \begin{aligned} \Vert |\nabla |(uv)\Vert _{X^{s,-b'}_{-\sigma _3}}&\lesssim \sup _{\Vert w\Vert _{X^{-s,b'}_{\sigma _3}}=1} \left| \int |\nabla |(uv)w\hbox {d}x\hbox {d}t\right| ,\\ \Vert (\Delta U)v\Vert _{X^{s,-b'}_{-\sigma _3}}&\lesssim \sup _{\Vert w\Vert _{X^{-s,b'}_{\sigma _3}}=1} \left| \int (\Delta U)vw\hbox {d}x\hbox {d}t\right| \\&\le \sup _{\Vert w\Vert _{X^{-s,b'}_{\sigma _3}}=1} \left( \left| \int \partial _1(\partial _1U)vw\hbox {d}x\hbox {d}t\right| +\left| \int \partial _2(\partial _2U)vw\hbox {d}x\hbox {d}t\right| \right) , \end{aligned} \end{aligned}$$

where we used \((Q_{L_3}^{-\sigma _3}f,{\overline{g}})_{L^2_{tx}}=(f,\overline{Q_{L_3}^{\sigma _3}g})_{L^2_{tx}}\). Since \(|\nabla |(uv)\) and \((\Delta U)v\) are radial with respect to x, we can assume w is also radial with respect to x. Therefore, to obtain (2.1), it suffices to show that

$$\begin{aligned} \begin{aligned}&\sum _{N_1,N_2,N_3\ge 1}\sum _{L_1,L_2, L_3 \ge 1} N_{\max } \left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \Vert u\Vert _{X^{s,b'}_{\sigma _1}} \Vert v\Vert _{X^{s,b'}_{\sigma _2}} \Vert w\Vert _{X^{-s, b'}_{\sigma _3}} \end{aligned} \end{aligned}$$
(2.12)

for the radial functions u, v, and w, where we put

$$\begin{aligned} u_{N_1,L_1}:=Q_{L_1}^{\sigma _1}P_{N_1}u,\ v_{N_2,L_2}:=Q_{L_2}^{\sigma _2}P_{N_2}v,\ w_{N_3,L_3}:=Q_{L_3}^{\sigma _3}P_{N_3}w \end{aligned}$$

and used \((Q_{L_3}^{-\sigma _3}f,{\overline{g}})_{L^2_{tx}}=(f,\overline{Q_{L_3}^{\sigma _3}g})_{L^2_{tx}}\). By Plancherel’s theorem, we have

$$\begin{aligned} \begin{aligned}&\left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \sim \left| \int _{\begin{array}{c} \xi _1+\xi _2+\xi _3=0\\ \tau _1+\tau _2+\tau _3=0 \end{array}} {\mathcal {F}}_{tx}[u_{N_1,L_1}](\tau _1,\xi _1){\mathcal {F}}_{tx}[v_{N_2,L_2}](\tau _2,\xi _2){\mathcal {F}}_{tx}[w_{N_3,L_3}](\tau _3,\xi _3)\right| . \end{aligned} \end{aligned}$$

We only consider the case \(N_1 \lesssim N_2 \sim N_3\), because the remaining cases \(N_2 \lesssim N_3 \sim N_1\) and \(N_3 \lesssim N_1 \sim N_2\) can be shown similarly. It suffices to show that

$$\begin{aligned} \begin{aligned}&N_{2} \left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \left( \frac{N_{1}}{N_{2}} \right) ^{\epsilon }N_{1}^s(L_1L_2L_3)^{c} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}} \end{aligned} \end{aligned}$$
(2.13)

for some \(b'\in (0,\frac{1}{2})\), \(c\in (0,b')\), and \(\epsilon >0\). Indeed, from (2.13) and the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} \begin{aligned}&\sum _{N_1 \lesssim N_2 \sim N_3}\sum _{L_1,L_2, L_3 \ge 1} N_{2} \left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \sum _{N_1 \lesssim N_2 \sim N_3}\sum _{L_1,L_2, L_3\ge 1} \left( \frac{N_{1}}{N_{2}} \right) ^{\epsilon }N_{1}^s(L_1L_2L_3)^{c}\Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}\\&\quad \lesssim \sum _{N_3}\sum _{ N_2 \sim N_3} \left( \sum _{N_1\lesssim N_2}N_1^{s+\varepsilon } N_2^{-\varepsilon } \sum _{L_1 \ge 1}L_1^{c}\Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\right) \\&\qquad \times \left( N_2^{s}\sum _{L_2\ge 1}L_2^{-(b'-c)}L_2^{b'}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\right) \left( N_3^{-s}\sum _{L_3\ge 1}L_3^{-(b'-c)}L_3^{b'}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}\right) \\&\quad \lesssim \Vert u\Vert _{X^{s,b'}_{\sigma _1}} \Vert v\Vert _{X^{s,b'}_{\sigma _2}} \Vert w\Vert _{X^{-s, b'}_{\sigma _3}}. \end{aligned} \end{aligned}$$

We put \(L_{\max }:= \max (L_1, L_2, L_3)\).

Case 1: High modulation, \(\displaystyle L_{\max } \gtrsim N_{\max }^2\)

In this case, the radial condition is not needed. We assume \(L_1 \gtrsim N_{\max }^2\sim N_2^2\). By the Cauchy–Schwarz inequality and (2.5), we have

$$\begin{aligned} \begin{aligned}&\left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert P_{N_1}(v_{N_2,L_2}w_{N_3,L_3})\Vert _{L^2_{tx}}\\&\quad \lesssim N_1^{4\delta } \left( \frac{N_1}{N_2}\right) ^{\frac{1}{2}- 2\delta }L_2^{c}L_3^{c} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}, \end{aligned} \end{aligned}$$

where \(\delta := \frac{1}{2}-c\). Therefore, we obtain

$$\begin{aligned} \begin{aligned}&N_{2} \left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim N_1^{\frac{1}{2}+2\delta } N_2^{\frac{1}{2}-2c+2\delta } (L_1L_2L_3)^{c} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}. \end{aligned} \end{aligned}$$

Thus, it suffices to show that

$$\begin{aligned} N_1^{\frac{1}{2}+2\delta } N_2^{\frac{1}{2}-2c+2\delta } \lesssim \left( \frac{N_{1}}{N_{2}} \right) ^{\epsilon }N_{1}^s. \end{aligned}$$
(2.14)

Since \(\delta =\frac{1}{2}-c\), we have

$$\begin{aligned} \begin{aligned} N_1^{\frac{1}{2}+2\delta } N_2^{\frac{1}{2}-2c+2\delta }&=N_1^{\frac{3}{2}- 2c} N_2^{\frac{3}{2}- 4c} \\&\sim N_1^{3- 6c-s} \left( \frac{N_{1}}{N_{2}} \right) ^{4c- \frac{3}{2} }N_{1}^s. \end{aligned} \end{aligned}$$

Therefore, by choosing \(b'\) and c as \(\max \{\frac{3-s}{6},\frac{3}{8}\}<c<b'<\frac{1}{2}\) for \(s>0\), we get (2.14).

Case 2: Low modulation, \(\displaystyle L_{\max }\ll N_{\max }^2\)

By Lemma 2.6, we can assume \(N_1 \sim N_2 \sim N_3\) thanks to \(\displaystyle L_{\max }\ll N_{\max }^2\). We assume \(L_{\text {max}} = L_3\) for simplicity. The other cases can be treated similarly.

\(\circ \) The case \({\widetilde{\theta }}=0\)

Let \(A:= L_{\max }^{-\frac{1}{2}} N_{\max }\sim L_{3}^{-\frac{1}{2}} N_{1}\). We decompose \({\mathbb R}^3 \times {\mathbb R}^3\times {\mathbb R}^3\) as follows:

$$\begin{aligned} {\mathbb R}^3 \times {\mathbb R}^3\times {\mathbb R}^3 = \bigcup _{0 \le j_1,j_2,j_3 \le A -1} {{\mathfrak {D}}}_{j_1}^A \times {{\mathfrak {D}}}_{j_2}^A\times {{\mathfrak {D}}}_{j_3}^A. \end{aligned}$$

Since \(L_{\text {max}} \le N_{\max }^2 (L_{\max }^{-\frac{1}{2}} N_{\max })^{-2} = N_{\max }^2 A^{-2}\), by Lemma 2.8, we can write

$$\begin{aligned}&\left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \le \sum _{j_1=0}^{A-1}\sum _{j_2\in J(j_1)} \sum _{j_3\in J(j_1)} \left| \int u_{N_1,L_1, j_1}v_{N_2,L_2, j_2}w_{N_3,L_3,j_3}\hbox {d}x\hbox {d}t\right| \end{aligned}$$

with \(u_{N_1,L_1, j_1}:= R_{j_1}^A u_{N_1, L_1}\), \(v_{N_2,L_2, j_2}:= R_{j_2}^A v_{N_2, L_2}\), and \(w_{N_3,L_3, j_3}:= R_{j_3}^A w_{N_3, L_3}\), where

$$\begin{aligned} J(j_1):=\{j\in \{0,1,\ldots ,A-1\}|\min \{|j_1-j|, |A-(j_1-j)|\}\lesssim 1\}. \end{aligned}$$

We note that \(\# J(j_1)\lesssim 1\). By using the Hölder inequality and Corollary 2.3 with \(p=q=4\), we get

$$\begin{aligned}&\sum _{j_1=0}^{A-1}\sum _{j_2\in J(j_1)} \sum _{j_3\in J(j_1)} \left| \int u_{N_1,L_1, j_1}v_{N_2,L_2, j_2}w_{N_3,L_3,j_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \sum _{j_1=0}^{A-1}\sum _{j_2\in J(j_1)} \sum _{j_3\in J(j_1)} \Vert u_{N_1,L_1,j_1}\Vert _{L^4_{tx}}\Vert v_{N_2,L_2,j_2}\Vert _{L^4_{tx}}\Vert w_{N_3,L_3,j_3} \Vert _{{L^2_{t x}}}\\&\quad \lesssim AL_1^{\frac{1}{2}}L_2^{\frac{1}{2}} \sup _{j_1}\Vert u_{N_1,L_1,j_1}\Vert _{L^2_{tx}} \sup _{j_2}\Vert v_{N_2,L_2,j_2}\Vert _{L^2_{tx}} \sup _{j_3}\Vert w_{N_3,L_3,j_3}\Vert _{L^2_{tx}}. \end{aligned}$$

Since u, v, and w are radial with respect to x, we have

$$\begin{aligned} \begin{aligned}&\Vert u_{N_1,L_1,j_1}\Vert _{L^2_{tx}}\lesssim A^{-\frac{1}{2}}\Vert u_{N_1,L_1}\Vert _{L^2_{tx}},\ \Vert v_{N_2,L_2,j_2}\Vert _{L^2_{tx}}\lesssim A^{-\frac{1}{2}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}},\\&\Vert w_{N_3,L_3,j_3}\Vert _{L^2_{tx}}\lesssim A^{-\frac{1}{2}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}. \end{aligned} \end{aligned}$$
(2.15)
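The estimate (2.15) is precisely where the radial symmetry enters: if u is radial with respect to x, then \(|{\widetilde{u}}(\tau ,\xi )|\) depends only on \((\tau ,|\xi |)\), so by Plancherel's theorem

$$\begin{aligned} \Vert R_{j_1}^Au_{N_1,L_1}\Vert _{L^2_{tx}}^2 =\int |\omega _{j_1}^A(\vartheta )|^2|\widetilde{u_{N_1,L_1}}(\tau ,\xi )|^2\hbox {d}\tau \hbox {d}\xi \sim A^{-1}\Vert u_{N_1,L_1}\Vert _{L^2_{tx}}^2, \end{aligned}$$

where \(\vartheta \) is the angular variable of \(\xi \), since \(\int _0^{2\pi }|\omega _{j_1}^A(\vartheta )|^2\hbox {d}\vartheta \sim A^{-1}\) uniformly in \(j_1\); the bounds for \(v_{N_2,L_2,j_2}\) and \(w_{N_3,L_3,j_3}\) follow in the same way.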

Therefore, we obtain

$$\begin{aligned} \begin{aligned}&N_2\left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim N_2A^{-\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}} \Vert v_{N_2,L_2}\Vert _{L^2_{tx}} \Vert w_{N_3,L_3}\Vert _{L^2_{tx}}\\&\quad \sim N_1^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}}L_3^{\frac{1}{4}} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}} \Vert v_{N_2,L_2}\Vert _{L^2_{tx}} \Vert w_{N_3,L_3}\Vert _{L^2_{tx}}\\&\quad \lesssim N_1^{\frac{1}{2}}(L_1L_2L_3)^{\frac{5}{12}} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}} \Vert v_{N_2,L_2}\Vert _{L^2_{tx}} \Vert w_{N_3,L_3}\Vert _{L^2_{tx}}. \end{aligned} \end{aligned}$$

This estimate gives the desired estimate (2.13) for \(s\ge \frac{1}{2}\) by choosing \(b'\) and c as \(\frac{5}{12}\le c<b'<\frac{1}{2}\).

\(\circ \) The case \({\widetilde{\theta }}<0\)

We decompose \({\mathbb R}^3 \times {\mathbb R}^3\) as follows:

$$\begin{aligned} {\mathbb R}^3 \times {\mathbb R}^3 = \left( \bigcup _{\tiny {\begin{array}{c} 0 \le j_1,j_2 \le N_1 -1\\ |j_1 - j_2|\le 16 \end{array}}} {{\mathfrak {D}}}_{j_1}^{N_1} \times {{\mathfrak {D}}}_{j_2}^{N_1}\right) \cup \left( \bigcup _{64 \le A \le N_1} \ \bigcup _{\tiny {\begin{array}{c} 0 \le j_1,j_2 \le A -1\\ 16 \le |j_1 - j_2|\le 32 \end{array}}} {{\mathfrak {D}}}_{j_1}^A \times {{\mathfrak {D}}}_{j_2}^A\right) . \end{aligned}$$

We can write

$$\begin{aligned}&\left| \int u_{N_1,L_1}v_{N_2,L_2}w_{N_3,L_3}\hbox {d}x\hbox {d}t\right| \\&\quad \le \sum _{{\tiny {\begin{array}{c} A=N_1\\ 0 \le j_1,j_2 \le N_1 -1\\ |j_1 - j_2|\le 16 \end{array}}}} \sum _{j_3\in J(j_1)} \left| \int u_{N_1,L_1, j_1}v_{N_2,L_2, j_2}w_{N_3,L_3,j_3}\hbox {d}x\hbox {d}t\right| \\&\qquad + \sum _{64 \le A \le N_1} \sum _{{\tiny {\begin{array}{c} 0 \le j_1,j_2 \le A-1\\ 16\le |j_1 - j_2|\le 32 \end{array}}}} \sum _{j_3\in J(j_1)} \left| \int u_{N_1,L_1, j_1}v_{N_2,L_2, j_2}w_{N_3,L_3,j_3}\hbox {d}x\hbox {d}t\right| . \end{aligned}$$

For the former term, by using the Hölder inequality, Theorem 2.1, and (2.15), we get

$$\begin{aligned}&\sum _{{\tiny {\begin{array}{c} A=N_1\\ 0 \le j_1,j_2 \le N_1 -1\\ |j_1 - j_2|\le 16 \end{array}}}} \sum _{j_3\in J(j_1)} \left| \int u_{N_1,L_1, j_1}v_{N_2,L_2, j_2}w_{N_3,L_3,j_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \sum _{{\tiny {\begin{array}{c} A=N_1\\ 0 \le j_1,j_2 \le N_1 -1\\ |j_1 - j_2|\le 16 \end{array}}}} \Vert Q_{L_3}^{-\sigma _3} P_{N_3} ( u_{N_1,L_1, j_1} v_{N_2,L_2, j_2})\Vert _{L^{2}_{tx}} \sum _{j_3\in J(j_1)} \Vert w_{N_3,L_3, j_3} \Vert _{{L^2_{t x}}} \\&\quad \lesssim N_1^{-1} L_1^{\frac{1}{2}}L_2^{\frac{1}{2}} \Vert w_{N_3,L_3} \Vert _{{L^2_{t x}}} \sum _{{\tiny {\begin{array}{c} A=N_1\\ 0 \le j_1,j_2 \le N_1 -1\\ |j_1 - j_2|\le 16 \end{array}}}} \Vert u_{N_1,L_1, j_1}\Vert _{L^2_{tx}} \Vert v_{N_2,L_2, j_2}\Vert _{L^2_{tx}}\\&\quad \lesssim N_1^{-1} L_1^{\frac{1}{2}}L_2^{\frac{1}{2}} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}\\&\quad \lesssim N_1^{-1} (L_1L_2L_3)^{\frac{1}{3}} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}. \end{aligned}$$

For the latter term, by using Proposition 2.9, (2.15), and \(L_1L_2L_3\lesssim N_{1}^6\), we get

$$\begin{aligned}&\sum _{64 \le A \le N_1} \sum _{{\tiny {\begin{array}{c} 0 \le j_1,j_2 \le A-1\\ 16\le |j_1 - j_2|\le 32 \end{array}}}} \sum _{j_3\in J(j_1)} \left| \int u_{N_1,L_1, j_1}v_{N_2,L_2, j_2}w_{N_3,L_3,j_3}\hbox {d}x\hbox {d}t\right| \\&\quad \lesssim \sum _{64 \le A \le N_1} \sum _{{\tiny {\begin{array}{c} 0 \le j_1,j_2 \le A-1\\ 16\le |j_1 - j_2|\le 32 \end{array}}}} \Vert Q_{L_3}^{-\sigma _3} P_{N_3} ( u_{N_1,L_1, j_1} v_{N_2,L_2, j_2})\Vert _{L^{2}_{tx}} \sum _{j_3\in J(j_1)} \Vert w_{N_3,L_3, j_3} \Vert _{{L^2_{t x}}}\\&\quad \lesssim \Vert w_{N_3,L_3} \Vert _{{L^2_{t x}}} \sum _{64 \le A \le N_1} N_1^{-1} ( L_1 L_2 L_3)^\frac{1}{2} \sum _{{\tiny {\begin{array}{c} 0 \le j_1,j_2 \le A-1\\ 16\le |j_1 - j_2|\le 32 \end{array}}}} \Vert u_{N_1,L_1, j_1}\Vert _{L^2_{tx}} \Vert v_{N_2,L_2, j_2}\Vert _{L^2_{tx}}\\&\quad \lesssim (\log {N_1}) N_1^{-1} ( L_1 L_2 L_3)^{\frac{1}{2}} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}\\&\quad \lesssim (\log {N_1}) N_1^{2-6c} ( L_1 L_2 L_3)^{c} \Vert u_{N_1,L_1}\Vert _{L^2_{tx}}\Vert v_{N_2,L_2}\Vert _{L^2_{tx}}\Vert w_{N_3,L_3}\Vert _{L^2_{tx}}. \end{aligned}$$

The above two estimates give the desired estimate (2.13) for \(s>0\) by choosing \(b'\) and c as \(\max \{\frac{3-s}{6},\frac{1}{3}\}< c<b'<\frac{1}{2}\). \(\square \)

3 Proof of the Well-Posedness

In this section, we prove Theorems 1.1 and 1.3. For a Banach space H and \(r>0\), we define \(B_r(H):=\{ f\in H \,|\, \Vert f\Vert _H \le r \}\). Furthermore, we define \({\mathcal {X}}^{s}_{T}\) as

$$\begin{aligned} {\mathcal {X}}^s_T:= (X^{s,b}_{\alpha ,\mathrm{rad},T})^2\times (X^{s,b}_{\beta ,\mathrm{rad},T})^2\times {\widetilde{X}}^{s+1,b}_{\gamma ,\mathrm{rad},T}, \end{aligned}$$

where \(X^{s,b}_{\alpha ,\mathrm{rad},T}\) and \(X^{s,b}_{\beta ,\mathrm{rad},T}\) are the time localized spaces defined by

$$\begin{aligned} X^{s,b}_{\sigma ,\mathrm{rad},T}:=\left\{ u|_{[0,T]}|\ u\in X^{s,b}_{\sigma ,\mathrm{rad}}\right\} \end{aligned}$$

with the norm

$$\begin{aligned} \Vert u\Vert _{X^{s,b}_{\sigma , T}}:=\inf \left\{ \Vert v\Vert _{X^{s,b}_{\sigma }}|\ v\in X^{s,b}_{\sigma ,\mathrm{rad}},\ v|_{[0,T]}=u|_{[0,T]}\right\} . \end{aligned}$$

Also, \({\widetilde{X}}^{s+1,b}_{\gamma ,\mathrm{rad},T}\) is defined in the same way. Now, we restate Theorem 1.1 for \(d=2\) more precisely.

Theorem 3.1

Let \(s\ge \frac{1}{2}\) if \(\theta =0\) and \(s>0\) if \(\theta <0\). For any \(r>0\) and for all initial data \((u_{0}, v_{0}, [W_{0}])\in B_r({\mathcal {H}}^s({\mathbb R}^2))\), there exist \(T=T(r)>0\) and a solution \((u,v,[W])\in {\mathcal {X}}^{s}_T\) to system (1.12) on [0, T] for suitable \(b>\frac{1}{2}\). Such a solution is unique in \(B_R({\mathcal {X}}^s_T)\) for some \(R>0\). Moreover, the flow map

$$\begin{aligned} S:B_{r}({\mathcal {H}}^s({\mathbb R}^2))\ni (u_{0},v_{0},[W_{0}])\mapsto (u,v,[W])\in {\mathcal {X}}^s_T \end{aligned}$$

is Lipschitz continuous.

Remark 3.1

Since \(X^{s,b}_{\sigma ,T}\hookrightarrow C([0,T];H^s({\mathbb R}^2))\) holds for \(b>\frac{1}{2}\), we have \({\mathcal {X}}^{s}_T\hookrightarrow C([0,T];{\mathcal {H}}^s({\mathbb R}^2))\).

To prove Theorem 3.1, we give the linear estimate.

Proposition 3.1

Let \(s\in {\mathbb R}\), \(\sigma \in {\mathbb R}\backslash \{0\}\), \(b\in (\frac{1}{2},1]\), \(b'\in [0,1-b]\) and \(0<T\le 1\).

  (1)

    There exists \(C_1>0\) such that for any \(\varphi \in H^s({\mathbb R}^2)\), we have

    $$\begin{aligned} \Vert e^{it\sigma \Delta }\varphi \Vert _{X^{s,b}_{\sigma ,T}} \le C_1\Vert \varphi \Vert _{H^s}. \end{aligned}$$
  (2)

    There exists \(C_2>0\) such that for any \(F \in X^{s,-b'}_{\sigma ,T}\), we have

    $$\begin{aligned} \left\| \int _{0}^{t}e^{i(t-t')\sigma \Delta }F(t')\hbox {d}t'\right\| _{X^{s,b}_{\sigma ,T}} \le C_2T^{1-b'-b}\Vert F\Vert _{X^{s,-b'}_{\sigma ,T}}. \end{aligned}$$
  (3)

    There exists \(C_3>0\) such that for any \(u \in X^{s,b}_{\sigma ,T}\), we have

    $$\begin{aligned} \Vert u\Vert _{X^{s,b'}_{\sigma , T}}\le C_3T^{b-b'}\Vert u\Vert _{X^{s,b}_{\sigma , T}}. \end{aligned}$$

For the proof of Proposition 3.1, see Lemmas 2.1 and 3.1 in [10].

We define the map \(\Phi (u,v,[W])=(\Phi _{\alpha , u_{0}}^{(1)}([W], v), \Phi _{\beta , v_{0}}^{(1)}([{\overline{W}}], u), [\Phi _{\gamma , [W_{0}]}^{(2)}(u, {\overline{v}})])\) as

$$\begin{aligned} \begin{aligned} \Phi _{\sigma , \varphi }^{(1)}([f],g)(t)&:=e^{it\sigma \Delta }\varphi +i\int _{0}^{t}e^{i(t-t')\sigma \Delta }(\Delta f(t'))g(t')\hbox {d}t',\\ \Phi _{\sigma , [\varphi ]}^{(2)}(f,g)(t)&:=e^{it\sigma \Delta }\varphi -i\int _{0}^{t}e^{i(t-t')\sigma \Delta } (f(t')\cdot g(t'))\hbox {d}t'. \end{aligned} \end{aligned}$$

To prove the existence of the solution of (1.12), we prove that \(\Phi \) is a contraction map on \(B_R({\mathcal {X}}^s_T)\) for some \(R>0\) and \(T>0\). For a vector-valued function \(f=(f_1,f_2)\), \(\Vert f\Vert _{H^s}\) and \(\Vert f\Vert _{X^{s,b}_T}\) denote \(\Vert f_1\Vert _{H^s}+\Vert f_2\Vert _{H^s}\) and \(\Vert f_1\Vert _{X^{s,b}_T}+\Vert f_2\Vert _{X^{s,b}_T}\), respectively.

Proof of Theorem 3.1

We choose \(b>\frac{1}{2}\) as \(b=1-b'\), where \(b'\) is as in Proposition 2.1. Let \((u_{0}, v_{0}, [W_{0}])\in B_{r}({\mathcal {H}}^s({\mathbb R}^2))\) be given. By Proposition 2.1 with \((\sigma _1,\sigma _2,\sigma _3)\in \{(\beta , \gamma , -\alpha ), (-\gamma , \alpha , -\beta ), (\alpha , -\beta , -\gamma )\}\) and Proposition 3.1 with \(\sigma \in \{\alpha , \beta , \gamma \}\), there exist constants \(C_1\), \(C_2\), \(C_3>0\) such that for any \((u,v,[W])\in B_R({\mathcal {X}}^s_T)\), we have

$$\begin{aligned} \begin{aligned} \Vert \Phi ^{(1)}_{\alpha ,u_{0}}([W], v)\Vert _{X^{s,b}_{\alpha ,T}}&\le C_1\Vert u_{0}\Vert _{H^s} +CC_2C_3^2T^{4b-2} \Vert [W]\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma ,T}} \Vert v\Vert _{X^{s,b}_{\beta ,T}} \\&\le C_1r+CC_2C_3^2T^{4b-2}R^2,\\ \Vert \Phi ^{(1)}_{\beta ,v_{0}}([{\overline{W}}], u)\Vert _{X^{s,b}_{\beta ,T}}&\le C_1\Vert v_{0}\Vert _{H^s} +CC_2C_3^2T^{4b-2}\Vert [W]\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma ,T}} \Vert u\Vert _{X^{s,b}_{\alpha ,T}}\\&\le C_1r+CC_2C_3^2T^{4b-2}R^2,\\ \Vert [\Phi ^{(2)}_{\gamma ,[W_{0}]}(u, {\overline{v}})]\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma ,T}}&\le C_1\Vert [W_{0}]\Vert _{{\widetilde{H}}^{s+1}} +CC_2C_3^2T^{4b-2}\Vert u\Vert _{X^{s,b}_{\alpha ,T}} \Vert v\Vert _{X^{s,b}_{\beta ,T}}\\&\le C_1r+CC_2C_3^2T^{4b-2}R^2. \end{aligned} \end{aligned}$$

Similarly,

$$\begin{aligned} \begin{aligned}&\Vert \Phi ^{(1)}_{\alpha ,u_{0}}([W], v)-\Phi ^{(1)}_{\alpha ,u_{0}}([W'], v')\Vert _{X^{s,b}_{\alpha ,T}} \\&\quad \le CC_2C_3^2T^{4b-2}R\left( \Vert [W]-[W']\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma ,T}}+\Vert v-v'\Vert _{X^{s,b}_{\beta ,T}}\right) ,\\&\Vert \Phi ^{(1)}_{\beta ,v_{0}}([{\overline{W}}], u)-\Phi ^{(1)}_{\beta ,v_{0}}([\overline{W'}], u')\Vert _{X^{s,b}_{\beta ,T}} \\&\quad \le CC_2C_3^2T^{4b-2}R\left( \Vert [W]-[W']\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma ,T}}+\Vert u-u'\Vert _{X^{s,b}_{\alpha ,T}}\right) ,\\&\Vert [\Phi ^{(2)}_{\gamma ,[W_{0}]}(u, {\overline{v}})]-[\Phi ^{(2)}_{\gamma ,[W_{0}]}(u', \overline{v'})]\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma ,T}} \\&\quad \le CC_2C_3^2T^{4b-2}R\left( \Vert u-u'\Vert _{X^{s,b}_{\alpha ,T}}+\Vert v-v'\Vert _{X^{s,b}_{\beta ,T}}\right) . \end{aligned} \end{aligned}$$

Therefore, if we choose \(R>0\) and \(T>0\) as

$$\begin{aligned} R=6C_1r,\ CC_2C_3^2T^{4b-2}R\le \frac{1}{4} \end{aligned}$$

then \(\Phi \) is a contraction map on \(B_R({\mathcal {X}}^s_T)\); indeed, \(C_1r+CC_2C_3^2T^{4b-2}R^2\le \frac{R}{6}+\frac{R}{4}<R\), so \(\Phi \) maps \(B_R({\mathcal {X}}^s_T)\) into itself, and the difference estimates above give a contraction. This implies the existence of the solution of system (1.12) and the uniqueness in the ball \(B_R({\mathcal {X}}^s_T)\). The Lipschitz continuity of the flow map is also proved by a similar argument. \(\square \)

Next, to prove Theorem 1.3, we justify the existence of a scalar potential of \(w\in (H^s({\mathbb R}^2))^2\). Let \({\mathcal {F}}_1\) and \({\mathcal {F}}_2\) denote the Fourier transform with respect to the first component and the second component, respectively. We note that \({\mathcal {F}}_1^{-1}{\mathcal {F}}_2^{-1}={\mathcal {F}}_2^{-1}{\mathcal {F}}_1^{-1}={\mathcal {F}}_x^{-1}\) (and also \({\mathcal {F}}_1{\mathcal {F}}_2={\mathcal {F}}_2{\mathcal {F}}_1={\mathcal {F}}_x\)) holds on \(L^2({\mathbb R}^2)\).

Proposition 3.2

Let \(s>\frac{1}{2}\) and \(w=(w_1,w_2)\in (H^s({\mathbb R}^2))^2\). If \(w_1\) and \(w_2\) satisfy

$$\begin{aligned} \xi _2\widehat{w_1}(\xi )-\xi _1\widehat{w_2}(\xi ) =0\quad {\hbox {a.e.}} \xi =(\xi _1,\xi _2)\in {\mathbb R}^2, \end{aligned}$$

then there exists \(W\in L^1_{\mathrm{loc}}({\mathbb R}^2)\ (\subset {\mathcal {S}}'({\mathbb R}^2))\) such that

$$\begin{aligned} \nabla W(x)=w(x)\ {\hbox {a.e.}} x=(x_1,x_2)\in {\mathbb R}^2. \end{aligned}$$

To obtain Proposition 3.2, we use the next lemma.

Lemma 3.3

Let \(s>\frac{1}{2}\). If \(f\in H^s({\mathbb R}^2)\), then it holds that

$$\begin{aligned} {\mathcal {F}}_1[f](\cdot ,x_2)\in L^1({\mathbb R})\quad {\mathrm{a.e.}}\,\, x_2\in {\mathbb R}, \qquad {\mathcal {F}}_2[f](x_1,\cdot )\in L^1({\mathbb R})\quad {\mathrm{a.e.}}\,\, x_1\in {\mathbb R}. \end{aligned}$$

Proof

By the Cauchy–Schwarz inequality and Plancherel’s theorem, we have

$$\begin{aligned} \begin{aligned} \left\| \Vert {\mathcal {F}}_1[f](\xi _1,x_2)\Vert _{L^1_{\xi _1}}\right\| _{L^2_{x_2}}&\le \left\| \Vert \langle \xi _1\rangle ^{-s}\Vert _{L^2_{\xi _1}} \Vert \langle \xi _1\rangle ^s{\mathcal {F}}_1[f](\xi _1,x_2)\Vert _{L^2_{\xi _1}}\right\| _{L^2_{x_2}}\\&\lesssim \Vert \langle \xi _1\rangle ^s{\widehat{f}}(\xi _1,\xi _2)\Vert _{L^2_{\xi }}\\&\lesssim \Vert f\Vert _{H^s}<\infty \end{aligned} \end{aligned}$$

for \(s>\frac{1}{2}\). Therefore, we obtain

$$\begin{aligned} \Vert {\mathcal {F}}_1[f](\xi _1,x_2)\Vert _{L^1_{\xi _1}}<\infty \quad {\hbox {a.e.}}\,\, x_2\in {\mathbb R}. \end{aligned}$$

Similarly, we have

$$\begin{aligned} \Vert {\mathcal {F}}_2[f](x_1,\xi _2)\Vert _{L^1_{\xi _2}}<\infty \quad {\hbox {a.e.}}\,\, x_1\in {\mathbb R}. \end{aligned}$$

\(\square \)

Proof of Proposition 3.2

We put

$$\begin{aligned} W(x):=\int _{a_1}^{x_1}w_1(y_1,x_2)\hbox {d}y_1+\int _{a_2}^{x_2}w_2(a_1,y_2)\hbox {d}y_2 =:W_1(x)+W_2(x) \end{aligned}$$

for some \(a_1\), \(a_2\in {\mathbb R}\). Since \(w \in (L^2({\mathbb R}^2))^2\), we have \(W\in L^1_{\mathrm{loc}}({\mathbb R}^2)\). Hence, it remains to show that \(\nabla W=w\). Since

$$\begin{aligned} \partial _1W_1(x)=w_1(x), \quad \partial _1W_2(x)=0, \quad \partial _2W_2(x)=w_2(a_1,x_2) \end{aligned}$$

hold for almost all \(x=(x_1,x_2)\in {\mathbb R}^2\), it suffices to show

$$\begin{aligned} \partial _2W_1(x)=w_2(x)-w_2(a_1,x_2)\quad {\hbox {a.e.}}\,\, x=(x_1,x_2)\in {\mathbb R}^2. \end{aligned}$$
(3.1)

Let \(h\in {\mathbb R}\). Since \({\mathcal {F}}_1[w_1](\cdot ,x_2)\in L^1({\mathbb R})\) a.e. \(x_2\in {\mathbb R}\) by Lemma 3.3, we have

$$\begin{aligned} \begin{aligned}&\frac{W_1(x_1,x_2+h)-W_1(x_1,x_2)}{h}\\&\quad =\frac{1}{h}\int _{a_1}^{x_1}\left( w_1(y_1,x_2+h)-w_1(y_1,x_2)\right) \hbox {d}y_1\\&\quad =\frac{1}{h}\int _{a_1}^{x_1} \left( \int _{{\mathbb R}} \left( {\mathcal {F}}_1[w_1](\xi _1,x_2+h)-{\mathcal {F}}_1[w_1](\xi _1,x_2)\right) e^{i\xi _1y_1} \hbox {d}\xi _1\right) \hbox {d}y_1\\&\quad =\frac{1}{h} \int _{{\mathbb R}} \left( {\mathcal {F}}_1[w_1](\xi _1,x_2+h)-{\mathcal {F}}_1[w_1](\xi _1,x_2)\right) \left( \int _{a_1}^{x_1}e^{i\xi _1y_1}\hbox {d}y_1\right) \hbox {d}\xi _1\\&\quad =\frac{1}{h}\int _{{\mathbb R}}\left( \int _{{\mathbb R}}\widehat{w_1}(\xi _1,\xi _2)e^{i\xi _2x_2} (e^{i\xi _2h}-1)\hbox {d}\xi _2\right) \frac{e^{i\xi _1x_1}-e^{i\xi _1a_1}}{i\xi _1}\hbox {d}\xi _1 =:I_h \end{aligned} \end{aligned}$$

by Fubini’s theorem, which is justified because \({\mathcal {F}}_1[w_1](\cdot ,x_2)\), \({\mathcal {F}}_1[w_1](\cdot ,x_2+h)\in L^1({\mathbb R})\) and the \(y_1\)-integration is over a bounded interval. We put \({\mathcal {F}}_{12}^{-1}:={\mathcal {F}}_1^{-1}{\mathcal {F}}_2^{-1}\), \({\mathcal {F}}_{21}^{-1}:={\mathcal {F}}_2^{-1}{\mathcal {F}}_1^{-1}\). By using \(\xi _2\widehat{w_1}=\xi _1\widehat{w_2}\) and \({\mathcal {F}}_{12}^{-1}={\mathcal {F}}_{21}^{-1}\), we have

$$\begin{aligned} \begin{aligned} I_h&= \int _{{\mathbb R}}\left( \int _{{\mathbb R}}\widehat{w_2}(\xi _1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}e^{i\xi _2x_2} \hbox {d}\xi _2\right) (e^{i\xi _1x_1}-e^{i\xi _1 a_1})\hbox {d}\xi _1\\&={\mathcal {F}}_{12}^{-1}\left[ \widehat{w_2}(\xi _1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}\right] (x_1,x_2)-{\mathcal {F}}_{12}^{-1}\left[ \widehat{w_2}(\xi _1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}\right] (a_1,x_2)\\&={\mathcal {F}}_{21}^{-1}\left[ \widehat{w_2}(\xi _1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}\right] (x_1,x_2)-{\mathcal {F}}_{21}^{-1}\left[ \widehat{w_2}(\xi _1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}\right] (a_1,x_2)\\&=\int _{{\mathbb R}}\left( {\mathcal {F}}_2[w_2](x_1,\xi _2)-{\mathcal {F}}_2[w_2](a_1,\xi _2)\right) \frac{e^{i\xi _2h}-1}{i\xi _2h}e^{i\xi _2x_2}\hbox {d}\xi _2. \end{aligned} \end{aligned}$$

Since \({\mathcal {F}}_2[w_2](x_1,\cdot )\in L^1({\mathbb R})\) a.e. \(x_1\in {\mathbb R}\) by Lemma 3.3, we have

$$\begin{aligned} \begin{aligned} \lim _{h\rightarrow 0}I_h&= \int _{{\mathbb R}}\left( {\mathcal {F}}_2[w_2](x_1,\xi _2)-{\mathcal {F}}_2[w_2](a_1,\xi _2)\right) e^{i\xi _2x_2}\hbox {d}\xi _2\\&=w_2(x_1,x_2)-w_2(a_1,x_2) \end{aligned} \end{aligned}$$

by Lebesgue’s dominated convergence theorem. Therefore, we obtain (3.1). \(\square \)

Remark 3.2

In the proof of Proposition 3.2, we also used

$$\begin{aligned} \left| \frac{e^{i\xi _2h}-1}{i\xi _2h}\right| \le \sup _{z\in {\mathbb R}}\left( \left| \frac{\cos z-1}{z}\right| +\left| \frac{\sin z}{z}\right| \right) <\infty . \end{aligned}$$

This implies

$$\begin{aligned} \widehat{w_2}(\xi _1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}\in L^2_{\xi }({\mathbb R}^2) \end{aligned}$$

and

$$\begin{aligned} {\mathcal {F}}_2[w_2](x_1,\xi _2)\frac{e^{i\xi _2h}-1}{i\xi _2h}\in L^1_{\xi _2}({\mathbb R})\quad {\hbox {a.e.}}\,\, x_1\in {\mathbb R}. \end{aligned}$$
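We also note the elementary bound \(\left| \frac{e^{i\xi _2h}-1}{i\xi _2h}\right| \le 1\), which follows from \(|e^{iz}-1|=2\left| \sin \frac{z}{2}\right| \le |z|\) for \(z\in {\mathbb R}\).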

Remark 3.3

If \(w=(w_1,w_2)\in (H^s({\mathbb R}^2))^2\) for \(s>\frac{1}{2}\) satisfies

$$\begin{aligned} x_2w_1(x)-x_1w_2(x)=0, \quad {\hbox {a.e.}}\,\, x\in {\mathbb R}^2 \end{aligned}$$

in addition to the assumptions of Proposition 3.2, then the potential \(W\in L^1_{\mathrm{loc}}({\mathbb R}^2)\) constructed in the proof of Proposition 3.2 is radial. Indeed, this condition combined with \(\nabla W(x)=w(x)\) yields (1.10).
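For the reader's convenience, we record the computation: writing \(x=(r\cos \theta ,r\sin \theta )\), the chain rule gives

$$\begin{aligned} \partial _{\theta }\left[ W(r\cos \theta ,r\sin \theta )\right] =-x_2\partial _1W(x)+x_1\partial _2W(x)=-x_2w_1(x)+x_1w_2(x)=0\quad {\hbox {a.e.}}, \end{aligned}$$

so W does not depend on the angular variable. The same identity is used in the proof of Theorem 1.3 below.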

Remark 3.4

For \(s\le \frac{1}{2}\), we do not know whether a scalar potential of \(w\in (H^s({\mathbb R}^2))^2\) exists. We point out, however, that if \(s<\frac{1}{2}\), then a one-dimensional delta function can appear in \(\partial _2w_1-\partial _1w_2\) for some \(w\in (H^s({\mathbb R}^2))^2\), so the irrotational condition no longer makes sense pointwise, as the following example illustrates.
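As an illustration (this is a standard example, not specific to system (1.1)), take \(w=(w_1,0)\) with \(w_1(x)=\phi (x_1)\varvec{1}_{[0,1]}(x_2)\) for some nonzero \(\phi \in C_c^{\infty }({\mathbb R})\). Since \(\varvec{1}_{[0,1]}\in H^{\sigma }({\mathbb R})\) precisely for \(\sigma <\frac{1}{2}\), we have \(w\in (H^s({\mathbb R}^2))^2\) for \(s<\frac{1}{2}\), while

$$\begin{aligned} \partial _2w_1-\partial _1w_2=\phi (x_1)\left( \delta _{0}(x_2)-\delta _{1}(x_2)\right) \end{aligned}$$

in the distributional sense.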

Next, we prove that \({\mathcal {A}}^s({\mathbb R}^2)\) is a Banach space.

Proposition 3.4

For \(s\ge 0\), \({\mathcal {A}}^s({\mathbb R}^2)\) is a closed subspace of \((H^s({\mathbb R}^2))^2\).

Proof

Let \(f^{(n)}=(f_1^{(n)}, f_2^{(n)})\in {\mathcal {A}}^s({\mathbb R}^2)\) \((n=1,2,3,\ldots )\) and \(f=(f_1,f_2)\in (H^s({\mathbb R}^2))^2\). Assume that \(f^{(n)}\) converges to f in \((H^s({\mathbb R}^2))^2\) as \(n\rightarrow \infty \). We prove \(f\in {\mathcal {A}}^s({\mathbb R}^2)\); namely, that f satisfies (1.8). By the triangle inequality, we have

$$\begin{aligned} \begin{aligned}&\left\| \frac{x_2}{\langle x \rangle }f_1-\frac{x_1}{\langle x \rangle }f_2\right\| _{L^2}\\&\quad \le \left\| \frac{x_2}{\langle x \rangle }f_1-\frac{x_2}{\langle x \rangle }f_1^{(n)}\right\| _{L^2} + \left\| \frac{x_2}{\langle x \rangle }f_1^{(n)}-\frac{x_1}{\langle x \rangle }f_2^{(n)}\right\| _{L^2} + \left\| \frac{x_1}{\langle x \rangle }f_2^{(n)}-\frac{x_1}{\langle x \rangle }f_2\right\| _{L^2}\\&\quad \le \Vert f_1-f_1^{(n)}\Vert _{L^2}+\Vert x_2f_1^{(n)}-x_1f_2^{(n)}\Vert _{L^2}+\Vert f_2^{(n)}-f_2\Vert _{L^2}. \end{aligned} \end{aligned}$$

Since \(f^{(n)}\) satisfies (1.8) and \(f^{(n)}\rightarrow f\) in \((L^2({\mathbb R}^2))^2\) as \(n\rightarrow \infty \), we obtain

$$\begin{aligned} \Vert x_2f_1^{(n)}-x_1f_2^{(n)}\Vert _{L^2}=0, \quad \Vert f_1-f_1^{(n)}\Vert _{L^2}+\Vert f_2^{(n)}-f_2\Vert _{L^2}\rightarrow 0\ (n\rightarrow \infty ). \end{aligned}$$

Therefore, we get

$$\begin{aligned} \left\| \frac{x_2}{\langle x \rangle }f_1-\frac{x_1}{\langle x \rangle }f_2\right\| _{L^2}=0. \end{aligned}$$

It implies \(x_2f_1(x)-x_1f_2(x)=0\) a.e. \(x\in {\mathbb R}^2\). Similarly, we obtain \(\xi _2\widehat{f_1}(\xi )-\xi _1\widehat{f_2}(\xi )=0\) a.e. \(\xi \in {\mathbb R}^2\). \(\square \)

Proof of Theorem 1.3

Let \((u_0,v_0,w_0)\in B_r((H_{\mathrm{rad}}^s({\mathbb R}^2))^2\times (H_{\mathrm{rad}}^s({\mathbb R}^2))^2 \times {\mathcal {A}}^s({\mathbb R}^2))\) be given. We first prove the existence of a solution to (1.1). Since \(w_0\) satisfies (1.8), by Proposition 3.2, there exists \([W_0]\in {\widetilde{H}}^{s+1}_{\mathrm{rad}}\) such that \(\nabla W_0=w_0\). From Theorem 1.1, there exist \(T>0\) and a solution \((u,v,[W])\in {\mathcal {X}}^{s}_T\) to (1.12) with \((u,v,[W])|_{t=0}=(u_0,v_0,[W_0])\). Since

$$\begin{aligned} \Vert [W_0]\Vert _{{\widetilde{H}}^{s+1}}=\Vert w_0\Vert _{H^s}\le r, \end{aligned}$$

the existence time T is determined by r. We put \(w=\nabla W\). Then \(w\in X^{s,b}_{\gamma , T}\) satisfies

$$\begin{aligned} \Vert w\Vert _{X^{s,b}_{\gamma , T}} =\Vert [W]\Vert _{{\widetilde{X}}^{s+1,b}_{\gamma , T}} \le R, \end{aligned}$$

where R is as in the proof of Theorem 1.1, and \((u, v, w)\) satisfies (1.1) since \(\Delta W=\nabla \cdot w\). Furthermore, we have

$$\begin{aligned} \partial _1w_2-\partial _2w_1=\partial _1(\partial _2W)-\partial _2(\partial _1W)=0 \end{aligned}$$

and

$$\begin{aligned} x_1w_2-x_2w_1=(x_1\partial _2-x_2\partial _1)W=0 \end{aligned}$$

because W is radial with respect to x. Therefore, \(w(t)\in {\mathcal {A}}^s({\mathbb R}^2)\) for any \(t\in [0,T]\).

Next, we prove the uniqueness of the solution in \(B_R({\mathcal {Y}}^{s,b}_T)\), where

$$\begin{aligned} \begin{aligned} {\mathcal {Y}}^{s,b}_T&:=(X^{s,b}_{\alpha ,\mathrm{rad},T})^2\times (X^{s,b}_{\beta ,\mathrm{rad},T})^2\times Y^{s,b}_{\gamma ,T},\\ Y^{s,b}_{\gamma , T}&:= \{w=(w_1,w_2)\in (X^{s,b}_{\gamma ,T})^2\ |\ w(t)\ \mathrm{satisfies}\ (1.8)\ \mathrm{for\ any}\ t\in [0,T]\}. \end{aligned} \end{aligned}$$

Let \((u^{(1)},v^{(1)},w^{(1)})\), \((u^{(2)},v^{(2)},w^{(2)})\in B_R({\mathcal {Y}}^{s,b}_T)\) be solutions to (1.1) with initial data \((u_0,v_0,w_0)\). Then, by Proposition 3.2, there exist \([W^{(1)}]\), \([W^{(2)}]\in {\widetilde{X}}^{s+1,b}_{\gamma ,\mathrm{rad},T}\) such that \(w^{(1)}=\nabla W^{(1)}\) and \(w^{(2)}=\nabla W^{(2)}\). Substituting \(w^{(j)}=\nabla W^{(j)}\) into the integral form of (1.1), we see that \((u^{(j)},v^{(j)},W^{(j)})\) \((j=1,2)\) satisfy

$$\begin{aligned} \begin{aligned} u^{(j)}(t)&=e^{it\alpha \Delta }u_0+i\int _0^te^{i(t-t')\alpha \Delta }(\Delta W^{(j)}(t'))u^{(j)}(t')\hbox {d}t'\quad \mathrm{in}\ (H^s({\mathbb R}^2))^2,\\ v^{(j)}(t)&=e^{it\beta \Delta }v_0+i\int _0^te^{i(t-t')\beta \Delta }(\Delta \overline{W^{(j)}(t')})v^{(j)}(t')\hbox {d}t'\quad \mathrm{in}\ (H^s({\mathbb R}^2))^2,\\ \nabla W^{(j)}(t)&=e^{it\gamma \Delta }w_0-i\int _0^te^{i(t-t')\gamma \Delta }\nabla (u^{(j)}(t')\cdot \overline{v^{(j)}(t')})\hbox {d}t'\quad \mathrm{in}\ (H^s({\mathbb R}^2))^2. \end{aligned} \end{aligned}$$

Therefore, by the same argument as in the proof of Theorem 1.1, we have

$$\begin{aligned} \begin{aligned} \Vert u^{(1)}-u^{(2)}\Vert _{X^{s,b}_{\alpha ,T}}&\le \frac{1}{4}\left( \Vert w^{(1)}-w^{(2)}\Vert _{X^{s,b}_{\gamma ,T}} +\Vert v^{(1)}-v^{(2)}\Vert _{X^{s,b}_{\beta ,T}}\right) \\ \Vert v^{(1)}-v^{(2)}\Vert _{X^{s,b}_{\beta ,T}}&\le \frac{1}{4}\left( \Vert w^{(1)}-w^{(2)}\Vert _{X^{s,b}_{\gamma ,T}} +\Vert u^{(1)}-u^{(2)}\Vert _{X^{s,b}_{\alpha ,T}}\right) \\ \Vert w^{(1)}-w^{(2)}\Vert _{X^{s,b}_{\gamma ,T}}&\le \frac{1}{4}\left( \Vert u^{(1)}-u^{(2)}\Vert _{X^{s,b}_{\alpha ,T}} +\Vert v^{(1)}-v^{(2)}\Vert _{X^{s,b}_{\beta ,T}}\right) \end{aligned} \end{aligned}$$

since \(w^{(1)}-w^{(2)}=\nabla (W^{(1)}-W^{(2)})\). This implies

$$\begin{aligned} (u^{(1)},v^{(1)},w^{(1)})=(u^{(2)},v^{(2)},w^{(2)})\ \mathrm{on}\ [0,T]. \end{aligned}$$

The continuous dependence on the initial data can be obtained by a similar argument. \(\square \)

4 The Lack of the Twice Differentiability of the Flow Map

The following proposition implies Theorem 1.2.
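Indeed, the left-hand side of (4.1) is, up to a multiplicative constant, the second-order term of the Picard iteration for the w-component of (1.1) with initial data \(u_0=(f,0)\), \(v_0=(g,0)\), \(w_0=0\). If the flow map were twice differentiable at the origin, this term would be bounded by \(C\Vert f\Vert _{H^s}\Vert g\Vert _{H^s}\) with C independent of f and g; this reduction is standard.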

Proposition 4.1

Let \(d=2\) and \(0<T\ll 1\). Assume \(\theta =0\) and \(s<\frac{1}{2}\). For every \(C>0\), there exist f, \(g\in H_{\mathrm{rad}}^{s}({\mathbb R}^{2})\) such that

$$\begin{aligned} \sup _{0\le t\le T}\left\| \int _{0}^{t}e^{i(t-t')\gamma \Delta }\nabla \left( (e^{it'\alpha \Delta }f)(\overline{e^{it'\beta \Delta }g})\right) \hbox {d}t'\right\| _{H^{s}} \ge C\Vert f\Vert _{H^{s}}\Vert g\Vert _{H^{s}}. \end{aligned}$$
(4.1)

Proof

Let \(N\gg 1\) and \(p:=\frac{\gamma }{\alpha -\gamma }\ (\ne 0)\). We note that p is well defined since \(\theta =0\) implies \(\kappa \ne 0\), and in particular \(\alpha \ne \gamma \), for \(\alpha \), \(\beta \), \(\gamma \in {\mathbb R}\backslash \{0\}\). For simplicity, we assume \(p>0\). Put

$$\begin{aligned} \begin{aligned} D_1&:=\{\xi \in {\mathbb R}^2|\ N\le |\xi |\le N+1\}, \quad D_2:=\{\xi \in {\mathbb R}^2|\ p^{-1}N\le |\xi |\le p^{-1}N+1\},\\ D&:=\{\xi \in {\mathbb R}^2|\ (1+p^{-1})N+1\le |\xi |\le (1+p^{-1})N+1+2^{-10}\}. \end{aligned} \end{aligned}$$

We define the functions f and g as

$$\begin{aligned} {\widehat{f}}(\xi ):=N^{-s-\frac{1}{2}}\varvec{1}_{D_1}(\xi ), \quad {\widehat{g}}(\xi ):=N^{-s-\frac{1}{2}}\varvec{1}_{D_2}(\xi ). \end{aligned}$$

Clearly, f and g are radial, and \(\Vert f\Vert _{H^s}\sim \Vert g\Vert _{H^s}\sim 1\); indeed, \(\Vert f\Vert _{H^s}^2=N^{-2s-1}\int _{D_1}\langle \xi \rangle ^{2s}\hbox {d}\xi \sim N^{-2s-1}\cdot N^{2s}\cdot |D_1|\sim 1\) because \(|D_1|\sim N\), and similarly for g. For \(\xi =(\xi _1,\xi _2)\in {\mathbb R}^2\) and \(\eta =(\eta _1,\eta _2)\in {\mathbb R}^2\), we define

$$\begin{aligned} \begin{aligned} \Phi (\xi ,\eta )&:= \alpha |\eta |^2-\beta |\xi -\eta |^2-\gamma |\xi |^2\\&=(\alpha -\gamma )|\eta -p(\xi -\eta )|^2\\&=(\alpha -\gamma )\left\{ \left( \eta _1-p(\xi _1-\eta _1)\right) ^2+\left( \eta _2-p(\xi _2-\eta _2)\right) ^2\right\} \end{aligned} \end{aligned}$$

because \(\theta =0\) is equivalent to \(\beta \gamma =\alpha (\beta +\gamma )\), which gives \((\beta +\gamma )(\alpha -\gamma )=\alpha (\beta +\gamma )-\gamma (\beta +\gamma )=\beta \gamma -\gamma \beta -\gamma ^2=-\gamma ^2\), that is, \(\frac{\beta +\gamma }{\alpha -\gamma }=-\left( \frac{\gamma }{\alpha -\gamma }\right) ^2\). We will show

$$\begin{aligned} \sup _{0\le t\le T}\left\| \int _{0}^{t}e^{i(t-t')\gamma \Delta }\nabla \left( (e^{it'\alpha \Delta }f)(\overline{e^{it'\beta \Delta }g})\right) \hbox {d}t'\right\| _{H^{s}} \gtrsim N^{-s+\frac{1}{2}}. \end{aligned}$$

We calculate that

$$\begin{aligned} \begin{aligned}&\left\| \int _{0}^{t}e^{i(t-t')\gamma \Delta }\nabla \left( (e^{it'\alpha \Delta }f)(\overline{e^{it'\beta \Delta }g})\right) \hbox {d}t'\right\| _{H^{s}}\\&\quad \gtrsim N^{-s}\left\| \varvec{1}_D(\xi )\int _0^t\int _{{\mathbb R}^2}e^{-it'\Phi (\xi ,\eta )}\varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(\xi -\eta )\hbox {d}\eta \hbox {d}t'\right\| _{L^2_{\xi }}\\&\quad \ge N^{-s}\left\| \varvec{1}_D(\xi )\int _0^t\int _{{\mathbb R}^2}\cos (t'\Phi (\xi ,\eta ))\varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(\xi -\eta )\hbox {d}\eta \hbox {d}t'\right\| _{L^2_{\xi }}\\&\quad =:N^{-s}\left\| F(\xi )\right\| _{L^2_{\xi }}. \end{aligned} \end{aligned}$$
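The first inequality can be checked on the Fourier side (a routine computation; absolute constants are suppressed): since \(\widehat{f}\) and \(\widehat{g}\) are nonnegative, real, and radial, we have

$$\begin{aligned} {\mathcal {F}}_x\left[ \int _{0}^{t}e^{i(t-t')\gamma \Delta }\nabla \left( (e^{it'\alpha \Delta }f)(\overline{e^{it'\beta \Delta }g})\right) \hbox {d}t'\right] (\xi ) =i\xi e^{-it\gamma |\xi |^2}N^{-2s-1}\int _0^t\int _{{\mathbb R}^2}e^{-it'\Phi (\xi ,\eta )}\varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(\xi -\eta )\hbox {d}\eta \hbox {d}t', \end{aligned}$$

and \(\langle \xi \rangle ^s|\xi |N^{-2s-1}\sim N^{-s}\) for \(\xi \in D\).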

Let \(R:{\mathbb R}^2\rightarrow {\mathbb R}^2\) be an arbitrary rotation. Since \(\Phi (\xi ,\eta )=\Phi (R\xi ,R\eta )\) and \(\varvec{1}_D\), \(\varvec{1}_{D_1}\), \(\varvec{1}_{D_2}\) are radial, carrying out the \(t'\)-integration, we can see

$$\begin{aligned} \begin{aligned} F(\xi )&= \varvec{1}_D(\xi )\int _{{\mathbb R}^2}\frac{\sin (t\Phi (\xi ,\eta ))}{\Phi (\xi ,\eta )}\varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(\xi -\eta )\hbox {d}\eta \\&=\varvec{1}_D(R\xi )\int _{{\mathbb R}^2}\frac{\sin (t\Phi (R\xi ,R\eta ))}{\Phi (R\xi ,R\eta )}\varvec{1}_{D_1}(R\eta )\varvec{1}_{D_2}(R\xi -R\eta )\hbox {d}\eta \\&=\varvec{1}_D(R\xi )\int _{{\mathbb R}^2}\frac{\sin (t\Phi (R\xi ,\eta ))}{\Phi (R\xi ,\eta )}\varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(R\xi -\eta )\hbox {d}\eta \\&=F(R\xi ). \end{aligned} \end{aligned}$$

This implies that F is radial. Therefore, there exists \(G:{\mathbb R}\rightarrow {\mathbb R}\) such that \(F(\xi )=G(|\xi |)\). We note that

$$\begin{aligned} \Vert F(\xi )\Vert _{L^2_{\xi }}\sim \Vert G(r)r^{\frac{1}{2}}\Vert _{L^2((0,\infty ))} \gtrsim N^{\frac{1}{2}}\inf _{r\in \mathrm{supp}\,G}|G(r)|=N^{\frac{1}{2}}\inf _{(\xi _1,0) \in D}|F(\xi _1,0)| \end{aligned}$$

since \(\mathrm{supp}\,G\subset [(1+p^{-1})N+1, (1+p^{-1})N+1+2^{-10}]\). Hence, it suffices to show that

$$\begin{aligned} |F(\xi _c)| \gtrsim t^{\frac{1}{2}} \end{aligned}$$
(4.2)

for any \(c\in [0,2^{-10}]\) and some \(0\le t\le T\), where \(\xi _c:=(\xi _{c1},0)\in {\mathbb R}^2\), \(\xi _{c1}:=(1+p^{-1})N+1+c\). A simple calculation gives

$$\begin{aligned} \Phi (\xi _c,\eta )=(\alpha -\gamma )\left\{ \left( (1+p)(\eta _1-N)-p(1+c)\right) ^2+(1+p)^2\eta _2^2\right\} . \end{aligned}$$
(4.3)
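Indeed, since \(\xi _c-\eta =(\xi _{c1}-\eta _1,-\eta _2)\) and \(p\xi _{c1}=(1+p)N+p(1+c)\), the components of \(\eta -p(\xi _c-\eta )\) are

$$\begin{aligned} (1+p)\eta _1-p\xi _{c1}=(1+p)(\eta _1-N)-p(1+c)\quad \mathrm{and}\quad (1+p)\eta _2, \end{aligned}$$

so (4.3) follows from the formula \(\Phi (\xi ,\eta )=(\alpha -\gamma )|\eta -p(\xi -\eta )|^2\).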

We also observe that

$$\begin{aligned} \begin{aligned}&\varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(\xi _c-\eta )\ne 0\\&\quad \Longrightarrow \ \eta _1\le N+1\ \mathrm{and}\ \xi _{c1}-\eta _1\le p^{-1}N+1\\&\quad \Longrightarrow \ N+c\le \eta _1\le N+1. \end{aligned} \end{aligned}$$

Let \(\epsilon >0\) be small. We define a new set E as

$$\begin{aligned} E:=D_1\cap \{\eta =(\eta _1,\eta _2)\in {\mathbb R}^2|\ N+c\le \eta _1\le N+1\}, \end{aligned}$$

and we decompose E into four sets:

$$\begin{aligned} \begin{aligned} E_1&=\left\{ \xi _{c1}-\sqrt{(p^{-1}N+1)^2-N^{2\epsilon }}\le \eta _1<\sqrt{(N+1)^2-N^{2\epsilon }},\ |\eta _2|\le N^{\epsilon } \right\} ,\\ E_2&=\{N+c\le \eta _1<\xi _{c1}-\sqrt{(p^{-1}N+1)^2-N^{2\epsilon }},\ |\eta _2|\le N^{\epsilon }\}\cap E,\\ E_3&=\{\sqrt{(N+1)^2-N^{2\epsilon }}\le \eta _1\le N+1,\ |\eta _2|\le N^{\epsilon }\}\cap E,\\ E_4&=\{N^{\epsilon }<|\eta _2|\}\cap E. \end{aligned} \end{aligned}$$
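Roughly speaking, \(E_1\) is the bulk region, on which the product of the indicator functions is identically one; \(E_2\) and \(E_3\) are thin boundary strips of measure \(O(N^{-1+3\epsilon })\); and \(E_4\) is the tail region \(|\eta _2|>N^{\epsilon }\).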

We can easily show that \(E_i\cap E_j=\emptyset \) if \(i\ne j\). Furthermore, we can obtain \(E_1\subset E\) and

$$\begin{aligned} \varvec{1}_{D_1}(\eta )\varvec{1}_{D_2}(\xi _c-\eta )=1 \end{aligned}$$

for any \(\eta \in E_1\). We observe that

$$\begin{aligned} \begin{aligned} |F(\xi _c)|&\ge \left| \int _{{\mathbb R}^2}\frac{\sin (t\Phi (\xi _c,\eta ))}{\Phi (\xi _c,\eta )}\varvec{1}_{E_1}(\eta )\hbox {d}\eta \right| -\sum _{j=2}^4\int _{{\mathbb R}^2}\left| \frac{\sin (t\Phi (\xi _c,\eta ))}{\Phi (\xi _c,\eta )}\right| \varvec{1}_{E_j}(\eta )\hbox {d}\eta \\&=:I_1-\sum _{j=2}^4I_j. \end{aligned} \end{aligned}$$

We first consider \(I_1\). Let

$$\begin{aligned} c':=p^{-1}N+1-\sqrt{(p^{-1}N+1)^2-N^{2\epsilon }}, \quad c'':=N+1-\sqrt{(N+1)^2-N^{2\epsilon }}. \end{aligned}$$

Obviously, we have \(c'\sim c''\sim N^{-1+2\epsilon }\); indeed, rationalizing gives \(c''=\frac{N^{2\epsilon }}{(N+1)+\sqrt{(N+1)^2-N^{2\epsilon }}}\sim \frac{N^{2\epsilon }}{2N}\), and similarly for \(c'\). We calculate that

$$\begin{aligned} \begin{aligned} I_1&=2 \left| \int _{N+c+c'}^{N+1-c''}\left( \int _0^{N^{\epsilon }}\frac{\sin (t\Phi (\xi _c,\eta ))}{\Phi (\xi _c,\eta )}\hbox {d}\eta _2\right) \hbox {d}\eta _1 \right| \\&=\frac{2}{(1+p) |\alpha -\gamma |} \left| \int _{N+c+c'}^{N+1-c''}\left( \int _0^{(1+p)N^{\epsilon }}\frac{\sin (\tau (q(\eta _1)+\eta _2^2))}{q(\eta _1)+\eta _2^2}\hbox {d}\eta _2\right) \hbox {d}\eta _1 \right| , \end{aligned} \end{aligned}$$

where \(\tau :=|\alpha -\gamma |t\) and \(q(\eta _1):=\left( (1+p)(\eta _1-N)-p(1+c)\right) ^2\). Therefore, if we obtain

$$\begin{aligned} \inf _{\eta _1\in [N+c+c',N+1-c'']} \int _0^{(1+p)N^{\epsilon }}\frac{\sin (\tau (q(\eta _1)+\eta _2^2))}{q(\eta _1)+\eta _2^2}\hbox {d}\eta _2 \gtrsim t^{\frac{1}{2}}, \end{aligned}$$
(4.4)

then we get \(I_1 \gtrsim t^{\frac{1}{2}}\). Let \(t>0\) be small. We fix \(\eta _1\in [N+c+c',N+1-c'']\) and write \(q(\eta _1)=q\) for simplicity. Clearly, we have \(0\le q\lesssim 1\). We easily verify that if we restrict \(\eta _2\) as \(0\le \eta _2\le \sqrt{\pi \tau ^{-1}-q}\), then we have \(\sin (\tau (q+\eta _2^2))\ge 0\) and \(\frac{\sin (\tau (q+\eta _2^2))}{q+\eta _2^2}\) is monotone decreasing (because \(\frac{\sin z}{z}\) is decreasing on \([0,\pi ]\)). Similarly, if \(\sqrt{\pi \tau ^{-1}-q}\le \eta _2\le \sqrt{2\pi \tau ^{-1}-q}\), then we see \(\sin (\tau (q+\eta _2^2))\le 0\). We calculate

$$\begin{aligned} \begin{aligned}&\int _0^{\sqrt{2\pi \tau ^{-1}-q}}\frac{\sin (\tau (q+\eta _2^2))}{q+\eta _2^2}\hbox {d}\eta _2\\&\quad \ge \int _0^{\sqrt{\pi \tau ^{-1}-q}}\frac{\sin (\tau (q+\eta _2^2))}{q+\eta _2^2}\hbox {d}\eta _2 -\int _{\sqrt{\pi \tau ^{-1}-q}}^{\sqrt{2\pi \tau ^{-1}-q}}\frac{1}{q+\eta _2^2}\hbox {d}\eta _2\\&\quad \ge \frac{2\tau }{\pi }\int _0^{\sqrt{\pi (2\tau )^{-1}-q}}\hbox {d}\eta _2-\frac{\tau }{\pi }\int _{\sqrt{\pi \tau ^{-1}-q}}^{\sqrt{2\pi \tau ^{-1}-q}}\hbox {d}\eta _2\\&\quad =\frac{\tau }{\pi }\left( 2\sqrt{\pi (2\tau )^{-1}-q}-\sqrt{2\pi \tau ^{-1}-q}+\sqrt{\pi \tau ^{-1}-q}\right) \\&\quad \gtrsim t^{\frac{1}{2}}. \end{aligned} \end{aligned}$$

Here, the second inequality uses \(\sin z\ge \frac{2}{\pi }z\) for \(0\le z\le \frac{\pi }{2}\) and \(q+\eta _2^2\ge \pi \tau ^{-1}\) for \(\eta _2\ge \sqrt{\pi \tau ^{-1}-q}\). The last estimate is verified by the smallness of \(\tau =|\alpha -\gamma |t\): since \(0\le q\lesssim 1\), the quantity in the parentheses equals \(\sqrt{\pi \tau ^{-1}}\,(\sqrt{2}-\sqrt{2}+1+O(\tau ))\), so the right-hand side is of order \(\tau ^{-\frac{1}{2}}\cdot \frac{\tau }{\pi }\sim \tau ^{\frac{1}{2}}\sim t^{\frac{1}{2}}\). We also see

$$\begin{aligned} \int _{\sqrt{2n\pi \tau ^{-1}-q}}^{\sqrt{2(n+1)\pi \tau ^{-1}-q}}\frac{\sin (\tau (q+\eta _2^2))}{q+\eta _2^2}\hbox {d}\eta _2 \gtrsim \frac{t^{\frac{1}{2}}}{n^2} \end{aligned}$$

for any \(n\in {\mathbb N}\), since on each such full period the positive half of the sine is weighted by larger values of \((q+\eta _2^2)^{-1}\) than the negative half. Summing these nonnegative contributions together with the estimate above, we obtain (4.4).

Next, we consider \(I_2\), \(I_3\), and \(I_4\). Since \(|E_2|\), \(|E_3|\lesssim N^{-1+3\epsilon }\), we easily observe that

$$\begin{aligned} I_2+I_3\lesssim tN^{-1+3\epsilon }. \end{aligned}$$

For \(I_4\), we observe that

$$\begin{aligned} I_4=\int _{E_4}\left| \frac{\sin (t\Phi (\xi _c,\eta ))}{\Phi (\xi _c,\eta )}\right| \hbox {d}\eta \lesssim \int _{N+c}^{N+1}\left( \int _{N^{\epsilon }}^{\infty }\frac{1}{\eta _2^2}\hbox {d}\eta _2\right) \hbox {d}\eta _1\lesssim N^{-\epsilon }. \end{aligned}$$
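Here we used \(|\sin (t\Phi (\xi _c,\eta ))|\le 1\) and \(|\Phi (\xi _c,\eta )|\ge |\alpha -\gamma |(1+p)^2\eta _2^2\), the latter being immediate from (4.3).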

By the above argument, we obtain

$$\begin{aligned} |F(\xi _c)|\ge I_1-\sum _{j=2}^4I_j \gtrsim t^{\frac{1}{2}}-tN^{-1+3\epsilon }-N^{-\epsilon }. \end{aligned}$$

If we choose \(N\gg 1\) satisfying \(N^{-\epsilon }\ll T\), then for any \(t\in [0,T]\) with \(N^{-\epsilon }\ll t\), we have (4.2). Combined with the preceding reductions, this gives the left-hand side of (4.1) a lower bound of order \(N^{-s+\frac{1}{2}}\); since \(s<\frac{1}{2}\) and \(\Vert f\Vert _{H^s}\Vert g\Vert _{H^s}\sim 1\), taking N sufficiently large yields (4.1) for any \(C>0\). \(\square \)