1 Introduction

Consider the following space fractional partial differential equation, with \(1< \alpha <2\), \(0 \le x, y \le 1\), \(0< t < T\):

$$\begin{aligned} u_{t} (t, x, y )&= {\,} _{0}^{R} \text{D}^{\alpha }_{x} u(t, x, y) +{\,} _{x}^{R} \text{D}^{\alpha }_{1} u(t, x, y) \\&\quad + {\,} _{0}^{R} \text{D}^{\alpha }_{y} u(t, x, y) + {\,} _{y}^{R} \text{D}^{\alpha }_{1} u(t, x, y) + f(t, x, y), \end{aligned}$$
(1)
$$\begin{aligned} u(t, 0, y)&= \varphi _{1}(t, y), \; u(t, 1, y) =\varphi _{2}(t, y), \end{aligned}$$
(2)
$$\begin{aligned} u(t, x, 0) =\;&\psi _{1}(t, x), \; u(t, x, 1) =\psi _{2}(t, x), \end{aligned}$$
(3)
$$\begin{aligned} u(0, x, y)&= u_{0}(x, y), \end{aligned}$$
(4)

where f is a source/sink term and \(u_{0}, \varphi _{1}, \varphi _{2}, \psi _{1}, \psi _{2}\) are given initial and boundary values, respectively. Here the Riemann–Liouville left-sided fractional derivative \(\, _{0}^{R} \text{D}^{\alpha }_{x} g(x)\) is defined by

$$\begin{aligned} \, _{0}^{R} \text{D}^{\alpha }_{x} g(x) = \frac{1}{\Gamma (2- \alpha )} \frac{\mathrm{{d}}^2}{\mathrm{{d}} x^2} \int _{0}^{x} (x- \xi )^{1- \alpha } g( \xi ) \, \mathrm{{d}} \xi . \end{aligned}$$
(5)

Similarly, we may define the Riemann–Liouville right-sided fractional derivative as follows:

$$\begin{aligned} \, _{x}^{R} \text{D}^{\alpha }_{1} g(x) = \frac{1}{\Gamma (2- \alpha )} \frac{\mathrm{{d}}^2}{\mathrm{{d}} x^2} \int _{x}^{1} (\xi -x)^{1- \alpha } g( \xi ) \, \mathrm{{d}} \xi . \end{aligned}$$
(6)

There are several ways in the literature to approximate the Riemann–Liouville fractional derivative. Meerschaert and Tadjeran [30] used the Grünwald–Letnikov formula to obtain a first-order scheme, with error O(h), for approximating the Riemann–Liouville fractional derivative. Lubich [27] introduced higher order schemes with orders \(O(h^{p}), p=1, 2, \cdots , 6,\) for approximating the Riemann–Liouville fractional derivative. Diethelm [9, 10] obtained a scheme with convergence order \(O(h^{2-\alpha }), 0< \alpha <2,\) using the Hadamard finite-part integral approach; see Li and Zeng [22] for other higher order schemes for approximating the Riemann–Liouville fractional derivative.
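For concreteness, the first-order Grünwald–Letnikov approximation mentioned above can be sketched in a few lines of Python. This is a minimal illustration with our own function names, not code from the cited works; it is checked against the closed form \(\, _{0}^{R} \text{D}^{\alpha }_{x}\, x^{2} = \Gamma (3)/\Gamma (3- \alpha )\, x^{2-\alpha }\):

```python
from math import gamma

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k), generated by
    # the recurrence g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k.
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_left_derivative(g, x, alpha, n):
    # First-order approximation of the left-sided Riemann-Liouville
    # derivative at x: h^{-alpha} * sum_k g_k * g(x - k h), with h = x / n.
    h = x / n
    w = gl_weights(alpha, n)
    return h ** -alpha * sum(w[k] * g(x - k * h) for k in range(n + 1))

alpha = 1.5
approx = gl_left_derivative(lambda s: s * s, 1.0, alpha, 2000)
exact = gamma(3) / gamma(3 - alpha)  # closed form at x = 1
print(abs(approx - exact))           # O(h) error, here h = 1/2000
```

With n = 2000 subintervals the first-order error is already at the level of a few times \(10^{-4}\).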

Based on these different schemes for approximating the Riemann–Liouville fractional derivatives, many numerical methods have been introduced for solving the space fractional partial differential equations (1)–(4): finite difference methods [4,5,6, 15,16,17,18, 21, 24, 25, 28, 29, 31,32,33,34,35,36,37,38,39,40], finite element methods [1,2,3, 7, 8, 11,12,13,14, 23, 26] and spectral methods [19, 20]. Meerschaert and Tadjeran [30] introduced a shifted finite difference method based on the Grünwald–Letnikov formula for solving the two-sided space fractional partial differential equation in the one-dimensional case and proved that the convergence order of the method is O(h). Meerschaert and Tadjeran [29] also considered a finite difference method, based on the Grünwald–Letnikov formula, for solving the one-dimensional fractional advection–dispersion equation. Second-order shifted finite difference methods for solving fractional partial differential equations based on the Grünwald–Letnikov formula are discussed in both the one- and two-dimensional cases in Tadjeran et al. [35] and Tadjeran and Meerschaert [34]. We now turn to Lubich's higher order schemes. When Lubich's higher order schemes with no shifts are applied to space fractional partial differential equations, the resulting finite difference methods are unstable, just as when the Grünwald–Letnikov formula is used without shifts. With shifted Lubich higher order schemes, the corresponding numerical methods for solving space fractional partial differential equations have only first-order accuracy; see [4, 5]. In [22, Section 2.2], Li and Zeng introduced further higher order schemes, for example, the L2 and L2C schemes, to approximate the Riemann–Liouville fractional derivative. However, to the best of our knowledge, there are no works in the literature that use the L2 and L2C schemes for solving space fractional partial differential equations.
The numerical methods discussed in [22, Chapter 4] for solving space fractional partial differential equations are also based on the Grünwald–Letnikov formula and Lubich's higher order schemes; see [1,2,3, 5, 21, 24, 25, 39, 40] for other recent works on solving space fractional partial differential equations. All the numerical methods constructed from the Grünwald–Letnikov formula or Lubich's higher order schemes require that the solution of the equation satisfy the homogeneous Dirichlet boundary condition. Otherwise, the experimentally determined convergence orders of such methods are very low; see, e.g., Table 4 in Example 2 in Sect. 3. It is therefore of interest to design numerical methods with higher order convergence for the space fractional partial differential equation under both homogeneous and non-homogeneous Dirichlet boundary conditions. The purpose of this paper is to introduce such finite difference methods.

Recently, Ford et al. [17] considered a finite difference method for solving the space fractional partial differential equation in the one-dimensional case, where the Riemann–Liouville fractional derivative is approximated using the Hadamard finite-part integral; see also [15, 16, 37]. The convergence order \(O(\tau + h^{3- \alpha }+h^{\beta }), \alpha \in (1,2), \beta >0,\) of the numerical method in [17] is proved in the maximum norm for both homogeneous and non-homogeneous Dirichlet boundary conditions. In this paper, we extend the method of Ford et al. [17] to space fractional partial differential equations in the two-dimensional case. The error estimates in this paper are proved in a completely different way from Ford et al. [17]. The error estimates with the convergence order \(O(\tau + h^{3- \alpha }+h^{\beta }), \alpha \in (1,2), \beta >0,\) hold for both homogeneous and non-homogeneous Dirichlet boundary conditions.

The main contributions of this paper are as follows.

  (i) A new finite difference method for solving space fractional partial differential equations in the two-dimensional case is introduced, where the Riemann–Liouville fractional derivative is approximated using the Hadamard finite-part integral approach, and the convergence order is \(O(\tau + h^{\beta } + h^{3-\alpha }), \alpha \in (1,2), \beta >0\).

  (ii) The convergence order of the finite difference method introduced in this paper is valid for both homogeneous and non-homogeneous Dirichlet boundary conditions.

The paper is organized as follows. In Sect. 2, we introduce the shifted finite difference method for solving (1)–(4) and prove the error estimates. In Sect. 3, we consider four numerical examples with both homogeneous and non-homogeneous Dirichlet boundary conditions and different smoothness of the solution, and show that the numerical results are consistent with the theoretical analysis.

2 The Finite Difference Method

In this section, we extend the method of Ford et al. [17] for solving the space fractional partial differential equation in the one-dimensional case to the two-dimensional equations (1)–(4). For notational simplicity, we also assume that the boundary values vanish, i.e., \(\varphi _{1} = \varphi _{2} = \psi _{1} =\psi _{2}=0\).

We have

Lemma 1

[17, Lemma 2.1] Let \(1< \alpha <2\) and let \(M=2 m_{0}\), where \(m_{0}\) is a fixed positive integer. Let \(0 = x_{0}< x_{1}< x_{2}< \cdots< x_{2j}< x_{2j+1}<\cdots < x_{M} =1\) be a partition of [0, 1]. Assume that \(g \in C^{3}[0, 1]\). Then, with \(j=1,2, \cdots , m_{0}\),

$$\begin{aligned} \, _0^R \text{D}_{x}^{\alpha } g(x) \Big |_{x= x_{2j}}&= \frac{x_{2j}^{-\alpha }}{\Gamma (-\alpha )} \left( \sum _{l=0}^{2j} \alpha _{l, 2j} g (x_{2j-l}) + R_{2j}(g) \right) \\&= h^{-\alpha } \sum _{l=0}^{2j} w_{l, 2j} g(x_{2j-l}) + \frac{x_{2j}^{-\alpha }}{\Gamma (-\alpha )} R_{2j} (g), \end{aligned}$$

and, with \(j=1,2, \cdots , m_{0}-1\),

$$\begin{aligned} \, _0^R \text{D}_{x}^{\alpha } g(x) \Big |_{x= x_{2j+1}}&= \frac{1}{\Gamma (-\alpha )} \int _{0}^{x_{1}} (x_{2j+1} - \xi )^{-1-\alpha } g(\xi ) \, \mathrm{{d}} \xi \\&\quad + \frac{x_{2j+1}^{-\alpha }}{\Gamma (-\alpha )} \left( \sum _{l=0}^{2j} \alpha _{l, 2j+1} g(x_{2j+1-l}) + R_{2j+1}(g) \right) \\&= \frac{1}{\Gamma (-\alpha )} \int _{0}^{x_{1}} (x_{2j+1} - \xi )^{-1-\alpha } g(\xi ) \, \mathrm{{d}} \xi \\&\quad + h^{-\alpha } \sum _{l=0}^{2j} w_{l, 2j+1} g(x_{2j+1-l}) + \frac{x_{2j+1}^{-\alpha }}{\Gamma (-\alpha )} R_{2j+1} (g), \end{aligned}$$

where

$$\begin{aligned} (-\alpha ) (- \alpha +1) (-\alpha +2) (2j)^{-\alpha } \alpha _{l, 2j}&= {\left\{ \begin{array}{ll} 2^{-\alpha } ( \alpha +2), &{{for}} \; l=0, \\ (-\alpha ) 2^{2-\alpha }, &{{for}} \; l=1, \\ (-\alpha )(-2^{-\alpha } \alpha ) + \frac{1}{2} F_{0}(2), &{{for}} \; l=2, \\ -F_{1}(k), &{{for}} \; l=2k-1, \quad k=2,3, \cdots , j, \\ \frac{1}{2}\,\,(F_{2}(k) +F_{0}(k+1)), &{{for}} \; l=2k, \quad k=2,3, \cdots , j-1, \\ \frac{1}{2}\,\, F_{2}(j), &{{for}} \; l=2j, \end{array}\right. }\\ F_{0}(k)&= (2k-1)(2k) ( (2k)^{-\alpha } - (2(k-1))^{-\alpha } )(-\alpha +1)(-\alpha +2) \\&\quad - ((2k-1) + 2k ) ((2k)^{-\alpha +1} -(2(k-1))^{-\alpha +1} ) (-\alpha )(-\alpha +2) \\&\quad + ( (2k)^{-\alpha +2} - (2(k-1))^{-\alpha +2} ) (-\alpha ) (-\alpha +1), \\ F_{1}(k)&= (2k-2)(2k) ( (2k)^{-\alpha } - (2k-2)^{-\alpha } )(-\alpha +1)(-\alpha +2) \\&\quad - ((2k-2) + 2k ) ((2k)^{-\alpha +1} -(2k-2)^{-\alpha +1} ) (-\alpha )(-\alpha +2) \\&\quad + ( (2k)^{-\alpha +2} - (2k-2)^{-\alpha +2} ) (-\alpha ) (-\alpha +1), \\ F_{2}(k)&= (2k-2)(2k-1) ( (2k)^{-\alpha } - (2k-2)^{-\alpha } )(-\alpha +1)(-\alpha +2) \\&\quad - ((2k-2) + (2k-1) ) ((2k)^{-\alpha +1} -(2k-2)^{-\alpha +1} ) (-\alpha )(-\alpha +2) \\&\quad + ( (2k)^{-\alpha +2} - (2k-2)^{-\alpha +2} ) (-\alpha ) (-\alpha +1). \end{aligned}$$

Further, we have, with \(l=0,1,2,\cdots , 2j\),

$$\begin{aligned} \Gamma (3- \alpha ) w_{l, 2j} =(-\alpha ) (- \alpha +1) (-\alpha +2) (2j)^{-\alpha } \alpha _{l, 2j}, \end{aligned}$$
(7)

and

$$\begin{aligned} \alpha _{l, 2j+1} = \alpha _{l, 2j}, \quad w_{l, 2j+1} = w_{l, 2j}. \end{aligned}$$
(8)

The remainder term \(R_{l}(g)\) satisfies, for every \(g \in C^{3}[0, 1]\),

$$\begin{aligned} |R_{l}(g) | \le C h^{3-\alpha } \Vert g^{\prime \prime \prime } \Vert _{\infty }, \; l=2,3, 4, \cdots , M \, {{with}} \, M=2m_{0}. \end{aligned}$$

Similarly, we may consider the approximation of the right-sided Riemann–Liouville fractional derivative \(\, _x^R \text{D}_{1}^{\alpha } g(x)\) at \(x= x_{l}, l=0, 1, 2, \cdots , 2m_{0}-2\). Using the same argument as for the approximation of \(\, _0^R \text{D}_{x}^{\alpha } g(x)\) at \(x= x_{l}\), we can show that, with \(j=0, 1, 2, \cdots , m_{0}-1,\)

$$\begin{aligned} \, _x^R \text{D}_{1}^{\alpha } g(x) \Big |_{x= x_{2j}}&= h^{-\alpha } \sum _{l=0}^{M-2j} w_{l, M-2j} g(x_{2j+l}) + \frac{x_{2j}^{-\alpha }}{\Gamma (-\alpha )} R_{2j} (g), \end{aligned}$$

and, with \(j=0, 1, 2, \cdots , m_{0}-2,\)

$$\begin{aligned} _x^R \text{D}_{1}^{\alpha } g(x) \Big |_{x= x_{2j+1}}&= \frac{1}{\Gamma (-\alpha )} \int _{x_{M-1}}^{x_{M}} ( \xi - x_{2j +1})^{-1-\alpha } g(\xi ) \, \mathrm{{d}} \xi \\&\quad + h^{-\alpha } \sum _{l=0}^{M-(2j +1)-1} w_{l, M-(2j+1)} g(x_{2j+1+l}) + \frac{x_{2j+1}^{-\alpha }}{\Gamma (-\alpha )} R_{2j+1} (g). \end{aligned}$$
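The weights of Lemma 1 can be assembled directly from \(F_{0}, F_{1}, F_{2}\) and (7). The following Python sketch is our own transcription (function names such as `diethelm_weights` are ours, not from [17]); it exploits the fact that \(F_{0}, F_{1}, F_{2}\) share one template, and checks the left-sided quadrature at an even node against the closed form \(\, _{0}^{R} \text{D}^{\alpha }_{x}\, x^{2} = \Gamma (3)/\Gamma (3-\alpha )\, x^{2-\alpha }\), for which the remainder \(R_{2j}(g)\) vanishes because \(g^{\prime \prime \prime } = 0\):

```python
from math import gamma

def F_pair(alpha, k, p, q):
    # Shared structure of F0, F1, F2 in Lemma 1: node pair (p, q) on the
    # integration cell [2k - 2, 2k] (k >= 2, so the cell avoids the origin).
    lo, hi = 2.0 * k - 2.0, 2.0 * k
    return (p * q * (hi ** -alpha - lo ** -alpha) * (1 - alpha) * (2 - alpha)
            - (p + q) * (hi ** (1 - alpha) - lo ** (1 - alpha)) * (-alpha) * (2 - alpha)
            + (hi ** (2 - alpha) - lo ** (2 - alpha)) * (-alpha) * (1 - alpha))

def diethelm_weights(alpha, j):
    # Weights w_{l,2j}, l = 0..2j (j >= 2 assumed), recovered from (7):
    # Gamma(3 - alpha) w_{l,2j} = (-a)(-a+1)(-a+2)(2j)^{-a} alpha_{l,2j}.
    num = [0.0] * (2 * j + 1)
    num[0] = 2.0 ** -alpha * (alpha + 2)
    num[1] = -alpha * 2.0 ** (2 - alpha)
    num[2] = alpha ** 2 * 2.0 ** -alpha + 0.5 * F_pair(alpha, 2, 3, 4)   # F0(2)
    for k in range(2, j + 1):                  # l = 2k - 1: -F1(k)
        num[2 * k - 1] = -F_pair(alpha, k, 2 * k - 2, 2 * k)
    for k in range(2, j):                      # l = 2k: (F2(k) + F0(k + 1)) / 2
        num[2 * k] = 0.5 * (F_pair(alpha, k, 2 * k - 2, 2 * k - 1)
                            + F_pair(alpha, k + 1, 2 * k + 1, 2 * k + 2))
    num[2 * j] = 0.5 * F_pair(alpha, j, 2 * j - 2, 2 * j - 1)  # l = 2j: F2(j)/2
    return [v / gamma(3 - alpha) for v in num]

def left_rl_at_even_node(g, x, alpha, j):
    # D^alpha g at the even node x_{2j} = x on a uniform grid, h = x / (2j).
    h = x / (2 * j)
    w = diethelm_weights(alpha, j)
    return h ** -alpha * sum(w[l] * g(x - l * h) for l in range(2 * j + 1))

alpha = 1.5
approx = left_rl_at_even_node(lambda s: s * s, 1.0, alpha, 8)
exact = gamma(3) / gamma(3 - alpha)
print(abs(approx - exact))
```

For quadratic g the agreement is at rounding level; for general \(g \in C^{3}\) the error behaves like \(O(h^{3-\alpha })\), as stated in Lemma 1.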

Let \(M=2 m_{0}\). Let \(0 = x_{0}< x_{1}< x_{2}< \cdots< x_{j}< \cdots < x_{M}=1\) and \(0 = y_{0}< y_{1}< y_{2}< \cdots< y_{j}< \cdots < y_{M}=1\) be uniform partitions of [0, 1] with space step size h. Let \(0 = t_{0}< t_{1}< t_{2}< \cdots< t_{n}< \cdots < t_{N}=T\) be a uniform partition of [0, T] with time step size \(\tau\). At the point \((t_{n+1}, x_{l}, y_{m})\), where l, m will be specified later, we have

$$\begin{aligned} u_{t} (t_{n+1}, x_{l}, y_{m})&=\;_{0}^{R} \text{D}^{\alpha }_{x} u(t_{n+1}, x_{l}, y_{m}) + \, _{0}^{R} \text{D}^{\alpha }_{y} u(t_{n+1}, x_{l}, y_{m}) \nonumber \\&\quad + \, _{x}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l}, y_{m}) + \, _{y}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l}, y_{m}) + f_{l, m}^{n+1}, \end{aligned}$$
(9)

where \(f_{l, m}^{n+1} = f(t_{n+1}, x_{l}, y_{m})\).

To obtain a stable finite difference method, we will consider the following shifted equation:

$$\begin{aligned}&u_{t} (t_{n+1}, x_{l}, y_{m}) - \, _{0}^{R} \text{D}^{\alpha }_{x} u(t_{n+1}, x_{l+1}, y_{m}) - \, _{x}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l-1}, y_{m}) \nonumber \\& - \, _{0}^{R} \text{D}^{\alpha }_{y} u(t_{n+1}, x_{l}, y_{m+1}) - \, _{y}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l}, y_{m-1}) = f_{l, m}^{n+1} + \sigma _{l, m}^{n+1}, \end{aligned}$$
(10)

where the errors produced by the shifted terms are denoted by

$$\begin{aligned} \sigma _{l, m}^{n+1}&= \, _{0}^{R} \text{D}^{\alpha }_{x} u(t_{n+1}, x_{l}, y_{m}) - \, _{0}^{R} \text{D}^{\alpha }_{x} u(t_{n+1}, x_{l+1}, y_{m}) \nonumber \\&\quad + \, _{x}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l}, y_{m}) - \, _{x}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l-1}, y_{m}) \nonumber \\&\quad + \, _{0}^{R} \text{D}^{\alpha }_{y} u(t_{n+1}, x_{l}, y_{m}) - \, _{0}^{R} \text{D}^{\alpha }_{y} u(t_{n+1}, x_{l}, y_{m+1}) \nonumber \\&\quad + \, _{y}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l}, y_{m}) - \, _{y}^{R} \text{D}^{\alpha }_{1} u(t_{n+1}, x_{l}, y_{m-1}). \end{aligned}$$
(11)

We assume that \(\, _{0}^{R} \text{D}^{\alpha }_{x} u(t, x, y), \, _{0}^{R} \text{D}^{\alpha }_{y} u(t, x, y)\) satisfy the following Hölder conditions.

Assumption 1

For any \(x^{*}, x^{**}, y^{*}, y^{**} \in \mathbb {R}\), there exist constants \(C>0\) and \(\beta >0\) such that

$$\begin{aligned}&\left| \, _{0}^{R} \text{D}^{\alpha }_{x} u(t, x^{*}, y) - \, _{0}^{R} \text{D}^{\alpha }_{x} u(t, x^{**}, y) \right| \le C | x^{*}- x^{**}|^{\beta },\\&\left| \, _{0}^{R} \text{D}^{\alpha }_{y} u(t, x, y^{*}) - \, _{0}^{R} \text{D}^{\alpha }_{y} u(t, x, y^{**}) \right| \le C | y^{*}- y^{**}|^{\beta }. \end{aligned}$$

We also assume that \(\, _{x}^{R} \text{D}^{\alpha }_{1} u(t, x, y), \, _{y}^{R} \text{D}^{\alpha }_{1} u(t, x, y)\) satisfy the following Hölder conditions.

Assumption 2

For any \(x^{*}, x^{**}, y^{*}, y^{**} \in \mathbb {R}\), there exist constants \(C>0\) and \(\beta >0\) such that

$$\begin{aligned}&\left| \, _{x}^{R} \text{D}^{\alpha }_{1} u(t, x^{*}, y) - \, _{x}^{R} \text{D}^{\alpha }_{1} u(t, x^{**}, y) \right| \le C | x^{*}- x^{**}|^{\beta },\\&\left| \, _{y}^{R} \text{D}^{\alpha }_{1} u(t, x, y^{*}) - \, _{y}^{R} \text{D}^{\alpha }_{1} u(t, x, y^{**}) \right| \le C | y^{*}- y^{**}|^{\beta }. \end{aligned}$$

Remark 1

To make Assumptions 1 and 2 hold, we need to assume that the solution u satisfies some regularity conditions. In some circumstances, such conditions are easy to check, for example, when \(\, _{0}^{R} \text{D}^{\alpha }_{x} v(x) \in C^{1}[0, 1]\), we have, with \(\beta =1\),

$$\begin{aligned} \left| \, _{0}^{R} \text{D}^{\alpha }_{x} v(x^{*}) - \, _{0}^{R} \text{D}^{\alpha }_{x} v(x^{**}) \right| \le C | x^{*}- x^{**}|^{\beta }. \end{aligned}$$

Similarly, we can consider \(\, _{x}^{R} \text{D}^{\alpha }_{1} v(x)\).
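For completeness, the mean value theorem argument behind Remark 1 reads as follows: if \(w(x) = \, _{0}^{R} \text{D}^{\alpha }_{x} v(x)\) belongs to \(C^{1}[0, 1]\), then for any \(x^{*}, x^{**} \in [0, 1]\) there exists \(\xi\) between \(x^{*}\) and \(x^{**}\) such that

$$\begin{aligned} \left| \, _{0}^{R} \text{D}^{\alpha }_{x} v(x^{*}) - \, _{0}^{R} \text{D}^{\alpha }_{x} v(x^{**}) \right| = | w^{\prime } (\xi ) | \, | x^{*}- x^{**}| \le \Vert w^{\prime } \Vert _{\infty } \, | x^{*}- x^{**}|, \end{aligned}$$

so the Hölder condition holds with \(C = \Vert w^{\prime } \Vert _{\infty }\) and \(\beta =1\).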

We now turn to the discretization of (10). Discretizing \(u_{t}\) at \(t=t_{n+1}\) using the backward Euler method and discretizing \(\, _{0}^{R} \text{D}^{\alpha }_{x}, \, _{x}^{R} \text{D}^{\alpha }_{1}, \, _{0}^{R} \text{D}^{\alpha }_{y}, \, _{y}^{R} \text{D}^{\alpha }_{1}\) at \(x= x_{l+1}, x= x_{l-1}, y= y_{m+1}, y= y_{m-1}\), respectively, using Diethelm's finite difference method introduced in Lemma 1, we obtain

$$\begin{aligned}&\tau ^{-1} \big ( u(t_{n+1}, x_{l}, y_{m}) - u(t_{n}, x_{l}, y_{m}) \big) \nonumber \\&-h^{-\alpha } \sum _{k=0}^{l_{i}^{(1)}} w^{(1)}_{k, l+1} u(t_{n+1}, x_{l+1-k}, y_{m}) - h^{-\alpha }\sum _{k=0}^{ m_{j}^{(1)}} w^{(2)}_{k, m+1} u(t_{n+1}, x_{l}, y_{m+1-k}) \nonumber \\& -h^{-\alpha } \sum _{k=0}^{l_{i}^{(2)}} w^{(3)}_{k, M-(l-1)} u(t_{n+1}, x_{l-1+k}, y_{m}) - h^{-\alpha }\sum _{k=0}^{ m_{j}^{(2)}} w^{(4)}_{k, M-(m-1)} u(t_{n+1}, x_{l}, y_{m-1+k}) \nonumber \\& = f_{l, m}^{n+1} + S_{l, m}^{n+1} + O(\tau +h^{\beta } + h^{3-\alpha }), \end{aligned}$$
(12)

where \(S^{n+1}_{l, m}\) can be defined as \(S_{2j}^{n+1}\) in [17, (27)] and the weights \(w_{k, l+1}^{(1)}, w_{k, m+1}^{(2)}, w_{k, M-(l-1)}^{(3)}, w_{k, M-(m-1)}^{(4)}\) in (12) are defined by (7) and (8). Further, we define \(l_{i}^{(1)}, l_{i}^{(2)}, m_{j}^{(1)}, m_{j}^{(2)}\) as follows, with \(i = 1, 2, \cdots , m_{0}-1, \; j=1, 2, \cdots , m_{0}-1\):

$$\begin{aligned} l_{i}^{(1)} ={\left\{ \begin{array}{ll} 2i, \quad l=2i, \\ 2i+2, \quad l=2i+1, \end{array}\right. } \quad m_{j}^{(1)} ={\left\{ \begin{array}{ll} 2j, \quad m=2j, \\ 2j+2, \quad m=2j+1, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} l_{i}^{(2)} ={\left\{ \begin{array}{ll} M-(2i-1)-1, \quad l=2i, \\ M-2i, \quad l=2i+1, \end{array}\right. }\quad m_{j}^{(2)} ={\left\{ \begin{array}{ll} M-(2j-1)-1, \quad m=2j, \\ M-2j, \quad m=2j+1. \end{array}\right. } \end{aligned}$$

Then we have

Lemma 2

[17, Lemma 2.3] Let \(1< \alpha < 2\). The coefficients \(w^{(s)}_{k, 2p}, s=1, 2, 3, 4, \; p=1, 2, \cdots , m_{0},\) defined in (12) satisfy

$$\begin{aligned}&w^{(s)}_{1, 2p} <0, \end{aligned}$$
(13)
$$\begin{aligned}&w^{(s)}_{k, 2p} >0, \quad k \ne 1, \; k =0,2, 3, \cdots , 2p, \end{aligned}$$
(14)
$$\begin{aligned}&\Gamma (3- \alpha ) \sum _{k=0}^{2p} w^{(s)}_{k, 2p} <0. \end{aligned}$$
(15)
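The sign properties (13)–(15) can be checked numerically. The sketch below is a hypothetical re-implementation (our own function names) of the weights of Lemma 1, which are the same for all four superscripts s, and verifies (13)–(15) for several values of \(\alpha\) and p:

```python
from math import gamma

def F_pair(alpha, k, p, q):
    # F0, F1, F2 from Lemma 1 share this form: node pair (p, q), cell [2k-2, 2k].
    lo, hi = 2.0 * k - 2.0, 2.0 * k
    return (p * q * (hi ** -alpha - lo ** -alpha) * (1 - alpha) * (2 - alpha)
            - (p + q) * (hi ** (1 - alpha) - lo ** (1 - alpha)) * (-alpha) * (2 - alpha)
            + (hi ** (2 - alpha) - lo ** (2 - alpha)) * (-alpha) * (1 - alpha))

def weights(alpha, p):
    # w_{l,2p}, l = 0..2p (p >= 2), via (7) and Lemma 1.
    num = [0.0] * (2 * p + 1)
    num[0] = 2.0 ** -alpha * (alpha + 2)
    num[1] = -alpha * 2.0 ** (2 - alpha)
    num[2] = alpha ** 2 * 2.0 ** -alpha + 0.5 * F_pair(alpha, 2, 3, 4)
    for k in range(2, p + 1):
        num[2 * k - 1] = -F_pair(alpha, k, 2 * k - 2, 2 * k)
    for k in range(2, p):
        num[2 * k] = 0.5 * (F_pair(alpha, k, 2 * k - 2, 2 * k - 1)
                            + F_pair(alpha, k + 1, 2 * k + 1, 2 * k + 2))
    num[2 * p] = 0.5 * F_pair(alpha, p, 2 * p - 2, 2 * p - 1)
    return [v / gamma(3 - alpha) for v in num]

# Check (13), (14), (15) over a range of alpha and p.
ok = True
for alpha in (1.1, 1.5, 1.9):
    for p in (2, 4, 8):
        w = weights(alpha, p)
        ok &= w[1] < 0                                        # (13)
        ok &= all(w[k] > 0 for k in range(len(w)) if k != 1)  # (14)
        ok &= gamma(3 - alpha) * sum(w) < 0                   # (15)
print(ok)
```

Only \(w_{1, 2p}\) is negative; this is what makes the shifted scheme (with the shift placing \(w_{1, 2p}\) on the diagonal) stable.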

Let \(U_{l, m}^{n}\) denote the approximate solution of \(u(t_{n}, x_{l}, y_{m})\). We define the following finite difference method for solving (1)–(4):

$$\begin{aligned}&\tau ^{-1} \left( U_{l, m}^{n+1} - U_{l, m}^{n} \right) \nonumber \\& -h^{-\alpha } \sum _{k=0}^{l_{i}^{(1)}} w^{(1)}_{k, l+1} U^{n+1}_{l+1-k, m} - h^{-\alpha }\sum _{k=0}^{ m_{j}^{(1)}} w^{(2)}_{k, m+1} U^{n+1}_{l, m+1-k} \nonumber \\& -h^{-\alpha } \sum _{k=0}^{l_{i}^{(2)}} w^{(3)}_{k, M-(l-1)} U^{n+1}_{l-1+k, m} - h^{-\alpha }\sum _{k=0}^{ m_{j}^{(2)}} w^{(4)}_{k, M-(m-1)} U^{n+1}_{l, m-1+k} \nonumber \\& = f_{l, m}^{n+1} + Q_{l, m}^{n+1}, \end{aligned}$$
(16)

where \(Q_{l, m}^{n+1}\) is an approximation of \(S_{l, m}^{n+1}\), defined as in [17, (30)], which satisfies

$$\begin{aligned} Q_{l, m}^{n+1} -S_{l, m}^{n+1} = O(h^{3- \alpha }). \end{aligned}$$

Now we come to our main theorem in this work.

Theorem 1

Assume that \(u(t_{n}, x_{l}, y_{m})\) and \(U_{l, m}^{n}\) are the solutions of (12) and (16), respectively, and that Assumptions 1 and 2 hold. Then there exists a norm \(\Vert \cdot \Vert\) such that

$$\begin{aligned} \Vert e^{n} \Vert = \Vert U^{n} - u(t_{n}) \Vert \le C ( \tau + h^{3- \alpha } + h^{\beta }). \end{aligned}$$

Proof

Let \(e_{l,m}^{n} = u(t_{n}, x_{l}, y_{m}) - U_{l, m}^{n}\). Subtracting (16) from (12), we get the following error equation, with \(\lambda = \tau /h^{\alpha }\):

$$\begin{aligned}&(e_{l, m}^{n+1} - e_{l, m}^{n} ) - \lambda \left( \sum _{k=0}^{l_{i}^{(1)}} w^{(1)}_{k, l+1} e_{l+1-k, m}^{n+1} + \sum _{k=0}^{m_{j}^{(1)}} w^{(2)}_{k, m+1} e_{l, m+1-k}^{n+1} \right) \nonumber \\& - \lambda \left( \sum _{k=0}^{l_{i}^{(2)}} w^{(3)}_{k,M-(l-1)} e_{l-1+k, m}^{n+1} + \sum _{k=0}^{m_{j}^{(2)}} w^{(4)}_{k, M-(m-1)} e_{l,m-1+k}^{n+1} \right) = \tau R_{l, m}^{n+1}, \end{aligned}$$
(17)

where

$$\begin{aligned} R_{l, m}^{n+1} = O (\tau + h^{\beta } + h^{3- \alpha }). \end{aligned}$$

Rearranging (17), we get

$$\begin{aligned}&\Big (1-\lambda \omega _{1,l+1}^{(1)}-\lambda \omega _{1,M-(l-1)}^{(3)}-\lambda \omega _{1,m+1}^{(2)}-\lambda \omega _{1,M-(m-1)}^{(4)} \Big )e_{l,m}^{n+1}\nonumber \\& -\lambda \left( \sum _{k=2}^{l+1}\omega _{k,l+1}^{(1)}e_{l+1-k,m}^{n+1}+\sum _{k=2}^{M-(l-1)}\omega _{k,M-(l-1)}^{(3)}e_{l-1+k,m}^{n+1} \right) \nonumber \\& -\lambda \left( \sum _{k=2}^{m+1}\omega _{k,m+1}^{(2)}e_{l,m+1-k}^{n+1}+\sum _{k=2}^{M-(m-1)}\omega _{k,M-(m-1)}^{(4)}e_{l,m-1+k}^{n+1} \right) \nonumber \\& -\lambda \omega _{0,l+1}^{(1)}e_{l+1,m}^{n+1}-\lambda \omega _{0,M-(l-1)}^{(3)}e_{l-1,m}^{n+1} -\lambda \omega _{0,m+1}^{(2)}e_{l,m+1}^{n+1}-\lambda \omega _{0,M-(m-1)}^{(4)}e_{l,m-1}^{n+1} \nonumber \\& =\tau R_{l,m}^{n+1}+e_{l,m}^{n}, \end{aligned}$$
(18)

that is,

$$\begin{aligned}&\Big (1-\lambda \omega _{1,l+1}^{(1)}-\lambda \omega _{1,M-(l-1)}^{(3)}-\lambda \omega _{1,m+1}^{(2)}-\lambda \omega _{1,M-(m-1)}^{(4)} \Big )e_{l,m}^{n+1}\nonumber \\& -\lambda \omega _{0,l+1}^{(1)}e_{l+1,m}^{n+1} -0 - \lambda w^{(1)}_{2, l+1} e_{l-1, m}^{n+1} -\cdots - \lambda w^{(1)}_{l+1, l+1} e_{0, m}^{n+1} \nonumber \\& - \lambda w^{(2)}_{0, m+1} e_{l, m+1}^{n+1} -0 - \lambda w^{(2)}_{2, m+1} e_{l, m-1}^{n+1} -\cdots - \lambda w^{(2)}_{m+1, m+1} e_{l, 0}^{n+1} \nonumber \\& -\lambda \omega _{0,M-(l-1)}^{(3)}e_{l-1,m}^{n+1} -0 -\lambda \omega _{2,M-(l-1)}^{(3)}e_{l+1,m}^{n+1} -\cdots -\lambda \omega _{M-(l-1),M-(l-1)}^{(3)}e_{M,m}^{n+1} \nonumber \\& -\lambda \omega _{0,M-(m-1)}^{(4)}e_{l,m-1}^{n+1} -0 -\lambda \omega _{2,M-(m-1)}^{(4)}e_{l,m+1}^{n+1} -\cdots -\lambda \omega _{M-(m-1),M-(m-1)}^{(4)}e_{l,M}^{n+1}\nonumber \\& = e_{l, m}^{n} + \tau R_{l, m}^{n+1}. \end{aligned}$$
(19)

More precisely, we have, for \(l=1, m=1,2, \cdots , M-1\),

$$\begin{aligned}&\left( 1-\lambda \omega _{1,2}^{(1)}-\lambda \omega _{1,M}^{(3)}-\lambda \omega _{1,m+1}^{(2)}-\lambda \omega _{1,M-(m-1)}^{(4)}\right) e_{1,m}^{n+1}\nonumber \\& -\lambda \omega _{0,2}^{(1)}e_{2,m}^{n+1} -0 -\lambda \omega _{2,2}^{(1)}e_{0,m}^{n+1} \nonumber \\& -\lambda w^{(2)}_{0, m+1} e_{1, m+1}^{n+1} -0 - \lambda w^{(2)}_{2, m+1} e_{1, m-1}^{n+1} -\cdots - \lambda w^{(2)}_{m+1, m+1} e_{1, 0}^{n+1}\nonumber \\& -\lambda \omega _{0,M}^{(3)}e_{0,m}^{n+1} -0 -\lambda \omega _{2,M}^{(3)}e_{2,m}^{n+1} -\cdots -\lambda \omega _{M,M}^{(3)}e_{M,m}^{n+1} \nonumber \\& -\lambda \omega _{0,M-(m-1)}^{(4)}e_{1,m-1}^{n+1} -0 -\lambda \omega _{2,M-(m-1)}^{(4)}e_{1,m+1}^{n+1} -\cdots -\lambda \omega _{M-(m-1),M-(m-1)}^{(4)}e_{1,M}^{n+1}\nonumber \\& = e_{1, m}^{n} + \tau R_{1, m}^{n+1}. \end{aligned}$$
(20)

For \(l=2, m=1,2, \cdots , M-1\), we have

$$\begin{aligned}&\left( 1-\lambda \omega _{1,3}^{(1)}-\lambda \omega _{1,M-1}^{(3)}-\lambda \omega _{1,m+1}^{(2)}-\lambda \omega _{1,M-(m-1)}^{(4)} \right) e_{2,m}^{n+1}\nonumber \\& -\lambda \omega _{0,3}^{(1)}e_{3,m}^{n+1} -0 -\lambda \omega _{2,3}^{(1)}e_{1,m}^{n+1} \nonumber \\& - \lambda w^{(2)}_{0, m+1} e_{2, m+1}^{n+1} -0 - \lambda w^{(2)}_{2, m+1} e_{2, m-1}^{n+1} -\cdots - \lambda w^{(2)}_{m+1, m+1} e_{2, 0}^{n+1}\nonumber \\& -\lambda \omega _{0,M-1}^{(3)}e_{1,m}^{n+1} -0 -\lambda \omega _{2,M-1}^{(3)}e_{3,m}^{n+1} -\cdots -\lambda \omega _{M-1,M-1}^{(3)}e_{M,m}^{n+1} \nonumber \\& -\lambda \omega _{0,M-(m-1)}^{(4)}e_{2,m-1}^{n+1} -0 -\lambda \omega _{2,M-(m-1)}^{(4)}e_{2,m+1}^{n+1} \nonumber \\& -\cdots -\lambda \omega _{M-(m-1),M-(m-1)}^{(4)}e_{2,M}^{n+1}\nonumber \\& = e_{2, m}^{n} + \tau R_{2, m}^{n+1}. \end{aligned}$$
(21)

Finally, for \(l=M-1, m=1,2, \cdots , M-1\), we have

$$\begin{aligned}&\left( 1-\lambda \omega _{1,M}^{(1)}-\lambda \omega _{1,2}^{(3)}-\lambda \omega _{1,m+1}^{(2)}-\lambda \omega _{1,M-(m-1)}^{(4)}\right) e_{M-1,m}^{n+1}\nonumber \\& -\lambda \omega _{0,M}^{(1)}e_{M,m}^{n+1} -0 - \lambda w^{(1)}_{2, M} e_{M-2, m}^{n+1} -\cdots - \lambda w^{(1)}_{M, M} e_{0, m}^{n+1} \nonumber \\& - \lambda w^{(2)}_{0, m+1} e_{M-1, m+1}^{n+1} -0 - \lambda w^{(2)}_{2, m+1} e_{M-1, m-1}^{n+1} -\cdots - \lambda w^{(2)}_{m+1, m+1} e_{M-1, 0}^{n+1}\nonumber \\& -\lambda \omega _{0,2}^{(3)}e_{M-2,m}^{n+1} -0 -\lambda \omega _{2,2}^{(3)}e_{M,m}^{n+1} \nonumber \\& -\lambda \omega _{0,M-(m-1)}^{(4)}e_{M-1,m-1}^{n+1} -0 -\lambda \omega _{2,M-(m-1)}^{(4)}e_{M-1,m+1}^{n+1} \nonumber \\& -\cdots -\lambda \omega _{M-(m-1),M-(m-1)}^{(4)}e_{M-1,M}^{n+1}\nonumber \\& = e_{M-1, m}^{n} + \tau R_{M-1, m}^{n+1}. \end{aligned}$$
(22)

Thus, we may write (18) as the following matrix form:

$$\begin{aligned} A e^{n+1} = e^{n} + \tau R^{n+1}, \end{aligned}$$
(23)

where

$$\begin{aligned} e^{n+1}= & {} \left( \begin{array}{cccc} e_{1}^{n+1} \\ e_{2}^{n+1} \\ \vdots \\ e_{M-1}^{n+1} \end{array} \right) , \quad R^{n+1} = \left( \begin{array}{cccc} R_{1}^{n+1} \\ R_{2}^{n+1} \\ \vdots \\ R_{M-1}^{n+1} \end{array} \right) , \\ e_{l}^{n+1}= & {} \left( \begin{array}{cccc} e_{l,1}^{n+1} \\ e_{l,2}^{n+1} \\ \vdots \\ e_{l, M-1}^{n+1} \end{array} \right) , \quad R_{l}^{n+1} = \left( \begin{array}{cccc} R_{l,1}^{n+1} \\ R_{l,2}^{n+1} \\ \vdots \\ R_{l, M-1}^{n+1} \end{array} \right) , \; l=1, 2, \cdots , M-1, \end{aligned}$$

and

$$\begin{aligned} A= (a_{i, j})_{(M-1)^2 \times (M-1)^2} = \left( \begin{array}{cccc} A_{1,1} &{} A_{1,2}&{} \cdots &{} A_{1, M-1} \\ A_{2, 1} &{} A_{2, 2} &{} \cdots &{} A_{2, M-1} \\ \vdots &{} \vdots &{} &{} \vdots \\ A_{M-1, 1} &{} A_{M-1, 2} &{} \cdots &{} A_{M-1, M-1} \\ \end{array} \right) _{(M-1)^2 \times (M-1)^2}. \end{aligned}$$

Here

$$\begin{aligned} A_{l, l} = \left( \begin{array}{ccccc} a_{11} &{} - \lambda w^{(2)}_{0, 2}- \lambda w^{(4)}_{2, M} &{} - \lambda w^{(4)}_{3, M} &{} \cdots &{}- \lambda w^{(4)}_{M-1, M} \\ - \lambda w^{(2)}_{2, 3} - \lambda w^{(4)}_{0, M-1} &{} a_{22} &{} - \lambda w^{(2)}_{0, 3}- \lambda w^{(4)}_{2, M-1} &{} \cdots &{}- \lambda w^{(4)}_{M-2, M-1} \\ \vdots &{} \vdots &{} \vdots &{} &{} \vdots \\ - \lambda w^{(2)}_{M-1, M} &{} - \lambda w^{(2)}_{M-2, M} &{} - \lambda w^{(2)}_{2, M} -\lambda w_{0, 2}^{(4)} &{} \cdots &{} a_{M-1 M-1} \\ \end{array} \right) , \end{aligned}$$

where

$$\begin{aligned}&a_{11}=1-\lambda \omega ^{(1)}_{1,l+1}-\lambda \omega ^{(3)}_{1,M-l+1}-\lambda \omega ^{(2)}_{1,2}-\lambda \omega ^{(4)}_{1,M}, \\&a_{22}=1-\lambda \omega ^{(1)}_{1,l+1}-\lambda \omega ^{(3)}_{1,M-l+1} -\lambda \omega ^{(2)}_{1,3}-\lambda \omega ^{(4)}_{1,M-1}, \\&\cdots \\&a_{M-1,M-1}=1-\lambda \omega ^{(1)}_{1,l+1}-\lambda \omega ^{(3)}_{1,M-l+1}-\lambda \omega ^{(2)}_{1,M}-\lambda \omega ^{(4)}_{1,2}, \end{aligned}$$

and, with \(E= I_{(M-1) \times (M-1)}\),

$$\begin{aligned}&A_{l, l-1} = (-\lambda \omega ^{(1)}_{2,l+1}-\lambda \omega ^{(3)}_{0,M-l+1})E, \quad l=2, 3, \cdots , M-1, \\&A_{l, l-2} = -\lambda w^{(1)}_{3, l+1} E, \quad l=3, 4, \cdots , M-1, \\&\cdots \\&A_{l, l-(M-2)} = - \lambda w^{(1)}_{M-1, l+1} E, \quad l=M-1, \\&A_{l,l+1}=(-\lambda \omega ^{(1)}_{0,l+1}-\lambda \omega ^{(3)}_{2,M-l+1})E, \ \ \ \ l=1,2,\cdots ,M-2, \\&A_{l,l+2}=-\lambda \omega ^{(3)}_{3,M-l+1}E, \ \ \ \ l=1,2,\cdots ,M-3, \\&\cdots \\&A_{l,M-l}=-\lambda \omega ^{(3)}_{M-1,M-l+1}E, \ \ \ \ l=1. \end{aligned}$$

We shall show that there exists a norm \(\Vert \cdot \Vert\) such that

$$\begin{aligned} \Vert A^{-1} \Vert \le 1. \end{aligned}$$
(24)

Assuming for the moment that (24) holds, we have, by (23), noting that \((n+1) \tau = t_{n+1} \le T\),

$$\begin{aligned} \Vert e^{n+1} \Vert&\le \Vert A^{-1} \Vert \big ( \Vert e^{n} \Vert + \tau \Vert R^{n+1} \Vert \big ) \le \Vert e^{n} \Vert + \tau \Vert R^{n+1} \Vert \\&\le \cdots \\&\le \Vert e^{0} \Vert + ((n+1) \tau ) \max _{1 \le n \le N} \Vert R^{n} \Vert \le C ( \tau + h^{\beta } + h^{3-\alpha }), \end{aligned}$$

where we use the fact \(e^{0} =0\).

It remains to show (24). It suffices to show that all the eigenvalues of A have modulus greater than 1, which implies that all the eigenvalues of \(A^{-1}\) have modulus less than 1. If the spectral radius of \(A^{-1}\) is less than 1, then there exists some norm \(\Vert \cdot \Vert\) such that \(\Vert A^{-1} \Vert \le 1\) [33]. To show that all the eigenvalues of A have modulus greater than 1, we use the well-known Gershgorin circle theorem.

Let

$$\begin{aligned} r_{l}= \sum _{k=1, k \ne l}^{(M-1)^2} | a_{l, k}|. \end{aligned}$$

We have

$$\begin{aligned} r_{1}&=\lambda \omega ^{(1)}_{0,2}+\lambda \omega ^{(2)}_{0,2} +\lambda \left( \omega ^{(4)}_{2,M}+\cdots +\omega ^{(4)}_{M-1,M}\right) +\lambda \left( \omega ^{(3)}_{2,M}+\cdots +\omega ^{(3)}_{M-1,M}\right) ,\\ a_{1,1}&=1-\lambda \omega ^{(1)}_{1,2}-\lambda \omega ^{(3)}_{1,M}-\lambda \omega ^{(2)}_{1,2}-\lambda \omega ^{(4)}_{1,M},\\ r_{2}&=\lambda \omega ^{(1)}_{0,2}+\lambda \left( \omega ^{(2)}_{0,3}+\omega ^{(2)}_{2,3}\right) +\lambda \left( \omega ^{(4)}_{0,M-1}+0+\omega ^{(4)}_{2,M-1}+\cdots +\omega ^{(4)}_{M-2,M-1}\right) \\&\quad +\lambda (\omega ^{(3)}_{2,M}+\cdots +\omega ^{(3)}_{M-1,M}),\\ a_{2,2}&=1-\lambda \omega ^{(1)}_{1,2}-\lambda \omega ^{(3)}_{1,M}-\lambda \omega ^{(2)}_{1,3}-\lambda \omega ^{(4)}_{1,M-1},\\ \cdots \\ r_{(M-1)^{2}}&=\lambda \left( \omega ^{(1)}_{2,M}+\cdots +\omega ^{(1)}_{M-1,M}\right) + \lambda \omega ^{(3)}_{0,2} \\&\quad + \lambda \left( \omega ^{(2)}_{2,M}+\cdots +\omega ^{(2)}_{M-1,M}\right) +\lambda \omega ^{(4)}_{0,2},\\ a_{(M-1)^{2},(M-1)^{2}}&=1-\lambda \omega ^{(1)}_{1,M}-\lambda \omega ^{(3)}_{1,2}-\lambda \omega ^{(2)}_{1,M}-\lambda \omega ^{(4)}_{1,2}, \end{aligned}$$

which imply that

$$\begin{aligned} a_{1,1}-r_{1}&= 1-\lambda \left( \omega ^{(1)}_{0,2}+\omega ^{(1)}_{1,2}\right) -\lambda \left( \omega ^{(2)}_{0,2}+\omega ^{(2)}_{1,2}\right) \\&\quad -\lambda \left( \omega _{1,M}^{(3)}+\cdots +\omega _{M-1,M}^{(3)}\right) -\lambda \left( \omega _{1,M}^{(4)}+ \cdots +\omega _{M-1,M}^{(4)}\right), \\ a_{2,2}-r_{2}&= 1-\lambda \left( \omega ^{(1)}_{0,2}+\omega ^{(1)}_{1,2}\right) -\lambda \left( \omega ^{(2)}_{0,3}+\omega ^{(2)}_{1,3}+\omega ^{(2)}_{2,3}\right) \\&\quad -\lambda \left( \omega _{1,M}^{(3)}+\cdots +\omega _{M-1,M}^{(3)}\right) -\lambda \left( \omega _{0,M-1}^{(4)}+ \cdots +\omega _{M-2,M-1}^{(4)}\right), \\ \cdots \\ a_{(M-1)^{2},(M-1)^{2}}-r_{(M-1)^{2}}&=1-\lambda \sum _{i=1}^{M-1}\omega ^{(1)}_{i,M}-\lambda \sum _{i=1}^{M-1}\omega ^{(2)}_{i,M} \\&\quad -\lambda \left( \omega ^{(3)}_{0,2}+\omega ^{(3)}_{1,2}\right) -\lambda \left( \omega ^{(4)}_{0,2}+\omega ^{(4)}_{1,2}\right) . \end{aligned}$$

By Lemma 2, we get

$$\begin{aligned} a_{l, l} - r_{l} >1, \quad l=1, 2, \cdots , (M-1)^2, \end{aligned}$$

which implies, by the Gershgorin circle theorem, that every eigenvalue \(\mu\) of A satisfies, for some l,

$$\begin{aligned} | \mu - a_{l, l} | \le r_{l} < a_{l, l} - 1, \end{aligned}$$

so that \(| \mu | \ge a_{l, l} - r_{l} > 1\); that is, all the eigenvalues of A have modulus greater than 1, which implies (24).

Together these estimates complete the proof of Theorem 1.
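The Gershgorin step at the end of the proof can be illustrated numerically: if a matrix with negative off-diagonal entries, as in the error equation, has \(a_{l,l} - r_{l} > 1\) in every row, then every eigenvalue has real part greater than 1, and the spectral radius of the inverse is below 1. The sketch below uses a small random matrix with this structure, not the actual matrix A of (23):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12
# Negative off-diagonal entries, as in the error equation (18).
A = -rng.random((n, n))
np.fill_diagonal(A, 0.0)
r = np.abs(A).sum(axis=1)            # Gershgorin radii r_l
np.fill_diagonal(A, 1.0 + r + 0.05)  # enforce a_ll - r_l = 1.05 > 1

# Every eigenvalue mu lies in some disk |z - a_ll| <= r_l, so Re(mu) > 1.
mu = np.linalg.eigvals(A)
print(mu.real.min())                 # > 1 by the Gershgorin circle theorem
# Hence all eigenvalues of A^{-1} have modulus < 1, i.e., rho(A^{-1}) < 1.
rho = max(abs(np.linalg.eigvals(np.linalg.inv(A))))
print(rho)
```

The diagonal margin 0.05 is an arbitrary choice; any positive margin gives the same conclusion.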

3 Numerical Examples

In this section, we consider four numerical examples to illustrate that the numerical results are consistent with the theoretical results.

Example 1

Consider, with \(1< \alpha <2\), \(0 \le x, y \le 2\) [6],

$$\begin{aligned}&u_{t} (t, x, y ) = \, _0^R \text{D}_{x}^{\alpha } u(t, x, y) + \, _0^R \text{D}_{y}^{\alpha } u(t, x, y) + f(t,x,y), \; 0<t<1, \end{aligned}$$
(25)
$$\begin{aligned}&u(t, 0, y) = u(t, 2, y) = u(t, x, 0) = u(t, x, 2) =0, \end{aligned}$$
(26)
$$\begin{aligned}&u(0, x, y) = 4 x^2 (2-x )^2 y^2 (2-y )^2, \end{aligned}$$
(27)

where

$$\begin{aligned} f(t,x, y) =&-4 {\text{e}}^{-t} x^2 (2-x)^2 y^2 (2-y)^2 \\&-4{\text{e}}^{-t} (y^2 (2-y)^2) \left( 4 \frac{\Gamma (2+1)}{\Gamma (2-\alpha +1)} x^{2- \alpha } \right. \\&\left. -4 \frac{\Gamma (3+1)}{\Gamma (3-\alpha +1)} x^{3- \alpha } + \frac{\Gamma (4+1)}{\Gamma (4-\alpha +1)} x^{4- \alpha } \right) \\&-4{\text{e}}^{-t} (x^2 (2-x)^2) \left( 4 \frac{\Gamma (2+1)}{\Gamma (2-\alpha +1)} y^{2- \alpha } \right. \\&\left. -4 \frac{\Gamma (3+1)}{\Gamma (3-\alpha +1)} y^{3- \alpha } + \frac{\Gamma (4+1)}{\Gamma (4-\alpha +1)} y^{4- \alpha } \right) . \end{aligned}$$

It is easy to check that \(u(t, x, y) =4{\text{e}}^{-t} x^2(2-x)^2y^2 (2-y)^2\) is the exact solution.
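As a sanity check on the source term, the residual \(u_{t} - \, _0^R \text{D}_{x}^{\alpha } u - \, _0^R \text{D}_{y}^{\alpha } u - f\) can be evaluated numerically: below, f is transcribed literally from the formula above, while the fractional derivatives of the exact solution are approximated independently with the first-order Grünwald–Letnikov formula (the function names are ours, for illustration only):

```python
from math import exp, gamma

alpha = 1.5

def f(t, x, y):
    # Source term of Example 1, transcribed literally from the text.
    bx = (4 * gamma(2 + 1) / gamma(2 - alpha + 1) * x ** (2 - alpha)
          - 4 * gamma(3 + 1) / gamma(3 - alpha + 1) * x ** (3 - alpha)
          + gamma(4 + 1) / gamma(4 - alpha + 1) * x ** (4 - alpha))
    by = (4 * gamma(2 + 1) / gamma(2 - alpha + 1) * y ** (2 - alpha)
          - 4 * gamma(3 + 1) / gamma(3 - alpha + 1) * y ** (3 - alpha)
          + gamma(4 + 1) / gamma(4 - alpha + 1) * y ** (4 - alpha))
    return (-4 * exp(-t) * x**2 * (2 - x)**2 * y**2 * (2 - y)**2
            - 4 * exp(-t) * y**2 * (2 - y)**2 * bx
            - 4 * exp(-t) * x**2 * (2 - x)**2 * by)

def gl_left(g, x, alpha, n):
    # First-order Grunwald-Letnikov approximation of the left-sided
    # Riemann-Liouville derivative of g on [0, x].
    h = x / n
    w, acc = 1.0, g(x)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        acc += w * g(x - k * h)
    return h ** -alpha * acc

t, x, y = 0.5, 0.8, 1.3
u = lambda s, r: 4 * exp(-t) * s**2 * (2 - s)**2 * r**2 * (2 - r)**2
ut = -u(x, y)                                   # u_t = -u for this solution
dux = gl_left(lambda s: u(s, y), x, alpha, 4000)
duy = gl_left(lambda r: u(x, r), y, alpha, 4000)
residual = ut - dux - duy - f(t, x, y)
print(abs(residual))   # small: limited only by the O(h) quadrature error
```

The residual is at the level of the first-order quadrature error, confirming that f matches the exact solution.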

Note that, by Theorem 1, the error estimate satisfies, with \(\gamma = \min (3- \alpha , \beta )\),

$$\begin{aligned} \Vert e^{N} \Vert = \Vert U^{N} - u(t_{N}) \Vert \le C ( \tau + h^{\gamma }). \end{aligned}$$

In the numerical method (16), we simply ignore the errors \(\sigma _{l, m}^{n+1}\) in (11) produced by the shifted terms. If the numerical method (16) is used as it stands, the spatial error is therefore \(O(h^{\beta } + h^{3-\alpha })\). Since the exact solutions are given in our numerical examples, the errors \(\sigma _{l, m}^{n+1}\) in (11) can be calculated exactly, and the convergence order becomes \(O( h^{3-\alpha })\) if we include \(\sigma _{l, m}^{n+1}\) in the numerical method (16). In general, the exact solution of the equation is not known; in that case, we may approximate \(\sigma _{l, m}^{n+1}\) using the computed solutions \(U^{n}\) to improve the convergence order. In all the numerical simulations in this section, the numerical method (16) includes \(\sigma _{l, m}^{n+1}\) defined by (11), which makes the experimentally determined order of convergence (EOC) independent of \(\beta >0\).

We will observe the convergence orders with respect to the space step size. To do so, we choose a sufficiently small time step size \(\tau = 2^{-10}\) and different space step sizes \(h_{l} = 2^{-l}, \, l=2, 3, 4, 5, 6\), so that the computational error is dominated by the spatial error \(O(h^{3-\alpha }), 1< \alpha <2\). Denote by \(\Vert e_{l}^{N} \Vert = \Vert U^{N} - u(t_{N}) \Vert\) the \(L^{2}\) norm of the error at \(t_{N}=1\) calculated with the step size \(h_{l}\). We then have

$$\begin{aligned} \Vert e_{l}^{N} \Vert \approx C h_{l}^{\gamma }, \; l=2, 3, 4, 5, 6, \end{aligned}$$
(28)

which implies that

$$\begin{aligned} \frac{\Vert e_{l}^{N} \Vert }{\Vert e_{l+1}^{N} \Vert } \approx \frac{h_{l}^{ \gamma }}{h_{l+1}^{ \gamma }} = 2^{\gamma }. \end{aligned}$$

Hence, the convergence order satisfies

$$\begin{aligned} \gamma \approx \log _{2} \left( \frac{\Vert e_{l}^{N} \Vert }{\Vert e_{l+1}^{N} \Vert } \right) . \end{aligned}$$
(29)
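The computation (28)–(29) is mechanical given a table of error norms; a short sketch with illustrative values (not those of Table 1), assuming the errors behave like \(C h_{l}^{\gamma }\) with \(\gamma =1.5\):

```python
import math

# hypothetical L2 error norms for h_l = 2^-l, l = 2,...,6, behaving like C*h^gamma
gamma_true = 1.5                                  # e.g. 3 - alpha with alpha = 1.5
errors = [0.3 * (2.0**-l)**gamma_true for l in range(2, 7)]

# EOC between successive step sizes: gamma ~ log2(e_l / e_{l+1}), cf. (29)
eoc = [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
```

Each entry of `eoc` recovers the underlying order 1.5, which is how the EOC columns of the tables below are produced from the measured errors.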

The experimentally determined orders of convergence (EOC) for the numerical method (16) are provided in Table 1 with respect to the different \(\alpha\). We observe that the convergence order is indeed \(O(h^{3-\alpha })\) which is consistent with Theorem 1.

Next, we solve the equation in Example 1 using the finite difference method introduced in Meerschaert and Tadjeran [30], where the Riemann–Liouville fractional derivatives are approximated by the Grünwald–Letnikov formula, which requires that the solution of the equation satisfy homogeneous Dirichlet boundary conditions; see [36] for other shifted and weighted Grünwald difference operators for approximating the Riemann–Liouville fractional derivative. The convergence order of the finite difference method in [30] is O(h), and we indeed observe this in Table 2 for solving (25)–(27). From now on, we refer to the finite difference method in Meerschaert and Tadjeran [30] as "the shifted Grünwald–Letnikov method".
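For reference, the shifted Grünwald–Letnikov approximation of the left-sided derivative can be sketched as follows; the weights \(g_{k} = (-1)^{k} \binom{\alpha }{k}\) are generated by the standard recurrence \(g_{0}=1\), \(g_{k} = \left( 1- \frac{\alpha +1}{k}\right) g_{k-1}\). The test function \(x^{3}\), the evaluation point, and the step sizes below are illustrative choices, not data from the paper:

```python
import math

def gl_weights(alpha, n):
    # g_0 = 1, g_k = g_{k-1} * (1 - (alpha+1)/k), i.e. g_k = (-1)^k * C(alpha, k)
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

def shifted_gl(u, x, alpha, h):
    # _0^R D_x^alpha u(x) ~ h^{-alpha} * sum_k g_k u(x - (k-1) h)  (shift 1)
    m = round(x / h)
    g = gl_weights(alpha, m + 1)
    return sum(gk * u(x - (k - 1) * h) for k, gk in enumerate(g)) / h**alpha

alpha, x = 1.5, 0.5
# power rule: _0^R D_x^alpha x^3 = Gamma(4)/Gamma(4-alpha) * x^(3-alpha)
exact = math.gamma(4) / math.gamma(4 - alpha) * x**(3 - alpha)
err = [abs(shifted_gl(lambda s: s**3, x, alpha, 2.0**-j) - exact) for j in (6, 7, 8)]
```

Halving h roughly halves the error, consistent with the first-order O(h) accuracy of the shifted Grünwald–Letnikov method.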

Table 1 The experimentally determined orders of convergence (EOC) in Example 1 using the numerical method (16) at \(t=1\)
Table 2 The experimentally determined orders of convergence (EOC) in Example 1 using the shifted Grünwald–Letnikov method at \(t=1\)

Example 2

In this example, we will consider the following space fractional partial differential equation with non-homogeneous Dirichlet boundary conditions, with \(1< \alpha <2\), \(0 \le x, y \le 2\):

$$\begin{aligned}&u_{t} (t, x, y) = \, _0^R \text{D}_{x}^{\alpha } u(t, x, y) + \, _0^R \text{D}_{y}^{\alpha } u(t, x, y) + f(t,x, y), \quad 0< t<1, \end{aligned}$$
(30)
$$\begin{aligned}&u(t, 0, y) = u(t, 2, y)= u(t, x, 0)=u(t, x,2) = 5, \end{aligned}$$
(31)
$$\begin{aligned}&u(0, x, y) = 4 x^2 (2-x )^2 y^2 (2-y )^2 +5, \end{aligned}$$
(32)

where

$$\begin{aligned} f(t,x, y)&= -4 \text{e}^{-t} x^2 (2-x)^2 y^2 (2-y)^2 \\&\quad -4\text{e}^{-t} y^2 (2-y)^2 \left( 4 \frac{\Gamma (2+1)}{\Gamma (2-\alpha +1)} x^{2- \alpha } -4 \frac{\Gamma (3+1)}{\Gamma (3-\alpha +1)} x^{3- \alpha } \right. \\&\left. \quad + \frac{\Gamma (4+1)}{\Gamma (4-\alpha +1)} x^{4- \alpha } \right) + 5 \frac{\Gamma (1)}{\Gamma (1-\alpha )} x^{- \alpha } \\&\quad -4\text{e}^{-t} x^2 (2-x)^2 \left( 4 \frac{\Gamma (2+1)}{\Gamma (2-\alpha +1)} y^{2- \alpha } -4 \frac{\Gamma (3+1)}{\Gamma (3-\alpha +1)} y^{3- \alpha } \right. \\&\left. \quad + \frac{\Gamma (4+1)}{\Gamma (4-\alpha +1)} y^{4- \alpha } \right) + 5 \frac{\Gamma (1)}{\Gamma (1-\alpha )} y^{- \alpha }. \end{aligned}$$

It is easy to see that \(u(t, x,y) =4\text{e}^{-t} x^2(2-x)^2 y^2 (2-y)^2 +5\) is the exact solution of the equation.
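Note that, unlike the integer-order case, the Riemann–Liouville fractional derivative of a nonzero constant does not vanish; this is why the boundary value 5 contributes the terms \(5 \frac{\Gamma (1)}{\Gamma (1-\alpha )} x^{-\alpha }\) and \(5 \frac{\Gamma (1)}{\Gamma (1-\alpha )} y^{-\alpha }\) to f above. Indeed, applying the definition (5) to a constant c gives

$$\begin{aligned} \, _{0}^{R} \text{D}^{\alpha }_{x} c = \frac{c}{\Gamma (2- \alpha )} \frac{\mathrm{{d}}^2}{\mathrm{{d}} x^2} \int _{0}^{x} (x- \xi )^{1- \alpha } \, \mathrm{{d}} \xi = \frac{c}{\Gamma (3- \alpha )} \frac{\mathrm{{d}}^2}{\mathrm{{d}} x^2} \, x^{2- \alpha } = \frac{c \,(2- \alpha )(1- \alpha )}{\Gamma (3- \alpha )} x^{- \alpha } = \frac{c}{\Gamma (1- \alpha )} x^{- \alpha }. \end{aligned}$$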

In Table 3, we show the convergence orders of the numerical method (16). We see that for some \(\alpha\) the convergence orders reach \(O(h^{3- \alpha })\), while for other \(\alpha\) they fall below \(O(h^{3- \alpha })\). In most cases, however, the convergence orders of the numerical method (16) are greater than 1 for solving (30)–(32) with the non-homogeneous Dirichlet boundary conditions.

In Table 4, we use "the shifted Grünwald–Letnikov method" introduced in Meerschaert and Tadjeran [30] for solving (30)–(32). We observe that the convergence orders are very low because of the non-homogeneous boundary conditions. From Tables 3 and 4, we conclude that the numerical method (16) introduced in this paper achieves higher convergence orders than "the shifted Grünwald–Letnikov method" of Meerschaert and Tadjeran [30] for solving space fractional partial differential equations with non-homogeneous boundary conditions.

In the next example, we shall investigate the convergence orders of the numerical method (16) for solving space fractional partial differential equations where the solutions of the equations are not sufficiently smooth.

Table 3 The experimentally determined orders of convergence (EOC) in Example 2 using (16) at \(t=1\)
Table 4 The experimentally determined orders of convergence (EOC) in Example 2 using the shifted Grünwald–Letnikov method at \(t=1\)

Example 3

Consider, with \(1< \alpha <2\), \(0 \le x, y \le 1\) [6],

$$\begin{aligned}&u_{t} (t, x, y) = \, _0^R \text{D}_{x}^{\alpha } u(t, x, y) + \, _0^R \text{D}_{y}^{\alpha } u(t, x, y) + f(t,x,y), \quad 0< t<1, \end{aligned}$$
(33)
$$\begin{aligned}&u(t, 0, y) =0, \; u(t,1, y) =\text{e}^{-t} y^{\alpha _{1}}, \; u(t, x, 0) =0, \; u(t,x, 1) =\text{e}^{-t} x^{\alpha _{1}}, \end{aligned}$$
(34)
$$\begin{aligned}&u(0, x, y) = x^{\alpha _{1}} y^{\alpha _{1}}, \end{aligned}$$
(35)

where

$$\begin{aligned} f(t,x, y) =&-\text{e}^{-t} x^{\alpha _{1}}y^{\alpha _{1}} -\text{e}^{-t} \frac{\Gamma (\alpha _{1}+1)}{\Gamma (\alpha _{1}+1-\alpha )} x^{\alpha _{1}-\alpha }y^{\alpha _{1}} \\&-\text{e}^{-t} \frac{\Gamma (\alpha _{1}+1)}{\Gamma (\alpha _{1}+1-\alpha )} y^{\alpha _{1}-\alpha }x^{\alpha _{1}}. \end{aligned}$$

Here the exact solution has the form \(u(t, x, y) =\text{e}^{-t} x^{\alpha _{1}}y^{\alpha _{1}}\). We will consider two different \(\alpha _{1}\): the nonsmooth solution case with \(\alpha _{1}= \alpha\) and the smooth solution case with \(\alpha _{1}=3\).

For the case \(\alpha _{1}= \alpha\), the fractional derivative of \(x^{\alpha }\) is a constant:

$$\begin{aligned} \, _{0}^{R} \text{D}_{x}^{\alpha } ( x^{\alpha })&= \text{D}^{2} \left( \, _{0}^{R} \text{D}_{x}^{\alpha -2} \right) ( x^{\alpha }) = \text{D}^{2} \frac{1}{\Gamma (2- \alpha )} \int _{0}^{x} (x- \tau )^{1- \alpha } \tau ^{\alpha } \, \mathrm{{d}} \tau \\&= \text{D}^{2} \left( \frac{\Gamma (\alpha +1)}{\Gamma (3)} x^{2} \right) = \Gamma (\alpha +1), \end{aligned}$$

which implies that the following Lipschitz condition holds

$$\begin{aligned} \left| \, _{0}^{R} \text{D}_{x}^{\alpha } u(t, x^{*}, y) - \, _{0}^{R} \text{D}_{x}^{\alpha } u(t, x^{**}, y) \right| =0 \le C | x^{*}-x^{**}|^{\beta } \end{aligned}$$

for any \(\beta >0\).
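A quick numerical check of the derivation above (with the illustrative choice \(\alpha =1.6\)): the \((2-\alpha )\)-order fractional integral of \(\tau ^{\alpha }\) equals the quadratic \(\frac{\Gamma (\alpha +1)}{\Gamma (3)} x^{2}\), whose second derivative \(\Gamma (\alpha +1)\) does not depend on x, so the Lipschitz condition holds for every \(\beta >0\):

```python
import math
from scipy.integrate import quad

alpha = 1.6  # illustrative value in (1, 2)

def frac_integral(x):
    # (1/Gamma(2-alpha)) * integral_0^x (x-tau)^(1-alpha) tau^alpha dtau
    # (the (2-alpha)-order RL integral of tau^alpha, cf. the derivation above)
    val, _ = quad(lambda tau: (x - tau)**(1 - alpha) * tau**alpha, 0, x)
    return val / math.gamma(2 - alpha)

# analytically the integral is C * x^2 with C = Gamma(alpha+1)/Gamma(3);
# applying D^2 then gives the x-independent constant Gamma(alpha+1)
C = math.gamma(alpha + 1) / math.gamma(3)
errs = [abs(frac_integral(x) - C * x * x) for x in (0.3, 0.7)]
```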

In Table 5, we obtain the experimentally determined orders of convergence (EOC) for the different \(\alpha = 1.2, 1.4, 1.6, 1.8\). Since the solution is not sufficiently smooth, the convergence orders are less than \(O(h^{3-\alpha })\) as we expected.

For the smooth solution case with \(\alpha _{1}=3\), in Table 6, we observe that the convergence orders are almost \(3- \alpha\) as we expected.

In our final example, we consider a two-sided space fractional partial differential equation.

Table 5 The experimentally determined orders of convergence (EOC) in Example 3 for \(\alpha _{1}=\alpha\) using (16) at \(t=1\)
Table 6 The experimentally determined orders of convergence (EOC) in Example 3 for \(\alpha _{1}=3\) using (16) at \(t=1\)

Example 4

Consider, with \(1< \alpha <2\), \(0 \le x, y \le 2\) [30],

$$\begin{aligned} u_{t} (t, x, y)&= \, _0^R \text{D}_{x}^{\alpha } u(t,x,y) + \, _x^R \text{D}_{2}^{\alpha } u(t,x,y) \nonumber \\&\quad + \, _0^R \text{D}_{y}^{\alpha } u(t,x,y) + \, _y^R \text{D}_{2}^{\alpha } u(t,x,y) + f(t,x,y), \; 0< t<1, \end{aligned}$$
(36)
$$\begin{aligned} u(t, 0, y)&= u(t, 2, y) = u(t, x, 0) =u(t, x, 2)=0, \end{aligned}$$
(37)
$$\begin{aligned} u(0, x)&= 4 x^2 (2-x)^2 y^2 (2-y)^2, \end{aligned}$$
(38)

where \(u(t, x, y) =4 \text{e}^{-t} x^2 (2-x)^2y^2 (2-y)^2\) is the exact solution.

In Table 7, we observe that the convergence orders of the numerical method (16) for solving this equation are also \(O(h^{3- \alpha })\) as we expected.

Table 7 The experimentally determined orders of convergence (EOC) in Example 4 using (16) at \(t=1\)

4 Conclusions

In this paper, we construct a new and reliable finite difference method for solving space fractional partial differential equations. The error estimates are proved, and the convergence order of the numerical method depends on the smoothness of the solution of the equation. The convergence orders are established for both homogeneous and non-homogeneous Dirichlet boundary conditions. Numerical examples show that the proposed method has a much higher convergence order than the shifted Grünwald–Letnikov method of Meerschaert and Tadjeran [30] for solving space fractional partial differential equations with non-homogeneous boundary conditions.