Numerical solution of Generalized Burger–Huxley & Huxley’s equation using Deep Galerkin neural network method

https://doi.org/10.1016/j.engappai.2022.105289

Abstract

In this paper, a deep learning algorithm based on the Deep Galerkin method (DGM) is presented for the approximate solution of the generalized Burgers–Huxley equation (gBHE) and the generalized Huxley equation (gHE). In this method, a deep neural network (DNN) is used to approximate the solution without generating a mesh grid; the network is trained to satisfy the differential operator and the boundary and initial conditions. The DNN is trained on randomly selected batches of time and space points, which avoids forming a mesh. The Adam optimizer is used to optimize the parameters of the DNN. Further, the convergence of the cost function and the convergence of the neural network to the exact solution are demonstrated. This method shows very encouraging results, which have been compared with recent methods such as: a fourth-order improved numerical scheme (FDS4), the Adomian decomposition method (ADM), the modified cubic B-spline differential quadrature method (MCB-DQM), the variational iteration method (VIM), and others.

Introduction

Nonlinear partial differential equations (NPDEs) are used to model the majority of physical phenomena that arise in numerous sectors of science and engineering. The gBHE is one of the well-known NPDEs. It describes the interaction between the reaction mechanism, the convective effect, and diffusion transport. It came into existence due to the joint efforts of Bateman (1915), Whitham (2011), and Burgers (1948) for the Burgers equation, and Hodgkin and Huxley (1952) for the Huxley equation. H. Bateman first proposed the Burgers equation in 1915, and Johannes M. Burgers explored it in 1948. It is the most straightforward paradigm for comprehending the physical features of phenomena such as hydrodynamic turbulence, vorticity transportation, heat conduction, wave processes in thermoelastic media, elasticity, mathematical modeling of turbulent fluids, gas dynamics, sound and shock wave theory, and dispersion in porous media. A. Hodgkin and A. Huxley, in 1952, proposed a model to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon, and they received the 1963 Nobel Prize in Physiology or Medicine for this model. Satsuma et al. (1987) discovered applications of the gBHE in biology, combustion, chemistry, nonlinear acoustics, mathematics and engineering, and metallurgy.

Let $D \subset \mathbb{R}^d$ be a bounded set with boundary $\partial D$, and assume $D_T = D \times (0,T]$ and $\partial D_T = \partial D \times (0,T]$. The most general form of the gBHE (Ismail et al., 2004) is defined as follows:
$$\mathcal{L}[\Psi(x,t)] = \Psi_t - \Psi_{xx} + p\,\Psi^{s}\Psi_x + q\,\Psi\,(\Psi^{s}-1)(\Psi^{s}-r) = 0, \quad (x,t) \in D_T \tag{1}$$
with the initial and boundary conditions
$$\Psi(x,0) = \left[\tfrac{r}{2} + \tfrac{r}{2}\tanh(B_1 x)\right]^{1/s} = \Psi_0(x) \ (\text{say}), \quad x \in D$$
$$\Psi(0,t) = \left[\tfrac{r}{2} + \tfrac{r}{2}\tanh(-B_1 B_2 t)\right]^{1/s} = f_1(x,t) \ (\text{say}), \quad (x,t) \in \partial D_T$$
$$\Psi(1,t) = \left[\tfrac{r}{2} + \tfrac{r}{2}\tanh\!\big(B_1(1 - B_2 t)\big)\right]^{1/s} = f_2(x,t) \ (\text{say}), \quad (x,t) \in \partial D_T \tag{2}$$

The exact solution of the gBHE, i.e., Eqs. (1)–(2), is given by Eq. (3):
$$\Psi(x,t) = \left[\tfrac{r}{2} + \tfrac{r}{2}\tanh\!\big(B_1(x - B_2 t)\big)\right]^{1/s} \tag{3}$$
where
$$B_1 = \frac{-ps + s\sqrt{p^2 + 4q(1+s)}}{4(1+s)}, \qquad B_2 = \frac{rp}{1+s} - \frac{(1+s-r)\left(-p + \sqrt{p^2 + 4q(1+s)}\right)}{2(1+s)}, \tag{4}$$
where $p > 0$ represents the advection coefficient, $q \ge 0$ represents the reaction coefficient, and $r \in (0,1)$ and $s > 0$ are real coefficients, while $\Psi_{xx}$ is the diffusive term, $\Psi^{s}\Psi_x$ is the advection term, and $\Psi(\Psi^{s}-1)(\Psi^{s}-r)$ is the reaction term.
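As a concrete check of the notation above, the closed-form solution (3) and the data it induces at $t=0$ and $x=0,1$ can be evaluated numerically. The function name and the sample parameter values below are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

# Exact traveling-wave solution of the gBHE, Eq. (3). The function name and
# the default parameter values (p, q, r, s) are illustrative assumptions.
def gbhe_exact(x, t, p=1.0, q=1.0, r=0.5, s=1.0):
    disc = np.sqrt(p**2 + 4.0 * q * (1.0 + s))
    B1 = (-p * s + s * disc) / (4.0 * (1.0 + s))
    B2 = r * p / (1.0 + s) - (1.0 + s - r) * (-p + disc) / (2.0 * (1.0 + s))
    return (r / 2.0 + (r / 2.0) * np.tanh(B1 * (x - B2 * t))) ** (1.0 / s)

x = np.linspace(0.0, 1.0, 11)
u = gbhe_exact(x, 0.3)   # solution profile at t = 0.3
```

At $t=0$ this reduces to the initial condition $\Psi_0(x)$, and at $x=0,1$ it reproduces the boundary data $f_1, f_2$, which gives a quick consistency check of the formulas in Eq. (2).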

At $s=1$, $p=0$, Eq. (1) becomes the Huxley equation, which describes wall motion in liquid crystals and nerve-pulse propagation in nerve fibers (Wang, 1985):
$$\Psi_t - \Psi_{xx} + q\,\Psi(\Psi - 1)(\Psi - r) = 0 \tag{5}$$
At $s=1$, $q=0$, Eq. (1) becomes Burgers' equation, which describes the far field of wave propagation in nonlinear dissipative systems (Whitham, 2011):
$$\Psi_t - \Psi_{xx} + p\,\Psi\Psi_x = 0 \tag{6}$$
Nonlinear diffusion equations such as Eqs. (5), (6) are well known in nonlinear physics. Eq. (1) becomes the modified Burgers' equation when $q=0$, and becomes the Burgers–Huxley equation (BHE) at $s=1$ and $p \ne 0$, $q \ne 0$:
$$\Psi_t - \Psi_{xx} + p\,\Psi\Psi_x + q\,\Psi(\Psi - 1)(\Psi - r) = 0 \tag{7}$$
Finally, at $p=0$, Eq. (1) becomes the FitzHugh–Nagumo equation (Bratsos, 2010, Hodgkin and Huxley, 1952, FitzHugh, 1969).

Many researchers have made several attempts in recent years to obtain analytical and numerical solutions of the gBHE. Using a nonlinear transformation, Wang et al. (1990) found the kink-wave and solitary solutions of the gBHE. Ismail et al. (2004) and Hashim et al. (2006a, 2006b) used the Adomian decomposition method (ADM), Wazwaz (2008) used the tanh–coth method, and Bataineh et al. (2009) and Molabahrami and Khani (2009) used the homotopy analysis method, the latter to find the solitary wave solution of the BHE. Batiha et al. (2008) used the variational iteration method (VIM), Yefimova and Kudryashov (2004) used the Hopf–Cole transformation, and Gao and Zhao (2010) used He's Exp-function method to find the traveling wave solutions of the gBHE; Griffiths and Schiesser (2010) presented a traveling wave analysis for the BHE.

To obtain approximate solutions of the gBHE over a variety of domains, several types of approaches have been devised, such as a fourth-order finite difference scheme (FDS4) (Bratsos, 2011), a hybrid B-spline collocation method (Wasim et al., 2018), Chebyshev spectral collocation with domain decomposition (Javidi, 2011), high-order finite difference schemes (Sari et al., 2011), a domain decomposition algorithm based on Chebyshev polynomials (DDAC) (Javidi and Golbabai, 2009), the differential quadrature method (DQM) (Mittal and Jiwari, 2009), the optimal homotopy asymptotic method (OHAM) (Nawaz et al., 2013), the homotopy analysis method (Molabahrami and Khani, 2009), and the B-spline collocation method (Mohammadi, 2013). In higher dimensions, many of these methods, particularly grid-based methods such as FDS4 (Bratsos, 2011), Chebyshev spectral collocation with domain decomposition (Javidi, 2011), DDAC (Javidi and Golbabai, 2009), and others, suffer from instability and high computational cost. Other efficient numerical methods have also been developed in the recent literature for various types of differential equation problems (Wasim et al., 2019, Iqbal et al., 2018a, Iqbal et al., 2018b, Iqbal et al., 2020a, Iqbal et al., 2020b, Iqbal et al., 2020c).

Several machine learning-based methods have been proposed in recent years to address the issue of high dimensionality faced by mesh-based methods (Bai et al., 2021, Xu et al., 2020, Zhang et al., 2020, Zhang et al., 2022). Recently, Sirignano and Spiliopoulos (2018) presented a mesh-free deep learning algorithm called the 'Deep Galerkin Method (DGM)' to obtain approximate solutions of high-dimensional PDEs. The Galerkin technique is a widely used numerical approach that finds a reduced-form solution to a PDE as a linear combination of basis functions. DGM is similar to the Galerkin method, with a few significant differences based on machine learning approaches. In DGM, the linear combination of basis functions of the Galerkin method is replaced by a DNN. In this method, there is no need to generate a mesh, since a random sampling technique is used to generate spatial points. At randomly sampled spatial points, the stochastic gradient descent (SGD) technique is used to train the DNN so that it satisfies the differential operator and the initial and boundary conditions. Galerkin's method and machine learning come together naturally in DGM. Because of its simple and uncomplicated implementation, the DGM approach has received a lot of attention.

Keeping in view the applications and importance of the gBHE and the advantages of DGM algorithms, the goal of this study is to obtain the approximate solution of the gBHE using DGM and a different type of architecture that is comparable to the Gated Recurrent Unit (GRU) network (Cho et al., 2014), without using the Monte Carlo method. The GRU is an advanced version of the standard recurrent neural network (RNN), or it may be considered a refined version of the Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997), which seeks to tackle the vanishing gradient problem that comes with the standard RNN. The main difference between the GRU and the LSTM is that while the LSTM has three gates (input, output, and forget), the GRU has only two gates (reset and update). The GRU is simpler than the LSTM since it contains fewer gates. Unlike an LSTM, a GRU does not have an output gate or any internal memory; it thus uses fewer training parameters and less memory, and operates more quickly than the LSTM. In the proposed method, the given nonlinear PDE is converted into a machine learning problem using a cost function based on the L2-norm error function, and an approximating function given by a DNN is employed to approximate the unknown solution. The suggested method is easy to use and implement, and it provides an approximate solution for any value in the solution domain. Its efficiency and reliability are demonstrated by the successful solution of the gBHE and Huxley's equations. Convergence analysis of the cost function and convergence of the neural network to the gBHE solution are also discussed.
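For readers unfamiliar with the gate structure just described, one GRU-style cell update can be sketched as follows. Gate conventions vary across references, and the weight names, sizes, and initialization here are illustrative assumptions rather than the paper's actual DGM layer:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h, params):
    """One GRU-style update: two gates (reset, update), no output gate."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # gated combination

rng = np.random.default_rng(0)
d_in, d_h = 2, 8                                    # e.g. input (x, t), assumed hidden width
params = [rng.standard_normal(shape) * 0.1 for shape in
          [(d_in, d_h), (d_h, d_h), (d_h,),
           (d_in, d_h), (d_h, d_h), (d_h,),
           (d_in, d_h), (d_h, d_h), (d_h,)]]
h = np.zeros((1, d_h))
x = np.array([[0.5, 0.1]])                          # one sampled (x, t) point
h_new = gru_cell(x, h, params)
```

Compared with an LSTM cell, there is no separate cell state and no output gate, which is exactly where the parameter savings mentioned above come from.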

Further in this paper, we describe the methodology in Section 2 and implementation details of the algorithm in Section 2.1. In Section 3, we present the convergence analysis, and numerical results and discussion are described in Section 4. Finally, we summarize our findings in the conclusions in Section 5.

Section snippets

Methodology

In this section, we present the methodology of DGM to approximate the solution of the gBHE. Recall the form of the gBHE from Eqs. (1)–(2), i.e.,
$$\mathcal{L}[\Psi(x,t)] = 0, \quad (x,t) \in D_T$$
$$\Psi(x,0) = \Psi_0(x), \quad x \in D$$
$$\Psi(x,t) = f(x,t), \quad (x,t) \in \partial D_T$$
where $\Psi$ is a function of space ($x$) and time ($t$) defined on the region $D_T$, and $x \in D \subset \mathbb{R}^d$. Using DGM, our aim is to approximate $\Psi(x,t)$ with an approximating function $\hat{\Psi}(x,t;\theta)$ given by a DNN, where $\theta \in \mathbb{R}^k$ are the parameters of the DNN. First, we construct a cost function as follows:
$$C(\hat{\Psi}) = \left\|\mathcal{L}[\hat{\Psi}(x,t;\theta)]\right\|^2_{D_T,\rho_1} + \left\|\hat{\Psi}(x,t;\theta) - f(x,t)\right\|^2_{\partial D_T,\rho_2} + \left\|\hat{\Psi}(x,0;\theta) - \Psi_0(x)\right\|^2_{D,\rho_3}
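The construction above can be sketched end-to-end in toy form. The sketch below substitutes a small tanh MLP for the paper's GRU-like network, input-side finite differences for exact network derivatives, placeholder initial/boundary target values, and backtracking gradient descent with numerical gradients in place of Adam; it is a minimal illustration of minimizing the cost over randomly sampled points, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny MLP standing in for the DNN approximator Psi_hat(x, t; theta).
def net(xt, th):
    W1 = th[:32].reshape(2, 16)
    b1 = th[32:48]
    W2 = th[48:]
    return np.tanh(xt @ W1 + b1) @ W2

# gBHE residual L[Psi_hat] at interior points; derivatives w.r.t. the inputs
# are taken by central finite differences (a simplification of this sketch).
def residual(xt, th, p=1.0, q=1.0, r=0.5, s=1.0, h=1e-4):
    x, t = xt[:, :1], xt[:, 1:]
    u = net(xt, th)
    ut = (net(np.hstack([x, t + h]), th) - net(np.hstack([x, t - h]), th)) / (2 * h)
    up = net(np.hstack([x + h, t]), th)
    um = net(np.hstack([x - h, t]), th)
    ux = (up - um) / (2 * h)
    uxx = (up - 2 * u + um) / h**2
    return ut - uxx + p * u**s * ux + q * u * (u**s - 1) * (u**s - r)

# DGM-style cost: PDE residual on interior samples plus mismatch with the
# initial and boundary data.
def cost(th):
    return (np.mean(residual(interior, th) ** 2)
            + np.mean((net(initial, th) - u0) ** 2)
            + np.mean((net(boundary, th) - fb) ** 2))

# Randomly sampled training points -- no mesh is ever formed.
interior = rng.uniform([0, 0], [1, 1], (64, 2))
initial = np.column_stack([rng.uniform(0, 1, 16), np.zeros(16)])
boundary = np.column_stack([rng.integers(0, 2, 16).astype(float),
                            rng.uniform(0, 1, 16)])
u0 = np.full(16, 0.25)   # placeholder targets; the paper uses the Eq. (2) data
fb = np.full(16, 0.25)

def num_grad(th, eps=1e-6):
    g = np.zeros_like(th)
    for i in range(th.size):
        e = np.zeros_like(th)
        e[i] = eps
        g[i] = (cost(th + e) - cost(th - e)) / (2 * eps)
    return g

theta = rng.standard_normal(64) * 0.5
c0 = cost(theta)
c = c0
for _ in range(15):                      # backtracking gradient descent
    g = num_grad(theta)
    lr = 0.1
    while lr > 1e-10:
        cand = theta - lr * g
        c_cand = cost(cand)
        if c_cand < c:                   # accept only improving steps
            theta, c = cand, c_cand
            break
        lr *= 0.5
```

The cost falls as training proceeds; in the paper the same minimization is carried out with Adam, exact network derivatives, and fresh random batches at every step.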

Convergence analysis

The cost function $C(\hat{\Psi})$ measures how well $\hat{\Psi}(x,t;\theta)$ satisfies the differential operator and the boundary and initial conditions. The approximation capabilities of neural network architectures have recently been investigated by many authors. In particular, Hornik (1991) ascertained that standard multi-layer feed-forward networks with activation function ($\phi$) can approximate any continuous function defined on an arbitrary compact subset of $D$, whenever the activation function ($\phi$) is continuous,
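The approximation property invoked here can be illustrated with a one-hidden-layer tanh network whose hidden weights are random and whose output weights are fit by least squares; the target function and the layer sizes below are arbitrary choices for illustration:

```python
import numpy as np

# Random-feature illustration of universal approximation: a single hidden
# tanh layer fitting a continuous target on a compact set.
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * x)                  # continuous target on [0, 1]

a = rng.normal(0.0, 5.0, 200)                # random hidden slopes
b = rng.uniform(-5.0, 5.0, 200)              # random hidden shifts
Phi = np.tanh(np.outer(x, a) + b)            # hidden activations, shape (100, 200)

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights by least squares
err = np.max(np.abs(Phi @ w - y))            # small: the target is well approximated
```

Here only the output layer is optimized; in DGM, all weights are trained, but the same expressiveness result underpins the convergence argument.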

Numerical results and discussion

In the following section, we assess and test the suggested method's performance on the gBHE and gHE, and demonstrate its efficacy. According to the DGM method, we have to generate a sample of random points from the domain of the given equation, which includes interior points, boundary points, and terminal/initial points. For our work, we generate one thousand random points from the interior of the domain, and from the boundary and the terminal of the domain we generate one hundred random
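The sampling scheme described above (random interior, boundary, and initial points in place of a mesh) can be sketched as follows, assuming the unit spatial domain and a time horizon T = 1.0 chosen for illustration:

```python
import numpy as np

# Random sampling of training points for D_T = (0,1) x (0,T], following the
# counts stated in the text; the time horizon T is an assumed value.
rng = np.random.default_rng(42)
T = 1.0

interior = np.column_stack([rng.uniform(0.0, 1.0, 1000),
                            rng.uniform(0.0, T, 1000)])      # (x, t) inside the domain
boundary = np.column_stack([rng.integers(0, 2, 100).astype(float),
                            rng.uniform(0.0, T, 100)])       # x restricted to {0, 1}
initial = np.column_stack([rng.uniform(0.0, 1.0, 100),
                           np.zeros(100)])                   # the t = 0 slice
```

A fresh batch of such points can be drawn at every optimization step, which is what makes the method mesh-free.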

Conclusion

In this article, the DGM algorithm is applied to provide the approximate solution of the gBHE and gHE. An architecture similar to the GRU network architecture is used in the implementation of the algorithm, which is faster and advantageous over other DNN architectures. This algorithm does not require linearization of nonlinear PDEs or dimension reduction, and it does not create meshes, which is an important feature because meshes become infeasible in higher dimensions. This method also removes

CRediT authorship contribution statement

Harender Kumar: Conceptualization, Methodology, Software, Investigation, Writing – original draft. Neha Yadav: Supervision, Validation. Atulya K. Nagar: Formal analysis, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (50)

  • Javidi, M., et al., A new domain decomposition algorithm for generalized Burger's–Huxley equation based on Chebyshev polynomials and preconditioning, Chaos Solitons Fractals (2009)
  • Molabahrami, A., et al., The homotopy analysis method to solve the Burgers–Huxley equation, Nonlinear Anal. RWA (2009)
  • Singh, B.K., et al., A numerical scheme for the generalized Burgers–Huxley equation, J. Egypt. Math. Soc. (2016)
  • Sirignano, J., et al., DGM: A deep learning algorithm for solving partial differential equations, J. Comput. Phys. (2018)
  • Wang, X., Nerve propagation and wall in liquid crystals, Phys. Lett. A (1985)
  • Wazwaz, A.-M., Analytic study on Burgers, Fisher, Huxley equations and combined forms of these equations, Appl. Math. Comput. (2008)
  • Yefimova, O.Y., et al., Exact solutions of the Burgers–Huxley equation, J. Appl. Math. Mech. (2004)
  • Bai, Y., et al., Solving Huxley equation using an improved PINN method, Nonlinear Dyn. (2021)
  • Bataineh, A.S., et al., Analytical treatment of generalized Burgers–Huxley equation by homotopy analysis method, Bull. Malays. Math. Sci. Soc. (2009)
  • Bateman, H., Some recent researches on the motion of fluids, Mon. Weather Rev. (1915)
  • Bratsos, A.G., A fourth order improved numerical scheme for the generalized Burgers–Huxley equation, Am. J. Comput. Math. (2011)
  • Cho, K., et al., On the properties of neural machine translation: Encoder–decoder approaches
  • FitzHugh, R., Mathematical models of excitation and propagation in nerve, Biol. Eng. (1969)
  • Gaines, J.
  • Gilbarg, D., et al.