Numerical solution of Generalized Burger–Huxley & Huxley’s equation using Deep Galerkin neural network method
Introduction
Nonlinear partial differential equations (NPDEs) are used to model the majority of physical phenomena that arise in numerous sectors of science and engineering. The gBHE is one of the best-known NPDEs. It describes the interaction between the reaction mechanism, the convective effect, and diffusion transport. It came into existence through the work of Bateman (1915), Whitham (2011) and Burgers (1948) on the Burgers equation, and of Hodgkin and Huxley (1952) on the Huxley equation. H. Bateman first proposed the Burgers equation in 1915, and Johannes M. Burgers explored it in 1948. It is the most straightforward paradigm for comprehending the physical features of phenomena such as hydrodynamic turbulence, vorticity transportation, heat conduction, wave processes in thermoelastic media, elasticity, mathematical modeling of turbulent fluids, gas dynamics, sound and shock wave theory, dispersion in porous media, and so on. In 1952, A. Hodgkin and A. Huxley proposed a model to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon, for which they received the Nobel Prize in Physiology or Medicine in 1963. Satsuma et al. (1987) discovered applications of the gBHE in biology, combustion, chemistry, nonlinear acoustics, mathematics and engineering, and metallurgy.
Let $\Omega \subset \mathbb{R}$ be a bounded set, let $\partial\Omega$ denote its boundary, and assume $x \in \Omega$ and $t \in [0,T]$. The most general form of the gBHE (Ismail et al., 2004) is defined as follows: $$u_t + \alpha u^{\delta} u_x - u_{xx} = \beta u\,(1-u^{\delta})(u^{\delta}-\gamma), \quad (x,t) \in \Omega \times [0,T], \tag{1}$$ with the initial and boundary conditions $$u(x,0) = u_0(x),\ x \in \Omega; \qquad u(x,t) = g(x,t),\ (x,t) \in \partial\Omega \times [0,T]. \tag{2}$$
The exact solution of the gBHE, i.e., Eq. (1)–Eq. (2), is given by Eq. (3): $$u(x,t) = \left( \frac{\gamma}{2} + \frac{\gamma}{2}\tanh\!\big(a_1 (x - a_2 t)\big) \right)^{1/\delta}, \tag{3}$$ where $$a_1 = \frac{\gamma\delta\left(-\alpha + \sqrt{\alpha^2 + 4\beta(1+\delta)}\right)}{4(1+\delta)}, \qquad a_2 = \alpha + \frac{(1+\delta-\gamma)\left(-\alpha + \sqrt{\alpha^2 + 4\beta(1+\delta)}\right)}{2(1+\delta)}.$$ Here, $\alpha \geq 0$ represents the advection coefficient, $\beta \geq 0$ represents the reaction coefficient, $\gamma \in (0,1)$ and $\delta > 0$ are real coefficients, while $u_{xx}$ is the diffusive term, $\alpha u^{\delta} u_x$ is the advection term, and $\beta u (1-u^{\delta})(u^{\delta}-\gamma)$ is the reaction term.
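The exact solitary-wave solution can be checked numerically. The sketch below (our own illustration, with arbitrary parameter values; variable names are not from the paper) builds the tanh-profile solution $u = \big(\gamma/2 + (\gamma/2)\tanh(a_1(x - a_2 t))\big)^{1/\delta}$, with $a_1$, $a_2$ obtained from the standard traveling-wave ansatz, and verifies that the gBHE residual vanishes up to finite-difference accuracy.

```python
import numpy as np

# gBHE: u_t + alpha*u^delta*u_x - u_xx = beta*u*(1 - u^delta)*(u^delta - gamma)
alpha, beta, gamma, delta = 1.0, 1.0, 0.5, 1.0   # illustrative parameter values

root = np.sqrt(alpha**2 + 4*beta*(1 + delta))
K = (-alpha + root) / (2*(1 + delta))
a1 = gamma * delta * K / 2            # steepness of the tanh profile
a2 = alpha + K * (1 + delta - gamma)  # wave speed

def u(x, t):
    """Traveling-wave solution of the gBHE."""
    return (gamma/2 + (gamma/2) * np.tanh(a1*(x - a2*t)))**(1/delta)

# PDE residual at a sample point, derivatives by central finite differences
h, x0, t0 = 1e-4, 0.3, 0.2
u_t  = (u(x0, t0 + h) - u(x0, t0 - h)) / (2*h)
u_x  = (u(x0 + h, t0) - u(x0 - h, t0)) / (2*h)
u_xx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2
v = u(x0, t0)
residual = u_t + alpha * v**delta * u_x - u_xx \
           - beta * v * (1 - v**delta) * (v**delta - gamma)
print(abs(residual))  # small: limited only by finite-difference error
```

Running this for other admissible parameter choices (e.g., varying $\gamma \in (0,1)$) gives residuals of the same, negligible size.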
At $\alpha = 0$, Eq. (1) becomes the generalized Huxley equation, $$u_t - u_{xx} = \beta u\,(1-u^{\delta})(u^{\delta}-\gamma), \tag{5}$$ which describes wall motion in liquid crystals and nerve pulse propagation in nerve fibers (Wang, 1985). At $\beta = 0$ and $\delta = 1$, Eq. (1) becomes the Burgers equation, $$u_t + \alpha u u_x - u_{xx} = 0, \tag{6}$$ which describes the far field of wave propagation in nonlinear dissipative systems (Whitham, 2011). Nonlinear diffusion equations such as Eqs. (5), (6) are well known in nonlinear physics. Eq. (1) becomes the modified Burgers equation when $\beta = 0$ and $\delta = 2$, and becomes the Burgers–Huxley equation (BHE) at $\delta = 1$ as $$u_t + \alpha u u_x - u_{xx} = \beta u\,(1-u)(u-\gamma),$$ and finally, at $\alpha = 0$ and $\delta = 1$, Eq. (1) becomes the FitzHugh–Nagumo equation (Bratsos, 2010, Hodgkin and Huxley, 1952, FitzHugh, 1969).
Many researchers have made several attempts in recent years to obtain analytical and numerical solutions of the gBHE. Using a nonlinear transformation, Wang et al. (1990) found the kink wave and solitary solutions of the gBHE. Ismail et al. (2004) and Hashim et al. (2006a, 2006b) used the Adomian decomposition method (ADM), Wazwaz (2008) used the tanh–coth method, Bataineh et al. (2009) used the homotopy analysis method (HAM), and Molabahrami and Khani (2009) used HAM to find the solitary wave solution of the BHE. Batiha et al. (2008) used the variational iteration method (VIM), Yefimova and Kudryashov (2004) used the Hopf–Cole transformation, and Gao and Zhao (2010) used He's Exp-function method to find traveling wave solutions of the gBHE. Griffiths and Schiesser (2010) presented a traveling wave analysis for the BHE.
For obtaining approximate solutions of the gBHE over a variety of domains, several types of approaches have been devised, such as a fourth-order finite difference scheme (FDS4) (Bratsos, 2011), the hybrid B-spline collocation method (Wasim et al., 2018), Chebyshev spectral collocation with domain decomposition (Javidi, 2011), high-order finite difference schemes (Sari et al., 2011), a domain decomposition algorithm based on Chebyshev polynomials (DDAC) (Javidi and Golbabai, 2009), the differential quadrature method (DQM) (Mittal and Jiwari, 2009), the optimal homotopy asymptotic method (OHAM) (Nawaz et al., 2013), the homotopy analysis method (Molabahrami and Khani, 2009), and the B-spline collocation method (Mohammadi, 2013). In higher dimensions, many of these methods, particularly grid-based methods such as FDS4 (Bratsos, 2011), Chebyshev spectral collocation with domain decomposition (Javidi, 2011), DDAC (Javidi and Golbabai, 2009), and others, are plagued by concerns of instability and computing expense. Other efficient numerical methods have also been developed in the recent literature for various types of differential equation problems (Wasim et al., 2019, Iqbal et al., 2018a, Iqbal et al., 2018b, Iqbal et al., 2020a, Iqbal et al., 2020b, Iqbal et al., 2020c).
Several machine learning-based methods have been proposed in recent years to address the issue of high dimensionality faced by mesh-based methods (Bai et al., 2021, Xu et al., 2020, Zhang et al., 2020, Zhang et al., 2022). Recently, Sirignano and Spiliopoulos (2018) presented a mesh-free deep learning algorithm called the 'Deep Galerkin Method (DGM)' to obtain approximate solutions of high-dimensional PDEs. The Galerkin technique is a widely used numerical approach that finds a reduced-form solution to a PDE as a linear combination of basis functions. DGM is similar to the Galerkin method, with a few significant differences based on machine learning approaches: in DGM, the linear combination of basis functions is replaced by a DNN. In this method, there is no need to generate a mesh, since a random sampling technique is used for generating spatial points. At randomly sampled spatial points, the stochastic gradient descent (SGD) technique is used to train the DNN so that it satisfies the differential operator and the initial and boundary conditions. Galerkin's method and machine learning come together naturally in DGM. Because of its simple and uncomplicated implementation, the DGM approach has attracted a lot of attention.
Keeping in view the applications and importance of the gBHE and the advantages of DGM algorithms, the goal of this study is to obtain the approximate solution of the gBHE using DGM and a different type of architecture that is comparable to the Gated Recurrent Unit (GRU) network (Cho et al., 2014), without using the Monte Carlo method. The GRU is an advanced version of the standard recurrent neural network (RNN), or it may be considered a refined version of the Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997); it seeks to tackle the vanishing gradient problem that comes with standard RNNs. The main difference between the GRU and the LSTM is that while the LSTM has three gates (input, output, and forget), the GRU has only two gates (reset and update). The GRU is simpler than the LSTM since it contains fewer gates. Unlike an LSTM, a GRU does not have an output gate or any internal memory; thus it uses fewer training parameters, requires less memory, and operates more quickly than an LSTM. In the proposed method, the given nonlinear PDE is converted into a machine learning problem using a cost function based on the $L^2$-norm error function, and an approximating function given by a DNN is employed to approximate the unknown solution. The suggested method is easy to use and implement, and it provides an approximate solution for any value in the solution domain. The suggested method's efficiency and reliability are demonstrated by the successful solution of the gBHE and Huxley's equations. Convergence analysis of the cost function and convergence of the neural network to the gBHE solution are also discussed.
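The parameter savings of a GRU over an LSTM can be made concrete by counting gate weights. The sketch below is our own illustration (layer sizes are arbitrary, not from the paper) and uses the standard gate algebra: an LSTM layer has four weight blocks (input, forget, cell candidate, output gates), while a GRU has three (reset, update, candidate), each block holding input weights, recurrent weights, and a bias.

```python
def lstm_params(d, h):
    """4 gates, each with input weights (h x d), recurrent weights (h x h), bias (h)."""
    return 4 * (h * d + h * h + h)

def gru_params(d, h):
    """3 transforms (reset, update, candidate) with the same block shapes."""
    return 3 * (h * d + h * h + h)

d, h = 64, 128  # input size, hidden size (arbitrary)
print(lstm_params(d, h), gru_params(d, h))  # 98816 74112
```

For any layer sizes, the GRU carries exactly 3/4 of the parameters of the corresponding LSTM, which is the source of its smaller memory footprint and faster training mentioned above.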
The remainder of this paper is organized as follows: Section 2 describes the methodology, and Section 2.1 gives implementation details for the algorithm. Section 3 presents the convergence analysis, and Section 4 the numerical results and discussion. Finally, we summarize our findings in the Conclusion in Section 5.
Section snippets
Methodology
In this section, we present the methodology of DGM to approximate the solution of the gBHE. Recall the form of the gBHE from Eqs. (1), (2), where $u(x,t)$ is a function of space and time defined on the region $\Omega \times [0,T]$, with $x \in \Omega$ and $t \in [0,T]$. Using DGM, our aim is to approximate $u(x,t)$ with an approximating function $f(x,t;\theta)$ given by a DNN, where $\theta$ are the parameters of the DNN. Firstly, we construct a cost function as follows:
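The snippet ends before the cost function itself; for reference, the DGM cost has the standard three-part least-squares form of Sirignano and Spiliopoulos (2018), written here for Eq. (1) (our transcription; the measures $\nu_i$ denote the sampling distributions on each region):

```latex
J(f) = \underbrace{\left\| f_t + \alpha f^{\delta} f_x - f_{xx}
        - \beta f\,(1-f^{\delta})(f^{\delta}-\gamma) \right\|^2_{\Omega\times[0,T],\,\nu_1}}_{\text{PDE residual}}
     + \underbrace{\left\| f - g \right\|^2_{\partial\Omega\times[0,T],\,\nu_2}}_{\text{boundary condition}}
     + \underbrace{\left\| f(x,0;\theta) - u_0(x) \right\|^2_{\Omega,\,\nu_3}}_{\text{initial condition}}
```

Driving $J(f)$ to zero at randomly sampled points forces $f(x,t;\theta)$ to satisfy the differential operator, the boundary data $g$, and the initial data $u_0$ simultaneously.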
Convergence analysis
The cost function measures how well $f(x,t;\theta)$ satisfies the differential operator and the boundary and initial conditions. The approximation capabilities of neural network architectures have recently been investigated by many authors. In particular, Hornik (1991) ascertained that standard multi-layer feed-forward networks with an activation function can approximate any continuous function defined on an arbitrary compact subset of $\mathbb{R}^n$, whenever the activation function is continuous,
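Hornik's result can be illustrated with a small experiment (our own sketch, not from the paper): fitting only the linear output layer of a single-hidden-layer tanh network, with randomly drawn hidden weights, already yields a close approximation to a smooth target on a compact interval.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)          # compact subset [0, 1]
target = np.sin(2 * np.pi * x)          # continuous function to approximate

# Single hidden layer of 200 tanh units with fixed random weights/biases;
# only the output layer is fitted, by least squares.
W = rng.uniform(-10, 10, size=200)
b = rng.uniform(-10, 10, size=200)
H = np.tanh(np.outer(x, W) + b)         # (500, 200) hidden activations
coef, *_ = np.linalg.lstsq(H, target, rcond=None)

err = np.max(np.abs(H @ coef - target))
print(err)  # small: the network closely matches the target on [0, 1]
```

Training all layers with SGD, as DGM does, only enlarges the function class beyond this fixed-random-feature setting.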
Numerical results and discussion
In the following section, we assess and test the suggested method's performance on the gBHE and gHE, as well as demonstrate its efficacy. According to the DGM method, we have to generate a sample of random points from the domain of the given equation, which includes interior points, boundary points, and terminal/initial points. For our work, we generate one thousand random points from the interior of the domain, and from the boundary and the terminal of the domain we generate one hundred random
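The sampling step described above can be sketched as follows (a minimal illustration assuming the domain $\Omega \times [0,T] = [0,1] \times [0,1]$; the counts mirror those in the text: 1000 interior, 100 boundary, and 100 initial points):

```python
import numpy as np

rng = np.random.default_rng(0)
x_lo, x_hi, T = 0.0, 1.0, 1.0   # assumed spatial interval and final time

n_int, n_bnd, n_ini = 1000, 100, 100
interior = np.column_stack([rng.uniform(x_lo, x_hi, n_int),   # x inside the domain
                            rng.uniform(0.0, T, n_int)])      # t in (0, T)
boundary = np.column_stack([rng.choice([x_lo, x_hi], n_bnd),  # x on either wall
                            rng.uniform(0.0, T, n_bnd)])
initial  = np.column_stack([rng.uniform(x_lo, x_hi, n_ini),
                            np.zeros(n_ini)])                 # t = 0 slice
print(interior.shape, boundary.shape, initial.shape)
```

At each training iteration a fresh batch of such points can be drawn, so the network never sees a fixed mesh.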
Conclusion
In this article, the DGM algorithm is applied to provide the approximate solution of the gBHE and gHE. An architecture similar to the GRU network architecture is used in the implementation of the algorithm, which is faster than and advantageous over other DNN architectures. This algorithm does not require linearization of nonlinear PDEs or dimension reduction, and it does not create meshes, which is an important feature because meshes become infeasible in higher dimensions. This method also removes
CRediT authorship contribution statement
Harender Kumar: Conceptualization, Methodology, Software, Investigation, Writing – original draft. Neha Yadav: Supervision, Validation. Atulya K. Nagar: Formal analysis, Writing – review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (50)
- Batiha et al., Application of variational iteration method to the generalized Burgers–Huxley equation, Chaos Solitons Fractals (2008)
- Bratsos, A fourth-order numerical scheme for solving the modified Burgers equation, Comput. Math. Appl. (2010)
- Burgers, A mathematical model illustrating the theory of turbulence, Adv. Appl. Mech. (1948)
- Gao and Zhao, New exact solutions to the generalized Burgers–Huxley equation, Appl. Math. Comput. (2010)
- Hashim et al., Solving the generalized Burgers–Huxley equation using the Adomian decomposition method, Math. Comput. Modelling (2006)
- Hashim et al., A note on the Adomian decomposition method for the generalized Huxley equation, Appl. Math. Comput. (2006)
- Hornik, Approximation capabilities of multilayer feedforward networks, Neural Netw. (1991)
- Iqbal et al., New cubic B-spline approximation for solving third order Emden–Flower type equations, Appl. Math. Comput. (2018)
- Ismail et al., Adomian decomposition method for Burger's–Huxley and Burger's–Fisher equations, Appl. Math. Comput. (2004)
- Javidi, A modified Chebyshev pseudospectral DD algorithm for the GBH equation, Comput. Math. Appl. (2011)
- Javidi and Golbabai, A new domain decomposition algorithm for generalized Burger's–Huxley equation based on Chebyshev polynomials and preconditioning, Chaos Solitons Fractals (2009)
- Molabahrami and Khani, The homotopy analysis method to solve the Burgers–Huxley equation, Nonlinear Anal. RWA (2009)
- A numerical scheme for the generalized Burgers–Huxley equation, J. Egypt. Math. Soc.
- Sirignano and Spiliopoulos, DGM: A deep learning algorithm for solving partial differential equations, J. Comput. Phys. (2018)
- Wang, Nerve propagation and wall in liquid crystals, Phys. Lett. A (1985)
- Wazwaz, Analytic study on Burgers, Fisher, Huxley equations and combined forms of these equations, Appl. Math. Comput. (2008)
- Yefimova and Kudryashov, Exact solutions of the Burgers–Huxley equation, J. Appl. Math. Mech. (2004)
- Solving Huxley equation using an improved PINN method, Nonlinear Dyn.
- Bataineh et al., Analytical treatment of generalized Burgers–Huxley equation by homotopy analysis method, Bull. Malays. Math. Sci. Soc. (2009)
- Bateman, Some recent researches on the motion of fluids, Mon. Weather Rev. (1915)
- Bratsos, A fourth order improved numerical scheme for the generalized Burgers–Huxley equation, Am. J. Comput. Math. (2011)
- Cho et al., On the properties of neural machine translation: Encoder–decoder approaches (2014)
- FitzHugh, Mathematical models of excitation and propagation in nerve, Biol. Eng. (1969)