Abstract

A stochastic technique has been developed for the solution of the fractional order system represented by the Bagley-Torvik equation. The mathematical model of the equation was developed with the help of feed-forward artificial neural networks. The training of the networks was performed with evolutionary computational intelligence based on a genetic algorithm hybridized with the pattern search technique. The designed scheme was successfully applied to different forms of the equation. The results are compared with approximate analytic solutions, standard numerical solvers, and exact solutions.

1. Introduction

The Bagley-Torvik equation was originally formulated in studies on the behavior of real materials using fractional calculus [1, 2]. Since then, it has gained importance in many engineering and applied science applications. In particular, the equation with a 1/2-order or 3/2-order derivative can model frequency-dependent damping materials quite satisfactorily. It can also describe the motion of real physical systems, for instance, the motion of a rigid plate immersed in a Newtonian fluid and that of a gas in a fluid, respectively [3, 4]. Fractional dynamic systems have found application in various problems such as viscoelasticity, heat conduction, electrode-electrolyte polarization, electromagnetic waves, diffusion waves, control theory, and signal processing [1-10].

The generic form of the Bagley-Torvik equation can be written as

A y''(t) + B D^{3/2} y(t) + C y(t) = f(t),  0 ≤ t ≤ T,  (1.1)

with initial conditions given as y(0) = c₀, y'(0) = c₁, whereas the boundary condition at t = T, for boundary value problems, is written as y(T) = c₂, where f(t) is the forcing term of the equation, y(t) is the solution of the equation, A, B, and C are constant coefficients, T is the constant representing the span of inputs within the closed interval [0, T], and c₀, c₁, c₂ are constants.

The general response expression (1.1) contains parameters that can be varied to obtain various responses. In the case of A = m, the mass of the thin rigid plate, C = k, the stiffness of the spring, and B = 2S√(μρ), where S is the area of the plate immersed in the Newtonian fluid, μ is the viscosity, and ρ is the fluid density, (1.1) represents the motion of a large thin plate in a Newtonian fluid [3]. Similarly, the linearly damped fractional oscillator, whose damping term has a fractional derivative of order 1/2, can also be represented by the Bagley-Torvik equation [3, 11].

The problem of developing numerical solvers for the Bagley-Torvik fractional differential equation has attracted much attention and has been studied by many authors. In this regard, approximate analytical solutions of the equation have been derived using the Adomian decomposition method [12, 13], He's variational iteration method [14], and the Taylor collocation method [15]. Diethelm transformed the equation into a system of first-order coupled fractional differential equations and solved the problem with an Adams predictor-corrector approach [16]. Podlubny used the successive approximation method to solve the equation and recently applied the matrix approach to discretization of fractional derivatives to the same problem [3, 17]. However, stochastic numerical solvers have not previously been applied to this equation.

In this paper, investigation and analysis are carried out for the modeling of the equation using feed-forward artificial neural networks (ANNs). A linear combination of these networks defines an unsupervised error, which is minimized with respect to the appropriate unknown parameters, that is, the weights. The training of the weights is conducted with stochastic optimization based on a genetic algorithm hybridized with the pattern search technique. These techniques are known to be reliable, successful, and efficient at avoiding entrapment in local minima and divergence to unstable situations, while maintaining diversity throughout [18, 19].

2. Basic Definitions

In this section, some definitions and relations are given, which will be used in the succeeding sections. Definitions of the fractional integral and derivative have been provided in the fractional calculus literature in a variety of ways, including the Riemann-Liouville, Caputo, Erdélyi-Kober, Hadamard, Grünwald-Letnikov, and Riesz types. The equivalence of these definitions for certain classes of functions has also been established [20-22]. However, in this paper, the Riemann-Liouville definition of the fractional derivative is used, with the lower terminal at 0.

Definition 2.1 (Riemann-Liouville fractional order derivative). The Riemann-Liouville derivative of f(t) of fractional order ν is normally provided in the literature as [3, 22]

D^ν f(t) = (1/Γ(n − ν)) (d^n/dt^n) ∫₀^t f(τ) (t − τ)^{n−ν−1} dτ,  n − 1 ≤ ν < n,  (2.1)

where n is a positive integer, f(t) is a continuous function, and Γ is the gamma function defined by

Γ(z) = ∫₀^∞ e^{−t} t^{z−1} dt.  (2.2)
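As an illustrative aside, the Riemann-Liouville derivative can be checked numerically through the equivalent Grünwald-Letnikov sum, which coincides with Definition 2.1 for the smooth functions used here. The following Python sketch is purely for illustration and is independent of the MATLAB implementation used in this paper:

```python
import math

def gl_fracderiv(f, nu, t, h=1e-4):
    """Grunwald-Letnikov approximation of the order-nu fractional
    derivative of f at t, with lower terminal 0.  For smooth f this
    agrees with the Riemann-Liouville definition."""
    n = int(t / h)
    total, coeff = 0.0, 1.0          # coeff = (-1)^k * binom(nu, k)
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - nu) / (k + 1)  # recurrence for the next weight
    return total / h**nu

# Known closed form: D^{1/2} t = 2*sqrt(t/pi), which follows from
# D^nu t^k = Gamma(k+1)/Gamma(k+1-nu) * t^(k-nu).
t = 1.0
approx = gl_fracderiv(lambda x: x, 0.5, t)
exact = 2.0 * math.sqrt(t / math.pi)
print(approx, exact)  # the two values agree to a few decimal places
```

The recurrence for the binomial weights avoids computing factorials explicitly; the approximation error is of order h for the step size used.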

Definition 2.2 (Mittag-Leffler function). The Mittag-Leffler function (MLF) is one of the main functions with widespread use in the field of fractional calculus. In particular, its importance is realized in providing analytic solutions of differential equations of fractional order.
The classical MLF in two parameters α and β is defined as [3, 23]

E_{α,β}(z) = Σ_{k=0}^{∞} z^k / Γ(αk + β),  α > 0, β > 0.  (2.3)

It reduces to the standard one-parameter MLF E_α(z) by taking β = 1.
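For illustration, the two-parameter series can be evaluated by direct truncation, which is adequate for the moderate arguments arising later; production codes, such as Podlubny's MATLAB routine used in this paper, employ more careful algorithms. A minimal Python sketch:

```python
import math

def mittag_leffler(alpha, beta, z, terms=50):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z) by
    truncated power series: sum_k z^k / Gamma(alpha*k + beta).
    terms is kept small enough that math.gamma does not overflow."""
    total = 0.0
    for k in range(terms):
        total += z**k / math.gamma(alpha * k + beta)
    return total

# Sanity checks against classical special cases:
print(mittag_leffler(1.0, 1.0, 1.0))  # E_{1,1}(z) = e^z, so ~2.71828
print(mittag_leffler(2.0, 1.0, 1.0))  # E_{2,1}(z) = cosh(sqrt(z)), so ~1.54308
```

The special cases E_{1,1}(z) = e^z and E_{2,1}(z) = cosh(√z) serve as convenient sanity checks on any MLF implementation.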

3. Mathematical Modeling

In this section, the mathematical modeling of the Bagley-Torvik equation with feed-forward artificial neural networks is presented.

The solution y(t) of the fractional differential equation, along with its arbitrary-order derivatives, can be approximated by the following continuous mapping, as in neural network methodology [24-26]:

ŷ(t) = Σ_{i=1}^{m} α_i g(w_i t + b_i),  (3.1)

where α_i, w_i, and b_i are bounded real-valued adaptive parameters, m is the number of neurons, and g is the activation function, taken here as the exponential function instead of the commonly used log-sigmoid, due to the nonavailability of a closed-form fractional derivative of the latter. With g(t) = e^t, and using D^ν e^{wt} = t^{−ν} E_{1,1−ν}(wt), the fractional differential equation neural networks (FDE-NNs) for expression (1.1) can be formulated as

ŷ(t) = Σ_{i=1}^{m} α_i e^{w_i t + b_i},  (3.2)
D^{3/2} ŷ(t) = Σ_{i=1}^{m} α_i e^{b_i} t^{−3/2} E_{1,−1/2}(w_i t),  (3.3)
ŷ''(t) = Σ_{i=1}^{m} α_i w_i² e^{w_i t + b_i}.  (3.4)

The mathematical model for (1.1) can be a linear combination of the networks represented above. The FDE-NN architecture formulated for the Bagley-Torvik equation can be seen in Figure 1. It is clear that the solution can be approximated by ŷ(t), subject to finding appropriate unknown weights.
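The exponential activation is what makes the model tractable: with the lower terminal at 0, the Riemann-Liouville derivative of an exponential is D^ν e^{wt} = t^{−ν} E_{1,1−ν}(wt), so each network term has a closed form in the Mittag-Leffler function. The following Python sketch of the trial solution and its fractional derivative is an independent illustrative reimplementation, not the MATLAB code used in this paper:

```python
import math

def mlf(alpha, beta, z, terms=80):
    """Mittag-Leffler series; 1/Gamma(x) is taken as 0 at the poles
    x = 0, -1, -2, ... so that nonpositive beta is handled."""
    s = 0.0
    for k in range(terms):
        x = alpha * k + beta
        if x <= 0 and float(x).is_integer():
            continue              # 1/Gamma(nonpositive integer) = 0
        s += z**k / math.gamma(x)
    return s

def net(t, a, w, b):
    """FDE-NN trial solution y_hat(t) = sum_i a_i * exp(w_i*t + b_i)."""
    return sum(ai * math.exp(wi * t + bi) for ai, wi, bi in zip(a, w, b))

def net_fracderiv(nu, t, a, w, b):
    """Order-nu Riemann-Liouville derivative (lower terminal 0) of the
    trial solution, using D^nu e^{w t} = t^(-nu) * E_{1,1-nu}(w t)."""
    return sum(ai * math.exp(bi) * t**(-nu) * mlf(1.0, 1.0 - nu, wi * t)
               for ai, wi, bi in zip(a, w, b))

# Sanity check: nu = 1 must reproduce the ordinary first derivative,
# since E_{1,0}(z) = z * e^z.  The weights below are arbitrary examples.
a, w, b = [0.7, -0.3], [0.5, 1.2], [0.1, -0.4]
t = 0.8
d1 = net_fracderiv(1.0, t, a, w, b)
exact = sum(ai * wi * math.exp(wi * t + bi) for ai, wi, bi in zip(a, w, b))
print(d1, exact)  # the two values coincide
```

The same `net_fracderiv` call with nu = 1.5 yields the 3/2-order derivative needed in the Bagley-Torvik residual.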

4. Evolutionary Computational Intelligence

Evolutionary computational intelligence uses natural evolution as an optimization mechanism for solving various problems. The main objective of such a scheme is to find a good solution in a large search space of candidate solutions. In this section, the methodology for learning the unknown weights of the networks is given. Our intent is to use evolutionary computation based on the genetic algorithm (GA) hybridized with the pattern search (PS) technique to solve this problem. The main advantages of the GA are its robustness to the choice of control parameters and its ability to avoid getting stuck in local minima and to avoid divergence, while remaining efficient compared to other mathematical algorithms and heuristic optimization techniques [18, 19].

The pattern search algorithm is a derivative-free optimization method belonging to the class of direct search methods [27, 28]. It finds optimum values by polling points on a scaled and translated integer lattice. The pattern search algorithm has a strong ability to locate local optima [29, 30].

After a detailed study of the literature on evolutionary computation techniques [31, 32], the GA hybridized with the PS algorithm is adopted in view of its strengths and applicability. The general flowchart of the evolutionary algorithm is given in Figure 2.

The randomly generated initial population consists of a finite set of chromosomes or individuals, and each chromosome consists of as many genes as there are unknown parameters, that is, weights in the neural networks representing the equation. The fitness of an individual is calculated from an unsupervised error defined by the linear combination of the differential equation neural networks. The fitness function to be minimized is defined as the mean sum of squared errors

e_j = e1_j + e2_j,  (4.1)

where j is the generation index and e1_j is given as

e1_j = (1/s) Σ_{k=1}^{s} (A ŷ''(t_k) + B D^{3/2} ŷ(t_k) + C ŷ(t_k) − f(t_k))²,  (4.2)

where s is the total number of steps and each step t_k is taken randomly in [0, T]. The greater the value of s, the higher the accuracy, but at the cost of an increase in the computational complexity of the algorithm.

Similarly, e2_j, linked with the initial and boundary conditions, is finally written as

e2_j = (1/3) [(ŷ(0) − c₀)² + (ŷ'(0) − c₁)² + (ŷ(T) − c₂)²].  (4.3)

It is quite evident that, for the weights for which the fitness function e_j approaches zero, the solution of the equation is well approximated by ŷ(t) given in (3.2).
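To make the construction of the unsupervised fitness in (4.1) to (4.3) concrete, the sketch below assembles such a fitness for a hypothetical integer-order analog, y'' + y' + y = 1 + t with y(0) = y'(0) = 1, so that all derivatives stay elementary; the fractional case differs only in replacing the damping term by its Mittag-Leffler expression. This toy is illustrative, not the paper's implementation:

```python
import math, random

def fitness(params, s=10, T=1.0, seed=0):
    """Unsupervised fitness in the spirit of (4.1)-(4.3), sketched for
    the hypothetical integer-order analog
        y'' + y' + y = 1 + t,  y(0) = 1, y'(0) = 1,
    with the exponential trial network y_hat = sum_i a_i*exp(w_i*t + b_i).
    params packs [a_1..a_m, w_1..w_m, b_1..b_m]."""
    rng = random.Random(seed)
    m = len(params) // 3
    a, w, b = params[:m], params[m:2*m], params[2*m:]

    def yhat(t):   # trial solution
        return sum(ai * math.exp(wi * t + bi) for ai, wi, bi in zip(a, w, b))

    def d1(t):     # its first derivative
        return sum(ai * wi * math.exp(wi * t + bi) for ai, wi, bi in zip(a, w, b))

    def d2(t):     # its second derivative
        return sum(ai * wi * wi * math.exp(wi * t + bi) for ai, wi, bi in zip(a, w, b))

    pts = [rng.uniform(0.0, T) for _ in range(s)]
    e1 = sum((d2(t) + d1(t) + yhat(t) - (1.0 + t))**2 for t in pts) / s  # residual MSE, cf. (4.2)
    e2 = ((yhat(0.0) - 1.0)**2 + (d1(0.0) - 1.0)**2) / 2.0               # condition error, cf. (4.3)
    return e1 + e2

print(fitness([0.5, -0.2, 0.3, 0.8, 0.1, -0.1]))  # a nonnegative scalar
```

Any optimizer that drives this value toward zero simultaneously satisfies the differential equation at the sample points and the initial conditions.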

Evolutionary algorithm is given in the following steps.

Step 1 (population initialization). Randomly generate bounded real values to form the initial population with the chosen number of chromosomes or individuals. Create subpopulations, each with an equal number of individuals. Each individual consists of as many genes as there are unknown parameters in the neural network. Good coverage of the search space is subject to a sufficient spread of the initial population.

Step 2 (initialization). The following parameter values are set before execution: the number of variables, equal to the number of elements in an individual; the number of generations and the fitness limit; an elite count of 2 and a crossover fraction of 0.80 for reproduction; migration in both forward and backward directions; the generation counter; and so forth.

Step 3 (fitness evaluation). Calculate the fitness of each individual using expressions (4.1) to (4.3).

Step 4 (ranking). Rank each individual of the population on the basis of minimum fitness value.

Step 5 (termination criteria). Terminate the algorithm if either the predefined fitness value, that is, MSE ≤ 10^-4, is achieved or the total number of generations is complete. If the termination criterion is fulfilled, go to Step 8; else continue.

Step 6 (reproduction). Create the next generation using crossover (call for the scattered function), mutation (call for the adaptive feasible function), selection (call for the stochastic uniform function), and elitism. Repeat the procedure from Step 3 to Step 6 until the total number of generations is complete.

Step 7. Store the global best individual of this run. Repeat Steps 2 to 6 to obtain a sufficient number of global best individuals for better statistical analysis.

Step 8 (refinement). The pattern search algorithm is used for further fine-tuning, taking the best individual of the GA as its starting point. The MATLAB Genetic Algorithm and Direct Search Toolbox is used for the pattern search algorithm. Store the refined global best individuals for later statistical analysis.
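Steps 1 to 8 can be compressed into a toy Python sketch: a real-coded GA, here with tournament selection, arithmetic crossover, Gaussian mutation, and elitism as illustrative stand-ins for the MATLAB toolbox operators named above, whose best individual then seeds a compass-type pattern search. All operator choices in this sketch are assumptions for illustration, not the toolbox's exact functions:

```python
import random

def ga_ps_minimize(f, dim, bounds=(-10.0, 10.0), pop=40, gens=60, seed=0):
    """Toy GA-PS hybrid: a real-coded GA followed by a compass
    pattern search that refines the best GA individual."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        nxt = P[:2]                                # elitism: keep 2 best
        while len(nxt) < pop:
            p1 = min(rng.sample(P, 3), key=f)      # tournament selection
            p2 = min(rng.sample(P, 3), key=f)
            lam = rng.random()                     # arithmetic crossover
            child = [lam * x + (1 - lam) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.1:                 # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0.0, 0.5)
            nxt.append(child)
        P = nxt
    best = min(P, key=f)
    # compass pattern search: poll +/- step along each axis,
    # expand the mesh on success, contract on failure
    step = 1.0
    while step > 1e-8:
        improved = False
        for i in range(dim):
            for d in (step, -step):
                cand = best[:]
                cand[i] += d
                if f(cand) < f(best):
                    best, improved = cand, True
        step = step * 2 if improved else step * 0.5
    return best

sphere = lambda x: sum(v * v for v in x)
x = ga_ps_minimize(sphere, 3)
print(sphere(x))  # close to 0
```

On a smooth test function the GA supplies a rough global estimate and the pattern search drives it to high precision, mirroring the division of labor between Steps 1-7 and Step 8.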

5. Simulation and Results

In this section, simulation results are presented for three different fractional order systems represented by the Bagley-Torvik equation. In order to demonstrate the applicability of the designed scheme to such systems, we have considered in Example I an equation for which the exact solution is available. Moreover, statistical analysis of the results is also carried out to verify and validate the reliability of the algorithm. In Example II, we have tested the proposed methodology on a Bagley-Torvik equation for which the PMA technique fails to determine the results due to nonhomogeneous initial conditions. In Example III, the strengths and weaknesses of our scheme are analyzed by taking an equation for which the exact solution is not known; however, its numerical solution is available.

5.1. Example I

Consider the Bagley-Torvik equation [14, 33]

y''(t) + D^{3/2} y(t) + y(t) = 1 + t,  (5.1)

subject to the initial condition y(0) = 1 and the boundary condition y(1) = 2. The exact solution of the equation is given as y(t) = 1 + t.

Mathematical modeling of the above equation is done with the fractional differential equation neural networks (FDE-NNs) represented by (3.2) to (3.4), taking 10 neurons and resulting in 30 unknown parameters or weights. These weights are restricted to real numbers between −10 and 10. The initial population consists of a set of 200 individuals, which is divided into 10 subpopulations, each with 20 individuals. Each individual consists of 30 genes, equal to the number of unknown parameters of the FDE-NN. Inputs of the training set are taken from the interval (0, 1). Therefore, the fitness function is formulated as

e_j = (1/s) Σ_{k=1}^{s} (ŷ''(t_k) + D^{3/2} ŷ(t_k) + ŷ(t_k) − 1 − t_k)² + (1/2) [(ŷ(0) − 1)² + (ŷ(1) − 2)²],  (5.4)

where j is the generation index and ŷ, D^{3/2} ŷ, and ŷ'' are the networks provided in (3.2), (3.3), and (3.4), respectively. The scheme runs iteratively to compute the minimum of the fitness function e_j, with the termination criteria being 3000 generations executed or the fitness limit attained, whichever comes earlier. The best individual found by the global search technique is passed to a rapid local search method as a starting point for further refinement of the results. The parameter settings used in the genetic algorithm (GA) and pattern search (PS) algorithm are provided in Table 1. One of the best sets of weights of the FDE-NN learned stochastically by the PS, GA, and hybrid (GA-PS) algorithms is provided in Table 2. Using these weights in (3.2), one can find the solution to this problem for any input time between 0 and 1. The solutions obtained from the chromosomes given in Table 2 are presented in Table 3.

The results of other numerical solvers, such as the Podlubny matrix approach (PMA) [3, 17], and the reported results of He's variational iteration method (VIM) [14] for inputs between 0 and 1 with a step of 0.1 are also provided in Table 3. In order to determine the results by the PMA technique, the library functions provided by Podlubny at MATLAB Central File Exchange are used [34]. The following parameter settings are employed for the PMA technique:
(i) the value of the fractional order derivative ν = 1.5;
(ii) the constant coefficients A = B = C = 1;
(iii) the discretization step h = 0.01;
(iv) the total number of time steps = 100.

It can be seen that the results obtained by our scheme are in good agreement with those of the state-of-the-art numerical solvers.

Moreover, the behavior of the derivative of the solution is also analyzed with the same individuals as given in Table 2. The results for the derivative of the solution for inputs between 0 and 1 with a step of 0.1 by the PS, GA, and GA-PS algorithms are provided in Table 4. It is quite evident that the accuracy level lies in the range of 10^-2 to 10^-3.

Before moving toward the statistical analysis of the designed scheme, it is necessary to mention that the entire surrogate model of the algorithm is based on the Mittag-Leffler function, whose generic form is given in expression (2.3). However, the difficulty faced in its calculation is due to its complex nature. This issue is tackled in our simulations by using the MATLAB code provided by Podlubny at MATLAB Central File Exchange to determine the value of the MLF for the desired inputs [35].

The training of the weights of the neural networks for the equation is of a highly stochastic nature, so it is necessary to carry out a statistical analysis of the results obtained by the FDE-NN algorithms. A total of 125 independent runs are executed for each stochastic optimizer, namely PS, GA, and GA hybridized with PS (GA-PS). The values of the absolute error at inputs 0 and 1 are provided in Table 5. The mean and standard deviation (STD) were also calculated for the best 100 runs. The value of the unsupervised error e_j of the equation, in ascending order, is plotted against each independent run of the algorithm in Figure 3. Moreover, the values of the absolute error for the solution as well as its derivative at inputs 0 and 1 are plotted in Figure 4. It can be seen from Figures 3 and 4 and Table 5 that the best results are obtained with the GA-PS algorithm.

5.2. Example II

Another fractional order system based on the Bagley-Torvik equation is taken to investigate the strengths and weaknesses of the proposed stochastic algorithm. Consider the Bagley-Torvik equation of [13, 15], denoted (5.5), which has nonhomogeneous initial conditions and a known closed-form exact solution. The problem can be made simpler by a substitution that homogenizes the initial conditions, transforming (5.5) accordingly. This problem is solved with the same methodology adopted for the previous example. The problem-specific fitness function (5.9) for (5.5), taking the values of the constant coefficients as unity, is formulated along the lines of (4.1) to (4.3), where j is the generation index and ŷ, D^{3/2} ŷ, and ŷ'' are the FDE-NN networks provided in (3.2), (3.3), and (3.4), respectively. One of the sets of weights of the FDE-NN networks trained stochastically using PS, GA, and GA-PS is given in Table 6. These weights are used in expression (3.2) to find the solution of the equation for inputs between 0 and 1. The results are summarized in Table 7. It can be inferred from the table that the best results are obtained by the FDE-NN networks trained by the GA-PS algorithm. The average accuracy achieved by the given scheme lies in the range of 10^-3 to 10^-4.

The results computed by the algorithm are also plotted in Figure 5. In Figure 5(a), the comparison with the exact solution is given on the interval (0, 1) with a step of 0.1. Moreover, the results are also plotted for the larger interval (0, 4) in Figure 5(b) using the same weights; it can be seen that the error starts to grow for inputs larger than 1 and becomes appreciable for inputs greater than 2. One can slightly increase the accuracy on large intervals by taking a larger number of neurons in the FDE-NN networks, by training the networks over a larger input span, or by increasing the number of steps. However, it is worth mentioning that in these cases the computational complexity will increase exponentially [36, 37].

5.3. Example III

In this example, a fractional order system based on the Bagley-Torvik equation is taken for which the exact solution is not available; however, numerical solutions have been obtained by different state-of-the-art solvers. The designed scheme is applied to this problem in order to further analyze its applicability, reliability, effectiveness, and efficiency.

Consider the fractional order system [3, 17]

y''(t) + 0.5 D^{3/2} y(t) + 0.5 y(t) = f(t),  (5.10)

where the forcing function and the initial conditions are given as

f(t) = 8 for 0 ≤ t ≤ 1, f(t) = 0 for t > 1,  y(0) = 0, y'(0) = 0.  (5.11)

The example is solved in the same manner as the previous one; however, the objective function used in this case is given as

e_j = (1/s) Σ_{k=1}^{s} (ŷ''(t_k) + 0.5 D^{3/2} ŷ(t_k) + 0.5 ŷ(t_k) − f(t_k))² + (1/2) [ŷ(0)² + ŷ'(0)²].  (5.12)

The aim of our algorithm is to tune the weights for which the value of the fitness function (5.12) is at its minimum. One such set of unknown weights of the FDE-NN networks trained stochastically using PS, GA, and GA-PS is given in Figures 6(a), 6(b), and 6(c), respectively. The results obtained with these weights are summarized in Table 8. It also includes the results of the PMA algorithm with the same set of parameters as taken in Example I. It can be inferred from the results that the given scheme performs comparably to specialized state-of-the-art numerical solvers.

5.4. Some Further Discussion

In this section, some further discussion is added to elaborate on the results. The advantages and limitations of the proposed method are also presented, so that its effectiveness and reliability can be judged.

In the designed scheme, the number of steps s is taken as 2, 5, and 10, respectively, for the fitness functions (5.4), (5.9), and (5.12), with the inputs chosen randomly in the interval (0, 1). This means that the effective mesh size used in our scheme is 0.5, 0.2, and 0.1 for Examples I, II, and III, respectively. On the other hand, in the PMA technique the results were obtained with the number of steps equal to 100, using a mesh size of 0.01 in the interval (0, 1). It is well known that, for a numerical method, decreasing the mesh size increases the accuracy of the algorithm, but at the cost of an extensive computational budget. It can be inferred from Tables 3 and 8 that the results of our scheme are close to those of the PMA method with far fewer steps. Another advantage of our algorithm is that, once the weights are obtained by training the FDE-NN networks, the solution of the equation can be obtained for any continuous input time within the interval (0, 1), whereas the PMA results are available only on a predefined discrete grid of inputs within the interval (0, 1). Therefore, if one desires the result at any new input time, the whole cumbersome iterative procedure has to be run again.

It can be seen from the results given in Table 3 that, on average, the value of the absolute error for GA-PS lies in the range 10^-2 to 10^-3, whereas the reported results of VIM [14] give a good approximation to the exact solution only in the close vicinity of the initial guess, with the absolute error rising as time increases; near the end of the interval, it is 0.225. However, the results of the PMA method are better than those of the GA-PS algorithm, although it has the limitations discussed earlier.

Moreover, it is a bit tricky to decide the appropriate number of steps for the optimization of the weights in our scheme. The decision on the input span interval and the step is made as a compromise between the computational complexity and the accuracy of the algorithm. The computation time was measured for solving the Bagley-Torvik equation with FDE-NN networks optimized with the GA (200 individuals, 3000 generations) and the PS technique (3000 generations). Optimizing the weights with the fitness function in (5.12), the algorithm requires executing the MLF function 20000 times per generation, whereas a single generation using the fitness functions in (5.4) and (5.9) requires 10000 and 4000 executions of the MLF function, respectively. The average total time taken by the algorithm for a single run, and the time taken exclusively by the MLF function, are provided in Table 9. It can be seen from Table 9 that about 80% of the execution time is spent on calculation of the MLF alone. The timing analysis in this paper was carried out on a Dell Latitude D630 laptop with an Intel(R) Core(TM)2 Duo CPU T9300 at 2.50 GHz, using MATLAB version R2008b.

6. Conclusions

On the basis of the simulations and analysis, it can be concluded that the fractional order system of the Bagley-Torvik equation can be solved by the designed heuristic computational intelligence algorithm. Among the stochastic optimizers considered, the GA-PS algorithm trains the fractional differential equation neural networks best, outperforming the PS and GA algorithms alone. On the basis of the statistical analysis, it can be inferred that the proposed computing approach is reliable, effective, and easily applicable to such complex fractional order systems. In future work, we intend to use other biologically inspired computational intelligence algorithms to solve these fractional order systems.

Acknowledgment

The author would like to thank Dr. Siraj-ul-Islam Ahmed for his valuable suggestions to improve the presentation of the paper.