Abstract

The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization problems that are multidimensional, linear, or nonlinear over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm, is presented. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and the learner phase. The proposed algorithm is tested on a number of benchmark functions, and performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and the other algorithms.

1. Introduction

Most swarm intelligence optimization studies and applications have focused on nature-inspired algorithms. Numerous population-based, nature-inspired optimization algorithms have been presented, such as Ant Colony Optimization (ACO), the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Artificial Bee Colony (ABC), and Differential Evolution (DE). These optimization algorithms are based on different natural phenomena. ACO models the behavior of an ant colony searching for food between a source and a destination [1, 2]. GA applies Darwin's theory of the survival of the fittest to optimization problems [3, 4]. PSO emulates the collaborative behavior of bird flocking and fish schooling in searching for food [5–7]. ABC mimics the foraging behavior of honey bees [8–10]. DE is derived from the Genetic Algorithm and is an efficient global optimizer in the continuous search domain [11, 12]. These algorithms have been applied to many engineering optimization problems and proven effective in solving specific types of problems. However, each algorithm has its own advantages and disadvantages for different problems. Generally, a good optimization algorithm should possess three essential properties. First, the algorithm should be able to obtain the true global optimum. Second, its convergence speed should be fast. Third, it should have a minimum of control parameters so that it is easy to use. An optimization algorithm that meets all three conditions at the same time would be an excellent algorithm. Some optimization techniques often reach globally optimal results but at the cost of convergence speed; such algorithms tend to favor the quality of the computational results over the convergence speed. In practical applications, however, both high calculation accuracy and fast convergence are the ultimate aims.

Recently, Rao et al. [13, 14] proposed the teaching-learning-based optimization (TLBO) algorithm, inspired by the phenomenon of teaching and learning in a class. TLBO requires only common control parameters such as the population size and the number of generations and does not require any algorithm-specific control parameters; that is, it is a parameter-less algorithm [15]. Thus, there is no burden of tuning control parameters in the TLBO algorithm. Hence, the TLBO algorithm is simpler, more effective, and involves relatively less computational cost. More importantly, the TLBO algorithm is able to achieve better results at a comparatively faster convergence speed than the algorithms mentioned above. Therefore, the TLBO algorithm has been successfully applied in diverse optimization fields such as mechanical engineering, task scheduling, production planning and control, and vehicle-routing problems in transportation [16–20]. Like other swarm intelligence optimization algorithms, the basic TLBO can be improved further. In order to improve the performance of TLBO, several variants have been proposed. Rao and Patel presented an elitist TLBO (ETLBO) algorithm [15] to solve complex constrained optimization problems and used a modified version of the TLBO algorithm [17] to solve the multiobjective optimization problem of heat exchangers. Sultana and Roy [19] proposed a quasi-oppositional teaching-learning-based optimization (QOTLBO) methodology to find the optimal location of distributed generators while simultaneously optimizing the power loss, voltage stability index, and voltage deviation of a radial distribution network. Ghasemi et al. [20] used a Lévy mutation strategy based on TLBO for the optimal setting of the control variables of the optimal power flow problem. Furthermore, some improved TLBO algorithms have been proposed to solve the global function optimization problem [21–24] and the multiobjective optimization problem [17, 25, 26].

In this paper, we propose a novel improved TLBO, called nonlinear inertia weighted TLBO (NIWTLBO). A nonlinear inertia weighted factor is introduced into the basic TLBO to control the memory rate of learners, and a dynamic inertia weighted factor is used to replace the original random number in the teacher phase and the learner phase. As a result, NIWTLBO has a faster convergence speed and higher calculation accuracy than the basic TLBO for most optimization problems. The performance of NIWTLBO in solving global function optimization problems is compared with that of the basic TLBO and other optimization algorithms. The analysis shows that the proposed algorithm outperforms most of the other algorithms investigated in this paper.

The rest of this paper is organized as follows. Section 2 describes the basic TLBO algorithm in detail. Section 3 introduces the proposed NIWTLBO algorithm. Section 4 provides numerical experiments and results demonstrating the performance of NIWTLBO in comparison with other optimization algorithms. Finally, conclusions are presented in Section 5.

2. Teaching-Learning-Based Optimization

The basic TLBO algorithm mainly consists of two parts, namely, the teacher phase and the learner phase. In the teacher phase, students learn from the teacher to bring their knowledge level closer to the teacher's. In the learner phase, students learn from interaction with other individuals to increase their knowledge. In the TLBO algorithm, a group of learners is considered as the population. Each learner is analogous to an individual of an evolutionary algorithm. The different subjects offered to the learners are considered as the design variables of the optimization problem. A learner's result is analogous to the fitness value of the objective function for the optimization problem. The best learner (i.e., the best solution in the entire population) is considered as the teacher. The best solution is the best value of the objective function of the given optimization problem, and the design variables are the input parameters of the objective function.

The process of basic TLBO algorithm is described below.

2.1. Initialization

The notations used in TLBO are described as follows: $NP$ is the number of learners in a class (i.e., the population size); $D$ is the number of subjects offered to the learners (i.e., the dimension of the design variables); MAXITER is the maximum number of allowable iterations; $X_i^t$ denotes the $i$th learner in the class (i.e., the $i$th individual in the population) at iteration $t$; $x_{(i,j)}^t$ denotes the result of the $j$th subject offered to the $i$th learner at iteration $t$; $X_{\mathrm{teacher}}^t$ represents the teacher, that is, the best learner in the class at iteration $t$.

The population is randomly initialized within the search space as an $NP \times D$ matrix. The value of each $x_{(i,j)}^0$ is assigned randomly using
$$x_{(i,j)}^0 = x_j^{\min} + \mathrm{rand} \times \left(x_j^{\max} - x_j^{\min}\right), \quad (1)$$
where $i = 1, 2, \ldots, NP$ and $j = 1, 2, \ldots, D$. Here rand represents a uniformly distributed random variable within the range $[0, 1]$, and $x_j^{\min}$ and $x_j^{\max}$ represent the lower and upper bounds of the $j$th design variable, respectively.
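
As an illustration, the following MATLAB sketch carries out this initialization; the variable names (NP, D, xmin, xmax, X) and the bound values are our own choices, not taken from the paper.

```matlab
% Minimal sketch of population initialization (assumed names and bounds).
NP = 40;                          % number of learners (population size)
D  = 30;                          % number of subjects (design variables)
xmin = -100 * ones(1, D);         % lower bounds of the design variables
xmax =  100 * ones(1, D);         % upper bounds of the design variables

% Each row of X is one learner; each column is one subject.
X = repmat(xmin, NP, 1) + rand(NP, D) .* repmat(xmax - xmin, NP, 1);
```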

2.2. Teacher Phase

In this phase, the algorithm simulates the students learning from the teacher. A good teacher can bring his or her learners up to his or her own level of knowledge. Hence, the mean result of a class may increase from a low level towards the teacher's level. In practice, however, the mean result of a class cannot fully reach the teacher's level: because of individual differences and the forgetfulness of memory, the learners cannot gain all the knowledge of the teacher. A teacher can therefore increase the mean result of a class only to a certain value, which depends on the capability of the whole class.

Let $M_j^t$ be the mean result of the learners on a particular subject $j$ ($j = 1, 2, \ldots, D$) and let $X_{\mathrm{teacher}}^t$ be the teacher at any iteration $t$. $X_{\mathrm{teacher}}^t$ will try to move the mean $M_j^t$ towards its own level, which becomes the new mean. $\mathrm{Difference\_Mean}_j^t$ is the difference between the existing mean result of each subject and the corresponding result of the teacher for that subject at iteration $t$:
$$\mathrm{Difference\_Mean}_j^t = r^t \left(x_{(\mathrm{teacher},j)}^t - T_F M_j^t\right), \quad (2)$$
$$T_F = \mathrm{round}\left[1 + \mathrm{rand}(0, 1)\right], \quad (3)$$
where $x_{(\mathrm{teacher},j)}^t$ is the result of the teacher in subject $j$ at iteration $t$, $r^t$ is a random number in the range $[0, 1]$, and $T_F$ is the teaching factor, which decides the value of the mean to be changed and can be either 1 or 2. The values of $r^t$ and $T_F$ are generated randomly in the algorithm, and neither is supplied as an input to the algorithm. The solution is updated according to the difference between the existing and the new means:
$$X_{\mathrm{new},i}^t = X_{\mathrm{old},i}^t + \mathrm{Difference\_Mean}^t. \quad (4)$$

In every iteration, $X_{\mathrm{new},i}^t$ is the updated value of $X_{\mathrm{old},i}^t$. Because the optimization problem is a minimization problem, our goal is to find the minimum of $f(X)$. If the new value gives a better function value, the old value is replaced by the new one:
$$X_i^{t+1} = \begin{cases} X_{\mathrm{new},i}^t, & f\left(X_{\mathrm{new},i}^t\right) < f\left(X_{\mathrm{old},i}^t\right), \\ X_{\mathrm{old},i}^t, & \text{otherwise}, \end{cases} \quad (5)$$
where $X_{\mathrm{new},i}^t$ and $X_{\mathrm{old},i}^t$ represent the new and old total results of the $i$th student at iteration $t$, respectively. All the new values accepted at the end of the teacher phase become the input to the learner phase.
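
A minimal MATLAB sketch of one pass of the teacher phase, continuing the variables of the initialization sketch above, might look as follows; the Sphere objective and the bound clamping are illustrative assumptions rather than details taken from the paper.

```matlab
% Sketch of one teacher-phase pass (greedy acceptance, minimization).
f = @(x) sum(x.^2, 2);                     % example objective (Sphere)
fitness = f(X);                            % column vector of learner results
[~, best] = min(fitness);                  % index of the best learner
Teacher = X(best, :);                      % the teacher of the class
M = mean(X, 1);                            % mean result of each subject

for i = 1:NP
    TF = round(1 + rand);                  % teaching factor, 1 or 2
    r  = rand(1, D);                       % random numbers in [0, 1]
    Xnew = X(i, :) + r .* (Teacher - TF * M);   % difference mean added
    Xnew = min(max(Xnew, xmin), xmax);     % keep within the search bounds
    if f(Xnew) < fitness(i)                % accept only if it improves
        X(i, :) = Xnew;
        fitness(i) = f(Xnew);
    end
end
```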

2.3. Learner Phase

In the learner phase, the algorithm simulates learners increasing their knowledge through interaction among themselves. A learner interacts randomly with other learners to increase his or her knowledge. If one learner has more knowledge than another, the other learner can quickly acquire new knowledge by learning from him or her. In this learning process, two learners are selected randomly, one being $X_i^t$ and the other $X_j^t$ with $i \neq j$. The update formula is
$$X_{\mathrm{new},i}^t = \begin{cases} X_{\mathrm{old},i}^t + r^t \left(X_i^t - X_j^t\right), & f\left(X_i^t\right) < f\left(X_j^t\right), \\ X_{\mathrm{old},i}^t + r^t \left(X_j^t - X_i^t\right), & \text{otherwise}, \end{cases} \quad (6)$$
where $r^t$ is a random number in the range $[0, 1]$ and $X_i^t$ and $X_j^t$ represent the total results of the $i$th and $j$th students at iteration $t$, respectively. The new value is accepted if it improves the value of the objective function; similarly to the teacher phase, (5) is used to update the learner.
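
Continuing the same variables, a sketch of one pass of the learner phase could be written as follows.

```matlab
% Sketch of one learner-phase pass, continuing the variables above.
for i = 1:NP
    j = randi(NP);
    while j == i
        j = randi(NP);                     % pick a different learner
    end
    r = rand(1, D);                        % random numbers in [0, 1]
    if fitness(i) < fitness(j)             % move along the better direction
        Xnew = X(i, :) + r .* (X(i, :) - X(j, :));
    else
        Xnew = X(i, :) + r .* (X(j, :) - X(i, :));
    end
    Xnew = min(max(Xnew, xmin), xmax);     % keep within the search bounds
    if f(Xnew) < fitness(i)                % accept only if it improves
        X(i, :) = Xnew;
        fitness(i) = f(Xnew);
    end
end
```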

In each iteration of TLBO, it is necessary to detect repeated solutions in the population. If a repeated solution is found, it is removed and a new individual is generated randomly. This expands the diversity of the population and avoids premature convergence of the algorithm. After a number of generations, the knowledge level of the entire class smoothly approaches a point that is considered the teacher, and the algorithm converges to a solution. A simple way to realize this duplicate-elimination step is sketched below.
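
The following is an illustrative sketch of the duplicate-elimination step, not necessarily the authors' implementation, again continuing the variables defined above.

```matlab
% Replace repeated learners with randomly regenerated individuals.
[~, keep] = unique(X, 'rows', 'stable');   % indices of the first occurrences
dup = setdiff(1:NP, keep(:)');             % indices of repeated learners
for k = dup
    X(k, :) = xmin + rand(1, D) .* (xmax - xmin);   % regenerate randomly
    fitness(k) = f(X(k, :));
end
```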

2.4. Algorithm Termination

The algorithm is terminated after MAXITER iterations. Further details of the TLBO algorithm can be found in the literature [13, 14].

3. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization

The basic TLBO algorithm is based on the teaching-learning phenomenon of a classroom. In the teacher phase, the teacher tries to shift the mean of the learners towards himself or herself by teaching. In the learner phase, learners improve their knowledge by interaction among themselves. In the process of teaching and learning, learners improve their level by accumulating knowledge; in other words, they learn new knowledge based on existing knowledge. In the real world, a teacher wishes his or her students to reach a knowledge level equal to his or her own as fast as possible. This is impossible for a student, however, because of forgetting: a student usually forgets part of the existing knowledge due to the physiological characteristics of the brain. As the number of learning iterations increases, more and more of the existing knowledge is retained. The learning curve presented by Ebbinghaus describes how quickly knowledge is acquired during the learning process: the sharpest increase occurs after the first try and then gradually evens out, meaning that less and less new knowledge is retained after each repetition. Like the forgetting curve, the learning curve is exponential. It is therefore necessary to add a memory weight to the existing knowledge of the student in order to simulate this learning scenario. According to this phenomenon, a nonlinear inertia weighted factor is introduced into (4) and (6) of the basic TLBO; this factor is considered as a memory weight that controls the memory rate of learners. The nonlinear inertia weighted factor scales the existing knowledge of the learner when computing the new value. In contrast to the basic TLBO, in our algorithm the contribution of the learner's previous value is determined by a weighted factor while computing the new learner value.

Accordingly, to make the memory characteristic conform to the learning curve, the nonlinear inertia weighted factor $w$ (i.e., the memory rate) is increased nonlinearly from $w_{\min}$ to 1.0 over time according to (7), where iter is the current iteration number, MAXITER is the maximum number of allowable iterations, and $w_{\min}$ is the minimum value of the nonlinear inertia weighted factor $w$. The value of $w_{\min}$ should be above 0.5 (here 0.6 is selected); otherwise the individuals become worse because they remember too little existing knowledge at first. Hence, if $w_{\min}$ is too small, the algorithm cannot converge to the true global optimal solution. The $w$ curve (i.e., the memory rate curve) is shown in Figure 1. The nonlinear inertia weighted factor is applied in the new equations (10) and (11). In this modified TLBO, the individuals try to sample diverse zones of the search space during the early stages of the search; during the later stages, they adjust the movements of the trial solutions finely so that they can explore the interior of a relatively small region.
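
A schedule with the described behavior (a nonlinear rise from $w_{\min}$ to 1.0 that is steep early and flattens later) can be sketched, for illustration only, with a square-root ramp; the exact expression used in (7) may differ from this assumed form.

```matlab
% Illustrative memory-rate schedule (assumed form, not necessarily eq. (7)):
% w rises nonlinearly from wmin to 1.0, quickly at first and slower later.
wmin    = 0.6;                             % minimum memory rate
MAXITER = 2000;                            % maximum number of iterations
iter    = 1:MAXITER;
w = wmin + (1 - wmin) * sqrt(iter / MAXITER);
plot(iter, w); xlabel('iteration'); ylabel('w (memory rate)');
```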

In the teacher phase, in order to obtain a new set of better learners, the difference between the existing mean result and the corresponding result of the teacher is added to the existing population of learners. Similarly, to obtain a new set of better learners in the learner phase, two learners are selected randomly and the difference between their results in each corresponding subject is added to the existing learner. As shown in (2) and (6), the difference value added to the existing learner is formed from the difference of results and the random number $r^t$. Therefore, in the teacher and learner phases, the difference value is decided by the random number $r^t$ to a large extent. In our proposed method, we modify the random number as
$$r^t = w_0 + \frac{\mathrm{rand}}{2}, \quad (8)$$
where rand is a uniformly distributed random number within the range $[0, 1]$ and $w_0$ is a constant whose value should be neither too big nor too small. Here, $w_0$ is selected to be 0.5, which conforms to the dynamic inertia weight proposed by Eberhart and Shi [28]. So (8) is modified as
$$r^t = 0.5 + \frac{\mathrm{rand}}{2}. \quad (9)$$

Equation (9) generates a random number in the range $[0.5, 1]$, which is similar to the method proposed by Satapathy and Naik [23]. We call $r^t$ the dynamic inertia weighted factor. With this factor, the mean value of the random number is raised from 0.5 to 0.75. This increases the probability of stochastic variations and enlarges the difference value added to the existing learners, so as to improve population diversity, avoid prematurity in the search process, and increase the ability of the basic TLBO to escape from local optima. On a multimodal function surface, the original random weighting factor leads to most of the population clustering near a local optimum point. With the new dynamic inertia weight, however, the population has more chances to jump out of local optima and to keep moving towards the global optimum point until the true global optimum is reached.
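
The effect of (9) can be illustrated with a single line of MATLAB (D as defined in the earlier sketches): the sample is shifted from $[0, 1]$ with mean 0.5 to $[0.5, 1]$ with mean 0.75.

```matlab
% Dynamic inertia weighted factor of eq. (9) versus the plain random number.
r_basic = rand(1, D);                      % basic TLBO weighting, mean 0.5
r_dyn   = 0.5 + rand(1, D) / 2;            % NIWTLBO weighting, mean 0.75
```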

With the nonlinear inertia weighted factor and the dynamic inertia weighted factor, the new set of improved learners in the teacher phase is obtained by
$$X_{\mathrm{new},i}^t = w X_{\mathrm{old},i}^t + r^t \left(X_{\mathrm{teacher}}^t - T_F M^t\right), \quad (10)$$
and the new set of improved learners in the learner phase is obtained by
$$X_{\mathrm{new},i}^t = \begin{cases} w X_{\mathrm{old},i}^t + r^t \left(X_i^t - X_j^t\right), & f\left(X_i^t\right) < f\left(X_j^t\right), \\ w X_{\mathrm{old},i}^t + r^t \left(X_j^t - X_i^t\right), & \text{otherwise}, \end{cases} \quad (11)$$
where $w$ is given by (7) and $r^t$ is given by (9).
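
Combining the two factors, a sketch of the modified teacher-phase step for a single learner is given below (the learner phase is analogous), continuing the variables of the earlier sketches; the schedule used for $w$ is the illustrative one assumed above, not necessarily the exact form of (7).

```matlab
% Sketch of the NIWTLBO teacher-phase update for one learner: the memory
% rate w scales the existing knowledge and the dynamic factor replaces
% the plain random number of basic TLBO.
iter = 100;                                      % current iteration (example)
w = wmin + (1 - wmin) * sqrt(iter / MAXITER);    % memory rate, assumed form
i  = 1;                                          % example learner index
TF = round(1 + rand);                            % teaching factor, 1 or 2
r  = 0.5 + rand(1, D) / 2;                       % dynamic factor of eq. (9)
Xnew = w * X(i, :) + r .* (Teacher - TF * M);    % weighted teacher-phase move
if f(Xnew) < fitness(i)                          % greedy acceptance as before
    X(i, :) = Xnew;
    fitness(i) = f(Xnew);
end
```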

4. Experiments on Benchmark Functions

In this section, NIWTLBO is applied to several benchmark functions to evaluate its performance for different dimensions and search spaces, in comparison with the basic TLBO algorithm and with other optimization algorithms available in the literature. All tests are run on a laptop with an Intel Core i5 2.67 GHz processor and 2 GB RAM. The algorithm is coded in the MATLAB programming language and run in the MATLAB 2012a environment. This section provides the results obtained by the NIWTLBO algorithm compared to the basic TLBO and other intelligent optimization algorithms. The details of the 24 benchmark functions with different characteristics, such as unimodality/multimodality and separability/nonseparability, are shown in Table 1. “C” denotes the characteristic of the function; “$D$” is the dimension of the function; “range” is the difference between the lower and upper bounds of the variables; “$f_{\min}$” is the theoretical global minimum.

4.1. Experiment 1: NIWTLBO versus PSO, ABC, DE, and TLBO

This experiment aims to assess the ability of the NIWTLBO algorithm to reach the global optimum value in comparison with PSO, ABC, DE, and the basic TLBO. To be fair, each algorithm uses the same values of the common control parameters, such as the population size and the maximum number of evaluations. The population size is 40 and the maximum number of fitness function evaluations is 80,000 for all benchmark functions in Table 1. The other specific parameters of the algorithms are given below.

PSO Setting. Cognitive attraction $c_1 = 2$, social attraction $c_2 = 2$, and an inertia weight $w$ chosen within the recommended range. As mentioned in [5], a recommended choice for the constants $c_1$ and $c_2$ is the integer 2, since on average it makes the weights for the “social” and “cognition” parts equal to 1. When $w$ is within the range recommended in [29], the PSO has the best chance of finding the global optimum and takes a moderate number of iterations.

ABC Setting. For ABC there are no other specific parameters to set.

DE Setting. In DE, $F$ is a real constant which affects the differential variation between two solutions, and $CR$ is the crossover rate. The configuration parameters for DE are decided on the basis of experiments with different parameter values, and we choose the values of $F$ and $CR$ that give the DE algorithm the best results.

TLBO Settings. For TLBO there are no other specific parameters to set.

NIWTLBO Settings. For NIWTLBO there are no other specific parameters to set either.

In this section, each benchmark function is run 30 times independently with PSO, ABC, DE, TLBO, and NIWTLBO. Each algorithm is terminated after 80,000 FEs or earlier if it reaches the global minimum value before completing the 80,000 FEs. The mean and standard deviation of the fitness values obtained over the 30 runs on each benchmark function are recorded in Table 2. Meanwhile, the mean values and standard deviations of the number of fitness function evaluations are reported in Table 3. In order to analyze whether the differences between the results of NIWTLBO and the other algorithms are significant, we carried out a t-test on pairs of algorithms, which is a very popular procedure in evolutionary computing [12]. The statistical significance levels of the differences of the means of PSO and NIWTLBO, ABC and NIWTLBO, DE and NIWTLBO, and TLBO and NIWTLBO are reported in Table 4. Here, the “+” symbol indicates that the t value is significant at the 0.05 level of significance by a two-tailed test, entries that are not statistically significant are marked accordingly, and “NA” means not applicable because the results of the pair of algorithms have the same accuracy.

The comparative results for each benchmark function for PSO, ABC, DE, TLBO, and NIWTLBO are presented in Table 2 in the form of the average solution and standard deviation obtained in 30 independent runs on each benchmark function. The significance of NIWTLBO with respect to PSO, ABC, DE, and TLBO is shown in Table 4. It is observed from Tables 2 and 4 that NIWTLBO outperforms PSO, ABC, DE, and TLBO on a number of the benchmark functions, and that TLBO in turn performs better than PSO, ABC, and DE on some of them. For several functions the performance of NIWTLBO, PSO, ABC, DE, and TLBO is alike, in that almost all the algorithms obtain the global optimum value, except for ABC on Bohachevsky3. For Rosenbrock, the performance of the different algorithms is similar. For Griewank and Multimod, the performance of NIWTLBO, DE, and TLBO is alike and better than that of PSO and ABC. For Weierstrass, the performance of NIWTLBO and TLBO is alike and outperforms PSO, ABC, and DE.

It is observed from the results in Table 3 that the smaller the number of fitness evaluations, the more quickly the algorithm obtains the global optimum value; that is, the convergence rate of the algorithm is faster. Clearly, the NIWTLBO algorithm requires fewer function evaluations than the basic TLBO algorithm and the other algorithms mentioned to achieve the global optimum value for most of the benchmark functions. Hence, the convergence rate of the NIWTLBO algorithm is faster than that of the other algorithms mentioned for most of the benchmark functions, except Six-Hump Camel Back, Branin, and Goldstein-Price.

4.2. Experiment 2: NIWTLBO versus PSO-w, PSO-cf, CPSO-H, and CLPSO

In this section, the experiment aims to analyse the ability of the NIWTLBO algorithm to obtain the global optimum value in comparison with variant PSO algorithms such as PSO-w [29], PSO-cf [30], CPSO-H [31], and CLPSO [32]. In this experiment, 8 different unimodal and multimodal benchmark functions are tested using the NIWTLBO algorithm. The details of the benchmark functions are shown in Table 1. In order to maintain consistency in the comparison, the NIWTLBO algorithm is run with the same maximum number of function evaluations and the same dimensions. Each benchmark function is run 30 times independently with NIWTLBO. The comparative results are reported in Table 5 in the form of the average solution and standard deviation obtained in 30 independent runs on each benchmark function. In Table 5, the results of the algorithms other than NIWTLBO are taken from the literature [24, 27], where the algorithms were run for 30,000 FEs with a population size of 10 on 10-dimensional functions.

It is observed from the results in Table 5 that the performance of the NIWTLBO and TLBO algorithms is better than that of the PSO-w, PSO-cf, CPSO-H, and CLPSO algorithms for Sphere, Ackley, and Griewank. The performance of NIWTLBO and CLPSO is alike for Rastrigin, Noncontinuous Rastrigin, and Weierstrass. For Rosenbrock and Schwefel 2.26, the NIWTLBO algorithm does not perform as well as the other algorithms.

4.3. Experiment 3: NIWTLBO versus CABC, GABC, RABC, and IABC

In this section, the experiment is conducted to compare the ability of the NIWTLBO algorithm to achieve the global optimum value with that of CABC [33], GABC [34], RABC [8], and IABC [35] on the 7 benchmark functions shown in Table 1. The comparative results are reported in Table 6. To maintain consistency in the comparison, the parameters of the algorithms are similar to those in the literature [8], where the population size is set to 20 and the dimension is set to 30. The results of CABC, GABC, RABC, and IABC are taken directly from the literature [23]. The results of NIWTLBO and TLBO, in the form of the average solution and standard deviation, are obtained in 30 independent runs on each benchmark function. In this experiment, TLBO and NIWTLBO are tested with the same numbers of function evaluations listed in Table 6 to compare their performance with the CABC, GABC, RABC, and IABC algorithms.

From Table 6, it is clearly observed that the performance of the NIWTLBO and TLBO algorithms is better than that of CABC, GABC, and RABC for all benchmark functions. The performance of the NIWTLBO algorithm is similar to that of IABC for Rastrigin and Griewank and outperforms IABC for the rest of the benchmark functions in Table 6.

4.4. Experiment 4: NIWTLBO versus SaDE, jDE, and JADE

In this section, the experiment compares the performance of the NIWTLBO algorithm with the SaDE, jDE, and JADE algorithms on 7 benchmark functions described in Table 1. The results of SaDE, jDE, and JADE are taken directly from the literature [36]. The results of NIWTLBO and TLBO, in the form of the average solution and standard deviation, are obtained in 30 independent runs on each benchmark function. To be fair, the parameters of the algorithms are the same as in the literature [36], where the population size is 20 and the dimension is 30. The comparative results are recorded in Table 7. In this experiment, TLBO and NIWTLBO are implemented with the same numbers of function evaluations listed in Table 7 to compare their performance with the SaDE, jDE, and JADE algorithms.

It can be seen that NIWTLBO performs much better than these variants of DE on all the benchmark functions in Table 7. This again demonstrates that the NIWTLBO algorithm performs well.

4.5. Experiment 5: NIWTLBO versus TLBO with Different Dimensions

In this section, we analyse the convergence of the NIWTLBO and TLBO algorithms for different dimensions. Two unimodal functions and two multimodal functions are tested with dimensions 2, 10, 50, and 100. In this experiment, the number of evolutionary generations is used to evaluate the performance of the NIWTLBO and TLBO algorithms. The population size is set to 40 and the number of evolutionary generations is set to 2000. The results of the NIWTLBO and TLBO algorithms for the 2-, 10-, 50-, and 100-dimensional functions over 30 independent runs are listed in Table 8 in the form of the mean solution. The graphs plot the function value against the number of evolutionary generations on a logarithmic scale.

Figures 2 and 3 show the convergence graphs of the unimodal and multimodal functions for different dimensions, respectively. It is observed from the graphs that the convergence rate of the NIWTLBO algorithm is faster than that of the basic TLBO algorithm for both the unimodal and the multimodal functions in all dimensions. Furthermore, it is observed from Table 8 and the figures that the performance of the NIWTLBO algorithm is almost unaffected by the dimension, whereas the performance of the TLBO algorithm degrades slightly as the dimension increases.

4.6. Experiment 6: NIWTLBO versus Other Variants of TLBO

In order to show the advantages and disadvantages of the NIWTLBO, in this section we carry out experiments comparing the performance of the NIWTLBO algorithm with some other variants of TLBO, namely WTLBO [21], ITLBO22 [22], ITLBO23 [23], and ITLBO [24]. Some of the benchmark functions described in Table 1 are tested. In the experiments, the population size is 20 and the dimension is 2. The number of teachers in ITLBO is 4. To maintain consistency, the execution of the NIWTLBO and the other variants of TLBO is stopped after 80,000 FEs or when the difference between the fitness obtained by the algorithm and the global optimum value is less than 0.1% (e.g., if the optimum value is 0, the solution is accepted if it differs from the optimum value by less than 0.001). If the solution is not accepted within 80,000 FEs, the run is counted as unsuccessful. Each benchmark function is tested 100 times with the NIWTLBO and the other variants of TLBO, and the comparative results, in the form of mean function evaluations and success percentage, are shown in Table 9. “MNFE” denotes the mean number of function evaluations required until the solution is accepted. The number of function evaluations in the variants of TLBO is computed as 2 × population size × number of generations.

It is observed from Table 9 that, except for Rosenbrock and Branin, the NIWTLBO algorithm requires fewer function evaluations than the other algorithms to reach the global optimum value, with a very high success rate of 100%. For Rosenbrock, Branin, Griewank, and Weierstrass, the WTLBO algorithm performs worse than the other algorithms, with a low success rate, as it is easily trapped in local optima. This shows that the NIWTLBO algorithm performs better than these other variants of TLBO.

5. Conclusion

In this paper, we propose the NIWTLBO algorithm, which introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and the learner phase. The proposed algorithm is applied to 24 benchmark functions with different characteristics to evaluate its performance, which is compared with that of the basic TLBO and some other state-of-the-art optimization algorithms available in the literature. The comparisons between NIWTLBO and the other algorithms mentioned are also reported.

The experimental results show the satisfactory performance of the NIWTLBO algorithm for solving global optimization problems. The NIWTLBO algorithm not only enhances the local searching ability of TLBO but also improves its global performance. Moreover, the NIWTLBO algorithm increases the convergence speed and enhances the ability of TLBO to escape from local optima.

In future work, the NIWTLBO algorithm will be extended to handle more complex functions and to solve constrained and multiobjective optimization problems. Furthermore, we will explore hybrid methods to improve the diversity of TLBO, so as to utilize the advantages of other intelligent algorithms and further improve the global performance of TLBO.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The present study was partially supported by the National Natural Science Foundation of China (10872160). The authors thank Rao R. V. for providing the source code of the basic TLBO algorithm and Dervis Karaboga for providing the source code of the ABC algorithm.