Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification

https://doi.org/10.1016/j.amc.2011.10.012

Abstract

A hybrid algorithm that combines particle swarm optimization with simulated annealing behavior (SA-PSO) is proposed in this paper. The SA-PSO algorithm combines the good solution quality of simulated annealing with the fast search ability of particle swarm optimization. Since stochastic optimization algorithms are sensitive to their parameters, a procedure for parameter selection is introduced to improve solution quality. To verify the usability and effectiveness of the proposed algorithm, simulations are performed on 20 mathematical optimization functions of different dimensions. Comparisons with other algorithms are also conducted in terms of solution quality, search efficiency, and convergence characteristics. The results show that SA-PSO achieves higher efficiency, better solution quality, and faster convergence than the compared algorithms.

Introduction

Combinatorial optimization problems arise extensively in science, engineering, and commercial applications. Because mathematical models of real-life systems are usually both nonlinear and non-differentiable, stochastic optimization algorithms have become important in recent years owing to their flexibility in finding solutions. An algorithm that can reliably discover the global optimum, or near-global-optimum, solutions is therefore the goal pursued by many researchers, and the development of such an algorithm is a pressing issue today.

There are two main approaches to obtaining global or near-global optimum solutions for most nonlinear problems. The first is mathematical programming; common methods include linear programming, nonlinear programming, and mixed-integer programming. A disadvantage of these approaches is that the optimization problem must be expressed as an explicit mathematical formulation, which is then solved through derivative-based algorithms. This class of methods has been developed over a long period, so the solutions obtained are relatively reliable and usable. However, the approach may be insufficient when a global optimum is required, and it is difficult to apply when the problem cannot be described in detail mathematically. The second approach uses stochastic global search algorithms, which are based on stochastic search combined with heuristic behaviors. In the past decade, several stochastic computational techniques, such as genetic algorithms (GA) [1], simulated annealing (SA) [2], [3], [4], and tabu search (TS) [5], [6], have been used to address optimization problems. These algorithms are probabilistic heuristics with global search properties. Although GA methods have been employed successfully to solve complex optimization problems, recent research has identified deficiencies in their performance. The degradation in efficiency is apparent in applications with highly epistatic objective functions (i.e., when the optimized parameters are highly correlated): crossover and mutation operations are hampered, and the improved fitness of offspring is compromised because the population chromosomes contain similar structures. In addition, the average fitness becomes high towards the end of the evolutionary process [7]. Moreover, the premature convergence of GA degrades its performance by reducing its search capability, leading to a higher probability of being trapped in a local optimum [8].

Recently, a global optimization technique called particle swarm optimization (PSO) [9] has been used to solve real-time problems and has attracted researchers' interest due to its flexibility and efficiency. The limitations of the classic greedy search technique, which restrict the allowed forms of the fitness function as well as the continuity of the variables used, can be entirely eliminated. PSO, first introduced by Kennedy and Eberhart [10], is a modern heuristic algorithm developed through the simulation of a simplified social system. It has been found to be robust in solving continuous nonlinear optimization problems [11].

In general, the PSO method is faster than the SA method because PSO performs a parallel search. However, similar to the GA, the main drawback of PSO is premature convergence, which may occur when the particle and group best solutions are trapped in local minima during the search process. Localization occurs because particles tend to fly to local, or near-local, optima; the particles therefore concentrate in a small region, and the global exploration ability is weakened. In contrast, the most significant characteristic of SA is its probabilistic jumping property, known as the Metropolis process, which can be controlled by adjusting the temperature. It has been theoretically proven that the SA technique converges asymptotically to the global optimum solution with probability one [12], [13], provided that certain conditions are satisfied. Therefore, a novel SA-PSO approach is proposed in this paper. The salient features of PSO and SA are hybridized to create an approach that generates high-quality solutions within shorter calculation times and offers more stable convergence characteristics.
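The Metropolis process referred to above can be sketched as a simple acceptance rule (a minimal illustration; the function name and the injectable random source are my own conventions, not the paper's):

```python
import math
import random

def metropolis_accept(delta_cost, temperature, rng=random.random):
    """Metropolis criterion: always accept an improving move; accept a
    worsening move with probability exp(-delta_cost / temperature)."""
    if delta_cost <= 0:       # candidate is no worse: always accept
        return True
    if temperature <= 0:      # frozen system: reject all uphill moves
        return False
    return rng() < math.exp(-delta_cost / temperature)
```

At high temperature almost any uphill move is accepted (exploration); as the temperature is lowered, uphill moves become increasingly unlikely (exploitation), which is the jumping property the hybrid exploits.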

The proposed algorithm is easy to implement and has the advantages of both the SA and PSO algorithms. Its feasibility was demonstrated on 20 mathematical optimization functions of different dimensions, and it was compared with the GA, SA, and PSO methods in terms of solution quality and computational efficiency.


Overview of PSO

PSO was first proposed by Kennedy and Eberhart in 1995 [10]; reference [14] discussed its convergence rate and a method for selecting the best parameters. The PSO search procedure is based on the swarm concept: a group of individuals that together optimize a certain fitness function. Each individual can send information to the others, ultimately allowing the entire group to move toward the same objective or in the same direction. It is a way to simulate the behavior of individuals of the
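The canonical PSO update behind this description moves each particle under an inertia weight and two stochastic attractions, toward its personal best and the group best. A minimal sketch follows; the parameter values (w, c1, c2) are common illustrative defaults, not the paper's settings:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO velocity/position update.
    positions, velocities, pbest: list of per-particle coordinate lists;
    gbest: coordinate list of the best solution found by the whole swarm."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Because every particle is updated independently within an iteration, the search is naturally parallel, which is the source of PSO's speed advantage over SA noted later in the paper.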

SA-PSO

To obtain global or near-global optimum solutions, a search algorithm requires appropriate use of both 'exploration' and 'exploitation'. Exploration allows the search to cover the entire solution space in order to approach the region of the global optimum. Exploitation allows a gradient-like search within a localized region to find the best solution in that region. Although the use of both does not guarantee finding the global optimum solution,
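One way such a hybrid can be arranged, sketched below under my own assumptions (the paper's exact coupling of SA and PSO is not shown in this snippet), is to run standard PSO moves while letting a worse candidate occasionally replace a particle's personal best with the Metropolis probability, under a geometric cooling schedule. All parameter values are illustrative:

```python
import math
import random

def sa_pso(f, bounds, n_particles=20, iters=200, t0=1.0, cooling=0.95,
           w=0.7, c1=1.5, c2=1.5):
    """Hedged sketch of an SA-PSO hybrid, not the paper's algorithm:
    PSO velocity/position updates, plus Metropolis acceptance of
    worsening personal bests while the temperature t is high."""
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pcost = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    t = t0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # clamp the new position to the search bounds
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]),
                              bounds[d][1])
            c = f(x[i])
            delta = c - pcost[i]
            # Metropolis step: exploration early, exploitation as t shrinks
            if delta <= 0 or random.random() < math.exp(-delta / t):
                pbest[i], pcost[i] = x[i][:], c
            if c < gcost:  # global best only improves (keeps the record)
                gbest, gcost = x[i][:], c
        t *= cooling
    return gbest, gcost
```

Early on, the high temperature lets personal bests jump out of local basins (exploration); as t decays, the rule degenerates to plain PSO (exploitation), while the monotone global best preserves the best solution found.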

Simulation studies

In order to verify the effectiveness of the proposed SA-PSO algorithm, the four algorithms GA, SA, PSO, and SA-PSO were coded in Matlab. The computing platform consists of an Intel Core 2 Duo 2.80 GHz with 4 GB RAM. Twenty optimization functions of different dimensions are used to compare the results of the algorithms. Since these are stochastic algorithms, the solution found in each run may differ; therefore, each problem is repeated 100
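The structure of such an experiment can be sketched as follows. The Rastrigin function is a typical member of benchmark suites like this one (an assumed example; the paper's exact function list is not reproduced here), and the repeated-run harness mirrors the practice of solving each problem many times and reporting statistics:

```python
import math

def rastrigin(x):
    """Rastrigin benchmark: highly multimodal, global minimum 0 at the
    origin; a common test function for stochastic optimizers."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def repeat_runs(optimizer, n_runs=100):
    """Run a stochastic optimizer n_runs times and report the best and
    mean final cost, since individual runs differ."""
    costs = [optimizer() for _ in range(n_runs)]
    return min(costs), sum(costs) / len(costs)
```

Reporting best, mean (and in practice also worst or standard deviation) over repeated runs is what allows the fair quality and stability comparison between GA, SA, PSO, and SA-PSO described here.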

Conclusion

This paper presented a hybrid SA-PSO approach for solving practical combinatorial and nonlinear optimization problems. Through a series of systematic simulations, comparisons, and convergence analyses, the proposed SA-PSO algorithm was shown to effectively combine the characteristic of SA of converging to the optimal solution across the entire domain with the fast computation of PSO. The simulation results also showed that there is less chance of falling into

