Gravitational swarm optimizer for global optimization

https://doi.org/10.1016/j.swevo.2016.07.003

Abstract

In this paper, a new meta-heuristic method is proposed by combining Particle Swarm Optimization (PSO) and gravitational search in a coherent way. The advantages of swarm intelligence and the idea of a force of attraction between two particles are employed collectively to propose an improved meta-heuristic method for constrained optimization problems. Effective constraint handling is essential to the success of any constrained optimizer. In view of this, an improved constraint-handling method is proposed, designed in alignment with the constitutional mechanism of the proposed algorithm. The design of the algorithm is analyzed in many ways, and its theoretical convergence is also established in the paper. The efficiency of the proposed technique was assessed by solving the sets of 24 constrained and 15 unconstrained problems proposed in the IEEE CEC 2006 and 2015 sessions, respectively. The results are compared with those of 11 state-of-the-art algorithms for constrained problems and 6 state-of-the-art algorithms for unconstrained problems. A variety of measures are considered to examine the algorithm's convergence, success rate, and statistical behavior. The performance of the proposed constraint-handling method is judged by its ability to produce a feasible population. It is concluded that the proposed algorithm performs efficiently as a constrained optimizer.

Introduction

Constrained optimization problems (COPs) are an important class of problems in the field of optimization, because many real-life problems arising in engineering, computer science, finance, and business science can be modeled as nonlinear constrained optimization problems. The formulation of any real-life problem as a constrained optimization problem involves the use of many parameters. Determination of the optimal value of each of these parameters is very important because, together, these values constitute the solution to the problem.

Mathematically, a constrained optimization problem can be formulated in the form of an objective function that is constrained by some linear and nonlinear constraints. The following model provides the mathematical description of a nonlinear constrained optimization problem:

Min or Max $f(x)$, $x=[x_1,x_2,x_3,\ldots,x_D]$,

subject to a set of inequality constraints

$g_k(x) \le 0$, $k=1,2,3,\ldots,q$,

as well as equality constraints

$h_k(x) = 0$, $k=q+1,q+2,\ldots,m$,

where the objective function $f$ is defined over a subspace of the $D$-dimensional real vector space $S \subseteq \mathbb{R}^D$ and $x$ is a member of this $D$-dimensional vector space. The set of $q$ inequality constraints and $m-q$ equality constraints defines the feasible region $F \subseteq S$. $L_i \le x_i \le U_i$ are the lower and upper bounds of the decision variables in the domain $S$, where $i=1,\ldots,D$.
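As an illustration, the formulation above can be encoded directly. The toy two-variable problem below is hypothetical (not one of the CEC 2006 benchmarks) and shows a minimal sketch of an objective with one inequality constraint, one equality constraint, and a feasibility test; the tolerance `eps` for equality constraints follows common benchmark practice:

```python
def f(x):
    # Objective: minimize f(x) = x1^2 + x2^2.
    return x[0] ** 2 + x[1] ** 2

def g1(x):
    # Inequality constraint g1(x) <= 0, encoding x1 + x2 >= 1.
    return 1.0 - x[0] - x[1]

def h1(x):
    # Equality constraint h1(x) = 0, encoding x1 = x2.
    return x[0] - x[1]

def is_feasible(x, eps=1e-4):
    # A point is feasible if every g_k(x) <= 0 and every |h_k(x)| <= eps;
    # equality constraints are relaxed by a small tolerance eps.
    return g1(x) <= 0.0 and abs(h1(x)) <= eps

print(is_feasible([0.5, 0.5]))  # True: g1 = 0, h1 = 0
print(is_feasible([0.2, 0.2]))  # False: g1 = 0.6 > 0
```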

A class of optimization techniques is available in the literature for use with COPs. In principle, both deterministic and nondeterministic techniques are used to solve COPs. Unfortunately, the predefined assumptions of deterministic techniques restrict their applicability to specific classes of problems. This restriction directed us to focus on nondeterministic techniques, among which derivative-free, nature-inspired optimization techniques have become very popular because of their applicability to a wide range of optimization problems. This paper focuses on the development of a new meta-heuristic method for constrained optimization problems.

In recent years, many nature-inspired optimization techniques have been developed to solve constrained optimization problems. Initially, these techniques were only used to solve unconstrained optimization problems. Particle Swarm Optimization (PSO) [7], [28], [44], Differential Evolution (DE) [5], [33], [8], and the Gravitational Search Algorithm (GSA) [45] are known to deliver excellent performance on unconstrained optimization problems, but have been found to perform inconsistently on COPs. In particular, performance degrades markedly when problems must be solved at a high level of complexity. The complexity of a COP mainly arises when the ratio of the feasible region to the search region is very small [29]. This level of complexity calls for the combined use of different classes of algorithms to obtain a more powerful constrained optimizer. Many modifications and hybridizations intended to improve the efficiency and robustness of these algorithms have appeared in the literature. Banks et al. [3] provide detailed information about possible improvements of the PSO algorithm through hybridization and exhaustively discuss the major benefits of this development. Huang [23] improved the DE algorithm by evolving two subpopulations. Lwin and Qu [31] proposed a hybrid algorithm integrating population-based incremental learning and DE for constrained portfolio selection.

The success of any constrained optimization algorithm depends largely on the strength of the constraint-handling technique, the design of which has to be customized for the individual optimization algorithm. A few constraint-handling mechanisms capable of performing well have been proposed. For example, Deb [19] proposed an efficient constraint-handling approach for genetic algorithms, whereas Coello Coello [14] published a comprehensive survey of constraint-handling approaches for a large number of optimization algorithms. Mezura-Montes and Coello [35] also furnished a detailed report in which they presented the future scope and trends of constraint-handling mechanisms. In [18], the design of an effective constraint-handling method for multiple-swarm-based cultural PSO is described. The constraint-handling method discussed in [1] was successfully embedded within a penalty-function-based DE. The FPBRM constraint-handling method proposed by Mun and Cho [40] for a modified harmony search algorithm also produced good results for optimization problems. The advantage of these methods lies in the fact that each was specifically designed for the technique being used for the optimization. This kind of constraint-handling mechanism is naturally compatible with its algorithm and enhances the performance of the optimizer. The overall message emerging from these studies is that an effective constraint-handling method should be based on the individual algorithm in which it will be utilized. This inspired the authors to propose a new constraint-handling mechanism that is appropriate for and compatible with the proposed optimization algorithm.

This research extends the concept of the recently proposed shrinking hypersphere PSO (SHPSO) [51] for unconstrained optimization and engineering design problems, as opposed to the method in [52], which was extended for constrained optimization problems. The performance of the SHPSO approach was improved using the GSA [45] and a global constrained optimizer was established with theoretical proof of its convergence and stability.

The remainder of the paper is organized as follows. Section 2 briefly introduces the GSA, and Sections 2.1 and 2.2 discuss the principles of PSO and SHPSO, respectively. Section 3 presents the proposed SHPSO-GSA, and Section 4 contains a detailed theoretical and experimental analysis of the proposed algorithm. Section 5 describes the proposed constraint-handling method, and Section 6 discusses the experimental results, followed by the conclusions. A flow chart of the paper is depicted in Fig. 1.

Section snippets

Gravitational search algorithm

The GSA [45] is a recent meta-heuristic algorithm for solving nonlinear optimization problems. It is inspired by Newton's law of gravitation, which states that a force of attraction acts between every pair of particles in the universe, directly proportional to the product of their masses and inversely proportional to the square of the distance between them. In the GSA, each particle is equipped with four kinds of properties: position, mass, active gravitational mass (Mai),
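One GSA iteration can be sketched as follows. This is a deliberately simplified illustration, not the full algorithm of [45]: the decaying gravitational constant G(t) and the shrinking Kbest neighborhood are reduced to a fixed G and an all-to-all interaction. Note that the original GSA divides by the distance to the first power rather than its square, a deliberate deviation from Newton's law reported in [45]; the sketch follows that convention.

```python
import numpy as np

def gsa_step(positions, velocities, fitness, G=1.0, eps=1e-12):
    """One simplified GSA iteration: masses from fitness, pairwise
    gravitational attraction, then velocity and position updates."""
    n, d = positions.shape
    # Map fitness to masses: better (lower) fitness -> larger mass,
    # then normalize the masses to sum to one.
    best, worst = fitness.min(), fitness.max()
    m = (worst - fitness) / (worst - best + eps)
    masses = m / (m.sum() + eps)

    accel = np.zeros((n, d))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            dist = np.linalg.norm(diff) + eps
            # Acceleration of i due to j: F_ij / M_i, in which M_i
            # cancels, leaving only the attracting mass M_j.
            accel[i] += np.random.rand() * G * masses[j] * diff / dist
    velocities = np.random.rand(n, d) * velocities + accel
    positions = positions + velocities
    return positions, velocities
```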

Motivation of hybridization

The fundamental motivation for designing the SHPSO-GSA was to introduce the memory-enabled behavior of PSO into the memory-less approach of the GSA; i.e., the GSA does not keep track of the path of any individual particle in its memory. The memory functionality was incorporated into the GSA by using it jointly with the recently proposed SHPSO. The advantage of the GSA is the constitutional diversity of the algorithm, which originates from the fundamental concept of defined acceleration of a particle. Because the

A detailed analysis of the proposed algorithm

This section presents a rigorous analysis of the proposed SHPSO-GSA. The effect of the modified velocity-update equation, the theoretical convergence, and the converging ability are discussed in detail.

A new constraint handling method

A parameter-free constraint-handling approach is used to ensure the feasibility of the particles. The degree of constraint violation is evaluated using Eq. (42), and the total degree of violation of an individual x is obtained by summing the violations of each constraint, i.e., $G(x) = \sum_{j=1}^{m} G_j(x)$. In each iteration the swarm is sorted in the following three ways:

  • (i) The feasible solutions are listed in front of the infeasible solutions.

  • (ii) The feasible solutions are sorted in ascending order of
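The excerpt above is truncated, so the third sorting rule is not shown. The scheme resembles Deb-style feasibility rules, and a sketch under that assumption could look like the following; ranking infeasible solutions in ascending order of total violation is our assumption, not stated in the excerpt:

```python
def total_violation(g_values, h_values, eps=1e-4):
    """Total degree of violation G(x): sum of inequality excesses plus
    equality deviations beyond the tolerance eps."""
    g_part = sum(max(0.0, g) for g in g_values)
    h_part = sum(max(0.0, abs(h) - eps) for h in h_values)
    return g_part + h_part

def sort_swarm(swarm):
    """swarm: list of (fitness, g_values, h_values) tuples.
    Feasible solutions (violation == 0) come first, in ascending order
    of fitness; infeasible ones follow, in ascending order of total
    violation (an assumed rule -- the excerpt is truncated)."""
    def key(entry):
        fitness, g_values, h_values = entry
        v = total_violation(g_values, h_values)
        # (0, fitness) ranks feasible points; (1, violation) infeasible.
        return (0, fitness) if v == 0.0 else (1, v)
    return sorted(swarm, key=key)

swarm = [
    (5.0, [0.2], [0.0]),   # infeasible: g = 0.2 > 0
    (3.0, [-1.0], [0.0]),  # feasible
    (1.0, [-0.5], [0.0]),  # feasible, best fitness
]
print(sort_swarm(swarm)[0][0])  # 1.0: best feasible solution first
```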

Experimental analysis and results

The proposed SHPSO-GSA is tested on the twenty-four benchmark problems proposed in IEEE CEC 2006 [29]. The results are compared with those of eleven state-of-the-art algorithms, which are listed in Table 4. The experiments were performed using the following experimental setup.

Performance of SHPSO-GSA on unconstrained problems

In order to study the performance of SHPSO-GSA on unconstrained optimization problems, it was applied to the CEC 2015 benchmark [6] of expensive optimization test problems. All 15 problems were solved, and the results are compared with those of the state-of-the-art algorithms listed in Table 6.

The results are listed in the form of the best, worst, mean, and standard deviation (stdev) of the fitness values of the corresponding problem in Tables 12 and 13. The best result for each algorithm is presented in

Algorithm complexity

The time complexity of SHPSO-GSA is studied using the strategy defined in the CEC 2015 benchmark [6]. The strategy employed to measure the complexity is presented in Algorithm 5.

Algorithm 5

Strategy for the calculation of algorithm complexity.

1: Run the test program below:
2: for i = 1:1,000,000 do
3:   x = 0.55 + (double)i;
4:   x = x + x; x = x/2; x = x*x; x = sqrt(x); x = log(x); x = exp(x); x = x/(x+2);
5: end for
6: Computing time for the above = T0;
7: The average complete computing time for the algorithm = T1
8: The complexity
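The baseline time T0 can be reproduced directly. A minimal sketch follows, where the loop body mirrors Algorithm 5; the iteration count is a parameter here (reduced from 1,000,000 in the example call) so the snippet runs quickly:

```python
import math
import time

def baseline_time(n=1_000_000):
    """Measure T0: the wall-clock time for n iterations of the
    CEC reference loop from Algorithm 5."""
    start = time.perf_counter()
    for i in range(1, n + 1):
        x = 0.55 + float(i)
        x = x + x
        x = x / 2
        x = x * x
        x = math.sqrt(x)
        x = math.log(x)
        x = math.exp(x)
        x = x / (x + 2)
    return time.perf_counter() - start

t0 = baseline_time(100_000)
print(f"T0 (100k iterations) = {t0:.4f} s")
```

With T0 and the algorithm's average run time T1 in hand, the benchmark's complexity measure is a ratio of the two; the exact formula is cut off in the excerpt above and is given in [6].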

Conclusion

This paper presents a new algorithm named SHPSO-GSA, which was developed by hybridizing shrinking hypersphere-based PSO with the GSA, to produce an optimizer capable of improved constraint handling. The need for and design of the proposed algorithm are well established and justified in various respects. The validity of the designed hybrid was tested in multiple ways with positive results. An effective constraint-handling technique, which is compatible with the proposed algorithm, was defined to ensure

Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No. 2013R1A2A1A01013886) and the National Institute of Technology Uttarakhand, India. We would like to express our gratitude to the anonymous reviewers for their valuable suggestions, which improved the quality of the paper.

References (55)

  • W.H. Lim et al.

    Teaching and peer-learning particle swarm optimization

    Appl. Soft. Comput.

    (2014)
  • R. Mallipeddi et al.

    Differential evolution algorithm with ensemble of parameters and mutation strategies

    Appl. Soft. Comput.

    (2011)
  • E. Mezura-Montes et al.

Constraint-handling in nature-inspired numerical optimization: past, present and future

    Swarm Evol. Comput.

    (2011)
  • E. Mezura-Montes et al.

Differential evolution in constrained numerical optimization: an empirical study

    Inf. Sci.

    (2010)
  • A.W. Mohamed et al.

    Constrained optimization based on modified differential evolution algorithm

    Inf. Sci.

    (2012)
  • S. Mun et al.

    Modified harmony search optimization for constrained design problems

    Expert Syst. Appl.

    (2012)
  • F. Neri et al.

    Compact particle swarm optimization

    Inf. Sci.

    (2013)
  • E. Rashedi et al.

GSA: a gravitational search algorithm

    Inf. Sci.

    (2009)
  • S. Sun et al.

    A two-swarm cooperative particle swarms optimization

    Swarm Evol. Comput.

    (2014)
  • I. Trelea

The particle swarm optimization algorithm: convergence analysis and parameter selection

    Inf. Process. Lett.

    (2003)
  • F. Van den Bergh et al.

    A study of particle swarm optimization particle trajectories

    Inf. Sci.

    (2006)
  • A. Yadav et al.

Shrinking hypersphere based trajectory of particles in PSO

    Appl. Math. Comput.

    (2013)
  • X. Yuan et al.

    A new approach for unit commitment problem via binary gravitational search algorithm

    Appl. Soft. Comput.

    (2014)
  • M. Zhang et al.

    Differential evolution with dynamic stochastic selection for constrained optimization

    Inf. Sci.

    (2008)
  • M.M. Ali et al.

    A penalty function-based differential evolution algorithm for constrained global optimization

    Comput. Optim. Appl.

    (2012)
  • A. Banks et al.

A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications

    Nat. Comput. Ser.

    (2008)
  • M.R. Bonyadi et al.

    A hybrid particle swarm with a time-adaptive topology for constrained optimization

    Swarm Evol. Comput.

    (2014)