AHPS2: An optimizer using adaptive heterogeneous particle swarms
Introduction
Inspired by the natural behavior of fish schooling and bird flocking, Kennedy and Eberhart first proposed particle swarm optimization (PSO), a novel stochastic optimizer, in 1995 [28]. In PSO, a population of individuals, called particles, is employed to explore the search space and seek the global solution in a collaborative manner. The movement of each particle is guided by its own best historical position in conjunction with the population's best historical position. Compared with evolutionary algorithms such as artificial bee colony, differential evolution, and genetic algorithms, PSO has shown similar or even superior performance on both unimodal and multimodal optimization problems in terms of solution accuracy and computational efficiency [13]. As a result, PSO has been applied to various real-world problems including power systems, engineering optimization, and portfolio management [19], [40], [61], just to name a few.
The promise of PSO has motivated researchers to further improve its performance. The particle velocity update formulation is modified to improve solution accuracy in [11], [52], [55]; different population topologies are studied to improve information sharing in [29], [39]; new learning strategies for the particles are designed to avoid premature convergence in [4], [22], [23], [34], [37], [53], [54], [60]. Among these efforts, it is noted that most PSO variants are developed to tackle one specific type of problem (e.g., unimodal or multimodal), so the resulting performance on other types of problems is not guaranteed. For example, the PSO with comprehensive learning strategy (CLPSO) [34] performs well on complex multimodal problems but converges slowly on unimodal problems, while a PSO achieving a high convergence rate on unimodal problems tends to get trapped at local optima, which limits its application to multimodal problems [11]. To balance exploration and exploitation, an emerging research direction increases the diversity of the population by introducing sub-swarms into PSO, resulting in multi-swarm PSO (MS-PSO) [2], [15], [32], [33], [35], [36], [46], [47], [48], [49], [51], [70], [71], [73]. Compared to single-swarm PSO, MS-PSO has demonstrated the ability to explore the search space more thoroughly and is less likely to be trapped at local optima [71].
Research on MS-PSO mainly lies in two distinct directions: (1) multiple swarms of homogeneous particles (same learning strategy and topology), an extension of single-swarm PSO that deploys multiple swarms across the search space to increase diversity, where each swarm may have different properties [2], [36], [49], [71], [73]; and (2) swarms consisting of different types of particles, known as heterogeneous swarms [7], [17], [18], [20], [31], [43], [45]. Research from both directions treats the swarms as cooperative groups with minimal, if any, competition [35], [36], [49], [70], [72]. Furthermore, most MS-PSO variants are incapable of handling a diverse set of problems [35], [47], [48], [51], [70], [71], [72]. It is also worth pointing out that the majority of existing research keeps the population size of each swarm fixed over the evolution process, which may be computationally inefficient and lead to a low convergence speed [7], [17], [32], [43].
In this research, we introduce an optimizer with adaptive learning among competitive heterogeneous swarms, termed Adaptive Heterogeneous Particle SwarmS (AHPS2). Multiple heterogeneous swarms are maintained, each consisting of a group of homogeneous particles sharing the same learning strategy. Adaptive competition at the swarm level dynamically adjusts the size of each swarm based on group performance. To comprehensively evaluate the proposed algorithm, AHPS2 is compared with other state-of-the-art algorithms in three categories of experiments: (1) 36 30-dimensional benchmark problems with various properties, such as unimodal, multimodal, shifted, rotated, non-separable, and ill-conditioned functions, are employed to test AHPS2's performance on a diverse set of problems with common dimensionalities; (2) a set of large-scale optimization problems, i.e., the CEC'2010 1000-dimensional testbed comprising 20 benchmark functions, is used to verify AHPS2's scalability on high-dimensional problems; and (3) three real-world problems are employed to benchmark AHPS2's applicability to practical problems. The numerical results demonstrate that AHPS2 significantly improves the performance of PSO and outperforms most of the comparison algorithms across the experiments.
The rest of the paper is organized as follows. Section 2 provides an overview of PSO and its multi-swarm variants. Section 3 introduces the proposed AHPS2. Section 4 provides the analysis of the two types of learning strategies implemented by the swarms, followed by validation experiments on the competition strategies. Comprehensive experiments are presented in Section 5. Finally, the conclusion is drawn in Section 6.
Section snippets
Literature review
In the PSO, for any particle i, the velocity and position for the dth dimension are updated as:

v_{i,d}^{t+1} = v_{i,d}^{t} + c_1 r_1 (pbest_{i,d}^{t} - x_{i,d}^{t}) + c_2 r_2 (gbest_{d}^{t} - x_{i,d}^{t})
x_{i,d}^{t+1} = x_{i,d}^{t} + v_{i,d}^{t+1}

where t and t + 1 refer to the iterations before and after the update; v_{i,t} and x_{i,t} are the velocity and position of particle i at the tth iteration; c_1 and c_2 denote the cognitive and the social learning factors that decide the attraction of a particle toward its learning exemplars; r_1 and r_2 are random numbers uniformly distributed in [0, 1]; and pbest_i and gbest are the best historical positions of particle i and of the whole population, respectively.
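The update above can be sketched in a few lines of Python. This is a minimal illustration of the canonical PSO step from Kennedy and Eberhart [28]; the function name `pso_step` and the optional inertia weight `w` (a common refinement not part of the basic 1995 update) are our own conventions, not taken from the paper.

```python
import random

def pso_step(x, v, pbest, gbest, w=1.0, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle.

    x, v, pbest, gbest are lists of floats of equal length (one entry
    per dimension). With w=1.0 this reduces to the basic PSO update.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])    # cognitive attraction
              + c2 * r2 * (gbest[d] - x[d]))   # social attraction
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v

# Usage: one update pulling the particle toward pbest and gbest.
x2, v2 = pso_step([1.0, 2.0], [0.0, 0.0], pbest=[0.5, 1.5], gbest=[0.0, 0.0])
```

Note that when a particle sits exactly at both its personal best and the global best, the attraction terms vanish and the particle coasts on its current velocity, which is why an inertia weight below 1 is often used to damp the search over time.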
Proposed method: adaptive heterogeneous particle swarms
Adaptive Heterogeneous Particle SwarmS (AHPS2) introduces heterogeneous learning to the swarms to boost the search capability on diverse problems. In addition, an adaptive competition strategy among the swarms is implemented to guide evolution by simulating nature’s law, “survival of the fittest”.
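To make the "survival of the fittest" idea concrete, the sketch below shows one hypothetical way a competition step could reallocate particles between two swarms based on their group performance. The reallocation rule (move one particle from the losing swarm to the winning one, subject to a minimum swarm size) and the function `reallocate` are our assumptions for illustration; the paper's actual adaptation formula may differ.

```python
def reallocate(sizes, fitnesses, min_size=5):
    """Competition-driven size adjustment between two swarms.

    sizes:     [n1, n2], current population size of each swarm.
    fitnesses: best fitness achieved by each swarm (lower is better).
    Returns new sizes; the losing swarm never shrinks below min_size.
    """
    winner = 0 if fitnesses[0] < fitnesses[1] else 1
    loser = 1 - winner
    sizes = list(sizes)
    if sizes[loser] > min_size:
        sizes[loser] -= 1    # losing swarm gives up one particle
        sizes[winner] += 1   # winning swarm gains search capacity
    return sizes

# Example: swarm 0 outperforms swarm 1, so it gains a particle.
print(reallocate([20, 20], [0.3, 0.8]))  # [21, 19]
```

Keeping a minimum size for the weaker swarm preserves the heterogeneity that motivates the design: the losing learning strategy is never eliminated outright, since it may become the better performer at a later stage of the search.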
Analysis of the proposed strategies in AHPS2
The performance of AHPS2 heavily relies on two aspects: (1) the search capability of the heterogeneous learning and (2) the efficiency of the adaptive competition strategy. The detailed analysis of each is provided in the following sections.
Numerical experiments
In order to fully demonstrate AHPS2's performance on diverse problems with various properties, the numerical experiments are grouped into three categories: (1) comparison against 11 state-of-the-art swarm-based algorithms on a diverse set of 30-dimensional functions; (2) comparison of AHPS2 with 9 recent algorithms that perform well on large-scale global optimization problems (the 1000-dimensional CEC 2010 large-scale testbed is employed); and (3) comparison of AHPS2 against existing algorithms
Conclusion
In this research, a novel PSO termed Adaptive Heterogeneous Particle SwarmS (AHPS2) is proposed to enhance the original PSO's performance on a diverse set of problems with various properties. Two independent swarms, one with comprehensive learning and the other with subgradient learning, are introduced. An adaptive competition strategy is implemented between the two swarms to dynamically adjust the population sizes based on group performance. By theoretical analysis and experimental
Acknowledgments
This research was partially supported by funds from the National Science Foundation award under Grant No. CNS-1239257, from the United States Transportation Command (USTRANSCOM) in concert with the Air Force Institute of Technology (AFIT) under an ongoing Memorandum of Agreement and from the National Science Foundation of China (Grant No. 71171064). The U.S. Government is authorized to reproduce and distribute for governmental purposes notwithstanding any copyright annotation of the work by the
References (73)
- et al., Parallel memetic structures, Inf. Sci. (2013)
- et al., Enhancing distributed differential evolution with multicultural migration for global numerical optimization, Inf. Sci. (2013)
- Backtracking search optimization algorithm for numerical optimization problems, Appl. Math. Comput. (2013)
- Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm, Comput. Geosci. (2012)
- Performance assessment of foraging algorithms vs. evolutionary algorithms, Inf. Sci. (2012)
- et al., An intelligent augmentation of particle swarm optimization with multiple adaptive methods, Inf. Sci. (2012)
- et al., Memory-saving memetic computing for path-following mobile robots, Appl. Soft Comput. (2013)
- et al., Ockham’s Razor in memetic computing: three stage optimal memetic exploration, Inf. Sci. (2012)
- et al., A hybrid particle swarm optimization with estimation of distribution algorithm for solving permutation flowshop scheduling problem, Expert Syst. Appl. (2011)
- et al., Multiobjective evolutionary algorithms for portfolio management: a comprehensive literature review, Expert Syst. Appl. (2012)
- Solving spread spectrum radar polyphase code design problem by tabu search and variable neighbourhood search, Eur. J. Oper. Res.
- Compact particle swarm optimization, Inf. Sci.
- A multi-swarm optimizer based fuzzy modeling approach for dynamic systems processing, Neurocomputing
- MCPSO: a multi-swarm cooperative particle swarm optimizer, Appl. Math. Comput.
- Self-adaptive learning based particle swarm optimization, Inf. Sci.
- A study on scale factor in distributed differential evolution, Inf. Sci.
- Large scale evolutionary optimization using cooperative coevolution, Inf. Sci.
- A novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization, Appl. Soft Comput.
- Dynamic multi-swarm particle swarm optimizer with harmony search, Expert Syst. Appl.
- Multi-swarm optimization in dynamic environments, Appl. Evol. Comput.
- Subgradient Methods
- A hybrid particle swarm optimization – simulated annealing algorithm for the probabilistic travelling salesman problem, Stud. Inf. Control
- Migration policies, selection pressure, and parallel evolutionary algorithms, J. Heurist.
- The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput.
- A cooperative particle swarm optimizer with migration of heterogeneous probabilistic models, Swarm Intell.
- A multi-swarm PSO using charged particles in a partitioned search space for continuous optimization, Comput. Optim. Appl.
- A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control, IEEE Trans. Neural Networks
- Built by Animals: The Natural History of Animal Architecture
- Gradual distributed real-coded genetic algorithms, IEEE Trans. Evol. Comput.
- OPSO: orthogonal particle swarm optimization and its application to task assignment problems, IEEE Trans. Syst. Man Cybern. Part A: Syst. Humans