Information Sciences

Volume 280, 1 October 2014, Pages 26-52
AHPS2: An optimizer using adaptive heterogeneous particle swarms

https://doi.org/10.1016/j.ins.2014.04.043

Highlights

  • Multiple heterogeneous swarms, each consisting of a group of homogeneous particles, are modeled.

  • Nature’s law of “survival of the fittest” is simulated via an adaptive competition strategy with two models.

  • Heterogeneous learning based on two complementary techniques is theoretically analyzed and experimentally tested.

  • The adaptive competition strategy with the immigration model is more effective in practice.

  • The proposed method shows better or comparable performance relative to the comparison algorithms in the numerical experiments.

Abstract

Particle swarm optimization (PSO) has suffered from premature convergence and a lack of diversity on complex problems since its inception. An emerging advance in PSO is multi-swarm PSO (MS-PSO), which is designed to increase the diversity of the swarms. However, most MS-PSO variants were developed for particular problems, so their search capability on diverse landscapes remains less than satisfactory. Moreover, research on MS-PSO has so far treated the sub-swarms as cooperative groups with minimal competition, if any. In addition, the size of each sub-swarm is fixed, which may incur excessive computational cost. To address these issues, a novel optimizer using Adaptive Heterogeneous Particle SwarmS (AHPS2) is developed in this research. In AHPS2, multiple heterogeneous swarms are introduced, each consisting of a group of homogeneous particles that share the same learning strategy. Two complementary search techniques, comprehensive learning and a subgradient method, are studied. To best exploit the heterogeneous learning strategies, an adaptive competition strategy is proposed so that the size of each swarm can be dynamically adjusted based on its group performance. Analyses of the swarm heterogeneity and the competition models are presented to validate their effectiveness. Furthermore, comparisons between AHPS2 and state-of-the-art algorithms are grouped into three categories: 36 regular benchmark functions (30-dimensional), 20 large-scale benchmark functions (1000-dimensional) and 3 real-world problems. Experimental results show that AHPS2 delivers better or comparable performance compared with the other swarm-based and evolutionary algorithms in terms of solution accuracy and statistical tests.

Introduction

Inspired by the natural behavior of fish schooling and bird flocking, Kennedy and Eberhart first proposed particle swarm optimization (PSO), a novel stochastic optimizer, in 1995 [28]. In PSO, a population of individuals, called particles, is employed to explore the hyperspace and search for the global solution in a collaborative manner. The search movement of each particle is guided by its own best historical position in conjunction with the population’s best historical position. Compared with evolutionary algorithms such as artificial bee colony, differential evolution and the genetic algorithm, PSO has shown similar or even superior performance on both unimodal and multimodal optimization problems in terms of solution accuracy and computational efficiency [13]. As a result, PSO has been applied to various real-world problems including power systems, engineering optimization and portfolio management [19], [40], [61], to name a few.

The promise of PSO has motivated researchers to further improve its performance. The particle velocity update formulation has been modified to improve solution accuracy in [11], [52], [55]; different population topologies have been studied to improve information sharing in [29], [39]; and new learning strategies for the particles have been designed to avoid premature convergence in [4], [22], [23], [34], [37], [53], [54], [60]. Among these efforts, it is noted that most PSO variants are developed to tackle one specific type of problem (e.g., unimodal or multimodal), so the resulting performance on other types of problems is not guaranteed. For example, PSO with the comprehensive learning strategy (CLPSO) [34] performs well on complex multi-modal problems but converges slowly on unimodal problems, while PSO variants with a high convergence rate on unimodal problems tend to become trapped in local optima, which limits their application to multi-modal problems [11]. To balance exploration and exploitation, an emerging line of research increases the diversity of the population by introducing sub-swarms into PSO, resulting in multi-swarm PSO (MS-PSO) [2], [15], [32], [33], [35], [36], [46], [47], [48], [49], [51], [70], [71], [73]. Compared to single-swarm PSO, MS-PSO has demonstrated the ability to explore the search space more thoroughly and is less likely to become trapped in local optima [71].

Research on MS-PSO mainly follows two distinct directions: (1) multiple swarms with homogeneous particles (same learning strategy and topology), an extension of single-swarm PSO that deploys multiple swarms, each possibly with different properties, across the search space to increase diversity [2], [36], [49], [71], [73]; and (2) swarms consisting of different types of particles, known as heterogeneous swarms [7], [17], [18], [20], [31], [43], [45]. Research from both directions treats the swarms as cooperative groups with minimal competition, if any [35], [36], [49], [70], [72]. Furthermore, most MS-PSO variants are incapable of handling a diverse set of problems [35], [47], [48], [51], [70], [71], [72]. It is also worth pointing out that the majority of existing research keeps the population size of each swarm fixed over the evolution process, which may be computationally inefficient and lead to a low convergence speed [7], [17], [32], [43].

In this research, we introduce adaptive learning among competitive heterogeneous swarms, termed Adaptive Heterogeneous Particle SwarmS (AHPS2). Multiple heterogeneous swarms are employed, each consisting of a group of homogeneous particles that share the same learning strategy. Adaptive competition at the swarm level dynamically adjusts the size of each swarm based on its group performance. To comprehensively evaluate the proposed algorithm, AHPS2 is compared with other state-of-the-art algorithms in three categories of experiments: (1) 36 30-dimensional benchmark problems with various properties, such as unimodal, multimodal, shifted, rotated, non-separable and ill-conditioned functions, are employed to test AHPS2’s performance on a diverse set of problems with common dimensionalities; (2) a set of large-scale optimization problems, namely the CEC 2010 1000-dimensional testbed comprising 20 benchmark functions, is used to assess AHPS2’s scalability on high-dimensional problems; and (3) 3 real-world problems are employed to benchmark AHPS2’s applicability to practical problems. The numerical results demonstrate that AHPS2 significantly improves the performance of PSO and outperforms most of the comparison algorithms in the experiments.

The rest of the paper is organized as follows. Section 2 provides an overview of PSO and its multi-swarm variants. Section 3 introduces the proposed AHPS2. Section 4 analyzes the two learning strategies implemented by the swarms, followed by validation experiments on the competition strategies. Comprehensive experiments are presented in Section 5. Finally, conclusions are drawn in Section 6.

Section snippets

Literature review

In PSO, for any particle i, the velocity and position in the dth dimension are updated as:

v_{i,t+1}^{d} = v_{i,t}^{d} + c_1 × rand_{1,i}^{d} × (Bp_{i,t}^{d} − x_{i,t}^{d}) + c_2 × rand_{2,i}^{d} × (Bg_{t}^{d} − x_{i,t}^{d})

x_{i,t+1}^{d} = x_{i,t}^{d} + v_{i,t+1}^{d}

where t and t + 1 refer to the iterations before and after the update; v_{i,t} and x_{i,t} are the velocity and position of particle i at the tth iteration; c_1 and c_2 denote the cognitive and social learning factors that determine the attraction of a particle toward its learning exemplars; rand_{1,i}^{d} and rand_{2,i}^{d} are
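The canonical update above can be sketched in code. This is a minimal illustration only; the function name, array shapes and the ±v_max velocity clamp are our assumptions, not details taken from the paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, v_max=1.0, rng=None):
    """One canonical PSO update: velocity first, then position.

    x, v, pbest: arrays of shape (n_particles, dim); gbest: shape (dim,).
    c1, c2 are the cognitive and social learning factors.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # rand1: fresh uniform draw per particle and dimension
    r2 = rng.random(x.shape)  # rand2: independent of rand1
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)  # velocity clamping (common practice; an assumption here)
    return x + v, v
```

Note that r1 and r2 are drawn per dimension, matching the per-dimension superscript d in the update rule above.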

Proposed method: adaptive heterogeneous particle swarms

Adaptive Heterogeneous Particle SwarmS (AHPS2) introduces heterogeneous learning to the swarms to boost the search capability on diverse problems. In addition, an adaptive competition strategy among the swarms is implemented to guide evolution by simulating nature’s law, “survival of the fittest”.
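The notion of swarm sizes competing for particles can be illustrated with a deliberately simplified sketch. All names, the one-particle-per-round reallocation rule and the minimum-size guard below are our assumptions, not the paper's actual competition models:

```python
def reallocate(sizes, improvements, min_size=2):
    """Adaptive competition sketch: move one population slot from the
    worst-performing swarm to the best-performing one.

    sizes: current size of each swarm.
    improvements: recent fitness improvement achieved by each swarm
    (larger is better, i.e., a bigger reduction of the objective).
    min_size: no swarm is allowed to shrink below this size.
    """
    winner = max(range(len(sizes)), key=lambda i: improvements[i])
    loser = min(range(len(sizes)), key=lambda i: improvements[i])
    sizes = list(sizes)
    if winner != loser and sizes[loser] > min_size:
        sizes[loser] -= 1
        sizes[winner] += 1
    return sizes
```

Invoked periodically during the run, such a rule grows the swarm whose learning strategy currently fits the landscape while keeping every strategy alive via the minimum-size guard.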

Analysis of the proposed strategies in AHPS2

The performance of AHPS2 heavily relies on two aspects: (1) the search capability of the heterogeneous learning and (2) the efficiency of the adaptive competition strategy. The detailed analysis of each is provided in the following sections.

Numerical experiments

In order to fully demonstrate AHPS2’s performance on diverse problems with various properties, the numerical experiments are grouped into three categories: (1) comparison against 11 state-of-the-art swarm-based algorithms on a diverse set of 30-dimensional functions; (2) comparison of AHPS2 with 9 recent algorithms that perform well on large-scale global optimization problems (the CEC 2010 large-scale testbed with 1000 dimensions is employed); and (3) comparison of AHPS2 against existing algorithms

Conclusion

In this research, a novel PSO termed Adaptive Heterogeneous Particle SwarmS (AHPS2) is proposed to enhance the original PSO’s performance on a diverse set of problems with various properties. Two independent swarms, one with comprehensive learning and the other with subgradient learning, are introduced. An adaptive competition strategy is implemented between the two swarms to dynamically adjust the population sizes based on group performance. By theoretical analysis and experimental

Acknowledgments

This research was partially supported by funds from the National Science Foundation award under Grant No. CNS-1239257, from the United States Transportation Command (USTRANSCOM) in concert with the Air Force Institute of Technology (AFIT) under an ongoing Memorandum of Agreement and from the National Science Foundation of China (Grant No. 71171064). The U.S. Government is authorized to reproduce and distribute for governmental purposes notwithstanding any copyright annotation of the work by the

References (73)

  • N. Mladenović et al.

    Solving spread spectrum radar polyphase code design problem by tabu search and variable neighbourhood search

    Eur. J. Oper. Res.

    (2003)
  • F. Neri et al.

    Compact particle swarm optimization

    Inf. Sci.

    (2013)
  • B. Niu et al.

    A multi-swarm optimizer based fuzzy modeling approach for dynamic systems processing

    Neurocomputing

    (2008)
  • B. Niu et al.

    MCPSO: a multi-swarm cooperative particle swarm optimizer

    Appl. Math. Comput.

    (2007)
  • Y. Wang et al.

    Self-adaptive learning based particle swarm optimization

    Inf. Sci.

    (2011)
  • M. Weber et al.

    A study on scale factor in distributed differential evolution

    Inf. Sci.

    (2011)
  • Z.Y. Yang et al.

    Large scale evolutionary optimization using cooperative coevolution

    Inf. Sci.

    (2008)
  • D. Yazdani et al.

    A novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization

    Appl. Soft Comput.

    (2013)
  • S.Z. Zhao et al.

    Dynamic multi-swarm particle swarm optimizer with harmony search

    Expert Syst. Appl.

    (2011)
  • A. Auger, N. Hansen, Performance evaluation of an advanced local search evolutionary algorithm, in: 2005 IEEE Congress...
  • T. Blackwell et al.

    Multi-swarm optimization in dynamic environments

    Appl. Evol. Comput.

    (2004)
  • S. Boyd

    Subgradient Methods

    (2010)
  • G.G. Cabrera et al.

    A hybrid particle swarm optimization – simulated annealing algorithm for the probabilistic travelling salesman problem

    Stud. Inf. Control

    (2012)
  • E. Cantu-Paz

    Migration policies, selection pressure, and parallel evolutionary algorithms

    J. Heurist.

    (2001)
  • L. Cartwright, T. Hendtlass, A heterogeneous particle swarm, in: 4th Australian Conference on Artificial Life:...
  • M. Clerc et al.

    The particle swarm – explosion, stability, and convergence in a multidimensional complex space

    IEEE Trans. Evol. Comput.

    (2002)
  • S. Das, P.N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary...
  • M. El-Abd et al.

    A cooperative particle swarm optimizer with migration of heterogeneous probabilistic models

    Swarm Intell.

    (2010)
  • A. El Dor et al.

    A multi-swarm PSO using charged particles in a partitioned search space for continuous optimization

    Comput. Optim. Appl.

    (2012)
  • S.M. Elsayed, R.A. Sarker, D.L. Essam, Differential evolution with multiple strategies for solving CEC2011 real-world...
  • A.P. Engelbrecht, Heterogeneous particle swarm optimization, in: Proceedings of the 7th International Conference on...
  • A.P. Engelbrecht, Scalability of a heterogeneous particle swarm optimizer, in: 2011 IEEE Symposium on Swarm...
  • M. Han et al.

    A dynamic feedforward neural network based on Gaussian particle swarm optimization and its application for predictive control

    IEEE Trans. Neural Networks

    (2011)
  • M. Hansell

    Built by Animals: The Natural History of Animal Architecture

    (2009)
  • F. Herrera et al.

    Gradual distributed real-coded genetic algorithms

    IEEE Trans. Evol. Comput.

    (2000)
  • S.Y. Ho et al.

    OPSO: orthogonal particle swarm optimization and its application to task assignment problems

    IEEE Trans. Syst. Man Cybern. Part A: Syst. Humans

    (2008)