Bare-bones particle swarm optimization with disruption operator

https://doi.org/10.1016/j.amc.2014.03.152

Abstract

Bare-bones particle swarm optimization (BPSO) is attractive because it is easy to implement and parameter-free. However, it suffers from premature convergence because it quickly loses diversity. To enhance population diversity and speed up the convergence rate of BPSO, this paper proposes a novel disruption strategy, originating from astrophysics, to shift between exploration and exploitation abilities during the search process. We study the distribution and diversity induced by the proposed disruption operator, and illustrate the positional relationship between the original and disrupted positions. The proposed Disruption BPSO (DBPSO) has been evaluated on a set of well-known nonlinear benchmark functions and compared with several variants of BPSO and other evolutionary algorithms (such as DE, ABC, ES and BSA). Experimental results and statistical analysis confirm the promising performance of DBPSO, at the least computational cost, in solving most of the nonlinear functions.

Introduction

Particle swarm optimization (PSO), an attractive swarm intelligence algorithm in which particles in a population exchange information, was first proposed by Kennedy and Eberhart in 1995 [1]. Compared with other optimization algorithms, PSO has many advantages, such as simplicity, high performance and fast convergence, and has attracted researchers’ attention for solving benchmark tests and real-world engineering problems [2], [3], [4], [5]. However, the performance of the classical PSO [6] depends on control parameters such as the inertia weight and acceleration coefficients. Various modified strategies have been proposed to overcome this disadvantage [6], [7], [8], [9]. In 2003, Kennedy [10] proposed the Bare-bones PSO (BPSO) after studying the convergence properties of PSO. It must be emphasized that BPSO needs no control-parameter tuning because it discards the velocity term of PSO, i.e., it is parameter-free. In BPSO, the position of each particle is randomly sampled from a Gaussian distribution based on two leaders: the personal best position and the global best position of the particles.
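The Gaussian sampling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each coordinate is drawn from a normal distribution whose mean is the midpoint of the personal and global best positions and whose standard deviation is their absolute distance, which is the standard BPSO update rule.

```python
import random

def bpso_update(pbest_i, gbest):
    """One BPSO position update for a single particle.

    Each coordinate is drawn from a Gaussian whose mean is the midpoint
    of the personal best and global best, and whose standard deviation
    is the absolute distance between them.  No inertia weight or
    acceleration coefficients are needed: the update is parameter-free.
    """
    return [random.gauss((p + g) / 2.0, abs(p - g))
            for p, g in zip(pbest_i, gbest)]
```

Note that when a particle's personal best coincides with the global best, the standard deviation collapses to zero and the particle stops moving, which is one reason BPSO loses diversity quickly.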

However, like many PSO variants, BPSO suffers from premature convergence because it quickly loses diversity as the number of iterations grows. In order to enhance the performance of BPSO, many modified BPSO algorithms have been proposed in recent years. Krohling and Mendel [11] combined BPSO with a Gaussian or Cauchy jump strategy triggered when no fitness improvement is observed. Zhang et al. [12] proposed a BPSO that uses the mutation and crossover operators of the differential evolution algorithm to update certain particles in the population. A hybridization of BPSO with differential evolution is also proposed in [13]. Chen et al. [14] proposed a unified bare-bones particle swarm optimization (UBPSO), which integrates local and global learning strategies, to solve economic dispatch problems with multiple fuel options. Zhang et al. [15] modified the position-updating strategy and introduced a mutation operator with an action range varying over time into BPSO to expand the search capability; the proposed algorithms were applied to environmental/economic dispatch problems. Yao et al. [16] proposed a new BPSO variant, BPSO with neighborhood search, to achieve a tradeoff between exploration and exploitation. In [17], Wang embedded opposition-based learning (OBL) into BPSO and utilized a new boundary search strategy to solve constrained nonlinear optimization problems. In [18], Hsieh et al. proposed a modified BPSO that adds three extra parameters to attain better performance with smaller standard deviation and faster convergence. In addition, Blackwell [19] presented a theoretical analysis of BPSO. A series of experimental trials confirmed that BPSO situated at the edge of collapse is comparable to other PSO algorithms and that its performance can be further improved with the use of an adaptive distribution.

Although these variants have enhanced the performance of BPSO, some problems remain, such as difficult implementation, new parameters to adjust, or high computational cost. It is therefore worth studying how to improve performance with the least computational cost and an easy implementation. To this end, we introduce a disruption operator into BPSO. We compare the distribution and diversity with those of BPSO, and illustrate the positional relation among the global best position, the original position and the disrupted position. The resulting DBPSO is also applied to a set of nonlinear benchmark functions to confirm its performance by comparison with other modified BPSOs and other evolutionary algorithms (EAs).

The remainder of the paper is structured as follows. In Section 2, the classical PSO and Bare-bones PSO (BPSO) are introduced. The disruption Bare-bones PSO (DBPSO) and its analysis are given in Section 3. Benchmark functions and parameter settings are provided in Section 4. Section 5 presents the experimental results and discussion on the benchmark functions. Conclusions are given in Section 6.

Section snippets

Classical PSO and bare-bones PSO (BPSO)

PSO is inspired by the behavior of bird flocking and fish schooling; it was first introduced by Kennedy and Eberhart in 1995 [1] as a new heuristic algorithm. In PSO, a swarm consists of a set of particles, and each particle represents a potential solution of an optimization problem. Considering the ith particle of a swarm with N particles in a D-dimensional space, its position and velocity at iteration t are denoted by X_i(t) = (x_i1(t), x_i2(t), …, x_iD(t)) and V_i(t) = (v_i1(t), v_i2(t), …, v_iD(t)). Then, the
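The snippet above breaks off before the update equations, but the classical PSO step it refers to is standard and can be sketched as follows. The inertia weight w and acceleration coefficients c1, c2 shown are common defaults, not values taken from this paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One classical PSO velocity/position update for a single particle.

    w is the inertia weight and c1/c2 the acceleration coefficients;
    these are exactly the control parameters that BPSO eliminates.
    """
    new_v, new_x = [], []
    for xj, vj, pj, gj in zip(x, v, pbest, gbest):
        # velocity: inertia term + cognitive pull + social pull
        vj2 = (w * vj
               + c1 * random.random() * (pj - xj)
               + c2 * random.random() * (gj - xj))
        new_v.append(vj2)
        new_x.append(xj + vj2)  # position moves along the new velocity
    return new_x, new_v
```

When a particle sits exactly on both its personal best and the global best, the cognitive and social terms vanish and only the inertia term w·v remains, which is the degenerate case that motivates BPSO's velocity-free formulation.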

Disruption bare-bones PSO (DBPSO)

In order to improve the exploration and exploitation abilities of BPSO, a novel operator called “Disruption”, originating from astrophysics, is introduced into BPSO. The disruption strategy adds the least computational cost, yet it greatly enhances the convergence precision and speed. Meanwhile, we study the differences in the distributions and diversities of particles between BPSO and DBPSO, and illustrate the positional relation among the global best
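The snippet cuts off before the operator's definition, which appears in Section 3 of the full paper. As a rough orientation only, the following sketch follows the original astrophysics-inspired disruption operator from the earlier literature; the threshold C, the convergence test, and both factor forms below are assumptions for illustration, not this paper's exact formulas:

```python
import random

def disrupt(x, nearest_dist, best_dist, C=1e-2):
    """Illustrative sketch of a disruption operator (assumed form).

    nearest_dist: distance from this particle to its nearest neighbour
    best_dist:    distance from this particle to the global best

    A particle is disrupted only when the swarm has converged around it,
    i.e. its nearest neighbour is much closer than the global best.
    """
    if best_dist == 0 or nearest_dist / best_dist >= C:
        return x  # not converged: leave the position unchanged
    if best_dist >= 1:
        # far from gbest: large random rescaling -> exploration
        factor = nearest_dist * random.uniform(-1.0, 1.0)
    else:
        # close to gbest: tiny perturbation around 1 -> exploitation
        factor = 1.0 + nearest_dist * random.uniform(-0.5, 0.5)
    return [xj * factor for xj in x]
```

The point of the two branches is the shift the paper describes: the same operator explores (large, sign-flipping rescaling) when the particle is far from the global best, and exploits (small multiplicative jitter) when it is close.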

Benchmark functions

To evaluate the performance of DBPSO, it is applied to 18 well-known benchmark functions used in [9], [24]. Table 1 lists the 18 test functions. They are high-dimensional problems divided into three classes: unimodal, multimodal, and rotated and shifted problems. Functions f1 to f6 are unimodal. As for f4, it is the Rosenbrock function, which is unimodal for D = 2 and 3 but may have multiple minima when D > 3. Functions f7 to f12 are multimodal functions, and f13 to f18 are
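As a concrete example of the benchmarks above, the Rosenbrock function mentioned as f4 has the standard form below (the numbering f4 is the paper's; the formula itself is the usual definition, with global minimum 0 at x = (1, …, 1)):

```python
def rosenbrock(x):
    """Standard Rosenbrock function: sum over consecutive coordinate
    pairs of a steep valley term plus a quadratic pull toward 1.
    Unimodal for D = 2 and 3, but can have multiple minima for D > 3."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Its long, narrow, curved valley makes it a common stress test for the convergence precision that the DBPSO comparison targets.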

Experimental results and discussion

In this section, we compare DBPSO with the other selected algorithms listed in Table 2 in terms of the mean and standard deviation of the best value, list the mean number of function evaluations (FEs) needed to reach the acceptable value, and graphically show the convergence. Meanwhile, we apply nonparametric statistical inference to the results to verify the performance of DBPSO.

Conclusion

In this paper, we introduce a disruption operator into bare-bones PSO, yielding an algorithm called DBPSO, to help BPSO shift between its exploration and exploitation abilities and to help it escape local optima. By plotting the distribution and diversity figures for the selected functions, we can intuitively see that DBPSO enhances the population diversity, which improves the chance of escaping local optimal solutions. By applying the disruption operator on those particles, which

References (36)

  • Y. Shi et al., A modified particle swarm optimizer
  • Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proceedings of IEEE Congress on Evolutionary...
  • M. Clerc et al., The particle swarm: explosion, stability and convergence in a multi-dimensional complex space, IEEE Trans. Evol. Comput. (2002)
  • A. Ratnaweera et al., Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. (2004)
  • J. Kennedy, Bare bones particle swarms, in: Proceedings of the 2003 IEEE Swarm Intelligence Symposium, 2003, pp....
  • R.A. Krohling, E. Mendel, Bare bones particle swarm optimization with Gaussian or Cauchy jumps, in: Proceedings of IEEE...
  • C.H. Chen, J.S. Sheu, Unified bare bone particle swarm for economic dispatch with multiple fuel cost functions, in:...
  • J.Z. Yao, D.F. Han, Improved barebones particle swarm optimization with neighborhood search and its application on ship...