Dynamic cluster in particle swarm optimization algorithm

Abstract

Particle swarm optimization is an optimization method based on the simulated social behavior of artificial particles in a swarm, inspired by bird flocks and fish schools. An underlying component that influences the exchange of information between particles in a swarm is its topological structure; this structure therefore has a great influence on the behavior of the optimization method. In this study, we propose DCluster, a dynamic topology based on a combination of two well-known topologies, Four-cluster and Fitness. The proposed topology is analyzed and compared with six other topologies used in the standard PSO algorithm, on a set of benchmark test functions and several well-known constrained and unconstrained engineering design problems. Our comparisons demonstrate that DCluster outperforms the other tested topologies and leads to satisfactory performance while avoiding premature convergence.

Notes

  1. SPSO2006 uses the random topology R2006.

References

  • Ali M, Pant M, Singh VP (2010) Two modified differential evolution algorithms and their applications to engineering design problems. World J Model Simul 6(1):72–80

  • Bastos-Filho CJA, Carvalho DF, Caraciolo MP, Miranda PBC, Figueiredo EMN (2009) Multi-ring particle swarm optimization. In: Wellington Pinheiro dos Santos (ed) Evolutionary computation. InTech, Vienna

  • Cagnina LC, Esquivel SC, Coello CA (2008) Solving engineering optimization problems with the simple constrained particle swarm optimizer. Informatica (Slovenia) 32(3):319–326

  • Clerc M (2006) Particle swarm optimization. ISTE (International Scientific and Technical Encyclopaedia), London

  • Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, MHS'95, Nagoya, Japan, pp 39–43

  • El Dor A, Clerc M, Siarry P (2012) A multi-swarm PSO using charged particles in a partitioned search space for continuous optimization. Comput Optimiz Appl 53(1):271–295

  • El-Saleh AA, Ismail M, Viknesh R, Mark CC, Chan ML (2009) Particle swarm optimization for mobile network design. IEICE Electron Express 6(17):1219–1225

  • Engelbrecht AP (2006) Fundamentals of computational swarm intelligence. Wiley, Chichester

  • Golinski J (1973) An adaptive optimization system applied to machine synthesis. J Eng Ind Trans ASME 8(4):419–436

  • Homaifar A, Qi CX, Lai SH (1994) Constrained optimization via genetic algorithms. Simulation 62(4):242–253

  • Hsieh ST, Sun TY, Liu CC, Tsai SJ (2009) Efficient population utilization strategy for particle swarm optimizer. IEEE Trans Syst Man Cybernet B: Cybernet 39(2):444–456

  • Janson S, Middendorf M (2005) A hierarchical particle swarm optimizer and its adaptive variant. IEEE Trans Syst Man Cybernet B 35(6):1272–1282

  • Kannan BK, Kramer SN (1994) An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 116(2):318–320

  • Kennedy J (1999) Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. In: Proceedings of the 1999 congress on evolutionary computation, CEC'99, vol 3, Washington, DC, USA, pp 1931–1938

  • Kennedy J (2000) Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp 1507–1512. IEEE

  • Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the 2002 IEEE Congress on Evolutionary Computation, CEC'02, Honolulu, HI, USA, pp 1671–1676

  • Lane J, Engelbrecht A, Gain J (2008) Particle swarm optimization with spatially meaningful neighbours. In: Proceedings of the 2008 IEEE Swarm Intelligence Symposium. Piscataway, NJ, IEEE, pp 1–8.

  • Li C, Yang S, Nguyen TT (2011) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybernet B 42(3):627–646

  • Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of the 2005 IEEE Swarm Intelligence Symposium, pp 124–129, Pasadena, CA, USA

  • Liang JJ, Suganthan PN (2006) Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp 9–16, Canada

  • Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl Soft Comput 10(2):629–640

  • Marsaglia G, Zaman A (1993) The KISS generator. Technical report, Department of Statistics, Florida State University, Tallahassee, FL, USA

  • Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evolut Comput 8(3):204–210

  • Mendes R, Kennedy J, Neves J (2003) Watch thy neighbor or how the swarm can learn from its environment. In: Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pp 88–94, Indianapolis, Indiana, USA

  • Nasir M, Das S, Maity D, Sengupta S, Halder U, Suganthan PN (2012) A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization. Inf Sci 209:16–36

  • Particle swarm central (2012) http://particleswarm.info. Accessed 1 Oct 2014

  • Ragsdell K, Phillips D (1976) Optimal design of a class of welded structures using geometric programming. J Eng Ind Trans ASME 98(3):1021–1025

  • Richards M, Ventura D (2003) Dynamic sociometry in particle swarm optimization. In: Proceedings of the joint conference on information sciences, pp 1557–1560, Cary, North Carolina, USA

  • Safavieh E, Gheibi A, Abolghasemi M, Mohades A (2009) Particle swarm optimization with Voronoi neighborhood. In: Proceedings of the 14th international CSI computer conference (CSICC2009), pp 397–402, Tehran, Iran

  • Salomon R (1996) Reevaluating genetic algorithm performance under coordinate rotation of benchmark functions. BioSyst 39:263–278

  • Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of the 1999 congress on evolutionary computation, CEC'99, vol 3, pp 1945–1950, Washington, DC, USA

  • Suganthan PN, Hansen N, Liang JJ, Deb K, Chen YP, Auger A, Tiwari S (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical report. Nanyang Technological University, Singapore

  • Wang YX, Xiang QL (2008a) Particle swarms with dynamic ring topology. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp 419–423. IEEE

  • Wang YX, Xiang QL (2008b) Exploring new learning strategies in differential evolution algorithm. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp 204–209. IEEE

  • Watts DJ (1999) Small worlds: the dynamics of networks between order and randomness. Princeton University Press, Princeton

  • Watts DJ, Strogatz SH (1998) Collective dynamics of ‘small-world’ networks. Nature 393:440–442

  • Zhao SZ, Suganthan PN, Pan QK, Tasgetiren MF (2011) Dynamic multi-swarm particle swarm optimizer with harmony search. Expert Syst Appl 38(4):3735–3742

Author information

Correspondence to Patrick Siarry.

Appendix

See Table 5.

Table 5 Benchmark functions

1.1 Real-life problems

1.1.1 F1: The Lennard-Jones atomic cluster problem

The Lennard-Jones (LJ) problem is the simplest form of the molecular conformation problem: it consists in finding a configuration of the atoms in a cluster or molecule that minimizes the total potential energy.

We present the LJ problem as follows: let \(K\) denote the number of atoms in the cluster, let \(x^i=(x^{i}_{1},x^{i}_{2},x^{i}_{3})\) be the coordinates of atom \(i\) in three-dimensional space, and let \(X=(x^1,\ldots ,x^K)\). The LJ potential energy \(v(r_{ij})\) of a pair of atoms \((i,j)\) is given by \(v(r_{ij})={\frac{1}{r_{ij}^{12}}}-{\frac{1}{r_{ij}^{6}}}\), \(1 \le i,j \le K\), where \(r_{ij}=\left\| x^i - x^j\right\| \).

For a single pair of neutral atoms, the LJ potential is a simple unimodal function. As the number of atoms in the cluster grows, the number of pairwise interactions increases, and the potential must be evaluated for every pair of atoms; the resulting rugged energy landscape contains many local minima. The function to be minimized in this application is as follows:

$${\mathrm{{Min}}}\,V_K(x)=\sum _{i<j}v\left( \left\| x_{i}-x_{j}\right\| \right) =\sum _{i=1}^{K-1}\sum _{j=i+1}^{K}\left( \frac{1}{\left\| x_{i}- x_{j}\right\| ^{12}}-\frac{1}{\left\| x_{i}-x_{j}\right\| ^{6}}\right).$$
(9)

In this work, we use nine atoms for this problem, hence the search space is \([-2, 2]^{27}\). The value of the global optimum is \(f^*=-24.113360\).
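
For concreteness, a minimal Python sketch of Eq. (9) for the nine-atom instance follows; the function name and the flattened coordinate layout are our own conventions, not part of the paper.

```python
import numpy as np

def lennard_jones(x, n_atoms=9):
    """LJ potential energy of Eq. (9) for a flattened coordinate vector x.

    x has length 3 * n_atoms; atom i occupies x[3*i:3*i+3].
    """
    coords = np.asarray(x, dtype=float).reshape(n_atoms, 3)
    energy = 0.0
    for i in range(n_atoms - 1):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(coords[i] - coords[j])
            energy += 1.0 / r**12 - 1.0 / r**6
    return energy

# Example: evaluate a random 9-atom configuration inside the search space [-2, 2]^27
rng = np.random.default_rng(0)
print(lennard_jones(rng.uniform(-2.0, 2.0, size=27)))
```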

1.1.2 F2: Gear train design problem

A gear train is a set or system of gears arranged to transfer rotational torque from one part of a mechanical system to another. Gear train design is a common problem in mechanical engineering: the train must be designed so that the gear ratio is as close as possible to 1/6.931. The number of teeth of each gear must be between 12 and 60 and, since numbers of teeth are integers, the variables must be integers. The mathematical model of the gear train design problem is given by:

$${\mathrm{{Min}}}\,f(x) = \left\{ \frac{1}{6.931}-\frac{T_dT_b}{T_a T_f} \right\} ^2=\left\{ \frac{1}{6.931}-\frac{x_1x_2}{x_3x_4}\right\} ^2.$$
(10)

Bounds: \(12\le x_i\le 60\), \(i = 1,2,3,4\), where the \(x_i\) must be integers, and \(T_a, T_b, T_d\) and \(T_f\) are the numbers of teeth on gears A, B, D and F, respectively.
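
A minimal Python sketch of the objective in Eq. (10); the variable-to-gear mapping follows the equation above, and the test configuration is one commonly reported in the literature.

```python
def gear_train(x):
    """Squared deviation of the gear ratio from 1/6.931, Eq. (10).

    x = (x1, x2, x3, x4) are the integer numbers of teeth, each in {12, ..., 60}.
    """
    x1, x2, x3, x4 = (round(v) for v in x)  # enforce the integer requirement
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# A low-error configuration reported in the literature
print(gear_train((19, 16, 43, 49)))  # ~2.7e-12
```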

1.1.3 F3: Frequency modulation sound parameter identification

The frequency modulation sound parameter identification problem is a complex multimodal optimization problem. The task is to determine the six parameters \(a_1, w_1, a_2, w_2, a_3\) and \(w_3\) of the frequency-modulated sound model given below:

$$y(t)=a_1 \times \sin (w_1 \times t \times \theta +a_2 \times \sin (w_2 \times t \times \theta +a_3 \times \sin (w_3 \times t \times \theta )))$$
(11)

with \(\theta = 2\pi /100\). The fitness function is defined as the sum of squared errors between the evolved data and the model data, as follows:

$$f(a_1,w_1, a_2,w_2, a_3,w_3) = \sum \nolimits _{t=0}^{100} \left( y(t)-y_0(t)\right) ^2$$

where the model data are given by the following equation:

$$y_0(t) = 1.0 \times \sin (5.0 \times t \times \theta +1.5 \times \sin (4.8 \times t \times \theta +2.0 \times \sin (4.9 \times t \times \theta ))).$$
(12)

Bounds: \(-6.4\le a_i, w_i \le 6.35\), \(i=1,2,3\).
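
The fitness can be coded directly from Eqs. (11) and (12); the sketch below (our own naming) checks that the target parameter set yields a zero error.

```python
import math

THETA = 2.0 * math.pi / 100.0

def fm_target(t):
    """Target waveform y0(t) of Eq. (12)."""
    return 1.0 * math.sin(5.0 * t * THETA
                          + 1.5 * math.sin(4.8 * t * THETA
                                           + 2.0 * math.sin(4.9 * t * THETA)))

def fm_fitness(params):
    """Sum of squared errors between the evolved waveform and the target, t = 0..100."""
    a1, w1, a2, w2, a3, w3 = params
    error = 0.0
    for t in range(101):
        y = a1 * math.sin(w1 * t * THETA
                          + a2 * math.sin(w2 * t * THETA
                                          + a3 * math.sin(w3 * t * THETA)))
        error += (y - fm_target(t)) ** 2
    return error

# The target parameters reproduce y0 exactly, so the error vanishes
print(fm_fitness((1.0, 5.0, 1.5, 4.8, 2.0, 4.9)))  # 0.0
```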

1.1.4 F4: The spread spectrum radar polyphase code design problem

A well-known optimal design problem arises in the field of spread spectrum radar polyphase codes; it is well suited to the application of global optimization algorithms. The problem under consideration is modeled as a min–max, nonlinear, non-convex optimization problem in continuous variables, with numerous local optima. It can be expressed as follows:

$${\mathrm{{Min}}}\,f(X) = {\mathrm{{max}}} \left\{ f_1(X), \dots , f_{2m}(X)\right\}.$$
(13)

where:

$$X = \left\{ (x_1,\dots , x_n)\in R^n \mid 0 \le x_j \le 2\pi , j=1,2,\dots ,n \right\} \;\hbox {and}\;m = 2n-1,$$

with:

$$\begin{aligned} f_{2i-1}(x)&= \sum _{j=i}^{n} \cos \left( \sum _{k= \mid 2i-j-1 \mid + 1}^{j} x_k\right) \quad i = 1,2, \dots , n;\\ f_{2i}(x)&= 0.5 + \sum _{j=i+1}^{n} \cos \left( \sum _{k= \mid 2i-j \mid + 1}^{j} x_k\right) \quad i = 1,2, \dots , n-1;\\ f_{m+i}(X)&= - f_i(X), i = 1, 2, \dots , m. \end{aligned}$$

Here the objective is to minimize the modulus of the largest sample of the so-called autocorrelation function, which is related to the complex envelope of the compressed radar pulse at the optimal receiver output; the variables \(x_k\) represent symmetrized phase differences. This problem belongs to the class of continuous min–max global optimization problems, which are characterized by a piecewise smooth objective function.
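
A direct, unoptimized Python transcription of Eq. (13) and the sample functions above (indexing shifted to 0-based arrays; the names are ours):

```python
import math

def radar_polyphase(x):
    """Min-max objective of Eq. (13); x holds the n phase variables in [0, 2*pi]."""
    n = len(x)
    f = []
    for i in range(1, n + 1):
        # f_{2i-1}(x), i = 1, ..., n
        s = 0.0
        for j in range(i, n + 1):
            s += math.cos(sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1)))
        f.append(s)
        # f_{2i}(x), i = 1, ..., n-1
        if i <= n - 1:
            s = 0.5
            for j in range(i + 1, n + 1):
                s += math.cos(sum(x[k - 1] for k in range(abs(2 * i - j) + 1, j + 1)))
            f.append(s)
    f += [-v for v in f]  # f_{m+i}(X) = -f_i(X), i = 1, ..., m
    return max(f)

# Example: a 7-variable instance (m = 13) evaluated at a constant phase vector
print(radar_polyphase([1.0] * 7))
```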

1.1.5 F5: Compression spring design problem

This is a simplified version of a more difficult problem (see Clerc 2006). The function to minimize is:

$${\mathrm{{Min}}}\,f(x) = \pi ^2 \frac{x_2 x_3^2 \left( x_1 + 2\right) }{4},$$
(14)

where \(x_1 \in \{1,\dots ,70\}\) with a granularity of 1, \(x_2 \in \left[ 0.6, 3.0\right] \) and \(x_3 \in \{0.207,\dots ,0.5\}\) with a granularity of 0.0001. The best known solution is \((7, 1.386599591, 0.292)\), which gives the fitness value \(f^* = 2.6254214578\). This problem has five constraints (Clerc 2006), and a penalty method is used to take them into account.
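
A sketch of the objective of Eq. (14) together with the mixed-variable granularity handling; the five constraints and the penalty term are omitted here, and the helper names are illustrative only.

```python
import math

def spring_weight(x1, x2, x3):
    """Objective of Eq. (14); constraint handling (penalty method) is omitted."""
    return math.pi ** 2 * x2 * x3 ** 2 * (x1 + 2) / 4.0

def snap_to_grid(x1, x2, x3):
    """Clamp the variables to their ranges and granularities."""
    x1 = int(min(max(round(x1), 1), 70))                    # integer in {1, ..., 70}
    x2 = min(max(x2, 0.6), 3.0)                             # continuous in [0.6, 3.0]
    x3 = min(max(round(x3 / 0.0001) * 0.0001, 0.207), 0.5)  # step of 0.0001
    return x1, x2, x3

print(spring_weight(*snap_to_grid(7, 1.386599591, 0.292)))  # ~2.6254 (best known value)
```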

1.1.6 F6: Perm function

The function to be minimized is (PSC 2012):

$${\mathrm{{Min}}}\,f(x) = \sum \nolimits _{k=1}^{5} \left[ \sum \nolimits _{i=1}^{5} \left( i^k + \beta \right) \left\{ \left( x_i / i\right) ^k -1\right\} \right] ^2.$$
(15)

In the domain \(x \in \left[ -5, 5\right] ^5\), the global optimum is \((1, 2, 3, 4, 5)\), which gives the fitness value \(f^* = 0\). The value of \(\beta \) is set to 10.
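
Eq. (15) in Python, written for a general dimension \(n\) (here \(n = 5\)); the function name is ours.

```python
def perm_function(x, beta=10.0):
    """Perm function of Eq. (15)."""
    n = len(x)
    total = 0.0
    for k in range(1, n + 1):
        inner = sum((i ** k + beta) * ((x[i - 1] / i) ** k - 1.0)
                    for i in range(1, n + 1))
        total += inner ** 2
    return total

print(perm_function([1, 2, 3, 4, 5]))  # 0.0 at the global optimum
```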

1.1.7 F7: Mobile network design problem

In mobile network design, the challenge is to efficiently determine the locations of base station controllers, mobile switching centers, and their connecting links, for given locations of the base transceiver stations. A full description of this problem is lengthy; the reader can refer to El-Saleh et al. (2009) for details. The search space comprises 38 binary variables and four continuous variables; hence, the problem has 42 dimensions.

1.1.8 F8: Pressure vessel design problem

This problem was proposed by Kannan and Kramer (1994). The objective is to minimize the total cost, including the cost of the material, forming and welding. The mathematical model of the pressure vessel optimization problem can be described as follows (Clerc 2006):

$${\mathrm{{Min}}}\,f(x) = 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3.$$
(16)

Subject to:

$$g_1(x)= 0.0193x_3 - x_1 \le 0,\quad g_2(x) = 0.00954x_3 - x_2 \le 0, \quad g_3(x) = 750\times 1728 - \pi x_3^2\left(x_4 + \frac{4}{3}x_3\right) \le 0 $$

where \(x_1 \in \{0.0625,\dots , 12.5\}\) with a granularity of 0.0625, \(x_2 \in \{0.625,\dots ,12.5\}\) with a granularity of 0.0625, \(x_3 \in \,]0, 240]\) and \(x_4 \in \,]0, 240]\). The best known solution is \((1.125, 0.625, 58.2901554, 43.6926562)\), which gives the fitness value \(f^* = 7197.72893\).
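
A compact sketch of the cost in Eq. (16) and the three constraints, evaluated at the best known solution (the names are ours):

```python
import math

def pressure_vessel(x):
    """Cost of Eq. (16) and the constraint values g1-g3 (feasible when all <= 0)."""
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [0.0193 * x3 - x1,
         0.00954 * x3 - x2,
         750.0 * 1728.0 - math.pi * x3 ** 2 * (x4 + 4.0 * x3 / 3.0)]
    return cost, g

cost, g = pressure_vessel((1.125, 0.625, 58.2901554, 43.6926562))
print(round(cost, 5), [round(v, 6) for v in g])  # cost ~ 7197.73 at the best known solution
```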

1.1.9 F9: Welded beam design problem

The problem is to design a welded beam for minimum cost, subject to some constraints (Ragsdell and Phillips 1976). The problem can be mathematically formulated as follows:

$${\mathrm{{Min}}}\,f(x) = 1.10471x_1^2x_2 + 0.04811x_3x_4(14.0 + x_2).$$
(17)

Subject to:

$$\begin{aligned} g_1(x)&= \tau (x) - 13000 \le 0\quad g_2(x) = \sigma (x) - 30000 \le 0\quad g_3(x) = x_1 - x_4 \le 0 \\ g_4(x)&= 6000 - P_c(x) \le 0\quad g_5(x) = 0.125 - x_1 \le 0\quad g_6(x) = \delta (x) - 0.25 \le 0 \\ g_7(x)&= 0.10471x_1^2 + 0.04811x_3x_4(14.0 + x_2) - 5.0 \le 0 \end{aligned}$$

where:

$$\begin{aligned} \tau (x)&=\sqrt{(\tau ^{\prime })^2+2\tau ^{\prime }\tau ^{\prime \prime } \frac{x_2}{2R}+(\tau ^{\prime \prime })^2} \quad M= 6000\left( 14+\frac{x_2}{2}\right) \quad {J = 2\left\{ \sqrt{2}x_1x_2\left[ \frac{x_2^2}{12} +(\frac{x_1+x_3}{2})^2\right] \right\} }\\ R &= \sqrt{\frac{x_2^2}{4}+(\frac{x_1+x_3}{2})^2} \quad \sigma (x)= \frac{504000}{x_4x_3^2} \quad \delta (x)= \frac{2.1952}{x_3^3x_4} \quad \tau ^{\prime }=\frac{6000}{\sqrt{2}x_1x_2}\\ \tau ^{\prime \prime }&= \frac{MR}{J} \quad {P_c(x)= \frac{4.013(30\times 10^6)\sqrt{\frac{x_3^2x_4^6}{36}}}{196} \left( 1-\frac{x_3\sqrt{\frac{30\times 10^6}{4(12\times 10^6)}}}{28}\right) .} \end{aligned}$$

With: \(x_1, x_4 \in \left[ 0.1, 2.0\right] \) and \(x_2, x_3 \in \left[ 0.1, 10.0\right] \).

Best solution: \(x^* = (0.205730, 3.470489, 9.036624, 0.205729)\) where \(f^* = 1.724852\).
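
The cost of Eq. (17) and the constraints g1–g7 can be transcribed directly from the expressions above; the sketch below (our own naming) evaluates them at the reported best solution.

```python
import math

def welded_beam(x):
    """Cost of Eq. (17) and the seven constraint values g1-g7 (feasible when all <= 0)."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

    tau_p = 6000.0 / (math.sqrt(2.0) * x1 * x2)
    M = 6000.0 * (14.0 + x2 / 2.0)
    R = math.sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 * (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp ** 2)
    sigma = 504000.0 / (x4 * x3 ** 2)
    delta = 2.1952 / (x3 ** 3 * x4)
    P_c = (4.013 * 30e6 * math.sqrt(x3 ** 2 * x4 ** 6 / 36.0) / 196.0
           * (1.0 - x3 * math.sqrt(30e6 / (4.0 * 12e6)) / 28.0))

    g = [tau - 13000.0,
         sigma - 30000.0,
         x1 - x4,
         6000.0 - P_c,
         0.125 - x1,
         delta - 0.25,
         0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0]
    return cost, g

cost, g = welded_beam((0.205730, 3.470489, 9.036624, 0.205729))
print(round(cost, 6))  # ~1.724852
```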

1.1.10 F10: Speed reducer design problem

This problem was modeled by Golinski (1973) as a single-level optimization problem. The objective is to minimize the weight of the speed reducer while satisfying a number of constraints imposed by gear and shaft design practices. Mathematically, the problem is specified as follows:

$$\begin{aligned} {\mathrm{{Min}}}\,f(x)&= 0.7854x_1x_2^2(3.3333x_3^2 + 14.9334x_3 - 43.0934)- 1.508x_1(x_6^2 + x_7^2)\nonumber \\&+ 7.4777(x_6^3 + x_7^3)+0.7854(x_4x_6^2+ x_5x_7^2). \end{aligned}$$
(18)

Subject to:

$$\begin{aligned} &g_1(x) = \frac{27}{x_1x_2^2x_3} - 1 \le 0 \quad g_2(x) = \frac{397.5}{x_1x_2^2x_3^2} - 1 \le 0 \quad g_3(x) = \frac{1.93x_4^3}{x_2x_3x_6^4} - 1 \le 0 \quad g_4(x) = \frac{1.93x_5^3}{x_2x_3x_7^4} - 1 \le 0 \\ & g_5(x) =\frac{1.0}{110x_6^3}\sqrt{\left( \frac{745 x_4}{x_2x_3}\right) ^2+16.9\times 10^6}-1\le 0 \quad g_6(x) = \frac{x_2x_3}{40} - 1 \le 0 \quad g_7(x) =\frac{5x_2}{x_1} -1 \le 0 \\ & {g_8(x) =\frac{1.0}{85x_7^3}\sqrt{ \left( \frac{745x_5}{x_2x_3}\right) ^2+157.5\times 10^6}-1\le 0} \quad {g_9(x) = \frac{x_1}{12x_2}-1 \le 0}\\ & {g_{10}(x) = \frac{1.5x_6+1.9}{x_4} -1 \le 0} \quad {g_{11}(x) = \frac{1.1x_7+1.9}{x_5} -1 \le 0.} \end{aligned}$$

With: \(2.6 \le x_1 \le 3.6\), \(0.7 \le x_2 \le 0.8\), \(17 \le x_3 \le 28\), \(7.3 \le x_4 \le 8.3\), \(7.8 \le x_5 \le 8.3\), \(2.9 \le x_6 \le 3.9\), and \(5.0 \le x_7 \le 5.5\). Best solution: \(x^* = (3.5, 0.7, 17, 7.3, 7.8, 3.350214, 5.286683)\), where \(f^* = 2996.348165\).
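
The weight objective of Eq. (18) in Python (the eleven constraints are omitted for brevity; the name is ours):

```python
def speed_reducer_weight(x):
    """Weight objective of Eq. (18)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

best = (3.5, 0.7, 17, 7.3, 7.8, 3.350214, 5.286683)
print(round(speed_reducer_weight(best), 6))  # ~2996.348
```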

1.1.11 F11: Constraint problem 1

The function to be minimized is:

$${\mathrm{{Min}}}\,f(x) = 5\sum _{i=1}^{4}x_i - 5\sum _{i=1}^{4}{x_i}^2 - \sum _{i=5}^{13}x_i.$$
(19)

Subject to:

$$\begin{array}{lll} g_1=2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0 &{} g_4= -8x_1 + x_{10} \le 0 &{} g_7= -2x_4 - x_5 + x_{10} \le 0 \\ g_2=2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0 &{} g_5= -8x_2 + x_{11} \le 0 &{} g_8= -2x_6 - x_7 + x_{11} \le 0 \\ g_3= 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0 &{} g_6= -8x_3 + x_{12} \le 0 &{} g_9= -2x_8 - x_9 + x_{12} \le 0\\ \end{array}$$

Bounds: \(0 \le x_i \le 1\) \((i=1, \ldots , 9, 13)\), and \(0 \le x_i \le 100\) \((i=10, 11, 12)\). The global optimum is \((1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1)\), where \(f^* = -15\).
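
A sketch of Eq. (19) and the constraints g1–g9, with a feasibility check at the reported global optimum (the names are ours):

```python
def cp1_objective(x):
    """Objective of Eq. (19); x is a 13-dimensional vector."""
    return 5.0 * sum(x[:4]) - 5.0 * sum(v ** 2 for v in x[:4]) - sum(x[4:13])

def cp1_constraints(x):
    """Constraint values g1-g9; the point is feasible when all are <= 0."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13 = x
    return [2*x1 + 2*x2 + x10 + x11 - 10,
            2*x1 + 2*x3 + x10 + x12 - 10,
            2*x2 + 2*x3 + x11 + x12 - 10,
            -8*x1 + x10, -8*x2 + x11, -8*x3 + x12,
            -2*x4 - x5 + x10, -2*x6 - x7 + x11, -2*x8 - x9 + x12]

x_star = [1] * 9 + [3, 3, 3, 1]
print(cp1_objective(x_star), all(g <= 0 for g in cp1_constraints(x_star)))  # -15.0 True
```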

1.1.12 F12: Constraint problem 2

The function to be minimized is:

$$\begin{aligned} {\mathrm{{Min}}}\,f(x)&= x_1^2 + x_2^2 + x_1x_2-14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2\nonumber \\&\quad \;+ 2(x_6 - 1)^2+ 5x_7^2 \\& \quad \; + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2+45. \end{aligned}$$
(20)

Subject to:

$$\begin{aligned} g_1&= -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0 \quad g_4= -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0 \\ g_2&= 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0 \quad g_5= 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0\\ g_3&= -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0 \quad g_6= x_1^2 + 2(x_2 - 2)^2 - 2x_1x_2 + 14x_5 - 6x_6 \le 0 \\ g_7&= 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0 \\ g_8&= 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0 \\ \end{aligned}$$

Bounds: \(-10 \le x_i \le 10\) \((i = 1, 2,\ldots , 10)\). The optimum solution is \(x^*=(2.171996\), \(2.363683\), \(8.773926\), \(5.095984\), \(0.9906548\), \(1.430574\), \(1.321644\), \(9.828726\), \(8.280092\), \(8.375927)\), where \(f(x^*)= 24.3062091\).
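
Similarly, Eq. (20) and the constraints g1–g8 can be transcribed directly; evaluating at the reported optimum recovers f(x*) ≈ 24.306 (names are ours):

```python
def cp2_objective(x):
    """Objective of Eq. (20); x is a 10-dimensional vector."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 = x
    return (x1**2 + x2**2 + x1*x2 - 14*x1 - 16*x2 + (x3 - 10)**2
            + 4*(x4 - 5)**2 + (x5 - 3)**2 + 2*(x6 - 1)**2 + 5*x7**2
            + 7*(x8 - 11)**2 + 2*(x9 - 10)**2 + (x10 - 7)**2 + 45)

def cp2_constraints(x):
    """Constraint values g1-g8; the point is feasible when all are <= 0."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 = x
    return [-105 + 4*x1 + 5*x2 - 3*x7 + 9*x8,
            10*x1 - 8*x2 - 17*x7 + 2*x8,
            -8*x1 + 2*x2 + 5*x9 - 2*x10 - 12,
            -3*x1 + 6*x2 + 12*(x9 - 8)**2 - 7*x10,
            5*x1**2 + 8*x2 + (x3 - 6)**2 - 2*x4 - 40,
            x1**2 + 2*(x2 - 2)**2 - 2*x1*x2 + 14*x5 - 6*x6,
            3*(x1 - 2)**2 + 4*(x2 - 3)**2 + 2*x3**2 - 7*x4 - 120,
            0.5*(x1 - 8)**2 + 2*(x2 - 4)**2 + 3*x5**2 - x6 - 30]

x_star = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548,
          1.430574, 1.321644, 9.828726, 8.280092, 8.375927)
print(round(cp2_objective(x_star), 6))   # ~24.306209
print(max(cp2_constraints(x_star)))      # ~0 (several constraints are active at the optimum)
```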

Cite this article

El Dor, A., Lemoine, D., Clerc, M. et al. Dynamic cluster in particle swarm optimization algorithm. Nat Comput 14, 655–672 (2015). https://doi.org/10.1007/s11047-014-9465-2
