
Stopping criteria for MAPLS-AW, a hybrid multi-objective evolutionary algorithm

  • Methodologies and Application

Abstract

Evolutionary algorithms are widely used to solve multi-objective optimization problems effectively by performing global search over the solution space to find better solutions. Hybrid evolutionary algorithms have been introduced to enhance the quality of the solutions obtained. One such hybrid algorithm is the memetic algorithm with preferential local search using adaptive weights (MAPLS-AW) (Bhuvana and Aravindan in Soft Comput, doi:10.1007/s00500-015-1593-9, 2015). MAPLS-AW, a variant of the NSGA-II algorithm, recognizes the elite solutions of the population and gives them preference for local search during evolution. This paper proposes a termination scheme derived from the features of MAPLS-AW. The objective of the proposed scheme is to detect convergence of the population without compromising the quality of the solutions generated by MAPLS-AW. The proposed termination scheme consists of five stopping measures, of which two are newly proposed in this paper to predict the convergence of the population. An experimental study has been carried out to analyze the performance of the proposed termination scheme and to compare it with existing termination schemes. Several constrained and unconstrained multi-objective benchmark test problems are used for this comparison. Additionally, a real-time application, economic emission and load dispatch, has also been used to check the performance of the proposed scheme. The results show that the proposed scheme identifies convergence of the population much earlier than the existing stopping schemes without compromising the quality of solutions.


References

  • Abedian A, Ghiasi M, Dehghan-Manshadi B (2006) An introduction to a new criterion proposed for stopping GA optimization process of a laminated composite plate. JAST-TEHRAN 3(4):167


  • Arab A, Alfi A (2015) An adaptive gradient descent-based local search in memetic algorithm applied to optimal controller design. Inf Sci 299:117–142


  • Balakrishnan S, Kannan P, Aravindan C, Subathra P (2003) On-line emission and economic load dispatch using adaptive Hopfield neural network. Appl Soft Comput 2(4):297–305


  • Basu M (2011) Economic environmental dispatch using multi-objective differential evolution. Appl Soft Comput 11(2):2845–2853


  • Bhandari D, Murthy C, Pal SK (2012) Variance as a stopping criterion for genetic algorithms with elitist model. Fundam Inform 120(2):145–164


  • Bhuvana J, Aravindan C (2011a) Design of hybrid genetic algorithm with preferential local search for multiobjective optimization problems. In: Information technology and mobile communication, communications in computer and information science, vol 147. Springer, Berlin, pp 312–316

  • Bhuvana J, Aravindan C (2011b) Preferential local search with adaptive weights in evolutionary algorithms for multiobjective optimization problems. In: International conference of soft computing and pattern recognition (SoCPaR), pp 358–363. doi:10.1109/SoCPaR.2011.6089270

  • Bhuvana J, Aravindan C (2015) Memetic algorithm with preferential local search using adaptive weights for multi objective optimization problems. Soft Comput. doi:10.1007/s00500-015-1593-9

  • Bishop G, Welch G (2001) An introduction to the Kalman filter. Proc SIGGRAPH Course 8(27):599–3175


  • Bos A (1998) Aircraft conceptual design by genetic/gradient-guided optimization. Eng Appl Artif Intell 11(3):377–382


  • Bui LT, Wesolkowski S, Bender A, Abbass HA, Barlow M, (2009) A dominance-based stability measure for multi-objective evolutionary algorithms. In: IEEE congress on evolutionary computation (CEC’09). IEEE, pp 749–756

  • Chaudhuri A, Haftka RT (2013) A stopping criterion for surrogate based optimization using EGO. In: 10th world congress on structural and multidisciplinary optimization, pp 1–9

  • Chen X, Ong YS, Lim MH, Tan KC (2011) A multi-facet survey on memetic computation. IEEE Trans Evol Comput 15(5):591–607


  • Črepinšek M, Liu SH, Mernik L (2012) A note on teaching-learning-based optimization algorithm. Inf Sci 212:79–93


  • Črepinšek M, Liu SH, Mernik M (2014) Replication and comparison of computational experiments in applied evolutionary computing: common pitfalls and guidelines to avoid them. Appl Soft Comput 19:161–170


  • Deb K (1998) Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evol Comput 7:205–230


  • Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, New York

  • Deb K, Thiele L, Laumanns M, Zitzler E (2002) Scalable multi-objective optimization test problems. In: Proceedings of the congress on evolutionary computation (CEC’02), Honolulu, pp 825–830

  • El-Mihoub TA, Hopgood AA, Nolle L, Battersby A (2006) Hybrid genetic algorithms: a review. Eng Lett 13(2):124–137


  • Fraser G, Arcuri A, McMinn P (2013) Test suite generation with memetic algorithms. In: Proceedings of the 15th annual conference on genetic and evolutionary computation. ACM, New York, pp 1437–1444

  • Fraser G, Arcuri A, McMinn P (2015) A memetic algorithm for whole test suite generation. J Syst Softw 103:311–327


  • Goel T, Stander N (2010) A study on the convergence of multiobjective evolutionary algorithms. In: Preprint submitted to the 13th AIAA/ISSMO conference on multidisciplinary analysis optimization, pp 1–18

  • Guerrero JL, García J, Marti L, Molina JM, Berlanga A (2009) A stopping criterion based on Kalman estimation techniques with several progress indicators. In: Proceedings of the 11th annual conference on genetic and evolutionary computation. ACM, New York, pp 587–594

  • Guerrero JL, Marti L, Berlanga A, Garcia J, Molina JM (2010) Introducing a robust and efficient stopping criterion for MOEAs. In: IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

  • Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506


  • Ishibuchi H, Yoshida T, Murata T (2003) Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Trans Evol Comput 7(2):204–223


  • Kalman RE (1960) A new approach to linear filtering and prediction problems. J Basic Eng 82(1):35–45


  • Martí L, García J, Berlanga A, Molina JM (2007) A cumulative evidential stopping criterion for multiobjective optimization evolutionary algorithms. In: Proceedings of the 2007 GECCO conference companion on genetic and evolutionary computation. ACM, New York, pp 2835–2842

  • Martí L, García J, Berlanga A, Molina JM, (2009) An approach to stopping criteria for multi-objective optimization evolutionary algorithms: the MGBM criterion. In: IEEE congress on evolutionary computation (CEC’09). IEEE, pp 1263–1270

  • Martí Orosa L (2011) Scalable multi-objective optimization. PhD thesis, Universidad Carlos III de Madrid, Spain

  • Maybeck PS (1990) The Kalman filter: an introduction to concepts. In: Autonomous robot vehicles. Springer, New York, pp 194–204

  • Merz P, Freisleben B (2000) Fitness landscapes, memetic algorithms, and greedy operators for graph bipartitioning. Evol Comput 8(1):61–91


  • Mignotte M, Collet C, Perez P, Bouthemy P (2000) Hybrid genetic optimization and statistical model based approach for the classification of shadow shapes in sonar imagery. IEEE Trans Pattern Anal Mach Intell 22(2):129–141


  • Molina D, Lozano M, García-Martínez C, Herrera F (2010) Memetic algorithms for continuous optimisation based on local search chains. Evol Comput 18(1):27–63


  • Mongus D, Repnik B, Mernik M, Žalik B (2012) A hybrid evolutionary algorithm for tuning a cloth-simulation model. Appl Soft Comput 12(1):266–273


  • Neri F, Cotta C (2012) Memetic algorithms and memetic computing optimization: a literature review. Swarm Evol Comput 2:1–14


  • Ong YS, Lim MH, Zhu N, Wong KW (2006) Classification of adaptive memetic algorithms: a comparative study. IEEE Trans Syst Man Cybern Part B Cybern 36(1):141–152


  • Roudenko O, Schoenauer M (2004) A steady performance stopping criterion for pareto-based evolutionary algorithms. In: The 6th international multi-objective programming and goal programming conference

  • Santamaría J, Cordøn O, Damas S, García-Torres J, Quirin A (2009) Performance evaluation of memetic approaches in 3d reconstruction of forensic objects. Soft Comput 13(8–9):883–904


  • Trautmann H, Ligges U, Mehnen J, Preuss M (2008) A convergence criterion for multiobjective evolutionary algorithms based on systematic statistical testing. In: Parallel problem solving from nature-PPSN X. Springer, New York, pp 825–836

  • Trautmann H, Wagner T, Naujoks B, Preuss M, Mehnen J (2009) Statistical methods for convergence detection of multi-objective evolutionary algorithms. Evol comput 17(4):493–509


  • Van Veldhuizen DA, Lamont GB (1999) Multiobjective evolutionary algorithm test suites. In: Proceedings of the 1999 ACM symposium on applied computing (SAC’99). ACM, New York, pp 351–357

  • Veček N, Mernik M, Črepinšek M (2014) A chess rating system for evolutionary algorithms: a new method for the comparison and ranking of evolutionary algorithms. Inf Sci 277:656–679


  • Wagner T, Trautmann H (2010) Online convergence detection for evolutionary multi-objective algorithms revisited. In: IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

  • Wagner T, Trautmann H, Naujoks B (2009) Ocd: online convergence detection for evolutionary multi-objective algorithms based on statistical testing. In: Evolutionary multi-criterion optimization. Springer, New York, pp 198–215

  • Wagner T, Trautmann H, Martí L (2011) A taxonomy of online stopping criteria for multi-objective evolutionary algorithms. In: Evolutionary multi-criterion optimization. Springer, New York, pp 16–30

  • Wang H, Wang D, Yang S (2009) A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems. Soft Comput 13(8–9):763–780


  • Wood AJ, Wollenberg BF (2011) Power generation, operation, and control. Wiley, New Delhi


  • Zitzler E (1999) Evolutionary algorithms for multiobjective optimization: methods and applications. PhD thesis, ETH Zurich, Switzerland

  • Zitzler E, Thiele L (1998) Multiobjective optimization using evolutionary algorithms a comparative case study. In: Parallel problem solving from nature PPSN V. Springer, New York, pp 292–301

  • Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271. doi:10.1109/4235.797969


  • Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8(2):173–195


  • Zitzler E, Thiele L, Laumanns M, Fonseca CM, Da Fonseca VG (2003) Performance assessment of multiobjective optimizers: an analysis and review. IEEE Trans Evol Comput 7(2):117–132



Acknowledgments

The authors would like to thank the management of SSN College of Engineering for providing funds for the High Performance Computing Lab (HPC Lab) where this research was carried out.

Author information


Corresponding author

Correspondence to J. Bhuvana.

Additional information

Communicated by V. Loia.

Appendix


Here, we present the memetic algorithm with preferential local search using adaptive weights for multi-objective optimization problems, which was reported in (Bhuvana and Aravindan 2015). It is included here to make this paper self-contained.

1.1 Design of memetic algorithm

The goal of a memetic algorithm is to generate quality solutions by combining global and local search. The purpose of combining the two search processes is to perform exploration of the search space together with exploitation of the neighborhood locality. The two major proposals for combining the two search heuristics are an adaptive weight assignment scheme for performing local search, and preferential local search. These two key ideas can be incorporated into any global search algorithm to arrive at a new memetic algorithm. We have integrated the above two approaches into NSGA-II, arriving at a new algorithm, MAPLS-AW. Balance should be maintained when a global search is integrated with a local search process. Our proposed algorithm maintains that balance between exploration and exploitation through preferential local search and the adaptive weight assignment scheme. Due to this, the explicit exploration parameter \(\eta _c\) in the simulated binary crossover used by NSGA-II is not needed; we have eliminated the need for that parameter in our proposed algorithm and kept \(\eta _c\) constant. In the following subsections we introduce the working of adaptive weight assignment and preferential local search.

1.2 Adaptive weights

Using the weighted sum method, the multiple objectives are combined into a single objective before the local optimization process is performed. This requires weights to be assigned to the functional objectives. An adaptive weights mechanism has been introduced in this work that dynamically adapts the weights of the objectives.

We assume that all objectives are to be minimized; this is not a limitation, since any maximization problem can be converted into a minimization problem.

Uniformly distributed optimal solutions are what we expect out of the evolutionary process, and providing equal weights would affect the diversity of the solutions obtained: equal weights never explore the extreme regions of the optimal front. Hence, we decided to assign weights to the multiple objectives in an adaptive manner. Classical aggregation techniques aggregate the objectives before the evolution begins and do so only once. We propose a new aggregation method that aggregates during evolution, making prudent use of the information available in the objective space.

Adaptive weights are assigned by collecting information about a solution from its multidimensional objective space. Weights computed from these functional values keep changing during the course of the evolutionary process.

The aim is to give less preference to any objective that has a larger functional value in a minimization problem, and higher preference to sustain an objective with a smaller functional value, and vice versa. While assigning a weight to one objective function in this manner, we need to consider the other objectives at the same time. If we do not, the evolution may drive the solutions toward one particular region and make them crowded, which affects the diversity of the optimal solutions. Instead, we need to move each objective functional value toward its optimal minimum with respect to every other functional objective in the search space. In this way, a solution is shifted proportionately toward the Pareto optimal front. Proportionate movement in the objective space is achieved with the help of the Euclidean norm.

If \(f_i^{(x)}\) is the \(i\)th functional objective of a solution \(x\), then the proportionate movement is given by \(\omega _i\):

$$\begin{aligned} \omega _i= \frac{f_i^{(x)}}{\Vert f^{(x)}\Vert } \end{aligned}$$
(22)

where \(\Vert f^{(x)}\Vert \), the Euclidean norm, is given by

$$\begin{aligned} \Vert f^{(x)}\Vert = \sqrt{\bigl (f_1^{(x)}\bigr )^2+\bigl (f_2^{(x)}\bigr )^2+\bigl (f_3^{(x)}\bigr )^2+\cdots +\bigl (f_M^{(x)}\bigr )^2}. \end{aligned}$$
(23)

Since the weights in the weighted sum aggregation approach should sum to 1, \(\omega _i\) is scaled down as follows to obtain the adaptive individual weight of each objective of a solution:

$$\begin{aligned} \alpha _i= \frac{\omega _i}{\sum _{i=1}^M \omega _i}. \end{aligned}$$
(24)

Once the individual weights are determined for all the objectives, the objectives are combined into a single objective \(F\), given by

$$\begin{aligned} F=\alpha _1f_1+\alpha _2f_2+\alpha _3f_3+\cdots +\alpha _Mf_M, \end{aligned}$$
(25)

where the sum of the weights is \(\alpha _1+\alpha _2+\alpha _3+\cdots +\alpha _M=1\). Local search applied after such dynamic weight adaptation overcomes drawbacks such as slow convergence, optimization moving in the wrong direction, and loss of diversity among the obtained optimal solutions.
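As an illustrative sketch (not the authors' code), the adaptive weight assignment of Eqs. (22)–(25) can be computed for one solution's objective vector as follows; the function names are our own:

```python
import math

def adaptive_weights(f):
    """Adaptive weights for objective values f of one solution (Eqs. 22-24)."""
    norm = math.sqrt(sum(v * v for v in f))   # Euclidean norm, Eq. (23)
    omega = [v / norm for v in f]             # proportionate movement, Eq. (22)
    total = sum(omega)
    return [w / total for w in omega]         # scaled so weights sum to 1, Eq. (24)

def scalarize(f):
    """Weighted sum F (Eq. 25), the single objective handed to local search."""
    alpha = adaptive_weights(f)
    return sum(a * v for a, v in zip(alpha, f))

# Example: objective vector (3.0, 4.0) has norm 5, so omega = (0.6, 0.8)
# and the normalized weights are alpha = (3/7, 4/7), which sum to 1.
alpha = adaptive_weights([3.0, 4.0])
```

Because the weights are recomputed from the current functional values, they change automatically as the solution moves through the objective space during evolution.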

1.3 Preferential local search (PLS)

The objective of integrating a local search into a global search process is to enhance the quality of solutions by fine-tuning them. To establish a balance between exploration and exploitation, we propose PLS. PLS addresses the issues that arise when combining global and local searches: choosing individuals for local search, deciding the depth of local search, and determining the frequency of local search.

1.3.1 Choosing individuals for LS

Allowing all individuals to undergo local search adds to the time complexity of the memetic algorithm, so only a few should be selected for it. An individual in the population can be passed on to the next generation only when it has enough potential to survive and compete with its peer solutions and offspring. In any generation, where the population is a varied mix of solutions, PLS identifies the elite solutions and gives them preference to undergo local search. This kind of preference strengthens good solutions to counter new offspring.

The offspring generated by the genetic operations may lose their chance in the evolution when they compete with potential parents. PLS therefore selects new offspring for depth-limited local search; that is, each solution undergoes at least one depth-limited local search when it is newly generated. If it survives to the next generation, it is identified as an elite and its local search is deepened further.

1.3.2 Depth of LS

PLS is designed in such a way as to limit the depth of local search; that is, potential solutions undergo depth-limited LS, where the depth is a predetermined number of steps. If potential solutions survive to the next generation, their local search is deepened further. In this way, the local search is continued and iteratively deepened on good solutions across generations. The lower the potential of a candidate solution, the shallower the local search applied to it; the greater the potential, the deeper the local search applied to that solution across generations. The potential of a candidate depends on the fitness of the individual, which is associated with both exploration of the search space and exploitation of its neighborhood. We decided to use steepest descent as the local search method, since it suits the decision to limit the depth of the local search. Thus PLS chooses individuals for depth-limited local search and fine-tunes them; this fine-tuning spreads across generations and is iteratively deepened, and the whole process is referred to as preferential local search. The procedure followed by PLS is given in Algorithm 5. These two ideas, PLS and AW, are incorporated into NSGA-II to develop the new hybrid algorithm MAPLS-AW.
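To make the depth-limited idea concrete, the following is a minimal sketch of one depth-limited steepest descent pass on the scalarized objective \(F\). The step size, the finite-difference gradient estimate, and the function name are our own illustrative assumptions, not the authors' implementation:

```python
def depth_limited_descent(x, F, depth=5, step=0.01, eps=1e-6):
    """Apply at most `depth` steepest descent steps to solution x.

    F is the scalarized objective (e.g., the adaptive weighted sum);
    the gradient is estimated by forward finite differences.
    Illustrative sketch only -- not the authors' implementation.
    """
    x = list(x)
    for _ in range(depth):
        fx = F(x)
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            grad.append((F(xp) - fx) / eps)   # forward-difference partial derivative
        x = [xi - step * g for xi, g in zip(x, grad)]  # steepest descent step
    return x
```

In this scheme, a newly generated offspring would receive one such pass; an elite that survives to later generations simply receives further passes, realizing the iterative deepening described above.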



About this article


Cite this article

Bhuvana, J., Aravindan, C. Stopping criteria for MAPLS-AW, a hybrid multi-objective evolutionary algorithm. Soft Comput 20, 2409–2432 (2016). https://doi.org/10.1007/s00500-015-1651-3
