
Experimental study on population-based incremental learning algorithms for dynamic optimization problems


Abstract

Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point of the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. This paper also formalizes a new dynamic problem generator that can create the required dynamics from any binary-encoded stationary problem. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analyze the strengths and weaknesses of the studied PBIL algorithms and identify several potential improvements to PBIL for dynamic optimization problems.
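As a rough illustration of the dual-probability-vector idea, the following Python sketch samples half of each population from a probability vector and half from its dual (its mirror image through the central point 0.5), then learns the vector towards the best sample found. The function names, the half-and-half population split, and the learning-rate update are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sample(prob_vector, rng):
    # Bit i is set to 1 with probability prob_vector[i] (see note 2 below).
    return (rng.random(prob_vector.size) < prob_vector).astype(int)

def dual_of(prob_vector):
    # The dual vector mirrors prob_vector through the central point 0.5
    # of the genotype space (see note 1 below).
    return 1.0 - prob_vector

def dual_pbil_generation(prob_vector, fitness, pop_size=100,
                         learning_rate=0.05, rng=None):
    """One generation of a Dual-PBIL-style step (illustrative sketch only):
    sample half the population from the probability vector and half from its
    dual, then learn the probability vector towards the best sample found."""
    if rng is None:
        rng = np.random.default_rng()
    half = pop_size // 2
    samples = [sample(prob_vector, rng) for _ in range(half)]
    samples += [sample(dual_of(prob_vector), rng) for _ in range(pop_size - half)]
    best = max(samples, key=fitness)
    return (1.0 - learning_rate) * prob_vector + learning_rate * best
```

Iterating this step with, say, a OneMax fitness and an initial central vector illustrates how the dual vector keeps sampling the complementary region of the genotype space even as the primal vector converges.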


Notes

  1. For convenience of description, in this paper we call the probability vector that has 0.5 in all of its entries the central probability vector, or simply the central vector, because it represents the central point of the genotype space.

  2. For each bit position of a (binary-encoded) solution, if a randomly created real number in the range [0.0, 1.0] is less than the probability value of the corresponding element in the probability vector, the bit is set to 1 (or 0); otherwise it is set to 0 (or 1, respectively).

  3. For the convenience of analyzing experimental results on dynamic problems, we herein call the first period stationary, since the behavior of an algorithm on a dynamic problem during this period is the same as its behavior on the corresponding stationary problem. The other nine periods are called dynamic.
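To make the notion of change periods concrete, below is a minimal Python sketch of a dynamic problem generator of the kind described in the abstract: a binary mask is XORed with each solution before the underlying stationary fitness is evaluated, and the mask is altered at the start of every new period, so the first period behaves exactly like the stationary problem. The class name, the parameters `period` and `severity`, and the XOR-mask construction are illustrative assumptions rather than the paper's exact formalization.

```python
import numpy as np

class DynamicProblemSketch:
    """Illustrative dynamic problem generator (assumed XOR-mask construction)."""

    def __init__(self, stationary_fitness, length, period=1000, severity=0.1, seed=0):
        self.f = stationary_fitness          # fitness of the underlying stationary problem
        self.length = length                 # encoding length in bits
        self.period = period                 # time steps between environmental changes
        self.severity = severity             # fraction of mask bits flipped per change
        self.mask = np.zeros(length, dtype=int)   # all-zero mask: stationary first period
        self.rng = np.random.default_rng(seed)
        self._changes_applied = 0

    def evaluate(self, solution, time_step):
        # Apply one mask change at the start of each new period after the first.
        while time_step // self.period > self._changes_applied:
            self._changes_applied += 1
            flips = self.rng.choice(self.length,
                                    size=max(1, int(self.severity * self.length)),
                                    replace=False)
            self.mask[flips] ^= 1
        return self.f(np.asarray(solution) ^ self.mask)

# Example: evaluating a solution sampled from the central vector (notes 1 and 2)
# on a dynamic OneMax problem during the stationary first period.
problem = DynamicProblemSketch(lambda x: int(x.sum()), length=100)
central_vector = np.full(100, 0.5)
solution = (np.random.default_rng().random(100) < central_vector).astype(int)
print(problem.evaluate(solution, time_step=0))
```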


Acknowledgments

The authors would like to thank Dr. Jürgen Branke and the anonymous reviewers for their thoughtful suggestions and helpful comments. Shengxiang Yang was supported by UK EPSRC under Grant GR/S79718/01.

Author information

Correspondence to Shengxiang Yang.


About this article

Cite this article

Yang, S., Yao, X. Experimental study on population-based incremental learning algorithms for dynamic optimization problems. Soft Comput 9, 815–834 (2005). https://doi.org/10.1007/s00500-004-0422-3
