
Convergence Theorems of Estimation of Distribution Algorithms

Chapter in: Markov Networks in Evolutionary Computation

Part of the book series: Adaptation, Learning, and Optimization (ALO, volume 14)

Abstract

Estimation of Distribution Algorithms (EDAs) have been proposed as an extension of genetic algorithms. We assume that the function to be optimized is an additively decomposed function (ADF). The interaction graph of the ADF is used to create exact or approximate factorizations of the Boltzmann distribution. Convergence of the algorithm MN-GIBBS is proven. MN-GIBBS uses a Markov network easily derived from the ADF together with Gibbs sampling. We discuss different variants of Gibbs sampling. We show that a good approximation of the true distribution is not necessary; it suffices to use a factorization in which the global optima have a large enough probability. This explains the success of EDAs that use Bayesian networks in practical applications.
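The chapter develops these results formally. As background, the Boltzmann distribution of a function f at inverse temperature β is p_β(x) = exp(β f(x)) / Z_β, and for an ADF f(x) = Σ_i f_i(x_{s_i}) the conditional distribution of a single variable depends only on the sub-functions whose scope s_i contains that variable. The sketch below is a minimal illustration of single-site Gibbs sampling from such a Boltzmann distribution; it is not the chapter's MN-GIBBS algorithm, and the toy chain function, variable names, and annealing schedule are assumptions made only for this example.

```python
import math
import random

# Hypothetical ADF on binary variables: f(x) = sum of sub-functions f_i,
# each defined on a small index set s_i (its scope). Toy example: a chain
# f(x) = sum_i [x_i == x_{i+1}], whose global optima are the two constant strings.
N = 8
SCOPES = [(i, i + 1) for i in range(N - 1)]           # edges of the interaction graph
SUBFUNCS = [lambda xs: 1.0 if xs[0] == xs[1] else 0.0 for _ in SCOPES]


def local_value(x, i):
    """Sum of the sub-functions whose scope contains variable i
    (i.e. over the Markov blanket of x_i in the interaction graph)."""
    total = 0.0
    for scope, sub in zip(SCOPES, SUBFUNCS):
        if i in scope:
            total += sub([x[j] for j in scope])
    return total


def gibbs_sweep(x, beta):
    """One sweep of single-site Gibbs sampling for p(x) proportional to exp(beta * f(x))."""
    for i in range(N):
        # Local contribution for x_i = 0 and x_i = 1; all terms not involving
        # x_i cancel in the conditional distribution.
        weights = []
        for v in (0, 1):
            x[i] = v
            weights.append(beta * local_value(x, i))
        # P(x_i = 1 | rest) = exp(w1) / (exp(w0) + exp(w1))
        p_one = 1.0 / (1.0 + math.exp(weights[0] - weights[1]))
        x[i] = 1 if random.random() < p_one else 0
    return x


# Usage: increase beta over time so the Boltzmann distribution concentrates
# on the global optima of f.
x = [random.randint(0, 1) for _ in range(N)]
for t in range(200):
    gibbs_sweep(x, beta=0.1 * (t + 1))
f_value = sum(sub([x[j] for j in scope]) for scope, sub in zip(SCOPES, SUBFUNCS))
print(x, f_value)
```

Because each conditional touches only a variable's neighbours in the interaction graph, one sweep costs time proportional to the number of sub-functions rather than to the 2^n configurations of the full distribution.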


Copyright information

© 2012 Springer Berlin Heidelberg

About this chapter

Cite this chapter

Mühlenbein, H. (2012). Convergence Theorems of Estimation of Distribution Algorithms. In: Shakya, S., Santana, R. (eds) Markov Networks in Evolutionary Computation. Adaptation, Learning, and Optimization, vol 14. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28900-2_6

  • DOI: https://doi.org/10.1007/978-3-642-28900-2_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-28899-9

  • Online ISBN: 978-3-642-28900-2

  • eBook Packages: Engineering, Engineering (R0)
