
Learning Semi Naïve Bayes Structures by Estimation of Distribution Algorithms

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2902)

Abstract

Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier, called naïve Bayes, is competitive with state-of-the-art classifiers. This simple approach stems from the assumption that the features are conditionally independent given the class. Improvements in the accuracy of naïve Bayes have been demonstrated by a number of approaches, collectively known as semi naïve Bayes classifiers, which are usually based on the search for specific attribute values or structures. The learning process of these classifiers typically relies on greedy search algorithms. In this paper we propose to learn semi naïve Bayes structures through estimation of distribution algorithms, a family of stochastic heuristic search strategies. Experiments have been carried out on 21 data sets from the UCI repository.
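The two ideas the abstract combines can be illustrated together: a naïve Bayes classifier that multiplies per-feature conditional probabilities (the conditional-independence assumption), and an estimation of distribution algorithm that searches over classifier structures by repeatedly estimating and sampling a probability model of good solutions. The sketch below is purely illustrative and is not the paper's method: it uses a UMDA-style EDA over feature-selection bit masks (one simple kind of semi naïve Bayes structure), a toy data set, and hypothetical function names.

```python
# Illustrative sketch only: a UMDA-style estimation of distribution algorithm
# (EDA) searching over feature-selection masks for a naive Bayes classifier.
# All names, parameters, and the toy data are hypothetical, not the paper's.
import random
from collections import defaultdict

def train_nb(X, y, mask):
    """Fit class priors and per-feature conditional counts (add-one smoothed
    over observed values), using only the features enabled in `mask`."""
    classes = set(y)
    prior = {c: y.count(c) / len(y) for c in classes}
    cond = defaultdict(lambda: defaultdict(lambda: 1))  # add-one smoothing
    for xi, c in zip(X, y):
        for j, on in enumerate(mask):
            if on:
                cond[(c, j)][xi[j]] += 1
    return prior, cond

def predict(x, prior, cond, mask):
    """Conditional independence: score each class as P(c) * prod_j P(x_j|c)
    over the enabled features, and return the highest-scoring class."""
    best, best_p = None, -1.0
    for c, p in prior.items():
        for j, on in enumerate(mask):
            if on:
                counts = cond[(c, j)]
                p *= counts[x[j]] / sum(counts.values())
        if p > best_p:
            best, best_p = c, p
    return best

def accuracy(X, y, mask):
    prior, cond = train_nb(X, y, mask)
    return sum(predict(x, prior, cond, mask) == c for x, c in zip(X, y)) / len(y)

def umda(X, y, n_feat, pop=20, sel=10, gens=15, seed=0):
    """UMDA: keep one marginal probability per structure bit, sample a
    population of masks, select the best half, re-estimate the marginals."""
    rng = random.Random(seed)
    probs = [0.5] * n_feat
    for _ in range(gens):
        popn = [[int(rng.random() < p) for p in probs] for _ in range(pop)]
        popn.sort(key=lambda m: accuracy(X, y, m), reverse=True)
        elite = popn[:sel]
        probs = [sum(m[j] for m in elite) / sel for j in range(n_feat)]
    return max(popn, key=lambda m: accuracy(X, y, m))
```

The EDA replaces the greedy search mentioned in the abstract: instead of locally adding or removing one feature at a time, it samples whole structures from an explicit probability distribution that is re-estimated from the best individuals each generation.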




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Robles, V., Larrañaga, P., Peña, J.M., Pérez, M.S., Menasalvas, E., Herves, V. (2003). Learning Semi Naïve Bayes Structures by Estimation of Distribution Algorithms. In: Pires, F.M., Abreu, S. (eds) Progress in Artificial Intelligence. EPIA 2003. Lecture Notes in Computer Science (LNAI), vol 2902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24580-3_31


  • DOI: https://doi.org/10.1007/978-3-540-24580-3_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20589-0

  • Online ISBN: 978-3-540-24580-3

  • eBook Packages: Springer Book Archive
