
Methods and algorithms of collective recognition

Review, published in Automation and Remote Control

Abstract

Collective decision making and, in particular, collective recognition is treated as the problem of jointly applying the decisions of multiple classifiers. The decisions concern the class of an entity, situation, image, etc. The joint decision improves the quality of the final decision by aggregating and coordinating the decisions of the different classifiers with a metalevel algorithm. Studies in the field of collective recognition, begun in the mid-1950s, have found wide practical application over the last decade. Because they are used to solve complex large-scale applied problems, they attract the interest of both theoreticians and engineers. A new impetus for these studies was given by the recent development of embedded distributed structures involving ensembles of intelligent sensors that make decisions under uncertainty on the basis of limited local information. A final decision of high quality, in particular a decision at a higher aggregation level, is made by combining local classifier decisions at the metalevel. Dozens of recent publications propose new ideas, approaches, and algorithms of collective recognition. Unfortunately, some papers rediscover results published several decades earlier. The goal of this review is to present the main ideas of collective recognition and to outline the state of research on the basis of the original source works. The review covers the period from the 1950s, when the first ideas and methods appeared, up to the present time.
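As a minimal illustration of the metalevel combination the abstract describes (a sketch of the general idea, not an algorithm from the review itself): the simplest aggregation rule is majority voting over the class labels proposed by the local classifiers. The function name `majority_vote` and the labels are hypothetical.

```python
# Minimal sketch: majority voting, the simplest metalevel rule for
# combining the local decisions of an ensemble of classifiers.
from collections import Counter

def majority_vote(decisions):
    """Combine the class labels proposed by individual classifiers.

    decisions: list of labels, one per base classifier.
    Returns the most frequent label (ties broken by order of
    first occurrence, per Counter.most_common).
    """
    return Counter(decisions).most_common(1)[0][0]

# Three local classifiers vote on the class of one object:
print(majority_vote(["cat", "dog", "cat"]))  # -> cat
```

More elaborate metalevel schemes covered by the review (weighted voting, stacked generalization, mixtures of experts) replace this fixed rule with a trained combiner, but the interface is the same: local decisions in, one aggregated decision out.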




Additional information

Original Russian Text © V.I. Gorodetskiy, S.V. Serebryakov, 2008, published in Avtomatika i Telemekhanika, 2008, No. 11, pp. 3–40.

This review was carried out as part of the Russian Academy of Sciences basic research program no. 14 "Fundamental Problems of Informatics and Information Technologies," branch 1 "Intellectual Technologies and Mathematical Modeling," project no. 236.


Gorodetskiy, V.I., Serebryakov, S.V. Methods and algorithms of collective recognition. Autom Remote Control 69, 1821–1851 (2008). https://doi.org/10.1134/S0005117908110015

