Abstract
Ensemble learning is a major current direction in machine learning research. This paper explores subspace ensembles for classification, which build an ensemble classifier system by manipulating different feature subspaces. Starting from the nature of ensemble efficacy, we examine the microcosmic meaning of ensemble diversity and propose region partitioning and region weighting as mechanisms for constructing effective subspace ensembles. An improved random subspace method that integrates this mechanism is presented. Individual classifiers that perform well on a partitioned region, as reflected by high neighborhood accuracies, are deemed to contribute strongly to that region and are assigned large weights when determining the labels of instances in it. The robustness and effectiveness of the proposed method are demonstrated empirically on an EEG signal classification problem, with linear support vector machines as the base classifiers.
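The core idea of the abstract can be sketched in code: train each ensemble member on a random feature subspace, then, at prediction time, weight each member's vote by its accuracy on the training neighbors of the query point (its "neighborhood accuracy"). The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the toy data stands in for EEG features, a nearest-centroid learner stands in for the linear SVM base classifier, and all names (`neighborhood_accuracy`, `predict`, the subspace and neighborhood sizes) are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 10 dimensions (a stand-in for EEG features).
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

class NearestCentroid:
    """Simple linear base learner (a stand-in for the paper's linear SVM)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        dist = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(axis=-1)
        return self.classes_[dist.argmin(axis=1)]

# Random subspace step: each member sees only k of the d features.
n_members, k = 15, 5
members = []
for _ in range(n_members):
    feats = rng.choice(d, size=k, replace=False)
    members.append((feats, NearestCentroid().fit(X_tr[:, feats], y_tr)))

def neighborhood_accuracy(x, feats, clf, n_neighbors=10):
    """Member accuracy on the training neighbors of x: the local region weight."""
    idx = np.argsort(((X_tr - x) ** 2).sum(axis=1))[:n_neighbors]
    return (clf.predict(X_tr[idx][:, feats]) == y_tr[idx]).mean()

def predict(x):
    """Weighted vote: members accurate near x dominate the decision for x."""
    votes = {}
    for feats, clf in members:
        w = neighborhood_accuracy(x, feats, clf)
        label = clf.predict(x[feats][None, :])[0]
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

preds = np.array([predict(x) for x in X_te])
acc = (preds == y_te).mean()
```

Members whose subspace misses the informative features are nearly random guessers, but because their neighborhood accuracies are low everywhere, the weighting suppresses their votes; this is the intuition behind region weighting.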
Copyright information
© 2007 Springer Berlin Heidelberg
Cite this paper
Sun, S. (2007). An Improved Random Subspace Method and Its Application to EEG Signal Classification. In: Haindl, M., Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2007. Lecture Notes in Computer Science, vol 4472. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72523-7_11
Print ISBN: 978-3-540-72481-0
Online ISBN: 978-3-540-72523-7