Abstract
We have developed relative feature importance (RFI), a metric for the classifier-independent ranking of features. We have previously shown that the metric accurately ranks features on a wide variety of artificial and natural problems, both two-class and multi-class. In this paper, we present the design of the metric, including both theoretical considerations and statistical analysis of its candidate components.
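To make the notion of classifier-independent feature ranking concrete, the sketch below scores each feature by a simple nonparametric (rank-based) measure of class separability, without training any classifier. This is only an illustrative stand-in under assumed conventions, not the RFI metric defined in the paper; the function name `rank_features` and the between-class mean-rank score are inventions for this example.

```python
# Hedged sketch: classifier-independent feature ranking via a simple
# rank-based separability score. NOT the RFI metric from the paper --
# just an illustration of ranking features without training a classifier.

def rank_features(X, y):
    """Rank features by the between-class spread of their mean ranks.

    X: list of samples (each a list of feature values); y: class labels.
    Returns feature indices sorted from most to least discriminatory.
    """
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        # rank-transform the feature (ties broken by order of appearance)
        order = sorted(range(len(col)), key=lambda i: col[i])
        ranks = [0.0] * len(col)
        for r, i in enumerate(order):
            ranks[i] = float(r)
        # mean rank of the feature within each class
        class_means = []
        for c in set(y):
            idx = [i for i, lbl in enumerate(y) if lbl == c]
            class_means.append(sum(ranks[i] for i in idx) / len(idx))
        grand = sum(class_means) / len(class_means)
        # larger spread of class mean ranks => better class separation
        scores.append(sum((m - grand) ** 2 for m in class_means))
    return sorted(range(n_features), key=lambda j: -scores[j])

# Toy example: feature 0 separates the two classes, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [1.1, 2.0]]
y = [0, 0, 1, 1]
print(rank_features(X, y))  # feature 0 ranks first: [0, 1]
```

Because the score depends only on ranks and class labels, it needs no assumptions about any particular classifier's decision rule, which is the spirit (though not the substance) of the classifier-independent approach described above.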
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Holz, H.J., Loew, M.H. (2000). Design Choices and Theoretical Issues for Relative Feature Importance, a Metric for Nonparametric Discriminatory Power. In: Ferri, F.J., Iñesta, J.M., Amin, A., Pudil, P. (eds) Advances in Pattern Recognition. SSPR/SPR 2000. Lecture Notes in Computer Science, vol 1876. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44522-6_72
Print ISBN: 978-3-540-67946-2
Online ISBN: 978-3-540-44522-7