Abstract
One of the most efficient ways to understand complex data is to visualize them in two- or three-dimensional space. Because meaningful data are often high dimensional, visualizing them requires dimensionality reduction algorithms, whose objective is to map high-dimensional data into a low-dimensional space while preserving some of their underlying structure. For labeled data, the low-dimensional representations should embed their classifiability so that their class structures become visible. It is also beneficial if an algorithm can classify labeled inputs while simultaneously performing dimensionality reduction, visually offering information about the data's structure and thus a rationale behind the classification. However, most currently available dimensionality reduction methods are not equipped with classification capabilities, while most classification algorithms lack transparency in rationalizing their decisions. In this paper, the restricted radial basis function network (rRBF), a recently proposed supervised neural network with a low-dimensional internal representation, is utilized for visualizing high-dimensional data while also performing classification. The primary focus of this paper is to empirically explain the classifiability and visual transparency of the rRBF.
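The abstract describes a network whose hidden layer both classifies and produces a low-dimensional embedding. As a rough illustration of that idea only (not the paper's exact formulation: the class name, parameters, neighborhood function, and training rule below are all assumptions), a minimal sketch might place RBF hidden units on a 2-D grid, modulate their activations by a SOM-like neighborhood around the best-matching unit, and train the whole network with supervised gradient descent:

```python
import numpy as np

class RestrictedRBF:
    """Illustrative rRBF-style network: hidden RBF units sit on a 2-D grid,
    and their activations are modulated by a SOM-like neighborhood around
    the best-matching unit, so the hidden layer doubles as a 2-D embedding.
    All details here are assumptions, not the paper's exact algorithm."""

    def __init__(self, input_dim, grid=(6, 6), n_classes=2, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        n_hidden = grid[0] * grid[1]
        self.centers = rng.normal(size=(n_hidden, input_dim))
        self.weights = rng.normal(scale=0.1, size=(n_hidden, n_classes))
        # fixed 2-D grid coordinates of the hidden units
        self.coords = np.array([(i, j) for i in range(grid[0])
                                for j in range(grid[1])], dtype=float)
        self.sigma = sigma

    def _hidden(self, x):
        h = np.exp(-np.sum((self.centers - x) ** 2, axis=1))  # RBF activations
        win = int(np.argmax(h))                               # best-matching unit
        g2 = np.sum((self.coords - self.coords[win]) ** 2, axis=1)
        nb = np.exp(-g2 / (2.0 * self.sigma ** 2))            # grid neighborhood
        return h * nb, win

    def predict(self, x):
        h, _ = self._hidden(x)
        return int(np.argmax(h @ self.weights))

    def embed(self, x):
        # low-dimensional representation: grid position of the winning unit
        _, win = self._hidden(x)
        return tuple(self.coords[win])

    def fit(self, X, y, epochs=30, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # seed the RBF centers with random training samples
        self.centers = X[rng.choice(len(X), len(self.centers))].copy()
        for _ in range(epochs):
            for x, t in zip(X, y):
                h, _ = self._hidden(x)
                err = h @ self.weights
                err[t] -= 1.0                        # error against one-hot target
                grad_h = self.weights @ err          # backprop into hidden layer
                self.weights -= lr * np.outer(h, err)
                self.centers -= lr * 2.0 * (grad_h * h)[:, None] * (x - self.centers)
```

In this sketch, `embed` exposes the visual transparency the abstract refers to: every input is mapped to a grid coordinate, so class structure can be inspected on the same 2-D surface the classifier actually uses.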
Acknowledgements
The author would like to thank the anonymous reviewers for their valuable comments and suggestions.
Cite this article
Hartono, P. Classification and dimensional reduction using restricted radial basis function networks. Neural Comput & Applic 30, 905–915 (2018). https://doi.org/10.1007/s00521-016-2726-5