
Classification and dimensional reduction using restricted radial basis function networks

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

One of the most efficient ways to understand complex data is to visualize them in two- or three-dimensional space. As meaningful data are likely to be high dimensional, visualizing them requires dimensional reduction algorithms, whose objective is to map high-dimensional data into a low-dimensional space while preserving some of their underlying structure. For labeled data, the low-dimensional representations should embed their classifiability so that their class structures become visible. It is also beneficial if an algorithm can classify labeled inputs while simultaneously performing dimensional reduction, thus visually offering information about the data's structure and a rationale behind the classification. However, most currently available dimensional reduction methods are not equipped with classification capabilities, while most classification algorithms lack transparency in rationalizing their decisions. In this paper, the restricted radial basis function network (rRBF), a recently proposed supervised neural network with a low-dimensional internal representation, is utilized to visualize high-dimensional data while also performing classification. The primary focus of this paper is to empirically explain the classifiability and visual transparency of the rRBF.
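A minimal sketch of the rRBF idea, in Python with NumPy, may help make the architecture concrete: RBF hidden units are arranged on a two-dimensional SOM-like grid, their activations are restricted (modulated) by a neighborhood function around the best-matching unit, and the grid coordinate of that winning unit doubles as a 2D embedding of the input. The grid size, Gaussian widths, learning rate, and update rules below are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    class RestrictedRBF:
        """Sketch of an RBF network whose hidden layer lies on a 2D grid."""

        def __init__(self, dim_in, n_classes, grid=(10, 10), sigma_rbf=1.0,
                     sigma_nb=2.0, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            # Hidden units sit at fixed coordinates on a 2D lattice.
            self.grid = np.array([(i, j) for i in range(grid[0])
                                  for j in range(grid[1])], dtype=float)
            n_hidden = len(self.grid)
            self.centers = rng.normal(scale=0.5, size=(n_hidden, dim_in))
            self.W = rng.normal(scale=0.1, size=(n_hidden, n_classes))
            self.sigma_rbf, self.sigma_nb, self.lr = sigma_rbf, sigma_nb, lr

        def _hidden(self, x):
            d2 = ((self.centers - x) ** 2).sum(axis=1)        # input-space distances
            win = int(np.argmin(d2))                          # best-matching unit
            g2 = ((self.grid - self.grid[win]) ** 2).sum(axis=1)
            nb = np.exp(-g2 / (2 * self.sigma_nb ** 2))       # grid-space neighborhood
            h = nb * np.exp(-d2 / (2 * self.sigma_rbf ** 2))  # "restricted" activation
            return h, win

        def forward(self, x):
            h, win = self._hidden(x)
            z = h @ self.W
            p = np.exp(z - z.max())
            return p / p.sum(), h, win                        # softmax over classes

        def train_step(self, x, y_idx):
            p, h, _ = self.forward(x)
            err = p.copy()
            err[y_idx] -= 1.0                                 # dL/dz for cross-entropy
            dh = self.W @ err                                 # backprop into hidden layer
            # Gradient w.r.t. centers through the Gaussian; the winner-based
            # neighborhood is held fixed within a step for simplicity.
            dc = (dh * h / self.sigma_rbf ** 2)[:, None] * (x - self.centers)
            self.W -= self.lr * np.outer(h, err)
            self.centers -= self.lr * dc
            return -np.log(p[y_idx] + 1e-12)                  # cross-entropy loss

        def embed(self, x):
            # The 2D grid coordinate of the winning unit is the visualization.
            _, win = self._hidden(x)
            return self.grid[win]

    # Toy usage on synthetic data (illustrative only):
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(int)
    net = RestrictedRBF(dim_in=4, n_classes=2)
    for _ in range(20):
        for xi, yi in zip(X, y):
            net.train_step(xi, yi)
    print(net.embed(X[0]))  # 2D coordinate to plot, colored by class label

Because classification and visualization share the same internal layer in this sketch, plotting embed(x) for labeled inputs shows how class structure emerges during supervised training, which is the transparency property the paper examines.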



Acknowledgements

The author would like to thank the anonymous reviewers for their valuable comments and suggestions.

Author information

Correspondence to Pitoyo Hartono.

About this article


Cite this article

Hartono, P. Classification and dimensional reduction using restricted radial basis function networks. Neural Comput & Applic 30, 905–915 (2018). https://doi.org/10.1007/s00521-016-2726-5

