
Alternating Optimization of Unsupervised Regression with Evolutionary Embeddings

  • Conference paper in: Applications of Evolutionary Computation (EvoApplications 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9028)

Abstract

Unsupervised regression is a dimensionality reduction method that allows embedding high-dimensional patterns in low-dimensional latent spaces. In the line of research on iterative unsupervised regression, numerous methodological variants have been proposed in the recent past. This work extends the set of methods by evolutionary embeddings. We propose to use a \((1+\lambda )\)-ES with Rechenberg mutation strength control to iteratively embed patterns, and show that the learned manifolds achieve a lower data space reconstruction error than embeddings generated with naive Gaussian sampling. Further, we introduce a hybrid optimization approach that alternates between gradient descent and the iterative evolutionary embeddings. Experimental comparisons on artificial test data sets confirm the expectation that the hybrid approach is superior to, or at least competitive with, known methods like principal component analysis or Hessian locally linear embedding.
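
To make the abstract's recipe concrete, the sketch below embeds patterns one at a time with a \((1+\lambda )\)-ES whose mutation strength is adapted by Rechenberg's 1/5th success rule, using the data space reconstruction error of a Nadaraya-Watson regressor as fitness. It is a minimal numpy illustration, not the authors' implementation: the bandwidth, the adaptation schedule, the seeding heuristic, and the names nw_predict and embed_es are assumptions made for the sake of the example. In the hybrid variant, such ES steps would alternate with gradient descent on the same error; the required gradient is given in Appendix A below.

```python
import numpy as np

def nw_predict(x, X, Y, h=1.0):
    """Nadaraya-Watson regression from latent positions X to patterns Y."""
    k = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h)) / h  # Gaussian kernel
    return (k @ Y) / k.sum()

def embed_es(y, X, Y, h=1.0, lam=10, sigma=1.0, gens=200, seed=None):
    """(1+lambda)-ES with Rechenberg's 1/5th success rule: search a latent
    position x that minimizes the reconstruction error ||y - f(x)||."""
    rng = np.random.default_rng(seed)
    # illustrative seeding: start at the latent position of the most
    # similar already-embedded pattern
    x = X[np.argmin(np.linalg.norm(Y - y, axis=1))].copy()
    fx = np.linalg.norm(y - nw_predict(x, X, Y, h))
    successes = 0
    for g in range(1, gens + 1):
        offspring = x + sigma * rng.standard_normal((lam, X.shape[1]))
        errs = np.array([np.linalg.norm(y - nw_predict(o, X, Y, h))
                         for o in offspring])
        b = errs.argmin()
        if errs[b] < fx:            # plus selection: parent survives otherwise
            x, fx = offspring[b], errs[b]
            successes += 1
        if g % 10 == 0:             # 1/5th rule: adapt mutation strength
            sigma = sigma * 1.5 if successes > 2 else sigma / 1.5
            successes = 0
    return x, fx

# iterative embedding: add patterns one at a time to a small seed embedding
rng = np.random.default_rng(1)
data = rng.standard_normal((30, 3))
X, Y = data[:2, :2].copy(), data[:2].copy()   # q = 2 latent dimensions
for y in data[2:]:
    x, err = embed_es(y, X, Y, seed=1)
    X, Y = np.vstack([X, x]), np.vstack([Y, y])
```

Plus selection guarantees that the reconstruction error of an embedded pattern never worsens from one generation to the next, which is what makes the simple 1/5th-rule step-size adaptation safe in this loop.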



Author information

Correspondence to Oliver Kramer.


A Gradient Descent

HybUKR requires the gradient of the Nadaraya-Watson estimator with respect to the latent coordinates, which is given by:

$$\begin{aligned} \frac{\partial f(\mathbf{X})}{\partial x_{mn}} = \sum_{i=1}^N \mathbf{y}_i \cdot \left( \frac{1}{\sum_{j=1}^N \mathbf{K}_h(\mathbf{x} - \mathbf{x}_j)} \cdot \frac{\partial \mathbf{K}_h(\mathbf{x} - \mathbf{x}_i)}{\partial x_{mn}} - \frac{\mathbf{K}_h(\mathbf{x} - \mathbf{x}_i)}{\left( \sum_{j=1}^N \mathbf{K}_h(\mathbf{x} - \mathbf{x}_j) \right)^2} \cdot \sum_{j=1}^N \frac{\partial \mathbf{K}_h(\mathbf{x} - \mathbf{x}_j)}{\partial x_{mn}} \right) \end{aligned}$$
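
As a sanity check, this gradient can be implemented directly and compared against finite differences. The following is a minimal numpy sketch under the assumption of the Gaussian kernel defined below (profile K(z) = exp(-z/2)); it differentiates with respect to a single latent position x, whereas the x_mn above indexes entries of the full latent matrix X. The function name nw_and_grad is illustrative.

```python
import numpy as np

def nw_and_grad(x, X, Y, h=1.0):
    """Nadaraya-Watson estimate f(x) and Jacobian df/dx at latent point x,
    for the Gaussian kernel K_h(v) = exp(-||v||^2 / (2h)) / h."""
    diff = x - X                                           # (N, q)
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h)) / h   # K_h(x - x_i)
    dk = -(k[:, None] * diff) / h                          # dK_h(x - x_i)/dx
    s, ds = k.sum(), dk.sum(axis=0)
    f = (k @ Y) / s
    # the two quotient-rule terms of the equation above, summed over i
    J = (Y.T @ dk) / s - np.outer(Y.T @ k, ds) / s ** 2    # (d, q)
    return f, J

# finite-difference sanity check on random data
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((20, 2)), rng.standard_normal((20, 3))
x = rng.standard_normal(2)
f0, J = nw_and_grad(x, X, Y)
eps = 1e-6
J_fd = np.column_stack([(nw_and_grad(x + eps * e, X, Y)[0] - f0) / eps
                        for e in np.eye(2)])
assert np.allclose(J, J_fd, atol=1e-4)
```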

The Gaussian kernel function with bandwidth h is defined as:

$$\begin{aligned} \mathbf{K}_h( \mathbf{x}_i - \mathbf{x}_j ) = K_h\left( \left\| \mathbf{x}_i - \mathbf{x}_j \right\|_2^2 \right) = \frac{1}{h} \cdot K\left( \frac{\left\| \mathbf{x}_i - \mathbf{x}_j \right\|_2^2}{h} \right) \end{aligned}$$
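
The kernel derivative needed in the gradient above then follows from the chain rule. Assuming the Gaussian profile K(z) = exp(-z/2), so that K'(z) = -K(z)/2:

$$\begin{aligned} \frac{\partial \mathbf{K}_h(\mathbf{x} - \mathbf{x}_i)}{\partial \mathbf{x}} = \frac{1}{h}\, K'\!\left( \frac{\left\| \mathbf{x} - \mathbf{x}_i \right\|_2^2}{h} \right) \cdot \frac{2\,(\mathbf{x} - \mathbf{x}_i)}{h} = -\frac{1}{h}\, \mathbf{K}_h(\mathbf{x} - \mathbf{x}_i)\,(\mathbf{x} - \mathbf{x}_i) \end{aligned}$$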


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Lückehe, D., Kramer, O. (2015). Alternating Optimization of Unsupervised Regression with Evolutionary Embeddings. In: Mora, A., Squillero, G. (eds.) Applications of Evolutionary Computation. EvoApplications 2015. Lecture Notes in Computer Science, vol. 9028. Springer, Cham. https://doi.org/10.1007/978-3-319-16549-3_38

  • DOI: https://doi.org/10.1007/978-3-319-16549-3_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16548-6

  • Online ISBN: 978-3-319-16549-3

  • eBook Packages: Computer Science, Computer Science (R0)
