
Subspace Methods with Local Refinements for Eigenvalue Computation Using Low-Rank Tensor-Train Format

Abstract

Computing a few eigenpairs of large-scale symmetric eigenvalue problems is far beyond the reach of classical eigensolvers when the eigenvectors cannot even be stored in the conventional dense form. We consider a tractable case in which both the coefficient matrix and its eigenvectors can be represented in the low-rank tensor train (TT) format. We propose a subspace optimization method combined with suitable truncation steps that keep the iterates in the given low-rank TT format. Its performance can be further improved if an alternating minimization method is used to refine the intermediate solutions locally. Preliminary numerical experiments show that our algorithm is competitive with state-of-the-art methods on problems arising from the discretization of the stationary Schrödinger equation.
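
To illustrate the flavor of the approach, the following is a minimal Python sketch, not the authors' implementation: it runs a block subspace iteration with Rayleigh-Ritz projection on ordinary dense matrices, and a rank-truncation step (compress_subspace, an illustrative helper) caps the dimension of the enlarged subspace, loosely playing the role that low-rank TT truncation plays in the proposed method. In the paper's setting the iterates would instead be kept in TT format, the truncation would act on the TT ranks, and alternating minimization sweeps would locally refine the intermediate solutions.

  import numpy as np

  def compress_subspace(S, max_dim):
      # Keep only the dominant 'max_dim' directions of the enlarged subspace
      # (a dense stand-in for low-rank truncation in the TT setting).
      U, _, _ = np.linalg.svd(S, full_matrices=False)
      return U[:, :min(max_dim, S.shape[1])]

  def subspace_eig(A, k, max_dim, iters=200):
      # Block subspace iteration with Rayleigh-Ritz extraction for the k
      # smallest eigenpairs of a symmetric matrix A.
      n = A.shape[0]
      X = np.linalg.qr(np.random.randn(n, k))[0]             # random orthonormal start
      for _ in range(iters):
          R = A @ X - X @ (X.T @ A @ X)                      # block residual
          Q = compress_subspace(np.hstack([X, R]), max_dim)  # enlarge, then truncate
          w, V = np.linalg.eigh(Q.T @ A @ Q)                 # Rayleigh-Ritz projection
          X = Q @ V[:, :k]                                   # Ritz vectors of the k smallest Ritz values
      return w[:k], X

  # Example: three smallest eigenpairs of a 1D discrete Laplacian.
  n = 400
  A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
  vals, vecs = subspace_eig(A, k=3, max_dim=5)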


Notes

  1. TTeMPS is downloadable from http://anchp.epfl.ch/TTeMPS.

Acknowledgments

We thank D. Kressner, M. Steinlechner, and A. Uschmajew for sharing online their MATLAB codes for EVAMEn and the TT/MPS tensor toolbox TTeMPS. The authors would also like to thank the associate editor Prof. Wotao Yin and two anonymous referees for their detailed and valuable comments and suggestions.

Author information

Corresponding author

Correspondence to Zaiwen Wen.

Additional information

J. Zhang: research supported in part by NSF Grant CMMI-1462408. Z. Wen: research supported in part by NSFC Grants 11322109 and 91330202, and by the National Basic Research Project under Grant 2015CB856002. Y. Zhang: research supported in part by NSF Grants DMS-1115950 and DMS-1418724.

About this article

Cite this article

Zhang, J., Wen, Z. & Zhang, Y. Subspace Methods with Local Refinements for Eigenvalue Computation Using Low-Rank Tensor-Train Format. J Sci Comput 70, 478–499 (2017). https://doi.org/10.1007/s10915-016-0255-0
