
Estimation of Spectral Bounds in Gradient Algorithms

Published in Acta Applicandae Mathematicae.

Abstract

We consider the solution of linear systems of equations Ax=b, with A a symmetric positive-definite matrix in ℝ^{n×n}, through Richardson-type iterations or, equivalently, the minimization of convex quadratic functions (1/2)(Ax,x)−(b,x) with a gradient algorithm. The use of step-sizes asymptotically distributed with the arcsine distribution on the spectrum of A then yields an asymptotic rate of convergence after k<n iterations, k→∞, that coincides with that of the conjugate-gradient algorithm in the worst case. However, the spectral bounds m and M are generally unknown and thus need to be estimated to allow the construction of simple and cost-effective gradient algorithms with fast convergence. The purpose of this paper is to analyse the properties of estimators of m and M based on moments of probability measures ν_k defined on the spectrum of A and generated by the algorithm on its way towards the optimal solution. A precise analysis of the behaviour of the rate of convergence of the algorithm is also given. Two situations are considered: (i) the sequence of step-sizes corresponds to i.i.d. random variables; (ii) the step-sizes are generated through a dynamical system (fractional parts of the golden ratio) producing a low-discrepancy sequence. In the first case, properties of random walks can be used to prove the convergence of simple spectral-bound estimators based on the first moment of ν_k. The second option requires a more careful choice of spectral-bound estimators but is shown to produce much smaller fluctuations in the rate of convergence of the algorithm.
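As an illustrative sketch of the iteration described above (not the authors' algorithm; for that, see the paper itself), the Richardson scheme with arcsine-distributed inverse step-sizes can be written in a few lines of Python. Here the spectral bounds m and M are assumed known, whereas the paper is precisely about estimating them; the function name, the quantile mapping, and the `rule` parameter are our own assumptions based on the abstract, with `rule="iid"` corresponding to situation (i) and `rule="golden"` to situation (ii).

```python
import numpy as np

GOLDEN = (1 + np.sqrt(5)) / 2  # golden ratio

def richardson_arcsine(A, b, m, M, n_iter=400, rule="golden", seed=0):
    """Richardson iteration x_{k+1} = x_k + alpha_k (b - A x_k).

    The inverse step-sizes t_k = 1/alpha_k are (asymptotically) arcsine-
    distributed on [m, M], via the quantile map
        t = m + (M - m) * sin(pi * u / 2)**2,  u in [0, 1),
    where u_k is either i.i.d. uniform (rule="iid") or the low-discrepancy
    sequence of fractional parts {k * golden_ratio} (rule="golden").
    The bounds m and M are assumed known here.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, n_iter + 1):
        u = (k * GOLDEN) % 1.0 if rule == "golden" else rng.random()
        t = m + (M - m) * np.sin(np.pi * u / 2.0) ** 2  # arcsine quantile
        x = x + (b - A @ x) / t  # step-size alpha_k = 1/t_k
    return x
```

Individual steps may transiently increase the error (the factor |1 − λ/t_k| can exceed 1 for eigenvalues λ near M), but the geometric mean of these factors over the arcsine distribution is below 1 for every λ in [m, M], which is what produces the worst-case conjugate-gradient rate asymptotically.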



Acknowledgement

The authors are very grateful to the referees for their useful comments.

Author information


Corresponding author

Correspondence to L. Pronzato.

Additional information

Part of this work was accomplished while the first two authors were visitors at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK; the support of the INI and of CNRS is gratefully acknowledged. The work of E. Bukina was partially supported by the EU through a Marie Curie Fellowship (EST-SIGNAL programme: http://est-signal.i3s.unice.fr) under contract no. MEST-CT-2005-021175.


About this article

Cite this article

Pronzato, L., Zhigljavsky, A. & Bukina, E. Estimation of Spectral Bounds in Gradient Algorithms. Acta Appl Math 127, 117–136 (2013). https://doi.org/10.1007/s10440-012-9794-z


