
New stepsizes for the gradient method

  • Original Paper
  • Published in Optimization Letters

Abstract

Gradient methods are known for their simplicity and low per-iteration cost, which makes them increasingly attractive for large-scale optimization problems. A good stepsize plays an important role in constructing an efficient gradient method. This paper proposes a new framework for generating stepsizes for gradient methods applied to convex quadratic minimization problems. By adopting different criteria, we propose four new gradient methods. For two-dimensional unconstrained problems with convex quadratic objective functions, we prove that the new methods either terminate in finitely many iterations or converge R-superlinearly; for n-dimensional problems, we prove that all the new methods converge R-linearly. Numerical experiments show that the new methods enjoy lower complexity and outperform existing gradient methods.
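
Since this preview does not include the new stepsize formulas themselves, the sketch below only illustrates the setting the abstract refers to: the gradient iteration \(x_{k+1}=x_k-\alpha_k g_k\) applied to a convex quadratic \(f(x)=\frac{1}{2}x^TAx-b^Tx\). The stepsize used here is the classical Barzilai–Borwein rule as a stand-in, and the function name and test problem are illustrative assumptions, not the methods proposed in the paper.

```python
# Minimal sketch (illustrative only, NOT the paper's new stepsizes):
# gradient iteration x_{k+1} = x_k - alpha_k * g_k for the convex quadratic
#   f(x) = 0.5 * x^T A x - b^T x,  whose gradient is g(x) = A x - b.
# The stepsize is the classical Barzilai-Borwein (BB1) rule, used as a placeholder.
import numpy as np

def gradient_method_bb(A, b, x0, tol=1e-8, max_iter=1000):
    x = np.array(x0, dtype=float)
    g = A @ x - b                          # gradient at the current iterate
    alpha = 1.0 / np.linalg.norm(g)        # simple initial stepsize
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - alpha * g              # gradient step
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g        # BB quantities s_k, y_k
        alpha = (s @ s) / (s @ y)          # BB1 stepsize for the next iteration
        x, g = x_new, g_new
    return x, k

# Small SPD test problem (hypothetical, for illustration only)
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star, iters = gradient_method_bb(A, b, np.zeros(3))
```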


Notes

  1. For example, in this test problem, the per-iteration computational cost of \({\text {ABB}}_{\min }\) is \(n^2+3n+\frac{5}{2}\), while that of Alg. 3 is \(\frac{1}{5}n^2+\frac{2}{5}n+\frac{3}{10}\); the two counts are compared numerically in the sketch below.
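
To make the cost comparison in the note concrete, the short script below simply evaluates the two per-iteration cost formulas quoted above at a few problem sizes \(n\); the formulas are copied from the note, and the script is only an illustrative check (the ratio approaches 5 as \(n\) grows).

```python
# Evaluate the per-iteration cost formulas quoted in the note above.
#   cost_ABBmin(n) = n^2 + 3n + 5/2
#   cost_Alg3(n)   = (1/5)n^2 + (2/5)n + 3/10
def cost_abb_min(n):
    return n**2 + 3*n + 2.5

def cost_alg3(n):
    return 0.2*n**2 + 0.4*n + 0.3

for n in (10, 100, 1000):
    ratio = cost_abb_min(n) / cost_alg3(n)
    print(f"n={n:5d}: ABB_min={cost_abb_min(n):.1f}, Alg. 3={cost_alg3(n):.1f}, ratio={ratio:.2f}")
```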


Acknowledgements

The authors would like to thank Prof. Ya-xiang Yuan from the Chinese Academy of Sciences, the editor, and the two anonymous reviewers for their valuable suggestions and comments.

Author information

Corresponding author

Correspondence to Cong Sun.

Additional information

This work is partially supported by NSFC Grants 11771056, 91630202, and 11871115, and by the Young Elite Scientists Sponsorship Program by CAST (2017QNRC001).

About this article


Cite this article

Sun, C., Liu, JP. New stepsizes for the gradient method. Optim Lett 14, 1943–1955 (2020). https://doi.org/10.1007/s11590-019-01512-y

