
Algorithm 896: LSA: Algorithms for large-scale optimization

Published: 23 July 2009

Abstract

We present 14 basic Fortran subroutines for large-scale unconstrained and box-constrained optimization and for large-scale systems of nonlinear equations. Subroutines PLIS and PLIP, intended for dense general optimization problems, are based on limited-memory variable metric methods. Subroutine PNET, also intended for dense general optimization problems, is based on an inexact truncated Newton method. Subroutines PNED and PNEC, intended for sparse general optimization problems, are based on modifications of the discrete Newton method. Subroutines PSED and PSEC, intended for partially separable optimization problems, are based on partitioned variable metric updates. Subroutine PSEN, intended for nonsmooth partially separable optimization problems, is based on partitioned variable metric updates and on an aggregation of subgradients. Subroutines PGAD and PGAC, intended for sparse nonlinear least-squares problems, are based on modifications and corrections of the Gauss-Newton method. Subroutine PMAX, intended for minimization of a maximum value (minimax), is based on the primal line-search interior-point method. Subroutine PSUM, intended for minimization of a sum of absolute values, is based on the primal trust-region interior-point method. Subroutines PEQN and PEQL, intended for sparse systems of nonlinear equations, are based on the discrete Newton method and the inverse column-update quasi-Newton method, respectively. Besides describing the methods and codes, we report computational experiments that demonstrate the efficiency of the proposed algorithms.
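The LSA codes themselves are Fortran subroutines; as a rough illustration of the limited-memory variable metric idea underlying PLIS and PLIP, the following Python sketch implements the standard L-BFGS two-loop recursion with an Armijo backtracking line search. This is a generic textbook version, not the authors' implementation; all function and parameter names here are illustrative.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the limited-memory inverse-Hessian
    approximation built from the stored (s, y) pairs to -grad."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)
    # Initial scaling H0 = gamma * I with gamma = s'y / y'y.
    s, y = s_list[-1], y_list[-1]
    r = (np.dot(s, y) / np.dot(y, y)) * q
    # Second loop: oldest pair to newest (alphas were stored newest-first).
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r

def lbfgs_minimize(f, grad_f, x0, m=5, iters=100, tol=1e-8):
    """Minimize f by L-BFGS, keeping at most m correction pairs,
    so storage is O(m*n) instead of the O(n^2) of a full update."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    s_list, y_list = [], []
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, s_list, y_list) if s_list else -g
        # Simple Armijo backtracking line search.
        t, fx, slope = 1.0, f(x), np.dot(g, d)
        for _ in range(40):
            if f(x + t * d) <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-12:   # keep the pair only if curvature is positive
            s_list.append(s)
            y_list.append(y)
            if len(s_list) > m:    # discard the oldest pair
                s_list.pop(0)
                y_list.pop(0)
        x, g = x_new, g_new
    return x
```

Because only the last `m` pairs `(s, y)` are stored, memory grows linearly with the problem dimension, which is what makes this family of methods suitable for the large-scale dense problems that PLIS and PLIP target.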



