Abstract
We present 14 basic Fortran subroutines for large-scale unconstrained and box-constrained optimization and for large-scale systems of nonlinear equations. Subroutines PLIS and PLIP, intended for dense general optimization problems, are based on limited-memory variable metric methods. Subroutine PNET, also intended for dense general optimization problems, is based on an inexact truncated Newton method. Subroutines PNED and PNEC, intended for sparse general optimization problems, are based on modifications of the discrete Newton method. Subroutines PSED and PSEC, intended for partially separable optimization problems, are based on partitioned variable metric updates. Subroutine PSEN, intended for nonsmooth partially separable optimization problems, is based on partitioned variable metric updates and on an aggregation of subgradients. Subroutines PGAD and PGAC, intended for sparse nonlinear least-squares problems, are based on modifications and corrections of the Gauss-Newton method. Subroutine PMAX, intended for minimization of a maximum value (minimax), is based on a primal line-search interior-point method. Subroutine PSUM, intended for minimization of a sum of absolute values, is based on a primal trust-region interior-point method. Subroutines PEQN and PEQL, intended for sparse systems of nonlinear equations, are based on the discrete Newton method and the inverse column-update quasi-Newton method, respectively. In addition to describing the methods and codes, we report computational experiments that demonstrate the efficiency of the proposed algorithms.
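The limited-memory variable metric approach underlying PLIS and PLIP can be illustrated by the classical L-BFGS two-loop recursion, which builds a search direction from a few stored correction pairs. The sketch below is in Python rather than Fortran, is not the authors' implementation, and uses hypothetical names (`lbfgs_direction`, `s_list`, `y_list`); it assumes standard Shanno-Phua scaling for the initial matrix.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: returns -H*grad, where H is the implicit
    inverse-Hessian approximation built from pairs s_i = x_{i+1} - x_i,
    y_i = g_{i+1} - g_i (newest pairs last in the lists)."""
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # Initial scaling H0 = gamma * I (Shanno-Phua choice).
    if y_list:
        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest.
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return -r
```

For any stored pair the implicit approximation satisfies the secant condition H y = s (so `lbfgs_direction(y_list[-1], ...)` returns `-s_list[-1]`), and with no stored pairs the recursion reduces to steepest descent, all at O(mn) work and storage for m pairs.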
Supplemental Material
Software for LSA: Algorithms for large-scale optimization (available for download)
Index Terms
- Algorithm 896: LSA: Algorithms for large-scale optimization