A feasible interior-point algorithm for nonconvex nonlinear programming

https://doi.org/10.1016/j.amc.2003.10.059

Abstract

In this paper, a new method based on the idea of the ε-effective set is presented for nonconvex, nonlinearly inequality-constrained problems. Only two systems of linear equations need to be solved per iteration. The theoretical analysis shows that the algorithm is globally convergent under suitable conditions.

Introduction

In this paper, we consider the following nonlinear programming problem (NLP):

min f0(x)
s.t. fj(x) ⩽ 0, j = 1, 2, …, m, (1.1)

where fj : Rn → R (j = 0, 1, …, m) are smooth functions.
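To make the problem class concrete, here is a small hypothetical instance of (1.1), written for illustration only (the objective, constraints, and test point are our own choices, not taken from the paper); it also encodes the strict-interior set Ω0 used throughout:

```python
import numpy as np

# A hypothetical instance of problem (1.1): minimize a nonconvex
# objective subject to nonlinear inequality constraints fj(x) <= 0.
def f0(x):
    # nonconvex objective (the cross term -x0*x1 breaks convexity jointly
    # with the negative linear term for this illustration)
    return x[0] ** 2 - x[0] * x[1] + x[1] ** 2 - 3.0 * x[0]

def f1(x):
    # nonlinear constraint: x must lie in a disc of radius 2
    return x[0] ** 2 + x[1] ** 2 - 4.0

def f2(x):
    # linear constraint: x0 + x1 >= 0, written as -x0 - x1 <= 0
    return -x[0] - x[1]

def is_strictly_feasible(x, constraints):
    # membership test for the strict interior Omega0 = {x : fj(x) < 0}
    return all(fj(x) < 0.0 for fj in constraints)

x = np.array([1.0, 1.0])
print(is_strictly_feasible(x, [f1, f2]))  # True: x lies in Omega0
```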

The method of feasible directions (MFD) is one of the most important methods in the field of optimization. For optimization problems with nonlinear inequality constraints arising in engineering design, [14] pointed out that feasible iterates are important because

  • solutions are required to be strictly feasible;

  • sometimes all iterates are required to be feasible;

  • often f0(x) is undefined outside of the feasible region of (1.1);

  • in applications, the optimization process may be stopped after a few iterations, yielding a feasible approximate solution.


Obviously, MFD can satisfy these harsh requirements. Moreover, in the actual computation it needs only the first-order derivatives of the objective and constraint functions, and the design of the algorithm is very simple. Because of these advantages, MFD is more widely applicable than many other algorithms. MFD was originally developed by Zoutendijk [1]; later, Topkis and Veinott [2], Pironneau and Polak [3], and Cawood and Kostreva [4] made amendments to Zoutendijk's method. However, since MFD uses only first-order derivatives of the objective and constraint functions, its convergence rate is slow.
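For readers unfamiliar with MFD, the classical Zoutendijk subproblem can be sketched as a small linear program (this is the textbook version of [1], not the algorithm proposed in this paper; the gradients below are toy data):

```python
import numpy as np
from scipy.optimize import linprog

def zoutendijk_direction(g0, active_grads):
    """Classical Zoutendijk feasible-direction LP (a sketch):
        min z  s.t.  g0^T d <= z,  gj^T d <= z for active j,  |d_i| <= 1.
    A value z < 0 means d is a feasible descent direction."""
    n = len(g0)
    # decision variables: (d_1, ..., d_n, z); minimize z
    c = np.zeros(n + 1)
    c[-1] = 1.0
    # each row encodes g^T d - z <= 0
    rows = [np.append(g, -1.0) for g in [g0] + list(active_grads)]
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  bounds=[(-1.0, 1.0)] * n + [(None, None)])
    d, z = res.x[:n], res.x[-1]
    return d, z

# Toy data: objective gradient and one active-constraint gradient.
g0 = np.array([1.0, 1.0])
g1 = np.array([2.0, 2.0])
d, z = zoutendijk_direction(g0, [g1])
print(z < 0)  # True: a strict feasible descent direction exists here
```

Note that only first-order information (gradients) enters the subproblem, which is exactly why, as remarked above, the convergence rate of MFD is slow.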

The interior-point method (IPM) is another class of optimization algorithms. Theory has already shown that IPM converges faster than MFD. IPM was originally proposed by Karmarkar [5] for linear programming, where it produced excellent theoretical and experimental results. Later, many authors paid attention to IPM and applied it to semidefinite programming [6], convex quadratic programming [7], etc. It remains a hot topic and has recently been extended to convex nonlinear programming [8]. Combined with techniques such as primal-dual [9], SQP [10], and trust-region [11] methods, it now takes many different forms.
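The classical interior-point idea can be illustrated with a minimal logarithmic barrier (a sketch with toy data; note that the algorithm of this paper deliberately avoids barrier functions):

```python
import numpy as np

def barrier(x, f0, constraints, mu):
    """Log-barrier merit function B(x) = f0(x) - mu * sum_j log(-fj(x)),
    defined only on the strict interior where every fj(x) < 0; it blows
    up near the boundary, which is what keeps iterates interior."""
    return f0(x) - mu * sum(np.log(-fj(x)) for fj in constraints)

# Toy one-dimensional problem: minimize x on [0, 1].
f0 = lambda x: x[0]
f1 = lambda x: x[0] - 1.0   # x <= 1
f2 = lambda x: -x[0]        # x >= 0

b_mid = barrier(np.array([0.5]), f0, [f1, f2], mu=0.1)
b_edge = barrier(np.array([0.001]), f0, [f1, f2], mu=0.1)
print(b_mid < b_edge)  # True: the barrier penalizes approach to the boundary
```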

In Ref. [12], a feasible-directions algorithm was proposed in which all iterates were prevented from leaving the strictly feasible set. The search direction was computed by solving two (m+n)×(m+n) systems of linear equations. When the constraint functions were twice continuously differentiable, the algorithm was proved to be globally convergent.

In this paper, nonconvex nonlinear programming is studied and a new algorithm is presented by combining MFD with IPM. This algorithm is simple; in addition, it needs neither a barrier function nor the above-mentioned techniques. At each iteration, the search direction is obtained as a convex combination of the solutions of one system of linear equations whose size is determined by n and the number of active constraints at x, and of another m×m system of linear equations. Furthermore, by using a group of parameters it retains the above-mentioned advantages of MFD, and it ensures that the iterates stay in the interior of the feasible region. Finally, we establish global convergence without any second-order derivative or convexity assumptions.

Section snippets

Description of algorithm

For the sake of simplicity, we denote

I = {1, 2, …, m},
Ω = {x ∈ Rn | fj(x) ⩽ 0, j ∈ I},
Ω0 = {x ∈ Rn | fj(x) < 0, j ∈ I},
I(x) = {j ∈ I | fj(x) = 0},
gj(x) = ∇fj(x), j = 0, 1, …, m.
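The active set I(x) can be computed directly from these definitions. The sketch below also includes an ε-relaxed variant, which is our own reading of the ε-effective set mentioned in the abstract (the paper's exact definition may differ); the constraints and test point are toy data:

```python
import numpy as np

def active_set(x, constraints, eps=0.0):
    """Index set of (eps-)active constraints at x:
    I(x) = {j : fj(x) = 0}, relaxed here to {j : fj(x) >= -eps} so that
    eps = 0 recovers the exact active set."""
    return [j for j, fj in enumerate(constraints) if fj(x) >= -eps]

f1 = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0   # active at (1, 1)
f2 = lambda x: x[0] - 5.0                    # inactive at (1, 1)
x = np.array([1.0, 1.0])
print(active_set(x, [f1, f2]))         # [0]: only f1 is exactly active
print(active_set(x, [f1, f2], eps=5))  # [0, 1]: f2 joins the eps-effective set
```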

Throughout this paper, the following assumptions are assumed.

H2.1 Ω ≠ ∅, Ω0 ≠ ∅, and fj (j = 0, 1, …, m) are continuously differentiable.

H2.2 For all x ∈ Ω, the vectors {gj(x), j ∈ I(x)} are linearly independent.
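Assumption H2.2 is equivalent to requiring that the matrix whose columns are the active-constraint gradients has full column rank, which can be checked numerically (a sketch with toy gradient vectors):

```python
import numpy as np

def gradients_independent(grads, tol=1e-10):
    """Numerical check of assumption H2.2: stack the active-constraint
    gradients as columns and test for full column rank."""
    A = np.column_stack(grads)
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

# independent gradients satisfy H2.2 ...
print(gradients_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
# ... parallel gradients violate it
print(gradients_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False
```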

Now, the algorithm for the solution of the problem (1.1) can be stated as follows.

Algorithm

  • Step 0

    Initialization and data:

    Given a starting point x1∈Ω0, and an initial symmetric positive definite matrix H

Global convergence of algorithm

In this section, we first show that the algorithm given in Section 2 is well defined, that is to say, that it is possible to execute all the steps defined above. Then we prove the global convergence of the algorithm.

Lemma 3.1

For any iteration k, there is no infinite cycle in Step 1.

Proof

Suppose that the desired conclusion is false, that is to say, there exists some k such that, for the kth iteration, there is an infinite cycle in Step 1. Then it holds that

det(Ak,iᵀAk,i) < ε0/2^i, i = 1, 2, …,

and by (2.1), we know that
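The determinant test above can be illustrated numerically. The sketch below is our reading of why the cycle must terminate (the matrix and constants are toy data): under H2.2 the Gram determinant det(AᵀA) of the active-constraint gradient matrix is a fixed positive number, while the threshold ε0/2^i shrinks geometrically, so the inequality det(AᵀA) < ε0/2^i can hold for at most finitely many i:

```python
import numpy as np

# Toy 3x2 matrix A with linearly independent columns (as in H2.2).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
delta = np.linalg.det(A.T @ A)  # Gram determinant; > 0 for independent columns

# Halve the threshold eps0/2^i until the test det(A^T A) < eps0/2^i fails.
eps0, i = 0.5, 1
while delta < eps0 / 2 ** i:
    i += 1
print(delta > 0 and delta >= eps0 / 2 ** i)  # True: the loop terminates
```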

Numerical experiments

In this section, we carry out numerical experiments based on the algorithm. The results show that the algorithm is effective.

During the numerical experiments, we set ε0 = 0.5, α = 0.5, θ = 0.8, σ = 0.5, and H = I, the n×n identity matrix.

This algorithm has been tested on some problems from Ref. [13], where no equality constraints are present for each problem. The results are summarized in Table 1. For each test problem, No. is the number of the test problem in [13], n the number of variables, m the number of

Acknowledgements

This work was supported in part by the NNSF (No. 10361003) and GIETSF (No. D20350) of China.

References (14)

  • G. Zoutendijk

    Methods of Feasible Directions

    (1960)
  • D.M. Topkis et al.

    On the convergence of some feasible direction algorithms for nonlinear programming

    SIAM J. Control

    (1967)
  • O. Pironneau et al.

    On the rate of convergence of certain methods of centers

    Math. Program.

    (1972)
  • M.E. Cawood et al.

    Norm-relaxed method of feasible directions for solving nonlinear programming problems

    JOTA

    (1994)
  • N. Karmarkar

    A new polynomial-time algorithm for linear programming

    Combinatorica

    (1984)
  • F. Alizadeh et al.

    Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results

    SIAM J. Optim.

    (1998)
  • J. Sun

    A convergence proof for an affine-scaling algorithm for convex quadratic programming without nondegeneracy assumptions

    Math. Program.

    (1993)
There are more references available in the full text version of this article.
