A new global optimization algorithm for signomial geometric programming via Lagrangian relaxation

https://doi.org/10.1016/j.amc.2006.05.208

Abstract

In this paper, a global optimization algorithm is proposed for solving signomial geometric programming (SGP) problems; it relies on an exponential variable transformation of the SGP and on the Lagrangian duality of the transformed problem. The difficulty in utilizing Lagrangian duality within a global optimization context is that the restricted Lagrangian function for a given estimate of the Lagrange multipliers is often nonconvex. Minimizing a linear underestimation of the restricted Lagrangian overcomes this difficulty and facilitates the use of Lagrangian duality within a global optimization framework. In the new algorithm, the lower bounds are obtained by minimizing the linear relaxation of the restricted Lagrangian function for a given estimate of the Lagrange multipliers. A branch-and-bound algorithm is presented that relies on these Lagrangian relaxations to provide lower bounds and on the interval Newton method to accelerate convergence in the neighborhood of the global solution. Computational results show that the algorithm is efficient.

Introduction

In this paper, we consider the global optimization of signomial geometric programming (SGP) problems of the following form:

$$
\mathrm{SGP}(\Omega^0):\quad \min\ h_0(t)\quad \text{s.t.}\quad h_j(t)\le 1,\ \ j=1,\dots,m,\qquad \Omega^0=\{t:\,0<t^l\le t\le t^u\},
$$

where

$$
h_j(t)=\sum_{t=1}^{T_j}\alpha_{jt}\prod_{i=1}^{n}t_i^{\gamma_{jti}},\qquad j=0,1,\dots,m,
$$

each $T_j$ is a positive integer, and each $\alpha_{jt}$ and $\gamma_{jti}$ may be any real number. In general, formulation SGP corresponds to a nonlinear optimization problem with a nonconvex objective function and a nonconvex constraint set. Note that if we set $\alpha_{jt}\ge 0$ for all t = 1, …, T_j, j = 0, 1, …, m, then SGP reduces to the classical posynomial geometric programming formulation, which laid the foundation for the theory of the SGP problem.
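As a small illustration (our own, not taken from the paper), consider a two-variable instance that already shows the typical nonconvexity:

$$
\min_{t}\ t_1^{0.5}t_2^{-1}-2\,t_1 t_2^{0.3}
\qquad\text{s.t.}\qquad
0.5\,t_1^{-2}t_2+t_1t_2^{-1}\le 1,\qquad 1\le t_1,t_2\le 10.
$$

Here $T_0=T_1=2$, the exponents $\gamma_{jti}$ are arbitrary reals, and the negative coefficient $-2$ in the objective makes $h_0$ a signomial rather than a posynomial; removing that term (so that all $\alpha_{jt}\ge 0$) would give a classical posynomial GP instance.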

SGP has found a wide range of applications since its initial development. Though SGP is a special class of nonlinear programming, as noted in Refs. [1], [2], many nonlinear programs may be restated as geometric programs with very little additional effort by simple techniques such as a change of variables or straightforward algebraic manipulation of terms. Its greatest impact has been in the following areas:

  • (1) Engineering design [3], [4], [5];

  • (2) Manufacturing [6], [7];

  • (3) Chemical equilibrium [8], [9];

  • (4) Economics and statistics [10], [11], [12].

Hence, it is worthwhile to develop good algorithms for SGP.

Local optimization approaches for solving SGP problems fall, in general, into three categories. First, successive approximation by posynomials, called ‘condensation’, has been the most popular [13]. Second, Passy and Wilde [14] developed a weaker type of duality, called ‘pseudo-duality’, to accommodate this class of nonlinear optimization. Third, general nonlinear programming methods have been adapted to SGP [15].

Though local optimization methods for solving SGP problems are ubiquitous, finding a provably global solution to problems of this sort is difficult. Some specialized algorithms have been developed for the global optimization of SGP when each γ_{jti} is a positive integer or a rational number [16], [17], [18]. For the case in which each γ_{jti} is real, Maranas and Floudas [8] proposed a global optimization algorithm (RCA) based on the exponential variable transformation of SGP, convex relaxation, and branch-and-bound over a hyperrectangular region.

In this paper, a global optimization algorithm is proposed for problem SGP in which each γ_{jti} is assumed to be real; it is based on using the exponential transformation and Lagrangian duality to generate lower bounds on the optimal objective function value. Lagrangian duality is a well-known optimization tool that can be employed in a wide variety of contexts.

Two difficulties arise in attempting to use Lagrangian duality to solve a nonconvex problem. As will be seen in the following section, the first is the duality gap: solving the dual does not necessarily yield the primal solution. Recently, many researchers have studied the use of Lagrangian duality within a branch-and-bound framework that partitions the feasible region. They have proved that, for very general classes of nonconvex programs, with a suitably refined partitioning of the feasible set the duality gap is less than any specified tolerance ε [19], [20], [21]. In each of these papers, the results motivate a convergent branch-and-bound algorithm that uses Lagrangian duality to generate bounds.
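For concreteness (a standard sketch in the notation of Section 2, not the paper's exact statement), attach multipliers $\lambda\ge 0$ to the transformed constraints $f_j(x)\le 1$ and restrict $x$ to a box $B$:

$$
L(x,\lambda)=f_0(x)+\sum_{j=1}^{m}\lambda_j\bigl(f_j(x)-1\bigr),\qquad
\theta_B(\lambda)=\min_{x\in B}L(x,\lambda),\qquad
\max_{\lambda\ge 0}\theta_B(\lambda)\;\le\;\min\{f_0(x):x\in B,\ f_j(x)\le 1,\ j=1,\dots,m\}.
$$

By weak duality the inequality holds for every $\lambda\ge 0$, so each evaluation of $\theta_B$ is a valid lower bound; the duality gap is the difference between the two sides, which may be strictly positive for nonconvex $f_j$ but shrinks as the box $B$ is refined.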

While a suitable partitioning strategy can overcome the duality gap, the second difficulty with using Lagrangian duality for global optimization is that it requires the minimization of a nonconvex function. Thus, it typically is proposed for problems whose structure ensures a tractable minimization subproblem. The papers mentioned in the previous paragraph provide a nice illustration of this. To minimize a general nonconvex function over a polytope, Ben-Tal et al. [19] and Dür and Horst [20] use a convex envelope construction to ensure that the dual function generates a valid lower bound. Barrientos and Correa [21] transform quadratic programs so that their objective functions are separable; the Lagrangian subproblem thus reduces to the minimization of a separable quadratic function over variable bounds.

In our paper, (1) we propose a linear relaxation of the Lagrangian over the bound constraints, which is computationally more convenient than the convex relaxation of [8] and can be naturally incorporated into a branch-and-bound scheme; (2) an exhaustive partitioning process guarantees that the linear relaxation of the Lagrangian approaches the Lagrangian, so it is not surprising that the algorithm can be shown to converge to the global solution; (3) unlike the relaxations of [16], [17], [18], the generated relaxed linear program introduces no new variables or constraints. A sketch of the term-wise linear underestimation behind (1) and (3) follows.
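The following sketch illustrates the kind of term-wise construction meant in (1): a linear underestimator of a single signomial term $\alpha\exp(y)$, $y=\sum_i\gamma_i x_i$, over an interval $[y_l,y_u]$ implied by the bound constraints. It uses the tangent line for the convex case ($\alpha\ge 0$) and the chord for the concave case ($\alpha<0$); this is a standard construction given for illustration only, and the paper's exact relaxation may differ in its details.

```cpp
#include <cmath>

// Linear function l(y) = slope*y + intercept with l(y) <= a*exp(y) for all y in [yl, yu].
struct LinearUnder {
    double slope;
    double intercept;
};

// Assumes yl < yu.
// a >= 0: a*exp(y) is convex, so a tangent line (here taken at the midpoint)
//         underestimates it everywhere.
// a <  0: a*exp(y) is concave, so the chord through the endpoints
//         underestimates it on [yl, yu].
LinearUnder linear_underestimator(double a, double yl, double yu) {
    LinearUnder l;
    if (a >= 0.0) {
        double y0 = 0.5 * (yl + yu);
        l.slope = a * std::exp(y0);               // derivative of a*exp(y) at y0
        l.intercept = a * std::exp(y0) - l.slope * y0;
    } else {
        double fl = a * std::exp(yl);
        double fu = a * std::exp(yu);
        l.slope = (fu - fl) / (yu - yl);          // chord slope
        l.intercept = fl - l.slope * yl;
    }
    return l;
}
```

Because $y$ is linear in $x$, each such $l(y)$ is linear in $x$ as well; summing these term-wise underestimators (weighted by the multipliers) yields a linear lower-bounding function for the restricted Lagrangian without introducing any new variables or constraints.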

The plan of the paper is as follows. In the next section, the linear relaxation of the Lagrangian is presented for generating lower bounds for SGP. In Section 3, the proposed branch-and-bound algorithm in which the relaxed subproblems are embedded is described, and the convergence of the algorithm is established in Section 4. Numerical results for some problems in the area of engineering design are considered in Section 5, and the final section provides a summary.

Section snippets

Lagrangian relaxation

We apply the exponential transformation t_i = exp(x_i), i = 1, …, n, to the original formulation SGP(Ω^0) and obtain the following equivalent optimization problem:

$$
\mathrm{SGP}(B^0):\quad \min\ f_0(x)\quad \text{s.t.}\quad f_j(x)\le 1,\ \ j=1,\dots,m,\qquad B^0=\{x:\,\underline{x}^0=\ln t^l\le x\le \bar{x}^0=\ln t^u\},
$$

where

$$
f_j(x)=\sum_{t=1}^{T_j}\alpha_{jt}\exp\Bigl(\sum_{i=1}^{n}\gamma_{jti}x_i\Bigr)
=\sum_{t\in P_j}\alpha_{jt}\exp\Bigl(\sum_{i=1}^{n}\gamma_{jti}x_i\Bigr)
+\sum_{t\in N_j}\alpha_{jt}\exp\Bigl(\sum_{i=1}^{n}\gamma_{jti}x_i\Bigr),
$$

j = 0, 1, …, m, $P_j=\{t:\alpha_{jt}\ge 0,\ t=1,\dots,T_j\}$ and $N_j=\{t:\alpha_{jt}<0,\ t=1,\dots,T_j\}$. According to (2.2), every function $f_j$ can be expressed as the sum of a convex function and a concave function.
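Under a box $B=\{x:\underline{x}\le x\le\bar{x}\}$, each linear exponent can be bracketed by elementary interval arithmetic (an assumed but standard step, stated here only to connect the transformation above with the term-wise underestimators sketched in the introduction):

$$
Y^{l}_{jt}=\sum_{i=1}^{n}\min\{\gamma_{jti}\underline{x}_i,\ \gamma_{jti}\bar{x}_i\},\qquad
Y^{u}_{jt}=\sum_{i=1}^{n}\max\{\gamma_{jti}\underline{x}_i,\ \gamma_{jti}\bar{x}_i\},\qquad
\sum_{i=1}^{n}\gamma_{jti}x_i\in[Y^{l}_{jt},Y^{u}_{jt}]\quad\forall x\in B.
$$

Over $[Y^{l}_{jt},Y^{u}_{jt}]$ the term $\alpha_{jt}\exp(\sum_i\gamma_{jti}x_i)$ admits a linear underestimator in $x$ (tangent for $t\in P_j$, chord for $t\in N_j$), so minimizing the resulting linear function over $B$ is a linear program whose optimal value bounds $\min_{x\in B}L(x,\lambda)$ from below.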

The principal structure in the

Algorithmic statement

The algorithm presented below will subsequently be referred to as the GDCAB algorithm. The branch-and-bound approach is based on partitioning the set B^0 into sub-hyperrectangles, each associated with a node of the branch-and-bound tree, and each node is associated with a Lagrangian relaxation subproblem over its sub-hyperrectangle. Hence, at any stage k of the algorithm, suppose that we have a collection of active nodes denoted by Q_k, say, each associated with a hyperrectangle B ⊆ B^0, ∀B ∈ Q_k (a skeleton of this scheme is sketched below). For
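A minimal skeleton of such a scheme is sketched below, assuming two caller-supplied routines: one that solves the linear relaxation of the restricted Lagrangian over a box (the lower bound) and one that evaluates a feasible point found in the box (the upper bound). Both routine names are illustrative placeholders, not the paper's GDCAB code.

```cpp
#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <vector>

// Axis-aligned box (sub-hyperrectangle) B = [lo, hi].
struct Box { std::vector<double> lo, hi; };

// A node of the branch-and-bound tree: a box plus its Lagrangian lower bound.
struct Node { Box box; double lower_bound; };

// Order the active set Q_k so the node with the smallest lower bound is examined first.
struct ByBound {
    bool operator()(const Node& a, const Node& b) const {
        return a.lower_bound > b.lower_bound;
    }
};

// Generic skeleton: `lower_bound` solves the linear relaxation of the restricted
// Lagrangian over a box, `upper_bound` returns the value of a feasible point found
// in the box (or +infinity if none is found).
double branch_and_bound(const Box& b0, double eps,
                        const std::function<double(const Box&)>& lower_bound,
                        const std::function<double(const Box&)>& upper_bound) {
    std::priority_queue<Node, std::vector<Node>, ByBound> active;   // the active set Q_k
    double best_upper = std::numeric_limits<double>::infinity();
    active.push(Node{b0, lower_bound(b0)});

    while (!active.empty()) {
        Node node = active.top();
        active.pop();
        if (node.lower_bound >= best_upper - eps) break;  // every remaining node is pruned

        double ub = upper_bound(node.box);
        if (ub < best_upper) best_upper = ub;

        // Exhaustive partitioning: bisect the box along its longest edge.
        std::size_t k = 0;
        for (std::size_t i = 1; i < node.box.lo.size(); ++i)
            if (node.box.hi[i] - node.box.lo[i] > node.box.hi[k] - node.box.lo[k]) k = i;
        double mid = 0.5 * (node.box.lo[k] + node.box.hi[k]);

        Box left = node.box, right = node.box;
        left.hi[k] = mid;
        right.lo[k] = mid;
        for (const Box& child : {left, right}) {
            double lb = lower_bound(child);
            if (lb < best_upper - eps) active.push(Node{child, lb});
        }
    }
    return best_upper;
}
```

Best-bound-first node selection and bisection along the longest edge are one simple way to keep the partitioning exhaustive, in line with point (2) of the introduction.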

The convergence of the algorithm

To obtain global convergence we always suppose that the feasible set of problem (2.1) is nonempty and that the step-size parameters chosen in this algorithm satisfy the well-known divergent-series rule. For each region B_l, let t_l be its level within the branch-and-bound tree (i.e. if B_l has three ancestors, then t_l = 4). Let $v_h = 1/t_l$ and $\lambda_j^{h+1} = \max\{0,\ \lambda_j^h + v_h\varsigma_j^h\}$; then, if the algorithm does not stop finitely, $v_h \to 0$ and $\sum_{h=1}^{\infty} v_h = \infty$.
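As a small sketch of this update rule (where, under one natural reading assumed here, $\varsigma_j^h$ is the subgradient component $f_j(x^h)-1$ supplied by the Lagrangian subproblem; all names are illustrative, not from the paper's code):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One projected-subgradient step on the multipliers:
//   lambda_j <- max{0, lambda_j + v_h * residual_j},  with v_h = 1/t_l,
// where t_l is the level of the current box in the branch-and-bound tree.
void update_multipliers(std::vector<double>& lambda,
                        const std::vector<double>& residual,  // assumed: f_j(x^h) - 1
                        int tree_level)                       // t_l >= 1
{
    double step = 1.0 / static_cast<double>(tree_level);      // v_h
    for (std::size_t j = 0; j < lambda.size(); ++j)
        lambda[j] = std::max(0.0, lambda[j] + step * residual[j]);
}
```

Along any infinitely refined branch the level increases by one at each refinement, so these step sizes behave like the harmonic sequence $1/h$, which satisfies $v_h\to 0$ and $\sum_h v_h=\infty$.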

Theorem 4.1

If the algorithm stops finitely, then it terminates with the global optimal

Numerical experiments

In this section, we apply Algorithm GDCAB to the following optimization problems. Examples 3 and 4 are common test problems (including engineering process control and design problems). The numerical experiments show that our method is efficient. The algorithm is coded in C++ under Visual C++ 6.0, and the numerical tests were run on a PC with a 1.43 GHz CPU and 256 MB of EMS memory.

The numerical results obtained by Algorithm GDCAB can be seen in Table 1. In this table, IN denotes the iteration number, and t denotes the approximate

Conclusions

In this paper, a new branch-and-bound algorithm based on the Lagrangian dual is proposed to solve generalized geometric programming problems. The algorithm relies on the fact that a linear relaxation of the Lagrangian can be obtained and that an efficient partitioning strategy can reduce the duality gap to within any specified tolerance. The algorithm was shown to converge to the global minimum through the successive refinement of a linear relaxation of the Lagrangian and the subsequent solutions of

References (25)

  • N.K. Jha, Geometric programming based robot control design, Computers and Industrial Engineering (1995).
  • C.D. Maranas et al., Global optimization in generalized geometric programming, Computers and Chemical Engineering (1997).
  • P. Hansen et al., Reduction of indefinite quadratic programs to bilinear programs, Journal of Global Optimization (1992).
  • C.S. Beightler et al., Applied Geometric Programming (1976).
  • M. Avriel et al., An extension of geometric programming with applications in engineering optimization, Journal of Engineering Mathematics (1971).
  • T.R. Jefferson et al., Generalized geometric programming applied to problems of optimal control: I. Theory, JOTA (1978).
  • A.İ. Sönmez et al., Dynamic optimization of multipass milling operations via geometric programming, International Journal of Machine Tools and Manufacture (1999).
  • C.H. Scott et al., Allocation of resources in project management, International Journal on Systems Science (1995).
  • M.J. Rijckaert et al., Analysis and optimization of the Williams–Otto process by geometric programming, AIChE Journal (1974).
  • J.C. Choi et al., Effectiveness of a geometric programming algorithm for optimization of machining economics models, Computers & Operations Research (1996).
  • H.E. Barrel et al., Restricted multinomial maximum likelihood estimation based upon Fenchel duality, Statistics and Probability Letters (1994).
  • D.L. Bricker, K.O. Kortanek, L. Xu, Maximum likelihood estimates with order restrictions on probabilities and odds...