A smoothing Newton method for second-order cone optimization based on a new smoothing function
Introduction
The second-order cone optimization (SOCO) problem is a convex optimization problem in which a linear function is minimized over the intersection of an affine manifold with the Cartesian product of second-order cones. In this paper we consider the SOCO in the standard format

(P)  min c^T x  s.t.  Ax = b,  x ∈ K,

and the dual problem of (P) is given by

(D)  max b^T y  s.t.  A^T y + s = c,  s ∈ K,

where A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n, and K ⊆ R^n is the Cartesian product of second-order cones, i.e.,

K = K^{n1} × K^{n2} × ⋯ × K^{nr}

with n1 + n2 + ⋯ + nr = n, and the ni-dimensional second-order cone (SOC) is defined by

K^{ni} = { x = (x1; x̄) ∈ R × R^{ni−1} : x1 ⩾ ∥x̄∥ },

where ∥·∥ refers to the Euclidean norm of vectors. The interior of the SOC is given by

int K^{ni} = { x = (x1; x̄) ∈ R × R^{ni−1} : x1 > ∥x̄∥ }.

For simplicity, we use “;” to join vectors in a column. Thus, for instance, for vectors x, y and z we use (x; y; z) to represent (x^T, y^T, z^T)^T.
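As a small illustration of the definitions above, membership in the SOC and in its interior can be tested directly from the norm condition (a sketch; the function names are ours):

```python
import numpy as np

def in_soc(x):
    """True if x = (x1; x_bar) lies in the second-order cone K^n,
    i.e. x1 >= ||x_bar||."""
    x = np.asarray(x, dtype=float)
    return bool(x[0] >= np.linalg.norm(x[1:]))

def in_soc_interior(x):
    """True if x lies in the interior of K^n, i.e. x1 > ||x_bar||."""
    x = np.asarray(x, dtype=float)
    return bool(x[0] > np.linalg.norm(x[1:]))
```

For instance, (1; 1; 0) lies on the boundary of K^3: it belongs to the cone but not to its interior.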
Without loss of generality, in this paper, we assume that r = 1 and n1 = n in the subsequent analysis, since our analysis can be easily extended to general cases.
Throughout the paper, we make the following assumption.

Assumption 1.1 Both (P) and its dual (D) are strictly feasible.

Under Assumption 1.1, it is well known that both (P) and (D) have optimal solutions and their optimal values coincide [1], and the SOCO is equivalent to its optimality conditions

Ax = b,  A^T y + s = c,  x ∘ s = 0,  x ∈ K,  s ∈ K,   (3)

where x ∘ s = 0 is usually referred to as the complementarity condition. SOCO problems include linear optimization problems, convex quadratic optimization problems and quadratically constrained convex quadratic optimization problems as special cases [1]. In recent years, the SOCO problem has received considerable attention from researchers for its wide range of applications in many fields, such as engineering, optimal control and design, machine learning, robust optimization and combinatorial optimization (see, e.g., [8], [16], [21], [28], [30]). There is an extensive literature focusing on interior-point methods (IPMs) for solving the SOCO (see, e.g., [2], [10], [11], [15], [20], [22], [29], [34] and the references therein). IPMs typically deal with the following perturbation of the optimality conditions system

Ax = b,  A^T y + s = c,  x ∘ s = μe,  x ∈ int K,  s ∈ int K,   (4)

where μ > 0 and e is the identity element of the Jordan algebra associated with the SOC. These conditions are called the central path conditions, since they define a trajectory approaching the solution set as μ → 0. Conventional IPMs usually apply a Newton-type method to the equations in (4), with a suitable line search that deals with the constraints x ∈ int K and s ∈ int K explicitly.

Recently, great attention has been paid to smoothing Newton methods, partially due to their encouraging convergence properties and numerical results (e.g., [3], [4], [12], [13], [17], [18], [23], [24], [25]). However, some algorithms [3], [24] strongly depend on the assumptions of uniform nonsingularity and strict complementarity to obtain global convergence and local superlinear (or quadratic) convergence.
Without the uniform nonsingularity assumption, the algorithm given in [33] usually needs to solve two linear systems of equations and to perform at least two line searches per iteration. Qi et al. [25] proposed a class of new smoothing Newton methods for nonlinear complementarity problems and box-constrained variational inequalities under a nonsingularity assumption, which was shown to be locally superlinearly/quadratically convergent without strict complementarity. Jiang [14] gave another technique to reformulate the nonlinear complementarity problem as a square system of equations; the main feature of the method in [14] is the introduction of the additional function eμ − 1. Finally, by modifying and extending the method of Qi et al. [25], Chi and Liu [6] proposed a one-step smoothing Newton method for the SOCO. The method in [6] can start from an arbitrary point and is quadratically convergent without strict complementarity. Chi and Liu [7] also presented a non-interior continuation method for the SOCO based on a new smoothing function. Fang et al. [9] gave another smoothing Newton-type method for the SOCO and proved the quadratic convergence of their algorithm without strict complementarity. Motivated by these works, in this paper we introduce a new smoothing function by smoothing the symmetric perturbed Fischer–Burmeister function. Based on this smoothing function, we propose a smoothing Newton method for solving the SOCO. The new algorithm is based on the perturbed optimality conditions (4); the main difference from IPMs is that we reformulate (4) as a nonlinear system of equations and then apply Newton’s method to this system. It is shown that our algorithm has the following good properties:
- (i)
The algorithm is well-defined and a solution of SOCO can be obtained from any accumulation point of the iteration sequence generated by the algorithm.
- (ii)
The algorithm can start from an arbitrary point, and does not require the initial point and iteration points to be in the sets of strictly feasible solutions.
- (iii)
It needs to solve only one system of linear equations and to perform only one line search at each iteration.
- (iv)
The global and local quadratic convergence of the algorithm are obtained without strict complementarity.
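To make properties (i)–(iii) concrete, the following is a minimal sketch of a smoothing Newton iteration of this type, illustrated on a one-dimensional complementarity problem (find x ⩾ 0 with f(x) ⩾ 0 and x·f(x) = 0) instead of the full SOCO. The function name, the parameter values and the specific update rules are our illustrative choices, not the paper's Algorithm 4.1; the sketch does, however, solve exactly one linear system and perform one line search per step, and treats μ as an extra variable via the component eμ − 1:

```python
import numpy as np

def smoothing_newton_1d(f, fprime, x0, mu0=1.0, tol=1e-8, max_iter=50):
    """Smoothing Newton sketch for the scalar complementarity problem
    x >= 0, f(x) >= 0, x*f(x) = 0, using the smoothed Fischer-Burmeister
    function phi(mu, x, s) = x + s - sqrt(x^2 + s^2 + 2*mu^2)."""
    mu, x = mu0, x0
    for _ in range(max_iter):
        s, ds = f(x), fprime(x)
        r = np.sqrt(x*x + s*s + 2.0*mu*mu)
        H = np.array([np.exp(mu) - 1.0, x + s - r])
        if np.linalg.norm(H) < tol:
            break
        # One linear system per iteration: J d = -H
        J = np.array([[np.exp(mu), 0.0],
                      [-2.0*mu/r, (1.0 - x/r) + (1.0 - s/r)*ds]])
        d = np.linalg.solve(J, -H)
        # One Armijo-type backtracking line search on ||H||
        t = 1.0
        while True:
            mt, xt = mu + t*d[0], x + t*d[1]
            st = f(xt)
            rt = np.sqrt(xt*xt + st*st + 2.0*mt*mt)
            Ht = np.array([np.exp(mt) - 1.0, xt + st - rt])
            if np.linalg.norm(Ht) <= (1.0 - 0.25*t)*np.linalg.norm(H) or t < 1e-12:
                break
            t *= 0.5
        mu, x = mt, xt
    return x, mu
```

For example, with f(x) = x − 1 the unique solution is x = 1, s = 0, and μ is driven to zero automatically, because eμ − 1 vanishes only at μ = 0 and the Newton step keeps μ positive along the way.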
The paper is organized as follows. In the next section, we briefly introduce the Euclidean Jordan algebra associated with the SOC, which will be used in the subsequent sections. Based on the Fischer–Burmeister function, a new smoothing function and its properties are given in Section 3. In Section 4, we present a smoothing Newton method for solving the SOCO and state some preliminary results. The global convergence and locally quadratic convergence of the algorithm are investigated in Section 5. Preliminary numerical results are reported in Section 6. The conclusions are given in Section 7.
Some notations used throughout the paper are as follows. R^n, R^n_+ and R^n_++ denote the set of vectors with n components, the set of nonnegative vectors and the set of positive vectors, respectively. The space R^{n1} × ⋯ × R^{nr} is identified with R^{n1+⋯+nr}. I represents the identity matrix of suitable dimension. ∥x∥ denotes the 2-norm of a vector x, defined by ∥x∥ = (x^T x)^{1/2}. For any α, β > 0, α = O(β) (respectively, α = o(β)) means that α/β is uniformly bounded (respectively, tends to zero) as β → 0. For any x, s ∈ R^n, we write x ⪰K s or s ⪯K x (respectively, x ≻K s or s ≺K x) if x − s ∈ K (respectively, x − s ∈ int K). For any square matrix A ∈ R^{n×n}, we write A ⪰ 0 (respectively, A ≻ 0) if the symmetric part of A is positive semi-definite (respectively, positive definite).
Euclidean Jordan algebra associated with the SOC
Smoothing Newton methods for the SOCO are based on the Euclidean Jordan algebra associated with the SOC [1], [11]. For any vectors x = (x1; x̄), y = (y1; ȳ) ∈ R × R^{n−1}, their Jordan product associated with the SOC is defined by

x ∘ y = (x^T y; x1ȳ + y1x̄).

One easily checks that (R^n, ∘) is a Euclidean Jordan algebra with the vector e = (1; 0; …; 0) as its identity element. We write x2 to mean x ∘ x and write x + y to mean the usual componentwise addition of vectors. Then we have the following basic properties (Property 2.1, [1]).
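As a quick illustration, the Jordan product and the identity element above can be coded directly (a small sketch; the function name is ours):

```python
import numpy as np

def jordan_product(x, y):
    """Jordan product x o y = (x^T y ; x1*y_bar + y1*x_bar) on the SOC."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.concatenate(([x @ y], x[0]*y[1:] + y[0]*x[1:]))

def identity_element(n):
    """Identity element e = (1; 0; ...; 0) of the algebra (R^n, o)."""
    return np.concatenate(([1.0], np.zeros(n - 1)))
```

Note that x ∘ e = x for every x, while the product is commutative but not associative in general.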
A new smoothing function and its properties
In this subsection, we give a new smoothing function and study its properties. One popular choice of complementarity function associated with the SOC is the Fischer–Burmeister (FB) function, defined by

ϕFB(x, s) = x + s − (x2 + s2)^{1/2},

where the square root is taken in the Jordan algebra. In [12], it has been shown that ϕFB(x, s) satisfies the following important property:

ϕFB(x, s) = 0  ⟺  x ∈ K, s ∈ K, x ∘ s = 0.

The FB function has many interesting properties. In [23], Pan and Chen showed that ϕFB is semismooth, which leads to a semismooth Newton method for second-order cone complementarity problems.
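The FB function can be evaluated numerically via the spectral decomposition of the Jordan algebra, z = λ1u1 + λ2u2 with λ1,2 = z1 ∓ ∥z̄∥, which gives the square root in closed form. The following sketch (function names and the scalar-edge handling are ours) assumes a single cone K^n:

```python
import numpy as np

def soc_sqrt(z):
    """Jordan-algebra square root of z in K^n via the spectral
    decomposition z = lam1*u1 + lam2*u2, lam_{1,2} = z1 -+ ||z_bar||."""
    z = np.asarray(z, dtype=float)
    nz = np.linalg.norm(z[1:])
    # Any unit vector works for w when z_bar = 0 (lam1 = lam2 then).
    w = z[1:] / nz if nz > 0 else np.zeros_like(z[1:])
    lam1, lam2 = z[0] - nz, z[0] + nz
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0],  w))
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def phi_fb(x, s):
    """Fischer-Burmeister function phi_FB(x, s) = x + s - (x^2 + s^2)^{1/2},
    using x^2 = (||x||^2 ; 2*x1*x_bar) under the Jordan product."""
    x, s = np.asarray(x, dtype=float), np.asarray(s, dtype=float)
    sq = np.concatenate(([x @ x + s @ s],
                         2.0*x[0]*x[1:] + 2.0*s[0]*s[1:]))
    return x + s - soc_sqrt(sq)  # x^2 + s^2 always lies in K^n
```

For example, x = (1; 1) and s = (1; −1) lie in K^2 and satisfy x ∘ s = 0, so ϕFB(x, s) = 0, consistent with the equivalence above.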
Algorithm description
Based on the smoothing function defined by (11), we propose a smoothing Newton method for the SOCO and subsequently show that the algorithm is well defined.
Let z ≔ (μ, x, y). By using the smoothing function (11), we define the function H(z) in (19).
In view of (10) and (19), z∗ = (μ∗, x∗, y∗) is a solution of the system H(z) = 0 if and only if μ∗ = 0 and (x∗, y∗, c − A^T y∗) satisfies the optimality conditions (3). Therefore, z∗ is a solution of H(z) = 0 if and only if it solves the SOCO.
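Although the precise definition of H appears in (19), its overall structure can be sketched as follows. This is our reading of the construction, not a quotation of the paper: ϕ denotes the smoothing function (11), and the eμ − 1 component, highlighted again in the conclusions, is what forces μ → 0:

```latex
H(z) \;=\; H(\mu, x, y) \;=\;
\begin{pmatrix}
  e^{\mu} - 1 \\
  Ax - b \\
  \phi\bigl(\mu,\; x,\; c - A^{T}y\bigr)
\end{pmatrix},
\qquad z = (\mu, x, y).
```

Under this reading, H(z∗) = 0 forces μ∗ = 0, primal feasibility Ax∗ = b, and ϕ(0, x∗, s∗) = 0 with s∗ = c − A^T y∗, i.e. the complementarity part of (3).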
Convergence analysis
In this section, we show that any accumulation point of the iteration sequence {zk ≔ (μk, xk, yk)} is a solution of the system H(z) = 0. If the accumulation point z∗ satisfies a nonsingularity assumption, then the iteration sequence converges to z∗ locally quadratically without strict complementarity. First, we prove the global convergence of Algorithm 4.1. Theorem 5.1 (Global convergence) Suppose that A has full row rank and that {zk} is the iteration sequence generated by Algorithm 4.1. Then any accumulation point of {zk} is a solution of H(z) = 0.
Numerical results
In order to evaluate the efficiency of Algorithm 4.1, we have conducted some numerical experiments. All experiments were performed on a personal computer (IBM R40e) with 512 MB memory and Intel (R) Pentium (R) 4 CPU 2.00 GHz. The operating system was Windows XP (SP2) and the implementations were done in MATLAB 7.0.1.
In all these experiments, we chose the same starting point. The parameters used in Algorithm 4.1 were μ0 = 0.01, σ = 0.35, δ = 0.75 and γ = 0.65. We used ∥H(zk)∥ ⩽ 10−6 as the stopping criterion.
Conclusions
In this paper, we propose a new smoothing function by smoothing the symmetric perturbed Fischer–Burmeister function. Based on the new smoothing function, we present a smoothing Newton algorithm for solving the SOCO. The global convergence and locally quadratic convergence of the algorithm are proved without strict complementarity. Numerical results show that our algorithm performs well.
It is obvious that the function eμ − 1 plays an important role in the analysis of our algorithm. One may find other functions with similar properties, which deserves further study.
Acknowledgements
This work was supported by National Natural Science Foundation of China (10571109, 10971122), Natural Science Foundation of Shandong Province (Y2008A01) and Specialized Research Foundation for the Doctoral Program of Higher Education (20093718110005).
The authors thank the anonymous referees for their valuable comments and suggestions on the paper, which have considerably improved the paper.
References (34)
- Primal–dual interior-point algorithms for second-order cone optimization based on kernel functions, Nonlinear Analysis (2009)
- A one-step smoothing Newton method for second-order cone programming, Journal of Computational and Applied Mathematics (2009)
- A new smoothing Newton-type method for second-order cone programming problems, Applied Mathematics and Computation (2009)
- Applications of second-order cone programming, Linear Algebra and its Applications (1998)
- The convergence of a one-step smoothing Newton method for P0-NCP based on a new smoothing NCP-function, Journal of Computational and Applied Mathematics (2008)
- Robust BMPM training based on second-order cone programming and its application in medical diagnosis, Neural Networks (2008)
- A primal–dual interior-point algorithm for second-order cone optimization with full Nesterov–Todd step, Applied Mathematics and Computation (2009)
- Second-order cone programming, Mathematical Programming (2003)
- A global linear and local quadratic non-interior continuation method for nonlinear complementarity problems based on Chen–Mangasarian smoothing functions, SIAM Journal on Optimization (1999)
- An unconstrained smooth minimization reformulation of the second-order cone complementarity problem, Mathematical Programming Series B (2005)
- A non-interior continuation method for second-order cone programming, Optimization
- An efficient support vector machine learning method with second-order cone programming for large-scale problems, Applied Intelligence
- Analysis on Symmetric Cones
- Euclidean Jordan algebras and interior-point algorithms, Positivity
- Smoothing functions for second-order-cone complementarity problems, SIAM Journal on Optimization
- A combined smoothing and regularized method for monotone second-order cone complementarity problems, SIAM Journal on Optimization