SDP-quality bounds via convex quadratic relaxations for global optimization of mixed-integer quadratic programs


Abstract

We consider the global optimization of nonconvex mixed-integer quadratic programs with linear equality constraints. In particular, we present a new class of convex quadratic relaxations which are derived via quadratic cuts. To construct these quadratic cuts, we solve a separation problem involving a linear matrix inequality with a special structure that allows the use of specialized solution algorithms. Our quadratic cuts are nonconvex, but define a convex feasible set when intersected with the equality constraints. We show that our relaxations are an outer-approximation of a semi-infinite convex program which under certain conditions is equivalent to a well-known semidefinite program relaxation. The new relaxations are implemented in the global optimization solver BARON, and tested by conducting numerical experiments on a large collection of problems. Results demonstrate that, for our test problems, these relaxations lead to a significant improvement in the performance of BARON.


References

1. Anderson, E., Bai, Z., Bischof, C., Blackford, S., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Sorensen, D.: LAPACK Users’ Guide, vol. 9. SIAM (1999)

2. Bao, X., Sahinidis, N.V., Tawarmalani, M.: Semidefinite relaxations for quadratically constrained quadratic programming: a review and comparisons. Math. Program. 129, 129–157 (2011)

3. Billionnet, A., Elloumi, S., Lambert, A.: An efficient compact quadratic convex reformulation for general integer quadratic programs. Comput. Optim. Appl. 54, 141–162 (2013)

4. Buchheim, C., Wiegele, A.: Semidefinite relaxations for non-convex quadratic mixed-integer programming. Math. Program. 141, 435–452 (2013)

5. Dong, H.: Relaxing nonconvex quadratic functions by multiple adaptive diagonal perturbations. SIAM J. Optim. 26, 1962–1985 (2016)

6. Faye, A., Roupin, F.: Partial Lagrangian relaxation for general quadratic programming. 4OR 5, 75–88 (2007)

7. Goemans, M.X., Williamson, D.P.: Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42, 1115–1145 (1995)

8. Khajavirad, A., Sahinidis, N.V.: A hybrid LP/NLP paradigm for global optimization relaxations. Math. Program. Comput. 10, 383–421 (2018)

9. Koopmans, T.C., Beckmann, M.: Assignment problems and the location of economic activities. Econometrica 25, 53–76 (1957)

10. Kılınç, M., Sahinidis, N.V.: Exploiting integrality in the global optimization of mixed-integer nonlinear programming problems in BARON. Optim. Methods Softw. 33, 540–562 (2019)

11. McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Math. Program. 10, 147–175 (1976)

12. Nohra, C.J., Raghunathan, A.U., Sahinidis, N.V.: Spectral relaxations and branching strategies for global optimization of mixed-integer quadratic programs. SIAM J. Optim. 31, 142–171 (2021)

13. Nohra, C.J., Raghunathan, A.U., Sahinidis, N.V.: A test set of quadratic, binary quadratic and integer quadratic programs. ftp://ftp.merl.com/pub/raghunathan/MIQP-TestSet/

14. Pardalos, P.M., Glick, J.H., Rosen, J.B.: Global minimization of indefinite quadratic problems. Computing 39, 281–291 (1987)

15. Phillips, A., Rosen, J.: A quadratic assignment formulation of the molecular conformation problem. J. Global Optim. 4, 229–241 (1994)

16. Saxena, A., Bonami, P., Lee, J.: Convex relaxations of non-convex mixed integer quadratically constrained programs: projected formulations. Math. Program. 130, 359–413 (2011)

17. Sherali, H.D., Adams, W.P.: A Reformulation-Linearization Technique for Solving Discrete and Continuous Nonconvex Problems. Nonconvex Optimization and Its Applications, vol. 31. Kluwer, Dordrecht (1999)

18. Sherali, H.D., Wang, H.: Global optimization of nonconvex factorable programming problems. Math. Program. 89, 459–478 (2001)

19. Shor, N.: Quadratic optimization problems. Sov. J. Comput. Syst. Sci. 25, 1–11 (1987)

20. Tawarmalani, M., Sahinidis, N.V.: Convex extensions and convex envelopes of lsc functions. Math. Program. 93, 247–263 (2002)

21. Tawarmalani, M., Sahinidis, N.V.: Global optimization of mixed-integer nonlinear programs: a theoretical and computational study. Math. Program. 99, 563–591 (2004)

22. Tuy, H.: DC optimization: theory, methods and algorithms. In: Horst, R., Pardalos, P.M. (eds.) Handbook of Global Optimization, pp. 149–216. Kluwer, Boston (1995)


Author information

Correspondence to Nikolaos V. Sahinidis.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Barrier coordinate minimization algorithm used to solve the nonsmooth regularized separation problem

In this appendix, we briefly describe the algorithm proposed by Dong [5] to solve the nonsmooth regularized separation problem (39), and then describe our implementation. The algorithm operates on the following penalized log-det problem:

$$\begin{aligned} \begin{array}{cl} \underset{d \in \mathbb{R}^n}{\inf}\;\; & h(d; \sigma) := \sum\limits_{i = 1}^{n} r_i(d_i) - \sigma \,\text{log-det}\left( Q + \text{diag}(d) + \alpha A^T A \right) \\ \mathrm{s.t.}\;\; & Q + \text{diag}(d) + \alpha A^T A \succ 0 \end{array} \end{aligned}$$
(62)

where \(r_i(d_i) = \beta _i d_i\) for \(d_i > 0\), \(r_i(d_i) = \eta _i d_i\) for \(d_i \le 0\), \(\beta _i = \eta _i + \lambda , \; \forall i \in [n]\), and \(\sigma >0\). Each iteration of this algorithm involves the update of a feasible vector \({\bar{d}}\) and an inverse matrix \(V:= {\left[ Q + \text {diag}({\bar{d}}) + \alpha A^T A \right] }^{-1}\). Based on the optimality condition for (62), this algorithm performs coordinate minimization by choosing an index i determined as:

$$\begin{aligned} \begin{array}{cl} i = \text {arg}\underset{j = 1, \dots , n}{\text {max}} \left\{ \left| {s({\bar{d}})}_j \right| \right\} , \;\; \text {with} \;\; s({\bar{d}}) = \text {arg}\underset{u \in {{\mathbb {R}}}^n}{\text {min}} \left\{ {\Vert u \Vert }_2 \, : \, u \in \partial h({\bar{d}}; \sigma ) \right\} . \end{array} \end{aligned}$$
(63)

where \(\partial h({\bar{d}}; \sigma )\) is the subdifferential of \(h({\bar{d}}; \sigma )\). This choice of i leads to a one-dimensional minimization problem similar to (46) but involving \(h({\bar{d}} + \varDelta d_i e_i; \sigma )\). This problem can be solved analytically to obtain the following formula for \(\varDelta d_i^*\) (see Section 4 in [5] for details):

$$\begin{aligned} \varDelta d_i^* = \left\{ \begin{array}{ll} \dfrac{\sigma}{\beta_i} - \dfrac{1}{V_{ii}}, & \text{if } -\bar{d}_i < -\dfrac{1}{V_{ii}} \;\; \text{or} \;\; \sigma \dfrac{V_{ii}}{1 - \bar{d}_i V_{ii}} > \beta_i,\\[2mm] -\bar{d}_i, & \text{if } -\bar{d}_i \ge -\dfrac{1}{V_{ii}} \;\; \text{and} \;\; \eta_i \le \sigma \dfrac{V_{ii}}{1 - \bar{d}_i V_{ii}} \le \beta_i,\\[2mm] \dfrac{\sigma}{\eta_i} - \dfrac{1}{V_{ii}}, & \text{if } -\bar{d}_i \ge -\dfrac{1}{V_{ii}} \;\; \text{and} \;\; \sigma \dfrac{V_{ii}}{1 - \bar{d}_i V_{ii}} < \eta_i. \end{array}\right. \end{aligned}$$
(64)
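To make the coordinate selection (63) and the analytic step (64) concrete, the following sketch (in Python/NumPy; not part of the original paper, and all function and argument names are illustrative assumptions) computes the componentwise minimum-norm subgradient, picks the coordinate of largest magnitude, and applies the analytic step. The closed form for \(s({\bar{d}})\) is our own derivation from the definitions above: coordinate j of the smooth part contributes \(-\sigma V_{jj}\), and \(\partial r_j({\bar{d}}_j)\) equals the interval \([\eta_j, \beta_j]\) at \({\bar{d}}_j = 0\) and a single slope elsewhere.

import numpy as np

def min_norm_subgradient(d_bar, V, sigma, beta, eta):
    """Minimum-norm element of the subdifferential of h(.; sigma) at d_bar.

    The subdifferential is the box [eta - sigma*diag(V), beta - sigma*diag(V)];
    where d_bar_j != 0 the penalty r_j is differentiable and the box collapses
    to a single slope (a formula derived from the definitions above)."""
    lo = eta - sigma * np.diag(V)
    hi = beta - sigma * np.diag(V)
    at_zero = np.clip(0.0, lo, hi)  # projection of 0 onto [lo_j, hi_j]
    return np.where(d_bar > 0, hi, np.where(d_bar < 0, lo, at_zero))

def coordinate_step(d_bar, V, sigma, beta, eta):
    """Choose the coordinate via (63) and return its analytic step from (64)."""
    s = min_norm_subgradient(d_bar, V, sigma, beta, eta)
    i = int(np.argmax(np.abs(s)))
    Vii = V[i, i]
    ratio = sigma * Vii / (1.0 - d_bar[i] * Vii)
    if (-d_bar[i] < -1.0 / Vii) or (ratio > beta[i]):
        step = sigma / beta[i] - 1.0 / Vii
    elif eta[i] <= ratio <= beta[i]:
        step = -d_bar[i]
    else:  # ratio < eta[i]
        step = sigma / eta[i] - 1.0 / Vii
    return i, step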

After calculating \(\varDelta d_i^*\) according to (64), \({\bar{d}}\) and V are updated using (50) and (51), respectively. Once (62) has been solved within a given precision, the penalty parameter \(\sigma \) is adjusted through a rule similar to (53):

$$\begin{aligned} \begin{aligned} \sigma \leftarrow \max \{\sigma _{\text {min}}, \sigma _{\text {upd}} \cdot \sigma \} \;\;\;\; \text {if} \;\;\;\; \dfrac{ {\Vert s({\bar{d}}) \Vert }_2 }{ {\Vert \beta \Vert }_2 } \le \epsilon _{\text {upd}} \end{aligned} \end{aligned}$$
(65)

where \(s({\bar{d}})\) is used as a measure of optimality. The relative improvement in the objective function of (39) is checked every \(\omega _{\text {check}} n\) iterations, and the algorithm terminates if this relative improvement is smaller than \(\epsilon _{\text {check}}\).
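The update of \({\bar{d}}\) and V and the penalty rule (65) can be sketched as follows (Python/NumPy). Since (50) and (51) are not reproduced in this appendix, the Sherman-Morrison rank-one refresh of the inverse below is an assumption about their role, and the function names are illustrative rather than the authors' implementation.

import numpy as np

def apply_coordinate_update(d_bar, V, i, step):
    """Set d_i <- d_i + step and refresh V = [Q + diag(d) + alpha*A^T A]^{-1}
    via a Sherman-Morrison rank-one update (assumed counterpart of (50)-(51))."""
    vi = V[:, i].copy()
    V -= (step / (1.0 + step * vi[i])) * np.outer(vi, vi)
    d_bar[i] += step
    return d_bar, V

def update_sigma(sigma, s, beta, sigma_min=1e-5, sigma_upd=0.8, eps_upd=0.03):
    """Penalty adjustment (65): shrink sigma once the scaled optimality
    measure ||s(d_bar)||_2 / ||beta||_2 drops below eps_upd."""
    if np.linalg.norm(s) / np.linalg.norm(beta) <= eps_upd:
        sigma = max(sigma_min, sigma_upd * sigma)
    return sigma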

The entire procedure is summarized in Algorithm 3. In our implementation, we use the initial perturbation \({\hat{d}} = -1.5 \lambda _{\text {min}} (Q + \alpha A^T A) \mathbb {1}\). We set the following parameters to the values recommended in [5]: \(\lambda = \sum _{i = 1}^{n} \eta _i\), \(\sigma _{\text {min}} = 10^{-5}\), \(\sigma _{\text {upd}} = 0.8\), and \(\epsilon _{\text {upd}} = 0.03\). We use \(\text {MaxIter} = 500 n\), \(\omega _{\text {check}} = 10\), and \(\epsilon _{\text {check}} = 10^{-4}\). The initial value \(\sigma _{\text {init}}\) is determined as:

$$\begin{aligned} \begin{aligned} \sigma _{\text {init}} = {{\text {median}} \left\{ \left| \dfrac{u_i}{V_{ii}} \right| \right\} }_{i = 1}^{n} \end{aligned} \end{aligned}$$
(66)

where \( u_i \in \partial r_i({\hat{d}}_i)\), and \(\partial r_i({\hat{d}}_i)\) is the subdifferential of \(r_i({\hat{d}}_i)\).
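Putting the pieces together, the following sketch (Python/NumPy) outlines the barrier coordinate minimization loop described above, reusing coordinate_step, min_norm_subgradient, apply_coordinate_update, and update_sigma from the earlier sketches. The periodic termination test here monitors h(d; σ) as a stand-in for the objective of (39), and the choice of subgradient \(u_i\) at a zero component is one admissible option; these choices, like all identifier names, are assumptions rather than the authors' exact implementation.

import numpy as np

def barrier_coordinate_minimization(Q, A, alpha, eta,
                                    omega_check=10, eps_check=1e-4, max_iter_mult=500):
    """Sketch of Algorithm 3 with the parameter values quoted above."""
    n = Q.shape[0]
    lam = np.sum(eta)                        # lambda = sum_i eta_i, per [5]
    beta = eta + lam

    # Initial perturbation d_hat = -1.5 * lambda_min(Q + alpha A^T A) * 1.
    M0 = Q + alpha * A.T @ A
    d = -1.5 * np.linalg.eigvalsh(M0)[0] * np.ones(n)
    V = np.linalg.inv(M0 + np.diag(d))

    # sigma_init from (66); u_i is a subgradient of r_i at d_hat_i
    # (beta_i if d_hat_i > 0, eta_i otherwise).
    u = np.where(d > 0, beta, eta)
    sigma = np.median(np.abs(u / np.diag(V)))

    def h(d, sigma):
        # Penalized objective (62), used only for the periodic improvement check.
        _, logdet = np.linalg.slogdet(M0 + np.diag(d))
        return np.sum(np.where(d > 0, beta * d, eta * d)) - sigma * logdet

    h_prev = h(d, sigma)
    for it in range(1, max_iter_mult * n + 1):   # MaxIter = 500 n
        i, step = coordinate_step(d, V, sigma, beta, eta)
        d, V = apply_coordinate_update(d, V, i, step)
        s = min_norm_subgradient(d, V, sigma, beta, eta)
        sigma = update_sigma(sigma, s, beta)
        # Terminate when the relative improvement over omega_check*n iterations is small.
        if it % (omega_check * n) == 0:
            h_curr = h(d, sigma)
            if abs(h_prev - h_curr) <= eps_check * max(1.0, abs(h_prev)):
                break
            h_prev = h_curr
    return d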

Cite this article

Nohra, C.J., Raghunathan, A.U. & Sahinidis, N.V. SDP-quality bounds via convex quadratic relaxations for global optimization of mixed-integer quadratic programs. Math. Program. 196, 203–233 (2022). https://doi.org/10.1007/s10107-021-01680-9

