
Visualizing data as objects by DC (difference of convex) optimization

  • Full Length Paper
  • Series B

Mathematical Programming

Abstract

In this paper we address the problem of visualizing, as convex objects in a bounded region, a set of individuals endowed with a dissimilarity measure and a statistical value. This problem, which extends standard Multidimensional Scaling, is written as a global optimization problem whose objective is the difference of two convex functions (DC). Suitable DC decompositions allow us to use the Difference of Convex Algorithm (DCA) in a very efficient way. Our algorithmic approach is used to visualize two real-world datasets.
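The DCA mentioned above can be sketched on a one-dimensional toy problem. Everything in the snippet (the function f, the split f = g - h with g(x) = x^4 and h(x) = 2x^2, and the closed-form subproblem) is an illustrative assumption, not the model used in the paper; it only shows the mechanics of the algorithm: linearize the concave part -h at the current iterate and solve the resulting convex subproblem.

```python
import math

# DCA sketch on a toy DC program (illustrative only, not the paper's model):
#   minimize f(x) = x**4 - 2*x**2 = g(x) - h(x),
# with convex parts g(x) = x**4 and h(x) = 2*x**2.
#
# Each DCA step linearizes h at the current iterate x_k and solves the
# convex subproblem  x_{k+1} = argmin_x g(x) - h'(x_k)*x.  Here h'(x_k) = 4*x_k,
# and the subproblem's optimality condition 4*x**3 = 4*x_k gives the closed
# form x_{k+1} = cbrt(x_k).

def dca(x0, iters=60):
    x = x0
    for _ in range(iters):
        grad_h = 4.0 * x                                             # h'(x_k)
        x = math.copysign(abs(grad_h / 4.0) ** (1.0 / 3.0), grad_h)  # cbrt(x_k)
    return x

def f(x):
    return x ** 4 - 2 * x ** 2
```

Starting from x0 = 0.5 the iterates converge to the critical point x = 1 with f(1) = -1; as is typical for DCA, only convergence to a critical point of the DC program is guaranteed, not global optimality.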

Fig. 1
Fig. 2
Fig. 3



Acknowledgements

We thank the reviewers for their helpful suggestions and comments, which have been very valuable in strengthening the paper and improving its quality.

Author information


Corresponding author

Correspondence to Vanesa Guerrero.

Additional information

This research is funded in part by Project MTM2015-65915-R (Spain), P11-FQM-7603 and FQM-329 (Andalucía), all with EU ERD Funds, and VPPI-US from the University of Seville.

Appendix


1.1 Proof of Proposition 1

$$\begin{aligned}&{\lambda F_1+(1-\lambda ) F_2}\\&= \displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N\, \\ i\ne j \end{array}}\left\{ \lambda \left[ g_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )-\kappa \delta _{ij} \right] ^2-(1-\lambda )g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau )\right\} \\&=\displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N\, \\ i\ne j \end{array}} \left\{ (3\lambda -1)g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau )+2\lambda \kappa ^2\delta _{ij}^2-\lambda (g_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )+\kappa \delta _{ij})^2 \right\} \end{aligned}$$

In Sect. 3, the convexity of the function \(g_{ij}\) was stated. Moreover, since \(g_{ij}\), \(\lambda \), \(\delta _{ij}\ge 0\), then \(g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau )\), \(2\lambda \kappa ^2\delta ^2_{ij}\) and \((g_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )+\kappa \delta _{ij})^2\) are convex. Finally, \((3\lambda -1)g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau )\) is convex for \(3\lambda -1\ge 0\) and concave otherwise. \(\square \)
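The rearrangement used above can also be verified numerically. The following sketch (with arbitrary scalar sample values standing in for \(g_{ij}\), \(\delta _{ij}\), \(\kappa \) and \(\lambda \)) is only a sanity check of the algebraic identity, not part of the proof.

```python
import random

# Identity behind the DC decomposition in Proposition 1, for scalars:
#   lam*(g - kappa*delta)**2 - (1 - lam)*g**2
#     == (3*lam - 1)*g**2 + 2*lam*(kappa*delta)**2 - lam*(g + kappa*delta)**2

def both_sides(g, delta, kappa, lam):
    lhs = lam * (g - kappa * delta) ** 2 - (1.0 - lam) * g ** 2
    rhs = ((3.0 * lam - 1.0) * g ** 2
           + 2.0 * lam * (kappa * delta) ** 2
           - lam * (g + kappa * delta) ** 2)
    return lhs, rhs

random.seed(0)
for _ in range(1000):
    lhs, rhs = both_sides(*(random.uniform(-5, 5) for _ in range(4)))
    assert abs(lhs - rhs) < 1e-8  # both sides agree up to rounding
```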

1.2 Proof of Proposition 2

For convex sets \(A_1\) and \(A_2\) with nonempty interior, the condition in the definition of penetration depth stated in Sect. 2.2 is equivalent to the existence of a separating hyperplane between the sets \({\varvec{p}}+A_1\) and \(A_2,\) i.e., of some \({\varvec{\xi }}\ne 0,\) such that

$$\begin{aligned} {\varvec{\xi }}^\top ({\varvec{p}}+{\varvec{a}}_1) \le {\varvec{\xi }}^\top {\varvec{a}}_2 \qquad \forall {\varvec{a}}_1 \in A_1, {\varvec{a}}_2 \in A_2. \end{aligned}$$

Without loss of generality, we can consider \(\Vert {\varvec{\xi }}\Vert = 1\) and thus we have

$$\begin{aligned} \begin{array}{llll} \pi (A_1,A_2) &{} = &{} \displaystyle \min _{{\varvec{p}},{\varvec{\xi }}\in {\mathbb R}^n} &{} \Vert {\varvec{p}}\Vert \\ &{} &{} \text{ s.t. } &{} {\varvec{\xi }}^\top ({\varvec{p}}+{\varvec{a}}_1) \le {\varvec{\xi }}^\top {\varvec{a}}_2 \qquad \forall {\varvec{a}}_1 \in A_1, {\varvec{a}}_2 \in A_2\\ &{} &{} &{} \Vert {\varvec{\xi }}\Vert =1.\\ \end{array} \end{aligned}$$

Thus, \(h_{ij}\) can be written as follows

$$\begin{aligned} \begin{array}{llll} h_{ij}({\varvec{c}}_i, {\varvec{c}}_j,\tau ) &{} = &{} \displaystyle \min _{{\varvec{p}},{\varvec{\xi }}\in {\mathbb R}^n} &{} \Vert {\varvec{p}}\Vert \\ &{} &{} \text{ s.t. } &{} {\varvec{\xi }}^\top ({\varvec{p}}+{\varvec{c}}_i+\tau r_i {\varvec{x}}_i) \le {\varvec{\xi }}^\top ({\varvec{c}}_j+\tau r_j{\varvec{x}}_j) \qquad \forall {\varvec{x}}_i, {\varvec{x}}_j \in \mathcal {B}\\ &{} &{} &{} \Vert {\varvec{\xi }}\Vert =1. \end{array} \end{aligned}$$

Equivalently, the first constraint, i.e.,

$$\begin{aligned} {\varvec{\xi }}^\top ({\varvec{p}}+{\varvec{c}}_i+\tau r_i {\varvec{x}}_i) \le {\varvec{\xi }}^\top ({\varvec{c}}_j+\tau r_j{\varvec{x}}_j) \quad \forall {\varvec{x}}_i, {\varvec{x}}_j \in \mathcal {B}, \end{aligned}$$

can be written as follows,

$$\begin{aligned} {\varvec{\xi }}^\top ({\varvec{p}}+{\varvec{c}}_i)+\tau r_i \max _{{\varvec{x}} \in \mathcal {B}} {\varvec{\xi }}^\top {\varvec{x}} \le {\varvec{\xi }}^\top {\varvec{c}}_j+\tau r_j\min _{{\varvec{x}} \in \mathcal {B}} {\varvec{\xi }}^\top {\varvec{x}}. \end{aligned}$$

Let \(\sigma _{\mathcal {B}}\) be the support function of \(\mathcal {B},\) i.e.,

$$\begin{aligned} \sigma _{\mathcal {B}} (z) = \max _y \{y^\top z: \, y \in \mathcal {B}\}. \end{aligned}$$

Since \(\mathcal {B}\) is assumed to be symmetric with respect to the origin, we have

$$\begin{aligned} \displaystyle \max _{x \in \mathcal {B}} {\varvec{\xi }}^\top x= & {} \sigma _{\mathcal {B}}({\varvec{\xi }}) \\ \displaystyle \min _{x \in \mathcal {B}} {\varvec{\xi }}^\top x= & {} -\sigma _{\mathcal {B}}({\varvec{\xi }}). \end{aligned}$$

Hence, by replacing the expression of the support function in the constraint above, one has

$$\begin{aligned} \begin{array}{lllll} h_{ij}({\varvec{c}}_i,{\varvec{c}}_j,\tau ) &{} = &{} \displaystyle \min _{{\varvec{p}},{\varvec{\xi }}\in {\mathbb R}^n} &{} \Vert {\varvec{p}}\Vert \\ &{} &{} \text{ s.t. } &{} {\varvec{\xi }}^\top {\varvec{p}} \le {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)-\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }}) &{}\\ &{} &{} &{}\Vert {\varvec{\xi }}\Vert =1. \end{array} \end{aligned}$$

For \({\varvec{\xi }}\) fixed with \(\Vert {\varvec{\xi }}\Vert =1,\) let \(\eta ({\varvec{\xi }}) = {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)-\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }}).\) It follows that the inner minimum in \(h_{ij}({\varvec{c}}_i,{\varvec{c}}_j,\tau )\) is the distance from the origin to the halfspace \({\varvec{\xi }}^\top {\varvec{p}} \le \eta ({\varvec{\xi }})\): this distance equals 0 if the origin belongs to the halfspace, i.e., if \(0 \le {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)-\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }}),\) and equals \( - \eta ({\varvec{\xi }})\) otherwise. Hence

$$\begin{aligned} \begin{array}{llll} h_{ij}({\varvec{c}}_i,{\varvec{c}}_j,\tau ) &{} = &{} \displaystyle \min _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \, \max \left\{ 0, - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} \\ &{} = &{} \max \left\{ 0, \displaystyle \min _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} \right\} \end{array} \end{aligned}$$

For \({\varvec{\xi }}\) fixed, the function \(({\varvec{c}}_i,{\varvec{c}}_j,\tau ) \, \longmapsto - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\) is affine, and thus the function \(({\varvec{c}}_i,{\varvec{c}}_j,\tau ) \, \longmapsto \displaystyle \min _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} \), being a minimum of affine functions, is concave. Hence \(h_{ij}\), the maximum of 0 and a concave function, is DC, with decomposition

$$\begin{aligned} {h_{ij}({\varvec{c}}_i,{\varvec{c}}_j,\tau )}= & {} \max \left\{ 0, \displaystyle \min _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} \right\} \\= & {} \max \left\{ - \displaystyle \min _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} ,0\right\} \\&\qquad + \displaystyle \min _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ - {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)+\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} \\= & {} \max \left\{ \displaystyle \max _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)-\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} ,0 \right\} \\&\qquad - \displaystyle \max _{\begin{array}{c} {\varvec{\xi }}\in {\mathbb R}^n\\ \Vert {\varvec{\xi }}\Vert =1 \end{array}} \left\{ {\varvec{\xi }}^\top ( {\varvec{c}}_j-{\varvec{c}}_i)-\tau (r_i+r_j) \sigma _{\mathcal {B}}({\varvec{\xi }})\right\} \\= & {} u_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )-(u_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )-h_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )). \end{aligned}$$

\(\square \)
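For the particular case in which \(\mathcal {B}\) is the Euclidean unit ball, \(\sigma _{\mathcal {B}}({\varvec{\xi }})=\Vert {\varvec{\xi }}\Vert \), so on the unit sphere the inner minimum above equals \(\tau (r_i+r_j)-\Vert {\varvec{c}}_j-{\varvec{c}}_i\Vert \) and \(h_{ij}\) reduces to \(\max \{0,\tau (r_i+r_j)-\Vert {\varvec{c}}_j-{\varvec{c}}_i\Vert \}\), the usual penetration depth of two disks. The sketch below (centers and radii are arbitrary sample data) checks this closed form against a discretized minimization over unit directions:

```python
import math

# Penetration depth of two disks (B = Euclidean unit ball in the plane),
# following the formula derived for h_ij: minimize over unit directions xi
#   -xi^T (c_j - c_i) + tau*(r_i + r_j)*sigma_B(xi),  with sigma_B(xi) = ||xi|| = 1,
# then clip at 0.  Sample data in the tests is illustrative, not from the paper.

def h_disks(ci, cj, ri, rj, tau, n_dirs=10_000):
    # Discretize the unit circle and take the minimum of the affine expressions.
    dx, dy = cj[0] - ci[0], cj[1] - ci[1]
    best = min(
        -(math.cos(t) * dx + math.sin(t) * dy) + tau * (ri + rj)
        for t in (2 * math.pi * k / n_dirs for k in range(n_dirs))
    )
    return max(0.0, best)

def h_closed_form(ci, cj, ri, rj, tau):
    # max{0, tau*(r_i + r_j) - ||c_j - c_i||} for Euclidean disks.
    d = math.hypot(cj[0] - ci[0], cj[1] - ci[1])
    return max(0.0, tau * (ri + rj) - d)
```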

1.3 Proof of Proposition 3

Before giving the proof of Proposition 3, the following technical result is needed.

Lemma 1

Let \(\beta _{ij}\in {\mathbb R}\) be such that \(\beta _{ij}\ge 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2\), \(\forall \varvec{b}_i, \varvec{b}_j \in \mathcal {B}\). Then, \(g_{ij}^2\) can be expressed as a DC function, \(g_{ij}^2=u_{ij}-(u_{ij}-g_{ij}^2)\), where

$$\begin{aligned} u_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )= & {} 2\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+\beta _{ij}\tau ^2. \end{aligned}$$

Proof

$$\begin{aligned}&{g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau )}\\&\quad =\displaystyle \min _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}}\left\| \varvec{c}_i-\varvec{c}_j+\tau (r_i\varvec{b}_i-r_j\varvec{b}_j)\right\| ^2\\&\quad = \displaystyle \min _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ \Vert \varvec{c}_i-\varvec{c}_j\Vert ^2 +\tau ^2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2 +2\tau ( \varvec{c}_i-\varvec{c}_j)^\top (r_i\varvec{b}_i-r_j\varvec{b}_j ) \right\} \\&\quad =\displaystyle \min _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ \Vert \varvec{c}_i-\varvec{c}_j\Vert ^2 +\tau ^2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2 \right. \\&\qquad \left. + \Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+ \tau ^2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2 -\Vert \varvec{c}_i-\varvec{c}_j-\tau (r_i\varvec{b}_i-r_j\varvec{b}_j ) \Vert ^2 \right\} \\&\quad =2\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+\beta _{ij}\tau ^2\\&\qquad +\displaystyle \min _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ -\beta _{ij}\tau ^2+ 2\tau ^2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2- \Vert \varvec{c}_i-\varvec{c}_j-\tau \left( r_i\varvec{b}_i-r_j\varvec{b}_j\right) \Vert ^2 \right\} \\&\quad =2\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+\beta _{ij}\tau ^2\\&\qquad +\displaystyle \min _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ \tau ^2\left( 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2-\beta _{ij}\right) - \Vert \varvec{c}_i-\varvec{c}_j-\tau \left( r_i\varvec{b}_i-r_j\varvec{b}_j\right) \Vert ^2 \right\} \\&\quad =2\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+\beta _{ij}\tau ^2\\&\qquad -\displaystyle \max _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ \Vert \varvec{c}_i-\varvec{c}_j-\tau \left( r_i\varvec{b}_i-r_j\varvec{b}_j\right) \Vert ^2 - \tau ^2\left( 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2-\beta _{ij}\right) \right\} \end{aligned}$$

Observe that taking \(\beta _{ij}\in {\mathbb R}\) such that

$$\begin{aligned} 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2-\beta _{ij}\le 0 \quad \forall \varvec{b}_i,\varvec{b}_j\in \mathcal {B}, \end{aligned}$$

the function

$$\begin{aligned} (\varvec{c}_i,\varvec{c}_j,\tau )\longmapsto \Vert \varvec{c}_i-\varvec{c}_j-\tau \left( r_i\varvec{b}_i-r_j\varvec{b}_j\right) \Vert ^2 - \tau ^2\left( 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2-\beta _{ij}\right) \end{aligned}$$

is convex. Since the maximum of convex functions is convex, taking \(u_{ij}= 2\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+\beta _{ij}\tau ^2\) yields a DC decomposition for \(g_{ij}^2\) as in the statement. \(\square \)
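The proof's key rearrangement rests on the polarization-type identity \(\Vert \varvec{a}+\tau \varvec{b}\Vert ^2 = 2\Vert \varvec{a}\Vert ^2+2\tau ^2\Vert \varvec{b}\Vert ^2-\Vert \varvec{a}-\tau \varvec{b}\Vert ^2\), applied with \(\varvec{a}=\varvec{c}_i-\varvec{c}_j\) and \(\varvec{b}=r_i\varvec{b}_i-r_j\varvec{b}_j\). A numerical sanity check on random vectors (illustrative only):

```python
import random

# Identity used in the proof of Lemma 1 (a = c_i - c_j, b = r_i*b_i - r_j*b_j):
#   ||a + tau*b||^2 == 2*||a||^2 + 2*tau**2*||b||^2 - ||a - tau*b||^2.

def sq_norm(v):
    return sum(x * x for x in v)

def check(a, b, tau):
    lhs = sq_norm([ai + tau * bi for ai, bi in zip(a, b)])
    rhs = (2 * sq_norm(a) + 2 * tau ** 2 * sq_norm(b)
           - sq_norm([ai - tau * bi for ai, bi in zip(a, b)]))
    return lhs, rhs

random.seed(1)
for _ in range(100):
    a = [random.uniform(-3, 3) for _ in range(2)]
    b = [random.uniform(-3, 3) for _ in range(2)]
    lhs, rhs = check(a, b, random.uniform(0, 2))
    assert abs(lhs - rhs) < 1e-9  # both sides agree up to rounding
```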

We now prove Proposition 3.

If \(\lambda <\displaystyle \frac{1}{3}\), considering Proposition 1, one has

$$\begin{aligned} \lambda F_1+ (1-\lambda ) F_2= \displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}} \left\{ 2\lambda \kappa ^2\delta _{ij}^2- \left[ \lambda (g_{ij}+\kappa \delta _{ij})^2-(3\lambda -1)g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau ) \right] \right\} , \end{aligned}$$

and thus \(u= \displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}} 2\lambda \kappa ^2\delta _{ij}^2\) holds.

If \(\lambda \ge \displaystyle \frac{1}{3}\), by using the DC decomposition for \(g_{ij}^2\) obtained in Lemma 1 and Proposition 1, one has

$$\begin{aligned}&{\lambda F_1+ (1-\lambda ) F_2} \\&\quad =\displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}} \left\{ (3\lambda -1)g_{ij}^2(\varvec{c}_i,\varvec{c}_j,\tau ) +2\lambda \kappa ^2\delta _{ij}^2-\lambda (g_{ij}(\varvec{c}_i,\varvec{c}_j,\tau ) +\kappa \delta _{ij})^2\right\} \\&\quad =\displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}} \left\{ 2(3\lambda -1)\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2+(3\lambda -1)\beta _{ij}\tau ^2+2\lambda \kappa ^2\delta _{ij}^2 \right. \\&\qquad \left. - \left[ \lambda (g_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )+\kappa \delta _{ij})^2 \right. \right. + (3\lambda -1)\displaystyle \max _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ \Vert \varvec{c}_i-\varvec{c}_j-\tau \left( r_i\varvec{b}_i-r_j\varvec{b}_j\right) \Vert ^2 \right. \\&\qquad \left. \left. \left. - \tau ^2\left( 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2-\beta _{ij}\right) \right\} \right] \right\} \\&\quad =\displaystyle \sum _{i=1,\ldots ,N} \left\{ 8(3\lambda -1)(N-1)\Vert \varvec{c}_i\Vert ^2\right\} +(3\lambda -1)\tau ^2 \displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}} \beta _{ij}\\&\qquad +2\lambda \kappa ^2\displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}}\delta _{ij}^2 -\displaystyle \sum _{\begin{array}{c} i,j=1,\ldots ,N \\ i\ne j \end{array}}\left[ 2(3\lambda \!-\!1)\Vert \varvec{c}_i\!+\!\varvec{c}_j\Vert ^2\!+\!\lambda (g_{ij}(\varvec{c}_i,\varvec{c}_j,\tau )\!+\!\kappa \delta _{ij})^2\right. \\&\qquad +\left. (3\lambda -1)\displaystyle \max _{\varvec{b}_i,\varvec{b}_j\in \mathcal {B}} \left\{ \Vert \varvec{c}_i-\varvec{c}_j-\tau \left( r_i\varvec{b}_i-r_j\varvec{b}_j\right) \Vert ^2 \right. \right. \\&\left. \left. \qquad - \tau ^2\left( 2\Vert r_i\varvec{b}_i-r_j\varvec{b}_j\Vert ^2-\beta _{ij}\right) \right\} \right] \end{aligned}$$

\(\square \)
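The third equality above uses \(\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2 = 2\Vert \varvec{c}_i\Vert ^2+2\Vert \varvec{c}_j\Vert ^2-\Vert \varvec{c}_i+\varvec{c}_j\Vert ^2\), summed over ordered pairs \(i\ne j\), which yields \(\sum _{i\ne j} 2\Vert \varvec{c}_i-\varvec{c}_j\Vert ^2 = \sum _i 8(N-1)\Vert \varvec{c}_i\Vert ^2 - \sum _{i\ne j} 2\Vert \varvec{c}_i+\varvec{c}_j\Vert ^2\). A quick numerical check with arbitrary points (illustrative only):

```python
import random

# Check, over ordered pairs i != j, the rearrangement used in Proposition 3:
#   sum_{i!=j} 2*||c_i - c_j||^2
#     == sum_i 8*(N-1)*||c_i||^2 - sum_{i!=j} 2*||c_i + c_j||^2.

def sq_norm(v):
    return sum(x * x for x in v)

def both_sides(points):
    n = len(points)
    pairs = [(p, q) for i, p in enumerate(points)
                    for j, q in enumerate(points) if i != j]
    lhs = sum(2 * sq_norm([a - b for a, b in zip(p, q)]) for p, q in pairs)
    rhs = (sum(8 * (n - 1) * sq_norm(p) for p in points)
           - sum(2 * sq_norm([a + b for a, b in zip(p, q)]) for p, q in pairs))
    return lhs, rhs

random.seed(2)
pts = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(6)]
lhs, rhs = both_sides(pts)
assert abs(lhs - rhs) < 1e-8  # both sides agree up to rounding
```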


Cite this article

Carrizosa, E., Guerrero, V. & Romero Morales, D. Visualizing data as objects by DC (difference of convex) optimization. Math. Program. 169, 119–140 (2018). https://doi.org/10.1007/s10107-017-1156-1
