
Spectral simplex method

Mathematical Programming

Abstract

We develop an iterative optimization method for finding the maximal and minimal spectral radius of a matrix over a compact set of nonnegative matrices. We consider matrix sets with product structure, i.e., all rows are chosen independently from given compact sets (row uncertainty sets). If all the uncertainty sets are finite or polyhedral, the algorithm finds the matrix with maximal/minimal spectral radius within a few iterations. It is proved that the algorithm avoids cycling and terminates within finite time. The proofs are based on spectral properties of rank-one corrections of nonnegative matrices. The practical efficiency is demonstrated in numerical examples and statistics in dimensions up to 500. Some generalizations to non-polyhedral uncertainty sets, including Euclidean balls, are derived. Finally, we consider applications to spectral graph theory, mathematical economics, dynamical systems, and difference equations.
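As a rough illustration of the method's idea for finite row uncertainty sets, the following Python sketch performs greedy row exchanges against the current leading eigenvector. This is a reconstruction from the abstract and the proof of Corollary 1 in the appendix, not the paper's own pseudocode; the function names are illustrative, and the sketch assumes every intermediate matrix has a real leading eigenvector with nonnegative entries (which holds, e.g., when all candidate rows are strictly positive).

```python
import numpy as np

def leading_pair(A):
    """Spectral radius and a leading eigenvector of a square matrix."""
    w, V = np.linalg.eig(A)
    k = int(np.argmax(np.abs(w)))
    v = np.real(V[:, k])
    if v.sum() < 0:                 # fix the sign; the Perron vector of a
        v = -v                      # positive matrix is then positive
    return float(np.abs(w[k])), v

def spectral_simplex_max(rows, max_iter=1000):
    """Greedy maximization of the spectral radius over a product set:
    rows[i] is a finite list of candidate i-th rows.  Each iteration
    replaces one row by a candidate with a strictly larger inner
    product with the current leading eigenvector."""
    A = np.array([cands[0] for cands in rows], dtype=float)
    for _ in range(max_iter):
        rho, v = leading_pair(A)
        for i, cands in enumerate(rows):
            scores = [np.dot(r, v) for r in cands]
            j = int(np.argmax(scores))
            if scores[j] > np.dot(A[i], v) + 1e-12:
                A[i] = cands[j]     # one row update per iteration
                break
        else:                       # no row can be improved: done
            return rho, A
    return leading_pair(A)[0], A
```

On small examples with positive candidate rows, the sketch recovers the brute-force maximum over all row combinations within a handful of iterations, matching the finite-termination behavior described in the abstract.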

References

  1. Blondel, V.D., Nesterov, Y.: Polynomial-time computation of the joint spectral radius for some sets of nonnegative matrices. SIAM J. Matrix Anal. Appl. 31(3), 865–876 (2009)

  2. Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)

  3. Brualdi, R.A., Hoffman, A.J.: On the spectral radius of (0, 1)-matrices. Linear Algebra Appl. 65, 133–146 (1985)

  4. Cantor, D.G., Lippman, S.A.: Optimal investment selection with a multitude of projects. Econometrica 63, 1231–1240 (1995)

  5. Cvetković, D., Doob, M., Sachs, H.: Spectra of Graphs. Theory and Application. Academic Press, New York (1980)

  6. Engel, G.M., Schneider, H., Sergeev, S.: On sets of eigenvalues of matrices with prescribed row sums and prescribed graph. Linear Algebra Appl. 455, 187–209 (2014)

  7. Fainshil, L., Margaliot, M.: A maximum principle for the stability analysis of positive bilinear control systems with applications to positive linear switched systems. SIAM J. Control Optim. 50(4), 2193–2215 (2012)

  8. Friedland, S.: The maximal eigenvalue of 0–1 matrices with prescribed number of ones. Linear Algebra Appl. 69, 33–69 (1985)

  9. Jungers, R., Protasov, V.Yu., Blondel, V.: Efficient algorithms for deciding the type of growth of products of integer matrices. Linear Algebra Appl. 428(10), 2296–2312 (2008)

  10. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1990)

  11. Kozyakin, V.S.: A short introduction to asynchronous systems. In: Aulbach, B., Elaydi, S., Ladas, G. (eds.) Proceedings of the Sixth International Conference on Difference Equations (Augsburg, Germany 2001): New Progress in Difference Equations, pp. 153–166. CRC Press, Boca Raton (2004)

  12. Leontief, W.: Input–Output Economics, 2nd edn. Oxford University Press, New York (1986)

  13. Liberzon, D.: Switching in Systems and Control. Birkhauser, Boston (2003)

  14. Lin, H., Antsaklis, P.J.: Stability and stabilizability of switched linear systems: a survey of recent results. IEEE Trans. Autom. Control 54(2), 308–322 (2009)

  15. Liu, B.: On an upper bound of the spectral radius of graphs. Discrete Math. 308(2), 5317–5324 (2008)

  16. Logofet, D.O.: Matrices and Graphs: Stability Problems in Mathematical Ecology. CRC Press, Boca Raton (1993)

  17. Meyer, C.D., Stewart, G.W.: Derivatives and perturbations of eigenvectors. SIAM J. Numer. Anal. 25(3), 679–691 (1988)

  18. Miller, K.S.: Linear Difference Equations. Elsevier, Amsterdam (2000)

  19. Nesterov, Y., Protasov, V.Yu.: Optimizing the spectral radius. SIAM J. Matrix Anal. Appl. 34(3), 999–1013 (2013)

  20. Olesky, D.D., Roy, A., van den Driessche, P.: Maximal graphs and graphs with maximal spectral radius. Linear Algebra Appl. 346, 109–130 (2002)

  21. Pachpatte, B.G.: Integral and Finite Difference Inequalities and Applications. W. A. Benjamin Inc., New York-Amsterdam (1968)

  22. Chvátal, V.: Linear Programming. W. H. Freeman, New York (1983)

  23. Stewart, G.W., Sun, J.G.: Matrix Perturbation Theory. Academic Press, New York (1990)

  24. Vladimirov, A.G., Grechishkina, N., Kozyakin, V., Kuznetsov, N., Pokrovskii, A., Rachinskii, D.: Asynchronous systems: theory and practice. Inform. Process. 11, 1–45 (2011)

Acknowledgments

The author is grateful to both anonymous referees for attentive reading and for valuable remarks and suggestions. A large part of this research was carried out during my visit to the department of Mathematical Engineering, Université Catholique de Louvain (UCL), Belgium. I am grateful to the university for hospitality. I also express my thanks to Yuri Nesterov for useful discussions and to Olga Kiseleva for help in many numerical experiments for Section 5.

Author information

Corresponding author

Correspondence to Vladimir Yu. Protasov.

Additional information

The work was supported by RFBR Grants Nos. 13-01-00642 and 14-01-00332, and by a grant from the Dynasty Foundation.

Appendix

Lemma 5

Let \(A\) and \(X\) be \(d\times d\) matrices with \(\mathrm{rank}(X) = 1\). Then the coefficients of the characteristic polynomial of the matrix \(A_s = A + s X, \ s \in {\mathbb {R}},\) are affine functions of \(s\).

Proof

Consider the factorization \(X = \varvec{a} \varvec{b}^T\), where \(\varvec{a}, \varvec{b} \in {\mathbb {R}}^d\), and pass to a basis in \({\mathbb {R}}^d\) whose first basis vector is \(\varvec{a}\). In this basis, \(X\) has only one nonzero row (the first one). Since the determinant of a matrix depends linearly on its first row, the characteristic polynomial \(p(\tau ) = \mathrm{det} (\tau I - A - sX)\) depends affinely on \(s\). \(\square \)
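Lemma 5 is easy to spot-check numerically: for a random matrix \(A\) and a random rank-one \(X\), the coefficient vector of the characteristic polynomial of \(A + sX\) should be the affine interpolation between the coefficients at \(s=0\) and \(s=1\). The sketch below uses NumPy; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.random((d, d))
X = np.outer(rng.random(d), rng.random(d))   # rank-one matrix a b^T

def charpoly(M):
    """Coefficients of det(tau*I - M), leading coefficient first."""
    return np.poly(M)

p0 = charpoly(A)          # s = 0
p1 = charpoly(A + X)      # s = 1
# Affine dependence on s means p_s = (1 - s) p0 + s p1 for every s,
# including values outside [0, 1].
deviation = max(
    np.max(np.abs(charpoly(A + s * X) - ((1 - s) * p0 + s * p1)))
    for s in (0.3, 0.7, 2.0, -1.5)
)
```

The deviation is zero up to floating-point error, confirming that each coefficient is an affine function of \(s\).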

Proof of Proposition 4

Denote \(\lambda _0 = \rho (A), \lambda _1 = \rho (B)\) and assume, to the contrary, that \(\lambda _1\) has multiplicity \(m \ge 2\). For an arbitrary \(s \in [0,1]\), we consider the matrix \(A_s = (1-s)A + sB\) and its characteristic polynomial \(p_s(\tau ) = \mathrm{det}\, (\tau I - A_s)\). In particular, \(p_0\) and \(p_1\) are the characteristic polynomials of \(A_0 = A\) and \(A_1 = B\) respectively. By Lemma 5, we have

$$\begin{aligned} p_s=(1-s)\, p_0+s\, p_1. \end{aligned}$$
(14)

Since \(\lambda _1\) is a root of multiplicity \(m\) of the polynomial \(p_1\), we have \(p_1(\tau ) = (\tau - \lambda _1)^m q(\tau )\), where \(q\) is a polynomial with \(q(\lambda _1)\ne 0\). If \(m\) is even, then there is \(\delta > 0\) such that \(p_1(\tau ) \ge 0\) whenever \(|\tau - \lambda _1| < \delta \); indeed, \(\lambda _1\) is the largest real root of the monic polynomial \(p_1\), so \(p_1 > 0\) on \((\lambda _1, +\infty )\), and hence \(q(\lambda _1) > 0\). On the other hand, since \(\lambda _1 > \lambda _0\) and \(p_0(\tau )\) is positive on the interval \((\lambda _0, +\infty )\), for small enough \(\delta > 0\) there is \(\varepsilon > 0\) such that \(p_0(\tau ) > \varepsilon \) whenever \(|\tau - \lambda _1| < \delta \). Hence, by (14), \(p_s(\tau ) > 0\) at those \(\tau \), for every \(s \in (0,1)\). Since \(A_s \ge 0\), its spectral radius \(\rho (A_s)\) is the largest real root of the polynomial \(p_s\). Therefore, \(|\rho (A_s) - \lambda _1| \ge \delta \) for all \(s \in (0,1)\), which contradicts the continuity of the spectral radius as \(s\) tends to one, because \(\rho (A_1) = \lambda _1\). Now, if \(\lambda _1\) has an odd multiplicity \(m \ge 3\), then the matrix \(B\) is reducible, as follows from the Perron–Frobenius theorem: there is a permutation of the canonical basis after which \(B\) takes the following upper-triangular block form:

$$\begin{aligned} B=\left( \begin{array}{cccc} B^{(1)} &amp; * &amp; \ldots &amp; * \\ 0 &amp; B^{(2)} &amp; \ddots &amp; \vdots \\ \vdots &amp; &amp; \ddots &amp; * \\ 0 &amp; \ldots &amp; 0 &amp; B^{(r)} \end{array} \right) , \end{aligned}$$
(15)

where \(r \ge m\), each matrix \(B^{(i)}\) is irreducible, exactly \(m\) of these blocks have the simple leading eigenvalue \(\lambda _1\), while the remaining \((r-m)\) blocks have leading eigenvalues smaller than \(\lambda _1\) (see, for instance, [10, Chapter 8]). Let \(i_1\) and \(i_2\) (\(i_1 < i_2\)) be the two smallest indices for which \(\rho (B^{(i_1)}) = \rho (B^{(i_2)}) = \lambda _1\).

The principal submatrix \(B'\) of \(B\) comprising the blocks \(B^{(1)}, \ldots , B^{(i_2)}\) is a rank-one correction of the submatrix \(A'\) extracted from the same rows/columns of \(A\) that constitute \(B'\) in \(B\), since \(\mathrm{rank}\, (B' - A' ) \le \mathrm{rank}\, (B - A) \le 1.\) Since \(\rho (A' ) \le \rho (A) < \lambda _1\), we obtain a rank-one correction \(B'\) of the matrix \(A'\) whose largest eigenvalue \(\lambda _1 > \rho (A )\) has multiplicity \(2\). This is impossible, as proved above for the case of even multiplicity. \(\square \)
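The statement of Proposition 4 is not reproduced in this excerpt, but the proof indicates the conclusion: a rank-one correction \(B\) of a nonnegative matrix \(A\) with \(\rho (B) > \rho (A)\) has a simple leading eigenvalue. A generic numerical spot-check (illustration only, not a proof) can be sketched as follows.

```python
import numpy as np

# Generic nonnegative matrix and a nonnegative rank-one correction.
rng = np.random.default_rng(2)
d = 5
A = rng.random((d, d))
X = np.outer(rng.random(d), rng.random(d))   # rank(X) = 1
B = A + X

rho_A = np.max(np.abs(np.linalg.eigvals(A)))
eigs = np.linalg.eigvals(B)
order = np.argsort(-np.abs(eigs))            # sort by decreasing modulus
rho_B = np.abs(eigs[order[0]])
# Gap between the two leading eigenvalues of B; a zero gap would
# mean a multiple leading eigenvalue, which Proposition 4 excludes.
gap = np.abs(eigs[order[0]] - eigs[order[1]])
```

Here the correction strictly increases the spectral radius (since \(B \ge A\), \(B \ne A\), and \(B\) is positive, hence irreducible), and the leading eigenvalue of \(B\) is indeed simple.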

Proof of Corollary 1

If \({\mathcal {A}}\) is irreducible, then by Theorem 4, the maximal spectral radius is attained at some matrix \(A \in {\mathcal {A}}\) with a positive leading eigenvector \({\varvec{v}}\). For each \(i\), the linear functional \(f({\varvec{x}}) = ({\varvec{x}}, {\varvec{v}})\) on the set \({\mathcal {F}}_i\) attains its maximum at some extreme point \({\varvec{x}}= {\varvec{a}}_i'\) of the set \({\mathcal {F}}_i\). Let \(A'\) be the matrix composed of the rows \({\varvec{a}}_1', \ldots , {\varvec{a}}_d'\). Clearly, \(A'\) is an extreme point of \({\mathcal {A}}\). Moreover, \(A'{\varvec{v}}\ge A{\varvec{v}}\), and by Lemma 1, \(\, \rho (A') \ge \rho (A)\), which by the maximality of \(A\) implies that \(\rho (A') = \max \limits _{B \in {\mathcal {A}}}\rho (B)\). If \({\mathcal {A}}\) is reducible, we apply a permutation of the basis after which all matrices from \({\mathcal {A}}\) take block upper triangular form. As shown above, the irreducible block with the maximal spectral radius can be composed of rows that are extreme points of the corresponding uncertainty sets. Taking all other rows arbitrarily from the sets of extreme points of the corresponding uncertainty sets, we obtain a matrix \(A'\) which is an extreme point of \({\mathcal {A}}\) and has the maximal spectral radius.

The proof for the minimal spectral radius proceeds in the same way, and is in fact shorter: we do not assume irreducibility and omit the condition \({\varvec{v}}> 0\). \(\square \)
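The key monotonicity step above, \(A'{\varvec{v}}\ge A{\varvec{v}} \Rightarrow \rho (A') \ge \rho (A)\) (the content of Lemma 1, which is not reproduced in this excerpt), is easy to illustrate numerically: increasing any row of a nonnegative matrix entrywise can only increase its spectral radius. A minimal sketch, with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.random((d, d)) + 0.1      # strictly positive, hence irreducible
rho = np.max(np.abs(np.linalg.eigvals(A)))

# Raising one row entrywise gives A' with A'v >= Av for every v >= 0,
# in particular for the Perron eigenvector of A.
Ap = A.copy()
Ap[2] += 0.5
rho_p = np.max(np.abs(np.linalg.eigvals(Ap)))
```

For an irreducible \(A\) the increase is strict, consistent with the strict improvement the algorithm achieves at each row exchange.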

About this article

Cite this article

Protasov, V.Y. Spectral simplex method. Math. Program. 156, 485–511 (2016). https://doi.org/10.1007/s10107-015-0905-2
