
Randomized algorithms in numerical linear algebra

Published online by Cambridge University Press: 05 May 2017

Ravindran Kannan
Microsoft Research Labs, Bangalore, Karnataka 560001, India
E-mail: kannan@microsoft.com

Santosh Vempala
Georgia Institute of Technology, North Avenue NW, Atlanta, GA 30332, USA
E-mail: vempala@gatech.edu

Abstract

This survey provides an introduction to the use of randomization in the design of fast algorithms for numerical linear algebra. These algorithms typically examine only a subset of the input to solve basic problems approximately, including matrix multiplication, regression and low-rank approximation. The survey describes the key ideas and gives complete proofs of the main results in the field. A central unifying idea is sampling the columns (or rows) of a matrix according to their squared lengths.
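
To make the central idea concrete, here is a minimal NumPy sketch (an illustration added alongside this abstract, not code from the article) of length-squared sampling: columns of A are drawn with probability proportional to their squared Euclidean lengths and rescaled so that the sketch C satisfies E[C C^T] = A A^T. The function name length_squared_sample, the seed and the matrix dimensions are illustrative choices, not taken from the source.

import numpy as np

def length_squared_sample(A, s, seed=None):
    """Sample s columns of A with probability proportional to their
    squared lengths, rescaled so that C @ C.T is an unbiased
    estimator of A @ A.T."""
    rng = np.random.default_rng(seed)
    col_norms_sq = np.sum(A**2, axis=0)        # |A_j|^2 for each column j
    probs = col_norms_sq / col_norms_sq.sum()  # p_j = |A_j|^2 / ||A||_F^2
    idx = rng.choice(A.shape[1], size=s, p=probs)
    # Dividing column j by sqrt(s * p_j) makes each sampled outer product
    # A_j A_j^T / (s p_j) an unbiased estimate of A A^T / s.
    return A[:, idx] / np.sqrt(s * probs[idx])

# Demo: approximate A A^T from a small column sample.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2000))
s = 200
C = length_squared_sample(A, s, seed=1)
err = np.linalg.norm(A @ A.T - C @ C.T, "fro")
bound = np.linalg.norm(A, "fro") ** 2 / np.sqrt(s)  # expected-error bound
print(f"observed error {err:.1f} vs. bound {bound:.1f}")

The standard analysis of this scheme gives E||A A^T - C C^T||_F^2 <= ||A||_F^4 / s, so the printed error typically falls below the bound; this sampling distribution is the primitive behind the approximate matrix multiplication, regression and low-rank approximation algorithms the survey develops.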

Type: Research Article
Copyright: © Cambridge University Press, 2017

