Research Article | Open Access
DOI: 10.1145/3564246.3585232

Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials

Published: 2 June 2023

ABSTRACT

Suppose we are given an n-dimensional order-3 symmetric tensor T ∈ (ℝ^n)^{⊗3} that is the sum of r random rank-1 terms. The problem of recovering the rank-1 components is possible in principle when r ≲ n^2, but polynomial-time algorithms are only known in the regime r ≲ n^{3/2}. Similar “statistical-computational gaps” occur in many high-dimensional inference tasks, and in recent years there has been a flurry of work on explaining the apparent computational hardness in these problems by proving lower bounds against restricted (yet powerful) models of computation such as statistical queries (SQ), sum-of-squares (SoS), and low-degree polynomials (LDP). However, no such prior work exists for tensor decomposition, largely because its hardness does not appear to be explained by a “planted versus null” testing problem.
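For intuition about why low rank is algorithmically easy, here is a minimal sketch (our illustration, not the paper's code) of the classical Jennrich-style simultaneous-diagonalization trick in the undercomplete regime r ≤ n; the Gaussian components and all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = r = 20  # Jennrich-style recovery needs linearly independent components, so r <= n

# Instance: T = sum_{i=1}^r a_i (x) a_i (x) a_i with Gaussian components a_i.
A = rng.standard_normal((r, n))          # rows are the components a_i
T = np.einsum("ij,ik,il->jkl", A, A, A)  # order-3 symmetric rank-r tensor

# Contract T along one mode with two random vectors g and h:
#   M1 = sum_i <g, a_i> a_i a_i^T,   M2 = sum_i <h, a_i> a_i a_i^T.
g, h = rng.standard_normal(n), rng.standard_normal(n)
M1 = np.einsum("jkl,l->jk", T, g)
M2 = np.einsum("jkl,l->jk", T, h)

# For linearly independent components, the eigenvectors of M1 M2^{-1}
# are exactly the a_i, up to sign and scale.
_, vecs = np.linalg.eig(M1 @ np.linalg.inv(M2))
V = np.real(vecs)
V /= np.linalg.norm(V, axis=0)

# Sanity check: every planted component is nearly parallel to a recovered vector.
An = A / np.linalg.norm(A, axis=1, keepdims=True)
print("per-component best |cosine|:", np.round(np.abs(An @ V).max(axis=1), 3))
```

Once r exceeds n the contractions become rank-deficient and this simple method breaks down; the algorithms that reach r ≲ n^{3/2} instead rely on heavier spectral and sum-of-squares machinery.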

We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest (to break symmetry), and the components are drawn uniformly from the hypercube. We resolve the computational complexity of this problem in the LDP model: O(log n)-degree polynomial functions of the tensor entries can accurately estimate the largest component when r ≪ n^{3/2} but fail to do so when r ≫ n^{3/2}. This provides rigorous evidence that the best known algorithms for tensor decomposition cannot be improved, at least by known approaches. A natural extension of the result holds for tensors of any fixed order k ≥ 3, in which case the LDP threshold is r ∼ n^{k/2}.
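The planted model itself is easy to write down. The sketch below is our illustration, not code from the paper; the symmetry-breaking factor 1.1 is an arbitrary placeholder for "slightly larger in norm":

```python
import numpy as np

def sample_planted_tensor(n, r, boost=1.1, seed=0):
    """T = boost * a_1^{(x)3} + sum_{i>=2} a_i^{(x)3}, with each a_i drawn
    uniformly from the hypercube {-1,+1}^n; the task is to estimate the
    slightly heavier component a_1. (boost=1.1 is an illustrative choice,
    not the paper's precise scaling.)"""
    rng = np.random.default_rng(seed)
    A = rng.choice([-1.0, 1.0], size=(r, n))      # rows are the components a_i
    w = np.ones(r)
    w[0] = boost                                  # break the symmetry at a_1
    T = np.einsum("i,ij,ik,il->jkl", w, A, A, A)  # weighted sum of rank-1 cubes
    return T, A

# The interesting scaling is r around n^{3/2}: below it, degree-O(log n)
# polynomials of the entries of T can estimate a_1; above it, they cannot.
n = 20
T, A = sample_planted_tensor(n, r=int(n ** 1.5))
```

For general order k one replaces the three-fold outer products with k-fold ones, matching the r ∼ n^{k/2} threshold stated above.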


Published in

STOC 2023: Proceedings of the 55th Annual ACM Symposium on Theory of Computing
June 2023, 1926 pages
ISBN: 9781450399135
DOI: 10.1145/3564246
Copyright © 2023 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.
Publisher: Association for Computing Machinery, New York, NY, United States
