Matrix factorization with neural networks

Francesco Camilli and Marc Mézard
Phys. Rev. E 107, 064308 – Published 27 June 2023

Abstract

Matrix factorization is an important mathematical problem encountered in the context of dictionary learning, recommendation systems, and machine learning. We introduce a decimation scheme that maps it to neural network models of associative memory and provide a detailed theoretical analysis of its performance, showing that decimation is able to factorize extensive-rank matrices and to denoise them efficiently. In the case of a binary prior on the signal components, we introduce a decimation algorithm based on a ground-state search of the neural network, whose performance matches the theoretical predictions.
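As a rough illustration of the scheme sketched in the abstract (the article text is behind a subscription, so this is an assumption-laden sketch of the general idea, not the authors' algorithm): for a symmetric signal Y ≈ X Xᵀ/√N with binary (±1) components, one can read Y as the coupling matrix of a Hopfield-style network, find a low-energy configuration by zero-temperature dynamics, and "decimate" by subtracting the retrieved rank-one contribution before searching again. All function names, the dynamics, and the toy scaling below are illustrative choices.

```python
# Hedged sketch (not the authors' code): decimation of a symmetric
# extensive-rank matrix Y ~ (1/sqrt(N)) * X^T X + noise, with binary
# (+/-1) signal components, via repeated ground-state search of a
# Hopfield-style network whose couplings are the current matrix.
import numpy as np

rng = np.random.default_rng(0)

def ground_state(J, n_sweeps=200):
    """Zero-temperature asynchronous dynamics: flip spins to lower
    the energy E = -s^T J s / 2 until a local minimum is reached."""
    N = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=N)
    for _ in range(n_sweeps):
        changed = False
        for i in rng.permutation(N):
            h = J[i] @ s - J[i, i] * s[i]   # local field from the other spins
            new = 1.0 if h >= 0 else -1.0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                      # converged to a local minimum
            break
    return s

def decimate(Y, rank):
    """Extract `rank` binary patterns one by one, subtracting each
    retrieved rank-one contribution (the decimation step)."""
    N = Y.shape[0]
    J = Y.copy()
    patterns = []
    for _ in range(rank):
        s = ground_state(J)
        patterns.append(s)
        J = J - np.outer(s, s) / np.sqrt(N)  # remove the retrieved pattern
    return np.array(patterns)

# Toy usage: P binary patterns of N components, additive Gaussian noise.
N, P, noise = 400, 4, 0.1
X = rng.choice([-1.0, 1.0], size=(P, N))
Y = X.T @ X / np.sqrt(N) + noise * rng.standard_normal((N, N))
Y = (Y + Y.T) / 2
found = decimate(Y, P)
# |overlap| with the true patterns (absolute value: s and -s are degenerate)
overlaps = np.abs(found @ X.T) / N
print(np.round(overlaps.max(axis=1), 2))
```

Note the global sign ambiguity: the Hopfield energy is invariant under s → -s, so retrieved patterns are only meaningful up to a sign, which is why the toy check compares absolute overlaps.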

  • Received 21 November 2022
  • Revised 3 March 2023
  • Accepted 23 May 2023

DOI: https://doi.org/10.1103/PhysRevE.107.064308

©2023 American Physical Society

Physics Subject Headings (PhySH)

  • Statistical Physics & Thermodynamics
  • General Physics
  • Interdisciplinary Physics
  • Networks
  • Condensed Matter, Materials & Applied Physics

Authors & Affiliations

Francesco Camilli1,* and Marc Mézard2,†

  • 1Quantitative Life Sciences, International Centre for Theoretical Physics, Trieste 34151, Italy
  • 2Department of Computing Sciences, Bocconi University, Milan 20100, Italy

  • *fcamilli@ictp.it
  • †marc.mezard@unibocconi.it

Issue

Vol. 107, Iss. 6 — June 2023
