Abstract
In the design of stochastic models, there is a constant trade-off between model complexity and accuracy. Here we prove that quantum models enable a more favorable trade-off. We present a technique for identifying fundamental upper bounds on the predictive accuracy of dimensionality-constrained classical models. We then develop an algorithm that learns quantum models from time-series data and use it to identify quantum models that surpass this bound. We demonstrate that this quantum accuracy advantage is attainable on a present-day noisy quantum device. These results illustrate the immediate relevance of quantum technologies to time-series analysis and offer an instance where their resulting accuracy advantage can be provably established.
Received 13 November 2022
Revised 30 May 2023
Accepted 14 July 2023
DOI: https://doi.org/10.1103/PhysRevA.108.022411
©2023 American Physical Society