Abstract
“When in a difficult situation, it is sometimes better to give up and start all over again.” While this empirical truth has been regularly observed in a wide range of circumstances, quantifying the effectiveness of such a heuristic strategy remains an open challenge. In this paper, we combine the notions of optimal control and stochastic resetting to address this problem. The emerging analytical framework allows one not only to measure the performance of a given restarting policy, but also to obtain the optimal strategy for a wide class of dynamical systems. We apply our technique to a system with a final reward and show that the reward value must be larger than a critical threshold for resetting to be effective. Our approach, analogous to the celebrated Hamilton-Jacobi-Bellman paradigm, provides the basis for the investigation of realistic restarting strategies across disciplines. As an application, we show that the framework can be applied to an epidemic model to predict the optimal lockdown policy.
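To build intuition for why restarting can pay off, the sketch below simulates a standard toy model from the stochastic-resetting literature (not the paper's control framework): a one-dimensional Brownian searcher started at distance L from a target, reset to its starting point at a Poisson rate r. For this model the mean first-passage time has the known closed form T(r) = (exp(L√(r/D)) − 1)/r, which the Monte Carlo estimate should roughly reproduce; the parameter values here are illustrative choices, not taken from the paper.

```python
import math
import numpy as np

def mfpt_resetting(r, L=1.0, D=0.5, dt=1e-2, n_trials=2000, seed=0):
    """Estimate the mean first-passage time of a 1D Brownian searcher
    started at x = L, absorbed at the origin, and reset back to x = L
    with Poisson rate r (r = 0 disables resetting)."""
    rng = np.random.default_rng(seed)
    sigma = math.sqrt(2.0 * D * dt)  # Euler-Maruyama step size
    times = []
    for _ in range(n_trials):
        x, t = L, 0.0
        while x > 0.0:
            if r > 0.0 and rng.random() < r * dt:
                x = L  # stochastic reset to the starting point
            else:
                x += sigma * rng.standard_normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

# Closed-form result for this model: T(r) = (exp(L*sqrt(r/D)) - 1) / r
r, L, D = 1.27, 1.0, 0.5  # r chosen near the optimum for this L and D
theory = (math.exp(L * math.sqrt(r / D)) - 1.0) / r
estimate = mfpt_resetting(r, L=L, D=D)
```

Without resetting (r = 0) the mean first-passage time of this diffusive search diverges, so a finite, well-chosen resetting rate strictly improves the search; the paper's framework generalizes this kind of comparison to optimal restarting policies.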
- Received 11 May 2022
- Revised 6 July 2022
- Accepted 30 October 2022
DOI: https://doi.org/10.1103/PhysRevResearch.5.013122
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.