Abstract
An energy gap develops near the quantum critical point of a quantum phase transition in a finite many-body (MB) system, facilitating ground-state transformation by adiabatic parameter change. In realistic applications, however, the efficacy of such a protocol is compromised by the need to balance finite system lifetime with adiabaticity, as exemplified in a recent experiment that prepares a three-mode balanced Dicke state near deterministically [Y.-Q. Zou et al., Proc. Natl. Acad. Sci. U.S.A. 115, 6381 (2018)]. Instead of tracking the instantaneous ground state, as almost universally required for adiabatic crossing, this work reports a faster sweeping policy that takes advantage of excited-level dynamics. It is obtained with deep reinforcement learning (DRL) based on a multistep training scheme we develop. In the absence of loss, a high fidelity between the prepared and target Dicke states is achieved in a small fraction of the adiabatically required time. When loss is included, training is carried out according to an operational benchmark, the interferometric sensitivity of the prepared state rather than its fidelity, leading to better sensitivity in about half of the previously reported time. Implemented in an atomic Bose-Einstein condensate, the balanced three-mode Dicke state exhibiting improved number squeezing is observed within 766 ms, highlighting the potential of DRL for quantum dynamics control and quantum state preparation in interacting MB systems.
- Received 19 September 2020
- Accepted 12 January 2021
DOI: https://doi.org/10.1103/PhysRevLett.126.060401
© 2021 American Physical Society
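The sweep-policy optimization described in the abstract can be caricatured in a toy model. The sketch below is not the authors' DRL agent or their experimental system: it replaces the many-body Dicke-state problem with a two-level avoided crossing, and the deep-reinforcement-learning agent with a simple cross-entropy policy search over piecewise-constant sweep-step durations. All parameter values (gap, sweep range, duration choices) are illustrative assumptions; the point is only the shared structure of learning a nonuniform sweep schedule that maximizes final-state fidelity rather than following the adiabatic ground state uniformly.

```python
import numpy as np

DELTA = 1.0  # illustrative minimum-gap parameter of the toy two-level model

def ground_state(g):
    """Lower eigenvector of H(g) = [[g, DELTA], [DELTA, -g]]."""
    H = np.array([[g, DELTA], [DELTA, -g]])
    _, v = np.linalg.eigh(H)  # eigenvalues ascending
    return v[:, 0]

def evolve(psi, g, dt):
    """Exact one-step propagator exp(-i H(g) dt) applied to psi."""
    H = np.array([[g, DELTA], [DELTA, -g]])
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T
    return U @ psi

def schedule_fidelity(dts, g_grid):
    """Start in the initial ground state, sweep g, score overlap with target."""
    psi = ground_state(g_grid[0]).astype(complex)
    for g, dt in zip(g_grid, dts):
        psi = evolve(psi, g, dt)
    target = ground_state(g_grid[-1])
    return abs(target.conj() @ psi) ** 2

rng = np.random.default_rng(0)
N = 20                                    # number of piecewise-constant segments
g_grid = np.linspace(-5.0, 5.0, N)        # sweep across the avoided crossing
choices = np.array([0.05, 0.2, 0.8])      # candidate per-segment durations
probs = np.full((N, len(choices)), 1 / 3) # per-segment action distribution

best_fid, best_sched, first_best = -1.0, None, None
for it in range(30):
    # Sample candidate schedules from the current stochastic policy.
    samples = np.array([[rng.choice(len(choices), p=probs[i]) for i in range(N)]
                        for _ in range(40)])
    fids = np.array([schedule_fidelity(choices[s], g_grid) for s in samples])
    # Cross-entropy update: refit each segment's distribution to the elites.
    elite = samples[np.argsort(fids)[-8:]]
    for i in range(N):
        counts = np.bincount(elite[:, i], minlength=len(choices))
        probs[i] = 0.7 * probs[i] + 0.3 * counts / counts.sum()
    if fids.max() > best_fid:
        best_fid = fids.max()
        best_sched = choices[samples[np.argmax(fids)]]
    if first_best is None:
        first_best = fids.max()

print(f"best fidelity {best_fid:.4f} at total sweep time {best_sched.sum():.2f}")
```

The learned schedules typically spend long durations near g = 0, where the gap is smallest, and short ones elsewhere, which is the classic intuition behind nonuniform sweeps; a learned policy exploiting excited-level dynamics, as in the paper, can deviate from even this heuristic.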