
Annals of Physics

Volume 381, June 2017, Pages 17-40

Finite time measurements by Unruh–DeWitt detector and Landauer’s principle

https://doi.org/10.1016/j.aop.2017.03.014

Highlights

  • We study the dynamics of an Unruh–DeWitt detector switched on for a finite time.

  • A new renormalization procedure is proposed.

  • Temperature measurements acquire universal finite time corrections in the weak coupling limit.

  • Even adiabatically slow switching off of the detector leaves non-vanishing traces in it.

  • Landauer’s bound governs the amount of work used to switch the detector on and off.

Abstract

The model of an Unruh–DeWitt detector coupled to a scalar field for a finite time is studied. A systematic way of computing finite time corrections in various cases is suggested, and nonperturbative effects such as thermalization are discussed. It is shown in particular that adiabatically switching off the coupling between the detector and the thermal bath leaves non-vanishing corrections to the distribution over the detector’s levels. Considering the two-level detector as an information-bearing degree of freedom encoding one bit of information, limits on the external work required for the detector’s (de)coupling in finite time, following from Landauer’s bound, are formulated.

Introduction

The problem of measurement occupies a special place among the cornerstones of quantum theory  [1], [2], [3]. A measurement procedure, in the general case, refers to at least two physical systems (or two distinct parts of a system) interacting with each other: one, denoted as “the system”, is being measured, while the other is “the detector”. There is a crucial assumption about one’s ability to fully control the detector dynamics; in other words, the properties of the detector subsystem should be perfectly understood (and a lack of such understanding inevitably brings systematic error into the results). Another important point is that the detector subsystem’s state at some chosen initial moment is supposed to be known, i.e. read by another detector (our brain, for example) with certainty. Then, reading the detector state (“the measurement result”) at some later moment, one can draw conclusions about properties of the system the detector has been interacting with. The details vary over a wide range; in particular, the counterintuitive impossibility of performing a measurement with negligible influence on the system is a crucial feature of the quantum world.

The problem of measurement in quantum field theory is even more complicated than in quantum mechanics because the number of degrees of freedom is not fixed in the former case  [4]. In path integral language, one defines a typical theory in terms of an action S[ϕ] and an integration measure Dϕ, and the dynamical content of the theory is usually assumed to be encoded in the action and not in the measure. In fact, however, this is a matter of choice, and the dynamics can be “redistributed” between measure and action in arbitrary proportion.1 Moreover, the integration measure can encode some a priori known (or assumed) results of measurements, for example, boundary conditions on the fields  [5].

Another instructive example is UV regularization, corresponding to limiting the integral over fields ϕ(k) to the domain of momenta |k| < Λ, which in this language is nothing but an assumption about the fields beyond this boundary (a surface in momentum space). In renormalizable theories all physics above Λ can be encoded in just a couple of numbers, the coefficients in front of marginal operators, like the famous fine structure constant 1/137 in QED. Renormalizability, however, does not mean naturalness. Computing, for example, the average of some local d-dimensional operator T[ϕ] in a theory with UV cutoff Λ, one typically gets (up to, perhaps, logarithms): $$\langle T \rangle = \int \mathcal{D}\phi \, T[\phi]\, e^{iS[\phi]} \sim c\,\Lambda^d + \text{finite part}.$$ The first term characterizes ad hoc assumptions about the integration measure/detector (for example, the geometry of the lattice with link size a ∼ 1/Λ used for the computation), but not the genuine physics of this operator. Of course, in many cases symmetries of the theory guarantee c = 0, but if not, we have to work out a way of disentangling “the physics of the detector” from “the physics of the physics” (which presumably is hidden in the finite part). Notable examples include the vacuum energy density, the Higgs boson mass and the gluon condensate, where only in the latter case is the answer known (qualitatively, but not quantitatively): the quantum scale anomaly of Yang–Mills theory and dimensional transmutation turn the unphysical UV scale Λ into the physical scale Λ_QCD. In other cases of this sort, like the cosmological constant problem or the hierarchy problem, a solution is yet to be found.
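As a toy illustration of this cΛ^d behaviour (not a computation from the present paper), one can look at the vacuum energy density of a massless scalar field in 1+1 dimensions with a hard momentum cutoff, where d = 2; the function and parameter names below are purely illustrative:

```python
import numpy as np

# Toy example (hypothetical, not the paper's model): vacuum energy
# density of a massless 1+1-dimensional scalar with momentum cutoff,
#   E(Lambda)/L = int_0^Lambda dk/(2*pi) * k/2 = Lambda**2 / (8*pi),
# i.e. the c*Lambda^d behaviour with d = 2 and c = 1/(8*pi).

def vacuum_energy_density(cutoff, n=100_000):
    dk = cutoff / n
    k = (np.arange(n) + 0.5) * dk          # midpoint grid of modes
    return float(np.sum(k / 2.0) * dk / (2.0 * np.pi))

e1 = vacuum_energy_density(10.0)
e2 = vacuum_energy_density(20.0)
print(e2 / e1)   # doubling the cutoff multiplies the result by 2**d = 4
```

The divergent piece tracks the cutoff, not the dynamics: changing the regularization (here, the mode grid) changes c but not the finite part.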

But in most cases in particle physics we assume that the dynamics described by the action is uncorrelated with the dynamics of the measure, and for good reasons: the typical scale of the former is given by the strong interaction distance of 10^{-15} meters, and it is even smaller for weak interactions, while detectors are macroscopic objects with sizes of the order of dozens of microns and larger. All the standard perturbative quantum field theory machinery (asymptotic states, propagators computed in the plane wave basis, etc.) is based on this assumption. Even if the detector dynamics is relevant, as in the case of the Unruh effect and similar phenomena, one usually tries to disentangle the “beautiful” field theoretic part (universal response functions, etc.) from the “ugly” detector part (concrete models of the detector). To what extent this is possible is, however, a quantitative question which should be analyzed in each particular problem.

The measurement problem has been given another interesting perspective by the introduction of the concept of information. The subject’s roots go back to the XIX-th century. Seminal insights of J.C. Maxwell, L. Boltzmann, J. von Neumann, L. Szilard, L.N. Brillouin and many others shaped this area of research. In modern science, the work of Claude Shannon  [6] put the intuitive idea of information on firm mathematical grounds. Information happened to be the closest relative of entropy, quantifying delicate relations between the micro- and macrostructures of various systems. Moreover, it was demonstrated long ago  [7] how to obtain all the main results of equilibrium thermodynamics from information theory (for a more recent exposition see  [8]). Nevertheless, despite tremendous developments, the subject still, and rightfully, motivates many researchers to study various aspects of the nontrivial interplay between the purely combinatorial (and geometrical) and the dynamical facets of information/entropy.

There is another crucial point here. Physical information must be linked to real systems used to produce, store, transmit and erase it. All information processing systems (“hardware”) of today or tomorrow have to obey the laws of physics, but much less trivial is the question of the intrinsic energy or entropy cost of “software”, i.e. of the algorithms of computation. Are there any fundamental limits from this point of view? A typical question of this kind is: does the amount of energy one has to spend to perform some operation with information depend on the nature of this information, and if yes, how? In particular, can one imagine a device capable of copying one bit at zero energy cost? Is there a minimal time needed to copy (or to erase), etc.? Such problems have been under discussion for decades (see  [9], [10], [11], [12]) and are of prime importance for information theory understood as one among the other natural sciences. Of course, the answers to these questions for the real computers of today could be very different quantitatively from those for idealized devices.

It is easy to understand that in general one can indeed expect various limits of this kind. For example, the limiting character of the velocity of light puts relativistic bounds on the computing speed of any device of finite size (as should be the case for any realistic one). The speed of quantum evolution is limited by the so-called Margolus–Levitin bound  [13], the inclusion of gravity puts its own limits, and so on  [9], [14], [15], [16].
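For orientation, the Margolus–Levitin bound τ ≥ πħ/(2E) is easy to evaluate numerically. The sketch below is a minimal one (the function name and the 1 eV example are our own illustrative choices, not from the text):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def margolus_levitin_time(mean_energy_joules):
    """Minimal time for a state with mean energy E (above the ground
    state) to evolve to an orthogonal state: tau >= pi*hbar/(2E)."""
    return math.pi * HBAR / (2.0 * mean_energy_joules)

# Hypothetical example: mean energy of 1 eV above the ground state.
E = 1.602176634e-19  # 1 eV in joules
print(margolus_levitin_time(E))  # about 1e-15 s, a femtosecond scale
```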

In his seminal paper  [17], Rolf Landauer made an important contribution to this discussion. He formulated the statement known as Landauer’s principle: erasure of one bit of information by a device at temperature T leads to dissipation of at least $k_B T \log 2$ of energy. Roughly speaking, forgetting is costly.2 By “erasure” one understands (almost) any operation that does not have a single-valued inverse. On the other hand, reversible logical operations such as copying can be performed, at least in principle, with zero energy consumption, i.e. for any chosen ϵ > 0 one can suggest a copying protocol which requires an amount of energy less than ϵ.
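The bound itself is a one-line formula; a minimal numerical sketch (the function name is illustrative, not from the text):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(temperature_kelvin, bits=1.0):
    """Minimal heat dissipated when erasing `bits` bits at temperature
    T: Q >= kB * T * ln(2) per bit."""
    return KB * temperature_kelvin * math.log(2) * bits

# At room temperature (300 K) erasing one bit costs at least ~3e-21 J.
print(landauer_bound(300.0))
```

The smallness of this number explains why the bound went untested for half a century: present-day logic gates dissipate many orders of magnitude more per operation.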

Landauer’s principle is considered by many as a key to the solution of the famous Maxwell demon paradox (see  [19], [20], [21], [22], [23], [24] and a nice introductory review of the subject  [25]). The crucial point is the necessity to restore the initial state of the demon’s mind (“reset its memory”), and this operation dissipates exactly the amount of heat gained during the previous steps of the demon’s activity. The discussions between different schools of Maxwell demon exorcists still go on.

It is rather challenging to explore this kind of physics experimentally, and direct evidence for Landauer’s principle came only quite recently  [26], [27], [28], [29]. The validity of the principle was demonstrated, but two important comments are worth making. First, the bound is valid only statistically, while in a single event fluctuations can drive the system well below it. Second, the bound is saturated in the stationary case, and the dissipated heat increases as the erasure time decreases.

A more refined formulation of the principle was given recently in  [30]. The authors considered a system immersed in a thermal bath. Initially, the density matrices of the system (“detector”) and the bath are uncorrelated: $\rho = \rho_s \otimes \rho_b$, where $\rho_b \propto \exp(-\beta H_b)$ and β is the initial inverse temperature. After unitary evolution the density matrix ρ′ no longer has the form of a direct product $\rho_s \otimes \rho_b$, and one defines $\rho_s' = \mathrm{Tr}_b\, \rho'$, $\rho_b' = \mathrm{Tr}_s\, \rho'$. Then one can strictly prove that $$\beta \Delta Q \ge \Delta S \qquad (3)$$ where $\Delta S = S(\rho_s) - S(\rho_s')$, $\Delta Q = \mathrm{Tr}[H_b \rho_b'] - \mathrm{Tr}[H_b \rho_b]$, with the standard definition of entropy $S[\rho] = -\mathrm{Tr}[\rho \log \rho]$. Eq. (3) is a precise formulation of the stationary Landauer’s principle. It is worth noting that the factorization of the initial state density matrix and the thermal character of the bath are important; without these assumptions one could design gedanken experiments violating (3).
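The inequality (3) can be checked numerically on the smallest possible example: a qubit “detector” swapped with a single thermal “bath” qubit. The model below (full SWAP unitary, one bath mode, maximally mixed system) is our own minimal toy, not the setup of [30]:

```python
import numpy as np

beta, omega = 1.0, 1.0
H_b = np.diag([0.0, omega])              # toy bath Hamiltonian
w = np.exp(-beta * np.diag(H_b))
rho_b = np.diag(w / w.sum())             # thermal bath state ~ exp(-beta*H_b)
rho_s = np.eye(2) / 2.0                  # maximally mixed system qubit

# Unitary evolution: full SWAP of the two qubits.
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
rho_out = SWAP @ np.kron(rho_s, rho_b) @ SWAP.T

def partial_trace(r, keep):
    """Trace out one qubit of a 2-qubit density matrix; keep=0 keeps
    the system, keep=1 keeps the bath."""
    r4 = r.reshape(2, 2, 2, 2)           # indices (s, b, s', b')
    return np.einsum('ijkj->ik', r4) if keep == 0 else np.einsum('ijil->jl', r4)

def entropy(r):
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

rho_s2 = partial_trace(rho_out, 0)
rho_b2 = partial_trace(rho_out, 1)
dS = entropy(rho_s) - entropy(rho_s2)                 # entropy decrease of system
dQ = float(np.trace(H_b @ rho_b2) - np.trace(H_b @ rho_b))  # heat gained by bath
print(beta * dQ >= dS)   # True: the refined Landauer bound holds
```

Here the swap resets the system into the thermal state, dS = log 2 − S(ρ_b) > 0, and the heat β dQ dumped into the bath indeed exceeds it, with a strict gap away from the stationary limit.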

In its original form Landauer’s principle corresponds to the case ΔQ > 0, ΔS > 0. The physical meaning is quite clear: suppose we have a system in contact with a thermal bath and observe that, for whatever dynamical reason, it gets more ordered than it was initially (ΔS > 0, as happens, for example, in a crystallization process). As a limiting case one can think of total reset, a process with a definite final state of the system regardless of what the initial state was, and hence $S(\rho_s') = 0$. Landauer’s principle then states that the amount of thermal energy ΔQ dissipated to the bath is bounded from below by (3). For a two-level system encoding one bit of information the original Landauer factor of log 2 reappears.
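In the notation above, the reset limit can be spelled out explicitly (assuming the initial state of the two-level system is maximally mixed):

```latex
% Total reset of one bit, starting from the maximally mixed state:
S(\rho_s) = \log 2, \qquad S(\rho_s') = 0,
\qquad \Delta S = S(\rho_s) - S(\rho_s') = \log 2 .
% Inserting this into (3), with \beta = 1/(k_B T):
\beta \Delta Q \;\ge\; \Delta S
\;\Longrightarrow\;
\Delta Q \;\ge\; k_B T \log 2 .
```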

The principle can be applied to the other cases as well. It is trivial in the case ΔQ > 0, ΔS < 0 (which describes explosion-like processes, when internal energy converts into heat) and equivalent to the nonexistence of a perpetuum mobile of the second kind in the case ΔQ < 0, ΔS > 0: the energy of the thermal bath cannot be used to make an external system more ordered. The fourth case, ΔQ < 0, ΔS < 0, physically corresponds to using thermal energy to disorder the system, as happens during melting.

The simplest possible case usually considered in the literature is a two-state detector encoding one bit of information. Two kinds of detector design are typically analyzed. In the first, the two distinct states of the detector correspond to different spatial coordinates, as, for example, in the double-well experiments  [28]. In the second, the states are distinguished by an energy coordinate, as, for example, the ground and excited states of an atom. One could think of other possibilities, even rather exotic ones, like the famous Schrödinger cat as a detector of cyanide presence in the box. It is remarkable that Landauer’s principle is claimed to be applicable in any of these cases, even though their physical dynamics are obviously far from identical.

One can also think about different erasure protocols. The most trivial one is to reset the system, i.e. to put it by any means into some predefined final state (labeled 0 in what follows). But other options are also possible. For example, the system can be thermalized  [31] by a long enough contact with the heat bath. Or, alternatively, one can shake up the detector by some strong external force, leaving it relaxed in some random final state. In all these cases any information about the initial state of the detector is completely lost, and this justifies the term “erasure”, which should not be misinterpreted as “reset”. The latter term corresponds only to a predefined final state of zero entropy.

One might be interested in the subject outlined above from a different perspective. Usually the reset operation is assumed to be done by some external agent which, for example, removes the barrier between the “0” and “1” states, deforms the potential well, etc. Then we are interested in the work done by this external force. On the other hand, one can imagine a detector having some internal time scale on which it resets itself. A good example is an excited atom, whose decay time plays the role of this scale. In this case no external work is needed. Since the energy Ω emitted by the atom (detector) and swallowed by the bath can be smaller than $k_B T \log 2$, naively (3) looks to be violated. This is of course not the case, since total reset is incompatible with final thermal equilibrium between the detector and the bath: there is always a nonzero (though exponentially suppressed at low temperatures) probability to find the detector in its excited state, and the corresponding correction to the right-hand side of (3) exactly compensates the bath energy deficit on the left-hand side mentioned above. But it is by far not obvious whether this also holds for a finite time interval, i.e. before equilibration.
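One can estimate where this naive tension sets in: the emitted quantum ħΩ drops below k_B T log 2 above a crossover temperature T = ħΩ/(k_B log 2). A sketch (the GHz-scale gap is a hypothetical example, not a value from the paper):

```python
import math

HBAR = 1.054571817e-34  # J*s
KB = 1.380649e-23       # J/K

def crossover_temperature(omega):
    """Temperature at which the emitted quantum hbar*Omega equals the
    Landauer cost kB*T*log(2); above it the naive comparison of the
    two energies would suggest a violation of the bound."""
    return HBAR * omega / (KB * math.log(2))

# Hypothetical detector gap: Omega = 2*pi * 1 GHz (a microwave qubit scale).
omega = 2.0 * math.pi * 1.0e9
print(crossover_temperature(omega))  # roughly 0.07 K
```

For such a gap the regime ħΩ < k_B T log 2 is reached already at sub-kelvin temperatures, so the compensation mechanism described above is not an exotic corner case.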

In the present paper we develop a formalism for systematic studies of an Unruh–DeWitt point-like detector coupled to a field bath for a finite time. Various aspects of this problem have been discussed in the literature  [32], [33], [34], [35], and the work presented here relies heavily on these findings. The main focus of the present paper is to address questions of this sort in a way which is suitable not only in the leading order of perturbation theory (where most work has been done so far) but also beyond it. As an application, we discuss the limits on the external work for the detector’s (de)coupling in finite time that are imposed by Landauer’s principle if the detector is understood as an information-bearing degree of freedom encoding one bit of information.


Basic definitions

In this section we discuss the general formalism of the Unruh–DeWitt point-like detector model  [36], [37], [38], [39] for finite-time measurements, corresponding to an energy coordinate distinguishing the states. The infinite time limit of this model is well studied. The case with explicit time dependence has attracted less attention; however, many interesting results have been obtained here (see, e.g.  [32], [33], [34], [35] and the recent papers  [40], [41]). The detector-field system state is described as a vector

GZK horizon as Unruh–DeWitt detector excitation time

Before we come to a discussion of the general structure of the operator Dχ and of finite time corrections, let us take a pedagogically instructive step by considering a concrete perturbative example of a measurement with finite τm, when the leading term Dχ = 1 dominates. This is the celebrated Greisen–Zatsepin–Kuzmin bound  [56], [57] on cosmic ray energies due to the presence of the cosmic microwave background radiation. Despite the link between the GZK bound and finite time quantum measurements3

Finite time Landauer’s principle

We are now in a position to address the question posed at the beginning of this paper: what is the energy price of erasure done by an external force? The role of the latter is of course played by the switching function χ(t). As was discussed in the introduction, operationally erasure corresponds to some time-dependent actions, like performing operations with the memory unit, removing potential barriers, etc., or making contact between the memory unit and the bath, and so on. Generally speaking, one can

Conclusion

The main part of the present paper is devoted to the study of finite time effects for the Unruh–DeWitt detector. They are parameterized by a time-dependent coupling gχ(τ). We describe the detector’s dynamics as a Markov evolution (19), (20). In principle, having fixed the time profile χ(τ), one is able to compute nonperturbatively the probability to find the detector excited at a given time in this framework. This is the input, and the new results obtained in the present paper with it are the following. First, we

Acknowledgments

The author acknowledges partial support from Russian Foundation for Basic Research, grant RFBR-ofi-m N 14-22-03030.

References (81)

  • A.E. Allahverdyan et al., Phys. Rep. (2013)
  • M. Bordag et al., Ann. Phys. (1985)
  • S.D.H. Hsu, Phys. Lett. B (2006)
  • N. Margolus et al., Physica D (1998); N. Margolus
  • C. Adami
  • J. Earman et al., Stud. Hist. Philos. Modern Phys. (1998)
  • M.D. Vidrighin et al., Phys. Rev. Lett. (2016)
  • L.C. Barbado et al., Phys. Rev. D (2012)
  • V. Shevchenko, Nuclear Phys. B (2013)
  • E. Martin-Martinez et al., Phys. Rev. D (2013)
  • P. Mehta et al., PNAS (2012)
  • V. Braginsky et al., Quantum Measurement (1992)
  • M. Schlosshauer, Rev. Modern Phys. (2005)
  • L. Landau et al., Z. Phys. (1931)
  • C.E. Shannon, Proc. IRE (1949)
  • E.T. Jaynes, Phys. Rev. (1957); Phys. Rev. (1957)
  • A. Ben-Naim, A Farewell to Entropy: Statistical Thermodynamics Based on Information (2008)
  • J.D. Bekenstein, Phys. Rev. Lett. (1981)
  • J.D. Bekenstein, Found. Phys. (2005)
  • R. Landauer, Phys. Scr. (1987)
  • S. Lloyd, Nature (2000)
  • H.J. Bremermann, Proc. Fifth Berkeley Symp. on Math. Statist. and Prob., Vol. 4 (1967)
  • R.W. Keyes et al., IBM J. Res. Dev. (1970)
  • R. Landauer, IBM J. Res. Dev. (1961)
  • P. Ball, Nat. News (2012)
  • J.D. Norton, Stud. Hist. Philos. Modern Phys. (2011)
  • C.H. Bennett, IBM J. Res. Dev. (1973); Internat. J. Theoret. Phys. (1982)
  • C.H. Bennett, Stud. Hist. Philos. Modern Phys. (2003)
  • M.B. Plenio et al., Contemp. Phys. (2001)
  • S. Toyabe et al., Nat. Phys. (2010)
  • S.W. Kim et al., Phys. Rev. Lett. (2011)
  • A. Brut et al., Nature (2012)
  • J.P.S. Peterson, Proc. R. Soc. A (2016)
  • D. Reeb et al., New J. Phys. (2014)
  • E. Lubkin, Internat. J. Theoret. Phys. (1987)
  • L. Sriramkumar et al., Classical Quantum Gravity (1996)
  • D. Kothawala et al., Phys. Lett. B (2010)
  • A. Satz, Classical Quantum Gravity (2007)
  • W.G. Unruh, Phys. Rev. D (1976)