
Perspective: How to overcome dynamical density functional theory


Published 19 April 2023 © 2023 The Author(s). Published by IOP Publishing Ltd
Citation: Daniel de las Heras et al 2023 J. Phys.: Condens. Matter 35 271501. DOI: 10.1088/1361-648X/accb33


Abstract

We argue in favour of developing a comprehensive dynamical theory for rationalizing, predicting, designing, and machine learning nonequilibrium phenomena that occur in soft matter. To give guidance for navigating the theoretical and practical challenges that lie ahead, we discuss and exemplify the limitations of dynamical density functional theory (DDFT). Instead of the implied adiabatic sequence of equilibrium states that this approach provides as a makeshift for the true time evolution, we posit that the pending theoretical tasks lie in developing a systematic understanding of the dynamical functional relationships that govern the genuine nonequilibrium physics. While static density functional theory gives a comprehensive account of the equilibrium properties of many-body systems, we argue that power functional theory is the only present contender to shed similar insights into nonequilibrium dynamics, including the recognition and implementation of exact sum rules that result from the Noether theorem. As a demonstration of the power functional point of view, we consider an idealized steady sedimentation flow of the three-dimensional Lennard-Jones fluid and machine-learn the kinematic map from the mean motion to the internal force field. The trained model is capable of both predicting and designing the steady state dynamics universally for various target density modulations. This demonstrates the significant potential of using such techniques in nonequilibrium many-body physics and overcomes both the conceptual constraints of DDFT as well as the limited availability of its analytical functional approximations.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The coupled dynamics of the microscopic degrees of freedom in typical soft matter systems generates a wide array of relevant and often unsolved nonequilibrium phenomena [1, 2]. One central quantity for the characterization of self-assembly and structure formation in complex systems is the microscopically resolved one-body density distribution $\rho(\mathbf{r},t)$, where r indicates position and t denotes time. The 'density profile' $\rho(\mathbf{r},t)$ acts as a central order parameter both due to its intuitive physical interpretation and its clear-cut mathematical definition [3].

According to the dynamical density functional theory (DDFT), as originally proposed by Evans in 1979 [4] and put at center stage by Marconi and Tarazona in 1999 [5], the time evolution of the microscopic density profile is assumed to be determined by the following partial differential equation:

$$\frac{\partial \rho(\mathbf{r},t)}{\partial t} = \gamma^{-1} \nabla\cdot\left[\rho(\mathbf{r},t)\,\nabla\left(\frac{\delta F[\rho]}{\delta \rho(\mathbf{r},t)} + V_{\mathrm{ext}}(\mathbf{r},t)\right)\right]. \qquad (1)$$

Here γ is a friction constant, $F[\rho]$ is an intrinsic free energy functional that depends functionally on the density profile, and the external potential $V_{\mathrm{ext}}(\mathbf{r},t)$ represents interactions of the system with the environment. The system is set into motion by a temporal variation of $V_{\mathrm{ext}}(\mathbf{r},t)$, such as e.g. step-like switching at an initial time.

The time evolution according to equation (1) conserves the particle number locally and hence it constitutes dynamics of model B type [6]. In standard applications one starts with an equilibrium state of the system and then the dynamics are monitored on the basis of numerical time integration of equation (1), where the time dependence is induced by the temporal variation of $V_{\mathrm{ext}}(\mathbf{r},t)$. In order to provide reference data and to allow for the generation of benchmark results to assess the quality of the theory, resorting to many-body computer simulations is common, with overdamped Brownian dynamics (BD) being a popular choice. Marconi and Tarazona [5] initially spelled out the connection of these dynamics with DDFT, and [7] describes a modern and stable adaptive time-stepping BD simulation algorithm. Comparisons of DDFT data with experimental results are scarcer, but notable exceptions include non-equilibrium sedimentation of colloids [8], the self-diffusion of particles in complex fluids [9], the bulk dynamics of Brownian hard disks [10], and the flow profile and drying pattern of dispersion droplets [11].
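As a concrete illustration of such a numerical time integration, the following minimal sketch propagates equation (1) on a one-dimensional periodic grid with an explicit Euler scheme. The mean-field Gaussian convolution used for the excess free energy is an illustrative assumption (it is not a functional used in this work), and all grid and time-step parameters are arbitrary choices.

```python
import numpy as np

# Minimal sketch: explicit Euler integration of the DDFT equation (1) on a 1D
# periodic grid. Units: kB*T = gamma = 1, so D0 = 1. The excess free energy is
# modelled by a simple attractive mean-field convolution (an assumption made
# purely for illustration).

L, M, dt = 10.0, 256, 1e-4                  # box length, grid points, time step
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
kernel = np.exp(-np.minimum(x, L - x)**2)   # Gaussian kernel, periodic distance

def grad(f):
    """Central finite-difference gradient on the periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def intrinsic_chem_pot(rho):
    """delta F[rho]/delta rho = ln(rho) + mean-field excess part (Lambda^3 dropped)."""
    conv = np.real(np.fft.ifft(np.fft.fft(rho) * np.fft.fft(kernel))) * dx
    return np.log(rho) - 0.5 * conv

def ddft_step(rho, V_ext):
    """One Euler step of d rho / dt = div[ rho grad( dF/drho + V_ext ) ]."""
    force = grad(intrinsic_chem_pot(rho) + V_ext)
    return rho + dt * grad(rho * force)

rho = np.full(M, 0.5)                        # initial bulk density
V_ext = 0.3 * np.sin(2.0 * np.pi * x / L)    # switched-on external potential
for _ in range(10000):
    rho = ddft_step(rho, V_ext)
```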

The DDFT time evolution reaches a stationary state if the gradient on the right hand side of equation (1) vanishes, i.e. provided that the expression inside of the parentheses is constant:

$$\frac{\delta F[\rho]}{\delta \rho(\mathbf{r})} + V_{\mathrm{ext}}(\mathbf{r}) = \mu = \mathrm{const}. \qquad (2)$$

Here we have dropped the dependence on time in the notation, as the situation is now static. The constant µ can be identified with the chemical potential, which in a grand canonical statistical mechanical setting is the conjugate control parameter of the mean particle number. Equation (2) is exact in equilibrium, as was shown by Evans [4]. He proved the equilibrium intrinsic free energy functional $F[\rho]$ to exist, to be unique, and to form the starting point for a modern equilibrium theory of spatially inhomogeneous liquids and crystals [12, 13].

In practice one needs to rely on approximations for $F[\rho]$, given a microscopic fluid model under consideration. Once one has solved equation (2) for given values of µ and temperature T (the dependence of $F[\rho]$ on T is suppressed in the notation), then in principle complete knowledge of the thermal system is available. The value of the density functional $F[\rho]$ is the true intrinsic free energy, and higher-order correlation functions are determined via higher-order derivatives of the free energy functional or via test-particle procedures. In particular two-body correlation functions, such as the bulk pair correlation function g(r) as well as its generalization to inhomogeneous systems, are accessible. These exhibit defining characteristics of liquids and more general soft matter systems and they are formally fully contained in the static density functional theory framework.

Together with a number of available reliable approximate free energy functionals, density functional theory is a powerful theoretical framework that has been used to elucidate much intricate and complex behaviour in soft matter. Recent representative highlights include tracing hydrophobicity to critical drying at substrates [14–16], resolving three-dimensional structures of electrolyte aqueous solutions near surfaces [17, 18], and addressing the magnitude of the decay lengths in electrolytes [19]. Rosenfeld's celebrated hard sphere fundamental measure free energy functional [20–23] is at the core of much of this research activity.

In the following we wish to address whether or not the DDFT has the prowess to play a similar role in nonequilibrium, as is often at least implicitly assumed. We demonstrate on the basis of an explicit and generic example, that of uniaxial compressional flow of the three-dimensional Lennard-Jones (LJ) fluid, that the DDFT is fundamentally flawed and that in reality, as represented by many-body simulations, the flow field needs to be recognized as a further relevant degree of freedom in order to represent true nonequilibrium. These conclusions are based on analytical power functional approximations, adaptive BD simulation data, and explicit machine learning of the power functional map from motion to the interparticle one-body force field. Neglecting the dependence on the velocity field, by artificially setting its value identically to zero, reduces the machine-learned functional mapping to its adiabatic limit and hence recovers the adiabatic time evolution of DDFT, albeit here on the basis of the quasi-exact adiabatic forces as they are contained in the supervised machine learning.

This Perspective is organized as follows. We first make some key aspects of DDFT explicit in section 2 and describe several prominent shortcomings of this theory. We then give an account of how to go towards the formally exact one-body dynamics in section 3 and provide in section 4 a description of key aspects of the power functional framework, which as we wish to argue overcomes the fundamental defects of DDFT. We describe the exemplary stationary compressional flow situation in section 5 and lay out the application of Noether's theorem in this statistical mechanical setting in section 6. We present machine learning results for the kinematic functional relationships of the streaming LJ fluid in section 7. This method also yields direct access to the adiabatic force field, as is required for the DDFT time evolution, without the need for involving any prior explicit analytical approximations for the free energy density functional. We give conclusions and an outlook in section 8. Readers who are primarily interested in the machine learning aspects of our work (section 7) are welcome to skip to the appendix where we lay out our strategy of its use in predicting and designing nonequilibrium many-body dynamics in soft matter.

2. Limits and limitations of adiabatic dynamics

We go into some detail and describe why the DDFT represents adiabatic dynamics in the sense of a temporal sequence of spatially inhomogeneous equilibrium states. The equilibrium intrinsic free energy functional splits into ideal and excess (over ideal gas) contributions according to $F[\rho] = F_{\mathrm{id}}[\rho]+F_{\mathrm{exc}}[\rho]$. Here the excess free energy functional $F_{\mathrm{exc}}[\rho]$ accounts for the effects of the interparticle interactions on the equilibrium properties of the system and it is in general unknown and requires approximations to be made. The ideal gas free energy functional however is exactly given by

$$F_{\mathrm{id}}[\rho] = k_BT \int d\mathbf{r}\, \rho(\mathbf{r},t)\left\{\ln[\rho(\mathbf{r},t)\Lambda^3] - 1\right\}, \qquad (3)$$

where $k_B$ denotes the Boltzmann constant, $\Lambda$ is the thermal de Broglie wavelength, and we consider three-dimensional systems. The functional derivative, as it is relevant for equation (1), is $\delta F_{\mathrm{id}}[\rho]/\delta \rho(\mathbf{r}) = k_BT \ln(\rho(\mathbf{r})\Lambda^3)$. When disregarding the excess contribution and inserting this result alone into the DDFT equation of motion (1), its right hand side becomes $\gamma^{-1} \nabla\cdot \rho(\mathbf{r},t) \nabla [k_BT \ln(\rho(\mathbf{r},t)\Lambda^3) + V_{\mathrm{ext}}(\mathbf{r},t)]$ with the dependence on $\Lambda$ being irrelevant due to the spatial gradient operation. The result can be re-written further such that for the case of the ideal gas, where $F_{\mathrm{exc}}[\rho] = 0$ and $F[\rho] = F_{\mathrm{id}}[\rho]$, the equation of motion (1) attains the following form:

$$\frac{\partial \rho(\mathbf{r},t)}{\partial t} = D_0 \nabla^2 \rho(\mathbf{r},t) - \gamma^{-1}\nabla\cdot\left[\rho(\mathbf{r},t)\,\mathbf{f}_{\mathrm{ext}}(\mathbf{r},t)\right]. \qquad (4)$$

Here $D_0 = k_BT/\gamma$ is the diffusion constant, $\nabla^2$ is the Laplace operator and the external force field is given (here) as $\mathbf{f}_{\mathrm{ext}}(\mathbf{r},t) = -\nabla V_{\mathrm{ext}}(\mathbf{r},t)$. Equation (4) is the exact drift-diffusion equation for overdamped motion of a mutually noninteracting system, i.e. the ideal gas.

Besides Evans' original proposal [4] based on the continuity equation and undoubtedly his physical intuition, derivations of the DDFT (1) were founded much more recently on Dean's equation of motion for the density operator [5], the Smoluchowski equation [24], a stationary action principle for the density [25], the projection operator formalism [26], a phase-space approach [27], the mean-field approximation [28], a local equilibrium assumption [29], and a non-equilibrium free energy [30]. The question of the well-posedness of the DDFT was addressed [31] and several extensions beyond overdamped BD were formulated, such as e.g. for dynamics including inertia [32–35] and for particles that experience hydrodynamic interactions [35, 36] or undergo chemical reactions [37, 38].

The DDFT was also used beyond the description of fluids, such as e.g. for opinion dynamics [39] and epidemic spreading [40]. Recent reviews of DDFT are given in [41, 42], with [42] giving an updated overview of several very recent directions. The theory is put into a wider perspective, together with much background pedagogical material in [43]. A modern and well-accessible account of the general strategy of dynamical coarse-graining in statistical physics, of which the DDFT can be viewed as being a representative, has recently been given by Schilling [44].

The fact that both the static limit for the fully interacting system (2) as well as the full dynamics of the noninteracting system (4) are exact, taken together with the heft of the DDFT literature, appears to give much credibility to the equation of motion (1). However, despite the range of theoretical techniques employed [5, 24–30], neither of these approaches has provided us with a concrete way of going beyond equation (1). Apart from several case-by-case and rather ad hoc modifications, no systematic or even only practical identification of what is missing has been formulated. (We turn to power functional theory in section 4.) This is a problematic situation, as two defects of equation (1) are immediately obvious upon inspection: (i) the description is local in time and there is no natural mechanism for the inclusion of memory, although time-locality is not sufficient for general nonequilibrium situations; (ii) only flow that leads to direct changes in the density profile is captured and hence effects of rotational flow, such as shearing, as well as nonequilibrium effects in compression and expansion are lost (see below).

Here we argue that these defects are indicative of a broader failure of equation (1) to describe nonequilibrium physics. We show that the DDFT is only fit to describe situations in which the dynamics follow an adiabatic path through a sequence of equilibrium states. The description of genuine nonequilibrium dynamics in a functional setting on the one-body level rather requires recognition of the local velocity field as a further relevant physical variable besides the density profile, and this is provided by power functional theory [43]. Before laying out key principles of this approach in section 4, we first describe the microscopically sharp coarse-graining on the one-body level of correlation functions.

3. Towards exact one-body dynamics

Evans based his original derivation [4] of equation (1) on the continuity equation,

$$\frac{\partial \rho(\mathbf{r},t)}{\partial t} = -\nabla\cdot\mathbf{J}(\mathbf{r},t), \qquad (5)$$

where $\mathbf{J}(\mathbf{r},t)$ is the microscopically resolved one-body current distribution. Equation (5) is exact in a variety of contexts, including overdamped BD, as described either on the Fokker–Planck level by the Smoluchowski equation or by the corresponding overdamped Langevin equation that governs the trajectories, as they are realized in simulation work [7]. For BD the one-body current distribution is given exactly by [43]:

$$\gamma\,\mathbf{J}(\mathbf{r},t) = -k_BT\nabla\rho(\mathbf{r},t) + \mathbf{F}_{\mathrm{int}}(\mathbf{r},t) + \rho(\mathbf{r},t)\,\mathbf{f}_{\mathrm{ext}}(\mathbf{r},t). \qquad (6)$$

This identity expresses the force density balance of the negative friction force density (left hand side) with the force densities due to ideal thermal diffusion, interparticle interactions, and external influence (three contributions on the right hand side). Here the interparticle force density distribution is given by the statistical average

$$\mathbf{F}_{\mathrm{int}}(\mathbf{r},t) = -\left\langle \sum_i \delta(\mathbf{r}-\mathbf{r}_i)\,\nabla_i u(\mathbf{r}^N) \right\rangle, \qquad (7)$$

where the angular brackets indicate an average at fixed time t over the nonequilibrium many-body distribution, $u(\mathbf{r}^N)$ is the interparticle interaction potential that depends on all particle position coordinates $\mathbf{r}^N\equiv \mathbf{r}_1,\ldots,\mathbf{r}_N$ and $\nabla_i$ indicates the derivative with respect to the position ri of particle i. The formulation of equation (7) is based on the concept of static operators and a dynamically evolving probability distribution. This is analogous to the Schrödinger picture of quantum mechanics. The Heisenberg picture is more closely related to simulation work. Here the probability distribution is that of the initial microstates and the operators move forward in time, i.e. the position $\mathbf{r}_i(t)$ of particle i changes over the course of time. Then the Dirac distribution in equation (7) becomes $\delta(\mathbf{r}-\mathbf{r}_i(t))$, with the generic position variable r however remaining static. The forces are those that act in the given microstate $\mathbf{r}^N(t)$ at time t, i.e. the interparticle force on particle i at time t is $-\nabla_i u(\mathbf{r}^N(t))$.

In practice, using BD simulations, carrying out the average in equation (7) requires building the mean over sufficiently many separate realizations of the microscopic evolution of the many-body system, which differ in the initial microstate (as e.g. drawn from an equilibrium ensemble) and in the realization of the thermal noise. As equation (7) measures both the probability to find particle i at position r (via the delta function) and the interparticle force that acts via the negative gradient $-\nabla_i u(\mathbf{r}^N)$, we refer to $\mathbf{F}_{\mathrm{int}}(\mathbf{r},t)$ as a force density. The corresponding force field $\mathbf{f}_{\mathrm{int}}(\mathbf{r},t)$ is obtained by simple normalization with the density profile, i.e. $\mathbf{f}_{\mathrm{int}}(\mathbf{r},t) = \mathbf{F}_{\mathrm{int}}(\mathbf{r},t)/\rho(\mathbf{r},t)$. Building this ratio scales out the probability effect and the force field then carries physical units of force, i.e. energy per length.
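As a concrete illustration of this sampling step, the sketch below (assuming pairwise LJ interactions, a cubic periodic box and planar resolution along x, with binning choices that are purely illustrative) accumulates the one-body density and the interparticle force density from stored configurations and forms the force field by normalization.

```python
import numpy as np

# Minimal sketch: bin rho(x) and the interparticle force density F_int(x) from
# stored BD configurations (list of (N, 3) position arrays), assuming pairwise
# LJ interactions, then form f_int(x) = F_int(x) / rho(x) as in equation (7).

def lj_forces(pos, L, eps=1.0, sigma=1.0, rcut=2.5):
    """Pairwise LJ forces on all particles, minimum-image convention."""
    forces = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        rij = pos[i + 1:] - pos[i]
        rij -= L * np.round(rij / L)                 # minimum image
        r2 = np.sum(rij**2, axis=1)
        mask = r2 < rcut**2
        inv6 = (sigma**2 / r2[mask])**3
        fij = (24.0 * eps * (2.0 * inv6**2 - inv6) / r2[mask])[:, None] * rij[mask]
        forces[i] -= fij.sum(axis=0)                 # force on i from all j > i
        forces[i + 1:][mask] += fij                  # Newton's third law
    return forces

def profiles(configs, L, nbins=100):
    """Average density and interparticle force-density profiles along x."""
    vol_bin = (L / nbins) * L * L
    rho_hist, F_hist = np.zeros(nbins), np.zeros(nbins)
    for pos in configs:
        f = lj_forces(pos, L)
        idx = np.minimum((pos[:, 0] % L) / (L / nbins), nbins - 1).astype(int)
        np.add.at(rho_hist, idx, 1.0)
        np.add.at(F_hist, idx, f[:, 0])
    rho = rho_hist / (len(configs) * vol_bin)
    F_int = F_hist / (len(configs) * vol_bin)
    f_int = np.divide(F_int, rho, out=np.zeros_like(F_int), where=rho > 0)
    return rho, F_int, f_int
```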

In equilibrium the definition (7) remains intact. Complementing the statistical average, static density functional theory allows one to express the equilibrium force density as being functionally dependent on the density profile via the functional derivative of the excess free energy functional according to:

$$\mathbf{F}_{\mathrm{int}}(\mathbf{r}) = -\rho(\mathbf{r})\,\nabla\frac{\delta F_{\mathrm{exc}}[\rho]}{\delta\rho(\mathbf{r})}. \qquad (8)$$

Crucially, and in contrast to equation (7), here the internal force density is directly expressed as a density functional. This dependence has superseded the original dependence on the external potential, as is manifest in the probability distribution for building the average (7) in equilibrium.

As a self-consistency check we insert the force density functional (8) into the equilibrium limit of the force density balance (6). The current vanishes in the equilibrium case, $\mathbf{J}(\mathbf{r},t)\equiv 0$, and we obtain

$$0 = -k_BT\nabla\rho(\mathbf{r}) - \rho(\mathbf{r})\nabla\frac{\delta F_{\mathrm{exc}}[\rho]}{\delta\rho(\mathbf{r})} - \rho(\mathbf{r})\nabla V_{\mathrm{ext}}(\mathbf{r}). \qquad (9)$$

This result is independent of time and it constitutes the gradient of the static Euler–Lagrange equation (2) when divided by the density profile. (Insert equation (8), identify the ideal gas contribution $-k_BT\nabla\rho(\mathbf{r}) = -\rho(\mathbf{r})\nabla\,\delta F_{\mathrm{id}}[\rho]/\delta\rho(\mathbf{r})$, and divide by $\rho(\mathbf{r})$.) The classical force density balance result (9) by Yvon, Born and Green [3] has recently been derived from systematically addressing thermal Noether invariance [45, 46] against locally resolved spatial deformations of the statistical ensemble [47–49], as also valid quantum mechanically [49] and at second order in the displacement field [50, 51]; we give a brief account of this theory in section 6 below.

A naive transfer of equation (8) to nonequilibrium lets one simply evaluate the equilibrium excess free energy functional at the instantaneous nonequilibrium density $\rho(\mathbf{r},t)$. In order to separate this contribution from true static equilibrium, we refer to this force density as being adiabatic (subscript 'ad') and define it as

$$\mathbf{F}_{\mathrm{ad}}(\mathbf{r},t) = -\rho(\mathbf{r},t)\,\nabla\frac{\delta F_{\mathrm{exc}}[\rho]}{\delta\rho(\mathbf{r},t)}. \qquad (10)$$

We recall that the right hand side offers a concrete computational structure that is of practical usefulness in actual applications, as considerable knowledge about approximate forms of the excess free energy density functional $F_{\mathrm{exc}}[\rho]$ is available. Using the adiabatic force density as a proxy for the true nonequilibrium intrinsic force density distribution (7), i.e. setting $\mathbf{F}_{\mathrm{int}}(\mathbf{r},t) = \mathbf{F}_{\mathrm{ad}}(\mathbf{r},t)$ in the force density balance (6) together with the continuity equation (5), leads to the DDFT equation of motion (1). The adiabatic force density approximation is uncontrolled though and the theory inherently yields the dynamics as an adiabatic sequence of equilibrium states. Surely, more than 40 years after the conception of the DDFT [4], we have to be able to do better!

4. Power functional techniques

Power functional theory [43] offers a concrete mathematical structure to go forward. We describe the essential steps that enable one to go beyond the DDFT and hence to address a significantly expanded realm of nonequilibrium physics, of which equation (1) is oblivious.

The interparticle force density profile (7) is identified to consist of two contributions according to:

$$\mathbf{F}_{\mathrm{int}}(\mathbf{r},t) = \mathbf{F}_{\mathrm{ad}}(\mathbf{r},t) + \mathbf{F}_{\mathrm{sup}}(\mathbf{r},t). \qquad (11)$$

Here $\mathbf{F}_{\mathrm{ad}}(\mathbf{r},t)$ is the adiabatic force density profile, as given formally via the explicit equilibrium free energy derivative (10) and directly accessible in simulations via the custom flow method [52, 53]. The custom flow algorithm allows one to systematically construct a hypothetical adiabatic (equilibrium) system that shares its density profile with the nonequilibrium system at the given time. Then sampling the internal force density in the adiabatic system yields results for $\mathbf{F}_{\mathrm{ad}}(\mathbf{r},t)$.
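On the level of sampled planar profiles this construction reduces to a simple subtraction. A minimal sketch, assuming that the interparticle force density of the driven system and the internal force density of its custom-flow adiabatic counterpart have been sampled on one common grid (array names are placeholders):

```python
import numpy as np

# Minimal sketch of the splitting (11): subtract the adiabatic force density,
# sampled in the custom-flow equilibrium system with identical rho(x), from
# the full interparticle force density of the driven steady state.

def superadiabatic_profiles(F_int, F_ad, rho):
    """Return superadiabatic force density F_sup(x) and force field f_sup(x)."""
    F_sup = F_int - F_ad
    f_sup = np.divide(F_sup, rho, out=np.zeros_like(F_sup), where=rho > 0)
    return F_sup, f_sup
```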

The second, superadiabatic contribution in equation (11), $\mathbf{F}_{\mathrm{sup}}(\mathbf{r},t)$, contains all effects that are not expressible as an instantaneous density functional. This includes forces that lead to viscous phenomena and to nonequilibrium structure formation, as we exemplify below in a concrete model compressional flow situation. Formally, the superadiabatic force density is generated from the superadiabatic excess free power functional $P_t^{\mathrm{exc}}[\rho,\mathbf{J}]$ upon functional differentiation with respect to the one-body current via [43, 54]:

$$\mathbf{F}_{\mathrm{sup}}(\mathbf{r},t) = -\rho(\mathbf{r},t)\,\frac{\delta P_t^{\mathrm{exc}}[\rho,\mathbf{J}]}{\delta \mathbf{J}(\mathbf{r},t)}. \qquad (12)$$

The functional dependence of $P_t^{\mathrm{exc}}[\rho,\mathbf{J}]$ on the density and current is causal, i.e. it involves the values of these fields only at times prior to t; density and current need to satisfy the continuity equation. Upon using equation (11) the force density balance (6) attains the following form:

$$\gamma\,\mathbf{J}(\mathbf{r},t) = -k_BT\nabla\rho(\mathbf{r},t) + \mathbf{F}_{\mathrm{ad}}(\mathbf{r},t) + \mathbf{F}_{\mathrm{sup}}(\mathbf{r},t) + \rho(\mathbf{r},t)\,\mathbf{f}_{\mathrm{ext}}(\mathbf{r},t). \qquad (13)$$

This relationship holds beyond gradient forms of $\mathbf{f}_{\mathrm{ext}}(\mathbf{r},t)$, i.e. for external force fields that contain non-conservative contributions. Crucially, $\mathbf{F}_{\mathrm{sup}}(\mathbf{r},t)$ will in general also acquire nonconservative contributions, such as e.g. damping effects that represent viscous behaviour. Moreover, nonequilibrium structure-forming effects will also arise in general. These affect directly the shape of the density profile, whether this evolves in time or persists in a nonequilibrium steady state.

If one wishes to eliminate the explicit occurrence of the current from the dynamics, then inputting the force density balance (13) into the continuity equation (5) leads to the following formally exact form of the equation of motion for the density profile:

$$\frac{\partial\rho(\mathbf{r},t)}{\partial t} = \gamma^{-1}\nabla\cdot\left\{\rho(\mathbf{r},t)\left[\nabla\frac{\delta F[\rho]}{\delta\rho(\mathbf{r},t)} - \mathbf{f}_{\mathrm{sup}}(\mathbf{r},t) - \mathbf{f}_{\mathrm{ext}}(\mathbf{r},t)\right]\right\}. \qquad (14)$$

Here it is apparent that the superadiabatic force field $\mathbf{f}_{\mathrm{sup}}(\mathbf{r},t) = \mathbf{F}_{\mathrm{sup}}(\mathbf{r},t)/\rho(\mathbf{r},t)$ has a direct effect on the system dynamics. The effect is similar to that of the external force field. Crucially though, both force fields are independent of each other: the external force field represents a prescribed and inert influence on the system. In contrast, the superadiabatic force field is an emergent phenomenon that arises due to interparticle interactions and, from the functional point of view, depends non-locally in position and causally in time on the one-body density and on the current profile.

Although setting $\mathbf{f}_{\mathrm{sup}}(\mathbf{r},t) = 0$ yields the DDFT (1), the superadiabatic force field $\mathbf{f}_{\mathrm{sup}}(\mathbf{r},t)$ was demonstrated to exist and in general to play a major role in the dynamics on the one-body level [55–61], on the two-body level of correlation functions based on test-particle concepts [62–67, 68–70], and for active matter [71–75]. Both the flow properties as well as the spatial structure formation in the system are affected.

To reveal additional physics, it is useful to split the superadiabatic force field into 'structural' and 'flow' contributions. This was established e.g. for complex flow patterns that occur in driven BD [56, 60], for active Brownian particles which form a self-sustained interface at motility-induced phase coexistence [71–75], as well as very recently for a sheared three-body colloidal gel former [61]. Before we demonstrate these concepts for an example of steady nonequilibrium below, we first describe two simple model power functionals that respectively generate structure and viscously dampen the motion and that, as we will see, give a good account of the nonequilibrium flow considered below.

We concentrate on the low-order terms that are relevant for compressional/extensional flow, i.e. for situations where $\nabla\cdot\mathbf{v}(\mathbf{r},t)\neq 0$. We focus on cases where there is no rotational motion (such as shearing) and hence $\nabla\times\mathbf{v}(\mathbf{r},t) = 0$. The velocity gradient superadiabatic power functional consists of a sum,

$$P_t^{\mathrm{exc}}[\rho,\mathbf{v}] = P_t^{\mathrm{flow}}[\rho,\mathbf{v}] + P_t^{\mathrm{str}}[\rho,\mathbf{v}]. \qquad (15)$$

Here the flow and structural [56, 60] contributions are approximated, respectively, by the following time-local (Markovian) and space-semilocal (i.e. involving $\nabla$) forms

$$P_t^{\mathrm{flow}}[\rho,\mathbf{v}] = \frac{\eta}{2}\int d\mathbf{r}\, \rho(\mathbf{r},t)^2\left[\nabla\cdot\mathbf{v}(\mathbf{r},t)\right]^2, \qquad (16)$$

$$P_t^{\mathrm{str}}[\rho,\mathbf{v}] = \frac{\chi}{3}\int d\mathbf{r}\, \rho(\mathbf{r},t)^3\left[\nabla\cdot\mathbf{v}(\mathbf{r},t)\right]^3, \qquad (17)$$

where the overall prefactors η and χ control the respective magnitude and they play the role of transport coefficients (see below). The flow functional (16) is quadratic both in density and in the velocity field; the structural functional (17) is of cubic order in each of these variables. Explicit higher-order functionals exist [60] and they become relevant when driving the system strongly. We will return to the consequences of equations (16) and (17) after laying out in section 5 the actual flow situation that we use as a model to exemplify the implications for the physics. Before doing so, we briefly describe several further key aspects of the power functional framework.

Power functional theory provides a formal mechanism for the inclusion of time- and space-nonlocal dynamics [57, 69, 80]. While equation (12) applies to overdamped dynamics, the acceleration field becomes a further relevant degree of freedom if inertia is relevant [79–82], whether classically in molecular dynamics [79, 80] or in quantum dynamics [81, 82]. Here the memory functions act as convolution kernels on specific kinematic fields and rotational and compressional contributions to the dynamics are genuinely built in. As laid out above, the framework is based on an exact variational concept [43, 54], and the resulting functional mapping was shown to be explicitly accessible in many-body simulation via the custom flow computer simulation method [52, 53].

Even simple mathematical model forms for the nonequilibrium contribution to the power functional, such as equations (16) and (17), already capture essential physics (as we demonstrate below) and dynamical two-body correlation functions are accessible via test particle dynamics [9, 10, 62–70]. The power functional is thereby not to be confused with the often vague concept of a 'nonequilibrium free energy'. The proper equilibrium free energy functional does play a central role in power functional theory though, by providing the description of the adiabatic reference state [43]; see the generation of the force density distribution via functional differentiation (10), as is relevant for the interparticle force splitting (11) and the full density equation of motion (14).

The relevance of superadiabatic contributions to the dynamics, i.e. of those effects that lie beyond equation (1), has been amply demonstrated in the literature [55–60, 68–70]. Both adiabatic and superadiabatic effects arise from integrating out the dynamical degrees of freedom of the many-body problem.

Ensemble differences between canonical dynamics and grand canonical equilibrium have been systematically addressed [76–78] and these do not account for the observed differences between adiabatic and superadiabatic dynamics. The kinematic dependence on the motion of the system arises formally [43], it can be explicitly traced in many-body computer simulation work [60], and it is amenable to machine learning, as we demonstrate in section 7. Before doing so, we first formulate the representative flow problem that we will use to apply the above concepts.

5. Nonequilibrium steady states

We restrict ourselves to flow situations with one-body fields that are inhomogeneous in position but independent of time, i.e. $\rho(\mathbf{r})$ and $\mathbf{v}(\mathbf{r})$. Then trivially $\partial\rho(\mathbf{r})/\partial t = 0$ and the continuity equation (5) constrains both fields to satisfy $\nabla\cdot[\rho(\mathbf{r})\mathbf{v}(\mathbf{r})] = 0$. As a representative case we illustrate in figure 1 a nonequilibrium steady state of a three-dimensional liquid undergoing unidirectional compressional flow. Flow along a single given direction occurs e.g. under the influence of gravity, where sedimentation of colloids leads to both compression in the lower parts of the sample and expansion in the upper parts of the sample. Here we disregard transient phenomena and investigate an idealized periodic system, where flowing steady states can form.


Figure 1. Illustration of unidirectional compressional flow of a liquid. The three-dimensional system is set into motion (red arrows) by the action of an external force profile $f_{\mathrm{ext}}(x)$ (blue arrows) which acts along the x-axis. The system retains planar geometry such that spatial inhomogeneities only occur as a function of x. The density profile $\rho(x)$ (orange curve) and the velocity profile v(x) (red curve) are both stationary in time but inhomogeneous in position. The local one-body current $J(x) = \rho(x)v(x) = \mathrm{const}$ and as a result the system is in a nonequilibrium steady state. The corresponding adiabatic system is in equilibrium (it has no mean flow) and it has by construction an unchanged density profile $\rho(x)$. In the adiabatic system the spatial variation of $\rho(x)$ is stabilized by the action of an external force field $-\nabla V_{\mathrm{ad}}(x)$ (olive arrows), which acts solely in the adiabatic system.


This chosen uniaxial flow in planar geometry is complementary to DDFT, as density gradients are relevant and the density profile alone already contains much non-trivial information about the dynamics that the system undergoes. Hence this specific geometry is often used to carry out generic tests; see e.g. the investigation of the quality of force-based DDFT [47, 48]. In contrast, shear flow is very different, as any motion that occurs perpendicular to the density gradient is not captured by equation (1); we refer the reader to [41, 42] for a description of efforts to include these effects within DDFT via different types of modifications of equation (1).

In order to elucidate the physics in the chosen uniaxial compressional setups, we follow the splitting (15) of the superadiabatic power functional into structural and flow contributions and hence decompose the superadiabatic force field accordingly as

$$\mathbf{f}_{\mathrm{sup}}(\mathbf{r}) = \mathbf{f}_{\mathrm{str}}(\mathbf{r}) + \mathbf{f}_{\mathrm{flow}}(\mathbf{r}), \qquad (18)$$

where the right hand side consists of the nonequilibrium structural force field $\mathbf{f}_{\mathrm{str}}(\mathbf{r})$ and the flow force field $\mathbf{f}_{\mathrm{flow}}(\mathbf{r})$. Both of these force contributions arise from the microscopic interparticle interactions, as coarse-grained in a microscopically sharp way to the one-body level. We lay out in the following the benefits of the structure-flow splitting (18) and its definition via flow reversal symmetry.

First, on the more practical level, equation (18) allows one to carry out a corresponding splitting of the force density balance (13) (we divide by $\rho(\mathbf{r})$ to obtain force fields). The result is a set of two coupled equations of motion, with one of them depending explicitly on the velocity profile and the second one depending explicitly on the density profile:

$$\gamma\,\mathbf{v}(\mathbf{r}) = \mathbf{f}_{\mathrm{flow}}(\mathbf{r}) + \mathbf{f}_{\mathrm{ext},\mathrm{f}}(\mathbf{r}), \qquad (19)$$

$$0 = -k_BT\nabla\ln\rho(\mathbf{r}) + \mathbf{f}_{\mathrm{ad}}(\mathbf{r}) + \mathbf{f}_{\mathrm{str}}(\mathbf{r}) + \mathbf{f}_{\mathrm{ext},\mathrm{s}}(\mathbf{r}). \qquad (20)$$

Building the sum of equations (19) and (20) and multiplying by the density profile restores the full force density balance (13). The external force field is split according to $\mathbf{f}_{\mathrm{ext}}(\mathbf{r}) = \mathbf{f}_{\mathrm{ext},\mathrm{f}}(\mathbf{r}) +\mathbf{f}_{\mathrm{ext}, \mathrm{s}}(\mathbf{r})$, where the two terms couple to the flow via $\mathbf{f}_{\mathrm{ext},\mathrm{f}}(\mathbf{r})$ in equation (19) and to the structure via $\mathbf{f}_{\mathrm{ext},\mathrm{s}}(\mathbf{r})$ in equation (20).

On the superficial level the two equations (19) and (20) appear to be independent of each other, as no single field appears explicitly in both equations. However, the two equations are indeed intimately coupled to each other by the interparticle interactions, as represented by both the adiabatic and the two superadiabatic (flow and structural) force fields. These three intrinsic force contributions provide the physical representation of the true nonequilibrium steady state dynamics.

The flow-structure splitting (18) is uniquely determined by the symmetry properties of the forces upon motion reversal of the system [60]. Motion reversal is a discrete symmetry operation, and hence different from continuous invariances where Noether's theorem applies [45–51]. One considers a 'reversed' system, which is also in steady state and possesses an unchanged density profile $\rho(\mathbf{r})$. The flow, however, is directed against the velocity orientation in the original 'forward' system. Hence the velocity profile in the reversed system is simply $-\mathbf{v}(\mathbf{r})$. As a result the current also acquires a minus sign, $-\rho(\mathbf{r})\mathbf{v}(\mathbf{r})$, which however does not affect the (vanishing) divergence, $\nabla\cdot[-\rho(\mathbf{r})\mathbf{v}(\mathbf{r})] = 0$. Thus the reversed state indeed is stationary. The two superadiabatic contributions are then defined to be unchanged [$\mathbf{f}_{\mathrm{str}}(\mathbf{r})$] and inverted [$-\mathbf{f}_{\mathrm{flow}}(\mathbf{r})$] in the reversed system. Consequently, the superadiabatic force field in the reversed system is the difference $\mathbf{f}_{\mathrm{str}}(\mathbf{r})-\mathbf{f}_{\mathrm{flow}}(\mathbf{r})$.

Analyzing the symmetry properties of the adiabatic force field is straightforward. We recall that $\mathbf{f}_{\mathrm{ad}}(\mathbf{r})$ is a density functional via equation (10). The density profiles in the forward and in the reversed systems are identical though. Hence $\mathbf{f}_{\mathrm{ad}}(\mathbf{r})$ is invariant under motion reversal. Motion reversal is a useful device in order to (i) rationalize the nonequilibrium behaviour according to the split force balance (19) and (20), and to (ii) classify the dependence of superadiabatic forces on the velocity field into even powers, which constitute $\mathbf{f}_{\mathrm{str}}(\mathbf{r})$, and odd powers, which form $\mathbf{f}_{\mathrm{flow}}(\mathbf{r})$.
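In practice the classification into even and odd parts amounts to a two-line prescription. A minimal sketch, assuming the superadiabatic force field has been sampled on the same grid in the forward system and in the motion-reversed system:

```python
# Minimal sketch of the flow-structure splitting via motion reversal: the flow
# force is the part that is odd under reversal of the velocity field, the
# structural force is the even part.

def split_by_motion_reversal(f_sup_forward, f_sup_reversed):
    """Return (f_flow, f_str) from superadiabatic force fields of both systems."""
    f_flow = 0.5 * (f_sup_forward - f_sup_reversed)   # odd: changes sign under reversal
    f_str = 0.5 * (f_sup_forward + f_sup_reversed)    # even: invariant under reversal
    return f_flow, f_str
```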

We can demonstrate this mechanism explicitly on the basis of the above flow and structural power functionals (16) and (17). Superadiabatic force fields are generated via the functional derivative (12) with respect to the current or, analogously, by functionally deriving with respect to $\mathbf{v}(\mathbf{r},t)$ and dividing the result by $\rho(\mathbf{r},t)$. The resulting superadiabatic one-body force field consists of two components. The viscous flow force [56, 59] and the structural force follow respectively as

$$\mathbf{f}_{\mathrm{flow}}(\mathbf{r}) = \frac{\eta}{\rho(\mathbf{r})}\,\nabla\left[\rho(\mathbf{r})^2\,\nabla\cdot\mathbf{v}(\mathbf{r})\right], \qquad (21)$$

$$\mathbf{f}_{\mathrm{str}}(\mathbf{r}) = \frac{\chi}{\rho(\mathbf{r})}\,\nabla\left\{\rho(\mathbf{r})^3\left[\nabla\cdot\mathbf{v}(\mathbf{r})\right]^2\right\}, \qquad (22)$$

where equation (21) is odd (linear) and equation (22) is even (quadratic) in the derivatives of the velocity field, as desired. We reiterate that both expressions are only valid for small enough velocity gradients.

One might wonder where all this genuine nonequilibrium physics leaves the DDFT! Some readers will find the instantaneous dynamics, as generated from an adiabatic free energy according to (1), to be more appealing and intuitive than the thinking in terms of the above described apparently intricate functional relationships. Why not live with equation (1), use it, and simply accept its defects? In order to address this question and to demonstrate why this path is severely restricted from the outset, we turn in section 7 to an explicit demonstration of the functional relationship that governs the nonequilibrium physics, i.e. the kinematic functional map from the one-body mean motion to the internal force field. Before doing so, we demonstrate that Noether's theorem of invariant variations has much to say about our present setup.

6. Noether force sum rules

We discuss one of the arguably simplest cases of exploitation of the inherent symmetries of a thermal many-body system, that of global translational invariance of its statistical mechanics [45, 46]. We consider a 'shifting' transformation, where all particle coordinates change according to the map $\mathbf{r}_i\to\mathbf{r}_i + {\boldsymbol \epsilon}$, where ${\boldsymbol \epsilon} = \mathrm{const}$. This uniform shifting operation leaves all interparticle distances unchanged, $\mathbf{r}_i-\mathbf{r}_j\to(\mathbf{r}_i+{\boldsymbol \epsilon})-(\mathbf{r}_j+{\boldsymbol \epsilon})\equiv\mathbf{r}_i-\mathbf{r}_j$. As a consequence the interparticle potential is invariant under the transformation, which we can express as the identity $u(\mathbf{r}_1,\ldots,\mathbf{r}_N) = u(\mathbf{r}_1+{\boldsymbol \epsilon},\ldots,\mathbf{r}_N+{\boldsymbol \epsilon})$. Here equality holds irrespective of the magnitude and the direction of the shifting vector ${\boldsymbol \epsilon}$.

The Noether argument proceeds with a twist. Despite the absence of dependence on ${\boldsymbol \epsilon}$, we can nevertheless differentiate both sides of the equation with respect to ${\boldsymbol \epsilon}$ and the result will be a valid identity. We obtain $0 = \partial u(\mathbf{r}_1+{\boldsymbol \epsilon},\ldots,\mathbf{r}_N+{\boldsymbol \epsilon})/\partial {\boldsymbol \epsilon} = \sum_i \nabla_i u(\mathbf{r}_1,\ldots,\mathbf{r}_N)$, where we have set ${\boldsymbol \epsilon} = 0$ after taking the derivative. We multiply by −1 and insert $1 = \int d\mathbf{r}\,\delta(\mathbf{r}-\mathbf{r}_i)$, which yields

$$-\int d\mathbf{r}\sum_i \delta(\mathbf{r}-\mathbf{r}_i)\,\nabla_i u(\mathbf{r}^N) = 0. \qquad (23)$$

The expression on the left hand side allows one to identify the locally resolved interparticle force operator $\hat{\mathbf{F}}_{\mathrm{int}}(\mathbf{r}) = -\sum_i \delta(\mathbf{r}-\mathbf{r}_i)\nabla_i u(\mathbf{r}^N)$, such that equation (23) attains the form $\int d\mathbf{r}\, \hat{\mathbf{F}}_{\mathrm{int}}(\mathbf{r}) = 0$. This identity holds for each microstate $\mathbf{r}^N$ and hence it remains trivially valid upon averaging over the many-body distribution function, irrespective of whether this is in or out of equilibrium. We can hence conclude the vanishing of the global interparticle force, expressed as the integral over the mean force density $\mathbf{F}_{\mathrm{int}}(\mathbf{r}) = \langle \hat{\mathbf{F}}_{\mathrm{int}}(\mathbf{r})\rangle$, as

$$\int d\mathbf{r}\, \mathbf{F}_{\mathrm{int}}(\mathbf{r},t) = 0. \qquad (24)$$

Equation (24) holds at all times t and it can be viewed as a consequence of Newton's third law, see the discussion in [45]. Using the adiabatic-superadiabatic force splitting (11) one can further conclude that both global contributions need to vanish individually,

$$\int d\mathbf{r}\, \mathbf{F}_{\mathrm{ad}}(\mathbf{r},t) = 0, \qquad (25)$$

$$\int d\mathbf{r}\, \mathbf{F}_{\mathrm{sup}}(\mathbf{r},t) = 0. \qquad (26)$$

The proof can either be based on the fact that equation (25) is merely equation (24) for the special case of an equilibrium system, from which equation (26) then follows via the force splitting (11). Alternatively, and starting from a very fundamental point of view, the global translational invariance of the excess free energy functional $F_{\mathrm{exc}}[\rho]$ and of the superadiabatic free power functional $P_t^{\mathrm{exc}}[\rho,\mathbf{v}]$, here considered instantaneously at time t, leads directly to equations (25) and (26), see [45, 46] for detailed derivations.
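The microstate identity behind these global rules is straightforward to check numerically. The sketch below uses an illustrative LJ pair force and an arbitrary random microstate; the pairwise forces cancel by Newton's third law, so their sum vanishes to machine precision in every configuration, and hence also after any statistical averaging.

```python
import numpy as np

# Minimal sketch: in any microstate, pairwise interparticle forces sum to zero
# (Newton's third law), which is the configurational content of equation (24).

def pair_force(rij, eps=1.0, sigma=1.0):
    """LJ force acting on particle i due to particle j, with rij = r_j - r_i."""
    r2 = np.dot(rij, rij)
    inv6 = (sigma**2 / r2)**3
    return -24.0 * eps * (2.0 * inv6**2 - inv6) / r2 * rij

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(50, 3))      # arbitrary microstate
total = np.zeros(3)
for i in range(len(pos)):
    for j in range(len(pos)):
        if i != j:
            total += pair_force(pos[j] - pos[i])
print(total)                                    # ~ (0, 0, 0) up to round-off
```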

It is interesting to apply the Noether concept to the flow-structure splitting equation (18) of the superadiabatic force field. One can see straightforwardly, from the symmetry upon motion reversal, that both the global structural force and the global flow force need to vanish individually:

$$\int d\mathbf{r}\, \rho(\mathbf{r})\,\mathbf{f}_{\mathrm{str}}(\mathbf{r}) = 0, \qquad (27)$$

$$\int d\mathbf{r}\, \rho(\mathbf{r})\,\mathbf{f}_{\mathrm{flow}}(\mathbf{r}) = 0. \qquad (28)$$

We prove this by contradiction and assume that it is not the case, i.e. that each integral gives the same global force, but with opposite sign, such that the sum vanishes and equation (26) remains valid. By construction, $\mathbf{f}_{\mathrm{flow}}(\mathbf{r})$ changes sign in the motion reversed system, but $\mathbf{f}_{\mathrm{str}}(\mathbf{r})$ does not. Hence equation (26) can only be satisfied in the motion-reversed system provided that both the flow and the structural contribution vanish separately.

We can explicitly test the validity of the sum rules (27) and (28) for the above analytical force approximations (21) and (22). The respective integrals are $\eta \int d\mathbf{r} \nabla[\rho(\mathbf{r})^2\nabla\cdot\mathbf{v}(\mathbf{r})] = 0$ and $\chi\int d\mathbf{r} \nabla \{\rho(\mathbf{r})^3[\nabla\cdot\mathbf{v}(\mathbf{r})]^2\} = 0$, which follows from the divergence theorem, as boundary terms vanish. Hence the simple non-local velocity gradient power functional approximations (16) and (17) have passed the global Noether validation test. This is nontrivial, as the proof rests on the specific structure of the integrands being gradients, which for more general analytical forms will not be the case. This exemplifies the merits of Noether sum rules for assessing and by extension also constructing theoretical nonequilibrium force approximations.
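On a periodic grid this check can be reproduced to machine precision, since the central difference of any periodic function sums to zero exactly. A minimal sketch with illustrative toy profiles (the prefactors η and χ are dropped, and the sign conventions are those assumed in equations (21) and (22) above):

```python
import numpy as np

# Minimal numerical check of the global sum rules (27) and (28) for the
# velocity gradient force approximations: both force densities are total
# gradients, so their integrals over the periodic box vanish.

L, M = 10.0, 512
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
ddx = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)   # periodic derivative

rho = 0.5 + 0.2 * np.cos(2.0 * np.pi * x / L)   # illustrative density profile
v = 1.0 / rho                                    # steady state: rho * v = const

F_flow = ddx(rho**2 * ddx(v))                    # ~ grad[rho^2 div v]
F_str = ddx(rho**3 * ddx(v)**2)                  # ~ grad{rho^3 [div v]^2}

print(np.sum(F_flow) * dx)                       # = 0 to machine precision
print(np.sum(F_str) * dx)                        # = 0 to machine precision
```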

The Noether concept carries much further. Hermann and Schmidt [45] present a generalization of the global sum rules, such as the vanishing of the total superadiabatic force (26), for so-called time direct correlation functions. These are defined via functional derivatives of the superadiabatic power functional, in generalization of the superadiabatic force density as generated via the derivative (12) with respect to the current distribution. We have shown [45] that these time direct correlation functions satisfy additional memory sum rules and we expect the corresponding identities to be helpful in the study of temporal nonlocality. Further work addressed the variance of global fluctuations, which was shown to be constrained by Noether invariance at the second order global level [50]. Noether's theorem also yields the locally resolved force balance relationship in quantum mechanical many-body systems [49]. Very recently, striking two-body force-force and force-gradient correlation functions were revealed for the precise and novel characterization of disordered (liquid and gel) systems [51]. Exploiting Noether's concept in a statistical mechanical setting is robust against changes of ensemble; [46] presents the transfer of the grand ensemble formalism [45] to canonical systems. Considering global rotational invariance leads to (classical) spin–orbit coupling of torque identities [45].

We return to steady states and demonstrate that the seemingly entirely formal functional relationships do in fact apply to real systems. We present in the following new computational methodology that we use to demonstrate the functional point of view. We will also demonstrate that the sum rules (26) and (27) are highly valuable in providing checks for numerical results.

7. Machine learning the kinematic map

Machine learning proves itself to be an increasingly useful tool in a variety of settings in soft matter, ranging from soft matter characterization [83], engineering of colloidal self-assembly [84], to the inverse design of soft materials [85]. Pivotal studies addressed colloidal structure detection [86], the identification of combinatorial rules in mechanical metamaterials [87], the learning of many-body interaction potentials for spherical [88] and for anisotropic particles [89], and the prediction of the dynamics of supercooled liquids from their static properties [90].

Concerning slow dynamics, machine learning was used for obtaining memory kernels for generalised Langevin dynamics [91], classifying the age [92], assessing the structural heterogeneity [93], and investigating dimensionality reduction of local structure [94] of glasses. Machine learning was applied to equilibrium reactive processes such as molecular isomerization [95] and to the behaviour of rare diffusive molecular dynamics trajectories [96].

Machine learning plays an important role in the inverse design for self-assembly of soft materials [97, 98]. Examples thereof include sequence-specific aggregation of copolymers [99], inverse design of multicomponent colloidal crystals by reverse engineering the Hamiltonian of the system [100], characterizing the self-assembly of three-dimensional colloidal systems [101], controlling colloidal crystals via morphing energy landscapes [102], and learning free energy landscapes using artificial neural networks [103].

In a liquid state theory-informed approach, Limmer and his coworkers have considered potentials based on local representations of atomic environments, in order to learn intermolecular forces at liquid-vapor interfaces [104]. They relate their machine-learning approach to the local molecular field theory by Weeks and coworkers [105, 106], see [107] for a description of the relationship of this approach to DFT.

More specifically, in the context of classical density functional theory, an early and pioneering study formulated a neural-network approach to liquid crystal ordering in confinement [108]. Free energy density functionals were obtained for one-dimensional fluids from a convolutional neural network [109] and an analytical form of an excess free energy functional was generated from an equation learning network [110]. Cats et al [111] recently used machine learning to improve the standard mean-field approximation of the excess Helmholtz free-energy functional for a three-dimensional LJ system at a supercritical temperature. These significant research efforts were devoted to tailoring analytical forms of model free energy functionals, by training certain key components such as spatial convolution kernels, and much insight into the inner workings of excess free energy functionals was gained [109–111]. Very recent developments include using physics-constrained Bayesian inference of state functions [112] and emulating functionals by active learning with error control [113]. The results of DFT calculations were also used as training data for investigating gas solubility in nanopores [114].

However, here we proceed very differently and moreover do so out-of-equilibrium. We use the LJ model and the identical planar geometry as in [111], such that the density profile $\rho(x)$ depends only on a single position coordinate x. We consider steady states and retain planar symmetry by considering flow that is directed in the x-direction, such that the current $\mathbf{J}(x) = J(x) \mathbf{e}_x$, where J(x) is the magnitude of the current and $\mathbf{e}_x$ is the unit vector in the x-direction. Both the density profile $\rho(x)$ and the velocity field $v(x) = J(x)/\rho(x)$ are independent of time. The continuity equation (5) then implies $0 = \partial\rho(x)/\partial t = -\partial[v(x) \rho(x)]/\partial x$, from which one obtains by spatial integration $\rho(x)v(x) = J_0 = \mathrm{const}$. Here the value of $J_0$ determines the intensity of the flow; we recall the illustration shown in figure 1.

We base the machine learning procedure on a convolutional neural network, as was done e.g. in [109], and following [109–111] we use many-body computer simulations to provide training, validation, and test data. In contrast to these equilibrium studies though, in order to address the nonequilibrium problem we need to represent the physical time evolution on the many-body trajectory level. We use the recently developed highly performant adaptive BD algorithm [7] and apply it to the three-dimensional LJ fluid. As laid out above, in order to address situations of planar symmetry we drive the system only along the $\mathbf{e}_x$-direction. The specific form of the driving force field $f_{\mathrm{ext}}(x)\mathbf{e}_x$ is however irrelevant, as the training data only serves to extract the intrinsic kinematic functional relationship.

In order to cover a sufficiently broad range of flow situations, we represent the external force field as a truncated Fourier series $f_{\mathrm{ext}}(x) = \sum_{n = 0}^{n_{\mathrm{max}}} A_n \sin(2\pi n x/L + B_n)$, where L is the size of the cubic simulation box with periodic boundary conditions, $A_n$ are random amplitudes with zero mean and uniform distribution inside of a given finite interval, and $B_n$ are random phases. We truncate at order $n_{\mathrm{max}} = 4$ such that the length scale $L/(2\pi n_{\mathrm{max}})$ is comparable to the LJ molecular size σ. Ten percent of our simulation runs are carried out in equilibrium, i.e. for $A_0 = 0$. We use N = 500 LJ particles inside of a cubic simulation box of size $L = 10\sigma$. The temporal duration of each run is 1000τ, where $\tau = \sigma^2/D_0$ is the Brownian time scale. After initialization the system is randomized for 1τ at a very high temperature. Then we wait for 100τ to allow the system to reach a steady state and then collect data during the remaining time. In total we use 1000 such simulation runs; these are subdivided for purposes of training (520), validation (280) and testing (200). The maximal current encountered during training was $J_0\sigma^2\tau = 4.93$. A more detailed account of our machine-learning strategy is given in the appendix.
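For concreteness, one realization of the randomized driving force described above can be generated as in the following sketch; the amplitude interval is an assumption, as only zero mean and a finite range are specified.

```python
import numpy as np

# Minimal sketch: one realization of the truncated Fourier series
# f_ext(x) = sum_n A_n sin(2 pi n x / L + B_n), with n_max = 4 and L = 10 sigma
# as stated above. The amplitude bound A_max is an illustrative assumption.

rng = np.random.default_rng()
L, n_max, A_max = 10.0, 4, 1.0

def random_external_force(x):
    """Evaluate one random realization of f_ext on the positions x."""
    A = rng.uniform(-A_max, A_max, size=n_max + 1)        # zero-mean amplitudes
    B = rng.uniform(0.0, 2.0 * np.pi, size=n_max + 1)     # random phases
    n = np.arange(n_max + 1)[:, None]
    return np.sum(A[:, None] * np.sin(2.0 * np.pi * n * x[None, :] / L
                                      + B[:, None]), axis=0)

x = np.linspace(0.0, L, 128, endpoint=False)
f_ext = random_external_force(x)
```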

Our aim is to machine-learn and hence to explicitly demonstrate the kinematic map, $\rho(\mathbf{r}),\mathbf{v}(\mathbf{r})\to \mathbf{f}_{\mathrm{int}}(\mathbf{r})$, in steady state. We present the learning algorithm with inputs $\rho(x), v(x)$ and target $f_{\mathrm{int}}(x)$. The data for these three fields are obtained from building steady state averages via the adaptive BD over the corresponding one-body operators. We recall the microscopic definition of the interparticle one-body force density $\mathbf{F}_{\mathrm{int}}(\mathbf{r})$ via equation (7) and we refer the reader to the appendix of [52] for a description of several methods to sample the current in BD and hence obtain the overdamped velocity profile $\mathbf{v}(\mathbf{r})$. Finally, we use the standard counting method for the density profile $\rho(\mathbf{r})$, although more efficient 'force sampling' methods [115–118] exist. At this stage we neither impose the adiabatic-superadiabatic splitting (11), nor the structure-flow splitting (18), nor do we use any analytical model form of the functional relationship. We rather work on the level of the bare one-body simulation data, generated in the above described randomized uniaxial flow situations of the desired planar symmetry.
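A minimal sketch of such a supervised setup is given below. The convolutional architecture, activation functions and hyperparameters are illustrative assumptions and not the network used here, but the input/output structure reflects the description above: discretized ρ(x) and v(x) enter as two channels, the interparticle force field f_int(x) is the target, and circular padding accounts for the periodic box.

```python
import torch
from torch import nn

# Minimal sketch (architecture and hyperparameters are assumptions): a 1D
# convolutional network representing the kinematic map (rho(x), v(x)) -> f_int(x).

class KinematicMap(nn.Module):
    def __init__(self, channels=32, kernel_size=9):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv1d(2, channels, kernel_size, padding=pad, padding_mode="circular"),
            nn.Softplus(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad, padding_mode="circular"),
            nn.Softplus(),
            nn.Conv1d(channels, 1, kernel_size, padding=pad, padding_mode="circular"),
        )

    def forward(self, rho, v):
        # rho, v: (batch, n_bins) tensors; returns f_int of the same shape
        return self.net(torch.stack((rho, v), dim=1)).squeeze(1)

model = KinematicMap()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(rho, v, f_int_target):
    """One supervised step on a batch of sampled steady-state profiles."""
    optimizer.zero_grad()
    loss = loss_fn(model(rho, v), f_int_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```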

We refer to the result of this procedure as the machine-learned internal force field $f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v])$. This represents a 'surrogate model' in the sense of the terminology of the machine learning community. By construction this data structure depends functionally on the density profile and on the velocity profile. Importantly the external force field $f_{\mathrm{ext}}(x)$, as given by the above described randomized Fourier series, has not been used in the training, which was rather based solely on the intrinsic force field and its kinematic dependence on the density profile and the velocity field.

In order to test the validity of the functional relationship and to address the question whether $f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v])$ indeed represents the true $\mathbf{f}_{\mathrm{int}}(\mathbf{r},t,[\rho,v])$ of power functional theory, as restricted to the present planar and steady situation, we consider a toy flow situation as an application. We choose the density profile to consist of a single (co)sinusoidal deviation from the bulk, $\rho(x) = [0.5 + 0.2\cos(2\pi x/L)]\sigma^{-3}$. In order for the system to be in steady state, the velocity then necessarily needs to satisfy $v(x) = J_0/\rho(x)$, where the strength of the current $J_0 = \mathrm{const}$ is a free parameter.

We proceed in two ways. First, we check for self-consistency. Therefore we solve the force density balance relationship (6) for the external force field, which yields the explicit result:

$$f_{\mathrm{ext}}(x) = \gamma v(x) + k_BT\, \frac{\partial \ln\rho(x)}{\partial x} - f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v]). \qquad (29)$$

As is explicit in equation (29), inputting the toy state $\rho(x),\, v(x)$ on the right hand side yields a concrete machine learning prediction for the external force field on the left hand side. We then input this result for $f_{\mathrm{ext}}(x)$ as the driving force field in a single adaptive BD simulation run and expect this procedure to reproduce the density and velocity profile of the toy state. The reproductive success will however materialize only provided that (i) the functional kinematic dependence actually exists and that (ii) it is accurately represented by the neural network.
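The following sketch spells out this self-consistency loop. Here f_int_surrogate() is a stand-in for the trained model (replaced by a trivial placeholder so that the snippet is self-contained), and the final BD run is indicated only as a comment since it requires the full simulation machinery.

```python
import numpy as np

# Minimal sketch of the self-consistency test: build the toy profiles, evaluate
# equation (29) for the required external force field, then hand f_ext(x) to a
# single adaptive BD run, which should reproduce rho(x) and v(x).

L, M, J0, kT, gamma = 10.0, 100, 1.0, 1.5, 1.0
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
ddx = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def f_int_surrogate(rho, v):
    """Placeholder for the trained prediction f*_int(x, [rho, v])."""
    return np.zeros_like(rho)        # in practice: evaluate the trained network

rho = 0.5 + 0.2 * np.cos(2.0 * np.pi * x / L)   # prescribed toy density (in sigma^-3)
v = J0 / rho                                     # steady-state velocity profile

# equation (29): external force required to sustain the prescribed steady state
f_ext = gamma * v + kT * ddx(np.log(rho)) - f_int_surrogate(rho, v)

# rho_bd, v_bd = run_adaptive_bd(f_ext)          # hypothetical BD driver call
```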

The results, shown in figure 2, demonstrate the accomplishment of the reconstruction of the toy state. This establishes that the machine learned functional provides a numerically highly accurate representation of the true internal force functional. That quantitative differences between results from direct BD and from machine learning occur for the case of strongest flow ($J_0\sigma^2\tau = 5$) is not surprising, given that the value of the current is beyond the maximum encountered during training ($J\sigma^2\tau = 4.93$). However, despite the quantitative deviations of the prediction for the interparticle force field, the qualitative behaviour of the network remains entirely reasonable.


Figure 2. Kinematic profiles and force fields for uniaxial compressional flow of the LJ fluid. Results are shown from machine learning (lines) and from direct adaptive BD simulations (symbols). Functional relationships are represented by vertical arrows. Shown are the density profile $\rho(x)$, the one-body current J(x) and the external force field $f_{\mathrm{ext}}(x)$ (top row) as a function of the scaled distance $x/\sigma$, where σ is the LJ length scale and $k_BT/\epsilon = 1.5$ throughout. The density and the current functionally determine both the interparticle force field $f_{\mathrm{int}}(x)$ via the kinematic map and the superadiabatic force field $f_{\mathrm{sup}}(x)$ via the superadiabatic kinematic map (middle row). The internal force field $f_{\mathrm{int}}(x)$ splits into superadiabatic and adiabatic force contributions. The adiabatic force field $f_{\mathrm{ad}}(x)$ is a density functional via the Mermin-Evans map of density functional theory. The structural and flow force fields are split according to their symmetry upon motion reversal. The colour code represents different values of the current $J_0\sigma^2\tau = 0,1,2,3,4,5$ (from violet to yellow, see the center panel in the top row). The two insets show the predictions from the analytical velocity gradient functionals (21) and (22) on the same scale as the respective main panel; the transport coefficients are chosen as $\eta/(\epsilon \tau\sigma^3) = 0.35$ and $\chi/(\epsilon \tau^2 \sigma^6) = 0.075$ to give good agreement with the quasi-exact data. The system with $J_0 = 0$ is at rest in equilibrium and it doubles as the adiabatic state because its density profile is identical to that of the flowing systems (as shown in the first panel). The small differences in superadiabatic forces from BD and from machine learning for the case of the highest current considered, $J_0\sigma^2\tau = 5$, occur as this value is beyond those encountered in the training data.


We take this validation via the machine learning to be a practical, data-science-level verification of the existence of the power functional kinematic map. We recall the original formal construction [43, 54] and its subsequent confirmation via custom flow [52, 53].

Turning to the physics of the compressional flow, we use the adiabatic-superadiabatic decomposition (11) together with the flow-structure splitting (18) to analyze both the machine-learned functional $f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v])$ as well as the direct simulation results. As anticipated, both flow and structural force fields have nontrivial spatial variation, see figure 2. The flow force primarily contains viscous effects that stem from the dissipation that the compressional and extensional regions of the flow pattern generate. The structural force field becomes more strongly inhomogeneous and also larger in magnitude upon increasing the amplitude of the flow. This trend is necessary to provide a balance for the increasingly asymmetric and growing external force field, which in turn is required to keep the density profile unchanged upon increasing the throughput through the prescribed density wave.

The power functional predictions (21) and (22) capture these effects reasonably well given the simplicity of the analytical expressions, see the insets in figure 2. We find our numerical results to satisfy the Noether sum rules (26) and (27) to very good accuracy. The values of the prefactors η and χ in equations (21) and (22) characterize the dominant behaviour of the system in response to spatial variation of the flow. The parameter $\eta/(\epsilon \tau\sigma^3) = 0.35$ measures the viscosity and $\chi/(\epsilon \tau^2 \sigma^6) = 0.075$ quantifies the strength of nonequilibrium structure formation. We regard these amplitudes as well-defined transport coefficients, which determine the leading behaviour of the system in situations where higher-order gradient contributions are small or even irrelevant. Using our methodology, the precise values of η and χ can be obtained systematically from a straightforward comparison with the data from the BD simulations or from the machine learning model.
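
To illustrate how such a comparison could be organized in practice, the following minimal sketch (in Python) fits a single scalar prefactor by least squares against a sampled superadiabatic force profile. The function and variable names are hypothetical, and the assumed linear-in-prefactor model profiles stand in for the velocity-gradient expressions of equations (21) and (22), which we do not restate here.

```python
import numpy as np

def fit_prefactor(model_profile, sampled_force):
    """Least-squares estimate of a scalar transport coefficient (eta or chi)
    that scales an assumed model force profile onto sampled force data,
    i.e. the minimizer of |c * model_profile - sampled_force|^2 over c."""
    m = np.asarray(model_profile, dtype=float)
    f = np.asarray(sampled_force, dtype=float)
    return float(m @ f / (m @ m))

# Hypothetical usage: 'flow_model' would be the spatial part of the velocity
# gradient form (21) evaluated on the sampled v(x), and 'f_flow_bd' the flow
# force profile sampled in adaptive BD; analogously for the structural force.
# eta = fit_prefactor(flow_model, f_flow_bd)
# chi = fit_prefactor(structural_model, f_struc_bd)
```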

It remains to point out the stark contrast with the standard DDFT (1), which gives a trivial null result in the present setup by construction: the density profile remains unchanged upon increasing flow, and so does the adiabatic force field. So the DDFT provides no mechanism to account for the genuine nonequilibrium physics; see the appendix for further details.

8. Conclusions

For the purpose of assessing the status of the DDFT equation of motion (1), we have first described two exact limits that this approximation reproduces: the dynamics of the noninteracting diffusive ideal gas (see equation (4)) and the spatially inhomogeneous static equilibrium limit (see equation (2)). On general grounds one expects the DDFT to perform well when the situation under consideration is close to one of these limits. In particular near the static case this is nontrivial, as the system might be dense and spatially highly structured, as evidenced by a strongly inhomogeneous density profile. Provided that the dynamics is driven weakly enough via a time-dependent external potential, the DDFT [41, 42] can be a highly useful device, which enables one to describe the temporal evolution as a chain of equilibrium states, labelled by time. As the strictly static case (DFT) can correctly describe arbitrary spatial inhomogeneities, such adiabatic time evolution can provide highly nontrivial information. It is challenging, however, to know a priori whether or not the nonequilibrium situation under investigation will be close to adiabatic. Leaving the use of bare physical intuition aside, we are not aware of any simple quantitative criterion that would allow one to judge a priori whether DDFT is reliable or not. In this sense, the DDFT approximation can be viewed as being uncontrolled.

In general the contributions beyond the equilibrium physics will be relevant. On the level of the formally exact one-body equation of motion (14), the superadiabatic force field $\mathbf{f}_{\mathrm{sup}}(\mathbf{r},t)$ will then contribute, potentially very significantly so. Together with the adiabatic force field, which follows from the equilibrium excess free energy functional via $-\nabla \delta F_{\mathrm{exc}}[\rho]/\delta\rho(\mathbf{r},t)$, their sum constitutes the full interparticle force field. These force fields are coarse-grained, in a microscopically sharp way, to the one-body level of dynamical correlation functions. We have argued (i) that power functional theory is a concrete formal structure that allows one to obtain $\mathbf{f}_{\mathrm{sup}}(\mathbf{r},t)$ from a generating functional, (ii) that simple approximate forms already capture much relevant nonequilibrium physics and do so in a transparent and systematic way, and (iii) that machine learning can be used as a practical representation.

For uniaxial steady compressional flow of the three-dimensional LJ fluid we have described and exemplified the kinematic functional map that governs the exact nonequilibrium dynamics on the one-body level of dynamic correlation functions. As this description is based on a single position coordinate and a single time variable, it is of both conceptual and practical simplicity. As described by power functional theory, the superadiabatic interparticle force field for overdamped Brownian motion depends functionally on the density and the velocity field, i.e. $\mathbf{f}_{\mathrm{sup}}(\mathbf{r},t,[\rho,\mathbf{v}])$. The functional dependence is causal, i.e. it involves the values of the density profile and the velocity field at previous times, in general back to an initial state. The superadiabatic force field carries this kinematic dependence, i.e. on the history of $\rho(\mathbf{r},t)$ and $\mathbf{v}(\mathbf{r},t)$, but crucially it is independent of the external force field that drives the system.

We have explicitly demonstrated the functional map $\rho(\mathbf{r},t),\mathbf{v}(\mathbf{r},t)\to\mathbf{f}_{\mathrm{int}}(\mathbf{r},t)$ by establishing this functional relationship via machine learning the intrinsic force field. This includes as a special case the equilibrium map $\rho(\mathbf{r},t)\to \mathbf{f}_{\mathrm{ad}}(\mathbf{r},t)$, as is relevant for the approximate adiabatic time evolution via the DDFT (1). Using the force balance then gives direct access to the form of the required external force field via equation (29). The machine-learned model of the functional map hence enables 'instant custom flow' at negligible computational cost at the time of use. We recall that the custom flow method [52, 53] is based on the kinematic functional map, such that, from knowledge of the kinematic one-body fields, the external force field that is necessary to generate the given time evolution follows straightforwardly from the exact force balance (6).

An analytical approach to one-body functional maps leads to the simple structure of velocity gradient forms for the viscous and structural superadiabatic forces, as exemplified in equations (16) and (17) for compressional flow, i.e. for velocity fields with nonvanishing divergence. As we have shown, the resulting predictions for the flow force (21) and for the structural force field (22) provide a reasonable description of the simulation data and of its representation via the machine-learned functional. We attribute the remaining differences to higher-order terms [60], which we have not addressed here for simplicity and which we expect to become increasingly relevant under stronger driving. As we have shown, our results from direct simulation, from machine learning, and from the analytical approximations satisfy exact global Noether sum rules.

We have restricted our discussion to a single and relatively easily accessible type of nonequilibrium dynamics, namely stationary uniaxial compressional flow, which represents a model for steady (batch) sedimentation. The power functional approach allows one to go much further, including the treatment of viscoelasticity [57], as arising from superadiabatic memory, deconfinement under shear [58], the dynamic decay of the van Hove pair correlation function as governed by drag, viscous and structural forces [69, 70], and the complex forms of both flow and structural forces that arise under spatially complex forms of driving [60]. Time-dependent uniaxial flow is relevant in a variety of situations, including colloidal stratification [119, 120] and sedimentation [121].

Although power functional theory operates on the one-body level of dynamical correlation functions, two-body correlation functions are accessible both formally via the nonequilibrium Ornstein–Zernike route [43] and explicitly via the dynamical test particle limit. The latter is the dynamic generalization of Percus' static test particle limit [62], which identifies two-point correlation functions, such as g(r) [recently shown to be intimately related to thermal Noether invariance at second order [51]], with one-body density profiles in an external potential; the external potential is thereby set equal to the interparticle pair potential.

The dynamical test particle limit goes further in that it describes the test particle via its own dynamical degrees of freedom, which are coupled to those of all other particles in the system. The concept was originally formulated as an approximation within DDFT [63, 64] and formally exactly within power functional theory [67]. Two-body superadiabatic effects were shown via simulation work to be significant [68–70], and they arise naturally in an exact formulation of the test particle dynamics [67]. The test particle limit allowed for a rationalization of the dynamical pair structure, as e.g. experimentally observed in two-dimensional colloids [10]. Recently an approach to DDFT based on the two-body level was formulated [122], and earlier work addressed the construction of dynamical density functional theories from exactly solvable limits [123]; we recall that [41, 42] provide exhaustive overviews. In event-driven BD simulations superadiabatic forces were shown to consist of drag, viscous, and structural contributions [69, 70]; see [43] for an extended discussion. The physics of active particles [71–75] is very significantly governed by a vigorous interplay between superadiabatic and adiabatic forces, both of which are very strong, as the tendency of these systems to self-compress leads naturally to very high local densities.

Furthermore, relevant and interesting microscopic models that go beyond the simple fluid paradigm of a pair potential, such as the monatomic water model by Molinero and Moore [124, 125] and the three-body gel by Saw et al [126, 127], are accessible. Despite the complexity of both its defining Hamiltonian and the intricate transient network structure, the inhomogeneous viscous response of the three-body gel was recently demonstrated [61] to be surprisingly well captured by a simple power functional flow approximation. We finally recall that superadiabatic effects transcend overdamped dynamics, and are relevant both in quantum dynamics [43, 81, 82] and in classical molecular dynamics [43, 79, 80].

While we have restricted ourselves to discussing the point of view of functional relationships, it would be interesting to explore in future work possible cross connections to other theoretical approaches, such as Onsager's variational principle for soft matter [128–131], stochastic thermodynamics [132], large deviation theory [133, 134], mode-coupling theory [135, 136], generalized hydrodynamics [137], local molecular field theory for nonequilibrium systems [138], as well as to the physics of nonequilibrium phase transitions [139], Brownian solitons [140], crystal dynamics [141–144], and non-isothermal situations [145]. Implications of the machine-learning methodology are summarized at the end of the appendix.

Acknowledgments

This work is supported by the German Research Foundation (DFG) via Projects No. 436306241 and No. 447925252.

Data availability statement

The data cannot be made publicly available upon publication because they are not available in a format that is sufficiently accessible or reusable by other researchers. The data that support the findings of this study are available upon reasonable request from the authors.

Appendix: Simulation and training details

Appendix. Initialization

The nonequilibrium many-body physics that we investigate falls into the class of temporal initial value problems. This holds true both on the full many-body (phase space) level, as accessible via the simulations or formally via the Smoluchowski equation [43], and on the reduced one-body level of the temporal kinematic fields, i.e. the time-dependent density profile and current distribution. Evidently, one-body initial data need to be available in order to start the time evolution according to either the approximate DDFT (1) or the formally exact power functional equation of motion (14). In typical applications of one of these frameworks to a dynamical problem, the system is taken to be in an equilibrium state at a starting time t0, such that the current vanishes and the density profile is known. Standard ways [146] to specify an initial state include choosing a bulk state, possibly applying (simple) external fields, and letting the system relax therein. The DFT point of view allows for more general initializations, as one can choose an equilibrium system with an arbitrary spatially inhomogeneous density profile $\rho(\mathbf{r},t_0)$ as the starting point.

We recall from section 1 that for an equilibrium system with given interparticle interactions, knowledge of the density profile [in the present case $\rho(\mathbf{r},t_0)$] is sufficient to formally exactly determine all static thermal properties. This fact [4, 150] constitutes one of the major virtues of static DFT. Leaving representability issues aside, prescribing a (physically sensible) form of $\rho(\mathbf{r},t_0)$ is feasible, as one can picture it as being generated by an appropriate form of corresponding external potential $V_{\mathrm{ext}}(\mathbf{r},t_0)$, where t0 acts as a mere label to specify the initial state. As laid out above in section 1, the description formally requires access to the free energy density functional $F[\rho]$, which implicitly contains the full static information about the thermal physics of the system.

In particular, the equilibrium many-body probability distribution function $\Psi(\mathbf{r}^N,t_0)$ is uniquely determined, for given interparticle interactions and given knowledge of the density profile [3, 4, 150]. (In the notation we have dropped the dependence on momenta, which is trivial in the present situation.) One pictures the system as having been in the same equilibrium state at all prior times $t < t_0$ and as having undergone time evolution in this quiescent state up to t0. As an aside, the equilibrium dynamics can then be characterized for bulk liquids on the two-body level by the van Hove dynamical correlation function [3, 63–70, 147–149], with much recent progress from the power functional point of view [69, 70].

Obtaining a statistical description requires, in principle, averaging over the initial distribution of microstates. In the context of many-body simulations, in practice this necessitates carrying out a sufficient number of independent realizations of the time evolution under consideration. Representative studies relied on e.g. $10^4$ realizations for the transient dynamics of hard spheres under a temporally switched shear field [57], and on $2\times 10^6$ realizations [80] for investigating superadiabatic acceleration effects that occur in molecular dynamics. There are also special cases, such as test particle concepts that allow efficient sampling of the bulk van Hove function via building moving averages [69, 70]. We also point out work [151, 152] which addresses the initial state dependence in the context of quantum mechanical time-dependent density functional theory.

Appendix. Steady states

In the present model situation of uniaxial compressional steady flow (we recall its graphical illustration in figure 1) there is no need to temporally resolve the one-body fields, as these are invariant in time. Hence we proceed in the standard way of replacing the average over an ensemble of initial states with a temporal average over a single trajectory of sufficiently long duration. Recalling the details given in section 7, we average over time evolutions, each with duration 1000τ, with $\tau = \sigma^2/D_0$ denoting the Brownian time scale for self-diffusion with diffusion constant $D_0 = k_BT/\gamma$, where γ is the friction constant against the background. The strategy of identifying temporal and ensemble averages relies on having an ergodic system whose time evolution indeed explores the entirety of the ensemble. As we deal with a liquid in the present illustrative case, we expect ergodicity to hold.

In order to validate this expectation, we have investigated whether the steady state results depend on the initial state of the simulation; see figure 3 for illustrations of the four different (crystalline and disordered) microstates chosen. We recall that we evolve the system over a waiting time of 100τ before starting to sample the one-body correlation functions. The resulting steady state profiles, see figure 3, bear no traces of the different initializations, and the data of each of the four runs collapse onto each other. As this behaviour is already observed when starting from individual differing microstates, we expect no changes if we were to start with a representative sample of, say $10^4$, microstates in order to numerically approximate an entire distribution.

Appendix. Training procedure

The results shown in figure 3 also serve to illustrate our training procedure in more detail. We use randomized forms of the external force field $f_{\mathrm{ext}}(x)$ by superimposing Fourier modes that are compatible with the box size L: $f_{\mathrm{ext}}(x) = \sum_{n = 0}^{n_{\mathrm{max}}} A_n \cos(2\pi n x/L + B_n)$, cut off at $n_{\mathrm{max}} = 4$. The amplitudes An and phases Bn are generated randomly within a cutoff, which (i) frees the specific training protocol from manual choices and (ii) removes any further bias, thus easing the interpretation of the quality of the machine learning predictions. Our currently adopted random training strategy is suited to investigating issues of generality, universality and transferability, both of the underlying mathematical structure of power functional theory and of its presently proposed, specifically tailored implementation via supervised machine learning. Nevertheless, as our supervised learning procedure is general, one could well tune it for specific applications and rather train on the basis of situations that are close to the eventual use of the network; see e.g. [111] for a corresponding enlightening study.
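
As a minimal sketch of this randomization step (in Python), the snippet below generates one such external force profile on a grid. The amplitude cutoff amp_cut and the uniform distributions for $A_n$ and $B_n$ are illustrative assumptions; only the Fourier form and the truncation at $n_{\mathrm{max}} = 4$ are taken from the text.

```python
import numpy as np

rng = np.random.default_rng()

def random_external_force(x, L, n_max=4, amp_cut=1.0):
    """Random superposition of Fourier modes compatible with box length L:
    f_ext(x) = sum_{n=0}^{n_max} A_n cos(2 pi n x / L + B_n)."""
    A = rng.uniform(-amp_cut, amp_cut, size=n_max + 1)   # random amplitudes
    B = rng.uniform(0.0, 2.0 * np.pi, size=n_max + 1)    # random phases
    n = np.arange(n_max + 1)
    phases = 2.0 * np.pi * np.outer(n, x) / L + B[:, None]
    return (A[:, None] * np.cos(phases)).sum(axis=0)

# One training protocol on a grid of 200 bins:
# x = np.linspace(0.0, L, 200, endpoint=False)
# f_ext = random_external_force(x, L)
```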

Figure 3. Three representative cases (systems 1, 2, and 3) of nonequilibrium steady states under external driving, as used for training the neural network. The results are obtained from adaptive BD simulations under the influence of a temporally static external force field $f_{\mathrm{ext}}(x)$ (top row) that consists of a random superposition of spatial Fourier modes. The zeroth Fourier mode represents a constant force offset, which either vanishes (system 1) or drives the system out of equilibrium (systems 2 and 3). The steady states are characterized by spatially and temporally constant currents $J(x) = J_0 = \mathrm{const}$, with $J_0 = 0$ in equilibrium (system 1) and $J_0 > 0$ under drive (systems 2 and 3). The density profile $\rho(x)$ (second row) is temporally constant and spatially inhomogeneous, as induced by the action of the external force field. The inhomogeneous velocity profile follows simply as $v(x) = J_0/\rho(x)$ (third row). For each of the systems 1, 2 and 3, the results for the respective steady state profiles $\rho(x), v(x)$, and the interparticle force field $f_{\mathrm{int}}(x)$ (fourth row) are independent of the type of initialization, as demonstrated by the results from the four differently initialized simulation runs (indicated by differently coloured symbols) lying on top of each other. Here each system is alternatively initialized via one of four protocols to select the initial microstate: a simple cubic (green) or close-packed (yellow) crystal, an equilibrated bulk liquid state (violet), or an inhomogeneous slab-like confined liquid (blue). Representative simulation snapshots in steady state ('final states' after 1000τ) for system 2 illustrate the spatial structure of the flowing liquid; no imprints of the initialization can be perceived. The machine learning is based on training data $\rho(x)$ and v(x) [or, alternatively to the latter, J(x) or $\partial v(x)/\partial x$] that lead to the target interparticle force field $f_{\mathrm{int}}(x)$ for each given training system (1000 in total) in the supervised learning. While we here solely illustrate the training procedure, we show in figures 2, 4, and 5 how the trained model can be used to both predict and design nonequilibrium steady states.


Our supervision protocol operates on the level of the simulation output, organized according to the (functional) dependencies between the sampled histograms that represent the one-body distributions. All three involved functions are obtained from straightforward sampling in adaptive BD. The density profile $\rho(x)$ is obtained from the standard counting method. (We recall reduced-variance sampling techniques such as force sampling [115–117], which could help in acquiring numerical training data more efficiently.)

The current distribution J(x) can be sampled via the force balance equation or, alternatively, via a temporally centered derivative of the particle trajectories. Here the position-resolved histogram is filled with the displacement vector of each particle between the position in the next and in the previous time step; see the appendix of [52] for an in-depth description. The velocity profile then follows straightforwardly from $v(x) = J(x)/\rho(x)$.

Finally, the interparticle force density $F_{\mathrm{int}}(x)$ is sampled in an analogous way, on the basis of a position-resolved histogram that accumulates the instantaneous interparticle force acting on each given particle. The interparticle force field is then obtained by simple normalization with the density profile, $f_{\mathrm{int}}(x) = F_{\mathrm{int}}(x)/\rho(x)$.
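
The following minimal sketch (in Python) summarizes this sampling scheme for a single sweep; the bin layout, normalization and handling of periodic images are illustrative assumptions, and the adaptive BD propagation itself is not shown.

```python
import numpy as np

def accumulate(x_now, x_prev, x_next, f_int_now, L, nbins, dt, acc):
    """One sampling sweep: bin particles by their current x-position and
    accumulate counts (for rho), the temporally centred displacement (for J)
    and the instantaneous interparticle force (for F_int)."""
    idx = np.floor((x_now % L) / L * nbins).astype(int)
    disp = x_next - x_prev
    disp -= L * np.rint(disp / L)                     # undo periodic wrapping
    acc["counts"] += np.bincount(idx, minlength=nbins)
    acc["current"] += np.bincount(idx, weights=disp / (2.0 * dt), minlength=nbins)
    acc["force"] += np.bincount(idx, weights=f_int_now, minlength=nbins)

def normalize(acc, L, nbins, n_sweeps, cross_section):
    """Turn the accumulators into rho(x), J(x), v(x) and f_int(x)."""
    bin_vol = (L / nbins) * cross_section             # volume of one x-slab
    rho = acc["counts"] / (n_sweeps * bin_vol)
    J = acc["current"] / (n_sweeps * bin_vol)
    f_int = np.divide(acc["force"], acc["counts"],
                      out=np.zeros(nbins), where=acc["counts"] > 0)
    return rho, J, J / np.maximum(rho, 1e-12), f_int

# acc = {key: np.zeros(nbins) for key in ("counts", "current", "force")}
```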

From a data science point of view, and possibly even when working in a physics-informed way, one might use the information in all three fields $\rho(x)$, v(x) and $f_{\mathrm{int}}(x)$ to analyze and make predictions of the dynamical behaviour of the system. However, our present approach is very specific and leaves no choice in the general setup of the supervised learning. We have $\rho(x)$ and v(x) as inputs and $f_{\mathrm{int}}(x)$ as the output or target of the neural network; we recall the description in section 4 of the functional relationships that apply to the nonequilibrium physics. As a concrete illustration we show three representative training data sets in figure 3. We find the success of the learning procedure to be robust against changes of the (even simple) network topology and of the choice of hyperparameters. We attribute this robustness to the fact that the mapping is formally exact, as summarized in the main text and reviewed in detail in [43].
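
A minimal sketch of one possible realization of this supervised setup is given below (in Python, using PyTorch). The plain fully connected topology, layer widths, activation and optimizer settings are illustrative assumptions rather than the specific choices behind the results shown here; only the input-output structure, $\rho(x)$ and v(x) in and $f_{\mathrm{int}}(x)$ out, is taken from the text.

```python
import torch
import torch.nn as nn

class KinematicMap(nn.Module):
    """Neural representation of f_int*(x, [rho, v]) on a grid of nbins points."""
    def __init__(self, nbins, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * nbins, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, nbins),
        )

    def forward(self, rho, v):
        # concatenate the discretized density and velocity profiles
        return self.net(torch.cat([rho, v], dim=-1))

def train(model, rho_batch, v_batch, f_int_batch, epochs=500, lr=1e-3):
    """Supervised regression of the sampled interparticle force profiles."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(rho_batch, v_batch), f_int_batch)
        loss.backward()
        opt.step()
    return model
```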

Appendix. Universality and inverse design

The functional relationship $f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v])$ thus obtained does not merely interpolate between the training situations; rather, it captures the genuinely correlated nature of the statistical physics under consideration. We recall the target cosine-shaped density wave (presented in figure 2 and replotted in the first row of figure 4 as a reference). This situation was not itself part of the training but is certainly close in character. As a further demonstration we apply the network to a triangular density wave (second row in figure 4) and also to a sawtooth-like density wave (third row in figure 4), which are further removed from the situations encountered during training. The excellent match with the direct simulation results for the density profile and the current distribution validates our approach.

Figure 4. Demonstration of the machine-learned kinematic force map and its use in the design of nonequilibrium steady states. Shown are three different target shapes of the density profile $\rho(x)$ (left column): cosine wave (top row, results are identical to those shown in figure 2), triangle wave (second row), and sawtooth-like wave (third row). For each of the three density waves the current profile (second column) is $J(x) = J_0 = \mathrm{const}$, and we prescribe alternative values $J_0 \sigma^2\tau = 0$ (equilibrium), 1, 2, 3, 4, and 5 (all nonequilibrium). Our aim is to obtain, via our trained machine learning model, the specific form of $f_{\mathrm{ext}}(x)$ that generates the target shapes of $\rho(x)$ and J(x). That this procedure is indeed successful is validated with BD simulations, as shown by the symbols: both $\rho(x)$ and J(x) are reproduced to high accuracy. Each of the three target density profiles $\rho(x)$ (lines) is strikingly matched by the simulation results (symbols); the data for the external force profile $f_{\mathrm{ext}}(x)$ (right column) coincide by construction, as the output from machine learning is taken as the input force field in the many-body simulations that are carried out to assess the accuracy of the design procedure.


We finally return to the DDFT and to the purpose of the present Perspective, namely discussing its virtues and shortcomings. As a final example we present, see figure 5, its application to the sawtooth state. Shown is the external force profile, as obtained from the machine-learned power functional via the instant custom flow equation (29), which we reproduce for convenience:

$f_{\mathrm{ext}}(x) = \gamma v(x) - f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v]) + k_BT\, \dfrac{\partial \ln\rho(x)}{\partial x}.$    (A1)

In the present case this constitutes a (numerical) quasi-exact solution; we recall the excellent agreement of the resulting kinematic profiles, shown in figure 4, with the direct simulation results. We can now easily compare this to the adiabatic DDFT prediction, as the equilibrium states were part of our training protocol (cases of vanishing zeroth mode of the external force field). We can straightforwardly implement this on the level of the neural network by simply considering no flow, i.e. setting the velocity profile $v(x) = 0$, which yields a quasi-exact representation of the adiabatic force; we recall the perfect agreement shown in figure 4 for the cases with no flow, where $J_0 = 0$ (dark purple symbols and lines). Hence within DDFT the instant custom flow equation (A1) reduces to the following simple form:

$f_{\mathrm{ext}}(x) = \gamma v(x) - f_{\mathrm{ad}}^{\,\,\star}(x,[\rho]) + k_BT\, \dfrac{\partial \ln\rho(x)}{\partial x}.$    (A2)

Here the machine-learned adiabatic force field is obtained simply by evaluating the full network at vanishing velocity, i.e. $f_{\mathrm{ad}}^{\,\,\star}(x,[\rho]) = f_{\mathrm{int}}^{\,\,\star}(x,[\rho,v = 0])$. The comparison shown in figure 5 indicates qualitatively correct behaviour of the DDFT, but quantitative errors of magnitude $\epsilon/\sigma$.
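
A minimal sketch of this design step is given below (in Python). It assumes that the one-body force balance takes the standard overdamped form $\gamma v(x) = -k_BT\,\partial_x \ln\rho(x) + f_{\mathrm{int}}(x) + f_{\mathrm{ext}}(x)$, consistent with equations (A1) and (A2) above, and it treats the trained model simply as a callable on the target profiles; all names are illustrative.

```python
import numpy as np

def design_external_force(rho, v, f_int_star, kT, gamma, dx, adiabatic=False):
    """Instant custom flow: solve the one-body force balance for f_ext(x)
    given the target rho(x) and v(x) = J0/rho(x).  With adiabatic=True the
    network is evaluated at vanishing velocity, which yields the DDFT
    variant (A2) instead of the full power functional result (A1)."""
    ideal = kT * np.gradient(np.log(rho), dx)      # -f_id = kT d(ln rho)/dx
    v_input = np.zeros_like(v) if adiabatic else v
    return gamma * v - f_int_star(rho, v_input) + ideal

# Hypothetical usage for the sawtooth target of figure 5:
# f_ext_pft  = design_external_force(rho_target, J0 / rho_target, model, kT, gamma, dx)
# f_ext_ddft = design_external_force(rho_target, J0 / rho_target, model, kT, gamma, dx,
#                                    adiabatic=True)
```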

Figure 5. Shape of the external force field $f_{\mathrm{ext}}(x)$ (upper panel) that stabilizes the target nonequilibrium sawtooth density profile $\rho(x)$ (lower panel) with current strength $J_0 \sigma^2\tau = 5$ (as also shown in the third row of figure 4). The force field is generated either from power functional theory via equation (A1) or from the machine-learned DDFT via equation (A2). While capturing the correct shape of the external force profile, the DDFT produces in this case a quantitative error of size $\epsilon/\sigma$ between the maximal and minimal force. The respective results for the density profiles are obtained from direct simulations. The power functional design meets the target profile in a quasi-exact way, whereas quantitative deviations occur for the DDFT, in particular near the cusps of the wave.


The fact that these effects are quantitatively small should not lead one to conclude that the physics is irrelevant. First, as laid out above, the planar uniaxial geometry is intrinsically favorable for the DDFT. Secondly, we have chosen moderate values of temperature and density to ease our current pilot study of the use of machine learning. This renders the situation relatively simple. However, as emphasized above, it is a priori very difficult to assess whether or not the DDFT will be sufficient to obtain a reliable estimate of the real time evolution. For a very recent demonstration of quantitatively large superadiabatic (viscous and structural) forces, we point the reader to the study by Sammüller et al [61] of the nonequilibrium dynamics of a three-body colloidal gel former [126, 127].

Appendix. Implications and related work

We take the quantitative success and computational ease of applying supervised machine learning to the formal functional dependencies of nonequilibrium many-body flow, as presently considered in a simple uniaxial flow geometry, as an incentive to summarize several possible connections that could be explored in future work. Our approach fits into the broader picture of coarse-graining many-body systems out of equilibrium [44], and it is very specific in terms of the input and output variables of the neural network. The supervised machine learning method gives the interparticle force field directly, in contrast to approaches that are based on learning the excess free energy functional [109–112], which then upon differentiation according to equation (10) give the (adiabatic approximation for the) interparticle force density. DDFT was also used in learning the physics of pattern formation from images [153], and for importance sampling in adaptive multiscale simulations; see [42, 154] for a list of several further examples.

In the terminology of multiscale simulation methods for soft matter systems [155], in our method we learn the characteristics of a fine-grained model (chosen as the LJ fluid in the present model study) and, while not strictly obtaining a coarse-grained model, are able to reduce the fine-grained information systematically to the one-body level in a microscopically resolved way. It would be interesting to see whether our approach is useful in the context of adaptive simulation techniques [156158] as applied e.g. to coupling boundaries of open systems [159].

We would need to go beyond the presently considered steady states to address memory effects, which are relevant both in classical [69, 70, 160, 161] and in quantum systems [151, 152]. Given the recent interest in force-based quantum-mechanical density functional theory [162, 163], we envisage that, apart from quantum power functional theory [43, 49, 81, 82], our study could also prove inspirational for other force-based approaches to quantum dynamics [164–167].
