Topical Review

Hydrodynamics of electrons in graphene

Andrew Lucas and Kin Chung Fong

Published 5 January 2018 © 2018 IOP Publishing Ltd
Citation: Andrew Lucas and Kin Chung Fong 2018 J. Phys.: Condens. Matter 30 053001. DOI: 10.1088/1361-648X/aaa274


Abstract

Generic interacting many-body quantum systems are believed to behave as classical fluids on long time and length scales. Due to rapid progress in growing exceptionally pure crystals, we are now able to experimentally observe this collective motion of electrons in solid-state systems, including graphene. We present a review of recent progress in understanding the hydrodynamic limit of electronic motion in graphene, written for physicists from diverse communities. We begin by discussing the 'phase diagram' of graphene, and the inevitable presence of impurities and phonons in experimental systems. We derive hydrodynamics, both from a phenomenological perspective and using kinetic theory. We then describe how hydrodynamic electron flow is visible in electronic transport measurements. Although we focus on graphene in this review, the broader framework naturally generalizes to other materials. We assume only basic knowledge of condensed matter physics, and no prior knowledge of hydrodynamics.


1. Introduction

The study of the hydrodynamic behavior of electrons in solid-state systems is attracting a diverse community of physicists from a broad variety of backgrounds. It is important to have a clear and pedagogical introduction to the fundamentals of electronic hydrodynamics, emphasizing the particular challenges required to observe this regime in experiment. This topical review aims to fill this gap in the literature. To make this review accessible to physicists from diverse fields including condensed matter physics, high energy physics, hydrodynamics and plasma physics, we assume minimal knowledge of solid-state physics, at the level of [1], and no knowledge of fluid dynamics.

We have chosen to focus on electronic hydrodynamics in a particular material: graphene. Graphene is both a particularly rich playground for electronic hydrodynamics and well suited for connecting theory with experimental data due to the simplicity of its band structure. As we will discuss, hydrodynamics is a universal description of interacting, thermalizing physical systems. Thus, many of the topics discussed here will immediately be relevant for other materials as well. However, the onset of the hydrodynamic limit of electron fluids exhibits material-specific peculiarities too. The challenge of electron hydrodynamics is to tease out the universal physics from non-universal, often material-specific phenomena. We hope that our focus on graphene, with brief forays into other materials, conveys this theme throughout the review.

1.1. 'Relativistic' electrons in graphene

We begin with a brief introduction to monolayer graphene. Graphene is a honeycomb lattice of carbon atoms in two spatial dimensions: see figure 1. A simple calculation, shown in section 2.1, reveals that the quasiparticles in the honeycomb lattice have a 'relativistic' dispersion relation [2]:

Equation (1): $\epsilon_{\pm}({\bf k}) = \pm \hbar v_{{\rm F}} \vert {\bf k}\vert$

with $v_{{\rm F}}$ the Fermi velocity of graphene. The dispersion relation (1), describing massless fermions, was confirmed experimentally in a pair of seminal papers [3, 4].


Figure 1. The honeycomb lattice in two spatial dimensions. The red and blue atoms both represent carbon in graphene—the colors denote the bipartite sublattices.


The electrons in graphene are charged particles, and interact with one another via the standard electrostatic Coulomb force. How strong are these interactions? A useful way to answer this question is to compute the dimensionless parameter

Equation (2): $\alpha \equiv \dfrac{e^2/4\pi\epsilon r}{\hbar v_{{\rm F}}/r} = \dfrac{e^2}{4\pi\epsilon\hbar v_{{\rm F}}}$

We have denoted r as the typical 'distance' between electrons, and used (1), together with the estimate $k\sim 1/r$ for the typical quasiparticle wavenumber. For realistic samples of graphene, we should take $v_{{\rm F}} \approx 1.1\times 10^6$ m s−1, and $ \newcommand{\e}{{\rm e}} 1\lesssim \epsilon/\epsilon_0 \lesssim 5$ [3, 4]. It is useful to rewrite (2) as

Equation (3): $\alpha = \dfrac{\epsilon_0}{\epsilon}\,\dfrac{c}{v_{{\rm F}}}\,\alpha_{{\rm QED}} \approx \dfrac{2}{\epsilon/\epsilon_0}$

Here $c\approx 3\times 10^8$ m s−1 is the speed of light, and $\alpha_{{\rm QED}}$ is the bare coupling strength of quantum electrodynamics (QED) in 3  +  1 spacetime dimensions. Unlike QED, the Coulomb interactions between electrons in graphene are not always weak.
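To make this estimate concrete, here is a short numerical sketch of (3), using the values of $v_{{\rm F}}$ and $\epsilon/\epsilon_0$ quoted above; the function name graphene_alpha is ours, not from the source.

```python
# Estimate the effective fine-structure constant of graphene,
# alpha = (epsilon_0/epsilon) * (c/v_F) * alpha_QED.
# Values are the ones quoted in the text; the dielectric constant
# epsilon depends on the substrate.
c = 3.0e8          # speed of light, m/s
v_F = 1.1e6        # Fermi velocity of graphene, m/s
alpha_QED = 1 / 137.036

def graphene_alpha(eps_rel):
    """Coupling constant for relative permittivity eps_rel = epsilon/epsilon_0."""
    return (c / v_F) * alpha_QED / eps_rel

# For 1 <= epsilon/epsilon_0 <= 5, alpha ranges from ~2 down to ~0.4:
for eps_rel in (1.0, 5.0):
    print(f"epsilon/epsilon_0 = {eps_rel}: alpha ~ {graphene_alpha(eps_rel):.2f}")
```

Even the smallest of these values is far larger than $\alpha_{{\rm QED}}$, which is the sense in which Coulomb interactions in graphene are not weak.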

The effective theory of the interacting electrons in graphene depends dramatically on the number density n of electrons. We will always measure this density relative to the Dirac point, or charge neutrality point, where the Fermi energy in (1) is 0. When n is large (relative to the density of thermal excitations), graphene behaves as a conventional Fermi liquid, albeit one in two spatial dimensions. In this Fermi liquid, one can find long-lived quasiparticles and understand the dynamics of the electrons via the gas dynamics of the quasiparticles. When n  =  0, there is no Fermi surface. The resulting theory of the electron fluid is—at reasonable temperatures T—strongly interacting; we call this state the Dirac fluid. We will discuss these regimes in section 3.

1.2. Hydrodynamics of quantum systems

At finite temperature, both the Fermi liquid and the Dirac fluid are interacting many-body quantum systems. The dimension of the many-body Hilbert space grows exponentially quickly with system size. A direct solution to the many-body dynamics problem is intractable. Nonetheless, there are many fundamental open questions. For example: how does the microscopic Hamiltonian affect the measurable properties of a (finite temperature) quantum system, on short and long time scales? What does it even mean for a closed quantum system to thermalize?

In some respects, these questions have very similar classical counterparts. A macroscopic body of water consists of  $\sim10^{23}$ molecules, strongly interacting with each other. Nonetheless, we have a reasonable understanding of the dynamics of water (at least in simple settings). The reason is that we only care about the dynamics of water on long length scales. The only degrees of freedom which we can reasonably measure are the conserved quantities: the number of molecules, and their energy and momentum. The resulting effective theory is called hydrodynamics [5].

Because the assumptions of our theory of statistical mechanics do not depend in a fundamental way on whether the microscopic degrees of freedom are classical or quantum, it is natural to postulate that an interacting many-body quantum system also has a hydrodynamic description, on long length scales and at finite temperature [6]. The resulting equations of motion are classical differential equations, describing the rearrangement of the conserved quantities in space and time. They are valid on time scales $t \gg \tau_{{\rm ee}}$ , the electron–electron interaction time, and length scales $ \newcommand{\e}{{\rm e}} \ell \gg \ell_{{\rm ee}}$ , the mean free path for electron–electron collisions. These classical equations can be derived by symmetry considerations and thermodynamic postulates alone. We will do so in section 4. While it may not be obvious how to define $\tau_{{\rm ee}}$ and $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}}$ in a quantum theory, we will provide a partial answer in section 5. These hydrodynamic descriptions have been used to describe spin waves [7] and superfluids [8] for decades; this review describes the new systems where the electrons themselves behave as a charged fluid, and how the motion of this charged fluid leads to novel electronic phenomena.

Let us emphasize from the start that there is a confusing but popular jargon that the hydrodynamics of superfluids is 'quantum hydrodynamics'. A superfluid is a quantum state which spontaneously breaks a global symmetry. There are gapless modes, Goldstone bosons, associated with this broken symmetry, and the hydrodynamics of these Goldstone bosons is called 'quantum'. There is a long literature of both theory and experiment on superfluid hydrodynamics [8]. In this review, we focus on non-superfluid quantum systems. We again emphasize that the hydrodynamics of these systems is classical, in spite of the quantum dynamics on the smallest length scales.

When $t\lesssim \tau_{{\rm ee}}$ and $ \newcommand{\e}{{\rm e}} \ell \lesssim \ell_{{\rm ee}}$ , a simple hydrodynamic description of many-body dynamics fails. If quasiparticles are long-lived, and these quasiparticles interact weakly with one another, then one can instead build a 'quantum' kinetic description of the dynamics [9]. A kinetic description is valid on length scales where quasiparticles can be approximated as point-like: $ \newcommand{\e}{{\rm e}} \ell \gg \lambda_{{\rm F}}$ , where $\lambda_{{\rm F}}$ is the Fermi wavelength of quasiparticles. In section 5, we will discuss the kinetic theory approach to many-body dynamics.

So far, our discussion of hydrodynamics has made no reference to a specific quantum system. Indeed, in the absence of disorder, we believe that interacting many-body quantum systems will generically exhibit a hydrodynamic limit. Experiments have uncovered evidence for hydrodynamics of finite temperature quantum systems in a broad variety of settings, including cold atomic gases [10] and the quark-gluon plasma [11]. So it may seem surprising that despite the long history of solid-state physics, we have only relatively recently found experimental evidence for electronic hydrodynamics in GaAs [12, 13], graphene [14–16], ${\rm PdCoO}_2$ [17] and ${\rm WP}_2$ [18]. The main challenge for observing the hydrodynamics of the electronic fluid alone is that metals contain impurities and phonons. The scattering of electrons off of impurities and phonons destroys the collective hydrodynamic flow of the electrons alone. Hence, to see hydrodynamic electron flow, one must ensure that the electron–electron scattering, which occurs at a rate

Equation (4): $\dfrac{1}{\tau_{{\rm ee}}} \lesssim \dfrac{k_{{\rm B}}T}{\hbar} \sim (0.1\; {\rm ps}){\hspace{0pt}}^{-1} \times \dfrac{T}{100\; {\rm K}}$

in graphene, occurs sufficiently quickly. This scattering rate may seem quite fast; however, electron-impurity scattering rates in typical metals can easily be $(0.01 \; {\rm ps}){\hspace{0pt}}^{-1}$ . Not surprisingly then, a key prerequisite for observing electronic hydrodynamics in graphene has been the growth of exceptionally clean crystals. Indeed, the possibility of hydrodynamic electron flow was realized in the 1960s [19], but was mostly forgotten because at the time it was not feasible to see experimentally.

Hydrodynamics thus provides a partial answer to the question of how a (quantum) system reaches global thermal equilibrium. The hydrodynamic limit exists after the system has decohered, yet it does tell us directly about the long time response of a quantum system. As we will see, hydrodynamics tells us that certain response functions have a universal functional form, but will not tell us how to compute overall prefactors. Despite these limitations, it is still valuable to understand the hydrodynamic limit well. The hydrodynamic regime is precisely when the collective nature of the many-body dynamics becomes most pronounced. The usual regime that we study in condensed matter physics is the opposite: when the dynamics of electrons can be understood in a non-interacting approximation [1]. A complete solution of the many-body dynamics problem will necessarily involve an understanding of both limits.

1.3. The challenge of strange metals and non-quasiparticle physics

For historical reasons, our conventional theory of condensed matter physics relies almost entirely on the assumption that electronic correlations can be neglected [1]. In most 'typical' metals, this assumption is sensible. However, we have known for a long time that there are non-Fermi liquid 'strange metals' with strongly correlated electrons [20], which cannot be described in our conventional framework. There are no quasiparticles in these systems, and it is likely that they must be described through a many-body framework, non-perturbative in interaction strength. Hydrodynamics and gauge-gravity duality [21] are two such descriptions, yet only the former is directly relevant to experimental systems at present. The universality of hydrodynamics is further appealing, because experiments suggest universal behavior of many non-Fermi liquids. For example: why do so many of these strange metals have a very high electrical resistivity ρ, which scales linearly with temperature T [22]?

As we will see in sections 6 and 7, the electrical resistivity ρ (and transport phenomena more broadly) depend critically on the nature of electronic scattering. Hence, properly accounting for electron–electron scattering is crucial. Furthermore, because of the strong electronic correlations, the apparent mean free path $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}}$ can become comparable to interatomic distances. In graphene, $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}} \sim v_{{\rm F}}\tau_{{\rm ee}} \sim 100$ nm. As hydrodynamic electron flow in graphene has been observed, it is not crazy to postulate that hydrodynamic phenomena could be relevant for understanding the behavior of correlated electrons in strange metals. Indeed, in recent years there have been various proposals that the ubiquitous observation of $\rho \propto T$ has a hydrodynamic origin [23–26].

As we will see in section 7, the consequences of hydrodynamic flow on electronic transport can be quite subtle in unconventional non-Fermi liquid phases. As argued in [25, 26], if typical strange metals are in a hydrodynamic regime, it may be a rather unconventional one. Thus, the Dirac fluid of graphene is an important experimental test case: it is a non-Fermi liquid that we understand (at least qualitatively) theoretically. As we will show, conventional electrical transport experiments are not usually sufficient for discovering collective hydrodynamic motion of electrons in the Dirac fluid (or other exotic hydrodynamic regimes). We will propose ways of detecting hydrodynamic electron flow in the unconventional Dirac fluid in sections 7 and 8, but we emphasize that the search for crisp signatures of electronic hydrodynamics is an important open problem for both theory and experiment.

2. Graphene

With a broad view as to why the study of electronic dynamics in graphene is exciting, we now turn to a review of the physics of electrons and phonons in graphene. Many more details can be found in the reviews [27, 28]. In this section, we will assume the validity of the textbook theory, and not carefully treat electron–electron interactions. In the Fermi liquid in graphene, many (but not all) experiments can be understood in such a limit.

2.1. Band structure of the honeycomb lattice

Neglecting electron–electron interactions, phonons and impurities, the mobile electrons in graphene are approximately described by a tight-binding model. For the moment, we ignore electronic spin. Letting $c_i^\dagger$ and ci be creation and annihilation operators for a fermion on site i of the honeycomb lattice, shown in figure 1, the Hamiltonian is

Equation (5): $H_0 = -t\sum_{\langle ij\rangle}\left(c_i^\dagger c_j + c_j^\dagger c_i\right)$

with $t\approx2.8$ eV [27], and the sum over $\langle ij\rangle$ running over nearest neighbor sites. For completeness, we remind the reader that

Equation (6): $\{c_i, c_j\} = 0, \qquad \{c_i, c_j^\dagger\} = \delta_{ij}$

The displacement between two atoms in the honeycomb lattice is given by $\pm a {\bf e}_1$ or $\pm a{\bf e}_2$ or $\pm a {\bf e}_3$ , where

Equation (7a): ${\bf e}_1 = \left(1,\; 0\right)$

Equation (7b): ${\bf e}_2 = \left(-\tfrac{1}{2},\; \tfrac{\sqrt{3}}{2}\right)$

Equation (7c): ${\bf e}_3 = \left(-\tfrac{1}{2},\; -\tfrac{\sqrt{3}}{2}\right)$

and $a\approx 0.14$ nm is the spacing between carbon atoms in the honeycomb lattice. The honeycomb lattice is not a Bravais lattice. The unit cell consists of one atom on each sublattice (denoted with red versus blue in figure 1), and the unit cells form a triangular lattice (which is a Bravais lattice) with unit vectors $a ({\bf e}_1 - {\bf e}_2)$ and $a ({\bf e}_1 - {\bf e}_3)$ . The reciprocal lattice consists of vectors ${\bf k}$ for which

Equation (8a): ${\bf k}\cdot a({\bf e}_1 - {\bf e}_2) = 2\pi m_1$

Equation (8b): ${\bf k}\cdot a({\bf e}_1 - {\bf e}_3) = 2\pi m_2$

for m1 and m2 integers. One finds

Equation (9): ${\bf k} = \dfrac{2\pi}{3a}\left(m_1 + m_2,\; \sqrt{3}\,(m_2 - m_1)\right)$
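As a sanity check, the reciprocal lattice vectors can be computed numerically from the conditions (8a) and (8b). The sketch below assumes one conventional orientation for ${\bf e}_1$, ${\bf e}_2$, ${\bf e}_3$ (the explicit components are our choice of convention):

```python
import numpy as np

# Numerical check of the honeycomb reciprocal lattice. The nearest-neighbor
# unit vectors e_1, e_2, e_3 below are one standard orientation (an assumed
# convention). The Bravais vectors a_1 = a(e_1 - e_2), a_2 = a(e_1 - e_3)
# span the triangular lattice, and the reciprocal vectors b_i are defined
# by b_i . a_j = 2*pi*delta_ij.
a = 0.142e-9  # carbon-carbon spacing, m
e1 = np.array([1.0, 0.0])
e2 = np.array([-0.5,  np.sqrt(3) / 2])
e3 = np.array([-0.5, -np.sqrt(3) / 2])

a1 = a * (e1 - e2)
a2 = a * (e1 - e3)

# Solve for (b1, b2) from the 2x2 linear system A @ B.T = 2*pi*I.
A = np.array([a1, a2])
B = 2 * np.pi * np.linalg.inv(A).T
b1, b2 = B

assert np.allclose(A @ B.T, 2 * np.pi * np.eye(2))
print("b1 =", b1)
print("b2 =", b2)
```

The assertion verifies the defining property $b_i\cdot a_j = 2\pi\delta_{ij}$ rather than any particular component values, so it holds for any orientation convention.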

It is well known how to find the eigenvalues of such a Hamiltonian [1]. For (5), this was first done in [2], and a pedagogical discussion can be found in [29]. We write

Equation (10): $c_{{\bf k}A} = \sum_{i \in A} {\rm e}^{-{\rm i}{\bf k}\cdot {\bf x}_i}\, c_i$

with the index A denoting one of the two sublattices (red versus blue). (5) becomes

Equation (11): $H_0 = -t \int_{{\rm BZ}} \dfrac{{\rm d}^2{\bf k}}{(2\pi){\hspace{0pt}}^2} \left[\left(\sum_{n=1}^{3} {\rm e}^{{\rm i}a{\bf k}\cdot{\bf e}_n}\right) c_{{\bf k}A}^\dagger c_{{\bf k}B} + {\rm h.c.}\right]$

with the wave number integral over the Brillouin zone only. The eigenvalues of the global Hamiltonian H0 are simply given by the energies of all occupied electronic states. The energy levels of the electronic states are given by

Equation (12): $\epsilon_{\pm}({\bf k}) = \pm t\left\vert \sum_{n=1}^{3} {\rm e}^{{\rm i}a{\bf k}\cdot{\bf e}_n}\right\vert$

Note that there is an additional degeneracy due to the electronic spin, which we have neglected.

Of interest to us will be the charge neutrality point, where there is exactly one electron per site. This means that exactly half of the energy levels will be occupied, and from the symmetry of (12) it is clear that the charge neutrality point has Fermi energy, or chemical potential, $\mu=0$ . The Fermi surface consists of points ${\bf k}$ , in the Brillouin zone, where (12) vanishes. There are exactly two such points:

Equation (13): ${\bf K}_{\pm} = \left(0,\; \pm\dfrac{4\pi}{3\sqrt{3}\,a}\right)$

All other ${\bf k}$ at which $ \newcommand{\e}{{\rm e}} \epsilon=0$ are related to these two by a reciprocal lattice vector given in (9). For $ \newcommand{\e}{{\rm e}} \epsilon \ll t$ , we may Taylor expand (12) and obtain (1) with

Equation (14): $v_{{\rm F}} = \dfrac{3ta}{2\hbar}$

This dispersion relation is identical to that of a massless Dirac fermion, with Hamiltonian

Equation (15): $H = v_{{\rm F}}\left(\sigma^x p_x + \sigma^y p_y\right)$

where $\sigma^x$ and $\sigma^y$ are the usual Pauli matrices. For energies $ \newcommand{\e}{{\rm e}} \epsilon \ll t$ , one may just as well use this Hamiltonian. Of course, because there are two inequivalent Dirac points, as well as two electronic spin degrees of freedom, there are a total of N  =  4 Dirac fermions in this effective description of graphene.

The absence of a gap is related to the symmetry between the red and blue sublattices in figure 1. There are perturbations to graphene that can break this symmetry, such as uniaxial strain, which can open a gap [30]. This is equivalent to perturbing (15) with a term $\sigma^z\Delta$ , with Δ the energy of the gap. Indeed, there are many microscopic Hamiltonians which can give rise to emergent massless Dirac fermions; such points are generically symmetry-protected.

The energy scale at which the curvature of the band structure becomes significant is quite high. In units of temperature:

Equation (16): $\dfrac{t}{k_{{\rm B}}} \approx 3\times 10^4\; {\rm K}$

For practical purposes, we can think of the electrons in graphene as quasirelativistic.
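The statements of this section can be checked numerically. The sketch below assumes the standard tight-binding band $\epsilon_{\pm}({\bf k}) = \pm t\vert\sum_n {\rm e}^{{\rm i}a{\bf k}\cdot{\bf e}_n}\vert$ and one conventional choice of nearest-neighbor vectors (our assumed orientation); it verifies that the band vanishes at a Dirac point and that the slope there reproduces $v_{{\rm F}} = 3ta/2\hbar$, of order $10^6$ m s−1:

```python
import numpy as np

# Verify Dirac-point properties of the tight-binding band
# epsilon(k) = +/- t |sum_n exp(i a k.e_n)|, with a standard (assumed)
# choice of nearest-neighbor unit vectors.
hbar = 1.0545718e-34   # J s
eV = 1.602176634e-19   # J
t = 2.8 * eV           # hopping energy
a = 0.142e-9           # carbon-carbon spacing, m

e_n = [np.array([1.0, 0.0]),
       np.array([-0.5,  np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]

def eps(k):
    """Upper-band energy (joules) at wavevector k (2-array, units 1/m)."""
    return t * abs(sum(np.exp(1j * a * np.dot(k, e)) for e in e_n))

# In this orientation the Dirac point sits at K = (0, 4*pi/(3*sqrt(3)*a)),
# where the band energy vanishes ...
K = np.array([0.0, 4 * np.pi / (3 * np.sqrt(3) * a)])
print("eps(K)/t =", eps(K) / t)   # ~ 0

# ... and the slope near K gives the Fermi velocity v_F = 3*t*a/(2*hbar):
dk = 1e6  # small wavevector offset, 1/m
v_F_numeric = eps(K + np.array([dk, 0.0])) / (hbar * dk)
v_F_formula = 3 * t * a / (2 * hbar)
print(f"v_F ~ {v_F_numeric:.3e} m/s (numeric), {v_F_formula:.3e} m/s (formula)")
```

With $t = 2.8$ eV this gives $v_{{\rm F}} \approx 0.9\times 10^6$ m s−1, close to the measured $1.1\times 10^6$ m s−1 quoted in section 1.1.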

2.2. Charge puddles

The electronic properties of any material, including graphene, will be strongly affected by the phonons and impurities/imperfections which are always present in any experimentally realized system. Even for 'defect-free' graphene at low temperatures with negligible phonon excitations, impurities modify electronic transport. In graphene, the dominant source of impurities is believed to be charged impurities, located out of the plane of graphene, in the substrates on which a monolayer of graphene is inevitably placed [31–33]. These impurities pin spatial fluctuations in the Fermi energy, or chemical potential. Near the Dirac point, this impurity-induced fluctuation can be larger than the average chemical potential. The charge density will then be positive in some regions of space, while negative in others; the resulting regions are often called charge puddles [34]. Both the magnitude of these fluctuating potentials, and their spatial size, can be mapped using scanning probes [35–37]. In figure 2(a), the scanning tunneling microscope shows that these disorder puddles can lead to local fluctuations of 56 meV in the chemical potential, over a length scale  ∼10 nm, when graphene is put on top of a silicon dioxide (${\rm SiO}_2$ ) surface. In units of temperature, this is  ∼650 K, and it implies that until the electronic temperature is larger than 650 K, experiments will measure the physics of doped graphene, and not charge neutral graphene.
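The conversion quoted above is simply $E = k_{{\rm B}}T$; a one-line check:

```python
# Convert the puddle-induced chemical-potential fluctuation on SiO2
# (~56 meV, from the text) into a temperature scale via E = k_B * T.
k_B = 8.617333e-5   # Boltzmann constant, eV/K
delta_mu = 0.056    # eV
print(f"T ~ {delta_mu / k_B:.0f} K")   # ~650 K
```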


Figure 2. Spatial maps of the charge puddles measured with scanning probe microscopy for graphene on (a) silicon dioxide and (b) hexagonal-boron nitride. Scale bars are 10 nm. Reprinted by permission from Macmillan Publishers Ltd: Nature Materials [35], Copyright 2011.


A major improvement comes from suspending the graphene monolayer [38], or replacing the amorphous ${\rm SiO}_2$ by planar hexagonal-boron nitride (hBN) [39]. As the electronic wave function is mostly confined to the plane of the boron and nitrogen atoms, hBN is an exceptional dielectric with a band gap of  ∼6 eV. With a band structure remarkably similar to graphene, along with a comparable interatomic spacing, it provides a clean electrical environment which encapsulates the graphene without significantly modifying the band structure (1) in most cases (though see [40, 41]). Figure 2(b) shows that the size of the charge puddles increases to $\gtrsim$ 100 nm, while the chemical potential fluctuations are reduced to  ∼5 meV. Further improvement can come from a graphite back gate, which screens away any remote static potential.

We can also see the impact of charge puddles from electrical transport measurements [32]. In textbook transport theory [1], the electrical conductivity σ is proportional to the carrier density n in the presence of long-range impurity scattering. However, this linearity will break down when the carrier density is comparable to the density induced by the charge puddles. Schematically

Equation (17): $\sigma \propto \sqrt{n^2 + n_{{\rm imp}}^2}$

where $n_{{\rm imp}}$ is (very crudely) the local carrier density in the presence of charge puddles. For temperatures below $T_{{\rm imp}} = \hbar v_{{\rm F}} \sqrt{\pi n_{{\rm imp}}}/k_{{\rm B}}$ , the charge puddles are essentially themselves tiny patches of Fermi liquid, and we find that $n_{{\rm imp}}$ is approximately T-independent. For temperatures above $T_{{\rm imp}}$ , we observe $n_{{\rm imp}}$ to be an increasing function of T [15]. This signifies the presence of thermally excited electrons and holes near the Dirac point.
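For a rough sense of scale, one can evaluate $T_{{\rm imp}}$ numerically; the puddle density $n_{{\rm imp}} \sim 10^{10}$ cm−2 used below is our illustrative assumption for clean graphene on hBN, not a number taken from the text:

```python
import numpy as np

# Crossover temperature T_imp = hbar * v_F * sqrt(pi * n_imp) / k_B
# from the text. The puddle density n_imp ~ 1e10 cm^-2 is an assumed,
# representative value for graphene on hBN.
hbar = 1.0545718e-34   # J s
k_B = 1.380649e-23     # J/K
v_F = 1.1e6            # m/s
n_imp = 1e10 * 1e4     # carriers per m^2 (1e10 cm^-2)

T_imp = hbar * v_F * np.sqrt(np.pi * n_imp) / k_B
print(f"T_imp ~ {T_imp:.0f} K")
```

This gives $T_{{\rm imp}} \sim 150$ K for this assumed density, so the Dirac fluid regime is only accessible above roughly that temperature in such a sample.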

2.3. Phonons

While the electronic properties of graphene are limited by impurities at low temperatures, it is lattice vibrations, or phonons, that constrain its electronic mobility at higher temperatures [42–46]. In experiments, a clean graphene monolayer can reach a mobility of about $4\times 10^4$ cm2/V·s at a carrier density of $4.5 \times 10^{12}$ cm−2 [45], corresponding to a mean free path of almost 1 μm (see figure 3(a)). This mobility is higher than that of two-dimensional III–V semiconductor heterostructures [47, 48], and at high density it is limited only by scattering off acoustic phonons in the graphene lattice. In addition to acoustic phonons, the electrons in graphene can also be scattered by the surface optical phonons of the substrate, or by ripples/strain in the single atomic layer of graphene. The scattering rates of acoustic versus optical phonons can be disentangled by their carrier density and temperature dependence in the electrical conductivity (see figures 3(b) and (c)). For instance, the resistivity ρ due to longitudinal acoustic phonons is directly proportional to temperature ($\rho \sim T$ ), whereas $\rho(T)$ is highly non-linear when it is dominated by thermally activated surface phonons [43]. Unlike conventional metals, graphene has a small Fermi surface but a large Debye temperature (∼2800 K) due to the stiffness of the chemical bonds. The characteristic temperature of the electron–phonon coupling in graphene is thus set by the Bloch–Grüneisen temperature $T_{{\rm BG}}$ instead of the Debye temperature [49]. Below (above) $T_{{\rm BG}}$ , the phonon system is degenerate (non-degenerate), giving a different resistivity power law in temperature, i.e. $\rho\sim T^{4}$ ($\rho\sim T$ ), that can be measured in experiments.
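A rough estimate of $T_{{\rm BG}} = 2\hbar v_{{\rm s}} k_{{\rm F}}/k_{{\rm B}}$ at the density quoted above, taking a commonly used longitudinal-acoustic sound speed $v_{{\rm s}} \approx 2\times 10^4$ m s−1 (our assumption, not a value from the source):

```python
import numpy as np

# Bloch-Gruneisen temperature T_BG = 2*hbar*v_s*k_F/k_B, which separates
# the degenerate and non-degenerate phonon regimes discussed in the text.
# The sound speed v_s ~ 2e4 m/s for LA phonons is a commonly quoted
# (assumed) value.
hbar = 1.0545718e-34   # J s
k_B = 1.380649e-23     # J/K
v_s = 2.0e4            # m/s, LA phonon sound speed (assumed)

def T_BG(n_cm2):
    """Bloch-Gruneisen temperature for carrier density n in cm^-2."""
    k_F = np.sqrt(np.pi * n_cm2 * 1e4)   # Fermi wavevector, 1/m
    return 2 * hbar * v_s * k_F / k_B

# At the density 4.5e12 cm^-2 quoted in the text:
print(f"T_BG ~ {T_BG(4.5e12):.0f} K")   # ~ 115 K
```

Because $T_{{\rm BG}} \propto k_{{\rm F}} \propto \sqrt{n}$, the crossover between the $T^4$ and $T$ power laws is gate-tunable, which is what allows it to be mapped out experimentally.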


Figure 3. (a) Phonon-limited electronic mobility at high density and temperature. From [45]. Reprinted with permission from AAAS. (b) Resistivity after subtracting a temperature-independent component. Reprinted figure with permission from [44], Copyright 2008 by the American Physical Society. (c) Temperature dependent resistivity, demonstrating the contributions of various phonon scattering processes to the resistivity with gate voltages at 10, 15, 20, 30, 40, 50, 60 V from top to bottom (increasing carrier density). Reprinted by permission from Macmillan Publishers Ltd: Nature Nanotechnology [43], Copyright 2008. (d) Measuring the electron–phonon coupling in graphene by heat transfer. Reproduced from [61]. CC BY 3.0. The $T^3$ scaling of the heat transfer rate is a hallmark of supercollisions.


Electron–phonon coupling can also be studied using Raman spectroscopy [50, 51] and heat transfer [52–56]. Since the electron–phonon coupling is weak in graphene, it is possible for the electrons in graphene to maintain a higher temperature than the phonon background for some time. The resulting heat transfer process can be measured by the time response of a photocurrent [57] or in thermal conductivity measurements [58–61]. By measuring the dependence of the heat transfer rate on temperature, these experiments can identify the dominant energy relaxation process (see figure 3(d)), such as 'supercollisions' [62]: a three-body collision between an electron, phonon and impurity that leads to a characteristic $T^3$ scaling of the heat transfer rate. Heat transfer measurements also confirm the importance of the substrate as a source of optical phonons which strongly couple to the electrons in graphene at higher temperatures [63].

2.4. Is electron–electron scattering negligible?

Due to recent advances in pump-probe spectroscopy [64–68], we can rather directly measure the rate of electron–electron scattering in graphene, at least in the limit of high energy excitations. The experiments are usually performed by observing the transmission or reflection properties of the graphene sample with a probe beam shortly after a pump beam promotes interband absorption. Experiments find a very fast decay signal, typically on the order of 10–100 fs, followed by a slow decay, on the order of 10 ps. They are attributed to the electron–electron and electron–phonon interaction time, respectively. More precisely, the initially excited electron–hole pairs have a very high energy relative to the ambient temperature. On fs time scales, electron–electron scattering causes rapid thermalization of the electrons, while on ps time scales electron–phonon scattering brings the combined electron–phonon system to thermal equilibrium. Remarkably, the electronic fluid appears to thermalize so quickly that on all time scales accessible experimentally, the distribution of electrons is of the equilibrium Fermi–Dirac form. The temperature of the electrons (found from the fitted form of the distribution) lowers as energy is lost to the phonon bath over time. This is strong evidence that electronic interactions cannot be neglected in any study of the quantum many-body dynamics of graphene.

Because the Fermi temperature is much smaller in graphene than in conventional metals, the electron–electron scattering time is expected to be relatively fast even in the doped regime. Near the Dirac point, it is almost as fast as possible. The rest of this review is about the measurable consequences of electron–electron interactions.

3. Electronic 'phase diagram'

As we saw in section 2.4, electron–electron interactions can be quite fast in graphene, especially near the neutrality point. We must now include them in our model. The most important electron–electron interactions are simply Coulomb interactions:

Equation (18): $H = H_0 + \dfrac{e^2}{8\pi\epsilon}\sum_{i\neq j} \dfrac{\left(c_i^\dagger c_i - 1\right)\left(c_j^\dagger c_j - 1\right)}{\vert {\bf x}_i - {\bf x}_j\vert}$

with H0 given in (5). We emphasize that these Coulomb interactions have a 1/r tail, despite the fact that graphene is a 'two-dimensional metal', and in two dimensions Coulomb potentials are $\log r$ . This is because the Coulomb interactions are mediated by out-of-plane electromagnetism, in three spatial dimensions.

There are other types of interactions one could, in principle, include. For example, one could add a Hubbard interaction, penalizing electrons of opposite spin on the same lattice site [29]:

Equation (19): $H_U = U\sum_i n_{i\uparrow}\, n_{i\downarrow}$

This interaction arises from the same Coulomb interaction as (18), but between the orbitals on a single atom [69]. When U is very large, such interactions lead to an antiferromagnetic insulating phase [70–72]. Experimentally, graphene is observed to be a conductor—evidently, U is not large enough in real graphene. It may be possible to experimentally drive an insulating phase upon applying strain [73]. Our focus in this review is on the conducting limit of ordinary graphene.

3.1. Fermi liquid

When the Fermi energy is large compared to the temperature, graphene behaves like a 'conventional' two-dimensional metal with long-lived quasiparticles [74]. The standard arguments suggest the 'robustness' of this Fermi liquid [75, 76]: since electrons can only scatter into states very close to the Fermi surface, the scattering rate for electrons is

Equation (20): $\dfrac{1}{\tau_{{\rm ee}}} \sim \dfrac{(k_{{\rm B}}T){\hspace{0pt}}^2}{\hbar\mu}$

where μ is the chemical potential or Fermi energy. The thermodynamic functions will also be described by conventional Fermi liquid theory. We will discuss these results more quantitatively in section 5.
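For orientation, here is the order of magnitude implied by the Fermi-liquid scattering rate $\hbar/\tau_{{\rm ee}} \sim (k_{{\rm B}}T)^2/\mu$, at illustrative values $\mu = 100$ meV and $T = 100$ K (our choices, not numbers from the source), ignoring O(1) prefactors and logarithms:

```python
# Order-of-magnitude Fermi-liquid scattering time,
# tau_ee ~ hbar * mu / (k_B * T)^2, with illustrative (assumed)
# values of the chemical potential and temperature.
hbar = 6.582e-16    # reduced Planck constant, eV s
k_B = 8.617e-5      # Boltzmann constant, eV/K
mu = 0.1            # chemical potential, eV (assumed)
T = 100.0           # temperature, K (assumed)

tau_ee = hbar * mu / (k_B * T) ** 2
print(f"tau_ee ~ {tau_ee * 1e12:.2f} ps")
```

The result is of order a picosecond, which is slow compared to the Dirac fluid estimate of the next subsection but can still be faster than electron-impurity scattering in a clean sample.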

A generic Fermi liquid has superconducting instabilities at low temperature [76]. It is generally not possible to observe this in graphene due to the low density of states, made worse by the relativistic energy spectrum [77]. It may be possible to obtain (chiral) superconductivity at higher temperatures by doping graphene very far from the neutrality point [78, 79]. For a detailed review of possible superconducting instabilities in graphene, see [80]. At the end of the day, when thinking about hydrodynamics we may safely neglect any superconducting instabilities of the Fermi liquid in doped graphene due to (i) the extremely low $T_{{\rm c}}$ of any putative instability anywhere close to charge neutrality, and (ii) at very low T, the hydrodynamic description will break down as $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}} \gg \ell_{{\rm imp}}$ .

If graphene is an ordinary Fermi liquid at low temperature, albeit one in two spatial dimensions, why is graphene a good candidate material for observing the effects of electron–electron interactions? The high quality with which samples of graphene may be fabricated (section 2.2) and the lack of strong electron–phonon coupling (section 2.3) both allow us to make $ \newcommand{\e}{{\rm e}} \ell_{{\rm imp}}$ very large, even if $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}}$ is relatively 'large' itself. The simple quasi-relativistic band structure of graphene also has a straightforward hydrodynamic description, unlike more complicated Fermi surfaces [25, 26]. However, graphene is not the unique material with favorable properties, and indeed signatures of Fermi liquid electronic hydrodynamics have been observed in GaAs [13], ${\rm PdCoO}_2$ [17] and ${\rm WP}_2$ [18]. While certain quantitative simplicities of the relativistic hydrodynamic description described in section 4 will not apply to these materials, so long as there are no additional hydrodynamic degrees of freedom, much of the theory of section 6 will apply to these more complicated materials.

3.2. Dirac fluid

The reason why the Fermi liquid is weakly interacting is essentially the fact that the Fermi surface provides strong kinematic constraints on the possible scattering pathways. If we place the chemical potential in graphene at the neutrality point, then there is no longer a Fermi surface, and so this argument no longer applies. Furthermore, because the only low energy scale is T, dimensional analysis implies that

Equation (21): $\tau_{{\rm ee}} \sim \dfrac{1}{\alpha^2}\,\dfrac{\hbar}{k_{{\rm B}}T}$

Recall the definition of α in (2); the prefactor of $1/\alpha^2$ will be justified in section 5. As we discussed in (3), a naive estimate of α in graphene is quite large. Also, at low temperatures, this time scale grows much more slowly than (20). So we expect that electron–electron interactions ought to be much more important in charge-neutral graphene.
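Putting in numbers, with an illustrative renormalized coupling $\alpha \sim 0.5$ (our assumption; section 3.2.1 discusses how the effective coupling flows), the dimensional-analysis estimate gives a sub-picosecond scattering time and a mean free path of a few hundred nm, the same order as the $\ell_{{\rm ee}} \sim 100$ nm quoted in section 1.3:

```python
# Dirac-fluid estimate: tau_ee ~ hbar / (alpha^2 * k_B * T), and the
# corresponding mean free path l_ee ~ v_F * tau_ee. The effective
# coupling alpha ~ 0.5 is an assumed, illustrative value.
hbar = 6.582e-16    # reduced Planck constant, eV s
k_B = 8.617e-5      # Boltzmann constant, eV/K
v_F = 1.1e6         # Fermi velocity, m/s
alpha = 0.5         # effective coupling (assumed)
T = 100.0           # temperature, K

tau_ee = hbar / (alpha ** 2 * k_B * T)
l_ee = v_F * tau_ee
print(f"tau_ee ~ {tau_ee * 1e12:.2f} ps, l_ee ~ {l_ee * 1e9:.0f} nm")
```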

3.2.1. Renormalization group.

As is well known in quantum field theory, the fact that the bare 'coupling constant' α in graphene is quite large is not sufficient to ensure that the low energy effective theory of graphene is strongly coupled. Indeed, α is large in the Fermi liquid phase, and yet the large Fermi surface screens out the strong interactions. A more sophisticated renormalization group (RG) analysis [81] allows us to compute an effective value of α, $\alpha_{{\rm eff}}$ , at a given temperature T. This effective coupling accounts for both thermal and quantum fluctuations, and is the appropriate value of α to use in estimates such as (21).

This paragraph describes the RG analysis of the Dirac fluid; readers unfamiliar with this approach may wish to consult a textbook such as [82]. This analysis assumes that α is perturbatively small. Readers uninterested in the details can skip to (25) for the physical result. Let us consider the quantum field theory version of the Hamiltonian (18), which replaces the electron creation/annihilation operators $c^\dagger_i /c_i$ (defined on honeycomb lattice sites) by $N_{{\rm f}}=4$ Dirac fermions $\Psi^A({\bf x}, t)$ ($A=1, \ldots, N_{{\rm f}}$ ) in the spacetime continuum. One then writes down an effective action which depends on the energy scale $\tilde\mu$ above which our theory is (by construction) ill-defined:

Equation (22)

with $\overline{\Psi}^A = \Psi^{\dagger A}\gamma^t$ and $n = \Psi^{\dagger A} \Psi^A$ . $\gamma^t$ and $\gamma^i$ are Dirac's gamma matrices, and Z is a prefactor corresponding to the 'renormalization' of the quasiparticle weight. One integrates out quantum fluctuations of the fermion field Ψ with momentum $\vert {\bf k}\vert \geqslant \tilde\mu$ , and (schematically) looks for a $Z(\tilde\mu)$ , $\varepsilon(\tilde\mu)$ and $v_{{\rm F}}(\tilde\mu)$ such that the correlation functions $\langle \Psi^\dagger({\bf k}_1)\cdots \Psi({\bf k}_n)\rangle_{\vert {\bf k}_i\vert < \tilde\mu}$ are the same, whether one evaluates the correlation function using the effective action $S_{{\rm eff}}(\tilde\mu)$ or the effective action $S_{{\rm eff}}(\tilde\mu^\prime)$ , with $\tilde\mu^\prime > \tilde\mu$ . For the theory in (22), one finds that $Z(\tilde\mu)$ and $\varepsilon(\tilde\mu)$ are constants. The only parameter which varies is $v_{{\rm F}}(\tilde \mu)$ . It is common to write down a 'flow equation' for the effective Fermi velocity as a function of energy scale. At leading order in α one finds [83–86]

Equation (23)
$$\tilde\mu\,\frac{\partial v_{{\rm F}}}{\partial \tilde\mu} = -\frac{\alpha}{4}\,v_{{\rm F}}.$$

Using (2), it is more instructive to write this equation as

Equation (24)
$$\tilde\mu\,\frac{\partial \alpha}{\partial \tilde\mu} = \frac{\alpha^2}{4}.$$

We now assume that at a very high energy scale $\tilde\mu = k_{{\rm B}}T_\Lambda \sim 10^5$ K (the energy scale at which the dispersion relation in graphene ceases to be relativistic: see (16)), the coupling constant α is given by its 'bare' value $\alpha_0$ . Integrating this equation down to an energy scale $\tilde\mu=k_{{\rm B}}T$ , we find

Equation (25)
$$\alpha_{{\rm eff}}(T) = \frac{\alpha_0}{1+\dfrac{\alpha_0}{4}\ln\dfrac{T_\Lambda}{T}}.$$

We say that Coulomb interactions are marginally irrelevant because the dimensionless coupling constant is vanishing logarithmically fast at low temperature.

If $\alpha_{{\rm eff}}(T)\rightarrow 0$ as $T\rightarrow 0$ , as (25) implies, why did we emphasize in the introduction that electron–electron interactions could still play an important role in graphene? Suspended graphene has $\alpha_0 = 2.2$ ; graphene on substrates has $\alpha_0 \approx 0.8$ due to the dielectric constants of the substrates [39]. At a temperature of T  =  100 K, we estimate $\alpha_{{\rm eff}} \approx 0.46$ for suspended graphene, and $\alpha_{{\rm eff}} \approx 0.34$ for graphene on substrates. These coupling constants are not small, so one should not take (25) too seriously; ultimately, any coefficients (such as viscosity) which are very sensitive to the value of α should be treated as phenomenological fit parameters and experimentally measured.
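These numbers follow from integrating the one-loop flow. As a quick check, assuming the integrated flow takes the marginal form $\alpha_{{\rm eff}}(T) = \alpha_0/[1+(\alpha_0/4)\ln(T_\Lambda/T)]$ with $T_\Lambda \sim 10^5$ K (the 1/4 is the leading-order, scheme-dependent coefficient), a few lines of Python reproduce the quoted estimates:

```python
import math

# Numerical check of the quoted estimates, assuming the integrated one-loop
# flow alpha_eff(T) = alpha_0 / (1 + (alpha_0/4) * ln(T_Lambda / T)),
# with T_Lambda ~ 1e5 K.
T_Lambda = 1e5  # K

def alpha_eff(alpha0, T):
    return alpha0 / (1 + 0.25 * alpha0 * math.log(T_Lambda / T))

print(f"suspended graphene, T = 100 K: {alpha_eff(2.2, 100):.2f}")    # 0.46
print(f"graphene on substrate, T = 100 K: {alpha_eff(0.8, 100):.2f}") # 0.34
```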

Of course, the right hand side of (24) has corrections at ${\rm O}(\alpha^3)$ . Those corrections can be explicitly computed, and one finds [87, 88]

Equation (26)

where $\alpha_{{\rm c}} \approx 0.8$ . This suggests that the Dirac fluid may be unstable at strong coupling $\alpha>\alpha_{{\rm c}}$ . The proposed endpoint of such an instability is an excitonic insulator [87]. While early numerical studies suggested that the true ground state of charge-neutral graphene was not the Dirac fluid but an insulator [89, 90], experiments unambiguously show that charge-neutral graphene is a conductor. More sophisticated treatments of the RG [84, 91, 92], along with more recent numerical studies [93, 94], have confirmed that the Dirac fluid is not unstable at large values of α: (25) is qualitatively correct, although the precise numerical coefficients may be incorrect.

A direct physical prediction of the RG described above is that the effective Fermi velocity of the Dirac fermions in graphene is temperature dependent. Combining (2) and (25), we find

Equation (27)
$$v_{{\rm F, eff}}(T) = v_{{\rm F, 0}}\left[1+\frac{\alpha_0}{4}\ln\frac{T_\Lambda}{T}\right].$$

A similar prediction can be made for an effective density-dependent Fermi velocity at low densities [83]:

Equation (28)

The constants $v_{{\rm F, 0}}$ and $\tilde v_{{\rm F, 0}}$ can be different. Direct experimental evidence for this Fermi velocity renormalization as a function of density was observed in [95]: see figure 4. In this experiment, a magnetic field was applied to suspended graphene, and the resulting quantum oscillations in the conductivity were measured. Applying the standard quasiparticle-based theories [96], together with the proportionality between $v_{{\rm F, eff}}$ and the cyclotron frequency in graphene, [95] observed (28), up to slight modifications, with relatively good experimental agreement. Evidence for the enhancement of the Fermi velocity near the charge neutrality point was also observed using ARPES in [97].

Figure 4. An apparent logarithmic divergence in the effective Fermi velocity near the Dirac point in graphene, experimentally observed by measuring the decay rate of quantum oscillations. Reprinted by permission from Macmillan Publishers Ltd: Nature Physics [95], Copyright 2011.


Our results so far are summarized in figure 5. Graphene gives rise to an ordinary Fermi liquid when $\mu \gg k_{{\rm B}}T$ , and a (relatively) strongly coupled Dirac fluid when $\mu \ll k_{{\rm B}}T$ (up to logarithmic corrections). Electronic dynamics in ultrapure graphene is described by hydrodynamics across the phase diagram, but as we will see, the qualitative change in the interaction rate from (20) to (21) will lead to profound, and experimentally measurable, changes in the hydrodynamic response of the theory: as we will detail in section 5, the hydrodynamic coefficients scale very differently with temperature across this 'phase diagram'.

Figure 5. The 'phase diagram' of electronic dynamics in graphene. The blue regions denote the Fermi liquids, which can be either electron-like or hole-like. The green region denotes the Dirac fluid, an electron–hole plasma with relatively strong interactions. The boundary between blue and green regions denotes a crossover and is not sharp. We also depict the band structure in each part of the phase diagram: red denotes filled electronic states with negligible thermal fluctuations, and yellow denotes where thermal fluctuations are significant.


The qualitative phase diagram shown in figure 5 is reminiscent of the theory of quantum criticality [20]: at finite temperature, the interplay of thermal and quantum fluctuations leads to a very strongly interacting quantum system. In fact, there is a rather trivial quantum critical point between a hole-like Fermi liquid and an electron-like Fermi liquid at T  =  0.

4. Relativistic hydrodynamics

In this section we discuss quantitatively the hydrodynamics of a relativistic system. The approach is that of [5]. Here, we will assume relativistic invariance, and focus on the physics near the charge neutrality point; we justify the relativistic assumption in section 4.5.2. This limit requires us to interpret the hydrodynamics in a slightly different way [98] than is done in the astrophysics literature. In a typical relativistic plasma in astrophysics, one has a fluid of heavy ions coupled to a fluid of light electrons. As such, the number of ions and the number of electrons are separately conserved. This is not the case in graphene: there are processes which create electrons and holes, conserving only the net electric charge: see section 5.3 for a detailed discussion of this issue. So it will be important to consider a fluid which can be either positively or negatively charged, instead of a two-fluid model.

4.1. Thermodynamics

Before a discussion of hydrodynamics, it is important to understand the static backgrounds about which we will build our hydrodynamic theory. These are states in thermal equilibrium. So let us begin with some elementary thermodynamics. Consider a system in a d-dimensional region of volume V, with a conserved charge and energy. In graphene, d  =  2, but we might as well keep d general for now. The first law of thermodynamics states that

Equation (29)
$${\rm d}E = T{\rm d}S - P{\rm d}V + \mu\, {\rm d}N$$

where E is the total energy, T is the temperature, S the total entropy, μ the chemical potential, N  =  −Q/e with Q the net charge, P the pressure and V the volume of the sample (keep in mind that for graphene in d  =  2, this volume is physically interpreted as the surface area of the sample). Dividing through by ${\rm d}V$ and demanding extensivity we obtain the Gibbs–Duhem relation

Equation (30)
$$\epsilon + P = Ts + \mu n$$

with $\epsilon$ the energy density, n the (relative) number density, and s the entropy density. Now suppose that we work in a fixed area, so ${\rm d}V = 0$ , and ${\rm d}E = V{\rm d}\epsilon$ , ${\rm d}N = V{\rm d}n$ and ${\rm d}S = V{\rm d}s$ . Simple manipulations give:

Equation (31)
$${\rm d}P = s\,{\rm d}T + n\,{\rm d}\mu.$$

This thermodynamic identity implies that if we treat μ and T as our tuning parameters (as is useful for theoretical purposes), then the pressure P plays the role of our thermodynamic potential. All information about the thermodynamics of the fluid is contained in $P(\mu, T)$ .

Suppose that the only two energy scales in the problem are the temperature $k_{{\rm B}}T$ and the chemical potential μ. Dimensional analysis requires that

Equation (32)
$$P = \frac{(k_{{\rm B}}T)^{d+1}}{(\hbar v_{{\rm F}})^d}\,\mathcal{F}\!\left(\frac{\mu}{k_{{\rm B}}T}\right)$$

where $\mathcal{F}$ is an arbitrary function that obeys thermodynamic requirements such as $s\geqslant 0$ . In graphene, assuming the interaction strength α is small, one can compute the explicit form of $\mathcal{F}$ ; it is presented in (136), later in this review. Using (31) we find

Equation (33a)
$$n = \frac{\partial P}{\partial \mu} = \frac{(k_{{\rm B}}T)^{d}}{(\hbar v_{{\rm F}})^d}\,\mathcal{F}'\!\left(\frac{\mu}{k_{{\rm B}}T}\right)$$

Equation (33b)
$$s = \frac{\partial P}{\partial T} = \frac{k_{{\rm B}}(k_{{\rm B}}T)^{d}}{(\hbar v_{{\rm F}})^d}\left[(d+1)\,\mathcal{F}\!\left(\frac{\mu}{k_{{\rm B}}T}\right) - \frac{\mu}{k_{{\rm B}}T}\,\mathcal{F}'\!\left(\frac{\mu}{k_{{\rm B}}T}\right)\right]$$

Combining (30) and (33) we obtain

Equation (34)
$$\epsilon = d\,P.$$

This relation will prove to have important consequences.
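The chain from (32) to (34) can be verified symbolically. The sketch below assumes the scaling form $P = T^{d+1}\mathcal{F}(\mu/T)$ in units where $\hbar = k_{{\rm B}} = v_{{\rm F}} = 1$, and checks that the Gibbs–Duhem relation then forces $\epsilon = dP$ for an arbitrary function $\mathcal{F}$:

```python
import sympy as sp

# Symbolic check of (34): assume the scaling form (32), P = T**(d+1) * F(mu/T),
# in units hbar = kB = vF = 1, with F an arbitrary function. Then (31),
# n = dP/dmu and s = dP/dT, together with Gibbs-Duhem (30),
# eps + P = T*s + mu*n, force eps = d * P.
mu, T, d = sp.symbols('mu T d', positive=True)
F = sp.Function('F')

P = T**(d + 1) * F(mu / T)
n = sp.diff(P, mu)            # number density, from (31)
s = sp.diff(P, T)             # entropy density, from (31)
eps = T * s + mu * n - P      # energy density, from (30)

assert sp.simplify(eps - d * P) == 0
print("epsilon = d * P holds for arbitrary F")
```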

There are two points our discussion has overlooked. Firstly, due to the weak logarithmic temperature dependence of $\alpha(T)$ in the Dirac fluid, the thermodynamics of graphene may be slightly more complicated than the above. This is unlikely to be a qualitative effect, and so we will neglect it in what follows for simplicity. Secondly, in a sample of graphene with charge puddles, there is another scale $\mu_{{\rm rms}}$ : the root-mean-square fluctuations of the chemical potential. This implies that the thermodynamics described above is too simple. Further discussion of both points can be found in [99].

4.2. The gradient expansion

Our modern understanding of hydrodynamics is that it is the effective theory describing the dynamics of any many-body system relaxing to thermodynamic equilibrium [5, 6]. The thermodynamic equilibria we described above—with a slight generalization to allow for finite momentum density in an infinite volume—must then be static solutions to the hydrodynamic equations. Our key postulate is that on time scales long compared to the mean free time between electron–electron collisions, $\tau_{{\rm ee}}$ , and on length scales long compared to $\ell_{{\rm ee}} \equiv v_{{\rm F}}\tau_{{\rm ee}}$ , the only slow dynamics is associated with locally conserved quantities. For us, these will be charge, energy and momentum.

In the discussion that follows, it is convenient to choose units where $v_{{\rm F}} = 1$ . Factors of $v_{{\rm F}}$ can be restored with dimensional analysis. We will restore such factors explicitly whenever an important physical formula is found.

In non-relativistic notation, the conservation law for a local charge density is

Equation (35)
$$\partial_t n + \nabla\cdot{\bf J} = 0,$$

where ${\bf J}$ is a spatial charge current. In relativistic notation, we write

Equation (36)
$$\partial_\mu J^\mu = 0.$$

The conservation law for energy and momentum reads

Equation (37)
$$\partial_\mu T^{\mu\nu} = 0;$$

the stress-energy tensor $T^{\mu\nu}$ describes both densities and spatial currents of energy and momentum: $T^{tt}$ is the energy density, $T^{ti}$ is the energy current, $T^{it}$ is the momentum density, and $T^{ij}$ is the momentum current, more commonly known as the stress tensor. Relativistic invariance requires

Equation (38)
$$T^{\mu\nu} = T^{\nu\mu}.$$

(36) and (37) will form the basis for the hydrodynamic equations of motion. We now must find expressions for $J^\mu$ and $T^{\mu\nu}$ in terms of the slow degrees of freedom. These are the local 'charge' density

Equation (39)
$$n = n_{{\rm el}} - n_{{\rm hole}},$$

with $n_{{\rm el}}$ and $n_{{\rm hole}}$ the number densities of electrons and holes respectively, the local energy density $\epsilon({\bf x})$ , and the local momentum density ${\bf \Pi}({\bf x})$ . It is conventional in hydrodynamics not to solve for n, $\epsilon$ and ${\bf \Pi}$ directly. Instead, one solves for the thermodynamically conjugate variables: the local chemical potential μ, the local temperature T, and the relativistic velocity vector

Equation (40)
$$u^\mu = \frac{(1, {\bf v})}{\sqrt{1-{\bf v}^2}}.$$

Let us briefly remind the reader of relativistic index notation. In this review, indices are raised and lowered by multiplying by the matrix

Equation (41)
$$g_{\mu\nu} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

hence $u_t = -u^t$ and $u_i = u^i$ . Greek letters $\mu\nu$ will denote spacetime indices, while Latin letters ij will denote spatial indices. The simple identity

Equation (42)
$$u_\mu u^\mu = -1$$

follows from (40).

We now proceed along the lines of [98] to derive the hydrodynamic equations of motion. As we have previously stated, hydrodynamics is an effective theory. This means that one writes down the most general possible equations of motion consistent with basic principles such as symmetry, up to a given order in a small parameter 'δ'. In hydrodynamics, that small parameter is the ratio of the electronic mean free path to the size ξ of perturbations: $\delta = \ell_{{\rm ee}}/\xi$ . Alternatively, $\delta \sim \ell_{{\rm ee}}\partial$ , with derivatives understood to act on the slowly varying functions such as T or $u^\mu$ . We will then write down the most general ${\rm O}(\partial^n)$ expressions for $J^\mu$ and $T^{\mu\nu}$ , for n  =  0, 1. Finally, we assume a local second law of thermodynamics, following [98].

4.2.1. Zeroth order.

We begin at zeroth order in derivatives. Let us imagine that we have a fluid, exactly at rest, and in global thermodynamic equilibrium. We then know that the charge current is given by

Equation (43)
$$J^\mu = (n, {\bf 0}),$$

and the stress-energy tensor is given by

Equation (44)
$$T^{\mu\nu} = \begin{pmatrix} \epsilon & 0 & 0 \\ 0 & P & 0 \\ 0 & 0 & P \end{pmatrix}.$$

Here n is the charge density, $\epsilon$ is the energy density and P is the pressure. The reason that pressure corresponds to a momentum flux is that the force acting on a hard wall of area A is given by $P \times A$ : that is the momentum per unit time impacting the boundary. As we have shown in section 4.1, n, $\epsilon$ and P are not independent functions of μ and T.

It now remains to consider a fluid which is moving. This means that we must write (43) and (44) in terms of the vector $u^\mu$ (which reduces to $(1, {\bf 0})$ in the fluid rest frame), as well as the metric $g^{\mu\nu}$ . The resulting hydrodynamics will be Lorentz-covariant: the fluid velocity (set by the state of the system) breaks the full relativistic Lorentz symmetry by picking a preferred reference frame (where the fluid is at rest), even if the microscopic action is invariant under arbitrary Lorentz transformations. It is simple to accomplish this: one finds

Equation (45a)
$$J^\mu = n u^\mu,$$

Equation (45b)
$$T^{\mu\nu} = (\epsilon+P)u^\mu u^\nu + P g^{\mu\nu}.$$

At zeroth order we find an 'accidental' conservation law for entropy. Defining the entropy current $s^\mu \equiv su^\mu$ , with the entropy density $s = \partial P/\partial T$ , as given in (31), we claim that

Equation (46)
$$\partial_\mu \left(s u^\mu\right) = 0.$$

To prove this result, we note that for the zeroth order stress tensor given by (45),

Equation (47)
$$0 = u_\nu \partial_\mu T^{\mu\nu} = -\partial_\mu\left[(\epsilon+P)u^\mu\right] + u^\mu \partial_\mu P.$$

We have used (42) to simplify this result. Combining (47) with (30) and (31) we find

Equation (48)
$$0 = T\,\partial_\mu\left(su^\mu\right) + \mu\,\partial_\mu\left(nu^\mu\right).$$

From (45) and (36), the zeroth order charge conservation equation is $\partial_\mu (nu^\mu) = 0$ . Hence we obtain (46).

4.2.2. First order.

We now wish to go to first order in derivatives: this will require adding terms to $J^\mu$ and $T^{\mu\nu}$ that contain a single derivative: for example, $\partial_\mu T$ or $\partial_\nu u_\mu$ . However, there is an immediate subtlety that arises. Strictly speaking, we defined μ, T and $u^\mu$ from a thermodynamic perspective. What does it mean to discuss thermodynamic properties in the presence of spatial gradients which (as we will see) do not generally persist to infinite time? Because the only meaningful quantities within hydrodynamics are physical objects which are the expectation values of quantum operators, like $J^\mu$ or $T^{\mu\nu}$ , it is best to assert that μ, T and $u^\mu$ do not have any 'microscopic' definitions, and can be chosen at will. Importantly, the freedom to re-define our hydrodynamic degrees of freedom at first order in derivatives is not inconsistent with anything we have done so far. For example, suppose that we shift $T \rightarrow T + K u^\nu \partial_\nu \mu $ . The expression for the charge density $J^t$ will then be modified: $J^t \rightarrow n + (\partial_T n) K u^\nu \partial_\nu \mu + \cdots$ ; similar statements can be made for $T^{\mu\nu}$ . The crucial point is that the re-definition of T has led to first order corrections to $J^\mu$ and $T^{\mu\nu}$ , which are precisely what we still have to classify. So any field re-definitions that we make are compensated by a shift in the expressions for the conserved currents at higher orders in derivatives. This is called the freedom to choose a fluid frame. As we will see, it is most useful to pick a fluid frame where the equations

Equation (49a)
$$u_\mu J^\mu = -n,$$

Equation (49b)
$$u_\mu T^{\mu\nu} = -\epsilon\, u^\nu,$$

are exact to all orders in derivatives. This is called the Landau frame. Physically, we simply assert that the charge density n and energy density $\epsilon$ obey their thermodynamic relations to all orders in derivatives: $\mu(x^\mu)$ and $T(x^\mu)$ are locally defined by thermodynamic relations. The momentum density (and energy current, by Lorentz covariance) is proportional to the spatial components of the velocity, $u^i$ .

We are now ready to study the hydrodynamic gradient expansion to first order in derivatives. We write

Equation (50a)
$$J^\mu = n u^\mu + \widehat{J}^\mu,$$

Equation (50b)
$$T^{\mu\nu} = (\epsilon+P)u^\mu u^\nu + P g^{\mu\nu} + \widehat{T}^{\mu\nu},$$

with $u_\mu \widehat{J}^\mu = u_\mu \widehat{T}^{\mu\nu} = 0$ imposed by (49). We now make one more physical assumption: the existence of a local entropy current whose divergence is non-negative:

Equation (51)
$$\partial_\mu s^\mu \geqslant 0.$$

This is the statement that the second law of thermodynamics holds locally. At zeroth order in derivatives, $s^\mu = s u^\mu$ has already been defined. At first order in derivatives, this object does not have a non-negative divergence:

Equation (52)
$$\partial_\mu\left(s u^\mu\right) = \frac{\mu}{T}\,\partial_\mu \widehat{J}^\mu + \frac{1}{T}\,u_\nu \partial_\mu \widehat{T}^{\mu\nu}.$$

To obtain this result, we repeat the same steps as in (48), but now include the effects of the first order corrections $\widehat{J}^\mu$ and $\widehat{T}^{\mu\nu}$ to the constitutive relations, and note that

Equation (53)
$$u_\nu \partial_\mu \widehat{T}^{\mu\nu} = -\widehat{T}^{\mu\nu}\partial_\mu u_\nu;$$

this follows from the Landau frame choice. We now re-write (52) as

Equation (54)
$$\partial_\mu\left(s u^\mu - \frac{\mu}{T}\widehat{J}^\mu\right) = -\widehat{J}^\mu\, \partial_\mu \frac{\mu}{T} - \frac{1}{T}\widehat{T}^{\mu\nu}\partial_\mu u_\nu.$$

If we now define the entropy current at first order to be

Equation (55)
$$s^\mu = s u^\mu - \frac{\mu}{T}\widehat{J}^\mu,$$

then $\widehat{J}^\mu$ and $\widehat{T}^{\mu\nu}$ can be chosen in such a way as to make $\partial_\mu s^\mu \geqslant 0$ . Defining the projection tensor

Equation (56)
$$\mathcal{P}^{\mu\nu} = g^{\mu\nu} + u^\mu u^\nu,$$

we write

Equation (57a)
$$\widehat{J}^\mu = -\sigma_{\textsc{q}}\,T\,\mathcal{P}^{\mu\nu}\partial_\nu \frac{\mu}{T},$$

Equation (57b)
$$\widehat{T}^{\mu\nu} = -\eta\,\mathcal{P}^{\mu\rho}\mathcal{P}^{\nu\sigma}\left(\partial_\rho u_\sigma + \partial_\sigma u_\rho - \frac{2}{d}g_{\rho\sigma}\partial_\alpha u^\alpha\right) - \zeta\,\mathcal{P}^{\mu\nu}\partial_\alpha u^\alpha;$$

with the coefficients $\sigma_{\textsc{q}}$ , η and ζ all non-negative, we see that both (49) and $\partial_\mu s^\mu \geqslant 0$ are satisfied. The three new coefficients we have introduced are called the intrinsic electrical conductivity, the shear viscosity and the bulk viscosity respectively.

In this review, we will see the practical consequences of these dissipative coefficients for experiments. But let us mention from the outset one immediate consequence of this formalism: even in a charge-neutral plasma (n  =  0), it is possible to have $J^\mu \ne 0$ in the presence of a chemical potential gradient. The microscopic intuition for this is that a finite temperature plasma consists of positively and negatively charged 'particles'. Chemical potential gradients drive oppositely charged particles in opposite directions, leading to a charge current. We will return to this in sections 5.3 and 7.1.

In many papers on relativistic hydrodynamics, one works in a fluid frame where

Equation (58)
$$J^\mu = n u^\mu$$

is exact to all orders in derivatives. We have called this a 'historical' frame because conventionally one would define (i) the conserved charge density by the number density of a lone species of particles, and (ii) the velocity as an 'average' velocity of each individual particle, and so find (58) by construction: see the kinetic theory discussion in section 5.1.1. It is not a good choice for hydrodynamics in graphene, however. In graphene, it is quite natural to study the charge neutrality point where n  =  0. The charge currents do not vanish at the charge neutrality point, and this means that the velocity $u^\mu$ becomes singular at first order in derivatives. In contrast, within the Landau frame that we have described, the charge current can be finite even at points where $n=\mu=0$ , so long as $\nabla \mu \ne 0$ . What this frame choice means in practice is that the dissipative coefficient that we have called $\sigma_{\textsc{q}}$ is often called $\kappa_{\textsc{q}}$ in textbooks: one finds an energy current $\propto -\kappa_{\textsc{q}} \nabla T$ in the absence of charge flow.

4.2.3. External electromagnetic fields.

In many situations, we will be interested in the hydrodynamic equations in the presence of external electric and magnetic fields; such external perturbations are natural in the solid-state laboratory. We can combine these external electromagnetic fields into an antisymmetric tensor $F^{\mu\nu}$ . For example, in 2  +  1 spacetime dimensions (relevant for graphene):

Equation (59)
$$F^{\mu\nu} = \begin{pmatrix} 0 & E_x & E_y \\ -E_x & 0 & B \\ -E_y & -B & 0 \end{pmatrix}.$$

The hydrodynamic equations are modified in two ways under such perturbations. Firstly, energy and momentum are no longer conserved (for example, Joule heating occurs in a background electric field). This modifies (37) to

Equation (60)
$$\partial_\nu T^{\mu\nu} = F^{\mu\nu}J_\nu.$$

This equation can be derived on general grounds as a Ward identity [21], but we will assert it here without proof. This modifies the derivation of the entropy current in the previous subsection, and one finds that $\widehat{J}^\mu$ must be replaced by

Equation (61)
$$\widehat{J}^\mu = \sigma_{\textsc{q}}\,\mathcal{P}^{\mu\nu}\left(F_{\nu\rho}u^\rho - T\,\partial_\nu \frac{\mu}{T}\right).$$

Some of the dissipative charge current is due to the background fields. This can be understood as the statement that the fluid only cares about the total value of the electrochemical potential, which has contributions both from the external $F^{\mu\nu}$ and the internal μ. In fact, such reasoning can be used to fix how $F^{\mu\nu}$ modifies the hydrodynamic equations. Expanding around a stationary state with $u^\mu = (1, {\bf 0})$ , one must replace $\partial_\nu \mu \rightarrow \partial_\nu \mu - F^{\nu\rho}u_\rho$ .

Let us summarize what we have learned. The hydrodynamic equations of motion are conservation laws for charge, energy and momentum: $\partial_\mu J^\mu = 0$ and $\partial_\nu T^{\mu\nu} = F^{\mu\nu}J_\nu$ . The conventional hydrodynamic variables are μ, T and $u^\mu$ , and their values point-by-point are tied to the local values of the charge, energy and momentum densities, and we found that

Equation (62a)
$$J^\mu = n u^\mu + \sigma_{\textsc{q}}\,\mathcal{P}^{\mu\nu}\left(F_{\nu\rho}u^\rho - T\,\partial_\nu \frac{\mu}{T}\right),$$

Equation (62b)
$$T^{\mu\nu} = (\epsilon+P)u^\mu u^\nu + P g^{\mu\nu} - \eta\,\mathcal{P}^{\mu\rho}\mathcal{P}^{\nu\sigma}\left(\partial_\rho u_\sigma + \partial_\sigma u_\rho - \frac{2}{d}g_{\rho\sigma}\partial_\alpha u^\alpha\right) - \zeta\,\mathcal{P}^{\mu\nu}\partial_\alpha u^\alpha.$$

All coefficients in these equations are understood to be arbitrary up to the non-negativity of $\sigma_{\textsc{q}}$ , η and ζ, and the thermodynamic constraints on n, $\epsilon$ and P given in section 4.1. Together with the constraint $u^\mu u_\mu = -1$ , this forms a closed set of classical differential equations.

4.3. Hydrodynamic modes

To gain some intuition, let us solve the hydrodynamic equations in the linear response regime. Namely, suppose that we are very close to equilibrium, with $\mu = \mu_0$ and $T = T_0$ constants (with associated pressure $P_0$ , density $n_0$ , etc) and velocity $u^\mu_0 = (1, {\bf 0})$ . We now perturb

Equation (63a)
$$\mu = \mu_0 + \delta\mu,$$

Equation (63b)
$$T = T_0 + \delta T,$$

Equation (63c)
$$u^\mu = (1, \delta v_i),$$

and solve the hydrodynamic equations to linearized order in the δ variables. Note that the form of $u^\mu$ is restricted by (42). For convenience, we will also write

Equation (64)
$$\delta P = n_0\,\delta\mu + s_0\,\delta T;$$

similar relations hold for other thermodynamic variables.

At first order in the gradient expansion, and at first order in the δ variables, the linearized hydrodynamic equations become

Equation (65a)
$$\partial_t \delta n + n_0\,\partial_i \delta v_i = \sigma_{\textsc{q}0}\,\partial_i\partial_i\left(\delta\mu - \frac{\mu_0}{T_0}\delta T\right),$$

Equation (65b)
$$\partial_t \delta\epsilon + (\epsilon_0+P_0)\,\partial_i \delta v_i = 0,$$

Equation (65c)
$$(\epsilon_0+P_0)\,\partial_t \delta v_i + \partial_i \delta P = \eta_0\,\partial_j\partial_j \delta v_i + \left(\zeta_0 + \left(1-\frac{2}{d}\right)\eta_0\right)\partial_i\partial_j \delta v_j.$$

Note that only two of the thermodynamic variables above are independent. It is simplest to take $\delta n$ and $\delta P$ as the independent variables. From (34), we know that $\delta \epsilon = d\,\delta P$ .

Let us begin by setting all the dissipative coefficients to zero: $\sigma_{\textsc{q}0} = \eta_0 = \zeta_0 = 0$ . Combining the last two equations of (65) leads to

Equation (66)
$$\partial_t^2\,\delta P = \frac{v_{{\rm F}}^2}{d}\,\partial_i\partial_i\,\delta P.$$

This equation describes sound waves which travel at a universal speed

Equation (67)
$$v_{{\rm s}} = \frac{v_{{\rm F}}}{\sqrt{d}}.$$

Keep in mind that d  =  2 for graphene. Such waves are analogous to the 'cosmic sound' of an ultrarelativistic plasma in outer space [102]. (67) follows generally from any theory with the thermodynamics described in section 4.1 [103, 104]. The first equation of (65) describes no interesting dynamics. In particular, in the absence of sound waves, $\delta v_i = 0$ and hence $\partial_t \delta n = 0$ : density fluctuations (independent from $\delta P$ ) are frozen in place. Also frozen in place are divergenceless flows with $\partial_i \delta v_i = 0$ .
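The ideal sound mode can be checked directly. The sketch below assumes the linearized equations take the form $d\,\partial_t\delta P + (\epsilon_0+P_0)\partial_x\delta v = 0$ (energy conservation, using $\delta\epsilon = d\,\delta P$) and $(\epsilon_0+P_0)\partial_t\delta v + \partial_x\delta P = 0$ (momentum conservation), in units $v_{{\rm F}}=1$, and verifies that a plane wave with $\omega = k/\sqrt{d}$ solves both:

```python
import sympy as sp

# Check that omega = k/sqrt(d) solves the linearized ideal equations (vF = 1):
#   d * dt(dP) + w0 * dx(dv) = 0   (energy conservation, d_eps = d * dP)
#   w0 * dt(dv) + dx(dP) = 0       (momentum conservation)
# with w0 = eps0 + P0 the background enthalpy density.
x, t, k, w0, d = sp.symbols('x t k w0 d', positive=True)

omega = k / sp.sqrt(d)
phase = sp.exp(sp.I * (k * x - omega * t))
dP = phase                       # pressure fluctuation (unit amplitude)
dv = sp.sqrt(d) / w0 * phase     # velocity amplitude fixed by the momentum equation

energy = d * sp.diff(dP, t) + w0 * sp.diff(dv, x)
momentum = w0 * sp.diff(dv, t) + sp.diff(dP, x)

assert sp.simplify(energy) == 0
assert sp.simplify(momentum) == 0
print("omega = k/sqrt(d) is a solution: v_s = vF/sqrt(d)")
```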

Including the dissipative coefficients, one finds the following results after some algebra [103, 104]. Looking for solutions to (65) of the form ${\rm e}^{{\rm i}kx-{\rm i}\omega t}$ , we find three types of modes. First, the sound waves of the previous paragraph acquire a dispersion relation

Equation (68)

We have not written the ${\rm O}(k^3)$ terms because second order corrections to the gradient expansion will also contribute at this order, just as the viscous effects contributed to the dispersion relation at ${\rm O}(k^2)$ . Indeed, recall that the hydrodynamics that we have developed should always be understood to be valid in the limit $k\rightarrow 0$ . Modes which were frozen in place (transverse velocity fields and some charge fluctuations) now diffuse:

Equation (69a)
$$\omega = -{\rm i}\,\frac{\eta_0 v_{{\rm F}}^2}{\epsilon_0+P_0}\,k^2,$$

Equation (69b)
$$\omega = -{\rm i}\,\frac{\sigma_{\textsc{q}0}}{(\partial n/\partial\mu)_0}\,k^2.$$

We observe from (69a) that the viscosity is related to the diffusion constant for (transverse) momentum. Indeed, the kinematic viscosity

Equation (70)
$$\nu = \frac{\eta_0 v_{{\rm F}}^2}{\epsilon_0+P_0},$$

which has dimensions of $[{\rm length}]^2 / [{\rm time}]$ in any spatial dimension, is often the best way to compare how 'viscous' two fluids are relative to one another. We note that the precise definition of ν should always be taken as the diffusion constant for momentum in (69a); depending on whether one is studying relativistic hydrodynamics or not, the relationship between ν and η can change.
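For a sense of scale, a rough estimate for the electron fluid in graphene is $\nu \sim v_{{\rm F}}\ell_{{\rm ee}}$. With illustrative numbers (not fits to any particular device), this is enormous compared with ordinary fluids:

```python
# Order-of-magnitude kinematic viscosity of the graphene electron fluid,
# nu ~ vF * l_ee, compared with everyday fluids. All values illustrative.
v_F = 1.0e6        # m/s
l_ee = 100e-9      # m, a realistic electron-electron mean free path
nu_graphene = v_F * l_ee   # ~ 0.1 m^2/s

nu_water = 1.0e-6  # m^2/s, room temperature
nu_honey = 1.0e-2  # m^2/s, very rough

print(f"nu(graphene electrons) ~ {nu_graphene:.1e} m^2/s")
print(f"nu(graphene electrons) / nu(water) ~ {nu_graphene / nu_water:.0e}")
```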

At the charge neutrality point $n_0 = \mu_0 = 0$ , and the charge current $J^i$ is entirely carried by the diffusive mode (69b). Because electrical transport is the simplest experiment to perform, this implies that it is rather subtle to detect the hydrodynamics of the charge-neutral plasma in graphene, a point which we will return to in section 7. Away from the charge neutrality point, the charge density can fluctuate in a sound wave.

4.3.1. Nonlinear hydrodynamics in experiment?

The relativistic corrections to hydrodynamics can be safely treated in powers of the small parameter $v_{{\rm flow}} /v_{{\rm F}}$ , where $v_{{\rm flow}}$ is the value of the fluid velocity in a given setup. In graphene, typical flow velocities in simple experiments are of order $10^2$ m s$^{-1}$ [105] ($v_{{\rm flow}} \sim 10^{-4}v_{{\rm F}}$ ), although it is possible to reach $v_{{\rm flow}} > 0.1 v_{{\rm F}}$ [106, 107], especially closer to the Dirac point. Because we do not reach flows with $v\approx v_{{\rm F}}$ , it is generally a safe assumption to approximate the flow as non-relativistic, $v\ll v_{{\rm F}}$ . The relativistic dispersion relation of the electrons will nevertheless leave its imprint in the form of the gradient expansion.

However, non-relativistic hydrodynamics is a nonlinear theory as well, due to convective terms in the hydrodynamic equations. Of particular interest are the nonlinear corrections to the momentum conservation equation $\partial_\mu T^{\mu i} = 0$ , which (at quadratic order in velocity) read

Equation (71)
$$(\epsilon+P)\left(\partial_t v_i + v_j\partial_j v_i\right) + \partial_i P = \eta\,\partial_j\partial_j v_i + \cdots.$$

One quantifies how important the quadratic nonlinear term is, relative to the linear viscous term, by computing

Equation (72)
$$\mathcal{R} \equiv \frac{(\epsilon+P)\,v_{{\rm flow}}\,\ell_{{\rm flow}}}{\eta\, v_{{\rm F}}^2}.$$

$\mathcal{R}$ is a generalization of the Reynolds number from conventional hydrodynamics [5]. If $\mathcal{R} \ll 1$ , then the nonlinear terms can be neglected; if $\mathcal{R} \gg 1$ , then the nonlinear terms are important. It is instructive to estimate $\mathcal{R}$ as follows. As we will show explicitly in section 5, $\eta \sim \ell_{{\rm ee}} (\epsilon+P)/v_{{\rm F}}$ . Letting $\ell_{{\rm flow}}$ denote the length scale of the flow (for example, the size of a sheet of graphene):

Equation (73)
$$\mathcal{R} \sim \frac{v_{{\rm flow}}}{v_{{\rm F}}} \times \frac{\ell_{{\rm flow}}}{\ell_{{\rm ee}}}.$$

The first fraction above is small, but the second could be relatively large. Device sizes are typically not larger than 10 μm; using $\ell_{{\rm ee}} \gtrsim 100$ nm at realistic temperatures, we estimate that $\ell_{{\rm flow}} \lesssim 100\,\ell_{{\rm ee}}$ . As noted in [108], $\mathcal{R} \sim 10$ may be sufficient to observe hints of nonlinear hydrodynamics in graphene, although we will see in section 4.6.3 that disorder readily spoils this effect.
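The estimate (73) is simple enough to put numbers into directly; the values below are the optimistic ones quoted above:

```python
# Putting numbers into (73), R ~ (v_flow / vF) * (l_flow / l_ee), for the
# optimistic parameters quoted in the text.
v_ratio = 0.1       # v_flow / vF, achievable near the Dirac point [106, 107]
l_flow = 10e-6      # m, a typical maximal device size
l_ee = 100e-9       # m, electron-electron mean free path
R = v_ratio * (l_flow / l_ee)
print(f"R ~ {R:.0f}")  # ~ 10: nonlinear effects marginal but perhaps visible
```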

4.4. The Fermi liquid limit

Let us now discuss the limit $\mu \gg k_{{\rm B}}T$ , retaining the linearized approximation. As we will see in section 5, in this regime $\sigma_{\textsc{q}} \sim T^2 \eta/\mu^4$ is a small dissipative coefficient, and

Equation (74)
$$\epsilon_0 + P_0 \approx \mu_0 n_0.$$

Let us suppose that the background μ does not vary much from a constant value $\mu_0$ . In this limit, $\delta \mu$ and $\delta v_i$ dominate the hydrodynamic response of the fluid, and the thermal response $\delta T$ is suppressed by a power of $T/\mu_0$ . To confirm this assertion, we must compare the charge and energy conservation equations, which read

Equation (75a)
$$\partial_t \delta n + \partial_i\left(n\,\delta v_i\right) = 0,$$

Equation (75b)
$$\partial_t \delta\epsilon + \partial_i\left[(\epsilon+P)\,\delta v_i\right] = 0.$$

In the $T\rightarrow 0$ limit we have $P\sim \mu^{d+1}$, with d the number of spatial dimensions. This means that $n = (d+1)P/\mu$ and $\partial_\mu n = d\,n/\mu$. Together with (34), we find

Equation (76a)

Equation (76b)

If the background $\mu_0$ is constant (independent of ${\bf x}$ ), then clearly these two equations are identical at leading order. If the background μ varies by a small amount, the two equations are inequivalent, and thermal effects cannot be neglected (see section 7.2). Nevertheless, away from the charge neutrality point, it may be a reasonable assumption in graphene that the chemical potential is approximately homogeneous. We thus see that the hydrodynamic equations reduce to

Equation (77a)

Equation (77b)

In much of this paper, we will make the further assumption that fluid flows are static. This means that the time scale of an experimental measurement must be slow compared to the decay times of hydrodynamic modes. In practice, this decay time is often set by momentum-relaxing scattering (see section 4.6) and can be of order 1 ps [45], which is quite fast. If the background pressure and chemical potential are uniform, (77) further reduces to

Equation (78a)

Equation (78b)

These equations are identical to the time-independent hydrodynamic equations that one finds for a Galilean invariant fluid [5], upon neglecting thermal effects. We will discuss particular solutions of these equations in sections 4.6.2 and 6.

We emphasize that the inclusion of thermal effects differs between Galilean and Lorentz invariant fluids. In a Galilean invariant fluid, the charge current is proportional to the momentum density, while in a relativistic fluid the energy current is proportional to the momentum density. Thus, in a Galilean invariant fluid, there is a diffusive mode associated with energy fluctuations, while in a Lorentz invariant fluid, the diffusive mode is associated with charge fluctuations, as we have seen in section 4.3.

4.5. Long-range Coulomb interactions

One major oversight in our development thus far has been that we have neglected the long-range nature of the Coulomb interactions in graphene (and in many metals, more generally). The standard way [109, 110] to account for such long range interactions is to simply couple the fluid dynamical equations to Maxwell's equations 7 . This implies that the electric field Ei in (59) is given by

Equation (79)

with φ obeying Gauss' law in three spatial dimensions:

Equation (80)

Neglecting charge puddles, the immobile background ions imply that only fluctuations of the density contribute to the long-range Coulomb potential. Note that the third dimension is important—the physical space has three dimensions, even though the electrons in graphene are only mobile in two of them. The coupling constant α is analogous to a dielectric constant. We find

Equation (81)

We now combine this equation with the hydrodynamic equations (36) and (60) and constitutive relations (62) to obtain our linearized theory for the Coulomb-interacting Dirac fluid [99, 113]:

Equation (82a)

Equation (82b)

Equation (82c)

Due to (81), these equations are nonlocal.

We emphasize that $\delta \varphi$ enters (82) in a very special way. The spatial components of the charge and energy current are sensitive only to the gradient of the total electrochemical potential $\delta \mu + \delta \varphi$: a static fluid is only sensitive to the 'net' electric field. Hence, we conclude that for time-independent flows, long-range Coulomb interactions have no physical effect, because the experimentalist can only measure the total electrochemical potential. In contrast, the time-dependent thermodynamic response depends only on $\delta \mu$, not $\delta \varphi$. As we will soon see, this does lead to measurable consequences for dynamics.

A final subtlety is that the same parameter α governing the strength of Coulomb interactions also governs the thermodynamics of the electron fluid (see section 5.4). Implicit in (82) is the assumption that the Coulomb interaction splits into a collective long-range component and a short-range component responsible for hydrodynamic and thermodynamic phenomena [109, 110]; a careful check of this assumption is called for. This 'splitting' does occur in a conventional Fermi liquid [114].

4.5.1. Plasmon-like corrections to sound.

The simplest way to observe the consequences of Coulomb interactions is to study the dispersion relation of sound waves. Neglecting dissipation, this can be done analytically [102, 104]. Using that $\delta \varphi(k) = 2\pi \alpha \vert k\vert ^{-1} \delta n(k)$ in Fourier space, we find that the dispersion relation $\omega^2 = v_{{\rm s}}^2k^2$ becomes modified to

Equation (83)

As $k\rightarrow 0$, we therefore see that the dispersion relation of sound waves is severely altered. In fact, the dispersion relation we have found is analogous to the plasmon dispersion relation in graphene. The fact that plasmons disperse as $\omega \sim \sqrt{k}$ is a consequence of the electrons being mobile in two dimensions [115, 116], while the Coulomb potential exists in three dimensions 8 . Reference [117] is a recent review of plasmons in graphene; they were observed experimentally in [118, 119].
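The crossover between the two regimes can be illustrated numerically. The sketch below assumes, consistent with the $\vert k\vert$ -linear Coulomb correction to $\omega^2$ described above, the model form $\omega^2 = v_{{\rm s}}^2 k^2 (1 + k_*/\vert k\vert)$, where the crossover wavevector $k_*$ is an assumed, model-dependent scale:

```python
import math

# Modified sound dispersion: the Coulomb term delta_phi(k) = 2 pi alpha
# |k|^{-1} delta_n(k) adds a contribution linear in |k| to omega^2.
# Model form (k_star is an assumed crossover scale):
#   omega^2 = v_s^2 * k^2 * (1 + k_star / |k|)
v_s, k_star = 1.0, 1.0

def omega(k):
    return v_s * k * math.sqrt(1.0 + k_star / k)

# log-log slope of omega(k): 1/2 in the plasmon-like regime (k << k_star),
# 1 in the ordinary sound regime (k >> k_star)
slope_small = math.log(omega(2e-6) / omega(1e-6)) / math.log(2.0)
slope_large = math.log(omega(2e6) / omega(1e6)) / math.log(2.0)
print(round(slope_small, 2), round(slope_large, 2))
```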

We caution the reader that in the limit where $\omega \sim \sqrt{k}$, the dispersion relation has a form analogous to the conventional plasmon, but this mode is not the conventional plasmon of a two-component (electron–hole) plasma [120]. At higher frequencies, we recover ordinary sound from (83). See [121, 122] for more discussion of this point.

Dissipative corrections to the dispersion relation $\omega \sim \sqrt{k}$ are given by $\delta \omega \sim -{\rm i}\sigma_{\textsc{q}}\vert k\vert $ within hydrodynamics, instead of $\delta \omega \sim -{\rm i}\eta k^2$ as for the sound wave [104].

4.5.2. Breaking relativistic invariance?

Another possible issue is that because long-range Coulomb interactions break Lorentz invariance, the hydrodynamics of electrons in graphene may not be described by a Lorentz invariant hydrodynamics. To check whether this is a problem, we may directly derive the energy current and the momentum density from the action (22) using Noether's theorem. Because only the kinetic terms contain derivatives, the energy current and momentum density are identical to those of a free Dirac fermion. This implies that $T^{ti}=T^{it}$ should hold within hydrodynamics, as it is an operator identity. This equality alone is sufficient to recover the linearized hydrodynamic formalism of this section.

It may be that because nonlocal Coulomb interactions break Lorentz invariance, the thermodynamic and/or hydrodynamic properties of the Dirac fluid become more subtle. Evidence for this assertion can be found in [123]. In particular, this may correspond to interesting ${\rm O}(v^2)$ corrections to the hydrodynamic equations we have described so far. Because the interactions do not break spatial isotropy, such effects cannot arise at linear order in velocities, which is the order to which we work in this review. Recent work [124] is beginning to develop hydrodynamics without boost invariance.

4.6. Momentum relaxation

The second issue that we need to address is that the hydrodynamics we derived above assumed that momentum was an exactly conserved quantity. Unfortunately, this is not true for the electrons in metals. As we have seen in section 2, the scattering of electrons off of impurities and/or phonons cannot be neglected. Continuing to work in the linearized approximation, the simplest thing to do is to modify the momentum conservation equation (65c ) to

Equation (84)

In this equation, and for the rest of the paper, we have dropped the 0 subscript on the background quantities to avoid clutter. The parameter $\tau_{{\rm imp}}$ is a relaxation time for the total momentum, as can readily be seen by integrating this equation over space. It is often estimated as the time between scattering events of an electron/hole off an impurity or phonon, although we will see examples in section 7.2 where the impurity momentum relaxation time must be evaluated more carefully.

4.6.1. Destruction of sound modes.

Let us describe the consequences of momentum relaxation on the hydrodynamic modes described in section 4.3. Clearly, only the modes with $\delta v_i \ne 0$ will be affected. These are the sound modes and the diffusive shear modes. The shear modes obtain the dispersion relation

Equation (85)

On time scales long compared to $\tau_{{\rm imp}}$ , this mode is 'gapped'—momentum is not long lived and will not play a role in the dynamics. The sound modes become [104]

Equation (86)

As $k\rightarrow 0$ , these modes split into

Equation (87a)

Equation (87b)

The interpretation of this effect is as follows. The latter mode is gapped, and associated with the finite lifetime of momentum. The former mode is diffusive, and describes the diffusion of energy. At long wavelengths, energy is no longer transported by sound waves, but by diffusion.

The physical importance of this is as follows. In conventional hydrodynamics, momentum is a long lived quantity. On time scales large compared to $\tau_{{\rm imp}}$ , and on distances long compared to $v_{{\rm F}}\tau_{{\rm imp}}$ , we see that the dynamics of charge and energy will reduce to a set of diffusion equations. This is what occurs in a conventional dirty metal. Hence, in order to see hydrodynamics of electrons, we must find samples where we can observe flows on time and length scales short compared to these scales.
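The splitting of the sound modes as $k\rightarrow 0$ can be checked with a few lines of numerics. The sketch below assumes the damped-sound dispersion $\omega(\omega + {\rm i}/\tau_{{\rm imp}}) = v_{{\rm s}}^2 k^2$, which is the simplest form consistent with (84) and the two limits in (87):

```python
import cmath

# Damped sound with momentum relaxation:
#   omega^2 + i*omega/tau_imp - v_s^2 * k^2 = 0  (assumed form behind (86)).
# As k -> 0 the two roots split into a diffusive mode and a gapped mode.
v_s, tau_imp = 1.0, 1.0

def modes(k):
    disc = cmath.sqrt(v_s**2 * k**2 - 1.0 / (4.0 * tau_imp**2))
    return (-0.5j / tau_imp + disc, -0.5j / tau_imp - disc)

k = 1e-3
w_diff, w_gap = modes(k)
# diffusive root: omega ~ -i * v_s^2 * tau_imp * k^2  (energy diffusion)
# gapped root:    omega ~ -i / tau_imp                (momentum decay)
print(w_diff, w_gap)
```

Both asymptotic forms agree with the small-$k$ limits quoted above: at long wavelengths energy is transported diffusively, while momentum simply decays.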

4.6.2. Reduction to ohmic flow in the Fermi liquid.

It is also instructive to study the Fermi liquid limit $T/\mu \rightarrow 0$, as described in section 4.4. In this limit, (78b) generalizes to

Equation (88)

while incompressibility continues to imply $\partial_i \delta v_i = 0$. Relating $\delta P$ to $\delta \mu$ as in (77), we obtain

Equation (89)

where we have defined the kinematic viscosity

Equation (90)

A common trick to solve these equations is as follows [125]. Define the stream function ψ, so that

Equation (91)

For a two dimensional incompressible flow, this ansatz automatically satisfies $\partial_i \delta v_i = 0$. We can further take the curl of (89) to find

Equation (92)

where we have defined the 'Gurzhi length' or 'momentum relaxation length'

Equation (93)

When λ is finite, we can express solutions to this differential equation in the form

Equation (94)

Now suppose that λ is small compared to the geometric scales in our problem. The typical $\psi_\lambda$ decays exponentially away from the boundaries on the length scale λ: $\psi(x) \sim {\rm e}^{-x/\lambda}$. Hence, in the interior of the sample, all velocity comes from $\psi_0$. Furthermore, from (89), we see that on distances long compared to λ, the electric current $J_i \approx n\delta v_i$ is given by

Equation (95)

where

Equation (96)

is a constant which is, in fact, the (conventional, Ohmic) conductivity of an infinitely large sample, up to a factor of e2 which we will mostly neglect. We will return to this quantity, in some detail, in sections 6 and 7. For now, we simply emphasize that on long distances, there is no physical distinction between the equations of motion governing the flow of a hydrodynamic momentum-relaxing electron fluid, and the motion of electrons in a 'conventional' Ohmic metal with an isotropic conductivity tensor. Once again, we see how momentum relaxation ruins the interesting physics associated with hydrodynamic electron flow.
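The role of the Gurzhi length is illustrated concretely by driven flow in a channel with no-slip walls, a geometry treated carefully later in this review; the sketch below simply evaluates the solution of the one dimensional, static limit of (89), $\nu v'' - v/\tau_{{\rm imp}} = -F$ with an assumed uniform force F:

```python
import math

# Channel flow between no-slip walls at y = +/- w/2, driven by uniform F:
#   nu * v''(y) - v(y)/tau_imp = -F
# Solution: v(y) = F*tau_imp * (1 - cosh(y/lam)/cosh(w/(2*lam))),
# with lam = sqrt(nu * tau_imp) the Gurzhi length (93).

def v(y, w, lam, F_tau=1.0):
    return F_tau * (1.0 - math.cosh(y / lam) / math.cosh(w / (2.0 * lam)))

w = 1.0
# lam << w: flat (Ohmic) profile except in boundary layers of width ~lam
flat_ratio = v(0.25 * w, w, lam=0.01) / v(0.0, w, lam=0.01)
# lam >> w: parabolic (viscous, Poiseuille-like) profile, so v(w/4)/v(0) -> 3/4
parab_ratio = v(0.25 * w, w, lam=100.0) / v(0.0, w, lam=100.0)
print(round(flat_ratio, 3), round(parab_ratio, 3))
```

When $\lambda$ is small compared to the channel width the velocity profile is flat and the response is indistinguishable from Ohmic flow, exactly as argued above; when $\lambda$ is large the viscous, parabolic profile survives.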

4.6.3. Destruction of turbulence.

As a final point, let us discuss the consequences of a finite $\tau_{{\rm imp}}$ on the development of turbulent flows. Turbulence is one of the most dramatic phenomena in classical fluid dynamics: the chaotic and 'self-organizing' nonlinear dynamics of fluid vortices. Turbulence has a rather peculiar character in two spatial dimensions [126]. Let us give a qualitative description of the phenomenon 9 . Regions of positive vorticity $\Omega = \partial_x v_y - \partial_y v_x$ will merge together, as will regions of negative vorticity. More quantitatively,

Equation (97)

where we may think of

Equation (98)

as a number characterizing the properties of a small-scale stirring of the fluid. The average in (97) is over statistical realizations of turbulence; we leave a precise definition to [126].

What happens if we now include a momentum relaxation time $\tau_{{\rm imp}}$ 10 ? We can form a second dimensionless number

Equation (99)

which is the ratio of the nonlinear convective term to the momentum-relaxing term in the nonlinear generalization of (84). From (97) we estimate that $v_{{\rm flow}} \sim (\epsilon \ell_{{\rm flow}}){\hspace{0pt}}^{1/3}$. As $\ell_{{\rm flow}} \rightarrow \infty$, $\mathcal{R}_\tau \rightarrow 0$ while $\mathcal{R} \rightarrow \infty$: as we have seen throughout this subsection, the effects of momentum relaxation become most important at long distances. By comparing $\mathcal{R}$ to $\mathcal{R}_\tau$, we conclude that momentum relaxation becomes more important than viscosity whenever $\ell_{{\rm flow}}^2 \gtrsim v_{{\rm F}} \ell_{{\rm ee}}\tau_{{\rm imp}} = \lambda^2$. When $\mathcal{R}_\tau \sim 1$, momentum relaxation becomes more important than momentum convection via the nonlinear terms in the Navier–Stokes equations. Combining (97)–(99), we observe that this occurs when

Equation (100)

Obviously, to be in the hydrodynamic regime, we require $\ell_{{\rm stir}} \gtrsim \ell_{{\rm ee}}$. We can now understand the difficulty of observing turbulent flows of electrons in metals: we must find a metal where $\tau_{{\rm imp}}$ is large enough:

Equation (101)

Even if $\ell_{{\rm flow}}$ is not too much larger than $\ell_{{\rm ee}}$, given the discussion in section 4.3.1, we would conservatively require $\tau_{{\rm imp}} \gtrsim 10 \tau_{{\rm ee}}$ to see even a hint of turbulence. This is very difficult to achieve experimentally. We do not expect electronic hydrodynamics to reach the nonlinear regime in the near term, if ever.

5. Kinetic theory

In this section, we will present a microscopic 'derivation' of the hydrodynamics of electrons in graphene, based upon kinetic theory. The kinetic theory of electrons in graphene was recently reviewed in [128]. Kinetic theory is a framework for understanding the dynamics of weakly interacting quantum systems, as we will shortly review, and so while it can be useful for understanding Fermi liquid physics, one might question the legitimacy of such an approach for a strongly interacting quantum system such as the Dirac fluid. Our view is that it is worth knowing the main results obtained using kinetic theory—even if some assumptions may break down at charge neutrality. Kinetic theory gives us a controlled treatment of the ballistic-to-hydrodynamic crossover and will allow us to address questions such as the validity of relativistic hydrodynamics (section 4.5.2).

5.1. The Boltzmann equation

Let us begin with a physically intuitive picture of the kinetic equations. What follows can be derived more rigorously from quantum many-body theory [9], and we will pause where appropriate and comment on the effects of quantum mechanics. Indeed, what follows is often called 'quantum kinetic theory', although we feel this is a misnomer. The equations below are classical, while the coefficients of the classical equations can be microscopically computed in the quantum theory.

The basic idea of the kinetic equations is that if there were no interactions—namely, the many-body Hamiltonian in graphene was simply given by (5)—then the number of fermions in every single-particle state would be conserved (no fermion can scatter into any other state). One would like to construct a 'hydrodynamics' for these conserved quantities. However, these conserved quantities are spatially extended, and so we must be slightly careful. The key observation is that if we are only interested in long wavelength physics on scales $\gg {\rm \Delta} x$ , and willing to only discriminate between fermions whose momenta are at least ${\rm \Delta} p$ different, then whenever

Equation (102)

we can assert that the number of fermions at every momentum is individually locally conserved. This is simply the statement that quantum mechanics, and the wave-like nature of the quasiparticles, only becomes important on length scales where Heisenberg's uncertainty principle cannot be ignored. Writing $f_A({\bf x}, {\bf p})$ for the number density of fermions of flavor A (spin/valley in graphene) and momentum ${\bf p}$, we can then write down conservation laws for fA . We further assert that $0\leqslant f_A({\bf x}, {\bf p}) \leqslant 1$ because the particles are fermions, and the Pauli exclusion principle forbids two of them from occupying the same state. The crucial point is that we can write such conservation laws down explicitly, and not merely phenomenologically. For simplicity, suppose that the single-particle Hamiltonian takes the form

Equation (103)

The time evolution of fA is given by the Liouville equation of classical mechanics:

Equation (104)

where ${\bf v}$ is the group velocity of quasiparticles with momentum ${\bf p}$ :

Equation (105)

with $ \newcommand{\e}{{\rm e}} \epsilon_{A}$ the single-particle Hamiltonian for particles of flavor A. The external force is given by

Equation (106)

With two important subtleties, this equation can also be derived more carefully from the quantum theory [9]. First, the above derivation is only valid when the lifetime of quasiparticles $\tau_{{\rm ee}}$ , which we will define below, obeys

Equation (107)

Such an assumption is sensible in a Fermi liquid, but less so in the Dirac fluid of graphene. Second, we have overlooked the possibility that particles of different flavors A may convert back and forth. When such processes cannot be neglected (this is most commonly the case for spin degrees of freedom) one must generalize the distribution function to a matrix in flavor indices fAB . Although it is rarely done, in principle one can also compute the subleading contributions in $\hbar$ to (104). Interestingly, they will take the form of 'gradient' corrections to (104), involving higher derivatives of ${\bf x}$ and ${\bf p}$ .

We now introduce the effects of interactions, assuming that the distribution function may be written as fA . The essential idea of kinetic theory is that if interactions occur very rarely (in a sense we will shortly make explicit), then interactions can be introduced into (104) perturbatively. This is analogous to our treatment of hydrodynamics with weak momentum relaxation in section 4.6. Even though the $f_A({\bf p})$ will not all remain conserved, we can perturbatively correct the right hand side of (104):

Equation (108)

This is called the Boltzmann equation, and $\mathcal{C}[\,f]$ is called the collision integral. If all collisions between fermions are spatially local 2-body scattering events, such as Coulomb interactions, then

Equation (109)

To save space, we have suppressed the explicit ${\bf x}$ -dependence of all factors of f above. Note that $\mathcal{C}$ carries arguments A, ${\bf x}$ and ${\bf p}$ which have been suppressed. This equation looks more intimidating than it actually is. What we are doing is counting how frequently two particles in flavor/momentum states $A{\bf p}$ and $B{\bf p}^\prime$ scatter into $C{\bf q}$ and $D{\bf q}^\prime$ . The rate at which this occurs is given by $\vert \mathcal{M}_{ABCD}\vert ^2 f_A f_B(1-f_C)(1-f_D)$ (we have suppressed momenta for simplicity): $\vert \mathcal{M}_{ABCD}\vert ^2$ is roughly proportional to the probability that such a scattering event would occur in the absence of all other particles, and it can be computed using Feynman diagrams in the microscopic quantum theory [9]. We take $f_A f_B$ as the probability that two fermions are in states A and B, and $(1-f_C)(1-f_D)$ as the probability that C and D are empty (the Pauli exclusion principle forbids two fermions from being in the same state), and multiply the resulting probabilities together to obtain the number of scattering events that occur. The assumption that we can multiply such probabilities together, because the states of the incoming/outgoing particles are uncorrelated, is called molecular chaos. Finally, noting that a scattering event of this type destroys an A and a B while creating a C and a D leads us to (109): the first term arises from collisions where an A is created, and the second term from collisions where an A is destroyed. $\mathcal{C}$ straightforwardly generalizes to other kinds of collisions as well [9].

We can self-consistently argue that this kinetic formalism is consistent with (107) by estimating the lifetime of a quasiparticle as

Equation (110)

(107) must then hold for any choice of A, ${\bf x}$ or ${\bf p}$ for kinetic theory to be valid.

The kinetic approach has its difficulties. First, the factor of $\vert \mathcal{M}\vert ^2$ in (109) is generally very complicated. As is the case in graphene, one may also need to make certain self-consistent approximations to avoid spurious divergences in $\mathcal{M}$ and $\mathcal{C}$ ; we will mostly not worry about such effects in this review. Second, the Boltzmann equation is a huge, highly nonlinear integro-differential equation. It cannot possibly be solved in much generality: even numerics pose a real challenge. Still, there are two useful features one can prove in great generality about the kinetic approach. Firstly, so long as one considers a stable phase of matter, one can prove an 'H-theorem', analogous to the second law of thermodynamics: (108) is a dissipative equation that tends towards thermal equilibrium. Secondly, one can often find nonlinear solutions to (108) of the form

Equation (111)

where qA is the electric charge of a particle of type A and $n_{{\rm F}}$ is the Fermi function

Equation (112)

Indeed, using (111) and (112), one finds that the object in square brackets in (109) is proportional to

This vanishes due to the conservation of energy in two-body collisions found in (109). The left hand side of (108) can also be shown to vanish on this solution using (105) and (106). In fact, whenever H1 does not depend on ${\bf x}$ at all, momentum is a good conserved quantity, and one can find (at least) a three-parameter family of equilibria, parameterized by T, μ and ${\bf u}$ :

Equation (113)

The free parameters in this equation are exactly the temperature, chemical potential, and velocity that we introduced in hydrodynamics in section 4. In many metals, including graphene, accounting for only electron–electron collisions can lead to even more (approximate) conservation laws, as we will see. Unlike the hydrodynamic approach, which required us to know the conservation laws a priori, the kinetic approach allows us to compute them.
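The vanishing of the square bracket in (109) on the Fermi–Dirac form (111) and (112) is a simple detailed-balance identity, and is easy to verify numerically. A minimal check, assuming for simplicity that all four particles carry the same charge so that the chemical potential terms cancel by themselves:

```python
import math, random

# Check that Fermi-Dirac occupations annihilate the two-body collision
# bracket in (109):
#   f_A f_B (1 - f_C)(1 - f_D) - (1 - f_A)(1 - f_B) f_C f_D = 0
# whenever energy is conserved, using (1 - n_F(x))/n_F(x) = e^x.

def n_F(x):
    return 1.0 / (math.exp(x) + 1.0)

random.seed(0)
T, mu = 1.3, 0.4
max_bracket = 0.0
for _ in range(100):
    # random incoming energies; outgoing energies respect conservation
    eA, eB = random.uniform(-3, 3), random.uniform(-3, 3)
    eC = random.uniform(-3, eA + eB + 3)
    eD = eA + eB - eC                     # energy conservation
    fA, fB, fC, fD = (n_F((e - mu) / T) for e in (eA, eB, eC, eD))
    bracket = fA * fB * (1 - fC) * (1 - fD) - (1 - fA) * (1 - fB) * fC * fD
    max_bracket = max(max_bracket, abs(bracket))
print(max_bracket)   # vanishes to machine precision
```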

5.1.1. Hydrodynamic limit of a kinetic theory.

If quasiparticles are long-lived and the kinetic expansion is valid, one can understand hydrodynamics directly from kinetic theory 11 . Let us now quickly sketch how this is done, in general circumstances. The approach follows textbook treatments of the derivation of hydrodynamics for a weakly interacting classical gas [129]. In what follows we assume that $V_{{\rm imp}A}=0$ —namely, there is no breaking of translational symmetry.

Suppose that we have identified the full nonlinear family of time-independent solutions to (108), and that they take the form

Equation (114)

where $X^I_A({\bf p})$ labels the amount of conserved quantity I carried by a particle of flavor A and momentum ${\bf p}$ , and $\lambda^I$ are the corresponding free parameters; we have employed an Einstein summation convention on I. For example, if the conserved quantities are energy, momentum and charge, then we have

Equation (115)

The equations of motion of zeroth order hydrodynamics are found by plugging in the ansatz

Equation (116)

into (108), and one finds

Equation (117)

where

Equation (118a)

Equation (118b)

are the charge and current densities associated with each conserved quantity. These equations can be understood intuitively as follows. If we wait for times $t \gg \tau_{{\rm ee}}$ , then we qualitatively expect that the collision integral has relaxed away all non-equilibrium perturbations.

In section 4, we often expressed the hydrodynamic equations in terms of the variables $n(\mu, T)$ , $u^\mu$ , etc. For example, if ${\bf u}={\bf 0}$ , then from (118) we find that

Equation (119)

The sum over A runs over electrons and holes with $q_A = \pm 1$ . In this expression, we have assumed the dispersion relation (1) for all species of particles, as is appropriate for graphene. At finite ${\bf u}$ , it is easier to replace ${\bf u}$ with $u^\mu$ , similarly to (40). The relativistic generalization of (119) then becomes (in units with $ \hbar=v_{{\rm F}}=1$ )

Equation (120)

Performing this integral, one finds the form (44) along with the identity (34).
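The scaling content of these integrals is easy to check numerically. The sketch below evaluates the energy density of the Dirac fluid at charge neutrality in units $\hbar = v_{{\rm F}} = k_{{\rm B}} = 1$, dropping all constant prefactors (degeneracy factors, $2\pi$ 's), and confirms the expected $T^3$ scaling:

```python
import math

# Energy density at charge neutrality (mu = 0), up to constant prefactors:
#   epsilon(T) ∝ \int_0^infty dk k^2 n_F(k/T),
# the radial form of the d = 2 momentum integral in (119) with the linear
# dispersion (1).  Dimensional analysis gives epsilon ∝ T^3.

def n_F(x):
    return 1.0 / (math.exp(x) + 1.0)

def energy_density(T, kmax=200.0, n=200000):
    dk = kmax / n
    return sum((i * dk) ** 2 * n_F(i * dk / T) * dk for i in range(1, n + 1))

ratio = energy_density(2.0) / energy_density(1.0)
print(round(ratio, 3))   # doubling T multiplies epsilon by 2^3 = 8
```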

As we saw in section 4, the hydrodynamic equations can be slightly pathological at ideal order; to find a set of equations which truly settles to thermal equilibrium, we must account for dissipation. This can be done by perturbatively solving (108) in the 'small parameter' $\tau_{{\rm ee}}$ . Because $\tau_{{\rm ee}}$ is defined implicitly via (110), the easiest way to do this is to write

Equation (121)

with $f_A^1$ characterizing the small correction to the distribution function arising from the fact that $\mathcal{C}[\,f]$ should not exactly vanish at all times. We enforce

Equation (122)

because any local fluctuation of a conserved quantity should be absorbed into the local value of $\lambda^I({\bf x})$ . This is the analogue of the 'Landau frame' of hydrodynamics. We then approximate, to first order in $\tau_{{\rm ee}}$ :

Equation (123)

It is instructive to make a relaxation time approximation [130]

Equation (124)

Combining (122)–(124), we find that the equations of motion are still the continuity equations (117) but with a dissipative contribution to the current ${\bf J}^I$ :

Equation (125)

To leading order in $\tau_{{\rm ee}}$ one can then use the zeroth order equations of hydrodynamics to simplify the integrals on the right hand side. Relative to (118b), (125) contains an extra derivative. Thus $\widehat{{\bf J}}^I$ contains viscous dissipation, among other first order corrections to hydrodynamics. In the limit where $\tau_{{\rm ee}} \rightarrow 0$ , we indeed recover zeroth order hydrodynamics.
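The qualitative content of the relaxation time approximation (124) is simply exponential decay of any non-conserved deviation from local equilibrium. A minimal sketch, for a spatially homogeneous perturbation where (108) reduces to an ordinary differential equation:

```python
import math

# Relaxation time (BGK-type) approximation for a homogeneous perturbation:
#   d f^1 / dt = - f^1 / tau_ee,
# so deviations from local equilibrium decay on the time scale tau_ee.
tau_ee = 2.0
dt, steps = 1e-3, 4000
f1 = 1.0                      # initial deviation from local equilibrium
for _ in range(steps):
    f1 += -f1 / tau_ee * dt   # forward-Euler step

t = dt * steps                # total time = 2 * tau_ee
exact = math.exp(-t / tau_ee)
print(round(f1, 4), round(exact, 4))
```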

More carefully, one could linearize (123), replacing $\tau^{-1}_{{\rm ee}}$ with a matrix in both A and ${\bf p}$ indices. A generalization of the remaining steps gives a more accurate determination of the first order corrections to the hydrodynamic equations.

5.2. Collisions in graphene

We now turn to the application of kinetic theory to graphene [109, 110, 131–134]. As in any kinetic theory, some of the subtlety arises from the explicit form of the collision integral. In graphene, the most important electron–electron interactions are long-range Coulomb interactions, as we discussed in section 3. So we can compute the collision integral in graphene using (109) with [109]

Equation (126)

with $ \newcommand{\e}{{\rm e}} \epsilon({\bf q}, \omega)$ a frequency-dependent effective permittivity [115], which we approximate as [109]

Equation (127)

The factor of $\mathcal{Y}$ is typically just an O(1) constant but is rather complicated: see [109].

However, in graphene one of the most important effects is 'geometric'. This paragraph follows the discussion in [110]; see also [135]. As discussed above, the collision integral contains $\delta$ -functions for energy and momentum conservation. We now consider the possibility that the two incoming and two outgoing momenta are nearly collinear: e.g. ${\bf k}_1 = k_{1\parallel} \hat{{\bf x}} + k_{1\perp}\hat{{\bf y}}$ with $k_{1\parallel} \gg k_{1\perp}$ . Without loss of generality we set $k_{2\perp}=0$ . If the momentum exchange during the collision is ${\bf q}$ , and the incoming quasiparticles have momenta ${\bf k}_{1, 2}$ , then the energy $\delta$ -function is

Equation (128)

To get the second step, we have used $\delta$ -function identities. The functions f1,2 and $q_{\perp1, 2}^*$ are unimportant, and follow from solving a quadratic equation. The key point is that the collision integral (109) will involve integrals over ${\bf k}_2$ and ${\bf q}$ . The $k_{1\perp}$ integral has a logarithmic divergence, which is sensitive to the fact that there are d  =  2 spatial dimensions. This is called the (forward) collinear scattering singularity. The integral is not truly divergent so long as the scattering amplitude $\mathcal{M}$ vanishes for collinear scattering; the form of (127) ensures that it does. Because screening is proportional to α, we conclude that as $\alpha \rightarrow 0$ collinear scattering will lead to rapid thermalization at every angle. For example, at charge neutrality, we find [136]

Equation (129)

The consequences of this collinear scattering for graphene will be discussed in section 5.4. Experimental signatures of rapid collinear scattering were observed in [137]. A toy model of the kinetic theory of graphene accounting for this rapid collinear scattering can be found in [138].

5.3. The imbalance mode

Beyond the divergences in the collinear limit, the relativistic dispersion of graphene leads to another important effect: the separate (approximate) conservation of electrons and holes [131, 133, 134]. At strong coupling, a simple expectation is that an electron will sometimes 'spontaneously convert' into n electrons and n  −  1 holes; sometimes the reverse process will occur. No fundamental symmetry prevents this from happening. In the weak coupling limit, however, such processes are very unlikely for kinematic reasons. With a linear dispersion relation, the conservation laws demand that both ${\bf p}_1 = {\bf p}_2+{\bf p}_3+{\bf p}_4$ and $\vert {\bf p}_1\vert = \vert {\bf p}_2\vert + \vert {\bf p}_3\vert + \vert {\bf p}_4\vert $ . This is only possible if all four momenta are collinear: see figure 6.

Figure 6. A proposed ${\rm e}\rightarrow {\rm e}+{\rm e}+{\rm h}$ scattering event. It is only possible to end up with on-shell quasiparticles if $\vert {\bf p}_1\vert = \vert {\bf p}_2\vert + \vert {\bf p}_3\vert + \vert {\bf p}_4\vert $ and ${\bf p}_1 = {\bf p}_2 + {\bf p}_3 + {\bf p}_4 $ with graphene's relativistic dispersion relation. Scattering events must then be collinear. Reprinted figure with permission from [131], Copyright 2009 by the American Physical Society.


Because the decay rate for a single electron involves an integral over all of phase space, while the collinear configurations form a measure-zero subset, the contribution of collinear decays is vanishingly small. We conclude that on time scales of order $\alpha^{-2}$ , the number of electrons $n_{{\rm e}}$ and holes $n_{{\rm h}}$ are separately conserved. It is more conventional to write the two conserved densities as

Equation (130a)

Equation (130b)

with $n_{{\rm imb}}$ the 'imbalance density'.
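For orientation, a standard convention for these densities, consistent with the surrounding discussion (the reader should check it against the displayed equations (130)), is

```latex
n = n_{\rm e} - n_{\rm h}, \qquad n_{\rm imb} = n_{\rm e} + n_{\rm h},
```

so that n is the (signed) charge density carried by quasiparticles, while $n_{\rm imb}$ counts the total number of quasiparticles.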

The discovery of this imbalance mode at weak coupling is, in our view, the most important contribution of the kinetic theory of graphene hydrodynamics. We will discuss whether the imbalance mode is really present in the Dirac fluid at the end of this section, but the predictions of a theory of imbalance hydrodynamics can always be compared directly to experiment.

Finally, one might ask whether the presence of two valleys in graphene, which we have so far neglected, provides extra imbalance modes. The answer is yes—but with an important caveat. Because the two valley fluids in graphene have identical properties, so long as we are only interested in disorder or experimental probes which are smooth on atomic scales, the valley imbalance degree of freedom will decouple from any measurement. For this reason, we have neglected the presence of multiple valleys.

5.3.1. Hydrodynamics with an imbalance mode.

Following the logic of section 4, it is straightforward to construct the theory of hydrodynamics with an additional imbalance mode; see also [139]. Because both the electron fluid and hole fluid have a relativistic dispersion relation, and the instantaneous Coulomb interactions do not contribute to the energy current or momentum density, we conclude that $T^{\mu\nu}$ continues to be symmetric. We now have two conserved charges, $n^a = (n, n_{{\rm imb}})$ ; the indices a, b will label the electric/imbalance charge components. The key observation is that nothing in the derivations of section 4 changes if we simply replace $\mu {\rm d}N$ in (29) with $\mu^a {\rm d}N^a$ , replace (30) with $\epsilon+P = Ts + \mu^an^a$ , etc. We conclude that the nonlinear hydrodynamics with imbalance modes is given by the equations

Equation (131a)

Equation (131b)

For compactness, we have written $F^{a\mu\nu} = (F^{\mu\nu}, 0)$ —only the electric part of the conserved charges may realistically be externally sourced in experiments 12 . To first order in the derivative expansion, one finds $T^{\mu\nu}$ is given by (62b ), while

Equation (132)

where $\sigma^{ab}_{\textsc{q}}$ is a positive-definite $2\times 2$ matrix. Many authors use conventions different from ours (see the recent review [128] for an example), but one can show that the equations are equivalent after suitable relabelings.

The analysis of these hydrodynamic equations in linear response is a straightforward generalization of our analysis in section 4.3. One obtains sound modes with dispersion relation (68), together with diffusive modes $\omega = -{\rm i}Dk^2$ with diffusion constants

Equation (133)

Consistency of hydrodynamics requires that all of these eigenvalues are positive.
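To make this positivity requirement concrete, here is a minimal numerical sketch. The matrices below are hypothetical illustrative numbers, not graphene data: given a positive-definite $\sigma_{\textsc{q}}^{ab}$ and a susceptibility matrix, the diffusion constants in (133) arise as eigenvalues of a matrix of the schematic form $\chi^{-1}\sigma_{\textsc{q}}$ , and both eigenvalues come out positive:

```python
import numpy as np

# Hypothetical illustrative values (not graphene data): a positive-definite
# 2x2 matrix of intrinsic conductivities sigma_Q^{ab}, and a positive-definite
# charge/imbalance susceptibility matrix chi^{ab}.
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
chi = np.array([[1.5, 0.2],
                [0.2, 0.8]])

# Schematic diffusion matrix D^{ab} ~ (chi^{-1} sigma)^{ab}; the diffusive
# modes omega = -i D k^2 have D given by the eigenvalues of this matrix.
D = np.linalg.solve(chi, sigma)
eigs = np.linalg.eigvals(D)

# Consistency of hydrodynamics: all diffusion eigenvalues must be positive.
assert np.all(eigs.real > 0)
print(np.sort(eigs.real))
```

Positivity of both eigenvalues is guaranteed here because the product of two positive-definite matrices has positive (real) spectrum.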

The possibility of imbalance modes in a far broader class of materials was discussed in [25, 26], along with their experimental implications; see also section 8.1.

5.3.2. Decay of the imbalance mode.

The reason that we did not include this imbalance mode explicitly in the hydrodynamics of section 4 is that this conservation law is not exact. In particular, consider the higher order scattering process shown in figure 7. In this case, we have 2 electrons scattering into 3 electrons and 1 hole—in other words, the 'assisted decay' of an energetic electron into electrons and holes.


Figure 7. The imbalance mode can decay through a higher order (3-body) scattering event (left). Energy and momentum conservation laws no longer forbid such processes with graphene's relativistic dispersion relation, so long as the scattering event is of the form ${\rm e}+{\rm e} \rightarrow {\rm e}+{\rm e}+{\rm e}+{\rm h}$ and not ${\rm e}+{\rm e} \rightarrow {\rm e}+{\rm e}+{\rm h}+{\rm h}$ (right).


In order to compute the rate of this scattering process, one must include higher order Feynman diagrams in the collision integral (109), and the explicit evaluation becomes increasingly cumbersome. At the neutrality point, where the imbalance mode is likely to be most important, we can estimate the decay rate of such scattering events as

Equation (134)

up to $\log \alpha^{-1}$ prefactors. While such processes are strongly suppressed when $\alpha \ll 1$ , for $\alpha \sim 1$ the imbalance decay rate can be relatively fast. Keeping in mind our previous estimate $\alpha \sim 0.4$ , we conclude that the imbalance mode's lifetime is only about 5 times longer than that of other non-conserved degrees of freedom. It could lead to quantitative changes in experiment, but likely not qualitative changes. Another recent discussion of $\tau_{{\rm imb}}$ can be found in [140].
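The lifetime comparison quoted above can be checked by simple arithmetic, assuming the parametric scalings $\tau_{\rm ee}^{-1} \sim \alpha^2 k_{\rm B}T/\hbar$ and $\tau_{\rm imb}^{-1} \sim \alpha^4 k_{\rm B}T/\hbar$ (prefactors and logarithms dropped):

```python
# Order-of-magnitude check of the lifetime ratio quoted in the text.
# Assumed parametric scalings (up to logarithms): thermalization rate
# ~ alpha^2 k_B T / hbar; 3-body imbalance decay rate (134) ~ alpha^4 k_B T / hbar.
alpha = 0.4

rate_ee = alpha**2    # in units of k_B T / hbar
rate_imb = alpha**4

ratio = rate_ee / rate_imb   # tau_imb / tau_ee = alpha^{-2}
assert abs(ratio - 1 / alpha**2) < 1e-12
print(ratio)   # ~6: the imbalance mode lives roughly 5-6 times longer
```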

Another decay channel for the imbalance mode is disorder-assisted two-body scattering [131]. Because we are typically interested in fluid dynamics where momentum is very long-lived, we will find it more useful to think about imbalance decay via higher order scattering events, as in (134).

If the imbalance mode is a long-lived degree of freedom, then it may be useful to maintain it in the equations of motion. This is analogous to keeping track of viscous effects in a momentum relaxing fluid—on short enough length scales, viscous effects may have experimental consequences. A simple model which accounts for the decay of the imbalance mode on a time scale $\tau_{{\rm imb}}$ is to replace (131a ) with

Equation (135a)

Equation (135b)

5.4. The hydrodynamic coefficients of graphene

In this section, we collect some explicit results (without proof) for the thermodynamic and hydrodynamic coefficients of graphene, as computed in kinetic theory. The numerical coefficients below may not be exact in the Dirac fluid regime. Let us emphasize that relativistic dimensional analysis can predict almost everything below up to O(1) constant prefactors; our purpose here is to give those prefactors explicitly.

5.4.1. Thermodynamics.

The thermodynamics of graphene in both the Fermi liquid and Dirac fluid, at least when $\alpha \ll 1$ , is well approximated by the thermodynamics of a free Fermi gas of suitable Fermi velocity $v_{{\rm F}}$ [85, 86]. This can be understood by recalling that at weak coupling, the only scale (temperature) dependent parameter in the action (22) was the Fermi velocity $v_{{\rm F}}$ , given by (27). Within kinetic theory, we simply use the standard Fermi gas formula for the pressure

Equation (136)

where ${\rm Li}$ denotes the polylogarithm function. In (136), we are implicitly using the T and μ dependent $v_{{\rm F}}$ described in (27) and (28). As we described in section 4.1, this can be used to derive all other thermodynamic properties. To leading logarithmic order, we can neglect the μ and T dependence of $v_{{\rm F}}$ when computing thermodynamic derivatives.
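A numerical sanity check of the scaling structure of (136) (not its prefactors) is straightforward. Working in units $\hbar = v_{\rm F} = k_{\rm B} = 1$ and dropping degeneracy factors, the free 2d Dirac gas pressure is jointly cubic in $(\mu, T)$ , and reduces to $\mu^3/6$ in these reduced units in the Fermi liquid limit:

```python
import numpy as np

def pressure(mu, T, emax=200.0, npts=400001):
    """Pressure of a free 2d Dirac gas, up to an overall constant prefactor
    (units hbar = v_F = k_B = 1; spin/valley degeneracy dropped)."""
    eps = np.linspace(0.0, emax, npts)
    # electron and hole branches of the grand potential density;
    # logaddexp(0, x) = log(1 + e^x) without overflow
    integrand = eps * (np.logaddexp(0.0, (mu - eps) / T)
                       + np.logaddexp(0.0, (-mu - eps) / T))
    dx = eps[1] - eps[0]
    # trapezoidal integration over energy
    return T * (0.5 * (integrand[0] + integrand[-1]) + integrand[1:-1].sum()) * dx

# Scale invariance: P(mu, T) is jointly cubic in (mu, T), so doubling
# both arguments must multiply the pressure by 8.
ratio = pressure(2.0, 2.0) / pressure(1.0, 1.0)
assert abs(ratio - 8.0) < 1e-2

# Fermi liquid limit (mu >> T): P -> mu^3 / 6 in these units.
pf = pressure(10.0, 0.05)
assert abs(pf / (10.0**3 / 6) - 1.0) < 1e-2
```

The same cubic scaling underlies the $T^3$ behavior quoted below for the Dirac fluid and the $\mu^3$ -dominated behavior of the Fermi liquid.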

In the Fermi liquid regime ($\mu \gg k_{{\rm B}}T$ ), we can Taylor expand (136) to obtain

Equation (137)

Using (28), we may write the inverse compressibility in the limit $T\rightarrow 0$ as [141]

Equation (138)

Similarly, we find the specific heat

Equation (139)

In the Dirac fluid regime ($\mu \ll k_{{\rm B}}T$ ), we instead find

Equation (140)

where $\zeta(x)$ is the Riemann zeta function. We find the inverse compressibility

Equation (141)

and specific heat

Equation (142)

5.4.2. Dissipative coefficients.

In the remainder of the section, we study the dissipative hydrodynamic coefficients. First, on general grounds we anticipate that these dissipative coefficients will scale as $\eta \sim \sigma_{\textsc{q}} \sim \alpha^{-2}$ , up to logarithmic factors. The reason for this is two-fold. Firstly, we saw in section 4 that hydrodynamics is a derivative expansion in the small parameter $k\ell_{{\rm ee}}$ , where k is the wave number of spatial variations and $\ell_{{\rm ee}}$ is the mean free path. Because dissipative coefficients like η and $\sigma_{\textsc{q}}$ show up at first order in the gradient expansion, we obtain $\eta, \sigma_{\textsc{q}} \sim \ell_{{\rm ee}}$ . We can estimate $\ell_{{\rm ee}} \sim v_{{\rm F}} \tau_{{\rm ee}}$ , where $\tau_{{\rm ee}}$ is the typical lifetime of quasiparticles. In section 5.2, we estimated that $\tau_{{\rm ee}} \sim \alpha^{-2}$ (up to possible logarithms). Hence we find that at weak coupling ($\alpha \rightarrow 0$ ), the viscosity becomes very large, while at strong coupling ($\alpha \sim 1$ ) the viscosity is small.
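To get a feeling for the numbers, here is a rough order-of-magnitude estimate of the mean free path implied by $\tau_{\rm ee} \sim \hbar/(\alpha^2 k_{\rm B}T)$ . All inputs below are assumed round numbers, not measured values:

```python
# Order-of-magnitude estimate of the electron-electron mean free path,
# l_ee ~ v_F * tau_ee with tau_ee ~ hbar / (alpha^2 k_B T) (up to logs).
hbar = 1.055e-34   # J s
kB = 1.381e-23     # J / K
vF = 1.0e6         # m / s, graphene Fermi velocity
alpha = 0.4        # assumed effective fine structure constant
T = 100.0          # K, illustrative temperature

tau_ee = hbar / (alpha**2 * kB * T)
l_ee = vF * tau_ee
print(l_ee)   # ~5e-7 m: half a micron, comparable to device sizes
```

This anticipates the point made in section 6.4: $\ell_{\rm ee}$ is not far below typical device sizes, so the ballistic-to-hydrodynamic crossover matters in practice.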

Why is viscosity, a hydrodynamic and collective effect, largest when interactions are weak? In section 4.3, we showed that the diffusion constant of transverse momentum is proportional to the shear viscosity. Assuming that momentum is 'randomly' exchanged during collisions, we estimate the diffusion constant $D \sim v_{{\rm F}} \ell_{{\rm ee}}$ by dimensional analysis: $v_{{\rm F}}$ is the velocity of the particles carrying away the momentum from a 'source', and $\ell_{{\rm ee}}$ is the typical distance they travel before being scattered. The weaker the interactions, the larger $\ell_{{\rm ee}}$ becomes, and the farther particles can travel before they get scattered. This is why viscosity is so large for a weakly interacting quantum gas. By contrast, some of the most viscous classical liquids that we know are very strongly interacting; their viscosity is high because they are close to a crystallization or jamming/glassy transition [142].

Let us now provide more quantitative results, beginning with the properties of graphene in the Fermi liquid limit ($\mu \gg k_{{\rm B}}T$ ). As we noted in section 4.4, in the Fermi liquid we can neglect the distinction between charge and energy conservation (and imbalance) to leading order in $k_{{\rm B}}T/\mu$ . Therefore, the most important dissipative coefficients are the viscosities η and ζ. Firstly, on general grounds we expect that $\zeta \approx 0$ as graphene is an approximately scale invariant quasirelativistic plasma [143]. Secondly, due to the rapid enhancement of the phase space of scattering in a 2d Fermi liquid, one finds that [144–147]

Equation (143)

Based on the general arguments above, we might expect that $\eta \propto \tau_{{\rm ee}}$ , with $\tau_{{\rm ee}}$ given by (143). However, this turns out to not be true. The dominant collisions in a 2d Fermi liquid are either 'head on' collisions, where two quasiparticles of nearly opposite momenta scatter into two other quasiparticles of nearly opposite momenta [148, 149], or collinear scattering. The logarithmic enhancement in (143) can be traced to collinear scattering, but such collisions do not efficiently dissipate transverse momentum. Thus, it turns out that in graphene the shear viscosity is [143]

Equation (144)

It may even be possible that η is enhanced to scale as $T^{-2} (\log(\mu/k_{{\rm B}}T))^2$ [150] in certain 2d Fermi liquids, due to possible further suppression of head on collisions [151]. The $T^{-2}$ scaling of dissipative coefficients is a classic result of Fermi liquid theory [152].

Earlier, we (correctly) observed that $\sigma_{\textsc{q}}$ will be negligible in the Fermi liquid limit. However, it may still be useful to keep track of the first non-zero correction to $\sigma_{\textsc{q}}$ in $T/\mu$ . The reason is simple: as discussed below (58), a textbook (Galilean-invariant) fluid [5] has a dissipative coefficient $\kappa_{\textsc{q}}$ , related to the flow of heat, in the absence of momentum flow, in a temperature gradient. A Galilean-invariant Fermi liquid has $\kappa_{\textsc{q}} \sim T\tau_{{\rm ee}} \sim 1/T$ [152]. Making the frame choice (58), we then expect from the form of (62) that $\sigma_{\textsc{q}} \sim T^0$ . Although they do not state it directly, the authors of [109] indeed find this T-dependence of $\sigma_{\textsc{q}}$ in an explicit calculation.

Next we turn to the Dirac fluid regime ($\mu \ll k_{{\rm B}}T$ ). In this regime, one finds that [109, 110, 132]

Equation (145)

It is not easy to directly measure this result experimentally, unfortunately: as we will see in later sections, the experimentally measured conductivity is also affected by (e.g.) disorder. If we assume that the imbalance mode is also conserved, then we must compute the other coefficients of the matrix $\sigma_{\textsc{q}}^{ab}$ . At the neutrality point, one finds that off-diagonal components of this matrix vanish, and

Equation (146)

One also finds a very small value for the viscosity at the neutrality point [153]:

Equation (147)

Note that while α and $v_{{\rm F}}$ are temperature-dependent, the product $v_{{\rm F}}\alpha$ is temperature-independent. When $\alpha \sim 1$ , one finds that $\eta/s \sim \hbar/k_{{\rm B}}$ , as is typical of a strongly coupled quantum fluid [154]. Finally, as in the Fermi liquid, we find that $\zeta \approx 0$ .

6. Transport in the Fermi liquid

In this section, we now turn to the experimentally observable consequences of viscous fluid flow. We focus on the Fermi liquid regime where $\mu \gg k_{{\rm B}}T$ . We also assume a 'mean field' treatment of disorder, and do not consider an inhomogeneous charge puddle landscape. The equations that we must solve, subject to appropriate boundary conditions, were derived in section 4:

Equation (148a)

Equation (148b)

One crucial property of these equations is that they contain only a single true 'fit' parameter, the viscosity η. The density n is measured experimentally, and $(\epsilon+P)/\tau_{{\rm imp}}$ is also measured experimentally through the (temperature-dependent) dc conductivity, via (96). The hydrodynamic equations (148) provide clear and direct predictions for experiments, which we detail in this section.

Generic collisions should not respect additional conservation laws besides charge and momentum (we saw in section 4.4 that energy conservation is mostly irrelevant in a Fermi liquid). However, we caution that this universality only holds when the electronic mean free path $\ell_{{\rm ee}}$ is very small compared to all other length scales. When $\ell_{{\rm ee}}$ is comparable to other length scales, the dynamics of correlated electrons can become much more exotic, acquiring sensitive dependence on the details of the Fermi surface [26]. Due to the simplicity of its Fermi surface, however, we expect that graphene is a very good candidate material for observing the simple viscous hydrodynamics described in section 4.4.

6.1. Flow through narrow channels

The simplest signature of hydrodynamics in the Fermi liquid occurs in the flow of an electron fluid through a narrow channel, driven by an electric field: see figure 8. We postulate that the flow is independent of position x along the channel, which is sensible if the channel is long. In the presence of an external electric field, $\partial_i P = - nE_i$ , and so the x-component of (88) reads

Equation (149)


Figure 8. Fluid flow through a narrow channel of width w.


Interestingly, a similar equation shows up in magnetohydrodynamics in the Hartmann flow geometry (see e.g. [155]). This equation can be exactly solved for various boundary conditions. Let us focus on two: (i) if the velocity is pinned to zero at the edges of the channel at $y = \pm w/2$ ('no slip'), then by symmetry we conclude that the solution is given by

Equation (150)

We will discuss the physics of this solution shortly. (ii) if the momentum flux through the boundary is fixed to zero ('no stress'), then

Equation (151)

We can see by inspection that, in this case, the solution to the equations of motion is

Equation (152)

An experimentally easy quantity to measure is the resistance per unit length $\mathcal{R}$ of this channel:

Equation (153)

Using the results above we find

Equation (154)

with $\sigma_{{\rm dc}}$ defined in (96). Let us now discuss the physical consequences of hydrodynamics. With no stress boundary conditions, the flow down the channel is perfectly Ohmic: the second row of (154) is what one finds by solving Ohm's law in a channel of width w. The fluid hardly feels the boundary at all. However, with no slip boundary conditions, the fluid is pinned to the boundary at the edges of the channel. Because the fluid is viscous, the stationary fluid at the edges of the channel 'pulls back' on the fluid that tries to flow down the center of the channel. Hence, the resistance increases as the effective width of the channel becomes smaller. To be more quantitative, we have seen in section 4.6.2 that the length scale λ controls the onset of viscous effects in a momentum relaxing fluid. When $w\gg \lambda$ , one might expect that the flow is approximately Ohmic, but in a channel of effective width $w-2\lambda$ : there is a region of size λ on each side of the channel where viscous drag effectively forbids current from flowing. The explicit computation (154) confirms this. When $w\lesssim \lambda$ , the viscous drag effects permeate the whole channel. Then one finds

Equation (155)

Now the resistance is much more sensitive to the width of the channel than before. This limit is famous in the fluid dynamics literature, where it goes by the name of Poiseuille flow [5]. The velocity profile vx (y) is approximately parabolic:

Equation (156)

Furthermore, the resistance is proportional to the viscosity alone, and is finite even when $\tau = \infty$ . Such effects were first noted by Gurzhi over 50 years ago [19], and this is sometimes called the Gurzhi effect.
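As a concrete check of the no-slip channel physics, the sketch below (with arbitrary illustrative parameter values, not fitted to any experiment) integrates the exact parabolic profile and recovers the standard Poiseuille result $\mathcal{R} = 12\eta/(n^2e^2w^3)$ , which (155) should match up to conventions:

```python
import numpy as np

# Poiseuille flow in a channel of width w with no-slip walls and no momentum
# relaxation (tau -> infinity): eta v''(y) = -n e E.  Illustrative numbers only.
eta = 1.0e-19   # kg/s (sheet viscosity), assumed
n = 1.0e16      # carriers / m^2, assumed
e = 1.602e-19   # C
E = 1.0         # V/m
w = 1.0e-6      # m

y = np.linspace(-w / 2, w / 2, 100001)
v = (n * e * E / (2 * eta)) * ((w / 2)**2 - y**2)   # exact parabolic profile

# Current per unit length of channel, I = int n e v dy (trapezoidal rule).
dy = y[1] - y[0]
I = n * e * np.sum(v[:-1] + v[1:]) * dy / 2

# Resistance per unit length R = E / I; Poiseuille gives 12 eta / (n^2 e^2 w^3).
R_numeric = E / I
R_analytic = 12 * eta / (n**2 * e**2 * w**3)
assert abs(R_numeric / R_analytic - 1) < 1e-6
```

The $w^{-3}$ dependence is the experimentally distinctive feature: halving the channel width raises the viscous resistance eightfold.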

The most dramatic experimental signature of the Gurzhi effect (155) is the fact that

Equation (157)

because (up to logarithms) $\eta \sim T^{-2}$ in a Fermi liquid (see (144)). This result violates a 'theorem' of the conventional theory of semiclassical transport [156], in which adding more microscopic collisions always increases the resistivity; the resolution of this paradox is discussed in some detail in [26]. Although the prediction (157) was made for viscous electron flows (in Fermi liquids), it was not seen for a long time, and so historically there was almost no interest in the hydrodynamic theory of transport in condensed matter physics. However, (155) also predicts an interesting width dependence of $\mathcal{R}$ , which can also serve as an important experimental signature.

In order to see the effects of viscous flow in such channels, one needs to allow for momentum dissipation at the edges of the channel. Are such boundary conditions generic in metals? A simplistic answer would be yes: atomically rough edges could act as microscopic impurities, allowing electrons to lose their momentum at the edges of the channel. We will discuss this question in more detail from a microscopic perspective in section 6.4.1. Experimentally, it is not definitively known what the correct boundary conditions are. The answer is likely sensitive to the material at hand, and perhaps even to details of device fabrication. There are reasons to believe that no stress boundary conditions are more appropriate in graphene: in strong magnetic fields, quasiparticle orbits have been imaged which scatter almost perfectly off of the boundary [157]. This suggests that graphene has atomically smooth edges, which (under pristine conditions) have been observed experimentally [158].

There is some experimental evidence of viscous electron flow through narrow channels, but we defer discussion to section 6.4.1.

6.2. Flow through constrictions

We now turn to a slightly different set-up: flows through constrictions, or narrow openings into a broader region of fluid. The simplest example of such a flow is depicted in the top panel of figure 9; this set-up was considered in [159]. Fluid flows from a region of high chemical potential to a region of lower chemical potential through a constriction of width w. We assume that the current is blocked from flowing away from the constriction by an infinitely thin barrier (of course, this is a mathematical simplification). Neglecting momentum-relaxing electronic collisions, we simply need to solve (92) in the limit $\lambda=\infty$ , subject to suitable boundary conditions. The solution of the problem is rather technical and relies on techniques from complex analysis, and we will not describe it here. The bottom panel of figure 9 shows the current distribution through the constriction: in the hydrodynamic regime of flow, it is given by $J(x) \propto \sqrt{(w/2)^2 - x^2}$ , where x is the distance from the center of the constriction.
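The semicircular profile makes a simple quantitative prediction: the current density at the center of the constriction exceeds a uniform profile carrying the same total current by a factor $4/\pi \approx 1.27$ . A short numerical check:

```python
import math
import numpy as np

# Viscous current profile through a constriction of width w:
# J(x) proportional to sqrt((w/2)^2 - x^2).  Compare its peak to a uniform
# (ballistic-like) profile carrying the same total current.
w = 1.0
x = np.linspace(-w / 2, w / 2, 200001)
J = np.sqrt((w / 2)**2 - x**2)

dx = x[1] - x[0]
total = np.sum(J[:-1] + J[1:]) * dx / 2   # trapezoidal integral of J(x)
J_uniform = total / w                      # uniform profile, same total current

# The semicircular profile peaks at 4/pi ~ 1.27 times the uniform value.
ratio = J.max() / J_uniform
assert abs(ratio - 4 / math.pi) < 1e-3
```

This bunching of current at the center of the opening is exactly the viscous clustering visible in the lower panel of figure 9.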


Figure 9. Top panel: the chemical potential of the electron fluid as current flows from the top to the bottom through a narrow constriction of width w. The curved black lines denote the streamlines along which elements of fluid will flow. Bottom panel: the spatial distribution of the electrons as they flow through the constriction; different lines correspond to different ratios $\ell_{{\rm ee}}/w$ . In the ballistic limit, the non-interacting electrons are equally likely to be found anywhere in the constriction, but in the viscous regime they cluster at the center of the constriction. The wiggles in the ballistic regime of the lower panel are a consequence of the truncation of the kinetic equations described in section 6.4, and should not detract from the physics. Reproduced with permission from [159].


The simplest thing to measure, however, is again the total electrical resistance $\mathcal{R}$ , defined as the voltage difference (far from the constriction) between the top and bottom half planes, divided by the current flowing through the constriction. One finds [159]

Equation (158)

This formula is somewhat similar to (155), up to the difference in the power of w. The smaller power of w appearing here is a consequence of the fact that the constriction is infinitely thin, while the channel is infinitely long. Many of the signatures of viscous flow that can, in principle, be seen in flows through narrow channels appear in flows through constrictions as well: most notably, the temperature dependence $\mathcal{R}\sim T^{-2}$ . However, as we will see in section 6.4.1, the flow through a constriction has proven a more useful setting to look for signatures of viscous flows experimentally.

An alternative way to measure the impact of viscosity on transport may be to study the flow around a circular obstacle [125, 160–162]. Here, one measures the resistance associated with the obstacle, which is proportional to the viscosity, similar to (158).

6.2.1. Negative nonlocal resistance.

A slight variation on the flow through a single constriction is the flow of current between a narrow source and a narrow drain [105, 163]. The precise nature of such a flow depends on the specific geometry studied. The simplest case is depicted in figure 10: a source and drain of current are located on opposite sides of an infinitely long slab of width W. Solving the momentum-relaxing Navier–Stokes equation (88), one finds two qualitatively different behaviors depending on whether the momentum relaxation length λ is large or small compared to W. If $\lambda \ll W$ , then the flow of current is essentially Ohmic, and the electrochemical potential (and thus voltage measured) will decrease monotonically along flow lines from source to drain. However, if $\lambda \gg W$ , momentum relaxation is negligible and one typically finds sign-changing voltage profiles, as shown in figure 10. The observation of such negative nonlocal resistance is a key signature that Ohmic transport theory is not applicable. Another important, possibly experimentally accessible difference between Ohmic/viscous transport arises from studying local Joule heating [163].


Figure 10. The voltage or chemical potential of a Fermi liquid in the viscous (top) or Ohmic (bottom) regime. In the Ohmic regime, the potential is a monotonically decreasing function from source to drain, while in the viscous regime the voltage drop is strongest just to the sides of the source. This negative nonlocal resistance is not possible in an Ohmic limit. Reprinted by permission from Macmillan Publishers Ltd: Nature Physics [163], Copyright 2016.


Perhaps a more direct signature of viscous current flow is the 'backflow' of current, or the formation of vortices. The arrows in figure 10 depict the formation of vortices in such a setup. A natural question is whether vortices always form whenever negative nonlocal resistance is obtained. Unfortunately, the answer is no [164, 165]. Negative nonlocal resistance is often observed in viscous regimes near the source or drain of current, and is not necessarily sensitive to the locations of other boundaries or sources [165]. In contrast, the existence of vortices is found to be much more sensitive to global boundary conditions. Intuitively, vortices form when the fluid flow can interfere with itself as it flows around the geometry. Some geometries turn out to only exhibit backflow with a large enough viscosity η, whereas others have backflow for any $\eta \ne 0$ [165].

Another subtlety with predicting vortex flow from nonlocal resistance measurements is that multiple current distributions, obeying different boundary conditions, can lead to the same potential distributions [164]. A measurement of nonlocal negative resistance is not sufficient to predict the current flow. This is a consequence of the fact that (92) is a fourth order differential equation, and not second order as in the Ohmic case. Different current distributions that lead to the same potential distribution can be distinguished by a magnetic field [164].

This negative nonlocal resistance has been observed experimentally in graphene [14], using the geometry depicted in figure 11. For the most part, a critical temperature is required before the onset of negative nonlocal resistance. This is consistent with the intuition that hydrodynamic, viscous effects are necessary to see this negative voltage, although we note that ballistic effects can also give negative voltages [165]. At higher temperatures, electron–phonon scattering becomes non-negligible, and transport becomes phonon-dominated and conventional. Finally, observe that there is no nonlocal resistance in the Dirac fluid ($n\approx 0$ ). We will explain why this is so at the start of section 7.


Figure 11. Left: experimental set-up of the current source/drain and voltage probes for the nonlocal resistance measurements. Right: measured nonlocal resistance as a function of temperature T and density n. Red denotes a positive, Ohmic resistance measurement; blue denotes a negative, viscous resistance. Negative nonlocal resistance is observed only in the Fermi liquid, at temperatures where the electron–electron scattering length is smaller than both the electron-impurity and electron–phonon scattering lengths. From [14]. Reprinted with permission from AAAS.


Using their experimental data, the authors of [14] estimated the kinematic viscosity of the graphene Fermi liquid (for moderate doping) to be

Equation (159)

This result is consistent with the theoretical predictions discussed in section 5.4. For comparison, the kinematic viscosity of water at room temperature is $\nu \sim 10^{-6}~{\rm m}^2~{\rm s}^{-1}$ . ν is so large for electrons in graphene due to both the weak electron–electron interactions of the Fermi liquid and the very high $v_{{\rm F}}$ .
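The comparison with water can be reproduced with a one-line estimate, taking the 2d kinetic-theory relation $\nu \sim v_{\rm F}\ell_{\rm ee}/4$ and an assumed mean free path of a few hundred nanometers (both inputs are illustrative, not measured values):

```python
# Rough kinematic viscosity estimate nu ~ v_F * l_ee / 4; the 1/4 is the
# standard kinetic-theory factor, and l_ee is an assumed illustrative value.
vF = 1.0e6      # m/s, graphene Fermi velocity
l_ee = 4.0e-7   # m, assumed electron-electron mean free path

nu_graphene = vF * l_ee / 4
nu_water = 1.0e-6   # m^2/s, kinematic viscosity of water at room temperature

print(nu_graphene)               # ~0.1 m^2/s
print(nu_graphene / nu_water)    # ~1e5: five orders of magnitude above water
```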

6.3. Viscometry

So far, we have discussed experiments that see clear hints of hydrodynamic flow, but for practical reasons they turn out to be rather non-ideal measurements of the electronic viscosity directly. The essential challenge is that (as we will see in great detail later) transport measurements are quite sensitive to momentum relaxation, and it can be challenging to disentangle this effect.

So let us now briefly discuss a few alternative ideas for how to measure the electronic viscosity of a metal. One proposal which has received a lot of attention is the Dyakonov–Shur instability [166], which occurs in a fluid with a uniform background velocity flow, subject to a pair of exotic boundary conditions at the edges of the device (the density n is fixed at one end of the flow, and the current Jx is fixed at the other). Such an instability, which would be experimentally observed via the detection of spontaneous ac currents in electronics, is a dramatic and surprising feature of fluid motion. It arises from the amplification of sound waves as they reflect back and forth between the two walls, where very different boundary conditions are imposed [166]. Taking viscous dissipation into account, one estimates that within the hydrodynamic limit, this effect is only visible when the background velocity v0 is large enough [166]:

Equation (160)

with L the length of the device, and ν the kinematic viscosity. As we have noted previously, it is possible to make v0 rather large in many metals, and so detection of this instability ought to be possible. (160) gives a simple and unambiguous measure of the kinematic viscosity of the electron fluid, and so could in theory serve as a very precise viscometer. In practice, the usefulness of (160) will be limited by electron-impurity (momentum-relaxing) scattering. There is also debate in the literature as to whether this instability could reappear in the ballistic limit [167, 168], which may further complicate matters. A discussion of this instability in graphene can be found in [169].

One interesting observation about the Dyakonov–Shur instability is that it relies on the nonlinear structure of hydrodynamics. Another proposal for measuring ν that also relies on nonlinear terms in the Navier–Stokes equations is to study the frequency-dependent response of an electron fluid in the Corbino disk geometry: see figure 12. The inspiration for this measurement is two-fold. First, we note that angular fluid flows give rise to radial pressure (and hence voltage) drops: schematically,

Equation (161)


Figure 12. Left: the Corbino disk geometry, with time-dependent magnetic flux $\Phi(t)$ , and potential measurements across the disk. Right: the measured electric potential drop as a function of frequency. The particular scales on the axes will vary from material to material but the shape of the curves is universal in the hydrodynamic limit. In particular, the drop in the voltage at low frequency allows us to estimate the viscosity. Reprinted figure with permission from [170], Copyright 2014 by the American Physical Society.


Secondly, we observe that $v_\theta(r)$ will exponentially decay in a viscous flow oscillating with frequency ω on length scales larger than

Equation (162)

Then, the simple observation is that the pressure drop ${\rm \Delta}P$ , computed in (161), will get much smaller once ξ is small compared to the radii of the Corbino disk geometry. See figure 12 for the quantitative result found by solving the Navier–Stokes equations. In order to drive the fluid in such an angular flow, one threads a time-dependent magnetic field through the center of the disk. Maxwell's equations then generate a time-dependent angular electric field, even outside of the solenoid through which the magnetic field is threaded. This electrical forcing is useful as it allows us to drive only the electronic degrees of freedom. Unfortunately, it is not easy to fabricate graphene in the annular shape required to realize such an experiment, and so such a viscometer has not been created yet. A similar proposal in a different geometry was recently given in [171].
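To estimate where the voltage drop should collapse in such an experiment, one can ask at what drive frequency ξ falls below the device scale. With an assumed $\nu \sim 0.1~{\rm m}^2~{\rm s}^{-1}$ and a micron-scale disk (both illustrative numbers), this happens at tens of GHz:

```python
import math

# Viscous penetration depth xi ~ sqrt(nu / omega) (162).  Estimate the drive
# frequency at which xi shrinks below an assumed 1-micron device scale.
nu = 0.1       # m^2/s, assumed kinematic viscosity
r = 1.0e-6     # m, assumed device length scale

omega = nu / r**2          # rad/s at which xi ~ r
f = omega / (2 * math.pi)  # Hz
print(f)   # ~1.6e10 Hz: tens of GHz
```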

One final way to measure the viscosity of an electron fluid in two dimensions is to study magnetotransport. We will discuss this in more detail in section 7.6, but let us emphasize for now that such a viscometer is unlikely to give a quantitative measurement of η. However, it may be sufficient to obtain the qualitative magnitude and temperature dependence of η. Along these lines, we note that a very simple model of viscous electronic flow is able to explain certain non-trivial features of magnetotransport in GaAs [172].

6.4. The ballistic-to-hydrodynamic crossover

So far, we have seen some simple signatures of viscous electron flow in a Fermi liquid. However, as we observed in section 5, to be cleanly in the hydrodynamic regime requires $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}}$ to be very small compared to all length scales in the problem. This is a challenge for graphene, where device sizes are generally $\lesssim $ $ 10\;~\mu{\rm m}$ , while $ \newcommand{\e}{{\rm e}} \ell_{{\rm ee}} \gtrsim 0.5\;~\mu{\rm m}$ is not an unreasonable expectation for the Fermi liquid regime. As such, it is useful to have a solvable model that interpolates between hydrodynamics and the theory of a non-interacting Fermi gas. The discussion in section 5 makes clear that kinetic theory provides one such approach. In this section, we will describe a particular toy model of kinetic theory, appropriate for graphene. Although this model has some history [13], it has only recently been studied comprehensively [125, 159, 162].

As we have already seen in our discussion of the Fermi liquid limit of hydrodynamics in section 4.4, in a homogeneous fluid the effects of energy conservation provide only ${\rm O}((T/\mu){\hspace{0pt}}^2)$ corrections to the low temperature dynamics. Hence we propose (without proof) that in kinetic theory, the full distribution function $f({\bf x}, {\bf p})$ may be approximated as (at low temperature)

Equation (163)

Here ${\rm \Theta}(x)$ is the Heaviside step function, $p=\vert {\bf p}\vert $ and $ \newcommand{\e}{{\rm e}} \theta \equiv \arctan(\,p_y/p_x)$ . For simplicity, we have kept track of the distribution function f only in the conduction band. What (163) asserts is that, to good approximation, the distribution function of the low temperature Fermi liquid can be approximated by the spatially inhomogeneous 'sloshing' of a sharp Fermi surface. One can then approximate the solution to the full Boltzmann equation by

Equation (164)

Rather than performing a complicated microscopic calculation of $\mathcal{C}[\Phi]$ , we will simply guess the answer using a 'relaxation time approximation' [130] 13 , which relaxes Φ towards thermal equilibrium at a uniform rate $\tau^{-1}_{{\rm ee}}$ . To be more explicit, it is instructive to write

Equation (165)

Then one finds the equations

Equation (166)

These equations can (in some circumstances) be efficiently solved using fancy techniques [125, 159, 162]. We have also taken the liberty of accounting for momentum relaxation in these equations. Of course, this model does not yet specify what any of these relaxation times are. This can actually be a cumbersome question to address, even within kinetic theory, as one must explicitly evaluate the linearized collision integral. Along these lines, recent work [148, 149] has presented a particularly efficient scheme for evaluating such integrals. They also find that the relaxation times for even and odd harmonics $a_n$ can be parametrically different, leading to novel physics on intermediate length scales. We refer the reader to this pair of papers for more details; in what follows, we use the simpler model of (166).

Following the procedure of section 5.1.1, the qualitative physics of (166) can be understood relatively easily. For simplicity, we set $\tau_{{\rm imp}}^{-1} = 0$ for the moment. On time scales $t\ll \tau_{{\rm ee}}$ , the right hand side is very small, and one approximately finds the equation $\partial_t \Phi + v_i(\theta) \partial_i \Phi = 0$ . This simply states that particles propagate ballistically at a fixed velocity. On time scales $t\gg \tau_{{\rm ee}}$ , one can show that $a_{\pm 2} \approx - \frac{1}{2}v_{{\rm F}}\tau_{{\rm ee}} (\partial_x \mp {\rm i}\partial_y) a_{\pm 1}$ , and that (166) approximately closes to a set of equations for a0, a1 and a−1. Furthermore, the fluctuations in the electronic number density are given by

$\delta n \propto a_0,$    (167)

and a similar calculation shows that $a_1 + a_{-1} \propto v_x$ , while $(a_1 - a_{-1})/{\rm i} \propto v_y$ . A straightforward calculation then reveals that the resulting equations for a−1, a0 and a1 are exactly (77), the equations of hydrodynamics in the low temperature limit. The explicit expressions for the speed of sound and the kinematic viscosity are

$v_{{\rm s}} = \frac{v_{{\rm F}}}{\sqrt{2}}, \qquad \nu = \frac{1}{4}v_{{\rm F}}^{2}\tau_{{\rm ee}},$    (168)

and are consistent with our previous discussions. A pedagogical treatment of these points, and more computational details, may be found in [125, 159].
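These limiting values can be checked directly in a numerical truncation of the harmonic equations (166). The sketch below is our own illustrative implementation (units $v_{{\rm F}} = \tau_{{\rm ee}} = 1$ , $\tau_{{\rm imp}}^{-1} = 0$ , truncation order N and wavenumber k chosen for convenience): it diagonalizes the linearized streaming-plus-collision operator for plane waves ${\rm e}^{{\rm i}kx}$ and extracts the sound speed and the transverse momentum diffusion constant from the three slow (hydrodynamic) eigenvalues:

```python
import numpy as np

vF, tau_ee, k, N = 1.0, 1.0, 0.01, 25   # illustrative units and truncation

n = np.arange(-N, N + 1)
dim = 2 * N + 1
# Streaming: v_x = vF cos(theta) couples the harmonic a_n to a_{n+1}, a_{n-1}.
C = 0.5 * (np.eye(dim, k=1) + np.eye(dim, k=-1))
# Relaxation-time collision operator: harmonics with |n| >= 2 decay at
# 1/tau_ee, while a_0 (charge) and a_{+-1} (momentum) are conserved here.
Gamma = np.diag(np.where(np.abs(n) >= 2, 1.0 / tau_ee, 0.0))

# d a / dt = A a  for a plane-wave perturbation e^{ikx}
A = -1j * k * vF * C - Gamma
ev = np.linalg.eigvals(A)

slow = ev[ev.real > -0.5 / tau_ee]         # the three hydrodynamic modes
sound = slow[np.abs(slow.imag) > 0.1 * k]  # two propagating sound modes
shear = slow[np.abs(slow.imag) < 0.1 * k]  # one diffusive shear mode

v_s = np.abs(sound.imag).mean() / k   # sound speed
nu = -shear.real[0] / k**2            # transverse momentum diffusivity
print(v_s, nu)
```

As $k v_{{\rm F}}\tau_{{\rm ee}} \rightarrow 0$ the extracted values approach $v_{{\rm F}}/\sqrt{2} \approx 0.707$ and $v_{{\rm F}}^2\tau_{{\rm ee}}/4 = 0.25$ , consistent with (168).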

Let us emphasize once more, however, that the virtue of this model is that it makes the entire crossover between the ballistic and hydrodynamic regimes relatively tractable numerically. The assumption of a circular Fermi surface is also particularly well-suited to graphene, although it may not be appropriate for other materials [26].

6.4.1. Flow through narrow channels and constrictions, revisited.

With this toy model for the ballistic-to-hydrodynamic crossover at hand, let us return to the question of flow through narrow channels and constrictions. We are now ready to address more experimental observations of hydrodynamic electron flow.

We begin by discussing the flow of electrons through a narrow channel, as discussed in section 6.1. Recall that the boundary conditions must be chosen such that momentum can relax at the boundary—for example, the conventional no-slip boundary conditions of hydrodynamics. The equations (166) can be solved numerically in this setting, and here we simply focus on the qualitative features. The resistance of the channel is (up to dimensional prefactors)

Equation (169)

This equation can be understood as follows. The resistance $R\sim 1/(w\tau_{{\rm mom}})$ , where $\tau_{{\rm mom}}$ is the time scale over which momentum relaxes, including relaxation due to the presence of a boundary. If $\tau_{{\rm imp}}^{-1}\rightarrow 0$ , then there is a simple competition between ballistic and hydrodynamic effects. If $v_{{\rm F}} \tau_{{\rm ee}} \gg w$ , then quasiparticles will approximately bounce back and forth between the edges of the channel, and at each bounce will lose much of their forward momentum. The time between bounces is given by $w/v_{{\rm F}}$ —the time it takes to travel between the two sides of the channel. When $v_{{\rm F}}\tau_{{\rm ee}} \ll w$ , collisions occur frequently and the theory is described by hydrodynamics. Now the time scale required to relax momentum is set by the diffusion of transverse momentum. This mode was described in (69a). Because dissipation is now diffusive, the time scale is $w^2/D_{{\rm mom}}$ , where $D_{{\rm mom}} \sim v_{{\rm F}}^2\tau_{{\rm ee}}$ is the diffusion constant for transverse momentum. Accounting for a finite $\tau_{{\rm imp}}$ , we see that if the channel width w is too large, then bulk momentum-relaxing scattering dominates R. This is, in principle, an easy effect to account for experimentally, because the bulk resistivity can be measured independently in a multitude of other geometries.
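The scaling argument above can be packaged into a schematic interpolation formula (our own sketch, not the exact kinetic-theory result): $R \sim w^{-1}[\tau_{{\rm imp}}^{-1} + (w/v_{{\rm F}} + w^2/v_{{\rm F}}^2\tau_{{\rm ee}}){\hspace{0pt}}^{-1}]$ , which reproduces the three powers of w in the three regimes:

```python
import math

def channel_resistance(w, vF=1.0, tau_ee=1.0, tau_imp=1e9):
    """Schematic channel resistance, up to overall prefactors:
    R ~ (1/w) / tau_mom, with 1/tau_mom the sum of bulk impurity
    scattering and boundary scattering (ballistic or viscous)."""
    tau_boundary = w / vF + w**2 / (vF**2 * tau_ee)
    return (1.0 / w) * (1.0 / tau_imp + 1.0 / tau_boundary)

def local_power(w):
    """Exponent n in R ~ w^{-n}, from a logarithmic derivative."""
    r1, r2 = channel_resistance(w), channel_resistance(1.01 * w)
    return -math.log(r2 / r1) / math.log(1.01)

# Ballistic (w << vF tau_ee), viscous (w >> vF tau_ee), then ohmic:
print(round(local_power(1e-4)))  # -> 2
print(round(local_power(1e2)))   # -> 3
print(round(local_power(1e6)))   # -> 1
```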

The main experimental evidence for (169) comes from looking for the various powers of $R\propto w^{-n}$ in (169): n  =  1 signifies conventional Ohmic flow, n  =  2 signifies relatively conventional ballistic flow, and n  =  3 signifies hydrodynamic flow. As shown in figure 13, experiments have seen evidence for $R(w)$ decaying faster than 1/w2 in GaAs [13], ${\rm PdCoO}_2$ [17] and ${\rm WP}_2$ [18]; the evidence in ${\rm WP}_2$ is particularly striking. While the w-dependence of $R(w)$ is evidence for some kind of 'hydrodynamic' effect, the expected non-monotonic temperature dependence of $R(T)$ at fixed channel width w has not been seen cleanly in the above experiments. Indeed, it is unclear whether the toy model above is appropriate for either ${\rm PdCoO}_2$ or ${\rm WP}_2$ . Both have more complicated band structures than graphene, and the effective hydrodynamics at the ballistic crossover could be more complicated [26].


Figure 13. A normalized resistance wR as a function of inverse channel width 1/w. When 1/w is larger, we observe that wR grows faster than 1/w. This is consistent with a hydrodynamic regime with $\tau_{{\rm ee}} =10 \tau_{{\rm imp}}$ (red curve), but not $\tau_{{\rm ee}} = \tau_{{\rm imp}}$ (blue curve). From [17]. Reprinted with permission from AAAS.


Next, let us return to the flow of electrons through a narrow constriction of width w, as discussed in section 6.2. Numerical evidence [159] suggests that the generalization of (158) to account for ballistic effects is simply

Equation (170)

Here R0 is the resistance of the non-interacting electron gas, due entirely to ballistic flow through the constriction. Viscous effects enhance transport beyond the ballistic limit. This effect has been observed directly in the flow of electrons through constrictions cut into samples of graphene [16]: see figure 14. The dramatic non-monotonic temperature dependence is the clearest experimental indication yet of the onset of the viscous transport regime of a Fermi liquid.


Figure 14. The non-monotonic temperature dependence of the resistance $R(T)$ of a w  =  500 nm constriction is a clear signature of viscous effects. Reprinted by permission from Macmillan Publishers Ltd: Nature Physics [16], Copyright 2017.


7. Transport in the Dirac fluid

The phenomena we have described in section 6 are specific to the doped Fermi liquid regime, at least in graphene. As we noted in section 4.3, the dynamics of charge decouples from energy and momentum in a charge-neutral relativistic fluid. This regime of charge neutrality is experimentally accessible in graphene. The linear response phenomena described in section 6 all have analogues at charge neutrality, but with energy density and temperature playing the role of charge density and chemical potential: see, for example, [173]. This makes experiments to detect such flows challenging. Indeed, at charge neutrality, the linearized equations governing charge transport are (neglecting charge puddles) simply

Equation (171)

This is, of course, the equation governing Ohmic charge diffusion in a medium with conductivity $\sigma_{\textsc{q}}$ . Electrical transport in an exceptionally pure Dirac fluid looks identical to transport in a conventional metal.

The interesting hydrodynamic response at charge neutrality is in the energy-momentum sector. In this section, we will discuss the signatures of the Dirac fluid in thermoelectric transport. These are more complicated experiments to perform than measurements of electrical resistance, and so we will also review the subtleties required to observe these phenomena experimentally.

7.1. 'Mean-field' hydrodynamic model of thermoelectric transport

Let us begin with a review of a 'mean field' model of thermoelectric transport within hydrodynamics, following [98]. By thermoelectric transport, we mean the following: consider a system perturbed by a background electric field and/or temperature gradient, each of which may be time-dependent. In response to this perturbation, charge currents Ji and heat currents Qi will begin to flow in the system. If the electric fields and temperature gradients are small, then we can write

Equation (172)

Due to time-translation invariance, the matrix of coefficients above depends only on the difference $t-t^\prime$ , and so we will often find it useful to compute the Fourier transform of the above expression:

Equation (173)

We note without proof the following useful facts: (i) that the heat current in graphene can be defined as

Equation (174)

and is in fact equivalent to $Ts^{i}$ , where $s^{i}$ is the spatial part of the entropy current defined in (55); (ii) the coefficients $\sigma^{ij}$ are complex-valued and must be analytic in the upper-half complex plane.

As we introduced in section 4.6, a simple way to account for momentum relaxation is to add a term to the hydrodynamic equations that spoils the momentum conservation equation, as in (84). One of our goals in section 7.2 will be to rigorously assess (in a certain limit) the validity of this approximation. Nevertheless, we proceed for the moment assuming the validity of (84), and use these equations to evaluate Ji and Qi in the presence of background electric fields and temperature gradients.

The calculation is actually very straightforward. Because (84) do not break spatial homogeneity, we may look for an ansatz where the velocity vi is a constant. The resulting equation can be rearranged to the following simple form:

Equation (175)

Using our formal expression for the charge current from section 4.2.2, and simplifying to the linear response limit where Ei and vi are small (recall that Ei is analogous to $-\partial^i\mu$ ):

Equation (176)

Using (174) along with the two equations above, simple algebra leads us to

Equation (177a)

Equation (177b)

Equation (177c)

In the limit $\tau_{{\rm imp}} \rightarrow \infty$ , the terms proportional to $\sigma_{\textsc{q}}$ can be neglected in (177). We then find a conventional Drude peak in the conductivity. Indeed, in a non-relativistic theory, one would replace $ \newcommand{\e}{{\rm e}} \epsilon+P \approx nmv_{{\rm F}}^2$ , and $\rho = -ne$ , with n the number density of quasiparticles. The conventional argument for the Drude peak assumes that there are long-lived quasiparticles, and in many cases relies on a crude 'relaxation time' approximation [1] that is not equivalent, in any sense, to the derivation above. It is clear from our derivation of the Drude peak that the time scale $\tau_{{\rm imp}}$ in the conductivity is related to momentum relaxation. This intuition has been made quite precise in [174, 175]; see [21] for a review. Interestingly, while it is quite common to interpret the dc conductivity according to the Drude formula in ordinary metals, most ordinary metals do not exhibit a sharp Drude peak in their ac conductivity. This is due to the many competing scattering pathways, such as interband transitions, that complicate dirty samples of conventional metals. Often, sharp Drude peaks are observed experimentally only in very pure systems, or in correlated electron materials [176]. While in much of the literature, scattering rates are estimated from the value of the dc conductivity, we emphasize that the extraction of a meaningful scattering time from the dc conductivity can be difficult, as we will see below.
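The Lorentzian shape of the Drude peak, and the fact that its width is set by momentum relaxation, can be seen in a minimal numerical sketch. The prefactors below are schematic (illustrative constants in units where e  =  1), following the structure of (177a):

```python
def sigma_ac(omega, rho=1.0, enthalpy=1.0, vF=1.0, tau_imp=100.0, sigma_q=0.0):
    """Schematic ac conductivity: a Drude peak of width 1/tau_imp on top
    of a frequency-independent 'quantum critical' contribution sigma_q.
    Prefactor conventions are illustrative, not exact."""
    return sigma_q + (rho**2 * vF**2 / enthalpy) / (1.0 / tau_imp - 1j * omega)

peak = sigma_ac(0.0).real
half = sigma_ac(1.0 / 100.0).real   # evaluate at omega = 1/tau_imp
print(half / peak)  # -> 0.5: Re(sigma) falls to half its dc value at omega = 1/tau_imp
```

The half-width at half-maximum of ${\rm Re}\,\sigma(\omega)$ is exactly $1/\tau_{{\rm imp}}$ , which is why a sharp Drude peak directly encodes the momentum relaxation time.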

Nevertheless, let us assume the validity of (177) beyond the strict $\tau_{{\rm imp}} \rightarrow \infty$ limit. Each of the thermoelectric conductivities consists of a sum of two different kinds of terms—a Drude peak which is sensitive to momentum relaxation, and a diffusive contribution which is not. This sum violates Matthiessen's rule, which asserts that the resistivity is a sum of contributions from each of the possible scattering mechanisms 14 . The logic behind this sum is as follows. There are two kinds of mechanisms that can contribute to the conductivities in graphene (or any other relativistic fluid). Firstly, a charged/thermal fluid will collectively flow unimpeded until it scatters off of obstacles, and this is responsible for the Drude behavior. But there is a second, parallel way for charge current to flow—particles and holes can move in opposite directions. This flow carries no momentum or energy, and so should not be sensitive to momentum relaxation. This flow, schematically depicted in figure 15, is responsible for the non-vanishing $\sigma_{\textsc{q}}$ , and can contribute to the conductivities.


Figure 15. The flow of electrons (red) versus holes (blue) in the charge neutral Dirac fluid. Both electrons and holes move in the same direction in a temperature gradient, but move in opposite directions in an electric field.


It is common in experiments not to measure $\bar\kappa^{ij}$ , but rather

$\kappa^{ij} = \bar\kappa^{ij} - T\bar\alpha^{ik}\left(\sigma^{-1}\right){\hspace{0pt}}^{kl}\alpha^{lj}.$    (178)

This can be understood as the coefficient of proportionality between a heat current and temperature gradient, subject to the boundary conditions that no electric current flows:

Equation (179)

Using (177), we find that

Equation (180)

Interestingly, the factor of $\tau_{{\rm imp}}$ in the denominator of $\sigma(n)$ cancels the overall prefactor of $\tau_{{\rm imp}}$ whenever $ n \ne 0$ . Thus, this measured κ remains finite even when σ becomes infinite in a clean theory.

This leads to a dramatic violation of the Wiedemann–Franz law, which states that in a conventional metal [1]

$\mathcal{L} \equiv \frac{\kappa}{T\sigma}$    (181)

is given by

$\mathcal{L}_{{\rm WF}} = \frac{\pi^{2}}{3}\frac{k_{{\rm B}}^{2}}{e^{2}}.$    (182)

The coefficient $\mathcal{L}$ is often called the Lorenz number, and it equals $\mathcal{L}_{{\rm WF}}$ whenever elastic impurity or phonon scattering dominates transport. However, in the hydrodynamic regime we find that

Equation (183)

At n  =  0, we see that $\mathcal{L} \rightarrow \mathcal{L}_0$ , but for large enough n we observe that $\mathcal{L}\rightarrow 0$ . Hence, a relativistic plasma like the Dirac fluid with long-lived conserved momentum violates the Wiedemann–Franz law both from above and below. The violation from above is due to the fact that the charge neutral plasma has an intrinsically finite σ, but a diverging κ which is kept finite only by disorder. The violation from below, for densities $n\ne 0$ , is due to the locking of charge and heat currents: the most efficient way to transport both charge and heat is to create local momentum density. We will discuss the experimental observation of (183) in section 7.3. We also note that $\mathcal{L} \ll \mathcal{L}_{{\rm WF}}$ in a very clean Fermi liquid, where the momentum relaxation time is much longer than the electron–electron scattering time [177]. For further discussion of the Wiedemann–Franz law in correlated electron systems with Fermi surfaces, see [178].
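The competition between the Drude and $\sigma_{\textsc{q}}$ contributions can be made concrete in a toy numerical evaluation of the dc ($\omega = 0$) limit of (177). The sketch below uses the standard relativistic-hydrodynamics form of the dc thermoelectric coefficients (as in [98]); the thermodynamic inputs are illustrative constants rather than graphene's actual equation of state, and we set e  =  kB  =  1:

```python
def dc_conductivities(rho, mu, sigma_q=1.0, s=1.0, T=1.0,
                      enthalpy=1.0, vF=1.0, tau_imp=100.0):
    """dc thermoelectric coefficients of a relativistic fluid with weak
    momentum relaxation (the omega -> 0 limit of (177), schematic
    conventions); enthalpy = epsilon + P."""
    drude = vF**2 * tau_imp / enthalpy
    sigma = sigma_q + rho**2 * drude
    alpha = s * rho * drude - mu * sigma_q / T
    kappa_bar = s**2 * T * drude + mu**2 * sigma_q / T
    kappa = kappa_bar - T * alpha**2 / sigma   # open-circuit kappa, as in (178)
    return sigma, alpha, kappa

T = 1.0
# Charge neutrality: sigma stays finite (= sigma_q) but kappa ~ tau_imp is large.
sigma0, _, kappa0 = dc_conductivities(rho=0.0, mu=0.0)
L0 = kappa0 / (sigma0 * T)
# Far from neutrality: charge and heat currents lock together and the
# Lorenz ratio collapses.
sigma1, _, kappa1 = dc_conductivities(rho=10.0, mu=10.0)
L1 = kappa1 / (sigma1 * T)
print(L0, L1)   # L0 >> L1: violation from above at n = 0, from below at large n
```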

7.2. Hydrodynamic theory of transport through charge puddles

In this section, we will finally relax the assumption that momentum relaxes in a 'homogeneous' way. Indeed, as we saw in section 2.2, experimental graphene is not homogeneous, but is described by a landscape of 'charge puddles' where the local value of the chemical potential $\mu({\bf x})$ varies from one point to the next: see figure 16. This inhomogeneity is sufficient to relax momentum, and render the transport coefficients finite. In this section, we describe the transport coefficients in such an inhomogeneous medium, when the inhomogeneity is very long wavelength. In particular, if ξ is the typical size of a charge puddle, and $ \newcommand{\e}{{\rm e}} \xi \gg \ell_{{\rm ee}}$ , then following [179, 180] we can compute the conductivity of the medium by solving the hydrodynamic equations.


Figure 16. When the mean free path for electronic collisions is short compared to the size of charge puddles, then transport is described by the hydrodynamic equations. Momentum relaxation arises from the spatial variations in the thermodynamic coefficients such as entropy density (green) and charge density (blue/red). In graphene, the charge density may locally switch signs. Reprinted figure with permission from [99], Copyright 2016 by the American Physical Society.


Indeed, all non-hydrodynamic modes will relax on length scales short compared to the inhomogeneity, and will play no role in transport. This follows from the fact that transport is sensitive to momentum relaxation, and not directly to momentum-conserving electron–electron scattering, as we saw previously; it can be seen explicitly within the full kinetic theory of transport [26]. We now follow [99] and describe the solution to the transport problem within relativistic hydrodynamics.

Our starting point is to generalize (82), in the stationary limit, to study the response of an inhomogeneous fluid in a background electric field or temperature gradient. For simplicity we assume that the disorder couples to the chemical potential, as in [99]; the case with inhomogeneous strain disorder is described in [181, 182]. We also focus only on time-independent solutions. The procedure is straightforward, and we find

Equation (184a)

Equation (184b)

Equation (184c)

Here $ \newcommand{\mdelta}{{\delta}} \mdelta \mu$ denotes the deviation of the chemical potential from the inhomogeneous background $\mu_0({\bf x})$ . The coefficients such as n and s in the above equations are functions of $n({\bf x}) = n(\mu_0({\bf x}), T)$ , for example. These equations are a set of elliptic partial differential equations and are straightforward to solve numerically, as in [99].

Suppose that the local fluctuations in the chemical potential are small:

$\mu_0({\bf x}) = \bar\mu_0 + u\, \hat\mu({\bf x}),$    (185)

with $\hat\mu$ an O(1) function and $u\ll T, \bar\mu_0$ a perturbatively small parameter governing the strength of the chemical potential fluctuations. In this case, one can perturbatively solve (184) order by order in u. The dc thermoelectric conductivities are given by (177) with $\omega=0$ and (see [99] for a precise formula) 15

Equation (186)

Note that in the above formula, all thermodynamic coefficients are evaluated in the homogeneous background with uniform chemical potential $\bar\mu_0$ ; ξ is the typical size of a charge puddle. A few comments are in order. Most importantly, we see that the momentum relaxation time $\tau_{{\rm imp}}$ which we have—so far—treated as a simple constant is in fact quite non-trivial. It will generally depend on μ and T in a complicated way. In the smooth disorder limit $\xi \rightarrow \infty$ , the conductivity becomes limited by $\sigma_{\textsc{q}}$ . An analogous effect was first observed in Galilean-invariant fluids in [179], and was found in general settings in [25, 26]. However, for fluids which are not deep in the hydrodynamic limit (in particular, $ \newcommand{\e}{{\rm e}} \xi \sim \ell_{{\rm ee}}$ ), the two terms above may be comparable and even the temperature dependence of the transport coefficients becomes quite sensitive to microscopic details of the sample.

A non-perturbative technique for analyzing (184) is to bound the transport coefficients using a variational principle [25, 180] 16 . For simplicity, let us describe the bound on the electrical conductivity. Intuitively, one looks for a flow of charge and heat currents which minimizes entropy production; rigorously, one finds

Equation (187)

where V2 is the total volume of the two spatial dimensional region of interest, Ji and Qi are arbitrary trial charge and heat currents, up to the constraints $\partial_i J_i = \partial_i Q_i = 0$ , and

Equation (188)

These conductivity bounds can be useful for obtaining qualitative estimates of the transport coefficients in the strong disorder limit, but one must keep in mind that there is no guarantee that a bound obtained from (187) is sharp. We note that one can derive (186) by plugging the ansatz that Ji and Qi are ${\bf x}$ -independent into (187). Thus, (186) is a lower bound on the true conductivity, which becomes perturbatively exact at weak disorder.
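A minimal numerical illustration of the variational logic (our own sketch, reduced to a scalar conductivity $\sigma({\bf x})$ and charge transport only): a uniform, trivially divergence-free trial current always yields the bound $\sigma_{{\rm eff}} \geqslant \langle \sigma^{-1}\rangle^{-1}$ . For a layered medium with current flowing across the layers this bound is saturated; for current flowing along the layers the exact answer is the arithmetic mean, and the bound is strict:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = rng.uniform(0.5, 2.0, size=1000)   # layered conductivity profile

# Uniform (divergence-free) trial current: entropy production per unit
# current^2 is <1/sigma>, so sigma_eff >= 1 / <1/sigma>.
bound = 1.0 / np.mean(1.0 / sigma)

series_exact = 1.0 / np.mean(1.0 / sigma)   # current perpendicular to layers
parallel_exact = np.mean(sigma)             # current parallel to layers

print(np.isclose(series_exact, bound))      # bound saturated: True
print(parallel_exact >= bound)              # bound holds but is not sharp: True
```

This is the one-dimensional analogue of the statement below (187): the ${\bf x}$ -independent trial current reproduces the perturbative answer, but for generic disorder the true conductivity can exceed the bound.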

7.3. Experimental measurement of thermal conductivity

The breakdown of the Wiedemann–Franz (WF) law (182) provides strong evidence for electronic hydrodynamics in graphene [15]. Indeed, note that $\mathcal{L}_{{\rm WF}}$ depends only on fundamental constants, and not on material-specific parameters such as the carrier density or the effective mass of the charge carriers. Experimentally, the WF law holds robustly in many conductors, and this has confirmed the validity of our standard picture of transport in ordinary metals [1]. The breakdown of the WF law in graphene is especially significant, in our view, because specific predictions for the nature of this breakdown were made based on hydrodynamics many years prior [98], and were subsequently observed [15].

Testing the WF law is not always easy. Implicit in the standard theory is the assumption that the dominant contribution to κ is electronic. In many materials, including graphene, electrons contribute $\lesssim $ $ 1\%$ of the total κ as measured at room temperature: the dominant contribution arises from phonons.

We will now describe a technique that allows for the direct measurement of only the electronic contributions to κ. The thermal conductivity can be measured based on Fourier's law, by taking the ratio of the input heating power to the temperature change of the electrons 17 . To overcome limitations on the sensitivity of resistive thermometry in graphene [185], a Johnson noise radiometer was developed [59, 61]. Figure 17(a) shows the schematic diagram of the core of the experimental setup. It is analogous to the microwave radiometer used to measure the temperature of the cosmic microwave background [186]. While in the astrophysical setting one measures temperature by collecting blackbody radiation from space with an antenna, in graphene, [59, 61] employ a reactive impedance matching network with a center frequency in the microwave range. The Johnson noise across the resistor, which is directly proportional to the temperature [187], is then measured. The coupling of the Johnson noise and the noise from the low noise amplifier determines the sensitivity of the temperature measurement, which is often  ∼1 mK. The first application of this Johnson noise thermometry was to understand electron–phonon coupling in graphene [59, 61, 63], and to identify the regimes where such effects are small.
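The temperature sensitivity quoted above follows from the Dicke radiometer equation, ${\rm \Delta}T \approx T_{{\rm sys}}/\sqrt{Bt}$ , where $T_{{\rm sys}}$ is the total system noise temperature (sample plus amplifier), B the matched bandwidth, and t the integration time. The parameter values below are purely illustrative choices, not those of [59, 61]:

```python
import math

def radiometer_sensitivity(T_sys, bandwidth, t_int):
    """Dicke radiometer equation: smallest resolvable temperature change
    after integrating for t_int seconds over bandwidth B."""
    return T_sys / math.sqrt(bandwidth * t_int)

# Illustrative values: 70 K system noise, 100 MHz bandwidth, 1 s averaging.
dT = radiometer_sensitivity(T_sys=70.0, bandwidth=100e6, t_int=1.0)
print(dT * 1e3)  # -> 7.0 (mK): the ~1 mK scale quoted in the text
```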


Figure 17. (a) The core components of the experimental setup to measure the electronic thermal conductivity in graphene. (b) Measured Lorenz number as a function of temperature and carrier density in graphene. The dramatic peak in the Lorenz number measured at charge neutrality and intermediate temperatures was a direct prediction of hydrodynamics (183). From [15]. Reprinted with permission from AAAS.


Figure 17(b) plots the measured Lorenz number $\mathcal{L}$ in graphene as a function of temperature and carrier density [15]. The data is normalized to $\mathcal{L}_{{\rm WF}}$ : in much of the plot, $\mathcal{L}/\mathcal{L}_{{\rm WF}} \approx 1$ means the WF law is satisfied. This law is always satisfied away from the charge neutrality point, as was mostly expected—graphene is often a conventional Fermi liquid away from the neutrality point. The quantitative agreement also confirms the accuracy of the Johnson noise thermometer. At low temperatures below 50 K, the WF law also holds well at the charge neutrality point. We expect that the reason for this is that the local fluctuations in the chemical potential due to charge puddles are of order 50 K (in suitable units) [15, 99]. Upon raising the temperature, however, the Lorenz number is enhanced by a factor of more than 20 at temperatures around 60 K at neutrality. This is the signature of the hydrodynamic Dirac fluid; the physics behind this effect was discussed below (183). Above 100 K, experimental data shows that heat transfer from electrons to phonons is comparable to the electronic diffusion [59, 61, 63]. This electron–phonon scattering degrades the Dirac fluid and so the decrease of $\mathcal{L}$ at $T\sim 100$ K is not surprising.

Figure 18 shows a more quantitative comparison of the measured thermal conductivity to (183). For the cleanest sample, the extracted momentum relaxation lengths are on the order of 1 μm. This is comparable to the size of the charge puddles that can be probed directly using scanning probe microscopes [35, 36]. This suggests that these charge puddles will ultimately be the main obstacle to observing the Dirac fluid in future studies. Away from neutrality, $\mathcal{L}$ drops sharply from its enhanced value to  ∼$\mathcal{L}_{{\rm WF}}/4$ , before rising back up to $\mathcal{L}_{{\rm WF}}$ at a higher density. This suppression at finite carrier density n is consistent with (183) and also the theory of [177, 188].


Figure 18. The measured Lorenz number from three samples at T  =  60 K as a function of charge density n. The data fits well to (183) (dashed lines). Data in blue, red, and green are from samples with increasing amounts of charge puddles. All samples return to the Fermi liquid value (black dashed line) at high density. Insets show (left) the measured Lorenz number as a function of temperature and (right) the fitted enthalpy density $ \newcommand{\e}{{\rm e}} \epsilon+P$ as a function of temperature, compared to the theoretical value in clean graphene (black dashed line). From [15]. Reprinted with permission from AAAS.


7.4. Experimental measurement of thermoelectric conductivity

Another window into the hydrodynamic regime arises in thermoelectric transport—in particular, by studying the cross-coefficient $\alpha_{ij}$ in (173). In this subsection, we will assume that $ \newcommand{\mdelta}{{\delta}} \alpha_{ij} = \alpha \mdelta_{ij}$ is isotropic. In experiment, one often measures not α but the Seebeck coefficient

$\mathcal{S} \equiv \frac{\alpha}{\sigma}.$    (189)

In the hydrodynamic limit, we predict

Equation (190)

using (177). This formula is very different from the Fermi liquid result (Mott relation) [1]

Equation (191)

and so again provides a specific, experimentally testable prediction of hydrodynamics. In particular, (190) diverges as $n\rightarrow 0$ , and is expressible in terms of thermodynamic coefficients. Such a simple formula is a consequence of the 'mean field' treatment of disorder in section 7.1, but we expect it to be qualitatively reasonable.
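In the clean Drude limit ($\sigma_{\textsc{q}} \rightarrow 0$) the ratio $\alpha/\sigma$ collapses to a purely thermodynamic quantity, the entropy per unit charge, which makes the divergence as $n\rightarrow 0$ transparent. The toy evaluation below is our own sketch with illustrative constants (units e  =  1), using the schematic dc limit of (177):

```python
def seebeck_clean(n, s=1.0, enthalpy=1.0, vF=1.0, tau_imp=100.0):
    """S = alpha / sigma in the clean Drude limit (sigma_q -> 0) of the
    dc thermoelectric coefficients; schematic prefactor conventions."""
    drude = vF**2 * tau_imp / enthalpy
    alpha = s * n * drude
    sigma = n**2 * drude
    return alpha / sigma      # = s / n, independent of tau_imp

for n in (4.0, 2.0, 1.0, 0.5):
    print(n, seebeck_clean(n))   # S = s/n: doubles each time n is halved
```

Because both α and σ are proportional to $\tau_{{\rm imp}}$ in this limit, the disorder strength drops out and $\mathcal{S}$ is fixed entirely by thermodynamics, consistent with the statement below (190).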

Figure 19(a) shows the schematic diagram of the experimental setup used to measure $\mathcal{S}$ in [189]. The induced voltage is measured after a heater, applied to one end of the sample, generates a steady temperature gradient. Figure 19(b) shows that the experimentally observed enhancement of $\mathcal{S}$ near the neutrality point is larger than predicted by (191). The enhancement diminishes as the system crosses over to the Fermi liquid regime at large n, or with increasing disorder or phonon scattering. Indeed, early experiments with impurity-scattering-limited graphene on ${\rm SiO}_2$ substrates show excellent agreement with (191) [190–192]. However, in higher quality samples on BN substrates, noticeable deviations from (191) may be observed [189].


Figure 19. (a) Schematic diagram of the thermoelectric power measurement in graphene. Induced voltage is measured with a temperature gradient produced by a heater and controlled by resistive thermometers on the two ends of the sample. (b) Measured Seebeck coefficient-to-temperature ratio (solid lines) as a function of carrier density, in comparison to the hydrodynamic result and the Mott relation (191). Reprinted figure with permission from [189], Copyright 2016 by the American Physical Society.


Using kinetic theory, [188] attributes the deviations of $\mathcal{S}$ from both (190) and (191) to a combination of the crossover between the ballistic and hydrodynamic regimes of transport, together with scattering off of optical phonons at higher temperatures. The temperature range where optical phonon scattering matters is consistent with measurements of the heat transfer from graphene electrons to phonons [63].

7.5. Beyond relativistic hydrodynamics

In this subsection, we describe some of the complications that can arise for the models of hydrodynamic transport in (weakly) interacting electron fluids.

7.5.1. Imbalance modes.

As we noted in section 5.3, there can be a long lived imbalance mode in graphene. [131] noted that this can lead to finite size corrections to the theory of transport derived in section 7.1. The precise form of these corrections depends on details of the boundary conditions, but we can qualitatively understand the effects of an imbalance mode as follows. Let us assume a homogeneous sample with momentum relaxation time $\tau_{{\rm imp}}$ and imbalance relaxation time $\tau_{{\rm imb}}$ , with boundaries located at x  =  0,L. The equations one must solve then take the schematic form of

Equation (192a)

Equation (192b)

Equation (192c)

Equation (192d)

where primes denote x-derivatives and

Equation (193)

For simplicity, we have neglected viscous effects above. The exact solution to the equations above is sensitive to the boundary conditions on n and $n_{{\rm imb}}$ . However, typical boundary conditions will all share the following features. Firstly, in the limit $\tau_{{\rm imb}} \rightarrow \infty$ , these equations are solved by constant v as well as constant $\mu^{a\prime}$ and $T^\prime$ —gradients are homogeneous throughout the sample. However, because the imbalance mode decays, the non-equilibrium imbalance gradient will be concentrated near the edges of the sample. The length scale over which the imbalance mode will decay obeys

Equation (194)

We have employed slightly different notation than [131]. What one then finds is that the electrical and thermal resistivities of the slab of length L can be written in the following schematic form:

$R = \mathcal{R} L + R_{{\rm contact}} \qquad (195)$

where $\mathcal{R}$ is the resistance per unit length of the infinite sample (where imbalance modes do not play any role), and $R_{{\rm contact}}$ is a finite contribution arising from imbalance modes that are excited near the contacts.
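The decomposition (195) also suggests a simple way to separate the two contributions in an experiment: measuring the resistance of samples of two different lengths determines $\mathcal{R}$ and $R_{{\rm contact}}$ from a linear fit. A minimal sketch, with purely hypothetical numbers:

```python
# Schematic: separating bulk and contact contributions in (195),
# R(L) = R_per_length * L + R_contact. Two samples of different
# length suffice. All numbers below are hypothetical.
L1, L2 = 5e-6, 10e-6          # sample lengths (m)
R1, R2 = 150.0, 250.0         # measured resistances (ohm)

R_per_length = (R2 - R1) / (L2 - L1)   # bulk resistance per unit length
R_contact = R1 - R_per_length * L1     # imbalance-mode contact contribution

print(R_per_length)  # 2e7 ohm/m
print(R_contact)     # 50.0 ohm
```

With more than two lengths, the linearity of $R(L)$ itself becomes a check on the model.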

These imbalance modes limit the extent to which the large violations of the Wiedemann–Franz law described in section 7.3 can be observed. Indeed, the fact that such a large violation of the Wiedemann–Franz law was observed experimentally in [15] suggests that $\ell_{{\rm imb}} < 1~\mu {\rm m}$ .

Possible nonlinear hydrodynamic signatures of the imbalance mode in graphene are discussed in [139]. Observable signatures of an imbalance mode appear related to the presence of an extra diffusive hydrodynamic mode. An imbalance mode can also lead to changes to the theory of transport through charge puddles [25, 26], although we expect such effects to be less significant in the Dirac fluid.

A final perspective on imbalance modes can be found in [193]. The authors consider an exotic model for the Dirac fluid coming from the AdS/CFT correspondence. The model considered has the flavor of a 'momentum relaxation time' model, but with a different assumption on the thermodynamics of the imbalance mode than [131], which allows this mode to couple to charge transport beyond the edges of the sample. It remains an open question whether such a model is appropriate for the Dirac fluid realized in experiment.

7.5.2. Bipolar diffusion.

Another possible complication of the hydrodynamic description is the possibility that the electron and hole fluids essentially decouple. In the hydrodynamic language, this corresponds to the assumption that the charge and energy of the electrons and holes are separately conserved. The assumption that the energies of the two fluids are separately conserved is difficult to justify microscopically away from a non-interacting limit, but from a phenomenological, hydrodynamic perspective, we can take this as a postulate. This decoupling of electron and hole fluids leads to a phenomenon called bipolar diffusion [194], which also leads to an enhancement of $\mathcal{L}$ relative to the WF law.

It is simple to derive the effect. The conventional open-circuit thermal conductivity κ was defined under the assumption that no charge current flows. However, if there are electron and hole fluids, then we find

$\kappa = \kappa_{{\rm e}} + \kappa_{{\rm h}} + \frac{\sigma_{{\rm e}}\sigma_{{\rm h}}}{\sigma_{{\rm e}}+\sigma_{{\rm h}}}\left(\mathcal{S}_{{\rm e}}-\mathcal{S}_{{\rm h}}\right)^2 T. \qquad (196)$

If $\sigma=\sigma_{{\rm e}}+\sigma_{{\rm h}}$ , and the electron/hole fluids separately obey the WF law, the combined electron–hole fluid need not obey the WF law due to the presence of the third 'bipolar diffusion' contribution to κ. In particular, the presence of the bipolar diffusion term suggests that $\mathcal{L}\geqslant 1$ .
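As a numerical illustration of the bipolar enhancement, suppose the electron and hole channels each obey the WF law, with equal conductivities and opposite thermopowers. A hedged sketch (all parameter values are illustrative, not fitted to graphene):

```python
# Toy bipolar-diffusion estimate: electron and hole channels each obey
# the Wiedemann-Franz law on their own, but their combination does not,
# because of the third ('bipolar') term. All parameter values are
# illustrative.
L_WF = 2.44e-8              # Sommerfeld value (pi^2/3)(k_B/e)^2, W ohm / K^2
T = 100.0                   # temperature (K)
sigma_e = sigma_h = 1e-3    # channel conductivities (S per square)
S_e, S_h = 50e-6, -50e-6    # channel Seebeck coefficients (V/K), opposite signs

sigma = sigma_e + sigma_h
kappa_e = L_WF * sigma_e * T    # each channel separately obeys the WF law
kappa_h = L_WF * sigma_h * T
kappa_bipolar = sigma_e * sigma_h / sigma * (S_e - S_h) ** 2 * T

lorenz_ratio = (kappa_e + kappa_h + kappa_bipolar) / (L_WF * sigma * T)
print(lorenz_ratio)  # > 1: WF law violated by the bipolar term alone
```

The combined Lorenz ratio exceeds unity by $(\mathcal{S}_{{\rm e}}-\mathcal{S}_{{\rm h}})^2/4\mathcal{L}_{{\rm WF}}$ in this symmetric case, illustrating why $\mathcal{L}\geqslant 1$.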

The bipolar diffusion effect is not adequate to explain the experimentally observed phenomena in graphene. Let us summarize the reasons why [15] (see figure 20): (i) the Lorenz number associated with bipolar diffusion in disorder-free graphene is about 4.2$\mathcal{L}_{{\rm WF}}$ at the Dirac point [195], significantly smaller than what is seen experimentally; (ii) $\mathcal{L}$ is weakly dependent on temperature T in a theory of bipolar diffusion, but also a monotonic function of T, in contradiction with the experimental data; (iii) the theory of bipolar diffusion predicts disorder amplitudes an order of magnitude larger than experimentally observed, given the approximate observation of the WF law for T  <  50 K. Instead, thermal transport in graphene is described by a Dirac fluid, possibly with an electron–hole imbalance mode.


Figure 20. The theory of bipolar diffusion in graphene [195] does not describe the experimentally measured κ. This data is instead consistent with a hydrodynamic transport theory: see figure 18. From [15]. Reprinted with permission from AAAS.


7.6. Magnetic fields

In this final subsection, we briefly discuss hydrodynamic magnetotransport phenomena. These are most interesting near the Dirac point in graphene.

We begin by extending the model of section 7.1 to include the effects of a magnetic field, following [98]. A magnetic field is introduced by adding a uniform background magnetic field to the external electromagnetic tensor $F^{\mu\nu}$ introduced in section 4.2.3: $F^{xy} = -F^{yx}=B$ . In the presence of a background magnetic field, (175) generalizes to include the magnetic Lorentz force:

Equation (197)

where $J_j$ is the charge current and $\epsilon_{ij}$ is the Levi-Civita tensor: $\epsilon_{xx}=\epsilon_{yy}=0$ , $\epsilon_{xy}=-\epsilon_{yx}=1$ . The charge current is modified from (176) to

Equation (198)

To see this equation, note that $F^{i\nu}u_\nu = F^{it}u_t + F^{ij}u_j = E^i + B\epsilon^{ij}v_j$ . It is straightforward to combine these two equations to compute the thermoelectric conductivity matrix, within the relaxation time approximation. For simplicity we write down the components of $\sigma_{ij}$ alone:

Equation (199a)

Equation (199b)

When $\sigma_{\textsc{q}} \rightarrow 0$ , these equations can be derived rigorously [21, 175] in the limit of perturbatively weak disorder 18 . If we assume their validity for finite $\sigma_{\textsc{q}}$ , then we predict a number of novel phenomena. In particular, note that there are poles (divergences) in all components of $\sigma_{ij}$ whenever

$\omega = \pm\omega_{{\rm c}} - \frac{{\rm i}}{\tau_{{\rm imp}}} - {\rm i}\gamma, \qquad \omega_{{\rm c}} = \frac{n e B v_{{\rm F}}^2}{\epsilon+P}, \qquad \gamma = \frac{\sigma_{\textsc{q}} B^2 v_{{\rm F}}^2}{\epsilon+P}. \qquad (200)$

If $\tau_{{\rm imp}}^{-1}=0$ and $\sigma_{\textsc{q}}=0$ , we observe the presence of poles on the real axis. These are called cyclotron resonances—their presence is guaranteed in a Galilean-invariant fluid by Kohn's theorem [196]. With Galilean invariance, the charge current is proportional to the momentum density, and so the cyclotron resonance simply denotes the rotation of the charge current in a uniform magnetic field, guaranteed by the Ward identity (60). Although the derivation of these phenomena was not rigorous, we expect that they are qualitatively correct. Can they be experimentally observed? $\sigma_{\textsc{q}}$ is expected to be largest, relative to other coefficients, at the charge neutrality point (section 5.4), which will be accessible for $\mu/k_{{\rm B}} \lesssim 100$ K. In this regime, we find that

Equation (201)

Cyclotron resonances will occur for ${\rm Re}(\omega) \sim 100$ GHz at B  =  1 mT, which is an extremely weak magnetic field (the Earth's magnetic field is $\sim 10^{-5}$ T). Similarly, one finds

Equation (202)

and for $B\sim1$ mT, this suggests that ${\rm Im}(\omega) \sim 10^{-3} {\rm Re}(\omega)$ . This will likely be very hard to detect in graphene—in particular the momentum relaxation rate will likely be significantly larger.
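These order-of-magnitude claims can be checked with a rough estimate: near neutrality, the effective cyclotron frequency of the Dirac fluid scales as $\omega_{{\rm c}} \sim eBv_{{\rm F}}^2/k_{{\rm B}}T$ , up to O(1) thermodynamic prefactors (our simplification, not the full hydrodynamic formula):

```python
# Order-of-magnitude check of the hydrodynamic cyclotron frequency near
# charge neutrality, using omega_c ~ e * B * v_F^2 / (k_B * T). This
# drops O(1) thermodynamic prefactors, so only the scale is meaningful.
e = 1.602e-19      # electron charge (C)
k_B = 1.381e-23    # Boltzmann constant (J/K)
v_F = 1.0e6        # graphene Fermi velocity (m/s)

B = 1e-3           # magnetic field (T); compare Earth's field ~ 1e-5 T
T = 100.0          # temperature (K), the mu/k_B <~ 100 K regime

omega_c = e * B * v_F**2 / (k_B * T)   # angular frequency (rad/s)
print(omega_c)  # ~1e11 rad/s, consistent with Re(omega) ~ 100 GHz at 1 mT
```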

We also note that the ratio $\sigma_{xy}/\sigma_{xx}$ , as computed from (199), can exhibit novel scaling with temperature T [197], which could shed light on experiments in the cuprates [198].

Another effect of a finite magnetic field that is missed by the 'mean field' treatment of disorder is the Hall viscosity [199, 200]. The Hall viscosity is a non-dissipative modification of $\widehat{T}^{\mu\nu}$ , as given in (57b): $\widehat{T}^{\mu\nu} \rightarrow\widehat{T}^{\mu\nu} + \widehat{T}^{\mu\nu}_{{\rm H}}$ where

Equation (203)

where $\sigma^{\mu\nu} = \mathcal{P}^{\alpha\mu} \mathcal{P}^{\beta \nu} (\partial_\alpha u_\beta + \partial_\beta u_\alpha - \eta_{\alpha \beta} \partial_\lambda u^\lambda)$ . Possible experimental signatures of this Hall viscosity have been proposed in [201–203]. We also note that $\sigma_{\textsc{q}}$ can be generalized to contain an intrinsic Hall conductivity [200].

Magnetic fields also lead to qualitative changes to transport through inhomogeneous puddles. Using the same long wavelength limit of section 7.2, but now supposing that the magnetic field B is not perturbatively small, one finds that the momentum relaxation time becomes [204, 205]

Equation (204)

where L is the sample size and ξ is the size of the charge puddles. This result is relatively insensitive to the precise details of the hydrodynamics (in contrast to (186)), and depends only on the fact that the electron fluid in graphene is two-dimensional. Strictly speaking, $\tau_{{\rm imp}}$ becomes so short as $L\rightarrow \infty$ that the momentum relaxation time approximation itself fails. However, the prefactor of the logarithm is inversely proportional to η. Is studying dissipative transport in magnetic fields an effective way of measuring the viscosity of an electron fluid? We caution that this effect is not present in three spatial dimensions [206], and may be less well suited for electron fluids other than graphene.

8. Coulomb drag

So far we have looked for the signatures of electron–electron interactions directly from experiments on monolayer graphene. One way to probe electron interactions more directly is to separate two monolayers of graphene by a few layers of an insulator, such as boron nitride. Ensuring that the monolayers are not in electrical contact, we then ask for the currents/voltages in layer 1 due to a current/voltage induced in layer 2. This cross-layer signal is believed to be dominated by Coulomb interactions between electrons in the different layers, and so is termed 'Coulomb drag' [207]. These experiments can be performed on any effectively two-dimensional electron system. In graphene, Coulomb drag is noticeable for interlayer spacings $\lesssim $ 10 nm, which can easily be achieved in heterostructures.

8.1. Hydrodynamic description

In some respects, Coulomb drag physics could be expected to be quite similar to the imbalance mode physics described in section 5.3. For example, if there is a Fermi liquid in both monolayers, then (neglecting energy conservation) we expect only the total momentum of electrons in both layers to be conserved, together with the charge density of electrons in each layer separately. Assuming linearized, time-dependent flows, one then writes down a set of coupled hydrodynamic equations

Equation (205a)

Equation (205b)

where a  =  1, 2 labels the two monolayers, while $v_i$ denotes the collective fluid velocity. The physics is then identical to the imbalance mode physics described previously, including the existence of an extra diffusive mode (see also [25, 26]).

However, it is more common in the literature to treat this set-up as consisting of two fluids with two long lived momenta. We consider a theory of two coupled charged fluids with charge current $J^{\mu 1, 2}$ and energy-momentum current $T^{\mu\nu 1, 2}$ :

Equation (206a)

Equation (206b)

Equation (206c)

Equation (206d)

Analogous to our treatment of momentum relaxation in section 4.6, $\tau_{{\rm d}}$ is the relaxation time for the exchange of energy and momentum between the two layers, and will increase as the separation between the monolayers increases. We will only rely on these equations to linear order in the velocity/momentum of each fluid.

8.2. Coulomb drag and transport

We now turn to the transport signatures of the fluid-like models above. The most common measurement is of the drag resistivity

$\rho_{{\rm d}} = \frac{E_x^{(2)}}{J_x^{(1)}}. \qquad (207)$

It tells us the electrical response of layer 2 due to a current flowing in layer 1. We can easily generalize the model of section 7.1 to the theory (206), to get a flavor for what happens. The momentum balance equations in each layer read

Equation (208)

where $\Gamma = (\epsilon+P)/\tau_{{\rm imp}}$ is related to the momentum relaxation time, and the charge current in each layer takes the form of (176). For convenience and simplicity we have assumed that Γ and $\Gamma_{{\rm d}}$ take the same, simple form above for each layer, even if the layers are at different charge densities, and we have also (in this two-fluid approximation) not allowed for any inter-layer cross-terms in $\sigma_{\textsc{q}}$ , which could generically appear as in (205). After a short calculation we obtain

Equation (209)

Thus we predict that (as in a conventional two-dimensional double-layer system [207]), $\rho_{{\rm d}}$ will grow quite large as we approach neutrality (either n1  =  0 or n2  =  0) from high density. However, right at the neutrality point, (209) predicts $\rho_{{\rm d}}$ vanishes due to charge conjugation symmetry. We predict that the sign of $\rho_{{\rm d}}$ can also be flipped depending on the relative sign of n in the two layers. Many of these qualitative features are observed in experiments on graphene [208]: see figure 21.
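The qualitative structure of (209) can be reproduced in a stripped-down version of the two-fluid model: neglect $\sigma_{\textsc{q}}$ and viscosity, drive a current in layer 1, impose open-circuit conditions on layer 2, and read off the induced field. In this limit $\rho_{{\rm d}} \propto \Gamma_{{\rm d}}/(n_1 n_2)$, so the sign flips with the relative sign of the densities and the magnitude grows toward neutrality. A schematic sketch, not the full hydrodynamic result:

```python
# Minimal two-fluid drag sketch (sigma_Q and viscosity neglected; schematic).
# Steady state per layer a: 0 = e*n_a*E_a - Gamma*v_a - Gamma_d*(v_a - v_b).
# Open circuit in layer 2 (J_2 = e*n_2*v_2 = 0) forces v_2 = 0 when n_2 != 0;
# the layer-2 momentum balance then fixes the induced field E_2.
e = 1.602e-19  # electron charge (C)

def rho_drag(n1, n2, Gamma_d):
    """Drag resistivity E_2 / J_1 in this toy limit (sign convention of the toy)."""
    # Layer 2 with v_2 = 0: 0 = e*n2*E2 + Gamma_d*v1  =>  E2 = -Gamma_d*v1/(e*n2)
    # J_1 = e*n1*v1, hence rho_d = E2/J1 = -Gamma_d / (e^2 * n1 * n2)
    return -Gamma_d / (e**2 * n1 * n2)

Gamma_d = 1e-40  # interlayer momentum relaxation strength (illustrative)
print(rho_drag(1e16, 1e16, Gamma_d))    # one sign for like-sign densities
print(rho_drag(1e16, -1e16, Gamma_d))   # sign flips for opposite-sign densities
print(abs(rho_drag(1e15, 1e15, Gamma_d)))  # magnitude grows toward neutrality
```

Right at neutrality this toy expression diverges; in the full model the divergence is cut off by $\sigma_{\textsc{q}}$ , and $\rho_{{\rm d}}$ vanishes by charge conjugation symmetry.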


Figure 21. Left: the drag resistivity $\rho^{{\rm D}}$ at the charge neutrality point in graphene. The sign reversals (denoted by the red versus blue shading) can be understood simply from (209). Right: a clearer view of the magnitude of $\rho_{{\rm d}}$ as a function of the density n, at various temperatures. At lower temperatures there is a very sharp enhancement of $\rho_{{\rm d}}$ . Reprinted by permission from Macmillan Publishers Ltd: Nature Physics [208], Copyright 2012.


However, there is one key experimental observation missed by (209). At relatively low temperatures, there is an extremely large peak in $\rho_{{\rm d}}$ observable in figure 21. It has been argued [209, 210] that this peak arises from the inhomogeneous puddle landscape of the Dirac fluid. In particular, in the presence of an (approximate) diffusive mode—either the relative energy density of the two layers (205), or electron/hole imbalances in a single layer (section 5.3)—we can estimate a contribution to $\rho_{{\rm d}}$ :

Equation (210)

A similar result was found in [209]. (210) suggests that even at the neutrality point, the drag resistivity will be finite, and the sign will be sensitive to whether charge puddles typically have the same or opposite sign in the two layers [209]. The experiment [208] finds that this sign is positive. The precise temperature dependence of (210) depends on the dominant diffusive mode. We also stress that the derivation of (210) makes certain assumptions about the leading order inhomogeneity-induced contributions to transport that may not be rigorous in general hydrodynamic models. It would be interesting to carry out the analysis of [99] to second order in perturbation theory and to rigorously confirm or correct (210) in the presence of inhomogeneity, in the hydrodynamic approximation. For other approaches to this problem, see [210, 211].

Magnetotransport drag phenomena have also been studied theoretically [212] and experimentally [213]. The most non-trivial effect is the presence of a non-vanishing Hall (xy) component to the drag resistivity tensor, which would be vanishing in a Drude-like limit [207]. See [212, 213] for more details on this limit.

The experiments described above took place before the observation of viscous hydrodynamics in graphene. It would be interesting to understand whether Coulomb drag experiments on viscous samples can help probe hydrodynamics more directly.

9. Outlook

There is a great deal of emerging interest in the study of the hydrodynamics of electron fluids. For concreteness, this review has focused on a specific material, graphene, although given the universality of hydrodynamics, much of what we have said is far more general. In this outlook, we highlight some important open questions which remain unsolved, and the possible promise of the field.

9.1. Pinning down hydrodynamics

Perhaps the most immediate need is a more 'absolute' probe of hydrodynamic flow, even in the Fermi liquid phase of graphene. One of the most striking signatures of viscous electron flow in the Fermi liquid is the formation of vortices. Such flows may be indirectly observed through a combination of nonlocal voltage probes and classical magnetotransport [164]. A more direct observation of vortices may be possible using a variety of techniques to image nanoscale current flow and magnetic fields [214–216]. Another important open problem is to directly measure the electronic viscosity. We discussed proposals for such a measurement in section 6.3. In particular, two independent measurements of the viscosity may shed light on the quantitative reliability of simple hydrodynamic models for electron flow.

At the moment, observing the onset of electronic hydrodynamics in graphene (or any other metal) already requires some of the highest quality materials capable of being grown. This can make it challenging to quantitatively determine the limiting factors in the development of hydrodynamics: short-range impurities, umklapp (whether in electron–electron or electron–phonon scattering), other long lived dynamical modes, etc. A better understanding of these issues will help to prepare optimal crystals for electronic hydrodynamic flow.

An even more spectacular hydrodynamic phenomenon is turbulence. However, as we discussed in section 4.6.3, this is not likely to be accessible experimentally in the near future. Although the estimates made in section 4.6.3 assume the textbook Navier–Stokes theory of turbulence, valid in the Fermi liquid limit of section 4.4, we would not be surprised if further complications spoil measurements of turbulence in non-Fermi liquids. As an example, the enhancement of the Reynolds number at strong coupling in the Dirac fluid of graphene will be offset by the complications of thermal modes in hydrodynamics: at the charge neutrality point, electrical measurements probe diffusive dynamics (section 7) while turbulence is dominated by the energy-momentum sector, which is hard to measure.

Finally, there may be exotic signatures of hydrodynamics that we have not yet understood. For example, recent work has suggested the effects of hydrodynamic flow on nonlinear optical conductivity [122, 217] or electromagnetic penetration depth [218].

9.2. Hydrodynamics of complicated materials

Hydrodynamics is a common language—it allows us to describe the motion of interacting electrons in graphene, as well as the air and water which flow around us every day. Yet in this review we have also focused, in large part, on the challenges of understanding electronic hydrodynamics in graphene. In particular, our derivation of hydrodynamics focused on the relativistic gradient expansion, appropriate for interacting quasirelativistic fermions, and our discussions of kinetic theory, phonons and impurities almost exclusively focused on graphene.

One simple reason for the emphasis on graphene is that this is the material where many of the seminal experiments on hydrodynamic electron flow have been performed, and where the sharpest signatures of hydrodynamics have been observed. To understand why graphene—out of thousands of other materials—was such a promising candidate for these experiments, it is important to understand in some detail the material-specific obstacles to reaching transport regimes dominated by electron–electron interactions. For example, in the context of graphene it was crucial to reduce the inhomogeneous 'charge puddle' landscape in order to see hydrodynamic flow—but it was also possible to do this. In other materials, such as quantum critical metals at 'optimal doping', the 'random' chemical composition may make this purification impossible. Not all materials will be amenable to simple signatures of viscous hydrodynamic flow such as negative non-local resistance.

Similarly, graphene has a particularly simple band structure with approximate rotational invariance. It can be well approximated, for most purposes, by a single small circular Fermi surface (in the Fermi liquid regime) or by a relativistic electron–hole plasma (in the Dirac fluid regime). However, many other materials of interest will not have such simple band structures. They may have larger Fermi surfaces, with disconnected pieces, or which badly break rotational invariance. These more complicated Fermi surface geometries can have a significant impact on the ballistic-to-hydrodynamic crossover [25, 26], which is the regime where most realistic experiments take place. It has been observed that the ballistic-to-hydrodynamic crossover is non-trivial even for a Fermi liquid with a circular Fermi surface [148, 149]. An important open problem in this field is to understand the extent to which these complications modify the experimental signatures of hydrodynamics. For example, as we discussed in section 6, one of the key signatures of viscous transport in a Fermi liquid is a decreasing electrical resistivity at low temperatures: $\partial \rho /\partial T < 0$ , under rather generic circumstances. This has not been directly observed in bulk transport measurements, even on high-quality samples of correlated electron materials. References [25, 26] noted that the non-trivial Fermi surface structure of many complicated materials can 'short circuit' this viscous effect, and lead to a more conventional $\partial \rho /\partial T > 0$ . Random magnetic fields can cause viscous effects themselves to lead to $\rho \propto T^2$ [219]. To determine what mechanism destroys the conventional viscous $\rho \propto T^{-2}$ scaling, it is important to develop material-specific models of the ballistic-to-hydrodynamic crossover, and to understand how to experimentally confirm the presence of a more complicated hydrodynamics.

9.3. Thermalization and the emergence of classical physics

In this review, we have mostly focused on the practical challenges of observing hydrodynamics of electrons in a metal. As described in the introduction, there is also interest in understanding how such a classical, dissipative description arises from unitary microscopic quantum mechanics. Conjectures [24, 154] that hydrodynamics would be fundamentally limited by the constraints of quantum mechanics have inspired a large body of theoretical work (see [21] for a review). Recently, it has been noted [220, 221] that consistency with the theory of quantum chaos places fundamental constraints on the hydrodynamics of any many-body quantum system. In particular, for a typical experimentally realized quantum many-body system, the diffusion constants are bounded:

$D \lesssim v_{{\rm B}}^2 \tau \qquad (211)$

where $v_{{\rm B}}$ is the butterfly velocity—the speed of quantum chaos [222]—and τ is the time scale beyond which the diffusion equation breaks down [221]. The earliest evidence for (211) comes from [223, 224]. The theory of quantum chaos tells us how quantum systems thermalize, and so a measurement of $v_{{\rm B}}$ would directly inform us about the efficiency with which quantum information is scrambled and lost, and classical physics emerges. Unfortunately, direct probes of quantum chaos are extremely challenging [225–227]. But through (211), a simple measurement of diffusion and its breakdown provides exact constraints on $v_{{\rm B}}$ . Experimental detection of 'rapid' quantum chaos, perhaps along the lines suggested above, could serve as a sharp signature of whether certain electronic systems are as strongly interacting as has been suggested.
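In practice the bound can be turned around: measured values of a diffusivity $D$ and the time scale $\tau$ at which diffusive behavior sets in bound the butterfly velocity from below, $v_{{\rm B}} \gtrsim \sqrt{D/\tau}$. A sketch with hypothetical numbers:

```python
# Turning the chaos bound D <~ v_B^2 * tau around: a measured diffusion
# constant D and the time scale tau for the onset of diffusion bound the
# butterfly velocity from below. The numbers here are hypothetical.
import math

D = 1e-2     # thermal diffusivity (m^2/s), hypothetical
tau = 1e-12  # time scale beyond which diffusion holds (s), hypothetical

v_B_min = math.sqrt(D / tau)   # lower bound on the butterfly velocity (m/s)
print(v_B_min)  # 1e5 m/s, to be compared with v_F ~ 1e6 m/s in graphene
```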

As a theoretical question, the emergence of thermalization in closed quantum systems is a fascinating problem. Perhaps hydrodynamic probes of experimental systems will be embarrassingly useful in giving insight into, or at least constraints on, this physics.

9.4. Nanoscale viscous electronics

Some of the recent interest in electronic hydrodynamics has also arisen from the possibility of practical applications. For instance, we observed in section 6 that in simple Fermi liquids, viscous electron flow enhances conductance, which could be useful in nanoscale environments requiring minimal dissipation. Another possible application of hydrodynamics is the creation of high quality thermoelectric devices, which efficiently convert charge into heat current—such devices employ the breakdown of the Wiedemann–Franz law observed in section 7.3. The low specific heat and rapid thermalization time of the Dirac fluid have also recently proven useful in building a single photon detector [228, 229].

As we discussed in section 6.3, the Dyakonov–Shur instability arises in the flow of a hydrodynamic electron liquid in one dimension. This instability has long been sought after experimentally for practical purposes: it may provide a route to the generation of THz radiation, which has proven quite challenging with other techniques [230]. A robust source of THz radiation could lead to breakthroughs in nanoscale imaging technology, with immediate medical, military and industrial applications. Experimental evidence for the presence of this instability in ultra-clean, weakly interacting two-dimensional electron liquids such as silicon is rather lacking [168, 231], and we expect this is in no small part due to the challenge in observing the hydrodynamic limit of electron flow.

We caution the reader that industrial applications of viscous electronic flow will likely not arise for some time. Indeed, the extremely small scales necessary to observe hydrodynamic effects could very well spoil any practical application, or limit it to a very narrow set of devices. Nevertheless, we believe that understanding the hydrodynamic flow of correlated electrons remains a beautiful (and, perhaps, simple) outstanding problem in theoretical and experimental physics. Given the convergence of a broad range of theoretical and experimental techniques and ideas, we predict the rapid advance of this field over the next few years. Thus, we hope that this review serves as an invitation to an emerging area of physics, and not as a summary of a well-understood problem. We will not be surprised if, in two decades, it is common knowledge that a hydrodynamic limit of electronic dynamics explains quite a few of the present day puzzles in electronic transport.

Acknowledgments

We thank Sankar Das Sarma, Zhiyuan Sun and Dmitrii Svintsov for comments on a draft of this review. AL is supported by the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4302. KCF is supported by Raytheon BBN Technologies and by the Army Research Office under Cooperative Agreement Number W911NF-17-2-0086. The authors also acknowledge the hospitality of the Simons Center for Geometry and Physics during the writing of this review.

Footnotes

  • Recently hydrodynamics has been understood from even more basic principles [100, 101], but these are extremely technical and well beyond the scope of this review.

  • $\sigma_{\textsc{q}}$ also goes by the name of 'quantum critical' conductivity, or 'incoherent' conductivity, in some of the recent literature.

  • Most sources, except [21], do not include an explicit subscript here. We think it is important to do so to distinguish between the hydrodynamic coefficient $\kappa_{\textsc{q}}$ and the experimentally measured thermal conductivity κ: see section 7.

  • Our estimate follows from the discussion below (4), using $\ell_{{\rm ee}} \sim v_{{\rm F}}\tau_{{\rm ee}}$ ; see also [66, 67].

  • There has been some recent debate in the literature [111, 112] over the validity of this procedure. From the point of view of effective field theory, coupling Maxwell's equations to matter breaks the derivative expansion of hydrodynamics, and is therefore rather concerning. In this review, we assume that the conventional approach is correct, which has been argued for from an effective field theory perspective in [112].

  • In a more conventional metal where the electrons are also mobile in three dimensions, one instead finds $\delta \varphi \sim \alpha k^{-2} \delta n$ , and so $\omega^2 = \omega_{{\rm p}}^2 + v_{{\rm s}}^2k^2$ , with $\omega_{{\rm p}}^2 \sim \alpha n^2/(\epsilon+P)$ the plasma frequency.

  • Although this discussion focuses on the turbulence of Galilean invariant fluids, it appears as though relativistic (uncharged) fluids have qualitatively similar phenomena [127].

  • 10 

    In the fluid dynamics literature, the momentum relaxation time $\tau_{{\rm imp}}$ is commonly referred to as a 'friction' term. It is sometimes used to regulate simulations of two dimensional turbulence, and could mimic, for example, the drag on atmospheric flows due to the Earth's surface.

  • 11 

    Unfortunately, there are many incorrect statements in the literature. We cannot stress strongly enough that hydrodynamic transport phenomena are computable in a correct and complete solution of the Boltzmann equation [26].

  • 12 

    Theorists should also not get confused by the notation $F^a_{\mu\nu}$ —the presence of multiple conservation laws here does not imply any emergent non-Abelian gauge invariance.

  • 13 

    We caution the reader that the relaxation time approximations commonly employed in condensed matter physics, as in conventional textbooks [1], often do not carefully account for conservation laws.

  • 14 

    Matthiessen's rule is commonly used in condensed matter physics. However, it is not true, even in any perturbative limit, and this is but one of many counter-examples. There are even more dramatic examples that can arise in kinetic theory [26].

  • 15 

    This formula can also be derived [23] using the memory matrix formalism [21, 175].

  • 16 

    Note that this variational principle is distinct from a separate variational principle which states that the probability of finding a particle in a given state, in thermodynamic equilibrium, is given by the distribution that maximizes entropy given the density of the conserved quantities: energy, charge and momentum [183, 184]. The variational principle (187) states that dissipative processes minimize entropy production, not that thermodynamic ensembles maximize entropy.

  • 17 

    Technically one has to be careful about the boundary conditions because we are dealing with thermoelectric transport. One can confirm that under experimental boundary conditions the techniques described below measure κ, the open-circuit thermal conductivity [15].

  • 18 

    In a certain 'cartoon' limit, these equations can also be derived at finite $\sigma_{\textsc{q}}$ [21, 175]. However, the derivation of these equations as $\sigma_{\textsc{q}}\rightarrow 0$ is generic and valid for general quantum systems, not just graphene.
