
Exponentially more precise quantum simulation of fermions in the configuration interaction representation


Published 7 December 2017 © 2017 IOP Publishing Ltd
Citation: Ryan Babbush et al 2018 Quantum Sci. Technol. 3 015006. DOI: 10.1088/2058-9565/aa9463


Abstract

We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require $\widetilde{{ \mathcal O }}(N)$ qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires $\widetilde{{ \mathcal O }}(\eta )$ qubits, where $\eta \ll N$ is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as $\widetilde{{ \mathcal O }}({\eta }^{2}{N}^{3}t)$.


1. Introduction

The simulation of electrons interacting in the external potential of nuclei is the central problem of quantum chemistry. Efficient solutions to this problem could enable ab initio design of new materials and chemical reactions, potentially revolutionizing diverse fields such as drug discovery, battery development, catalysis, superconductivity and more. While the ambition to use quantum computers to simulate physical systems dates back to Feynman in 1982 [1], the first concrete quantum algorithm to solve the quantum chemistry problem was introduced by Aspuru-Guzik et al in 2005 [2]. This original algorithm was based on the quantum phase estimation algorithm [3] in conjunction with Trotter-Suzuki methods of time-evolution, which Lloyd and Abrams first applied to quantum simulation in [4, 5].

Since then, there have been scores of papers on the topic introducing a variety of different simulation paradigms. For example, quantum algorithms for chemistry have been proposed in a variational framework [6–9], in an adiabatic algorithm [10], in first quantization [11], in real space [12, 13] and using basis sets with fewer Hamiltonian terms [14, 15]. Recently, there has been a large body of work dedicated to exploring different ways that one might map fermions into qubits [16–21]. There have also been a number of experimental demonstrations of both phase estimation [22–25] and variational approaches to quantum chemistry [26–30].

In the last few years, the fast pace of development in quantum computing hardware has provoked the question of exactly what resources will be required to solve interesting chemistry problems with quantum error-correction [31–33]. To enable such estimates, significant work has been dedicated to optimizing the resources required for phase estimation simulations using Trotter-Suzuki decompositions [34–36]. Using arbitrarily high-order Trotter formulas, the tightest-known upper bound on the gate count of the second-quantized, Trotter-based quantum simulation of chemistry is $\widetilde{{ \mathcal O }}({N}^{8+o(1)}t/{\epsilon }^{o(1)})$ [37, 38], where N is the number of spin-orbitals and $\epsilon$ is the required accuracy. Thus, the Trotter-based quantum simulation of many molecular systems remains a costly proposition [39, 40]. One might worry that with such high gate scalings, many systems of practical interest could not be treated to chemical precision.

In [41], we introduced two novel quantum algorithms for chemistry based on the truncated Taylor series simulation method of [42], which are exponentially more precise than algorithms using the Trotter-Suzuki decomposition. Our first algorithm, referred to as the 'database' algorithm, was shown to have gate count scaling as $\widetilde{{ \mathcal O }}({N}^{4}\parallel H\parallel t)$. Our second algorithm, referred to as the 'on-the-fly' algorithm, was shown to have the lowest scaling of any approach to quantum simulation previously in the literature, $\widetilde{{ \mathcal O }}({N}^{5}t)$. Both of these algorithms use a second-quantized representation of the Hamiltonian; in this paper we employ a more compressed, first-quantized representation of the Hamiltonian known as the configuration interaction (CI) matrix. We also analyze the on-the-fly integration strategy far more rigorously, by making the assumptions explicit and rigorously deriving error bounds. Our approach combines a number of improvements:

  • a novel 1-sparse decomposition of the CI matrix (improving over that in [11]),
  • a self-inverse decomposition of 1-sparse matrices as introduced in [43],
  • the exponentially more precise simulation techniques of [42],
  • and the on-the-fly integration strategy of [41].

The paper is outlined as follows. In section 2, we summarize the key results and note the improvements presented here over previous approaches. In section 3, we introduce the configuration basis encoding of the wavefunction. In section 4, we show how to decompose the Hamiltonian into 1-sparse unitary matrices. In section 5, we use the decomposition of section 4 to construct a circuit which provides oracular access to the Hamiltonian matrix entries, assuming access to ${\rm{SAMPLE}}(w)$ from [41]. In section 6, we review the procedures in [42] and [41] to demonstrate that this oracle circuit can be used to effect a quantum simulation which is exponentially more precise than using a Trotter-Suzuki decomposition approach. In section 7, we discuss applications of this algorithm and future research directions.

2. Summary of results

In our previous work [41], simulation of the molecular Hamiltonian was performed in second quantization using Taylor series simulation methods to give a gate count scaling as $\widetilde{{ \mathcal O }}({N}^{5}t)$. In this work, we use the configuration interaction representation of the Hamiltonian to provide an improved scaling of $\widetilde{{ \mathcal O }}({\eta }^{2}{N}^{3}t)$. This result is summarized by the following Theorem.

Theorem 1. Using atomic units in which ${\rm{\hslash }}$, Coulomb's constant, and the charge and mass of the electron are unity, we can write the molecular Hamiltonian as

Equation (1)

$$H=-\sum_{j}\frac{{\nabla }_{{\vec{r}}_{j}}^{2}}{2}-\sum_{i,j}\frac{{Z}_{i}}{|{\vec{R}}_{i}-{\vec{r}}_{j}|}+\sum_{i,j\gt i}\frac{1}{|{\vec{r}}_{i}-{\vec{r}}_{j}|}$$

where ${\vec{R}}_{i}$ are the nuclear coordinates, ${\vec{r}}_{j}$ are the electron coordinates, and ${Z}_{i}$ are the nuclear atomic numbers. Consider a basis set of $N$ spin-orbitals satisfying the following conditions:

  • 1.  
    each orbital takes significant values up to a distance at most logarithmic in $N$,
  • 2.  
    beyond that distance the orbital decays exponentially,
  • 3.  
    the maximum value of each orbital, and its first and second derivatives, scale at most logarithmically in $N$,
  • 4.  
    and the value of each orbital can be evaluated with complexity $\widetilde{{ \mathcal O }}(1)$.

Evolution under the Hamiltonian of equation (1) can be simulated in this basis for time $t$ within error $\epsilon \gt 0$ with a gate count scaling as $\widetilde{{ \mathcal O }}({\eta }^{2}{N}^{3}t)$, where $\eta $ is the number of electrons in the molecule.
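As a rough illustration of the improvement this bound represents over the Trotter-based $\widetilde{{ \mathcal O }}({N}^{8+o(1)}t)$ scaling quoted in the introduction, the sketch below compares the two leading-order gate counts with all constants and logarithmic factors dropped. The system size $N=200$, $\eta =50$ is an arbitrary illustrative choice, not a benchmark from the paper:

```python
def gates_trotter(N):
    """Leading-order Trotter bound ~ N^8 (constants and log factors dropped)."""
    return N ** 8

def gates_ci_taylor(N, eta):
    """This paper's bound ~ eta^2 * N^3 (constants and log factors dropped)."""
    return eta ** 2 * N ** 3

# Illustrative molecule: eta = 50 electrons in N = 200 spin-orbitals.
N, eta = 200, 50
ratio = gates_trotter(N) / gates_ci_taylor(N, eta)
print(f"asymptotic gate-count ratio ~ {ratio:.1e}")  # = N^5 / eta^2
```

Because $\eta \ll N$, the ratio $N^{5}/\eta^{2}$ grows rapidly with system size.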

We note that these conditions will be satisfied for most, but not all, quantum chemistry simulations. To understand the limitations of these conditions, we briefly discuss the concept of a model chemistry (i.e. standard basis set specifications) and how model chemistries are typically selected for electronic structure calculations. There are thousands of papers which study the effectiveness of various basis sets developed for the purpose of representing molecules [44]. These model chemistries associate specific orbital basis functions with each atom in a molecule. For example, wherever Nitrogen appears in a molecule a model chemistry would mandate that one add to the system certain basis functions which are centered on Nitrogen and have been pre-optimized for Nitrogen chemistry; different basis functions would be associated with each Phosphorus, and so on. In addition to convenience, the use of standardized model chemistries helps chemists to compare different calculations and reproduce results.

Within a standard model chemistry, orbital basis functions are almost always represented as linear combinations of pre-fitted Gaussians which are centered on each atom. Examples of such model chemistries include Slater Type Orbitals (e.g. STO-3G), Pople Basis Sets (e.g. 6-31G*) and correlation consistent basis sets (e.g. cc-pVTZ). We note that all previous studies on quantum algorithms for quantum chemistry in an orbital basis have advocated the use of one of these models. Simulation within any of these model chemistries would satisfy the conditions of our theorem because the basis functions associated with each atom have maximum values, derivatives and distances beyond which each orbital decays exponentially.

Similarly, when molecular instances grow because more atoms are added to the system it is standard practice to perform these progressively larger calculations using the same model chemistry and the conditions of theorem 1 are satisfied. For instance, in a chemical series such as progressively larger Hydrogen rings or progressively longer alkane chains or protein sequences, these conditions would be satisfied. We note that periodic systems such as conducting metals might require basis sets (e.g. plane waves) violating the conditions of theorem 1. When systems grow because atoms in the molecule are replaced with heavier atoms, the orbitals do tend to grow in volume and their maximum values might increase (even within a model chemistry). However, there are only a finite number of elements on the periodic table so this is irrelevant for considerations of asymptotic complexity. Finally, we point out that these conditions do not hold if the simulation is performed in the canonical molecular orbital basis, but this is not a problem for our approach since the Hartree–Fock state can easily be prepared in the atomic orbital basis at cost that is quadratic in the number of spin-orbitals. We discuss this procedure further in section 3.

The simulation procedure of [42] requires a decomposition of the Hamiltonian into a weighted sum of unitary matrices. In [41], we decomposed the molecular Hamiltonian in such a way that all the coefficients were integrals, i.e.

Equation (2)

$$H=\sum_{{\ell }}{W}_{{\ell }}{H}_{{\ell }},\qquad {W}_{{\ell }}=\int {w}_{{\ell }}(\vec{z})\,{\rm{d}}\vec{z}$$

where the ${H}_{{\ell }}$ are unitary operators, and the ${w}_{{\ell }}(\vec{z})$ are determined by the procedure. We then showed how one could evolve under H while simultaneously computing these integrals. In this paper, we investigate a different representation of the molecular Hamiltonian with the related property that the Hamiltonian matrix elements ${H}^{\alpha \beta }$ can be expressed as integrals,

Equation (3)

$${H}^{\alpha \beta }=\int {\aleph }^{\alpha \beta }(\vec{z})\,{\rm{d}}\vec{z}$$

or a sum of a limited number of integrals. We decompose the Hamiltonian into a sum of one-sparse Hamiltonians, each of which has only a single integral in its matrix entries. We then decompose the Hamiltonian by discretizing the integrals and then further decompose the Hamiltonian into a sum of self-inverse operators, ${{ \mathcal H }}_{{\ell },\rho }$. Using this decomposition, we construct a circuit called ${\rm{SELECT}}({ \mathcal H })$ which selects and applies the self-inverse operators so that

Equation (4)

$${\rm{SELECT}}({ \mathcal H })\,|{\ell }\rangle |\rho \rangle |\psi \rangle =|{\ell }\rangle |\rho \rangle \,{{ \mathcal H }}_{{\ell },\rho }|\psi \rangle$$

By repeatedly calling ${\rm{SELECT}}({ \mathcal H })$, we are able to evolve under H with an exponential improvement in precision over Trotter-based algorithms.

The CI matrix is a compressed representation of the molecular Hamiltonian that requires asymptotically fewer qubits than all second-quantized algorithms for chemistry. Though the CI matrix cannot be expressed as a sum of polynomially many local Hamiltonians, a paper by Toloui and Love [11] investigated the idea that one can simulate the CI matrix by decomposing it into a sum of polynomially many 1-sparse Hermitian operators. However, the particular decomposition discussed in that paper does not work as given. In this paper, we provide a new decomposition of the CI matrix into a sum of ${ \mathcal O }({\eta }^{2}{N}^{2})$ 1-sparse Hermitian operators, where $\eta \ll N$ is the number of electrons in the molecule and N is the number of spin-orbitals. This new decomposition enables our improved scaling. Using techniques introduced in [43], we further decompose these 1-sparse operators into unitary operators which are also self-inverse. As a consequence of the self-inverse decomposition, the Hamiltonian is an equally weighted sum of unitaries. ${\rm{SELECT}}({ \mathcal H })$ requires the ability to compute the entries of the CI matrix; accordingly, we can use the same strategy for computing integrals on-the-fly that was introduced in [41], but this time our Hamiltonian is of the form in equation (3).
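The effect of the self-inverse decomposition is easiest to see for a single scalar weight: any coefficient $w$ with $|w|\leqslant {w}_{\max }$ can be approximated, to within $\delta =2{w}_{\max }/M$, by $\delta$ times a sum of $M$ signs $\pm 1$, and each sign then corresponds to a self-inverse 1-sparse unitary, giving the equally weighted sum described above. A minimal scalar sketch of this idea (function names are ours, not the paper's):

```python
def self_inverse_signs(w, M, wmax):
    """Approximate scalar w (|w| <= wmax) as delta * sum of M signs in {-1, +1},
    a toy scalar analogue of the self-inverse decomposition."""
    delta = 2 * wmax / M
    # Choose the number of +1 signs so that delta*(2*n_plus - M) is closest to w.
    n_plus = round((w / delta + M) / 2)
    n_plus = min(max(n_plus, 0), M)
    signs = [1] * n_plus + [-1] * (M - n_plus)
    return delta, signs

w, wmax, M = 0.37, 1.0, 1000
delta, signs = self_inverse_signs(w, M, wmax)
approx = delta * sum(signs)
print(abs(approx - w))  # approximation error is at most delta = 2*wmax/M
```

Shrinking $\delta$ (i.e. increasing $M$) controls the discretization error without changing the equal weighting.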

Using this approach, the simulation of evolution over time t then requires $\widetilde{{ \mathcal O }}({\eta }^{2}{N}^{2}t)$ calls to ${\rm{SELECT}}({ \mathcal H })$. To implement ${\rm{SELECT}}({ \mathcal H })$, we make calls to the CI matrix oracle as described in section 5, which requires $\widetilde{{ \mathcal O }}(N)$ gates. This scaling is due to using a database approach to computing the orbitals, where a sequence of N controlled operations is performed. This causes our overall approach to require $\widetilde{{ \mathcal O }}({\eta }^{2}{N}^{3}t)$ gates. As in [11], the number of qubits is $\widetilde{{ \mathcal O }}(\eta )$ rather than $\widetilde{{ \mathcal O }}(N)$, because the compressed representation stores only the indices of occupied orbitals, rather than occupation numbers of all orbitals. To summarize, our algorithm with improved gate count scaling of $\widetilde{{ \mathcal O }}({\eta }^{2}{N}^{3}t)$ proceeds as follows:

  • 1.  
    Represent the molecular Hamiltonian in equation (1) in first quantization using the CI matrix formalism. This requires selection of a spin-orbital basis set, chosen such that the conditions in theorem 1 are satisfied.
  • 2.  
    Decompose the Hamiltonian into sums of self-inverse matrices approximating the required molecular integrals via the method of section 4.
  • 3.  
    Query the CI matrix oracle to evaluate the above self-inverse matrices, which we describe in section 5.
  • 4.  
    Simulate the evolution of the system over time t using the method of [42], which is summarized in section 6.
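The exponential precision that motivates step 4 can be verified numerically on a toy problem: the error of a Taylor series for ${e}^{-{\rm{i}}{Ht}}$ truncated at order $K$ falls off factorially in $K$. The following is a classical emulation on a small random Hermitian matrix standing in for $H$ (it illustrates the truncation error only, not the LCU circuit of [42]):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(7)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2            # random Hermitian "Hamiltonian"
H = H / np.linalg.norm(H, 2)        # normalize so that ||H|| = 1
t = 1.0

# Exact evolution via eigendecomposition (valid since H is Hermitian).
eigs, V = np.linalg.eigh(H)
exact = V @ np.diag(np.exp(-1j * t * eigs)) @ V.conj().T

def taylor_evolution(H, t, K):
    """Truncated Taylor series sum_{k=0}^{K} (-iHt)^k / k!."""
    U = np.zeros_like(H)
    term = np.eye(H.shape[0], dtype=complex)
    for k in range(K + 1):
        U = U + term / factorial(k)
        term = term @ (-1j * t * H)
    return U

for K in (2, 4, 8, 16):
    err = np.linalg.norm(taylor_evolution(H, t, K) - exact, 2)
    print(K, err)   # error decays roughly like (||H|| t)^{K+1} / (K+1)!
```

Doubling $K$ multiplies the number of digits of precision, which is the sense in which the method is "exponentially more precise" than a Trotter step of comparable cost.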

3. The CI matrix encoding

The molecular electronic structure Hamiltonian describes electrons interacting in a nuclear potential that is fixed under the Born-Oppenheimer approximation. Except for the proposals in [11–13, 45, 46], all prior quantum algorithms for chemistry use second quantization. While in second quantization antisymmetry is enforced by the fermionic anti-commutation relations, in first quantization the wavefunction itself is explicitly antisymmetric. The representation of equation (1) in second quantization is

Equation (5)

$$H=\sum_{i,j=1}^{N}{h}_{ij}\,{a}_{i}^{\dagger }{a}_{j}+\frac{1}{2}\sum_{i,j,k,{\ell }=1}^{N}{h}_{ijk{\ell }}\,{a}_{i}^{\dagger }{a}_{j}^{\dagger }{a}_{{\ell }}{a}_{k}$$

where the operators ${a}_{i}^{\dagger }$ and aj in equation (5) obey antisymmetry due to the fermionic anti-commutation relations,

Equation (6)

$$\{{a}_{i}^{\dagger },{a}_{j}\}={\delta }_{ij},\qquad \{{a}_{i}^{\dagger },{a}_{j}^{\dagger }\}=\{{a}_{i},{a}_{j}\}=0$$

The one-electron and two-electron integrals in equation (5) are

Equation (7)

$${h}_{ij}=\int {\varphi }_{i}^{*}(\vec{r})\left(-\frac{{\nabla }^{2}}{2}-\sum_{q}\frac{{Z}_{q}}{|{\vec{R}}_{q}-\vec{r}|}\right){\varphi }_{j}(\vec{r})\,{\rm{d}}\vec{r}$$

Equation (8)

$${h}_{ijk{\ell }}=\int \frac{{\varphi }_{i}^{*}({\vec{r}}_{1})\,{\varphi }_{j}^{*}({\vec{r}}_{2})\,{\varphi }_{k}({\vec{r}}_{1})\,{\varphi }_{{\ell }}({\vec{r}}_{2})}{|{\vec{r}}_{1}-{\vec{r}}_{2}|}\,{\rm{d}}{\vec{r}}_{1}\,{\rm{d}}{\vec{r}}_{2}$$

where (throughout this paper), ${\vec{r}}_{j}$ represents the position of the $j{\rm{th}}$ electron, and ${\varphi }_{i}({\vec{r}}_{j})$ represents the $i{\rm{th}}$ spin-orbital when occupied by that electron. To ensure that the integrand in equation (7) is symmetric, we can alternatively write the integral for hij as

Equation (9)

$${h}_{ij}=\int \left(\frac{{\rm{\nabla }}{\varphi }_{i}^{*}(\vec{r})\cdot {\rm{\nabla }}{\varphi }_{j}(\vec{r})}{2}-\sum_{q}\frac{{Z}_{q}}{|{\vec{R}}_{q}-\vec{r}|}\,{\varphi }_{i}^{*}(\vec{r})\,{\varphi }_{j}(\vec{r})\right){\rm{d}}\vec{r}$$

The second-quantized Hamiltonian in equation (5) is straightforward to simulate because one can explicitly represent the fermionic operators as tensor products of Pauli operators, using either the Jordan-Wigner transformation [47, 48] or the Bravyi-Kitaev transformation [17, 19, 49].
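For example, under the Jordan-Wigner transformation one may take ${a}_{i}={Z}^{\otimes i}\otimes \tfrac{1}{2}(X+{\rm{i}}Y)\otimes {I}^{\otimes (N-i-1)}$, with the convention that $|1\rangle $ marks an occupied orbital, and the anti-commutation relations of equation (6) can then be checked directly on small dense matrices. A brief illustrative sketch (exponentially costly classically, of course):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(i, N):
    """Jordan-Wigner: a_i = Z^{tensor i} (x) (X + iY)/2 (x) I^{tensor (N-i-1)}."""
    return kron_all([Z] * i + [(X + 1j * Y) / 2] + [I2] * (N - i - 1))

N = 4
a = [annihilation(i, N) for i in range(N)]

def anticomm(A, B):
    return A @ B + B @ A

# Verify {a_i, a_j^dag} = delta_ij and {a_i, a_j} = 0 (equation (6)).
for i in range(N):
    for j in range(N):
        assert np.allclose(anticomm(a[i], a[j].conj().T), np.eye(2**N) * (i == j))
        assert np.allclose(anticomm(a[i], a[j]), 0)
print("Jordan-Wigner anticommutation relations verified for N =", N)
```

The $Z$ strings are what make a single fermionic operator act on ${ \mathcal O }(N)$ qubits in this encoding.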

With the exception of real-space algorithms described in [12, 13], all quantum algorithms for chemistry represent the system in a basis of N single-particle spin-orbital functions, usually obtained as the solution to a classical mean-field treatment such as Hartree–Fock [50]. However, the conditions of theorem 1 only hold when actually performing the simulation in the atomic orbital basis (i.e. the basis prescribed by the model chemistry). The canonical Hartree–Fock orbitals are preferred over the atomic orbitals because initial states are easier to represent in the basis of Hartree–Fock orbitals. These orbitals are actually a unitary rotation of the orthogonalized atomic orbitals prescribed by the model chemistry. This unitary basis transformation takes the form

Equation (10)

$${\tilde{a}}_{i}^{\dagger }=\sum_{j=1}^{N}{u}_{ij}\,{a}_{j}^{\dagger },\qquad u={e}^{-\kappa }$$

where $\kappa =-{\kappa }^{\dagger }$ is an N by N anti-Hermitian matrix and so u is an N by N unitary matrix. Above, ${\tilde{a}}_{i}^{\dagger }$ and ${\tilde{a}}_{i}$ are creation and annihilation operators on orbital ${\widetilde{\varphi }}_{i}$. Then, as a consequence of the Thouless theorem [50]:

Equation (11)

$$U(u)\,{a}_{i}^{\dagger }\,U{(u)}^{\dagger }={\tilde{a}}_{i}^{\dagger },\qquad U(u)={e}^{{\sum }_{ij}{\kappa }_{ij}{a}_{i}^{\dagger }{a}_{j}}$$

where $U(u)$ is a ${2}^{N}$ by ${2}^{N}$ unitary matrix which is uniquely determined by κ.

The canonical Hartree–Fock orbitals and κ are obtained by performing a self-consistent field procedure to diagonalize a mean-field Hamiltonian for the system which is known as the Fock matrix. Because the Fock matrix describes a system of non-interacting electrons it can be expressed as the following N by N matrix:

Equation (12)

$${f}_{ij}={h}_{ij}+\sum_{k=1}^{\eta }\left({h}_{ikjk}-{h}_{ikkj}\right)$$

The integrals which appear in the Fock matrix are defined by equations (7) and (8). Importantly, the canonical orbitals are defined to be the orbitals which diagonalize the Fock matrix. Thus, the integrals in the definition of the Fock matrix are defined in terms of the eigenvectors of the Fock matrix so equation (12) is a recursive definition. The canonical orbitals are obtained by repeatedly diagonalizing this matrix until convergence with its own eigenvectors. The Hartree–Fock procedure is important because the Hartree–Fock state (which is a product state in the canonical basis with the lowest η eigenvectors of the Fock matrix occupied and the rest unoccupied) has particularly high overlap with the ground state of H.

As stated before, the conditions of theorem 1 do not apply if we represent the Hamiltonian in the basis of canonical orbitals. But this is not a problem for us because we can still prepare the Hartree–Fock state in the basis of orthogonalized atomic orbitals (which do satisfy the conditions) and then apply the operator U from equation (11) to our initial state at cost $\widetilde{{ \mathcal O }}({N}^{2})$ as described in [51]. Note that the use of a local basis has other advantages, as pointed out in [14]. In particular, in the limit of certain large molecules, use of a local basis allows one to truncate terms from the Hamiltonian so that there are $\widetilde{{ \mathcal O }}({N}^{2})$ terms instead of ${ \mathcal O }({N}^{4})$ terms. However, theorem 1 exploits an entirely different property of basis locality which does not require any approximation from truncating terms.

The spatial encoding of equation (5) requires ${\rm{\Theta }}(N)$ qubits, one for each spin-orbital; under the Jordan-Wigner transformation, the state of each qubit indicates the occupation of a corresponding spin-orbital. Many states representable in second quantization are inaccessible to molecular systems due to symmetries in the Hamiltonian. For instance, molecular wavefunctions are eigenstates of the total spin operator so the total angular momentum is a good quantum number, and this insight can be used to find a more efficient spatial encoding [45, 46]. Similarly, the Hamiltonian in equation (5) commutes with the number operator, ν, whose expectation value gives the number of electrons, η,

Equation (13)

$$\nu =\sum_{j=1}^{N}{a}_{j}^{\dagger }{a}_{j},\qquad \langle \nu \rangle =\eta$$

Following the procedure in [11], our algorithm makes use of an encoding which reduces the number of qubits required by recognizing η as a good quantum number.

Conservation of particle number implies there are only $\xi =\left(\genfrac{}{}{0em}{}{N}{\eta }\right)$ valid configurations of these electrons, but the second-quantized Hilbert space has dimension ${2}^{N}$, which is exponentially larger than ξ for fixed η. We work in the basis of Slater determinants, which are explicitly antisymmetric functions of both space and spin associated with a particular η-electron configuration. We denote these states as $| \alpha \rangle =| {\alpha }_{0},{\alpha }_{1},\,\cdots ,\,{\alpha }_{\eta -1}\rangle $, where ${\alpha }_{i}\in \{1,\,\ldots ,\,N\}$ and $\alpha \in \{1,\,\ldots ,\,{N}^{\eta }\}$. We emphasize that ${\alpha }_{i}$ is merely an integer which indexes a particular spin-orbital function ${\varphi }_{{\alpha }_{i}}(\vec{r})$. While each configuration requires a specification of η occupied spin-orbitals, there is no sense in which ${\alpha }_{i}$ is associated with 'electron i' since fermions are indistinguishable. Specifically,

Equation (14)

$$\langle {\vec{r}}_{0},\ldots ,{\vec{r}}_{\eta -1}|\alpha \rangle =\frac{1}{\sqrt{\eta !}}\left|\begin{array}{cccc}{\varphi }_{{\alpha }_{0}}({\vec{r}}_{0}) & {\varphi }_{{\alpha }_{1}}({\vec{r}}_{0}) & \cdots & {\varphi }_{{\alpha }_{\eta -1}}({\vec{r}}_{0})\\ {\varphi }_{{\alpha }_{0}}({\vec{r}}_{1}) & {\varphi }_{{\alpha }_{1}}({\vec{r}}_{1}) & \cdots & {\varphi }_{{\alpha }_{\eta -1}}({\vec{r}}_{1})\\ \vdots & \vdots & \ddots & \vdots \\ {\varphi }_{{\alpha }_{0}}({\vec{r}}_{\eta -1}) & {\varphi }_{{\alpha }_{1}}({\vec{r}}_{\eta -1}) & \cdots & {\varphi }_{{\alpha }_{\eta -1}}({\vec{r}}_{\eta -1})\end{array}\right|$$

where the bars enclosing the matrix in equation (14) denote a determinant. Because determinants have the property that they are antisymmetric under exchange of any two rows, this construction ensures that our wavefunction obeys the Pauli exclusion principle. We note that although this determinant can be written equivalently in different orders (e.g. by swapping any two pairs of orbital indices), we avoid this ambiguity by requiring the Slater determinants to only be written in ascending order of spin-orbital indices.
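This antisymmetry is easy to demonstrate numerically: evaluating the determinant of equation (14) after exchanging two electron coordinates flips the sign of the amplitude. A sketch with toy one-dimensional "orbitals" (hypothetical functions chosen only for illustration):

```python
import numpy as np
from math import factorial

# Toy single-particle functions standing in for spin-orbitals.
orbitals = [
    lambda r: np.exp(-r**2),
    lambda r: r * np.exp(-r**2),
    lambda r: (2 * r**2 - 1) * np.exp(-r**2),
]

def slater_amplitude(alpha, coords):
    """<r_0,...,r_{eta-1}|alpha> via the determinant of equation (14)."""
    eta = len(alpha)
    M = np.array([[orbitals[a](r) for a in alpha] for r in coords])
    return np.linalg.det(M) / np.sqrt(factorial(eta))

alpha = (0, 1, 2)
coords = [0.3, -0.7, 1.1]
swapped = [coords[1], coords[0], coords[2]]  # exchange two electrons

a1 = slater_amplitude(alpha, coords)
a2 = slater_amplitude(alpha, swapped)
print(a1, a2)  # equal magnitude, opposite sign: Pauli antisymmetry
```

Placing two electrons at the same coordinate makes two rows identical and the amplitude vanish, which is the Pauli exclusion principle in this language.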

The representation of the wavefunction introduced in [11] uses η distinct registers to encode the occupied set of spin-orbitals, thus requiring ${\rm{\Theta }}(\eta \mathrm{log}N)=\widetilde{{ \mathcal O }}(\eta )$ qubits. However, it would be possible to use a further-compressed representation of the wavefunction based on the direct enumeration of all Slater determinants, requiring only ${\rm{\Theta }}(\mathrm{log}\xi )$ qubits. When using very small basis sets (such as the minimal basis), it will occasionally be the case that the spatial overhead of ${\rm{\Theta }}(N)$ for the second-quantized algorithm is actually less than the spatial complexity of our algorithm. However, for a fixed η, the CI matrix encoding requires exponentially fewer qubits.
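The three qubit counts discussed above can be compared directly. For a minimal-basis case such as $N=8$, $\eta =4$ the second-quantized encoding is indeed smaller, while for fixed $\eta $ and growing $N$ the CI encodings win; a short sketch (function names are ours):

```python
from math import comb, ceil, log2

def qubits_second_quantized(N, eta):
    return N                         # one qubit per spin-orbital

def qubits_ci_registers(N, eta):
    return eta * ceil(log2(N))       # eta registers of ceil(log2 N) qubits [11]

def qubits_ci_enumerated(N, eta):
    return ceil(log2(comb(N, eta)))  # enumerate all xi = C(N, eta) determinants

for N, eta in [(8, 4), (100, 10), (1000, 20)]:
    print(N, eta,
          qubits_second_quantized(N, eta),
          qubits_ci_registers(N, eta),
          qubits_ci_enumerated(N, eta))
```

The enumerated encoding is the most compact but complicates arithmetic on orbital indices, which is why the register encoding of [11] is used here.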

4. The CI matrix decomposition

The molecular Hamiltonian expressed in the basis of Slater determinants is known to chemists as the CI matrix. Elements of the CI matrix are computed according to the Slater-Condon rules [50], which we will express in terms of the one-electron and two-electron integrals in equations (7) and (8). In order to motivate our 1-sparse decomposition, we state the Slater-Condon rules for computing the matrix element

Equation (15)

$${H}^{\alpha \beta }=\langle \alpha |H|\beta \rangle$$

by considering the spin-orbitals which differ between the determinants $| \alpha \rangle $ and $| \beta \rangle $ [50]:

  • 1.  
    If $| \alpha \rangle $ and $| \beta \rangle $ contain the same spin-orbitals ${\{{\chi }_{i}\}}_{i=1}^{\eta }$ then we have a diagonal element
    Equation (16)
    $$\langle \alpha |H|\alpha \rangle =\sum_{i=1}^{\eta }{h}_{{\chi }_{i}{\chi }_{i}}+\sum_{i=1}^{\eta }\sum_{j=i+1}^{\eta }\left({h}_{{\chi }_{i}{\chi }_{j}{\chi }_{i}{\chi }_{j}}-{h}_{{\chi }_{i}{\chi }_{j}{\chi }_{j}{\chi }_{i}}\right)$$
  • 2.  
    If $| \alpha \rangle $ and $| \beta \rangle $ differ by exactly one spin-orbital such that $| \alpha \rangle $ contains spin-orbital k where $| \beta \rangle $ contains spin-orbital ${\ell }$, but otherwise contain the same spin-orbitals ${\{{\chi }_{i}\}}_{i=1}^{\eta -1}$, then
    Equation (17)
    $$\langle \alpha |H|\beta \rangle ={h}_{k{\ell }}+\sum_{i=1}^{\eta -1}\left({h}_{k{\chi }_{i}{\ell }{\chi }_{i}}-{h}_{k{\chi }_{i}{\chi }_{i}{\ell }}\right)$$
  • 3.  
    If $| \alpha \rangle $ and $| \beta \rangle $ differ by exactly two spin-orbitals such that occupied spin-orbital i in $| \alpha \rangle $ is replaced with spin-orbital k in $| \beta \rangle $, and occupied spin-orbital j in $| \alpha \rangle $ is replaced with spin-orbital ${\ell }$ in $| \beta \rangle $, then
    Equation (18)
    $$\langle \alpha |H|\beta \rangle ={h}_{ijk{\ell }}-{h}_{ij{\ell }k}$$
  • 4.  
    If $| \alpha \rangle $ and $| \beta \rangle $ differ by more than two spin-orbitals,
    Equation (19)
    $$\langle \alpha |H|\beta \rangle =0$$

These rules assume that the occupied orbitals of α and β are listed in a corresponding order, so that matching occupied orbitals appear in the same positions. In contrast, we give the lists of occupied orbitals in ascending order, so before applying the rules we must reorder the lists. Reordering the occupied orbitals introduces a sign flip on the state for an odd permutation, and this sign must be included when using the rules above.
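The bookkeeping just described (counting differing orbitals between two ascending-order determinants and tracking the permutation sign) can be sketched as follows. This is a simplified toy version written by us: the pairing of differing orbitals in the two-difference case is naive, and it uses the standard parity rule that moving an orbital from position $i$ in α to position $k$ in β costs a sign ${(-1)}^{i+k}$:

```python
def diff_and_sign(alpha, beta):
    """Count differing spin-orbitals between two ascending-order determinants
    and compute the parity sign from aligning the common orbitals.
    alpha, beta: ascending tuples of occupied spin-orbital indices."""
    sa, sb = set(alpha), set(beta)
    only_a = sorted(sa - sb)
    only_b = sorted(sb - sa)
    ndiff = len(only_a)
    if ndiff > 2:
        return ndiff, 0  # matrix element is zero (equation (19)); sign irrelevant
    # Parity: each differing orbital contributes its (0-indexed) position
    # in alpha plus its partner's position in beta.
    perm = sum(alpha.index(x) + beta.index(y) for x, y in zip(only_a, only_b))
    sign = 1 if perm % 2 == 0 else -1
    return ndiff, sign

print(diff_and_sign((1, 2, 3), (2, 3, 4)))  # single excitation, sign +1
print(diff_and_sign((1, 2, 3), (1, 3, 4)))  # single excitation, sign -1
```

The returned pair selects which Slater-Condon rule applies and which overall sign multiplies the resulting integrals.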

In general, there is no efficient way to decompose the CI matrix into a polynomial number of tensor products of Pauli operators. It is thus inefficient to directly simulate this Hamiltonian in the same fashion with which we simulate local Hamiltonians. However, the CI matrix is sparse and there exist techniques for simulating arbitrary sparse Hamiltonians. A d-sparse matrix is one which contains at most d nonzero elements in each row and column. As discussed in [11, 31], the Slater-Condon rules imply that the sparsity of the CI matrix is

Equation (20)

$$d=\left(\genfrac{}{}{0em}{}{\eta }{2}\right)\left(\genfrac{}{}{0em}{}{N-\eta }{2}\right)+\eta \,(N-\eta )+1$$

Because N is always greater than η, we find that the CI matrix is d-sparse where $d\in { \mathcal O }({\eta }^{2}{N}^{2})$. This should be compared with the second-quantized Hamiltonian which is also d-sparse, but where $d\in { \mathcal O }({N}^{4})$. Our strategy here parallels the second-quantized decomposition, but works with the first-quantized wavefunction. This decomposition is explained in four steps, as follows.

  • A.  
    Decompose the molecular Hamiltonian into ${ \mathcal O }({\eta }^{2}{N}^{2})$ 1-sparse matrices.
  • B.  
    Further decompose each of these 1-sparse matrices into 1-sparse matrices with entries proportional to a sum of a constant number of molecular integrals.
  • C.  
    Decompose those 1-sparse matrices into sums approximating the integrals in equations (8) and (9).
  • D.  
    Decompose the integrands from those integrals into sums of self-inverse matrices.
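The counting behind step A can be sanity-checked by brute force on tiny systems, assuming the sparsity of equation (20) takes the form $d=1+\eta (N-\eta )+\left(\genfrac{}{}{0em}{}{\eta }{2}\right)\left(\genfrac{}{}{0em}{}{N-\eta }{2}\right)$ (diagonal, single and double excitations) implied by the Slater-Condon rules:

```python
from itertools import combinations
from math import comb

def brute_force_sparsity(N, eta):
    """Max nonzeros per row of the CI matrix: determinants differing from a
    reference by at most two spin-orbitals (Slater-Condon rules)."""
    best = 0
    for alpha in combinations(range(N), eta):
        row = sum(1 for beta in combinations(range(N), eta)
                  if len(set(alpha) - set(beta)) <= 2)
        best = max(best, row)
    return best

def formula(N, eta):
    # diagonal + single excitations + double excitations
    return 1 + eta * (N - eta) + comb(eta, 2) * comb(N - eta, 2)

for N, eta in [(5, 2), (6, 3), (7, 3)]:
    assert brute_force_sparsity(N, eta) == formula(N, eta)
    print(N, eta, formula(N, eta))
```

For $\eta \lt N$ the dominant term is the double-excitation count, giving $d\in { \mathcal O }({\eta }^{2}{N}^{2})$ as stated in the text.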

4.1. Decomposition into 1-sparse matrices

In order to decompose the molecular Hamiltonian into 1-sparse matrices, we require a unique and reversible graph coloring between nodes (Slater determinants). We introduce such a graph coloring here, with the details of its construction and proof of its properties given in appendix A. The graph coloring can be summarized as follows.

  • 1.  
    Perform the simulation under ${\sigma }_{x}\otimes H$, where ${\sigma }_{x}$ is the Pauli x matrix, in order to create a bipartite Hamiltonian of the same sparsity as H.
  • 2.  
    Label the 'left' nodes α and the 'right' nodes β in the bipartite graph. We seek a procedure to take α to β, or vice versa, with as little additional information as possible, and without redundancy or ambiguity.
  • 3.  
    Provide an 8-tuple $\gamma =({a}_{1},{b}_{1},i,p,{a}_{2},{b}_{2},j,q)$ which determines the coloring. The coloring must uniquely determine α given β or vice versa. Using the 8-tuples, proceed via either Case 1, 2, 3, or 4 in appendix A to determine the other set of spin-orbitals, using an intermediate list of orbitals χ. The 4-tuples $({a}_{1},{b}_{1},i,p)$ and $({a}_{2},{b}_{2},j,q)$ each define a differing orbital. For a single difference, we can set p = 0, and for no differences, we can set $p=q=0$.

Step 1 is used to ensure that the graph is bipartite. That means the nodes can be partitioned into two sets, with connections only between these two sets, and not within them. These sets correspond to basis states with the ancilla qubit (added in step 1) being in state $| 0\rangle $ or $| 1\rangle $. The 'left' and 'right' nodes then correspond to those in each of these two sets. The 8-tuple in step 3 is composed of two 4-tuples, for each differing orbital. For the first 4-tuple, the variables are used as follows.

i—The position of the differing orbital.

p—The shift in the position of the orbital.

a1—A bit equal to 0 (or 1) if i gives the position of the differing orbital in α (or β).

b1—A bit resolving any remaining ambiguity in the graph coloring.

The other 4-tuple is equivalent for the other differing orbital, with i replaced by j and so forth.

The basic idea is that we would like to give the positions i and j of those orbitals which differ in α, as well as by how much the occupied orbital indices shift, which we denote by p and q. This would allow us to determine β from α. However, it does not allow us to unambiguously determine α from β. To explain how to resolve this ambiguity, we consider the case of a single differing orbital. We will denote by i the position of the differing orbital in α, and by k the position of the differing orbital in β.

Consider the example in figure 1(a): given i which is the position in α, the position k in β can be immediately determined. But given β, multiple potential positions of occupied orbitals would need to be tested to see if they put the occupied orbital in position i = 2 in α. In this case, given β there is only one orbital which can be shifted to position 2 for α so the position in β is unambiguous. Now consider figure 1(b): multiple positions in β could lead to position 2 in α. The difference between the two cases is that in figure 1(a) there is a larger spacing between orbitals for β, whereas in figure 1(b) there is a larger spacing for α. More specifically, for figure 1(a) the spacing between ${\alpha }_{1}$ and ${\alpha }_{3}$ is 3, whereas the spacing between ${\beta }_{2}$ and ${\beta }_{4}$ is larger at 5. For figure 1(b) the spacing between ${\alpha }_{1}$ and ${\alpha }_{3}$ is 5, whereas the spacing between ${\beta }_{2}$ and ${\beta }_{4}$ is smaller at 2. It is the spacing between the occupied orbitals adjacent to the one that is moved that should be compared.


Figure 1. Example of the 1-sparse coloring, where i is the position of the occupied orbital in α that must be moved. (a) i = 2, p = 4 is sufficient to determine β from α, as well as to determine α from β. (b) i = 2, p = 5 is sufficient to determine β from α, but not the reverse: subtracting p = 5 from ${\beta }_{2}$, ${\beta }_{3}$, or ${\beta }_{4}$ all give different valid values for ${\alpha }_{i}={\alpha }_{2}$. The spacing condition means that we would need to give the position of the occupied orbital for β instead.


For the situation in figure 1(b), rather than specifying the position in α we should specify the position in β to resolve the ambiguity. The bit a1 determines whether we are specifying the position in α or in β; this is done depending on the relative spacing of the adjacent occupied orbitals in the two. However, this spacing condition does not completely resolve the ambiguity: there are potentially two different choices for the occupied orbital. The choice is made by the bit b1. The coloring for the two differing orbitals is done by doing this twice with an intermediate list of occupied orbitals χ. There are ${ \mathcal O }({\eta }^{2}{N}^{2})$ possible colors: there are two possible choices of each of the bits a1, a2, b1, and b2, η choices each of i and j, and N choices each of p and q.
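The ambiguity that the bits resolve can be reproduced in a few lines: shifting the i-th occupied orbital (0-indexed below, so i = 1 corresponds to position 2 in figure 1) by p and then trying to undo the shift from β can yield several consistent choices of α. A toy sketch written by us, using the figure 1(b)-style example:

```python
def shift_orbital(occ, i, p):
    """Move the i-th occupied orbital (0-indexed) of the ascending list occ
    up by p; return the new ascending list, or None if the target is occupied."""
    target = occ[i] + p
    if target in occ:
        return None
    return sorted(occ[:i] + occ[i + 1:] + [target])

def invert_candidates(beta, i, p):
    """All ascending lists alpha whose i-th orbital, shifted by p, gives beta:
    try undoing the shift from every occupied orbital of beta."""
    cands = []
    for k in range(len(beta)):
        src = beta[k] - p
        if src < 0 or src in beta:
            continue
        alpha = sorted(beta[:k] + beta[k + 1:] + [src])
        if shift_orbital(alpha, i, p) == list(beta):
            cands.append(alpha)
    return cands

# Given beta, several alphas are consistent with the same (i, p), which is
# why the coloring also needs the bits a and b.
beta = [1, 7, 8, 9]
print(invert_candidates(beta, 1, 5))  # three consistent choices of alpha
```

Here subtracting p = 5 from any of the three largest occupied orbitals of β yields a valid α, matching the ambiguity described in the caption of figure 1(b).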

4.2. Decomposition into hij and hijkℓ

Each 1-sparse matrix from section 4.1 is associated with some 8-tuple $\gamma =({a}_{1},{b}_{1},i,p,{a}_{2},{b}_{2},j,q)$. However, without further modification, some of these 1-sparse matrices have entries given by a sum over a number of molecular integrals that grows with η, namely the diagonal terms as in equation (16), and the single-orbital terms as in equation (17). Here, we further decompose those matrices into a sum of 1-sparse matrices ${H}_{\gamma }$, which have entries proportional to the sum of a constant number of molecular integrals, in order to remove this changing upper bound.

We want to have a new set of 1-sparse matrices, each with entries corresponding to a single term in the sum over molecular integrals. To be more specific, the combinations of γ correspond to terms in equation (16) to equation (18) as follows.

  • 1.  
    If $p=q=0$, this indicates that we have a diagonal 1-sparse matrix. In equation (16), the entries on the diagonal would be a sum of ${ \mathcal O }({\eta }^{2})$ terms. Since we have freedom in how to use i and j, we use them to index terms in this sum. When i = j for $p=q=0$, we take the 1-sparse matrix to have diagonal elements given by ${h}_{{ii}}$. If $i\lt j$ for $p=q=0$ we take the 1-sparse matrix to have diagonal entries ${h}_{{\chi }_{i}{\chi }_{j}{\chi }_{i}{\chi }_{j}}-{h}_{{\chi }_{i}{\chi }_{j}{\chi }_{j}{\chi }_{i}}$. We do not allow tuples γ such that $i\gt j$ for $p=q=0$ (alternatively we could simply return zero in this case). The overall result is that the sum over i and j of the 1-sparse matrices for γ with $p=q=0$ yields the desired sum in equation (16).
  • 2.  
    Next, if p = 0 and $q\ne 0$, this indicates that we have a 1-sparse matrix with entries where α and β differ by only one spin-orbital. According to equation (17), each entry would normally be a sum of ${ \mathcal O }(\eta )$ terms. Instead, when p = 0 and $q\ne 0$, we use the value of i to index terms in the sum in equation (17), yielding a nonzero result only when orbital i is occupied in the Slater determinant. In particular, the 1-sparse matrix has entries ${h}_{k{\chi }_{i}{\ell }{\chi }_{i}}-{h}_{k{\chi }_{i}{\chi }_{i}{\ell }}$. We allow an additional value of i to indicate a 1-sparse matrix with entries ${h}_{k{\ell }}$. The sum over 1-sparse matrices for the different values of i then gives the desired sum in equation (17). We do not allow γ such that q = 0 but $p\ne 0$.
  • 3.  
    Finally, if both p and q are nonzero, then we have a 1-sparse matrix with entries where α and β differ by two orbitals. In this case, there is no sum in equation (18), so there is no additional decomposition needed.
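The three cases can be summarized in a small dispatch sketch. This Python fragment (schematic; it ignores the bits a1, b1, a2, b2 and returns only a label for the kind of term each Hγ carries) classifies a tuple $\gamma =(i,p,j,q)$:

```python
def entry_type(i, p, j, q):
    """Classify the color tuple (i, p, j, q) into the three cases above,
    returning a label for the molecular-integral content of H_gamma."""
    if p == 0 and q == 0:                      # case 1: diagonal matrices
        if i == j:
            return "diagonal, h_ii"
        if i < j:
            return "diagonal, h_{ij ij} - h_{ij ji}"
        return "disallowed"                    # i > j tuples are excluded
    if p == 0 and q != 0:                      # case 2: differ in one orbital
        return "single-orbital term"
    if q == 0:                                 # p != 0 with q = 0 is excluded
        return "disallowed"
    return "double-orbital, single integral"   # case 3: differ in two orbitals
```

Summing the labeled matrices over all allowed tuples reassembles the sums in equations (16)–(18) without any entry depending on η.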

Combining these three steps we find that the decomposition into 1-sparse matrices Hγ can be achieved with the indices $({a}_{1},{b}_{1},i,p,{a}_{2},{b}_{2},j,q)$. Thus, there are ${ \mathcal O }({\eta }^{2}{N}^{2})$ terms without any redundancies. Note that sorting of the spin-orbital indices requires only $\widetilde{{ \mathcal O }}(\eta )$ gates, which is less than the complexity of evaluating the spin-orbitals. In the following sections, we denote the total number of terms given by the above decomposition by Γ. The sum over Hγ yields the complete CI matrix,

Equation (21)

4.3. Discretizing the integrals

Next we consider discretization of the integrals for hij and hijkℓ. In [42] it is shown how to simulate Hamiltonian evolution with an exponential improvement in the scaling with $1/\epsilon $, as compared to methods based on Trotter formulas. In this approach, the time-ordered exponential for the evolution operator is approximated by a Taylor series truncated at order K. The time t is broken into r segments, and on each segment the integrals are discretized as follows:

Equation (22)

where ${ \mathcal T }$ is the time-ordering operator. In our case the Hamiltonian does not change in time, so the time-ordering is unimportant.

The Hamiltonian is expanded as a sum of Hγ as in equation (21), and each of those terms has matrix entries that can be given in the form of an integral as

Equation (23)

In cases where ${H}_{\gamma }^{\alpha \beta }$ corresponds to hij the integral is over a three-dimensional region, and where ${H}_{\gamma }^{\alpha \beta }$ corresponds to hijkℓ it is over a six-dimensional region, so $\vec{z}$ represents up to six parameters.

Ideally, each integral can be truncated to a finite domain D with volume ${ \mathcal V }$. Using a set of grid points ${\vec{z}}_{\rho }$, we can approximate the integral by

Equation (24)

The complexity will then be logarithmic in the number of points in the sum, μ, and linear in the volume times the maximum value of the integrand.
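As a toy illustration of this step, the following sketch (ours, not the paper's actual grid) approximates a one-dimensional integral with a μ-point Riemann sum over a truncated domain:

```python
import math

def riemann_sum(f, lo, hi, mu):
    """Approximate the integral of f over [lo, hi] with a mu-point
    midpoint Riemann sum: sum_rho f(z_rho) * (V / mu)."""
    width = (hi - lo) / mu
    return sum(f(lo + (rho + 0.5) * width) for rho in range(mu)) * width

# Gaussian integrand truncated to [-8, 8]; the exact value over all of R
# is sqrt(pi), and the truncation error is exponentially small.
approx = riemann_sum(lambda z: math.exp(-z * z), -8.0, 8.0, 4000)
```

The error has two parts, mirroring the later analysis: a truncation error from cutting the domain, and a discretization error that shrinks as μ grows, while the cost of selecting a grid point is only logarithmic in μ.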

In practice the situation is more complicated, because the integrals are not all alike. Beyond their dimensionality (three for hij and six for hijkℓ), they differ in their regions of integration, and some are expressed in spherical polar coordinates. To account for these differences, it is better to write the discretized integral in the form

Equation (25)

The Hamiltonian Hγ can then be written as the sum

Equation (26)

As discussed in [41], the discretization is possible because the integrands can be chosen to decay exponentially [50]. The required properties of the orbitals are given in theorem 1. Here we present a more precise formulation of the required properties, and provide specific results on the number of terms needed. We make the following three assumptions about the spin-orbitals ${\varphi }_{{\ell }}$.

  • 1.  
    There exists a positive real number ${\varphi }_{{\rm{\max }}}$ such that, for all spin-orbital indices ${\ell }$ and for all $\vec{r}\in {{\mathbb{R}}}^{3}$,
    Equation (27)
  • 2.  
    For each spin-orbital index ${\ell }$, there exists a vector ${\vec{c}}_{{\ell }}\in {{\mathbb{R}}}^{3}$ (called the center of ${\varphi }_{{\ell }}$) and a positive real number ${x}_{{\rm{\max }}}$ such that, whenever $\parallel \vec{r}-{\vec{c}}_{{\ell }}\parallel \geqslant {x}_{{\rm{\max }}}$ for some $\vec{r}\in {{\mathbb{R}}}^{3}$,
    Equation (28)
    where α is some positive real constant.
  • 3.  
    For each spin-orbital index ${\ell }$, ${\varphi }_{{\ell }}$ is twice-differentiable and there exist positive real constants ${\gamma }_{1}$ and ${\gamma }_{2}$ such that
    Equation (29)
    and
    Equation (30)
    for all $\vec{r}\in {{\mathbb{R}}}^{3}$.

Note that α, ${\gamma }_{1}$ and ${\gamma }_{2}$ are dimensionless constants, whereas xmax has units of distance, and ${\varphi }_{{\rm{\max }}}$ has the same units as ${\varphi }_{{\ell }}$. The conditions of theorem 1 mean that ${\varphi }_{{\rm{\max }}}$ and ${x}_{{\rm{\max }}}$ grow at most logarithmically with the number of spin-orbitals. Note that we use xmax in a different way than in [41], where it was the size of the cutoff on the region of integrals, satisfying ${x}_{\max }={ \mathcal O }(\mathrm{log}({Nt}/\epsilon ))$. Here we take xmax to be the size scale of the orbitals, independent of t and $\epsilon $, and the cutoff will be a multiple of xmax. We also assume that xmax is bounded below by a constant, so the first and second derivatives of the spin-orbitals grow no more than logarithmically as a function of the number of spin-orbitals.
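To see why a cutoff that is only a multiple of xmax suffices, note that under a schematic decay bound of the form $|{\varphi }_{{\ell }}(\vec{r})|\leqslant {\varphi }_{\max }\,{e}^{-\alpha \parallel \vec{r}-{\vec{c}}_{{\ell }}\parallel /{x}_{\max }}$ (our reading of equation (28); the exact form is in appendix B), the radius needed to push the bound below a tolerance δ grows only logarithmically with $1/\delta $:

```python
import math

def cutoff_radius(phi_max, x_max, alpha, delta):
    """Smallest radius R with phi_max * exp(-alpha * R / x_max) <= delta,
    i.e. R = (x_max / alpha) * ln(phi_max / delta)."""
    return (x_max / alpha) * math.log(phi_max / delta)

# Squaring the precision (1e-3 -> 1e-6) merely doubles the radius, so the
# cutoff stays a modest multiple of x_max even for tiny delta.
```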

We next define notation used for the integrals for hij and hijkℓ. These integrals are

Equation (31)

Equation (32)

and

Equation (33)

for any choices of ${D}_{0},{D}_{1,q}\subseteq {{\mathbb{R}}}^{3}$ and ${D}_{2}\subseteq {{\mathbb{R}}}^{6}$. Thus

Equation (34)

and

Equation (35)

Using the assumptions on the properties of the orbitals, we can bound the number of terms needed in a Riemann sum that approximates each integral to within a specified accuracy $\delta $ (which is distinct from the accuracy $\epsilon $ of the overall simulation). These bounds are summarized in the following three lemmas.

Lemma 1. Let $\delta $ be any real number that satisfies

Equation (36)

where

Equation (37)

Then ${S}_{{ij}}^{(0)}({{\mathbb{R}}}^{3})$ can be approximated to within error $\delta $ using a Riemann sum with

Equation (38)

terms, where the terms in the sum have absolute value no larger than

Equation (39)

Lemma 2. Let $\delta $ be any real number that satisfies

Equation (40)

where

Equation (41)

Then ${S}_{{ij}}^{(1,q)}({{\mathbb{R}}}^{3})$ can be approximated to within error $\delta $ using a Riemann sum with

Equation (42)

terms, where the terms in the sum have absolute value no larger than

Equation (43)

Lemma 3. Let $\delta $ be any real number that satisfies

Equation (44)

where

Equation (45)

Then ${S}_{{ijk}{\ell }}^{(2)}({{\mathbb{R}}}^{6})$ can be approximated to within error $\delta $ using a Riemann sum with

Equation (46)

terms, where the terms in the sum have absolute value no larger than

Equation (47)

The conditions in equations (36), (40) and (44) are only used to ensure that we are considering a reasonable combination of parameters, and not, for example, a very large allowable error $\delta $ or a small value of xmax. We prove these lemmas in appendix B: lemma 1 in appendix B.2, lemma 2 in appendix B.3 and lemma 3 in appendix B.4. In discretizing these integrals it is important that the integrands be Hermitian, because we need ${{ \mathcal H }}_{\gamma ,\rho }$ to be Hermitian. As discretized in the proofs in appendix B, the integrands are not Hermitian, because the regions of integration are chosen in a non-symmetric way. For example, the region of integration for ${S}_{{ij}}^{(0)}$ is centered on the orbital ${\varphi }_{i}$, so the integrand is not symmetric. It is simple to symmetrize the integrands, however: for ${S}_{{ij}}^{(0)}$ we can add ${({S}_{{ji}}^{(0)})}^{* }$ and divide by two. This ensures that the integrand is symmetric, at the cost of only a factor of two overhead in the number of terms in the sum.

As a consequence of these Lemmas, we see that the terms of any Riemann sum approximation to one of the integrals that define the Hamiltonian coefficients hij and hijkℓ have absolute values bounded by

Equation (48)

where μ is the number of terms in the Riemann sum and $\delta $ is the desired accuracy of the approximation. Here we have taken Zq to be ${ \mathcal O }(1)$.

4.4. Decomposition into self-inverse operators

The truncated Taylor series strategy introduced in [42] requires that we can represent our Hamiltonian as a weighted sum of unitaries. To do so, we follow a procedure in [43] which shows how 1-sparse matrices can be decomposed into a sum of self-inverse matrices with eigenvalues ±1. Specifically, we decompose each ${\aleph }_{\gamma ,\rho }$ into a sum of $M\in {\rm{\Theta }}({\max }_{\gamma ,\rho }{\parallel {\aleph }_{\gamma ,\rho }\parallel }_{{\rm{\max }}}/\zeta )$ 1-sparse unitary matrices of the form

Equation (49)

where ζ is the desired precision of the decomposition.

First, we construct a new matrix ${\widetilde{\aleph }}_{\gamma ,\rho }$ by rounding each entry of ${\aleph }_{\gamma ,\rho }$ to the nearest multiple of $2\,\zeta $, so that ${\parallel {\aleph }_{\gamma ,\rho }-{\widetilde{\aleph }}_{\gamma ,\rho }\parallel }_{{\rm{\max }}}\leqslant \zeta $. We define ${C}_{\gamma ,\rho }\equiv {\widetilde{\aleph }}_{\gamma ,\rho }/\zeta $ so that ${\parallel {C}_{\gamma ,\rho }\parallel }_{{\rm{\max }}}\leqslant 1+\parallel {\aleph }_{\gamma ,\rho }{\parallel }_{{\rm{\max }}}/\zeta $. We decompose each ${C}_{\gamma ,\rho }$ into $\parallel {C}_{\gamma ,\rho }{\parallel }_{{\rm{\max }}}$ 1-sparse matrices, indexed by m, with entries in $\{0,-2,2\}$, as follows:

Equation (50)

Finally, we remove zero eigenvalues by further dividing each ${C}_{\gamma ,\rho ,m}$ into two matrices ${C}_{\gamma ,\rho ,m,1}$ and ${C}_{\gamma ,\rho ,m,2}$ with entries in $\{0,-1,+1\}$. For every all-zero column β in ${C}_{\gamma ,\rho ,m}$, we choose α so that $(\alpha ,\beta )$ is the location of the nonzero entry in column β in the original matrix ${{ \mathcal H }}_{\gamma ,\rho }$. Then the matrix ${C}_{\gamma ,\rho ,m,1}$ has +1 in the $(\alpha ,\beta )$ position, and ${C}_{\gamma ,\rho ,m,2}$ has −1 in the $(\alpha ,\beta )$ position. Thus, we have decomposed each Hγ into a sum of 1-sparse, unitary matrices with eigenvalues ±1.
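The scalar analogue of this decomposition is easy to state. The sketch below (ours, acting on a single matrix entry rather than a 1-sparse matrix) rounds a value to a multiple of $2\,\zeta $ and expresses it as ζ times a list of ±1 coefficients, mirroring the split into the ${C}_{\gamma ,\rho ,m}$ and then into the two self-inverse pieces:

```python
def self_inverse_coefficients(value, zeta):
    """Return a list of +/-1 coefficients such that zeta * sum(coeffs)
    equals `value` rounded to the nearest multiple of 2*zeta, so the
    representation error is at most zeta.  Each {0, -2, +2} slot of the
    matrix decomposition contributes a pair of equal +/-1 entries."""
    c = 2 * round(value / (2 * zeta))   # even integer: (rounded value) / zeta
    sign = 1 if c >= 0 else -1
    return [sign, sign] * (abs(c) // 2)
```

The number of coefficients scales as $| \mathrm{value}| /\zeta $, which is where the factor $M\in {\rm{\Theta }}({\max }_{\gamma ,\rho }{\parallel {\aleph }_{\gamma ,\rho }\parallel }_{{\rm{\max }}}/\zeta )$ comes from.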

We now use a simplified notation where ${\ell }$ corresponds to the triples $(s,m,\gamma )$, and ${\aleph }_{{\ell },\rho }\equiv {C}_{\gamma ,\rho ,m,s}$. We denote the number of values of ${\ell }$ by L, and can write the Hamiltonian as a sum of ${ \mathcal O }({N}^{4}\mu M)$ unitary, 1-sparse matrices

Equation (51)

That is, the decomposition is of the form in equation (2), but in this case ${W}_{{\ell }}$ is independent of ${\ell }$.

To summarize, we decompose the molecular Hamiltonian into a sum of self-inverse matrices in four steps:

  • 1.  
    Decompose the molecular Hamiltonian into a sum of 1-sparse matrices using the bipartite graph coloring given in appendix A, summarized in section 4.1.
  • 2.  
    Decompose these 1-sparse matrices further, such that each entry corresponds to a single term in the sum over molecular integrals. This does not change the number of terms, but simplifies calculations.
  • 3.  
    Discretize the integrals over a finite region of space, subject to the constraints and bounds given in [41].
  • 4.  
    Decompose into self-inverse operators by the method proposed in [43].

This decomposition gives an overall gate count scaling contribution of ${ \mathcal O }({\eta }^{2}{N}^{2})$.

5. The CI matrix oracle

In this section, we discuss the construction of the circuit referred to in our introduction as ${\rm{SELECT}}({ \mathcal H })$, which applies the self-inverse operators in a controlled way. As discussed in [41], the truncated Taylor series technique of [42] can be used with a selection oracle for an integrand which defines the molecular Hamiltonian. This method will then effect evolution under this Hamiltonian with an exponential increase in precision over Trotter-based methods. For clarity of exposition, we describe the construction of ${\rm{SELECT}}({ \mathcal H })$ in terms of two smaller oracle circuits which can be queried to learn information about the 1-sparse unitary integrands. This information is then used to evolve an arbitrary quantum state under a specific 1-sparse unitary.

The first of the oracles described here is denoted as ${Q}^{{\rm{col}}}$ and is used to query information about the sparsity pattern of a particular 1-sparse Hermitian matrix from equation (21). The second oracle is denoted as ${Q}^{{\rm{val}}}$ and is used to query information about the value of integrands for elements in the CI matrix. We construct ${Q}^{{\rm{val}}}$ by making calls to a circuit constructed in [41] where it is referred to as '${\rm{SAMPLE}}(w)$'. The purpose of ${\rm{SAMPLE}}(w)$ is to sample the integrands of the one-electron and two-electron integrals hij and hijkℓ in equations (8) and (9). The construction of ${\rm{SAMPLE}}(w)$ in [41] requires $\widetilde{{ \mathcal O }}(N)$ gates.

The oracle ${Q}^{{\rm{col}}}$ uses information from the index γ. The index γ is associated with the indices $({a}_{1},{b}_{1},i,p,{a}_{2},{b}_{2},j,q)$, which describe the sparsity structure of the 1-sparse Hermitian matrix ${H}_{\gamma }$ according to the decomposition in section 4.2. ${Q}^{{\rm{col}}}$ acts on a register specifying a color $| \gamma \rangle $ as well as a register containing an arbitrary row index $| \alpha \rangle $ to reveal a column index $| \beta \rangle $, so that the ordered pair (α, β) indexes the nonzero element in row α of ${H}_{\gamma }$,

Equation (52)

From the description in section 4.2, implementation of the unitary oracle ${Q}^{{\rm{col}}}$ is straightforward.

To construct ${\rm{SELECT}}({ \mathcal H })$ we need a second oracle that returns the value of the matrix elements in the decomposition. This selection oracle is queried with a register $| {\ell }\rangle =| s\rangle | m\rangle | \gamma \rangle $ which specifies which part of the 1-sparse representation we want, as well as a register $| \rho \rangle $ which indexes the grid point ρ and registers $| \alpha \rangle $ and $| \beta \rangle $ specifying the two Slater determinants. Specifically, the entries in the tuple identify the color (γ) of the 1-sparse Hermitian matrix from which the 1-sparse unitary matrix originated, which positive integer index ($m\leqslant M$) it corresponds to in the further decomposition of ${\aleph }_{\gamma ,\rho }$ into ${C}_{\gamma ,\rho ,m}$, and which part it corresponds to in the splitting of ${C}_{\gamma ,\rho ,m}$ into ${C}_{\gamma ,\rho ,m,s}$ (where $s\in \{1,2\}$).

As a consequence of the Slater-Condon rules shown in equations (16)–(19), ${Q}^{{\rm{val}}}$ can be constructed given access to ${\rm{SAMPLE}}(w)$, which samples the integrand of the integrals in equations (8) and (9) [41]. Consistent with the decomposition in section 4.2, the i and j indices in the register containing $\gamma =(i,p,j,q)$ specify the dissimilar spin-orbitals in $| \alpha \rangle $ and $| \beta \rangle $ that are needed in the integrands defined by the Slater-Condon rules; therefore, the determination of which spin-orbitals differ between $| \alpha \rangle $ and $| \beta \rangle $ can be made in ${ \mathcal O }(\mathrm{log}N)$ time (only the time needed to read their values from γ). As ${\rm{SAMPLE}}(w)$ is comprised of $\widetilde{{ \mathcal O }}(N)$ gates, ${Q}^{{\rm{val}}}$ has time complexity $\widetilde{{ \mathcal O }}(N)$ and acts as

Equation (53)

where ${{ \mathcal H }}_{{\ell },\rho }^{\alpha \beta }$ is the value of the matrix entry at $(\alpha ,\beta )$ in the self-inverse matrix ${{ \mathcal H }}_{{\ell },\rho }$. When either $| \alpha \rangle $ or $| \beta \rangle $ represents an invalid Slater determinant (with more than one occupation on any spin-orbital), we take ${{ \mathcal H }}_{{\ell },\rho }^{\alpha \beta }=0$ for $\alpha \ne \beta $. This ensures there are no transitions into Slater determinants which violate the Pauli exclusion principle. The choice of ${{ \mathcal H }}_{{\ell },\rho }^{\alpha \alpha }$ for invalid α will not affect the result, because the state will have no weight on the invalid Slater determinants.

Having constructed the column and value oracles, we are finally ready to construct ${\rm{SELECT}}({ \mathcal H })$. This involves implementing 1-sparse unitary operations. The method we describe is related to the scheme presented in [52] for evolution under 1-sparse Hamiltonians, but is simplified due to the simpler form of the operators. As in equation (4), ${\rm{SELECT}}({ \mathcal H })$ applies the term ${{ \mathcal H }}_{{\ell },\rho }$ in the 1-sparse unitary decomposition to the wavefunction $| \psi \rangle $. Writing $| \psi \rangle ={\sum }_{\alpha }{c}_{\alpha }| \alpha \rangle $, we require that ${\rm{SELECT}}({ \mathcal H })$ first call ${Q}^{{\rm{col}}}$ to obtain the columns, β, corresponding to the rows, α, for the nonzero entries of the Hamiltonian:

Equation (54)

Now that we have the row and column of the matrix element, we apply ${Q}^{{\rm{val}}}$ which causes each Slater determinant to accumulate the phase factor ${k}_{\alpha }={{ \mathcal H }}_{{\ell },\rho }^{\alpha \beta }=\pm 1$:

Equation (55)

Next, we swap the locations of α and β in order to complete multiplication by the 1-sparse unitary,

Equation (56)

Finally, we apply ${Q}^{{\rm{col}}}$ again, but this time with β in the first register. Since ${Q}^{{\rm{col}}}$ is self-inverse and always maps $| \alpha \rangle | b\rangle $ to $| \alpha \rangle | b\oplus \beta \rangle $ and $| \beta \rangle | b\rangle $ to $| \beta \rangle | b\oplus \alpha \rangle $, this allows us to uncompute the ancilla register:

Equation (57)

Note that this approach works regardless of whether the entry is on-diagonal or off-diagonal; we do not need separate schemes for the two cases. The circuit for ${\rm{SELECT}}({ \mathcal H })$ is depicted in figure 2.

Figure 2. Circuit implementing ${\rm{SELECT}}({ \mathcal H })$, which applies the term ${{ \mathcal H }}_{{\ell }}({\vec{z}}_{\rho })$ labeled by ${\ell }=(\gamma ,m,s)$ in the unitary 1-sparse decomposition to the wavefunction $| \psi \rangle $.

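The four steps of equations (54)–(57) can be mimicked classically on a sparse state vector. The sketch below (an illustration, with ${Q}^{{\rm{col}}}$ and ${Q}^{{\rm{val}}}$ supplied as plain Python callables rather than circuits) applies one self-inverse 1-sparse term to a dictionary of amplitudes:

```python
def select_h(psi, qcol, qval):
    """Apply a 1-sparse self-inverse term to psi = {row: amplitude}.
    qcol(alpha) returns the column beta paired with alpha for this color;
    qval(alpha, beta) returns the +/-1 entry of the self-inverse matrix."""
    out = {}
    for alpha, amp in psi.items():
        beta = qcol(alpha)                 # eq. (54): compute the column
        phase = qval(alpha, beta)          # eq. (55): accumulate the +/-1 phase
        out[beta] = out.get(beta, 0) + phase * amp   # eq. (56): swap registers
    return out                             # eq. (57): qcol uncomputes the ancilla
```

A sigma_x-like pairing of two determinants makes the self-inverse property visible: applying the same term twice returns the original state. Diagonal entries are handled identically, with qcol acting as the identity.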

6. Simulating Hamiltonian evolution

The simulation technique we now discuss is based on that of [42]. We partition the total time t into r segments of duration t/r. For each segment, we expand the evolution operator ${e}^{-{iHt}/r}$ in a Taylor series up to order K,

Equation (58)

Provided $r\geqslant \parallel H\parallel t$, the total simulation will have error no more than $\epsilon $ for

Equation (59)
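The super-exponential decay of the truncation error in K is easy to check numerically. The sketch below (a scalar stand-in, with x playing the role of $\parallel H\parallel t/r$ for one segment) compares the order-K partial sum of ${e}^{-{\rm{i}}x}$ to the exact value:

```python
import cmath
import math

def truncation_error(x, K):
    """|exp(-i x) - sum_{k=0}^{K} (-i x)^k / k!|, the per-segment error of
    an order-K Taylor truncation for a scalar 'Hamiltonian'."""
    partial = sum((-1j * x) ** k / math.factorial(k) for k in range(K + 1))
    return abs(cmath.exp(-1j * x) - partial)

# With x = ln 2 (the per-segment value used later), a handful of terms
# already reaches near machine precision.
```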

Using our full decomposition of the Hamiltonian from equation (51) in the Taylor series formula of equation (58), we obtain

Equation (60)

The sum in equation (60) takes the form

Equation (61)

where $\widetilde{U}$ is close to unitary and the Vj are unitary. Note that in contrast to [41], ${\beta }_{j}\geqslant 0$, consistent with the convention used in [42]. Our simulation will make use of an ancillary 'selection' register $| j\rangle =| k\rangle | {{\ell }}_{1}\rangle \cdots | {{\ell }}_{K}\rangle | {\rho }_{1}\rangle \cdots | {\rho }_{K}\rangle $ for $0\leqslant k\leqslant K$, with $1\leqslant {{\ell }}_{\upsilon }\leqslant L$ and $1\leqslant {\rho }_{\upsilon }\leqslant \mu $ for all υ. It is convenient to encode k in unary, as $| k\rangle =| {1}^{k}{0}^{K-k}\rangle $, which requires ${\rm{\Theta }}(K)$ qubits. Additionally, we encode each $| {{\ell }}_{k}\rangle $ in binary using ${\rm{\Theta }}(\mathrm{log}L)$ qubits and each $| {\rho }_{k}\rangle $ in binary using ${\rm{\Theta }}(\mathrm{log}\mu )$ qubits. We denote the total number of ancilla qubits required as J, which scales as

Equation (62)

To implement the truncated evolution operator in equation (61), we wish to prepare a superposition state over j, then apply a controlled Vj operator. Following the notation of [42], we denote the state preparation operator as B, and it has the effect

Equation (63)

where λ is a normalization factor. We can implement B in the following way. Because k is encoded in unary, we can prepare the required superposition over k with K rotations and controlled rotations, in exactly the same way as described in [41]. In addition, we apply Hadamard gates to every qubit in the $| {{\ell }}_{\upsilon }\rangle $ and $| {\rho }_{\upsilon }\rangle $ registers. This requires ${ \mathcal O }(K\mathrm{log}(\mu L))$ gates; parallelizing the Hadamard transforms leads to circuit depth ${ \mathcal O }(K)$ for B.

We then wish to implement an operator to apply the Vj which is referred to in [41, 42] as ${\rm{SELECT}}(V)$,

Equation (64)

This operation can be applied using ${ \mathcal O }(K)$ queries to a controlled form of the oracle ${\rm{SELECT}}({ \mathcal H })$ defined in section 5. One can apply ${\rm{SELECT}}({ \mathcal H })$ K times, using each of the $| {{\ell }}_{\upsilon }\rangle $ and $| {\rho }_{\upsilon }\rangle $ registers. Thus, given that ${\rm{SELECT}}({ \mathcal H })$ requires $\widetilde{{ \mathcal O }}(N)$ gates, our total gate count for ${\rm{SELECT}}(V)$ is $\widetilde{{ \mathcal O }}({KN})$. Table 1 lists relevant parameters along with their bounds, in terms of chemically relevant variables. Table 2 lists relevant operators and their gate counts.

Table 1.  Taylor series simulation parameters and bounds.

Parameter Explanation Bound
r Number of time segments, equation (72) $\zeta L\mu t/\mathrm{ln}(2)$
L Terms in unitary decomposition, equation (69) ${ \mathcal O }({\eta }^{2}{N}^{2}{\max }_{\gamma ,\rho }\parallel {\aleph }_{\gamma ,\rho }{\parallel }_{\max }/\zeta )$
K Truncation point for Taylor series, equation (59) ${ \mathcal O }\left(\tfrac{\mathrm{log}(r/\epsilon )}{\mathrm{loglog}(r/\epsilon )}\right)$
J Ancilla qubits in selection register, equation (62) ${\rm{\Theta }}(K\mathrm{log}(\mu L))$

Table 2.  Taylor series simulation operators and complexities.

Operator Purpose Gate count
${\rm{SELECT}}({ \mathcal H })$ Applies specified terms from decomposition, equation (4) $\widetilde{{ \mathcal O }}(N)$
${\rm{SELECT}}(V)$ Applies specified strings of terms, equation (64) $\widetilde{{ \mathcal O }}({NK})$
B Prepares superposition state, equation (63) ${ \mathcal O }(K\mathrm{log}(\mu L))$
${ \mathcal W }$ Probabilistically performs simulation under ${Ht}/r$, equation (65) $\widetilde{{ \mathcal O }}({NK})$
P Projector onto $| 0{\rangle }^{\otimes J}$ state of selection register ${ \mathcal O }(K\mathrm{log}(\mu L))$
G Amplification operator to implement sum of unitaries, equation (67) $\widetilde{{ \mathcal O }}({NK})$
${({PG})}^{r}$ Entire algorithm $\widetilde{{ \mathcal O }}({rNK})$

As in [41, 42] we introduce the operator

Equation (65)

Equation (66)

where $| {\rm{\Phi }}\rangle $ is a state for which the selection register ancilla qubits are orthogonal to $| 0{\rangle }^{\otimes J}$. We can then use an oblivious amplitude amplification operator

Equation (67)

with $P={(| 0\rangle \langle 0| )}^{\otimes J}\otimes {\mathbb{1}}$. The sum of the absolute values of the coefficients in the self-inverse decomposition of the Hamiltonian in equation (51) is $\lambda =\zeta L\mu $. If we choose the number of segments as $r=\zeta L\mu t/\mathrm{ln}(2)$, then our choice of K as in equation (59) ensures that ${\parallel \widetilde{U}-{U}_{r}\parallel }_{{\rm{\max }}}\in { \mathcal O }(\epsilon /r)$, and hence [42]

Equation (68)

Then the total error due to oblivious amplitude amplification on all segments will be ${ \mathcal O }(\epsilon )$. Therefore, the complexity of the total algorithm is r times the complexity of implementing ${\rm{SELECT}}(V)$ and B. While we implement B with gate count ${ \mathcal O }(K\mathrm{log}(\mu L))$, our construction of ${\rm{SELECT}}({ \mathcal H })$ has gate count $\widetilde{{ \mathcal O }}({NK})$.
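The choice $r=\zeta L\mu t/\mathrm{ln}(2)$ is what makes the amplification work: it sets $\lambda t/r=\mathrm{ln}\,2$, so the normalization of each segment's truncated series is essentially 2, i.e. a success amplitude of 1/2, which a single round of oblivious amplitude amplification boosts to near unity. A quick numeric check:

```python
import math

x = math.log(2)          # lambda * t / r for each segment
K = 12                   # truncation order, cf. equation (59)
# s is the sum of the absolute coefficients of the truncated segment
# operator; it converges to exp(ln 2) = 2 as K grows.
s = sum(x ** k / math.factorial(k) for k in range(K + 1))
```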

The gate count of our entire algorithm depends crucially on r. Above we have taken $r\in { \mathcal O }(\zeta L\mu t)$ where

Equation (69)

Equation (70)

Equation (71)

As a result, we may bound r as

Equation (72)

As a consequence of lemmas 1–3, $\mu {\max }_{\gamma ,\rho }{\parallel {\aleph }_{\gamma ,\rho }\parallel }_{\max }$ can be replaced with

Equation (73)

where ${\varphi }_{\max }$ is the maximum value taken by the orbitals, and xmax is the scaling of the spatial size of the orbitals. To relate $\epsilon $ to $\delta $, recall that in section 4.2 the Hamiltonian is broken up into a sum of ${ \mathcal O }({\eta }^{2}{N}^{2})$ terms, each of which contains one or two of the integrals. Therefore, the error in the Hamiltonian is ${ \mathcal O }(\delta {\eta }^{2}{N}^{2})$. The Hamiltonian is simulated for time t, so the resulting error in the simulation will be ${ \mathcal O }(\delta t{\eta }^{2}{N}^{2})$. To ensure that the error is no greater than $\epsilon $, we should therefore choose $\delta ={\rm{\Theta }}(\epsilon /(t{\eta }^{2}{N}^{2}))$. Since we are considering scaling with large η and N, $\delta $ will be small and the conditions in equations (36), (40) and (44) will be satisfied. In addition, the conditions of theorem 1 mean that ${\varphi }_{\max }$ and xmax are logarithmic in N. Hence one can take, omitting logarithmic factors,

Equation (74)

The complexity of B does not affect the scaling, because it is lower order in N. Therefore, our overall algorithm has gate count

Equation (75)

as stated in theorem 1. This scaling represents an exponential improvement in precision as compared to Trotter-based methods. However, we suspect that the actual scaling of these algorithms is much better for real molecules, just as has been observed for the Trotter-based algorithms [35, 36]. Furthermore, the approach detailed here requires fewer qubits than any other approach to quantum simulation of chemistry in the literature.

7. Discussion

We have outlined a method to simulate the quantum chemistry Hamiltonian in a basis of Slater determinants using recent advances from the universal simulation literature. We find an oracular decomposition of the Hamiltonian into 1-sparse matrices based on an edge coloring routine first described in [11]. We use that oracle to simulate evolution under the Hamiltonian using the truncated Taylor series technique described in [42]. We discretize the integrals which define entries of the CI matrix, and use the sum-of-unitaries approach to effectively exponentially compress evaluation of these discretized integrals.

Asymptotic scalings suggest that the algorithms described in this paper series will allow for the quantum simulation of much larger molecular systems than would be possible using a Trotter-Suzuki decomposition. Recent work [14, 31, 3436] has demonstrated markedly more efficient implementations of the original Trotter-Suzuki-based quantum chemistry algorithm [2, 53]; similarly, we believe the implementations discussed here can still be improved upon, and that numerical simulations will be crucial to this task.

Finally, we note that the CI matrix simulation strategy discussed here opens up the possibility of an interesting approach to adiabatic state preparation. An adiabatic algorithm for quantum chemistry was suggested in second quantization in [10] and studied further in [54]. However, those works did not suggest a compelling adiabatic path to take between an easy-to-prepare initial state (such as the Hartree–Fock state) and the ground state of the exact Hamiltonian. We note that one could start the system in the Hartree–Fock state, and use the CI matrix oracles discussed in this paper to 'turn on' a Hamiltonian having support over a number of configuration basis states which increases smoothly with time.

Acknowledgments

The authors thank Jonathan Romero Fontalvo, Jarrod McClean, Borzu Toloui, and Nathan Wiebe for helpful discussions. DWB is funded by an Australian Research Council Future Fellowship (FT100100761) and an Australian Research Council Discovery Project (DP160102426). PJL acknowledges the support of the National Science Foundation under grant number PHY-0955518. AA-G and PJL acknowledge the support of the Air Force Office of Scientific Research under award number FA9550-12-1-0046. AA-G acknowledges the Army Research Office under award number W911NF-15-1-0256. AA-G acknowledges support from the Vannevar Bush Faculty Fellowship program sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering and funded by the Office of Naval Research through grant N00014-16-1-2008.

Appendix A.: Decomposition into 1-sparse matrices

In [52], Aharonov and Ta-Shma considered the problem of simulating an arbitrary d-sparse Hamiltonian using the ability to query bits of the Hamiltonian. According to their prescription, we should imagine the Hamiltonian as an undirected graph where each basis state corresponds to a node and each nonzero matrix element ${H}^{\alpha \beta }={H}^{\beta \alpha * }\ne 0$ corresponds to an edge which connects node $| \alpha \rangle $ to $| \beta \rangle $. Since an edge coloring of a graph using Γ colors is equivalent to the division of that graph into Γ sets of disjoint graphs of degree 1, this edge coloring represents a decomposition of the Hamiltonian into Γ 1-sparse matrices. Aharonov and Ta-Shma show a procedure for accomplishing the 1-sparse decomposition of any arbitrary d-sparse matrix using ${\rm{\Theta }}({d}^{2})$ terms by coloring an arbitrary graph of degree d with ${\rm{\Theta }}({d}^{2})$ colors. This result was tightened from ${\rm{\Theta }}({d}^{2})$ terms to ${d}^{2}$ terms in [43]. Importantly, Aharonov and Ta-Shma also showed how these Hamiltonians can be efficiently simulated using an oracular scheme based on the Trotter-Suzuki decomposition. Toloui and Love proposed a procedure to decompose the CI matrix into $d={ \mathcal O }({\eta }^{2}{N}^{2})$ 1-sparse matrices [11], but that proposal does not work as given. We provide an improved procedure that overcomes the problem with the proposal in [11], and achieves a 1-sparse decomposition into ${ \mathcal O }({\eta }^{2}{N}^{2})$ terms.

For convenience of notation, we denote the occupied spin-orbitals for $| \alpha \rangle $ by ${\alpha }_{1},\,\ldots ,\,{\alpha }_{\eta }$, and the occupied spin-orbitals for $| \beta \rangle $ by ${\beta }_{1},\,\ldots ,\,{\beta }_{\eta }$. We also drop the bra-ket notation for the lists of orbitals (Slater determinants); that is, we denote the list of occupied orbitals for the left portion of the graph by α, and the list of occupied orbitals for the right portion of the graph by β. We require both these lists of spin-orbitals to be sorted in ascending order. According to the Slater-Condon rules, the matrix element between two Slater determinants is zero unless the determinants differ by two spin-orbitals or less. Thus, two vertices (Slater determinants) in the Hamiltonian graph are connected if and only if they differ by a single occupied orbital or two occupied orbitals.

In order to obtain the decomposition, for each color (corresponding to one of the resulting 1-sparse matrices) we need to be able to obtain β from α, and vice versa. Using the approach in [43], we take the tensor product of the Hamiltonian with a ${\sigma }_{x}$ operator. That is, we perform the simulation under the Hamiltonian ${\sigma }_{x}\otimes H$, which is bipartite and has the same sparsity as H. The ${\sigma }_{x}$ operator acts on the ancilla register that determines whether we are in the left (α) or right (β) partition of the graph. We do this without loss of generality as simulation under H can be recovered from simulation under ${\sigma }_{x}\otimes H$ using the fact that ${e}^{-i({\sigma }_{x}\otimes H)t}| +\rangle | \psi \rangle =| +\rangle {e}^{-{iHt}}| \psi \rangle $ [43].
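This identity is straightforward to check numerically. The following sketch (illustrative only: it uses NumPy, a random 4 × 4 Hermitian matrix standing in for H, and a matrix exponential computed by eigendecomposition) verifies that evolving $| +\rangle | \psi \rangle $ under ${\sigma }_{x}\otimes H$ reproduces $| +\rangle {e}^{-{iHt}}| \psi \rangle $:

```python
import numpy as np

def expm_herm(H, t):
    """e^{-iHt} for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                      # random Hermitian stand-in for H
sx = np.array([[0, 1], [1, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
t = 0.7

# e^{-i (sigma_x (x) H) t} |+>|psi>  versus  |+> e^{-iHt} |psi>
lhs = expm_herm(np.kron(sx, H), t) @ np.kron(plus, psi)
rhs = np.kron(plus, expm_herm(H, t) @ psi)
```

The identity holds because $| +\rangle $ is the +1 eigenvector of ${\sigma }_{x}$, so the subspace $| +\rangle \otimes \,\cdot $ is invariant and the evolution reduces to ${e}^{-{iHt}}$ on the system register.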

In order for the graph coloring to be suitable for the quantum algorithm, for any given color we must have a procedure for obtaining β given α, and another procedure for obtaining α given β. For this to be a valid graph coloring, the procedure must be reversible, and different colors must not give the same β from α or vice versa.

To explain the decomposition, we will first consider how it works for α and β differing by only a single spin-orbital occupation. We are given a 4-tuple $(a,b,{\ell },p)$, where a and b are bits, ${\ell }$ is a position in the sorted list of occupied orbitals, and p specifies how many orbitals the affected orbital is shifted by. Our notation here differs slightly from that in section 4, where i and j were used in place of ${\ell }$ to represent the positions of the two orbitals which differed: here we will use i and j for a different purpose. To simplify the discussion, we do not perform the addition modulo N, and instead achieve the same effect by allowing p to take positive and negative values. If adding p takes us beyond the list of allowable orbitals, then the matrix element returned is zero, and the list of occupied orbitals is unchanged (corresponding to a diagonal element of the Hamiltonian). We will also use the convention that ${\alpha }_{0}={\beta }_{0}=0$ and ${\alpha }_{\eta +1}={\beta }_{\eta +1}=N+1$. These values are not explicitly stored, but rather are dummy values to use in the algorithm when ${\ell }$ goes beyond the range $1,\,\ldots ,\,\eta $.

The register a tells us whether the ${\ell }$ is for α or β. To simplify the discussion, when a = 0 we take $i={\ell }$, and when a = 1 we take $j={\ell }$. In either case, we require that ${\beta }_{j}={\alpha }_{i}+p$, but in the case a = 0 we are given i and need to work out j, whereas in the case a = 1 we are given j and need to work out i. In particular, for a = 0 we just take ${\alpha }_{i}$ and add p to it. Then j is the new position in the list β, so ${\beta }_{j}={\alpha }_{i}+p$.

The general principle is that, if we are given i for α and need to determine j for β, we require that ${\beta }_{j+1}-{\beta }_{j-1}\geqslant {\alpha }_{i+1}-{\alpha }_{i-1}$ (i.e. the spacing between orbitals is at least as large in β as in α). Alternatively, if we were given j for β and needed to determine a corresponding i for α, we would require ${\beta }_{j+1}-{\beta }_{j-1}\lt {\alpha }_{i+1}-{\alpha }_{i-1}$ (i.e. the spacing between orbitals is strictly larger in α than in β). If the inequality is not consistent with the value of a (i.e. we are proceeding in the wrong direction), then the matrix element for this term in the decomposition is taken to be zero (in the graph there is no line of that color connecting the nodes). This procedure allows for a unique connection between nodes, without double counting.

The reason for requiring these inequalities is that the list of orbitals with a larger spacing will have less ambiguity in the order of occupied orbitals. To reduce the number of terms in the decomposition, we are only given i or j, but not both, so we either need to be able to determine j from i given β, or i from j given α. When the spacing between the occupied orbitals for β is larger, if we are given β and i there is less ambiguity in determining j. In particular, when ${\beta }_{j+1}-{\beta }_{j-1}\geqslant {\alpha }_{i+1}-{\alpha }_{i-1}$, there can be at most two values of j that could have come from i, and the bit b is then used to distinguish between them.

There are four different cases that we need to consider.

  • 1.  
    We are given β and need to determine α; a = 0.
  • 2.  
    We are given α and need to determine β; a = 0.
  • 3.  
    We are given α and need to determine β; a = 1.
  • 4.  
    We are given β and need to determine α; a = 1.

Next we explain the procedure for each of these cases in detail. In the following we use the terminology 'invalid' to indicate that we need to return $\alpha =\beta $ and a matrix element of zero.

1. Given $\beta $ and need to determine $\alpha ;{\text{}}a=0$.

We are given β, but ${\ell }$ is the position in the list of occupied orbitals for α. We do not know which is the ${\beta }_{j}$ to subtract p from, so we loop through all values as follows to find a list of candidates for α, ${\tilde{\alpha }}^{(k)}$. We define this as a procedure so we can use it later.

procedure FindAlphas
   k = 0
   For $j=1,\,\ldots ,\,\eta $:
      Subtract p from ${\beta }_{j}$ and check that this yields a valid list of orbitals, in that ${\beta }_{j}-p$ does not yield an orbital number beyond the desired range, or duplicate another orbital. That is:
      If $(({\beta }_{j}-p\in \{1,\,\ldots ,\,N\})\wedge (\forall j^{\prime} \in \{1,\,\ldots ,\,\eta \}:{\beta }_{j}-p\ne {\beta }_{j^{\prime} }))\,\vee (p=0)$ then
         Sort the list of orbitals to obtain ${\tilde{\alpha }}^{(k)}$, and denote by i the new position of ${\beta }_{j}-p$ in this list of occupied orbitals.
         Check that the new value of i corresponds to ${\ell }$, and that the spacing condition for a = 0 is satisfied, as follows.
         If $(i={\ell })\wedge ({\beta }_{j+1}-{\beta }_{j-1}\geqslant {\tilde{\alpha }}_{i+1}^{(k)}-{\tilde{\alpha }}_{i-1}^{(k)})$ then
            $k=k+1$
         end if
      end if
   end for
end procedure
After this procedure there is a list of at most two candidates for α, and k will correspond to how many have been found. Depending on the value of k we perform the following:

${\bf{k}}={\bf{0}}$ We return invalid.

${\bf{k}}={\bf{1}}$ If b = 0 then return $\alpha ={\tilde{\alpha }}^{(0)}$, else return invalid.

${\bf{k}}={\bf{2}}$ Return $\alpha ={\tilde{\alpha }}^{(b)}$.

That is, if we have two possibilities for α, then we use b to choose between them. If there is only one, then we only return that one if b = 0 to avoid obtaining two colors that both link α and β.
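The procedure FindAlphas can be written out concretely. The following Python sketch (illustrative; it uses 1-based positions and the sentinel convention ${\alpha }_{0}={\beta }_{0}=0$, ${\alpha }_{\eta +1}={\beta }_{\eta +1}=N+1$ introduced above) returns the list of at most two candidates ${\tilde{\alpha }}^{(k)}$, from which the bit b then selects as described:

```python
def find_alphas(beta, ell, p, N):
    """Candidates for alpha given beta, for a color with a = 0.

    beta : sorted list of occupied orbitals (1-based orbital numbers),
    ell  : required position of the shifted orbital in alpha,
    p    : shift, so that beta_j = alpha_i + p.
    """
    eta = len(beta)
    padded = [0] + list(beta) + [N + 1]       # sentinel convention of the text
    candidates = []
    for j in range(1, eta + 1):
        orb = padded[j] - p
        # The shifted orbital must stay in range and not duplicate another
        # occupied orbital (unless p = 0, the diagonal case).
        if p != 0 and (orb < 1 or orb > N or orb in beta):
            continue
        alpha = sorted((set(beta) - {padded[j]}) | {orb})
        i = alpha.index(orb) + 1              # new position of beta_j - p
        apad = [0] + alpha + [N + 1]
        spacing_ok = padded[j + 1] - padded[j - 1] >= apad[i + 1] - apad[i - 1]
        if i == ell and spacing_ok:
            candidates.append(alpha)
    return candidates
```

For example, $\beta =(3,4)$ with ${\ell }=1$, p = 2 and N = 6 yields the two candidates $(1,4)$ and $(2,3)$; the bit b distinguishes them, and when only one candidate is found it is returned only for b = 0.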

2. Given α and need to determine β; a = 0.

We are given α, and ${\ell }=i$ is the position of the occupied orbital in α that is changed. We therefore add p to ${\alpha }_{i}$ and check that it gives a valid list of orbitals. In addition, we need to check that we would obtain α if we work backwards from the resulting β.

If $(({\alpha }_{i}+p\in \{1,\,\ldots ,\,N\})\wedge (\forall i^{\prime} \in \{1,\,\ldots ,\,\eta \}:{\alpha }_{i}+p\ne {\alpha }_{i^{\prime} }))\,\vee (p=0)$ then
   We sort the new list of occupied orbitals to obtain a candidate for β, denoted $\tilde{\beta }$. We next check that the spacing condition for a = 0 is satisfied.
   If $({\tilde{\beta }}_{j+1}-{\tilde{\beta }}_{j-1}\geqslant {\alpha }_{i+1}-{\alpha }_{i-1})$ then
      Perform the procedure FindAlphas to find potential candidates for α that could be obtained from $\tilde{\beta }$. There can only be 1 or 2 candidates returned from this procedure.
      If $((k=1)\wedge (b=0))\vee ((k=2)\wedge (\alpha ={\tilde{\alpha }}^{(b)}))$ then
         return $\beta =\tilde{\beta }$
      else return invalid
   else return invalid
else return invalid

3. Given α and need to determine β; a = 1.

This case is closely analogous to the case where we need to determine α from β, but a = 0. We are given α, but ${\ell }$ is the position in the list of occupied orbitals for β. We do not know which is the ${\alpha }_{i}$ to add p to, so we loop through all values as follows to find a list of candidates for β, ${\tilde{\beta }}^{(k)}$. We define this as a procedure so we can use it later.

procedure FindBetas
   k = 0
   For $i=1,\,\ldots ,\,\eta $:
      Add p to ${\alpha }_{i}$ and check that this yields a valid list of orbitals, in that ${\alpha }_{i}+p$ does not yield an orbital number beyond the desired range, or duplicate another orbital. That is:
      If $(({\alpha }_{i}+p\in \{1,\,\ldots ,\,N\})\wedge (\forall i^{\prime} \in \{1,\,\ldots ,\,\eta \}:{\alpha }_{i}+p\ne {\alpha }_{i^{\prime} }))\,\vee (p=0)$ then
         Sort the list of orbitals to obtain ${\tilde{\beta }}^{(k)}$, and denote by j the new position of ${\alpha }_{i}+p$ in this list of occupied orbitals. Check that the new value of j corresponds to ${\ell }$, and that the spacing condition for a = 1 is satisfied.
         If $(j={\ell })\wedge ({\tilde{\beta }}_{j+1}^{(k)}-{\tilde{\beta }}_{j-1}^{(k)}\lt {\alpha }_{i+1}-{\alpha }_{i-1})$ then
            $k=k+1$
         end if
      end if
   end for
end procedure

After this procedure there is a list of at most two candidates for β, and k will correspond to how many have been found. Depending on the value of k we perform the following:

${\bf{k}}={\bf{0}}$ We return invalid.

${\bf{k}}={\bf{1}}$ If b = 0 then return $\beta ={\tilde{\beta }}^{(0)}$, else return invalid.

${\bf{k}}={\bf{2}}$ Return $\beta ={\tilde{\beta }}^{(b)}$.

That is, if we have two possibilities for β, then we use b to choose between them. If there is only one, then we only return that one if b = 0 to avoid obtaining two colors that both link α and β.

4. Given β and need to determine α; a = 1.

We are given β, and ${\ell }=j$ is the position of the occupied orbital in β that is changed. We therefore subtract p from ${\beta }_{j}$ and check that it gives a valid list of orbitals. Again we also need to check consistency. That is, we work back again from the α to check that we correctly obtain β.

If $(({\beta }_{j}-p\in \{1,\,\ldots ,\,N\})\wedge (\forall j^{\prime} \in \{1,\,\ldots ,\,\eta \}:{\beta }_{j}-p\ne {\beta }_{j^{\prime} }))\,\vee (p=0)$ then
   We sort the new list of occupied orbitals to obtain a candidate for α, denoted $\tilde{\alpha }$. We next check that the spacing condition for a = 1 is satisfied.
   If $({\beta }_{j+1}-{\beta }_{j-1}\lt {\tilde{\alpha }}_{i+1}-{\tilde{\alpha }}_{i-1})$ then
      Perform the procedure FindBetas to find potential candidates for β that could be obtained from $\tilde{\alpha }$. There can only be 1 or 2 candidates returned from this procedure.
      If $((k=1)\wedge (b=0))\vee ((k=2)\wedge (\beta ={\tilde{\beta }}^{(b)}))$ then
         return $\alpha =\tilde{\alpha }$
      else return invalid
   else return invalid
else return invalid

To prove that this technique gives a valid coloring, we need to show that it is reversible and unique. The most important part to show is that, provided the spacing condition holds, the ambiguity is limited to two candidates that may be resolved by the bit b. We will consider the case that $p\gt 0;$ the analysis for $p\lt 0$ is equivalent.

Consider Case 1, where we are given β and need to determine α, but a = 0. Then we take $i={\ell }$, and need to determine j. Let $j^{\prime} $ and $j^{\prime\prime} $ be two potential values of j, with $j^{\prime} \lt j^{\prime\prime} $. For these to be potential values of j, they must satisfy

Equation (A1)

Equation (A2)

Equation (A3)

Equation (A4)

Condition (A1) is required because, for $j^{\prime} $ to be a potential value of j, ${\beta }_{j^{\prime} }-p$ must correspond to an ${\alpha }_{i}$ that is between ${\alpha }_{i-1}$ and ${\alpha }_{i+1}$ (α is sorted in ascending order). Condition (A2) is the spacing condition for a = 0. Conditions (A3) and (A4) are simply the equivalent conditions for $j^{\prime\prime} $.

Next we consider how α is found from β. In the case where $j^{\prime} =i$, we immediately know that ${\alpha }_{i-1}={\beta }_{i-1}$ and ${\alpha }_{i+1}={\beta }_{i+1}$. Then the conditions (A1) and (A2) become

Equation (A5)

Equation (A6)

In the case that $j^{\prime} \gt i$, it is clear that ${\alpha }_{i-1}={\beta }_{i-1}$ still holds. Moreover, in going from the sequence of occupied orbitals for α to the sequence for β, we have then removed ${\alpha }_{i}$, which means that ${\alpha }_{i+1}$ has moved to position i. That is to say, ${\beta }_{i}$ must be equal to ${\alpha }_{i+1}$. Therefore, conditions (A1) and (A2) become

Equation (A7)

Equation (A8)

In either case ($j^{\prime} =i$ or $j^{\prime} \gt i$), because $j^{\prime\prime} \gt j^{\prime} $, we know that $j^{\prime\prime} \gt i$. Then the same considerations as for $j^{\prime} \gt i$ hold, and conditions (A3) and (A4) become

Equation (A9)

Equation (A10)

Using (A10) we have

Equation (A11)

In the second-last line we have used ${\beta }_{j^{\prime} }-p\gt {\beta }_{i-1}$ from (A5) and (A7), and in the second line we have used $j^{\prime\prime} \gt j^{\prime} $. The inequality ${\beta }_{j^{\prime\prime} +1}-p\gt {\beta }_{i}$ means that ${\beta }_{j^{\prime\prime} +1}-p\notin ({\beta }_{i-1},{\beta }_{i})$, and therefore ${\beta }_{j^{\prime\prime} +1}$ could not have come from ${\alpha }_{i}$ by adding p. That is because ${\beta }_{j^{\prime\prime} +1}$ would have to satisfy a relation similar to (A9). In turn, any $j\gt j^{\prime\prime} +1$ will satisfy ${\beta }_{j}-p\gt {\beta }_{i}$, because the ${\beta }_{k}$ are sorted in ascending order.

The net result of this reasoning is that, if there are two ambiguous values of j, then there can be no third ambiguous value. This is because, if we call the first two ambiguous values $j^{\prime} $ and $j^{\prime\prime} $, there can be no more ambiguous values for $j\gt j^{\prime\prime} $. Hence, if we have a bit b which tells us which of the two ambiguous values to choose, then it resolves the ambiguity and enables us to unambiguously determine α, given β, p, and i.
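This at-most-two-candidates property can also be checked exhaustively for small systems. The sketch below (illustrative parameters η = 3, N = 8) enumerates every determinant β and every pair $({\ell },p)$, counts the candidates accepted by the conditions of Case 1, and records the worst case:

```python
from itertools import combinations

def num_alpha_candidates(beta, ell, p, N):
    """Number of alpha lists that Case 1 would accept for color data (ell, p)."""
    eta = len(beta)
    pad = (0,) + beta + (N + 1,)   # sentinels beta_0 = 0, beta_{eta+1} = N + 1
    count = 0
    for j in range(1, eta + 1):
        orb = pad[j] - p
        if p != 0 and (orb < 1 or orb > N or orb in beta):
            continue
        alpha = tuple(sorted((set(beta) - {pad[j]}) | {orb}))
        i = alpha.index(orb) + 1
        ap = (0,) + alpha + (N + 1,)
        if i == ell and pad[j + 1] - pad[j - 1] >= ap[i + 1] - ap[i - 1]:
            count += 1
    return count

# Exhaust all eta = 3 determinants on N = 8 orbitals and all color data (ell, p).
worst = max(
    num_alpha_candidates(beta, ell, p, 8)
    for beta in combinations(range(1, 9), 3)
    for ell in range(1, 4)
    for p in range(-8, 9)
)
```

The worst case found is two, matching the argument above: the single bit b suffices to resolve the ambiguity.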

Next consider Case 3, where we wish to determine β from α, but a = 1. In that case, we take $j={\ell }$, and need to determine i. That is, we wish to determine a value of i such that adding p to ${\alpha }_{i}$ gives ${\beta }_{j}$, and also require the condition ${\beta }_{j+1}-{\beta }_{j-1}\lt {\alpha }_{i+1}-{\alpha }_{i-1}$. Now the situation is reversed; if we start with β, then we can immediately determine α, but if we have α then we potentially need to consider multiple values of i and resolve an ambiguity. In exactly the same way as above, there are at most two possible values of i, and we distinguish between these using the bit b.

In this case, we cannot have j = i, because that would imply that ${\alpha }_{k}={\beta }_{k}$ for all $k\ne j$, and the condition ${\beta }_{j+1}-{\beta }_{j-1}\lt {\alpha }_{i+1}-{\alpha }_{i-1}$ would be violated. Therefore, consider two possible values of i, $i^{\prime} $ and $i^{\prime\prime} $, with $i^{\prime\prime} \lt i^{\prime} \lt j$. The equivalents of the conditions in equations (A1) to (A4) are

Equation (A12)

Equation (A13)

Equation (A14)

Equation (A15)

Because $i^{\prime\prime} \lt i^{\prime} \lt j$, using similar reasoning as before, we find that ${\beta }_{j+1}={\alpha }_{j+1}$ and ${\beta }_{j-1}={\alpha }_{j}$. That means that the conditions (A12) to (A15) become

Equation (A16)

Equation (A17)

Equation (A18)

Equation (A19)

Starting with equation (A19) we obtain

Equation (A20)

Hence ${\alpha }_{i^{\prime\prime} -1}+p$ is not in the interval $({\beta }_{j-1},{\beta }_{j+1})$, and therefore cannot give ${\beta }_{j}$. Therefore there can be no third ambiguous value, in the same way as above for a = 0. Hence the single bit b is again sufficient to distinguish between any ambiguous values, and enables us to determine β given α, p, and j.

We now consider the requirement that the procedure is reversible. In particular, Case 1 needs to be the reverse of Case 2, and Case 3 needs to be the reverse of Case 4. Consider starting from a particular β and using the method in Case 1. We have shown that the procedure FindAlphas in Case 1 can yield at most two potential candidates for α, and then one is chosen via the value of b. For the resulting α, adding p to ${\alpha }_{i}$ will yield the original set of occupied orbitals β. Moreover, the inequality ${\beta }_{j+1}-{\beta }_{j-1}\geqslant {\alpha }_{i+1}-{\alpha }_{i-1}$ must be satisfied (otherwise Case 1 would yield invalid).

If Case 1 yields α from β, then Case 2 should yield β given α. Case 2 simply adds p to ${\alpha }_{i}$ (where i is given), which we know should yield β. The method in Case 2 also performs some checks, and outputs invalid if those fail. These checks are:

  • 1.  
    It checks that β is a valid list of orbitals, which must be satisfied because we started with a valid β.
  • 2.  
    It checks that ${\beta }_{j+1}-{\beta }_{j-1}\geqslant {\alpha }_{i+1}-{\alpha }_{i-1}$, which must be satisfied for Case 1 to yield α instead of invalid.
  • 3.  
    It checks that using Case 1 on β would yield α, which must be satisfied here because we considered initially using Case 1 to obtain α from β.

Thus we see that, if Case 1 yields α from β, then Case 2 must yield β from α.

Going the other way, and starting with α and using Case 2 to find β, a result other than invalid will only be provided if Case 1 would yield α from that β. Thus we immediately know that if Case 2 provides β from α, then Case 1 will provide α from β. This means that the methods for Cases 1 and 2 are the inverses of each other, as required. Via exactly the same reasoning, we can see that the methods in Cases 3 and 4 are the inverses of each other as well.

Next, consider the question of uniqueness. The color will be unique if we can determine the color from a pair α, β. Given α and β, all the occupied orbitals are identical except one. The positions of the differing orbitals in α and β are then i and j, respectively. We can then immediately set $p={\beta }_{j}-{\alpha }_{i}$ for the color. We can then compare ${\beta }_{j+1}-{\beta }_{j-1}$ and ${\alpha }_{i+1}-{\alpha }_{i-1}$.

If ${\beta }_{j+1}-{\beta }_{j-1}\geqslant {\alpha }_{i+1}-{\alpha }_{i-1}$ then for the color a = 0 and ${\ell }=i$. We can then find how many ambiguous values of α there would be if we started with β. If α was obtained uniquely from β, then we would set b = 0 for the color. If there were two ambiguous values of α that could be obtained from β, then if the first was correct we would set b = 0, and if the second were correct then we would set b = 1.

If ${\beta }_{j+1}-{\beta }_{j-1}\lt {\alpha }_{i+1}-{\alpha }_{i-1}$ then for the color a = 1 and ${\ell }=j$. We can then find how many ambiguous values of β there would be if we started with α. If β was obtained uniquely from α, then we would set b = 0 for the color. If there were two ambiguous values of β that could be obtained from α, then if the first was correct we would set b = 0, and if the second were correct then we would set b = 1. In this way we can see that the pair α, β yields a unique color, and therefore we have a valid coloring.
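The color recovery described in the last two paragraphs can be sketched as follows (an illustrative Python fragment; recovering the bit b would additionally require the candidate enumeration of Cases 1 and 3, so it is omitted here):

```python
def color_of_pair(alpha, beta, N):
    """Recover the color components (a, l, p) linking two determinants that
    differ in exactly one occupied orbital. alpha and beta are sorted tuples
    of 1-based orbital numbers; N is the number of spin-orbitals."""
    diff_a = sorted(set(alpha) - set(beta))
    diff_b = sorted(set(beta) - set(alpha))
    assert len(diff_a) == 1 and len(diff_b) == 1
    i = alpha.index(diff_a[0]) + 1            # 1-based position in alpha
    j = beta.index(diff_b[0]) + 1             # 1-based position in beta
    p = diff_b[0] - diff_a[0]                 # beta_j = alpha_i + p
    ap = (0,) + tuple(alpha) + (N + 1,)       # sentinel convention of the text
    bp = (0,) + tuple(beta) + (N + 1,)
    if bp[j + 1] - bp[j - 1] >= ap[i + 1] - ap[i - 1]:
        return (0, i, p)                      # a = 0, l = i
    return (1, j, p)                          # a = 1, l = j
```

For instance, the pair $\alpha =(2,3)$, $\beta =(3,4)$ with N = 6 yields a = 0, ${\ell }=1$, p = 2, since the spacing around the shifted orbital is larger in β.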

So far we have considered the case where α and β differ by just one orbital for simplicity. For cases where α and β differ by two orbitals, the procedure is similar. We now need to use the above reasoning to go from α to β through some intermediate list of orbitals χ. That is, we have one set of numbers $({a}_{1},{b}_{1},{{\ell }}_{1},p)$ that tells us how to find χ from α, then a second set of numbers $({a}_{2},{b}_{2},{{\ell }}_{2},q)$ that tells us how to obtain β from χ.

First, it is easily seen that this procedure is reversible, because the steps for going from α to χ to β are reversible. Second, we need to be able to determine the color from α and β. To do so, we find the two occupied orbitals for α and β that differ. Denote the positions of the differing orbitals in α by i1 and i2, and those in β by j1 and j2 (taken in ascending order so that the labels are unique). Then there are four different ways that one could go from α to β, through different intermediate states χ.

  • 1.  
    ${\alpha }_{{i}_{1}}\mapsto {\beta }_{{j}_{1}}$ then ${\alpha }_{{i}_{2}}\mapsto {\beta }_{{j}_{2}}$
  • 2.  
    ${\alpha }_{{i}_{2}}\mapsto {\beta }_{{j}_{2}}$ then ${\alpha }_{{i}_{1}}\mapsto {\beta }_{{j}_{1}}$
  • 3.  
    ${\alpha }_{{i}_{1}}\mapsto {\beta }_{{j}_{2}}$ then ${\alpha }_{{i}_{2}}\mapsto {\beta }_{{j}_{1}}$
  • 4.  
    ${\alpha }_{{i}_{2}}\mapsto {\beta }_{{j}_{1}}$ then ${\alpha }_{{i}_{1}}\mapsto {\beta }_{{j}_{2}}$

To resolve this ambiguity we require that the color is obtained by assuming the first alternative that ${\alpha }_{{i}_{1}}\mapsto {\beta }_{{j}_{1}}$ then ${\alpha }_{{i}_{2}}\mapsto {\beta }_{{j}_{2}}$. Then α and β yield a unique color. This also requires a slight modification of the technique for obtaining α from β and vice versa. First the color is used to obtain the pair α, β, then it is checked whether the orbitals were mapped as in the first alternative above. If they were not, then invalid is returned.

To include the matrix elements where α and β differ by a single orbital or by no orbitals in a coloring by an 8-tuple $\gamma =({a}_{1},{b}_{1},{{\ell }}_{1},p,{a}_{2},{b}_{2},{{\ell }}_{2},q)$, we also allow p = 0 (when only one orbital differs) and $p=q=0$ (when $\alpha =\beta $). The overall number of terms in the decomposition is then ${ \mathcal O }({\eta }^{2}{N}^{2})$.
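A direct tally of the color space confirms this count (a sketch; we treat each shift as ranging over $-N,\,\ldots ,\,N$):

```python
def num_colors(eta, N):
    """Size of the color space of 8-tuples (a1, b1, l1, p, a2, b2, l2, q):
    a and b are bits, l ranges over 1..eta, and each shift over -N..N."""
    per_step = 2 * 2 * eta * (2 * N + 1)      # choices for one (a, b, l, shift)
    return per_step ** 2                      # two steps: alpha -> chi -> beta

# Doubling both eta and N multiplies the count by roughly 16 = 2^2 * 2^2,
# consistent with the O(eta^2 N^2) scaling.
ratio = num_colors(20, 200) / num_colors(10, 100)
```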

Appendix B.: Riemann sum approximations of Hamiltonian coefficients

The aim of this appendix is to prove lemmas 1–3. We begin in appendix B.1 with preliminary matters that are integral to the proofs themselves. We then prove the lemmas in appendices B.2–B.4, respectively.

Throughout this appendix, we employ the following two notational conventions.

  • 1.  
    The vector symbol $\vec{\bullet }$ refers to an element of ${{\mathbb{R}}}^{3}$. We write $\bullet $ for the Euclidean length of $\vec{\bullet }$. Thus $\vec{v}$ refers to a 3-vector of magnitude v. We denote the zero vector as $\vec{0}=(0,0,0)$. We use $\oplus $ to denote vector concatenation: if $\vec{v}=({v}_{1},{v}_{2},{v}_{3})$ and $\vec{w}=({w}_{1},{w}_{2},{w}_{3})$, we write $\vec{v}\oplus \vec{w}=({v}_{1},{v}_{2},{v}_{3},{w}_{1},{w}_{2},{w}_{3})$. The gradient operator over ${{\mathbb{R}}}^{6}$ is then written as ${{\rm{\nabla }}}_{1}\oplus {{\rm{\nabla }}}_{2}$.
  • 2.  
    If x is a positive real number and $\vec{v}$ is a 3-vector, we write ${{ \mathcal B }}_{x}(\vec{v})$ for the closed ball of radius x centered at $\vec{v}$ and ${{ \mathcal C }}_{x}(\vec{v})$ for the closed cube of side length $2x$ centered at $\vec{v}$. Thus ${{ \mathcal B }}_{x}(\vec{v})\subset {{ \mathcal C }}_{x}(\vec{v})$ and ${{ \mathcal B }}_{y}(\vec{v})\not\subseteq {{ \mathcal C }}_{x}(\vec{v})$ whenever $y\gt x$.

This notation will be used extensively and without comment in what follows.

B.1. Preliminaries

The purpose of this subsection is to present two key discussions that will be needed at many points in the proofs of this appendix. First, in appendix B.1.1, we discuss the general structure of the proofs of lemmas 1–3. Second, in appendix B.1.2, we prove an ancillary lemma (lemma 4) that we use several times.

The ancillary lemma offers bounds on the function

Equation (B1)

where μ is a positive real constant and $\vec{c}$ is a constant vector. The lemma is stated as follows.

Lemma 4. Suppose $\mu $ and x are positive real numbers and $\vec{c}\in {{\mathbb{R}}}^{3}$ is some constant vector. Then

Equation (B2)

and for $c\leqslant x$,

Equation (B3)

The function ${{\rm{\Lambda }}}_{\mu ,x}(\vec{c})$ appears in bounds derived in the proofs of lemmas 2 and 3. Although it is possible to compute an analytic formula for the value of the integral, the result is unwieldy. The bounds of lemma 4 are instead used to obtain simply expressed bounds on the Riemann sum approximations.

B.1.1. Structure of the Proofs

The proofs of lemmas 1–3 each roughly follow a general structure consisting of the following three stages, though with minor deviations.

First stage: The domain of integration is truncated to a domain D. The size of D is specified by a positive real parameter x, which the conditions of the lemmas ensure is at least ${x}_{{\rm{\max }}}$. We then bound the error due to the truncation

Equation (B4)

where $f:{{\mathbb{R}}}^{d}\to {\mathbb{R}}$ refers to the relevant integrand.

Second stage: We specify a Riemann sum that is designed to approximate this truncated integral and give a bound on the error ${\delta }_{{\rm{Riemann}}}$ of this Riemann sum approximation. We specify the number of terms in the Riemann sum in order to give the bound on μ in the lemma. We also give a bound on the absolute value of each term in the Riemann sum using the value of x specified in the first stage.

Third stage: In the final stage of each proof, we bound the total error

Equation (B5)

via the triangle inequality as

Equation (B6)

Our choice of x then ensures that the error is bounded by $\delta $.

To be more specific about the approach in the second stage, we partition D into regions T, and the Riemann sum approximates the integral over each T with the value of the integrand multiplied by the volume of T. The error due to this approximation is bounded by observing the following. Suppose $f:{{\mathbb{R}}}^{d}\to {\mathbb{R}}$ is once-differentiable and ${f}_{\max }^{{\prime} }$ is a bound on its first derivative. If ${\vec{r}}_{T}$ is any element of T, we will seek to bound the error of the approximation

Equation (B7)

where ${\rm{vol}}(T)$ is the d-dimensional hypervolume of the set T. The error of this approximation is

Equation (B8)

which can be bounded as follows:

Equation (B9)

where

Equation (B10)

We will choose the points ${\vec{r}}_{T}$ in the centers of the regions T, so that

Equation (B11)

where

Equation (B12)

The Riemann sum approximations we define will then take the form

Equation (B13)

and the error of this approximation is

Equation (B14)

which can be bounded via the triangle inequality as

Equation (B15)
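A one-dimensional toy version of this bound may make it concrete (illustrative only; the integrand ${e}^{r}$ on $[0,1]$ is not one of the integrals from the lemmas): a midpoint Riemann sum in the style of equation (B13), with the error controlled by the maximum derivative times the maximum distance to the cell center as in equation (B15).

```python
import math

def midpoint_sum(f, a, b, mu):
    """Midpoint-rule Riemann sum of f over [a, b] with mu equal cells,
    the one-dimensional analogue of equation (B13)."""
    h = (b - a) / mu
    return h * sum(f(a + (k + 0.5) * h) for k in range(mu))

mu = 100
approx = midpoint_sum(math.exp, 0.0, 1.0, mu)
exact = math.e - 1.0                          # integral of e^r over [0, 1]
# Equation (B15): |error| <= f'_max * vol(D) * max |r - r_T|.
bound = math.e * 1.0 * (0.5 / mu)             # f'_max = e, vol = 1, half cell width
```

The actual error of the midpoint rule is far smaller than this first-derivative bound, but the bound is what the lemmas need: it requires only the derivative bounds assumed of the orbitals.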

B.1.2. Proof of lemma 4

We prove the lemma by deriving exact formulae for ${{\rm{\Lambda }}}_{\mu ,x}(\vec{c})$ in the cases $c\leqslant x$ and $c\gt x$ and then deriving bounds on these formulae that have simpler functional forms.

To derive exact formulae for ${{\rm{\Lambda }}}_{\mu ,x}(\vec{c})$, we use the Laplace expansion

Equation (B16)

where ${R}_{{\ell },m}$ and ${I}_{{\ell },m}$ refer to the regular and irregular solid spherical harmonic functions, respectively, and $r\geqslant c$. That is to say,

Equation (B17)

and

Equation (B18)

where

Equation (B19)

are the spherical harmonics (see section 14.30(i) in [55]), ${P}_{{\ell }}^{m}$ are the associated Legendre polynomials, and θ and ϕ are respectively the polar and azimuthal angles of $\vec{r}$. Via equation (8) of section 14.30(ii) in [55], we have

Equation (B20)

and

Equation (B21)

where ${\delta }_{a,b}$ denotes the Kronecker delta.

If $c\leqslant x$:

Equation (B22)

Equation (B3) follows from the fact that $(1+z){e}^{-z}\lt 2{e}^{-z/2}$ for all $z\gt 0$.

If $c\gt x$:

Equation (B23)

Therefore,

Equation (B24)

where we use the fact that $({z}^{2}+2z+2){e}^{-z}\lt 4{e}^{-z/2}$ for any $z\gt 0$ and the fact that

Equation (B25)

which follows from equation (B22). This gives us equation (B2) for $c\gt x$. In the case that $c\leqslant x$,

Equation (B26)

and, since $({z}^{2}+z){e}^{-z}\lt 4{e}^{-z/2}$ for all $z\gt 0$, we have

Equation (B27)

Therefore the bound equation (B2) holds for $c\leqslant x$ as well.
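Elementary inequalities of the form $\mathrm{poly}(z)\,{e}^{-z}\lt C{e}^{-z/2}$ are used at several points in this appendix; they are easy to spot-check numerically (a sketch over a finite grid, which suffices since for large z the left-hand sides decay much faster than the right-hand sides):

```python
import math

def holds_on_grid(lhs, rhs, z_max=20.0, steps=20000):
    """Check lhs(z) < rhs(z) on an evenly spaced grid over (0, z_max]."""
    return all(lhs(k * z_max / steps) < rhs(k * z_max / steps)
               for k in range(1, steps + 1))

checks = [
    # (1 + z) e^{-z} < 2 e^{-z/2}, used for equation (B3)
    holds_on_grid(lambda z: (1 + z) * math.exp(-z),
                  lambda z: 2 * math.exp(-z / 2)),
    # (z^2 + z) e^{-z} < 4 e^{-z/2}, used for equation (B27)
    holds_on_grid(lambda z: (z * z + z) * math.exp(-z),
                  lambda z: 4 * math.exp(-z / 2)),
]
```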

B.2. Proof of lemma 1

Our proof for lemma 1 roughly follows the three stages presented in appendix B.1.1. Here we give the proof in summary form and relegate some of the details to the later subsections.

B.2.1. First stage for lemma 1

The first part of the proof corresponds to the first stage discussed in appendix B.1.1. We choose

Equation (B28)

The condition in equation (36) ensures that ${x}_{0}\geqslant {x}_{{\rm{\max }}}$. We show in appendix B.2.5 that the error due to this truncation can be bounded as

Equation (B29)

B.2.2. Green's identity for lemma 1

The next part of the proof is specific to lemma 1, and is not one of the general stages outlined in appendix B.1.1. The integral as given involves a second derivative of an orbital, which means that to bound the error we would need additional bounds on the third derivatives of the orbitals. We have not assumed such bounds, so we would like to reexpress the integral in terms of first derivatives before approximating it as a Riemann sum. Because we have already truncated the domain, however, we will obtain terms from the boundary of the truncated domain.

We reexpress the integral via Green's first identity, which gives

Equation (B30)

where ${\rm{d}}V$ and ${\rm{d}}\vec{S}$ are the volume and oriented surface elements, respectively, and $\partial {D}_{0}$ is the boundary of D0. The reason why we do not make this change before truncating the domain is that we have not made any assumptions on the rate of decay of the derivatives of the orbitals. We define

Equation (B31)

We show (in appendix B.2.6) that

Equation (B32)

B.2.3. Second stage for lemma 1

Next we consider the discretization into a Riemann sum for lemma 1. We define

Equation (B33)

so that

Equation (B34)

The Riemann sum is then

Equation (B35)

where, for every triple of integers $\vec{k}=({k}_{1},{k}_{2},{k}_{3})$ such that $0\leqslant {k}_{1},{k}_{2},{k}_{3}\lt {N}_{0}$, we define

Equation (B36)

and

Equation (B37)

Thus we have partitioned D0 into $\mu ={N}_{0}^{3}$ equal-sized cubes ${T}_{\vec{k}}^{(0)}$ that overlap on sets of measure zero. The expression in equation (38) of lemma 1 then follows immediately.

Each term of ${{ \mathcal R }}_{0}$ satisfies

Equation (B38)

where the second inequality follows from equation (29). Using the value of x0 in equation (B28) in equation (B38), each term in the sum has the upper bound on its absolute value (corresponding to equation (39) in lemma 1)

Equation (B39)

We show (in appendix B.2.7) that

Equation (B40)

B.2.4. Third stage for lemma 1

In the final part of the proof of lemma 1 we show that the total error is properly bounded. By the triangle inequality, we have

Equation (B41)

We therefore arrive at a simple bound on the total error:

Equation (B42)

where

Equation (B43)

To ensure that ${\delta }_{{\rm{total}}}^{(0)}\leqslant \delta $, we should have

Equation (B44)

We can satisfy this inequality with x0 given by equation (B28). This last step completes our proof. The remainder of this subsection gives the details for some of the steps above.

B.2.5. Bounding ${\delta }_{{\rm{trunc}}}^{(0)}$ for lemma 1

Observe first that

Equation (B45)

where the last inequality follows from the fact that ${{ \mathcal B }}_{x}({\vec{c}}_{i})\subset {D}_{0}$. Using this fact together with assumptions 2 and 3 from section 4.3, we have

Equation (B46)

We simplify this expression by expressing $\vec{r}$ in polar coordinates with center ${\vec{c}}_{i}$. After integrating over the solid angles, we have

Equation (B47)

Noting that

Equation (B48)

we have

Equation (B49)

B.2.6. Bounding ${\delta }_{{\rm{Green}}}^{(0)}$ for lemma 1

Using equations (B30) and (B32) we have

Equation (B50)

Then using equations (28) and (29) gives us

Equation (B51)

We further observe that $\parallel \vec{r}-{\vec{c}}_{i}\parallel \geqslant x$ for all $\vec{r}\in \partial {D}_{0}$, and the cube with side length $2x$ has surface area $24{x}^{2}$, giving

Equation (B52)

where we have noted $12{z}^{2}{e}^{-z}\lt 26{e}^{-z/2}$ for all $z\gt 0$.

B.2.7. Bounding ${\delta }_{{\rm{Riemann}}}^{(0)}$

First we bound the derivative of the integrand. We use the chain rule, the triangle inequality, equation (29) and equation (30) to find

Equation (B53)

We have

Equation (B54)

and

Equation (B55)

for all $\vec{k}$. Using equations (B15) and (B33), and noting that $1/\lceil z\rceil \leqslant 1/z$ for any $z\gt 0$, we have

Equation (B56)

B.3. Proof of lemma 2

For this proof, the discretization into the Riemann sum is performed differently depending on whether spin-orbital i is considered distant from or near nucleus q. If the nucleus is far from the spin-orbital, the singularity in the integrand lies outside our truncated domain of integration and we need not take special care with it. Otherwise, we remove the singularity by defining spherical polar coordinates centered at the nucleus. In each case, we select a different truncated integration domain and therefore a different Riemann sum.

We focus on the center of spin-orbital i for simplicity; in principle, the center of spin-orbital j could also be taken into account.
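The case selection described above can be sketched as a small helper function; the function name and interface are illustrative, and the threshold is the one stated below for distinguishing the two cases.

```python
import math

SQRT3 = math.sqrt(3.0)

def discretization_case(R_q, c_i, x1, x_max):
    """Choose between the two discretizations of this proof: Cartesian
    boxes when nucleus q is distant from the centre of spin-orbital i,
    and spherical polar coordinates (which absorb the Coulomb singularity
    into the volume form) when it is nearby."""
    dist = math.dist(R_q, c_i)
    return "cartesian" if dist >= SQRT3 * x1 + x_max else "spherical"

print(discretization_case((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), x1=2.0, x_max=1.0))  # distant case
print(discretization_case((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), x1=2.0, x_max=1.0))   # nearby case
```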

B.3.1. First stage for lemma 2

We again start by truncating the domain of integration. We select

Equation (B57)

The condition in equation (40) ensures that ${x}_{1}\geqslant {x}_{{\rm{\max }}}$. We regard the spin-orbital as distant from the nucleus if

Equation (B58)

If so, we use the truncated domain

Equation (B59)

Otherwise, we use

Equation (B60)

We define

Equation (B61)

Equation (B62)

Equation (B63)

and show in appendix B.3.5 that

Equation (B64)

B.3.2. Second stage for lemma 2 with Cartesian coordinates

Now we consider the discretization of the integral for the case that $\parallel {\vec{R}}_{q}-{\vec{c}}_{i}\parallel \geqslant \sqrt{3}{x}_{1}+{x}_{{\rm{\max }}}$, so orbital i can be regarded as distant from the nucleus. We set

Equation (B65)

and define two different Riemann sums containing $\mu ={N}_{1}^{3}$ terms. We also use this expression for N1 in the case that the spin-orbital is near the nucleus. Using our value of x1 in equation (B57),

Equation (B66)

Then, noting that $\mu ={N}_{1}^{3}$ is the number of terms in either Riemann sum, we obtain the bound on μ in equation (42) of lemma 2.

We approximate ${S}_{{ij}}^{(1,q)}({D}_{1,q,{\rm{non-singular}}})$ with the sum

Equation (B67)

where, for every triple of integers $\vec{k}=({k}_{1},{k}_{2},{k}_{3})$ such that $0\leqslant {k}_{1},{k}_{2},{k}_{3}\lt {N}_{1}$, we define

Equation (B68)

and

Equation (B69)

Thus we have partitioned ${D}_{1,q,{\rm{non-singular}}}$ into ${N}_{1}^{3}$ equal-sized cubes ${T}_{\vec{k}}^{(1,q,{\rm{non-singular}})}$ that overlap on sets of measure zero. Each term of ${{ \mathcal R }}_{1,q,{\rm{non-singular}}}$ satisfies

Equation (B70)

where we have used equation (27) and the fact that $\parallel {\vec{R}}_{q}-\vec{r}\parallel \geqslant {x}_{{\rm{\max }}}$ for every $\vec{r}\in {D}_{1,q,{\rm{non-singular}}}$. This upper bound is no greater than

Equation (B71)

Now substituting our value of x1 from equation (B57) shows that no term has absolute value greater than (corresponding to equation (43) in lemma 2)

Equation (B72)

We show in appendix B.3.6 that

Equation (B73)

B.3.3. Second stage for lemma 2 with spherical polar coordinates

Next we consider discretization of the integral for the case where $\parallel {\vec{R}}_{q}-{\vec{c}}_{i}\parallel \lt \sqrt{3}{x}_{1}+{x}_{{\rm{\max }}}$, so orbital i is near the nucleus. We express

Equation (B74)

where we define $\vec{s}:= (\vec{r}-{\vec{R}}_{q})/(4{x}_{1})$ and

Equation (B75)

Here we use θ and ϕ to refer to the polar and azimuthal angles, respectively, of the vector $\vec{s}$. Note that the singularity in the nuclear Coulomb potential has been absorbed into the spherical polar volume form ${s}^{2}\sin \theta \ {\rm{d}}s\ {\rm{d}}\theta \ {\rm{d}}\phi $. For every triple of natural numbers $\vec{k}=({k}_{s},{k}_{\theta },{k}_{\phi })$ such that $0\leqslant {k}_{s},{k}_{\theta },{k}_{\phi }\lt {N}_{1}$, we define

Equation (B76)

so that ${\bigcup }_{\vec{k}}{T}_{\vec{k}}^{(1,q,{\rm{singular}})}={D}_{1,q,{\rm{singular}}}$. We select

Equation (B77)

Thus our Riemann sum approximation is

Equation (B78)

where we emphasize that

Equation (B79)

is not the volume of ${T}_{\vec{k}}^{(1,q,{\rm{singular}})}$ when considered as a subset of ${{\mathbb{R}}}^{3}$ under the usual Euclidean metric. The reason for this discrepancy is that we absorbed the Jacobian introduced by switching from Cartesian to spherical polar coordinates into the definition of f1. Thus we are integrating f1 with respect to the volume form ${\rm{d}}s\,{\rm{d}}\theta \,{\rm{d}}\phi $, not ${s}^{2}\sin \theta \,{\rm{d}}s\,{\rm{d}}\theta \,{\rm{d}}\phi $. The terms of ${{ \mathcal R }}_{1,q,{\rm{singular}}}$ are bounded by

Equation (B80)

where the inequality follows from equation (27). Again this expression is upper bounded by equation (B71), so substituting our value of x1 from equation (B57) gives the upper bound in equation (43).
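The singularity removal at work here can be demonstrated numerically on an integral of this type. The sketch below evaluates $\int {e}^{-\parallel \vec{r}-\vec{c}\parallel }/\parallel \vec{r}\parallel \,{{\rm{d}}}^{3}r$ (a toy nuclear-attraction integral with the nucleus at the origin) by a midpoint Riemann sum in spherical polar coordinates, where the ${r}^{2}\sin \theta $ Jacobian cancels the $1/r$ singularity. The toy orbital, grid sizes, and the closed-form reference (the standard electrostatic potential of an exponential charge cloud, not taken from the paper) are all illustrative.

```python
import numpy as np

def nuclear_integral_spherical(c, r_max=15.0, N=50):
    """Midpoint Riemann sum, in spherical polar coordinates centred on the
    nucleus, of exp(-||r - c||)/||r||.  After absorbing the 1/||r||
    singularity into the r^2 sin(theta) volume form, the integrand
    exp(-||r - c||) * r * sin(theta) is bounded near the origin."""
    hr, ht, hp = r_max / N, np.pi / N, 2 * np.pi / N
    r = hr * (np.arange(N) + 0.5)
    th = ht * (np.arange(N) + 0.5)
    ph = hp * (np.arange(N) + 0.5)
    R, T, P = np.meshgrid(r, th, ph, indexing="ij")
    X = R * np.sin(T) * np.cos(P)
    Y = R * np.sin(T) * np.sin(P)
    Z = R * np.cos(T)
    orb = np.exp(-np.sqrt((X - c[0])**2 + (Y - c[1])**2 + (Z - c[2])**2))
    return np.sum(orb * R * np.sin(T)) * hr * ht * hp

# Closed-form reference: the Coulomb potential of the exponential cloud
# exp(-r) evaluated at distance d = ||c|| is (4*pi/d) * (2 - exp(-d)*(d+2)).
d = 2.0
exact = (4 * np.pi / d) * (2 - np.exp(-d) * (d + 2))
approx = nuclear_integral_spherical(c=(d, 0.0, 0.0))
print(abs(approx - exact))
```

No special handling of the origin is needed: the midpoint rule never samples $r=0$, and the rescaled integrand stays bounded there in any case.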

We show in appendix B.3.7 that

Equation (B81)

B.3.4. Third stage for lemma 2

We again finish the proof by showing that the total error is bounded by $\delta $. From equation (B6), we have

Equation (B82)

Equation (B83)

We have given a bound that holds simultaneously for both ${\delta }_{{\rm{trunc}}}^{(1,q,{\rm{non-singular}})}$ and ${\delta }_{{\rm{trunc}}}^{(1,q,{\rm{singular}})}$, and we have given a bound for ${\delta }_{{\rm{Riemann}}}^{(1,q,{\rm{singular}})}$ that is larger (as a function of x) than our bound for ${\delta }_{{\rm{Riemann}}}^{(1,q,{\rm{non-singular}})}$. We are therefore able to assert that the error of our Riemann sum approximation, no matter which we choose, is always bounded above by

Equation (B84)

where

Equation (B85)

We have found two different upper bounds on the magnitudes of the terms in the Riemann sums given in equations (B70) and (B80). Finally, we note that by substituting our value of x1 from equation (B57), this expression is upper bounded by $\delta $. This last step completes our proof of lemma 2. The remainder of this subsection gives the details for some of the steps above.

B.3.5. Bounding ${\delta }_{{\rm{trunc}}}^{(1,q)}$ for lemma 2

Note first that ${{ \mathcal B }}_{{x}_{1}}({\vec{c}}_{i})\subset {D}_{1,q,{\rm{non-singular}}}$ and ${{ \mathcal B }}_{{x}_{1}}({\vec{c}}_{i})\subset {D}_{1,q,{\rm{singular}}}$. To see the latter, note that we only consider ${D}_{1,q,{\rm{singular}}}$ in the case that $\parallel {\vec{R}}_{q}-{\vec{c}}_{i}\parallel \leqslant \sqrt{3}{x}_{1}+{x}_{{\rm{\max }}}$, which implies that

Equation (B86)

As we have

Equation (B87)

for any D such that ${{ \mathcal B }}_{x}({\vec{c}}_{i})\subset D$, we may compute

Equation (B88)

where the function Λ is as defined in equation (B1) and the inequality follows from equation (28). By lemma 4, in the case $\parallel {\vec{R}}_{q}-{\vec{c}}_{i}\parallel \gt {x}_{1}$ we have

Equation (B89)

where the second inequality follows from ${x}_{1}\geqslant {x}_{{\rm{\max }}}$. In the case $\parallel {\vec{R}}_{q}-{\vec{c}}_{i}\parallel \leqslant {x}_{1}$ we can use

Equation (B90)

We can add the bounds to find, in general, that

Equation (B91)

B.3.6. Bounding ${\delta }_{{\rm{Riemann}}}^{(1,q,{\rm{non-singular}})}$ for lemma 2

Following appendix B.1.1, we note that

Equation (B92)

and

Equation (B93)

for each $\vec{k}$. We can bound the derivative of the integrand using the product rule and the triangle inequality as follows:

Equation (B94)

where the last inequality follows from equations (27) and (29), as well as the fact that $\parallel {\vec{R}}_{q}-\vec{r}\parallel \geqslant {x}_{{\rm{\max }}}$ for any $\vec{r}\in {D}_{1,q,{\rm{non-singular}}}$. From equations (B15) and (B65), and noting $1/\lceil z\rceil \leqslant 1/z$, we have

Equation (B95)

B.3.7. Bounding ${\delta }_{{\rm{Riemann}}}^{(1,q,{\rm{singular}})}$ for lemma 2

Recalling that we are using a non-standard metric to evaluate the volumes and diameters of sets, we find

Equation (B96)

and

Equation (B97)

By equation (B15), it remains to find a bound on the derivative of f1. Throughout this subsection, we write ${f}_{{\rm{\max }}}^{{\prime} }$ for this bound.

To bound this derivative, we consider the gradient in three different ways. First there is ${\rm{\nabla }}$, which is the gradient with respect to the unscaled position coordinates. Second there is ${{\rm{\nabla }}}_{s}$, which is the gradient with respect to the spherical polar coordinates, but just taking the derivatives with respect to each coordinate. That is,

Equation (B98)

We use this because we are treating the coordinates as though they were Euclidean for the discretized integral. Third, there is the usual gradient in spherical polar coordinates,

Equation (B99)

Because we consider $s\in [0,1]$, the components of the gradient ${{\rm{\nabla }}}_{s}$ are upper bounded in magnitude by the corresponding components of the gradient ${{\rm{\nabla }}}_{s}^{{\prime} }$. Therefore

Equation (B100)

The restriction on the magnitude of the gradient in equation (29) holds on the usual gradient in spherical polar coordinates. This means that

Equation (B101)

Using these results, we have

Equation (B102)

Thus we have the bound

Equation (B103)

We can now give a bound for our approximation to ${S}_{{ij}}^{(1,q)}({D}_{1,q,{\rm{singular}}})$. Using the above definitions of ${f}_{{\rm{\max }}}^{{\prime} }$, ${\rm{vol}}({D}_{1,q,{\rm{singular}}})$ and ${\rm{diam}}({T}_{\vec{k}}^{(1,q,{\rm{singular}})})$, we have

Equation (B104)

Using equation (B65) and noting $1/\lceil z\rceil \leqslant 1/z$, we have

Equation (B105)

where we have used ${x}_{1}\geqslant {x}_{{\rm{\max }}}$.

B.4. Proof of lemma 3

As in appendix B.3, we separate our proof into two cases, depending on whether the singularity of the integrand is relevant or not. If the orbitals i and j are distant, then the singularity is unimportant and we can use rectangular coordinates. If these orbitals are nearby, then we use spherical polar coordinates to eliminate the singularity from the integrand. We do not consider the distance between the orbitals k and ${\ell }$ in order to simplify the analysis.

B.4.1. First stage for lemma 3

Again the first stage is to truncate the domain of integration. We take

Equation (B106)

to be the size of the truncation region. The condition in equation (44) ensures that ${x}_{2}\geqslant {x}_{{\rm{\max }}}$. We regard the orbitals as distant if $\parallel {\vec{c}}_{i}-{\vec{c}}_{j}\parallel \geqslant 2\sqrt{3}{x}_{2}+{x}_{{\rm{\max }}}$. Then we take the truncation region

Equation (B107)

Otherwise, if the orbitals are nearby we take the truncation region

Equation (B108)

where $\zeta := 2\sqrt{3}+3$. The error in the first case is

Equation (B109)

and the error in the second case is

Equation (B110)

The maximum error for either case is denoted

Equation (B111)

We upper bound this error in appendix B.4.5 as

Equation (B112)

B.4.2. Second stage for lemma 3 with Cartesian coordinates

The second stage for the proof of lemma 3 is to discretize the integrals into Riemann sums. In this subsection we consider the case that orbitals i and j are distant, so we wish to approximate the truncated integral ${S}_{{ijk}{\ell }}^{(2)}({D}_{2,{\rm{non-singular}}})$. In the next subsection we consider discretization in the case where orbitals i and j are nearby, and we wish to approximate ${S}_{{ijk}{\ell }}^{(2)}({D}_{2,{\rm{singular}}})$. Each sum contains $\mu ={N}_{2}^{6}$ terms, where

Equation (B113)

The same value of N2 will be used for spherical polar coordinates. Using the value of x2 from equation (B106) gives

Equation (B114)

Since $\mu ={N}_{2}^{6}$ is the number of terms in either Riemann sum, we obtain the lower bound on μ in equation (46) of lemma 3.
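A coarse version of this six-dimensional Cartesian discretization can be run directly for the distant-orbital case. The sketch below uses toy exponential orbitals and illustrative truncation sizes and grid resolutions (not the paper's); as a reference value it reduces the double integral to three dimensions using the standard closed-form Coulomb potential of an exponential charge cloud, which is outside the paper's machinery.

```python
import numpy as np

def cube_midpoints(center, half_width, N):
    """Midpoints of the N^3 equal sub-cubes partitioning the cube of the
    given half-width about `center`, plus the volume of one sub-cube."""
    h = 2 * half_width / N
    axes = [center[k] - half_width + h * (np.arange(N) + 0.5) for k in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    return np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1), h**3

c_i = np.array([0.0, 0.0, 0.0])
c_j = np.array([6.0, 0.0, 0.0])     # distant orbitals: no singularity sampled

# Coarse 6D midpoint sum (mu = N^6 terms) for the toy two-electron integral
#   J = int int exp(-|r1 - c_i|) exp(-|r2 - c_j|) / |r1 - r2| d3r1 d3r2
# over a product of truncated cubes centred on the two orbitals.
N = 12
p1, v1 = cube_midpoints(c_i, 7.0, N)
p2, v2 = cube_midpoints(c_j, 7.0, N)
w1 = np.exp(-np.linalg.norm(p1 - c_i, axis=1))
w2 = np.exp(-np.linalg.norm(p2 - c_j, axis=1))
r12 = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
J6 = v1 * v2 * np.einsum("i,j,ij->", w1, w2, 1.0 / r12)

# 3D reference: integrate orbital j against the exact potential of cloud i,
# Phi(d) = (4*pi/d) * (2 - exp(-d)*(d+2)), on a finer grid.
p, v = cube_midpoints(c_j, 10.0, 60)
dist = np.linalg.norm(p - c_i, axis=1)
phi = (4 * np.pi / dist) * (2 - np.exp(-dist) * (dist + 2))
Jref = v * np.sum(np.exp(-np.linalg.norm(p - c_j, axis=1)) * phi)

print(J6, Jref)
```

Even at this very coarse resolution the 6D sum agrees with the reference to within a few percent, since the distant-orbital condition keeps $1/\parallel {\vec{r}}_{1}-{\vec{r}}_{2}\parallel $ smooth over the sampled region.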

We approximate ${S}_{{ijk}{\ell }}^{(2)}({D}_{2,{\rm{non-singular}}})$ with the sum

Equation (B115)

where, for every pair of integer triples ${\vec{k}}_{1}$ and ${\vec{k}}_{2}$ with components in $\{0,1,\ldots ,{N}_{2}-1\}$, we define

Equation (B116)

and

Equation (B117)

Thus we have partitioned ${D}_{2,{\rm{non-singular}}}$ into μ equal-sized regions that overlap on sets of measure zero. Each term of ${{ \mathcal R }}_{2,{\rm{non-singular}}}$ has absolute value no greater than

Equation (B118)

where the inequality follows from equation (27) and the fact that the distance between ${{ \mathcal C }}_{x}({\vec{c}}_{i})$ and ${{ \mathcal C }}_{x}({\vec{c}}_{j})$ is no smaller than ${x}_{{\rm{\max }}}$ if $\parallel {\vec{c}}_{i}-{\vec{c}}_{j}\parallel \geqslant 2\sqrt{3}{x}_{2}+{x}_{{\rm{\max }}}$. This expression is upper bounded by

Equation (B119)

Substituting our value of x2 from equation (B106) shows that no term has absolute value greater than (corresponding to equation (47) in lemma 3)

Equation (B120)

We show in appendix B.4.6 that the error may be bounded as

Equation (B121)

B.4.3. Second stage for lemma 3 with spherical polar coordinates

In this subsection we discretize the integral ${S}_{{ijk}{\ell }}^{(2)}({D}_{2,{\rm{singular}}})$ for the case of nearby orbitals. We introduce the following definition for convenience in what follows:

Equation (B122)

We define $\vec{s}:= ({\vec{r}}_{1}-{\vec{c}}_{i})/{x}_{2}$ and $\vec{t}:= ({\vec{r}}_{1}-{\vec{r}}_{2})/(\zeta {x}_{2})$. We write $\vec{s}=({s}_{1},{s}_{2},{s}_{3})$ and θ and ϕ for the polar and azimuthal angles of $\vec{t}$. Next we define

Equation (B123)

Then we can write

Equation (B124)

Let ${\vec{k}}_{\vec{s}}=({k}_{1},{k}_{2},{k}_{3})$, where $0\leqslant {k}_{1},{k}_{2},{k}_{3}\lt {N}_{2}$, and let ${\vec{k}}_{\vec{t}}=({k}_{t},{k}_{\theta },{k}_{\phi })$, where $0\leqslant {k}_{t},{k}_{\theta },{k}_{\phi }\lt {N}_{2}$. Define ${s}_{{k}_{{\ell }}}=(2{k}_{{\ell }}+1-{N}_{2})/{N}_{2}$ for each ${\ell }=1,2,3$ so that ${\vec{s}}_{{\vec{k}}_{\vec{s}}}=({s}_{{k}_{1}},{s}_{{k}_{2}},{s}_{{k}_{3}})$ and define

Equation (B125)

for each ${\vec{k}}_{\vec{t}}$. We then define

Equation (B126)

Now we define our Riemann sum:

Equation (B127)

where

Equation (B128)

Here

Equation (B129)

is not the volume of ${D}_{2,{\rm{singular}}}$ considered under the usual Euclidean metric, as in equation (B79). We need to use this non-standard volume because the Jacobian introduced by changing from Cartesian to spherical polar coordinates was absorbed into the definition of our integrand f2. Therefore, each term in the Riemann sum has absolute value no greater than

Equation (B130)

where the inequality follows from equation (27) applied to the definition of f2 in terms of ${\eta }_{i{\ell }}$ and ${\eta }_{{jk}}$. Again this expression is upper bounded by equation (B136) and substituting our value of x2 yields the upper bound in equation (47).

We show in appendix B.4.7 that

Equation (B131)

B.4.4. Third stage for lemma 3

Lastly we show that the error is properly bounded. From equation (B6), we have

Equation (B132)

and

Equation (B133)

We have given a bound that holds simultaneously for both ${\delta }_{{\rm{trunc}}}^{(2,{\rm{non-singular}})}$ and ${\delta }_{{\rm{trunc}}}^{(2,{\rm{singular}})}$, and we have given a bound for ${\delta }_{{\rm{Riemann}}}^{(2,{\rm{singular}})}$ that is larger (as a function of x2) than our bound for ${\delta }_{{\rm{Riemann}}}^{(2,{\rm{non-singular}})}$. We are therefore able to assert that the error of our Riemann sum approximation, no matter which we choose, is always bounded above by

Equation (B134)

where

Equation (B135)

We have also found that the terms in the Riemann sum are upper bounded by equations (B118) and (B130) in the two cases. A bound that will hold for both is given by

Equation (B136)

Then substituting our value of x2 from equation (B106) shows that the error is upper bounded by $\delta $. This last step completes our proof of lemma 3. The remainder of this subsection gives the details for some of the steps above.

B.4.5. Bounding ${\delta }_{{\rm{trunc}}}^{(2)}$ for lemma 3

Note that ${{ \mathcal B }}_{{x}_{2}}({\vec{c}}_{i})\times {{ \mathcal B }}_{{x}_{2}}({\vec{c}}_{j})$ is a subset of both ${D}_{2,{\rm{non-singular}}}$ and ${D}_{2,{\rm{singular}}}$. The former is immediately apparent. To see the latter, observe that $\parallel {\vec{c}}_{i}-{\vec{c}}_{j}\parallel \lt 2\sqrt{3}{x}_{2}+{x}_{{\rm{\max }}}$ implies that the maximum possible value of $\parallel {\vec{r}}_{1}-{\vec{r}}_{2}\parallel $ for any ${\vec{r}}_{1}\in {{ \mathcal B }}_{{x}_{2}}({\vec{c}}_{i})$ and ${\vec{r}}_{2}\in {{ \mathcal B }}_{{x}_{2}}({\vec{c}}_{j})$ is $2\sqrt{3}{x}_{2}+2{x}_{2}+{x}_{{\rm{\max }}}\leqslant (2\sqrt{3}+3){x}_{2}=\zeta {x}_{2}$. Therefore,

Equation (B137)

where we have used equation (28) and, with the change of variables $\vec{s}={\vec{r}}_{1}-{\vec{c}}_{i}$, the definition of Λ from equation (B1). By lemma 4, for $\parallel \vec{s}+{\vec{c}}_{i}-{\vec{c}}_{j}\parallel \leqslant {x}_{2}$ we get

Equation (B138)

and for $\parallel \vec{s}+{\vec{c}}_{i}-{\vec{c}}_{j}\parallel \gt {x}_{2}$ we get

Equation (B139)

In either case we then get

Equation (B140)

We use the fact that $({z}^{2}+2z+2){e}^{-z}\lt 4{e}^{-z/2}$ for any $z\gt 0$ to find

Equation (B141)

which gives us

Equation (B142)
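As with the analogous bound in appendix B.2.6, the elementary inequality invoked here can be sanity-checked numerically (this is an illustration, not part of the proof):

```python
import numpy as np

# Check the bound (z^2 + 2z + 2) e^{-z} < 4 e^{-z/2} for all z > 0.
# The left-hand side, weighted by e^{z/2}, peaks near z = 1 + sqrt(3).
z = np.linspace(1e-9, 80.0, 2_000_000)
gap = 4 * np.exp(-z / 2) - (z**2 + 2 * z + 2) * np.exp(-z)
print(gap.min())   # positive: the bound holds at every sampled point
```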

B.4.6. Bounding ${\delta }_{{\rm{Riemann}}}^{(2,{\rm{non-singular}})}$ for lemma 3

Following appendix B.1.1, we note that

Equation (B143)

and

Equation (B144)

for each ${\vec{k}}_{1}$ and ${\vec{k}}_{2}$. To find a bound on ${\delta }_{{\rm{Riemann}}}^{(2,{\rm{non-singular}})}$, it only remains to find a bound on the derivative of the integrand.

To bound the derivative of the integrand, we first find bounds on the gradients of the numerator and the denominator separately. The gradient of the numerator can be bounded using the product rule and triangle inequality, as well as equations (27) and (29):

Equation (B145)

The gradient of the denominator can be computed directly:

Equation (B146)

Again by the product rule and the triangle inequality,

Equation (B147)

The last inequality follows from our assumption that $\parallel {\vec{c}}_{i}-{\vec{c}}_{j}\parallel \geqslant 2\sqrt{3}{x}_{2}+{x}_{{\rm{\max }}}$, which implies that the distance between ${{ \mathcal C }}_{{x}_{2}}({\vec{c}}_{i})$ and ${{ \mathcal C }}_{{x}_{2}}({\vec{c}}_{j})$ is at least ${x}_{{\rm{\max }}}$. Therefore,

Equation (B148)

B.4.7. Bounding ${\delta }_{{\rm{Riemann}}}^{(2,{\rm{singular}})}$ for lemma 3

Following appendix B.1.1, we again note that ${\rm{vol}}({D}_{2,{\rm{singular}}})=16{\pi }^{2}$. We also observe that

Equation (B149)

where we are again treating the variables s, θ and ϕ formally as Euclidean coordinates instead of spherical polar. It then remains to bound the derivative of the integrand

Equation (B150)

where $\vec{s}=({s}_{1},{s}_{2},{s}_{3})$.

The derivative of the integrand is bounded as follows. Define ${{\rm{\nabla }}}_{s}=\left(\tfrac{\partial }{\partial {s}_{1}},\tfrac{\partial }{\partial {s}_{2}},\tfrac{\partial }{\partial {s}_{3}}\right)$ and ${{\rm{\nabla }}}_{t}=\left(\tfrac{\partial }{\partial t},\tfrac{\partial }{\partial \phi },\tfrac{\partial }{\partial \theta }\right)$. By the product rule and the triangle inequality,

Equation (B151)

where the last inequality follows from equation (27). We also have

Equation (B152)

where ${\rm{\nabla }}$ in the second-to-last inequality refers to the gradient operator expressed in the usual basis and the final inequality follows from equation (29). Finally, we have

Equation (B153)

where we have again used the product rule and the triangle inequality and, in the last inequality, equation (27). We have also used the bounds on the gradient operator ${{\rm{\nabla }}}_{t}$ in the same way as in appendix B.3.7. We note that $\parallel {{\rm{\nabla }}}_{t}(t\sin \theta )\parallel \leqslant \sqrt{2}$, $\parallel {{\rm{\nabla }}}_{s}({\eta }_{{jk}}({x}_{2}\vec{s}-\zeta {x}_{2}\vec{t}+{\vec{c}}_{i}))\parallel \leqslant 2{x}_{2}{\gamma }_{1}{\varphi }_{{\rm{\max }}}^{2}/{x}_{{\rm{\max }}}$ (as above) and

Equation (B154)

In summary, we have shown

Equation (B155)

We can now compute our bound on ${\delta }_{{\rm{Riemann}}}^{(2,{\rm{singular}})}$. Including the factor of ${\zeta }^{2}{x}_{2}^{5}$ in the integral and using equation (B15),

Equation (B156)

Footnotes

  • We use the typical computer science convention that $f\in {\rm{\Theta }}(g)$, for any functions f and g, if f is asymptotically upper and lower bounded by multiples of g, ${ \mathcal O }$ indicates an asymptotic upper bound, $\widetilde{{ \mathcal O }}$ indicates an asymptotic upper bound suppressing any polylogarithmic factors in the problem parameters, Ω indicates the asymptotic lower bound and $f\in o(g)$ implies $f/g\to 0$ in the asymptotic limit.

  • The basis of atomic orbitals is not necessarily orthogonal. However, this can be fixed using the efficient Löwdin symmetric orthogonalization procedure, which seeks the closest orthogonal basis [14, 50].
