The emergence of scanning probe and electron beam imaging techniques has allowed quantitative studies of atomic structure and minute details of electronic and vibrational structure on the level of individual atomic units. These microscopic descriptors, in turn, can be associated with local symmetry breaking phenomena, representing the stochastic manifestation of the underpinning generative physical model. Here, we explore the reconstruction of exchange integrals in the Hamiltonian for a lattice model with two competing interactions from observations of microscopic degrees of freedom and establish the uncertainties and reliability of such analysis in a broad parameter-temperature space. In contrast to other approaches, we specify a loss function inherent to thermodynamic systems and utilize it to estimate uncertainty in simulated realizations of different models. As an ancillary task, we develop a machine learning approach based on histogram clustering to predict phase diagrams efficiently using a reduced descriptor space. We further demonstrate that reconstruction is possible well above the phase transition and in the regions of parameter space where the macroscopic ground state of the system is poorly defined due to frustrated interactions. This suggests that this approach can be applied to the traditionally complex problems of condensed matter physics such as ferroelectric relaxors and morphotropic phase boundary systems, spin and cluster glasses, and quantum systems once the local descriptors linked to the relevant physical behaviors are known.

Phase transitions between dissimilar phases underpin multiple areas of condensed matter physics and materials science. From an applications viewpoint, phase transitions are intricately involved in the process of material formation in virtually all technologies from ceramics to metals to semiconductors. Electrical, gating, and chemical control of phase transitions has emerged as a promising pathway for low-voltage electronics in tunneling barriers and field effect devices.1,2 Beyond condensed matter physics, phase transitions in Bose–Einstein condensates allow new classes of quantum information technology devices.3

The ubiquity of the phase transitions in virtually all areas of applied and fundamental science has made them one of the central areas for theoretical exploration. Among the existing paradigms for exploring phase transitions, a special place is held by the lattice models such as Ising,4 Heisenberg,4 Kitaev,5 etc. Here, the system is represented as a set of interacting spins on a spatial lattice. The spins represent the individual degrees of freedom of the material and can be magnetic spins, electrical dipoles, atomic species, etc. The lattice defines the geometry of the interactions. Depending on the conservation laws, character of the spins, interactions, and geometry, these lattice models can represent an extremely broad range of phenomena from ordering in metal alloys to ferroic transitions to surface adsorbate dynamics.

The universality of the lattice model approach has spurred intensive analytic and numerical investigations of these models.6,7 These studies mainly seek to explore the phase diagrams of the lattice models, i.e., the correspondence between global variables such as temperature and field and the preponderant behaviors and patterns of spin arrangement. Also of interest are global thermodynamic parameters such as magnetization, heat capacity, and susceptibility. The spatial arrangements of the spins are traditionally explored in the form of the ground-state pattern as well as correlation functions representing the distributions of defects and disorder, i.e., over which length scales the system is ordered.8 Notably, these behaviors allow for straightforward exploration only for systems with well-defined ground states, whereas the presence of competing interactions leads to frustrated and poorly defined ground states. Nonetheless, the combination of efficient sampling strategies and high-performance computing has allowed exploration of even complex multidimensional models.

Over recent years, the emergence of machine learning (ML) tools and ready availability of information-compression algorithms, such as principal and independent component analysis, support vector machines and deep learning tools such as variational autoencoders, have provided new impetus within the field.9–12 Even though the applications of ML tools to lattice models are relatively recent, the field has experienced an exponential growth in the last three years, opening pathways to exploration of exotic quantum phases, off-lattice models, etc.13–16 

However, experimental selection of the appropriate lattice model corresponding to a specific phase transition and determination of its parameters, termed the “inverse Ising problem,” presents a more difficult problem;17–20 an in-depth view of that field is explored in an article by Nguyen et al.21 Traditionally, the symmetry and type of order parameter are determined based on macroscopic measurements and scattering studies, whereas numerical values of exchange integrals are obtained based on the matching between theoretical predictions and macroscopic measurements of thermodynamic parameters. However, these studies typically require high-quality samples for which the corresponding critical behaviors can be reliably determined. Similarly, in materials with high disorder or non-ergodic, non-thermalized ground states, the determination of appropriate lattice models can be highly non-trivial.

The development of high-resolution imaging tools such as scanning transmission electron microscopy and scanning probe microscopy has allowed direct insight into the atomic-scale structure of materials.22–24 Beyond atomically resolved and element specific imaging, these techniques provide information on the minute symmetry-breaking distortions, allowing for direct mapping of order parameter fields such as polarization,25–29 octahedra tilts,30–33 strains,34 and more complex emergent phenomena.35 Correspondingly, of interest is the extraction of the parameters of the underpinning physics model from the atomically resolved observations. As one approach, this can be achieved using the direct matching of the mesoscopic order parameter field to Ginzburg–Landau type models.36,37 Alternatively, mesoscopic descriptors such as hysteresis loops or relaxation curves can be used as proxy identifiers for local behavior.38 Finally, it has been proposed that the direct observation of the mesoscopic degrees of freedom can be directly compared to the lattice model via statistical distance minimization.39–43 Indeed, naively minimizing a loss between theoretical and experimental datasets without the appropriate metric for thermodynamic systems will lead to incorrect reconstructions.44 Previously, we have reported the principles of statistical distance minimization41,42 and its applications to establishing the parameters of non-ideal solid solutions in layered superconductors,43 information fusion between surface and depth-resolved chemical information in manganites,41,43 and pair- and triple-atom model comparison for segregation in layered chalcogenides.40 Here, we explore the reconstruction of the Ising model parameters in the presence of competing interactions with statistical distance minimization, explore the sensitivity of the reconstruction to incomplete knowledge of the model, and derive uncertainty bounds on the reconstructions. Guidelines for experimentally driven model selection are proposed.

We explore the behavior of the Ising model with nearest neighbor and next nearest neighbor interactions. Depending on the sign and magnitude of the exchange integrals, this model can give rise to a rich phase diagram containing ferromagnetic, antiferromagnetic, paramagnetic, and frustrated phases. Here, we first apply the ML tools for rapid simulation of the associated phase diagram based on the local statistics.

The classical Ising Hamiltonian model realized on N² evenly spaced lattice sites of a square lattice with nearest neighbor (NN) and next nearest neighbor (NNN) interactions is the chosen model for this study. The Hamiltonian of a given configuration σ in the absence of an external magnetic field is given by

$$H(\sigma) = -\sum_{\langle i,j\rangle} (J_1 + \Delta J_1)\,\sigma_i \sigma_j \;-\; \sum_{\langle i,k\rangle} (J_2 + \Delta J_2)\,\sigma_i \sigma_k, \tag{1}$$

where J1 and J2 are the exchange integrals corresponding to the nearest neighbor interactions and next nearest neighbor interactions, respectively; ΔJ1 and ΔJ2 are the corresponding disorders in exchange integrals; and ⟨i,j⟩ and ⟨i,k⟩ are the sums over all nearest neighbors and next nearest neighbors, respectively.
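As a concrete illustration, the Hamiltonian of Eq. (1) without bond disorder (ΔJ1 = ΔJ2 = 0) can be evaluated for a periodic square lattice in a few lines. This is a sketch for orientation, not the authors' code; the function name is ours.

```python
import numpy as np

def ising_energy(spins, J1, J2):
    """Energy of Eq. (1) with zero bond disorder on a periodic square
    lattice; spins is an N x N array of +/-1 values. Each NN/NNN pair
    is counted exactly once via rolled-array products."""
    # Nearest neighbors: shifts down and right cover each bond once.
    nn = spins * np.roll(spins, 1, axis=0) + spins * np.roll(spins, 1, axis=1)
    # Next nearest neighbors: the two diagonal shifts.
    nnn = (spins * np.roll(np.roll(spins, 1, axis=0), 1, axis=1)
           + spins * np.roll(np.roll(spins, 1, axis=0), -1, axis=1))
    return -J1 * nn.sum() - J2 * nnn.sum()
```

For a fully aligned 4 × 4 lattice with J1 = 1 and J2 = 0, every one of the 32 NN bonds contributes −1, giving a total energy of −32.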

A Monte Carlo (MC) simulation was performed on a square lattice with N = 40 using the Metropolis algorithm. The system was equilibrated for 1000 × N² Monte Carlo steps, and the data were acquired over the next 1000 × N² Monte Carlo steps. A spin reversal of each lattice site was attempted at every step. The flip was accepted when the energy of the resultant configuration was lower than that of the previous step. Otherwise, the probability of the spin flip was estimated using the Boltzmann distribution given by Eq. (2), where β = 1/kBT is the inverse temperature, with T being the absolute temperature and kB being the Boltzmann constant, and the denominator is the partition function,

$$P_\beta(\sigma_i) = \frac{e^{-\beta H(\sigma_i)}}{\sum_j e^{-\beta H(\sigma_j)}}. \tag{2}$$
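The Metropolis update described above can be sketched as follows, assuming periodic boundaries and the disorder-free Hamiltonian of Eq. (1); function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def metropolis_sweep(spins, J1, J2, beta, rng):
    """One Monte Carlo sweep (N^2 attempted single-spin flips) of the
    Metropolis algorithm for the NN + NNN Ising model."""
    N = spins.shape[0]
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        # Local fields from the four NN and four NNN sites.
        h_nn = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
                + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
        h_nnn = (spins[(i + 1) % N, (j + 1) % N] + spins[(i + 1) % N, (j - 1) % N]
                 + spins[(i - 1) % N, (j + 1) % N] + spins[(i - 1) % N, (j - 1) % N])
        # Energy change for flipping spin (i, j) under Eq. (1).
        dE = 2.0 * spins[i, j] * (J1 * h_nn + J2 * h_nnn)
        # Accept downhill moves; accept uphill moves with Boltzmann weight.
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins
```

At low temperature (large β) in the ferromagnetic regime, an aligned lattice is essentially frozen: flipping any spin costs dE = 8J1, which is overwhelmingly rejected.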

Macroscopic properties as a function of temperature and as a function of exchange integrals are explored. Setting J2 and ΔJ2 to zero recovers the traditional 2D Ising model with only the nearest neighbor (NN) interactions. Macroscopic properties of such a model as a function of reduced temperature are shown in Figs. 1(a)–1(d). Clearly, all four properties delineate the phase transition from the ferromagnetic to the paramagnetic region. Phase transitions as a function of exchange integrals at a constant temperature are shown in Figs. 1(e)–1(h). Energy per site is illustrated in Fig. 1(g). The corresponding magnetization is shown in Fig. 1(e), clearly highlighting the ferromagnetic region and the transition between the ferromagnetic and the paramagnetic states (diagonal line). The specific heat is shown in Fig. 1(h), delineating the primary phase transitions between ferromagnetic, antiferromagnetic, paramagnetic, and frustrated states. Finally, susceptibility is shown in Fig. 1(f), showing the divergence across the ferromagnetic–paramagnetic phase transition. Overall, Fig. 1 provides a traditional approach for mapping the phase diagram of lattice models based on macroscopic thermodynamic parameter analysis.
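For reference, the curves and surfaces of Fig. 1 follow from standard fluctuation relations applied to the Monte Carlo time series of energy and magnetization; the helper below is our own sketch, with per-site normalization assumed.

```python
import numpy as np

def thermodynamic_observables(energies, magnetizations, beta, n_sites):
    """Per-site observables from MC time series: mean |m|, mean energy,
    specific heat C = beta^2 Var(E)/N^2, susceptibility chi = beta Var(|M|)/N^2
    (standard fluctuation-dissipation estimators)."""
    E = np.asarray(energies, dtype=float)
    M = np.asarray(magnetizations, dtype=float)
    m = np.abs(M).mean() / n_sites
    e = E.mean() / n_sites
    C = beta ** 2 * E.var() / n_sites
    chi = beta * np.abs(M).var() / n_sites
    return m, e, C, chi
```

Sweeping temperature (or the J1, J2 grid) and plotting these four quantities reproduces the layout of Fig. 1.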

FIG. 1.

(a) Magnetization, (b) susceptibility, (c) energy, and (d) specific heat curves as a function of reduced temperature for the traditional 2D Ising model on a square lattice with NN interactions only (J1 = 0.65 and J2 = 0). (e) Magnetization, (f) susceptibility, (g) energy, and (h) specific heat surfaces of the square lattice Ising model at a reduced temperature Tr = 0.96 as a function of exchange integrals corresponding to nearest (J1) and next nearest (J2) neighbors in units of kBT.


An alternative approach for mapping phase diagrams in simulated lattice models is based on machine learning and statistical analysis of the local spin configurations and correlations between spins. For example, multiple works showed that using supervised learning techniques such as convolutional neural networks, traditionally used for image recognition tasks, one can identify phase transitions and map phase boundaries in classical and quantum lattice models using two-dimensional and three-dimensional “images” of the spin configurations around the critical points as a training set.9–11 At the same time, more conventional multivariate statistics and probabilistic learning tools such as kernelized principal component analysis and the variational autoencoder have allowed learning the phase transition and the associated order parameter in the classical Ising model from local spin configurations in an unsupervised manner.45,46

Traditional descriptors of such microstates are the spin configurations of all the lattice points.47 In this scenario, the number of rows (R) in the feature matrix is R = Nsim × Nmc, where Nsim is the number of distinct sets of simulation parameters and Nmc is the number of Monte Carlo steps considered for the analysis. The number of columns (C) is the number of features used for the analysis. Spin configurations of the entire lattice at each simulation are used as attributes. The first few principal components corresponding to such a matrix closely resemble the trends of the macroscopic properties (magnetization, energy, specific heat, and susceptibility) as a function of reduced temperature.47

Here, we propose an approach where phase classification is performed based on the relative frequencies of local configurations at a given set of simulation parameters. These frequencies of local configurations describe the microstates of the system exhaustively. For a given case, they are averaged over all the microstates generated by the Monte Carlo simulations. The local configurations considered for further analysis are described in Fig. 2(e). The relative frequencies of local configurations of the ferromagnetic, anti-ferromagnetic, paramagnetic, and frustrated systems are shown in Figs. 2(a)–2(d). In this case, the number of elements of the feature matrix is 5400 as opposed to 10^9 in the study referenced above. Monte Carlo simulations are performed at a reduced temperature of Tr = 0.96 and at 900 evenly spaced points on a grid of exchange integrals J1 = [−1.25, 1.25] and J2 = [−0.75, 0.75] in units of kBT. From here on, all values of exchange integrals are reported in units of kBT. Phase diagrams of complex systems can thus be mapped using statistical analysis of these local descriptors.
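A histogram descriptor of this type can be computed directly from a spin configuration. The paper's exhaustive six-configuration set is defined graphically in Fig. 2(e) and is not reproducible from the text alone, so the sketch below counts all 16 raw 2 × 2 patterns as an illustrative stand-in; grouping them into six symmetry classes would be a post-processing step.

```python
import numpy as np

def block_histogram(spins):
    """Relative frequencies of local 2x2 spin patterns on a periodic
    lattice. Each pattern is encoded as a 4-bit integer. This is an
    illustrative stand-in for the six-motif descriptor of Fig. 2(e)."""
    N = spins.shape[0]
    counts = np.zeros(16)
    s = (spins + 1) // 2  # map -1/+1 spins to 0/1 bits
    for i in range(N):
        for j in range(N):
            code = (8 * s[i, j] + 4 * s[i, (j + 1) % N]
                    + 2 * s[(i + 1) % N, j] + s[(i + 1) % N, (j + 1) % N])
            counts[int(code)] += 1
    return counts / counts.sum()
```

A fully aligned lattice puts all weight on a single pattern, while a checkerboard antiferromagnet splits its weight evenly between the two alternating patterns.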

FIG. 2.

Representative relative frequencies of local configurations at reduced temperature Tr = 0.96 for the (a) ferromagnetic, (b) anti-ferromagnetic, (c) paramagnetic, and (d) frustrated regimes of the phase diagram. (e) Exhaustive set of six local configurations that serve as descriptors for a given microstate. Representative configurations for each regime are shown in Fig. 3(d) in the same order.


To establish the phase diagram of the system over the parameter space, we use a simple k-means clustering approach. A similar analysis was proposed by Canabarro et al., in which the pairwise correlations among all spins are used as the descriptors.48 This algorithm segregates the input data into k clusters by minimizing the variance within each cluster.49 The number of clusters is a priori unknown but can be estimated based on the quality of separation and the target output. Here, when the feature matrix is segregated into four clusters using the k-means clustering technique, the resulting clusters [Fig. 3(a)] closely resemble the energy [Fig. 1(g)] and specific heat [Fig. 1(h)] phase diagrams. In Fig. 3(a), phases 1, 2, 3, and 4 represent the ferromagnetic, anti-ferromagnetic, frustrated, and paramagnetic regions, respectively. Representative configurations for the four phases are plotted in Fig. 3(d). Ferromagnetic and anti-ferromagnetic transitions to the paramagnetic phase can clearly be observed in Fig. 3(a). It is also observed that for k = 2, the clustering plot reproduces the magnetization and susceptibility phase diagrams. The clusters are also plotted in the two-dimensional space of the first two principal components49 [Fig. 3(b)] and as a dendrogram [Fig. 3(c)] to show the hierarchy between the clusters. Figure 3(b) illustrates the separability of the phases and their general configurations in reduced (principal component) space. This provides information on which phases are separated by continuous and discontinuous transitions. The dendrogram [Fig. 3(c)] shows the inter-cluster distances in the “histogram space.” A subsequent increase in the number of clusters leads to the emergence of additional regions concentrated at the boundary between the frustrated and paramagnetic states; however, the primary ferromagnetic, antiferromagnetic, and frustrated regions remain invariant.
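The clustering step itself is standard; a minimal Lloyd's k-means over the histogram feature matrix might look like the following. This is our own sketch (the authors presumably used a library implementation such as scikit-learn).

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's k-means: assign each row of X (a configuration
    histogram) to its nearest center, then recompute centers, until
    convergence. Returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each histogram to the nearest center (squared Euclidean).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Running this with k = 4 on the 900 histograms over the (J1, J2) grid and coloring the grid by cluster label reproduces a phase-diagram map of the kind shown in Fig. 3(a).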

FIG. 3.

(a) Four clusters generated as a result of k-means algorithm when relative frequencies of local configurations are used as descriptors of macro-states. (b) Clusters and their centers plotted in the 2-D space of first two principal components. (c) Dendrogram generated using hierarchical clustering. (d) Representative configurations that correspond to the four clusters formed.


We note that classically the exploration of the lattice models relies on the macroscopic thermodynamic descriptors such as those shown in Fig. 1 as well as structure factors and correlation functions. Here, we rapidly map the relevant phase diagram using the comparison of the local motifs. In addition to the obvious and previously reported applications for the exploration of the phase diagrams of the lattice models, these observations suggest that the statistics of local spin configurations contain the information on the microscopic interactions in Eq. (1).

Here, we extend the statistical distance minimization method to reconstruct the interaction in the Ising model within the full parameter space. During reconstruction, we have access to the spin configurations (synthetic experimental data) but not the values of Ising model parameters (exchange integrals). Correspondingly, the goal of the reconstruction is to determine the exchange integrals (parameters of the generative physics model) based on the observations of local configurations and establish the associated uncertainties.

To determine the unknown model parameters, we run the Monte Carlo simulations at a given set of simulation parameters to generate the spin configurations of the base (pseudo-experimental or synthetic experimental data) case. Here, 20 snapshots from the last 400 steps of the MC simulations are used as experimental observables for the base case. To reconstruct the exchange integrals of a given base case, simulations were carried out on the entire grid of exchange integrals. The local configurations of the base case are then compared to the simulated cases using the statistical distance metric.50 This metric quantifies the distinguishability of a pair of thermodynamic systems and is given by

$$s = \arccos\left(\sum_{i=1}^{l} \sqrt{p_i q_i}\right), \tag{3}$$

where pi and qi are the probabilities of finding a local configuration i in the base case and model simulations, respectively, and the summation runs over all possible (six in this case) local configurations. The probabilities of the local configurations are determined as the maximum likelihood estimates from the experimental (p) and simulated (q) histograms. Statistical hypothesis testing indicates that as the measured distance metric reduces to zero, the target and model system measurements become indistinguishable. A unique characteristic of this metric, which sets it apart from popular measures such as Kullback–Leibler divergence, is that it automatically incorporates the statistical weights of the collected data, thus allowing for separation of weak signals (viz., thermal fluctuations) from the statistical noise inherent in samples of a thermodynamic system. Extracting these fluctuations from the data is critical for improving the predictive capabilities of the optimized model, as they encode the system's response to changes in external conditions.
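Eq. (3) reduces to a few lines of code, with p and q being the normalized configuration histograms; the function name is ours.

```python
import numpy as np

def statistical_distance(p, q):
    """Statistical (Bhattacharyya-angle) distance of Eq. (3) between two
    discrete probability distributions over local configurations."""
    bc = np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return float(np.arccos(np.clip(bc, -1.0, 1.0)))
```

Identical distributions give s = 0; distributions with disjoint support give the maximal distance s = π/2.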

Since sampling in any real experiment is limited, the key question of exactly how many samples are required to achieve a certain tolerance on the inferred parameters is an important one. Toward this aim, we consider an averaging method utilizing the samples at hand. In this case, we divided the parent dataset of 400 images (last 400 MC steps of synthetic experimental data) into 20 sets of uncorrelated images. Interaction parameters are reconstructed for each subset at a given reduced temperature, and mean, standard deviation, and the probability density function of the reconstructed values are reported.
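The subset-averaging procedure can be sketched as follows, assuming a precomputed dictionary mapping candidate (J1, J2) grid points to simulated histograms; the names and data layout are our assumptions, not the authors' code.

```python
import numpy as np

def reconstruct_with_uncertainty(snapshots, model_hists, n_subsets=20):
    """Split per-snapshot configuration histograms into subsets,
    reconstruct (J1, J2) per subset by statistical distance minimization
    over a candidate grid, and report the mean and standard deviation.
    `model_hists` maps (J1, J2) tuples to simulated histograms."""
    def sdist(p, q):
        return np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0))
    estimates = []
    for subset in np.array_split(np.asarray(snapshots), n_subsets):
        p = subset.mean(axis=0)  # pooled histogram for this subset
        best = min(model_hists, key=lambda J: sdist(p, model_hists[J]))
        estimates.append(best)
    estimates = np.array(estimates)
    return estimates.mean(axis=0), estimates.std(axis=0)
```

When the pseudo-experimental histograms match one grid point exactly, every subset returns that point and the reported standard deviation is zero; finite sampling spreads the per-subset estimates and yields the uncertainties plotted in Figs. 4 and 5.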

First, we explore the reconstruction of the model parameters for the full model. The reconstruction results for all four quadrants of the phase diagram are shown in Figs. 4 and 5. For all values of J1 and J2, the reconstruction shows that the statistical distance minimization yields an unbiased reconstruction of the model parameters. Here, two base cases (J1 = + 0.75, J2 = + 0.35 and J1 = − 0.75, J2 = + 0.35) were reconstructed that have a well-defined ground state where the nearest and next nearest neighbor interactions are not frustrated. The uncertainty is quantified using the experimental averaging technique discussed above.

FIG. 4.

Reconstructed values (mean, μ) of the exchange integrals and their uncertainties (standard deviation, σ) in the (a) first (J1 > 0, J2 > 0) and (b) second (J1 < 0, J2 > 0) quadrants, where there are well-defined ground states as a function of reduced temperature.

FIG. 5.

Reconstructed values (mean, μ) of the exchange integrals and their uncertainties (standard deviation, σ) in the (a) third (J1 < 0, J2 < 0) and (b) fourth (J1 > 0, J2 < 0) quadrants, where the ground states are frustrated, as a function of reduced temperature. Likelihoods of the reconstructed values in the exchange integral space of the (c) third and (d) fourth quadrants. The values of the exchange integrals used to produce the pseudo-experimental data are shown by the cross in each of the plots (c) and (d).


The reconstruction fails below the phase transition since the frozen, single-phase spin configurations carry almost no information about the exchange integrals. The exchange integrals were reconstructed with high certainty at temperatures just above the phase transitions, where the configurations are strongly determined by the interaction parameters. The uncertainty of the reconstruction increases with increasing temperature as the spin configurations become increasingly random. Still, the reconstruction is possible for temperatures an order of magnitude greater than the transition temperature before the uncertainty impedes the interpretation.

For the next two cases (J1 = −0.75, J2 = −0.35 and J1 = +0.75, J2 = −0.35), the nearest and next nearest neighbor interactions are incompatible and lead to frustrated ground states; the ground states of these systems are not well defined. The proposed reconstruction technique works even in this scenario of no well-defined ground state. Since there are no apparent phase transitions in these regions, the reconstruction works even at low temperatures (Fig. 5). The high temperature behavior for these cases is similar to that described for the first two cases.

To complement the analysis of the uncertainties via the corresponding point estimates of mean and dispersion and the marginalized posterior distribution for individual exchange integral values, we also analyze the corresponding joint distributions to perform uncertainty quantification (UQ) in the 2D parameter space. In the large-sample limit, where each sample corresponds to an individual local configuration, the likelihood function attains the shape of a normal distribution centered around the limiting distribution with variance equal to 1/4 as a result of the variance-equalizing transformation built into the probability space with metric s. The log-likelihood of a sample at a distance s from the limiting probability distribution is then proportional to −2ns², where n is the number of samples (individual local configurations).50,51 As the distance s increases, the log-likelihood decreases, and it does so with a linear dependence on the number of samples n. The likelihood over the exchange integral space is shown in Figs. 5(c) and 5(d) at different points of reduced temperature. The increase in uncertainty with temperature can be visualized as the increase in the spread of the likelihood.
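Converting the grid of statistical distances into a normalized likelihood surface follows directly from log L ∝ −2ns²; the small helper below is our own, with names chosen for illustration.

```python
import numpy as np

def likelihood_map(distances, n):
    """Normalized likelihood over a parameter grid from the large-sample
    result log L proportional to -2 n s^2; `distances` holds the
    statistical distances s at each (J1, J2) grid point."""
    logL = -2.0 * n * np.asarray(distances) ** 2
    L = np.exp(logL - logL.max())  # subtract max for numerical stability
    return L / L.sum()
```

For a large number of sampled configurations, the likelihood concentrates sharply on the grid point closest in statistical distance, which is exactly the tightening of the surfaces seen at low temperature in Figs. 5(c) and 5(d).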

In this section, we explore the overdetermined case where the pseudo-experimental (base) data are governed only by the interactions between the nearest neighbors (J1 = 0.75) but are reconstructed using simulations incorporating both nearest and next nearest neighbors (J1 and J2). This case is equivalent to the traditional 2D Ising model with NN interactions only. The reconstructions follow a similar trend to that discussed in the previous cases [Fig. 6(a)]. Reconstruction works with very high certainty just above the transition temperature, and the uncertainty increases with temperature. Interestingly, for this case, the zero value of J2 can be reliably determined.

We further explore the reconstruction of an underdetermined model [Fig. 6(b)] where the experimental image is guided by both nearest and next nearest neighbors (J1 = + 0.75, J2 = + 0.35) while it is modeled using the simulations with only nearest neighbor interactions (J1). The model attempts to fit the data by overestimating the exchange integral, but the estimated value is different at different temperatures. It can be inferred from the reconstruction curves that a model with only the nearest neighbor interactions will not be able to reproduce the experimental results that are governed by both nearest and next-nearest neighbors. The likelihood for both these cases in the exchange integral space is shown in Figs. 7(a) and 7(b), respectively.

FIG. 6.

Reconstructed values (mean, μ) of the exchange integrals and their uncertainties (standard deviation, σ) in the (a) overdetermined and (b) underdetermined cases as a function of reduced temperature.

FIG. 7.

Likelihoods of the reconstructed values in the exchange integrals space of (a) overdetermined and (b) underdetermined cases. The values of the exchange integrals used to produce the pseudo-experimental data in the overdetermined case are shown by cross in (a) and the values of exchange integrals used in the underdetermined case are J1 = +0.55 and J2 = +0.35.


Reconstruction until this section has been performed on a square lattice in which the values of the exchange integrals in the pseudo-experimental dataset were constant over the entire lattice. In this section, we explore the reconstruction when bond disorder is introduced. To execute this, every lattice site is associated with a different value of the exchange integral J + ΔJ, where ΔJ is normally distributed with zero mean and a specified standard deviation. The disorder is added to both exchange integrals J1 and J2. The standard deviation is referred to as disorder for the rest of this section. The bond disorder was only introduced in the pseudo-experimental case, while the simulated cases were run at ΔJ = 0.
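The disorder protocol can be sketched as follows, assuming the disorder level is specified as a fraction of the nominal exchange integral (so 10% disorder corresponds to a standard deviation of 0.1|J|); the function and its signature are our illustration.

```python
import numpy as np

def disordered_couplings(J1, J2, disorder, shape, seed=0):
    """Per-site exchange integrals J + dJ with Gaussian dJ of zero mean
    and standard deviation `disorder * |J|`. Returns the NN and NNN
    coupling arrays used to generate the disordered pseudo-experiment."""
    rng = np.random.default_rng(seed)
    dJ1 = rng.normal(0.0, disorder * abs(J1), size=shape)
    dJ2 = rng.normal(0.0, disorder * abs(J2), size=shape)
    return J1 + dJ1, J2 + dJ2
```

The reconstruction itself is unchanged: the disordered couplings enter only the generation of the pseudo-experimental configurations, which are then matched against disorder-free simulations.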

Reconstructions are performed for disorder levels of 10%, 100%, and 1000% of the values of exchange integrals, and the results are shown in Fig. 8. The reconstruction works well at 10%. With 100% disorder, the reconstruction starts to fail at low temperatures since at these temperatures the values of exchange integrals have a very strong effect on configuration. The configurations get more random with the increase in temperature, and this is represented by the increase in uncertainty in reconstructions. At 1000% disorder, the reconstruction completely fails, and the proposed methods cannot be applied at these high levels of disorder.

FIG. 8.

Reconstructed values (mean, μ) of the exchange integrals and their uncertainties (standard deviation, σ) corresponding to the (a) nearest neighbor interactions and (b) next nearest neighbor interactions as a function of reduced temperature and noise.


The likelihood over the exchange integral space is shown in Figs. 9(a)–9(c) at different points of reduced temperature. Note that for temperatures close to the transition temperature and low disorder, the recovered exchange integrals are strongly correlated, suggesting that the system behavior can be well described by single values of the exchange integrals. For higher disorder levels, reconstruction is impossible.

FIG. 9.

Likelihoods of the reconstructed values in the exchange integrals space at disorder levels of (a) 10%, (b) 100%, and (c) 1000% at different points of reduced temperature. The values of the exchange integrals used to produce the pseudo-experimental data are marked by the cross in each plot.


Above the transition, the reconstruction yields unbiased estimates of exchange integrals for low and intermediate disorder, whereas for high disorder the reconstructed integrals are below the true value. This trend continues for higher temperatures. Finally, in all cases, the temperature increase yields broader distribution of posterior values, rendering reconstruction less reliable. Furthermore, the effects of lattice size on reconstruction and uncertainty quantification are discussed in the supplementary material.

To summarize, here we have developed an approach for the reconstruction of the microscopic parameters of a lattice model with two competing interactions from observations of the microscopic degrees of freedom, and have established the uncertainties and reliability of such analysis in a broad parameter-temperature space. Below the phase transition, the reconstruction is impossible and can lead to biased estimates of the parameters. However, these situations can be readily identified from experiment by the presence of large single-phase regions and the correspondingly extremely poor sampling of the configuration space. At the same time, reconstruction is possible well above the phase transition (by 1–2 orders of magnitude) and in regions of the parameter space where the macroscopic ground state of the system is poorly defined due to frustrated interactions. Similarly, the reconstruction is robust with respect to frozen disorder. Interestingly, bond disorder tends to affect reconstruction close to the transition temperature, leading to biased estimates of the exchange integrals, whereas at high temperatures unbiased estimates with large uncertainties emerge.

As an ancillary task, we have developed a machine learning approach based on histogram clustering to predict phase diagrams efficiently using a reduced descriptor space. We illustrated that the phase evolution in the vicinity of regions comprising the frustrated phase gives rise to a hierarchy of local configurations.
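A toy version of such histogram clustering might look as follows: each lattice snapshot is reduced to a histogram of a local descriptor (here, the number of aligned nearest neighbors per site), and the histograms are grouped by k-means. The 16×16 snapshots, the choice of descriptor, and the deterministic seeding of one snapshot per candidate phase are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_histogram(s):
    # per-site count of aligned nearest neighbors (0..4), periodic BCs,
    # normalized into a 5-bin histogram over the lattice
    nn = sum(np.roll(s, d, ax) for d, ax in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    aligned = (s * nn + 4) / 2  # maps s_i * h_i in {-4..4} to counts {0..4}
    return np.bincount(aligned.astype(int).ravel(), minlength=5) / s.size

def kmeans(X, seeds, iters=20):
    # minimal Lloyd iterations; a practical version would use k-means++ seeding
    C = X[seeds].copy()
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for c in range(len(C)):
            if (lab == c).any():
                C[c] = X[lab == c].mean(0)
    return lab

# synthetic "phases": ferromagnetic, checkerboard (antiferro), and random snapshots
chk = np.indices((16, 16)).sum(0) % 2 * 2.0 - 1.0
ferro = [np.ones((16, 16)) for _ in range(5)]
anti = [chk.copy() for _ in range(5)]
para = [rng.choice([-1.0, 1.0], size=(16, 16)) for _ in range(5)]

X = np.array([local_histogram(s) for s in ferro + anti + para])
labels = kmeans(X, seeds=[0, 5, 10])
print(labels)
```

The three groups of snapshots produce clearly separated descriptor histograms and land in three distinct clusters, which is the essence of mapping a phase diagram from a reduced descriptor space.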

The explored approach can be applied to traditionally complex problems in condensed matter physics such as ferroelectric relaxors and morphotropic phase boundary systems, spin and cluster glasses, and quantum systems once the local descriptors linked to the relevant physical behaviors are known. Correspondingly, the role that the precision of the determined local descriptors plays in parameter reconstruction becomes of interest, necessitating the development of appropriate Bayesian frameworks to link the experimental data and reconstructions.

See the supplementary material for discussion of the effect of lattice size on reconstruction and uncertainty quantification.

This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division (M.V., L.V., S.V.K., and R.K.V.). A portion of this research was performed and partially supported (M.Z.) at the Center for Nanophase Materials Sciences (CNMS), a U.S. Department of Energy Office of Science User Facility.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. D. A. Chenet, O. B. Aslan, P. Y. Huang, C. Fan, A. M. van der Zande, T. F. Heinz, and J. C. Hone, Nano Lett. 15(9), 5667–5672 (2015).
2. K.-A. N. Duerloo, Y. Li, and E. J. Reed, Nat. Commun. 5, 4214 (2014).
3. Y. Kohama, H. Ishikawa, A. Matsuo, K. Kindo, N. Shannon, and Z. Hiroi, Proc. Natl. Acad. Sci. U.S.A. 116(22), 10686 (2019).
4. L. D. Landau and E. M. Lifshitz, Statistical Physics (Elsevier Science, 2013).
5. A. Kitaev, Ann. Phys. 321(1), 2–111 (2006).
6. S. Cocco, R. Monasson, L. Posani, S. Rosay, and J. Tubiana, Physica A 504, 45–76 (2018).
7. S. N. Datta and S. Hansda, Chem. Phys. Lett. 621, 102–108 (2015).
8. D. Chandler and D. Wu, Introduction to Modern Statistical Mechanics (Oxford University Press, 1987).
9. J. Carrasquilla and R. G. Melko, Nat. Phys. 13(5), 431–434 (2017).
10. K. Ch'ng, J. Carrasquilla, R. G. Melko, and E. Khatami, Phys. Rev. X 7(3), 031038 (2017).
11. E. P. L. van Nieuwenburg, Y.-H. Liu, and S. D. Huber, Nat. Phys. 13(5), 435–439 (2017).
12. A. L. Ferguson, J. Phys. Condens. Matter 30(4), 043002 (2018).
13. B. S. Rem, N. Käming, M. Tarnowski, L. Asteria, N. Fläschner, C. Becker, K. Sengstock, and C. Weitenberg, Nat. Phys. 15(9), 917–920 (2019).
14. A. Bohrdt, C. S. Chiu, G. Ji, M. Xu, D. Greif, M. Greiner, E. Demler, F. Grusdt, and M. Knap, Nat. Phys. 15(9), 921–924 (2019).
15. J. F. Rodriguez-Nieva and M. S. Scheurer, Nat. Phys. 15(8), 790–795 (2019).
16. Y. Ming, C.-T. Lin, S. D. Bartlett, and W.-W. Zhang, npj Comput. Mater. 5(1), 88 (2019).
17. E. Aurell and M. Ekeberg, Phys. Rev. Lett. 108(9), 090201 (2012).
18. C. Donner and M. Opper, Phys. Rev. E 96(6), 062104 (2017).
19. H.-L. Zeng, M. Alava, E. Aurell, J. Hertz, and Y. Roudi, Phys. Rev. Lett. 110(21), 210601 (2013).
20. A. Y. Lokhov, M. Vuffray, S. Misra, and M. Chertkov, Sci. Adv. 4(3), e1700791 (2018).
21. H. C. Nguyen, R. Zecchina, and J. Berg, Adv. Phys. 66(3), 197–261 (2017).
22. S. J. Pennycook and P. D. Nellist, Scanning Transmission Electron Microscopy: Imaging and Analysis (Springer, New York, 2011).
23. O. L. Krivanek, M. F. Chisholm, V. Nicolosi, T. J. Pennycook, G. J. Corbin, N. Dellby, M. F. Murfitt, C. S. Own, Z. S. Szilagyi, M. P. Oxley, S. T. Pantelides, and S. J. Pennycook, Nature 464(7288), 571–574 (2010).
24. P. Y. Huang, S. Kurasch, J. S. Alden, A. Shekhawat, A. A. Alemi, P. L. McEuen, J. P. Sethna, U. Kaiser, and D. A. Muller, Science 342(6155), 224–227 (2013).
25. C. L. Jia, V. Nagarajan, J. Q. He, L. Houben, T. Zhao, R. Ramesh, K. Urban, and R. Waser, Nat. Mater. 6(1), 64–69 (2007).
26. C. L. Jia, S. B. Mi, K. Urban, I. Vrejoiu, M. Alexe, and D. Hesse, Phys. Rev. Lett. 102(11), 117061 (2009).
27. M. F. Chisholm, W. D. Luo, M. P. Oxley, S. T. Pantelides, and H. N. Lee, Phys. Rev. Lett. 105(19), 197602 (2010).
28. A. K. Yadav, C. T. Nelson, S. L. Hsu, Z. Hong, J. D. Clarkson, C. M. Schlepuetz, A. R. Damodaran, P. Shafer, E. Arenholz, L. R. Dedon, D. Chen, A. Vishwanath, A. M. Minor, L. Q. Chen, J. F. Scott, L. W. Martin, and R. Ramesh, Nature 530(7589), 198 (2016).
29. S. Das, Y. L. Tang, Z. Hong, M. A. P. Goncalves, M. R. McCarter, C. Klewe, K. X. Nguyen, F. Gomez-Ortiz, P. Shafer, E. Arenholz, V. A. Stoica, S. L. Hsu, B. Wang, C. Ophus, J. F. Liu, C. T. Nelson, S. Saremi, B. Prasad, A. B. Mei, D. G. Schlom, J. Iniguez, P. Garcia-Fernandez, D. A. Muller, L. Q. Chen, J. Junquera, L. W. Martin, and R. Ramesh, Nature 568(7752), 368 (2019).
30. C. L. Jia, S. B. Mi, M. Faley, U. Poppe, J. Schubert, and K. Urban, Phys. Rev. B 79(8), 081405 (2009).
31. A. Borisevich, O. S. Ovchinnikov, H. J. Chang, M. P. Oxley, P. Yu, J. Seidel, E. A. Eliseev, A. N. Morozovska, R. Ramesh, S. J. Pennycook, and S. V. Kalinin, ACS Nano 4(10), 6071–6079 (2010).
32. A. Y. Borisevich, H. J. Chang, M. Huijben, M. P. Oxley, S. Okamoto, M. K. Niranjan, J. D. Burton, E. Y. Tsymbal, Y. H. Chu, P. Yu, R. Ramesh, S. V. Kalinin, and S. J. Pennycook, Phys. Rev. Lett. 105(8), 087204 (2010).
33. Q. He, R. Ishikawa, A. R. Lupini, L. Qiao, E. J. Moon, O. Ovchinnikov, S. J. May, M. D. Biegalski, and A. Y. Borisevich, ACS Nano 9(8), 8412–8419 (2015).
34. Y. M. Kim, J. He, M. D. Biegalski, H. Ambaye, V. Lauter, H. M. Christen, S. T. Pantelides, S. J. Pennycook, S. V. Kalinin, and A. Y. Borisevich, Nat. Mater. 11(10), 888–894 (2012).
35. E. A. Eliseev, S. V. Kalinin, Y. J. Gu, M. D. Glinchuk, V. Khist, A. Borisevich, V. Gopalan, L. Q. Chen, and A. N. Morozovska, Phys. Rev. B 88(22), 224105 (2013).
36. A. Y. Borisevich, A. N. Morozovska, Y. M. Kim, D. Leonard, M. P. Oxley, M. D. Biegalski, E. A. Eliseev, and S. V. Kalinin, Phys. Rev. Lett. 109(6), 065702 (2012).
37. Q. Li, C. T. Nelson, S. L. Hsu, A. R. Damodaran, L. L. Li, A. K. Yadav, M. McCarter, L. W. Martin, R. Ramesh, and S. V. Kalinin, Nat. Commun. 8, 1468 (2017).
38. O. S. Ovchinnikov, S. Jesse, P. Bintacchit, S. Trolier-McKinstry, and S. V. Kalinin, Phys. Rev. Lett. 103(15), 157203 (2009).
39. L. Vlcek, S. Yang, M. Ziatdinov, S. Kalinin, and R. Vasudevan, Microsc. Microanal. 25(S2), 130–131 (2019).
40. L. Vlcek, A. Maksov, M. Pan, R. K. Vasudevan, and S. V. Kalinin, ACS Nano 11(10), 10313–10320 (2017).
41. L. Vlcek, R. K. Vasudevan, S. Jesse, and S. V. Kalinin, J. Chem. Theory Comput. 13(11), 5179–5194 (2017).
42. L. Vlcek, W. W. Sun, and P. R. C. Kent, J. Chem. Phys. 147(16), 161713 (2017).
43. L. Vlcek, M. Ziatdinov, A. Maksov, A. Tselev, A. P. Baddorf, S. V. Kalinin, and R. K. Vasudevan, ACS Nano 13(1), 718–727 (2019).
44. L. Vlcek, R. K. Vasudevan, S. Jesse, and S. V. Kalinin, J. Chem. Theory Comput. 13, 5179–5194 (2017).
45. S. J. Wetzel, Phys. Rev. E 96(2), 022140 (2017).
46. J. M. Sanchez, F. Ducastelle, and D. Gratias, Physica A 128(1), 334–350 (1984).
47. W. Hu, R. R. P. Singh, and R. T. Scalettar, Phys. Rev. E 95(6), 062122 (2017).
48. A. Canabarro, F. F. Fanchini, A. L. Malvezzi, R. Pereira, and R. Chaves, Phys. Rev. B 100(4), 045129 (2019).
49. G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning: With Applications in R (Springer New York, 2013).
50. W. K. Wootters, Phys. Rev. D 23(2), 357–362 (1981).
51. A. Bhattacharyya, Sankhyā 7(4), 401–406 (1946).