1 Nonlinear Time-Delayed Thermoacoustic Model

We investigate the acoustics of a resonator excited by a heat source, which is a monopole source of sound. The main assumptions of the reduced-order model are [7]: (i) the acoustics evolve on top of a uniform mean flow; (ii) the mean-flow Mach number is negligible, therefore the acoustics are linear and no flow inhomogeneities are convected; (iii) the flow is isentropic except at the heat-source location; (iv) the length of the duct is sufficiently large compared with its diameter, such that the cut-on frequency is high, i.e., only longitudinal acoustics are considered; (v) the heat source is compact, i.e., it excites the acoustics at a specific location, \(x_f\); (vi) the boundary conditions are ideal and open-ended, i.e., the acoustic pressure at the ends is zero. Under these assumptions, the non-dimensional momentum and energy equations read, respectively [5]

$$\begin{aligned} \frac{\partial u}{\partial t} + \frac{\partial p}{\partial x}&= 0, \end{aligned}$$
(1)
$$\begin{aligned} \frac{\partial p}{\partial t} + \frac{\partial u}{\partial x}+\zeta p - \dot{\mathcal {Q}}\delta _D(x-x_f)&= 0, \end{aligned}$$
(2)

where u is the acoustic velocity; p is the acoustic pressure; t is the time; x is the axial coordinate of the duct; \(\delta _D(x-x_f)\) is the Dirac delta distribution at the heat source location, \(x_f\); \(\zeta \) is the damping factor, which models the acoustic energy radiation from the boundaries and thermo-viscous losses; and \(\dot{\mathcal {Q}}\) is the heat release rate (or, simply, heat release). The heat release, \(\dot{\mathcal {Q}}\), is modelled by a nonlinear time delayed law [10]

$$\begin{aligned} \dot{\mathcal {Q}} \equiv \beta \text {Poly}(u_f(t-\tau )), \end{aligned}$$
(3)

where \(\tau \) is the time delay; \(\beta \) is the strength of the heat source; and \(\text {Poly}({u}({t}))=a_1{u}^5({t})+\cdots +a_5{u}({t})\). The time delay and strength of the heat source are the two key parameters of a reduced-order model for the flame [3]. Physically, \(\tau \) is the time that the heat release takes to respond to a velocity perturbation at the flame’s base; while \(\beta \) provides the strength of the coupling between the heat source and the acoustics. Velocity, pressure, length and time are nondimensionalized as in [5]. The set of nonlinear time-delayed partial differential Eqs. (1)–(2) provides a physics-based reduced-order model for the nonlinear thermoacoustic dynamics. Owing to the assumptions we made, the model can only qualitatively replicate the nonlinear thermoacoustic behaviour. In this paper, we propose a Lagrangian method to make a qualitative model quantitatively more accurate any time that reference data can be assimilated. Such data can come, for example, from sensors in experiments or time series from high-fidelity simulations.
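For illustration, a minimal sketch (in Python/NumPy) of the heat-release law (3) could read as follows; the polynomial coefficients and \(\beta =1\) are the values listed in Sect. 4, and the function assumes the delayed velocity at the flame, \(u_f(t-\tau )\), is available as a scalar.

```python
import numpy as np

# Polynomial heat-release coefficients a_1 ... a_5 (values from Sect. 4)
A = np.array([-0.012, 0.059, -0.044, -0.108, 0.5])

def heat_release(u_f_delayed, beta=1.0, a=A):
    """Nonlinear time-delayed heat release, Eq. (3):
    Q_dot = beta * Poly(u_f(t - tau)),
    with Poly(u) = a_1*u^5 + a_2*u^4 + ... + a_5*u."""
    powers = np.arange(5, 0, -1)              # exponents 5, 4, 3, 2, 1
    return beta * np.sum(a * u_f_delayed**powers)
```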

1.1 Numerical Discretization with Acoustic Modes

We use separation of variables to decouple the time and spatial dependencies of the solution. The spatial basis onto which the solution is projected consists of the natural acoustic modes. When decomposed on the natural acoustic eigenfunctions, the acoustic velocity and pressure read, respectively

$$\begin{aligned}&u(x,t) = \sum _{j=1}^{N_{m}} \eta _j(t)\cos (j\pi x), \end{aligned}$$
(4)
$$\begin{aligned}&p(x,t) = \sum _{j=1}^{N_{m}} \left( \frac{\dot{\eta }_j(t)}{j\pi }\right) \sin (j\pi x), \end{aligned}$$
(5)

where the relationship between \(\eta _j\) and \(\dot{\eta }_j\) has yet to be found and \(N_m\) is the number of acoustic modes considered. This discretization is sometimes known as the Galerkin discretization [12]. The state of the system is provided by the amplitudes of the Galerkin modes that represent velocity, \(\eta _j\), and those that represent pressure, \(\dot{\eta }_j/(j\pi )\). The damping has modal components, \(\zeta _j=C_1j^2+C_2\sqrt{j}\), where \(C_1\) and \(C_2\) are damping coefficients [1, 5, 6, 8, 9]. In vector notation, \(\varvec{\eta } \equiv (\eta _1,\cdots , \eta _{N_m})^{\text {T}}\) and \(\varvec{\dot{\eta }} \equiv (\dot{\eta }_1/\pi , \cdots , \dot{\eta }_{N_m}/(N_m\pi ))^{\text {T}}\). The state vector of the discretized system is the column vector \(\mathbf x \equiv (\varvec{\eta }; \varvec{\dot{\eta }})\). The governing equations of the Galerkin modes are found by substituting (4)–(5) into (1)–(3). Equation (2) is then multiplied by \(\sin (k\pi x)\) and integrated over the domain \(x = [0, 1]\) (projection). In so doing, the spatial dependency is removed and the Galerkin amplitudes are governed by a set of nonlinear time-delayed differential equations

$$\begin{aligned} \text {F}_{1j}&\equiv \frac{d}{dt} \left( \eta _j\right) - j\pi \left( \frac{\dot{\eta }_j}{j\pi }\right) = 0&t>0,\end{aligned}$$
(6)
$$\begin{aligned} \text {F}_{2j}&\equiv \frac{d}{dt}\left( \frac{\dot{\eta }_j}{j\pi }\right) + j\pi \eta _j + \zeta _j\left( \frac{\dot{\eta }_j}{j \pi }\right) = 0&t \in [0,\tau ), \end{aligned}$$
(7)
$$\begin{aligned} \text {F}_{2j}&\equiv \frac{d}{dt}\left( \frac{\dot{\eta }_j}{j\pi }\right) + j\pi \eta _j + \zeta _j\left( \frac{\dot{\eta }_j}{j \pi }\right) + 2 s_j \beta \ \text {Poly}(u_f(t-\tau )) = 0&t \in [\tau ,T], \end{aligned}$$
(8)

where \(u_f(t-\tau ) = \sum _{j=1}^{N_{m}} \eta _j(t-\tau )c_j\); \(s_j\equiv \sin (j\pi x_f)\) and \(c_j\equiv \cos (j\pi x_f)\). The labels \(\text {F}_{\bullet }\) are introduced for the definition of the Lagrangian (Sect. 3.2). Because the Galerkin expansions (4)–(5) are truncated at the \(N_m\)-th mode, we obtain a reduced-order model of the original thermoacoustic system (1)–(2) with \(2N_m\) degrees of freedom (6)–(8). The reduced-order model is physically a set of \(2N_m\) time-delayed oscillators, which are nonlinearly coupled through the heat release term. In the following sections, we employ 4D-Var data assimilation to improve the accuracy of such a reduced-order model.
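To make the discretization concrete, the following is a minimal sketch (in Python/NumPy) of the modal damping, the Galerkin reconstruction (4)–(5), and an explicit time marching of the delayed oscillator equations (6)–(8) with a history buffer for \(u_f(t-\tau )\). The forward-Euler scheme and the step size are illustrative choices, not the time-marching scheme used in the paper, and `heat_release(u)` is assumed to return \(\beta \,\text {Poly}(u)\) as in the earlier sketch.

```python
import numpy as np

def modal_damping(N_m, C1=0.05, C2=0.01):
    """Modal damping zeta_j = C1*j^2 + C2*sqrt(j), j = 1..N_m."""
    j = np.arange(1, N_m + 1)
    return C1 * j**2 + C2 * np.sqrt(j)

def reconstruct(eta, eta_dot_s, x):
    """Acoustic velocity and pressure at locations x (array), Eqs. (4)-(5).
    `eta_dot_s[j-1]` stores eta_dot_j/(j*pi)."""
    j = np.arange(1, len(eta) + 1)[:, None]            # modes as a column
    u = np.sum(eta[:, None] * np.cos(j * np.pi * x), axis=0)
    p = np.sum(eta_dot_s[:, None] * np.sin(j * np.pi * x), axis=0)
    return u, p

def galerkin_rhs(eta, eta_dot_s, u_f_delayed, zeta, x_f, heat_release):
    """Right-hand side of Eqs. (6)-(8); heat_release(u) returns beta*Poly(u)."""
    j = np.arange(1, len(eta) + 1)
    s_j = np.sin(j * np.pi * x_f)
    d_eta = j * np.pi * eta_dot_s                                   # Eq. (6)
    d_eta_dot_s = (-j * np.pi * eta - zeta * eta_dot_s
                   - 2.0 * s_j * heat_release(u_f_delayed))         # Eqs. (7)-(8)
    return d_eta, d_eta_dot_s

def integrate(eta0, eta_dot_s0, tau, T, dt, zeta, x_f, heat_release):
    """Explicit (forward-Euler) time marching with a buffer for u_f(t - tau)."""
    n_steps, n_delay = int(round(T / dt)), int(round(tau / dt))
    j = np.arange(1, len(eta0) + 1)
    c_j = np.cos(j * np.pi * x_f)
    eta, eta_dot_s = eta0.copy(), eta_dot_s0.copy()
    u_f_hist = [eta @ c_j]               # u_f(t) = sum_j eta_j(t) cos(j*pi*x_f)
    traj = [np.concatenate([eta, eta_dot_s])]
    for n in range(n_steps):
        # The delayed forcing is inactive before t = tau, cf. Eqs. (7) and (16).
        u_f_del = u_f_hist[n - n_delay] if n >= n_delay else 0.0
        d_eta, d_eds = galerkin_rhs(eta, eta_dot_s, u_f_del, zeta, x_f, heat_release)
        eta, eta_dot_s = eta + dt * d_eta, eta_dot_s + dt * d_eds
        u_f_hist.append(eta @ c_j)
        traj.append(np.concatenate([eta, eta_dot_s]))
    return np.array(traj)
```

For instance, `integrate(eta0, eta_dot_s0, tau=0.02, T=10.0, dt=1e-3, zeta=modal_damping(10), x_f=0.3, heat_release=heat_release)` would march ten modes forward with the parameter values of Sect. 4 (the values of `dt` and `T` here are illustrative).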

2 Data Assimilation as a Constrained Optimization Problem

The ingredients of data assimilation are (i) a reduced-order model to predict the amplitude of thermoacoustic oscillations, which provides the so-called background state vector \(\mathbf {x}^{bg}\) at any time, t (red thick line in Fig. 1); (ii) data from external observations, \(\mathbf {y}^i\), which is available from high-fidelity simulations or experiments at times \(t^i_{obs}\) (grey diamonds in Fig. 1); and (iii) an educated guess on the parameters of the system, which originates from past experience. The uncertainties on the background solution and observations are here assumed normal and unbiased. \(\mathbf {B}\) and \(\mathbf {R}\) are the background and observation covariance matrices, respectively, which need to be prescribed. For simplicity, we assume that \(\mathbf {B}\) and \(\mathbf {R}\) are diagonal with variances B and R (i.e., errors are statistically independent). A cost functional is defined to measure the statistical distance between the background predictions and the evidence from observations. First, we want the state of the system to be as close as possible to the observations. Second, we do not want the improved solution to be “too far” away from the background solution. This is because we trust that our reduced-order model provides a reasonable solution. Mathematically, these two requirements can be met, respectively, by minimizing the following cost functional

$$\begin{aligned} J&= \underbrace{\frac{1}{2}|| \mathbf x _0 - \mathbf x _0^{bg} ||_{\mathbf{B }}^2}_{J_{bg}} + \underbrace{\frac{1}{2}\sum _{i=1}^{N_{obs}} ||\mathbf {M}\mathbf x ^{i} - \mathbf y ^{i}||_{\mathbf{R }}^2}_{J_{obs}}\quad \text {over}\quad [0, T], \end{aligned}$$
(9)

where \(N_{obs}\) is the number of observations over the assimilation window [0, T]. \(\mathbf {M}\) is a linear measurement operator, which maps the state space to the observable space (see Eqs. (4)–(5)). Moreover, \(|| \mathbf x _0 - \mathbf x _0^{bg} ||_{\mathbf{B }}^2 \equiv \left( \mathbf x _0 - \mathbf x _0^{bg} \right) ^\text {T}\mathbf{B }^{-1}\left( \mathbf x _0 - \mathbf x _0^{bg} \right) \). Likewise, \(|| \mathbf {M}\mathbf {x}^i - \mathbf y ^i||_{\mathbf{R }}^2 \equiv \left( \mathbf {M}\mathbf {x}^i - \mathbf y ^i \right) ^\text {T}\mathbf{R }^{-1}\left( \mathbf {M}\mathbf {x}^i - \mathbf y ^i \right) \).
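A minimal sketch of the evaluation of (9) with scalar variances B and R, assuming the measurement operator \(\mathbf {M}\) is supplied as a matrix and the model states at the observation times have already been computed, could read:

```python
import numpy as np

def cost_4dvar(x0, x0_bg, M, x_traj_at_obs, y_obs, B, R):
    """Cost functional J = J_bg + J_obs of Eq. (9) with diagonal covariances.
    `x_traj_at_obs[i]` is the model state at observation time t_obs^i."""
    J_bg = 0.5 * np.sum((x0 - x0_bg)**2) / B
    J_obs = 0.0
    for x_i, y_i in zip(x_traj_at_obs, y_obs):
        r = M @ x_i - y_i                     # innovation at t_obs^i
        J_obs += 0.5 * np.sum(r**2) / R
    return J_bg + J_obs
```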

The objective of state estimation is to improve the prediction of the state, \(\mathbf x \), over the interval [0, T], by reinitializing the background initial conditions, \(\mathbf x _{0}^{bg}\), with optimal initial conditions. These optimal initial conditions are called analysis initial conditions, \(\mathbf x _0^{analysis}\), which are the minimizers of the cost functional (9). We start from a background knowledge of the model’s initial conditions, \(\mathbf x _0^{bg}\), which is provided by the reduced-order model when data is not assimilated. By integrating the system from \(\mathbf x _0^{bg}\), we obtain the red trajectory in Fig. 1, \(\mathbf x ^{bg}(t)\). The set of observations is assumed to be distributed over the assimilation window at some time instants. Pictorially, the analysis trajectory corresponds to the green thin line in Fig. 1, which minimizes the statistical distance from the background initial condition (magenta thick arrow) and from the observations (blue thin arrows). This algorithm is known as 4D-Var in weather forecasting [2]. State estimation enables the adaptive update of reduced-order thermoacoustic models whenever data is available.

Fig. 1. Pictorial representation of data assimilation. The background error, \(J_{bg}\), is proportional to the length of the magenta thick arrow, while the observation error, \(J_{obs}\), is proportional to the sum of the blue thin arrows. The vertical cyan line marks the end of the assimilation window, after which the forecast begins. (Color figure online)

3 Data Assimilation for Nonlinear Thermoacoustic Dynamics

We propose a set of cost functionals to perform data assimilation with the thermoacoustic model introduced in Sect. 1. We also introduce the formalism to perform Lagrangian optimization, in which adjoint equations enable the efficient calculation of the gradients of the thermoacoustic cost functionals with respect to the initial state.

3.1 Thermoacoustic Cost Functionals

Crucial to data assimilation is the definition of the cost functionals \(J_{bg}\) and \(J_{obs}\). Three physical thermoacoustic cost functionals are proposed and compared to reproduce different scenarios. For the background error

$$\begin{aligned} J_{bg}^{a}&= \frac{1}{2B} \left( p(0)-p(0)_{bg}\right) ^2, \end{aligned}$$
(10)
$$\begin{aligned} J_{bg}^{b}&= \frac{1}{2B}\sum _{j=1}^{N_{m}} \bigg \{ \left[ \left( \frac{\dot{\eta }_{j0}}{j\pi }\right) - \left( \frac{\dot{\eta }_{j0}}{j\pi }\right) _{bg}\right] \sin (j\pi x_m) \bigg \}^2, \ \end{aligned}$$
(11)
$$\begin{aligned} J_{bg}^{c}&= \frac{1}{2B}\sum _{j=1}^{N_{m}} \left[ \left( \frac{\dot{\eta }_{j0}}{j\pi }\right) - \left( \frac{\dot{\eta }_{j0}}{j\pi }\right) _{bg}\right] ^2 + \frac{1}{2B}\sum _{j=1}^{N_{m}} \left[ \eta _{j0} - \eta _{j0,bg}\right] ^2. \end{aligned}$$
(12)

For the observation error

$$\begin{aligned} J_{obs}^{a}&= \sum _{i=1}^{N_{obs}}J_{obs,i}^{a} = \frac{1}{2R}\sum _{i=1}^{N_{obs}} \left( p(t_{obs}^{i})-p^{(i)}_{obs}\right) ^2, \end{aligned}$$
(13)
$$\begin{aligned} J_{obs}^{b}&= \sum _{i=1}^{N_{obs}}J_{obs,i}^{b} = \frac{1}{2R}\sum _{i=1}^{N_{obs}}\sum _{j=1}^{N_{m}} \bigg \{ \left[ \left( \frac{\dot{\eta }_j(t_{obs}^{i})}{j\pi }\right) - \left( \frac{\dot{\eta }_j}{j\pi }\right) ^{(i)}_{obs}\right] \sin (j\pi x_m) \bigg \}^2 , \end{aligned}$$
(14)

where \(x_m\) is the location where the measurement is taken and \(t_{obs}^{i}\) is the instant at which the \(i\)-th observation is assimilated. On the one hand, by using \(J_{bg}^{a}\) and \(J_{obs}^{a}\) the analysis solution is optimized against the background pressure at \(t=0\) and the measured pressure at \(t=t_{obs}^i\), \(i=1,\dots ,N_{obs}\). Physically, this means that the acoustic pressure is both the model output, \(p(0)_{bg}\), and the observable from the sensors, \(p_{obs}^i\). On the other hand, \(J_{bg}^{b}\) and \(J_{obs}^{b}\) constrain the pressure modes. Physically, this means that every pressure mode provided by the background solution is a model output, \(\left( \dot{\eta }_{j0}/(j\pi )\right) _{bg}\), and it is assumed that the modes of the acoustic pressure, \(\left( \dot{\eta }_j/(j\pi )\right) ^{(i)}_{obs}\), can be calculated from the sensors on the fly. For the background cost functional, we also define \(J_{bg}^c\) as a norm of the modes, which does not have a corresponding observation cost functional because the spatial dependency is not explicit.
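A minimal sketch of the two observation cost functionals (13)–(14), assuming the measured pressures and (for \(J_{obs}^b\)) the measured pressure modes are available as NumPy arrays, could read:

```python
import numpy as np

def J_obs_a(p_model, p_obs, R):
    """Eq. (13): penalize the pointwise pressure mismatch at x_m."""
    return 0.5 * np.sum((np.asarray(p_model) - np.asarray(p_obs))**2) / R

def J_obs_b(modes_model, modes_obs, x_m, R):
    """Eq. (14): penalize the mismatch of every pressure mode, each weighted
    by sin(j*pi*x_m). The arrays have shape (N_obs, N_m) and store
    eta_dot_j/(j*pi) at the observation times."""
    modes_model, modes_obs = np.asarray(modes_model), np.asarray(modes_obs)
    j = np.arange(1, modes_model.shape[1] + 1)
    w = np.sin(j * np.pi * x_m)
    diff = (modes_model - modes_obs) * w      # broadcast over observations
    return 0.5 * np.sum(diff**2) / R
```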

To attain a minimum of J, a necessary condition is that the gradient vanishes, i.e.,

$$\begin{aligned} \nabla _{\mathbf{x _0}}(J) = \nabla _{\mathbf{x _0}}(J_{bg}) + \sum _{i=1}^{N_{obs}} \nabla _{\mathbf{x _0}}(J_{obs,i})=0, \end{aligned}$$
(15)

where \(\nabla _{\mathbf{x _0}}\) is the gradient with respect to the initial conditions. There exists \(\mathbf x _0\) such that \(\nabla _{\mathbf{x _0}}(J) = 0\) because of the convexity of the cost functionals in the neighbourhood of a local extremum. To compute \(\nabla _{\mathbf{x _0}}(J_{bg})\) and \(\nabla _{\mathbf{x _0}}(J_{obs,i})\), we use the calculus of variations (Sect. 3.2). The Lagrange multipliers, also known as adjoint, dual, or co-state variables (Sect. 3.3), provide the gradient information with respect to the initial state.

3.2 Lagrangian of the Thermoacoustic System

The governing equations and their initial conditions are rewritten in the form of constraints, \(\text {F}\), which hold over time intervals, while \(\text {G}\) are the constraints that hold for a specific time only, i.e., \(t=t_0\). Together with Eqs. (6)–(8) and by defining the auxiliary variable \(\bar{\eta }(t)\equiv u_f(t-\tau )\), they read

$$\begin{aligned} \text {F}_{3\ }&\equiv \bar{\eta }(t) = 0,&t \in [0,\tau ) \end{aligned}$$
(16)
$$\begin{aligned} \text {F}_{3\ }&\equiv \bar{\eta }(t) - u_f(t-\tau ) = 0,&t \in [\tau ,T]. \end{aligned}$$
(17)

The constraints for the initial conditions read

$$\begin{aligned} \text {G}_{1j}&\equiv \eta _j(0) - \eta _{j0} = 0, \end{aligned}$$
(18)
$$\begin{aligned} \text {G}_{2j}&\equiv \left( \frac{\dot{\eta }_j(0)}{j\pi }\right) - \left( \frac{\dot{\eta }_{j0}}{j\pi }\right) = 0. \end{aligned}$$
(19)

By defining an inner product

$$\begin{aligned} \left[ a, b\right] =\frac{1}{T}\int \limits _0^T ab \ dt \end{aligned}$$
(20)

where a and b are arbitrary functions, the Lagrangian of the nonlinear system can be written as

$$\begin{aligned} \mathcal {L} \equiv J_{bg} + J_{obs,i} + \sum _{j=1}^{N_{m}} \mathcal {L}_j - \left[ \bar{\xi }(t),\text {F}_{3}\right] , \end{aligned}$$
(21)

where each \(\mathcal {L}_j\) is

$$\begin{aligned} \mathcal {L}_j \equiv - \left[ \frac{\xi _j}{j\pi },\text {F}_{1j}\right] - \left[ \nu _j,\text {F}_{2j}\right] - b_{1j}\text {G}_{1j} - b_{2j}\text {G}_{2j}, \end{aligned}$$
(22)

where \(\bar{\xi }\), \(\xi _j/j\pi \), \(\nu _j\) and \(b_{\bullet j}\) are the Lagrange multipliers, or adjoint variables, of the corresponding constraints. Because we wish to derive the adjoint equations for the cost functional \(J_{obs,i}\), we consider the time window to be \(T=t_{obs}^i\).
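For reference, the inner product (20), which defines the penalty terms \(\left[ \bullet ,\text {F}_{\bullet }\right] \) in (21)–(22), can be evaluated on a uniform time grid as in the following sketch (the trapezoidal quadrature is an illustrative choice):

```python
import numpy as np

def inner_product(a, b, T):
    """Discrete version of Eq. (20): [a, b] = (1/T) * int_0^T a*b dt,
    for samples of a and b on a uniform time grid over [0, T]."""
    f = np.asarray(a) * np.asarray(b)
    dt = T / (len(f) - 1)
    return np.sum(0.5 * (f[1:] + f[:-1])) * dt / T   # trapezoidal rule
```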

3.3 Adjoint Equations

We briefly report the steps to derive the evolution equations of the Lagrange multipliers (adjoint equations) [7]. First, the Lagrangian (21) is integrated by parts to make the dependence on the direct variables explicit. Secondly, the first variation is calculated by a Fréchet derivative

$$\begin{aligned} \left[ \frac{\partial \mathcal {L}}{\partial {\mathbf {x}}},\delta {\mathbf {x}}\right] \equiv \lim \limits _{\epsilon \rightarrow 0} \frac{\mathcal {L}({\mathbf {x}}+\epsilon \delta {\mathbf {x}})-\mathcal {L}({\mathbf {x}})}{\epsilon }. \end{aligned}$$
(23)

Thirdly, the derivatives of (21) are taken with respect to the initial condition of each variable of the system, \(\partial \mathcal {L}/\partial \mathbf x _0\). These expressions will be used later to compute the gradient. Finally, to find the set of Lagrange multipliers that characterizes an extremum of the Lagrangian, \(\mathcal {L}\), variations with respect to \(\delta \mathbf x \) are set to zero. The adjoint equations and their initial conditions are derived by setting variations of \(\delta \eta _j\), \(\delta \left( \dot{\eta }_j/(j\pi ) \right) \) and \(\delta \bar{\eta }\) to zero over [0, T].
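Although the adjoint equations are not written out explicitly here, a standard check of an adjoint-based gradient is a comparison against finite differences of the cost functional. A minimal sketch, assuming user-supplied callables `cost(x0)` and `adjoint_grad(x0)` (both hypothetical names), could read:

```python
import numpy as np

def check_gradient(cost, adjoint_grad, x0, eps=1e-6):
    """Compare the adjoint gradient with central finite differences of J."""
    g_adj = adjoint_grad(x0)
    g_fd = np.zeros_like(x0)
    for k in range(len(x0)):
        e = np.zeros_like(x0)
        e[k] = eps
        g_fd[k] = (cost(x0 + e) - cost(x0 - e)) / (2.0 * eps)
    rel_err = np.linalg.norm(g_adj - g_fd) / np.linalg.norm(g_fd)
    return g_fd, rel_err
```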

3.4 Gradient-Based Optimization

The optimization loop consists of the following steps:

1. Integrate the system forward from \(t=0\) to \(t=T\) from an initial state \(\mathbf x _0\);
2. Initialize the adjoint variables;
3. Evolve the adjoint variables backward from \(t=T\) to \(t=0\);
4. Evaluate the gradient using the adjoint variables at \(t=0\).

Once the gradient is numerically computed, the cost functional can be minimized via a gradient-based optimization loop. The conjugate gradient method [11] is used to update the initial conditions until the condition \(\nabla _\mathbf{x _0}(J)=0\) is attained to a relative numerical tolerance of \(10^{-4}\). By using a gradient-based approach, we find a local minimum of J. We verify that there is no other local minimum by computing \(J=J(\mathbf x _0)\) in the vicinity of \(\mathbf x _0^{analysis}\).
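A minimal sketch of the outer loop, with a conjugate-gradient routine from SciPy standing in for the implementation of [11] and with the same hypothetical callables `cost(x0)` and `adjoint_grad(x0)` as above, could read:

```python
from scipy.optimize import minimize

def assimilate(cost, adjoint_grad, x0_bg, gtol=1e-4):
    """4D-Var analysis step: minimize J starting from the background state.
    Note: `gtol` is an absolute tolerance on the gradient norm, whereas the
    paper uses a relative tolerance of 1e-4."""
    result = minimize(cost, x0_bg, jac=adjoint_grad, method="CG",
                      options={"gtol": gtol})
    return result.x          # analysis initial conditions x_0^analysis
```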

4 Results

We validate the data-assimilation algorithm by twin experiments: The true state solution, \(\mathbf x ^{true}(t)\), is produced by perturbing the unstable fixed point at the origin of the phase space; the background trajectory, \(\mathbf x ^{bg}(t)\), is obtained by perturbing each true mode initial condition with Gaussian error with variance \(B=0.005^2\); the \(i\)-th observation is produced by adding Gaussian error with variance \(R=0.005^2\) to \(\mathbf x ^{true}(t^i_{obs})\). The outcome of twin experiments is summarized by the error plots shown in Figs. 3 and 4. The cyan vertical line indicates the end of the assimilation window, the red thick line is the difference between the true pressure and the background pressure, and the green thin line is the difference between the true pressure and the analysis pressure. First, it is shown how the number of computed acoustic modes affects the solution of the system. Secondly, we investigate the effects that the different cost functionals have on the analysis solution. Finally, we discuss the effects that the number of observations has on the analysis.

The parameters we use are \(\beta =1\), \(\tau =0.02\), \(C_1=0.05\), \(C_2=0.01\) and \((a_1, a_2, a_3, a_4, a_5) = (-0.012,0.059,-0.044,-0.108,0.5)\) for the heat release term \(\dot{\mathcal {Q}}\). The position of the heat source is \(x_f=0.3\) and all the measurements are taken at \(x_m=0.8\).
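A minimal sketch of the twin-experiment setup with these values, assuming a forward solver provides the true trajectory at the observation times (a placeholder array keeps the sketch self-contained), could read:

```python
import numpy as np

rng = np.random.default_rng(0)             # the seed is arbitrary
B, R = 0.005**2, 0.005**2                  # background and observation variances
N_m, x_m = 10, 0.8
j = np.arange(1, N_m + 1)

def observe_pressure(state):
    """p(x_m) from the state vector (eta_1..eta_Nm, eta_dot_1/pi, ...)."""
    return np.sum(state[N_m:] * np.sin(j * np.pi * x_m))

# True initial condition: a small perturbation of the unstable fixed point at
# the origin of the phase space (the perturbation size is an illustrative choice).
x0_true = np.zeros(2 * N_m)
x0_true[0] = 1e-2

# Background initial condition: each true mode perturbed with Gaussian error.
x0_bg = x0_true + rng.normal(0.0, np.sqrt(B), size=2 * N_m)

# Synthetic observations: pressure of the true trajectory at the observation
# times, plus Gaussian noise. `x_true_at_obs` (shape N_obs x 2*N_m) would come
# from the forward solver; a placeholder array is used here.
x_true_at_obs = np.tile(x0_true, (100, 1))
y_obs = np.array([observe_pressure(x) for x in x_true_at_obs])
y_obs = y_obs + rng.normal(0.0, np.sqrt(R), size=y_obs.size)
```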

4.1 Remarks on the Thermoacoustic Nonlinear Dynamics

The thermoacoustic model is a set of \(2N_{m}\) nonlinearly coupled oscillators, which we initialize by imposing non-equilibrium initial conditions. We compare two solutions, using \(N_{m}=3\) and \(N_{m}=10\), in Fig. 2a and b, respectively. Higher modes are quickly damped out, thus, after a transient where strong nonlinear modal coupling occurs, the solution obtained with \(N_{m}=10\) is qualitatively similar to the solution obtained with \(N_{m}=3\). During the transient, if sufficient modes are computed, the dynamics are more unpredictable because of the intricate modal interaction. The twin experiments are performed with \(N_m=10\), which provides a more accurate solution. As shown in the following, state estimation is markedly affected by whether we observe the system during the transient or at regime.

Effect of the Observation Error. We can simulate two main scenarios, depending on the choice of \(J_{obs}\) (Sect. 3.1). Figure 3a is obtained using \(J_{obs}^a\), i.e., by modelling observations of the pressure only. The analysis pressure error slightly deviates from zero in the assimilation window. When the forecast window starts, the analysis suddenly approaches the background again. Figure 3b is obtained using \(J_{obs}^b\), therefore the observations contain information about every pressure mode. The forecast quality is considerably enhanced. The assimilation window is \(T_{as}=0.4\), thus, the observations are obtained during the transient, which lasts up to \(t\approx 2\), where the dynamics are more unpredictable due to nonlinear interactions between modes. As we will show in the next subsection, increasing \(N_{obs}\) is not an effective strategy to improve the forecast during the transient when the pressure is observed.

Effect of the Background Error. From a numerical standpoint, the background error, \(J_{bg}\), acts as an observation at \(t=0\). The analysis trajectory is an optimal blend of the information from the measurements and from the background model output. Therefore, we emphasize that the outcome of twin experiments is improved if the cost functionals of the background knowledge and observations are consistent. In the present framework, this means that \(J_{bg}^a\) should be used with \(J_{obs}^a\) and \(J_{bg}^b\) should be used with \(J_{obs}^b\). On the one hand, if the choice of cost functionals favours the background knowledge (e.g. using \(J_{bg}^b\) or \(J_{bg}^c\) together with \(J_{obs}^a\)), the analysis trajectory will be closer to the background. On the other hand, if the choice favours the observations (e.g. using \(J_{obs}^b\) with \(J_{bg}^a\)), the analysis trajectory will be closer to the observations.

Effect of the Number of Observations. Generally speaking, the higher the number of observations, the closer the optimal solution is to the true solution. This can be deduced by inspection of Fig. 4a and b. The value of \(N_{obs}\) is increased from 50 to 250 over an assimilation window of 2.5 time units (the observations are not shown in the figures), resulting in a smaller error amplitude when more observations are available.

Fig. 2. Time series of the pressure, p, and velocity, u, evaluated at \(x_m=0.8\) using (a) \(N_{m}=3\) and (b) \(N_{m}=10\). When 10 modes are computed, a transient region can be identified for \(t\lesssim 2\), which is characterized by irregular fluctuations due to nonlinear modal coupling.

Fig. 3. Time series of the background and analysis acoustic pressure deviation from the true state (normalized with the true acoustic pressure at \(t=0\)). Both twin experiments are performed with \(N_{obs}=100\) (the observations are not shown). The cost functional measures the (a) pressure (\(J_{obs}^a\)) and (b) pressure modes (\(J_{obs}^b\)). (Color figure online)

Fig. 4. Error plots of two different twin experiments. The assimilation window is \(T_{as}=2.5\) time units and the observation error is measured using \(J_{obs}^a\) in both cases. The choice of \(T_{as}\) implies that the system is observed also at regime. (a) \(N_{obs}=50\) and (b) \(N_{obs}=250\). (Color figure online)

However, it is possible that increasing \(N_{obs}\) will not result in a better state estimation. This happens if we measure the pressure during the transient, that is, using \(J_{obs}^a\) and \(T_{as} < 2\). In the transient, the measured pressure is a combination of all modes. Therefore, the same pressure level can be produced by different combinations of modes; hence, knowing only the pressure (i.e., using \(J_{obs}^a\)) does not provide enough information to determine the state of the system. Under these circumstances, the forecast quality remains poor, as shown in Fig. 3a, regardless of the number of observations. When the assimilation window is extended up to 2.5 time units, as shown in Fig. 4, the pressure is observed also at regime, that is in \(t\in [2,2.5]\), approximately. In this interval, the measured pressure signal is produced chiefly by the first three modes because higher modes are damped out. Therefore, the measured pressure becomes more informative about the state of the system. At regime, as intuitively expected, increasing the observation frequency produces a better forecast.

We conclude that having poor information about the system’s state cannot simply be compensated for by increasing the number of observations. Given a number of observations and their distribution in time, the synergy between an appropriate cost functional and the recognition of the type of dynamics (transient vs. regime) is the key to successful data assimilation.

5 Conclusions and Future Work

Preliminary thermoacoustic design is based on simplified and computationally cheap reduced-order models that capture the inception of thermoacoustic instabilities (linear analysis) and their saturation to finite amplitude oscillations (nonlinear analysis). We propose a Lagrangian method to make a qualitative reduced-order model quantitatively more accurate any time that reference data can be assimilated. Such data can come, for example, from sensors in experiments or time series from high-fidelity simulations. To test the method we perform a series of twin experiments with the thermoacoustic model of a resonator excited by a heat source (horizontal Rijke tube). When sufficient modes are computed, a clear distinction emerges between a transient state and the state at regime. The former is characterized by irregular dynamics due to the interaction between all modes, while at regime the dynamics are chiefly dominated by the first three modes. We find that, at regime, it is possible to enhance the forecast by assimilating data about the pressure. As intuitively expected, the higher the number of observations, the better the forecast accuracy. While testing the effectiveness of data assimilation during the transient, we find that it is not possible to improve the forecast by measuring the pressure only. Moreover, the quality of the forecast remains poor regardless of the number of observations. Therefore, we propose a more effective cost functional, which takes into consideration the spectral content of the measured signal to enable a successful state estimation also during the transient.

In state estimation, we implicitly assume that the parameters are correct. However, this is rarely the case in thermoacoustics, where the parameters are uncertain and need to be optimally calibrated. Ongoing work includes simultaneous parameter and state estimation using Lagrangian optimization with state augmentation. This work opens up new possibilities for on-the-fly optimal calibration and state estimation of reduced-order models in thermoacoustics for applications in propulsion and power generation.