Article

Optimizing the Energy Efficiency of Unreliable Memories for Quantized Kalman Filtering

1 IMT Atlantique, Lab-STICC, CNRS UMR 6285, 29238 Brest, France
2 Department of Electrical Engineering, École Polytechnique de Montréal, Montreal, QC H3T 1J4, Canada
3 Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(3), 853; https://doi.org/10.3390/s22030853
Submission received: 28 October 2021 / Revised: 15 January 2022 / Accepted: 17 January 2022 / Published: 23 January 2022

Abstract:
This paper presents a quantized Kalman filter implemented using unreliable memories. We consider that both the quantization and the unreliable memories introduce errors in the computations, and we develop an error propagation model that takes these two sources of errors into account. In addition to providing updated Kalman filter equations, the proposed error model accurately predicts the covariance of the estimation error and gives a relation between the performance of the filter and its energy consumption, depending on the noise level in the memories. Then, since memories are responsible for a large part of the energy consumption of embedded systems, optimization methods are introduced to minimize the memory energy consumption under a constraint on the desired estimation performance of the filter. The first method computes the optimal energy levels allocated to each memory bank individually, and the second one optimizes the energy allocation per groups of memory banks. Simulations show a close match between the theoretical analysis and experimental results. Furthermore, they demonstrate a substantial reduction of more than 50% in energy consumption.

1. Introduction

Kalman filtering is a very common recursive estimation task in statistical signal processing [1], and it is often implemented on resource-limited hardware. Applications that require an embedded energy-efficient Kalman filter include air quality monitoring [2], biomedical wearable sensors [3], forest fire detection [4] and vehicle positioning [5]. Energy budgets for embedded systems show that memory access consumes about a hundred times more energy than integer computations [6]. Therefore, in this paper, we focus on optimizing the energy used by memories in Kalman filters.
All memories used in integrated circuits exhibit a fundamental trade-off between data storage reliability and energy consumption, which is related to the inability to perfectly control the fabrication process. For example, the energy consumption of static random access memories (SRAMs) can be reduced by lowering their supply voltage; however, this increases the probability that some of the stored bits cannot be retrieved correctly [7]. Following this principle, ref. [8] developed an optimization method to lower the energy consumed by SRAM accesses by reducing the bit-line voltages. This methodology was also used to decrease the write energy of magnetic random access memories (MRAMs) in [9]. In both cases, however, this introduces errors in the words stored in memory.
The robustness to unreliability in computation operations and memories has been investigated for several signal processing and machine-learning applications, including binary recursive estimation [10], binary linear transformation [11], deep neural networks [12,13], multi-agent systems [14] and distributed logistic regression [15]. Moreover, several techniques have been proposed to compensate for faults introduced by unreliable systems. For instance, [16] proposed to add redundancy in the system through algorithmic noise tolerance, and [17] investigated the use of error-correction codes (ECC) for fault correction.
Although Kalman filtering has not previously been investigated under unreliable hardware implementation, some related works considered this filter and other similar models for linear systems under uncertain conditions. These include errors on the filter's gain [18,19], sensor failures, uncertainties on the observations [20,21], and inaccuracies in the filter parameters [22,23,24]. In these works, new filter equations were derived using the Riccati equation approach to find new bounds or guarantees on the performance of the filter.
Although these models are not relevant for characterizing the effect of unreliable memories, the main lessons they provide are that Kalman filtering is very sensitive to inaccuracies and that one should re-derive the optimal Kalman filter depending on the specifically considered uncertainty model. On a different line of research, other prior works aim at reducing the energy requirements for Kalman filtering by focusing on reduced computational complexity in field-programmable gate arrays (FPGAs) [25,26] and application-specific integrated circuits (ASICs) [27].
Designing a digital hardware implementation requires quantizing all the variables and computational operations. Therefore, to further reduce the memory energy consumption, one option is to properly optimize the quantization to reduce the memory requirements of the implementation. Significant energy gains from optimized quantization have been demonstrated in [28,29,30] for signal processing and digital communications applications and in [31,32,33] for neural networks. The effects of quantization on the Kalman filter were first studied in [34,35] to understand the convergence of filters with reduced precision.
More recently, refs. [36,37,38] considered two distributed quantized Kalman filters, one based on quantized observations and one based on quantized innovations, where sensors process and transmit quantized observations and innovations to a fusion center. Furthermore, ref. [36] proposed to optimize the number of quantization bits at each sensor to minimize the required data transmission energy.
More general linear stochastic systems were also investigated under quantized measurements [39] and quantized innovations [40], where it was shown that the derived quantized filters converged to standard Kalman filters as the number of quantization levels increased. However, none of these theoretical works considered quantized parameters (e.g., quantized Kalman gain matrices, quantized measurement matrices, etc.), in addition to quantized observations/innovations. Therefore, in this paper, we study a fully-quantized Kalman filter and investigate its energy consumption when using unreliable memories.
Here, we aim to optimize the energy consumption of a Kalman filter implemented with fixed-point quantization [41] and with unreliable memories. Fixed-point representations are often preferred in energy-constrained systems, as a fixed-point operation can consume 10 times less energy than a floating-point one [6]. We consider the statistical model of [7], which relates the amount of faults introduced in memory to its energy consumption. Then, as a first contribution, we propose a unified framework to analyze the performance of Kalman filters with both quantization errors and faults introduced in the memory.
To develop this framework, we build on the approach of [35], which consists of evaluating the covariance matrix of the estimation error at each filter iteration by considering both error propagation from previous iterations and errors introduced at the current iteration. Our analysis also includes quantized filter parameters and further incorporates the effect of unreliable memories. Determining the covariance matrix of the estimation error has two advantages. First, it allows us to derive the optimal Kalman filter equations under the considered quantization and memory error models. Second, and more specific to our case, it defines a performance criterion that will be used to optimize the memory energy consumption.
As a second contribution, we define two optimization problems to minimize the memory energy consumption while satisfying a target constraint on the estimation performance of the Kalman filter. In the first problem, we optimize the number B of quantization bits and the energy allocated to each bit position to minimize the overall energy consumption of the memory. This optimization problem extends that of [8], which was not dedicated to Kalman filtering but derived optimal bitwise energy allocations with a fixed number of quantization bits by considering a generic Mean-Squared Error (MSE) performance criterion when reading a word in memory.
Although a useful baseline, the setting where each bit position can have a different energy allocation is not practical, since each of the B bits should be placed in a different memory bank with its own power supply. This is why we also introduce a second optimization problem in which we fix the number L < B of possible energy levels and optimize the energy value in each level and the mapping of bit positions to an energy level.
At the price of a small energy increase, this optimization problem allows us to build a practical implementation that only requires L memory banks. By using the Karush–Kuhn–Tucker (KKT) conditions, we provide solutions to the two considered optimization problems. Both solutions can be numerically computed using water-filling. Numerical simulations show that, after optimization, the memory energy consumption is reduced by up to 56% compared to a uniform allocation.
The main contributions of this paper can be summarized as follows:
(1) We develop an error propagation model of the Kalman filter that takes different sources of errors (quantization, unreliable memories) into account and allows us to derive new filter equations to minimize the estimation error. Moreover, these equations accurately predict the filter's performance, depending on the considered sources of errors and on their parameters.
(2) We propose a methodology for minimizing the energy of the unreliable memories used in the Kalman filter, under a given performance constraint. This methodology consists of computing the optimal number of quantization levels and the bit energy allocation in two setups. The first setup considers that the B energy levels can be chosen freely, while the second one assumes that only L < B energy levels can be set.
A preliminary version of this paper [42] only considered optimizing the energy allocation of each memory bank individually, for a fixed number of bits, without taking the quantization noise into account or trying to reduce the energy consumption by adjusting the number of bits. Since the number of quantization bits also affects the energy consumption of the memory, in this work, we add this parameter to the theoretical analysis and to the optimization problems.
The rest of the paper is organized as follows. Section 2 describes the quantized Kalman filter and introduces the uncertainty model for unreliable memories. Section 3 investigates the theoretical performance of the filter. Section 4 formally defines and solves the two considered optimization problems. Section 5 presents the simulation results, and Section 6 concludes the paper.

2. System Model

We first review the Kalman filter for estimating dynamic state variables from noisy measurements. We then present the considered implementation of the filter by first introducing its quantization model and then describing its implementation with an unreliable memory.

2.1. Kalman Filter

The process
$$x_{k+1} = F x_k + u_k, \qquad (1)$$
describes the linear dynamic variable $x \in \mathbb{R}^c$, where the state vector of the process at step $k$ is noted $x_k$, $F$ is the state transition matrix of size $c \times c$, and $u_k \in \mathbb{R}^c$ is an additive white noise vector [43]. Observation of the state $x$ can be obtained through $y \in \mathbb{R}^d$, the measurement vector defined as
$$y_k = H x_k + v_k. \qquad (2)$$
Here, $H$ is the $d \times c$ measurement model and $v_k \in \mathbb{R}^d$ is an additive white noise on the measurements, independent from the model noise $u_k$. We denote by $Q$ and $R$ the known covariance matrices of the noise vectors $u_k$ and $v_k$, respectively.
Using the knowledge of the model as well as the measurement vectors $y_k$, the Kalman filter [1] recursively estimates the successive states $x_k$. This is done by minimizing the mean squared error between $x_k$ and its estimate $\hat{x}_k$ at each step $k$: $\mathrm{MSE}(x) = \mathbb{E}[\|x_k - \hat{x}_k\|^2]$. The filter can be decomposed into two phases: the a priori estimation uses only the known model, and the a posteriori estimation takes into account the measurements.
At each phase, the estimates $\hat{x}_{k+1|k}$ (for the a priori phase) and $\hat{x}_{k+1|k+1}$ (for the a posteriori phase) of the state vector $x_{k+1}$ are computed, together with the covariance matrices of the estimation errors $P_{k+1|k} = \mathrm{Cov}[x_{k+1} - \hat{x}_{k+1|k}]$ and $P_{k+1|k+1} = \mathrm{Cov}[x_{k+1} - \hat{x}_{k+1|k+1}]$. The recursive equations of the a priori estimation step are [44]:
$$\hat{x}_{k+1|k} = F \hat{x}_{k|k}, \qquad (3)$$
$$P_{k+1|k} = F P_{k|k} F^\top + Q, \qquad (4)$$
and the recursive equations of the a posteriori estimation step are:
$$K_{k+1} = P_{k+1|k} H^\top \left( H P_{k+1|k} H^\top + R \right)^{-1}, \qquad (5)$$
$$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} \left( y_{k+1} - H \hat{x}_{k+1|k} \right), \qquad (6)$$
$$P_{k+1|k+1} = \left( I - K_{k+1} H \right) P_{k+1|k}, \qquad (7)$$
where $A^\top$ denotes the transpose of a matrix $A$. In these equations, the Kalman gain $K$ of size $c \times d$ and the covariance matrices $P$ of size $c \times c$ can be computed offline. On the other hand, the terms $\hat{x}_{k+1|k}$ and $\hat{x}_{k+1|k+1}$ must be computed online, as they depend on the measurements $y_k$.
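To make the recursions (3)–(7) concrete, the following minimal NumPy sketch implements one filter iteration; the function and variable names are ours and not from the original paper.

```python
import numpy as np

def kalman_step(x_est, P, y, F, H, Q, R):
    """One iteration of the Kalman recursions (3)-(7)."""
    # A priori estimation step, Equations (3)-(4).
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    # A posteriori estimation step, Equations (5)-(7).
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain, Equation (5)
    x_new = x_pred + K @ (y - H @ x_pred)          # Equation (6)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred  # Equation (7)
    return x_new, P_new, K
```

As noted above, $K_{k+1}$ and $P_{k+1|k+1}$ do not depend on the measurements, so in an embedded implementation they would be precomputed offline, and only Equation (6) would be evaluated online.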

2.2. Quantized Implementation of the Filter

In the rest of the paper, we study Kalman filters that are implemented under fixed-point quantization [41]. Under this model, each number is represented as a signed integer coded on $(1 + n + m)$ bits, where one bit is used for the sign, $n$ bits are used for the integer part of the number, and $m$ bits are used for its fractional part. Using this model, we can write a given number $z$ as
$$z = (-1)^{z_n} \sum_{b=-m}^{n-1} 2^b z_b, \qquad (8)$$
where $z_b \in \{0, 1\}$ are the bits stored in memory to represent $z$, and $z_n$ is the sign bit. In our modeling of the Kalman filter, all variables (including matrix components) involved in Equations (3)–(7) are stored using this quantization model, all with the same values of $n$ and $m$. The quantization of the variables to this fixed-point model is done using a uniform quantizer. Note that the distribution of the quantized data is not necessarily uniform (the random variables $\hat{x}_{k|k}$ and $y_k$ could follow Gaussian distributions, for example). However, it is shown in [45] that a uniform quantizer can be applied independently of the probability distribution of the source, with only a small loss compared to an optimal quantizer.
In the considered quantizer, the value of $n$ is chosen to be able to represent the largest possible value in the system. The value of $m$ sets the resolution of the quantization, so that the smallest difference between two quantized numbers is $2^{-m}$ [41]. The value of $m$ is a parameter that will be optimized to minimize the energy in later sections.
In the case of fixed values, such as the components of the matrices of the filter, the fixed-point quantized value can be written as $\bar{f} = f + \delta_f$, where $\delta_f$ is the quantization error. With the previously described uniform quantizer, $|\delta_f| < 2^{-m}$. In the case of quantized random variables, such as the components of $\hat{x}_{k|k}$ or $y_k$, we let $\epsilon_x$ be the quantization error and express $\bar{x} = x + \epsilon_x$. In [46], conditions are given for the quantization error $\epsilon_x$ to be independent from the quantized variable, depending on the distribution of the quantized data.
For the special case of a Gaussian distribution, the quantization step needs to be significantly smaller than the standard deviation of the quantized data. In this case, it can be shown that the quantization error is a white noise following a uniform distribution of variance $\frac{2^{-2m}}{12}$. This independence assumption will be used in the theoretical derivations of Section 3. Note that most existing works on quantized Kalman filters only consider that random quantities, such as $\hat{x}_{k|k}$ and $y_k$, are quantized, whereas here, the matrix components of the filter, e.g., $K_k$, are also quantized. This requires a new theoretical analysis to treat this case.
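As an illustration, here is a minimal sketch of a uniform quantizer for the fixed-point format of Equation (8); the helper name and the rounding convention are our own assumptions.

```python
import numpy as np

def quantize(z, n, m):
    """Uniform quantizer to a signed fixed-point format with one sign bit,
    n integer bits and m fractional bits (resolution 2^-m), as in Eq. (8)."""
    scale = 2.0 ** m
    max_mag = 2.0 ** n - 2.0 ** (-m)        # largest representable magnitude
    q = np.round(np.asarray(z) * scale) / scale
    return np.clip(q, -max_mag, max_mag)

# For m large enough, the quantization error is approximately white and
# uniform with variance 2^(-2m) / 12:
m = 8
x = np.random.randn(100_000)
err = quantize(x, n=7, m=m) - x
print(err.var(), 2.0 ** (-2 * m) / 12)      # the two values should be close
```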

2.3. Implementation of the Filter by Using an Unreliable Memory

In order to reduce its energy consumption, the quantized Kalman filter can be implemented on unreliable hardware [8,10,11,12]. Here, we assume, as in [10,12], that only the memory is faulty. In this case, each memory cell of a memory bank has a bit flipping probability p. We then use the model of [7] to express p with respect to the memory bank energy consumption e as
$$p = \exp(-ea), \qquad (9)$$
where $a$ is a parameter that depends on the device technology. We assume that bit errors occur independently. This is justified first by the fact that, in many cases of interest, such as the common case of SRAM memories in a CMOS digital circuit, memory failures can be assumed to occur independently for each bit cell [47]. Therefore, we have spatial independence between memory cells within one iteration.
However, faults are typically caused by fabrication variations, which does not guarantee temporal independence across successive reads of the same memory cell. To resolve this issue, we can assume that a diversity scheme is implemented at the system level to avoid re-using the same memory location to store the same variable; this can be implemented at very low cost simply by modifying the memory addressing scheme.
Each memory bank has a uniform energy consumption (e.g., a single supply voltage) and is used in our case to store the bits at a given position of all the components of the matrices stored in the unreliable memory. Since the other terms of the filter can be precomputed offline and stored separately in a reliable memory, we assume that only the estimates $\hat{x}_{k+1|k}$ and $\hat{x}_{k+1|k+1}$ are stored in the unreliable memory banks. Therefore, in the Kalman filter, instead of an estimate component $\hat{x}$, such as the one computed in (3), we have a possibly incorrect estimate component $\tilde{x}$. Using the binary representation given in (8), we can define a vector of energies per memory bank:
$$\mathbf{e} = [e_{-m}, e_{-m+1}, \ldots, e_{n-1}]. \qquad (10)$$
We can then express a bit at position $b$ stored in the unreliable memory as $\tilde{x}_b = \hat{x}_b \oplus \gamma_b$, where $p_b = \Pr(\gamma_b = 1) = \exp(-e_b a)$ and $\oplus$ denotes modulo-2 addition. As the filter would be particularly sensitive to faults on the sign bit, we consider a sign-preserving model, as in [10,48,49]. This sign-preserving model can be implemented by storing the sign bits in a separate reliable memory.
Using this noise model defined at the bit level, we can define a noise model at the symbol level as
$$\tilde{x} = \hat{x} + \gamma, \qquad (11)$$
where $\gamma$ is the noise introduced by the unreliable memory. For the subsequent theoretical analysis, we assume that the mean $\mathbb{E}[\gamma]$ of this memory noise is negligible compared to its variance $\mathrm{Var}[\gamma] = \sigma_\gamma^2$; we verified this condition with Monte Carlo simulations. The covariance matrix of a memory noise vector $\gamma$ of length $c$ is defined as $\Gamma = \mathrm{Cov}[\gamma] = I_c \sigma_\gamma^2$ and has size $c \times c$. The matrix $\Gamma$ is diagonal, since the memory noise variables are considered independent.
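The bit-level model of this subsection can be simulated directly. The sketch below reads one fixed-point magnitude back from an unreliable memory, flipping bit $b$ with probability $p_b = \exp(-e_b a)$ while keeping the sign reliable; the function name and interface are ours.

```python
import numpy as np

def read_unreliable(z, e, n, m, a=12.8):
    """Noisy read of a fixed-point value: magnitude bit b is flipped with
    probability p_b = exp(-e_b * a) (Equation (9)); the sign bit is stored
    in a reliable memory (sign-preserving model)."""
    bits = np.arange(-m, n)                          # positions -m .. n-1
    p = np.exp(-a * np.asarray(e))                   # per-bank flip probability
    stored = np.floor(np.abs(z) / 2.0 ** bits).astype(int) % 2
    flips = (np.random.rand(n + m) < p).astype(int)  # gamma_b in {0, 1}
    noisy = stored ^ flips                           # modulo-2 addition
    return np.sign(z) * np.sum(noisy * 2.0 ** bits)
```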

3. Error Analysis

As described in Section 2, we consider two types of errors affecting the filter: the quantization error and the unreliable memory noise. In this section, we first describe a generic model of error propagation in the Kalman filter, before studying both types of errors in more detail. Finally, we compute the covariance matrix $P^*_{k|k} = \mathrm{Cov}[\tilde{x}_{k|k} - x_k]$ of the total estimation error by taking both sources of noise (quantization and unreliable memories) into account, in contrast to a standard Kalman filter, which accounts for neither.

3.1. Error Propagation Model

Our objective is to compute the total error $\Delta \hat{x}_{k+1|k+1}$ on the computation of $\hat{x}_{k+1|k+1}$ at step $k+1$ by considering the two types of errors: quantization and unreliable memory. To handle the recursion as in [35], we split the error model into two parts: the errors occurring at step $k$ and the errors from the previous steps, which are propagated up to step $k$.
To compute $\Delta \hat{x}_{k+1|k+1}$, we first need to express the total error $\Delta P_{k+1|k}$ on the a priori covariance matrix $P_{k+1|k}$ after step $k+1$. As in [35], we express this total error as
$$\Delta P_{k+1|k} = f_P(\Delta P_{k|k-1}) + \delta P_{k+1|k}, \qquad (12)$$
where the function $f_P$ models the errors propagated from step $k$, and $\delta P_{k+1|k}$ represents the errors occurring at step $k+1$. In this case, according to [35]:
$$f_P(\Delta P_{k|k-1}) = G_k \Delta P_{k|k-1} G_k^\top + o(\Delta^2), \qquad (13)$$
where $\Delta = \frac{\| H \Delta P_{k|k-1} H^\top \|_2}{\sigma_{\min}(H P_{k|k-1} H^\top + R)}$, with $H P_{k|k-1} H^\top + R$ a square nonsingular matrix and $\sigma_{\min}$ denoting the smallest singular value. Therefore, we have the approximation
$$f_P(\Delta P_{k|k-1}) \approx G_k \Delta P_{k|k-1} G_k^\top, \qquad (14)$$
where $G_k = F(I - K_k H)$. We then express the total error $\Delta \hat{x}_{k+1|k+1}$ on $\hat{x}_{k+1|k+1}$ by considering the same separation between propagated errors and errors from the current iteration. This gives
$$\Delta \hat{x}_{k+1|k+1} = f_x(\Delta \hat{x}_{k|k}, \Delta P_{k|k-1}) + \delta \hat{x}_{k+1|k+1}, \qquad (15)$$
where the error propagation function $f_x$ is provided in [35], under the same assumption as for (13), as
$$f_x(\Delta \hat{x}_{k|k}, \Delta P_{k|k-1}) \approx (I - K_k H) \left( F \Delta \hat{x}_{k|k} + \Delta P_{k|k-1} H^\top (H P_{k|k-1} H^\top + R)^{-1} (y_{k+1} - H F \hat{x}_{k|k}) \right). \qquad (16)$$
In this expression, we observe the error propagation from the previous computations of $\hat{x}_{k|k}$ and $P_{k+1|k}$. In particular, $\hat{x}_{k+1|k+1}$ depends on $K_{k+1}$, which is precomputed from $P_{k+1|k}$ at each iteration.
Using the recursive Equations (12) and (15), we now estimate the covariance matrix $P^*_{k|k}$ of the total estimation error. Note that [35] considered only quantization errors, while here we consider two sources of errors: quantization and unreliable memories. To evaluate $P^*_{k|k}$, we must first compute the covariance of each term of $\Delta \hat{x}_{k+1|k+1}$. By assuming that the two sources of noise (quantization and unreliable memory noise) are statistically independent, we decompose $\delta \hat{x}_{k+1|k+1}$ as
$$\delta \hat{x}_{k+1|k+1} = \delta \hat{x}^{\text{quant}}_{k+1|k+1} + \delta \hat{x}^{\text{mem}}_{k+1|k+1}, \qquad (17)$$
and study the two terms $\delta \hat{x}^{\text{quant}}_{k+1|k+1}$ and $\delta \hat{x}^{\text{mem}}_{k+1|k+1}$ separately.

3.2. Quantization Error

We now aim for an analytical expression for $\delta \hat{x}^{\text{quant}}_{k+1|k+1}$, defined as the difference between the quantized estimate $\bar{\hat{x}}_{k+1|k+1}$ and the full-precision estimate $\hat{x}_{k+1|k+1}$:
$$\delta \hat{x}^{\text{quant}}_{k+1|k+1} = \bar{\hat{x}}_{k+1|k+1} - \hat{x}_{k+1|k+1}. \qquad (18)$$
Before expressing $\delta \hat{x}^{\text{quant}}_{k+1|k+1}$, we first review generic quantization error expressions [35]. For the scalar fixed-point multiplication of a coefficient $\bar{s}$ with a random variable $\bar{t}$, both quantized according to the model presented in Section 2.2, we can show that
$$\overline{st} = (s + \delta_s)(t + \epsilon_t) + \epsilon_{st} = st + s \epsilon_t + t \delta_s + \delta_s \epsilon_t + \epsilon_{st}, \qquad (19)$$
where $\delta_s = \bar{s} - s$, and $\epsilon_t$ and $\epsilon_{st}$ follow uniform distributions of variance $\frac{2^{-2m}}{12}$. The scalar expression (19) can then be generalized to the case of a product between a matrix of fixed-point coefficients $\bar{A}$ of size $p \times q$ and a matrix of fixed-point random variables $\bar{B}$ of size $q \times r$ as
$$\overline{AB} = AB + A \epsilon_B + \delta_A B + \delta_A \epsilon_B + \epsilon_{AB}, \qquad (20)$$
where $\epsilon_{AB}$ is of size $p \times r$ with $\epsilon_{AB,i,j} = \sum_{k=1}^{q} \epsilon_{AB,i,j,k}$. According to Section 2.2, each $\epsilon_{AB,i,j,k}$ follows a uniform distribution of variance $\frac{2^{-2m}}{12}$. In (20), the product $\delta_A \epsilon_B$ can be considered negligible compared to the other error terms. Indeed, all scalar quantization errors $\epsilon$ and $\delta$ are upper-bounded by $2^{-m-1}$, so their product is bounded by $2^{-2m-2} = (2^{-m-1})^2$. Since, for a large enough value of $m$, $2^{-m-1}$ is much smaller than 1, $2^{-2m-2}$ is negligible compared to $2^{-m-1}$. Therefore, in the following derivations, we neglect products of quantization errors.
We now study quantization errors introduced during the computation of $\bar{\hat{x}}_{k+1|k+1}$. While existing works, e.g., [36,37,38], assume that only the random quantities $\bar{\hat{x}}_{k|k}$ and $\bar{y}_{k+1}$ are quantized, we here also consider that the matrices $\bar{D}_{k+1}$ and $\bar{K}_{k+1}$ are quantized as well. This corresponds to a more practical implementation setup and requires a more complex theoretical analysis. We first note that Equation (6) can be rewritten as
$$\bar{\hat{x}}_{k+1|k+1} = \bar{D}_{k+1} \bar{\hat{x}}_{k|k} + \bar{K}_{k+1} \bar{y}_{k+1}, \qquad (21)$$
where both $D_k = (I - K_k H) F$ and the Kalman gains $K_k$ can be computed offline. We thus consider that the matrices $K_k$ and $D_k$ are computed in full precision and then quantized with the fixed-point model. Under these conditions, according to (20), and neglecting products of quantization errors, the quantized vector $\bar{\hat{x}}_{k+1|k+1}$ can be approximated as
$$\bar{\hat{x}}_{k+1|k+1} = D_{k+1} \hat{x}_{k|k} + \delta_{D_{k+1}} \hat{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + \epsilon_{D_{k+1} x_{k|k}} + K_{k+1} y_{k+1} + \delta_{K_{k+1}} y_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_{K_{k+1} y_{k+1}} + o(2^{-m-1}). \qquad (22)$$
We see that the expression of $\bar{\hat{x}}_{k+1|k+1}$ depends on the full-precision vectors $\hat{x}_{k|k}$ and $y_{k+1}$ and on the quantization errors and noise. Finally, the quantization error $\delta \hat{x}^{\text{quant}}_{k+1|k+1}$ defined in (18) can be computed by using (22):
$$\delta \hat{x}^{\text{quant}}_{k+1|k+1} \approx \delta_{D_{k+1}} \hat{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + \epsilon_{D_{k+1} x_{k|k}} + \delta_{K_{k+1}} y_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_{K_{k+1} y_{k+1}}, \qquad (23)$$
where the covariance matrix $\Sigma_\times$ of $\epsilon_\times = \epsilon_{D_{k+1} x_{k|k}} + \epsilon_{K_{k+1} y_{k+1}}$ is given by
$$\Sigma_\times = \mathrm{Cov}[\epsilon_{D_{k+1} x_{k|k}}] + \mathrm{Cov}[\epsilon_{K_{k+1} y_{k+1}}] \qquad (24)$$
$$= I_c (c + d) \frac{2^{-2m}}{12}, \qquad (25)$$
and $\mathrm{Cov}[\epsilon_{x_{k|k}}] = I_c \frac{2^{-2m}}{12}$, $\mathrm{Cov}[\epsilon_{y_{k+1}}] = I_d \frac{2^{-2m}}{12}$. Equation (23) gives us the quantization error on the computation of $\hat{x}_{k+1|k+1}$ as a function of the unquantized values of $\hat{x}_{k|k}$, the filter parameters, and the quantization resolution $m$.

3.3. Unreliable Memory Error

We now consider the second source of noise, the unreliable memories, and derive an expression for the covariance matrix $\Gamma = \mathrm{Cov}[\delta \hat{x}^{\text{mem}}_{k+1|k+1}]$ of the unreliable memory noise $\delta \hat{x}^{\text{mem}}_{k+1|k+1}$ introduced in (17).
Assuming $\mathbb{E}[\gamma] \ll \mathrm{Var}[\gamma]$, as discussed in Section 2.3, the variance $\mathrm{Var}[\gamma] = \sigma_\gamma^2$ of the memory noise $\gamma$ can be approximated by the MSE as $\sigma_\gamma^2 \approx \mathbb{E}[(\tilde{x} - \hat{x})^2]$. The value of $\mathbb{E}[(\tilde{x} - \hat{x})^2]$ depends on the error probabilities $p_b$ as well as on the probability distributions of the variables $x$ stored in memory. However, from ([8], Claim 17), if $p_{n-1} \le \frac{1}{2}$ or $\Pr[\hat{x}_b = \hat{x}_{b'}] \approx \Pr[\hat{x}_b \ne \hat{x}_{b'}]$ for any $b \ne b'$, then the MSE $\mathbb{E}[(\tilde{x} - \hat{x})^2]$ can be approximated as
$$\sigma_\gamma^2 = \mathbb{E}[(\tilde{x} - \hat{x})^2] \approx \sum_{b=-m}^{n-1} 4^b p_b = \sum_{b=-m}^{n-1} 4^b e^{-e_b a}, \qquad (26)$$
where the last equality is obtained from the noise-versus-energy model (9). Therefore, the probability distributions of the variables $x$ have no significant impact on the value of the MSE.
Equation (26) gives us a relation between the noise variance $\sigma_\gamma^2$ and the vector $\mathbf{e}$ of energy levels defined in (10). Moreover, by using (26), we show that the covariance matrix $\Gamma$ of the memory noise vector $\delta \hat{x}^{\text{mem}}_{k+1|k+1}$ is given by
$$\Gamma = I_c \sigma_\gamma^2. \qquad (27)$$
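A quick numerical check of Equation (26) can be done with the read_unreliable sketch of Section 2.3: the Monte Carlo MSE of the noisy reads should match the model prediction. The specific values below (a uniform energy of 0.4 per bank) are arbitrary choices of ours.

```python
import numpy as np

n, m, a = 4, 8, 12.8
e = np.full(n + m, 0.4)                      # uniform energy allocation
bits = np.arange(-m, n)

# Model prediction, Equation (26):
sigma2_model = np.sum(4.0 ** bits * np.exp(-a * e))

# Monte Carlo estimate, reusing read_unreliable from Section 2.3:
x = np.round(np.random.uniform(0, 2.0 ** n, 20_000) * 2.0 ** m) / 2.0 ** m
noisy = np.array([read_unreliable(v, e, n, m, a) for v in x])
print(sigma2_model, np.mean((noisy - x) ** 2))   # the two should be close
```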

3.4. Total Error

After separately studying the two error terms $\delta \hat{x}^{\text{quant}}_{k+1|k+1}$ and $\delta \hat{x}^{\text{mem}}_{k+1|k+1}$, we now combine them to obtain an expression for the total estimation error $e^*_{k+1|k+1} = \tilde{x}_{k+1|k+1} - x_{k+1}$. We then provide the covariance matrix $P^*_{k+1|k+1}$ of this total error.
By using $\tilde{x}_{k|k}$ to denote the faulty estimate of $x_k$, we can express
$$\tilde{x}_{k|k} = \hat{x}_{k|k} + \Delta \tilde{x}_{k|k}. \qquad (28)$$
Considering that only the $\hat{x}_{k|k}$ are stored in the unreliable memories, the error propagation model (15) can be rewritten as
$$\Delta \tilde{x}_{k+1|k+1} = D_{k+1} \Delta \tilde{x}_{k|k} + \delta \hat{x}^{\text{quant}}_{k+1|k+1} + \delta \hat{x}^{\text{mem}}_{k+1|k+1} \qquad (29)$$
$$= D_{k+1} \Delta \tilde{x}_{k|k} + \delta_{D_{k+1}} \tilde{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + \delta_{K_{k+1}} y_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1} \qquad (30)$$
$$= (D_{k+1} + \delta_{D_{k+1}}) \Delta \tilde{x}_{k|k} + \delta_{D_{k+1}} \hat{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + \delta_{K_{k+1}} y_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1}, \qquad (31)$$
where (30) is obtained by replacing $\delta \hat{x}^{\text{quant}}_{k+1|k+1}$ with its expression (23), and (31) follows from (28), which allows us to write $\tilde{x}_{k|k} = \hat{x}_{k|k} + \Delta \tilde{x}_{k|k}$. Equation (31) provides a recursive form of the error at step $k+1$, since $\Delta \tilde{x}_{k+1|k+1}$ depends on $\Delta \tilde{x}_{k|k}$ and $\hat{x}_{k|k}$; all the other terms in (31) come from the current iteration $k+1$.
Under certain conditions, for example if $H$ and $F$ are only composed of integer components, the total estimation error $\tilde{x}_{k+1|k+1} - x_{k+1} = \hat{x}_{k+1|k+1} + \Delta \tilde{x}_{k+1|k+1} - x_{k+1}$ can be further developed as (see Appendix A for more details):
$$\tilde{x}_{k+1|k+1} - x_{k+1} = (D_{k+1} + \delta_{D_{k+1}})(\tilde{x}_{k|k} - x_k) \qquad (32)$$
$$+ (K_{k+1} + \delta_{K_{k+1}}) v_{k+1} + \left( (K_{k+1} + \delta_{K_{k+1}}) H - I \right) u_k \qquad (33)$$
$$+ D_{k+1} \epsilon_{x_{k|k}} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1}. \qquad (34)$$
This equation gives us a recursive form of the total estimation error $\tilde{x}_{k+1|k+1} - x_{k+1}$ at step $k+1$, depending on the estimation error $\tilde{x}_{k|k} - x_k$ at step $k$, on the quantization resolution $2^{-m}$, and on the memory noise $\delta \hat{x}^{\text{mem}}_{k+1|k+1}$.
Finally, we can compute the covariance matrix $P^*_{k+1|k+1}$ of this error as
$$P^*_{k+1|k+1} = \mathrm{Cov}[\tilde{x}_{k+1|k+1} - x_{k+1}] = (D_{k+1} + \delta_{D_{k+1}}) P^*_{k|k} (D_{k+1} + \delta_{D_{k+1}})^\top + (K_{k+1} + \delta_{K_{k+1}}) R (K_{k+1} + \delta_{K_{k+1}})^\top + \left( (K_{k+1} + \delta_{K_{k+1}}) H - I \right) Q \left( (K_{k+1} + \delta_{K_{k+1}}) H - I \right)^\top + D_{k+1} \mathrm{Cov}[\epsilon_{x_{k|k}}] D_{k+1}^\top + K_{k+1} \mathrm{Cov}[\epsilon_{y_{k+1}}] K_{k+1}^\top + \Sigma_\times + \Gamma, \qquad (35)$$
where all the terms involved, including the covariance matrices, have been made explicit in the previous sections. Equation (35) shows that the covariance matrix $P^*_{k+1|k+1}$ can be computed recursively.
Equation (35) provides us a measure of the performance of the filter, depending on the quantization resolution and on the energy supplied to the memory. Equipped with this derivation, we can now use the covariance matrix $P^*_{k+1|k+1}$ as a performance criterion against which to optimize the energy consumed by the unreliable memory.
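For reference, a direct transcription of the recursion (35) could look as follows; the argument names (dD and dK for the quantization errors $\delta_{D}$ and $\delta_{K}$) are ours.

```python
import numpy as np

def p_star_step(P_star, D, dD, K, dK, H, Q, R, m, sigma_gamma2):
    """One recursion of Equation (35): covariance of the total error."""
    c, d = D.shape[0], K.shape[1]
    q = 2.0 ** (-2 * m) / 12                  # scalar quantization variance
    Dq, Kq = D + dD, K + dK                   # quantized filter matrices
    M = Kq @ H - np.eye(c)
    return (Dq @ P_star @ Dq.T                # propagated estimation error
            + Kq @ R @ Kq.T                   # measurement noise term
            + M @ Q @ M.T                     # process noise term
            + q * (D @ D.T)                   # D Cov[eps_x] D^T
            + q * (K @ K.T)                   # K Cov[eps_y] K^T
            + q * (c + d) * np.eye(c)         # Sigma_x, Equations (24)-(25)
            + sigma_gamma2 * np.eye(c))       # Gamma, Equation (27)
```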

4. Energy Optimization

In this section, we optimize the energy consumption of the memory while satisfying a performance constraint defined on the total estimation error of the filter. As parameters to optimize, we consider the number of fractional bits $m$ of the quantization and the energy vector $\mathbf{e}$ of the memory banks. We define two optimization problems, which both seek to minimize the energy consumed by the memory. In the first problem, we find the optimal number of bits $m$ and the corresponding $n + m$ energy levels to allocate to the memory banks. Although solving this problem provides the minimum energy that needs to be supplied to the memory, it is not very practical, since each of the $n + m$ bits should be stored in a different memory bank with a specific voltage supply.
Therefore, in the second problem, we consider that the number of bits $m$ is fixed but that the number of possible energy levels is limited to $L$ possibilities. Both the $L$ energy values and the allocation of each bit to one of the $L$ possible values should be optimized. Solving this problem allows us to consider only $L < n + m$ different memory banks.

4.1. Optimization across All the Bits

We first find the optimal energy level $e_b$ of each memory bank and the optimal number of fractional bits $m$ to minimize the total memory energy consumption. As a performance criterion, we consider the covariance matrix $P^*_{N|N}$ of the total estimation error at step $N$, where $N$ is chosen large enough for the filter to converge. We further introduce a matrix $V$ of the same size as $P^*_{N|N}$ to define the performance constraint on the variances and covariances of the estimation error on each component. The optimization problem is then defined as follows:
$$\min_{\mathbf{e}, m} \; e_{\text{tot}} = \sum_{b=-m}^{n-1} e_b = \mathbb{1}^\top \mathbf{e}, \quad \text{s.t.} \quad P^*_{N|N} \preceq V \text{ and } e_b \ge e_{\text{thres}} \; \forall b \in [\![-m, n-1]\!], \qquad (36)$$
where $\preceq$ denotes component-wise inequality between two matrices, and where the minimum is taken over all energy vectors $\mathbf{e}$ as defined in (10) and over all possible values of the number of bits $m$. We consider that $m \in [\![0, M]\!]$, where $M$ is the maximum number of bits that can be stored in a memory. The value $e_{\text{thres}}$ is the minimum energy level of each memory bank required to avoid undesired effects such as circuit delays and energy leakage [7].
Problem (36) involves one discrete parameter $m$ and $m + n$ continuous parameters $\mathbf{e}$, which makes it difficult to solve at once. As a first step, we assume that the value of $m$ is fixed and solve the following simplified problem:
$$\min_{\mathbf{e}} \; e_{\text{tot}} = \sum_{b=-m}^{n-1} e_b = \mathbb{1}^\top \mathbf{e}, \quad \text{s.t.} \quad P^*_{N|N} \preceq V \text{ and } e_b \ge e_{\text{thres}} \; \forall b \in [\![-m, n-1]\!], \qquad (37)$$
by using the Karush–Kuhn–Tucker (KKT) conditions (see Appendix B). From these conditions, we show that the optimal energy level $e_b^*$ for bit $b$ has the expression
$$e_b^* = \begin{cases} e_{\text{thres}}, & \text{if } \lambda < \frac{1}{4^b a}, \\ \frac{1}{a} \log(4^b a \lambda), & \text{otherwise}, \end{cases} \qquad (38)$$
where $\lambda$ is a dual variable that balances the trade-off between preserving the performance of the system and reducing the energy consumption. A water-filling algorithm [8] can be used to compute the optimal vector $\mathbf{e}^*$ for a fixed desired performance $V$ of the filter. We can observe that, according to this optimal solution, the energies of the least significant bits are set to the threshold level $e_{\text{thres}}$, and the energy levels then increase logarithmically with the significance of the bits.
Since $m$ is discrete, the optimal solution (38) is computed using the water-filling algorithm for each possible value of $m$. We then retain the solution $(m, \mathbf{e})$ that gives the lowest total energy $e_{\text{tot}} = \sum_{b=-m}^{n-1} e_b$. In this method, the influence of the quantization error is taken into account through the performance criterion $P^*_{N|N}$. For a small number of bits $m$, quantization errors may make it impossible to satisfy the desired performance constraint, in which case the water-filling algorithm cannot find an optimal solution. If we detect that the algorithm converges toward a performance value that is still higher than the constraint, the algorithm is stopped, and we proceed to the next value of $m$ in the considered range.
The full optimization process is summarized in Algorithm 1. In this algorithm, the parameter $\beta$ controls the rate at which the energy of each memory bank is increased at each iteration. The value of $\beta$ is chosen either from the precision with which the energy can be set in a given device technology, or from the desired rate of convergence of the water-filling algorithm.
The condition $P_{\text{prev}} - P^*_{N|N} > \xi$ is used to detect whether the water-filling algorithm has a feasible solution, and thus the value of $\xi$ is set to a small value. The computation of $P^*_{N|N}(\mathbf{e}, m)$ accounts for most of the computing time of this algorithm. The total run time thus depends on the number of iterations required by the water-filling algorithm (the while loop in Algorithm 1). For fixed values of $\beta$ and $V$, we expect the number of iterations to increase with $m$.
Algorithm 1: Computing the optimal values for $\mathbf{e}$ and $m$.
Input: $V$, $a$, $\beta$, $\xi$, $e_{\text{thres}}$
Initialization: $e_{\min} \leftarrow +\infty$
[The body of Algorithm 1 is rendered as an image in the original article and is not reproduced here; it implements the procedure described above, and a sketch is given below.]
Result: optimal number of bits $m_{\text{opti}}$ and optimal energy allocation vector $\mathbf{e}_{\min}$
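Since the body of Algorithm 1 is only available as an image, the sketch below gives our reading of the loop structure described above: for each candidate $m$, the dual variable is raised geometrically (rate $\beta$) and the per-bit energies follow Equation (38), until the performance constraint is met or a stall is detected. The helper perf(e, m), which evaluates the performance criterion derived from $P^*_{N|N}$, is assumed to be provided.

```python
import numpy as np

def optimize_energy(perf, V, n, a=12.8, beta=1.02, xi=1e-9,
                    e_thres=0.05, M=24):
    """Sketch of Algorithm 1: joint optimization of m and e."""
    best_m, best_e, e_min = None, None, np.inf
    for m in range(M + 1):
        bits = np.arange(-m, n)
        lam = 1.0 / (4.0 ** bits.max() * a)   # below this, all bits at e_thres
        p_prev = np.inf
        while True:
            # Optimal per-bit energies for this dual variable, Equation (38):
            e = np.maximum(e_thres,
                           np.log(np.maximum(4.0 ** bits * a * lam, 1.0)) / a)
            p = perf(e, m)
            if p <= V:                        # performance constraint met
                if e.sum() < e_min:
                    best_m, best_e, e_min = m, e, e.sum()
                break
            if p_prev - p < xi:               # stalled: no feasible e for this m
                break
            p_prev, lam = p, lam * beta       # raise the water level
    return best_m, best_e
```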

4.2. Optimization with a Limited Number of Energy Levels

In practice, the solution of Problem 1 makes the implementation costly, as each bit position should be stored in a separate memory bank. Therefore, we define a second optimization problem with only $L < m + n$ possible energy levels. For implementation purposes, we only consider small values of $L$ ($L < 10$). The vector $\mathbf{f} = [f_0, \ldots, f_{L-1}]$ contains the $L$ energy levels. We use $n_\ell$ to denote the number of bits allocated to energy level $f_\ell$, so that $\sum_{\ell=0}^{L-1} n_\ell = n + m$. This means that each memory bank of energy group $\ell$ has energy level $f_\ell$, i.e., $e_b = f_\ell$ for every bit position $b$ in group $\ell$. We write $\mathbf{n} = [n_0, \ldots, n_{L-1}]$ for the vector containing the $L$ values $n_\ell$.
In the following, for simplicity, we consider that the numbers of bits $n$ and $m$ are fixed, and we seek to optimize the total energy consumption of the unreliable memory for a fixed number of energy levels $L$. The objective is to reduce the total energy consumed by the unreliable memory by allocating different energy levels to the $L$ groups of bits. Two parameters are considered in this optimization: the value $f_\ell$ of each energy level and the number of bits $n_\ell$ allocated to each of these energy levels. The optimization problem can be written as
$$\min_{\mathbf{f}, \mathbf{n}} \; e_{\text{tot}} = \sum_{\ell=0}^{L-1} n_\ell f_\ell = \mathbf{n}^\top \mathbf{f}, \quad \text{s.t.} \quad P^*_{N|N} \preceq V, \; f_\ell \ge e_{\text{thres}} \text{ and } n_\ell \ge 1 \; \forall \ell \in [\![0, L-1]\!], \; \sum_{\ell=0}^{L-1} n_\ell = n + m. \qquad (39)$$
First, we solve the optimization problem in the case where we know which bit is allocated to which energy level. This means that the values of $\mathbf{n}$ are known and that we only want to compute the optimal values of the energy levels $\mathbf{f}$. In this case, the optimization problem can be written as
$$\min_{\mathbf{f}} \; e_{\text{tot}} = \sum_{\ell=0}^{L-1} n_\ell f_\ell = \mathbf{n}^\top \mathbf{f}, \quad \text{s.t.} \quad P^*_{N|N} \preceq V \text{ and } f_\ell \ge e_{\text{thres}} \; \forall \ell \in [\![0, L-1]\!]. \qquad (40)$$
Problem (40) is quite similar to the one described in Section 4.1 and can be solved using the same method as the one presented in Appendix B, relying on the KKT conditions. The optimal solution in this case is
$$f_\ell^* = \begin{cases} e_{\text{thres}}, & \text{if } \lambda < \frac{1}{a \sum_{b \in \mathcal{G}_\ell} 4^b}, \\ \frac{1}{a} \log\left( a \lambda \sum_{b \in \mathcal{G}_\ell} 4^b \right), & \text{otherwise}, \end{cases} \qquad (41)$$
where $\mathcal{G}_\ell$ denotes the set of bit positions allocated to energy level $\ell$.
This solution allows us to compute the optimal energy levels $\mathbf{f}$ for a given allocation of bits to levels. The second step consists of computing the best allocation of bits to each energy group. Given that we only consider small values of $L$, we compute the optimal solution (41) for each possible energy allocation of the bits. Then, the solution with the smallest total energy $\sum_{\ell=0}^{L-1} n_\ell f_\ell^*$ is retained; see the sketch below. Although Problem 2 leads to a more practical solution, the optimal total memory energy is expected to be higher for Problem 2 than for Problem 1.
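The following sketch illustrates this exhaustive search for small $L$, under our additional assumption (supported by the observation in Section 5.2 below) that each energy group gathers consecutive bit positions; perf(e, m) is the same assumed performance oracle as before.

```python
import itertools
import numpy as np

def compositions(B, L):
    """All splits of B bits into L contiguous groups of size >= 1."""
    for cuts in itertools.combinations(range(1, B), L - 1):
        yield np.diff(np.array((0,) + cuts + (B,)))

def optimize_levels(perf, V, n, m, L, a=12.8, beta=1.02, e_thres=0.05):
    """Sketch of Problem 2: enumerate allocations, water-fill with Eq. (41)."""
    pows = 4.0 ** np.arange(-m, n)            # 4^b for b = -m .. n-1
    best = (np.inf, None, None)               # (e_tot, levels f, sizes n_l)
    for sizes in compositions(n + m, L):
        edges = np.concatenate(([0], np.cumsum(sizes)))
        w = np.array([pows[edges[l]:edges[l + 1]].sum() for l in range(L)])
        lam = 1.0 / (a * w.max())
        for _ in range(100_000):              # raise the water level
            f = np.maximum(e_thres,
                           np.log(np.maximum(a * lam * w, 1.0)) / a)
            if perf(np.repeat(f, sizes), m) <= V:
                e_tot = float(np.dot(sizes, f))
                if e_tot < best[0]:
                    best = (e_tot, f, sizes)
                break
            lam *= beta
    return best
```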

5. Simulation Results

In our simulations, unless explicitly stated, we consider a simple tracking problem where the state vector $x$ is composed of two variables representing the position and velocity of an object. The measurements $y$ only consist of noisy observations of the position of the object. The process matrix $F$ and measurement matrix $H$ are defined as
$$F = \begin{pmatrix} 1 & \delta_t \\ 0 & 1 \end{pmatrix}, \quad H = \begin{pmatrix} 1 & 0 \end{pmatrix}, \qquad (42)$$
and the process noise covariance matrix $Q$ and measurement covariance matrix $R$ are given by
$$Q = \begin{pmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_x^2 \end{pmatrix}, \quad R = \sigma_y^2, \qquad (43)$$
where $\delta_t = 1$, $\sigma_x = 0.01$, and $\sigma_y = 10$. The factor $a$ in (9) is taken as $a = 12.8$, as in [13]. This section is divided into two parts. We first evaluate the accuracy of the proposed theoretical analysis, and we then provide solutions to the two considered optimization problems.
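For completeness, a minimal setup of this tracking model, reusing the kalman_step sketch from Section 2.1, could look as follows (the simulation loop itself is our own illustration):

```python
import numpy as np

# Tracking model of Equations (42)-(43): position and velocity states,
# noisy position measurements.
dt, sigma_x, sigma_y = 1.0, 0.01, 10.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = sigma_x ** 2 * np.eye(2)
R = np.array([[sigma_y ** 2]])

# Run the reliable, full-precision filter for N = 250 steps:
x_true, x_est, P = np.zeros(2), np.zeros(2), np.eye(2)
for _ in range(250):
    x_true = F @ x_true + sigma_x * np.random.randn(2)   # process, Eq. (1)
    y = H @ x_true + sigma_y * np.random.randn(1)        # measurement, Eq. (2)
    x_est, P, K = kalman_step(x_est, P, y, F, H, Q, R)
```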

5.1. Accuracy of the Theoretical Analysis

First, to evaluate the accuracy of the proposed theoretical analysis, we perform Monte Carlo simulations ($N_{mc} = 10^7$) and measure the covariance matrix of the estimation error at step $N = 250$, which gives the filter enough time to converge under normal conditions. This covariance matrix is compared with the theoretical expression of the covariance $P^*_{N|N}$ of the estimation error computed in Section 3.4. Figure 1 shows the variance of the estimation errors on the position and the velocity for different values of $m$ in the case of a reliable memory, i.e., considering only the quantization error and not the memory noise.
We observe that the theoretical predictions of the errors closely match the simulations, which shows the accuracy of our theoretical analysis. From Figure 1, we can observe that, for a small number of bits $m$, the quantization error is large and dominates the total estimation error. However, starting from $m = 10$ bits, the estimation error reaches a constant level, which can be interpreted as a lower bound on the estimation error that can be obtained with a standard Kalman filter for this tracking problem. This shows that, from $m = 10$ bits, the quantization errors become negligible compared to the estimation error achieved by the standard full-precision Kalman filter. Beyond this point, using more bits does not further reduce the estimation error, which justifies the need for optimizing the parameter $m$.
In a second step, we introduce the memory noise in addition to the quantization noise. Figure 2 shows the variance of the estimation error on the position depending on the total energy $e_{\text{tot}}$ for different values of $m$. The variance values were obtained both from Monte Carlo simulations and from the theoretical analysis of Section 3.4. Furthermore, Figure 3 shows the variance of the estimation error on the position depending on the number of bits $m$ for different values of the total energy $e_{\text{tot}}$. The comparison between theoretical results and Monte Carlo simulations shows the accuracy of the theoretical analysis that predicts the new covariance $P^*_{k|k}$.
From Figure 2, we can also see that both the number of bits and the total energy affect the variance of the estimation error. If the number of bits or the supplied energy is too low, then the quantization error or the memory noise dominates the total estimation error. However, there is a minimum number of bits, around $m = 12$, from which, given enough energy, it is possible to reach the minimum possible variance of the estimation error.
Moreover, from Figure 3, we can see that, for a low value of supplied energy per variable, the variance of the estimation error increases with the number of bits, as there is too little energy per bit. However, for a larger amount of energy ($e_{\text{tot}} > 10$), the variance of the estimation error decreases with the number of bits, since the quantization error decreases. Finally, for a large enough number of bits, the total estimation error only depends on the total energy and on the variance of the estimation error of a reliable full-precision filter.
As the work presented in this paper aims to reduce the energy consumption of the memory of a Kalman filter, it is of greater utility when the memory is large. For this reason, the previous results were also tested on a larger Kalman filter with a state vector $x$ of dimension $c = 20$. For the simulations on this large-size example, we use a state transition model that shifts the entries of the state vector to the next position at each iteration, such as the one used in [50]. That is,
$$F_{i,j} = \begin{cases} 1, & \text{if } i - j = 1, \\ 0, & \text{otherwise}, \end{cases} \qquad (44)$$
together with $F_{1,c} = 1$, so that $F$ is a cyclic shift. The initial state vector is drawn from a normal distribution.
In this case, the performance of the filter is measured by the trace of the covariance matrix $P_{N|N}$. The results in Figure 4 and Figure 5 show that the same conclusions as for the small-size Kalman filter can be drawn, and that the method presented in this paper can therefore be applied to large-size filters.

5.2. Solutions to the Optimization Problems

We now focus on the optimization problems introduced in Section 4, starting with the first one. Figure 6 shows the amount of energy $e_{\text{tot}}$ needed to store each number in the unreliable memory to achieve a fixed variance of the estimation error on the position, for each value of $m$.
The total energy $e_{\text{tot}}$ was calculated both using the optimal allocation from Algorithm 1 and using a uniform energy allocation. From Figure 6, we can see that the total energy $e_{\text{tot}}$ of the memory increases slightly with the number of bits $m + n$. This slight increase in memory consumption is due to the form of the optimal solution (38): once the minimum number of bits needed to achieve the performance constraint is reached, additional bits are set at the minimum energy threshold $e_{\text{thres}}$.
Figure 7 compares the optimal solution from Figure 6 with a uniform energy allocation. It shows that the optimal energy allocation allows for a significant energy gain compared to the uniform allocation. Here, for the minimum number of bits needed to achieve the performance constraint, the optimal allocation requires 56% less energy than the uniform allocation.
We now focus on the second optimization problem, defined in Section 4.2 at Equation (40), where only a limited number of energy levels is available. Figure 8 shows the total energy needed for each variable in memory to achieve a fixed level of error, depending on the number $L$ of possible energy levels. For each considered number of levels $L \in [\![1, 7]\!]$, the total energy $e_{\text{tot}}$ was computed for all possible energy allocations using the optimal solution (41). The minimum energy for each value of $L$ was then kept and is shown in Figure 8. This minimum energy is compared with the minimum energy needed for Problem (36), where there are as many possible energy levels as bits. Here, the total number of quantization bits is $B = 20$.
We observe that even a small number of energy levels $L$ can lead to significant energy gains. In this case, only seven energy levels are needed to achieve 95% of the maximum energy gain that was obtained in the first optimization problem. When looking at the optimal energy allocation for each value of $L$ bit by bit, we notice that, in most cases, the optimal solution occurs when the energy levels are uniformly shared between the bits. This means that, if there are $B$ bits and $L$ available levels and $L$ is a divisor of $B$, then each group of bits assigned to an energy level has size $n_\ell = B / L$.

6. Conclusions

In this paper, we studied a quantized Kalman filter implemented with unreliable memories. We derived analytical expressions for the covariance matrix of the estimation error and provided updated filter equations that take into account all considered sources of errors. We proposed and solved two optimization problems that allowed us to find the best trade-offs between the energy consumption and the performance of the filter. The simulation results showed the accuracy of the theoretical analysis and illustrated the significant energy gains provided by our approach.
Due to the generic nature of the considered error propagation model, these results could be used for various realistic noise-versus-energy models of unreliable components. Furthermore, the methodology presented in this work could also be extended to other algorithms where sources of unreliability can be introduced, such as belief propagation [51], binary recursive estimation [10], and multi-agent systems [52].

Author Contributions

Conceptualization, E.D., A.A.-E.-B., L.R.V. and F.L.-P.; Data curation, J.K.; Formal analysis, J.K.; Funding acquisition, E.D., A.A.-E.-B. and F.L.-P.; Methodology, J.K.; Project administration, E.D., A.A.-E.-B. and F.L.-P.; Software, J.K.; Supervision, E.D., A.A.-E.-B. and L.R.V.; Validation, E.D., A.A.-E.-B., L.R.V. and F.L.-P.; Visualization, J.K. and F.L.-P.; Writing—original draft, J.K.; Writing—review and editing, J.K., E.D., A.A.-E.-B., L.R.V. and F.L.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grant ANR-17-CE40-0020 (EF-FECtive project), by Fonds de recherche du Québec—Nature et technologies (grant 2021-NC-286323) and by the “Make our Planet Great Again” Initiative of the Thomas Jefferson Fund.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Computation of $\tilde{x}_{k+1|k+1} - x_{k+1}$

$$\tilde{x}_{k+1|k+1} - x_{k+1} = \hat{x}_{k+1|k+1} + \Delta \tilde{x}_{k+1|k+1} - x_{k+1} \qquad (A1)$$
$$= (D_{k+1} + \delta_{D_{k+1}}) \Delta \tilde{x}_{k|k} + (D_{k+1} + \delta_{D_{k+1}}) \hat{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + (K_{k+1} + \delta_{K_{k+1}}) y_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1} - x_{k+1} \qquad (A2)$$
$$= (D_{k+1} + \delta_{D_{k+1}}) \Delta \tilde{x}_{k|k} + (D_{k+1} + \delta_{D_{k+1}}) \hat{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + (K_{k+1} + \delta_{K_{k+1}}) (H x_{k+1} + v_{k+1}) + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1} - x_{k+1} \qquad (A3)$$
$$= (D_{k+1} + \delta_{D_{k+1}}) \Delta \tilde{x}_{k|k} + (D_{k+1} + \delta_{D_{k+1}}) \hat{x}_{k|k} + D_{k+1} \epsilon_{x_{k|k}} + (K_{k+1} + \delta_{K_{k+1}}) v_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1} + \left( (K_{k+1} + \delta_{K_{k+1}}) H - I \right) x_{k+1} \qquad (A4)$$
$$= (D_{k+1} + \delta_{D_{k+1}}) \tilde{x}_{k|k} - \left( I - (K_{k+1} + \delta_{K_{k+1}}) H \right) F x_k + (K_{k+1} + \delta_{K_{k+1}}) v_{k+1} + K_{k+1} \epsilon_{y_{k+1}} + \epsilon_\times + \delta \hat{x}^{\text{mem}}_{k+1|k+1} + D_{k+1} \epsilon_{x_{k|k}} + \left( (K_{k+1} + \delta_{K_{k+1}}) H - I \right) u_k. \qquad (A5)$$
If $H$ and $F$ are only composed of integer components, then, due to how the quantization is done,
$$(D_{k+1} + \delta_{D_{k+1}}) = \left( I - (K_{k+1} + \delta_{K_{k+1}}) H \right) F. \qquad (A6)$$
Substituting (A6) into (A5) combines the first two terms into $(D_{k+1} + \delta_{D_{k+1}})(\tilde{x}_{k|k} - x_k)$, which yields Equations (32)–(34).

Appendix B. Computation of the Optimal Solution to Problem 1

From the optimization Problem (37), we can define the Lagrangian:
$$\mathcal{L}(\mathbf{e}, \nu, \boldsymbol{\lambda}) = \sum_{b=0}^{B-1} e_b + \nu \left( \sum_{b=0}^{B-1} 4^b e^{-e_b a} - V \right) - \sum_{b=0}^{B-1} \lambda_b (e_b - e_{\text{thres}}), \qquad (A7)$$
where the unreliable bit positions are re-indexed from $0$ to $B - 1$.
From the KKT conditions, the optimal solution $\mathbf{e}^*$ satisfies:
$$\nu \left( \sum_{b=0}^{B-1} 4^b e^{-e_b^* a} - V \right) = 0, \quad \nu \ge 0, \qquad (A8)$$
$$\lambda_b (e_b^* - e_{\text{thres}}) = 0, \quad \lambda_b \ge 0 \quad \forall b \in [\![0, B-1]\!], \qquad (A9)$$
$$\frac{\partial \mathcal{L}}{\partial e_b^*} = 1 - \nu \, 4^b a \, e^{-e_b^* a} - \lambda_b = 0. \qquad (A10)$$
From (A9) and (A10):
$$\lambda_b = 1 - \nu \, 4^b a \, e^{-e_b^* a} \ge 0. \qquad (A11)$$
If $\nu = 0$, then $\lambda_b = 1$ and $e_b = e_{\text{thres}}$ for all $b$. Therefore, we claim that $\nu \ne 0$, and thus $\sum_{b=0}^{B-1} 4^b e^{-e_b^* a} = V$.
If $\nu \le \frac{1}{4^b a}$, then it is not possible to have $e_b > e_{\text{thres}}$, since it would mean that $\lambda_b = 0$, and thus $\nu = \frac{1}{4^b a} e^{e_b a} > \frac{1}{4^b a}$, which is in contradiction with the hypothesis. Therefore, if $\nu \le \frac{1}{4^b a}$, then $e_b = e_{\text{thres}}$.
If $\nu > \frac{1}{4^b a}$, then, by the same logic as before, $e_b > e_{\text{thres}}$. In this case, $\lambda_b = 0$, and thus $e_b = \frac{1}{a} \log(\nu \, 4^b a)$.

References

  1. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  2. Lai, X.; Yang, T.; Wang, Z.; Chen, P. IoT Implementation of Kalman Filter to Improve Accuracy of Air Quality Monitoring and Prediction. Appl. Sci. 2019, 9, 1831. [Google Scholar] [CrossRef] [Green Version]
  3. Anania, G.; Tognetti, A.; Carbonaro, N.; Tesconi, M.; Cutolo, F.; Zupone, G.; Rossi, D.D. Development of a novel algorithm for human fall detection using wearable sensors. In Proceedings of the IEEE SENSORS, Lecce, Italy, 26–29 October 2008; pp. 1336–1339. [Google Scholar] [CrossRef]
  4. Wang, T.; Hu, J.; Ma, T.; Song, J. Forest fire detection system based on Fuzzy Kalman filter. In Proceedings of the 2020 International Conference on Urban Engineering and Management Science (ICUEMS), Zhuhai, China, 24–26 April 2020; pp. 630–633. [Google Scholar] [CrossRef]
  5. Sung, K.; Kim, H. Simplified KF-based energy-efficient vehicle positioning for smartphones. J. Commun. Netw. 2020, 22, 93–107. [Google Scholar] [CrossRef]
  6. Horowitz, M. 1.1 Computing’s energy problem (and what we can do about it). In Proceedings of the IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), San Francisco, CA, USA, 9–13 February 2014; pp. 10–14. [Google Scholar] [CrossRef]
  7. Dreslinski, R.G.; Wieckowski, M.; Blaauw, D.; Sylvester, D.; Mudge, T. Near-Threshold Computing: Reclaiming Moore’s Law Through Energy Efficient Integrated Circuits. Proc. IEEE 2010, 98, 253–266. [Google Scholar] [CrossRef]
  8. Kim, Y.; Kang, M.; Varshney, L.R.; Shanbhag, N.R. Generalized Water-Filling for Source-Aware Energy-Efficient SRAMs. IEEE Trans. Commun. 2018, 66, 4826–4841. [Google Scholar] [CrossRef] [Green Version]
  9. Kim, Y.; Jeon, Y.; Guyot, C.; Cassuto, Y. Optimizing the Write Fidelity of MRAMs. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 792–797. [Google Scholar] [CrossRef]
  10. Dupraz, E.; Varshney, L.R. Binary Recursive Estimation on Noisy Hardware. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 877–881. [Google Scholar]
  11. Yang, Y.; Grover, P.; Kar, S. Computing Linear Transformations With Unreliable Components. IEEE Trans. Inf. Theory 2017, 63, 3729–3756. [Google Scholar] [CrossRef]
  12. Henwood, S.; Leduc-Primeau, F.; Savaria, Y. Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks. In Proceedings of the 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; pp. 271–275. [Google Scholar] [CrossRef] [Green Version]
  13. Hacene, G.B.; Leduc-Primeau, F.; Soussia, A.B.; Gripon, V.; Gagnon, F. Training Modern Deep Neural Networks for Memory-Fault Robustness. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019; pp. 1–5. [Google Scholar] [CrossRef] [Green Version]
  14. Shang, Y. Resilient consensus in multi-agent systems with state constraints. Automatica 2020, 122, 109288. [Google Scholar] [CrossRef]
Figure 1. Theoretical and simulated variance of the estimation error on the position and velocity, depending on the number of bits in the representation, using a reliable memory.
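To make the setting of Figure 1 concrete, the sketch below runs one step of a constant-velocity Kalman filter in which every variable written back to memory is passed through an m-bit uniform quantizer. This is a minimal illustration under assumed parameters: the model matrices, the quantizer range x_max, and the names quantize and kf_step are placeholders, not the authors' exact implementation.

```python
import numpy as np

def quantize(x, m, x_max=100.0):
    # m-bit uniform quantizer on [-x_max, x_max] (assumed dynamic range).
    step = 2.0 * x_max / (2 ** m)
    return np.clip(np.round(x / step) * step, -x_max, x_max)

# Constant-velocity tracking model (illustrative parameters).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance

def kf_step(x_est, P, y, m):
    # Prediction.
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    # Update.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    # Quantize everything stored in memory before the next iteration.
    return quantize(x_new, m), quantize(P_new, m)
```

Running this step over a long sequence of noisy measurements and averaging the squared position error for several values of m would produce curves analogous to those of Figure 1.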
Figure 2. Theoretical and simulated variance of the estimation error on the position, depending on the energy supplied to each variable, using an unreliable memory, for different numbers m of bits in the representation.
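The unreliable-memory experiments of Figures 2–5 require a relation between the energy supplied to a memory and its bit-error rate; a commonly assumed form is an exponential decay of the bit-flip probability with supplied energy. The sketch below uses such an assumed model to corrupt an m-bit word after storage; the constant c and the function names are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bit_error_prob(e, c=1.0):
    # Assumed exponential energy-reliability trade-off: supplying more
    # energy e makes bit flips exponentially less likely.
    return np.exp(-c * e)

def store_unreliable(word, m, e):
    # Flip each of the m stored bits independently with probability p(e).
    # `word` is an unsigned integer in [0, 2**m).
    p = bit_error_prob(e)
    flips = rng.random(m) < p
    mask = 0
    for j in range(m):
        if flips[j]:
            mask |= 1 << j
    return word ^ mask
```

Sweeping e and measuring the resulting estimation variance reproduces the qualitative behaviour visible in Figure 2: below some energy, memory faults dominate the error, while above it the quantization noise does.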
Figure 3. Theoretical and simulated variance of the estimation error on the position, depending on the number of quantization bits, using an unreliable memory, for different total energy values $e_{\mathrm{tot}}$.
Figure 4. Theoretical and simulated variance of the estimation error on the position, depending on the energy supplied to each variable, using an unreliable memory, for different numbers m of bits in the representation, in the case of the large-size example.
Figure 5. Theoretical and simulated variance of the estimation error on the position, depending on the number of quantization bits, using an unreliable memory, for different total energy values $e_{\mathrm{tot}}$, in the case of the large-size example.
Figure 6. Energy needed to store each variable in an unreliable memory to achieve various desired variances of the estimation error on the position, depending on the number m of bits in the representation, with the optimal energy allocation.
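Figure 6 reports an optimal energy allocation. As a toy instance of the idea, under the exponential bit-flip model assumed above one can balance the expected error contribution of each bit position (bit j contributes on the order of $4^j \cdot p(e_j)$ to the mean squared error), which yields a closed-form, water-filling-like allocation. The function below is a hedged sketch of that heuristic, not the constrained optimization actually solved in the paper.

```python
import numpy as np

def allocate_energy(m, e_tot, c=1.0):
    # Balance 4**j * exp(-c * e_j) across bit positions j = 0..m-1
    # subject to sum(e_j) = e_tot. Solving gives a linear profile:
    #   e_j = e_tot / m + (ln 4 / c) * (j - (m - 1) / 2),
    # i.e., more significant bits receive more energy.
    j = np.arange(m)
    e = e_tot / m + (np.log(4.0) / c) * (j - (m - 1) / 2.0)
    return np.clip(e, 0.0, None)  # crude fix-up: energies cannot be negative

print(allocate_energy(m=8, e_tot=40.0))
```

The least significant bits receive almost no energy while the most significant bits are strongly protected, which captures the intuition behind allocating energy unevenly across the stored representation.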
Figure 7. Energy needed to store each variable in an unreliable memory to achieve a variance of the estimation error on the position of $P_{N|N}[0,0] = 15$, depending on the number m of bits in the representation.
Figure 8. Energy $e_{\mathrm{tot}}$ needed to store a variable in memory, for different numbers L of available energy levels, to achieve a fixed covariance value.