Article

Asymptotic Rate-Distortion Analysis of Symmetric Remote Gaussian Source Coding: Centralized Encoding vs. Distributed Encoding

Yizhong Wang, Li Xie, Siyao Zhou, Mengzhen Wang and Jun Chen
1 College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin 300222, China
2 Department of Electrical System of Launch Vehicle, Institute of Aerospace System Engineering Shanghai, Shanghai Academy of Spaceflight Technology, Shanghai 201109, China
3 Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4K1, Canada
* Author to whom correspondence should be addressed.
Entropy 2019, 21(2), 213; https://doi.org/10.3390/e21020213
Submission received: 10 January 2019 / Revised: 17 February 2019 / Accepted: 20 February 2019 / Published: 23 February 2019
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)

Abstract

Consider a symmetric multivariate Gaussian source with $\ell$ components, which are corrupted by independent and identically distributed Gaussian noises; these noisy components are compressed at a certain rate, and the compressed version is leveraged to reconstruct the source subject to a mean squared error distortion constraint. The rate-distortion analysis is performed for two scenarios: centralized encoding (where the noisy source components are jointly compressed) and distributed encoding (where the noisy source components are separately compressed). It is shown, among other things, that the gap between the rate-distortion functions associated with these two scenarios admits a simple characterization in the large $\ell$ limit.

1. Introduction

Many applications involve collection and transmission of potentially noise-corrupted data. It is often necessary to compress the collected data to reduce the transmission cost. The remote source coding problem aims to characterize the optimal scheme for such compression and the relevant information-theoretic limit. In this work we study a quadratic Gaussian version of the remote source coding problem, where compression is performed on the noise-corrupted components of a symmetric multivariate Gaussian source. A prescribed mean squared error distortion constraint is imposed on the reconstruction of the noise-free source components; moreover, it is assumed that the noises across different source components are independent and obey the same Gaussian distribution. Two scenarios are considered: centralized encoding (see Figure 1) and distributed encoding (see Figure 2). It is worth noting that the distributed encoding scenario is closely related to the CEO problem, which has been studied extensively [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18].
The present paper is primarily devoted to the comparison of the rate-distortion functions associated with the aforementioned two scenarios. We are particularly interested in understanding how the rate penalty for distributed encoding (relative to centralized encoding) depends on the target distortion as well as the parameters of the source and noise models. Although the information-theoretic results needed for this comparison are available in the literature or can be derived in a relatively straightforward manner, the relevant expressions are too unwieldy to analyze. For this reason, we focus on the asymptotic regime where the number of source components, denoted by $\ell$, is sufficiently large. Indeed, it will be seen that the gap between the two rate-distortion functions admits a simple characterization in the large $\ell$ limit, yielding useful insights into the fundamental difference between centralized encoding and distributed encoding, which are hard to obtain otherwise.
The rest of this paper is organized as follows. We state the problem definitions and the main results in Section 2. The proofs are provided in Section 3. We conclude the paper in Section 4.
Notation: The expectation operator and the transpose operator are denoted by $\mathrm{E}[\cdot]$ and $(\cdot)^T$, respectively. An $\ell$-dimensional all-one row vector is written as $\mathbf{1}_\ell$. We use $W^n$ as an abbreviation of $(W(1), \ldots, W(n))$. The cardinality of a set $\mathcal{C}$ is denoted by $|\mathcal{C}|$. We write $g(\ell) = O(f(\ell))$ if the absolute value of $g(\ell)/f(\ell)$ is bounded for all sufficiently large $\ell$. Throughout this paper, the base of the logarithm function is $e$, and $\log^+ x \triangleq \max\{\log x, 0\}$.

2. Problem Definitions and Main Results

Let $S \triangleq (S_1, \ldots, S_\ell)^T$ be the sum of two mutually independent $\ell$-dimensional ($\ell \geq 2$) zero-mean Gaussian random vectors, source $X \triangleq (X_1, \ldots, X_\ell)^T$ and noise $Z \triangleq (Z_1, \ldots, Z_\ell)^T$, with
\[
\mathrm{E}[X_i X_j] = \begin{cases} \gamma_X, & i = j, \\ \rho_X \gamma_X, & i \neq j, \end{cases} \qquad \mathrm{E}[Z_i Z_j] = \begin{cases} \gamma_Z, & i = j, \\ 0, & i \neq j, \end{cases}
\]
where $\gamma_X > 0$, $\rho_X \in [-\frac{1}{\ell-1}, 1]$, and $\gamma_Z \geq 0$. Moreover, let $\{(X(t), Z(t), S(t))\}_{t=1}^{\infty}$ be i.i.d. copies of $(X, Z, S)$.
Definition 1 (Centralized encoding).
A rate-distortion pair $(r, d)$ is said to be achievable with centralized encoding if, for any $\epsilon > 0$, there exists an encoding function $\phi^{(n)}: \mathbb{R}^{\ell \times n} \rightarrow \mathcal{C}^{(n)}$ such that
\[
\frac{1}{n} \log |\mathcal{C}^{(n)}| \leq r + \epsilon, \qquad \frac{1}{\ell n} \sum_{i=1}^{\ell} \sum_{t=1}^{n} \mathrm{E}[(X_i(t) - \hat{X}_i(t))^2] \leq d + \epsilon,
\]
where $\hat{X}_i(t) \triangleq \mathrm{E}[X_i(t) \mid \phi^{(n)}(S^n)]$. For a given $d$, we denote by $\underline{r}(d)$ the minimum $r$ such that $(r, d)$ is achievable with centralized encoding.
Definition 2 (Distributed encoding).
A rate-distortion pair $(r, d)$ is said to be achievable with distributed encoding if, for any $\epsilon > 0$, there exist encoding functions $\phi_i^{(n)}: \mathbb{R}^n \rightarrow \mathcal{C}_i^{(n)}$, $i = 1, \ldots, \ell$, such that
\[
\frac{1}{n} \sum_{i=1}^{\ell} \log |\mathcal{C}_i^{(n)}| \leq r + \epsilon, \qquad \frac{1}{\ell n} \sum_{i=1}^{\ell} \sum_{t=1}^{n} \mathrm{E}[(X_i(t) - \hat{X}_i(t))^2] \leq d + \epsilon,
\]
where $\hat{X}_i(t) \triangleq \mathrm{E}[X_i(t) \mid \phi_1^{(n)}(S_1^n), \ldots, \phi_\ell^{(n)}(S_\ell^n)]$. For a given $d$, we denote by $\overline{r}(d)$ the minimum $r$ such that $(r, d)$ is achievable with distributed encoding.
We will refer to $\underline{r}(d)$ as the rate-distortion function of symmetric remote Gaussian source coding with centralized encoding, and $\overline{r}(d)$ as the rate-distortion function of symmetric remote Gaussian source coding with distributed encoding. It is clear that $\underline{r}(d) \leq \overline{r}(d)$ for any $d$ since distributed encoding can be simulated by centralized encoding. Moreover, it is easy to show that $\underline{r}(d) = \overline{r}(d) = 0$ for $d \geq \gamma_X$ (since the distortion constraint is trivially satisfied with the reconstruction set to be zero) and $\underline{r}(d) = \overline{r}(d) = \infty$ for $d \leq d_{\min}$ (since $d_{\min}$ is the minimum achievable distortion when $\{S(t)\}_{t=1}^{\infty}$ is directly available at the decoder), where (see Section 3.1 for a detailed derivation)
\[
d_{\min} \triangleq \frac{1}{\ell} \mathrm{E}\big[(X - \mathrm{E}[X \mid S])^T (X - \mathrm{E}[X \mid S])\big] = \begin{cases} \dfrac{(\ell-1)\gamma_X\gamma_Z}{\ell\gamma_X + (\ell-1)\gamma_Z}, & \rho_X = -\frac{1}{\ell-1}, \\[2mm] \dfrac{(\ell\rho_X\gamma_X + \lambda_X)\gamma_Z}{\ell(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)} + \dfrac{(\ell-1)\lambda_X\gamma_Z}{\ell(\lambda_X + \gamma_Z)}, & \rho_X \in \left(-\frac{1}{\ell-1}, 1\right), \\[2mm] \dfrac{\gamma_X\gamma_Z}{\ell\gamma_X + \gamma_Z}, & \rho_X = 1, \end{cases}
\]
with $\lambda_X \triangleq (1 - \rho_X)\gamma_X$. Henceforth we shall focus on the case $d \in (d_{\min}, \gamma_X)$.
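The closed form of $d_{\min}$ can be checked directly against the model. The following sketch (our own numerical illustration, not part of the original paper; all variable names are ours) builds the covariance matrices of $X$ and $Z$, computes the per-component MMSE of estimating $X$ from $S = X + Z$, and compares it with the middle case of the formula above; it also confirms that the source covariance has one eigenvalue $\ell\rho_X\gamma_X + \lambda_X$ and $\ell-1$ eigenvalues $\lambda_X$, the decomposition exploited in Section 3.1.

```python
import numpy as np

# Model parameters (arbitrary example values).
ell, gamma_X, gamma_Z, rho_X = 6, 1.0, 0.1, 0.5
lambda_X = (1.0 - rho_X) * gamma_X

# Covariance matrices of the source X and the noise Z.
Sigma_X = gamma_X * ((1.0 - rho_X) * np.eye(ell) + rho_X * np.ones((ell, ell)))
Sigma_Z = gamma_Z * np.eye(ell)

# Eigenvalues of Sigma_X: one equals ell*rho_X*gamma_X + lambda_X, the rest equal lambda_X.
print(np.round(np.sort(np.linalg.eigvalsh(Sigma_X)), 6))

# Per-component MMSE of estimating X from S = X + Z.
Sigma_S = Sigma_X + Sigma_Z
mmse_matrix = Sigma_X - Sigma_X @ np.linalg.solve(Sigma_S, Sigma_X)
d_min_numeric = np.trace(mmse_matrix) / ell

# Closed-form expression (middle case, rho_X strictly between -1/(ell-1) and 1).
d_min_formula = ((ell * rho_X * gamma_X + lambda_X) * gamma_Z
                 / (ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z))
                 + (ell - 1) * lambda_X * gamma_Z / (ell * (lambda_X + gamma_Z)))

print(d_min_numeric, d_min_formula)  # the two values should agree up to numerical precision
```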
Lemma 1.
For $d \in (d_{\min}, \gamma_X)$,
\[
\underline{r}(d) = \begin{cases} \dfrac{\ell-1}{2} \log \dfrac{\ell\gamma_X^2}{(\ell\gamma_X + (\ell-1)\gamma_Z)d - (\ell-1)\gamma_X\gamma_Z}, & \rho_X = -\frac{1}{\ell-1}, \\[2mm] \dfrac{1}{2} \log^+ \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2}{(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)\,\xi} + \dfrac{\ell-1}{2} \log^+ \dfrac{\lambda_X^2}{(\lambda_X + \gamma_Z)\,\xi}, & \rho_X \in \left(-\frac{1}{\ell-1}, 1\right), \\[2mm] \dfrac{1}{2} \log \dfrac{\ell\gamma_X^2}{(\ell\gamma_X + \gamma_Z)d - \gamma_X\gamma_Z}, & \rho_X = 1, \end{cases}
\]
where
\[
\xi \triangleq \begin{cases} d - d_{\min}, & d \leq \min\left\{ \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2}{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z}, \dfrac{\lambda_X^2}{\lambda_X + \gamma_Z} \right\} + d_{\min}, \\[2mm] \dfrac{\ell(d - d_{\min})}{\ell-1} - \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2}{(\ell-1)(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)}, & d > \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2}{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z} + d_{\min}, \\[2mm] \ell(d - d_{\min}) - \dfrac{(\ell-1)\lambda_X^2}{\lambda_X + \gamma_Z}, & d > \dfrac{\lambda_X^2}{\lambda_X + \gamma_Z} + d_{\min}. \end{cases}
\]
Proof. 
See Section 3.1. □
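For a concrete sense of Lemma 1, the sketch below (our own illustration, based on the reconstructed formulas above; the function name is ours) evaluates $\underline{r}(d)$ for $\rho_X \in (-\frac{1}{\ell-1}, 1)$ by first forming the water level $\xi$ and then applying the two $\log^+$ terms.

```python
import numpy as np

def r_centralized(d, ell, gamma_X, gamma_Z, rho_X):
    """Centralized rate-distortion function of Lemma 1 for rho_X in (-1/(ell-1), 1) (sketch)."""
    lambda_X = (1.0 - rho_X) * gamma_X
    sigma1 = (ell * rho_X * gamma_X + lambda_X) ** 2 / (ell * rho_X * gamma_X + lambda_X + gamma_Z)
    sigma2 = lambda_X ** 2 / (lambda_X + gamma_Z)
    d_min = ((ell * rho_X * gamma_X + lambda_X) * gamma_Z
             / (ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z))
             + (ell - 1) * lambda_X * gamma_Z / (ell * (lambda_X + gamma_Z)))
    # Water level xi, chosen according to which component (if any) is saturated.
    if d <= min(sigma1, sigma2) + d_min:
        xi = d - d_min
    elif d > sigma1 + d_min:
        xi = (ell * (d - d_min) - sigma1) / (ell - 1)
    else:  # d > sigma2 + d_min
        xi = ell * (d - d_min) - (ell - 1) * sigma2
    log_plus = lambda x: max(np.log(x), 0.0)
    return 0.5 * log_plus(sigma1 / xi) + (ell - 1) / 2 * log_plus(sigma2 / xi)

print(r_centralized(0.3, ell=10, gamma_X=1.0, gamma_Z=0.1, rho_X=0.5))  # rate in nats
```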
The following result can be deduced from ([19] Theorem 1) (see also [11,15]).
Lemma 2.
For $d \in (d_{\min}, \gamma_X)$,
\[
\overline{r}(d) = \frac{1}{2} \log \frac{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z + \lambda_Q}{\lambda_Q} + \frac{\ell-1}{2} \log \frac{\lambda_X + \gamma_Z + \lambda_Q}{\lambda_Q},
\]
where
\[
\lambda_Q \triangleq \frac{-b + \sqrt{b^2 - 4ac}}{2a}
\]
with
\[
a \triangleq \ell(\gamma_X - d), \qquad b \triangleq (\ell\rho_X\gamma_X + \lambda_X)(\lambda_X + 2\gamma_Z) + (\ell-1)\lambda_X(\ell\rho_X\gamma_X + \lambda_X + 2\gamma_Z) - \ell(\ell\rho_X\gamma_X + 2\lambda_X + 2\gamma_Z)d, \qquad c \triangleq \ell(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)(\lambda_X + \gamma_Z)(d_{\min} - d).
\]
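Lemma 2 is equally easy to evaluate numerically once $\lambda_Q$ is obtained as the indicated root of the quadratic $a\lambda^2 + b\lambda + c = 0$. The sketch below (our own illustration under the reconstructed coefficients; the function name is ours) does exactly that.

```python
import numpy as np

def r_distributed(d, ell, gamma_X, gamma_Z, rho_X):
    """Distributed rate-distortion function of Lemma 2 (sketch)."""
    lambda_X = (1.0 - rho_X) * gamma_X
    d_min = ((ell * rho_X * gamma_X + lambda_X) * gamma_Z
             / (ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z))
             + (ell - 1) * lambda_X * gamma_Z / (ell * (lambda_X + gamma_Z)))
    a = ell * (gamma_X - d)
    b = ((ell * rho_X * gamma_X + lambda_X) * (lambda_X + 2 * gamma_Z)
         + (ell - 1) * lambda_X * (ell * rho_X * gamma_X + lambda_X + 2 * gamma_Z)
         - ell * (ell * rho_X * gamma_X + 2 * lambda_X + 2 * gamma_Z) * d)
    c = ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z) * (lambda_X + gamma_Z) * (d_min - d)
    lambda_Q = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root of the quadratic
    return (0.5 * np.log((ell * rho_X * gamma_X + lambda_X + gamma_Z + lambda_Q) / lambda_Q)
            + (ell - 1) / 2 * np.log((lambda_X + gamma_Z + lambda_Q) / lambda_Q))

# The distributed rate is never smaller than the centralized one for the same (d, ell).
print(r_distributed(0.3, ell=10, gamma_X=1.0, gamma_Z=0.1, rho_X=0.5))
```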
The expressions of $\underline{r}(d)$ and $\overline{r}(d)$ as shown in Lemmas 1 and 2 are quite complicated, rendering it difficult to make analytical comparisons. Fortunately, they become significantly simplified in the asymptotic regime where $\ell \rightarrow \infty$ (with $d$ fixed). To perform this asymptotic analysis, it is necessary to restrict attention to the case $\rho_X \in [0, 1]$; moreover, without loss of generality, we assume $d \in (d_{\min}(\infty), \gamma_X)$, where
\[
d_{\min}(\infty) \triangleq \lim_{\ell \rightarrow \infty} d_{\min} = \begin{cases} \dfrac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z}, & \rho_X \in [0, 1), \\[2mm] 0, & \rho_X = 1. \end{cases}
\]
Theorem 1 (Centralized encoding).
1. $\rho_X = 0$: For $d \in (d_{\min}(\infty), \gamma_X)$,
\[
\underline{r}(d) = \frac{\ell}{2} \log \frac{\gamma_X^2}{(\gamma_X + \gamma_Z)d - \gamma_X\gamma_Z}.
\]
2. $\rho_X \in (0, 1]$: For $d \in (d_{\min}(\infty), \gamma_X)$,
\[
\underline{r}(d) = \begin{cases} \dfrac{\ell}{2} \log \dfrac{\lambda_X^2}{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z} + \dfrac{1}{2} \log \ell + \underline{\alpha} + O\!\left(\dfrac{1}{\ell}\right), & d < \lambda_X, \\[2mm] \dfrac{1}{2} \log \ell + \dfrac{1}{2} \log \dfrac{\rho_X\gamma_X(\lambda_X + \gamma_Z)}{\lambda_X^2} + \dfrac{\gamma_Z^2}{2\lambda_X^2} + O\!\left(\dfrac{1}{\ell}\right), & d = \lambda_X, \\[2mm] \dfrac{1}{2} \log \dfrac{\rho_X\gamma_X}{d - \lambda_X} + O\!\left(\dfrac{1}{\ell}\right), & d > \lambda_X, \end{cases}
\]
where
\[
\underline{\alpha} \triangleq \frac{1}{2} \log \frac{\rho_X\gamma_X(\lambda_X + \gamma_Z)}{\lambda_X^2} + \frac{\gamma_Z^2}{2((\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z)}.
\]
Proof. 
See Section 3.2. □
Theorem 2 (Distributed encoding).
1. $\rho_X = 0$: For $d \in (d_{\min}(\infty), \gamma_X)$,
\[
\overline{r}(d) = \frac{\ell}{2} \log \frac{\gamma_X^2}{(\gamma_X + \gamma_Z)d - \gamma_X\gamma_Z}.
\]
2. $\rho_X \in (0, 1]$: For $d \in (d_{\min}(\infty), \gamma_X)$,
\[
\overline{r}(d) = \begin{cases} \dfrac{\ell}{2} \log \dfrac{\lambda_X^2}{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z} + \dfrac{1}{2} \log \ell + \overline{\alpha} + O\!\left(\dfrac{1}{\ell}\right), & d < \lambda_X, \\[2mm] \dfrac{(\lambda_X + \gamma_Z)\sqrt{\ell}}{2\lambda_X} + \dfrac{1}{4} \log \ell + \dfrac{1}{2} \log \dfrac{\rho_X}{1 - \rho_X} - \dfrac{(\lambda_X + \gamma_Z)(\lambda_X - \rho_X\gamma_Z)}{4\rho_X\lambda_X^2} + O\!\left(\dfrac{1}{\sqrt{\ell}}\right), & d = \lambda_X, \\[2mm] \dfrac{1}{2} \log \dfrac{\rho_X\gamma_X}{d - \lambda_X} + \dfrac{(\lambda_X + \gamma_Z)(\gamma_X - d)}{2\rho_X\gamma_X(d - \lambda_X)} + O\!\left(\dfrac{1}{\ell}\right), & d > \lambda_X, \end{cases}
\]
where
\[
\overline{\alpha} \triangleq \frac{1}{2} \log \frac{\rho_X\gamma_X(\lambda_X - d)}{\lambda_X^2} + \frac{(\lambda_X + \gamma_Z)d^2}{2(\lambda_X - d)((\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z)}.
\]
Proof. 
See Section 3.3. □
Remark 1.
One can readily recover ([20] Theorem 3) for the case m = 1 (see [20] for the definition of parameter m) and Oohama’s celebrated result for the quadratic Gaussian CEO problem ([3] Corollary 1) by setting γ Z = 0 and ρ X = 1 , respectively, in Theorem 2.
The following result is a simple corollary of Theorems 1 and 2.
Corollary 1 (Asymptotic gap).
1. $\rho_X = 0$: For $d \in (d_{\min}(\infty), \gamma_X)$,
\[
\overline{r}(d) - \underline{r}(d) = 0.
\]
2. $\rho_X \in (0, 1]$: For $d \in (d_{\min}(\infty), \gamma_X)$,
\[
\lim_{\ell \rightarrow \infty} \big( \overline{r}(d) - \underline{r}(d) \big) = \psi(d) \triangleq \begin{cases} \dfrac{1}{2} \log \dfrac{\lambda_X - d}{\lambda_X + \gamma_Z} + \dfrac{\gamma_Z + d}{2(\lambda_X - d)}, & d < \lambda_X, \\[2mm] \infty, & d = \lambda_X, \\[2mm] \dfrac{(\lambda_X + \gamma_Z)(\gamma_X - d)}{2\rho_X\gamma_X(d - \lambda_X)}, & d > \lambda_X. \end{cases}
\]
Remark 2.
When $\rho_X = 1$, we have $\psi(d) = \frac{\gamma_Z(\gamma_X - d)}{2\gamma_X d}$, which is a monotonically decreasing function over $(0, \gamma_X)$, converging to $\infty$ (here we assume $\gamma_Z > 0$) and $0$ as $d \rightarrow 0$ and $\gamma_X$, respectively. When $\rho_X \in (0, 1)$, it is clear that the function $\psi(d)$ is monotonically decreasing over $(\lambda_X, \gamma_X)$, converging to $\infty$ and $0$ as $d \rightarrow \lambda_X$ and $\gamma_X$, respectively; moreover, since $\psi'(d) = \frac{\gamma_Z + d}{2(\lambda_X - d)^2} > 0$ for $d \in (d_{\min}(\infty), \lambda_X)$, the function $\psi(d)$ is monotonically increasing over $(d_{\min}(\infty), \lambda_X)$, converging to $\tau(\gamma_Z) \triangleq \frac{1}{2} \log \frac{\lambda_X^2}{(\lambda_X + \gamma_Z)^2} + \frac{2\lambda_X\gamma_Z + \gamma_Z^2}{2\lambda_X^2}$ and $\infty$ as $d \rightarrow d_{\min}(\infty)$ and $\lambda_X$, respectively. Note that $\tau'(\gamma_Z) = \frac{2\lambda_X\gamma_Z + \gamma_Z^2}{\lambda_X^2(\lambda_X + \gamma_Z)} \geq 0$ for $\gamma_Z \in [0, \infty)$; therefore, the minimum value of $\tau(\gamma_Z)$ over $[0, \infty)$ is 0, which is attained at $\gamma_Z = 0$. See Figure 3 and Figure 4 for some graphical illustrations of $\psi(d)$.
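The qualitative behavior of $\psi(d)$ described above (increasing on $(d_{\min}(\infty), \lambda_X)$, diverging at $d = \lambda_X$, and decreasing on $(\lambda_X, \gamma_X)$) is easy to reproduce numerically. The following sketch (a hypothetical snippet of ours, in the spirit of Figures 3 and 4, not taken from the paper) evaluates $\psi(d)$ on a grid.

```python
import numpy as np

def psi(d, gamma_X, gamma_Z, rho_X):
    """Asymptotic rate gap of Corollary 1 (part 2), for rho_X in (0, 1]."""
    lambda_X = (1.0 - rho_X) * gamma_X
    if d < lambda_X:
        return 0.5 * np.log((lambda_X - d) / (lambda_X + gamma_Z)) + (gamma_Z + d) / (2 * (lambda_X - d))
    if d == lambda_X:
        return np.inf
    return (lambda_X + gamma_Z) * (gamma_X - d) / (2 * rho_X * gamma_X * (d - lambda_X))

gamma_X, gamma_Z, rho_X = 1.0, 0.1, 0.5
lambda_X = (1.0 - rho_X) * gamma_X
d_min_inf = lambda_X * gamma_Z / (lambda_X + gamma_Z)
for d in np.linspace(d_min_inf + 0.01, gamma_X - 0.01, 9):
    print(f"d = {d:.3f}   psi(d) = {psi(d, gamma_X, gamma_Z, rho_X):.4f}")
```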

3. Proofs

3.1. Proof of Lemma 1

It is known [21] that r ̲ ( d ) is given by the solution to the following optimization problem:
\[
(P1) \qquad \min_{p_{\hat{X} \mid S}} I(S; \hat{X}) \quad \text{subject to} \quad \mathrm{E}[(X - \hat{X})^T (X - \hat{X})] \leq \ell d, \quad X \leftrightarrow S \leftrightarrow \hat{X} \ \text{form a Markov chain}.
\]
Let $\tilde{X} \triangleq \Theta X$, $\tilde{Z} \triangleq \Theta Z$, and $\tilde{S} \triangleq \Theta S$, where $\Theta$ is an arbitrary (real) unitary matrix with the first row being $\frac{1}{\sqrt{\ell}} \mathbf{1}_\ell$. Since unitary transformations are invertible and preserve the Euclidean norm, we can write (P1) equivalently as
\[
(P2) \qquad \min_{p_{\hat{X} \mid \tilde{S}}} I(\tilde{S}; \hat{X}) \quad \text{subject to} \quad \mathrm{E}[(\tilde{X} - \hat{X})^T (\tilde{X} - \hat{X})] \leq \ell d, \quad \tilde{X} \leftrightarrow \tilde{S} \leftrightarrow \hat{X} \ \text{form a Markov chain}.
\]
For the same reason, we have
\[
d_{\min} = \frac{1}{\ell} \mathrm{E}\big[(\tilde{X} - \mathrm{E}[\tilde{X} \mid \tilde{S}])^T (\tilde{X} - \mathrm{E}[\tilde{X} \mid \tilde{S}])\big].
\]
Denote the $i$-th components of $\tilde{X}$, $\tilde{Z}$, and $\tilde{S}$ by $\tilde{X}_i$, $\tilde{Z}_i$, and $\tilde{S}_i$, respectively, $i = 1, \ldots, \ell$. Clearly, $\tilde{S}_i = \tilde{X}_i + \tilde{Z}_i$, $i = 1, \ldots, \ell$. Moreover, it can be verified that $\tilde{X}_1, \ldots, \tilde{X}_\ell, \tilde{Z}_1, \ldots, \tilde{Z}_\ell$ are independent zero-mean Gaussian random variables with
\[
\mathrm{E}[(\tilde{X}_1)^2] = \ell\rho_X\gamma_X + \lambda_X, \tag{1}
\]
\[
\mathrm{E}[(\tilde{X}_i)^2] = \lambda_X, \quad i = 2, \ldots, \ell, \tag{2}
\]
\[
\mathrm{E}[(\tilde{Z}_i)^2] = \gamma_Z, \quad i = 1, \ldots, \ell. \tag{3}
\]
Now denote the $i$-th component of $\hat{S} \triangleq \mathrm{E}[\tilde{X} \mid \tilde{S}]$ by $\hat{S}_i$, $i = 1, \ldots, \ell$. We have
\[
\hat{S}_i = \mathrm{E}[\tilde{X}_i \mid \tilde{S}_i], \quad i = 1, \ldots, \ell,
\]
and
\[
\mathrm{E}[(\hat{S}_1)^2] = \begin{cases} 0, & \rho_X = -\frac{1}{\ell-1}, \\[1mm] \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2}{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z}, & \rho_X \in \left(-\frac{1}{\ell-1}, 1\right], \end{cases} \tag{4}
\]
\[
\mathrm{E}[(\hat{S}_i)^2] = \begin{cases} \dfrac{\lambda_X^2}{\lambda_X + \gamma_Z}, & \rho_X \in \left[-\frac{1}{\ell-1}, 1\right), \\[1mm] 0, & \rho_X = 1, \end{cases} \quad i = 2, \ldots, \ell. \tag{5}
\]
Note that
\[
\mathrm{E}[(\tilde{X} - \hat{S})^T (\tilde{X} - \hat{S})] = \sum_{i=1}^{\ell} \mathrm{E}[(\tilde{X}_i)^2] - \sum_{i=1}^{\ell} \mathrm{E}[(\hat{S}_i)^2],
\]
which, together with (1)–(5), proves
\[
d_{\min} = \frac{1}{\ell} \mathrm{E}[(\tilde{X} - \hat{S})^T (\tilde{X} - \hat{S})] = \begin{cases} \dfrac{(\ell-1)\gamma_X\gamma_Z}{\ell\gamma_X + (\ell-1)\gamma_Z}, & \rho_X = -\frac{1}{\ell-1}, \\[2mm] \dfrac{(\ell\rho_X\gamma_X + \lambda_X)\gamma_Z}{\ell(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)} + \dfrac{(\ell-1)\lambda_X\gamma_Z}{\ell(\lambda_X + \gamma_Z)}, & \rho_X \in \left(-\frac{1}{\ell-1}, 1\right), \\[2mm] \dfrac{\gamma_X\gamma_Z}{\ell\gamma_X + \gamma_Z}, & \rho_X = 1. \end{cases}
\]
Clearly, $\hat{S}$ is determined by $\tilde{S}$; moreover, for any $\ell$-dimensional random vector $\hat{X}$ jointly distributed with $(\tilde{X}, \tilde{S})$ such that $\tilde{X} \leftrightarrow \tilde{S} \leftrightarrow \hat{X}$ form a Markov chain, we have
\[
\mathrm{E}[(\tilde{X} - \hat{X})^T (\tilde{X} - \hat{X})] = \mathrm{E}[(\hat{S} - \hat{X})^T (\hat{S} - \hat{X})] + \mathrm{E}[(\tilde{X} - \hat{S})^T (\tilde{X} - \hat{S})] = \mathrm{E}[(\hat{S} - \hat{X})^T (\hat{S} - \hat{X})] + \ell d_{\min}.
\]
Therefore, ( P 2 ) is equivalent to
\[
(P3) \qquad \min_{p_{\hat{X} \mid \hat{S}}} I(\hat{S}; \hat{X}) \quad \text{subject to} \quad \mathrm{E}[(\hat{S} - \hat{X})^T (\hat{S} - \hat{X})] \leq \ell(d - d_{\min}).
\]
One can readily complete the proof of Lemma 1 by recognizing that the solution to ( P 3 ) is given by the well-known reverse water-filling formula ([22] Theorem 13.3.3).
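As a concrete illustration of this last step, the sketch below (our own, not the authors'; all names are ours) applies the generic reverse water-filling solution to the component variances derived above, namely one component with variance $(\ell\rho_X\gamma_X+\lambda_X)^2/(\ell\rho_X\gamma_X+\lambda_X+\gamma_Z)$ and $\ell-1$ components with variance $\lambda_X^2/(\lambda_X+\gamma_Z)$, under a total distortion budget $\ell(d - d_{\min})$; the result should coincide with Lemma 1.

```python
import numpy as np

def reverse_waterfilling_rate(variances, total_distortion):
    """Rate (in nats) of the reverse water-filling solution for independent
    Gaussian sources with the given variances and a total distortion budget."""
    variances = np.asarray(variances, dtype=float)
    # Bisection on the water level theta such that sum_i min(theta, var_i) = total_distortion.
    lo, hi = 0.0, float(variances.max())
    for _ in range(200):
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, variances).sum() > total_distortion:
            hi = theta
        else:
            lo = theta
    theta = 0.5 * (lo + hi)
    per_component_distortion = np.minimum(theta, variances)
    return 0.5 * np.sum(np.log(variances / per_component_distortion))

# Two-level spectrum arising in the proof of Lemma 1 (example parameters).
ell, gamma_X, gamma_Z, rho_X, d = 10, 1.0, 0.1, 0.5, 0.3
lambda_X = (1.0 - rho_X) * gamma_X
sigma1 = (ell * rho_X * gamma_X + lambda_X) ** 2 / (ell * rho_X * gamma_X + lambda_X + gamma_Z)
sigma2 = lambda_X ** 2 / (lambda_X + gamma_Z)
d_min = ((ell * rho_X * gamma_X + lambda_X) * gamma_Z
         / (ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z))
         + (ell - 1) * lambda_X * gamma_Z / (ell * (lambda_X + gamma_Z)))
variances = np.array([sigma1] + [sigma2] * (ell - 1))
print(reverse_waterfilling_rate(variances, ell * (d - d_min)))
```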

3.2. Proof of Theorem 1

Setting $\rho_X = 0$ in Lemma 1 gives
\[
\underline{r}(d) = \frac{\ell}{2} \log \frac{\gamma_X^2}{(\gamma_X + \gamma_Z)d - \gamma_X\gamma_Z}
\]
for $d \in \left(\frac{\gamma_X\gamma_Z}{\gamma_X + \gamma_Z}, \gamma_X\right)$. Setting $\rho_X = 1$ in Lemma 1 gives
\[
\underline{r}(d) = \frac{1}{2} \log \frac{\ell\gamma_X^2}{(\ell\gamma_X + \gamma_Z)d - \gamma_X\gamma_Z}
\]
for $d \in \left(\frac{\gamma_X\gamma_Z}{\ell\gamma_X + \gamma_Z}, \gamma_X\right)$; moreover, we have
\[
\frac{1}{2} \log \frac{\ell\gamma_X^2}{(\ell\gamma_X + \gamma_Z)d - \gamma_X\gamma_Z} = \frac{1}{2} \log \frac{\gamma_X}{d} + O\!\left(\frac{1}{\ell}\right),
\]
and $\frac{\gamma_X\gamma_Z}{\ell\gamma_X + \gamma_Z} \rightarrow 0$ as $\ell \rightarrow \infty$.
It remains to treat the case $\rho_X \in (0, 1)$. In this case, it can be deduced from Lemma 1 that
\[
\underline{r}(d) = \begin{cases} \dfrac{1}{2} \log \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2(\lambda_X + \gamma_Z)}{\lambda_X^2(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)} + \dfrac{\ell}{2} \log \dfrac{\lambda_X^2}{(\lambda_X + \gamma_Z)(d - d_{\min})}, & d \in \left(d_{\min}, \dfrac{\lambda_X^2}{\lambda_X + \gamma_Z} + d_{\min}\right], \\[2mm] \dfrac{1}{2} \log \dfrac{(\ell\rho_X\gamma_X + \lambda_X)^2(\lambda_X + \gamma_Z)}{(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)\big((\lambda_X + \gamma_Z)\ell(d - d_{\min}) - (\ell-1)\lambda_X^2\big)}, & d \in \left(\dfrac{\lambda_X^2}{\lambda_X + \gamma_Z} + d_{\min}, \gamma_X\right), \end{cases}
\]
and we have
\[
d_{\min} = \frac{(\ell\rho_X\gamma_X + \lambda_X)\gamma_Z}{\ell(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)} + \frac{(\ell-1)\lambda_X\gamma_Z}{\ell(\lambda_X + \gamma_Z)} = \frac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z} + \frac{\rho_X\gamma_X\gamma_Z^2}{(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)(\lambda_X + \gamma_Z)} \tag{6}
\]
\[
= \frac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z} + \frac{\gamma_Z^2}{\ell(\lambda_X + \gamma_Z)} + O\!\left(\frac{1}{\ell^2}\right). \tag{7}
\]
Consider the following two subcases separately.
  • $d \in \left(\frac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z}, \lambda_X\right]$

    It can be seen from (6) that $d_{\min}$ is a monotonically decreasing function of $\ell$ and converges to $\frac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z}$ as $\ell \rightarrow \infty$. Therefore, we have $d \in \left(d_{\min}, \frac{\lambda_X^2}{\lambda_X + \gamma_Z} + d_{\min}\right]$ and consequently
\[
\underline{r}(d) = \frac{1}{2} \log \frac{(\ell\rho_X\gamma_X + \lambda_X)^2(\lambda_X + \gamma_Z)}{\lambda_X^2(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)} + \frac{\ell}{2} \log \frac{\lambda_X^2}{(\lambda_X + \gamma_Z)(d - d_{\min})} \tag{8}
\]
    when $\ell$ is sufficiently large. Note that
\[
\frac{1}{2} \log \frac{(\ell\rho_X\gamma_X + \lambda_X)^2}{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z} = \frac{1}{2} \log \ell + \frac{1}{2} \log (\rho_X\gamma_X) + O\!\left(\frac{1}{\ell}\right) \tag{9}
\]
    and
\[
\frac{\ell}{2} \log (d - d_{\min}) = \frac{\ell}{2} \log \left( d - \frac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z} - \frac{\gamma_Z^2}{\ell(\lambda_X + \gamma_Z)} + O\!\left(\frac{1}{\ell^2}\right) \right) \tag{10}
\]
\[
= \frac{\ell}{2} \log \frac{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z}{\lambda_X + \gamma_Z} - \frac{\gamma_Z^2}{2((\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z)} + O\!\left(\frac{1}{\ell}\right), \tag{11}
\]
    where (10) is due to (7). Substituting (9) and (11) into (8) gives
\[
\underline{r}(d) = \frac{\ell}{2} \log \frac{\lambda_X^2}{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z} + \frac{1}{2} \log \ell + \frac{1}{2} \log \frac{\rho_X\gamma_X(\lambda_X + \gamma_Z)}{\lambda_X^2} + \frac{\gamma_Z^2}{2((\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z)} + O\!\left(\frac{1}{\ell}\right).
\]
    In particular, we have
\[
\underline{r}(\lambda_X) = \frac{1}{2} \log \ell + \frac{1}{2} \log \frac{\rho_X\gamma_X(\lambda_X + \gamma_Z)}{\lambda_X^2} + \frac{\gamma_Z^2}{2\lambda_X^2} + O\!\left(\frac{1}{\ell}\right).
\]
  • $d \in (\lambda_X, \gamma_X)$

    Since $d_{\min}$ converges to $\frac{\lambda_X\gamma_Z}{\lambda_X + \gamma_Z}$ as $\ell \rightarrow \infty$, it follows that $d \in \left(\frac{\lambda_X^2}{\lambda_X + \gamma_Z} + d_{\min}, \gamma_X\right)$ and consequently
\[
\underline{r}(d) = \frac{1}{2} \log \frac{(\ell\rho_X\gamma_X + \lambda_X)^2(\lambda_X + \gamma_Z)}{(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)\big((\lambda_X + \gamma_Z)\ell(d - d_{\min}) - (\ell-1)\lambda_X^2\big)} \tag{12}
\]
    when $\ell$ is sufficiently large. One can readily verify that
\[
\frac{1}{2} \log \frac{(\ell\rho_X\gamma_X + \lambda_X)^2}{(\ell\rho_X\gamma_X + \lambda_X + \gamma_Z)\big((\lambda_X + \gamma_Z)\ell(d - d_{\min}) - (\ell-1)\lambda_X^2\big)} = \frac{1}{2} \log \frac{\rho_X\gamma_X}{(\lambda_X + \gamma_Z)(d - \lambda_X)} + O\!\left(\frac{1}{\ell}\right). \tag{13}
\]
    Substituting (13) into (12) gives
\[
\underline{r}(d) = \frac{1}{2} \log \frac{\rho_X\gamma_X}{d - \lambda_X} + O\!\left(\frac{1}{\ell}\right).
\]
    This completes the proof of Theorem 1.
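To see the expansion at work, the following self-contained sketch (our own check, relying on the reconstructed formulas; not part of the original proof) compares the exact expression (8) with the asymptotic approximation of Theorem 1 for a point $d < \lambda_X$ and increasing $\ell$; the discrepancy should shrink roughly like $1/\ell$.

```python
import numpy as np

gamma_X, gamma_Z, rho_X, d = 1.0, 0.1, 0.5, 0.3   # a point with d < lambda_X
lambda_X = (1.0 - rho_X) * gamma_X

for ell in [10, 100, 1000, 10000]:
    d_min = ((ell * rho_X * gamma_X + lambda_X) * gamma_Z
             / (ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z))
             + (ell - 1) * lambda_X * gamma_Z / (ell * (lambda_X + gamma_Z)))
    # Exact value, Equation (8).
    exact = (0.5 * np.log((ell * rho_X * gamma_X + lambda_X) ** 2 * (lambda_X + gamma_Z)
                          / (lambda_X ** 2 * (ell * rho_X * gamma_X + lambda_X + gamma_Z)))
             + ell / 2 * np.log(lambda_X ** 2 / ((lambda_X + gamma_Z) * (d - d_min))))
    # Asymptotic expression of Theorem 1 (case d < lambda_X), dropping the O(1/ell) term.
    alpha = (0.5 * np.log(rho_X * gamma_X * (lambda_X + gamma_Z) / lambda_X ** 2)
             + gamma_Z ** 2 / (2 * ((lambda_X + gamma_Z) * d - lambda_X * gamma_Z)))
    asymptotic = (ell / 2 * np.log(lambda_X ** 2 / ((lambda_X + gamma_Z) * d - lambda_X * gamma_Z))
                  + 0.5 * np.log(ell) + alpha)
    print(ell, exact - asymptotic)
```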

3.3. Proof of Theorem 2

One can readily prove part one of Theorem 2 by setting ρ X = 0 in Lemma 2. So only part two of Theorem 2 remains to be proved. Note that
\[
b = g_1\ell^2 + g_2\ell, \qquad c = h_1\ell^2 + h_2\ell,
\]
where
\[
g_1 \triangleq \rho_X\gamma_X(\lambda_X - d), \qquad g_2 \triangleq \lambda_X^2 + 2\gamma_X\gamma_Z - 2(\lambda_X + \gamma_Z)d,
\]
\[
h_1 \triangleq \rho_X\gamma_X(\lambda_X + \gamma_Z)(d_{\min}(\infty) - d), \qquad h_2 \triangleq \rho_X\gamma_X\gamma_Z^2 + \lambda_X\gamma_Z(\lambda_X + \gamma_Z) - (\lambda_X + \gamma_Z)^2 d.
\]
We shall consider the following three cases separately.
  • $d < \lambda_X$

    In this case $g_1 > 0$ and consequently
\[
\lambda_Q = \frac{-b + b\sqrt{1 - \frac{4ac}{b^2}}}{2a} \tag{14}
\]
    when $\ell$ is sufficiently large. Note that
\[
\sqrt{1 - \frac{4ac}{b^2}} = 1 - \frac{2ac}{b^2} - \frac{2a^2c^2}{b^4} + O\!\left(\frac{1}{\ell^3}\right). \tag{15}
\]
    Substituting (15) into (14) gives
\[
\lambda_Q = -\frac{c}{b} - \frac{ac^2}{b^3} + O\!\left(\frac{1}{\ell^2}\right). \tag{16}
\]
    It is easy to show that
\[
-\frac{c}{b} = -\frac{h_1}{g_1} - \frac{g_1 h_2 - g_2 h_1}{g_1^2 \ell} + O\!\left(\frac{1}{\ell^2}\right), \tag{17}
\]
\[
-\frac{ac^2}{b^3} = -\frac{(\gamma_X - d)h_1^2}{g_1^3 \ell} + O\!\left(\frac{1}{\ell^2}\right). \tag{18}
\]
    Combining (16), (17) and (18) yields
\[
\lambda_Q = \eta_1 + \frac{\eta_2}{\ell} + O\!\left(\frac{1}{\ell^2}\right),
\]
    where
\[
\eta_1 \triangleq -\frac{h_1}{g_1}, \qquad \eta_2 \triangleq -\frac{g_1^2 h_2 - g_1 g_2 h_1 + (\gamma_X - d)h_1^2}{g_1^3}.
\]
    Moreover, it can be verified via algebraic manipulations that
\[
\eta_1 = \frac{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z}{\lambda_X - d}, \qquad \eta_2 = -\frac{\lambda_X^2 d^2}{(\lambda_X - d)^3}.
\]
    Now we write $\overline{r}(d)$ equivalently as
\[
\overline{r}(d) = \frac{1}{2} \log \frac{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z + \lambda_Q}{\lambda_X + \gamma_Z + \lambda_Q} + \frac{\ell}{2} \log \frac{\lambda_X + \gamma_Z + \lambda_Q}{\lambda_Q}. \tag{19}
\]
    Note that
\[
\frac{1}{2} \log \frac{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z + \lambda_Q}{\lambda_X + \gamma_Z + \lambda_Q} = \frac{1}{2} \log \ell + \frac{1}{2} \log \frac{\rho_X\gamma_X}{\lambda_X + \gamma_Z + \eta_1} + O\!\left(\frac{1}{\ell}\right) = \frac{1}{2} \log \ell + \frac{1}{2} \log \frac{\rho_X\gamma_X(\lambda_X - d)}{\lambda_X^2} + O\!\left(\frac{1}{\ell}\right) \tag{20}
\]
    and
\[
\frac{\ell}{2} \log \frac{\lambda_X + \gamma_Z + \lambda_Q}{\lambda_Q} = \frac{\ell}{2} \log \frac{\lambda_X + \gamma_Z + \eta_1}{\eta_1} - \frac{(\lambda_X + \gamma_Z)\eta_2}{2(\lambda_X + \gamma_Z + \eta_1)\eta_1} + O\!\left(\frac{1}{\ell}\right) = \frac{\ell}{2} \log \frac{\lambda_X^2}{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z} + \frac{(\lambda_X + \gamma_Z)d^2}{2(\lambda_X - d)((\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z)} + O\!\left(\frac{1}{\ell}\right). \tag{21}
\]
    Substituting (20) and (21) into (19) gives
\[
\overline{r}(d) = \frac{\ell}{2} \log \frac{\lambda_X^2}{(\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z} + \frac{1}{2} \log \ell + \frac{1}{2} \log \frac{\rho_X\gamma_X(\lambda_X - d)}{\lambda_X^2} + \frac{(\lambda_X + \gamma_Z)d^2}{2(\lambda_X - d)((\lambda_X + \gamma_Z)d - \lambda_X\gamma_Z)} + O\!\left(\frac{1}{\ell}\right).
\]
  • $d = \lambda_X$

    In this case $g_1 = 0$ and consequently
\[
\lambda_Q = \frac{-g_2 + \sqrt{g_2^2 - 4(\gamma_X - \lambda_X)(h_1\ell + h_2)}}{2(\gamma_X - \lambda_X)}. \tag{22}
\]
    Note that
\[
g_2^2 - 4(\gamma_X - \lambda_X)(h_1\ell + h_2) = -4(\gamma_X - \lambda_X)h_1\ell + O(1). \tag{23}
\]
    Substituting (23) into (22) gives
\[
\lambda_Q = \mu_1\sqrt{\ell} + \mu_2 + O\!\left(\frac{1}{\sqrt{\ell}}\right),
\]
    where
\[
\mu_1 \triangleq \sqrt{\frac{-h_1}{\gamma_X - \lambda_X}}, \qquad \mu_2 \triangleq -\frac{g_2}{2(\gamma_X - \lambda_X)}.
\]
    Moreover, it can be verified via algebraic manipulations that
\[
\mu_1 = \lambda_X, \qquad \mu_2 = \frac{(1 - \rho_X)^2\gamma_X - 2\rho_X\gamma_Z}{2\rho_X}.
\]
    Now we proceed to derive an asymptotic expression of $\overline{r}(d)$. Note that
\[
\frac{1}{2} \log \frac{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z + \lambda_Q}{\lambda_X + \gamma_Z + \lambda_Q} = \frac{1}{4} \log \ell + \frac{1}{2} \log \frac{\rho_X\gamma_X}{\mu_1} + O\!\left(\frac{1}{\sqrt{\ell}}\right) = \frac{1}{4} \log \ell + \frac{1}{2} \log \frac{\rho_X}{1 - \rho_X} + O\!\left(\frac{1}{\sqrt{\ell}}\right) \tag{24}
\]
    and
\[
\frac{\ell}{2} \log \frac{\lambda_X + \gamma_Z + \lambda_Q}{\lambda_Q} = \frac{\ell(\lambda_X + \gamma_Z)}{2\lambda_Q} - \frac{\ell(\lambda_X + \gamma_Z)^2}{4\lambda_Q^2} + O\!\left(\frac{1}{\sqrt{\ell}}\right) = \frac{(\lambda_X + \gamma_Z)\sqrt{\ell}}{2\mu_1} - \frac{(\lambda_X + \gamma_Z)(\lambda_X + \gamma_Z + 2\mu_2)}{4\mu_1^2} + O\!\left(\frac{1}{\sqrt{\ell}}\right) = \frac{(\lambda_X + \gamma_Z)\sqrt{\ell}}{2\lambda_X} - \frac{(\lambda_X + \gamma_Z)(\lambda_X - \rho_X\gamma_Z)}{4\rho_X\lambda_X^2} + O\!\left(\frac{1}{\sqrt{\ell}}\right). \tag{25}
\]
    Substituting (24) and (25) into (19) gives
\[
\overline{r}(\lambda_X) = \frac{(\lambda_X + \gamma_Z)\sqrt{\ell}}{2\lambda_X} + \frac{1}{4} \log \ell + \frac{1}{2} \log \frac{\rho_X}{1 - \rho_X} - \frac{(\lambda_X + \gamma_Z)(\lambda_X - \rho_X\gamma_Z)}{4\rho_X\lambda_X^2} + O\!\left(\frac{1}{\sqrt{\ell}}\right).
\]
  • $d > \lambda_X$

    In this case $g_1 < 0$ and consequently
\[
\lambda_Q = \frac{-b - b\sqrt{1 - \frac{4ac}{b^2}}}{2a} \tag{26}
\]
    when $\ell$ is sufficiently large. Note that
\[
\sqrt{1 - \frac{4ac}{b^2}} = 1 + O\!\left(\frac{1}{\ell}\right). \tag{27}
\]
    Substituting (27) into (26) gives
\[
\lambda_Q = -\frac{b}{a} + O(1). \tag{28}
\]
    It is easy to show that
\[
-\frac{b}{a} = \frac{\ell\rho_X\gamma_X(d - \lambda_X)}{\gamma_X - d} + O(1). \tag{29}
\]
    Combining (28) and (29) yields
\[
\lambda_Q = \frac{\ell\rho_X\gamma_X(d - \lambda_X)}{\gamma_X - d} + O(1).
\]
    Now we proceed to derive an asymptotic expression of $\overline{r}(d)$. Note that
\[
\frac{1}{2} \log \frac{\ell\rho_X\gamma_X + \lambda_X + \gamma_Z + \lambda_Q}{\lambda_X + \gamma_Z + \lambda_Q} = \frac{1}{2} \log \frac{\rho_X\gamma_X}{d - \lambda_X} + O\!\left(\frac{1}{\ell}\right) \tag{30}
\]
    and
\[
\frac{\ell}{2} \log \frac{\lambda_X + \gamma_Z + \lambda_Q}{\lambda_Q} = \frac{\ell(\lambda_X + \gamma_Z)}{2\lambda_Q} + O\!\left(\frac{1}{\ell}\right) = \frac{(\lambda_X + \gamma_Z)(\gamma_X - d)}{2\rho_X\gamma_X(d - \lambda_X)} + O\!\left(\frac{1}{\ell}\right). \tag{31}
\]
    Substituting (30) and (31) into (19) gives
\[
\overline{r}(d) = \frac{1}{2} \log \frac{\rho_X\gamma_X}{d - \lambda_X} + \frac{(\lambda_X + \gamma_Z)(\gamma_X - d)}{2\rho_X\gamma_X(d - \lambda_X)} + O\!\left(\frac{1}{\ell}\right).
\]
    This completes the proof of Theorem 2.
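A quick numerical check of this last expansion (our own sketch, based on the reconstructed Lemma 2) is given below: for a point $d > \lambda_X$, the exact $\overline{r}(d)$ should approach $\frac{1}{2}\log\frac{\rho_X\gamma_X}{d-\lambda_X} + \frac{(\lambda_X+\gamma_Z)(\gamma_X-d)}{2\rho_X\gamma_X(d-\lambda_X)}$ as $\ell$ grows.

```python
import numpy as np

gamma_X, gamma_Z, rho_X, d = 1.0, 0.1, 0.5, 0.7   # a point with d > lambda_X
lambda_X = (1.0 - rho_X) * gamma_X

for ell in [10, 100, 1000, 10000]:
    d_min = ((ell * rho_X * gamma_X + lambda_X) * gamma_Z
             / (ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z))
             + (ell - 1) * lambda_X * gamma_Z / (ell * (lambda_X + gamma_Z)))
    a = ell * (gamma_X - d)
    b = ((ell * rho_X * gamma_X + lambda_X) * (lambda_X + 2 * gamma_Z)
         + (ell - 1) * lambda_X * (ell * rho_X * gamma_X + lambda_X + 2 * gamma_Z)
         - ell * (ell * rho_X * gamma_X + 2 * lambda_X + 2 * gamma_Z) * d)
    c = ell * (ell * rho_X * gamma_X + lambda_X + gamma_Z) * (lambda_X + gamma_Z) * (d_min - d)
    lambda_Q = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    exact = (0.5 * np.log((ell * rho_X * gamma_X + lambda_X + gamma_Z + lambda_Q) / lambda_Q)
             + (ell - 1) / 2 * np.log((lambda_X + gamma_Z + lambda_Q) / lambda_Q))
    asymptotic = (0.5 * np.log(rho_X * gamma_X / (d - lambda_X))
                  + (lambda_X + gamma_Z) * (gamma_X - d) / (2 * rho_X * gamma_X * (d - lambda_X)))
    print(ell, exact - asymptotic)
```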

4. Conclusions

We have studied the problem of symmetric remote Gaussian source coding and made a systematic comparison of centralized encoding and distributed encoding in terms of the asymptotic rate-distortion performance. It is of great interest to extend our work by considering more general source and noise models.

Author Contributions

Conceptualization, Y.W. and J.C.; methodology, Y.W.; validation, L.X., S.Z. and M.W.; formal analysis, L.X., S.Z. and M.W.; investigation, L.X., S.Z. and M.W.; writing—original draft preparation, Y.W.; writing—review and editing, J.C.; supervision, J.C.

Funding

S.Z. was supported in part by the China Scholarship Council.

Acknowledgments

The authors wish to thank the anonymous reviewer for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berger, T.; Zhang, Z.; Viswanathan, H. The CEO problem. IEEE Trans. Inf. Theory 1996, 42, 887–902.
  2. Viswanathan, H.; Berger, T. The quadratic Gaussian CEO problem. IEEE Trans. Inf. Theory 1997, 43, 1549–1559.
  3. Oohama, Y. The rate-distortion function for the quadratic Gaussian CEO problem. IEEE Trans. Inf. Theory 1998, 44, 1057–1070.
  4. Prabhakaran, V.; Tse, D.; Ramchandran, K. Rate region of the quadratic Gaussian CEO problem. In Proceedings of the IEEE International Symposium on Information Theory, Chicago, IL, USA, 27 June–2 July 2004; p. 117.
  5. Chen, J.; Zhang, X.; Berger, T.; Wicker, S.B. An upper bound on the sum-rate distortion function and its corresponding rate allocation schemes for the CEO problem. IEEE J. Sel. Areas Commun. 2004, 22, 977–987.
  6. Oohama, Y. Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder. IEEE Trans. Inf. Theory 2005, 51, 2577–2593.
  7. Chen, J.; Berger, T. Successive Wyner-Ziv coding scheme and its application to the quadratic Gaussian CEO problem. IEEE Trans. Inf. Theory 2008, 54, 1586–1603.
  8. Wagner, A.B.; Tavildar, S.; Viswanath, P. Rate region of the quadratic Gaussian two-encoder source-coding problem. IEEE Trans. Inf. Theory 2008, 54, 1938–1961.
  9. Tavildar, S.; Viswanath, P.; Wagner, A.B. The Gaussian many-help-one distributed source coding problem. IEEE Trans. Inf. Theory 2010, 56, 564–581.
  10. Wang, J.; Chen, J.; Wu, X. On the sum rate of Gaussian multiterminal source coding: New proofs and results. IEEE Trans. Inf. Theory 2010, 56, 3946–3960.
  11. Yang, Y.; Xiong, Z. On the generalized Gaussian CEO problem. IEEE Trans. Inf. Theory 2012, 58, 3350–3372.
  12. Wang, J.; Chen, J. Vector Gaussian two-terminal source coding. IEEE Trans. Inf. Theory 2013, 59, 3693–3708.
  13. Courtade, T.A.; Weissman, T. Multiterminal source coding under logarithmic loss. IEEE Trans. Inf. Theory 2014, 60, 740–761.
  14. Wang, J.; Chen, J. Vector Gaussian multiterminal source coding. IEEE Trans. Inf. Theory 2014, 60, 5533–5552.
  15. Oohama, Y. Indirect and direct Gaussian distributed source coding problems. IEEE Trans. Inf. Theory 2014, 60, 7506–7539.
  16. Nangir, M.; Asvadi, R.; Ahmadian-Attari, M.; Chen, J. Analysis and code design for the binary CEO problem under logarithmic loss. IEEE Trans. Commun. 2018, 66, 6003–6014.
  17. Ugur, Y.; Aguerri, I.-E.; Zaidi, A. Vector Gaussian CEO problem under logarithmic loss and applications. arXiv 2018, arXiv:1811.03933.
  18. Nangir, M.; Asvadi, R.; Chen, J.; Ahmadian-Attari, M.; Matsumoto, T. Successive Wyner-Ziv coding for the binary CEO problem under logarithmic loss. arXiv 2018, arXiv:1812.11584.
  19. Wang, Y.; Xie, L.; Zhang, X.; Chen, J. Robust distributed compression of symmetrically correlated Gaussian sources. arXiv 2018, arXiv:1807.06799.
  20. Chen, J.; Xie, L.; Chang, Y.; Wang, J.; Wang, Y. Generalized Gaussian multiterminal source coding: The symmetric case. arXiv 2017, arXiv:1710.04750.
  21. Dobrushin, R.; Tsybakov, B. Information transmission with additional noise. IRE Trans. Inf. Theory 1962, 8, 293–304.
  22. Cover, T.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
Figure 1. Symmetric remote Gaussian source coding with centralized encoding.
Figure 2. Symmetric remote Gaussian source coding with distributed encoding.
Figure 3. Illustration of $\psi(d)$ with $\gamma_X = 1$ and $\gamma_Z = 0.1$ for different $\rho_X$.
Figure 4. Illustration of $\psi(d)$ with $\gamma_X = 1$ and $\rho_X = 0.5$ for different $\gamma_Z$.
