Signal Processing, Volume 86, Issue 11, November 2006, Pages 3096-3101

Approaching the Slepian–Wolf boundary using practical channel codes

https://doi.org/10.1016/j.sigpro.2006.03.018

Abstract

We review the interpretation of the Slepian–Wolf result for compression of correlated sources as a problem of channel coding with side information. Based on this review, we propose a method that allows the use of practical error correcting codes of fixed length to achieve performance close to any point on the Slepian–Wolf boundary without the use of explicit time-sharing arguments. The performance of the proposed approach when turbo codes are utilized is demonstrated by simulations.

Introduction

We consider two sources (Sx and Sy) producing sequences X and Y. The sequences are generated by repeated independent drawings of the discrete random variables X and Y, which have a joint probability distribution P(x,y). The well known Slepian–Wolf result [1] states that the achievable compression rates (Rx, Ry) for sources Sx and Sy are given by

Rx ≥ H(X|Y),  (1)
Ry ≥ H(Y|X),  (2)
Rx + Ry ≥ H(X,Y).  (3)
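The three constraints above are straightforward to check numerically. The following sketch (Python; the function names are ours, not from the paper) computes H(X,Y), H(X|Y) and H(Y|X) from a joint pmf and tests whether a rate pair lies in the Slepian–Wolf region:

```python
import math

def entropies(p_xy):
    """Compute H(X,Y), H(X|Y), H(Y|X) in bits from a joint pmf
    given as a dict {(x, y): probability}."""
    h_xy = -sum(p * math.log2(p) for p in p_xy.values() if p > 0)
    # Marginals P(x) and P(y).
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    h_x = -sum(p * math.log2(p) for p in p_x.values() if p > 0)
    h_y = -sum(p * math.log2(p) for p in p_y.values() if p > 0)
    # H(X|Y) = H(X,Y) - H(Y), H(Y|X) = H(X,Y) - H(X).
    return h_xy, h_xy - h_y, h_xy - h_x

def in_slepian_wolf_region(rx, ry, p_xy, tol=1e-12):
    """Check the three Slepian-Wolf constraints (1)-(3)."""
    h_xy, h_x_given_y, h_y_given_x = entropies(p_xy)
    return (rx >= h_x_given_y - tol and
            ry >= h_y_given_x - tol and
            rx + ry >= h_xy - tol)
```

For a binary symmetric correlation with crossover probability 0.11, for instance, this gives H(X|Y) = H(Y|X) ≈ 0.5 and H(X,Y) ≈ 1.5 bits.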

The interpretation first given by Wyner in [2], and extended for the case of lossy compression in [3], shows that the Slepian–Wolf result can be derived in a very intuitive way from a channel coding perspective. However, the use of practical channel codes to deal with the Slepian–Wolf problem has only recently been discussed in the literature. Ref. [4] may have been the first reference in which channel codes were used to obtain constructive schemes in the Slepian–Wolf problem. The application of turbo codes in this context was introduced in [5], [6], [7] (for symmetric cases in which Rx=Ry=R/2), and also studied in [8], [9] (for cases in which either Rx=H(X) or Ry=H(Y)). The use of LDPC codes in this problem (for the asymmetric case) was first proposed in [10].

It is obvious that, from a theoretical perspective, it is enough to solve the cases Rx = H(X) and Ry = H(Y). The case Rx = H(X) (respectively Ry = H(Y)) means that source Sx (Sy) is compressed up to its entropy limit.1 The goal is then to compress source Sy (Sx) to a rate R2 (R1) as close to H(Y|X) (H(X|Y)) as possible. Once the rate pairs (H(X), R2) and (R1, H(Y)) have been obtained, any other rate pair in the Slepian–Wolf region can be achieved by using time sharing arguments. In practical applications, time sharing can be utilized in two different ways, but both possibilities present some disadvantages:

  • The desired rate pair can be achieved by sending only two block pairs (the first at rate pair (H(X), R2) and the second at rate pair (R1, H(Y))), with the block lengths adjusted so that the desired average rate pair results. This approach complicates the structure of the encoder/decoder, which has to deal with different block lengths.

  • The desired rate pair can be achieved by sending block pairs of fixed length and changing the proportion of blocks with rate pairs (H(X),R2) and (R1,H(Y)). However, in order to achieve the desired pair, it is necessary to send many fixed-length blocks, with the consequent undesired delay.
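As a concrete illustration of the second option, the proportion λ of block pairs encoded at one corner point follows from a simple linear mixture of the two corner rate pairs; a minimal sketch (Python, illustrative function names):

```python
def time_sharing_fraction(corner_a, corner_b, target_rx):
    """Fraction lam of block pairs to encode at corner_a (the rest at
    corner_b) so that the average Rx equals target_rx; the resulting
    average Ry follows from the same mixture. corner_a and corner_b are
    (Rx, Ry) pairs, e.g. (H(X), R2) and (R1, H(Y))."""
    rx_a, ry_a = corner_a
    rx_b, ry_b = corner_b
    lam = (target_rx - rx_b) / (rx_a - rx_b)
    if not 0.0 <= lam <= 1.0:
        raise ValueError("target Rx is not between the two corner rates")
    avg_ry = lam * ry_a + (1.0 - lam) * ry_b
    return lam, avg_ry
```

For corner points (1.0, 0.5) and (0.5, 1.0), a target Rx = 0.75 requires λ = 0.5 and yields Ry = 0.75; note that this rate pair is achieved only on average over many blocks, which is precisely the delay drawback noted above.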

In this paper, we design practical turbo codes that, without the need to calculate the rate pairs (H(X), R2) and (R1, H(Y)) (and without the subsequent use of explicit time-sharing approaches), achieve rate pairs (Rx, Ry) very close to the boundary defined by (3). For a given (achievable) rate pair of interest, we show how to design an appropriate turbo-like code that processes block pairs of fixed length and achieves the desired rate pair for every block pair [11]. Related work utilizing different code design techniques to achieve performance close to any point on the Slepian–Wolf boundary can be found in [12], [13], [14], [15]. A very similar approach utilizing punctured LDPC codes was proposed in [16]. Although not considered in this paper, the proposed approach can be easily extended to the case of joint source–channel coding of correlated sources, where each source is transmitted to the common receiver through a different noisy channel.

Section snippets

Interpretation of the Slepian–Wolf result

Consider blocks of source outputs of length n (X = (X1,…,Xn) and Y = (Y1,…,Yn), with n → ∞). The objective is to independently compress these two sequences down to the limits provided in (1)–(3). In order to do so, the encoder corresponding to source Sx divides sequence X into subsequences Xh = (X1,…,Xl) and Xs = (Xl+1,…,Xn). Analogously, the encoder corresponding to source Sy divides sequence Y into subsequences Ys = (Y1,…,Yl) and Yh = (Yl+1,…,Yn). As shown in Fig. 1, sequences Xh and Yh are compressed by a perfect source…
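The splitting step described above is a direct slicing of the two blocks at a common split point l; it can be sketched as follows (Python, illustrative function name):

```python
def split_sequences(x, y, l):
    """Split X into Xh = (X1,...,Xl) and Xs = (X_{l+1},...,Xn), and,
    symmetrically, Y into Ys = (Y1,...,Yl) and Yh = (Y_{l+1},...,Yn).
    Note the roles are mirrored: the head of X and the tail of Y are
    the parts compressed up to their entropy limits."""
    xh, xs = x[:l], x[l:]
    ys, yh = y[:l], y[l:]
    return xh, xs, ys, yh
```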

Design of practical channel codes to approach the Slepian–Wolf boundary

The development in the previous section can be applied to the design of practical channel codes for the Slepian–Wolf problem. If the channel codes utilized to compress Xs and Ys were perfect (i.e., achieving the Shannon limit), we would solve (6) and (7) (with the inequalities transformed into equalities) to obtain the proper value of l for a given pair (Rx, Ry) on the Slepian–Wolf boundary. However, a practical code will not achieve the theoretical limit and therefore, in order to design…
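Eqs. (6) and (7) are not reproduced in this excerpt. Assuming they take the natural form for this construction — the first l symbols of X entropy-coded at H(X) and the remaining n − l symbols compressed down to H(X|Y), so that n·Rx = l·H(X) + (n − l)·H(X|Y) — the ideal split point l can be sketched as (Python; the assumed rate equation and function name are ours, for illustration only):

```python
def split_point(n, h_x, h_x_given_y, target_rx):
    """Solve l from l*H(X) + (n - l)*H(X|Y) = n*target_rx, i.e. the
    (hypothetical) split point at which the first l symbols of X are
    entropy-coded and the remaining n - l symbols are compressed down
    to the conditional entropy. Returned as an integer clipped to [0, n]."""
    l = n * (target_rx - h_x_given_y) / (h_x - h_x_given_y)
    return max(0, min(n, round(l)))
```

With a practical (imperfect) code, the achievable rate of the Xs/Ys part sits above H(X|Y), so the same equation would be solved with the code's actual rate in place of the conditional entropy.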

Simulation results

In order to illustrate the proposed design method, we consider the case in which the sources are binary with H(X) = H(Y) = 1. The correlation between sources is defined as Y = X ⊕ E, where ⊕ is modulo-2 addition and E is a binary random variable taking value 1 with probability p and 0 with probability 1 − p. For the simulations, the length of the input blocks is fixed to n = 16384, which is also the interleaver length. All the interleavers used in the turbo encoders have spread 15. Each one of the…
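The correlation model Y = X ⊕ E used in the simulations is easy to reproduce; a minimal sketch (Python, illustrative names) that also computes the binary entropy h(p), which gives the conditional entropy H(Y|X) the compression rate should approach:

```python
import math
import random

def correlated_binary_sources(n, p, seed=0):
    """Generate X i.i.d. uniform binary and Y = X XOR E, with E an
    i.i.d. binary sequence taking value 1 with probability p."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = [1 if rng.random() < p else 0 for _ in range(n)]
    y = [xi ^ ei for xi, ei in zip(x, e)]
    return x, y

def binary_entropy(p):
    """h(p) in bits; equals H(Y|X) for this correlation model."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
```

The empirical fraction of positions where X and Y differ concentrates around p for blocks as long as n = 16384.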

Conclusion

We have reviewed the interpretation of the Slepian–Wolf result for compression of correlated sources as a problem of channel coding with side information. Based on this idea, we design practical error correcting codes (turbo codes) which achieve performance very close to the theoretical Slepian–Wolf limit for block pairs of fixed length. The joint probability distribution between sources does not need to be known at the decoder side, since it can be estimated jointly with the decoding process.

References (20)

  • D. Slepian, J.K. Wolf, Noiseless coding of correlated information sources, IEEE Trans. Inform. Theory (July 1973)...
  • A.D. Wyner, Recent results in Shannon theory, IEEE Trans. Inform. Theory (January 1974)...
  • S. Shamai (Shitz), S. Verdu, R. Zamir, Systematic lossy source/channel coding, IEEE Trans. Inform. Theory (March 1998)...
  • S.S. Pradhan, K. Ramchandran, Distributed source coding using syndromes (DISCUS): design and construction, Proceedings...
  • J. Garcia-Frias, Joint source–channel decoding of correlated sources over noisy channels, Proceedings of the DCC’01,...
  • J. Garcia-Frias, Y. Zhao, Compression of correlated binary sources using turbo codes, IEEE Comm. Lett. (October 2001)...
  • J. Garcia-Frias, Y. Zhao, Data compression of unknown single and correlated binary sources using punctured turbo codes,...
  • J. Bajcsy, P. Mitran, Coding for the Slepian–Wolf problem with turbo codes, Proceedings of the Globecom’01, November...
  • A. Aaron, B. Girod, Compression with side information using turbo codes, Proceedings of the DCC’02, April...
  • A.D. Liveris, Z. Xiong, C.N. Georghiades, Compression of binary sources with side information at the decoder using LDPC...
There are more references available in the full text version of this article.


This work was supported in part by NSF Award CCR-0311014, and prepared through collaborative participation in the Communications and Networks Consortium sponsored by the US Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement DAAD19-01-2-0011. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon. This paper was presented in part at the IEEE International Symposium on Information Theory, Chicago, IL, USA, June 2004.
