Article

A New Coding Paradigm for the Primitive Relay Channel †

1 Institute of Science and Technology (IST) Austria, 3400 Klosterneuburg, Austria
2 Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
3 School of Computer and Communication Sciences, EPFL, CH-1015 Lausanne, Switzerland
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in 2018 IEEE International Symposium on Information Theory (ISIT 2018), Vail, CO, USA, 17–22 June 2018.
Algorithms 2019, 12(10), 218; https://doi.org/10.3390/a12100218
Submission received: 18 June 2019 / Revised: 2 October 2019 / Accepted: 14 October 2019 / Published: 18 October 2019
(This article belongs to the Special Issue Coding Theory and Its Application)

Abstract:
We consider the primitive relay channel, where the source sends a message to the relay and to the destination, and the relay helps the communication by transmitting an additional message to the destination via a separate channel. Two well-known coding techniques have been introduced for this setting: decode-and-forward and compress-and-forward. In decode-and-forward, the relay completely decodes the message and sends some information to the destination; in compress-and-forward, the relay does not decode, and it sends a compressed version of the received signal to the destination using Wyner–Ziv coding. In this paper, we present a novel coding paradigm that provides an improved achievable rate for the primitive relay channel. The idea is to combine compress-and-forward and decode-and-forward via a chaining construction. We transmit over pairs of blocks: in the first block, we use compress-and-forward; and, in the second block, we use decode-and-forward. More specifically, in the first block, the relay does not decode, it compresses the received signal via Wyner–Ziv, and it sends only part of the compression to the destination. In the second block, the relay completely decodes the message, it sends some information to the destination, and it also sends the remaining part of the compression coming from the first block. By doing so, we are able to strictly outperform both compress-and-forward and decode-and-forward. Note that the proposed coding scheme can be implemented with polar codes. As such, it has the typical attractive properties of polar coding schemes, namely, quasi-linear encoding and decoding complexity, and error probability that decays at super-polynomial speed. As a running example, we consider the special case of the erasure relay channel, and we provide a comparison between the rates achievable by our proposed scheme and the existing upper and lower bounds.

1. Introduction

The relay channel, introduced by van der Meulen in [1], represents the simplest network model with a single source and a single destination. The source wants to communicate with the destination, and the relay helps the communication. More specifically, let X S be the signal sent by the source to the relay and to the destination, Y SR the signal received by the relay, X R the signal sent by the relay to the destination, and Y D the signal received by the destination which comes from the source and from the relay. Note that the relay channel has a broadcast component going from the source to the relay and to the destination, and a multiple access component going from the source and from the relay to the destination. The model is schematized in Figure 1.
Cover and El Gamal provided a general upper bound (the cut-set bound) and two lower bounds (decode-and-forward and compress-and-forward) in [2]. Since that seminal work, several lower bounds have been derived, e.g., amplify-and-forward, compute-and-forward, noisy network coding, quantize-map-and-forward, and hybrid coding; see [3,4,5,6,7]. The cut-set bound is tight in most of the settings where capacity is known [2,8,9,10]. However, the cut-set bound was shown not to be tight in some special cases [11,12], and novel upper bounds tighter than the cut-set bound were recently presented in [13,14,15,16]. For a review of the relay channel, see also ([17] Chapter 16) and ([18] Chapter 9).
Polar codes, introduced by Arıkan in [19], have been employed to devise practical schemes for the relay channel. In particular, for the case of the degraded relay channel where $X_S \to (X_R, Y_{SR}) \to Y_D$ forms a Markov chain, polar coding techniques for decode-and-forward are presented in [20,21,22,23]. Furthermore, for the case of the relay channel with orthogonal receiver components, a polar coding scheme for compress-and-forward is proposed in [22]. For general relay channels, polar coding techniques for decode-and-forward and compress-and-forward are described in [24]. We will adopt these schemes as primitives in our approach. Soft decode-and-forward relaying strategies which employ low-density parity-check (LDPC) codes are considered in [25].
In this work, we consider the relay channel with orthogonal receiver components, which is also known as the primitive relay channel. It differs from the general relay channel in that the destination receives two separate signals: $Y_{SD}$ from the source and $Y_{RD}$ from the relay. Basically, the multiple access component going from the source and from the relay to the destination is replaced by two parallel channels. Furthermore, we assume that the relay can listen and transmit simultaneously, namely, it is full-duplex. The model is schematized in Figure 2. Note that the relay communicates with the destination via a direct link. Thus, the relay can communicate reliably with the destination at a rate arbitrarily close to capacity by using a capacity-achieving code (e.g., a random code or a polar code). Consequently, we can simply assume that the relay and the destination are connected via a noiseless link of given capacity. Even in this simplified setting, the capacity of the primitive relay channel is unknown in general. A review of coding schemes for the primitive relay channel is contained in [26].
The main contribution of this paper is a novel coding scheme that combines compress-and-forward with decode-and-forward and improves upon both of them. The idea is to consider pairs of blocks and use a chaining construction: in the first block, we perform a variation of compress-and-forward where the relay sends only a part of the compressed signal to the destination; in the second block, we perform decode-and-forward and the relay sends to the destination the new information bits together with the remaining part of the compressed signal coming from the previous block. The idea of chaining was first presented in [27] to design universal codes and in [28] to guarantee strong security for the degraded wiretap channel. Since then, it has been employed in numerous other settings, such as the broadcast channel [29,30], the asymmetric channel [31,32], and the wiretap channel [33]. We highlight that our proposed coding paradigm is implementable with codes used for compress-and-forward and decode-and-forward. Thus, polar codes are an appealing choice [24]: they have an encoding and decoding complexity of $\Theta(n \log n)$ and a block error probability scaling roughly as $2^{-\sqrt{n}}$, where $n$ is the block length.
The rest of the paper is organized as follows. In Section 2, we provide a review of existing upper bounds (cut-set and its improvements) and lower bounds (direct transmission, decode-and-forward, partial decode-and-forward, compress-and-forward, and partial decode-compress-and-forward). These bounds are also evaluated for the special case of the erasure relay channel, which serves as a running example throughout the paper. In Section 3, we state and prove our new lower bound. In Section 4, we present some numerical results for the erasure relay channel: we compare the rates achieved by our proposed coding scheme with existing upper and lower bounds. Some concluding remarks are provided in Section 5. This work is an extended version of [34].

2. Existing Upper and Lower Bounds

We assume that all channels are binary memoryless and symmetric (BMS). We denote by $h_2(x) = -x \log_2 x - (1-x)\log_2(1-x)$ the binary entropy function and by $\mathcal{X}_S$, $\mathcal{X}_R$, $\mathcal{Y}_{SR}$, and $\mathcal{Y}_{SD}$ the alphabets associated with $X_S$, $X_R$, $Y_{SR}$, and $Y_{SD}$, respectively. We define $a \ast b = a + b(1-a)$ for any $a, b \in \mathbb{R}$.
Throughout the paper, we will use as a running example the special case of the erasure relay channel. As schematized in Figure 3, in the erasure relay channel, the links between source and destination and between source and relay are binary erasure channels (BECs) with erasure probabilities ε SD and ε SR , respectively.

2.1. Cut-Set Upper Bound

For the general relay channel, the cut-set upper bound on the achievable rate R is given by ([17] Theorem 16.1)
$$R \le \max_{p_{X_S, X_R}} \min\{\, I(X_S, X_R; Y_D);\; I(X_S; Y_{SR}, Y_D \mid X_R) \,\}.$$
For the case of the primitive relay channel, the cut-set bound specializes to ([26] Proposition 1)
$$R \le \max_{p_{X_S}} \min\{\, I(X_S; Y_{SD}) + C_{RD};\; I(X_S; Y_{SR}, Y_{SD}) \,\}.$$
For the special case of the erasure relay channel, the cut-set bound can be rewritten as
$$R \le \min\{\, 1 - \varepsilon_{SD} + C_{RD};\; 1 - \varepsilon_{SR}\,\varepsilon_{SD} \,\}.$$
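The erasure cut-set bound is straightforward to evaluate; the following is a minimal sketch (the function name and test parameters are our own, not from the paper):

```python
def cutset_erasure(eps_sd: float, eps_sr: float, c_rd: float) -> float:
    """Cut-set upper bound for the erasure relay channel.

    First term: the cut separating the destination (direct link plus the
    noiseless relay-destination link of capacity c_rd). Second term: the
    broadcast cut, where the destination misses X_S only if both BECs
    erase it (probability eps_sr * eps_sd).
    """
    return min(1 - eps_sd + c_rd, 1 - eps_sr * eps_sd)
```

For instance, with $\varepsilon_{SD} = 0.85$, $\varepsilon_{SR} = 0.5$, and $C_{RD} = 0.2$, the bound evaluates to $\min\{0.35, 0.575\} = 0.35$.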

2.2. Improvements on a Cut-Set Upper Bound

For the case of the primitive relay channel, an upper bound demonstrating an explicit gap to the cut-set bound was presented in [13]. Furthermore, two new upper bounds that are generally tighter than cut-set are proposed in [14] for the symmetric primitive relay channel, in which Y SR and Y SD are conditionally identically distributed given X S . The results of [14] are extended to the non-symmetric case and to the Gaussian case in [15,16], respectively.
Let us now state the result in ([15] Theorem 3.1), which provides an extension of the first bound of [14]. If a rate $R$ is achievable, then there exist some $p_{X_S}(x_S)$ and $a \ge 0$ such that
$$R \le I(X_S; Y_{SR}, Y_{SD}), \qquad R \le I(X_S; Y_{SD}) + C_{RD} - a, \qquad R \le I(X_S; Y_{SD}, \tilde{Y}_{SR}) + h_2\!\left(\frac{a \ln 2}{2}\right) + \frac{a \ln 2}{2}\log_2\big(|\mathcal{Y}_{SR}| - 1\big) - a,$$
for any random variable Y ˜ SR with the same conditional distribution as Y SR given X S . The evaluation of the term I ( X S ; Y SD , Y ˜ SR ) that gives the tightest bound is simple in the following special cases:
  • Symmetric ( Y SR and Y SD are conditionally identically distributed given X S ): I ( X S ; Y SD , Y ˜ SR ) = I ( X S ; Y SD ) .
  • Degraded ( Y SD is a stochastically degraded version of Y SR ): I ( X S ; Y SD , Y ˜ SR ) = I ( X S ; Y SR ) .
  • Reversely degraded ( Y SR is a stochastically degraded version of Y SD ): I ( X S ; Y SD , Y ˜ SR ) = I ( X S ; Y SD ) .
For the special case of the erasure relay channel, the bound can be re-written as
$$R \le \max_{a \ge 0} \min\left\{\, 1 - \varepsilon_{SR}\,\varepsilon_{SD},\; 1 - \varepsilon_{SD} + C_{RD} - a,\; 1 - \min\{\varepsilon_{SR}, \varepsilon_{SD}\} + h_2\!\left(\frac{a \ln 2}{2}\right) + \frac{a \ln 2}{2} - a \,\right\}.$$
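The maximization over $a$ above has no closed form, but a grid search suffices for plotting. A numerical sketch (the function name, grid resolution, and the cap on $a$ that keeps the argument of $h_2$ inside $[0,1]$ are our choices, not from the paper):

```python
import math

def h2(x: float) -> float:
    """Binary entropy function, with the conventions h2(0) = h2(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def improved_upper_bound(eps_sd: float, eps_sr: float, c_rd: float,
                         steps: int = 10000) -> float:
    """Grid search over a >= 0 in the improved upper bound, erasure case."""
    a_cap = 2.0 / math.log(2)  # keeps a*ln(2)/2 within [0, 1]
    best = 0.0
    for i in range(steps + 1):
        a = a_cap * i / steps
        t = a * math.log(2) / 2
        val = min(1 - eps_sr * eps_sd,
                  1 - eps_sd + c_rd - a,
                  1 - min(eps_sr, eps_sd) + h2(t) + t - a)
        best = max(best, val)
    return best
```

By construction, the result never exceeds the cut-set bound, since the first two terms of the minimum already appear there (the second one lowered by $a$).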
In order to present the second bound of [14], we need some preliminary definitions. Given a channel transition probability p ( ω | x ) , for any p ( x ) and d 0 , we define Δ ( p ( x ) , d ) as
$$\Delta(p(x), d) = \max_{\tilde{p}(\omega|x)} \Big[ H\big(\tilde{p}(\omega|x) \mid p(x)\big) + D\big(\tilde{p}(\omega|x) \,\big\|\, p(\omega|x) \mid p(x)\big) - H\big(p(\omega|x) \mid p(x)\big) \Big],$$
subject to the condition
$$\frac{1}{2} \sum_{(x, \omega)} \big| p(x)\tilde{p}(\omega|x) - p(x)p(\omega|x) \big| \le d,$$
where D ( p ˜ ( ω | x ) | | p ( ω | x ) | p ( x ) ) is the conditional relative entropy defined as
$$D\big(\tilde{p}(\omega|x) \,\big\|\, p(\omega|x) \mid p(x)\big) = \sum_{(x, \omega)} p(x)\tilde{p}(\omega|x) \log_2 \frac{\tilde{p}(\omega|x)}{p(\omega|x)}.$$
H ( p ˜ ( ω | x ) | p ( x ) ) is the conditional entropy defined with respect to the joint distribution p ( x ) p ˜ ( ω | x ) , i.e.,
$$H\big(\tilde{p}(\omega|x) \mid p(x)\big) = -\sum_{(x, \omega)} p(x)\tilde{p}(\omega|x) \log_2 \tilde{p}(\omega|x),$$
and $H(p(\omega|x) \mid p(x))$ is the conditional entropy similarly defined with respect to $p(x)p(\omega|x)$. At this point, we can state the result in ([14] Theorem 4.2). If a rate $R$ is achievable, then there exist some $p_{X_S}(x_S)$ and $a \in [0, \min\{C_{RD}, H(Y_{SR} \mid X_S)\}]$ such that
$$R \le I(X_S; Y_{SR}, Y_{SD}), \qquad R \le I(X_S; Y_{SD}) + C_{RD} - a, \qquad R \le I(X_S; Y_{SD}) + \Delta\!\left(p_{X_S}(x_S), \frac{a \ln 2}{2}\right).$$
As pointed out at the end of Section IV.C of [14], for the special case of the symmetric erasure relay channel, we have that $\Delta(p_{X_S}(x_S), d) = \infty$ for all $p_{X_S}(x_S)$ and $d > 0$. Thus, formula (10) reduces to the cut-set bound (3).

2.3. Direct Transmission Lower Bound

In the direct transmission, the source communicates with the destination by using an optimal point-to-point code. The relay transmission is fixed at the most favorable symbol for the channel from the source to the destination.
For the general relay channel, direct transmission allows for achieving the following rate ([17] Section 16.3):
$$R_{DT} = \max_{p_{X_S},\, x_R} I(X_S; Y_D \mid X_R = x_R).$$
For the case of the primitive relay channel, the direct transmission lower bound specializes to
$$R_{DT} = \max_{p_{X_S}} I(X_S; Y_{SD}).$$
Note that the direct transmission lower bound (12) meets the cut-set upper bound (2), and it equals the capacity of the primitive relay channel when either of the following two conditions holds:
  • the primitive relay channel is reversely degraded, which implies that I ( X S ; Y SD ) = I ( X S ; Y SR , Y SD ) ;
  • C RD = 0 .
For the special case of the erasure relay channel, the direct transmission lower bound can be rewritten as
$$R_{DT} = 1 - \varepsilon_{SD}.$$
The direct transmission lower bound (13) meets the cut-set upper bound (3), and it equals the capacity of the erasure relay channel when either $1 - \varepsilon_{SD} = 1 - \varepsilon_{SR}\,\varepsilon_{SD}$ or $C_{RD} = 0$.

2.4. Decode-and-Forward Lower Bound

In decode-and-forward, the relay completely decodes the received sequence and cooperates with the source to communicate the message to the destination.
For the general relay channel, decode-and-forward allows for achieving the following rate ([17] Theorem 16.2):
$$R_{DF} = \max_{p_{X_S, X_R}} \min\{\, I(X_S, X_R; Y_D),\; I(X_S; Y_{SR} \mid X_R) \,\}.$$
For the case of the primitive relay channel, the decode-and-forward lower bound specializes to ([26] Proposition 2)
$$R_{DF} = \max_{p_{X_S}} \min\{\, I(X_S; Y_{SD}) + C_{RD};\; I(X_S; Y_{SR}) \,\}.$$
Note that the decode-and-forward lower bound (15) meets the cut-set upper bound (2) and is equal to the capacity of the primitive relay channel when either of the following two conditions holds:
  • the primitive relay channel is degraded, which implies that I ( X S ; Y SR ) = I ( X S ; Y SR , Y SD ) ;
  • $I(X_S; Y_{SR}) \ge I(X_S; Y_{SD}) + C_{RD}$.
For the special case of the erasure relay channel, the decode-and-forward lower bound can be rewritten as
$$R_{DF} = \min\{\, 1 - \varepsilon_{SD} + C_{RD};\; 1 - \varepsilon_{SR} \,\}.$$
The decode-and-forward lower bound (16) meets the cut-set upper bound (3), and it equals the capacity of the erasure relay channel when either $1 - \varepsilon_{SR} = 1 - \varepsilon_{SR}\,\varepsilon_{SD}$ or $1 - \varepsilon_{SD} + C_{RD} \le 1 - \varepsilon_{SR}$.

2.5. Partial Decode-and-Forward Lower Bound

In partial decode-and-forward, the relay decodes and sends to the destination only part of the received sequence.
For the general relay channel, partial decode-and-forward allows for achieving the following rate ([17] Theorem 16.3):
$$R_{pDF} = \max_{p_{U, X_S, X_R}} \min\{\, I(X_S, X_R; Y_D),\; I(U; Y_{SR} \mid X_R) + I(X_S; Y_D \mid X_R, U) \,\},$$
where the cardinality of the alphabet associated with $U$ can be bounded as $|\mathcal{U}| \le |\mathcal{X}_S| \cdot |\mathcal{X}_R|$. Note that $U$ is an auxiliary random variable that represents the part of the message decoded by the relay. By taking $U = X_S$, we recover the decode-and-forward lower bound (14). Furthermore, by taking $U = \emptyset$, we recover the direct transmission lower bound (11).
Note that the partial decode-and-forward lower bound (17) meets the cut-set upper bound (1) when the relay channel has orthogonal sender components, namely, the broadcast channel from the source to the relay and the destination is decoupled into two parallel channels.
For the case of the primitive relay channel, the partial decode-and-forward lower bound specializes to ([26] Equation (5))
$$R_{pDF} = \max_{p_{U, X_S}} \min\{\, I(X_S; Y_{SD}) + C_{RD},\; I(U; Y_{SR}) + I(X_S; Y_{SD} \mid U) \,\},$$
with $|\mathcal{U}| \le |\mathcal{X}_S|$.
For the special case of the erasure relay channel, we show that partial decode-and-forward improves upon neither direct transmission nor decode-and-forward. After some simple calculations, one obtains that
$$I(X_S; Y_{SD}) = H(X_S) - H(X_S \mid Y_{SD}) = H(X_S)(1 - \varepsilon_{SD}),$$
$$I(U; Y_{SR}) = H(U) - H(U \mid Y_{SR}) = H(U) - \varepsilon_{SR} H(U) - (1 - \varepsilon_{SR}) H(U \mid X_S) = (1 - \varepsilon_{SR})\big( H(X_S) - H(X_S \mid U) \big),$$
$$I(X_S; Y_{SD} \mid U) = H(X_S \mid U) - H(X_S \mid U, Y_{SD}) = H(X_S \mid U)(1 - \varepsilon_{SD}).$$
Hence, by setting α = H ( X S ) and β = H ( X S | U ) , we can re-write (18) as
$$R_{pDF} = \max_{0 \le \beta \le \alpha \le 1} \min\{\, \alpha(1 - \varepsilon_{SD}) + C_{RD},\; \alpha(1 - \varepsilon_{SR}) + \beta(\varepsilon_{SR} - \varepsilon_{SD}) \,\} = \max_{0 \le \beta \le 1} \min\{\, (1 - \varepsilon_{SD}) + C_{RD},\; (1 - \varepsilon_{SR}) + \beta(\varepsilon_{SR} - \varepsilon_{SD}) \,\}.$$
On the one hand, if $\varepsilon_{SR} \ge \varepsilon_{SD}$, then the maximum is achieved by taking $\beta = 1$, and $R_{pDF} = 1 - \varepsilon_{SD} = R_{DT}$. On the other hand, if $\varepsilon_{SR} \le \varepsilon_{SD}$, then the maximum is achieved by taking $\beta = 0$, and $R_{pDF} = \min\{ (1 - \varepsilon_{SD}) + C_{RD},\, 1 - \varepsilon_{SR} \} = R_{DF}$. Consequently, no improvement is possible over direct transmission and decode-and-forward.
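The optimization over $\beta$ above is easy to verify numerically. A minimal sketch (function name and grid resolution are ours); it reproduces the conclusion that the result collapses to direct transmission or decode-and-forward depending on the order of $\varepsilon_{SR}$ and $\varepsilon_{SD}$:

```python
def r_pdf_erasure(eps_sd: float, eps_sr: float, c_rd: float,
                  steps: int = 1000) -> float:
    """Partial decode-and-forward rate for the erasure relay channel.

    Maximizes the last expression above over beta in [0, 1]
    (alpha = 1 is optimal, since both terms are increasing in alpha).
    """
    best = 0.0
    for i in range(steps + 1):
        beta = i / steps
        best = max(best, min(1 - eps_sd + c_rd,
                             1 - eps_sr + beta * (eps_sr - eps_sd)))
    return best
```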

2.6. Compress-and-Forward Lower Bound

In compress-and-forward, the relay does not attempt to decode the received sequence, but it sends a (possibly compressed) description of it, denoted by Y ^ SR , to the destination. Since this description is correlated with the sequence received by the destination from the source, Wyner–Ziv coding is used to reduce the rate needed to communicate it to the destination.
For the general relay channel, compress-and-forward allows for achieving the following rate ([17] Theorem 16.4):
$$R_{CF} = \max_{p_{X_S}\, p_{X_R}\, p_{\hat{Y}_{SR} \mid X_R, Y_{SR}}} \min\{\, I(X_S, X_R; Y_D) - I(Y_{SR}; \hat{Y}_{SR} \mid X_S, X_R, Y_D),\; I(X_S; \hat{Y}_{SR}, Y_D \mid X_R) \,\},$$
where the cardinality of the alphabet associated with $\hat{Y}_{SR}$ can be bounded as $|\hat{\mathcal{Y}}_{SR}| \le |\mathcal{X}_R| \cdot |\mathcal{Y}_{SR}| + 1$. This expression can be equivalently rewritten as ([17] Remark 16.3)
$$R_{CF} = \max_{p_{X_S}\, p_{X_R}\, p_{\hat{Y}_{SR} \mid X_R, Y_{SR}}} \big\{\, I(X_S; \hat{Y}_{SR}, Y_D \mid X_R) \,:\, I(Y_{SR}; \hat{Y}_{SR} \mid X_R, Y_D) \le I(X_R; Y_D) \,\big\}.$$
The bound is in general not convex; therefore, it can be improved via time sharing.
For the case of the primitive relay channel, the compress-and-forward lower bound specializes to ([26] Proposition 3)
$$R_{CF} = \max_{p_{X_S}\, p_{\hat{Y}_{SR} \mid Y_{SR}}} \big\{\, I(X_S; \hat{Y}_{SR}, Y_{SD}) \,:\, I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \le C_{RD} \,\big\},$$
with $|\hat{\mathcal{Y}}_{SR}| \le |\mathcal{Y}_{SR}| + 1$.
Note that the compress-and-forward lower bound (23) meets the cut-set upper bound (2), and it equals the capacity of the primitive relay channel when $H(Y_{SR} \mid Y_{SD}) \le C_{RD}$. Indeed, in this case, we can pick $\hat{Y}_{SR} = Y_{SR}$, namely, the relay performs Slepian–Wolf source coding. Therefore, $R_{CF} = I(X_S; Y_{SR}, Y_{SD})$, which is one of the two terms in the cut-set bound.
On the contrary, if $H(Y_{SR} \mid Y_{SD}) > C_{RD}$, then we can degrade $Y_{SR}$ into $\hat{Y}_{SR}$, namely, the relay performs a step of lossy source coding. The relay transmits this lossy description to the destination, which can decode it successfully since $\hat{Y}_{SR}$ requires fewer bits than $Y_{SR}$. However, after the destination has recovered $\hat{Y}_{SR}$, there is a penalty: we can achieve rates up to $I(X_S; \hat{Y}_{SR}, Y_{SD})$, instead of up to $I(X_S; Y_{SR}, Y_{SD})$.
For the case of the erasure relay channel, we have that
$$H(Y_{SR} \mid Y_{SD}) = h_2(\varepsilon_{SR}) + \varepsilon_{SD}(1 - \varepsilon_{SR}).$$
Hence, if $C_{RD} \ge h_2(\varepsilon_{SR}) + \varepsilon_{SD}(1 - \varepsilon_{SR})$, then the compress-and-forward lower bound meets the cut-set upper bound, and it equals the capacity of the erasure relay channel.
On the contrary, if $C_{RD} < h_2(\varepsilon_{SR}) + \varepsilon_{SD}(1 - \varepsilon_{SR})$, it is not easy to find the best choice of $\hat{Y}_{SR}$ even for this simple scenario. Following [25], let us assume that $\hat{Y}_{SR}$ is the output of an erasure-erasure channel (EEC) with erasure probability $\hat{\varepsilon}_R$ and input $Y_{SR}$. This means that, if $Y_{SR} = {?}$, then $\hat{Y}_{SR} = {?}$ with probability 1; if $Y_{SR} \in \{0, 1\}$, then $\hat{Y}_{SR} = {?}$ with probability $\hat{\varepsilon}_R$ and $\hat{Y}_{SR} = Y_{SR}$ with probability $1 - \hat{\varepsilon}_R$. Consequently,
$$I(X_S; \hat{Y}_{SR}, Y_{SD}) = H(X_S) - H(X_S \mid \hat{Y}_{SR}, Y_{SD}) = H(X_S)\big( 1 - (\hat{\varepsilon}_R \ast \varepsilon_{SR}) \cdot \varepsilon_{SD} \big).$$
Clearly, I ( X S ; Y ^ SR , Y SD ) is maximized by setting p X S to the uniform distribution. Furthermore,
$$H(\hat{Y}_{SR} \mid Y_{SR}, Y_{SD}) = H(\hat{Y}_{SR} \mid Y_{SR}) = (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R), \qquad H(\hat{Y}_{SR} \mid Y_{SD}) = h_2(\varepsilon_{SR} \ast \hat{\varepsilon}_R) + \varepsilon_{SD}(1 - \varepsilon_{SR} \ast \hat{\varepsilon}_R).$$
As a result, the rate (23) can be rewritten as
$$R_{CF} = \max_{0 \le \hat{\varepsilon}_R \le 1} \big\{\, 1 - (\hat{\varepsilon}_R \ast \varepsilon_{SR}) \cdot \varepsilon_{SD} \,:\, h_2(\varepsilon_{SR} \ast \hat{\varepsilon}_R) + \varepsilon_{SD}(1 - \varepsilon_{SR} \ast \hat{\varepsilon}_R) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) \le C_{RD} \,\big\}.$$
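The constrained maximization over $\hat{\varepsilon}_R$ has no closed form, but a grid search is adequate; a sketch under our own naming and discretization choices:

```python
import math

def h2(x: float) -> float:
    """Binary entropy function, with h2(0) = h2(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def star(a: float, b: float) -> float:
    """Erasure-probability combination a * b = a + b(1 - a)."""
    return a + b * (1 - a)

def r_cf_erasure(eps_sd: float, eps_sr: float, c_rd: float,
                 steps: int = 10000) -> float:
    """Compress-and-forward rate for the erasure relay channel (grid search)."""
    best = 0.0
    for i in range(steps + 1):
        e_hat = i / steps
        e = star(eps_sr, e_hat)  # overall erasure prob. of the description
        # I(Y_SR; Yhat_SR | Y_SD) must fit into the relay-destination link
        needed = h2(e) + eps_sd * (1 - e) - (1 - eps_sr) * h2(e_hat)
        if needed <= c_rd:
            best = max(best, 1 - e * eps_sd)
    return best
```

As a sanity check, a very large $C_{RD}$ allows $\hat{\varepsilon}_R = 0$ (Slepian–Wolf), recovering $1 - \varepsilon_{SR}\varepsilon_{SD}$, while $\hat{\varepsilon}_R = 1$ is always feasible and recovers direct transmission.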

2.7. Partial Decode-Compress-and-Forward Lower Bound

In partial decode-compress-and-forward, the relay decodes and sends to the destination part of the source message, and it also sends to the destination a compressed description of the remaining signal by Wyner–Ziv coding.
For the general relay channel, partial decode-compress-and-forward allows for achieving the following rate ([2] Theorem 7):
$$R_{pDCF} = \max \min\{\, I(X_S; \hat{Y}_{SR}, Y_D \mid U, X_R) + I(U; Y_{SR} \mid V, X_R),\; I(X_S, X_R; Y_D) - I(Y_{SR}; \hat{Y}_{SR} \mid U, X_S, X_R, Y_D) \,\},$$
where the maximization is taken over all the joint probability density functions of the form
$$p_{U, V, X_S, X_R, Y_{SR}, \hat{Y}_{SR}, Y_D} = p_V\, p_{U|V}\, p_{X_S|U}\, p_{X_R|V} \cdot p_{Y_{SR}, Y_D \mid X_S, X_R}\, p_{\hat{Y}_{SR} \mid X_R, Y_{SR}, U}$$
such that
$$I(X_R; Y_D \mid V) \ge I(\hat{Y}_{SR}; Y_{SR} \mid U, X_R, Y_D).$$
Partial decode–compress-and-forward is a generalization of both partial decode-and-forward and compress-and-forward. Furthermore, it can strictly improve on both, e.g., for the state-dependent orthogonal relay channel with state information available at the destination [35].
Let us consider the case of the primitive relay channel and pick $V = \emptyset$. Then, the partial decode-compress-and-forward lower bound specializes to
$$R_{pDCF} = \max \min\{\, I(X_S; \hat{Y}_{SR}, Y_{SD} \mid U) + I(U; Y_{SR}),\; I(X_S; Y_{SD}) + C_{RD} - I(Y_{SR}; \hat{Y}_{SR} \mid U, X_S) \,\},$$
such that
$$C_{RD} \ge I(\hat{Y}_{SR}; Y_{SR} \mid Y_{SD}, U).$$

3. Main Result

We are now ready to state our new lower bound for the primitive relay channel.
Theorem 1.
Consider the transmission over a primitive relay channel, where the source sends $X_S$ to the relay and the destination, the relay receives $Y_{SR}$ from the source, the destination receives $Y_{SD}$ from the source, and relay and destination are connected via a noiseless link with capacity $C_{RD}$. Furthermore, denote by $\hat{Y}_{SR}$ the compressed description of $Y_{SR}$ transmitted by the relay, and define $I_{\max} = \max\{0,\, I(X_S; Y_{SR}) - I(X_S; Y_{SD})\}$. Then, the following rate is achievable:
$$R_{\text{new}} = \frac{\big(C_{RD} - I_{\max}\big)\, I(X_S; \hat{Y}_{SR}, Y_{SD}) + \max\{I(X_S; Y_{SR}),\, I(X_S; Y_{SD})\}\,\big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big)}{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - I_{\max}},$$
for any joint distribution $p_{X_S}\, p_{\hat{Y}_{SR} \mid Y_{SR}}$ such that
$$I(X_S; Y_{SR}) < I(X_S; Y_{SD}) + C_{RD},$$
$$I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \ge C_{RD},$$
and where $|\hat{\mathcal{Y}}_{SR}| \le |\mathcal{Y}_{SR}| + 1$. Furthermore, the rate (33) can be achieved by a polar coding scheme with encoding/decoding complexity $\Theta(n \log n)$ and error probability $O(2^{-n^\beta})$ for any $\beta \in (0, 1/2)$, where $n$ is the block length.
Remark 1.
If (34) does not hold, then decode-and-forward achieves the cut-set bound, and it is optimal. Furthermore, if (35) does not hold, then our scheme reduces to compress-and-forward, and the achievable rate is given by (23). As we will see in the proof, we have two slightly different schemes for the cases (i) $I(X_S; Y_{SR}) \ge I(X_S; Y_{SD})$ and (ii) $I(X_S; Y_{SR}) < I(X_S; Y_{SD})$. Thus, introducing the term $I_{\max}$ allows us to write the achievable rate in a more compact form.
Remark 2.
The proposed scheme can be thought of as a particular form of time-sharing between decode-and-forward and compress-and-forward: in the first block, we are performing (a variant of) compress-and-forward, and, in the second block, we are performing decode-and-forward. However, we allow different time-sharing strategies across different channels: in the channel from relay to destination, part of the compressed message of the first block is sent together with the message of the second block. This is different from the ‘classical’ way of implementing time-sharing, which can be realized through the partial decode-compress-and-forward scheme, as described for example in [35]. In [35], in the same block, a part of the message is processed according to the decode-and-forward scheme, and the remaining part is processed according to the compress-and-forward scheme. Therefore, it is not clear that the rate achievable by our scheme can also be achieved by partial decode-compress-and-forward. In fact, in the special case considered in the numerical simulations of Section 4, our achievable rate strictly improves upon partial decode-compress-and-forward.
Remark 3.
The proposed scheme is based on a chaining construction. Chaining can be thought of as a form of block Markov encoding, where the joint distribution is over blocks of symbols (instead of being over a single symbol). As described in detail in the proof, at the relay, we generate the first block according to a first codebook; we repeat part of the first block into the second block; and we generate the rest of the second block according to a second codebook. Thus, the repetition of part of the first block into the second block can be interpreted as a particular joint distribution over pairs of blocks.
The special case of the erasure relay channel is handled by the corollary below.
Corollary 1.
Consider the transmission over the erasure relay channel, where $Y_{SD}$ is obtained from $X_S$ via a BEC($\varepsilon_{SD}$), $Y_{SR}$ is obtained from $X_S$ via a BEC($\varepsilon_{SR}$), $\hat{Y}_{SR}$ is obtained from $Y_{SR}$ via an EEC($\hat{\varepsilon}_R$), and the relay is connected to the destination via a noiseless link with capacity $C_{RD}$. Then, the rate
$$R_{\text{new}} = \frac{\big( C_{RD} - \max\{0, \varepsilon_{SD} - \varepsilon_{SR}\} \big)\big( 1 - (\hat{\varepsilon}_R \ast \varepsilon_{SR}) \cdot \varepsilon_{SD} \big)}{h_2(\varepsilon_{SR} \ast \hat{\varepsilon}_R) + \varepsilon_{SD}(1 - \varepsilon_{SR} \ast \hat{\varepsilon}_R) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) - \max\{0, \varepsilon_{SD} - \varepsilon_{SR}\}} + \frac{\max\{1 - \varepsilon_{SR},\, 1 - \varepsilon_{SD}\}\big( h_2(\varepsilon_{SR} \ast \hat{\varepsilon}_R) + \varepsilon_{SD}(1 - \varepsilon_{SR} \ast \hat{\varepsilon}_R) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) - C_{RD} \big)}{h_2(\varepsilon_{SR} \ast \hat{\varepsilon}_R) + \varepsilon_{SD}(1 - \varepsilon_{SR} \ast \hat{\varepsilon}_R) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) - \max\{0, \varepsilon_{SD} - \varepsilon_{SR}\}}$$
is achievable for any ε ^ R [ 0 , 1 ] such that
$$1 - \varepsilon_{SR} < 1 - \varepsilon_{SD} + C_{RD},$$
$$h_2(\varepsilon_{SR} \ast \hat{\varepsilon}_R) + \varepsilon_{SD}(1 - \varepsilon_{SR} \ast \hat{\varepsilon}_R) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) \ge C_{RD}.$$
Furthermore, the rate (36) can be achieved by a polar coding scheme with encoding/decoding complexity $\Theta(n \log n)$ and error probability $O(2^{-n^\beta})$ for any $\beta \in (0, 1/2)$, where $n$ is the block length.
The proof of Corollary 1 easily follows from the application of Theorem 1 and of Formulas (25) and (26). We will now proceed with the proof of our main result.
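Before turning to the proof, we note that Corollary 1 can be evaluated numerically by a grid search over $\hat{\varepsilon}_R$ subject to conditions (37) and (38). A hedged sketch (function names and grid resolution are ours; we return 0 when no feasible $\hat{\varepsilon}_R$ exists):

```python
import math

def h2(x: float) -> float:
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def star(a: float, b: float) -> float:
    return a + b * (1 - a)

def r_new_erasure(eps_sd: float, eps_sr: float, c_rd: float,
                  steps: int = 10000) -> float:
    """Best rate of Corollary 1 over the EEC parameter eps_hat."""
    if not (1 - eps_sr < 1 - eps_sd + c_rd):  # condition (37)
        return 0.0
    i_max = max(0.0, eps_sd - eps_sr)
    m = max(1 - eps_sr, 1 - eps_sd)
    best = 0.0
    for i in range(steps + 1):
        e_hat = i / steps
        e = star(eps_sr, e_hat)
        i_c = h2(e) + eps_sd * (1 - e) - (1 - eps_sr) * h2(e_hat)
        # condition (38); the extra guard avoids division by zero in
        # degenerate corner cases
        if i_c >= c_rd and i_c > i_max:
            r = ((c_rd - i_max) * (1 - e * eps_sd)
                 + m * (i_c - c_rd)) / (i_c - i_max)
            best = max(best, r)
    return best
```

For example, with $(\varepsilon_{SD}, \varepsilon_{SR}) = (0.85, 0.5)$ and $C_{RD} = 0.4$, the result exceeds the decode-and-forward rate $\min\{1 - \varepsilon_{SD} + C_{RD},\, 1 - \varepsilon_{SR}\} = 0.5$.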
Proof of Theorem 1.
We start by presenting the main idea of our scheme. We split the transmission into two blocks. In the first block, we perform a variant of compress-and-forward: the relay does not decode the received sequence, but it sends a compressed description of it to the destination. However, differently from standard compress-and-forward, we require that (35) holds. Hence, we cannot transmit all the compressed description Y ^ SR during the first block. In the second block, we perform decode-and-forward: the relay completely decodes the received sequence. Furthermore, we choose the length of the second block so that the relay can transmit the part of Y ^ SR that was not sent in the previous block plus the new information needed to decode the second block.
Let us now describe this scheme in more detail and provide the achievability proof of the rate (33). First, we deal with the case $I(X_S; Y_{SR}) \ge I(X_S; Y_{SD})$.
Consider the transmission of the first block. Denote by $n_1$ and $R_1$ the block length and the rate of the message transmitted by the source, and let $R_1$ approach $I(X_S; \hat{Y}_{SR}, Y_{SD})$ from below. The relay receives $Y_{SR}$ and constructs the compressed description $\hat{Y}_{SR}$. Recall that the destination receives the side information $Y_{SD}$ from the source. Hence, by using Wyner–Ziv coding, the destination needs from the relay a number of bits approaching $I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \cdot n_1$ from above in order to decode the message sent by the source. As $I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \ge C_{RD}$, the relay transmits right away a number of these bits approaching $C_{RD} \cdot n_1$ from below. The number of remaining bits approaches $(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}) \cdot n_1$ from above, and these bits are stored by the relay. The destination stores the message received from the relay and the observation $Y_{SD}$ obtained from the source.
Consider the transmission of the second block and define
$$\alpha = \frac{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}}{C_{RD} - I(X_S; Y_{SR}) + I(X_S; Y_{SD})}.$$
Denote by $n_2$ and $R_2$ the block length and the rate of the message transmitted by the source. Let $n_2 = n_1 \cdot \alpha$ and let $R_2$ approach $I(X_S; Y_{SR})$ from below. The relay receives $Y_{SR}$ and successfully decodes the message. Again, the destination receives the side information $Y_{SD}$ from the source. Hence, it needs from the relay a number of bits approaching $(I(X_S; Y_{SR}) - I(X_S; Y_{SD})) \cdot n_1 \cdot \alpha$ from above in order to decode the message sent by the source. The relay transmits to the destination these $(I(X_S; Y_{SR}) - I(X_S; Y_{SD})) \cdot n_1 \cdot \alpha$ information bits plus the $(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}) \cdot n_1$ bits remaining from the previous block. This transmission is reliable, as (39) implies that
$$\big( I(X_S; Y_{SR}) - I(X_S; Y_{SD}) \big) \cdot n_1 \cdot \alpha + \big( I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD} \big) \cdot n_1 = C_{RD} \cdot n_2.$$
At this point, the destination can reconstruct the second block by using the side information received from the source and the extra $(I(X_S; Y_{SR}) - I(X_S; Y_{SD})) \cdot n_1 \cdot \alpha$ bits received from the relay. Furthermore, it can also reconstruct the first block by using the side information previously received from the source and the extra $I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \cdot n_1$ bits received from the relay (partly in the first and partly in the second block).
The overall block length is n = n 1 + n 2 = ( 1 + α ) n 1 , and the achievable rate is
$$R = \frac{R_1 + \alpha R_2}{1 + \alpha},$$
which approaches from below
$$\frac{\big( C_{RD} - I(X_S; Y_{SR}) + I(X_S; Y_{SD}) \big)\, I(X_S; \hat{Y}_{SR}, Y_{SD}) + \big( I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD} \big)\, I(X_S; Y_{SR})}{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - I(X_S; Y_{SR}) + I(X_S; Y_{SD})}.$$
Note that the expression (42) coincides with (33) when $I(X_S; Y_{SR}) \ge I(X_S; Y_{SD})$.
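As a numerical sanity check of this derivation (the parameters are our own illustrative choices, with $\hat{\varepsilon}_R = 0$ so that the compression step is lossless), one can verify that the two-block average (41) matches the closed form (42) for the erasure case:

```python
import math

def h2(x: float) -> float:
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# Erasure example in case (i): eps_sd = 0.85 >= eps_sr = 0.5, c_rd = 0.4.
eps_sd, eps_sr, c_rd = 0.85, 0.5, 0.4
r1 = 1 - eps_sr * eps_sd        # R_1 -> I(X_S; Yhat_SR, Y_SD), eps_hat = 0
r2 = 1 - eps_sr                 # R_2 -> I(X_S; Y_SR)
i_sd = 1 - eps_sd               # I(X_S; Y_SD)
i_c = h2(eps_sr) + eps_sd * (1 - eps_sr)  # I(Y_SR; Yhat_SR | Y_SD), eps_hat = 0

alpha = (i_c - c_rd) / (c_rd - (r2 - i_sd))             # Equation (39)
r_avg = (r1 + alpha * r2) / (1 + alpha)                 # Equation (41)
r_closed = ((c_rd - (r2 - i_sd)) * r1
            + (i_c - c_rd) * r2) / (i_c - (r2 - i_sd))  # Equation (42)
```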
The case $I(X_S; Y_{SR}) < I(X_S; Y_{SD})$ is handled in a similar way. As concerns the transmission of the first block, nothing changes. Denote by $n_1$ and $R_1$ the block length and the rate of the message transmitted by the source, and let $R_1$ approach $I(X_S; \hat{Y}_{SR}, Y_{SD})$ from below. The relay receives $Y_{SR}$ and constructs the compressed description $\hat{Y}_{SR}$. By using Wyner–Ziv coding, the destination needs from the relay a number of bits approaching $I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \cdot n_1$ from above in order to decode the message sent by the source. As $I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \ge C_{RD}$, the relay transmits right away a number of these bits approaching $C_{RD} \cdot n_1$ from below. The number of remaining bits approaches $(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}) \cdot n_1$ from above, and these bits are stored by the relay. The destination stores the message received from the relay and the observation $Y_{SD}$ obtained from the source.
As concerns the transmission of the second block, define
$$\alpha = \frac{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}}{C_{RD}},$$
and denote by $n_2$ and $R_2$ the block length and the rate of the message transmitted by the source. Let $n_2 = n_1 \cdot \alpha$ and let $R_2$ approach $I(X_S; Y_{SD})$ from below. The relay discards the received message and transmits to the destination the $(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}) \cdot n_1$ bits remaining from the previous block. This transmission is reliable, as (43) implies that
I ( Y SR ; Y ^ SR | Y SD ) C RD · n 1 = C RD · n 2 .
At this point, the destination can reconstruct the second block by using the message received from the source. Furthermore, it can also reconstruct the first block by using the side information previously received from the source and the extra I ( Y SR ; Y ^ SR Y SD ) · n 1 bits received from the relay (partly in the first and partly in the second block).
The overall block length is n = n 1 + n 2 = ( 1 + α ) n 1 and the achievable rate is
R = R 1 + α R 2 1 + α ,
which approaches from below
C RD · I ( X S ; Y ^ SR , Y SD ) + I ( Y SR ; Y ^ SR | Y SD ) C RD I ( X S ; Y SD ) I ( Y SR ; Y ^ SR | Y SD ) .
Note that the expression (46) coincides with (33) when I ( X S ; Y SR ) < I ( X S ; Y SD ) .
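The same kind of check applies to the second case: the expression (46) should match (R_1′ + α′ R_2′)/(1 + α′) with α′ defined in (43). Again, the numbers below are hypothetical placeholders, chosen only so that α′ > 0.

```python
# Numerical sanity check for the case I(X_S; Y_SR) < I(X_S; Y_SD):
# (R1p + alphap*R2p)/(1 + alphap) matches the closed form (46).
# All values below are hypothetical.

R1p = 0.40    # I(X_S; Yhat_SR, Y_SD): rate of the first (CF) block
R2p = 0.15    # I(X_S; Y_SD): rate of the second block
I_WZ = 0.50   # I(Y_SR; Yhat_SR | Y_SD): Wyner-Ziv rate
C_RD = 0.20   # capacity of the relay-destination link

alphap = (I_WZ - C_RD) / C_RD   # block-length ratio n2'/n1', as in (43)

R_blockwise = (R1p + alphap * R2p) / (1 + alphap)
R_closed = (C_RD * R1p + (I_WZ - C_RD) * R2p) / I_WZ

assert abs(R_blockwise - R_closed) < 1e-12
print(round(R_blockwise, 4))  # both evaluate to 0.25 here
```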
Clearly, the coding scheme described so far can be implemented with codes that are suitable for compress-and-forward and for decode-and-forward. Hence, we can employ the polar coding schemes for compress-and-forward and for decode-and-forward presented in [24]. However, polar codes require block lengths n_1 and n_2 (or n_1′ and n_2′) that are powers of two, which puts a constraint on the possible values of α = n_2 / n_1 (or α′ = n_2′ / n_1′). To remove this constraint and achieve the rate (33) for any α, it suffices to use the punctured polar codes described in ([36] Theorem 1). □
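To illustrate the constraint: if n_1 = 2^a and n_2 = 2^b, the chaining ratio α = n_2/n_1 is forced to be a power of two, so an arbitrary target ratio can only be approximated. The target value below is a hypothetical example, not a value from the paper.

```python
# With plain polar codes, n1 = 2**a and n2 = 2**b, so the chaining ratio
# alpha = n2/n1 can only take values that are powers of two.
target_alpha = 0.6  # hypothetical value prescribed by the rate optimization

admissible = [2.0 ** k for k in range(-6, 7)]        # 1/64, ..., 64
best = min(admissible, key=lambda a: abs(a - target_alpha))

print(best)  # nearest admissible ratio: 0.5
```

Punctured polar codes ([36] Theorem 1) remove exactly this restriction and allow any α.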

4. Numerical Results

Let us consider the special case of the erasure relay channel. In Figure 4, we compare the achievable rate (36) of our scheme with the existing upper and lower bounds, i.e., the cut-set upper bound (3) (which coincides with the improved bound (5)), the decode-and-forward lower bound (16), and the compress-and-forward lower bound (27). We consider two choices of the pair (ε_SD, ε_SR): (ε_SD, ε_SR) = (0.85, 0.5) for the plot on the left of Figure 4, and (ε_SD, ε_SR) = (0.4, 0.2) for the plot on the right. We plot the various bounds as functions of C_RD. In both settings, our scheme outperforms both decode-and-forward and compress-and-forward for an interval of values of C_RD. As C_RD increases, the improvement guaranteed by our strategy decreases, until eventually the performance of our scheme is matched by compress-and-forward.
For a general primitive relay channel, it is not immediate how the partial decode-compress-and-forward rate given in (28) compares with our new rate given in (33): the partial decode-compress-and-forward scheme involves three auxiliary random variables (U, V, Ŷ_SR) and a maximization over the complex joint distribution expressed in (29). Thus, one immediate advantage of our new rate is that it is easier to compute, since the proposed lower bound involves only one auxiliary random variable (Ŷ_SR). Even if we simplify the partial decode-compress-and-forward rate as in (31), the formula remains harder to evaluate (two auxiliary random variables: U, Ŷ_SR). Although a full optimization over all parameters is very challenging, we have specialized (31) to the setting of the erasure relay channel and, for fairness of comparison with the other schemes, we have considered the case in which Ŷ_SR is obtained from Y_SR via a BEC(ε̂_R). Then, by performing the maximization numerically over ε̂_R and over all the auxiliary random variables U s.t. |U| ≤ 2, the achievable rate of partial decode-compress-and-forward does not improve upon decode-and-forward and compress-and-forward. Therefore, in this setting, partial decode-compress-and-forward is strictly worse than our proposed scheme.
In [25], for ε_SD = 0.85, ε_SR = 0.5, and C_RD = 0.99125, the proposed soft decode-and-forward strategy based on LDPC codes achieves a rate of 0.507, while both decode-and-forward and compress-and-forward achieve a rate of 0.5. Our new coding strategy is reliable for rates up to 0.545; hence, it outperforms all existing lower bounds. As a reference, note that in this setting the cut-set bound is 0.575.
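Two of the values quoted above can be reproduced directly. This is a minimal sketch assuming the standard closed forms for the erasure relay channel with independent erasures and uniform binary inputs: cut-set bound min(1 − ε_SR · ε_SD, 1 − ε_SD + C_RD) and decode-and-forward bound min(1 − ε_SR, 1 − ε_SD + C_RD); these expressions are assumptions of the sketch rather than copies of Equations (3) and (16).

```python
# Reproduce the cut-set and decode-and-forward values quoted above for the
# erasure relay channel (independent erasures, uniform binary input assumed).
eps_SD, eps_SR, C_RD = 0.85, 0.5, 0.99125

cut_set = min(1 - eps_SR * eps_SD, 1 - eps_SD + C_RD)  # cut-set bound
df = min(1 - eps_SR, 1 - eps_SD + C_RD)                # decode-and-forward

print(round(cut_set, 6))  # 0.575, matching the value in the text
print(round(df, 6))       # 0.5, matching the value in the text
```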

5. Conclusions

We have proposed a new coding paradigm for the primitive relay channel that combines compress-and-forward and decode-and-forward by means of a chaining construction. The achievable rates obtained by our scheme surpass those of state-of-the-art coding approaches (compress-and-forward, decode-and-forward, and the soft decode-and-forward strategy of [25]). Our coding paradigm is general in the sense that we treat decode-and-forward and compress-and-forward as existing primitives. For this reason, any coding scheme that can be used to implement decode-and-forward/compress-and-forward can also be used to implement our new strategy. Polar codes are one notable example, since polar coding schemes for decode-and-forward and compress-and-forward have been developed; see [20,21,22,23,24]. This leads to a scheme with the typical attractive features of polar codes, i.e., quasi-linear encoding/decoding complexity and fast decay of the error probability. A detailed analysis of the finite-length performance of polar codes for our strategy (as well as of polar codes for decode-and-forward and compress-and-forward) is an interesting direction for future research.
In the numerical simulations, we consider the special case of the erasure relay channel. In this setting, the upper bounds presented in Section 2.2 do not provide an improvement over the cut-set bound. An interesting avenue for future work is to study the performance of our strategy in scenarios where the cut-set bound is not tight (e.g., as in [11,12,35]). For example, in [35], the model also includes a state sequence, and the partial decode-compress-and-forward strategy crucially takes advantage of it by optimally adapting its transmission to the dependence of the orthogonal channels on the state sequence. In this paper, we do not consider such a state sequence, and it is not obvious how to adapt our results to the model of [35].

Author Contributions

Conceptualization, M.M., S.H.H. and R.U.; Formal analysis, M.M., S.H.H. and R.U.; Funding acquisition, M.M. and R.U.; Supervision, R.U.; Visualization, S.H.H.; Writing—review & editing, M.M.

Funding

M.M. was supported by an Early Postdoc. Mobility fellowship from the Swiss NSF. R.U. was supported by Grant No. 200021_156672/1 of the Swiss NSF.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. van der Meulen, E.C. Three-terminal communication channels. Adv. Appl. Probab. 1971, 3, 120–154. [Google Scholar] [CrossRef]
  2. Cover, T.; Gamal, A.E. Capacity theorems for the relay channel. IEEE Trans. Inform. Theory 1979, 25, 572–584. [Google Scholar] [CrossRef]
  3. Schein, B.; Gallager, R. The Gaussian parallel relay network. In Proceedings of the 2000 IEEE International Symposium on Information Theory, Sorrento, Italy, 25–30 June 2000; p. 22. [Google Scholar]
  4. Avestimehr, A.S.; Diggavi, S.N.; Tse, D.N.C. Wireless Network Information Flow: A Deterministic Approach. IEEE Trans. Inform. Theory 2011, 57, 1872–1905. [Google Scholar] [CrossRef] [Green Version]
  5. Nazer, B.; Gastpar, M. Compute-and-forward: Harnessing interference through structured codes. IEEE Trans. Inform. Theory 2011, 57, 6463–6486. [Google Scholar] [CrossRef]
  6. Lim, S.H.; Kim, Y.H.; Gamal, A.E.; Chung, S.Y. Noisy network coding. IEEE Trans. Inform. Theory 2011, 57, 3132–3152. [Google Scholar] [CrossRef]
  7. Minero, P.; Lim, S.H.; Kim, Y.H. A unified approach to hybrid coding. IEEE Trans. Inform. Theory 2015, 61, 1509–1523. [Google Scholar] [CrossRef]
  8. Zahedi, S. On Reliable Communication over Relay Channels. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2005. [Google Scholar]
  9. Gamal, A.E.; Aref, M. The capacity of the semideterministic relay channel. IEEE Trans. Inform. Theory 1982, 28, 536. [Google Scholar] [CrossRef]
  10. Kim, Y.H. Capacity of a class of deterministic relay channels. IEEE Trans. Inform. Theory 2008, 54, 1328–1329. [Google Scholar] [CrossRef]
  11. Zhang, Z. Partial converse for a relay channel. IEEE Trans. Inform. Theory 1988, 34, 1106–1110. [Google Scholar] [CrossRef]
  12. Aleksic, M.; Razaghi, P.; Yu, W. Capacity of a class of modulo-sum relay channels. IEEE Trans. Inform. Theory 2009, 55, 921–930. [Google Scholar] [CrossRef]
  13. Xue, F. A new upper bound on the capacity of a primitive relay channel based on channel simulation. IEEE Trans. Inform. Theory 2014, 60, 4786–4798. [Google Scholar] [CrossRef]
  14. Wu, X.; Özgür, A.; Xie, L.L. Improving on the Cut-Set Bound via Geometric Analysis of Typical Sets. IEEE Trans. Inform. Theory 2017, 63, 2254–2277. [Google Scholar] [CrossRef]
  15. Wu, X.; Özgür, A. Improving on the cut-set bound for general primitive relay channels. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1675–1679. [Google Scholar]
  16. Wu, X.; Özgür, A. Cut-set bound is loose for Gaussian relay networks. In Proceedings of the Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September 2015; pp. 1135–1142. [Google Scholar]
  17. Gamal, A.E.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  18. Kramer, G. Topics in Multi-User Information Theory. Found. Trends Commun. Inf. Theory 2007, 4, 265–444. [Google Scholar] [CrossRef]
  19. Arıkan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inform. Theory 2009, 55, 3051–3073. [Google Scholar] [CrossRef]
  20. Andersson, M.; Rathi, V.; Thobaben, R.; Kliewer, J.; Skoglund, M. Nested polar codes for wiretap and relay channels. IEEE Commun. Lett. 2010, 14, 752–754. [Google Scholar] [CrossRef]
  21. Karzand, M. Polar codes for degraded relay channels. In Proceedings of the International Zurich Seminar on Communication, Zurich, Switzerland, 29 February–2 March 2012; pp. 59–62. [Google Scholar]
  22. Blasco-Serrano, R.; Thobaben, R.; Andersson, M.; Rathi, V.; Skoglund, M. Polar Codes for Cooperative Relaying. IEEE Trans. Commun. 2012, 60, 3263–3273. [Google Scholar] [CrossRef]
  23. Karas, D.; Pappi, K.; Karagiannidis, G. Smart decode-and-forward relaying with polar codes. IEEE Wirel. Comm. Lett. 2014, 3, 62–65. [Google Scholar] [CrossRef]
  24. Wang, L. Polar coding for relay channels. In Proceedings of the International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 1532–1536. [Google Scholar]
  25. Bennatan, A.; Shamai, S.; Calderbank, A.R. Soft-Decoding-Based Strategies for Relay and Interference Channels: Analysis and Achievable Rates Using LDPC Codes. IEEE Trans. Inform. Theory 2014, 60, 1977–2009. [Google Scholar] [CrossRef]
  26. Kim, Y.H. Coding techniques for primitive relay channels. In Proceedings of the Allerton Conference on Communication, Control, and Computing, Monticello, NY, USA, 26–28 September 2007; pp. 26–28. [Google Scholar]
  27. Hassani, S.H.; Urbanke, R. Universal Polar Codes. arXiv 2013, arXiv:1307.7223. [Google Scholar]
  28. Şaşoğlu, E.; Vardy, A. A new polar coding scheme for strong security on wiretap channels. In Proceedings of the IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; pp. 1117–1121. [Google Scholar]
  29. Mondelli, M.; Hassani, S.H.; Sason, I.; Urbanke, R. Achieving Marton’s Region for Broadcast Channels Using Polar Codes. IEEE Trans. Inform. Theory 2015, 61, 783–800. [Google Scholar] [CrossRef]
  30. Chou, R.A.; Bloch, M.R. Polar Coding for the Broadcast Channel With Confidential Messages: A Random Binning Analogy. IEEE Trans. Inform. Theory 2016, 62, 2410–2429. [Google Scholar] [CrossRef] [Green Version]
  31. Mondelli, M.; Hassani, S.H.; Sason, I.; Urbanke, R. How to achieve the capacity of asymmetric channels. IEEE Trans. Inform. Theory 2018, 64, 3371–3393. [Google Scholar] [CrossRef]
  32. Gad, E.E.; Li, Y.; Kliewer, J.; Langberg, M.; Jiang, A.; Bruck, J. Asymmetric Error Correction and Flash-Memory Rewriting using Polar Codes. IEEE Trans. Inform. Theory 2016, 62, 4024–4038. [Google Scholar] [CrossRef]
  33. Wei, Y.P.; Ulukus, S. Polar Coding for the General Wiretap Channel With Extensions to Multiuser Scenarios. IEEE J. Sel. Areas Commun. 2016, 34, 278–291. [Google Scholar]
  34. Mondelli, M.; Hassani, S.H.; Urbanke, R. A New Coding Paradigm for the Primitive Relay Channel. In Proceedings of the IEEE International Symposium on Information Theory, Vail, CO, USA, 17–22 June 2018; pp. 351–355. [Google Scholar] [CrossRef]
  35. Aguerri, I.E.; Gündüz, D. Capacity of a class of state-dependent orthogonal relay channels. IEEE Trans. Inform. Theory 2016, 62, 1280–1295. [Google Scholar] [CrossRef]
  36. Hong, S.N.; Hui, D.; Marić, I. Capacity-Achieving Rate-Compatible Polar Codes. IEEE Trans. Inform. Theory 2017, 63, 7620–7632. [Google Scholar] [CrossRef] [Green Version]
Figure 1. General relay channel.
Figure 2. Primitive relay channel: relay channel with orthogonal receiver components.
Figure 3. The erasure relay channel: primitive relay channel in which the link from source to relay is a BEC ( ε SR ) and the link from source to destination is a BEC ( ε SD ) .
Figure 4. Comparison between the achievable rate provided by our strategy and the existing upper and lower bounds. We use “CF” and “DF” as abbreviations for “compress-and-forward” and “decode-and-forward”, respectively.
