Article

Multiplexing Gains under Mixed-Delay Constraints on Wyner’s Soft-Handoff Model

1 Laboratoire Traitement et Communication de l’Information (LTCI), Telecom Paris, Institut polytechnique de Paris, 91120 Palaiseau, France
2 Department of Electrical Engineering, Technion–Israel Institute of Technology, Haifa 32000, Israel
* Authors to whom correspondence should be addressed.
Entropy 2020, 22(2), 182; https://doi.org/10.3390/e22020182
Submission received: 6 January 2020 / Revised: 29 January 2020 / Accepted: 30 January 2020 / Published: 5 February 2020
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives)

Abstract

This paper analyzes the multiplexing gains (MG) achievable over Wyner’s soft-handoff model under mixed-delay constraints, that is, when delay-sensitive and delay-tolerant data are simultaneously transmitted over the network. In the considered model, delay-sensitive data cannot participate in, or profit in any way from, transmitter or receiver cooperation, but delay-tolerant data can. Cooperation for delay-tolerant data takes place over rate-limited links and is limited to a fixed number of cooperation rounds. For the described setup, inner and outer bounds are derived on the set of MG pairs that are simultaneously achievable for delay-sensitive and delay-tolerant data. The bounds are tight in special cases and allow us to draw the following conclusions. For large cooperation rates, and when both transmitters and receivers can cooperate, it is possible to simultaneously attain the maximum MG for delay-sensitive messages and the maximum sum MG for all messages. For comparison, in scheduling schemes (also called time-sharing schemes), the largest achievable sum MG decreases linearly with the MG of delay-sensitive messages. A similar linear decrease is proved for any coding scheme, not only for scheduling schemes, if only transmitters or only receivers can cooperate (but not both) and if delay-sensitive messages have moderate MG. In contrast, if the MG of delay-sensitive messages is small, the maximum sum MG can be achieved even with only transmitter or only receiver cooperation. To summarise, when cooperation rates are high and both transmitters and receivers can cooperate, or when delay-sensitive messages have small MG, transmitting delay-sensitive messages causes no penalty on the sum MG. In other regimes, this penalty increases proportionally to the delay-sensitive MG, in the sense that increasing the delay-sensitive MG by Δ penalises the largest achievable delay-tolerant MG by 2Δ and thus the sum MG by Δ.

1. Introduction

One of the major challenges of today’s wireless communication networks is to design coding schemes for transmission of heterogeneous traffic types. For example, different data streams (pertaining to different applications) can be subject to different delay constraints. Such mixed delay constraints in wireless networks have recently been studied in References [1,2,3,4,5]. In particular, Reference [1] proposes a broadcasting approach over a single-antenna fading channel to communicate a stream of “fast” messages, which have to be sent over a single coherence block, and a stream of “slow” messages, which can be sent over multiple blocks. A similar approach was taken in Reference [2] but for a broadcast scenario with K users. Instead of superposing “slow” on “fast” messages, this latter work proposes a scheduling approach to give preference to the communication of “fast” messages. A scheduling algorithm that prioritizes “fast” messages over “slow” messages was also proposed in Reference [3]. In particular, “fast” messages can be stored in the buffer for only one scheduling period.
A related scenario was introduced in Reference [4] for cloud radio access networks (C-RAN). In Reference [4], the messages sent by the mobile users close to the base stations (BS) are directly decoded at these BSs, whereas messages from users located further away are decoded at the cloud processor. In our terminology, the messages sent by close-by users are “fast” messages because they incur smaller decoding delay, and the messages sent by further-away users are “slow” messages because their decoding is performed at the cloud processor and thus takes more time. In the spirit of this interpretation, we considered a similar approach in Reference [5], but where in our setup each user can send both “fast” and “slow” messages, and “fast” messages have to be decoded immediately at the BS, whereas “slow” messages are decoded at the central processor. Moreover, in Reference [5] the channel from the mobile users to the BSs is modelled as a fading channel. The results in Reference [5] show that for small fronthaul capacities from the BSs to the cloud processor it is beneficial, in terms of sum rate, to send both “fast” and “slow” messages. However, when the rate of “fast” messages is already large, increasing it further deteriorates the sum rate of the system. In this regime, the stringent delay constraints on the “fast” messages penalise the overall performance.
In this paper, we consider a cellular network without a cloud processor, but where neighbouring BSs and/or neighbouring mobile users can cooperate over dedicated cooperation links that do not interfere with the main communication channel. These cooperation links can model the backhauls between BSs or Bluetooth or microwave links between neighbouring mobiles. We consider a scenario in which the transmitters have both delay-sensitive and delay-tolerant messages to transmit to their corresponding receivers. In our setup, delay-sensitive messages, the “fast” messages, cannot profit from cooperation due to their stringent delay constraints. That means they cannot participate in the transmitter (Tx) cooperation phase and they also have to be decoded prior to the receiver (Rx) cooperation phase. Delay-tolerant messages, the “slow” messages, can profit from both Tx- and Rx-cooperation. This specific problem formulation has strong similarity with (and was also inspired by) the model studied by Huleihel and Steinberg [6,7], where cooperation links may be absent and some of the messages are only sent if these links are present.
The focus of this paper is on the pairs of Multiplexing Gains (MG), also called degrees of freedom or capacity prelogs, that are simultaneously achievable for the “fast” and “slow” messages in our setup. We consider Wyner’s soft-handoff model, also known as the one-dimensional Wyner model [8,9]. Notice that cooperation in this network has been studied in various works, including [10,11,12,13,14,15,16,17]. The focus of [10] is to identify associations between Rxs and Txs that maximize the average MG across both uplink and downlink sessions, using cooperative transmission and reception schemes between Txs. More closely related are the works in [11,12]. In particular, [12] is related to our setup when only “slow” messages are transmitted. In [12], neighbouring Txs can cooperate with each other over a fixed number of rounds and neighbouring Rxs can cooperate with each other over a fixed number of rounds. It is proved that for small cooperation prelogs, a single cooperation round at the Txs or at the Rxs achieves the same MG as when the number of cooperation rounds is unlimited. On the other hand, for large cooperation prelogs, the maximum per-user MG increases with every additional cooperation round that is permitted either at the Txs or at the Rxs.
In the present work, we thus extend the work in Reference [12] to accommodate not only “slow” messages that can tolerate the delays from cooperation, but also “fast” messages that have to be encoded and decoded without further delay and thus cannot profit from cooperation. Notice that the standard approach to combining the transmissions of delay-tolerant and delay-sensitive data is to apply a smart scheduling algorithm, and thus to time-share a scheme for only delay-tolerant data with a scheme for only delay-sensitive data. Since the maximum MG attained for only delay-tolerant data is larger than the maximum MG attained for only delay-sensitive data (fewer constraints are imposed on this transmission), this approach can achieve the maximum sum MG only when exclusively sending delay-tolerant data. More specifically, for scheduling schemes, the sum MG decreases linearly with the MG of delay-sensitive data. In this paper, we determine the set of all achievable delay-sensitive and delay-tolerant MG pairs, that is, the optimal MG region, as a function of the prelogs of the cooperation links and the total number of cooperation rounds allowed for “slow” messages. The obtained results show that (for Wyner’s soft-handoff model) when only Txs or only Rxs can cooperate, transmitting “fast” messages at low MG does not penalise the sum MG of “slow” and “fast” messages. In contrast, when the MG of “fast” messages is large, this is not the case, and increasing the MG of “fast” messages by Δ comes at the expense of decreasing the MG of “slow” messages by 2Δ and the sum MG by Δ. When the cooperation rates are sufficiently large and both Txs and Rxs can cooperate, it is possible to accommodate the largest possible MG for delay-sensitive messages without decreasing the maximum sum MG. The stringent delay constraints thus do not harm the overall performance in this scenario.
To achieve the described performance, we propose a new coding scheme where every second Tx sends a “fast” message and the other Txs send a “slow” message or no message at all. Due to the structure of Wyner’s soft-handoff network, communication of “fast” messages is only interfered by transmissions of “slow” messages. This interference can thus be described during the Tx-cooperation phase and precancelled at the Txs sending the “fast” messages. On the other hand, Rxs that have to decode “fast” messages do so without further delay and describe their decoded messages during the Rx-cooperation phase to their adjacent Rxs. Since we alternate the transmission of “fast” and “slow” messages across Tx/Rx pairs, these adjacent Rxs decode “slow” messages. With the obtained cooperation messages, they can thus first subtract the interference from the “fast” messages and then decode their own “slow” messages. The described mechanism allows interference-free transmission of “fast” messages on every second Tx/Rx pair without disturbing the transmission of “slow” messages. Employing an optimal coding scheme for the transmission of “slow” messages on all other Tx/Rx pairs then gives the same overall performance as when using an optimal coding scheme to send a “slow” message on each and every Tx/Rx pair. This explains why with Tx- and Rx-cooperation the maximum sum MG can be attained even with a “fast” MG of L/2, where L denotes the number of antennas at each Tx and Rx. Notice that this is the largest MG when only “fast” messages but no “slow” messages are transmitted.
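The interference pattern that makes this alternating assignment work can be sketched in a few lines of Python. The numbering and the odd/even labelling below are our own assumptions for illustration, not taken from the paper:

```python
# Sketch: in Wyner's soft-handoff model, Rx k hears only Tx k and Tx k-1.
# With the alternating assignment described above -- here, odd Txs send
# "fast" messages and even Txs send "slow" messages (an assumed labelling) --
# every Rx decoding a "fast" message is interfered only by a "slow" Tx.
K = 8
assignment = {k: ("F" if k % 2 == 1 else "S") for k in range(1, K + 1)}

def interferer(k):
    """Index of the only interfering Tx at Rx k (Tx 0 does not exist)."""
    return k - 1

fast_rxs = [k for k, m in assignment.items() if m == "F"]
# Type of the interfering Tx at each "fast" Rx (None for Rx 1, which is
# interference-free because X_{0,t} = 0):
interfering_types = {k: assignment.get(interferer(k)) for k in fast_rxs}
```

Every entry of `interfering_types` is either `None` or `"S"`, which is why the interference at the “fast” Rxs can be fully described (and cancelled) from the “slow” messages alone.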

1.1. Organization

The rest of this paper is organised as follows. We end this section with some remarks on notation. The following Section 2 describes the problem setup. Section 3 presents our results when only transmitters or only receivers can cooperate, and Section 4 those when both transmitters and receivers can cooperate. Section 5 concludes the main body of the paper. Technical proofs of the converse results are deferred to the appendices.

1.2. Notation

We use the shorthand notations “Rx” for “Receiver” and “Tx” for “Transmitter”. The set of all integers is denoted by $\mathbb{Z}$, the set of positive integers by $\mathbb{Z}^+$ and the set of real numbers by $\mathbb{R}$. For other sets we use calligraphic letters, for example, $\mathcal{X}$. Random variables are denoted by uppercase letters, for example, $X$, and their realizations by lowercase letters, for example, $x$. For vectors we use boldface notation, that is, uppercase boldface letters such as $\mathbf{X}$ for random vectors and lowercase boldface letters such as $\mathbf{x}$ for deterministic vectors. Matrices are depicted with a sans-serif font, for example, $\mathsf{H}$. We also write $X^n$ for the tuple of random variables $(X_1,\ldots,X_n)$ and $\mathbf{X}^n$ for the tuple of random vectors $(\mathbf{X}_1,\ldots,\mathbf{X}_n)$.

2. Problem Description

Consider Wyner’s soft-handoff network with $K$ Txs and $K$ Rxs that are aligned on two parallel lines so that each Tx $k$ has two neighbours, Tx $k-1$ and Tx $k+1$, and each Rx $k$ has two neighbours, Rx $k-1$ and Rx $k+1$. Interference is short-range in the sense that the signal sent by Tx $k$ is observed only by Rx $k$ and by the neighbouring Rx $k+1$ (see Figure 1). Let Txs and Rxs be equipped with $L > 0$ antennas each. The time-$t$ channel output at Rx $k$ is then described as
$$\mathbf{Y}_{k,t} = \mathsf{H}_{k,k}\mathbf{X}_{k,t} + \mathsf{H}_{k-1,k}\mathbf{X}_{k-1,t} + \mathbf{Z}_{k,t}, \qquad (1)$$
where $\mathbf{X}_{k,t}$ and $\mathbf{X}_{k-1,t}$ are the real $L$-dimensional vectors sent by Tx $k$ and Tx $k-1$ at time $t$; $\{\mathbf{Z}_{k,t}\}$ is a noise sequence consisting of i.i.d. standard Gaussian vectors; $\mathsf{H}_{k,k}$ and $\mathsf{H}_{k-1,k}$ are fixed full-rank channel matrices; and $\mathbf{X}_{0,t} = \mathbf{0}$ for all $t$.
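As a quick illustration of the channel output equation above, the following self-contained Python sketch simulates the received signals for a toy network with single-antenna terminals ($L=1$); all numerical values are illustrative only, not from the paper:

```python
import random

# Toy simulation of Y_k = H_{k,k} X_k + H_{k-1,k} X_{k-1} + Z_k for L = 1
# antennas and a short blocklength (assumed example values).
random.seed(0)
K, n = 4, 3
H_direct = [1.0 + random.random() for _ in range(K + 1)]  # stands in for H_{k,k}
H_cross = [1.0 + random.random() for _ in range(K + 1)]   # stands in for H_{k-1,k}

X = {k: [random.gauss(0, 1) for _ in range(n)] for k in range(1, K + 1)}
X[0] = [0.0] * n  # convention X_{0,t} = 0: Rx 1 sees no interference

Y = {}
for k in range(1, K + 1):
    Z = [random.gauss(0, 1) for _ in range(n)]  # i.i.d. standard Gaussian noise
    Y[k] = [H_direct[k] * X[k][t] + H_cross[k] * X[k - 1][t] + Z[t]
            for t in range(n)]
```

Note how each Rx $k$ only combines the inputs of Tx $k$ and Tx $k-1$, which is the short-range interference structure exploited throughout the paper.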
Each Tx $k \in \{1,\ldots,K\}$ wishes to send a pair of independent messages $M_k^{(F)}$ and $M_k^{(S)}$ to Rx $k$. The “fast” message $M_k^{(F)}$ is uniformly distributed over the set $\mathcal{M}_k^{(F)} \triangleq \{1,\ldots,2^{nR_k^{(F)}}\}$ and needs to be decoded subject to a stringent delay constraint, as we explain shortly. The “slow” message $M_k^{(S)}$ is uniformly distributed over $\mathcal{M}_k^{(S)} \triangleq \{1,\ldots,2^{nR_k^{(S)}}\}$ and is subject to a less stringent decoding delay constraint. Here, $n$ denotes the blocklength of transmission and $R_k^{(F)}$ and $R_k^{(S)}$ the rates of transmission of the “fast” and “slow” messages.
We consider three different cooperation scenarios:
  • Neighbouring Txs cooperate by communicating during $D_{\mathrm{Tx}} > 0$ rounds over dedicated cooperation links. Rxs cannot cooperate, and so the number of Rx-cooperation rounds is $D_{\mathrm{Rx}} = 0$. (This scenario is termed “Tx-cooperation Only”.)
  • Neighbouring Rxs cooperate by communicating during $D_{\mathrm{Rx}} > 0$ rounds over dedicated cooperation links. Txs cannot cooperate, and so the number of Tx-cooperation rounds is $D_{\mathrm{Tx}} = 0$. (Termed “Rx-cooperation Only”.)
  • Neighbouring Txs cooperate during $D_{\mathrm{Tx}} > 0$ rounds over dedicated cooperation links and neighbouring Rxs cooperate during $D_{\mathrm{Rx}} > 0$ rounds. (Termed “Tx- and Rx-cooperation”.)
The cooperative communication is subject to a total delay constraint
$$D_{\mathrm{Tx}} + D_{\mathrm{Rx}} \leq D, \qquad (2)$$
where $D > 0$ is a given parameter of the system. In the “Tx-cooperation Only” scenario, $D_{\mathrm{Rx}}$ has to be 0 and thus $D_{\mathrm{Tx}} \leq D$. Similarly, in the “Rx-cooperation Only” scenario, $D_{\mathrm{Tx}} = 0$ and $D_{\mathrm{Rx}} \leq D$. For “Tx- and Rx-cooperation”, the values of $D_{\mathrm{Tx}}$ and $D_{\mathrm{Rx}}$ are design parameters and can be chosen arbitrarily such that (2) is satisfied. As we will see, in our setup the cooperative communication only concerns “slow” messages, because “fast” messages are subject to a stringent delay constraint and thus have to be transmitted and decoded without further delay.
We describe the encoding at the Txs. In the case of Tx-cooperation, neighbouring Txs can communicate with each other over dedicated noise-free, but rate-limited, links. Communication takes place over $D_{\mathrm{Tx}} > 0$ rounds and can depend only on the “slow” messages but not on the “fast” messages. In each cooperation round $j \in \{1,\ldots,D_{\mathrm{Tx}}\}$, Tx $k$ produces a cooperation message $T_{k\to\ell}^{(j)}$ for each of its neighbours $\ell \in \{k-1,k+1\}$ by computing
$$T_{k\to\ell}^{(j)} = \xi_{k\to\ell}^{(n)}\Big(M_k^{(S)},\ \big\{T_{\ell'\to k}^{(1)},\ldots,T_{\ell'\to k}^{(j-1)}\big\}_{\ell'\in\{k-1,k+1\}}\Big), \quad j \in \{1,\ldots,D_{\mathrm{Tx}}\},\ \ell \in \{k-1,k+1\}, \qquad (3)$$
for some function $\xi_{k\to\ell}^{(n)}$ on appropriate domains. Tx $k$ sends the messages $T_{k\to\ell}^{(1)},\ldots,T_{k\to\ell}^{(D_{\mathrm{Tx}})}$ over the cooperation link to Tx $\ell \in \{k-1,k+1\}$. The rate-limitation on the cooperation link imposes
$$\sum_{j=1}^{D_{\mathrm{Tx}}} H\big(T_{k\to\ell}^{(j)}\big) \leq \mu_{\mathrm{Tx}} \cdot \frac{n}{2}\log(P), \quad k \in \{1,\ldots,K\},\ \ell \in \{k-1,k+1\}, \qquad (4)$$
for a given $\mu_{\mathrm{Tx}} > 0$.
Tx $k$ finally computes its channel inputs $\mathbf{X}_k^n = (\mathbf{X}_{k,1},\ldots,\mathbf{X}_{k,n}) \in \mathbb{R}^{L\times n}$ as a function of its “fast” and “slow” messages and of all the $2D_{\mathrm{Tx}}$ cooperation messages that it obtained from its neighbouring transmitters:
$$\mathbf{X}_k^n = \tilde{f}_k^{(n)}\Big(M_k^{(F)},\ M_k^{(S)},\ \big\{T_{\ell\to k}^{(1)},\ldots,T_{\ell\to k}^{(D_{\mathrm{Tx}})}\big\}_{\ell\in\{k-1,k+1\}}\Big). \qquad (5)$$
In the setup without Tx-cooperation, Tx $k$ computes its channel inputs $\mathbf{X}_k^n$ simply as a function of its “fast” and “slow” messages:
$$\mathbf{X}_k^n = f_k^{(n)}\big(M_k^{(F)},\ M_k^{(S)}\big). \qquad (6)$$
In any case (i.e., with and without Tx-cooperation), the channel inputs have to satisfy the average block-power constraint
$$\frac{1}{n}\sum_{t=1}^{n} \|\mathbf{X}_{k,t}\|^2 \leq P, \quad k \in \{1,\ldots,K\}, \qquad (7)$$
almost surely.
We now describe the decoding. In the case of Rx-cooperation, decoding takes place in two phases. During the first fast-decoding phase, each Rx $k$ decodes its intended “fast” message $M_k^{(F)}$ based on its own channel outputs $\mathbf{Y}_k^n = (\mathbf{Y}_{k,1},\ldots,\mathbf{Y}_{k,n}) \in \mathbb{R}^{L\times n}$. So, it produces
$$\hat{M}_k^{(F)} = g_k^{(n)}\big(\mathbf{Y}_k^n\big), \qquad (8)$$
where $g_k^{(n)}$ denotes a decoding function on appropriate domains.
In the subsequent slow-decoding phase, Rxs first communicate with their neighbours during $D_{\mathrm{Rx}} \geq 0$ rounds over dedicated noise-free and rate-limited links, and then they decode their intended “slow” messages based on their outputs and on this exchanged information. Specifically, in each cooperation round $j \in \{1,\ldots,D_{\mathrm{Rx}}\}$, each Rx $k$, for $k \in \{1,\ldots,K\}$, produces a cooperation message $Q_{k\to\ell}^{(j)}$ for each of its neighbours $\ell \in \{k-1,k+1\}$:
$$Q_{k\to\ell}^{(j)} = \psi_{k\to\ell}^{(n)}\Big(\mathbf{Y}_k^n,\ \big\{Q_{\ell'\to k}^{(1)},\ldots,Q_{\ell'\to k}^{(j-1)}\big\}_{\ell'\in\{k-1,k+1\}}\Big), \qquad (9)$$
for an encoding function $\psi_{k\to\ell}^{(n)}$ on appropriate domains. Rx $k$ then sends the messages $Q_{k\to\ell}^{(1)},\ldots,Q_{k\to\ell}^{(D_{\mathrm{Rx}})}$ over the cooperation link to Rx $\ell \in \{k-1,k+1\}$. The rate-limitation on the cooperation link imposes
$$\sum_{j=1}^{D_{\mathrm{Rx}}} H\big(Q_{k\to\ell}^{(j)}\big) \leq \mu_{\mathrm{Rx}} \cdot \frac{n}{2}\log(P), \quad k \in \{1,\ldots,K\},\ \ell \in \{k-1,k+1\}, \qquad (10)$$
for some given $\mu_{\mathrm{Rx}} > 0$.
After the last cooperation round, each Rx $k$ decodes its desired “slow” message as
$$\hat{M}_k^{(S)} = b_k^{(n)}\Big(\mathbf{Y}_k^n,\ \big\{Q_{\ell\to k}^{(1)},\ldots,Q_{\ell\to k}^{(D_{\mathrm{Rx}})}\big\}_{\ell\in\{k-1,k+1\}}\Big), \qquad (11)$$
where $b_k^{(n)}$ denotes a decoding function on appropriate domains.
For each of the three cooperation scenarios, given cooperation prelogs $\mu_{\mathrm{Rx}}, \mu_{\mathrm{Tx}} \geq 0$ and maximum delay $D$, an MG pair $(S^{(F)}, S^{(S)})$ is called achievable if for every positive integer $K$ there exists a sequence of average rates $\{R_K^{(F)}(P), R_K^{(S)}(P)\}_{P>0}$ so that
$$S^{(F)} \leq \varlimsup_{K\to\infty}\, \varlimsup_{P\to\infty}\, \frac{R_K^{(F)}(P)}{\tfrac{1}{2}\log(P)}, \qquad (12)$$
$$S^{(S)} \leq \varlimsup_{K\to\infty}\, \varlimsup_{P\to\infty}\, \frac{R_K^{(S)}(P)}{\tfrac{1}{2}\log(P)}, \qquad (13)$$
and so that for each average rate pair $(R_K^{(F)}(P), R_K^{(S)}(P))$ it is possible to find a sequence (in the blocklength $n$) of encoding, cooperation, and decoding functions satisfying constraints (2), (4), (7), and (10) and with vanishing probability of error:
$$p(\mathrm{error}) \triangleq \Pr\Big[\bigcup_{k\in\{1,\ldots,K\}} \big\{\hat{M}_k^{(F)} \neq M_k^{(F)}\big\} \cup \big\{\hat{M}_k^{(S)} \neq M_k^{(S)}\big\}\Big] \to 0 \quad \text{as } n \to \infty. \qquad (14)$$
The closure of the set of all achievable MG pairs $(S^{(F)}, S^{(S)})$ is called the optimal MG region. In the case of Tx-cooperation only, it is denoted $\mathcal{S}^\star_{\mathrm{Tx}}(\mu_{\mathrm{Tx}}, D)$; in the case of Rx-cooperation only, $\mathcal{S}^\star_{\mathrm{Rx}}(\mu_{\mathrm{Rx}}, D)$; and in the case of Tx- and Rx-cooperation, $\mathcal{S}^\star(\mu_{\mathrm{Tx}}, \mu_{\mathrm{Rx}}, D)$.

3. Rx- or Tx-Cooperation Only

In the following two subsections, we consider the Rx-cooperation only scenario and the Tx-cooperation only scenario. For each scenario we present coding schemes and the optimal MG region. The scenario with both Tx- and Rx-cooperation is treated in Section 4.

3.1. Optimal MG Region and Coding Schemes for Rx-Cooperation Only

Theorem 1
(Optimal Multiplexing Gain Region: Rx-cooperation Only). For any given $\mu_{\mathrm{Rx}} > 0$, the MG region $\mathcal{S}^\star_{\mathrm{Rx}}(\mu_{\mathrm{Rx}}, D)$ is the set of all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying
$$2S^{(F)} + S^{(S)} \leq L, \qquad (15)$$
$$S^{(F)} + S^{(S)} \leq \min\Big\{\frac{L}{2} + \mu_{\mathrm{Rx}},\ L\cdot\frac{2D+1}{2D+2}\Big\}. \qquad (16)$$
Proof. 
The converse to (16) follows by extending the proof in Reference [12] to the multi-antenna case and by noting that the sum MG of “slow” and “fast” messages cannot be larger than the MG of a scenario with only “slow” messages. The converse to (15) is proved in Appendix A. For the achievability, define the following five MG pairs:
$$\big(S^{(F)} = \tfrac{L}{2},\ S^{(S)} = 0\big), \qquad (17a)$$
$$\big(S^{(F)} = 0,\ S^{(S)} = L\cdot\tfrac{2D+1}{2D+2}\big), \qquad (17b)$$
$$\big(S^{(F)} = 0,\ S^{(S)} = \tfrac{L}{2} + \mu_{\mathrm{Rx}}\big), \qquad (17c)$$
$$\big(S^{(F)} = \tfrac{L}{2D+2},\ S^{(S)} = L\cdot\tfrac{2D}{2D+2}\big), \qquad (17d)$$
$$\big(S^{(F)} = \tfrac{L}{2} - \mu_{\mathrm{Rx}},\ S^{(S)} = 2\mu_{\mathrm{Rx}}\big). \qquad (17e)$$
In the following Section 3.1.1 we show that when $\mu_{\mathrm{Rx}} \geq \mu_{\max}$, where
$$\mu_{\max} \triangleq L\cdot\frac{D}{2D+2}, \qquad (18)$$
the MG pairs (17a,b,d) are achievable. When $\mu_{\mathrm{Rx}} < \mu_{\max}$, the MG pairs (17a,c,e) are achievable. The achievability proof of Theorem 1 then follows from simple time-sharing arguments. □
Figure 2 depicts the MG region in Theorem 1 for different values of $\mu_{\mathrm{Rx}}$. When there are only “slow” messages, the maximum MG is $\min\{\frac{L}{2} + \mu_{\mathrm{Rx}},\ L\cdot\frac{2D+1}{2D+2}\}$. Notice that in any scheme, we can replace a “fast” message by a “slow” message. By a rate-transfer argument, the maximum sum MG thus coincides with the maximum “slow” MG. Interestingly, this sum MG remains unchanged whenever the “fast” MG $S^{(F)}$ is below a certain threshold. Mathematically, this is described by the slope of the boundary of the region being equal to $-1$ when
$$S^{(F)} \leq \max\Big\{\frac{L}{2} - \mu_{\mathrm{Rx}},\ \frac{L}{2D+2}\Big\}. \qquad (19)$$
For
$$S^{(F)} > \max\Big\{\frac{L}{2} - \mu_{\mathrm{Rx}},\ \frac{L}{2D+2}\Big\}, \qquad (20)$$
the slope is $-2$. In this latter regime, increasing the MG of “fast” messages by $\Delta$ requires decreasing the MG of “slow” messages by $2\Delta$. There is thus a penalty in sum MG caused by the more stringent delay constraints on “fast” messages.
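The two slopes of the boundary can be checked numerically. The sketch below evaluates the boundary of the region in Theorem 1 with exact rational arithmetic; the helper name `slow_mg_bound` and the parameter values are our own choices for illustration:

```python
from fractions import Fraction as F

def slow_mg_bound(s_f, L, mu_rx, D):
    """Largest S^(S) allowed by Theorem 1 for a given fast MG s_f:
    the minimum of L - 2*s_f (constraint 2S_F + S_S <= L) and
    min{L/2 + mu_rx, L(2D+1)/(2D+2)} - s_f (sum-MG constraint)."""
    sum_cap = min(F(L, 2) + mu_rx, F(L * (2 * D + 1), 2 * D + 2))
    return min(L - 2 * s_f, sum_cap - s_f)

L_, D_, mu = 2, 1, F(1, 4)  # assumed example: L = 2 antennas, D = 1, mu_Rx = 1/4
thr = max(F(L_, 2) - mu, F(L_, 2 * D_ + 2))  # threshold where the slope changes
eps = F(1, 100)
# Below the threshold the boundary loses eps per eps of fast MG (slope -1) ...
drop_below = slow_mg_bound(thr - eps, L_, mu, D_) - slow_mg_bound(thr, L_, mu, D_)
# ... and above it 2*eps per eps of fast MG (slope -2).
drop_above = slow_mg_bound(thr, L_, mu, D_) - slow_mg_bound(thr + eps, L_, mu, D_)
```

With these values `drop_below` equals `eps` and `drop_above` equals `2*eps`, matching the slopes $-1$ and $-2$ discussed above.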

3.1.1. Schemes Proving Achievability of Theorem 1

We prove achievability of the MG pairs in (17).
1. MG pair in (17a): Periodically silence every second Tx. This splits the network into K / 2 non-interfering point-to-point links. Send a “fast” message over each of these links (see Figure 3), but no “slow” message at all. The described scheme achieves the MG pair in (17a) and requires no cooperation rate.
2. MG pairs in (17b,c): Let the Txs only send “slow” messages but no “fast” messages. Under this coding assumption, the setup at hand is a multi-antenna version of the setup in Reference [12], but specialized to 0 Tx-cooperation rounds and $D$ Rx-cooperation rounds. The multi-antenna extension of the scheme proposed in Reference [12] (Section V) can thus be used to achieve the MG pair in (17b) if $\mu_{\mathrm{Rx}} \geq \mu_{\max}$ and the MG pair in (17c) if $\mu_{\mathrm{Rx}} < \mu_{\max}$.
For reference in the following subsection, we briefly review the scheme in Reference [12] (Section V) when specialized to Rx-cooperation only. For details, see Reference [12]. Consider first the case $\mu_{\mathrm{Rx}} \geq \mu_{\max}$. In this case, the scheme periodically silences every $(2D+2)$-nd Tx. This splits the network into smaller subnets, each consisting of $2D+1$ active Txs and $2D+2$ active Rxs. We describe the communication in the first subnet (see also Figure 4); the others are treated in an analogous way.
Each Tx $k \in \{1,\ldots,2D+1\}$ in this first subnet encodes its “slow” message $M_k^{(S)}$ using an $L$-dimensional Gaussian codebook and then sends the resulting codeword using its $L$ Tx-antennas over the channel. Decoding is performed as follows. Rx 1 decodes its desired message using an optimal point-to-point decoding method based on the interference-free channel outputs $\mathbf{Y}_1^n = \mathsf{H}_{1,1}\mathbf{X}_1^n + \mathbf{Z}_1^n$. Then it sends its decoded message $\hat{M}_1^{(S)}$ over the cooperation link to Rx 2 during the first cooperation round. Rxs 2 to $D+1$ apply successive interference cancellation (SIC), where they cancel the interference from the preceding Tx with the cooperation message obtained from their left neighbour. After decoding its intended “slow” message, each Rx $k \in \{2,\ldots,D\}$ sends its decoded message $\hat{M}_k^{(S)}$ over the cooperation link to Rx $k+1$ during cooperation round $k$.
We now describe the decoding at Rxs $D+2,\ldots,2D+2$. Recall that Tx $2D+2$ is silenced. Therefore, Rx $2D+2$ observes the interference-free channel outputs $\mathbf{Y}_{2D+2}^n = \mathsf{H}_{2D+1,2D+2}\mathbf{X}_{2D+1}^n + \mathbf{Z}_{2D+2}^n$. Based on these outputs, Rx $2D+2$ decodes the “slow” message $M_{2D+1}^{(S)}$ intended for Rx $2D+1$ and transmits the decoded message $\hat{M}_{2D+1}^{(S)}$ to this Rx over the cooperation link in round 1. Rxs $D+2$ to $2D+1$ declare the cooperation message that they receive from their right neighbour as their desired message. They also employ SIC to decode the “slow” message intended for the neighbour to their left. Finally, after this decoding step, each Rx $k \in \{D+3,\ldots,2D+2\}$ sends the decoded message $\hat{M}_{k-1}^{(S)}$ over the cooperation link to its left neighbour during cooperation round $2D+3-k$. Figure 4 illustrates the decodings and the conferenced messages.
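The round-by-round message passing just described can be sketched as a small scheduling simulation (our own illustrative code; within one subnet, Rxs are indexed 1 to 2D+2 and Tx 2D+2 is silenced, so Rx 2D+2 has no own message):

```python
def rx_cooperation_schedule(D):
    """Round in which each Rx k of a subnet learns its desired "slow" message,
    following the left-to-right SIC chain (Rxs 1..D+1) and the right-to-left
    chain (Rxs 2D+2..D+2). Round 0 means decoding from own outputs only."""
    rounds = {1: 0}  # Rx 1 decodes from its interference-free outputs
    # Left chain: each Rx k waits one cooperation round for M^_{k-1}.
    for k in range(2, D + 2):
        rounds[k] = rounds[k - 1] + 1
    # Right chain: Rx 2D+2 decodes M_{2D+1} directly and forwards it in round 1;
    # each Rx k in {D+2,...,2D+1} receives its message from its right neighbour,
    # then uses SIC and forwards M^_{k-1} further left.
    forward_round = {2 * D + 2: 0}
    for k in range(2 * D + 1, D + 1, -1):
        rounds[k] = forward_round[k + 1] + 1
        forward_round[k] = rounds[k]
    return rounds

sched = rx_cooperation_schedule(2)  # D = 2: subnet of 6 Rxs (assumed example)
```

For $D = 2$ both chains finish within 2 cooperation rounds, consistent with the total delay constraint $D_{\mathrm{Rx}} \leq D$.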
In the described scheme, $2D+1$ Txs send a “slow” message using an $L$-dimensional Gaussian codebook of power $P$, and all these messages can be decoded based on interference-free outputs. An average “slow” MG of $L\cdot\frac{2D+1}{2D+2}$ is thus achieved in each subnet. Moreover, $2D$ cooperation messages are sent in each subnet, each of prelog equal to that of a “slow” message, that is, $L$. The average cooperation prelog per link is thus $L\cdot\frac{2D}{2(2D+2)} = \mu_{\max}$. If one time-shares $2D+2$ different instances of the described scheme, with a different subset of silenced users in each of them, the overall scheme achieves the MG pair $(S^{(F)} = 0,\ S^{(S)} = L\cdot\frac{2D+1}{2D+2})$ with each cooperation link loaded at average cooperation prelog $\mu_{\max}$.
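This counting argument can be verified with exact fractions (a minimal sketch; the function name and the example values are ours):

```python
from fractions import Fraction as F

def subnet_counts(L, D):
    """Per-user "slow" MG and per-link cooperation prelog of one subnet in the
    Rx-cooperation scheme: 2D+1 active Txs out of 2D+2 users, and 2D
    cooperation messages of prelog L spread over 2(2D+2) link directions."""
    slow_mg = F(L * (2 * D + 1), 2 * D + 2)  # average "slow" MG per user
    mu = F(L * 2 * D, 2 * (2 * D + 2))       # average cooperation prelog per link
    return slow_mg, mu

slow, mu = subnet_counts(L=2, D=3)  # assumed example values
# mu coincides with the formula mu_max = L * D / (2D + 2) from (18):
assert mu == F(2 * 3, 2 * 3 + 2)
```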
When $\mu_{\mathrm{Rx}} < \mu_{\max}$, we can time-share the scheme achieving (17b) with a scheme that deactivates every second Tx and sends “slow” messages over the resulting interference-free links. This latter scheme does not require any cooperation. Time-sharing is done according to the available cooperation prelog $\mu_{\mathrm{Rx}}$: the first scheme, which uses cooperation prelog $\mu_{\max}$, is used over a fraction $\frac{\mu_{\mathrm{Rx}}}{\mu_{\max}}$ of the time and the no-cooperation scheme over the remaining fraction $1 - \frac{\mu_{\mathrm{Rx}}}{\mu_{\max}}$ of the time. The combined scheme then requires cooperation prelog $\mu_{\mathrm{Rx}}$ and achieves the MG pair in (17c).
3. MG pairs in (17d,e): Reconsider the coding scheme that achieves MG pair (17b) and that is described in the previous subsection and illustrated in Figure 4. A close inspection of the scheme reveals that in each subnet, decoding of the message sent by the left-most Tx does not rely on the conferenced information. This first message of each subnet thus satisfies our decoding requirement for “fast” messages.
We propose to apply the above scheme, but to let the first Tx of every subnet (the red Tx in Figure 4) send a “fast” message and the subsequent 2 D Txs of the subnet send “slow” messages. This modified scheme requires the same cooperation prelog μ max as before and it achieves the MG pair in (17d).
For setups where $\mu_{\mathrm{Rx}} < \mu_{\max}$, we propose to time-share the scheme achieving (17d) over a fraction $\frac{\mu_{\mathrm{Rx}}}{\mu_{\max}}$ of the time with the scheme achieving (17a) over the remaining fraction $1 - \frac{\mu_{\mathrm{Rx}}}{\mu_{\max}}$ of the time. This time-sharing scheme has cooperation prelog equal to $\mu_{\mathrm{Rx}}$, and thus respects the constraint (10). Moreover, it achieves the MG pair in (17e).
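The time-sharing arithmetic behind (17e) can be double-checked in a few lines (illustrative code; the parameter values are assumptions):

```python
from fractions import Fraction as F

def time_share(pair_a, pair_b, t):
    """MG pair of a scheme that uses pair_a a fraction t of the time and
    pair_b the remaining fraction 1 - t (plain time-sharing)."""
    return tuple(t * a + (1 - t) * b for a, b in zip(pair_a, pair_b))

L_, D_ = 2, 1                        # assumed example values
mu_max = F(L_ * D_, 2 * D_ + 2)
mu_rx = F(1, 4)                      # assumed available prelog, below mu_max
pair_17d = (F(L_, 2 * D_ + 2), F(L_ * 2 * D_, 2 * D_ + 2))  # (17d)
pair_17a = (F(L_, 2), F(0))                                  # (17a)
mixed = time_share(pair_17d, pair_17a, mu_rx / mu_max)
# Expected to coincide with (17e): (L/2 - mu_rx, 2 * mu_rx).
```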

3.2. Optimal MG Region and Coding Schemes for Tx-Cooperation Only

Theorem 2
(Optimal MG Region: Tx-cooperation Only). For any given $\mu_{\mathrm{Tx}} > 0$, the MG region $\mathcal{S}^\star_{\mathrm{Tx}}(\mu_{\mathrm{Tx}}, D)$ is the set of all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying
$$2S^{(F)} + S^{(S)} \leq L, \qquad (21)$$
$$S^{(F)} + S^{(S)} \leq \min\Big\{\frac{L}{2} + \mu_{\mathrm{Tx}},\ L\cdot\frac{2D+1}{2D+2}\Big\}. \qquad (22)$$
Proof. 
The converse to (22) follows by extending the proof in Reference [12] to the multi-antenna case and by noting that the sum MG cannot be larger than the MG of a scenario with only “slow” messages. The converse to (21) is proved in Appendix B. For the achievability, define the following MG pairs:
$$\big(S^{(F)} = 0,\ S^{(S)} = L\cdot\tfrac{2D+1}{2D+2}\big), \qquad (23a)$$
$$\big(S^{(F)} = 0,\ S^{(S)} = \tfrac{L}{2} + \mu_{\mathrm{Tx}}\big), \qquad (23b)$$
$$\big(S^{(F)} = \tfrac{L}{2D+2},\ S^{(S)} = L\cdot\tfrac{2D}{2D+2}\big), \qquad (23c)$$
$$\big(S^{(F)} = \tfrac{L}{2} - \mu_{\mathrm{Tx}},\ S^{(S)} = 2\mu_{\mathrm{Tx}}\big). \qquad (23d)$$
In the following Section 3.2.1 we show that when $\mu_{\mathrm{Tx}} \geq \mu_{\max}$, the MG pairs (17a) and (23a,c) are achievable, and when $\mu_{\mathrm{Tx}} < \mu_{\max}$, the MG pairs (17a) and (23b,d) are achievable. The achievability proof of the theorem then follows by simple time-sharing arguments. □
Remark 1.
Notice the duality between Theorems 1 and 2, which shows that cooperation is equally beneficial with only Tx- or only Rx-cooperation. As we will see in Section 4, cooperation is, however, more beneficial when both Txs and Rxs can cooperate.

3.2.1. Schemes Proving the Achievability of Theorem 2

We prove achievability of the MG pairs in (23). MG pair (17a) is achievable as described in the previous section (no cooperation is required at all).
1. MG pairs in (23a,b): Let the Txs only send “slow” messages but no “fast” messages. Under this coding assumption, the introduced setup corresponds to a multi-antenna version of the setup in Reference [12], but specialized to $D$ Tx-cooperation rounds and 0 Rx-cooperation rounds. Achievability of MG pairs (23a,b) then follows immediately by specializing Reference [12] (Theorem 1) to Tx-cooperation only. In the following we briefly describe the schemes achieving (23a,b). For details, see Reference [12].
We silence every $(2D+2)$-nd Tx. This splits the network into non-interfering subnets, and in a given subnet we apply the scheme depicted in Figure 5. Specifically, Tx 1 encodes its message using an $L$-dimensional power-$P$ Gaussian point-to-point codebook and sends the resulting codeword $\mathbf{X}_1^n$ using its $L$ Tx-antennas over the channel. It also precodes the obtained sequence with the matrix $\mathsf{H}_{2,2}^{-1}\mathsf{H}_{1,2}$, quantises the precoded sequence $I_1^n \triangleq \mathsf{H}_{2,2}^{-1}\mathsf{H}_{1,2}\mathbf{X}_1^n$ with a rate-$\frac{L}{2}\log(1+P)$ quantiser to obtain a quantisation $\hat{I}_1^n$ at noise level, and sends the resulting quantisation message as a first-round cooperation message to Tx 2. For each $k = 2,\ldots,D+1$, Tx $k$ obtains a round-$(k-1)$ cooperation message from its left neighbour Tx $k-1$ that describes the quantised version $\hat{I}_{k-1}^n$ of $I_{k-1}^n \triangleq \mathsf{H}_{k,k}^{-1}\mathsf{H}_{k-1,k}\mathbf{X}_{k-1}^n$. Based on this message, Tx $k$ reconstructs $\hat{I}_{k-1}^n$, encodes its “slow” message $M_k^{(S)}$ using a power-$P$ dirty-paper code (DPC) that mitigates the interference $\hat{I}_{k-1}^n$, and sends the resulting DPC sequence $\mathbf{X}_k^n$ over the channel. Moreover, it precodes this input sequence with the matrix $\mathsf{H}_{k+1,k+1}^{-1}\mathsf{H}_{k,k+1}$, quantises the precoded sequence $I_k^n \triangleq \mathsf{H}_{k+1,k+1}^{-1}\mathsf{H}_{k,k+1}\mathbf{X}_k^n$ with a rate-$\frac{L}{2}\log(1+P)$ quantiser (for a quantisation at noise level) to obtain $\hat{I}_k^n$, and sends the quantisation message as a round-$k$ cooperation message over the link to its right neighbour. Tx $D+1$ produces its inputs in a similar way, that is, using DPC, but sends no cooperation message at all.
Rx 1 decodes $M_1^{(S)}$ based on the interference-free outputs
$$\mathbf{Y}_1^n = \mathsf{H}_{1,1}\mathbf{X}_1^n + \mathbf{Z}_1^n, \qquad (24)$$
using a standard point-to-point decoding rule. Each Rx $k \in \{2,\ldots,D+1\}$ decodes its desired message $M_k^{(S)}$ based on the premultiplied outputs
$$\mathsf{H}_{k,k}^{-1}\mathbf{Y}_k^n = \mathsf{H}_{k,k}^{-1}\mathsf{H}_{k-1,k}\mathbf{X}_{k-1}^n + \mathbf{X}_k^n + \mathsf{H}_{k,k}^{-1}\mathbf{Z}_k^n, \qquad (25)$$
using an optimal DPC decoding rule. (Recall that $\mathbf{X}_k^n$ was produced as a DPC sequence that mitigates $\hat{I}_{k-1}^n$, a quantised version of $I_{k-1}^n = \mathsf{H}_{k,k}^{-1}\mathsf{H}_{k-1,k}\mathbf{X}_{k-1}^n$.) Since quantisation was performed at noise level, each message $M_1^{(S)},\ldots,M_{D+1}^{(S)}$ can be sent reliably with MG $L$.
Each message $M_k^{(S)}$, with $k \in \{D+3,\ldots,2D+2\}$, is sent over the path Tx $k$ → Tx $k-1$ → Rx $k$. We describe the transmissions in more detail, starting with the last Tx in the subnet. Tx $2D+2$ does not send any channel inputs, that is, $\mathbf{X}_{2D+2}^n = \mathbf{0}$. However, it first encodes its “slow” message $M_{2D+2}^{(S)}$ using an $L$-dimensional Gaussian point-to-point codebook, precodes the codeword $U_{2D+2}^n$ by the matrix $\mathsf{H}_{2D+1,2D+2}^{-1}$, and then quantises this precoded codeword $S_{2D+1}^n \triangleq \mathsf{H}_{2D+1,2D+2}^{-1}U_{2D+2}^n$ with a rate-$\frac{L}{2}\log(1+P)$ quantiser to obtain a quantisation $\hat{S}_{2D+1}^n$ at noise level. It finally sends the quantisation message describing $\hat{S}_{2D+1}^n$ as a first-round cooperation message to Tx $2D+1$. Tx $2D+1$ reconstructs $\hat{S}_{2D+1}^n$ and sends it over the channel, that is, $\mathbf{X}_{2D+1}^n = \hat{S}_{2D+1}^n$.
In a similar way, each Tx k { 2 D + 1 , , D + 2 } encodes its own “slow” message M k ( S ) by means of DPC of power P that mitigates the interference H k 1 , k 1 H k , k X k n of the signal sent by Tx k itself; precodes the obtained sequence U k n with the matrix H k 1 , k 1 H k , k ; quantises the precoded sequence S k 1 n H k 1 , k 1 H k , k U k n to obtain a quantisation S ^ k 1 n at noise level; and sends the corresponding quantisation message as a ( 2 D + 3 k ) -round cooperation message over the link to Tx k 1 . Tx k 1 then reconstructs S ^ k 1 n and sends it over the channel: X k 1 n = S ^ k 1 n . Rxs D + 2 , , 2 D + 1 decode their intended messages using an optimal DPC decoding rule based on the premultiplied outputs
$$H_{k-1,k}^{-1} Y_k^n = X_{k-1}^n + H_{k-1,k}^{-1} H_{k,k} X_k^n + H_{k-1,k}^{-1} Z_k^n.$$
Recall that X k 1 n is a quantised version (at noise level) of the precoded signal S k 1 n H k 1 , k 1 H k , k U k n , where U k n is a DPC sequence that mitigates the interference H k 1 , k 1 H k , k X k n . Each of the messages M D + 3 ( S ) , , M 2 D + 2 ( S ) can thus be transmitted reliably at full MG L .
In the described scheme, an average “slow” MG of L · 2 D + 1 2 D + 2 is thus achieved in each subnet. Moreover, 2 D cooperation messages of prelog L are sent in each subnet, and the average cooperation prelog per link is L · 2 D 2 ( 2 D + 2 ) = μ max . If one time-shares 2 D + 2 different instances of the described scheme with a different subset of silenced users in each of them, the overall scheme achieves the MG pair ( S ( F ) = 0 , S ( S ) = L 2 D + 1 2 D + 2 ) with each cooperation link being loaded at average cooperation prelog μ max .
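The bookkeeping above can be reproduced in exact arithmetic. The helper below is a sketch of ours (the function name is our own); it returns the average “slow” MG and the average cooperation prelog per link for a subnet of 2 D + 2 users:

```python
from fractions import Fraction

def subnet_metrics(D: int, L: int = 1):
    """Silencing every (2D+2)-nd Tx leaves 2D+1 active messages per subnet
    and 2D cooperation messages of prelog L, averaged as in the text."""
    slow_mg = Fraction(L * (2 * D + 1), 2 * D + 2)   # average "slow" MG
    mu_max  = Fraction(L * 2 * D, 2 * (2 * D + 2))   # avg coop prelog/link
    return slow_mg, mu_max

# For instance, D = 2 gives an average "slow" MG of 5/6 and mu_max = 1/3.
assert subnet_metrics(2) == (Fraction(5, 6), Fraction(1, 3))
```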
When μ Tx < μ max , we propose to time-share the above-described scheme over a fraction μ Tx μ max of time with a scheme that deactivates every second Tx and sends “slow” messages over the interference-free links (which does not require any cooperation) over the remaining fraction 1 μ Tx μ max of time. The overall time-sharing scheme achieves the MG pair (23b) and loads each Tx-cooperation link at prelog μ Tx .
2. MG pairs in (23c,d): A close inspection of the coding scheme described above and depicted in Figure 5 reveals that in each subnet the message pertaining to the D + 1 st Tx does not participate in the cooperation. That means that all conferenced information is independent of this message. The message thus satisfies the constraints imposed on “fast” messages in our scenario. We thus propose to employ the above scheme, but where the D + 1 st Tx in each subnet (the red Tx in Figure 5) sends a “fast” message and the first and the last D Txs in the subnet send “slow” messages. This scheme again requires cooperation prelog μ max and achieves the MG pair in (23c).
When μ Tx < μ max , we can time-share this scheme over a fraction μ Tx μ max of time with the scheme achieving (17a) over the remaining fraction 1 μ Tx μ max of time. The time-shared scheme achieves the MG pair (23d) and loads each Tx-cooperation link at prelog μ Tx .

4. Both Tx- and Rx-Cooperation

In this section we consider both Tx- and Rx-cooperation. Recall that the numbers of Tx- and Rx-cooperation rounds D Tx and D Rx are design parameters over which we can optimize subject to the sum-constraint D Tx + D Rx D . For simplicity, in this section we assume that the total number of cooperation rounds D is even.
In Section 4.1 we present our inner and outer bounds on the MG region. We also prove that they match in some cases. In the following subsections we then present the coding schemes that allow us to conclude our achievability result.

4.1. Results on MG Region

Let the maximum number of total cooperation rounds D be given. For any pair D Rx { 1 , , D 1 } and D Tx { 1 , , D 1 } summing to less than D, define
$$\mu_{\mathrm{Tx,L}}(D_{\mathrm{Tx}}) \triangleq L\cdot\frac{D_{\mathrm{Tx}}}{2D+2},$$
$$\mu_{\mathrm{Rx,L}}(D_{\mathrm{Rx}}) \triangleq L\cdot\frac{D_{\mathrm{Rx}}}{2D+2},$$
$$\mu_{\mathrm{Tx,H}}(D_{\mathrm{Tx}}) \triangleq L\cdot\frac{\frac{D}{2}+\frac{3}{4}D_{\mathrm{Tx}}-\frac{1}{4}}{2D+2},$$
$$\mu_{\mathrm{Rx,H}}(D_{\mathrm{Rx}}) \triangleq L\cdot\frac{\frac{D}{2}+D_{\mathrm{Rx}}-\frac{1}{2}}{2D+2}.$$
Notice that μ Tx , L ( D Tx ) μ Tx , H ( D Tx ) and μ Rx , L ( D Rx ) μ Rx , H ( D Rx ) .
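This ordering can be checked mechanically for all admissible round splits; the following snippet is our own sketch with L normalised to 1:

```python
def mu_tx_l(d_tx, D, L=1.0): return L * d_tx / (2 * D + 2)
def mu_rx_l(d_rx, D, L=1.0): return L * d_rx / (2 * D + 2)
def mu_tx_h(d_tx, D, L=1.0): return L * (D / 2 + 0.75 * d_tx - 0.25) / (2 * D + 2)
def mu_rx_h(d_rx, D, L=1.0): return L * (D / 2 + d_rx - 0.5) / (2 * D + 2)

# mu_{Tx,L} <= mu_{Tx,H} and mu_{Rx,L} <= mu_{Rx,H} hold for every split
# D_Tx + D_Rx = D with D_Tx, D_Rx >= 1.
for D in range(2, 31):
    for d_tx in range(1, D):
        d_rx = D - d_tx
        assert mu_tx_l(d_tx, D) <= mu_tx_h(d_tx, D)
        assert mu_rx_l(d_rx, D) <= mu_rx_h(d_rx, D)
```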
Also, define the five MG pairs:
$$\mathsf{S}_{\mathrm{NoCoop}}^{(F)} \triangleq \big(S^{(F)}=\tfrac{L}{2},\ S^{(S)}=0\big),$$
$$\mathsf{S}_{\mathrm{NoCoop}}^{(S)} \triangleq \big(S^{(F)}=0,\ S^{(S)}=\tfrac{L}{2}\big),$$
$$\mathsf{S}_{\mathrm{Coop}} \triangleq \big(S^{(F)}=0,\ S^{(S)}=L\cdot\tfrac{2D+1}{2D+2}\big),$$
$$\mathsf{S}_{\mathrm{Partial}} \triangleq \big(S^{(F)}=L\cdot\tfrac{2}{2D+2},\ S^{(S)}=L\cdot\tfrac{2D-1}{2D+2}\big),$$
$$\mathsf{S}_{\mathrm{Interlaced}} \triangleq \big(S^{(F)}=\tfrac{L}{2},\ S^{(S)}=L\cdot\tfrac{D}{2D+2}\big).$$
Notice that none of these MG pairs depends on the numbers of cooperation rounds D Tx and D Rx . In what follows, we will be interested in convex combinations of these points and therefore define for each α [ 0 , 1 ] :
$$\mathsf{S}_{\mathrm{Coop}}(\alpha) \triangleq \alpha\cdot\mathsf{S}_{\mathrm{Coop}} + (1-\alpha)\cdot\mathsf{S}_{\mathrm{NoCoop}}^{(S)},$$
$$\mathsf{S}_{\mathrm{Partial}}(\alpha) \triangleq \alpha\cdot\mathsf{S}_{\mathrm{Partial}} + (1-\alpha)\cdot\mathsf{S}_{\mathrm{NoCoop}}^{(F)},$$
$$\mathsf{S}_{\mathrm{Interlaced}}(\alpha) \triangleq \alpha\cdot\mathsf{S}_{\mathrm{Interlaced}} + (1-\alpha)\cdot\mathsf{S}_{\mathrm{NoCoop}}^{(F)},$$
$$\mathsf{S}_{\mathrm{Partial\text{-}Inter}}(\alpha) \triangleq \alpha\cdot\mathsf{S}_{\mathrm{Interlaced}} + (1-\alpha)\cdot\mathsf{S}_{\mathrm{Partial}}.$$
Notice that S Coop ( 1 ) = S Coop and S Partial ( 1 ) = S Partial and S Interlaced ( 1 ) = S Interlaced . Moreover, S Partial-Inter ( 1 ) = S Interlaced and S Partial-Inter ( 0 ) = S Partial .
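For concreteness, the corner points and their convex combinations can be tabulated as ( S ( F ) , S ( S ) ) pairs; the snippet below is an illustrative sketch of ours with L = 1:

```python
def mg_pairs(D, L=1.0):
    """The five corner points as (S_fast, S_slow) pairs."""
    return {
        "NoCoopF":    (L / 2, 0.0),
        "NoCoopS":    (0.0, L / 2),
        "Coop":       (0.0, L * (2 * D + 1) / (2 * D + 2)),
        "Partial":    (L * 2 / (2 * D + 2), L * (2 * D - 1) / (2 * D + 2)),
        "Interlaced": (L / 2, L * D / (2 * D + 2)),
    }

def convex(p, q, alpha):
    """alpha * p + (1 - alpha) * q, componentwise."""
    return tuple(alpha * a + (1 - alpha) * b for a, b in zip(p, q))

pts = mg_pairs(10)
# Endpoints of the convex combinations recover the corner points:
assert convex(pts["Coop"], pts["NoCoopS"], 1.0) == pts["Coop"]
assert convex(pts["Interlaced"], pts["Partial"], 0.0) == pts["Partial"]
```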
Theorem 3
(Achievable MG Region: Tx- and Rx-cooperation). For any choice of odd-valued integers D T x , D R x { 1 , 3 , 5 , , D 1 } summing to D, the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains some of the following regions, depending on the available cooperation prelogs μ Tx and μ Rx .
  • If μ Tx μ T x , H ( D T x ) and μ Rx μ R x , H ( D R x ) , the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains the trapezoidal region
    $$\mathrm{conv\ hull}\Big\{(0,0),\ \mathsf{S}_{\mathrm{NoCoop}}^{(F)},\ \mathsf{S}_{\mathrm{Coop}},\ \mathsf{S}_{\mathrm{Interlaced}}\Big\}.$$
  • If μ Tx μ T x , L ( D T x ) and μ Rx μ R x , L ( D R x ) , the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains the pentagon
    $$\mathrm{conv\ hull}\Big\{(0,0),\ \mathsf{S}_{\mathrm{NoCoop}}^{(F)},\ \mathsf{S}_{\mathrm{Coop}},\ \mathsf{S}_{\mathrm{Partial\text{-}Inter}}(\alpha_1^\star),\ \mathsf{S}_{\mathrm{Interlaced}}(\beta_1^\star)\Big\},$$
    where
    $$\alpha_1^\star \triangleq \min\left\{\frac{\mu_{\mathrm{Tx}}-\mu_{\mathrm{Tx,L}}(D_{\mathrm{Tx}})}{\mu_{\mathrm{Tx,H}}(D_{\mathrm{Tx}})-\mu_{\mathrm{Tx,L}}(D_{\mathrm{Tx}})},\ \frac{\mu_{\mathrm{Rx}}-\mu_{\mathrm{Rx,L}}(D_{\mathrm{Rx}})}{\mu_{\mathrm{Rx,H}}(D_{\mathrm{Rx}})-\mu_{\mathrm{Rx,L}}(D_{\mathrm{Rx}})},\ 1\right\}$$
    and
    $$\beta_1^\star \triangleq \min\left\{\frac{\mu_{\mathrm{Tx}}}{\mu_{\mathrm{Tx,H}}(D_{\mathrm{Tx}})},\ \frac{\mu_{\mathrm{Rx}}}{\mu_{\mathrm{Rx,H}}(D_{\mathrm{Rx}})},\ 1\right\}.$$
  • For μ Tx μ T x , L ( D T x ) or μ Rx μ R x , L ( D R x ) , the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains the region
    $$\mathrm{conv\ hull}\Big\{(0,0),\ \mathsf{S}_{\mathrm{Coop}}(\alpha_2^\star),\ \mathsf{S}_{\mathrm{Partial}}(\alpha_2^\star),\ \mathsf{S}_{\mathrm{Interlaced}}(\beta_2^\star)\Big\},$$
    where
    $$\alpha_2^\star \triangleq \min\left\{\frac{\mu_{\mathrm{Tx}}}{\mu_{\mathrm{Tx,L}}(D_{\mathrm{Tx}})},\ \frac{\mu_{\mathrm{Rx}}}{\mu_{\mathrm{Rx,L}}(D_{\mathrm{Rx}})}\right\}$$
    and
    $$\beta_2^\star = \min\left\{\frac{\mu_{\mathrm{Tx}}}{\mu_{\mathrm{Tx,H}}(D_{\mathrm{Tx}})},\ \frac{\mu_{\mathrm{Rx}}}{\mu_{\mathrm{Rx,H}}(D_{\mathrm{Rx}})}\right\}.$$
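To make the theorem's thresholds concrete, the coefficients α 1 🟉 and β 1 🟉 can be evaluated numerically. The function below is our own sketch (L = 1); the example reproduces the setting D = 10, D Tx = D Rx = 5, μ Tx = μ Rx = 0.45 discussed later, for which both coefficients clip to 1:

```python
def coefficients(mu_tx, mu_rx, d_tx, d_rx, L=1.0):
    """Time-sharing coefficients alpha1*, beta1* of Theorem 3,
    computed from the four thresholds mu_{Tx/Rx, L/H}."""
    D = d_tx + d_rx
    tl = L * d_tx / (2 * D + 2)
    rl = L * d_rx / (2 * D + 2)
    th = L * (D / 2 + 0.75 * d_tx - 0.25) / (2 * D + 2)
    rh = L * (D / 2 + d_rx - 0.5) / (2 * D + 2)
    alpha1 = min((mu_tx - tl) / (th - tl), (mu_rx - rl) / (rh - rl), 1.0)
    beta1 = min(mu_tx / th, mu_rx / rh, 1.0)
    return alpha1, beta1

# D = 10, D_Tx = D_Rx = 5, mu_Tx = mu_Rx = 0.45: both coefficients
# equal 1, i.e., the full trapezoidal region is achieved.
assert coefficients(0.45, 0.45, 5, 5) == (1.0, 1.0)
```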
In Figure 6 we schematically illustrate the above MG regions (33), (34) and (37). We see that for large cooperation prelogs our MG region is the trapezoid in Figure 6a. For smaller cooperation prelogs the MG region turns into a pentagon, see Figure 6b, because the MG pair S Interlaced is no longer included. Finally, for even smaller cooperation prelogs, even the MG pair S Coop is no longer included and needs to be replaced by S Coop ( 0.93 ) . Similarly, S Partial-Inter ( 0.6 ) needs to be replaced by S Partial ( 0.93 ) .
The achievable MG region described in the theorem can also be written as a union over the choices of the Tx- and Rx-cooperation rounds D Tx and D Rx summing to no more than D. Notice, however, that one cannot take the convex hull of this union because, in the way we defined the problem setup, the choice of D Tx and D Rx needs to be fixed in advance, and time-sharing between different choices is not possible.
Proof of Theorem 3.
In the following Section 4.2, Section 4.3 and Section 4.4 we show how to achieve the MG pairs in (31c–e) with sufficiently large cooperation prelogs μ Tx and μ Rx . In particular, to achieve (31c,d), cooperation prelogs μ Tx μ Tx , L ( D Tx ) and μ Rx μ Rx , L ( D Rx ) are required. To achieve (31e) cooperation prelogs μ Tx μ Tx , H ( D Tx ) and μ Rx μ Rx , H ( D Rx ) are required. MG pairs (31a,b) can be achieved without any Tx- or Rx-cooperation by simply silencing every second transmitter and sending either only “fast” or only “slow” messages over the remaining K / 2 isolated point-to-point links.
The proof of the theorem follows then by simple time-sharing arguments. In particular, for any α [ 0 , 1 ] the MG pair S Coop ( α ) can be achieved by time-sharing the scheme achieving S Coop over a fraction α of the time with the scheme achieving S NoCoop ( S ) over the remaining fraction of time. Such a time-sharing scheme requires cooperation prelogs of μ Tx α μ Tx , L and μ Rx α μ Rx , L . The MG pairs S Partial ( α ) and S Interlaced ( α ) are achieved by time-sharing the scheme achieving S Partial or the scheme achieving S Interlaced over a fraction α of the time with the scheme achieving S NoCoop ( F ) over the remaining fraction of time. The time-sharing scheme leading to S Partial ( α ) requires cooperation prelogs μ Tx α μ Tx , L and μ Rx α μ Rx , L and the time-sharing scheme leading to S Interlaced ( α ) requires μ Tx α μ Tx , H and μ Rx α μ Rx , H . The MG pair S Partial-Inter ( α ) is achieved by time-sharing the scheme achieving S Interlaced over a fraction α of the time with the scheme achieving S Partial over the remaining fraction of time. This time-sharing scheme requires cooperation prelogs μ Tx α μ Tx , H + ( 1 α ) μ Tx , L and μ Rx α μ Rx , H + ( 1 α ) μ Rx , L .
Notice that for all of the above time-sharing arguments, it is important that the MG pairs S NoCoop ( F ) , S NoCoop ( S ) , S Coop , S Partial , and S Interlaced can be achieved using the same values of D Tx and D Rx . As will become clear in the following sections, these MG pairs can be achieved using any values D Tx , D Rx { 1 , 3 , 5 , , D 1 } summing to D. The required cooperation prelogs however depend on the specific choices of D Tx and D Rx . This explains why the allowed time-sharing coefficients α depend on the number of cooperation rounds D Tx and D Rx . □
Remark 2.
If in Theorem 3 we allow the parameters D T x , D R x to take on any values in { 1 , 2 , , D 1 } summing to D and we remove the MG points S Interlaced , S Interlaced ( β 1 🟉 ) , S Interlaced ( β 2 🟉 ) , and S Partial-Inter ( α 1 🟉 ) , we obtain a different achievable region, which can be larger for certain system parameters.
To see that this modified region is also achievable, notice that our schemes achieving S Coop and S Partial described in Section 4.2 and Section 4.3 can be run with any number of Tx- and Rx-cooperation rounds D T x and D R x , irrespective of whether they are odd or even. Their performance remains unchanged. In contrast, the scheme achieving S Interlaced that we present in Section 4.4 requires that D T x and D R x are both odd.
In Figure 7 we schematically illustrate the MG regions that are achieved for D T x or D R x even. Specifically, Figure 7a shows the MG region for large cooperation prelogs and Figure 7b for small cooperation prelogs.
We also have the following converse result.
Proposition 1
(Outer Bound on Optimal MG Region: Both Tx- and Rx-cooperation). Any MG pair ( S ( F ) , S ( S ) ) in S 🟉 ( μ Tx , μ Rx , D ) satisfies
$$S^{(F)} \le \frac{L}{2},$$
$$S^{(F)} + S^{(S)} \le \min\left\{\frac{L}{2}+\mu_{\mathrm{Tx}}+\mu_{\mathrm{Rx}},\ L\cdot\frac{2D+1}{2D+2}\right\}.$$
Proof. 
Follows from the converse result in Reference [12] and by a rate-transfer argument from “fast” to “slow” messages. □
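Proposition 1 translates into a simple membership test for candidate MG pairs. The checker below is a sketch of ours (the function name and the numerical tolerance are our own choices, with L normalised to 1 unless specified):

```python
def outer_bound_ok(s_fast, s_slow, mu_tx, mu_rx, D, L=1.0, tol=1e-12):
    """True iff (s_fast, s_slow) satisfies both inequalities of
    Proposition 1: the 'fast' MG cap and the sum-MG cap."""
    sum_cap = min(L / 2 + mu_tx + mu_rx, L * (2 * D + 1) / (2 * D + 2))
    return s_fast <= L / 2 + tol and s_fast + s_slow <= sum_cap + tol

# The corner point S_Interlaced = (1/2, D/(2D+2)) meets the sum bound
# with equality whenever mu_Tx + mu_Rx is large enough:
assert outer_bound_ok(0.5, 10 / 22, 0.45, 0.45, 10)
assert not outer_bound_ok(0.6, 0.0, 0.45, 0.45, 10)   # violates S_fast <= L/2
```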
Figure 8 depicts our inner and outer bounds (Theorem 3, Remark 2, and Proposition 1) on the optimal MG region with μ Tx = μ Rx = 0.45 and D = 10 for different values of D Tx and D Rx . For D Rx = D / 2 = 5 and D Tx = D / 2 = 5 ,
μ Tx μ Tx , H and μ Rx μ Rx , H ,
and the inner bound is given by the trapezoidal region defined in (33). It coincides with the outer bound, and thus establishes the exact MG region. Notice that in this case, the MG region is solely constrained by the fact that the MG of “fast” messages cannot exceed L 2 and that the sum MG of all messages cannot exceed L · 2 D Rx + 2 D Tx + 1 2 D Rx + 2 D Tx + 2 . Imposing a stringent constraint on the decoding delay of the “fast” messages in this case never penalises the sum-MG of the system. Our inner bounds obtained for odd-valued cooperation-rounds ( D Tx , D Rx ) { ( 1 , 9 ) , ( 3 , 7 ) , ( 7 , 3 ) , ( 9 , 1 ) } coincide with the outer bound only if S ( F ) L · 2 ( 1 α 1 🟉 ) + α 1 🟉 · ( D + 1 ) 2 D + 2 , where α 1 🟉 depends on the choice of ( D Tx , D Rx ) and is defined in (35). The inner bounds for even-valued cooperation-rounds ( D Tx , D Rx ) { ( 2 , 8 ) , ( 4 , 6 ) , ( 6 , 4 ) , ( 8 , 2 ) } all coincide and attain the outer bound only if S ( F ) L · 2 2 D + 2 .
Figure 9 depicts our inner and outer bounds on the optimal MG region for the same D = 10 but smaller values of μ Tx = μ Rx = 0.3 . In Figure 9, we see that by decreasing μ Tx and μ Rx from 0.45 to 0.3 , our inner and outer bounds do not coincide for all values of S ( F ) . Our inner bound for D Rx = 5 and D Tx = 5 contains all other inner bounds and it matches the outer bound in the regime S ( F ) L · 2 ( 1 α 1 🟉 ) + α 1 🟉 · ( D + 1 ) 2 D + 2 , where for the definition of α 1 🟉 in (35) one should set D Tx = D Rx = 5 .
Figure 10 depicts our inner and outer bounds on the optimal MG region for the same D = 10 but when μ Tx = 0.3 is smaller than μ Rx = 0.45 . Here, the inner bound obtained for ( D Tx = 3 , D Rx = 7 ) includes all other inner bounds and it matches the outer bound in the regime S ( F ) L · 2 ( 1 α 1 🟉 ) + α 1 🟉 · ( D + 1 ) 2 D + 2 , where α 1 🟉 is defined in (35) with D Tx = 3 and D Rx = 7 .
The following corollaries generalise these observations.
Corollary 1.
If there exist integers D T x , D R x { 1 , 3 , 5 , , D 1 } summing to D such that the two constraints
μ Tx μ Tx , H ( D T x )
μ Rx μ Rx , H ( D R x )
are simultaneously satisfied, then the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) coincides with the trapezoidal region in (33). That means, S 🟉 ( μ Tx , μ Rx , D ) is the set of all nonnegative pairs ( S ( F ) , S ( S ) ) satisfying
$$S^{(F)} \le \frac{L}{2},$$
$$S^{(F)} + S^{(S)} \le L\cdot\frac{2D+1}{2D+2}.$$
Proof. 
Follows directly from the achievability result in Theorem 3, see (33), and the converse result in Proposition 1. For the converse result notice in particular that under constraints (42) the sum μ Tx + μ Rx exceeds L · D 2 D + 2 . □
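The last step of the proof, that the thresholds in (42) force μ Tx + μ Rx to be at least L · D / ( 2 D + 2 ) , can be verified exhaustively in exact arithmetic (our own sketch):

```python
from fractions import Fraction

def mu_tx_h(d_tx, D, L=1):
    return Fraction(L, 2 * D + 2) * (Fraction(D, 2) + Fraction(3 * d_tx, 4) - Fraction(1, 4))

def mu_rx_h(d_rx, D, L=1):
    return Fraction(L, 2 * D + 2) * (Fraction(D, 2) + d_rx - Fraction(1, 2))

# Under (42), mu_Tx + mu_Rx >= mu_{Tx,H} + mu_{Rx,H} >= L*D/(2D+2),
# so the min in the outer bound is attained by L*(2D+1)/(2D+2).
for D in range(2, 31):
    for d_tx in range(1, D):
        d_rx = D - d_tx
        assert mu_tx_h(d_tx, D) + mu_rx_h(d_rx, D) >= Fraction(D, 2 * D + 2)
```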
Remark 3.
Under conditions (42) there is no penalty in sum-MG due to the stringent decoding constraint on “fast” messages. These “fast” messages can be transmitted at maximum MG without decreasing the overall performance of the system.
The following corollaries present partial characterizations of the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) for S ( F ) below a certain threshold.
Corollary 2.
If a pair of integers D T x , D R x { 1 , 3 , , D 1 } summing to D satisfies
μ Tx μ Tx , L ( D T x ) ,
μ Rx μ Rx , L ( D R x ) ,
then the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains the MG pair ( S ( F ) , S ( S ) ) with
$$S^{(F)} \le \frac{L}{2}\left(1-\frac{D-1}{D+1}\big(1-\alpha_1^\star\big)\right)$$
(where α 1 🟉 is defined in (35) and depends on the choice of D T x , D R x and on μ T x , μ R x ) if, and only if,
$$S^{(F)} + S^{(S)} \le L\cdot\frac{2D+1}{2D+2}.$$
Similarly, if a pair of integers D T x , D R x { 2 , 4 , , D 2 } summing to D satisfies (45), then the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains the MG pair ( S ( F ) , S ( S ) ) with
$$S^{(F)} \le \frac{L}{2}\cdot\frac{2}{D+1}$$
if, and only if,
$$S^{(F)} + S^{(S)} \le L\cdot\frac{2D+1}{2D+2}.$$
Noting the fundamental bound S ( F ) L 2 , one observes that when there is an odd-valued pair D Tx , D Rx { 1 , 3 , , D 1 } such that (45) holds and α 1 🟉 = 1 , then the first part of Corollary 2 recovers Corollary 1 and determines the entire optimal MG region S 🟉 ( μ Tx , μ Rx , D ) .
Proof. 
Achievability of (47) follows from Theorem 3, see (34), because the two components of S Partial-Inter ( α 1 🟉 ) = ( S ( F ) , S ( S ) ) satisfy:
$$S^{(F)} = \alpha_1^\star\,\frac{L}{2} + \big(1-\alpha_1^\star\big)\,\frac{L}{2}\cdot\frac{2}{D+1} = \frac{L}{2}\left(1-\frac{D-1}{D+1}\big(1-\alpha_1^\star\big)\right)$$
and
$$S^{(S)} + S^{(F)} = L\cdot\frac{2D+1}{2D+2}.$$
Achievability of (49) can be proved in a similar way from Remark 2. The converse to both results follows from Proposition 1 because constraint (45) implies that the sum μ Tx + μ Rx exceeds L · D 2 D + 2 . □
Corollary 3.
If
$$\mu_{\mathrm{Rx}} + \mu_{\mathrm{Tx}} < L\cdot\frac{D}{2D+2}$$
and if a pair D T x , D R x { 1 , 2 , 3 , , D 1 } (both odd and even values are allowed) summing to D satisfies
$$\frac{\mu_{\mathrm{Tx}}}{D_{\mathrm{Tx}}} = \frac{\mu_{\mathrm{Rx}}}{D_{\mathrm{Rx}}},$$
then the optimal MG region S 🟉 ( μ Tx , μ Rx , D ) contains the MG pair ( S ( F ) , S ( S ) ) with
$$S^{(F)} \le \frac{L}{2}\left(1-\frac{\mu_{\mathrm{Tx}}}{L}\cdot\frac{2(D-1)}{D_{\mathrm{Tx}}}\right)$$
if, and only if,
$$S^{(F)} + S^{(S)} \le \frac{L}{2} + \mu_{\mathrm{Tx}} + \mu_{\mathrm{Rx}}.$$
Proof. 
The result (55) follows from the converse result in Proposition 1 and the achievability results in Theorem 3, see (37), and Remark 2. More specifically, to prove achievability let D Tx and D Rx be such that Condition (53) is satisfied. Then,
$$\frac{\mu_{\mathrm{Tx,L}}(D_{\mathrm{Tx}})}{\mu_{\mathrm{Tx}}} = \frac{\mu_{\mathrm{Rx,L}}(D_{\mathrm{Rx}})}{\mu_{\mathrm{Rx}}}$$
and Condition (52) implies that both inequalities
μ Tx < μ Tx , L and μ Rx < μ Rx , L
are satisfied. Moreover, α 2 🟉 as defined in (38) satisfies
$$\alpha_2^\star = \frac{\mu_{\mathrm{Tx}}}{\mu_{\mathrm{Tx,L}}} = \mu_{\mathrm{Tx}}\cdot\frac{2D+2}{L\cdot D_{\mathrm{Tx}}} = \frac{\mu_{\mathrm{Rx}}}{\mu_{\mathrm{Rx,L}}} = \mu_{\mathrm{Rx}}\cdot\frac{2D+2}{L\cdot D_{\mathrm{Rx}}}.$$
Notice next that for the two MG pairs S Coop ( α 2 🟉 ) and S Partial ( α 2 🟉 ) , which are achievable by either Theorem 3 or Remark 2, the sum of the two components satisfies
$$S^{(F)} + S^{(S)} = \alpha_2^\star\cdot\frac{L}{2}\cdot\frac{2D+1}{D+1} + \big(1-\alpha_2^\star\big)\frac{L}{2} = \frac{L}{2} + \frac{\mu_{\mathrm{Tx}}}{D_{\mathrm{Tx}}}\cdot D = \frac{L}{2} + \mu_{\mathrm{Tx}} + \mu_{\mathrm{Rx}},$$
where in the last equation we used (53). Moreover, the “fast” MG S ( F ) in S Coop ( α 2 🟉 ) equals 0, whereas in S Partial ( α 2 🟉 ) it equals
$$S^{(F)} = \alpha_2^\star\cdot\frac{L}{2}\cdot\frac{2}{D+1} + \big(1-\alpha_2^\star\big)\frac{L}{2} = \frac{L}{2}\left(1-\frac{\mu_{\mathrm{Tx}}}{L}\cdot\frac{2(D-1)}{D_{\mathrm{Tx}}}\right) = \frac{L}{2}\left(1-\frac{\mu_{\mathrm{Rx}}}{L}\cdot\frac{2(D-1)}{D_{\mathrm{Rx}}}\right).$$
Since one can always choose to transmit at smaller MGs and because the convex hull of all achievable MG pairs is also achievable, this concludes the proof of achievability. □
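The chain of equalities in this proof can be replayed in exact rational arithmetic. The sketch below is our own (L = 1); it derives μ Rx from condition (53) and α 2 🟉 from (38):

```python
from fractions import Fraction

def corollary3_check(D, d_tx, d_rx, mu_tx, L=1):
    """Verify that with alpha2* = mu_tx / mu_{Tx,L}, the pair
    S_Partial(alpha2*) attains sum MG L/2 + mu_tx + mu_rx."""
    mu_rx = mu_tx * Fraction(d_rx, d_tx)            # condition (53)
    alpha = mu_tx * Fraction(2 * D + 2, L * d_tx)   # alpha2* from (38)
    sum_mg = alpha * Fraction(L * (2 * D + 1), 2 * D + 2) \
             + (1 - alpha) * Fraction(L, 2)
    assert sum_mg == Fraction(L, 2) + mu_tx + mu_rx
    return sum_mg

# Example: D = 10, (D_Tx, D_Rx) = (3, 7), mu_Tx = 1/10 (so mu_Rx = 7/30):
assert corollary3_check(10, 3, 7, Fraction(1, 10)) == Fraction(5, 6)
```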
Remark 4.
For both corollaries, in the regimes where we could characterize the optimal MG region, i.e., for “fast” MGs below a certain threshold, the sum-MG is at its maximum. We can thus conclude that for sufficiently small S ( F ) the sum-MG is not decreased due to the stringent constraint on the “fast” messages.
In the following subsections we present the coding schemes achieving the MG regions in Theorem 3 and Remark 2.

4.2. Scheme Achieving (31c)

Let each Tx only send “slow” messages but no “fast” messages. Under this coding assumption, our setup is a multi-antenna version of the setup in [12]. Achievability of (31c) then follows immediately by the multi-antenna version of [12] (Theorem 1). For completeness, and because the next subsection refers to it, we redescribe the coding scheme achieving (31c) in what follows.
We silence every 2 D + 2 nd Tx, which splits the network into smaller subnets. In each subnet, we combine the SIC idea explained for the setup with only Rx-cooperation (see Section 3.1.1) with the DPC coding idea that was explained for the setup with only Tx-cooperation (see Section 3.2.1). The scheme for the first subnet is illustrated in Figure 11 and will be explained in the following. Communication in the other subnets is similar.
The Tx/Rx pairs of the first subnet are assigned to four groups, depending on their mode of operation. Notice that the Tx/Rx pair D Rx + 2 D Tx + 2 is assigned to both groups G 3 and G 4 , whereas all other Tx/Rx pairs are assigned to only one group. The reason is that message M D Rx + 2 D Tx + 2 ( S ) is split into two parts ( M D Rx + 2 D Tx + 2 ( S , 3 ) , M D Rx + 2 D Tx + 2 ( S , 4 ) ) of equal rates, and part M D Rx + 2 D Tx + 2 ( S , 3 ) is communicated in the same way as the messages for Tx/Rx pairs in group G 3 , whereas M D Rx + 2 D Tx + 2 ( S , 4 ) is communicated in the same way as the messages for Tx/Rx pairs in group G 4 .
Group G 1 { 1 , , D R x + 1 } : Each Tx k G 1 encodes its “slow” message M k ( S ) using a codeword X k n ( M k ( S ) ) from a Gaussian point-to-point code of power P , and transmits this codeword over the channel: X k n = X k n ( M k ( S ) ) . Each Rx k G 1 uses the cooperation message received from its left neighbour Rx k 1 for SIC, i.e., to delete the interference term H k 1 , k X k 1 n ( M ^ k 1 ( S ) ) from its output sequence Y k n :
$$\hat{Y}_k^n = Y_k^n - H_{k-1,k} X_{k-1}^n\big(\hat{M}_{k-1}^{(S)}\big),$$
and to decode its desired message M k ( S ) based on Y ^ k n . Rx k also describes its decoded message M ^ k ( S ) over the cooperation link to Rx k + 1 , so as to facilitate SIC at this next Rx.
To facilitate the transmissions in the next group, the last Tx of group G 1 , Tx D Rx + 1 , precodes its channel inputs X D Rx + 1 n with the matrix H D Rx + 2 , D Rx + 2 1 H D Rx + 1 , D Rx + 2 , quantises the produced sequence I D Rx + 1 n H D Rx + 2 , D Rx + 2 1 H D Rx + 1 , D Rx + 2 X D Rx + 1 n with a rate- L / 2 log ( 1 + P ) quantiser to obtain the quantisation I ^ D Rx + 1 n at noise level and sends the resulting quantisation index as a first-round cooperation message to the first Tx in group G 2 , i.e., to Tx D Rx + 2 .
Group G 2 { D R x + 2 , , D R x + D T x + 1 } : Each Tx k G 2 obtains a cooperation message from its left neighbour Tx k 1 that describes the quantised version I ^ k 1 n of I k 1 n H k , k 1 H k 1 , k X k 1 n . Based on this message, Tx k reconstructs I ^ k 1 n , encodes its “slow” message M k ( S ) using a power P DPC that mitigates the interference I ^ k 1 n , and sends the resulting DPC sequence X k n over the channel. Moreover, it precodes this input sequence with the matrix H k + 1 , k + 1 1 H k , k + 1 , quantises the precoded sequence I k n H k + 1 , k + 1 1 H k , k + 1 X k n with a rate- L / 2 log ( 1 + P ) quantiser (for a quantisation at noise level) to obtain I ^ k n , and sends the quantisation message as a round-k cooperation message over the link to its right neighbour. Tx D Tx + D Rx + 1 produces its inputs in a similar way, i.e., using DPC, but sends no cooperation message at all. Rxs in G 2 use a standard DPC decoding rule based on the premultiplied outputs
$$H_{k,k}^{-1} Y_k^n = H_{k,k}^{-1} H_{k-1,k} X_{k-1}^n + X_k^n + H_{k,k}^{-1} Z_k^n,$$
to decode their intended “slow” messages. (Recall that X k n was produced as a DPC sequence that mitigates I ^ k 1 n , a quantised version of H k , k 1 H k 1 , k X k 1 n ). Since quantisation was performed at noise level, each message M D Rx + 2 ( S ) , , M D + 1 ( S ) can be sent reliably with MG L .
Group G 3 { D R x + D T x + 2 , , D R x + 2 D T x + 2 } : This group of Tx/Rx pairs participates in the transmission of the “slow” messages
M D Rx + D Tx + 3 ( S ) , , M D Rx + 2 D Tx + 1 ( S ) , M D Rx + 2 D Tx + 2 ( S , 3 ) .
In particular, Tx D Rx + 2 D Tx + 2 does not send a message of its own to its corresponding Rx.
Each of the messages in (63) is transmitted over the communication path Tx k Tx k 1 Rx k for some k { D Rx + D Tx + 3 , , D Rx + 2 D Tx + 2 } .
For each k { D Rx + D Tx + 3 , , D Rx + 2 D Tx + 2 } , Tx k encodes its own “slow” message M k ( S ) by means of DPC of power P that mitigates the interference H k 1 , k 1 H k , k X k n of the signal sent by Tx k itself; precodes the obtained sequence U k n with the matrix H k 1 , k 1 H k , k ; quantises the precoded sequence S k 1 n H k 1 , k 1 H k , k U k n to obtain a quantisation S ^ k 1 n at noise level; and sends the corresponding quantisation message as a ( 2 D + 3 k ) -round cooperation message over the link to Tx k 1 . Tx k 1 then reconstructs S ^ k 1 n and sends it over the channel: X k 1 n = S ^ k 1 n . The construction of the transmit signal X D Rx + 2 D Tx + 2 n mentioned above is explained in the following paragraph. Rxs D Rx + D Tx + 3 , , D Rx + 2 D Tx + 2 decode their intended “slow” messages using an optimal DPC decoding rule based on the premultiplied outputs
$$H_{k-1,k}^{-1} Y_k^n = X_{k-1}^n + H_{k-1,k}^{-1} H_{k,k} X_k^n + H_{k-1,k}^{-1} Z_k^n.$$
Recall that X k 1 n is a quantised version (at noise level) of the precoded signal S k 1 n H k 1 , k 1 H k , k U k n for U k n a DPC sequence that mitigates the interference H k 1 , k 1 H k , k X k n . Each of the messages in (63) can thus be transmitted reliably at full MG L .
Group G 4 { D R x + 2 D T x + 2 , , 2 D R x + 2 D T x + 2 } : This group of Tx/Rx pairs participates in the transmission of the “slow” messages
M D Rx + 2 D Tx + 2 ( S , 4 ) , M D Rx + 2 D Tx + 3 ( S ) , , M 2 D Rx + 2 D Tx + 1 ( S ) .
Tx 2 D Rx + 2 D Tx + 2 thus does not send a message of its own to its corresponding Rx. The messages in (65) are transmitted over the path Tx k Rx k + 1 Rx k , for k { D Rx + 2 D Tx + 2 , , 2 D Rx + 2 D Tx + 1 } .
Each Tx k { D Rx + 2 D Tx + 2 , , 2 D Rx + 2 D Tx + 1 } encodes its “slow” message M k ( S ) (or M k ( S , 4 ) if k = D Rx + 2 D Tx + 2 ) using a codeword from a Gaussian codebook of power P , and sends this codeword over the channel X k n = X k n ( M k ( S ) ) (or X k n = X k n ( M k ( S , 4 ) ) if k = D Rx + 2 D Tx + 2 ).
Rx 2 D Rx + 2 D Tx + 2 decodes M 2 D Rx + 2 D Tx + 1 ( S ) based on an interference-free output Y 2 D Rx + 2 D Tx + 2 n = H 2 D Rx + 2 D Tx + 1 , 2 D Rx + 2 D Tx + 2 X 2 D Rx + 2 D Tx + 1 n + Z 2 D Rx + 2 D Tx + 2 n , and sends the decoded message M ^ 2 D Rx + 2 D Tx + 1 ( S ) over the cooperation link to the intended Rx 2 D Rx + 2 D Tx + 1 . For k = 2 D Rx + 2 D Tx + 1 , , D Rx + 2 D Tx + 3 , Rx k uses the cooperation message received from its right neighbour Rx k + 1 to decode M k 1 ( S ) (or M k 1 ( S , 4 ) if k = D Rx + 2 D Tx + 3 ) using SIC, i.e., to first delete the interference H k , k X k n from Y k n and then decode message M k 1 ( S ) (or M k 1 ( S , 4 ) if k = D Rx + 2 D Tx + 3 ) from an interference-free signal. Rx k then sends the decoded message M ^ k 1 ( S ) (or M ^ k 1 ( S , 4 ) if k = D Rx + 2 D Tx + 3 ) over the cooperation link to its left neighbour Rx k 1 , which is the intended Rx for this message.
In the described scheme, each transmitted message is either decoded based on interference-free outputs or using DPC. Since precoding matrices do not depend on the power and quantizations are performed at noise levels, all messages can be transmitted reliably at MG L . Tx D Rx + 2 D Tx + 2 sends two “slow” messages and 2 D Rx + 2 D Tx 1 other Txs send one “slow” message. An average “slow” MG of L · 2 D Rx + 2 D Tx + 1 2 D Rx + 2 D Tx + 2 is thus achieved in each subnet. Moreover, 2 D Rx + 2 D Tx cooperation messages of prelog L are sent in each subnet:
  • Rxs in G 1 send D Rx Rx-cooperation messages with prelog L ;
  • Txs in G 2 send D Tx Tx-cooperation messages with prelog L ;
  • Txs in G 3 send D Tx Tx-cooperation messages with prelog L ;
  • Rxs in G 4 send D Rx Rx-cooperation messages with prelog L .
The average cooperation prelog per link at the Tx-side is μ Tx , L and at the Rx-side it is μ Rx , L . If one time-shares 2 D + 2 different instances of the described scheme with a different subset of silenced users in each of them, the overall scheme still achieves the MG pair ( S ( F ) = 0 , S ( S ) = L 2 D + 1 2 D + 2 ) in (31d), each Tx-cooperation link is loaded at exactly this average cooperation prelog μ Tx , L , and each Rx-cooperation link is loaded at the average cooperation prelog μ Rx , L .
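The four bullets above can be condensed into a short exact-arithmetic check (our own sketch), confirming that the per-link averages equal μ Tx , L ( D Tx ) and μ Rx , L ( D Rx ) :

```python
from fractions import Fraction

def avg_coop_prelogs(d_tx, d_rx, L=1):
    """2*d_tx Tx- and 2*d_rx Rx-cooperation messages of prelog L per
    subnet (groups G2/G3 and G1/G4, respectively), averaged as in the
    text over 2*(2D+2) link uses."""
    D = d_tx + d_rx
    mu_tx = Fraction(L * 2 * d_tx, 2 * (2 * D + 2))
    mu_rx = Fraction(L * 2 * d_rx, 2 * (2 * D + 2))
    return mu_tx, mu_rx

# Matches mu_{Tx,L} = L*D_Tx/(2D+2) and mu_{Rx,L} = L*D_Rx/(2D+2):
assert avg_coop_prelogs(3, 7) == (Fraction(3, 22), Fraction(7, 22))
```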

4.3. Scheme Achieving MG Pair (31d)

Consider the scheme described in the previous Section 4.2 and depicted in Figure 11. Notice that the first Tx in each subnet does not at all participate in the cooperation, and decoding of its message also does not rely on cooperation messages. The same observation applies also to the D Rx + D Tx + 1 st Tx of each subnet and its message. The first and the D Rx + D Tx + 1 st message of each subnet (the red Txs in Figure 11) thus satisfy the requirements on “fast” messages. We propose to use this scheme but let the first and the ( D Rx + D Tx + 1 ) st messages in each subnet be “fast” messages and all other messages be “slow” messages. This achieves the MG pair (31d).
The required cooperation rates equal μ Tx , L and μ Rx , L , as explained in the previous Section 4.2.

4.4. Schemes Achieving MG Pair (31e)

We periodically silence every 2 D + 2 -nd Tx to split the network into smaller subnets. Then we send a “fast” message on all odd Txs and a “slow” message on all even Txs, except for the previously silenced Txs (which are all even). See Figure 12.
In what follows, we describe and analyze transmissions over the first subnet. Other subnets are treated analogously.
Odd Txs 1 , 3 , 5 , , 2 D + 1 : Each odd Tx encodes its “fast” message M k ( F ) using a codeword U k n ( M k ( F ) ) from a Gaussian codebook of power P that depends on the Tx and the channel realizations and is explained later. Tx 1 simply sends this Gaussian codeword X 1 n = U 1 n ( M 1 ( F ) ) . Any other odd Tx k first considers the cooperation message it received from its left neighbour Tx k 1 and reconstructs X ^ k 1 n , a quantised version of Tx k 1 th input X k 1 n . Tx k then sends the input signal
$$X_k^n = U_k^n\big(M_k^{(F)}\big) - H_{k,k}^{-1} H_{k-1,k}\,\hat{X}_{k-1}^n.$$
Odd Txs relay some of the cooperation messages they obtain from their neighbours, as will become clear in the following, but they do not create new cooperation messages.
Odd Rxs 1 , 3 , 5 , , 2 D + 1 : Given the precanceling at odd Txs described above, each odd Rx k observes an almost interference-free signal:
$$Y_k^n = H_{k,k} U_k^n + H_{k-1,k}\big(X_{k-1}^n - \hat{X}_{k-1}^n\big) + Z_k^n,$$
where notice that X ^ k 1 n is a quantised version of X k 1 n at noise level. Each odd Rx k therefore decodes its desired fast message M k ( F ) using standard point-to-point decoding. It also sends the decoded message M ^ k ( F ) over the cooperation link to its right neighbour Rx k + 1 as a first round cooperation message.
Odd Rxs also relay some of the cooperation messages they obtain from their neighbours, as will become clear in the following.
Before describing the operations at the even Tx/Rx pairs, we make the following observations based on the operations at the odd Tx/Rx pairs. Irrespective of the operations performed at the even Txs, each even Rx k observes the sum of a signal depending only on “slow” messages and a signal depending only on its left-neighbour’s “fast” message (the signal H k 1 , k U k 1 n ). Since odd Rxs convey their decoded “fast” messages to their right-neighbour, even Rxs can cancel the signals depending on “fast” messages whenever they have been decoded correctly. There is thus no loss in reliable communication rate caused by the transmission of “fast” messages. And transmission of “slow” messages at even Txs can be designed as if no “fast” messages were present. However, if “slow” Rxs wish to send cooperation messages that do not depend on the “fast” transmissions, they have to wait for the second round.
Even Txs 2 , 4 , 6 , , 2 D : Each even Tx k, for k = 2 , , 2 D , performs the same steps as Tx k in the scheme described in Section 4.2, but where the scheme needs to be adapted to include only even Txs. In particular, if an even Tx k previously sent a quantisation message to its direct left- or right-neighbour Tx k 1 or k + 1 , now it will send it to the previous or following even Tx k 2 or Tx k + 2 . (This simply means that the odd Tx lying between them has to relay the cooperation message as we already mentioned previously.) Similarly, when using DPC, if Tx k previously mitigated the quantised sequence I ^ k 1 n or S ^ k + 1 n , now it mitigates the quantised sequence I ^ k 2 n or S ^ k + 2 n . Notice that since D is even, Tx D is the last even Tx in G 2 (so the last Tx in G 2 sending a “slow” message). Tx-cooperation in group G 2 thus takes place only during the first D Tx 1 rounds. The only Tx-cooperation message in round D Tx is the message sent from Tx D + 3 to Tx D + 2 in group G 3 .
In addition, if this is not already done as part of the scheme in Section 4.2, any even Tx k also quantizes its channel inputs X k n at rate L / 2 log ( 1 + P ) to generate the quantised sequence I ^ k n . The quantisation message describing I ^ k n is then sent as a D Tx -round cooperation message over the link to Tx k + 1 to allow this Tx to precancel this interference in the way that was described previously. Even Tx D + 2 (the first Tx in group G 3 ) does not need to send this round- D Tx cooperation message because its right neighbour Tx D + 3 already learns the Tx signal X D + 2 n as part of the proposed scheme in Section 4.2. Since all even Txs (except for Tx D + 2 ) receive their last cooperation message in round D Tx 1 , they can indeed compute their inputs prior to the last round D Tx and thus perform the proposed round- D Tx cooperation.
Even Rxs 2 , 4 , 6 , … , 2 D + 2 : Each even Rx k, for k = 2 , … , 2 D + 2 , uses the round-1 Rx-cooperation message from its left neighbour to subtract the interference caused by that neighbour’s transmission of the “fast” message $M_{k-1}^{(F)}$. That is, it forms
$$\tilde{Y}_k^n = Y_k^n - H_{k-1,k}\, U_{k-1}^n. \tag{68}$$
It then proceeds with this modified output sequence $\tilde{Y}_k^n$ and performs the same steps as Rx k in the scheme in Section 4.2, where the scheme again needs to be adapted to include only even Rxs and to start only at cooperation round 2. This allows even Rxs to calculate (68) before performing the other steps. Notice that since the first Txs of G 1 and G 4 only send “fast” messages (the latter holds because $D_{\mathrm{Rx}}$ is odd), there is no harm in waiting for this second round. To adapt the scheme in Section 4.2 to only even Rxs, any even Rx k that previously sent its decoded message to its direct left- or right-neighbour Rx k − 1 or Rx k + 1 now sends it to the previous or following even Rx k − 2 or Rx k + 2 . Similarly, any Rx k that previously applied the SIC step to cancel the interference from Tx k − 1 or Tx k + 1 now cancels the interference from Tx k − 2 or Tx k + 2 .
In the described scheme, all odd Txs of a subnet can reliably send a “fast” message of MG L , and the even Txs 2 , 4 , … , 2 D can each reliably send a “slow” message of MG L . The scheme thus achieves the MG pair in (31e): $\big(S^{(F)} = \frac{L}{2},\; S^{(S)} = \frac{L \cdot D}{2D+2}\big)$.
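The counting behind this MG pair can be sanity-checked numerically. The following sketch (with hypothetical helper names, not part of the paper's scheme description) counts the active Txs in a subnet of 2D + 2 Tx/Rx pairs and normalises by the subnet size:

```python
from fractions import Fraction

def achieved_mg_pair(D: int, L: int):
    """MG pair of the described scheme on a subnet of 2D + 2 Tx/Rx pairs.

    Odd Txs 1, 3, ..., 2D+1 (D + 1 of them) each send a "fast" message of
    MG L; even Txs 2, 4, ..., 2D (D of them) each send a "slow" message of
    MG L.  Per-user MGs follow by normalising by the subnet size 2D + 2.
    """
    subnet_size = 2 * D + 2
    n_fast_txs = D + 1          # odd Txs 1, 3, ..., 2D+1
    n_slow_txs = D              # even Txs 2, 4, ..., 2D
    S_fast = Fraction(n_fast_txs * L, subnet_size)   # simplifies to L/2
    S_slow = Fraction(n_slow_txs * L, subnet_size)   # equals L*D/(2D+2)
    return S_fast, S_slow

# Example: D = 4, L = 2 gives (S^(F), S^(S)) = (1, 4/5).
S_F, S_S = achieved_mg_pair(D=4, L=2)
```

The fast MG always simplifies to L/2 because exactly half of the Tx/Rx pairs (after rounding up) carry "fast" messages.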
We now analyze the cooperation prelog of the described scheme. Recall that in this scheme each even Tx sends a quantised version of its inputs to its right neighbour and each odd Rx sends its decoded message to its right neighbour. Since each of these cooperation messages is of prelog L , they consume a Tx-cooperation prelog of L · D and an Rx-cooperation prelog of L · D .
In addition, for encoding and decoding of “slow” messages:
  • Rxs in G 1 send D Rx 1 Rx-cooperation messages with prelog L . (The cooperation message from Rx 1 to Rx 2 has already been counted in the previous paragraph.)
  • Txs in G 2 send ( D Tx − 1 ) / 2 Tx-cooperation messages with prelog L . (The cooperation messages from even to odd Txs in G 2 have already been counted in the previous paragraph.)
  • Txs in G 3 send D Tx Tx-cooperation messages with prelog L .
  • Rxs in G 4 send D Rx 1 Rx-cooperation messages with prelog L . (The first Rx in G 4 does not obtain a cooperation message because it is a “fast” Tx.)
To summarize, the described scheme requires an average prelog per Tx-cooperation link of $\mu_{\mathrm{Tx},H} = L \cdot \frac{\frac{D}{2} + \frac{3}{4}D_{\mathrm{Tx}} - \frac{1}{4}}{2D+2}$ and an average prelog per Rx-cooperation link of $\mu_{\mathrm{Rx},H} = L \cdot \frac{\frac{D}{2} + D_{\mathrm{Rx}} - 1}{2D+2}$. (Notice that this is larger than in the scheme in Section 4.2.) If one time-shares 2 D + 2 different instances of the described scheme with a different subset of silenced users in each of them, the required prelog on each Tx-cooperation link is exactly $\mu_{\mathrm{Tx},H}$ and the required prelog on each Rx-cooperation link is exactly $\mu_{\mathrm{Rx},H}$. This concludes the proof.
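The prelog bookkeeping above can likewise be verified with a short script (a sketch with hypothetical names; the division of the per-subnet totals by 2(2D + 2) to obtain the per-link averages is our reading of the averaging step):

```python
from fractions import Fraction

def avg_coop_prelogs(D: int, D_tx: int, D_rx: int, L: int):
    """Average per-link cooperation prelogs of the described scheme.

    Per subnet of 2D + 2 users, the cooperation messages of prelog L are:
      Tx side: D quantisation messages (even Txs to their right neighbours),
               (D_tx - 1)/2 messages within G2, and D_tx messages within G3.
      Rx side: D decoded-message forwards (odd Rxs to their right neighbours),
               D_rx - 1 messages within G1, and D_rx - 1 messages within G4.
    """
    n = 2 * D + 2
    tx_msgs = Fraction(D) + Fraction(D_tx - 1, 2) + D_tx
    rx_msgs = Fraction(D) + (D_rx - 1) + (D_rx - 1)
    mu_tx = L * tx_msgs / (2 * n)
    mu_rx = L * rx_msgs / (2 * n)
    return mu_tx, mu_rx

# Matches the closed forms mu_{Tx,H} = L*(D/2 + 3*D_Tx/4 - 1/4)/(2D+2)
# and mu_{Rx,H} = L*(D/2 + D_Rx - 1)/(2D+2).
```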

5. Summary and Concluding Remarks

We considered Wyner’s soft-handoff network and characterized the MG region with transmitter and receiver cooperation when part of the messages are subject to stringent delay constraints. For the setup with only transmitter or only receiver cooperation we observed the following. Increasing the MG of delay-sensitive messages by Δ requires decreasing the MG of delay-tolerant messages approximately by 2 Δ . This penalty does not arise when both transmitters and receivers can cooperate. More precisely, for small cooperation prelogs, when delay-sensitive messages have moderate or small MGs, then the sum-MG is not decreased compared to when only delay-tolerant messages are transmitted. For large cooperation prelogs, this conclusion even holds when delay-sensitive messages have large MGs.
An interesting line of future work concerns extending the existing results to two-dimensional cellular models (i.e., to models where transmitters and receivers are not aligned on a line). First results on the hexagonal Wyner model [18] indicate that conclusions similar to those for the Wyner soft-handoff model investigated in this paper continue to hold. Another interesting direction is to study the impact of channel state information (CSI) at the transmitters, as in Reference [19], for the considered model with mixed-delay constraints. In particular, a model where CSI is available for encoding/decoding delay-tolerant messages but not for encoding/decoding delay-sensitive messages is a natural extension of the presented setup.

Author Contributions

Writing–original draft, H.N.; Writing–review and editing, M.A.W. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work of H.N. and M.A.W. has been supported by the European Union’s Horizon 2020 Research And Innovation Programme, grant agreement no. 715111. The work of S.S. has been supported by the European Union’s Horizon 2020 Research And Innovation Programme, grant agreement no. 694630.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of the Converse to (15)

For convenience of notation, define for any k ∈ { 1 , … , K } :
$$M_k \triangleq \big(M_k^{(F)},\, M_k^{(S)}\big).$$
For each power P > 0 , fix a sequence (in the blocklength n) of encoding and decoding functions respecting the power constraints and the Rx-cooperation rate limitations (recall that we consider a setup with only Rx-cooperation but no Tx-cooperation) such that the error probability p ( error ) → 0 as the blocklength n → ∞ .
By Fano’s Inequality, there exists a sequence $\epsilon_n$ satisfying $\epsilon_n / n \to 0$ as n → ∞ such that for any k ∈ { 1 , … , K − 1 } and each blocklength n:
$$\begin{aligned}
&R_k^{(F)} + R_k^{(S)} + R_{k+1}^{(F)} \\
&\quad= \frac{1}{n}\Big[ H\big(M_k^{(F)}\big) + H\big(M_k^{(S)}\big) + H\big(M_{k+1}^{(F)}\big) \Big] \\
&\quad= \frac{1}{n}\Big[ H\big(M_k^{(F)} \,\big|\, M_{k-1}\big) + H\big(M_k^{(S)} \,\big|\, M_1,\ldots,M_{k-1},M_k^{(F)},M_{k+1},\ldots,M_K\big) + H\big(M_{k+1}^{(F)} \,\big|\, M_{k-1},M_{k+1}^{(S)}\big) \Big] \\
&\quad\leq \frac{1}{n}\Big[ I\big(M_k^{(F)}; Y_k^n \,\big|\, M_{k-1}\big) + I\big(M_k^{(S)}; Y_1^n,\ldots,Y_K^n \,\big|\, M_1,\ldots,M_{k-1},M_k^{(F)},M_{k+1},\ldots,M_K\big) \\
&\qquad\quad + I\big(M_{k+1}^{(F)}; Y_{k+1}^n \,\big|\, M_{k-1},M_{k+1}^{(S)}\big) \Big] + \frac{\epsilon_n}{n} \\
&\quad\stackrel{(a)}{=} \frac{1}{n}\Big[ I\big(M_k^{(F)}; Y_k^n \,\big|\, M_{k-1}\big) + I\big(M_k^{(S)}; Y_k^n,Y_{k+1}^n \,\big|\, M_{k-1},M_k^{(F)},M_{k+1}\big) + I\big(M_{k+1}^{(F)}; Y_{k+1}^n \,\big|\, M_{k-1},M_{k+1}^{(S)}\big) \Big] + \frac{\epsilon_n}{n} \\
&\quad\stackrel{(b)}{=} \frac{1}{n}\Big[ I\big(M_k^{(F)},M_k^{(S)}; Y_k^n \,\big|\, M_{k-1}\big) + I\big(M_k^{(S)}; Y_{k+1}^n \,\big|\, Y_k^n,M_k^{(F)},M_{k-1},M_{k+1}\big) + I\big(M_{k+1}^{(F)}; Y_{k+1}^n \,\big|\, M_{k-1},M_{k+1}^{(S)}\big) \Big] + \frac{\epsilon_n}{n} \\
&\quad\leq \frac{1}{n}\Big[ h\big(H_{k,k}X_k^n + Z_k^n\big) - h\big(Z_k^n\big) + h\big(H_{k,k+1}X_k^n + Z_{k+1}^n \,\big|\, H_{k,k}X_k^n + Z_k^n\big) - h\big(Z_{k+1}^n\big) \\
&\qquad\quad + h\big(Y_{k+1}^n \,\big|\, M_{k+1}^{(S)}\big) - h\big(H_{k,k+1}X_k^n + Z_{k+1}^n\big) \Big] + \frac{\epsilon_n}{n} \\
&\quad\stackrel{(c)}{\leq} \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}|H_{k+1,k+1}(i,j)|^2 P + \sum_{j=1}^{L}|H_{k,k+1}(i,j)|^2 P\bigg) + \frac{1}{2}\log\det\Big(I_L + H_{k,k+1}H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}}H_{k,k+1}^{\mathsf{T}}\Big) \\
&\qquad\quad + \frac{1}{2}\log\det\Big(H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}} + H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\Big) + \log\det\big(H_{k,k}\big) + \frac{\epsilon_n}{n},
\end{aligned} \tag{A8}$$
where $I_L$ denotes the L -by- L identity matrix and $H_{k+1,k+1}(i,j)$ and $H_{k,k+1}(i,j)$ denote the elements of matrices $H_{k+1,k+1}$ and $H_{k,k+1}$ in row i and column j. Here, ( a ) follows because given the source messages $M_{k-1}$ and $M_{k+1}$, the triple $(M_k, Y_k^n, Y_{k+1}^n)$ is independent of the remaining outputs $Y_1^n, \ldots, Y_{k-1}^n, Y_{k+2}^n, \ldots, Y_K^n$ and source messages $M_1, \ldots, M_{k-2}, M_{k+2}, \ldots, M_K$; ( b ) follows by the chain rule of mutual information and because $M_{k+1}$ is independent of the tuple $(M_{k-1}, M_k, Y_k^n)$; and ( c ) is obtained by rearranging terms and using the bounds (A12), (A15) and (A24) derived below.
We first bound the term h ( Y k + 1 n | M k + 1 ( S ) ) , and start by noting that because conditioning can only reduce entropy and by the entropy-maximizing property of the Gaussian distribution:
$$h\big(Y_{k+1}^n \,\big|\, M_{k+1}^{(S)}\big) \;\leq\; \sum_{i=1}^{L}\sum_{t=1}^{n} h\big(Y_{k+1,t}(i)\big) \;\leq\; \sum_{i=1}^{L}\sum_{t=1}^{n} \frac{1}{2}\log\Big((2\pi e)\,\mathrm{Var}\big(Y_{k+1,t}(i)\big)\Big),$$
where $Y_{k+1,t}(i)$ denotes the i-th entry of the vector $Y_{k+1,t}$. Recall that in this setup without Tx-cooperation the input vectors $X_k^n$ and $X_{k+1}^n$ are independent. However, the elements of each input vector can be arbitrarily correlated. The variance $\mathrm{Var}(Y_{k+1,t}(i))$ is maximized if the elements of $X_{k,t}$ and of $X_{k+1,t}$ are each fully correlated, and thus:
$$\mathrm{Var}\big(Y_{k+1,t}(i)\big) \;\leq\; 1 + \bigg(\sum_{j=1}^{L}\big|H_{k,k+1}(i,j)\big|\sqrt{P_{k,t}(j)}\bigg)^{2} + \bigg(\sum_{j=1}^{L}\big|H_{k+1,k+1}(i,j)\big|\sqrt{P_{k+1,t}(j)}\bigg)^{2}, \tag{A10}$$
where P k , t ( j ) and P k + 1 , t ( j ) denote the variances of the j-th elements of input vectors X k , t n and X k + 1 , t n . In the following we relax the power constraint (7) by requiring only that the power of the n channel inputs produced by any given Tx-antenna cannot exceed n P :
$$\sum_{t=1}^{n} P_{k,t}(j) \;\leq\; nP, \qquad \forall\, k \in \{1,\ldots,K\} \text{ and } j \in \{1,\ldots,L\}.$$
Since the right-hand side of (A10) is monotonically increasing and jointly concave in the powers $\{P_{k,t}(j)\}$ and $\{P_{k+1,t}(j)\}$, the upper bound on $\mathrm{Var}(Y_{k+1,t}(i))$ is largest when $P_{k,t}(j) = P_{k+1,t}(j) = P$. Moreover, since the function $x \mapsto \log(1+x)$ is monotonically increasing, we conclude:
$$h\big(Y_{k+1}^n \,\big|\, M_{k+1}^{(S)}\big) \;\leq\; n \sum_{i=1}^{L} \frac{1}{2}\log\bigg((2\pi e)\Big(1 + \sum_{j=1}^{L}|H_{k,k+1}(i,j)|^2 P + \sum_{j=1}^{L}|H_{k+1,k+1}(i,j)|^2 P\Big)\bigg). \tag{A12}$$
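The full-correlation step used to obtain this bound — that, for fixed per-antenna variances $\Sigma_{jj}$, the variance $h^{\mathsf{T}}\Sigma h$ of a weighted sum of antenna signals is maximised under full correlation — amounts to the matrix inequality $h^{\mathsf{T}}\Sigma h \leq (\sum_j |h_j|\sqrt{\Sigma_{jj}})^2$ for any PSD $\Sigma$. A quick numerical check over random covariance matrices (illustration only; the helper names are ours):

```python
import random

def variance_bound_holds(h, cov):
    """Check h^T S h <= (sum_j |h_j| * sqrt(S_jj))^2 for a PSD matrix S."""
    L = len(h)
    quad = sum(h[i] * cov[i][j] * h[j] for i in range(L) for j in range(L))
    bound = sum(abs(h[j]) * cov[j][j] ** 0.5 for j in range(L)) ** 2
    return quad <= bound + 1e-9   # small tolerance for floating point

random.seed(0)
for _ in range(1000):
    L = random.randint(1, 5)
    A = [[random.gauss(0, 1) for _ in range(L)] for _ in range(L)]
    # S = A A^T is positive semi-definite by construction.
    cov = [[sum(A[i][k] * A[j][k] for k in range(L)) for j in range(L)]
           for i in range(L)]
    h = [random.gauss(0, 1) for _ in range(L)]
    assert variance_bound_holds(h, cov)
```

The inequality follows from the Cauchy–Schwarz bound on correlations, $|\mathrm{Cov}(X_i, X_j)| \leq \sqrt{\Sigma_{ii}\Sigma_{jj}}$, with equality under full correlation.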
We next bound the term
$$\begin{aligned}
\frac{1}{n} h\big(H_{k,k+1}X_k^n + Z_{k+1}^n \,\big|\, H_{k,k}X_k^n + Z_k^n\big)
&= \frac{1}{n} h\big(Z_{k+1}^n - H_{k,k+1}H_{k,k}^{-1}Z_k^n \,\big|\, H_{k,k}X_k^n + Z_k^n\big) \\
&\leq \frac{1}{n} h\big(Z_{k+1}^n - H_{k,k+1}H_{k,k}^{-1}Z_k^n\big) \\
&= \frac{1}{2}\log\det\Big(I_L + H_{k,k+1}H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}}H_{k,k+1}^{\mathsf{T}}\Big),
\end{aligned} \tag{A15}$$
where recall that I L denotes the L -by- L identity matrix.
For the last bound, define
$$T_k^n \triangleq H_{k,k}^{-1} Z_k^n - H_{k,k+1}^{-1} Z_{k+1}^n.$$
Then:
$$\begin{aligned}
&\frac{1}{n} h\big(H_{k,k}X_k^n + Z_k^n\big) - \frac{1}{n} h\big(H_{k,k+1}X_k^n + Z_{k+1}^n\big) \\
&\quad\stackrel{(e)}{=} \frac{1}{n} h\big(X_k^n + H_{k,k+1}^{-1}Z_{k+1}^n + T_k^n\big) - \frac{1}{n} h\big(X_k^n + H_{k,k+1}^{-1}Z_{k+1}^n\big) + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad\stackrel{(f)}{\leq} \frac{1}{n} h\big(X_k^n + H_{k,k+1}^{-1}Z_{k+1}^n + T_k^n\big) - \frac{1}{n} h\big(X_k^n + H_{k,k+1}^{-1}Z_{k+1}^n \,\big|\, T_k^n\big) + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad= \frac{1}{n} I\big(X_k^n + H_{k,k+1}^{-1}Z_{k+1}^n + T_k^n;\, T_k^n\big) + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad\stackrel{(g)}{\leq} \frac{1}{n} I\big(H_{k,k+1}^{-1}Z_{k+1}^n + T_k^n;\, T_k^n \,\big|\, X_k^n\big) + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad\stackrel{(g)}{=} \frac{1}{n} I\big(H_{k,k+1}^{-1}Z_{k+1}^n + T_k^n;\, T_k^n\big) + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad\stackrel{(h)}{=} \frac{1}{n}\Big[ h\big(T_k^n\big) - h\big(H_{k,k+1}^{-1}Z_{k+1}^n\big) \Big] + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad= \frac{1}{2}\log\frac{\det\big(H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}} + H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\big)}{\det\big(H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\big)} + \log\frac{\det(H_{k,k})}{\det(H_{k,k+1})} \\
&\quad= \frac{1}{2}\log\det\Big(H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}} + H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\Big) + \log\det\big(H_{k,k}\big),
\end{aligned} \tag{A24}$$
where ( e ) holds by the definition of $T_k^n$ and because $h(\mathsf{A}X) = \log|\det(\mathsf{A})| + h(X)$ for any nonsingular matrix $\mathsf{A}$ and vector $X$; ( f ) holds because conditioning can only reduce entropy; the (in)equalities ( g ) hold again because conditioning can only reduce entropy and by the independence of $T_k^n$ and $X_k^n$; and ( h ) holds because, by the independence of the noise vectors, $h(T_k^n \mid H_{k,k+1}^{-1}Z_{k+1}^n + T_k^n) = h(H_{k,k+1}^{-1}Z_{k+1}^n \mid H_{k,k}^{-1}Z_k^n) = h(H_{k,k+1}^{-1}Z_{k+1}^n)$.
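Step ( e ) relies on the identity $h(\mathsf{A}X) = \log|\det(\mathsf{A})| + h(X)$. For Gaussian vectors this can be checked numerically from the closed form $h(X) = \frac{1}{2}\log((2\pi e)^L \det \Sigma)$ (the helper names below are ours, for illustration only):

```python
import math

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def h_gauss2(cov):
    """Differential entropy (in nats) of a 2-dim Gaussian N(0, cov)."""
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det2(cov))

def mat2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[2.0, 0.5], [0.5, 1.0]]            # covariance of X
A = [[1.0, 2.0], [0.0, 3.0]]            # the linear map
At = [[A[j][i] for j in range(2)] for i in range(2)]
cov_AX = mat2(mat2(A, S), At)           # covariance of A X is A S A^T

# h(AX) = log|det A| + h(X):
lhs = h_gauss2(cov_AX)
rhs = math.log(abs(det2(A))) + h_gauss2(S)
assert abs(lhs - rhs) < 1e-9
```

The identity follows since $\det(\mathsf{A}\Sigma\mathsf{A}^{\mathsf{T}}) = \det(\mathsf{A})^2 \det(\Sigma)$.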
Following similar steps as the ones leading to (A12), one can also prove that
$$\begin{aligned}
R_1^{(F)} &\leq \frac{1}{n} I\big(M_1^{(F)}; Y_1^n \,\big|\, M_1^{(S)}\big) + \frac{\epsilon_n}{n} \\
&\leq \sum_{i=1}^{L}\frac{1}{2}\log\Big(1 + \sum_{j=1}^{L}|H_{1,1}(i,j)|^2 P\Big) + \frac{\epsilon_n}{n},
\end{aligned} \tag{A26}$$
and
$$\begin{aligned}
R_K^{(F)} + R_K^{(S)} &\leq \frac{1}{n} I\big(M_K^{(F)}, M_K^{(S)}; Y_K^n \,\big|\, M_{K-1}\big) + \frac{\epsilon_n}{n} \\
&\leq \sum_{i=1}^{L}\frac{1}{2}\log\Big(1 + \sum_{j=1}^{L}|H_{K,K}(i,j)|^2 P\Big) + \frac{\epsilon_n}{n},
\end{aligned} \tag{A28}$$
where H 1 , 1 ( i , j ) and H K , K ( i , j ) denote row-i, column-j elements of the matrices H 1 , 1 and H K , K .
We now sum the bound (A8) over all k ∈ { 1 , … , K − 1 } and combine it with (A26) and (A28). Letting n → ∞ , and using that the probability of error p ( error ) vanishes as n → ∞ (and thus $\epsilon_n / n \to 0$), we obtain:
$$\begin{aligned}
\sum_{k=1}^{K}\Big(2R_k^{(F)} + R_k^{(S)}\Big)
&= R_1^{(F)} + \sum_{k=1}^{K-1}\Big(R_k^{(F)} + R_k^{(S)} + R_{k+1}^{(F)}\Big) + R_K^{(F)} + R_K^{(S)} \\
&\leq \sum_{k=1}^{K-1}\Bigg[\, \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}|H_{k+1,k+1}(i,j)|^2 P + \sum_{j=1}^{L}|H_{k,k+1}(i,j)|^2 P\bigg) + \frac{1}{2}\log\det\Big(I_L + H_{k,k+1}H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}}H_{k,k+1}^{\mathsf{T}}\Big) \\
&\qquad\quad + \frac{1}{2}\log\det\Big(H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}} + H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\Big) + \log\det\big(H_{k,k}\big) \Bigg] \\
&\quad + \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}|H_{1,1}(i,j)|^2 P\bigg) + \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}|H_{K,K}(i,j)|^2 P\bigg).
\end{aligned}$$
Dividing by K and 1 2 log ( P ) and taking P , K , establishes the converse bound (15).
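To see numerically how this normalisation yields the MG bound, one can evaluate the right-hand side of the sum-rate bound for a toy channel of our choosing (for illustration: $H_{k,k} = I_L$ and $H_{k,k+1} = \alpha I_L$, so all log-det terms have explicit closed forms) and divide by $K \cdot \frac{1}{2}\log P$. As P grows, the normalised bound approaches $(K+1)L/K$, hence L for large K:

```python
import math

def sum_rate_bound(K: int, L: int, P: float, alpha: float) -> float:
    """RHS of the sum-rate bound for the toy channel H_kk = I_L,
    H_{k,k+1} = alpha * I_L, where all log-det terms are explicit."""
    per_k = (L * 0.5 * math.log2(1 + P + alpha ** 2 * P)   # first term
             + 0.5 * L * math.log2(1 + alpha ** 2)         # log det(I + a^2 I)
             + 0.5 * L * math.log2(1 + alpha ** -2)        # log det(I + a^-2 I)
             + 0.0)                                        # log det(I_L) = 0
    edge = 2 * L * 0.5 * math.log2(1 + P)                  # the two edge terms
    return (K - 1) * per_k + edge

K, L, alpha = 20, 1, 0.5
for P in (1e6, 1e12, 1e18):
    normalized = sum_rate_bound(K, L, P, alpha) / (K * 0.5 * math.log2(P))
    # "normalized" tends to (K+1)*L/K as P grows; the constant det terms
    # vanish after division by (1/2) log P.
```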

Appendix B. Proof of Converse to (21)

Fix a sequence (in the blocklength n) of encoding and decoding functions respecting the power constraints and the Tx-cooperation rate limitations (recall that we consider a setup with only Tx-cooperation but no Rx-cooperation) such that the error probability p ( error ) → 0 as the blocklength n → ∞ .
Let $M^{(S)} \triangleq \big(M_1^{(S)}, \ldots, M_K^{(S)}\big)$. By Fano’s Inequality, there exists a sequence $\epsilon_n$ satisfying $\epsilon_n / n \to 0$ as n → ∞ such that for any k ∈ { 1 , … , K − 1 } and any blocklength n:
$$\begin{aligned}
&R_k^{(F)} + R_{k+1}^{(S)} + R_{k+1}^{(F)} \\
&\quad= \frac{1}{n}\Big[ H\big(M_k^{(F)} \,\big|\, M^{(S)},M_{k-1}^{(F)}\big) + H\big(M_{k+1} \,\big|\, M_1^{(S)},\ldots,M_k^{(S)},M_{k+2}^{(S)},\ldots,M_K^{(S)}\big) \Big] \\
&\quad\leq \frac{1}{n}\Big[ I\big(M_k^{(F)}; Y_k^n \,\big|\, M^{(S)},M_{k-1}^{(F)}\big) + I\big(M_{k+1}; Y_{k+1}^n \,\big|\, M_1^{(S)},\ldots,M_k^{(S)},M_{k+2}^{(S)},\ldots,M_K^{(S)}\big) \Big] + \frac{\epsilon_n}{n} \\
&\quad= \frac{1}{n}\Big[ h\big(H_{k,k}X_k^n + Z_k^n \,\big|\, M^{(S)}\big) - h\big(Z_k^n\big) + h\big(Y_{k+1}^n \,\big|\, M_1^{(S)},\ldots,M_k^{(S)},M_{k+2}^{(S)},\ldots,M_K^{(S)}\big) - h\big(H_{k,k+1}X_k^n + Z_{k+1}^n \,\big|\, M^{(S)}\big) \Big] + \frac{\epsilon_n}{n} \\
&\quad\stackrel{(a)}{\leq} \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}\big(|H_{k+1,k+1}(i,j)| + |H_{k,k+1}(i,j)|\big)^2 P\bigg) + \frac{1}{2}\log\det\Big(H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}} + H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\Big) + \log\det\big(H_{k,k}\big) + \frac{\epsilon_n}{n},
\end{aligned} \tag{A31}$$
where ( a ) follows by similar steps to those leading to (A15) and (A24), but where one has to account for the fact that, due to the Tx-cooperation, the input vectors $X_k^n$ and $X_{k+1}^n$ can be correlated.
Similarly to (A24) one can further prove that
$$R_1^{(F)} \;\leq\; \sum_{i=1}^{L}\frac{1}{2}\log\Big(1 + \sum_{j=1}^{L}|H_{1,1}(i,j)|^2 P\Big) + \frac{\epsilon_n}{n}, \tag{A32}$$
and
$$R_K^{(F)} + R_K^{(S)} \;\leq\; \sum_{i=1}^{L}\frac{1}{2}\log\Big(1 + \sum_{j=1}^{L}\big(|H_{K,K}(i,j)| + |H_{K-1,K}(i,j)|\big)^2 P\Big) + \frac{\epsilon_n}{n}, \tag{A33}$$
where again one has to consider that because of the Tx-cooperation the various input vectors can be correlated.
We now sum the bound (A31) over all k ∈ { 1 , … , K − 1 } and combine it with (A32) and (A33). Letting n → ∞ , and using that the probability of error p ( error ) vanishes as n → ∞ (and thus $\epsilon_n / n \to 0$), we obtain:
$$\begin{aligned}
\sum_{k=1}^{K}\Big(2R_k^{(F)} + R_k^{(S)}\Big)
&= R_1^{(F)} + \sum_{k=1}^{K-1}\Big(R_k^{(F)} + R_k^{(S)} + R_{k+1}^{(F)}\Big) + R_K^{(F)} + R_K^{(S)} \\
&\leq \sum_{k=1}^{K-1}\Bigg[\, \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}\big(|H_{k+1,k+1}(i,j)| + |H_{k,k+1}(i,j)|\big)^2 P\bigg) + \frac{1}{2}\log\det\Big(H_{k,k}^{-1}H_{k,k}^{-\mathsf{T}} + H_{k,k+1}^{-1}H_{k,k+1}^{-\mathsf{T}}\Big) + \log\det\big(H_{k,k}\big) \Bigg] \\
&\quad + \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}|H_{1,1}(i,j)|^2 P\bigg) + \sum_{i=1}^{L}\frac{1}{2}\log\bigg(1 + \sum_{j=1}^{L}\big(|H_{K,K}(i,j)| + |H_{K-1,K}(i,j)|\big)^2 P\bigg).
\end{aligned}$$
Dividing by K and 1 2 log ( P ) and taking P , K , establishes the converse bound (21).

References

  1. Cohen, K.M.; Steiner, A.; Shamai (Shitz), S. The broadcast approach under mixed delay constraints. In Proceedings of the 2012 IEEE International Symposium on Information Theory Proceedings, Cambridge, MA, USA, 1–6 July 2012; pp. 209–213. [Google Scholar]
  2. Zhang, R.; Cioffic, J.; Liang, Y.-C. MIMO broadcasting with delay-constrained and no-delay-constrained services. In Proceedings of the IEEE International Conference on Communications, Seoul, Korea, 16–20 May 2005; pp. 783–787. [Google Scholar]
  3. Zhang, R. Optimal dynamic resource allocation for multi-antenna broadcasting with heterogeneous delay-constrained traffic. IEEE J. Sel. Top. Signal Process. 2008, 2, 243–255. [Google Scholar] [CrossRef] [Green Version]
  4. Kassab, R.; Simeone, O.; Popovski, P. Coexistence of URLLC and eMBB services in the C-RAN uplink: An information-theoretic study. arXiv 2018, arXiv:1804.06593. [Google Scholar]
  5. Nikbakht, H.; Wigger, M.; Hachem, W.; Shamai (Shitz), S. Mixed delay constraints on a fading C-RAN uplink. In Proceedings of the ITW 2019: IEEE Information Theory Workshop, Visby, Sweden, 25–28 August 2019. [Google Scholar]
  6. Huleihel, W.; Steinberg, Y. Channels with cooperation links that may be absent. IEEE Trans. Inf. Theory 2017, 63, 5886–5906. [Google Scholar] [CrossRef] [Green Version]
  7. Itzhak, D.; Steinberg, Y. The broadcast channel with degraded message sets and unreliable conference. arXiv 2017, arXiv:1701.05780. [Google Scholar]
  8. Wyner, A.D. Shannon-theoretic approach to a Gaussian cellular multiple-access channel. IEEE Trans. Inf. Theory 1994, 40, 1713–1727. [Google Scholar] [CrossRef]
  9. Hanly, S.V.; Whiting, P. Information-theoretic capacity of multi-receiver networks. Telecommun. Syst. 1993, 1, 1–42. [Google Scholar] [CrossRef]
  10. Singhal, M.; Seyfi, T.; Gamal, A.E. Joint Uplink-Downlink Cooperative Interference Management with Flexible Cell Associations. arXiv 2018, arXiv:1811.11986. [Google Scholar]
  11. Lapidoth, A.; Levy, N.; Shamai (Shitz), S.; Wigger, M. Cognitive Wyner networks with clustered decoding. IEEE Trans. Inf. Theory 2014, 60, 6342–6367. [Google Scholar] [CrossRef] [Green Version]
  12. Wigger, M.; Timo, R.; Shamai (Shitz), S. Conferencing in Wyner’s asymmetric interference network: Effect of number of rounds. IEEE Trans. Inf. Theory 2016, 63, 1199–1226. [Google Scholar] [CrossRef] [Green Version]
  13. Levy, N.; Shamai (Shitz), S. Clustered local decoding for Wyner-type cellular models. IEEE Trans. Inf. Theory 2009, 55, 4967–4985. [Google Scholar] [CrossRef]
  14. Lapidoth, A.; Shamai (Shitz), S.; Wigger, M.A. On cognitive interference networks. In Proceedings of the 2007 IEEE Information Theory Workshop, Tahoe City, CA, USA, 2–6 September 2007; pp. 325–330. [Google Scholar]
  15. He, B.; Yang, N.; Zhou, X.; Yuan, J. Base station cooperation for confidential broadcasting in multi-cell networks. IEEE Trans. Wirel. Commun. 2015, 14, 5287–5299. [Google Scholar] [CrossRef] [Green Version]
  16. Annapureddy, V.S.; El Gamal, A.; Veeravalli, V.V. Degrees of freedom of interference channels with CoMP transmission and reception. IEEE Trans. Inf. Theory 2012, 58, 5740–5760. [Google Scholar] [CrossRef] [Green Version]
  17. Shamai (Shitz), S.; Wigger, M. Rate-limited transmitter-cooperation in Wyner’s asymmetric interference network. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, St. Petersburg, Russia, 31 July–5 August 2011; pp. 425–429. [Google Scholar]
  18. Nikbakht, H.; Wigger, M.; Shamai (Shitz), S. Multiplexing gain region of sectorized cellular networks with mixed delay constraints. In Proceedings of the 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2–5 July 2019. [Google Scholar]
  19. Wang, J.; Yuan, B.; Huang, L.; Jafar, S.A. GDoF of Interference Channel with Limited Cooperation under Finite Precision CSIT. arXiv 2019, arXiv:1908.00703. [Google Scholar]
Figure 1. System model with transmitter (Tx-) and receiver (Rx-)cooperation.
Figure 2. The optimal MG region S Rx 🟉 ( μ Rx , D ) for Rx-cooperation only, for different values of μ Rx and D = 10 and L = 1 .
Figure 3. Scheme achieving Multiplexing Gain (MG) pair (17a) where only “fast” messages are transmitted.
Figure 4. Scheme for Rx-cooperation only.
Figure 5. Scheme for Tx-cooperation only.
Figure 6. Examples of the three MG regions in (33), (34) and (37). Specifically, we used D Tx = 3 , D Rx = 3 , and μ Tx { 0.4 , 0.3 , 0.2 } and μ Rx { 0.4 , 0.3 , 0.2 } .
Figure 7. Examples of the MG regions discussed in Remark 2 for even values of D Tx and D Rx .
Figure 8. Bounds on S 🟉 ( μ Tx , μ Rx , D ) for μ Tx = 0.45 , μ Rx = 0.45 , D = 10 and L = 1 with both Tx- and Rx-cooperation.
Figure 9. Bounds on S 🟉 ( μ Tx , μ Rx , D ) for μ Tx = 0.3 , μ Rx = 0.3 , D = 10 and L = 1 with both Tx- and Rx-cooperation.
Figure 10. Bounds on S 🟉 ( μ Tx , μ Rx , D ) for μ Tx = 0.3 , μ Rx = 0.45 , D = 10 and L = 1 with both Tx- and Rx-cooperation.
Figure 11. Scheme with Rx- and Tx-cooperation.
Figure 12. An illustration of the scheme achieving MG pair (31e). Notice that since D is even, the last Tx of G 2 sends a “fast” message. And since D Rx is odd, also the first Tx in G 4 sends a “fast” message.
