
Abstract

Information theory has allowed us to determine the fundamental limits of various communication and algorithmic problems, e.g., the channel coding problem, the compression problem, and the hypothesis testing problem. In this work, we revisit the assumptions underlying two of these classical information-theoretic problems: the channel coding problem and the hypothesis testing problem. In the first part, we study the information velocity problem. While the channel coding problem answers the question of how much information we can send per time unit, the information velocity problem tackles the question of the latency of communicating said information over a communication network composed of relays. In the literature, this problem is commonly studied in the regime of a finite message size but a growing number of relays. In this work, we consider an asymptotic regime where we let the message size grow to infinity. We present a converse result and two achievability results: one for Binary Erasure Channels (BEC) and one for Additive White Gaussian Noise (AWGN) channels with feedback. The converse result is obtained by extending the argument of (Rajagopalan and Schulman, 1994) using the tools of f-divergences. The achievability results are based on two different ideas. In the achievability result for the BEC, we exploit the property of tree codes that ensures that all message bits can eventually be correctly decoded after a certain time delay. We use this property to build a tape abstraction which allows for the streaming of message bits through the relay chain. For AWGN channels, we modify the Schalkwijk-Kailath scheme so that each relay focuses on locally transmitting its estimate of the message bits to its neighboring relay. We analyze the local behavior of this scheme and show that we can prove results about the information velocity of the whole network based on these local results. In the second part, we study the monitoring problem.
This problem captures a scenario where several regular data-generating processes maximize their own reward, while one adversarial data-generating process, privy to certain private information, hides among them. This model introduces an interesting trade-off: the adversarial data-generating process aims to exploit its private information without deviating too much from the regular data-generating processes, since by increasing its deviation it also becomes more distinguishable from them. We analyze this problem using tools from information theory and characterize the extent of the advantage that the adversarial data-generating process can obtain. In doing so, we show that classification problems, which are commonly modeled as hypothesis testing problems, become more complex when an adversarial data-generating process can adapt to the tester's protocol.
