Multiplexing oscillatory biochemical signals

Wiet de Ronde and Pieter Rein ten Wolde

Published 1 April 2014 © 2014 IOP Publishing Ltd
Phys. Biol. 11 026004, DOI 10.1088/1478-3975/11/2/026004

Abstract

In recent years it has been increasingly recognized that biochemical signals are not necessarily constant in time and that the temporal dynamics of a signal can be the information carrier. Moreover, it is now well established that the protein signaling network of living cells has a bow-tie structure and that components are often shared between different signaling pathways. Here we show by mathematical modeling that living cells can multiplex a constant and an oscillatory signal: they can transmit these two signals simultaneously through a common signaling pathway, and yet respond to them specifically and reliably. We find that information transmission is reduced not only by noise arising from the intrinsic stochasticity of biochemical reactions, but also by crosstalk between the different channels. Yet, under biologically relevant conditions more than 2 bits of information can be transmitted per channel, even when the two signals are transmitted simultaneously. These observations suggest that oscillatory signals are ideal for multiplexing signals.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Cells live in a highly dynamic environment, which means that they continually have to respond to a large number of different signals. One possible strategy for signal transmission would be to use distinct signal transduction pathways for the transmission of the respective signals. However, it is now clear that components are often shared between different pathways. Prominent examples are the mitogen-activated protein kinase (MAPK) signaling pathways in yeast, which share multiple components [1, 2]. In fact, cells can even transmit different signals through one and the same pathway, and yet respond specifically and reliably to each of them. Arguably the best-known example is the rat PC-12 system, in which the epidermal growth factor (EGF) and neuronal growth factor (NGF) stimuli are transmitted through the same MAPK pathway, yet give rise to different cell fates, proliferation and differentiation, respectively [3, 4]. Another example is the p53 system, in which the signals representing double-stranded and single-stranded breaks in the DNA are transmitted via the same pathway [5]. These observations suggest that cells can multiplex biochemical signals [6], i.e. transmit multiple signals through one and the same signaling pathway, just as many telephone calls can be transmitted simultaneously via a shared medium, a copper wire or the ether.

One of the key challenges in transmitting multiple signals via pathways that share components is to avoid unwanted crosstalk between the different signals. In recent years, several mechanisms for generating signaling specificity have been proposed. One strategy is spatial insulation, in which the shared components are recruited into distinct macromolecular complexes on scaffold proteins [1, 7]. This mechanism effectively creates independent communication channels, one for each signal to be transmitted. Another mechanism is kinetic insulation, in which the common pathway is used at different times, and a temporal separation between the respective signals is thus established [8]. Another solution is cross-pathway inhibition, in which one signal dominates the response [9–13]. In the latter two schemes, kinetic insulation and cross-pathway inhibition, the signals are effectively transmitted via one signaling pathway, though in these schemes multiple messages cannot be transmitted simultaneously.

We have recently demonstrated that cells can truly multiplex signals: they can transmit at least two signals simultaneously through a common pathway, and yet respond specifically and reliably to each of them [6]. In the multiplexing scheme that we proposed, the input signals are encoded in the concentration levels of the signaling proteins. The underlying principle is, however, much more generic, since essentially any coding scheme can be used to multiplex signals. This observation is important, because it is becoming increasingly clear that cells employ a wide range of coding strategies for transducing signals. One is to encode the signals in the duration of the signal. This is the scheme used by the NGF–EGF system: while EGF stimulation yields a transient response of ERK, NGF leads to a sustained response of ERK [3, 4]. Another strategy is to encode the message in the frequency or amplitude of oscillatory signals. Indeed, a large number of systems have now been identified that employ oscillatory signals to transduce information. Arguably the best-known example is calcium oscillations [14], but other examples are the p53 [5], NFAT [15, 16], nuclear ERK [17] and NF-κB [15, 18–20] systems. In fact, cells use oscillatory signals not only to transmit information intracellularly, but also from one cell to the next—insulin [21] and gonadotropin release hormone [22] are prominent examples of extracellular signals that oscillate in time. More examples of systems that encode stimuli in the temporal dynamics of the signal are provided in the recent review article by Behar and Hoffmann [23].

While in [6] we showed that biochemical networks can multiplex two binary signals that are constant in time, here we demonstrate that they can also multiplex oscillatory signals. We present a system that multiplexes two signals. One signal is constant in time, yet the magnitude of the signal depends on the message to be transmitted; the information is thus encoded in the magnitude of the protein concentration. The other signal oscillates in time, but with an amplitude that depends on the message; the information is thus encoded in the amplitude of the concentration oscillations. These two input signals are then multiplexed in the dynamics of a common signaling pathway, which is based on well-known network motifs such as the Goldbeter–Koshland push–pull network [24] and the incoherent feedforward motif [25]. The dynamics of the common pathway are finally decoded by downstream networks.

Our results highlight that information transmission is a mapping problem. For optimal information transmission, each input signal needs to be mapped onto a unique output signal, allowing the cell to infer from the output what the input was. It is now well established that noise, arising from the inherent stochasticity of biochemical reactions, can reduce information transmission [6, 20, 26–31], because a given output signal may correspond to different input signals. Additionally, here we show that crosstalk between the two different signals can also compromise information transmission: a given state of a given input signal can map onto different states of its corresponding output signal, because the input–output mapping for that channel depends on the state of the signal that is transmitted through the other channel. This crosstalk presents a fundamental bound on the amount of information that can be transmitted, because it limits information transmission even in the deterministic, mean-field limit. We also show, however, that under biologically relevant conditions more than 2 bits of information can be transmitted per channel, which means that each channel can transmit at least four messages with 100% fidelity. We end by comparing our results with observations on experimental systems, in which oscillatory and constant signals are transmitted through a common pathway.

2. Results

2.1. The model

Figure 1 shows a cartoon of the setup. We consider two input species S1 and S2, with two corresponding output species, X1 and X2, respectively. The concentration S1(t) of input S1 oscillates in time, while the concentration of S2 is constant in time. An input signal can represent different messages; that is, the input can be in different states. For S1 the different states could be encoded either in the amplitude or in the period of the oscillations. Here, unless stated otherwise, we will focus on the former and comment on the latter in section 3. The different states of S2 correspond to different copy numbers or, since we are working at constant volume, concentration levels S2. The signals S1 and S2 drive oscillations in the concentration V(t) of an intermediate component V, with a mean that is determined by S2 and an amplitude that is determined by S1 (see Supporting Information, stacks.iop.org/PhysBio/11/026004/mmedia). The states of S1 are thus encoded in the amplitude of V(t) while the states of S2 are encoded in the mean level of V. The output X2 reads out the mean of V(t) and hence the state of its input S2 by simply time-integrating the oscillations of V(t). The output X1 reads out the amplitude of the oscillations in V(t) and hence the state of S1 via an adaptive network, indicated by the dashed circle. We now describe the coding and decoding steps in more detail.


Figure 1. Schematic drawing of the multiplexing system. Two signals are multiplexed. Signal S1 oscillates in time while signal S2 is constant. The message of S1 could be encoded either in the amplitude or in the frequency of the oscillations, but in this work we focus on the former. The message of S2 is encoded in the concentration. The output or response of S1 is X1 while the response of S2 is X2. Encircled is the adaptive motif, used to read out the amplitude of the oscillations of S1.


2.1.1. Encoding

In the encoding step of the motif, the two signals S1 and S2 are combined into the shared pathway. Each signal Si is modeled as a sinusoidal function

$S_i( t) = \mu _i + A_i \sin ( 2\pi t/T_i) ,$    (1)

where μi is the signal mean, Ai is the signal amplitude and Ti is the period of the signal oscillation. We assume that the signals are deterministic and discuss the effects of noise later. As discussed above, S1 is an oscillatory signal, with amplitude A1, period T1 and constant mean μ1. S2 is constant, A2 = 0, and the concentration level μ2 carries the information (i.e. sets the state) in the signal. In recent years it has been shown that biochemical systems can tune the amplitude and frequency of an oscillatory signal separately [32, 33].

The simplest shared pathway is a single component, V, which could be a receptor on the cell or nuclear membrane, but could also be an intracellular enzyme or a gene regulatory protein. We imagine that each signal is a kinase for V, which can switch between an active (e.g. phosphorylated) state (VP) and an inactive (e.g. unphosphorylated) state, such that

$\frac{{\rm d}V^P}{{\rm d}t} = \sum _i k_V S_i( t) \, \frac{V_T - V^P}{K_V + V_T - V^P} - m_V E_T \, \frac{V^P}{M_V + V^P},$    (2)

where we sum over the two signals S1(t) and S2(t) = μ2. The dephosphorylation is mediated by a phosphatase, which has a constant copy number ET. In equation (2) we assume Michaelis–Menten dynamics for V (see Supporting Information, stacks.iop.org/PhysBio/11/026004/mmedia).

The Michaelis–Menten kinetics of the activation of V could distort the transmission of the oscillatory signal of S1. This could reduce the amplitude of the oscillations or transform the sinusoidal signal into a signal that effectively switches between two values. Such transformations potentially hamper information transmission. We therefore require that the component V accurately tracks the dynamics of the input signals. It is well known that a linear transfer function between S and V does not lead to a deformation of the dynamic behavior, but only to a rescaling of the absolute levels (see Supporting Information, stacks.iop.org/PhysBio/11/026004/mmedia). A linear transfer function can be realized if the kinase acts in the saturated regime, while the phosphatase is not saturated (KV ≪ VT − VP(t) and MV ≫ VP(t)), leading to

$\frac{{\rm d}V^P}{{\rm d}t} = k_V \left( S_1( t) + S_2 \right) - m^{\prime }_V V^P,$    (3)

with $m^{\prime }_V=m_VE_T/M_V$.
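To make the encoding step concrete, the following minimal Python sketch integrates equation (2) numerically and compares the resulting mean of VP with the prediction of the linear regime, equation (3). Parameter values are taken from figure 2 where available; the remaining choices are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters as in figure 2 where given; others are illustrative assumptions.
VT, ET = 1000.0, 600.0
kV, KV = 0.1, 1e-4 * VT
mV, MV = 1.0, 5 * VT              # so that mV*ET = 600, as in figure 2
mu1, A1, T1 = 0.0, 0.5, 10.0      # oscillatory signal S1 (equation (1))
mu2 = 500.0                       # constant signal S2

def S1(t):
    return mu1 + A1 * np.sin(2 * np.pi * t / T1)

def dVP(t, y):
    VP = y[0]
    act = kV * (S1(t) + mu2) * (VT - VP) / (KV + VT - VP)  # saturated kinase
    deact = mV * ET * VP / (MV + VP)                       # unsaturated phosphatase
    return [act - deact]

sol = solve_ivp(dVP, (0, 500), [0.0], max_step=0.1)

mpV = mV * ET / MV                                # m'_V of equation (3)
print("simulated <VP>       :", sol.y[0][sol.t > 250].mean())
print("linear prediction (3):", kV * mu2 / mpV)   # mean is set by mu2 alone
```

In this regime VP tracks the input linearly: its mean follows μ2 and its oscillation amplitude follows A1, which is precisely the encoding that the decoding modules below rely on.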

2.1.2. Decoding VP to X1, X2

The second part of the multiplexer is the decoding of the information in VP into a functional output (see figure 1). The signals that are encoded in VP have to be decoded into two output signals, the responses X1 and X2. We imagine that the cell should be able to infer from an instantaneous measurement of the output response the state of the corresponding input signal. Therefore, we take the outputs of the multiplexing motif to be the concentration levels X1 and X2 of the output species X1 and X2, respectively. Here X1 is the response of S1, while X2 is the response of S2.

The response X2 should be sensitive to the concentration of S2, but be blind to any characteristics of S1. In our simple model there is only one time-dependent signal, namely S1; S2 is constant in time. Since VP depends linearly on the signals (equation (3)), the average level of VP, 〈VP〉, is independent of both the amplitude A1 and the period T1 of the oscillations in S1. The average 〈VP〉 does depend on the mean concentration level of the two signals, and since S1 has a constant mean, changes in 〈VP〉 reflect only a change in the mean of S2, μ2. As a result, a simple linear time-integration motif can be used as the final read-out for S2. We therefore model X2 as

$\frac{{\rm d}X_2}{{\rm d}t} = k_{X_2} V^P - m_{X_2} X_2.$    (4)

The production–degradation dynamics in the above expression constitute a simple and common time-integration motif, which is sufficient for our purpose, even though other decoding motifs might work even better. Since equation (4) is linear, 〈X2〉 is a function of 〈VP〉 only. Moreover, if the response time of X2, $\tau _{X_2}( =m^{-1}_{X_2})$, is much longer than the oscillation period T1 of S1, the effect of the oscillations on the instantaneous concentration X2 is integrated out. This is important for reducing the variability in 〈X2〉 due to dynamics in the system [34].
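A short sketch of this read-out, assuming the linear regime of equation (3) for VP and illustrative rate constants: because the response time 1/mX2 is much longer than T1, the oscillations average out and 〈X2〉 depends on μ2 but not on A1.

```python
import numpy as np
from scipy.integrate import solve_ivp

kV, mpV = 0.1, 0.12            # linear regime of equation (3), m'_V = mV*ET/MV
kX2, mX2 = 0.001, 0.001        # response time 1/mX2 = 1000 >> T1
T1, mu2 = 10.0, 500.0

def mean_X2(A1):
    def rhs(t, y):
        VP, X2 = y
        S = mu2 + A1 * np.sin(2 * np.pi * t / T1)   # mu1 = 0
        return [kV * S - mpV * VP,                  # equation (3)
                kX2 * VP - mX2 * X2]                # equation (4)
    sol = solve_ivp(rhs, (0, 20000), [0.0, 0.0], max_step=0.5,
                    rtol=1e-8, atol=1e-8)
    return sol.y[1][sol.t > 10000].mean()

# <X2> is essentially independent of the amplitude A1 of S1:
for A1 in (0.0, 0.25, 0.5):
    print(A1, mean_X2(A1))
```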

For X1 a simple time-integration scheme does not work. The information that has to be mapped onto the output concentration X1 is the amplitude of S1, which is propagated to VP. The output X1 should therefore depend on the amplitude of the oscillations of VP, but not on its mean 〈VP〉, since the mean represents the information in S2. These requirements mean that the frequency-dependent gain of the network from V to X1 should have a band-pass structure. The frequency-dependent gain shows how the amplification of the input signal depends on the frequency of the signal [35] (figure 2). Due to the finite lifetime of the molecules, the frequency-dependent gain of any biochemical network inevitably reaches zero at high frequencies. Here we require that at the other end of the frequency spectrum, in the zero-frequency limit, the gain should also be small: changes in the constant level of VP, which result from changes in S2, should not be amplified because X1 should not respond to changes in S2. Indeed, only at intermediate frequencies should the gain be large: changes in the amplitude of the oscillations of VP at the frequency of the input S1 must be strongly amplified, because these changes correspond to changes in S1 to which X1 must respond. The network between V and X1 should thus have a frequency transmission band that matches the frequency of S1. The output X1 will then strongly respond to S1 but not to S2.


Figure 2. The gain $g^2_{W^P}( \omega )$ for channel 1 for different parameter sets. The circles indicate the response times τi. (a) The effect of changing the time scale kR = mR. (b) The effect of changing the signal S2 in the other channel 2; it is seen that the gain $g^2_{W^P}( \omega )$ of channel 1 depends on S2, which may lead to crosstalk. Parameters: panel (a): μ2 = 500, kR = mR; panel (b): μ2 = 200 and μ2 = 800, kR = 1, mR = 1; panels (a), (b): μ1 = 0, kV = 0.1, KV = 10^−4 VT, mV ET = 600, MV = 5VT, VT = 1000, kW = 1, KW = MW = WT/4, WT = 1000; mW sets the time scale.


A common biochemical motif with a frequency band-pass filter is an adaptive motif [36]. An adaptive system does not respond to very slowly varying signals, essentially because it adapts to the slowly changing signal before a full response is generated. Indeed, the key feature of an adaptive system is that the steady-state output is independent of the magnitude of a constant input, meaning that

$\frac{\partial \langle W^P\rangle _{\rm ss}}{\partial \langle V^P\rangle } = 0.$    (5)

This appears to be precisely what is required, because it means that when S1 is absent and S2 is changed, the output X1 remains constant, as it should—only X2 should change when S2, a signal constant in time, is changed. On the other hand, while the steady-state output of an adaptive network is insensitive to variations in constant inputs, it is, in general, sensitive to dynamical inputs. This observation is well known; it is, e.g., the basis for the chemotactic behavior of E. coli, where the system responds to a change in the input concentration, and the strength of the response depends on the magnitude of the change in input concentration. This is another characteristic that is required, because it allows the magnitude of the response X1 to depend on the amplitude A1 of the oscillations in S1, thus enabling a mapping from A1 to X1. The important question that remains is whether the magnitude of the response depends solely on the change in the input concentration, which reflects S1, or whether it also depends on the absolute value of the input concentration, which reflects S2. In the following, we will show that it may depend on both, potentially introducing crosstalk between the two signals.

Two common ways to construct an adaptive motif are known: the negative feedback motif and the incoherent feedforward motif [37–41]. In this multiplexing system we use the latter. In the incoherent feedforward motif the input signal, in our case VP, stimulates two downstream components R and W; see figure 1. One of the downstream components, R, is also a signal for the other downstream component, W. Importantly, the regulatory effect of the direct pathway (VP → W) is opposite to the effect of the indirect pathway (VP → R → W). As a result, if VP activates W, this activation is counteracted by the regulation of W through R. We thus obtain

$\frac{{\rm d}R}{{\rm d}t} = k_R V^P - m_R R,$    (6)

$\frac{{\rm d}W^P}{{\rm d}t} = k_W V^P \, \frac{W_T - W^P}{K_W + W_T - W^P} - m_W R \, \frac{W^P}{M_W + W^P}.$    (7)

This motif is adaptive, which can be shown by setting the time derivatives in equation (6) and equation (7) to zero and solving for the steady state 〈WP〉. This yields

$k_W \, \frac{W_T - \langle W^P\rangle }{K_W + W_T - \langle W^P\rangle } = \frac{m_W k_R}{m_R} \, \frac{\langle W^P\rangle }{M_W + \langle W^P\rangle }.$    (8)

Although the explicit expression for 〈WP〉 is unwieldy to present, equation (8) shows that 〈WP〉 does not depend on the magnitude of a constant input 〈VP〉: upon substituting the steady state R = kR〈VP〉/mR, the factor 〈VP〉 cancels from both sides. This means that the network is indeed adaptive.
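This adaptive behavior is easy to verify numerically. The sketch below integrates equations (6) and (7), as reconstructed above, for a step in a constant input VP; the parameters are illustrative, with KW = MW = WT/4 as in figure 2. WP responds transiently to the step but relaxes back to the same steady state, in accordance with equation (8).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters; KW = MW = WT/4 as in figure 2.
kR, mR, kW, mW = 1.0, 1.0, 1.0, 1.0
WT, KW, MW = 1000.0, 250.0, 250.0

def VP(t):
    return 300.0 if t < 50 else 600.0   # step in a *constant* input level

def rhs(t, y):
    R, WP = y
    dR = kR * VP(t) - mR * R                              # equation (6)
    dWP = (kW * VP(t) * (WT - WP) / (KW + WT - WP)        # direct activation
           - mW * R * WP / (MW + WP))                     # deactivation via R
    return [dR, dWP]

# Start in the pre-step steady state (R = VP = 300; WP = 500 from equation (8)).
sol = solve_ivp(rhs, (0, 200), [300.0, 500.0], max_step=0.05)
t, WP_t = sol.t, sol.y[1]
print("before the step:", WP_t[np.searchsorted(t, 49.0)])
print("transient peak :", WP_t.max())    # WP overshoots right after the step
print("long after     :", WP_t[-1])      # and adapts back to the same value
```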

For a correct separation of the signals, the response W should be insensitive to the average level of VP, 〈VP〉, since 〈VP〉 carries information on S2, and not S1. Indeed, a dependence of W on 〈VP〉 and hence on S2 necessarily leads to unwanted crosstalk between the two information channels. The adaptive property of the network ensures that WP is insensitive to the mean of VP when VP is constant in time. Consequently, when signal 1 is absent (A1 = 0), WP and hence X1 will not depend on the level μ2 of S2, and there is thus no crosstalk from channel 2, precisely as required. However, the response of WP(t) (and thereby X1) to changes in A1 may depend on the mean of VP(t) and hence on S2, thus potentially generating crosstalk; in fact, when S2 is changed at constant A1 ≠ 0, both the mean and the amplitude of the oscillations of WP(t) may change. These characteristics are a result of the nonlinearity of the adaptive network. While both terms on the right-hand side of equation (6) are linear and the first term on the right-hand side of equation (7) is linear in the regime KW ≪ (WT − WP), the second term on the right-hand side of equation (7) is necessarily nonlinear, since deactivation of W depends on both W and R, involving bimolecular reactions. Crosstalk may thus arise, as we discuss below. In the Supporting Information (available at stacks.iop.org/PhysBio/11/026004/mmedia), we describe in more detail how the transmission of the oscillatory signal from VP(t) to WP(t) depends on the properties of the oscillatory input signal and on the characteristics of the adaptive network.

To elucidate the crosstalk it is instructive to study the frequency-dependent gain of the adaptive network [35, 42]. The frequency-dependent gain describes how much the amplitude of the output oscillations WP changes when the amplitude of the input oscillations VP (which depends on S1) is varied, as a function of the frequency ω of the input. As we will see, the frequency-dependent gain depends on the mean of VP(t), which is set by S2. This can generate crosstalk.

The full expression for the frequency-dependent gain g2(ω) is too unwieldy to present here, but, linearizing equation (6) and equation (7), we find in simplified form

$g^{2}( \omega ) \approx \frac{\alpha \, \omega ^{2}}{( 1 + \omega ^{2}\tau _{V}^{2})( 1 + \omega ^{2}\tau _{R}^{2})( 1 + \omega ^{2}\tau _{W}^{2})},$    (9)

where α is a proportionality constant and τi is the response time of component i. For slowly varying signals (ω → 0), the amplitude of the response is negligible due to the ω2-term in the numerator of equation (9), reflecting the adaptive nature of the network. For $\omega \ll \min [ \tau ^{-1}_V, \tau ^{-1}_R, \tau ^{-1}_{W}]$ the gain thus scales with ω2, while for very large ω it scales with ω−4. In the intermediate regime the scaling depends on the precise response times. The response times follow from the diagonal elements of the Jacobian of the linearized system (2), (6), (7), via τi = −1/Jii:

$\tau _V = \left[ k_V ( \mu _1 + \mu _2) \, \frac{K_V}{( K_V + V_T - \langle V^P\rangle )^{2}} + m_V E_T \, \frac{M_V}{( M_V + \langle V^P\rangle )^{2}} \right] ^{-1},$    (10)

$\tau _R = m_R^{-1},$    (11)

$\tau _W = \left[ k_W \langle V^P\rangle \, \frac{K_W}{( K_W + W_T - \langle W^P\rangle )^{2}} + m_W \langle R\rangle \, \frac{M_W}{( M_W + \langle W^P\rangle )^{2}} \right] ^{-1}.$    (12)

Equation (11) gives the response time for a protein with a simple birth–death reaction. The mathematical form of the response times τV and τW, equation (10) and equation (12), resembles that of a switching process with a forward and a backward step; their values depend on the signal parameters. When the dynamics of VP operate in the linear regime (see equation (3)), τV simplifies to τV ≈ (mVET/MV)−1 = 1/m′V, which is just the inverse of the linear decay rate of VP. Importantly, the response time τW and hence the gain g2(ω) depend on 〈VP〉 and thereby on S2. This means that the response of X1 to S1 will depend on S2, generating crosstalk.

The gain equation (9) is shown in figures 2(a) and (b) for two different parameter sets. The band-pass structure, with corresponding resonance frequency (the peak in the gain), is observed. Further, with circles, the response times τV (black open), τR (black solid) and τW (gray open) are shown, which determine the position of the peak in the gain; the peak occurs at a frequency in between the inverses of the two largest response times. In figure 2(a) we observe the influence of increasing kR and mR, which are taken to be equal. For very slow changes in R, corresponding to kR, mR being small, the network has a very large gain. Decreasing the response time of R (increasing kR, mR) decreases the amplitude at the resonance frequency considerably. Faster tracking of VP by R makes the adaptation of the biochemical circuit very fast and, as a result, WP does not respond at all to changes in VP. In figure 2(b) we observe the influence of changing the state μ2 of S2. The gain decreases for larger μ2, and the response time τW increases. This may lead to crosstalk, since the mapping of A1 to X1 will now depend on S2.
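The band-pass structure and its dependence on μ2 can also be seen directly in simulation, without the linearization: drive the network of equations (2), (6) and (7) with a sinusoid at frequency ω and measure the squared ratio of the WP and VP oscillation amplitudes. A sketch, with the figure 2 parameter values where given and illustrative choices elsewhere (the drive amplitude is deliberately large for numerical clarity):

```python
import numpy as np
from scipy.integrate import solve_ivp

kV, KV, mV, ET, MV, VT = 0.1, 0.1, 1.0, 600.0, 5000.0, 1000.0
kR, mR, kW, mW = 1.0, 1.0, 1.0, 1.0
WT, KW, MW = 1000.0, 250.0, 250.0
A1 = 50.0          # large drive amplitude, chosen for numerical clarity

def gain(omega, mu2):
    """Squared amplitude ratio of the WP and VP oscillations at frequency omega."""
    def rhs(t, y):
        VP, R, WP = y
        S = mu2 + A1 * np.sin(omega * t)
        dVP = kV * S * (VT - VP) / (KV + VT - VP) - mV * ET * VP / (MV + VP)
        dR = kR * VP - mR * R
        dWP = (kW * VP * (WT - WP) / (KW + WT - WP)
               - mW * R * WP / (MW + WP))
        return [dVP, dR, dWP]
    T = 2 * np.pi / omega
    VP0 = kV * mu2 / (mV * ET / MV)          # linear-regime steady state of VP
    t_end = max(40 * T, 400.0)               # let all transients die out
    sol = solve_ivp(rhs, (0, t_end), [VP0, VP0, 500.0],
                    max_step=min(T / 100, 1.0), rtol=1e-6, atol=1e-6)
    tail = sol.t > t_end - 20 * T            # keep the last 20 periods
    amp = lambda x: 0.5 * (x[tail].max() - x[tail].min())
    return (amp(sol.y[2]) / amp(sol.y[0])) ** 2

for mu2 in (200.0, 800.0):                   # the two states of figure 2(b)
    print(mu2, [round(gain(w, mu2), 3) for w in (0.01, 0.1, 1.0, 10.0)])
# The gain is small at low and high omega and peaks in between (band-pass);
# it also depends on mu2, which is the origin of the crosstalk in the text.
```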

Finally, we look at the last step in the motif, the conversion of the dynamic response of the adaptive motif W into X1. The instantaneous concentration X1 should inform the system on the state of input S1. Simple time-integration of W, similar to the response X2 in equation (4), is not sufficient. While time-integration by itself is important for averaging over multiple oscillation cycles, it is not sufficient because time-integration with a linear transfer function does not lead to a change in the response when the amplitude of the input is varied, assuming that the oscillations are symmetric. Indeed, to respond to different amplitudes, a nonlinear transfer function is required:

$\frac{{\rm d}X_1}{{\rm d}t} = k_{X_1} \frac{( W^P) ^{n}}{K_{X_1}^{n} + ( W^P) ^{n}} - m_{X_1} X_1.$    (13)

These Hill-type nonlinear transfer functions are very common in biological systems, for example in gene regulation by transcription factors, or protein activation by multiple enzymes.
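The effect of such a nonlinear read-out can be illustrated with a few lines of Python: time-averaging a Hill function of an oscillating WP gives a mean that rises with the oscillation amplitude, whereas a linear read-out would be blind to it. The Hill coefficient and threshold below are illustrative choices, not the paper's values.

```python
import numpy as np

def hill(w, K, n):
    """Hill-type transfer function in the read-out of WP (cf. equation (13))."""
    return w**n / (K**n + w**n)

t = np.linspace(0, 100, 10001)
mean_WP = 500.0
for amp in (0.0, 100.0, 200.0, 400.0):
    WP = mean_WP + amp * np.sin(2 * np.pi * t / 10)
    # The steady-state mean of X1 is proportional to the time average of
    # hill(WP); a linear read-out would give the same value for every amplitude.
    print(amp, round(hill(WP, K=700.0, n=4).mean(), 3))
```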

2.2. Multiplexing

Now that we have specified the model with its components, we characterize its multiplexing capacity. Komarova, Bardwell and co-workers have quantified crosstalk via two measures: a specificity measure that quantifies how much a given input generates the desired response as compared to the unwanted response, and a fidelity measure that quantifies how much a given response is generated by the corresponding 'intended' input signal as compared to the 'unintended' input signal [43, 44]. We would like to quantify how many different messages can be transmitted reliably through each channel. To this end, we use the formalism of information theory [45]. We define two measures: (1) I1(A1; X1), the mutual information between the amplitude A1 of signal S1 and the concentration X1, and (2) I2(μ2; X2), the mutual information between the concentration level μ2 of S2 and the concentration X2. The information capacity of the system is then defined by the total information IT = I1(A1; X1) + I2(μ2; X2) that is transmitted through the system. The mutual information I(Si; Xi) quantifies how much, on average, our uncertainty in one variable, e.g. the input Si, is reduced by knowing the value of the other variable, e.g. the output Xi; it measures, in bits, how many different messages can be transmitted through channel i with 100% reliability. Importantly, the mutual information does not necessarily reflect whether each input signal is transmitted with 100% fidelity. For example, increasing the number of input states NA can increase the mutual information I1(A1; X1) [45], yet a specific output concentration X1 could be less informative about a specific input amplitude A1. To quantify the fidelity of signal transmission, we normalize the mutual information by the information entropy H(A1) and H(μ2) of the respective inputs. We therefore define the relative mutual information

$I_R( A_1;X_1) = \frac{I_1( A_1;X_1)}{H( A_1)}, \qquad I_R( \mu _2;X_2) = \frac{I_2( \mu _2;X_2)}{H( \mu _2)},$    (14)

$I_R( ( A_1;X_1) ,( \mu _2;X_2) ) = I_R( A_1;X_1) + I_R( \mu _2;X_2).$    (15)

Note that IR((A1; X1), (μ2; X2)) has a maximum value of 2, meaning that each channel i = 1, 2 transmits all its messages Si → Xi with 100% fidelity.
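For a discrete, uniform input and Gaussian output noise, the relative mutual information can be estimated numerically as in the sketch below. This is a simplified stand-in for the paper's procedure (which computes the noise from the linear-noise approximation of the full network); all numbers are illustrative.

```python
import numpy as np

def relative_mutual_information(means, sigma, n_bins=400):
    """I(S;X)/H(S) for a uniform discrete input S, where input state i
    produces a Gaussian output X ~ N(means[i], sigma^2)."""
    N = len(means)
    x = np.linspace(min(means) - 5 * sigma, max(means) + 5 * sigma, n_bins)
    dx = x[1] - x[0]
    # Conditional densities p(x|s) on the grid, normalized so sum*dx = 1.
    pxs = np.exp(-(x[None, :] - np.asarray(means)[:, None])**2 / (2 * sigma**2))
    pxs /= pxs.sum(axis=1, keepdims=True) * dx
    px = pxs.mean(axis=0)                     # p(x) for uniform p(s) = 1/N
    # I(S;X) = (1/N) * sum_s int p(x|s) log2[p(x|s)/p(x)] dx
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = np.where(pxs > 0, pxs * np.log2(pxs / px), 0.0)
    return integrand.sum(axis=1).mean() * dx / np.log2(N)   # divide by H(S)

# N equally spaced output levels; noise increasingly blurs them together,
# so the relative information drops below 1 as N grows.
for N in (2, 4, 8, 16):
    print(N, round(relative_mutual_information(np.linspace(0, 1000, N), 30.0), 3))
```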

The mutual information depends on the kinetic parameters of the system, on the input distribution of the signal states, and on the amount of noise in the system. In a previous study we have shown that under biologically relevant conditions, a simple biochemical system using only constant signals is capable of simultaneously transmitting at least two bits of information [6], meaning that at least two signals with two input states can be transmitted with 100% fidelity. Here we wondered whether this information capacity can be increased. Therefore, we study the system for an increasing number of input states (increasing NA for S1 and Nμ for S2), where we assume a uniform distribution of the states for S1, A1 ∈ [0: 1], and for S2, μ2 ∈ [0: μmax] (see equation (1)). To obtain a lower bound on the information that can be transmitted, we optimize the total mutual information over a subset of the kinetic parameters, where we constrain the kinetic rates to being such that 10^−3 < ki < 10^3, the dissociation constants to being such that 1 < Ki < 7.5 × 10^4, the maximum concentration level for S2 to being such that 10 < μmax < 1000 and the oscillation period to being such that 10 < T < 10000. We set the response times of X1, X2 to be much longer than the oscillation period, so that the variability in V and W due to the oscillations in S1 is time-integrated; specifically, $m_{X_1}=m_{X_2}=( NT_p)^{-1}\ {\rm s}^{-1}$, such that the output averages over N = 10 oscillations with period Tp. The noise strength is calculated using the linear-noise approximation [46] while assuming that the input signals are constant, of magnitude μ1, μ2. The effects of the nonlinear and oscillatory nature of the network on the noise strength are thus not taken into account. However, we do not expect these two effects to qualitatively change the observations discussed below. To compute the noise strength, we assume that the maximum copy numbers of X1 and X2 are 1000. The optimization is performed using an evolutionary algorithm (see Supporting Information, stacks.iop.org/PhysBio/11/026004/mmedia).
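As an indication of what such an optimization looks like, here is a minimal (1+1) evolutionary strategy that mutates parameters in log-space within the bounds quoted above. The objective function is a toy stand-in: in the actual optimization it is the total mutual information IT computed from the full network (see the Supporting Information for the algorithm actually used).

```python
import numpy as np

rng = np.random.default_rng(1)

# Bounds quoted in the text: rates, dissociation constants, mu_max, period.
bounds = {"k": (1e-3, 1e3), "K": (1.0, 7.5e4), "mu_max": (10.0, 1e3), "T": (10.0, 1e4)}

def total_information(params):
    """Toy stand-in objective; in the paper this is I_T = I1 + I2."""
    return -sum((np.log10(v) - 1.0)**2 for v in params.values())

def mutate(params, scale=0.2):
    new = {}
    for name, v in params.items():
        lo, hi = bounds[name]
        w = np.log10(v) + scale * rng.standard_normal()   # mutate in log-space
        new[name] = float(np.clip(10.0**w, lo, hi))
    return new

params = {n: float(np.sqrt(lo * hi)) for n, (lo, hi) in bounds.items()}
best = total_information(params)
for _ in range(2000):             # (1+1) strategy: keep the better of the two
    trial = mutate(params)
    f = total_information(trial)
    if f >= best:
        params, best = trial, f
print(round(best, 4), params)
```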

Before we discuss the information transmission capacity of our system, we first show typical results for the time traces and input–output relations as obtained by the evolutionary algorithm. Figure 3(a) shows that the oscillations in VP are amplified by the adaptive network to yield large amplitude oscillations in WP. In contrast, X1 and X2 only exhibit very weak oscillations due to their long lifetime. Figure 3(b) shows that when A1 is increased while μ2 is kept constant, the average of VP, which is set by μ2, is indeed constant. As a result, 〈X2〉 is constant, as it should be (because μ2 is constant). In contrast, X1 increases with A1. This is because the amplitude of the oscillations in WP increases with A1, which is picked up by the nonlinear transfer function from WP to X1. In addition, X1 increases because the mean of WP itself increases, due to the nonlinearity of the network; this further helps to increase X1 with A1. Figure 3(c) shows that when μ2 is increased while A1 is kept constant, 〈VP〉 and hence X2—the response of S2—increases. Importantly, while the mean of the buffer node R of the adaptive network increases with 〈VP〉, the mean of the output of this network, WP, is almost constant. Consequently, X1 is nearly constant, as it should be because X1 should reflect the value of A1 which is kept constant. These two panels thus show that this system can multiplex two signals: it can transmit multiple states of two signals through one and the same signaling pathway, and yet each output responds very specifically to changes in its corresponding input. This is the central result of our work.


Figure 3. Typical time traces and input–output curves as obtained by the evolutionary algorithm. Shown are results for a system with NA = Nμ = 16. (a) Time traces of VP, R, WP, X1 and X2 for A1 = 0.5 and μ2 = 275. (b) 〈VP〉, 〈R〉, 〈WP〉, 〈X1〉, and 〈X2〉 as a function of A1, keeping μ2 = 275 constant. (c) 〈VP〉, 〈R〉, 〈WP〉, 〈X1〉, and 〈X2〉 as a function of μ2, keeping A1 = 0.5. In both (b) and (c) 〈VP〉 ≈ 〈R〉 and the lines for VP and R thus fall on top of each other. The figure shows that the system can multiplex: X1 is sensitive to S1 = A1 (panel (b)) but not S2 = μ2 (panel (c)), while X2 is sensitive to S2 = μ2 (panel (c)) but not S1 = A1 (panel (b)). The time traces in panel (a) correspond to the points in panels (b) and (c) that are indicated by the dashed lines. All panels correspond to the point in figure 5(c).


Interestingly, figure 3(c) shows a (very) weak dependence of X1 on S2 = μ2, which will introduce crosstalk in the system. It is important to realize that this will reduce information transmission, even in a deterministic noiseless system. In a deterministic system, every combination of inputs (S1, S2) maps onto a unique combination of outputs (X1, X2) and, in general, each output Xi depends on both Si and Sj ≠ i. Maximal information transmission from S1 to X1 and from S2 to X2 occurs when for each channel i the input–output relation Xi(Si) is independent of the state of the other channel. Thus, A1 should map onto a unique value of X1 independent of μ2 while μ2 should map onto a unique value of X2 independent of A1. However, crosstalk causes the mapping from Si to Xi to depend on the state of the other channel j ≠ i. This dependence reduces information transmission, because a given value of Xi can now correspond to multiple values of Si. This is illustrated in figure 4(a) for channel 1. The input–output relation X1(A1) depends on μ2 and, as a result, from the output X1 the value of the input A1 can no longer be uniquely inferred. This reduces the number of distinct messages that can be transmitted through channel 1 with 100% fidelity. Crosstalk can thus reduce information transmission even in a deterministic system without biochemical noise.


Figure 4. The influence of noise and crosstalk on information transmission in the pathway S1 → X1. (a) Schematic: crosstalk reduces the amount of information that can be transmitted. For every A1, multiple values of 〈X1〉 are obtained, each corresponding to a different value of μ2. The dark red line corresponds to the maximum value of 〈X1〉 for each A1, while the light red line denotes the minimum value. The black line in between the red lines visualizes the range for which a specific 〈X1〉 uniquely maps to a single input amplitude A1. Crosstalk from the S2 → X2 channel thus limits the number of states, and hence the amount of information, that can be transmitted through channel 1. (b) Schematic: noise also reduces the number of input states that can be resolved. Shown is the mean response curve 〈X1〉(A1) together with the noise in X1. Dotted lines give the minimum and maximum values of X1 for each amplitude. Since for each A1 a larger range of X1 values is obtained, fewer states A1 can be uniquely encoded in the phase space. This is reflected in the width of the boxes; indeed, here only five input states can be transmitted with absolute reliability. (c) Combined effect of noise and crosstalk on information transmission for a system with NA = Nμ = 8, as obtained from the evolutionary optimization algorithm; the results correspond to the black dot in figure 5(c). Both the noise and the crosstalk reduce the number of possible input states that can be transmitted. Solid lines give the deterministic dose–response curve, while dashed lines correspond to a network with noise. Dark red lines indicate the maximum of 〈X1〉 for a specific A1 over the range of possible values of μ2, while light red lines give the minimum value. Because for each A1 a range of 〈X1〉 values is obtained, the number of states A1 that can be uniquely encoded in the phase space is limited. This is reflected in the increase in the width of the boxes; indeed, here only seven input states can be transmitted with absolute reliability.


It is of interest to quantify the amount of information that can be transmitted in the presence of crosstalk in a deterministic, noiseless system. Via the procedure described in the Supporting Information (stacks.iop.org/PhysBio/11/026004/mmedia), we compute the maximal mutual information for the two channels, assuming that we have a uniform distribution of input states for each channel, with A1 ∈ [0: 1] and μ2 ∈ [0: μmax]; WT = XT = 100. We find that for channel 2, the mutual information is given by the entropy of the input distribution, which means that the number of signals that can be transmitted with 100% fidelity through that channel is just the total number of input signals for that channel. This is because signal transmission through channel 2 is hardly affected by crosstalk from the other channel. Below we will see and explain that this observation also holds in the presence of biochemical noise. For signal transmission through channel 1, however, the situation is markedly different. The maximum amount of information that can be transmitted through that channel is limited to about 4 bits. This means that up to 2^4 = 16 signals can be transmitted with 100% fidelity; in this regime, the input signal S1 can be uniquely inferred from the output signal X1. Increasing the number of input signals beyond 2^4, however, does not increase the amount of information that is transmitted through that channel; more signals will be transmitted, but, due to the crosstalk from the other channel, each signal will be transmitted less reliably (see figure 4 and Supporting Information, stacks.iop.org/PhysBio/11/026004/mmedia).
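The counting argument behind this bound (figure 4(a)) is simple to implement: given the minimum and maximum of 〈X1〉 over all values of μ2 for each amplitude A1, the number of amplitudes that can be resolved with certainty is the number of non-overlapping output intervals. A sketch with a toy dose-response curve and a constant crosstalk-induced spread (both illustrative, not the paper's curves):

```python
import numpy as np

def resolvable_states(A, x_min, x_max):
    """Greedy count of amplitudes whose output intervals [x_min, x_max]
    do not overlap (the construction sketched in figure 4(a))."""
    count, ceiling = 0, -np.inf
    for lo, hi in zip(x_min, x_max):
        if lo > ceiling:       # this interval clears all previous ones
            count += 1
            ceiling = hi
    return count

A = np.linspace(0.0, 1.0, 1000)
x_mid = 1000 * A**2 / (A**2 + 0.25**2)    # toy Hill-like response <X1>(A1)
spread = 40.0                             # spread of <X1> due to the unknown mu2
n = resolvable_states(A, x_mid - spread, x_mid + spread)
print(n, "states, i.e. about", round(np.log2(n), 2), "bits")
```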

We will now quantify how many messages can be transmitted reliably in the presence of not only crosstalk, but also biochemical noise. The results of the optimization of the mutual information using the evolutionary algorithm are shown in figure 5. The left panel shows the relative mutual information for channel 1, the middle panel shows that for channel 2, and the right panel shows the total relative mutual information, equation (15). Clearly, biochemical noise affects information transmission through the two respective channels differently.


Figure 5. The transmitted relative information IR, equation (14), as a function of the number of input states NA, Nμ, where 2 bits correspond to 2^2 = 4 input states. Results are shown for a stochastic system with XT = 1000. In panels (a), (b) 100% corresponds to IR = 1, while in (c) 100% corresponds to IR = 2. (a) The relative mutual information IR(A1, X1) for the S1 → X1 channel; the total mutual information is obtained by multiplying IR by log2(NA), the horizontal axis. IR(A1, X1) decreases with NA owing to the presence of biochemical noise, and with Nμ owing to the presence of crosstalk. (b) The relative mutual information IR(μ2, X2) for the S2 → X2 channel. The total mutual information is obtained by multiplying IR by log2(Nμ), the vertical axis. The effect of noise is relatively small and crosstalk from S1 is hardly present. (c) The relative information of the total network IR((A1, X1), (μ2, X2)) = IR(A1, X1) + IR(μ2, X2). The dot corresponds to the time traces in figure 3. All results are obtained through numerical optimization (see the supporting information, stacks.iop.org/PhysBio/11/026004/mmedia).


Firstly, we see that the fidelity of signal transmission through channel 2 is effectively independent of the number of states NA that are transmitted through channel 1, even in the presence of biochemical noise (figure 5(b)). This means that channel 2 is essentially insensitive to crosstalk from channel 1. This is because X2 time-integrates the sinusoidal VP(t) via a linear transfer function—the output X2 is thus sensitive to the mean of VP (set by S2), but not to the amplitude of VP(t) (set by S1). We also observe that even in the presence of noise, the relative information stays close to 100% when Nμ is below 8 states (3 bits). Channel 2 is thus fairly resilient to biochemical noise, which can be understood by noting that a linear transfer function (from VP to X2) allows for an optimal separation of the Nμ input states in phase space [47–49].

Channel 1, S1 → X1 (the left panel of figure 5), is more susceptible to noise (figure 5(a)) and to crosstalk from the other signal, S2. The susceptibility to noise can be seen for Nμ = 1 bit = 2 states: the relative information decreases as NA increases. This sensitivity to noise becomes more pronounced as Nμ increases, an effect that is due to the crosstalk from the other channel. A larger Nμ reduces the accessible phase space for channel 1—it reduces the volume of state space that allows for a unique mapping from S1 to X1. As a result, a small noise source is more likely to cause a reduction in IR(A1; X1). How crosstalk and noise together reduce information transmission is further elucidated in figure 4(c). Remarkably, even in the presence of noise, the maximal relative information is obtained for NA = Nμ = 4 ( = 2 bits) (figure 5(c)), showing that four input states can be transmitted for each channel simultaneously without loss of information.

2.3. Experimental observations

Here we connect our work to two biological systems. The first system is the p53 DNA damage response system. The p53 protein is a cellular signal for DNA damage. Different forms of DNA damage exist and they lead to different temporal profiles of the p53 concentration. Double-stranded breaks cause oscillations in the p53 concentration, while single-stranded damage leads to a sustained p53 response [5, 50, 51]. Compared to our simple multiplexing motif, the encoding scheme in this system is more involved. In our system two external signals activate the shared component V. In the p53 system, p53 itself is V, but interestingly, negative (indirect) autoregulation of p53 is required to obtain sustained oscillations.

Although the encoding structure is different, the main result is that the system is able to encode two different signals into different temporal profiles simultaneously: depending on the type of damage, a constant profile, an oscillatory profile, or both are present. These two signals can therefore be transmitted simultaneously owing to their difference in temporal profile. For the p53 system the input signals are binary, i.e. there either is DNA damage or there is not, although some experiments suggest that the amount of damage could also be transmitted [52]. The maximum information that can be transmitted following our simplified model is much larger than that required for two binary signals. A mathematical model, based upon experimental observations, shows that the encoding step creates a temporal profile for p53 that could be decoded by our suggested decoding module (not shown).

Another system of interest is the MAPK (or RAF–MEK–ERK) signaling cascade. The final output of this cascade is the protein ERK, which shuttles between the cytoplasm and the nucleus. ERK is regulated by many different incoming signals of which EGF, NGF and HRG are well known [53]. The temporal profile of ERK depends on the specific input that is present. NGF and HRG lead to a sustained ERK level [4], while EGF leads to a transient or even oscillatory profile of the ERK level [4, 17, 54]. In the framework of our model, ERK would be the shared component V. Experiments show that oscillations in the ERK concentration can arise due to intrinsic dynamics of the system [17]. However, these oscillations could be amplified by, or even arise because of, oscillations in the signal EGF, especially since, to our knowledge, it is unclear what the temporal behavior of EGF is under physiological conditions.

For both experimental systems, we have only described the encoding step. In both cases, two signals are encoded in a shared component V, where one signal leads to a constant response, while the other signal creates oscillations. Both p53 and ERK are transcription factors for many downstream genes [55, 56]. For the decoding of the constant signal, only a simple birth–death process driven by V would be required. Many genes are regulated in this way [25]. The decoding of the oscillatory signal requires an adaptive motif. Although adaptive motifs are common in biological processes [25], it is unclear whether downstream of either p53 or ERK an adaptive motif is present, which would complete our suggested multiplexing motif. Hence, our study should be regarded as a proof-of-principle demonstration that biochemical networks can multiplex oscillatory signals.

3. Discussion

We have presented a scheme for multiplexing two biochemical signals. The premise of the proposal is that the two signals have to be transmitted, not integrated. Indeed, the central hypothesis is that X1 should only respond to S1 and X2 only to S2. Information transmission is then maximized when the crosstalk between the different channels is minimized.

The model discussed here consists of elementary motifs, and can simultaneously transmit two signals reliably. One of these signals is constant in time, and its corresponding information is encoded in its concentration level, while the other signal is dynamic, and its information is encoded in the dynamical properties, but not in its average concentration level. The decoding of the constant signals is performed by a time-integration motif, while the decoding of the oscillatory signal requires a frequency-sensitive motif, for example an adaptive motif.

The main problem in multiplexing biochemical signals is crosstalk between the two signals. In this system the signals are encoded on the basis of their dynamical profile—S1 is oscillatory and S2 is constant in time. The decoding module for the oscillatory signal, an adaptive motif, is nonlinear. Therefore, this motif is sensitive not only to temporal properties such as the amplitude, but also to the mean of its input. This inevitably leads to crosstalk between channel 1 and channel 2, reducing information transmission.

Remarkably, the system is capable of transmitting over 3 bits of information through each channel with 100% fidelity. In the presence of noise the information transmission decreases, but even with considerable noise levels in the biologically relevant regime, more than 2 bits of information can be transmitted through each channel simultaneously; this information transmission capacity is comparable to what has been measured recently in the context of NF-κB signaling [20]. To transmit signals without errors it is preferable to send most information through channel 2 and a smaller number of states through channel 1. The reason for this is twofold: first, channel 2 is less noisy since its number of components is smaller; second, channel 1 is corrupted by crosstalk from channel 2, leading to overlaps in the state space of X1 as a function of A1 (see figure 4). Nonetheless, the two channels can reliably transmit four states each in the presence of noise. This is a considerable increase in the information transmission as compared to a system where both signals are constant in time [6], which could transmit two binary signals with absolute fidelity. This indicates that oscillatory signals could significantly enhance the information transmission capacity of biochemical systems. Importantly, while we have optimized the parameters of our model system using an evolutionary algorithm, it is conceivable that architectures other than those studied here would allow for larger information transmission. Indeed, the results presented here provide a lower bound on information transmission.

In this system we have assumed that the amplitude of the oscillatory signal is the information carrier of that signal. The same analysis could be performed for an oscillatory signal at constant amplitude but with different frequencies. Qualitatively, the results will be similar. The dependence of the gain on the frequency means that the amplitude of the output varies with the frequency of the input (see figure 2). The amplitude of the output thus characterizes the signal frequency. However, an intrinsic redundancy is present when using the frequency as the information carrier, which can be understood from the symmetry of the gain (see figure 2). The response of the system is equal for frequencies that are positioned symmetrically with respect to the resonance frequency. As a result, for any given output, there are always two possible input frequencies, and without additional information, the cell cannot resolve which of the two frequencies is present. Of course, one way to avoid this would be to use only the part of the frequency range over which the gain increases monotonically with frequency.

In this study we have assumed that the input signals are deterministic. Results are obtained from deterministic simulations, with noise added via the linear-noise approximation assuming non-oscillatory inputs. The effect of noise is a reduction of the information transmission. However, the effect of noise can always be counteracted by increasing the copy number: at the cost of producing and maintaining more proteins, similar results can be obtained [6]. The effect of oscillations on the variability of the output is small since the response times of X1 and X2 are much longer than the oscillation period. Slower responding outputs would time-average the oscillation cycles even more, reducing the variability in the response further.

Transmitting information via oscillatory signals has many advantages. Oscillatory signals minimize the prolonged exposure to high levels of the signal, which can be toxic for cells, as has been argued for calcium oscillations [57]. In systems with cooperativity [58], an oscillating signal effectively reduces the signal threshold for response activation. Pulsed signals also provide a way of controlling the relative expression of different genes [59]. Encoding of stimuli into oscillatory signals can reduce the impact of noise in the input signal and during signal propagation [60]. Frequency encoded signals can be decoded more reliably than constant signals [34].

Here we show that information can be encoded in the amplitude or frequency of oscillatory signals, which are then decoded using a nonlinear integration motif. We also discussed two biological systems that may have implemented this multiplexing strategy. The idea of using the temporal kinetics as the information carrier in a signal has been studied in a slightly different context, where the dose information is encoded in the duration of an intermediate component, which in turn is time-integrated by a downstream component [61]. Here, we show that encoding signals into the temporal dynamics of a signaling pathway allows for multiplexing, making it possible to simultaneously transmit multiple input signals through a common network with high fidelity. It is intriguing that systems with a bow-tie structure, such as calcium and NF-κB [20], tend to transmit information via oscillatory signals.

4. Materials and methods

The model is based on mean-field chemical rate equations or the linear-noise approximation [62]. For details see the supporting information (stacks.iop.org/PhysBio/11/026004/mmedia).

Acknowledgments

We thank José Alvarado for a critical reading of the manuscript. This work is part of the research programme of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organization for Scientific Research (NWO).
