
Shared input and recurrency in neural networks for metabolically efficient information transmission

Abstract

Shared input to a population of neurons induces noise correlations, which can decrease the information carried by a population activity. Inhibitory feedback in recurrent neural networks can reduce the noise correlations and thus increase the information carried by the population activity. However, the activity of inhibitory neurons is costly. Moreover, the inhibitory feedback decreases the gain of the population, so depolarizing its neurons requires stronger excitatory synaptic input, which is associated with higher ATP consumption. Given that the goal of neural populations is to transmit as much information as possible at minimal metabolic costs, it is unclear whether the increased information transmission reliability provided by inhibitory feedback compensates for the additional costs. We analyze this problem in a network of leaky integrate-and-fire neurons receiving correlated input. By maximizing mutual information with metabolic cost constraints, we show that there is an optimal strength of recurrent connections in the network, which maximizes the value of mutual information-per-cost. For higher values of input correlation, the mutual information-per-cost is higher for recurrent networks with inhibitory feedback compared to feedforward networks without any inhibitory neurons. Our results, therefore, show that the optimal synaptic strength of a recurrent network can be inferred from metabolically efficient coding arguments and that decorrelation of the input by inhibitory feedback compensates for the associated increased metabolic costs.

Author summary

Information processing in neurons is mediated by electrical activity through ionic currents. To reach homeostasis, neurons must actively work to reverse these ionic currents. This process consumes energy in the form of ATP. Typically, the more energy a neuron can use, the more information it can transmit. It is generally assumed that due to evolutionary pressures, neurons evolved to process and transmit information efficiently at high rates but also at low costs. Many studies have addressed this balance between transmitted information and metabolic costs for the activity of single neurons. However, information is often carried by the activity of a population of neurons instead of single neurons, and few studies have investigated this balance in the context of recurrent neural networks, which can be found in the cortex. In such networks, the external input from thalamocortical synapses introduces pairwise correlations between the neurons, complicating the information transmission. These correlations can be reduced by inhibitory feedback through recurrent connections between inhibitory and excitatory neurons in the network. However, such activity increases the metabolic cost of the network's activity. By analyzing the balance between decorrelation through inhibitory feedback and correlation through shared input from the thalamus, we find that both the shared input and the inhibitory feedback can help increase the information-metabolic efficiency of the system.

1 Introduction

The efficient coding hypothesis posits that neurons evolved due to evolutionary pressure to transmit information as efficiently as possible [1]. Moreover, the brain has only a limited energy budget, and neural activity is costly [2, 3]. The metabolic expense associated with neural activity should, therefore, be considered, and neural systems likely work in an information-metabolically efficient manner, balancing the trade-off between transmitted information and the cost of the neural activity [4, 5, 6, 7, 8].

The principles of information-metabolically efficient coding have been successfully applied to study the importance of the excitation-inhibition balance in neural systems. It has been shown that the mutual information between input and output per unit of cost for a single neuron is higher if the excitatory and inhibitory synaptic currents to the neuron are approximately equal, when the source of noise lies in the stochastic nature of the voltage-gated Na+ and K+ channels [9]. In a rate coding scheme, where the source of noise lies in the random arrival of pre-synaptic action potentials, the mutual information per unit of cost has been shown to be rather unaffected by the increase of pre-synaptic inhibition associated with an excitatory input [10].

However, the balance of excitation and inhibition is likely to be more important in the context of recurrent neural networks than in the context of single neurons. In recurrent neural networks, the inhibitory input to neurons associated with a stimulus [11] arises as inhibitory feedback from a population of inhibitory neurons. The inhibitory feedback prevents a self-induced synchronization of the neural activity [12] and reduces noise correlations (correlations between neurons calculated across trials of the same stimulus) induced by shared input to neurons in the population [13, 14, 15]. If noise correlations have the same sign as signal correlations (correlations between neurons calculated across different stimuli), then noise correlations are detrimental to information transmission by neural populations [16, 17, 18]. Information is likely transmitted by the activity of populations of neurons rather than single neurons [19]; therefore, when studying the effect of excitation-inhibition balance on information transmission, it is essential to consider the context of neural populations. In the case of a population of neurons tuned to the same stimulus, positive noise correlations decrease the information content in the population.

Several studies have analyzed the effect of noise correlations on information transmission properties [16, 17, 20]. However, these studies did not analyze the relationship between the noise correlations and the metabolic cost of neural activity. In our work, we consider a computational model of a small part of the sensory cortex and the noise correlations caused by shared connections from an external thalamic population. The noise correlations may then be reduced by inhibitory feedback, which, however, increases the cost of the neural activity [10]. Our point of interest is the trade-off between improved information transmission due to lower noise correlations and the increase in metabolic costs due to stronger inhibitory feedback.

2 Results

2.1 Constrained information maximization in a simple linear model

In order to gain insight into what affects the information-metabolic efficiency of a neural population, we first solve the problem for a simple linear system. The mean response of the system is given by μ(λext) = g λext, where λext is the stimulus and g is the gain of the system. We measure the trial-to-trial variability of the response with the Fano factor, defined as

FF = Var(N) / E(N), (1)

where N is a random variable representing the response n of the network to some stimulus. In this section, we assume the Fano factor to be constant, and we assume that the output is continuous and normally distributed. Therefore, the input-output relationship is described by the conditional probability density

f(n | λext) = (2π FF g λext)^(−1/2) exp( −(n − g λext)² / (2 FF g λext) ). (2)

We assume that the cost of the activity w(λext) depends linearly on the input:

w(λext) = w0 λext + W0, (3)

where w0 scales the cost with the input intensity and W0 is the cost of the resting state.

We treat the input λext as a random variable Λ with probability distribution function p(λext). We can then calculate the average metabolic cost as

Wp = ∫ w(λext) p(λext) dλext. (4)

The mutual information between the input and the output I(Λ; N) is calculated as

I(Λ; N) = ∫ p(λext) i(λext; N) dλext, (5)
i(λext; N) = ∫ f(n | λext) i(λext; n) dn, (6)
i(λext; n) = log2 [ f(n | λext) / qp(n) ], (7)
qp(n) = ∫ p(λext) f(n | λext) dλext, (8)

where f(n | λext) is the probability distribution function of N given that Λ = λext, p(λext) is the input probability distribution, i(λext; n) is the amount of information that an observation of n spikes gives us about the stimulus λext, i(λext; N) is then the average amount of information we get from the input λext, and qp(n) is the marginal output probability distribution.

The capacity-cost function C(W) is the least upper bound on the amount of mutual information (in bits) achievable given the constraint Wp ≤ W:

C(W) = max over p(λext) with Wp ≤ W of I(Λ; N). (9)

The information-metabolic efficiency E is then the maximal amount of mutual information per molecule of ATP between the input and the output:

E = max over W > 0 of C(W) / W (10)
  = max over p(λext) of I(Λ; N) / Wp. (11)

The capacity-cost function can be obtained numerically with the Blahut-Arimoto algorithm [21], and the information-metabolic efficiency can be conveniently obtained directly with the Jimbo-Kunisawa algorithm [22, 23]. However, if the Fano factor is very small, a lower bound on the capacity-cost function can be found analytically [24, 25]. In the low-noise approximation, the optimal input distribution maximizing the mutual information constrained by metabolic expenses W is given by

p(λext) = √( J(λext) / (2πe) ) exp( −λ1 − λW w(λext) ), (12)

where J(λext) is the Fisher information and λ1 and λW are the Lagrange multipliers, which can be obtained from the normalization condition

∫ p(λext) dλext = 1 (13)

and the average metabolic cost constraint (Eq 4). In the second-moment approximation [26, 27], the Fisher information is given by

J(λext) = [μ′(λext)]² / σ²(λext), (14)

where μ(λext) is the mean response to the external input λext, μ′(λext) is its derivative, and σ(λext) is the standard deviation of the spike counts at input intensity λext. The low-noise estimate of the capacity-cost function is then

C(W) ≥ (λ1 + λW W) log2 e. (15)

In the case of our simple linear system, the Fisher information (Eq 14) is

J(λext) = g / (FF λext), (16)

and the probability distribution derived from the low-noise approximation (Eq 12) is then

p(λext) ∝ λext^(−1/2) exp( −λW w0 λext ). (17)

After applying the normalization conditions (Eqs 4 and 13) and using Eq (15), we obtain the lower bound on the capacity-cost function:

C(W) ≥ (1/2) log2 [ (W − W0) / (e FF wAP) ] + W / (2(W − W0)) · log2 e, (18)

wAP = w0 / g, (19)

where wAP is the cost of increasing the output intensity by one action potential.

The gain g, cost scaling w0, and Fano factor FF cannot be considered constant for real neural populations. However, Eq (18) provides an insight into the importance of these properties, which we will study numerically for a more realistic neural system.
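As a concrete illustration, the following sketch computes points of the capacity-cost curve of this linear channel with the cost-constrained Blahut-Arimoto iteration [21]. The discretization and the values of g, FF, w0, and W0 are illustrative assumptions, not values used elsewhere in the paper; sweeping the Lagrange multiplier s traces out C(W), with s = 0 recovering the unconstrained capacity.

```python
import numpy as np

# Minimal sketch: capacity-cost curve of the linear channel above via the
# cost-constrained Blahut-Arimoto iteration. All parameter values are
# illustrative placeholders.
g, FF, w0, W0 = 1.0, 0.5, 1.0, 5.0
lam = np.linspace(0.5, 20.0, 200)        # discretized input intensities
n = np.arange(0, 61)                     # discretized output

mu, var = g * lam, FF * g * lam
f = np.exp(-(n[None, :] - mu[:, None]) ** 2 / (2 * var[:, None]))
f /= f.sum(axis=1, keepdims=True)        # f[i, j] = f(n_j | lam_i)
w = w0 * lam + W0                        # metabolic cost of each input
log_f = np.log(np.maximum(f, 1e-300))    # guard against log(0)

def blahut_arimoto(s, n_iter=3000):
    """One point of C(W) for the cost Lagrange multiplier s >= 0."""
    p = np.full(len(lam), 1.0 / len(lam))
    for _ in range(n_iter):
        q = p @ f                                    # marginal output
        d = np.sum(f * (log_f - np.log(q)), axis=1)  # D(f(.|lam) || q), nats
        p = p * np.exp(d - s * w)
        p /= p.sum()
    I = np.sum(p * np.sum(f * (log_f - np.log(p @ f)), axis=1)) / np.log(2)
    return I, np.sum(p * w)                          # bits, average cost

for s in (0.0, 0.05, 0.1, 0.2):                      # s = 0: unconstrained
    I, W = blahut_arimoto(s)
    print(f"s = {s:.2f}: C = {I:.3f} bit at W = {W:.2f}")
```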

In the following, we use the gain of the network

g(λext) = μ(λext) / λext (20)

and the Fano factor

FF(λext) = σ²(λext) / μ(λext). (21)

Next, we analyze the information-metabolic efficiency of a recurrent spiking neural network consisting of 800 excitatory and 200 inhibitory neurons. This network may represent a small area in the cortex tuned to the same external stimulus, such as approximately a sphere of 145 μm radius in the rat barrel cortex, which comprises only a small fraction of a single barrel [28, 29]. In such a case, the external input is the input from a single barreloid in the thalamus. We assume that the role of this subnetwork is to process information about the stimulus intensity. We analyze the information-metabolic efficiency in two extreme cases of the readout of the network. First, we assume that the output of the network is read out as the summed rate of all the neurons in the network; second, we assume that the brain acts as an efficient unbiased decoder with access to the rate of each neuron. In each case, we calculate the rate of each neuron as the number of fired spikes in a time window ΔT = 1 s.

2.2 Inhibitory feedback decorrelates the neural activity

In our model, 1000 external neurons randomly connect to the excitatory and inhibitory subpopulations with a connection probability Pext (Fig 1). Increasing Pext increases the mean pairwise correlation between the rates of the neurons in the network (feedforward network, Fig 1B). These correlations could be reduced by recurrent connections. Initially, we set the excitatory recurrent synaptic amplitude to aexc = 0.01 nS to create a small perturbation from the feedforward network and varied the scaling α determining the amplitude of inhibitory synapses (ainh = αaexc) from 15 to 25, which leads to inhibitory post-synaptic potentials several-fold (approximately 2× to 8×, depending on α and on the membrane potential) larger than the excitatory post-synaptic potentials, as commonly chosen in network modelling [30, 31, 32, 29]. Correlations between neurons were decreased for α ≥ 20 (Fig 1C), which was also associated with a stronger negative net current from the recurrent synapses (Fig 1D). For the network considered further in our work, we set α = 20. Simultaneously increasing the strength of both types of recurrent synapses with fixed α led to a further decrease of the correlations among the neurons (Fig 1E) while further decreasing the net current from the recurrent synapses (Fig 1F).
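The mean pairwise correlations in Fig 1B-1E are computed from trial-to-trial spike counts; a minimal sketch of this computation, using surrogate Poisson counts with a shared component in place of the simulated network output, is:

```python
import numpy as np

# Sketch: mean pairwise correlation of spike counts across trials, as in
# Fig 1. `counts` stands for an (n_trials, n_neurons) matrix of spike counts
# in the ΔT = 1 s window; surrogate Poisson data with a shared component
# replaces the actual network simulation here.
rng = np.random.default_rng(0)
n_trials, n_neurons = 2000, 100
shared = rng.poisson(3.0, size=(n_trials, 1))       # shared-input component
counts = rng.poisson(10.0, size=(n_trials, n_neurons)) + shared

C = np.corrcoef(counts.T)                           # neuron-by-neuron matrix
mean_pairwise = C[~np.eye(n_neurons, dtype=bool)].mean()
print(f"mean pairwise correlation: {mean_pairwise:.3f}")
```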

Fig 1. Inhibitory feedback decreases noise correlations.

A: Schematic illustration of the simulated neural network. Poisson neurons in the external population make random connections to neurons in the excitatory and inhibitory subpopulations. The connection probability Pext ∈ [0.01, 1] is varied to achieve different levels of shared external input to the neurons. The neurons in the inhibitory (inh.) and excitatory (exc.) subpopulations make recurrent connections (exc. to exc., exc. to inh., inh. to inh., inh. to exc.) with probability Prec = 0.2. The strength of those connections is parametrized by arec. B: Mean pairwise correlations between any two neurons in the exc. and inh. subpopulations plotted against the mean output of the network for different values of Pext in a feedforward network (arec = 0 nS). Pairwise correlations are calculated from the number of spikes each neuron fires in a time window ΔT = 1 s across many trials of the simulation. The plot is vertically separated into two parts to also illustrate the smaller differences at lower values of Pext. C: Mean pairwise correlations as in B, for different values of α (ratio of inhibitory-to-excitatory synaptic strength), arec = 0.01 nS. The black line represents the pairwise correlations in a feedforward network without any recurrent connections (arec = 0). D: Total current from recurrent synapses for different values of α, as in C. E-F: Same as in C-D, but with fixed α = 20 and different values of arec.

https://doi.org/10.1371/journal.pcbi.1011896.g001

2.3 Fano factor of single neurons vs. a population

In an inhibition-dominated network, the input needed from the external population to evoke a given average firing rate has to be higher than in the case of the feedforward network. The resulting increase in synaptic noise leads to a higher Fano factor in the LIF model (Fig 2A, 2B and 2C; see also [33]).

Fig 2. Fano factor of single neurons and of populations.

A-C: Mean Fano factor of individual neurons for different values of Pext: 0.01 (A), 0.2 (B), 1 (C). The strength of the recurrent synapses (arec) is color-coded. The mean Fano factor increases with the strength of the recurrent synapses. D-F: Same as in A-C but for the Fano factor of the population activity. The points represent the population Fano factor obtained from the simulation, and the lines are weighted 7th-degree polynomial fits, used only as a visual aid. For Pext = 0.01, the increase in the Fano factor of individual neurons (A) can have a stronger effect on the population Fano factor than the decrease in the pairwise correlations, resulting in an increase of the population Fano factor at high values of arec (D). For higher values of Pext, the pairwise correlations greatly increase the population Fano factor, which then decreases with increasing arec.

https://doi.org/10.1371/journal.pcbi.1011896.g002

If we assume that the downstream areas decode the stimulus intensity from the summed activity of the network, we need to look at the Fano factor of the summed activity, that is, the ratio of the variance of the sum to the mean of the sum across trials of duration ΔT = 1 s. In the case of the total population activity, however, the pairwise correlations between the neurons have a significant effect on the Fano factor. Denoting by Ni the random variable representing the number of spikes of the i-th neuron observed during the time window ΔT, we get for the Fano factor of the population activity:

FFpop = Var(Σi Ni) / E(Σi Ni) (22)
= [Σi Var(Ni) + Σ over i≠j of Cov(Ni, Nj)] / Σi E(Ni) (23)
= [ntot v + ntot(ntot − 1) c] / (ntot μ) (24)
= [v + (ntot − 1) c] / μ (25)
≈ FF [1 + r (ntot − 1)], (26)

where c is the mean pairwise covariance, v the mean variance of a neuron, μ the mean number of spikes in ΔT, ntot the number of neurons, and r the Pearson correlation coefficient. The last approximation holds for neurons with identical variances and pairwise covariances [16]. It provides insight into how the pairwise correlations and the Fano factors of individual neurons affect the Fano factor of the total activity. If the correlations or the number of neurons are small (rntot ≪ 1), the decorrelation by strengthening the recurrent synapses does not significantly decrease the population Fano factor. Instead, the population Fano factor may increase due to the increase of the Fano factors of individual neurons (Fig 2D, Pext = 0.01). If greater correlations are induced by the shared input to the network, the correlations have a dominating effect on the population Fano factor, which can then be greatly decreased by strengthening the recurrent synapses and thereby decreasing the pairwise correlations (Fig 2E and 2F).
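A quick numerical sanity check of Eq (26), again on surrogate correlated Poisson counts rather than simulation output:

```python
import numpy as np

# Sketch: population Fano factor (Eq 22) against the approximation
# FF(1 + r(n_tot - 1)) of Eq (26), using surrogate correlated counts.
rng = np.random.default_rng(1)
n_trials, n_tot = 5000, 200
shared = rng.poisson(2.0, size=(n_trials, 1))
counts = rng.poisson(8.0, size=(n_trials, n_tot)) + shared

pop = counts.sum(axis=1)
ff_pop = pop.var() / pop.mean()                  # Eq (22), exact

v = counts.var(axis=0).mean()                    # mean single-neuron variance
mu = counts.mean(axis=0).mean()                  # mean single-neuron count
C = np.corrcoef(counts.T)
r = C[~np.eye(n_tot, dtype=bool)].mean()         # mean pairwise correlation
ff_approx = (v / mu) * (1 + r * (n_tot - 1))     # Eq (26)
print(f"exact: {ff_pop:.2f}, approximation: {ff_approx:.2f}")
```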

2.4 Inhibitory feedback is metabolically costly

2.4.1 Stronger recurrent synapses increase the cost of the resting state.

We calculated the cost of the activity by summing the cost of action potentials from the excitatory, inhibitory, and external subpopulations, and the cost of excitatory synaptic currents in the excitatory and inhibitory subpopulations. These excitatory currents may be evoked by action potentials from the external or excitatory subpopulations, or from the background input. We did not consider the cost of synaptic currents evoked in neurons not involved in our simulation. We assume that such synaptic currents would be part of the background activity of a different area. Therefore, if we included these costs and considered multiple cortical areas, we would have included the background activity cost multiple times. We also did not include the cost of synaptic currents in the external population.

The cost of the resting state is an important factor for information-metabolic efficiency [10]. In our network, increasing the recurrence strength decreased the spontaneous activity of the neurons, due to inhibition dominating the recurrent currents. However, the simultaneous increase in the strength of the recurrent excitatory synapses increased the cost of the excitatory synaptic currents (Fig 3A, 3B and 3C), because the spontaneous action potentials from the excitatory subpopulation evoke stronger excitatory post-synaptic currents.

Fig 3. Metabolic cost of the network activity.

A-C: Cost at resting state (λext = 0). A: Cost of the excitatory synaptic currents from the background input (Eq 35) and excitatory action potentials evoked by the background input. B: Cost of the action potentials (both excitatory and inhibitory) evoked by the background input. C: Total resting cost obtained by summing A and B. D: The total cost of the network activity is plotted against the output of the network (the total post-synaptic firing rate). Filled areas represent the individual contributions of each cost component: cost of action potentials from the external population, cost of the excitatory synaptic currents, and cost of the post-synaptic (evoked) action potentials. As Pext increases, the contribution of external action potentials to the overall cost decreases. With increasing arec, the contribution of excitatory synaptic currents increases. E: The cost of increasing the mean output by one action potential (wAP, Eq 19) is significantly lower for higher Pext. However, although the difference between Pext = 0.01 and Pext = 0.2 is approximately 10-fold, the difference between Pext = 0.2 and Pext = 1 is only approximately 2-fold, as the cost of the external population starts to contribute less to the overall cost.

https://doi.org/10.1371/journal.pcbi.1011896.g003

2.4.2 Inhibitory feedback decreases gain.

Because the net current from the recurrent synapses is hyperpolarizing, stronger recurrent synapses require a stronger excitatory current to bring a neuron to a given post-synaptic firing rate, and hence higher pre-synaptic firing rates. Therefore, the gain g of the network decreases, and with increasing arec the cost of synaptic currents and the cost of external activity increase (Fig 3D and 3E).

2.5 Shared input decreases gain

The number of synapses from the external population to each neuron in the excitatory and inhibitory subpopulations follows a binomial distribution:

P(k) = (next choose k) Pext^k (1 − Pext)^(next − k), (27)

with the mean number of synapses given by nextPext and variance nextPext(1 − Pext). We scaled the firing rate of the individual neurons in the external population as λext_single = λext / (next Pext) (Eq 43). Therefore, the mean external input to a single neuron was always λext, independently of Pext, and the variance of the input across neurons was λext²(1 − Pext) / (next Pext).

Given the convexity of the single-neuron tuning curve in the analyzed input range (S1 Fig), it follows that of two inputs with an identical mean λext but different variances across neurons, the input with the higher variance will lead to a higher average firing rate. Assuming that the input across neurons follows a normal distribution with mean λext and variance σ², and that the single-neuron tuning curve can be approximated by an exponential function of the form c1 exp(c2 x), where x is the input intensity to the single neuron, we obtain the mean firing rate:

⟨ν⟩ = c1 exp( c2 λext + c2² σ² / 2 ), (28)

which grows with the standard deviation of the input.
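A short Monte Carlo check of Eq (28), with illustrative values of c1, c2, λext, and σ (none of which are fitted to the model):

```python
import numpy as np

# Check of Eq (28): the mean of an exponential tuning curve c1*exp(c2*x)
# under Gaussian input heterogeneity equals c1*exp(c2*lam + c2**2*sigma**2/2).
rng = np.random.default_rng(2)
c1, c2, lam_ext, sigma = 0.5, 0.08, 20.0, 5.0
x = rng.normal(lam_ext, sigma, size=1_000_000)   # input across neurons
print("Monte Carlo:", (c1 * np.exp(c2 * x)).mean())
print("Eq (28):   ", c1 * np.exp(c2 * lam_ext + c2 ** 2 * sigma ** 2 / 2))
```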

Accordingly, we observed that networks with higher Pext needed a higher λext to produce the same mean post-synaptic firing rate as networks with lower Pext (Fig 4A, 4B and 4C), which translates to a lower gain at higher Pext (Fig 4D, 4E and 4F). Moreover, the mean Fano factor of individual neurons increased with increasing Pext (Fig 4G, 4H and 4I). This effect could be mostly removed by fixing the number of connections from the external population to each neuron in the excitatory and inhibitory populations to nextPext (S2 Fig).

Fig 4. Shared input decreases the gain and increases the individual Fano factor.

A-C: The input intensity λext needed to evoke a given firing rate (x-axis) with different connection probabilities Pext relative to the input intensity for Pext = 0.01. A: arec = 0 nS, B: arec = 0.2 nS, C: arec = 1 nS. For higher Pext, higher values of λext are needed to achieve the same post-synaptic firing rates as with lower values of Pext. This effect becomes more pronounced with stronger recurrent synapses (E-F). D-F: Gain of the network (Eq 20). A higher Pext leads to a lower gain of the population activity. G-I: Higher values of Pext also increase the Fano factor of individual neurons.

https://doi.org/10.1371/journal.pcbi.1011896.g004

2.6 Optimal regimes for metabolically efficient information transmission

We illustrated that the recurrence strength 1) increases the metabolic cost of the neural activity and 2) decreases the population Fano factor by decreasing the correlations between the neurons. Conversely, an increased probability of a synapse from the external population (Pext) decreases the cost of the neural activity but increases the noise correlations. The increased noise correlations then result in a higher population Fano factor (Eq 26). To find the balance between the cost of the network activity (Eq 4) and the mutual information between the input and the output (Eq 5), we calculated the information-metabolic efficiency, i.e., the maximal ratio of the mutual information to the cost of the network activity (Eq 10).

For low values of Pext (≤ 0.1), increasing the strength of the recurrent input did not lead to an increase in the information-metabolic efficiency. For higher values of Pext, the information-metabolic efficiency was maximized for arec between 0.1 nS and 0.5 nS (Fig 5A and 5B), meaning that the strength of the recurrent excitatory synapses was 2× to 5× lower than the strength of the synapses from the external population.

Fig 5. Information transmission with cost constraints.

A: Information-metabolic efficiency E (Eq 10) for different values of recurrence strength arec. Pext is color-coded. B: Contour plot of the information-metabolic efficiency. Contours are at 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, and 2.25 bits per 10^12 ATP. C-H: Contour plots showing the capacity-cost function C(W) (Eq 9) with dependence on the recurrence strength arec for different values of Pext. The contours show the maximal capacities constrained at different values of W (see Table 1 for the costs and capacity values at the contours). The heatmaps in B-H were calculated using piece-wise cubic 2D interpolation (SciPy interpolator CloughTocher2DInterpolator [34]) from the grid calculated with Pext values 0.01, 0.02, 0.03, 0.05, 0.1, 0.2, 0.5, 0.8, 1 and arec values 0, 0.01, 0.02, 0.03, 0.05, 0.1, 0.2, 0.3, 0.5, and 1 nS.

https://doi.org/10.1371/journal.pcbi.1011896.g005

Moreover, varying Pext had a significant effect on the information-metabolic efficiency across all values of arec. Namely, low values of Pext resulted in lower values of information-metabolic efficiency across all values of arec, showing that shared input from the external population is beneficial for metabolically efficient information transmission. Overall, the highest values of information-metabolic efficiency (E ≥ 2 bits per 10^12 ATP) were reached for arec between 0.05 nS and 0.5 nS and Pext between 0.2 and 1 (Fig 5B).

We analyzed the effect of the resting cost (Fig 3A, 3B and 3C) by setting the resting cost in all cases equal to W0, the resting cost of the feedforward network. This did not have a significant effect on the information-metabolic efficiencies (S3 Fig).

Neural circuits might not necessarily maximize the ratio of information to cost. Instead, neurons and neural circuits could modulate their properties to maximize information transmission with the available energy resources [5]. For example, neurons in the mouse visual cortex have been shown to decrease the conductance of their synaptic channels after food restriction [35].

Accordingly, we studied how the optimal strength of recurrent synapses changes with the available resources. We calculated the optimal value of arec for different values of available resources (3, 4, 5, 6, 7, 8, 10, 12, 15, 20, 30, and 40 × 10^12 ATP). In Fig 5C, 5D, 5E, 5F, 5G and 5H, we plotted C(W; arec), the capacity-cost function (Eq 9) extended by one dimension with arec. For each cost W, the optimal arec is highlighted, and the corresponding contour of C(W) is shown (see Table 1 for the values of C(W)). With decreasing W, the optimal value of arec typically decreases. This effect is more robust with high values of Pext, because the contours are more curved at the optimum.

We calculated the extended capacity-cost functions using input distributions obtained from the low-noise approximation. To verify that the low noise approximation applies in the case of the studied system, we compared these results to the information-metabolic efficiency obtained with the Jimbo-Kunisawa algorithm. The relative difference did not exceed 10% and did not have a significant impact on the information-metabolic efficiency heatmap structure (S4 Fig).

2.7 Limits of efficient information transmission by the population activity

So far, we have assumed that the information about the stimulus is transmitted by the total activity of the network. Such an analysis provides us with important insights; however, such simplistic decoding might not necessarily occur in the brain. To explore the limits of decoding the input intensity from the population activity, we assume that the brain can perform optimal unbiased decoding of the stimulus, i.e., for each stimulus λext, the estimate λ̂ext of the input satisfies

E[λ̂ext] = λext, (29)
Var(λ̂ext) = 1 / Jpop(λext), (30)

where the second equation corresponds to an estimator which saturates the Cramér-Rao bound, and Jpop(λext) is the Fisher information about the stimulus from the population activity. If we assume that λ̂ext is distributed normally, we may then write the conditional probability distribution function as:

f(λ̂ext | λext) = √( Jpop(λext) / (2π) ) exp( −Jpop(λext) (λ̂ext − λext)² / 2 ), (31)

obtaining a noisy identity channel with the noise given by the Cramér-Rao bound.

To reduce the effect of sampling bias, we estimated Jpop from the first 500 principal components of the output and employed a bias correction (see Section 4.4 for details). Increasing the strength of recurrent connections (arec) increased the information-metabolic efficiency of the network (Fig 6). The increase was more pronounced with higher values of Pext and was overall the highest for Pext = 0.8 and Pext = 1. In this sense, the results remain qualitatively very similar to the information-metabolic efficiency calculated from the summed activity (Fig 5). Interestingly, however, our results indicate that when using information from the entire population, not only the summed activity, the noise correlations introduced by the shared input are less detrimental, and Pext = 1 reaches the highest or close-to-highest values of the information-metabolic efficiency.

Fig 6. Information-metabolic efficiency with multi-dimensional output.

A: Information-metabolic efficiency E (Eq 10) for different values of recurrence strength arec. Pext is color-coded. B: Contour plot of the information-metabolic efficiency. Contours are at 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, and 2.75 bits per 10^12 ATP.

https://doi.org/10.1371/journal.pcbi.1011896.g006

3 Discussion

Information in the brain is likely transmitted by neuronal populations instead of single neurons [19]. One of the benefits is that by considering the signal from many neurons, it is possible to decrease the noise inherent to rate-coding spiking neurons and thus increase the information carried by the system. The information increase is, however, influenced by the correlations between the neurons and by their structure. In this work, we investigated a situation where a population of neurons tuned to the same stimulus transmits information about the stimulus intensity. In this case, positive noise correlations decrease the information carried by the population.

We parameterized the shared input with the probability of connection from the external population Pext. Higher Pext means that the firing rate of neurons in the external population can be lower to maintain the same mean input to the information-transmitting population. This way, the shared input, while increasing the noise correlations, decreases the metabolic cost of the activity. In the studied system, we could mitigate the noise correlations by strengthening the recurrent connections and thus increasing the inhibitory feedback. However, exciting a population with inhibitory feedback requires stronger input than exciting a population without inhibitory feedback, and therefore strengthening the recurrent connections increased the cost of the activity.

In our work, we studied the balance between increasing the transmitted information by decreasing the noise correlations and the associated increase in the cost of the activity. We showed that in a linear system, if the Fano factor of the population activity and the ratio g/w0 (where g is the gain of the system, or slope of the stimulus-response curve, and w0 is the slope of the stimulus-cost curve) remain constant, the cost-constrained capacity will remain constant as well.

We proceeded to calculate the stimulus-response relationship and the metabolic cost for a more biologically realistic neural system. In the studied system, the population Fano factor could not be considered constant. Instead, correlations between neurons increased with the mean output of the system, and the mean Fano factor of single neurons was also dependent on the mean output of the system, leading to a complex dependence of the population Fano factor on the mean output of the system (Fig 2D, 2E and 2F). We found that despite increasing the noise correlations, the shared input helps with metabolically efficient information transmission. This was further accentuated when the noise correlations were decreased by increased inhibitory feedback. Increasing the recurrence strength could lead to a 10% to 15% increase in the information-metabolic efficiency. The magnitude of the increase was dependent on the cost of the action potentials. If the cost of synaptic currents were negligible compared to the cost of the action potentials, increasing the inhibitory feedback would be even more beneficial, since the associated increase in the cost of the synaptic currents could also be neglected.

We illustrated the effect of inhibition-dominated recurrence and shared input on the metabolic cost of neural activity. An increased strength of recurrence increased the cost of excitatory synaptic currents due to the stronger excitatory synapses and stronger input from the external population, as well as the cost of the activity of the external population. A higher connection probability from the external population (higher shared input probability) led to a decrease in the external population activity cost, as the overall activity of the external population could be lower to result in the same mean input to the post-synaptic neurons. On the other hand, due to less variable input to single neurons with high values of Pext, a higher mean input was required across all neurons to evoke the same mean post-synaptic activity.

In our model of the cortical area, we considered two neural subpopulations: excitatory and inhibitory. Each subpopulation was homogeneous, but we set the threshold of the inhibitory neurons lower to mimic the behavior of fast-spiking inhibitory neurons. The difference between excitatory, regular spiking neurons and inhibitory, fast-spiking neurons is often described not only by differences in the threshold but also in differences in the adaptation properties [36, 37, 29]. In our case, we did not consider adaptation for simplicity because estimating the information capacity of a neural system with adaptation is computationally considerably more difficult [10].

In our work, we assumed that the neural circuit maximizes the mutual information between the input and the output neurons while minimizing the cost of the neural activity. Such an approach does not provide any information about how the information is encoded. It only calculates the limit on the amount of information that can be reliably transmitted. Yet, the principles of mutual information maximization have proven very useful in explaining the properties of neural systems. For example, the tuning curves of the blowfly's contrast-sensitive neurons are adapted to the distribution of contrasts encountered in the natural environment [38]; the power spectrum of the odor distribution in pheromone plumes follows the power spectrum predicted for an optimal input to olfactory receptor neurons [39]; and distributions of post-synaptic firing rates of single neurons during in-vivo recordings follow distributions predicted from cost-constrained mutual information maximization [40, 41, 42].

By assuming a particular coding scheme, it is possible to place further constraints on the complexity of information encoding, with the assumption that complex codes are not an efficient way to transmit information [43, 44]. We did not attempt this in our study. However, it would be interesting to study whether inhibitory feedback decreases or increases the encoding complexity.

We have shown that a cortical area can adapt to the amount of available energy resources. When resources are scarce, information transmission can be adapted by weakening the synaptic weights, thus expending fewer resources on reducing the noise correlations. Such a mechanism is implemented in the mouse visual cortex [35]. Padamsey et al. [35] showed that in food-restricted mice, the orientation tuning curves of individual orientation-sensitive neurons in the visual cortex become broader due to weakened synaptic conductances. In our work, we studied the properties of a neuronal population instead of single neurons. In particular, we considered a population encoding the stimulus intensity instead of the stimulus identity, such as orientation. An extension of this model to a situation in which stimulus identity is encoded and shared input is introduced due to the overlap of receptive fields would be interesting.

Neurons recorded in-vivo typically exhibit a Fano factor close to 1.0 that is constant over a broad range of post-synaptic firing rates [45, 46, 19]. In the optimal regimes with stronger recurrent synapses, the Fano factor decreased only very slowly over the studied range of post-synaptic firing rates (up to 30 Hz in a single neuron). With weaker synaptic strengths, the Fano factor of a single neuron decreased rapidly with an increasing post-synaptic firing rate. Our model predicts that fewer available resources would lead to weaker recurrent synapses. This hypothesis is straightforward to test by calculating the Fano factors during stimulus presentation (both population and single-neuron) in food-restricted animals and comparing them to controls. We expect that the population Fano factor will increase (alternatively, the noise correlations will increase) with food scarcity, and single-neuron Fano factors will decrease.

4 Methods

4.1 Network model

We modeled a network consisting of three subpopulations: external (ext), excitatory (exc), and inhibitory (inh). The external subpopulation consisted of Poisson neurons, defined by their firing intensity (same for all the neurons in the subpopulation). Neurons in the excitatory and inhibitory subpopulations were modeled as leaky integrate-and-fire (LIF) neurons with conductance-based synapses:

Cm dVi/dt = gL (EL − Vi) + Irec,i + Iext,i + Ibcg,i, (32)
Irec,i = grec,exc,i (Eexc − Vi) + grec,inh,i (Einh − Vi), (33)
Iext,i = gext,i (Eexc − Vi), (34)
Ibcg,i = gbcg,exc,i (Eexc − Vi) + gbcg,inh,i (Einh − Vi), (35)
dgrec,exc,i/dt = −grec,exc,i/τexc + Σj Wexc_ij Σk δ(t − t^exc_j,k), (36)
dgrec,inh,i/dt = −grec,inh,i/τinh + Σj Winh_ij Σk δ(t − t^inh_j,k), (37)
dgext,i/dt = −gext,i/τexc + Σj Wext_ij Σk δ(t − t^ext_j,k), (38)
dgbcg,exc,i/dt = (μbcg,exc − gbcg,exc,i)/τexc + √(2σ²bcg,exc/τexc) ξi(t), (39)
dgbcg,inh,i/dt = (μbcg,inh − gbcg,inh,i)/τinh + √(2σ²bcg,inh/τinh) ξi(t). (40)

Irec is the synaptic current arising from the recurrent connections (exc. to exc., exc. to inh., inh. to exc., inh. to inh.). Iext is the excitatory current from external neurons. Ibcg is the current from synapses from neighboring cortical areas. t^ext_j,k, t^exc_j,k, t^inh_j,k represent the k-th spike time of the j-th external, excitatory, and inhibitory neuron, respectively, and ξi(t) is Gaussian white noise. The matrices Wext, Wexc, Winh contain the synaptic connection strengths: W^X_ij = aX (X ∈ {ext, exc, inh}) if the j-th neuron connects to the i-th neuron and W^X_ij = 0 otherwise. The background (bcg) input from neighboring cortical areas is modeled as an Ornstein-Uhlenbeck process with means μbcg,exc and μbcg,inh and standard deviations of the limiting distributions σbcg,exc and σbcg,inh [47, 48]. We set the values of the background activity to match the moments of an exponential Poisson shot noise with rates λbcg,exc = 0.5 kHz and λbcg,inh = 0.125 kHz [49]:

μbcg,X = λbcg,X aX τX, (41)
σ²bcg,X = λbcg,X aX² τX / 2, (42)

where X represents the excitatory or inhibitory background activity, leading to a ratio of inhibitory to excitatory background conductance as observed in-vivo [48] and a spontaneous firing rate of about 0.5 Hz to 1 Hz.

When the membrane potential V crosses the firing threshold (θexc or θinh), a spike is fired and the membrane potential is reset to EL.

The network consisted of next = 1000 neurons in the external population, nexc = 800 neurons in the excitatory population, and ninh = 200 neurons in the inhibitory population. The connections were set randomly, with the connection probability for the recurrent connections (exc. to exc., exc. to inh., inh. to inh., inh. to exc.) set to Prec = 0.2, while the connection probability from the external population (ext. to exc. and ext. to inh., Pext) was varied from 0.01 to 1 (Fig 1A). We created the connection matrices WX by generating a matrix of uniformly distributed random numbers RX from the interval [0, 1) and set W^X_ij = aX if R^X_ij < Prec (for X ∈ {exc, inh}) or if R^ext_ij < Pext (for X = ext), and W^X_ij = 0 otherwise. The random matrix Rext was the same for all values of Pext. In simulations where we controlled for the effects caused by a random number of connections from the external population, we fixed the number of connections by setting only k = nextPext elements in each row of Wext non-zero, in the locations of the k largest elements of the i-th row of Rext.
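A sketch of this construction in NumPy follows; the synaptic strength aext below is a placeholder value:

```python
import numpy as np

# Sketch of the connectivity construction: threshold a fixed uniform random
# matrix R_ext, so that different P_ext values yield consistent connectivity,
# and optionally fix the in-degree to exactly k = n_ext * P_ext synapses.
rng = np.random.default_rng(3)
n_ext, n_post, a_ext = 1000, 800, 0.5            # a_ext (nS): placeholder
R_ext = rng.uniform(size=(n_post, n_ext))        # same R_ext for all P_ext

def random_in_degree(P_ext):
    """W_ij = a_ext where R_ij < P_ext, 0 otherwise."""
    return np.where(R_ext < P_ext, a_ext, 0.0)

def fixed_in_degree(P_ext):
    """Exactly k synapses per row, at the k largest elements of R_ext."""
    k = int(n_ext * P_ext)
    W = np.zeros_like(R_ext)
    idx = np.argsort(R_ext, axis=1)[:, -k:]
    np.put_along_axis(W, idx, a_ext, axis=1)
    return W

W = random_in_degree(0.2)
print("mean in-degree:", (W > 0).sum(axis=1).mean())  # approx n_ext * P_ext
```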

The simulations were carried out using the Brian 2 package [50] in Python with a 0.1 ms time step. The parameters used are given in Table 2.
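For orientation, a minimal Brian 2 sketch of the network skeleton is shown below. The equations and all parameter values are placeholders rather than the values of Table 2, a single firing threshold is used instead of the separate θexc and θinh, and the Ornstein-Uhlenbeck background input is omitted for brevity:

```python
from brian2 import (NeuronGroup, PoissonGroup, Synapses, SpikeMonitor,
                    defaultclock, run, mV, ms, pF, nS, Hz)

defaultclock.dt = 0.1*ms

# Placeholder parameters (not the values of Table 2)
C_m, g_L = 150*pF, 10*nS
E_L, E_e, E_i = -70*mV, 0*mV, -80*mV
tau_e, tau_i = 3*ms, 10*ms
theta = -50*mV
a_ext, a_rec, alpha = 0.5*nS, 0.2*nS, 20

eqs = '''
dv/dt = (g_L*(E_L - v) + g_e*(E_e - v) + g_i*(E_i - v)) / C_m : volt
dg_e/dt = -g_e / tau_e : siemens
dg_i/dt = -g_i / tau_i : siemens
'''
net = NeuronGroup(1000, eqs, threshold='v > theta', reset='v = E_L',
                  method='euler')
net.v = E_L
exc, inh = net[:800], net[800:]
ext = PoissonGroup(1000, rates=15*Hz)            # external Poisson drive

synapses = [Synapses(ext, net, on_pre='g_e += a_ext'),        # external
            Synapses(exc, net, on_pre='g_e += a_rec'),        # recurrent exc.
            Synapses(inh, net, on_pre='g_i += alpha*a_rec')]  # recurrent inh.
for s, p in zip(synapses, (0.2, 0.2, 0.2)):      # P_ext and P_rec
    s.connect(p=p)

mon = SpikeMonitor(net)
run(1000*ms)                                     # one ΔT = 1 s trial
print('output spikes in ΔT:', mon.num_spikes)
```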

4.2 Obtaining the input-output relationship of the network

We considered the total number of action potentials n fired by the excitatory and inhibitory subpopulations in a time window ΔT = 1 s as the output of the network. We modeled the stimulus as the input from the thalamic neurons, parametrized by the mean input rate λext to a single neuron:

λext_single = λext / (next Pext), (43)

where λext_single is the firing rate of a single neuron in the external population and the factor 1/(next Pext) is a scaling keeping the mean input to a single network neuron equal to λext regardless of Pext. For each set of parameters (arec and Pext pair), we determined the input λext_max for which the output reached 30 kHz. In order to obtain the input-output relationship, we discretized the input space into equidistant stimulus intensities λext,i = (i/30) λext_max, where i = 0, …, 30. With a fixed network connectivity, we simulated the network 10800 times for each λext,i.

We discretized the input space to 1000 equidistant stimulus intensities and estimated the mean output μ(λext) and variance σ²(λext) for each intensity by linear interpolation from the simulated data. We then estimated the input-output relationship, defined by the conditional probability distribution f(n | λext), as a discretized normal distribution for each λext with the corresponding mean and variance:

f(n | λext) ∝ exp( −(n − μ(λext))² / (2σ²(λext)) ), (44)
Σn f(n | λext) = 1. (45)
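In code, Eqs (44) and (45) amount to linear interpolation followed by row-wise normalization; the sketch below substitutes a placeholder tuning curve and variance for the simulation estimates:

```python
import numpy as np

# Sketch of Eqs (44)-(45): interpolate the simulated mean and variance to a
# fine input grid and build a discretized-normal channel f(n | lam_ext).
# lam_sim, mu_sim, var_sim stand in for the simulation estimates.
lam_sim = np.linspace(0.0, 1.0, 31)
mu_sim = 30000 * lam_sim ** 1.5            # placeholder tuning curve
var_sim = 2000 + 50000 * lam_sim           # placeholder variance

lam = np.linspace(0.0, 1.0, 1000)          # 1000 equidistant intensities
mu = np.interp(lam, lam_sim, mu_sim)
var = np.interp(lam, lam_sim, var_sim)

n = np.arange(0, 40001, 50)                # discretized output counts
f = np.exp(-(n[None, :] - mu[:, None]) ** 2 / (2 * var[:, None]))
f /= f.sum(axis=1, keepdims=True)          # Sum_n f(n | lam_ext) = 1
```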

4.3 Metabolic cost of neural activity

In our calculations, we focus on the energy in the form of ATP molecules required to pump out Na+ ions. We take into account the Na+ influx due to excitatory post-synaptic currents, Na+ influx during action potentials, and Na+ influx to maintain the resting potential. To this end, we follow the calculations in [2] and [3], which we modify for our neuronal model.

We assume the standard membrane capacitance per area cm = 1 μF/cm² and the cell diameter D = 69 μm, giving the total capacitance Cm = πD²cm ≈ 150 pF. Therefore, to depolarize a neuron by ΔV = 100 mV, the minimum charge influx is ΔVCm = 1.5 × 10^−11 C, and the minimum number of Na+ ions is ΔVCm/e ≈ 9.4 × 10^7, where e ≐ 1.6 × 10^−19 C is the elementary charge. The minimal number of Na+ ions is then quadrupled to get a more realistic estimate of the Na+ influx due to the simultaneous opening of the K+ channels [2]. The Na+ influx must then be pumped out by the Na+/K+-ATPase, which requires one ATP molecule per 3 Na+ ions. The cost of a single action potential can then be estimated as 4 × 9.4 × 10^7 / 3 ≈ 1.25 × 10^8 ATP molecules. However, about 75% of the metabolic costs associated with an action potential are expected to come from the propagation of the action potential through the neuron's axon [51, 2]. Therefore, we estimate the total cost as 5.0 × 10^8 ATP.
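The arithmetic of this estimate, written out explicitly:

```python
import math

# Worked arithmetic of the action-potential cost estimate above.
c_m = 1e-6                      # F/cm^2, membrane capacitance per area
D = 69e-4                       # cm, cell diameter (69 um)
C_m = math.pi * D ** 2 * c_m    # total capacitance, approx 150 pF
dV = 0.1                        # V, depolarization during an action potential
e = 1.6e-19                     # C, elementary charge
n_Na_min = dV * C_m / e         # minimal Na+ influx, approx 9.4e7 ions
n_Na = 4 * n_Na_min             # x4 for simultaneous K+ channel opening
atp_local = n_Na / 3            # 1 ATP per 3 Na+ ions, approx 1.25e8 ATP
atp_total = atp_local / 0.25    # local cost is ~25% of the total AP cost
print(f"C_m = {C_m * 1e12:.0f} pF, total cost = {atp_total:.2e} ATP")
```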

Next, we assume that the excitatory synaptic current is mediated by the opening of Na+ and K+ channels with reversal potentials ENa = 90 mV and EK = −105 mV. For the excitatory synaptic current, the following must hold:

INa + IK = Iexc, (46)
gNa ENa + gK EK = 0, (47)

where gNa and gK are the Na+ and K+ conductances of the synaptic channels, and the second condition ensures that the net synaptic current reverses at the excitatory reversal potential (0 mV).

Therefore:

INa = Iexc EK (ENa − V) / ( V (ENa − EK) ). (48)

The sodium entering with the sodium current INa must be pumped out by the Na+/K+-ATPase, and we therefore calculate the cost of the synaptic current as Wsyn = ĪNa ΔT / (3e), where ĪNa is the average sodium current and ΔT is the time interval over which we are measuring the cost.

Each input to the network (parametrized by λext) is then associated with a cost, which we express as

W(λext) = ΔT [ WAP ( next λext_single + nexc μexc + ninh μinh ) + ( nexc Īexc + ninh Īinh ) / (3e) ], (49)

where WAP = 5.0 × 10^8 ATP is the cost of a single action potential, μexc = μexc(λext) and μinh = μinh(λext) are the mean firing rates of a single excitatory and inhibitory neuron (given the input λext), and Īexc, Īinh are the average excitatory synaptic currents in a single excitatory and inhibitory neuron.

4.4 Fisher information with multidimensional output

When we consider that the output of the network is either the full vector of firing rates or its low-dimensional projection, we can calculate the Fisher information as

Jpop(λext) = f′(λext)ᵀ Σ⁻¹ f′(λext) + (1/2) Tr[ Σ⁻¹ Σ′ Σ⁻¹ Σ′ ], (50)

where f(λext) is the mean of the multidimensional response vector, f′(λext) and Σ′ are derivatives with respect to λext, Σ(λext) is the covariance matrix of the response components at input λext (the dependence of Σ on λext is omitted for legibility), and Tr stands for the trace operator. The first term in the equation is analogous to the Fisher information in the one-dimensional case (Eq 14), while the second term indicates how much information we gain about the stimulus from changes in the covariance matrix. In our case, the second term was always very small compared to the first term.
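A sketch of Eq (50) with finite-difference derivatives, using surrogate response statistics in place of the estimates from the simulated population:

```python
import numpy as np

# Sketch of Eq (50): Fisher information of a multidimensional (Gaussian)
# response, with derivatives estimated by finite differences between two
# neighboring stimulus intensities. Surrogate statistics replace the
# estimates from the simulated population responses.
rng = np.random.default_rng(4)
d, dlam = 500, 0.01                           # components, stimulus step
mu_a = rng.uniform(0.0, 30.0, d)              # mean response at lam
mu_b = mu_a + rng.uniform(0.0, 0.5, d) * dlam # mean response at lam + dlam
A = rng.normal(size=(d, d)) / np.sqrt(d)
Sig_a = A @ A.T + np.eye(d)                   # covariance at lam
Sig_b = 1.001 * Sig_a                         # covariance at lam + dlam

df = (mu_b - mu_a) / dlam                     # f'(lam_ext)
dSig = (Sig_b - Sig_a) / dlam                 # Sigma'(lam_ext)
Sinv = np.linalg.inv(Sig_a)
J_mean = df @ Sinv @ df                       # first term of Eq (50)
J_cov = 0.5 * np.trace(Sinv @ dSig @ Sinv @ dSig)  # second term
print(f"J = {J_mean:.3g} + {J_cov:.3g}")
```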

We performed dimensionality reduction of the output across all stimuli by principal component analysis and used the first 500 principal components. We used 500 because the increase in information-metabolic efficiency for higher numbers of components is small, while the sampling bias is still relatively small (S5 Fig). To deal with the remaining sampling bias, we calculated the information-metabolic efficiency with the Jimbo-Kunisawa algorithm for different numbers of trials and performed the quadratic extrapolation method to estimate the unbiased information-metabolic efficiency [52, 53]. Overall, the results remain qualitatively very similar to the information-metabolic efficiency calculated from the summed activity. However, we found that the increase in information-metabolic efficiency from using the high-dimensional output is largest for higher values of arec and Pext.

4.4.1 Correcting the sampling bias.

In the case of a high-dimensional output, an insufficient number of trials may lead to spurious correlations in the data which are in fact not there, subsequently inflating the calculated mutual information [54, 52, 55, 56, 53]. To decrease the sampling bias, we first performed principal component analysis to reduce the dimensionality of the output and then employed a quadratic extrapolation method to estimate the unbiased value of the information-metabolic efficiency. We used the Jimbo-Kunisawa algorithm to calculate the information-metabolic efficiency with 10800, 5400, and 2700 trials, obtaining the estimates E10800, E5400, and E2700 of E (Eq 10). We then assumed that the estimates follow the following dependence on the number of trials k [52]:

Ek = E0 + a/k + b/k², (51)

where a and b are free parameters. By solving the resulting linear system, we obtained the estimate of the unbiased information-metabolic efficiency E0 (S5 Fig). We found that with 500 principal components the bias is still relatively low, and further increasing the number of components leads only to a minor increase in the information-metabolic efficiency. Therefore, we used the first 500 components to obtain the results in Fig 6.
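The extrapolation itself reduces to solving a 3×3 linear system; in the sketch below, the three biased estimates are placeholder values, not results from the paper:

```python
import numpy as np

# Sketch of the quadratic extrapolation (Eq 51): fit E_k = E_0 + a/k + b/k^2
# to the three trial counts and read off the unbiased estimate E_0.
ks = np.array([10800.0, 5400.0, 2700.0])
E_k = np.array([2.10, 2.16, 2.29])             # biased estimates (example)
A = np.vander(1.0 / ks, 3, increasing=True)    # columns: 1, 1/k, 1/k^2
E_0, a, b = np.linalg.solve(A, E_k)
print(f"extrapolated E_0 = {E_0:.3f}")
```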

Supporting information

S1 Fig. Input-output relationship of single neurons.

To exclude network effects, we plotted the tuning curves for the feedforward network separately for the excitatory (blue) and inhibitory (yellow) neurons. The thick line represents the median response across the neurons, which shows that the tuning curves are convex in the studied range. The shaded area shows the spread of the tuning curves across neurons (2.5th to 97.5th percentile). With low values of Pext, the tuning curves vary significantly across neurons and are skewed toward higher firing rates.

https://doi.org/10.1371/journal.pcbi.1011896.s001


S2 Fig. Fixing the number of external connections to each neuron.

Same as Fig 4, but with exactly nextPext external neurons connected to each excitatory and inhibitory neuron. This removed a large part of the dependence on Pext seen in Fig 4.

https://doi.org/10.1371/journal.pcbi.1011896.s002


S3 Fig. Effect of equalizing the resting cost on the information-metabolic efficiency.

We observed that the cost of the resting state was different for different recurrence strengths arec (Fig 3A–3C). This could potentially explain the higher information-metabolic efficiency E (Eq 10) for intermediate values of arec and its decrease for high values of arec. To quantify the effect of the resting cost, we set the resting cost in each case to the resting cost of the feedforward network W0(arec = 0). The differences in the cost of the resting state did not have a qualitative effect on the conclusions. A: The same contour plot as in Fig 5B. B: Contour plot with equalized resting costs (contours as in Fig 5B: 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, and 2.25 bits/s). C: Heatmap of the relative differences.

https://doi.org/10.1371/journal.pcbi.1011896.s003


S4 Fig. Accuracy of information-metabolic efficiency approximation.

To calculate the capacity-cost functions, we calculated the mutual information using Eq (5) with the input probability distribution calculated from Eqs (12) and (14). Here we compare the information-metabolic efficiencies calculated with the approximation and the Jimbo-Kunisawa algorithm. A: The same contour plot as in Fig 5B with information-metabolic efficiencies calculated with the Jimbo-Kunisawa algorithm. B: Information-metabolic efficiencies calculated with the Fisher-information-based input distribution. C: Heatmap of the relative differences. Note that the approximation can only reach values lower than the actual information-metabolic efficiency.

https://doi.org/10.1371/journal.pcbi.1011896.s004


S5 Fig. Sampling bias and extrapolation.

The information-metabolic efficiency calculated by the Jimbo-Kunisawa algorithm is plotted for different numbers of principal components used. We calculated the information-metabolic efficiency from different numbers of trials. At high numbers of components, lower numbers of trials lead to significantly higher estimates of the information-metabolic efficiency. This is the effect of the sampling bias. We attempted to remove the bias by using the quadratic extrapolation method. For 500 principal components, the bias is still relatively low, and increasing the number of components brings little benefit in terms of information-metabolic efficiency.

https://doi.org/10.1371/journal.pcbi.1011896.s005


References

1. Barlow HB. Possible Principles Underlying the Transformations of Sensory Messages. In: Sensory Communication. The MIT Press; 1961. p. 217–234.
2. Attwell D, Laughlin SB. An Energy Budget for Signaling in the Grey Matter of the Brain. J Cereb Blood Flow Metab. 2001;21(10):1133–1145. pmid:11598490
3. Harris JJ, Jolivet R, Attwell D. Synaptic energy use and supply. Neuron. 2012;75(5):762–777. pmid:22958818
4. Levy WB, Baxter RA. Energy Efficient Neural Codes. Neural Comput. 1996;8(3):531–543. pmid:8868566
5. Balasubramanian V, Kimber D, Berry MJ II. Metabolically Efficient Information Processing. Neural Comput. 2001;13(4):799–815. pmid:11255570
6. Laughlin S. Energy as a constraint on the coding and processing of sensory information. Curr Opin Neurobiol. 2001;11(4):475–480. pmid:11502395
7. Niven JE, Laughlin SB. Energy limitation as a selective pressure on the evolution of sensory systems. J Exp Biol. 2008;211(11):1792–1804. pmid:18490395
8. Yu L, Yu Y. Energy-efficient neural information processing in individual neurons and neuronal networks. J Neurosci Res. 2017;95(11):2253–2266. pmid:28833444
9. Sengupta B, Laughlin SB, Niven JE. Balanced Excitatory and Inhibitory Synaptic Currents Promote Efficient Coding and Metabolic Efficiency. PLoS Comput Biol. 2013;9(10):e1003263. pmid:24098105
10. Barta T, Kostal L. The effect of inhibition on rate code efficiency indicators. PLoS Comput Biol. 2019;15(12):e1007545. pmid:31790384
11. Monier C, Chavane F, Baudot P, Graham LJ, Frégnac Y. Orientation and Direction Selectivity of Synaptic Inputs in Visual Cortical Neurons. Neuron. 2003;37(4):663–680. pmid:12597863
12. Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. J Comput Neurosci. 2000;8:183–208. pmid:10809012
13. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, et al. The Asynchronous State in Cortical Circuits. Science. 2010;327(5965):587–590. pmid:20110507
14. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of Neural-Network Activity by Inhibitory Feedback. PLoS Comput Biol. 2012;8(8):e1002596. pmid:23133368
15. Bernacchia A, Wang XJ. Decorrelation by Recurrent Inhibition in Heterogeneous Neural Circuits. Neural Comput. 2013;25(7):1732–1767. pmid:23607559
16. Abbott LF, Dayan P. The Effect of Correlated Variability on the Accuracy of a Population Code. Neural Comput. 1999;11(1):91–101. pmid:9950724
17. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7(5):358–366. pmid:16760916
18. Panzeri S, Moroni M, Safaai H, Harvey CD. The structures and functions of correlations in neural population codes. Nat Rev Neurosci. 2022;23(9):551–567. pmid:35732917
19. Shadlen MN, Newsome WT. The Variable Discharge of Cortical Neurons: Implications for Connectivity, Computation, and Information Coding. J Neurosci. 1998;18(10):3870–3896. pmid:9570816
20. Moreno-Bote R, Beck J, Kanitscheider I, Pitkow X, Latham P, Pouget A. Information-limiting correlations. Nat Neurosci. 2014;17(10):1410–1417. pmid:25195105
21. Blahut R. Computation of channel capacity and rate-distortion functions. IEEE Trans Inf Theory. 1972;18(4):460–473.
22. Jimbo M, Kunisawa K. An iteration method for calculating the relative capacity. Information and Control. 1979;43(2):216–223.
23. Suksompong P, Berger T. Capacity Analysis for Integrate-and-Fire Neurons With Descending Action Potential Thresholds. IEEE Trans Inf Theory. 2010;56(2):838–851.
24. Kostal L, Lansky P. Information capacity and its approximations under metabolic cost in a simple homogeneous population of neurons. Biosystems. 2013;112(3):265–275. pmid:23562831
25. Kostal L, Lansky P, McDonnell MD. Metabolic cost of neuronal information in an empirical stimulus-response model. Biol Cybern. 2013;107(3):355–365. pmid:23467914
26. Stemmler M. A single spike suffices: the simplest form of stochastic resonance in model neurons. Network. 1996;7(4):687–716.
27. Greenwood PE, Lansky P. Optimum signal in a simple neuronal model with signal-dependent noise. Biol Cybern. 2005;92(3):199–205. pmid:15750866
28. Meyer HS, Wimmer VC, Oberlaender M, de Kock CPJ, Sakmann B, Helmstaedter M. Number and Laminar Distribution of Neurons in a Thalamocortical Projection Column of Rat Vibrissal Cortex. Cereb Cortex. 2010;20(10):2277–2286. pmid:20534784
29. Bernardi D, Doron G, Brecht M, Lindner B. A network model of the barrel cortex combined with a differentiator detector reproduces features of the behavioral response to single-neuron stimulation. PLoS Comput Biol. 2021;17(2). pmid:33556070
30. Hennequin G, Vogels T, Gerstner W. Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements. Neuron. 2014;82(6):1394–1406. pmid:24945778
31. Potjans TC, Diesmann M. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cereb Cortex. 2014;24(3):785–806. pmid:23203991
32. Kobayashi R, Kurita S, Kurth A, Kitano K, Mizuseki K, Diesmann M, et al. Reconstructing neuronal circuitry from parallel spike trains. Nat Commun. 2019;10(1):4468. pmid:31578320
33. Barta T, Kostal L. Regular spiking in high-conductance states: The essential role of inhibition. Phys Rev E. 2021;103(2):022408. pmid:33736083
34. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods. 2020;17(3):261–272. pmid:32015543
35. Padamsey Z, Katsanevaki D, Dupuy N, Rochefort NL. Neocortex saves energy by reducing coding precision during food scarcity. Neuron. 2022;110(2):280–296. pmid:34741806
36. Kobayashi R, Tsubo Y, Shinomoto S. Made-to-order spiking neuron model equipped with a multi-timescale adaptive threshold. Front Comput Neurosci. 2009;3:9. pmid:19668702
37. Zerlaut Y, Chemla S, Chavane F, Destexhe A. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. J Comput Neurosci. 2017;44(1):45–61. pmid:29139050
38. Laughlin S. A simple coding procedure enhances a neuron's information capacity. Z Naturforsch [C]. 1981;36(9-10):910–912. pmid:7303823
39. Kostal L, Lansky P, Rospars JP. Efficient olfactory coding in the pheromone receptor neuron of a moth. PLoS Comput Biol. 2008;4:e1000053. pmid:18437217
40. Treves A, Panzeri S, Rolls ET, Booth M, Wakeman EA. Firing rate distributions and efficiency of information transmission of inferior temporal cortex neurons to natural visual stimuli. Neural Comput. 1999;11(3):601–632. pmid:10085423
41. de Polavieja GG. Errors Drive the Evolution of Biological Signalling to Costly Codes. J Theor Biol. 2002;214(4):657–664. pmid:11851374
42. de Polavieja GG. Reliable biological communication with realistic constraints. Phys Rev E. 2004;70(6). pmid:15697405
43. Kostal L, Kobayashi R. Optimal decoding and information transmission in Hodgkin-Huxley neurons under metabolic cost constraints. Biosystems. 2015;136:3–10. pmid:26141378
44. Kostal L, Kobayashi R. Critical size of neural population for reliable information transmission. Phys Rev E. 2019;100(1):050401(R). pmid:31870018
45. Gur M, Beylin A, Snodderly DM. Response Variability of Neurons in Primary Visual Cortex (V1) of Alert Monkeys. J Neurosci. 1997;17(8):2914–2920. pmid:9092612
46. Geisler WS, Albrecht DG. Visual cortex neurons in monkeys and cats: Detection, discrimination, and identification. Vis Neurosci. 1997;14(5):897–919. pmid:9364727
47. Uhlenbeck GE, Ornstein LS. On the Theory of the Brownian Motion. Phys Rev. 1930;36(5):823–841.
48. Destexhe A, Rudolph M, Fellous JM, Sejnowski TJ. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience. 2001;107(1):13–24. pmid:11744242
49. Rajdl K, Lansky P. Stein's neuronal model with pooled renewal input. Biol Cybern. 2015;109(3):389–399. pmid:25910437
50. Stimberg M, Brette R, Goodman DF. Brian 2, an intuitive and efficient neural simulator. eLife. 2019;8. pmid:31429824
51. Vetter P, Roth A, Häusser M. Propagation of Action Potentials in Dendrites Depends on Dendritic Morphology. J Neurophysiol. 2001;85(2):926–937. pmid:11160523
52. Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W. Entropy and Information in Neural Spike Trains. Phys Rev Lett. 1998;80(1):197–200.
53. Panzeri S, Senatore R, Montemurro MA, Petersen RS. Correcting for the Sampling Bias Problem in Spike Train Information Measures. J Neurophysiol. 2007;98(3):1064–1072. pmid:17615128
54. Panzeri S, Treves A. Analytical estimates of limited sampling biases in different information measures. Network. 1996;7(1):87–107. pmid:29480146
55. Paninski L. Estimation of Entropy and Mutual Information. Neural Comput. 2003;15(6):1191–1253.
56. Nemenman I, Bialek W, de Ruyter van Steveninck R. Entropy and information in neural spike trains: Progress on the sampling problem. Phys Rev E. 2004;69(5):056111. pmid:15244887
  56. 56. Nemenman I, Bialek W, de Ruyter van Steveninck R. Entropy and information in neural spike trains: Progress on the sampling problem. Phys Rev E. 2004;69(5):056111. pmid:15244887