Table of contents

Volume 14

Number 1, January 2003


1


One of the most challenging and fascinating problems in science is deciphering how neural systems encode information. Mathematical models are critical in any effort to determine how neural systems represent and transmit information. Such mathematical approaches span a wide range, which may be divided approximately into two kinds of research activities. The first kind of activity uses detailed biophysical models (e.g., Hodgkin–Huxley and its variants) of individual neurons, detailed biophysical models of networks of neurons, or artificial neural network models to study emergent behaviors of neural systems. The second kind of activity develops signal-processing algorithms to analyze the ever-growing volumes of data collected in neuroscience experiments.

In an ideal scientific investigation there is a direct link between experiments and theoretical modeling. The theoretical models make predictions that can be used to guide experiments; the experiments provide data that allow for refinement (or rejection) of the theoretical models. The growing complexity of neuroscience experiments makes the use of appropriate data analysis methods crucial for establishing how reliably specific system properties can be identified from experimental measurements. Thus, careful data analysis is an essential complement to theoretical modeling: it allows validation of theoretical model predictions and provides biologically relevant constraints and parameter values for further analytic and simulation studies (see figure 1).

Figure 1. Neuroscience data: dynamic and multivariate.

Neural spike train data have special features that present new, exciting challenges for signal processing research. For this reason, in 2000 and 2001 we organized two workshops entitled 'Information and Statistical Structure in Spike Trains' as part of the Neural Information Processing Systems meetings for those years. The papers in this special issue are a compilation of some of the work presented at those two workshops. All focus in some way on the question of how to decipher neural codes. In this regard, the broad range of topics addressed is a sample of the breadth of issues required to make the link from experiments to theory, and of the theoretical considerations that can generate experimental predictions. The data analysis methods divide into three types: data pre-processing (spike sorting), statistical modeling of neural spike train data, and algorithms for calculating information with explicit and implicit models. The system-dependent or theoretical models use simulations to gain physiological and mechanistic insight.

The recent advent of multi-electrode arrays capable of recording the simultaneous spiking activity of many neurons (>100) has made it possible to study information encoding by neural ensembles rather than by just single neurons. Simultaneous recording of multiple neurons is now a standard tool in neuroscience research. Often, the first data-analysis problem confronting the investigator is that of discerning the discharges of individual neurons within the multichannel, noise-contaminated signals that result from tetrode recordings. Nguyen, Frank and Brown's contribution, 'An application of reversible-jump MCMC to spike classification of multi-unit extracellular recordings', presents a new approach to this problem. The core of the approach is a Bayesian Markov chain Monte Carlo procedure that simultaneously estimates the correlation structure of the noise (the variability of the individual spike waveforms) and the number of distinct waveforms. In this way the authors determine the number of neurons being simultaneously recorded while assigning each action potential to its source neuron.

Kass, Ventura and Cai ('Statistical smoothing of neuronal data') take a careful look at the notion of firing rate. Though it seems a simple problem, estimating the firing rate (and using this estimate to determine quantities such as the time of the peak response) is far from straightforward. As they show, simple approaches based on the post-stimulus time histogram and raster plot, the visualization tools ubiquitous in exploratory analysis of neural data, are rather inefficient, and smoothing techniques based on adaptive splines offer substantial advantages.

Information theory methods are perhaps the most widely used techniques for analyzing neural data. In 'An exact method to quantify the information transmitted by different mechanisms of correlational coding', Pola, Thiel, Hoffmann and Panzeri extend their previous work on exact measures of the information encoded by an individual neuron to exact measures of the information encoded by a population of neurons. Their analysis technique allows a decomposition of the information structure in terms of the mean neural response, the correlation among the neurons and stimulus-induced changes in that correlation.

Dimitrov, Miller and co-workers ('Analysis of neural coding using quantization with an information-based distortion measure') present another approach to estimating the information transmitted by spike trains, and apply this approach to the cricket cercal (air velocity sensation) system. Rather than calculate information directly by estimating joint input–output probabilities, their approach applies rate distortion theory to identify a 'codebook' that minimally distorts the information available in the stimulus. One advantage of this approach is that it does not assume that the neural code resembles a rate code. More importantly, along with the estimate of information that the procedure yields, the codebook provides insight into how information is represented.

Understanding the distinction between single spikes and spikes that belong to bursts is crucial for characterizing neural encoding schemes. In 'Information encoding and computation with spikes and bursts', Kepecs and Lisman use simulation studies based on the Hodgkin–Huxley model, combined with principal components analysis (PCA) and discriminant analysis, to address this question. Specifically, the authors simulate a Hodgkin–Huxley model of a bursting neuron stimulated by stochastic inputs and study the relation between spiking patterns and stimulus features by applying PCA and discriminant analysis to a form of the spike-triggered covariance matrix. The authors suggest a way to identify the distinct stimulus features that spikes and bursts encode.

Dynamic synapses (synapses whose efficacy increases or decreases in a manner that depends on their recent activity) are clearly affected differently by isolated spikes and by spikes within bursts. Pantic, Torres and Kappen ('Coincidence detection with dynamic synapses') show that dynamic changes in synaptic efficacy not only affect the way that spike trains from a single input are processed, but can also play an important role in the way that spike trains from convergent inputs interact. Via a computational study of idealized integrate-and-fire and Hodgkin–Huxley neurons, this work demonstrates that synaptic dynamics substantially extend the range over which neurons can act as coincidence detectors.

Although action potentials are the primary means by which neurons communicate, a crucial part of understanding this communication process lies in understanding the intricacies of the subthreshold processes that lead to the generation of action potentials. In 'Influence of subthreshold nonlinearities on signal-to-noise ratio and timing precision for small signals in neurons: minimal model analysis', Svirskis and Rinzel use an elementary integrate-and-fire model to examine the different roles that subthreshold voltage- and time-dependent conductances play in signal integration and the production of action potentials. The key to their analysis is the role that a non-inactivating low-threshold outward current can play in increasing the precision of small-signal integration. Svirskis and Rinzel provide several examples to illustrate the importance of this subthreshold feedback mechanism.

Defining what a neuron encodes (i.e., its receptive field properties) is a basic question in neuroscience. Many neuroscience experiments allow investigators to study this question by providing a statistical characterization of a neuron's response to a given stimulus. In 'Likelihood approaches to sensory coding in auditory cortex', Jenison and Reale use likelihood methods to study the problem of sound localization based on the ensemble response recorded from primary auditory cortex. They do this by using an inverse Gaussian probability density to model the neural response latency as a function of multiple acoustic parameters. The interesting feature of this approach is the use of likelihood methods based on a formal probability model to carry out the analysis. The likelihood approach has several advantages: the parameter estimates are consistent (converging to the true value as the sample size increases), they have an asymptotic Gaussian distribution, and they provide a straightforward way to compute confidence intervals that are as short as possible. The analysis gives a quantitative description of the relative importance of direction, azimuth and sound amplitude in inducing auditory neural responses, and hence useful insight into how the neurons use this information for sound localization.

Most sensory systems face the problem of providing useful signals over a wide dynamic range of inputs, within the constraints of a relatively narrow range of outputs. In vision, this problem is particularly acute: the retinal output, which consists of spike trains that rarely exceed 200 impulses/s, can signal contrasts as low as one part in 300 over a 10^10-fold operating range of intensities, performance that can only be accomplished through gain controls. Lesica, Boloori and Stanley ('Adaptive encoding in the visual pathway') present a promising approach to the analysis of such gain controls, through a procedure that adaptively tracks response characteristics. Although described in the context of data from the visual system, the approach is a general one, and it is particularly useful in the challenging situation in which the timescale of the adaptive mechanisms and the timescales of the signals being encoded are not well separated.

In sum, the availability of new experimental techniques, analytic and computational strategies, and the hardware with which to implement them has led to a surge of activity at the confluence of experimental, computational and theoretical neuroscience. The papers in this special issue provide windows into the range and vigor of this current research.

5

Statistical smoothing of neuronal data
Kass, Ventura and Cai

The purpose of smoothing (filtering) neuronal data is to improve the estimation of the instantaneous firing rate. In some applications, scientific interest centres on functions of the instantaneous firing rate, such as the time at which the maximal firing rate occurs or the rate of increase of firing rate over some experimentally relevant period. In others, the instantaneous firing rate is needed for probability-based calculations. In this paper we point to the very substantial gains in statistical efficiency from smoothing methods compared to using the peri-stimulus time histogram (PSTH), and we also demonstrate a new method of adaptive smoothing known as Bayesian adaptive regression splines (DiMatteo I, Genovese C R and Kass R E 2001 Biometrika 88 1055–71). We briefly review additional applications of smoothing with non-Poisson processes and in the joint PSTH for a pair of neurons.
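To make the efficiency comparison concrete, here is a minimal sketch in Python: it builds a PSTH from simulated Poisson spike counts and smooths it with a fixed-knot penalized spline. The spline is only a stand-in for BARS, and the rate function, bin width and smoothing penalty are all invented for illustration.

```python
# Sketch: PSTH versus spline-smoothed firing-rate estimate.
# A smoothing spline stands in for Bayesian adaptive regression
# splines (BARS); the true rate function is invented.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
bin_width, n_trials = 0.01, 50          # 10 ms bins, 50 trials
t = np.arange(0, 1, bin_width)
true_rate = 20 + 60 * np.exp(-0.5 * ((t - 0.4) / 0.08) ** 2)  # Hz

# Poisson spike counts per bin, summed over trials -> PSTH
counts = rng.poisson(true_rate * bin_width * n_trials)
psth = counts / (bin_width * n_trials)  # raw rate estimate (Hz)

# Smooth the PSTH; 's' controls the roughness penalty
smooth = UnivariateSpline(t, psth, s=len(t) * np.var(psth) * 0.1)(t)

for name, est in [("PSTH", psth), ("spline", smooth)]:
    print(name, "RMSE:", np.sqrt(np.mean((est - true_rate) ** 2)))
print("estimated time of peak response:", t[np.argmax(smooth)], "s")
```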

17

Coincidence detection with dynamic synapses
Pantic, Torres and Kappen

Recent experimental findings show that the efficacy of transmission in cortical synapses depends on presynaptic activity. In most neural models, however, the synapses are regarded as static entities where this dependence is not included. We study the role of activity-dependent (dynamic) synapses in neuronal responses to temporal patterns of afferent activity. Our results demonstrate that, for suitably chosen threshold values, dynamic synapses are capable of coincidence detection (CD) over a much larger range of frequencies than static synapses. The phenomenon appears to be valid for an integrate-and-fire as well as a Hodgkin–Huxley neuron and various types of CD tasks.
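A rough flavor of the mechanism can be given in a few lines: the sketch below drives a leaky integrate-and-fire neuron through two depressing synapses with a Tsodyks–Markram-style resource variable, with weights chosen so that only near-coincident inputs reach threshold. All parameter values are invented and the model is far simpler than those studied in the paper.

```python
# Sketch: coincidence detection through depressing synapses onto a
# leaky integrate-and-fire neuron. A single full-strength EPSP is
# subthreshold; two near-coincident EPSPs can cross threshold.
import numpy as np

rng = np.random.default_rng(4)
dt, T = 0.1, 5000.0                       # time step and duration (ms)
tau_m, v_th = 10.0, 1.0                   # membrane time constant, threshold
tau_rec, use = 300.0, 0.5                 # resource recovery, release fraction
rate = 40.0 / 1000.0                      # each input: 40 Hz Poisson

x = np.ones(2)                            # synaptic resources per input
v, out = 0.0, 0
for _ in range(int(T / dt)):
    x += dt * (1.0 - x) / tau_rec         # resources recover
    pre = rng.random(2) < rate * dt       # presynaptic spikes this step
    v += (1.2 * use * x * pre).sum()      # EPSPs scaled by resources
    x[pre] *= (1.0 - use)                 # resources depleted on release
    v -= dt * v / tau_m                   # leak
    if v >= v_th:
        v, out = 0.0, out + 1
print("output rate (Hz):", 1000.0 * out / T)
```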

35

An exact method to quantify the information transmitted by different mechanisms of correlational coding
Pola, Thiel, Hoffmann and Panzeri

We derive a new method to quantify the impact of correlated firing on the information transmitted by neuronal populations. The new method considers, in an exact way, the effects of high-order spike train statistics, with no approximation involved, and it generalizes our previous work, which was valid only for short time windows and small populations. The technique permits one to separate the information that would be transmitted if each cell conveyed fully independent information from the synergy–redundancy effects present in the actual population response. Synergy–redundancy effects are shown to arise from three possible contributions: a redundant contribution due to similarities in the mean response profiles of different cells; a synergistic, stimulus-dependent correlational contribution quantifying the information content of changes of correlation with the stimulus; and a stimulus-independent correlational contribution that reflects interactions between the distribution of firing rates of individual cells and the average level of cross-correlation. We apply the new method to simultaneously recorded data from somatosensory and visual cortices, and demonstrate that it constitutes a reliable tool for determining the role of cross-correlated activity in stimulus coding, even when high-firing-rate data (such as multi-unit recordings) are considered.
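The quantity being decomposed can be illustrated with a toy calculation: the sketch below computes the plug-in mutual information for an invented two-neuron joint response table, and compares it with a conditionally independent surrogate in which noise correlations are removed. This shows the kind of correlational contribution the paper quantifies exactly; it is not the paper's decomposition itself.

```python
# Sketch: mutual information with and without noise correlations
# for a toy pair of binary neurons. The joint table is invented.
import numpy as np

def mutual_info(joint):
    """I(S;R) in bits from a joint probability table p(s, r)."""
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Columns are joint responses (r1, r2) = (0,0), (0,1), (1,0), (1,1);
# rows are two equiprobable stimuli.
joint = np.array([[0.20, 0.05, 0.05, 0.20],
                  [0.05, 0.20, 0.20, 0.05]])

# Conditionally independent surrogate: p(r1,r2|s) -> p(r1|s) p(r2|s)
ps = joint.sum(axis=1)
p_r_s = joint / ps[:, None]                      # p(r1,r2|s), shape (2, 4)
p_r1_s = p_r_s.reshape(2, 2, 2).sum(axis=2)      # p(r1|s)
p_r2_s = p_r_s.reshape(2, 2, 2).sum(axis=1)      # p(r2|s)
indep = (p_r1_s[:, :, None] * p_r2_s[:, None, :]).reshape(2, 4) * ps[:, None]

print("full I(S;R):       ", mutual_info(joint))
print("independent I(S;R):", mutual_info(indep))  # gap = correlational term
```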

61

An application of reversible-jump MCMC to spike classification of multi-unit extracellular recordings
Nguyen, Frank and Brown

Multi-electrode recordings in neural tissue contain the action potential waveforms of many closely spaced neurons. While we can observe the action potential waveforms, we cannot observe which neuron is the source of which waveform, nor how many source neurons are being recorded. Current spike-sorting algorithms solve this problem by assuming a fixed number of source neurons and assigning the action potentials given this fixed number. We model the spike waveforms as an anisotropic Gaussian mixture and present, as an alternative, a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that simultaneously estimates the number of source neurons and assigns each action potential to a source. We derive this MCMC algorithm and illustrate its application using simulated three-dimensional data and real four-dimensional feature vectors extracted from tetrode recordings of rat entorhinal cortex neurons. In the analysis of the simulated data our algorithm finds the correct number of mixture components (sources) and classifies the action potential waveforms with minimal error. In the analysis of real data, our algorithm identifies clusters closely resembling those previously identified by a user-dependent graphical clustering procedure. Our findings suggest that a reversible-jump MCMC algorithm could offer a new strategy for designing automated spike-sorting algorithms.
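As a loose illustration of the underlying model-selection problem, the sketch below fits anisotropic Gaussian mixtures of several sizes to simulated spike features and selects the number of source neurons by BIC. BIC selection over EM fits is a simple stand-in for the paper's reversible-jump sampler, and the simulated clusters are invented.

```python
# Sketch: choosing the number of source neurons in a Gaussian
# mixture model of spike features. BIC model selection stands in
# for reversible-jump MCMC; the data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulate 3 "neurons" as clusters in a 2-D feature space
# (e.g., spike amplitude on two tetrode channels).
centers = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])
features = np.vstack([
    rng.multivariate_normal(c, 0.3 * np.eye(2), size=200) for c in centers
])

best_k, best_bic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(features)
    if gmm.bic(features) < best_bic:
        best_k, best_bic, best_gmm = k, gmm.bic(features), gmm

labels = best_gmm.predict(features)   # spike -> source-neuron assignment
print("estimated number of source neurons:", best_k)
```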

83

Likelihood approaches to sensory coding in auditory cortex
Jenison and Reale

Likelihood methods began their evolution in the early 1920s with R A Fisher, and have developed into a rich framework for inferential statistics. This framework offers tools for the analysis of the differential geometry of the full likelihood function based on observed data. We examine likelihood functions derived from inverse Gaussian (IG) probability density models of cortical ensemble responses of single units. Specifically, we investigate the problem of sound localization from the observation of an ensemble of neural responses recorded from the primary (AI) field of the auditory cortex. The problem is framed as a probabilistic inverse problem with multiple sources of ambiguity.

Observed and expected Fisher information are defined for the IG cortical ensemble likelihood functions. Receptive field functions of multiple acoustic parameters are constructed and linked to the IG density. The impact of estimating multiple acoustic parameters related to the direction of a sound is discussed, and the implications of eliminating nuisance parameters are considered. We examine the degree of acuity afforded by a small ensemble of cortical neurons for locating sounds in space, and show the predicted patterns of estimation errors, which tend to follow psychophysical performance.
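A minimal sketch of the likelihood read-out, under invented assumptions: response latencies of a small ensemble are modeled as inverse Gaussian with a mean set by a toy cosine tuning function of azimuth, and the azimuth is recovered by maximizing the summed log-likelihood over a candidate grid. The tuning function and parameter values are hypothetical, not the receptive field model of the paper.

```python
# Sketch: maximum-likelihood direction estimate from ensemble
# latencies modeled as inverse Gaussian (IG). Toy tuning function.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(6)
directions = np.linspace(-90, 90, 181)        # candidate azimuths (deg)

def mean_latency(azimuth, preferred):
    """Toy tuning: latency (ms) is shortest at the preferred direction."""
    return 15.0 + 10.0 * (1 - np.cos(np.radians(azimuth - preferred)))

preferred = np.array([-40.0, 0.0, 35.0])      # small ensemble of 3 units
true_azimuth, lam = 20.0, 200.0               # IG shape parameter lambda

# Simulate one latency per unit; scipy's IG(mean=m, lambda=lam) is
# parametrized as invgauss(m / lam, scale=lam).
mu = mean_latency(true_azimuth, preferred)
latencies = invgauss.rvs(mu / lam, scale=lam, random_state=rng)

# Summed log-likelihood of each candidate direction given the ensemble
ll = np.array([
    invgauss.logpdf(latencies, mean_latency(d, preferred) / lam,
                    scale=lam).sum()
    for d in directions
])
print("ML azimuth estimate:", directions[np.argmax(ll)], "deg")
```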

103

Information encoding and computation with spikes and bursts
Kepecs and Lisman

Neurons compute and communicate by transforming synaptic input patterns into output spike trains. The nature of this transformation depends crucially on the properties of voltage-gated conductances in neuronal membranes. These intrinsic membrane conductances can enable neurons to generate different spike patterns, including the brief, high-frequency bursts that are commonly observed in a variety of brain regions. Here we examine how the membrane conductances that generate bursts affect neural computation and encoding. We simulate a bursting neuron model driven by a random input current with superposed noise. We consider two issues: the timing reliability of different spike patterns and the computation performed by the neuron. Statistical analysis of the simulated spike trains shows that the timing of bursts is much more precise than the timing of single spikes. Furthermore, the number of spikes per burst is highly robust to noise. Next we consider the computation performed by the neuron: how different features of the input current are mapped into specific output spike patterns. Dimensional reduction and statistical classification techniques are used to determine the stimulus features triggering different firing patterns. Our main result is that spikes, and bursts of different durations, code for different stimulus features, which can be quantified without a priori assumptions about those features. These findings lead us to propose that the biophysical mechanisms of spike generation enable individual neurons to encode different stimulus features into distinct spike patterns.
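The logic of the spike-triggered analysis can be sketched with a toy event generator (a thresholded linear filter standing in for the Hodgkin–Huxley model): stimulus snippets preceding each event are collected, events are crudely split into 'isolated' and 'burst-like' classes, and the spike-triggered average and leading covariance eigenvalue are compared between classes. Everything here is invented for illustration.

```python
# Sketch: do "bursts" and "isolated spikes" ride on different stimulus
# features? Compare spike-triggered statistics between event classes.
import numpy as np

rng = np.random.default_rng(3)
stim = rng.normal(size=20000)
# Toy event generator: events occur where the filtered stimulus is
# high; call an event "burst-like" when the local slope is also steep.
kernel = np.exp(-np.arange(30) / 10.0)
drive = np.convolve(stim, kernel, mode="full")[: stim.size]
events = np.where(drive > 2.5)[0]
events = events[events > 50]

snippets, is_burst = [], []
for t in events:
    snippets.append(stim[t - 50 : t])              # preceding stimulus
    is_burst.append(drive[t] - drive[t - 5] > 0.5)  # crude slope test
snippets, is_burst = np.array(snippets), np.array(is_burst)

for label, mask in [("isolated", ~is_burst), ("burst", is_burst)]:
    sta = snippets[mask].mean(axis=0)              # spike-triggered average
    cov = np.cov(snippets[mask].T)                 # spike-triggered covariance
    top = np.linalg.eigvalsh(cov)[-1]              # leading PC variance
    print(label, "n =", mask.sum(), " STA peak:", round(float(sta.max()), 3),
          " top PC var:", round(float(top), 3))
```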

119

Adaptive encoding in the visual pathway
Lesica, Boloori and Stanley

In a natural setting, the mean luminance and contrast of the light within a visual neuron's receptive field are constantly changing as the eyes saccade across complex scenes. Adaptive mechanisms modulate filtering properties of the early visual pathway in response to these variations, allowing the system to maintain differential sensitivity to nonstationary stimuli. An adaptive variant of the reverse correlation technique is used to characterize these changes during single trials. Properties of the adaptive reverse correlation algorithm were investigated via simulation. Analysis of data collected from the mammalian visual system demonstrates the ability to continuously track adaptive changes in the encoding scheme. The adaptive estimation approach provides a framework for characterizing the role of adaptation in natural scene viewing.
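In the same spirit of tracking a nonstationary encoder within single trials, the sketch below uses recursive least squares with a forgetting factor to follow a slowly drifting linear kernel. RLS is one standard adaptive estimator; the algorithm, drifting kernel and forgetting factor shown here are not taken from the paper.

```python
# Sketch: tracking a slowly changing linear receptive field with
# recursive least squares (RLS), in the spirit of adaptive reverse
# correlation. The drifting kernel and all parameters are invented.
import numpy as np

rng = np.random.default_rng(7)
n_taps, n_steps, lam = 8, 5000, 0.995      # kernel length, forgetting factor

h_true = rng.normal(size=n_taps)           # "true" receptive field
h_est = np.zeros(n_taps)
P = np.eye(n_taps) * 10.0                  # inverse correlation estimate
stim = rng.normal(size=n_steps + n_taps)

for t in range(n_steps):
    h_true *= 0.9999                        # slow adaptation: gain rolls off
    x = stim[t : t + n_taps]
    y = h_true @ x + 0.1 * rng.normal()     # response (linear + noise)
    # Standard RLS update with forgetting factor lam
    k = P @ x / (lam + x @ P @ x)
    h_est += k * (y - h_est @ x)
    P = (P - np.outer(k, x @ P)) / lam

print("final kernel error:", np.linalg.norm(h_est - h_true))
```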

137

Influence of subthreshold nonlinearities on signal-to-noise ratio and timing precision for small signals in neurons: minimal model analysis
Svirskis and Rinzel

Subthreshold voltage- and time-dependent conductances can subserve different roles in signal integration and action potential generation. Here, we use minimal models to demonstrate how a non-inactivating low-threshold outward current (IKLT) can enhance the precision of small-signal integration. Our integrate-and-fire models have only a few biophysical parameters, enabling a parametric study of IKLT's effects. IKLT increases the signal-to-noise ratio (SNR) for firing when a subthreshold 'signal' EPSP is delivered in the presence of weak random input. The increased SNR is due to the suppression of spontaneous firing in response to the random input. Accordingly, the SNR grows as the EPSP amplitude increases. The SNR also grows as the unitary synaptic current's time constant increases, leading to more effective suppression of spontaneous activity. Spike-triggered reverse correlation of the injected current indicates that, to reach spike threshold, a cell with IKLT requires a briefer time course of injected current. Consistent with this narrowed integration time window, IKLT enhances phase-locking, measured as vector strength, to a weak, noisy, periodically modulated stimulus. Thus subthreshold negative feedback mediated by IKLT enhances temporal processing. An alternative suppression mechanism is voltage- and time-dependent inactivation of a low-threshold inward current. This feature in an integrate-and-fire model also shows SNR enhancement, in comparison with the case where the inward current is non-inactivating. Small-signal detection can be significantly improved in noisy neuronal systems by subthreshold negative feedback, which serves to suppress false positives.
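A cartoon of the suppression mechanism, with invented parameters: the sketch adds a slow, voltage-activated outward conductance to a leaky integrate-and-fire neuron and counts how many spikes occur spontaneously versus near a brief 'signal' pulse, with and without the current. It illustrates only the false-positive suppression idea, not the paper's calibrated minimal models.

```python
# Sketch: leaky integrate-and-fire neuron with a slow subthreshold
# outward current (an IKLT stand-in), tested on a "signal" pulse
# buried in weak noise. All parameters are invented.
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.05, 200.0                       # time step and duration (ms)
steps = int(T / dt)

def run(g_klt):
    v, w, spikes = 0.0, 0.0, []
    for i in range(steps):
        t = i * dt
        i_noise = 0.35 * rng.normal() / np.sqrt(dt)   # weak random input
        i_sig = 8.0 if 100.0 <= t < 101.0 else 0.0    # brief signal EPSP
        # w: slow activation of the outward current, driven by v
        w += dt * (max(v, 0.0) - w) / 5.0
        v += dt * (-v / 10.0 - g_klt * w * v + i_noise + i_sig)
        if v >= 1.0:
            spikes.append(t)
            v = 0.0
    return spikes

for g in (0.0, 0.8):
    sp = run(g)
    hits = [t for t in sp if 100.0 <= t <= 105.0]
    print(f"g_KLT={g}: {len(sp)} spikes total, {len(hits)} near the signal")
```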

151

Analysis of neural coding using quantization with an information-based distortion measure
Dimitrov, Miller and co-workers

We discuss an analytical approach through which the neural symbols and corresponding stimulus space of a neuron or neural ensemble can be discovered simultaneously and quantitatively, making few assumptions about the nature of the code or the relevant features. The basis for this approach is to conceptualize a neural coding scheme as a collection of stimulus–response classes akin to a dictionary or 'codebook', with each class corresponding to a spike pattern 'codeword' and its corresponding stimulus feature. The neural codebook is derived by quantizing the neural responses into a small reproduction set, and optimizing the quantization to minimize an information-based distortion function. We apply this approach to the analysis of coding in sensory interneurons of a simple invertebrate sensory system. For a simple sensory characteristic (the tuning curve), we demonstrate a case for which the classical definition of tuning does not adequately describe the performance of the cell studied. Considering a more involved sensory operation (sensory discrimination), we also show that, for some cells in this system, a significant amount of information is encoded in patterns of spikes that would not be discovered through analyses based on linear stimulus–response measures.
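One simple way to build such a codebook, sketched under invented assumptions, is greedy agglomeration: starting from a random joint stimulus–response table, repeatedly merge the two response classes whose merger loses the least mutual information until the target number of classes remains. This agglomerative scheme is only a stand-in for the paper's optimized quantizer.

```python
# Sketch: quantizing neural responses into a small "codebook" by
# greedily merging the pair of response classes whose merger loses
# the least stimulus information. The joint table is invented.
import numpy as np

def info(joint):
    """Mutual information (bits) of a joint table p(s, r)."""
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

rng = np.random.default_rng(2)
joint = rng.random((3, 8))           # 3 stimuli, 8 raw response patterns
joint /= joint.sum()

target_classes = 3
while joint.shape[1] > target_classes:
    best = None
    for i in range(joint.shape[1]):
        for j in range(i + 1, joint.shape[1]):
            merged = np.delete(joint, j, axis=1)   # drop column j...
            merged[:, i] = joint[:, i] + joint[:, j]  # ...fold it into i
            loss = info(joint) - info(merged)
            if best is None or loss < best[0]:
                best = (loss, merged)
    joint = best[1]
    print("classes:", joint.shape[1], " I(S;R):", round(info(joint), 4))
```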