Physica A: Statistical Mechanics and its Applications
STDP-driven networks and the C. elegans neuronal network
Introduction
In any theoretical study of neuronal networks, the formal structure of the network as a directed graph is an important ingredient which, in reality, can be rather complicated. In the last decade, statistical concepts and tools have been developed for analyzing the large-scale properties of complex networks [1], [2], [3], [4], [5]. On this theoretical basis, it has been found that neural networks can exhibit scale-free and small-world properties [6], [7]. For a more refined analysis, and for the identification of deeper properties that may or may not distinguish neuronal networks from other classes of biological or non-biological networks, it is necessary to identify the factors that determine the structure of neuronal networks. Naively, one might think that this structure is determined genetically. But for animals with larger brains, this would require an enormous amount of genetically encoded information; at best, the connections of a few important axons could be genetically encoded. Moreover, several experimental findings described below suggest the lack of any hard-wired programme of axon guidance. Another possible factor is geometric constraint, since the network is embedded in a small three-dimensional volume. This is likely to affect only the long-range connections and not the local structure, and in any case this constraint seems too unspecific. We should therefore expect that some self-organization process yields the connectivity structure of neuronal networks. As most biological self-organization processes are triggered by external factors or signals, we should also look here for sources of external influence. Since a neuronal network processes sensory inputs, it should thereby adapt itself to its experiences. Hence, we are naturally led to consider learning as a key factor guiding the self-organization of a neuronal network.
The standard Hebbian paradigm tells us that learning is represented by modifications of the strengths of the synapses between neurons. In particular, learning is local in the sense that it depends on correlations between the activities of synaptically connected neurons. In more biological detail, we have the learning scheme of spike-timing-dependent synaptic plasticity (STDP), first discovered in Ref. [8]. This learning rule says that a synapse is strengthened when the presynaptic neuron fires shortly before the postsynaptic one, and weakened when this temporal order is reversed. It has received much attention in the neurobiological literature; in Ref. [9], it was formally analyzed how this rule, being based on activity correlations, in turn shapes these correlations. Again, this is a local rule, but it is natural to expect that the global statistical properties of neuronal networks result from its iterated application at all the active synapses of the network. In particular, some synapses may become so weak that they are eliminated entirely.
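As a concrete illustration, the pairwise form of this rule can be sketched as follows; the amplitudes and time constants here are illustrative placeholders, not values taken from the paper:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pairwise STDP: potentiate if the presynaptic spike precedes
    the postsynaptic one, depress if the order is reversed.
    Spike times are in ms; all parameters are illustrative."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> strengthen
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post fires before pre -> weaken
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, 0.0), w_max)  # keep the weight in [0, w_max]
```

The exponential windows ensure that only spike pairs separated by a few tens of milliseconds change the synapse appreciably, which is what makes the rule local in time as well as in space.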
The initial explorations to test this line of thinking have been encouraging [10], [11]. In Ref. [10], in order to separate the abstract features of this learning rule from the details of its neurobiological implementation, we considered a simple model of coupled chaotic maps in which the coupling strengths changed according to this learning rule. Starting from a globally coupled network, we obtained a stationary network with a broad degree distribution, in accordance with experimental findings for real neural networks. In Ref. [11], similar conclusions were reached using a continuous FitzHugh–Nagumo model.
These developments suggest that learning dynamics may be a relevant factor in determining network structure. A closer comparison is needed to confirm this conjecture. To this end, we carry out simulations with realistic models of neural dynamics and compare the resulting network with a real one, namely the neuronal network of C. elegans, which has been studied in detail [12], [13]. In fact, this is the only real neuronal network for which such detail is currently available, and unfortunately we are therefore unable to use other experimentally determined networks for comparison. Moreover, learning involves various kinds of plasticity, and neuronal networks contain different kinds of connections, such as chemical synapses and gap junctions. Here we work with realistic neuron models, the STDP learning rule and properly chosen input signals, and we show that applying this scheme formally leads to a network that is similar to the network of C. elegans in certain respects.
It is known that the brain has a very dense population of synaptic connections just after birth, and that most of these connections are pruned in the course of time [14]. Pruning takes place even in C. elegans, where the network is very small [15]. It has also been shown that perturbed sensory activity, or mutations that alter calcium channels or the membrane potential, affect axon outgrowth [16]. In our model, this is reflected by the irreversible deletion of synapses whose strength falls below a certain threshold.
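A minimal sketch of such threshold-based pruning; the threshold value is an arbitrary illustration, not the one used in the simulations:

```python
def prune(weights, threshold=1e-3):
    """Irreversibly delete synapses whose strength has fallen below
    the threshold. `weights` maps (pre, post) index pairs to synaptic
    strengths; pruned synapses are simply dropped from the map and
    can never recover."""
    return {syn: w for syn, w in weights.items() if w >= threshold}
```

Applied repeatedly during the STDP-driven evolution, this step turns the initially all-to-all network into a sparse directed graph.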
The plan of the paper is as follows. In Section 2, we describe the neuron models used, the STDP learning rule and also the tools used to analyse the network. The present status of our knowledge of C. elegans’ network is recalled in Section 3. This section also includes a more detailed analysis of C. elegans’ neuronal network. The main results of the paper are presented in Section 4; in particular, we describe the influence of the input and of different parameters on the final results. The paper ends with a discussion in Section 5.
Neuron models
Networks of neurons were modeled using the NEST Simulation Tool [17]. To show the generality of the results, two neuron models are used: the leaky integrate-and-fire (LIF) model and the Hodgkin–Huxley (HH) model.
The membrane potential V_i of the conductance-based LIF neuron with index i is governed by

C_m dV_i/dt = −g_L (V_i − E_L) + I_syn,i(t),

where C_m is the membrane capacitance and g_L is the leak conductance, which is equivalent to 1/R_m, where R_m is the membrane resistance, …
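A forward-Euler step for such a conductance-based LIF neuron can be sketched as follows; the parameter values are illustrative, not those used in the simulations reported here:

```python
def lif_step(v, i_syn, dt=0.1, c_m=250.0, g_l=16.7,
             e_l=-70.0, v_th=-55.0, v_reset=-70.0):
    """One forward-Euler step of a leaky integrate-and-fire neuron,
    C_m dV/dt = -g_L (V - E_L) + I_syn, in units of pF, nS, pA, mV, ms.
    When V crosses the threshold v_th, the neuron spikes and V is
    reset. All parameter values are illustrative placeholders.
    Returns (new_v, spiked)."""
    v = v + (-g_l * (v - e_l) + i_syn) * dt / c_m
    if v >= v_th:
        return v_reset, True
    return v, False
```

With a constant suprathreshold input current, iterating this step produces the familiar regular firing of the LIF model; with zero input, the potential relaxes to the leak reversal potential E_L and no spike is emitted.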
C. elegans neuronal network
In this section we discuss the neuronal network of C. elegans. It is a small sensory-transduction neuronal network consisting of sensory neurons, interneurons and motor neurons. In this study, the most recently published wiring diagram of C. elegans [13] is used. The somatic neuronal network contains 279 neurons connected by 2194 directed connections, each implemented by one or more chemical synapses, and 514 gap-junction connections, each consisting of one or more electrical junctions. …
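From these counts alone one can already estimate how sparse the network is; a minimal sketch, using only the numbers quoted above:

```python
n = 279      # somatic neurons
chem = 2194  # directed connections via chemical synapses
gap = 514    # gap-junction connections (undirected)

# Density of the chemical network: the realized fraction of the
# n*(n-1) possible directed connections (self-loops excluded).
chem_density = chem / (n * (n - 1))

# Gap junctions are symmetric, so the reference count is n*(n-1)/2.
gap_density = gap / (n * (n - 1) // 2)

print(f"chemical density: {chem_density:.4f}")      # ≈ 0.0283
print(f"gap-junction density: {gap_density:.4f}")   # ≈ 0.0133
```

So only about 3% of the possible directed chemical connections are realized, which makes the comparison with a pruned, initially all-to-all STDP network meaningful.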
Evolution of neural networks with STDP
It is known that the density of synapses in the human frontal cortex continues to increase during infancy and remains at a very dense level. After a short stable period, synapses begin to be removed continually, yielding a decrease in synaptic density. This pruning process continues until puberty, when synaptic density reaches adult levels [29]. As such a pruning of synapses that are in some sense superfluous may be a rather universal process, we study the local structure of a network obtained …
Discussion
We have systematically studied the effect of the STDP learning rule on network evolution. The network is initially all-to-all connected. Neurons are stimulated with spike trains that are (partially) periodic, or temporally correlated in a more complicated way in line with Poisson statistics. The STDP learning rule introduces the necessary competition between synapses. As the network evolves, a stationary distribution of peak synaptic conductances is reached, in which most synapses become …
Acknowledgements
The authors thank Nils Bertschinger, Bernhard Englitz, Thomas Kahle and Eckehard Olbrich for discussions. KMK acknowledges a research grant from the Department of Science and Technology (DST), India. JJ acknowledges support from the Volkswagen Foundation.
References (34)
Temporal correlation based learning in neuron models, Theory Biosci. (2006)
Evolution of network structure by temporal learning, Physica A (2009)
Axon branch removal at developing synapses by axosome shedding, Neuron (2004)
Axon pruning: C. elegans makes the cut, Current Biol. (2005)
Statistical mechanics of networks, Rev. Mod. Phys. (2002)
Collective dynamics of small-world networks, Nature (1998)
Handbook of Graphs and Networks: From the Genome to the Internet (2002)
The structure and function of complex networks, SIAM Rev. (2003)
Evolution of Networks: From Biological Nets to the Internet and WWW (2003)
Scale-free brain functional networks, Phys. Rev. Lett. (2005)
Graph theoretical analysis of complex networks in the brain, Nonlinear Biomed. Phys.
Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs, Science
Self-organized criticality and scale-free properties in emergent functional neural networks, Phys. Rev. E
The structure of the nervous system of the nematode Caenorhabditis elegans, Phil. Trans. R. Soc. Lond. B
Sensory activity affects sensory axon development in C. elegans, Development
NEST (NEural Simulation Tool), Scholarpedia