The role of action potential shape and parameter constraints in optimization of compartment models
Introduction
Compartmental neuron models often have many parameters that are difficult to estimate manually and are only loosely constrained within physiologically plausible ranges. Parameter estimation can be facilitated by automated search methods that minimize an objective (error) function representing salient differences between simulated and experimental (“target”) data. Our interest lies in modeling neurons of the central vestibular system, for which firing regularity and dynamics vary widely across the population [15]. Action potential (AP) shape, including the shape of the afterhyperpolarization, has been shown to be a critical determinant of neuronal firing dynamics and discharge regularity [1], [7]. Although previous modeling studies have included several important AP features and neuronal firing characteristics in their objective functions [16], [18], few have included the entire AP shape. A recent study [4] calculated AP shape error as the mean-squared difference between target and model voltage traces, which yields large errors when the model and target APs differ even slightly in time. We present an objective function that avoids this problem by first aligning target and model APs, then calculating the root mean-squared (RMS) error. Our function also includes errors in firing rate and discharge regularity.
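The alignment step can be sketched as follows. This is a minimal illustration, not the published implementation (the function name, peak-based alignment, and window size are assumptions), but it shows why aligning the APs before computing the RMS error removes the timing-offset penalty incurred by a direct point-by-point difference.

```python
import numpy as np

def ap_shape_error(target_v, model_v, window=50):
    """RMS error between two AP waveforms after aligning them on their
    voltage peaks, so small timing offsets do not inflate the error.
    Assumes each trace contains one AP whose peak lies at least
    `window` samples from the trace edges. (Hypothetical sketch.)"""
    ti = int(np.argmax(target_v))   # peak sample of target AP
    mi = int(np.argmax(model_v))    # peak sample of model AP
    t_seg = target_v[ti - window: ti + window]
    m_seg = model_v[mi - window: mi + window]
    n = min(len(t_seg), len(m_seg))
    return float(np.sqrt(np.mean((t_seg[:n] - m_seg[:n]) ** 2)))
```

With two identical AP shapes offset slightly in time, a direct point-by-point RMS error is large while the aligned error is essentially zero, which is exactly the distinction motivating the alignment step.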
Vanier and Bower [18] recently found that simplex-based simulated annealing [10], [13] can successfully optimize compartmental neuron models. We find, however, that the performance of their simulated annealing algorithm is impaired by the “wraparound” boundary condition applied when the algorithm encounters an infeasible point in parameter space: one that lies outside the specified boundaries. A point overshooting one side of the parameter space is relocated inward from the opposite side by the amount of the overshoot. Our parameter search method uses a variant of simplex-based simulated annealing that avoids infeasible points by “recentering” them about the current minimum [2]. We demonstrate that the recentering method is superior to the wraparound method, and that our objective function is effective in guiding the parameter search.
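The two boundary rules can be sketched as follows. This is one plausible reading of the schemes described above, not the exact formulas of [18] or of Cardoso et al. [2]: here wraparound re-enters the box from the opposite edge by the amount of the overshoot, while recentering reflects an infeasible coordinate about the current best (minimum-error) point and clips it into the box.

```python
import numpy as np

def wraparound(point, lo, hi):
    """Wraparound rule (sketch): a point overshooting one edge of the
    parameter box re-enters from the opposite edge by the overshoot."""
    return lo + np.mod(point - lo, hi - lo)

def recenter(point, lo, hi, best):
    """Recentering rule (one plausible reading): reflect each
    infeasible coordinate about the current best point, then clip
    into the box."""
    infeasible = (point < lo) | (point > hi)
    reflected = np.where(infeasible, 2.0 * best - point, point)
    return np.clip(reflected, lo, hi)
```

The key behavioral difference is that wraparound can teleport a slightly infeasible point to the far side of the parameter space, far from the region the search was exploring, whereas recentering keeps it near the current best estimate.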
Section snippets
Mathematical model
The NEURON simulation environment [8] was used for this study. The efficacy of our optimization scheme and objective function (available online [11]) was tested using the six-conductance, single-compartment model of Av-Ron and Vidal [1], which can simulate the AP shapes and response dynamics of medial vestibular nucleus (MVN) neurons. The transient sodium (Na) and delayed rectifier potassium (K) currents were described by the FitzHugh–Nagumo model [5], [12]. Remaining active currents (transient
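For orientation, the classic two-variable FitzHugh–Nagumo system can be integrated in a few lines. This sketch uses the textbook parameterization, not the reduced Hodgkin–Huxley form of the Av-Ron and Vidal model, so it is illustrative only.

```python
import numpy as np

def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_max=100.0):
    """Forward-Euler integration of the two-variable FitzHugh-Nagumo
    system with textbook parameters (illustrative; not the Av-Ron
    parameterization used in the paper):
        dv/dt = v - v^3/3 - w + i_ext
        dw/dt = (v + a - b*w) / tau
    Returns the trace of the fast (membrane-like) variable v(t)."""
    n = int(t_max / dt)
    v, w = -1.0, 1.0
    vs = np.empty(n)
    for i in range(n):
        v += dt * (v - v ** 3 / 3.0 - w + i_ext)
        w += dt * (v + a - b * w) / tau
        vs[i] = v
    return vs
```

With a sustained drive of `i_ext = 0.5` the system settles onto a relaxation limit cycle, producing the repetitive spiking that makes this reduction a useful stand-in for the Na/K spike-generating currents.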
Results
Fig. 2A shows the median model error and 95% confidence interval of the N2 searches for the recentering (solid) and wraparound (dashed) methods, as a function of simulation number. Under this condition, the wraparound and recentering methods performed equally well. Under all other optimization conditions, the recentering method identified models with significantly smaller errors than the wraparound method, and identified them more quickly (Fig. 2B–D; W2 not shown). Whereas the voltage trace of
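A per-condition summary of this kind (median final error with a 95% confidence interval over repeated searches) could be computed as follows. The paper does not state how its confidence intervals were obtained; a bootstrap percentile interval is shown here as one common choice.

```python
import numpy as np

def median_ci(errors, n_boot=2000, alpha=0.05, seed=0):
    """Median of final search errors across repeated optimization
    runs, with a bootstrap percentile confidence interval (the CI
    method is an assumption, not taken from the paper)."""
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    # Resample the runs with replacement and take each sample's median.
    boot_medians = np.median(
        rng.choice(errors, size=(n_boot, errors.size)), axis=1
    )
    lo, hi = np.percentile(boot_medians,
                           [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.median(errors)), (float(lo), float(hi))
```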
Discussion
We have constructed an objective function that effectively compares the AP shape and firing statistics of a compartmental model against target data in simulated annealing-based parameter optimization. The recentering method of boundary management is a significant improvement over the wraparound method, regardless of the size of imposed physiologic boundaries. This is an important step forward in reproducing a range of neuronal responses whose dynamics depend on AP shape. Future studies will
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant DBI-0305799 (awarded in 2003), and by NIH Grants DC05669 and RR16754.
References (18)
- M.F. Cardoso et al., The simplex-simulated annealing approach to continuous non-linear optimization, Comput. Chem. Eng. (1996)
- A.P. Davison et al., A reduced compartmental model of the mitral cell for use in network models of the olfactory bulb, Brain Res. Bull. (2000)
- R. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane, Biophys. J. (1961)
- et al., Relation of interspike baseline activity to the spontaneous discharges of primary afferents from the labyrinth of the toadfish, Opsanus tau, Brain Res. (1978)
- E. Av-Ron et al., Intrinsic membrane properties and dynamics of medial vestibular neurons: a simulation, Biol. Cybern. (1999)
- A.P. Davison, personal communication
- M.S. Goldman et al., Global structure, robustness, and modulation of neuronal models, J. Neurosci. (2001)
- M.L. Hines, N.T. Carnevale, The NEURON simulation environment, Neural Comput. (1997)
- J.H. Holland, Adaptation in Natural and Artificial Systems, second ed., MIT Press, Cambridge, MA, 1992
Christina Weaver is a postdoctoral fellow in the Department of Neuroscience at Mount Sinai School of Medicine (New York, NY). She received her B.S. (Hons) in mathematics from Mount St. Mary's College (1998), and her M.S. and Ph.D. in applied mathematics and statistics from Stony Brook University (2000 and 2003, respectively). She began her work at Mount Sinai as an NSF Postdoctoral Fellow in Interdisciplinary Informatics (2003–2005). Her research applies computational modeling techniques to investigate the contributions of morphology to neuronal firing dynamics, particularly in systems that exhibit neural integration and persistent activity.
Susan Wearne is a Mathematical Neuroscientist in the Center for Biomathematics and the Fishberg Department of Neuroscience, Mount Sinai School of Medicine, New York. She received her B.A. (Hons) in 1985 from the University of Sydney, Australia, with majors in mathematics and physiological psychology; her Ph.D. majoring in vestibular neuroscience from the University of Sydney in 1993, and her Masters in pure and applied mathematics from the University of New South Wales, Australia, in 1999. Her research interests include the physical and biological bases of fractional order dynamical systems, structural determinants of neural function, and biological bases of neural integration.