Neurocomputing

Volume 36, Issues 1–4, February 2001, Pages 225–233

Letters
Optimization of dynamic neural fields

https://doi.org/10.1016/S0925-2312(00)00328-3

Abstract

There is a growing interest in using dynamic neural fields for modeling biological and technical systems, but constructive ways to set up such models are still missing. We discuss gradient-based, evolutionary and hybrid algorithms for data-driven adaptation of neural field parameters. The proposed methods are evaluated using artificial and neuro-physiological data.

Introduction

Neural fields (NFs) have been introduced as mathematical descriptions of cortical neural tissue, where information processing takes place in the form of excitation patterns [22], [1], [12], [7]. They permit a systematic treatment of dynamical processes not only in distributed neural representations, but also in the context of the dynamical-systems approach to perception and behavior, behavior-based robotics, computer vision, and sensor fusion; see [17], [8], [19] for further references. However, to date there exists no constructive way to set up NF models. In order to clarify and substantiate experimental findings, an expert has to adjust the parameters of the NF by hand. Since the fields exhibit highly non-linear dynamics, this is a non-trivial task even for qualitative modeling. The choice of an appropriate parameter set is also an important problem for technical applications, where NFs have to be designed to perform a given task, e.g., the control of robot behavior [4], [5].

The most common techniques for data-driven parameter adaptation of recurrent neural networks (RNNs) are gradient-based methods [6], [14]. In this study, we adapt these algorithms to adjust the parameters of NFs. For comparison, we apply an advanced evolution strategy [9], [10] to NF optimization. Finally, we present a hybrid algorithm that combines the local search of gradient-based methods with the more volume-oriented search of evolutionary algorithms; one possible realization of such a combination is sketched below.
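To fix ideas, the following is a minimal sketch of one such hybridization: a Lamarckian scheme in which every offspring of a simple $(\mu,\lambda)$-ES is refined by a few gradient steps before selection. The helpers `nf_error` and `grad_error`, and all hyperparameter values, are assumptions made purely for illustration; the paper's actual global search component is the CMA-ES described in a later section.

```python
import numpy as np

def hybrid_es(theta0, nf_error, grad_error, mu=5, lam=20, sigma=1.0,
              local_steps=5, lr=1e-3, generations=50):
    """Illustrative Lamarckian hybrid: a plain (mu, lambda)-ES with
    intermediate recombination, where each offspring is refined by a few
    gradient-descent steps before selection. nf_error(theta) -> scalar
    error and grad_error(theta) -> dE/dtheta are assumed helpers; the
    paper's global search uses the more sophisticated CMA-ES instead."""
    rng = np.random.default_rng(0)
    theta0 = np.asarray(theta0, float)
    parents = [theta0.copy() for _ in range(mu)]
    for _ in range(generations):
        centroid = np.mean(parents, axis=0)     # intermediate recombination
        offspring = []
        for _ in range(lam):
            child = centroid + sigma * rng.standard_normal(theta0.size)
            for _ in range(local_steps):        # local gradient refinement
                child = child - lr * grad_error(child)
            offspring.append(child)
        offspring.sort(key=nf_error)            # comma selection: mu best offspring
        parents = offspring[:mu]
    return min(parents, key=nf_error)
```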

Section snippets

Neural field model

We restrict our investigation to a one-dimensional NF formulation based on the model first analyzed by Amari [1]. However, extending the optimization algorithms to higher-dimensional fields and coupled systems of NFs is straightforward.

Let $u_x(t)$ describe the average membrane potential at time $t$ of a neuron located at site $x$ of the field. Its average activity (mean firing rate) is given by the transfer function $f[u_x(t)]$, which is assumed to be of sigmoidal shape, $f(u)=\alpha/(1+\exp[\beta(u-u_{\mathrm{shift}})])$ …
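As an illustration, a field of this type can be simulated along the following lines. The sketch assumes Amari-type dynamics $\tau\,\dot u_x = -u_x + h + \sum_{x'} w(x-x')\,f[u_{x'}] + s_x$ with a difference-of-Gaussians interaction kernel $w$, explicit Euler integration with unit step size, and a $1/n$ kernel scaling; these discretization details and all names are illustrative choices, not specifications taken from the paper.

```python
import numpy as np

def simulate_field(s, g_ex=40.0, sig_ex=10.0, g_inh=60.0, sig_inh=40.0,
                   h=3.0, alpha=1.2, beta=-1.5, u_shift=0.2, tau=5.0):
    """Euler simulation of a 1-D Amari-type neural field.

    s: stimulus array of shape (T, n). Returns potentials u of shape (T, n).
    Assumed dynamics: tau * du_x/dt = -u_x + h + sum_x' w(x - x') f(u_x') + s_x.
    The 1/n kernel scaling and the unit step size are illustrative
    discretization choices, not taken from the paper.
    """
    T, n = s.shape
    x = np.arange(n)
    dist = x[None, :] - x[:, None]                     # pairwise distances
    w = (g_ex * np.exp(-dist**2 / (2 * sig_ex**2))     # local excitation
         - g_inh * np.exp(-dist**2 / (2 * sig_inh**2)))  # broader inhibition
    f = lambda u: alpha / (1.0 + np.exp(beta * (u - u_shift)))  # sigmoidal rate
    u = np.zeros((T, n))
    u[0] = h                                           # start from the relaxed field
    for t in range(T - 1):
        du = -u[t] + h + w @ f(u[t]) / n + s[t]        # field dynamics
        u[t + 1] = u[t] + du / tau                     # explicit Euler step
    return u
```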

Gradient-based optimization

The goal is to adapt the NF parameters such that, starting from the relaxed field, for all $t\in[0,T]$ and $x=1,\dots,n$ either the potential $u_x(t)$ or the average activity $f[u_x(t)]$ follows a desired trajectory $d_x(t)$ for a given stimulus pattern $s_x(t)$. Therefore, we define an error function $E=\int_0^T E(t)\,dt$ and perform gradient descent on $E$ with respect to the parameters $\theta$. For each parameter $\theta_i$, $i=1,\dots,m$, we have to calculate
$$\frac{dE}{d\theta_i}=\int_{t=0}^{T}\sum_{x=1}^{n}e_x(t)\,\frac{du_x(t)}{d\theta_i}\,dt\qquad\text{with}\qquad e_x(t)=\mu_x(t)\,\frac{\partial E(t)}{\partial u_x(t)},$$
where the masking function $\mu$ …
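A forward ("sensitivity") method propagates the quantities $p_x(t)=du_x(t)/d\theta_i$ alongside the field state. The sketch below, which builds on the assumed discretization of the previous section, illustrates this for the single resting-level parameter $h$ and an instantaneous error $E(t)=\sum_x [u_x(t)-d_x(t)]^2$ weighted by a masking array; this error definition is an illustrative stand-in, not the paper's.

```python
import numpy as np

def grad_wrt_h(s, d, mu, g_ex=40.0, sig_ex=10.0, g_inh=60.0, sig_inh=40.0,
               h=3.0, alpha=1.2, beta=-1.5, u_shift=0.2, tau=5.0):
    """Forward-mode gradient dE/dh for E = sum_t E(t) with
    E(t) = sum_x (u_x(t) - d_x(t))**2, masked by mu (shape (T, n)).
    The sensitivity p_x(t) = du_x(t)/dh obeys the linearized dynamics
    tau * dp/dt = -p + 1 + w @ (f'(u) * p) / n, since d(rhs)/dh = 1.
    Discretization choices (Euler, dt = 1, 1/n kernel scaling) are
    illustrative assumptions, as in the earlier sketch."""
    T, n = s.shape
    x = np.arange(n)
    dist = x[None, :] - x[:, None]
    w = (g_ex * np.exp(-dist**2 / (2 * sig_ex**2))
         - g_inh * np.exp(-dist**2 / (2 * sig_inh**2)))
    f  = lambda u: alpha / (1.0 + np.exp(beta * (u - u_shift)))
    df = lambda u: -beta * f(u) * (1.0 - f(u) / alpha)   # derivative f'(u)
    u = np.full(n, h)          # field state
    p = np.zeros(n)            # sensitivity du/dh
    grad = 0.0
    for t in range(T):
        e = mu[t] * 2.0 * (u - d[t])          # e_x(t) = mu_x(t) dE(t)/du_x(t)
        grad += e @ p                          # accumulate dE/dh
        du = -u + h + w @ f(u) / n + s[t]
        dp = -p + 1.0 + w @ (df(u) * p) / n    # linearized field dynamics
        u, p = u + du / tau, p + dp / tau      # joint Euler step
    return grad
```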

Evolution strategy

Evolutionary algorithms (EAs) are a class of direct, easily parallelized optimization methods inspired by natural evolution; they have been applied successfully in the field of neural networks [23]. A variety of EAs for parameter optimization exists [3]. In our study, a state-of-the-art evolution strategy (ES) is employed, namely the $(\mu/\mu_I,\lambda)$-CMA-ES [9], [10]. The notation $(\mu/\mu_I,\lambda)$ indicates that the new parent population consists of the $\mu$ best of the $\lambda$ offspring and that intermediate recombination is used …
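In practice, such a CMA-ES run over the NF parameters can be set up with the open-source `cma` package, which implements descendants of the algorithm in [9], [10]. The objective below is a runnable toy stand-in; in the actual setting it would simulate the field and return the integrated trajectory error of the previous section.

```python
import numpy as np
import cma  # pip install cma

def nf_error(theta):
    # Placeholder objective: in the actual setting this would simulate the
    # field with parameters theta (cf. the earlier sketch) and return the
    # integrated squared deviation from the desired trajectory d_x(t).
    # A toy quadratic keeps the example runnable.
    return float(np.sum((theta - np.array([40.0, 10.0, 60.0, 40.0, 3.0])) ** 2))

theta0 = np.zeros(5)                        # initial guess, e.g. (g_ex, sig_ex, g_inh, sig_inh, h)
es = cma.CMAEvolutionStrategy(theta0, 5.0)  # initial global step size sigma
while not es.stop():
    candidates = es.ask()                   # sample lambda offspring
    es.tell(candidates, [nf_error(c) for c in candidates])  # rank; update mean and covariance
print(es.result.xbest)                      # best parameter vector found
```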

Artificial data

The first task is to reproduce the synthetic potential pattern shown in Fig. 1. It is generated by an NF as described by Eq. (1). The model consists of 100 neurons and is iterated for 100 time steps. The parameters are $g_{\mathrm{ex}}=40$, $\sigma_{\mathrm{ex}}=10$, $g_{\mathrm{inh}}=60$, $\sigma_{\mathrm{inh}}=40$, $h=3$, $\alpha=1.2$, $\beta=-1.5$, $u_{\mathrm{shift}}=0.2$, and $\tau=5$. Two Gaussian stimuli ($g_s=20$, $\sigma_s=10$) are presented during the first 50 time steps at positions 35 and 65, respectively; the activation fuses to a single peak. We assume that stimulus shape, position and duration …
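Using the illustrative `simulate_field` sketch from the model section, whose default arguments match the quoted parameter values, the synthetic target could be generated as follows (the unit time step per iteration is again an assumption):

```python
import numpy as np

n, T = 100, 100                      # 100 neurons, 100 time steps
x = np.arange(n)
g_s, sig_s = 20.0, 10.0              # stimulus amplitude and width
s = np.zeros((T, n))
for pos in (35, 65):                 # two Gaussian stimuli
    s[:50] += g_s * np.exp(-(x - pos) ** 2 / (2 * sig_s ** 2))  # first 50 steps only

target = simulate_field(s)           # synthetic potential pattern d_x(t)
```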

Discussion

Gradient-based techniques can be adopted for NF optimization, where forward methods should be preferred. Our experiments show that the optimization of NFs is a difficult task; the algorithms tend to get trapped in local minima. In particular, bifurcations in the NF dynamics complicate the search. These difficulties can partly be overcome by incorporating additional expert knowledge, e.g., by ensuring that necessary conditions for a desired type of solution are met. Nonetheless, in our …

Acknowledgements

This work was supported by the Ministerium für Schule und Weiterbildung, Wissenschaft und Forschung des Landes Nordrhein-Westfalen under Grant 516 - 102 001 98.

References (23)

  • N. Hansen, A. Ostermeier, Completely derandomized self-adaptation in evolution strategies, Evol. Comput. (special issue...