
Neurocomputing, Volumes 38–40, June 2001, Pages 1541–1547

Two separate processing streams in a cortical-type architecture

https://doi.org/10.1016/S0925-2312(01)00554-9

Abstract

We present a model of a cortical-type architecture that uses a latency coding scheme to process inputs as fast as possible under given neuronal constraints, while retaining the potential for a detailed analysis. The basic processing elements of our network are columnar structures called minicolumns. We propose that these minicolumns are the prime constituents of cortical architecture.

Introduction

Information processing in the mammalian cortex must satisfy two opposing objectives: extracting the behaviorally most relevant features in minimal time, and providing a detailed description of the input for fine analysis and learning. Furthermore, the system has to operate within the framework of cortical architecture: a processing hierarchy with spiking neurons as its basic elements. We propose a model of a hierarchical processing system of spiking neurons that fulfills these requirements.

Section snippets

Latency encoding for fast processing

It was noted in [9] that visual information can be analyzed extremely quickly (within about 100–150 ms). Given the depth of the cortical hierarchy, a coarse recognition of the sensory input must therefore be possible based on the first elicited afferent spikes, with about 10 ms available for the information exchange between two network elements. Because of the low firing rates of cortical neurons (well below 100 Hz [1]), a single spike per neuron has to transmit the information (see also [8]). We propose that the most efficient coding …
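Such a time-to-first-spike code can be sketched in a few lines of Python: stronger inputs elicit earlier spikes, and each neuron emits at most one spike per processing cycle. This is only a minimal sketch; the function name, the 10 ms encoding window and the linear intensity-to-latency mapping are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def latency_encode(intensities, t_window=10.0, t_min=0.5):
    """Map input intensities to first-spike latencies in ms.

    Stronger inputs fire earlier; non-positive inputs stay silent
    (latency = inf), so every neuron contributes at most one spike.
    """
    x = np.asarray(intensities, dtype=float)
    latencies = np.full(x.shape, np.inf)
    active = x > 0.0
    if active.any():
        # Normalize active intensities to (0, 1] and invert: the strongest
        # input fires at t_min, weaker inputs fire later within t_window.
        norm = x[active] / x[active].max()
        latencies[active] = t_min + (1.0 - norm) * (t_window - t_min)
    return latencies

# Example: four input intensities -> spike times within a ~10 ms level budget
print(latency_encode([0.9, 0.1, 0.0, 0.5]))  # the strongest input fires earliest
```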

Minicolumns

The requirement to complete the first, coarse analysis of the input within about 150 ms not only necessitates a one-spike-only coding strategy; it also leaves little time to go through several processing iterations using feedback. It must be possible to extract the behaviorally relevant features in a single feedforward-only processing cycle. There should therefore be a specialized processing system that is optimized to provide a coarse but fast initial categorization of the …
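To make the time budget explicit, a back-of-the-envelope calculation with the numbers quoted in this paper (roughly 10 ms per inter-level exchange, a total recognition budget of about 150 ms, and the six-level hierarchy of the model below) shows that a single feedforward sweep already consumes a large part of the budget; the calculation itself is only an illustrative sketch.

```python
# Rough timing budget for a single feedforward sweep through the hierarchy.
LEVELS = 6          # hierarchy depth of the model described below
DELAY_MS = 10.0     # approximate per-level information exchange time
BUDGET_MS = 150.0   # time within which a coarse recognition must be available

sweep_ms = LEVELS * DELAY_MS                       # one feedforward pass
spare_sweeps = (BUDGET_MS - sweep_ms) / sweep_ms   # room left for feedback loops
print(f"one sweep: {sweep_ms:.0f} ms, spare full passes: {spare_sweeps:.1f}")
# -> one sweep: 60 ms, spare full passes: 1.5  (too little for iterative feedback)
```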

Model architecture

We implemented a simplified version of the model described in [5], with its two processing streams, for the special purpose of a vision task. The general network structure is a hierarchy of six processing levels (see Fig. 2), each consisting of an array of minicolumns. Minicolumns (see Fig. 1) are modeled as two spiking neurons corresponding to A1 and A2. We employ standard integrate-and-fire (I&F) model neurons [10] with exponentially decaying EPSPs (time constant 20 ms) and an absolute refractory period (3 ms). Both …
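A minimal simulation of one such neuron might look as follows. The 20 ms EPSP time constant and the 3 ms absolute refractory period are taken from the text; the threshold, time step, weights and spike times are illustrative assumptions.

```python
import numpy as np

def simulate_if_neuron(input_spikes, weights, t_max=100.0, dt=0.1,
                       tau=20.0, threshold=1.0, t_refr=3.0):
    """Integrate-and-fire neuron with exponentially decaying EPSPs.

    input_spikes: list of (time_ms, synapse_index) afferent spikes
    weights:      one synaptic weight per afferent
    Returns the output spike times in ms.
    """
    v = 0.0
    last_spike = -np.inf
    spikes_out = []
    events = sorted(input_spikes)
    for t in np.arange(0.0, t_max, dt):
        v *= np.exp(-dt / tau)               # EPSP decay, tau = 20 ms
        while events and events[0][0] <= t:  # integrate arriving spikes
            _, syn = events.pop(0)
            v += weights[syn]
        if v >= threshold and (t - last_spike) >= t_refr:
            spikes_out.append(t)             # threshold crossing -> spike
            last_spike = t                   # 3 ms absolute refractory period
            v = 0.0                          # reset membrane potential
    return spikes_out

# Example: two early, strong afferent spikes drive the neuron across threshold
print(simulate_if_neuron([(2.0, 0), (3.0, 1), (30.0, 2)], weights=[0.6, 0.6, 0.3]))
```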

Vision system

The model architecture (see Fig. 2) was tested on a moderately complex vision task: invariant recognition of a small number of objects. Because we did not employ learning strategies, the objects and all filters (connections) were hand-designed. Some of the objects to be detected are shown in the right part of Fig. 3. There are eight different objects stored in eight different orientations. The first processing level uses a standard Gabor-wavelet operation to extract lines. The third level …
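The first-level line extraction could be sketched roughly as below. The kernel size, bandwidth, wavelength and the number of filter orientations are illustrative parameter choices, not those used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0, gamma=0.5):
    """Real-valued Gabor kernel tuned to lines of orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()  # zero mean, so uniform regions give no response

def extract_lines(image, n_orientations=8):
    """Return one orientation-response map per Gabor filter orientation."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [convolve2d(image, gabor_kernel(t), mode='same') for t in thetas]

# Example: a vertical bar excites the filter of the matching orientation most
img = np.zeros((32, 32))
img[:, 15:17] = 1.0
responses = extract_lines(img)
print([round(float(abs(r).max()), 2) for r in responses])
```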

Results

The system was tested with the eight stored high-level objects undergoing various transformations. We briefly summarize the results (a sketch of how such transformed test stimuli might be generated follows the list):

  • The system is fully translation invariant.

  • The system can easily tolerate a rotation of up to 10° from one of the stored views.

  • A scaling of up to 10% in both directions does not reduce performance.

  • Noise tolerance: White noise poses no real problem; objects are still recognized by the system even when using normally distributed noise with a standard deviation of …
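A sketch of how such transformed and noisy test stimuli might be generated is given below, using scipy.ndimage transforms. The translation range and the noise standard deviation are placeholder values; the paper's actual standard deviation is truncated in this snippet.

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

rng = np.random.default_rng(0)

def make_test_views(obj, max_rot=10.0, max_scale=0.1, noise_sd=0.2, n=5):
    """Generate randomly transformed copies of a stored object view.

    Translation, rotation up to +/-10 degrees, scaling up to +/-10 % and
    additive Gaussian noise mirror the transformations listed above.
    """
    views = []
    for _ in range(n):
        v = shift(obj, rng.integers(-4, 5, size=2))              # translation
        v = rotate(v, rng.uniform(-max_rot, max_rot), reshape=False)
        v = zoom(v, 1.0 + rng.uniform(-max_scale, max_scale))    # scaling
        v = v + rng.normal(0.0, noise_sd, size=v.shape)          # white noise
        views.append(v)
    return views

# Example: distort a simple synthetic object and inspect the resulting shapes
obj = np.zeros((32, 32))
obj[10:22, 10:22] = 1.0
print([v.shape for v in make_test_views(obj)])
```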

Summary

We presented a biologically plausible model architecture that combines fast recognition with the potential for a feedback-controlled, iterative refinement of the analysis. The main aspects of our model are latency encoding and columnar processing elements. It was demonstrated that our model processing system can be applied even to large-scale networks (on the order of 10^5 neurons). Although we employed a very primitive vision system with manually designed features, recognition performance was very good.

Tobias Rodemann received his master's degree in physics in 1998 from the Institut für Neuroinformatik at the University of Bochum. Since 1998 he has been a graduate student at the Institut für Neuroinformatik and, at the same time, a member of the Honda Future Technology Research division in Europe (Honda R&D Europe (Germany)). His current research focuses on information processing in systems of spiking neurons and on the functional architecture of the neocortex.



Dr. Koerner received his Dr. of Engineering in 1977 and his Dr. of Science in 1984 from the Technical University of Ilmenau, Germany. From 1976 to 1984 he was a senior researcher at the Technical University of Ilmenau; from 1984 to 1987 a member of the Bioholonics Project of JRDC (Tokyo); and from 1988 to 1992 professor and head of the Department of Neurocomputing and Cognitive Systems at the Technical University of Ilmenau, Germany. From 1992 to 1997 he worked as a Chief Scientist at the Honda R&D Co. Wako Research Center, and since 1997 he has been Head of the Future Technology Research Division at Honda R&D Europe (Germany). His research focuses on brain-like artificial neural systems, the smooth transition between signal and symbol processing, and the self-organization of knowledge representation.
