
Vision Research

Volume 38, Issue 18, September 1998, Pages 2769-2786

Neural dynamics of motion processing and speed discrimination

https://doi.org/10.1016/S0042-6989(97)00372-6
Under a Creative Commons license (open archive)

Abstract

A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size–speed correlation can be derived from simple mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. They are proposed to occur in the V1→MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
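The mechanism summarized above — untuned transient responses passed through output thresholds that covary with filter size, followed by competition — can be illustrated with a minimal numerical sketch. This is not the paper's model or equations; the response function, sizes, threshold slope, and inhibition rule below are all hypothetical choices, made only to show how the three ingredients can jointly produce a size–speed correlation in a population code.

```python
import numpy as np

# Hypothetical parameters (not from the paper): four short-range
# filters of increasing receptive-field size, with output thresholds
# that grow in proportion to size.
SIZES = np.array([1.0, 2.0, 3.0, 4.0])
THETA = 0.2 * SIZES  # thresholds covary with filter size

def transient(v):
    """Toy saturating transient response to a stimulus of speed v:
    faster motion produces larger temporal transients, up to saturation."""
    return v / (v + 1.0)

def population_code(v):
    """Speed-tuned responses from untuned transients.

    1. Threshold: each filter's transient response must exceed its
       size-dependent threshold, so larger filters only respond at
       higher speeds.
    2. Competition: each filter is inhibited by the summed activity of
       all larger (faster-preferring) filters, sharpening the tuning.
    """
    t = np.maximum(0.0, transient(v) - THETA)
    inhib = np.cumsum(t[::-1])[::-1] - t  # activity of all larger filters
    return np.maximum(0.0, t - inhib)

def preferred_size(v):
    """Filter size carrying the peak of the population code at speed v."""
    return SIZES[np.argmax(population_code(v))]
```

In this toy setup, the peak of the population code migrates from the smallest filter at slow speeds toward progressively larger filters as speed increases (for example, size 1.0 at v = 0.5, size 2.0 at v = 2, size 3.0 at v = 5), which is the size–speed correlation the abstract refers to.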

Keywords

Vision
Motion
Speed
Velocity
Neural network
MT
