Event Abstract

The same neurons form a visual place code and an auditory rate code in the primate SC

  • Duke University, United States

A computational hurdle in multisensory integration is that visual and auditory signals potentially have different representational formats. In the early visual pathway, neurons have receptive fields that tile the visual scene and produce a "place code" for stimulus location. In contrast, the binaural computation performed in the auditory pathway has been suggested to produce a "rate code" for sound location. In this latter format, neurons in the inferior colliculus (IC) and auditory cortex respond across a broad range of locations with activity levels that scale in proportion to sound position, exhibiting maximum responses to sounds at extreme contralateral positions (Groh et al., 2003; Werner-Reiss and Groh, 2008). The superior colliculus (SC) has been thought to employ a place code for sensory and saccade-related activity, with the same neurons controlling auditory as well as visual saccades. However, there is little quantitative information addressing the coding of sound location in the monkey SC. Here, we examined individual auditory neurons in the primate SC to determine whether they have circumscribed receptive fields (a place code) or monotonic spatial response patterns (a rate code). A monotonic code for sound location would imply a discrepancy between visual and auditory processing in the SC. We recorded sensory and saccade-related activity from 180 neurons in the intermediate and deep layers of the SC of two rhesus monkeys. Noise bursts or LED flashes were presented at one of 9 locations spanning +/- 24 degrees in the horizontal dimension. The monkeys made saccades from different initial eye positions to these visual or auditory targets in an overlap saccade task. To quantify the representational format, we fit Gaussian and sigmoid curves to each neuron's response functions. The rationale was that Gaussians would be substantially better than sigmoids at fitting the circumscribed, tuned response patterns characteristic of a place code, but that sigmoids and broad half-Gaussians would be equally successful at fitting the monotonic tuning patterns characteristic of a rate code. We found that most neurons had monotonic response patterns for auditory stimuli, with activity increasing toward the contralateral ear, even though the same neurons had non-monotonic response patterns for visual stimuli. On auditory trials, sigmoids captured the response patterns as well as Gaussians did; on visual trials, Gaussians fit the tuning curves significantly better than sigmoids. This pattern held for both sensory and saccade-related activity. Our findings imply that a read-out algorithm is required to reconcile this discrepancy: it must operate on either a place code or a rate code and convert visual and auditory signals into motor commands that produce equally accurate saccades for both types of signals. In line with this, we have developed several models that transform signals from a place code to a rate code or vice versa (Groh and Sparks, 1992; Groh, 2001; Porter and Groh, 2006).
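To make the model-comparison logic concrete, here is a minimal, self-contained Python sketch of fitting Gaussian versus sigmoid curves to a single tuning curve and comparing goodness of fit. The numerical details (synthetic firing rates, initial parameter guesses, an R-squared criterion) are illustrative assumptions, not the abstract's actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, b):
    """Tuned response: peak a at mu, width sigma, baseline b (place-code shape)."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + b

def sigmoid(x, a, x0, k, b):
    """Monotonic response: amplitude a, inflection x0, slope k, baseline b (rate-code shape)."""
    return a / (1.0 + np.exp(-k * (x - x0))) + b

# 9 target locations spanning +/- 24 deg, as in the task; rates are synthetic.
locations = np.linspace(-24, 24, 9)
rng = np.random.default_rng(0)
rates = 40.0 / (1.0 + np.exp(-0.15 * locations)) + 5.0 + rng.normal(0.0, 2.0, locations.size)

g_par, _ = curve_fit(gaussian, locations, rates,
                     p0=[np.ptp(rates), 0.0, 12.0, rates.min()], maxfev=10000)
s_par, _ = curve_fit(sigmoid, locations, rates,
                     p0=[np.ptp(rates), 0.0, 0.2, rates.min()], maxfev=10000)

def r2(y, yhat):
    """Fraction of response variance explained by the fitted curve."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print("Gaussian R^2:", r2(rates, gaussian(locations, *g_par)))
print("Sigmoid  R^2:", r2(rates, sigmoid(locations, *s_par)))
# Place-code logic: Gaussian R^2 >> sigmoid R^2 for tuned responses,
# while the two are comparable for monotonic (rate-code) responses.
```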
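For the read-out idea in the closing sentences, the sketch below shows one toy place-to-rate transformation: a population of Gaussian-tuned units tiling space is collapsed into a single activity-weighted output that varies monotonically with stimulus location. This is an illustration in the spirit of the cited models, not their published form; the map size, tuning width, and weighting scheme are assumptions.

```python
import numpy as np

# Preferred locations of a hypothetical place-coded map tiling +/- 24 deg.
preferred = np.linspace(-24, 24, 25)

def place_code(stim_deg, sigma=6.0):
    """Gaussian-tuned population response: a hill of activity centered on the stimulus."""
    return np.exp(-0.5 * ((preferred - stim_deg) / sigma) ** 2)

def place_to_rate(activity):
    """Activity-weighted sum: each unit votes for its preferred location in
    proportion to its firing, collapsing the hill into one scalar output."""
    return float(activity @ preferred / activity.sum())

for stim in (-20, -10, 0, 10, 20):
    print(f"stimulus {stim:+d} deg -> output {place_to_rate(place_code(stim)):+.2f}")
# The single output level tracks stimulus location monotonically, i.e. a rate code.
```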
Acknowledgements: This work was supported by CRCNS grant R01 NS50942. JAL was also supported by a Korea Research Foundation Grant funded by the Korean Government (KRF-2008-356-H00003).

Conference: Computational and Systems Neuroscience 2010, Salt Lake City, UT, United States, 25 Feb - 2 Mar, 2010.

Presentation Type: Oral Presentation

Topic: Oral presentations

Citation: Lee J and Groh JM (2010). The same neurons form a visual place code and an auditory rate code in the primate SC. Front. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2010. doi: 10.3389/conf.fnins.2010.03.00020

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 17 Feb 2010; Published Online: 17 Feb 2010.

* Correspondence: Jungah Lee, Duke University, Durham, United States, vision.jungah.lee@gmail.com