
Exact Bayesian bin classification: a fast alternative to Bayesian classification and its application to neural response analysis

Journal of Computational Neuroscience

Abstract

We investigate the general problem of signal classification and, in particular, that of assigning stimulus labels to neural spike trains recorded from single cortical neurons. Finding efficient ways of classifying neural responses is especially important in experiments involving rapid presentation of stimuli. We introduce a fast, exact alternative to Bayesian classification. Instead of estimating the class-conditional densities p(x|y) (where x is a scalar function of the feature(s), y the class label) and converting them to P(y|x) via Bayes’ theorem, this probability is evaluated directly and without the need for approximations. This is achieved by integrating over all possible binnings of x with an upper limit on the number of bins. Computational time is quadratic in both the number of observed data points and the number of bins. The algorithm also allows for the computation of feedback signals, which can be used as input to subsequent stages of inference, e.g. neural network training. Responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analysed. Information latency and response duration increase nonlinearly with presentation duration, suggesting that neural processing speeds adapt to presentation speeds.
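The paper's actual recursions, priors, and feedback-signal computations are specified in the full text, which is not reproduced here. Purely as a rough illustration of the idea the abstract sketches (integrating per-bin evidence over all contiguous binnings of a scalar feature, with an upper limit on the number of bins), the following is a minimal dynamic-programming sketch in Python. Every name in it, the symmetric Dirichlet prior over class probabilities, the flat prior over the number of bins, and the restriction of bin boundaries to gaps between data points are assumptions of this sketch, not the authors' implementation; this naive version costs O(max_bins · N²) evidence sums plus an O(N²) interval table, which need not match the paper's stated complexity in detail.

```python
import numpy as np
from scipy.special import gammaln


def log_bin_evidence(counts, alpha=1.0):
    """Dirichlet-multinomial log evidence for the class labels falling
    into one bin, under a symmetric Dirichlet(alpha) prior (an assumption
    of this sketch, not necessarily the paper's prior)."""
    n_classes = len(counts)
    n = counts.sum()
    return (gammaln(n_classes * alpha) - gammaln(n_classes * alpha + n)
            + np.sum(gammaln(counts + alpha)) - n_classes * gammaln(alpha))


def log_sum_over_binnings(x, y, n_classes, max_bins):
    """Sum bin evidences over all contiguous binnings of the sorted scalar
    feature x into at most `max_bins` bins, by dynamic programming.
    Bin boundaries are assumed to lie only between data points."""
    order = np.argsort(x)
    y = np.asarray(y)[order]
    n = len(y)

    # Cumulative class counts, so counts in interval [i, j) = cum[j] - cum[i].
    cum = np.zeros((n + 1, n_classes))
    for i, label in enumerate(y):
        cum[i + 1] = cum[i]
        cum[i + 1, label] += 1.0

    # Log evidence of every contiguous interval [i, j): O(n^2) terms.
    log_l = np.full((n + 1, n + 1), -np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            log_l[i, j] = log_bin_evidence(cum[j] - cum[i])

    # dp[k, j]: log of the summed evidence over all ways to split the
    # first j points into exactly k non-empty bins.
    dp = np.full((max_bins + 1, n + 1), -np.inf)
    dp[0, 0] = 0.0
    for k in range(1, max_bins + 1):
        for j in range(1, n + 1):
            dp[k, j] = np.logaddexp.reduce(dp[k - 1, :j] + log_l[:j, j])

    # Marginalise over the number of bins, 1..max_bins (flat prior assumed).
    return np.logaddexp.reduce(dp[1:, n])


# Toy usage: two classes whose scalar responses separate around x = 0.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 1.0, 50), rng.normal(1.0, 1.0, 50)])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
print(log_sum_over_binnings(x, y, n_classes=2, max_bins=5))
```

In the spirit of the abstract, one could then obtain a class posterior P(y|x) for a test response by recomputing such summed evidences with the test point hypothetically assigned to each candidate class and normalising; the details, and the feedback signals derived from these quantities, are in the article itself.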



Author information

Correspondence to D. Endres.

Additional information

Action Editor: Alexander Borst


Cite this article

Endres, D., Földiák, P. Exact Bayesian bin classification: a fast alternative to Bayesian classification and its application to neural response analysis. J Comput Neurosci 24, 21–35 (2008). https://doi.org/10.1007/s10827-007-0039-5
