Abstract
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations that allow neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations reproduce many functions of biological neural networks that have not been achieved by other models in the open literature, and they provide logically coherent answers to many long-standing neuroscientific questions. However, biological justification of these functional models and their processing operations is required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.
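As a rough intuition for the kind of mechanism the abstract names, and for terms such as orthogonal expansion (OE) and expanded correlation matrix (ECM) listed under Abbreviations below, the following minimal Python sketch shows one way an associative memory of this general flavor can store and retrieve patterns: a bipolar input vector is orthogonally expanded, learned by a Hebbian outer-product update into correlation vectors, and recalled as an empirical probability. This is an illustrative assumption, not the paper's formulation; the particular expansion, the update rule, and the names `orthogonal_expansion` and `CorrelationMemory` are hypothetical.

```python
import numpy as np

def orthogonal_expansion(x):
    """Expand a bipolar (+1/-1) vector x of length m into a 2**m vector of
    products of input subsets: (1, x1) x (1, x2) x ... Expansions of two
    distinct bipolar vectors are orthogonal (their dot product is zero)."""
    v = np.array([1.0])
    for xi in x:
        v = np.concatenate([v, xi * v])
    return v

class CorrelationMemory:
    """Toy associative memory: Hebbian outer-product learning on the
    expanded input, retrieval of an empirical label probability."""
    def __init__(self, input_dim):
        self.D = np.zeros(2 ** input_dim)  # correlation with the label
        self.C = np.zeros(2 ** input_dim)  # correlation with the constant 1

    def learn(self, x, label):
        v = orthogonal_expansion(x)
        self.D += label * v                # supervised Hebbian update
        self.C += v

    def recall(self, x):
        v = orthogonal_expansion(x)
        num, den = self.D @ v, self.C @ v
        # Estimate of P(label = +1 | x); 0.5 (maximal uncertainty) if unseen.
        return 0.5 if den == 0 else 0.5 * (1.0 + num / den)

mem = CorrelationMemory(input_dim=3)
mem.learn(np.array([1, -1, 1]), label=+1)
mem.learn(np.array([-1, 1, 1]), label=-1)
print(mem.recall(np.array([1, -1, 1])))    # 1.0: pattern seen only with +1
```

Because the expansions of distinct bipolar inputs are orthogonal, recall of a stored pattern is not disturbed by the other stored patterns, and the returned value behaves like a subjective probability (SPD in the abbreviations): it approaches the relative frequency of the +1 label among the stored occurrences of the query.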
Abbreviations
- ECM: Expanded correlation matrix
- FSI: Feature subvector index
- GECM: General expanded correlation matrix
- GOE: General orthogonal expansion
- NXOR: Not-exclusive-or
- OE: Orthogonal expansion
- PU: Processing unit
- PU(n): Processing unit on feature subvector index n
- RTS: Rotation, translation and scaling
- SPD: Subjective probability distribution
- THPAM: Temporal hierarchical probabilistic associative memory
- XOR: Exclusive-or
Additional information
An erratum to this article can be found at http://dx.doi.org/10.1007/s11571-010-9127-8
About this article
Cite this article
Lo, J.TH. Functional model of biological neural networks. Cogn Neurodyn 4, 295–313 (2010). https://doi.org/10.1007/s11571-010-9110-4