Attractor Memory with Self-organizing Input

  • Conference paper
Biologically Inspired Approaches to Advanced Information Technology (BioADIT 2006)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3853)

Abstract

We propose a neural-network-based autoassociative memory system for unsupervised learning. The system is intended as an example of how a general information processing architecture, similar to that of the neocortex, could be organized. The network's units are arranged into two separate groups, called populations: one input and one hidden. The units in the input population form receptive fields that project sparsely onto the units of the hidden population, and these forward projections are trained with competitive learning. The hidden population implements an attractor memory, and a back projection from the hidden to the input population is trained with a Hebbian learning rule. The system can process correlated and densely coded patterns, which regular attractor neural networks handle poorly, and it performs well on typical attractor-network tasks such as pattern completion, noise reduction, and prototype extraction.
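
The abstract describes a three-stage pipeline: a competitively learned forward projection that produces a sparse hidden code, an attractor memory over the hidden population, and a Hebbian back projection that reconstructs the input. The NumPy sketch below illustrates that pipeline under stated assumptions; it is not the authors' implementation. The layer sizes, learning rates, the k-winners-take-all coding, and the plain Hopfield-style attractor (the paper's own attractor network may use a different learning rule and dynamics) are all stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, K = 64, 32, 4          # hypothetical sizes; K active hidden units

# Forward projection (input -> hidden), trained with competitive learning.
W_fwd = rng.normal(0.0, 0.1, (N_HID, N_IN))

def encode(x, learn=False, lr=0.05):
    """Sparse hidden code: the K hidden units with the largest drive win.
    With learn=True, each winner's weights move toward the input pattern
    (a simple k-winners-take-all variant of competitive learning)."""
    winners = np.argsort(W_fwd @ x)[-K:]
    if learn:
        W_fwd[winners] += lr * (x - W_fwd[winners])
    h = np.zeros(N_HID)
    h[winners] = 1.0
    return h

# Hidden population: Hopfield-style attractor memory with Hebbian storage
# (an assumed stand-in for the paper's attractor network).
W_attr = np.zeros((N_HID, N_HID))

def store(h):
    s = 2.0 * h - 1.0                # map {0,1} code to {-1,+1}
    W_attr += np.outer(s, s)         # Hebbian outer-product storage
    np.fill_diagonal(W_attr, 0.0)

def recall(h, steps=10):
    s = 2.0 * h - 1.0
    for _ in range(steps):           # synchronous updates toward a fixed point
        s = np.where(W_attr @ s >= 0.0, 1.0, -1.0)
    return (s + 1.0) / 2.0

# Back projection (hidden -> input), trained with a Hebbian rule.
W_back = np.zeros((N_IN, N_HID))

def learn_back(x, h, lr=0.1):
    W_back += lr * np.outer(x, h)

# Training: self-organize the code, store it, learn the back projection.
patterns = (rng.random((20, N_IN)) < 0.3).astype(float)   # toy dense binary data
for _ in range(30):                  # competitive-learning epochs
    for x in patterns:
        encode(x, learn=True)
for x in patterns:
    h = encode(x)
    store(h)
    learn_back(x, h)

# Pattern completion: corrupt an input, run it to an attractor, project back.
x = patterns[0].copy()
x[: N_IN // 4] = 0.0                 # occlude a quarter of the input
x_hat = W_back @ recall(encode(x))
print("reconstruction correlation:", np.corrcoef(x_hat, patterns[0])[0, 1])
```

The point of the sketch is the division of labor the abstract argues for: the self-organized sparse code decorrelates the input, so the attractor stage only ever sees patterns it is well suited to store, while the back projection maps completed hidden states back into input space.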




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Johansson, C., Lansner, A. (2006). Attractor Memory with Self-organizing Input. In: Ijspeert, A.J., Masuzawa, T., Kusumoto, S. (eds) Biologically Inspired Approaches to Advanced Information Technology. BioADIT 2006. Lecture Notes in Computer Science, vol 3853. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11613022_22

  • DOI: https://doi.org/10.1007/11613022_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-31253-6

  • Online ISBN: 978-3-540-32438-6

  • eBook Packages: Computer Science, Computer Science (R0)
