
Abstract

Implementations of artificial neural networks as analog VLSI circuits differ in their method of synaptic weight storage (digital weights, analog EEPROMs, or capacitive weights) and in whether learning is performed locally at the synapses or off-chip. In this paper we explain the principles of analog networks with in situ or local synaptic learning of capacitive weights, and present test results of CMOS implementations from our laboratory. Synapses for both simple Hebbian and mean field networks are investigated. Synaptic weights may be refreshed by periodic rehearsal on the training data, which compensates for temperature drift and other nonstationarity. Compact, high-performance layouts have been obtained in which learning adjusts for component variability.
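To make the refresh-by-rehearsal idea concrete, the following is a minimal NumPy sketch, not the authors' circuit: the learning rate, leak rate, and helper names (`hebbian_step`, `leak`, `rehearse`) are illustrative assumptions. It models a simple Hebbian synapse whose capacitively stored weight decays over time and is periodically restored by re-running the same local learning rule on the training data.

```python
# Illustrative sketch (assumed names/values, not the paper's circuit):
# capacitive weight storage with local Hebbian learning and periodic
# rehearsal on the training data to compensate for leakage/drift.

import numpy as np

ETA = 0.05        # assumed Hebbian learning rate
LEAK_RATE = 0.01  # assumed fractional charge lost per step (capacitor leakage)

def hebbian_step(w, x, y):
    """Simple Hebbian update: strengthen weights by the output-input product."""
    return w + ETA * np.outer(y, x)

def leak(w):
    """Capacitive weights decay toward zero as stored charge leaks away."""
    return (1.0 - LEAK_RATE) * w

def rehearse(w, inputs):
    """Refresh decayed weights by re-presenting the training data through
    the same local learning rule used for the original training."""
    for x in inputs:
        y = np.tanh(w @ x)          # sigmoidal neuron activation
        w = hebbian_step(w, x, y)
    return w

rng = np.random.default_rng(0)
inputs = [rng.standard_normal(8) for _ in range(20)]
w = np.zeros((4, 8))                # 4 neurons, 8 inputs

# Train once, then alternate leakage with periodic rehearsal.
w = rehearse(w, inputs)
for t in range(100):
    w = leak(w)                     # weights drift between refresh cycles
    if t % 10 == 0:
        w = rehearse(w, inputs)     # periodic rehearsal restores the weights
```

The point of the rehearsal loop is that the same local learning rule that set the weights also refreshes them, so drifted weights are pulled back toward their trained values without external digital readback of the stored charge.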





Cite this article

Card, H.C., Schneider, C.R. & Schneider, R.S. Learning capacitive weights in analog CMOS neural networks. Journal of VLSI Signal Processing 8, 209–225 (1994). https://doi.org/10.1007/BF02106447
