Abstract
Implementations of artificial neural networks as analog VLSI circuits differ in their method of synaptic weight storage (digital weights, analog EEPROMs, or capacitive weights) and in whether learning is performed locally at the synapses or off-chip. In this paper, we explain the principles of analog networks with in situ or local synaptic learning of capacitive weights, with test results of CMOS implementations from our laboratory. Synapses for both simple Hebbian and mean field networks are investigated. Synaptic weights may be refreshed by periodic rehearsal on the training data, which compensates for temperature drift or other nonstationarity. Compact high-performance layouts have been obtained in which learning adjusts for component variability.
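The abstract's central idea — weights stored as capacitor charge leak away, and periodic rehearsal on the training data compensates — can be illustrated with a toy numerical sketch. This is not the paper's circuit model: the learning rate, leakage fraction, rehearsal interval, and the two orthogonal training patterns are all invented for illustration, and the Hebbian rule here is the textbook outer-product form.

```python
import numpy as np

# Toy sketch of capacitive weight storage with rehearsal-based refresh.
# All parameters (eta, leak, rehearsal interval, patterns) are assumptions
# for illustration; this is not the circuit model from the paper.

N = 8
# Two orthogonal +/-1 patterns, so simple Hebbian storage recalls them exactly.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)

eta = 0.1     # learning rate (assumed)
leak = 0.01   # fraction of capacitor charge lost per time step (assumed)
w = np.zeros((N, N))

for step in range(500):
    w *= 1.0 - leak                    # weight decay from capacitive leakage
    if step % 5 == 0:                  # periodic rehearsal on the training data
        x = patterns[(step // 5) % len(patterns)]
        w += eta * np.outer(x, x)      # Hebbian update: dw_ij = eta * x_i * x_j
        np.fill_diagonal(w, 0.0)       # no self-connections

# Despite continual leakage, rehearsal keeps both patterns as stable
# fixed points of the sign(w @ x) recall dynamics.
for p in patterns:
    print(np.array_equal(np.sign(w @ p), p))   # prints True twice
```

Because the decay is multiplicative and the rehearsal updates are additive, the weights settle near an equilibrium where each refresh restores the charge lost since the previous one — the same balance that lets the analog circuits tolerate temperature drift.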
Cite this article
Card, H.C., Schneider, C.R. & Schneider, R.S. Learning capacitive weights in analog CMOS neural networks. Journal of VLSI Signal Processing 8, 209–225 (1994). https://doi.org/10.1007/BF02106447