
Implementation and performance of an analog nonvolatile neural network

Published in Analog Integrated Circuits and Signal Processing.

Abstract

An integrated-circuit implementation of a fully parallel analog artificial neural network is presented. We include details of the architecture, some of the important design considerations, a description of the circuits, and finally actual performance data. The electrically trainable artificial neural network (ETANN) chip incorporates 64 analog neurons and 10,240 analog synapses and is fabricated in a 1-µm CMOS NVM process. The network calculates the dot product between a 64-element analog input vector and a 64 × 64 nonvolatile (EEPROM-based) analog synaptic weight array. These calculations occur at a rate in excess of 1.3 billion interconnections per second. All elements of the computation are stored and calculated in the analog domain and strictly in parallel. A 2:1 input and neuron multiplex mode permits rates in excess of 2 billion interconnections per second and a single-chip effective network size of 64 inputs by 128 outputs. The ETANN incorporates differential signal techniques throughout for improved noise rejection, and current summing is employed for the sum-of-products calculations. The chip integrates approximately 400 op amps, including variable-gain stages ranging from 20 to 54 dB. The inevitable component-to-component variations that result from the use of minimum-dimension elements are found not to be significant for operation in an adaptive environment.




Additional information

1. One circuit variation that could not be tolerated was the dependence of the sigmoid offset on the sigmoid gain, a problem that required circuit and layout changes to resolve.

2. The performance of this concept has not been verified.

3. The initial design achieved gain control by, in effect, constructing a Gilbert multiplier cell at the sigmoid circuit. A small mismatch between the two differential pairs of this multiplier generated an offset that depended on the relative conductance of each leg, and thus on the value at the VGAIN pin. This approach produced a very large, well-controlled gain range but was abandoned because the offset could not be compensated through use of the synaptic elements.
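The failure mode note 3 describes can be illustrated numerically. The model below is an assumption-laden sketch, not the paper's circuit equations: the offset is taken to scale with the fraction of current steered by VGAIN, so the error at zero input moves whenever the gain setting changes, which is why a fixed correction stored in the synaptic elements could not cancel it:

```python
import numpy as np

# Hypothetical model of a gain-dependent sigmoid offset; gain_dependent_offset,
# delta, and the 1 + 9*v_gain gain law are all illustrative assumptions.

def gain_dependent_offset(v_gain, delta=5e-3):
    """Imbalance between the two Gilbert-cell legs, modeled as scaling
    with the fraction of tail current steered by VGAIN."""
    return delta * v_gain

def sigmoid_out(v_in, v_gain):
    gain = 1.0 + 9.0 * v_gain            # illustrative variable-gain law
    return np.tanh(gain * (v_in + gain_dependent_offset(v_gain)))

# Zero input no longer gives zero output, and the error grows with VGAIN:
print(sigmoid_out(0.0, 0.1))
print(sigmoid_out(0.0, 0.9))
```

Because the residual error is a moving target across gain settings, any one-time trim (synaptic or otherwise) leaves an offset at the other settings, consistent with the decision to abandon this gain-control approach.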


Cite this article

Castro, H.A., Tam, S.M. & Holler, M.A. Implementation and performance of an analog nonvolatile neural network. Analog Integr Circ Sig Process 4, 97–113 (1993). https://doi.org/10.1007/BF01254862
