Abstract

This paper discusses research on scalable VLSI implementations of feed-forward and recurrent neural networks. These two families of networks are useful in a wide variety of important applications (classification tasks for feed-forward nets, optimization problems for recurrent nets), but their differences affect how they should be built. We find that analog computation with digitally programmable weights works best for feed-forward networks, while stochastic processing exploits the integrative nature of recurrent networks. We have demonstrated early prototypes of these networks that compute at rates of 1–2 billion connections per second. These general-purpose neural building blocks can be coupled with an overall data-transmission framework that is electronically reconfigured in a local manner to produce arbitrarily large, fault-tolerant networks.
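The stochastic processing the abstract refers to rests on encoding a value as the density of ones in a random bitstream, so that very simple logic performs arithmetic: a single AND gate multiplies two independent unipolar streams. A minimal software sketch of that principle (the function names and parameters here are illustrative, not from the paper's hardware design):

```python
import random

def to_bitstream(p, n, rng):
    """Encode a probability p in [0, 1] as a length-n random bitstream
    whose expected fraction of ones is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_bitstream(bits):
    """Decode a bitstream back to a value: the observed fraction of ones."""
    return sum(bits) / len(bits)

def stochastic_multiply(a, b, n=100_000, seed=0):
    """Multiply two unipolar values via bitwise AND of their bitstreams.

    For independent streams, P(sa & sb == 1) = P(sa == 1) * P(sb == 1),
    so the decoded density of the ANDed stream approximates a * b.
    """
    rng = random.Random(seed)
    sa = to_bitstream(a, n, rng)
    sb = to_bitstream(b, n, rng)
    product = [x & y for x, y in zip(sa, sb)]
    return from_bitstream(product)
```

The accuracy grows only as the square root of the stream length, which is why this encoding suits the integrative, annealing-style dynamics of recurrent networks better than the precision-sensitive forward pass of a layered classifier.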

References

  1. R. Lippmann, “An introduction to computing with neural nets,” IEEE ASSP Magazine, April 1987, pp. 4–22.

  2. P. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, PhD thesis, Harvard, November 1974.

  3. J. Hopfield and D. Tank, “Neural computations of decisions in optimization problems,” Biological Cybernetics, vol. 52, 1985, pp. 141–152.

  4. P.W. Hollis and J.J. Paulos, “The effects of precision constraints in a back-propagation learning network,” IEEE International Joint Conference on Neural Networks, Washington, DC, 1989.

  5. J. Raffel, J. Mann, R. Berger, A. Soares, and S. Gilbert, “A generic architecture for wafer-scale neuromorphic systems,” Proceedings of the IEEE International Conference on Neural Networks, 1987.

  6. J. Bailey, “A VLSI interconnect structure for neural networks,” Technical Report CS/E-88027, Oregon Graduate Center, August 1988.

  7. P.D. Franzon, Fault Tolerance in VLSI, PhD thesis, University of Adelaide, December 1988.

  8. M. Franzini, “Speech recognition with back propagation,” Proceedings of the Ninth Annual Conference of the Engineering in Medicine and Biology Society, 1987, pp. 1702–1703.

  9. R. Gorman and T. Sejnowski, “Analysis of hidden units in a layered network trained to classify sonar targets,” Neural Networks, vol. 1, 1988, pp. 75–90.

  10. D. Rumelhart, G. Hinton, and R. Williams, “Learning internal representations by error propagation,” Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, Cambridge, MA: MIT Press, 1986.

  11. J. Alspector and R. Allen, “A neuromorphic VLSI learning system,” Proceedings of the 1987 Stanford Conference: Advanced Research in VLSI, 1987, pp. 313–349.

  12. H. Graf, L. Jackel, R. Howard, B. Straughn, J. Denker, W. Hubbard, D. Tennant, and D. Schwartz, “VLSI implementation of a neural network memory with several hundreds of neurons,” American Institute of Physics Conference Proceedings No. 151, 1986, pp. 414–419.

  13. J.J. Paulos and P.W. Hollis, “Neural networks using analog multipliers,” Proceedings of the IEEE International Conference on Circuits and Systems, Helsinki, Finland, 1988, pp. 499–502.

  14. F. Kub, I. Mack, K. Moon, C. Yao, and J. Modolo, “Programmable analog synapses for microelectronic neural networks using a hybrid digital-analog approach,” Proceedings of the IEEE International Conference on Neural Networks, 1988.

  15. J. Hutchinson, C. Koch, and C. Mead, “Computing motion using analog and binary resistive networks,” Computer, vol. 21, March 1988, pp. 52–63.

  16. J. Sage, K. Thompson, and R. Withers, “An artificial neural network integrated circuit based on MNOS/CCD principles,” American Institute of Physics Conference Proceedings No. 151, 1986, pp. 381–385.

  17. P.W. Hollis and J.J. Paulos, “Artificial neurons using analog multipliers,” IEEE International Conference on Neural Networks, San Diego, CA, 1988.

  18. J.J. Paulos and P.W. Hollis, “A VLSI architecture for feed-forward networks with integral back-propagation,” Annual Meeting of the International Neural Network Society, Boston, MA, 1988.

  19. A. Waibel, H. Sawai, and C.S. Hughes, “Modularity and scaling in large phonemic neural networks,” Technical Report TR-I-0034, ATR Interpreting Telephony Research Laboratories, August 5, 1988.

  20. B. Gaines, “Stochastic computing systems,” Advances in Information Systems Science, New York: Plenum Press, 1969.

  21. B. Gaines, “Uncertainty as a foundation of computational power in neural networks,” Proceedings of the IEEE International Conference on Neural Networks, 1987, pp. III:51–III:57.

  22. P. Mars and W. Poppelbaum, Stochastic and Deterministic Averaging Processors, IEE Digital Electronics and Computing Series, New York: P. Peregrinus, 1981.

  23. A. Agranat and A. Yariv, “A new architecture for a microelectronic implementation of neural network models,” Proceedings of the IEEE International Conference on Neural Networks, 1987, pp. III:403–III:410.

  24. J. Hopfield and D. Tank, “Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit,” IEEE Transactions on Circuits and Systems, vol. 33, May 1986, pp. 533–541.

  25. G. Bilbro, R. Mann, T. Miller III, W. Snyder, D.E. Van den Bout, and M. White, “Mean field annealing and neural networks,” Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, 1989, pp. 91–98.

  26. D.E. Van den Bout and T.K. Miller III, “Graph partitioning using annealed neural networks,” Proceedings of the IEEE International Conference on Neural Networks, 1989, pp. I:521–I:528.

  27. D.E. Van den Bout and T.K. Miller III, “Improving the performance of the Hopfield-Tank neural network through normalization and annealing,” accepted for publication in Biological Cybernetics, 1989.

  28. D.E. Van den Bout, T.K. Miller III, and D. Gage, “Image halftoning using mean field annealing,” Technical Report CCSP-TR-27/88, NCSU Center for Communications and Signal Processing, October 1988.

  29. D.E. Van den Bout and T.K. Miller III, “A stochastic architecture for neural nets,” Proceedings of the IEEE International Conference on Neural Networks, 1988, pp. I:481–I:488.

  30. D.E. Van den Bout and T.K. Miller III, “A digital architecture employing stochasticism for the simulation of Hopfield neural nets,” IEEE Transactions on Circuits and Systems, vol. 36, May 1989, pp. 732–738.

  31. H.T. Kung and M.S. Lam, “Wafer-scale integration and two-level pipelined implementation of systolic arrays,” Journal of Parallel and Distributed Computing, vol. 1, 1984, pp. 32–64.

  32. S.K. Tewksbury, M. Hatamian, P. Franzon, Jr., L.A. Hornak, C.A. Siller, and V.B. Lawrence, “FIR filters for high-sample-rate applications,” IEEE Communications, July 1987.

  33. Lincoln Laboratory, “DARPA neural network study—executive summary,” July 8, 1988.

  34. D.E. Van den Bout and T.K. Miller III, “TInMANN: The integer Markovian artificial neural network,” Proceedings of the IEEE International Conference on Neural Networks, 1989, pp. II:205–II:211.

  35. D.E. Van den Bout and T.K. Miller III, “TInMANN: The integer Markovian artificial neural network,” Journal of Parallel and Distributed Computing, 1989.

  36. R. Holdaway, “Enhancing supervised learning algorithms via self-organization,” Proceedings of the IEEE International Joint Conference on Neural Networks, 1989.

Cite this article

van den Bout, D., Franzon, P., Paulos, J. et al. Scalable VLSI implementations for neural networks. J VLSI Sign Process Syst Sign Image Video Technol 1, 367–385 (1990). https://doi.org/10.1007/BF00929928
