Abstract
Recently, the authors described a training method for a convolutional neural network of threshold neurons. Hidden layers are trained by clustering in a feed-forward manner, while the output layer is trained with the supervised Perceptron rule. The system is designed for implementation on an existing low-power analog hardware architecture that exhibits inherent error sources affecting computation accuracy in unspecified ways. A key technique is to train the network on-chip, so that possible errors are taken into account without any need to quantify them. An on-chip approach has previously been applied to the hidden layers. In the present work, a chip-in-the-loop version of the iterative Perceptron rule is introduced for training the output layer. The influence of various error types (noisy, deleted, and clamped weights) is thoroughly investigated for all network layers, using the MNIST database of hand-written digits as a benchmark.
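The chip-in-the-loop idea can be illustrated with a minimal sketch: the Perceptron rule updates weights based on the output actually produced by the faulty substrate, so the training loop compensates for errors without ever measuring them. The fault model below (Gaussian weight noise, synapses stuck at zero, synapses clamped to a fixed value) and all parameter values are illustrative assumptions, not the paper's hardware specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceptron_train(X, y, epochs=20, lr=0.1, noise_std=0.05,
                     p_deleted=0.02, p_clamped=0.02, clamp_val=1.0):
    """Iterative Perceptron rule with simulated synaptic faults.

    Fault types (hypothetical parameters, for illustration only):
    - noisy weights: Gaussian noise added at every weight read,
    - deleted weights: synapses permanently stuck at zero,
    - clamped weights: synapses permanently stuck at clamp_val.
    """
    n = X.shape[1]
    w = np.zeros(n)
    deleted = rng.random(n) < p_deleted   # synapses stuck at zero
    clamped = rng.random(n) < p_clamped   # synapses stuck at clamp_val

    def effective(w):
        # What the (simulated) chip actually computes with, not the
        # ideal stored weights.
        w_eff = w + rng.normal(0.0, noise_std, n)
        w_eff[deleted] = 0.0
        w_eff[clamped] = clamp_val
        return w_eff

    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Chip-in-the-loop: the *faulty* output drives the update,
            # so errors are compensated without being quantified.
            out = 1.0 if effective(w) @ xi > 0 else -1.0
            if out != yi:
                w += lr * yi * xi
    return w, effective
```

Because the update is driven by the threshold output of the faulty forward pass, the stored weights drift to values that produce correct classifications on the defective substrate, which is the point of on-chip training.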
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Fieres, J., Meier, K., Schemmel, J. (2006). A Convolutional Neural Network Tolerant of Synaptic Faults for Low-Power Analog Hardware. In: Schwenker, F., Marinai, S. (eds) Artificial Neural Networks in Pattern Recognition. ANNPR 2006. Lecture Notes in Computer Science(), vol 4087. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11829898_11
Print ISBN: 978-3-540-37951-5
Online ISBN: 978-3-540-37952-2