Abstract
We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single layer of computing units. However, the computational effort needed for finding the correct combination of weights increases substantially when more parameters and more complicated topologies are considered. In this chapter we discuss a popular learning method capable of handling such large learning problems—the backpropagation algorithm. This numerical method was used by different research communities in different contexts, and was discovered and rediscovered, until in 1985 it found its way into connectionist AI mainly through the work of the PDP group [382]. It has been one of the most studied and most widely used algorithms for neural network learning ever since.
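To make the abstract's claim concrete, the following is a minimal sketch (a hypothetical illustration, not the chapter's own code) of backpropagation training a two-layer network on XOR—a Boolean function that no single-layer network can compute. The network size, learning rate, and epoch count are arbitrary choices for the example.

```python
# Minimal backpropagation sketch: a 2-layer sigmoid network learning XOR.
# All hyperparameters (hidden units, learning rate, epochs) are illustrative.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training set: the XOR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4      # hidden units
lr = 0.5   # learning rate

# Weights, with the bias treated as an extra input fixed at 1.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    """Forward pass: returns hidden activations (with bias) and output."""
    xi = x + [1]
    h = [sigmoid(sum(w * v for w, v in zip(row, xi))) for row in w1]
    hi = h + [1]
    y = sigmoid(sum(w * v for w, v in zip(w2, hi)))
    return xi, h, hi, y

def total_error():
    """Sum of squared errors over the training set."""
    return sum((forward(x)[3] - t) ** 2 for x, t in data)

init_err = total_error()

for epoch in range(20000):
    for x, t in data:
        xi, h, hi, y = forward(x)
        # Backward pass: error derivatives via the chain rule.
        dy = (y - t) * y * (1 - y)                       # output delta
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        # Gradient-descent weight updates.
        for j in range(H + 1):
            w2[j] -= lr * dy * hi[j]
        for j in range(H):
            for k in range(3):
                w1[j][k] -= lr * dh[j] * xi[k]

final_err = total_error()
print(init_err, final_err)
```

The backward pass simply propagates the output error through the chain rule, which is why the cost of one gradient evaluation stays proportional to the cost of one forward pass even as the topology grows.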
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Rojas, R. (1996). The Backpropagation Algorithm. In: Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-61068-4_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60505-8
Online ISBN: 978-3-642-61068-4
eBook Packages: Springer Book Archive