Abstract
In the previous chapter we arrived at the conclusion that McCulloch-Pitts units can be used to build networks capable of computing any logical function and of simulating any finite automaton. From the biological point of view, however, the types of network that can be built are not very relevant. The computing units are too similar to conventional logic gates and the network must be completely specified before it can be used. There are no free parameters which could be adjusted to suit different problems. Learning can only be implemented by modifying the connection pattern of the network and the thresholds of the units, but this is necessarily more complex than just adjusting numerical parameters. For that reason, we turn our attention to weighted networks and consider their most relevant properties. In the last section of this chapter we show that simple weighted networks can provide a computational model for regular neuronal structures in the nervous system.
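The contrast drawn in the abstract can be illustrated with a minimal sketch (not from the chapter itself): a single weighted threshold unit whose weights and threshold are ordinary numerical parameters. Adjusting these numbers changes the logical function the unit computes, with no rewiring of the network required. The function name and the particular weight values below are illustrative assumptions.

```python
# Hypothetical sketch of a weighted threshold unit (a perceptron).
# Unlike a McCulloch-Pitts unit, whose behavior is fixed by its wiring,
# this unit has free numerical parameters: the weights and the threshold.

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# The same unit computes different logical functions under different parameters.
# AND: both inputs must be active to reach the threshold 1.5.
and_table = [threshold_unit((a, b), (1.0, 1.0), 1.5) for a in (0, 1) for b in (0, 1)]
# OR: a single active input already exceeds the threshold 0.5.
or_table = [threshold_unit((a, b), (1.0, 1.0), 0.5) for a in (0, 1) for b in (0, 1)]
```

Here `and_table` evaluates to `[0, 0, 0, 1]` and `or_table` to `[0, 1, 1, 1]`: the move from AND to OR is a change of one numerical parameter, which is exactly the kind of adjustment that makes learning in weighted networks simpler than restructuring a fixed logical network.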
© 1996 Springer-Verlag Berlin Heidelberg
Cite this chapter
Rojas, R. (1996). Weighted Networks—The Perceptron. In: Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-61068-4_3
Print ISBN: 978-3-540-60505-8
Online ISBN: 978-3-642-61068-4