Abstract
By exact learning we mean the ability of a network to reproduce the desired target function exactly. In the case of exact learning the network weights do not need tuning; their values can be found in closed form. Even if this is unlikely to occur in general, there are a few particular cases in which it happens. These cases are discussed in this chapter.
Notes
- 1.
This definition is based on the following result proved by Hilbert [54]: \(\int _0^1 \int _0^1 K(t, s)\, u(t)\, u(s) \, dt\, ds = \sum _n \frac{1}{\lambda _n} \langle u, \psi _n\rangle ^2\), where \(\langle u, \psi _n\rangle = \int _0^1 u(s)\, \psi _n(s) \, ds\) and \(\lambda _n\) is the eigenvalue corresponding to the eigenfunction \(\psi _n\).
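As an illustrative aside (not from the chapter), Hilbert's expansion can be checked numerically by discretizing a symmetric kernel and comparing the double integral against the eigenvalue sum. The sketch below assumes the kernel \(K(t,s) = \min(t,s)\) on \([0,1]^2\) and an arbitrary test function; both choices are hypothetical, picked only for the check.

```python
import numpy as np

# Numerical check of Hilbert's identity
#   \int_0^1 \int_0^1 K(t,s) u(t) u(s) dt ds = sum_n (1/lambda_n) <u, psi_n>^2,
# where 1/lambda_n are the eigenvalues of the integral operator with kernel K.
# Assumed kernel: K(t,s) = min(t,s) (symmetric, positive definite).
N = 400
h = 1.0 / N
t = (np.arange(N) + 0.5) * h           # midpoint grid on [0, 1]
K = np.minimum.outer(t, t)             # discretized kernel matrix

u = np.sin(2 * np.pi * t) + t**2       # an arbitrary test function

# Left-hand side: the double integral as a Riemann sum.
lhs = u @ K @ u * h * h

# Right-hand side: eigendecomposition of the discretized integral operator K*h.
# mu[n] approximates 1/lambda_n; columns of psi approximate psi_n(t_i) * sqrt(h).
mu, psi = np.linalg.eigh(K * h)
coeffs = psi.T @ u * np.sqrt(h)        # <u, psi_n> with L^2 normalization
rhs = np.sum(mu * coeffs**2)

print(lhs, rhs)  # the two sides agree up to floating-point error
```

The discrete identity holds exactly because the Riemann sum for the double integral is the quadratic form of the same matrix whose eigendecomposition builds the right-hand side, so the agreement is limited only by floating-point roundoff.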
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Calin, O. (2020). Exact Learning. In: Deep Learning Architectures. Springer Series in the Data Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-36721-3_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36720-6
Online ISBN: 978-3-030-36721-3
eBook Packages: Mathematics and Statistics (R0)