Abstract
The problem of induction is a central problem in the philosophy of science and concerns whether it is sound to extract laws from observational data. Nowadays, this issue is more relevant than ever, given the pervasive and growing role of data-driven discovery in all the sciences. While induction is routinely employed by automated machine learning techniques, most of the philosophical work criticises induction as if an alternative existed. But is there indeed a reliable alternative to induction? Is it possible to discover or predict anything in a non-inductive manner?
This paper formalises the question on the basis of statistical notions (bias, variance, mean squared error) borrowed from estimation theory and statistical machine learning. The result is a justification of induction as rational behaviour. In a decision-making process, behaviour is rational if it is based on making choices that yield the highest level of benefit or utility. If we measure utility in a prediction context in terms of expected accuracy, it follows that induction is the rational course of conduct.
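As an illustration of the statistical notions invoked above (a minimal sketch, not code from the paper), the following Python snippet contrasts an inductive estimator (the sample mean) with a predictor that ignores the data, reporting bias, variance and mean squared error for each. The Gaussian setting, the parameter value, the sample size and the uniform guessing range are arbitrary assumptions made purely for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0             # hypothetical "true" quantity to be predicted
n, trials = 30, 10_000  # sample size and number of Monte Carlo repetitions

inductive, non_inductive = [], []
for _ in range(trials):
    sample = rng.normal(theta, 1.0, size=n)
    inductive.append(sample.mean())           # prediction extracted from the data
    non_inductive.append(rng.uniform(-5, 5))  # prediction that ignores the data

def bias_var_mse(estimates, target):
    estimates = np.asarray(estimates)
    bias = estimates.mean() - target
    var = estimates.var()
    mse = ((estimates - target) ** 2).mean()  # equals bias**2 + var, up to Monte Carlo error
    return bias, var, mse

for name, est in [("inductive (sample mean)", inductive),
                  ("non-inductive (random guess)", non_inductive)]:
    print("%-30s bias=%+.3f  var=%.3f  mse=%.3f" % ((name,) + bias_var_mse(est, theta)))
```

In this toy setting the inductive estimator attains a far smaller expected squared error, which illustrates the sense in which, per the abstract, induction maximises expected accuracy.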
Notes
- 1. For simplicity, we do not consider here the case of combining estimators.
- 2. Here we consider only deterministic algorithms.
- 3. It is indeed common practice to add random predictors to machine learning pipelines and to use them as a null hypothesis against which the generalization power of more complex candidate algorithms is benchmarked; a sketch of this practice follows below.
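As a hedged illustration of the practice described in note 3 (not code from the paper), the sketch below uses scikit-learn's DummyRegressor as the data-ignoring null baseline and compares its cross-validated error against a linear model; the synthetic dataset, the model choice and the cross-validation settings are assumptions made for the example.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression data, assumed purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.5, size=200)

# The dummy regressor ignores X and plays the role of the null predictor;
# a candidate learner should beat it out of sample to claim generalization power.
for name, model in [("null baseline", DummyRegressor(strategy="mean")),
                    ("linear model ", LinearRegression())]:
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{name}: cross-validated MSE = {-scores.mean():.3f}")
```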
Cite this paper
Bontempi, G. (2019). The Induction Problem: A Machine Learning Vindication Argument. In: Nicosia, G., Pardalos, P., Umeton, R., Giuffrida, G., Sciacca, V. (eds) Machine Learning, Optimization, and Data Science. LOD 2019. Lecture Notes in Computer Science, vol. 11943. Springer, Cham. https://doi.org/10.1007/978-3-030-37599-7_20