Abstract
Uncertainty in the input data is a common issue in machine learning. In this paper we show how one can incorporate knowledge of an uncertainty measure for particular points in the training set. This may boost the model's accuracy as well as reduce overfitting. We present an approach based on classical training with jitter for Artificial Neural Networks (ANNs). We prove that our method, which can be applied to a wide class of models, is approximately equivalent to learning with generalised Tikhonov regularisation. We also compare our results with some alternative methods. Finally, we discuss further prospects and applications.
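The idea of training with jitter can be illustrated with a minimal sketch: at each epoch, every training input is perturbed by Gaussian noise whose scale matches that point's known uncertainty, and the model is fit on the perturbed copies. This is only an assumed toy illustration of the general technique the abstract names (a linear model on synthetic data, with a uniform per-point uncertainty `sigma`), not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1, with a known per-point input uncertainty sigma_i
# (here uniform; in general each point may carry its own value).
X = np.linspace(0.0, 1.0, 20)
y = 2.0 * X + 1.0
sigma = np.full_like(X, 0.05)

w, b = 0.0, 0.0   # linear model y_hat = w*x + b
lr = 0.1

for epoch in range(2000):
    # Training with jitter: perturb each input by noise whose scale
    # equals that point's uncertainty measure.
    Xj = X + rng.normal(0.0, sigma)
    err = (w * Xj + b) - y
    # Gradient step on 0.5 * mean squared error.
    w -= lr * np.mean(err * Xj)
    b -= lr * np.mean(err)

print(w, b)  # close to (2, 1); jitter slightly shrinks w toward zero
```

The slight shrinkage of `w` is the regularisation effect: averaging the loss over the input noise adds a penalty term, which is the sense in which jitter training approximates Tikhonov regularisation.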
© 2013 IFIP International Federation for Information Processing
Czarnecki, W.M., Podolak, I.T. (2013). Machine Learning with Known Input Data Uncertainty Measure. In: Saeed, K., Chaki, R., Cortesi, A., Wierzchoń, S. (eds) Computer Information Systems and Industrial Management. CISIM 2013. Lecture Notes in Computer Science, vol 8104. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40925-7_35
DOI: https://doi.org/10.1007/978-3-642-40925-7_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40924-0
Online ISBN: 978-3-642-40925-7