
A new algorithm to automate inductive learning of default theories*

Published online by Cambridge University Press:  23 August 2017

FARHAD SHAKERIN, ELMER SALAZAR and GOPAL GUPTA
Affiliation:
The University of Texas at Dallas, Texas, USA (e-mails: fxs130430@utdallas.edu, ees101020@utdallas.edu, gupta@utdallas.edu)

Abstract

In inductive learning of a broad concept, an algorithm should be able to distinguish examples of the concept from exceptions and from noisy data. An approach based on recursively finding patterns in the exceptions turns out to correspond to the problem of learning default theories. Default logic is what humans employ in common-sense reasoning; learned default theories are therefore more easily understood by humans. In this paper, we present new algorithms that learn default theories in the form of non-monotonic logic programs. Experiments reported in this paper show that our algorithms are a significant improvement over traditional approaches based on inductive logic programming. Under consideration for acceptance in TPLP.
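To make the abstract's notion of a default theory concrete, here is a minimal sketch of a default with an exception, written as a non-monotonic logic program. This is the classic birds-and-penguins example from the default-reasoning literature, not a program taken from the paper itself; the predicate names are illustrative.

```prolog
% Default: birds fly, unless they can be shown to be abnormal.
% "not ab(X)" is negation as failure, so the rule is defeasible.
flies(X) :- bird(X), not ab(X).

% Exception: penguins are abnormal birds, so the default is blocked for them.
ab(X) :- penguin(X).
bird(X) :- penguin(X).

% Example facts.
bird(tweety).
penguin(opus).
```

Under the stable model semantics, `flies(tweety)` holds while `flies(opus)` does not: the exception rule defeats the default for penguins without having to revise the default itself. Learning such a theory from data amounts to inducing the default rule and then, recursively, the rules that describe its exceptions.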

Type
Regular Papers
Copyright
Copyright © Cambridge University Press 2017 


Footnotes

*

The authors are partially supported by NSF Grant No. 1423419.
