Abstract
Commonsense reasoning (CSR) is the ability to reason about everyday situations. In artificial intelligence systems, such reasoning requires extensive background knowledge. We implement an integration pipeline that transforms existing large commonsense knowledge (CSK) resources into a logic representation designed to support contradictory and uncertain information. Our representation extends first-order logic with default rules and numeric confidences. Using the pipeline, we integrate eight large CSK graphs to build a large knowledge base for symbolic hybrid AI. To the best of our knowledge, our contribution is the first large-scale commonsense knowledge base using default logic to be made available online. The CSK sources complement each other, with 5% or less overlap in knowledge between any pair of input graphs. The integrated knowledge supports taxonomy-related queries. Significant gaps in commonsense knowledge remain, however, because most of the CSK sources lack rules that would enable reasoning beyond the taxonomy.
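To give a flavor of the kind of reasoning the abstract describes, the following is a minimal, hypothetical sketch of default rules with numeric confidences, where a more specific class overrides a more general one. The rule format, the `subclass_of` priority scheme, and all names here are illustrative assumptions, not the paper's actual representation or reasoner.

```python
# Strict taxonomy facts: (subject, "isa", class).
facts = {
    ("tweety", "isa", "bird"),
    ("pingu", "isa", "penguin"),
    ("pingu", "isa", "bird"),
}

# Default rules: class -> {property: confidence}. Defaults may conflict;
# specificity decides which one applies.
defaults = {
    "bird": {"can_fly": 0.9},
    "penguin": {"can_fly": 0.01},
}

# Specificity ordering: a penguin is a (more specific kind of) bird.
subclass_of = {"penguin": "bird"}

def classes_of(entity):
    """All classes the entity belongs to, from the taxonomy facts."""
    return {c for (s, r, c) in facts if s == entity and r == "isa"}

def confidence(entity, prop):
    """Return the confidence from the most specific applicable default."""
    applicable = classes_of(entity) & defaults.keys()
    # Keep only classes not generalized by another applicable class.
    most_specific = [c for c in applicable
                     if not any(subclass_of.get(other) == c
                                for other in applicable)]
    values = [defaults[c][prop] for c in most_specific if prop in defaults[c]]
    return max(values) if values else None

print(confidence("tweety", "can_fly"))  # 0.9: generic bird default applies
print(confidence("pingu", "can_fly"))   # 0.01: penguin default overrides bird
```

The key point of default logic, reflected in this toy resolution strategy, is that the generic rule (birds fly) is defeated for the exceptional subclass (penguins) rather than producing a contradiction.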
Notes
Credit for the example goes to Dr. Pavel Smirnov from Honda Research Institute.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
This article is part of the topical collection “Advances on Knowledge Discovery, Knowledge Engineering and Knowledge Management” guest edited by Joaquim Filipe, Ana Fred, Frans Coenen, Jorge Bernardino and Elio Masciari.
About this article
Cite this article
Järv, P., Tammet, T., Verrev, M. et al. Large-Scale Commonsense Knowledge for Default Logic Reasoning. SN COMPUT. SCI. 4, 550 (2023). https://doi.org/10.1007/s42979-023-01963-2