SEMONTOQA: A Semantic Understanding-Based Ontological Framework for Factoid Question Answering

Published: 05 December 2014
DOI: 10.1145/2824864.2824886

ABSTRACT

This paper presents an outline of an ontological and semantic-understanding-based model (SEMONTOQA) for an open-domain factoid Question Answering (QA) system. The model analyses unstructured English natural-language texts and represents their inherent contents in an ontological manner. It locates and extracts useful information from the text for various question types and builds a semantically rich knowledge-base capable of answering different categories of factoid questions. The system converts the unstructured texts into a minimalistic, labelled, directed graph that we call a Syntactic Sentence Graph (SSG). An Automatic Text Interpreter, using a set of pre-learnt Text Interpretation Subgraphs and patterns, tries to understand the contents of the SSG in a semantic way. The system proposes a new feature- and action-based Cognitive Entity-Relationship Network designed to extend the text-understanding process to an in-depth level. Supervised learning allows the system to gradually grow its capability to understand text more effectively. The system incorporates a Text Inference Engine that infers the text contents and isolates entities, their features, actions, objects, associated contexts and other properties required for answering questions. A similar understanding-based question-processing module interprets the user's need in a semantic way. An Ontological Mapping Module, with the help of a set of pre-defined strategies designed for different classes of questions, maps a question's ontology to the set of ontologies stored in the background knowledge-base. Empirical verification is performed to show the usability of the proposed model. The results show that this model can be used effectively as a semantic-understanding-based alternative QA system.
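The paper itself does not publish source code; the Python sketch below only illustrates the kind of minimalistic, labelled, directed graph the abstract calls a Syntactic Sentence Graph (SSG), built here from hand-written dependency triples of the sort a dependency parser (e.g. Stanford CoreNLP) could produce. The SSG class, its field names and the example sentence are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only (assumed names): a labelled, directed graph over
    # the tokens of a sentence, in the spirit of the SSG described in the abstract.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class SSG:
        nodes: Dict[int, str] = field(default_factory=dict)              # token id -> word
        edges: List[Tuple[int, int, str]] = field(default_factory=list)  # (head, dependent, label)

        def add_edge(self, head: int, dep: int, label: str) -> None:
            self.edges.append((head, dep, label))

        def neighbours(self, node: int) -> List[Tuple[int, str]]:
            # Outgoing (dependent, label) pairs of a node.
            return [(d, lbl) for h, d, lbl in self.edges if h == node]

    def ssg_from_dependencies(tokens: List[str],
                              deps: List[Tuple[int, int, str]]) -> SSG:
        # Build the graph from tokens and (head, dependent, label) triples,
        # e.g. the typed dependencies emitted by a dependency parser.
        graph = SSG(nodes=dict(enumerate(tokens)))
        for head, dep, label in deps:
            graph.add_edge(head, dep, label)
        return graph

    if __name__ == "__main__":
        # "Einstein developed relativity" with hand-written dependency triples.
        tokens = ["Einstein", "developed", "relativity"]
        deps = [(1, 0, "nsubj"), (1, 2, "dobj")]
        ssg = ssg_from_dependencies(tokens, deps)
        print(ssg.neighbours(1))  # -> [(0, 'nsubj'), (2, 'dobj')]

In such a representation, question answering reduces to matching a question's subgraph against the stored sentence graphs; the abstract's Text Interpretation Subgraphs and Ontological Mapping Module describe that matching at a far richer, ontology-backed level than this sketch.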

Published in

FIRE '14: Proceedings of the 6th Annual Meeting of the Forum for Information Retrieval Evaluation
December 2014, 151 pages
ISBN: 9781450337557
DOI: 10.1145/2824864
Editors: Prasenjit Majumder, Mandar Mitra, Sukomal Pal, Madhulika Agrawal, Parth Mehta

Copyright © 2014 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

Research article (refereed limited)

Acceptance Rates

Overall acceptance rate: 19 of 64 submissions, 30%
