DOI: 10.1145/3331184.3331282

A Systematic Comparison of Methods for Finding Good Premises for Claims

Published: 18 July 2019

ABSTRACT

Research on computational argumentation has recently become very popular. An argument consists of a claim that is supported or attacked by at least one premise, and its purpose is to persuade others. An important problem in this field is retrieving good premises for a designated claim from a corpus of arguments. Given a claim, the first step of many existing approaches is to find textually similar claims. In this paper, we systematically compare 196 methods for determining similar claims by textual similarity, using a large corpus of (claim, premise) pairs crawled from debate portals. We also evaluate how well the textual similarity of claims predicts the relevance of the associated premises.
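To make the setup concrete, below is a minimal Python sketch (not the authors' implementation) of the retrieval pipeline the abstract describes: given a query claim, rank the corpus claims by textual similarity and return the premises attached to the top-ranked ones. TF-IDF cosine similarity stands in for just one plausible method among the many compared; the example corpus and the premises_for helper are illustrative assumptions, not taken from the paper.

# Sketch of claim-similarity-based premise retrieval.
# TF-IDF cosine similarity is only one candidate similarity method;
# the (claim, premise) pairs below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (claim, premise) pairs as crawled from debate portals.
corpus = [
    ("School uniforms should be mandatory",
     "Uniforms reduce peer pressure over clothing."),
    ("Homework should be abolished",
     "Homework cuts into time for family and rest."),
    ("Students should wear uniforms",
     "Uniforms make schools safer by identifying intruders."),
]

claims = [claim for claim, _ in corpus]
vectorizer = TfidfVectorizer()
claim_matrix = vectorizer.fit_transform(claims)  # one TF-IDF row per claim

def premises_for(query_claim, k=2):
    """Return premises of the k corpus claims most similar to the query."""
    query_vec = vectorizer.transform([query_claim])
    scores = cosine_similarity(query_vec, claim_matrix)[0]
    ranked = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
    return [(claims[i], corpus[i][1], scores[i]) for i in ranked[:k]]

for claim, premise, score in premises_for("Should students wear school uniforms?"):
    print(f"{score:.2f}  {claim}  ->  {premise}")

The second question the abstract raises, whether high claim similarity actually yields relevant premises, would be evaluated on top of such a ranking, for instance with graded-relevance measures such as nDCG.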



                    • Published in

                      SIGIR'19: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval
                      July 2019
                      1512 pages
ISBN: 978-1-4503-6172-9
DOI: 10.1145/3331184

                      Copyright © 2019 ACM


                      Publisher

                      Association for Computing Machinery

                      New York, NY, United States



                      Qualifiers

                      • short-paper

                      Acceptance Rates

SIGIR '19 paper acceptance rate: 84 of 426 submissions, 20%
Overall acceptance rate: 792 of 3,983 submissions, 20%
