ABSTRACT
Research on computational argumentation has recently become very popular. An argument consists of a claim that is supported or attacked by at least one premise; its intent is to persuade others. An important problem in this field is retrieving good premises for a designated claim from a corpus of arguments. Given a claim, existing approaches often first find textually similar claims. In this paper, we systematically compare 196 methods for determining similar claims by textual similarity, using a large corpus of (claim, premise) pairs crawled from debate portals. We also evaluate how well the textual similarity of claims predicts the relevance of the associated premises.
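The first retrieval step described above, finding textually similar claims for a query claim, can be illustrated with a minimal sketch. The token-overlap (Jaccard) scorer below is one simple baseline for textual similarity, not one of the paper's 196 methods; the example claims are hypothetical.

```python
import re

def tokens(text):
    """Lowercased word tokens of a claim."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Jaccard similarity between the token sets of two claims."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def rank_claims(query, claims):
    """Rank corpus claims by decreasing textual similarity to the query."""
    return sorted(claims, key=lambda c: jaccard(query, c), reverse=True)

# Illustrative corpus of claims; in the paper's setting, each claim would
# be paired with its premises, which are returned for the top-ranked claims.
corpus = [
    "We should abandon fossil fuels",
    "School uniforms should be mandatory",
    "Fossil fuels must be phased out",
]
ranked = rank_claims("Should we abandon fossil fuels?", corpus)
print(ranked[0])  # the most similar claim in the corpus
```

In a full system, this scorer would be swapped for a stronger similarity model, and the premises attached to the top-ranked claims would be returned as candidate premises for the query claim.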
A Systematic Comparison of Methods for Finding Good Premises for Claims