ABSTRACT
There are many tasks that require information finding. Some can be largely automated, while others benefit greatly from successful interaction between system and searcher. We are interested in the task of answering questions where some synthesis of information is required; the answer would not generally be found in a single passage of a single document. We investigate whether variation in the way a list of documents is delivered affects searcher performance in the question answering task. We show that there is a significant difference in performance when using a list customized to the task type, compared with a standard web-engine list. This indicates that paying attention to the task and the searcher interaction may yield substantial improvement in task performance.