
Searcher performance in question answering

Published: 01 September 2001
DOI: 10.1145/383952.384028

ABSTRACT

There are many tasks that require information finding. Some can be largely automated, while others benefit greatly from successful interaction between system and searcher. We are interested in the task of answering questions where some synthesis of information is required: the answer would not generally be found in a single passage of a single document. We investigate whether variation in the way a list of documents is delivered affects searcher performance on this question answering task. We show that there is a significant difference in performance when using a list customized to the task type, compared with a standard web-engine list. This indicates that paying attention to the task and to the searcher interaction may yield substantial improvement in task performance.
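The performance claim above is statistical: searchers given a list customized to the task type produced measurably better answers than searchers given a standard web-engine list. As a minimal sketch of how such a comparison could be tested (the scores, sample size, and choice of a paired t-test below are illustrative assumptions, not the paper's actual data or analysis), one might score each searcher's answers under both list presentations and test the paired differences:

    # Minimal sketch: testing whether a task-customized result list
    # improves searcher performance over a standard web-engine list.
    # Scores are fabricated for illustration; the paired t-test is an
    # assumed analysis, not necessarily the one used in the paper.
    from scipy import stats

    # Hypothetical per-searcher answer-quality scores (0..1), with each
    # searcher measured under both list presentations.
    customized = [0.72, 0.65, 0.80, 0.58, 0.77, 0.69, 0.74, 0.61]
    standard = [0.60, 0.55, 0.71, 0.50, 0.66, 0.59, 0.68, 0.52]

    # Paired t-test: each searcher acts as their own control.
    t_stat, p_value = stats.ttest_rel(customized, standard)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Customized list significantly outperforms the standard list.")

A within-subjects design of this kind pairs each searcher with themselves, which factors out variation in individual searching skill; that variation is often larger than the effect of the interface being compared.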


Published in

SIGIR '01: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
September 2001, 454 pages
ISBN: 1581133316
DOI: 10.1145/383952
Copyright © 2001 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

SIGIR '01 paper acceptance rate: 47 of 201 submissions, 23%. Overall acceptance rate: 792 of 3,983 submissions, 20%.
