Abstract
In recent years, predicting user behavior has drawn much attention in the field of information retrieval. To that end, many models, and even more evaluation metrics, have been proposed, aiming at accurate evaluation of the information retrieval process. Most of the proposed metrics, including the well-known nDCG and ERR, rely on the assumption that the probability R that a user finds a document relevant depends only on its relevance grade. In this paper, we adopt the assumption that this probability is a function of two factors: the document's relevance grade and its popularity grade. Popularity, which we derive from daily page views, can be regarded as users' vote for a document, and by incorporating this factor into the probability R we can capture user behavior more accurately. We present a new evaluation metric called Reciprocal Rank using Webpage Popularity (RRP), which takes into account not only a document's relevance judgment but also its popularity, and as a result correlates better with click metrics than existing evaluation metrics do.
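The metric described above can be sketched in the style of ERR's cascade model, with the satisfaction probability at each rank drawn from both grades. The exact combination RRP uses is not given in the abstract, so the convex mixture below (weight `alpha`) and the function names `grade_to_prob` and `rrp_sketch` are illustrative assumptions, not the paper's definition.

```python
def grade_to_prob(grade, max_grade):
    """Map a grade in {0, ..., max_grade} to a probability, as in ERR."""
    return (2 ** grade - 1) / (2 ** max_grade)

def rrp_sketch(relevance, popularity, max_grade=4, alpha=0.7):
    """ERR-style expected reciprocal rank where the per-rank satisfaction
    probability mixes relevance and popularity grades (assumed form)."""
    score = 0.0
    p_reach = 1.0  # probability the user reaches the current rank
    for rank, (rel, pop) in enumerate(zip(relevance, popularity), start=1):
        # Assumed combination: convex mixture of the two grade-based probabilities.
        r = (alpha * grade_to_prob(rel, max_grade)
             + (1 - alpha) * grade_to_prob(pop, max_grade))
        score += p_reach * r / rank
        p_reach *= 1.0 - r  # user continues only if not yet satisfied
    return score

if __name__ == "__main__":
    # A ranking whose top document is both relevant and popular should score
    # higher than one whose top document is relevant but unpopular.
    print(rrp_sketch([4, 2, 0], [4, 1, 0]))
    print(rrp_sketch([4, 2, 0], [0, 1, 0]))
```

With `alpha = 1` the sketch reduces to plain ERR, so the popularity weight directly controls how far the metric departs from relevance-only evaluation.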
References
Buckley, C., Voorhees, E.M.: Retrieval evaluation with incomplete information. In: Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2004, pp. 25–32. ACM, New York (2004)
Chapelle, O., Metlzer, D., Zhang, Y., Grinspan, P.: Expected reciprocal rank for graded relevance. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, pp. 621–630. ACM, New York (2009)
Chapelle, O., Zhang, Y.: A dynamic bayesian network click model for web search ranking. In: Proceedings of the 18th International Conference on World Wide Web, WWW 2009, pp. 1–10. ACM, New York (2009)
Cho, J., Roy, S., Adams, R.E.: Page quality: In search of an unbiased web ranking. In: Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data, SIGMOD 2005, pp. 551–562. ACM, New York (2005)
Chuklin, A., Serdyukov, P., de Rijke, M.: Click model-based information retrieval metrics. In: Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2013, pp. 493–502. ACM, New York (2013)
Cleverdon, C.W.: The significance of the Cranfield tests on index languages. In: Proceedings of the 14th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 1991, pp. 3–12. ACM, New York (1991)
Craswell, N., Zoeter, O., Taylor, M., Ramsey, B.: An experimental comparison of click position-bias models. In: Proceedings of the 2008 International Conference on Web Search and Data Mining, WSDM 2008, pp. 87–94. ACM, New York (2008)
Huffman, S.B., Hochster, M.: How well does result relevance predict session satisfaction? In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2007, pp. 567–574. ACM, New York (2007)
Järvelin, K., Kekäläinen, J.: Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst. 20(4), 422–446 (2002)
Moffat, A., Zobel, J.: Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. Inf. Syst., 27(1), 2:1–2:27 (2008)
The Lemur Project. University of Massachusetts and Carnegie Mellon University (January 2014), http://www.lemurproject.org/
Radlinski, F., Kurup, M., Joachims, T.: How does clickthrough data reflect retrieval quality? In: Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008, pp. 43–52. ACM, New York (2008)
Richardson, M., Dominowska, E., Ragno, R.: Predicting clicks: Estimating the click-through rate for new ads. In: Proceedings of the 16th International Conference on World Wide Web, WWW 2007, pp. 521–530. ACM, New York (2007)
Sakai, T.: Alternatives to bpref. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2007, pp. 71–78. ACM, New York (2007)
Sanderson, M., Zobel, J.: Information retrieval system evaluation: Effort, sensitivity, and reliability. In: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2005, pp. 162–169. ACM, New York (2005)
Copyright information
© 2014 IFIP International Federation for Information Processing
Cite this paper
Evangelopoulos, X., Makris, C., Plegas, Y. (2014). Reciprocal Rank Using Web Page Popularity. In: Iliadis, L., Maglogiannis, I., Papadopoulos, H., Sioutas, S., Makris, C. (eds) Artificial Intelligence Applications and Innovations. AIAI 2014. IFIP Advances in Information and Communication Technology, vol 437. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44722-2_13
Print ISBN: 978-3-662-44721-5
Online ISBN: 978-3-662-44722-2
eBook Packages: Computer Science (R0)