Abstract
Improving average effectiveness is a paramount objective for ranking models in the learning-to-rank task. An equally important objective is robustness: a ranking model should minimize the variance of its effectiveness across queries when the model is perturbed. Most existing learning-to-rank methods, however, optimize only the average effectiveness over all queries and leave robustness unaddressed. An ideal ranking model balances the trade-off between effectiveness and robustness by achieving high average effectiveness with low variance of effectiveness. This paper investigates the effectiveness-robustness trade-off in learning to rank from a novel perspective, namely the bias-variance trade-off, and presents a unified objective function that captures the tension between these two competing measures and jointly optimizes the effectiveness and robustness of a ranking model. We modify the gradient of LambdaMART, a state-of-the-art learning-to-rank algorithm, according to the unified objective function, demonstrating how a combination of bias and variance can be jointly optimized in a principled learning objective. Experimental results show that the gradient-modified LambdaMART improves both the robustness and the normalized effectiveness of the ranking model by combining bias and variance.
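To make the trade-off concrete, the sketch below combines a bias-like term (shortfall of mean per-query effectiveness from the ideal) with a variance term into a single scalar objective. This is an illustrative decomposition under our own assumptions, not the paper's exact formulation; the function name, the weighting scheme, and the `trade_off` parameter are hypothetical.

```python
import numpy as np

def combined_objective(per_query_scores, trade_off=0.5):
    """Combine average effectiveness and cross-query variance into one
    scalar to minimize (illustrative sketch, not the paper's formula).

    bias     : shortfall of the mean effectiveness from the ideal 1.0
    variance : spread of effectiveness across queries (robustness proxy)
    """
    scores = np.asarray(per_query_scores, dtype=float)
    bias = 1.0 - scores.mean()      # lower mean effectiveness -> higher bias
    variance = scores.var()         # higher spread -> less robust
    return bias + trade_off * variance

# Two models with identical mean effectiveness (0.70): the one with
# lower variance across queries attains the better (lower) objective.
stable = [0.70, 0.72, 0.68, 0.70]
unstable = [0.95, 0.40, 0.90, 0.55]
print(combined_objective(stable) < combined_objective(unstable))  # True
```

In a LambdaMART-style learner, an objective of this shape would be minimized by reweighting the per-query lambda gradients, so that queries far below the mean contribute more strongly than the plain average-effectiveness objective would allow.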
Acknowledgements
This work was supported in part by the Natural Science Foundation of Jiangxi Province of China (No. 20171BAB202010) and the Opening Foundation of the Network and Data Security Key Laboratory of Sichuan Province (No. NDSMS201602).
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Li, J., Liu, G., Xia, J. (2017). Robust Ranking Model via Bias-Variance Optimization. In: Huang, D.S., Hussain, A., Han, K., Gromiha, M. (eds) Intelligent Computing Methodologies. ICIC 2017. Lecture Notes in Computer Science, vol 10363. Springer, Cham. https://doi.org/10.1007/978-3-319-63315-2_62
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-63314-5
Online ISBN: 978-3-319-63315-2