One Size Doesn’t Fit All: Predicting the Number of Examples for In-Context Learning

  • Conference paper
  • Advances in Information Retrieval (ECIR 2025)

Abstract

In-context learning (ICL) refers to adding a small number of localized examples (labelled instances from a training set that are similar to the input) to an LLM’s prompt, with the objective of controlling the generative process and improving downstream task performance. Existing ICL approaches use an identical number of examples (a pre-configured hyper-parameter) for every data instance. Our work alleviates the limitations of this ‘one size fits all’ approach by dynamically predicting, for each data instance, the number of examples to be used in few-shot inference with LLMs. In particular, we employ a multi-label classifier whose parameters are fitted on a training set, where the label for each instance indicates whether using a specific value of k (the number of most similar examples, from 0 up to a maximum) leads to a correct k-shot downstream prediction. Our experiments on a number of text classification benchmarks show that this adaptive ICL approach (AICL) substantially outperforms standard ICL by up to 17%.
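
To make the idea in the abstract concrete, the following is a minimal sketch of how an adaptive-k predictor could be assembled. It is not the authors’ implementation (their code is linked in the notes below): TF-IDF features and a scikit-learn multi-output classifier are stand-ins for whatever encoder and multi-label classifier the paper actually uses, `run_k_shot` is a hypothetical callable wrapping k-shot LLM inference, and picking the smallest k predicted to succeed is one plausible reading of the adaptive strategy rather than the paper’s exact decision rule.

```python
# Minimal sketch of the adaptive-k idea described in the abstract (AICL).
# Assumptions (not from the paper): TF-IDF features and a scikit-learn
# multi-output classifier stand in for the paper's encoder and multi-label
# classifier; `run_k_shot(text, k)` is a hypothetical callable that performs
# k-shot LLM inference and returns a predicted label.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

MAX_K = 5  # candidate shot counts are 0, 1, ..., MAX_K


def build_labels(train_texts, train_golds, run_k_shot):
    """Y[i, k] = 1 iff k-shot inference on training instance i is correct."""
    Y = np.zeros((len(train_texts), MAX_K + 1), dtype=int)
    for i, (text, gold) in enumerate(zip(train_texts, train_golds)):
        for k in range(MAX_K + 1):
            Y[i, k] = int(run_k_shot(text, k) == gold)
    return Y


def fit_k_predictor(train_texts, Y):
    """Fit a multi-label classifier mapping an input text to per-k success flags."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(train_texts)
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    return vec, clf


def predict_k(vec, clf, text):
    """One plausible decision rule: the smallest k predicted to succeed,
    falling back to MAX_K when no value of k is predicted to work."""
    flags = clf.predict(vec.transform([text]))[0]  # one 0/1 flag per k
    hits = np.flatnonzero(flags)
    return int(hits[0]) if len(hits) else MAX_K
```

Note that fitting this particular stand-in classifier requires each k-column of Y to contain both successes and failures; columns that are constant over the training set would need separate handling.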


Notes

  1. Also interchangeably known as few-shot learning or retrieval-augmented generation (RAG) with ground-truth labels.

  2. Code available at https://github.com/ManishChandra12/adaptiveICL.


Author information

Corresponding author

Correspondence to Manish Chandra.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chandra, M., Ganguly, D., Ounis, I. (2025). One Size Doesn’t Fit All: Predicting the Number of Examples for In-Context Learning. In: Hauff, C., et al. Advances in Information Retrieval. ECIR 2025. Lecture Notes in Computer Science, vol 15572. Springer, Cham. https://doi.org/10.1007/978-3-031-88708-6_5


  • DOI: https://doi.org/10.1007/978-3-031-88708-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-88707-9

  • Online ISBN: 978-3-031-88708-6

  • eBook Packages: Computer Science, Computer Science (R0)
