
Transfer for Automated Negotiation

  • Technical Contribution
KI - Künstliche Intelligenz

Abstract

Learning in automated negotiation is a difficult problem because the target function is hidden and the experience available for learning is rather limited. Transfer learning is a branch of machine learning research concerned with reusing previously acquired knowledge in new learning tasks, for example, to reduce the amount of learning experience required to attain a certain level of performance. This paper proposes a novel strategy based on a variation of TrAdaBoost, a classic instance transfer technique, that can be applied in a multi-issue negotiation setting. The experimental results show that the proposed method is effective in a variety of application domains against state-of-the-art negotiating agents.
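
As a rough illustration of the technique the strategy builds on, the sketch below shows the standard TrAdaBoost re-weighting loop (after Dai et al. [7]). The Python names, the decision-stump weak learner and the 0/1 labels are illustrative assumptions and not the paper's actual implementation; in the negotiation setting, the "different-distribution" instances would stem from earlier sessions or domains and the "same-distribution" instances from the current one.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def tradaboost(X_d, y_d, X_s, y_s, n_rounds=20):
        """Instance-transfer boosting in the style of TrAdaBoost.
        X_d, y_d: 'different-distribution' (source) instances and 0/1 labels.
        X_s, y_s: 'same-distribution' (target) instances and 0/1 labels."""
        n, m = len(X_d), len(X_s)
        X = np.vstack([X_d, X_s])
        y = np.concatenate([y_d, y_s])
        w = np.ones(n + m)                                   # instance weights
        beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))
        hypotheses, betas_t = [], []
        for _ in range(n_rounds):
            p = w / w.sum()
            h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
            err = np.abs(h.predict(X) - y)                   # 0/1 loss per instance
            # the training error is measured on the same-distribution part only
            eps = np.dot(w[n:], err[n:]) / w[n:].sum()
            eps = min(max(eps, 1e-10), 0.499)                # keep the update well defined
            beta_t = eps / (1.0 - eps)
            w[:n] *= beta ** err[:n]                         # down-weight misclassified source instances
            w[n:] *= beta_t ** (-err[n:])                    # up-weight misclassified target instances
            hypotheses.append(h)
            betas_t.append(beta_t)
        # the final classifier is a vote over the later rounds, weighted by ln(1/beta_t)
        return hypotheses, betas_t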

Notes

  1. Adopting the same notation as in the original TrAdaBoost paper, the index s stands for the "same-distribution" instance space and the index d for the "different-distribution" instance space.

  2. Extending TrAdaBoost to multi-class classification problems is fairly straightforward.

  3. Please note that the formalization using the KL measure requires the two distributions to be defined over the same domain. This is reasonable in our framework, as we operate within the same negotiation domain. If the two distributions are structurally different, both can first be approximated by a larger common distribution, such as a Gaussian mixture model (a small illustration is sketched after these notes).

  4. In this work we split the negotiation session into intervals of 3 s.

  5. Competitiveness refers to the minimum distance of the possible outcomes in a domain to the point where both parties are fully satisfied; consequently, agents tend to achieve better performance in domains with lower competitiveness (a sketch of this measure also follows these notes).
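
To make note 3 concrete, the following small sketch computes the KL measure between two discrete bid-frequency distributions defined over the same outcome space; the histogram values are invented for illustration and are not taken from the paper.

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """KL(p || q) for two discrete distributions over the same outcome space."""
        p = np.asarray(p, dtype=float) / np.sum(p)
        q = np.asarray(q, dtype=float) / np.sum(q)
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    # Two bid-frequency histograms over the *same* discretised outcome space:
    p = [0.10, 0.40, 0.30, 0.20]
    q = [0.20, 0.30, 0.30, 0.20]
    print(kl_divergence(p, q))  # well defined because the supports coincide

    # If the distributions were structurally different (e.g. differently shaped
    # supports), each could first be approximated by a common parametric family,
    # such as a Gaussian mixture model, and the comparison done on the fits.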
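
Similarly, note 5 can be read as the following computation, sketched under the assumption that every possible outcome is given as a pair of normalised utilities and that full mutual satisfaction corresponds to the point (1, 1); the example domains are invented.

    import numpy as np

    def competitiveness(outcomes):
        """Minimum Euclidean distance of any outcome (u_A, u_B), both in [0, 1],
        to the point (1, 1) where both parties are fully satisfied."""
        outcomes = np.asarray(outcomes, dtype=float)
        return float(np.min(np.linalg.norm(outcomes - np.array([1.0, 1.0]), axis=1)))

    # A domain with low competitiveness has an outcome close to (1, 1), so agents
    # tend to do better there; a highly competitive domain has no such outcome.
    print(competitiveness([(0.95, 0.90), (0.60, 0.70), (0.30, 1.00)]))  # small
    print(competitiveness([(0.90, 0.20), (0.50, 0.50), (0.20, 0.90)]))  # larger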

References

  1. Ammar HB, Tuyls K, Taylor ME, Driessens K, Weiss G (2012) Reinforcement learning transfer via sparse coding. In: Proceedings of the 11th Int. Joint Conf. on Autonomous Agents and Multi-Agent Systems. ACM, Valencia, p 383–390

  2. ANAC (2012) Automated Negotiating Agents Competition 2012. http://anac2012.ecs.soton.ac.uk/

  3. Chen S, Ammar HB, Tuyls K, Weiss G (2013) Optimizing complex automated negotiation using sparse pseudo-input Gaussian processes. In: Proceedings of the 12th Int. Joint Conf. on Autonomous Agents and Multi-Agent Systems. ACM, Saint Paul, p 707–714

  4. Chen S, Weiss G (2012) An efficient and adaptive approach to negotiation in complex environments. In: Proceedings of the 20th European Conference on Artificial Intelligence. IOS Press, Montpellier, France, p 228–233

  5. Chen S, Weiss G (2013) An efficient automated negotiation strategy for complex environments. Eng Appl Artif Intell 26(10):2613–2623

  6. Coehoorn RM, Jennings NR (2004) Learning an opponent's preferences to make effective multi-issue negotiation trade-offs. In: Proceedings of the 6th Int. Conf. on Electronic Commerce, ICEC '04. ACM, New York, p 59–68

  7. Dai W, Yang Q, Xue GR, Yu Y (2007) Boosting for transfer learning. In: Proceedings of the 24th International Conference on Machine Learning. ACM, New York, p 193–200

  8. Hao J, Leung H (2012) ABiNeS: an adaptive bilateral negotiating strategy over multiple items. In: Proceedings of the 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2012), Macau, China

  9. Hindriks K, Jonker C, Kraus S, Lin R, Tykhonov D (2009) Genius: negotiation environment for heterogeneous agents. In: Proceedings of the 8th Int. Joint Conf. on Autonomous Agents and Multi-Agent Systems, p 1397–1398

  10. Jennings NR, Faratin P, Lomuscio AR, Parsons S, Sierra C, Wooldridge M (2001) Automated negotiation: prospects, methods and challenges. Int J Group Decis Negot 10(2):199–215

  11. Lau RY, Li Y, Song D, Kwok RCW (2008) Knowledge discovery for adaptive negotiation agents in e-marketplaces. Decis Support Syst 45(2):310–323

  12. Pan SJ, Yang Q (2010) A survey on transfer learning. IEEE Trans Knowl Data Eng 22(10):1345–1359

  13. Park S, Yang S (2008) An efficient multilateral negotiation system for pervasive computing environments. Eng Appl Artif Intell 21(4):633–643

  14. Ponka I (2009) Commitment models and concurrent bilateral negotiation strategies in dynamic service markets. PhD thesis, University of Southampton, School of Electronics and Computer Science

  15. Raiffa H (1982) The art and science of negotiation. Harvard University Press, Cambridge

  16. Rasmussen CE, Williams CKI (2006) Gaussian processes for machine learning. MIT Press, Cambridge

  17. Rubinstein A (1982) Perfect equilibrium in a bargaining model. Econometrica 50(1):97–109

  18. Taylor ME, Stone P (2009) Transfer learning for reinforcement learning domains: a survey. J Mach Learn Res 10:1633–1685

  19. Wang M, Wang H, Vogel D, Kumar K, Chiu DK (2009) Agent-based negotiation and decision making for dynamic supply chain formation. Eng Appl Artif Intell 22(7):1046–1055

Author information

Corresponding author

Correspondence to Siqi Chen.

About this article

Cite this article

Chen, S., Ammar, H.B., Tuyls, K. et al. Transfer for Automated Negotiation. Künstl Intell 28, 21–27 (2014). https://doi.org/10.1007/s13218-013-0284-x
