
General Game Playing with Ants

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5361)

Abstract

General Game Playing (GGP) aims at developing game-playing agents that can play a variety of games and, in the absence of pre-programmed game-specific knowledge, become proficient players. The challenge of building such a player has produced a range of techniques for coping with this lack of game-specific knowledge. Most GGP players use standard tree-search techniques enhanced by automatic heuristic learning, neuroevolution, or UCT (Upper Confidence bounds applied to Trees), a simulation-based tree search. In this paper, we explore a new approach to GGP: an Ant Colony System (ACS) that explores the game space and evolves strategies for game playing. Each ant in the ACS is a player with an assigned role and forages through the game's state space, searching for promising paths to victory. To test the architecture, we play matches between players that use the knowledge learnt by the ACS and random players. Preliminary results show this approach to be promising.
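The abstract describes the approach only at a high level, so the following is a rough illustrative sketch rather than the authors' implementation: ACS-style pheromone learning applied to a toy single-pile Nim game. Each ant plays a game out to a terminal state, pheromone is reinforced on the (state, move) edges chosen by the eventual winner, and a greedy player built from the learnt pheromone is then matched against a random player. All parameter values (ALPHA, RHO, Q0), the choice of game, and every function name here are assumptions made for illustration only.

```python
import random
from collections import defaultdict

# Toy game: single-pile Nim. A move removes 1-3 stones; taking the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def apply_move(stones, move):
    return stones - move

def is_terminal(stones):
    return stones == 0

# ACS bookkeeping: pheromone tau on (state, move) edges, initialised uniformly.
pheromone = defaultdict(lambda: 1.0)
ALPHA, RHO, Q0 = 0.1, 0.1, 0.9   # global update rate, local evaporation, exploitation probability

def choose_move(stones):
    moves = legal_moves(stones)
    if random.random() < Q0:                         # exploit the strongest trail
        return max(moves, key=lambda m: pheromone[(stones, m)])
    weights = [pheromone[(stones, m)] for m in moves]
    return random.choices(moves, weights=weights)[0]  # otherwise explore proportionally

def run_ant(start):
    """One ant forages from the start state to a terminal state and updates pheromone."""
    stones, player, path = start, 0, []
    while not is_terminal(stones):
        move = choose_move(stones)
        path.append((stones, move, player))
        # local update: mild evaporation towards the initial level keeps later ants exploring
        pheromone[(stones, move)] = (1 - RHO) * pheromone[(stones, move)] + RHO * 1.0
        stones, player = apply_move(stones, move), 1 - player
    winner = 1 - player                               # the player who took the last stone
    for s, m, p in path:                              # global update: reinforce the winner's moves
        reward = 1.0 if p == winner else 0.0
        pheromone[(s, m)] = (1 - ALPHA) * pheromone[(s, m)] + ALPHA * reward
    return winner

# Foraging phase: send many ants through the game.
for _ in range(2000):
    run_ant(start=10)

# Evaluation phase: a greedy player built from the learnt pheromone vs. a random player.
def greedy_move(stones):
    return max(legal_moves(stones), key=lambda m: pheromone[(stones, m)])

wins = 0
for _ in range(200):
    stones, player = 10, 0
    while not is_terminal(stones):
        move = greedy_move(stones) if player == 0 else random.choice(legal_moves(stones))
        stones, player = apply_move(stones, move), 1 - player
    wins += (1 - player == 0)                         # player 0 made the winning last move
print(f"ACS-informed player won {wins}/200 games against a random player")
```

In this sketch the pheromone table plays the role the paper assigns to the knowledge learnt by the colony; a real GGP player would derive states and legal moves from the Game Description Language rules rather than from a hard-coded game.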





Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sharma, S., Kobti, Z., Goodwin, S. (2008). General Game Playing with Ants. In: Li, X., et al. Simulated Evolution and Learning. SEAL 2008. Lecture Notes in Computer Science, vol 5361. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89694-4_39


  • DOI: https://doi.org/10.1007/978-3-540-89694-4_39

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-89693-7

  • Online ISBN: 978-3-540-89694-4

  • eBook Packages: Computer Science, Computer Science (R0)
