Abstract
General Game Playing (GGP) aims to develop agents that can play a variety of games and, without pre-programmed game-specific knowledge, become proficient players. The absence of such knowledge has prompted a range of techniques for tackling the problem. Most GGP players use standard tree search enhanced by automatic heuristic learning, neuroevolution, or UCT (Upper Confidence bounds applied to Trees), a simulation-based tree search. In this paper, we explore a new approach to GGP: an Ant Colony System (ACS) that explores the game space and evolves strategies for game playing. Each ant in the ACS is a player with an assigned role and forages through the game's state space, searching for promising paths to victory. To test the architecture, we create matches between players using the knowledge learnt by the ACS and random players. Preliminary results show the approach to be promising.
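The abstract's core idea, ants foraging through a game's state space and reinforcing promising paths, can be sketched with the standard Ant Colony System update rules. The sketch below is illustrative only: the parameter names (`Q0`, `RHO`, `ALPHA`, `TAU0`) follow Dorigo and Gambardella's original ACS formulation, and the reward convention (1.0 for a win, 0.0 for a loss) is an assumption, since the paper's actual rules and values are not given here.

```python
import random

TAU0 = 0.1   # initial pheromone on every (state, move) edge
RHO = 0.1    # local evaporation rate
ALPHA = 0.1  # global reinforcement rate
Q0 = 0.9     # probability of exploiting the best-known move

pheromone = {}  # (state, move) -> pheromone level

def tau(state, move):
    """Pheromone on a state-move edge, defaulting to TAU0."""
    return pheromone.get((state, move), TAU0)

def choose_move(state, moves):
    """ACS pseudo-random proportional rule: with probability Q0 take the
    move with the most pheromone, otherwise sample proportionally."""
    if random.random() < Q0:
        return max(moves, key=lambda m: tau(state, m))
    total = sum(tau(state, m) for m in moves)
    r = random.uniform(0, total)
    acc = 0.0
    for m in moves:
        acc += tau(state, m)
        if acc >= r:
            return m
    return moves[-1]

def local_update(state, move):
    """Evaporate pheromone on the edge just taken, nudging later
    ants toward unexplored moves."""
    pheromone[(state, move)] = (1 - RHO) * tau(state, move) + RHO * TAU0

def global_update(path, reward):
    """Reinforce every edge on the path an ant walked, in proportion
    to the game outcome it achieved."""
    for state, move in path:
        pheromone[(state, move)] = (1 - ALPHA) * tau(state, move) + ALPHA * reward
```

In this reading, each ant plays out a game using `choose_move`, applies `local_update` after every move, and after the terminal state applies `global_update` over its whole path, so moves that lie on winning paths accumulate pheromone across simulations.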
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Sharma, S., Kobti, Z., Goodwin, S. (2008). General Game Playing with Ants. In: Li, X., et al. Simulated Evolution and Learning. SEAL 2008. Lecture Notes in Computer Science, vol 5361. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89694-4_39
DOI: https://doi.org/10.1007/978-3-540-89694-4_39
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-89693-7
Online ISBN: 978-3-540-89694-4