
ADABA: improving the balancing between runtime and accuracy in a new distributed version of the alpha–beta algorithm

Artificial Intelligence Review

Abstract

Intelligent agents designed to solve real-life problems require efficient decision-making algorithms. To handle large state spaces within reasonable runtime limits, these algorithms are distributed following approaches that can be either synchronous or asynchronous. The synchronous approaches guarantee the same results as their serial counterparts by means of synchronization points, which cause the undesirable effects of communication overhead and idle processors. To mitigate this, the asynchronous approaches reduce message exchanges so as to accelerate the runtime without overly compromising response accuracy. The way in which Alpha–Beta improves the minimax technique through pruning makes it a relevant case study in parallelism research. The Young Brothers Wait Concept (YBWC) and Asynchronous Parallel Hierarchical Iterative Deepening (APHID) stand out among the existing Alpha–Beta distributions. Since APHID has proved more suitable than YBWC for distributed-memory architectures, and shared-memory architectures are scarcely available due to their high cost, the primary motivation of this work is to implement the Asynchronous Distributed Alpha–Beta Algorithm (ADABA), which improves on the accuracy and performance of APHID by enhancing the slaves' task-ordering policies, the communication process between processors, and the search-window updating strategy. Experiments carried out through tournaments between ADABA-based and APHID-based Checkers agents showed that the player based on the best ADABA version achieved a victory rate approximately 95% higher and a runtime about two times faster than the APHID-based player, while keeping the same level of response accuracy as its opponent.
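Since the abstract assumes familiarity with the serial algorithm that APHID and ADABA distribute, the sketch below illustrates plain alpha–beta pruning over a generic game tree. It is a minimal illustration under assumed interfaces: the GameState methods (is_terminal, evaluate, moves, apply) are hypothetical and are not taken from the ADABA source code, which additionally implements the distributed master–slave logic described in the paper.

def alpha_beta(state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    # Returns the minimax value of `state`, pruning branches that cannot
    # influence the final decision. (alpha, beta) is the search window the
    # abstract refers to. GameState interface is assumed, not the authors' code.
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        value = float("-inf")
        for move in state.moves():
            value = max(value, alpha_beta(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the minimizing player avoids this branch
                break
        return value
    else:
        value = float("inf")
        for move in state.moves():
            value = min(value, alpha_beta(state.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: the maximizing player avoids this branch
                break
        return value

In the distributed setting studied here, the gains come from how the (alpha, beta) window is updated and shared asynchronously among processors and from how the slaves order their tasks; the serial sketch above does not attempt to model those mechanisms.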


Data availability

The dataset used in the experiments of this work is available in a public Git repository at https://github.com/ldbononi/adaba.

Code availability

The source code is available in a public Git repository at https://github.com/ldbononi/adaba.

Consent for publication

Not applicable.


Funding

No funds, grants, or other support was received.

Author information

Contributions

Not applicable.

Corresponding author

Correspondence to Lidia Bononi Paiva Tomaz.

Ethics declarations

Conflict of interest

The authors have only non-financial interests to declare. Lídia B. P. Tomaz is a Professor at the Federal Institute of Triângulo Mineiro, Brazil; Rita M. S. Julia is a Full Professor at the Federal University of Uberlândia, Brazil; and Matheus P. P. Faria is a PhD student at the Federal University of Uberlândia, Brazil. This work therefore contributes to research in Artificial Intelligence at both institutions where the authors carry out their academic activities.

Ethical approval

Not applicable.

Informed consent

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Tomaz, L.B.P., Julia, R.M.S. & Faria, M.P.P. ADABA: improving the balancing between runtime and accuracy in a new distributed version of the alpha–beta algorithm. Artif Intell Rev 56, 4255–4293 (2023). https://doi.org/10.1007/s10462-022-10269-3


