
On Monte Carlo Tree Search and Reinforcement Learning

Vodopivec, Tom, Samothrakis, Spyridon and Šter, Branko (2017) 'On Monte Carlo Tree Search and Reinforcement Learning.' Journal of Artificial Intelligence Research, 60. pp. 881-936. ISSN 1076-9757

live-5507-10333-jair.pdf - Published Version (1MB)

Abstract

Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not yet been thoroughly studied. In this paper we re-examine this close relation between the two fields in depth; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search leads to a wealth of new algorithms, of which traditional MCTS is only one variant. We confirm that planning methods inspired by RL, in conjunction with online search, demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
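To illustrate the idea of adapting RL semantics within tree search, here is a minimal sketch (not the paper's own pseudocode) of a UCT-style search whose backup phase is written as a TD(λ)-style value update rather than a hard-coded Monte Carlo average. The names `Node`, `select`, `ucb1` and `backup`, the terminal-only reward assumption, and the parameters `alpha`, `lam` and `gamma` are illustrative choices for this sketch; the intended point is only that the standard averaging backup falls out as one setting of such parameters.

```python
# Sketch: UCT-style selection with an RL-style (TD(lambda)-like) backup.
# Assumes rewards arrive only at the end of a playout, as in board games.
import math

class Node:
    def __init__(self):
        self.children = {}   # action -> child Node
        self.visits = 0
        self.value = 0.0     # estimated state value

def ucb1(parent, child, c=math.sqrt(2)):
    """UCB1 selection score; unvisited children are tried first."""
    if child.visits == 0:
        return float("inf")
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def select(root):
    """Descend the tree greedily by UCB1 and return the visited path."""
    path = [root]
    node = root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ucb1(path[-1], ch))
        path.append(node)
    return path

def backup(path, reward, alpha=None, lam=1.0, gamma=1.0):
    """Backward value update along the visited path.

    With alpha = 1 / visits, lam = 1 and gamma = 1 this reduces to the
    classic Monte Carlo averaging backup of standard UCT; other settings
    give bootstrapped, discounted variants.
    """
    ret = reward                          # lambda-return seen from the leaf
    for node in reversed(path):
        node.visits += 1
        step = (1.0 / node.visits) if alpha is None else alpha
        node.value += step * (ret - node.value)
        # Blend the sampled return with the node's bootstrapped estimate
        # before passing it one step up the tree (zero intermediate reward).
        ret = gamma * (lam * ret + (1.0 - lam) * node.value)
```

Varying the learning rate, eligibility-trace decay and discounting in such a backup is one way to obtain the family of search algorithms the abstract refers to, with plain MCTS recovered as a special case.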

Item Type: Article
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Science and Health > Computer Science and Electronic Engineering, School of
Depositing User: Elements
Date Deposited: 02 Mar 2018 14:34
Last Modified: 21 Sep 2018 16:15
URI: http://repository.essex.ac.uk/id/eprint/21129
