
Rolling horizon methods for games with continuous states and actions

Samothrakis, Spyridon and Roberts, Samuel A and Perez, Diego and Lucas, Simon M (2014) Rolling horizon methods for games with continuous states and actions. In: 2014 IEEE Conference on Computational Intelligence and Games (CIG), 2014-08-26 - 2014-08-29.




It is often the case that games have continuous dynamics and allow for continuous actions, possibly with some added noise. For larger games with complicated dynamics, having agents learn behaviours offline in such a setting is a daunting task. On the other hand, provided a generative model is available, one might try to spread the cost of search/learning in a rolling horizon fashion (e.g. as in Monte Carlo Tree Search). In this paper we compare T-HOLOP (Truncated Hierarchical Open Loop Planning), an open loop planning algorithm at least partially inspired by MCTS, with a version of evolutionary planning that uses CMA-ES (which we call EVO-P), on two planning benchmark problems (the Inverted Pendulum and the Double Integrator) and on Lunar Lander, a classic arcade game. We show that EVO-P outperforms T-HOLOP in the classic benchmarks, while T-HOLOP is unable to find a solution using the same heuristics. We conclude that off-the-shelf evolutionary algorithms can be used successfully in a rolling horizon setting, and that different heuristics may be needed under different optimisation algorithms.
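The rolling horizon scheme the abstract describes can be sketched in a few lines: at each real time step, evolve an open-loop sequence of actions against the generative model, execute only the first action, then shift the plan and re-plan from the new state. The sketch below is a minimal illustration on a Double Integrator (one of the paper's benchmarks); it uses a simple Gaussian-mutation evolution strategy as a stand-in for CMA-ES, and the dynamics constants, cost weights, and population settings are illustrative assumptions, not taken from the paper.

```python
import random

DT = 0.1  # integration step for the illustrative Double Integrator model

def step(state, action):
    # Double Integrator: action is an acceleration applied to velocity.
    x, v = state
    v = v + action * DT
    x = x + v * DT
    return (x, v)

def cost(state, action):
    # Quadratic regulation cost (weights are assumptions, not from the paper).
    x, v = state
    return x * x + v * v + 0.1 * action * action

def rollout(state, plan):
    # Evaluate an open-loop action sequence on the generative model.
    total = 0.0
    for a in plan:
        total += cost(state, a)
        state = step(state, a)
    return total

def evolve_plan(state, horizon=10, pop=20, elites=5, gens=15, seed_plan=None):
    # Simple (mu, lambda) evolution strategy over action sequences.
    # EVO-P uses CMA-ES; this fixed-decay Gaussian ES is a simplified stand-in.
    mean = list(seed_plan) if seed_plan else [0.0] * horizon
    sigma = 1.0
    for _ in range(gens):
        scored = []
        for _ in range(pop):
            cand = [m + random.gauss(0.0, sigma) for m in mean]
            scored.append((rollout(state, cand), cand))
        scored.sort(key=lambda t: t[0])
        best = [c for _, c in scored[:elites]]
        mean = [sum(vals) / elites for vals in zip(*best)]
        sigma *= 0.9  # crude step-size decay in place of CMA adaptation
    return mean

def rolling_horizon_control(state, steps=30, horizon=10):
    plan = None
    for _ in range(steps):
        plan = evolve_plan(state, horizon=horizon, seed_plan=plan)
        state = step(state, plan[0])     # execute only the first action
        plan = plan[1:] + [0.0]          # shift plan to warm-start re-planning
    return state

random.seed(0)
final_state = rolling_horizon_control((2.0, 0.0))
```

Warm-starting each re-planning round from the shifted previous plan is what spreads the optimisation cost over time rather than solving each step from scratch.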

Item Type: Conference or Workshop Item (Paper)
Additional Information: Published proceedings: IEEE Conference on Computational Intelligence and Games, CIG
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Science and Health
Faculty of Science and Health > Computer Science and Electronic Engineering, School of
SWORD Depositor: Elements
Depositing User: Elements
Date Deposited: 04 Dec 2014 13:37
Last Modified: 15 Jan 2022 00:39
