Title: Neuroevolution strategies for episodic reinforcement learning

Authors: Verena Heidrich-Meisner, Christian Igel

Institution: Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany

Abstract: Because of their convincing performance, there is a growing interest in using evolutionary algorithms for reinforcement learning. We propose learning neural network policies with the covariance matrix adaptation evolution strategy (CMA-ES), a randomized variable-metric search algorithm for continuous optimization. We argue that this approach, which we refer to as CMA Neuroevolution Strategy (CMA-NeuroES), is ideally suited for reinforcement learning, in particular because it is based on ranking policies (and is therefore robust against noise), efficiently detects correlations between parameters, and infers a search direction from scalar reinforcement signals. We evaluate the CMA-NeuroES on five different (Markovian and non-Markovian) variants of the common pole balancing problem. The results are compared to those described in a recent study covering several RL algorithms, and the CMA-NeuroES shows the overall best performance.

Keywords: Reinforcement learning; Evolution strategy; Covariance matrix adaptation; Partially observable Markov decision process; Direct policy search
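The search algorithm at the core of CMA-NeuroES can be illustrated with a minimal sketch. The following is not the paper's implementation but a compact CMA-ES following the standard update equations (rank-based selection, cumulative step-size control, and rank-one plus rank-mu covariance updates); a toy quadratic fitness stands in for an episodic pole-balancing return, and all function names and parameter defaults are our own.

```python
import numpy as np

def cma_es(f, x0, sigma=0.5, max_gen=300, seed=0):
    """Minimal (mu/mu_w, lambda)-CMA-ES minimizing f, starting from x0."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))               # population size
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                               # recombination weights
    mueff = 1.0 / np.sum(w ** 2)

    # strategy parameters (standard default settings)
    cc = (4 + mueff / n) / (n + 4 + 2 * mueff / n)
    cs = (mueff + 2) / (n + mueff + 5)
    c1 = 2 / ((n + 1.3) ** 2 + mueff)
    cmu = min(1 - c1, 2 * (mueff - 2 + 1 / mueff) / ((n + 2) ** 2 + mueff))
    damps = 1 + 2 * max(0.0, np.sqrt((mueff - 1) / (n + 1)) - 1) + cs
    chiN = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))

    m = np.array(x0, float)                    # distribution mean
    C = np.eye(n)                              # covariance matrix
    ps = np.zeros(n)                           # step-size evolution path
    pc = np.zeros(n)                           # covariance evolution path
    best = np.inf

    for g in range(max_gen):
        # sample lambda offspring from N(m, sigma^2 * C)
        D2, B = np.linalg.eigh(C)
        D = np.sqrt(np.maximum(D2, 1e-20))
        z = rng.standard_normal((lam, n))
        y = z @ np.diag(D) @ B.T               # y ~ N(0, C)
        x = m + sigma * y
        fit = np.array([f(xi) for xi in x])
        order = np.argsort(fit)                # selection uses only ranks
        best = min(best, float(fit[order[0]]))

        y_sel = y[order[:mu]]
        y_w = w @ y_sel                        # weighted mean step
        m = m + sigma * y_w

        # cumulation for step-size and covariance paths
        C_invsqrt = B @ np.diag(1 / D) @ B.T
        ps = (1 - cs) * ps + np.sqrt(cs * (2 - cs) * mueff) * (C_invsqrt @ y_w)
        hsig = (np.linalg.norm(ps)
                / np.sqrt(1 - (1 - cs) ** (2 * (g + 1))) / chiN
                < 1.4 + 2 / (n + 1))
        pc = (1 - cc) * pc + hsig * np.sqrt(cc * (2 - cc) * mueff) * y_w

        # covariance matrix adaptation: rank-one + rank-mu updates
        C = ((1 - c1 - cmu) * C
             + c1 * (np.outer(pc, pc) + (1 - hsig) * cc * (2 - cc) * C)
             + cmu * sum(wi * np.outer(yi, yi) for wi, yi in zip(w, y_sel)))

        # step-size control via cumulative path length
        sigma *= np.exp((cs / damps) * (np.linalg.norm(ps) / chiN - 1))

    return m, best
```

In a neuroevolution setting, the search point `x` would hold the weights of a neural network policy and `f` would return the negated episodic return; because only the ranking of candidates enters the updates, noisy reinforcement signals affect the search less than they would a gradient estimate.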
Indexed in ScienceDirect and other databases.