Neuroevolution strategies for episodic reinforcement learning
Authors: Verena Heidrich-Meisner, Christian Igel
Institution: Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Abstract:Because of their convincing performance, there is a growing interest in using evolutionary algorithms for reinforcement learning. We propose learning of neural network policies by the covariance matrix adaptation evolution strategy (CMA-ES), a randomized variable-metric search algorithm for continuous optimization. We argue that this approach, which we refer to as CMA Neuroevolution Strategy (CMA-NeuroES), is ideally suited for reinforcement learning, in particular because it is based on ranking policies (and therefore robust against noise), efficiently detects correlations between parameters, and infers a search direction from scalar reinforcement signals. We evaluate the CMA-NeuroES on five different (Markovian and non-Markovian) variants of the common pole balancing problem. The results are compared to those described in a recent study covering several RL algorithms, and the CMA-NeuroES shows the overall best performance.
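The approach described in the abstract can be illustrated with a short sketch. The code below is a simplified evolution strategy with a rank-mu covariance update, not the full CMA-ES from the paper: evolution paths and cumulative step-size adaptation are replaced by a fixed geometric decay of the step size, and the fitness function, population sizing, and learning rate are illustrative assumptions. It does show the two properties the abstract emphasizes: selection depends only on the *ranking* of candidate policies (making it robust to noisy episodic returns), and the covariance update lets the search learn correlations between parameters.

```python
import numpy as np

def simplified_cma_es(f, x0, sigma=0.5, generations=300, seed=0):
    """Rank-based evolution strategy with a rank-mu covariance update.

    Illustrative sketch only: the full CMA-ES additionally maintains
    evolution paths and cumulative step-size adaptation, which are
    replaced here by a fixed geometric decay of sigma.
    """
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))                 # population size
    mu = lam // 2                                 # parents kept per generation
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                  # recombination weights
    mu_eff = 1.0 / np.sum(w ** 2)
    c_mu = min(1.0, mu_eff / n ** 2)              # covariance learning rate
    m, C = np.asarray(x0, dtype=float), np.eye(n)
    for _ in range(generations):
        A = np.linalg.cholesky(C)                 # sample steps y ~ N(0, C)
        ys = rng.standard_normal((lam, n)) @ A.T
        xs = m + sigma * ys
        order = np.argsort([f(x) for x in xs])    # only the RANKING of fitness
        y_sel = ys[order[:mu]]                    # values is used for selection
        m = m + sigma * (w @ y_sel)               # recombine elite steps into mean
        C = (1 - c_mu) * C + c_mu * (y_sel.T * w) @ y_sel  # rank-mu update
        sigma *= 0.98                             # simplified step-size schedule
    return m
```

In the paper's setting, `f` would be the (negated) episodic return obtained by running a neural network policy whose weight vector is the search point; here any scalar fitness can be plugged in, e.g. a quadratic test function.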
Keywords: Reinforcement learning; Evolution strategy; Covariance matrix adaptation; Partially observable Markov decision process; Direct policy search
Indexed in ScienceDirect and other databases.