Application of a Near-Optimal Reinforcement Learning Controller to a Robotics Problem in Manufacturing: A Hybrid Approach
Authors: Warren E. Hearnes II, Augustine O. Esogbue
Institution: (1) Intelligent Systems and Controls Laboratory, School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205
Abstract: Optimization theory provides a framework for determining the best decisions or actions with respect to a mathematical model of a process. This paper focuses on learning to act in a near-optimal manner through reinforcement learning for problems that either have no model or whose model is too complex. One approach to solving this class of problems is approximate dynamic programming, but such methods are established primarily for discrete state and action spaces. In this paper we develop efficient learning methods that act in complex systems with continuous state and action spaces. Monte-Carlo approaches are employed to estimate function values in an iterative, incremental procedure. Derivative-free line search methods are used to obtain a near-optimal action in the continuous action space for a discrete subset of the state space. This near-optimal control policy is then extended to the entire continuous state space via a fuzzy additive model. To compensate for approximation errors, a modified procedure for perturbing the generated control policy is developed. Convergence results under moderate assumptions, as well as stopping criteria, are established.
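The core step the abstract describes, obtaining a near-optimal action at a single grid state by derivative-free line search over the continuous action space, with the cost-to-go estimated by Monte-Carlo rollouts, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy linear system, quadratic stage cost, rollout count, and golden-section search as the specific derivative-free method are all assumptions made for the example.

```python
import random

def golden_section_search(f, lo, hi, tol=1e-3):
    """Derivative-free line search: shrink [lo, hi] around a minimiser of f."""
    gr = (5 ** 0.5 - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - gr * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + gr * (b - a)
            fd = f(d)
    return (a + b) / 2

def mc_cost(state, action, rollouts=200):
    """Monte-Carlo estimate of the expected one-step cost for an assumed
    toy system x' = 0.9*x + u + noise with quadratic stage cost."""
    total = 0.0
    for _ in range(rollouts):
        nxt = 0.9 * state + action + random.gauss(0.0, 0.05)
        total += nxt ** 2 + 0.1 * action ** 2
    return total / rollouts

# Near-optimal action at one discrete grid state, searched over a
# continuous action interval; repeating this over a grid of states
# yields the discrete policy the fuzzy additive model then interpolates.
random.seed(0)
state = 1.0
a_star = golden_section_search(lambda u: mc_cost(state, u), -2.0, 2.0)
```

For this quadratic toy cost the analytic minimiser is u = -0.9x/1.1 ≈ -0.82, so `a_star` lands near that value up to Monte-Carlo noise; the line search never needs gradients of the cost estimate, which is the point of using a derivative-free method with sampled values.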
Keywords: fuzzy sets; reinforcement learning; near-optimal control
This article is indexed by SpringerLink and other databases.