Complexity bounds for approximately solving discounted MDPs by value iterations
Abstract: For an infinite-horizon discounted Markov decision process with a finite number of states and actions, this note provides upper bounds on the number of operations required to compute an approximately optimal policy by value iterations, in terms of the discount factor, the spread of the reward function, and the desired closeness to optimality. One of the provided upper bounds on the number of iterations is a non-decreasing function of the discount factor.
Keywords: Markov decision process; Discounting; Algorithm; Complexity; Optimal policy
Indexed in ScienceDirect and other databases.
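
As context for the kind of procedure the note analyzes, below is a minimal sketch of value iteration with the classical sup-norm stopping rule from the standard textbook treatment. The function name, array layout, and the threshold eps * (1 - beta) / (2 * beta) are illustrative assumptions, not details taken from the note itself.

```python
import numpy as np

def value_iteration(P, r, beta, eps):
    """Compute an eps-optimal policy for a finite discounted MDP.

    P    -- transitions, shape (A, S, S): P[a, s, t] = Pr(t | s, a)
    r    -- one-step rewards, shape (S, A)
    beta -- discount factor, 0 < beta < 1
    eps  -- desired closeness to optimality
    """
    S, A = r.shape
    v = np.zeros(S)
    # Classical stopping rule: once successive iterates differ by less than
    # eps * (1 - beta) / (2 * beta) in sup-norm, the greedy policy with
    # respect to the current iterate is eps-optimal.
    tol = eps * (1.0 - beta) / (2.0 * beta)
    iterations = 0
    while True:
        # Bellman update: q[s, a] = r[s, a] + beta * sum_t P[a, s, t] * v[t]
        q = r + beta * (P @ v).T          # (A, S), transposed to (S, A)
        v_new = q.max(axis=1)
        iterations += 1
        if np.max(np.abs(v_new - v)) < tol:
            return q.argmax(axis=1), iterations
        v = v_new

# Hypothetical 2-state, 2-action example.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
r = np.array([[1.0, 0.5],
              [0.0, 2.0]])
policy, k = value_iteration(P, r, beta=0.9, eps=1e-3)
```

Under this stopping rule, the contraction argument bounds the iteration count by roughly log(sp(r) / (eps * (1 - beta))) / log(1 / beta), where sp(r) is the spread of the reward function; this estimate blows up as beta approaches 1, which is what makes a bound that is merely non-decreasing in the discount factor, as described in the abstract, noteworthy.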