On the computation of the optimal cost function for discrete time Markov models with partial observations
Authors: Enrique L. Sernik, Steven I. Marcus
Institution: (1) Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas 78712-1084, USA
Abstract: We consider several applications of two-state, finite-action, infinite-horizon, discrete-time Markov decision processes with partial observations, for two special cases of observation quality, and show that in each of these cases the optimal cost function is piecewise linear. This in turn allows us to obtain either explicit formulas or simplified algorithms to compute the optimal cost function and the associated optimal control policy. Several examples are presented.
Acknowledgments: Research supported in part by the Air Force Office of Scientific Research under Grant AFOSR-86-0029, in part by the National Science Foundation under Grant ECS-8617860, in part by the Advanced Technology Program of the State of Texas, and in part by the DoD Joint Services Electronics Program through the Air Force Office of Scientific Research (AFSC) under Contract F49620-86-C-0045.
Keywords: Markov chains (finite state and action spaces, partial observations), dynamic programming (infinite horizon, value iteration algorithm)
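
For readers who want to experiment with the setting described in the abstract, the following is a minimal, self-contained sketch of value iteration over a discretized belief (information) state for a generic two-state, two-action, two-observation, discounted POMDP. All transition, observation, and cost numbers below are placeholder assumptions, not values from the paper, and the sketch does not exploit the special observation-quality structure the authors use to establish piecewise linearity.

```python
import numpy as np

# Hypothetical two-state, two-action, two-observation model.
# All numbers are placeholders, not taken from the paper.
P = np.array([[[0.9, 0.1],      # P[a, s, s'] = transition probability under action a
               [0.3, 0.7]],
              [[0.6, 0.4],
               [0.2, 0.8]]])
Q = np.array([[[0.8, 0.2],      # Q[a, s', o] = observation probability after action a
               [0.2, 0.8]],
              [[0.8, 0.2],
               [0.2, 0.8]]])
c = np.array([[1.0, 4.0],       # c[a, s] = one-step cost of action a in state s
              [2.0, 2.5]])
beta = 0.9                      # discount factor

# For a two-state chain the belief simplex is the interval [0, 1]:
# a belief is summarized by p = Prob(state = 0).
grid = np.linspace(0.0, 1.0, 201)
V = np.zeros_like(grid)         # value-iteration estimate of the optimal cost

def backup(V):
    """One Bellman backup of the partially observed control problem."""
    V_new = np.empty_like(V)
    for i, p in enumerate(grid):
        b = np.array([p, 1.0 - p])              # current belief vector
        best = np.inf
        for a in range(2):
            expected_cost = b @ c[a]
            future = 0.0
            for o in range(2):
                # Unnormalized updated belief and probability of observing o.
                unnorm = (b @ P[a]) * Q[a][:, o]
                prob_o = unnorm.sum()
                if prob_o > 1e-12:
                    p_next = unnorm[0] / prob_o
                    # Interpolate the current estimate at the updated belief.
                    future += prob_o * np.interp(p_next, grid, V)
            best = min(best, expected_cost + beta * future)
        V_new[i] = best
    return V_new

for _ in range(300):
    V_next = backup(V)
    if np.max(np.abs(V_next - V)) < 1e-8:
        break
    V = V_next
```

The point of the paper, by contrast, is that for the two special cases of observation quality considered, the optimal cost function is piecewise linear, so this kind of brute-force belief-grid iteration can be replaced by explicit formulas or simplified algorithms for the optimal cost and the associated optimal policy.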
This article is indexed in SpringerLink and other databases.