Robust Dynamics and Control of a Partially Observed Markov Chain
Authors: R. J. Elliott, W. P. Malcolm, J. B. Moore
Institution: (1) Haskayne School of Business, Scurfield Hall, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4; (2) National ICT Australia, Locked Bag 8001, Canberra, ACT 2601, Australia
Abstract: In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721–734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits; see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham–Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models: Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established.
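To make the filtering technique referred to in the abstract concrete, the sketch below is a minimal numerical illustration, not code from the paper, of Clark's gauge-transformed ("robust") Wonham filter for a finite-state chain observed in unit-variance Gaussian noise. The rate matrix Q, observation vector c, the explicit Euler discretisation, and all function names are assumptions made for this example; the paper's control and adjoint-process results are not reproduced here.

```python
import numpy as np

def simulate_chain_and_observations(Q, c, T, dt, rng):
    """Simulate a finite-state Markov chain with rate matrix Q (rows sum to
    zero) and the observation path y_t = int_0^t <c, X_s> ds + B_t,
    where B is standard Brownian motion (unit-variance Gaussian noise).
    Illustrative helper, not part of the paper."""
    n_steps = int(T / dt)
    N = Q.shape[0]
    x = 0                                     # initial state (assumed)
    states = np.empty(n_steps, dtype=int)
    y = np.zeros(n_steps + 1)
    for k in range(n_steps):
        states[k] = x
        y[k + 1] = y[k] + c[x] * dt + np.sqrt(dt) * rng.standard_normal()
        # Euler approximation of the jump mechanism: i -> j w.p. Q[i, j] * dt
        probs = Q[x] * dt
        probs[x] = 1.0 + Q[x, x] * dt
        x = rng.choice(N, p=probs / probs.sum())
    return states, y

def robust_wonham_filter(Q, c, y, dt, p0):
    """Clark-style 'robust' filter: write the unnormalised filter as
    q_t = Lambda_t qbar_t with Lambda_t = diag(exp(c_i y_t - 0.5 c_i^2 t));
    qbar_t then solves the pathwise linear ODE
        d qbar_t / dt = Lambda_t^{-1} A Lambda_t qbar_t,   A = Q^T,
    so no stochastic integral against y appears, only the observed path y."""
    A = Q.T
    n_steps = len(y) - 1
    qbar = p0.astype(float).copy()
    p_hat = np.empty((n_steps, len(p0)))
    for k in range(n_steps):
        lam = np.exp(c * y[k] - 0.5 * c**2 * (k * dt))   # diagonal of Lambda_t
        p_hat[k] = lam * qbar / np.sum(lam * qbar)       # normalised estimate
        qbar = qbar + dt * (A @ (lam * qbar)) / lam      # explicit Euler step
    return p_hat

# Example: two-state chain observed in Gaussian noise (illustrative values).
rng = np.random.default_rng(0)
Q = np.array([[-0.5, 0.5],
              [1.0, -1.0]])
c = np.array([0.0, 1.0])
dt, T = 1e-3, 5.0
states, y = simulate_chain_and_observations(Q, c, T, dt, rng)
p_hat = robust_wonham_filter(Q, c, y, dt, p0=np.array([0.5, 0.5]))
print("final P(X_T = state 1 | observations) ~", p_hat[-1, 1])
```

Because the gauge-transformed dynamics depend on the observations only through Lambda_t, the filter is continuous in the observation path, which is the robustness property the abstract alludes to.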
Keywords: Reference probability; Jump Markov systems; Hybrid dynamics; Viterbi algorithm; Filtering; Smoothing
This article is indexed by SpringerLink and other databases.