

On a new method of Markov chain reduction
Authors:D Racoceanu  A Elmoudni  M Ferney  S Zerhouni
Institution:Laboratoire de Mécanique et Productique, École Nationale d'Ingénieurs de Belfort, Espace Bartholdi, Belfort Technopole, B.P. 525, 90016 Belfort, France. Phone: +33-84.58.23.41; Fax: +33-84.58.23.41
Abstract:The practical usefulness of Markov models and Markov decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore of considerable interest.

The long-run behaviour of a homogeneous finite Markov chain is determined by its persistent states, obtained after decomposing the chain into classes of communicating states. In this paper we present a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state that is independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process enters it.
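The steady-state distribution mentioned above can be computed numerically by solving πP = π with the normalisation Σπ = 1. A minimal sketch (the 3-state transition matrix below is a hypothetical example, not from the paper):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi @ P = pi, sum(pi) = 1 for an irreducible transition matrix P."""
    n = P.shape[0]
    # Rewrite pi (P - I) = 0 as (P.T - I) pi = 0 and append the
    # normalisation constraint sum(pi) = 1 as an extra row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical irreducible 3-state chain.
P = np.array([[0.9, 0.08, 0.02],
              [0.2, 0.7,  0.1 ],
              [0.3, 0.3,  0.4 ]])
pi = stationary_distribution(P)
```

Because the class is ergodic, the same π is obtained regardless of the distribution the chain starts from.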

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then permits an assumption that yields the reduction: the ergodic class is reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system retains the stochastic property, so it is again a Markov chain.
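The paper's exact reduction formula is not reproduced in this abstract. As a rough sketch of the idea, the following assumes the reduction amounts to keeping the k states with the largest steady-state probability (the "strong" part) and renormalising the retained rows so the reduced matrix remains stochastic; the 4-state chain and the helper name are illustrative assumptions:

```python
import numpy as np

def reduce_to_strong_part(P, pi, k):
    """Keep the k states with the largest steady-state probability and
    renormalise the retained rows so the reduced chain stays stochastic.
    Illustrative truncation only, not the paper's exact formula."""
    order = np.argsort(pi)[::-1]          # states in decreasing pi
    keep = np.sort(order[:k])             # indices of the 'strong' part
    Q = P[np.ix_(keep, keep)]             # restriction to kept states
    Q = Q / Q.sum(axis=1, keepdims=True)  # restore row sums to 1
    return keep, Q

# Hypothetical Two-Time-Scale chain: states 0 and 1 form a slowly
# evolving, frequently visited strong part; states 2 and 3 are rare.
P = np.array([[0.94, 0.05, 0.005, 0.005],
              [0.05, 0.94, 0.005, 0.005],
              [0.45, 0.45, 0.05,  0.05 ],
              [0.45, 0.45, 0.05,  0.05 ]])
pi = np.linalg.matrix_power(P, 200)[0]    # approximate steady state
keep, Q = reduce_to_strong_part(P, pi, 2)
```

The renormalisation step is what preserves the stochastic property, so the reduced system is again a Markov chain.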
Keywords:Markov chains; steady-state distribution; Two-Time-Scale