Similar Literature: 20 records found.
1.
This paper discusses finite-dimensional optimal filters for partially observed Markov chains. A model is considered for a system containing a finite number of components, each of which behaves like an independent finite-state continuous-time Markov chain. Using measure-change techniques, various estimators are derived.
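The filters in this record are derived in continuous time via a change of measure; as a rough illustration of what a finite-dimensional recursive filter computes, here is a minimal discrete-time sketch (a standard forward/HMM recursion, not the construction of the paper). The transition matrix, emission matrix, and function name are assumptions made purely for the example.

```python
import numpy as np

def hmm_filter(pi0, P, emission, observations):
    """Recursive (forward) filter for a finite-state Markov chain observed
    through a memoryless noisy channel.

    pi0          -- initial distribution over the hidden states
    P            -- one-step transition matrix (rows sum to 1)
    emission     -- emission[x, y] = P(observe y | hidden state x)
    observations -- sequence of observed symbols
    Returns the sequence of conditional (filtered) distributions."""
    pi = np.asarray(pi0, dtype=float)
    filtered = []
    for y in observations:
        pi = pi @ P                      # predict one step ahead
        pi = pi * emission[:, y]         # reweight by the likelihood of y
        pi = pi / pi.sum()               # normalize
        filtered.append(pi.copy())
    return filtered

# Tiny illustration: a 2-state chain observed with 20% symbol noise.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
emission = np.array([[0.8, 0.2],
                     [0.2, 0.8]])
print(hmm_filter([0.5, 0.5], P, emission, [0, 0, 1, 1, 1]))
```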

2.
The control of piecewise-deterministic processes is studied where only local boundedness of the data is assumed. Moreover, the discount rate may be zero. The value function is shown to be a solution to the Bellman equation in a weak sense; however, the solution concept is strong enough to generate optimal policies. Continuity and compactness conditions are given for the existence of nonrelaxed optimal feedback controls.

3.
We obtain sufficient criteria for central limit theorems (CLTs) for ergodic continuous-time Markov chains (CTMCs). We apply the results to establish CLTs for continuous-time single birth processes. Moreover, we present an explicit expression for the time-average variance constant of a single birth process whenever a CLT exists. Several examples are given to illustrate these results.
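The explicit variance-constant formula is the paper's contribution; the sketch below only checks the CLT scaling numerically for a simple birth-death chain (an M/M/1 queue length), under assumed rates. T * Var(time average) should stabilize near the time-average variance constant as T grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_average(lam, mu, T, f=lambda x: x):
    """Simulate an M/M/1 queue-length CTMC (birth rate lam, death rate mu,
    reflecting at 0, started at 0) and return the time average of f over [0, T]."""
    t, x, integral = 0.0, 0, 0.0
    while t < T:
        rate = lam + (mu if x > 0 else 0.0)
        dt = min(rng.exponential(1.0 / rate), T - t)
        integral += f(x) * dt
        t += dt
        if t >= T:
            break
        x += 1 if rng.random() < lam / rate else -1
    return integral / T

# CLT scaling check: T * Var(A_T) should approach the variance constant sigma^2.
lam, mu, T, reps = 0.5, 1.0, 1000.0, 200
samples = np.array([time_average(lam, mu, T) for _ in range(reps)])
m = lam / (mu - lam)                       # stationary mean queue length
print("time-average mean approx.", samples.mean(), "(stationary mean =", m, ")")
print("estimated sigma^2 approx.", T * samples.var())
```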

4.
In this paper we obtain identities for some stopped Markov chains. These identities give a unified approach to many problems in the optimal stopping of a Markovian sequence, the extinction probability of a Markovian branching process, and martingale theory.
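One classical instance of the kind of stopped-chain/martingale identity alluded to here, relating a bounded martingale built from a Markov chain to the extinction probability of a branching process; this is a textbook example, not necessarily one of the paper's identities:

$$
\begin{aligned}
&\text{Let } (Z_n)_{n\ge 0} \text{ be a Galton--Watson branching process with } Z_0=1,\\
&\text{offspring generating function } f(s)=\mathbb{E}\,s^{Z_1}, \text{ and let } q \text{ be the smallest root of } f(s)=s \text{ in } [0,1].\\
&\text{Then } M_n:=q^{Z_n} \text{ is a bounded martingale, since }
\mathbb{E}\bigl[q^{Z_{n+1}}\mid Z_0,\dots,Z_n\bigr]=f(q)^{Z_n}=q^{Z_n}.\\
&\text{Because } Z_n\to 0 \text{ (extinction) or } Z_n\to\infty, \text{ bounded convergence yields, for } q<1,\\
&\qquad q=\mathbb{E}M_0=\lim_n \mathbb{E}M_n=\mathbb{E}\Bigl[\lim_n q^{Z_n}\Bigr]=\mathbb{P}(\text{extinction}).
\end{aligned}
$$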

5.
In this paper, subgeometric ergodicity is investigated for continuous-time Markov chains. Several equivalent conditions, based on the first hitting time or the drift function, are derived as the main theorem. In its corollaries, practical drift criteria are given for ℓ-ergodicity, and computable bounds on subgeometric convergence rates are obtained for stochastically monotone Markov chains. These results are illustrated by examples.

6.
We suggest an approach to obtaining general stability estimates in terms of special "weighted" norms related to total variation. Two important classes of continuous-time Markov chains are considered for which it is possible to obtain exact convergence rate estimates (and hence guarantee exact stability estimates): birth–death–catastrophe processes, and queueing models with batch arrivals and group services.
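As a purely numerical companion to this kind of stability question, the sketch below compares, in total variation, the transient distributions of two birth-death-catastrophe generators that differ slightly in the catastrophe rate, on a truncated state space. The truncation, rates, and helper names are illustrative assumptions; the paper's weighted-norm machinery is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

def bd_catastrophe_generator(n, birth, death, kappa):
    """Generator of a birth-death chain on {0,...,n} with an extra
    'catastrophe' jump to state 0 at rate kappa from every state i > 0."""
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            Q[i, i + 1] += birth
        if i > 0:
            Q[i, i - 1] += death
            Q[i, 0] += kappa
        Q[i, i] = -Q[i].sum()            # diagonal balances the off-diagonal rates
    return Q

def tv_distance(p, q):
    return 0.5 * np.abs(p - q).sum()

Q1 = bd_catastrophe_generator(30, birth=1.0, death=1.2, kappa=0.05)
Q2 = bd_catastrophe_generator(30, birth=1.0, death=1.2, kappa=0.06)  # perturbed
p0 = np.zeros(31); p0[0] = 1.0
for t in (1.0, 5.0, 20.0):
    d = tv_distance(p0 @ expm(Q1 * t), p0 @ expm(Q2 * t))
    print(f"t = {t:5.1f}   TV distance = {d:.4f}")
```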

7.
This paper considers the mean-variance optimization problem for the discounted model of continuous-time Markov decision processes. The state space and the action space are both assumed to be Polish spaces, and the transition rates and reward rates may be unbounded. The optimization goal is to select, within the class of discount-optimal stationary policies, a policy whose corresponding variance is minimal. The paper is devoted to finding conditions under which a mean-variance optimal policy exists for Markov decision processes on Polish spaces. Using a first-passage decomposition method, it is shown that the mean-variance optimization problem can be transformed into an "equivalent" expected-discount optimization problem, from which an "optimality equation" for the mean-variance problem, the existence of a mean-variance optimal policy, and its characterization are obtained. Finally, several examples are given to illustrate the non-uniqueness of discount-optimal policies and the existence of mean-variance optimal policies.

8.
Ergodic degrees for continuous-time Markov chains
This paper studies the existence of higher-order deviation matrices for continuous-time Markov chains via moments of hitting times. An estimate of the polynomial convergence rate of the transition matrix to the stationary measure is obtained. Finally, explicit formulas for birth-death processes are presented.
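The paper treats higher-order deviation matrices on a denumerable state space via hitting-time moments; for orientation, the first-order deviation matrix of a finite irreducible CTMC can be computed directly from the generator using the standard identity D = (Pi - Q)^{-1} - Pi, as in the sketch below (the birth-death rates are illustrative).

```python
import numpy as np

def deviation_matrix(Q):
    """Deviation matrix D = integral_0^inf (P(t) - Pi) dt of an irreducible
    finite-state CTMC with generator Q, via D = (Pi - Q)^{-1} - Pi."""
    n = Q.shape[0]
    # stationary distribution: solve pi Q = 0 together with pi 1 = 1
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    Pi = np.outer(np.ones(n), pi)          # every row equals pi
    return np.linalg.inv(Pi - Q) - Pi, pi

# Small birth-death example on {0,1,2,3} with birth rate 1 and death rate 2.
births, deaths, n = 1.0, 2.0, 4
Q = np.zeros((n, n))
for i in range(n):
    if i < n - 1: Q[i, i + 1] = births
    if i > 0:     Q[i, i - 1] = deaths
    Q[i, i] = -Q[i].sum()
D, pi = deviation_matrix(Q)
print("pi =", np.round(pi, 4))
print("Q D =\n", np.round(Q @ D, 6))       # sanity check: should equal Pi - I
```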

9.
In this paper we consider stopping problems for continuous-time Markov chains under a general risk-sensitive optimization criterion for problems with finite and infinite time horizon. More precisely, our aim is to maximize the certainty equivalent of the stopping reward minus the cost over the time horizon. We derive optimality equations for the value functions and prove the existence of optimal stopping times. The exponential utility is treated as a special case. In contrast to risk-neutral stopping problems, it may be optimal to stop between jumps of the Markov chain. We briefly discuss the influence of the risk sensitivity on the optimal stopping time and consider a special house-selling problem as an example.
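A heavily simplified, discrete-time analogue of the house-selling example with exponential utility may help fix ideas; it is only a sketch under an assumed offer distribution, holding cost, and risk-aversion parameter, and it does not reproduce the paper's continuous-time analysis (where stopping between jumps can be optimal).

```python
import numpy as np

def risk_sensitive_house_selling(offers, probs, N, c, gamma):
    """Finite-horizon house selling: i.i.d. offers (values `offers` with
    probabilities `probs`), holding cost c per period, exponential utility
    u(w) = -exp(-gamma * w).  Backward induction on the exponential transform
        v_N = E[exp(-gamma X)],   v_n = E[min(exp(-gamma X), exp(gamma c) v_{n+1})].
    Returns the acceptance thresholds and the certainty equivalent at period 1."""
    offers, probs = np.asarray(offers, float), np.asarray(probs, float)
    disutil = np.exp(-gamma * offers)            # exp(-gamma x) for each offer value
    v = float(disutil @ probs)                   # period N: the last offer must be accepted
    thresholds = [None] * N
    for n in range(N - 1, 0, -1):                # periods N-1, ..., 1
        cont = np.exp(gamma * c) * v             # continue: pay c, then act optimally
        thresholds[n - 1] = -np.log(cont) / gamma
        v = float(np.minimum(disutil, cont) @ probs)
    return thresholds[:-1], -np.log(v) / gamma   # the final period has no threshold

thr, ce = risk_sensitive_house_selling(
    offers=[1, 2, 3, 4, 5], probs=[0.2] * 5, N=6, c=0.1, gamma=0.5)
print("acceptance thresholds:", np.round(thr, 3))
print("certainty equivalent :", round(ce, 3))
```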

10.
We study the filtering problem for an R^d-valued pure jump process when the observation is a counting process. We assume that the dynamics of the state and the observation may be strongly dependent and that the two processes may jump together. Weak and pathwise uniqueness of solutions of the Kushner–Stratonovich equation are discussed.

11.
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P.  相似文献   

12.
Optimization, 2012, 61(4): 773-800
In this paper we study the risk-sensitive average cost criterion for continuous-time Markov decision processes in the class of all randomized Markov policies. The state space is a denumerable set, and the cost and transition rates are allowed to be unbounded. Under suitable conditions, we establish the optimality equation of the auxiliary risk-sensitive first-passage optimization problem and obtain the properties of the corresponding optimal value function. Then, by constructing appropriate approximating sequences of the cost and transition rates and employing the results on the auxiliary optimization problem, we show the existence of a solution to the risk-sensitive average optimality inequality and develop a new approach, called the risk-sensitive average optimality inequality approach, to prove the existence of an optimal deterministic stationary policy. Furthermore, we give some sufficient conditions for the verification of the simultaneous Doeblin condition, use a controlled birth and death system to illustrate our conditions, and provide an example for which the risk-sensitive average optimality strict inequality occurs.

13.
14.
An approximate version of the standard uniformization technique is introduced for application to continuous-time Markov chains with unbounded jump rates. This technique is shown to be asymptotically exact, and an error bound on the order of its accuracy is provided. An illustrative queueing application is included.
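For orientation, here is a minimal sketch of the standard (bounded-rate) uniformization that the approximate version modifies; the generator is a small finite example chosen only for illustration.

```python
import numpy as np

def uniformized_transient(Q, p0, t, tol=1e-10):
    """Transient distribution p0 * exp(Q t) of a finite CTMC via standard
    uniformization: with Lambda >= max_i |Q[i, i]| and P = I + Q / Lambda,
        p(t) = sum_{n >= 0} e^{-Lambda t} (Lambda t)^n / n! * p0 P^n.
    Terms are added until the Poisson tail mass drops below `tol`."""
    Lam = max(-np.diag(Q).min(), 1e-12)
    P = np.eye(Q.shape[0]) + Q / Lam
    term_prob = np.exp(-Lam * t)        # Poisson(Lambda t) weight for n = 0
    acc_prob = term_prob
    v = np.asarray(p0, float)
    result = term_prob * v
    n = 0
    while acc_prob < 1.0 - tol:
        n += 1
        v = v @ P
        term_prob *= Lam * t / n
        acc_prob += term_prob
        result += term_prob * v
    return result

# Small 3-state example.
Q = np.array([[-2.0, 1.5, 0.5],
              [ 1.0, -3.0, 2.0],
              [ 0.5, 0.5, -1.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(uniformized_transient(Q, p0, t=0.7))
```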

15.
1. Introduction. The weighted Markov decision processes (MDPs) have been extensively studied since the 1980s; see, for instance, [1-6] and so on. The theory of weighted MDPs with perturbed transition probabilities appears to have been mentioned only in [7]. This paper will discuss the models of we...

16.
Let X(t) be a nonhomogeneous continuous-time Markov chain. Suppose that the intensity matrices of X(t) and of some weakly or strongly ergodic Markov chain are close. Some sufficient conditions for weak and strong ergodicity of X(t) are given, and estimates of the rate of convergence are proved. The queue length of a birth and death process in the case of asymptotically proportional intensities is considered as an example.

17.
In this paper we extend standard dynamic programming results for the risk-sensitive optimal control of discrete-time Markov chains to a new class of models. The state space is still finite, but the assumptions on the Markov transition matrix are now much less restrictive. Our results are then applied to the financial problem of managing a portfolio of assets which are affected by Markovian microeconomic and macroeconomic factors and where the investor seeks to maximize the portfolio's risk-adjusted growth rate.
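As a much simpler, finite-horizon illustration of risk-sensitive dynamic programming with exponential utility (not the paper's long-run risk-adjusted growth-rate criterion), the following sketch runs a backward recursion on the exponential transform; all matrices, rewards, and names are assumptions for the example.

```python
import numpy as np

def risk_sensitive_value_iteration(P, r, T, theta):
    """Finite-horizon risk-sensitive DP for a finite MDP with exponential
    utility: maximize the certainty equivalent -(1/theta) log E[exp(-theta * sum r)].

    P[a]  -- transition matrix under action a (shape S x S)
    r     -- rewards r[x, a]
    T     -- horizon (number of decision epochs)
    theta -- risk-aversion parameter (> 0)
    Returns the certainty-equivalent value V[x] and a greedy policy per epoch."""
    S, A = r.shape
    W = np.ones(S)                               # exp(-theta * 0) at the horizon
    policies = []
    for _ in range(T):
        # Q[x, a] = exp(-theta r(x, a)) * sum_y P[a][x, y] W[y]
        Q = np.stack([np.exp(-theta * r[:, a]) * (P[a] @ W) for a in range(A)], axis=1)
        policies.append(Q.argmin(axis=1))        # minimize exponential disutility
        W = Q.min(axis=1)
    return -np.log(W) / theta, policies[::-1]

# Toy 2-state, 2-action example (numbers chosen only for illustration).
P = [np.array([[0.9, 0.1], [0.3, 0.7]]),        # action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]        # action 1
r = np.array([[1.0, 1.5],                        # r[state, action]
              [0.2, 0.4]])
V, pol = risk_sensitive_value_iteration(P, r, T=10, theta=0.5)
print("certainty-equivalent values:", np.round(V, 3))
```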

18.
The isomorphism theorem of Dynkin is an important tool for investigating problems raised in terms of local times of Markov processes. This theorem concerns continuous-time Markov processes. We give here an equivalent version for Markov chains.

19.
A standard strategy in simulation, for comparing two stochastic systems, is to use a common sequence of random numbers to drive both systems. Since regenerative output analysis of the steady state of a system requires that the process be regenerative, it is of interest to derive conditions under which the method of common random numbers yields a regenerative process. It is shown here that if the stochastic systems are positive recurrent Markov chains with countable state space, then the coupled system is necessarily regenerative; in fact, we allow couplings more general than those induced by common random numbers. An example is given which shows that the regenerative property can fail to hold in general state space, even if the individual systems are regenerative.
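A toy version of the common-random-numbers coupling: two reflected random walks on the nonnegative integers driven by the same uniforms, with joint visits to (0, 0) serving as regeneration times of the coupled pair. Parameters and names are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def coupled_random_walks(p1, p2, steps):
    """Drive two reflected random walks on {0, 1, 2, ...} with the SAME
    uniform random numbers (common random numbers).  Walk i moves up when
    U < p_i and otherwise moves down (or stays at 0).  Returns both paths
    and the times at which the coupled pair is in the joint state (0, 0)."""
    x1 = x2 = 0
    path1, path2, joint_zeros = [0], [0], []
    for n in range(1, steps + 1):
        u = rng.random()                 # one uniform shared by both systems
        x1 = x1 + 1 if u < p1 else max(x1 - 1, 0)
        x2 = x2 + 1 if u < p2 else max(x2 - 1, 0)
        path1.append(x1); path2.append(x2)
        if x1 == 0 and x2 == 0:
            joint_zeros.append(n)
    return path1, path2, joint_zeros

_, _, regen_times = coupled_random_walks(p1=0.3, p2=0.35, steps=10_000)
print("number of joint visits to (0, 0):", len(regen_times))
```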

20.
Let {X_n, n ≥ 0} and {Y_n, n ≥ 0} be two stochastic processes such that Y_n depends on X_n in a stationary manner, i.e. P(Y_n ∈ A | X_n) does not depend on n. Sufficient conditions are derived for Y_n to have a limiting distribution. If X_n is a Markov chain with stationary transition probabilities and Y_n = f(X_n, ..., X_{n+k}), then Y_n depends on X_n in a stationary way. Two situations are considered: (i) {X_n, n ≥ 0} has a limiting distribution; (ii) {X_n, n ≥ 0} does not have a limiting distribution and exits every finite set with probability 1. Several examples are considered, including that of a non-homogeneous Poisson process with periodic rate function, where we obtain the limiting distribution of the interevent times.
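The paper obtains the limiting interevent-time distribution analytically; as an empirical companion, one can simulate a Poisson process with a periodic rate by thinning and inspect the interevent times, as sketched below (the rate function and names are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)

def periodic_nhpp(rate, rate_max, T):
    """Simulate a non-homogeneous Poisson process on [0, T] by thinning:
    propose points from a Poisson process of rate rate_max and keep a point
    at time t with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > T:
            break
        if rng.random() < rate(t) / rate_max:
            times.append(t)
        # rejected proposals are simply discarded
    return np.array(times)

# Periodic intensity lambda(t) = 1 + 0.8 sin(2 pi t); empirical distribution
# of the interevent times over a long horizon.
lam = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t)
events = periodic_nhpp(lam, rate_max=1.8, T=50_000.0)
gaps = np.diff(events)
print("mean interevent time approx.", gaps.mean())
print("empirical quartiles:", np.quantile(gaps, [0.25, 0.5, 0.75]).round(3))
```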
