Similar Documents
10 similar documents found (search time: 15 ms)
1.
2.
This paper presents two main results: first, a Liapunov type criterion for the existence of a stationary probability distribution for a jump Markov process; second, a Liapunov type criterion for existence and tightness of stationary probability distributions for a sequence of jump Markov processes. If the corresponding semigroups T_N(t) converge, then, under suitable hypotheses on the limit semigroup, this last result yields the weak convergence of the sequence of stationary processes (T_N(t), π_N) to the stationary limit one.
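For orientation, drift criteria of this kind are often written in the following generic form (the generator $\mathcal{A}$, Lyapunov function $V$, and constants below are placeholder notation, not the paper's exact hypotheses):

\[
\mathcal{A} V(x) \le -c + b\,\mathbf{1}_K(x) \quad \text{for all } x,
\]

with $V \ge 0$ having pre-compact sublevel sets, constants $b, c > 0$, and $K$ compact. Such a bound gives tightness of the time-averaged laws and hence a stationary distribution; a version of the bound holding uniformly in $N$ for the generators $\mathcal{A}_N$ is the natural route to tightness of the family $\{\pi_N\}$.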

3.
We show that stochastic processes with linear conditional expectations and quadratic conditional variances are Markov, and their transition probabilities are related to a three-parameter family of orthogonal polynomials which generalize the Meixner polynomials. Special cases of these processes are known to arise from the non-commutative generalizations of the Lévy processes. Mathematics Subject Classification (2000): 60J25. Research partially supported by NSF grant #INT-0332062, by the C.P. Taft Memorial Fund, and by the University of Cincinnati's Summer Faculty Research Fellowship Program. Acknowledgement: Part of the research of WB was conducted while visiting the Faculty of Mathematics and Information Science of Warsaw University of Technology. The authors thank M. Bożejko for bringing several references to their attention, Hiroaki Yoshida for information pertinent to Theorem 4.3, and M. Anshelevich, W. Matysiak, R. Speicher, P. Szabłowski, and M. Yor for helpful comments and discussions. The referees' comments led to several improvements in the paper.
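Schematically, moment conditions of this type take the following form for $s < t < u$, with $\mathcal{F}_{s,u}$ the $\sigma$-field generated by the process outside the interval $(s,u)$ (the precise conditioning and coefficient structure are the paper's and are not reproduced here):

\[
\mathbb{E}\!\left[X_t \mid \mathcal{F}_{s,u}\right] = a\,X_s + b\,X_u + c, \qquad
\operatorname{Var}\!\left[X_t \mid \mathcal{F}_{s,u}\right] = \text{a quadratic polynomial in } (X_s, X_u),
\]

with coefficients depending only on $s$, $t$, $u$.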

4.
Markov network processes with product form stationary distributions   (cited 1 time in total: 0 self-citations, 1 by others)
Chao, X., Miyazawa, M., Serfozo, R.F., Takada, H. Queueing Systems, 1998, 28(4): 377-401.
This study concerns the equilibrium behavior of a general class of Markov network processes that includes a variety of queueing networks and networks with interacting components or populations. The focus is on determining when these processes have product form stationary distributions. The approach is to relate the marginal distributions of the process to the stationary distributions of “node transition functions” that represent the nodes in isolation operating under certain fictitious environments. The main result gives necessary and sufficient conditions on the node transition functions for the network process to have a product form stationary distribution. This result yields a procedure for checking for a product form distribution and obtaining such a distribution when it exists. An important subclass consists of networks in which the node transition rates have Poisson arrival components. In this setting, we show that the network process has a product form distribution and is “biased locally balanced” if and only if the network is “quasi-reversible” and certain traffic equations are satisfied. Another subclass consists of networks with reversible routing; we weaken the known sufficient condition for such networks to have product form distributions. We also discuss modeling issues related to queueing networks, including time reversals and reversals of the roles of arrivals and departures. The study ends by describing how the results extend to networks with multi-class transitions.
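As a generic illustration of the objects involved (node marginals $\pi_j$, external arrival rates $\lambda_j$, and routing probabilities $p_{kj}$ are placeholder notation rather than the paper's), a product form stationary distribution together with the accompanying traffic equations reads:

\[
\pi(n_1,\dots,n_J) = C \prod_{j=1}^{J} \pi_j(n_j), \qquad
\alpha_j = \lambda_j + \sum_{k=1}^{J} \alpha_k\, p_{kj}, \quad j = 1,\dots,J,
\]

where $C$ is a normalizing constant and $\alpha_j$ is the effective arrival rate at node $j$.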

5.
We construct examples of Markov Decision Processes for which, for a given initial state and a given nonstationary transient policy, there is no equivalent (randomized) stationary policy, i.e. there is no stationary policy whose occupation measure is equal to the occupation measure of the given policy. We also investigate the relation between the existence of equivalent stationary policies in special models and the existence of equivalent strategies in various classes of nonstationary policies in general models.
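One common definition of the occupation measure of a policy $\pi$ started at $x_0$ (the notation here is assumed, not quoted from the paper) counts the expected number of visits to each state-action pair:

\[
\mu_\pi(x,a) = \sum_{t=0}^{\infty} \mathbb{P}_\pi^{x_0}\!\left(X_t = x,\ A_t = a\right),
\]

which is finite under a transient policy; the question above is whether a stationary (possibly randomized) policy $\varphi$ with $\mu_\varphi = \mu_\pi$ exists.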

6.
We consider discrete-time average reward Markov decision processes with denumerable state space and bounded reward function. Under structural restrictions on the model the existence of an optimal stationary policy is proved; both the lim inf and lim sup average criteria are considered. In contrast to the usual approach, our results do not rely on the average reward optimality equation. Rather, the arguments are based on well-known facts from Renewal Theory. This research was supported in part by the Consejo Nacional de Ciencia y Tecnologia (CONACYT) under Grants PCEXCNA 040640 and 050156, and by SEMAC under Grant 89-1/00ifn$.
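The two criteria mentioned are the standard ones; writing $r$ for the reward function and $\mathbb{E}_\pi^x$ for expectation under policy $\pi$ from initial state $x$:

\[
\underline{J}(\pi,x) = \liminf_{n\to\infty} \frac{1}{n}\,\mathbb{E}_\pi^x\!\left[\sum_{t=0}^{n-1} r(X_t,A_t)\right],
\qquad
\overline{J}(\pi,x) = \limsup_{n\to\infty} \frac{1}{n}\,\mathbb{E}_\pi^x\!\left[\sum_{t=0}^{n-1} r(X_t,A_t)\right].
\]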

7.
This paper deals with discrete-time Markov decision processes with state-dependent discount factors and unbounded rewards/costs. Under general conditions, we develop an iteration algorithm for computing the optimal value function, and also prove the existence of optimal stationary policies. Furthermore, we illustrate our results with a cash-balance model.
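A minimal sketch of such a value iteration, restricted to a finite state/action model with bounded rewards (the paper treats unbounded rewards/costs under more general conditions; the array layout and the name value_iteration are assumptions of this sketch, not the paper's algorithm):

import numpy as np

def value_iteration(P, r, alpha, tol=1e-8, max_iter=100000):
    """P: (S, A, S) transition probabilities, r: (S, A) rewards,
    alpha: (S,) state-dependent discount factors in [0, 1)."""
    S, A = r.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q(x, a) = r(x, a) + alpha(x) * sum_y P(y | x, a) * V(y)
        Q = r + alpha[:, None] * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmax(axis=1)  # greedy stationary deterministic policy
    return V, policy

Convergence of this finite-model sketch relies on max_x alpha(x) < 1, which makes the Bellman operator a contraction.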

8.
9.
This paper deals with the asymptotic optimality of a stochastic dynamic system driven by a singularly perturbed Markov chain with finite state space. The states of the Markov chain belong to several groups such that transitions among the states within each group occur much more frequently than transitions among states in different groups. Aggregating the states of the Markov chain leads to a limit control problem, which is obtained by replacing the states in each group by the corresponding average distribution. The limit control problem is simpler to solve than the original one. A nearly-optimal solution for the original problem is constructed by using the optimal solution to the limit problem. To demonstrate, the suggested approach to asymptotically optimal control is applied to examples of production planning in manufacturing systems.
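The time-scale separation described above is commonly encoded in a generator of the form (the notation is generic, not the paper's):

\[
Q^{\varepsilon} = \frac{1}{\varepsilon}\,\widetilde{Q} + \widehat{Q},
\]

where $\widetilde{Q}$ is block-diagonal with one irreducible block per group of strongly interacting states, $\widehat{Q}$ carries the infrequent transitions between groups, and $\varepsilon > 0$ is small. Aggregation replaces each group $k$ by a single state and averages the slow dynamics and the cost against the stationary distribution $\nu^k$ of the corresponding block of $\widetilde{Q}$, which yields the limit control problem as $\varepsilon \to 0$.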

10.
We consider Markov control processes with Borel state space and Feller transition probabilities, satisfying some generalized geometric ergodicity conditions. We provide a new theorem on the existence of a solution to the average cost optimality equation.
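For reference, the average cost optimality equation in question has the familiar form (one-stage cost $c$, transition kernel $P$, optimal average cost $\rho$, relative value function $h$; this is the standard statement, not the paper's exact hypotheses):

\[
\rho + h(x) = \min_{a \in A(x)} \left\{ c(x,a) + \int_X h(y)\, P(dy \mid x,a) \right\}, \qquad x \in X.
\]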

