Similar Literature (20 similar documents found)
2.
Mixtures of recurrent semi-Markov processes are characterized through a partial exchangeability condition of the array of successor states and holding times. A stronger invariance condition on the joint law of successor states and holding times leads to mixtures of Markov laws.
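For orientation, a de Finetti-style statement of the kind of condition involved (notation illustrative, not necessarily the paper's exact hypothesis): for each state $x$, let $(Y_{x,n}, T_{x,n})_{n \ge 1}$ denote the successor state and holding time recorded at the $n$-th visit to $x$. Partial exchangeability of the array requires that, for every collection of finite permutations $\sigma_x$ (one per row),

$$\big( (Y_{x,n}, T_{x,n}) \big)_{x,n} \;\overset{d}{=}\; \big( (Y_{x,\sigma_x(n)}, T_{x,\sigma_x(n)}) \big)_{x,n},$$

and under recurrence this invariance characterizes mixtures of semi-Markov laws.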

4.
This paper studies the optimal stopping time for semi-Markov processes (SMPs) under the discounted optimization criterion with unbounded cost rates. In our work, we introduce an explicit construction of the equivalent semi-Markov decision processes (SMDPs). The equivalence holds at the level of expected discounted cost functions: every stopping time of the SMP induces a policy of the SMDP with the same value function, and vice versa. The existence of the optimal stopping time of SMPs is proved via this equivalence relation. Next, we give the optimality equation for the value function and develop an effective iterative algorithm for computing it. Moreover, we show that optimal and ε-optimal stopping times can be characterized as hitting times of special sets. Finally, to illustrate the validity of our results, an example of a maintenance system is presented.
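As a toy illustration of the optimality equation and the iterative computation described above, here is a minimal value-iteration sketch, assuming a finite state space, exponential holding times, stopping permitted only at jump epochs, and illustrative cost data (none of the numbers below are taken from the paper):

```python
import numpy as np

# Minimal value-iteration sketch for discounted optimal stopping of a
# semi-Markov process.  Assumptions (illustrative, not from the paper):
# finite state space, exponential holding times with rates lam, stopping
# allowed only at jump epochs, discount rate alpha, running cost rate c,
# stopping cost g.
alpha = 0.1
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])      # embedded jump chain
lam = np.array([1.0, 2.0, 0.5])      # holding-time rates per state
c = np.array([1.0, 3.0, 0.5])        # cost rate while sojourning in each state
g = np.array([4.0, 2.0, 6.0])        # cost paid upon stopping

beta = lam / (lam + alpha)           # E[exp(-alpha * holding time)]
run = c * (1.0 - beta) / alpha       # expected discounted running cost per sojourn

V = np.zeros(3)
for _ in range(1000):
    V_new = np.minimum(g, run + beta * (P @ V))   # stop vs. continue
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

stop_set = np.flatnonzero(g <= run + beta * (P @ V))
print("value function:", V)
print("optimal stopping set:", stop_set)
```

Because each discount factor `beta[x]` is strictly below one, the stop-or-continue map is a contraction, so the iteration converges geometrically.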

5.
We simulate long-run averages of time integrals of a recurrent semi-Markov process efficiently. Converting to discrete time by simulating only an embedded chain, and computing the conditional expectations of everything else needed given the sequence of states visited, reduces asymptotic variance, eliminates the generation of holding-time variates, and (when advantageous) gets rid of the future-event schedule. In this setting, uniformizing continuous-time Markov chains is not worthwhile. We generalize beyond semi-Markov processes and cut ties to regenerative simulation methodology. Implementation of discrete-time conversion is discussed. It often requires no more work, and sometimes less, than the naive method. We give sufficient conditions for work savings; continuous-time Markov chains, for example, satisfy them.
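A minimal sketch of the discrete-time-conversion idea on an illustrative model (3 states, exponential holding times): the converted estimator simulates only the embedded chain and replaces each sampled holding time by its conditional mean given the state.

```python
import numpy as np

# Compare the naive estimator of the long-run time-average of f(state)
# with the discrete-time-converted estimator that conditions on the
# sequence of states visited.  Model data are illustrative.
rng = np.random.default_rng(0)
P = np.array([[0.0, 0.6, 0.4],
              [0.3, 0.0, 0.7],
              [0.5, 0.5, 0.0]])        # embedded jump chain
mean_hold = np.array([1.0, 0.5, 2.0])  # E[holding time | state]
f = np.array([10.0, -2.0, 4.0])

n, x = 100_000, 0
num_naive = den_naive = num_conv = den_conv = 0.0
for _ in range(n):
    h = rng.exponential(mean_hold[x])   # naive: actually draw the holding time
    num_naive += f[x] * h
    den_naive += h
    num_conv += f[x] * mean_hold[x]     # converted: use its conditional mean
    den_conv += mean_hold[x]
    x = rng.choice(3, p=P[x])

print("naive estimate:    ", num_naive / den_naive)
print("converted estimate:", num_conv / den_conv)
# The converted ratio estimator carries no holding-time noise, which is
# the source of the asymptotic variance reduction described above.
```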

7.
The behavior of the mean values of additive functionals of regular semi-Markov processes with arbitrary (not necessarily finite or countable) sets of states is studied. An integral representation of the mean value of an additive functional is obtained, and the behavior of certain operators connected with the process is investigated. As an illustration of the possible applications of these results, a limit theorem for a semi-Markov process is formulated and proved. See [7].

8.
Semi-Markov control processes with Borel state space and Feller transition probabilities are considered. The objective of the paper is to prove the coincidence of two expected average-cost criteria, the time-average and the ratio-average, for stationary policies. Moreover, the optimal stationary policy is the same under both criteria.
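For reference, the two criteria are typically formulated as follows (standard definitions, shown here for orientation; $c_k$ and $\tau_k$ denote the cost incurred and the time elapsed during the $k$-th decision epoch):

$$J_{\mathrm{time}}(x,\pi) = \limsup_{t \to \infty} \frac{1}{t}\, \mathbb{E}_x^{\pi}\!\left[\int_0^t c(X_s, a_s)\, ds\right], \qquad J_{\mathrm{ratio}}(x,\pi) = \limsup_{n \to \infty} \frac{\mathbb{E}_x^{\pi}\big[\sum_{k=0}^{n-1} c_k\big]}{\mathbb{E}_x^{\pi}\big[\sum_{k=0}^{n-1} \tau_k\big]}.$$

The paper's result is that these two values coincide for stationary policies and are optimized by the same stationary policy.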

9.
Generalized semi-Markov schemes were introduced by Matthes in 1962 under the designation 'Bedienungsschemata' (service schemes). They include a large variety of familiar stochastic models. It is shown in this paper that, under appropriate regularity conditions, the associated stochastic process describing the state at time t, t ≥ 0, and the stationary distribution are continuous functions of the lifetimes of the active components. The supplementary-variable Markov process is shown to be the limit of a sequence of discrete-state processes obtained by approximating the lifetime distributions with mixtures of Erlang distributions and measuring ages and residual lifetimes in phases. This approach supplements the phase method.
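The approximation step can be illustrated with a generic Erlang-mixture scheme (Tijms-style weights $p_j = F(j/k) - F((j-1)/k)$ on Erlang distributions with shape $j$ and rate $k$; the Weibull target lifetime below is illustrative, not the paper's example):

```python
import numpy as np
from math import exp

# Approximate a lifetime CDF F by a mixture of Erlang(j, rate=k)
# distributions with weights p_j = F(j/k) - F((j-1)/k); as k grows the
# mixture converges weakly to F.
def F(t):                         # illustrative target: Weibull, shape 1.5
    return 1.0 - exp(-t ** 1.5) if t > 0 else 0.0

def erlang_cdf(t, j, rate):
    # P(Erlang(j, rate) <= t) = 1 - exp(-rate*t) * sum_{i<j} (rate*t)^i / i!
    term, s = 1.0, 1.0
    for i in range(1, j):
        term *= rate * t / i      # iterative terms avoid float overflow
        s += term
    return 1.0 - exp(-rate * t) * s

def approx_cdf(t, k, jmax=200):
    return sum((F(j / k) - F((j - 1) / k)) * erlang_cdf(t, j, k)
               for j in range(1, jmax + 1))

for k in (2, 8, 32):
    err = max(abs(approx_cdf(t, k) - F(t)) for t in np.linspace(0.1, 4, 40))
    print(f"k={k:3d}  max |F_k - F| on grid: {err:.4f}")
```

The error shrinks as k grows, which is the sense in which measuring lifetimes "in phases" recovers the original model in the limit.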

11.

The literature on Bayesian methods for the analysis of discrete-time semi-Markov processes is sparse. In this paper, we introduce the semi-Markov beta-Stacy process, a stochastic process useful for the Bayesian non-parametric analysis of semi-Markov processes. The semi-Markov beta-Stacy process is conjugate with respect to data generated by a semi-Markov process, a property which makes it easy to obtain probabilistic forecasts. Its predictive distributions are characterized by a reinforced random walk on a system of urns.

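A toy sketch of predictive sampling by a reinforced random walk on a system of urns, in the spirit of this construction (the actual semi-Markov beta-Stacy urn compositions and reinforcement weights are richer than this illustration):

```python
import random

# Urn (x, k) governs what happens after k completed steps in state x:
# either 'stay' one more step or jump to another state.  Each drawn ball
# is replaced together with one extra ball of the same colour, so observed
# data reinforce future predictive draws.
STATES = ["a", "b"]
urns = {}

def get_urn(x, k):
    if (x, k) not in urns:
        urns[(x, k)] = {"stay": 1.0, **{y: 1.0 for y in STATES if y != x}}
    return urns[(x, k)]

def draw(urn):
    colours = list(urn)
    colour = random.choices(colours, weights=[urn[c] for c in colours])[0]
    urn[colour] += 1.0                 # reinforcement
    return colour

def sample_path(x0, steps):
    path, x, k = [], x0, 0
    for _ in range(steps):
        colour = draw(get_urn(x, k))
        if colour == "stay":
            k += 1
        else:
            path.append((x, k + 1))    # completed sojourn: (state, length)
            x, k = colour, 0
    return path

random.seed(1)
print(sample_path("a", 30))
```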

12.
For Harris recurrent Markov renewal processes and semi-Markov processes, a central limit theorem is obtained, together with Berry-Esseen type estimates. The proof is based on the Kolmogorov-Doeblin regenerative method. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 142, pp. 86–97, 1985.
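The flavor of the result, shown for the classical renewal-counting special case with inter-renewal mean $\mu$ and variance $\sigma^2$ (the paper's Harris-recurrent setting and precise moment conditions are more general):

$$\frac{N(t) - t/\mu}{\sqrt{\sigma^2 t/\mu^3}} \;\xrightarrow{\;d\;}\; \mathcal{N}(0,1), \qquad \sup_u \left| \mathbb{P}\!\left(\frac{N(t) - t/\mu}{\sqrt{\sigma^2 t/\mu^3}} \le u\right) - \Phi(u) \right| = O\big(t^{-1/2}\big),$$

where $N(t)$ counts renewals up to time $t$ and $\Phi$ is the standard normal distribution function; the Berry-Esseen rate $t^{-1/2}$ is the estimate type referred to above.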

13.
Given a semi-Markov process with an arbitrary set of states, a criterion is obtained for the attainability of a certain isolated subset of states and for the finiteness of the average attainment time. An equation is given for the mean of an additive functional of a process with absorption; existence and uniqueness conditions are deduced for the solution of that equation in a given class of functions, and an integral representation is obtained for the solution.
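A generic form of the kind of equation involved (notation illustrative): writing $m(x)$ for the mean sojourn time at $x$, $P(x, dy)$ for the embedded transition kernel, and $\tau_\Delta$ for the hitting time of the absorbing set $\Delta$, the mean additive functional $A(x) = \mathbb{E}_x\big[\int_0^{\tau_\Delta} f(X_s)\, ds\big]$ satisfies the renewal-type equation

$$A(x) = f(x)\, m(x) + \int_{E \setminus \Delta} P(x, dy)\, A(y),$$

and results of the cited type concern existence, uniqueness, and integral representations for solutions of such equations in a prescribed function class.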

15.
In this paper, we consider a class of semi-Markov processes, known as phase semi-Markov processes, which can be viewed as an extension of Markov processes in which the times between transitions are phase-type random variables. Based on the theory of generalized inverses, we derive expressions for the moments of the first-passage time distributions, generalizing the results obtained by Kemeny and Snell (1960) for Markov chains.
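For the Markov-chain baseline being generalized, here is a sketch of the classical Kemeny-Snell computation of mean first-passage times via the fundamental matrix (illustrative chain; the paper extends this style of computation, via generalized inverses, to phase semi-Markov processes and to higher moments):

```python
import numpy as np

# Mean first-passage times of an ergodic finite Markov chain via the
# fundamental matrix Z = (I - P + 1*pi)^(-1): m_ij = (Z_jj - Z_ij) / pi_j.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.diag(Z)[None, :] - Z) / pi[None, :]   # M[i, j] = mean time i -> j
print(np.round(M, 3))   # diagonal is 0; mean return time to j is 1/pi_j
```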

16.
As an extension of the discrete-time case, this note investigates the variance of the total cumulative reward for the embedded Markov chain of a semi-Markov process. Under the assumption that the chain is aperiodic and contains a single class of recurrent states, recursive formulae for the variance are obtained which show that the variance growth rate is asymptotically linear in time. Expressions are provided to compute this growth rate.
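A sketch of the corresponding discrete-time computation of the linear variance growth rate, using the fundamental matrix of an ergodic chain (illustrative data; the note's recursive formulae handle the semi-Markov reward structure):

```python
import numpy as np

# For an ergodic chain with reward f per step, Var(S_n)/n -> sigma^2 with
# sigma^2 = pi . (2 fc * (Z fc) - fc * fc), where fc = f - pi.f is the
# centred reward and Z the fundamental matrix.  (For an i.i.d. chain,
# Z = I and the formula reduces to the ordinary variance of f.)
P = np.array([[0.2, 0.8],
              [0.5, 0.5]])
f = np.array([1.0, -1.0])            # reward per step in each state

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

Z = np.linalg.inv(np.eye(2) - P + np.outer(np.ones(2), pi))
fc = f - pi @ f                      # centred reward
sigma2 = pi @ (2.0 * fc * (Z @ fc) - fc * fc)
print("variance growth rate:", sigma2)   # Var(S_n) ~ sigma2 * n
```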

18.
This paper presents a basic formula for performance-gradient estimation of semi-Markov decision processes (SMDPs) under the average-reward criterion. The formula follows directly from a sensitivity equation in perturbation analysis. With this formula, we develop three sample-path-based gradient-estimation algorithms that use a single sample path. These algorithms naturally extend many gradient-estimation algorithms for discrete-time Markov systems to continuous-time semi-Markov models. In particular, they require less storage than existing algorithms in the literature.
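The discrete-time analogue of such a sensitivity formula, stated for orientation (a standard perturbation-analysis identity, not the paper's exact SMDP formula): for an ergodic chain with transition matrix $P_\theta$, stationary distribution $\pi_\theta$, reward vector $f$, and average reward $\eta(\theta) = \pi_\theta f$,

$$\frac{d\eta}{d\theta} = \pi_\theta\, \frac{\partial P_\theta}{\partial \theta}\, g_\theta, \qquad g_\theta = (I - P_\theta + \mathbf{1}\pi_\theta)^{-1} f,$$

where $g_\theta$ is the performance potential (a solution of the Poisson equation). Sample-path-based algorithms of the kind described above estimate $\pi_\theta$ and $g_\theta$ from a single trajectory rather than from the matrices themselves.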

19.
A new algorithm is presented for classifying the states of a homogeneous Markov chain with finitely many states; it enables the investigation of the asymptotic behavior of semi-Markov processes in which the Markov chain is embedded. An application of the algorithm to a social security problem is also given.
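One standard classification scheme, sketched below for comparison (not the paper's new algorithm): group states into communicating classes via mutual reachability and mark a class recurrent exactly when it is closed.

```python
import numpy as np

# Classify the states of a finite Markov chain: a communicating class is
# recurrent iff it is closed (no positive-probability exit).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.7, 0.0, 0.0],
              [0.2, 0.0, 0.5, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
n = P.shape[0]

# transitive closure of one-step reachability (repeated Boolean squaring)
R = ((P > 0) | np.eye(n, dtype=bool)).astype(int)
for _ in range(n):
    R = ((R @ R) > 0).astype(int)

comm = (R > 0) & (R.T > 0)               # i <-> j: mutual reachability
classes = {frozenset(np.flatnonzero(comm[i])) for i in range(n)}
for cls in sorted(classes, key=min):
    closed = all(P[i, j] == 0 for i in cls for j in range(n) if j not in cls)
    print(sorted(cls), "recurrent" if closed else "transient")
```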
