Similar Documents (20 results)
1.
Limit theorems for functionals of classical (homogeneous) Markov renewal and semi-Markov processes have been known for a long time, since the pioneering work of Pyke and Schaufele (Limit theorems for Markov renewal processes, Ann. Math. Statist., 35(4):1746–1764, 1964). Since then, these processes, as well as their time-inhomogeneous generalizations, have found many applications, for example, in finance and insurance. Unfortunately, to the best of the authors' knowledge, no limit theorems have been obtained to date for functionals of inhomogeneous Markov renewal and semi-Markov processes. In this article, we provide strong law of large numbers and central limit theorem results for such processes. In particular, we establish an important connection between our results and the theory of ergodicity of inhomogeneous Markov chains. Finally, we provide an application to risk processes used in insurance by considering an inhomogeneous semi-Markov version of the well-known continuous-time Markov chain model widely used in the literature.

2.
The literature on maximum entropy for Markov processes deals mainly with discrete-time Markov chains. Very few papers deal with continuous-time jump Markov processes, and none deal with semi-Markov processes. This paper aims to help fill this gap. We recall the basics of entropy for Markov and semi-Markov processes, and we study several problems to give an overview of the possible directions for using maximum entropy in connection with these processes. Numerical illustrations are presented, in particular an application to reliability.
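One of the basic quantities recalled in such work, the entropy rate of a discrete-time Markov chain, can be computed directly from the transition matrix and its stationary distribution. A minimal sketch in Python, with a made-up 3-state transition matrix (not taken from the paper):

```python
import numpy as np

# Hypothetical 3-state transition matrix (illustrative only).
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Entropy rate H = -sum_i pi_i sum_j p_ij log p_ij (valid when all p_ij > 0).
H = -np.sum(pi[:, None] * P * np.log(P))
```

For a chain with all transition probabilities positive, H lies strictly between 0 and log(number of states).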

3.
A semi-Markov process is easily made Markov by adding some auxiliary random variables. This paper discusses the I-type quasi-stationary distributions of such “extended” processes and the α-invariant distributions for the corresponding Markov transition probabilities, and we show that there is an intimate relation between the two. The results are relevant to the study of the time to “absorption” or “death” of semi-Markov processes. The particular case of a terminating renewal process is studied as an example.

4.
We study stochastic processes with age-dependent transition rates. A typical example of such a process is a semi-Markov process, which is completely determined by the holding time distributions in each state and the transition probabilities of the embedded Markov chain. The process we construct generalizes semi-Markov processes. One important feature of this process is that, unlike semi-Markov processes, its transition probabilities are age-dependent. Under certain conditions we establish the Feller property of the process. Finally, we compute the limiting distribution of the process.
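The two ingredients named above, holding time distributions per state and an embedded transition chain, are exactly what one needs to simulate an ordinary semi-Markov process. A minimal sketch, with invented states and parameters; exponential sojourn times are used only for brevity, since a semi-Markov process allows arbitrary holding-time distributions:

```python
import random

# Embedded-chain transitions as (next_state, probability) lists, plus a
# sojourn-time rate per state. All values here are hypothetical.
P = {0: [(1, 0.7), (2, 0.3)], 1: [(0, 0.5), (2, 0.5)], 2: [(0, 1.0)]}
rate = {0: 1.0, 1: 2.0, 2: 0.5}

def simulate(state, horizon, rng):
    t, path = 0.0, [(0.0, state)]
    while t < horizon:
        t += rng.expovariate(rate[state])   # hold in the current state
        r = rng.random()                    # then jump via the embedded chain
        for s, p in P[state]:
            r -= p
            if r <= 0:
                state = s
                break
        path.append((t, state))
    return path

path = simulate(0, 10.0, random.Random(42))
```

The returned path is a list of (jump time, new state) pairs, the usual raw material for estimating sojourn-time or limiting distributions empirically.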

5.
We propose a system approach to the asymptotic analysis of stochastic systems in the scheme of series with averaging and diffusion approximation. Stochastic systems are defined by Markov processes with locally independent increments in a Euclidean space with random switchings that are described by jump Markov and semi-Markov processes. We use the asymptotic analysis of Markov and semi-Markov random evolutions. We construct the diffusion approximation using the asymptotic decomposition of generating operators and solutions of problems of singular perturbation for reducibly inverse operators. Translated from Ukrains'kyi Matematychnyi Zhurnal, Vol. 57, No. 9, pp. 1235–1252, September 2005.

6.
This paper presents a basic formula for performance gradient estimation of semi-Markov decision processes (SMDPs) under the average-reward criterion. This formula follows directly from a sensitivity equation in perturbation analysis. With this formula, we develop three sample-path-based gradient estimation algorithms that use a single sample path. These algorithms naturally extend many gradient estimation algorithms for discrete-time Markov systems to continuous-time semi-Markov models. In particular, they require less storage than the algorithm in the literature.

7.
We introduce and study a class of non-stationary semi-Markov decision processes on a finite horizon. By constructing an equivalent Markov decision process, we establish the existence of a piecewise open loop relaxed control which is optimal for the finite horizon problem.

8.
A continuous semi-Markov process with values in a closed interval is considered. This process coincides with a Markov diffusion process inside the interval; thus, violation of the Markov property is possible only at the boundary of the interval. We prove a sufficient condition under which a semi-Markov process is Markov. We show that, in addition to Markov processes with instantaneous reflection from the boundary of the interval, there exists a class of Markov processes with delayed reflection from the boundary. Such a process has a positive average measure of time at which its trajectory belongs to the boundaries. This gives a different proof of a similar result by Gikhman and Skorokhod from 1968. Bibliography: 5 titles.

9.
We obtain chains of equations that relate the sojourn times of a semi-Markov process in a set of states to its Markov renewal function. We use the mathematical apparatus of the theory of Markov and semi-Markov processes. Translated from Ukrains'kyi Matematychnyi Zhurnal, Vol. 56, No. 12, pp. 1684–1690, December 2004.

10.
Considerable benefits have been gained from using Markov decision processes to select condition-based maintenance policies for the asset management of infrastructure systems. A key part of the method is using a Markov process to model the deterioration of condition. However, the Markov model assumes constant transition probabilities irrespective of how long an item has been in a state. The semi-Markov model relaxes this assumption. This paper describes how to fit a semi-Markov model to observed condition data and the results achieved on two data sets. Good results were obtained even where there was only 1 year of observation data.
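The paper's fitting procedure is not reproduced here, but the two quantities a semi-Markov fit must estimate — the embedded chain's transition probabilities and the sojourn-time behavior per state — can be sketched from observed transition records. This assumes, purely for illustration, that the data arrive as (from_state, to_state, sojourn_time) triples; the records below are invented:

```python
from collections import defaultdict

# Hypothetical observed transitions: (from_state, to_state, sojourn_time).
obs = [(0, 1, 2.0), (1, 2, 1.5), (0, 1, 3.0), (1, 0, 0.5), (0, 2, 4.0)]

counts = defaultdict(lambda: defaultdict(int))
sojourns = defaultdict(list)
for i, j, t in obs:
    counts[i][j] += 1
    sojourns[i].append(t)

# Embedded-chain transition probability estimates: p_ij = n_ij / n_i.
p_hat = {i: {j: c / sum(row.values()) for j, c in row.items()}
         for i, row in counts.items()}
# Mean sojourn time per state (a full fit would model the whole distribution,
# e.g. with a Weibull family, rather than only its mean).
m_hat = {i: sum(ts) / len(ts) for i, ts in sojourns.items()}
```

On the toy records above, state 0 transitions to state 1 with estimated probability 2/3 and has mean sojourn time 3.0.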

11.
Coupling procedures for Markov renewal processes are described. Applications to ergodic theorems for processes with semi-Markov switchings are considered. This paper was partly prepared with the support of NFR Grant F-UP 10257-300.

12.
For a family of semi-Markov processes where the transition matrices for the embedded Markov chains and the mean sojourn times depend continuously on a parameter, we give equivalent as well as sufficient conditions for the continuity of the mean recurrence times. The results will be used in a subsequent paper on average costs in a dynamic programming model. This work was supported by the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 72 at the University of Bonn.

13.
Discrete storage processes defined by sums of random variables on a Markov or a semi-Markov process are approximated by compound Poisson processes with continuous drift on increasing time intervals.

14.
A new algorithm for classifying the states of a homogeneous Markov chain having finitely many states is presented, which enables the investigation of the asymptotic behavior of semi-Markov processes in which the Markov chain is embedded. An application of the algorithm to a social security problem is also presented.
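The paper's own algorithm is not reproduced here, but the basic classification it concerns — recurrent versus transient states of a finite chain — can be sketched via reachability in the transition graph: a state is recurrent exactly when every state it can reach can reach it back. The chain below is invented for illustration:

```python
# Toy chain: 0 -> 1, 1 -> 0 or 2, 2 -> 2 (absorbing). Edges are the nonzero
# entries of a transition matrix; the probabilities themselves are irrelevant
# to the recurrent/transient question.
adj = {0: [1], 1: [0, 2], 2: [2]}

def classify(adj):
    def reach(i):
        # Depth-first search for the set of states reachable from i.
        seen, stack = {i}, [i]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    R = {i: reach(i) for i in adj}
    # State i is recurrent iff every state reachable from i can reach i back.
    return {i: "recurrent" if all(i in R[j] for j in R[i]) else "transient"
            for i in adj}

labels = classify(adj)
```

In the toy chain, states 0 and 1 can fall into the absorbing state 2 and never return, so they are transient, while 2 is recurrent.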

15.
A continuous semi-Markov process with a segment as the range of values is considered. This process coincides with a diffusion process inside the segment, i.e., up to the first hitting time of the boundary of the segment and at any time when the process leaves the boundary. The class of such processes consists of Markov processes with reflection at the boundaries (instantaneously or with a delay) and semi-Markov processes with intervals of constancy on some boundary. We derive conditions of existence of such a process in terms of a semi-Markov transition generating function on the boundary. The method of imbedded alternating renewal processes is applied to find a stationary distribution of the process. Bibliography: 3 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 351, 2007, pp. 284–297.

16.
Statistical Inference for Stochastic Processes - In this article, the maximum spacing (MSP) method is extended to continuous time Markov chains and semi-Markov processes and consistency of the MSP...

17.
Recursive equations are derived for the conditional distribution of the state of a Markov chain, given observations of a function of the state. Mainly continuous-time chains are considered. The equations for the conditional distribution are given in matrix form and in differential equation form. The conditional distribution itself forms a Markov process. Special cases considered are doubly stochastic Poisson processes with a Markovian intensity, Markov chains with a random time, and Markovian approximations of semi-Markov processes. Further, the results are used to compute the Radon–Nikodym derivative for two probability measures for a Markov chain when a function of the state is observed.
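A discrete-time analogue of such a filtering recursion is easy to state: propagate the conditional distribution through the chain's dynamics, then condition on the new observation via Bayes' rule. The matrices below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical 2-state chain and observation model.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition matrix
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])        # B[i, y] = P(observe y | state i)

def filter_step(pi, y):
    pred = pi @ P                 # predict with the chain dynamics
    post = pred * B[:, y]         # weight by the observation likelihood
    return post / post.sum()      # normalize (Bayes' rule)

pi = np.array([0.5, 0.5])         # prior over states
for y in [0, 0, 1]:
    pi = filter_step(pi, y)       # conditional distribution given observations
```

Each `filter_step` output is itself a probability vector, reflecting the abstract's remark that the conditional distribution forms a Markov process in its own right.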

18.
A rigorous definition of the semi-Markov dependent risk model is given. This model generalizes the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results clarify the relations between the elements of the semi-Markov dependent risk model and are also applicable to the Markov dependent risk model.

19.
Optimization, 2012, 61(3–4):367–382
This paper investigates discrete-type shock semi-Markov decision processes (SMDPs for short) with Borel state and action spaces. A discrete-type shock SMDP describes a system which behaves like a discrete-type SMDP, except that the system is subject to random shocks from its environment. Following each shock, an instantaneous state transition occurs and the parameters of the SMDP change. After presenting the model, we transform the discrete-type shock SMDP into an equivalent discrete-time Markov decision process under the condition that one of the assumptions P, N, or D holds. Thus, most results for discrete-time Markov decision processes can be generalized directly to the discrete-type shock SMDP.

20.
This paper concerns the study of asymptotic properties of the maximum likelihood estimator (MLE) for the general hidden semi-Markov model (HSMM) with backward recurrence time dependence. By transforming the general HSMM into a general hidden Markov model, we prove that under some regularity conditions, the MLE is strongly consistent and asymptotically normal. We also provide useful expressions for asymptotic covariance matrices, involving the MLE of the conditional sojourn times and the embedded Markov chain of the hidden semi-Markov chain. Bibliography: 17 titles.
