Similar documents
20 similar documents found (search time: 0 ms)
1.
This paper studies the asymptotic properties of the maximum likelihood estimator (MLE) for the general hidden semi-Markov model (HSMM) with backward recurrence time dependence. By transforming the general HSMM into a general hidden Markov model, we prove that under some regularity conditions the MLE is strongly consistent and asymptotically normal. We also provide useful expressions for the asymptotic covariance matrices, involving the MLE of the conditional sojourn times and the embedded Markov chain of the hidden semi-Markov chain. Bibliography: 17 titles.
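The transformation mentioned in this abstract, augmenting the semi-Markov state with its backward recurrence time so that the pair becomes an ordinary Markov chain, can be sketched numerically. The snippet below is an illustrative construction for a hypothetical two-state discrete-time semi-Markov chain with bounded sojourn times; the embedded matrix P and the sojourn pmfs f are made-up examples, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-state semi-Markov chain: embedded transition matrix P
# (zero diagonal) and discrete sojourn-time pmfs f[i] on {1, ..., L}.
# Augmenting the state with the backward recurrence time u turns the
# semi-Markov chain into an ordinary Markov chain on pairs (i, u).
P = np.array([[0.0, 1.0], [1.0, 0.0]])      # embedded chain
f = [np.array([0.5, 0.3, 0.2]),             # sojourn pmf in state 0
     np.array([0.2, 0.3, 0.5])]             # sojourn pmf in state 1
L = 3                                        # maximal sojourn length

def augmented_kernel(P, f, L):
    n = P.shape[0]
    Q = np.zeros((n * L, n * L))
    for i in range(n):
        surv = np.concatenate(([1.0], 1.0 - np.cumsum(f[i])))  # survival fn
        for u in range(L):
            if surv[u] <= 0:
                continue
            h = f[i][u] / surv[u]            # hazard of leaving after u+1 steps
            for j in range(n):
                Q[i * L + u, j * L] += h * P[i, j]      # jump, reset the clock
            if u + 1 < L:
                Q[i * L + u, i * L + u + 1] += 1.0 - h  # stay, age the clock
    return Q

Q = augmented_kernel(P, f, L)
```

Each row of the augmented kernel is a probability distribution, which is exactly what makes the pair process Markov.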

2.
3.
4.
5.
In this Note we consider a discrete-time hidden semi-Markov model and prove that the nonparametric maximum likelihood estimators of the characteristics of such a model have good asymptotic properties, namely consistency and asymptotic normality. To cite this article: V. Barbu, N. Limnios, C. R. Acad. Sci. Paris, Ser. I 342 (2006).

6.
We study risk-sensitive control of continuous time Markov chains taking values in a discrete state space, for both finite and infinite horizon problems. In the finite horizon problem we characterize the value function via the Hamilton-Jacobi-Bellman equation and obtain an optimal Markov control; we do the same for the infinite horizon discounted-cost case. In the infinite horizon average-cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.
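The paper's policy iteration targets risk-sensitive criteria; as a classical point of comparison, here is standard policy iteration for the ordinary discounted cost of a controlled continuous time chain, reduced to a discrete MDP by uniformization. All rates, costs, and constants below are invented for illustration.

```python
import numpy as np

# Hypothetical 3-state controlled chain, 2 actions. rates[a] is the
# generator under action a; c[a] the running-cost vector. Uniformizing
# with constant Lam >= max |q_ii| gives a discrete MDP with discount
# beta = Lam/(Lam+alpha) and per-step cost c/(Lam+alpha).
rates = [np.array([[-2.,  2., 0.], [1., -3., 2.], [0., 1., -1.]]),
         np.array([[-1.,  0., 1.], [2., -2., 0.], [1., 1., -2.]])]
c = [np.array([1., 4., 2.]), np.array([3., 1., 1.])]
alpha = 0.5                                  # continuous-time discount rate
Lam = 4.0                                    # uniformization constant

beta = Lam / (Lam + alpha)
P = [np.eye(3) + Q / Lam for Q in rates]     # uniformized jump chains
r = [ci / (Lam + alpha) for ci in c]         # equivalent per-step costs

def policy_iteration(P, r, beta, n_states=3):
    policy = np.zeros(n_states, dtype=int)
    while True:
        Pp = np.array([P[policy[i]][i] for i in range(n_states)])
        rp = np.array([r[policy[i]][i] for i in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - beta * Pp, rp)  # evaluation
        q = np.array([[r[a][i] + beta * P[a][i] @ v for a in range(2)]
                      for i in range(n_states)])
        new = q.argmin(axis=1)                                 # improvement
        if np.array_equal(new, policy):
            return policy, v
        policy = new

pi, v = policy_iteration(P, r, beta)
```

At termination the returned value function satisfies the discrete Bellman optimality equation, the uniformized analogue of the Hamilton-Jacobi-Bellman characterization above.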

7.
A central limit theorem is obtained for Harris recurrent Markov renewal processes and semi-Markov processes, together with Berry-Esseen type estimates. The proof is based on the Kolmogorov-Doeblin regenerative method. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 142, pp. 86–97, 1985.

8.
We study infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled continuous time Markov chains on a countable state space. For the discounted-cost game, we prove the existence of the value and of a saddle-point equilibrium in the class of Markov strategies under nominal conditions. For the ergodic-cost game, we prove the existence of the value and of a saddle-point equilibrium by studying the corresponding Hamilton-Jacobi-Isaacs equation under a certain Lyapunov condition.

9.
A class of models called interactive Markov chains is studied in both discrete and continuous time. These models were introduced by Conlisk and serve as a rich class for sociological modeling, because they allow for interactions among individuals. In discrete time, it is proved that the Markovian processes converge to a deterministic process almost surely as the population size becomes infinite. More importantly, the normalized process is shown to be asymptotically normal with specified mean vector and covariance matrix. In continuous time, the chain is shown to converge weakly to a diffusion process with specified drift and scale terms. The distributional results will allow for the construction of a likelihood function from interactive Markov chain data, so these results will be important for questions of statistical inference. An example from manpower planning is given which indicates the use of this theory in constructing and evaluating control policies for certain social systems.

10.
We provide non-ergodicity criteria for denumerable continuous time Markov processes in terms of test functions. Two examples are given where the non-ergodicity criteria are applied.

11.
The paper studies large sample asymptotic properties of the maximum likelihood estimator (MLE) for the parameter of a continuous time Markov chain observed in white noise. Using the method of weak convergence of likelihoods due to Ibragimov and Khasminskii (Statistical Estimation, vol. 16 of Applications of Mathematics, Springer-Verlag, New York), consistency, asymptotic normality and convergence of moments are established for the MLE under certain strong ergodicity assumptions on the chain. This article was written during the author's visit to the Laboratoire de Statistique et Processus, Université du Maine, France, supported by the Chateaubriand fellowship.
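The paper treats the chain observed through white noise; as a baseline for comparison, the classical MLE of the generator from a *fully observed* path is the textbook formula q_ij = N_ij / R_i, with N_ij the number of i-to-j jumps and R_i the total time spent in state i. The path below is a made-up example.

```python
import numpy as np

def generator_mle(states, times, n):
    """MLE of the generator from a fully observed path:
    states[k] is held on [times[k], times[k+1])."""
    N = np.zeros((n, n))   # jump counts
    R = np.zeros(n)        # occupation times
    for k in range(len(states) - 1):
        R[states[k]] += times[k + 1] - times[k]
        N[states[k], states[k + 1]] += 1.0
    Q = np.zeros((n, n))
    for i in range(n):
        if R[i] > 0:
            Q[i] = N[i] / R[i]
        Q[i, i] = -(Q[i].sum() - Q[i, i])   # diagonal makes rows sum to 0
    return Q

# toy path: state 0 on [0,1), 1 on [1,3), 0 on [3,4), 1 on [4,6)
Q_hat = generator_mle([0, 1, 0, 1], [0.0, 1.0, 3.0, 4.0, 6.0], 2)
```

The final incomplete holding interval is discarded, as is standard for this estimator.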

12.
13.
Summary. Let X(t) = (X_1(t), X_2(t), ..., X_k(t)) be a k-type (2 ≤ k < ∞) continuous time, supercritical, nonsingular, positively regular Markov branching process. Let M(t) = ((m_ij(t))) be the mean matrix, where m_ij(t) = E(X_j(t) | X_r(0) = δ_ir for r = 1, 2, ..., k), and write M(t) = exp(At). Let ξ be an eigenvector of A corresponding to an eigenvalue λ. Assuming second moments, this paper studies the limit behavior as t → ∞ of the stochastic process ξ·X(t)e^{-λt}. It is shown that (i) if 2 Re λ > λ_1, then ξ·X(t)e^{-λt} converges a.s. and in mean square to a random variable; (ii) if 2 Re λ ≤ λ_1, then [ξ·X(t)] f(v·X(t)) converges in law to a normal distribution, where f(x) = x^{-1/2} if 2 Re λ < λ_1 and f(x) = (x log x)^{-1/2} if 2 Re λ = λ_1, λ_1 being the largest real eigenvalue of A and v the corresponding right eigenvector. Research supported in part under contracts N0014-67-A-0112-0015 and NIH USPHS 10452 at Stanford University.

14.
We introduce a sequence of stopping times that allow us to study an analogue of a life-cycle decomposition for a continuous time Markov process, which is an extension of the well-known splitting technique of Nummelin to the continuous time case. As a consequence, we are able to give deterministic equivalents of additive functionals of the process and to state a generalisation of Chen’s inequality. We apply our results to the problem of non-parametric kernel estimation of the drift of multi-dimensional recurrent, but not necessarily ergodic, diffusion processes.
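The kernel drift estimation mentioned as the application can be sketched in one dimension: regress the normalized increments of a discretized path on the current position with a Gaussian kernel. The Ornstein-Uhlenbeck process and all tuning constants below are our own toy choices, unrelated to the paper's multi-dimensional setting.

```python
import numpy as np

# Simulate an Ornstein-Uhlenbeck path dX = -theta*X dt + sigma dW by
# Euler steps, then estimate the drift b(x) = lim E[dX | X=x]/dt with a
# Nadaraya-Watson kernel regression of increments on positions.
rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
X = np.empty(n)
X[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n - 1)
for k in range(n - 1):
    X[k + 1] = X[k] - theta * X[k] * dt + sigma * dW[k]

def drift_hat(x0, h=0.1):
    w = np.exp(-0.5 * ((X[:-1] - x0) / h) ** 2)      # Gaussian kernel weights
    return np.sum(w * (X[1:] - X[:-1])) / (dt * np.sum(w))

b1 = drift_hat(1.0)        # true drift at x = 1 is -theta
```

With this much data the estimate should sit near the true drift -theta*x, up to kernel bias and sampling noise.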

15.
Unbiased estimators are constructed for transition probabilities of homogeneous Markov chains with a finite number of states. Translated from Statisticheskie Metody, pp. 97–103, 1980.
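For reference, the natural plug-in estimator of the transition probabilities is p_ij = N_ij / N_i, transitions from i to j divided by visits to i; whether this is exactly unbiased depends on the sampling scheme the paper studies, so the snippet below is just the standard empirical estimate from one observed trajectory (a made-up one).

```python
import numpy as np

def transition_estimate(path, n):
    """Empirical transition matrix from an observed state sequence."""
    N = np.zeros((n, n))
    for a, b in zip(path[:-1], path[1:]):
        N[a, b] += 1.0                       # count each observed transition
    rows = N.sum(axis=1, keepdims=True)
    # divide each row by its visit count; leave unvisited rows at zero
    return np.divide(N, rows, out=np.zeros_like(N), where=rows > 0)

P_hat = transition_estimate([0, 1, 1, 0, 1, 0, 0, 1], 2)
```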

16.
This work develops asymptotic expansions for solutions of systems of backward equations of time-inhomogeneous Markov chains in continuous time. Owing to the rapid progress in technology and the increasing complexity in modeling, the underlying Markov chains often have large state spaces, which make the computational tasks infeasible. To reduce the complexity, two-time-scale formulations are used. By introducing a small parameter ε > 0 and using suitable decomposition and aggregation procedures, the problem is formulated as a singular perturbation problem. Both Markov chains having recurrent states only and Markov chains also including transient states are treated. Under certain weak irreducibility and smoothness conditions on the generators, the desired asymptotic expansions are constructed, and error bounds are obtained.
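The decomposition-aggregation step in the two-time-scale formulation can be sketched numerically for the recurrent case: with generator Q_eps = Q_fast/eps + Q_slow and Q_fast block-diagonal with irreducible blocks, the limiting aggregated chain on the blocks has generator Qbar_kl = nu_k Q_slow[block k, block l] 1, where nu_k is the stationary law of fast block k. This is a standard construction consistent with the setup in the abstract; the matrices below are invented examples.

```python
import numpy as np

# Fast dynamics: block-diagonal generator with two irreducible blocks.
Q_fast = np.array([[-1.,  1.,  0.,  0.],
                   [ 2., -2.,  0.,  0.],
                   [ 0.,  0., -3.,  3.],
                   [ 0.,  0.,  1., -1.]])
# Slow dynamics coupling the blocks.
Q_slow = np.array([[-0.5, 0.0, 0.5, 0.0],
                   [ 0.0,-0.2, 0.0, 0.2],
                   [ 0.3, 0.0,-0.3, 0.0],
                   [ 0.0, 0.4, 0.0,-0.4]])
blocks = [[0, 1], [2, 3]]

def stationary(Q):
    """Stationary distribution of an irreducible generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])         # nu Q = 0, sum(nu) = 1
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def aggregate(Q_slow, blocks):
    K = len(blocks)
    Qbar = np.zeros((K, K))
    for k, bk in enumerate(blocks):
        nu = stationary(Q_fast[np.ix_(bk, bk)])
        for l, bl in enumerate(blocks):
            Qbar[k, l] = nu @ Q_slow[np.ix_(bk, bl)] @ np.ones(len(bl))
    return Qbar

Qbar = aggregate(Q_slow, blocks)
```

The aggregated matrix is itself a generator on the two blocks, which is what makes the reduced model a valid leading-order approximation.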

17.
18.
Recursive equations are derived for the conditional distribution of the state of a Markov chain, given observations of a function of the state. Mainly continuous time chains are considered. The equations for the conditional distribution are given in matrix form and in differential equation form; the conditional distribution itself forms a Markov process. Special cases considered are doubly stochastic Poisson processes with a Markovian intensity, Markov chains with a random time, and Markovian approximations of semi-Markov processes. Further, the results are used to compute the Radon-Nikodym derivative for two probability measures for a Markov chain when a function of the state is observed.
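A discrete-time analogue of such a recursion makes the structure concrete: when a chain X with kernel P is observed only through y = h(X) (noise-free), the conditional law is updated by a predict step (apply P) and a correct step (restrict to states consistent with y, renormalize). The kernel and observation map below are made-up illustrations, not the paper's continuous-time equations.

```python
import numpy as np

# Chain on 3 states observed through h: states 0 and 1 produce the same
# observation, state 2 a different one.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
h = np.array([0, 0, 1])

def filter_chain(obs, pi0):
    """Conditional law of X_t given y_0, ..., y_t."""
    pi = np.asarray(pi0, dtype=float)
    pi = pi * (h == obs[0])                  # condition on initial observation
    pi /= pi.sum()
    out = [pi]
    for y in obs[1:]:
        pi = pi @ P                          # predict
        pi = pi * (h == y)                   # correct: keep consistent states
        pi /= pi.sum()
        out.append(pi)
    return np.array(out)

pis = filter_chain([0, 0, 1], [1/3, 1/3, 1/3])
```

The sequence of conditional distributions produced this way is itself a Markov process, mirroring the property stated in the abstract.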

19.
20.
The results of part I are carried over to Markov chains with continuous time. As opposed to the case of chains with discrete time, one establishes the Markov property of the occupation time process for the simplest one-dimensional symmetric random walk with continuous time. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 130, pp. 56–64, 1983.
