Similar Literature
20 similar documents retrieved.
1.
Let x1, ..., xn be Gaussian random variables connected in a homogeneous Markov chain, and let a sample sequence of length n with possible outliers be given. The asymptotic behavior of the distribution of the number N of Chauvenet outliers as n → ∞ is investigated for unknown mean and correlation. Bibliography: 3 titles.

2.
Let x1, ..., xn be random variables connected in a normal Markov chain. The paper investigates the asymptotic behavior, as n → ∞, of the distribution of the random variable N equal to the number of outliers in the Chauvenet sense. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova Akademii Nauk SSSR, Vol. 177, pp. 163–169, 1989.
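For the simpler i.i.d. Gaussian case, the rejection rule behind these two abstracts can be sketched as follows (the Markov-dependent setting analyzed in the papers is harder, and the function name and planted sample here are invented for illustration): an observation is rejected by Chauvenet's criterion when the expected number of equally extreme values in a sample of the given size falls below 1/2.

```python
import math
import random

def chauvenet_outliers(xs):
    """Count observations rejected by Chauvenet's criterion:
    reject x when n * P(|Z| >= |x - mean| / s) < 1/2."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    rejected = 0
    for x in xs:
        z = abs(x - mean) / s
        p = math.erfc(z / math.sqrt(2))  # two-sided standard normal tail probability
        if n * p < 0.5:
            rejected += 1
    return rejected

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(200)]
sample.append(10.0)  # plant one gross error
N = chauvenet_outliers(sample)
```

The planted value 10.0 is always rejected; whether any genuine draws are also rejected depends on the sample, which is the random behavior of N that the abstracts study.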

3.
This paper gives a theorem for converting a DHMM into a homogeneous Markov chain; the theorem provides a way to study DHMMs by means of the theoretically well-developed homogeneous Markov chain.

4.
Using combinatorial matrix theory, this paper gives methods for deciding the ergodicity of a finite homogeneous Markov chain under two different definitions.
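One standard way to decide such questions computationally, which may or may not coincide with the paper's combinatorial criteria, is to test whether the 0/1 pattern of the transition matrix is primitive, i.e. whether some power of it is strictly positive; Wielandt's bound (n − 1)² + 1 caps the power that needs to be checked. A minimal sketch (the function name is assumed):

```python
def is_regular(P):
    """Check whether a finite homogeneous Markov chain is regular
    (some power of its transition matrix is strictly positive),
    using only the 0/1 pattern of P."""
    n = len(P)
    A = [[1 if P[i][j] > 0 else 0 for j in range(n)] for i in range(n)]
    M = [row[:] for row in A]
    # Wielandt's bound: a primitive n x n pattern has a strictly
    # positive power by exponent (n - 1)**2 + 1
    for _ in range((n - 1) ** 2 + 1):
        if all(all(v for v in row) for row in M):
            return True
        # Boolean matrix product M := M * A
        M = [[1 if any(M[i][k] and A[k][j] for k in range(n)) else 0
              for j in range(n)]
             for i in range(n)]
    return all(all(v for v in row) for row in M)

# a regular two-state chain and a periodic (non-regular) one
P_reg = [[0.5, 0.5], [0.3, 0.7]]
P_per = [[0.0, 1.0], [1.0, 0.0]]
```

The periodic chain is ergodic under the weaker (Cesàro) definition but not regular, which is one way two different definitions of ergodicity can disagree.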

5.
6.
7.
The asymptotic distribution is given of the number N of gross observations rejected by the Chauvenet method for random variables whose distribution function contains an unknown parameter, when the maximum likelihood estimator is used as the estimator of that parameter. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 153, pp. 153–159, 1986. In conclusion, I express my gratitude to I. A. Ibragimov for his interest in the problem and for his remarks.

8.
9.
In this paper we propose a new and more general criterion (the efficient determination criterion, EDC) for estimating the order of a Markov chain. The consistency and the strong consistency of the estimates are established under mild conditions.

10.
Waiting Time Problems in a Two-State Markov Chain
Let F0 be the event that l0 0-runs of length k0 occur and F1 the event that l1 1-runs of length k1 occur in a two-state Markov chain. Using a combinatorial method and the Markov chain imbedding method, we obtain explicit formulas for the probability generating functions of the sooner and later waiting times between F0 and F1 under the non-overlapping, overlapping, and "greater than or equal" enumeration schemes. These formulas are convenient for evaluating the distributions of the sooner and later waiting times.

11.
We consider the first-order Edgeworth expansion for summands related to a homogeneous Markov chain. Certain inaccuracies in some earlier results by Nagaev are corrected and the expansion is obtained under relaxed conditions. An application of our result to the distribution of the MLE of a transition probability in the countable state space case is also considered.

12.
Every attainable structure of a continuous-time homogeneous Markov chain (HMC) with n states, or of a closed Markov system with an embedded HMC with n states, or more generally of a Markov system driven by an HMC, is considered as a point-particle of R^n. Then, the motion of the attainable structure corresponds to the motion of the respective point-particle in R^n. Under the assumption that "the motion of every particle at every time point is due to the interaction with its surroundings", R^n (and equivalently the set of the associated attainable structures of the homogeneous Markov system (HMS), or alternatively of the underlying embedded HMC) becomes a continuum. Thus, the evolution of the set of the attainable structures corresponds to the motion of the continuum. In this paper it is shown that the evolution of a three-dimensional HMS (n = 3), or simply of an HMC, can be interpreted through the evolution of a two-dimensional isotropic viscoelastic medium.

13.
It is common to subsample Markov chain output to reduce the storage burden. Geyer shows that discarding k − 1 out of every k observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning Markov chain Monte Carlo (MCMC) output cannot improve statistical efficiency. Here, we suppose that it costs one unit of time to advance a Markov chain and then θ > 0 units of time to compute a sampled quantity of interest. For a thinned process, that cost θ is incurred less often, so the chain can be advanced through more stages. Here, we provide examples to show that thinning will improve statistical efficiency if θ is large and the sample autocorrelations decay slowly enough. If the lag-ℓ autocorrelations (ℓ ≥ 1) of a scalar measurement satisfy ρ_ℓ > ρ_{ℓ+1} > 0, then there is always a θ < ∞ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble those of first-order AR(1) processes with ρ_ℓ = ρ^|ℓ| for some −1 < ρ < 1. For an AR(1) process, it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases toward 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0, it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). This efficiency gain never exceeds 1 + θ. This article also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes. Supplementary materials for this article are available online.
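Under one plausible formalization of this cost model (a chain thinned by k is AR(1) with parameter r = ρ^k, its variance inflation factor for the sample mean is (1 + r)/(1 − r), and each kept draw costs k chain steps plus θ for the quantity of interest), the most efficient k can be found numerically. The function names are assumptions, not the article's code:

```python
def thinning_cost(k, rho, theta):
    """Cost-normalized asymptotic variance of the sample mean when
    keeping every k-th draw of an AR(1) chain with lag-1
    autocorrelation rho and per-draw evaluation cost theta."""
    r = rho ** k
    return (k + theta) * (1.0 + r) / (1.0 - r)

def best_k(rho, theta, k_max=1000):
    """Subsampling frequency minimizing the cost above."""
    return min(range(1, k_max + 1), key=lambda k: thinning_cost(k, rho, theta))
```

Under this sketch, no thinning is best when θ is below the abstract's threshold (1 − ρ)²/(2ρ), and the optimal k grows with both ρ and θ, matching the qualitative claims above.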

14.
We prove that if a certain row of the transition probability matrix of a regular Markov chain is subtracted from the other rows of this matrix and then this row and the corresponding column are deleted, then the spectral radius of the matrix thus obtained is less than 1. We use this property of a regular Markov chain to construct an iterative process for solving the Howard system of equations, which appears in the investigation of controlled Markov chains with a single ergodic class and, possibly, transient states.
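The row-subtraction construction and the claimed spectral-radius property can be checked numerically on a small regular chain. This sketch uses an invented 3-state example and verifies that powers of the reduced matrix vanish, which is equivalent to spectral radius < 1:

```python
def reduced_matrix(P, r):
    """Subtract row r of a transition matrix from every other row,
    then delete row r and column r (the construction in the abstract)."""
    n = len(P)
    idx = [i for i in range(n) if i != r]
    return [[P[i][j] - P[r][j] for j in idx] for i in idx]

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# an invented regular 3-state chain
P = [[0.2, 0.5, 0.3],
     [0.4, 0.4, 0.2],
     [0.3, 0.3, 0.4]]
M = reduced_matrix(P, 0)
# if the spectral radius of M is < 1, its powers must vanish
Mk = M
for _ in range(6):   # repeated squaring: Mk = M**64
    Mk = mat_mul(Mk, Mk)
```

Vanishing powers are exactly what makes the iterative process for the Howard equations converge.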

15.
An approximation of Markov-type queueing models with fast Markov switches by Markov models with averaged transition rates is studied. First, an averaging principle for a two-component Markov process is proved in the following form: if the first component x_n(·) has fast switches, then under some asymptotic mixing conditions the second component weakly converges in Skorokhod space to a Markov process with transition rates averaged over certain stationary measures constructed from x_n(·). The convergence of the stationary distribution of the two-component process is studied as well. The approximation of state-dependent queueing systems of the type M_{M,Q}/M_{M,Q}/m/N with fast Markov switches is considered.

16.
This paper concerns the filtering of an R^d-valued Markov pure jump process when only the total number of jumps is observed. Strong and weak uniqueness for the solutions of the filtering equations are discussed. Accepted 12 November 1999.

17.
18.
We consider a sequence X1, ..., Xn of random variables generated by a stationary Markov chain with state space A = {0, 1, ..., r}, r ≥ 1. We study the overlapping appearances of runs of k_i consecutive i's, for all i = 1, ..., r, in the sequence X1, ..., Xn. We prove that the number of overlapping appearances of the above multiple runs can be approximated by a compound Poisson random variable whose compounding distribution is a mixture of geometric distributions. As an application of this result, we introduce a specific multiple-failure-mode reliability system with Markov-dependent components and provide lower and upper bounds for the reliability of the system.

19.
In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721–734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits; see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models: Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established.

20.
Taking an open environmental system as its setting, this paper explains the scientific soundness and rationality of applying Markov theory to the study of multimedia environmental fate and, in connection with the national key project "Transport/transformation characteristics and carrying capacity of typical pollutants in the Lanzhou section of the Yellow River", studies the environmental fate of nonylphenol polyethoxylates (NPEOs) in the Lanzhou section of the Yellow River. The results show that a Markov model can determine the transfer time of a pollutant between environmental media, its residence time within a medium, its content in each medium at any time, the amounts transferred and degraded by the various transport/transformation processes over a given period, the time for the environmental system to reach steady state, the steady-state distribution of the pollutant in the system, and the pollutant's environmental capacity or emission standard.
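Several of the quantities listed (residence times, steady behavior) are classical outputs of an absorbing Markov chain model: the fundamental matrix N = (I − Q)⁻¹ of the transient block Q gives expected time spent in each medium before removal. A minimal sketch with two hypothetical compartments (the compartments, rates, and names are invented for illustration, not the paper's calibrated model):

```python
def fundamental_matrix(Q):
    """N = (I - Q)^{-1} for the transient block Q of an absorbing
    Markov chain; N[i][j] is the expected number of steps spent in
    compartment j starting from compartment i, before absorption.
    Computed by Gauss-Jordan elimination on the augmented matrix."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] +
         [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

# hypothetical per-step transfer probabilities between two transient
# compartments, 0 = water and 1 = sediment; the absorbing state
# (degradation/outflow) receives the remaining probability
Q = [[0.6, 0.3],
     [0.2, 0.7]]
N = fundamental_matrix(Q)
expected_steps = [sum(row) for row in N]  # total steps before removal
```

Row sums of N give the total expected residence time in the system from each starting medium, one of the quantities the abstract says the Markov model determines.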

