Similar Documents
1.
This paper derives a particle filter algorithm within the Dempster–Shafer framework. Particle filtering is a well-established Bayesian Monte Carlo technique for estimating the current state of a hidden Markov process using a fixed number of samples. When dealing with incomplete information or qualitative assessments of uncertainty, however, Dempster–Shafer models with their explicit representation of ignorance often turn out to be more appropriate than Bayesian models. The contribution of this paper is twofold. First, the Dempster–Shafer formalism is applied to the problem of maintaining a belief distribution over the state space of a hidden Markov process by deriving the corresponding recursive update equations, which turn out to be a strict generalization of Bayesian filtering. Second, it is shown how the solution of these equations can be efficiently approximated via particle filtering based on importance sampling, which makes the Dempster–Shafer approach tractable even for large state spaces. The performance of the resulting algorithm is compared to exact evidential as well as Bayesian inference.
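As a point of reference for the evidential filter described above, here is a minimal sketch of its Bayesian special case, a bootstrap particle filter with importance sampling and resampling. All function names, parameters, and the toy linear-Gaussian model are illustrative, not taken from the paper.

```python
import numpy as np

def particle_filter(y_obs, n_particles, init_sample, transition_sample, likelihood):
    """Bootstrap particle filter: propagate, weight, resample."""
    rng = np.random.default_rng(0)
    particles = init_sample(rng, n_particles)
    for y in y_obs:
        # Propagate particles through the transition model
        # (importance sampling with the prior as proposal).
        particles = transition_sample(rng, particles)
        # Weight by the observation likelihood, then normalize.
        w = likelihood(y, particles)
        w = w / w.sum()
        # Resample to avoid weight degeneracy.
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return particles

# Toy linear-Gaussian model: x_t = 0.9 x_{t-1} + N(0, 0.5), y_t = x_t + N(0, 0.3).
rng = np.random.default_rng(1)
true_x, ys = 0.0, []
for _ in range(50):
    true_x = 0.9 * true_x + rng.normal(0.0, 0.5)
    ys.append(true_x + rng.normal(0.0, 0.3))

post = particle_filter(
    ys, 2000,
    init_sample=lambda r, n: r.normal(0.0, 1.0, n),
    transition_sample=lambda r, p: 0.9 * p + r.normal(0.0, 0.5, p.shape),
    likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.3) ** 2),
)
print(post.mean())  # posterior mean should track the final hidden state
```

The evidential version replaces the scalar weights with mass assignments over sets of states; the propagate/weight/resample loop is unchanged.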

2.
A limit theorem is established for the asymptotic state of a Markov chain arising from an iterative renormalization. The limit theorem is illustrated in applications to the theory of random search and in probabilistic models for descent algorithms. Some special cases are also noted where exact distributional results can be obtained.

3.
The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed for this purpose.
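The sampler described above needs continuous-time Markov chain paths conditioned on both endpoints. A naive baseline for that subproblem is forward simulation with rejection; this is not the paper's exact algorithm (which also conditions on neighboring sites), and the function name and two-state rate matrix are illustrative.

```python
import numpy as np

def sample_path_rejection(Q, a, b, T, rng, max_tries=10000):
    """Sample a CTMC path on [0, T] conditioned on X(0) = a and X(T) = b
    by forward simulation with rejection."""
    n = Q.shape[0]
    for _ in range(max_tries):
        t, state, path = 0.0, a, [(0.0, a)]
        while True:
            rate = -Q[state, state]
            if rate <= 0.0:          # absorbing state: no further jumps
                break
            t += rng.exponential(1.0 / rate)
            if t >= T:
                break
            # Jump according to the embedded chain.
            probs = Q[state].copy()
            probs[state] = 0.0
            probs /= probs.sum()
            state = rng.choice(n, p=probs)
            path.append((t, state))
        if state == b:               # accept only paths hitting the right endpoint
            return path
    raise RuntimeError("rejection sampler failed to hit the endpoint")

# Two-state chain with unit rates both ways.
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
path = sample_path_rejection(Q, a=0, b=1, T=1.0, rng=np.random.default_rng(0))
print(path)
```

Rejection becomes inefficient when the endpoint constraint is unlikely, which is one motivation for an exact (non-rejection) path sampler.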

4.
The method introduced by Leroux [Maximum likelihood estimation for hidden Markov models, Stochastic Process Appl. 40 (1992) 127–143] to study the exact likelihood of hidden Markov models is extended to the case where the state variable evolves in an open interval of the real line. Under rather minimal assumptions, we obtain the convergence of the normalized log-likelihood function to a limit that we identify at the true value of the parameter. The method is illustrated in full detail on the Kalman filter model.

5.
The context of planned preventive maintenance lends itself readily to probabilistic modelling. Indeed, many of the published theoretical models to be found in the literature adopt a Markov approach, where states are usually ‘operating’, ‘operating at one of several levels of deterioration’, and ‘failed’. However, most of these models assume the required Markovian property and do not address the issue of testing the assumption, or the related task of estimating parameters. It is possible that data are inadequate to test the assumption, or that the Markov property is believed to be not strictly valid, but acceptable as an approximation. In this paper we consider, within a specific inspection–maintenance context, the robustness of a Markov-based model when the Markov assumption is not valid. This is achieved by comparing the output of an exact delay time model of an inspection–maintenance problem with that of a semi-Markov approximation. The importance of establishing the validity of the Markov property in the modelling application is highlighted. If the plant behaviour is seen to be nearly Markov, in the case considered the semi-Markov model gives a good approximation to the exact model. Conversely, if the Markov assumption is not a good approximation, the semi-Markov model can lead to inappropriate advice.

6.
We present a Markov chain Monte Carlo (MCMC) method for generating Markov chains using Markov bases for conditional independence models for a four-way contingency table. We then describe a Markov basis characterized by Markov properties associated with a given conditional independence model and show how to use the Markov basis to generate random tables of a Markov chain. The estimates of exact p-values can be obtained from random tables generated by the MCMC method. Numerical experiments examine the performance of the proposed MCMC method in comparison with the χ² approximation using large sparse contingency tables.
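For intuition, the simplest instance of sampling with a Markov basis is a random walk over 2×2 nonnegative integer tables with fixed row and column margins, driven by the single basic move. This is only a toy analogue of the four-way construction in the abstract, with illustrative names throughout.

```python
import numpy as np

def sample_fiber(table, n_steps, rng):
    """Uniform sampling over 2x2 nonnegative integer tables sharing the
    row and column margins of `table`, via the basic Markov-basis move."""
    t = table.astype(int).copy()
    move = np.array([[1, -1], [-1, 1]])  # the single basis element for 2x2 tables
    samples = []
    for _ in range(n_steps):
        cand = t + rng.choice([-1, 1]) * move
        # Symmetric proposal, uniform target: accept whenever feasible.
        if (cand >= 0).all():
            t = cand
        samples.append(t.copy())
    return samples

rng = np.random.default_rng(0)
samples = sample_fiber(np.array([[3, 2], [1, 4]]), 1000, rng)
print(samples[-1])
```

Every sampled table keeps the original margins (4, 6) and (5, 5); an exact p-value is then estimated by the fraction of sampled tables whose test statistic exceeds the observed one.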

7.
In this article, we focus on statistical models for binary data on a regular two-dimensional lattice. We study two classes of models, the Markov mesh models (MMMs) based on causal-like, asymmetric spatial dependence, and symmetric Markov random fields (SMFs) based on noncausal-like, symmetric spatial dependence. Building on results of Enting (1977), we give sufficient conditions for the asymmetrically defined binary MMMs (of third order) to be equivalent to a symmetrically defined binary SMF. Although not every binary SMF can be written as a binary MMM, our results show that many can. For such SMFs, their joint distribution can be written in closed form and their realizations can be simulated with just one pass through the lattice. An important consequence of the latter observation is that there are nontrivial spatial processes for which exact probabilities can be used to benchmark the performance of Markov-chain-Monte-Carlo and other algorithms.

8.
In this paper, we develop an algorithmic method for the evaluation of the steady state probability vector of a special class of finite state Markov chains. For the class of Markov chains considered here, it is assumed that the matrix associated with the set of linear equations for the steady state probabilities possesses a special structure, such that it can be rearranged and decomposed as a sum of two matrices, one lower triangular with nonzero diagonal elements, and the other an upper triangular matrix with only very few nonzero columns. Almost all Markov chain models of queueing systems with finite source and/or finite capacity and first-come-first-served or head-of-the-line nonpreemptive priority service discipline belong to this special class.
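For comparison with the structured algorithm described above, the generic approach it improves upon is a dense linear solve of the balance equations. A minimal sketch, with an illustrative three-state transition matrix (the paper's method instead exploits the lower-triangular-plus-sparse decomposition for efficiency):

```python
import numpy as np

def steady_state(P):
    """Steady-state vector of an irreducible finite chain with transition
    matrix P: solve pi (P - I) = 0 together with sum(pi) = 1 by replacing
    one redundant balance equation with the normalization constraint."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0          # normalization row replaces a redundant equation
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Small three-state example (rows sum to one).
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.2, 0.5],
              [0.0, 0.3, 0.7]])
pi = steady_state(P)
print(pi)
```

A dense solve costs O(n³); the triangular structure assumed in the paper reduces this substantially for the queueing models it covers.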

9.
A stochastic chemical system with multiple types of molecules interacting through reaction channels can be modeled as a continuous‐time Markov chain with a countably infinite multidimensional state space. Starting from an initial probability distribution, the time evolution of the probability distribution associated with this continuous‐time Markov chain is described by a system of ordinary differential equations, known as the chemical master equation (CME). This paper shows how one can solve the CME using backward differentiation. In doing this, a novel approach to truncate the state space at each time step using a prediction vector is proposed. The infinitesimal generator matrix associated with the truncated state space is represented compactly, and exactly, using a sum of Kronecker products of matrices associated with molecules. This exact representation is already compact and does not require a low‐rank approximation in the hierarchical Tucker decomposition (HTD) format. During transient analysis, compact solution vectors in HTD format are employed with the exact, compact, and truncated generated matrices in Kronecker form, and the linear systems are solved with the Jacobi method using fixed or adaptive rank control strategies on the compact vectors. Results of simulation on benchmark models are compared with those of the proposed solver and another version, which works with compact vectors and highly accurate low‐rank approximations of the truncated generator matrices in quantized tensor train format and solves the linear systems with the density matrix renormalization group method. Results indicate that there is a reason to solve the CME numerically, and adaptive rank control strategies on compact vectors in HTD format improve time and memory requirements significantly.

10.
Gaudemet, T.; McDonald, D. Queueing Systems 2002, 41(1–2): 95–121
Markov modulated fluid models are widely used in modelling communications and computer systems. In the AMS (Anick, Mitra, Sondhi) model, heterogeneous, bursty sources modeled by multidimensional Markov processes are superimposed or multiplexed together to drive a fluid buffer. The performance of the system is measured by the steady state probability that the buffer exceeds a high level. The exact solution to this problem derived by AMS requires too much computation to be used on-line. Here we derive an upper bound for the above probability which is fast to compute and accurate enough for practical use.

11.
The Hidden Markov Chain (HMC) models are widely applied to various problems. This success is mainly due to the fact that the distribution of the hidden model conditional on the observations remains a Markov chain distribution, and thus different processings, like Bayesian restorations, remain tractable. These models have recently been generalized to “Pairwise” Markov chains, which admit the same processing power and greater modeling power. The aim of this Note is to show that the Hidden Markov trees, which can be seen as extensions of the HMC models, can also be generalized to “Pairwise” Markov trees, which present the same processing advantages and greater modelling power. To cite this article: W. Pieczynski, C. R. Acad. Sci. Paris, Ser. I 335 (2002) 79–82.

12.
Bayesian nonparametric (BNP) models provide a flexible tool in modeling many processes. One area that has not yet utilized BNP estimation is semi‐Markov processes (SMPs). SMPs require a significant amount of computation; this, coupled with the computation requirements for BNP models, has hampered any applications of SMPs using BNP estimation. This paper presents a modeling and computational approach for BNP estimation in semi‐Markov models, which includes a simulation study and an application of asthma patients' first passage from one state of control to another.

13.
We study discretizations of polynomial processes using finite state Markov processes satisfying suitable moment matching conditions. The states of these Markov processes together with their transition probabilities can be interpreted as Markov cubature rules. The polynomial property allows us to study such rules using algebraic techniques. Markov cubature rules aid the tractability of path-dependent tasks such as American option pricing in models where the underlying factors are polynomial processes.
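As a small illustration of the moment matching idea, the classical three-node rule with states ±√3 and 0 reproduces the moments of a standard normal increment up to order four. This is only a one-step analogue of the Markov cubature rules in the abstract, not the paper's construction.

```python
import numpy as np

# Three-point rule matching the first four moments of N(0, 1):
# nodes at -sqrt(3), 0, sqrt(3) with probabilities 1/6, 2/3, 1/6.
nodes = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
probs = np.array([1 / 6, 2 / 3, 1 / 6])

# Moments of N(0, 1) of order 0..4 are 1, 0, 1, 0, 3.
moments = [(probs * nodes ** k).sum() for k in range(5)]
print(moments)
```

Chaining such one-step rules over a time grid yields a finite state Markov chain whose transition probabilities play the role of cubature weights.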

14.
This paper extends the model and analysis of Vandaele and Vanmaele [Insurance: Mathematics and Economics, 2008, 42: 1128–1137]. We assume that the parameters of the Lévy process which models the dynamics of the risky asset in the financial market depend on a finite state Markov chain. The state of the Markov chain can be interpreted as the state of the economy. Under the regime switching Lévy model, we obtain the locally risk-minimizing hedging strategies for some unit-linked life insurance products, including both the pure endowment policy and the term insurance contract.

15.
McDonald, D.; Qian, K. Queueing Systems 1998, 30(3–4): 365–384
This paper presents an approximation method for numerically solving general Markov-modulated fluid models which are widely used in modelling communications and computer systems. We show how the superposition of a group of heterogeneous sources (normally modeled by a multidimensional Markov process) can be approximated by a one-dimensional Markov process, which is then used as the modulating process of the buffer content process. The method effectively reduces the computation that is usually required to find exact (or asymptotic) solutions of fluid models. While this method is general, we focus our discussion on the models with only ON/OFF traffic sources. Numerous numerical results are provided to show the accuracy of the approximation. This revised version was published online in June 2006 with corrections to the Cover Date.
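For homogeneous sources the one-dimensional reduction is exact: the number of ON sources among N i.i.d. ON/OFF sources is itself a birth-death chain. A sketch of its generator follows, with illustrative rates; the paper's contribution is the approximate reduction for heterogeneous sources, which this does not cover.

```python
import numpy as np

def onoff_superposition_generator(N, alpha, beta):
    """Generator of the birth-death chain counting how many of N i.i.d.
    ON/OFF sources are ON (OFF -> ON at rate alpha, ON -> OFF at rate beta)."""
    Q = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        if k < N:
            Q[k, k + 1] = (N - k) * alpha   # one more source turns ON
        if k > 0:
            Q[k, k - 1] = k * beta          # one source turns OFF
        Q[k, k] = -Q[k].sum()               # rows of a generator sum to zero
    return Q

Q = onoff_superposition_generator(N=3, alpha=1.0, beta=2.0)
print(Q)
```

This (N+1)-state chain then modulates the fluid buffer's drift, replacing the 2^N-state joint description of the sources.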

16.
Various process models for discrete manufacturing systems (parts industry) can be treated as bounded discrete-space Markov chains, completely characterized by the original in-control state and a transition matrix for shifts to an out-of-control state. The present work extends these models by using a continuous-state Markov chain, incorporating non-random corrective actions. These actions are to be realized according to the statistical process control (SPC) technique and should substantially affect the model. The developed stochastic model yields a Laplace distribution of the process mean. Real-data tests confirm its applicability for the parts industry and show that the distribution parameter is mainly controlled by the SPC sample size.

17.
Ishizaki, Fumio; Takine, Tetsuya. Queueing Systems 1999, 31(3–4): 317–326
We consider a discrete-time single-server queue with arrivals governed by a stationary Markov chain, where it is assumed that no arrivals occur only when the Markov chain is in a particular state. This assumption implies that off-periods in the arrival process are i.i.d. and geometrically distributed. For this queue, we establish the exact relationship between the queue length distributions in a finite-buffer queue and the corresponding infinite-buffer queue. With this result, the exact loss probability is obtained in terms of the queue length distribution in the corresponding infinite-buffer queue. Note that this result enables us to compute the loss probability very efficiently, since the queue length distribution in the infinite-buffer queue can be efficiently computed when off-periods are geometrically distributed. This revised version was published online in June 2006 with corrections to the Cover Date.

18.
A general procedure for creating Markovian interest rate models is presented. The models created by this procedure automatically fit within the HJM framework and fit the initial term structure exactly. Therefore they are arbitrage free. Because the models created by this procedure have only one state variable per factor, two- and even three-factor models can be computed efficiently, without resorting to Monte Carlo techniques. This computational efficiency makes calibration of the new models to market prices straightforward. Extended Hull-White, extended CIR, Black-Karasinski, Jamshidian's Brownian path independent models, and Flesaker and Hughston's rational log normal models are one-state variable models which fit naturally within this theoretical framework. The ‘separable’ n-factor models of Cheyette and Li, Ritchken, and Sankarasubramanian - which require n(n + 3)/2 state variables - are degenerate members of the new class of models with n(n + 3)/2 factors. The procedure is used to create a new class of one-factor models, the ‘β-η models’. These models can match the implied volatility smiles of swaptions and caplets, and thus enable one to eliminate smile error. The β-η models are also exactly solvable in that their transition densities can be written explicitly. For these models accurate - but not exact - formulas are presented for caplet and swaption prices, and it is indicated how these closed form expressions can be used to efficiently calibrate the models to market prices.

19.
Quasi-stationary distributions have been used in biology to describe the steady state behaviour of Markovian population models which, while eventually certain to become extinct, nevertheless maintain an apparent stochastic equilibrium for long periods. However, they have substantial drawbacks; a Markov process may not possess any, or may have several, and their probabilities can be very difficult to determine. Here, we consider conditions under which an apparent stochastic equilibrium distribution can be identified and computed, irrespective of whether a quasi-stationary distribution exists, or is unique; we call it a quasi-equilibrium distribution. The results are applied to multi-dimensional Markov population processes.
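When a quasi-stationary distribution does exist for a small finite chain, it can be computed as the normalized left Perron eigenvector of the generator restricted to the transient states. A minimal sketch with an illustrative birth-death example; this is the standard QSD computation, not the paper's quasi-equilibrium construction.

```python
import numpy as np

def quasi_stationary(Q):
    """Quasi-stationary distribution of an absorbing CTMC: the normalized
    left eigenvector of the transient-state generator Q for the eigenvalue
    with largest real part (the Perron eigenvalue)."""
    vals, vecs = np.linalg.eig(Q.T)
    k = np.argmax(vals.real)
    v = np.abs(vecs[:, k].real)   # Perron eigenvector has one sign; fix it
    return v / v.sum()

# Birth-death chain on transient states {1, 2, 3}, absorbed at 0 from state 1.
# Rows sum to minus the absorption rate out of each state.
Q = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])
m = quasi_stationary(Q)
print(m)
```

The magnitude of the Perron eigenvalue gives the asymptotic extinction rate conditional on survival; the paper's point is precisely that this object can fail to exist or be non-unique for infinite chains.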

20.
