Similar Literature
20 similar documents found (search time: 718 ms)
1.
Abstract

In this article, we solve a class of estimation problems, namely filtering, smoothing, and detection, for a discrete-time dynamical system with integer-valued observations. The observation processes we consider are Poisson random variables observed at discrete times, where the distribution parameter of each Poisson observation is determined by the state of a Markov chain. By appealing to a duality between the forward (in time) filter and its corresponding backward processes, we compute dynamics satisfied by the unnormalized form of the smoother probability. These dynamics can be applied to construct algorithms typically referred to as fixed-point smoothers, fixed-lag smoothers, and fixed-interval smoothers. M-ary detection filters are computed for two scenarios: the standard model-parameter detection problem and a jump Markov system.
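A minimal sketch of the forward (in time) filter for such a model, assuming a finite-state chain with a row-stochastic transition matrix `A`, per-state Poisson rates `lam`, and initial distribution `pi0` (the names and exact recursion form are illustrative, not taken from the article):

```python
import math
import numpy as np

def forward_filter(A, lam, pi0, observations):
    """Forward filter for a finite-state Markov chain observed through
    Poisson counts whose rate depends on the hidden state."""
    q = np.asarray(pi0, dtype=float)
    for y in observations:
        # Poisson likelihood of the count y under each state's rate
        likelihood = np.exp(-lam) * lam**y / math.factorial(y)
        # predict with the chain dynamics, then correct with the likelihood
        q = likelihood * (A.T @ q)
    return q / q.sum()  # normalize the unnormalized filter
```

With `A` the identity and rates `[1, 10]`, counts near 10 push essentially all posterior mass onto the second state.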

2.
Abstract

We introduce the concepts of lumpability and commutativity of a continuous time discrete state space Markov process, and provide a necessary and sufficient condition for a lumpable Markov process to be commutative. Under suitable conditions we recover some of the basic quantities of the original Markov process from the jump chain of the lumped Markov process.
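The article works in continuous time; as a discrete-time illustration of the same lumpability condition (an assumption for this sketch, not the article's setting), a chain is strongly lumpable over a partition when every state in a block has the same total transition probability into each other block:

```python
import numpy as np

def lump(P, partition):
    """Lump a row-stochastic matrix P over a partition of its states.
    Strong lumpability requires every state in a block to have the same
    total transition probability into each other block."""
    k = len(partition)
    Q = np.zeros((k, k))
    for i, block_i in enumerate(partition):
        for j, block_j in enumerate(partition):
            row_sums = [P[s, block_j].sum() for s in block_i]
            if not np.allclose(row_sums, row_sums[0]):
                raise ValueError("chain is not lumpable for this partition")
            Q[i, j] = row_sums[0]
    return Q
```

For example, lumping states {0, 1} of a 3-state chain whose first two rows place the same total mass on each block yields a 2-state chain.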

3.
Abstract

Versions of the Gibbs Sampler are derived for the analysis of data from hidden Markov chains and hidden Markov random fields. The principal new development is to use the pseudolikelihood function associated with the underlying Markov process in place of the likelihood, which is intractable in the case of a Markov random field, in the simulation step for the parameters in the Markov process. Theoretical aspects are discussed and a numerical study is reported.

4.
Abstract

In this paper, we use filtering techniques to estimate the occurrence time of an event in a financial market. The occurrence time is viewed as a Markov stopping time with respect to the σ-field generated by a hidden Markov process. We also generalize our result to the Nth occurrence time of the event.

5.
Abstract

The so-called “Rao-Blackwellized” estimators proposed by Gelfand and Smith do not always reduce variance in Markov chain Monte Carlo when the dependence in the Markov chain is taken into account. An illustrative example is given, and a theorem characterizing the necessary and sufficient condition for such an estimator to always reduce variance is proved.

6.
Members of a population of fixed size N can be in any one of n states. In discrete time the individuals jump from one state to another, independently of each other, with probabilities described by a homogeneous Markov chain. At each time a sample of size M is withdrawn (with replacement). Based on these observations, and using the techniques of Hidden Markov Models, recursive estimates for the distribution of the population are obtained.

7.
8.
Abstract

In this article, a class of strong limit theorems for the relative entropy density of an arbitrary stochastic sequence, expressed by inequalities, is obtained by comparing an arbitrary dependent distribution with the mth-order Markov distribution on a probability space. As corollaries, some Shannon–McMillan theorems for mth-order nonhomogeneous Markov information sources are obtained, and some known results for nonhomogeneous Markov information sources are generalized.

9.
Abstract

In this paper, we focus on two-component Markov processes that consist of continuous dynamics and discrete events. We use the classical fixed point theorem for contractions to investigate the existence and uniqueness of solutions of stochastic heat equations with Markovian switching, and then develop the corresponding Feller property of the solution.

10.
Abstract

Many Bayesian analyses use Markov chain Monte Carlo (MCMC) techniques. MCMC techniques work fastest (per iteration) when the prior distribution of the parameters is chosen conveniently, such as a conjugate prior. However, this is sometimes at odds with the prior desired by the investigator. We describe two motivating examples where nonconjugate priors are preferred. One is a Dirichlet process where it is difficult to implement alternative, nonconjugate priors. We develop a method that allows computation to be done with a convenient prior but adjusts the equilibrium distribution of the Markov chain to be the posterior distribution from the desired prior. In addition to allowing more freedom in choosing prior distributions, the method enables the investigator to perform quick sensitivity analyses, even in nonparametric settings.
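The article adjusts the equilibrium distribution of the chain itself; as a simpler illustration of the underlying idea, draws obtained under a convenient prior can be importance-reweighted by the ratio of the desired to the convenient prior density (a post-hoc sketch under that assumption, not the article's method; all names are hypothetical):

```python
import numpy as np

def reweight(draws, log_prior_used, log_prior_desired, h):
    """Estimate E[h(theta)] under the desired prior's posterior from
    MCMC draws targeting the posterior under a convenient prior, using
    self-normalized importance weights prior_desired / prior_used."""
    log_w = log_prior_desired(draws) - log_prior_used(draws)
    w = np.exp(log_w - log_w.max())  # stabilize before exponentiating
    w /= w.sum()
    return float(np.sum(w * h(draws)))
```

The same weights give a quick sensitivity analysis: recompute the estimate under several candidate priors without rerunning the chain.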

11.
Abstract

Techniques of filtering and parameter reestimation of a general hidden Markov model are developed and applied to a discrete time multi-period asset allocation problem, where a commonly used mean-variance utility is considered and recursive calculation of an explicit optimal portfolio is provided. Our result is a generalization of that by Robert J. Elliott and John van der Hoek.

12.
Discrete choice models are widely used for understanding how customers choose between a variety of substitutable goods. We investigate the relationship between two well studied choice models, the Nested Logit (NL) model and the Markov choice model. Both models generalize the classic Multinomial Logit model and admit tractable algorithms for assortment optimization. Previous evidence indicates that the NL model may be well approximated by, or be a special case of, the Markov model. We establish that the Nested Logit model, in general, cannot be represented by a Markov model. Further, we show that there exists a family of instances of the NL model where the choice probabilities cannot be approximated to within a constant error by any Markov choice model.
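A sketch of two-level Nested Logit choice probabilities (a standard textbook form, assumed here rather than taken from the article): each nest gets an inclusive value from a log-sum of its utilities scaled by a dissimilarity parameter, and setting all dissimilarity parameters to 1 collapses the model to Multinomial Logit, which is the sense in which both models above generalize MNL.

```python
import math

def nested_logit_probs(nests, gammas):
    """Choice probabilities for a two-level Nested Logit model.
    nests: list of lists of deterministic utilities;
    gammas: per-nest dissimilarity parameters in (0, 1]."""
    # inclusive value (log-sum) of each nest
    inclusive = [g * math.log(sum(math.exp(u / g) for u in nest))
                 for nest, g in zip(nests, gammas)]
    denom = sum(math.exp(v) for v in inclusive)
    probs = []
    for nest, g, v in zip(nests, gammas, inclusive):
        p_nest = math.exp(v) / denom            # probability of the nest
        within = sum(math.exp(u / g) for u in nest)
        probs.append([p_nest * math.exp(u / g) / within for u in nest])
    return probs
```

With equal utilities and all gammas equal to 1, every alternative receives the same probability, as MNL requires.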

13.
Abstract

Transition probabilities of embedded Markov chain for single-server queues are considered when the distribution of the inter-arrival time or that of the service time is specified. A comprehensive collection of formulas is derived for the transition probabilities, covering some seventeen flexible families. The corresponding estimation procedures are also derived by the method of moments. It is expected that this work could serve as a useful reference for the modeling of queuing systems with embedded Markov chains.
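As a concrete instance of such formulas (a standard M/G/1 result, not necessarily one of the article's seventeen families): the embedded chain at departure epochs is driven by a_k, the probability of k Poisson(λ) arrivals during one service, and when the service time is Exponential(μ) this integral evaluates to a geometric distribution, a_k = (μ/(λ+μ))(λ/(λ+μ))^k.

```python
def arrivals_during_service_exp(lam, mu, kmax):
    """a_k = P(k Poisson(lam) arrivals during an Exponential(mu) service).
    Integrating the Poisson pmf against the exponential density gives the
    geometric form a_k = (mu/(lam+mu)) * (lam/(lam+mu))**k."""
    p = mu / (lam + mu)
    r = lam / (lam + mu)
    return [p * r**k for k in range(kmax + 1)]
```

These a_k fill the rows of the embedded transition matrix in the usual M/G/1 way.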

14.
Markov properties and strong Markov properties for random fields are defined and discussed. Special attention is given to those defined by I. V. Evstigneev. The strong Markov nature of Markov random fields with respect to random domains such as [0, L], where L is a multidimensional extension of a stopping time, is explored. A special case of this extension is shown to generalize a result of Merzbach and Nualart for point processes. As an additional example, Evstigneev's Markov and strong Markov properties are considered for independent increment jump processes.

15.
Abstract

The members of a set of conditional probability density functions are called compatible if there exists a joint probability density function that generates them. We generalize this concept by calling the conditionals functionally compatible if there exists a non-negative function that behaves like a joint density as far as generating the conditionals according to the probability calculus, but whose integral over the whole space is not necessarily finite. A necessary and sufficient condition for functional compatibility is given that provides a method of calculating this function, if it exists. A Markov transition function is then constructed using a set of functionally compatible conditional densities and it is shown, using the compatibility results, that the associated Markov chain is positive recurrent if and only if the conditionals are compatible. A Gibbs Markov chain, constructed via “Gibbs conditionals” from a hierarchical model with an improper posterior, is a special case. Therefore, the results of this article can be used to evaluate the consequences of applying the Gibbs sampler when the posterior's impropriety is unknown to the user. Our results cannot, however, be used to detect improper posteriors. Monte Carlo approximations based on Gibbs chains are shown to have undesirable limiting behavior when the posterior is improper. The results are applied to a Bayesian hierarchical one-way random effects model with an improper posterior distribution. The model is simple, but also quite similar to some models with improper posteriors that have been used in conjunction with the Gibbs sampler in the literature.

16.
Abstract

A continuous time financial market is considered where randomness is modelled by a finite state Markov chain. Using the chain, a stochastic discount factor is defined. The probability distributions of default times are shown to be given by solutions of a system of coupled partial differential equations.

17.
Multi-dimensional asymptotically quasi-Toeplitz Markov chains with discrete and continuous time are introduced. Ergodicity and non-ergodicity conditions are proven. A numerically stable algorithm to calculate the stationary distribution is presented. An application of such chains in retrial queueing models with a Batch Markovian Arrival Process is briefly illustrated. AMS Subject Classifications: Primary 60K25 · 60K20

18.
Abstract

We generalize the stochastic volatility model by allowing the volatility to follow different dynamics in different states of the world. The dynamics of the “states of the world” are represented by a Markov chain. We estimate all the parameters by using the filtering and the EM algorithms. Closed form estimates for all parameters are derived in this paper. These estimates can be updated using new information as it arrives.

19.
ABSTRACT

The main goal of this paper is to study the infinite-horizon long run average continuous-time optimal control problem of piecewise deterministic Markov processes (PDMPs) with the control acting continuously on the jump intensity λ and on the transition measure Q of the process. We provide conditions for the existence of a solution to an integro-differential optimality inequality, the so-called Hamilton-Jacobi-Bellman (HJB) equation, and for the existence of a deterministic stationary optimal policy. These results are obtained by using the so-called vanishing discount approach, under some continuity and compactness assumptions on the parameters of the problem, as well as some non-explosive conditions for the process.

20.
It is common to subsample Markov chain output to reduce the storage burden. Geyer shows that discarding k − 1 out of every k observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning Markov chain Monte Carlo (MCMC) output cannot improve statistical efficiency. Here, we suppose that it costs one unit of time to advance a Markov chain and then θ > 0 units of time to compute a sampled quantity of interest. For a thinned process, that cost θ is incurred less often, so the chain can be advanced through more stages. Here, we provide examples to show that thinning will improve statistical efficiency if θ is large and the sample autocorrelations decay slowly enough. If the lag-ℓ autocorrelations of a scalar measurement satisfy ρ_ℓ > ρ_{ℓ+1} > 0 for ℓ ≥ 1, then there is always a θ < ∞ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble those of first-order AR(1) processes with ρ_ℓ = ρ^{|ℓ|} for some −1 < ρ < 1. For an AR(1) process, it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases toward 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0, it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). This efficiency gain never exceeds 1 + θ. This article also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes. Supplementary materials for this article are available online.
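The AR(1) trade-off described above can be explored by direct search, assuming (as a sketch, not the article's exact derivation) that keeping every k-th sample costs k + θ time units per retained sample and leaves an AR(1) sequence with correlation ρ^k, so the time-normalized variance of the sample mean is proportional to (k + θ)(1 + ρ^k)/(1 − ρ^k):

```python
def thinned_cost(k, theta, rho):
    """Time-normalized asymptotic variance of the mean when keeping every
    k-th sample: per-retained-sample cost (k + theta) times the variance
    inflation factor of an AR(1) sequence with correlation rho**k."""
    return (k + theta) * (1 + rho**k) / (1 - rho**k)

def best_thinning(theta, rho, kmax=5000):
    """Most efficient subsampling frequency by direct search over k."""
    return min(range(1, kmax + 1), key=lambda k: thinned_cost(k, theta, rho))
```

Under this cost model the search reproduces the boundary quoted above: for ρ = 0.5 the threshold (1 − ρ)²/(2ρ) is 0.25, and θ just below it leaves k = 1 optimal while θ just above it makes thinning pay.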


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)