Similar Articles
20 similar articles found (search time: 15 ms)
1.
Abstract

We postulate observations from a Poisson process whose rate parameter modulates between two values determined by an unobserved Markov chain. The theory switches from continuous to discrete time by considering the intervals between observations as a sequence of dependent random variables. A result from hidden Markov models allows us to sample from the posterior distribution of the model parameters given the observed event times using a Gibbs sampler with only two steps per iteration.
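The modulated process in this abstract can be illustrated with a short simulation. This is a hypothetical sketch, not the paper's method: the function name, the two event rates, and the switching intensities are all assumptions chosen for illustration.

```python
import random

def simulate_mmpp(rates, switch, t_end, seed=0):
    """Simulate a two-state Markov-modulated Poisson process.

    rates:  (lambda0, lambda1) event rates in the two hidden states
    switch: (q0, q1) rates of leaving state 0 and state 1
    Returns the list of observed event times in [0, t_end].
    """
    rng = random.Random(seed)
    t, state, events = 0.0, 0, []
    while t < t_end:
        # The Poisson clock and the switching clock are both exponential,
        # so we advance to whichever fires first.
        rate_event = rates[state]
        rate_switch = switch[state]
        total = rate_event + rate_switch
        t += rng.expovariate(total)
        if t >= t_end:
            break
        if rng.random() < rate_event / total:
            events.append(t)          # observed Poisson event
        else:
            state = 1 - state         # hidden chain switches

    return events

times = simulate_mmpp(rates=(1.0, 10.0), switch=(0.5, 0.5), t_end=100.0)
```

Because both clocks are exponential, advancing to whichever fires first is an exact simulation of the modulated process, not an approximation.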

2.
Consider a Markov additive chain (V,Z) with a negative horizontal drift on a half-plane. We provide the limiting distribution of Z when V passes a threshold for the first time, as V tends to infinity. Our contribution is to allow the Markovian part of an associated twisted Markov chain to be null recurrent or transient. The positive recurrent case was treated by Kesten [Ann. Probab. 2 (1974), 355–386]. Moreover, a ratio limit will be established for a transition kernel with unbounded jumps.

3.
Waiting Time Problems in a Two-State Markov Chain
Let F0 be the event that l0 0-runs of length k0 occur and F1 be the event that l1 1-runs of length k1 occur in a two-state Markov chain. In this paper, using a combinatorial method and the Markov chain imbedding method, we obtain explicit formulas for the probability generating functions of the sooner and later waiting times between F0 and F1 under the non-overlapping, overlapping and "greater than or equal" enumeration schemes. These formulas are convenient for evaluating the distributions in the sooner and later waiting time problems.
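The sooner waiting time described above can be estimated by direct simulation. This is a hypothetical sketch of the simplest single-occurrence case (l0 = l1 = 1) with an assumed initial distribution for the chain; the function name and all parameter values are our own, not the paper's.

```python
import random

def sooner_waiting_time(p01, p10, k0, k1, pi1=0.5, rng=None):
    """One draw of the 'sooner' waiting time: the first trial at which
    either a 0-run of length k0 or a 1-run of length k1 is completed,
    in a two-state Markov chain with P(0->1) = p01 and P(1->0) = p10.
    The chain starts in state 1 with (assumed) probability pi1."""
    rng = rng or random.Random()
    state = 1 if rng.random() < pi1 else 0
    run, n = 1, 1                       # current run length, trial count
    while True:
        if state == 0 and run == k0:
            return n                    # F0 happened first (or tied)
        if state == 1 and run == k1:
            return n                    # F1 happened first
        flip = rng.random() < (p01 if state == 0 else p10)
        if flip:
            state, run = 1 - state, 1   # run broken, new run starts
        else:
            run += 1
        n += 1

rng = random.Random(42)
draws = [sooner_waiting_time(0.3, 0.4, k0=3, k1=3, rng=rng) for _ in range(2000)]
mean_wait = sum(draws) / len(draws)
```

The paper's generating functions give this distribution exactly; the simulation is only a way to sanity-check such formulas numerically.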

4.
A Bayesian approach is used to analyze seismic events with magnitudes of at least 4.7 in Taiwan. Following the idea proposed by Ogata (1988, Journal of the American Statistical Association, 83, 9–27), an epidemic model for the process of occurrence times given the observed magnitude values is considered, together with gamma prior distributions for the parameters of the model; the hyper-parameters of the prior are essentially determined by the seismic data from an earlier period. Bayesian inference is made on the conditional intensity function via a Markov chain Monte Carlo method. The results yield acceptable accuracy in predicting large earthquake events within short time periods.

5.
Abstract

This paper concerns the pricing of American options with stochastic stopping time constraints expressed in terms of the states of a Markov process. Following the ideas of Menaldi et al., we transform the constrained problem into an unconstrained optimal stopping problem. The transformation replaces the original payoff by the value of a generalized barrier option. We also provide a Monte Carlo method to numerically calculate the option value for multidimensional Markov processes. We adapt the Longstaff–Schwartz algorithm to solve the stochastic Cauchy–Dirichlet problem related to the valuation problem of the barrier option along a set of simulated trajectories of the underlying Markov process.

6.
In this paper we introduce a Markov chain imbeddable vector of multinomial type and a Markov chain imbeddable variable of returnable type and discuss some of their properties. These concepts are extensions of the Markov chain imbeddable random variable of binomial type, which was introduced and developed by Koutras and Alexandrou (1995, Ann. Inst. Statist. Math., 47, 743–766). By using the results, we obtain the distributions and the probability generating functions of the numbers of occurrences of runs of a specified length, based on four different ways of counting, in a sequence of multi-state trials. Our results also yield the distributions of the corresponding waiting times.
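The imbedding idea behind this line of work can be sketched in the simplest binomial-type setting: track a Markov chain whose state is (number of completed runs, progress toward the next run) and apply one transition per trial. This is a minimal illustration of the technique for i.i.d. two-state trials with non-overlapping counting, not the paper's multinomial extension; the function name is our own.

```python
def run_count_distribution(n, k, p):
    """Distribution of the number of non-overlapping success runs of
    length k in n i.i.d. Bernoulli(p) trials, via Markov chain
    imbedding: dist[c][j] = P(c completed runs, j successes toward
    the next run)."""
    max_runs = n // k
    dist = [[0.0] * k for _ in range(max_runs + 1)]
    dist[0][0] = 1.0
    for _ in range(n):
        new = [[0.0] * k for _ in range(max_runs + 1)]
        for c in range(max_runs + 1):
            for j in range(k):
                m = dist[c][j]
                if m == 0.0:
                    continue
                new[c][0] += m * (1 - p)      # failure resets progress
                if j + 1 == k:
                    new[c + 1][0] += m * p    # run completed, count it
                else:
                    new[c][j + 1] += m * p    # run still building
        dist = new
    return [sum(row) for row in dist]         # marginal over progress

pmf = run_count_distribution(n=10, k=2, p=0.5)
```

For n = 2, k = 2, p = 0.5 the only way to get one run is success-success, so the distribution is (0.75, 0.25), which the recursion reproduces.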

7.
Finitary Markov processes are described in G. Morvai and B. Weiss, Prediction for discrete time series, Probability Theory and Related Fields 132 (2005), 1–12. The transition functions of finitary Markov processes are residually locally constant g-functions that can be extended by continuity to their maximal domain of definition. The study of their associated symbolic dynamics leads one to the D-shifts as introduced in W. Krieger, On g-functions for subshifts, Institute of Mathematical Statistics Lecture Notes-Monograph Series, Vol. 48, Dynamics & Stochastics, arXiv:math.DS/0608259, (2006), 306–316. We study the phenomena that can arise in residually locally constant and locally constant maximally defined g-functions on D-shifts, Markov shifts and synchronizing systems with respect to future measures and g-measures.

8.
Abstract

In this article, a class of strong limit theorems for the relative entropy density of an arbitrary stochastic sequence, expressed by inequalities, is obtained by comparing an arbitrary dependent distribution with the mth-order Markov distribution on a probability space. As corollaries, some Shannon–McMillan theorems for mth-order nonhomogeneous Markov information sources are obtained, and some known results for nonhomogeneous Markov information sources are generalized.

9.
Abstract  In this paper we study strongly continuous positive semigroups on particular classes of weighted continuous function spaces on a locally compact Hausdorff space X having a countable base. In particular, we characterize those positive semigroups which are the transition semigroups of suitable Markov processes. Some applications are also discussed.
Keywords: Positive semigroup, Markov transition function, Markov process, Weighted continuous function space, Degenerate second order differential operator
Mathematics Subject Classification (2000): 47D06, 47D07, 60J60

10.
Abstract

In this article, we solve a class of estimation problems, namely filtering, smoothing and detection, for a discrete time dynamical system with integer-valued observations. The observation processes we consider are Poisson random variables observed at discrete times, where the distribution parameter for each Poisson observation is determined by the state of a Markov chain. By appealing to a duality between the forward (in time) filter and its corresponding backward processes, we compute the dynamics satisfied by the unnormalized form of the smoother probability. These dynamics can be applied to construct algorithms typically referred to as fixed point smoothers, fixed lag smoothers, and fixed interval smoothers. M-ary detection filters are computed for two scenarios: one for the standard model parameter detection problem and the other for a jump Markov system.
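The filtering half of this setup can be sketched with a standard normalized forward recursion for a hidden Markov chain observed through Poisson counts. This is a generic illustration of the model class, not the paper's unnormalized smoother dynamics; the function name, transition matrix, and rates are assumptions.

```python
import math

def poisson_hmm_filter(counts, trans, rates, prior):
    """Forward (filtering) recursion for a hidden Markov chain observed
    through Poisson counts: each observation y_t is Poisson with rate
    rates[state].  Returns P(X_t = i | y_1..y_t) for each t."""
    n_states = len(rates)
    filt = []
    cur = prior[:]
    for y in counts:
        # Predict: one step of the hidden chain.
        pred = [sum(cur[i] * trans[i][j] for i in range(n_states))
                for j in range(n_states)]
        # Update: multiply by the Poisson likelihood, then normalize.
        lik = [math.exp(-rates[j]) * rates[j] ** y / math.factorial(y)
               for j in range(n_states)]
        post = [pred[j] * lik[j] for j in range(n_states)]
        z = sum(post)
        cur = [p / z for p in post]
        filt.append(cur)
    return filt

trans = [[0.95, 0.05], [0.10, 0.90]]
filtered = poisson_hmm_filter([0, 1, 7, 8, 0], trans,
                              rates=[0.5, 6.0], prior=[0.5, 0.5])
```

Small counts push the filtered probability toward the low-rate state and large counts toward the high-rate state, which is the behaviour the smoothers and detectors in the paper refine.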

11.
Abstract

We introduce the concepts of lumpability and commutativity of a continuous time discrete state space Markov process, and provide a necessary and sufficient condition for a lumpable Markov process to be commutative. Under suitable conditions we recover some of the basic quantities of the original Markov process from the jump chain of the lumped Markov process.
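The (strong) lumpability condition for a generator is easy to check numerically: every state in a block must have the same total rate into each other block. A minimal sketch, with a hypothetical 3-state generator chosen so that states 1 and 2 look identical from outside their block:

```python
def is_lumpable(Q, partition, tol=1e-9):
    """Check strong lumpability of a CTMC generator Q with respect to
    a partition of the state space: for every pair of distinct blocks,
    each state in the source block must have the same total rate into
    the target block."""
    for block in partition:
        for other in partition:
            if other is block:
                continue
            rates = [sum(Q[i][j] for j in other) for i in block]
            if max(rates) - min(rates) > tol:
                return False
    return True

# States 1 and 2 each send total rate 3.0 back to state 0,
# so the partition {0}, {1, 2} is lumpable.
Q = [[-2.0, 1.0, 1.0],
     [3.0, -4.0, 1.0],
     [3.0, 1.0, -4.0]]
lumpable = is_lumpable(Q, [[0], [1, 2]])
```

Perturbing any single off-diagonal rate (say, the rate from state 1 to state 0) breaks the equality and the check fails, which is the sense in which lumpability is a rigid structural property.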

12.
This paper deals with a continuous review (s,S) inventory system where arriving demands that find the system out of stock leave the service area and repeat their request after some random time. This assumption introduces a natural alternative to classical approaches based either on lost demand models or on backlogged models. The stochastic model formulation is based on a bidimensional Markov process, which is numerically solved to investigate the essential operating characteristics of the system. An optimal design problem is also considered. AMS subject classification: 90B05, 90B22.

13.
ABSTRACT

The asymptotic equipartition property is a basic theorem in information theory. In this paper, we study the strong law of large numbers for Markov chains in a single-infinite Markovian environment on a countable state space. As corollaries, we obtain the strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for this process. Finally, we give the asymptotic equipartition property of Markov chains in a single-infinite Markovian environment on a countable state space.

14.
We extend the central limit theorem for additive functionals of a stationary, ergodic Markov chain with normal transition operator due to Gordin and Lifšic, 1981 [A remark about a Markov process with normal transition operator, In: Third Vilnius Conference on Probability and Statistics 1, pp. 147–148] to continuous-time Markov processes with normal generators. As examples, we discuss random walks on compact commutative hypergroups as well as certain random walks on non-commutative, compact groups.

15.
Summary  Let D denote the generator of a continuous time positive recurrent Markov process with state space ℕ (or ℝ+). Sufficient conditions are given to imply the existence of ε > 0 such that if λ ≠ 0 is a point of the spectrum of D, considered as an operator on the L2 space of the equilibrium distribution, then Re(λ) ≤ −ε. A related result is given for discrete time Markov chains.

16.
Let k and m be positive integers with k ≥ m. The probability generating function of the waiting time for the first occurrence of k consecutive successes in a sequence of m-th order Markov dependent trials is given as a function of the conditional probability generating functions of the waiting time for the first occurrence of m consecutive successes. This provides an efficient algorithm for obtaining the probability generating function when k is large. In particular, in the case of independent trials a simple relationship between the geometric distribution of order k and the geometric distribution of order k−1 is obtained. This research was partially supported by the ISM Cooperative Research Program (2004-ISM-CRP-2006) and by a Grant-in-Aid for Scientific Research (C) of the JSPS (Grant Number 16500183).
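The independent-trials special case mentioned at the end of this abstract (the geometric distribution of order k) is easy to compute exactly by tracking the current success-run length as a state. A minimal sketch; the function name is our own and this is not the paper's m-th order Markov algorithm:

```python
def geometric_order_k_pmf(k, p, n_max):
    """pmf of the waiting time W for the first run of k consecutive
    successes in i.i.d. Bernoulli(p) trials (geometric distribution
    of order k).  State j = current success-run length; probability
    mass is absorbed into pmf[n] once the run reaches k at trial n."""
    q = 1 - p
    state = [1.0] + [0.0] * (k - 1)   # start with run length 0
    pmf = [0.0] * (n_max + 1)         # pmf[n] = P(W = n)
    for n in range(1, n_max + 1):
        new = [0.0] * k
        for j, m in enumerate(state):
            if m == 0.0:
                continue
            new[0] += m * q           # failure: run resets to 0
            if j + 1 == k:
                pmf[n] += m * p       # run of k completed at trial n
            else:
                new[j + 1] += m * p   # run grows by one
        state = new
    return pmf

pmf = geometric_order_k_pmf(k=2, p=0.5, n_max=10)
```

For k = 2 and p = 0.5 the first two masses are P(W = 2) = 0.25 (SS) and P(W = 3) = 0.125 (FSS), which the recursion reproduces.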

17.
We establish integral tests and laws of the iterated logarithm for the upper envelope of the future infimum of positive self-similar Markov processes and for increasing self-similar Markov processes at 0 and +∞. Our proofs are based on the Lamperti representation and time reversal arguments due to Chaumont, L. and Pardo, J.C. (Prépublication (L'université de Paris 6), 2005). These results extend laws of the iterated logarithm for the future infimum of Bessel processes due to Khoshnevisan, D., Lewis, T.M. and Li, W.V. (On the future infima of some transient processes, Probability Theory and Related Fields, 99, 337–360, 1994).

18.
We consider a {0,1}-valued m-th order stationary Markov chain. We study the occurrences of runs where two 1's are separated by at most/exactly/at least k 0's under the overlapping enumeration scheme, where k ≥ 0, and occurrences of scans (at least k1 successes in a window of length at most k, 1 ≤ k1 ≤ k) under both non-overlapping and overlapping enumeration schemes. We derive the generating functions of the first two types of runs. Under the conditions of (1) strong tendency towards success and (2) strong tendency towards reversing the state, we establish the convergence of the waiting times of the r-th occurrence of runs and scans to Poisson-type distributions. We establish the central limit theorem and law of the iterated logarithm for the numbers of runs and scans up to time n.

19.
The expected value and generating function of the number of overlapping occurrences of a pattern P in a Markov chain until the first occurrence of a member from a finite collection of patterns that start with P are obtained. A martingale technique is employed to address the problem.

20.
This paper considers a stable GI/GI/1 queue with a regularly varying service time distribution. We derive the tail behaviour of the integral of the queue length process Q(t) over one busy period. We show that the occurrence of a large integral is related to the occurrence of a large maximum of the queue length process over the busy period, and we exploit asymptotic results for this variable. We also prove a central limit theorem for ∫0^t Q(s) ds. AMS subject classification: 60K25, 90B22.
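The quantity studied here, the integral of Q(t) over a busy period, can be simulated event by event: between events Q is constant, so the integral is a sum of Q times the inter-event gap. A hypothetical sketch for the M/M/1 special case of GI/GI/1 (exponential interarrival and service times); the function name and rates are our own.

```python
import random

def busy_period_area(lam, mu, seed=0):
    """Simulate one busy period of an M/M/1 queue (arrival rate lam,
    service rate mu, lam < mu for stability) and return the pair
    (busy period length, integral of the queue length Q(t) over it).
    The busy period starts with the arrival that finds the queue empty."""
    rng = random.Random(seed)
    t, q, area = 0.0, 1, 0.0
    while q > 0:
        rate = lam + mu
        dt = rng.expovariate(rate)     # time to next arrival or departure
        area += q * dt                 # Q is constant between events
        t += dt
        if rng.random() < lam / rate:
            q += 1                     # arrival
        else:
            q -= 1                     # departure
    return t, area

length, area = busy_period_area(lam=0.5, mu=1.0, seed=1)
```

Since Q ≥ 1 throughout the busy period, the integral always dominates the period length; the paper's contribution is the tail behaviour of this integral when service times are regularly varying, which a light-tailed M/M/1 simulation cannot exhibit.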
