Similar Literature
20 similar records found.
1.
In this paper we develop a class of probabilistic, continuous-time but discretized-state-space decompositions of a multivariate generalized diffusion process. The decomposition is novel and, in particular, allows one to construct families of mimicking processes for such continuous-state, continuous-time diffusions in the form of a discrete-state, continuous-time Markov chain representation. We present this decomposition and study its discretization properties from several perspectives. The decomposition both gives insight into the dependence structures induced locally in the state space by the generalized diffusion process and admits computationally efficient representations for evaluating functionals of generalized multivariate diffusion processes, based on a simple rank-one tensor approximation of the exact representation. In particular, we investigate semimartingale decompositions, approximation, and the martingale representation for multidimensional correlated Markov processes. A new interpretation of the dependence among processes is given using the martingale approach. We show that, in both continuous and discrete space, a multidimensional correlated generalized diffusion can be represented as a linear combination of processes arising from the decomposition of the original multidimensional semimartingale. This result not only reconciles with the existing theory of diffusion approximations and decompositions, but also yields a general representation of infinitesimal generators for multidimensional generalized diffusions and, as we demonstrate, for the specification of copula density dependence structures. It provides an immediate representation of the approximate weak solution of correlated stochastic differential equations. Finally, we establish convergence results for the proposed multidimensional semimartingale decomposition approximations.
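As a rough, one-dimensional illustration of approximating a diffusion by a discrete-state but continuous-time Markov chain, the sketch below builds a locally consistent (Kushner-type) generator on a grid and exponentiates it to obtain transition probabilities; the drift, volatility, grid and boundary handling are illustrative assumptions, not the paper's multivariate rank-one tensor construction.

```python
# Sketch: a continuous-time, discrete-state Markov chain that mimics a 1-D
# diffusion dX = mu(X) dt + sigma(X) dW on a grid, via upwind rates.
import numpy as np
from scipy.linalg import expm

mu = lambda x: -x            # assumed drift (OU-type)
sigma2 = lambda x: 1.0       # assumed squared volatility

x = np.linspace(-3.0, 3.0, 61)      # state grid
h = x[1] - x[0]
n = len(x)
Q = np.zeros((n, n))                # CTMC generator
for i in range(1, n - 1):
    up = sigma2(x[i]) / (2 * h**2) + max(mu(x[i]), 0.0) / h
    dn = sigma2(x[i]) / (2 * h**2) + max(-mu(x[i]), 0.0) / h
    Q[i, i + 1], Q[i, i - 1] = up, dn
    Q[i, i] = -(up + dn)
# reflecting boundaries keep the generator conservative (rows sum to zero)
Q[0, 1], Q[0, 0] = sigma2(x[0]) / (2 * h**2), -sigma2(x[0]) / (2 * h**2)
Q[-1, -2], Q[-1, -1] = sigma2(x[-1]) / (2 * h**2), -sigma2(x[-1]) / (2 * h**2)

P_t = expm(Q * 0.5)                 # transition kernel over horizon t = 0.5
print(P_t[n // 2].sum())            # each row sums to 1 (up to rounding)
```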

2.
The MAPK pathway is one of the best-known systems in oncogene research in eukaryotes owing to its important role in the life of the cell. In this study, we perform parameter estimation for a realistic MAPK system using western blotting data. For inference, we use the modified diffusion bridge algorithm with a data augmentation technique, modelling the realistically complex system via the Euler–Maruyama approximation. This approximation, the discretized version of the diffusion model, can be seen as an alternative OR approach to the (hidden) Markov chain method for stochastic modelling of biochemical systems in which the data may be fully or partially observed and the time-course measurements are thought to be collected at small time steps. The modified diffusion bridge technique, based on Markov chain Monte Carlo (MCMC) methods, enables us to accurately estimate the model parameters, i.e. the stochastic reaction rate constants of the diffusion model, in high-dimensional systems, despite a loss in computational efficiency. In the estimation of the parameters, the complexity of the decision-making problems in the MCMC updates at different stages gives rise to dependency challenges, which we resolve by checking the singularity of the system at every update stage. In modelling, we consider approaches both with and without measurement error in all states. To evaluate the performance of both models, we first apply them to a toy system. We observe that the model with measurement error outperforms the model without measurement error in terms of the mixing of the MCMC runs and the accuracy of the estimates, and it is therefore used for parameter estimation in the realistic MAPK pathway. The results suggest that the proposed approach is a promising alternative for parameter inference via different OR techniques in systems biology.
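A minimal sketch of the Euler–Maruyama discretization that underlies this kind of inference, applied to an assumed one-dimensional degradation reaction rather than the MAPK system; under this scheme each increment is conditionally Gaussian, which is the property the modified diffusion bridge MCMC exploits.

```python
# Euler–Maruyama sketch for a one-dimensional reaction-rate SDE; the drift
# and diffusion below (a single degradation reaction with rate theta) are an
# illustrative assumption, not the MAPK model of the paper.
import numpy as np

rng = np.random.default_rng(0)
theta, dt, n_steps = 0.3, 0.05, 200
x = np.empty(n_steps + 1)
x[0] = 50.0                                      # initial copy number
for k in range(n_steps):
    drift = -theta * x[k]                        # deterministic part of the kinetics
    diff = np.sqrt(max(theta * x[k], 1e-12))     # intrinsic noise scale
    x[k + 1] = x[k] + drift * dt + diff * np.sqrt(dt) * rng.normal()
# Under this discretization x[k+1] | x[k] is Gaussian, which is what the
# diffusion-bridge MCMC uses when imputing latent paths between observations.
print(x[-1])
```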

3.
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves the mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
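The following toy sketch illustrates the idea of using a fixed deterministic (Gaussian) approximation as an independence proposal in Metropolis–Hastings, here for a scalar double-well target rather than the paper's posterior over diffusion paths; the target temperature and proposal parameters are assumptions. With a proposal concentrated on one well the sampler struggles to cross modes, which mirrors the multimodality issue noted in the abstract.

```python
# Toy independence Metropolis–Hastings: a fixed Gaussian approximation plays
# the role of the (variational) proposal for a scalar double-well target.
import numpy as np

rng = np.random.default_rng(1)
log_target = lambda x: -(x**2 - 1.0)**2 / 0.5          # double-well potential
mu_q, sd_q = 1.0, 0.4                                   # assumed Gaussian proposal (one well)
log_q = lambda x: -0.5 * ((x - mu_q) / sd_q)**2         # unnormalized proposal log-density

x, chain = 1.0, []
for _ in range(5000):
    y = rng.normal(mu_q, sd_q)                          # independence proposal
    log_alpha = (log_target(y) - log_target(x)) + (log_q(x) - log_q(y))
    if np.log(rng.uniform()) < log_alpha:               # accept/reject step
        x = y
    chain.append(x)
print(np.mean(chain), np.std(chain))                    # chain stays near the proposed well
```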

4.
In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the resulting Markov chain is a region of the reconstructed state space in which the dynamics is approximated by a specific piecewise linear map obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series generated by the Lorenz system is included.
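A simplified sketch of the partition-and-count step: a scalar chaotic series (the logistic map standing in for the Lorenz-derived series, and with no neural network involved) is binned into regions and the empirical transition matrix of the resulting finite Markov chain is estimated.

```python
# Turning a scalar chaotic series into a finite Markov chain by partitioning
# the state space and counting transitions between the partition cells.
import numpy as np

x = np.empty(20000)
x[0] = 0.3
for k in range(len(x) - 1):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])      # chaotic logistic map

n_states = 10
states = np.minimum((x * n_states).astype(int), n_states - 1)   # bin index per sample
P = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1                              # count observed transitions
P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)   # row-stochastic estimate
print(np.round(P, 2))
```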

5.
In this paper, we study a reflected Markov-modulated Brownian motion with two-sided reflection, in which the drift, the diffusion coefficient and the two boundaries are (jointly) modulated by a finite-state, irreducible, continuous-time Markov chain. The goal is to compute the stationary distribution of this Markov process, which, in addition to the complication of having stochastic boundaries, can also include jumps at the state-change epochs of the underlying Markov chain because of the boundary changes. We give the general theory and then specialize to the case where the underlying Markov chain has two states.
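A crude Euler-type simulation sketch of such a process, with a two-state modulating chain and state-dependent drift, volatility and boundaries (all numerical values are assumptions); the empirical histogram of the sampled values approximates the stationary distribution that the paper computes analytically.

```python
# Simulation of a Markov-modulated Brownian motion reflected into [lo(Z), hi(Z)].
import numpy as np

rng = np.random.default_rng(2)
rates = np.array([[-1.0, 1.0], [2.0, -2.0]])     # generator of the environment chain
mu, sig = np.array([0.5, -0.8]), np.array([1.0, 0.6])
lo, hi = np.array([0.0, 0.2]), np.array([2.0, 1.5])

dt, T = 1e-3, 50.0
z, x, samples = 0, 0.5, []
for _ in range(int(T / dt)):
    if rng.uniform() < -rates[z, z] * dt:        # environment jump
        z = 1 - z
        x = min(max(x, lo[z]), hi[z])            # possible jump of X when the band moves
    x += mu[z] * dt + sig[z] * np.sqrt(dt) * rng.normal()
    x = min(max(x, lo[z]), hi[z])                # two-sided reflection
    samples.append((z, x))
print(np.mean([v for _, v in samples]))          # summary of the empirical law
```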

6.
Herein, we consider direct Markov chain approximations to the Duncan–Mortensen–Zakai equations for nonlinear filtering problems on regular, bounded domains. For clarity of presentation, we restrict our attention to reflecting diffusion signals with symmetrizable generators. Our Markov chains are constructed by employing a wide-band observation noise approximation, dividing the signal state space into cells, and utilizing an empirical measure process estimate. The upshot of our approximation is an efficient, effective algorithm for implementing such filtering problems. We prove that our approximations converge to the desired conditional distribution of the signal given the observations. Moreover, we use simulations to compare the computational efficiency of this new method with the previously developed branching particle filter and interacting particle filter methods. The Markov chain method is demonstrated to outperform both particle filter methods on our simulated test problem, which is motivated by the fish farming industry.
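For intuition, the sketch below runs an HMM-style grid filter in which a reflecting random walk on cells stands in for the signal and noisy observations of the cell centre reweight the cell probabilities; it is far simpler than the paper's Duncan–Mortensen–Zakai construction and all numerical choices are assumptions.

```python
# Minimal cell-based filter: predict with a reflecting random-walk kernel,
# then apply a Bayes update with a Gaussian observation likelihood.
import numpy as np

rng = np.random.default_rng(5)
centres = np.linspace(0.0, 1.0, 21)              # cell centres on [0, 1]
n = len(centres)
P = np.zeros((n, n))                             # reflecting random walk on cells
for i in range(n):
    for j in (max(i - 1, 0), min(i + 1, n - 1)):
        P[i, j] += 0.5

sig_obs, x = 0.1, n // 2
pi = np.full(n, 1.0 / n)                         # filter = conditional law of the signal
for _ in range(50):
    x = max(0, min(n - 1, x + rng.choice([-1, 1])))       # true (hidden) signal step
    y = centres[x] + sig_obs * rng.normal()               # noisy observation
    pi = pi @ P                                           # prediction step
    pi *= np.exp(-0.5 * ((y - centres) / sig_obs) ** 2)   # Bayes update
    pi /= pi.sum()
print(centres[pi.argmax()], centres[x])          # filter mode vs. true state
```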

7.
In this paper, we extend the Markov-modulated reflected Brownian motion model discussed in [1] to a Markov-modulated reflected jump diffusion process, where the jump component is described as a Markov-modulated compound Poisson process. We compute the joint stationary distribution of the bivariate Markov jump process. An abstract example with two states is given to illustrate how the stationary equation, described as a system of ordinary integro-differential equations, is solved by choosing appropriate boundary conditions. As a special case, we also give the stationary distribution for this Markov jump process without Markovian regime switching.

8.
Stochastic networks with time-varying arrival and service rates and routing structure are studied. The time variations are governed by, in addition to the state of the system, two independent finite-state Markov processes X and Y. The transition times of X are significantly smaller than typical inter-arrival and processing times, whereas the reverse is true for the Markov process Y. By introducing a suitable scaling parameter one can model such a system using a hierarchy of time scales. Diffusion approximations for such multiscale systems are established under a suitable heavy traffic condition. In particular, it is shown that, under certain conditions, properly normalized buffer content processes converge weakly to a reflected diffusion. The drift and diffusion coefficients of this limit model are functions of the state process, the invariant distribution of X, and a finite-state Markov process that is independent of the driving Brownian motion.

9.
The finite Markov chain imbedding technique has been successfully applied in various fields to find the exact or approximate distributions of runs and patterns under independent and identically distributed or Markov-dependent trials. In this paper, we derive a new recursive equation for the distribution of the scan statistic using the finite Markov chain imbedding technique. We also address the problem of obtaining the transition probabilities of the imbedded Markov chain by introducing a notion termed double finite Markov chain imbedding, in which the transition probabilities are themselves obtained using the finite Markov chain imbedding technique. Applications to a random permutation model in chemistry and to the coupon collector's problem are given to illustrate the idea.
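A textbook instance of the finite Markov chain imbedding idea (the ordinary run statistic, not the paper's scan statistic or its double imbedding): the probability that a success run of length k occurs within n i.i.d. Bernoulli(p) trials is read off an imbedded chain with an absorbing state.

```python
# Finite Markov chain imbedding for the classic success-run statistic.
import numpy as np

def prob_run(n, k, p):
    # states 0..k-1 track the current run length; state k is absorbing
    M = np.zeros((k + 1, k + 1))
    for s in range(k):
        M[s, 0] = 1 - p          # a failure resets the run
        M[s, s + 1] = p          # a success extends it (state k-1 -> absorbed)
    M[k, k] = 1.0
    dist = np.zeros(k + 1)
    dist[0] = 1.0
    dist = dist @ np.linalg.matrix_power(M, n)
    return dist[k]               # mass absorbed = P(run of length k occurred)

print(prob_run(n=20, k=3, p=0.5))   # ≈ 0.787
```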

10.
A continuous semi-Markov process with a segment as its range of values is considered. The process coincides with a diffusion process inside the segment, i.e., up to the first hitting time of the boundary of the segment and at any time when the process leaves the boundary. The class of such processes comprises Markov processes with reflection at the boundaries (instantaneous or delayed) and semi-Markov processes with intervals of constancy at a boundary. We derive conditions for the existence of such a process in terms of a semi-Markov transition generating function on the boundary. The method of imbedded alternating renewal processes is applied to find a stationary distribution of the process. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 351, 2007, pp. 284–297.

11.
In this paper shift ergodicity and related topics are studied for certain stationary processes. We first present a simple proof that every stationary Markov process is a generalized convex combination of stationary ergodic Markov processes. A direct consequence is that a stationary distribution of a Markov process is extremal if and only if the corresponding stationary Markov process is time ergodic, and every stationary distribution is a generalized convex combination of such extremal ones. We then consider space ergodicity for spin flip particle systems. We prove space shift ergodicity and mixing for certain extremal invariant measures for a class of spin systems that includes most of the typical models, such as the Voter Model and the Contact Model. As a consequence of these results we see that for such systems, under each of those extremal invariant measures, the space and time means of an observable coincide, an important phenomenon in statistical physics. Our results provide partial answers to certain interesting problems in spin systems.

12.
In this paper we address the problem of efficiently deriving the steady-state distribution for a continuous-time Markov chain (CTMC) S evolving in a random environment E. The process underlying E is also a CTMC, and S is called a Markov-modulated process. Markov-modulated processes have been widely studied in the literature since they are applicable whenever an environment influences the behaviour of a system; for instance, the quality of a wireless link may depend on the state of random factors such as the intensity of the noise in the environment. In this paper we study the class of Markov-modulated processes that exhibit a separable, product-form stationary distribution. We show that several models proposed in the literature can be studied by applying the Extended Reversed Compound Agent Theorem (ERCAT), and new product-forms are also derived. We also address the question of whether ERCAT is necessary for product-form and show a meaningful example of a product-form not derivable via ERCAT.
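As a brute-force baseline for what product-form results avoid, the sketch below builds the joint generator of a small Markov-modulated M/M/1/K queue (the environment switches the arrival rate; all rates and the capacity are assumptions) and solves the global balance equations numerically.

```python
# Joint generator of a Markov-modulated M/M/1/K queue, solved for pi Q = 0.
import numpy as np

K = 5                                   # queue capacity (assumption)
lam = np.array([1.0, 3.0])              # arrival rate per environment state
mu, a, b = 2.0, 0.5, 0.8                # service rate, environment switch rates

n_env, n_q = 2, K + 1
N = n_env * n_q
Q = np.zeros((N, N))
idx = lambda e, q: e * n_q + q
for e in range(n_env):
    for q in range(n_q):
        if q < K:  Q[idx(e, q), idx(e, q + 1)] += lam[e]       # arrival
        if q > 0:  Q[idx(e, q), idx(e, q - 1)] += mu           # service completion
        Q[idx(e, q), idx(1 - e, q)] += a if e == 0 else b      # environment switch
np.fill_diagonal(Q, -Q.sum(axis=1))

# stationary distribution: pi Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(N)])
pi = np.linalg.lstsq(A, np.append(np.zeros(N), 1.0), rcond=None)[0]
print(pi.reshape(n_env, n_q).round(4))
```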

13.
We study necessary and sufficient conditions for a finite ergodic Markov chain to converge to its stationary distribution in a finite number of transitions. Using this result, we describe the class of Markov chains that attain the stationary distribution in a finite number of steps, independent of the initial distribution. We then exhibit a queueing model that has a Markov chain embedded at the points of regeneration that falls within this class. Finally, we examine the class of continuous-time Markov processes whose embedded Markov chain possesses this property of rapid convergence and find that, when the distribution of sojourn times is independent of the state, the distribution of the system at time t can be computed in the form of a simple closed expression.
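A minimal sketch of the finite-step convergence property: a chain whose transition matrix has identical rows reaches its stationary distribution after a single transition from any initial distribution; the matrix is an illustrative assumption.

```python
# A transition matrix with identical rows hits stationarity in one step.
import numpy as np

pi = np.array([0.2, 0.5, 0.3])
P = np.tile(pi, (3, 1))                 # every row equals pi

for x0 in (np.array([1.0, 0, 0]), np.array([0, 0, 1.0])):
    print(x0 @ P)                       # equal to pi after a single transition
print(np.allclose(np.linalg.matrix_power(P, 5), P))   # P^n = P for all n >= 1
```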

14.
15.
In order to exploit mean-reverting behavior in the price differential between two markets, one can use unit root tests to determine which pairs of assets appear to exhibit mean reversion. Since nonlinear mean reversion shares the same meaning as local stationarity, this paper proposes a Bayesian hypothesis test to detect the presence of a local unit root in the mean equation using Markov switching GARCH models. The model incorporates a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and the conditional volatility of financial time series. To implement the test, we propose a numerical approximation of the marginal likelihoods for the posterior odds using an adaptive Markov chain Monte Carlo scheme. A simulation study demonstrates that the approximate Bayesian test performs properly. The method is illustrated using the daily basis between the FTSE 100 Index and Index Futures.
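As a deliberately simpler substitute for the paper's Bayesian test, the sketch below forms the basis (spread) between two simulated cointegrated series and screens it for mean reversion with a classical augmented Dickey–Fuller test from statsmodels; the data-generating process is an assumption.

```python
# Mean-reversion screen on a simulated basis using a classical ADF unit-root
# test (a substitute for the paper's Bayesian MSGARCH-based test).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
n = 1000
common = np.cumsum(rng.normal(size=n))            # shared random-walk factor
spot = common + rng.normal(scale=0.5, size=n)     # "index" price proxy
fut = common + rng.normal(scale=0.5, size=n)      # "futures" price proxy
basis = spot - fut                                # mean-reverting by construction

stat, pvalue, *_ = adfuller(basis)
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f}")  # small p-value => reject unit root
```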

16.
Diffusion Approximations for Queues with Markovian Bases
Consider a base family of state-dependent queues whose queue-length process can be formulated as a continuous-time Markov process. In this paper, we develop a piecewise-constant diffusion model for an enlarged family of queues, each member of which has arrival and service distributions generalized from those of the associated queue in the base. The enlarged family covers many standard queueing systems with finite waiting spaces, finite sources, and so on. We provide a unifying explicit expression for the steady-state distribution, which is consistent with the exact result when the arrival and service distributions are those of the base. The model is both an extension and a refinement of the M/M/s-consistent diffusion model for the GI/G/s queue developed by Kimura [13], where the base was a birth-and-death process. As a typical base, we still focus on birth-and-death processes, but we also consider a class of continuous-time Markov processes with lower-triangular infinitesimal generators.
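For reference, the sketch below carries out the standard detailed-balance computation of the steady state for a birth-and-death base process, here an M/M/s/K queue with assumed rates; this is the exact result the diffusion model is designed to be consistent with.

```python
# Steady state of a birth-and-death queue-length process (M/M/s/K example).
import numpy as np

lam, mu, s, K = 3.0, 1.0, 4, 20
birth = lambda n: lam if n < K else 0.0
death = lambda n: min(n, s) * mu

w = np.ones(K + 1)
for n in range(1, K + 1):
    w[n] = w[n - 1] * birth(n - 1) / death(n)     # detailed-balance ratios
p = w / w.sum()                                   # steady-state distribution
print(p.argmax(), p.sum())                        # most likely queue length, total mass 1
```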

17.
We prove that if X is a finite-valued stationary Markov chain with strictly positive transition probabilities, then for any natural number p there exists a continuum of finite-valued non-Markovian processes with positive entropy that have the same p-dimensional marginal distributions as X. In contrast, for an irrational rotation and an essentially bounded real measurable function f with no zero Fourier coefficient, on the unit circle with normalized Lebesgue measure, the process generated by f along the rotation is uniquely determined by its three-dimensional distributions within the class of ergodic processes. We also give a family of Gaussian non-Markovian dynamical systems for which the symbolic dynamics associated with the time-zero partition has the two-dimensional distributions of a reversible mixing Markov chain.

18.
An interactive Markov chain is a population process in which each individual's transitions depend on the population's distribution over the various states. We investigate a certain aspect of the dynamics of such processes for a fixed population size. Conditions for convergence to steady state regardless of population size are provided.
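A small simulation sketch of an interactive Markov chain with two states, where each individual's switching probability depends on the current population distribution through an imitation-style rule chosen purely for illustration.

```python
# Interactive Markov chain: transition probabilities depend on the
# population's current distribution over the two states.
import numpy as np

rng = np.random.default_rng(3)
N, T = 200, 500
state = rng.integers(0, 2, size=N)          # two states, random start
for _ in range(T):
    frac1 = state.mean()                     # current population distribution
    p01 = 0.05 + 0.30 * frac1                # 0 -> 1 more likely when 1 is common
    p10 = 0.05 + 0.30 * (1 - frac1)
    u = rng.uniform(size=N)
    switch = np.where(state == 0, u < p01, u < p10)
    state = np.where(switch, 1 - state, state)
print(state.mean())                          # near the balanced steady-state level 0.5
```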

19.
The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied in two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second it is governed by a differential equation with an underlying parameter sequence characterized by a continuous-time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time, as well as the joint distribution and the autocorrelation coefficient at different times.
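A minimal stochastic-approximation sketch in the spirit of the first setting: the mean of X(t) at a fixed time is estimated recursively with a Robbins–Monro step size for a linear ODE with a uniformly distributed random parameter; the dynamics, parameter law and target time are assumptions.

```python
# Recursive (Robbins–Monro) estimate of E[X(t)] for dX/dt = -theta * X,
# theta random, using repeated independent draws of the parameter.
import numpy as np

rng = np.random.default_rng(4)
t, x0 = 1.0, 1.0
m = 0.0                                      # running estimate of E[X(t)]
for k in range(1, 20001):
    theta = rng.uniform(0.5, 1.5)            # random system parameter
    x_t = x0 * np.exp(-theta * t)            # closed-form solution of the ODE
    m += (x_t - m) / k                       # stochastic-approximation update
print(m)                                     # compare: exact value ≈ 0.383
```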

20.
In this paper, we derive an approximation for the throughput of TCP Compound connections under random losses. Throughput expressions for TCP Compound under a deterministic loss model exist in the literature; these are obtained by assuming that the window sizes are continuous, i.e., by assuming fluid behavior. We first validate this model theoretically: we show that under the deterministic loss model the window evolution of TCP Compound is asymptotically periodic and independent of the initial window size. We then consider the case when packets are lost randomly and independently of each other and discuss Markov chain models for analyzing TCP performance in this scenario. Using insights from the deterministic loss model, we obtain an appropriate scaling for the window size process and show that these scaled processes, indexed by the packet error rate p, converge to a limit Markov chain process as p goes to 0. We show the existence and uniqueness of the stationary distribution for this limit process. Using the stationary distribution of the limit process, we obtain approximations for the throughput of TCP Compound under random losses when packet error rates are small. We compare our results with ns2 simulations, which show a good match and a better approximation than the fluid model at low p.
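A hedged per-round simulation sketch of a loss-based window process under i.i.d. packet loss, with a concave increase and multiplicative decrease; the increase rule and constants are assumptions for illustration and are not claimed to be TCP Compound's actual parameters, and the delay-based component is ignored.

```python
# Per-round window evolution under i.i.d. per-packet loss with rate p:
# concave increase w += alpha * w**k, multiplicative decrease on loss.
import numpy as np

rng = np.random.default_rng(6)
alpha, k, beta, p = 0.125, 0.75, 0.5, 0.01   # illustrative constants
w, sent, rounds = 10.0, 0.0, 200000
for _ in range(rounds):
    sent += w
    if rng.uniform() < 1.0 - (1.0 - p) ** w:  # at least one loss in the round
        w = max(1.0, (1.0 - beta) * w)
    else:
        w += alpha * w ** k
print(sent / rounds)                          # average throughput in packets per round
```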
