20 similar documents found; search time 37 ms.
1.
2.
3.
This paper studies Markov chains in Markovian environments. Using properties of the associated bivariate Markov chain, we obtain several estimates of the probability that a Markov chain in a Markovian environment returns to small cylinder sets.
4.
5.
The strong law of large numbers for functions of Markov chains in a Markovian environment with countable state space
We discuss the relationship between bivariate Markov chains and Markov chains in random environments. On this basis, we study the strong law of large numbers for functions of Markov chains in a Markovian environment with a discrete parameter, and give sufficient conditions imposed directly on the chain and on the sample functions of the process.
6.
There are various importance sampling schemes for estimating rare event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The results from the experiments show considerable improvements in both the running time of the algorithm and the variance of the estimator.
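As a minimal illustration of the idea behind such estimators, the sketch below estimates a Gaussian tail probability by importance sampling with a shifted sampling distribution, and reports the relative error used above as a performance measure. The target event, threshold, and sample size are illustrative choices, not taken from the paper (which treats reliability and queueing models, not this toy problem).

```python
import math
import random

def is_estimate(threshold, n, rng):
    # Sample from N(threshold, 1) and reweight each hit by the likelihood
    # ratio phi(z) / phi(z - threshold) = exp(-threshold*z + threshold**2/2)
    weights = []
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)
        w = math.exp(-threshold * z + threshold**2 / 2) if z > threshold else 0.0
        weights.append(w)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / (n - 1)
    rel_err = math.sqrt(var / n) / mean   # relative error of the estimator
    return mean, rel_err

# Rare event {Z > 4} for Z ~ N(0, 1); the true value is about 3.17e-5,
# far too small for crude Monte Carlo with only 10_000 samples.
p_hat, rel_err = is_estimate(4.0, 10_000, random.Random(0))
```

With the same budget, crude Monte Carlo would typically see zero hits, while the reweighted estimator attains a relative error of a few percent.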
7.
Andrea Clementi, Pierluigi Crescenzi, Carola Doerr, Pierre Fraigniaud, Francesco Pasquale, Riccardo Silvestri 《Random Structures and Algorithms》2016,48(2):290-312
We analyze randomized broadcast in dynamic networks modeled as edge-Markovian evolving graphs. The most realistic range of edge-Markovian parameters yields sparse and disconnected graphs. We prove that, in this setting, the "push" protocol completes with high probability in optimal logarithmic time. © 2015 Wiley Periodicals, Inc. Random Struct. Alg., 48, 290–312, 2016
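A rough simulation of this setting can be sketched as follows: the graph evolves by independent per-edge birth/death probabilities (the edge-Markovian dynamics), and each informed node pushes the message to one random neighbour per round. The network size and the birth/death rates below are hypothetical values chosen only so a run finishes quickly; they are not the parameter range analysed in the paper.

```python
import random

def push_on_edge_markovian(n, p_birth, p_death, rng, max_rounds=10_000):
    # Edge-Markovian dynamics: each absent edge appears w.p. p_birth and
    # each present edge disappears w.p. p_death, independently per round.
    edges = set()
    informed = {0}                       # node 0 starts with the message
    for t in range(1, max_rounds + 1):
        new_edges = set()
        for i in range(n):
            for j in range(i + 1, n):
                if (i, j) in edges:
                    if rng.random() >= p_death:
                        new_edges.add((i, j))
                elif rng.random() < p_birth:
                    new_edges.add((i, j))
        edges = new_edges
        # push step: every informed node forwards to one random neighbour
        newly = set()
        for u in list(informed):
            nbrs = [v for v in range(n)
                    if v != u and (min(u, v), max(u, v)) in edges]
            if nbrs:
                newly.add(rng.choice(nbrs))
        informed |= newly
        if len(informed) == n:
            return t
    return None

rounds = push_on_edge_markovian(30, 0.05, 0.5, random.Random(1))
```

Since each informed node pushes to at most one neighbour, the informed set at most doubles per round, so broadcasting to n nodes needs at least log2(n) rounds, matching the logarithmic lower bound mentioned above.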
8.
In this paper, we introduce the population-size-dependent generalized multitype branching process. This is a Markovian model that allows us to study homogeneous multitype branching processes in a unified way. The basic properties of this model, the transitions between its states, and the existence of a stationary limiting distribution are investigated. Finally, we apply the obtained results to a new controlled multitype branching process.
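A single-type, population-size-dependent branching process (far simpler than the multitype model above) can be sketched in a few lines. The offspring law Poisson(2 / (1 + z/K)) and the constant K are invented for illustration: reproduction is supercritical when the population is small and subcritical when it is large.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method for sampling Poisson(lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def psd_branching(z0, generations, rng, K=50.0):
    # Each of the z individuals reproduces with mean 2 / (1 + z/K),
    # so the population drifts toward the level where the mean is 1.
    z = z0
    history = [z]
    for _ in range(generations):
        if z == 0:
            break                      # extinction is absorbing
        mean = 2.0 / (1.0 + z / K)
        z = sum(poisson(mean, rng) for _ in range(z))
        history.append(z)
    return history

hist = psd_branching(5, 30, random.Random(3))
```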
9.
Revindra M. Phatarfod 《Stochastic Processes and their Applications》1982,13(3):279-292
It is known that the main difficulty in applying the Markovian analogue of Wald's Identity is the presence, in the Identity, of the last state variable before the random walk is terminated. In this paper we show that this difficulty can be overcome if the underlying Markov chain has a finite state space. The absorption probabilities thus obtained are used, by employing a duality argument, to derive time-dependent and limiting probabilities for the depletion process of a dam with Markovian inputs. The second problem considered here is that of a non-homogeneous but cyclic Markov chain. An analogue of Wald's Identity is obtained for this case, and is used to derive time-dependent and limiting probabilities for the depletion process with inputs forming a non-homogeneous (cyclic) Markov chain.
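For a small finite chain, absorption probabilities of the kind the paper derives via Wald's Identity can be checked numerically without it. The sketch below uses plain value iteration on a made-up symmetric gambler's-ruin chain, where the exact answer is h(i) = i/3; it is a numerical cross-check, not the paper's method.

```python
def absorption_probability(P, target, other_absorbing, iters=5000):
    # Iterate h(i) <- sum_j P[i][j] h(j), holding the boundary fixed at
    # h(target) = 1 and h = 0 on the other absorbing states.
    n = len(P)
    h = [1.0 if i == target else 0.0 for i in range(n)]
    for _ in range(iters):
        h = [1.0 if i == target else
             0.0 if i in other_absorbing else
             sum(P[i][j] * h[j] for j in range(n))
             for i in range(n)]
    return h

# Symmetric gambler's ruin on {0, 1, 2, 3}: states 0 and 3 absorbing
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
h = absorption_probability(P, target=3, other_absorbing={0})
```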
10.
The asymptotic equipartition property is a basic theorem in information theory. In this paper, we study the strong law of large numbers of Markov chains in a single-infinite Markovian environment on a countable state space. As a corollary, we obtain the strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for this process. Finally, we give the asymptotic equipartition property of Markov chains in a single-infinite Markovian environment on a countable state space.
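The statement about frequencies of occurrence of states can be illustrated for an ordinary finite Markov chain (the single-infinite Markovian environment of the paper is beyond a few lines). The two-state transition matrix below is a made-up example whose stationary distribution, from solving pi = pi P, is (2/3, 1/3); by the strong law of large numbers the empirical visit frequencies converge to it.

```python
import random

def visit_frequencies(P, steps, rng, start=0):
    # Simulate the chain and record the fraction of time spent in each state
    counts = [0] * len(P)
    state = start
    for _ in range(steps):
        counts[state] += 1
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = j
                break
    return [c / steps for c in counts]

P = [[0.9, 0.1],
     [0.2, 0.8]]
freq = visit_frequencies(P, 200_000, random.Random(42))
```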
11.
A retrial queueing system with a batch Markovian arrival process and semi-Markovian service is investigated. We suppose that the intensity of retrials depends linearly on the number of repeated calls. The distribution of the number of calls in the system is the subject of research. Asymptotically quasi-Toeplitz two-dimensional Markov chains are introduced and applied to solve the problem.
12.
D. Racoceanu, A. Elmoudni, M. Ferney, S. Zerhouni 《Mathematical and Computer Modelling of Dynamical Systems: Methods, Tools and Applications in Engineering and Related Sciences》2013,19(3):199-229
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension, so a reduced model that does not sacrifice significant accuracy is very attractive. The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we present a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once entered. The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property. The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of steady-state probability; the two-time-scale property of the chain then enables us to make the assumption that gives the reduction. The ergodic class is thus reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it remains a Markov chain.
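A crude sketch of the reduction step, on a small made-up chain: compute the stationary distribution, order the states by steady-state probability, keep the strongest part, and renormalise the kept rows so the reduced system is again stochastic. This ignores the two-time-scale analysis that justifies the approximation in the paper.

```python
def stationary(P, iters=5000):
    # Power iteration pi <- pi P for the stationary distribution
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def reduce_chain(P, keep):
    # Restrict P to the kept states and renormalise each row, so the
    # reduced system is again a stochastic matrix (a Markov chain).
    sub = [[P[i][j] for j in keep] for i in keep]
    return [[x / sum(row) for x in row] for row in sub]

# Made-up 3-state chain in which state 2 carries little steady-state mass
P = [[0.70, 0.28, 0.02],
     [0.30, 0.68, 0.02],
     [0.50, 0.50, 0.00]]
pi = stationary(P)
order = sorted(range(len(P)), key=lambda i: -pi[i])   # strongest first
P_reduced = reduce_chain(P, order[:2])
```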
13.
We establish an integration by parts formula for the random functionals of a continuous-time Markov chain, based on partial differentiation with respect to jump times. In comparison with existing methods, our approach does not rely on the Girsanov theorem and it imposes less restrictions on the choice of directions of differentiation, while assuming additional continuity conditions on the considered functionals. As an application we compute sensitivities (Greeks) using stochastic weights in an asset price model with Markovian regime switching.
14.
In this paper, we consider a BMAP/G/1 G-queue with setup times and multiple vacations. Arrivals of positive customers and negative customers follow a batch Markovian arrival process (BMAP) and a Markovian arrival process (MAP) respectively. The arrival of a negative customer removes all the customers in the system when the server is working. The server leaves for a vacation as soon as the system empties and is allowed to take repeated (multiple) vacations. By using the supplementary variables method and the censoring technique, we obtain the queue length distributions. We also obtain the mean of the busy period based on renewal theory.
15.
Importance Sampling Simulations of Markovian Reliability Systems Using Cross-Entropy
This paper reports simulation experiments applying the cross-entropy method to importance sampling algorithms for efficient estimation of rare event probabilities in Markovian reliability systems. The method is compared to various failure biasing schemes that have been proved to give estimators with bounded relative errors. The results from the experiments indicate a considerable improvement in the performance of the importance sampling estimators, where performance is measured by the relative error of the estimate, by the relative error of the estimator, and by the gain of the importance sampling simulation over normal simulation.
16.
In this paper, the intrinsically complex nature of engineering systems under control is treated by introducing an approach based on Controlled Stochastic Differential Equations with Markovian Switchings (CSDEMS for short). Technical conditions for the existence and uniqueness of the solutions of the CSDEMS are provided. In this context it is not unusual to deal with non-linear CSDEMS that cannot be solved analytically. Therefore, we develop a new two-step predictor-corrector method for finding numerical approximations to solutions of CSDEMS. This method utilizes the Euler–Maruyama method. An illustrative application to the biochemical engineering area is presented to highlight the usefulness of our approach as a simulation tool.
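A minimal sketch of a two-step predictor-corrector Euler–Maruyama scheme for a scalar SDE dX = a(r)X dt + b(r)X dW with a two-state switching process r. The drift/volatility values, the switching rates, and the trapezoidal drift corrector are all illustrative assumptions, not the authors' scheme or conditions.

```python
import math
import random

def pc_euler_maruyama(x0, T, n_steps, rng):
    # All numerical values below are invented for illustration.
    a = {0: 0.05, 1: -0.02}          # regime-dependent drift
    b = {0: 0.10, 1: 0.30}           # regime-dependent volatility
    q01, q10 = 1.0, 2.0              # switching intensities of the chain
    dt = T / n_steps
    x, r = x0, 0
    for _ in range(n_steps):
        # first-order approximation of the continuous-time switching
        if r == 0 and rng.random() < q01 * dt:
            r = 1
        elif r == 1 and rng.random() < q10 * dt:
            r = 0
        dw = rng.gauss(0.0, math.sqrt(dt))
        # predictor: plain Euler-Maruyama step
        x_pred = x + a[r] * x * dt + b[r] * x * dw
        # corrector: trapezoidal average of the drift, same noise increment
        x = x + 0.5 * (a[r] * x + a[r] * x_pred) * dt + b[r] * x * dw
    return x

xT = pc_euler_maruyama(1.0, 1.0, 1000, random.Random(7))
```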
17.
18.
19.
We study the properties of finite ergodic Markov chains whose transition probability matrix P is singular. The results establish bounds on the convergence time of P^m to a matrix whose rows all equal the stationary distribution of P. The results suggest a simple rule for identifying the singular matrices which do not have a finite convergence time. We next study finite convergence to the stationary distribution independent of the initial distribution. The results establish the connection between the convergence time and the order of the minimal polynomial of the transition probability matrix. A queuing problem and a maintenance Markovian decision problem which possess the property of rapid convergence are presented.
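The connection between finite convergence and the minimal polynomial can be seen on a small singular example, constructed for this note. Writing Pi for the rank-one matrix with all rows equal to the stationary distribution (1/3, 1/3, 1/3), the 3x3 matrix below satisfies (P - Pi)^2 = 0; its minimal polynomial is x^2 (x - 1), so P^m reaches Pi exactly at m = 2.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rows_equal(P, tol=1e-12):
    # True when all rows agree, i.e. P equals the rank-one limit matrix
    return all(abs(P[i][j] - P[0][j]) < tol
               for i in range(len(P)) for j in range(len(P[0])))

# Singular stochastic matrix P = Pi + N with N^2 = 0
P = [[5/12, 1/4, 1/3],
     [5/12, 1/4, 1/3],
     [1/6,  1/2, 1/3]]
Pm, steps = P, 0
while not rows_equal(Pm) and steps < 100:
    Pm = mat_mul(Pm, P)
    steps += 1
```

A regular (non-singular) transition matrix only converges to Pi in the limit, whereas here one multiplication suffices, in line with the minimal-polynomial rule above.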
20.
Dan Teodorescu 《Stochastic Processes and their Applications》1980,10(3):255-270
A new class of operators performing an optimization (optimization operators or, simply, optimators), which generate transition matrices with required properties such as ergodicity, recurrence, etc., is considered and their fundamental features are described. Some criteria for comparing such operators by taking into account their strength are given, and sufficient conditions for both weak and strong ergodicity are derived. The nearest Markovian model with respect to a given set of observed probability vectors is then defined as a sequence of transition matrices satisfying certain constraints that express our prior knowledge about the system. Finally, sufficient conditions for the existence of such a model are given and the related algorithm is illustrated by an example.