Similar Documents
20 similar documents found (search time: 296 ms)
1.
We consider unrecoverable homogeneous multi-state systems with gradual failures, where each component can operate at M + 1 linearly ordered levels of performance. The underlying failure process of each component is a homogeneous Markov process in which the performance level of a component can only drop to the level immediately below the current one, and failures of different components are independent. We derive the probability distribution of the random vector X representing the state of the system at the moment of failure and use it to test the hypothesis of equal transition intensities. Under the assumption that these intensities are equal, we derive method-of-moments estimators for the probabilities of failure in a given state vector and for the failure intensity. Finally, we calculate the reliability function of such systems. Received: May 18, 2007. Revised: July 8, 2008. Accepted: September 29, 2008.

2.
Abstract

Versions of the Gibbs Sampler are derived for the analysis of data from hidden Markov chains and hidden Markov random fields. The principal new development is to use the pseudolikelihood function associated with the underlying Markov process in place of the likelihood, which is intractable in the case of a Markov random field, in the simulation step for the parameters in the Markov process. Theoretical aspects are discussed and a numerical study is reported.

3.
4.
Abstract

We introduce the concepts of lumpability and commutativity of a continuous-time, discrete-state-space Markov process, and provide a necessary and sufficient condition for a lumpable Markov process to be commutative. Under suitable conditions we recover some of the basic quantities of the original Markov process from the jump chain of the lumped Markov process.
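For a finite-state chain, the strong lumpability condition can be checked directly on the generator: a partition is lumpable iff every state in a block has the same total transition rate into each other block. A minimal sketch with an assumed 4-state generator (the matrix and partition are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 4-state generator matrix of a CTMC (rows sum to 0).
Q = np.array([
    [-3.0,  1.0,  1.0,  1.0],
    [ 2.0, -4.0,  1.0,  1.0],
    [ 1.0,  1.0, -3.0,  1.0],
    [ 1.0,  1.0,  2.0, -4.0],
])

# Candidate partition: lump states {0, 1} and {2, 3}.
partition = [[0, 1], [2, 3]]

def is_lumpable(Q, partition, tol=1e-12):
    """Strong lumpability: for every ordered pair of distinct blocks (A, B),
    the total rate from each state of A into B must be the same."""
    for A in partition:
        for B in partition:
            if A is B:
                continue
            rates = [Q[i, B].sum() for i in A]
            if max(rates) - min(rates) > tol:
                return False
    return True

def lumped_generator(Q, partition):
    """Generator of the lumped chain (meaningful only if is_lumpable)."""
    k = len(partition)
    Qhat = np.zeros((k, k))
    for a, A in enumerate(partition):
        for b, B in enumerate(partition):
            if a != b:
                Qhat[a, b] = Q[A[0], B].sum()
    np.fill_diagonal(Qhat, -Qhat.sum(axis=1))  # rows of a generator sum to 0
    return Qhat

print(is_lumpable(Q, partition))   # True for this Q
print(lumped_generator(Q, partition))
```

Here both states of each block send total rate 2 into the other block, so the lumped chain is again Markov with a 2×2 generator.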

5.
Abstract

In this paper, we use filtering techniques to estimate the occurrence time of an event in a financial market. The occurrence time is viewed as a Markov stopping time with respect to the σ-field generated by a hidden Markov process. We also generalize the result to the Nth occurrence time of the event.

6.
Abstract

This paper concerns the pricing of American options with stochastic stopping time constraints expressed in terms of the states of a Markov process. Following the ideas of Menaldi et al., we transform the constrained optimal stopping problem into an unconstrained one. The transformation replaces the original payoff with the value of a generalized barrier option. We also provide a Monte Carlo method to numerically calculate the option value for multidimensional Markov processes, adapting the Longstaff–Schwartz algorithm to solve the stochastic Cauchy–Dirichlet problem associated with the valuation of the barrier option along a set of simulated trajectories of the underlying Markov process.
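As a sketch of the Longstaff–Schwartz regression step in the simplest setting — a plain American put under geometric Brownian motion, with illustrative parameters; the paper's constrained/barrier setting is more general:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper): American put on GBM.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps

# Simulate geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = np.hstack([np.full((n_paths, 1), S0),
               S0 * np.exp(np.cumsum(log_inc, axis=1))])

def payoff(s):
    return np.maximum(K - s, 0.0)

# Backward induction: regress the discounted continuation value on the
# in-the-money paths, and exercise where the immediate payoff beats it.
cash = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cash *= np.exp(-r * dt)
    itm = payoff(S[:, t]) > 0
    if itm.sum() > 3:
        coef = np.polyfit(S[itm, t], cash[itm], 2)   # quadratic basis
        cont = np.polyval(coef, S[itm, t])
        exercise = payoff(S[itm, t]) > cont
        idx = np.where(itm)[0][exercise]
        cash[idx] = payoff(S[idx, t])

price = np.exp(-r * dt) * cash.mean()
print(round(price, 2))
```

With these parameters the estimate lands a little above the European put value, reflecting the early-exercise premium.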

7.
A time-continuous branching random walk on the lattice Z^d, d ≥ 1, is considered in which particles may produce offspring at the origin only. We assume that the underlying Markov random walk is homogeneous and symmetric, that the process is initiated at time t = 0 by a single particle located at the origin, and that the average number of offspring produced at the origin makes the corresponding branching random walk critical. We study the asymptotic behavior, as t → ∞, of the probability that the process survives and at least one particle is present at the origin. In addition, we obtain asymptotic expansions for the expected number of particles at the origin and prove Yaglom-type conditional limit theorems for the number of particles located at the origin and elsewhere at time t.

8.
ABSTRACT

The asymptotic equipartition property is a basic theorem of information theory. In this paper, we study the strong law of large numbers for Markov chains in a single-infinite Markovian environment on a countable state space. As a corollary, we obtain strong laws of large numbers for the frequencies of occurrence of states and of ordered pairs of states for this process. Finally, we give the asymptotic equipartition property of Markov chains in a single-infinite Markovian environment on a countable state space.

9.
This paper deals with a continuous-review (s, S) inventory system in which demands that arrive to find the system out of stock leave the service area and repeat their request after some random time. This assumption is a natural alternative to classical approaches based on either lost-demand or backlogged models. The stochastic model is formulated as a bidimensional Markov process, which is solved numerically to investigate the essential operating characteristics of the system. An optimal design problem is also considered. AMS subject classification: 90B05, 90B22
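A minimal simulation sketch of such a retrial inventory system, using competing exponential clocks; all rates and the order-up-to-S replenishment rule below are illustrative assumptions, not the paper's exact model:

```python
import random

random.seed(1)

# Hypothetical parameters (not from the paper): Poisson primary demands,
# exponential lead times and retrial times; order-up-to-S replenishment.
s, S = 2, 5          # reorder level and maximum stock
lam = 1.0            # primary demand rate
mu = 0.5             # replenishment rate (exponential lead time)
theta = 2.0          # retrial rate per orbiting customer
horizon = 50_000.0

inv, orbit, t, area_inv = S, 0, 0.0, 0.0
while t < horizon:
    rates = {
        'demand': lam,
        'retrial': theta * orbit,
        'replenish': mu if inv <= s else 0.0,   # order outstanding
    }
    total = sum(rates.values())
    dt = random.expovariate(total)
    area_inv += inv * dt
    t += dt
    u = random.uniform(0.0, total)
    if u < rates['demand']:
        if inv > 0:
            inv -= 1
        else:
            orbit += 1           # demand joins the orbit to retry later
    elif u < rates['demand'] + rates['retrial']:
        if inv > 0:              # a retrial succeeds only if stock is available
            inv -= 1
            orbit -= 1
    else:
        inv = S                  # replenishment arrives

print(round(area_inv / t, 2))    # time-average inventory level
```

Because all clocks are exponential, restarting them after every transition is valid, and the pair (inv, orbit) is exactly the bidimensional Markov process the abstract describes.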

10.
Abstract
In this paper we study strongly continuous positive semigroups on particular classes of weighted continuous function spaces on a locally compact Hausdorff space X having a countable base. In particular, we characterize those positive semigroups which are the transition semigroups of suitable Markov processes. Some applications are also discussed.
Keywords: Positive semigroup, Markov transition function, Markov process, Weighted continuous function space, Degenerate second-order differential operator
Mathematics Subject Classification (2000): 47D06, 47D07, 60J60

11.
Summary The basic problem considered in this paper is that of determining conditions for recurrence and transience of two-dimensional irreducible Markov chains whose state space is Z_+^2 = Z_+ × Z_+. Assuming bounded jumps and a homogeneity condition, Malyshev [7] obtained necessary and sufficient conditions for recurrence and transience of two-dimensional random walks on the positive quadrant. Unfortunately, his hypothesis that the jumps of the Markov chain be bounded rules out, for example, the Poisson arrival process. In this paper we generalize Malyshev's theorem by a method that makes novel use of the solution to Laplace's equation in the first quadrant satisfying an oblique derivative condition on the boundaries. This method, which allows one to replace the very restrictive boundedness condition with a moment condition and a lower boundedness condition, is of independent interest.

12.
Abstract

In this paper, we develop an option valuation model in which the dynamics of the spot foreign exchange rate are governed by a two-factor Markov-modulated jump-diffusion process. The short-term fluctuation of stochastic volatility is driven by a Cox–Ingersoll–Ross (CIR) process, while its long-term variation is driven by a continuous-time Markov chain whose states can be interpreted as states of the economy. Rare events are governed by a compound Poisson process with log-normal jump amplitude, whose stochastic jump intensity is modulated by a common continuous-time Markov chain. Since the market is incomplete under regime-switching assumptions, we determine a risk-neutral martingale measure via the Esscher transform and then give a pricing formula for currency options. Numerical results are presented to investigate the impact of the long-term volatility and the annual jump intensity on option prices.
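A path of such a Markov-modulated jump-diffusion can be sketched with a simple Euler scheme; the generator, regime parameters, and jump law below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (assumed, not the paper's calibration).
Q = np.array([[-0.5,  0.5],
              [ 1.0, -1.0]])     # generator of the economy-state chain
mu    = [0.02, -0.01]            # drift per regime
sigma = [0.10,  0.30]            # diffusive volatility per regime
lam   = [2.0, 10.0]              # jump intensity per regime
jump_m, jump_s = -0.01, 0.05     # normal parameters of the log jump size

T, n = 1.0, 1000
dt = T / n
x = np.log(1.25)                 # log spot FX rate
state = 0
for _ in range(n):
    # Regime switch with probability ~ -Q[state, state] * dt.
    if rng.random() < -Q[state, state] * dt:
        state = 1 - state
    # Compound Poisson jumps in this step, with regime-dependent intensity.
    n_jumps = rng.poisson(lam[state] * dt)
    jump = rng.normal(jump_m, jump_s, size=n_jumps).sum()
    x += ((mu[state] - 0.5 * sigma[state]**2) * dt
          + sigma[state] * np.sqrt(dt) * rng.standard_normal()
          + jump)

spot_T = np.exp(x)
print(round(spot_T, 4))
```

Averaging the discounted payoff over many such paths under a risk-neutral measure would give a Monte Carlo currency option price.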

13.
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this article, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effect models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
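In the iid case described above, the importance sampling estimator and its CLT-based standard error can be sketched as follows; the target π, proposal π1, and function h are illustrative choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: estimate E_pi[h(X)] for pi = N(0, 1) and h(x) = x^2 (true value 1)
# using draws from the heavier-tailed proposal pi1 = N(0, 2^2).
n = 100_000
x = rng.normal(0.0, 2.0, size=n)

def log_pi(x):                    # target log-density (common constant dropped)
    return -0.5 * x**2

def log_pi1(x):                   # proposal log-density (same constant dropped)
    return -0.5 * (x / 2.0)**2 - np.log(2.0)

w = np.exp(log_pi(x) - log_pi1(x))       # importance weights pi/pi1
h = x**2

est = np.mean(w * h)                     # strongly consistent estimator
se = np.std(w * h, ddof=1) / np.sqrt(n)  # CLT standard error (iid case)
print(round(est, 3), round(se, 4))
```

The article's point is that when the draws come from a Harris ergodic Markov chain instead of an iid sample, this naive standard error is no longer valid, which motivates the regeneration-based variance estimator.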

14.
We extend the central limit theorem for additive functionals of a stationary, ergodic Markov chain with normal transition operator due to Gordin and Lifšic, 1981 [A remark about a Markov process with normal transition operator, in: Third Vilnius Conference on Probability and Statistics 1, pp. 147–148] to continuous-time Markov processes with normal generators. As examples, we discuss random walks on compact commutative hypergroups as well as certain random walks on non-commutative compact groups.

15.
Abstract

In this article, we solve a class of estimation problems, namely filtering, smoothing, and detection, for a discrete-time dynamical system with integer-valued observations. The observations we consider are Poisson random variables observed at discrete times, where the distribution parameter of each Poisson observation is determined by the state of a Markov chain. By appealing to a duality between the forward (in time) filter and its corresponding backward process, we compute the dynamics satisfied by the unnormalized form of the smoothed probability. These dynamics can be used to construct the algorithms typically referred to as fixed-point, fixed-lag, and fixed-interval smoothers. M-ary detection filters are computed for two scenarios: the standard model-parameter detection problem and a jump Markov system.
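A minimal sketch of a normalized forward filter for a hidden Markov chain observed through Poisson counts; the transition matrix, rates, and prior are illustrative assumptions (the article works with the unnormalized form and its backward dual):

```python
import numpy as np
from math import exp, factorial

# Illustrative parameters (assumed): two hidden states, Poisson counts.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # transition matrix P(x_{t+1} = j | x_t = i)
rates = np.array([1.0, 6.0])     # Poisson rate in each hidden state
p0 = np.array([0.5, 0.5])        # prior on the initial state

def poisson_pmf(y, lam):
    return exp(-lam) * lam**y / factorial(y)

def forward_filter(ys):
    """Normalized forward filter: predict with A, then correct with the
    Poisson likelihood of each observed count."""
    p = p0.copy()
    history = []
    for y in ys:
        pred = A.T @ p                                   # prediction step
        lik = np.array([poisson_pmf(y, l) for l in rates])
        p = lik * pred
        p /= p.sum()                                     # normalization
        history.append(p.copy())
    return history

hist = forward_filter([0, 1, 7, 8, 0])
print(hist[-1])    # filtered distribution after the last observation
```

Large counts pull the filtered distribution toward the high-rate state, small counts pull it back, which is the basic behavior the smoothers refine.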

16.
Abstract

We postulate observations from a Poisson process whose rate parameter modulates between two values determined by an unobserved Markov chain. The theory switches from continuous to discrete time by considering the intervals between observations as a sequence of dependent random variables. A result from hidden Markov models allows us to sample from the posterior distribution of the model parameters, given the observed event times, using a Gibbs sampler with only two steps per iteration.
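The data-generating mechanism can be sketched by simulating such a two-state Markov-modulated Poisson process directly, racing two exponential clocks; all rates are illustrative assumptions:

```python
import random

random.seed(42)

# Illustrative parameters (assumed): two-state modulating chain.
lam = [0.5, 5.0]     # event rate of the Poisson process in each hidden state
q = [0.2, 0.3]       # rate of leaving state 0 / state 1

def simulate(horizon):
    """Simulate event times by racing two exponential clocks: the next
    event of the Poisson process vs. the next switch of the hidden chain.
    Memorylessness makes restarting both clocks after each step valid."""
    t, state, events = 0.0, 0, []
    while t < horizon:
        t_event = random.expovariate(lam[state])
        t_switch = random.expovariate(q[state])
        if t_event < t_switch:
            t += t_event
            if t < horizon:
                events.append(t)
        else:
            t += t_switch
            state = 1 - state
    return events

events = simulate(1000.0)
gaps = [b - a for a, b in zip(events, events[1:])]
print(len(events), round(sum(gaps) / len(gaps), 3))
```

The inter-event gaps computed at the end are exactly the dependent random variables on which the discrete-time posterior analysis is built.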

17.
Stochastic Analysis and Applications, 2013, 31(6): 1207–1214
Abstract

In this article, we assume that we have a number of candidate insurance models for describing a risk process, where in each model the risk process is a function of the states of some Markov chains. Based on observing the history of the premium and claim processes, we propose dynamics whose solutions indicate the likelihood of each candidate model.

18.
Abstract

A continuous-time financial market is considered where randomness is modelled by a finite-state Markov chain. Using the chain, a stochastic discount factor is defined, and the probability distributions of default times are shown to be given by solutions of a system of coupled partial differential equations.

19.
20.
ABSTRACT

This paper studies partially observed risk-sensitive optimal control problems with correlated noises between the system and the observation. The state process is assumed to be governed by a continuous-time Markov regime-switching jump-diffusion process, and the cost functional is of exponential-of-integral type. By virtue of a classical spike variational approach, we obtain two general maximum principles for these problems. Moreover, under certain convexity assumptions on both the control domain and the Hamiltonian, we give a sufficient condition for optimality. For illustration, a linear-quadratic risk-sensitive control problem is posed and solved using the main results. As a natural deduction, a fully observed risk-sensitive maximum principle is also obtained and applied to a risk-sensitive portfolio optimization problem. Closed-form expressions for both the optimal portfolio and the corresponding optimal cost functional are obtained.
