Similar Documents
A total of 20 similar documents were retrieved.
1.
The evolution of a system with phase transition is simulated by a Markov process whose transition probabilities depend on a parameter. The change of the stationary distribution of the Markov process with a change of this parameter is interpreted as a phase transition of the system from one thermodynamic equilibrium state to another. Calculations and computer experiments are performed for condensation of a vapor. The sample paths of the corresponding Markov process have parts where the radius of condensed drops is approximately constant. These parts are interpreted as metastable states. Two metastable states occur, initial (gaseous steam) and intermediate (fog). The probability distributions of the drop radii in the metastable states are estimated. Translated from Teoreticheskaya i Matematicheskaya Fizika, Vol. 123, No. 1, pp. 94–106, April, 2000.
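This is not the authors' condensation model, but the mechanism described above (a stationary distribution that jumps between "phases" as a parameter crosses a critical value) can be sketched with a toy birth-death chain whose stationary law is fixed by detailed balance from a tilted double-well potential; the potential, temperature, and `field` parameter are invented for the illustration.

```python
import numpy as np

def stationary_distribution(n, field, temp=0.05):
    """Stationary law of a birth-death chain defined by detailed balance
    (pi[i+1]/pi[i] = birth[i]/death[i+1]) from a toy double-well potential;
    `field` tilts the relative depth of the two wells."""
    x = np.linspace(-1.5, 1.5, n)
    V = x**4 - x**2 - field * x          # invented double-well potential
    pi = np.exp(-(V - V.min()) / temp)   # Gibbs form of the stationary law
    return x, pi / pi.sum()

for field in (-0.1, 0.0, 0.1):           # sweeping the parameter moves the mode abruptly
    x, pi = stationary_distribution(200, field)
    print(f"field={field:+.1f}: mode at x={x[pi.argmax()]:+.3f}")
```

As `field` crosses zero the mode jumps from one well to the other, mimicking the transition between the two metastable regimes described in the abstract.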

2.
This paper studies a periodic-review spare-parts inventory system for multiple machines sharing a common spare part, combining the spare-parts inventory policy with equipment condition monitoring, and discusses the inventory policy when condition monitoring is available. For the compound process of natural equipment deterioration and human repair actions, a new Markov transition probability matrix is used to characterize the probability of spare-parts demand, and on this basis optimal ordering policies are derived for a static ordering model and for a dynamic ordering model under condition monitoring. Comparing the strengths and weaknesses of these two policies, the paper proposes a new heuristic ordering policy: an ordering policy based on critical states. This policy effectively reduces the information cost of dynamically monitoring all equipment, achieves greater cost savings than the static ordering policy, and provides useful guidance for practical problems in enterprises.

3.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
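A minimal sketch of the interactive mechanism, with an invented two-state "imitation" example: the transition matrix applied to each individual depends on the current population profile, and the profile is iterated to a fixed point.

```python
import numpy as np

def transition_matrix(x):
    """Invented example: the probability of moving to state 1 increases
    with the fraction x[1] of the population already in state 1."""
    p = 0.2 + 0.6 * x[1]          # monotone in the population profile
    return np.array([[1 - p, p],
                     [0.1, 0.9]])

x = np.array([1.0, 0.0])           # initial population distribution by state
for _ in range(200):               # iterate x_{t+1} = x_t P(x_t)
    x = x @ transition_matrix(x)
print("equilibrium profile:", x)
```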

4.
We consider a discrete time risk model where dividends are paid to insureds and the claim size has a discrete phase-type distribution, but the claim sizes vary according to an underlying Markov process called an environment process. In addition, the probability of paying the next dividend is affected by the current state of the underlying Markov process. We provide explicit expressions for the ruin probability and the deficit distribution at ruin by extracting a QBD (quasi-birth-and-death) structure in the model and then analyzing the QBD process. Numerical examples are also given.

5.
Processes of autocorrelated Poisson counts can often be modelled by a Poisson INAR(1) model, which proved to apply well to typical tasks of SPC. Statistical properties of this model are briefly reviewed. Based on these properties, we propose a new control chart: the combined jumps chart. It monitors the counts and jumps of a Poisson INAR(1) process simultaneously. As the bivariate process of counts and jumps is a homogeneous Markov chain, average run lengths (ARLs) can be computed exactly with the well‐known Markov chain approach. Based on an investigation of such ARLs, we derive design recommendations and show that a properly designed chart can be applied nearly universally. This is also demonstrated by a real‐data example from the insurance field. Copyright © 2008 John Wiley & Sons, Ltd.
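The Poisson INAR(1) recursion X_t = α ∘ X_{t-1} + ε_t (binomial thinning of the previous count plus a Poisson innovation) is straightforward to simulate; a sketch with illustrative parameter values, including the jump series that the combined chart monitors:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_inar1(alpha, lam, n):
    """Poisson INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, eps_t ~ Poisson(lam).
    The stationary marginal distribution is Poisson(lam / (1 - alpha))."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))           # start in the stationary law
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning
        x[t] = survivors + rng.poisson(lam)
    return x

counts = simulate_inar1(alpha=0.5, lam=2.0, n=1000)
jumps = np.diff(counts)                # the jump process of the combined chart
print(counts.mean(), counts.var())     # both near lam/(1-alpha) = 4 (Poisson marginal)
```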

6.
王兆军, 巩震, 邹长亮. 《数理统计与管理》, 2011, 30(3): 467-494, 496, 497, 495.
Research on quality control charts occupies an important position in statistical process control (SPC); control charts are widely used in practice and have produced substantial economic and social benefits. The ARL (Average Run Length) and the ATS (Average Time to Signal) are key indices commonly used to evaluate the performance of static and dynamic control charts, respectively. To date, there are roughly three methods for computing the ARL or ATS of a control chart: the Markov chain method, the integral equation method, and stochastic simulation. This paper surveys the first two methods for various control charts, for the reference of researchers and practitioners.
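As a hedged sketch of the Markov chain method mentioned above, the classical Brook–Evans discretization computes the ARL of an upper one-sided CUSUM chart for N(μ, 1) data by replacing the continuous chart statistic with a finite chain R of transient states and solving (I − R)·ARL = 1; the reference values k, h, m below are illustrative.

```python
import math
import numpy as np

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cusum_arl(k, h, mu=0.0, m=100):
    """ARL of the upper CUSUM C_t = max(0, C_{t-1} + Z_t - k), signal at C_t >= h,
    via the Brook-Evans Markov chain approximation with m transient cells."""
    w = h / m                              # cell width; cell j covers [j*w, (j+1)*w)
    mid = (np.arange(m) + 0.5) * w         # cell midpoints
    R = np.zeros((m, m))                   # transient-to-transient transitions
    for i in range(m):
        # next value is C = max(0, mid[i] + Z - k), Z ~ N(mu, 1)
        R[i, 0] = norm_cdf(k - mid[i] + w - mu)   # cell 0 includes the atom at 0
        for j in range(1, m):
            R[i, j] = (norm_cdf(k - mid[i] + (j + 1) * w - mu)
                       - norm_cdf(k - mid[i] + j * w - mu))
    arl = np.linalg.solve(np.eye(m) - R, np.ones(m))
    return arl[0]                          # chart restarted at C_0 = 0

print("in-control ARL :", cusum_arl(k=0.5, h=4.0, mu=0.0))   # approx. 168
print("shifted ARL    :", cusum_arl(k=0.5, h=4.0, mu=1.0))
```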

7.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non‐absorbing. In the former case the chain is absorbing, in the latter it is regular.
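A hedged sketch of case (b): states are (location, duration) pairs with duration capped at D, the leaving probability decays with duration (cumulative inertia), and the equilibrium distribution is the stationary vector of the resulting regular chain. The two-location example and the form of the leave probability are invented for illustration.

```python
import numpy as np
from itertools import product

L, D = 2, 5                                 # locations and capped duration

def leave_prob(d):
    return 0.4 / d                          # cumulative inertia: harder to leave as d grows

states = list(product(range(L), range(1, D + 1)))   # (location, duration) pairs
idx = {s: i for i, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for (loc, d), i in idx.items():
    p = leave_prob(d)
    P[i, idx[(1 - loc, 1)]] = p                     # move: duration resets to 1
    P[i, idx[(loc, min(d + 1, D))]] = 1 - p         # stay: duration grows, cap non-absorbing

# Stationary distribution: solve pi (P - I) = 0 with sum(pi) = 1.
A = np.vstack([(P - np.eye(len(states))).T, np.ones(len(states))])
b = np.append(np.zeros(len(states)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print({s: round(p, 3) for s, p in zip(states, pi)})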

8.
Dynamic principal-agent models are formulated based on constrained Markov decision processes (CMDPs), under the conditions that the state space of the system is countable and the agent chooses actions from a countable action set. If the principal has finitely many alternative contracts to select from, it is shown that the optimal contract and the corresponding optimal policy can be obtained by linear programming under both the discounted criterion and the average criterion. The paper is supported by the Zhejiang Provincial Natural Science Foundation of China (701017) and by the Scientific Research Fund of the Zhejiang Provincial Education Department.
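The linear-programming route for a discounted MDP can be sketched with occupancy-measure variables x(s, a): maximize Σ r(s, a)·x(s, a) subject to the flow constraints Σ_a x(s, a) − γ Σ_{s',a'} P(s | s', a')·x(s', a') = μ(s). The tiny two-state instance below is invented, and contract constraints from the paper would enter as additional linear constraints on x.

```python
import numpy as np
from scipy.optimize import linprog

S, A, gamma = 2, 2, 0.9
P = np.array([[[0.8, 0.2], [0.3, 0.7]],      # P[s, a, s']
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.5], [0.0, 2.0]])       # r[s, a]
mu = np.array([0.5, 0.5])                    # initial state distribution

# Flow constraint for each state s over flattened variables x[s, a]:
#   sum_a x[s, a] - gamma * sum_{s2, a} P[s2, a, s] * x[s2, a] = mu[s]
A_eq = np.zeros((S, S * A))
for s in range(S):
    for s2 in range(S):
        for a in range(A):
            A_eq[s, s2 * A + a] = (s2 == s) - gamma * P[s2, a, s]
res = linprog(c=-r.flatten(), A_eq=A_eq, b_eq=mu, bounds=(0, None), method="highs")
x = res.x.reshape(S, A)
print("optimal policy:", x.argmax(axis=1), " optimal value:", -res.fun)
```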

9.
We have previously used Markov models to describe movements of patients between hospital states; these may be actual or virtual and described by a phase-type distribution. Here we extend this approach to a Markov reward model for a healthcare system with Poisson admissions and an absorbing state, typically death. The distribution of costs is evaluated for any time and expressions are derived for the mean and variance of costs. The average cost at any time is then determined for two scenarios: the Therapeutic and Prosthetic models, respectively. This example is used to illustrate the idea that keeping acute patients longer in hospital to ensure fitness for discharge may reduce costs by decreasing the number of patients that become long-stay. In addition, we develop a Markov reward model for a healthcare system including states where the patient is in hospital and states where the patient is in the community. In each case, the length of stay is described by a phase-type distribution, thus enabling the representation of durations and costs in each phase within a Markov framework. The model can be used to determine costs for the entire system, facilitating a systems approach to the planning of healthcare and a holistic approach to costing. Such models help us to assess the complex relationship between hospital and community care.
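For the absorbing continuous-time case, the expected total cost accumulated before absorption has a standard closed form: if T is the generator restricted to the transient (phase-type) states and c is the vector of cost rates, the expected cost starting from each state is (−T)⁻¹c. A sketch with invented state names, rates, and costs:

```python
import numpy as np

# Generator over transient states: 'acute' -> 'long-stay' -> absorbed (death/discharge).
T = np.array([[-1.0, 0.3],       # leaves 'acute' at rate 1; 0.3 of that to 'long-stay'
              [0.0, -0.1]])      # leaves 'long-stay' at rate 0.1, all absorbed
c = np.array([500.0, 100.0])     # cost per unit time in each state

mean_time = np.linalg.solve(-T, np.ones(2))   # expected time to absorption: [4, 10]
mean_cost = np.linalg.solve(-T, c)            # expected total cost: [800, 1000]
print("mean time :", mean_time)
print("mean cost :", mean_cost)
```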

10.
In the theory and applications of Markov decision processes introduced by Howard and subsequently developed by many authors, it is assumed that actions can be chosen independently at each state. A policy constrained Markov decision process is one where selecting a given action in one state restricts the choice of actions in another. This note describes a method for determining a maximal gain policy in the policy constrained case. The method involves the use of bounds on the gain of the feasible policies to produce a policy ranking list. This list then forms a basis for a bounded enumeration procedure which yields the optimal policy.

11.
Most existing routing convergence models are deterministic, yet during convergence routers experience packet loss, link noise, and abrupt changes in the interconnection topology. To address these random phenomena, this paper introduces the Bernoulli white-sequence distribution, the Wiener process, and the Markov process, and proposes a new stochastic dynamical system model. Applying the theory of stochastic differential equations and methods of stochastic analysis, sufficient conditions for routing convergence are derived. The results show that, in a random environment, convergence of the routing state is closely tied to the Laplacian matrix of the router interconnection topology, the stationary distribution of the Markov switching, the success rate of packet transmission in the network, and the noise intensity. Finally, a numerical example verifies the validity of the conclusions.

12.
Hidden Markov Chains (HMC) are widely applied in various problems. This success is mainly due to the fact that the hidden process can be recovered even in the case of very large sets of data. These models have recently been generalized to the ‘Pairwise Markov Chains’ (PMC) model, which admits the same processing power and greater modeling power. The aim of this note is to propose a further generalization called Triplet Markov Chains (TMC), in which the distribution of the couple (hidden process, observed process) is the marginal distribution of a Markov chain. Similarly to HMC, we show that posterior marginals are still computable in Triplet Markov Chains. We provide a necessary and sufficient condition for a TMC to be a PMC, which shows that the new model is strictly more general. Furthermore, a link with the Dempster–Shafer fusion is specified. To cite this article: W. Pieczynski, C. R. Acad. Sci. Paris, Ser. I 335 (2002) 275–278.
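Since the hidden pair (X, U) of a triplet chain is itself Markov on a product space, posterior marginals follow from ordinary forward-backward recursions on that enlarged space. The sketch below assumes, for simplicity, an HMC-like factorization on the pair (a PMC subclass; the general TMC recursion is analogous with kernels depending also on y), with invented toy transition and emission arrays.

```python
import numpy as np

nx, nu, ny, n = 2, 2, 3, 6
rng = np.random.default_rng(0)
V = nx * nu                                    # product space of the pair (X, U)
A = rng.dirichlet(np.ones(V), size=V)          # transition over (X, U), state v = x*nu + u
B = rng.dirichlet(np.ones(ny), size=V)         # emission p(y | x, u)
y = rng.integers(ny, size=n)                   # an arbitrary observation sequence

alpha = np.zeros((n, V)); beta = np.ones((n, V))
alpha[0] = B[:, y[0]] / V                      # uniform prior on (X, U)
for t in range(1, n):
    alpha[t] = (alpha[t - 1] @ A) * B[:, y[t]]
    alpha[t] /= alpha[t].sum()                 # scaling avoids numerical underflow
for t in range(n - 2, -1, -1):
    beta[t] = A @ (B[:, y[t + 1]] * beta[t + 1])
    beta[t] /= beta[t].sum()

post_v = alpha * beta
post_v /= post_v.sum(axis=1, keepdims=True)    # p(x_t, u_t | y_1..n)
post_x = post_v.reshape(n, nx, nu).sum(axis=2) # marginalize out the auxiliary U
print(post_x)
```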

13.
Stochastic epidemic models describe the dynamics of an epidemic as a disease spreads through a population. Typically, only a fraction of cases are observed at a set of discrete times. The absence of complete information about the time evolution of an epidemic gives rise to a complicated latent variable problem in which the state space size of the epidemic grows large as the population size increases. This makes analytically integrating over the missing data infeasible for populations of even moderate size. We present a data augmentation Markov chain Monte Carlo (MCMC) framework for Bayesian estimation of stochastic epidemic model parameters, in which measurements are augmented with subject-level disease histories. In our MCMC algorithm, we propose each new subject-level path, conditional on the data, using a time-inhomogeneous continuous-time Markov process with rates determined by the infection histories of other individuals. The method is general, and may be applied to a broad class of epidemic models with only minimal modifications to the model dynamics and/or emission distribution. We present our algorithm in the context of multiple stochastic epidemic models in which the data are binomially sampled prevalence counts, and apply our method to data from an outbreak of influenza in a British boarding school. Supplementary material for this article is available online.
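A minimal sketch of the class of models targeted (not the authors' MCMC scheme): a stochastic SIR epidemic as a continuous-time Markov chain, simulated exactly with the Gillespie algorithm, with the prevalence binomially thinned at discrete observation times to mimic the partially observed data described above. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def gillespie_sir(S, I, beta, gamma, t_end):
    """Exact simulation of the SIR CTMC; returns the (time, S, I) jump skeleton."""
    N, t, path = S + I, 0.0, [(0.0, S, I)]
    while I > 0 and t < t_end:
        rate_inf, rate_rec = beta * S * I / N, gamma * I   # infection / recovery rates
        t += rng.exponential(1.0 / (rate_inf + rate_rec))  # time to next event
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I = S - 1, I + 1          # infection event
        else:
            I -= 1                       # recovery event
        path.append((t, S, I))
    return path

path = gillespie_sir(S=499, I=1, beta=2.0, gamma=0.5, t_end=20.0)
# Binomially sampled prevalence at daily observation times (detection prob. 0.3):
times, _, prev = zip(*path)
for day in range(10):
    i_now = prev[np.searchsorted(times, day, side="right") - 1]
    print(day, rng.binomial(i_now, 0.3))
```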

14.
A brief survey of the literature on sojourn time problems in single node feedback queueing systems is presented. The derivation of the distribution and moments of the sojourn time of a typical customer in a Markov renewal queue with state dependent feedback is considered in depth. The techniques used relate to the derivation of a first passage time distribution in a particular Markov renewal process. These results are applied to birth-death queues with state dependent feedback. For such models an alternative approach using the theory of Markov chains in continuous time is also examined.

15.
Processing equipment in the water industry is subject to decay and requires maintenance, repair and eventual replacement. The challenge of competition within the water industry and the accompanying regulatory regime requires that actions be integrated and cost effective. This is an industry which has considerable data on the failure of its equipment, but until recently very few models of the maintenance process have been built. This paper describes the context of this problem for clean water processing, where the equipment is that required to purify water. It proposes a model based on the virtual and operating age of the components. The operating age reflects the true age of the equipment while the virtual age allows for the cumulative effect of maintenance actions performed on the equipment. The model also allows for different types of equipment by describing degradation by Cox's proportional hazards model. Thus the special features of the equipment and the environment in which the equipment operates are described by a set of characteristics which modify the hazard rate of the failure time of the equipment. This approach using Cox's model with virtual and operating age can be applied to other processing industries including the gas industry and the ‘dirty water’ side of the water industry. The model is formulated as a stochastic dynamic programming or Markov decision process and the form of the optimal policy is determined. This shows that repair and replacement should only be performed when the equipment has failed and describes general conditions when replacement is appropriate. The optimal policy is calculated numerically using the value iteration algorithm for a specific example based on data on failure.
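The Markov decision formulation can be sketched with value iteration on a small deterioration chain: states are equipment conditions, actions are {continue, replace}, and the policy minimizes expected discounted cost. The transition numbers and costs below are invented and do not reproduce the paper's virtual-age model.

```python
import numpy as np

n = 5                                    # condition states, 0 = new, 4 = failed
P_cont = np.diag(np.full(n - 1, 0.3), k=1) + np.diag(np.full(n, 0.7))
P_cont[-1, -1] = 1.0                     # failed stays failed if nothing is done
run_cost = np.array([0.0, 1.0, 2.0, 4.0, 50.0])   # failure is expensive to run
replace_cost, gamma = 10.0, 0.95

V = np.zeros(n)
for _ in range(500):                     # value iteration
    q_cont = run_cost + gamma * P_cont @ V
    q_repl = replace_cost + gamma * V[0]          # replacement returns to state 0
    V = np.minimum(q_cont, q_repl)
policy = np.where(q_cont <= q_repl, "continue", "replace")
print(dict(zip(range(n), policy)))
```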

16.
This paper derives a particle filter algorithm within the Dempster–Shafer framework. Particle filtering is a well-established Bayesian Monte Carlo technique for estimating the current state of a hidden Markov process using a fixed number of samples. When dealing with incomplete information or qualitative assessments of uncertainty, however, Dempster–Shafer models with their explicit representation of ignorance often turn out to be more appropriate than Bayesian models. The contribution of this paper is twofold. First, the Dempster–Shafer formalism is applied to the problem of maintaining a belief distribution over the state space of a hidden Markov process by deriving the corresponding recursive update equations, which turn out to be a strict generalization of Bayesian filtering. Second, it is shown how the solution of these equations can be efficiently approximated via particle filtering based on importance sampling, which makes the Dempster–Shafer approach tractable even for large state spaces. The performance of the resulting algorithm is compared to exact evidential as well as Bayesian inference.
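The Bayesian baseline that the paper generalizes is the bootstrap particle filter: propagate samples through the Markov transition kernel, weight by the likelihood, and resample. A sketch for a scalar linear-Gaussian state-space model (chosen only because its parameters are easy to state; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_particles = 50, 2000
a, q, r = 0.9, 1.0, 0.5                  # state coefficient, process/observation variance

# Simulate a hidden trajectory and noisy observations.
x_true = np.zeros(n_steps)
for t in range(1, n_steps):
    x_true[t] = a * x_true[t - 1] + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), n_steps)

particles = rng.normal(0, 1, n_particles)
estimates = []
for t in range(n_steps):
    if t > 0:                             # propagate through the transition kernel
        particles = a * particles + rng.normal(0, np.sqrt(q), n_particles)
    w = np.exp(-0.5 * (y[t] - particles) ** 2 / r)       # likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, n_particles, p=w)  # multinomial resampling
    estimates.append(particles.mean())
print("RMSE vs truth:", np.sqrt(np.mean((np.array(estimates) - x_true) ** 2)))
```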

17.
Quasi-stationary distributions have been used in biology to describe the steady state behaviour of Markovian population models which, while eventually certain to become extinct, nevertheless maintain an apparent stochastic equilibrium for long periods. However, they have substantial drawbacks; a Markov process may not possess any, or may have several, and their probabilities can be very difficult to determine. Here, we consider conditions under which an apparent stochastic equilibrium distribution can be identified and computed, irrespective of whether a quasi-stationary distribution exists, or is unique; we call it a quasi-equilibrium distribution. The results are applied to multi-dimensional Markov population processes.
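For a finite absorbing chain, a quasi-stationary distribution (when one exists) is the normalized left Perron eigenvector of the substochastic matrix restricted to the transient states, which power iteration with renormalization computes. The small birth-death-with-extinction example below is invented.

```python
import numpy as np

# Substochastic matrix on transient states 1..4 of a birth-death chain;
# the absorbing extinction state 0 has been deleted (its probability mass leaks out).
Q = np.array([[0.1, 0.5, 0.0, 0.0],   # from state 1: death (0.4) leads to extinction
              [0.4, 0.1, 0.5, 0.0],
              [0.0, 0.4, 0.1, 0.5],
              [0.0, 0.0, 0.4, 0.6]])  # birth blocked at the ceiling state

pi = np.ones(4) / 4
for _ in range(2000):                  # power iteration on the left eigenvector
    pi = pi @ Q
    pi /= pi.sum()                     # renormalize: conditioning on survival
print("quasi-stationary distribution:", pi.round(4))
print("per-step survival probability:", (pi @ Q).sum().round(4))
```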

18.
Expected value is a common and useful baseline for comparing different multi-layered missile defense strategies or fire doctrines (the number of interceptors fired at one target). However, expected value by itself does not give military or national security researchers enough information about the probability distribution of the effectiveness of the entire missile defense system. The objective of this paper is to provide relevant probability distribution functions (pdf) for ballistic missile defense (BMD) planning and cost-effectiveness analysis. To achieve this goal, a discrete-time Markov process is used to model the multi-layered BMD system. Most issues of the multi-layered BMD system are covered in this model, including multiple reentry vehicles, discrimination probabilities with the accompanying risk and waste of defensive resources, the probability of engaging all of the hostile objects, and required inventory levels. The effectiveness of a multi-layered BMD system is expressed as the pdf of the number of warheads or missiles penetrating the BMD system. This paper also suggests that by changing fire doctrines and comparing the resulting effectiveness against cost, the BMD system might be optimized. Since Markov process modeling requires an initial state, military intelligence and information will be necessary to generate the initial state.
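The headline quantity, the pmf of the number of warheads penetrating all layers, has a simple closed form under an independence assumption (a much-reduced version of the paper's model, with no discrimination errors or inventory limits): with fire doctrine n interceptors per object in a layer and single-shot kill probability p, an object leaks the layer with probability (1 − p)^n, so the number of penetrators out of N incoming is Binomial(N, Π (1 − p)^n). Illustrative numbers:

```python
from math import comb, prod

def penetration_pmf(N, layers):
    """pmf of the number of warheads penetrating a multi-layer defense.
    layers = [(n_interceptors, single_shot_kill_prob), ...] per layer."""
    leak = prod((1 - p) ** n for n, p in layers)    # per-warhead leak probability
    return [comb(N, k) * leak**k * (1 - leak) ** (N - k) for k in range(N + 1)]

doctrine = [(2, 0.7), (1, 0.8)]          # shoot-shoot upper layer, shoot lower layer
pmf = penetration_pmf(10, doctrine)
print("P(no penetrators):", round(pmf[0], 4))
print("expected penetrators:", round(sum(k * p for k, p in enumerate(pmf)), 4))
```

Comparing such pmfs across doctrines, against interceptor cost, is the optimization the abstract suggests.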

19.
An Erratum for this article has been published in Applied Stochastic Models in Business and Industry 2005; (in press). This paper presents a futures pricing model based on the discrete time homogeneous semi‐Markov process (DTHSMP). The model is fitted to real data on the primary Italian stock index future. After presenting the pricing model, the DTHSMP solution is given. The solution of the semi‐Markov process gives, for each period of the considered time horizon and for each starting state, the probability distribution of the future price. Copyright © 2005 John Wiley & Sons, Ltd.

20.
In this paper we address the problem of efficiently deriving the steady-state distribution for a continuous time Markov chain (CTMC) S evolving in a random environment E. The process underlying E is also a CTMC, and S is called a Markov modulated process. Markov modulated processes have been widely studied in the literature, since they are applicable when an environment influences the behaviour of a system. For instance, this is the case of a wireless link, whose quality may depend on the state of some random factors such as the intensity of the noise in the environment. In this paper we study the class of Markov modulated processes which exhibit a separable, product-form stationary distribution. We show that several models proposed in the literature can be studied by applying the Extended Reversed Compound Agent Theorem (ERCAT), and new product-forms are also derived. We also address the question of the necessity of ERCAT for product-forms and show a meaningful example of a product-form not derivable via ERCAT.
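Absent a product-form result, the baseline computation is direct: build the joint generator over (environment, system) pairs and solve πQ = 0 with π·1 = 1; a product form means π factorizes into environment and system marginals, which can be checked numerically. The toy environment-modulated finite queue below is invented.

```python
import numpy as np
from scipy.linalg import null_space

# Joint CTMC over (environment, system) pairs, state index i = e * n_s + s.
n_e, n_s = 2, 3
Q_env = np.array([[-1.0, 1.0], [2.0, -2.0]])       # environment switching rates
lam = [0.5, 1.5]                                    # environment-modulated arrival rate
mu = 1.0                                            # service rate (capacity-2 queue)

n = n_e * n_s
Q = np.zeros((n, n))
for e in range(n_e):
    for s in range(n_s):
        i = e * n_s + s
        for e2 in range(n_e):                       # environment transitions
            if e2 != e:
                Q[i, e2 * n_s + s] += Q_env[e, e2]
        if s + 1 < n_s: Q[i, i + 1] += lam[e]       # arrival
        if s > 0:       Q[i, i - 1] += mu           # service completion
Q -= np.diag(Q.sum(axis=1))                         # diagonal so rows sum to zero

pi = null_space(Q.T)[:, 0]; pi /= pi.sum()          # solve pi Q = 0, normalized
joint = pi.reshape(n_e, n_s)
print("joint stationary distribution:\n", joint.round(4))
print("product-form check:", np.allclose(joint, np.outer(joint.sum(1), joint.sum(0))))
```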
