Similar Literature (20 documents found)
1.
The cost allocation process in hospitals typically entails an accounting step-down procedure whereby costs are allocated from non-revenue producing service centres to revenue centres. The resulting revenue centre costs are then compared with the third party (Blue Cross, Medicare, Medicaid) allowable costs. Any costs in excess of the allowable costs are not reimbursable. This procedure has been conceptualized using a Markov chain in a recent journal article. The purpose of this paper is to demonstrate how the Markov model may be used to assess the impact of various changes in the original data without having to recalculate the entire step-down process via a Markov model or any other procedure. The changes include an alternate step-down model, a different cost allocation basis for one or more service centres, and the expansion or contraction of one or more service centres.
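The absorbing-chain view can be made concrete in a few lines. Below is a minimal sketch (all figures hypothetical): service centres are transient states, revenue centres are absorbing states, and the fundamental matrix (I - Q)^-1 pushes service-centre costs through to the revenue centres; assessing a change then amounts to editing Q, R, or the direct-cost vectors and re-running three matrix operations.

```python
import numpy as np

# Hypothetical data: two service centres (S1, S2), two revenue centres (R1, R2).
# Q = allocation fractions among service centres, R = fractions to revenue centres.
Q = np.array([[0.0, 0.2],
              [0.1, 0.0]])
R = np.array([[0.5, 0.3],
              [0.4, 0.5]])
service_cost = np.array([100.0, 80.0])       # direct costs of the service centres
revenue_cost = np.array([500.0, 400.0])      # direct costs of the revenue centres

N = np.linalg.inv(np.eye(2) - Q)             # fundamental matrix of the absorbing chain
B = N @ R                                    # allocation fractions, service -> revenue
full_cost = revenue_cost + service_cost @ B  # fully allocated revenue-centre costs
print(full_cost)
```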

2.
This paper deals with gradually deteriorating equipment whose actual degree of deterioration can be revealed only by inspections. An inspection can be followed by a revision, depending on the system's degree of deterioration. In the absence of inspections and revisions, the working condition of the system evolves according to a Markov chain whose changes of state are not observable, with the possible exception of a breakdown. Examples of this model include production machines subject to stochastic breakdowns, and maintenance of communication systems. The cost structure of the model consists of inspection, revision and operating costs. It is intuitively reasonable that in many applications a simple control-limit rule will be optimal. Such a rule prescribes a revision only when inspection reveals that the degree of deterioration has exceeded some critical level. A special-purpose Markov decision algorithm operating on the class of control-limit rules is developed for the computation of an average-cost optimal schedule of inspections and revisions.
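To make the control-limit idea concrete, here is a sketch (hypothetical chain and costs, with the simplification that inspection happens every period rather than being scheduled) that evaluates each critical level by solving the controlled chain's stationary distribution:

```python
import numpy as np

# Hypothetical 5-state deterioration chain; state 4 is breakdown.
P = np.array([[0.7, 0.2, 0.1, 0.0, 0.0],
              [0.0, 0.6, 0.3, 0.1, 0.0],
              [0.0, 0.0, 0.5, 0.3, 0.2],
              [0.0, 0.0, 0.0, 0.4, 0.6],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
c_inspect, c_revise, c_break = 1.0, 10.0, 50.0
c_oper = np.array([0.0, 1.0, 3.0, 6.0, 0.0])  # operating cost per period by state

def average_cost(limit):
    # Controlled chain: revise (reset to state 0) when the revealed state is at
    # or above the control limit; a breakdown also forces a revision.
    n = len(P)
    Pc = np.zeros((n, n))
    cost = np.zeros(n)
    for s in range(n):
        if s >= limit or s == n - 1:
            Pc[s, 0] = 1.0
            cost[s] = c_inspect + (c_break if s == n - 1 else c_revise)
        else:
            Pc[s] = P[s]
            cost[s] = c_inspect + c_oper[s]
    # Stationary distribution: solve pi (Pc - I) = 0 with sum(pi) = 1.
    A = np.vstack([Pc.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi @ cost

best = min(range(1, 5), key=average_cost)
print("best control limit:", best, "average cost:", average_cost(best))
```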

3.
Considerable benefits have been gained from using Markov decision processes to select condition-based maintenance policies for the asset management of infrastructure systems. A key part of the method is using a Markov process to model the deterioration of condition. However, the Markov model assumes constant transition probabilities irrespective of how long an item has been in a state. The semi-Markov model relaxes this assumption. This paper describes how to fit a semi-Markov model to observed condition data and the results achieved on two data sets. Good results were obtained even where there was only 1 year of observation data.
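A toy sketch of such a fit, assuming complete sojourn records and Weibull holding times (the data layout and distribution choice here are illustrative assumptions, not the paper's procedure):

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical records from inspections: (state, sojourn_years, next_state).
records = [(0, 2.0, 1), (0, 3.5, 1), (1, 1.0, 2), (1, 1.8, 2), (0, 4.2, 1)]

states = sorted({s for s, _, _ in records} | {t for _, _, t in records})
n = len(states)

# Embedded jump-chain probabilities from transition counts.
counts = np.zeros((n, n))
for s, _, t in records:
    counts[s, t] += 1
P = counts / counts.sum(axis=1, keepdims=True).clip(min=1)

# Per-state sojourn-time distributions (Weibull), relaxing the constant
# transition probabilities of the ordinary Markov model.
sojourn = {}
for s in set(r[0] for r in records):
    durations = [d for st, d, _ in records if st == s]
    shape, loc, scale = weibull_min.fit(durations, floc=0)
    sojourn[s] = (shape, scale)
print(P, sojourn)
```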

4.
The prevention and control of coal mine safety accidents is the foundation of coal mine safety evaluation and decision making. Grey prediction suits systems observed over short periods with few data points and small fluctuations, whereas Markov chain theory suits forecasting dynamic processes with large random fluctuations. Combining the strengths of the grey GM(1,1) model and Markov chain theory, an improved grey Markov GM(1,1) model is proposed. The improved GM(1,1) model is first used to fit the trend of human-error accidents in coal mines, and a Markov prediction is then made on that basis to improve forecasting accuracy. A case study on the national coal mine fatality rate per million tons of coal for 2000-2010 shows that the model both captures the overall trend of the human-error fatality rate and suppresses the effect of randomly fluctuating data on prediction accuracy; it is practical for engineering use and is of real value for the prediction and control of human-error safety accidents in coal mines.
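A sketch of the underlying GM(1,1) step on a hypothetical fatality-rate series (not the paper's data); the Markov refinement, indicated in the final comment, classifies the relative residuals into states and corrects forecasts with their empirical transition matrix:

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) model to a non-negative series x0 and return (a, b)."""
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_predict(x0, a, b, k):
    """Predicted x0 value at index k (0-based) from the whitening equation."""
    if k == 0:
        return x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

# Hypothetical fatality-rate series (deaths per million tons), for illustration.
x0 = np.array([5.8, 5.1, 4.9, 4.2, 3.7, 3.1, 2.8])
a, b = gm11_fit(x0)
fitted = np.array([gm11_predict(x0, a, b, k) for k in range(len(x0))])
residual_ratio = (x0 - fitted) / fitted
# A Markov correction would classify residual_ratio into states and use the
# empirical transition matrix of those states to adjust the next forecast.
print(gm11_predict(x0, a, b, len(x0)))      # one-step-ahead GM(1,1) forecast
```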

5.
In this paper, we present a parameter estimation procedure for a condition‐based maintenance model under partial observations. Systems can be in a healthy or unhealthy operational state, or in a failure state. System deterioration is driven by a continuous time homogeneous Markov chain and the system state is unobservable, except the failure state. Vector information that is stochastically related to the system state is obtained through condition monitoring at equidistant sampling times. Two types of data histories are available — data histories that end with observable failure, and censored data histories that end when the system has been suspended from operation but has not failed. The state and observation processes are modeled in the hidden Markov framework and the model parameters are estimated using the expectation–maximization algorithm. We show that both the pseudolikelihood function and the parameter updates in each iteration of the expectation–maximization algorithm have explicit formulas. A numerical example is developed using real multivariate spectrometric oil data coming from the failing transmission units of 240‐ton heavy hauler trucks used in the Athabasca oil sands of Alberta, Canada.
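For flavor, a sketch using the off-the-shelf EM implementation in hmmlearn on synthetic condition vectors. This is an assumption-laden stand-in: the paper derives explicit EM updates and handles failure-terminated and censored histories, which this generic fit does not.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Synthetic condition-monitoring vectors sampled at equidistant times.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(80, 3))
unhealthy = rng.normal(2.0, 1.5, size=(40, 3))
X = np.vstack([healthy, unhealthy])

# Two hidden operational states (healthy/unhealthy); EM (Baum-Welch) estimates
# the transition matrix and state-conditional observation densities.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
model.fit(X)
print(model.transmat_)
print(model.predict(X)[-5:])   # decoded states at the last sampling epochs
```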

6.
We consider a Bernoulli process where the success probability changes with respect to a Markov chain. Such a model represents an interesting application of stochastic processes where the parameters are not constants; rather, they are stochastic processes themselves due to their dependence on a randomly changing environment. The model operates in a random environment depicted by a Markov chain so that the probability of success at each trial depends on the state of the environment. We will concentrate, in particular, on applications in reliability theory to motivate our model. The analysis will focus on transient as well as long-term behaviour of various processes involved.
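A short sketch (hypothetical rates) contrasting the long-run success rate, computed from the stationary law of the environment chain, with a simulated transient estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical two-state environment and state-dependent success probabilities.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p_success = np.array([0.95, 0.4])

# Long-run fraction of successes = sum_i pi_i * p_i, pi the stationary law.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print("long-run success rate:", pi @ p_success)

# Transient behaviour by simulation.
state, successes, n = 0, 0, 10_000
for _ in range(n):
    successes += rng.random() < p_success[state]
    state = rng.choice(2, p=P[state])
print("simulated rate:", successes / n)
```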

7.
莫晓云 《经济数学》2010,27(3):28-34
Building on a Markov chain model of customer relationship development, this paper constructs a stochastic process for the returns a firm's customers generate. It is proved that, under suitable assumptions, the customer return process is a Markov chain, and in fact a time-homogeneous one. The transition probabilities of this chain are derived, and from them several formulas for the expected return customers bring to the firm are obtained, providing an effective quantitative basis for choosing a customer relationship development strategy.
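In this spirit, a sketch of one such expected-return computation over a finite horizon, with hypothetical relationship states, transition probabilities and per-period returns:

```python
import numpy as np

# Hypothetical customer states: prospect, active, loyal, churned (absorbing).
P = np.array([[0.5, 0.4, 0.0, 0.1],
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
reward = np.array([0.0, 50.0, 120.0, 0.0])   # per-period return by state
pi0 = np.array([1.0, 0.0, 0.0, 0.0])         # customer starts as a prospect

# Expected return to the firm over T periods: sum_t pi0 P^t reward.
T, total, dist = 20, 0.0, pi0.copy()
for _ in range(T):
    total += dist @ reward
    dist = dist @ P
print("expected 20-period return:", total)
```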

8.
Performance sensitivity analysis of the M/G/1 queueing system
Non-Markovian queueing systems are often used to model practical engineering problems such as communication networks. For the general M/G/1 queueing system, this paper studies steady-state performance sensitivity analysis through the system's embedded Markov chain, and derives steady-state performance sensitivity formulas expressed in terms of the potentials of the embedded chain. Because the embedded Markov chain is far simpler than the semi-Markov process describing the system state, these results greatly simplify both simulation-based computation of performance sensitivities and optimization of the M/G/1 queueing system.
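A sketch of the embedded chain for the special case M/D/1 (deterministic service, a simplifying assumption; for general G the Poisson probabilities are mixed against the service-time law), truncated to a finite state space:

```python
import numpy as np
from scipy.stats import poisson

lam, D, N = 0.5, 1.0, 60   # arrival rate, deterministic service time, truncation
# a_k = P(k Poisson arrivals during one service); M/D/1 chosen for simplicity.
a = poisson.pmf(np.arange(N), lam * D)

# Embedded chain at departure epochs: number of customers left behind.
P = np.zeros((N, N))
P[0, :] = a
for i in range(1, N):
    P[i, i - 1:] = a[: N - i + 1]
P /= P.sum(axis=1, keepdims=True)  # renormalize the truncated tail

# Stationary queue-length distribution of the embedded chain.
A = np.vstack([P.T - np.eye(N), np.ones(N)])
b = np.append(np.zeros(N), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("mean queue length at departures:", pi @ np.arange(N))
```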

9.
Postponement is an important strategic means of reconciling the absolute variability of market demand with the relative stability of a production system. To analyze the performance of a production system that implements postponement, the modeling and analysis methods of generalized stochastic Petri nets (GSPN) are applied; exploiting the isomorphism between GSPNs and Markov chains, the GSPN model is transformed into an equivalent Markov chain model. The main performance indices of the production system are then obtained through the Markov chain and related mathematical methods. The approach quantifies not only the overall performance of the production system under postponement but also the operating efficiency of each of its stages. Finally, a worked example verifies the soundness and effectiveness of the method, enriching the methodology available for postponement research.
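The Markov chain step can be sketched directly: once the GSPN's reachability graph is generated, the markings and firing rates define a CTMC generator whose stationary law yields the performance indices (a hypothetical three-marking example):

```python
import numpy as np

# Hypothetical 3-marking reachability graph of a small GSPN; entry Q[i, j]
# is the firing rate of the timed transition taking marking i to marking j.
Q = np.array([[-2.0, 2.0, 0.0],
              [0.0, -1.5, 1.5],
              [3.0, 0.0, -3.0]])

# Stationary distribution of the isomorphic CTMC: solve pi Q = 0, sum pi = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.append(np.zeros(3), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("marking probabilities:", pi)
# Performance indices follow, e.g. throughput of the transition firing
# out of marking 1 is pi[1] * 1.5.
```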

10.
A model is established to describe the structures of tilled soils using Markov chain theory. The effectiveness of the model in describing soil structures, and its accuracy when the model parameters are determined from limited field data, is investigated by considering variances of the transition probabilities and Markov chain state occurrences in finite-length chains. Criteria for correlation of soil structures at small horizontal and vertical displacements are derived, in order to establish the distances at which soil structures become effectively independent. To this end, a mathematical analysis is made of limiting covariances, generally applicable to the type of Markov chain used in describing these structures, in order to drastically reduce the computing time needed to process field data. Similarity coefficients are defined from the theory to measure similarity between different soil structures, and are compared in practice.
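As a toy illustration, transition probabilities can be estimated from observed vertical sequences of structure classes and compared across plots; the similarity coefficient below is a naive stand-in, not the paper's definition:

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Empirical transition probabilities from an observed state sequence."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True).clip(min=1)

# Hypothetical vertical sequences of aggregate/pore classes from two tilled plots.
seq1 = [0, 0, 1, 2, 1, 0, 1, 1, 2, 2, 1, 0, 0, 1]
seq2 = [0, 1, 1, 2, 2, 1, 1, 0, 1, 2, 1, 1, 0, 1]
P1, P2 = transition_matrix(seq1, 3), transition_matrix(seq2, 3)

# A crude similarity coefficient in [0, 1] from the total variation between rows.
similarity = 1.0 - np.abs(P1 - P2).sum() / (2 * 3)
print(similarity)
```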

11.
We present a new family of models based on graphs that may have undirected, directed and bidirected edges. We name these new models marginal AMP (MAMP) chain graphs because each of them is Markov equivalent to some AMP chain graph under marginalization of some of its nodes. However, MAMP chain graphs subsume not only AMP chain graphs but also multivariate regression chain graphs. We describe global and pairwise Markov properties for MAMP chain graphs and prove their equivalence for compositional graphoids. We also characterize when two MAMP chain graphs are Markov equivalent. For Gaussian probability distributions, we further show that every MAMP chain graph is Markov equivalent to some directed acyclic graph with deterministic nodes under marginalization and conditioning on some of its nodes. This is important because it implies that the independence model represented by a MAMP chain graph can be accounted for by some data-generating process that is partially observed and subject to selection bias. Finally, we modify MAMP chain graphs so that they are closed under marginalization for Gaussian probability distributions. This is a desirable feature because it guarantees parsimonious models under marginalization.

12.
13.
This work is concerned with weak convergence of non-Markov random processes modulated by a Markov chain. The motivation of our study stems from a wide variety of applications in actuarial science, communication networks, production planning, manufacturing and financial engineering. Owing to various modelling considerations, the modulating Markov chain often has a large state space. Aiming at reducing computational complexity, a two-time-scale formulation is used. Under this setup, the Markov chain belongs to the class of nearly completely decomposable chains, where the state space is split into several subspaces. Within each subspace the Markov chain transitions rapidly, while among different subspaces it moves relatively infrequently. Aggregating all the states of the Markov chain in each subspace into a single super state leads to a new process. It is shown that under such aggregation schemes, a suitably scaled random sequence converges to a switching diffusion process.
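A sketch of the aggregation step on a hypothetical four-state chain with two weakly coupled blocks: within-block stationary laws weight the rows, producing a two-super-state approximation of the slow dynamics:

```python
import numpy as np

def stationary(P):
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    return np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]

eps = 0.01
# Nearly completely decomposable chain: fast mixing inside two 2-state blocks,
# O(eps) transitions between blocks (all numbers hypothetical).
P = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.3, 0.7, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.8, 0.2]])
E = np.array([[-1.0, 0.0, 1.0, 0.0],
              [0.0, -1.0, 0.5, 0.5],
              [1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, -1.0, 0.0]])
Peps = P + eps * E
blocks = [[0, 1], [2, 3]]

# Aggregate each block into a super state, weighting rows by the within-block
# stationary law; the result approximates the slow inter-block dynamics.
Pagg = np.zeros((2, 2))
for i, bi in enumerate(blocks):
    nu = stationary(P[np.ix_(bi, bi)])
    for j, bj in enumerate(blocks):
        Pagg[i, j] = nu @ Peps[np.ix_(bi, bj)].sum(axis=1)
print(Pagg)
```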

14.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
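A minimal sketch of an interactive chain (a hypothetical two-state kernel in which the transition probabilities depend on the current population distribution) iterated to its fixed point:

```python
import numpy as np

def P_of(x):
    """Hypothetical interactive kernel: attraction to the more popular state."""
    p = 0.2 + 0.6 * x[1]          # chance of moving to / staying in state 1
    return np.array([[1 - p, p],
                     [1 - p, p]])

x = np.array([0.9, 0.1])          # initial population distribution by state
for _ in range(100):
    x = x @ P_of(x)               # population evolves under its own kernel
print("equilibrium distribution:", x)   # fixed point of x = x P(x)
```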

15.
A continuous‐time binary‐matrix‐valued Markov chain is used to model the process by which social structure affects individual behavior. The model is developed in the context of sociometric networks of interpersonal affect. By viewing the network as a time‐dependent stochastic process it is possible to construct transition intensity equations for the probability that choices between group members will change. These equations can contain parameters for structural effects. Empirical estimates of the parameters can be interpreted as measures of structural tendencies. Some elementary processes are described and the application of the model to cross‐sectional data is explained in terms of the steady state solution to the process.
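A sketch of such a process at the dyad level with a single structural (reciprocity) parameter, all rates hypothetical: the steady state solves pi Q = 0 and matches the long-run transition matrix:

```python
import numpy as np
from scipy.linalg import expm

# Dyad states for an ordered pair (i chooses j?, j chooses i?): 00, 10, 01, 11.
# lam0/mu0 are baseline tie formation/dissolution intensities; `recip` is a
# structural effect multiplying formation rates when the other tie is present.
lam0, mu0, recip = 0.3, 0.5, 3.0
rates = {(0, 1): lam0, (0, 2): lam0, (1, 3): lam0 * recip, (2, 3): lam0 * recip,
         (1, 0): mu0, (2, 0): mu0, (3, 1): mu0, (3, 2): mu0}
Q = np.zeros((4, 4))
for (i, j), r in rates.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: solve pi Q = 0, sum pi = 1; cross-check against expm(Q t).
A = np.vstack([Q.T, np.ones(4)])
pi = np.linalg.lstsq(A, np.append(np.zeros(4), 1.0), rcond=None)[0]
print("P(mutual choice):", pi[3], expm(Q * 100.0)[0, 3])
```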

16.
Hidden Markov chains (HMC) are widely applied to various problems. This success is mainly due to the fact that the hidden process can be recovered even from very large data sets. These models were recently generalized to the 'pairwise Markov chain' (PMC) model, which admits the same processing power with greater modeling power. The aim of this note is to propose a further generalization, called triplet Markov chains (TMC), in which the distribution of the couple (hidden process, observed process) is the marginal distribution of a Markov chain. As with HMC, we show that posterior marginals remain computable in triplet Markov chains. We provide a necessary and sufficient condition for a TMC to be a PMC, which shows that the new model is strictly more general. Furthermore, a link with Dempster–Shafer fusion is specified. To cite this article: W. Pieczynski, C. R. Acad. Sci. Paris, Ser. I 335 (2002) 275–278.
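A sketch of why posterior marginals remain computable: the pair V = (X, U) is Markov given the observations, so standard forward-backward over the product state space applies (all kernels below are randomly generated placeholders):

```python
import numpy as np

# Triplet (X, U, Y): X hidden, U auxiliary, Y observed; X, U binary, Y ternary.
nx, nu, ny = 2, 2, 3
rng = np.random.default_rng(4)
Pv = rng.dirichlet(np.ones(nx * nu), size=nx * nu)   # transition kernel of V=(X,U)
Ey = rng.dirichlet(np.ones(ny), size=nx * nu)        # emission p(y | v)
y = [0, 2, 1, 1, 0]

# Forward pass with per-step normalization.
alpha = np.full(nx * nu, 1.0 / (nx * nu)) * Ey[:, y[0]]
alphas = [alpha / alpha.sum()]
for t in range(1, len(y)):
    alpha = (alphas[-1] @ Pv) * Ey[:, y[t]]
    alphas.append(alpha / alpha.sum())

# Backward pass and posterior marginals of V, then of X by summing out U.
beta, post = np.ones(nx * nu), []
for t in reversed(range(len(y))):
    g = alphas[t] * beta
    post.append(g / g.sum())
    beta = Pv @ (Ey[:, y[t]] * beta)
post.reverse()
print(np.array(post).reshape(len(y), nx, nu).sum(axis=2))  # p(x_t | y_1:T)
```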

17.
An infinite system of Markov chains is used to describe population development in an interconnected system of local populations. The model can also be viewed as an inhomogeneous Markov chain where the temporal inhomogeneity is a function of the mean of the process. Conditions for population persistence, in the sense of stochastic boundedness, are found.

18.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non‐absorbing. In the former case the chain is absorbing, in the latter it is regular.
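A sketch of the non-absorbing case (b) with hypothetical numbers: states are (location, duration) pairs with duration capped at D, an inertia function makes leaving less likely as duration grows, and the equilibrium distribution follows from the finite chain:

```python
import numpy as np

L, D = 2, 3   # locations and the finite upper bound on duration
stay = lambda d: 1 - 0.4 / d        # cumulative inertia: leaving gets less likely

# States are (location, duration); at duration D the clock stops advancing
# but the state is non-absorbing, so the chain is regular.
idx = {(l, d): l * D + d - 1 for l in range(L) for d in range(1, D + 1)}
n = L * D
P = np.zeros((n, n))
for (l, d), i in idx.items():
    s = stay(d)
    P[i, idx[(l, min(d + 1, D))]] += s           # stay: duration clock advances
    for m in range(L):                            # leave: restart at duration 1
        if m != l:
            P[i, idx[(m, 1)]] += (1 - s) / (L - 1)

A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]
print(pi.reshape(L, D))   # equilibrium mass over (location, duration)
```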

19.
In this paper, we study a reflected Markov-modulated Brownian motion with two-sided reflection in which the drift, the diffusion coefficient and the two boundaries are (jointly) modulated by a finite-state-space irreducible continuous-time Markov chain. The goal is to compute the stationary distribution of this Markov process, which in addition to the complication of having a stochastic boundary can also include jumps at state-change epochs of the underlying Markov chain because of the boundary changes. We give the general theory and then specialize to the case where the underlying Markov chain has two states.
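Absent the paper's analytic machinery, the process is easy to approximate by Euler simulation; the sketch below (hypothetical parameters) shows the boundary-induced jumps at environment switches and estimates the stationary law empirically:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical two-state environment modulating drift, volatility and barriers.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
mu, sig = np.array([1.0, -1.5]), np.array([0.5, 1.0])
lo, hi = np.array([0.0, 0.2]), np.array([1.0, 0.8])

dt, n = 1e-3, 200_000
s, x, samples = 0, 0.5, []
for k in range(n):
    if rng.random() < -Q[s, s] * dt:       # environment jump
        s = 1 - s
        x = min(max(x, lo[s]), hi[s])      # jump forced by the moving boundaries
    x += mu[s] * dt + sig[s] * np.sqrt(dt) * rng.normal()
    x = min(max(x, lo[s]), hi[s])          # two-sided reflection
    if k % 50 == 0:
        samples.append((s, x))
env, level = np.array(samples).T
print("P(env = 0):", np.mean(env == 0), "mean level:", level.mean())
```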

20.
This work focuses on optimal controls for hybrid systems of renewable resources in random environments. We propose a new formulation to treat the optimal exploitation with harvesting and renewing. The random environments are modeled by a Markov chain, which is hidden and can be observed only in a Gaussian white noise. We use the Wonham filter to estimate the state of the Markov chain from the observable process. Then we formulate a harvesting–renewing model under partial observation. The Markov chain approximation method is used to find a numerical approximation of the value function and optimal policies. Our work takes into account natural aspects of the resource exploitation in practice: interacting resources, switching environment, renewing and partial observation. Numerical examples are provided to demonstrate the results and explore new phenomena arising from new features in the proposed model.
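A sketch of the filtering step: an Euler discretization of the Wonham filter tracking the hidden environment chain from the noisy observation increments (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([[-0.5, 0.5],
              [0.3, -0.3]])      # generator of the hidden environment chain
f = np.array([1.0, -1.0])        # drift observed through Gaussian white noise
sigma, dt, T = 0.5, 0.01, 20.0
n = int(T / dt)

# Simulate the hidden chain and observations; run the filter alongside.
state, p = 0, np.array([0.5, 0.5])
for _ in range(n):
    if rng.random() < -Q[state, state] * dt:   # hidden chain jumps
        state = 1 - state
    dY = f[state] * dt + sigma * np.sqrt(dt) * rng.normal()
    # Wonham filter (Euler): dp = pQ dt + p (f - fbar)(dY - fbar dt) / sigma^2
    fbar = p @ f
    p = p + (p @ Q) * dt + p * (f - fbar) * (dY - fbar * dt) / sigma**2
    p = np.clip(p, 1e-12, None)
    p /= p.sum()                 # keep p a probability vector
print("true state:", state, "filtered P(state 0):", p[0])
```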
