Similar Documents

20 similar documents were retrieved.
1.
In this paper we extend our previous semi-Markov reward model, which attached costs to the duration spent in states, by including the costs of making a transition from one state to another. Theoretical results concerning the moments, and consequently the distribution, of interval costs for every member and of the total cost per unit period, at any time and also over time intervals, are obtained and provided in analytic form for the semi-Markov reward model with discounting. The results are applied to an open healthcare system. In the healthcare domain such transition costs allow us to evaluate the overall costs of therapy or clinical intervention where an operation or other treatment may be an option. The model can be used for strategic approaches to planning and evaluating long-term patient care. The results demonstrate the potential of the model to reveal the differential costs of different therapeutic strategies and to explore optimal solutions.
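As a rough illustration of the kind of quantity the model yields, the sketch below estimates the expected discounted cost over a finite horizon for a small discrete-time semi-Markov chain carrying both per-period occupancy costs and lump-sum transition costs. The geometric holding times, the Monte Carlo approach and every parameter value are assumptions for the example, not the paper's analytic results.

```python
import numpy as np

# Illustrative sketch only: expected discounted cost in a discrete-time semi-Markov
# reward model with occupancy costs and transition costs. Geometric holding times and
# all parameter values are assumptions for the example, not taken from the paper.

rng = np.random.default_rng(0)

P = np.array([[0.0, 0.7, 0.3],          # embedded transition matrix between states
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])
mean_stay = np.array([2.0, 4.0, 3.0])   # mean holding time (periods) in each state
c = np.array([100.0, 250.0, 60.0])      # cost per period spent in each state
K = np.array([[0.0, 50.0, 20.0],        # lump-sum cost of a transition i -> j
              [30.0, 0.0, 80.0],
              [10.0, 40.0, 0.0]])
v = 0.97                                # per-period discount factor
T = 100                                 # planning horizon in periods

def discounted_cost(start, n_runs=10000):
    """Monte Carlo estimate of the expected discounted cost over T periods."""
    totals = np.zeros(n_runs)
    for r in range(n_runs):
        t, i, total = 0, start, 0.0
        while t < T:
            stay = rng.geometric(1.0 / mean_stay[i])     # holding time in state i
            for s in range(min(stay, T - t)):
                total += (v ** (t + s)) * c[i]           # discounted occupancy cost
            t += stay
            if t < T:
                j = rng.choice(3, p=P[i])                # next state from embedded chain
                total += (v ** t) * K[i, j]              # discounted transition cost
                i = j
        totals[r] = total
    return totals.mean(), totals.std()

mean_cost, sd_cost = discounted_cost(start=0)
print(f"expected discounted cost: {mean_cost:.1f} (sd {sd_cost:.1f})")
```

Re-running the estimate with different transition-cost matrices K mimics, in a crude way, the comparison of alternative therapeutic strategies described in the abstract.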

2.
This paper suggests a generalized semi‐Markov model for manpower planning, which could be adopted in cases of unavailability of candidates with the desired qualifications/experience, as well as in cases where an organization provides training opportunities to its personnel. In this context, we incorporate training classes into the framework of a non‐homogeneous semi‐Markov system and we introduce an additional, external semi‐Markov system providing the former with potential recruits. For the model above, referred to as the Augmented Semi‐Markov System, we derive the equations that reflect the expected number of persons in each grade and we also investigate its limiting population structure. An illustrative example is provided. Copyright © 2010 John Wiley & Sons, Ltd.
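For intuition, the sketch below shows the basic expected-stock recursion of a simple Markov manpower system with external recruitment; it deliberately simplifies the paper's non-homogeneous semi-Markov system with training classes, and all matrices and flows are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: expected grade stocks in a Markov manpower system with recruitment.
# This is a simplification of the paper's non-homogeneous semi-Markov setting with
# training classes; the matrices and flows below are illustrative assumptions.

P = np.array([[0.80, 0.10, 0.00],   # internal promotion/transfer probabilities
              [0.00, 0.85, 0.10],
              [0.00, 0.00, 0.90]])  # each row's deficit from 1 is wastage (leavers)
r = np.array([0.7, 0.2, 0.1])       # distribution of recruits over the grades
R = 20                              # number of recruits per period
n = np.array([100.0, 60.0, 30.0])   # initial expected stocks in each grade

for _ in range(10):                 # project expected stocks ten periods ahead
    n = n @ P + R * r
print(np.round(n, 1))
```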

3.
We have previously used Markov models to describe movements of patients between hospital states; these may be actual or virtual and are described by a phase-type distribution. Here we extend this approach to a Markov reward model for a healthcare system with Poisson admissions and an absorbing state, typically death. The distribution of costs is evaluated for any time, and expressions are derived for the mean and variance of costs. The average cost at any time is then determined for two scenarios, the Therapeutic and Prosthetic models respectively. This example is used to illustrate the idea that keeping acute patients longer in hospital, to ensure fitness for discharge, may reduce costs by decreasing the number of patients who become long-stay. In addition we develop a Markov reward model for a healthcare system that includes states where the patient is in hospital and states where the patient is in the community. In each case, the length of stay is described by a phase-type distribution, thus enabling the representation of durations and costs in each phase within a Markov framework. The model can be used to determine costs for the entire system, thus facilitating a systems approach to the planning of healthcare and a holistic approach to costing. Such models help us to assess the complex relationship between hospital and community care.
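The sketch below illustrates the phase-type costing idea in its simplest form: for a discrete-time absorbing chain over care phases, the fundamental matrix gives the expected number of days spent in each phase and hence an expected cost per spell. The phases, daily transition probabilities and unit costs are invented for the example and are not taken from the paper.

```python
import numpy as np

# Sketch of phase-type costing (illustrative parameters, not the paper's): phases 0-1
# are hospital care (acute, long-stay), phase 2 is community care, and absorption
# corresponds to discharge or death.

Q = np.array([[0.90, 0.05, 0.03],     # daily transitions among the transient phases
              [0.00, 0.98, 0.01],
              [0.00, 0.00, 0.95]])    # the remaining row mass is absorption
alpha = np.array([1.0, 0.0, 0.0])     # every patient starts in the acute phase
cost_per_day = np.array([400.0, 150.0, 40.0])

N = np.linalg.inv(np.eye(3) - Q)      # fundamental matrix: expected days in each phase
expected_days = alpha @ N
expected_cost = expected_days @ cost_per_day
print("expected days per phase:", np.round(expected_days, 1))
print("expected cost per spell:", round(float(expected_cost), 1))
```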

4.
This paper considers the newsvendor problem with quality-dependent demand in an environment of imperfect product quality. Markov theory is used to characterise the dynamic evolution of the relationship between quality level and demand, and the notion of imperfect quality is incorporated into the decision framework of the newsvendor problem, leading to a new optimization model and decision mechanism for the stochastic inventory system. Using basic properties of the stochastic quality process, such as first-passage behaviour, ergodicity and irreducibility, a reliability and profit evaluation mechanism for the operation and management of the stochastic inventory system is constructed. The main conclusions of the model show that, under imperfect quality, the retailer's optimal ordering decision is determined by the transition probabilities of the quality states; if the stochastic process induced by fluctuations in the quality level is an irreducible, ergodic Markov chain, the decision mechanism of the inventory system is stable.
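A minimal sketch of the flavour of such a decision rule follows: demand is Poisson with a rate that depends on a quality state evolving as an ergodic Markov chain, and the order quantity is set by the critical fractile under the stationary mixture demand. Using the stationary distribution is a simplification of the paper's state-dependent analysis, and the parameters (and the SciPy dependency) are assumptions for illustration.

```python
import numpy as np
from scipy.stats import poisson

# Sketch (illustrative, not the paper's exact formulation): a newsvendor whose demand
# depends on an imperfect-quality state that evolves as an ergodic Markov chain. The
# order quantity is chosen by the critical fractile under the stationary mixture demand.

P = np.array([[0.8, 0.2],            # quality-state transition matrix (high, low)
              [0.3, 0.7]])
demand_mean = np.array([50.0, 20.0]) # Poisson demand rate in each quality state

# stationary distribution of the quality chain (left eigenvector for eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()

p, c, s = 10.0, 6.0, 1.0             # unit price, cost, salvage value
critical_fractile = (p - c) / (p - s)

def mixture_cdf(q):
    """CDF of demand under the stationary mixture of quality states."""
    return sum(pi[i] * poisson.cdf(q, demand_mean[i]) for i in range(len(pi)))

q = 0
while mixture_cdf(q) < critical_fractile:
    q += 1
print("stationary quality mix:", np.round(pi, 3), "order quantity:", q)
```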

5.
This paper presents an integrated platform for multi-sensor equipment diagnosis and prognosis. The integrated framework is based on the hidden semi-Markov model (HSMM). Unlike a state in a standard hidden Markov model (HMM), a state in an HSMM generates a segment of observations rather than a single observation, so the HSMM structure has a temporal component that the HMM lacks. In this framework, the states of the HSMM represent the health status of a component, and the duration of a health state is modeled by an explicit Gaussian probability function. The model parameters (i.e., initial state distribution, state transition probability matrix, observation probability matrix, and health-state duration probability distribution) are estimated through a modified forward–backward training algorithm, and the re-estimation formulae for the model parameters are derived. The trained HSMMs can be used to diagnose the health status of a component. Through estimation of the health-state duration probability distribution and the proposed backward recursive equations, one can predict the remaining useful life of the component. To determine the “value” of the information from each sensor, discriminant function analysis is employed to adjust the weight or importance assigned to a sensor; sensor fusion therefore becomes possible in this HSMM-based framework.
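The snippet below sketches only the simplest prognostic consequence of Gaussian state durations: expected remaining useful life is what is left of the current health state's mean duration plus the mean durations of the downstream states. The health states, their mean durations and the direct-sum rule are illustrative assumptions; the paper's modified forward–backward estimation is not reproduced here.

```python
import numpy as np

# Sketch of the remaining-useful-life idea behind an HSMM prognostic model
# (illustrative parameters; the paper estimates them via a modified
# forward-backward algorithm, which is not reproduced here).

# Gaussian duration models (mean, std) for health states: good -> worn -> faulty
duration_mean = np.array([400.0, 250.0, 120.0])
duration_std = np.array([60.0, 40.0, 30.0])

def expected_rul(current_state, time_in_state):
    """Expected remaining life: what is left of the current state's mean duration
    plus the mean durations of all later health states."""
    remaining_current = max(duration_mean[current_state] - time_in_state, 0.0)
    downstream = duration_mean[current_state + 1:].sum()
    return remaining_current + downstream

print(expected_rul(current_state=1, time_in_state=100.0))  # worn for 100 hours
```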

6.
The shock and degradation damage processes experienced by a system commonly exhibit multi-stage characteristics and mutual dependence. To model and analyse the dependence between shocks and degradation more precisely, this paper builds a composite model of multi-stage shock and degradation processes and proposes a more general form of dependence between the two: both contribute simultaneously to the system's cumulative damage, which in turn drives changes of system stage, while the stage transitions feed back to influence the shock and degradation processes. By constructing a Markov renewal process and working with the semi-Markov kernel, an analytical expression for the reliability of this class of shock-degradation models is obtained.

7.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore of considerable interest.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state that is independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently after the capture of the event.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then allows an assumption that yields the reduction: the ergodic class is reduced to its stronger part only, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it remains a Markov chain.
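A rough sketch of the reduction idea, not the paper's exact algorithm, is given below: rank the states of an irreducible chain by steady-state probability, keep the dominant "strong" part, and renormalise the restricted transition matrix so that it remains stochastic. The example chain is invented.

```python
import numpy as np

# Rough sketch of the reduction idea (not the paper's exact algorithm): rank states of
# an irreducible chain by steady-state probability, keep the dominant "strong part",
# and renormalise the restricted transition matrix so it stays stochastic.

P = np.array([[0.90, 0.08, 0.01, 0.01],
              [0.10, 0.85, 0.03, 0.02],
              [0.30, 0.30, 0.30, 0.10],
              [0.25, 0.25, 0.25, 0.25]])

# steady-state distribution pi with pi P = pi
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()

order = np.argsort(-pi)              # states in decreasing steady-state probability
keep = order[:2]                     # retain only the strong (high-probability, slow) part
P_red = P[np.ix_(keep, keep)]
P_red = P_red / P_red.sum(axis=1, keepdims=True)   # renormalise rows -> still a Markov chain
print("kept states:", keep)
print("reduced matrix:\n", np.round(P_red, 3))
```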

8.
An Erratum for this article has been published in Applied Stochastic Models in Business and Industry 2005 (in press). This paper presents a futures pricing model based on the discrete-time homogeneous semi-Markov process (DTHSMP). The model is applied to real data on the primary Italian stock index future. After presenting the pricing model, the DTHSMP solution is given. The solution of the semi-Markov process gives, for each period of the considered time horizon and for each starting state, the probability distribution of the future price. Copyright © 2005 John Wiley & Sons, Ltd.

9.
Multistate transition models are increasingly used in credit risk applications because they allow us to quantify the evolution of the process among different states. If the process is Markov, analysis and prediction are substantially simpler, so analysts would like to use these models when they are applicable. In this paper, we develop a procedure for assessing the Markov hypothesis and discuss different ways of implementing the test procedure. One issue when the sample size is large is that statistical test procedures will detect even small deviations from the Markov model when these differences are not of practical interest. To address this problem, we propose an approach to formulate and test the null hypothesis of “weak non-Markov.” The situation where the transition probabilities are heterogeneous is also examined, and approaches to accommodate this case are indicated. Simulation studies are used extensively to study the properties of the procedures, and two applications are given to illustrate the results.
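One standard way to test the first-order Markov hypothesis against a second-order alternative, a likelihood ratio test on transition counts, is sketched below for orientation; the paper's "weak non-Markov" formulation and its heterogeneity adjustments go well beyond this. The simulated chain, the SciPy dependency and the chi-square approximation are assumptions of the sketch.

```python
import numpy as np
from scipy.stats import chi2

# Sketch: likelihood ratio test of a first-order Markov chain against a second-order
# alternative, based on observed transition counts. Illustrative only.

rng = np.random.default_rng(1)
S = 3
chain = list(rng.integers(0, S, size=5000))      # replace with the observed state sequence

# transition counts for the first-order (n1[i, j]) and second-order (n2[h, i, j]) models
n1 = np.zeros((S, S))
n2 = np.zeros((S, S, S))
for t in range(2, len(chain)):
    h, i, j = chain[t - 2], chain[t - 1], chain[t]
    n1[i, j] += 1
    n2[h, i, j] += 1

def loglik(counts):
    """Maximised log-likelihood for the model defined by the given count array."""
    totals = counts.sum(axis=-1, keepdims=True)
    p = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    logp = np.log(p, out=np.zeros_like(p), where=p > 0)
    return np.sum(counts * logp)

lr = 2 * (loglik(n2) - loglik(n1))
df = S * (S - 1) * (S - 1)                       # extra parameters in the second-order model
print("LR statistic:", round(float(lr), 2), "p-value:", round(float(chi2.sf(lr, df)), 3))
```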

10.
This paper derives a Markov decision process model for the profitability of credit cards, which allows lenders to find an optimal dynamic credit limit policy. The states of the system are based on the borrower’s behavioural score and the decisions are what credit limit to give the borrower each period. In determining which Markov chain best describes the borrower’s performance, second order as well as first order Markov chains are considered and estimation procedures developed that deal with the low default levels that may exist in the data. A case study is given in which the optimal credit limit is derived and the results compared with the actual outcomes.
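The dynamic-programming core of such a model can be sketched as a small value iteration: each behavioural-score band is a state, each candidate credit limit an action, and band- and limit-dependent transition matrices and one-period profits (all assumed numbers here) drive the recursion. The paper's second-order chains and low-default estimation issues are not modelled.

```python
import numpy as np

# Sketch (illustrative numbers): choose a credit limit for each behavioural-score band
# to maximise discounted profit, given limit-dependent transition matrices and rewards.

states = 3                      # behavioural-score bands: low, medium, high
limits = [1000.0, 3000.0]       # candidate credit limits (actions)
beta = 0.95                     # discount factor

# P[a][i, j]: transition probabilities between score bands under limit a (assumed)
P = [np.array([[0.70, 0.20, 0.10],
               [0.20, 0.60, 0.20],
               [0.10, 0.20, 0.70]]),
     np.array([[0.60, 0.20, 0.20],
               [0.15, 0.60, 0.25],
               [0.05, 0.20, 0.75]])]
# r[a][i]: expected one-period profit (interest and fees minus default losses), assumed
r = [np.array([20.0, 40.0, 60.0]),
     np.array([-10.0, 70.0, 120.0])]

V = np.zeros(states)
for _ in range(500):            # value iteration
    Q = np.array([r[a] + beta * P[a] @ V for a in range(len(limits))])
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)
print("optimal limit per band:", [limits[a] for a in policy])
```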

11.
12.
The paper presents a formal approach which may increase the realism and parsimony of higher‐order Markov models applied to certain human behaviors. Often in behavioral applications, any improvements in fit available from increasing the order of a Markov model would be more than offset by interpretive problems caused by the very rapid increase in the number of independent parameters. The model proposed here for the higher‐order process greatly reduces the number of independent parameters, replacing them with sociologically relevant effects of persistence in and reversion to previous conditions.

The general model is called the “reversion model.” In it, individuals are allowed to carry along some information about their pasts, for a number of periods corresponding to the order of the model. The parameters describing residence histories are constructed to give each individual an underlying set of first‐order transition probabilities, which are modified by experience of the various states of the system. When an individual occupies a particular state, his relative probability of future residence there (vis‐a‐vis the other states as a group) is permitted to change. But occupation of a particular state is not permitted to affect the relative chances of residence among the other states. With suitable constraints, the number of parameters of this higher‐order process no longer increases geometrically with the order, but only arithmetically.

Maximum likelihood estimation formulas are derived for the reversion model, which is then applied to longitudinal data on the work activities of U.S. Ph.D. physicists and chemists in 1960–1966 and is found to fit well according to likelihood ratio tests.
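One way to realise the reversion idea, though not necessarily the paper's exact parameterisation, is sketched below: second-order transition probabilities are built from a single first-order matrix plus one reversion weight that inflates the chance of returning to the state occupied two periods ago, after which the row is renormalised. Parameters then grow arithmetically with the order rather than geometrically.

```python
import numpy as np

# Illustrative sketch of a reversion-style second-order model: a first-order matrix P
# plus a single reversion weight w, rather than a full second-order parameter set.

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
w = 2.0                               # reversion weight toward the state held at t-2

def second_order_prob(prev2, prev1):
    """P(next | state at t-2 was prev2, state at t-1 was prev1)."""
    q = P[prev1].copy()
    q[prev2] *= w                     # experience of prev2 raises its relative odds
    return q / q.sum()                # renormalise so the row is a distribution

print(second_order_prob(prev2=0, prev1=1))
```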

13.
This paper is the first to study continuous-time discounted Markov decision programming with a countable state space and countable action sets under the condition that neither the reward functions nor the family of transition rates is uniformly bounded. A new class of unbounded reward functions is introduced and, within a new class of Markov policies, the existence and structure of optimal policies are discussed. Besides proving the main results that hold under bounded rewards and uniformly bounded families of transition rates, the paper obtains several further important conclusions.

14.
A model of sugar cane crop rotation has been developed to optimize the return to a single sugar cane farm by scheduling harvesting times and thereby implicitly defining the crop growth periods. A Markov decision process is used to provide a simple structure that retains the problem's dynamic nature. Stochastic influence on the system is incorporated within this structure by defining states that represent crops at discrete quality levels, with probabilistic transitions between states. Non-discounted and discounted versions of the Markov process are solved using a linear programming formulation. Problem formulation and interpretation of the solution are complicated by an extra constraint on the Markov process arising from a sugar industry restriction. This constraint is treated by two different means, and a rule is presented to aid its interpretation.
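The linear-programming formulation mentioned in the abstract can be illustrated on a toy average-reward Markov decision process, sketched below with assumed crop-quality states, actions and revenues; the extra sugar-industry constraint discussed in the paper is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the LP formulation of an average-reward Markov decision process
# (toy crop-quality example with assumed numbers). Variables x[s, a] are stationary
# state-action frequencies.

S, A = 3, 2                                   # crop-quality states, actions (wait, harvest)
P = np.zeros((S, A, S))
r = np.zeros((S, A))
P[:, 0] = [[0.6, 0.4, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]   # wait: quality evolves
P[:, 1] = [[1.0, 0.0, 0.0]] * 3                                  # harvest: replant at state 0
r[:, 1] = [10.0, 25.0, 15.0]                                     # harvest revenue by quality

c = -r.flatten()                              # linprog minimises, so negate rewards
A_eq = np.zeros((S + 1, S * A))
for j in range(S):                            # balance equations for each state j
    for s in range(S):
        for a in range(A):
            A_eq[j, s * A + a] = (1.0 if s == j else 0.0) - P[s, a, j]
A_eq[S, :] = 1.0                              # frequencies sum to one
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(S, A)
print("gain (average reward per period):", round(-res.fun, 3))
print("state-action frequencies:\n", np.round(x, 3))
```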

15.
The Markov decision process is studied under the maximization of the probability that total discounted rewards exceed a target level. We focus on and study the dynamic programming equations of the model. We give various properties of the optimal return operator and, for the infinite planning-horizon model, we characterize the optimal value function as a maximal fixed point of this operator. Various turnpike results relating the finite- and infinite-horizon models are also given.

16.
A novel model, referred to as the two-dimensional continuous 3 × 3 order hidden Markov model, is put forward to avoid the disadvantages of the classical hypotheses of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of an observation depends not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are derived theoretically by exploiting the idea that the sequences of states on the rows or columns of the model can be viewed as the states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because the structure of the proposed model carries more statistical characteristics, it can describe some practical problems more accurately than the two-dimensional continuous hidden Markov model.

17.
This paper presents a model for improving utilization in IEEE 802.11e wireless LANs via a Markov decision process (MDP) approach. A Markov chain tracking the utilized transmission window for two separate access mechanisms is devised. The action space and the rewards of the MDP are then judiciously selected with the aim of improving overall utilization without explicit blocking. The proposed MDP model for 802.11e reveals that proportional allocation of access opportunities improves overall utilization compared to completely randomized access. Simulation results further show that a policy that limits HCCA access as a function of channel load improves utilization by an average of 8%. The optimization framework proposed in this paper is promising as a practical decision support tool for resource planning in 802.11e.

18.
Markov models are commonly used to model many practical systems, such as telecommunication systems, manufacturing systems and inventory systems. However, higher-order Markov models are not commonly used in practice because their huge numbers of states and parameters lead to computational difficulties. In this paper, we propose a higher-order Markov model whose numbers of states and parameters are linear with respect to the order of the model, and we develop efficient estimation methods for the model parameters. We then apply the model and method to solve the generalised newsboy problem. Numerical examples with applications to production planning are given to illustrate the power of the proposed model.
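One way to obtain a higher-order model with linearly many parameters, in the spirit of the abstract though with an illustrative parameterisation, is to form the next-step distribution as a convex combination of one-step predictions made from each of the last n observed states, as sketched below.

```python
import numpy as np

# Sketch of a higher-order Markov model whose parameter count grows linearly with the
# order: one transition matrix Q plus one weight per lag (illustrative values).

S, n = 3, 3
Q = np.array([[0.5, 0.3, 0.2],        # a single one-step transition matrix
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
lam = np.array([0.6, 0.3, 0.1])       # non-negative lag weights summing to 1

def next_distribution(history):
    """Distribution of the next state given the last n states; history[-1] is most recent."""
    dist = np.zeros(S)
    for i in range(1, n + 1):
        e = np.zeros(S)
        e[history[-i]] = 1.0
        dist += lam[i - 1] * (e @ Q)  # prediction from the i-th most recent state
    return dist

print(next_distribution([0, 2, 1]))
```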

19.
20.
In this paper, we consider a Markov switching Lévy process model in which the underlying risky assets are driven by the stochastic exponential of a Markov switching Lévy process, and we then apply the model to option pricing and hedging. In this model, the market interest rate, the volatility of the underlying risky assets and the N-state compensator depend on unobservable states of the economy, which are modeled by a continuous-time hidden Markov process. We use the MEMM (minimal entropy martingale measure) as the equivalent martingale measure. The option price under this model is obtained by the Fourier transform method. We obtain a closed-form solution for the hedge ratio by applying local risk-minimizing hedging.
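A much-simplified sketch of regime-switching option pricing is given below: a European call is priced by Monte Carlo under a geometric Brownian special case in which the interest rate and volatility switch with a two-state continuous-time Markov chain. The MEMM change of measure, the Fourier-transform pricing and the general Lévy dynamics of the paper are not reproduced, and all parameters are assumptions.

```python
import numpy as np

# Simplified sketch: Monte Carlo price of a European call when the interest rate and
# volatility switch with a two-state continuous-time Markov chain (a geometric Brownian
# special case of a regime-switching model; illustrative parameters only).

rng = np.random.default_rng(2)
T, steps, n_paths = 1.0, 252, 5000
dt = T / steps
Qgen = np.array([[-0.5, 0.5],          # generator of the hidden economic regime
                 [1.0, -1.0]])
r = np.array([0.02, 0.05])             # regime-dependent interest rate
sigma = np.array([0.15, 0.35])         # regime-dependent volatility
S0, K = 100.0, 100.0

payoffs = np.zeros(n_paths)
for p in range(n_paths):
    s, regime, disc = S0, 0, 0.0
    for _ in range(steps):
        disc += r[regime] * dt
        z = rng.standard_normal()
        s *= np.exp((r[regime] - 0.5 * sigma[regime] ** 2) * dt
                    + sigma[regime] * np.sqrt(dt) * z)
        if rng.random() < -Qgen[regime, regime] * dt:   # approximate switch probability
            regime = 1 - regime
    payoffs[p] = np.exp(-disc) * max(s - K, 0.0)

print("call price estimate:", round(float(payoffs.mean()), 3))
```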
