Similar Documents
1.
To study the prediction of risk element transfer between enterprise projects, this paper first introduces the concept of project-chain risk element transfer and a tetrahedron model of enterprise project-chain risk transfer. Taking the chain structure of enterprise project-chain risk transfer as the starting point, it then draws on grey prediction, Fourier series, and Markov chain theory to construct a Markov-Fourier-series-corrected grey prediction model (MFGM) for forecasting risk element transfer along the project chain structure. Finally, taking project duration as the risk element, a numerical example analyzes the model's predictions of project-chain risk element transfer; the results show that the model is feasible and effective.

2.
Traffic accident prediction based on an improved grey Markov chain model
Traffic accident prediction is the basis of traffic safety evaluation, planning, and decision-making. Grey prediction suits systems with few data points and small fluctuations, whereas Markov chain theory suits dynamic processes with large random fluctuations. To overcome the loss of prediction accuracy caused by the fixed transition probability matrix of the ordinary grey Markov chain model, this paper builds an improved grey Markov chain model. A sliding transition probability matrix is adopted: the oldest observation is dropped and the newest one appended, from which a new one-step transition probability matrix is constructed. Using the improved model, the national traffic accident death rate per 100,000 population for 2002-2004 is predicted and analyzed. The results show that the improved grey Markov chain model yields a more accurate prediction range and higher prediction accuracy than the ordinary grey Markov chain model.
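The sliding transition-matrix idea in item 2 can be sketched numerically. This is a minimal illustration, not the paper's model: the state sequence and number of states are hypothetical, and the grey-prediction stage is omitted; only re-estimating the one-step transition matrix from a moving window of recent data is shown.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a one-step transition probability matrix by counting
    transitions in the state sequence and normalizing each row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid division by zero for unseen states
    return counts / row_sums

def sliding_transition_matrix(states, n_states, window):
    """The 'sliding' step: drop the oldest observations, keep only the most
    recent window, and re-estimate the matrix from that window."""
    return transition_matrix(states[-window:], n_states)

# Hypothetical residual states of a grey forecast, coded 0/1/2.
seq = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1]
P_all = transition_matrix(seq, 3)
P_win = sliding_transition_matrix(seq, 3, window=6)
```

Each time a new observation arrives, the window slides forward and the matrix is rebuilt, so the transition probabilities track recent dynamics instead of staying fixed.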

3.
This paper studies the dual-channel equilibrium of a closed-loop supply chain network in which manufacturing/remanufacturing plants face capacity constraints and sell their products, both through a physical chain of distribution/collection centers and through a direct e-commerce channel, via retailers/collection points to consumer markets subject to a restrictive price ceiling. Using variational inequality theory, a dual-channel equilibrium model of the closed-loop supply chain network is established, and a predictor-corrector algorithm based on logarithmic-quadratic approximation is designed to compute the equilibrium. Numerical examples show that shortages occur in the consumer markets and, because of the restrictive price ceiling, become more severe under capacity constraints. In addition, when the manufacturing/remanufacturing plants open a direct sales channel, the profits of the plants, the retailers/collection points, and the closed-loop supply chain as a whole increase, while the profit of the distribution/collection centers decreases.

4.
Application of a Markov chain model to catastrophe prediction
Using the principles of a Markov chain model to predict catastrophes, and taking the prediction of drought/flood grades in Zhengzhou as an example, this paper describes the method and steps for applying the model. The prediction results show that catastrophe prediction with a Markov chain model is feasible.

5.
China's starch industry has grown rapidly in recent years. While it has brought substantial economic benefits to the enterprises involved, the complexity of the market has left them unable to anticipate starch price movements correctly, causing heavy economic losses. Finding a scientific and efficient method for starch price prediction has therefore become a pressing need. This paper combines a genetic algorithm (GA) with support vector regression (SVR) to build a GA-SVR starch price prediction model. Simulated predictions of starch prices for 2003-2011 show that the model's coefficient of determination and mean squared error are both superior to those of other methods, verifying its effectiveness and advantages.

6.
This paper decomposes a security's price time series into a trend component and a Markov chain, builds a Markov chain model of a securities portfolio, analyzes the model using Markov chain theory, and derives formulas for the rate of return, the risk, and the tangency portfolio over a sufficiently long horizon.

7.
A non-negative time-varying-weight combination forecasting algorithm based on Markov chain fitting and its application
A new non-negative time-varying-weight combination forecasting formula is derived by Markov chain fitting. The main contributions are: first, for the combination forecasting problem, the states of the Markov chain and initial estimates of the state probabilities are given under a minimum-error criterion; second, the Markov chain is fitted to the time-varying law of the state probability distribution, and the least-squares solution of the one-step transition probability matrix is derived via a constrained multivariate autoregressive model; third, a non-negative time-varying-weight combination forecasting formula is given, together with an application example.

8.
Pork price prediction concerns the interests of both consumers and producers and therefore attracts wide attention. Based on the idea of association rules in data mining, this paper proposes a pork price fluctuation prediction method built on two-dimensional time series pattern extraction. Prediction experiments on daily pork price data demonstrate the effectiveness of the model.

9.
Han Ming. 《运筹与管理》, 2007, 16(3): 119-123
This paper proposes two methods for securities investment prediction: the Markov chain method and the E-Bayes method. The data are first grouped; prediction models are then built on this grouping using Markov chain and E-Bayes theory; finally, calculations are carried out on a practical problem. The predictions of the two methods agree.

10.
Based on recent international crude oil price data and price changes, a state transition probability (or frequency) matrix of international crude oil price changes is derived. On this basis, taking minimization of the expectation and variance of the crude oil price prediction error as the objective, a bilevel stochastic integer program for international crude oil price prediction is formulated; the existence of an optimal solution is established, and an optimization algorithm is constructed from the constraint structure. Following China's current refined oil pricing mechanism, the proposed algorithm is applied to predict domestic refined oil price adjustments. The empirical analysis shows that the model and algorithm achieve reasonable prediction accuracy and good practicality.

11.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach to solve this problem is to construct a Markov chain which converges to the desired probability distribution, and run this Markov chain to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require the knowledge of the stationary distribution only up to a normalizing constant. Each of these methods produces a reversible Markov chain which has the same stationary distribution as the original chain, and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result which shows that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
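The key property used in item 11 is that Metropolis-Hastings needs the stationary distribution only up to a normalizing constant and produces a reversible chain with that stationary distribution. A minimal sketch under hypothetical choices (a five-point state space, unnormalized weights, and a symmetric ±1 proposal):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target on states {0, ..., 4}; Metropolis-Hastings only
# needs ratios, so the normalizing constant is never computed.
weights = np.array([1.0, 2.0, 4.0, 2.0, 1.0])

def mh_step(x):
    """One Metropolis-Hastings step with a symmetric +/-1 proposal."""
    y = x + rng.choice([-1, 1])
    if y < 0 or y >= len(weights):
        return x  # proposal leaves the state space: reject
    if rng.random() < min(1.0, weights[y] / weights[x]):
        return y  # accept with probability min(1, pi(y)/pi(x))
    return x

# Run the chain and compare visit frequencies with the normalized target.
x, visits = 2, np.zeros(len(weights))
for _ in range(200_000):
    x = mh_step(x)
    visits[x] += 1
empirical = visits / visits.sum()
target = weights / weights.sum()
```

Because the acceptance ratio enforces detailed balance, the resulting chain is reversible with respect to the target, which is the setting in which the Peskun ordering applies.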

12.
Mo Xiaoyun. 《经济数学》, 2010, 27(3): 28-34
Building on a Markov chain model of customer relationship development, this paper constructs a stochastic process for the rewards a customer brings to an enterprise. It is proved that, under suitable assumptions, the customer reward process is a Markov chain, and indeed a time-homogeneous one. The transition probabilities of this chain are derived, and from them some formulas for computing the expected reward a customer brings to the enterprise are obtained, providing an effective quantitative basis for the enterprise's customer relationship development strategy.
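The expected-reward computation described in item 12 can be illustrated with a small sketch. The states, transition matrix, and per-period rewards below are hypothetical, not those of the paper; the sketch only shows how expected rewards follow from a chain's transition probabilities.

```python
import numpy as np

# Hypothetical customer-relationship chain:
# states 0 = prospect, 1 = active customer, 2 = lost (absorbing).
P = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.8, 0.1],
              [0.0, 0.0, 1.0]])
reward = np.array([0.0, 100.0, 0.0])  # assumed per-period reward by state

def expected_total_reward(P, reward, start, horizon):
    """Sum of expected per-period rewards: sum over t of (e_start P^t) . reward."""
    dist = np.eye(P.shape[0])[start]  # point mass on the starting state
    total = 0.0
    for _ in range(horizon):
        dist = dist @ P       # propagate the state distribution one step
        total += dist @ reward
    return total

value = expected_total_reward(P, reward, start=0, horizon=12)
```

Starting from the absorbing "lost" state the expected reward is zero, while a prospect accumulates reward in proportion to the probability mass that reaches the "active" state each period.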

13.
1. Introduction The motivation for writing this paper was calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature, for example, [3-7]. For a finite Markov chain, …

14.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.

15.
This paper evaluates the small and large sample properties of Markov chain time-dependence and time-homogeneity tests. First, we present the Markov chain methodology to investigate various statistical properties of time series. Considering an auto-regressive time series and its associated Markov chain representation, we derive analytical measures of the statistical power of the Markov chain time-dependence and time-homogeneity tests. We later use Monte Carlo simulations to examine the small-sample properties of these tests. It is found that although the Markov chain time-dependence test has desirable size and power properties, the time-homogeneity test does not perform well in statistical size and power calculations.

16.
In this paper we discuss three important kinds of Markov chains used in Web search algorithms: the maximal irreducible Markov chain, the minimal irreducible Markov chain, and the middle irreducible Markov chain. We discuss the stationary distributions, the convergence rates, and the Maclaurin series of the stationary distributions of the three kinds of Markov chains. Among other things, our results show that the maximal and minimal Markov chains have the same stationary distribution and that the stationary distribution of the middle Markov chain reflects the real Web structure more objectively. Our results also prove that the maximal and middle Markov chains have the same convergence rate and that the maximal Markov chain converges faster than the minimal Markov chain when the damping factor α > 1/√2.
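A common concrete instance of a damped Web chain as in item 16 is the PageRank construction, which mixes the link-graph chain with a uniform jump via a damping factor α. The sketch below is an illustrative power iteration, not the paper's construction: the three-page link graph and α = 0.85 are assumptions, and dangling pages (rows with no links) are not handled.

```python
import numpy as np

# Tiny hypothetical link graph: adjacency[i, j] = 1 if page i links to page j.
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)

def pagerank(adjacency, alpha=0.85, tol=1e-12):
    """Stationary distribution of the damped Web chain
    P = alpha * H + (1 - alpha)/n, computed by power iteration."""
    n = adjacency.shape[0]
    H = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-stochastic
    pi = np.full(n, 1.0 / n)  # start from the uniform distribution
    while True:
        new = alpha * pi @ H + (1 - alpha) / n
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new

pi = pagerank(adjacency)
```

The uniform-jump term makes the chain irreducible regardless of the link structure, and smaller α speeds up convergence of the iteration at the cost of flattening the stationary distribution toward uniform.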

17.
This paper studies countable Markov chains in Markovian environments, proving mainly that the number of returns of the process to small cylinder sets is asymptotically Poisson distributed. To this end, an entropy function h is introduced; the Shannon-McMillan-Breiman theorem for Markov chains in Markovian environments is first given, and an example of Poisson approximation for a non-Markov process is also provided. When the environment process degenerates to a constant sequence, the Poisson limit theorem for countable Markov chains is obtained, extending Pitskel's corresponding result for finite Markov chains.

18.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
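Reversibility in item 18 means detailed balance with respect to the stationary distribution. The paper's closest-reversible-chain computation is a convex minimization and is not reproduced here; the sketch below only checks detailed balance for two hypothetical example chains (a birth-death chain, which is reversible, and a cyclic chain, which is not).

```python
import numpy as np

def stationary(P):
    """Stationary distribution via the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def is_reversible(P, tol=1e-8):
    """Detailed balance: pi_i * P_ij == pi_j * P_ji for all i, j,
    i.e. the probability-flow matrix is symmetric."""
    pi = stationary(P)
    flow = pi[:, None] * P
    return np.allclose(flow, flow.T, atol=tol)

P_rev = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])   # birth-death chain: reversible
P_nonrev = np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0]])   # cyclic chain: not reversible
```

A symmetric flow matrix is equivalent to the chain having a real spectrum under the pi-weighted inner product, which is the property the paper's approximation seeks to preserve.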

19.
A Markov chain plays an important role in an interacting multiple model (IMM) algorithm which has been shown to be effective for target tracking systems. Such systems are described by a mixing of continuous states and discrete modes. The switching between system modes is governed by a Markov chain. In real world applications, this Markov chain may change or needs to be changed. Therefore, one may be concerned about a target tracking algorithm with the switching of a Markov chain. This paper concentrates on fault-tolerant algorithm design and algorithm analysis of IMM estimation with the switching of a Markov chain. Monte Carlo simulations are carried out and several conclusions are given.

20.
We propose the construction of a quantum Markov chain that corresponds to a “forward” quantum Markov chain. In the given construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions.

