20 similar documents found; search took 93 ms.
1.
2.
Traffic Accident Prediction Based on an Improved Grey Markov Chain Model (total citations: 1; self-citations: 0; by others: 1)
Traffic accident prediction is the basis of traffic safety evaluation, planning, and decision-making. Grey prediction suits system objects with few data and small fluctuations, while Markov chain theory suits dynamic processes with large random fluctuations. To overcome the loss of prediction accuracy caused by the fixed transition probability matrix in the ordinary grey Markov chain model, this paper builds an improved grey Markov chain model. It adopts a sliding transition probability matrix: the oldest observation is removed and the newest appended, and a new one-step transition probability matrix is estimated. Using the improved model, the national road-traffic death rate per 100,000 population for 2002-2004 is predicted and analyzed. The results show that the improved grey Markov chain model gives a more accurate prediction range and higher prediction accuracy than the ordinary grey Markov chain model.
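The sliding transition matrix described in this abstract can be sketched as follows (a minimal illustration assuming a discretized state sequence; the function name and the uniform fallback for rows with no observed transitions are my own choices, not the paper's):

```python
import numpy as np

def sliding_transition_matrix(states, n_states, window):
    """Estimate a one-step transition matrix from only the most recent
    `window` observations of a discrete state sequence. Re-calling this
    after each new observation realizes the sliding scheme: the oldest
    observation drops out and the newest is included."""
    recent = states[-window:]
    counts = np.zeros((n_states, n_states))
    for a, b in zip(recent[:-1], recent[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions fall back to a uniform row
    # (an illustrative choice) so the matrix stays stochastic.
    return np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n_states)
```

Each call yields a proper stochastic matrix whose rows sum to one, estimated from the current window only.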
3.
This paper studies the dual-channel equilibrium of a closed-loop supply chain network in which manufacturing/remanufacturing plants face capacity constraints and sell their products, through a physical chain of distribution/collection centers and through a direct e-commerce channel, via retailers/collection points to consumer markets subject to a restrictive price ceiling. Using variational inequality theory, a dual-channel equilibrium model of the closed-loop supply chain network is established, and a predictor-corrector algorithm with logarithmic-quadratic proximal terms is designed to compute the equilibrium. Numerical examples show that shortages occur in the consumer markets and, because of the restrictive price ceiling, become more severe under capacity constraints. Moreover, adding a direct sales channel increases the profits of the manufacturing/remanufacturing plants, the retailers/collection points, and the closed-loop supply chain as a whole, but reduces the profit of the distribution/collection centers.
4.
Application of the Markov Chain Model to Catastrophe Prediction (total citations: 2; self-citations: 0; by others: 2)
Catastrophes are predicted using the Markov chain model, with the prediction of drought/flood grades in Zhengzhou as a worked example illustrating the method and its steps. The prediction results show that using a Markov chain model to predict catastrophes is feasible.
5.
China's starch industry has developed rapidly in recent years. While this has brought large economic benefits to the firms involved, the complexity of the market has left them unable to anticipate starch price movements correctly, causing substantial losses. Finding a scientific and efficient method of starch price prediction is therefore urgent. This paper combines a genetic algorithm (GA) with support vector regression (SVR) to build a GA-SVR starch price prediction model. Simulated predictions of starch prices for 2003-2011 show that the model's coefficient of determination and mean squared error are both better than those of other methods, verifying its effectiveness and advantages.
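A minimal real-coded genetic algorithm of the kind that could drive the GA half of such a GA-SVR coupling (a sketch only: in the paper the fitness would be SVR cross-validation error over candidate hyperparameters such as (C, gamma); here a toy quadratic stands in, and all names and operator choices are illustrative):

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimize `fitness` over a box given by `bounds` with a simple GA:
    truncation selection, arithmetic crossover, gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # arithmetic crossover
            for i, (lo, hi) in enumerate(bounds):          # clamped gaussian mutation
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

Because the elite survive unchanged, the best candidate found never gets worse from one generation to the next.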
6.
This paper decomposes a securities price time series into a trend component and a Markov chain, builds a Markov chain model of a securities portfolio, analyzes the model using Markov chain theory, and gives formulas for computing the rate of return, the risk, and the tangency portfolio over a sufficiently long horizon.
7.
8.
《数学的实践与认识》 (Mathematics in Practice and Theory), 2016, (24)
Pork price prediction concerns the interests of both consumers and producers and therefore receives wide attention. Based on the idea of association rules in data mining, this paper proposes a pork price fluctuation prediction method built on two-dimensional time-series pattern extraction, and runs prediction experiments on daily pork price data; the results demonstrate the model's effectiveness.
9.
This paper proposes two methods for securities investment prediction: the Markov chain method and the E-Bayes method. The data are first grouped; prediction models are then built on that basis using Markov chain and E-Bayes theory; finally, computations on a real problem show that the predictions of the two methods agree.
10.
11.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach to solve this problem is to construct a Markov chain which converges to the desired probability distribution, and run this Markov chain to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require the knowledge of the stationary distribution only up to a normalizing constant. Each of these methods produces a reversible Markov chain which has the same stationary distribution as the original chain, and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result which shows that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
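The reversible chains this abstract starts from can be built by the standard Metropolis-Hastings construction, which needs the target only up to a normalizing constant (a generic textbook sketch for a discrete state space with a symmetric proposal, not the paper's improved chains):

```python
import numpy as np

def metropolis_transition(pi_unnorm, proposal):
    """Metropolis-Hastings transition matrix for a discrete target known
    only up to a constant. `proposal` must be a symmetric stochastic
    matrix; acceptance uses only ratios of pi, so normalization cancels."""
    n = len(pi_unnorm)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                accept = min(1.0, pi_unnorm[j] / pi_unnorm[i])
                P[i, j] = proposal[i, j] * accept
        P[i, i] = 1.0 - P[i].sum()  # rejected moves stay in place
    return P
```

The resulting chain satisfies detailed balance, pi_i P_ij = pi_j P_ji, hence is reversible with stationary distribution proportional to `pi_unnorm`.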
12.
Building on a Markov chain model of customer relationship development, this paper constructs the stochastic process of a firm's customer returns. It is proved that, under suitable assumptions, the customer return process is a Markov chain, and indeed a time-homogeneous one. The transition probabilities of this chain are derived, and from them formulas for the expected return a customer brings to the firm, providing an effective quantitative basis for the firm's choice of customer relationship development strategy.
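The paper's specific transition probabilities and return formulas are not reproduced here, but for a generic time-homogeneous chain with a per-state reward r and a discount factor beta, the expected discounted return solves the linear system (I - beta*P) v = r (a standard formula, shown as an illustrative sketch):

```python
import numpy as np

def expected_discounted_return(P, r, beta):
    """v_i = E[ sum_t beta^t r(X_t) | X_0 = i ] for a time-homogeneous
    chain with transition matrix P, solved from v = r + beta * P @ v."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - beta * P, r)
```

For a two-state chain that alternates deterministically with reward only in state 0 and beta = 0.5, this gives v = (4/3, 2/3), matching the hand calculation v0 = 1 + 0.5*v1, v1 = 0.5*v0.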
13.
Zhao Yiqiang, 《应用数学学报(英文版)》 (Acta Mathematicae Applicatae Sinica, English Series), 2000, 16(3): 274-282
1. Introduction
The motivation for writing this paper came from calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature, for example, [3-7]. For a finite Markov chain, …
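Rotating a transition matrix by 180° reverses both the row order and the column order, so state i of the rotated chain corresponds to state n-1-i of the original (a minimal sketch of the operation itself, not of the paper's blocking-probability approximation):

```python
import numpy as np

def rotate_180(P):
    """Rotate a transition matrix by 180 degrees: reverse both the row
    and the column order. Row sums are preserved, so the result is
    again a stochastic matrix, defining the rotated (dual) chain."""
    return P[::-1, ::-1].copy()
```

For example, rotating [[0.9, 0.1], [0.4, 0.6]] yields [[0.6, 0.4], [0.1, 0.9]]: the high-self-transition state has moved from index 0 to index 1.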
14.
John Conlisk, The Journal of Mathematical Sociology, 2013, 37(2-3): 127-143
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
15.
European Journal of Operational Research, 2002, 137(3): 524-543
This paper evaluates the small and large sample properties of Markov chain time-dependence and time-homogeneity tests. First, we present the Markov chain methodology to investigate various statistical properties of time series. Considering an auto-regressive time series and its associated Markov chain representation, we derive analytical measures of the statistical power of the Markov chain time-dependence and time-homogeneity tests. We later use Monte Carlo simulations to examine the small-sample properties of these tests. It is found that although the Markov chain time-dependence test has desirable size and power properties, the time-homogeneity test does not perform well in statistical size and power calculations.
16.
Ying Bao, Zi-hu Zhu, 《应用数学学报(英文版)》 (Acta Mathematicae Applicatae Sinica, English Series), 2006, 22(3): 517-528
In this paper we discuss three important kinds of Markov chains used in Web search algorithms: the maximal irreducible Markov chain, the minimal irreducible Markov chain, and the middle irreducible Markov chain. We discuss the stationary distributions, the convergence rates, and the Maclaurin series of the stationary distributions of the three kinds of Markov chains. Among other things, our results show that the maximal and minimal Markov chains have the same stationary distribution and that the stationary distribution of the middle Markov chain reflects the real Web structure more objectively. Our results also prove that the maximal and middle Markov chains have the same convergence rate and that the maximal Markov chain converges faster than the minimal Markov chain when the damping factor α > 1/√2.
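The paper's maximal, minimal, and middle constructions are specific modifications of a Web link matrix and are not reproduced here; generically, though, a damping factor mixes the link chain with a uniform jump to force irreducibility, and the stationary distribution is then found by power iteration (an illustrative sketch with my own function names):

```python
import numpy as np

def damped_chain(P, alpha):
    """alpha * P + (1 - alpha) * uniform: the usual damping trick that
    makes a Web chain irreducible and aperiodic."""
    n = P.shape[0]
    return alpha * P + (1.0 - alpha) * np.full((n, n), 1.0 / n)

def stationary(P, iters=200):
    """Stationary distribution by power iteration, pi <- pi @ P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi
```

A smaller damping factor alpha shrinks the chain's second eigenvalue and so speeds convergence, which is why convergence-rate comparisons like the α > 1/√2 threshold above are of practical interest.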
17.
18.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non‐reversible and will lose important properties, like the real‐valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
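The paper's convex-minimization solution is not reproduced here. For contrast, the standard additive reversibilization below also produces a reversible chain with the same stationary distribution, but it is in general not the *closest* reversible chain to P in any norm (a textbook construction, shown as a sketch):

```python
import numpy as np

def additive_reversibilization(P, pi):
    """Return (P + D^{-1} P^T D) / 2, where D = diag(pi) and pi is the
    stationary distribution of P. The result is a stochastic matrix
    that satisfies detailed balance with respect to pi."""
    D = np.diag(pi)
    Dinv = np.diag(1.0 / pi)
    return 0.5 * (P + Dinv @ P.T @ D)
```

Applied to the non-reversible 3-cycle with uniform stationary distribution, this yields the symmetric matrix (P + P^T) / 2, which is reversible while P itself is not.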
19.
Mathematical and Computer Modelling, 1997, 25(1): 1-9
A Markov chain plays an important role in an interacting multiple model (IMM) algorithm which has been shown to be effective for target tracking systems. Such systems are described by a mixing of continuous states and discrete modes. The switching between system modes is governed by a Markov chain. In real world applications, this Markov chain may change or needs to be changed. Therefore, one may be concerned about a target tracking algorithm with the switching of a Markov chain. This paper concentrates on fault-tolerant algorithm design and algorithm analysis of IMM estimation with the switching of a Markov chain. Monte Carlo simulations are carried out and several conclusions are given.
20.
We propose the construction of a quantum Markov chain that corresponds to a “forward” quantum Markov chain. In the given construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions. 