Similar Documents (20 results)
1.
Using SAS, this paper fits a time series model to China's per capita GDP for 1952-2009 and forecasts per capita GDP for 2010-2013. On this basis, two Bayesian smoothing models and algorithms are built that take the parameter information from the fitted time series model as prior information. The resulting Bayesian parameter estimates and forecasts make full use of both the sample information and the prior information on the parameters, and therefore have smaller variance or squared error, giving more soundly based estimates. To examine the method's sensitivity to the prior distribution, simulated forecasts are run under two prior distributions. Compared with traditional time series forecasts, the Bayesian model with a single normal observation and known variance gives more accurate predictions, whereas the Bayesian model with an exponential prior has larger prediction errors and performs poorly.
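
A minimal sketch of the first setting described above (a single normal observation with known variance and a normal prior carrying the time-series information), assuming the standard conjugate update; the prior hyperparameters and the observation are hypothetical numbers, not the paper's data.

```python
import numpy as np

# Conjugate normal update for a single observation y ~ N(theta, sigma2) with
# known sigma2 and prior theta ~ N(m0, v0).  In the paper's setup m0 and v0
# would come from the fitted time-series forecast; here they are made up.
m0, v0 = 30000.0, 1500.0 ** 2     # hypothetical prior mean / variance of per capita GDP
sigma2 = 2000.0 ** 2              # assumed known observation variance
y = 31500.0                       # a hypothetical observed value

post_var = 1.0 / (1.0 / v0 + 1.0 / sigma2)
post_mean = post_var * (m0 / v0 + y / sigma2)
print(f"posterior mean = {post_mean:.1f}, posterior s.d. = {post_var ** 0.5:.1f}")
```

The posterior standard deviation is smaller than both the prior and the observation standard deviations, which is the sense in which the Bayesian estimate combines the two information sources to reduce variance.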

2.
赵喜林  赵煜  余东 《数学杂志》2014,34(1):186-190
This paper studies estimation of the product failure rate based on the Poisson distribution. Using Bayesian inference, the Bayes estimator of the failure rate and its properties are derived when a truncated gamma distribution is taken as the prior, extending the Bayes estimation results obtained with a gamma prior.
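
A minimal sketch of the underlying gamma-Poisson conjugate update, assuming failure counts observed over unit periods; the truncated-gamma prior of the paper keeps the same gamma kernel and only restricts and renormalizes the posterior on the truncation interval. Hyperparameters, counts, and the truncation point are hypothetical.

```python
import numpy as np
from scipy import stats

# x_i | lam ~ Poisson(lam), prior lam ~ Gamma(a0, rate=b0)  =>  posterior Gamma(a_n, b_n).
a0, b0 = 2.0, 4.0                        # hypothetical prior hyperparameters
x = np.array([0, 1, 0, 2, 1, 0, 0, 1])   # hypothetical failure counts per period
a_n, b_n = a0 + x.sum(), b0 + x.size
print("posterior mean (gamma prior):", a_n / b_n)

# With the prior truncated to (0, c], the posterior is the same Gamma(a_n, b_n)
# restricted to (0, c]; its mean follows from E[lam; lam <= c] = (a_n/b_n) * F_{a_n+1, b_n}(c).
c = 0.8                                  # hypothetical truncation point
post = stats.gamma(a_n, scale=1.0 / b_n)
trunc_mean = (a_n / b_n) * stats.gamma(a_n + 1, scale=1.0 / b_n).cdf(c) / post.cdf(c)
print("posterior mean (truncated gamma prior):", trunc_mean)
```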

3.
This paper studies a non-iterative sampling algorithm (IBF) and an MCMC algorithm for estimating the change point of the Ailamujia distribution. Within a Bayesian framework and under a non-informative prior, the posterior distribution of the change-point location and the full conditional distributions of the parameters are derived, and the implementation steps of the IBF algorithm and the MCMC method are described in detail. Simulation experiments show that both algorithms estimate the change-point location effectively and that the IBF algorithm is computationally faster than MCMC.
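
The Ailamujia model itself is not reproduced here; the sketch below illustrates the same non-iterative idea on a generic analogue, enumerating the closed-form posterior of a single change-point location for exponential data with conjugate Gamma(a, b) priors on the segment rates. All settings are hypothetical.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
n, true_k = 120, 70
x = np.concatenate([rng.exponential(1.0, true_k),            # rate 1 before the change
                    rng.exponential(1.0 / 3.0, n - true_k)])  # rate 3 after the change

a, b = 1.0, 1.0   # assumed prior hyperparameters for the segment rates

def log_marginal(seg):
    # marginal likelihood of one segment: exponential likelihood integrated
    # against the Gamma(a, rate=b) prior on the rate
    m, s = len(seg), seg.sum()
    return a * np.log(b) + gammaln(a + m) - gammaln(a) - (a + m) * np.log(b + s)

ks = np.arange(1, n)                     # candidate change-point locations
logpost = np.array([log_marginal(x[:k]) + log_marginal(x[k:]) for k in ks])
post = np.exp(logpost - logpost.max())
post /= post.sum()                       # posterior over the change point (uniform prior on k)
print("posterior mode:", ks[post.argmax()], " true change point:", true_k)
```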

4.
For the variance parameter of a normal distribution with a conjugate normal-inverse-gamma prior, we analytically derive the Bayes posterior estimator under Stein's loss function; this estimator minimizes the posterior expected Stein loss. We also analytically derive the Bayes posterior estimator under the squared error loss function and the corresponding posterior expected Stein loss. Numerical simulations illustrate our theoretical results: the posterior expected Stein loss does not depend on the sample, and the Bayes posterior estimator and posterior expected Stein loss under squared error loss are uniformly larger than their counterparts under Stein's loss. Finally, we compute the Bayes posterior estimators and posterior expected Stein losses for the monthly simple returns of the Shanghai Stock Exchange Composite Index.
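
A short sketch of the two variance estimators, assuming the standard normal-inverse-gamma updating formulas under which the marginal posterior of the variance is inverse-gamma(alpha_n, beta_n); the prior hyperparameters and the simulated "returns" are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.special import digamma

# Under the conjugate normal-inverse-gamma prior the posterior of sigma^2 is
# IG(alpha_n, beta_n).  Bayes estimators of sigma^2:
#   squared error loss : beta_n / (alpha_n - 1)
#   Stein's loss       : beta_n / alpha_n      (= 1 / E[1/sigma^2 | data])
# and the posterior expected Stein loss at its Bayes estimator is
# log(alpha_n) - digamma(alpha_n), which depends only on alpha_n, not on the data.
rng = np.random.default_rng(1)
x = rng.normal(0.01, 0.05, size=60)                 # hypothetical monthly simple returns

mu0, kappa0, alpha0, beta0 = 0.0, 1.0, 3.0, 0.01    # assumed prior hyperparameters
n, xbar = x.size, x.mean()
alpha_n = alpha0 + n / 2.0
beta_n = (beta0 + 0.5 * ((x - xbar) ** 2).sum()
          + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * (kappa0 + n)))

est_se = beta_n / (alpha_n - 1.0)       # under squared error loss (uniformly larger)
est_stein = beta_n / alpha_n            # under Stein's loss
pesl = np.log(alpha_n) - digamma(alpha_n)
print("estimates:", est_se, est_stein, " posterior expected Stein loss:", pesl)
```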

5.
A Bayesian quality control model for small-batch production
Using Bayesian inference, this paper studies quality control models for small-batch production based on the conjugate normal prior and the conjugate normal-inverse-gamma prior. From the predictive density functions of the quantities being monitored, a Bayesian mean control chart is constructed for the case of known variance and a Bayesian mean-standard deviation control chart for the case of unknown variance, and both are compared with the classical quality control model.
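
One common way to build such a chart in the known-variance case is to take control limits from the posterior predictive distribution of the next item; the sketch below assumes that construction, with hypothetical hyperparameters and data rather than the paper's.

```python
import numpy as np

# Known process variance sigma2, prior mean mu ~ N(m0, v0); after n observations
# the predictive for the next item is N(m_n, v_n + sigma2), so 3-sigma limits of
# the predictive give a Bayesian mean chart.
sigma2 = 0.04                                   # known process variance
m0, v0 = 10.0, 0.25                             # prior on the process mean
x = np.array([9.9, 10.1, 10.05, 9.95, 10.2])    # a small initial batch

n = x.size
v_n = 1.0 / (1.0 / v0 + n / sigma2)
m_n = v_n * (m0 / v0 + x.sum() / sigma2)
pred_sd = (v_n + sigma2) ** 0.5
print("center line:", m_n)
print("control limits:", m_n - 3 * pred_sd, m_n + 3 * pred_sd)
```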

6.
A linear Bayes method is used to estimate the regression coefficients and the error variance of a linear model simultaneously, and an explicit expression for the linear Bayes estimator is obtained without knowing the specific form of the prior distribution. Under the mean squared error matrix criterion, the estimator is shown to be superior to the least squares and maximum likelihood estimators. Compared with Bayes estimators obtained by MCMC, the linear Bayes estimator has a closed-form expression and is more convenient to use. For several different prior distributions, numerical simulation results show that the linear Bayes estimator…

7.
This paper studies the change-point problem in the AR(p) model using both maximum likelihood and Bayesian methods. Without requiring the data matrix to be of full rank, unified expressions for the maximum likelihood estimators of the model parameters and for the change-point location are given using the Moore-Penrose generalized inverse. Assuming a multivariate normal prior for the autoregressive coefficients and an inverse gamma prior for the variance, the Bayesian method yields explicit expressions for the change-point location estimator and the Bayes estimators of the model parameters.
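
A small sketch of the pseudoinverse form of the estimator for a single simulated AR(2) segment; np.linalg.pinv gives the unified expression that remains defined when the lagged design matrix is not of full rank. The data and coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 2, 200
y = np.zeros(n)
for t in range(p, n):
    # simulate an AR(2): y_t = 0.5*y_{t-1} - 0.3*y_{t-2} + e_t
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# lagged design matrix with columns y_{t-1}, ..., y_{t-p}
X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
target = y[p:]
phi_hat = np.linalg.pinv(X) @ target     # = (X'X)^+ X'y, valid even if X is rank deficient
print("estimated AR coefficients:", phi_hat)
```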

8.
A Bayesian approach is used to analyze the change-point problem in the power-transformed threshold GARCH (PTTGARCH) model. The full conditional distributions of the parameters of the change-point model are constructed, and the parameters are estimated with the Griddy-Gibbs sampling MCMC algorithm. Numerical simulations are carried out for different change-point locations, for a model with no change point, and for a model close to non-stationarity. The results show that estimation is most accurate when the change point lies near the middle of the series, and the estimation error grows as the change point moves toward either end of the series; when no change point exists, the kurtosis of the posterior distribution of the assumed change-point location τ is close to that of a uniform distribution; when a change point exists, the kurtosis of the posterior of τ is greater than 2, and the more stationary the model, the larger this kurtosis, so the kurtosis of the posterior of τ can be used to judge whether a change point exists. Finally, a GARCH model is fitted to the daily returns of the Shanghai Composite Index, yielding the probability distribution of the time at which the change occurs; the result is consistent with the underlying market changes.

9.
In practice, Bayesian estimation for the two-parameter Gumbel distribution usually requires a two-dimensional joint prior distribution of the Gumbel parameters to be specified in advance. Because of the subjectivity involved in obtaining the prior and the complexity of the resulting inference, there has so far been relatively little discussion of the theory and properties of Bayesian estimation for the Gumbel distribution, let alone a simple Bayesian estimator for it. Based on the simple Bayesian estimation procedure proposed by Kaminskiy and Vasiliy, this paper expresses the prior information as an interval estimate of the reliability function and obtains a simple Bayesian estimator for the two-parameter Gumbel distribution. From this prior information, the procedure constructs a continuous joint prior distribution for the Gumbel parameters and gives posterior estimates of the reliability (or cumulative density) and its standard deviation at any given time point, making it possible to use Bayesian estimation simply and quickly to characterize extreme events in reliability and risk assessment.

10.
夏业茂 《应用数学》2019,32(1):81-93
The two-part regression model plays an important role in describing the probabilistic generating mechanism of semicontinuous data. This paper extends the classical two-part regression model to a two-part finite mixture model, in which a mixture of several regression lines accounts for heterogeneity in the distribution. Within a Bayesian framework, Markov chain Monte Carlo (MCMC) methods are used for posterior analysis: a Polya-Gamma prior is used to fit the logistic component and a stick-breaking prior is used for the random weights, which helps speed up posterior sampling. An empirical analysis of cocaine-use data is presented.

11.
Frequency domain properties are shown for the operators that decompose a time series into multiple components under Akaike's Bayesian model (Akaike (1980, Bayesian Statistics, 143–165, University Press, Valencia, Spain)). In that analysis a normal disturbance-linear-stochastic regression prior model is applied to the time series. A prior distribution, characterized by a small number of hyperparameters, is specified for the model parameters. The posterior distribution is a linear function (filter) of the observations. Here we examine the frequency domain, or filter, characteristics of several prior models parametrically, as functions of the hyperparameters.
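
The paper's operators are not reproduced here; as an illustrative analogue, the sketch below evaluates the approximate frequency response of the familiar order-k smoothness-prior (trend) filter trend_hat = (I + lam^2 * D_k'D_k)^(-1) y, showing how the hyperparameter lam acts as the cutoff of a low-pass filter.

```python
import numpy as np

# For a long series the order-k smoothness-prior filter has approximate gain
#   H(f) = 1 / (1 + lam^2 * (2*sin(pi*f))**(2*k)),  0 <= f <= 0.5 cycles/sample,
# since the k-th difference operator has squared gain (2*sin(pi*f))**(2*k).
f = np.linspace(0.0, 0.5, 501)
for k in (1, 2):
    for lam in (1.0, 10.0, 100.0):
        H = 1.0 / (1.0 + lam ** 2 * (2.0 * np.sin(np.pi * f)) ** (2 * k))
        print(f"k={k}, lambda={lam:6.1f}: gain at f=0.05 is {np.interp(0.05, f, H):.3f}")
```

Larger lam (a smoother prior) pushes the cutoff toward lower frequencies, which is the kind of hyperparameter-dependent filter characteristic the abstract refers to.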

12.
曾惠芳  熊培银 《经济数学》2020,37(3):183-188
In view of the great uncertainty in climate change and in its economic impact, this paper studies how climate-change uncertainty and prior information affect the social cost of carbon. Within a Bayesian framework, an exponential distribution is used to describe climate change, the tail rate is treated as a random variable with a gamma prior distribution, and the Bayesian prior predictive distribution of climate change is derived. The social cost of carbon is then computed under both the exponential distribution and the Pareto prior predictive distribution. Simulation analysis shows that, when prior information is not incorporated, the truncated and untruncated social costs of carbon almost coincide because the tail probability is very small, whether or not the relationship between consumption and climate change is modified. When prior information is incorporated through the Bayesian approach, however, the social cost of carbon is easily affected by the prior information; after modifying the relationship between consumption and climate change, the social cost of carbon is much less affected by the prior.
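
A minimal sketch of the prior-predictive step described above: mixing an exponential distribution over a gamma prior on its rate yields a Pareto type II (Lomax) predictive distribution, whose power-law tail can be compared with the plug-in exponential tail. The hyperparameters are illustrative, not calibrated values.

```python
import numpy as np

# lam ~ Gamma(a, rate=b), x | lam ~ Exponential(lam)
#   =>  prior predictive  m(x) = a * b**a / (b + x)**(a + 1)   (Lomax)
#       tail probability  P(X > x) = (b / (b + x))**a
a, b = 3.0, 6.0        # assumed gamma prior on the tail rate
x = 10.0               # a large climate-change value, in arbitrary units

tail_lomax = (b / (b + x)) ** a          # prior-averaged (predictive) tail
tail_exp = np.exp(-(a / b) * x)          # plug-in exponential with rate = prior mean a/b
print("P(X > x): Lomax predictive =", tail_lomax, " plug-in exponential =", tail_exp)
```

The heavier predictive tail illustrates why quantities driven by extreme outcomes, such as the social cost of carbon, can be sensitive to the prior information.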

13.
This paper considers the problem of learning multinomial distributions from a sample of independent observations. The Bayesian approach usually assumes a prior Dirichlet distribution over the probabilities of the different possible values. However, there is no consensus on the parameters of this Dirichlet distribution. Here, it will be shown that this is not a simple problem, with examples in which different selection criteria are reasonable. To solve it, the Imprecise Dirichlet Model (IDM) was introduced. But this model has important drawbacks, such as the problems associated with learning from indirect observations. As an alternative approach, the Imprecise Sample Size Dirichlet Model (ISSDM) is introduced and its properties are studied. The prior distribution over the parameters of a multinomial distribution is the basis for learning Bayesian networks using Bayesian scores. Here, we show that the ISSDM can be used to learn imprecise Bayesian networks, also called credal networks when all the distributions share a common graphical structure. Some experiments are reported on the use of the ISSDM to learn the structure of a graphical model and to build supervised classifiers.
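
A minimal sketch of the IDM's posterior lower and upper probabilities for multinomial counts, using the standard interval formulas; the counts and the choice of s are hypothetical.

```python
import numpy as np

# IDM: use every Dirichlet(s*t) prior with t in the simplex.  Given counts n_i
# with total N, the posterior lower/upper probabilities of category i are
#   lower_i = n_i / (N + s),   upper_i = (n_i + s) / (N + s).
counts = np.array([12, 3, 0, 5])   # hypothetical multinomial counts
s = 2.0                            # IDM hyperparameter (s = 1 or s = 2 are common choices)
N = counts.sum()

lower = counts / (N + s)
upper = (counts + s) / (N + s)
for i, (lo, up) in enumerate(zip(lower, upper)):
    print(f"category {i}: probability in [{lo:.3f}, {up:.3f}]")
```

Unobserved categories get the interval [0, s/(N+s)] rather than a point estimate of zero, which is one motivation for the imprecise approach.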

14.
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process (GP) prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking mixtures. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function, generalizing existing theory focused on parametric residual distributions. The homoscedastic priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating GP in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally adaptive manner. The methods are illustrated using simulated and real data applications.

15.
Bayesian inference is considered for the seemingly unrelated regressions with an elliptically contoured error distribution. We show that the posterior distribution of the regression parameters and the predictive distribution of future observations under the elliptical errors assumption are identical to those obtained under independently distributed normal errors when an improper prior is used. This gives inference robustness with respect to departures from the reference case of independent sampling from the normal distribution.

16.
We consider Bayesian shrinkage predictions for the Normal regression problem under the frequentist Kullback-Leibler risk function. First, we consider the multivariate Normal model with an unknown mean and a known covariance. While the unknown mean is fixed, the covariance of future samples can be different from that of the training samples. We show that the Bayesian predictive distribution based on the uniform prior is dominated by that based on a class of priors if the prior distributions for the covariance and future covariance matrices are rotation invariant. Then, we consider a class of priors for the mean parameters depending on the future covariance matrix. With such a prior, we can construct a Bayesian predictive distribution dominating that based on the uniform prior. Lastly, applying this result to the prediction of response variables in the Normal linear regression model, we show that there exists a Bayesian predictive distribution dominating that based on the uniform prior. Minimaxity of these Bayesian predictions follows from these results.

17.
Bayesian additive regression trees (BART) is a Bayesian approach to flexible nonlinear regression which has been shown to be competitive with the best modern predictive methods such as those based on bagging and boosting. BART offers some advantages. For example, the stochastic search Markov chain Monte Carlo (MCMC) algorithm can provide a more complete search of the model space and variation across MCMC draws can capture the level of uncertainty in the usual Bayesian way. The BART prior is robust in that reasonable results are typically obtained with a default prior specification. However, the publicly available implementation of the BART algorithm in the R package BayesTree is not fast enough to be considered interactive with over a thousand observations, and is unlikely to even run with 50,000 to 100,000 observations. In this article we show how the BART algorithm may be modified and then computed using single program, multiple data (SPMD) parallel computation implemented using the Message Passing Interface (MPI) library. The approach scales nearly linearly in the number of processor cores, enabling the practitioner to perform statistical inference on massive datasets. Our approach can also handle datasets too massive to fit on any single data repository.

18.
We consider the application of the Bayesian approach to parameter estimation to the single period inventory model. We assume complete prior ignorance of the values that the (single) unknown parameter of the demand distribution might take and express this by using a uniform prior over the permitted range of parameter values. Direct analytical and numerical comparisons are made for three distributions and the results show that over a wide range of parameter values, including most of those which are likely to be of practical interest, the application of Bayesian methodology produces better decisions (resulting in lower expected total cost) than the approach of using a point estimate for the parameter, with no increase in computation or complexity. This suggests that this methodology could usefully be applied to this and other decision models and also provides a strong justification for the use of the full Bayesian approach when a meaningful prior is available.
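
A sketch of the comparison under assumed specifics (exponential demand with unknown mean, a uniform prior on that mean over its permitted range, and a grid posterior): the full-Bayes order quantity is the critical fractile of the posterior predictive, while the plug-in quantity applies the same fractile to the point-estimated demand distribution. All numbers are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
data = rng.exponential(50.0, size=8)       # a short demand history (true mean 50)

co, cu = 1.0, 4.0                          # overage / underage unit costs
p = cu / (cu + co)                         # critical fractile

lo, hi = 10.0, 200.0                       # permitted range for the unknown mean theta
theta = np.linspace(lo, hi, 2000)          # uniform prior => posterior proportional to likelihood
loglik = -data.size * np.log(theta) - data.sum() / theta
w = np.exp(loglik - loglik.max())
w /= w.sum()                               # posterior weights on the uniform grid

def pred_cdf(q):                           # posterior predictive P(demand <= q)
    return np.sum((1.0 - np.exp(-q / theta)) * w)

q_bayes = brentq(lambda q: pred_cdf(q) - p, 1e-6, 10 * hi)
q_plugin = -data.mean() * np.log(1.0 - p)  # exponential quantile at the point estimate

def expected_cost(q):                      # cost averaged over the posterior predictive
    shortage = theta * np.exp(-q / theta)              # E[(D - q)+ | theta]
    overage = q - theta * (1.0 - np.exp(-q / theta))   # E[(q - D)+ | theta]
    return np.sum((cu * shortage + co * overage) * w)

print("order quantities:  Bayes", q_bayes, " plug-in", q_plugin)
print("expected costs:    Bayes", expected_cost(q_bayes), " plug-in", expected_cost(q_plugin))
```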

19.
A Bayesian system reliability assessment under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed to be fuzzy random variables with fuzzy prior distributions. The (conventional) Bayesian estimation method is used to create the fuzzy Bayes point estimator of system reliability, based on the Exponential distribution, by invoking the well-known "Resolution Identity" theorem in fuzzy set theory. We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. To achieve this, we transform the original problem into a nonlinear programming problem, which is then divided into four subproblems to simplify computation. Finally, the subproblems can be solved using any commercial optimizer, e.g., GAMS or LINGO (LINDO).

20.
In change point problems in general we should answer three questions: how many changes are there? Where are they? And what is the distribution of the data within the blocks? In this paper, we develop a new full predictivistic approach for modeling observations within the same block and consider the product partition model (PPM) for treating the change point problem. The PPM brings more flexibility into the change point problem because it treats the number of changes and the instants when the changes occurred as random variables. A full predictivistic characterization of the model provides a more tractable way to elicit the prior distribution of the parameters of interest, since prior opinions are required only about observable quantities. We also present an application to the problem of identifying multiple change points in the mean and variance of a stock market return time series.
