Similar Literature
20 similar records found (search time: 879 ms)
1.
Exponential smoothing methods are widely used as forecasting techniques in inventory systems and business planning, where reliable prediction intervals are also required for a large number of series. This paper describes a Bayesian forecasting approach based on the Holt–Winters model that yields accurate prediction intervals. We show how to build the intervals by incorporating the uncertainty due to the unknown smoothing parameters through a linear heteroscedastic model. The linear formulation simplifies derivation of the posterior distribution of the unknowns; since that posterior is not available analytically, a random sample is drawn from it with an acceptance sampling procedure, and a Monte Carlo approach then yields the predictive distributions. On the basis of this scheme, point forecasts and prediction intervals are obtained. The accuracy of the proposed Bayesian approach to building prediction intervals is tested on the 3003 time series from the M3-competition.
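As a rough illustration of the final Monte Carlo step only (not the authors' acceptance-sampling scheme), the sketch below assumes posterior draws of the additive Holt–Winters smoothing parameters are already available from some sampler, and simulates future sample paths with Gaussian innovations to obtain interval bounds. All function names, the trend/seasonal initialisation, and the Gaussian error assumption are illustrative.

```python
import numpy as np

def hw_filter(y, alpha, beta, gamma, m):
    """Additive Holt-Winters recursions; returns final states and residual sd."""
    y = np.asarray(y, float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m   # simple trend initialisation
    season = y[:m] - level
    resid = []
    for t in range(m, len(y)):
        s = season[t % m]
        resid.append(y[t] - (level + trend + s))      # one-step-ahead error
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    return level, trend, season, np.std(resid)

def prediction_intervals(y, draws, m, h, n_paths=200, coverage=0.95, seed=0):
    """Pool simulated future paths over posterior draws; return interval bounds."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    n, sims = len(y), []
    for alpha, beta, gamma in draws:                  # posterior draws of smoothing unknowns
        level, trend, season, sd = hw_filter(y, alpha, beta, gamma, m)
        for _ in range(n_paths):
            l, b, s = level, trend, season.copy()
            path = []
            for k in range(h):
                idx = (n + k) % m
                yk = l + b + s[idx] + rng.normal(0.0, sd)   # simulate next observation
                prev_l = l
                l = alpha * (yk - s[idx]) + (1 - alpha) * (l + b)
                b = beta * (l - prev_l) + (1 - beta) * b
                s[idx] = gamma * (yk - l) + (1 - gamma) * s[idx]
                path.append(yk)
            sims.append(path)
    q = 100 * (1 - coverage) / 2
    return np.percentile(sims, [q, 100 - q], axis=0)  # (lower, upper) per horizon
```

Here `draws` stands in for the output of the paper's acceptance sampler; plugging in a single point estimate instead reproduces ordinary simulation-based intervals that ignore parameter uncertainty, which is exactly the gap the Bayesian scheme closes.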

2.
Optimal subset selection within the general family of threshold autoregressive moving-average (TARMA) models is considered. The usual complexity of model/order selection is compounded by the uncertainty of unknown threshold levels and an unknown delay lag. The Monte Carlo method of Bayesian model averaging provides a way to overcome such model uncertainty. Incorporating the idea of Bayesian model averaging, a modified stochastic search variable selection method is adapted to subset selection in TARMA models by adding latent indicator variables for all potential model lags to the proposed Markov chain Monte Carlo sampling scheme. Metropolis–Hastings methods are employed to deal with the well-known difficulty of including moving-average terms in the model, and a novel proposal mechanism is designed for this purpose. A Bayesian comparison of two hyper-parameter settings is carried out via a simulation study. The results demonstrate that the modified method performs favourably under reasonable sample sizes and appropriate settings of the necessary hyper-parameters. Finally, application to four real datasets illustrates that the proposed method can deliver promising and parsimonious models from more than 16 million possible subsets.

3.
The threshold autoregressive model with generalized autoregressive conditionally heteroskedastic (GARCH) specification is a popular nonlinear model that captures the well-known asymmetric phenomena in financial market data. The switching mechanism of hysteretic autoregressive GARCH models differs from that of the threshold autoregressive model with GARCH in that regime switching may be delayed while the hysteresis variable lies in a hysteresis zone. This paper conducts a Bayesian comparison among the competing models by designing an adaptive Markov chain Monte Carlo sampling scheme. We illustrate the performance of three criteria for comparing models with fat-tailed and/or skewed errors: the deviance information criterion, Bayesian predictive information, and an asymptotic version of Bayesian predictive information. A simulation study highlights the properties of the three Bayesian criteria and their accuracy and favorable performance as model selection tools. We demonstrate the proposed method in an empirical study of 12 international stock markets, which provides strong evidence for both models with skewed, fat-tailed innovations. Copyright © 2016 John Wiley & Sons, Ltd.
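For context, the first of the three criteria is routinely computed from MCMC output as DIC = D̄ + pD, where D(θ) = −2 log L(θ) and pD = D̄ − D(θ̄). A minimal sketch, assuming a user-supplied log-likelihood and posterior draws; it is generic and not tied to the hysteretic GARCH models of the paper.

```python
import numpy as np

def dic(loglik, draws):
    """Deviance information criterion from posterior draws.

    loglik: function mapping a parameter vector to the data log-likelihood
    draws:  array of posterior samples, one parameter vector per row
    """
    deviances = np.array([-2.0 * loglik(th) for th in draws])
    d_bar = deviances.mean()                        # posterior mean deviance
    d_at_mean = -2.0 * loglik(draws.mean(axis=0))   # deviance at posterior mean
    p_d = d_bar - d_at_mean                         # effective number of parameters
    return d_bar + p_d, p_d

# Toy usage with i.i.d. N(mu, 1) data and (hypothetical) posterior draws of mu:
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
mu_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=(2000, 1))
ll = lambda th: -0.5 * np.sum((y - th[0]) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)
print(dic(ll, mu_draws))   # smaller DIC indicates the preferred model
```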

4.
This article suggests a method for variable and transformation selection based on posterior probabilities. Our approach allows consideration of all possible combinations of untransformed and transformed predictors, along with transformed and untransformed versions of the response. To transform the predictors, we use a change-point model, or “change-point transformation,” which can yield more interpretable models and transformations than the standard Box–Tidwell approach. We also address model uncertainty in selection: by averaging over models, we account for the uncertainty inherent in inference based on a single model chosen from the set under consideration. We use a Markov chain Monte Carlo model composition (MC3) method, which allows us to average over linear regression models even when the space of models under consideration is very large, and which selects variables and transformations at the same time. In an example, we show that model averaging improves predictive performance compared with any single model that might reasonably be selected, both in overall predictive score and in the coverage of prediction intervals. Software implementing the proposed methodology is available via StatLib.
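A minimal sketch of the MC3 idea for linear regression, under two simplifying assumptions the paper does not make: the marginal likelihood is approximated by −BIC/2 rather than computed exactly, and transformation selection is omitted. The sampler is a Metropolis random walk over binary inclusion vectors; all names are illustrative.

```python
import numpy as np

def bic_score(X, y, gamma):
    """Approximate log marginal likelihood of a model by -BIC/2 (OLS fit)."""
    cols = np.flatnonzero(gamma)
    Xg = np.column_stack([np.ones(len(y)), X[:, cols]])   # intercept always in
    beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    rss = np.sum((y - Xg @ beta) ** 2)
    n, k = len(y), Xg.shape[1]
    return -0.5 * (n * np.log(rss / n) + k * np.log(n))

def mc3(X, y, n_iter=5000, seed=0):
    """MC3: Metropolis walk over models differing by one variable."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = rng.integers(0, 2, size=p)       # current inclusion indicators
    score = bic_score(X, y, gamma)
    visits = {}
    for _ in range(n_iter):
        prop = gamma.copy()
        prop[rng.integers(p)] ^= 1           # flip one indicator
        new = bic_score(X, y, prop)
        if np.log(rng.random()) < new - score:   # uniform prior over models
            gamma, score = prop, new
        key = tuple(gamma)
        visits[key] = visits.get(key, 0) + 1
    return visits   # visit frequencies approximate posterior model probabilities
```

Model-averaged predictions then weight each visited model's forecast by its visit frequency, which is what delivers the coverage gains the abstract reports.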

5.
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. When there are linear dependencies among potential predictor variables, existing Markov chain Monte Carlo algorithms for sampling from the posterior distribution over the model and parameter space in Bayesian variable selection problems may not work well. This article describes a sampling algorithm, based on the Swendsen-Wang algorithm for the Ising model, that works well when the predictors are far from orthogonality. In variable selection for generalized linear models, different models can be indexed by a binary parameter vector whose entries indicate whether each predictor is included in the model. The posterior distribution over models is then a distribution on this collection of binary strings; viewing it as a binary spatial field, we sample from it with a scheme inspired by the Swendsen-Wang algorithm for the Ising model. The algorithm extends a similar algorithm for variable selection in linear models. Its benefits are demonstrated on both real and simulated data.

6.
This article proposes a new Bayesian approach to prediction with continuous covariates. The Bayesian partition model constructs arbitrarily complex regression and classification surfaces by splitting the covariate space into an unknown number of disjoint regions. Within each region the data are assumed to be exchangeable and to come from some simple distribution. Using conjugate priors, the marginal likelihood can be obtained analytically for any proposed partitioning of the space, where the number and locations of the regions are assumed unknown a priori. Markov chain Monte Carlo simulation techniques are used to obtain predictive distributions at the design points by averaging across posterior samples of partitions.

7.
Increasingly large volumes of space-time data are collected by mobile computing applications, and in many cases the temporal data are obtained by registering events, for example telecommunication or Web traffic data. Having both spatial and temporal dimensions adds substantial complexity to data analysis and inference tasks, and the computational burden of fitting Bayesian hierarchical models grows rapidly because fitting involves repeated inversion of large matrices. The primary focus of this paper is on developing space-time autoregressive models within a hierarchical Bayesian setup. To handle large datasets, a recently developed Gaussian predictive process approximation is extended to include autoregressive terms of latent space-time processes: a space-time autoregressive process supported on a smaller set of knot locations is spatially interpolated to approximate the original space-time process. The resulting model is specified within a hierarchical Bayesian framework, and Markov chain Monte Carlo techniques are used for inference. The proposed model is applied to the daily maximum 8-hour average ground-level ozone concentration data from 1997 to 2006 over a large study region in the Eastern United States. The methods allow accurate spatial prediction of a temporally aggregated ozone summary, known as the primary ozone standard, along with its uncertainty, at any unmonitored location during the study period. Trends in the spatial patterns of many features of the posterior predictive distribution of the primary standard, such as the probability of noncompliance with the standard, are obtained and illustrated. Copyright © 2012 John Wiley & Sons, Ltd.
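The core of the predictive process approximation is a kriging-style projection: the process at an arbitrary site is interpolated from its values at a small set of knots via w̃(s) = c(s, S*)ᵀ C*⁻¹ w*. A minimal numpy sketch, assuming an exponential covariance and arbitrary toy knot locations; the paper's space-time autoregressive extension is not reproduced here.

```python
import numpy as np

def exp_cov(A, B, sigma2=1.0, phi=1.0):
    """Exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return sigma2 * np.exp(-phi * d)

def predictive_process(sites, knots, w_star, sigma2=1.0, phi=1.0):
    """Interpolate a knot-level process to arbitrary sites:
    w_tilde(s) = c(s, knots)' C*^{-1} w*."""
    C_star = exp_cov(knots, knots, sigma2, phi)   # small knot covariance
    c = exp_cov(sites, knots, sigma2, phi)        # cross-covariance
    return c @ np.linalg.solve(C_star, w_star)    # only a small matrix is inverted

# Toy usage: 5 knots supporting a process evaluated at 100 sites
rng = np.random.default_rng(1)
knots = rng.uniform(0, 1, size=(5, 2))
w_star = rng.multivariate_normal(np.zeros(5), exp_cov(knots, knots))
sites = rng.uniform(0, 1, size=(100, 2))
w_tilde = predictive_process(sites, knots, w_star)
```

The computational gain is that all matrix solves involve only the knot covariance (5×5 here), however many observation sites there are.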

8.
We study a new approach to statistical prediction in the Dempster–Shafer framework. Given a parametric model, the random variable to be predicted is expressed as a function of the parameter and a pivotal random variable. A consonant belief function in the parameter space is constructed from the likelihood function and combined with the pivotal distribution to yield a predictive belief function that quantifies the uncertainty about the future data. The method reduces to Bayesian prediction when a probabilistic prior is available. The asymptotic consistency of the method is established in the iid case under some assumptions. The predictive belief function can be approximated to any desired accuracy using Monte Carlo simulation and nonlinear optimization. As an illustration, the method is applied to multiple linear regression.

9.
Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of classification outcomes, which is of crucial importance for safety-critical applications. The uncertainty of classification is determined by a trade-off between the amount of training data, the classifier diversity, and the required performance. The interpretability of MCSs can also give useful information to the experts responsible for making reliable classifications, which makes Decision Trees (DTs) attractive classification models. The required diversity of MCSs built on such models can be achieved with two techniques, Bayesian model averaging and randomised DT ensembles, both of which have shown promising results on real-world problems. In this paper we experimentally compare the classification uncertainty of Bayesian model averaging with a restarting strategy against randomised DT ensembles on a synthetic dataset and on domain problems commonly used in the machine learning community. To make Bayesian DT averaging feasible, we use a Markov chain Monte Carlo technique. Classification uncertainty is evaluated within an Uncertainty Envelope technique that deals with the class posterior distribution and a given confidence probability; exploring the full posterior distribution, this technique produces realistic estimates that are easily interpreted in statistical terms. In our experiments we found that the Bayesian DTs are superior to the randomised DT ensembles within the Uncertainty Envelope technique.
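A hedged sketch of the agreement idea behind such an envelope, not the paper's exact Uncertainty Envelope technique: given the votes of an ensemble (posterior DT samples or randomised trees), an input is treated as confidently classified only when the agreement on the majority class reaches a given confidence probability. All names and the agreement rule are illustrative.

```python
import numpy as np

def uncertainty_envelope(votes, conf=0.99):
    """Split ensemble decisions into sure / uncertain regions.

    votes: (n_samples, n_classifiers) array of predicted class labels
    conf:  required agreement level (confidence probability)
    Returns the majority-vote class and a flag that is True when the
    ensemble agreement reaches the confidence threshold.
    """
    n, m = votes.shape
    preds = np.empty(n, dtype=votes.dtype)
    sure = np.empty(n, dtype=bool)
    for i in range(n):
        classes, counts = np.unique(votes[i], return_counts=True)
        k = counts.argmax()
        preds[i] = classes[k]                 # majority-vote prediction
        sure[i] = counts[k] / m >= conf       # confident only if agreement is high
    return preds, sure
```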

10.
In this article we develop a new approach within the framework of asset pricing models that incorporates two key features of latent volatility: co-movement among conditionally heteroscedastic financial returns and switching between different unobservable regimes. By combining latent factor models with hidden Markov chain models, we derive a dynamical local model for segmentation and prediction of multivariate conditionally heteroscedastic financial time series. We concentrate on situations where the factor variances are modelled by univariate generalized quadratic autoregressive conditionally heteroscedastic processes. The expectation-maximization algorithm that we develop for maximum likelihood estimation is based on a quasi-optimal switching Kalman filter combined with a generalized pseudo-Bayesian approximation, which yields inferences about the unobservable path of the common factors, their variances, and the latent variable of the state process. Extensive Monte Carlo simulations and preliminary experiments with daily foreign exchange rate returns of eight currencies show promising results. Copyright © 2007 John Wiley & Sons, Ltd.

11.
The calibration of stochastic differential equations used to model spot prices in electricity markets is investigated. As an alternative to standard likelihood maximization, a fully Bayesian paradigm is explored that relies on Markov chain Monte Carlo (MCMC) stochastic simulation and provides the posterior distributions of the model parameters. The proposed method is applied to one- and two-factor stochastic models, using both simulated and real data. The results demonstrate good agreement between the maximum likelihood and MCMC point estimates. The Bayesian approach, however, provides a more complete characterization of model uncertainty, information that can be exploited to obtain a more realistic assessment of the forecasting error. To further validate the MCMC approach, the posterior distribution of the Italian electricity price volatility is explored for different maturities and compared with the corresponding maximum likelihood estimates.
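A minimal sketch of such a calibration for a one-factor mean-reverting (Ornstein-Uhlenbeck) model, dX = κ(θ − X)dt + σdW: a random-walk Metropolis sampler over (κ, θ, σ) using the exact Gaussian transition density. Flat priors, a common step size, and the OU choice itself are assumptions for illustration; the paper's models and priors may differ.

```python
import numpy as np

def ou_loglik(x, dt, kappa, theta, sigma):
    """Exact Gaussian transition density of an Ornstein-Uhlenbeck process."""
    if kappa <= 0 or sigma <= 0:
        return -np.inf                         # reject invalid parameters
    a = np.exp(-kappa * dt)
    mean = theta + (x[:-1] - theta) * a
    var = sigma ** 2 * (1 - a ** 2) / (2 * kappa)
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x[1:] - mean) ** 2 / var)

def metropolis_ou(x, dt, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis over (kappa, theta, sigma) with flat priors."""
    rng = np.random.default_rng(seed)
    cur = np.array([1.0, x.mean(), x.std()])   # crude starting point
    cur_ll = ou_loglik(x, dt, *cur)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = cur + step * rng.normal(size=3)  # step sizes would be tuned in practice
        prop_ll = ou_loglik(x, dt, *prop)
        if np.log(rng.random()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
        chain[i] = cur
    return chain   # posterior draws: summarize for point estimates and uncertainty
```

Posterior quantiles of the chain give exactly the kind of uncertainty characterization the abstract contrasts with a bare maximum likelihood point estimate.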

12.
Although classic exponential-smoothing models and grey prediction models are widely used in time series forecasting, this paper shows that they are susceptible to fluctuations in the sample. A new fractional bidirectional weakening buffer operator for time series prediction is proposed that effectively reduces the negative impact of unavoidable sample fluctuations. It overcomes the limitations of existing weakening buffer operators and permits better control of fluctuations across the entire sample period. Because it improves the smoothness and stability of the series, the new operator better captures the real trend in the raw data and improves forecast accuracy. The paper then proposes a novel methodology that combines the new bidirectional weakening buffer operator with the classic grey prediction model. In a number of case studies, this method is compared with several classic models, such as the exponential smoothing model and the autoregressive integrated moving average model. Three error measures show that the new method outperforms the others, especially when there are data fluctuations near the forecasting horizon. The relative advantage of the new method for small-sample prediction is further investigated, and the results demonstrate that the model based on the proposed fractional bidirectional weakening buffer operator has higher forecasting accuracy.
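A sketch of the buffer-then-forecast pipeline, with two stated substitutions: the classical average weakening buffer operator stands in for the paper's fractional bidirectional operator (whose definition is not given in the abstract), and the classic GM(1,1) model plays the grey-prediction role. Both functions are illustrative.

```python
import numpy as np

def awbo(x):
    """Classical average weakening buffer operator (a stand-in for the paper's
    fractional bidirectional operator): each point becomes the mean of itself
    and all later points, damping fluctuations in the raw series."""
    x = np.asarray(x, float)
    return np.array([x[k:].mean() for k in range(len(x))])

def gm11_forecast(x0, h):
    """Classic GM(1,1) grey model: fit on x0, return h out-of-sample forecasts."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # mean-generated background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)   # develop/grey coefficients
    k = np.arange(len(x0) + h)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # time-response function
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[len(x0):]

# Buffered-then-forecast pipeline, mirroring the combined methodology:
# forecasts = gm11_forecast(awbo(raw_series), h=4)
```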

13.
We discuss a new class of spatially varying, simultaneous autoregressive (SVSAR) models motivated by interest in flexible, non-stationary spatial modelling scalable to higher dimensions. SVSAR models are hierarchical Markov random fields that extend traditional SAR models. We develop Bayesian analysis of SVSAR models using Markov chain Monte Carlo methods, with extensions to spatio-temporal contexts that address data assimilation in computer models. A motivating application in atmospheric science concerns global CO emissions, where predictions from computer models are assessed and refined against high-resolution global satellite imagery data. Application to synthetic and real CO datasets demonstrates the potential of SVSAR models to flexibly represent inhomogeneous spatial processes on lattices and to improve estimation and prediction of spatial fields. The SVSAR approach is computationally attractive in even very large problems; the computational efficiencies are enabled by exploiting the sparsity of the high-dimensional precision matrices.

14.
Healthcare fraud and abuse are a serious challenge to healthcare payers and to society as a whole. This article presents a predictive model for fraud and abuse detection in health insurance, based on a training dataset of manually reviewed claims. The goal of the analysis is to predict fraud and abuse probabilities for new invoices. The prediction is based on a wide framework of fraud and abuse reports that examine the behavior of medical providers and insured members by measuring systematic deviations from usual patterns in medical claims data. We show that models which use the results of the reports directly as covariates do not exploit the full potential in terms of predictive quality. Instead, we propose a multinomial Bayesian latent variable model that summarizes behavioral patterns in latent variables and predicts the different fraud and abuse probabilities. Model parameters are estimated with a Markov chain Monte Carlo (MCMC) algorithm using Bayesian shrinkage techniques. The presented approach improves the identification of fraudulent and abusive claims compared with several benchmark approaches.

15.
The two-parameter exponential distribution is proposed as the underlying model, and prediction bounds for future observations are obtained using a Bayesian approach. Prediction intervals are derived for unobserved lifetimes in one-sample and two-sample prediction based on type II doubly censored samples. A numerical example is given to illustrate the procedures; the prediction intervals are investigated via the Monte Carlo method, and their accuracy is reported. Supported by the National Natural Science Foundation of China (79970022) and the Aviation Fund (02J53079).

16.
Model averaging is a good alternative to model selection: it deals with the uncertainty of the model selection process and makes full use of the information in the various candidate models. Most existing model averaging criteria, however, do not consider the influence of outliers on the estimation procedure. The purpose of this paper is to develop a robust model averaging approach based on the local outlier factor (LOF) algorithm, which downweights outliers in the covariates. Asymptotic optimality of the proposed robust model averaging estimator is derived under some regularity conditions. Further, we prove that the LOF-based weight estimator is consistent for the theoretically optimal weight vector. Numerical studies, including Monte Carlo simulations and a real data example, illustrate the proposed methodology.
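A minimal sketch of turning LOF scores into observation weights, using scikit-learn's LocalOutlierFactor; the 1/LOF weighting rule is an illustrative assumption, not the paper's estimator, which is defined through an asymptotically optimal weighting criterion.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def lof_observation_weights(X, n_neighbors=20):
    """Downweight observations whose covariates look like outliers.

    LOF is ~1 for inliers and grows for outliers, so 1/LOF is a simple
    bounded weight. Requires n_neighbors < n_samples.
    """
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    lof.fit(X)
    scores = -lof.negative_outlier_factor_      # positive LOF scores
    w = 1.0 / np.maximum(scores, 1.0)           # cap inlier weights at 1
    return w / w.sum()

# The weights would then enter each candidate model's weighted fit, e.g.:
# np.polyfit(x, y, deg, w=lof_observation_weights(x[:, None]))
```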

17.
The internal-rating-based Basel II approach increases the need for more realistic default probability models. In this paper we follow the approach of McNeil and Wendin (J. Empirical Finance, 2007) by constructing generalized linear mixed models for estimating default probabilities from annual data on companies with different credit ratings. In contrast to that work, the models considered here allow parsimonious parametric specifications to capture simultaneously the dependence of the default probabilities on time and on credit ratings; macro-economic variables can also be included. Estimation of all model parameters is facilitated by a Bayesian approach using Markov chain Monte Carlo methods. Special emphasis is given to the predictive capabilities of the models considered; in particular, predictable model specifications are used. The empirical study using default data from Standard and Poor's gives evidence that the correlation between credit ratings decreases as the ratings move further apart, and that it is higher than the correlation induced by the autoregressive time dynamics. Copyright © 2008 John Wiley & Sons, Ltd.

18.
Identifying periods of recession and expansion is a challenging topic of ongoing interest with important economic and monetary policy implications. Given the current state of the global economy, significant attention has recently been devoted to identifying and forecasting economic recessions. We therefore introduce a novel class of Bayesian hierarchical probit models that take advantage of dimension-reduced time-frequency representations of various market indices. The proposed approach can be viewed as a Bayesian mixed frequency data regression model, as it relates high-frequency daily data observed over several quarters to a binary quarterly response indicating recession or expansion. More specifically, our model directly incorporates time-frequency representations of the entire high-dimensional non-stationary time series of daily log returns, over several quarters, as a regressor in a predictive model, while quantifying various sources of uncertainty. The necessary dimension reduction is achieved by treating the time-frequency representation (spectrogram) as an “image” and finding its empirical orthogonal functions; further dimension reduction is accomplished through stochastic search variable selection. Overall, our dimension reduction approach provides an extremely powerful tool for feature extraction, yielding an interpretable image of the features that predict recessions. The effectiveness of the model is demonstrated through out-of-sample identification (nowcasting) and multistep-ahead prediction (forecasting) of economic recessions; our results achieve greater than 85% and 80% out-of-sample forecasting accuracy for recessions and expansions, respectively, even three quarters ahead. Finally, we illustrate the added value of including time-frequency information from the NASDAQ index when identifying and predicting recessions. Copyright © 2012 John Wiley & Sons, Ltd.
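A hedged sketch of the spectrogram-as-image feature extraction step only (the probit regression and stochastic search variable selection are omitted): compute a log-spectrogram of a daily return series and take the leading principal components of the resulting image as EOF features. Function names, the log transform, and the window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_eofs(returns, n_eofs=5, nperseg=64):
    """Time-frequency 'image' of a return series, reduced to a few EOFs.

    returns: 1-D array of daily log returns spanning several quarters
    Returns the leading empirical orthogonal functions (frequency patterns),
    their singular values, and the associated time scores, which could then
    serve as low-dimensional regressors.
    """
    f, t, Sxx = spectrogram(returns, nperseg=nperseg)
    img = np.log(Sxx + 1e-12)                 # treat log power as an image
    img -= img.mean(axis=1, keepdims=True)    # center each frequency row over time
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :n_eofs], s[:n_eofs], Vt[:n_eofs]   # EOFs, variances, time scores
```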

19.
GDP time series forecasting based on an ensemble of ARIMA and neural network models
This paper analyses the forecasting characteristics, strengths, and weaknesses of the autoregressive integrated moving average (ARIMA) model and the neural network (NN) model, and on that basis builds a GDP time series forecasting model and algorithm that integrates the two. The basic idea is to exploit each model's forecasting strengths in the linear and nonlinear spaces, respectively: the data structure of the GDP time series is decomposed into a linear autocorrelated main component and a nonlinear residual. The ARIMA model first forecasts the linear component, the NN model then estimates the nonlinear residual, and the two are finally combined into the forecast for the whole series. Simulation experiments show that the forecasting accuracy of the ensemble model is significantly higher than that of either single model, confirming the effectiveness of the ensemble model for GDP forecasting.
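A minimal sketch of the decomposition the abstract describes — ARIMA for the linear autocorrelated component, a small neural network on the ARIMA residuals for the nonlinear component, forecasts summed — assuming statsmodels and scikit-learn. The ARIMA order, residual lag count, and network size are illustrative, not the authors' settings.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, h=4, order=(1, 1, 1), n_lags=4, seed=0):
    """ARIMA + NN hybrid: linear forecast plus NN residual forecast."""
    arima = ARIMA(np.asarray(y, float), order=order).fit()
    lin_fc = np.asarray(arima.forecast(h))        # linear component forecast
    resid = np.asarray(arima.resid)               # nonlinear residual series

    # Train the NN to map n_lags past residuals to the next residual.
    X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
    t = resid[n_lags:]
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=seed)
    nn.fit(X, t)

    # Recursive residual forecasts, feeding predictions back in as inputs.
    window = list(resid[-n_lags:])
    nonlin_fc = []
    for _ in range(h):
        e = nn.predict(np.array(window[-n_lags:])[None, :])[0]
        nonlin_fc.append(e)
        window.append(e)
    return lin_fc + np.array(nonlin_fc)           # ensemble forecast
```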

20.
Numerous multivariate time series admit weak vector autoregressive moving-average (VARMA) representations, in which the errors are uncorrelated but not necessarily independent, nor martingale differences. These models are called weak VARMA, in opposition to the standard VARMA models, also called strong VARMA models, in which the error terms are assumed to be independent and identically distributed (iid). This article considers order selection for weak VARMA models using information criteria. It is shown that use of the standard information criteria is often not justified when the iid assumption on the noise is relaxed. We therefore propose modified versions of the Schwarz (Bayesian) information criterion and of the Hannan–Quinn criterion for identifying the orders of weak VARMA models. Monte Carlo experiments show that the proposed modified criteria estimate the model orders more accurately than the standard ones. An illustrative application using the squared daily returns of financial series is presented.
