Similar Literature
20 similar articles found.
1.
We address the problem of forecasting real-world time series with a proportion of zero values and great variability among the nonzero values. In order to calculate forecasts for a time series, the model coefficients must be estimated. The appropriate choice of values for the smoothing parameters in exponential smoothing methods relies on minimizing the fitting errors over the historical data. We adapt the generalized Holt–Winters formulation so that it treats the starting values of the local components of level, trend and seasonality as decision variables of the nonlinear programming problem associated with this forecasting procedure. A spreadsheet model is used to solve the optimization problems efficiently. We show that our approach produces accurate forecasts with little data per product.
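A minimal sketch of the kind of nonlinear program described above, assuming an additive Holt–Winters formulation: the smoothing parameters and the starting level, trend and seasonal states are all decision variables, and the objective is the one-step-ahead squared fitting error. The data, seasonal period and optimizer settings are illustrative, not those of the paper (which uses a spreadsheet solver).

```python
import numpy as np
from scipy.optimize import minimize

def hw_sse(params, y, m):
    """One-step-ahead sum of squared errors for additive Holt-Winters."""
    alpha, beta, gamma = params[:3]          # smoothing parameters
    level, trend = params[3], params[4]      # starting level and trend
    season = list(params[5:5 + m])           # starting seasonal states
    sse = 0.0
    for t, obs in enumerate(y):
        s = season[t % m]
        err = obs - (level + trend + s)
        sse += err ** 2
        new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - new_level) + (1 - gamma) * s
        level = new_level
    return sse

m = 12                                        # assumed monthly seasonality
rng = np.random.default_rng(0)
y = rng.poisson(5, 60) * (rng.random(60) > 0.3)   # sparse demand with many zeros

x0 = np.r_[0.2, 0.1, 0.1, y.mean(), 0.0, np.zeros(m)]
bounds = [(0, 1)] * 3 + [(None, None)] * (2 + m)  # starting states are unbounded
res = minimize(hw_sse, x0, args=(y, m), bounds=bounds, method="L-BFGS-B")
alpha, beta, gamma = res.x[:3]                    # fitted smoothing parameters
```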

2.
Time series are found widely in engineering and science. We study forecasting of stochastic, dynamic systems based on observations from multivariate time series. We model the domain as a dynamic multiply sectioned Bayesian network (DMSBN) and populate the domain with a set of proprietary, cooperative agents. We propose an algorithm suite that allows the agents to perform one-step forecasts with distributed probabilistic inference. We show that as long as the DMSBN is structurally time-invariant (possibly parametrically time-variant), the forecast is exact and its time complexity is exponentially lower than that of dynamic Bayesian networks (DBNs). In comparison with independent DBN-based agents, multiagent DMSBNs produce more accurate forecasts. The effectiveness of the framework is demonstrated through experiments on a supply chain testbed.

3.
The macroeconomic climate influences operations with regard to, e.g., raw material prices, financing, supply chain utilization and demand quotas. In order to adapt to the economic environment, decision-makers across the public and private sectors require accurate forecasts of the economic outlook. Existing predictive frameworks base their forecasts primarily on time series analysis, as well as the judgments of experts. As a consequence, current approaches are often biased and prone to error. In order to reduce forecast errors, this paper presents an innovative methodology that extends lag variables with unstructured data in the form of financial news: (1) we apply a variety of machine learning models to word counts as a high-dimensional input. However, this approach suffers from low interpretability and overfitting, motivating the following remedies. (2) We follow the intuition that the economic climate is driven by general sentiments and suggest a projection of words onto latent semantic structures as a means of feature engineering. (3) We propose a semantic path model, together with an estimation technique based on regularization, in order to yield full interpretability of the forecasts. We demonstrate the predictive performance of our approach by utilizing 80,813 ad hoc announcements in order to make long-term forecasts of up to 24 months ahead for key macroeconomic indicators. Back-testing reveals a considerable reduction in forecast errors.
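A hedged sketch of step (2) above: word features are projected onto a small number of latent semantic dimensions before a regularized regression combines them with lag variables. The corpus, indicator values and Lasso penalty are placeholders, and scikit-learn's TF-IDF/TruncatedSVD stack stands in for whatever projection the paper actually uses.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Lasso

announcements = [
    "revenue outlook raised on strong orders",
    "profit warning issued amid weak demand",
    "dividend increased after record quarter",
    "restructuring announced, guidance cut",
]                                                   # toy stand-in for ad hoc announcements
indicator = np.array([0.8, -0.5, 0.6, -0.7])        # change of a macro indicator per period
lagged = np.array([[0.2], [0.7], [-0.4], [0.5]])    # lag variables (placeholder values)

tfidf = TfidfVectorizer()
X_words = tfidf.fit_transform(announcements)        # high-dimensional word features
svd = TruncatedSVD(n_components=2, random_state=0)  # latent semantic "sentiment" dimensions
X_latent = svd.fit_transform(X_words)

X = np.hstack([lagged, X_latent])                   # lag variables extended by text features
model = Lasso(alpha=0.05).fit(X, indicator)         # regularization aids interpretability
print(dict(zip(["lag", "topic1", "topic2"], model.coef_.round(3))))
```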

4.

A new computational approach based on the pointwise regularity exponent of the price time series is proposed to estimate Value at Risk. The forecasts obtained are compared with those of two widely used methodologies: the variance-covariance method and the exponentially weighted moving average method. Our findings show that in two very turbulent periods of financial markets the forecasts obtained using our algorithm decidedly outperform the two benchmarks, providing more accurate estimates in terms of unconditional coverage, independence, and the magnitude of losses.

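For context, a minimal sketch of the second benchmark mentioned above, an exponentially weighted moving average (RiskMetrics-style) VaR forecast; it is not the authors' regularity-exponent algorithm, and the decay factor, confidence level and returns are assumed values.

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns, lam=0.94, alpha=0.99):
    """One-step-ahead Value at Risk from an EWMA volatility forecast."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2     # exponentially weighted variance
    return -norm.ppf(1 - alpha) * np.sqrt(sigma2)      # loss quantile (positive number)

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, 500)                     # placeholder daily returns
print(f"99% one-day VaR: {ewma_var(returns):.4f}")
```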

5.
In recent years, Singular Spectrum Analysis (SSA), a powerful technique in time series analysis, has been developed and applied to many practical problems. In this paper, the SSA technique based on the minimum variance estimator is introduced, and SSA based on both the minimum variance and least squares estimators is considered for reconstructing and forecasting time series. A well-known data set, the monthly accidental deaths in the USA series, is used in examining the performance of the technique. The results are compared with several classical methods, namely Box–Jenkins SARIMA models, the ARAR algorithm and the Holt–Winters algorithm.
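A compact sketch of basic SSA reconstruction (the rank-truncation/least-squares variant, not the minimum variance estimator discussed above): embed the series into a trajectory matrix, truncate its SVD, and Hankelize back to a series. The window length, number of retained components and the toy series are illustrative.

```python
import numpy as np

def ssa_reconstruct(y, L=12, r=4):
    """Reconstruct a series from the leading r SSA components."""
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_r = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    # diagonal averaging (Hankelization) back to a series of length N
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += X_r[i, j]
            counts[i + j] += 1
    return rec / counts

rng = np.random.default_rng(2)
y = np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 0.3, 120)
trend_plus_cycle = ssa_reconstruct(y)                      # denoised reconstruction
```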

6.
Exponential smoothing methods are widely used as forecasting techniques in inventory systems and business planning, where reliable prediction intervals are also required for a large number of series. This paper describes a Bayesian forecasting approach based on the Holt–Winters model, which allows accurate prediction intervals to be obtained. We show how to build them, incorporating the uncertainty due to the smoothing unknowns, using a linear heteroscedastic model. That linear formulation simplifies obtaining the posterior distribution of the unknowns; a random sample from this posterior, which is not analytical, is provided by an acceptance sampling procedure, and a Monte Carlo approach gives the predictive distributions. On the basis of this scheme, pointwise forecasts and prediction intervals are obtained. The accuracy of the proposed Bayesian forecasting approach for building prediction intervals is tested using the 3003 time series from the M3-competition.

7.
Methods designed for second-order stationary time series can be misleading when applied to nonstationary series, often resulting in inaccurate models and poor forecasts. Hence, testing time series stationarity is important, especially with the advent of the ‘data revolution’ and the recent explosion in the number of nonstationary time series analysis tools. Most existing stationarity tests rely on a single basis. We propose new tests that use nondecimated basis libraries, which permit discovery of a wider range of nonstationary behaviours with greater power whilst preserving acceptable statistical size. Our tests work with a wide range of time series, including those whose marginal distributions possess heavy tails. We provide freeware R software that implements our tests and a range of graphical tools to identify the location and duration of nonstationarities. Theoretical and simulated power calculations show the superiority of our wavelet packet approach in a number of important situations and, hence, we suggest that the new tests are useful additions to the analyst's toolbox.

8.
We use the methodology of singular spectrum analysis (SSA), principal component analysis (PCA), and multi-fractal detrended fluctuation analysis (MFDFA) to investigate the characteristics of vibration time series data from a friction brake. SSA and PCA are used to study the long time-scale characteristics of the time series, while MFDFA is applied to investigate all time scales down to the smallest recorded one. It turns out that the long time-scale dynamics, which is presumably governed by the structural dynamics of the brake system, is dominated by only a few active dimensions and can be well understood in terms of low-dimensional chaotic attractors. The multi-fractal analysis shows that the fast dynamical processes originating in the friction interface are, in turn, truly multi-scale in nature.

9.
Stock time series forecasting has important applications in economics and management, and underpins the success of many commercial and financial institutions. First, singular spectrum analysis is used to reconstruct the stock market time series, reducing noise and extracting the trend series. The C-C algorithm is then used to determine the embedding dimension and delay order of the series, a phase-space reconstruction is performed, and the learning matrix for the neural networks is generated. Boosting and different neural network models are further used to generate the individual members of a neural network ensemble. Finally, a semiparametric regression model with a penalty term combines the ensemble members, and a genetic algorithm selects the optimal smoothing parameter, yielding a neural network ensemble stock market forecasting model based on a genetic algorithm and semiparametric regression. An empirical analysis of the opening prices of the Shanghai Composite Index shows that, compared with traditional time series analysis and other ensemble methods, the proposed method obtains more accurate forecasts. The computational results indicate that the method fully captures the trend of the stock price series and provides an effective approach for financial time series forecasting.

10.
Emergency Medical Service (EMS) systems operate under the pressure of knowing that human lives might be directly at stake. In the public eye there is a natural expectation of efficient response. There is abundant literature on the topic of efficient planning of EMS systems (maximizing expected coverage or minimizing response time). Other objectives have been considered, but the available literature is very sparse compared to efficiency-based work. Furthermore, while real-size EMS systems have been studied, the use of exact models is usually hindered by the amount of computational time required to obtain solutions. We approach the planning of large-scale EMS systems, including fairness considerations, using a Tabu Search-based heuristic with an embedded approximation procedure for the queuing submodel. This allows for the analysis of large-scale real systems, extending the approach in which strategic decisions (location) and operative decisions (dispatching) are combined to balance efficiency and fairness.

11.
Many companies use firm orders-to-date to make forecasts of units to be shipped at a future time t that is j periods away, j = 1, 2,..., h. A number of methods for making these forecasts were developed and evaluated using simulation. The time series for bookings was decomposed into a shipment time series and a time series of factors representing the fraction of shipments booked j periods ahead. Separate techniques were used for the shipments series (namely naive, exponential smoothing and Bayesian procedures) and for the factors (naive and exponential smoothing procedures). The accuracy of these approaches, as well as that of an ARIMA model that ignored orders-to-date, was evaluated using several simulated booking patterns. No approach was dominant, but one of the simplest approaches (naive/smoothing) did comparatively well.
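A hedged sketch of the booking-factor idea described above: the historical fraction of final shipments already booked j periods ahead is smoothed, and the current orders-to-date are scaled up by that fraction to forecast shipments. The data, the lead j = 2 and the smoothing constant are illustrative, and simple exponential smoothing stands in for the full set of procedures compared in the paper.

```python
import numpy as np

def smooth(x, alpha=0.3):
    """Simple exponential smoothing; returns the last smoothed value."""
    s = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

# history: final shipments and the orders that were on hand j = 2 periods earlier
shipments      = np.array([100., 120., 110., 130., 125.])
orders_2_ahead = np.array([ 40.,  54.,  44.,  60.,  55.])
fractions = orders_2_ahead / shipments        # fraction of shipments booked 2 periods ahead

booked_fraction = smooth(fractions)           # smoothed booking factor
current_orders_to_date = 50.0                 # orders already booked for the target period
forecast_shipments = current_orders_to_date / booked_fraction
print(round(forecast_shipments, 1))
```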

12.
13.
Growth curves such as the logistic and Gompertz are widely used for forecasting market development. The approach proposed here is specifically designed for forecasting, rather than for fitting the available data, which is the usual approach with nonlinear least squares regression. Two innovations form the foundation of this approach. First, the growth curves are reformulated from a time basis to an observation basis. This ensures that the available observations and the forecasts form a monotonic series, which is not necessarily true for least squares extrapolations of growth curves. Second, an extension of the Kalman filter, an approach already used with linear forecasting models, is applied to the estimation of the growth curve coefficients. This gives the coefficients the flexibility to change over time if the market environment changes. The extended Kalman filter also provides the information needed to generate confidence intervals about the forecasts. Alternative forecasting approaches, least squares and an adaptive Bass model suggested by Bretschneider and Mahajan, are used to produce comparative forecasts for a number of different data sets. The approach using the extended Kalman filter is shown to be more robust and almost always more accurate than the alternatives.
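A minimal sketch of the second idea, assuming a logistic curve with random-walk dynamics on its coefficients and a standard (time-basis) extended Kalman filter; the observation-basis reformulation, the confidence-interval machinery and the data are not reproduced here, and the noise covariances are placeholder values.

```python
import numpy as np

def logistic(theta, t):
    K, r, t0 = theta
    return K / (1.0 + np.exp(-r * (t - t0)))

def jacobian(theta, t):
    """Derivatives of the logistic observation w.r.t. the coefficients."""
    K, r, t0 = theta
    u = np.exp(-r * (t - t0))
    D = (1.0 + u) ** 2
    return np.array([1.0 / (1.0 + u),          # d h / d K
                     K * (t - t0) * u / D,     # d h / d r
                     -K * r * u / D])          # d h / d t0

theta = np.array([100.0, 0.5, 10.0])           # initial coefficient guesses
P = np.eye(3) * 10.0                           # coefficient uncertainty
Q = np.eye(3) * 1e-3                           # lets the coefficients drift over time
R = 4.0                                        # measurement noise variance

t_obs = np.arange(1, 16)
y_obs = logistic([120.0, 0.4, 8.0], t_obs) + np.random.default_rng(3).normal(0, 2, 15)

for t, y in zip(t_obs, y_obs):
    P = P + Q                                  # predict step (random-walk dynamics)
    H = jacobian(theta, t)
    S = H @ P @ H + R                          # innovation variance (scalar)
    K_gain = P @ H / S
    theta = theta + K_gain * (y - logistic(theta, t))
    P = P - np.outer(K_gain, H) @ P            # (I - K H) P

forecast = logistic(theta, 20)                 # extrapolate the tracked curve
```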

14.
The coefficients of Linear Recurrent Relations (LRRs) play a pivotal role in many forecasting techniques. A precise, closed form of the LRR coefficients enables more accurate forecasts. Since, in real-world situations, time series data are contaminated with noise, extracting the noiseless series is of great importance. This paper seeks to obtain a closed form, with a lower noise level, of the LRR coefficients for a noisy exponential time series. Improving the filtering performance by employing noiseless eigenvectors of the covariance matrix is another novelty of this study. Our simulation results confirm that the proposed approach enhances the filtering and forecasting results.

15.
A new method of constructing mortality forecasts is proposed. This parameterized approach utilizes Generalized Linear Models (GLMs), based on heteroscedastic Poisson (non-additive) error structures and using an orthonormal polynomial design matrix. Principal Component (PC) analysis is then applied to the cross-sectional fitted parameters. The resulting model can be viewed either as a one-factor parameterized model where the time series are the fitted parameters, or as a principal component model, namely the log-bilinear hierarchical statistical association model of Goodman [Goodman, L.A., 1991. Measures, models, and graphical displays in the analysis of cross-classified data. J. Amer. Statist. Assoc. 86(416), 1085-1111], or equivalently as a generalized Lee-Carter model with p interaction terms. Mortality forecasts are obtained by applying dynamic linear regression models to the PCs. Two applications are presented: Sweden (1751-2006) and Greece (1957-2006).
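A rough sketch of the cross-sectional fitting and PC extraction described above, under explicit assumptions: a Legendre polynomial design matrix stands in for the orthonormal design, synthetic deaths and exposures replace the Swedish and Greek data, and statsmodels' Poisson GLM plus scikit-learn's PCA are used; the dynamic linear regression forecasting of the PC scores is not shown.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

ages = np.arange(0, 100)
x = (ages - ages.mean()) / (ages.max() - ages.mean())   # rescale ages to [-1, 1]
design = np.polynomial.legendre.legvander(x, 3)          # degree-3 polynomial design matrix

rng = np.random.default_rng(4)
fitted_params = []
for year in range(20):                                   # synthetic cross-sections by year
    log_rate = -9.0 + 0.085 * ages - 0.01 * year         # toy, slowly improving mortality
    exposure = np.full(ages.shape, 1e4)
    deaths = rng.poisson(np.exp(log_rate) * exposure)
    glm = sm.GLM(deaths, design, family=sm.families.Poisson(),
                 exposure=exposure).fit()
    fitted_params.append(glm.params)                     # fitted parameters for this year

params = np.array(fitted_params)                         # years x coefficients
pca = PCA(n_components=1).fit(params)
pc_scores = pca.transform(params).ravel()                # time series to be forecast
```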

16.
In this paper, we investigate the possibility of using multivariate singular spectrum analysis (SSA), a nonparametric technique in the field of time series analysis, for mortality forecasting. We consider a real data application with nine European countries: Belgium, Denmark, Finland, France, Italy, the Netherlands, Norway, Sweden, and Switzerland, over the period 1900 to 2009, as well as a simulation study based on this data set. The results show the superiority of multivariate SSA over univariate SSA in terms of forecasting accuracy.

17.
Participants in a laboratory experiment judgmentally forecast a time series. In order to support their forecasts, they are given a highly correlated indicator with a constant lead period of one. The subjects are given no information other than the time series realizations and have to base their forecasts on pure eyeballing/chart-reading. Standard economic models do not appropriately account for the features of individual forecasts: these are typically affected by intra- and inter-individual instability of behavior. We extend Otwin Becker's scheme theory, which explains individual forecasts by simple schemes based on visually perceived characteristics of the time series. We find that the forecasts of most subjects can be explained very accurately by only a few schemes.

18.
Intermittent demand is characterised by infrequent demand arrivals, where many periods have zero demand, coupled with varied demand sizes. This dual source of variation renders forecasting for intermittent demand a very challenging task. Many researchers have focused on the development of specialised methods for intermittent demand. However, apart from a case study on hierarchical forecasting, the effects of combining forecasts, which is standard practice for regular demand, have not been investigated. This paper empirically explores the efficiency of forecast combinations in the intermittent demand context. We examine both method and temporal combinations of forecasts. The former are based on combinations of different methods applied to the same time series, while the latter combine forecasts produced on different views of the time series obtained by temporal aggregation. Temporal combinations of single or multiple methods are investigated, leading to a new time-series classification that guides model selection and combination. Results suggest that appropriate combinations improve forecasting performance over single methods, as well as simplifying the forecasting process by limiting the need for manual selection of methods or of the hyper-parameters of well-performing benchmarks. This has direct implications for intermittent demand forecasting in practice.
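An illustrative sketch of a temporal combination in the sense described above: the same method is applied to the original intermittent series and to a temporally aggregated view, and the per-period forecasts are averaged. Simple exponential smoothing stands in for the paper's method pool, and the series, aggregation level and smoothing constant are assumptions.

```python
import numpy as np

def ses_forecast(x, alpha=0.2):
    """Simple exponential smoothing; the last smoothed value is the forecast."""
    s = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

demand = np.array([0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4], dtype=float)

f_original = ses_forecast(demand)                  # forecast on the original view

bucket = 3                                         # aggregate to 3-period buckets
aggregated = demand.reshape(-1, bucket).sum(axis=1)
f_aggregated = ses_forecast(aggregated) / bucket   # bring back to per-period scale

f_combined = 0.5 * (f_original + f_aggregated)     # equal-weight temporal combination
print(round(f_combined, 3))
```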

19.
The initial aim of this study is to propose a hybrid method based on exponential fuzzy time series and learning-automata-based optimization for stock market forecasting. To do so, a two-phase approach is introduced. In the first phase, the optimal interval lengths are obtained by applying a conventional fuzzy time series together with a learning automata swarm intelligence algorithm to tune the interval lengths properly. Subsequently, the obtained optimal lengths are applied to generate a new type of fuzzy time series, proposed in this study and named exponential fuzzy time series. In this second phase, due to the nature of exponential fuzzy time series, another round of optimization is required to estimate certain method parameters. Finally, the model is used to produce forecasts. In order to validate the proposed hybrid method, forty-six case studies from five stock index databases are employed and the findings are compared with well-known fuzzy time series models and classic time series methods. The proposed model outperforms its counterparts in terms of accuracy.
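A bare-bones sketch of the kind of conventional fuzzy time series forecast used in the first phase (a Chen-style first-order model with equal-length intervals); the learning-automata tuning of interval lengths and the exponential variant proposed in the paper are not reproduced, and the price series and number of intervals are assumptions.

```python
import numpy as np

prices = np.array([13., 14., 16., 15., 17., 18., 17., 19., 20., 19.])
n_intervals = 4
edges = np.linspace(prices.min() - 1, prices.max() + 1, n_intervals + 1)
mids = (edges[:-1] + edges[1:]) / 2

states = np.digitize(prices, edges) - 1              # fuzzify: interval index per value
states = np.clip(states, 0, n_intervals - 1)

# fuzzy logical relationship groups: state -> set of successor states seen in history
flrg = {}
for a, b in zip(states[:-1], states[1:]):
    flrg.setdefault(a, set()).add(b)

last_state = states[-1]
next_states = flrg.get(last_state, {last_state})
forecast = np.mean([mids[s] for s in next_states])   # defuzzify via interval midpoints
print(round(forecast, 2))
```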

20.
Accurate clustering of time series is a challenging problem for data arising from areas such as financial markets, biomedical studies, and environmental sciences, especially when some, or all, of the series exhibit nonlinearity and nonstationarity. When a subset of the series exhibits nonlinear characteristics, frequency domain clustering methods based on higher-order spectral properties, such as the bispectra or trispectra, are useful. While these methods address nonlinearity, they rely on the assumption of series stationarity. We propose the Bispectral Smooth Localized Complex EXponential (BSLEX) approach for clustering nonlinear and nonstationary time series. BSLEX is an extension of the SLEX approach for linear, nonstationary series, and overcomes the challenges of both nonlinearity and nonstationarity through smooth partitions of the nonstationary time series into stationary subsets in a dyadic fashion. The performance of the BSLEX approach is illustrated via simulations in which several nonstationary or nonlinear time series are clustered, as well as via accurate clustering of the records of 16 seismic events, eight of which are earthquakes and eight explosions. We further illustrate the utility of the approach by clustering S&P 100 financial returns.
