Similar Literature (20 results)
1.
By comparing the class ratio deviation and restoration error of first-order accumulation with those of fractional-order accumulation, a gray model for monotonically increasing sequences can attain optimal simulation accuracy by selecting a proper cumulative order. In this study, a gray model for increasing sequences with nonhomogeneous exponential trends, based on fractional-order accumulation, is proposed. To reduce the modeling error caused by the background value and to improve the prediction accuracy of the model, an optimized model using the 3/8 Simpson formula is constructed. Finally, the two proposed models are used to predict the total energy consumption in China and the monthly sales of new products in an enterprise. Compared with the GM(1,1) model based on fractional-order accumulation, the proposed models exhibit better simulation and prediction accuracy.
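A minimal sketch of the fractional-order accumulated generating operation (r-AGO) that this family of gray models builds on. The papers' exact model equations are not reproduced; the coefficients below follow the standard gamma-function form, with r = 1 reducing to the ordinary cumulative sum used by GM(1,1).

```python
from math import gamma

def frac_accumulate(x, r):
    """r-order accumulation: the coefficient of x[i] in the k-th
    accumulated value is Gamma(r + k - i) / (Gamma(k - i + 1) * Gamma(r))."""
    n = len(x)
    out = []
    for k in range(1, n + 1):
        s = sum(gamma(r + k - i) / (gamma(k - i + 1) * gamma(r)) * x[i - 1]
                for i in range(1, k + 1))
        out.append(s)
    return out

# r = 1 recovers first-order accumulation (the cumulative sum)
print(frac_accumulate([1.0, 2.0, 3.0], 1.0))  # [1.0, 3.0, 6.0]
```

Choosing a non-integer r between 0 and 1 gives the model the extra degree of freedom the abstract refers to when selecting the cumulative order.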

2.
Based on the weekly closing price of the Shenzhen Integrated Index, this article studies the volatility of the Shenzhen stock market using three different models: Logistic, AR(1), and AR(2). The time-varying parameters of the Logistic regression model are estimated using both the index smoothing method and a time-varying parameter estimation method. The AR(1) and AR(2) models of the zero-mean series of the weekly closing price, and of the zero-mean series of its volatility rate, are established based on the analysis of the zero-mean series of the weekly closing price. Six common statistical measures of prediction error are used to test the forecasting results: mean error (ME), mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), Akaike's information criterion (AIC), and the Bayesian information criterion (BIC). The investigation shows that the AR(1) model yields the best predictions, whereas the AR(2) model's predictions are intermediate between those of the AR(1) model and the Logistic regression model.
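A sketch of four of the six error statistics named above, computed from paired actual and predicted values. AIC and BIC are omitted because they additionally require the model's likelihood and parameter count, which an error series alone does not provide.

```python
import math

def forecast_errors(actual, pred):
    """ME, MAE, RMSE and MAPE for a forecast series."""
    e = [a - p for a, p in zip(actual, pred)]
    n = len(e)
    return {
        "ME":   sum(e) / n,                                   # signed bias
        "MAE":  sum(abs(v) for v in e) / n,                   # average magnitude
        "RMSE": math.sqrt(sum(v * v for v in e) / n),         # penalises large errors
        "MAPE": 100.0 * sum(abs(v / a) for v, a in zip(e, actual)) / n,
    }

m = forecast_errors([100.0, 200.0], [90.0, 210.0])
print(m["ME"], m["MAE"], m["RMSE"])  # 0.0 10.0 10.0
```

Note that ME can be zero while MAE is large, which is why several measures are reported together.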

3.
With the ability to deal with high non-linearity, artificial neural networks (ANNs) and support vector machines (SVMs) have been widely studied and successfully applied to time series prediction. However, a good fit of an ANN or SVM to a nonlinear model does not guarantee equally good prediction performance. One main reason is that the dynamics and properties of the system change with time; another key problem is the inherent noise of the fitting data. Nonlinear filtering methods have advantages here: they handle additive noise and can follow the movement of a system whose underlying model evolves through time. The present paper investigates time series prediction algorithms that combine nonlinear filtering approaches with a feedforward neural network (FNN). The nonlinear filtering model is established by using the FNN's weights as the state equation and the FNN's output as the observation equation, with the input vector to the FNN composed of the predicted signal over a given length; the extended Kalman filter (EKF) and unscented Kalman filter (UKF) are then used to train the FNN online. Time series predictions are given by the predicted observation value of the nonlinear filter. To evaluate the proposed methods, the developed techniques are applied to the prediction of a simulated Mackey-Glass chaotic time series and a real monthly mean water level time series. Generally, the prediction accuracy of the UKF-based FNN is better than that of the EKF-based FNN when the model is highly nonlinear. However, comparing prediction accuracy and computational effort for the prediction model proposed in our study, we conclude that the EKF-based FNN is superior to the UKF-based FNN for both the theoretical Mackey-Glass time series and the real monthly mean water level series.
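To make the "weights as state, network output as observation" idea concrete, here is a scalar sketch of EKF-based parameter training for a one-weight model y = tanh(w*x). All names and the noise variances q, r are illustrative assumptions, not values from the paper; a real FNN would use a weight vector and matrix-valued covariances.

```python
import math

def ekf_train_weight(data, w0=0.0, P0=1.0, q=1e-4, r=0.01):
    """EKF as an online trainer: the weight w is the state (random-walk
    state equation) and tanh(w*x) is the observation equation."""
    w, P = w0, P0
    for x, y in data:
        P = P + q                       # time update (random-walk state)
        h = math.tanh(w * x)            # predicted observation
        H = x * (1.0 - h * h)           # Jacobian dh/dw (linearisation)
        K = P * H / (H * P * H + r)     # Kalman gain
        w = w + K * (y - h)             # measurement update of the weight
        P = (1.0 - K * H) * P
    return w

# noiseless observations from y = tanh(0.5 * x): the filter recovers w near 0.5
data = [(0.1 * i, math.tanh(0.05 * i)) for i in range(1, 51)]
print(abs(ekf_train_weight(data) - 0.5) < 0.05)  # True
```

The UKF variant replaces the Jacobian linearisation with sigma-point propagation, which is why it tends to cope better with strong nonlinearity at extra computational cost.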

4.
Electricity price forecasting is an interesting problem for all the agents involved in electricity market operation. For instance, every profit maximisation strategy is based on the computation of accurate one-day-ahead forecasts, which is why electricity price forecasting has been a growing field of research in recent years. In addition, the increasing concern about environmental issues has led to a high penetration of renewable energies, particularly wind. In some European countries such as Spain, Germany and Denmark, renewable energy is having a deep impact on the local power markets. In this paper, we propose an optimal model from the perspective of forecasting accuracy, consisting of a combination of several univariate and multivariate time series methods that account for the amount of energy produced with clean energies, particularly wind and hydro, which are the most relevant renewable energy sources in the Iberian Market. This market is used to illustrate the proposed methodology, as it is one of the markets in which wind power production is most relevant as a percentage of total demand, but the method can of course be applied to any other liberalised power market. As far as our contribution is concerned, first, the methodology proposed by García-Martos et al. (2007, 2012) is generalised in two ways: we allow the incorporation of wind power production and hydro reservoirs, and we do not impose the restriction of using the same model for all 24 h. A computational experiment and a Design of Experiments (DOE) are performed for this purpose. Then, for those hours in which there are two or more models without statistically significant differences in forecasting accuracy, a combination of forecasts is proposed by weighting the best models (according to the DOE) and minimising the Mean Absolute Percentage Error (MAPE). The MAPE is the most popular accuracy metric for comparing electricity price forecasting models.
We construct the combination of forecasts by solving several nonlinear optimisation problems that yield the optimal weights for the combination. The results are obtained from a large computational experiment that entails calculating out-of-sample forecasts for every hour of every day in the period from January 2007 to December 2009. In addition, to reinforce the value of our methodology, we compare our results with those in recently published works in the field. This comparison shows the superiority of our methodology in terms of forecasting accuracy.
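A simplified two-model stand-in for the weight optimisation described above: a grid search over the convex weight w in [0, 1] that minimises the MAPE of the combined forecast w*f1 + (1-w)*f2. The paper solves general nonlinear optimisation problems over several models; this sketch only illustrates the objective.

```python
def mape(actual, pred):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def best_convex_weight(actual, f1, f2, steps=1000):
    """Grid-search the convex combination weight minimising MAPE."""
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        comb = [w * a + (1 - w) * b for a, b in zip(f1, f2)]
        err = mape(actual, comb)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

# one model biased +10%, the other -10%: the optimum mixes them evenly
actual = [100.0, 110.0, 120.0]
f_hi = [110.0, 121.0, 132.0]
f_lo = [90.0, 99.0, 108.0]
w, err = best_convex_weight(actual, f_hi, f_lo)
print(w, err)  # 0.5 0.0
```

With more than two models the same objective is minimised over a weight simplex, which is where a nonlinear solver replaces the grid.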

5.
Developing models to predict tree mortality from long-term repeated-measurement data sets can be difficult and challenging because of the nature of mortality and the dependence among observations. Marginal (population-averaged) models based on generalized estimating equations (GEE) and random-effects (subject-specific) models offer two possible ways to overcome these effects. For this study, standard logistic, marginal logistic (GEE-based), and random-effects logistic regression models were fitted and compared. In addition, four model evaluation statistics were calculated by means of K-fold cross-validation: the mean prediction error, the mean absolute prediction error, the variance of prediction error, and the mean square error. Results from this study suggest that the random-effects model produced the smallest evaluation statistics among the three models. Although marginal logistic regression accounted for correlations between observations, it did not noticeably improve model performance compared with the standard logistic regression model, which assumes independence. This study indicates that the random-effects model was able to increase the overall accuracy of mortality modeling and to capture both the correlation arising from the hierarchical data structure and the serial correlation generated by repeated measurements.
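The four cross-validation statistics named in the abstract can be computed directly from a list of prediction errors (observed minus predicted), as in this sketch; the sample variance uses the n-1 divisor, an assumption since the abstract does not specify it.

```python
def prediction_stats(errors):
    """Mean prediction error, mean absolute prediction error,
    variance of prediction error, and mean square error."""
    n = len(errors)
    me = sum(errors) / n                                # mean prediction error
    mae = sum(abs(e) for e in errors) / n               # mean absolute prediction error
    var = sum((e - me) ** 2 for e in errors) / (n - 1)  # variance of prediction error
    mse = sum(e * e for e in errors) / n                # mean square error
    return me, mae, var, mse

print(prediction_stats([1.0, -1.0, 2.0, -2.0]))  # me 0.0, mae 1.5, mse 2.5
```

In K-fold cross-validation these are accumulated over the held-out fold of each of the K fits before averaging.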

6.
The multinomial logit model is the most widely used model for unordered multi-category responses. However, applications are typically restricted to few predictors, because in the high-dimensional case maximum likelihood estimates frequently do not exist. In this paper we develop a boosting technique called multinomBoost that performs variable selection and fits the multinomial logit model even when predictors are high-dimensional. Since in multi-category models the effect of one predictor variable is represented by several parameters, one has to distinguish between variable selection and parameter selection. A special feature of the approach is that, in contrast to existing approaches, it selects variables rather than parameters. The method can also distinguish between mandatory and optional predictors, and it adapts to metric, binary, nominal, and ordinal predictors. Regularization within the algorithm allows the inclusion of nominal and ordinal variables with many categories; in the case of ordinal predictors, the order information is used. The performance of the boosting technique with respect to mean squared error, prediction error, and the identification of relevant variables is investigated in a simulation study. The method is applied to the national Indonesia contraceptive prevalence survey and to the identification of glass. Results are also compared with the Lasso approach, which selects parameters.

7.
In statistics and machine learning, cross-validation is widely used to assess model quality. However, its results are generally unstable, so in practice cross-validation is repeated several times and the results averaged to improve the stability of the estimate. This article proposes an improved k-fold cross-validation method based on a space-filling criterion, the idea being that the training set and test set of every split should each have good uniformity. Simulation results show that, for five classification models (k-nearest neighbors, decision tree, random forest, support vector machine, and AdaBoost), the proposed method estimates prediction accuracy more highly than ordinary k-fold cross-validation. The method is applied to a real osteoporosis data set, where the model selected on the basis of the estimated prediction accuracy is used to classify osteoporosis patients.
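For reference, this is the ordinary (random) k-fold partition that the article improves on; the space-filling refinement would instead choose splits whose training and test sets are maximally uniform, which is not reproduced here.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Plain k-fold partition: shuffle the n indices,
    then deal them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(10, 3)
print([len(f) for f in folds])  # [4, 3, 3]
print(sorted(sum(folds, [])))   # every index appears exactly once
```

Each fold serves once as the test set while the remaining folds form the training set; the instability the article targets comes from the randomness of this shuffle.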

8.
This paper builds a hybrid decomposition-ensemble model named VMD-ARIMA-HGWO-SVR to improve the stability and accuracy of container throughput prediction. The variational mode decomposition (VMD) algorithm is employed to decompose the original series into several modes (components). ARIMA models are then built to forecast the low-frequency components, while the high-frequency components are predicted by SVR models optimized with a recently proposed swarm intelligence algorithm, hybridizing grey wolf optimization (HGWO). Finally, the predictions of all modes are ensembled into the final forecast. Error analysis and model comparison show that VMD is more effective than other decomposition methods such as CEEMD and WD, and that adopting ARIMA models for the low-frequency components yields better results than predicting all components with SVR models. The empirical results show that the proposed model performs well on container throughput data and can serve as a practical reference for port operation and management, improving overall efficiency and reducing operating costs.

9.
In this article, a new multivariate radial basis function neural network model is proposed to predict complex chaotic time series. To reconstruct the phase space, we apply the mutual information method and the false nearest-neighbor method to obtain the crucial parameters, the time delay and the embedding dimension, respectively, and then extend them to the multivariate situation. We also propose two objective evaluation measures, the mean absolute error and the prediction mean square error, to evaluate prediction accuracy. To illustrate the prediction model, we use two coupled Rossler systems as examples for simultaneous single-step and multistep prediction, and find that the evaluation performance and prediction accuracy reach an excellent level. © 2013 Wiley Periodicals, Inc. Complexity, 2013.
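A minimal sketch of the phase-space reconstruction step: delay embedding turns a scalar series into vectors of lagged values. In the article the delay tau comes from mutual information and the dimension from the false nearest-neighbor test; here both are simply given.

```python
def delay_embed(x, dim, tau):
    """Each reconstructed point is (x[t], x[t-tau], ..., x[t-(dim-1)*tau])."""
    start = (dim - 1) * tau
    return [[x[t - j * tau] for j in range(dim)] for t in range(start, len(x))]

pts = delay_embed([0, 1, 2, 3, 4, 5], dim=3, tau=2)
print(pts)  # [[4, 2, 0], [5, 3, 1]]
```

The RBF network is then trained to map each reconstructed point to the next value of the series.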

10.
Despite several years of research, the type reduction (TR) operation in interval type-2 fuzzy logic systems (IT2FLS) cannot perform as fast as a type-1 defuzzifier. In particular, the widely used Karnik–Mendel (KM) TR algorithm is computationally much more demanding than alternative TR approaches. In this work, a data-driven framework is proposed to quickly, yet accurately, estimate the output of the KM TR algorithm using simple regression models. Comprehensive simulations performed in this study show that the centroid end-points of the KM algorithm can be approximated with a mean absolute percentage error as low as 0.4%, and that switch-point prediction accuracy can be as high as 100%. In conjunction with the fact that the simple regression model can be trained on data generated by an exhaustive defuzzification method, this work shows the potential of the proposed method to provide a highly accurate, yet extremely fast, TR approximation. The speed of the proposed method should theoretically exceed that of all available TR methods while keeping the uncertainty information intact.

11.
Regression models with interaction effects have been widely used in multivariate analysis to improve model flexibility and prediction accuracy. In functional data analysis, however, due to the challenges of estimating three-dimensional coefficient functions, interaction effects have not been considered for function-on-function linear regression. In this article, we propose function-on-function regression models with interaction and quadratic effects. For a model with specified main and interaction effects, we propose an efficient estimation method that enjoys a minimum prediction error property and has good predictive performance in practice. Moreover, by converting the estimation of the three-dimensional coefficient functions of the interaction effects into the estimation of two- and one-dimensional functions separately, our method is computationally efficient. We also propose adaptive penalties to account for varying magnitudes and roughness levels of coefficient functions. In practice, the forms of the models are usually unspecified, so we propose a stepwise procedure for model selection based on a predictive criterion. This method is implemented in our R package FRegSigComp. Supplemental materials are available online.

12.
The popularity of downside risk among investors is growing, and mean return–downside risk portfolio selection models seem to be displacing the familiar mean–variance approach. The reason for the success of the former models is that they separate return fluctuations into downside risk and upside potential. This is especially relevant for asymmetrical return distributions, for which mean–variance models penalize upside potential in the same fashion as downside risk. The paper focuses on the differences and similarities between using variance or a downside risk measure, from both a theoretical and an empirical point of view. We first discuss the theoretical properties of different downside risk measures and the corresponding mean–downside risk models. Contrary to common beliefs, we show that of the large family of downside risk measures, only a few possess better theoretical properties within a return–risk framework than the variance. On the empirical side, we analyze the differences between several US asset allocation portfolios based on variances and downside risk measures. Among other things, we find that the downside risk approach tends to produce, on average, slightly higher bond allocations than the mean–variance approach. Furthermore, we take a closer look at estimation risk, namely the effect of sampling error in expected returns and risk measures on portfolio composition. On the basis of simulation analyses, we find that there are marked differences in the degree of estimation accuracy, which calls for further research.
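The contrast between the two risk notions can be shown in a few lines: variance counts deviations on both sides of the mean, while the downside semivariance (a second-order lower partial moment, one common downside risk measure; the paper surveys a whole family) counts only returns below a target.

```python
def variance(r):
    m = sum(r) / len(r)
    return sum((x - m) ** 2 for x in r) / len(r)

def downside_semivariance(r, target=0.0):
    """Only returns below the target count as risk,
    so upside potential is not penalised."""
    return sum(min(x - target, 0.0) ** 2 for x in r) / len(r)

# symmetric returns: the downside measure is half the variance about the target
rets = [-0.1, 0.1, -0.1, 0.1]
print(round(variance(rets), 12))              # 0.01
print(round(downside_semivariance(rets), 12)) # 0.005
```

For asymmetric return distributions the two measures diverge, which is the case the abstract highlights.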

13.
In this paper, a multi-layer gated recurrent unit neural network (multi-head GRU) model is proposed to predict confirmed cases of the novel coronavirus epidemic (COVID-19). We extract the time-series relationships in the data and adopt a rolling prediction method, which keeps the model structure simple while achieving higher precision and interpretability. The prediction results of this model are compared with the LSTM model, the Transformer model, and the SIR infectious disease model. The results show that the proposed model has higher prediction accuracy: the mean absolute errors (MAE) of the epidemic predictions for the United States, Brazil, India, the United Kingdom, and Russia are 197.52, 68.02, 200.67, 24.78, and 123.50, respectively, which is much smaller than the prediction errors of the SIR, LSTM, and Transformer models. Traditional infectious disease models and generic machine learning models have not achieved sufficiently accurate predictions of the spread of COVID-19. In this paper, we use a GRU model to predict the real-time spread of COVID-19; it has fewer parameters, which reduces the risk of overfitting and speeds up training, and it compensates for the Transformer model's weakness in capturing local features.
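A scalar sketch of the GRU cell that such a model stacks in layers, using one common gate convention (as in PyTorch: h' = (1 - z) * n + z * h); the weights are illustrative, not trained, and biases are omitted for brevity.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(x, h, p):
    """One step of a scalar GRU cell:
        z = sigma(Wz x + Uz h)      update gate
        r = sigma(Wr x + Ur h)      reset gate
        n = tanh(Wn x + Un (r*h))   candidate state
        h' = (1 - z) * n + z * h
    """
    z = sigmoid(p["Wz"] * x + p["Uz"] * h)
    r = sigmoid(p["Wr"] * x + p["Ur"] * h)
    n = math.tanh(p["Wn"] * x + p["Un"] * (r * h))
    return (1.0 - z) * n + z * h

p = {"Wz": 0.5, "Uz": 0.5, "Wr": 0.5, "Ur": 0.5, "Wn": 1.0, "Un": 1.0}
h = 0.0
for x in [1.0, 0.5, 0.25]:   # rolling prediction feeds one step at a time
    h = gru_step(x, h, p)
print(-1.0 < h < 1.0)        # hidden state stays bounded — True
```

In rolling prediction, each forecast is appended to the input window before the next step, which is what keeps the model structure simple.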

14.
Deterministic models of technical efficiency assume that all deviations from the production frontier are due to inefficiency. Critics argue that no allowance is made for measurement error and other statistical noise so that the resulting efficiency measure will be contaminated. The stochastic frontier model is an alternative that allows both inefficiency and measurement error. Advocates argue that the stochastic frontier models should be used despite other potential limitations because of the superior conceptual treatment of noise. As will be demonstrated in this paper, however, the assumed shape of the error distributions is used to identify a key production function parameter. Therefore, the stochastic frontier models, like the deterministic models, cannot produce absolute measures of efficiency. Moreover, we show that rankings for firm-specific inefficiency estimates produced by traditional stochastic frontier models do not change from the rankings of the composed errors. As a result, the performance of the deterministic models is qualitatively similar to that of the stochastic frontier models.

15.
Although the classic exponential-smoothing models and grey prediction models have been widely used in time series forecasting, this paper shows that they are susceptible to fluctuations in samples. A new fractional bidirectional weakening buffer operator for time series prediction is proposed. This new operator can effectively reduce the negative impact of unavoidable sample fluctuations. It overcomes limitations of existing weakening buffer operators and permits better control of fluctuations over the entire sample period. Because it improves the smoothness and stability of the series, the new operator can better capture the real developing trend in raw data and improve forecast accuracy. The paper then proposes a novel methodology that combines the new bidirectional weakening buffer operator with the classic grey prediction model. Through a number of case studies, this method is compared with several classic models, such as the exponential smoothing model and the autoregressive integrated moving average model. Values of three error measures show that the new method outperforms the others, especially when there are data fluctuations near the forecasting horizon. The relative advantages of the new method on small-sample predictions are further investigated. Results demonstrate that the model based on the proposed fractional bidirectional weakening buffer operator has higher forecasting accuracy.
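For context, here is the textbook one-sided average weakening buffer operator that operators like the one above generalise; the paper's fractional *bidirectional* operator also uses information before each point and a fractional order, neither of which is reproduced in this sketch.

```python
def awbo(x):
    """Classical average weakening buffer operator: each point is replaced
    by the mean of itself and all later points, damping early shocks."""
    n = len(x)
    return [sum(x[k:]) / (n - k) for k in range(n)]

print(awbo([1.0, 2.0, 3.0, 4.0]))  # [2.5, 3.0, 3.5, 4.0]
```

The buffered series is then fed to the grey prediction model in place of the raw data.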

16.
This study attempts to show how a Kohonen map can be used to improve the temporal stability of the accuracy of a financial failure model. Most models lose a significant part of their ability to generalize when data used for estimation and prediction purposes are collected over different time periods. As their lifespan is fairly short, it becomes a real problem if a model is still in use when re-estimation appears to be necessary. To overcome this drawback, we introduce a new way of using a Kohonen map as a prediction model. The results of our experiments show that the generalization error achieved with a map remains more stable over time than that achieved with conventional methods used to design failure models (discriminant analysis, logistic regression, Cox's method, and neural networks). They also show that type-I error, the economically costliest error, is the greatest beneficiary of this gain in stability.
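A minimal sketch of one Kohonen-map (self-organizing map) update step: find the best-matching unit and move it toward the input. Real SOM training also updates the BMU's neighbours with a decaying radius and learning rate, which is omitted here; how the paper turns the trained map into a failure predictor is not reproduced.

```python
def som_step(weights, x, lr):
    """Find the best-matching unit (smallest squared Euclidean distance
    to x) and move its weight vector a fraction lr toward x."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    weights[bmu] = [wi + lr * (xi - wi) for wi, xi in zip(weights[bmu], x)]
    return bmu

w = [[0.0, 0.0], [1.0, 1.0]]
i = som_step(w, [0.9, 0.9], lr=0.5)
print(i)  # 1 — the second unit wins and moves halfway toward x
```

After training, each unit summarises a region of firms, and the map's cluster structure is what the study exploits for prediction.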

17.
Proper orthogonal decomposition (POD) combined with Galerkin projection is a common approach to model reduction of complex nonlinear systems. However, because only part of the basis modes are retained when constructing the reduced-order system, the reduced system is often inaccurate. To address this problem, a fast error-correction method for the reduced-order system is proposed. First, the Mori-Zwanzig formalism is applied to analyze the error of the reduced system, yielding the theoretical form of the error model and its effective predictor variables. A multivariate regression model relating the predictors to the system error is then constructed via partial least squares, giving an error prediction model. Embedding this error model directly into the original reduced-order system produces a new reduced system that is formally equivalent to applying a Petrov-Galerkin projection to the right-hand side of the original model. Finally, an error estimate for the new reduced-order system is given. Numerical results further demonstrate that the proposed method effectively improves the stability and accuracy of the reduced-order system while retaining high computational efficiency.

18.
Every economic model should include an estimate of its stability and predictability. A new measure, the first passage time (FPT), defined as the time at which the model error first exceeds a pre-determined criterion (i.e., the tolerance level), is proposed here to estimate model predictability. A theoretical framework is developed to determine the mean and variance of the FPT. The classical Kaldor model is taken as an example to show the robustness of using the FPT as a quantitative measure of model stability.
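Given a realised error trajectory and a tolerance level, the first passage time of that trajectory is just the first time step at which the error magnitude exceeds the tolerance (the paper's contribution is the theory for its mean and variance; this only shows the definition):

```python
def first_passage_time(errors, tol):
    """1-based index of the first step where |error| exceeds the tolerance
    level; None if the error stays within tolerance over the horizon."""
    for t, e in enumerate(errors, start=1):
        if abs(e) > tol:
            return t
    return None

print(first_passage_time([0.1, 0.3, 0.9, 0.2], tol=0.5))  # 3
print(first_passage_time([0.1, 0.2], tol=0.5))            # None
```

Averaging this quantity over many error realisations estimates the mean FPT that the theoretical framework characterises.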

19.
Predictions made by imprecise-probability models are often indeterminate (that is, set-valued). Measuring the quality of an indeterminate prediction by a single number is important to fairly compare different models, but a principled approach to this problem is currently missing. In this paper we derive, from a set of assumptions, a metric to evaluate the predictions of credal classifiers. These are supervised learning models that issue set-valued predictions. The metric turns out to be made of an objective component, and another that is related to the decision-maker's degree of risk aversion to the variability of predictions. We discuss when the measure can be rendered independent of such a degree, and provide insights as to how the comparison of classifiers based on the new measure changes with the number of predictions to be made. Finally, we make extensive empirical tests of credal, as well as precise, classifiers by using the new metric. This shows the practical usefulness of the metric, while yielding a first insightful and extensive comparison of credal classifiers.

20.
An improved time series prediction method is proposed, based on the exponential smoothing model and the back-propagation neural network. The neural network model is embedded into the exponentially weighted moving average model, so that both the partly linear and the nonlinear behavior of the time series are taken into account in the prediction; this is a more reasonable refinement of the traditional hybrid model. Finally, in an empirical analysis of the Shanghai Composite Index series, the prediction accuracy of five commonly used time series prediction models is compared, with the mean square prediction error as the criterion, and the proposed improved model is verified to have a smaller mean square prediction error.
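The linear half of such a hybrid is simple exponential smoothing, s[t] = alpha*x[t] + (1-alpha)*s[t-1]; the abstract's neural network is then trained on what the smoother cannot explain. Only the smoothing stage is sketched here, with an illustrative alpha.

```python
def exp_smooth(x, alpha):
    """Simple exponential smoothing: an exponentially weighted
    moving average of the series."""
    s = [x[0]]
    for v in x[1:]:
        s.append(alpha * v + (1 - alpha) * s[-1])
    return s

print(exp_smooth([10.0, 20.0, 30.0], alpha=0.5))  # [10.0, 15.0, 22.5]
```

A hybrid of this kind would feed the residuals x[t] - s[t] to the neural network and add the two component forecasts back together.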
