Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
This paper describes the relationship between support vector regression (SVR) and rough (or interval) patterns. SVR is the prediction component of the support vector techniques. Rough patterns are based on the notion of rough values, which consist of upper and lower bounds and are used to represent a range of variable values effectively. Predicting rough values in a variety of forms within the context of interval algebra and fuzzy theory is attracting research interest. An extension of SVR, called rough support vector regression (RSVR), is proposed to improve the modeling of rough patterns. In particular, it is argued that the upper and lower bounds should be modeled separately. The proposal is shown to be a more flexible version of the lower possibilistic regression model using ε-insensitivity. Experimental results on the Dow Jones Industrial Average demonstrate the suggested RSVR modeling technique.
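The abstract's core idea of modeling the two bounds of a rough value separately can be approximated by fitting two independent ε-SVRs, one on the lower bounds and one on the upper bounds. This is only a minimal sketch on synthetic interval data; the RSVR formulation in the paper couples the bounds in a single optimization, which this illustration does not reproduce.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic rough pattern: each input maps to an interval [y_lower, y_upper].
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, 80)).reshape(-1, 1)
center = np.sin(X).ravel()
y_lower = center - 0.3 + 0.05 * rng.standard_normal(80)
y_upper = center + 0.3 + 0.05 * rng.standard_normal(80)

# One epsilon-SVR per bound, as a rough stand-in for RSVR's separate modeling.
svr_lower = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y_lower)
svr_upper = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y_upper)

pred_lower = svr_lower.predict(X)
pred_upper = svr_upper.predict(X)
# The fitted upper bound should sit above the fitted lower bound nearly everywhere.
print(float(np.mean(pred_upper > pred_lower)))
```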

3.
Annals of Operations Research - The existence of contaminants in metal alloy products is the main problem affecting product quality, which is an important requirement for competitiveness in...

4.
In this paper we propose a new nonparametric regression method called composite support vector quantile regression (CSVQR), which combines the formulations of support vector regression and composite quantile regression. First, the CSVQR based on quadratic programming (QP) is proposed; then a CSVQR using the iteratively reweighted least squares (IRWLS) procedure is proposed to overcome the weakness of the QP-based method in terms of computation time. The IRWLS-based method enables us to derive a generalized cross validation (GCV) function that is easier and faster to evaluate than the conventional cross validation function. The GCV function facilitates choosing the hyperparameters that affect the performance of the CSVQR and saves computation time. Numerical experiment results are presented to illustrate the performance of the proposed method.
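The composite quantile idea described above averages pinball (check) losses over several quantile levels while sharing one slope across them. The sketch below shows that objective on a toy linear model, solved by plain subgradient descent; the solver, step size, and variable names are illustrative assumptions, not the paper's QP or IRWLS procedures.

```python
import numpy as np

def pinball(residual, tau):
    # Check (pinball) loss for quantile level tau.
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 200)
y = 2.0 * X + 0.1 * rng.standard_normal(200)

taus = [0.25, 0.5, 0.75]
w, bs = 0.0, np.zeros(len(taus))   # shared slope, one intercept per quantile level
lr = 0.05
for _ in range(2000):
    grad_w = 0.0
    for k, tau in enumerate(taus):
        r = y - (w * X + bs[k])
        g = np.where(r >= 0, -tau, 1.0 - tau)   # subgradient of pinball w.r.t. prediction
        grad_w += np.mean(g * X)
        bs[k] -= lr * np.mean(g)
    w -= lr * grad_w / len(taus)

print(w)  # shared slope, which should approach the true value 2.0
```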

5.
We propose variants of two SVM regression algorithms expressly tailored to exploit additional information summarizing the relevance of each data item, as a measure of its relative importance with respect to the remaining examples. These variants, which reduce to the original formulations when all data items have the same relevance, are preliminarily tested on synthetic and real-world data sets. The obtained results outperform standard SVM approaches to regression when evaluated in light of the above-mentioned additional information about data quality.
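A rough analogue of relevance-aware regression can be had with per-example sample weights in a standard ε-SVR, which scale each example's penalty. This is a minimal sketch only: the paper's variants modify the formulation itself, not merely the weights, and the relevance values here are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = np.linspace(0, 1, 60).reshape(-1, 1)
y = X.ravel() ** 2 + 0.02 * rng.standard_normal(60)
y[-1] += 1.0                       # one corrupted, low-relevance example

relevance = np.ones(60)
relevance[-1] = 0.01               # downweight the unreliable point

plain = SVR(kernel="rbf", C=100.0).fit(X, y)
weighted = SVR(kernel="rbf", C=100.0).fit(X, y, sample_weight=relevance)

# Error at the corrupted location: the weighted model should be less distorted.
err_plain = abs(float(plain.predict([[1.0]])[0]) - 1.0)
err_weighted = abs(float(weighted.predict([[1.0]])[0]) - 1.0)
print(err_plain, err_weighted)
```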

6.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space. The approximation property and the capacity of the reproducing kernel Hilbert space are measured with the covering number. The obtained result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error. The sample error comprises the errors of learning the regression function and the regularizing function in the reproducing kernel Hilbert space. After estimating the generalization error of learning the regression function (Theorem 2.2), the upper bound (Theorem 2.3) of the regularized learning algorithm associated with linear programming support vector regression is estimated.

7.
In this paper, we propose a two-step kernel learning method based on support vector regression (SVR) for financial time series forecasting. Given a number of candidate kernels, our method learns a sparse linear combination of these kernels so that the resulting kernel can be used to predict well on future data. The L1-norm regularization approach is used to achieve kernel learning. Since the regularization parameter must be carefully selected, to facilitate parameter tuning we develop an efficient solution-path algorithm that solves for the optimal solutions at all possible values of the regularization parameter. Our kernel learning method has been applied to forecasting the S&P 500 and NASDAQ market indices and showed promising results.
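The sparse-combination step can be approximated by fitting one SVR per candidate kernel and then learning nonnegative L1-regularized mixing weights over their predictions with a Lasso. This is a stand-in sketch: the paper learns kernel weights directly and traces the full solution path, neither of which is reproduced here, and the gamma grid is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, 150)).reshape(-1, 1)
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(150)

# Candidate kernels: RBF at three bandwidths (illustrative choices).
candidates = [SVR(kernel="rbf", gamma=g, C=10.0).fit(X, y) for g in (0.1, 1.0, 10.0)]
P = np.column_stack([m.predict(X) for m in candidates])

# L1-regularized, nonnegative combination of the candidate predictors.
combiner = Lasso(alpha=0.001, positive=True).fit(P, y)
weights = combiner.coef_
print(weights)  # sparse nonnegative mixture weights over the candidate kernels
```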

8.
In this paper, we propose a novel multiphase support vector regression (mp-SVR) technique to approximate a true relationship in the case where the effect of input on output changes abruptly at some break-points. A new formulation for mp-SVR is presented to allow such structural changes in the regression function. We then present a new hybrid-encoding scheme in genetic algorithms to select the best combination of kernel functions and to determine both the break-points and the hyperparameters of mp-SVR. The proposed method has a major advantage over conventional ones in that different kernel functions can be adapted to different regions of the data domain. Computational results on two examples, including real-life data, demonstrate its capability to capture the local characteristics of the data more effectively. Consequently, mp-SVR has high potential value in a wide range of function approximation applications.

9.
Value at Risk (VaR) has been used as an important tool to measure market risk under normal market conditions. Usually the VaR of log returns is calculated by assuming a normal distribution. However, log returns are frequently found not to be normally distributed. This paper proposes an approach to estimating VaR using semiparametric support vector quantile regression (SSVQR) models, which are functions of the one-step-ahead volatility forecast and the length of the holding period, and can be used regardless of the distribution. We find that the proposed models perform better overall than the variance-covariance and linear quantile regression approaches for return data on the S&P 500, Nikkei 225, and KOSPI 200 indices.
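The contrast the abstract draws can be made concrete on heavy-tailed toy returns: the variance-covariance (normal) VaR and the distribution-free empirical quantile generally disagree when returns are not normal. This is a minimal sketch with simulated Student-t returns; the paper's SVQR forecast, which replaces both estimates, is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
# Heavy-tailed toy log returns: Student-t with 3 dof, rescaled to ~1% daily volatility.
returns = 0.01 * rng.standard_t(df=3, size=5000) / np.sqrt(3.0)

alpha = 0.05
# Distribution-free estimate: the empirical alpha-quantile of the returns.
var_empirical = -float(np.quantile(returns, alpha))
# Variance-covariance estimate: assumes normality, z_{0.05} ≈ -1.6449.
var_normal = -(float(returns.mean()) + float(returns.std()) * (-1.6449))
print(var_empirical, var_normal)
```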

10.
11.
We propose an efficient global sensitivity analysis method for multivariate outputs that applies polynomial chaos-based surrogate models to vector projection-based sensitivity indices. These projection-based sensitivity indices, which are powerful measures of the comprehensive effects of model inputs on multiple outputs, are conventionally estimated by the Monte Carlo simulations that incur prohibitive computational costs for many practical problems. Here, the projection-based sensitivity indices are efficiently estimated via two polynomial chaos-based surrogates: polynomial chaos expansion and a proper orthogonal decomposition-based polynomial chaos expansion. Several numerical examples with various types of outputs are tested to validate the proposed method; the results demonstrate that the polynomial chaos-based surrogates are more efficient than Monte Carlo simulations at estimating the sensitivity indices, even for models with a large number of outputs. Furthermore, for models with only a few outputs, polynomial chaos expansion alone is preferable, whereas for models with a large number of outputs, implementation with proper orthogonal decomposition is the best approach.

12.
13.
In this paper, we investigate the multiscale support vector regression (SVR) method for approximating functions in Sobolev spaces on bounded domains. The Vapnik ε-insensitive loss function, which has been well developed in learning theory, is introduced to replace the standard l2 loss function in multiscale least squares methods. A convergence analysis is presented to verify the validity of the multiscale SVR method with scaled versions of compactly supported radial basis functions. Error estimates on noisy observation data are also derived to show the robustness of the proposed algorithm. Numerical simulations support the theoretical predictions.

14.
Support vector regression (SVR) is one of the most popular nonlinear regression techniques, aiming to approximate a nonlinear system with good generalization capability. However, SVR has a major drawback in that it is sensitive to the presence of outliers. The ramp loss function for robust SVR has been introduced to resolve this problem, but SVR with the ramp loss function has a non-differentiable and non-convex formulation, which is not easy to solve. Consequently, SVR with the ramp loss function requires smoothing and Concave-Convex Procedure techniques, which transform the non-differentiable and non-convex optimization into a differentiable and convex one. We present a robust SVR with a linear-log concave loss function (RSLL), which does not require the transformation technique, where the linear-log concave loss function has a similar effect to the ramp loss function. The zero-norm approximation and the difference of convex functions problem are employed for solving the optimization problem. The proposed RSLL approach is used to develop a robust and stable virtual metrology (VM) prediction model, which utilizes the status variables of process equipment to predict the wafer-level process quality in semiconductor manufacturing. We also compare the proposed approach to existing SVR-based methods in terms of the root mean squared error of prediction using both synthetic and real data sets. Our experimental results show that the proposed approach performs better than existing SVR-based methods regardless of the data set and type of outliers (i.e., X-space and Y-space outliers), implying that it can be used as a useful alternative when the regression data contain outliers.
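The robustness argument above comes down to loss growth on large residuals: the ε-insensitive loss grows linearly forever, while bounded or concave losses mute outliers. The sketch below compares the ε-insensitive loss, the ramp loss, and a linear-log style loss on a large residual; the exact functional form of the paper's RSLL loss is not given in the abstract, so the log1p-style growth here is an assumption.

```python
import numpy as np

def eps_insensitive(r, eps=0.1):
    # Standard epsilon-insensitive loss: zero inside the tube, linear outside.
    return np.maximum(np.abs(r) - eps, 0.0)

def ramp(r, eps=0.1, s=1.0):
    # Epsilon-insensitive loss capped at level s - eps (bounded, hence robust).
    return np.minimum(eps_insensitive(r, eps), s - eps)

def linear_log(r, eps=0.1):
    # Assumed linear-log shape: linear near zero, logarithmic for large residuals.
    z = eps_insensitive(r, eps)
    return np.where(z <= 1.0, z, 1.0 + np.log(z))

outlier = 50.0
print(eps_insensitive(outlier), ramp(outlier), linear_log(outlier))
```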

15.
In this paper, we propose two new smooth support vector machines for ε-insensitive regression. Based on these two smooth support vector machines, we construct two systems of smooth equations from two novel families of smoothing functions, from which we seek the solution to ε-support vector regression (ε-SVR). More specifically, using the proposed smoothing functions, we employ the smoothing Newton method to solve the systems of smooth equations. The algorithm is shown to be globally and quadratically convergent without any additional conditions. Numerical comparisons among different values of the parameter are also reported.
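To illustrate what a smoothing function does here, the sketch below uses a standard smoothing of the plus function t₊ = max(t, 0), namely p(t, β) = t + (1/β)·log(1 + e^(−βt)), of the kind commonly used to turn the nonsmooth ε-SVR objective into a smooth system amenable to Newton's method. The paper's two novel smoothing families are not reproduced; this is a generic, numerically stable example whose uniform error is log(2)/β.

```python
import numpy as np

def plus_smooth(t, beta=10.0):
    # Smooth approximation of max(t, 0); the |t| form avoids exp overflow and is
    # algebraically identical to t + log(1 + exp(-beta*t))/beta.
    return np.maximum(t, 0.0) + np.log1p(np.exp(-beta * np.abs(t))) / beta

ts = np.linspace(-2, 2, 9)
exact = np.maximum(ts, 0.0)
approx = plus_smooth(ts, beta=50.0)
print(float(np.max(np.abs(approx - exact))))  # worst case is log(2)/beta, at t = 0
```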

16.
The need to minimize the potential impact of air pollutants on humans has made the accurate prediction of air pollutant concentrations a crucial subject in environmental research. Support vector regression (SVR) models have been successfully employed to solve time series problems in many fields, but their use for forecasting air pollutant concentrations has not been widely investigated. Data preprocessing procedures and the parameter selection of SVR models can radically influence forecasting performance. This study proposes a support vector regression model with a logarithm preprocessing procedure and immune algorithms (SVRLIA), which takes advantage of the structural risk minimization of SVR models, the data smoothing of preprocessing procedures, and the optimization of immune algorithms, in order to forecast air pollutant concentrations more accurately. Three pollutants, namely particulate matter (PM10), nitrogen oxides (NOx), and nitrogen dioxide (NO2), are collected and examined to determine the feasibility of the developed SVRLIA model. Experimental results reveal that the SVRLIA model can accurately forecast concentrations of air pollutants.
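The logarithm preprocessing step can be sketched directly: log-transform the positively skewed concentration series, fit an SVR, and invert the transform on the predictions. The immune-algorithm hyperparameter search of SVRLIA is replaced here by fixed illustrative parameters, and the "PM10-like" series is simulated.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
t = np.arange(200, dtype=float).reshape(-1, 1)
# Toy positively skewed pollutant series (lognormal-like around a seasonal cycle).
conc = np.exp(3.0 + 0.5 * np.sin(t.ravel() / 10.0) + 0.1 * rng.standard_normal(200))

# Fit in log space (the data-smoothing preprocessing), then invert the transform.
model = SVR(kernel="rbf", C=10.0, gamma=0.01, epsilon=0.01).fit(t, np.log(conc))
forecast = np.exp(model.predict(t))

print(float(np.corrcoef(forecast, conc)[0, 1]))  # in-sample agreement
```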

17.
This paper concerns the error analysis of multicategory support vector machine (MSVM) classifiers based on reproducing kernel Hilbert spaces. We choose the polynomial kernel as the Mercer kernel and give the error estimate with de la Vallée Poussin means. We also introduce the standard estimation of the sample error and derive the explicit learning rate.

18.
Nonparallel support vector machines (NPSVMs) extend the support vector machine and have attracted wide attention. By allowing nonparallel supporting hyperplanes, NPSVMs can describe differences in the data distributions of different classes and therefore apply to a broader range of problems. However, the relationship between NPSVM models and the SVM model has received little study, and no NPSVM model equivalent to the standard SVM has been available. Starting from the SVM, we construct a new NPSVM model that can degenerate to the standard SVM, preserving the sparsity and kernel extensibility of the SVM. At the same time, it can describe differences in the data distributions of different classes and is applicable to a wider range of nonparallel-structured data. Finally, experiments provide preliminary verification of the effectiveness of the proposed model.

19.
The support vector machine (SVM) is one of the most popular classification methods in the machine learning literature. Binary SVM methods have been extensively studied and have achieved many successes in various disciplines. However, generalization to multicategory SVM (MSVM) methods can be very challenging. Many existing methods estimate k functions for k classes with an explicit sum-to-zero constraint. It was shown recently that such a formulation can be suboptimal. Moreover, many existing MSVMs are not Fisher consistent, or do not take into account the effect of outliers. In this paper, we focus on classification in the angle-based framework, which is free of the explicit sum-to-zero constraint and hence more efficient, and propose two robust MSVM methods using truncated hinge loss functions. We show that our new classifiers can enjoy Fisher consistency and simultaneously alleviate the impact of outliers to achieve more stable classification performance. To implement our proposed classifiers, we employ the difference convex algorithm for efficient computation. Theoretical and numerical results indicate that for problems with potential outliers, our robust angle-based MSVMs can be very competitive among existing methods.
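The truncated hinge loss mentioned above is the ordinary hinge capped at a level 1 − s, and it is naturally written as a difference of two convex functions, which is exactly what makes the difference convex algorithm applicable. A minimal sketch of that decomposition (with an illustrative truncation point s = −1):

```python
import numpy as np

def hinge(u):
    # Ordinary hinge loss on the functional margin u.
    return np.maximum(1.0 - u, 0.0)

def truncated_hinge(u, s=-1.0):
    # Hinge capped at 1 - s, written as a difference of two convex functions
    # (hinge minus a second hinge-like term), enabling the DC algorithm.
    return hinge(u) - np.maximum(s - u, 0.0)

u = np.linspace(-4, 4, 9)
print(truncated_hinge(u))  # constant at 2.0 for u <= -1, then decays to 0 at u >= 1
```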

20.
Although support vector regression models are being used successfully in various applications, the size of business datasets, with millions of observations and thousands of variables, can make training them difficult, if not impossible. This paper introduces the Row and Column Selection Algorithm (ROCSA), which selects a small but informative dataset for training support vector regression models with standard SVM tools. ROCSA uses ε-SVR models with L1-norm regularization of the dual and primal variables for the row and column selection steps, respectively. The first step involves parallel processing of data chunks and selects a fraction of the original observations that are either representative of the pattern identified in the chunk or represent those observations that do not fit the identified pattern. The column selection step dramatically reduces the number of variables and the multicollinearity in the dataset, increasing the interpretability of the resulting models and their ease of maintenance. Evaluated on six retail datasets from two countries and a publicly available research dataset, the reduced ROCSA training data improves predictive accuracy on average by 39% compared with the original dataset when trained with standard SVM tools. Comparison with the ε-SSVR method using the reduced kernel technique shows similar performance improvement. Training a standard SVM tool with the ROCSA-selected observations improves predictive accuracy on average by 21% compared to the practical approach of random sampling.
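The row-selection step can be caricatured with standard tools: split the data into chunks (which could be processed in parallel), fit an ε-SVR on each chunk, and keep only the support vectors, since those are the observations that define each chunk's fit. This is only a sketch of the idea; ROCSA's actual L1-regularized dual selection and its column-selection step are omitted.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (1200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(1200)

selected_X, selected_y = [], []
for chunk in np.array_split(np.arange(1200), 4):      # chunks are parallelizable
    m = SVR(kernel="rbf", C=1.0, epsilon=0.2).fit(X[chunk], y[chunk])
    keep = chunk[m.support_]                          # rows that define this chunk's fit
    selected_X.append(X[keep])
    selected_y.append(y[keep])

X_small = np.vstack(selected_X)
y_small = np.concatenate(selected_y)
final = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_small, y_small)
print(len(y_small), 1200)  # the reduced training set vs. the original size
```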
