Similar Articles
20 similar articles found (search time: 625 ms)
1.
Robust extreme learning machine for regression estimation based on an exponential Laplace loss function  (Total citations: 1; self-citations: 0; other citations: 1)
Real-world data sets are often affected by various kinds of noise, and when the extreme learning machine (ELM) learns from such data it shows low prediction accuracy and large fluctuations in its predictions. To overcome this deficiency, an exponential Laplace loss function that attenuates the influence of noise is adopted. This loss function is built on the Gaussian kernel; it is differentiable, non-convex, and bounded, and it approaches the Laplace function. Introducing it into the ELM yields a robust ELM model for regression estimation (exponential Laplace loss function based robust ELM for regression, ELRELM). An iteratively reweighted algorithm solves the model's optimization problem; in each iteration, noisy sample points receive smaller weights, which effectively improves prediction accuracy. Experiments on real data sets verify that the proposed model achieves better learning performance and robustness than the compared algorithms.
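The iteratively reweighted, noise-downweighting idea in this abstract can be sketched in a few lines. Below, a basic random-hidden-layer ELM is refit with residual-dependent weights; the Gaussian-shaped weight function and the MAD-based scale are stand-ins for the weighting induced by the paper's exponential Laplace loss, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with a few gross outliers.
X = np.linspace(0.0, 1.0, 60).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
y[::15] += 3.0                      # inject outliers at every 15th point

# Random hidden layer of a basic ELM (sigmoid activation).
n_hidden = 20
W = rng.uniform(-8.0, 8.0, size=(1, n_hidden))
b = rng.uniform(-4.0, 4.0, size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Ordinary (non-robust) least-squares output weights, for comparison.
beta_ols = np.linalg.lstsq(H, y, rcond=None)[0]

# Iteratively reweighted least squares: points with large residuals get
# small weights, so outliers barely influence the refit.
beta = beta_ols.copy()
for _ in range(20):
    r = y - H @ beta
    sigma = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
    w = np.exp(-(r / sigma) ** 2)                   # downweight outliers
    Hw = H * w[:, None]                             # H^T diag(w) H below
    beta = np.linalg.solve(Hw.T @ H + 1e-6 * np.eye(n_hidden), Hw.T @ y)

# Error on the clean (non-outlier) points, against the true function.
clean = np.ones(len(y), dtype=bool)
clean[::15] = False
true_clean = np.sin(2.0 * np.pi * X[clean, 0])
ols_err = np.abs(H[clean] @ beta_ols - true_clean).mean()
robust_err = np.abs(H[clean] @ beta - true_clean).mean()
```

On this toy problem the reweighted fit tracks the clean signal while the plain least-squares fit is pulled toward the injected outliers.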

2.
Support vector regression (SVR) has been successfully applied in various domains, including predicting the prices of financial instruments such as stocks, futures, options, and indices. Because of the wide variation in financial time-series data, instead of using only a single standard prediction technique like SVR, we propose a hybrid model called USELM-SVR. It combines unsupervised extreme learning machine (US-ELM)-based clustering with SVR forecasting. We assessed the feasibility and effectiveness of this hybrid model using a case study: predicting the one-, two-, and three-day-ahead closing values of the energy commodity futures index traded on the Multi Commodity Exchange in India. Our experimental results show that USELM-SVR is viable and effective, and produces better forecasts than our benchmark models (standard SVR, a hybrid of SVR with self-organizing map (SOM) clustering, and a hybrid of SVR with k-means clustering). Moreover, the proposed USELM-SVR architecture is useful as an alternative model for prediction tasks when more accurate predictions are required.

3.
The kernel extreme learning machine (KELM) increases the robustness of the extreme learning machine (ELM) by mapping data that are not linearly separable in a low-dimensional space into a space where they are linearly separable. However, the internal weight parameters of ELM are initialized at random, which makes the algorithm unstable. In this paper, we use the active-operator particle swarm optimization algorithm (APSO) to obtain an optimal set of initial parameters for KELM, creating an optimized KELM classifier named APSO-KELM. Experiments on standard genetic datasets show that APSO-KELM has higher classification accuracy than the existing ELM and KELM, as well as algorithms combining PSO/APSO with ELM/KELM, such as PSO-KELM, APSO-ELM, and PSO-ELM. Moreover, APSO-KELM has good stability and convergence, and is shown to be a reliable and effective classification algorithm.
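The KELM classifier being tuned here has a closed-form solution: with a kernel matrix K and one-hot targets T, the output weights solve (K + I/C) α = T. A minimal sketch follows; a plain grid search stands in for the APSO parameter search, and the blob data, parameter grids, and regularization form are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kelm_train(X, T, gamma, C):
    """Kernel ELM output weights: solve (K + I/C) alpha = T."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, X_new, alpha, gamma):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Two well-separated Gaussian blobs with one-hot targets.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(40, 2)),
               rng.normal(1.5, 0.3, size=(40, 2))])
T = np.zeros((80, 2))
T[:40, 0] = 1.0
T[40:, 1] = 1.0

# A plain grid search stands in for the APSO search of the paper.
best_acc, best_params = -1.0, None
for gamma in (0.01, 0.1, 1.0, 10.0):
    for C in (1.0, 100.0):
        alpha = kelm_train(X, T, gamma, C)
        pred = kelm_predict(X, X, alpha, gamma).argmax(axis=1)
        acc = (pred == T.argmax(axis=1)).mean()
        if acc > best_acc:
            best_acc, best_params = acc, (gamma, C)
```

The key point of the paper is that the (gamma, C) search is the unstable, expensive part; the KELM solve itself is a single linear system per candidate.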

4.
In this paper, we propose a kernel-free semi-supervised quadratic surface support vector machine model for binary classification. The model is formulated as a mixed-integer programming problem, which is equivalent to a non-convex optimization problem with absolute-value constraints. Using relaxation techniques, we derive a semi-definite programming problem for semi-supervised learning. By solving this relaxed problem, the proposed model is tested on some artificial and public benchmark data sets. Preliminary computational results indicate that the proposed method outperforms some well-known existing methods for semi-supervised support vector machines with a Gaussian kernel in terms of classification accuracy.

5.
Kernel logistic regression (KLR) is a very powerful algorithm that has been shown to be competitive with many state-of-the-art machine learning algorithms such as support vector machines (SVMs). Unlike SVM, KLR can be easily extended to multi-class problems and produces class posterior probability estimates, making it very useful for many real-world applications. However, training KLR using gradient-based methods or iteratively re-weighted least squares can be unbearably slow for large datasets. Coupled with poor conditioning of the design matrix and the cost of parameter tuning, training KLR can quickly become infeasible for some real datasets. The goal of this paper is to present simple, fast, scalable, and efficient algorithms for learning KLR. First, based on a simple approximation of the logistic function, a least-squares algorithm for KLR is derived that avoids the iterative tuning of gradient-based methods. Second, inspired by extreme learning machine (ELM) theory, an explicit feature space is constructed through a generalized single-hidden-layer feedforward network and used for training iteratively re-weighted least squares KLR (IRLS-KLR) and the newly proposed least-squares KLR (LS-KLR). Finally, for large-scale and/or poorly conditioned problems, a robust and efficient preconditioned learning technique is proposed for the algorithms presented in the paper. Numerical results on a series of artificial and 12 real benchmark datasets show, first, that LS-KLR compares favorably with SVM and traditional IRLS-KLR in terms of accuracy and learning speed. Second, the extension of ELM to KLR results in simple, scalable, and very fast algorithms with generalization performance comparable to their original versions. Finally, the introduced preconditioned learning method can significantly increase the learning speed of IRLS-KLR.

6.
In this paper, a new method for nonlinear system identification via an extreme learning machine neural network based Hammerstein model (ELM-Hammerstein) is proposed. The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The identification of a nonlinear system is achieved by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the structure of the ELM-Hammerstein model from input–output data. A generalized ELM algorithm is proposed to estimate the parameters of the ELM-Hammerstein model, where the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.

7.
In recent years, a great deal of research has focused on the sparse representation of signals. In particular, the dictionary learning algorithm K-SVD was introduced to efficiently learn a redundant dictionary from a set of training signals, and much progress has been made in different aspects. In addition, there is an interesting technique named the extreme learning machine (ELM), a single-layer feed-forward neural network (SLFN) with fast learning speed, good generalization, and universal classification capability. In this paper, we propose an optimization of K-SVD: a denoising deep extreme learning machine based on an autoencoder (DDELM-AE) for sparse representation. In other words, we obtain a new learned representation through the DDELM-AE, and used as the new "input", it makes the conventional K-SVD algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets. The performance of deep models (i.e., stacked autoencoders) is comparable. The experimental results indicate that our proposed method is very efficient in terms of speed and accuracy.

8.
Semi-supervised learning is an emerging computational paradigm for machine learning that aims to make better use of large amounts of inexpensive unlabeled data to improve learning performance. While various methods have been proposed based on different intuitions, the crucial issue of generalization performance is still poorly understood. In this paper, we investigate the convergence property of Laplacian regularized least squares regression, a semi-supervised learning algorithm based on manifold regularization. Moreover, an improvement of the error bounds in terms of the numbers of labeled and unlabeled data is presented for the first time, as far as we know. The convergence rate depends on the approximation property and on the capacity of the reproducing kernel Hilbert space, measured by covering numbers. Some new techniques are exploited in the analysis, since an extra regularizer is introduced.

9.
Wavelet packets are an important part of wavelet theory and are widely used in non-stationary signal feature detection and fault diagnosis. Wavelet packets are a difficult point in the teaching of wavelet analysis, and also an easily overlooked topic. This paper analyzes wavelet packet theory, summarizes wavelet packet objective functions and the domains to which they apply, and proposes a new objective function. The paper can provide some new ideas for the teaching of wavelet packets.

10.
The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms that has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single-kernel learning. To improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies AdaBoost to learn multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we use features selected with the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison with relevant state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method behaves stably and achieves noticeable accuracy across different data sets.
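The OIF band-selection step mentioned in this abstract scores a 3-band combination by how much independent information it carries. A minimal sketch, assuming the common textbook form of the index (sum of band standard deviations over sum of absolute pairwise correlations); the synthetic bands are illustrative:

```python
import numpy as np

def oif(bands):
    """Optimum Index Factor of a 3-band combination: sum of band
    standard deviations divided by the sum of the absolute pairwise
    correlation coefficients (treat this exact formula as an
    assumption of the sketch)."""
    stds = sum(b.std() for b in bands)
    corrs = sum(abs(np.corrcoef(bands[i], bands[j])[0, 1])
                for i, j in ((0, 1), (0, 2), (1, 2)))
    return stds / corrs

rng = np.random.default_rng(4)
b1 = rng.normal(100.0, 20.0, size=500)
b2 = 0.9 * b1 + rng.normal(0.0, 5.0, size=500)   # nearly redundant band
b3 = rng.normal(50.0, 30.0, size=500)            # independent band

score_diverse = oif([b1, b2, b3])                # includes independent b3
score_redundant = oif([b1, 0.95 * b1 + rng.normal(0.0, 2.0, size=500), b2])
```

A combination containing an independent band has low pairwise correlations, so its OIF score is higher, which is exactly why the index is used to pick informative band triples.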

11.
The interbank offered rate is the only direct market rate in China's currency market. Forecasting the volatility of the China Interbank Offered Rate (IBOR) has great theoretical and practical significance for financial asset pricing and financial risk measurement and management. However, IBOR is a dynamic and non-stationary time series whose evolution exhibits strong random fluctuations, which makes its volatility difficult to forecast. This paper offers a hybrid algorithm that uses a grey model and an extreme learning machine (ELM) to forecast the volatility of IBOR. The proposed algorithm consists of three phases. First, the grey model processes the original IBOR time series with the accumulated generating operation (AGO), weakening the stochastic volatility in the original series. Then, a forecasting model is built by using ELM to analyze the new IBOR series. Finally, the predicted values of the original IBOR series are obtained by the inverse accumulated generating operation (IAGO). The new model is applied to forecasting the Interbank Offered Rate of China. Compared with the forecasts of BP networks and the classical ELM, the new model is more effective at forecasting the short- and middle-term volatility of IBOR.
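The AGO/IAGO pair at the heart of this hybrid is simple: AGO turns a volatile series into a smoother, monotone cumulative series for the forecaster, and IAGO maps predictions back to the original scale. A minimal sketch (the toy rate values are illustrative):

```python
# Accumulated generating operation (AGO) from grey modelling:
# running cumulative sums of the series.
def ago(series):
    out, total = [], 0.0
    for x in series:
        total += x
        out.append(total)
    return out

# Inverse AGO: first differences recover the original series.
def iago(series):
    return [series[0]] + [series[i] - series[i - 1]
                          for i in range(1, len(series))]

rates = [2.1, 2.4, 1.9, 2.6, 2.2]   # a short, noisy rate series
smoothed = ago(rates)               # monotone cumulative series
recovered = iago(smoothed)          # back on the original scale
```

In the paper's pipeline the ELM is trained on the smoothed (AGO) series, and its forecasts are passed through IAGO to obtain rate-scale predictions.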

12.
In a new fine-particle concentration forecasting model, Hampel identifier outlier-correction preprocessing detects and corrects the outliers in the original series. The empirical wavelet transform then adaptively decomposes the corrected series into a set of subseries, and each subseries is used to train a Stacking ensemble. In the Stacking ensemble forecasting method, an outlier-robust extreme learning machine meta-learner combines different Elman neural network base learners and outputs the forecasts of the different subseries. The forecast subseries are combined and reconstructed by the inverse empirical wavelet transform to obtain the final fine-particle concentration forecasts. The study shows that the proposed model has better accuracy and wider applicability than existing models.

13.
The regularity of functions from reproducing kernel Hilbert spaces (RKHSs) is studied in the setting of learning theory. We provide a reproducing property for partial derivatives up to order s when the Mercer kernel is C^{2s}. For such a kernel on a general domain, we show that the RKHS can be embedded into the function space C^s. These observations yield a representer theorem for regularized learning algorithms involving data for function values and gradients. Examples of Hermite learning and semi-supervised learning penalized by gradients on data are considered.
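The derivative reproducing property referred to here is usually stated as follows (a sketch of the standard form; the exact notation is an assumption of this note):

```latex
% For a C^{2s} Mercer kernel K with RKHS H_K, every f in H_K has
% partial derivatives up to order s, and they are reproduced by the
% kernel, with the derivative taken in the first argument of K:
\partial^{\alpha} f(x)
  = \left\langle f,\ \partial^{\alpha}_{x} K(x,\cdot) \right\rangle_{\mathcal{H}_K},
\qquad |\alpha| \le s,\quad f \in \mathcal{H}_K .
```

This is what allows gradient data to enter the same representer-theorem machinery as function values.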

14.
Unsupervised classification is a highly important task in machine learning. Although it has achieved great success in supervised classification, the support vector machine (SVM) is much less used to classify unlabeled data points, and SVM-based clustering has many drawbacks, including sensitivity to nonlinear kernels and random initialization, high computational cost, and unsuitability for imbalanced datasets. In this paper, to exploit the advantages of SVM and overcome the drawbacks of SVM-based clustering methods, we propose a completely new two-stage unsupervised classification method with no initialization: a new unsupervised kernel-free quadratic surface SVM (QSSVM) model is proposed to avoid selecting kernels and the related kernel parameters, and a golden-section algorithm is designed to generate the appropriate classifier for balanced and imbalanced data. By studying certain properties of the proposed model, a convergent decomposition algorithm is developed to implement this non-convex QSSVM model effectively and efficiently (in terms of computational cost). Numerical tests on artificial and public benchmark data indicate that the proposed unsupervised QSSVM method outperforms well-known clustering methods (including SVM-based and other state-of-the-art methods), particularly in terms of classification accuracy. Moreover, we extend and apply the proposed method to credit risk assessment by incorporating T-test-based feature weights. The promising numerical results on benchmark personal credit data and real-world corporate credit data strongly demonstrate the effectiveness, efficiency, and interpretability of the proposed method, and indicate its significant potential in certain real-world applications.

15.
In this era of big data, more and more models need to be trained to mine useful knowledge from large-scale data. Training multiple models accurately and efficiently, so as to make full use of limited computing resources, has become a challenging problem. As one of the ELM variants, the online sequential extreme learning machine (OS-ELM) provides a method for learning from incremental data. MapReduce, which provides a simple, scalable, and fault-tolerant framework, can be utilized for large-scale learning. In this paper, we propose an efficient parallel method for batched online sequential extreme learning machine (BPOS-ELM) training using MapReduce. Map execution time is estimated from historical statistics using regression and inverse-distance-weighted interpolation. Reduce execution time is estimated based on complexity analysis and regression. Based on these estimates, BPOS-ELM generates a Map execution plan and a Reduce execution plan. Finally, BPOS-ELM launches one MapReduce job to train multiple OS-ELM models according to the generated execution plan, and collects execution information to further improve the estimation accuracy. Our proposal is evaluated on real and synthetic data. The experimental results show that the accuracy of BPOS-ELM is at the same level as that of OS-ELM and parallel OS-ELM (POS-ELM), with higher training efficiency.
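The OS-ELM update that BPOS-ELM parallelizes is a recursive least-squares step: the hidden layer stays fixed, and each new chunk of data updates the output weights without revisiting old samples. A minimal sketch, assuming a small ridge term for numerical stability (the data, sizes, and ridge value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random hidden layer shared by every sequential update.
n_hidden = 15
W = rng.uniform(-6.0, 6.0, size=(1, n_hidden))
b = rng.uniform(-3.0, 3.0, size=n_hidden)
hid = lambda X: np.tanh(X @ W + b)

X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2.0 * np.pi * X[:, 0])
ridge = 1e-2                         # small regularizer (assumption)

# Initial batch of 50 samples.
H0 = hid(X[:50])
P = np.linalg.inv(H0.T @ H0 + ridge * np.eye(n_hidden))
beta = P @ H0.T @ y[:50]

# Remaining data arrives in chunks: rank-k recursive update of P, beta.
for start in range(50, 200, 50):
    H1, t1 = hid(X[start:start + 50]), y[start:start + 50]
    M = np.linalg.inv(np.eye(len(H1)) + H1 @ P @ H1.T)
    P = P - P @ H1.T @ M @ H1 @ P
    beta = beta + P @ H1.T @ (t1 - H1 @ beta)

# The sequential solution matches the one-shot batch ridge solution.
H = hid(X)
beta_batch = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
```

Because the update is exact, the chunked model ends at the same weights as training on all the data at once, which is what makes it safe to batch and distribute the work.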

16.
To capture the complex characteristics of agricultural futures price fluctuations and further improve forecasting accuracy, a decomposition-ensemble forecasting model combining variational mode decomposition (VMD) and the extreme learning machine (ELM) is constructed. First, exploiting the adaptivity and non-recursiveness of VMD, the complex time series is decomposed into several modal components (IMFs). Second, to address the difficulty of choosing the key VMD parameter K (the number of modes), a method based on a minimum fuzzy entropy criterion is proposed to find the optimal K, effectively avoiding mode mixing and end effects and thereby enhancing VMD's decomposition ability. Finally, the strong learning and generalization ability of ELM is used to forecast the subseries at different scales produced by VMD, and the forecasts are ensembled into the final result. Using CBOT rough rice, wheat, and soybean meal futures prices as the research objects, the empirical results show that this decomposition-ensemble model significantly outperforms single models and other decomposition-ensemble models in terms of prediction accuracy and directional indicators, providing a new approach to agricultural futures price forecasting.

17.
Semi-supervised learning has attracted growing interest over the past few years, and many methods have been proposed. Although various algorithms have been provided to implement semi-supervised learning, there are still gaps in our understanding of how the generalization error depends on the numbers of labeled and unlabeled data. In this paper, we consider a graph-based semi-supervised classification algorithm and establish its generalization error bounds. Our results show the close relations between the generalization error and the numbers of labeled and unlabeled data.

18.
Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, regularized pairwise learning (RPL) methods based on kernels are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods, and also their empirical bootstrap versions, additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded, non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz-type condition.

19.
Support vector regression (SVR) is a supervised machine learning technique that has been successfully employed to forecast financial volatility. As SVR is a kernel-based technique, the choice of kernel has a great impact on its forecasting accuracy, and empirical results show that SVRs with hybrid kernels tend to beat single-kernel models. Nevertheless, no application of hybrid-kernel SVR to financial volatility forecasting had been performed in previous research. Given the empirical evidence that the stock market oscillates between several possible regimes, in which the overall distribution of returns is a mixture of normals, we attempt to find the optimal number of mixed Gaussian kernels that improves the one-period-ahead volatility forecasting of SVR based on GARCH(1,1). The forecast performance of mixtures of one, two, three, and four Gaussian kernels is evaluated on the daily returns of the Nikkei and Ibovespa indexes and compared with SVR–GARCH with a Morlet wavelet kernel, standard GARCH, Glosten–Jagannathan–Runkle (GJR), and nonlinear EGARCH models with normal, Student-t, skew-Student-t, and generalized error distribution (GED) innovations, using the mean absolute error (MAE), root mean squared error (RMSE), and the robust Diebold–Mariano test. The out-of-sample results suggest that SVR–GARCH with a mixture of Gaussian kernels can improve the volatility forecasts and capture the regime-switching behavior.
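A mixture-of-Gaussians kernel of the kind evaluated in this abstract is just a convex combination of RBF kernels with different bandwidths, one per hypothesized regime. A minimal sketch (the bandwidths and equal weights are purely illustrative choices):

```python
import math

def mixture_gaussian_kernel(x, y, sigmas, weights):
    """k(x, y) = sum_i w_i * exp(-||x - y||^2 / (2 * sigma_i^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return sum(w * math.exp(-sq / (2.0 * s * s))
               for s, w in zip(sigmas, weights))

# A narrow and a wide bandwidth; weights sum to 1 so that
# k(x, x) = 1 as for a single normalized Gaussian kernel.
k_same = mixture_gaussian_kernel([0.0, 0.0], [0.0, 0.0],
                                 sigmas=[0.5, 2.0], weights=[0.5, 0.5])
k_far = mixture_gaussian_kernel([0.0, 0.0], [1.0, 0.0],
                                sigmas=[0.5, 2.0], weights=[0.5, 0.5])
```

Since a convex combination of positive-definite kernels is again positive definite, the mixture can be dropped into a standard SVR without changing the solver.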

20.
The support vector machine (SVM) is a popular tool for machine learning tasks. It has been successfully applied in many fields, but parameter optimization for SVM remains an open research issue. In this paper, to tune the parameters of SVM, one form of inter-cluster distance in the feature space is calculated for all the SVM classifiers of a multi-class problem. The inter-cluster distance in the feature space shows the degree to which the classes are separated: a larger inter-cluster distance implies a pair of more separated classes. For each classifier, the optimal kernel parameter that yields the largest inter-cluster distance is found. Then, a new continuous search interval of the kernel parameter, covering the optimal kernel parameter of each class pair, is determined. A self-adaptive differential evolution algorithm is used to search for the optimal parameter combination in the continuous intervals of the kernel parameter and the penalty parameter. Finally, the proposed method is applied to several real-world datasets as well as to fault diagnosis for rolling element bearings. The results show that it is both effective and computationally efficient for parameter optimization of multi-class SVM.
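The inter-cluster distance used here never needs the explicit feature map: the squared distance between the two class means in feature space follows from the kernel trick as mean(K11) + mean(K22) − 2·mean(K12). A minimal sketch, with a coarse grid standing in for the per-classifier parameter search (the data and the gamma grid are illustrative):

```python
import numpy as np

def rbf(A, B, gamma):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def inter_cluster_distance(X1, X2, gamma):
    """Squared distance between the two class means in RBF feature
    space, computed via the kernel trick:
    ||m1 - m2||^2 = mean(K11) + mean(K22) - 2 * mean(K12)."""
    return (rbf(X1, X1, gamma).mean()
            + rbf(X2, X2, gamma).mean()
            - 2.0 * rbf(X1, X2, gamma).mean())

rng = np.random.default_rng(3)
X1 = rng.normal(0.0, 0.5, size=(30, 2))
X2 = rng.normal(3.0, 0.5, size=(30, 2))

# Scan a coarse grid for the gamma giving the largest separation.
gammas = (0.01, 0.1, 1.0, 10.0)
dists = {g: inter_cluster_distance(X1, X2, g) for g in gammas}
best_gamma = max(dists, key=dists.get)
```

The distance is a squared norm, so it is always nonnegative; the paper uses the gamma that maximizes it to anchor the continuous search interval for the subsequent differential evolution step.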


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号