Similar Documents
20 similar documents found (search time: 23 ms)
1.
Robust extreme learning machine for regression estimation based on the exponential Laplace loss function
Data sets from real-world problems are usually contaminated by various kinds of noise, and when an extreme learning machine (ELM) learns from such data its prediction accuracy is low and its predictions fluctuate widely. To overcome this defect, an exponential Laplace loss function that attenuates the influence of noise is adopted. Built on the Gaussian kernel function, this loss is differentiable, non-convex, bounded, and approaches the Laplace function. Introducing it into the extreme learning machine yields the exponential Laplace loss function based robust ELM for regression (ELRELM) model. The optimization problem of the model is solved by an iteratively reweighted algorithm; in each iteration noisy sample points are assigned smaller weights, which effectively improves prediction accuracy. Experiments on real data sets verify that the proposed model achieves better learning performance and robustness than the comparison algorithms.
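A minimal numpy sketch of the iterative reweighting the abstract describes. The bounded Welsch-style weight exp(-r^2/(2*sigma^2)) is an assumption standing in for the paper's exponential Laplace loss (whose exact form is not given here); all names and parameter values are illustrative:

```python
import numpy as np

def robust_elm_fit(X, y, n_hidden=50, sigma=1.0, n_iter=10, reg=1e-3, seed=0):
    """ELM regression trained by iterative reweighting to resist noise."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    w = np.ones(len(y))                           # per-sample weights
    for _ in range(n_iter):
        A = H.T @ (H * w[:, None]) + reg * np.eye(n_hidden)
        beta = np.linalg.solve(A, H.T @ (w * y))  # weighted ridge solution
        r = y - H @ beta
        w = np.exp(-(r ** 2) / (2.0 * sigma ** 2))  # bounded loss: noisy points downweighted
    return W, b, beta
```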

2.
Kernel logistic regression (KLR) is a powerful nonlinear classifier. The combination of KLR and the truncated-regularized iteratively re-weighted least-squares (TR-IRLS) algorithm has led to a powerful classification method for small-to-medium size data sets. This method is called truncated-regularized kernel logistic regression (TR-KLR). Compared to support vector machines (SVM) and TR-IRLS on twelve publicly available benchmark data sets, the proposed TR-KLR algorithm is as accurate as, and much faster than, SVM, and more accurate than TR-IRLS. The TR-KLR algorithm also has the advantage of providing direct prediction probabilities.
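For context, a small Newton/IRLS sketch of plain kernel logistic regression, without the truncation and regularization refinements of TR-IRLS/TR-KLR; the RBF kernel, the regularizer lam, and the assumption that K is nonsingular are all illustrative choices:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def klr_fit(X, y, lam=1e-2, gamma=1.0, n_iter=25):
    """Newton/IRLS for kernel logistic regression; labels y in {0, 1}.

    Decision function: f(x) = sum_i alpha_i * k(x, x_i).
    """
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(K @ alpha, -30, 30)))  # class probabilities
        S = p * (1.0 - p)                                       # IRLS weights
        # Newton step (K assumed nonsingular): (diag(S) K + lam I) d = K-preconditioned gradient
        d = np.linalg.solve(S[:, None] * K + lam * np.eye(n), p - y + lam * alpha)
        alpha -= d
    return alpha
```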

3.
This article introduces a new method of supervised learning based on linear discrimination among the vertices of a regular simplex in Euclidean space. Each vertex represents a different category. Discrimination is phrased as a regression problem involving ε-insensitive residuals and a quadratic penalty on the coefficients of the linear predictors. The objective function can be minimized by a primal MM (majorization–minimization) algorithm that (a) relies on quadratic majorization and iteratively re-weighted least squares, (b) is simpler to program than algorithms that pass to the dual of the original optimization problem, and (c) can be accelerated by step doubling. Limited comparisons on real and simulated data suggest that the MM algorithm is competitive in statistical accuracy and computational speed with the best currently available algorithms for discriminant analysis.
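The regular-simplex vertex coding can be constructed explicitly; a short sketch of one standard construction (not necessarily the article's) that yields k pairwise-equidistant category vertices in R^(k-1):

```python
import numpy as np

def simplex_vertices(k):
    """Vertices of a regular simplex: k points in R^(k-1), centered at the
    origin, with equal norms and equal pairwise distances."""
    E = np.eye(k) - np.ones((k, k)) / k          # centered standard basis, rank k-1
    _, _, Vt = np.linalg.svd(E)
    V = E @ Vt[:k - 1].T                         # coordinates in the (k-1)-dim row space
    return V / np.linalg.norm(V[0])              # scale vertices to unit norm

V = simplex_vertices(4)
D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
print(np.round(D, 3))                            # off-diagonal distances all equal
```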

4.
We investigate constrained first-order techniques for training support vector machines (SVM) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas from the incremental gradient technique, gradient acceleration, and successive simple calculations of Lagrange multipliers. Both primal and dual formulations are studied and compared. Experiments show that the constrained incremental algorithms working in the dual space achieve the best trade-off between prediction accuracy and training time. We perform comparisons with an unconstrained large-scale learning algorithm (Pegasos stochastic gradient) to emphasize that our choice remains competitive for large-scale learning due to the very special structure of the training problem.
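Since Pegasos is the comparison baseline named above, a compact sketch of its stochastic sub-gradient update is useful context (the paper's constrained incremental methods themselves are not reproduced here):

```python
import numpy as np

def pegasos(X, y, lam=0.1, n_iter=5000, seed=0):
    """Pegasos stochastic sub-gradient descent for the linear SVM primal.

    X: (n_samples, n_features); y: labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, n_iter + 1):
        i = rng.integers(len(y))                # pick one sample at random
        eta = 1.0 / (lam * t)                   # 1/(lambda*t) step-size schedule
        margin = y[i] * (X[i] @ w)
        w *= 1.0 - eta * lam                    # shrinkage from the L2 regularizer
        if margin < 1:                          # hinge loss active: take its sub-gradient
            w += eta * y[i] * X[i]
    return w
```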

5.
Preconditioned SOR methods for generalized least-squares problems
1. Introduction. The generalized least squares problem, defined as min_x (Ax − b)^T W^{-1} (Ax − b), (1.1) where A ∈ R^{m×n}, m > n, b ∈ R^m, and W ∈ R^{m×m} is a symmetric and positive definite matrix, is frequently found when solving problems in statistics, engineering and economics. For example, we get generalized least squares problems when solving nonlinear regression analysis by quasi-likelihood estimation, image reconstruction problems and economic models obtained by the maximum likelihood method (cf. [1,2]). Paige [3,4] investigates the problem explicitly. He changes the orig…
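For reference, problem (1.1) can also be solved directly by whitening with a Cholesky factor of W; a brief sketch (the paper studies preconditioned SOR iterations, a different, iterative route):

```python
import numpy as np

def gls_solve(A, b, W):
    """Solve min_x (Ax - b)^T W^{-1} (Ax - b) by whitening with the
    Cholesky factor of W (W = L L^T), then ordinary least squares."""
    L = np.linalg.cholesky(W)
    Aw = np.linalg.solve(L, A)                  # L^{-1} A
    bw = np.linalg.solve(L, b)                  # L^{-1} b
    x, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
    return x
```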

6.
The support vector machine (SVM) is known for its good performance in two-class classification, but its extension to multiclass classification is still an ongoing research issue. In this article, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in two-class classification, but also can naturally be generalized to the multiclass case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the support points of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a potential computational advantage over the SVM.

7.
This article is concerned with iterative techniques for linear systems of equations arising from a least squares formulation of boundary value problems. In its classical form, the solution of the least squares method is obtained by solving the traditional normal equation. However, for nonsmooth boundary conditions or in the case of refinement at a selected set of interior points, the matrix associated with the normal equation tends to be ill-conditioned. In this case, the least squares method may be formulated as a Powell multiplier method and the equations solved iteratively. Therein we use and compare two different iterative algorithms. The first algorithm is the preconditioned conjugate gradient method applied to the normal equation, while the second is a new algorithm based on the Powell method and formulated on the stabilized dual problem. The two algorithms are first compared on a one-dimensional problem with poorly conditioned matrices. Results show that, for such problems, the new algorithm gives more accurate results. The new algorithm is then applied to a two-dimensional steady state diffusion problem and a boundary layer problem. A comparison between the least squares method of Bramble and Schatz and the new algorithm demonstrates the ability of the new method to give highly accurate results on the boundary, or at a set of given interior collocation points without the deterioration of the condition number of the matrix. Conditions for convergence of the proposed algorithm are discussed. © 1997 John Wiley & Sons, Inc.

8.
For solving least squares problems, the CGLS method is a typical choice from the point of view of iterative methods. When the least squares problem is ill-conditioned, however, the convergence behavior of CGLS deteriorates. We therefore look to other iterative Krylov subspace methods to overcome this disadvantage. The GMRES method is a suitable candidate because it is derived from the minimal residual norm approach, which coincides with the least squares formulation. Ken Hayami proposed BAGMRES for solving least squares problems in [GMRES Methods for Least Squares Problems, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2400-2430]. Deflation and balancing preconditioners can improve the convergence rate by modulating the spectral distribution. Hence, in this paper we apply preconditioned iterative Krylov subspace methods with deflation and balancing preconditioners to solve ill-conditioned least squares problems. Numerical experiments show that the proposed methods outperform the CGLS method.
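A minimal implementation of the CGLS baseline discussed above, which applies conjugate gradients to A^T A x = A^T b without forming the normal-equations matrix (the deflation/balancing preconditioned GMRES variants are not shown):

```python
import numpy as np

def cgls(A, b, n_iter=100, tol=1e-10):
    """Classical CGLS iteration for min_x ||Ax - b||_2."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:            # normal-equations residual small enough
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```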

9.
A computational procedure is developed for determining the solution of minimal length to a linear least squares problem subject to bounds on the variables. In the first stage, a solution to the least squares problem is computed and then in the second stage, the solution of minimal length is determined. The objective function in each step is minimized by an active set method adapted to the special structure of the problem. The systems of linear equations satisfied by the descent direction and the Lagrange multipliers in the minimization algorithm are solved by direct methods based on QR decompositions or iterative preconditioned conjugate gradient methods. The direct and the iterative methods are compared in numerical experiments, where the solutions are sought to a sequence of related, minimal least squares problems subject to bounds on the variables. The application of the iterative methods to large, sparse problems is discussed briefly. This work was supported by The National Swedish Board for Technical Development under contract dnr 80-3341.
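As a quick illustration of stage one (a bound-constrained least squares solution), SciPy's lsq_linear solves the same constrained problem, though with its own trust-region or BVLS solvers rather than the paper's active set method, and without the minimal-length second stage; the data here are random stand-ins:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
# Least squares subject to simple bounds 0 <= x_i <= 1 on the variables
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print(res.x, res.cost)
```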

10.
This paper is concerned with weighted least squares solutions to general coupled Sylvester matrix equations. Gradient based iterative algorithms are proposed to solve this problem. This type of iterative algorithm includes a wide class of iterative algorithms, and two special cases are studied in detail in this paper. Necessary and sufficient conditions guaranteeing the convergence of the proposed algorithms are presented. Sufficient conditions that are easy to compute are also given. The optimal step sizes that maximize the convergence rates of the algorithms, as properly defined in this paper, are established. Several special cases of the weighted least squares problem, such as a least squares solution to the coupled Sylvester matrix equations problem, solutions to the general coupled Sylvester matrix equations problem, and a weighted least squares solution to the linear matrix equation problem, are simultaneously solved. Several numerical examples are given to illustrate the effectiveness of the proposed algorithms.
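A sketch of a gradient-based iteration for the simplest member of this family, the single equation A X + X B = C with square A and B; the conservative step size below is one sufficient choice, assuming the equation has a unique solution (the paper's coupled, weighted setting and optimal step sizes are not reproduced):

```python
import numpy as np

def sylvester_gradient(A, B, C, n_iter=2000):
    """Steepest descent on (1/2)*||A X + X B - C||_F^2 for A (n x n), B (p x p)."""
    mu = 1.0 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2  # safe step size
    X = np.zeros_like(C)
    for _ in range(n_iter):
        R = C - A @ X - X @ B                   # residual
        X = X + mu * (A.T @ R + R @ B.T)        # negative gradient direction
    return X
```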

11.
Support Vector Machine (SVM) is one of the most important classes of machine learning models and algorithms, and has been successfully applied in various fields. Nonlinear optimization plays a crucial role in SVM methodology, both in defining the machine learning models and in designing convergent and efficient algorithms for large-scale training problems. In this paper we present the convex programming problems underlying SVM, focusing on supervised binary classification. We analyze the most important and widely used optimization methods for SVM training problems, and we discuss how the properties of these problems can be incorporated in designing useful algorithms.
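For concreteness, the core convex program of supervised binary SVM training is the soft-margin dual QP over multipliers alpha_1, ..., alpha_n:

    max over alpha:  sum_i alpha_i - (1/2) sum_{i,j} alpha_i alpha_j y_i y_j K(x_i, x_j)
    subject to:      0 <= alpha_i <= C for all i,  and  sum_i alpha_i y_i = 0,

where K is the kernel, C the regularization parameter, and y_i in {-1, +1} the labels; the methods surveyed in the article are different ways of solving programs of this form at scale.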

12.
Kernel extreme learning machine (KELM) increases the robustness of extreme learning machine (ELM) by turning linearly non-separable data in a low dimensional space into a linearly separable one. However, the internal weight parameters of ELM are initialized at random, causing the algorithm to be unstable. In this paper, we use the active operators particle swarm optimization algorithm (APSO) to obtain an optimal set of initial parameters for KELM, thus creating an optimal KELM classifier named APSO-KELM. Experiments on standard genetic datasets show that APSO-KELM has higher classification accuracy when compared to the existing ELM, KELM, and algorithms combining PSO/APSO with ELM/KELM, such as PSO-KELM, APSO-ELM, PSO-ELM, etc. Moreover, APSO-KELM has good stability and convergence, and is shown to be a reliable and effective classification algorithm.
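The KELM solver being tuned has a simple closed form; a sketch with hand-fixed hyperparameters (the APSO search over C and the kernel width, which is the paper's contribution, is omitted, and the RBF kernel is an illustrative choice):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=1.0):
    """Closed-form kernel ELM: solve (K + I/C) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(y)) / C, y)

def kelm_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```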

13.
In this era of big data, more and more models need to be trained to mine useful knowledge from large scale data. It has become a challenging problem to train multiple models accurately and efficiently so as to make full use of limited computing resources. As one of the ELM variants, online sequential extreme learning machine (OS-ELM) provides a method to learn from incremental data. MapReduce, which provides a simple, scalable and fault-tolerant framework, can be utilized for large scale learning. In this paper, we propose an efficient parallel method for batched online sequential extreme learning machine (BPOS-ELM) training using MapReduce. Map execution time is estimated from historical statistics using a regression method and inverse distance weighted interpolation; Reduce execution time is estimated based on complexity analysis and a regression method. Based on these estimations, BPOS-ELM generates a Map execution plan and a Reduce execution plan. Finally, BPOS-ELM launches one MapReduce job to train multiple OS-ELM models according to the generated execution plan, and collects execution information to further improve estimation accuracy. Our proposal is evaluated with real and synthetic data. The experimental results show that the accuracy of BPOS-ELM is at the same level as those of OS-ELM and parallel OS-ELM (POS-ELM), with higher training efficiency.
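The per-model OS-ELM recursion that BPOS-ELM parallelizes is a recursive least-squares update; a compact sketch (the MapReduce planning and execution-time estimation are not shown, and the ridge term reg in the initialization is an added assumption for numerical safety):

```python
import numpy as np

def os_elm_init(H0, y0, reg=1e-3):
    """Initialization phase: H0 is the hidden-layer output of the first batch."""
    P = np.linalg.inv(H0.T @ H0 + reg * np.eye(H0.shape[1]))
    beta = P @ H0.T @ y0
    return P, beta

def os_elm_update(P, beta, H, y):
    """Sequential phase: fold one incoming chunk (H, y) into the model."""
    M = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)   # small chunk-sized system
    P = P - P @ H.T @ M @ H @ P                           # covariance update
    beta = beta + P @ H.T @ (y - H @ beta)                # output-weight update
    return P, beta
```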

14.
Drawing on the respective strengths and weaknesses of partial least squares and support vector machines, a natural gas consumption prediction model based on a partial least squares support vector machine is proposed. First, partial least squares is used to extract new composite variables that influence natural gas consumption, and a support vector machine model taking the new composite variables as input and natural gas consumption as output is built to predict natural gas consumption. The method is then validated for feasibility and correctness by comparing prediction errors against multiple regression, partial least squares regression, and an ordinary support vector machine. The results show that this natural gas consumption prediction model has high accuracy and practical value.
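A sketch of the two-stage idea using scikit-learn, with synthetic stand-in data in place of the natural gas consumption factors: PLSRegression extracts the composite variables, and SVR then regresses on them.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                 # stand-ins for the influencing factors
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=100)  # stand-in consumption series

pls = PLSRegression(n_components=3).fit(X, y)
T = pls.transform(X)                          # PLS components = new composite variables
svr = SVR(kernel="rbf", C=10.0).fit(T, y)     # SVM regression on the components
print(svr.score(T, y))                        # in-sample R^2 as a quick sanity check
```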

15.
A multilevel approach for nonnegative matrix factorization
Nonnegative matrix factorization (NMF), the problem of approximating a nonnegative matrix with the product of two low-rank nonnegative matrices, has been shown to be useful in many applications, such as text mining, image processing, and computational biology. In this paper, we explain how algorithms for NMF can be embedded into the framework of multilevel methods in order to accelerate their initial convergence. This technique can be applied in situations where data admit a good approximate representation in a lower dimensional space through linear transformations preserving nonnegativity. Several simple multilevel strategies are described and are experimentally shown to significantly speed up three popular NMF algorithms (alternating nonnegative least squares, multiplicative updates, and hierarchical alternating least squares) on several standard image datasets.
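One of the three base algorithms mentioned, Lee-Seung multiplicative updates, fits in a few lines (the multilevel restriction/prolongation machinery itself is not reproduced here):

```python
import numpy as np

def nmf_mu(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update W with H fixed
    return W, H
```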

16.
Local search methods are widely used to improve the performance of evolutionary computation algorithms in all kinds of domains. Employing advanced and efficient exploration mechanisms becomes crucial in complex and very large (in terms of search space) problems, such as when employing evolutionary algorithms to large-scale data mining tasks. Recently, the GAssist Pittsburgh evolutionary learning system was extended with memetic operators for discrete representations that use information from the supervised learning process to heuristically edit classification rules and rule sets. In this paper we first adapt some of these operators to BioHEL, a different evolutionary learning system applying the iterative learning approach, and afterwards propose versions of these operators designed for continuous attributes and for dealing with noise. The performance of all these operators and their combination is extensively evaluated on a broad range of synthetic large-scale datasets to identify the settings that present the best balance between efficiency and accuracy. Finally, the identified best configurations are compared with other classes of machine learning methods on both synthetic and real-world large-scale datasets and show very competitive performance.

17.
Iterative methods applied to the normal equations A^T Ax = A^T b are sometimes used for solving large sparse linear least squares problems. However, when the matrix is rank-deficient many methods, although convergent, fail to produce the unique solution of minimal Euclidean norm. Examples of such methods are the Jacobi and SOR methods as well as the preconditioned conjugate gradient algorithm. We analyze here an iterative scheme that overcomes this difficulty for the case of stationary iterative methods. The scheme combines two stationary iterative methods: the first produces any least squares solution, whereas the second produces the minimum norm solution to a consistent system. This work was supported by the Swedish Research Council for Engineering Sciences, TFR.

18.
Complex valued systems of equations with a matrix R + iS, where R and S are real valued, arise in many applications. A preconditioned iterative solution method is presented for the case where R and S are symmetric positive semi-definite and at least one of R, S is positive definite. The condition number of the preconditioned matrix is bounded above by 2, so only very few iterations are required. Applications to solving matrix polynomial equation systems, linear systems of ordinary differential equations, and time-stepping integration schemes based on Padé approximation for parabolic and hyperbolic problems are also discussed. Numerical comparisons show that the proposed real valued method is much faster than the iterative complex symmetric QMR method. Copyright © 2000 John Wiley & Sons, Ltd.
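Real-valued treatments of such systems typically rest on the standard real-equivalent form: writing (R + iS)(x + iy) = b + ic and separating real and imaginary parts gives the 2x2 block system

    [ R  -S ] [ x ]   [ b ]
    [ S   R ] [ y ] = [ c ],

on which a preconditioned real-valued iteration can operate (the paper's specific preconditioner, which achieves the condition number bound of 2, is not reproduced here).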

19.
For large sparse systems of linear equations, iterative techniques are attractive. In this paper, we study a splitting method for an important class of symmetric and indefinite systems. Theoretical analyses show that this method converges to the unique solution of the system of linear equations for all t>0 (t is the parameter). Moreover, all the eigenvalues of the iteration matrix are real and nonnegative, and the spectral radius of the iteration matrix is decreasing with respect to the parameter t. In addition, a preconditioning strategy based on the splitting of the symmetric and indefinite coefficient matrices is proposed. The eigensolution of the preconditioned matrix is described and an upper bound on the degree of the minimal polynomial for the preconditioned matrix is obtained. Numerical experiments on a model Stokes problem and a least-squares problem with linear constraints are presented to illustrate the effectiveness of the method. Copyright © 2011 John Wiley & Sons, Ltd.

20.
In this paper we study the problem of learning the gradient function with application to variable selection and determining variable covariation. Firstly, we propose a novel unifying framework for coordinate gradient learning from the perspective of multi-task learning. Various variable selection methods can be regarded as special instances of this framework. Secondly, we formulate the dual problems of gradient learning with general loss functions. This enables the direct application of standard optimization toolboxes to the case of gradient learning. For instance, gradient learning with SVM loss can be solved by quadratic programming (QP) routines. Thirdly, we propose a novel gradient learning formulation which can be cast as a learning the kernel matrix problem. Its relation with sparse regularization is highlighted. A semi-infinite linear programming (SILP) approach and an iterative optimization approach are proposed to efficiently solve this problem. Finally, we validate our proposed approaches on both synthetic and real datasets.

