Similar Literature
20 similar records found.
1.
We consider a class of semi-supervised distance metric learning problems. Because sample sets (databases) have grown rapidly in size and complexity, the learned distance metric matrix should be sparse, so we add a sparsity constraint on the learned matrix to an existing distance metric learning model. For tractability, the sparsity constraint is imposed via the Frobenius norm, and a penalty method then moves this constraint into the objective, turning the constrained model into an unconstrained optimization problem. To solve it, we propose an accelerated projected gradient algorithm on the set of positive definite matrices, which overcomes the difficulty that linear combinations cannot be formed directly within this set, and we analyze its convergence. Finally, numerical experiments on classification problems from the UCI repository illustrate both the sparsity of the learned matrix and the effectiveness of the accelerated projected gradient algorithm.
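A minimal sketch of an accelerated projected gradient iteration of the kind described above, assuming a generic smooth data-fitting loss with gradient grad_f and a Frobenius-norm penalty weight rho; all names are illustrative rather than the authors' code, and the projection onto the positive semidefinite cone is done by eigenvalue clipping.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone
    by clipping negative eigenvalues to zero."""
    M = (M + M.T) / 2
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

def accelerated_projected_gradient(grad_f, M0, step, rho, n_iter=200):
    """Nesterov-style accelerated projected gradient on the PSD cone.
    grad_f: gradient of the smooth data-fitting loss at a matrix.
    rho:    weight of the Frobenius penalty rho * ||M||_F^2 that
            keeps the learned metric sparse (small-entried)."""
    M_prev, Y, t = M0.copy(), M0.copy(), 1.0
    for _ in range(n_iter):
        # gradient of loss plus Frobenius penalty: grad_f(Y) + 2*rho*Y
        M = project_psd(Y - step * (grad_f(Y) + 2 * rho * Y))
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        Y = M + ((t - 1) / t_next) * (M - M_prev)  # momentum extrapolation
        M_prev, t = M, t_next
    return M_prev
```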

2.
In this paper, we propose a general framework for extreme learning machines via free sparse transfer representation, referred to as transfer free sparse representation based on extreme learning machine (TFSR-ELM). The framework accommodates different divergence measures between data distributions, such as maximum mean discrepancy and K-L divergence. We add an effective sparse regularization to the free transfer representation learning framework, which reduces time and space costs, and we give solutions and convergence analyses for the different distribution-distance estimation criteria. Comprehensive experiments show that TFSR-based algorithms outperform existing transfer learning methods and are robust to different training-set sizes.
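As an illustration of one divergence measure the framework supports, here is a hedged sketch of the standard biased empirical estimator of squared maximum mean discrepancy under an RBF kernel; the kernel choice and bandwidth are assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X_src, X_tgt, gamma=1.0):
    """Biased empirical estimate of the squared maximum mean
    discrepancy between source and target samples."""
    Kss = rbf_kernel(X_src, X_src, gamma).mean()
    Ktt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
    Kst = rbf_kernel(X_src, X_tgt, gamma).mean()
    return Kss + Ktt - 2 * Kst
```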

3.
Image inpainting based on sparse reconstruction depends on exploiting the global self-similarity of the image and on the choice of the dictionary for sparse decomposition. We therefore propose an inpainting approach built on a global sparse representation model with class-specific learned dictionaries. The algorithm first clusters the intact regions of the image into several sub-regions with similar geometric structure and uses K-SVD to learn a dictionary adapted to the structural features of each sub-region. Exploiting the image's self-similarity, it then builds a global sparse expectation-maximization representation model that describes the spatial organization of image patches, and iteratively alternates between updating the patch organization and the estimate of the damaged image until the restoration stabilizes. Experimental results show that the method restores both texture details and structural information well.

4.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning, and other machine learning techniques useful in many applications, including neuroscience and signal processing. In this paper, we present a unified algorithmic framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy for adaptively adjusting the penalty parameters, which is key to achieving good performance with ADMM. Extensive numerical experiments compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm solves different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, it can explicitly enforce various combinatorial sparsity patterns that, to our knowledge, have not been considered in existing approaches.
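A hedged sketch of a basic ADMM loop for one SeMF instance, min ||A - XY||_F^2 subject to X and Y lying in structure sets with cheap projections proj_X and proj_Y; the penalty parameter beta is fixed here, whereas the paper's key contribution is an adaptive adjustment strategy that this sketch omits.

```python
import numpy as np

def admm_semf(A, k, proj_X, proj_Y, beta=1.0, n_iter=300):
    """ADMM sketch for min ||A - X Y||_F^2 with X, Y constrained to
    structure sets; proj_X / proj_Y are the low-complexity
    projections (e.g. nonnegativity, column-wise sparsity)."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    X, Y = rng.standard_normal((m, k)), rng.standard_normal((k, n))
    U, V = np.zeros_like(X), np.zeros_like(Y)   # scaled multipliers
    Xs, Ys = proj_X(X), proj_Y(Y)               # structured copies
    I = np.eye(k)
    for _ in range(n_iter):
        # alternating least-squares steps with quadratic penalty terms
        X = (A @ Y.T + beta * (Xs - U)) @ np.linalg.inv(Y @ Y.T + beta * I)
        Y = np.linalg.inv(X.T @ X + beta * I) @ (X.T @ A + beta * (Ys - V))
        Xs, Ys = proj_X(X + U), proj_Y(Y + V)   # enforce structure
        U, V = U + X - Xs, V + Y - Ys           # dual updates
    return Xs, Ys
```

For example, proj_X = lambda M: np.clip(M, 0, None) recovers a sparse nonnegative factorization variant.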

5.
J. K. Liu and X. L. Du, Applicable Analysis, 2018, 97(12): 2122-2131
Many problems arising in machine learning, compressive sensing, linear inverse problems, and statistical inference involve finding sparse solutions to under-determined or ill-conditioned equations. In this paper, a gradient projection method is proposed to recover sparse signals in compressive sensing by solving nonlinear convex constrained equations. Global convergence is established with a backtracking line search. Preliminary numerical experiments on sparse signal reconstruction in compressive sensing show that the proposed method is effective and stable.
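For illustration, a gradient projection sketch with Armijo backtracking in the GPSR style, splitting x = u - v with u, v >= 0 so that the projection is simple clipping; this is a standard construction for the compressive sensing objective, not the authors' exact method for nonlinear convex constrained equations.

```python
import numpy as np

def gpsr_like(A, b, tau, step0=1.0, c=1e-4, n_iter=200):
    """Gradient projection sketch for min 0.5||Ax-b||^2 + tau||x||_1,
    via the split x = u - v with u, v >= 0, using an Armijo
    backtracking line search."""
    n = A.shape[1]
    u, v = np.zeros(n), np.zeros(n)

    def obj(u, v):
        r = A @ (u - v) - b
        return 0.5 * r @ r + tau * (u.sum() + v.sum())

    for _ in range(n_iter):
        r = A @ (u - v) - b
        gu, gv = A.T @ r + tau, -A.T @ r + tau   # gradients w.r.t. u, v
        step = step0
        while True:                              # backtracking line search
            u_new = np.clip(u - step * gu, 0, None)
            v_new = np.clip(v - step * gv, 0, None)
            decrease = gu @ (u - u_new) + gv @ (v - v_new)
            if obj(u_new, v_new) <= obj(u, v) - c * decrease or step < 1e-12:
                break
            step *= 0.5
        u, v = u_new, v_new
    return u - v
```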

6.
In recent years, a great deal of research has focused on sparse representation of signals. In particular, the dictionary learning algorithm K-SVD efficiently learns a redundant dictionary from a set of training signals, and much progress has been made on various aspects of it. A related technique, the extreme learning machine (ELM), is a single-layer feed-forward neural network (SLFN) with fast learning speed, good generalization, and universal classification capability. In this paper, we propose an improvement to K-SVD: a denoising deep extreme learning machine based on autoencoders (DDELM-AE) for sparse representation. The representation learned by DDELM-AE serves as a new "input" that makes the conventional K-SVD algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets; the performance of deep models (i.e., stacked autoencoders) is comparable. The experimental results indicate that our proposed method is very efficient in terms of both speed and accuracy.
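A minimal sketch of a single ELM autoencoder layer of the kind described above: random input weights, a sigmoid hidden layer, and output weights solved in closed form by ridge regression; the learned representation would then be handed to K-SVD. Layer sizes and the regularization constant are illustrative assumptions.

```python
import numpy as np

def elm_autoencoder_features(X, n_hidden=256, reg=1e-3, seed=0):
    """One ELM-AE layer: random hidden mapping, then output weights
    beta chosen so that H @ beta reconstructs X (ridge regression).
    The learned representation X @ beta.T is returned as the new
    'input' for the downstream K-SVD step."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    bias = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + bias)))      # hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return X @ beta.T                               # learned representation
```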

7.
In the era of big data, growing volumes and complexity of data place higher demands on data processing tools and algorithms. As a tool for representing structured information, the ontology has been used in engineering fields such as chemistry, biology, pharmacy, and materials science. Because an ontology is a dynamic structure, the steady addition of concepts gradually enlarges a single ontology. To reduce the computational complexity of similarity calculation, dimensionality reduction and sparse computing techniques are applied to ontology learning. This article presents a discrete-dynamics approach with several tricks for applying sparse computing to ontology learning, and verifies its efficiency through experiments.

8.
The classic manifold learning algorithm Isomap degrades or even fails when the nonlinear data are sparse. To address this, we propose an improved cut-neighbors isometric feature mapping algorithm (CN-Isomap). When the data are sparse, the algorithm first removes "short-circuit" edges from the neighborhood graph by identifying each sample's true "manifold neighbors", and then approximates geodesic distances with a shortest-path algorithm, so that the estimated geodesics do not leave the manifold region. The low-dimensional embedding thus correctly reflects the intrinsic topology of the sample points in the high-dimensional input space, uncovering the low-dimensional manifold hidden there and effectively reducing the dimensionality of nonlinear sparse data. Experiments on benchmark data sets demonstrate the algorithm's effectiveness. CN-Isomap generalizes Isomap: it handles nonlinear sparse data effectively and applies equally when the data are not sparse.
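For reference, a sketch of plain Isomap (kNN graph, shortest-path geodesics, classical MDS); CN-Isomap's distinguishing step, cutting "short-circuit" edges before the shortest-path computation, is precisely the part omitted here. The sketch assumes the kNN graph is connected.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap_embed(X, n_neighbors=8, n_components=2):
    """Plain Isomap: kNN graph -> geodesic (shortest-path) distances
    -> classical MDS via double centering."""
    G = kneighbors_graph(X, n_neighbors, mode='distance')
    D = shortest_path(G, method='D', directed=False)   # geodesic distances
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                        # double centering
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))
```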

9.
Mathematical optimization is the discipline of characterizing and finding optimal solutions of problems in mathematical terms, while machine learning builds prediction methods from data and studies those methods. We introduce the optimization models in machine learning related to support vector machines and sparse reconstruction; on this basis, we derive the dual problems of three representative optimization models and discuss in detail how duality is used to solve them.
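As a concrete instance of the duality discussed, here are the standard textbook soft-margin SVM primal and its Lagrangian dual, a box-constrained quadratic program; these are the classical forms, not reproduced from the article.

```latex
% Soft-margin SVM primal:
\min_{w,b,\xi}\ \tfrac12\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i(w^{\top}x_i + b)\ \ge\ 1-\xi_i,\quad \xi_i \ge 0,\quad i=1,\dots,n.

% Its Lagrangian dual, a box-constrained QP in the multipliers alpha:
\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i
 - \tfrac12\sum_{i,j=1}^{n}\alpha_i\alpha_j\, y_i y_j\, x_i^{\top}x_j
\quad\text{s.t.}\quad \sum_{i=1}^{n}\alpha_i y_i = 0,\quad 0\le\alpha_i\le C.
```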

10.
Foundations of Computational Mathematics - In this paper we develop fast and memory-efficient numerical methods for learning functions of many variables that admit sparse representations in terms...

11.
Chen Zhao, Naihua Xiu, Houduo Qi, and Ziyan Luo, Mathematical Programming, 2022, 195(1-2): 903-928
Mathematical Programming - The sparse nonlinear programming (SNP) problem has wide applications in signal and image processing, machine learning, finance, etc. However, the computational...

12.
This paper presents an application of the sparse Bayesian learning (SBL) algorithm to linear inverse problems with a high-order total variation (HOTV) sparsity prior. For sparse signal recovery, SBL often produces more accurate estimates than maximum a posteriori estimates, including those that use $\ell_1$ regularization. Moreover, rather than a single signal estimate, SBL yields a full posterior density estimate, which can be used for uncertainty quantification. However, SBL is only immediately applicable to problems having a direct sparsity prior, or to those that can be formed via synthesis. This paper demonstrates how a problem with an HOTV sparsity prior can be formulated via synthesis and then solved with SBL. This expands the class of problems available to Bayesian learning to include, e.g., inverse problems dealing with the recovery of piecewise smooth functions or signals from data. Numerical examples demonstrate how the new technique is effectively employed.
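A compact sketch of the classic evidence-maximization updates for SBL on a linear model b ≈ Ax; the paper's contribution, forming the HOTV prior via synthesis, would amount to replacing A by A @ D for a synthesis operator D, as noted in the comments. Tolerances and stabilizing constants are illustrative.

```python
import numpy as np

def sbl(A, b, n_iter=100, tol=1e-6):
    """Sparse Bayesian learning for b ~ A x + noise with independent
    Gaussian priors x_i ~ N(0, 1/alpha_i).  For a synthesis-formulated
    HOTV prior, A here would be the product of the forward operator
    and a synthesis dictionary."""
    N, M = A.shape
    alpha = np.ones(M)      # prior precisions (large alpha_i prunes x_i)
    beta = 1.0              # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * A.T @ A + np.diag(alpha))
        mu = beta * Sigma @ A.T @ b                 # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)        # well-determinedness
        alpha_new = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.linalg.norm(b - A @ mu) ** 2 + 1e-12)
        if np.max(np.abs(alpha_new - alpha)) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return mu, Sigma        # full posterior, usable for uncertainty quantification
```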

13.
We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of the SVD of column-partitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted a great deal of interest in data mining and machine learning.

14.
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high-dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost computational performance. Such a modification, though simple, has a profound impact: APISTA not only enjoys the same theoretical guarantee as PISTA (a linear rate of convergence to a unique sparse local optimum with good statistical properties) but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models, obtaining new statistical recovery results that do not exist in the literature. Thorough numerical results are provided to back up our theory.
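For orientation, a sketch of the plain ISTA step that PISTA and APISTA build on; the path-following schedule over the regularization parameter and APISTA's coordinate-descent subroutine are the outer layers this sketch omits.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(z) * np.clip(np.abs(z) - t, 0, None)

def ista(A, b, lam, n_iter=500):
    """Plain ISTA for min 0.5||Ax-b||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```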

15.
Regularization Networks and Support Vector Machines
Regularization networks and support vector machines are techniques for solving certain problems of learning from examples, in particular the regression problem of approximating a multivariate function from sparse data. Radial basis functions, for example, are a special case of both regularization networks and support vector machines. We review both formulations in the context of Vapnik's theory of statistical learning, which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression; classification is treated as a special case.
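A minimal regularization-network sketch with a Gaussian RBF kernel, the special case mentioned above: f(x) = sum_i c_i K(x, x_i) with coefficients from the linear system (K + λnI)c = y. The bandwidth and regularization values are illustrative.

```python
import numpy as np

def rbf_regularization_network(X, y, lam=1e-2, gamma=1.0):
    """Fit f(x) = sum_i c_i K(x, x_i) with a Gaussian kernel by
    solving (K + lam * n * I) c = y; returns a prediction function."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    c = np.linalg.solve(K + lam * n * np.eye(n), y)

    def predict(X_new):
        d2n = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2n) @ c

    return predict
```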

16.
In the framework of supervised learning, we prove that the iterative algorithm introduced in Umanità and Villa (2010) [22] consistently estimates the relevant features of the regression function under the a priori assumption that it admits a sparse representation on a fixed dictionary.

17.
Ever larger amounts of data in applications demand more capable learning models for data processing. The data we encounter often have embedded sparsity structures; that is, when represented in an appropriate basis, their energy concentrates on a small number of basis functions. This paper is devoted to a numerical study of adaptive approximation, by deep neural networks (DNNs) with a sparse regularization with multiple parameters, of solutions of nonlinear partial differential equations that may have singularities. Noting that DNNs have an intrinsic multi-scale structure favorable to adaptive representation of functions, we employ a penalty with multiple parameters to develop DNNs with a multi-scale sparse regularization (SDNN) that effectively represent functions with certain singularities. We then apply the proposed SDNN to numerical solutions of the Burgers equation and the Schrödinger equation. Numerical examples confirm that the solutions generated by the proposed SDNN are sparse and accurate.
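A hedged PyTorch sketch of the multi-parameter idea: a small fully connected network in which each layer receives its own l1 penalty weight, so that different scales of the network can be sparsified differently; the paper's exact penalty and its PDE residual loss are not reproduced here.

```python
import torch
import torch.nn as nn

class SparseMLP(nn.Module):
    """Small fully connected network; sparse_penalty assigns a
    separate l1 weight to each layer, mimicking a multi-parameter
    (multi-scale) sparse regularization."""
    def __init__(self, widths=(1, 64, 64, 1)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(widths[:-1], widths[1:]))

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.tanh(layer(x))
        return self.layers[-1](x)

    def sparse_penalty(self, lambdas):
        # one l1 weight per layer: the "multiple parameters"
        return sum(lam * layer.weight.abs().sum()
                   for lam, layer in zip(lambdas, self.layers))

# training would minimize data/PDE loss + model.sparse_penalty(lambdas)
```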

18.
In this paper we study the problem of learning the gradient function, with applications to variable selection and determining variable covariation. First, we propose a novel unifying framework for coordinate gradient learning from the perspective of multi-task learning; various variable selection methods can be regarded as special instances of this framework. Second, we formulate the dual problems of gradient learning with general loss functions, which enables the direct application of standard optimization toolboxes to gradient learning; for instance, gradient learning with the SVM loss can be solved by quadratic programming (QP) routines. Third, we propose a novel gradient learning formulation that can be cast as a kernel matrix learning problem, and we highlight its relation to sparse regularization. A semi-infinite linear programming (SILP) approach and an iterative optimization approach are proposed to solve this problem efficiently. Finally, we validate the proposed approaches on both synthetic and real datasets.

19.
The extreme learning machine (ELM) is not only an effective classifier for supervised learning but can also be applied to unsupervised and semi-supervised learning. The unsupervised extreme learning machine (US-ELM) and semi-supervised extreme learning machine (SS-ELM) share the model structure of ELM and differ only in the cost function. We introduce kernel functions into US-ELM and SS-ELM, yielding the unsupervised and semi-supervised extreme learning machines with kernels (US-KELM and SS-KELM). Wavelet analysis has the characteristics of multivariate interpolation and sparse representation, and wavelet kernel functions have been widely used in support vector machines. Therefore, combining wavelet kernel functions with US-KELM and SS-KELM, this paper proposes the unsupervised and semi-supervised extreme learning machines with wavelet kernel functions (US-WKELM and SS-WKELM). The experimental results show the feasibility and validity of US-WKELM and SS-WKELM in clustering and classification.
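A sketch of a Morlet-style wavelet kernel (a standard construction in the wavelet-kernel SVM literature, not necessarily the authors' exact kernel) plugged into the usual kernel-ELM training formula.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-style wavelet kernel:
    K(x, y) = prod_i cos(1.75 (x_i - y_i)/a) * exp(-(x_i - y_i)^2 / (2 a^2))."""
    diff = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2), axis=-1)

def kernel_elm_train(X, T, lam=1e-2, a=1.0):
    """Kernel ELM output weights alpha = (K + I/lam)^{-1} T;
    prediction for new data is wavelet_kernel(X_new, X, a) @ alpha."""
    K = wavelet_kernel(X, X, a)
    n = K.shape[0]
    return np.linalg.solve(K + np.eye(n) / lam, T)
```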

20.
Feature selection plays an important role in the successful application of machine learning techniques to large real-world datasets. Avoiding model overfitting, especially when the number of features far exceeds the number of observations, requires selecting informative features and/or eliminating irrelevant ones, yet searching for an optimal subset of features can be computationally expensive. Functional magnetic resonance imaging (fMRI) produces datasets with exactly these characteristics, creating challenges for applying machine learning to classify cognitive states from fMRI data. In this study, we present an embedded feature selection framework that integrates sparse optimization for regularization (or sparse regularization) and classification. This optimization approach maximizes training accuracy while simultaneously enforcing sparsity by penalizing the feature coefficients in the objective function; many coefficients become exactly zero, which effectively eliminates the corresponding features from the classification model. To demonstrate the utility of the approach, we apply our framework to three different real-world fMRI datasets. The results show that regularized classifiers yield better classification accuracy, especially when the number of initial features is large, and that sparse regularization is key to achieving scientifically relevant generalizability and functional localization of classifier features. The approach is thus highly suited for analysis of fMRI data.
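In the same spirit, a minimal embedded-feature-selection example using l1-penalized logistic regression on synthetic stand-in data (the real fMRI datasets are of course not reproduced); the l1 penalty drives most coefficients exactly to zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an fMRI dataset: many features, few samples.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2000))
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # only 5 features are informative

# The l1 penalty zeroes out most coefficients, which is the embedded
# feature selection the abstract describes.
clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.1).fit(X, y)
n_selected = int((clf.coef_ != 0).sum())
print(f"{n_selected} of {X.shape[1]} features kept")
```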
