Similar Documents
20 similar documents found.
1.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning and other machine learning techniques useful in many applications including neuroscience and signal processing. In this paper, we present a unified algorithm framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters, which is key to achieving good performance with ADMM. We conduct extensive numerical experiments to compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm provides the ability to explicitly enforce various combinatorial sparsity patterns that, to our knowledge, have not been considered in existing approaches.
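The low-complexity projections that drive such an ADMM framework can be illustrated with a minimal sketch; the function names and the two constraint sets (nonnegativity and k-sparsity) are illustrative choices, not taken from the paper:

```python
# Minimal sketch of low-complexity Euclidean projections onto two constraint
# sets commonly enforced in structured matrix factorization; illustrative only.

def project_nonnegative(v):
    """Project a vector onto the nonnegative orthant (clip negatives to 0)."""
    return [max(x, 0.0) for x in v]

def project_k_sparse(v, k):
    """Project onto the set of k-sparse vectors: keep the k entries of
    largest magnitude, zero out the rest."""
    top = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    keep = set(top)
    return [x if i in keep else 0.0 for i, x in enumerate(v)]

print(project_nonnegative([1.0, -2.0, 3.0]))       # -> [1.0, 0.0, 3.0]
print(project_k_sparse([0.1, -5.0, 2.0, 0.0], 2))  # -> [0.0, -5.0, 2.0, 0.0]
```

In an ADMM splitting, each factor update alternates a least-squares step with one such projection, so the per-iteration cost stays low whenever the projection itself is cheap.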

2.
The Aveiro method is a sparse representation method in reproducing kernel Hilbert spaces that gives orthogonal projections onto linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying reproducing kernel Hilbert space: in general spaces, uniqueness sets are not easy to identify, let alone the question of the method's convergence speed. To avoid these difficulties, we propose a new Aveiro method based on a dictionary and the matching-pursuit idea. In fact, we do more: the new method is related to the recently proposed pre-orthogonal greedy algorithm, which involves completing a given dictionary. The new method is called the Aveiro method under complete dictionary, where the complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under a boundary vanishing condition available for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element of the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro method and the greedy algorithm.

3.
Sparse-reconstruction-based image inpainting depends on exploiting the global self-similarity of the image and on the choice of the dictionary for sparse decomposition. We therefore propose an image inpainting approach based on a globally sparse representation model with classified learned dictionaries. The algorithm first clusters the undamaged regions of the image into several sub-regions of similar geometric structure, and uses the K-SVD dictionary learning method to obtain a dictionary adapted to the structural features of each sub-region. It then builds, exploiting the self-similarity of the image, a global sparse expectation-maximization representation model that describes the spatial organization of image patches, and iterates this model to alternately update the patch organization and the estimate of the damaged image until the restoration stabilizes. Experimental results show that the method restores both texture details and structural information well.

4.
In the framework of supervised learning, we prove that the iterative algorithm introduced in Umanità and Villa (2010) [22] allows us to estimate in a consistent way the relevant features of the regression function under the a priori assumption that it admits a sparse representation on a fixed dictionary.

5.
We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that (a) we use a third-order tensor representation for our images and (b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions due to the ability of representing repeated features compactly in the dictionary.

6.
This article introduces a novel variational model for restoring images degraded by Cauchy noise and/or blurring. The model integrates a nonconvex data-fidelity term with two regularization terms, a sparse representation prior over dictionary learning and total generalized variation (TGV) regularization. The sparse representation prior, exploiting patch information, enables the preservation of fine features and textural patterns, while adequately denoising in homogeneous regions and contributing natural visual quality. TGV regularization further assists in effectively denoising in smooth regions while retaining edges. By adopting the penalty method and an alternating minimization approach, we present an efficient iterative algorithm to solve the proposed model. Numerical results establish the superiority of the proposed model over other existing models in regard to visual quality and certain image quality assessments.

7.
Reconstruction from incomplete projection angles is an important problem in CT image reconstruction. We combine the dictionary learning approach from compressed sensing with the iterative ART (algebraic reconstruction technique) algorithm: the dictionary update uses the K-SVD (K-singular value decomposition) algorithm, and the sparse coding uses the OMP (orthogonal matching pursuit) algorithm. Simulation experiments on the standard head phantom model verify that dictionary learning is feasible and effective for improving reconstruction quality and the signal-to-noise ratio in CT image reconstruction. We also study how the image patch size and the sliding distance used in dictionary learning affect the reconstructed image.
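The sparse-coding step named above can be sketched with a minimal pure-Python orthogonal matching pursuit over a tiny dictionary of unit-norm atoms; this is a toy illustration of OMP itself, not of the paper's CT pipeline, patch sizes, or learned dictionary:

```python
# Toy orthogonal matching pursuit (OMP): greedily pick the atom most
# correlated with the residual, then re-fit the coefficients on the current
# support by least squares (normal equations, Gaussian elimination).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve(A, b):
    """Solve a small dense linear system A c = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(atoms, y, k):
    """Return (support, coeffs) of a greedy k-term approximation of y."""
    residual, support, coeffs = y[:], [], []
    for _ in range(k):
        j = max((i for i in range(len(atoms)) if i not in support),
                key=lambda i: abs(dot(atoms[i], residual)))
        support.append(j)
        G = [[dot(atoms[a], atoms[b]) for b in support] for a in support]
        rhs = [dot(atoms[a], y) for a in support]
        coeffs = solve(G, rhs)
        residual = [y[t] - sum(c * atoms[s][t] for c, s in zip(coeffs, support))
                    for t in range(len(y))]
    return support, coeffs

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
support, coeffs = omp(atoms, [2.0, 3.0, 0.0], 2)
print(support, coeffs)  # -> [1, 0] [3.0, 2.0]
```

In a K-SVD pipeline this routine would be called once per image patch, with the learned overcomplete dictionary in place of the toy orthonormal atoms above.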

8.
In this paper, we propose a general framework for Extreme Learning Machine via free sparse transfer representation, which is referred to as transfer free sparse representation based on extreme learning machine (TFSR-ELM). This framework is suitable for different assumptions related to the divergence measures of the data distributions, such as a maximum mean discrepancy and K-L divergence. We propose an effective sparse regularization for the proposed free transfer representation learning framework, which can decrease the time and space cost. Different solutions to the problems based on the different distribution distance estimation criteria and convergence analysis are given. Comprehensive experiments show that TFSR-based algorithms outperform the existing transfer learning methods and are robust to different sizes of training data.

9.
Because traditional transform bases struggle to provide an optimal sparse representation of seismic data, we propose a random-noise suppression algorithm based on dictionary learning. The seismic data are divided into blocks, each containing the waveform information of several traces over a sampling window. Using adaptive dictionary learning with the data blocks as training samples, and exploiting the similarity of traces across neighboring blocks, an overcomplete dictionary is constructed and the seismic data are sparsely coded, recovering the main features of the data and suppressing random noise. Experiments show that the algorithm achieves a high PSNR and preserves the local features of seismic data well in regions of complex texture.

10.
Sparse representation is a recently developed data representation method that mimics the coding mechanism of the human cerebral cortex. Thanks to its robustness, resistance to interference, interpretability, and discriminative power, it is widely used in pattern recognition. Sparse-representation-based classifiers have achieved remarkable success in face recognition: the training samples are treated as a dictionary, and the sparsest representation of a test sample under that dictionary is sought, i.e., the test sample is reconstructed as a linear combination of as few training samples as possible. The classical sparse-representation classifier, however, ignores the class labels of the training samples, so the selected samples may come from many classes, which hurts classification; group-sparse classifiers were proposed to address this. Group-sparse methods account for class similarity among training samples and aim to represent a test sample using training samples from as few classes as possible, but they have the drawback that training samples of the same class are either all selected or all discarded. In practice, faces are affected by illumination, expression, pose, and even occlusion, so the relations between samples are complex; we therefore finally introduce a locally weighted, group-structured sparse representation method. This method represents the test sample mainly with training samples from classes similar to it and from its neighborhood, reducing interference from irrelevant classes and making the representation sparser and more discriminative.
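The classify-by-reconstruction-residual principle behind such classifiers can be sketched in drastically simplified form: one training atom per class, with the residual obtained by projecting the test sample onto that atom. This illustrates the principle only, not the sparse or group-sparse methods surveyed above, and the data are made up:

```python
# Toy residual-based classification: assign the test sample to the class
# whose (single) training atom reconstructs it with the smallest residual.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def residual_norm(atom, y):
    """Norm of y minus its orthogonal projection onto span{atom}."""
    c = dot(atom, y) / dot(atom, atom)
    r = [yi - c * ai for yi, ai in zip(y, atom)]
    return sum(x * x for x in r) ** 0.5

# Hypothetical training data: one representative sample per class.
training = {"class_A": [1.0, 0.0], "class_B": [0.0, 1.0]}
test_sample = [0.9, 0.1]
predicted = min(training, key=lambda k: residual_norm(training[k], test_sample))
print(predicted)  # -> class_A
```

A full sparse-representation classifier would instead sparse-code the test sample over all training samples at once and compare the per-class reconstruction residuals of that joint code.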

11.
Data sets in high-dimensional spaces are often concentrated near low-dimensional sets. Geometric Multi-Resolution Analysis (Allard, Chen, Maggioni, 2012) was introduced as a method for approximating, in a robust multiscale fashion, a low-dimensional set around which data may be concentrated, and for providing a dictionary for sparse representation of the data; moreover, the procedure is very computationally efficient. We introduce an estimator for low-dimensional sets supporting the data, constructed from the GMRA approximations. We exhibit near-optimal finite-sample bounds on its performance and demonstrate the robustness of this estimator with respect to noise and model error. In particular, our results imply that, if the data is supported on a low-dimensional manifold, the proposed sparse representations result in an error which depends only on the intrinsic dimension of the manifold. (© 2014 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
In this Note, we formulate a sparse Krylov-based algorithm for solving large-scale linear systems of algebraic equations arising from the discretization of randomly parametrized (or stochastic) elliptic partial differential equations (SPDEs). We analyze the proposed sparse conjugate gradient (CG) algorithm within the framework of inexact Krylov subspace methods, prove its convergence and study its abstract computational cost. Numerical studies conducted on stochastic diffusion models show that the proposed sparse CG algorithm outperforms the classical CG method when the sought solutions admit a sparse representation in a polynomial chaos basis. In such cases, the sparse CG algorithm recovers almost exactly the sparsity pattern of the exact solutions, which enables accelerated convergence. In the case when the SPDE solution does not admit a sparse representation, the convergence of the proposed algorithm is very similar to the classical CG method.

13.
We discuss the problem of sparse representation of domains in ℝ^d. We demonstrate how the recently developed general theory of greedy approximation in Banach spaces can be used in this problem. The use of greedy approximation has two important advantages: (1) it works for an arbitrary dictionary of sets used for sparse representation and (2) the method of approximation does not depend on smoothness properties of the domains and automatically provides a near optimal rate of approximation for domains with different smoothness properties. We also give some lower estimates of the approximation error and discuss a specific greedy algorithm for approximation of convex domains in ℝ^2.

14.
Elastic-net regularization in learning theory
Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie [H. Zou, T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B, 67(2) (2005) 301–320] for the selection of groups of correlated variables. To investigate the statistical properties of this scheme and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression where we allow the response variable to be vector-valued and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular “elastic-net representation” of the regression function such that, if the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed in the above-cited work.
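The flavor of such an iterative thresholding scheme can be sketched as follows: a generic ISTA-style iteration for the elastic-net functional on a tiny least-squares problem. The step size and parameters are illustrative, and this is a sketch of the general technique, not the specific algorithm derived in the paper:

```python
# Iterative soft-thresholding for the elastic net:
#   minimize 0.5*||A x - y||^2 + l1*||x||_1 + 0.5*l2*||x||^2.
# Each step: gradient descent on the quadratic part, then the elastic-net
# proximal map (soft-thresholding followed by a multiplicative shrinkage).

def soft_threshold(v, t):
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista_elastic_net(A, y, l1, l2, tau, iters):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        Ax = [sum(row[c] * x[c] for c in range(n)) for row in A]
        grad = [sum(A[r][c] * (Ax[r] - y[r]) for r in range(len(A)))
                for c in range(n)]
        z = [x[c] - tau * grad[c] for c in range(n)]
        x = [soft_threshold(z[c], tau * l1) / (1.0 + tau * l2) for c in range(n)]
    return x

# Identity design: the minimizer of 0.5*(x-3)^2 + |x| + 0.25*x^2 is x = 4/3,
# while the second coordinate falls below the threshold and is set to zero.
x = ista_elastic_net([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], 1.0, 0.5, 1.0, 20)
print(x)  # -> approximately [1.333, 0.0]
```

The ℓ1 term produces exact zeros (feature selection) while the ℓ2 term shrinks the surviving coefficients, which is what lets correlated variables enter the model as a group.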

15.
The purpose of this paper is to study sparse representations of signals from a general dictionary in a Banach space. For so-called localized frames in Hilbert spaces, the canonical frame coefficients are shown to provide a near sparsest expansion for several sparseness measures. However, for frames which are not localized, this no longer holds true and sparse representations may depend strongly on the choice of the sparseness measure. A large class of admissible sparseness measures is introduced, and we give sufficient conditions for having a unique sparse representation of a signal from the dictionary w.r.t. such a sparseness measure. Moreover, we give sufficient conditions on a signal such that the simple solution of a linear programming problem simultaneously solves all the nonconvex (and generally hard combinatorial) problems of sparsest representation of the signal w.r.t. arbitrary admissible sparseness measures.

16.
This paper brings together a novel information representation model for use in signal processing and computer vision problems, with a particular algorithmic development of the Landweber iterative algorithm. The information representation model allows a representation of multiple values for a variable as well as an expression for confidence. Both properties are important for effective computation using multi-level models, where a choice between models will be implementable as part of the optimization process. It is shown that in this way the algorithm can deal with a class of high-dimensional, sparse, and constrained least-squares problems, which arise in various computer vision learning tasks, such as object recognition and object pose estimation. While the algorithm has been applied to the solution of such problems, it has so far been used heuristically. In this paper we describe the properties and some of the peculiarities of the channel representation and optimization, and put them on firm mathematical ground. We consider the optimization a convexly constrained weighted least-squares problem and propose for its solution a projected Landweber method which employs oblique projections onto the closed convex constraint set. We formulate the problem, present the algorithm and work out its convergence properties, including a rate-of-convergence result. The results are put in perspective with currently available projected Landweber methods. An application to supervised learning is described, and the method is evaluated in an experiment involving function approximation, as well as application to transient signals.
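A minimal sketch of the projected Landweber template: here with a plain orthogonal projection onto the nonnegative orthant, not the oblique projections developed in the paper, and with made-up toy values for the matrix and step size:

```python
# Projected Landweber iteration: x <- P_C(x + tau * A^T (y - A x)),
# here with C = the nonnegative orthant, so P_C is a simple clip at zero.

def projected_landweber(A, y, tau, iters):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(x[j] + tau * g[j], 0.0) for j in range(n)]  # projection onto C
    return x

# Constrained least squares min ||A x - y||^2 s.t. x >= 0: the unconstrained
# minimizer of the second coordinate is negative, so it is clipped to zero.
A = [[1.0, 0.0], [0.0, 2.0]]
x = projected_landweber(A, [2.0, -2.0], 0.2, 100)
print(x)  # -> approximately [2.0, 0.0]
```

Convergence requires the step size tau to be small relative to the spectral norm of A; the rate-of-convergence analysis in the paper makes this precise for the oblique-projection variant.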

17.
This paper is concerned with linear inverse problems where the solution is assumed to have a sparse expansion with respect to several bases or frames. We were mainly motivated by the following two different approaches: (1) Jaillet and Torrésani [F. Jaillet, B. Torrésani, Time–frequency jigsaw puzzle: Adaptive multi-window and multi-layered Gabor expansions, preprint, 2005] and Molla and Torrésani [S. Molla, B. Torrésani, A hybrid audio scheme using hidden Markov models of waveforms, Appl. Comput. Harmon. Anal. (2005), in press] have suggested to represent audio signals by means of at least a wavelet for transient and a local cosine dictionary for tonal components. The suggested technology produces sparse representations of audio signals that are very efficient in audio coding. (2) Also quite recently, Daubechies et al. [I. Daubechies, M. Defrise, C. DeMol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. 57 (2004) 1413–1541] have developed an iterative method for linear inverse problems that promote a sparse representation for the solution to be reconstructed. Here in this paper, we bring both ideas together and construct schemes for linear inverse problems where the solution might then have a sparse representation (we also allow smoothness constraints) with respect to several bases or frames. By a few numerical examples in the field of audio and image processing we show that the resulting method works quite nicely.

18.
In the past decade, the sparse representation synthesis model has been deeply researched and widely applied in signal processing. Recently, a cosparse analysis model has been introduced as an interesting alternative to the sparse representation synthesis model. The sparse synthesis model pays attention to the non-zero elements of a representation vector x, while the cosparse analysis model focuses on the zero elements of the analysis representation vector Ωx. This paper mainly considers the problem of the cosparse analysis model. Based on the greedy analysis pursuit algorithm, by constructing an adaptive weighted matrix W_{k−1}, we propose a modified greedy analysis pursuit algorithm for the sparse recovery problem when the signal obeys the cosparse model. Using a weighted matrix, we fill the gap between greedy algorithms and relaxation techniques. The standard analysis shows that our algorithm is convergent. We estimate the error bound for solving the cosparse analysis model, and the presented simulations demonstrate the advantage of the proposed method for the cosparse inverse problem.
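The synthesis/analysis distinction drawn above can be made concrete with a tiny sketch of the cosparsity of a signal under an analysis operator Ω; the finite-difference operator used here is a standard illustrative choice, not taken from the paper:

```python
# Cosparsity of x under an analysis operator Omega: the number of zero
# entries of Omega @ x. The analysis model cares about these zeros, while
# the synthesis model counts the non-zeros of a representation vector.

def cosparsity(Omega, x, tol=1e-12):
    count = 0
    for row in Omega:
        if abs(sum(a * b for a, b in zip(row, x))) <= tol:
            count += 1
    return count

# 1-D finite differences: a piecewise-constant signal is highly cosparse.
Omega = [[1.0, -1.0, 0.0, 0.0],
         [0.0, 1.0, -1.0, 0.0],
         [0.0, 0.0, 1.0, -1.0]]
x = [4.0, 4.0, 4.0, 9.0]     # constant except for a single jump
print(cosparsity(Omega, x))  # -> 2
```

Analysis pursuit algorithms work in the opposite direction of synthesis pursuits: instead of growing a support of active atoms, they identify (and enforce) the rows of Ω on which the signal should vanish.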

19.
Finding a sparse approximation of a signal from an arbitrary dictionary is a very useful tool for solving many problems in signal processing. Several algorithms, such as Basis Pursuit (BP) and Matching Pursuits (MP, also known as greedy algorithms), have been introduced to compute sparse approximations of signals, but such algorithms a priori only provide sub-optimal solutions. In general, it is difficult to estimate how close a computed solution is to the optimal one. In a series of recent results, several authors have shown that both BP and MP can successfully recover a sparse representation of a signal provided that it is sparse enough, that is to say, if its support (which indicates where the nonzero coefficients are located) is of sufficiently small size. In this paper we define identifiable structures that support signals that can be recovered exactly by ℓ1 minimization (Basis Pursuit) and greedy algorithms. In other words, if the support of a representation belongs to an identifiable structure, then the representation will be recovered by BP and MP. In addition, we obtain that if the output of an arbitrary decomposition algorithm is supported on an identifiable structure, then one can be sure that the representation is optimal within the class of signals supported by the structure. As an application of the theoretical results, we give a detailed study of a family of multichannel dictionaries with a special structure (corresponding to the representation problem), often used in, e.g., under-determined source separation problems or in multichannel signal processing. An identifiable structure for such dictionaries is defined using a generalization of Tropp’s Babel function, which combines the coherence of the mixing matrix with that of the time-domain dictionary, and we obtain explicit structure conditions which ensure that both ℓ1 minimization and a multichannel variant of Matching Pursuit can recover structured multichannel representations. The multichannel Matching Pursuit algorithm is described in detail and we conclude with a discussion of some implications of our results in terms of blind source separation based on sparse decompositions. Communicated by Yuesheng Xu
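For reference, a bare-bones single-channel matching pursuit over a finite dictionary of unit-norm atoms; the multichannel variant analyzed above adds a channel dimension but follows the same greedy template (the dictionary below is an illustrative toy, not from the paper):

```python
# Plain matching pursuit: repeatedly pick the atom most correlated with the
# residual, subtract its contribution, and accumulate the coefficient.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(atoms, y, iters):
    residual = y[:]
    coeffs = [0.0] * len(atoms)
    for _ in range(iters):
        j = max(range(len(atoms)), key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[j], residual)  # atoms assumed unit norm
        coeffs[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

atoms = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
coeffs, residual = matching_pursuit(atoms, [0.6, 0.8], 1)
print(coeffs)  # coefficient on the exactly-matching third atom is ~1.0
```

Unlike OMP, plain MP never re-fits the coefficients on the selected support, so the same atom may be picked again in later iterations; the recovery conditions discussed above say when the greedy choices are nevertheless the correct ones.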

20.
Advances in Data Analysis and Classification - In recent years dictionary learning has become a favorite sparse feature extraction technique. Dictionary learning represents each data point as a sparse...
