Similar Documents
20 similar documents found.
1.
Sparse representation is a recently developed data-representation method that models the coding mechanism of the human cerebral cortex. Owing to its robustness, resistance to interference, interpretability, and discriminative power, it has been widely applied in pattern recognition. Sparse-representation-based classifiers have achieved striking results in face recognition: the training samples are treated as a dictionary, and the sparsest representation of a test sample over that dictionary is sought, i.e., the test sample is reconstructed as a linear combination of as few training samples as possible. The classical sparse-representation classifier, however, ignores the class labels of the training samples, so the selected samples may come from many classes, which hampers classification; group-sparse classifiers were proposed to address this. Group-sparse methods exploit the class similarity of training samples, aiming to represent the test sample with training samples from as few classes as possible, but their drawback is that same-class training samples are either all selected or all discarded. In practice, faces are affected by illumination, expression, pose, and even occlusion, so the relationships among samples are complex; we therefore conclude by introducing a locally weighted group-structured sparse representation method. This method represents the test sample mainly with training samples from classes similar to the test sample and from its neighborhood, reducing interference from irrelevant classes and making the representation sparser and more discriminative.
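The classification scheme described above can be sketched in a few lines of numpy: a greedy OMP solver selects a handful of training samples (dictionary atoms), and the class whose coefficients give the smallest reconstruction residual wins. This is a minimal illustration, not any paper's code; the dictionary `D`, `labels`, and sparsity level `k` below are hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k columns of D
    and least-squares refit the coefficients on the chosen support."""
    residual = y.astype(float).copy()
    support, coef = [], np.zeros(0)
    x = np.zeros(D.shape[1])
    for _ in range(k):
        if np.linalg.norm(residual) < 1e-10:
            break  # already represented exactly
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Sparse-representation classification: keep only each class's
    coefficients in turn; the class with the smallest residual wins."""
    labels = np.asarray(labels)
    x = omp(D, y, k)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```

A group-structured variant would replace the per-atom greedy selection with per-class selection; the residual-comparison step stays the same.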

2.
In recent years, a great deal of research has focused on sparse representation of signals. In particular, a dictionary learning algorithm, K-SVD, was introduced to efficiently learn a redundant dictionary from a set of training signals, and much progress has been made in different aspects. In addition, there is an interesting technique named the extreme learning machine (ELM), a single-layer feed-forward neural network (SLFN) with fast learning speed, good generalization and universal classification capability. In this paper, we propose an optimization of K-SVD, a denoising deep extreme learning machine based on autoencoders (DDELM-AE), for sparse representation. In other words, we obtain a new learned representation through the DDELM-AE, and used as the new "input", it makes the conventional K-SVD algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets; the performance of deep models (i.e., stacked autoencoders) is comparable. The experimental results indicate that our proposed method is very efficient in terms of both speed and accuracy.
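For reference, the core K-SVD step this entry builds on can be stated concretely: each dictionary atom is refit, together with its nonzero codes, via a rank-1 SVD of the approximation error restricted to the signals that actually use that atom. The sketch below is a generic single-atom update on hypothetical data; the DDELM-AE front end is not reproduced.

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """One K-SVD step: refit atom k of dictionary D (and its nonzero codes
    in X) by the best rank-1 approximation of the restricted residual."""
    users = np.nonzero(X[k, :])[0]          # signals that use atom k
    if users.size == 0:
        return D, X                          # unused atom: nothing to refit
    # Error without atom k's contribution, restricted to its users.
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                        # new unit-norm atom
    X[k, users] = s[0] * Vt[0, :]            # matching coefficients
    return D, X
```

A full K-SVD iteration alternates sparse coding (e.g., OMP) with this update applied to every atom in turn.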

3.
The purpose of this paper is to study sparse representations of signals from a general dictionary in a Banach space. For so-called localized frames in Hilbert spaces, the canonical frame coefficients are shown to provide a near sparsest expansion for several sparseness measures. However, for frames which are not localized, this no longer holds true and sparse representations may depend strongly on the choice of the sparseness measure. A large class of admissible sparseness measures is introduced, and we give sufficient conditions for having a unique sparse representation of a signal from the dictionary w.r.t. such a sparseness measure. Moreover, we give sufficient conditions on a signal such that the simple solution of a linear programming problem simultaneously solves all the nonconvex (and generally hard combinatorial) problems of sparsest representation of the signal w.r.t. arbitrary admissible sparseness measures.

4.
Reconstruction from incomplete projection angles is an important problem in CT image reconstruction. We combine the dictionary-learning approach from compressed sensing with the iterative ART reconstruction algorithm. In the dictionary-learning step, the dictionary update uses the K-SVD (K-singular value decomposition) algorithm and sparse coding uses OMP (orthogonal matching pursuit). Simulation experiments on the standard Head phantom verify that dictionary learning is feasible and effective for improving reconstruction quality and signal-to-noise ratio in CT imaging. We also study how the image-patch size and the sliding distance used in dictionary learning affect the reconstructed image.
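The ART iteration that this entry couples with dictionary learning is the classical Kaczmarz sweep: the estimate is projected toward each measurement hyperplane a_i·x = b_i in turn. A minimal numpy sketch under an illustrative dense system matrix; a real CT setup would use a sparse projection matrix and interleave the dictionary-based regularization between sweeps.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): cycle through the
    rows of A, correcting x toward each hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            denom = ai @ ai
            if denom > 0:
                x += relax * (bi - ai @ x) / denom * ai
    return x
```

For a consistent system the sweeps converge to a solution; the relaxation factor `relax` trades convergence speed against noise amplification.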

5.
Sparse-reconstruction-based image inpainting depends on exploiting the global self-similarity of the image and on the choice of the sparse-decomposition dictionary. We therefore propose an inpainting approach based on a global sparse representation model with classified learned dictionaries. The algorithm first clusters the intact regions of the image into several sub-regions with similar geometric structure and uses K-SVD dictionary learning to obtain, for each sub-region, a learned dictionary adapted to its structural features. Then, exploiting image self-similarity, it builds a global sparse expectation-maximization representation model that describes the spatial organization of image patches, and iterates this model to alternately update the patch-organization structure and the estimate of the damaged image until the restoration stabilizes. Experimental results show that the method restores both texture details and structural information well.

6.
Tensor robust principal component analysis separates an unknown low-rank tensor and an unknown sparse tensor from their known sum. Because of its broad application prospects in computer vision and pattern recognition, the problem has recently attracted considerable attention. This paper proposes a new model for tensor robust PCA together with an alternating-direction minimization algorithm to solve it, including two rank-adjustment strategies used during the iterations. The low-rank component is handled by low-rank matrix factorizations of all of its mode unfoldings, and the sparse component by soft-threshold shrinkage. The method applies whether the target low-rank tensor is exactly or only approximately low-rank. We give a partial convergence analysis: every limit point produced by the iterations satisfies the KKT conditions. If the target tensor is exactly low-rank, the output can be corrected at termination via a higher-order singular value decomposition. Numerical experiments on synthetic data and real video data show that the proposed method obtains better results than algorithms of the same type.
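Two of the building blocks named above have simple concrete forms: the soft-threshold (shrinkage) operator applied to the sparse component, and the mode-n unfolding on which the low-rank matrix factorizations act. A numpy sketch with illustrative shapes only:

```python
import numpy as np

def soft_threshold(T, tau):
    """Elementwise soft shrinkage: the proximal operator of tau * ||.||_1,
    used to update the sparse component."""
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)

def unfold(T, mode):
    """Mode-n unfolding: matricize a tensor along one axis, as done when
    imposing low rank on each mode of the low-rank component."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
```

An alternating scheme of the kind described would update the low-rank term via factorizations of `unfold(L, n)` for each mode n, then update the sparse term via `soft_threshold(Y - L, tau)`.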

7.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning and other machine learning techniques useful in many applications including neuroscience and signal processing. In this paper, we present a unified algorithm framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters which is the key to achieving good performance for ADMM. We conduct extensive numerical experiments to compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm provides the ability to explicitly enforce various combinatorial sparsity patterns that, to our knowledge, has not been considered in existing approaches.
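The ADMM splitting pattern underlying frameworks like SeMF can be illustrated on the simplest sparse problem, the lasso: split the variable, solve a ridge subproblem, soft-threshold, and update the dual. This is a generic textbook sketch, not the paper's SeMF algorithm; `rho`, `lam`, and the iteration count are arbitrary choices.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z:
    x-update is a ridge solve, z-update a soft threshold, u a dual ascent."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    solve = np.linalg.inv(AtA + rho * np.eye(n))  # fine for small n
    for _ in range(n_iter):
        x = solve @ (Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z
```

In a SeMF-style framework the soft-threshold step would be replaced by the low-complexity projection onto the chosen structure set, and `rho` would be adapted across iterations as the paper proposes.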

8.
Lin He, Ti-Chiun Chang, Stanley Osher, Tong Fang, Peter Speier, PAMM, 2007, 7(1): 1011207-1011208
Magnetic resonance imaging (MRI) reconstruction from sparsely sampled data has been a difficult problem in the medical imaging field. We approach this problem by formulating a cost functional that includes a constraint term imposed by the raw measurement data in k-space and the L1 norm of a sparse representation of the reconstructed image. The sparse representation is usually realized by total variation regularization and/or a wavelet transform. In our recent work we applied Bregman iteration to minimize this functional to recover finer scales. Here we propose nonlinear inverse scale space methods in addition to the iterative refinement procedure. Numerical results from the two methods are presented and show that the nonlinear inverse scale space method is a more efficient algorithm than the iterative refinement method. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
This article introduces a novel variational model for restoring images degraded by Cauchy noise and/or blurring. The model integrates a nonconvex data-fidelity term with two regularization terms, a sparse representation prior over dictionary learning and total generalized variation (TGV) regularization. The sparse representation prior, exploiting patch information, enables the preservation of fine features and textural patterns while adequately denoising in homogeneous regions and contributing natural visual quality. TGV regularization further assists in effectively denoising in smooth regions while retaining edges. By adopting the penalty method and an alternating minimization approach, we present an efficient iterative algorithm to solve the proposed model. Numerical results establish the superiority of the proposed model over other existing models in regard to visual quality and certain image quality assessments.

10.
This paper is concerned with linear inverse problems where the solution is assumed to have a sparse expansion with respect to several bases or frames. We were mainly motivated by the following two different approaches: (1) Jaillet and Torrésani [F. Jaillet, B. Torrésani, Time–frequency jigsaw puzzle: Adaptive multi-window and multi-layered Gabor expansions, preprint, 2005] and Molla and Torrésani [S. Molla, B. Torrésani, A hybrid audio scheme using hidden Markov models of waveforms, Appl. Comput. Harmon. Anal. (2005), in press] have suggested to represent audio signals by means of at least a wavelet for transient and a local cosine dictionary for tonal components. The suggested technology produces sparse representations of audio signals that are very efficient in audio coding. (2) Also quite recently, Daubechies et al. [I. Daubechies, M. Defrise, C. DeMol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. 57 (2004) 1413–1541] have developed an iterative method for linear inverse problems that promote a sparse representation for the solution to be reconstructed. Here in this paper, we bring both ideas together and construct schemes for linear inverse problems where the solution might then have a sparse representation (we also allow smoothness constraints) with respect to several bases or frames. By a few numerical examples in the field of audio and image processing we show that the resulting method works quite nicely.

11.
Transform-based image codec follows the basic principle: the reconstructed quality is decided by the quantization level. Compressive sensing (CS) breaks the limit and states that sparse signals can be perfectly recovered from incomplete or even corrupted information by solving convex optimization. Under the same acquisition of images, if images are represented sparsely enough, they can be reconstructed more accurately by CS recovery than inverse transform. So, in this paper, we utilize a modified TV operator to enhance image sparse representation and reconstruction accuracy, and we acquire image information from transform coefficients corrupted by quantization noise. We can reconstruct the images by CS recovery instead of inverse transform. A CS-based JPEG decoding scheme is obtained and experimental results demonstrate that the proposed methods significantly improve the PSNR and visual quality of reconstructed images compared with original JPEG decoder.

12.
Image decoding optimization based on compressive sensing

13.
Spectral computed tomography (CT) has great superiority in lesion detection, tissue characterization and material decomposition. To further extend its potential clinical applications, in this work we propose an improved tensor dictionary learning method for low-dose spectral CT reconstruction with a constraint on the ℓ0-norm of the image gradient, termed ℓ0TDL. The ℓ0TDL method inherits the advantages of tensor dictionary learning (TDL) by employing the similarity of spectral CT images. On the other hand, by introducing the ℓ0-norm constraint in the gradient image domain, the proposed method emphasizes spatial sparsity to overcome the weakness of TDL in preserving edge information. The split-Bregman method is employed to solve the proposed model. Both numerical simulations and real mouse studies are performed to evaluate the proposed method. The results show that the proposed ℓ0TDL method outperforms other competing methods, such as total variation (TV) minimization, TV with low rank (TV+LR), and TDL.

14.
Finding a sparse approximation of a signal from an arbitrary dictionary is a very useful tool for solving many problems in signal processing. Several algorithms, such as Basis Pursuit (BP) and Matching Pursuits (MP, also known as greedy algorithms), have been introduced to compute sparse approximations of signals, but such algorithms a priori only provide sub-optimal solutions. In general, it is difficult to estimate how close a computed solution is to the optimal one. In a series of recent results, several authors have shown that both BP and MP can successfully recover a sparse representation of a signal provided that it is sparse enough, that is to say, if its support (which indicates where the nonzero coefficients are located) is of sufficiently small size. In this paper we define identifiable structures that support signals that can be recovered exactly by ℓ1 minimization (Basis Pursuit) and greedy algorithms. In other words, if the support of a representation belongs to an identifiable structure, then the representation will be recovered by BP and MP. In addition, we obtain that if the output of an arbitrary decomposition algorithm is supported on an identifiable structure, then one can be sure that the representation is optimal within the class of signals supported by the structure. As an application of the theoretical results, we give a detailed study of a family of multichannel dictionaries with a special structure (corresponding to the representation problem) often used in, e.g., under-determined source separation problems or in multichannel signal processing. An identifiable structure for such dictionaries is defined using a generalization of Tropp's Babel function, which combines the coherence of the mixing matrix with that of the time-domain dictionary, and we obtain explicit structure conditions which ensure that both ℓ1 minimization and a multichannel variant of Matching Pursuit can recover structured multichannel representations.
The multichannel Matching Pursuit algorithm is described in detail, and we conclude with a discussion of some implications of our results in terms of blind source separation based on sparse decompositions. Communicated by Yuesheng Xu
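Tropp's Babel function mentioned above has a direct numerical form: for each normalized atom, sum the m largest absolute inner products with the other atoms and take the maximum over atoms (m = 1 recovers the mutual coherence). A small numpy sketch on a hypothetical dictionary; the multichannel generalization from the entry is not reproduced.

```python
import numpy as np

def babel(D, m):
    """Babel function mu_1(m): max over atoms of the sum of the m largest
    absolute inner products with the other (column-normalized) atoms."""
    D = D / np.linalg.norm(D, axis=0)      # normalize each atom
    G = np.abs(D.T @ D)                    # absolute Gram matrix
    np.fill_diagonal(G, 0.0)               # exclude self-correlation
    return float(max(np.sort(row)[::-1][:m].sum() for row in G))
```

Conditions such as mu_1(m) + mu_1(m - 1) < 1 are the standard way such coherence quantities certify that BP and MP recover m-sparse representations.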

15.
The goal of this paper is to find a low-rank approximation of a given nth-order tensor. Specifically, we give a computable strategy for estimating the rank of a given tensor, based on approximating the solution to an NP-hard problem. We formulate a sparse optimization problem via l1 regularization to find a low-rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaling algorithm of the proximal alternating minimization and study its theoretical convergence. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In simulation experiments, the performance of our algorithm supports that our method provides an efficient estimate of the number of rank-one tensor components in a given tensor. Moreover, the algorithm is also applied to surveillance videos for low-rank approximation.

16.
We discuss the problem of sparse representation of domains in ℝ^d. We demonstrate how the recently developed general theory of greedy approximation in Banach spaces can be used in this problem. The use of greedy approximation has two important advantages: (1) it works for an arbitrary dictionary of sets used for sparse representation, and (2) the method of approximation does not depend on smoothness properties of the domains and automatically provides a near optimal rate of approximation for domains with different smoothness properties. We also give some lower estimates of the approximation error and discuss a specific greedy algorithm for approximation of convex domains in ℝ^2.

17.
This paper is a follow-up to the author's previous paper on convex optimization. In that paper we began the process of adjusting greedy-type algorithms from nonlinear approximation for finding sparse solutions of convex optimization problems. We modified there the three most popular greedy algorithms in nonlinear approximation in Banach spaces (the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation, and the Weak Relaxed Greedy Algorithm) for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements, and there is increasing interest in building such sparse approximate solutions using different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, meaning that the approximant at the mth iteration equals the sum of the approximant from the previous, (m − 1)th, iteration and one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance, the setting of the problem of expansion of a given element and the setting of the problem of expansion in an optimization problem are very different. However, it turns out that the same technique can be used for solving both problems. We show how the technique developed in nonlinear approximation theory, in particular the greedy expansions technique, can be adjusted for finding a sparse solution of an optimization problem given by an expansion with respect to a given dictionary.

18.
Recently, Field, Lewicki, Olshausen, and Sejnowski have reported efforts to identify the "Sparse Components" of image data. Their empirical findings indicate that such components have elongated shapes and assume a wide range of positions, orientations, and scales. To date, sparse components analysis (SCA) has only been conducted on databases of small (e.g., 16 by 16) image patches, and there seems limited prospect of dramatically increased resolving power. In this paper, we apply mathematical analysis to a specific formalization of SCA using synthetic image models, hoping to gain insight into what might emerge from a higher-resolution SCA based on n by n image patches for large n but a constant field of view. In our formalization, we study a class of objects F in a functional space; they are to be represented by linear combinations of atoms from an overcomplete dictionary, and sparsity is measured by the ℓ^p-norm of the coefficients in the linear combination. We focus on the class F = Star^α of black-and-white images whose black region is a star-shaped set with an α-smooth boundary. We aim to find an optimal dictionary, one achieving the optimal sparsity in an atomic decomposition uniformly over members of the class Star^α. We show that there is a well-defined optimal sparsity of representation of members of Star^α: there are decompositions with finite ℓ^p-norm for p > 2/(α+1) but not for p < 2/(α+1). We show that the optimal degree of sparsity is nearly attained using atomic decompositions based on the wedgelet dictionary. Wedgelets provide a system of representation by elements in a dyadically organized collection, at all scales, locations, orientations, and positions. The atoms of our atomic decomposition contain both coarse-scale dyadic "blobs," which are simply wedgelets from our dictionary, and fine-scale "needles," which are differences of pairs of wedgelets.
The fine-scale atoms used in the adaptive atomic decomposition are highly anisotropic and occupy a range of positions, scales, and locations. This agrees qualitatively with the visual appearance of empirically determined sparse components of natural images. The set has certain definite scaling properties; for example, the number of atoms of length l scales as 1/l, and, when the object has α-smooth boundaries, the number of atoms with anisotropy ≈ A scales as ≈ A^(α−1). August 16, 1999. Date revised: April 24, 2000. Date accepted: April 4, 2000.

19.
The release times and processing times of jobs are said to be agreeable if a job with a larger release time has a processing time no smaller than that of a job with a smaller release time, i.e., if $r_{i}\geq r_{j}$ then $p_{i}\geq p_{j}$. Under this agreeability constraint, we study online single-machine scheduling to minimize the maximum weighted completion time, and online single-machine scheduling to minimize the total weighted completion time; for each problem we design a best-possible online algorithm that is $\frac{\sqrt{5}+1}{2}$-competitive.

20.
One of the open problems in the field of forward uncertainty quantification (UQ) is the ability to form accurate assessments of uncertainty having only incomplete information about the distribution of random inputs. Another challenge is to efficiently make use of limited training data for UQ predictions of complex engineering problems, particularly with high-dimensional random parameters. We address these challenges by combining data-driven polynomial chaos expansions with a recently developed preconditioned sparse approximation approach for UQ problems. The first task in this two-step process is to employ the procedure developed in [1] to construct an "arbitrary" polynomial chaos expansion basis using a finite number of statistical moments of the random inputs. The second step is a novel procedure to effect sparse approximation via l1 minimization in order to quantify the forward uncertainty. To enhance the performance of the preconditioned l1 minimization problem, we sample from the so-called induced distribution, instead of using Monte Carlo (MC) sampling from the original, unknown probability measure. We demonstrate on test problems that induced sampling is a competitive and often better choice compared with sampling from asymptotically optimal measures (such as the equilibrium measure) when we have incomplete information about the distribution. We demonstrate the capacity of the proposed induced sampling algorithm via sparse representation with limited data on test functions, and on a Kirchhoff plate bending problem with random Young's modulus.
