Similar Literature
20 similar records found.
1.
The problem of recovering a low-rank matrix from a set of observations corrupted with gross sparse errors is known as robust principal component analysis (RPCA) and has many applications in computer vision, image processing and web data ranking. It has been shown that under certain conditions, the solution to the NP-hard RPCA problem can be obtained by solving a convex optimization problem, namely the robust principal component pursuit (RPCP). Moreover, if the observed data matrix has also been corrupted by a dense noise matrix in addition to the gross sparse error, then the stable principal component pursuit (SPCP) problem is solved to recover the low-rank matrix. In this paper, we develop efficient algorithms with provable iteration complexity bounds for solving RPCP and SPCP. Numerical results on problems with millions of variables and constraints, such as foreground extraction from surveillance video, shadow and specularity removal from face images, and video denoising from heavily corrupted data, show that our algorithms are competitive with current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.
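As an illustration of the convex route the abstract describes, below is a minimal sketch of a standard augmented-Lagrangian scheme for RPCP (not this paper's algorithm); the defaults for the weight lam and the penalty mu are common choices from the RPCA literature, assumed here for concreteness.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: the prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpcp(M, lam=None, mu=None, iters=500, tol=1e-7):
    """Sketch of an augmented-Lagrangian method for
       min ||L||_* + lam * ||S||_1   s.t.   L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = soft(M - L + Y / mu, lam / mu)      # sparse update
        R = M - L - S                           # primal residual
        Y = Y + mu * R                          # dual ascent step
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```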

2.
The low-rank matrix recovery problem, which is of great importance in fields such as image processing and signal and data analysis, has been studied extensively. Within the framework of the alternating direction method, this paper applies a nonmonotone technique and proposes a new algorithm for the low-rank matrix recovery problem. At each iteration, the algorithm first updates the two blocks of variables of the low-rank part simultaneously with a single variable-step-size gradient step, and then updates the variables of the sparse part with the nonmonotone technique. Under certain assumptions, this paper proves that …

3.
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.

4.
Recovering a low-rank and a sparse matrix from a given matrix arises in many applications, such as image processing and video background subtraction. The 3-block alternating direction method of multipliers (ADMM) has been applied successfully to solve convex problems with three blocks of variables. However, the existing sufficient conditions that guarantee the convergence of the 3-block ADMM usually require the penalty parameter $\gamma$ to satisfy a certain bound, which may affect its performance on large-scale problems in practice. In this paper, we propose a 3-block ADMM to recover the low-rank and sparse matrix from noisy observations. In theory, we prove that the 3-block ADMM converges when the penalty parameters satisfy a certain condition, and that the objective function value sequence generated by the 3-block ADMM converges to the optimal value. Numerical experiments verify that the proposed method achieves higher performance than existing methods in terms of both efficiency and accuracy.
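A minimal sketch of one plausible 3-block splitting for the noisy model, min ||L||_* + lam||S||_1 + (rho/2)||Z||_F^2 subject to L + S + Z = M; the parameter values are illustrative, and the condition the paper derives on the penalty $\gamma$ is not checked here.

```python
import numpy as np

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def admm3(M, lam=0.1, rho=1.0, gamma=1.0, iters=300):
    """3-block ADMM sketch for
       min ||L||_* + lam||S||_1 + (rho/2)||Z||_F^2   s.t.   L + S + Z = M."""
    L = np.zeros_like(M); S = np.zeros_like(M); Z = np.zeros_like(M)
    Y = np.zeros_like(M)                          # multiplier for L + S + Z = M
    for _ in range(iters):
        L = svt(M - S - Z - Y / gamma, 1.0 / gamma)    # nuclear-norm block
        S = soft(M - L - Z - Y / gamma, lam / gamma)   # l1 block
        Z = gamma * (M - L - S - Y / gamma) / (rho + gamma)  # quadratic block
        Y = Y + gamma * (L + S + Z - M)                # dual update
    return L, S, Z
```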

5.
We study the problem of estimating multiple predictive functions from a dictionary of basis functions in the nonparametric regression setting. Our estimation scheme assumes that each predictive function can be estimated in the form of a linear combination of the basis functions. By assuming that the coefficient matrix admits a sparse low-rank structure, we formulate the function estimation problem as a convex program regularized by the trace norm and the \(\ell_1\)-norm simultaneously. We propose to solve the convex program using the accelerated gradient (AG) method; we also develop efficient algorithms to solve the key components in AG. In addition, we conduct theoretical analysis on the proposed function estimation scheme: we derive a key property of the optimal solution to the convex program; based on an assumption on the basis functions, we establish a performance bound of the proposed function estimation scheme (via the composite regularization). Simulation studies demonstrate the effectiveness and efficiency of the proposed algorithms.
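As a sketch of one building block of such a scheme, here is an accelerated proximal gradient (FISTA-type) method for the trace-norm part alone, min_W 0.5||XW − Y||_F^2 + lam||W||_*; the paper's composite case additionally handles the \(\ell_1\) term, which this sketch omits.

```python
import numpy as np

def svt(Z, tau):
    """Prox of tau * trace norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fista_trace(X, Y, lam, iters=200):
    """Accelerated proximal gradient for min_W 0.5||XW - Y||_F^2 + lam||W||_*."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    V, t = W.copy(), 1.0
    Lc = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the smooth gradient
    for _ in range(iters):
        G = X.T @ (X @ V - Y)            # gradient at the extrapolated point
        W_new = svt(V - G / Lc, lam / Lc)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        V = W_new + ((t - 1) / t_new) * (W_new - W)   # Nesterov extrapolation
        W, t = W_new, t_new
    return W
```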

6.
Tensor robust principal component analysis separates an unknown low-rank tensor and a sparse tensor from their observed sum. Owing to its broad application prospects in computer vision and pattern recognition, the problem has recently become a focus of research. This paper proposes a new model for tensor robust principal component analysis and gives an alternating minimization algorithm for solving it, together with two rank-adjustment strategies used during the iterations. For the low-rank component, low-rank matrix factorizations of all of its mode-n unfoldings are employed; for the sparse component, a soft-thresholding shrinkage strategy is adopted. The proposed method applies whether the target low-rank tensor is exactly or only approximately low-rank. A partial convergence analysis is given: every accumulation point generated by the iterations satisfies the KKT conditions. If the target low-rank tensor is exactly low-rank, the output can be refined at termination by a correction step based on the higher-order singular value decomposition. Numerical experiments on synthetic data and real video data show that the proposed method obtains better results than algorithms of the same type.
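Two of the primitives mentioned above, sketched for a generic NumPy tensor: the mode-n unfoldings on which the low-rank factorizations act, and the soft-thresholding shrinkage for the sparse component (the rank-adjustment strategies and the alternating loop itself are omitted).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows indexed by the chosen mode, columns by the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def soft(X, tau):
    """Soft-thresholding shrinkage for the sparse component."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def truncate_rank(M, r):
    """Best rank-r approximation of one unfolding via the truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# round-trip sanity check
T = np.random.randn(4, 5, 6)
assert np.allclose(fold(unfold(T, 1), 1, T.shape), T)
```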

7.
We describe a branch-and-bound (b&b) method aimed at searching for an exact solution of the fundamental problem of decomposing a matrix into the sum of a sparse matrix and a low-rank matrix. Previous heuristic techniques employed convex and nonconvex optimization. We leverage and extend those ideas, within a spatial b&b framework, aimed at exact global optimization. Our work may serve to (i) gather evidence to assess the true quality of the previous heuristic techniques, and (ii) provide software to routinely calculate global optima or at least better solutions for moderate-sized instances coming from applications.

8.
Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals in 1-bit CS is not available, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. We first show that a necessary assumption (one that has been overlooked in the literature) should be made for some existing theories and discussions of 1-bit CS. Without such an assumption, the solution found by some existing decoding algorithms might be inconsistent with the 1-bit measurements. This motivates us to pursue a new direction and develop uniform and nonuniform recovery theories for 1-bit CS with a new decoding method that always generates a solution consistent with the 1-bit measurements. We focus on an extreme case of 1-bit CS, in which the measurements capture only the sign of the product of a sensing matrix and a signal. We show that the 1-bit CS model can be reformulated equivalently as an ℓ0-minimization problem with linear constraints. This reformulation naturally leads to a new linear-program-based decoding method, referred to as the 1-bit basis pursuit, which is remarkably different from existing formulations. It turns out that the uniqueness condition for the solution of the 1-bit basis pursuit yields the so-called restricted range space property (RRSP) of the transposed sensing matrix. This concept provides a basis for developing sign recovery conditions for sparse signals through 1-bit measurements. We prove that if the sign of a sparse signal can be exactly recovered from 1-bit measurements with 1-bit basis pursuit, then the sensing matrix must admit a certain RRSP, and that if the sensing matrix admits a slightly enhanced RRSP, then the sign of a k-sparse signal can be exactly recovered with 1-bit basis pursuit.
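To make the linear-programming route concrete, here is a simplified sign-consistent LP: minimize ||x||_1 subject to y_i (Ax)_i >= 1. The ">= 1" normalization is a common stand-in assumed here for illustration, not the paper's exact 1-bit basis pursuit constraints, and it assumes no zero measurements; the ℓ1 objective is linearized by the standard split x = u − v with u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

def one_bit_lp(A, y):
    """Sketch of a 1-bit decoding LP:  min ||x||_1  s.t.  y_i (Ax)_i >= 1."""
    m, n = A.shape
    YA = y[:, None] * A                 # rows of A scaled by the sign bits
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v) = ||x||_1
    A_ub = np.hstack([-YA, YA])         # -YA u + YA v <= -1  <=>  YA (u - v) >= 1
    b_ub = -np.ones(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# toy example: recover the sign pattern of a 2-sparse signal
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[3, 11]] = [2.0, -1.5]
x_hat = one_bit_lp(A, np.sign(A @ x_true))
```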

9.
A computationally efficient method for recovering sparse signals from a series of noisy observations, known as the problem of compressed sensing (CS), is presented. The theory of CS usually leads to a constrained convex minimization problem. In this work, an alternative outlook is proposed. Instead of solving the CS problem as an optimization problem, it is suggested to transform the optimization problem into a convex feasibility problem (CFP) and solve it using feasibility-seeking sequential and simultaneous subgradient projection methods, which are iterative, fast, robust and convergent schemes for solving CFPs. As opposed to some commonly used CS algorithms, such as Bayesian CS and Gradient Projection for Sparse Reconstruction, which become inefficient as the problem dimension and sparseness degree increase, the proposed methods exhibit robustness with respect to these parameters. Moreover, it is shown that the CFP-based projection methods are superior to some of the state-of-the-art methods in recovering the signal's support. Numerical experiments show that the CFP-based projection methods are viable for solving large-scale CS problems with compressible signals.
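A minimal sketch of the feasibility-seeking idea: cyclically project onto the slabs {x : |a_i·x − b_i| <= eps} defined by the noisy measurements. The sparsity-promoting subgradient steps of the full method are omitted, so this shows only the projection machinery.

```python
import numpy as np

def project_slab(x, a, b, eps):
    """Orthogonal projection of x onto the slab {z : |a.z - b| <= eps}."""
    r = a @ x - b
    if r > eps:
        return x - (r - eps) / (a @ a) * a
    if r < -eps:
        return x - (r + eps) / (a @ a) * a
    return x                                  # already inside the slab

def cyclic_projections(A, b, eps, sweeps=100):
    """Feasibility-seeking sketch: sweep cyclically over the measurement slabs."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = project_slab(x, a_i, b_i, eps)
    return x
```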

10.
11.
SUN Qing-qing, WANG Chuan-long. 《计算数学》 (Mathematica Numerica Sinica), 2021, 43(4): 516-528
For a nonconvex optimization model of the low-rank and sparse matrix recovery problem, this paper proposes a fast nonmonotone alternating minimization method. The main idea is to update the low-rank part by alternating minimization and the sparse part by a nonmonotone line search technique, iterating the two updates in turn. The nonmonotone line search relaxes the single-step descent requirement to descent over multiple steps, which improves computational efficiency. A convergence analysis of the new algorithm is also given. Finally, numerical comparisons show that the nonmonotone alternating minimization method for matrix recovery is more effective than the original monotone methods.
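A sketch of the nonmonotone acceptance idea on the sparse block: a proximal-gradient trial step for f(S) = 0.5||R − S||_F^2 + lam||S||_1 (with R = M − L the current residual) is accepted as long as the objective does not exceed the maximum over the last few iterations, rather than the previous value alone. The model and variable names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def soft(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def nonmonotone_sparse_step(S, R, lam, f_hist, window=5, t=1.0, shrink=0.5):
    """One nonmonotone proximal-gradient update of the sparse block for
    f(S) = 0.5||R - S||_F^2 + lam*||S||_1, with backtracking on the step t."""
    def F(S_):
        return 0.5 * np.linalg.norm(R - S_) ** 2 + lam * np.abs(S_).sum()
    ref = max(f_hist[-window:])        # multi-step (nonmonotone) reference value
    while True:
        S_trial = soft(S - t * (S - R), t * lam)  # grad of smooth part is S - R
        f_trial = F(S_trial)
        if f_trial <= ref or t < 1e-12:
            return S_trial, f_trial
        t *= shrink                    # backtrack until the relaxed test passes
```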

12.
This paper concerns solving the sparse deconvolution and demixing problem using ℓ1,2-minimization. We show that under a certain structured random model, robust and stable recovery is possible. The results extend those of Ling and Strohmer (Inverse Probl. 31, 115002, 2015) and, in particular, theoretically explain certain experimental findings from that paper. Our results apply not only to the deconvolution and demixing problem, but to the recovery of column-sparse matrices in general.
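For reference, the proximal operator of the ℓ1,2 norm (the sum of column ℓ2 norms) used in such column-sparse recovery problems shrinks each column toward zero and zeroes out columns with small norm; a short sketch:

```python
import numpy as np

def prox_l12_columns(X, tau):
    """Prox of tau * sum_j ||X[:, j]||_2: shrink each column toward zero,
    zeroing columns whose norm is below tau (column-sparsity)."""
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```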

13.
This note presents a unified analysis of the recovery of simple objects from random linear measurements. When the linear functionals are Gaussian, we show that an s-sparse vector in ${\mathbb{R}^n}$ can be efficiently recovered from 2s log n measurements with high probability and a rank r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability. For sparse vectors, this is within an additive factor of the best known nonasymptotic bounds. For low-rank matrices, this matches the best known bounds. We present a parallel analysis for block-sparse vectors obtaining similarly tight bounds. In the case of sparse and block-sparse signals, we additionally demonstrate that our bounds are only slightly weakened when the measurement map is a random sign matrix. Our results are based on analyzing a particular dual point which certifies optimality conditions of the respective convex programming problem. Our calculations rely only on standard large deviation inequalities and our analysis is self-contained.

14.
We discuss a generalization of the Cohn–Umans method, a potent technique developed for studying the bilinear complexity of matrix multiplication by embedding matrices into an appropriate group algebra. We investigate how the Cohn–Umans method may be used for bilinear operations other than matrix multiplication, with algebras other than group algebras, and we relate it to Strassen's tensor rank approach, the traditional framework for investigating bilinear complexity. To demonstrate the utility of the generalized method, we apply it to find the fastest algorithms for forming structured matrix–vector products, the basic operation underlying iterative algorithms for structured matrices. The structures we study include Toeplitz, Hankel, circulant, symmetric, skew-symmetric, f-circulant, block Toeplitz–Toeplitz block, triangular Toeplitz, Toeplitz-plus-Hankel, and sparse/banded/triangular matrices. Except for the case of skew-symmetric matrices, for which we have only upper bounds, the algorithms derived using the generalized Cohn–Umans method are in all other instances the fastest possible, in the sense of having minimum bilinear complexity. We also apply this framework to a few other bilinear operations, including matrix–matrix products, the commutator, and simultaneous matrix products, and briefly discuss the relation between tensor nuclear norm and numerical stability.
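As a concrete instance of the structured products studied here, a circulant matrix–vector product can be formed in O(n log n) via the classical FFT diagonalization (shown only to illustrate the kind of bilinear savings at stake, not the paper's construction):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x in O(n log n),
    using the diagonalization of circulants by the discrete Fourier transform."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

# sanity check against the dense product
n = 6
c = np.random.randn(n)
x = np.random.randn(n)
C = np.array([np.roll(c, k) for k in range(n)]).T   # dense circulant, column k = roll(c, k)
assert np.allclose(C @ x, circulant_matvec(c, x))
```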

15.
We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to obtain a perfect recovery is more restrictive than the corresponding condition for the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representation, such as interior point methods (IPMs), are expected to have computational advantages for the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as \(\ell_1\_\ell_s\) and Mirror Prox regardless of the sparsity level or problem size.
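The computational appeal of the Kronecker structure comes from the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ) (with column-major vec), so the sensing operator can be applied without ever forming the large Kronecker matrix; a quick numerical check:

```python
import numpy as np

# Kronecker-structured sensing: applying A ⊗ B to vec(X) via two small products.
A = np.random.randn(4, 5)
B = np.random.randn(3, 6)
X = np.random.randn(6, 5)                      # vec uses column-major order
lhs = np.kron(A, B) @ X.flatten(order="F")     # explicit (large) operator
rhs = (B @ X @ A.T).flatten(order="F")         # implicit (fast) application
assert np.allclose(lhs, rhs)
```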

16.
Many applications in data analysis rely on the decomposition of a data matrix into a low-rank and a sparse component. Existing methods that tackle this task use the nuclear norm and \(\ell_1\)-cost functions as convex relaxations of the rank constraint and the sparsity measure, respectively, or employ thresholding techniques. We propose a method that allows for reconstructing and tracking a subspace of upper-bounded dimension from incomplete and corrupted observations. It does not require any a priori information about the number of outliers. The core of our algorithm is an intrinsic Conjugate Gradient method on the set of orthogonal projection matrices, the so-called Grassmannian. Non-convex sparsity measures are used for outlier detection, which leads to improved performance in terms of robustly recovering and tracking the low-rank matrix. In particular, our approach can cope with more outliers and with an underlying matrix of higher rank than other state-of-the-art methods.

17.
Compressed sensing (CS) is a new framework for information acquisition and processing. By exploiting the intrinsic sparsity or compressibility of a signal, the original signal can be reconstructed from a small number of linear, non-adaptive measurements by solving a nonlinear optimization problem. A block-sparse signal is a signal with block structure, i.e., its nonzero entries occur in blocks. Inspired by the ℓ1−ℓ2 norm minimization method proposed by YIN Peng-hang, LOU Yi-fei, HE Qi, et al., this paper extends the ℓ1−ℓ2-based sparse reconstruction algorithm to the block-sparse model. It proves the relevant properties of the ℓ1−ℓ2 norm under the block-sparse model, establishes sufficient conditions for the exact reconstruction of block-sparse signals based on the ℓ1−ℓ2 norm, and derives iterative methods for solving the block-sparse ℓ1−ℓ2 model via the DCA (difference of convex functions algorithm) and the ADMM (alternating direction method of multipliers). Numerical experiments show that the ℓ1−ℓ2-based block-sparse reconstruction algorithm achieves a higher reconstruction success rate than other block-sparse reconstruction algorithms.
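A sketch of the DCA idea for the plain (non-block) ℓ1−ℓ2 penalty, which the paper extends to blocks: each outer iteration linearizes the concave part −lam||x||_2 at the current point and solves the resulting convex ℓ1 subproblem with proximal gradient steps. The parameters are illustrative; the block version would replace the entrywise shrinkage with block-wise group shrinkage.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_minus_l2_dca(A, b, lam, outer=20, inner=200):
    """DCA sketch for  min 0.5||Ax - b||^2 + lam*(||x||_1 - ||x||_2)."""
    x = np.zeros(A.shape[1])
    Lc = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the quadratic
    for _ in range(outer):
        nx = np.linalg.norm(x)
        g = x / nx if nx > 0 else np.zeros_like(x)   # subgradient of ||x||_2
        for _ in range(inner):              # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - lam * g
            x = soft(x - grad / Lc, lam / Lc)
    return x
```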

18.
Important parts of adaptive wavelet methods are well-conditioned wavelet stiffness matrices and an efficient approximate multiplication of quasi-sparse stiffness matrices with vectors in wavelet coordinates. Therefore it is useful to develop a well-conditioned wavelet basis with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in each column is bounded by a constant. Consequently, the stiffness matrix corresponding to the n-dimensional Laplacian in the tensor product wavelet basis is also sparse. Then a matrix–vector multiplication can be performed exactly with linear complexity. In this paper, we construct a wavelet basis based on Hermite cubic splines with respect to which both the mass matrix and the stiffness matrix corresponding to a one-dimensional Poisson equation are sparse. Moreover, a proposed basis is well-conditioned on low decomposition levels. Small condition numbers for low decomposition levels and a sparse structure of stiffness matrices are kept for any well-conditioned second order partial differential equations with constant coefficients; furthermore, they are independent of the space dimension.

19.
We present a new computational approach to approximating a large, noisy data table by a low-rank matrix with sparse singular vectors. The approximation is obtained from thresholded subspace iterations that produce the singular vectors simultaneously, rather than successively as in competing proposals. We introduce novel ways to estimate thresholding parameters, which obviate the need for computationally expensive cross-validation. We also introduce a way to sparsely initialize the algorithm for computational savings that allow our algorithm to outperform the vanilla singular value decomposition (SVD) on the full data table when the signal is sparse. A comparison with two existing sparse SVD methods suggests that our algorithm is computationally always faster and statistically always at least comparable to the better of the two competing algorithms. Supplementary materials for the article are available in an online appendix. An R package ssvd implementing the algorithms introduced in this article is available on CRAN.
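A single-vector sketch of the thresholded iteration idea (the paper's algorithm produces several sparse singular vectors simultaneously and estimates the thresholds automatically, whereas this sketch takes tu and tv as inputs):

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_rank1(X, tu, tv, iters=100):
    """Thresholded power iteration for a rank-1 sparse SVD: alternate
    matrix-vector products with soft thresholding of each singular vector."""
    v = np.linalg.svd(X)[2][0]               # warm start from the dense SVD
    for _ in range(iters):
        u = soft(X @ v, tu)
        u /= max(np.linalg.norm(u), 1e-12)   # renormalize the left vector
        v = soft(X.T @ u, tv)
        v /= max(np.linalg.norm(v), 1e-12)   # renormalize the right vector
    return u, v
```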

20.
We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. This problem is relevant in machine learning, statistics and signal processing. It is well known that a linear regression can benefit from knowledge that the underlying regression vector is sparse. The combinatorial problem of selecting the nonzero components of this vector can be "relaxed" by regularizing the squared error with a convex penalty function like the ℓ1 norm. However, in many applications, additional conditions on the structure of the regression vector and its sparsity pattern are available. Incorporating this information into the learning method may lead to a significant decrease of the estimation error. In this paper, we present a family of convex penalty functions, which encode prior knowledge on the structure of the vector formed by the absolute values of the regression coefficients. This family subsumes the ℓ1 norm and is flexible enough to include different models of sparsity patterns, which are of practical and theoretical importance. We establish the basic properties of these penalty functions and discuss some examples where they can be computed explicitly. Moreover, we present a convergent optimization algorithm for solving regularized least squares with these penalty functions. Numerical simulations highlight the benefit of structured sparsity and the advantage offered by our approach over the Lasso method and other related methods.
