Similar Articles
20 similar documents found (search time: 62 ms)
1.
We consider a class of optimization problems for sparse signal reconstruction which arise in the field of compressed sensing (CS). A plethora of approaches and solvers exist for such problems, for example GPSR, FPC_AS, SPGL1, NestA, $\mathbf{\ell _1\_\ell _s}$, and PDCO, to mention a few. CS applications lead to very well conditioned optimization problems, which can therefore be solved easily by simple first-order methods. Interior point methods (IPMs) rely on the Newton method and hence use second-order information. They have numerous advantageous features and one clear drawback: being second-order approaches, they need to solve linear equation systems, an operation which has (in the general dense case) ${\mathcal {O}}(n^3)$ computational complexity. Attempts have been made to specialize IPMs to sparse reconstruction problems, and they have led to interesting developments implemented in the $\mathbf{\ell _1\_\ell _s}$ and PDCO software packages. We go a few steps further. First, we use the matrix-free IPM, an approach which redesigns the IPM to avoid the need to explicitly formulate (and store) the Newton equation systems. Secondly, we exploit the special features of the signal processing matrices within the matrix-free IPM. Two such features are of particular interest: the excellent conditioning of these matrices and the ability to perform inexpensive (low-complexity) matrix–vector multiplications with them. Computational experience with large-scale one-dimensional signals confirms that the new approach is efficient and offers an attractive alternative to other state-of-the-art solvers.

2.
This article presents new results concerning the recovery of a signal from magnitude-only measurements, where the signal is not sparse in an orthonormal basis but in a redundant dictionary, which we call phase retrieval with redundant dictionary for short. To solve this phaseless problem, we analyze the \( \ell _1 \)-analysis model. First, we investigate the noiseless case, presenting a null space property of the measurement matrix under which the \( \ell _1 \)-analysis model provides exact recovery. Second, we introduce a new property (S-DRIP) of the measurement matrix. By solving the \( \ell _1 \)-analysis model, we prove that this property guarantees stable recovery of real signals that are nearly sparse in overcomplete dictionaries.

3.
Many applications in data analysis rely on the decomposition of a data matrix into a low-rank and a sparse component. Existing methods that tackle this task use the nuclear norm and \(\ell _1\)-cost functions as convex relaxations of the rank constraint and the sparsity measure, respectively, or employ thresholding techniques. We propose a method that allows for reconstructing and tracking a subspace of upper-bounded dimension from incomplete and corrupted observations. It does not require any a priori information about the number of outliers. The core of our algorithm is an intrinsic Conjugate Gradient method on the set of orthogonal projection matrices, the so-called Grassmannian. Non-convex sparsity measures are used for outlier detection, which leads to improved performance in terms of robustly recovering and tracking the low-rank matrix. In particular, our approach can cope with more outliers and with an underlying matrix of higher rank than other state-of-the-art methods.
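For contrast with the Grassmannian approach, a bare-bones version of the thresholding techniques mentioned above alternates singular-value thresholding for the low-rank part with entrywise soft thresholding for the sparse part. This is a heuristic sketch, not the paper's method; the function name and the thresholds `lam`/`mu` are illustrative and would need tuning in practice:

```python
import numpy as np

def lowrank_sparse_split(M, lam, mu, iters=100):
    """Alternate two proximal (thresholding) steps:
    L = SVT_mu(M - S): soft-threshold singular values (low-rank part),
    S = soft_lam(M - L): soft-threshold entries (sparse part)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sv - mu, 0.0)) @ Vt   # singular-value thresholding
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # entrywise shrinkage
    return L, S
```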

4.
The aim of this paper is to study the stability of \(\ell _1\) minimization for compressive phase retrieval and to extend instance-optimality in compressed sensing to the real phase retrieval setting. We first show that \(m={\mathcal {O}}(k\log (N/k))\) measurements are enough to guarantee that \(\ell _1\) minimization recovers k-sparse signals stably, provided the measurement matrix A satisfies the strong RIP property. We then investigate phaseless instance-optimality, presenting a null space property of the measurement matrix A under which there exists a decoder \(\Delta \) such that the phaseless instance-optimality holds. We use this result to study phaseless instance-optimality for the \(\ell _1\) norm. This builds a parallel between compressive phase retrieval and classical compressive sensing.

5.
Precision matrix estimation is an important problem in statistical data analysis. This paper proposes a sparse precision matrix estimation approach based on the CLIME estimator and an efficient algorithm, GISSρ, that was originally proposed for $\ell_1$ sparse signal recovery in compressed sensing. The asymptotic convergence rate for sparse precision matrix estimation is analyzed with respect to the new stopping criteria of the proposed GISSρ algorithm. Finally, a numerical comparison of GISSρ with other sparse recovery algorithms, such as ADMM and HTP, in three settings of precision matrix estimation is provided, and the numerical results show the advantages of the proposed algorithm.

6.
Minimization with orthogonality constraints (e.g., $X^\top X = I$ ) and/or spherical constraints (e.g., $\Vert x\Vert _2 = 1$ ) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but also numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform—a Crank-Nicolson-like update scheme—to preserve the constraints and, based on it, develop curvilinear search algorithms with lower flop counts than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the max-cut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 min on a typical laptop.
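The Cayley update at the heart of the curvilinear search can be sketched in a few lines; by construction it keeps the iterate exactly on the Stiefel manifold. This is a sketch of the constraint-preserving step only (step size `tau` fixed here; the skew-symmetric `W` follows the standard canonical form, and names are illustrative):

```python
import numpy as np

def cayley_step(X, G, tau):
    """One constraint-preserving curvilinear step on {X : X^T X = I}.
    W = G X^T - X G^T is skew-symmetric, so the Cayley transform
    (I + tau/2 W)^{-1} (I - tau/2 W) is orthogonal and the update
    stays exactly on the Stiefel manifold."""
    W = G @ X.T - X @ G.T
    I = np.eye(X.shape[0])
    return np.linalg.solve(I + (tau / 2.0) * W, (I - (tau / 2.0) * W) @ X)
```

In a full algorithm, `tau` would be chosen by a curvilinear (monotone or nonmonotone) line search along this Cayley curve.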

7.
A Frisch-Newton Algorithm for Sparse Quantile Regression
Recent experience has shown that interior-point methods using a log barrier approach are far superior to classical simplex methods for computing solutions to large parametric quantile regression problems. In many large empirical applications, the design matrix has a very sparse structure. A typical example is the classical fixed-effect model for panel data, where the parametric dimension of the model can be quite large but the number of non-zero elements is quite small. Adopting recent developments in sparse linear algebra, we introduce a modified version of the Frisch-Newton algorithm for quantile regression described in Portnoy and Koenker [28]. The new algorithm substantially reduces the storage (memory) requirements and increases computational speed. The modified algorithm also facilitates the development of nonparametric quantile regression methods. The pseudo design matrices employed in nonparametric quantile regression smoothing are inherently sparse in both the fidelity and roughness penalty components. Exploiting the sparse structure of these problems opens up a whole range of new possibilities for multivariate smoothing on large data sets via ANOVA-type decomposition and partial linear models.
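To make the underlying optimization problem concrete: quantile regression minimizes an asymmetric check loss, which can be written as a linear program. The dense LP sketch below uses a generic solver, not the sparse Frisch-Newton interior-point code of the paper; the function name is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, tau):
    """Quantile regression as an LP (dense sketch):
    min_{b,u,v}  tau*1'u + (1-tau)*1'v
    s.t.         X b + u - v = y,   u >= 0, v >= 0,
    where u and v are the positive/negative parts of the residual."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]
```

For tau = 0.5 this reduces to median (least absolute deviations) regression; the sparse structure the abstract exploits lives in `X` and `A_eq`.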

8.
A computationally efficient method for recovering sparse signals from a series of noisy observations, known as the problem of compressed sensing (CS), is presented. The theory of CS usually leads to a constrained convex minimization problem. In this work, an alternative outlook is proposed. Instead of solving the CS problem as an optimization problem, it is suggested to transform the optimization problem into a convex feasibility problem (CFP) and solve it using feasibility-seeking sequential and simultaneous subgradient projection methods, which are iterative, fast, robust and convergent schemes for solving CFPs. As opposed to some of the commonly used CS algorithms, such as Bayesian CS and Gradient Projection for Sparse Reconstruction, which become inefficient as the problem dimension and sparseness degree increase, the proposed methods exhibit robustness with respect to these parameters. Moreover, it is shown that the CFP-based projection methods are superior to some of the state-of-the-art methods in recovering the signal’s support. Numerical experiments show that the CFP-based projection methods are viable for solving large-scale CS problems with compressible signals.
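The simplest instance of a feasibility-seeking sequential projection method is Kaczmarz-style cyclic projection onto the hyperplanes defined by each measurement. This toy sketch (illustrative names; the paper's subgradient projection methods handle more general convex sets) conveys the iteration:

```python
import numpy as np

def seq_projection(A, y, iters=200):
    """Cyclically project onto the hyperplanes a_i^T x = y_i
    (Kaczmarz-type feasibility seeking for the CFP viewpoint)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a, b in zip(A, y):
            x = x + (b - a @ x) / (a @ a) * a   # orthogonal projection step
    return x
```

For a consistent system the iterates converge to a point in the intersection of all hyperplanes; replacing hyperplanes with half-spaces or sublevel sets of convex functions yields the subgradient projection schemes of the abstract.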

9.
In this paper, we specialize Gill et al.'s projected Newton barrier method to solve a large-scale linear program with dynamic (i.e., multistage) Leontief-type constraints. We propose an efficient and stable method for solving the least-squares subproblems, the crucial part of the barrier method. The key step is to exploit the special structure of the constraint matrix and reduce the matrix of the normal equations for the least-squares problem to a banded matrix. By comparing the average-case operation count of this specialized barrier method with that of the sparse simplex method, we show that this method runs at least a factor of O(T) faster than the simplex method for this type of linear program, where T is the number of time periods (i.e., stages).

10.
We investigate the problem of robust matrix completion with a fraction of the observations corrupted by sparse outlier noise. We propose an algorithmic framework based on the ADMM algorithm for a non-convex optimization problem whose objective function consists of an $\ell_1$-norm data fidelity term and a rank constraint. To reduce the computational cost per iteration, two inexact schemes are developed to replace the most time-consuming step in the generic ADMM algorithm. The resulting algorithms remarkably outperform the existing solvers for robust matrix completion with outlier noise. When the noise is severe and the underlying matrix is ill-conditioned, the proposed algorithms are faster and give more accurate solutions than state-of-the-art robust matrix completion approaches.

11.
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector $x \in {\mathbb R}^n$ from the signs of $O(s \log^2(n/s))$ random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.
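A hedged sketch of the simple linear program described above, in split-variable form x = u − v (the normalization constraint Σᵢ yᵢ⟨aᵢ, x⟩ = m follows the Plan–Vershynin formulation; the solver choice and names are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def one_bit_cs(A, y):
    """One-bit CS recovery sketch:
    min ||x||_1  s.t.  y_i <a_i, x> >= 0 for all i,
                       sum_i y_i <a_i, x> = m.
    Split x = u - v with u, v >= 0 to obtain a standard-form LP."""
    m, n = A.shape
    Ay = y[:, None] * A                        # rows y_i * a_i
    c = np.ones(2 * n)                         # ||x||_1 = 1'u + 1'v at optimum
    A_ub = np.hstack([-Ay, Ay])                # -y_i <a_i, x> <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([Ay.sum(0), -Ay.sum(0)])[None, :]
    b_eq = np.array([float(m)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]
```

Since the measurements are signs, the magnitude of x is not identifiable; only the direction of the recovered vector is meaningful.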

12.
Optimizing the acquisition matrix is useful for compressed sensing of signals that are sparse in overcomplete dictionaries, because the acquisition matrix can be adapted to the particular correlations of the dictionary atoms. In this paper a novel formulation of the optimization problem is proposed, in the form of a rank-constrained nearest correlation matrix problem. Furthermore, improvements for three existing optimization algorithms are introduced, which are shown to be particular instances of the proposed formulation. Simulation results show notable improvements and superior robustness in sparse signal recovery.

13.
We analyze a multiple-input multiple-output (MIMO) radar model and provide recovery results for a compressed sensing (CS) approach. In MIMO radar, different pulses are emitted by several transmitters and the echoes are recorded at several receiver nodes. Under reasonable assumptions, the transformation from emitted pulses to the received echoes can approximately be regarded as linear. For the considered model, and many radar tasks in general, sparsity of targets within the considered angle-range-Doppler domain is a natural assumption. Therefore, it is possible to apply methods from CS in order to reconstruct the parameters of the targets. Assuming Gaussian random pulses, the resulting measurement matrix becomes a highly structured random matrix. Our first main result provides an estimate for the well-known restricted isometry property (RIP) ensuring stable and robust recovery. We require more measurements than standard results from CS, such as those for Gaussian random measurements. Nevertheless, we show that due to the special structure of the considered measurement matrix, our RIP result is in fact optimal (up to possibly logarithmic factors). Our further two main results on nonuniform recovery (i.e., for a fixed sparse target scene) reveal how the fine structure of the support set—not only its size—affects the (nonuniform) recovery performance. We show that for certain “balanced” support sets, reconstruction with essentially the optimal number of measurements is possible. Indeed, we introduce a parameter measuring the well-behavedness of the support set and recover standard results from CS for near-optimal parameter choices. We prove recovery results for both perfect recovery of the support set in the case of exactly sparse vectors and an \(\ell _2\)-norm approximation result for reconstruction under sparsity defect. Our analysis complements earlier work by Strohmer and Friedlander and deepens the understanding of the considered MIMO radar model. Thereby—and apparently for the first time in CS theory—we prove theoretical results in which the difference between nonuniform and uniform recovery consists of more than just logarithmic factors.

14.
Xu Fengmin, Dai Yuhong, Zhao Zhihu, Xu Zongben. Science China Mathematics, 2019, 62(2): 245-268
Sparse optimization has attracted increasing attention in numerous areas such as compressed sensing, financial optimization and image processing. In this paper, we first consider a special class of cardinality constrained optimization problems, which involves box constraints and a singly linear constraint. An efficient approach is provided for calculating the projection over the feasibility set after a careful analysis of the projection subproblem. Then we present several types of projected gradient methods for a general class of cardinality constrained optimization problems. Global convergence of the methods is established under suitable assumptions. Finally, we illustrate some applications of the proposed methods for signal recovery and index tracking. Especially for index tracking, we propose a new model subject to an adaptive upper bound on the sparse portfolio weights. The computational results demonstrate that the proposed projected gradient methods are efficient in terms of solution quality.
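For intuition about the projection subproblem, the easier case without the linear constraint has a closed form when 0 lies in the box: clip each coordinate to the box, then keep the K coordinates whose clipped values save the most squared distance over setting them to zero. A sketch (illustrative names; the paper's projection additionally handles a singly linear constraint):

```python
import numpy as np

def proj_box_card(x, lo, hi, K):
    """Euclidean projection onto {z : lo <= z <= hi, ||z||_0 <= K},
    assuming 0 lies in [lo, hi]. Keeping coordinate i at its clipped
    value costs (x_i - clip_i)^2 instead of x_i^2, so the distance
    saved is x_i^2 - (x_i - clip_i)^2; keep the K largest savings."""
    z = np.clip(x, lo, hi)
    gain = x**2 - (x - z)**2
    keep = np.argsort(gain)[-K:]
    out = np.zeros_like(x)
    out[keep] = z[keep]
    return out
```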

15.
Robust principal component analysis has been attracting much attention recently. Generally, the model assumes sparse noise and characterizes the error term by the $\ell_1$-norm. However, sparse noise has a clustering effect in practice, so simply using an $\ell_p$-norm is not appropriate for modeling. In this paper, we propose a novel method based on sparse Bayesian learning principles and Markov random fields. The method is proved to be very effective for low-rank matrix recovery and contiguous outlier detection, by enforcing the low-rank constraint in a matrix factorization formulation and incorporating the contiguity prior as a sparsity constraint. The experiments on both synthetic data and some practical computer vision applications show that the novel method proposed in this paper is competitive when compared with other state-of-the-art methods.

16.
The problem of finding sparse solutions to underdetermined systems of linear equations is very common in many fields, e.g., signal/image processing and statistics. A standard tool for dealing with sparse recovery is the \(\ell _1\)-regularized least-squares approach, which has recently attracted the attention of many researchers. In this paper, we describe a new version of the two-block nonlinear constrained Gauss–Seidel algorithm for solving \(\ell _1\)-regularized least-squares that at each step of the iteration process fixes some variables to zero according to a simple active-set strategy. We prove the global convergence of the new algorithm and show its efficiency by reporting the results of some preliminary numerical experiments.
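A closely related block-wise scheme that is easy to state is cyclic coordinate descent for the same \(\ell _1\)-regularized least-squares objective (a sketch for illustration, not the paper's two-block Gauss–Seidel algorithm; names are illustrative):

```python
import numpy as np

def cd_lasso(A, y, lam, iters=50):
    """Cyclic coordinate descent for 0.5*||A x - y||^2 + lam*||x||_1.
    Each pass solves exactly for one coordinate (soft thresholding)
    while the others are fixed, maintaining the residual r = y - A x."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    r = y.astype(float).copy()
    for _ in range(iters):
        for j in range(n):
            rho = A[:, j] @ r + col_sq[j] * x[j]
            xj = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += A[:, j] * (x[j] - xj)   # update residual incrementally
            x[j] = xj
    return x
```

Coordinates that are repeatedly thresholded to zero can be skipped in later sweeps, which is the same zero-fixing idea as the active-set strategy mentioned above.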

17.
A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven to have optimal-order uniform recovery guarantees using the ubiquitous Restricted Isometry Property (RIP) (Candès and Tao (2005) [11]). However, without specifying a matrix, or class of matrices, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. Bounds on RIP constants can be inserted into the algorithms' RIP-based conditions, translating the conditions into requirements on the signal's sparsity level, length, and number of measurements. We illustrate this approach for Gaussian matrices on three of the state-of-the-art greedy algorithms: CoSaMP (Needell and Tropp (2009) [29]), Subspace Pursuit (SP) (Dai and Milenkovic (2009) [13]) and Iterative Hard Thresholding (IHT) (Blumensath and Davies (2009) [8]). Designed to allow a direct comparison of existing theory, our framework implies that, according to the best available analysis of these three algorithms, IHT requires the fewest compressed sensing measurements, has the best proven stability bounds, and has the lowest per-iteration computational cost.
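Of the three algorithms, IHT is the simplest to state: alternate a gradient step with hard thresholding to the s largest-magnitude entries. A sketch (assuming the measurement matrix is scaled so its spectral norm is at most 1; names are illustrative):

```python
import numpy as np

def iht(A, y, s, iters=500):
    """Iterative Hard Thresholding: x <- H_s(x + A^T (y - A x)),
    where H_s keeps only the s largest-magnitude entries.
    Assumes A is scaled so that ||A||_2 <= 1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + A.T @ (y - A @ x)               # gradient step
        z[np.argsort(np.abs(z))[:-s]] = 0.0     # hard threshold to s entries
        x = z
    return x
```

Each iteration costs two matrix–vector products plus a partial sort, which is the low per-iteration cost the comparison above refers to.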

18.
Sums of Kronecker products occur naturally in high-dimensional spline approximation problems, which arise, for example, in the numerical treatment of chemical reactions. In full matrix form, the resulting non-sparse linear problems usually exceed the memory capacity of workstations. We present methods for the manipulation and numerical handling of Kronecker products in factorized form. Moreover, we analyze the problem of approximating a given matrix by sums of Kronecker products by making use of the equivalence to the problem of decomposing multilinear forms into sums of one-forms. Greedy algorithms based on the maximization of multilinear forms over a torus are used to obtain such (finite and infinite) decompositions that can be used to solve the approximation problem. Moreover, we present numerical considerations for these algorithms. Copyright © 2016 John Wiley & Sons, Ltd.

19.
For suitable classes of random Verblunsky coefficients, including independent, identically distributed, rotationally invariant ones, we prove that if $$ \mathbb{E} \biggl( \int \frac{d\theta}{2\pi} \biggl|\biggl( \frac{\mathcal{C} + e^{i\theta}}{\mathcal{C}-e^{i\theta}} \biggr)_{k\ell}\biggr|^p \biggr) \leq C_1 e^{-\kappa_1 |k-\ell|} $$ for some $\kappa_1 > 0$ and $p < 1$, then for suitable $C_2$ and $\kappa_2 > 0$, $$ \mathbb{E} \Bigl( \sup_n |(\mathcal{C}^n)_{k\ell}| \Bigr) \leq C_2 e^{-\kappa_2 |k-\ell|}. $$ Here $\mathcal{C}$ is the CMV matrix.

20.
We are concerned with the maximization of $\mathrm{tr}(V^T A V)/\mathrm{tr}(V^T B V) + \mathrm{tr}(V^T C V)$ over the Stiefel manifold $\{V \in \mathbb{R}^{m\times t} \mid V^T V = I_t\}$ ($t \le m$), where B is a given symmetric and positive definite matrix, A and C are symmetric matrices, and $\mathrm{tr}(\cdot)$ is the trace of a square matrix. This is a subspace version of the maximization problem studied in Zhang (2013), which arises from real-world applications in, for example, the downlink of a multi-user MIMO system and sparse Fisher discriminant analysis in pattern recognition. We establish necessary conditions for both the local and global maximizers and connect the problem with a nonlinear extreme eigenvalue problem. The necessary condition for the global maximizers offers deep insights into the problem, on the one hand, and, on the other hand, naturally leads to a self-consistent-field (SCF) iteration to be presented and analyzed in detail in Part II of this paper.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号