Similar Literature
Found 20 similar documents (search time: 562 ms).
1.
In the past decade, the sparse representation synthesis model has been extensively studied and widely applied in signal processing. Recently, the cosparse analysis model has been introduced as an interesting alternative to the sparse representation synthesis model. The sparse synthesis model pays attention to the non-zero elements of a representation vector x, while the cosparse analysis model focuses on the zero elements of the analysis representation vector Ωx. This paper mainly considers the cosparse analysis model. Building on the greedy analysis pursuit algorithm and constructing an adaptive weighted matrix W_{k-1}, we propose a modified greedy analysis pursuit algorithm for the sparse recovery problem when the signal obeys the cosparse model. Using a weighted matrix, we fill the gap between greedy algorithms and relaxation techniques. A standard analysis shows that our algorithm is convergent. We estimate the error bound for solving the cosparse analysis model, and the presented simulations demonstrate the advantage of the proposed method for the cosparse inverse problem.
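As a point of reference for how an analysis-side greedy pursuit operates, the following is a minimal numpy sketch of the plain (unweighted) greedy analysis pursuit iteration that this paper modifies. The penalised least-squares step, the stopping rule, and all parameter names are illustrative assumptions of this sketch, not the authors' adaptively weighted W_{k-1} algorithm.

```python
import numpy as np

def gap(y, M, Omega, target_cosparsity, lam=1e6):
    """Minimal greedy analysis pursuit (GAP) sketch.

    Iteratively shrinks the cosupport Lambda (rows of Omega assumed to
    annihilate x) by discarding the row with the largest analysis
    coefficient |(Omega x)_i|.
    """
    p, d = Omega.shape
    Lambda = list(range(p))                      # start with the full cosupport
    x = None
    for _ in range(p - target_cosparsity):
        O = Omega[Lambda]
        # penalised least squares: min ||y - Mx||^2 + lam * ||Omega_Lambda x||^2
        A = np.vstack([M, np.sqrt(lam) * O])
        b = np.concatenate([y, np.zeros(len(Lambda))])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        worst = int(np.argmax(np.abs(O @ x)))
        Lambda.pop(worst)                        # drop the least-consistent row
    return x, Lambda
```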

2.
3.
The orthogonal matching pursuit (OMP) algorithm is an efficient method for the recovery of a sparse signal in compressed sensing, due to its easy implementation and low complexity. In this paper, the robustness of the OMP algorithm under the restricted isometry property (RIP) is presented. It is shown that δ_K + √K θ_{K,1} < 1 is sufficient for the OMP algorithm to exactly recover the support of an arbitrary K-sparse signal if its nonzero components are large enough, for both l_2-bounded and l_∞-bounded noise.
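For context, the baseline OMP recursion that this analysis concerns can be sketched as follows. This is a generic textbook version in numpy; the stopping tolerance and variable names are my own, and it is not the specific noisy-case variant analysed in the paper.

```python
import numpy as np

def omp(A, y, K, tol=1e-10):
    """Plain orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit on the chosen support."""
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(K):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x, support
```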

4.
Orthogonal multi-matching pursuit (OMMP) is a natural extension of orthogonal matching pursuit (OMP) in the sense that N (N ≥ 1) indices are selected per iteration instead of one. In this paper, the theoretical performance of OMMP under the restricted isometry property (RIP) is presented. We demonstrate that OMMP can exactly recover any K-sparse signal from fewer observations y = Φx, provided that the sampling matrix Φ satisfies δ_{KN-N+1} + √(K/N) θ_{KN-N+1,N} < 1. Moreover, the performance of OMMP for support recovery from noisy observations is also discussed. It is shown that, for l_2-bounded and l_∞-bounded noise, OMMP can recover the true support of any K-sparse signal under conditions on the restricted isometry property of the sampling matrix Φ and the minimum magnitude of the nonzero components of the signal.
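A minimal numpy sketch of the OMMP(N) selection rule described above is given below. The iteration cap of ⌈K/N⌉ and the zeroing of already-selected correlations are illustrative choices of mine, not conditions taken from the paper.

```python
import numpy as np

def ommp(A, y, K, N=2, tol=1e-10):
    """Orthogonal multi-matching pursuit sketch: identical to OMP except that
    the N columns best correlated with the residual are added per iteration
    before the least-squares re-fit."""
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(int(np.ceil(K / N))):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                        # do not re-select atoms
        new = np.argsort(corr)[-N:]                # N largest correlations
        support.extend(int(i) for i in new)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x, support
```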

5.
A full-rank under-determined linear system of equations Ax = b has in general infinitely many possible solutions. In recent years there has been growing interest in the sparsest solution of this equation, the one with the fewest non-zero entries, measured by ||x||_0. Such solutions find applications in signal and image processing, where the topic is typically referred to as "sparse representation". Considering the columns of A as atoms of a dictionary, it is assumed that a given signal b is a linear combination of few such atoms. Recent work established that if the desired solution x is sparse enough, its uniqueness is guaranteed, and pursuit algorithms, approximate solvers for the above problem, are guaranteed to succeed in finding it. Armed with these results, the problem can be reversed and posed as an implied matrix factorization problem: given a set of vectors {b_i}, known to emerge from such sparse constructions, Ax_i = b_i, with sufficiently sparse representations x_i, we seek the matrix A. In this paper we present both theoretical and algorithmic studies of this problem. We establish the uniqueness of the dictionary A, depending on the quantity and nature of the set {b_i} and the sparsity of {x_i}. We also describe a recently developed algorithm, the K-SVD, that practically finds the matrix A in a manner similar to the K-Means algorithm. Finally, we demonstrate this algorithm on several stylized applications in image processing.
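The alternation at the heart of K-SVD (sparse coding of all signals, then a rank-one SVD update of each atom together with the coefficients that use it) can be sketched compactly. The version below is a simplified illustration with a fixed number of non-zeros per signal and no atom-replacement heuristics, so it should not be read as the authors' full algorithm.

```python
import numpy as np

def _omp(D, y, T):
    """Tiny OMP used as the sparse-coding stage (T non-zeros per signal)."""
    support, r = [], y.copy()
    for _ in range(T):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def ksvd(Y, n_atoms, T, n_iters=10, seed=0):
    """Minimal K-SVD sketch: alternate sparse coding with a rank-1 SVD update
    of each atom and of the coefficients that use it."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    for _ in range(n_iters):
        X = np.column_stack([_omp(D, Y[:, i], T) for i in range(n)])
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]            # signals that use atom k
            if users.size == 0:
                continue
            # error matrix with atom k's contribution removed
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                      # best rank-1 update of the atom
            X[k, users] = S[0] * Vt[0]             # and of its coefficients
    return D, X
```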

6.
This paper provides new results on computing simultaneous sparse approximations of multichannel signals over redundant dictionaries using two greedy algorithms. The first one, p-thresholding, selects the S atoms that have the largest p-correlation, while the second one, p-simultaneous orthogonal matching pursuit (p-SOMP), is a generalisation of an algorithm studied by Tropp in (Signal Process. 86:572–588, 2006). We first provide exact recovery conditions as well as worst-case analyses of both algorithms. The results, expressed using the standard cumulative coherence, are very reminiscent of the single-channel case and, in particular, impose stringent restrictions on the dictionary. We unlock the situation by performing an average-case analysis of both algorithms. First, we set up a general probabilistic signal model in which the coefficients of the atoms are drawn at random from the standard Gaussian distribution. Second, we show that under this model, and with mild conditions on the coherence, the probability that p-thresholding and p-SOMP fail to recover the correct components is overwhelmingly small and gets smaller as the number of channels increases. Furthermore, we analyse the influence of selecting the set of correct atoms at random. We show that, if the dictionary satisfies a uniform uncertainty principle (Candes and Tao, IEEE Trans. Inf. Theory, 52(12):5406–5425, 2006), the probability that simultaneous OMP fails to recover any sufficiently sparse set of atoms gets increasingly smaller as the number of channels increases.
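The p-thresholding selection rule lends itself to a very short sketch: compute the correlation of each atom with all channels, rank atoms by the l_p norm of those correlations, keep the S best, and fit the retained atoms by least squares. The matrix shapes, the unit-norm dictionary, and the final least-squares fit are assumptions made for illustration.

```python
import numpy as np

def p_thresholding(D, B, S, p=2):
    """p-thresholding sketch for simultaneous sparse approximation.
    D: (d, n_atoms) unit-norm dictionary, B: (d, n_channels) multichannel signal."""
    corr = D.T @ B                                   # (n_atoms, n_channels)
    scores = np.linalg.norm(corr, ord=p, axis=1)     # p-correlation of each atom
    support = np.argsort(scores)[-S:]                # S atoms with largest score
    X = np.zeros((D.shape[1], B.shape[1]))
    coef, *_ = np.linalg.lstsq(D[:, support], B, rcond=None)
    X[support] = coef                                # joint coefficients on the support
    return X, sorted(int(i) for i in support)
```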

7.
Solving a sparse system of linear equations Ax=b is one of the most fundamental operations inside any circuit simulator. The equations/rows of the matrix A are often rearranged/permuted before factorization and before applying direct or iterative methods to obtain the solution. Permuting the rows of the matrix A so that the entries with large absolute values lie on the diagonal has several advantages, such as better numerical stability for direct methods (e.g., Gaussian elimination) and faster convergence for indirect methods (such as the Jacobi method). Duff (2009) [3] has formulated this as a weighted bipartite matching problem (the MC64 algorithm). In this paper we improve the performance of the MC64 algorithm with a new labeling technique which improves the asymptotic complexity of updating dual variables from O(|V|+|E|) to O(|V|), where |V| is the order of the matrix A and |E| is the number of non-zeros. Experimental results from using the new algorithm, benchmarked with both industry benchmarks and the UFL sparse matrix collection, are very promising. Our algorithm is more than 60 times faster (than Duff's algorithm) for sparse matrices with at least a million non-zeros.
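The row-permutation objective (large absolute values on the diagonal) can be illustrated on a small dense matrix with SciPy's generic assignment solver. This is only a toy stand-in for MC64 and says nothing about the sparse dual-variable labeling technique the paper improves; the cost transformation -log|a_ij| and the example matrix are my own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def permute_large_diagonal(A, eps=1e-300):
    """Find a row permutation of A that maximises the product of absolute
    values on the diagonal, posed as a linear assignment problem."""
    cost = -np.log(np.abs(A) + eps)      # zero entries get a huge cost
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty(A.shape[0], dtype=int)
    perm[cols] = rows                    # send row rows[k] to row position cols[k]
    return A[perm], perm

A = np.array([[0.1, 5.0, 0.0],
              [4.0, 0.2, 0.0],
              [0.0, 0.3, 2.0]])
Ap, perm = permute_large_diagonal(A)
print(np.abs(np.diag(Ap)))               # large entries now sit on the diagonal
```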

8.
In this paper we describe an implementation of a cutting plane algorithm for the perfect matching problem which is based on the simplex method. The algorithm has the following features:
  • It works on very sparse subgraphs of K_n which are determined heuristically; global optimality is checked using the reduced-cost criterion.
  • Cutting plane recognition is usually accomplished by heuristics. Only if these fail is the Padberg-Rao procedure invoked to guarantee finite convergence.
  • Our computational study shows that, on average, very few variables and very few cutting planes suffice to find a globally optimal solution. In this way we could solve matching problems on complete graphs with up to 1000 nodes. Moreover, it turned out that our cutting plane algorithm is competitive with the fast combinatorial matching algorithms known to date.

    9.
    The sign truncated matching pursuit (STrMP) algorithm is presented in this paper. STrMP is a new greedy algorithm for the recovery of sparse signals from sign measurements, which combines the principle of consistent reconstruction with orthogonal matching pursuit (OMP). The main part of STrMP is as concise as OMP, and hence STrMP is simple to implement. In contrast to previous greedy algorithms for one-bit compressed sensing, STrMP only needs to solve a convex and unconstrained subproblem at each iteration. Numerical experiments show that STrMP is fast and accurate for one-bit compressed sensing compared with other algorithms.

    10.
    We consider the approximation in L^2(R) of a given function using finite linear combinations of Walsh atoms, which are Walsh functions localized to dyadic intervals, also called Haar-Walsh wavelet packets. It is shown that, up to a constant factor, a linear combination of K atoms can be represented to relative error ɛ by a linear combination of orthogonal atoms. In finite dimension N, best approximation with K orthogonal atoms can be realized with an algorithm of order . A faster algorithm of order solves the problem with indirect control over K. Therefore the above result connects algorithmic and theoretical best approximation.

    11.
    One can recover sparse multivariate trigonometric polynomials from a few randomly taken samples with high probability (as shown by Kunis and Rauhut). We give a deterministic sampling of multivariate trigonometric polynomials inspired by Weil's exponential sum. Our sampling can produce a deterministic matrix satisfying the statistical restricted isometry property, and also nearly optimal Grassmannian frames. We show that one can exactly reconstruct every M-sparse multivariate trigonometric polynomial with fixed degree and of length D from the deterministic sampling X, using orthogonal matching pursuit, with |X| a prime number greater than (M log D)^2. This result is optimal within the (log D)^2 factor. The simulations show that the deterministic sampling can offer reconstruction performance similar to that of random sampling.

    12.
    Let E be a 2-uniformly real Banach space and let F, K : E → E be nonlinear bounded accretive operators. Assume that the Hammerstein equation u + KFu = 0 has a solution. A new explicit iteration sequence is introduced and strong convergence of the sequence to a solution of the Hammerstein equation is proved. The operators F and K are not required to satisfy the so-called range condition. No invertibility assumption is imposed on the operator K, and F is not restricted to be an angle-bounded (necessarily linear) operator.

    13.
    The orthogonal multi-matching pursuit (OMMP) is a natural extension of the orthogonal matching pursuit (OMP). We denote the OMMP with the parameter $M$ as OMMP($M$), where $M$ ≥ 1 is an integer. The main difference between OMP and OMMP($M$) is that OMMP($M$) selects $M$ atoms per iteration, while OMP only adds one atom to the optimal atom set. In this paper, we study the performance of orthogonal multi-matching pursuit under RIP. In particular, we show that, when the measurement matrix $A$ satisfies (25$s$, 1/10)-RIP, OMMP($M_0$) with $M_0$ = 12 can recover $s$-sparse signals within $s$ iterations. We furthermore prove that OMMP($M$) can recover $s$-sparse signals within $O(s/M)$ iterations for a large class of $M$.

    14.
    A generalized version of the exact model matching problem (GEMMP) is considered for linear multivariable systems over an arbitrary commutative ring K with identity. Reduced forms of this problem are introduced, and a characterization of all solutions and minimal order solutions is given, both with and without the properness constraint on the solutions, in terms of linear equations over K and K-modules. An approach to the characterization of all stable solutions is presented which, under a certain Bezout condition and a freeness condition, provides a parametrization of all stable solutions. The results provide an explicit parametrization of all solutions and all stable solutions in case K is a field, without the Bezout condition. This is achieved through a very simple characterization and a generalization to an arbitrary field K of the “fixed poles” of the model matching problem in terms of invariant factors of a certain polynomial matrix. The results also show that whenever the GEMMP has a solution, there exist solutions whose poles can be chosen arbitrarily as far as they contain the “fixed poles” with the right multiplicities (in the algebraic closure of K). Implications of these results in regard to inverse systems are shown. Equivalent simpler forms (in state space form) of the problem are shown to be obtainable. A theory of finitely generated (F,G)-invariant submodules for linear systems over rings is developed, and the geometric equivalent of the model matching problem—the dynamic cover problem—is formulated, to which the results of the previous sections provide a solution in the reduced case.

    15.
    Flux balance analysis has proven to be an effective tool for analyzing metabolic networks. In flux balance analysis, reaction rates and optimal pathways are ascertained by solving a linear program in which the growth rate is maximized subject to mass-balance constraints. A variety of cell functions in response to environmental stimuli can be quantified using flux balance analysis by parameterizing the linear program with respect to extracellular conditions. However, for most large, genome-scale metabolic networks of practical interest, the resulting parametric problem has multiple and highly degenerate optimal solutions, which are computationally challenging to handle. An improved multi-parametric programming algorithm based on active-set methods is introduced in this paper to overcome these computational difficulties. Degeneracy and multiplicity are handled, respectively, by introducing generalized inverses and auxiliary objective functions into the formulation of the optimality conditions. These improvements are especially effective for metabolic networks because their stoichiometry matrices are generally sparse; thus, fast and efficient algorithms from sparse linear algebra can be leveraged to compute generalized inverses and null-space bases. We illustrate the application of our algorithm to flux balance analysis of metabolic networks by studying a reduced metabolic model of Corynebacterium glutamicum and a genome-scale model of Escherichia coli. We then demonstrate how the critical regions resulting from these studies can be associated with optimal metabolic modes and discuss the physical relevance of optimal pathways arising from various auxiliary objective functions. Achieving a more than fivefold improvement in computational speed over existing multi-parametric programming tools, the proposed algorithm proves promising in handling genome-scale metabolic models.
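At its core, the flux balance problem solved within each critical region is a plain linear program. The toy example below, with an invented three-metabolite stoichiometric matrix and invented flux bounds, shows only the base LP that the paper's multi-parametric machinery is built around, not the active-set parametric algorithm itself.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximise the growth flux v_growth subject to the
# steady-state mass balance S v = 0 and flux bounds (illustrative data only).
S = np.array([
    [ 1, -1, -1,  0],   # metabolite A: uptake flux splits into two branches
    [ 0,  1,  0, -1],   # metabolite B: branch 1 feeds growth
    [ 0,  0,  1, -1],   # metabolite C: branch 2 feeds growth
])
c = np.array([0, 0, 0, -1])                  # maximise v_growth == minimise -v_growth
bounds = [(0, 10), (0, 10), (0, 10), (0, None)]

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "growth rate:", -res.fun)
```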

    16.
    Goldfarb's algorithm, which is one of the most successful methods for minimizing a function of several variables subject to linear constraints, uses a single matrix to keep second derivative information and to ensure that search directions satisfy any active constraints. In the original version of the algorithm this matrix is full, but by making a change of variables so that the active constraints become bounds on vector components, this matrix is transformed so that the dimension of its non-zero part is only the number of variables less the number of active constraints. It is shown how this transformation may be used to give a version of the algorithm that usually provides a good saving in the amount of computation over the original version. It also allows the use of sparse matrix techniques to take advantage of zeros in the matrix of linear constraints. Thus the method described can be regarded as an extension of linear programming to allow a non-linear objective function.

    17.
    Adaptive greedy approximations
    The problem of optimally approximating a function with a linear expansion over a redundant dictionary of waveforms is NP-hard. The greedy matching pursuit algorithm and its orthogonalized variant produce suboptimal function expansions by iteratively choosing dictionary waveforms that best match the function's structures. A matching pursuit provides a means of quickly computing compact, adaptive function approximations. Numerical experiments show that the approximation errors from matching pursuits initially decrease rapidly, but the asymptotic decay rate of the errors is slow. We explain this behavior by showing that matching pursuits are chaotic, ergodic maps. The statistical properties of the approximation errors of a pursuit can be obtained from the invariant measure of the pursuit. We characterize these measures using group symmetries of dictionaries and by constructing a stochastic differential equation model. We derive a notion of the coherence of a signal with respect to a dictionary from our characterization of the approximation errors of a pursuit. The dictionary elements selected during the initial iterations of a pursuit correspond to a function's coherent structures. The tail of the expansion, on the other hand, corresponds to a noise which is characterized by the invariant measure of the pursuit map. When using a suitable dictionary, the expansion of a function into its coherent structures yields a compact approximation. We demonstrate a denoising algorithm based on coherent function expansions.
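The pursuit map whose ergodic properties are analysed here is just the residual iteration of plain (non-orthogonal) matching pursuit. A minimal numpy sketch is given below so the object under study is concrete; a unit-norm dictionary and the returned residual-norm history are assumptions of this sketch.

```python
import numpy as np

def matching_pursuit(D, f, n_iters):
    """Plain matching pursuit: at each step subtract the projection of the
    residual onto the single best-matching unit-norm atom. The residual-norm
    history shows the fast initial decay and slow tail discussed above."""
    r = f.copy()
    expansion, residual_norms = [], [np.linalg.norm(r)]
    for _ in range(n_iters):
        idx = int(np.argmax(np.abs(D.T @ r)))     # best-matching atom
        coef = float(D[:, idx] @ r)
        r = r - coef * D[:, idx]                  # pursuit (residual) map
        expansion.append((idx, coef))
        residual_norms.append(np.linalg.norm(r))
    return expansion, residual_norms
```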

    18.
    This paper uses Daubechies orthogonal wavelets to change the dense, fully populated matrices of boundary element method (BEM) systems into sparse, semi-banded matrices. A novel algorithm based on the hierarchical nature of multiresolution analysis is then introduced for solving the resultant sparse linear systems. This algorithm decomposes the NS-form of the transformed parent matrix into descendant systems of reduced size and solves them iteratively using the GMRES algorithm. Both parts, the conversion of dense matrices into sparse systems and the novel solver, can be added as a black box to existing BEM codes. Transforming the matrices into wavelet space takes less time than is saved by solving the resulting sparse systems. Numerical results are presented, together with a careful study of the sensitivity of the physical variables to the thresholding parameter and of the savings in computer time and memory. A suitable value of the thresholding parameter is recommended for elasticity problems. The results indicate that the proposed method is efficient for large problems. Copyright © 2009 John Wiley & Sons, Ltd.
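The overall pipeline (two-sided wavelet transform of a dense matrix, thresholding, iterative solution of the sparse system) can be sketched as follows. For brevity this uses an orthonormal Haar transform instead of Daubechies wavelets, a synthetic smoothly decaying kernel instead of a real BEM matrix, and a single GMRES solve rather than the paper's hierarchical NS-form decomposition; all of these are simplifying assumptions of the sketch.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import gmres

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n = 2^k (stands in here for the
    Daubechies wavelets used in the paper)."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    averages = np.kron(H, [1.0, 1.0])
    details = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([averages, details]) / np.sqrt(2.0)

n = 256
A = np.exp(-0.05 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # dense, smooth kernel
b = np.random.default_rng(0).standard_normal(n)

W = haar_matrix(n)
At = W @ A @ W.T                                   # two-sided wavelet transform
At[np.abs(At) < 1e-4 * np.abs(At).max()] = 0.0     # threshold small entries
As = csr_matrix(At)
print("fraction of non-zeros kept:", As.nnz / n**2)

y, info = gmres(As, W @ b)                         # iterative solve in wavelet space
x = W.T @ y                                        # transform the solution back
print("GMRES info:", info, "residual norm:", np.linalg.norm(A @ x - b))
```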

    19.
    By means of critical point theory, existence theorems for nontrivial solutions to the Hammerstein equation x = KFx are given, where K is a compact linear integral operator and F is a nonlinear superposition operator. To this end, appropriate conditions on the spectrum of the linear part are combined with growth and representation conditions on the nonlinear part to ensure the applicability of the mountain-pass lemma. The abstract existence theorems are applied to nonlinear elliptic equations and systems subject to Dirichlet boundary conditions.

    20.
    We consider quadratic programs with pure general integer variables. The objective function is quadratic and convex and the constraints are linear. An exact solution approach is proposed. It is decomposed into two phases. In the first phase, the initial problem is reformulated into an equivalent problem with a separable objective function. This is done by use of a Gauss decomposition of the Hessian matrix of the initial problem and requires the addition of some continuous variables and constraints. In the second phase, the reformulated problem is linearized by approximating each squared term by a set of K linear functions that correspond to the tangents of a hyperbola at K points. We give a proof of the intuitive property that when K is large enough, the optimal value of the obtained linear program is very close to the optimal value of the two previous problems, the initial problem and the reformulated separable problem. The remainder is dedicated to the implementation of a branch-and-bound algorithm for the solution of the linearized problem, and its application to a set of instances. Several points are considered, among which are the choice of the right value for the parameter K and the implementation of a sophisticated heuristic solution algorithm. The numerical comparison is done with CPLEX 12.2 since, in this case, the initial problem as well as the problem reformulated by the first step can be solved by CPLEX. We show that with our approach, the total CPU time is divided by a factor ranging from 1.2 to 131.6 for instances with 40–60 variables.
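To make the "K large enough" property concrete, the small sketch below measures the worst-case gap between a squared term and the maximum of K tangent-line underestimators on a bounded interval. The tangency points, the interval, and the use of tangents to the curve y = x² for the squared term (rather than the hyperbola construction used in the paper) are illustrative assumptions.

```python
import numpy as np

def tangent_gap(K, lo=-10.0, hi=10.0):
    """Worst-case error of underestimating x^2 by the maximum of K tangent
    lines y = 2*t_k*x - t_k**2 taken at K equally spaced tangency points."""
    t = np.linspace(lo, hi, K)                                 # tangency points
    x = np.linspace(lo, hi, 2001)                              # evaluation grid
    lower = np.max(2.0 * t[:, None] * x[None, :] - t[:, None] ** 2, axis=0)
    return float(np.max(x**2 - lower))                         # worst-case gap

for K in (5, 10, 20, 40, 80):
    print(K, tangent_gap(K))    # the gap shrinks roughly like 1/K^2
```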
