20 similar documents found.
1.
Mathematical Programming - In this article we study the problem of signal recovery for group models. More precisely for a given set of groups, each containing a small subset of indices, and for...
2.
3.
André F. Perold 《Mathematical Programming》1980,19(1):239-254
For general sparse linear programs two of the most efficient implementations of the LU factorization with Bartels–Golub updating are due to Reid and Saunders. This paper presents an alternative approach which achieves fast execution times for degenerate simplex method iterations, especially when used with multiple pricing. The method should have wide applicability since the simplex method performs a high proportion of degenerate iterations on most practical problems. A key feature of Saunders' method is combined with the updating strategy of Reid so as to make the scheme suitable for implementation out of core. Its efficiency is confirmed by experimental results.
4.
Deterministic constructions of compressed sensing matrices
Compressed sensing is a new area of signal processing. Its goal is to minimize the number of samples that need to be taken from a signal for faithful reconstruction. The performance of compressed sensing on signal classes is directly related to Gelfand widths. Similar to the deeper constructions of optimal subspaces in Gelfand widths, most sampling algorithms are based on randomization. However, for possible circuit implementation, it is important to understand what can be done with purely deterministic sampling. In this note, we show how to construct sampling matrices using finite fields. One such construction gives cyclic matrices which are interesting for circuit implementation. While the guaranteed performance of these deterministic constructions is not comparable to the random constructions, these matrices have the best known performance for purely deterministic constructions.
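As a concrete illustration of the finite-field idea, the sketch below builds a binary sensing matrix whose columns are indexed by low-degree polynomials over a prime field, in the spirit of DeVore-style constructions; the parameters p and r and the function name are illustrative assumptions and this is not necessarily the exact (e.g. cyclic) construction analyzed in the paper.

```python
import numpy as np

def devore_style_matrix(p, r):
    """Binary sensing matrix built from polynomials over the prime field F_p.

    Rows are indexed by pairs (x, y) in F_p x F_p (m = p**2 rows); columns are
    indexed by polynomials of degree < r with coefficients in F_p (N = p**r
    columns).  Entry ((x, y), P) is 1 when P(x) = y (mod p).  Two distinct
    columns share at most r - 1 ones, so the coherence is at most (r - 1)/p.
    """
    m, N = p * p, p ** r
    A = np.zeros((m, N))
    for col in range(N):
        # Decode the column index into polynomial coefficients, base p.
        coeffs = [(col // p ** k) % p for k in range(r)]
        for x in range(p):
            y = sum(c * x ** k for k, c in enumerate(coeffs)) % p
            A[x * p + y, col] = 1.0
    return A / np.sqrt(p)   # each column has p ones, so this gives unit l2 norm

A = devore_style_matrix(p=5, r=3)                      # 25 x 125 matrix
print(A.shape, np.allclose(np.linalg.norm(A, axis=0), 1.0))
```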
5.
In this paper, we establish a connection between zonoids (a concept from classical convex geometry) and the distinguishability norms associated to quantum measurements or POVMs (Positive Operator-Valued Measures), recently introduced in quantum information theory. This correspondence allows us to state and prove the POVM version of classical results from the local theory of Banach spaces about the approximation of zonoids by zonotopes. We show that on \(\mathbf {C}^d\), the uniform POVM (the most symmetric POVM) can be sparsified, i.e. approximated by a discrete POVM having only \(O(d^2)\) outcomes. We also show that similar (but weaker) approximation results actually hold for any POVM on \(\mathbf {C}^d\). By considering an appropriate notion of tensor product for zonoids, we extend our results to the multipartite setting: we show, roughly speaking, that local POVMs may be sparsified locally. In particular, the local uniform POVM on \(\mathbf {C}^{d_1}\otimes \cdots \otimes \mathbf {C}^{d_k}\) can be approximated by a discrete POVM which is local and has \(O(d_1^2 \times \cdots \times d_k^2)\) outcomes.
6.
Compressed sensing has motivated the development of numerous sparse approximation algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, greedy sparse approximation algorithms have been observed to be both able to recover the sparsest solution for similar problem sizes as other algorithms and to be computationally efficient; however, little theory is known for their average case behavior. We conduct a large-scale empirical investigation into the behavior of three of the state-of-the-art greedy algorithms: Normalized Iterative Hard Thresholding (NIHT), Hard Thresholding Pursuit (HTP), and CSMPSP. The investigation considers a variety of random classes of linear systems. The regions of the problem size in which each algorithm is able to reliably recover the sparsest solution are accurately determined, and throughout this region, additional performance characteristics are presented. Contrasting the recovery regions and the average computational time for each algorithm, we present algorithm selection maps, which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Although no algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of large recovery region, absolute recovery time, and robustness of these properties to additive noise across a variety of problem classes. A principal difference between the NIHT and the more sophisticated HTP and CSMPSP is the balance of asymptotic convergence rate against computational cost prior to potential support set updates. The data suggest that NIHT is typically faster than HTP and CSMPSP because of greater flexibility in updating the support that limits unnecessary computation on incorrect support sets. The algorithm selection maps presented here are the first of their kind for compressed sensing. Copyright © 2014 John Wiley & Sons, Ltd.
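For reference, here is a minimal NumPy sketch of the NIHT iteration discussed above: a gradient step whose step size is computed on the current support, followed by hard thresholding to the k largest entries. It omits the step-size safeguard and stopping rules of the full algorithm, and the problem sizes in the demo are arbitrary assumptions.

```python
import numpy as np

def niht(A, y, k, iters=200):
    """Simplified Normalized Iterative Hard Thresholding (no line-search safeguard)."""
    m, n = A.shape
    x = np.zeros(n)
    # Initial support: the k largest entries of the first gradient A^T y.
    support = np.argsort(-np.abs(A.T @ y))[:k]
    for _ in range(iters):
        g = A.T @ (y - A @ x)                     # gradient of 0.5*||y - Ax||^2
        gs = np.zeros(n); gs[support] = g[support]
        denom = np.linalg.norm(A @ gs) ** 2
        mu = (np.linalg.norm(gs) ** 2 / denom) if denom > 0 else 1.0
        x = x + mu * g                            # gradient step with normalized step size
        support = np.argsort(-np.abs(x))[:k]      # hard threshold to the k largest entries
        mask = np.zeros(n, dtype=bool); mask[support] = True
        x[~mask] = 0.0
    return x

# Tiny synthetic recovery test (hypothetical sizes); the error should be near zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
x_hat = niht(A, A @ x_true, k=8)
print(np.linalg.norm(x_hat - x_true))
```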
7.
Stéphane Chrétien 《PAMM》2007,7(1):2080003-2080004
The purpose of this note is to present a simple algorithm motivated by Lagrangian duality and generalizing the l1-relaxation approach to the recent Compressed Sensing (CS) problem introduced by Candès, Romberg and Tao. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
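The note's own algorithm is built on Lagrangian duality; purely for orientation, the sketch below shows the standard iterative soft-thresholding (ISTA) iteration for the l1-relaxed problem min_x 0.5*||Ax - y||^2 + lambda*||x||_1, which is a common baseline for this relaxation and not the method of the paper.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Plain ISTA for the l1-relaxed problem (reference baseline only)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2                       # step size <= 1/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - t * (A.T @ (A @ x - y))                        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold
    return x
```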
8.
《Comptes Rendus Mathematique》2008,346(9-10):589-592
It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique known as “compressed sensing” or “compressive sampling” relies on properties of the sensing matrix such as the restricted isometry property. In this Note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant. To cite this article: E.J. Candès, C. R. Acad. Sci. Paris, Ser. I 346 (2008).
9.
Clarice Poon 《Applied and Computational Harmonic Analysis》2017,42(3):402-451
Many of the applications of compressed sensing have been based on variable density sampling, where certain sections of the sampling coefficients are sampled more densely. Furthermore, it has been observed that these sampling schemes are dependent not only on sparsity but also on the sparsity structure of the underlying signal. This paper extends the result of Adcock, Hansen, Poon and Roman (arXiv:1302.0561, 2013) [2] to the case where the sparsifying system forms a tight frame. By dividing the sampling coefficients into levels, our main result will describe how the amount of subsampling in each level is determined by the local coherences between the sampling and sparsifying operators and the localized level sparsities – the sparsity in each level under the sparsifying operator.
10.
Because it provides a novel perspective on signal and image processing, compressed sensing (CS) has attracted increasing attention. The accuracy of the reconstruction algorithms plays an important role in real applications of the CS theory. In this paper, a generalized reconstruction model that simultaneously considers inaccuracies in the measurement matrix and the measurement data is proposed for CS reconstruction. A generalized objective functional, which integrates the advantages of least squares (LS) estimation and combinational M-estimation, is proposed. An iterative scheme that integrates the merits of the homotopy method and the artificial physics optimization (APO) algorithm is developed for solving the proposed objective functional. Numerical simulations are implemented to evaluate the feasibility and effectiveness of the proposed algorithm. For the cases simulated in this paper, the reconstruction accuracy is improved, which indicates that the proposed algorithm is successful in solving CS inverse problems.
11.
Numerical Algorithms - Our interest in this paper is to introduce a Halpern-type algorithm with both inertial terms and errors for approximating a fixed point of a nonexpansive mapping. We obtain...
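For context, the classical Halpern scheme that such algorithms extend is x_{n+1} = a_n*u + (1 - a_n)*T(x_n) with a_n → 0 and sum a_n = ∞. The sketch below implements only this basic form (no inertial or error terms), using the projection onto the unit ball as an assumed example of a nonexpansive mapping.

```python
import numpy as np

def halpern(T, u, x0, n_iters=500):
    """Classical Halpern iteration x_{n+1} = a_n*u + (1 - a_n)*T(x_n), a_n = 1/(n+2)."""
    x = x0.copy()
    for n in range(n_iters):
        a = 1.0 / (n + 2)
        x = a * u + (1 - a) * T(x)
    return x

# Example nonexpansive map (an assumption for illustration): projection onto the
# closed unit ball.  Its fixed-point set is the ball itself, and the Halpern
# iterates converge to the fixed point nearest the anchor u.
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
u = np.array([3.0, 0.0])
print(halpern(proj_ball, u, x0=np.array([5.0, 5.0])))   # approximately [1, 0]
```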
12.
Iterative hard thresholding (IHT) and compressive sampling matching pursuit (CoSaMP) are two mainstream compressed sensing algorithms using the hard thresholding operator. The guaranteed performance of the two algorithms for signal recovery was mainly analyzed in terms of the restricted isometry property (RIP) of sensing matrices. At present, the best known bound using the RIP of order 3k for guaranteed performance of IHT (with the unit stepsize) is δ_{3k} < 1/√3 ≈ 0.5774, an...
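As a reminder of how the second of these algorithms operates, the following is a textbook-style CoSaMP sketch: merge the 2k largest entries of the gradient with the current support, fit by least squares on the merged support, then prune back to k terms. The iteration count and tolerance are illustrative assumptions, and no RIP-based guarantee is claimed for this toy code.

```python
import numpy as np

def cosamp(A, y, k, iters=50):
    """Textbook CoSaMP sketch: support merging, least-squares fit, pruning to k."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        g = A.T @ r
        omega = np.argsort(-np.abs(g))[:2 * k]           # 2k largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))         # merge with current support
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # least-squares fit on T
        x = np.zeros(n)
        keep = np.argsort(-np.abs(b))[:k]                # prune to the k largest terms
        x[T[keep]] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:
            break
    return x
```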
13.
To improve the measurement efficiency and reconstruction performance of block compressed sensing, and exploiting the energy-compaction property of the discrete cosine and discrete sine transforms, two compressed sensing measurement methods are proposed: the partial discrete cosine transform in repeated block diagonal structure (PDCT-RBDS) and the partial discrete sine transform in repeated block diagonal structure (PDST-RBDS). The measurement matrices used are low-complexity structured deterministic matrices satisfying the restricted isometry property. A restricted isometry constant related to the sampling energy and a lower bound on the number of measurements required for exact reconstruction are also derived. Compared with image compressed sensing using partial random Gaussian and partial Bernoulli matrices in the same repeated block diagonal structure, PDCT-RBDS and PDST-RBDS improve the reconstruction PSNR by about 1–5 dB and the SSIM by about 0.05, while greatly reducing the reconstruction time and the storage needed for the measurement matrix. The method is particularly suitable for large-scale image compression and real-time video processing.
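A rough sketch of a measurement operator with this repeated block-diagonal partial-DCT structure is shown below. The block size, sampling ratio, and random row selection are illustrative assumptions (the paper's row selection is deterministic), and the PDST variant would simply replace the DCT with a discrete sine transform.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import block_diag

def partial_dct_block_measurement(block_size, m_per_block, n_blocks, seed=0):
    """One partial-DCT block repeated along the diagonal (illustrative sketch)."""
    C = dct(np.eye(block_size), axis=0, norm='ortho')    # orthonormal DCT-II matrix
    rows = np.random.default_rng(seed).choice(block_size, m_per_block, replace=False)
    Phi_block = C[rows, :]                               # partial DCT block (kept rows)
    return block_diag(*([Phi_block] * n_blocks))         # repeat along the diagonal

Phi = partial_dct_block_measurement(block_size=32, m_per_block=16, n_blocks=8)
print(Phi.shape)   # (128, 256): each length-32 block is measured with 16 samples
```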
14.
Numerical Algorithms - Iterative algorithms based on thresholding, feedback, and null space tuning (NST+HT+FB) for sparse signal recovery are exceedingly effective and efficient, particularly for...
15.
Qing-hua Zhou (College of Mathematics and Computer, Hebei University, Baoding, China; State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China) 《中国科学A辑(英文版)》(Science in China, Series A) 2007,50(7):913-924
In this paper, we investigate quadratic approximation methods. After studying the basic idea of simplex methods, we construct several new search directions by combining the local information progressively obtained during the iterations of the algorithm to form new subspaces, and the quadratic model is solved in these new subspaces. The motivation is to use the information disclosed by the earlier steps to construct more promising directions. For most tested problems, the number of function evaluations is reduced noticeably by our algorithms.
16.
Computational Optimization and Applications - The paper deals with the numerical solution of the problem P to maximize a homogeneous polynomial over the unit simplex. We discuss the convergence...
17.
18.
This paper presents dual network simplex algorithms that require at most 2nm pivots and O(n^2m) time for solving a maximum flow problem on a network of n nodes and m arcs. Refined implementations of these algorithms and a related simplex variant that is not strictly speaking a dual simplex algorithm are shown to have a complexity of O(n^3). The algorithms are based on the concept of a preflow and depend upon the use of node labels that are underestimates of the distances from the nodes to the sink node in the extended residual graph associated with the current flow. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research was supported by NSF Grants DMS 91-06195, DMS 94-14438 and CDR 84-21402 and DOE Grant DE-FG02-92ER25126. Research was supported by NSF Grant CDR 84-21402 at Columbia University.
19.
We study the implementation of two fundamentally different algorithms for solving the maximum flow problem: Dinic's method and the network simplex method. For the former, we present the design of a storage-efficient implementation. For the latter, we develop a "steepest-edge" pivot selection criterion that is easy to include in an existing network simplex implementation. We compare the computational efficiency of these two methods on a personal computer with a set of generated problems of up to 4 600 nodes and 27 000 arcs. This research was supported in part by the National Science Foundation under Grant Nos. MCS-8113503 and DMS-8512277.
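For readers who want to experiment with the first of the two methods, here is a compact, self-contained Dinic's max-flow sketch (BFS level graph plus DFS blocking flow). It does not reproduce the paper's storage-efficient design or the steepest-edge network simplex, and the small example graph is an assumption for illustration.

```python
from collections import deque

class Dinic:
    """Compact Dinic's max-flow: BFS builds the level graph, DFS sends blocking flow."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]   # adj[u] = list of edge indices leaving u
        self.to, self.cap = [], []          # edge arrays: head vertex, residual capacity

    def add_edge(self, u, v, c):
        self.adj[u].append(len(self.to)); self.to.append(v); self.cap.append(c)
        self.adj[v].append(len(self.to)); self.to.append(u); self.cap.append(0)

    def _bfs(self, s, t):
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for e in self.adj[u]:
                v = self.to[e]
                if self.cap[e] > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, f):
        if u == t:
            return f
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v = self.to[e]
            if self.cap[e] > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(f, self.cap[e]))
                if d > 0:
                    self.cap[e] -= d
                    self.cap[e ^ 1] += d     # e ^ 1 is the paired reverse edge
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n
            f = self._dfs(s, t, float('inf'))
            while f > 0:
                flow += f
                f = self._dfs(s, t, float('inf'))
        return flow

# Small example graph (hypothetical): the minimum cut at the source has capacity 5.
g = Dinic(4)
g.add_edge(0, 1, 3); g.add_edge(0, 2, 2)
g.add_edge(1, 2, 1); g.add_edge(1, 3, 2); g.add_edge(2, 3, 3)
print(g.max_flow(0, 3))   # 5
```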
20.
O. Axelsson 《BIT Numerical Mathematics》1974,14(3):279-287
The numerical solution of systems of differential equations of the form B dx/dt = σ(t)Ax(t) + f(t), x(0) given, where B and A (with B and −(A+A^T) positive definite) are supposed to be large sparse matrices, is considered. A-stable methods like the implicit Runge-Kutta methods based on Radau quadrature are combined with iterative methods for the solution of the algebraic systems of equations.
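To make the structure of the implicit step concrete, the sketch below advances B dx/dt = σ(t)Ax + f(t) with backward Euler, which is the one-stage Radau IIA method: each step reduces to the linear solve (B - h σ(t_{n+1}) A) x_{n+1} = B x_n + h f(t_{n+1}). A dense direct solve is used here purely for illustration, whereas the paper combines such implicit steps with iterative solvers for large sparse B and A; the matrices and step size below are assumptions.

```python
import numpy as np

def backward_euler_step(B, A, sigma, f, t, x, h):
    """One backward Euler (one-stage Radau IIA) step for B dx/dt = sigma(t) A x + f(t)."""
    tn = t + h
    M = B - h * sigma(tn) * A          # system matrix of the implicit step
    rhs = B @ x + h * f(tn)
    return np.linalg.solve(M, rhs)     # direct solve for illustration only

# Small illustration: B = I and A symmetric with -(A + A^T) positive definite,
# so the exact solution decays; the A-stable implicit step stays stable for any h.
n = 4
A = -2.0 * np.eye(n)
B = np.eye(n)
x = np.ones(n)
for k in range(10):
    x = backward_euler_step(B, A, sigma=lambda t: 1.0,
                            f=lambda t: np.zeros(n), t=0.1 * k, x=x, h=0.1)
print(x)   # all components decay toward zero
```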