20 similar documents found; search took 0 ms
1.
Claire Boyer, Jérémie Bigot, Pierre Weiss. Applied and Computational Harmonic Analysis 2019, 46(2):312-350
Compressed Sensing (CS) is an appealing framework for applications such as Magnetic Resonance Imaging (MRI). To date, however, the sensing schemes suggested by CS theories consist of random isolated measurements, which are usually incompatible with the physics of acquisition. To reflect the physical constraints of the imaging device, we introduce the notion of blocks of measurements: the sensing scheme is no longer a set of isolated measurements but a set of groups of measurements which may represent any arbitrary shape (parallel or radial lines, for instance). Structured acquisitions with blocks of measurements are easy to implement and provide good reconstruction results in practice. However, very few results exist on the theoretical guarantees of CS reconstructions in this setting. In this paper, we derive new CS results for structured acquisitions and signals satisfying a prior structured sparsity. The obtained results provide a recovery probability of sparse vectors that explicitly depends on their support. Our results are thus support-dependent and offer the possibility of flexible assumptions on the sparsity structure. Moreover, the results are drawing-dependent, since we highlight an explicit dependency between the probability of reconstructing a sparse vector and the way the blocks of measurements are chosen. Numerical simulations show that the proposed theory is faithful to experimental observations.
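A block-structured sampling scheme of the kind described can be sketched in a few lines; this toy mask of full horizontal k-space lines (function name and sizes are ours, for illustration only) contrasts with a mask of isolated random entries:

```python
import numpy as np

def block_mask(n, num_lines, rng=None):
    """Sampling mask made of whole horizontal lines (blocks of measurements),
    as opposed to isolated random entries: draw num_lines distinct rows and
    acquire every sample on each drawn line."""
    rng = np.random.default_rng(rng)
    rows = rng.choice(n, size=num_lines, replace=False)
    mask = np.zeros((n, n), dtype=bool)
    mask[rows, :] = True          # each block is a full line of measurements
    return mask

mask = block_mask(64, 16, rng=0)  # 16 full lines out of 64: a 4x undersampling
```

Radial lines or any other physically realizable trajectory would play the same role; only the shape of the blocks changes.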
2.
Foundations of Computational Mathematics - Regularization techniques are widely employed in optimization-based approaches for solving ill-posed inverse problems in data analysis and scientific...
3.
We consider real polynomials in finitely many variables. Let the variables consist of finitely many blocks that are allowed to overlap in a certain way. Let the solution set of a finite system of polynomial inequalities be given, where each inequality involves only variables of one block. We investigate polynomials that are positive on such a set and sparse in the sense that each monomial involves only variables of one block. In particular, we derive a short and direct proof of Lasserre's theorem on the existence of sums-of-squares certificates respecting the block structure. The motivation for the results can be found in the literature on numerical methods for global optimization of polynomials that exploit sparsity.
The first and the third author were supported by the DFG grant "Barrieren". The second author was supported by the "Studienstiftung des deutschen Volkes".
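Schematically, and under the assumptions of the paper (each constraint polynomial involves the variables of a single block, the blocks overlap in the structured way required there, and a suitable compactness condition holds), the certificate respecting the block structure has the form below; the notation I_k for the k-th block and J_k for the constraints on that block is ours, for illustration:

```latex
f \;=\; \sum_{k=1}^{p} \Bigl( \sigma_{k,0} \;+\; \sum_{j \in J_k} \sigma_{k,j}\, g_j \Bigr),
```

where each σ_{k,0} and σ_{k,j} is a sum of squares of polynomials involving only the variables of block I_k, so every term of the certificate is as sparse as the input data.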
4.
5.
Finding a sparse approximation of a signal from an arbitrary dictionary is a very useful tool for solving many problems in signal processing. Several algorithms, such as Basis Pursuit (BP) and Matching Pursuits (MP, also known as greedy algorithms), have been introduced to compute sparse approximations of signals, but a priori such algorithms only provide sub-optimal solutions. In general, it is difficult to estimate how close a computed solution is to the optimal one. In a series of recent results, several authors have shown that both BP and MP can successfully recover a sparse representation of a signal provided that it is sparse enough, that is to say, if its support (which indicates where the nonzero coefficients are located) is of sufficiently small size. In this paper we define identifiable structures that support signals that can be recovered exactly by ℓ1 minimization (Basis Pursuit) and greedy algorithms. In other words, if the support of a representation belongs to an identifiable structure, then the representation will be recovered by BP and MP. In addition, we obtain that if the output of an arbitrary decomposition algorithm is supported on an identifiable structure, then one can be sure that the representation is optimal within the class of signals supported by the structure. As an application of the theoretical results, we give a detailed study of a family of multichannel dictionaries with a special structure (corresponding to the representation problem) often used in, e.g., under-determined source separation problems or in multichannel signal processing. An identifiable structure for such dictionaries is defined using a generalization of Tropp's Babel function, which combines the coherence of the mixing matrix with that of the time-domain dictionary, and we obtain explicit structure conditions which ensure that both ℓ1 minimization and a multi-channel variant of Matching Pursuit can recover structured multichannel representations. The multichannel Matching Pursuit algorithm is described in detail, and we conclude with a discussion of some implications of our results in terms of blind source separation based on sparse decompositions.
Communicated by Yuesheng Xu
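The Babel function mentioned above can be computed directly from the Gram matrix of a dictionary; a minimal sketch (function name and example dictionary are ours, not from the paper):

```python
import numpy as np

def babel(D, m):
    """Babel function mu_1(m) of a dictionary D with unit-norm columns
    (Tropp): the largest sum of m off-diagonal absolute inner products
    against any single atom. mu_1(1) is the usual mutual coherence."""
    G = np.abs(D.T @ D)               # absolute Gram matrix
    np.fill_diagonal(G, 0.0)          # exclude each atom against itself
    G.sort(axis=1)                    # ascending within each row
    return G[:, -m:].sum(axis=1).max()

# two unit-norm atoms in the plane at angle 0.5 rad: mu_1(1) = cos(0.5)
D = np.array([[1.0, np.cos(0.5)],
              [0.0, np.sin(0.5)]])
```

A classical use of this quantity, due to Tropp: BP and (Orthogonal) MP recover every representation with support size k whenever μ1(k-1) + μ1(k) < 1; the structured conditions in the paper generalize this kind of test.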
6.
Journal of Graph Theory 2018, 89(1):14-25
The separation dimension of a hypergraph G is the smallest natural number k for which the vertices of G can be embedded in ℝ^k so that any pair of disjoint edges in G can be separated by a hyperplane normal to one of the axes. Equivalently, it is the cardinality of a smallest family of total orders of the vertex set of G such that, for any two disjoint edges of G, there is at least one total order in the family in which all the vertices of one edge precede those of the other. Separation dimension is a monotone parameter; adding more edges cannot reduce the separation dimension of a hypergraph. In this article, we discuss the influence of separation dimension and edge-density of a graph on one another. On the one hand, we give an upper bound on the maximum separation dimension of a k-degenerate graph on n vertices, and exhibit a family of 2-degenerate graphs whose separation dimension grows with n. On the other hand, we show that graphs with bounded separation dimension cannot be very dense; quantitatively, we bound the number of edges of an n-vertex graph in terms of its separation dimension s. We do not believe that this bound is optimal and give a question and a remark on the optimal bound.
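The total-order characterization of separation dimension can be checked mechanically; a small sketch (the 4-cycle example and function names are ours) verifying that a family of orders separates every pair of disjoint edges:

```python
from itertools import combinations

def separates(order, e, f):
    """True if, in this total order, all vertices of one edge
    precede all vertices of the other."""
    pos = {v: i for i, v in enumerate(order)}
    return (max(pos[v] for v in e) < min(pos[v] for v in f)
            or max(pos[v] for v in f) < min(pos[v] for v in e))

def is_separating_family(orders, edges):
    """Check that every pair of disjoint edges is separated by some
    order in the family; its minimum size is the separation dimension."""
    for e, f in combinations(edges, 2):
        if set(e) & set(f):
            continue                  # only disjoint edges need separating
        if not any(separates(o, e, f) for o in orders):
            return False
    return True

# 4-cycle C4: two total orders suffice, so its separation dimension is <= 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
orders = [[0, 1, 2, 3], [1, 2, 3, 0]]
```

A single order cannot separate both disjoint pairs of C4, which is how one certifies lower bounds by exhaustion on small graphs.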
7.
Mathematical Methods in the Applied Sciences 2018, 41(13):5140-5158
In this paper, we aim to prove the possibility of reconstructing a bicomplex sparse signal, with high probability, from a reduced number of bicomplex random samples. Due to the idempotent representation of the bicomplex algebra, this case is similar to the case of the standard Fourier basis, thus allowing us to adapt in a rather easy way the arguments from the recent works of Rauhut and of Candès et al.
8.
The Helmberg-Rendl-Vanderbei-Wolkowicz/Kojima-Shindoh-Hara/Monteiro and Nesterov-Todd search directions have been used in many primal-dual interior-point methods for semidefinite programs. This paper proposes an efficient method for computing the two directions when the semidefinite program to be solved is large-scale and sparse.
9.
We propose a new stochastic first-order algorithm for solving sparse regression problems. In each iteration, our algorithm utilizes a stochastic oracle of the subgradient of the objective function. Our algorithm is based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory lectures on convex optimization: a basic course, Kluwer, Amsterdam, 2003). The convergence rate of our algorithm depends continuously on the noise level of the gradient. In particular, in the limiting case of noiseless gradient, the convergence rate of our algorithm is the same as that of optimal deterministic gradient algorithms. We also establish some large deviation properties of our algorithm. Unlike existing stochastic gradient methods with optimal convergence rates, our algorithm has the advantage of readily enforcing sparsity at all iterations, which is a critical property for applications of sparse regressions.
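The paper's estimate-sequence algorithm is more elaborate, but the property it emphasizes, sparsity enforced at every iteration, can be illustrated with a much simpler proximal stochastic gradient method for the lasso (this sketch, including all names and parameters, is ours and is not the paper's algorithm):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the prox operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd_lasso(A, y, lam=0.1, step=0.02, iters=3000, batch=8, seed=0):
    """Proximal stochastic gradient for min 0.5*||Ax - y||^2/m + lam*||x||_1.
    The stochastic oracle is a mini-batch gradient; because every iterate
    passes through the prox, every iterate is sparse."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=batch, replace=False)  # stochastic oracle
        g = A[idx].T @ (A[idx] @ x - y[idx]) / batch
        x = soft(x - step * g, step * lam)              # prox => sparse iterate
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[0] = 3.0; x_true[5] = -2.0
x_hat = prox_sgd_lasso(A, A @ x_true)
```

Plain stochastic gradient descent on the same objective would produce dense iterates; the prox step is what keeps intermediate solutions usable in sparse-regression applications.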
10.
11.
Exploiting sparsity is essential to improve the efficiency of solving large optimization problems. We present a method for recognizing the underlying sparsity structure of a nonlinear partially separable problem, and show how the sparsity of the Hessian matrices of the problem's functions can be improved by performing a nonsingular linear transformation in the space corresponding to the vector of variables. A combinatorial optimization problem is then formulated to increase the number of zeros of the Hessian matrices in the resulting transformed space, and a heuristic greedy algorithm is applied to this formulation. The resulting method can thus be viewed as a preprocessor for converting a problem with hidden sparsity into one in which sparsity is explicit. When it is combined with the sparse semidefinite programming relaxation by Waki et al. for polynomial optimization problems, the proposed method is shown to extend the performance and applicability of this relaxation technique. Preliminary numerical results are presented to illustrate this claim.
S. Kim's research was supported by Kosef R01-2005-000-10271-0.
M. Kojima's research was supported by Grant-in-Aid for Scientific Research on Priority Areas 16016234.
12.
We describe an iterative algorithm for the minimization of Tikhonov-type functionals which involve sparsity constraints in the form of ℓp-penalties, which have been proposed recently for the regularization of ill-posed problems. In contrast to the well-known algorithm considered by Daubechies, Defrise and De Mol, it uses hard instead of soft thresholding. This hard thresholding algorithm is based on the generalized conditional gradient method. General results on the convergence of the generalized conditional gradient method enable us to prove strong convergence of the iterates. Furthermore, we are able to establish convergence rates of O(n^(-1/2)) for p = 1 and O(λ^n) for 1 < p ≤ 2. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
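The hard-thresholding step that distinguishes this algorithm from the soft-thresholding one is easy to see on a toy linear problem; this Landweber-plus-hard-threshold loop is our simplified stand-in for the generalized conditional gradient iteration analyzed in the paper, not a reproduction of it:

```python
import numpy as np

def hard(x, t):
    """Hard thresholding: keep entries with |x_i| > t, zero the rest.
    Unlike soft thresholding, surviving entries are not shrunk."""
    return np.where(np.abs(x) > t, x, 0.0)

def iht(K, y, t=0.5, step=None, iters=200):
    """Landweber iteration with a hard-thresholding step applied each
    sweep (iterative hard thresholding), illustrating the kind of
    iterate the paper's algorithm produces."""
    if step is None:
        step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1 / ||K||^2 for stability
    x = np.zeros(K.shape[1])
    for _ in range(iters):
        x = hard(x + step * K.T @ (y - K @ x), t)
    return x

K = np.eye(5)
y = np.array([2.0, 0.1, -3.0, 0.0, 0.2])
x = iht(K, y, t=0.5)   # small entries of y are zeroed, large ones kept exactly
```

With soft thresholding the surviving entries would be biased toward zero by t; hard thresholding keeps them unshrunk, which is the trade-off the paper studies.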
13.
We describe an implementation of Conjugate Gradient-type iterative algorithms for problems with general sparsity patterns on a vector processor with a hierarchy of memories, such as the IBM 3090/VF. The implementation relies on the wavefront approach to vectorize the solution of the two sparse triangular systems that arise when using ILU type preconditioners. The data structure is the key to an effective implementation of sparse computational kernels on a vector processor. A data structure is a combination of a layout of the matrix coefficients and ordering schemes for the vectors to increase data locality. With the data structure we describe, we achieve comparable performance on both the matrix-vector product and the solution of the sparse triangular systems on a variety of real problems, such as those arising from large scale reservoir simulation or structural analysis.
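The wavefront approach groups the rows of a sparse triangular system into levels that can be processed as vector operations; a minimal sketch of the level computation (a dense array stands in for a real sparse format here, for clarity):

```python
import numpy as np

def wavefront_levels(L):
    """Level scheduling for sparse forward substitution: row i of the
    lower-triangular matrix L depends on every row j < i with L[i, j] != 0.
    Rows sharing a level have no mutual dependencies, so each level (a
    'wavefront') can be solved as one vector operation."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [level[j] for j in range(i) if L[i, j] != 0]
        level[i] = 1 + max(deps, default=-1)   # rows with no deps get level 0
    return level

L = np.array([[1, 0, 0, 0],
              [2, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 3, 4, 1]], dtype=float)
```

Rows 0 and 2 form the first wavefront, row 1 the second, row 3 the third; the vector length per level, and hence the achievable vectorization, depends on the sparsity pattern and ordering, which is the point of the data-structure discussion above.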
14.
15.
16.
Recent scientific applications produce data that are too large for storing or rendering for further statistical analysis. This motivates the construction of an optimum mechanism to choose only a subset of the available information and to draw inferences about the parent population using only the stored subset. This paper addresses the issue of estimating a parameter from such filtered data. Instead of all the observations, we observe only a few chosen linear combinations of them and treat the remaining information as missing. From the observed linear combinations we try to estimate the parameter using an EM-based technique under the assumption that the parameter is sparse. In this paper we propose two related methods called ASREM and ESREM. The methods developed here are also used for hypothesis testing and the construction of confidence intervals. A similar data-filtering approach already exists in the signal sampling paradigm, for example, Compressive Sampling introduced by Candès et al. (Commun Pure Appl Math 59(8):1207-1223, 2006) and Donoho (IEEE Trans Inf Theory 52:1289-1306, 2006). The methods proposed in this paper are not claimed to outperform all the available techniques of signal recovery; rather, our methods are suggested as an alternative way of data compression using the EM algorithm. However, we shall compare our methods to one standard algorithm, viz., robust signal recovery from noisy data using min-ℓ1 with quadratic constraints. Finally we shall apply one of our methods to a real-life dataset.
17.
Operations Research Letters 2022, 50(6):632-638
We present a heuristic approach for convex optimization problems containing different types of sparsity constraints. Whenever the support is required to belong to a matroid, we propose an exchange heuristic adapting the support in every iteration. The entering non-zero is determined by considering the dual multipliers of the bounds on variables being fixed to zero. While this algorithm is purely heuristic, we show experimentally that it often finds near-optimal solutions for cardinality-constrained knapsack problems and for sparse regression problems.
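The exchange idea can be sketched for cardinality-constrained least squares, where the gradient at a variable fixed to zero plays the role of the dual multiplier of its bound; this simplified analogue (all names, the swap rule, and the tolerance are ours, not the paper's method) swaps the most attractive zeroed variable for the weakest support member:

```python
import numpy as np

def exchange_heuristic(A, y, k, iters=50):
    """Support-exchange sketch for min ||Ax - y|| s.t. |supp(x)| <= k:
    refit on the current support, use the residual correlations as
    'multipliers' of the zeroed variables, and swap one in per iteration."""
    S = list(range(k))                         # arbitrary initial support
    for _ in range(iters):
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        g = np.abs(A.T @ (y - A[:, S] @ xs))   # multipliers of zeroed vars
        g[S] = -np.inf                         # already in the support
        enter = int(np.argmax(g))
        if g[enter] <= 1e-10:
            break                              # no variable wants to enter
        S[int(np.argmin(np.abs(xs)))] = enter  # drop the weakest member
    return sorted(S)

y = np.zeros(10); y[3] = 5.0; y[7] = 2.0
support = exchange_heuristic(np.eye(10), y, 2)  # recovers the true support
```

For a matroid constraint, as in the paper, the swap must additionally keep the support independent; the cardinality constraint used here is the simplest (uniform-matroid) case.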
18.
In this paper, we consider nonlinear inverse problems where the solution is assumed to have a sparse expansion with respect to a preassigned basis or frame. We develop a scheme which allows us to minimize a Tikhonov functional where the usual quadratic regularization term is replaced by a one-homogeneous (typically a weighted ℓp) penalty on the coefficients (or isometrically transformed coefficients) of such expansions. For p < 2, the regularized solution will have a sparser expansion with respect to the basis or frame under consideration. The computation of the regularized solution amounts in our setting to a Landweber fixed-point iteration with a projection applied in each fixed-point iteration step. The performance of the resulting numerical scheme is demonstrated by solving the nonlinear inverse single photon emission computerized tomography (SPECT) problem.
19.
In this paper we describe an automatic procedure for successively reducing the set of possible nonzeros in a Jacobian matrix until eventually the exact sparsity pattern is obtained. The dependence information needed in this probing process consists of "Boolean" Jacobian-vector products and possibly also vector-Jacobian products, which can be evaluated exactly by automatic differentiation or approximated by divided differences. The latter approach yields correct sparsity patterns, provided there is no exact cancellation at the current argument. Starting from a user-specified, or by default initialized, probability distribution, the procedure suggests a sequence of probing vectors. The resulting information is then used to update the probabilities that certain elements are nonzero according to Bayes' law. The proposed probing procedure is found to require only O(log n) probing vectors on randomly generated matrices of dimension n with a fixed number of nonzeros per row or column. This result has been proven for (block-) banded matrices, and for general sparsity patterns finite termination of the probing procedure can be guaranteed.
Received: April 29, 2000 / Accepted: September 2001 / Published online: April 12, 2002
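The paper's Bayesian procedure handles general patterns; to see why O(log n) probes can suffice at all, here is a much-simplified deterministic sketch (function names and the one-nonzero-per-row assumption are ours) in which binary-mask probing vectors spell out each row's column index:

```python
import numpy as np

def boolean_jvp(pattern, v):
    """'Boolean' Jacobian-vector product: which outputs can be nonzero
    when the columns selected by the 0/1 vector v are combined."""
    return (pattern @ v) > 0

def probe_pattern(pattern):
    """Recover the sparsity pattern with about log2(n) probes, under the
    strong illustrative assumption that every row holds exactly one
    nonzero: the Boolean responses to binary-mask probes give the
    bits of each row's column index."""
    m, n = pattern.shape
    bits = max(1, int(np.ceil(np.log2(n))))
    col = np.zeros(m, dtype=int)
    for b in range(bits):
        v = (np.arange(n) >> b) & 1                  # columns with bit b set
        col += boolean_jvp(pattern, v).astype(int) << b
    rec = np.zeros_like(pattern)
    rec[np.arange(m), col] = 1
    return rec

P = np.zeros((5, 8), dtype=int)
P[np.arange(5), [3, 0, 7, 2, 5]] = 1                 # one nonzero per row
```

With several nonzeros per row the responses overlap, which is exactly where the probabilistic updating via Bayes' law becomes necessary.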
20.
Ph. Toint. Mathematical Programming 1981, 21(1):172-181
This paper is concerned with two questions relating to quasi-Newton updates for unconstrained optimization that exploit any sparsity present in the second derivative matrix of the objective function. First, a family of such updates is derived that reduces to any a priori known dense update formula when no sparsity is imposed. This family uses the Frobenius projection of the desired update onto the subspace of matrices that satisfy all the needed conditions. In the second part, we prove that, under mild assumptions, a positive definite sparse quasi-Newton update always exists. The proof of this result includes the explicit determination of such an update.
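One ingredient of that construction is elementary: the Frobenius projection onto the subspace of matrices vanishing outside a sparsity pattern is just entrywise masking. The sketch below (names and example are ours) shows only this masking step; the family in the paper projects onto the intersection of this subspace with the quasi-Newton (secant) conditions, which plain masking alone does not enforce:

```python
import numpy as np

def project_pattern(M, pattern):
    """Frobenius projection of a dense update M onto the subspace of
    matrices that vanish outside the boolean sparsity pattern: zero the
    off-pattern entries, leave on-pattern entries untouched."""
    return np.where(pattern, M, 0.0)

M = np.arange(9.0).reshape(3, 3)                     # stand-in dense update
tridiag = np.abs(np.subtract.outer(np.arange(3), np.arange(3))) <= 1
```

Because the subspace is defined entrywise, masking is exactly the nearest pattern-sparse matrix in the Frobenius norm, which is why the paper can characterize its sparse family through this projection.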