Similar Articles
20 similar articles found
1.
We investigate the problem of reconstructing sparse multivariate trigonometric polynomials from few randomly taken samples by Basis Pursuit and greedy algorithms such as Orthogonal Matching Pursuit (OMP) and Thresholding. While recovery by Basis Pursuit has recently been studied by several authors, we provide theoretical results on the success probability of reconstruction via Thresholding and OMP for both a continuous and a discrete probability model for the sampling points. We present numerical experiments indicating that Basis Pursuit is usually significantly slower than the greedy algorithms, while the recovery rates are very similar.
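For intuition, here is a minimal sketch of OMP applied to random samples of a sparse trigonometric polynomial. This is not the paper's exact experimental setup: the frequency grid, sample count, sparsity, and stopping rule below are illustrative choices.

```python
# Sketch: recover a sparse trigonometric polynomial from random samples via OMP.
import numpy as np

rng = np.random.default_rng(0)
D, m, s = 128, 40, 5                        # frequencies, samples, sparsity (arbitrary)
freqs = np.arange(-D // 2, D // 2)          # frequency grid
pts = rng.random(m)                         # random sampling points in [0, 1)
A = np.exp(2j * np.pi * np.outer(pts, freqs)) / np.sqrt(m)

c = np.zeros(D, dtype=complex)              # sparse ground-truth coefficients
support = rng.choice(D, s, replace=False)
c[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)
y = A @ c                                   # samples of the polynomial

def omp(A, y, s):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit by least squares on all selected columns."""
    residual, idx = y.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    out = np.zeros(A.shape[1], dtype=complex)
    out[idx] = coef
    return out

c_hat = omp(A, y, s)
# Typically True for these sizes, though recovery is probabilistic:
print(set(np.flatnonzero(np.abs(c_hat) > 1e-8)) == set(support))
```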

2.
Finding a sparse approximation of a signal from an arbitrary dictionary is a very useful tool for solving many problems in signal processing. Several algorithms, such as Basis Pursuit (BP) and Matching Pursuits (MP, also known as greedy algorithms), have been introduced to compute sparse approximations of signals, but such algorithms a priori only provide sub-optimal solutions, and in general it is difficult to estimate how close a computed solution is to the optimal one. In a series of recent results, several authors have shown that both BP and MP can successfully recover a sparse representation of a signal provided that it is sparse enough, that is, if its support (which indicates where the nonzero coefficients are located) is of sufficiently small size. In this paper we define identifiable structures that support signals that can be recovered exactly by ℓ1 minimization (Basis Pursuit) and greedy algorithms: if the support of a representation belongs to an identifiable structure, then the representation will be recovered by BP and MP. In addition, we obtain that if the output of an arbitrary decomposition algorithm is supported on an identifiable structure, then one can be sure that the representation is optimal within the class of signals supported by the structure. As an application of the theoretical results, we give a detailed study of a family of multichannel dictionaries with a special structure (corresponding to a structured multichannel representation problem) often used in, e.g., under-determined source separation problems or in multichannel signal processing. An identifiable structure for such dictionaries is defined using a generalization of Tropp’s Babel function, which combines the coherence of the mixing matrix with that of the time-domain dictionary, and we obtain explicit structure conditions which ensure that both ℓ1 minimization and a multichannel variant of Matching Pursuit can recover structured multichannel representations. The multichannel Matching Pursuit algorithm is described in detail, and we conclude with a discussion of some implications of our results in terms of blind source separation based on sparse decompositions. Communicated by Yuesheng Xu
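As background, a minimal sketch of the single-channel Babel function μ1(k) that the paper generalizes; the random Gaussian dictionary here is purely for illustration.

```python
# Sketch: Tropp's Babel function mu_1(k) for a dictionary with unit-norm atoms.
import numpy as np

def babel(D, kmax):
    """mu_1(k) = max over atoms i of the sum of the k largest |<d_i, d_j>|, j != i."""
    G = np.abs(D.T @ D)                 # Gram matrix of the unit-norm atoms
    np.fill_diagonal(G, 0.0)
    rows = -np.sort(-G, axis=1)         # each row sorted in decreasing order
    return rows[:, :kmax].cumsum(axis=1).max(axis=0)

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)          # normalize the atoms
mu1 = babel(D, 5)
# A classical sufficient recovery condition is mu_1(s-1) + mu_1(s) < 1.
print(mu1)
```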

3.
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements—L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm, ROMP, reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the uniform uncertainty principle.
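The following is a simplified sketch of the core ROMP iteration (correlate, take the s largest correlations, keep a "comparable" block of maximal energy, re-fit by least squares); the paper's actual algorithm and stopping rules are more careful.

```python
# Simplified ROMP sketch: the regularization step keeps a block of selected
# correlations whose magnitudes differ by at most a factor of 2.
import numpy as np

def romp(A, y, s):
    sel, coef, residual = [], np.zeros(0), y.copy()
    for _ in range(s):
        u = np.abs(A.T @ residual)
        top = np.argsort(-u)[:s]                 # s largest correlations
        top = top[u[top] > 1e-12]
        if top.size == 0:
            break
        best, best_energy = top[:1], -1.0
        for i in range(top.size):                # blocks are contiguous in sorted order
            j = i
            while j < top.size and u[top[i]] <= 2.0 * u[top[j]]:
                j += 1
            energy = float(np.sum(u[top[i:j]] ** 2))
            if energy > best_energy:
                best, best_energy = top[i:j], energy
        sel = sorted(set(sel) | set(int(t) for t in best))
        coef, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
        residual = y - A[:, sel] @ coef
        if np.linalg.norm(residual) < 1e-10:
            break
    x = np.zeros(A.shape[1])
    x[sel] = coef
    return x
```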

4.
We present two strategies for warmstarting primal-dual interior-point methods for the homogeneous self-dual model when applied to mixed linear and quadratic conic optimization problems. Common to both strategies is their use of only the final (optimal) iterate of the initial problem and their negligible computational cost. This is a major advantage compared to previously suggested strategies, which require a pool of iterates from the solution process of the initial problem; consequently, our strategies are better suited for users who employ optimization algorithms as black-box routines, which usually output only the final solution. Our two strategies differ in that one assumes knowledge only of the final primal solution, while the other assumes the availability of both the primal and dual solutions. We analyze the strategies and deduce conditions under which they result in improved theoretical worst-case complexity. We present extensive computational results showing work reductions of 30-75% when warmstarting compared to coldstarting, depending on the problem class and the magnitude of the problem perturbation. The computational experiments thus substantiate that the warmstarting strategies are useful in practice.
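A minimal sketch of the primal-only idea: blend the previous optimal solution with the standard cold-start point so the warm point stays safely interior. The blending weight and the all-ones cold start are illustrative assumptions; the paper's strategies also set the remaining variables of the homogeneous model.

```python
# Sketch (assumed form): warm-start point as a convex combination of the old
# optimum x_star and the cold-start point e = 1.
import numpy as np

def warm_start_primal(x_star, lam=0.99):   # lam is an illustrative choice
    e = np.ones_like(x_star)
    return lam * x_star + (1.0 - lam) * e
```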

5.
Andreas M. Tillmann, PAMM, 2015, 15(1):735-738
In this note, we show that linear programming and the prominent Basis Pursuit problem (i.e., minimizing the ℓ1-norm of a vector x subject to an underdetermined linear equation system Ax = b) are theoretically equivalent, and briefly discuss possible ramifications regarding computational complexity and practical applicability. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
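One direction of the equivalence is elementary and worth seeing concretely: Basis Pursuit can be written as an LP by splitting x = u - v with u, v >= 0. A minimal sketch using scipy (the problem sizes are arbitrary):

```python
# Sketch: Basis Pursuit min ||x||_1 s.t. Ax = b as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[rng.choice(60, 4, replace=False)] = rng.standard_normal(4)
b = A @ x_true

n = A.shape[1]
c = np.ones(2 * n)                           # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                    # A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.allclose(x_hat, x_true, atol=1e-6))  # typically True at these sizes
```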

6.
We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA and FALM algorithms while preserving their worst-case iteration complexities of $O(\sqrt{L(f)/\epsilon})$. In the original versions of FISTA and FALM, the prox parameter value on each iteration must be bounded from above by its value on prior iterations, so the complexity of the algorithm depends on the smallest value of the prox parameter encountered. Using full backtracking in the framework of accelerated first-order and alternating linearization methods allows for better complexity estimates that depend on the "average" prox parameter value. Moreover, we show that in the case of the compressed sensing problem and Lasso, the additional cost of the new backtracking strategy is negligible compared to the cost of the original FISTA iteration. Our computational results show the benefit of the new algorithm.
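As a rough illustration (not the paper's exact scheme or constants), here is a sketch of FISTA for the Lasso in which the Lipschitz estimate is optimistically relaxed each iteration, so the stepsize may grow again, and is then corrected by backtracking:

```python
# Sketch: FISTA for min 0.5||Ax - b||^2 + lam*||x||_1 with a backtracking
# strategy that also lets the stepsize increase between iterations.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(A, b, lam, iters=200, L=1.0, eta=2.0):
    x = y = np.zeros(A.shape[1])
    t = 1.0
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    for _ in range(iters):
        g = A.T @ (A @ y - b)
        L /= eta                                 # optimistically shrink L (larger step)
        while True:                              # backtrack until the model majorizes f
            x_new = soft_threshold(y - g / L, lam / L)
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```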

7.
This paper focuses on finite minimax problems with many functions and their solution by means of exponential smoothing. We conduct run-time complexity and rate-of-convergence analyses of smoothing algorithms and compare them with those of SQP algorithms. We find that smoothing algorithms may have only a sublinear rate of convergence, but, as shown by our complexity results, this slow rate may be compensated by small computational work per iteration. We present two smoothing algorithms with active-set strategies and novel precision-parameter adjustment schemes. Numerical results indicate that the algorithms are competitive with other algorithms from the literature, especially when a large number of functions are nearly active at stationary points.
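The exponential smoothing underlying such algorithms replaces $\max_i f_i(x)$ by $f_p(x) = \frac{1}{p}\log\sum_i e^{p f_i(x)}$, which over-estimates the max by at most $\log(m)/p$. A minimal numerical illustration:

```python
# Sketch: log-sum-exp (exponential) smoothing of a finite max.
import numpy as np
from scipy.special import logsumexp

def smooth_max(fvals, p):
    """Smoothed maximum of the function values fvals at one point x."""
    return logsumexp(p * np.asarray(fvals)) / p

fvals = np.array([1.0, 0.99, 0.2])            # several functions nearly active
for p in (10.0, 100.0, 1000.0):
    print(p, smooth_max(fvals, p) - fvals.max())  # gap shrinks like log(m)/p
```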

8.
Automatic numerical algorithms attempt to provide approximate solutions that differ from exact solutions by no more than a user-specified error tolerance. The computational cost is often determined adaptively by the algorithm based on the function values sampled. While adaptive, automatic algorithms are widely used in practice, most lack guarantees, i.e., conditions on input functions that ensure that the error tolerance is met.

9.
Optimization, 2012, 61(12):2291-2323
We study and solve the two-stage stochastic extended second-order cone programming problem. We show that the barrier recourse functions and the composite barrier functions for this optimization problem are self-concordant families with respect to the barrier parameters. These results are used to develop primal decomposition-based interior-point algorithms. The worst-case iteration complexity of the developed algorithms is shown to be the same as that of the short- and long-step primal interior-point algorithms applied to the extensive formulation of our problem.

10.
Matrix completion recovers a matrix from a subset of its observed entries by exploiting its low-rank structure, and has wide applications in recommender systems, signal processing, medical imaging, and machine learning. The alternating steepest descent method with exact line search is very effective for large-scale problems because of its low cost per iteration. Building on it, this paper adopts separate exact line searches, which achieve a larger decrease per iteration at the same computational cost and can therefore be expected to further improve efficiency. We analyze the convergence of the new algorithm. Numerical results also show that the proposed algorithm is more effective.
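A sketch of the baseline the paper builds on: alternating steepest descent for matrix completion with exact line search (the paper's refinement with separate line searches is not reproduced here).

```python
# Sketch: alternating steepest descent for min ||mask * (L @ R - M)||_F^2
# over factors L (m x r) and R (r x n), with exact step sizes.
import numpy as np

def asd(M, mask, r, iters=500):
    m, n = M.shape
    rng = np.random.default_rng(3)
    L, R = rng.standard_normal((m, r)), rng.standard_normal((r, n))
    for _ in range(iters):
        res = mask * (L @ R - M)
        G = res @ R.T                                   # gradient in L
        denom = ((mask * (G @ R)) ** 2).sum()
        if denom < 1e-16:
            break
        L -= ((G * G).sum() / denom) * G                # exact step for L
        res = mask * (L @ R - M)
        H = L.T @ res                                   # gradient in R
        denom = ((mask * (L @ H)) ** 2).sum()
        if denom < 1e-16:
            break
        R -= ((H * H).sum() / denom) * H                # exact step for R
    return L, R

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 120))  # rank-8 target
mask = (rng.random(M.shape) < 0.35).astype(float)                  # 35% observed
Lf, Rf = asd(M, mask, r=8)
print(np.linalg.norm(Lf @ Rf - M) / np.linalg.norm(M))             # relative error
```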

11.
The complexity of a computational problem is the order of computational resources which are necessary and sufficient to solve the problem, while the algorithm complexity is the cost of a particular algorithm. A problem has polynomial complexity if its computational complexity is a polynomial in the measure of input size. We introduce polynomial-time algorithms based on generating functions for computing the Myerson value in weighted voting games restricted by a tree. Moreover, we apply the new generating-function algorithm to compute the Myerson value in the Council of Ministers of the European Union restricted by a communication structure.
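The generating-function idea can be seen in miniature on unrestricted weighted voting games: the polynomial $\prod_{j\neq i}(1 + x^{w_j})$ counts, by total weight, the coalitions not containing player i, from which player i's swings follow. A toy sketch with made-up weights (the tree restriction and the Myerson value itself are beyond this sketch):

```python
# Sketch: counting swings in a weighted voting game via generating functions.
def coalition_counts(weights, skip=None):
    """Coefficients of prod_{j != skip} (1 + x^{w_j}): entry k counts
    coalitions of total weight k (a knapsack-style in-place update)."""
    total = sum(weights)
    gf = [0] * (total + 1)
    gf[0] = 1
    for j, w in enumerate(weights):
        if j == skip:
            continue
        for k in range(total, w - 1, -1):   # update high to low to avoid reuse
            gf[k] += gf[k - w]
    return gf

def swings(weights, quota, i):
    """Coalitions S (i not in S) with quota - w_i <= w(S) < quota: i is pivotal."""
    gf = coalition_counts(weights, skip=i)
    return sum(gf[max(quota - weights[i], 0):quota])

w, q = [4, 3, 2, 1], 6                       # illustrative, not the EU Council data
print([swings(w, q, i) for i in range(len(w))])   # raw swing counts per player
```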

12.
Journal of Complexity, 2003, 19(1):85-99
We consider iterative methods for approximating solutions of nonlinear equations, where the iteration cannot be computed exactly but is corrupted by additive perturbations, and the cost of computing each iteration depends on the size of the perturbation. For a class of cost functions, we show that the total cost of producing an ε-approximation can be made proportional to the cost c(ε) of a single iterative step performed with accuracy proportional to ε. We also demonstrate that for some cost functions the total cost is proportional to c(ε)². In both cases matching lower bounds are shown. The results find natural application in establishing the complexity of nonlinear boundary-value problems, where they yield an improvement over the known upper bounds and remove the existing gap between the upper and lower bounds.

13.
Recent studies on kernel-function-based primal-dual interior-point algorithms indicate that a kernel function not only represents a measure of the distance between the iterate and the central path, but also plays a critical role in improving the computational complexity of an interior-point algorithm. In this paper, we propose a new class of parameterized kernel functions for the development of primal-dual interior-point algorithms for solving linear programming problems. The properties of the proposed kernel functions and the corresponding parameters are investigated. The results lead to a complexity bound of $O(\sqrt{n}\,\log n\,\log\frac{n}{\epsilon})$ for the large-update primal-dual interior-point methods, which, to the best of our knowledge, is the best bound known.
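For concreteness, here is one classical parameterized kernel function (not necessarily the family proposed in the paper) and the proximity measure it induces, which vanishes exactly on the central path:

```python
# Sketch: the classical kernel family psi_q(t) = (t^2 - 1)/2 + (t^{1-q} - 1)/(q - 1),
# q > 1, and the induced proximity measure delta(v) = 0.5 * ||psi_q'(v)||.
import numpy as np

def psi(t, q=2.0):
    return 0.5 * (t * t - 1.0) + (t ** (1.0 - q) - 1.0) / (q - 1.0)

def proximity(v, q=2.0):
    """psi_q'(t) = t - t^{-q}; delta(v) = 0 iff v = 1 (the central path)."""
    return 0.5 * np.linalg.norm(v - v ** (-q))

v = np.array([1.0, 1.2, 0.8])
print(psi(v), proximity(v))
```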

14.
We consider a class of smoothing methods for minimization problems where the feasible set is convex but the objective function is not convex, not differentiable, and perhaps not even locally Lipschitz at the solutions. Such optimization problems arise in a wide range of applications, including image restoration, signal reconstruction, variable selection, optimal control, stochastic equilibrium, and spherical approximation. In this paper, we focus on smoothing methods that use the structure of the minimization problem and compositions of smoothing functions for the plus function $(x)_+$. Many existing optimization algorithms and codes can be used in the inner iteration of the smoothing methods. We present properties of the smoothing functions and the gradient consistency of the subdifferential associated with a smoothing function. Moreover, we describe how to update the smoothing parameter in the outer iteration of the smoothing methods to guarantee convergence to a stationary point of the original minimization problem.
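Two standard smoothing functions for the plus function $(x)_+$, of the kind composed in such methods, are sketched below; both converge to $(x)_+$ uniformly as the smoothing parameter μ goes to 0.

```python
# Sketch: smooth approximations of the plus function (x)_+ = max(x, 0).
import numpy as np

def chks(x, mu):
    """Chen-Harker-Kanzow-Smale smoothing: (x + sqrt(x^2 + 4 mu^2)) / 2."""
    return 0.5 * (x + np.sqrt(x * x + 4.0 * mu * mu))

def neural(x, mu):
    """'Neural network' smoothing: mu * log(1 + exp(x / mu)), computed stably."""
    return mu * np.logaddexp(0.0, x / mu)

x = np.linspace(-1, 1, 5)
for mu in (0.1, 0.01):                     # uniform error shrinks with mu
    print(mu, np.max(np.abs(chks(x, mu) - np.maximum(x, 0))),
              np.max(np.abs(neural(x, mu) - np.maximum(x, 0))))
```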

15.
We propose and analyze a fast method for computing the solution of the high-frequency Helmholtz equation in a bounded one-dimensional domain with a variable wave speed function. The method is based on wave splitting: the Helmholtz equation is split into one-way wave equations with source functions, which are solved iteratively for a given tolerance. The source functions depend on the wave speed function and on the solutions of the one-way wave equations from the previous iteration, and the solution of the Helmholtz equation is approximated by the sum of the one-way solutions at every iteration. To reduce the computational cost, the source functions are thresholded, and in the domain where they vanish the one-way wave equations are solved with geometrical optics at a computational cost independent of the frequency; elsewhere, the equations are fully resolved with a Runge–Kutta method. We have been able to show rigorously in one dimension that the algorithm is convergent and that, for fixed accuracy, the computational cost is asymptotically just $\mathcal{O}(\omega^{1/p})$ for a $p$th-order Runge–Kutta method, where ω is the frequency. Numerical experiments indicate that the growth rate of the computational cost is much slower than for a direct method and can be close to the asymptotic rate.

16.
“Classical” first-order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov’s optimal algorithm for smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality, the domain of the problem should admit a “good proximal setup”: (1) the problem domain should satisfy certain geometric conditions of “favorable geometry”, and (2) the practical use of these methods requires the ability to compute the proximal transformation at moderate cost at each iteration. More often than not these two conditions are satisfied in optimization problems arising in computational learning, which explains why proximal-type FO methods have recently become the methods of choice for solving various learning problems. Yet they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with nuclear-norm constraints, where the numerical cost of computing the proximal transformation becomes prohibitive at large scale. We propose a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available. The approach is based on applying FO algorithms to the dual problem and using the accuracy certificates supplied by the method to recover the primal solution. While suboptimal in terms of accuracy guarantees, the proposed approach does not rely upon a “good proximal setup” for the primal problem, but requires only that the problem domain admit a Linear Optimization oracle, i.e., the ability to efficiently maximize a linear form on the domain of the primal problem.

17.
A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven to have optimal-order uniform recovery guarantees using the ubiquitous Restricted Isometry Property (RIP) (Candès and Tao (2005) [11]). However, without specifying a matrix, or class of matrices, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. Bounds on RIP constants can be inserted into the algorithms' RIP-based conditions, translating those conditions into requirements on the signal's sparsity level, length, and number of measurements. We illustrate this approach for Gaussian matrices on three of the state-of-the-art greedy algorithms: CoSaMP (Needell and Tropp (2009) [29]), Subspace Pursuit (SP) (Dai and Milenkovic (2009) [13]) and Iterative Hard Thresholding (IHT) (Blumensath and Davies (2009) [8]). Designed to allow a direct comparison of existing theory, our framework implies that, according to the best available analysis of these three algorithms, IHT requires the fewest compressed sensing measurements, has the best proven stability bounds, and has the lowest per-iteration computational cost.
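Of the three, IHT is the simplest to state: a gradient step followed by keeping the s largest-magnitude entries. A minimal sketch (the unit step size assumes the measurement matrix is normalized so that its spectral norm is at most 1; sizes are illustrative):

```python
# Sketch: Iterative Hard Thresholding for y = Ax with s-sparse x.
import numpy as np

def iht(A, y, s, iters=300):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + A.T @ (y - A @ x)             # gradient step on 0.5||y - Ax||^2
        keep = np.argsort(-np.abs(z))[:s]     # hard-threshold to the s largest
        x = np.zeros_like(z)
        x[keep] = z[keep]
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 200))
A /= np.linalg.norm(A, 2)                     # spectral normalization, ||A||_2 = 1
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, 5)
print(np.linalg.norm(x_hat - x_true))         # small when recovery succeeds
```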

18.
In this paper, a stochastic bottleneck transportation problem, which aims at minimizing the transportation time target subject to a chance constraint, is formulated and an algorithm based on a parametric programming approach is developed to solve it. Further, assuming the transportation costs to be deterministic, a trade-off analysis between the transportation time target and the total cost is given. In addition, methods are developed which give the whole spectrum of optimal solutions to the problems mentioned above. The algorithms are illustrated by numerical examples. The computational complexity of the algorithms is also discussed.

19.
We consider smooth stochastic programs and develop a discrete-time optimal-control problem for adaptively selecting sample sizes in a class of algorithms based on variable sample average approximations (VSAA). The control problem aims to minimize the expected computational cost to obtain a near-optimal solution of a stochastic program and is solved approximately using dynamic programming. The optimal-control problem depends on unknown parameters such as rate of convergence, computational cost per iteration, and sampling error. Hence, we implement the approach within a receding-horizon framework where parameters are estimated and the optimal-control problem is solved repeatedly during the calculations of a VSAA algorithm. The resulting sample-size selection policy consistently produces near-optimal solutions in short computing times as compared to other plausible policies in several numerical examples.

20.
Sequential linear programming and sequential quadratic programming based algorithms are often used to solve nonlinear minimax problems. For large-scale problems, however, these algorithms can be quite expensive, since linear approximations of every nonlinear function are used in the mathematical program approximating the original problem at each iteration. This paper is concerned with algorithms that require, at each iteration, approximations of only a small fraction of the functions; such methods are thus well suited for large-scale problems. Global convergence of this class of algorithms is proven.
