Similar Documents (20 results)
1.
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., the problem of minimizing ℓ1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general ℓ1-regularized convex minimization problems, i.e., the problem of minimizing an ℓ1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale ℓ1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale ℓ1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale ℓ1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms, despite the fact that the CGD method is not specifically designed just to solve these special classes of problems.
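The ℓ1-regularized least squares problem above admits a compact coordinate-descent sketch. The following is a minimal single-coordinate cyclic variant (the paper's CGD uses coordinate blocks chosen by a Gauss-Southwell-type rule, which is not reproduced here), built on the standard soft-thresholding update for the ℓ1 term:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, sweeps=200):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # squared column norms
    r = b.astype(float).copy()         # residual b - Ax (x starts at 0)
    for _ in range(sweeps):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            r += A[:, j] * x[j]        # temporarily remove coordinate j
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]        # restore residual with new x[j]
    return x
```

For an orthonormal A the method converges in a single sweep; e.g. with A = I, b = (3, 0.5, −2) and lam = 1 it returns the soft-thresholded vector (2, 0, −1).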

2.
In this paper, we propose a new trust-region-projected Hessian algorithm with a nonmonotonic backtracking interior point technique for linearly constrained optimization. By performing the QR decomposition of an affine scaling equality constraint matrix, the subproblem in the algorithm is changed into the general trust-region subproblem defined by minimizing a quadratic function subject only to an ellipsoidal constraint. By using both the trust-region strategy and the line-search technique, each iteration switches to a backtracking interior point step generated by the trust-region subproblem. The global convergence and fast local convergence rates for the proposed algorithm are established under some reasonable assumptions. A nonmonotonic criterion is used to speed up the convergence in some ill-conditioned cases. Selected from Journal of Shanghai Normal University (Natural Science), 2003, 32(4): 7–13
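The general trust-region subproblem that the abstract above reduces to (minimize a quadratic subject to a norm bound) can be solved for small dense problems directly from an eigendecomposition. A minimal sketch, assuming the standard easy case (it does not handle the so-called hard case, where the gradient has no component along the lowest eigenvector):

```python
import numpy as np

def trust_region_subproblem(H, g, delta):
    """min 0.5*p^T H p + g^T p  s.t. ||p|| <= delta, for symmetric H."""
    evals, V = np.linalg.eigh(H)
    gbar = V.T @ g                      # gradient in the eigenbasis

    def step_norm(lam):
        return np.linalg.norm(gbar / (evals + lam))

    # unconstrained minimizer, if H is positive definite and the step fits
    if evals[0] > 0 and step_norm(0.0) <= delta:
        return V @ (-gbar / evals)

    # otherwise the solution is on the boundary: find lam with ||p(lam)|| = delta
    lo = max(0.0, -evals[0]) + 1e-14
    hi = lo + 1.0
    while step_norm(hi) > delta:        # bracket the root
        hi *= 2.0
    for _ in range(200):                # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        if step_norm(mid) > delta:
            lo = mid
        else:
            hi = mid
    return V @ (-gbar / (evals + 0.5 * (lo + hi)))
```

With H = diag(2, 2) and g = (−2, 0), a large radius gives the interior Newton step (1, 0); radius 0.5 truncates it to (0.5, 0) on the boundary.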

3.
For a bounded system of linear equalities and inequalities, we show that the NP-hard ℓ0-norm minimization problem is completely equivalent to the concave ℓp-norm minimization problem, for a sufficiently small p. A local solution to the latter problem can be easily obtained by solving a provably finite number of linear programs. Computational results frequently leading to a global solution of the ℓ0-minimization problem and often producing sparser solutions than the corresponding ℓ1-solution are given. A similar approach applies to finding minimal ℓ0-solutions of linear programs.
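The ℓ1 problem used here as the baseline for comparison can itself be reduced to a single linear program via the standard split x = u − v with u, v ≥ 0; the paper's concave ℓp objective is instead handled by a finite sequence of such LPs (not shown). A sketch using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """min ||x||_1  s.t.  Ax = b, as an LP in (u, v) with x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                  # sum(u) + sum(v) = ||x||_1 at optimum
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```

For A = [[1, 0, 1], [0, 1, 1]] and b = (1, 1), the sparsest representation x = (0, 0, 1) happens to also be the unique ℓ1 minimizer, and the LP recovers it.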

4.
UOBYQA: unconstrained optimization by quadratic approximation
UOBYQA is a new algorithm for general unconstrained optimization calculations, which takes account of the curvature of the objective function, F say, by forming quadratic models by interpolation. Because no first derivatives are required, each model is defined by ½(n+1)(n+2) values of F, where n is the number of variables, and the interpolation points must have the property that no nonzero quadratic polynomial vanishes at all of them. A typical iteration of the algorithm generates a new vector of variables, t say, either by minimizing the quadratic model subject to a trust region bound, or by a procedure that should improve the accuracy of the model. Then usually F(t) is obtained, and one of the interpolation points is replaced by t. Therefore the paper addresses the initial positions of the interpolation points, the adjustment of trust region radii, the calculation of t in the two cases that have been mentioned, and the selection of the point to be replaced. Further, UOBYQA works with the Lagrange functions of the interpolation equations explicitly, so their coefficients are updated when an interpolation point is moved. The Lagrange functions assist the procedure that improves the model, and they also provide an estimate of the error of the quadratic approximation to F, which allows the algorithm to achieve a fast rate of convergence. These features are discussed and a summary of the algorithm is given. Finally, a Fortran implementation of UOBYQA is applied to several choices of F, in order to investigate accuracy, robustness in the presence of rounding errors, the effects of first derivative discontinuities, and the amount of work. The numerical results are very promising for n ≤ 20, but larger values are problematical, because the routine work of an iteration is of fourth order in the number of variables. Received: December 7, 2000 / Accepted: August 31, 2001 / Published online April 12, 2002
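The count ½(n+1)(n+2) is simply the number of coefficients of a quadratic in n variables, so that many function values determine the model uniquely when the points are unisolvent. A minimal sketch for n = 2 with a hypothetical test quadratic, interpolated at the vertices and edge midpoints of a triangle (a classic unisolvent set):

```python
import numpy as np

def quad_model(points, fvals):
    """Interpolate q(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    through 6 = (n+1)(n+2)/2 points (n = 2)."""
    M = np.array([[1.0, x, y, x * x, x * y, y * y] for x, y in points])
    return np.linalg.solve(M, fvals)   # fails iff the points are not unisolvent

# vertices and edge midpoints of a triangle
pts = [(0, 0), (2, 0), (0, 2), (1, 0), (0, 1), (1, 1)]
F = lambda x, y: 3 + 2 * x - y + 0.5 * x * x + x * y + 4 * y * y
c = quad_model(pts, np.array([F(x, y) for x, y in pts]))
# c recovers the coefficients (3, 2, -1, 0.5, 1, 4) exactly
```

Because the target is itself a quadratic, the interpolant reproduces it; UOBYQA exploits the same linear algebra through explicit Lagrange functions rather than solving this system directly.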

5.
The article presents an analysis of a new methodology for generating meshes minimizing the Lp-norms of the interpolation error or its gradient, p > 0. The key element of the methodology is the construction of a metric from node-based and edge-based values of a given function. For a mesh with N_h triangles, we demonstrate numerically that the Lp-norm of the interpolation error is proportional to N_h^{-1} and the Lp-norm of the gradient of the interpolation error is proportional to N_h^{-1/2}. The methodology can be applied to adaptive solution of PDEs provided that edge-based a posteriori error estimates are available.

6.
Recently, Field, Lewicki, Olshausen, and Sejnowski have reported efforts to identify the "Sparse Components" of image data. Their empirical findings indicate that such components have elongated shapes and assume a wide range of positions, orientations, and scales. To date, sparse components analysis (SCA) has only been conducted on databases of small (e.g., 16 by 16) image patches, and there seems to be limited prospect of dramatically increased resolving power. In this paper, we apply mathematical analysis to a specific formalization of SCA using synthetic image models, hoping to gain insight into what might emerge from a higher-resolution SCA based on n by n image patches for large n but a constant field of view. In our formalization, we study a class of objects F in a functional space; they are to be represented by linear combinations of atoms from an overcomplete dictionary, and sparsity is measured by the ℓp-norm of the coefficients in the linear combination. We focus on the class F = Star_α of black and white images with the black region consisting of a star-shaped set with an α-smooth boundary. We aim to find an optimal dictionary, one achieving the optimal sparsity in an atomic decomposition uniformly over members of the class Star_α. We show that there is a well-defined optimal sparsity of representation of members of Star_α; there are decompositions with finite ℓp-norm for p > 2/(α+1) but not for p < 2/(α+1). We show that the optimal degree of sparsity is nearly attained using atomic decompositions based on the wedgelet dictionary. Wedgelets provide a system of representation by elements in a dyadically organized collection, at all scales, locations, orientations, and positions. The atoms of our atomic decomposition contain both coarse-scale dyadic "blobs," which are simply wedgelets from our dictionary, and fine-scale "needles," which are differences of pairs of wedgelets.
The fine-scale atoms used in the adaptive atomic decomposition are highly anisotropic and occupy a range of positions, scales, and locations. This agrees qualitatively with the visual appearance of empirically determined sparse components of natural images. The set has certain definite scaling properties; for example, the number of atoms of length l scales as 1/l, and, when the object has α-smooth boundaries, the number of atoms with anisotropy ≈ A scales as ≈ A^(α-1). August 16, 1999. Date revised: April 24, 2000. Date accepted: April 4, 2000.

7.
We study the order structure, various duals, and orthomorphisms on vector lattices of sequences that differ from a polynomial (either of bounded or unbounded degree) by a sequence in ℓp.

8.
We present a null-space primal-dual interior-point algorithm for solving nonlinear optimization problems with general inequality and equality constraints. The algorithm approximately solves a sequence of equality constrained barrier subproblems by computing a range-space step and a null-space step in every iteration. The ℓ2 penalty function is taken as the merit function. Under very mild conditions on range-space steps and approximate Hessians, without assuming any regularity, it is proved that either every limit point of the iterate sequence is a Karush-Kuhn-Tucker point of the barrier subproblem and the penalty parameter remains bounded, or there exists a limit point that is either an infeasible stationary point of minimizing the ℓ2 norm of violations of constraints of the original problem, or a Fritz-John point of the original problem. In addition, we analyze the local convergence properties of the algorithm, and prove that by suitably controlling the exactness of range-space steps and selecting the barrier parameter and Hessian approximation, the algorithm generates a superlinearly or quadratically convergent step. The conditions guaranteeing that all slack variables remain positive for a full step are presented.

9.
In this paper we give an upper bound for the Picard number of the rational surfaces which minimally resolve the singularities of toric log Del Pezzo surfaces of given index ℓ. This upper bound turns out to be a quadratic polynomial in the variable ℓ. Received: 18 June 2008

10.
Exploiting sparsity is essential to improve the efficiency of solving large optimization problems. We present a method for recognizing the underlying sparsity structure of a nonlinear partially separable problem, and show how the sparsity of the Hessian matrices of the problem’s functions can be improved by performing a nonsingular linear transformation in the space corresponding to the vector of variables. A combinatorial optimization problem is then formulated to increase the number of zeros of the Hessian matrices in the resulting transformed space, and a heuristic greedy algorithm is applied to this formulation. The resulting method can thus be viewed as a preprocessor for converting a problem with hidden sparsity into one in which sparsity is explicit. When it is combined with the sparse semidefinite programming relaxation by Waki et al. for polynomial optimization problems, the proposed method is shown to extend the performance and applicability of this relaxation technique. Preliminary numerical results are presented to illustrate this claim. S. Kim’s research was supported by Kosef R01-2005-000-10271-0. M. Kojima’s research was supported by Grant-in-Aid for Scientific Research on Priority Areas 16016234.

11.
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we consider the formulation of subproblems in which the objective function is a generalization of the Hestenes-Powell augmented Lagrangian function. The main feature of the generalized function is that it is minimized with respect to both the primal and the dual variables simultaneously. The benefits of this approach include: (i) the ability to control the quality of the dual variables during the solution of the subproblem; (ii) the availability of improved dual estimates on early termination of the subproblem; and (iii) the ability to regularize the subproblem by imposing explicit bounds on the dual variables. We propose two primal-dual variants of conventional primal methods: a primal-dual bound constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1LCL) method. Finally, a new sequential quadratic programming (pdSQP) method is proposed that uses the primal-dual augmented Lagrangian as a merit function.
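For contrast with the primal-dual variants described above, the conventional Hestenes-Powell scheme that the generalized function extends alternates an unconstrained minimization in x with a first-order multiplier update. A minimal sketch on a hypothetical toy problem (min x1² + x2² subject to x1 + x2 = 1, whose solution is (0.5, 0.5) with multiplier −1):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda z: z[0] ** 2 + z[1] ** 2          # objective
c = lambda z: z[0] + z[1] - 1.0              # equality constraint c(z) = 0

def hestenes_powell(x0, y=0.0, rho=10.0, outer=20):
    """Classical augmented Lagrangian loop: minimize in x, then update y."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        L = lambda z: f(z) + y * c(z) + 0.5 * rho * c(z) ** 2
        x = minimize(L, x).x                 # inner unconstrained solve
        y = y + rho * c(x)                   # Hestenes-Powell multiplier update
    return x, y
```

Here the dual variable is frozen during each inner solve; the paper's point is precisely to avoid that by treating x and y jointly.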

12.
We describe a new algorithm, the (k, ℓ)-pebble game with colors, and use it to obtain a characterization of the family of (k, ℓ)-sparse graphs and algorithmic solutions to a family of problems concerning tree decompositions of graphs. Special instances of sparse graphs appear in rigidity theory and have received increased attention in recent years. In particular, our colored pebbles generalize and strengthen the previous results of Lee and Streinu [12] and give a new proof of the Tutte-Nash-Williams characterization of arboricity. We also present a new decomposition that certifies sparsity based on the (k, ℓ)-pebble game with colors. Our work also exposes connections between pebble game algorithms and previous sparse graph algorithms by Gabow [5], Gabow and Westermann [6] and Hendrickson [9]. Ileana Streinu; Research of both authors funded by the NSF under grants NSF CCF-0430990 and NSF-DARPA CARGO CCR-0310661 to the first author.

13.
We prove that there is no single uniform tight frame in Euclidean (unitary) space such that a solution of the ℓ1-norm minimization problem for the frame representation is attained on the frame coefficients. Then we find an exact solution of the ℓ1-minimization problem for the Mercedes-Benz frame in ℝ^N. We also give some examples of connections between optimization problems of various types.
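In ℝ² the Mercedes-Benz frame referred to above is the set of three unit vectors at mutual angles of 120°. A quick numerical check (a sketch of the 2D case only, not the paper's ℝ^N construction) that it is a uniform tight frame with frame bound 3/2:

```python
import numpy as np

# three unit vectors at 120-degree spacing, starting straight up
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.column_stack([np.cos(angles), np.sin(angles)])  # 3 x 2; rows are frame vectors

S = F.T @ F        # frame operator: sum of outer products of the frame vectors
# tight frame: S equals (3/2) * I, and the three vectors sum to zero
```

Tightness means every x in ℝ² is reconstructed as x = (2/3) Σ ⟨x, v_i⟩ v_i, which is what makes the frame coefficients a natural candidate (though, per the abstract, not the ℓ1-minimal one) for representation.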

14.
Estimation of sparse Hessian matrices and graph coloring problems
Large scale optimization problems often require an approximation to the Hessian matrix. If the Hessian matrix is sparse then estimation by differences of gradients is attractive because the number of required differences is usually small compared to the dimension of the problem. The problem of estimating Hessian matrices by differences can be phrased as follows: given the sparsity structure of a symmetric matrix A, obtain vectors d1, d2, …, dp such that Ad1, Ad2, …, Adp determine A uniquely with p as small as possible. We approach this problem from a graph theoretic point of view and show that both direct and indirect approaches to this problem have a natural graph coloring interpretation. The complexity of the problem is analyzed and efficient practical heuristic procedures are developed. Numerical results illustrate the differences between the various approaches. Work supported in part by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
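A minimal illustration of the direct approach for a hypothetical symmetric tridiagonal A, using the simple coloring c(j) = j mod 3: columns of equal color are at least three apart, so their nonzero rows never overlap, and p = 3 products Ad_k recover every entry:

```python
import numpy as np

n = 9
# a symmetric tridiagonal test matrix standing in for a sparse Hessian
A = 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

colors = np.arange(n) % 3
# seed vector d_k is the sum of the unit vectors of color k
D = np.column_stack([(colors == k).astype(float) for k in range(3)])

W = A @ D            # three matrix-vector products (gradient differences in practice)

# each nonzero A[i, j] is read off unambiguously from W[i, colors[j]]
A_est = np.zeros((n, n))
for j in range(n):
    for i in range(max(0, j - 1), min(n, j + 2)):   # tridiagonal pattern
        A_est[i, j] = W[i, colors[j]]
```

Three differences suffice here regardless of n, which is the point of the coloring formulation: p depends on the sparsity structure, not on the dimension.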

15.
We propose necessary and sufficient conditions for a sensing matrix to be “s-semigood” – to allow for exact ℓ1-recovery of sparse signals with at most s nonzero entries under sign restrictions on part of the entries. We express error bounds for imperfect ℓ1-recovery in terms of the characteristics underlying these conditions. These characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse ℓ1-recovery and thus to efficiently computable upper bounds on those s for which a given sensing matrix is s-semigood. We examine the properties of the proposed verifiable sufficient conditions, describe their limits of performance, and provide numerical examples comparing them with other verifiable conditions from the literature.

16.
We are interested in minimizing functionals with ℓ2 data and gradient fitting terms and an ℓ1 regularization term with higher order derivatives in a discrete setting. We examine the structure of the solution in 1D by reformulating the original problem into a contact problem which can be solved by dual optimization techniques. The solution turns out to be a 'smooth' discrete polynomial spline whose knots coincide with the contact points, while its counterpart in the contact problem is a discrete version of a spline with higher defect and contact points as knots. In 2D we modify Chambolle's algorithm to solve the minimization problem with the ℓ1 norm of interacting second order partial derivatives as the regularization term. We show that the algorithm can be implemented efficiently by applying the fast cosine transform. We demonstrate by numerical denoising examples that the ℓ2 gradient fitting term can be used to avoid both edge blurring and staircasing effects.

17.
Several risk management and exotic option pricing models have been proposed in the literature which may price European options correctly. A prerequisite of these models is the interpolation of the market implied volatilities or of the European option price function. However, the no-arbitrage principle places shape restrictions on the option price function. In this paper, an interpolation method is developed to preserve the shape of the option price function. The interpolation is optimal in terms of minimizing the distance between the implied risk-neutral density and the prior approximation function in the L2-norm, which is important when only a few observations are available. We reformulate the problem into a system of semismooth equations so that it can be solved efficiently.

18.
We discuss necessary and sufficient conditions for a sensing matrix to be “s-good”—to allow for exact ℓ1-recovery of sparse signals with s nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect ℓ1-recovery (nonzero measurement noise, nearly s-sparse signal, near-optimal solution of the optimization problem yielding the ℓ1-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse ℓ1-recovery and to efficiently computable upper bounds on those s for which a given sensing matrix is s-good. We also establish instructive links between our approach and basic concepts of Compressed Sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.

19.
Linear support vector machine training can be represented as a large quadratic program. We present an efficient and numerically stable algorithm for this problem using interior point methods, which requires only O(n) operations per iteration. Through exploiting the separability of the Hessian, we provide a unified approach, from an optimization perspective, to ℓ1-norm classification, ℓ2-norm classification, universum classification, ordinal regression and ε-insensitive regression. Our approach has the added advantage of obtaining the hyperplane weights and bias directly from the solver. Numerical experiments indicate that, in contrast to existing methods, the algorithm is largely unaffected by noisy data, and they show that training times for our implementation are consistent and highly competitive. We discuss the effect of using multiple correctors, and monitoring the angle of the normal to the hyperplane to determine termination.

20.
The C. Neumann system describes a particle on the sphere S^n under the influence of a potential that is a quadratic form. We study the case that the quadratic form has ℓ+1 distinct eigenvalues with multiplicities. Each group of m_σ equal eigenvalues gives rise to an O(m_σ)-symmetry in configuration space. The combined symmetry group G is a direct product of ℓ+1 such factors, and its cotangent lift has an Ad*-equivariant momentum mapping. Regular reduction leads to the Rosochatius system on S^ℓ, which has the same form as the Neumann system albeit with an additional effective potential.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号