Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
Inspired by the performance estimation problem (PEP) approach, we optimize the step-size coefficients of a first-order method for smooth convex minimization whose iterates' gradients converge Q-linearly, by examining the worst-case bound on the function error (i.e., the efficiency). We introduce a new efficient first-order method, called QGM, which has a computationally efficient form similar to that of the optimized gradient method (OGM).
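The QGM iteration itself is not given in the abstract. For orientation only, here is a minimal sketch of the optimized gradient method (OGM) of Kim and Fessler, whose computationally efficient form QGM is said to resemble; `grad_f`, the gradient Lipschitz constant `L`, and `x0` are assumed inputs, and this is OGM, not QGM.

```python
import numpy as np

def ogm(grad_f, L, x0, n_iter):
    """Optimized gradient method (OGM) of Kim and Fessler for L-smooth convex f.

    Shown for orientation only: this is OGM, not the QGM method of the
    abstract, which is merely said to share OGM's efficient form.
    """
    x = y = np.asarray(x0, dtype=float)
    theta = 1.0
    for i in range(n_iter):
        y_new = x - grad_f(x) / L                       # plain gradient step
        if i < n_iter - 1:                              # momentum parameter update;
            theta_new = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        else:                                           # the final step uses factor 8
            theta_new = (1 + np.sqrt(1 + 8 * theta**2)) / 2
        # Nesterov-style momentum plus an extra gradient correction term
        x = (y_new + ((theta - 1) / theta_new) * (y_new - y)
                   + (theta / theta_new) * (y_new - x))
        y, theta = y_new, theta_new
    return x
```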

3.
Consider the problem of minimizing a convex essentially smooth function over a polyhedral set. For the special case where the cost function is strictly convex, we propose a feasible descent method for this problem that chooses the descent directions from a finite set of vectors. When the polyhedral set is the nonnegative orthant or the entire space, this method reduces to a coordinate descent method which, when applied to a certain dual of linearly constrained convex programs with strictly convex essentially smooth costs, contains as special cases a number of well-known dual methods for quadratic and entropy (either −log x or x log x) optimization. Moreover, convergence of these dual methods can be inferred from a general convergence result for the feasible descent method. When the cost function is not strictly convex, we propose an extension of the feasible descent method which makes descent along the elementary vectors of a certain subspace associated with the polyhedral set. The elementary vectors are not stored, but generated using the dual rectification algorithm of Rockafellar. By introducing an ε-complementary slackness mechanism, we show that this extended method terminates finitely with a solution whose cost is within an order of ε of the optimal cost. Because it uses the dual rectification algorithm, this method can exploit the combinatorial structure of the polyhedral set and is well suited for problems with a special (e.g., network) structure. This work was partially supported by the US Army Research Office Contract No. DAAL03-86-K-0171 and by the National Science Foundation Grant No. ECS-85-19058.
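As a concrete instance of the coordinate descent specialization, a minimal sketch over the nonnegative orthant; note that the paper's exact coordinatewise minimization is replaced here, purely for illustration, by a projected coordinate gradient step with assumed per-coordinate Lipschitz constants.

```python
import numpy as np

def coord_descent_nonneg(grad_f, lipschitz, x0, n_epochs):
    """Cyclic coordinate descent for smooth strictly convex f over x >= 0.

    Illustration only: the paper's exact coordinatewise minimization is
    replaced by a projected coordinate gradient step using assumed
    per-coordinate Lipschitz constants lipschitz[i].
    """
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    for _ in range(n_epochs):
        for i in range(x.size):
            g_i = grad_f(x)[i]                          # i-th partial derivative
            x[i] = max(x[i] - g_i / lipschitz[i], 0.0)  # step, then project onto x_i >= 0
    return x
```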

4.
We describe an algorithm for minimizing convex, not necessarily smooth, functions of several variables, based on a descent-direction finding procedure that inherits characteristics of both the standard bundle method and Wolfe's conjugate subgradient method. This is obtained by allowing appropriate upward shifting of the affine approximations of the objective function which contribute to the classic definition of the cutting-plane function. The algorithm embeds a proximity control strategy. Finite termination is proved at a point satisfying an approximate optimality condition, and some numerical results are provided.
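To make the "upward shifting" idea concrete, a hedged sketch of a shifted cutting-plane model: the classical polyhedral model built from subgradients g_i ∈ ∂f(x_i), with nonnegative shifts s_i applied to the affine pieces. The sign convention and the rule for choosing the s_i are the paper's; the form below is only a plausible reading.

```latex
\check f_k(x) = \max_{i \in B_k} \bigl\{ f(x_i) + \langle g_i,\, x - x_i \rangle + s_i \bigr\},
\qquad g_i \in \partial f(x_i), \quad s_i \ge 0 .
```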

5.
This paper considers a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. It studies the computational complexity of quadratic penalty based methods for solving the above class of problems. An iteration of these methods, which is simply an iteration of Nesterov's optimal method (or one of its variants) for approximately solving a smooth penalization subproblem, consists of one or two projections onto the simple convex set. Iteration-complexity bounds expressed in terms of the latter type of iterations are derived for two quadratic penalty based variants, namely: one which applies the quadratic penalty method directly to the original problem and another one which applies the latter method to a perturbation of the original problem obtained by adding a small quadratic term to its objective function.
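A minimal sketch of the direct quadratic-penalty variant under simplifying assumptions: the cone is taken to be {0} (so the constraint reads Ax = b), `proj_X` projects onto the simple set, and each smooth penalty subproblem is solved by plain projected gradient rather than Nesterov's optimal method.

```python
import numpy as np

def quadratic_penalty(grad_f, L_f, A, b, proj_X, x0, rho0=1.0,
                      growth=10.0, outer=8, inner=200):
    """Quadratic-penalty sketch for  min f(x)  s.t.  x in X,  Ax = b.

    Simplifications (assumptions of this sketch, not the paper's setup):
    the general cone constraint is specialized to Ax = b, and each smooth
    subproblem  min_{x in X} f(x) + (rho/2)*||Ax - b||^2  is solved by
    projected gradient with step 1/(L_f + rho*||A||^2) instead of
    Nesterov's optimal method.
    """
    x = np.asarray(x0, dtype=float)
    rho = rho0
    for _ in range(outer):
        L_pen = L_f + rho * np.linalg.norm(A, 2) ** 2   # smoothness of penalized objective
        for _ in range(inner):
            g = grad_f(x) + rho * A.T @ (A @ x - b)     # gradient of penalized objective
            x = proj_X(x - g / L_pen)                   # one projection per iteration
        rho *= growth                                   # tighten the penalty
    return x
```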

6.
Taylor, Adrien; Drori, Yoel. Mathematical Programming (2023), 199(1-2): 557-594.
Mathematical Programming - We present an optimal gradient method for smooth strongly convex optimization. The method is optimal in the sense that its worst-case bound on the distance to an optimal...

7.
We consider a dual method for solving non-strictly convex programs possessing a certain separable structure. This method may be viewed as a dual version of a block coordinate ascent method studied by Auslender [1, Section 6]. We show that the decomposition methods of Han [6, 7] and the method of multipliers may be viewed as special cases of this method. We also prove a convergence result for this method which can be applied to sharpen the available convergence results for Han's methods. The main part of this research was conducted while the author was with the Laboratory for Information and Decision Systems, M.I.T., Cambridge, with support by the U.S. Army Research Office, Contract No. DAAL03-86-K-0171 (Center for Intelligent Control Systems) and by the National Science Foundation under Grant ECS-8519058.
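Since the method of multipliers is cited as a special case, here is a minimal textbook sketch of it for min f(x) subject to Ax = b; the inner minimizer `argmin_aug` is treated as a black box, which is an assumption of this sketch.

```python
import numpy as np

def method_of_multipliers(argmin_aug, A, b, lam0, rho=1.0, n_iter=50):
    """Method of multipliers for  min f(x)  s.t.  Ax = b.

    argmin_aug(lam, rho) is assumed to return the minimizer over x of the
    augmented Lagrangian  f(x) + lam @ (A@x - b) + (rho/2)*||A@x - b||^2;
    how that inner problem is solved is left abstract in this sketch.
    """
    lam = np.asarray(lam0, dtype=float)
    x = None
    for _ in range(n_iter):
        x = argmin_aug(lam, rho)            # primal step: minimize augmented Lagrangian
        lam = lam + rho * (A @ x - b)       # dual ascent on the multipliers
    return x, lam
```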

8.
9.
Proximal bundle methods for minimizing a convex function f generate a sequence {x_k} by taking x_{k+1} to be the minimizer of f̂_k(x) + (u_k/2)||x − x_k||^2, where f̂_k is a sufficiently accurate polyhedral approximation to f and u_k > 0. The usual choice of u_k = 1 may yield very slow convergence. A technique is given for choosing {u_k} adaptively that eliminates sensitivity to objective scaling. Some encouraging numerical experience is reported. This research was supported by Project CPBP.02.15.
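The paper's adaptive rule for u_k is not reproduced in the abstract; the sketch below shows only the generic serious/null-step pattern within which such an update sits. The halving/doubling rule is a placeholder heuristic, not Kiwiel's rule.

```python
def update_weight(u, serious_step, descent_ratio, u_min=1e-10, u_max=1e10):
    """Placeholder proximity-weight update for a proximal bundle method.

    serious_step flags whether the candidate achieved sufficient decrease;
    descent_ratio compares actual to model decrease. The halving/doubling
    rule below is a generic heuristic, NOT the adaptive rule of the paper.
    """
    if serious_step and descent_ratio > 0.7:
        u = max(u / 2.0, u_min)   # model is good: allow longer steps
    elif not serious_step:
        u = min(u * 2.0, u_max)   # null step: stay closer to the center
    return u
```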

10.
We propose implementable conjugate-gradient-type methods for solving a possibly nondifferentiable convex minimization problem by converting the original objective function to a once continuously differentiable function by way of the Moreau–Yosida regularization. The proposed methods make use of approximate function and gradient values of the Moreau–Yosida regularization instead of the corresponding exact values. Global convergence is established under mild conditions.
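A minimal sketch of the regularization these methods rely on: the Moreau–Yosida envelope F_λ(x) = min_z { f(z) + ||z − x||^2 / (2λ) } has gradient ∇F_λ(x) = (x − p(x))/λ, where p(x) is the proximal point. Below, the inner proximal problem is solved only approximately, by subgradient descent with a diminishing step; that inner solver is an assumption of this sketch, as the paper only requires approximate values of controlled accuracy.

```python
import numpy as np

def moreau_yosida_grad(f_subgrad, x, lam=1.0, inner=500, step0=1.0):
    """Approximate gradient of the Moreau-Yosida envelope
        F_lam(x) = min_z  f(z) + ||z - x||^2 / (2*lam),
    via  grad F_lam(x) = (x - p(x)) / lam,  p(x) the proximal point.

    The inner prox problem is solved here by subgradient descent with a
    diminishing step -- an assumption of this sketch; the paper only
    requires approximate prox values of controlled accuracy.
    """
    z = np.asarray(x, dtype=float).copy()
    for k in range(1, inner + 1):
        g = f_subgrad(z) + (z - x) / lam    # subgradient of the inner objective
        z = z - (step0 / k) * g             # diminishing-step subgradient step
    return (x - z) / lam                    # approximate envelope gradient
```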

11.
Boob, Digvijay; Deng, Qi; Lan, Guanghui. Mathematical Programming (2023), 197(1): 215-279.
Mathematical Programming - Functional constrained optimization is becoming more and more important in machine learning and operations research. Such problems have potential applications in...

12.
Optimization (2012), 61(9): 1367-1385.
The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. Based on Marino and Xu's method [G. Marino and H.-K. Xu, A general method for nonexpansive mappings in Hilbert space, J. Math. Anal. Appl. 318 (2006), pp. 43–52], we combine the GPA with the averaged-mapping approach to propose, for the first time, implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Under suitable conditions, strong convergence theorems are obtained.
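A minimal sketch of the GPA itself, for an L-smooth convex f over a closed convex set C with projection `proj_C`; for any fixed step 0 < gamma < 2/L the update is an averaged mapping, which is the property the composite algorithms build on. The article's equilibrium-problem iterations are not reproduced.

```python
import numpy as np

def gradient_projection(grad_f, proj_C, x0, gamma, n_iter=1000, tol=1e-8):
    """Gradient-projection algorithm: x+ = P_C(x - gamma * grad_f(x)).

    For L-smooth convex f and fixed 0 < gamma < 2/L, the update is an
    averaged mapping, so the iterates converge to a constrained minimizer.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = proj_C(x - gamma * grad_f(x))
        if np.linalg.norm(x_new - x) <= tol:   # (near) fixed point of the update
            return x_new
        x = x_new
    return x
```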

13.
Luo, Hao; Chen, Long. Mathematical Programming (2022), 195(1-2): 735-781.
Mathematical Programming - Convergence analysis of accelerated first-order methods for convex optimization problems is developed from the point of view of ordinary differential equation solvers. A...
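The abstract is truncated; as background only (not necessarily the model used in this paper), the canonical instance of the ODE viewpoint is the Su–Boyd–Candès continuous limit of Nesterov's accelerated gradient method:

```latex
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0,
\qquad X(0) = x_0,\ \ \dot{X}(0) = 0 .
```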

14.
Consider a problem of minimizing a separable, strictly convex, monotone and differentiable function on a convex polyhedron generated by a system of m linear inequalities. The problem has a series–parallel structure, with the variables divided serially into n disjoint subsets, whose elements are considered in parallel. This special structure is exploited in two algorithms proposed here for the approximate solution of the problem. The first algorithm solves at most min{m, ν − n + 1} subproblems; each subproblem has exactly one equality constraint and at most n variables. The second algorithm solves a dynamically generated sequence of subproblems; each subproblem has at most ν − n + 1 equality constraints, where ν is the total number of variables. To solve these subproblems, both algorithms use the authors' Projected Newton Bracketing method for linearly constrained convex minimization, in conjunction with the steepest descent method. We report the results of numerical experiments for both algorithms.

15.
This paper studies Newton-type methods for minimization of partly smooth convex functions. Sequential Newton methods are provided using local parameterizations obtained from U-Lagrangian theory and from Riemannian geometry. The Hessian based on the U-Lagrangian depends on the selection of a dual parameter g; by revealing the connection to Riemannian geometry, a natural choice of g emerges for which the two Newton directions coincide. This choice of g is also shown to be related to the least-squares multiplier estimate from a sequential quadratic programming (SQP) approach, and with this multiplier, SQP gives the same search direction as the Newton methods. This paper is dedicated to R.T. Rockafellar, on the occasion of his 70th birthday.

16.
This paper focuses on some customized applications of the proximal point algorithm (PPA) to two classes of problems: the convex minimization problem with linear constraints and a generic or separable objective function, and a saddle-point problem. We treat these two classes of problems uniformly by a mixed variational inequality, and show how the application of PPA with customized metric proximal parameters can yield favorable algorithms which are able to make use of the models' structures effectively. Our customized PPA revisit turns out to unify some algorithms including some existing ones in the literature and some new ones to be proposed. From the PPA perspective, we establish the global convergence and a worst-case O(1/t) convergence rate for this series of algorithms in a unified way.
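In standard customized-PPA notation (the symbols below follow the usual mixed variational inequality formulation VI(Ω, F, θ) and are assumptions of this sketch, not quoted from the paper), each step finds w^{k+1} ∈ Ω such that

```latex
\theta(u) - \theta(u^{k+1}) + (w - w^{k+1})^{\mathsf T}
\bigl\{ F(w^{k+1}) + G\,(w^{k+1} - w^{k}) \bigr\} \;\ge\; 0
\qquad \forall\, w \in \Omega ,
```

where G is the customized positive (semi)definite metric proximal parameter; choosing G to match the model's structure is what makes the resulting subproblems easy.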

17.
For convex minimization we introduce an algorithm based on VU-space decomposition. The method uses a bundle subroutine to generate a sequence of approximate proximal points. When a primal–dual track leading to a solution and zero subgradient pair exists, these points approximate the primal track points and give the algorithm's V, or corrector, steps. The subroutine also approximates dual track points that are U-gradients needed for the method's U-Newton predictor steps. With the inclusion of a simple line search, the resulting algorithm is proved to be globally convergent. The convergence is superlinear if the primal–dual track points and the objective's U-Hessian are approximated well enough. Dedicated to Terry Rockafellar, who has had a great influence on our work via strong support for proximal points and for structural definitions that involve tangential convergence. On leave from INRIA Rocquencourt. Research of the first author supported by the National Science Foundation under Grant No. DMS-0071459 and by CNPq (Brazil) under Grant No. 452966/2003-5. Research of the second author supported by FAPERJ (Brazil) under Grant No. E26/150.581/00 and by CNPq (Brazil) under Grant No. 383066/2004-2.

18.
Oviedo, Harry; Andreani, Roberto; Raydan, Marcos. Numerical Algorithms (2022), 90(3): 1225-1252.
Numerical Algorithms - We introduce a family of weighted conjugate-gradient-type methods, for strictly convex quadratic functions, whose parameters are determined by a minimization model based on a...
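For reference, a sketch of the classical (unweighted) conjugate gradient recursion for a strictly convex quadratic q(x) = 0.5 x^T A x − b^T x, the baseline the proposed weighted family generalizes; the weighting itself is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Textbook CG for  min 0.5*x^T A x - b^T x  with A symmetric positive definite.

    Baseline only: the paper's weighted family modifies the parameter
    choices below via a minimization model that is not shown here.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x                         # residual = negative gradient
    if np.linalg.norm(r) <= tol:
        return x
    p = r.copy()
    for _ in range(max_iter or b.size):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # conjugacy-preserving update
        p = r_new + beta * p
        r = r_new
    return x
```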

19.
We propose two linearly convergent descent methods for finding a minimizer of a convex quadratic spline and establish global error estimates for the iterates. One application of such descent methods is to solve convex quadratic programs, since these can be reformulated as problems of unconstrained minimization of convex quadratic splines. In particular, we derive several new linearly convergent algorithms for solving convex quadratic programs. These algorithms can be classified as row-action methods, matrix-splitting methods, and Newton-type methods.

20.
This paper presents new versions of proximal bundle methods for solving convex constrained nondifferentiable minimization problems. The methods employ ℓ1 or ℓ∞ exact penalty functions with new penalty updates that limit unnecessary penalty growth. In contrast to other methods, some of them are insensitive to problem function scaling. Global convergence of the methods is established, as well as finite termination for polyhedral problems. Some encouraging numerical experience is reported. The ideas presented may also be used in variable metric methods for smooth nonlinear programming. This research was supported by the Polish Academy of Sciences.
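For a constrained problem min f(x) subject to g_i(x) ≤ 0, i = 1, …, m, the two exact penalty functions referred to have the standard textbook forms below, with c > 0 the penalty parameter; the paper's contribution concerns the update rule for c, which is not shown.

```latex
P_1(x; c) = f(x) + c \sum_{i=1}^{m} \max\{0,\, g_i(x)\},
\qquad
P_\infty(x; c) = f(x) + c \max_{1 \le i \le m} \max\{0,\, g_i(x)\} .
```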
