20 similar documents found; search took 15 ms.
1.
Spectral projected gradient and variable metric methods for optimization with linear inequalities
Andreani Roberto; Birgin Ernesto G.; Martinez Jose Mario; Yuan Jinyun 《IMA Journal of Numerical Analysis》2005,25(2):221-252
A family of variable metric methods for convex constrained optimization was introduced recently by Birgin, Martínez and Raydan. One of the members of this family is the inexact spectral projected gradient (ISPG) method for minimization with convex constraints. At each iteration of these methods a strictly convex quadratic function with convex constraints must be (inexactly) minimized. In the case of the ISPG method it was shown that, in some important applications, iterative projection methods can be used for this minimization. In this paper the particular case in which the convex domain is a polytope described by a finite set of linear inequalities is considered. For solving the linearly constrained convex quadratic subproblem a dual approach is adopted, by means of which the subproblems become (not necessarily strictly) convex quadratic minimization problems with box constraints. These subproblems are solved by means of an active-set box-constraint quadratic optimizer with a proximal-point type unconstrained algorithm for minimization within the current faces. Convergence results and numerical experiments are presented.
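As an illustration of the dual approach sketched in this abstract, the following minimal Python snippet projects a point onto a polytope {x : Ax ≤ b} by maximizing the nonnegatively constrained dual quadratic with a plain projected gradient loop; the paper's active-set box-constraint optimizer is replaced here by this simpler solver, and all names are illustrative.

```python
import numpy as np

def project_polytope_dual(z, A, b, iters=500, tol=1e-10):
    """Project z onto {x : A x <= b} via its dual (illustrative helper).

    The dual of  min 0.5||x - z||^2  s.t.  A x <= b  is the nonnegatively
    constrained concave quadratic
        max_{lam >= 0}  -0.5 lam^T (A A^T) lam + lam^T (A z - b),
    and the primal solution is recovered as x = z - A^T lam.
    """
    m = A.shape[0]
    lam = np.zeros(m)
    H = A @ A.T
    c = A @ z - b
    L = np.linalg.norm(H, 2) + 1e-12               # Lipschitz constant of the dual gradient
    for _ in range(iters):
        grad = H @ lam - c                          # gradient of the negated dual objective
        lam_new = np.maximum(lam - grad / L, 0.0)   # projected gradient step onto lam >= 0
        if np.linalg.norm(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return z - A.T @ lam
```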
2.
This paper presents a new nonmonotone spectral projected gradient algorithm for solving bound constrained optimization problems. The algorithm combines the spectral projected gradient method with the nonmonotone line search proposed by Zhang and Hager [SIAM Journal on Optimization, 2004, 14(4): 1043-1056]. Under reasonable assumptions, global convergence of the algorithm is proved. Numerical results show that, compared with existing spectral projected gradient methods for bound constrained optimization, the proposed algorithm is competitive.
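The Zhang-Hager nonmonotone rule referenced above replaces the usual Armijo reference value f(x_k) with a weighted average C_k of past function values. Below is a minimal Python sketch of one such backtracking step; the parameter values are illustrative.

```python
import numpy as np

def zh_nonmonotone_step(f, x, d, g, C, Q, eta=0.85, delta=1e-4, alpha=1.0):
    """One backtracking step with the Zhang-Hager nonmonotone rule (sketch).

    Accept alpha once f(x + alpha d) <= C + delta * alpha * g.d, where C is
    a weighted average of past function values, then update (C, Q) via
        Q_new = eta * Q + 1,   C_new = (eta * Q * C + f_new) / Q_new.
    """
    gd = g @ d                      # directional derivative; d must be a descent direction
    while f(x + alpha * d) > C + delta * alpha * gd:
        alpha *= 0.5                # simple halving backtracking
    x_new = x + alpha * d
    f_new = f(x_new)
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f_new) / Q_new
    return x_new, C_new, Q_new
```

Initializing with C = f(x0) and Q = 1 recovers the monotone Armijo condition at the first iteration; larger eta lets the search tolerate more nonmonotonicity.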
3.
Based on a differentiable merit function proposed by Taji et al. in "Mathematical Programming, 1993, 58: 369-383", a projected gradient trust region method for the monotone variational inequality problem with convex constraints is presented. Theoretical analysis proves that the proposed algorithm is globally convergent and has a local quadratic convergence rate under reasonable conditions. Numerical results are reported to show the effectiveness of the proposed algorithm.
4.
This paper discusses the global convergence of a class of nonmonotone conjugate gradient methods (NM methods) for nonconvex objective functions. This class includes the nonmonotone counterparts of the modified Polak-Ribière method and the modified Hestenes-Stiefel method as special cases.
5.
In this paper, by means of an active set strategy, we present a projected spectral gradient algorithm for solving large-scale bound constrained optimization problems. A nice property of the active-set estimation technique is that it can identify the active set at the optimal point without requiring the strict complementarity condition, so it can potentially be used to solve degenerate optimization problems. Under appropriate conditions, we show that the proposed method is globally convergent. We also report numerical experiments on bound constrained problems from the CUTEr library. Numerical comparisons with SPG, TRON, and L-BFGS-B show that the proposed method is effective and promising.
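The abstract does not spell out the estimation rule, but a generic active-set estimate for bound constraints looks like the following Python sketch; this is a common pattern, not necessarily the paper's specific technique.

```python
import numpy as np

def estimate_active_set(x, g, lo, hi, eps=1e-6):
    """Generic active-set estimate for bounds lo <= x <= hi (illustrative).

    A bound is flagged active when the iterate sits near it and the
    gradient pushes the variable further into that bound.
    """
    at_lower = (x - lo <= eps) & (g > 0)   # decreasing x_i would leave the box
    at_upper = (hi - x <= eps) & (g < 0)   # increasing x_i would leave the box
    free = ~(at_lower | at_upper)
    return at_lower, at_upper, free
```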
6.
Lev M. Bregman; Yair Censor; Simeon Reich; Yael Zepkowitz-Malachi 《Journal of Approximation Theory》2003,124(2):194-218
We present a modification of Dykstra's algorithm which allows us to avoid projections onto general convex sets. Instead, we calculate projections onto either a half-space or onto the intersection of two half-spaces. Convergence of the algorithm is established and special choices of the half-spaces are proposed. The option to project onto half-spaces instead of general convex sets makes the algorithm more practical. The fact that the half-spaces are quite general enables us to apply the algorithm in a variety of cases and to generalize a number of known projection algorithms.

The problem of projecting a point onto the intersection of closed convex sets receives considerable attention in many areas of mathematics and physics, as well as in other fields of science and engineering such as image reconstruction from projections. In this work we propose a new class of algorithms which allow projection onto certain super half-spaces, i.e., half-spaces which contain the convex sets. Each of the algorithms that we present gives the user freedom to choose the specific super half-space from a family of such half-spaces. Since projecting a point onto a half-space is an easy task to perform, the new algorithms may be more useful in practical situations in which the construction of the super half-spaces themselves is not too difficult.
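Since the entire construction rests on half-space projections being cheap, here is the standard closed-form projection onto a single half-space {x : aᵀx ≤ b}, included for reference:

```python
import numpy as np

def project_halfspace(z, a, b):
    """Exact projection of z onto the half-space {x : a.x <= b}.

    This closed form is what makes half-space projections cheap compared
    with projections onto a general convex set.
    """
    viol = a @ z - b
    if viol <= 0.0:
        return z                     # z already satisfies the constraint
    return z - (viol / (a @ a)) * a  # step back along the normal direction
```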
7.
William La Cruz; José Mario Martínez; Marcos Raydan 《Mathematics of Computation》2006,75(255):1429-1448
A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations is presented. It uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented. An extensive set of numerical experiments that indicate that the new combination is competitive and frequently better than well-known Newton-Krylov methods for large-scale problems is also presented.
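A bare-bones sketch of the residual-as-direction idea with the spectral steplength is given below in Python. The nonmonotone globalization strategy described in the abstract is deliberately omitted, so this stripped-down loop is illustrative only and not guaranteed to converge.

```python
import numpy as np

def spectral_residual_sketch(F, x0, iters=200, tol=1e-8):
    """Bare-bones spectral residual iteration (sketch of the idea only).

    The residual -F(x) is used as the search direction, scaled by the
    spectral steplength sigma = (s.s)/(s.y).
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        x_new = x - sigma * Fx             # residual direction with spectral scaling
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        sy = s @ y
        sigma = (s @ s) / sy if abs(sy) > 1e-16 else 1.0
        x, Fx = x_new, F_new
    return x

# Illustrative usage: solve x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0.
# F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5])
```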
8.
Alfred Auslender Paulo J. S. Silva Marc Teboulle 《Computational Optimization and Applications》2007,38(3):305-327
We consider nonmonotone projected gradient methods based on non-Euclidean distances, which play the role of a barrier for a given constraint set. Our basic scheme uses the resulting projection-like maps, which produce interior trajectories, and combines them with the nonmonotone line search technique originally proposed for unconstrained problems by Zhang and Hager. The combination of these two ideas yields a nonmonotone scheme for constrained nonconvex problems, which is proven to converge to a stationary point. Some variants of this algorithm that incorporate a spectral steplength are also studied and compared with classical nonmonotone schemes based on the usual Euclidean projection. To validate our approach, we report numerical results on bound constrained problems from the CUTEr library collection.
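For concreteness, one classical instance of such an interior projection-like map is the entropy-kernel step on the nonnegative orthant, sketched below. This is a standard example of a non-Euclidean, Bregman-type distance acting as a barrier, not necessarily the exact map used in the paper.

```python
import numpy as np

def entropic_gradient_step(x, g, t):
    """Interior 'projection-like' update on the nonnegative orthant using
    the entropy kernel (an illustrative sketch).

    Minimizing  t*<g, u> + D(u, x)  with D the Kullback-Leibler-type
    entropy distance yields the closed-form multiplicative update below,
    which keeps every iterate strictly inside the feasible set.
    """
    assert np.all(x > 0), "iterates must stay strictly interior"
    return x * np.exp(-t * g)
```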
9.
J. M. Martínez; E. A. Pilotta; M. Raydan 《Journal of Optimization Theory and Applications》2005,125(3):629-651
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation and the whole process is combined with a nonmonotone line search strategy. The simple bounds are then taken into account by placing them in an exponential penalty term that modifies the objective function. The exponential penalty scheme defines the outer iterations of the process. Each outer iteration involves the application of the previously defined preconditioned spectral gradient method for linear equality constrained problems. Therefore, an equality constrained convex quadratic programming problem needs to be solved at every inner iteration. The associated extended KKT matrix remains constant unless the process is reinitiated. In ordinary inner iterations, only the right-hand side of the KKT system changes. Therefore, suitable sparse factorization techniques can be applied and exploited effectively. Encouraging numerical experiments are presented. This research was supported by FAPESP Grant 2001-04597-4 and Grant 903724-6, FINEP and FAEP-UNICAMP, and the Scientific Computing Center of UCV. The authors thank two anonymous referees whose comments helped us to improve the final version of this paper.
10.
ZHOU Bin; GAO Li; DAI Yuhong (School of Mathematical Sciences, LMAM, Peking University, Beijing, China; State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China) 《Science in China Series A (English Edition)》2006,49(5):688-702
Inspired by the success of the projected Barzilai-Borwein (PBB) method for large-scale box-constrained quadratic programming, we propose and analyze the monotone projected gradient methods in this paper. We show by experiments and analyses that for the new methods, it is generally a bad option to compute steplengths based on the negative gradients. Thus in our algorithms, some continuous or discontinuous projected gradients are used instead to compute the steplengths. Numerical experiments on a wide variety of test problems are presented, indicating that the new methods usually outperform the PBB method.
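The PBB baseline the authors compare against can be summarized in a few lines. The sketch below applies the BB1 steplength with componentwise clipping for box constraints; Hmv and the parameter names are illustrative.

```python
import numpy as np

def pbb_box_qp(Hmv, c, lo, hi, x0, iters=500, tol=1e-8):
    """Projected Barzilai-Borwein sketch for
        min 0.5 x^T H x + c^T x,  lo <= x <= hi,
    where Hmv is a function applying H to a vector (illustrative names).
    """
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    g = Hmv(x) + c
    alpha = 1.0
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)       # projection = componentwise clipping
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        g_new = Hmv(x_new) + c
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-16 else 1.0  # BB1 steplength
        x, g = x_new, g_new
    return x
```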
11.
A class of globally convergent conjugate gradient methods
Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large-scale problems. However, unlike quasi-Newton methods, conjugate gradient methods have usually been analyzed individually. In this paper, we propose a class of conjugate gradient methods which can be regarded as a convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools that concern a general method whose scalar β_k has the form φ_k/φ_{k-1}. Consequently, the whole class of conjugate gradient methods can be analyzed uniformly.
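For a concrete instance of the unified form β_k = φ_k/φ_{k-1}, taking φ_k = ||g_k||² recovers the Fletcher-Reeves update, as in this small Python sketch:

```python
import numpy as np

def cg_direction_fr(g_new, g_old, d_old):
    """Conjugate gradient direction update with the Fletcher-Reeves choice.

    This fits the unified form beta_k = phi_k / phi_{k-1} with
    phi_k = ||g_k||^2, a worked instance of the class discussed above.
    """
    beta = (g_new @ g_new) / (g_old @ g_old)
    return -g_new + beta * d_old
```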
12.
《Optimization》2012,61(4):993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and have recently been much studied. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. With the Wolfe line search, the two-parameter family of methods is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. Numerical results show that the methods are efficient on the given test problems. In addition, methods related to this family are discussed in a unified way.
13.
A three-parameter family of nonlinear conjugate gradient methods
In this paper, we propose a three-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes the six already existing practical nonlinear conjugate gradient methods, but also subsumes some other families of nonlinear conjugate gradient methods as its subfamilies. With Powell's restart criterion, the three-parameter family of methods with the strong Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. This paper can also be regarded as a brief review of nonlinear conjugate gradient methods.
14.
15.
Global convergence analysis of a class of nonmonotone conjugate gradient methods without the sufficient descent condition
In [3], Liu et al. investigated the global convergence of conjugate gradient methods. In that paper they allowed β_k to be selected in a wider range, and the global convergence of the corresponding algorithm without the sufficient descent condition was proved. This paper investigates the global convergence of the nonmonotone conjugate gradient method under the same conditions.
16.
A new class of memory gradient methods and their global convergence
This paper studies memory gradient methods for solving unconstrained optimization problems, which use information from the current and previous iterates to generate a descent direction, yielding a new class of algorithms for unconstrained optimization. Global convergence is proved under the Wolfe line search. The new algorithms are simple in structure, require no matrix computation or storage, and are suitable for large-scale optimization problems. Numerical experiments show that the algorithms are effective.
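The general shape of a memory gradient direction, built from previous iterates without storing matrices, can be sketched as follows; the weights are left abstract, since the paper's specific choice is not reproduced here.

```python
import numpy as np

def memory_gradient_direction(g, past_dirs, betas):
    """Generic memory gradient direction (sketch of the general form only):
        d_k = -g_k + sum_i betas[i] * past_dirs[i].

    No matrices are computed or stored, which is what makes the approach
    attractive for large-scale problems.
    """
    d = -g.copy()
    for beta, d_prev in zip(betas, past_dirs):
        d += beta * d_prev
    return d
```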
17.
We consider the method for constrained convex optimization in a Hilbert space consisting of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying Σ_{k=0}^∞ α_k = ∞ and Σ_{k=0}^∞ α_k² < ∞, and ε_k is chosen so that ε_k ≤ μα_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research of this author was partially supported by CNPq grant nos. 301280/86 and 300734/95-6.
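A minimal Python rendering of the method described above, using exact subgradients (ε_k = 0, which trivially satisfies ε_k ≤ μα_k) and the classical stepsizes α_k = 1/(k+1), which meet both series conditions:

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, iters=1000):
    """Projected subgradient sketch with divergent-series stepsizes.

    alpha_k = 1/(k+1) satisfies sum alpha_k = inf and sum alpha_k^2 < inf.
    Exact subgradients are used here (eps_k = 0).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        alpha = 1.0 / (k + 1)
        u = subgrad(x)                  # any element of the subdifferential at x
        nu = np.linalg.norm(u)
        if nu > 0:
            u = u / nu                  # step along the normalized direction
        x = project(x - alpha * u)
    return x

# Illustrative usage: minimize |x - 2| over [0, 1]; the minimizer is x = 1.
# subgrad = lambda x: np.sign(x - 2.0)
# project = lambda x: np.clip(x, 0.0, 1.0)
```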
18.
We consider the convex composite problem of minimizing the sum of a strongly convex function and a general extended-valued convex function. We present a dual-based proximal gradient scheme for solving this problem. We show that although the dual objective function sequence converges to the optimal value at a rate of O(1/k²), the rate of convergence of the primal sequence is of order O(1/k).
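The basic operation behind such a scheme is the proximal gradient step; a short sketch is given below, where prox_g is a user-supplied proximal operator and the signature is illustrative. Strong convexity of the primal is what makes the dual gradient Lipschitz, so this step can be applied on the dual side.

```python
def proximal_gradient_step(x, grad_f, prox_g, L):
    """One proximal gradient step (illustrative signature):
        x+ = prox_{g/L}( x - grad_f(x)/L ).

    prox_g(v, t) should return argmin_u  g(u) + ||u - v||^2 / (2t).
    """
    return prox_g(x - grad_f(x) / L, 1.0 / L)
```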
19.
In this paper, we present a general scheme for bundle-type algorithms which includes a nonmonotone line search procedure and for which global convergence can be proved. Some numerical examples are reported, showing that the nonmonotonicity can be beneficial from a computational point of view. This work was partially supported by the National Research Program on Metodi di ottimizzazione per le decisioni, Ministero dell'Università e della Ricerca Scientifica e Tecnologica, and by ASI: Agenzia Spaziale Italiana.
20.
Yu-Hong Dai 《Mathematics of Computation》2003,72(243):1317-1328
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity of small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.