Similar Literature
20 similar records found
1.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The spectral conjugate gradient algorithm SCG of Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the Beale–Powell restart philosophy. The parameter scaling the gradient is selected either as the spectral gradient or in an anticipative manner, by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results on a set of 500 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient algorithm SCG. The author was awarded the Romanian Academy Grant 168/2003.
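For reference, the spectral gradient scaling mentioned above has a standard closed form: with s = x_k − x_{k−1} and y = g_k − g_{k−1}, the scaling is θ_k = (sᵀs)/(sᵀy). A minimal sketch (the safeguard against nonpositive curvature is our assumption, not taken from the paper):

```python
import numpy as np

def spectral_scaling(s, y, fallback=1.0):
    """Spectral gradient scaling theta = (s's) / (s'y), where
    s = x_k - x_{k-1} and y = g_k - g_{k-1}.  When the curvature s'y
    is not positive the ratio is not a valid scaling, so we return a
    fallback value (an assumption made for this sketch)."""
    sy = float(s @ y)
    if sy <= 1e-12:
        return fallback
    return float(s @ s) / sy
```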

2.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results for a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the (classical) conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française Informat. Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to the CPU time metric it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
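Two ingredients named above have standard textbook forms: the memoryless BFGS matrix obtained by updating a scaled identity with the most recent step/gradient-change pair, and Powell's restart criterion. A hedged sketch (the fallback to steepest descent under lost curvature is our safeguard, not necessarily the paper's):

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y, theta=1.0):
    """Direction d = -H g, where H is the memoryless BFGS matrix obtained
    by updating theta*I with the latest pair s = x_{k+1} - x_k,
    y = g_{k+1} - g_k.  A sketch of the standard update; the paper's
    preconditioned variant is more elaborate."""
    sy = float(s @ y)
    if sy <= 1e-12:                       # lost positive curvature:
        return -g                         # fall back to steepest descent
    gs, gy = float(g @ s), float(g @ y)
    d = -theta * g
    d = d + (theta * gy / sy) * s + (theta * gs / sy) * y
    d = d - (1.0 + theta * float(y @ y) / sy) * (gs / sy) * s
    return d

def powell_restart(g_new, g_old, c=0.2):
    """Powell's restart criterion: restart when consecutive gradients are
    far from orthogonal, |g_{k+1}' g_k| >= c * ||g_{k+1}||^2 (c = 0.2)."""
    return abs(float(g_new @ g_old)) >= c * float(g_new @ g_new)
```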

3.
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei's approach. Since the proposed method is designed based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is that it uses the available function values in addition to the gradient values. It is shown that, for uniformly convex objective functions, the search directions of the method fulfill the sufficient descent condition, which leads to global convergence. Numerical comparisons between implementations of the method and an efficient scaled conjugate gradient method proposed by Andrei, made on a set of unconstrained optimization test problems from the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.
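The modified secant equation of Zhang et al. is commonly written as B_{k+1}s_k = y_k + (ϑ_k/(s_kᵀs_k))s_k with ϑ_k = 6(f_k − f_{k+1}) + 3(g_k + g_{k+1})ᵀs_k, which is how function values enter alongside gradients. A hedged sketch of that commonly cited form (the paper works with a revised variant):

```python
import numpy as np

def modified_secant_y(f_old, f_new, g_old, g_new, s):
    """Modified secant vector of Zhang et al.: replaces y_k = g_new - g_old
    by y_k + theta_k/(s's) * s, with
    theta_k = 6*(f_k - f_{k+1}) + 3*(g_k + g_{k+1})' s_k,
    so that function values enter the update alongside gradients.
    A sketch of the commonly cited form (with u_k = s_k)."""
    y = g_new - g_old
    theta = 6.0 * (f_old - f_new) + 3.0 * float((g_old + g_new) @ s)
    return y + theta / float(s @ s) * s
```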

4.
Two trust region algorithms for unconstrained nonlinear optimization problems are presented: a monotone one and a nonmonotone one. Both solve the trust region subproblem by the spectral projected gradient (SPG) method proposed by Birgin, Martínez and Raydan (SIAM J. Optim. 10(4):1196–1211, 2000). SPG is a nonmonotone projected gradient algorithm for solving large-scale convex-constrained optimization problems. It combines the classical projected gradient method with the spectral gradient choice of steplength and a nonmonotone line search strategy. The simplicity of this scheme (it requires only matrix-vector products and one projection per iteration) and its rapid convergence fit nicely with globalization techniques based on the trust region philosophy for large-scale problems. In the nonmonotone algorithm the trial step is accepted via a rule that can be considered a generalization of the well-known fraction-of-Cauchy-decrease condition and of the nonmonotone line search proposed by Grippo, Lampariello and Lucidi (SIAM J. Numer. Anal. 23:707–716, 1986). Convergence properties and extensive numerical results are presented. Our results establish the robustness and efficiency of the new algorithms.
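The nonmonotone rule of Grippo, Lampariello and Lucidi accepts a trial point against the maximum of the last M function values rather than the current one. A minimal sketch of that acceptance test (the parameter names gamma and M are conventional choices, not this paper's):

```python
def nonmonotone_accept(f_trial, recent_f, directional_deriv, step, gamma=1e-4):
    """Grippo-Lampariello-Lucidi acceptance test:
    f(x + step*d) <= max(last M objective values) + gamma*step*g'd,
    with directional_deriv = g'd < 0 for a descent direction d and
    recent_f the list of the last M objective values."""
    return f_trial <= max(recent_f) + gamma * step * directional_deriv
```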

5.
A computational technique for unconstrained optimal control problems is presented. First, an Euler discretization is carried out to obtain a finite-dimensional approximation of the continuous-time (infinite-dimensional) problem. Then, an inexact restoration (IR) method due to Birgin and Martínez is applied to the discretized problem to find an approximate solution. Convergence of the technique to a solution of the continuous-time problem is facilitated by the convergence of the IR method and the convergence of the discrete (approximate) solution as finer subdivisions are taken. The technique is numerically demonstrated by means of a problem involving the van der Pol system; comprehensive comparisons are made with the Newton and projected Newton methods.
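As an illustration of the discretization step, the sketch below applies forward Euler to a controlled van der Pol system x1' = x2, x2' = −x1 + μ(1 − x1²)x2 + u, which is the textbook form of that oscillator; the paper's exact control problem may differ:

```python
import numpy as np

def euler_discretize_vdp(x0, u, h, mu=1.0):
    """Forward-Euler discretization of a controlled van der Pol system.
    x0: initial state of shape (2,); u: control sequence of length N;
    h: step size.  Returns the state trajectory of shape (N+1, 2)."""
    x = np.empty((len(u) + 1, 2))
    x[0] = x0
    for k, uk in enumerate(u):
        x1, x2 = x[k]
        x[k + 1] = [x1 + h * x2,
                    x2 + h * (-x1 + mu * (1.0 - x1**2) * x2 + uk)]
    return x
```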

6.
This article proposes a new conjugate gradient method for unconstrained optimization, obtained by applying the Powell symmetrical technique in a defined sense. Using the Wolfe line search conditions, the global convergence property of the method is obtained, based on a spectral analysis of the conjugate gradient iteration matrix and on the Zoutendijk condition for steepest descent methods. Preliminary numerical results for a set of 86 unconstrained optimization test problems verify the performance of the algorithm and show that the Generalized Descent Symmetrical Hestenes–Stiefel algorithm is competitive with the Fletcher–Reeves (FR) and Polak–Ribière–Polyak (PRP+) algorithms.

7.
The spectral projected gradient method SPG is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai–Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. It is therefore natural to test its performance for solving the subproblems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods that use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested on two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems from the CUTE collection is solved and the performance is compared with that of LANCELOT. Second, a family of location problems in the minimax formulation is solved and compared against the package FFSQP.
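The building block of SPG combines a Barzilai–Borwein steplength with one projection per iteration. A minimal sketch for box constraints (the safeguard bounds on alpha are conventional choices, not taken from this paper):

```python
import numpy as np

def spg_direction(x, g, lo, hi, s, y, a_min=1e-10, a_max=1e10):
    """Spectral projected gradient direction for box constraints:
    d = P(x - alpha*g) - x, where P clips componentwise onto [lo, hi]
    and alpha is the safeguarded Barzilai-Borwein steplength (s's)/(s'y)
    built from s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    sy = float(s @ y)
    alpha = min(max(float(s @ s) / sy, a_min), a_max) if sy > 0 else a_max
    return np.clip(x - alpha * g, lo, hi) - x
```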

8.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter β_k is computed by approximating the Hessian/vector product through finite differences. For the search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms, including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the conjugate gradient method based on a new conjugacy condition by Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton method TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Statist. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
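The forward-difference Hessian/vector approximation itself is standard: H(x)d ≈ (∇f(x + εd) − ∇f(x))/ε. A hedged sketch (the interval formula below is one common safeguarded choice; the paper's exact interval selection may differ):

```python
import numpy as np

def hess_vec_fd(grad, x, d):
    """Forward-difference approximation of the Hessian/vector product:
    H(x) d  ~=  (grad(x + eps*d) - grad(x)) / eps.
    eps = 2*sqrt(eps_mach)*(1 + ||x||)/||d|| is one common safeguarded
    interval choice, assumed here for illustration."""
    norm_d = np.linalg.norm(d)
    if norm_d == 0.0:
        return np.zeros_like(x)
    eps = 2.0 * np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x)) / norm_d
    return (grad(x + eps * d) - grad(x)) / eps
```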

9.
Characterizations of the solution set in terms of subdifferentials play an important role in mathematical programming research. Previous characterizations are based on necessary and sufficient optimality conditions and on invariance properties of subdifferentials. Recently, characterizations of the solution set for essentially quasiconvex programming in terms of the Greenberg–Pierskalla subdifferential were studied by the authors. Unfortunately, there are examples showing that these characterizations do not hold for non-essentially quasiconvex programming. As far as we know, characterizations of the solution set for non-essentially quasiconvex programming have not been studied yet. In this paper, we study characterizations of the solution set in terms of subdifferentials for non-essentially quasiconvex programming. For this purpose, we use the Martínez-Legaz subdifferential, which was introduced by Martínez-Legaz as a special case of Moreau's c-subdifferential. We derive necessary and sufficient optimality conditions for quasiconvex programming by means of the Martínez-Legaz subdifferential and, as a consequence, investigate characterizations of the solution set in terms of the Martínez-Legaz subdifferential. In addition, we compare our results with previous ones. We show an invariance property of the Greenberg–Pierskalla subdifferential as a consequence of an invariance property of the Martínez-Legaz subdifferential, and we give characterizations of the solution set for essentially quasiconvex programming in terms of the Martínez-Legaz subdifferential.

10.
A Spectral Conjugate Gradient Method for Unconstrained Optimization
A family of scaled conjugate gradient algorithms for large-scale unconstrained minimization is defined. The Perry, Polak–Ribière and Fletcher–Reeves formulae are compared using a spectral scaling derived from Raydan's spectral gradient optimization method. The best combination of formula, scaling and initial choice of step-length is compared against well-known algorithms using a classical set of problems. An additional comparison involving an ill-conditioned estimation problem in optics is presented. Accepted 22 August 2000. Online publication 26 February 2001.

11.
We find a Bäcklund transformation between the four-dimensional Martínez Alonso–Shabat and Ferapontov–Khusnutdinova equations. We also discuss an integrable deformation of the Martínez Alonso–Shabat equation.

12.
We introduce a trust region algorithm for the minimization of nonsmooth functions with linear constraints. At each iteration, the objective function is approximated by a model function that satisfies a set of assumptions stated recently by Qi and Sun in the context of unconstrained nonsmooth optimization. The trust region iteration begins with the resolution of an "easy problem", as in recent works of Martínez and Santos and of Friedlander, Martínez and Santos for smooth constrained optimization. In practical implementations we use the infinity norm for defining the trust region, which fits well with the domain of the problem. We prove global convergence and report numerical experiments related to a parameter estimation problem. Supported by FAPESP (Grant 90/3724-6), FINEP and FAEP-UNICAMP. Supported by FAPESP (Grants 90/3724-6 and 93/1515-9).

13.
Global Convergence of a Class of Hybrid Conjugate Gradient Methods under the Wolfe Line Search
This paper presents a new conjugate gradient formula, which is equivalent to the DY formula under exact line search, and gives related properties of the new formula. Combining the new formula with the DY formula, a new hybrid conjugate gradient method is proposed. The new algorithm generates a descent direction under the Wolfe line search, and its global convergence is proved. Numerical examples are given.
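For reference, the DY (Dai–Yuan) formula referred to above is β_k^DY = ‖g_{k+1}‖² / (d_kᵀ(g_{k+1} − g_k)). A minimal sketch (the guard on a vanishing denominator is our assumption):

```python
import numpy as np

def beta_dy(g_new, g_old, d):
    """Dai-Yuan parameter beta_DY = ||g_{k+1}||^2 / (d_k'(g_{k+1} - g_k)).
    Returns 0 when the denominator vanishes (a safeguard added for this
    sketch)."""
    denom = float(d @ (g_new - g_old))
    return float(g_new @ g_new) / denom if abs(denom) > 1e-12 else 0.0
```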

14.
In this paper we show that an iterative sequence generated by the Halpern algorithm converges to a fixed point in the case of complete CAT(κ) spaces. Similar results for Hadamard manifolds were obtained in [Li, C., López, G., Martín-Márquez, V.: Iterative algorithms for nonexpansive mappings on Hadamard manifolds. Taiwanese J. Math., 14, 541–559 (2010)], but we study a much more general case. Moreover, we discuss the Halpern iteration procedure for set-valued mappings.
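In linear spaces the Halpern scheme reads x_{n+1} = α_n u + (1 − α_n) T x_n for a nonexpansive map T, an anchor point u, and coefficients α_n → 0 with Σα_n = ∞. A minimal Euclidean sketch (the coefficient choice α_n = 1/(n+2) is a classical one, assumed here; in the paper's CAT(κ) setting the convex combination is taken along geodesics instead):

```python
import numpy as np

def halpern_iterate(T, u, x0, num_iters=1000):
    """Halpern iteration x_{n+1} = a_n*u + (1 - a_n)*T(x_n) with the
    classical anchor coefficients a_n = 1/(n + 2), written for points
    in R^d (numpy arrays) as a plain illustration."""
    x = np.asarray(x0, dtype=float)
    for n in range(num_iters):
        a = 1.0 / (n + 2)
        x = a * u + (1.0 - a) * T(x)
    return x
```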

15.
Optimization, 2012, 61(12): 1457–1471
A modified Polak–Ribière–Polyak conjugate gradient algorithm that satisfies both the sufficient descent condition and the conjugacy condition is presented; these properties hold independently of the line search. The algorithm uses the standard Wolfe line search. Under standard assumptions, we show global convergence of the algorithm. Numerical comparisons with conjugate gradient algorithms, using a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this computational scheme outperforms the known Polak–Ribière–Polyak algorithm, as well as some other unconstrained optimization algorithms.
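For reference, the classical PRP parameter being modified here is β_k^PRP = g_{k+1}ᵀ(g_{k+1} − g_k)/‖g_k‖². A minimal sketch:

```python
import numpy as np

def beta_prp(g_new, g_old):
    """Classical Polak-Ribiere-Polyak parameter
    beta_PRP = g_{k+1}'(g_{k+1} - g_k) / ||g_k||^2."""
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)
```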

16.
Conjugate gradient methods are important for large-scale unconstrained optimization. This paper proposes an acceleration of these methods via a modification of the steplength. The idea is to modify, in a multiplicative manner, the steplength α_k computed by the Wolfe line search conditions, by means of a positive parameter η_k, in such a way as to improve the behavior of the classical conjugate gradient algorithms. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with some conjugate gradient algorithms, using a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that the accelerated computational scheme outperforms the corresponding conjugate gradient algorithms.
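To make the multiplicative idea concrete, here is a hedged sketch in which the Wolfe steplength is rescaled by the minimizer of a one-dimensional quadratic interpolant along the search direction; this is a generic stand-in, since Andrei's actual η_k is computed from gradient values at the two trial points:

```python
def accelerated_steplength(f0, f1, slope0, alpha):
    """Multiplicative correction of a steplength alpha by quadratic
    interpolation: fit q(t) = f0 + slope0*t + c*t^2 through f(x) = f0,
    f(x + alpha*d) = f1, with q'(0) = slope0 = alpha * g'd < 0, then
    rescale alpha by the minimizer of q.  A generic illustration of
    the idea, not the paper's exact scheme."""
    c = f1 - f0 - slope0          # curvature coefficient of the interpolant
    if c <= 0.0:                  # model not convex along d: keep alpha
        return alpha
    t_star = -slope0 / (2.0 * c)  # minimizer of the quadratic model
    return t_star * alpha
```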

17.
In this paper we derive a formula relating the l2 norm of the error to the A-norm of the error in the conjugate gradient algorithm. By approximating the different terms in this formula, we obtain an estimate of the l2 norm during the conjugate gradient iterations. Numerical experiments are given for several matrices. AMS subject classification: 65F10, 65G20.

18.
We derive a new four-dimensional partial differential equation with an isospectral Lax representation by shrinking the symmetry algebra of the reduced quasi-classical self-dual Yang–Mills equation and applying the technique of twisted extensions to the resulting Lie algebra. We then find a recursion operator for symmetries of the new equation and construct a Bäcklund transformation between this equation and the four-dimensional Martínez Alonso–Shabat equation. Finally, we construct extensions of the integrable hierarchies associated with the hyper-CR equation for Einstein–Weyl structures, the reduced quasi-classical self-dual Yang–Mills equation, the four-dimensional universal hierarchy equation, and the four-dimensional Martínez Alonso–Shabat equation.

19.
In this paper a new hybrid conjugate gradient algorithm is proposed and analyzed. The parameter β_k is computed as a convex combination of the Polak–Ribière–Polyak and Dai–Yuan conjugate gradient parameters, i.e. β_k^N = (1 − θ_k)·β_k^PRP + θ_k·β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the conjugacy condition is satisfied, independently of the line search. The line search uses the standard Wolfe conditions. The algorithm generates descent directions, and when the iterates jam the directions satisfy the sufficient descent condition. Numerical comparisons with conjugate gradient algorithms, using a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this hybrid computational scheme outperforms the known hybrid conjugate gradient algorithms. N. Andrei is a member of the Academy of Romanian Scientists, Splaiul Independenţei nr. 54, Sector 5, Bucharest, Romania.
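A minimal sketch of the convex combination itself (in the paper θ_k is computed from the conjugacy condition; here it is simply an input, and the clipping to [0, 1] and the denominator guard are our safeguards):

```python
import numpy as np

def beta_hybrid(g_new, g_old, d, theta):
    """Hybrid parameter beta_N = (1 - theta)*beta_PRP + theta*beta_DY
    for theta in [0, 1], built from the classical PRP and DY formulas."""
    y = g_new - g_old
    b_prp = float(g_new @ y) / float(g_old @ g_old)
    dy = float(d @ y)
    b_dy = float(g_new @ g_new) / dy if abs(dy) > 1e-12 else 0.0
    theta = min(max(theta, 0.0), 1.0)
    return (1.0 - theta) * b_prp + theta * b_dy
```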

20.
Zhao Ting, Liu Hongwei, Liu Zexian, Numerical Algorithms, 2021, 87(4): 1501–1534

In this paper, two new subspace minimization conjugate gradient methods based on p-regularization models are proposed, in which a special scaled norm in the p-regularization model is analyzed. Different choices of the special scaled norm lead to different solutions to the p-regularized subproblem. Based on analyses of the solutions in a two-dimensional subspace, we derive new directions satisfying the sufficient descent condition. With a modified nonmonotone line search, we establish the global convergence of the proposed methods under mild assumptions. R-linear convergence of the proposed methods is also analyzed. Numerical results show that, on the CUTEr library, the proposed methods are superior to four conjugate gradient methods, which were proposed by Hager and Zhang (SIAM J. Optim. 16(1):170–192, 2005), Dai and Kou (SIAM J. Optim. 23(1):296–320, 2013), Liu and Liu (J. Optim. Theory. Appl. 180(3):879–906, 2019) and Li et al. (Comput. Appl. Math. 38(1):2019), respectively.
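For orientation, p-regularized subproblems of this kind generically take the form below, minimized over a low-dimensional subspace S_k; this is the standard shape of such models (with B_k a Hessian approximation and σ_k > 0 the regularization parameter), while the paper's special scaled norm replaces the Euclidean one:

```latex
\min_{d \in \mathcal{S}_k} \; m_k(d)
  = g_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d
  + \tfrac{\sigma_k}{p}\, \lVert d \rVert^{p}, \qquad p > 2 .
```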

