Similar Articles
20 similar articles found.
1.
Summary. The one-dimensional discrete Poisson equation on a uniform grid with n points produces a linear system of equations with a symmetric, positive-definite coefficient matrix. Hence, the conjugate gradient method can be used, and standard analysis gives an upper bound of O(n) on the number of iterations required for convergence. This paper introduces a systematically defined set of solutions dependent on a parameter β, and, for several values of β, presents exact analytic expressions for the number of steps k(β, τ, n) needed to achieve accuracy τ. The asymptotic behavior of these expressions has the form O(n^{α(β)}) as n → ∞ and O(τ^{γ(β)}) as τ → 0. In particular, two choices of β corresponding to nonsmooth solutions give α(β) = 0, i.e., iteration counts independent of n; this is in contrast to the standard bounds. The standard asymptotic convergence behavior, α(β) = 1, is seen for a relatively smooth solution. Numerical examples illustrate and supplement the analysis. Received August 30, 1995 / Revised version received January 23, 1996
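A minimal sketch of this setting: plain conjugate gradient applied to the 1D discrete Poisson matrix, counting iterations for right-hand sides of different smoothness. The stopping rule below is a relative residual test and the test problems are illustrative choices, not the paper's exact solution family or accuracy measure.

```python
import numpy as np

def cg_iterations(A, b, tol=1e-8, max_iter=None):
    # Plain conjugate gradient; returns the iteration count needed to
    # bring the residual norm below tol * ||b||.
    n = len(b)
    max_iter = max_iter or 2 * n
    x = np.zeros(n)
    r = b - A @ x
    d = r.copy()
    rr = r @ r
    for k in range(1, max_iter + 1):
        Ad = A @ d
        alpha = rr / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            return k
        d = r + (rr_new / rr) * d
        rr = rr_new
    return max_iter

# 1D Poisson matrix on a uniform grid with n interior points.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

t = np.linspace(0, 1, n + 2)[1:-1]
b_smooth = np.sin(np.pi * t)                  # smooth right-hand side
b_rough = np.zeros(n); b_rough[n // 2] = 1.0  # point load (nonsmooth solution)
print(cg_iterations(A, b_smooth), cg_iterations(A, b_rough))
```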

2.
In this paper we present a new family of conjugate gradient algorithms. This family originates in the algorithms provided by Wolfe and Lemaréchal for non-differentiable problems. It is shown that the Wolfe-Lemaréchal algorithm is identical to the Fletcher-Reeves algorithm when the objective function is smooth and when line searches are exact. The convergence properties of the new algorithms are investigated. One of them is globally convergent under minimum requirements on the directional minimization.

3.
4.
In this paper, we propose a modified Polak–Ribière–Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergent. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
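The descent property described above can be obtained with a three-term correction of the PRP direction; the sketch below follows the Zhang–Zhou–Li construction (variable names are ours), for which g·d = -||g||² holds whatever the line search, and which collapses to ordinary PRP under exact line searches, since then g·d_prev = 0.

```python
import numpy as np

def modified_prp_direction(g, g_prev, d_prev):
    # Three-term PRP direction d = -g + beta*d_prev - theta*y with
    # theta = (g . d_prev) / ||g_prev||^2, so g . d = -||g||^2 by
    # construction, independently of the line search.
    y = g - g_prev
    gp2 = g_prev @ g_prev
    beta = (g @ y) / gp2        # classical PRP coefficient
    theta = (g @ d_prev) / gp2  # correction restoring descent
    d = -g + beta * d_prev - theta * y
    assert np.isclose(g @ d, -(g @ g))
    return d
```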

5.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN of Shanno and Phua (1976, Algorithm 500, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software, 2, 87–94).

6.
Zheng  Xiuyun  Dong  Xiaoliang  Shi  Jiarong  Yang  Wei 《Numerical Algorithms》2020,84(2):603-608
In Dai and Wen (Numer. Algor. 69, 337–341, 2015), some improvements were presented in the proofs of Theorem 2 and Theorem 4 of Andrei (Numer. Algor. 47,...

7.
Translated from Matematicheskie Modeli i Optimizatsiya Vychislitel'nykh Algoritmov, pp. 50–62, 1994.

8.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results for a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the (classical) conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française Informat. Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to the CPU time metric it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
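A sketch of the core direction computation under the assumptions above: the inverse-Hessian approximation is the memoryless BFGS update of a scaled identity θI with spectral scaling θ = sᵀs/sᵀy, applied to the current gradient without forming any matrix. Variable names are ours; the Powell restart logic and the preconditioner reset are omitted.

```python
import numpy as np

def scaled_memoryless_bfgs_direction(g, s, y):
    # d = -H g, where H is the BFGS update of theta*I built from the
    # last step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k.
    sy = s @ y
    theta = (s @ s) / sy          # spectral scaling parameter
    sg, yg = s @ g, y @ g
    Hg = (theta * g
          - (theta / sy) * (yg * s + sg * y)
          + (1.0 + theta * (y @ y) / sy) * (sg / sy) * s)
    return -Hg
```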

9.
10.
Summary. Consider the problem of minimizing a real function subject to linear equality constraints and to nonnegativity bounds on the variables. A convergence theorem is established for a general algorithm model based on the reduced gradient method. The most meaningful assumptions concern two crucial points: the choice of the independent variables and that of the search direction.

11.
《Optimization》2012,61(2):163-179
In this article, we consider the global convergence of the Polak–Ribière–Polyak conjugate gradient method (abbreviated PRP method) for minimizing functions that have Lipschitz continuous partial derivatives. A novel form of non-monotone line search is proposed to guarantee the global convergence of the PRP method. It is also shown that the PRP method has a linear convergence rate under some mild conditions when the non-monotone line search reduces to a related monotone line search. The new non-monotone line search needs to estimate the Lipschitz constant of the gradients of objective functions, for which two practical estimates are proposed to help find a suitable initial step size for the PRP method. Numerical results show that the new line search approach is efficient in practical computation.
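A generic sketch of a non-monotone Armijo-type line search of the kind discussed here: a Lipschitz estimate fixes the trial step, and acceptance is tested against the largest of the recent function values rather than the current one. This is a Grippo-style rule under our own simplifications, not the article's exact scheme.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, L_est, delta=1e-4, rho=0.5):
    # f_hist: recent objective values, including f(x); g.d must be < 0.
    gd = g @ d
    t = -gd / (L_est * (d @ d))  # initial step from the Lipschitz estimate
    f_ref = max(f_hist)          # non-monotone reference value
    while f(x + t * d) > f_ref + delta * t * gd:
        t *= rho                 # backtrack until acceptance
    return t
```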

12.
Although the study of global convergence of the Polak–Ribière–Polyak (PRP), Hestenes–Stiefel (HS) and Liu–Storey (LS) conjugate gradient methods has made great progress, the convergence of these algorithms for general nonlinear functions is still erratic, not to mention under weak conditions on the objective function and weak line search rules. It is also interesting to investigate whether there exists a general method that converges under the standard Armijo line search for general nonconvex functions, since very few relevant results have been achieved. So in this paper, we present a new general form of conjugate gradient methods whose theoretical significance is attractive. With any formula β_k ≥ 0 and under weak conditions, the proposed method satisfies the sufficient descent condition independently of the line search used and of the function convexity, and its global convergence can be achieved under the standard Wolfe line search or even under the standard Armijo line search. Based on this new method, convergence results for the PRP, HS, LS, Dai–Yuan-type (DY) and Conjugate-Descent-type (CD) methods are established. Preliminary numerical results show the efficiency of the proposed methods.
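One known way to get sufficient descent for an arbitrary β_k ≥ 0, regardless of line search and convexity, is to project the previous direction against the current gradient (Cheng's form); whether this matches the paper's general form is not claimed here, but it illustrates the mechanism.

```python
import numpy as np

def descent_direction(g, d_prev, beta):
    # d = -g + beta * (d_prev - ((g.d_prev)/||g||^2) g), which gives
    # g . d = -||g||^2 for any beta >= 0, hence sufficient descent.
    g2 = g @ g
    d = -g + beta * (d_prev - ((g @ d_prev) / g2) * g)
    assert np.isclose(g @ d, -g2)
    return d
```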

13.
This paper is concerned with proving theoretical results related to the convergence of the conjugate gradient (CG) method for solving positive definite symmetric linear systems. Considering the inverse of the projection of the inverse of the matrix, new relations for ratios of the A-norm of the error and the norm of the residual are provided, starting from some earlier results of Sadok (Numer. Algorithms 2005; 40:201–216). The proofs of our results rely on the well-known correspondence between the CG method and the Lanczos algorithm. Copyright © 2008 John Wiley & Sons, Ltd.

14.
It has been conjectured that the conjugate gradient method for minimizing functions of several variables has a superlinear rate of convergence, but Crowder and Wolfe show by example that the conjecture is false. Now the stronger result is given that, if the objective function is a convex quadratic and if the initial search direction is an arbitrary downhill direction, then either termination occurs or the rate of convergence is only linear, the second possibility being more usual. Relations between the starting point and the initial search direction that are necessary and sufficient for termination in the quadratic case are studied.
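A small numerical illustration of the dichotomy (our own construction, not the paper's example): conjugate gradient with exact line searches on a convex quadratic, started once with d0 = -g0 (finite termination) and once with a perturbed downhill d0 (typically only linear decay).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + 0.5 * np.eye(n)     # SPD Hessian of f(x) = 0.5 x.A.x - b.x
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)    # exact minimizer

def run_cg(d0, iters=n):
    # Exact line searches plus the conjugacy-enforcing beta, but with an
    # arbitrary initial direction d0; returns error norms per step.
    x = np.zeros(n)
    g = -b                        # gradient at x = 0
    d = d0.copy()
    errs = []
    for _ in range(iters):
        Ad = A @ d
        alpha = -(g @ d) / (d @ Ad)         # exact minimizing step
        x += alpha * d
        g = A @ x - b
        d = -g + ((g @ Ad) / (d @ Ad)) * d  # keep d A-conjugate to old d
        errs.append(np.linalg.norm(x - x_star))
    return errs

d0_arb = b + 0.1 * rng.standard_normal(n)   # perturbed downhill direction
assert (-b) @ d0_arb < 0                    # still a descent direction
print(run_cg(b)[-1], run_cg(d0_arb)[-1])    # ~0 vs. visibly larger
```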

15.
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei’s approach. Since the proposed method is designed based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is applying the available function values in addition to the gradient values. It is shown that, for the uniformly convex objective functions, search directions of the method fulfill the sufficient descent condition which leads to the global convergence. Numerical comparisons of the implementations of the method and an efficient scaled conjugate gradient method proposed by Andrei, made on a set of unconstrained optimization test problems of the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.
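The modified secant equation referred to here injects function values as well as gradients; below is a sketch of a common form of that equation (Zhang et al. style, with u = s as the auxiliary vector; whether this matches the paper's exact revision is not claimed).

```python
import numpy as np

def modified_secant_y(f_k, f_next, g_k, g_next, s):
    # Returns y* = y + (theta / s.s) * s with
    # theta = 6*(f_k - f_next) + 3*(g_k + g_next).s,
    # so the update also sees the change in function values.
    y = g_next - g_k
    theta = 6.0 * (f_k - f_next) + 3.0 * (g_k + g_next) @ s
    return y + (theta / (s @ s)) * s
```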

16.
To guarantee global convergence of the standard (unmodified) PRP nonlinear conjugate gradient method for unconstrained optimization, the exact line search or some Armijo type line searches which force the PRP method to generate descent directions have been adopted. In this short note, we propose a non-descent PRP method in another way. We prove that the unmodified PRP method converges globally even for nonconvex minimization by the use of an approximate descent inexact line search.

17.
Since Rosen’s gradient projection method was published in 1960, a rigorous convergence proof of his method has remained an open question. A convergence theorem is given in this paper. Part of this author’s work was done while he studied at the Department of Mathematics, University of California at Santa Barbara, and was supported by the National Science Foundation under Grant No. MCS83-14977. Part of this author’s work was done while he visited the Computer Science Department, University of Minnesota, Minneapolis, and was supported by the National Science Foundation under Grant No. MCS81-01214.

18.
In this paper, the HS conjugate gradient method for minimizing a continuously differentiable function f on R^n is modified to have the global convergence property. Firstly, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization can work for a continuously differentiable function with Curry-Altman’s step size rule and a bounded level set. Secondly, by using a comparison technique, some general convergence properties of the new method with the Armijo step size rule are established. Numerical results show that the new algorithms are efficient.

19.
20.
The conjugate gradient method is one of the most widely used methods in optimization and is extensively applied to large-scale optimization problems; different choices of the parameter β_k yield different conjugate gradient methods. This paper presents a class of conjugate gradient algorithms containing three parameters. Under the given conditions, it is proved that the chosen β_k produces a descent direction at every step, and under the strong Wolfe line search the algorithm is globally convergent.
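For illustration only, a hypothetical three-parameter family built by blending classical coefficients; the article's own parametrization is not reproduced here, and the final clamp at zero is the usual nonnegativity safeguard tied to descent arguments of this kind.

```python
import numpy as np

def beta_family(g, g_prev, d_prev, lam1, lam2, lam3):
    # Hypothetical blend of the Fletcher-Reeves, PRP and Hestenes-Stiefel
    # coefficients; lam1, lam2, lam3 are the three free parameters.
    y = g - g_prev
    gp2 = g_prev @ g_prev
    beta_fr = (g @ g) / gp2
    beta_prp = (g @ y) / gp2
    beta_hs = (g @ y) / (d_prev @ y)
    return max(lam1 * beta_fr + lam2 * beta_prp + lam3 * beta_hs, 0.0)
```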

