Similar Documents
20 similar documents found (search time: 359 ms)
1.
Sun Qingying (孙清滢). Chinese Quarterly Journal of Mathematics (数学季刊), 2003, 18(2): 154-162
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameters in the search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method under two kinds of common line searches is proved. Firstly, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step-size rule and a bounded level set. Secondly, by using a comparison technique, some general convergence properties of the new method with other kinds of step-size rules are established. Numerical experiments show that the new method is efficient compared with the FR conjugate gradient method.
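For reference, the update parameters named throughout these abstracts are usually defined as follows (standard formulas, not taken from the paper itself), where g_k denotes the gradient at the k-th iterate and d_k the search direction:

\[
\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
\beta_k^{PR} = \frac{g_{k+1}^{T}(g_{k+1}-g_k)}{\|g_k\|^2}, \qquad
\beta_k^{HS} = \frac{g_{k+1}^{T}(g_{k+1}-g_k)}{d_k^{T}(g_{k+1}-g_k)}.
\]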

2.
Under very weak conditions, this paper obtains new results on the global convergence of the Polak–Ribière and Hestenes–Stiefel conjugate gradient methods for unconstrained optimization, where the parameters β_k^{PR} and β_k^{HS} of the PR and HS methods may take values in some negative region that depends on k. These new convergence results improve those already available in the literature. Numerical tests show that the new PR and HS methods of this paper are quite efficient.

3.
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameter in the search directions. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions. Global convergence of such a class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with a modification of Curry-Altman's step-size rule and a bounded level set. By combining the PR method with the new method, the PR method is modified to have the global convergence property. Numerical experiments show that the new methods are efficient compared with the FR conjugate gradient method.
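As a rough illustration of the safeguarded direction updates discussed in these abstracts, the following sketch (hypothetical code, not taken from the paper) computes a PR-type conjugate gradient direction and restarts with steepest descent whenever the descent condition g^T d < 0 fails:

```python
import numpy as np

def cg_direction(g_new, g_old, d_old):
    """PR-type conjugate gradient direction with a descent-property restart.

    A sketch only; the parameter conditions analyzed in the paper are
    more general than the plain Polak-Ribiere choice used here.
    """
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)  # Polak-Ribiere parameter
    d = -g_new + beta * d_old
    if g_new @ d >= 0:   # not a descent direction
        d = -g_new       # restart with steepest descent
    return d
```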

4.
Three hybrid methods for solving unconstrained optimization problems are introduced. These methods are defined using proper combinations of the search directions and parameters of conjugate gradient and quasi-Newton methods. The convergence of the proposed methods with the underlying backtracking line search is analyzed for general objective functions and, in particular, for uniformly convex objective functions. Numerical experiments show the superiority of the proposed methods over some existing methods in view of Dolan and Moré's performance profile.
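The backtracking line search underlying such analyses is, in its most common Armijo form, along these lines (a generic sketch; the contraction factor rho and slope parameter c are illustrative, and d is assumed to be a descent direction):

```python
def backtracking(f, x, g, d, alpha=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink alpha until sufficient decrease holds.

    f: objective; x: current point; g: gradient at x; d: descent direction.
    """
    fx = f(x)
    while f(x + alpha * d) > fx + c * alpha * (g @ d):
        alpha *= rho   # contract the step until the Armijo test passes
    return alpha
```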

5.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For the convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.
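A common continuous-time formulation behind this kind of method (a standard gradient-flow model stated here only for context; the paper's construction via optimal control is more elaborate) is the ODE

\[
\dot{x}(t) = -\nabla f\bigl(x(t)\bigr), \qquad x(0) = x_0,
\]

for which V(x) = f(x) - f(x^*) is a natural Lyapunov function, since \(\dot{V} = -\|\nabla f(x)\|^2 \le 0\) along trajectories.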

6.
In this paper, motivated by the Martinez and Qi methods [1], we propose one type of globally convergent inexact generalized Newton methods to solve unconstrained optimization problems in which the objective functions are not twice differentiable but have LC (Lipschitz continuous) gradients. The methods make the norm of the gradient decrease; they are implementable and globally convergent. We prove that the algorithms have superlinear convergence rates under some mild conditions. The methods may also be used to solve nonsmooth equations.

7.
A Class of Trust Region Methods for Linear Inequality Constrained Optimization and Its Theory Analysis: I. Algorithm and Global Convergence. Xiu Na...

8.
Optimization, 2012, 61(12): 2679-2691
In this article, we present an improved three-term conjugate gradient algorithm for large-scale unconstrained optimization. The search directions in the developed algorithm are proved to satisfy an approximate secant equation as well as the Dai-Liao conjugacy condition. With the standard Wolfe line search and the restart strategy, global convergence of the algorithm is established under mild conditions. In experiments on 75 benchmark test problems with dimensions from 1000 to 10,000, the obtained numerical results indicate that the algorithm outperforms the state-of-the-art algorithms available in the literature, costing less CPU time and fewer iterations in solving large-scale unconstrained optimization problems.
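For context, the Dai-Liao conjugacy condition mentioned here is, in standard notation with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k,

\[
d_{k+1}^{T} y_k = -t\, g_{k+1}^{T} s_k, \qquad t > 0,
\]

which recovers the classical conjugacy condition \(d_{k+1}^{T} y_k = 0\) as \(t \to 0\).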

9.
Yanyun Ding, Jianwei Li. Optimization, 2017, 66(12): 2309-2328
The recently designed non-linear conjugate gradient method of Dai and Kou [SIAM J Optim. 2013;23:296–320] is currently very efficient for solving large-scale unconstrained minimization problems, owing to its simple iterative form, low storage requirement and closeness to the scaled memoryless BFGS method. Because of these attractive properties, the method has been extended successfully to higher-dimensional symmetric non-linear equations in recent years. Nevertheless, its numerical performance in solving convex constrained monotone equations has never been explored. In this paper, combining it with the projection method of Solodov and Svaiter, we develop a family of non-linear conjugate gradient methods for convex constrained monotone equations. The proposed methods do not require the Jacobian information of the equations, and they do not even store any matrix at each iteration, so they have the potential to solve higher-dimensional non-smooth problems. We prove the global convergence of the proposed class of methods and establish its R-linear convergence rate under some reasonable conditions. Finally, we report numerical experiments showing that the proposed methods are efficient and promising.
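The hyperplane-projection step of Solodov and Svaiter that underlies this family can be sketched as follows (illustrative Python; `proj_C` is an assumed user-supplied projector onto the convex feasible set C, and the line search producing the trial step length alpha is omitted):

```python
import numpy as np

def projection_step(F, x, d, alpha, proj_C):
    """One Solodov-Svaiter-style iteration (sketch).

    z = x + alpha*d is a trial point; x is projected onto the hyperplane
    {u : F(z)^T (u - z) = 0} separating x from the solution set, and the
    result is then projected back onto the convex set C.
    """
    z = x + alpha * d
    Fz = F(z)
    x_half = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz  # hyperplane projection
    return proj_C(x_half)                          # projection onto C
```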

10.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reductions. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.
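For reference, the BFGS member of the Broyden family updates the Hessian approximation by the standard formula (with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k):

\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}.
\]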

11.
We are concerned with defining new globalization criteria for solution methods of nonlinear equations. The current criteria used in these methods require a sufficient decrease of a particular merit function at each iteration of the algorithm. As has been observed in the field of smooth unconstrained optimization, this descent requirement can considerably slow the rate of convergence of the sequence of points produced and, in some cases, can heavily deteriorate the performance of algorithms. The aim of this paper is to show that the global convergence of most methods proposed in the literature for solving systems of nonlinear equations can be obtained using less restrictive criteria that do not enforce a monotonic decrease of the chosen merit function. In particular, we show that a general stabilization scheme, recently proposed for the unconstrained minimization of continuously differentiable functions, can be extended to methods for the solution of nonlinear (nonsmooth) equations. This scheme includes different kinds of relaxation of the descent requirement and opens up the possibility of describing new classes of algorithms where the old monotone linesearch techniques are replaced with more flexible nonmonotone stabilization procedures. As in the case of smooth unconstrained optimization, this should be the basis for defining more efficient algorithms with very good practical rates of convergence. This material is partially based on research supported by the Air Force Office of Scientific Research Grant AFOSR-89-0410, National Science Foundation Grant CCR-91-57632, and Istituto di Analisi dei Sistemi ed Informatica del CNR.
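A typical relaxation of the monotone descent requirement, in the Grippo-Lampariello-Lucidi style that such stabilization schemes generalize, accepts a trial point when the merit function improves on the worst of the last few values rather than on the previous one (a sketch; the window size M and parameter gamma are illustrative):

```python
def nonmonotone_accept(merit_history, merit_trial, alpha, slope,
                       gamma=1e-4, M=10):
    """Accept if the trial merit value beats the worst recent merit value.

    merit_history: list of past merit values; slope: directional derivative
    of the merit function along the search direction (negative for descent).
    """
    reference = max(merit_history[-M:])  # nonmonotone reference value
    return merit_trial <= reference + gamma * alpha * slope
```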

12.
In this paper, an adaptive trust region algorithm that uses Moreau–Yosida regularization is proposed for solving nonsmooth unconstrained optimization problems. The proposed algorithm combines a modified secant equation with the BFGS update formula and an adaptive trust region radius; the new trust region radius uses not only the function information but also the gradient information. The global convergence and local superlinear convergence of the proposed algorithm are proven under suitable conditions. Finally, preliminary numerical results comparing the proposed algorithm with some existing algorithms reveal that it is quite promising for solving nonsmooth unconstrained optimization problems.
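The Moreau–Yosida regularization used here is the standard construction: for a (possibly nonsmooth) convex function f and parameter \(\lambda > 0\),

\[
F_{\lambda}(x) = \min_{z \in \mathbb{R}^n} \Bigl\{ f(z) + \tfrac{1}{2\lambda} \|z - x\|^2 \Bigr\},
\]

which is continuously differentiable with a Lipschitz gradient and has the same minimizers as f, so smooth trust region machinery can be applied to \(F_{\lambda}\).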

13.
This paper introduces two new algorithms for finding initial feasible points from initial infeasible points for the recently developed norm-relaxed method of feasible directions (MFD). Their global convergence is analyzed. The theoretical results show that both methods are globally convergent; one of them guarantees finding a feasible point in a finite number of steps. These two methods are very convenient to implement in the norm-relaxed MFD. Numerical experiments are carried out to demonstrate their performance on some classical test problems and to compare them with the traditional method of phase I problems. The numerical results show that the methods proposed in this paper are more effective than the method of phase I problems in the norm-relaxed MFD. Hence, they can be used for finding initial feasible points for other MFD algorithms and other nonlinear programming methods.

14.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search method and the trust region method to generate new iterates at each iteration and therefore has the advantages of both. It makes full use of multi-step iterative information from previous iterations and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Using more information from previous iterative steps enables us to design fast, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation, compared with other similar methods.
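A memory gradient direction combines the current gradient with several previous directions; a common general form (illustrative, not necessarily the exact update of this paper) is

\[
d_k = -g_k + \sum_{i=1}^{m} \beta_{k,i}\, d_{k-i},
\]

which explains why no Hessian-related matrices need to be stored or computed.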

15.
This paper extends the unconstrained super-memory gradient method to nonlinear inequality constrained optimization problems, giving two classes of super-memory-gradient feasible direction methods of very general form and proving their global convergence under rather weak assumptions such as nondegeneracy and continuous differentiability. By choosing the parameters and memory directions in the algorithms appropriately, one can not only recover some known methods and obtain new ones, but may also accelerate the convergence of the algorithms.

16.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
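In self-scaling quasi-Newton methods, the approximation B_k is scaled before being updated; with the Oren–Luenberger-type factor (given here only as a typical example, with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k), the self-scaling BFGS update reads

\[
\tau_k = \frac{y_k^{T} s_k}{s_k^{T} B_k s_k}, \qquad
B_{k+1} = \tau_k \Bigl( B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} \Bigr) + \frac{y_k y_k^{T}}{y_k^{T} s_k},
\]

so that large eigenvalues of B_k are damped before the new curvature information is added.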

17.
In this paper two nonmonotone curved search (NCS) algorithms for unconstrained optimization are presented. The NCS algorithms possess both a global convergence property and a quadratic rate of convergence. Some numerical results are also reported which show that the NCS algorithm is superior to the usual curved search (UCS) algorithm on typical test problems.

18.
The Barzilai–Borwein (BB) gradient method has received much study due to its simplicity and numerical efficiency. By incorporating a nonmonotone line search, Raydan (SIAM J Optim. 1997;7:26–33) successfully extended the BB gradient method to general unconstrained optimization problems, making it competitive with conjugate gradient methods. However, the numerical results reported by Raydan are poor for very ill-conditioned problems, because the effect of the degree of nonmonotonicity may be noticeable. In this paper, we focus on the nonmonotone line search technique used in the global Barzilai–Borwein (GBB) gradient method. We improve the performance of the GBB gradient method by proposing an adaptive nonmonotone line search based on the morphology of the objective function. We also prove the global convergence and the R-linear convergence rate of the proposed method under reasonable assumptions. Finally, we report numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection. The results show the efficiency of the proposed method in the sense of the performance profile introduced by Dolan and Moré (Math Program. 2002;91:201–213).
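The Barzilai–Borwein step lengths at the core of the GBB method are cheap to compute; the two classical choices are sketched below (standard formulas, with an illustrative safeguard against a nonpositive curvature denominator):

```python
import numpy as np

def bb_steps(s, y, eps=1e-12):
    """Barzilai-Borwein steps from s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
    sy = s @ y
    if sy <= eps:                  # safeguard: nonpositive curvature
        return 1.0, 1.0            # fall back to a fixed step
    alpha_bb1 = (s @ s) / sy       # "long" BB step
    alpha_bb2 = sy / (y @ y)       # "short" BB step
    return alpha_bb1, alpha_bb2
```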

19.
A Class of Globally Convergent Memory Gradient Methods and Their Linear Convergence Rate
This paper studies a new class of memory gradient methods for unconstrained optimization problems and proves their global convergence under the strong Wolfe line search. When the objective function is uniformly convex, the linear convergence rate is analyzed. Numerical experiments show that the algorithm is quite effective.

20.
A Diagonal Sparse Quasi-Newton Method for Unconstrained Optimization Problems
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm uses an inexact Armijo line search and, at each iteration, approximates the correction matrix of the quasi-Newton method by a diagonal matrix, so that the storage and the work for computing the search direction are reduced significantly; this provides a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, the global convergence and linear convergence rate of the algorithm are proved and its superlinear convergence property is analyzed. Numerical experiments show that the algorithm is more efficient than the conjugate gradient method and is suitable for solving large-scale unconstrained optimization problems.
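To convey the storage savings described above, a diagonal quasi-Newton step might look as follows (a hypothetical sketch: the componentwise update rule below is illustrative only and is not the paper's actual correction formula):

```python
import numpy as np

def diagonal_direction(g, D, eps=1e-8):
    """Search direction -D^{-1} g, with the diagonal Hessian model D stored
    as a length-n vector: O(n) storage instead of O(n^2) for a full matrix."""
    return -g / np.maximum(D, eps)

def diagonal_update(D, s, y, eps=1e-8):
    """Crude componentwise secant fit D_i ~ y_i / s_i (illustrative only)."""
    safe_s = np.where(np.abs(s) > eps, s, 1.0)
    D_new = np.where(np.abs(s) > eps, y / safe_s, D)
    return np.maximum(D_new, eps)  # keep the diagonal model positive
```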
