Similar Literature (20 results)
1.
In this paper, using a projection of the PRP (Polak–Ribière–Polyak) conjugate gradient direction, we develop a PRP-based descent method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, if an exact line search is used, the method reduces to the standard PRP method. Under suitable conditions, we show that the method with a backtracking line search or a generalized Wolfe-type line search is globally convergent. We also report some numerical results and compare the performance of the method with some existing conjugate gradient methods. The results show that the proposed method is efficient.
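To make the ingredients concrete, here is a minimal Python sketch of a PRP conjugate gradient loop with a backtracking (Armijo) line search. The steepest-descent restart used as a descent safeguard and the line search constants are illustrative assumptions, not the projection technique of the paper.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """PRP conjugate gradient with a steepest-descent restart safeguard.

    The restart-on-non-descent rule is a generic safeguard, not the
    projection-based direction of the paper summarized above.
    """
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search with assumed constants.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP parameter: beta = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2.
        beta = g_new.dot(g_new - g) / g.dot(g)
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:  # not a descent direction: restart
            d = -g_new
        x, g = x_new, g_new
    return x
```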

2.
Tanabe (1988) proposed a variation of the classical Newton method for solving nonlinear systems of equations, the so-called Centered Newton method. His idea was based on deviating the Newton direction towards a variety called the “Central Variety”. In this paper we prove that the Centered Newton method is locally convergent, and we present a globally convergent method based on the centered direction used by Tanabe. We show the effectiveness of our proposal for solving nonlinear systems of equations and compare it with the Newton method with line search.
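For reference, here is a sketch of the baseline the authors compare against: Newton's method with a backtracking line search on the merit function 0.5*||F(x)||^2. The Armijo constant is an assumed value, and Tanabe's centered direction is not reproduced here.

```python
import numpy as np

def newton_line_search(F, J, x0, tol=1e-8, max_iter=100):
    """Newton's method for F(x) = 0, globalized by backtracking on
    the merit function m(x) = 0.5 * ||F(x)||^2.  This is the plain
    baseline method, not the centered direction of Tanabe."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(J(x), -Fx)  # Newton direction: J d = -F
        m = 0.5 * Fx.dot(Fx)
        alpha = 1.0
        # Armijo test with c = 1e-4; since grad m = J^T F and
        # J d = -F, we have grad m . d = -2 m, hence the bound below.
        while 0.5 * np.linalg.norm(F(x + alpha * d))**2 > (1 - 2e-4 * alpha) * m:
            alpha *= 0.5
        x = x + alpha * d
    return x
```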

3.
Global convergence of a modified spectral CD conjugate gradient method
In this paper, we present a new nonlinear modified spectral CD conjugate gradient method for solving large-scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function, and this property depends neither on the line search rule nor on the convexity of the objective function. Moreover, the modified method reduces to the standard CD method if the line search is exact. Under some mild conditions, we prove that the modified method with line search is globally convergent even if the objective function is nonconvex. Preliminary numerical results show that the proposed method is very promising.

4.
In this paper, we make a modification to the Liu-Storey (LS) conjugate gradient method and propose a descent LS method. The method can generate sufficient descent directions for the objective function. This property is independent of the line search used. We prove that the modified LS method is globally convergent with the strong Wolfe line search. The numerical results show that the proposed descent LS method is efficient for the unconstrained problems in the CUTEr library.

5.
《Optimization》2012,61(7):1027-1042
In order to take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest two hybridizations of these methods based on Andrei's approach of hybridizing the CG parameters convexly and Powell's approach of nonnegative restriction of the CG parameters. The hybridization parameter in our methods is computed from a modified secant equation obtained based on the search direction of the Hager–Zhang nonlinear CG method. We show that if the line search fulfils the Wolfe conditions, then one of our methods is globally convergent for uniformly convex functions and the other is globally convergent for general functions. We report some numerical results demonstrating the efficiency of our methods in the sense of the performance profile introduced by Dolan and Moré.
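The convex hybridization itself is easy to state in code. Below is a hedged sketch of the hybrid CG parameter with Powell's nonnegativity restriction; in the paper the parameter theta comes from a modified secant equation, whereas here a fixed theta is used purely for illustration.

```python
import numpy as np

def hybrid_hs_dy_beta(g_new, g_old, d, theta=0.5):
    """Convex hybridization of the Hestenes-Stiefel and Dai-Yuan CG
    parameters with Powell's nonnegativity restriction.  In the paper
    theta is computed from a modified secant equation; a fixed theta
    is used here purely for illustration."""
    y = g_new - g_old
    denom = d.dot(y)
    beta_hs = g_new.dot(y) / denom          # Hestenes-Stiefel
    beta_dy = g_new.dot(g_new) / denom      # Dai-Yuan
    beta = (1.0 - theta) * beta_hs + theta * beta_dy
    return max(beta, 0.0)                   # Powell's restriction
```

The next direction is then formed as d_new = -g_new + beta * d, as in any nonlinear CG iteration.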

6.
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. It is well known that the direction generated by a conjugate gradient method may not be a descent direction of the objective function. In this paper, we make a slight modification to the Fletcher–Reeves (FR) method such that the direction generated by the modified method is a descent direction for the objective function. This property depends neither on the line search used nor on the convexity of the objective function. Moreover, the modified method reduces to the standard FR method if the line search is exact. Under mild conditions, we prove that the modified method with an Armijo-type line search is globally convergent even if the objective function is nonconvex. We also present some numerical results to show the efficiency of the proposed method. Supported by the 973 Project (2004CB719402) and the NSF of China (10471036).

7.
The strong Wolfe conditions cannot guarantee the global convergence of the standard CD conjugate gradient method. By constructing a new conjugate parameter, this paper proposes a new spectral conjugate gradient method for unconstrained optimization problems. Under exact line search the method is equivalent to the standard CD conjugate gradient method, and under the standard Wolfe line search it possesses the descent property and global convergence. Preliminary numerical results show that the new method is effective and well suited to solving nonlinear unconstrained optimization problems.

8.
In this paper, we propose a modified BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with nonmonotone line search for unconstrained optimization. Under some mild conditions, we show that the method is globally convergent without a convexity assumption on the objective function. We also report some preliminary numerical results to show the efficiency of the proposed method.
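The nonmonotone idea is independent of the BFGS details: the sufficient-decrease test compares against the largest of the last few function values instead of the current one. A minimal sketch in the Grippo–Lampariello–Lucidi style follows; the memory length M and the constants c and rho are assumptions, and the paper's exact rule may differ.

```python
from collections import deque
import numpy as np

def nonmonotone_armijo(f, x, d, g, history, M=10, c=1e-4, rho=0.5):
    """GLL-style nonmonotone Armijo search: sufficient decrease is
    measured against the maximum of the last M function values rather
    than f(x) alone.  M, c and rho are assumed values."""
    f_ref = max(history)            # reference value over recent iterates
    alpha = 1.0
    f_trial = f(x + alpha * d)
    while f_trial > f_ref + c * alpha * g.dot(d):
        alpha *= rho
        f_trial = f(x + alpha * d)
    history.append(f_trial)
    if len(history) > M:
        history.popleft()
    return alpha

# history is a deque seeded with f(x0):  history = deque([f(x0)])
```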

9.
In this paper, we scale the quasi-Newton equation and propose a spectral scaling BFGS method. The method has a good self-correcting property and can improve the behavior of the BFGS method. Compared with the standard BFGS method, the single-step convergence rate of the spectral scaling BFGS method is not inferior to that of the steepest descent method when minimizing an n-dimensional quadratic function. In addition, when the method with exact line search is applied to minimize an n-dimensional strictly convex function, it terminates within n steps. Under appropriate conditions, we show that the spectral scaling BFGS method with Wolfe line search is globally and R-linearly convergent for uniformly convex optimization problems. The reported numerical results show that the spectral scaling BFGS method outperforms the standard BFGS method.
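As an illustration of the scaling idea, the sketch below applies a Barzilai–Borwein-type spectral factor tau = s^T y / y^T y to the inverse-Hessian approximation before the standard BFGS update. This choice of spectral parameter is an assumption; the abstract does not specify the exact scaling used in the paper.

```python
import numpy as np

def spectral_scaled_bfgs_update(H, s, y):
    """One inverse-Hessian BFGS update with a spectral scaling factor
    applied first.  tau = s^T y / y^T y is a common BB-type choice and
    an assumption here, not necessarily the paper's parameter."""
    rho = 1.0 / y.dot(s)
    tau = s.dot(y) / y.dot(y)       # spectral (BB-type) scaling factor
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # Standard BFGS inverse update, applied to the scaled matrix tau*H.
    return V @ (tau * H) @ V.T + rho * np.outer(s, s)
```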

10.
The smoothing-type algorithm has been successfully applied to solve various optimization problems. In general, a smoothing-type algorithm is designed with a monotone line search. However, in order to achieve better numerical results, non-monotone line search techniques have been used in the numerical computations of some smoothing-type algorithms. In this paper, we propose a smoothing-type algorithm with a non-monotone line search for solving the nonlinear complementarity problem. We show that the proposed algorithm is globally and locally superlinearly convergent under suitable assumptions. Preliminary numerical results are also reported.
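The abstract does not specify the smoothing function; a common choice in smoothing-type algorithms for the nonlinear complementarity problem is the smoothed Fischer–Burmeister function, sketched below purely as an illustration of the idea.

```python
import numpy as np

def smoothed_fb(a, b, mu):
    """Smoothed Fischer-Burmeister function
    phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2 mu^2).
    As mu -> 0 it recovers phi(a, b) = 0  <=>  a >= 0, b >= 0, ab = 0,
    i.e. the complementarity condition.  This is a common smoothing
    choice, not necessarily the one used in the paper."""
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

# For NCP(F), i.e. x >= 0, F(x) >= 0, x^T F(x) = 0, one solves
# smoothed_fb(x, F(x), mu) = 0 componentwise while driving mu -> 0.
```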

11.
In this article, based on a modified secant equation, we propose a modified Hestenes–Stiefel (HS) conjugate gradient method which has a similar form to the CG-DESCENT method proposed by Hager and Zhang (SIAM J Optim 16:170–192, 2005). The presented method generates sufficient descent directions without any line search. Under some mild conditions, we show that it is globally convergent with the Armijo line search. Moreover, the R-linear convergence rate of the modified HS method is established. Preliminary numerical results show that the proposed method is promising and competitive with the well-known CG-DESCENT method.

12.
Global convergence of the Hager–Zhang conjugate gradient method with Armijo line search
Hager and Zhang [4] proposed a new nonlinear conjugate gradient method (the HZ method for short) and proved its global convergence for strongly convex problems under the Wolfe and Goldstein line searches. However, whether the HZ method is globally convergent for nonconvex problems under the standard Armijo line search remains unknown. This paper proposes a conservative HZ conjugate gradient method and proves that it is globally convergent for nonconvex optimization problems under the Armijo line search. In addition, some numerical results are given to verify the effectiveness of the method.
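For orientation, the basic Hager–Zhang CG parameter that the paper starts from can be written as follows; the conservative modification introduced in the paper is not reproduced here.

```python
import numpy as np

def hz_beta(g_new, g_old, d):
    """Hager-Zhang CG parameter (Hager & Zhang, SIAM J Optim 16, 2005):
    beta = (y - 2 d ||y||^2 / (d^T y))^T g_new / (d^T y).
    This is only the basic HZ formula, not the conservative variant
    studied in the paper."""
    y = g_new - g_old
    dy = d.dot(y)
    return (y - 2.0 * d * y.dot(y) / dy).dot(g_new) / dy
```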

13.
In this paper, we propose a new regularized quasi-Newton method for unconstrained optimization. At each iteration, a regularized quasi-Newton equation is solved to obtain the search direction. The step size is determined by a non-monotone Armijo backtracking line search. An adaptive regularization parameter, updated according to the step size of the line search, is employed to compute the next search direction. The presented method is proved to be globally convergent. Numerical experiments show that the proposed method is effective for unconstrained optimization problems and outperforms the existing regularized Newton method.

14.
In this paper, we combine the trust region technique with the line search technique to develop an iterative method for solving semismooth equations. At each iteration, a trust region subproblem is solved, and its solution provides a descent direction for the norm of a smoothing function. A steplength is then determined by a backtracking line search. The proposed method shares the advantages of trust region methods and line search methods. Under appropriate conditions, the proposed method is proved to be globally and superlinearly convergent. In particular, we show that after finitely many iterations, the unit step is always accepted and the method reduces to a smoothing Newton method.

15.
We present a new class of affine-scaling Levenberg–Marquardt methods with a nonmonotone interior backtracking line search for solving the Karush–Kuhn–Tucker (KKT) system. Based on an equivalent minimization reformulation of the KKT system, in which some of the variables are subject to nonnegativity constraints, a Levenberg–Marquardt equation is established. We prove that the algorithm is not only globally convergent but also, under reasonable assumptions, superlinearly convergent. Numerical results verify the practical effectiveness of the algorithm.
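Stripped of the affine scaling and the nonmonotone interior backtracking, the core Levenberg–Marquardt equation for a system F(x) = 0 is (J^T J + mu I) d = -J^T F. A minimal sketch follows, with all the paper-specific machinery deliberately omitted.

```python
import numpy as np

def lm_direction(J, F_val, mu):
    """Basic Levenberg-Marquardt direction for F(x) = 0:
    solve (J^T J + mu I) d = -J^T F.  The affine-scaling matrix and
    the nonmonotone interior backtracking of the paper are omitted."""
    n = J.shape[1]
    A = J.T @ J + mu * np.eye(n)
    return np.linalg.solve(A, -J.T @ F_val)
```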

16.
The method of shortest residuals (SR) was presented by Hestenes and studied by Pytlak. If the function is quadratic and the line search is exact, the SR method reduces to the linear conjugate gradient method. In this paper, we put forward a formulation of the SR method for inexact line searches. We prove that, if the stepsizes satisfy the strong Wolfe conditions, both the Fletcher–Reeves and Polak–Ribière–Polyak versions of the SR method converge globally. When the Wolfe conditions are used, the two versions are also convergent provided that the stepsizes are uniformly bounded; if the stepsizes are not bounded, an example is constructed to show that they need not converge. Numerical results show that the SR method is a promising alternative to the standard nonlinear conjugate gradient method.
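Since the convergence results hinge on the (strong) Wolfe conditions, a small checker makes them concrete; the constants c1 and c2 below are typical textbook values, not those used in the paper.

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the strong Wolfe conditions for a trial step alpha:
      f(x + alpha d) <= f(x) + c1 alpha g^T d       (sufficient decrease)
      |grad(x + alpha d)^T d| <= c2 |g^T d|          (curvature)
    c1 and c2 are typical values, assumed here for illustration."""
    gd = grad(x).dot(d)
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * gd
    curvature = abs(grad(x + alpha * d).dot(d)) <= c2 * abs(gd)
    return armijo and curvature
```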

17.
Although the Liu–Storey (LS) nonlinear conjugate gradient method has a similar structure to the well-known Polak–Ribière–Polyak (PRP) and Hestenes–Stiefel (HS) methods, research on this method is comparatively rare. In this paper, based on the memoryless BFGS quasi-Newton method, we propose a new LS-type method, which converges globally for general functions with the Grippo–Lucidi line search. Moreover, we modify this new LS method such that the modified scheme is globally convergent for nonconvex minimization if the strong Wolfe line search is used. Numerical results are also reported.

18.
In this paper, based on a new class of conjugate gradient methods proposed by Rivaie, Dai, Omer, and others, we propose a class of improved conjugate gradient methods for nonconvex unconstrained optimization. Unlike the above methods, our methods possess the following properties: (i) the search direction always satisfies the sufficient descent condition independently of any line search; (ii) the approaches are globally convergent with the standard Wolfe line search or the standard Armijo line search without any convexity assumption. Moreover, our numerical results demonstrate the efficiency of the proposed methods.

19.
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of our proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and by Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they take both the gradient and the function values into account. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to initiate the procedure. We compare implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and by Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.

20.
In this paper, we consider the DFP algorithm without exact line search. We strengthen the conditions on the line search and prove that, under the new line search conditions, the DFP algorithm is globally convergent, Q-superlinearly convergent, and n-step quadratically convergent.
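For reference, the classical DFP update of the inverse Hessian approximation is sketched below; this is the textbook formula, and it requires the curvature condition s^T y > 0 that the strengthened line search conditions are designed to secure.

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP update of the inverse Hessian approximation:
    H+ = H + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y).
    Requires the curvature condition s^T y > 0."""
    Hy = H @ y
    return H + np.outer(s, s) / s.dot(y) - np.outer(Hy, Hy) / y.dot(Hy)
```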
