Similar Literature
20 similar documents found.
1.
The paper examines a method of implementing an angle test for determining when to restart conjugate gradient methods in the steepest descent direction. The test guarantees that the cosine of the angle between the search direction and the negative gradient is within a constant multiple of the cosine of the angle between the Fletcher-Reeves search direction and the negative gradient. This guarantees convergence, since the Fletcher-Reeves method is known to converge. Numerical results indicate that little, if anything, is lost in efficiency, and that gains may well be possible for large problems.
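A minimal sketch of such an angle test, with illustrative names and threshold (the paper's exact constant is not reproduced here):

```python
import numpy as np

def restart_angle_test(g, d, d_fr, c=0.1):
    """Return True if CG should restart along -g.

    g    : current gradient
    d    : candidate search direction
    d_fr : the Fletcher-Reeves direction at the same iterate
    c    : constant in (0, 1]; the value 0.1 is illustrative

    Restart when cos(angle(d, -g)) falls below c times
    cos(angle(d_fr, -g)), i.e. when d strays too far from the
    provably convergent Fletcher-Reeves direction.
    """
    cos_d = -g.dot(d) / (np.linalg.norm(g) * np.linalg.norm(d))
    cos_fr = -g.dot(d_fr) / (np.linalg.norm(g) * np.linalg.norm(d_fr))
    return cos_d < c * cos_fr
```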

2.
A modified conjugate gradient method is presented for solving unconstrained optimization problems. It possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction lies in a trust region automatically; (iii) the Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, which prevents a sequence of tiny steps. Global convergence and a linear convergence rate are established for the given method. Numerical results show that the method is promising.
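The abstract's exact formulas are not given here; one well-known construction with the same guarantee (sufficient descent with g^T d = -||g||^2 regardless of line search) is the three-term PRP direction, sketched below for orientation:

```python
import numpy as np

def three_term_prp_direction(g, g_prev, d_prev):
    """Three-term PRP direction (Zhang-Zhou-Li style); the paper's
    method is in the same spirit, but its exact formula may differ.

    d = -g + beta*d_prev - theta*y  with  y = g - g_prev.
    By construction g.dot(d) == -||g||**2, so sufficient descent
    holds no matter which line search is used.
    """
    y = g - g_prev
    denom = g_prev.dot(g_prev)
    beta = g.dot(y) / denom      # PRP coefficient
    theta = g.dot(d_prev) / denom
    return -g + beta * d_prev - theta * y
```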

3.
4.
Two modified Dai-Yuan nonlinear conjugate gradient methods
In this paper, we propose two modified versions of the Dai-Yuan (DY) nonlinear conjugate gradient method. One is based on the MBFGS method (Li and Fukushima, J Comput Appl Math 129:15–35, 2001) and inherits all the nice properties of the DY method; moreover, it converges globally for nonconvex functions even if the standard Armijo line search is used. The other is based on the ideas of Wei et al. (Appl Math Comput 183:1341–1350, 2006) and Zhang et al. (Numer Math 104:561–572, 2006) and shares the good practical performance of the Hestenes-Stiefel method. Numerical results are also reported. This work was supported by the National Natural Science Foundation of China (10701018).
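Both modifications start from the standard Dai-Yuan coefficient; a sketch of that baseline (the modified gradient differences the paper substitutes for y are not reproduced):

```python
import numpy as np

def beta_dy(g, d_prev, g_prev):
    """Standard Dai-Yuan coefficient beta = ||g_k||^2 / (d_{k-1}^T y),
    with y = g_k - g_{k-1}.  The paper's two methods replace y with
    modified gradient differences (MBFGS-style and Wei/Zhang-style)."""
    y = g - g_prev
    return g.dot(g) / d_prev.dot(y)

# Inside a CG loop (illustrative):
#   d = -g + beta_dy(g, d_prev, g_prev) * d_prev
```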

5.
Conjugate gradient methods are appealing for large-scale nonlinear optimization problems because they avoid the storage of matrices. Seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87–101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and later Yabe and Takano (Comput. Optim. Appl. 28:203–225, 2004) proposed another conjugate gradient method based on a modified secant condition. In this paper, we make use of the multi-step secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357–370, 1993; J. Comput. Appl. Math. 50:305–323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent under certain assumptions. Numerical results are reported.
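For orientation, the Dai-Liao coefficient that both cited lines of work build on; in the multi-step variants, y is replaced by a Ford-Moghrabi multi-step approximation (not reproduced here):

```python
import numpy as np

def beta_dai_liao(g, d_prev, s, y, t=0.1):
    """Dai-Liao coefficient beta = g^T (y - t*s) / (d_prev^T y),
    where s = x_k - x_{k-1}, y = g_k - g_{k-1}, and t >= 0 is a
    parameter (0.1 is an illustrative default, not the paper's choice).
    """
    return g.dot(y - t * s) / d_prev.dot(y)
```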

6.
A modified PRP conjugate gradient method
This paper gives a modified PRP method that possesses global convergence for nonconvex functions and an R-linear convergence rate for uniformly convex functions. Furthermore, the presented method has the sufficient descent property and automatically stays in a trust region without carrying out any line search. Numerical results indicate that the new method performs well on the given test problems. This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
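The classical PRP coefficient that the method modifies, plus the two properties the abstract highlights expressed as checkable conditions (a sketch; the paper's modified formula is not reproduced):

```python
import numpy as np

def beta_prp(g, g_prev):
    """Classical Polak-Ribiere-Polyak coefficient."""
    return g.dot(g - g_prev) / g_prev.dot(g_prev)

def has_desired_properties(g, d, c1=1e-4, c2=10.0):
    """Sufficient descent (g^T d <= -c1*||g||^2) and the automatic
    trust-region property (||d|| <= c2*||g||) that the modified
    direction is constructed to satisfy; c1 and c2 are illustrative."""
    return (g.dot(d) <= -c1 * g.dot(g)
            and np.linalg.norm(d) <= c2 * np.linalg.norm(g))
```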

7.
A class of modified Broyden algorithms with inexact line search
For the modified Broyden algorithms proposed in [8] and [14], this paper discusses their convergence properties under inexact line search. It is proved that these algorithms are globally convergent when applied to objective functions whose gradients satisfy a Lipschitz condition; when the objective function is uniformly convex, the algorithms are Q-superlinearly and quadratically convergent.
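The modified updates of [8] and [14] are not reproduced; for orientation, a sketch of the classical Broyden-family update that they restrict (phi = 0 gives BFGS, phi = 1 gives DFP):

```python
import numpy as np

def broyden_family_update(B, s, y, phi=0.0):
    """Classical Broyden-family quasi-Newton update of the Hessian
    approximation B, with s = x_{k+1} - x_k and y = g_{k+1} - g_k.
    The modified Broyden algorithms studied in the paper alter this
    family; their exact modification is not reproduced here."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    v = y / ys - Bs / sBs
    return (B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
            + phi * sBs * np.outer(v, v))
```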

8.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for such problems, including the Gauss-Newton method, the Levenberg-Marquardt method, and structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition with the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that exploit the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
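The structure being exploited, in a short sketch (this decomposition is standard for least squares; the paper's structured secant condition itself is not reproduced):

```python
import numpy as np

def nls_gradient_and_gn_hessian(r, J):
    """For f(x) = 0.5*||r(x)||^2 the gradient is J^T r and the Hessian
    splits as J^T J + S, where S = sum_i r_i * Hess(r_i).  Structured
    methods keep J^T J exactly and approximate only S; the structured
    secant condition constrains that approximation.

    r : residual vector r(x), shape (m,)
    J : Jacobian of r at x, shape (m, n)
    """
    g = J.T @ r       # exact gradient
    H_gn = J.T @ J    # Gauss-Newton (first-order) part of the Hessian
    return g, H_gn
```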

9.
10.
Linear systems with complex coefficients arise from various physical problems. Examples are the Helmholtz equation and the Maxwell equations approximated by finite difference or finite element methods, which lead to large sparse linear systems. When the continuous problem is reduced to integral equations, discretization instead yields a dense linear system. The resulting matrices are generally non-Hermitian but, most of the time, symmetric, so the classical conjugate gradient method cannot be applied directly. These linear systems usually have to be solved with a large number of unknowns because, for instance, in electromagnetic scattering problems the mesh size must be related to the wavelength of the incoming wave: the higher the frequency, the smaller the mesh size must be. For 3D problems, direct solvers are no longer practical because of the huge memory they require, so iterative methods are attractive for this kind of problem, even though their convergence cannot always be guaranteed by theoretical results. In this paper we derive several methods from a unified framework and compare these algorithms numerically on some test problems.
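One representative of the complex-symmetric CG family is the COCG-style iteration below: plain CG with the unconjugated bilinear form x^T y in place of the Hermitian inner product. Whether this exact variant is among those compared in the paper is an assumption.

```python
import numpy as np

def cocg(A, b, x0=None, tol=1e-8, maxiter=200):
    """COCG-style iteration for complex *symmetric* (non-Hermitian) A.
    Note that np.dot on complex vectors does not conjugate, which is
    exactly the unconjugated bilinear form COCG requires."""
    n = len(b)
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                       # r^T r, not r^H r
    for _ in range(maxiter):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)        # p^T A p, again unconjugated
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```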

11.
It is well known that global convergence has not been established for the Polak-Ribière-Polyak (PRP) conjugate gradient method under the standard Wolfe conditions. In the convergence analysis of the PRP method with Wolfe line search, the (sufficient) descent condition and the restriction βk ≥ 0 are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed. Under some suitable conditions, global convergence results are established for the PRP method with a modified Wolfe line search. Some special choices for βk which ensure the descent property of the search direction are also discussed. Preliminary numerical results on a set of large-scale problems show that the computational efficiency of the PRP method is encouraging.
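For reference, the standard Wolfe conditions that the paper's *modified* line search relaxes (the modification itself is not reproduced), together with the classical nonnegativity safeguard on βk:

```python
def wolfe_conditions_hold(f0, f_new, gTd, gTd_new, alpha, c1=1e-4, c2=0.1):
    """Standard Wolfe conditions: sufficient decrease plus curvature.

    f0, gTd        : f and g^T d at the current iterate
    f_new, gTd_new : the same quantities at x + alpha*d
    0 < c1 < c2 < 1; the defaults are conventional choices.
    """
    return f_new <= f0 + c1 * alpha * gTd and gTd_new >= c2 * gTd

# The classical restriction the paper relaxes: beta = max(beta_prp, 0.0)
```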

12.
It is well known that the sufficient descent condition is very important to the global convergence of nonlinear conjugate gradient methods. In this paper, some modified conjugate gradient methods which possess this property are presented. Global convergence of the proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported. This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
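A generic sketch showing where the sufficient descent condition enters a CG iteration; the safeguard below (falling back to steepest descent when the condition fails) is a common device, not necessarily the paper's construction:

```python
import numpy as np

def safeguarded_direction(g, d_candidate, c=1e-4):
    """Accept the CG direction only if it satisfies the sufficient
    descent condition g^T d <= -c*||g||^2; otherwise fall back to the
    steepest descent direction, which satisfies it trivially."""
    if g.dot(d_candidate) <= -c * g.dot(g):
        return d_candidate
    return -g
```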

13.
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of our proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they take account of both gradient and function values. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to initiate the procedure. We compare implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and by Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.
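The paper's own initial-steplength rule is not reproduced; shown below is a classical first-trial heuristic of the same kind (scale the previous step so that the directional derivative is matched, cf. Nocedal and Wright), for orientation only:

```python
def first_trial_steplength(alpha_prev, gTd_prev, gTd):
    """First trial step alpha0 = alpha_{k-1} * (g_{k-1}^T d_{k-1}) / (g_k^T d_k),
    which equates the predicted decrease along the new direction with
    the decrease achieved along the previous one."""
    return alpha_prev * gTd_prev / gTd
```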

14.
Global convergence result for conjugate gradient methods
Conjugate gradient optimization algorithms depend on the search directions, …

15.
A quadratic programming algorithm is presented, resembling Beale's 1955 quadratic programming algorithm and Wolfe's Reduced Gradient method. It uses conjugate search directions. The algorithm is conceived as being particularly appropriate for problems with a large Hessian matrix. An experimental computer program has been written to validate the concepts, and has performed adequately, although it has not been used on very large problems. An outline of the solution to the quadratic capacity-constrained transportation problem using the above method is also presented. While engaged in this research the author had a part-time post with the Manpower Services Commission.
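As a point of reference, the conjugate-direction machinery on an unconstrained quadratic (plain linear CG; the paper's constrained algorithm adds reduced-gradient handling of the constraints, which is not sketched here):

```python
import numpy as np

def cg_quadratic(Q, c, x0, tol=1e-10):
    """Minimize 0.5*x^T Q x + c^T x for symmetric positive definite Q.
    The exact line search alpha = -g^T d / d^T Q d makes successive
    directions Q-conjugate, the property conjugate-direction QP
    algorithms exploit; CG terminates in at most n steps."""
    x = x0.astype(float)
    g = Q @ x + c
    d = -g
    for _ in range(len(c)):
        if np.linalg.norm(g) < tol:
            break
        Qd = Q @ d
        alpha = -g.dot(d) / d.dot(Qd)
        x += alpha * d
        g = Q @ x + c
        d = -g + (g.dot(Qd) / d.dot(Qd)) * d   # enforce d_new^T Q d = 0
    return x
```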

16.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey (LS) conjugate gradient method. The new nonmonotone line search guarantees the global convergence of the LS method and performs well numerically. By estimating the Lipschitz constant of the derivative of the objective function within the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
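A GLL-style nonmonotone Armijo backtracking sketch of the general idea (the paper's Lipschitz-constant estimate for choosing the first trial step is not reproduced; the LS coefficient is included for context):

```python
import numpy as np

def beta_ls(g, g_prev, d_prev):
    """Liu-Storey coefficient beta = g^T (g - g_prev) / (-d_prev^T g_prev)."""
    return g.dot(g - g_prev) / (-d_prev.dot(g_prev))

def nonmonotone_armijo(f, x, d, g, f_hist, delta=1e-4, rho=0.5, max_back=30):
    """Accept alpha once f(x + alpha*d) <= max(recent f-values)
    + delta*alpha*g^T d.  Comparing against the max over a window of
    past values (f_hist), rather than f(x) alone, is what makes the
    search nonmonotone; parameter values here are illustrative."""
    f_ref = max(f_hist)
    gTd = g.dot(d)
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + delta * alpha * gTd:
            break
        alpha *= rho
    return alpha
```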

17.
In a recent paper McCormick and Ritter consider two classes of algorithms, namely methods of conjugate directions and quasi-Newton methods, for the problem of minimizing a function of n variables F(x). They show that the former methods possess an n-step superlinear rate of convergence while the latter are every-step superlinear and therefore inherently superior. In this paper a simple and computationally inexpensive modification of a method of conjugate directions is presented. It is shown that the modified method is a quasi-Newton method and is thus every-step superlinearly convergent. It is also shown that under certain assumptions on the second derivatives of F the rate of convergence of the modified method is n-step quadratic. This work was supported by the National Research Council of Canada under Research Grant A8189.

18.
We establish a point-wise gradient estimate for all positive solutions of the conjugate heat equation. This contrasts with Perelman's point-wise gradient estimate, which works mainly for the fundamental solution rather than for all solutions. Like Perelman's estimate, the most general form of our gradient estimate does not require any curvature assumption. Moreover, assuming only a lower bound on the Ricci curvature, we also prove a localized gradient estimate similar to the Li-Yau estimate for the linear Schrödinger heat equation. The main difference with the linear case is that no assumptions on the derivatives of the potential (the scalar curvature) are needed. A classical Harnack inequality follows.
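For orientation (standard background, not the paper's new estimate): along a Ricci flow, the conjugate heat equation is the adjoint of the heat equation coupled with the flow:

```latex
% Ricci flow: \partial_t g_{ij} = -2R_{ij} on a closed manifold, t \in [0,T].
% The coupled heat operator and its conjugate are
\[
  \square = \partial_t - \Delta , \qquad
  \square^{*} = -\,\partial_t - \Delta + R ,
\]
% where R is the scalar curvature.  The conjugate heat equation is
% \square^{*} u = 0, i.e.
\[
  -\,\partial_t u \;-\; \Delta u \;+\; R\,u \;=\; 0 ,
\]
% and the paper bounds |\nabla u|/u point-wise for every positive
% solution u, not only for the fundamental solution.
```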

19.
In this paper, a three-term conjugate gradient algorithm is developed for solving large-scale unconstrained optimization problems. The search direction at each iteration is obtained by rectifying the steepest descent direction with the difference between consecutive iterates and the difference between the corresponding gradients. It is proved that such a direction satisfies an approximate secant condition as well as the conjugacy condition. Acceleration and restart strategies are incorporated into the algorithm to improve its numerical performance. Global convergence of the proposed algorithm is established under two mild assumptions. On 75 benchmark test problems from the literature, the algorithm outperforms similar state-of-the-art algorithms.
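The general shape of such a direction (the coefficients a and b are placeholders; the paper's formulas, chosen so that the approximate secant and conjugacy conditions hold, are not reproduced):

```python
import numpy as np

def three_term_direction(g, s, y, a, b):
    """d = -g + a*s + b*y: the steepest descent direction rectified by
    the iterate difference s = x_k - x_{k-1} and the gradient
    difference y = g_k - g_{k-1}; a and b are hypothetical scalars."""
    return -g + a * s + b * y
```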

20.
A class of reduced gradient methods for handling general optimization problems with linear equality and inequality constraints is suggested in this paper. Although a slack vector is introduced, the dimension of the problem is not increased, unlike the conventional approach of converting inequality constraints into equality constraints by introducing slack variables. When an iterate x(k) is not a Kuhn-Tucker point of the problem under consideration, different feasible descent directions can be obtained through different choices of the slack vector. The suggested method is globally convergent, and the numerical experiments reported in the paper show that it is efficient.
