20 related records retrieved.
1.
Yuhong Dai Jinyun Yuan Ya-Xiang Yuan 《Computational Optimization and Applications》2002,22(1):103-109
For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in real computations. In this paper we interpret the choice of the stepsize in the two-point stepsize gradient method from the viewpoint of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
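For context (these are the classical two-point stepsizes of Barzilai and Borwein that the modified methods build on, not the paper's new formulas), with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$ the standard choices are

$$\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad \alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}}.$$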
2.
In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large dimensional unconstrained optimization problems.
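As a hedged illustration of the kind of relaxation involved (the precise watchdog and linesearch rules are those defined in the paper), a typical nonmonotone Armijo-type acceptance test replaces the monotone requirement $f(x_k + \alpha_k d_k) \le f(x_k) + \gamma \alpha_k \nabla f(x_k)^{\top} d_k$ by

$$f(x_k + \alpha_k d_k) \le \max_{0 \le j \le \min(k,M)} f(x_{k-j}) + \gamma\, \alpha_k \nabla f(x_k)^{\top} d_k, \qquad \gamma \in (0,1),$$

so that the objective may increase temporarily as long as the new value improves on the worst of the last $M+1$ iterates.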
3.
Building on a modified PRP conjugate gradient method, a sufficient-descent conjugate gradient algorithm for unconstrained optimization is proposed. The algorithm is proved to be globally convergent under the Wolfe line search, and numerical experiments show that it performs well.
4.
A class of hybrid conjugate gradient algorithms for unconstrained optimization is proposed. The new algorithm combines the advantages of the DY and HS methods, and its global convergence is established under fairly weak conditions by means of a nonmonotone line search technique. Numerical experiments show that the new algorithm is computationally effective.
5.
Liu G. H. Jing L. L. Han L. X. Han D. 《Journal of Optimization Theory and Applications》1999,101(1):127-140
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which includes the well-known Polak–Ribière method and Hestenes–Stiefel method as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when applied to unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak–Ribière and Hestenes–Stiefel methods in this class are competitive with their monotone counterparts.
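For reference (standard definitions rather than results of the paper), with search direction $d_{k+1} = -g_{k+1} + \beta_k d_k$, the Polak–Ribière and Hestenes–Stiefel scalars that this class specializes to are

$$\beta_k^{\mathrm{PR}} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{\|g_k\|^2}, \qquad \beta_k^{\mathrm{HS}} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{d_k^{\top}(g_{k+1} - g_k)}.$$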
6.
The gradient method for the symmetric positive definite linear system $Ax = b$ takes the form
$$x_{k+1} = x_k + \alpha_k r_k, \tag{1}$$
where $r_k = b - Ax_k$ is the residual of the system at $x_k$ and $\alpha_k$ is the stepsize. The stepsize $\alpha^{*} = 2/(\lambda_1 + \lambda_n)$ is optimal in the sense that it minimizes the modulus $\|I - \alpha A\|_2$, where $\lambda_1$ and $\lambda_n$ are the minimal and maximal eigenvalues of $A$, respectively. Since $\lambda_1$ and $\lambda_n$ are unknown to users, the gradient method with the optimal stepsize is usually mentioned only in theory. In this paper, we propose a new stepsize formula which tends to the optimal stepsize as $k \to \infty$. At the same time, the minimal and maximal eigenvalues, $\lambda_1$ and $\lambda_n$, of $A$ and their corresponding eigenvectors can be obtained.
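As a minimal sketch of iteration (1) with the classical optimal stepsize (the adaptive stepsize formula proposed in the paper is not reproduced in this abstract, so the fixed value $2/(\lambda_1 + \lambda_n)$ is used here for illustration):

import numpy as np

def gradient_method_spd(A, b, x0, alpha, tol=1e-10, max_iter=10_000):
    """Gradient iteration x_{k+1} = x_k + alpha * r_k for an SPD matrix A."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                  # residual r_k = b - A x_k
        if np.linalg.norm(r) < tol:    # stop once the residual is small
            break
        x = x + alpha * r              # gradient step with constant stepsize
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + 5.0 * np.eye(5)            # symmetric positive definite A
    b = rng.standard_normal(5)
    lam = np.linalg.eigvalsh(A)
    alpha_opt = 2.0 / (lam[0] + lam[-1])     # optimal stepsize 2/(lambda_1 + lambda_n)
    x = gradient_method_spd(A, b, np.zeros(5), alpha_opt)
    print(np.allclose(A @ x, b, atol=1e-6))

With this constant stepsize the iteration matrix $I - \alpha^{*} A$ has spectral norm $(\lambda_n - \lambda_1)/(\lambda_n + \lambda_1) < 1$, so the iteration converges; the paper's contribution is a stepsize that approaches this value without requiring $\lambda_1$ and $\lambda_n$ in advance.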
This research was initiated while the first author was visiting The Hong Kong Polytechnic University.
This author was supported by the Chinese NSF grants (No. 40233029 and 101071104) and an innovation fund of the Chinese Academy of Sciences.
This author was supported by a grant from the Research Committee of the Hong Kong Polytechnic University (A-PC36).
7.
Jorge Nocedal Annick Sartenaer Ciyou Zhu 《Computational Optimization and Applications》2002,22(1):5-35
It is well known that the norm of the gradient may be unreliable as a stopping test in unconstrained optimization, and that it often exhibits oscillations in the course of the optimization. In this paper we present results describing the properties of the gradient norm for the steepest descent method applied to quadratic objective functions. We also make some general observations that apply to nonlinear problems, relating the gradient norm, the objective function value, and the path generated by the iterates.
8.
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the size of the scalar $\beta_k$ with respect to the one in the new method belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
9.
This paper discusses the global convergence of a class of nonmonotone conjugate gradient methods (NM methods) for nonconvex objective functions. This class of methods includes the nonmonotone counterparts of the modified Polak–Ribière method and the modified Hestenes–Stiefel method as special cases.
10.
11.
In this paper, a new steplength formula is proposed for unconstrained optimization, which determines the stepsize in a single step and avoids a line search. Global convergence of five well-known conjugate gradient methods with this formula is analyzed, with the following results: (1) the DY method converges globally for a strongly convex LC^1 objective function; (2) the CD, FR, PRP and LS methods converge globally for a general, not necessarily convex, LC^1 objective function.
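For reference (standard definitions, not results of the paper), these methods differ only in the scalar $\beta_k$ used in the search direction $d_{k+1} = -g_{k+1} + \beta_k d_k$; besides the PRP scalar $\beta_k^{\mathrm{PR}}$ listed under entry 5 above, the remaining four are

$$\beta_k^{\mathrm{FR}} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad \beta_k^{\mathrm{DY}} = \frac{\|g_{k+1}\|^2}{d_k^{\top}(g_{k+1} - g_k)}, \qquad \beta_k^{\mathrm{CD}} = \frac{\|g_{k+1}\|^2}{-g_k^{\top} d_k}, \qquad \beta_k^{\mathrm{LS}} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{-g_k^{\top} d_k}.$$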
12.
P. Armand 《Journal of Optimization Theory and Applications》2007,132(2):287-305
This paper proposes a line search technique to satisfy a relaxed form of the strong Wolfe conditions in order to guarantee the descent condition at each iteration of the Polak-Ribière-Polyak conjugate gradient algorithm. It is proved that this line search algorithm preserves the usual convergence properties of any descent algorithm. In particular, it is shown that the Zoutendijk condition holds under mild assumptions. It is also proved that the resulting conjugate gradient algorithm is convergent under a strong convexity assumption. For the nonconvex case, a globally convergent modification is proposed. Numerical tests are presented.
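For reference, the (unrelaxed) strong Wolfe conditions on a steplength $\alpha_k$ along a descent direction $d_k$ require, for constants $0 < c_1 < c_2 < 1$,

$$f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{\top} d_k, \qquad \bigl|\nabla f(x_k + \alpha_k d_k)^{\top} d_k\bigr| \le c_2 \bigl|\nabla f(x_k)^{\top} d_k\bigr|;$$

the relaxation studied in the paper weakens these requirements while still guaranteeing descent for the PRP direction.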
This paper is based on an earlier work presented at the International Symposium on Mathematical Programming in Lausanne in 1997. The author thanks J. C. Gilbert for his advice and M. Albaali for some recent discussions which motivated him to write this paper. Special thanks to G. Liu, J. Nocedal, and R. Waltz for the availability of the software CG+ and to one of the referees who indicated to him the paper of Grippo and Lucidi (Ref. 1).
13.
14.
In this paper, an unconstrained optimization method using the nonmonotone second-order Goldstein line search is proposed. By using negative curvature information from the Hessian, the generated sequence is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of our new method.
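For orientation (the classical first-order form only; the second-order and nonmonotone variants used in the paper are not reproduced here), the Goldstein conditions on a steplength $\alpha_k$ along a descent direction $d_k$ are

$$f(x_k) + (1 - c)\,\alpha_k \nabla f(x_k)^{\top} d_k \;\le\; f(x_k + \alpha_k d_k) \;\le\; f(x_k) + c\,\alpha_k \nabla f(x_k)^{\top} d_k, \qquad 0 < c < \tfrac{1}{2}.$$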
This work was supported by the National Natural Science Foundation of China (Grant No. 10231060) and the Specialized Research Fund of Doctoral Program of Higher Education of China (Grant No. 20040319003).
15.
Wen-yu SUN 《中国科学A辑(英文版)》2007,50(10)
In this paper, an unconstrained optimization method using the nonmonotone second-order Goldstein line search is proposed. By using negative curvature information from the Hessian, the generated sequence is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of our new method.
16.
17.
Globally Convergent Algorithms for Unconstrained Optimization
Yixun Shi 《Computational Optimization and Applications》2000,16(3):295-308
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining Newton's direction and the steepest descent direction within each iteration. Global convergence is guaranteed from an arbitrary initial point. The search direction in each iteration is chosen to be as close to Newton's direction as possible, and may be Newton's direction itself. Asymptotically the Newton step is taken in each iteration, so the local convergence is quadratic. Numerical experiments are also reported. A possible combination of a quasi-Newton direction with the steepest descent direction is also considered in these experiments. The differences between the proposed strategy and several other strategies are also discussed.
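As a minimal, hedged sketch of the "combine Newton's direction and steepest descent within each iteration" idea (the blending rule below is an illustrative assumption, not the selection rule of the paper):

import numpy as np

def blended_direction(grad, hess, tau=1e-8):
    """Return Newton's direction when it is a safe descent direction,
    otherwise blend it with steepest descent until descent holds.
    Illustrative only; the cited paper uses a more refined rule."""
    sd = -grad                                   # steepest descent direction
    try:
        newton = np.linalg.solve(hess, -grad)    # Newton's direction
    except np.linalg.LinAlgError:
        return sd                                # singular Hessian: fall back
    gnorm, nnorm = np.linalg.norm(grad), np.linalg.norm(newton)
    if newton @ grad <= -tau * nnorm * gnorm:    # sufficiently good descent?
        return newton
    for theta in np.linspace(0.9, 0.0, 10):      # pull toward steepest descent
        d = theta * newton + (1.0 - theta) * sd
        if d @ grad < 0.0:
            return d
    return sd

Near a minimizer with a positive definite Hessian, the Newton branch is always taken, which is consistent with the quadratic local convergence stated above.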
18.
Convergence of a Memory Gradient Method under the Wolfe Line Search
This paper studies a memory gradient algorithm for unconstrained optimization and analyzes its global convergence and linear convergence rate under the Wolfe line search. Preliminary numerical results demonstrate the effectiveness of the algorithm.
19.
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient for the given standard test problems if a good value is chosen for the parameter included in the method.
20.
A new conjugate gradient method is proposed in this paper. For any (inexact) line search, our scheme satisfies the sufficient descent property. The method is proved to be globally convergent if the restricted Wolfe-Powell line search is used. Preliminary numerical results show that it is efficient.
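For reference (standard terminology rather than a result of the paper), the sufficient descent property means that there is a constant $c > 0$ such that the search directions satisfy

$$g_k^{\top} d_k \le -c\, \|g_k\|^2 \quad \text{for all } k,$$

independently of the line search used.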