1.
This paper proposes a new class of memory gradient methods for unconstrained optimization and proves the global convergence of the algorithm. When the objective function is uniformly convex, the linear convergence rate is analyzed. The new algorithm requires no line search for the step length during the iterations; only some parameters of the algorithm need to be estimated in advance, which reduces the number of objective-function and gradient evaluations and thereby the computational and storage cost of the algorithm. Numerical experiments show that the algorithm is effective.
2.
Another Conjugate Gradient Algorithm with Guaranteed Descent and Conjugacy Conditions for Large-scale Unconstrained Optimization
Neculai Andrei 《Journal of Optimization Theory and Applications》2013,159(1):159-182
In this paper, we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is selected as a linear combination of the gradient and the previous direction. The coefficients in this linear combination are selected in such a way that both the descent and the conjugacy condition are satisfied at every iteration. The algorithm introduces the modified Wolfe line search, in which the parameter in the second Wolfe condition is modified at every iteration. It is shown that both for uniformly convex functions and for general nonlinear functions, the algorithm with strong Wolfe line search generates directions bounded away from infinity. The algorithm uses an acceleration scheme modifying the step length in such a manner as to improve the reduction of the function values along the iterations. Numerical comparisons with some conjugate gradient algorithms using a set of 75 unconstrained optimization problems with different dimensions show that the computational scheme outperforms the known conjugate gradient algorithms like Hestenes and Stiefel; Polak, Ribière and Polyak; Dai and Yuan or the hybrid Dai and Yuan; CG_DESCENT with Wolfe line search, as well as the quasi-Newton L-BFGS.
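As an illustration of how descent and conjugacy can both be imposed on a direction of the form d = -a*g + b*d_prev, the following sketch solves the conjugacy condition y^T d = 0 for b and falls back to steepest descent if descent is lost. The coefficient choice here is a hypothetical stand-in, not Andrei's published formulas.

```python
import numpy as np

def direction_with_descent_and_conjugacy(g, d_prev, y, eps=1e-12):
    """Illustrative direction d = -a*g + b*d_prev with b chosen so that
    the conjugacy condition y^T d = 0 holds (y = g - g_prev); if the
    descent condition g^T d < 0 is lost, restart with steepest descent.
    The coefficient rule is a hypothetical stand-in for the paper's."""
    a = 1.0
    denom = y @ d_prev
    if abs(denom) < eps:
        return -g                        # degenerate: steepest descent
    b = a * (y @ g) / denom              # makes y^T d = 0 exactly
    d = -a * g + b * d_prev
    return d if g @ d < 0 else -g        # enforce descent
```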
3.
A New Line Search Algorithm for Unconstrained Optimization
Based on an analysis of various effective line search algorithms, a new line search algorithm for smooth unconstrained optimization is given. For unconstrained problems whose objective function is twice continuously differentiable and bounded below, the algorithm has the same theoretical properties as the Wolfe-Powell line search. At most two gradient evaluations are required per iteration, so the algorithm saves work when gradient evaluations are expensive. Numerical experiments show that the algorithm is feasible and effective.
4.
A New Class of Nonmonotone Memory Gradient Methods and Their Global Convergence
Building on the nonmonotone Armijo line search, a new nonmonotone line search is proposed, and a class of memory gradient methods under this line search is studied; global convergence is proved under fairly weak conditions. Compared with the nonmonotone Armijo line search, the new line search can generate larger step sizes at each iteration, making the objective function value decrease sufficiently and reducing the computational cost of the algorithm.
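The abstract does not spell out the new line search, so the sketch below shows the classical nonmonotone Armijo rule of Grippo, Lampariello and Lucidi that it builds on: a step is accepted against the maximum of the last M function values rather than the current one, which is what permits the larger steps mentioned above.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, delta=1e-4, rho=0.5, max_back=30):
    """Classical nonmonotone Armijo backtracking: accept alpha once
    f(x + alpha*d) <= max(f_hist) + delta*alpha*g^T d, where f_hist holds
    the last M objective values. The paper's new rule, which admits even
    larger steps, is not reproduced here."""
    fmax = max(f_hist)
    gtd = g @ d                          # must be negative for descent d
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= fmax + delta * alpha * gtd:
            break
        alpha *= rho
    return alpha
```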
5.
In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters in such a way that the first stationary point along a search direction can be eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be reobtained as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the same good features of the Polak-Ribière method, while avoiding pathological situations.
This research was supported by Agenzia Spaziale Italiana, Rome, Italy.
6.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with the nonmonotone line search, the two-point stepsize gradient method has successfully been applied to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters in the nonmonotone line search, especially for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
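For reference, here is a minimal version of the two-point stepsize (Barzilai-Borwein) gradient iteration the entry refers to, without the paper's adaptive nonmonotone safeguard; the BB1 formula alpha = s^T s / s^T y is the standard one.

```python
import numpy as np

def bb_gradient(grad, x0, tol=1e-6, max_iter=1000):
    """Plain two-point stepsize (Barzilai-Borwein) gradient method with the
    BB1 stepsize alpha = s^T s / s^T y; the paper's adaptive nonmonotone
    line-search safeguard is omitted."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    alpha = 1.0 / max(1.0, np.linalg.norm(g))    # conservative first step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sty = s @ y
        alpha = (s @ s) / sty if sty > 0 else 1.0  # BB1, safeguarded
        x, g = x_new, g_new
    return x
```

For instance, bb_gradient(lambda x: 2 * x, np.ones(5)) recovers the minimizer of ||x||^2 at the origin within two iterations.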
7.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
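A minimal sketch of the step-seeding idea: estimate the local Lipschitz constant of the gradient from two successive iterates and start the line search at the minimizer of the resulting quadratic upper bound. The paper's actual estimate is not given in the abstract, so this is an assumption about its form.

```python
import numpy as np

def lipschitz_seed_step(g, d, x, x_prev, g_prev, eps=1e-12):
    """Estimate the local Lipschitz constant L of the gradient from two
    successive iterates and seed the line search with the minimizer of the
    quadratic upper bound f(x) + t*g^T d + (L/2)*t^2*||d||^2, i.e.
    t = -g^T d / (L*||d||^2). A sketch; the paper's estimate may differ."""
    L = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), eps)
    return -(g @ d) / max(L * (d @ d), eps)
```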
8.
Neculai Andrei 《Journal of Computational and Applied Mathematics》2010,234(12):3397-3410
New accelerated nonlinear conjugate gradient algorithms, which are mainly modifications of Dai and Yuan's scheme, are proposed for unconstrained optimization. With exact line search, the algorithm reduces to the Dai and Yuan conjugate gradient computational scheme. For inexact line search the algorithm satisfies the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary in a very unpredictable manner, the algorithms are equipped with an acceleration scheme able to improve their efficiency. Computational results on a set of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai-Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes-Stiefel, Polak-Ribière-Polyak, and CONMIN conjugate gradient algorithms, and the limited-memory quasi-Newton algorithm L-BFGS, and compare favorably with CG_DESCENT. In this numerical study the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be the more robust.
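One plausible reading of such an acceleration scheme (an assumption, not Andrei's published formulas): fit a linear model to the derivative of phi(t) = f(x + t*alpha*d) from gradients at the two endpoints and rescale the accepted step by the model's minimizer.

```python
import numpy as np

def accelerate_step(grad, x, d, alpha):
    """Rescale an accepted step by the minimizer of a linear model of
    phi'(t), where phi(t) = f(x + t*alpha*d), built from the gradients at
    t = 0 and t = 1. A plausible reading of the acceleration idea, not
    Andrei's published formulas."""
    a = alpha * (grad(x) @ d)                 # phi'(0) < 0 for descent d
    c = alpha * (grad(x + alpha * d) @ d)     # phi'(1)
    if a >= 0 or a == c:
        return alpha                          # no usable information
    t = a / (a - c)                           # zero of the linear model
    return alpha * t if t > 0 else alpha
```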
9.
We introduce an algorithm for unconstrained optimization based on transforming the Newton method with line search into a gradient descent method. The main idea in the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The step-length calculation is based on a Taylor expansion at two successive iterates and a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
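A hedged sketch of the diagonal-Hessian idea: a per-coordinate secant ratio approximates the Hessian diagonal, turning the Newton step into a coordinate-scaled gradient step. The safeguard interval is illustrative, and the paper's own Taylor-based rule differs in detail.

```python
import numpy as np

def diagonal_newton_step(x, x_prev, g, g_prev, lo=1e-4, hi=1e4):
    """Per-coordinate secant approximation of the Hessian diagonal,
    D_ii ~ (g_i - g_prev_i)/(x_i - x_prev_i), safeguarded into [lo, hi] to
    keep it positive definite; the Newton step then reduces to a
    coordinate-scaled gradient step."""
    s, y = x - x_prev, g - g_prev
    safe_s = np.where(np.abs(s) > 1e-12, s, 1.0)   # avoid divide-by-zero
    diag = np.where(np.abs(s) > 1e-12, y / safe_s, 1.0)
    diag = np.clip(diag, lo, hi)                   # positive, bounded
    return -g / diag
```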
10.
A New Supermemory Gradient Algorithm for Unconstrained Optimization
This paper proposes a new supermemory gradient algorithm for unconstrained optimization. The algorithm takes a linear combination of the negative gradient at the current point and the negative gradient at the previous point as the search direction, and determines the step size by exact line search or by Armijo search. Under very weak conditions the algorithm is proved to be globally convergent with a linear convergence rate. Since the algorithm avoids storing and computing any matrices associated with the objective function, it is suitable for large-scale unconstrained optimization problems. Numerical experiments show that the algorithm is more effective than standard conjugate gradient algorithms.
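A minimal sketch of a supermemory-type direction, assuming a fixed mixing weight (the paper's own parameter rule is not reproduced here):

```python
import numpy as np

def supermemory_direction(g, g_prev, beta=0.5):
    """Search direction as a linear combination of the current and previous
    negative gradients, d = -g - beta*g_prev, falling back to -g if the
    descent condition g^T d < 0 fails. The fixed weight beta is an
    illustrative choice, not the paper's parameter rule."""
    d = -g - beta * g_prev
    return d if g @ d < 0 else -g
```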
11.
《Applied mathematics and computation》1987,23(4):341-357
The flexible polyhedron (simplex) search algorithm is reviewed and some of its shortcomings highlighted. In particular, the fixed search parameters are shown to be a liability, and an improvement is proposed. A unidirectional optimal search algorithm is substituted for the set of fixed rules usually employed to modify the simplex. This modification proves especially effective in dealing with “narrow valley” situations, normally encountered whenever the decision variables exhibit some degree of correlation. The new adaptive algorithm compares well with the parent simplex method, featuring fewer function evaluations and better convergence properties in cases where the classical search techniques perform poorly or fail altogether.
12.
Based on an exponential penalty function, a recently proposed three-term conjugate gradient method for unconstrained optimization is modified and applied to the more difficult large-scale minimax problem. It is proved that the search direction generated by the method is a sufficient descent direction for every smooth subproblem, regardless of the line search rule used. On this basis, an algorithm for large-scale minimax problems is designed, and its global convergence is proved under reasonable assumptions. Numerical experiments show that the algorithm outperforms similar algorithms in the literature.
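The abstract does not state the penalty, but a standard exponential penalty for the minimax objective max_i f_i(x) is the log-sum-exp smoothing below, written in the numerically stable form; treating this as the paper's exact penalty is an assumption.

```python
import numpy as np

def exp_penalty(f_values, p):
    """Log-sum-exp smoothing of the minimax objective max_i f_i(x):
    f_p = (1/p)*log(sum_i exp(p*f_i)), which satisfies
    max_i f_i <= f_p <= max_i f_i + log(m)/p. Factoring out the maximum
    keeps the evaluation numerically stable for large p."""
    f = np.asarray(f_values, dtype=float)
    fmax = f.max()
    return fmax + np.log(np.exp(p * (f - fmax)).sum()) / p
```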
13.
14.
A new family of conjugate gradient methods
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, the Polak-Ribière-Polyak and Liu-Storey conjugate gradient methods are special cases of the new class of conjugate gradient methods. By estimating the local Lipschitz constant of the derivative of the objective function, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
15.
Yu-Hong Dai, José Mario Martínez, Jin-Yun Yuan 《Numerical Linear Algebra with Applications》2003,10(4):323-334
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, interrupting the conjugate gradient sub-algorithm when the angle condition does not hold is theoretically justified.
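A sketch of the resulting safeguard: run CG on the local quadratic model and interrupt as soon as the angle condition fails; since the paper shows the angle only grows along CG iterates, stopping at the first violation is sound. The model Hessian B and the threshold eps_angle are illustrative inputs.

```python
import numpy as np

def cg_with_angle_condition(B, g, eps_angle=1e-2, tol=1e-8, max_iter=50):
    """Truncated CG on the quadratic model q(d) = g^T d + d^T B d / 2,
    interrupted at the first iterate violating the angle condition
    cos(angle(-g, d)) >= eps_angle."""
    d = np.zeros_like(g)
    r = -g                               # residual of B d = -g
    p = r.copy()
    for _ in range(max_iter):
        Bp = B @ p
        pBp = p @ Bp
        if pBp <= 0:
            break                        # nonpositive curvature: stop
        step = (r @ r) / pBp
        d_try = d + step * p
        cosang = -(g @ d_try) / (np.linalg.norm(g) * np.linalg.norm(d_try))
        if cosang < eps_angle:
            break                        # angle condition lost: keep d
        d = d_try
        r_new = r - step * Bp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d if d.any() else -g          # fall back to steepest descent
```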
16.
17.
Snežana S. Djordjević 《Acta Mathematica Scientia (Series B, English Edition)》2019,(1)
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao (D-L) conjugacy condition and at the same time agrees with the Newton direction under a suitable condition; furthermore, this property does not depend on any line search. We then prove that, by modulating the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. The global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
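The convex-combination rule itself is easy to state; in the sketch below beta_LS and beta_FR are the standard Liu-Storey and Fletcher-Reeves coefficients, and the choice of t (which the paper ties to the Dai-Liao condition) is left as an input.

```python
import numpy as np

def hybrid_ls_fr_beta(g, g_prev, d_prev, t, eps=1e-12):
    """Convex combination beta = (1 - t)*beta_LS + t*beta_FR of the standard
    Liu-Storey and Fletcher-Reeves coefficients; the paper's rule for
    choosing t via the Dai-Liao condition is not reproduced here."""
    y = g - g_prev
    beta_ls = (g @ y) / max(-(d_prev @ g_prev), eps)   # Liu-Storey
    beta_fr = (g @ g) / max(g_prev @ g_prev, eps)      # Fletcher-Reeves
    t = min(max(t, 0.0), 1.0)                          # keep it convex
    return (1.0 - t) * beta_ls + t * beta_fr
```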
18.
For the parameter in the search direction of the supermemory gradient algorithm for unconstrained optimization, an assumption is given that determines a new range of values for the parameter and guarantees that the search direction is a sufficient descent direction of the objective function; on this basis a new class of memory gradient algorithms is proposed. Global convergence of the algorithms is discussed under the Armijo step-size search, without assuming boundedness of the sequence of iterates, and modified forms of the memory gradient method combined with the FR, PR, and HS conjugate gradient formulas are given. Numerical experiments show that the new algorithms are more stable and more effective than the FR, PR, and HS conjugate gradient methods and the supermemory gradient method under the Armijo line search.
19.
A new adaptive subspace minimization three-term conjugate gradient algorithm with nonmonotone line search is introduced and analyzed in this paper. The search directions are computed by minimizing a quadratic approximation of the objective function on special subspaces, and we also propose an adaptive rule for choosing different search directions at each iteration. We obtain the significant conclusion that each choice of the search direction satisfies the sufficient descent condition. With the nonmonotone line search used, we prove that the new algorithm is globally convergent for general nonlinear functions under some mild assumptions. Numerical experiments show that the proposed algorithm is promising for the given test problem set.
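As a toy version of subspace minimization, the sketch below minimizes the simple model m(d) = g^T d + (sigma/2)*||d||^2 over the span of the columns of S (for instance [-g, s_{k-1}, y_{k-1}]); the paper's quadratic model and its adaptive choice among subspaces are richer than this scalar-curvature stand-in.

```python
import numpy as np

def subspace_direction(g, S, sigma, ridge=1e-10):
    """Minimize the toy model m(d) = g^T d + (sigma/2)*||d||^2 over the
    subspace spanned by the columns of S; with d = S c this reduces to the
    small system sigma*(S^T S) c = -S^T g. sigma (a scalar curvature
    estimate, e.g. a BB quotient) and the basis S are illustrative."""
    A = sigma * (S.T @ S) + ridge * np.eye(S.shape[1])
    c = np.linalg.solve(A, -(S.T @ g))
    return S @ c
```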
20.
Neculai Andrei 《Computational Optimization and Applications》2007,38(3):401-416
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG by Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale-Powell. The parameter scaling the gradient is selected as a spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set consisting of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm.
The author was awarded the Romanian Academy Grant 168/2003.
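The spectral scaling parameter mentioned above is, in its standard form, the Barzilai-Borwein quotient theta = s^T s / s^T y; a safeguarded sketch follows (the paper's anticipative variant using function values at two successive points is not shown).

```python
import numpy as np

def spectral_theta(s, y, lo=1e-4, hi=1e4):
    """Spectral (Barzilai-Borwein) scaling parameter theta = s^T s / s^T y
    for the gradient term of the search direction, safeguarded into a
    positive interval."""
    sty = s @ y
    if sty <= 0:
        return 1.0                       # curvature test failed: no scaling
    return float(np.clip((s @ s) / sty, lo, hi))
```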