Similar Articles
20 similar articles found.
1.
In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically on five nonquadratic functions. In order to investigate the effects of the stepsize on the performance of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms that use one-dimensional searches for optimal stepsizes.

2.
Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization
For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in practical computation. In this paper we interpret the choice of stepsize in the two-point stepsize gradient method from the viewpoint of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
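The classical two-point (Barzilai-Borwein) stepsize that this entry builds on has a compact form: alpha_k = s^T s / s^T y with s = x_k - x_{k-1} and y = g_k - g_{k-1}. The following is a minimal sketch of the unmodified method only, not the paper's two modifications; `grad_f`, the fallback stepsize, and the positivity guard on s^T y are assumptions of the sketch.

```python
import numpy as np

def bb_gradient_method(grad_f, x0, max_iter=500, tol=1e-8):
    """Gradient descent with the classical Barzilai-Borwein (two-point)
    stepsize alpha_k = s^T s / s^T y, where s = x_k - x_{k-1} and
    y = g_k - g_{k-1}.  A sketch of the unmodified method only."""
    x = x0.astype(float)
    g = grad_f(x)
    alpha = 1e-4                     # conservative first step (no history yet)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1e-4  # guard against s^T y <= 0
        x, g = x_new, g_new
    return x
```

The guard simply restarts with a small step whenever s^T y is not safely positive; seeding the first step with a short backtracking search is also common.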

3.
This paper proposes two fixed-stepsize projection algorithms for solving pseudomonotone variational inequalities, in contrast to the variable-stepsize strategies of Solodov & Tseng (1996) and He (1997). We prove global convergence of the algorithms and, under certain conditions, their Q-linear convergence.

4.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with a nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters of the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
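For reference, here is a sketch of the standard nonmonotone (Grippo-Lampariello-Lucidi) Armijo test with a fixed memory M, the parameter whose tuning motivates the paper; the adaptive rule itself is not given in the abstract and is not reproduced. `f`, `history` (a deque of the last M function values), and the backtracking constants are assumptions of the sketch.

```python
from collections import deque
import numpy as np

def nonmonotone_armijo(f, x, d, g, history, M=10, c=1e-4, beta=0.5, t0=1.0):
    """Fixed-memory nonmonotone Armijo test: accept t once
    f(x + t d) <= max of the last M function values + c t g.d"""
    f_ref = max(history)             # reference value over the last M iterates
    t = t0
    while f(x + t * d) > f_ref + c * t * (g @ d):
        t *= beta                    # backtrack
    history.append(f(x + t * d))
    if len(history) > M:
        history.popleft()            # keep only the last M function values
    return t
```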

5.
A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein, whose method is proved to be superlinearly convergent for convex quadratics in two-dimensional space and performs quite well for high-dimensional problems. The Barzilai-Borwein (BB) method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae that enable fast convergence and possess the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within three iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large-scale problems and better for small-scale problems.
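The exact line search that the proposed algorithm interleaves with its new stepsize has a closed form on quadratics: for f(x) = 0.5 x^T A x - b^T x, the minimizer along -g is alpha = g^T g / g^T A g. A sketch of this baseline follows (the paper's new stepsize formula is not given in the abstract and is not reproduced).

```python
import numpy as np

def steepest_descent_quadratic(A, b, x0, tol=1e-10, max_iter=1000):
    """Steepest descent on f(x) = 0.5 x^T A x - b^T x (A symmetric positive
    definite) with the exact stepsize alpha = g^T g / g^T A g.  This is the
    slow baseline the paper improves by interleaving a new stepsize."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = A @ x - b                     # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))   # exact minimizer along -g
        x = x - alpha * g
    return x
```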

6.
In this paper, variable stepsize multistep methods for delay differential equations of the type y′(t) = f(t, y(t), y(t − τ)) are proposed. Error bounds for the global discretization error of variable stepsize multistep methods for delay differential equations are computed explicitly. It is proved that a variable stepsize multistep method that is a perturbation of a strongly stable fixed-stepsize method is convergent.
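To illustrate the multistep structure only (fixed stepsize; the paper's variable-stepsize methods and error bounds are not reproduced), here is a two-step Adams-Bashforth sketch for y′(t) = f(t, y(t), y(t − τ)), under the simplifying assumption that τ is an exact multiple of h, so the delayed value is read off the stored grid rather than interpolated.

```python
def ab2_delay(f, history, t0, t_end, tau, h):
    """Two-step Adams-Bashforth for y'(t) = f(t, y(t), y(t - tau)).
    Fixed stepsize h with tau an exact multiple of h (tau >= h), so the
    delayed value y(t - tau) is a stored grid value; a variable-stepsize
    method would interpolate instead.  `history(t)` gives y on [t0-tau, t0]."""
    d = int(round(tau / h))                    # delay measured in grid steps
    ts = [t0 + i * h for i in range(int((t_end - t0) / h) + 1)]
    ys = [history(t0 - tau + i * h) for i in range(d + 1)]  # prehistory + y(t0)
    # ys[k + d] holds y(ts[k]); ys[k] holds the delayed value y(ts[k] - tau)
    fs = [f(ts[0], ys[d], ys[0])]
    ys.append(ys[d] + h * fs[0])               # one Euler step to start
    fs.append(f(ts[1], ys[d + 1], ys[1]))
    for k in range(1, len(ts) - 1):
        y_next = ys[d + k] + h * (1.5 * fs[k] - 0.5 * fs[k - 1])
        ys.append(y_next)
        fs.append(f(ts[k + 1], y_next, ys[k + 1]))
    return ts, ys[d:]
```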

7.
In this paper, we propose a globally convergent Polak-Ribière-Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions, employing an Armijo-type line search that is simpler and less demanding than those defined in [4,10]. An attractive property of this method is that the initial stepsize can be chosen as the one-dimensional minimizer of the quadratic model Φ(t) := f(x_k) + t g_k^T d_k + (1/2) t^2 d_k^T Q_k d_k, where Q_k is a positive definite matrix that carries some second-order information about the objective function f. This line search may therefore make the stepsize t_k more easily accepted. Preliminary numerical results show that this method is efficient.
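A sketch of a PRP conjugate gradient loop with Armijo backtracking started from the quadratic-model minimizer; here Q_k is taken as the identity for simplicity (the paper's Q_k carries second-order information), and the nonnegative truncation of β and the descent restart are common safeguards, not necessarily the paper's exact rules.

```python
import numpy as np

def prp_cg(f, grad_f, x0, max_iter=1000, tol=1e-6, c=1e-4, beta_bt=0.5):
    """PRP conjugate gradient with Armijo backtracking; the initial trial
    stepsize minimizes phi(t) = f(x) + t g.d + 0.5 t^2 d^T Q d with Q = I."""
    x = x0.astype(float)
    g = grad_f(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                    # safeguard: restart with steepest descent
            d = -g
        t = -(g @ d) / (d @ d)            # minimizer of the model with Q = I
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= beta_bt                   # Armijo backtracking
        x_new = x + t * d
        g_new = grad_f(x_new)
        beta_prp = g_new @ (g_new - g) / (g @ g)   # PRP formula
        d = -g_new + max(beta_prp, 0.0) * d        # PRP+ truncation for safety
        x, g = x_new, g_new
    return x
```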

8.
This paper presents a theoretical result on the convergence of a primal affine-scaling method for convex quadratic programs. It is shown that, as long as the stepsize is less than a threshold value that depends only on the input data, Ye and Tse's interior ellipsoid algorithm for convex quadratic programming is globally convergent without nondegeneracy assumptions. In addition, its local convergence rate is at least linear, and the dual iterates converge ergodically. Research supported in part by the NSF under grant DDM-8721709.

9.
It is well known (see Pang and Chan [8]) that Newton's method, applied to strongly monotone variational inequalities, is locally and quadratically convergent. In this paper we show that Newton's method yields a descent direction for a nonconvex, nondifferentiable merit function, even in the absence of strong monotonicity. This result is then used to modify Newton's method into a globally convergent algorithm by introducing a line search strategy. Furthermore, under strong monotonicity, (i) the optimal face is attained after a finite number of iterations, and (ii) the stepsize is eventually fixed to the value one, resulting in the usual Newton step. Computational results are presented.

10.
In this paper, a new projection method for solving a system of nonlinear equations with convex constraints is presented. Compared with existing projection methods for this problem, the projection region in the new algorithm is modified so that an optimal stepsize is available at each iteration, which guarantees that the next iterate is closer to the solution set. Under mild conditions, we show that the method is globally convergent, and if an error bound assumption holds in addition, it is shown to be superlinearly convergent. Preliminary numerical experiments also show that this method is more efficient and promising than the existing projection method. This work was done while Yiju Wang was visiting Chongqing Normal University.
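The general hyperplane-projection pattern that such methods follow can be sketched as below: take a trial point, cut with a separating hyperplane, then project back onto the constraint set. This is the generic scheme, not the paper's modified projection region, and the Armijo-type search for the trial point is omitted; `proj_C` is an assumed user-supplied Euclidean projector onto the convex set.

```python
import numpy as np

def hyperplane_projection(F, proj_C, x0, beta=1.0, max_iter=500, tol=1e-8):
    """Generic hyperplane-projection scheme for F(x) = 0 over a convex set C:
    trial point z = P_C(x - beta F(x)), then project x onto the hyperplane
    through z with normal F(z), then back onto C."""
    x = proj_C(x0.astype(float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        z = proj_C(x - beta * Fx)           # trial point inside C
        Fz = F(z)
        denom = Fz @ Fz
        if denom < 1e-16:
            break                           # F vanishes at the trial point
        alpha = Fz @ (x - z) / denom        # stepsize projecting onto the cut
        x = proj_C(x - alpha * Fz)
    return x
```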

11.
This paper proposes a new ODE-type trust-region algorithm for LC^1 optimization problems with linear equality constraints. At each iteration, the trial step is obtained by solving only a system of linear equations rather than a subproblem with a trust-region bound, which reduces computational complexity and improves computational efficiency. Under certain conditions, the algorithm is also shown to be superlinearly convergent.

12.
欧宜贵  侯定丕 《数学季刊》2003,18(2):140-145
In this paper, a new trust-region algorithm for unconstrained LC1 optimization problems is given. Compared with existing trust-region methods, this algorithm has a distinctive feature: it obtains a trial step at each iteration not by solving a quadratic subproblem with a trust-region bound, but by solving a system of linear equations. Thus it reduces computational complexity and improves computational efficiency. It is proven that this algorithm is globally convergent and locally superlinearly convergent under some conditions.

13.
Summary. This paper deals with the subject of numerical stability for the neutral functional-differential equation […]. It is proved that numerical solutions generated by θ-methods are convergent if […]. However, our numerical experiments suggest that they are divergent when […] is large. In order to obtain convergent numerical solutions when […], we use θ-methods to obtain approximants to some high-order derivative of the exact solution, then use the Taylor expansion with integral remainder to obtain approximants to the exact solution. Since the equation under consideration has unbounded time lags, it is in general difficult to investigate numerically the long-time dynamical behaviour of the exact solution, owing to limited computer (random access) memory. To avoid this problem we transform the equation under consideration into a neutral equation with constant time lags. Using the latter equation as a test model, we prove that the linear θ-method is Λ-stable, i.e., the numerical solution tends to zero for any constant stepsize as long as […] and […], if and only if […], and that the one-leg θ-method is Λ-stable if […]. We also find that an inappropriate stepsize causes spurious solutions in the marginal case where […] and […]. Received May 6, 1994
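Since the equation and the stability conditions are lost from this extract, the sketch below only illustrates a linear θ-method on the scalar, non-neutral pantograph equation y′(t) = a·y(t) + b·y(qt), 0 < q < 1, a standard example of an equation with unbounded time lags; the choice of θ and the interpolation closure for the delayed value at the new grid point are assumptions of the sketch.

```python
import numpy as np

def theta_method_pantograph(a, b, q, y0, h, n_steps, theta=0.5):
    """Linear theta-method for y'(t) = a y(t) + b y(q t), 0 < q < 1.
    The delayed value y(q t) is obtained by linear interpolation on the
    grid computed so far (q t <= t always holds)."""
    ts = np.arange(n_steps + 1) * h
    ys = np.empty(n_steps + 1)
    ys[0] = y0
    for n in range(n_steps):
        yq_n  = np.interp(q * ts[n],     ts[:n + 1], ys[:n + 1])
        # q*t_{n+1} <= t_n only once n >= q/(1-q); np.interp clamps to the
        # newest known value before that (a crude closure, assumed here)
        yq_n1 = np.interp(q * ts[n + 1], ts[:n + 1], ys[:n + 1])
        rhs = ys[n] + h * ((1 - theta) * (a * ys[n] + b * yq_n)
                           + theta * b * yq_n1)
        ys[n + 1] = rhs / (1 - h * theta * a)   # implicit in y_{n+1} only
    return ts, ys
```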

14.
Notes on the Dai-Yuan-Yuan modified spectral gradient method
In this paper, we give some notes on the two modified spectral gradient methods developed in [10]. These notes present the relationship between their stepsize formulae and some new secant equations in the quasi-Newton method. In particular, we also introduce two further new choices of stepsize. By using an efficient nonmonotone line search technique, we propose some new spectral gradient methods. Under some mild conditions, we show that the proposed methods are globally convergent. Numerical experiments on a large number of test problems from the CUTEr library are also reported, which show the efficiency of the proposed methods.

15.
On Stepsize Rules for the Extragradient Method
修乃华  王长钰 《计算数学》2000,22(2):197-208
1. Introduction. Let Ω be a nonempty closed convex subset of R^n and let F(x) be a continuous vector function from R^n to R^n. The variational inequality problem VI(F, Ω) is to find a vector x* ∈ Ω such that (x − x*)^T F(x*) ≥ 0 for all x ∈ Ω. (1.1) When Ω = R^n_+, (1.1) reduces to the nonlinear complementarity problem. Throughout this paper we assume: (H1) the solution set of (1.1) is nonempty; (H2) F(x) is monotone, i.e., (x − y)^T (F(x) − F(y)) ≥ 0 for all x, y. Problems of this kind arise in engineering, physics, economics, and management, and have extremely wide applications; their numerical solution has therefore received much attention in recent years, and many effective algorithms have been proposed, see the surveys [1, 2]. Among the existing algorithms, Korpelevich's extragradient method [3] (which He Bingsheng calls the projection…
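Korpelevich's extragradient method mentioned above (whose stepsize rule this entry studies) is a two-projection scheme; classically a fixed stepsize τ suffices when τL < 1 for the Lipschitz constant L of F. A minimal sketch, with `proj` an assumed Euclidean projector onto Ω:

```python
import numpy as np

def extragradient(F, proj, x0, tau=0.1, max_iter=1000, tol=1e-8):
    """Korpelevich's extragradient method for VI(F, Omega): a predictor
    projection followed by a corrector projection, both with fixed stepsize
    tau.  `proj` is the Euclidean projector onto Omega."""
    x = proj(x0.astype(float))
    for _ in range(max_iter):
        x_bar = proj(x - tau * F(x))        # predictor step
        x_new = proj(x - tau * F(x_bar))    # corrector uses F at the predictor
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```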

16.
A Brief Survey of Gradient Methods
孙聪  张亚 《运筹学学报》2021,25(3):119-132
Gradient methods are first-order methods for solving optimization problems. Because of their simple form and low per-iteration cost, they are widely used for solving large-scale problems. This paper systematically introduces the iteration scheme and theoretical framework of gradient methods for smooth unconstrained problems. The most important parameter of a gradient method is the stepsize, whose choice directly determines the convergence properties and convergence rate of the method. We describe the ideas behind stepsize constructions, and the corresponding convergence results, from four perspectives: line-search frameworks, approximation techniques, stochastic techniques, and alternating or cyclically repeated stepsizes. We also briefly introduce extensions such as gradient methods for nonsmooth and constrained problems, acceleration techniques, and stochastic gradient methods.
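As a concrete instance of the line-search branch of the survey, a sketch of plain gradient descent with Armijo backtracking (the constants are illustrative defaults):

```python
import numpy as np

def gradient_descent_backtracking(f, grad_f, x0, t0=1.0, c=1e-4, beta=0.5,
                                  tol=1e-8, max_iter=1000):
    """Plain gradient descent with an Armijo backtracking line search."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        t = t0
        while f(x - t * g) > f(x) - c * t * (g @ g):   # Armijo condition
            t *= beta                                   # backtrack
        x = x - t * g
    return x
```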

17.
A potential reduction algorithm is proposed for the minimization of a convex function subject to linear constraints. At each step of the algorithm, a system of linear equations is solved to get a search direction, and Armijo's rule is used to determine a stepsize. It is proved that the algorithm is globally convergent. Computational results are reported.
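The "linear system for the direction, Armijo's rule for the stepsize" pattern described here can be illustrated generically with a damped Newton step on a smooth convex function; the paper's actual linear system involves its potential function and the linear constraints, which the abstract does not specify.

```python
import numpy as np

def damped_newton(f, grad_f, hess_f, x0, c=1e-4, beta=0.5, tol=1e-8,
                  max_iter=100):
    """Generic pattern: solve a linear system for the direction, then apply
    Armijo's rule for the stepsize (illustrated with a damped Newton step)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess_f(x), -g)   # direction from a linear system
        t = 1.0
        while f(x + t * d) > f(x) + c * t * (g @ d):   # Armijo's rule
            t *= beta
        x = x + t * d
    return x
```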

18.
A Potential Reduction Interior-Point Algorithm for Convex Quadratic Programming
1. Introduction. The solution of quadratic programming problems is an important topic in mathematical programming and in industrial applications, and it is also the key step of sequential quadratic programming algorithms for general nonlinear programming. An early technique for quadratic programming was to solve the KKT optimality conditions of the problem with the simplex method of linear programming [1]. Such algorithms are intuitive, but when inequality constraints are handled, the slack variables introduced can easily slow the solution process markedly. Active-set strategies are another major class of techniques for quadratic programming; these methods are generally stable, but their convergence becomes slower and slower as the number of inequality constraints grows [2]. Reduced-space techniques project the Hessian of the problem onto the subspace of the free variables …

19.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

20.
A potential reduction algorithm is proposed for the solution of monotone variational inequality problems. At each step of the algorithm, a system of linear equations is solved to get the search direction, and Armijo's rule is used to determine the stepsize. It is proved that the algorithm is globally convergent. Computational results are reported.
