Related Articles
20 similar articles found (search time: 187 ms)
1.
Compared with other quasi-Newton update formulas, the SR1 update is simpler and requires less computation per iteration. However, the convergence of the standard SR1 update is usually established under the strong assumption of uniform linear independence. Building on previous work, this paper proposes a new modified SR1 formula and proves its local convergence both with and without the uniform linear independence assumption. Numerical experiments verify the effectiveness of the proposed update and the reasonableness of the assumptions made. The experimental data show that, under certain conditions, a quasi-Newton algorithm based on the proposed update converges better than one based on the classical SR1 update.
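The classical SR1 update that this entry modifies can be sketched as follows (a minimal illustration in Python/NumPy; the skipping tolerance is a standard safeguard, not the paper's modified formula):

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Classical SR1 update: B+ = B + r r^T / (r^T s), with r = y - B s.

    The update is skipped when the denominator r^T s is too small;
    this safeguard is common practice, not the modified formula of
    the entry above.
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # skip the update to avoid numerical blow-up
    return B + np.outer(r, r) / denom

# On a quadratic with Hessian A, a single update enforces the
# secant equation B+ s = y exactly.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
s = np.array([1.0, 2.0])
y = A @ s
B1 = sr1_update(np.eye(2), s, y)
```

Unlike BFGS, the SR1 update needs no curvature condition yᵀs > 0, which is one reason it is cheap per iteration, but also why its classical convergence theory rests on the uniform linear independence assumption mentioned above.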

2.
The trust-region method is an optimization algorithm with guaranteed global convergence. To avoid computing the Hessian matrix, a truncated quasi-Newton trust-region method for nonlinear programs with linear equality constraints is constructed from quasi-Newton update formulas. The construction and detailed steps of the truncated quasi-Newton trust-region method are given first; the algorithm is then adapted to the particular variables and constraints of the stochastic user equilibrium model, and the results obtained with several quasi-Newton updates are compared with those of the Newton-type trust-region method. The comparison shows that the trust-region method based on the symmetric rank-one update is the most suitable. Finally, numerical examples yield several conclusions that matter when programming the algorithm and that provide a useful reference for implementing other trust-region methods.

3.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a nonmonotone variable-metric gradient projection algorithm with a two-stage step size is developed for optimization problems with convex-set constraints. Global convergence is proved, as is a Q-superlinear convergence rate under certain conditions. Numerical results show that the new algorithm is effective and well suited to large-scale problems.

4.
This paper presents a projected trust-region algorithm for bound-constrained semismooth systems of equations that requires no nonsingularity assumption. A Newton-like step is obtained from a regularized subproblem, from which a projected Newton step is then computed. Under reasonable assumptions, the algorithm is shown to be globally convergent while retaining a superlinear convergence rate.

5.
A class of modified BFGS algorithms and their convergence analysis   Total citations: 6 (self-citations: 0, by others: 6)
For unconstrained optimization, this paper proposes a class of modified BFGS algorithms, called MBFGS, based on a local quadratic model of the objective function. The update formula for B_k contains a parameter θ ∈ [0, 1]: θ = 1 recovers the classical BFGS formula, while for θ ∈ [0, 1) the resulting formula no longer belongs to the quasi-Newton class. Under the assumption that the objective function is uniformly convex, global convergence and local superlinear convergence of the algorithm are proved.
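One way such a θ-parameterized family can be realized is to blend the curvature vector y_k with B_k s_k inside the standard BFGS formula; the blend below is an illustrative sketch, not necessarily the paper's MBFGS formula, but it has the stated property that θ = 1 recovers classical BFGS while θ < 1 breaks the secant equation:

```python
import numpy as np

def mbfgs_like_update(B, s, y, theta=1.0):
    """BFGS-type update with a parameter theta in [0, 1].

    theta = 1 gives the classical BFGS formula. For theta < 1 the
    blended vector yt no longer satisfies the secant equation
    B+ s = y, so the update leaves the quasi-Newton class. (This
    blend is an illustration, not the paper's exact MBFGS formula.)
    """
    yt = theta * y + (1.0 - theta) * (B @ s)
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)
            + np.outer(yt, yt) / (yt @ s))
```

The two rank-one corrections keep the update symmetric for any θ, and positive definiteness is preserved as long as the blended curvature ytᵀs stays positive.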

6.
This paper presents a preconditioned inexact Newton-type method combined with a nonmonotone technique for solving smooth systems of nonlinear equations. Global convergence of the algorithm is proved under reasonable conditions. Furthermore, based on the convergence properties of the preconditioning, the local convergence rate of the algorithm is obtained, and it is shown how to choose the forcing sequence so that the preconditioned inexact Newton-type algorithm attains a local superlinear convergence rate.

7.
By introducing a class of smooth merit functions based on the mid-function, a new smooth merit function for box-constrained variational inequalities is constructed; it has a simple form and good differentiability properties. A damped Newton algorithm for box-constrained variational inequalities is then given based on this function. Under rather weak conditions, global convergence and a local superlinear convergence rate are proved, as well as finite termination for linear box-constrained variational inequalities. Numerical results demonstrate that the algorithm is reliable and efficient in practice.

8.
Convergence of the modified PSB quasi-Newton update matrices   Total citations: 1 (self-citations: 0, by others: 1)
Based on the previously established class of new quasi-Newton equations B_{k+1} δ_k = ȳ_k = y_k + (θ_k / δ_k^T u) u, this paper proves that the sequence of quasi-Newton update matrices generated by the modified PSB algorithm satisfying the new quasi-Newton equation converges to the Hessian matrix G(x^*), provided that {x_k} converges to x^*, {δ_k} is uniformly linearly independent, and the second-derivative matrix is continuous and bounded.

9.
This paper defines a piecewise-linear NCP function and, for nonlinearly constrained optimization problems, proposes a QP-free infeasible algorithm built on it. Using the first-order KKT conditions of the optimization problem, the multipliers, and the NCP function, a corresponding system of nonsmooth equations is obtained. An algorithm for solving this nonsmooth system is given; it involves the primal-dual variables and can be viewed locally as a perturbed Newton/quasi-Newton iteration. A filter method is employed in the line search. The proposed algorithm is implementable and globally convergent, and under suitable assumptions it is superlinearly convergent.

10.
A new affine-scaling interior-point trust-region method with a nonmonotone interior backtracking line search is presented for generalized nonlinear complementarity problems (GCP) with linear inequality constraints. Based on the generalized Jacobian of the semismooth system of equations arising from the generalized complementarity problem, the algorithm uses the l_2 norm of the semismooth system as a merit function, and the trust-region subproblem is a linearized quadratic model with an ellipsoidal constraint. Trial steps are computed from the generalized Newton equation, and the interior-point backtracking technique ensures that the iterates remain strictly interior, which guarantees global convergence of the algorithm. Under reasonable conditions, the trust-region algorithm is shown to reduce to a generalized quasi-Newton step near the optimum and hence to attain a local superlinear convergence rate. The nonmonotone technique accelerates convergence in highly nonlinear situations. Finally, numerical results demonstrate the effectiveness of the algorithm.

11.
Quasi-Newton algorithms for unconstrained nonlinear minimization generate a sequence of matrices that can be considered as approximations of the objective function second derivatives. This paper gives conditions under which these approximations can be proved to converge globally to the true Hessian matrix, in the case where the Symmetric Rank One update formula is used. The rate of convergence is also examined and proven to be improving with the rate of convergence of the underlying iterates. The theory is confirmed by some numerical experiments that also show the convergence of the Hessian approximations to be substantially slower for other known quasi-Newton formulae. The work of this author was supported by the National Sciences and Engineering Research Council of Canada, and by the Information Technology Research Centre, which is funded by the Province of Ontario.

12.
Based on a second-order quasi-Newton equation, combined with the nonmonotone line search proposed by Zhang H.C., a diagonal second-order quasi-Newton algorithm is constructed for large-scale unconstrained optimization. At each iteration the inverse of the Hessian is approximated by a diagonal matrix, which markedly reduces the storage and work needed to compute the search direction and offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, global convergence and superlinear convergence of the algorithm are proved. Numerical experiments show that the algorithm is effective and feasible.

13.
In this paper, a switching method for unconstrained minimization is proposed. The method is based on the modified BFGS method and the modified SR1 method. The eigenvalues and condition numbers of both modified updates are evaluated and used in the switching rule. When the condition number of the modified SR1 update is superior to that of the modified BFGS update, the step in the proposed quasi-Newton method is the modified SR1 step; otherwise the step is the modified BFGS step. The efficiency of the proposed method is tested by numerical experiments on small-, medium- and large-scale optimization problems. The numerical results are reported and analyzed to show the superiority of the proposed method.
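The switching rule described above can be sketched by comparing the condition numbers of the two candidate updates (an illustration only; the paper's modified updates and exact rule may differ, and the function names here are hypothetical):

```python
import numpy as np

def choose_step_matrix(B_bfgs, B_sr1):
    """Condition-number switching rule sketch: prefer the SR1
    candidate when it is positive definite and better conditioned
    than the BFGS candidate; otherwise fall back to BFGS."""
    eig_sr1 = np.linalg.eigvalsh(B_sr1)   # eigenvalues, ascending
    if eig_sr1[0] <= 0.0:                 # SR1 lost definiteness
        return "bfgs", B_bfgs
    eig_bfgs = np.linalg.eigvalsh(B_bfgs)
    cond_sr1 = eig_sr1[-1] / eig_sr1[0]
    cond_bfgs = eig_bfgs[-1] / eig_bfgs[0]
    if cond_sr1 < cond_bfgs:
        return "sr1", B_sr1
    return "bfgs", B_bfgs
```

The quasi-Newton step then solves B d = -g with whichever matrix was chosen; the explicit eigenvalue computation above is for clarity and would be replaced by cheaper estimates in a practical code.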

14.
In this paper, the necessary optimality conditions for an unconstrained optimal control problem are used to derive a quasi-Newton method where the update involves only second-order derivative terms. A pointwise update which was presented in a previous paper by the authors is changed to allow for more general second-order sufficiency conditions in the control problem. In particular, pointwise versions of the Broyden, PSB, and SR1 update are considered. A convergence rate theorem is given for the Broyden and PSB versions. This research was supported by NSF Grant No. DMS-89-00410, by NSF Grant No. INT-88-00560, by AFOSR Grant No. AFOSR-89-0044, and by the Deutsche Forschungsgemeinschaft.

15.
Quasi-Newton methods are powerful techniques for solving unconstrained minimization problems. Variable metric methods, which include the BFGS and DFP methods, generate dense positive definite approximations and, therefore, are not applicable to large-scale problems. To overcome this difficulty, a sparse quasi-Newton update with positive definite matrix completion that exploits the sparsity pattern of the Hessian is proposed. The proposed method first calculates a partial approximate Hessian, using an existing quasi-Newton update formula such as the BFGS or DFP methods. Next, a full matrix H_{k+1}, which is a maximum-determinant positive definite matrix completion of this partial approximation, is obtained. If the sparsity pattern E (or its extension F) has a property related to a chordal graph, then the matrix H_{k+1} can be expressed as products of some sparse matrices. The time and space requirements of the proposed method are lower than those of the BFGS or the DFP methods. In particular, when the Hessian matrix is tridiagonal, the complexities become O(n). The proposed method is shown to have superlinear convergence under the usual assumptions.

16.
The symmetric rank-one (SR1) formula is one of the competitive updates among the quasi-Newton (QN) methods. In this paper, we propose some modified SR1 updates based on modified secant equations, which use both gradient and function information. Furthermore, to avoid the loss of positive definiteness and zero denominators of the new SR1 updates, we apply a restart procedure to the update. Three new algorithms are given to improve the Hessian approximation with modified secant equations for the SR1 method. Numerical results show that the proposed algorithms are very encouraging and that their advantage over the standard SR1 and BFGS updates is clearly observed.

17.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; besides, the amount of storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved satisfactory for large-scale problems, is not recommended for very large-scale problems, since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since the matrix-vector product is the vector to be approximated; it uses the reduced gradient and information from two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.

18.
This paper concerns the memoryless quasi-Newton method, that is, the quasi-Newton method for which the approximation to the inverse of the Hessian is, at each step, updated from the identity matrix. Hence its search direction can be computed without storing matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is developed. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. We therefore propose a memoryless SR1 method that is updated from a positively scaled identity, where the scaling factor is derived so that positive definiteness of the updated matrices is preserved while the conditioning of the scaled memoryless SR1 update is improved. Under very mild conditions it is shown that, for strictly convex objective functions, the method is globally convergent with a linear rate of convergence. Numerical results show that the optimally scaled memoryless SR1 method is very encouraging.
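A memoryless SR1 search direction can be formed from vectors alone by applying the inverse SR1 update to a scaled identity γI (a sketch; the paper's safeguarded derivation of the scaling factor is not reproduced here, so γ is simply an input):

```python
import numpy as np

def memoryless_sr1_direction(g, s, y, gamma):
    """Search direction d = -H g of a memoryless SR1 method, where
    H is the inverse SR1 update of the scaled identity gamma * I:

        H = gamma I + r r^T / (r^T y),   r = s - gamma y.

    Only vectors are stored; no n-by-n matrix is ever formed.
    gamma > 0 is the scaling factor (its safeguarded choice is the
    paper's contribution and is not reproduced in this sketch)."""
    r = s - gamma * y
    denom = r @ y
    if abs(denom) < 1e-12:       # degenerate update: fall back to
        return -gamma * g        # the scaled steepest-descent step
    return -(gamma * g + ((r @ g) / denom) * r)
```

By construction H satisfies the inverse secant equation H y = s whenever the update is not skipped; for example, calling the function with g = y returns exactly -s.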

19.
Multi-step quasi-Newton methods for optimization   Total citations: 4 (self-citations: 0, by others: 4)
Quasi-Newton methods update, at each iteration, the existing Hessian approximation (or its inverse) by means of data deriving from the step just completed. We show how “multi-step” methods (employing, in addition, data from previous iterations) may be constructed by means of interpolating polynomials, leading to a generalization of the “secant” (or “quasi-Newton”) equation. The issue of positive-definiteness in the Hessian approximation is addressed and shown to depend on a generalized version of the condition which is required to hold in the original “single-step” methods. The results of extensive numerical experimentation indicate strongly that computational advantages can accrue from such an approach (by comparison with “single-step” methods), particularly as the dimension of the problem increases.

20.
A nonmonotone Perry-Shanno memoryless quasi-Newton method with a parameter is given; for convex objective functions, global convergence of the algorithm is proved when the parameter lies within a suitable range.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号