Similar Articles (20 results)
1.
In this paper, we describe an application of the planar conjugate gradient method introduced in Part 1 (Ref. 1), aimed at solving indefinite nonsingular sets of linear equations. We prove that it can be used fruitfully within optimization frameworks; in particular, we present a globally convergent truncated Newton scheme which uses the above planar method for solving the Newton equation. Finally, our approach is tested on several problems from the CUTE collection (Ref. 2). This work was supported by MIUR, FIRB Research Program on Large-Scale Nonlinear Optimization, Rome, Italy. The author acknowledges Luigi Grippo and Stefano Lucidi, who contributed considerably to the elaboration of this paper. The exchange of experiences with Massimo Roma was a constant help in the investigation. The author expresses his gratitude to the Associate Editor and the referees for suggestions and corrections.
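The following sketch illustrates the truncated Newton framework referred to above, with the Newton equation solved approximately by a standard inner conjugate gradient loop rather than the author's planar method; the tolerances, the negative-curvature fallback, and the hessvec(x, p) Hessian-vector callable are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def truncated_newton(f, grad, hessvec, x0, tol=1e-6, max_outer=100):
        """Minimal truncated Newton sketch: the Newton system H d = -g is
        solved only approximately by an inner CG loop that stops early on
        a residual test or on (near-)zero curvature; the planar inner
        solver of the paper is replaced by standard CG for illustration."""
        x = x0.astype(float)
        for _ in range(max_outer):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            # inner CG on H d = -g, truncated on curvature or residual
            d, r, p = np.zeros_like(x), -g.copy(), -g.copy()
            for _ in range(len(x)):
                Hp = hessvec(x, p)
                pHp = p @ Hp
                if pHp <= 1e-12 * (p @ p):         # (near-)nonpositive curvature
                    if np.allclose(d, 0.0):
                        d = -g                      # fall back to steepest descent
                    break
                alpha = (r @ r) / pHp
                d = d + alpha * p
                r_new = r - alpha * Hp
                if np.linalg.norm(r_new) < 0.1 * np.linalg.norm(g):
                    break                           # truncation test
                beta = (r_new @ r_new) / (r @ r)
                p, r = r_new + beta * p, r_new
            fx, t = f(x), 1.0
            for _ in range(60):                     # Armijo backtracking
                if f(x + t * d) <= fx + 1e-4 * t * (g @ d):
                    break
                t *= 0.5
            x = x + t * d
        return x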

2.
This paper describes a class of optimization methods that interlace iterations of the limited memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited memory matrix and plays the dual role of preconditioning the inner conjugate gradient iteration in the HFN method and of providing an initial matrix for the L-BFGS iterations. The lengths of the L-BFGS and HFN cycles are adjusted dynamically during the course of the optimization. Numerical experiments indicate that the new algorithms are both effective and insensitive to the choice of parameters.
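As a minimal sketch of how stored curvature pairs can play the dual role described above, the two-loop recursion below applies the limited-memory inverse Hessian approximation to an arbitrary vector: called on the negative gradient it yields an L-BFGS direction, and called on a CG residual it acts as a preconditioner inside a Hessian-free Newton iteration. This is the generic textbook routine, not the paper's specific interlacing logic.

    import numpy as np

    def lbfgs_apply(q, s_list, y_list):
        """Two-loop recursion: apply the limited-memory inverse Hessian
        approximation built from stored (s, y) pairs to the vector q."""
        q = q.copy()
        alphas, rhos = [], []
        for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
            rho = 1.0 / (y @ s)
            alpha = rho * (s @ q)
            q -= alpha * y
            alphas.append(alpha)
            rhos.append(rho)
        if s_list:                                  # usual initial scaling
            s, y = s_list[-1], y_list[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), alpha, rho in zip(zip(s_list, y_list),
                                      reversed(alphas), reversed(rhos)):
            beta = rho * (y @ q)
            q += (alpha - beta) * s
        return q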

3.
A new limited memory quasi-Newton algorithm is developed, in which the self-scaling symmetric rank-one update with Davidon's optimal condition is applied. Preliminary numerical tests show that the new algorithm is very efficient for large-scale problems as well as for general nonlinear optimization.
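For reference, a dense symmetric rank-one update with the usual skipping safeguard is sketched below; the self-scaling factor, Davidon's optimal condition, and the limited-memory representation used in the paper are not reproduced, so this is only the baseline update the algorithm builds on.

    import numpy as np

    def sr1_update(B, s, y, r=1e-8):
        """Symmetric rank-one update of the Hessian approximation B,
        skipped when the denominator is too small to be reliable."""
        v = y - B @ s
        denom = v @ s
        if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
            return B                    # skip an unstable update
        return B + np.outer(v, v) / denom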

4.
A modified Newton method for minimization
Some promising ideas for minimizing a nonlinear function, whose first and second derivatives are given, by a modified Newton method were introduced by Fiacco and McCormick (Ref. 1). Unfortunately, in developing a method around these ideas, Fiacco and McCormick used a potentially unstable, or even impossible, matrix factorization. Using some recently developed techniques for factorizing an indefinite symmetric matrix, we are able to produce a method which is similar to Fiacco and McCormick's original method but avoids its difficulties. Both authors gratefully acknowledge the award of a research fellowship from the British Science Research Council.
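The sketch below conveys the idea of modifying an indefinite Hessian before taking a Newton step, but it uses a plain eigenvalue modification rather than the symmetric indefinite factorization techniques the authors employ; the threshold delta is an illustrative assumption.

    import numpy as np

    def modified_newton_direction(H, g, delta=1e-6):
        """Replace each eigenvalue of H by max(|lambda|, delta) so the
        modified matrix is positive definite, then solve for a descent
        direction.  The full eigendecomposition is used here purely for
        clarity; a factorization-based method avoids it."""
        lam, Q = np.linalg.eigh(H)
        lam_mod = np.maximum(np.abs(lam), delta)
        return -(Q @ ((Q.T @ g) / lam_mod))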

5.
We attempt to exploit the information provided by objective function values in limited-memory algorithms. A new quadratic function approximating the objective is first constructed from interpolation conditions, which yields a new weak secant equation. Combining this weak secant equation with the weak secant equation of Yuan [1], we obtain a family of limited-memory BFGS-type algorithms that includes the standard L-BFGS method, and we prove the convergence of this family. Numerical experiments on test functions selected from the standard CUTE test set show that the numerical performance of the algorithms in this family is similar to that of standard L-BFGS.
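For orientation (the interpolation-based condition derived in the paper is not reproduced here), the relations below show the standard secant equation, the classical weak secant equation, and a function-value form of the weak secant condition; the last identity can be checked to hold exactly when the objective is quadratic, which is how function values can stand in for part of the gradient information.

    % s_k = x_{k+1} - x_k,  y_k = g_{k+1} - g_k
    \begin{align*}
      \text{secant equation:}\quad        & B_{k+1} s_k = y_k, \\
      \text{weak secant equation:}\quad   & s_k^{\top} B_{k+1} s_k = s_k^{\top} y_k, \\
      \text{with function values:}\quad   & s_k^{\top} B_{k+1} s_k
            = 2\bigl(f(x_k) - f(x_{k+1}) + g_{k+1}^{\top} s_k\bigr).
    \end{align*}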

6.
Harmonic constants of along-track tidal constituents extracted from TOPEX/Poseidon satellite altimeter data are assimilated into a two-dimensional nonlinear tidal numerical model to carry out an assimilation simulation of the seas adjacent to Taiwan Island. The assimilation method used is the adjoint method, whose main role is to correct the open boundary conditions and the bottom friction coefficient. The results show that the method computes the harmonic constants of the four constituents in the study area with good accuracy and speed, overcoming the difficulties and limitations encountered when the open boundary and the bottom friction coefficient are specified manually in conventional tidal computations.

7.
Recently, in [12] a very general class of truncated Newton methods has been proposed for solving large-scale unconstrained optimization problems. In this work we present the results of an extensive numerical experience obtained with different algorithms that belong to the preceding class. This numerical study, besides investigating which are the best algorithmic choices of the proposed approach, clarifies some significant points that underlie every truncated-Newton-based algorithm.

8.
In this paper, we present a new conjugate gradient (CG) based algorithm in the class of planar conjugate gradient methods. These methods aim at solving systems of linear equations whose coefficient matrix is indefinite and nonsingular. This is the case where the application of the standard CG algorithm by Hestenes and Stiefel (Ref. 1) may fail, due to a possible division by zero. We give a complete proof of global convergence for a new planar method endowed with a general structure; furthermore, we describe some important features of our planar algorithm, which will be used within the optimization framework of the companion paper (Part 2, Ref. 2). Here, preliminary numerical results are reported. This work was supported by MIUR, FIRB Research Program on Large-Scale Nonlinear Optimization, Rome, Italy. The author acknowledges Luigi Grippo and Stefano Lucidi, who contributed considerably to the elaboration of this paper. The exchange of experiences with Massimo Roma was a constant help in the investigation. The author expresses his gratitude to the Associate Editor and the referees for suggestions and corrections.
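To make the failure mode concrete, the sketch below is the standard Hestenes–Stiefel CG iteration with an explicit check on the curvature term p^T A p: when the coefficient matrix is indefinite this quantity can (nearly) vanish and the step length is undefined, which is precisely the situation a planar method handles by stepping over a two-dimensional subspace. The tolerance in the check is an illustrative assumption.

    import numpy as np

    def cg_with_breakdown_check(A, b, x0=None, tol=1e-10, max_iter=None):
        """Standard CG for A x = b; raises an error at the breakdown that
        motivates planar conjugate gradient methods."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float)
        r = b - A @ x
        p = r.copy()
        for _ in range(max_iter or n):
            Ap = A @ p
            pAp = p @ Ap
            if abs(pAp) < 1e-14 * (p @ p):
                raise ArithmeticError("p^T A p ~ 0: standard CG breaks down; "
                                      "a planar (2D) step would be needed here")
            alpha = (r @ r) / pAp
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x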

9.
A study of issues concerning the adjoint method in data assimilation
The view that, in applications of the adjoint method, only the adjoint of the model can be used is considered questionable. Numerical simulation experiments show that, for a tidal wave model, the adjoint of the equations yields the same results as the adjoint of the model: the mean absolute difference between observed and simulated harmonic-constant amplitudes is less than 5.0 cm, and the mean absolute phase-lag difference is less than 5.0°. These results reproduce the basic features of the M2 constituent in the Bohai and Yellow Seas. For comparison, the M2 tide in the Bohai and Yellow Seas was also simulated with an earlier approach: an initial guess for the open boundary is first obtained from historical and observational data, and this guess is then adjusted so that the misfit with the altimeter data is as small as possible. However, because the open boundary comprises 72 values, deciding which of them to adjust, and how, can only be partially resolved through repeated trial and error; the workload is heavy and satisfactory results are hard to obtain. The present work automates the determination of the open boundary conditions, which is a clear advantage over the earlier approach. It should be emphasized in particular that using the adjoint of the equations avoids tedious and lengthy mathematical derivations, which shows that the adjoint of the equations also deserves sufficient attention.

10.
An iterative method for the minimization of convex functions f : R^n → R, called a Newton Bracketing (NB) method, is presented. The NB method proceeds by using Newton iterations to improve upper and lower bounds on the minimum value. The NB method is valid for n = 1, and in some cases for n > 1 (sufficient conditions given here). The NB method is applied to large-scale Fermat–Weber location problems.
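The sketch below is only a heuristic one-dimensional reading of the bracketing idea stated above, not the authors' NB method: a trial level M between the current bounds is tested by running Newton's root-finding iteration on f(x) = M, every function evaluation tightens the upper bound on the minimum value, and stalling near a stationary point with f still above M is taken (for convex f) as evidence that M lies below the minimum. All iteration limits and tolerances are assumptions.

    def newton_bracket_1d(f, fprime, x0, L, U, tol=1e-8, max_outer=60):
        """Heuristic 1-D bracketing of the minimum VALUE of a convex f:
        keep L <= min f <= U and shrink the bracket with Newton runs."""
        x = float(x0)
        for _ in range(max_outer):
            if U - L <= tol:
                break
            M = 0.5 * (L + U)
            reached = False
            xn = x
            for _ in range(50):                   # Newton iterations for f(x) = M
                fx, dfx = f(xn), fprime(xn)
                U = min(U, fx)                    # every value bounds min f above
                if fx <= M:
                    reached = True
                    break
                if abs(dfx) < 1e-12:              # near-stationary with fx > M
                    break
                xn = xn - (fx - M) / dfx
            if reached:
                x = xn                            # level M was attained: U <= M now
            else:
                L = M                             # treat M as a lower bound
        return L, U, x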

11.
Based on rational (fractional) approximation, this paper proposes a new class of secant methods for univariate unconstrained optimization problems and proves that the order of convergence of the method is √2 + 1. The performance of the new method is further analyzed, and numerical experiments comparing the new method, the classical Newton method, and other modified secant-type methods on univariate unconstrained optimization problems are reported. Both the theoretical and the numerical results show that the new secant method is effective.
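As the baseline the comparison refers to, the classical secant iteration applied to the stationarity condition f'(x) = 0 is sketched below; the paper's method replaces the linear secant model with a rational (fractional) one and thereby attains the higher convergence order √2 + 1, which is not implemented here.

    def secant_minimize(fprime, x0, x1, tol=1e-10, max_iter=100):
        """Classical secant iteration on f'(x) = 0 for a univariate
        unconstrained minimization problem (fprime is the derivative)."""
        g0, g1 = fprime(x0), fprime(x1)
        for _ in range(max_iter):
            if abs(g1 - g0) < 1e-16:
                break                             # flat secant: stop
            x2 = x1 - g1 * (x1 - x0) / (g1 - g0)  # secant step
            x0, g0 = x1, g1
            x1, g1 = x2, fprime(x2)
            if abs(g1) < tol:
                break
        return x1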

12.
Combining a nonmonotone trust-region method with a nonmonotone line-search technique, a new algorithm for unconstrained optimization is proposed. A line search is carried out at every step of the trust-region method, so that each iteration achieves sufficient descent, which speeds up the iteration. Under suitable conditions, the algorithm is proved to be globally convergent with a locally superlinear convergence rate. Numerical experiments show that the algorithm is very effective.
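The nonmonotone ingredient can be illustrated by a Grippo–Lampariello–Lucidi-style acceptance test, sketched below, in which a trial point only has to improve on the maximum of a window of recent function values rather than on the current value; the constants and the window are assumptions, and the specific combination with a trust region used in the paper is not reproduced.

    import numpy as np

    def nonmonotone_backtrack(f, x, d, g, recent_f, c1=1e-4, shrink=0.5, max_back=30):
        """Nonmonotone Armijo backtracking: accept a step once the new value
        lies below max(recent_f) plus the usual sufficient-decrease term."""
        f_ref = max(recent_f)          # reference value over a memory window
        t = 1.0
        for _ in range(max_back):
            if f(x + t * d) <= f_ref + c1 * t * (g @ d):
                break
            t *= shrink
        return t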

13.
We consider quasidifferentiable functions in the sense of Demyanov and Rubinov, i.e., functions that are directionally differentiable and whose directional derivative can be expressed as a difference of two sublinear functions, so that its subdifferential, called the quasidifferential, consists of a pair of sets. For these functions a generalized gradient algorithm is proposed. Its behaviour is studied in detail for the special class of continuously subdifferentiable functions. Numerical test results are given. Finally, the general quasidifferentiable case is simulated by means of perturbed subdifferentials, where we make use of the non-uniqueness of the quasidifferential representation.

14.
A generalized variational assimilation method for nondifferentiable forecast systems and numerical experiments
A generalized variational assimilation method for nondifferentiable forecast systems is discussed. For a nondifferentiable forecast system, the nondifferentiability means that no tangent linear system exists, and without a tangent linear system the adjoint system cannot be derived in the usual way. After introducing a weak form of the nondifferentiable system, the adjoint system can be derived directly, without recourse to a tangent linear system. Three types of problems are discussed: the first is a low-dimensional system, the second a high-dimensional system with global observations, and the third a high-dimensional system with local observations. The method may be called a generalized variational assimilation method that incorporates ideas from inverse problems.

15.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O. L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute the decomposed subproblems among parallel processors. It is shown that if all decomposed subproblems are uncoupled from each other, they can be solved independently. Otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms can save processor time, particularly for medium- and large-scale problems. Up to six parallel processors connected by Ethernet networks are used to solve four large-scale minimization problems. The results are compared with those obtained by using sequential algorithms run on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline) and a new parallel architecture for such parallel training are also presented.
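A toy illustration of the decomposition idea for the uncoupled case is sketched below: the variables are split into index blocks and each block subproblem is minimized independently in its own thread by plain gradient descent. The block gradients, step size, and thread-based parallelism are assumptions made for illustration; the synchronization the PSDM algorithms need for coupled blocks is not shown.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def block_parallel_minimize(grad_blocks, x0, blocks, steps=200, lr=1e-2):
        """Minimize a block-separable objective: grad_blocks[i] returns the
        gradient of the i-th subproblem, blocks[i] lists its variable indices."""
        x = x0.astype(float).copy()

        def solve_block(i):
            idx = blocks[i]
            xb = x[idx].copy()
            for _ in range(steps):
                xb -= lr * grad_blocks[i](xb)     # gradient descent on block i
            return idx, xb

        with ThreadPoolExecutor(max_workers=len(blocks)) as pool:
            for idx, xb in pool.map(solve_block, range(len(blocks))):
                x[idx] = xb
        return x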

16.
黄海 《经济数学》2011,28(2):25-28
Building on a modified PRP conjugate gradient method, a sufficient-descent conjugate gradient algorithm for unconstrained optimization is proposed. The algorithm is proved to be globally convergent under the Wolfe line search, and numerical experiments show that it gives good numerical results.
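For context, the standard Polak–Ribière–Polyak formulas are sketched below with the common nonnegativity (PRP+) safeguard; the specific modification of the paper, which guarantees a sufficient-descent direction under the Wolfe line search, is not reproduced.

    import numpy as np

    def prp_beta(g_new, g_old):
        """Standard PRP coefficient beta_k."""
        return (g_new @ (g_new - g_old)) / (g_old @ g_old)

    def prp_direction(g_new, g_old, d_old):
        """d_{k+1} = -g_{k+1} + beta_k d_k with the PRP+ safeguard beta >= 0."""
        beta = max(prp_beta(g_new, g_old), 0.0)
        return -g_new + beta * d_old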

17.
By combining the Newton method with the PRP conjugate gradient method, this paper proposes a modified PRP method that incorporates second-order derivative information. Under suitable assumptions the algorithm is globally convergent, and numerical examples demonstrate its effectiveness.

18.
This paper proposes first-order curve methods for unconstrained optimization problems and gives global convergence results for them. For first-order curves defined by differential equations, the concept of an asymptotic convergence index is introduced, and the asymptotic convergence indices of several commonly used curve models are computed.

19.
The study of quasi-Newton methods for minimizing general objective functions, and of their global convergence, has become one of the most fundamental open problems in the theory of quasi-Newton methods. This paper investigates the problem further: a new class of generalized quasi-Newton algorithms for unconstrained optimization is proposed, and, combined with the Goldstein line search, the algorithms are proved to be globally convergent for minimizing general nonconvex objective functions.
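Since the Goldstein line search is the acceptance rule named above, the sketch below simply spells out the Goldstein test for a step length along a descent direction (gradient g and direction d are assumed to be NumPy arrays, so g @ d < 0); the choice c = 0.25 is an illustrative assumption, any 0 < c < 1/2 is admissible.

    def goldstein_ok(f, x, d, g, alpha, c=0.25):
        """Goldstein conditions:
        f(x) + (1-c)*alpha*g@d <= f(x + alpha*d) <= f(x) + c*alpha*g@d."""
        fx, gd = f(x), float(g @ d)
        f_new = f(x + alpha * d)
        return fx + (1 - c) * alpha * gd <= f_new <= fx + c * alpha * gd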

20.
The BFGS method is the most effective of the quasi-Newton methods for solving unconstrained optimization problems. Wei, Li, and Qi [16] have proposed some modified BFGS methods based on the new quasi-Newton equation B_{k+1} s_k = y*_k, where y*_k is the sum of y_k and A_k s_k, and A_k is some matrix. The average performance of Algorithm 4.3 in [16] is better than that of the BFGS method, but its superlinear convergence is still open. This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.
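A minimal sketch of the corresponding update is given below: the standard BFGS formula is written so that the new matrix satisfies B_new s = y*, so substituting y* = y + A s recovers the quoted quasi-Newton equation; the particular matrix A_k of [16] is not reproduced here.

    import numpy as np

    def bfgs_update(B, s, y_star):
        """BFGS update of the Hessian approximation B enforcing the
        (modified) secant condition B_new @ s = y_star."""
        Bs = B @ s
        return (B
                - np.outer(Bs, Bs) / (s @ Bs)
                + np.outer(y_star, y_star) / (y_star @ s))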
