Similar Literature
20 similar documents retrieved.
1.
An example is given in which truncation error due to finite computer arithmetic leads to the BFGS variable-metric method becoming stuck, even though the approximate Hessian matrix, the gradient vector, and the search vector satisfy the analytical conditions for convergence. A restart criterion that eliminates the problem is suggested.
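A minimal Python sketch of the situation the paper addresses: a standard BFGS inverse-Hessian update combined with a restart test that resets the metric when, numerically, the search direction stops being a descent direction. The specific restart test and the Armijo line search below are illustrative assumptions, not the criterion proposed in the paper.

```python
import numpy as np

def backtracking(f, grad, x, d, alpha=1.0, c=1e-4, r=0.5):
    """Armijo backtracking line search."""
    fx, gd = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c * alpha * gd:
        alpha *= r
    return alpha

def bfgs_with_restart(f, grad, x0, tol=1e-8, max_iter=200):
    """BFGS sketch with an illustrative restart criterion: reset H to the
    identity when the direction is numerically no longer a descent
    direction (the paper's actual criterion may differ)."""
    n = x0.size
    x, H = x0.copy(), np.eye(n)          # H approximates the inverse Hessian
    g = grad(x)
    eps = np.finfo(float).eps
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        # In exact arithmetic g.d < 0; rounding in the update can destroy
        # this, so restart the metric if it happens.
        if g @ d > -eps * np.linalg.norm(g) * np.linalg.norm(d):
            H = np.eye(n)
            d = -g
        s = backtracking(f, grad, x, d) * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > eps:                     # curvature safeguard
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x, g = x_new, g_new
    return x
```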

2.
To efficiently solve a large scale unconstrained minimization problem with a dense Hessian matrix, this paper proposes to use an incomplete Hessian matrix to define a new modified Newton method, called the incomplete Hessian Newton method (IHN). A theoretical analysis shows that IHN is convergent globally, and has a linear rate of convergence with a properly selected symmetric, positive definite incomplete Hessian matrix. It also shows that the Wolfe conditions hold in IHN with a line search step length of one. As an important application, an effective IHN and a modified IHN, called the truncated-IHN method (T-IHN), are constructed for solving a large scale chemical database optimal projection mapping problem. T-IHN is shown to work well even with indefinite incomplete Hessian matrices. Numerical results confirm the theoretical results of IHN, and demonstrate the promising potential of T-IHN as an efficient minimization algorithm.
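A minimal sketch of the incomplete Hessian idea, assuming a banded sparsity pattern and an eigenvalue shift to enforce positive definiteness. The paper only requires some symmetric positive definite incomplete Hessian; the band pattern and shift here are illustrative choices.

```python
import numpy as np

def incomplete_hessian(H, bandwidth=1):
    """Keep only a band of the dense Hessian (one illustrative choice of
    'incomplete' sparsity pattern)."""
    n = H.shape[0]
    M = np.zeros_like(H)
    for i in range(n):
        lo, hi = max(0, i - bandwidth), min(n, i + bandwidth + 1)
        M[i, lo:hi] = H[i, lo:hi]
    return 0.5 * (M + M.T)              # symmetrize

def ihn_direction(g, H, bandwidth=1, shift=1e-8):
    """One incomplete-Hessian Newton direction: solve M d = -g with M the
    banded part of H, shifted so that M is positive definite (assumption;
    any symmetric positive definite incomplete Hessian would do)."""
    M = incomplete_hessian(H, bandwidth)
    lam_min = np.linalg.eigvalsh(M)[0]
    if lam_min <= 0:
        M = M + (shift - lam_min) * np.eye(M.shape[0])
    return np.linalg.solve(M, -g)
```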

3.
Data assimilation algorithms combine prior and observational information, weighted by their respective uncertainties, to obtain the most likely posterior of a dynamical system. In variational data assimilation the posterior is computed by solving a nonlinear least squares problem. Many numerical weather prediction (NWP) centers use full observation error covariance (OEC) weighting matrices, which can slow convergence of the data assimilation procedure. Previous work revealed the importance of the minimum eigenvalue of the OEC matrix for conditioning and convergence of the unpreconditioned data assimilation problem. In this article we examine the use of correlated OEC matrices in the preconditioned data assimilation problem for the first time. We consider the case where there are more state variables than observations, which is typical for applications with sparse measurements, for example, NWP and remote sensing. We find that similarly to the unpreconditioned problem, the minimum eigenvalue of the OEC matrix appears in new bounds on the condition number of the Hessian of the preconditioned objective function. Numerical experiments reveal that the condition number of the Hessian is minimized when the background and observation lengthscales are equal. This contrasts with the unpreconditioned case, where decreasing the observation error lengthscale always improves conditioning. Conjugate gradient experiments show that in this framework the condition number of the Hessian is a good proxy for convergence. Eigenvalue clustering explains cases where convergence is faster than expected.
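A toy sketch of the quantity studied: the Hessian of the preconditioned variational objective, S = I + B^{1/2} Hᵀ R⁻¹ H B^{1/2}, and its condition number. The covariances, observation operator, and dimensions below are assumptions chosen only to reproduce the sparse-observation setting (more state variables than observations).

```python
import numpy as np

def preconditioned_hessian(B, R, H):
    """Hessian of the preconditioned objective, S = I + B^{1/2} H' R^{-1} H B^{1/2},
    with B the background error covariance, R the observation error
    covariance (OEC), and H the linear observation operator.  The Cholesky
    factor is one admissible square root of B."""
    L = np.linalg.cholesky(B)
    HL = H @ L
    return np.eye(B.shape[0]) + HL.T @ np.linalg.solve(R, HL)

# Assumed toy setup: 50 state variables, 10 sparse observations.
n, p = 50, 10
B = np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))   # correlated background
R = 0.1 * np.eye(p)                                        # diagonal OEC
Hop = np.zeros((p, n))
Hop[np.arange(p), np.linspace(0, n - 1, p, dtype=int)] = 1.0
S = preconditioned_hessian(B, R, Hop)
print("condition number of preconditioned Hessian:", np.linalg.cond(S))
```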

4.
Quasi-Newton algorithms for unconstrained nonlinear minimization generate a sequence of matrices that can be considered as approximations of the objective function's second derivatives. This paper gives conditions under which these approximations can be proved to converge globally to the true Hessian matrix, in the case where the Symmetric Rank One update formula is used. The rate of convergence is also examined and proven to improve with the rate of convergence of the underlying iterates. The theory is confirmed by numerical experiments that also show the convergence of the Hessian approximations to be substantially slower for other known quasi-Newton formulae. The work of this author was supported by the Natural Sciences and Engineering Research Council of Canada and by the Information Technology Research Centre, which is funded by the Province of Ontario.
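For reference, the Symmetric Rank One update analyzed in the paper, with the standard safeguard that skips the update when its denominator is small (the threshold r is a conventional choice, not taken from the paper).

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric Rank One update of a Hessian approximation B, where
    s = x_{k+1} - x_k and y = grad(x_{k+1}) - grad(x_k)."""
    v = y - B @ s
    denom = v @ s
    # Skip the update when the denominator is too small (standard safeguard).
    if abs(denom) > r * np.linalg.norm(s) * np.linalg.norm(v):
        B = B + np.outer(v, v) / denom
    return B
```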

5.
We propose a nonmonotone adaptive trust region method based on a simple conic model for unconstrained optimization. Unlike traditional trust region methods, the subproblem in our method is a simple conic model in which the Hessian of the objective function is approximated by a scalar matrix. The trust region radius is adjusted with a new self-adaptive strategy that uses information from both the previous and the current iteration. The new method needs less memory and computational effort. Global convergence and Q-superlinear convergence of the algorithm are established under mild conditions. Numerical results on a series of standard test problems show that the new method is effective and attractive for large-scale unconstrained optimization problems.
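A sketch of the scalar-matrix ingredient: with Hessian approximation γI, the trust region subproblem has a closed-form solution. The Barzilai-Borwein-type choice of γ below is an assumption for illustration; the paper's conic model and self-adaptive radius rule are richer than this.

```python
import numpy as np

def scalar_hessian(s, y, fallback=1.0):
    """Scalar gamma for the approximation gamma*I, here a
    Barzilai-Borwein-type curvature estimate (illustrative assumption)."""
    sy = s @ y
    return (y @ y) / sy if sy > 0 else fallback

def scalar_tr_step(g, gamma, delta):
    """Closed-form minimizer of g'd + (gamma/2)||d||^2 over ||d|| <= delta."""
    if gamma > 0 and np.linalg.norm(g) / gamma <= delta:
        return -g / gamma                       # interior Newton-like step
    return -delta * g / np.linalg.norm(g)       # boundary (Cauchy) step
```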

6.
In this paper, we propose a trust region method for minimizing a function whose Hessian matrix may be singular at the solutions. Global convergence of the method is obtained under mild conditions. Moreover, we show that if the objective function is an LC² function, the method possesses local superlinear convergence under a local error bound condition, without requiring an isolated nonsingular solution. This is the first regularized Newton method with a trust region technique that possesses local superlinear (quadratic) convergence without the assumption that the Hessian of the objective function at the solution is nonsingular. Preliminary numerical experiments show the efficiency of the method. This work was partly supported by the National Natural Science Foundation of China (Grant Nos. 70302003, 10571106, 60503004, 70671100) and the Science Foundation of Beijing Jiaotong University (2007RC014).
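The core step of a regularized Newton method of this type, sketched with the common device of tying the regularization parameter to the gradient norm (the usual choice under a local error bound; the paper additionally embeds the step in a trust region framework, which is omitted here).

```python
import numpy as np

def regularized_newton_step(g, H, c=1.0):
    """Regularized Newton direction: solve (H + mu*I) d = -g with
    mu = c * ||g||.  The fallback that enlarges mu until H + mu*I is
    positive definite is a simple safeguard assumed for this sketch."""
    n = H.shape[0]
    mu = max(c * np.linalg.norm(g), 1e-12)
    while True:
        try:
            np.linalg.cholesky(H + mu * np.eye(n))   # test definiteness
            break
        except np.linalg.LinAlgError:
            mu *= 10.0
    return np.linalg.solve(H + mu * np.eye(n), -g)
```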

7.
An algorithm was recently presented that minimizes a nonlinear function in several variables using a Newton-type curvilinear search path. In order to determine this curvilinear search path the eigenvalue problem of the Hessian matrix of the objective function has to be solved at each iteration of the algorithm. In this paper an iterative procedure requiring gradient information only is developed for the approximation of the eigensystem of the Hessian matrix. It is shown that for a quadratic function the approximated eigenvalues and eigenvectors tend rapidly to the actual eigenvalues and eigenvectors of its Hessian matrix. The numerical tests indicate that the resulting algorithm is very fast and stable. Moreover, the fact that some approximations to the eigenvectors of the Hessian matrix are available is used to get past saddle points and accelerate the rate of convergence on flat functions.
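The gradient-only principle can be sketched as follows: Hessian-vector products are formed by finite differences of gradients, and power iteration then approximates an extreme eigenpair. This is an illustrative stand-in, not the paper's specific iterative procedure.

```python
import numpy as np

def hess_vec(grad, x, v, h=1e-6):
    """Hessian-vector product from gradients only (forward difference)."""
    return (grad(x + h * v) - grad(x)) / h

def dominant_eigpair(grad, x, n, iters=50, seed=0):
    """Power iteration on the finite-difference Hessian operator; for a
    quadratic function the products are exact and convergence is fast."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = hess_vec(grad, x, v)
        lam = v @ w                      # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
    return lam, v
```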

8.
This paper proposes a new nonmonotone adaptive trust region method for solving unconstrained optimization problems. The method determines the trust region radius with the aid of a scalar matrix that approximates the Hessian of the objective function. Under the usual conditions, global convergence and local superlinear convergence results are established for the new algorithm, and numerical experiments verify the effectiveness of the new nonmonotone method.

9.
The paper studies the steepest descent method applied to the minimization of a twice continuously differentiable function. Under certain conditions, a random choice of the step length parameter, independent of the actual iteration, generates a process that is almost surely R-convergent for quadratic functions. The convergence properties of this random procedure are characterized by the mean value function associated with the distribution of the step length parameter. The distribution of the random step length that guarantees the maximum asymptotic convergence rate, independent of the detailed properties of the Hessian matrix of the minimized function, is found, and its uniqueness is proved. The asymptotic convergence rate of this optimally constructed random procedure equals the convergence rate of the Chebyshev polynomial method. Under practical conditions, the efficiency of the suggested random steepest descent method is degraded by numerical noise, particularly for ill-conditioned problems; furthermore, the asymptotic convergence rate is not achieved because the realized calculations are finite. The suggested random procedure is also applied to the minimization of a general non-quadratic function. An algorithm for estimating the relevant bounds on the Hessian matrix spectrum is constructed. In certain cases, the random procedure may surpass the conjugate gradient method. Interesting results are achieved when minimizing functions having a large number of local minima. Preliminary results of numerical experiments show that some modifications of the presented basic method may significantly improve its properties.
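A sketch of the method's skeleton: steepest descent whose step length is drawn at random, independently of the iteration. Sampling the inverse step length from the arcsine (Chebyshev-point) distribution on the spectral interval [m, M] is an assumption consistent with the Chebyshev-rate flavor of the result; the actual optimal distribution should be taken from the paper.

```python
import numpy as np

def random_steepest_descent(grad, x0, m, M, iters=500, seed=0):
    """Steepest descent with i.i.d. random step lengths.  m and M bound
    the Hessian spectrum; the inverse step length is sampled from the
    arcsine distribution on [m, M] (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        u = rng.random()
        inv_t = 0.5 * (m + M) + 0.5 * (M - m) * np.cos(np.pi * u)
        x = x - grad(x) / inv_t
    return x
```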

10.
This paper proposes a combined trust region and quasi-Newton algorithm for equality constrained problems. The characteristic feature of the algorithm is that if the approximate Hessian of the Lagrangian is positive definite on the null space of the Jacobian of the equality constraints, the quasi-Newton step is chosen; otherwise the trust region step is used. Under the usual convergence assumptions of trust region algorithms, global convergence of the combined algorithm is proved.
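The switching test at the heart of the combined algorithm can be sketched as follows: check whether the approximate Lagrangian Hessian is positive definite on the null space of the constraint Jacobian. The SVD-based null-space basis and the rank tolerance are illustrative choices.

```python
import numpy as np

def choose_step_type(B, A, tol=1e-12):
    """Decide between the quasi-Newton and trust region step: test whether
    the approximate Lagrangian Hessian B is positive definite on the null
    space of the constraint Jacobian A."""
    _, svals, Vt = np.linalg.svd(A)
    rank = int(np.sum(svals > tol * max(svals[0], 1.0)))
    Z = Vt[rank:].T                       # columns span null(A)
    reduced = Z.T @ B @ Z
    if np.all(np.linalg.eigvalsh(reduced) > 0):
        return "quasi-newton"
    return "trust-region"
```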

11.
Based on the second-order quasi-Newton equation, and combined with the nonmonotone line search proposed by Zhang H.C., a diagonal second-order quasi-Newton algorithm for solving large-scale unconstrained optimization problems is constructed. At each iteration the algorithm approximates the inverse of the Hessian by a diagonal matrix, which markedly reduces the storage and the work needed to compute the search direction, offering a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, global convergence and superlinear convergence of the algorithm are proved. Numerical experiments show that the algorithm is effective and feasible.
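One simple way to maintain a diagonal inverse-Hessian approximation is the least-change update under the weak secant condition yᵀDy = sᵀy, sketched below. It is an illustrative stand-in for the paper's update based on the second-order quasi-Newton equation, and the clipping bounds are assumptions.

```python
import numpy as np

def diag_qn_update(d, s, y, d_min=1e-6, d_max=1e6):
    """Update the diagonal d of an inverse-Hessian approximation D so the
    weak secant condition y' D y = s' y holds, changing d as little as
    possible in the least-squares sense; clip to keep D positive."""
    y2 = y * y
    denom = np.sum(y2 * y2)
    if denom > 0:
        lam = (s @ y - d @ y2) / denom   # Lagrange multiplier of the constraint
        d = d + lam * y2
    return np.clip(d, d_min, d_max)
```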

12.
This paper mainly discusses the convergence of nonlinear least squares problems in unconstrained optimization, with emphasis on the rate of convergence and on global and local analysis. The conditions of the convergence theorem for the Gauss-Newton method are modified, and two cases are proved: (1) when the Hessian of the objective function is positive definite (the function is strictly convex), the method is strongly globally convergent of second order; (2) when strict convexity of the objective function is not guaranteed but the inverse of the Hessian exists, the method is locally convergent, still of second order. Equivalent conditions for the existence and boundedness of J(X)^(-1) and Q(X)^(-1) are also given.
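For reference, the plain Gauss-Newton iteration whose convergence is being analyzed (the stopping rule and the dense solve are illustrative simplifications).

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=100):
    """Gauss-Newton for min ||r(x)||^2: solve J'J d = -J'r at each step.
    r returns the residual vector, J its Jacobian; the convergence rates
    discussed in the paper hold under its stated conditions."""
    x = x0.copy()
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx                    # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(Jx.T @ Jx, -g)
        x = x + d
    return x
```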

13.
A simple conic model function, in which the Hessian approximation is a scalar matrix, is constructed using the function values and gradient values of the function being minimized. Based on this conic model function, a new nonmonotone line search method is proposed. The convergence results of this line search method are proved under certain conditions. Numerical results show that the new algorithm is effective.

14.
This paper gives a general convergence analysis of the truncated incomplete Hessian Newton method (T-IHN). It shows that T-IHN is globally convergent even with an indefinite incomplete Hessian matrix or an indefinite preconditioner, which may happen in practice. It also proves that when the T-IHN iterates are close enough to a minimum point, T-IHN has a Q-linear rate of convergence, and an admissible line search steplength of one. Moreover, a particular T-IHN algorithm is constructed for minimizing a biomolecular potential energy function, and numerically tested for a protein model problem based on a widely used molecular simulation package, CHARMM. Numerical results confirm the theoretical results, and demonstrate that T-IHN can have a better performance (in terms of computer CPU time) than most CHARMM minimizers.

15.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
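For reference, the classical (Oren-Luenberger) self-scaling BFGS update of the Hessian approximation, which scales B before the BFGS correction to damp large eigenvalues; the nonmonotone line search variants studied in the paper wrap around an update of this kind.

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """Self-scaling BFGS: scale B by tau = (y's)/(s'Bs) before the usual
    BFGS correction.  The update is skipped when the curvature condition
    fails (a standard safeguard)."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy <= 0 or sBs <= 0:
        return B
    tau = sy / sBs
    return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / sy
```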

16.
We present a partitioning group correction (PGC) algorithm, based on trust region and conjugate gradient methods, for large-scale sparse unconstrained optimization. In large sparse optimization, computing the whole Hessian matrix and solving the Newton-like equations at each iteration can be considerably expensive when a trust region method is adopted. The method depends on a symmetric consistent partition of the columns of the Hessian matrix and on an inexact solution of the Newton-like equations by the conjugate gradient method. We allow the current direction to exceed the trust region bound if it is a good descent direction. In addition, we study a technique for dealing with sparse matrices that have a dense structural part. Good convergence properties are retained, and we contrast the computational behavior of our method with that of other algorithms. Our numerical tests show that the algorithm is promising and quite effective, and that its performance is comparable to or better than that of other available algorithms.
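The inexact-solve ingredient can be sketched with Steihaug-Toint truncated conjugate gradients, which needs only Hessian-vector products and so suits a partitioned sparse Hessian. The group-correction partitioning itself, and the rule allowing good descent directions to exceed the bound, are omitted; this is a generic sketch, not the PGC algorithm.

```python
import numpy as np

def _to_boundary(d, p, delta):
    """Positive tau with ||d + tau*p|| = delta."""
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_cg(hess_vec, g, delta, tol=1e-8, max_iter=None):
    """Truncated CG for min g'd + 0.5 d'Hd over ||d|| <= delta, using only
    Hessian-vector products hess_vec(v) = H @ v."""
    n = g.size
    max_iter = max_iter or 2 * n
    d = np.zeros(n)
    r = -g.copy()                        # r = -(g + H d)
    p = r.copy()
    if np.linalg.norm(r) < tol:
        return d
    for _ in range(max_iter):
        Hp = hess_vec(p)
        pHp = p @ Hp
        if pHp <= 0:                     # negative curvature: go to boundary
            return d + _to_boundary(d, p, delta) * p
        alpha = (r @ r) / pHp
        if np.linalg.norm(d + alpha * p) >= delta:
            return d + _to_boundary(d, p, delta) * p
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            return d
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d
```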

17.
In this paper, we introduce a cautious BFGS (CBFGS) update criterion in the reduced Hessian sequential quadratic programming (SQP) method. An attractive property of this update criterion is that the generated iterative matrices are always positive definite. Under mild conditions, we establish global convergence of the reduced Hessian SQP method. In particular, the second order sufficient condition is not necessary for the global convergence of the method. Furthermore, we show that if the second order sufficient condition holds at an accumulation point, then the reduced Hessian SQP method with the CBFGS update reduces to the reduced Hessian SQP method with the ordinary BFGS update. Consequently, the local behavior of the proposed method is the same as that of the reduced Hessian SQP method with the BFGS update. The preliminary numerical experiments presented show the good performance of the method. This work was supported by the National Natural Science Foundation of China via grants 10671060 and 10471060.
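The cautious update criterion can be sketched as follows: the BFGS update is applied only when the measured curvature is large enough relative to the gradient norm, which keeps every generated matrix positive definite. The parameters eps and alpha are illustrative values.

```python
import numpy as np

def cautious_bfgs_update(B, s, y, g_norm, eps=1e-6, alpha=1.0):
    """Cautious BFGS (CBFGS): apply the BFGS update only when
    y's / ||s||^2 >= eps * ||g||^alpha; otherwise keep B unchanged,
    so positive definiteness is preserved."""
    sy = s @ y
    if sy / (s @ s) >= eps * g_norm ** alpha:
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
    return B
```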

18.
Tong Xiaojiao, 《应用数学》 (Applied Mathematics), 2001, 14(4): 31-36
This paper presents a trust region method for equality constrained optimization based on reduced-Hessian sequential quadratic programming (SQP). The method enjoys the advantages of both trust region methods and reduced Hessian methods. Under the usual conditions, global convergence of the algorithm is proved.

19.
Xu Renfei, 《经济数学》 (Economic Mathematics), 2004, 21(3): 258-262
This paper studies algorithms for solving unconstrained optimization problems that have singular solutions. An important feature of this class of problems is that the Hessian of the objective function may be singular everywhere. We propose a hybrid gradient and regularized Newton-type algorithm for solving such problems, and obtain global convergence of the algorithm under certain conditions. Moreover, after a certain number of iterations the algorithm reduces to the regularized Newton method, and therefore possesses local quadratic convergence.

20.
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC², then the methods possess local quadratic convergence under a local error bound condition without the requirement of isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method. We show that the unit stepsize is accepted eventually. Limited numerical experiments are presented, which show the practical advantage of the method.
