Similar Literature
20 similar documents found.
1.
This paper gives a general convergence analysis of the truncated incomplete Hessian Newton method (T-IHN). It shows that T-IHN is globally convergent even with an indefinite incomplete Hessian matrix or an indefinite preconditioner, which may happen in practice. It also proves that when the T-IHN iterates are close enough to a minimum point, T-IHN has a Q-linear rate of convergence, and an admissible line search steplength of one. Moreover, a particular T-IHN algorithm is constructed for minimizing a biomolecular potential energy function, and numerically tested on a protein model problem based on a widely used molecular simulation package, CHARMM. Numerical results confirm the theoretical results, and demonstrate that T-IHN can perform better (in terms of computer CPU time) than most CHARMM minimizers.
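The inner solver of a truncated Newton method is typically a conjugate gradient iteration stopped early. Below is a minimal sketch of such an inner loop, assuming a user-supplied `hess_vec` routine that applies the (possibly incomplete, possibly indefinite) Hessian approximation to a vector; the function names and the simple residual-based truncation rule are illustrative, not the paper's exact T-IHN scheme.

```python
import numpy as np

def truncated_newton_step(g, hess_vec, tol=0.5, max_cg=50):
    """Approximately solve H p = -g by CG, truncated early.

    hess_vec(v) applies the (incomplete) Hessian approximation H;
    CG stops once the relative residual drops below tol, or when a
    direction of non-positive curvature appears (H may be indefinite).
    """
    p = np.zeros_like(g)
    r = -g.copy()                        # residual of H p = -g at p = 0
    d = r.copy()
    rs = r @ r
    for _ in range(max_cg):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:                     # indefinite curvature: stop with
            return p if p.any() else -g  # progress so far (or steepest descent)
        alpha = rs / dHd
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(g):
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p
```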

2.
The problem of globalizing the Newton method when the actual Hessian matrix is not used at every iteration is considered. A stabilization technique is studied that employs a new line search strategy to ensure global convergence under mild assumptions. Moreover, an implementable algorithmic scheme is proposed, in which the evaluation of the second derivatives is conditioned on the behavior of the algorithm during the minimization process and on the local convexity properties of the objective function. This is done in order to obtain a significant computational saving while keeping the unavoidable degradation in convergence speed acceptable. The numerical results reported indicate that the method described may be employed advantageously in all applications where the computation of the Hessian matrix is highly time consuming.
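For reference, a standard backtracking (Armijo) line search is the usual building block of such globalization schemes; the sketch below is a generic stand-in under that assumption, not the authors' specific stabilizing strategy.

```python
def armijo_backtracking(f, x, p, g, c=1e-4, shrink=0.5, max_back=30):
    """Backtracking line search: x, p, g are numpy arrays, f a scalar
    objective. Accept step t once the Armijo condition
    f(x + t*p) <= f(x) + c*t*(g . p) holds."""
    t, fx, slope = 1.0, f(x), g @ p
    for _ in range(max_back):
        if f(x + t * p) <= fx + c * t * slope:
            return t
        t *= shrink
    return t
```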

3.
许任飞 《经济数学》2004,21(3):258-262
This paper studies algorithms for unconstrained optimization problems with singular solutions. An important feature of this class of problems is that the Hessian of the objective function may be singular everywhere. We propose a hybrid gradient / regularized Newton algorithm for solving such problems, and establish its global convergence under certain conditions. Moreover, after a finite number of iterations, the algorithm reduces to the regularized Newton method, and therefore attains local quadratic convergence.

4.
On the use of directions of negative curvature in a modified Newton method
We present a modified Newton method for the unconstrained minimization problem. The modification occurs in non-convex regions where the information contained in the negative eigenvalues of the Hessian is taken into account by performing a line search along a path which is initially tangent to a direction of negative curvature. We give termination criteria for the line search and prove that the resulting iterates are guaranteed to converge, under reasonable conditions, to a critical point at which the Hessian is positive semidefinite. We also show how the Bunch and Parlett decomposition of a symmetric indefinite matrix can be used to give entirely adequate directions of negative curvature. Work performed under the auspices of the U.S. Department of Energy.
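A minimal sketch of how a descent pair can be extracted from an indefinite Hessian: here a dense eigendecomposition stands in for the Bunch-Parlett factorization used in the paper, and the curvilinear path along which the line search proceeds is only indicated in comments.

```python
import numpy as np

def descent_pair(H, g, eps=1e-8):
    """Return (s, d): a Newton-like direction s built from the positive
    part of the spectrum, and a direction d of most negative curvature
    (zero if H is positive semidefinite), signed so that d is a descent
    direction. A line search would then follow x + t**2 * s + t * d.
    """
    w, V = np.linalg.eigh(H)                     # H = V diag(w) V.T
    s = -V @ ((V.T @ g) / np.maximum(np.abs(w), eps))
    d = np.zeros_like(g)
    if w[0] < 0:                                 # most negative eigenvalue
        d = V[:, 0].copy()
        if d @ g > 0:                            # make d a descent direction
            d = -d
    return s, d
```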

5.
A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques, allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton and LBFGS methods are general-purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid this current weakness of the adjoint Newton algorithm is proposed.

6.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to take into account the possible lack of positive-definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used. However, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorizations are analyzed and three new ones are developed. The above algorithms were tested on a set of problems. The overall conclusion was that these methods perform well in practice.
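To make the sparsity-exploiting idea concrete, here is a minimal sketch for the special case of a tridiagonal Hessian: because columns i, i+3, i+6, ... have structurally disjoint row supports, three extra gradient evaluations suffice to recover every nonzero, instead of n. This is the coloring idea behind such automatic procedures, not the paper's own new scheme.

```python
import numpy as np

def tridiag_hessian_fd(grad, x, h=1e-6):
    """Estimate a tridiagonal Hessian from 3 extra gradient evaluations.

    Perturbing x along the sum of every third coordinate direction
    yields the sum of the corresponding Hessian columns; disjoint row
    supports let each column be read off individually."""
    n = x.size
    g0 = grad(x)
    H = np.zeros((n, n))
    for color in range(3):
        e = np.zeros(n)
        e[color::3] = 1.0
        dg = (grad(x + h * e) - g0) / h          # approximates H @ e
        for i in range(color, n, 3):
            lo, hi = max(i - 1, 0), min(i + 1, n - 1)
            H[lo:hi + 1, i] = dg[lo:hi + 1]      # column i of H
    return 0.5 * (H + H.T)                       # symmetrize
```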

7.
In this paper a quasi-Newton projection method for image deblurring is presented. The image restoration problem is mathematically formulated as a nonnegatively constrained minimization problem where the objective function is the sum of the Kullback–Leibler divergence, used to express fidelity to the data in the presence of Poisson noise, and of a Tikhonov regularization term. The Hessian of the objective function is approximated so that the Newton system can be efficiently solved by using Fast Fourier Transforms. The numerical results show the potential of the proposed method both in terms of relative error reduction and computational efficiency.

8.
When solving large complex optimization problems, the user is faced with three major problems. These are (i) the cost in human time of obtaining accurate expressions for the derivatives involved; (ii) the need to store second-derivative information; and (iii), of lesser importance, the time taken to solve the problem on the computer. For many problems, a significant part of the latter can be attributed to solving Newton-like equations. In the algorithm described, the equations are solved using a conjugate direction method that only needs the Hessian at the current point when it is multiplied by a trial vector. In this paper, we present a method that finds this product using automatic differentiation while requiring only vector storage. The method takes advantage of any sparsity in the Hessian matrix and computes exact derivatives. It avoids the complexity of symbolic differentiation, the inaccuracy of numerical differentiation, the labor of finding analytic derivatives, and the need for matrix storage. When far from a minimum, an accurate solution to the Newton equations is not justified, so an approximate solution is obtained by using a version of Dembo and Steihaug's truncated Newton algorithm (Ref. 1). This paper was presented at the SIAM National Meeting, Boston, Massachusetts, 1986.
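The matrix-free product that the conjugate direction method needs can be illustrated in a few lines; the sketch below uses a forward difference of gradients purely for illustration, whereas the paper computes the same product exactly (and still with only vector storage) via automatic differentiation.

```python
import numpy as np

def hessian_vector(grad, x, v, h=None):
    """Matrix-free Hessian-vector product H(x) @ v.

    Finite-difference surrogate: (grad(x + h*v) - grad(x)) / h.
    The paper's AD-based version avoids this truncation error."""
    if h is None:
        h = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x))
    return (grad(x + h * v) - grad(x)) / h
```

A routine like this can be passed directly as the `hess_vec` argument of the truncated CG sketch shown under item 1.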

9.
In this paper, we present a new algorithm to accelerate the Chambolle gradient projection method for total variation image restoration. The proposed method considers an approximation of the Hessian based on the secant equation. Combined with the quasi-Cauchy equations and diagonal updating, we can obtain a positive definite diagonal matrix. In the proposed minimization model, we use this positive definite diagonal matrix instead of the constant time stepsize in Chambolle's method. The global convergence of the proposed scheme is proved. Some numerical results illustrate the efficiency of this method. Moreover, we also extend the quasi-Newton diagonal updating method to solve nonlinear systems of monotone equations. Performance comparisons show that the proposed method is efficient. A practical application of the monotone equations is shown and tested on sparse signal reconstruction in compressed sensing.
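A diagonal quasi-Newton update of the kind alluded to enforces the weak secant (quasi-Cauchy) condition while changing the diagonal as little as possible; the following sketch shows the standard least-change formula with a positivity safeguard, as background on the general technique rather than the authors' exact update.

```python
import numpy as np

def diag_quasi_cauchy_update(d, s, y, floor=1e-8):
    """Least-change update of a diagonal Hessian approximation diag(d)
    so that the weak secant equation  s.T @ diag(d_new) @ s = s.T @ y
    holds; the floor keeps the matrix positive definite."""
    s2 = s * s
    denom = s2 @ s2                       # sum of s_i**4
    if denom > 0:
        d = d + ((s @ y - s2 @ d) / denom) * s2
    return np.maximum(d, floor)
```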

10.
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it onto the space of positive definite and symmetric matrices, and (b) invert the projected Hessian. The latter operation is computationally expensive. In order to save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with gradient SF (G-SF) and JN-SF algorithms on two different problems: first on a simple stochastic function minimization problem, and then on a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both G-SF and JN-SF algorithms in both problem settings. Next we extend the QN-SF algorithm to the case of constrained optimization. In this case too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally we present the proof of convergence for the QN-SF algorithm in both unconstrained and constrained settings.
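The BFGS rule referenced here maintains an inverse Hessian approximation directly, so no projection or inversion step is needed; a minimal sketch of the generic update, not the QN-SF-specific multi-time-scale version:

```python
import numpy as np

def bfgs_inverse_update(Hinv, s, y):
    """One BFGS update of the inverse Hessian approximation.

    s = x_new - x_old, y = g_new - g_old; when the curvature
    condition s.T @ y > 0 holds, positive definiteness is preserved,
    so the update can replace the project-then-invert step."""
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return Hinv                    # curvature too small: skip update
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ Hinv @ V.T + rho * np.outer(s, s)
```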

11.
In this work, the problem of the restoration of images corrupted by space-invariant blur and noise is considered. This problem is ill-posed and regularization is required. The image restoration problem is formulated as a nonnegatively constrained minimization problem whose objective function depends on the statistical properties of the noise corrupting the observed image. The cases of Gaussian and Poisson noise are both considered. A Newton-like projection method with early stopping of the iterates is proposed as an iterative regularization method in order to determine a nonnegative approximation to the original image. A suitable approximation of the Hessian of the objective function is proposed for a fast solution of the Newton system. The results of the numerical experiments show the effectiveness of the method in computing a good solution in a few iterations, when compared with some recently proposed methods regarded as best performing.

12.
For a simple nonsmooth minimization problem, the discrete minisum problem, an efficient hybrid method is presented. This method consists of an ‘inner algorithm’ (Newton method) for solving the necessary optimality conditions and a gradient-type ‘outer algorithm’. In this way we combine the large convergence area of the gradient technique with the fast final convergence of the Newton method.

13.
We show that, for an unconstrained optimization problem, the long-term optimal trajectory consists of a sequence of greatest descent directions and the Newton method in the final iteration. The greatest descent direction can be computed approximately by using a Levenberg-Marquardt-like formula. This implies the view that the Newton method approximates a Levenberg-Marquardt-like formula at a finite distance from the minimum point, instead of the standard view that the Levenberg-Marquardt formula is a way to approximate the Newton method. With the insight gained from this analysis, we develop a two-dimensional version of a Levenberg-Marquardt-like formula. We make use of the two numerically largest components of the gradient vector to define new search directions. In this way, we avoid the need to invert a high-dimensional matrix. This also reduces the storage requirements for the full Hessian matrix in problems with a large number of variables. The author thanks Mark Wu, Professors Sanyang Liu, Junmin Li, Shuisheng Zhou and Feng Ye for support and help in this research, as well as the referees for helpful comments.
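The two regimes described, steepest-descent-like far away and Newton-like near the minimum, are both captured by the damped step below, together with a sketch of the paper's two-dimensional idea of solving only in the coordinates of the two largest gradient components; parameter names are illustrative.

```python
import numpy as np

def lm_like_step(H, g, mu):
    """Levenberg-Marquardt-like direction p = -(H + mu*I)^{-1} g:
    large mu gives (scaled) steepest descent, mu -> 0 the Newton step."""
    return np.linalg.solve(H + mu * np.eye(len(g)), -g)

def two_dim_step(H, g):
    """Solve the Newton system only in the two coordinates carrying
    the numerically largest gradient components, avoiding a full
    n-by-n inversion and full Hessian storage."""
    idx = np.argsort(np.abs(g))[-2:]
    p = np.zeros_like(g)
    p[idx] = np.linalg.solve(H[np.ix_(idx, idx)], -g[idx])
    return p
```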

14.
A modified Newton-like method for nonlinearly constrained optimization calculations is presented and applied to a variety of separation process optimization problems. The nonlinear programming algorithm in this paper is similar to those of Wilson, Han and Powell. It differs only in the way in which approximations to the Hessian matrix of the Lagrangian function are built. For separation process applications, a large amount of reliable second-derivative information is available in analytical form. Much of the remaining second-derivative information is subject to certain thermodynamic constraints. Our algorithm exploits these two facts by building Hessian matrix approximations in two parts: a computed part, which is built analytically, and an approximated part, which is built by iterated projections and the Powell-symmetric-Broyden formula. Iterated projections are required so that the quasi-Newton matrix approximations satisfy the thermodynamic constraints at each iteration and a limiting secant condition. Further advantage is taken of the complete separability of the thermodynamic functions. This modified successive quadratic programming algorithm has been compared to various Newton-like methods on a number of separation process optimization problems. These include cost minimization calculations for one- and two-stage flash units and the optimization of the operating conditions of a distillation column. The numerical results show that our modified algorithm can compete favorably with existing methods, in terms of reliability and efficiency, on these applications.

15.
The formulation of a particular fluid-structure interaction as an optimal control problem is the departure point of this work. The control is the vertical component of the force acting on the interface, and the observation is the vertical component of the velocity of the fluid on the interface. This approach permits us to solve the coupled fluid-structure problem by partitioned procedures. The analytic expression for the gradient of the cost function is obtained in order to devise accurate numerical methods for the minimization problem. Numerical results arising from blood flow in arteries are presented. To solve the optimal control problem numerically, we use a quasi-Newton method which employs the analytic gradient of the cost function and in which the approximation of the inverse Hessian is updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme. This algorithm is faster than fixed-point iterations with relaxation or block Newton methods.

16.
Tao Ting, Pan Shaohua, Bi Shujun 《Journal of Global Optimization》2021,81(4):991-1017

This paper is concerned with the squared F(robenius)-norm regularized factorization form for noisy low-rank matrix recovery problems. Under a suitable assumption on the restricted condition number of the Hessian matrix of the loss function, we establish an error bound to the true matrix for the non-strict critical points with rank not more than that of the true matrix. Then, for the squared F-norm regularized factorized least squares loss function, we establish its KL property of exponent 1/2 on the global optimal solution set under the noisy and full sample setting, and achieve this property at a certain class of its critical points under the noisy and partial sample setting. These theoretical findings are also confirmed by solving the squared F-norm regularized factorization problem with an accelerated alternating minimization method.
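For orientation, in the full-sample setting the squared F-norm regularized factorization objective is min over U, V of 0.5||U V' - M||_F^2 + 0.5*lam*(||U||_F^2 + ||V||_F^2), and a plain (non-accelerated) alternating minimization reduces each half-step to a ridge solve; the sketch below makes that concrete under these assumptions and is not the authors' accelerated method.

```python
import numpy as np

def fnorm_regularized_factorization(M, r, lam=0.1, iters=100, seed=0):
    """Alternating minimization for
        min_{U,V} 0.5*||U @ V.T - M||_F**2
                  + 0.5*lam*(||U||_F**2 + ||V||_F**2)
    in the full-sample setting; each half-step is a closed-form
    ridge-regression solve."""
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    I = lam * np.eye(r)
    for _ in range(iters):
        U = M @ V @ np.linalg.inv(V.T @ V + I)
        V = M.T @ U @ np.linalg.inv(U.T @ U + I)
    return U, V
```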

17.
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection of duality theory and Newton's method with some known algorithms for projecting onto a standard simplex is shown. Using the example of the constraints of the transport linear programming problem, it is demonstrated how the specifics of the constraints can be exploited to compute the generalized Hessian matrix more efficiently. Some examples of numerical calculations using MATLAB are presented.
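One of the known algorithms this duality view connects to is the sorting-based projection onto the standard simplex; a minimal sketch of the standard threshold rule, stated here as background rather than taken from the paper:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.

    Sort, find the largest index rho whose prefix average still
    yields a positive entry, then shift and clip at zero."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

For example, `project_to_simplex(np.array([0.8, 0.9, -0.5]))` returns `[0.45, 0.55, 0.0]`, which is nonnegative and sums to one.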

18.
A tensor given by its canonical decomposition is approximated by another tensor (again, in the canonical decomposition) of fixed lower rank. For this problem, the structure of the Hessian matrix of the objective function is analyzed. It is shown that all the auxiliary matrices needed for constructing the quadratic model can be calculated so that the computational effort is a quadratic function of the tensor dimensionality (rather than a cubic function as in earlier publications). An economical version of the trust region Newton method is proposed in which the structure of the Hessian matrix is efficiently used for multiplying this matrix by vectors and for scaling the trust region. At each step, the subproblem of minimizing the quadratic model in the trust region is solved using the preconditioned conjugate gradient method, which is terminated if a negative curvature direction is detected for the Hessian matrix.
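A matrix-free trust-region CG of the Steihaug-Toint type, terminating on negative curvature or at the boundary, can be sketched as follows; this is the generic inner solver the abstract alludes to, with the tensor-specific Hessian structure abstracted into a `hess_vec` callback and preconditioning omitted for brevity.

```python
import numpy as np

def steihaug_cg(hess_vec, g, delta, tol=1e-6, max_it=100):
    """Minimize the model g.T @ p + 0.5 * p.T @ H @ p over ||p|| <= delta,
    with H available only through products hess_vec(v); returns a
    boundary point when negative curvature is met or the step grows
    past the trust-region radius."""
    p = np.zeros_like(g)
    r = -g.astype(float)
    d = r.copy()
    for _ in range(max_it):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:                          # negative curvature direction
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        if np.linalg.norm(p + alpha * d) >= delta:
            return _to_boundary(p, d, delta)  # step leaves the region
        r_new = r - alpha * Hd
        p = p + alpha * d
        if np.linalg.norm(r_new) < tol:
            return p
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

def _to_boundary(p, d, delta):
    """Largest t >= 0 with ||p + t*d|| = delta (positive quadratic root)."""
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - delta ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + t * d
```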

19.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction, simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reductions. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.

20.
Implicit Runge-Kutta (IRK) methods (such as the s-stage Radau IIA method with s = 3, 5, or 7) for solving stiff ordinary differential equation systems have excellent stability properties and high solution accuracy orders, but the high computing cost of solving their nonlinear stage equations has seriously limited their application to large-scale problems. To reduce this cost, several approximate Newton algorithms were developed, including a commonly used one called the simplified Newton method. In this paper, a new approximate Jacobian matrix and two new test rules for controlling the updating of approximate Jacobian matrices are proposed, yielding an improved approximate Newton method. Theoretical and numerical analysis shows that the improved approximate Newton method can significantly improve the convergence and performance of the simplified Newton method.
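As background on the baseline being improved, the simplified Newton method freezes the Jacobian at the step's initial point so that the iteration matrix is factorized only once per step; a minimal sketch for an autonomous system y' = f(y), with a dense solve standing in for the stored factorization.

```python
import numpy as np

def simplified_newton_stages(f, jac, y0, h, A, tol=1e-10, max_it=50):
    """Simplified Newton iteration for the IRK stage equations
    Z_i = h * sum_j A[i, j] * f(y0 + Z_j), i = 1..s, with the
    Jacobian frozen at J = jac(y0)."""
    s, n = A.shape[0], y0.size
    J = jac(y0)                                 # frozen Jacobian
    M = np.eye(s * n) - h * np.kron(A, J)       # fixed iteration matrix
    Z = np.zeros(s * n)
    for _ in range(max_it):
        F = np.concatenate([f(y0 + Z[i*n:(i+1)*n]) for i in range(s)])
        G = Z - h * np.kron(A, np.eye(n)) @ F   # stage-equation residual
        dZ = np.linalg.solve(M, -G)
        Z += dZ
        if np.linalg.norm(dZ) < tol:
            break
    return Z.reshape(s, n)
```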
