Similar Articles
20 similar articles found.
1.
This paper presents a modified quasi-Newton method for structured unconstrained optimization. The usual structured quasi-Newton (SQN) equation employs only gradients and ignores the available function value information. Several researchers have studied alternative secant conditions to obtain a better approximation of the Hessian matrix of the objective function. Recently, Yabe et al. (2007) [6] proposed a modified secant condition that uses both gradient and function value information in order to achieve higher-order accuracy in approximating the second-order curvature of the objective function. In this paper, we derive a new progressive modified SQN equation with a vector parameter that likewise uses both gradient and function value information and maintains most properties of the usual and modified structured quasi-Newton methods. Furthermore, local and superlinear convergence of the algorithm is established under reasonable conditions.
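For readers comparing these conditions, here is a minimal sketch in our own notation (not quoted from the paper): the standard secant condition versus a Zhang/Yabe-type modified condition that folds in function values.

```latex
% s_k = x_{k+1} - x_k,  y_k = g_{k+1} - g_k,  u_k any vector with s_k^T u_k \neq 0
B_{k+1} s_k = y_k                                          % standard secant condition
B_{k+1} s_k = y_k + \frac{\theta_k}{s_k^{\top} u_k}\, u_k, % modified secant condition
\qquad \theta_k = 6\,(f_k - f_{k+1}) + 3\,(g_k + g_{k+1})^{\top} s_k
```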

2.
Journal of Complexity, 2002, 18(2): 557–572
This paper studies recent modifications of the limited memory BFGS (L-BFGS) method for solving large-scale unconstrained optimization problems. Each modification technique attempts to improve the quality of the L-BFGS Hessian by employing extra updates in a certain sense. Because at some iterations these updates might be redundant or might worsen the quality of this Hessian, this paper proposes an update criterion to measure that quality. Extra updates are then employed only to improve a poor approximation of the L-BFGS Hessian. The numerical results presented illustrate the usefulness of this criterion and show that extra updates improve the performance of the L-BFGS method substantially.
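As background for the update-quality discussion, a minimal sketch of the classical L-BFGS two-loop recursion, the baseline being modified above (variable names are ours):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Classical L-BFGS two-loop recursion: returns d = -H_k g, where H_k is
    the inverse-Hessian approximation implied by the stored curvature pairs
    (s_i, y_i) = (x_{i+1} - x_i, g_{i+1} - g_i), oldest first."""
    if not s_list:                       # no pairs yet: steepest descent
        return -g
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q -= alpha * y
    # Scale by the standard initial matrix H_0 = (s'y / y'y) * I.
    s, y = s_list[-1], y_list[-1]
    q *= s.dot(y) / y.dot(y)
    # Second loop: oldest pair to newest (alphas were stored newest-first).
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * y.dot(q)
        q += (alpha - beta) * s
    return -q
```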

3.
A new generalized Polak–Ribière conjugate gradient algorithm is proposed for unconstrained optimization, and its numerical and theoretical properties are discussed. The new method is, in fact, a particular type of two-dimensional Newton method and is based on a finite-difference approximation to the product of a Hessian and a vector.
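The Hessian-vector product mentioned here is commonly approximated by a forward difference of gradients; a minimal sketch, assuming a callable `grad` (our name, for illustration):

```python
def hess_vec_fd(grad, x, v, eps=1e-6):
    """Forward-difference approximation of the Hessian-vector product:
    H(x) v ~ (grad(x + eps*v) - grad(x)) / eps, where `grad` returns the
    gradient of the objective at a point as a numpy array."""
    return (grad(x + eps * v) - grad(x)) / eps
```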

4.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
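A minimal sketch of one common nonmonotone (Grippo-type, max-of-recent-values) Armijo rule of the kind discussed above; the paper's two exact forms may differ, and all names here are ours:

```python
def nonmonotone_armijo(f, x, d, g, f_history, c=1e-4, tau=0.5, alpha=1.0,
                       max_backtracks=50):
    """Grippo-type nonmonotone Armijo backtracking: accept alpha once
    f(x + alpha*d) <= max(recent f-values) + c*alpha*g'd.  `f_history`
    holds the last few objective values; d must be a descent direction."""
    f_ref = max(f_history)        # nonmonotone reference instead of f(x)
    slope = g.dot(d)              # g'd < 0 for a descent direction
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + c * alpha * slope:
            break
        alpha *= tau              # shrink the step and try again
    return alpha
```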

5.
A class of generalized variable penalty formulations for solving nonlinear programming problems is presented. The method poses a sequence of unconstrained optimization problems with mechanisms to control the quality of the approximation of the Hessian matrix, which is expressed in terms of the constraint functions and their first derivatives. The unconstrained problems are solved using a modified Newton's algorithm. The method is particularly applicable to solution techniques where an approximate analysis step has to be used (e.g., constraint approximations), which often results in violation of the constraints. The generalized penalty formulation contains two floating parameters, which are used to meet the penalty requirements and to control the errors in the approximation of the Hessian matrix. A third parameter is used to vary the class of standard barrier or quasibarrier functions, forming a branch of the variable penalty formulation. Several possibilities for choosing these floating parameters are discussed. The numerical effectiveness of the algorithm is demonstrated on a relatively large set of test examples. The author is thankful for the constructive suggestions of the referees.

6.
Based on the second-order quasi-Newton equation, combined with the nonmonotone line search proposed by Zhang H.C., a diagonal second-order quasi-Newton algorithm is constructed for solving large-scale unconstrained optimization problems. At each iteration the algorithm approximates the inverse of the Hessian matrix by a diagonal matrix, which markedly reduces both the storage and the work required to compute the search direction, offering a new approach to solving large unconstrained optimization problems. Under the usual assumptions, global convergence and superlinear convergence of the algorithm are proved. Numerical experiments show that the algorithm is effective and practical.
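Illustration only, not the paper's update: one cheap way to maintain an O(n) diagonal inverse-Hessian approximation from the usual curvature pair (s, y), with safeguards:

```python
import numpy as np

def diag_inv_hessian(s, y, lo=1e-4, hi=1e4):
    """Componentwise secant estimate of the inverse-Hessian diagonal:
    d_i ~ s_i / y_i where y_i is safely nonzero, the scalar s'y / y'y
    elsewhere, clipped to [lo, hi] to keep the matrix positive definite."""
    fallback = s.dot(y) / y.dot(y)
    d = np.divide(s, y, out=np.full_like(s, fallback), where=np.abs(y) > 1e-12)
    return np.clip(d, lo, hi)
```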

7.
In this paper, we propose two modified partial-update algorithms for solving unconstrained unary optimization problems based on trust-region stabilization via indefinite dogleg curves. The two algorithms partially update an approximation to the Hessian matrix at each iteration by applying the rank-one updating of the Bunch–Parlett factorization a number of times. In contrast with the original algorithms in Ref. 1, the two algorithms not only converge globally but also possess a locally quadratic or superlinear convergence rate. Furthermore, our numerical experiments show that the new algorithms outperform the trust-region method that uses the partial-update criteria suggested in Ref. 1.

8.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization that accounts for the possible lack of positive definiteness of the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity pattern is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used; however, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner; such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed. The above algorithms were tested on a set of problems, and the overall conclusion is that these methods perform well in practice.

9.
A modified version of the truncated-Newton algorithm of Nash ([24], [25], [29]) is presented, differing from it only in the use of an exact Hessian-vector product for carrying out the large-scale unconstrained optimization required in variational data assimilation. The exact Hessian-vector product is obtained by solving an optimal control problem with distributed parameters (i.e., the system under study occupies a certain spatial and temporal domain and is modeled by partial differential equations). The algorithm, referred to as the adjoint truncated-Newton algorithm, is based on first- and second-order adjoint techniques, which allow a better approximation to the Newton line-search direction for the problem tested here. The adjoint truncated-Newton algorithm is applied to a limited-area shallow-water equations model with model-generated data, where the initial conditions serve as control variables. We compare the performance of the adjoint truncated-Newton algorithm with that of the original truncated-Newton method [29] and the L-BFGS (limited-memory BFGS) method of Liu and Nocedal [23]. Our numerical tests yield results that are twice as fast as those obtained by the truncated-Newton algorithm and faster than the L-BFGS method, both in terms of number of iterations and in terms of CPU time.

10.
We introduce an algorithm for unconstrained optimization based on transforming the Newton method with line search into a gradient descent method. The main idea in the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength calculation is based on a Taylor expansion at two successive iterates together with a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
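A hedged sketch of the kind of steplength rule described: estimate a scalar curvature from a second-order Taylor expansion through two successive iterates, then take its reciprocal as the trial step (the paper's exact formula may differ; names are ours):

```python
def taylor_steplength(f_prev, f_curr, g_curr, s, fallback=1.0, eps=1e-12):
    """Scalar curvature from two successive iterates: expanding f at x_k,
      f(x_{k-1}) ~ f(x_k) - g_k's + 0.5*gamma*||s||^2  with s = x_k - x_{k-1},
    gives gamma ~ 2*(f_prev - f_curr + g_curr's)/||s||^2; the trial step is
    1/gamma, to be refined by backtracking.  Falls back when gamma <= 0."""
    gamma = 2.0 * (f_prev - f_curr + g_curr.dot(s)) / s.dot(s)
    return 1.0 / gamma if gamma > eps else fallback
```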

11.
The limited memory BFGS method (L-BFGS) is an adaptation of the BFGS method for large-scale unconstrained optimization. However, the L-BFGS method need not converge for nonconvex objective functions, and it is inefficient on highly ill-conditioned problems. In this paper, we propose a regularization strategy for the L-BFGS method in which the regularization parameter plays a compensating role when the Hessian approximation tends to become ill-conditioned. We then propose a regularized L-BFGS method and establish its global convergence even when the objective function is nonconvex. Numerical results show that the proposed method is efficient.

12.
Optimization, 2012, 61(3): 375–389
In this paper we consider two alternative choices for the factor used to scale the initial Hessian approximation before updating by a member of the Broyden family of updates for quasi-Newton optimization methods. Through extensive computational experiments carried out on a set of standard test problems from the CUTE collection, using efficient implementations of the quasi-Newton method, we show that the proposed new scaling factors are better, in terms of efficiency achieved (number of iterations, number of function and gradient evaluations), than the standard choice proposed in the literature.

13.
In this paper a new trust region method with a simple model for solving large-scale unconstrained nonlinear optimization problems is proposed. By employing generalized weak quasi-Newton equations, we derive several schemes for constructing variants of scalar matrices as the Hessian approximation used in the trust region subproblem. Under some reasonable conditions, global convergence of the proposed algorithm is established in the trust region framework. Numerical experiments on test problems with dimensions from 50 to 20,000 from the CUTEr library are reported to show the efficiency of the algorithm.
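When the Hessian approximation is a scalar matrix B = gamma*I, the trust region subproblem has a closed-form solution along the steepest-descent direction; a minimal sketch in our notation (not the paper's specific gamma schemes):

```python
import numpy as np

def scalar_tr_step(g, gamma, delta):
    """Closed-form solution of  min g'd + 0.5*gamma*||d||^2  s.t. ||d|| <= delta
    when the Hessian approximation is the scalar matrix gamma*I: the interior
    point -g/gamma if gamma > 0 and it fits in the region, otherwise the
    boundary point of radius delta along -g."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    if gamma > 0 and gnorm / gamma <= delta:
        return -g / gamma
    return -(delta / gnorm) * g
```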

14.
A modified BFGS algorithm is presented for solving unconstrained optimization problems whose Hessian matrix at the minimizer of the convex objective function is rank deficient. The main idea of the algorithm is first to add a modification term to the convex function to obtain an equivalent model, and then to simplify the model to derive the modified BFGS algorithm. The superlinear convergence of the algorithm is proved in this paper. Compared with the tensor algorithms presented by R. B. Schnabel (see [4], [5]), this method is more efficient for solving singular unconstrained optimization problems in terms of computational cost and complexity.

15.
周群艳, 陈俊. 《应用数学》, 2012, 25(1): 202–208
This paper presents a new globally convergent gradient method for solving large-scale unconstrained optimization problems. The new algorithm takes steps along the negative gradient direction, with the initial steplength determined from a scalar-matrix approximation to the Hessian of the objective function. It is proved that every accumulation point of the sequence generated by the new algorithm is a stationary point, and numerical experiments show that the new algorithm is reliable and effective.

16.
Based on simple quadratic models of the trust region subproblem, we combine the trust region method with nonmonotone and adaptive techniques to propose a new nonmonotone adaptive trust region algorithm for unconstrained optimization. Unlike traditional trust region methods, our trust region subproblem is very simple, using a new scalar approximation of the minimizing function's Hessian. The new method requires less memory and lower computational effort. Convergence results for the method are proved under certain conditions. Numerical results show that the new method is effective and attractive for large-scale unconstrained problems.

17.
For solving unconstrained minimization problems, quasi-Newton methods are popular iterative methods. The secant condition, which employs only gradient information, is imposed on these methods. Several researchers have studied alternative secant conditions to obtain a better approximation of the Hessian matrix of the objective function. Recently, Zhang et al. [New quasi-Newton equation and related methods for unconstrained optimization, J. Optim. Theory Appl. 102 (1999) 147–167] and Zhang and Xu [Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations, J. Comput. Appl. Math. 137 (2001) 269–278] proposed a modified secant condition which uses both gradient and function value information in order to achieve higher-order accuracy in approximating the second-order curvature of the objective function. They showed the local and q-superlinear convergence of the BFGS-like and DFP-like updates based on their proposed secant condition. In this paper, we incorporate one parameter into this secant condition to switch smoothly between the standard secant condition and the secant condition of Zhang et al. We consider a modified Broyden family which includes the BFGS-like and DFP-like updates proposed by Zhang et al., and we prove the local and q-superlinear convergence of our method.

18.
This paper presents a new nonmonotone adaptive trust region method for solving unconstrained optimization problems. The method determines the trust region radius by means of a scalar-matrix approximation to the Hessian of the objective function. Under the usual conditions, global convergence and local superlinear convergence of the new algorithm are established, and numerical experiments verify the effectiveness of the new nonmonotone method.

19.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and is therefore capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular search direction, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties that are similar to, or generalize, the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.

20.
A quasi-Newton extension of the Goldstein–Levitin–Polyak (GLP) projected gradient algorithm for constrained optimization is considered. Essentially, this extension projects an unconstrained descent step onto the feasible region. The determination of the stepsize is divided into two stages. The first is a stepsize sequence, chosen from the range [1,2] and converging to unity, which determines the size of the unconstrained step. The second is a stepsize chosen from the range [0,1] according to a stepsize strategy, which determines the length of the projected step. Two such strategies are considered: the first bounds the objective function decrease by a conventional linear functional, whereas the second uses a quadratic functional as a bound. The introduction of the unconstrained step provides the option of taking steps larger than unity. It is shown that unit steplengths, and consequently superlinear convergence rates, are attained if the projection of the quasi-Newton Hessian approximation approaches the projection of the Hessian at the solution. Thus, the requirement in the GLP algorithm of a positive-definite Hessian at the solution is relaxed. This allows the use of strictly positive-definite Hessian approximations, thereby simplifying the quadratic subproblem involved, even when the Hessian at the solution is not strictly positive definite. This research was funded by a Science and Engineering Research Council Advanced Fellowship. The author is also grateful to an anonymous referee for numerous constructive criticisms and comments.
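A skeleton of the two-stage stepsize idea, under our own naming (sigma for the [1,2] factor, beta for the [0,1] projected-step factor); the rules for choosing beta are the paper's two bounding strategies, not shown here:

```python
def glp_step(x, d, proj, sigma, beta):
    """Skeleton of one GLP-type iteration with the two-stage stepsize:
    sigma in [1, 2] enlarges the unconstrained quasi-Newton step, `proj`
    maps back onto the feasible region (e.g. lambda z: np.clip(z, lo, hi)
    for a box), and beta in [0, 1] sets the length of the projected step."""
    z = proj(x + sigma * d)       # stage 1: project the enlarged step
    return x + beta * (z - x)     # stage 2: move a fraction beta toward it
```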
