Similar Documents
20 similar documents retrieved (search time: 687 ms)
1.
It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n-step or (n−1)-step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit every-step superlinear convergence. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of F(x). Research was supported in part by the Army Research Office, Contract Number DAHC 19-69-C-0017, and the Office of Naval Research, Contract Number N00014-71-C-0116 (NR 047-99).

2.
A method of conjugate directions, the projection method, for solving unconstrained minimization problems is presented. Under the assumption of uniform strict convexity, the method is shown to converge to the global minimizer of the unconstrained problem and to have an (n − 1)-step superlinear rate of convergence. With a Lipschitz condition on the second derivatives, the rate of convergence is shown to be a modified n-step quadratic one. This research was supported in part by the Army Research Office, Contract No. DAHC 19-69-C-0017, and the Office of Naval Research, Contract No. N00014-71-C-0116 (NR-047-099).

3.
A family of accelerated conjugate direction methods, corresponding to the Broyden family of quasi-Newton methods, is described. It is shown that all members of the family generate the same sequence of points approximating the optimum and the same sequence of search directions, provided only that each direction vector is normalized before the stepsize to be taken in that direction is determined. With minimal restrictions on how the stepsize is determined (sufficient only for convergence), the accelerated methods applied to the optimization of a function of n variables are shown to have an (n+1)-step quadratic rate of convergence. Furthermore, the information needed to generate an accelerating step can be stored in a single n-vector, rather than the usual n×n symmetric matrix, without changing the theoretical order of convergence. The relationships between this family of methods and existing conjugate direction methods are discussed, and numerical experience with two members of the family is presented. This research was sponsored by the United States Army under Contract No. DAAG29-75-C-0024. The author gratefully acknowledges the valuable assistance of Julia H. Gray, of the Mathematics Research Center, University of Wisconsin, Madison, who painstakingly programmed these methods and obtained the computational results.

4.
A version of the conjugate Gram-Schmidt method (devised by Hestenes), which requires only function evaluations, is considered. It is shown that N-step superlinear convergence to a minimum is possible when applying that method to nonquadratic functions. This work was supported by an NPS Foundation Research Grant.

5.
An iterative procedure is presented which uses conjugate directions to minimize a nonlinear function subject to linear inequality constraints. The method (i) converges to a stationary point assuming only first-order differentiability, (ii) has an (n−q)-step superlinear or quadratic rate of convergence under stronger assumptions (n is the number of variables, q is the number of constraints which are binding at the optimum), (iii) requires the computation of only the objective function and its first derivatives, and (iv) is experimentally competitive with well-known methods. For helpful suggestions, the author is much indebted to C. R. Glassey and K. Ritter. This research has been partially supported by the National Research Council of Canada under Grants Nos. A8189 and C1234.

6.
Quasi-Newton equations play a central role in quasi-Newton methods for optimization, and various quasi-Newton equations are available. This paper gives a survey of these quasi-Newton equations and studies the properties of quasi-Newton methods whose updates satisfy them. These include single-step quasi-Newton equations that use only gradient information or that use both gradient and function value information in one step, and multi-step quasi-Newton equations that use the gradient information from the last m steps. The main properties studied include finite termination, invariance, heredity of positive definite updates, consistency of search directions, global convergence, and local superlinear convergence.
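As a compact reference for the equations this survey covers, the sketch below records the standard single-step secant equation together with one function-value-augmented variant; the particular scalar correction shown (of Wei-Li-Qi type) is an illustrative assumption, not a restatement of every surveyed formula.

```latex
% Standard single-step quasi-Newton (secant) equation:
\[
  B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k .
\]
% One illustrative single-step variant using gradients and function
% values (the exact scalar correction differs among surveyed equations):
\[
  B_{k+1} s_k = \tilde y_k, \qquad
  \tilde y_k = y_k
  + \frac{2\,(f_k - f_{k+1}) + (g_k + g_{k+1})^{\top} s_k}
         {s_k^{\top} s_k}\; s_k .
\]
```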

7.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making the usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

8.
Suppose z ∈ E^n is a solution to the optimization problem: minimize F(x) s.t. x ∈ E^n, and an algorithm is available which iteratively constructs a sequence of search directions {s_j} and points {x_j} with the property that x_j → z. A method is presented to accelerate the rate of convergence of {x_j} to z provided that n consecutive search directions are linearly independent. The accelerating method uses n iterations of the underlying optimization algorithm. This is followed by a special step, and then another n iterations of the underlying algorithm followed by a second special step. This pattern is then repeated. It is shown that a superlinear rate of convergence applies to the points determined by the special step. The special step, which uses only first derivative information, consists of the computation of a search direction and a step size. After a certain number of iterations, a step size of one will always be used. The acceleration method is applied to the projection method of conjugate directions, and the resulting algorithm is shown to have an (n + 1)-step cubic rate of convergence. The acceleration method is based on the work of Best and Ritter [2].
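A minimal structural sketch of this cycling pattern follows, assuming hypothetical callbacks base_step (one iteration of the underlying algorithm) and special_step (the Best-Ritter-style accelerating step); neither is specified by the abstract, so both are placeholders.

```python
import numpy as np

def accelerated_minimize(base_step, special_step, x0, n, cycles=100, tol=1e-10):
    """Structural sketch only: n iterations of the underlying algorithm,
    then one special accelerating step, repeated.  `base_step` and
    `special_step` are hypothetical callbacks standing in for the
    underlying method and for the accelerating step, which uses only
    first-derivative information (and, after a certain number of
    iterations, always takes a step size of one)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(cycles):
        pts = [x.copy()]
        for _ in range(n):          # n iterations of the underlying algorithm
            x = base_step(x)
            pts.append(x.copy())
        x_new = special_step(pts)   # accelerating step from the stored points
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```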

9.
This paper describes a new unconstrained optimisation procedure employing conjugate directions and requiring only three n-dimensional vectors. The method has been tested for computational efficiency and stability on a large set of test functions and compared with numerical data of other major methods. Results show that the method is markedly superior to other existing conjugate gradient methods on all problems and can outperform, or is at least as efficient as, quasi-Newton methods on many of the tested problems.

10.
The convergence properties of the Davidon-Fletcher-Powell method when applied to the minimization of convex functions are considered for the case where the one-dimensional minimization required at each iteration is not solved exactly. Conditions on the error incurred at each iteration are given which are sufficient for the original method to have a linear or superlinear rate of convergence, and for the restarted version to have an n-step quadratic rate of convergence. Sponsored by the United States Army under Contract No. DA-31-124-ARO-D-462.
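For reference, the Davidon-Fletcher-Powell recursion whose inexact-line-search behavior is analyzed above updates the inverse-Hessian approximation H_k as follows (a standard statement of the formula, with the line-search error entering through the stepsize α_k):

```latex
\[
  x_{k+1} = x_k - \alpha_k H_k g_k, \qquad
  H_{k+1} = H_k
  + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
  - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k},
\]
% with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; the cited conditions
% bound how far \alpha_k may deviate from the exact one-dimensional
% minimizer while preserving the stated convergence rates.
```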

11.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction, simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reductions. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.
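A minimal sketch of this scheme is given below, assuming an Armijo backtracking search in place of the paper's line-search conditions and a simple linear-combination parameterization of the Broyden family; the helper names (backtracking, broyden_family_update, parallel_qn_minimize) are illustrative, not the authors'.

```python
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking line search (an assumption for illustration)."""
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= beta
    return alpha

def bfgs_update(H, s, y):
    """BFGS update of an inverse-Hessian approximation."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def dfp_update(H, s, y):
    """DFP update of an inverse-Hessian approximation."""
    return H + np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)

def broyden_family_update(H, s, y, phi):
    """Broyden-family member written as a linear combination of the BFGS
    (phi = 0) and DFP (phi = 1) updates -- one convenient parameterization."""
    return (1.0 - phi) * bfgs_update(H, s, y) + phi * dfp_update(H, s, y)

def parallel_qn_minimize(f, grad, x0, phis=(0.0, 0.5, 1.0), iters=200, tol=1e-8):
    """Sketch of the multi-directional scheme: several Broyden-family
    directions are line-searched (here sequentially, for clarity) and the
    lowest trial point becomes the next iterate."""
    x = np.asarray(x0, dtype=float)
    H = {phi: np.eye(len(x)) for phi in phis}   # one approximation per member
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        trials = []
        for phi in phis:
            d = -H[phi] @ g
            a = backtracking(f, x, d, g)
            trials.append(x + a * d)
        x_new = min(trials, key=f)              # lowest point in function value
        s, y = x_new - x, grad(x_new) - g
        if s @ y > 1e-12:                       # keep the updates well defined
            H = {phi: broyden_family_update(H[phi], s, y, phi) for phi in phis}
        x = x_new
    return x
```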

12.
For the problem of minimizing an unconstrained function, the conjugate-gradient method is shown to be convergent. If the function is uniformly strictly convex, the ultimate rate of convergence is shown to be n-step superlinear. If the Hessian matrix is Lipschitz continuous, the rate of convergence is shown to be n-step quadratic. All results are obtained for the reset version of the method and with a relaxed requirement on the solution of the stepsize problem. In addition to obtaining sharper results, the paper differs from previously published ones in the mode of proof, which contains as a corollary the proof of finiteness of the conjugate-gradient method when applied to a quadratic problem, rather than assuming that result.
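For concreteness, a standard reset conjugate-gradient recursion of the kind analyzed here is sketched below; the Fletcher-Reeves choice of β_k is shown as one common instance, since the abstract does not pin down the formula.

```latex
\[
  d_k = -g_k + \beta_k d_{k-1}, \qquad
  \beta_k = \frac{\lVert g_k \rVert^2}{\lVert g_{k-1} \rVert^2}
  \quad\text{(Fletcher--Reeves, one common choice),}
\]
% with the reset d_k = -g_k taken every n iterations; the n-step rates
% quoted above are measured across these reset cycles.
```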

13.
The BFGS method is the most effective of the quasi-Newton methods for solving unconstrained optimization problems. Wei, Li, and Qi [16] have proposed some modified BFGS methods based on the new quasi-Newton equation B_{k+1} s_k = y*_k, where y*_k is the sum of y_k and A_k s_k, and A_k is some matrix. The average performance of Algorithm 4.3 in [16] is better than that of the BFGS method, but its superlinear convergence is still open. This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.
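Written against that modified equation, a BFGS-type update takes the form sketched below; this is the standard BFGS formula with y_k replaced by y*_k, offered as an illustrative reconstruction rather than a restatement of Algorithm 4.3.

```latex
\[
  B_{k+1} = B_k
  - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
  + \frac{y^{*}_k (y^{*}_k)^{\top}}{(y^{*}_k)^{\top} s_k},
  \qquad y^{*}_k = y_k + A_k s_k ,
\]
% which enforces B_{k+1} s_k = y^*_k and preserves positive definiteness
% whenever (y^*_k)^T s_k > 0.
```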

14.
This paper develops a reduced Hessian method for solving inequality constrained optimization problems. At each iteration, the proposed method solves a quadratic subproblem, which is always feasible by the introduction of a slack variable, to generate a search direction, and then computes the steplength by a standard line search along that direction employing the l penalty function. A new update criterion is also proposed to generate the quasi-Newton matrices, whose dimensions may be variable, approximating the reduced Hessian of the Lagrangian. Global convergence is established under mild conditions. Moreover, local R-linear and superlinear convergence are shown under certain conditions.

15.
This paper presents some new results in the theory of Newton-type methods for variational inequalities and their application to nonlinear programming. A condition of semistability is shown to ensure the quadratic convergence of Newton's method and the superlinear convergence of some quasi-Newton algorithms, provided the sequence defined by the algorithm exists and converges. A partial extension of these results to nonsmooth functions is given. The second part of the paper considers some particular variational inequalities with unknowns (x, λ), generalizing optimality systems. Here only the question of superlinear convergence of {x_k} is considered. Some necessary or sufficient conditions are given. Applied to some quasi-Newton algorithms, they allow us to obtain the superlinear convergence of {x_k}. Application of the previous results to nonlinear programming allows us to strengthen the known results, the main point being a characterization of the superlinear convergence of {x_k} assuming a weak second-order condition without strict complementarity.

16.
A diagonal sparse quasi-Newton method for unconstrained optimization problems
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm uses an Armijo inexact line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, which markedly reduces the storage and work needed to compute the search direction and offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, global convergence and a linear convergence rate are proved, and the superlinear convergence property is analyzed. Numerical experiments show that the algorithm is more efficient than the conjugate gradient method and is well suited to large-scale unconstrained optimization problems.
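A minimal sketch of such an iteration follows; the Armijo search matches the abstract, while the particular diagonal update (a safeguarded componentwise secant fit) is an assumption for illustration, not the paper's correction formula.

```python
import numpy as np

def diagonal_qn_minimize(f, grad, x0, iters=500, tol=1e-8, c=1e-4, beta=0.5):
    """Diagonal quasi-Newton iteration with an Armijo line search.
    The diagonal update below is an illustrative assumption."""
    x = np.asarray(x0, dtype=float)
    b = np.ones_like(x)                  # diagonal Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -g / b                       # O(n) storage and work per step
        alpha = 1.0
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):   # Armijo
            alpha *= beta
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        mask = np.abs(s) > 1e-12         # componentwise secant fit b_i ~ y_i/s_i
        b[mask] = np.clip(y[mask] / s[mask], 1e-4, 1e4)   # keep b positive
        x, g = x_new, g_new
    return x
```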

17.
Optimization, 2012, 61(9):1387-1400
Although the Hestenes-Stiefel (HS) method is a well-known method, research on its convergence rate when an inexact line search is used is very rare. Recently, Zhang, Zhou and Li [Some descent three-term conjugate gradient methods and their global convergence, Optim. Method Softw. 22 (2007), pp. 697-711] proposed a three-term Hestenes-Stiefel method for unconstrained optimization problems. In this article, we investigate the convergence rate of this method. We show that the three-term HS method with the Wolfe line search is n-step superlinearly and even quadratically convergent if a restart technique is used, under reasonable conditions. Some numerical results are also reported to verify the theoretical results. Moreover, the method is more efficient than the previous ones.
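The three-term direction in question is commonly stated as below; the formulas follow the usual presentation of the Zhang-Zhou-Li method, and the restart (d_k = −g_k every n steps) is what the quoted n-step rates refer to.

```latex
\[
  d_k = -g_k + \beta_k^{HS}\, d_{k-1} - \theta_k\, y_{k-1}, \qquad
  \beta_k^{HS} = \frac{g_k^{\top} y_{k-1}}{d_{k-1}^{\top} y_{k-1}}, \quad
  \theta_k = \frac{g_k^{\top} d_{k-1}}{d_{k-1}^{\top} y_{k-1}},
\]
% so that d_k^T g_k = -\|g_k\|^2 holds independently of the line search,
% giving sufficient descent under the Wolfe conditions.
```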

18.
In this paper, we present two partitioned quasi-Newton methods for solving partially separable nonlinear equations. When the Jacobian is not available, we propose a partitioned Broyden’s rank one method and show that the full step partitioned Broyden’s rank one method is locally and superlinearly convergent. By using a well-defined derivative-free line search, we globalize the method and establish its global and superlinear convergence. In the case where the Jacobian is available, we propose a partitioned adjoint Broyden method and show its global and superlinear convergence. We also present some preliminary numerical results. The results show that the two partitioned quasi-Newton methods are effective and competitive for solving large-scale partially separable nonlinear equations.
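A plausible element-wise form of the partitioned rank-one update is sketched below, assuming F decomposes as a sum of element functions each depending on only a few variables; the paper's exact partitioning and its adjoint variant are not restated here.

```latex
% For F(x) = \sum_i F_i(x), with each F_i depending on few variables,
% update each element Jacobian approximation by Broyden's rank-one rule:
\[
  B^{(i)}_{k+1} = B^{(i)}_k
  + \frac{\bigl(y^{(i)}_k - B^{(i)}_k s^{(i)}_k\bigr)\,(s^{(i)}_k)^{\top}}
         {(s^{(i)}_k)^{\top} s^{(i)}_k},
  \qquad B_{k+1} = \sum_i B^{(i)}_{k+1},
\]
% where s^{(i)}_k and y^{(i)}_k are the step and the residual difference
% restricted to the variables entering F_i.
```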

19.
In this paper, some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First, the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied, and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle, and Wang is given. This research was supported by the National Research Program Metodi di Ottimizzazione per la Decisioni, MURST, Roma, Italy.

20.
A convergent minimization algorithm made up of repetitive line searches is considered in R^n. It is shown that the uniform nonsingularity of the matrices consisting of n successive normalized search directions guarantees a speed of convergence which is at least n-step Q-linear. Consequences are given for multistep methods, including Powell's 1964 procedure for function minimization without calculating derivatives, as well as Zangwill's modifications of this procedure. The authors wish to thank the Namur Department of Mathematics, especially its optimization group, for many discussions and encouragement. They also thank the reviewers for many helpful suggestions.
