Similar Documents
20 similar documents found (search time: 15 ms)
1.
Summary  Generalized conjugate gradient algorithms that are invariant to a nonlinear scaling of a strictly convex quadratic function are described. When applied to scaled quadratic functions f: R^n → R^1 of the form f(x) = h(F(x)), with F(x) strictly convex quadratic and h ∈ C^1(R^1) an arbitrary strictly monotone function, the algorithms generate the same direction vectors as for the function F, without perfect steps.
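The invariance rests on the chain rule: ∇f(x) = h'(F(x)) ∇F(x), and since h is strictly monotone the scalar factor h'(F(x)) never vanishes, so f and F have parallel gradients everywhere. A minimal numerical check of this identity in Python, using an illustrative quadratic F and the hypothetical scaling h(t) = exp(t), both assumptions for the demonstration rather than data from the paper:

    import numpy as np

    # Illustrative strictly convex quadratic F(x) = 0.5 x'Ax - b'x and
    # strictly monotone scaling h(t) = exp(t); assumed for the demo.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    F     = lambda x: 0.5 * x @ A @ x - b @ x
    gradF = lambda x: A @ x - b
    f     = lambda x: np.exp(F(x))             # f = h(F) with h = exp
    gradf = lambda x: np.exp(F(x)) * gradF(x)  # chain rule: h'(F(x)) gradF(x)

    x = np.array([0.7, -1.2])
    gF, gf = gradF(x), gradf(x)
    # h' > 0, so both gradients point the same way; a method built from
    # normalized directions cannot distinguish f from F.
    print(np.allclose(gf / np.linalg.norm(gf), gF / np.linalg.norm(gF)))  # True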

2.
Summary  It is shown that the theory developed in Part I of this paper [22] can be applied to some well-known minimization algorithms with the quadratic termination property to prove their n-step quadratic convergence. In particular, some conjugate gradient methods, the rank-1 methods of Pearson and McCormick (see Pearson [18]), and the large class of rank-2 methods described by Oren and Luenberger [16, 17] are investigated. This work was supported in part at Stanford University, Stanford, California, under Energy Research and Development Administration Contract E(04-3) 326 PA No. 30 and National Science Foundation Grant DCR 71-01996 A04, and in part by the Deutsche Forschungsgemeinschaft.

3.
Summary  Consider the problem of minimizing a real function subject to linear equality constraints and to nonnegativity bounds on the variables. A convergence theorem is established for a general algorithm model based on the reduced gradient method. The most significant assumptions concern two crucial points: the choice of the independent variables and the choice of the search direction.
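For orientation: with the constraints written as Ax = b, x ≥ 0, and the columns of A partitioned into a nonsingular basic part B and a nonbasic part N (so that x splits into dependent variables x_B and independent variables x_N), the classical reduced gradient with respect to the independent variables is

    \[ r_N \;=\; \nabla_{x_N} f(x) \;-\; N^{\top} B^{-\top}\, \nabla_{x_B} f(x). \]

This is the standard textbook formula, recalled for context; the algorithm model analysed in the paper leaves both the choice of the independent variables and the search direction abstract.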

4.
Summary  In this paper the problem of minimizing a functional f: D ⊂ R^n → R is considered, under typical assumptions on f. A class of quasi-Newton methods, namely Huang's class, is used to find an optimal solution of this problem. A new theorem connected with this class is presented. By means of this theorem, some convergence results known until now only for methods that satisfy the quasi-Newton condition are extended, namely the superlinear convergence results for variable metric methods in the cases of exact and asymptotically exact minimization and in the so-called direct-prediction case. The theorem also allows one of the parameters to be interpreted as a scaling parameter.

5.
Summary  This paper considers a class of variable metric methods for unconstrained minimization. Without requiring exact line searches, each algorithm in this class converges globally and superlinearly on convex functions. Various results on the rate of superlinear convergence are obtained. Dedicated to Professor Dr. H. Görtler on the occasion of his seventieth birthday.

6.
Optimization, 2012, 61(4): 475-485
Several descent methods have recently been proposed for minimizing smooth compositions of max-type functions. These methods generate many search directions at each iteration. This paper shows that a random choice of only two search directions at each iteration suffices to retain convergence to inf-stationary points with probability 1. This technique may significantly decrease the work in quadratic programming and line searches, thus enabling efficient implementations of the methods.

7.
Summary  A generalized conjugate gradient algorithm that is invariant to a nonlinear scaling of a strictly convex quadratic function is described. It terminates after at most n steps when applied to scaled quadratic functions f: R^n → R^1 of the form f(x) = h(F(x)), with F(x) strictly convex quadratic and h ∈ C^1(R^1) an arbitrary strictly monotone function. The algorithm does not assume knowledge of h or F, but only of f(x) and its gradient g(x).

8.
Summary  This paper presents the outcome of an extensive comparative study of nonlinear optimization algorithms. The study indicates that quadratic approximation methods, which are characterized by solving a sequence of quadratic subproblems recursively, belong to the most efficient and reliable nonlinear programming algorithms currently available. The purpose of this paper is to analyse their theoretical convergence properties and to investigate their numerical performance in more detail. In Part 1, the exact L1-penalty function of Han and Powell is replaced by a differentiable augmented Lagrange function for the line search computation, in order to prove global convergence and to show that the steplength one is chosen in the neighbourhood of a solution. In Part 2, the quadratic subproblem is replaced by a linear least squares problem to improve efficiency and to test how the performance depends on the solution method used for the quadratic or least squares subproblem.
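The "sequence of quadratic subproblems" pattern can be seen in a bare-bones sketch: at each iterate a quadratic model is minimized subject to linearized constraints, here via the KKT system of the subproblem. The test problem, the use of the exact Hessian, and the unit step are simplifications for illustration; the merit-function line search that replaces the unit step is precisely the subject of Part 1.

    import numpy as np

    # Hedged sketch of an SQP loop for min f(x) s.t. c(x) = 0; the data
    # below are an illustrative example, not the paper's test set.
    f     = lambda x: x[0]**2 + 2*x[1]**2
    gradf = lambda x: np.array([2*x[0], 4*x[1]])
    c     = lambda x: np.array([x[0] + x[1] - 1.0])    # one linear constraint
    Jc    = lambda x: np.array([[1.0, 1.0]])
    B     = np.array([[2.0, 0.0], [0.0, 4.0]])         # exact Hessian of f

    x = np.array([5.0, -3.0])
    for _ in range(20):
        J = Jc(x)
        # Quadratic subproblem: min 0.5 p'Bp + gradf(x)'p  s.t.  Jp + c(x) = 0,
        # solved through its KKT system.
        K   = np.block([[B, J.T], [J, np.zeros((1, 1))]])
        rhs = -np.concatenate([gradf(x), c(x)])
        p   = np.linalg.solve(K, rhs)[:2]
        x   = x + p                     # unit step; no merit line search here
    print(x)                            # -> approx. [0.6667, 0.3333]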

9.
Summary  The multigrid full approximation scheme (FAS MG) is a well-known solver for nonlinear boundary value problems. In this paper we restrict ourselves to a class of second order elliptic, mildly nonlinear problems and give local conditions, e.g. a local Lipschitz condition on the derivative of the continuous operator, under which the FAS MG with suitably chosen parameters converges locally. We prove quantitative convergence statements and deduce explicit bounds for important quantities such as the radius of a ball of guaranteed convergence, the number of smoothings needed, the number of coarse grid corrections needed, and the number of FAS MG iterations needed in a nested iteration. These bounds reflect well-known features of the FAS MG scheme.
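For reference, the full approximation scheme transfers the current approximation itself, not just the residual, to the coarse grid. With fine-grid operator A_h, coarse operator A_H, restriction R, prolongation P, right-hand side f_h, and current iterate u_h, the coarse problem and the subsequent correction read

    \[ A_H(u_H) \;=\; A_H(R\,u_h) + R\bigl(f_h - A_h(u_h)\bigr), \qquad u_h \;\leftarrow\; u_h + P\bigl(u_H - R\,u_h\bigr). \]

This is the textbook form of the FAS coarse-grid equation, shown for orientation; the paper's explicit bounds concern the parameters of exactly this cycle (number of smoothings and coarse grid corrections).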

10.
Local convergence analysis for partitioned quasi-Newton updates
Summary  This paper considers local convergence properties of inexact partitioned quasi-Newton algorithms for the solution of certain nonlinear equations and, in particular, the optimization of partially separable objective functions. Using the bounded deterioration principle, one obtains local and linear convergence, which implies Q-superlinear convergence under the usual conditions on the quasi-Newton updates. For the optimization case, these conditions are shown to be satisfied by any sequence of updates within the convex Broyden class, even if some Hessians are singular at the minimizer. Finally, local and Q-superlinear convergence is established for an inexact partitioned variable metric method under mild assumptions on the initial Hessian approximations. Work supported by a research grant of the Deutsche Forschungsgemeinschaft, Bonn, and carried out at the Department of Applied Mathematics and Theoretical Physics, Cambridge (United Kingdom).
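Partial separability means that the objective splits into element functions, each depending on only a few variables, and that the partitioned update approximates each element Hessian by its own quasi-Newton formula:

    \[ f(x) \;=\; \sum_{i=1}^{m} f_i(U_i x), \qquad B \;=\; \sum_{i=1}^{m} U_i^{\top} B_i\, U_i, \]

where each U_i selects the variables entering element i and each B_i is updated separately. This is the standard formulation of partitioned updating, recalled here for context rather than quoted from the paper.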

11.
Summary. Many successful quasi-Newton methods for optimization are based on positive definite local quadratic approximations to the objective function that interpolate the values of the gradient at the current and new iterates. Line search termination criteria used in such quasi-Newton methods usually possess two important properties. First, they guarantee the existence of such a local quadratic approximation. Second, under suitable conditions, they allow one to prove that the limit of the component of the gradient in the normalized search direction is zero, usually an intermediate result in proving convergence. Collinear scaling algorithms, initially proposed by Davidon in 1980, are natural extensions of quasi-Newton methods in the sense that they are based on normal conic local approximations, which extend positive definite local quadratic approximations, and that they interpolate values of both the gradient and the function at the current and new iterates. Line search termination criteria that guarantee the existence of such a normal conic local approximation, and that also allow one to prove that the component of the gradient in the normalized search direction tends to zero, have not been known. In this paper, we propose such line search termination criteria for an important special case, in which the function being minimized belongs to a certain class of convex functions. Received February 1, 1997 / Revised version received September 8, 1997
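In the quadratic (quasi-Newton) case, the termination criteria in question are typically the Wolfe conditions: for a descent direction d and constants 0 < c_1 < c_2 < 1, a steplength t is accepted when

    \[ f(x + t\,d) \;\le\; f(x) + c_1\, t\, \nabla f(x)^{\top} d, \qquad \nabla f(x + t\,d)^{\top} d \;\ge\; c_2\, \nabla f(x)^{\top} d. \]

These standard quadratic-case criteria are shown for context only; the paper seeks analogous criteria that guarantee the existence of a normal conic interpolant.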

12.
Summary  For solving an equality constrained nonlinear least squares problem, a globalization scheme for the generalized Gauss-Newton method via damping is proposed. The stepsize strategy is based on a special exact penalty function. Under natural conditions the global convergence of the algorithm is proved. Moreover, if the algorithm converges to a solution having a sufficiently small residual, it is shown to change automatically into the undamped generalized Gauss-Newton method with a fast linear rate of convergence. The behaviour of the method is demonstrated by means of some examples taken from the literature.
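The undamped generalized Gauss-Newton direction solves the linearized problem min_p ||J(x)p + r(x)||, and damping scales the step by some t in (0, 1]. Below is a sketch of the unconstrained core on an illustrative data-fitting residual; the equality constraints and the paper's exact-penalty stepsize rule are replaced by a crude halving strategy:

    import numpy as np

    # Illustrative residual: fit y ~ a * exp(b * t) to synthetic data.
    t = np.linspace(0.0, 1.0, 8)
    y = 2.0 * np.exp(1.5 * t)

    def residual(x):
        a, b = x
        return a * np.exp(b * t) - y

    def jacobian(x):
        a, b = x
        e = np.exp(b * t)
        return np.column_stack([e, a * t * e])

    x = np.array([1.0, 0.0])
    for _ in range(30):
        r, J = residual(x), jacobian(x)
        p, *_ = np.linalg.lstsq(J, -r, rcond=None)   # Gauss-Newton direction
        step = 1.0
        # Stand-in damping: halve the step until the residual norm decreases.
        while (np.linalg.norm(residual(x + step * p)) > np.linalg.norm(r)
               and step > 1e-8):
            step *= 0.5
        x = x + step * p
    print(x)                      # -> approx. [2.0, 1.5] (zero residual)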

13.
Summary  A numerical method for constrained approximation problems in normed linear spaces is presented. The method uses extremal subgradients of the norms or sublinear functionals involved in the approximation problem considered. Under certain weak assumptions the convergence of the method is proved. For various normed spaces, hints for practical realization are given and several numerical examples are described.
A Descent Method for Approximation Problems in Normed Spaces

14.
Summary  The existence of attractive cycles constitutes a serious impediment to the solution of nonlinear equations by iterative methods. This problem is illustrated by the solution of the equation z tan z = c, for complex values of c, by Newton's method. Relevant results from the theory of the iteration of rational functions are cited and extended to the analysis of this case, in which a meromorphic function is iterated. Extensive numerical results, including many attractive cycles, are summarized. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under grants A3028 and A7691.
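A direct way to observe such cycles is to iterate the Newton map N(z) = z - f(z)/f'(z) for f(z) = z tan z - c from various complex starting points and record orbits that settle into a period greater than one. A compact sketch; the value of c and the starting points are arbitrary choices, not the paper's data:

    import numpy as np

    c = 1.0 + 0.5j                 # arbitrary complex parameter

    def newton_map(z):
        # f(z) = z tan z - c,  f'(z) = tan z + z / cos(z)^2
        tz = np.tan(z)
        return z - (z * tz - c) / (tz + z / np.cos(z) ** 2)

    def orbit_period(z0, burn_in=200, look=20, tol=1e-8):
        # Apparent period of the orbit of z0 (period 1 = convergence to a
        # fixed point, i.e. a root); 0 if the orbit overflows or stays aperiodic.
        z = z0
        for _ in range(burn_in):
            z = newton_map(z)
            if not np.isfinite(z):
                return 0
        w = z
        for p in range(1, look + 1):
            w = newton_map(w)
            if abs(w - z) < tol:
                return p
        return 0

    for z0 in (0.5 + 0.5j, 2.0 - 1.0j, 1.5 + 2.0j):
        print(z0, '-> period', orbit_period(z0))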

15.
Summary  The theoretical convergence of the Generalized Reduced Gradient (GRG) method has not previously been proved; the purpose of this paper is to propose two theoretical and general variants of the original method together with a proof of their convergence.

16.
Summary  We consider unconstrained minimization problems and the application of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable metric algorithm without exact line searches. For a certain class of step functions the global convergence of this method is proven, generalizing a result given by Powell. Furthermore, some remarks are made concerning the superlinear convergence of this particular variable metric algorithm.
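For reference, the BFGS update of the Hessian approximation B_k after a step s_k = x_{k+1} - x_k with gradient change y_k = g_{k+1} - g_k is

    \[ B_{k+1} \;=\; B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}. \]

Positive definiteness is preserved whenever y_k^{\top} s_k > 0, which is the condition an inexact line search has to secure.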

17.
Summary  In this paper new quasi-Newton methods of rank-one type for solving the unconstrained minimization problem are developed. The methods belong to the Oren-Luenberger class (for negative parameters k) and always generate positive definite updating matrices. Moreover, it is shown that these methods are invariant under scaling of the objective function.
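For contrast, the classical symmetric rank-one (SR1) update reads

    \[ B_{k+1} \;=\; B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{(y_k - B_k s_k)^{\top} s_k}, \qquad s_k = x_{k+1} - x_k, \; y_k = g_{k+1} - g_k. \]

The plain SR1 formula does not by itself guarantee positive definite updates (the denominator may vanish or be negative); guaranteeing positive definiteness is exactly what distinguishes the rank-one methods developed in this paper.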

18.
The convergence of the Durand-Kerner algorithm is quadratic in the case of simple roots but only linear in the case of multiple roots. This paper shows that, at each step, the mean of the components converging to the same root approaches it with an error proportional to the square of the error at the previous step. Since it is also shown that the multiplicity order of the roots can be estimated during the algorithm, a modification of the Durand-Kerner iteration is proposed that preserves quadratic-like convergence even in the case of multiple zeros. This work is supported in part by the Research Program C3 of the French CNRS and MEN, and by the Direction des Recherches et Etudes Techniques (DGA).
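The plain Durand-Kerner (Weierstrass) iteration updates all root approximations simultaneously from distinct starting points; the modification studied here additionally averages the components heading for the same multiple root. A sketch of the basic iteration on a polynomial with a double root (the clustering and averaging step of the modified method is not reproduced):

    import numpy as np

    coeffs = np.array([1.0, -4.0, 5.0, -2.0])     # p(z) = (z - 1)^2 (z - 2)
    p = lambda z: np.polyval(coeffs, z)

    n = len(coeffs) - 1
    z = 0.4 * np.exp(2j * np.pi * np.arange(n) / n)   # distinct start points

    for _ in range(100):
        for i in range(n):
            # Weierstrass correction: p(z_i) / prod_{j != i} (z_i - z_j)
            z[i] -= p(z[i]) / np.prod(z[i] - np.delete(z, i))
    print(np.round(z, 5))
    # Two components approach the double root z = 1 only linearly, but
    # their mean converges quadratically, which is what the paper proves.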

19.
Second order methods for the simultaneous approximation of multiple complex zeros of a polynomial are presented. A convergence analysis of the new iteration formulas and an efficient criterion for choosing the appropriate value of a root are discussed. A numerical example demonstrating the effectiveness of the presented methods is given.

20.
Summary  This paper presents a modification of the BFGS method for unconstrained minimization that avoids the computation of derivatives. The gradients are approximated with the aid of differences of function values, and these approximations are calculated in such a way that a complete convergence proof can be given. The presented algorithm is implementable and requires no exact line search. It is shown that, if the objective function is convex and some usually required conditions hold, the algorithm converges to a solution. If the Hessian matrix of the objective function is positive definite and satisfies a Lipschitz condition in a neighbourhood of the solution, then the rate of convergence is superlinear.
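The derivative-free ingredient is a finite-difference gradient; the paper controls the difference step so that the convergence proof survives, whereas the minimal sketch below just uses a fixed forward-difference step h, an assumption for illustration:

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        # Forward-difference approximation to the gradient of f at x.
        g  = np.empty_like(x)
        fx = f(x)
        for i in range(len(x)):
            e    = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g

    # Convex quadratic test function (assumed for the demo): f = 0.5 x'Qx
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    f = lambda x: 0.5 * x @ Q @ x
    print(fd_gradient(f, np.array([1.0, -1.0])))  # close to Q @ [1,-1] = [2,-1]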
