Similar Documents
20 similar documents found.
1.
This paper studies the behaviour of a family of conjugate gradient optimization algorithms, of which the best known is probably that introduced in 1964 by Fletcher & Reeves. This family has the property that, on a quadratic function, the directions generated by any member of the family are the same set of conjugate directions, providing that, at each iteration, an exact linear search is performed. In this paper a modification is introduced that enables this set of conjugate directions to be generated without any accurate line searches. This enables the minimum of a quadratic function to be found in, at most, (n+2) gradient evaluations. As the modification only requires the storage of two additional n-vectors, the storage advantage of conjugate gradient algorithms vis-à-vis variable metric algorithms is maintained. Finally, a numerical study is reported in which the performance of this new method is compared to that of various members of the unmodified family.
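The finite-termination property this abstract builds on is easy to see in code. Below is a minimal sketch, not the paper's modification, of a Fletcher–Reeves-type conjugate gradient iteration on a quadratic \(f(x) = \tfrac{1}{2}x^{T}Ax - b^{T}x\) with exact line searches, which reaches the minimum in at most n steps; all names here are illustrative.

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-12):
    """Fletcher-Reeves-type conjugate gradients on f(x) = 0.5 x'Ax - b'x.

    With exact line searches on a quadratic, the directions generated
    are A-conjugate and the minimum is reached in at most n iterations.
    """
    x = x0.copy()
    g = A @ x - b          # gradient of the quadratic
    d = -g                 # first direction: steepest descent
    for _ in range(len(b)):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves formula
        d = -g_new + beta * d
        g = g_new
    return x

# Example: random symmetric positive definite system
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
b = rng.standard_normal(6)
x = cg_quadratic(A, b, np.zeros(6))
print(np.allclose(A @ x, b))   # True: minimum found in <= n steps
```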

2.
An algorithm for unconstrained minimization is developed which uses a rational function model, rather than a quadratic, as a basis for conjugate directions. The algorithm is similar to one previously proposed by the authors, but inexact linear searches are investigated in the present paper.

3.
A family of accelerated conjugate direction methods, corresponding to the Broyden family of quasi-Newton methods, is described. It is shown that all members of the family generate the same sequence of points approximating the optimum and the same sequence of search directions, provided only that each direction vector is normalized before the stepsize to be taken in that direction is determined. With minimal restrictions on how the stepsize is determined (sufficient only for convergence), the accelerated methods applied to the optimization of a function of n variables are shown to have an (n+1)-step quadratic rate of convergence. Furthermore, the information needed to generate an accelerating step can be stored in a single n-vector, rather than the usual n×n symmetric matrix, without changing the theoretical order of convergence. The relationships between this family of methods and existing conjugate direction methods are discussed, and numerical experience with two members of the family is presented. This research was sponsored by the United States Army under Contract No. DAAG29-75-C-0024. The author gratefully acknowledges the valuable assistance of Julia H. Gray, of the Mathematics Research Center, University of Wisconsin, Madison, who painstakingly programmed these methods and obtained the computational results.

4.
An approximation theory is given for a class of elliptic quadratic forms which include the study of conjugate surfaces for elliptic multiple integral problems. These ideas follow from the quadratic form theory of Hestenes, applied to multiple integral problems by Dennemeyer, and extended with applications for approximation problems by Gregory. The application of this theory to a variety of approximation problem areas in this setting is given. These include conjugate surfaces and conjugate solutions in the calculus of variations, oscillation problems for elliptic partial differential equations, eigenvalue problems for compact operators, numerical approximation problems, and, finally, the intersection of these problem areas. In the final part of this paper the ideas are specifically applied to the construction and counting of negative vectors in order to obtain new numerical methods for solving Laplace-type equations and to obtain the "Euler-Lagrange equations" for symmetric-banded tridiagonal matrices. In this new result (which will allow the reexamination of both the theory and applications of symmetric-banded matrices) one can construct, in a meaningful way, negative vectors, oscillation vectors, eigenvectors, and extremal solutions of classical problems as well as efficient algorithms for the numerical solution of partial differential equations. Numerical examples (test runs) are given.

5.
Let n measurements of a real valued function of one variable be given. If the function is convex but the data have lost convexity due to the errors of the measuring process, then the least sum of squares change to the data that provides nonnegative second divided differences may be required. An algorithm is proposed for this highly structured quadratic programming calculation. First a procedure that requires only O(n) computer operations generates a starting point for the main calculation, and then a version of the iterative method of Goldfarb & Idnani (1983) is applied. It is proved that the algorithm converges, the analysis being a special case of the theory of Goldfarb & Idnani. The algorithm is efficient because the matrices that occur are banded, due to representing the required fit as a linear combination of B-splines. Some numerical results illustrate the method. They suggest that the algorithm can be used when n is very large, because the O(n) starting procedure identifies most of the convexity constraints that are active at the solution.
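For illustration, here is a minimal sketch of the underlying quadratic program, solved with a generic solver (SciPy's SLSQP) on equally spaced data. The paper's algorithm instead uses a tailored Goldfarb & Idnani active-set method with banded B-spline matrices and an O(n) starting procedure, none of which is reproduced here; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def convex_fit(z):
    """Least-squares change to data z giving nonnegative second differences.

    A generic-solver sketch on equally spaced abscissae, where nonnegative
    second divided differences reduce to z[i] - 2 z[i+1] + z[i+2] >= 0.
    """
    n = len(z)
    C = np.zeros((n - 2, n))          # C @ y >= 0 encodes convexity
    for i in range(n - 2):
        C[i, i:i + 3] = [1.0, -2.0, 1.0]
    res = minimize(lambda y: 0.5 * np.sum((y - z) ** 2),
                   x0=z, jac=lambda y: y - z,
                   constraints={'type': 'ineq', 'fun': lambda y: C @ y},
                   method='SLSQP')
    return res.x

# Noisy measurements of the convex function t**2
t = np.linspace(-1, 1, 30)
z = t ** 2 + 0.05 * np.random.default_rng(1).standard_normal(30)
y = convex_fit(z)
print(np.all(np.diff(y, 2) >= -1e-8))   # fitted values are convex
```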

6.
The problem of solving a strictly convex quadratic programming problem is studied. The idea of conjugate directions is used. First we assume that we know the set of directions conjugate with respect to the Hessian of the goal function. We apply n simultaneous directional minimizations along these conjugate directions, starting from the same point, followed by the addition of the directional corrections. A theorem justifying that the algorithm finds the global minimum of the quadratic goal function is proved. A way of effectively constructing the required set of conjugate directions is presented. We start with a vector whose entries are zero except the first one. At each step a new vector, conjugate to those previously generated, is constructed whose number of nonzero entries is larger by one than in its predecessor. Conjugate directions obtained by means of the above construction procedure with appropriately selected parameters form an upper triangular matrix which, in exact computations, is the Cholesky factor of the inverse of the Hessian matrix. The computational cost of calculating the inverse factorization is comparable with the cost of the Cholesky factorization of the original second derivative matrix. Calculation of those vectors involves exclusively matrix/vector multiplication and finding the inverse of a diagonal matrix. Some preliminary computational results on test problems are reported. In the test problems, all symmetric positive definite matrices with dimensions from \(14\times 14\) to \(2000\times 2000\) from the University of Florida matrix repository were used as the Hessians.
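A sketch of the Gram-Schmidt-style conjugation this abstract describes, under the assumption of exact arithmetic: starting from the standard basis, each new direction is conjugated against its predecessors, so the directions form an upper triangular matrix D with \(D^{T}AD\) diagonal, and with the normalization \(d_k^{T}Ad_k = 1\) the product \(DD^{T}\) equals the inverse of the Hessian. Names and the normalization choice are illustrative, not the paper's parameterization.

```python
import numpy as np

def conjugate_directions(A):
    """Gram-Schmidt A-conjugation of the standard basis.

    d_1 = e_1, and each new direction is conjugate to its predecessors
    with one more nonzero entry, so the columns of D form an upper
    triangular matrix with D' A D = I after scaling; hence D D' = inv(A),
    i.e. D is a triangular factor of the inverse Hessian.
    """
    n = A.shape[0]
    D = np.zeros((n, n))
    for k in range(n):
        d = np.eye(n)[:, k]
        for j in range(k):                      # conjugate against earlier d_j
            d -= (D[:, j] @ (A @ d)) * D[:, j]  # uses d_j' A d_j == 1
        D[:, k] = d / np.sqrt(d @ (A @ d))      # normalize: d' A d = 1
    return D

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
D = conjugate_directions(A)
print(np.allclose(D.T @ A @ D, np.eye(5)))      # mutual conjugacy
print(np.allclose(D @ D.T, np.linalg.inv(A)))   # D D' = inv(A)
```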

7.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations which exist between conjugate direction methods for function minimization on the one hand and Gram-Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes which yields the conjugate Gram-Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions of n variables. Results of numerical experiments of one form of CGS on five standard test functions are presented. These results show that this version of CGS is very effective. The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18. The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms. They served as a good check on the results which they had obtained earlier. Special thanks must go to Professor M. R. Hestenes, whose constant encouragement and assistance made this paper possible.
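The conjugate-direction fact that CGS builds on can be demonstrated directly: given any mutually A-conjugate set of directions, n exact one-dimensional minimizations reach the exact minimizer of a quadratic. Below is a minimal sketch of this fact, not the CGS equations themselves, using the eigenvectors of A as one convenient A-conjugate set; names are illustrative.

```python
import numpy as np

def minimize_along_conjugate_dirs(A, b, x0, D):
    """Minimize f(x) = 0.5 x'Ax - b'x by exact line searches along the
    columns of D, assumed mutually A-conjugate.  Any such set reaches
    the exact minimizer after n one-dimensional minimizations -- the
    basic fact underlying conjugate-direction (and CGS) methods.
    """
    x = x0.copy()
    for k in range(D.shape[1]):
        d = D[:, k]
        g = A @ x - b
        x = x - ((g @ d) / (d @ (A @ d))) * d   # exact 1-D minimization
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
_, D = np.linalg.eigh(A)        # eigenvectors: one A-conjugate set
x = minimize_along_conjugate_dirs(A, b, rng.standard_normal(5), D)
print(np.allclose(A @ x, b))    # exact minimizer after n line searches
```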

8.
Optimization, 2012, 61(7): 929–941
To take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest a hybridization of these methods using a quadratic relaxation of a hybrid CG parameter proposed by Dai and Yuan. In the proposed method, the hybridization parameter is computed based on a conjugacy condition. Under proper conditions, we show that our method is globally convergent for uniformly convex functions. We give a numerical comparison of the implementations of our method and two efficient hybrid CG methods proposed by Dai and Yuan using a set of unconstrained optimization test problems from the CUTEr collection. Numerical results show the efficiency of the proposed hybrid CG method in the sense of the performance profile introduced by Dolan and Moré.
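As a rough sketch of the general shape of such hybridizations (not the paper's method): compute the Hestenes–Stiefel and Dai–Yuan parameters, then blend them with a hybridization parameter theta. The paper's contribution is a specific quadratic-relaxation rule for theta derived from a conjugacy condition, which is not reproduced here; in this sketch theta is simply passed in, and all names are illustrative.

```python
import numpy as np

def hybrid_beta(g_new, g_old, d_old, theta):
    """Convex combination of the Hestenes-Stiefel and Dai-Yuan CG
    parameters; theta in [0, 1] is a hybridization parameter supplied
    by the caller (the paper derives it from a conjugacy condition).
    """
    y = g_new - g_old                    # gradient difference
    denom = d_old @ y                    # shared HS/DY denominator
    beta_hs = (g_new @ y) / denom        # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / denom    # Dai-Yuan
    return (1.0 - theta) * beta_hs + theta * beta_dy

# New search direction from the blended parameter:
g_old = np.array([1.0, -2.0]); g_new = np.array([0.5, 0.3])
d_old = np.array([-1.0, 2.0])
d_new = -g_new + hybrid_beta(g_new, g_old, d_old, theta=0.5) * d_old
print(d_new)
```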

9.
In this article, by slightly modifying the search direction of the nonmonotone Hestenes–Stiefel method, a variant Hestenes–Stiefel conjugate gradient method is proposed that satisfies the sufficient descent condition independent of any line search. This algorithm also possesses information about the gradient value and the function value. We establish the global convergence of our methods without the assumption that the steplength is bounded away from zero. Numerical results illustrate that our method can efficiently solve the test problems, and therefore is promising.

10.
There is a family of gradient algorithms (Broyden, 1970) that includes many useful methods for calculating the least value of a function F(x), and some of these algorithms have been extended to solve linearly constrained problems (Fletcher, 1971). Some new and fundamental properties of these algorithms are given, in the case that F(x) is a positive definite quadratic function. In particular these properties are relevant to the case when only some of the iterations of an algorithm make a complete linear search. They suggest that Goldfarb's (1969) algorithm for linearly constrained problems has excellent quadratic termination properties, and it is proved that these properties are better than has been stated in previously published papers. Also a new technique is identified for unconstrained minimization without linear searches.
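For reference, a minimal sketch of the one-parameter Broyden (1970) family of inverse-Hessian updates this abstract refers to: phi = 0 gives the DFP update and phi = 1 gives BFGS, and every member satisfies the secant condition. Variable names are illustrative; the paper's quadratic-termination analysis is not reproduced here.

```python
import numpy as np

def broyden_family_update(H, s, y, phi):
    """One-parameter Broyden family update of the inverse-Hessian
    approximation H, with s = x_new - x_old and y = g_new - g_old.
    phi = 0 is DFP, phi = 1 is BFGS.
    """
    Hy = H @ y
    sy = s @ y
    yHy = y @ Hy
    dfp = H + np.outer(s, s) / sy - np.outer(Hy, Hy) / yHy
    v = s / sy - Hy / yHy               # family correction direction
    return dfp + phi * yHy * np.outer(v, v)

rng = np.random.default_rng(4)
H = np.eye(4)
s, y = rng.standard_normal(4), rng.standard_normal(4)
if s @ y <= 0:
    y = -y                              # ensure curvature condition s'y > 0
for phi in (0.0, 0.5, 1.0):             # DFP, a mixture, BFGS
    print(np.allclose(broyden_family_update(H, s, y, phi) @ y, s))
    # True for every phi: the whole family satisfies the secant condition
```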

11.
Summary. The method of shortest residuals (SR) was presented by Hestenes and studied by Pytlak. If the function is quadratic, and if the line search is exact, then the SR method reduces to the linear conjugate gradient method. In this paper, we put forward a formulation of the SR method for the case when the line search is inexact. We prove that, if the stepsizes satisfy the strong Wolfe conditions, both the Fletcher-Reeves and Polak-Ribière-Polyak versions of the SR method converge globally. When the Wolfe conditions are used, the two versions are also convergent provided that the stepsizes are uniformly bounded; if the stepsizes are not bounded, an example is constructed to show that they need not converge. Numerical results show that the SR method is a promising alternative to the standard nonlinear conjugate gradient method. Received June 25, 1996 / Revised version received April 1, 1997 / Published online July 28, 1999
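The core geometric operation of the SR method is finding the point of smallest norm on the segment between two vectors: the search direction is (minus) the shortest vector in the convex hull of the current gradient and a scaled previous direction. A minimal sketch of that operation follows; the scaling factor and the line-search conditions follow the paper and are not reproduced, and the example vectors are illustrative.

```python
import numpy as np

def shortest_in_hull(a, b):
    """Point of smallest Euclidean norm on the segment conv{a, b}.

    Minimizes |lam*a + (1-lam)*b| over lam in [0, 1]; the unconstrained
    minimizer is lam = b.(b - a)/|b - a|^2, clipped to [0, 1].
    """
    diff = b - a
    denom = diff @ diff
    if denom == 0.0:
        return a.copy()
    lam = np.clip((b @ diff) / denom, 0.0, 1.0)
    return lam * a + (1.0 - lam) * b

# SR-style direction from gradient g and previous direction d_prev
g = np.array([1.0, 0.5])
d_prev = np.array([-0.4, 1.0])
d = -shortest_in_hull(g, -d_prev)   # shortest vector in conv{g, -d_prev}
print(d)
```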

12.
Two quadratically convergent gradient methods for minimizing an unconstrained function of several variables are examined. The heart of the Fletcher and Powell reformulation of Davidon's method is a variable H-matrix. The eigenvalues and eigenvectors of this matrix for a quadratic function are explored, leading to a proof that the gradient vectors at each step are mutually orthogonal. From this, a geometric interpretation of the H-matrix in terms of the projection of the gradient into a solution subspace is derived. These properties are then used to arrive at the main result, which states that, for a quadratic function, the direction vectors generated by the Davidon algorithm and the conjugate-gradient algorithm of Hestenes and Stiefel are scalar multiples of each other, provided the initial step each takes is in the direction of steepest descent. If one assumes no round-off error and a perfect one-dimensional search, the methods generate identical steps leading to the minimum. It is also shown that, for a quadratic function, the Davidon algorithm has a simplified version which searches in the same directions. However, the unique advantage of the original scheme, that it yields the curvature of the function at the minimum, is sacrificed for simplicity. Although these results apply to the case of a quadratic function, a comparative study of the same algorithms for a general function can be found in a companion paper. This research was carried out under Contract No. NAS 9-4036, NASA-Manned Spacecraft Center, Houston, Texas. The author is indebted to Dr. H. J. Kelley, whose suggestions and encouragement provided the inspiration for this paper.
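The main result is easy to check numerically. Below is a sketch, under the stated assumptions (quadratic function, exact line search, steepest-descent first step, H initialized to the identity), comparing the Davidon (DFP) direction with the Hestenes–Stiefel CG direction after one step; names are illustrative.

```python
import numpy as np

def dfp_update(H, s, y):
    """Davidon-Fletcher-Powell update of the variable H-matrix."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

# Quadratic f(x) = 0.5 x'Ax - b'x with a steepest-descent first step
rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A, b = M @ M.T + 4 * np.eye(4), rng.standard_normal(4)

x, H = rng.standard_normal(4), np.eye(4)
g = A @ x - b
d = -g                                  # steepest-descent first step
alpha = -(g @ d) / (d @ (A @ d))        # exact line search
x_new = x + alpha * d
g_new = A @ x_new - b

H = dfp_update(H, alpha * d, g_new - g)
d_dfp = -H @ g_new                                 # Davidon direction
d_cg = -g_new + ((g_new @ g_new) / (g @ g)) * d    # CG direction

cos = (d_dfp @ d_cg) / (np.linalg.norm(d_dfp) * np.linalg.norm(d_cg))
print(np.isclose(abs(cos), 1.0))        # parallel: scalar multiples
```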

13.
The CGS (conjugate Gram-Schmidt) algorithms of Hestenes and Stiefel are formulated so as to obtain least-square solutions of a system of equations g(x) = 0 in n independent variables. Both the linear case g(x) = Ax − h and the nonlinear case are discussed. In the linear case, a least-square solution is obtained in no more than n steps, and a method of obtaining the least-square solution of minimum length is given. In the nonlinear case, the CGS algorithm is combined with the Gauss-Newton process to minimize sums of squares of nonlinear functions. Results of numerical experiments with several versions of CGS on test functions indicate that the algorithms are effective. The author wishes to express appreciation and to acknowledge the ideas and help of Professor M. R. Hestenes which made this paper possible.
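One standard conjugate-direction route to a least-squares solution of g(x) = Ax − h = 0 in at most n steps is conjugate gradients on the normal equations (CGNR/CGLS). The sketch below shows that route for the linear case; it is not necessarily the paper's exact CGS formulation, and the names are illustrative.

```python
import numpy as np

def cgnr(A, h, tol=1e-12):
    """Least-squares solution of Ax = h by conjugate gradients applied
    to the normal equations A'Ax = A'h, without forming A'A explicitly.
    Terminates in at most n steps in exact arithmetic.
    """
    n = A.shape[1]
    x = np.zeros(n)
    r = h - A @ x              # residual of the original system
    g = A.T @ r                # negative gradient of 0.5 |Ax - h|^2
    d = g.copy()
    for _ in range(n):
        Ad = A @ d
        alpha = (g @ g) / (Ad @ Ad)
        x += alpha * d
        r -= alpha * Ad
        g_new = A.T @ r
        if np.linalg.norm(g_new) < tol:
            break
        d = g_new + ((g_new @ g_new) / (g @ g)) * d
        g = g_new
    return x

A = np.array([[1., 0.], [1., 1.], [1., 2.]])   # overdetermined system
h = np.array([1., 2., 2.])
print(np.allclose(cgnr(A, h), np.linalg.lstsq(A, h, rcond=None)[0]))
```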

14.
A method is proposed for numerical construction of Lyapunov-Krasovskii functionals for analyzing the stability of linear stationary lag systems. The functional is constructed in the form of the sum of two quadratic sums. Positive-definite matrices of quadratic forms are found as a solution of a nonsmooth optimization problem on a convex set. Bibliography: 8 titles. Translated from Obchyslyuval'na ta Prykladna Matematyka, No. 80, 1996, pp. 142–151.

15.
We introduce an efficient and robust proposal for solving linear systems arising at each iteration of primal-dual interior-point methods for linear programming. Our proposal is based on the stable system presented by Gonzalez-Lima et al. (Comput. Opt. Appl. 44:213–247, 2009). Using similar techniques as those employed in the splitting preconditioner introduced by Oliveira and Sorensen (Linear Algebra Appl. 394:1–24, 2005) we are able to express the stable system matrix in block form such that the diagonal blocks are nonsingular diagonal matrices and the off-diagonal blocks are matrices close to zero when the iterates are close to the solution set of the linear programming problem. For degenerate problems a perturbation of the diagonal is added. We use a low-cost fixed iterative method to solve this system. Numerical experiments have shown that our approach leads to very accurate solutions for the linear programming problem.

16.
This paper continues our earlier work on random quadratic forms. We show that if continuity conditions for a family of quadratic forms hold uniformly on an index set, generalized focal/conjugate point signature approximation results hold. We then apply these results to obtain continuity of the nth focal point in a random (robust) setting.

17.
Consider the polynomial ring R[x1, ..., xn] over a unique factorization domain R. A form (i.e., a homogeneous polynomial) is said to split if it is a product of linear forms. When a homogeneous ideal is generated by splitting forms, the associated projective algebraic set is a finite union of linear subvarieties of \(P^{n-1}(R)\). But conversely, when a projective algebraic set decomposes into linear subvarieties, its associated radical ideal may not be generated by splitting forms. In this paper we construct a recursive algorithm for establishing sufficient conditions for an ideal to be generated by a prescribed set of splitting forms and apply this algorithm to a family of ideals that have arisen in the study of block designs. Our results on ideal generators have very interesting applications to graph theory, which are discussed elsewhere.

18.
The preconditioned conjugate gradient method is employed to solve Toeplitz systems \(T_n x = b\) where the generating functions of the n-by-n Toeplitz matrices \(T_n\) are continuous nonnegative periodic functions defined on \([-\pi, \pi]\). The preconditioners \(C_n\) are band Toeplitz matrices with band-widths independent of n. We prove that the spectra of \(C_n^{-1} T_n\) are uniformly bounded by constants independent of n. In particular, we show that the solutions of \(T_n x = b\) can be obtained in \(O(n \log n)\) operations.
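A minimal sketch of the setup, not the paper's preconditioner construction: build a Toeplitz matrix from the Fourier coefficients of the strictly positive generating function \(f(t) = 1 + t^2\) (so \(c_0 = 1 + \pi^2/3\) and \(c_k = 2(-1)^k/k^2\) for \(k \neq 0\)), precondition with the band Toeplitz matrix that keeps a few central diagonals, and solve with SciPy's CG. The band-width and generating function here are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import cg, splu, LinearOperator

n, bw = 400, 5
k = np.arange(n)
# Fourier coefficients of f(t) = 1 + t^2 on [-pi, pi]
col = np.where(k == 0, 1 + np.pi**2 / 3, 2 * (-1.0)**k / np.maximum(k, 1)**2)
T = toeplitz(col)                       # symmetric positive definite Toeplitz

band_col = np.where(k <= bw, col, 0.0)  # keep band-width bw diagonals
C = csc_matrix(toeplitz(band_col))      # band Toeplitz preconditioner
lu = splu(C)                            # factorize once, reuse every iteration
M = LinearOperator((n, n), matvec=lu.solve)

b = np.ones(n)
x, info = cg(T, b, M=M)
print(info, np.linalg.norm(T @ x - b))  # info == 0 means converged
```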

19.
The paper discusses several versions of the method of shortest residuals, a specific variant of the conjugate gradient algorithm, first introduced by Lemaréchal and Wolfe and discussed by Hestenes in a quadratic case. In the paper we analyze the global convergence of the versions considered. Numerical comparison of these versions of the method of shortest residuals and an implementation of a standard Polak–Ribière conjugate gradient algorithm is also provided. It supports the claim that the method of shortest residuals is a viable technique, competitive to other conjugate gradient algorithms.

20.
Stokes's method of calculating the form of steady finite-amplitude gravity waves in deep water involves a series of coefficients \(C_n\) related to the Fourier coefficients of the free surface elevation. The condition of constant pressure at the free surface yields a series of cubic relations between the \(C_n\), which are normally used for calculations. In this paper it is shown that the \(C_n\) also satisfy some simpler, quadratic relations, which render the calculation of the profile faster and more accurate. The new relations are equivalent to certain integral properties involving the square of the particle speed, integrated along a streamline. This enables a generalization to be readily made to waves in water of finite depth.
