Similar Documents
20 similar documents found.
1.
The steepest-descent technique dealing with perturbed values of the objective function and its gradients and with inexact line searches is considered. Attention is given to the case where the perturbations do not decrease along the algorithm trajectories; the aim is to investigate how perturbations at every iteration of the algorithm perturb its original attractor set. Based on the Liapunov direct method for attraction analysis of discrete-time processes, a sharp estimate, in terms of the perturbation magnitudes, of the attractor set generated by the perturbed steepest-descent technique is obtained. Some global optimization properties of finite-difference analogues of the gradient method are discovered. These properties are not inherent in methods which use exact gradients. The author is grateful to the referees for many useful suggestions.
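A minimal sketch of the class of methods analyzed above: steepest descent driven by forward-difference gradients built from perturbed function values. The noise model, step size, and test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(f, x, eps=1e-4):
    """Objective observed with a bounded perturbation of magnitude eps."""
    return f(x) + eps * rng.uniform(-1.0, 1.0)

def fd_gradient(f, x, h=1e-3):
    """Forward-difference gradient assembled from perturbed function values."""
    fx = noisy(f, x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (noisy(f, x + e) - fx) / h
    return g

def perturbed_steepest_descent(f, x0, step=0.05, iters=300):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * fd_gradient(f, x)
    return x  # ends inside an attractor set whose size scales with eps/h

# Illustrative objective: quadratic bowl plus a high-frequency ripple.  With h
# not too small, the finite-difference gradient averages over the ripple -- a
# mild global-search effect that exact gradients lack.
f = lambda x: float(x @ x) + 0.05 * float(np.sin(25.0 * x).sum())
print(perturbed_steepest_descent(f, [1.5, -1.0]))
```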

2.
During the last few years, conjugate-gradient methods have been found to be the best available tool for large-scale minimization of nonlinear functions occurring in geophysical applications. While vectorization techniques have been applied to linear conjugate-gradient methods designed to solve symmetric linear systems of algebraic equations, arising mainly from discretization of elliptic partial differential equations, due to their suitability for vector or parallel processing, no such effort was undertaken for the nonlinear conjugate-gradient method for large-scale unconstrained minimization. Computational results are presented here using a robust memoryless quasi-Newton-like conjugate-gradient algorithm by Shanno and Phua applied to a set of large-scale meteorological problems. These results show that vectorizing the conjugate-gradient code yields a significant speed-up in the function and gradient evaluation for the nonlinear conjugate-gradient method, resulting in a sizable reduction in the CPU time for minimizing nonlinear functions of 10^4 to 10^5 variables. This is particularly true for many real-life problems where the gradient and function evaluation take the bulk of the computational effort. It is concluded that vector computers are advantageous for large-scale numerical optimization problems where local minima of nonlinear functions are to be found using the nonlinear conjugate-gradient method. This research was supported by the Florida State University Supercomputer Computations Research Institute, which is partially funded by the US Department of Energy through Contract No. DE-FC05-85ER250000.
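A sketch of the memoryless quasi-Newton idea behind methods of the Shanno-Phua type, written so that the dominant cost, the function and gradient evaluations, is fully array-vectorized. The driver, line search, and test function are illustrative assumptions, not the authors' code.

```python
import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    """One BFGS update of the identity matrix, applied to the new gradient
    using only the latest step s = x_{k+1} - x_k and gradient change
    y = g_{k+1} - g_k.  No matrix is stored, so the rule scales to
    10^4-10^5 variables."""
    sy = s @ y
    if sy <= 1e-12:                          # lost positive curvature: restart
        return -g_new
    sg, yg = s @ g_new, y @ g_new
    return (-g_new + ((yg - (1.0 + (y @ y) / sy) * sg) / sy) * s
            + (sg / sy) * y)

# Extended Rosenbrock: function and gradient written entirely with array
# operations -- the part a vector machine accelerates most, since f and g
# evaluations dominate the cost of large-scale nonlinear CG.
def f(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def grad(x):
    g = np.zeros_like(x)
    t = x[1:] - x[:-1] ** 2
    g[:-1] = -400.0 * x[:-1] * t - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * t
    return g

x = np.full(10_000, -1.2)
g = grad(x)
d = -g
for _ in range(500):
    step = 1.0
    while f(x + step * d) > f(x) + 1e-4 * step * (g @ d) and step > 1e-12:
        step *= 0.5                          # Armijo backtracking
    s = step * d
    x = x + s
    g_new = grad(x)
    d = memoryless_bfgs_direction(g_new, s, g_new - g)
    if g_new @ d >= 0:                       # safeguard: keep a descent direction
        d = -g_new
    g = g_new
print(f(x), np.linalg.norm(g))
```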

3.
The solution of the linear equation Cu₀ + b = 0 for u₀ is considered here, with C a positive-definite and self-adjoint operator. Such equations arise when solving quadratic optimization problems and (for example) when solving partial differential equations using finite-difference methods. A standard solution technique is to approximate C by an operator K which is easy to invert and then to construct an algorithm of the contraction-mapping type to use K⁻¹ iteratively to help solve the original equation. Such algorithms have long been used for solving equations of this type. The aim of the paper is to show that, for each K, a little-known generalization of the usual conjugate-gradient algorithm has advantages over the corresponding contraction-mapping algorithm in that it has better convergence properties. In addition, it is not significantly more difficult to implement. If K is a good approximation to C, the resulting generalized conjugate-gradient algorithm is more effective than the usual conjugate-gradient algorithm. The computed results presented in Section 4 were obtained by P. Bluett while a research student at Imperial College.
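This is the setting of preconditioned conjugate gradients. A minimal sketch under the stated assumptions (C symmetric positive definite, K easy to invert); the Jacobi choice of K and the test matrix are illustrative.

```python
import numpy as np

def preconditioned_cg(C, b, K_solve, tol=1e-10, max_iter=500):
    """Solve C u + b = 0 (C symmetric positive definite).  K_solve(r) must
    return K^{-1} r for an easily inverted approximation K of C; each
    iteration costs one product with C and one solve with K."""
    u = np.zeros_like(b)
    r = -b - C @ u                  # residual of C u + b = 0
    z = K_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Cp = C @ p
        alpha = rz / (p @ Cp)
        u += alpha * p
        r -= alpha * Cp
        if np.linalg.norm(r) < tol:
            break
        z = K_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u

# Illustrative use: 1-D Laplacian-like matrix with a Jacobi (diagonal) K.
n = 200
C = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(C)
u = preconditioned_cg(C, b, lambda r: r / d)
print(np.linalg.norm(C @ u + b))
```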

4.
A conjugate-gradient method for unconstrained optimization, which is based on a nonquadratic model, is proposed. The technique has the same properties as the Fletcher-Reeves algorithm when applied to a quadratic function. It is shown to be efficient when tried on general functions of different dimensionality.

5.
The penalty-function approach is an attractive method for solving constrained nonlinear programming problems, since it brings into play all of the well-developed unconstrained optimization techniques. If, however, the classical steepest-descent method is applied to the standard penalty-function objective, the rate of convergence approaches zero as the penalty coefficient is increased to yield a close approximation to the true solution. In this paper, it is shown that, if m+1 steps of the conjugate-gradient method are successively repeated (where m is the number of constraints), the method, applied to the penalty-function objective, converges at a rate predicted by the second derivative of the Lagrangian. This rate is independent of the penalty coefficient; hence, the scheme yields reasonable convergence for a first-order method. This research was supported by the National Science Foundation, Grant No. NSF-GK-1683.
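A sketch of the scheme just described, on a small problem where the penalized objective is exactly quadratic: Fletcher-Reeves conjugate-gradient steps with exact line search, restarted every m+1 steps. The test problem and penalty coefficient are illustrative assumptions.

```python
import numpy as np

def penalized_cg(H, q, x, n_cycles, cycle_len):
    """Minimize (1/2) x^T H x + q^T x by Fletcher-Reeves CG with exact line
    search, restarting (d = -g) at the start of every cycle of m+1 steps."""
    for _ in range(n_cycles):
        g = H @ x + q
        d = -g
        for _ in range(cycle_len):
            if g @ g < 1e-24:                 # already stationary
                return x
            alpha = -(g @ d) / (d @ H @ d)    # exact line search
            x = x + alpha * d
            g_new = H @ x + q
            d = -g_new + ((g_new @ g_new) / (g @ g)) * d
            g = g_new
    return x

# min ||x - (2, 1)||^2  subject to  x1 + x2 = 1   (m = 1 constraint),
# via the penalty P(x) = ||x - (2, 1)||^2 + (mu/2) * (x1 + x2 - 1)^2.
mu = 1e4
a = np.array([1.0, 1.0])
xstar = np.array([2.0, 1.0])
H = 2.0 * np.eye(2) + mu * np.outer(a, a)     # Hessian of P
q = -2.0 * xstar - mu * a                     # linear term of P
x = penalized_cg(H, q, np.zeros(2), n_cycles=3, cycle_len=2)
print(x)    # near (1, 0), the constrained minimizer, despite the huge mu
```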

6.
An extensive failure analysis of the steepest-descent optimization algorithm has been made. Each of the ways in which the algorithm can fail is discussed in terms of both the mathematical and numerical manifestations of a failure and the information which each type of failure provides about the formulation of the physical problem. Numerical tests for each of the various types of failure are described; several faulty problem formulations are presented, each of which illustrates a particular type of failure. A table is presented in which all failure modes are summarized and the corresponding numerical tests are exhibited.
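In the same spirit, a sketch of how such failure tests attach to a basic steepest-descent loop. The thresholds and messages are illustrative assumptions; the paper's table pairs each failure mode with its own test.

```python
import numpy as np

def steepest_descent_diagnosed(f, grad, x, tol=1e-8, max_iter=1000):
    """Steepest descent with explicit failure diagnostics."""
    for _ in range(max_iter):
        g = grad(x)
        if not np.all(np.isfinite(g)):
            return x, "gradient overflow/NaN: check scaling of the formulation"
        if np.linalg.norm(g) < tol:
            return x, "converged: gradient norm below tolerance"
        step, d = 1.0, -g
        while f(x + step * d) >= f(x):        # backtrack until f decreases
            step *= 0.5
            if step < 1e-16:
                return x, ("line search failed at machine precision: "
                           "f may be non-smooth or badly scaled here")
        x = x + step * d
    return x, "iteration limit reached: slow convergence (ill-conditioning?)"

x, status = steepest_descent_diagnosed(lambda z: float(z @ z),
                                       lambda z: 2.0 * z,
                                       np.array([3.0, -4.0]))
print(x, status)
```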

7.
A rank-one algorithm is presented for unconstrained function minimization. The algorithm is a modified version of Davidon's variance algorithm and incorporates a limited line search. It is shown that the algorithm is a descent algorithm; for quadratic forms, it exhibits finite convergence in certain cases. Numerical studies indicate that it is considerably superior to both the Davidon-Fletcher-Powell algorithm and the conjugate-gradient algorithm.
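A minimal sketch of the rank-one family this algorithm belongs to: a symmetric rank-one (SR1) correction of an inverse-Hessian estimate, with a crude backtracking search standing in for the paper's limited line search. The skip rule and test problem are illustrative assumptions.

```python
import numpy as np

def rank_one_minimize(f, grad, x, iters=100):
    D = np.eye(x.size)                     # inverse-Hessian estimate
    g = grad(x)
    for _ in range(iters):
        d = -D @ g
        step = 1.0
        while f(x + step * d) > f(x) and step > 1e-12:   # crude line search
            step *= 0.5
        s = step * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        v = s - D @ y
        # rank-one (SR1) correction; skip when the denominator is tiny
        if abs(v @ y) > 1e-8 * np.linalg.norm(v) * np.linalg.norm(y):
            D += np.outer(v, v) / (v @ y)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

quad = lambda z: 0.5 * z @ np.diag([1.0, 10.0]) @ z
gquad = lambda z: np.diag([1.0, 10.0]) @ z
print(rank_one_minimize(quad, gquad, np.array([4.0, -2.0])))
```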

8.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations which exist between conjugate-direction methods for function minimization on the one hand and Gram-Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes which yields the conjugate Gram-Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions of n variables. Results of numerical experiments with one form of CGS on five standard test functions are presented. These results show that this version of CGS is very effective. The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18. The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms. They served as a good check on the results which they had obtained earlier. Special thanks must go to Professor M. R. Hestenes, whose constant encouragement and assistance made this paper possible.
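A minimal sketch of the underlying relation: Gram-Schmidt orthogonalization in the inner product induced by a symmetric positive-definite matrix A produces mutually conjugate directions, and exact steps along them minimize the quadratic in n steps. This illustrates the quadratic case only, not Hestenes' full CGS family.

```python
import numpy as np

def conjugate_gram_schmidt(U, A):
    """Return directions p_i with p_i^T A p_j = 0 for i != j, built from the
    vectors in U by Gram-Schmidt in the inner product <u, v> = u^T A v."""
    P = []
    for u in U:
        p = u.astype(float)
        for q in P:
            p -= ((u @ A @ q) / (q @ A @ q)) * q
        P.append(p)
    return P

def minimize_quadratic(A, b, P):
    """Minimize (1/2)x^T A x - b^T x: one exact step per conjugate direction."""
    x = np.zeros_like(b)
    for p in P:
        r = b - A @ x                    # negative gradient
        x += ((r @ p) / (p @ A @ p)) * p
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
P = conjugate_gram_schmidt(list(np.eye(2)), A)
x = minimize_quadratic(A, b, P)
print(x, np.linalg.norm(A @ x - b))      # exact minimizer after n steps
```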

9.
In this paper, we propose three different kinds of iteration schemes to compute the approximate solutions of variational inequalities in the setting of Banach spaces. First, we suggest a Mann-type steepest-descent iterative algorithm, which is based on two well-known methods: the Mann iterative method and the steepest-descent method. Second, we introduce a modified hybrid steepest-descent iterative algorithm. Third, we propose a modified hybrid steepest-descent iterative algorithm that uses the resolvent operator. For the first two cases, we prove the convergence of the sequences generated by the proposed algorithms to a solution of a variational inequality in the setting of Banach spaces. For the third case, we prove the convergence of the iterative sequence generated by the proposed algorithm to a zero of an operator, which is also a solution of a variational inequality.
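A finite-dimensional sketch of the first scheme's ingredients: a steepest-descent step for the variational inequality VI(F, C), averaged in Mann fashion. The paper works in Banach spaces; here C is a box in Euclidean space so the projection is explicit, and F, the step λ, and the constant Krasnoselskii-Mann weight are illustrative assumptions.

```python
import numpy as np

def mann_steepest_descent(F, x0, lo, hi, lam=0.1, alpha=0.5, iters=200):
    """Approximate x* in the box C = [lo, hi]^n with <F(x*), y - x*> >= 0 for
    all y in C, via x_{n+1} = (1-alpha) x_n + alpha P_C(x_n - lam F(x_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = np.clip(x - lam * F(x), lo, hi)   # steepest-descent step, projected
        x = (1.0 - alpha) * x + alpha * y     # Mann averaging
    return x

# F is the gradient of a strongly convex quadratic, so the VI solution is the
# constrained minimizer of that quadratic over the box.
F = lambda x: 2.0 * (x - np.array([2.0, -3.0]))
print(mann_steepest_descent(F, np.zeros(2), lo=-1.0, hi=1.0))  # -> (1, -1)
```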

10.
A conjugate-gradient optimization method which is invariant to nonlinear scaling of a quadratic form is introduced. The technique has the property that the search directions generated are identical to those produced by the classical Fletcher-Reeves algorithm applied to the quadratic form. The approach enables certain nonquadratic functions to be minimized in a finite number of steps. Several examples which illustrate the efficacy of the method are included.

11.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the case of a quadratic functional subject to linear constraints is considered, and a conjugate-gradient algorithm is derived. Nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the differential equations, the boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Next, the more general case of a nonquadratic functional subject to nonlinear constraints is considered. The algorithm derived for the linear-quadratic case is employed with one modification: a restoration phase is inserted between any two successive conjugate-gradient phases. In the restoration phase, variations Δx(t), Δu(t), Δπ are determined by requiring the least-squares change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Thus, a sequential conjugate-gradient-restoration algorithm is constructed in such a way that the differential equations and the boundary conditions are satisfied at the end of each complete conjugate-gradient-restoration cycle. Several numerical examples illustrating the theory of this paper are given in Part 2 (see Ref. 1). These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor A. Miele for stimulating discussions. Formerly, Graduate Student in Aero-Astronautics, Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas.
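A finite-dimensional analogue of one conjugate-gradient-restoration cycle, for min f(x) subject to c(x) = 0: a conjugate-gradient phase in the tangent space of the linearized constraints, followed by a least-squares restoration phase. The projector, step sizes, and test problem are illustrative assumptions, not the optimal-control algorithm itself.

```python
import numpy as np

def tangent_projector(J):
    """Orthogonal projector onto the null space of the constraint Jacobian J."""
    return np.eye(J.shape[1]) - J.T @ np.linalg.solve(J @ J.T, J)

def cgr_cycle(grad_f, c, jac_c, x, cg_steps=5, step=0.1):
    # --- conjugate-gradient phase: Fletcher-Reeves on the projected gradient ---
    P = tangent_projector(jac_c(x))
    g = P @ grad_f(x)
    d = -g
    for _ in range(cg_steps):
        if g @ g < 1e-24:
            break
        x = x + step * d
        g_new = P @ grad_f(x)
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d
        g = g_new
    # --- restoration phase: least-squares (minimum-norm) return to c(x) = 0 ---
    for _ in range(20):
        J, cx = jac_c(x), c(x)
        if np.linalg.norm(cx) < 1e-10:
            break
        x = x - J.T @ np.linalg.solve(J @ J.T, cx)
    return x

# min x1^2 + x2^2 + x3^2   subject to   x1 + x2 + x3 = 3
grad_f = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] + x[2] - 3.0])
jac_c = lambda x: np.array([[1.0, 1.0, 1.0]])
x = np.array([3.0, 0.0, 0.0])
for _ in range(30):
    x = cgr_cycle(grad_f, c, jac_c, x)   # constraints hold at each cycle's end
print(x)   # tends to (1, 1, 1)
```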

12.
1. Introduction. The CG method is very useful for problems with a large number of variables. Since 1970 it has seen many improvements and extensions. The CCG method is based on positive-definite conic functions [1]. Its general setting is as follows: let the conic function be given by equation (2), where V = V(x) = 1 + aᵀx ≠ 0; r ∈ R¹ is a constant; a, g ∈ Rⁿ are constant vectors; x ∈ Rⁿ is the variable vector; and A ∈ Rⁿˣⁿ is a symmetric positive-definite matrix. Algorithm [1]: an initial approximation x₀ ∈ Rⁿ and an initial search direction p₀ are given in advance, satisfying a condition in which "I" is the identity matrix, V₀ = V(x₀) = 1 + aᵀx₀, and "∇" denotes the gradient of the function. The iteration scheme is xₖ₊₁ = xₖ + λₖpₖ, k = 0, 1, 2, … (3)…

13.
A new algorithm for solving smooth large-scale minimization problems with bound constraints is introduced. The way of dealing with active constraints is similar to the one used in some recently introduced quadratic solvers. A limited-memory multipoint symmetric secant method for approximating the Hessian is presented. Positive-definiteness of the Hessian approximation is not enforced. A combination of trust-region and conjugate-gradient approaches is used to explore useful information. Global convergence is proved for a general model algorithm. Results of numerical experiments are presented.

14.
A stochastic steepest-descent algorithm for function minimization under noisy observations is presented. Function evaluation is done by performing a number of random experiments on a suitable probability space. The number of experiments performed at a point generated by the algorithm reflects a balance between the conflicting requirements of accuracy and computational complexity. The algorithm uses an adaptive precision scheme to determine the number of random experiments at a point; this number tends to increase whenever a stationary point is approached and to decrease otherwise. Two rules are used to determine the number of random experiments at a point: one, in the inner loop of the algorithm, uses the magnitude of the observed gradient of the function to be minimized; the other, in the outer loop, uses a measure of accumulated errors in function evaluations at past points generated by the algorithm. Once a stochastic approximation of the function to be minimized is obtained at a point, the algorithm proceeds to generate the next point by using the deterministic steepest-descent methods of Armijo and Polak (Refs. 3, 4). Convergence of the algorithm to stationary points is demonstrated under suitable assumptions.
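A sketch of the adaptive-precision idea: average N random experiments per function estimate, grow N when the observed gradient is small (near stationarity) and shrink it otherwise, and take Armijo steepest-descent steps on the averaged estimates. The N-update rule, thresholds, and noise model are illustrative assumptions, simpler than the paper's two-rule scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate(f_noisy, x, N):
    """Average N random experiments to approximate f(x)."""
    return np.mean([f_noisy(x) for _ in range(N)])

def fd_grad(f_noisy, x, N, h=1e-2):
    """Finite-difference gradient built from averaged noisy evaluations."""
    fx = estimate(f_noisy, x, N)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (estimate(f_noisy, x + e, N) - fx) / h
    return g

def adaptive_sgd(f_noisy, x, iters=60):
    N = 4                                     # experiments per estimate
    for _ in range(iters):
        g = fd_grad(f_noisy, x, N)
        # inner-loop rule: a small observed gradient demands more accuracy
        if np.linalg.norm(g) < 1.0 / np.sqrt(N):
            N = min(2 * N, 4096)
        else:
            N = max(N // 2, 4)
        fx = estimate(f_noisy, x, N)
        step = 1.0
        while (estimate(f_noisy, x - step * g, N)
               > fx - 1e-4 * step * (g @ g)) and step > 1e-8:
            step *= 0.5                       # Armijo backtracking
        x = x - step * g
    return x

f_noisy = lambda x: float((x - 1.0) @ (x - 1.0)) + 0.01 * rng.normal()
print(adaptive_sgd(f_noisy, np.array([4.0, -2.0])))
```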

15.
This paper describes a simple algorithm for calculating the carryover term β in the conjugate-gradient method. The proposed algorithm incorporates an orthogonality correction as well as an automatic restart. Its performance is compared with alternative β formulas reported in the literature, using five test functions and two cases of parameter estimation.
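For context, a sketch of how the carryover term enters the method and how a restart test attaches to it. The Fletcher-Reeves and Polak-Ribière formulas are classical; the restart test shown is Powell's orthogonality-based criterion with his 0.2 threshold, used here as an illustrative stand-in for the paper's own correction and restart.

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def next_direction(g_new, g_old, d, beta_rule):
    """Carry over the old direction d via beta, restarting with steepest
    descent when successive gradients lose orthogonality (Powell's test)."""
    if abs(g_new @ g_old) >= 0.2 * (g_new @ g_new):
        return -g_new                          # automatic restart
    return -g_new + beta_rule(g_new, g_old) * d

# usage on a quadratic (1/2) x^T A x - b^T x with exact line search
A = np.diag([1.0, 5.0, 25.0])
b = np.ones(3)
x = np.zeros(3)
g = A @ x - b
d = -g
for _ in range(20):
    alpha = -(g @ d) / (d @ A @ d)             # exact step along d
    x += alpha * d
    g_new = A @ x - b
    if np.linalg.norm(g_new) < 1e-12:
        break
    d = next_direction(g_new, g, d, beta_polak_ribiere)
    g = g_new
print(x, np.linalg.norm(A @ x - b))
```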

16.
In this paper, a unified method to construct quadratically convergent algorithms for function minimization is described. With this unified method, a generalized algorithm is derived. It is shown that all the existing conjugate-gradient algorithms and variable-metric algorithms can be obtained as particular cases. In addition, several new practical algorithms can be generated. The application of these algorithms to quadratic functions as well as nonquadratic functions is discussed. This research, supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67, is based on Ref. 1.

17.
The sequential gradient-restoration algorithm (SGRA) was developed in the late 1960s for the solution of equality-constrained nonlinear programs and has been successfully implemented by Miele and coworkers on many large-scale problems. The algorithm consists of two major sequentially applied phases. The first is a gradient-type minimization in a subspace tangent to the constraint surface, and the second is a feasibility restoration procedure. In Part 1, the original SGRA algorithm is described and is compared with two other related methods: the gradient projection and the generalized reduced gradient methods. Next, the special case of linear equalities is analyzed. It is shown that, in this case, only the gradient-type minimization phase is needed, and the SGRA becomes identical to the steepest-descent method. Convergence proofs for the nonlinearly constrained case are given in Part 2. Partial support for this work was provided by the Fund for the Promotion of Research at Technion, Israel Institute of Technology, Haifa, Israel.
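A sketch of the linear-equality special case noted above: with constraints Ax = b, a feasible iterate stays feasible under tangent-space moves, restoration is never triggered, and the gradient phase reduces to steepest descent along the projected gradient. The test problem and fixed step are illustrative assumptions.

```python
import numpy as np

def projected_steepest_descent(grad_f, A, b, x, step=0.1, iters=200):
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)  # tangent projector
    x = x + A.T @ np.linalg.solve(A @ A.T, b - A @ x)           # one-time restoration
    for _ in range(iters):
        x = x - step * (P @ grad_f(x))       # A(Pd) = 0, so feasibility is kept
    return x

# min (x1-3)^2 + (x2-2)^2 + x3^2   subject to   x1 - x2 = 0
grad_f = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] - 2), 2 * x[2]])
A = np.array([[1.0, -1.0, 0.0]])
b = np.array([0.0])
print(projected_steepest_descent(grad_f, A, b, np.zeros(3)))  # -> (2.5, 2.5, 0)
```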

18.
The evolution of wave packets generated by impulsive disturbances of the rotating-disk boundary-layer flow is studied. It is shown, by comparison with Briggs' method, that there are certain difficulties associated with the steepest-descent time-asymptotic method of evaluating the impulse response. The rotating-disk boundary layer provides examples of saddle points through which the steepest-descent path can and cannot be made to pass. It is shown that it is critically important to establish, from the topography of the phase function, whether the steepest-descent path passes through particular saddle points. If this procedure is not carried out, calculations of the wave-packet evolution can be seriously flawed, and incorrect conclusions can be drawn about the stability of the flow, i.e., whether it is convectively or absolutely unstable.

19.
Steepest-descent optimal control techniques have been used extensively for dynamic systems in one independent variable and with a full set of initial conditions. This paper presents an extension of the steepest-descent technique to mechanical design problems that are described by boundary-value problems with one or more independent variables. The method is illustrated by solving finite-dimensional problems, problems with distribution of design over one space dimension, and problems with distribution of design over two space dimensions.

20.
Three variants of the classical conjugate-gradient method are presented. Two of these variants are based upon a nonlinear function of a quadratic form. A restarting procedure due to Powell, and based upon some earlier work of Beale, is discussed and incorporated into two of the variants. Results of applying the four algorithms to a set of benchmark problems are included, and some tentative conclusions about the relative merits of the four schemes are presented.
