Similar Literature
20 similar documents found.
1.
Summary. Many successful quasi-Newton methods for optimization are based on positive definite local quadratic approximations to the objective function that interpolate the values of the gradient at the current and new iterates. Line search termination criteria used in such quasi-Newton methods usually possess two important properties. First, they guarantee the existence of such a local quadratic approximation. Second, under suitable conditions, they allow one to prove that the limit of the component of the gradient in the normalized search direction is zero. This is usually an intermediate result in proving convergence. Collinear scaling algorithms proposed initially by Davidon in 1980 are natural extensions of quasi-Newton methods in the sense that they are based on normal conic local approximations that extend positive definite local quadratic approximations, and that they interpolate values of both the gradient and the function at the current and new iterates. Line search termination criteria that guarantee the existence of such a normal conic local approximation, and that also allow one to prove that the component of the gradient in the normalized search direction tends to zero, are not known. In this paper, we propose such line search termination criteria for an important special case where the function being minimized belongs to a certain class of convex functions. Received February 1, 1997 / Revised version received September 8, 1997
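For reference, the two standard Wolfe termination tests mentioned above can be sketched as follows; in the quadratic (quasi-Newton) setting, the curvature test is what guarantees that a positive definite quadratic model can interpolate the gradient at both iterates. This is a minimal illustrative sketch, not the new conic criteria proposed in the paper; all function names are hypothetical.

```python
import numpy as np

def wolfe_satisfied(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Standard Wolfe termination test for a quasi-Newton line search.

    The curvature condition (second test) forces grad(x + alpha*d) @ d
    to be larger than grad(x) @ d, which is what guarantees that a
    positive definite quadratic model along d can interpolate the
    gradient at both the current and the new iterate."""
    g0_d = grad(x) @ d                # directional derivative, < 0 for a descent direction
    decrease = f(x + alpha * d) <= f(x) + c1 * alpha * g0_d
    curvature = grad(x + alpha * d) @ d >= c2 * g0_d
    return decrease and curvature

# usage on a convex quadratic f(x) = 0.5 * x' A x
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([1.0, 1.0])
print(wolfe_satisfied(f, grad, x, -grad(x), alpha=0.05))  # True
```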

2.
In this paper, the problem of minimizing a nonlinear function f(x) subject to a nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. A conjugate gradient-restoration algorithm similar to those developed by Miele et al. (Refs. 1 and 2) is employed. This particular algorithm consists of a sequence of conjugate gradient-restoration cycles. The conjugate gradient portion of each cycle is based upon a conjugate gradient algorithm that is derived for the special case of a quadratic function subject to linear constraints. This portion of the cycle involves a single step and is designed to decrease the value of the function while satisfying the constraints to first order. The restoration portion of each cycle involves one or more iterations and is designed to restore the norm of the constraint function to within a predetermined tolerance about zero. The conjugate gradient-restoration sequence is reinitialized with a simple gradient step every n−q or fewer cycles. At the beginning of each simple gradient step, a positive-definite preconditioning matrix is used to accelerate the convergence of the algorithm. The preconditioner chosen, H⁺, is the positive-definite reflection of the Hessian matrix H. The matrix H⁺ is defined herein to be a matrix whose eigenvectors are identical to those of the Hessian and whose eigenvalues are the moduli of the latter's eigenvalues. A singular-value decomposition is used to efficiently construct this matrix. The selection of the matrix H⁺ as the preconditioner is motivated by the fact that gradient algorithms exhibit excellent convergence characteristics on quadratic problems whose Hessians have small condition numbers. To this end, the transforming operator (H⁺)^(-1/2) produces a transformed Hessian with a condition number of one. A higher-order example, which has resulted from a new eigenstructure assignment formulation (Ref. 3), is used to illustrate the rapidity of convergence of the algorithm, along with two simpler examples.
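A minimal sketch of the positive-definite reflection described above, assuming a symmetric Hessian so that an eigendecomposition can stand in for the SVD mentioned in the abstract; the helper names are hypothetical.

```python
import numpy as np

def positive_definite_reflection(H, eps=1e-12):
    """Return H+ : same eigenvectors as the symmetric matrix H,
    eigenvalues replaced by their absolute values (floored away from 0)."""
    lam, V = np.linalg.eigh(H)              # eigendecomposition of symmetric H
    lam_plus = np.maximum(np.abs(lam), eps)
    return (V * lam_plus) @ V.T             # V diag(|lam|) V'

H = np.array([[2.0, 0.0], [0.0, -5.0]])     # indefinite Hessian
H_plus = positive_definite_reflection(H)

# The transformed Hessian (H+)^(-1/2) H (H+)^(-1/2) has eigenvalues of
# modulus one, hence condition number one.
lam, V = np.linalg.eigh(H_plus)
H_plus_inv_sqrt = (V / np.sqrt(lam)) @ V.T
print(np.linalg.eigvalsh(H_plus_inv_sqrt @ H @ H_plus_inv_sqrt))  # ≈ [-1, 1]
```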

3.
In this paper, we study the problem of quadratic programming with M-matrices. We describe (1) an effective algorithm for the case where the variables are subject to a lower-bound constraint, and (2) an analogous algorithm for the case where the variables are subject to lower- and upper-bound constraints. We demonstrate the special monotone behavior of the iterate and gradient vectors. The result on the gradient vector is new. It leads us to consider a simple updating procedure which preserves the monotonicity of both vectors. The procedure uses the fact that an M-matrix has a nonnegative inverse. Two new algorithms are then constructed by incorporating this updating procedure into the two given algorithms. We give numerical examples which show that the new methods can be more efficient than the original ones.

4.
《Optimization》2012,61(2):137-150
An algorithm for addressing multiple objective linear programming (MOLP) problems is presented. The algorithm adapts the path-following primal-dual algorithm to MOLP problems by using the single-objective algorithm to generate interior search directions and then combining them to derive a single direction along which to step to the next iterate. Combining the different interior search directions is done by interacting with a Decision Maker (DM) to obtain locally-relevant preference information for the value vectors along these directions. This preference information is then used to derive an approximation to the gradient of an implicitly-known utility function, and a projection of this gradient provides a direction vector along which we step to the next iterate. At each iteration the algorithm also generates boundary points that aid in deriving the combined search direction. We refer to these boundary points, generated sequentially during the process, as anchor points; they serve as candidate solutions at which to terminate the iterative process.

5.
Described here is the structure and theory for a sequential quadratic programming algorithm for solving sparse nonlinear optimization problems. Also provided are the details of a computer implementation of the algorithm along with test results. The algorithm maintains a sparse approximation to the Cholesky factor of the Hessian of the Lagrangian. The solution to the quadratic program generated at each step is obtained by solving a dual quadratic program using a projected conjugate gradient algorithm. An updating procedure is employed that does not destroy sparsity.

6.
A tensor given by its canonical decomposition is approximated by another tensor (again, in the canonical decomposition) of fixed lower rank. For this problem, the structure of the Hessian matrix of the objective function is analyzed. It is shown that all the auxiliary matrices needed for constructing the quadratic model can be calculated so that the computational effort is a quadratic function of the tensor dimensionality (rather than a cubic function as in earlier publications). An economical version of the trust region Newton method is proposed in which the structure of the Hessian matrix is efficiently used for multiplying this matrix by vectors and for scaling the trust region. At each step, the subproblem of minimizing the quadratic model in the trust region is solved using the preconditioned conjugate gradient method, which is terminated if a negative curvature direction is detected for the Hessian matrix.

7.
Some properties of “Davidon”, or variable metric, methods are studied from the viewpoint of convex analysis; they depend on the convexity of the function to be minimized rather than on its being approximately quadratic. An algorithm is presented which generalizes the variable metric method, and its convergence is shown for a large class of convex functions.

8.
The gradient path of a real-valued differentiable function is given by the solution of a system of differential equations. For a quadratic function the above equations are linear, resulting in a closed-form solution. A quasi-Newton type algorithm for minimizing an n-dimensional differentiable function is presented. Each stage of the algorithm consists of a search along an arc corresponding to some local quadratic approximation of the function being minimized. The algorithm uses a matrix approximating the Hessian in order to represent the arc. This matrix is updated each stage and is stored in its Cholesky product form. This simplifies the representation of the arc and the updating process. Quadratic termination properties of the algorithm are discussed as well as its global convergence for a general continuously differentiable function. Numerical experiments indicating the efficiency of the algorithm are presented.

9.
A new subspace minimization conjugate gradient algorithm with a nonmonotone Wolfe line search is proposed and analyzed. In the scheme, we propose two choices of the search direction by minimizing a quadratic approximation of the objective function in special subspaces, and state criteria for choosing between them. Under given conditions, we obtain the significant conclusion that each choice of the direction satisfies the sufficient descent property. Based on a measure of how close the function is to a quadratic, a new strategy for choosing the initial stepsize is presented for the line search. With the nonmonotone Wolfe line search, we prove the global convergence of the proposed method for general nonlinear functions under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.
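A hedged sketch of a nonmonotone Wolfe test in the common max-type form (the paper's exact nonmonotone rule may differ): sufficient decrease is measured against the largest of the most recently stored function values instead of f(x_k).

```python
import numpy as np

def nonmonotone_wolfe(f, grad, x, d, alpha, recent_f, c1=1e-4, c2=0.9):
    """Nonmonotone Wolfe test: Armijo-type decrease measured against the
    maximum of the recently stored function values, plus the standard
    curvature condition."""
    f_ref = max(recent_f)                    # nonmonotone reference value
    g0_d = grad(x) @ d
    decrease = f(x + alpha * d) <= f_ref + c1 * alpha * g0_d
    curvature = grad(x + alpha * d) @ d >= c2 * g0_d
    return decrease and curvature

# usage: the history window would typically hold the last 5-10 values of f
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([2.0, -1.0])
print(nonmonotone_wolfe(f, grad, x, -grad(x), 0.25, recent_f=[5.0, 6.5]))  # True
```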

10.
We study a new trust region affine scaling method for general bound-constrained optimization problems. At each iteration, we compute two trial steps. One is computed along a direction obtained by solving an appropriate quadratic model in an ellipsoidal region. This region is defined by an affine scaling technique and depends on both the distances of the current iterate to the boundaries and the trust region radius. To ensure convergence and to avoid iterates becoming trapped around nonstationary points, an auxiliary step is defined along a newly defined approximate projected gradient. The trial step used to generate the next iterate is whichever of the two steps achieves the greater reduction of the quadratic model; we prove that the iterates generated by the new algorithm are not bounded away from stationary points. Moreover, assuming that the second-order sufficient condition holds at some nondegenerate stationary point, we prove the Q-linear convergence of the objective function values. Preliminary numerical experience on problems with bound constraints from the CUTEr collection is also reported.

11.
In this work we introduce two new Barzilai-Borwein-like step sizes for the classical gradient method for strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information about the Hessian of the objective function. A convergence analysis of the proposed algorithm is provided. Some numerical experiments are performed in order to compare the efficiency and effectiveness of the proposed methods with similar methods in the literature. Experimentally, it is observed that our proposals accelerate the gradient method at nearly no extra computational cost, which makes them a good alternative for solving large-scale problems.
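For context, a sketch of the classical BB1 step that such proposals build on (the paper's two new step sizes themselves are not reproduced here): α_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1}.

```python
import numpy as np

def bb_gradient(A, b, x, tol=1e-8, max_iter=500):
    """Barzilai-Borwein (BB1) gradient method for min 0.5 x'Ax - b'x,
    with A symmetric positive definite. The first iteration uses the
    exact steepest-descent (Cauchy) stepsize to bootstrap the method."""
    g = A @ x - b
    alpha = (g @ g) / (g @ (A @ g))          # exact steepest-descent step to start
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)            # BB1 stepsize: s's / s'y
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
print(bb_gradient(A, b, np.zeros(3)))        # ≈ A^{-1} b = [1, 0.1, 0.01]
```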

12.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

13.
In this study, we propose an algorithm for solving a minimax problem over a polyhedral set defined in terms of a system of linear inequalities. At each iteration a direction is found by solving a quadratic programming problem and then a suitable step size along that direction is taken through an extension of Armijo's approximate line search technique. We show that each accumulation point is a Kuhn-Tucker solution and give a condition that guarantees convergence of the whole sequence of iterates. Through the use of an exact penalty function, the algorithm can be used for solving constrained nonlinear programming problems. In this case, our algorithm resembles that of Han, but differs from it both in the direction-finding and the line search steps.
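A minimal sketch of the Armijo-style backtracking used once a direction d is available (here d is taken as given; in the paper it comes from the QP subproblem). The parameter names are illustrative.

```python
import numpy as np

def armijo_step(f, grad, x, d, beta=0.5, sigma=1e-4, alpha0=1.0, max_backtracks=30):
    """Armijo approximate line search: shrink alpha geometrically until the
    sufficient decrease condition f(x + alpha*d) <= f(x) + sigma*alpha*grad(x)'d
    holds."""
    g_d = grad(x) @ d                     # directional derivative, < 0 for descent
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f(x) + sigma * alpha * g_d:
            return alpha
        alpha *= beta
    return alpha

# usage on f(x) = ||x||^2 with the steepest-descent direction
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([1.0, -2.0])
print(armijo_step(f, grad, x, -grad(x)))  # 0.5 is the first accepted stepsize here
```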

14.
A projected trust region strategy is combined with a nonmonotone line search algorithm to solve bound-constrained nonlinear semismooth systems of equations. A trust region subproblem is constructed from the associated nonlinear optimization problem with simple bound constraints; projecting the semismooth Newton-type step onto the feasible region yields a projected Newton trial step and hence a new search direction, and a backtracking step with a new step length is obtained by the nonmonotone line search technique. Under reasonable conditions, the algorithm is proved to be globally convergent while retaining a superlinear rate of convergence. Introducing the nonmonotone technique helps overcome highly nonlinear ill-conditioned problems, accelerates the convergence process, and yields the superlinear convergence rate.
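A minimal sketch of the projection step, assuming simple box constraints l ≤ x ≤ u: the semismooth Newton step is taken, the result is projected back onto the box, and the difference defines the projected Newton trial direction.

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper}."""
    return np.clip(x, lower, upper)

def projected_newton_trial(x, newton_step, lower, upper):
    """Projected (semismooth-)Newton trial step: take the Newton step,
    project the result back onto the feasible box, and return the
    resulting trial direction from x."""
    return project_box(x + newton_step, lower, upper) - x

x = np.array([0.5, 0.9])
d = projected_newton_trial(x, np.array([1.0, -2.0]), np.zeros(2), np.ones(2))
print(d)   # [0.5, -0.9]: both components clipped at the bounds
```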

15.
《Optimization》2012,61(4-5):395-415
The Barzilai and Borwein (BB) gradient method does not guarantee a descent in the objective function at each iteration, but performs better than the classical steepest descent (SD) method in practice. So far, the BB method has found many successful applications and generalizations in linear systems, unconstrained optimization, convex-constrained optimization, stochastic optimization, etc. In this article, we propose a new gradient method that uses the SD and the BB steps alternately. Hence the name “alternate step (AS) gradient method.” Our theoretical and numerical analyses show that the AS method is a promising alternative to the BB method for linear systems. Unconstrained optimization algorithms related to the AS method are also discussed. Particularly, a more efficient gradient algorithm is provided by exploring the idea of the AS method in the GBB algorithm by Raydan (1997).

To establish a general R-linear convergence result for gradient methods, an important property of the stepsize is identified in this article. Consequently, an R-linear convergence result is established for a large collection of gradient methods, including the AS method. Some interesting insights into gradient methods and a discussion of monotonicity and nonmonotonicity are also given.
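A hedged sketch of the alternation on a symmetric positive definite linear system Ax = b (equivalently, minimizing ½xᵀAx − bᵀx): exact steepest-descent (Cauchy) steps and BB steps are taken in turn. The precise alternation rule below is one natural reading of the abstract, not necessarily the authors' exact scheme.

```python
import numpy as np

def alternate_step_gradient(A, b, x, tol=1e-8, max_iter=1000):
    """Alternate step (AS) gradient sketch for Ax = b, with A symmetric
    positive definite: steepest-descent and BB stepsizes in alternation."""
    g = A @ x - b
    s = y = None
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if k % 2 == 0 or s is None:
            alpha = (g @ g) / (g @ (A @ g))  # exact steepest-descent stepsize
        else:
            alpha = (s @ s) / (s @ y)        # BB stepsize from the last step
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
print(alternate_step_gradient(A, b, np.zeros(3)))  # ≈ [1, 0.1, 0.01]
```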

16.
We propose an interior point method for large-scale convex quadratic programming where no assumptions are made about the sparsity structure of the quadratic coefficient matrix Q. The interior point method we describe is a doubly iterative algorithm that invokes a conjugate projected gradient procedure to obtain the search direction. The effect is that Q appears in a conjugate direction routine rather than in a matrix factorization. By doing this, the matrices to be factored have the same nonzero structure as those in linear programming. Further, one variant of this method is theoretically convergent with only one matrix factorization throughout the procedure.

17.
An algorithm for finding an approximate global minimum of a funnel-shaped function with many local minima is described. It is applied to compute the minimum-energy docking position of a ligand with respect to a protein molecule. The method is based on the iterative use of a convex, general quadratic approximation that underestimates a set of local minima, where the error in the approximation is minimized in the L1 norm. The quadratic approximation is used to generate a reduced domain, which is assumed to contain the global minimum of the funnel-shaped function. Additional local minima are computed in this reduced domain, and an improved approximation is computed. This process is iterated until a convergence tolerance is satisfied. The algorithm has been applied to find the global minimum of the energy function generated by the Docking Mesh Evaluator program. Results for three different protein docking examples are presented. Each of these energy functions has thousands of local minima. Convergence of the algorithm to an approximate global minimum is shown for all three examples.
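A minimal one-dimensional sketch of the underestimation step (the paper fits a general convex quadratic in many variables): fit q(x) = ax² + bx + c with a ≥ 0 and q(x_i) ≤ f_i at the known local minima, minimizing the L1 error Σᵢ(f_i − q(x_i)). Because q underestimates every data point, the L1 objective is linear in (a, b, c) and the fit is a linear program. The data below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def quadratic_underestimator_1d(xs, fs):
    """Fit q(x) = a*x^2 + b*x + c with a >= 0 and q(x_i) <= f_i, minimizing
    sum_i (f_i - q(x_i)). Since q underestimates every f_i, the L1 error
    equals a linear function of (a, b, c), so this is a linear program."""
    xs, fs = np.asarray(xs, float), np.asarray(fs, float)
    Phi = np.column_stack([xs**2, xs, np.ones_like(xs)])  # q(x_i) = Phi @ [a, b, c]
    # minimize sum(f_i - q(x_i))  <=>  minimize -sum(Phi, axis=0) @ [a, b, c]
    res = linprog(c=-Phi.sum(axis=0),
                  A_ub=Phi, b_ub=fs,                      # q(x_i) <= f_i
                  bounds=[(0, None), (None, None), (None, None)])
    return res.x                                          # (a, b, c)

# local minima of a funnel-shaped function, lowest near x = 0 (made-up data)
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
fs = [4.5, 1.8, 0.2, 2.1, 4.9]
a, b, c = quadratic_underestimator_1d(xs, fs)
print(a, b, c, "minimizer of q:", -b / (2 * a))
```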

18.
In this paper we report a sparse truncated Newton algorithm for handling large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts, one of which is a subspace truncated Newton direction, the other two being subspace gradient and modified gradient directions. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
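A minimal sketch of the truncated-Newton inner solve on the free variables (active-set bookkeeping omitted): the Newton system is solved only approximately by conjugate gradients, stopping once the relative residual falls below a forcing tolerance. The function names are hypothetical.

```python
import numpy as np

def truncated_newton_direction(hess_vec, g, eta=0.5, max_cg_iter=None):
    """Approximately solve H d = -g by conjugate gradients, truncating when
    the residual drops below eta * ||g|| (inexact-Newton forcing term).
    hess_vec(v) returns H @ v, so H is only needed matrix-free."""
    n = g.shape[0]
    max_cg_iter = max_cg_iter or 2 * n
    d = np.zeros(n)
    r = -g - hess_vec(d)                 # residual of H d = -g (initially -g)
    p = r.copy()
    tol = eta * np.linalg.norm(g)
    for _ in range(max_cg_iter):
        if np.linalg.norm(r) <= tol:
            break
        Hp = hess_vec(p)
        alpha = (r @ r) / (p @ Hp)
        d += alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d

# matrix-free usage with H = diag(1..5), g = ones
hv = lambda v: np.arange(1.0, 6.0) * v
print(truncated_newton_direction(hv, np.ones(5)))  # rough approx. of -[1, 1/2, ..., 1/5]
```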

19.
《Optimization》2012,61(1):101-131
In this article, nonlinear minimax problems with general constraints are discussed. By solving a single quadratic program, an improved direction is obtained, and a second-order correction direction is also available via one system of linear equations; a new algorithm for solving the discussed problems is thus presented. In connection with a special merit function, a generalized monotone line search is used to yield the step size at each iteration. Under mild conditions, we can ensure global and superlinear convergence. Finally, some numerical experiments are carried out to test our algorithm, and the results demonstrate that it is promising.

20.
Primal-dual pairs of semidefinite programs provide a general framework for the theory and algorithms for the trust region subproblem (TRS). This latter problem consists in minimizing a general quadratic function subject to a convex quadratic constraint and, therefore, it is a generalization of the minimum eigenvalue problem. The importance of (TRS) is due to the fact that it provides the step in trust region minimization algorithms. The semidefinite framework is studied as an interesting instance of semidefinite programming as well as a tool for viewing known algorithms and deriving new algorithms for (TRS). In particular, a dual simplex type method is studied that solves (TRS) as a parametric eigenvalue problem. This method uses the Lanczos algorithm for the smallest eigenvalue as a black box. Therefore, the essential cost of the algorithm is the matrix-vector multiplication and, thus, sparsity can be exploited. A primal simplex type method provides steps for the so-called hard case. Extensive numerical tests for large sparse problems are discussed. These tests show that the cost of the algorithm is 1 + α(n) times the cost of finding a minimum eigenvalue using the Lanczos algorithm, where 0 < α(n) < 1 is a fraction which decreases as the dimension increases. Research supported by the Natural Sciences and Engineering Research Council of Canada.
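For small dense instances, (TRS) can also be solved directly from an eigendecomposition rather than via the Lanczos-based simplex scheme described above. A hedged sketch covering the easy case only (no hard-case handling), using bisection on the secular equation ‖(H + λI)⁻¹g‖ = Δ:

```python
import numpy as np

def trs_dense(H, g, delta, tol=1e-10):
    """Solve min 0.5 p'Hp + g'p s.t. ||p|| <= delta for small dense H via
    the secular equation ||(H + lam*I)^{-1} g|| = delta, lam >= max(0, -lam_min).
    Easy case only: assumes g has a component along the smallest eigenvector."""
    lam_all, V = np.linalg.eigh(H)
    gt = V.T @ g
    # unconstrained Newton point, if H is positive definite and it fits in the ball
    if lam_all[0] > 0:
        p = -(V @ (gt / lam_all))
        if np.linalg.norm(p) <= delta:
            return p
    def norm_p(lam):                        # ||p(lam)|| for p(lam) = -(H + lam*I)^{-1} g
        return np.linalg.norm(gt / (lam_all + lam))
    lo = max(0.0, -lam_all[0]) + 1e-12      # norm_p decreases in lam on (lo, inf)
    hi = lo + 1.0
    while norm_p(hi) > delta:               # bracket the root of norm_p(lam) = delta
        hi *= 2.0
    while hi - lo > tol * (1 + hi):         # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_p(mid) > delta else (lo, mid)
    lam = 0.5 * (lo + hi)
    return -(V @ (gt / (lam_all + lam)))

H = np.array([[1.0, 0.0], [0.0, -2.0]])     # indefinite: the constraint is active
g = np.array([1.0, 1.0])
p = trs_dense(H, g, delta=1.0)
print(p, np.linalg.norm(p))                 # ||p|| ≈ 1
```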
