Similar Literature
20 similar documents retrieved.
1.
There are many applications related to singly linearly constrained quadratic programs subject to upper and lower bounds. In this paper, a new algorithm based on secant approximation is provided for the case in which the Hessian matrix is diagonal and positive definite. To deal with the general case where the Hessian is not diagonal, a new efficient projected gradient algorithm is proposed. The basic features of the projected gradient algorithm are: 1) a new formula is used for the stepsize; 2) a recently established adaptive non-monotone line search is incorporated; and 3) the optimal stepsize is determined by quadratic interpolation if the non-monotone line search criterion fails to be satisfied. Numerical experiments on large-scale random test problems and some medium-scale quadratic programs arising in the training of Support Vector Machines demonstrate the usefulness of these algorithms. This work was supported by the EPSRC in the UK (grant no. GR/R87208/01) and by Chinese NSF grants (nos. 10171104 and 40233029).
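For illustration, here is a minimal sketch (not the paper's algorithm) of the core subproblem behind such methods: the Euclidean projection onto the feasible set {x : l <= x <= u, aᵀx = c} of a singly linearly constrained, box-bounded program. It solves a one-dimensional equation in the multiplier by bisection; the paper's secant scheme targets the same equation. All names are illustrative.

```python
# Minimal sketch: Euclidean projection onto {x : l <= x <= u, a^T x = c},
# assuming the set is nonempty. x(lam) = clip(z + lam*a, l, u) and
# phi(lam) = a^T x(lam) - c is piecewise linear and nondecreasing in lam,
# so a bracketing bisection on lam suffices.
import numpy as np

def project(z, a, l, u, c, tol=1e-10, max_iter=100):
    def phi(lam):
        return a @ np.clip(z + lam * a, l, u) - c

    lo, hi = -1.0, 1.0
    while phi(lo) > 0:      # expand the bracket downward
        lo *= 2.0
    while phi(hi) < 0:      # expand the bracket upward
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return np.clip(z + 0.5 * (lo + hi) * a, l, u)
```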

2.
This paper studies subspace properties of trust region methods for unconstrained optimization, assuming the approximate Hessian is updated by quasi-Newton formulae and the initial Hessian approximation is appropriately chosen. It is shown that the trial step obtained by solving the trust region subproblem lies in the subspace spanned by all the gradient vectors computed so far. Thus, the trial step can be defined by minimizing the quasi-Newton quadratic model in that subspace. Based on this observation, some subspace trust region algorithms are proposed and numerical results are reported.

3.
In this paper we view the Barzilai and Borwein (BB) method from a new angle, and present a new adaptive Barzilai and Borwein (NABB) method with a nonmonotone line search for general unconstrained optimization. In the proposed method, the scalar approximation to the Hessian matrix is updated by the Broyden class formula to generate an adaptive stepsize. Notably, the new stepsize is chosen adaptively within the interval bounded by the two well-known BB stepsizes. Moreover, for negative curvature directions, a strategy for the choice of the stepsize is designed to accelerate the convergence rate of the NABB method. Furthermore, we apply the NABB method without any line search to strictly convex quadratic minimization. The numerical experiments show the NABB method is very promising.
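For reference, a hedged sketch of the two classical BB stepsizes that bound the NABB interval. The Broyden-class scalar update and the nonmonotone line search of the abstract are not reproduced; taking the long BB step below is merely a placeholder choice.

```python
# Sketch of a BB gradient method: the two classical stepsizes
# alpha_bb1 = s^T s / s^T y (long) and alpha_bb2 = s^T y / y^T y (short).
# The NABB method of the abstract would pick adaptively inside [bb2, bb1].
import numpy as np

def bb_gradient(grad, x, n_iter=200, alpha0=1e-3):
    g_old, x_old = grad(x), x.copy()
    x = x - alpha0 * g_old                    # plain first step
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:         # stop before s, y vanish
            break
        s, y = x - x_old, g - g_old
        alpha_bb1 = (s @ s) / (s @ y)         # long BB step
        alpha_bb2 = (s @ y) / (y @ y)         # short BB step
        alpha = alpha_bb1                     # placeholder for NABB's choice
        x_old, g_old = x.copy(), g
        x = x - alpha * g
    return x

# Example: strictly convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
x_star = bb_gradient(lambda x: A @ x - b, np.zeros(3))
```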

4.
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization, both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it to the space of positive definite and symmetric matrices, and (b) invert the projected Hessian. The latter operation is computationally expensive. In order to save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27-62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with gradient SF (G-SF) and JN-SF algorithms on two different problems: a simple stochastic function minimization problem and a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both G-SF and JN-SF algorithms in both problem settings. Next we extend the QN-SF algorithm to the case of constrained optimization. In this case too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally we present the proof of convergence for the QN-SF algorithm in both unconstrained and constrained settings.
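As background, a minimal sketch of a Gaussian smoothed-functional (SF) gradient estimator of the kind such schemes build on. The two-sided form below is one common variant and is an assumption here, as are all names; the multi-time-scale stochastic approximation and the BFGS coupling of the abstract are omitted.

```python
# Sketch: the gradient of the beta-smoothed objective is estimated from
# (possibly noisy) function samples perturbed along Gaussian directions.
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=64, rng=np.random.default_rng(0)):
    g = np.zeros_like(x)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)
        # two-sided difference, often used to reduce estimator variance
        g += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2.0 * beta)
    return g / n_samples
```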

5.
A fundamental problem in constrained nonlinear optimization algorithms is the design of a satisfactory stepsize strategy which converges to unity. In this paper, we discuss stepsize strategies for Newton or quasi-Newton algorithms which require the solution of quadratic optimization subproblems. Five stepsize strategies are considered for three different subproblems, and the conditions under which the stepsizes converge to unity are established. It is shown that these conditions depend critically on the convergence of the Hessian approximations used in the algorithms. The stepsize strategies are constructed from basic principles, from which the conditions for unit stepsizes follow. Numerical results are discussed in an Appendix. Paper presented to the XI Symposium on Mathematical Programming, Bonn, Germany, 1982. This work was completed while the author was visiting the European University in Florence where, in particular, Professors Fitoussi and Velupillai provided the opportunity for its completion. The author is grateful to Dr. L. C. W. Dixon for his helpful comments and criticisms on numerous versions of the paper, and to R. G. Becker for programming the algorithms in Section 3 and for helpful discussions concerning these algorithms.
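For concreteness, a minimal sketch of one standard stepsize rule of the kind analyzed here: Armijo backtracking that always tries the unit step first, so that the accepted stepsize settles at unity once the subproblem's Hessian approximations are good enough. The names and constants are illustrative, not the paper's.

```python
# Sketch: Armijo backtracking with unit initial trial step.
# x, d, g are NumPy vectors (iterate, search direction, gradient at x).
def armijo_unit_step(f, x, d, g, c1=1e-4, shrink=0.5, max_backtracks=30):
    t, fx, slope = 1.0, f(x), c1 * (g @ d)
    for _ in range(max_backtracks):
        if f(x + t * d) <= fx + t * slope:   # sufficient decrease holds
            return t
        t *= shrink
    return t
```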

6.
This paper proposes a combined trust region and quasi-Newton algorithm for equality constrained problems. The characteristic of the algorithm is that if the approximate Hessian of the Lagrangian function is positive definite on the null space of the Jacobian of the equality constraints, the quasi-Newton step is chosen; otherwise a trust region step is used. Under the usual convergence assumptions for trust region algorithms, the paper proves global convergence of the combined algorithm.
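A hedged sketch of the switching test the abstract describes: checking positive definiteness of the Lagrangian Hessian approximation on the null space of the constraint Jacobian, assuming A has full row rank m < n. All names are illustrative.

```python
# Sketch: is H positive definite on null(A)? (A: m x n constraint Jacobian)
import numpy as np

def reduced_hessian_is_pd(H, A, tol=1e-12):
    m, n = A.shape
    # Full QR of A^T: the trailing n - m columns of Q span null(A).
    Q, _ = np.linalg.qr(A.T, mode='complete')
    Z = Q[:, m:]                           # n x (n - m) null-space basis
    eigvals = np.linalg.eigvalsh(Z.T @ H @ Z)
    return eigvals.min() > tol             # PD on null(A) -> quasi-Newton step
```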

7.
8.
This paper concerns the memoryless quasi-Newton method, that is, the quasi-Newton method for which the approximation to the inverse of the Hessian is, at each step, updated from the identity matrix. Hence its search direction can be computed without the storage of matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is developed. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. Therefore we propose a memoryless SR1 method that is updated from a positive scaling of the identity, where the scaling factor is derived in such a way that positive definiteness of the updated matrices is preserved and, at the same time, the conditioning of the scaled memoryless SR1 update is improved. Under very mild conditions it is shown that, for strictly convex objective functions, the method is globally convergent with a linear rate of convergence. Numerical results show that the optimally scaled memoryless SR1 method is very encouraging.
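For illustration, a minimal sketch of a memoryless SR1 search direction obtained by updating a scaled identity gamma*I with the latest pair (s, y); the paper's derivation of the scaling factor, which preserves positive definiteness, is not reproduced, so gamma is a placeholder here.

```python
# Sketch: memoryless SR1 direction d = -H g, where H is the SR1 update of
# gamma*I using only s = x_new - x_old and y = g_new - g_old.
import numpy as np

def memoryless_sr1_direction(g, s, y, gamma):
    v = s - gamma * y                 # SR1 correction vector
    denom = v @ y
    if abs(denom) < 1e-12 * np.linalg.norm(v) * np.linalg.norm(y):
        return -gamma * g             # skip the update when the denominator is tiny
    # H = gamma*I + v v^T / (v^T y), so H g needs vector operations only
    return -(gamma * g + v * ((v @ g) / denom))
```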

9.
For the unconstrained minimization of nonconvex functions, this article gives a modified BFGS algorithm. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and to guarantee that the quasi-Newton iteration remains effective. We prove global convergence of the algorithm in conjunction with a general form of line search, and prove the quadratic convergence rate of the algorithm under certain conditions.
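The abstract does not specify the modification, so as a generic illustration only, here is one well-known way to keep a BFGS-type update well defined on nonconvex problems: Powell's damped BFGS. This is a standard device, not necessarily the paper's scheme.

```python
# Sketch: Powell-damped BFGS update of B, replacing y by a convex
# combination r of y and B s so that the curvature condition s^T r > 0 holds.
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    Bs = B @ s
    sBs, sy = s @ Bs, s @ y
    if sy >= theta_min * sBs:
        theta = 1.0                                   # no damping needed
    else:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)  # Powell damping factor
    r = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```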

10.
For the problem of minimizing an unconstrained function, the conjugate-gradient method is shown to be convergent. If the function is uniformly strictly convex, the ultimate rate of convergence is shown to be n-step superlinear. If the Hessian matrix is Lipschitz continuous, the rate of convergence is shown to be n-step quadratic. All results are obtained for the reset version of the method and with a relaxed requirement on the solution of the stepsize problem. In addition to obtaining sharper results, the paper differs from previously published ones in the mode of proof, which contains as a corollary the proof of finiteness of the conjugate-gradient method when applied to a quadratic problem, rather than assuming that result.
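For reference, a hedged sketch of the reset conjugate-gradient scheme analyzed here: the direction is restarted to steepest descent every n inner steps. The Fletcher-Reeves beta is used as a representative choice, and the line search is left abstract; both are assumptions, not details from the paper.

```python
# Sketch: nonlinear CG with periodic reset (restart every n = dim(x) steps).
import numpy as np

def cg_reset(f, grad, line_search, x, n_outer=50):
    n = x.size
    for _ in range(n_outer):
        g = grad(x)
        d = -g                                  # reset: steepest descent
        for _ in range(n):                      # at most n conjugate steps
            t = line_search(f, grad, x, d)      # user-supplied stepsize rule
            x = x + t * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves choice
            d = -g_new + beta * d
            g = g_new
    return x
```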

11.
Quasi-Newton methods based on the symmetric rank-one (SR1) update have been known to be fast and to provide better approximations of the true Hessian than popular rank-two approaches, but these properties are guaranteed only under certain conditions that frequently do not hold. Additionally, SR1 is plagued by the lack of a guarantee of positive definiteness for the Hessian estimate. In this paper, we propose cubic regularization as a remedy that relaxes the conditions required in the convergence proofs, for both speed and accuracy, and provides a positive definite approximation at each step. We show that the n-step convergence property for strictly convex quadratic programs is retained by the proposed approach. Extensive numerical results on unconstrained problems from the CUTEr test set demonstrate the computational efficiency and robustness of the approach.
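For orientation, the generic cubic-regularized model minimized at each step of such methods has the following form (notation illustrative; B_k would be the SR1 approximation and sigma_k > 0 the adaptive regularization weight):

```latex
m_k(s) = f(x_k) + \nabla f(x_k)^{\top} s
       + \tfrac{1}{2}\, s^{\top} B_k s
       + \tfrac{\sigma_k}{3}\, \lVert s \rVert^{3}
```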

12.
We present a quasi-Newton sequential quadratic programming (SQP) algorithm for nonlinear programs in which the Hessian of the Lagrangian function is block-diagonal. Problems with this characteristic frequently arise in the context of optimal control; for example, when a direct multiple shooting parametrization is used. In this article, we describe an implementation of a filter line-search SQP method that computes search directions using an active-set quadratic programming (QP) solver. To take advantage of the block-diagonal structure of the Hessian matrix, each block is approximated separately by quasi-Newton updates. For nonconvex instances that arise, for example, in optimum experimental design control problems, these blocks are often found to be indefinite. In that case, the block-BFGS quasi-Newton update can lead to poor convergence. The novel aspect in this work is the use of SR1 updates in place of BFGS approximations whenever possible. The resulting indefinite QPs necessitate an inertia control mechanism within the sparse Schur-complement factorization that is carried out by the active-set QP solver. This permits an adaptive selection of the Hessian approximation that guarantees sufficient progress towards a stationary point of the problem. Numerical results demonstrate that the proposed approach reduces the number of SQP iterations and the CPU time required for the solution of a set of optimal control problems.
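A hedged sketch of the per-block updating logic the abstract describes: each diagonal block is updated by SR1 whenever the SR1 denominator is numerically safe, with a BFGS fallback when the curvature condition holds. The tolerances below are assumptions, and the inertia-controlled QP machinery is outside this sketch.

```python
# Sketch: block-wise quasi-Newton updating for a block-diagonal Hessian.
import numpy as np

def update_block(B, s, y, sr1_tol=1e-8):
    v = y - B @ s
    if abs(v @ s) >= sr1_tol * np.linalg.norm(v) * np.linalg.norm(s):
        return B + np.outer(v, v) / (v @ s)     # SR1: may be indefinite
    if s @ y > 0:                               # curvature OK -> BFGS fallback
        Bs = B @ s
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
    return B                                    # otherwise skip the update

def update_blocks(blocks, s_parts, y_parts):
    # One (s, y) pair per diagonal block of the Lagrangian Hessian.
    return [update_block(B, s, y) for B, s, y in zip(blocks, s_parts, y_parts)]
```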

13.
The trust region (TR) method is a class of effective methods for optimization. The conic model can be regarded as a generalized quadratic model, and it possesses the good convergence properties of the quadratic model near the minimizer. The Barzilai and Borwein (BB) gradient method is also effective; it can be used for solving large-scale optimization problems, since it avoids the expensive computation and storage of matrices, and the BB stepsize is easy to determine without large computational effort. In this paper, based on the conic trust region framework, we employ the generalized BB stepsize and propose a new nonmonotone adaptive trust region method based on a simple conic model for large-scale unconstrained optimization. Unlike the traditional conic model, the Hessian approximation is a scalar matrix based on the generalized BB stepsize, which results in a simple conic model. By adding the nonmonotone technique and the adaptive technique to the simple conic model, the new method needs less storage and converges faster. The global convergence of the algorithm is established under certain conditions. Numerical results indicate that the new method is effective and attractive for large-scale unconstrained optimization problems.

14.
In Ref. 1, Nocedal and Overton proposed a two-sided projected Hessian updating technique for equality constrained optimization problems. Although a local two-step Q-superlinear rate was proved, global convergence is not assured. In this paper, we suggest a trust-region-type, two-sided, projected quasi-Newton method, which preserves the local two-step superlinear convergence of the original algorithm and also ensures global convergence. The subproblem that we propose is as simple as the one often used when solving unconstrained optimization problems by trust-region strategies and is therefore easy to implement. This research was supported in part by the National Natural Science Foundation of China.

15.
Described here is the structure and theory for a sequential quadratic programming algorithm for solving sparse nonlinear optimization problems. Also provided are the details of a computer implementation of the algorithm along with test results. The algorithm maintains a sparse approximation to the Cholesky factor of the Hessian of the Lagrangian. The solution to the quadratic program generated at each step is obtained by solving a dual quadratic program using a projected conjugate gradient algorithm. An updating procedure is employed that does not destroy sparsity.

16.
In this paper, a new type of stepsize, the approximate optimal stepsize, is introduced for the gradient method to interpret the Barzilai-Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize is presented for the strictly convex quadratic minimization problem. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then use the two well-known BB stepsizes to truncate it, improving the numerical behaviour, and treat the resulting approximate optimal stepsize as the new stepsize for the gradient method, as sketched below. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
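A minimal sketch of the truncation step just described, under the assumption that "truncate" means clamping the approximate optimal stepsize into the interval spanned by the two BB stepsizes; the quadratic model that produces alpha_aos is not reproduced.

```python
# Sketch: clamp an approximate optimal stepsize into the BB interval.
import numpy as np

def truncated_stepsize(alpha_aos, s, y):
    bb1 = (s @ s) / (s @ y)       # long BB stepsize
    bb2 = (s @ y) / (y @ y)       # short BB stepsize
    lo, hi = min(bb1, bb2), max(bb1, bb2)
    return float(np.clip(alpha_aos, lo, hi))
```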

17.
A new diagonal quasi-Newton updating algorithm for unconstrained optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward-finite-difference directional derivatives of the components of the gradient. Under mild classical assumptions, the convergence of the algorithm is proved to be linear. Numerical experiments with 80 unconstrained optimization test problems of different structures and complexities, as well as five applications from the MINPACK-2 collection, show that the suggested algorithm is more efficient and more robust than the quasi-Newton diagonal algorithm retaining only the diagonal elements of the BFGS update, the weak quasi-Newton diagonal algorithm, the quasi-Cauchy diagonal algorithm, the diagonal approximation of the Hessian by the least-change secant updating strategy minimizing the trace of the matrix, the Cauchy algorithm with Oren and Luenberger scaling in its complementary form (i.e. the Barzilai-Borwein algorithm), the steepest descent algorithm, and the classical BFGS algorithm. However, our algorithm is inferior to the limited-memory BFGS algorithm (L-BFGS).
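For illustration, a hedged sketch of a diagonal Hessian approximation built from forward differences of the gradient components. The paper's particular scaling of these directional derivatives is not reproduced, and the coordinate-wise loop below (costing n extra gradient evaluations) is an assumption made for clarity.

```python
# Sketch: diagonal Hessian estimate d_i ~ (g_i(x + h e_i) - g_i(x)) / h.
import numpy as np

def diagonal_hessian(grad, x, h=1e-6):
    g0 = grad(x)
    d = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        d[i] = (grad(x + e)[i] - g0[i]) / h   # i-th diagonal entry
    return np.maximum(d, 1e-8)                # keep the metric positive
```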

18.
This paper extends prior work by the authors on solving nonlinear least squares unconstrained problems using a factorized quasi-Newton technique. To this end we use a primal-dual interior-point algorithm for nonconvex nonlinear programming. The factorized quasi-Newton technique is now applied to the Hessian of the Lagrangian function of the transformed problem, which is based on a logarithmic barrier formulation. We emphasize the importance of establishing and maintaining symmetric quasi-definiteness of the reduced KKT system. The algorithm then tries to choose a step size that reduces a merit function and to select a penalty parameter that ensures descent directions along the iterative process. Computational results are included for a variety of constrained least squares problems, and preliminary numerical testing indicates that the algorithm is robust and efficient in practice.

19.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a nonmonotone variable-metric gradient projection algorithm with a two-stage stepsize is established for solving optimization problems with convex set constraints. Global convergence of the algorithm is proved, as well as a Q-superlinear convergence rate under certain conditions. Numerical results show that the new algorithm is effective and well suited to large-scale problems.

20.
In this work we introduce two new Barzilai and Borwein-like step sizes for the classical gradient method for strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information about the Hessian of the objective function. A convergence analysis of the proposed algorithm is provided. Some numerical experiments are performed in order to compare the efficiency and effectiveness of the proposed methods with similar methods in the literature. Experimentally, it is observed that our proposals accelerate the gradient method at nearly no extra computational cost, which makes them a good alternative for solving large-scale problems.
