Similar Articles
20 similar articles found (search time: 484 ms)
1.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG of Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set consisting of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm. The author was awarded the Romanian Academy Grant 168/2003.
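
The abstract does not reproduce the formulas, so the sketch below is only an illustration of the scaling idea: it computes the spectral (Barzilai-Borwein-style) parameter theta = s^T s / s^T y and combines it with a Perry-style correction term, in the spirit of the Birgin-Martínez direction. The function name and the safeguard threshold are assumptions, not the authors' code.

```python
import numpy as np

def spectral_scaled_direction(g_new, g_old, s):
    """Sketch of a spectral-scaled CG search direction.

    theta is the spectral (Barzilai-Borwein) parameter s^T s / s^T y;
    the direction scales the gradient by theta and adds a Perry-style
    correction along the previous step s. Illustrative only; not the
    paper's exact BFGS-modified update.
    """
    y = g_new - g_old
    sy = s @ y
    if sy <= 1e-12:            # safeguard: fall back to steepest descent
        return -g_new
    theta = (s @ s) / sy       # spectral scaling of the gradient
    beta = (theta * (g_new @ y) - (s @ g_new)) / sy
    return -theta * g_new + beta * s
```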

2.
This paper proposes a generalized gradient projection algorithm with a conjugate gradient parameter for solving nonlinear optimization problems with inequality constraints. The conjugate gradient parameter in the algorithm is easy to obtain, and the initial point may be chosen arbitrarily. Moreover, since the algorithm uses only information from the previous search direction, the amount of computation is reduced. Global convergence of the algorithm is established under fairly weak conditions. Numerical results show that the algorithm is effective.

3.
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm is described which uses an intermediate (and variable) amount of storage and which demonstrates convergence that is also intermediate: generally better than that observed for conjugate gradient algorithms, but not as good as in a quasi-Newton approach. The new algorithm uses a strategy of generating a form of conjugate gradient search direction for most iterations, but it periodically takes a quasi-Newton step to improve convergence. Some theoretical background for the new algorithm was presented in an earlier paper; here we examine the properties of the new algorithm and its implementation. We also present the results of some computational experience. This research was supported by the National Research Council of Canada grant number A-8962.

4.
Conjugate gradient methods are a widely used class of methods for solving large-scale unconstrained optimization problems. A new nonlinear conjugate gradient (CG) method is proposed; theoretical analysis shows that the new algorithm possesses the sufficient descent property under several line search conditions. A global convergence theorem for the new CG algorithm is further proved. Finally, extensive numerical experiments show that, compared with several classical CG methods, the new algorithm is computationally more efficient.

5.
New accelerated nonlinear conjugate gradient algorithms, which are mainly modifications of Dai and Yuan's for unconstrained optimization, are proposed. Under an exact line search, the algorithm reduces to the Dai and Yuan conjugate gradient computational scheme. For inexact line searches the algorithm satisfies the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary in a very unpredictable manner, the algorithms are equipped with an acceleration scheme able to improve their efficiency. Computational results for a set consisting of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai–Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes–Stiefel, Polak–Ribière–Polyak, and CONMIN conjugate gradient algorithms, and the limited-memory quasi-Newton algorithm L-BFGS, and compare favorably with CG_DESCENT. Within this numerical study the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be more robust.
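
For reference, the Dai-Yuan choice of the CG parameter is beta_k = ||g_{k+1}||^2 / (d_k^T y_k) with y_k = g_{k+1} - g_k. The minimal sketch below uses a crude backtracking line search in place of the Wolfe search the theory requires, and it does not reproduce the paper's acceleration scheme; the function name and tolerances are assumptions.

```python
import numpy as np

def dai_yuan_cg(f, grad, x, iters=200, tol=1e-8):
    """Minimal Dai-Yuan CG sketch. A crude backtracking line search
    stands in for the Wolfe conditions required by the theory."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # backtracking line search (illustrative; not a Wolfe search)
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        # Dai-Yuan parameter, with a restart safeguard if d^T y vanishes
        beta = (g_new @ g_new) / dy if abs(dy) > 1e-12 else 0.0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```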

6.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
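
A compact sketch of the standard block conjugate gradient recurrences for A X = B with A symmetric positive definite and B a block of right-hand sides: the scalar alpha and beta of plain CG become small s-by-s matrix systems. A robust implementation would also deflate converged or rank-deficient columns, which is omitted here; the function name is an assumption.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, maxit=500):
    """Block CG for A X = B, A symmetric positive definite.
    Standard recurrences; rank-deficiency handling is omitted."""
    X = np.zeros_like(B, dtype=float)
    R = B - A @ X
    P = R.copy()
    for _ in range(maxit):
        if np.linalg.norm(R) < tol:
            break
        AP = A @ P
        # s-by-s systems replace the scalar alpha/beta of plain CG
        alpha = np.linalg.solve(P.T @ AP, R.T @ R)
        X = X + P @ alpha
        R_new = R - AP @ alpha
        beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
        P = R_new + P @ beta
        R = R_new
    return X
```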

7.
This paper describes an efficient implementation of a nested decomposition algorithm for the multistage stochastic linear programming problem. Many of the computational tricks developed for deterministic staircase problems are adapted to the stochastic setting and their effect on computation times is investigated. The computer code supports an arbitrary number of time periods and various types of random structures for the input data. Numerical results compare the performance of the algorithm to MINOS 5.0.

8.
By choosing the parameter in the memory gradient algorithm for unconstrained programming in a special way, a memory gradient Goldstein-Levitin-Polyak projection descent direction for the objective function is obtained, and a memory gradient Goldstein-Levitin-Polyak projection algorithm is thus constructed for nonlinear programming problems with convex constraints. Global convergence of the algorithm is analyzed under exact one-dimensional step-size search and without assuming boundedness of the iterate sequence, and some rather deep convergence results are obtained. Memory gradient Goldstein-Levitin-Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient algorithms are also given, thereby extending the classical conjugate gradient algorithms to nonlinear programming problems with convex constraints. Numerical examples show that the new algorithms are more effective than the gradient projection algorithm.
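
The Goldstein-Levitin-Polyak idea is to take a gradient-type step and project it back onto the feasible set. As a simplified illustration for box constraints l <= x <= u, where the projection is a componentwise clip (the paper's memory-gradient direction, which blends in previous directions, is not reproduced; function names are assumptions):

```python
import numpy as np

def project_box(x, lower, upper):
    """Projection onto the box [lower, upper] (componentwise clip)."""
    return np.clip(x, lower, upper)

def projected_gradient_step(x, g, alpha, lower, upper):
    """One Goldstein-Levitin-Polyak-style projection step: move along
    the negative gradient, then project back onto the feasible box.
    Plain -g is used here for brevity in place of the paper's
    memory-gradient direction."""
    return project_box(x - alpha * g, lower, upper)
```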

9.
For solving large-scale unconstrained minimization problems, the nonlinear conjugate gradient method is welcome due to its simplicity, low storage, efficiency, and nice convergence properties. Among all the methods in this framework, the conjugate gradient descent algorithm CG_DESCENT is very popular: the directions it generates descend automatically, and this nice property is independent of the line search used. In this paper, we generalize CG_DESCENT with two Barzilai–Borwein steplengths reused cyclically. We show that the resulting algorithm enjoys an attractive sufficient descent property and converges globally under mild conditions. We test the proposed algorithm on a large set of high-dimensional unconstrained problems from the CUTEr library. Numerical comparisons with the state-of-the-art algorithm CG_DESCENT show that the proposed method is effective, competitive, and promising.
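
The two Barzilai-Borwein steplengths referred to are BB1 = s^T s / s^T y and BB2 = s^T y / y^T y, where s is the last step and y the last gradient change. The sketch below computes them and reuses one cyclically inside a plain gradient method; how the paper embeds the cyclic reuse inside the CG_DESCENT direction is not reproduced, and all names and defaults are assumptions.

```python
import numpy as np

def bb_steplengths(s, y, fallback=1e-3):
    """The two Barzilai-Borwein steplengths from the last step
    s = x_k - x_{k-1} and gradient change y = g_k - g_{k-1}."""
    sy = s @ y
    if sy <= 0:                        # curvature safeguard
        return fallback, fallback
    return (s @ s) / sy, sy / (y @ y)  # BB1 (longer), BB2 (shorter)

def cyclic_bb_gradient(grad, x, cycle=4, iters=100):
    """Gradient method reusing a BB steplength cyclically: the
    steplength is recomputed every `cycle` iterations and reused in
    between. Illustrative only; the paper applies this idea to the
    CG_DESCENT direction, not to plain gradient descent."""
    g = grad(x)
    alpha = 1e-3                       # conservative initial step
    for k in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        if k % cycle == cycle - 1:     # refresh the steplength
            alpha, _ = bb_steplengths(x_new - x, g_new - g)
        x, g = x_new, g_new
    return x
```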

10.
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.

11.
This paper describes a new unconstrained optimisation procedure employing conjugate directions and requiring only three n-dimensional vectors. The method has been tested for computational efficiency and stability on a large set of test functions and compared with numerical data from other major methods. The results show that the method is clearly superior to other existing conjugate gradient methods on all problems and can outperform, or is at least as efficient as, quasi-Newton methods on many of the problems tested.

12.
In optimization theory, convex minimization problems have been intensively investigated in the current literature due to their wide range of applications. A major and effective tool for solving such problems is the forward-backward splitting algorithm. However, to guarantee convergence, it is usually assumed that the gradient is Lipschitz continuous and that the stepsize depends on the Lipschitz constant, which is not easy to arrange in practice. In this work, we propose a modified forward-backward splitting method that uses new linesearches to choose suitable stepsizes, and we present its convergence analysis, including complexity bounds, without any Lipschitz continuity assumption on the gradient. Finally, we provide numerical experiments in signal recovery that demonstrate the computational performance of our algorithm in comparison with some well-known methods. Our results show that the proposed algorithm has good convergence behavior and can outperform the compared methods.
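
A sketch of one such linesearch for the model min f(x) + lam * ||x||_1 (common in signal recovery; the prox of the l1 term is soft-thresholding): the stepsize is backtracked until a sufficient-decrease inequality holds, so no Lipschitz constant of grad f is needed. This is a standard backtracking rule offered for illustration, not the paper's specific linesearches; function names and defaults are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_linesearch(f, grad_f, lam, x, alpha0=1.0, rho=0.5, iters=100):
    """Forward-backward splitting with a backtracking linesearch:
    shrink alpha until
        f(z) <= f(x) + <grad f(x), z - x> + ||z - x||^2 / (2 alpha)
    holds, removing the need for a Lipschitz constant of grad f.
    Illustrative; not the paper's exact linesearch rule."""
    for _ in range(iters):
        g = grad_f(x)
        alpha = alpha0
        while True:
            # forward (gradient) step, then backward (prox) step
            z = soft_threshold(x - alpha * g, alpha * lam)
            diff = z - x
            if f(z) <= f(x) + g @ diff + (diff @ diff) / (2 * alpha):
                break
            alpha *= rho
        x = z
    return x
```

For instance, taking f(x) = 0.5 * ||A x - b||^2 with grad f(x) = A.T @ (A x - b) gives a lasso-type recovery problem of the kind used in the paper's experiments.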

13.
《Optimization》2012,61(1-2):63-73
Serial and parallel implementations of the interior dual proximal point algorithm for the solution of large linear programs are described. A preconditioned conjugate gradient method is used to solve the linear system of equations that arises at each interior point iteration. Numerical results for a set of multicommodity network flow problems are given. For larger problems the preconditioned conjugate gradient method outperforms direct solution methods; indeed, very large problems cannot be handled by direct methods at all.
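
For reference, a minimal preconditioned conjugate gradient solver with a diagonal (Jacobi) preconditioner; the interior-point systems of the paper would use a more specialized preconditioner. The function name and the Jacobi choice are assumptions for illustration.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=1000):
    """Preconditioned CG for A x = b, A symmetric positive definite.
    M_inv_diag holds the inverse diagonal of the (Jacobi)
    preconditioner, so applying it is an elementwise product."""
    x = np.zeros(b.shape, dtype=float)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```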

14.
Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which accounts for an inexact line search scheme but reduces to the old condition when the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed, and convergence analysis for both is provided. Our numerical results show that one of the methods is very efficient on the given test problems.
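
The abstract does not state the new condition. A well-known conjugacy condition of exactly this kind, which reduces to the classical d_{k+1}^T y_k = 0 under exact line searches, is d_{k+1}^T y_k = -t s_k^T g_{k+1} with t >= 0; the sketch below derives the resulting CG parameter. This is offered as an illustration of the construction, not as the paper's precise method, and the function name and default t are assumptions.

```python
import numpy as np

def conjugacy_condition_direction(g_new, d, s, y, t=0.1):
    """CG direction from a conjugacy condition of the form
    d_{k+1}^T y_k = -t s_k^T g_{k+1}: writing d_{k+1} = -g_{k+1} +
    beta * d_k and solving for beta gives the formula below. When the
    line search is exact, s_k^T g_{k+1} = 0 and the classical
    condition d_{k+1}^T y_k = 0 is recovered."""
    dy = d @ y
    if abs(dy) < 1e-12:          # safeguard: restart along -g
        return -g_new
    beta = (g_new @ y - t * (g_new @ s)) / dy
    return -g_new + beta * d
```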

15.
A class of globally convergent conjugate gradient methods
Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large-scale ones. However, unlike quasi-Newton methods, conjugate gradient methods have usually been analyzed individually. In this paper, we propose a class of conjugate gradient methods that can be regarded as a kind of convex combination of the Fletcher–Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools for a general method whose scalar β_k has the form φ_k/φ_{k-1}. Consequently, the whole class of conjugate gradient methods can be analyzed uniformly.
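
The Fletcher-Reeves method is the instance φ_k = ||g_k||^2 of this unified form. A minimal sketch of the update (function name assumed; the paper's specific admissible choices of φ_k are not reproduced):

```python
import numpy as np

def unified_cg_direction(g_new, d, phi_new, phi_old):
    """CG direction with beta_k = phi_k / phi_{k-1}. Taking
    phi_k = ||g_k||^2 recovers Fletcher-Reeves; other positive
    choices of phi_k give other members of the family."""
    beta = phi_new / phi_old
    return -g_new + beta * d

# Fletcher-Reeves as the special case phi_k = ||g_k||^2:
# d_new = unified_cg_direction(g_new, d, g_new @ g_new, g_old @ g_old)
```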

16.
Robinson has proposed the bundle-based decomposition algorithm to solve a class of structured large-scale convex optimization problems. In this method, the original problem is transformed (by dualization) into an unconstrained nonsmooth concave optimization problem, which is in turn solved by a modified bundle method. In this paper, we give a posteriori error estimates on the approximate primal optimal solution and on the duality gap. We describe an implementation and present computational experience with a special case of this class of problems, namely, block-angular linear programming problems. We observe that the method is efficient in obtaining approximate optimal solutions and compares favorably with MINOS and an advanced implementation of the Dantzig-Wolfe decomposition method.

17.
The conventional function-space algorithms, due to Di Pillo and others, for minimizing the penalized cost functional of an optimal control problem with linear dynamics and an integral quadratic cost, though falling within the framework of the conjugate gradient method, are difficult to apply computationally. The difficulty arises principally because the algorithm imposes a number of stringent requirements on the minimization procedure to ensure convergence, and such computations are very cumbersome to carry out numerically. To circumvent this major numerical drawback, we construct a control operator associated with this class of problems and use our explicit knowledge of the operator to devise an extended conjugate gradient algorithm for solving this family of problems. Furthermore, we establish some functional inequalities obtained using knowledge of the control operator.

18.
Many important large-scale linear programs with special structure admit special computational procedures that are more efficient than the ordinary procedures of general-purpose methods. Problems, and solvers for them that take advantage of multiple updates of the basis in a dual simplex type method, are presented. Computational results obtained with the new algorithm and with the standard code MINOS on test problems are compared and analyzed. It is shown that the amount of work needed to reach the optimal solution can be reduced by the new algorithm.

19.
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to nonlinear eigenvalue problems with very large sparse ill-conditioned matrices monotonically depending on the spectral parameter. To compute the smallest eigenvalue of large-scale matrix nonlinear eigenvalue problems, we suggest preconditioned iterative methods: preconditioned simple iteration method, preconditioned steepest descent method, and preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence and derive grid-independent error estimates for these methods. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a model problem.
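
As a simplified illustration of the preconditioned steepest descent idea in the linear special case A x = lambda x (the paper treats matrices depending nonlinearly on the spectral parameter, which is not reproduced here), the sketch below applies Rayleigh-Ritz on the two-dimensional subspace spanned by the iterate and the preconditioned residual, using only matrix-vector and preconditioner-vector products. Function names are assumptions.

```python
import numpy as np

def preconditioned_sd_smallest_eig(A, M_inv, x, iters=200, tol=1e-10):
    """Preconditioned steepest descent for the smallest eigenvalue of
    a symmetric matrix A. M_inv is a callable applying the
    preconditioner, e.g. M_inv = lambda r: r / np.diag(A) (Jacobi)."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ (A @ x)                  # Rayleigh quotient
        r = A @ x - rho * x                # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        w = M_inv(r)                       # preconditioned residual
        V, _ = np.linalg.qr(np.column_stack([x, w]))  # orthonormal basis
        H = V.T @ A @ V                    # 2x2 projected problem
        vals, vecs = np.linalg.eigh(H)
        x = V @ vecs[:, 0]                 # Ritz vector, smallest value
    return x @ (A @ x), x
```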

20.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin, E. and Martínez, J.M. (2001, A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry, J.M. (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set consisting of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN of Shanno, D.F. and Phua, K.H. (1976, Algorithm 500, Minimization of unconstrained multivariate functions. ACM Transactions on Mathematical Software, 2, 87–94).
