Similar Documents
Found 20 similar documents (search time: 296 ms)
1.
《Optimization》2012,61(10):1631-1648
ABSTRACT

In this paper, we develop a three-term conjugate gradient method involving a spectral quotient, which always satisfies the famous Dai-Liao conjugacy condition and the quasi-Newton secant equation, independently of any line search. This new three-term conjugate gradient method can be regarded as a variant of the memoryless Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method with respect to the spectral quotient. By combining this method with the projection technique proposed by Solodov and Svaiter in 1998, we establish a derivative-free three-term projection algorithm for large-scale nonlinear monotone systems of equations. We prove the global convergence of the algorithm and obtain an R-linear convergence rate under some mild conditions. Numerical results show that our projection algorithm is effective and robust, and compares favourably with the TTDFP algorithm proposed by Liu and Li [A three-term derivative-free projection method for nonlinear monotone system of equations. Calcolo. 2016;53:427–450].
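For orientation (the abstract does not reproduce its formulas), the Dai–Liao conjugacy condition, the secant equation, and the Solodov–Svaiter projection step are usually written as below, with s_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k, and F the monotone mapping; the paper's particular parameter choices may differ:

```latex
% Dai--Liao conjugacy condition (parameter t > 0) and the quasi-Newton secant equation:
d_{k+1}^{\top} y_k = -t\, g_{k+1}^{\top} s_k , \qquad B_{k+1} s_k = y_k .
% Solodov--Svaiter hyperplane projection: from a trial point z_k = x_k + \alpha_k d_k,
x_{k+1} = x_k - \frac{F(z_k)^{\top} (x_k - z_k)}{\lVert F(z_k) \rVert^{2}}\, F(z_k) .
```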

2.
This paper surveys some of the existing approaches to quasi-Newton methods and introduces a new way of constructing inverse Hessian approximations for such algorithms. This new approach is based on restricting Newton's method to subspaces over which the inverse Hessian is assumed to be known, while expanding this subspace using gradient information. It is shown that this approach can lead to some well-known formulas for updating the inverse Hessian approximation. Deriving such updates through this approach provides new understanding of these formulas and their relation to the pseudo-Newton-Raphson algorithm.
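As context for the kind of update such a derivation can recover (the abstract names none explicitly), the classical BFGS inverse-Hessian update reads:

```latex
H_{k+1} = \left(I - \rho_k\, s_k y_k^{\top}\right) H_k \left(I - \rho_k\, y_k s_k^{\top}\right) + \rho_k\, s_k s_k^{\top},
\qquad \rho_k = \frac{1}{y_k^{\top} s_k} .
```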

3.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG by Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set consisting of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm.
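The spectral scaling parameter referred to is, in the usual Barzilai–Borwein sense (the anticipative variant involving function values is specific to the paper and not reproduced here):

```latex
\theta_k = \frac{s_k^{\top} s_k}{s_k^{\top} y_k},
\qquad s_k = x_{k+1} - x_k , \quad y_k = g_{k+1} - g_k .
```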

4.
A Diagonal Sparse Quasi-Newton Method for Unconstrained Optimization Problems
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm uses an Armijo inexact line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, so that the storage and work required to compute the search direction are markedly reduced; this offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, the global convergence and linear convergence rate of the algorithm are proved, and its superlinear convergence properties are analyzed. Numerical experiments show that the algorithm is more effective than the conjugate gradient method and is well suited to solving large-scale unconstrained optimization problems.
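The paper's own diagonal update rule is not given in the abstract; the following is only a minimal sketch of the two ingredients it names, an Armijo backtracking line search and a search direction from a diagonal Hessian approximation stored as a vector. The names `armijo` and `D` are illustrative:

```python
import numpy as np

def armijo(f, x, d, gTd, sigma=1e-4, beta=0.5, max_backtracks=50):
    """Armijo backtracking: accept the first t with
    f(x + t d) <= f(x) + sigma * t * g^T d."""
    fx, t = f(x), 1.0
    for _ in range(max_backtracks):
        if f(x + t * d) <= fx + sigma * t * gTd:
            break
        t *= beta
    return t

def diagonal_qn_direction(g, D):
    # D is the diagonal of the Hessian approximation, stored as a length-n
    # vector, so the direction d = -D^{-1} g needs O(n) storage and work
    # instead of the O(n^2) of a dense quasi-Newton matrix.
    return -g / D
```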

5.
A new adaptive subspace minimization three-term conjugate gradient algorithm with nonmonotone line search is introduced and analyzed in this paper. The search directions are computed by minimizing a quadratic approximation of the objective function on special subspaces, and we also propose an adaptive rule for choosing different search directions at each iteration. We obtain the significant conclusion that each choice of the search direction satisfies the sufficient descent condition. With the nonmonotone line search used, we prove that the new algorithm is globally convergent for general nonlinear functions under some mild assumptions. Numerical experiments show that the proposed algorithm is promising for the given test problem set.
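For reference, the sufficient descent condition, and the classical Grippo–Lampariello–Lucidi form of nonmonotone Armijo condition (the paper's own rule may differ):

```latex
% Sufficient descent (c > 0 a constant):
g_k^{\top} d_k \le -c\, \lVert g_k \rVert^{2} .
% Nonmonotone (GLL-type) line search condition:
f(x_k + \alpha_k d_k) \le \max_{0 \le j \le m_k} f(x_{k-j}) + \delta\, \alpha_k\, g_k^{\top} d_k .
```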

6.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction, simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reductions. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.
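A minimal sketch of the "keep the lowest trial point" step the abstract describes, with the quasi-Newton updating formulas and the line search assumed given (`line_search` is a placeholder, and a thread pool stands in for the parallel processors):

```python
from concurrent.futures import ThreadPoolExecutor

def lowest_trial_point(f, x, directions, line_search):
    """Search each quasi-Newton direction in parallel and keep the trial
    point with the smallest function value; x and each d are numpy arrays."""
    def trial(d):
        t = line_search(f, x, d)   # step length along this direction
        z = x + t * d
        return f(z), z
    with ThreadPoolExecutor() as pool:
        trials = list(pool.map(trial, directions))
    return min(trials, key=lambda p: p[0])[1]
```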

7.
Both conjugate gradient and quasi-Newton methods are quite successful at minimizing smooth nonlinear functions of several variables, and each has its advantages. In particular, conjugate gradient methods require much less storage to implement than a quasi-Newton code and therefore find application when storage limitations occur. They are, however, slower, so there have recently been attempts to combine CG and QN algorithms so as to obtain an algorithm with good convergence properties and low storage requirements. One such method is the code CONMIN due to Shanno and Phua; it has proven quite successful but it has one limitation. It has no middle ground, in that it either operates as a quasi-Newton code using O(n^2) storage locations, or as a conjugate gradient code using 7n locations, but it cannot take advantage of the not unusual situation where more than 7n locations are available, but a quasi-Newton code requires an excessive amount of storage. In this paper we present a way of looking at conjugate gradient algorithms which was in fact given by Shanno and Phua but which we carry further, emphasize and clarify. This applies in particular to Beale's 3-term recurrence relation. Using this point of view, we develop a new combined CG-QN algorithm which can use whatever storage is available; CONMIN occurs as a special case. We present numerical results to demonstrate that the new algorithm is never worse than CONMIN and that it is almost always better if even a small amount of extra storage is provided.
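Beale's 3-term recurrence, to which this point of view applies, is usually stated as follows, with d_t the direction at the most recent restart and y_i = g_{i+1} − g_i (standard notation, not taken from the paper):

```latex
d_{k+1} = -g_{k+1} + \beta_k d_k + \gamma_k d_t ,
\qquad
\beta_k = \frac{g_{k+1}^{\top} y_k}{d_k^{\top} y_k},
\quad
\gamma_k = \frac{g_{k+1}^{\top} y_t}{d_t^{\top} y_t} .
```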

8.
Interior Point algorithms have become a very successful tool for solving large-scale linear programming problems. The Dual Affine algorithm is one of the Interior Point algorithms implemented in the computer program OB1. It is a good candidate for implementation on a parallel computer because it is very computing-intensive. A parallel Dual Affine algorithm is presented which is suitable for a parallel computer with a distributed memory. The algorithm obtains its speedup from parallel sparse linear algebra computations such as Cholesky factorisation, matrix multiplication, and triangular system solving, which form the bulk of the computing work. Efficient algorithms based on the grid distribution of matrices are presented for each of these computations. The algorithm is implemented in occam 2 on a square mesh of transputers. The resulting parallel program is connected to the sequential Fortran 77 program OB1, which performs the preprocessing and the postprocessing. Experimental results on a mesh of 400 transputers are given for a test set of seven realistic planning and scheduling problems from Shell and seven problems from the NETLIB LP collection; the results show a speedup of 88 for the largest problem.
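The bulk of the work in an affine-scaling interior point iteration is the solution of a normal-equations system of the generic form below via sparse Cholesky factorisation, which is what the grid-distributed kernels parallelise (notation generic, not taken from the paper):

```latex
\left(A\, D^{2} A^{\top}\right) u = r ,
\qquad D \ \text{diagonal, refreshed at every interior point iteration.}
```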

9.
This paper considers the problem of minimizing a function f(x) when linear equality constraints on x could be present and when an algorithm is to be used which may employ both conjugate gradient and quasi-Newton steps. The conjugate gradient algorithm is examined when a preconditioner is used which may be only positive semi-definite. Quite a general theorem is given about the equivalence between conjugate gradient steps and quasi-Newton steps which includes several previously published results as special cases. Some results are also given for the case where some line searches are inexact.
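In the notation usually used for such equivalence results, a preconditioned conjugate gradient step with preconditioner H (here possibly only positive semi-definite) and exact line searches takes the form below; quasi-Newton steps correspond to particular choices of H:

```latex
d_{k+1} = -H g_{k+1} + \beta_k d_k ,
\qquad
\beta_k = \frac{g_{k+1}^{\top} H g_{k+1}}{g_k^{\top} H g_k} .
```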

10.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry's (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set consisting of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including: the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN (Shanno, D.F. and Phua, K.H., 1976, Algorithm 500: Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software, 2, 87–94).

11.
A Comparative Study of Four Unconstrained Optimization Algorithms
From the standpoint of numerical experiments, the conjugate gradient method, the BFGS quasi-Newton method, the DFP quasi-Newton method, and the truncated Newton method are compared by solving three test problems (one of which is constructed with an adjustable problem size). Analysis of the test results shows that the truncated Newton method has an advantage in solving large-scale optimization problems, which provides a useful reference for research on large-scale optimization algorithms.
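A minimal sketch of this kind of comparison, using SciPy's built-in methods on the extended Rosenbrock function with an adjustable size n; SciPy ships no DFP implementation, so only three of the four methods appear, with Newton-CG playing the role of a truncated Newton method:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

n = 100                      # adjustable problem size
x0 = np.zeros(n)
for method in ("CG", "BFGS", "Newton-CG"):
    res = minimize(rosen, x0, jac=rosen_der, method=method)
    print(f"{method:10s} iterations={res.nit:4d} f*={res.fun:.2e}")
```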

12.
The discrete solution of nonlinear system problems that leads to a series of linear problems associated with non-invariant large-scale sparse symmetric positive definite matrices is considered herein. Each linear problem is solved iteratively by a conjugate gradient method. We introduce in this paper new solvers (IRKS, GIRKS and D-GIRKS) that rely on an iterative reuse of Krylov subspaces associated with previously solved linear problems. Numerical assessments are provided on large-scale engineering applications. Considerations related to parallel supercomputing are also addressed.
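Not the paper's IRKS/GIRKS algorithms themselves, but a minimal sketch of the underlying idea: reuse a stored orthonormalised basis W of directions from earlier Krylov solves to build a Galerkin starting guess, and let CG resolve the remainder. Names are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import cg

def cg_with_reused_subspace(A, b, W):
    """Galerkin projection onto span(W), where W holds directions kept from
    previously solved systems with nearby matrices, then standard CG from
    that improved starting point. For SPD A, x0 minimises the A-norm error
    over span(W)."""
    AW = A @ W
    x0 = W @ np.linalg.solve(W.T @ AW, W.T @ b)
    x, info = cg(A, b, x0=x0)
    return x
```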

13.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O.L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute the decomposed subproblems among parallel processors. It is shown that if all decomposed subproblems are uncoupled from each other, they can be solved independently; otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms can save processor time, particularly for medium- and large-scale problems. Up to six parallel processors are connected by Ethernet networks to solve four large-scale minimization problems. The results are compared with those obtained by using sequential algorithms run on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline) and a new parallel architecture for such parallel training are also presented.
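A minimal sketch of the decomposition idea for the uncoupled case only: each subproblem minimizes the objective over its own index block with the remaining variables frozen, and the results are merged. Coupled problems need the synchronization steps of the PVT/PGD framework, omitted here; a thread pool stands in for the parallel processors:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.optimize import minimize

def block_minimize(f, x, blocks):
    """blocks is a list of index arrays partitioning range(len(x));
    each block is minimized independently with the others held fixed."""
    def solve_block(idx):
        def sub(z):
            y = x.copy()
            y[idx] = z
            return f(y)
        return idx, minimize(sub, x[idx]).x
    out = x.copy()
    with ThreadPoolExecutor() as pool:
        for idx, z in pool.map(solve_block, blocks):
            out[idx] = z
    return out
```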

14.
This paper is devoted to globally convergent methods for solving large sparse systems of nonlinear equations with an inexact approximation of the Jacobian matrix. These methods include difference versions of the Newton method and various quasi-Newton methods. We propose a class of trust region methods together with a proof of their global convergence and describe an implementable globally convergent algorithm which can be used as a realization of these methods. Considerable attention is concentrated on the application of conjugate gradient-type iterative methods to the solution of linear subproblems. We prove that both the GMRES and the smoothed CGS well-preconditioned methods can be used for the construction of globally convergent trust region methods. The efficiency of our algorithm is demonstrated computationally by using a large collection of sparse test problems.
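The linear subproblems referred to arise from the standard trust region model: at each iterate the step d_k approximately solves the following, with B_k built from the inexact Jacobian approximation (notation standard, not the paper's):

```latex
\min_{\lVert d \rVert \le \Delta_k} \; g_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d .
```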

15.
We wish to examine the conjugate gradient and quasi-Newton minimization algorithms. A relation noted by Nazareth is extended to an algorithm in which conjugate gradient and quasi-Newton search directions occur together and which can be interpreted as a conjugate gradient algorithm with a changing metric.

16.
In this paper, we present an interior-point algorithm for large and sparse convex quadratic programming problems with bound constraints. The algorithm is based on the potential reduction method and the use of iterative techniques to solve the linear system arising at each iteration. The global convergence properties of the potential reduction method are reassessed in order to take into account the inexact solution of the inner system. We describe the iterative solver, based on the conjugate gradient method with a limited-memory incomplete Cholesky factorization as preconditioner. Furthermore, we discuss some adaptive strategies for the fill-in and accuracy requirements that we use in solving the linear systems in order to avoid unnecessary inner iterations when the iterates are far from the solution. Finally, we present the results of numerical experiments carried out to verify the effectiveness of the proposed strategies. We consider randomly generated sparse problems without a special structure. Also, we compare the proposed algorithm with the MOSEK solver.
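A minimal sketch of CG on a sparse SPD system with an incomplete-factorization preconditioner. SciPy provides ILU (`spilu`) rather than the limited-memory incomplete Cholesky used in the paper, so ILU stands in here; the 1D Laplacian is a stand-in test matrix:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # SPD test matrix
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)        # incomplete factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)          # preconditioner action
x, info = spla.cg(A, b, M=M)                               # preconditioned CG
```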

17.
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell–Hestenes–Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
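The Powell–Hestenes–Rockafellar Augmented Lagrangian whose structure the preconditioners exploit is usually written, for constraints g_i(x) ≤ 0, multipliers λ_i ≥ 0 and penalty parameter ρ, as:

```latex
L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2\rho}\sum_{i}\Big(\big[\max\{0,\; \lambda_i + \rho\, g_i(x)\}\big]^{2} - \lambda_i^{2}\Big).
```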

18.
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm will be described which uses an intermediate (and variable) amount of storage and which demonstrates convergence which is also intermediate, that is, generally better than that observed for conjugate gradient algorithms but not so good as in a quasi-Newton approach. The new algorithm uses a strategy of generating a form of conjugate gradient search direction for most iterations, but it periodically uses a quasi-Newton step to improve the convergence. Some theoretical background for the new algorithm has been presented in an earlier paper; here we examine properties of the new algorithm and its implementation. We also present the results of some computational experience.

19.
The paper compares a factorized sparse quasi-Newton update of Goldfarb with a nonfactorized BFGS sparse update of Shanno on a series of test problems, with numerical results strongly favoring the unfactorized update. An analysis of Goldfarb's method is given to explain the poor numerical performance. Two specific conjugate gradient methods for solving the required systems of linear equations with the unfactorized update are described and tested.

20.
《Optimization》2012,61(12):2229-2246
ABSTRACT

The secant (quasi-Newton) equation plays one of the most important roles in finding an optimal solution in nonlinear optimization. Curvature information must satisfy the usual secant equation to ensure positive definiteness of the Hessian approximation. In this work, we present a new diagonal updating scheme that improves the Hessian approximation via a modified weak secant equation for the diagonal quasi-Newton (DQN) method. Gradient and function evaluations are utilized to obtain a new weak secant equation and achieve higher-order accuracy in the curvature information of the proposed method. Modified DQN methods based on the modified weak secant equation are globally convergent. Extended numerical results indicate the advantages of the modified DQN methods over the usual ones and over some classical conjugate gradient methods.
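For reference, the full secant equation and the weak secant (quasi-Cauchy) condition that diagonal quasi-Newton updates are typically asked to satisfy are shown below; the paper's modification, which folds in function values, is not reproduced here:

```latex
B_{k+1} s_k = y_k \quad \text{(full secant)},
\qquad
s_k^{\top} B_{k+1} s_k = s_k^{\top} y_k \quad \text{(weak secant)}.
```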
