Similar Literature
20 similar records found.
1.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
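
As a rough illustration of the idea in this abstract, a spectral (Barzilai-Borwein) steplength applied to a preconditioned direction inside a projected gradient step, the following Python sketch shows one iteration. The names `grad`, `project`, and `prec` are placeholders, and the nonmonotone line search used to guarantee global convergence is omitted.

```python
import numpy as np

def preconditioned_spg_step(x, grad, project, prec=None,
                            lam=1.0, lam_min=1e-10, lam_max=1e10):
    """One projected step with a spectral (Barzilai-Borwein) steplength.

    x       : current iterate
    grad    : gradient callable
    project : projection onto the convex feasible set
    prec    : optional callable applying an approximate inverse Hessian
    lam     : current spectral steplength (updated and returned)
    """
    g = grad(x)
    d = prec(g) if prec is not None else g        # preconditioned direction
    x_new = project(x - lam * d)                  # projected spectral step
    s = x_new - x
    y = grad(x_new) - g
    sy = s @ y
    # Safeguarded spectral update of the steplength for the next iteration.
    lam_new = np.clip((s @ s) / sy, lam_min, lam_max) if sy > 0 else lam_max
    return x_new, lam_new
```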

2.
We present an algorithm for super-scale linearly constrained nonlinear programming (LCNP) based on Newton's method. In large-scale programming, solving the Newton equation at each iteration can be expensive and may not be justified when far from a local solution. For super-scale problems, the truncated Newton method (where an inaccurate solution is computed by using the conjugate-gradient method) is recommended; a diagonal BFGS preconditioning of the gradient is used, so that the number of iterations to solve the equation is reduced. The procedure for updating that preconditioning is described for LCNP when the set of active constraints or the partition of basic, superbasic and nonbasic (structural) variables has changed.

3.
We discuss the application of an augmented conjugate gradient method to the solution of a sequence of linear systems with the same matrix appearing in an iterative process for the solution of scattering problems. The conjugate gradient method applied to the first system generates a Krylov subspace; for the following systems, a modified conjugate gradient method is applied, using orthogonal projections onto this subspace to compute an initial guess and modified descent directions, leading to better convergence. The scattering problem is treated via an Exact Controllability formulation, and a preconditioned conjugate gradient algorithm is introduced. The set of linear systems to be solved is associated with this preconditioning. The efficiency of the method is tested on different 3D acoustic problems. This revised version was published online in August 2006 with corrections to the Cover Date.
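
A minimal sketch of the initial-guess part of this reuse strategy: given a basis `W` of the Krylov subspace retained from solving the first system, the next right-hand side is first projected onto that subspace before ordinary CG takes over. The modified descent directions (the deflation part of the actual method) are not reproduced here; `W` and `solve_with_recycled_basis` are illustrative names, not the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_with_recycled_basis(A, b, W):
    """Solve A x = b by CG, warm-started with the Galerkin projection of the
    solution onto span(W), where the columns of W span a Krylov subspace
    retained from an earlier solve with the same matrix A."""
    AW = A @ W                                   # n-by-k
    x0 = W @ np.linalg.solve(W.T @ AW, W.T @ b)  # x0 = W (W^T A W)^{-1} W^T b
    return cg(A, b, x0=x0)
```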

4.
Numerical solutions of time-dependent and/or nonlinear partial differential equations often require several solutions of a sparse linear system. If this system is factorized, it may not fit into the computer core; if it is solved by an iterative process like the conjugate gradient algorithm, it takes too much computing time. We show that if the small elements of the factorized matrix are deleted, then the resulting operator is an excellent preconditioning operator for the conjugate gradient algorithm. Tests on two problems show that 90% of the main storage space can be saved without increasing the computing time as compared with a direct factorization method.
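
The same drop-small-entries idea can be mimicked with SciPy's threshold-based incomplete LU, used below purely as a stand-in for the dropped-factor preconditioner described in the abstract; the 1D Poisson test matrix is arbitrary and not one of the paper's problems.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, cg, LinearOperator

# Illustrative SPD test matrix (1D Poisson); not one of the paper's problems.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete factorization with a drop tolerance: small entries of the factors
# are discarded, so only a fraction of the full factorization is stored.
ilu = spilu(A, drop_tol=1e-3, fill_factor=5)
M = LinearOperator(A.shape, matvec=ilu.solve)    # preconditioner, M ~ A^{-1}

x, info = cg(A, b, M=M)                          # preconditioned CG
```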

5.
Preconditioning has proved to be a powerful technique for accelerating the rate of convergence of an iterative method. This paper, which is concerned with the conjugate gradient algorithm for large matrix computations, investigates an approximate polynomial preconditioning strategy. The method is particularly attractive for implementation on vector computers.
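
A simple instance of polynomial preconditioning is the truncated Neumann series below: the preconditioner is applied entirely through matrix-vector products, which is what makes the approach attractive on vector (and parallel) hardware. This is a generic sketch, not necessarily the approximate polynomial studied in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def neumann_preconditioner(A, degree=3):
    """Truncated Neumann-series polynomial preconditioner.

    With Jacobi scaling N = I - D^{-1} A, the approximation
        A^{-1} ~ (I + N + ... + N^degree) D^{-1}
    is applied using only matrix-vector products."""
    d_inv = 1.0 / A.diagonal()

    def apply(r):
        z = d_inv * r                 # z_0 = D^{-1} r
        acc = z.copy()
        for _ in range(degree):
            z = z - d_inv * (A @ z)   # z_{k+1} = N z_k
            acc += z
        return acc

    return LinearOperator(A.shape, matvec=apply)

# Illustrative use on a 1D Poisson matrix; not the paper's experiments.
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = cg(A, b, M=neumann_preconditioner(A, degree=3))
```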

6.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results for a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the (classical) conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française Informat. Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to the CPU time metric it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
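
The core of such a method is the direction d = -H g, where H is the scaled memoryless BFGS matrix, i.e. a single BFGS update of θI built from the most recent step and gradient change, with the spectral scaling θ = (sᵀs)/(sᵀy). The following is a hedged sketch of that direction computation, not the authors' published code.

```python
import numpy as np

def scaled_memoryless_bfgs_direction(g, s, y):
    """Search direction d = -H g, where H is the scaled memoryless BFGS matrix
    built from the last step s = x_k - x_{k-1} and the gradient change
    y = g_k - g_{k-1}, with spectral scaling theta = (s^T s)/(s^T y)."""
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g                      # curvature condition fails: fall back to steepest descent
    theta = (s @ s) / sy               # spectral (Barzilai-Borwein) scaling
    # H g for H = theta*I - theta*(s y^T + y s^T)/sy + (1 + theta*y^T y/sy) s s^T/sy
    Hg = (theta * g
          - theta * ((s @ g) * y + (y @ g) * s) / sy
          + (1.0 + theta * (y @ y) / sy) * (s @ g) * s / sy)
    return -Hg
```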

7.
For large systems of linear equations, iterative methods provide attractive solution techniques. We describe the applicability and convergence of iterative methods of Krylov subspace type for an important class of symmetric and indefinite matrix problems, namely augmented (or KKT) systems. Specifically, we consider preconditioned minimum residual methods and discuss indefinite versus positive definite preconditioning. For a natural choice of starting vector we prove that when the definite and indefinite preconditioners are related in the obvious way, MINRES (which is applicable in the case of positive definite preconditioning) and full GMRES (which is applicable in the case of indefinite preconditioning) give residual vectors with identical Euclidean norm at each iteration. Moreover, we show that the convergence of both methods is related to a system of normal equations for which the LSQR algorithm can be employed. As a side result, we give a rare example of a non-trivial normal(1) matrix where the corresponding inner product is explicitly known: a conjugate gradient method therefore exists and can be employed in this case. This work was supported by British Council/German Academic Exchange Service Research Collaboration Project 465 and NATO Collaborative Research Grant CRG 960782.
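
To make the positive definite preconditioning of an augmented system concrete, here is a small SciPy sketch: a toy KKT matrix is solved with MINRES using the standard block-diagonal SPD choice diag(H, B H⁻¹ Bᵀ), which is used here only for illustration; the data are random and the paper's specific preconditioners and experiments are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, splu, LinearOperator

# Toy augmented (KKT) system K = [[H, B^T], [B, 0]]; random, illustrative data.
rng = np.random.default_rng(0)
n, m = 60, 20
H = sp.diags(rng.uniform(1.0, 2.0, n))               # SPD (1,1) block
B = sp.random(m, n, density=0.3, random_state=0)
K = sp.bmat([[H, B.T], [B, None]], format="csc")
rhs = rng.standard_normal(n + m)

# Positive definite block-diagonal preconditioner diag(H, B H^{-1} B^T),
# applied through sparse LU solves of the two blocks.  The small shift on the
# Schur block guards against rank deficiency of the random toy B.
S = (B @ sp.diags(1.0 / H.diagonal()) @ B.T + 1e-8 * sp.eye(m)).tocsc()
lu_H, lu_S = splu(H.tocsc()), splu(S)

def apply_prec(v):
    return np.concatenate([lu_H.solve(v[:n]), lu_S.solve(v[n:])])

x, info = minres(K, rhs, M=LinearOperator(K.shape, matvec=apply_prec))
```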

8.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.

9.
Steepest Descent, CG, and Iterative Regularization of Ill-Posed Problems
The state-of-the-art iterative method for solving large linear systems is the conjugate gradient (CG) algorithm. Theoretical convergence analysis suggests that CG converges more rapidly than steepest descent. This paper argues that steepest descent may be an attractive alternative to CG when solving linear systems arising from the discretization of ill-posed problems. Specifically, it is shown that, for ill-posed problems, steepest descent has more stable convergence behavior than CG, which may be explained by the fact that the filter factors for steepest descent behave much less erratically than those for CG. Moreover, it is shown that, with proper preconditioning, the convergence rate of steepest descent is competitive with that of CG. This revised version was published online in October 2005 with corrections to the Cover Date.
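
For an SPD system, the steepest descent iteration the abstract refers to is only a few lines; for a discrete ill-posed problem the iteration count plays the role of the regularization parameter, and stopping early is what gives the stable behavior contrasted with CG. The sketch below is illustrative only, not the paper's code.

```python
import numpy as np

def steepest_descent(A, b, n_iter=30):
    """Steepest descent for a symmetric positive (semi)definite system A x = b,
    stopped after a fixed number of iterations (early stopping acts as regularization)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        r = b - A @ x
        rr = r @ r
        if rr == 0.0:
            break
        x += rr / (r @ (A @ r)) * r   # exact line search along the residual
    return x
```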

10.
A modification is proposed for the second-order incomplete Cholesky decomposition (IC2). It makes it possible to design a preconditioning procedure for the conjugate gradient method (CGM) with controllable fill-in in the preconditioner. The modified algorithm is used to develop a load-balancing parallel preconditioning for CGM as applied to linear systems with symmetric positive definite matrices. Numerical results obtained using a multiprocessor computer system are presented.

11.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; besides, the amount of storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved to be satisfactory for large-scale problems, is not recommended for very large-scale problems since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since the matrix-vector product is the vector to be approximated; it uses the reduced gradient and information from two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.

12.
A preconditioned conjugate gradient method is applied to finite element discretizations of some nonsymmetric elliptic systems. Mesh-independent superlinear convergence is proved, which extends a similar earlier result from a single equation to systems. The proposed preconditioning method involves decoupled preconditioners, which yields small and parallelizable auxiliary problems.

13.
In practical problems, iterative methods can hardly be used without some acceleration of convergence, commonly called preconditioning, which is typically achieved by incorporating some (incomplete or modified) direct algorithm as a part of the iteration. The effectiveness of preconditioned iterative methods increases when the iteration can be stopped as soon as the desired accuracy is reached. This requires, however, incorporating a proper measure of the achieved accuracy as a part of the computation. The goal of this paper is to describe a simple and numerically reliable estimation of the size of the error in the preconditioned conjugate gradient method. In this way, this paper extends results from [Z. Strakoš and P. Tichy, ETNA, 13 (2002), pp. 56–80] and communicates them to practical users of the preconditioned conjugate gradient method. AMS subject classification (2000): 15A06, 65F10, 65F25, 65G50.
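
The estimate in question has a very simple form: in preconditioned CG, the squared A-norm of the error at step j equals the sum of γ_i r_iᵀz_i over all later steps i ≥ j, so summing a few (say d) of these already-computed scalars gives a guaranteed lower bound. The sketch below, with the placeholder `apply_prec` standing for the application of M⁻¹, records this delayed estimate; it follows the idea of the cited Strakoš-Tichy estimate but is not their code.

```python
import numpy as np

def pcg_with_error_estimate(A, b, apply_prec, delay=4, tol=1e-8, maxiter=500):
    """Preconditioned CG that also records the lower bound
        sum_{i=j}^{j+delay-1} gamma_i * (r_i^T z_i)  <=  ||x - x_j||_A^2
    for the squared A-norm of the error at step j = (current step - delay)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    terms, estimates = [], []
    for _ in range(maxiter):
        Ap = A @ p
        gamma = rz / (p @ Ap)
        x += gamma * p
        r -= gamma * Ap
        terms.append(gamma * rz)                  # gamma_i * r_i^T z_i
        if len(terms) >= delay:
            estimates.append(sum(terms[-delay:])) # bound for step j = len(terms) - delay
        z = apply_prec(r)
        rz_new = r @ z
        if np.sqrt(abs(rz_new)) < tol:
            break
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, estimates
```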

14.
An incomplete factorization method for preconditioning symmetric positive definite matrices is introduced to solve normal equations. The normal equations are formed to solve linear least squares problems. The procedure is based on a block incomplete Cholesky factorization and a multilevel recursive strategy with an approximate Schur complement matrix formed implicitly. A diagonal perturbation strategy is implemented to enhance the robustness of the factorization. The factors obtained are used as a preconditioner for the conjugate gradient method. Numerical experiments are used to show the robustness and efficiency of this preconditioning technique, and to compare it with two other preconditioners.
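
A stripped-down illustration of the normal-equations setting: CG applied matrix-free to AᵀA x = Aᵀb, here with a plain diagonal (Jacobi) preconditioner standing in for the block incomplete Cholesky factorization of the abstract; the data are random and purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Least squares min ||A x - b||_2 via the normal equations A^T A x = A^T b,
# solved matrix-free with CG.
m, n = 2000, 300
rng = np.random.default_rng(1)
A = sp.random(m, n, density=0.01, random_state=1, format="csr")
A = A + sp.diags(np.ones(n), shape=(m, n))        # ensure full column rank
b = rng.standard_normal(m)

normal_op = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x))
diag_AtA = np.asarray(A.multiply(A).sum(axis=0)).ravel()   # diagonal of A^T A
M = LinearOperator((n, n), matvec=lambda r: r / diag_AtA)  # Jacobi preconditioner

x, info = cg(normal_op, A.T @ b, M=M)
```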

15.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.

16.
We propose an optimal computational complexity algorithm for the solution of quadratic programming problems with equality constraints arising from partial differential equations. The algorithm combines a variant of the semi-monotonic augmented Lagrangian (SMALE) method with adaptive precision control and a multigrid preconditioning for the Hessian of the cost function and for the inner product on the space of Lagrange variables. The update rule for the penalty parameter acts as a preconditioning of the constraints. The optimality of the algorithm is theoretically proven and confirmed by numerical experiments for the two-dimensional Stokes problem. Copyright © 2007 John Wiley & Sons, Ltd.

17.
This paper considers specific aspects of implementing an algorithm, based on a reduced gradient method, for solving quadratic programming problems. In the subspace of superbasic variables, minimization is carried out by a conjugate gradient method. Some examples of solving test problems are given.

18.
We propose a simple and effective hybrid (multiplicative) Schwarz preconditioner for solving systems of algebraic equations resulting from the mortar finite element discretization of second-order elliptic problems on nonmatching meshes. The preconditioner is embedded in a variant of the classical preconditioned conjugate gradient (PCG) method for an effective implementation, reducing the cost of computing the matrix-vector multiplication in each iteration of the PCG. In fact, it serves as a framework for the effective implementation of a class of hybrid Schwarz preconditioners. The preconditioners of this class are based on solving a sequence of non-overlapping local subproblems exactly, and the coarse problems either exactly or inexactly (approximately). The classical PCG algorithm is reformulated in order to reuse the results of matrix-vector multiplications that are already available from the preconditioning step, resulting in an algorithm which is cost effective. An analysis of the proposed preconditioner is given, with numerical results showing scalability with respect to the number of subdomains and convergence that is independent of jumps in the coefficients.

19.
We present an algorithm which combines standard active set strategies with the gradient projection method for the solution of quadratic programming problems subject to bounds. We show, in particular, that if the quadratic is bounded below on the feasible set then termination occurs at a stationary point in a finite number of iterations. Moreover, if all stationary points are nondegenerate, termination occurs at a local minimizer. A numerical comparison of the algorithm based on the gradient projection algorithm with a standard active set strategy shows that on mildly degenerate problems the gradient projection algorithm requires considerably fewer iterations and less time than the active set strategy. On nondegenerate problems the number of iterations typically decreases by at least a factor of 10. For strongly degenerate problems, the performance of the gradient projection algorithm deteriorates, but it still performs better than the active set method. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
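
The gradient projection part of such an algorithm is easy to state: for a bound-constrained quadratic, projection onto the feasible box is a componentwise clip. The sketch below shows only this basic projected step with a Cauchy-type steplength; the active-set phase and the exact search along the piecewise-linear projected path used by the cited algorithm are omitted.

```python
import numpy as np

def gradient_projection_qp(Q, c, lower, upper, x0, n_iter=200):
    """Projected gradient iteration for  min 0.5 x^T Q x - c^T x,  l <= x <= u.
    Illustrative sketch of the basic projected step only."""
    x = np.clip(x0, lower, upper)
    for _ in range(n_iter):
        g = Q @ x - c                                 # gradient of the quadratic
        alpha = (g @ g) / max(g @ (Q @ g), 1e-16)     # Cauchy-type steplength
        x_new = np.clip(x - alpha * g, lower, upper)  # projected step onto the box
        if np.linalg.norm(x_new - x) < 1e-10:
            break
        x = x_new
    return x
```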

20.
Parallel preconditioned conjugate gradient algorithm on GPU
We propose a parallel implementation of the preconditioned conjugate gradient algorithm on a GPU platform. The preconditioning matrix is an approximate inverse derived from the SSOR preconditioner. Applied through sparse matrix–vector multiplication, the proposed preconditioner is well suited to the massively parallel GPU architecture. Compared to a CPU implementation of the conjugate gradient algorithm, our GPU preconditioned conjugate gradient implementation is up to 10 times faster (8 times faster at worst).
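
For reference, the SSOR preconditioner from which the approximate inverse is derived can be applied on the CPU with two sparse triangular solves, as in the sketch below; the GPU variant described in the abstract instead expands those solves into an explicit sparse approximate inverse so that applying M⁻¹ reduces to sparse matrix–vector products. The 2D Poisson example matrix is generic, not the authors' test case.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

def ssor_preconditioner(A, omega=1.0):
    """SSOR preconditioner for a symmetric positive definite A = L + D + L^T,
    applied here with two sparse triangular solves (CPU sketch only)."""
    D = sp.diags(A.diagonal())
    L = sp.tril(A, k=-1)
    lower_fac = ((1.0 / omega) * D + L).tocsr()    # D/omega + L
    upper_fac = ((1.0 / omega) * D + L.T).tocsr()  # D/omega + L^T
    scale = (2.0 - omega) / omega

    def apply(r):
        y = spsolve_triangular(lower_fac, r, lower=True)
        return scale * spsolve_triangular(upper_fac, D @ y, lower=False)

    return LinearOperator(A.shape, matvec=apply)

# Illustrative use on a generic 2D Poisson matrix.
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
b = np.ones(A.shape[0])
x, info = cg(A, b, M=ssor_preconditioner(A, omega=1.2))
```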
