Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
In this paper we consider computing estimates of the norm of the error in the conjugate gradient (CG) algorithm. Formulas were given in a paper by Golub and Meurant (1997). Here, we first prove that these expressions are indeed upper and lower bounds for the A-norm of the error. Moreover, starting from these formulas, we investigate the computation of the l2-norm of the error. Finally, we define an adaptive algorithm in which the approximations of the extreme eigenvalues needed to obtain upper bounds are computed while running CG, leading to an improvement of the upper bounds for the norm of the error. Numerical experiments show the effectiveness of this algorithm. This revised version was published online in August 2006 with corrections to the Cover Date.
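A widely used member of this family of estimates is the computable lower bound that follows from the identity ||x − x_k||_A² − ||x − x_{k+d}||_A² = Σ_{j=k}^{k+d−1} α_j ||r_j||², where α_j and r_j are the CG step lengths and residuals. A minimal sketch (the delay parameter d and the helper name are illustrative choices, not from the paper):

```python
import numpy as np

def cg_with_error_bound(A, b, iters, d=4):
    """Plain CG that also returns a Golub-Meurant-style lower bound on the
    squared A-norm of the error, using a delay of d steps:
        ||x - x_k||_A^2 >= sum_{j=k}^{k+d-1} alpha_j * ||r_j||^2.
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    terms, xs = [], [x.copy()]
    for _ in range(iters):
        Ap = A @ p
        rr = r @ r
        alpha = rr / (p @ Ap)
        terms.append(alpha * rr)          # alpha_j * ||r_j||^2
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr
        p = r + beta * p
        xs.append(x.copy())
    # lower bounds on ||x - x_k||_A^2, valid for k <= iters - d
    bounds = [sum(terms[k:k + d]) for k in range(iters - d + 1)]
    return xs, bounds
```

Since the discarded tail of the sum is nonnegative, each returned sum is, in exact arithmetic, a guaranteed lower bound on the squared A-norm of the error at step k.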

2.
Parallel preconditioned conjugate gradient algorithm on GPU
We propose a parallel implementation of the preconditioned conjugate gradient algorithm on a GPU platform. The preconditioning matrix is an approximate inverse derived from the SSOR preconditioner. Applied through sparse matrix-vector multiplication, the proposed preconditioner is well suited for the massively parallel GPU architecture. Compared to a CPU implementation of the conjugate gradient algorithm, our GPU preconditioned conjugate gradient implementation is up to 10 times faster (8 times faster at worst).
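The access pattern this abstract relies on, preconditioning applied purely through matrix-vector products, fits any approximate inverse. The sketch below is a generic PCG loop, not the authors' GPU code; their SSOR-derived approximate inverse is replaced by a plain function argument, with a Jacobi (diagonal) stand-in as the simplest instance:

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxiter=200):
    """Preconditioned CG. apply_Minv may be any SPD approximate inverse
    applied as a matrix-vector product (the GPU-friendly access pattern
    described in the abstract)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Passing `lambda r: r / np.diag(A)` gives the Jacobi stand-in; an SSOR-based approximate inverse would be supplied the same way.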

3.
Some techniques suitable for the control of the solution error in the preconditioned conjugate gradient method are considered and compared. The estimation can be performed both in the course of the iterations and after their termination. The importance of such techniques follows from the non-existence of a reasonable a priori error estimate for very ill-conditioned linear systems when sufficient information about the right-hand side vector is lacking. Hence, a posteriori estimates are required, which make it possible to verify the quality of the solution obtained for a prescribed right-hand side. The performance of the considered error control procedures is demonstrated using real-world large-scale linear systems arising in computational mechanics. Copyright © 2001 John Wiley & Sons, Ltd.

4.
The conjugate gradient method is one of the most popular iterative methods for computing approximate solutions of linear systems of equations with a symmetric positive definite matrix A. It is generally desirable to terminate the iterations as soon as a sufficiently accurate approximate solution has been computed. This paper discusses known and new methods for computing bounds or estimates of the A-norm of the error in the approximate solutions generated by the conjugate gradient method.

5.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results for a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the classical conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to the CPU time metric it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
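The "spectral gradient" scaling referred to here is the Barzilai-Borwein step length θ_k = (sᵀs)/(sᵀy). A minimal stand-alone spectral-gradient iteration, without the memoryless BFGS preconditioning, Powell restarts, or line search of the letter's full algorithm, can be sketched as:

```python
import numpy as np

def spectral_gradient(grad, x0, iters=500):
    """Barzilai-Borwein (spectral) gradient iteration: the step length
    theta_k = (s^T s) / (s^T y) is the 'spectral gradient' scaling.
    Illustrative only: no safeguards, restarts, or line search."""
    x = x0.astype(float)
    g = grad(x)
    theta = 1.0
    for _ in range(iters):
        x_new = x - theta * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if abs(s @ y) > 1e-16:
            theta = (s @ s) / (s @ y)   # spectral step length
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic f(x) = ½xᵀAx − bᵀx, with grad(x) = Ax − b, this iteration converges to the solution of Ax = b, which is the setting in which the spectral step length was originally derived.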

6.
In their original paper, Golub and Meurant (BIT 37:687–705, 1997) suggest computing bounds for the A-norm of the error in the conjugate gradient (CG) method using Gauss, Gauss-Radau and Gauss-Lobatto quadratures. The quadratures are computed using the (1,1)-entry of the inverse of the corresponding Jacobi matrix (or its rank-one or rank-two modifications). The resulting algorithm, called CGQL, explicitly computes the entries of the Jacobi matrix and its modifications from the CG coefficients. In this paper, we use the fact that CG computes the Cholesky decomposition of the Jacobi matrix, which is given implicitly. For Gauss-Radau and Gauss-Lobatto quadratures, instead of computing the entries of the modified Jacobi matrices, we directly compute the entries of the Cholesky decompositions of the (modified) Jacobi matrices. This leads to simpler formulas in comparison to those used in CGQL.

7.
8.
Alternating least squares (ALS) is often considered the workhorse algorithm for computing the rank‐R canonical tensor approximation, but for certain problems, its convergence can be very slow. The nonlinear conjugate gradient (NCG) method was recently proposed as an alternative to ALS, but the results indicated that NCG is usually not faster than ALS. To improve the convergence speed of NCG, we consider a nonlinearly preconditioned NCG (PNCG) algorithm for computing the rank‐R canonical tensor decomposition. Our approach uses ALS as a nonlinear preconditioner in the NCG algorithm. Alternatively, NCG can be viewed as an acceleration process for ALS. We demonstrate numerically that the convergence acceleration mechanism in PNCG often leads to important pay‐offs for difficult tensor decomposition problems, with convergence that is significantly faster and more robust than for the stand‐alone NCG or ALS algorithms. We consider several approaches for incorporating the nonlinear preconditioner into the NCG algorithm that have been described in the literature previously and have met with success in certain application areas. However, it appears that the nonlinearly PNCG approach has received relatively little attention in the broader community and remains underexplored both theoretically and experimentally. Thus, this paper serves several additional functions, by providing in one place a concise overview of several PNCG variants and their properties that have only been described in a few places scattered throughout the literature, by systematically comparing the performance of these PNCG variants for the tensor decomposition problem, and by drawing further attention to the usefulness of nonlinearly PNCG as a general tool. In addition, we briefly discuss the convergence of the PNCG algorithm. In particular, we obtain a new convergence result for one of the PNCG variants under suitable conditions, building on known convergence results for non‐preconditioned NCG. 
Copyright © 2014 John Wiley & Sons, Ltd.

9.
We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners. This work was supported by the Natural Sciences and Engineering Research Council of Canada, and by the Information Technology Research Center, which is funded by the Province of Ontario.
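For reference, the RCM ordering mentioned at the end is the textbook reverse Cuthill-McKee algorithm: breadth-first search from a low-degree node, visiting neighbors in order of increasing degree, then reversing the order. The sketch below is that baseline, not the authors' anisotropy-aware heuristic:

```python
import numpy as np
from collections import deque

def rcm_order(A):
    """Reverse Cuthill-McKee ordering from the nonzero pattern of a
    symmetric matrix (zero diagonal assumed for simplicity): BFS from a
    low-degree node, neighbors visited by increasing degree, then reverse."""
    n = A.shape[0]
    adj = [np.flatnonzero(A[i]) for i in range(n)]
    deg = np.array([len(a) for a in adj])
    order, seen = [], np.zeros(n, bool)
    for start in np.argsort(deg):          # handles disconnected graphs
        if seen[start]:
            continue
        seen[start] = True
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for v in sorted(adj[u], key=lambda w: deg[w]):
                if not seen[v]:
                    seen[v] = True
                    q.append(v)
    return np.array(order[::-1])

def bandwidth(A):
    """Maximum distance of a nonzero from the diagonal."""
    i, j = np.nonzero(A)
    return int(np.max(np.abs(i - j))) if len(i) else 0
```

On a randomly permuted path graph, for instance, this ordering recovers bandwidth 1, which is the kind of profile reduction that makes incomplete factorizations retain more useful fill.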

10.
In this paper we propose a procedure to construct approximations of the inverse of a class of C^m-differentiable mappings. First of all, we determine in terms of the data a neighbourhood where the inverse mapping is well defined. Then it is proved that the theoretical inverse can be expressed in terms of the solution of a differential equation depending on parameters. Finally, using one-step matrix methods, we construct approximate inverse mappings of a prescribed accuracy.
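The construction can be illustrated in the scalar case: if y = f(x), the inverse satisfies the ODE dx/dy = 1/f'(x(y)), which a one-step method integrates from a known pair (x0, f(x0)). The function names and the choice of classical RK4 below are ours, not the paper's:

```python
import numpy as np

def approx_inverse(f_prime, x0, y0, y_target, steps=400):
    """Approximate f^{-1}(y_target) for a differentiable map f by
    integrating dx/dy = 1 / f'(x) from the known point (x0, y0 = f(x0))
    with classical RK4, a one-step method. Scalar sketch only."""
    h = (y_target - y0) / steps
    x = float(x0)
    g = lambda x: 1.0 / f_prime(x)
    for _ in range(steps):
        k1 = g(x)
        k2 = g(x + 0.5 * h * k1)
        k3 = g(x + 0.5 * h * k2)
        k4 = g(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x
```

For example, inverting f(x) = x³ + x (with f'(x) = 3x² + 1 > 0, so the inverse exists everywhere) from the known pair (1, 2) at y = 10 returns a value close to 2, since f(2) = 10.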

11.
On the rate of convergence of the preconditioned conjugate gradient method
Summary. We derive new estimates for the rate of convergence of the conjugate gradient method by utilizing isolated eigenvalues of parts of the spectrum. We present a new generalized version of an incomplete factorization method and compare the derived estimates of the number of iterations with the number actually found for some elliptic difference equations and for a similar problem with a model empirical distribution function.
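The baseline that estimates of this kind refine is the classical condition-number bound ||e_j||_A ≤ 2((√κ−1)/(√κ+1))^j ||e_0||_A. A small check of CG against that baseline (the helper below is illustrative and does not implement the paper's generalized, spectrum-aware estimate):

```python
import numpy as np

def cg_errors(A, b, iters):
    """Run plain CG and return the A-norm error history alongside the
    classical rate estimate 2 * q^j * ||e_0||_A with
    q = (sqrt(kappa) - 1) / (sqrt(kappa) + 1)."""
    x_star = np.linalg.solve(A, b)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    anorm = lambda e: np.sqrt(e @ (A @ e))
    errs = [anorm(x_star - x)]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        errs.append(anorm(x_star - x))
    lam = np.linalg.eigvalsh(A)
    kappa = lam[-1] / lam[0]
    q = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    bounds = [2 * q**j * errs[0] for j in range(iters + 1)]
    return errs, bounds
```

In practice the actual CG errors sit well below this bound whenever the spectrum has isolated or clustered eigenvalues, which is exactly the gap the paper's sharper estimates aim to close.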

12.
The parallel version of precondition techniques is developed for matrices arising from the Galerkin boundary element method for two-dimensional domains with Dirichlet boundary conditions. Results were obtained for implementations on a transputer network as well as on an nCUBE-2 parallel computer showing that iterative solution methods are very well suited for a MIMD computer. A comparison of numerical results for iterative and direct solution methods is presented and underlines the superiority of iterative methods for large systems.

13.
14.
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei's approach. Since the proposed method is designed based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is the use of available function values in addition to gradient values. It is shown that, for uniformly convex objective functions, the search directions of the method fulfill the sufficient descent condition, which leads to global convergence. Numerical comparisons of the method with an efficient scaled conjugate gradient method proposed by Andrei, made on a set of unconstrained optimization test problems from the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.

15.
Summary. A generalized conjugate gradient algorithm which is invariant to a nonlinear scaling of a strictly convex quadratic function is described. It terminates after at most n steps when applied to scaled quadratic functions f: R^n → R^1 of the form f(x) = h(F(x)), with F(x) strictly convex quadratic and h ∈ C^1(R^1) an arbitrary strictly monotone function. The algorithm does not assume knowledge of h or F, but only of f(x) and its gradient g(x).

16.
In this paper we study and compare some preconditioned conjugate gradient methods for solving large-scale higher-order finite element schemes approximating two- and three-dimensional linear elasticity boundary value problems. The preconditioners discussed in this paper are derived from the hierarchical splitting of the finite element space first proposed by O. Axelsson and I. Gustafsson. We especially focus our attention on the implicit construction of preconditioning operators by means of a fixed-point iteration process, including multigrid techniques. Many numerical experiments confirm the efficiency of these preconditioners in comparison with the classical direct methods most frequently used in practice up to now.

17.
For the ill-posed operator equation Tx = y in Hilbert space, we introduce a modification of the usual conjugate gradient method which minimizes the error, not the residual, at each step. Moreover, the error is minimized over the same finite-dimensional subspace that is associated with the usual method. This work was completed while the author was on leave at the University of Tennessee, Knoxville, Tennessee. Travel support from the Taft Committee and from the University of Tennessee is gratefully acknowledged.

18.
We develop a parametrized family of matrices and use them to test the performance of some preconditioned iterative methods as we vary the asymmetry and stability of the test matrices. The test matrices are based on a simple discretization of a dynamic, two-species, constant-coefficient, reaction-diffusion system of partial differential equations. The reaction coefficients provide natural parameters for varying the properties of the test matrices, which are typical of modelling applications. These matrices are reducible via a red-black ordering, and it is shown that the reduced matrices are M-matrices for a larger range of parameters than the 'unreduced' test matrices. The iterative methods tested are of conjugate gradient type, using incomplete factorization preconditioning. The components of the methods tested are: the acceleration technique (conjugate gradient squared, stabilized biconjugate gradient, orthomin), the level of fill-in of the incomplete factorization preconditioner, the use of the reduced system, and the effect of time-step size reduction (for dynamic simulations). The tests are carried out by extensive sampling in regions of the parameter space. The results appear to confirm observations of other studies using diffusion-convection based tests and, in particular, show that in these instances the performance of the methods is essentially unaffected by asymmetry but is strongly affected by instability.

19.
This paper presents a conjugate gradient method for solving systems of linear inequalities. The method is of dual optimization type and consists of two phases which can be implemented in a common framework. Phase 1 either finds the minimum-norm solution of the system or detects the inconsistency of the system. In the latter event, the method proceeds to Phase 2 in which an approximate least-squares solution to the system is obtained. The method is particularly suitable to large scale problems because it preserves the sparsity structure of the problem. Its efficiency is shown by computational comparisons with an SOR type method.

20.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of the Hestenes-Stiefel and Dai-Yuan choices, i.e. β_k = (1 − θ_k) β_k^HS + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
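The two conjugacy parameters being combined are β_k^HS = (g_{k+1}ᵀ y_k)/(d_kᵀ y_k) and β_k^DY = ||g_{k+1}||²/(d_kᵀ y_k). The sketch below only forms the convex combination for a supplied θ_k; deriving θ_k from the quasi-Newton condition, as the paper does, is omitted, and the clipping to [0, 1] is our simplification:

```python
import numpy as np

def hybrid_beta(g_new, g_old, d, theta):
    """Convex combination of the Hestenes-Stiefel and Dai-Yuan conjugacy
    parameters, beta = (1 - theta) * beta_HS + theta * beta_DY.
    theta is taken as given and clipped to [0, 1]; the paper instead
    derives it from a quasi-Newton condition."""
    y = g_new - g_old
    dy = d @ y
    beta_hs = (g_new @ y) / dy            # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / dy        # Dai-Yuan
    t = min(max(theta, 0.0), 1.0)
    return (1 - t) * beta_hs + t * beta_dy
```

By construction the hybrid parameter always lies between the two pure choices, recovering Hestenes-Stiefel at θ_k = 0 and Dai-Yuan at θ_k = 1.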
