Similar Articles
20 similar articles found.
1.
Linear systems of the form Ax = b, where the matrix A is symmetric and positive definite, often arise from the discretization of elliptic partial differential equations. A very successful method for solving these linear systems is the preconditioned conjugate gradient method. In this paper, we study parallel preconditioners for the conjugate gradient method based on the block two-stage iterative methods. Sufficient conditions for the validity of these preconditioners are given. Computational results of these preconditioned conjugate gradient methods on two parallel computing systems are presented.
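As a sketch of the kind of preconditioned conjugate gradient iteration discussed above, the following Python/NumPy fragment implements textbook PCG; the paper's block two-stage preconditioner is replaced here by a simple Jacobi (diagonal) preconditioner, and the test matrix is an illustrative 1D Laplacian, not the paper's setup.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned CG for SPD A; M_inv applies the preconditioner's inverse."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Laplacian: an SPD model problem from an elliptic PDE discretization
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))   # Jacobi preconditioner
```

Any symmetric positive definite approximation of A can be plugged in through `M_inv`; the closure interface keeps the solver independent of how the preconditioner is applied.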

2.
Semi-Conjugate Direction Methods for Real Positive Definite Systems
In this preliminary work, left and right conjugate direction vectors are defined for nonsymmetric, nonsingular matrices A and some properties of these vectors are studied. A left conjugate direction (LCD) method for solving nonsymmetric systems of linear equations is proposed. The method has no breakdown for real positive definite systems. The method reduces to the usual conjugate gradient method when A is symmetric positive definite. A finite termination property of the semi-conjugate direction method is shown, providing a new simple proof of the finite termination property of conjugate gradient methods. The new method is well defined for all nonsingular M-matrices. Some techniques for overcoming breakdown are suggested for general nonsymmetric A. The connection between the semi-conjugate direction method and LU decomposition is established. The semi-conjugate direction method is successfully applied to solve some sample linear systems arising from linear partial differential equations, with attractive convergence rates. Some numerical experiments show the benefits of this method in comparison to well-known methods. This revised version was published online in July 2006 with corrections to the Cover Date.

3.
A class of iterative methods is presented for the solution of systems of linear equations Ax = b, where A is a general m × n matrix. The methods are based on a development as a continued fraction of the inner product (r, r), where r = b − Ax is the residual. The methods as defined are quite general and include some well-known methods such as the minimal residual conjugate gradient method with one step.

4.
Iterative methods applied to the normal equations A^T Ax = A^T b are sometimes used for solving large sparse linear least-squares problems. However, when the matrix is rank-deficient, many methods, although convergent, fail to produce the unique solution of minimal Euclidean norm. Examples of such methods are the Jacobi and SOR methods as well as the preconditioned conjugate gradient algorithm. We analyze here an iterative scheme that overcomes this difficulty for the case of stationary iterative methods. The scheme combines two stationary iterative methods: the first produces any least-squares solution, whereas the second produces the minimum-norm solution to a consistent system. This work was supported by the Swedish Research Council for Engineering Sciences, TFR.
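The two-step idea (first reach any least-squares solution, then map it to the minimum-norm one) can be checked numerically. The sketch below uses pseudoinverses in place of the paper's stationary iterations, and the rank-deficient matrix is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4)) @ rng.standard_normal((4, 6))  # 8x6, rank 4
b = rng.standard_normal(8)

x_min = np.linalg.pinv(A) @ b          # minimum-norm least-squares solution
null_dir = np.linalg.svd(A)[2][-1]     # a direction in the null space of A
x_any = x_min + 2.0 * null_dir         # still a least-squares solution of Ax = b

# Step 2: any LS solution is mapped to the minimum-norm one by projecting
# it onto the row space of A, since x_min = A^+ A x_any and A^+ = A^T (A A^T)^+
x_proj = A.T @ (np.linalg.pinv(A @ A.T) @ (A @ x_any))
```

The projection removes exactly the null-space component that makes `x_any` non-minimal, which is what the second stationary iteration accomplishes in the paper's scheme.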

5.
A class of preconditioning methods depending on a relaxation parameter is presented for the solution of large linear systems of equations Ax = b, where A is a symmetric positive definite matrix. The methods are based on an incomplete factorization of the matrix A and include both pointwise and blockwise factorization. We study the dependence of the rate of convergence of the preconditioned conjugate gradient method on the distribution of eigenvalues of C^{-1}A, where C is the preconditioning matrix. We also show graphic representations of the eigenvalues and present numerical tests of the methods.
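As a rough illustration of how a relaxation-parameter-dependent preconditioner compresses the spectrum of C^{-1}A, the sketch below uses the classical SSOR preconditioner on a 1D Laplacian (a textbook stand-in, not the paper's incomplete factorization) and compares extreme-eigenvalue ratios:

```python
import numpy as np

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian, SPD
D = np.diag(np.diag(A))
L = np.tril(A, -1)

def ssor_preconditioner(omega):
    # C(omega) = (D/omega + L) (D/omega)^{-1} (D/omega + L)^T
    # (a positive scalar factor is omitted: it cancels in the spectrum ratio)
    Dw = D / omega
    return (Dw + L) @ np.linalg.inv(Dw) @ (Dw + L).T

def cond(B):
    ev = np.linalg.eigvals(B).real     # eigenvalues are real and positive here
    return ev.max() / ev.min()

kappa_plain = cond(A)
kappa_prec = cond(np.linalg.solve(ssor_preconditioner(1.9), A))
```

A well-chosen relaxation parameter clusters the eigenvalues of C^{-1}A far more tightly than those of A, which is exactly the dependence the paper studies.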

6.
《Optimization》2012,61(4):657-659
Here, necessary corrections on computing the hybridization parameter of the quadratic hybrid conjugate gradient method of Babaie-Kafaki [S. Babaie-Kafaki, A hybrid conjugate gradient method based on a quadratic relaxation of Dai-Yuan hybrid conjugate gradient parameter, Optimization, DOI: 10.1080/02331934.2011.611512, 2011] are stated in brief. Throughout, we use the same notations and equation numbers as in Babaie-Kafaki (2011).

7.
We study the additive and multiplicative Schwarz domain decomposition methods for elliptic boundary value problems of order 2r, based on a spline space of appropriate smoothness. The finite element method reduces an elliptic boundary value problem to a linear system of equations. It is well known that as the number of triangles in the underlying triangulation is increased, which is indispensable for increasing the accuracy of the approximate solution, the size and condition number of the linear system increase. The Schwarz domain decomposition methods enable us to break the linear system into several linear subsystems of smaller size. We show in this paper that the approximate solutions from the multiplicative Schwarz domain decomposition method converge geometrically to the exact solution of the linear system. We also show that the additive Schwarz domain decomposition method yields a preconditioner for the preconditioned conjugate gradient method. We tested these methods on the biharmonic equation with Dirichlet boundary conditions over an arbitrary polygonal domain, using cubic spline functions over a quadrangulation of the given domain. The computer experiments agree with our theoretical results. Received December 28, 1995 / Revised version received November 17, 1998 / Published online September 24, 1999

8.
We study projection-difference methods for the approximate solution of the Cauchy problem for operator-differential equations with a leading self-adjoint operator A(t) and a subordinate linear operator K(t) whose domain of definition is independent of t. The operators A(t) and K(t) are assumed to be sufficiently smooth. We obtain estimates for the rate of convergence of the approximate solutions to the exact solution, as well as estimates in fractional powers of an operator similar to A(0).

9.
Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax = b, where A is an m × n sparse matrix. The methods do not require the explicit formation of A^T A or AA^T and are therefore advantageous to use when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = A^T y, AA^T y = b, and (ii) A^T Ax = A^T b. First it is shown how the SOR and SSOR methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR method by Chebyshev semi-iteration and by the conjugate gradient method is discussed. In particular, it is shown that the SSOR-CG method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A, respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction.
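Case (ii), iterating on A^T Ax = A^T b while touching A only through matrix-vector products, is illustrated by the following CGLS-style sketch (an illustrative conjugate gradient variant, not the paper's SOR/SSOR implementation):

```python
import numpy as np

def cgls(A, b, tol=1e-12, maxiter=500):
    """CG applied to the normal equations A^T A x = A^T b, using only
    products with A and A^T; the matrix A^T A is never formed."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                       # residual of Ax = b
    s = A.T @ r                         # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))        # overdetermined, full column rank
b = rng.standard_normal(20)
x = cgls(A, b)
```

Because only `A @ p` and `A.T @ r` appear, sparsity of A is fully exploited even when A^T A would be much denser, which is the point made in the abstract.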

10.
In this paper, we study the numerical computation of the errors in linear systems when using iterative methods. This is done by using methods that obtain bounds or approximations of quadratic forms u^T A^{-1} u, where A is a symmetric positive definite matrix and u is a given vector. Numerical examples are given for the Gauss-Seidel algorithm. Moreover, we show that, using a formula for the A-norm of the error due to Dahlquist, Golub and Nash [1978], very good bounds on the error can be computed almost for free during the iterations of the conjugate gradient method, leading to a reliable stopping criterion. The work of the first author was partially supported by NSF Grant CCR-950539.
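The kind of error formula referred to above rests on the identity ‖x − x_k‖_A² = Σ_{j≥k} α_j ‖r_j‖² for the conjugate gradient iterates, so a partial sum delayed by d further iterations gives a cheaply computable lower bound on the A-norm error. A hedged numerical check (the test matrix and the delay d are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T    # SPD test matrix
b = rng.standard_normal(n)
x_exact = np.linalg.solve(A, b)

x = np.zeros(n); r = b.copy(); p = r.copy()
terms, err_sq = [], []
for _ in range(n):
    e = x_exact - x
    err_sq.append(e @ A @ e)            # ||x - x_j||_A^2, for checking only
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    terms.append(alpha * (r @ r))       # alpha_j ||r_j||^2, computable in CG
    x = x + alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

k, d = 5, 4
lower_bound = sum(terms[k:k + d])       # estimate for step k, available at step k+d
```

The estimate costs only a few scalars per iteration, which is why it can serve as a nearly free stopping criterion.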

11.
The conjugate gradient method for the iterative solution of a set of linear equations Ax = b is essentially equivalent to the Lanczos method, which implies that approximations to certain eigenvalues of A can be obtained at low cost. In this paper it is shown how the smallest active eigenvalue of A can be cheaply approximated, and the usefulness of this approximation for a practical termination criterion for the conjugate gradient method is studied. It is proved that this termination criterion is reliable in many relevant situations.
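The Lanczos connection can be made concrete: the CG coefficients α_j, β_j determine a tridiagonal matrix T_k whose extreme eigenvalues (Ritz values) approximate extreme eigenvalues of A at essentially no extra cost. A sketch of this standard construction with an illustrative test matrix (the paper's "smallest active eigenvalue" machinery is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
Qm, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qm @ np.diag(np.linspace(0.5, 50.0, n)) @ Qm.T   # SPD, spectrum known

# Run k steps of plain CG, recording the coefficients alpha_j, beta_j
b = rng.standard_normal(n)
k = 25
x = np.zeros(n); r = b.copy(); p = r.copy()
alphas, betas = [], []
for _ in range(k):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    alphas.append(alpha); betas.append(beta)
    p = r_new + beta * p
    r = r_new

# Lanczos tridiagonal T_k assembled from the CG coefficients
T = np.zeros((k, k))
for j in range(k):
    T[j, j] = 1.0 / alphas[j] + (betas[j-1] / alphas[j-1] if j > 0 else 0.0)
    if j < k - 1:
        T[j, j+1] = T[j+1, j] = np.sqrt(betas[j]) / alphas[j]

ritz = np.linalg.eigvalsh(T)   # extreme Ritz values track extreme eigenvalues of A
```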

12.
In this paper the asymptotic behaviour of the solutions of x' = A(t)x + h(t, x) is studied under assumptions of instability, A(t) being a square matrix and h(t, x) a vector function. Conditions for the existence of bounded solutions, or of solutions tending to the origin as t → ∞, are obtained. As for the method, the system is recast as an equation in complex conjugate coordinates, and this equation is studied by means of a suitable Lyapunov function and the Ważewski topological method. Applications to a nonlinear differential equation of the second order are given.

13.
A new iterative scheme is described for the solution of large linear systems of equations with a matrix of the form A = ρU + ζI, where ρ and ζ are constants, U is a unitary matrix and I is the identity matrix. We show that for such matrices a Krylov subspace basis can be generated by recursion formulas with few terms. This leads to a minimal residual algorithm that requires little storage and makes it possible to determine each iterate with fairly little arithmetic work. This algorithm provides a model for iterative methods for non-Hermitian linear systems of equations, in a similar way to the conjugate gradient and conjugate residual algorithms. Our iterative scheme illustrates that results by Faber and Manteuffel [3,4] on the existence of conjugate gradient algorithms with short recurrence relations, and related results by Joubert and Young [13], can be extended.

14.
In this article, we study a Galerkin method for a nonstationary operator equation with a leading self-adjoint operator A(t) and a subordinate nonlinear operator F. The existence of strong solutions of the Cauchy problem for the differential and approximate equations is proved. New error estimates for the approximate solutions and their derivatives are obtained. The developed method is applied to an initial boundary value problem for a partial differential equation of parabolic type.

15.
In this paper we describe an algebraic multilevel extension of the approximate inverse AINV preconditioner for solving symmetric positive definite linear systems Ax = b with the preconditioned conjugate gradient method. The smoother is the approximate inverse M, and the coarse grids and the interpolation operator are constructed by looking at the entries of M. Numerical examples are given for problems arising from discretization of partial differential equations.

16.
We discuss the efficiency of the conjugate gradient (CG) method for solving a sequence of linear systems Au^{n+1} = u^n, where A is assumed to be sparse, symmetric, and positive definite. We show that under certain conditions the Krylov subspace, which is generated when solving the first linear system Au^1 = u^0, contains the solutions {u^n} for subsequent time steps. The solutions of these equations can therefore be computed by a straightforward projection of the right-hand side onto the already computed Krylov subspace. Our theoretical considerations are illustrated by numerical experiments that compare this method with the order-optimal scheme obtained by applying the multigrid method as a preconditioner for the CG method at each time step. Copyright © 2007 John Wiley & Sons, Ltd.
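A minimal sketch of the projection idea, assuming a well-conditioned SPD matrix: build one Krylov basis for the first system, then obtain each subsequent step by a Galerkin projection onto that same subspace (matrix size, subspace dimension, and number of steps are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 60, 15
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q0 @ np.diag(np.linspace(1.0, 2.0, n)) @ Q0.T   # SPD (dense here; sparse in practice)
u0 = rng.standard_normal(n)

# Orthonormal basis V of the Krylov subspace K_m(A, u0)
V = np.zeros((n, m))
V[:, 0] = u0 / np.linalg.norm(u0)
for j in range(m - 1):
    w = A @ V[:, j]
    w = w - V[:, :j+1] @ (V[:, :j+1].T @ w)   # Gram-Schmidt orthogonalization
    V[:, j+1] = w / np.linalg.norm(w)

# Three "time steps" A u^{n+1} = u^n, each solved only by a Galerkin
# projection onto the subspace computed for the first system
H = V.T @ A @ V
u = u0
for _ in range(3):
    u = V @ np.linalg.solve(H, V.T @ u)

u_exact = np.linalg.solve(A, np.linalg.solve(A, np.linalg.solve(A, u0)))
```

After the basis is built, each further step costs only small dense operations with V and H, with no new solves involving A.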

17.
Tikhonov regularization is one of the most popular methods for solving linear systems of equations or linear least-squares problems with a severely ill-conditioned matrix A. This method replaces the given problem by a penalized least-squares problem. The present paper discusses measuring the residual error (discrepancy) in Tikhonov regularization with a seminorm that uses a fractional power of the Moore-Penrose pseudoinverse of AA^T as weighting matrix. Properties of this regularization method are discussed. Numerical examples illustrate that the proposed scheme, for a suitable fractional power, may give approximate solutions of higher quality than standard Tikhonov regularization.
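For reference, standard Tikhonov regularization (which the proposed fractional-seminorm scheme refines) solves (A^T A + λI)x = A^T b. A hedged sketch on a classically ill-conditioned Hilbert matrix, with an arbitrary noise level and λ:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)   # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)                    # noisy right-hand side

lam = 1e-8
x_naive = np.linalg.solve(A, b)                                   # unregularized solve
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)       # standard Tikhonov
err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

The penalty term damps the noise amplification caused by the tiny singular values, at the price of a small bias in the recovered solution.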

18.
We investigate different methods for computing a sparse approximate inverse M of a given sparse matrix A by minimizing ‖AM − E‖ in the Frobenius norm. Such methods are very useful for deriving preconditioners for iterative solvers, especially in a parallel environment. We compare different strategies for choosing the sparsity structure of M and different ways of solving the small least-squares problems related to the computation of each column of M. In particular, we show how to take full advantage of the sparsity of A. © 1998 John Wiley & Sons, Ltd.
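Minimizing ‖AM − E‖_F (E the identity) decouples into n independent small least-squares problems, one per column of M, each restricted to a chosen sparsity pattern. A dense-algebra sketch of this column-wise computation (the pattern used here, the pattern of A itself, is just one of the strategies such methods compare):

```python
import numpy as np

def spai_columns(A, pattern):
    """Column-wise Frobenius-norm minimization: for each j, minimize
    ||A m_j - e_j||_2 over entries of m_j restricted to pattern[j]."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = pattern[j]                       # allowed nonzero rows of column j
        e = np.zeros(n); e[j] = 1.0
        # Small least-squares problem on the selected columns of A
        mj, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = mj
    return M

n = 20
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal test matrix
pattern = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]
M = spai_columns(A, pattern)
```

Because the columns are independent, this computation parallelizes trivially, which is the main attraction noted in the abstract.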

19.
Given a terrain T and an antenna A located on it, we would like to approximate the radio map of A over T, namely, to associate a signal strength with each point p ∈ T as received from A. This work presents a new radio-map approximation algorithm using an adaptive radial sweep-line technique. The suggested radar-like algorithm (RLA) uses a pipeline method for computing the signal strength along the points on a ray, and an adaptive method for interpolating the signal strength over the regions between two consecutive rays. Whenever the difference between two consecutive rays is above a certain threshold, a middle ray is created. Thus, the density of the sampling rays is sensitive to the shape of the terrain. Finally, we report on an experiment that compares the new algorithm with other well-known methods. The main conclusion is that the new RLA is significantly faster than the others: it runs 3 to 15 times faster for the same approximation accuracy.

20.
The approximate solutions produced by standard iteration methods for linear systems Ax = b, with A an n × n nonsingular matrix, lie in a subspace. Within this subspace, one may try to construct better approximations to the solution x. This is the idea behind Krylov subspace methods, and it has led to very powerful and efficient methods such as conjugate gradients, GMRES, and Bi-CGSTAB. We give an overview of these methods and discuss some relevant properties from the user's perspective. The convergence of Krylov subspace methods depends strongly on the eigenvalue distribution of A and on the angles between eigenvectors of A. Preconditioning is a popular technique for obtaining a better-behaved linear system. We briefly discuss some modern developments in preconditioning; in particular, parallel preconditioners are highlighted: reordering techniques for incomplete decompositions, domain decomposition approaches, and sparsified Schur complements.
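Of the methods named above, GMRES is perhaps the easiest to sketch: it minimizes the residual over the Krylov subspace built by the Arnoldi process. A minimal, unrestarted Python/NumPy version (an illustrative sketch, not production code):

```python
import numpy as np

def gmres(A, b, m=None, tol=1e-10):
    """Full GMRES with x0 = 0: minimize ||b - Ax|| over K_j(A, b)."""
    n = len(b)
    m = m or n
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    y = np.zeros(0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                  # Arnoldi, modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # Small least-squares problem min ||beta*e1 - H y|| gives the iterate
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if res < tol or H[j + 1, j] < 1e-14:    # converged or happy breakdown
            return Q[:, :j + 1] @ y
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m] @ y

rng = np.random.default_rng(5)
n = 30
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # nonsymmetric
b = rng.standard_normal(n)
x = gmres(A, b)
```

Unlike CG, GMRES stores the whole Arnoldi basis, which is why restarted and preconditioned variants dominate in practice.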


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号