Similar literature
20 similar documents retrieved.
1.
Recently, Wu et al. [S.-L. Wu, T.-Z. Huang, X.-L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math. 228 (1) (2009) 424-433] introduced a modified SSOR (MSSOR) method for augmented systems. In this paper, we establish a generalized MSSOR (GMSSOR) method for solving large sparse augmented systems of linear equations, which extends the MSSOR method. Furthermore, the convergence of the GMSSOR method for augmented systems is analyzed and numerical experiments are carried out, showing that the GMSSOR method with appropriate parameters has a faster convergence rate than the MSSOR method with optimal parameters.

2.
Golub, Wu and Yuan [G.H. Golub, X. Wu, J.Y. Yuan, SOR-like methods for augmented systems, BIT 41 (2001) 71–85] presented the SOR-like algorithm for solving augmented systems. In this paper, we present a modified symmetric successive overrelaxation (MSSOR) method for solving augmented systems, which is based on the work of Darvishi and Hessari. We derive its convergence under suitable restrictions on the iteration parameter, and determine its optimal iteration parameter and the corresponding optimal convergence factor under certain conditions. Finally, we apply the MSSOR method to solve augmented systems.

3.
In this paper, a new approach is proposed for solving augmented systems. Based on the modified homotopy perturbation method, we construct new iterative methods and derive necessary and sufficient conditions that guarantee their convergence. Numerical experiments show that the method is simpler and more effective.

4.
For the augmented system of linear equations, Golub, Wu and Yuan recently studied an SOR-like method (BIT 41 (2001) 71–85). By further accelerating it with another parameter, in this paper we present a generalized SOR (GSOR) method for the augmented linear system. We prove its convergence under suitable restrictions on the iteration parameters, and determine its optimal iteration parameters and the corresponding optimal convergence factor. Theoretical analysis shows that the GSOR method has a faster asymptotic convergence rate than the SOR-like method, and numerical results show that the GSOR method is more effective than the SOR-like method when applied to the augmented linear system. The GSOR method is further generalized to obtain a framework of relaxed splitting iterative methods for solving both symmetric and nonsymmetric augmented linear systems by using the techniques of vector extrapolation, matrix relaxation and inexact iteration. In addition, we give a complete convergence theory for the SOR-like method. Subsidized by the Special Funds for Major State Basic Research Projects (No. G1999032803) and the National Natural Science Foundation (No. 10471146), P.R. China.
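As a rough illustration of the two-parameter idea, the NumPy sketch below implements a GSOR-type iteration for the augmented system [A B; B^T 0][x; y] = [b; q]. The specific update form, the identity choice for the auxiliary matrix Q, the parameter values and the toy problem are assumptions made for illustration, not the scheme or data of the paper.

    import numpy as np

    def gsor_augmented(A, B, b, q, omega, tau, Q=None, tol=1e-10, maxit=500):
        # Two-parameter GSOR-type iteration for [A B; B^T 0][x; y] = [b; q].
        # Q should approximate B^T A^{-1} B; the identity default is an illustrative assumption.
        m, n = B.shape
        Q = np.eye(n) if Q is None else Q
        x, y = np.zeros(m), np.zeros(n)
        rhs_norm = np.linalg.norm(np.hstack([b, q]))
        for k in range(maxit):
            x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)   # relaxed primal update
            y = y + tau * np.linalg.solve(Q, B.T @ x - q)                 # relaxed dual update
            res = np.linalg.norm(np.hstack([b - A @ x - B @ y, q - B.T @ x]))
            if res <= tol * rhs_norm:
                break
        return x, y, k + 1

    # toy augmented system: SPD (1,1) block, orthonormal off-diagonal block
    rng = np.random.default_rng(0)
    E = 0.1 * rng.standard_normal((6, 6))
    A = 4 * np.eye(6) + (E + E.T) / 2
    B, _ = np.linalg.qr(rng.standard_normal((6, 3)))
    x_true, y_true = rng.standard_normal(6), rng.standard_normal(3)
    b, q = A @ x_true + B @ y_true, B.T @ x_true
    x, y, iters = gsor_augmented(A, B, b, q, omega=0.9, tau=3.0)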

5.
For large systems of linear equations, iterative methods provide attractive solution techniques. We describe the applicability and convergence of iterative methods of Krylov subspace type for an important class of symmetric and indefinite matrix problems, namely augmented (or KKT) systems. Specifically, we consider preconditioned minimum residual methods and discuss indefinite versus positive definite preconditioning. For a natural choice of starting vector we prove that when the definite and indefinite preconditioners are related in the obvious way, MINRES (which is applicable in the case of positive definite preconditioning) and full GMRES (which is applicable in the case of indefinite preconditioning) give residual vectors with identical Euclidean norm at each iteration. Moreover, we show that the convergence of both methods is related to a system of normal equations for which the LSQR algorithm can be employed. As a side result, we give a rare example of a non-trivial normal matrix where the corresponding inner product is explicitly known: a conjugate gradient method therefore exists and can be employed in this case. This work was supported by British Council/German Academic Exchange Service Research Collaboration Project 465 and NATO Collaborative Research Grant CRG 960782.
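The SciPy sketch below illustrates positive definite block-diagonal preconditioning of a KKT matrix with MINRES; the use of the exact Schur complement in the (2,2) block and the toy data are illustrative assumptions, not the paper's setup.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, minres

    rng = np.random.default_rng(1)
    n, m = 8, 3
    E = 0.1 * rng.standard_normal((n, n))
    A = 3 * np.eye(n) + (E + E.T) / 2                    # SPD (1,1) block
    B = rng.standard_normal((n, m))
    K = np.block([[A, B], [B.T, np.zeros((m, m))]])      # symmetric indefinite augmented (KKT) matrix
    rhs = rng.standard_normal(n + m)

    # positive definite block-diagonal preconditioner diag(A, S) with S = B^T A^{-1} B;
    # with the exact Schur complement the preconditioned matrix has at most three
    # distinct eigenvalues, so MINRES converges in a handful of steps
    S = B.T @ np.linalg.solve(A, B)
    def apply_prec(v):
        return np.concatenate([np.linalg.solve(A, v[:n]), np.linalg.solve(S, v[n:])])
    M = LinearOperator((n + m, n + m), matvec=apply_prec)

    z, info = minres(K, rhs, M=M)                        # info == 0 indicates convergence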

6.
In this paper, we consider a class of Uzawa-SOR methods for saddle point problems and prove their convergence. In the proposed methods, we solve a lower triangular system per iteration instead of a full linear system Az = b. The new methods can thus be viewed as inexact iteration methods with Uzawa as the outer iteration and SOR as the inner iteration. Although the proposed methods cannot achieve the same convergence rate as the GSOR methods proposed by Bai et al. [Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1-38], they require less work per iteration step. Experimental results show that the proposed methods are feasible and effective.
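A minimal NumPy sketch of the structure described above: the exact inner solve with A is replaced by a single SOR sweep, so each outer step only requires a lower triangular solve. The splitting details, parameter names and update order are assumptions for illustration and may differ from the methods analyzed in the paper.

    import numpy as np
    from scipy.linalg import solve_triangular

    def sor_sweep(A, x, c, omega):
        # One SOR sweep for A x = c with A = D - L - U; only a lower triangular solve is needed.
        D = np.diag(np.diag(A))
        L = -np.tril(A, -1)
        U = -np.triu(A, 1)
        rhs = ((1 - omega) * D + omega * U) @ x + omega * c
        return solve_triangular(D - omega * L, rhs, lower=True)

    def uzawa_sor(A, B, b, q, omega, tau, maxit=1000, tol=1e-8):
        # Inexact Uzawa outer iteration with a single SOR inner sweep per step,
        # for the saddle point system [A B; B^T 0][x; y] = [b; q] (a sketch only).
        m, n = B.shape
        x, y = np.zeros(m), np.zeros(n)
        for k in range(maxit):
            x = sor_sweep(A, x, b - B @ y, omega)     # approximate solve of A x = b - B y
            y = y + tau * (B.T @ x - q)               # Uzawa update of the dual variable
            res = np.linalg.norm(np.hstack([b - A @ x - B @ y, B.T @ x - q]))
            if res <= tol:
                break
        return x, y, k + 1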

7.
We discuss the application of an augmented conjugate gradient method to the solution of a sequence of linear systems with the same matrix, arising in an iterative process for the solution of scattering problems. The conjugate gradient method applied to the first system generates a Krylov subspace; for the following systems, a modified conjugate gradient method is applied, using orthogonal projections onto this subspace to compute an initial guess and modified descent directions, leading to better convergence. The scattering problem is treated via an Exact Controllability formulation and a preconditioned conjugate gradient algorithm is introduced; the set of linear systems to be solved is associated with this preconditioning. The efficiency of the method is tested on different 3D acoustic problems.
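A sketch of the subspace-reuse idea for the later systems: given a stored basis W of the Krylov subspace generated during the first solve, a Galerkin projection provides the initial guess for a new right-hand side. The names and the assumption that A is symmetric positive definite and W has independent columns are illustrative; the paper's modified descent directions are not reproduced here.

    import numpy as np

    def projected_initial_guess(A, W, b):
        # Galerkin initial guess for a new right-hand side b, reusing a stored Krylov basis W:
        # x0 = W (W^T A W)^{-1} W^T b minimizes the A-norm error over range(W) for SPD A.
        G = W.T @ (A @ W)                    # small projected matrix W^T A W
        return W @ np.linalg.solve(G, W.T @ b)

    # e.g. warm-start a standard solver with it:
    # x, info = scipy.sparse.linalg.cg(A, b, x0=projected_initial_guess(A, W, b))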

8.
We study the augmented system approach for the solution of sparse linear least-squares problems. It is well known that this method has better numerical properties than the method based on the normal equations. We use recent work by Arioli et al. (1988) to introduce error bounds and estimates for the components of the solution of the augmented system. In particular, we find that, using iterative refinement, we obtain a very robust algorithm, and our estimates of the error are accurate and cheap to compute. The final error and all our error estimates are much better than the classical or Skeel's error analysis (1979) indicates. Moreover, we prove that our error estimates are independent of the row scaling of the augmented system, and we analyze the influence of the Björck scaling (1967) on these estimates. We illustrate this with runs both on large-scale practical problems and on contrived examples, comparing the numerical behaviour of the augmented system approach with a code using the normal equations. These experiments show that while the augmented system approach with iterative refinement can sometimes be less efficient than the normal equations approach, it is comparable or better when the least-squares matrix has a full row, and is, in any case, much more stable and robust. One author was visiting Harwell and was funded by a grant from the Italian National Council of Research (CNR), Istituto di Elaborazione dell'Informazione-CNR, via S. Maria 46, I-56100 Pisa, Italy. Another author was visiting Harwell from the Faculty of Mathematics and Computer Science of the University of Amsterdam.
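For reference, a small NumPy sketch of the augmented system formulation of a least-squares problem with fixed-precision iterative refinement; the scaling of the (1,1) block (such as the Björck scaling mentioned above) is omitted, and a dense direct solve stands in for the sparse factorization used in practice.

    import numpy as np

    def least_squares_augmented(A, b, refine_steps=1):
        # Solve min ||A x - b||_2 via the augmented system
        #   [ I   A ] [r]   [b]
        #   [ A^T 0 ] [x] = [0],   r = b - A x,
        # followed by a few steps of iterative refinement (fixed-precision sketch).
        m, n = A.shape
        K = np.block([[np.eye(m), A], [A.T, np.zeros((n, n))]])
        rhs = np.concatenate([b, np.zeros(n)])
        z = np.linalg.solve(K, rhs)
        for _ in range(refine_steps):
            z += np.linalg.solve(K, rhs - K @ z)     # refinement with the computed residual
        r, x = z[:m], z[m:]
        return x, r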

9.
For various applications, it is well known that the deflated ICCG is an efficient method for solving linear systems with an invertible coefficient matrix. We propose two equivalent variants of this deflated ICCG that can also solve linear systems with a singular coefficient matrix, arising from discretization of the discontinuous Poisson equation with Neumann boundary conditions. It is demonstrated both theoretically and numerically that the resulting methods accelerate the convergence of the iterative process.

10.
In this paper, building on the previous work by Greif and Schötzau [Preconditioners for the discretized time-harmonic Maxwell equations in mixed form, Numer. Linear Algebra Appl. 14 (2007) 281–297] and Benzi and Olshanskii [An augmented Lagrangian-based approach to the Oseen problem, SIAM J. Sci. Comput. 28 (2006) 2095–2113], we present improved preconditioning techniques for the iterative solution of the saddle point linear systems that arise from the finite element discretization of the mixed formulation of the time-harmonic Maxwell equations. The modified block diagonal and triangular preconditioners considered are based on augmentation using a symmetric nonsingular weight matrix. We discuss the spectral properties of the preconditioned matrix in detail and generalize the results of the above-mentioned paper by Greif and Schötzau. Numerical experiments are given to demonstrate the efficiency of the presented preconditioners.

11.
Abaffy, Broyden, and Spedicato (ABS) have recently proposed a class of direct methods for solving nonsparse linear systems. The purpose of this paper is to demonstrate that, with a proper choice of parameters, ABS methods exploit sparsity in a natural way. In particular, we study the choice of parameters that corresponds to an LU decomposition of the coefficient matrix. In many cases, the fill-in, represented by the nonzero elements of the deflection matrix, uses less storage than the corresponding fill-in of an explicit LU factorization. Hence the above can be a viable and simple method for solving sparse linear systems. A simple reordering scheme for the coefficient matrix is also proposed for the purpose of reducing the fill-in of the deflection matrices.

12.
A Newton method to solve total least squares problems for Toeplitz systems of equations is considered. When coupled with a bisection scheme, which is based on an efficient algorithm for factoring Toeplitz matrices, global convergence can be guaranteed. Circulant and approximate factorization preconditioners are proposed to speed convergence when a conjugate gradient method is used to solve linear systems arising during the Newton iterations. The work of the second author was partially supported by a National Science Foundation Postdoctoral Research Fellowship.

13.
A generalized successive overrelaxation method for least squares problems
In this paper a new iterative method is given for solving large sparse least squares problems and for computing the minimum norm solution of underdetermined consistent linear systems. The new scheme is called the generalized successive overrelaxation (GSOR) method and is shown to be convergent if A has full column rank. The GSOR method involves a parameter ρ and an auxiliary matrix P. One can choose the matrix P so that the GSOR method involves only matrix and vector operations; therefore the GSOR method is suitable for parallel computation. Moreover, the GSOR method can be combined with preconditioning techniques, and can therefore be expected to be more effective. This author's work was supported by the Natural Science Foundation of Liaoning Province, China.

14.
In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems whose coefficient matrices are Z-matrices. The comparison results show that the convergence rate of the Gauss–Seidel-type method is faster than that of the SOR-type iterative method.

15.
In this paper, the behavior of the block Accelerated Overrelaxation (AOR) iterative method, when applied to linear systems with a generalized consistently ordered coefficient matrix, is investigated. An equation relating the eigenvalues of the block Jacobi iteration matrix to those of the associated block AOR iteration matrix is obtained, as well as sufficient conditions for the convergence of the block AOR method.

16.
In [10,14], circulant-type preconditioners have been proposed for ill-conditioned Hermitian Toeplitz systems that are generated by nonnegative continuous functions with a zero of even order. The proposed circulant preconditioners can be constructed without requiring explicit knowledge of the generating functions. It was shown that the spectra of the preconditioned matrices are uniformly bounded except for a fixed number of outliers and that all eigenvalues are uniformly bounded away from zero. Therefore the conjugate gradient method converges linearly when applied to solving the circulant preconditioned systems. In [10,14], it was claimed that this result also holds in the case where the generating functions have multiple zeros. The main aim of this paper is to give a complete convergence proof of the method in [10,14] for this class of generating functions.
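As an illustration of a circulant preconditioner built from the matrix entries alone, the sketch below forms T. Chan's optimal circulant preconditioner for a Toeplitz matrix and applies its inverse with FFTs. This is one standard construction chosen for illustration; it is not necessarily the specific preconditioner of [10,14].

    import numpy as np

    def t_chan_circulant_first_column(T):
        # First column of T. Chan's optimal circulant preconditioner for a Toeplitz matrix T:
        # c_j = ((n - j) * t_j + j * t_{j-n}) / n, built from the diagonals t_k of T alone.
        n = T.shape[0]
        t = lambda k: T[k, 0] if k >= 0 else T[0, -k]   # Toeplitz diagonal entries t_k
        return np.array([((n - j) * t(j) + j * t(j - n)) / n for j in range(n)])

    def apply_circulant_inverse(c, v):
        # A circulant is diagonalized by the Fourier matrix, so C^{-1} v costs O(n log n) via FFTs
        # (assumes the eigenvalues fft(c) are nonzero, as expected for a useful preconditioner).
        eig = np.fft.fft(c)
        return np.real(np.fft.ifft(np.fft.fft(v) / eig))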

17.
In this paper, we present preconditioned generalized accelerated overrelaxation (GAOR) methods for solving linear systems based on a class of weighted linear least squares problems. Two kinds of preconditioning are proposed, each containing three preconditioners. We compare the spectral radii of the iteration matrices of the preconditioned and original methods. The comparison results show that the convergence rate of the preconditioned GAOR methods is indeed better than that of the original method, whenever the original method is convergent. Finally, a numerical example is presented to confirm these theoretical results.

18.
The GMRES method is a popular iterative method for the solution of large linear systems of equations with a nonsymmetric nonsingular matrix. However, little is known about the behavior of this method when it is applied to the solution of nonsymmetric linear ill-posed problems with a right-hand side that is contaminated by errors. We show that when the associated error-free right-hand side lies in a finite-dimensional Krylov subspace, the GMRES method is a regularization method. The iterations are terminated by a stopping rule based on the discrepancy principle.
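A bare-bones sketch of GMRES with the discrepancy-principle stopping rule: the iteration is terminated as soon as the residual norm drops below tau * delta, where delta is the (assumed known) noise level and tau > 1 a safety factor. The zero initial guess, parameter names and dense-matrix setting are illustrative assumptions.

    import numpy as np

    def gmres_discrepancy(A, b, delta, tau=1.1, maxit=50):
        # GMRES (Arnoldi + small least-squares problem), started from x0 = 0 and
        # stopped by the discrepancy principle ||b - A x_k|| <= tau * delta.
        n = b.size
        beta = np.linalg.norm(b)
        Q = np.zeros((n, maxit + 1))
        H = np.zeros((maxit + 1, maxit))
        Q[:, 0] = b / beta
        for k in range(maxit):
            w = A @ Q[:, k]
            for j in range(k + 1):                       # modified Gram-Schmidt orthogonalization
                H[j, k] = Q[:, j] @ w
                w = w - H[j, k] * Q[:, j]
            H[k + 1, k] = np.linalg.norm(w)
            if H[k + 1, k] > 0:
                Q[:, k + 1] = w / H[k + 1, k]
            e1 = np.zeros(k + 2)
            e1[0] = beta
            y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
            res_norm = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
            if res_norm <= tau * delta or H[k + 1, k] == 0:   # discrepancy principle (or breakdown)
                break
        return Q[:, :k + 1] @ y, k + 1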

19.
Linear systems in saddle point form are usually highly indefinite, which often slows down iterative solvers such as Krylov subspace methods. It has been noted by several authors that negating the second block row of a symmetric indefinite saddle point matrix leads to a nonsymmetric matrix $\mathcal{A}$ whose spectrum is entirely contained in the right half plane. In this paper we study conditions so that $\mathcal{A}$ is diagonalizable with a real and positive spectrum. These conditions are based on necessary and sufficient conditions for positive definiteness of a certain bilinear form, with respect to which $\mathcal{A}$ is symmetric. In case the latter conditions are satisfied, there exists a well defined conjugate gradient (CG) method for solving linear systems with $\mathcal{A}$. We give an efficient implementation of this method, discuss practical issues such as error bounds, and present numerical experiments. In memory of Gene Golub (1932–2007), our wonderful friend and colleague, who had a great interest in the conjugate gradient method and the numerical solution of saddle point problems. The work of Jörg Liesen was supported by the Emmy Noether-Program and the Heisenberg-Program of the Deutsche Forschungsgemeinschaft.
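A small NumPy check of the sign-flip described above, with an SPD (1,1) block and a generically full column rank off-diagonal block; the toy data are illustrative and the check is purely numerical, not the paper's analysis.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 7, 3
    E = 0.1 * rng.standard_normal((n, n))
    A = 2 * np.eye(n) + (E + E.T) / 2                       # SPD (1,1) block
    B = rng.standard_normal((n, m))
    K_sym = np.block([[A, B], [B.T, np.zeros((m, m))]])     # symmetric, indefinite
    K_neg = np.block([[A, B], [-B.T, np.zeros((m, m))]])    # nonsymmetric after negating the second block row

    print(np.linalg.eigvalsh(K_sym).min() < 0)              # True: the symmetric form is indefinite
    print((np.linalg.eigvals(K_neg).real > 0).all())        # True: the negated form has its spectrum in the right half plane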

20.
In this paper, some improvements on Darvishi and Hessari [On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices, Appl. Math. Comput. 176 (2006) 128–133] are presented for bounds on the spectral radius of L_{ω,r}, the iteration matrix of the generalized AOR (GAOR) method. Subsequently, some new sufficient conditions for convergence of the GAOR method are given, which improve some of the results of Darvishi and Hessari.
