Similar articles
20 similar articles found.
1.
Summary Classical iterative methods for the solution of algebraic linear systems of equations proceed by solving at each step a simpler system of equations. When this system is itself solved by an (inner) iterative method, the global method is called a two-stage iterative method. If this process is repeated, then the resulting method is called a nested iterative method. We study the convergence of such methods and present conditions on the splittings corresponding to the iterative methods to guarantee convergence for any number of inner iterations. We also show that under the conditions presented, the spectral radii of the global iteration matrices decrease when the number of inner iterations increases. The proof uses a new comparison theorem for weak regular splittings. We extend our results to larger classes of iterative methods, which include iterative block Gauss-Seidel. We develop a theory for the concatenation of such iterative methods. This concatenation appears when different numbers of inner iterations are performed at each outer step. We also analyze block methods, where different numbers of inner iterations are performed for different diagonal blocks. Dedicated to Richard S. Varga on the occasion of his sixtieth birthday. P.J. Lanzkron was supported by Exxon Foundation Educational grant 12663 and the UNISYS Corporation; D.J. Rose was supported by AT&T Bell Laboratories, the Microelectronic Center of North Carolina and the Office of Naval Research under contract number N00014-85-K-0487; D.B. Szyld was supported by the National Science Foundation grant DMS-8807338.
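To make the inner/outer structure concrete, here is a minimal sketch of a two-stage iteration, assuming a Gauss-Seidel-like outer splitting A = M - N and a fixed number of Jacobi sweeps on M as the inner method; the choice of splittings and iteration counts is illustrative and not taken from the paper.

```python
import numpy as np

def two_stage(A, b, n_outer=100, n_inner=3, x0=None):
    """Two-stage (inner/outer) iteration: outer splitting A = M - N with
    M = D - L (lower triangular part), and each outer step M y = N x + b
    solved only approximately by a few Jacobi sweeps on M."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)          # strictly lower part, so A = D - L - U
    U = -np.triu(A, 1)
    M, N = D - L, U              # outer splitting A = M - N
    d = np.diag(D)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(n_outer):
        rhs = N @ x + b          # an exact outer step would solve M y = rhs
        y = x.copy()
        for _ in range(n_inner): # inner Jacobi sweeps on M = D - L
            y = (L @ y + rhs) / d
        x = y
    return x

# usage: a small symmetric, diagonally dominant system
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([1., 2., 3.])
print(two_stage(A, b))
```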

2.
Summary For a square matrix T ∈ ℂ^{n×n}, where (I - T) is possibly singular, we investigate the solution of the linear fixed point problem x = Tx + c by applying semiiterative methods (SIMs) to the basic iteration x_0 ∈ ℂ^n, x_k := T x_{k-1} + c (k ≥ 1). Such problems arise if one splits the coefficient matrix of a linear system Ax = b of algebraic equations according to A = M - N (M nonsingular), which leads to x = M^{-1}N x + M^{-1}b =: Tx + c. Even if x = Tx + c is consistent there are cases where the basic iteration fails to converge, namely if T possesses eigenvalues λ ≠ 1 with |λ| ≥ 1, or if λ = 1 is an eigenvalue of T with nonlinear elementary divisors. In these cases, and also if x = Tx + c is incompatible, we derive necessary and sufficient conditions implying that a SIM tends to a limit vector which can be described in terms of the Drazin inverse of (I - T). We further give conditions under which this limit is a solution or a least squares solution of (I - T)x = c. Research supported in part by the Alexander von Humboldt-Stiftung

3.
Summary The Chebyshev and second-order Richardson methods are classical iterative schemes for solving linear systems. We consider the convergence analysis of these methods when each step of the iteration is carried out inexactly. This has many applications, since a preconditioned iteration requires, at each step, the solution of a linear system which may be solved inexactly using an inner iteration. We derive an error bound which applies to the general nonsymmetric inexact Chebyshev iteration. We show how this simplifies slightly in the case of a symmetric or skew-symmetric iteration, and we consider both the cases of underestimating and overestimating the spectrum. We show that in the symmetric case, it is actually advantageous to underestimate the spectrum when the spectral radius and the degree of inexactness are both large. This is not true in the case of the skew-symmetric iteration. We show how similar results apply to the Richardson iteration. Finally, we describe numerical experiments which illustrate the results and suggest that the Chebyshev and Richardson methods, with reasonable parameter choices, may be more effective than the conjugate gradient method in the presence of inexactness. This work was supported in part by National Science Foundation Grants DCR-8412314 and DCR-8502014. The work of this author was completed while he was on sabbatical leave at the Centre for Mathematical Analysis and Mathematical Sciences Research Institute at the Australian National University, Canberra, Australia
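As a hedged illustration of the second-order Richardson iteration mentioned above (not the paper's analysis), the sketch below uses the classical optimal parameters for a symmetric positive definite operator with spectrum in [lmin, lmax], and models inexactness by letting the caller supply an approximate preconditioner solve; all names and the example system are assumptions for illustration.

```python
import numpy as np

def second_order_richardson(A, b, lmin, lmax, inner_solve, n_iter=200, x0=None):
    """Second-order (stationary) Richardson iteration
        x_{k+1} = x_k + alpha * z_k + beta * (x_k - x_{k-1}),  z_k ≈ M^{-1}(b - A x_k),
    with the classical optimal alpha, beta when the eigenvalues of the
    preconditioned operator lie in [lmin, lmax].  inner_solve(r) stands for
    the (possibly inexact) inner solve whose effect the paper analyzes."""
    alpha = 4.0 / (np.sqrt(lmax) + np.sqrt(lmin)) ** 2
    beta = ((np.sqrt(lmax) - np.sqrt(lmin)) / (np.sqrt(lmax) + np.sqrt(lmin))) ** 2
    x_prev = np.zeros_like(b) if x0 is None else x0.copy()
    x = x_prev.copy()
    for _ in range(n_iter):
        z = inner_solve(b - A @ x)   # inexact preconditioned residual
        x, x_prev = x + alpha * z + beta * (x - x_prev), x
    return x

# usage with a trivial (identity) "preconditioner", i.e. exact residuals
A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3
b = np.array([1.0, 0.0])
print(second_order_richardson(A, b, lmin=1.0, lmax=3.0, inner_solve=lambda r: r))
```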

4.
Summary A variety of iterative methods considered in [3] are applied to linear algebraic systems of the form Au = b, where the matrix A is consistently ordered [12] and the iteration matrix of the Jacobi method is skew-symmetric. The related theory of convergence is developed and the optimum values of the involved parameters for each considered scheme are determined. It reveals that under the aforementioned assumptions the Extrapolated Successive Underrelaxation method attains a rate of convergence which is clearly superior over the Successive Underrelaxation method [5] when the Jacobi iteration matrix is non-singular.

5.
Optimal successive overrelaxation iterative methods for P-cyclic matrices
Summary We consider linear systems whose associated block Jacobi matrices J_p are weakly cyclic of index p. In a recent paper, Pierce, Hadjidimos and Plemmons [13] proved that the block two-cyclic successive overrelaxation (SOR) iterative method is numerically more effective than the block q-cyclic SOR method, 2 < q ≤ p, if the eigenvalues of J_p^p are either all non-negative or all non-positive. Based on the theory of stationary p-step methods, we give an alternative proof of their theorem. We further determine the optimal relaxation parameter of the p-cyclic SOR method under the assumption that the eigenvalues of J_p^p are contained in a real interval, thereby extending results due to Young [19] (for the case p = 2) and Varga [15] (for p > 2). Finally, as a counterpart to the result of Pierce, Hadjidimos and Plemmons, we show that, under this more general assumption, the two-cyclic SOR method is not always superior to the q-cyclic SOR method, 2 < q ≤ p. Dedicated to R. S. Varga on the occasion of his 60th birthday. Research supported in part by the Deutsche Forschungsgemeinschaft

6.
Summary In this paper we study linear stationary iterative methods with nonnegative iteration matrices for solving singular and consistent systems of linear equations Ax = b. The iteration matrices for the schemes are obtained via regular and weak regular splittings of the coefficient matrix A. In certain cases when only some necessary, but not sufficient, conditions for the convergence of the iteration schemes exist, we consider a transformation on the iteration matrices and obtain new iterative schemes which ensure convergence to a solution to Ax = b. This transformation is parameter-dependent, and in the case where all the eigenvalues of the iteration matrix are real, we show how to choose this parameter so that the asymptotic convergence rate of the new schemes is optimal. Finally, some applications to the problem of computing the stationary distribution vector for a finite homogeneous ergodic Markov chain are discussed. Research sponsored in part by US Army Research Office
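One standard parameter-dependent transformation of this kind is extrapolation of the basic iteration; the sketch below is offered only as an illustration under that assumption, since the abstract does not reproduce the paper's exact construction.

```python
import numpy as np

def extrapolated_iteration(T, c, alpha, n_iter=500, x0=None):
    """Replace x_{k+1} = T x_k + c by the extrapolated scheme
        x_{k+1} = alpha * (T x_k + c) + (1 - alpha) * x_k,
    whose iteration matrix is alpha*T + (1 - alpha)*I.  If the eigenvalues
    of T are real and lie in [-1, 1], with the eigenvalue 1 semisimple, any
    0 < alpha < 1 moves the remaining eigenvalues strictly inside the unit
    circle, so the scheme semiconverges to a solution of the consistent
    singular system x = T x + c."""
    x = np.zeros_like(c) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = alpha * (T @ x + c) + (1.0 - alpha) * x
    return x
```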

7.
Summary An n×n complex matrix B is called paracontracting if ‖B‖₂ ≤ 1 and ‖Bx‖₂ < ‖x‖₂ for every nonzero x in [N(I - B)]^⊥. We show that a product B = B_k B_{k-1} ··· B_1 of k paracontracting matrices is semiconvergent and give upper bounds on the subdominant eigenvalue of B in terms of the subdominant singular values of the B_i's and in terms of the angles between certain subspaces. Our results here extend earlier results due to Halperin and due to Smith, Solomon and Wagner. We also determine necessary and sufficient conditions for n numbers in the interval [0, 1] to form the spectrum of a product of two orthogonal projections and hence characterize the subdominant eigenvalue of such a product. In the final part of the paper we apply the upper bounds mentioned earlier to provide an estimate on the subdominant eigenvalue of the SOR iteration matrix associated with an n×n hermitian positive semidefinite matrix A none of whose diagonal entries vanish. The work of this author was supported in part by NSF Research Grant No. MCS-8400879
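The sketch below is a purely numerical, sampling-based check of the paracontracting property as stated above; it is illustrative only (sampling cannot prove the property, it can only fail to refute it) and is not part of the paper.

```python
import numpy as np

def paracontracting_check(B, tol=1e-10, n_samples=1000, seed=0):
    """Heuristic test of: ||B||_2 <= 1 and ||Bx||_2 < ||x||_2 for every
    nonzero x in the orthogonal complement of N(I - B)."""
    rng = np.random.default_rng(seed)
    n = B.shape[0]
    if np.linalg.norm(B, 2) > 1.0 + tol:
        return False
    # Orthonormal basis of N(I - B) from the SVD of I - B: right singular
    # vectors with negligible singular values span the null space.
    _, s, Vh = np.linalg.svd(np.eye(n) - B)
    kernel = Vh[s <= tol].conj().T            # columns span N(I - B)
    for _ in range(n_samples):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        if kernel.size:
            x = x - kernel @ (kernel.conj().T @ x)   # project onto [N(I - B)]^⊥
        nx = np.linalg.norm(x)
        if nx < tol:
            continue
        if np.linalg.norm(B @ x) >= nx * (1.0 - tol):
            return False
    return True

# example: an orthogonal projection is paracontracting
P = np.array([[1.0, 0.0], [0.0, 0.0]])
print(paracontracting_check(P))
```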

8.
In this paper, we will present the block splitting iterative methods with general weighting matrices for solving linear systems of algebraic equations Ax = b when the coefficient matrix A is symmetric positive definite of block form, and establish the convergence theories with respect to the general weighting matrices but special splittings. Finally, a numerical example shows the advantage of this method.

9.
Summary This note is concerned with the following problem: Given a system A·x = b of linear equations and knowing that certain of its subsystems A_1·x_1 = b_1, ..., A_m·x_m = b_m can be solved uniquely, what can be said about the regularity of A, and how can the solution x be found from x_1, ..., x_m? This question is of particular interest for establishing methods computing certain linear or quasilinear sequence transformations recursively [7, 13, 15]. Work performed under NATO Research Grant 027-81

10.
Summary Recently, special attention has been given in the literature to the problems of accurately computing the least-squares solution of very large-scale over-determined systems of linear equations which occur in geodetic applications. In particular, it has been suggested that one can solve such problems iteratively by applying the block SOR (Successive Overrelaxation) and EGS1 (Extrapolated Gauss-Seidel 1) plus semi-iterative methods to a linear system with coefficient matrix 2-cyclic or 3-cyclic. The comparison of 2-block SOR and 3-block SOR was made in [1] and showed that the 2-block SOR is better. In [6], the authors also proved that 3-block EGS1-SI is better than 3-block SOR. Here, we first show that the 2-block DJ (Double Jacobi)-SI, GS-SI and EGS1-SI methods are equivalent and all of them are equivalent to the 3-block EGS1-SI method; then, we prove that the composite methods and 2-block SOR have the same asymptotic rate of convergence, but the former has a better average rate of convergence; finally, numerical experiments are reported, and confirm that the 3-block EGS1-SI is better than the 2-block SOR.

11.
Comparisons of weak regular splittings and multisplitting methods
Summary Comparison results for weak regular splittings of monotone matrices are derived. As an application we get upper and lower bounds for the convergence rate of iterative procedures based on multisplittings. This yields a very simple proof of results of Neumann and Plemmons on upper bounds, and establishes lower bounds, which had in special cases been conjectured by these authors. Dedicated to the memory of Peter Henrici

12.
Summary Recently Eiermann, Marek, and Niethammer have shown how to apply semiiterative methods to a fixed point system x = Tx + c which is inconsistent or in which the powers of the fixed point operator T have no limit, to obtain iterative methods which converge to some approximate solution to the fixed point system. In view of their results we consider here stipulations on a preconditioning QAx = Qb of the system Ax = b and, separately, on a splitting A = M - N, which lead to fixed point systems such that, with the aid of a semiiterative method, the iterative scheme converges to a weighted Moore-Penrose solution to the system Ax = b. We show in several ways that to obtain a meaningful limit point from a semiiterative method requires less restrictions on the splittings or the preconditionings than those which have been required in the classical Picard iterative method (see, e.g., the works of Berman and Plemmons, Berman and Neumann, and Tanabe). We pay special attention to the case when the weighted Moore-Penrose solution which is sought is the minimal norm least squares solution to Ax = b. Research supported by the Deutsche Forschungsgemeinschaft. Partially supported by AFOSR research grant 88-0047

13.
Summary We study monotone iterative methods for the numerical solution of a class of nonlinear elliptic boundary value problems by applying a theorem of Ortega and Rheinboldt. This generalizes the work of earlier authors to the case of nonlinear perturbations of linear problems with a nontrivial kernel.

14.
Summary If the columns of a matrix are orthonormal and it is partitioned into a 2-by-1 block matrix, then the singular value decompositions of the blocks are related. This is the essence of the CS decomposition. The computation of these related SVDs requires some care. Stewart has given an algorithm that uses the LINPACK SVD algorithm together with a Jacobi-type clean-up operation on a cross-product matrix. Our technique is equally stable and fast but avoids the cross-product matrix. The simplicity of our technique makes it more amenable to parallel computation on systolic-type computer architectures. These developments are of interest because a good way to compute the generalized singular value decomposition of a matrix pair (A, B) is to compute the CS decomposition of a certain orthogonal column matrix related to A and B. The research associated with this paper was partially supported by the Office of Naval Research contract N00014-83-K-0640, USA
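For orientation only, a minimal numpy sketch of the CS-decomposition relation described above; this is not the algorithm of the paper (which adds a clean-up step for clustered singular values) and all variable names are illustrative.

```python
import numpy as np

def cs_decomp_sketch(Q1, Q2):
    """If [Q1; Q2] has orthonormal columns, the SVD Q1 = U1 diag(c) V^T
    determines V, and the columns of Q2 V are mutually orthogonal with
    norms s_i satisfying c_i^2 + s_i^2 = 1.  Assumes no s_i is
    (numerically) zero; the cited algorithms handle that case carefully."""
    U1, c, Vt = np.linalg.svd(Q1, full_matrices=False)
    W = Q2 @ Vt.T
    s = np.linalg.norm(W, axis=0)   # sines: c**2 + s**2 should be ≈ 1
    U2 = W / s                      # orthonormal columns when all s_i > 0
    return U1, c, U2, s, Vt.T

# usage: build a random matrix with orthonormal columns and split it
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((7, 3)))
Q1, Q2 = Q[:4], Q[4:]
U1, c, U2, s, V = cs_decomp_sketch(Q1, Q2)
print(np.allclose(c**2 + s**2, 1.0))
```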

15.
Summary In this paper we compare several implementations of Kogbetliantz's algorithm for computing the SVD on sequential as well as on parallel machines. Comparisons are based on timings and on operation counts. The numerical accuracy of the different methods is also analyzed.

16.
Summary The topic of iterative substructuring methods, and more generally domain decomposition methods, has been extensively studied over the past few years, and the topic is well advanced with respect to first and second order elliptic problems. However, relatively little work has been done on more general constrained least squares problems (or equivalent formulations) involving equilibrium equations such as those arising, for example, in realistic structural analysis applications. The potential is good for effective use of iterative algorithms on these problems, but such methods are still far from being competitive with direct methods in industrial codes. The purpose of this paper is to investigate an order reducing, preconditioned conjugate gradient method proposed by Barlow, Nichols and Plemmons for solving problems of this type. The relationships between this method and nullspace methods, such as the force method for structures and the dual variable method for fluids, are examined. Convergence properties are discussed in relation to recent optimality results for Varga's theory of p-cyclic SOR. We suggest a mixed approach for solving equilibrium equations, consisting of both direct reduction in the substructures and the conjugate gradient iterative algorithm to complete the computations. Dedicated to R. S. Varga on the occasion of his 60th birthday. Research completed while pursuing graduate studies sponsored by the Department of Mathematical Sciences, US Air Force Academy, CO, and funded by the Air Force Institute of Technology, WPAFB, OH. Research supported by the Air Force under grant no. AFOSR-88-0285 and by the National Science Foundation under grant no. DMS-89-02121

17.
Summary The Richardson iteration method is conceptually simple, as well as easy to program and parallelize. This makes the method attractive for the solution of large linear systems of algebraic equations with matrices with complex eigenvalues. We change the ordering of the relaxation parameters of a Richardson iteration method proposed by Eiermann, Niethammer and Varga for the solution of such problems. The new method obtained is shown to be stable and to have better convergence properties. Research supported by the National Science Foundation under Grant DMS-8704196
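As context, and only as a generic sketch rather than the reordered scheme of the paper, a cyclic Richardson iteration with a prescribed list of relaxation parameters can be written as follows; the cited work concerns how reordering `params` controls roundoff growth, and the parameter choice shown in the usage is the classical real symmetric positive definite case, not the complex-spectrum setting of the paper.

```python
import numpy as np

def cyclic_richardson(A, b, params, n_cycles=20, x0=None):
    """Richardson iteration x_{k+1} = x_k + alpha_k (b - A x_k), with the
    relaxation parameters alpha_k taken cyclically from `params`, applied
    here simply in the given order."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(n_cycles):
        for a in params:
            x = x + a * (b - A @ x)
    return x

# one classical parameter choice for a symmetric positive definite A with
# spectrum in [lmin, lmax]: reciprocals of Chebyshev points on that interval
lmin, lmax, m = 0.5, 4.0, 8
nodes = 0.5 * (lmax + lmin) + 0.5 * (lmax - lmin) * np.cos(
    (2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))
params = 1.0 / nodes
A = np.array([[2.0, -0.5], [-0.5, 1.0]])   # eigenvalues lie within [0.5, 4.0]
print(cyclic_richardson(A, np.array([1.0, 1.0]), params))
```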

18.
Summary In a recent paper, [4], Csordas and Varga have unified and extended earlier theorems, of Varga in [10] and Woźnicki in [11], on the comparison of the asymptotic rates of convergence of two iteration matrices induced by two regular splittings. The main purpose of this note is to show a connection between the Csordas-Varga paper and a paper by Beauwens, [1], in which a comparison theorem is developed for the asymptotic rate of convergence of two nonnegative iteration matrices induced by two splittings which are not necessarily regular. Monotonic norms already used in [1] play an important role in our work here. Research supported in part by NSF grant number DMS-8400879

19.
Summary A generalization of alternating methods for sets of linear equations is described and the number of operations calculated. It is shown that the lowest number of arithmetic operations is achieved in the SSOR algorithm.

20.
Summary We set up here a general formalism for describing factorization iterative methods of the first order and we use it to review various methods that have been proposed in the literature; next we introduce the notions of M- and H-operators, which generalize those of block-M- and block-H-matrices; finally we discuss the properties of factorization iterative methods in relation with characteristic properties of M- and H-operators.

