Similar Articles
20 similar articles found (search time: 15 ms)
1.
Summary. The Chebyshev and second-order Richardson methods are classical iterative schemes for solving linear systems. We consider the convergence analysis of these methods when each step of the iteration is carried out inexactly. This has many applications, since a preconditioned iteration requires, at each step, the solution of a linear system which may be solved inexactly by an inner iteration. We derive an error bound which applies to the general nonsymmetric inexact Chebyshev iteration. We show how this simplifies slightly in the case of a symmetric or skew-symmetric iteration, and we consider both the cases of underestimating and overestimating the spectrum. We show that in the symmetric case it is actually advantageous to underestimate the spectrum when the spectral radius and the degree of inexactness are both large; this is not true for the skew-symmetric iteration. We show how similar results apply to the Richardson iteration. Finally, we describe numerical experiments which illustrate the results and suggest that the Chebyshev and Richardson methods, with reasonable parameter choices, may be more effective than the conjugate gradient method in the presence of inexactness. This work was supported in part by National Science Foundation Grants DCR-8412314 and DCR-8502014. The work of this author was completed while on sabbatical leave at the Centre for Mathematical Analysis and the Mathematical Sciences Research Institute at the Australian National University, Canberra, Australia.
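For orientation, the classical (exact) Chebyshev iteration the abstract builds on can be sketched as follows. This is a generic textbook form of the three-term recurrence for a symmetric system whose spectrum is assumed to lie in a known interval [lmin, lmax] with 0 < lmin; it is not the inexact variant analyzed in the paper, and the matrix and interval below are illustrative choices of ours.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def chebyshev(A, b, lmin, lmax, iters):
    """Chebyshev iteration for Ax = b, spectrum of A assumed in [lmin, lmax]."""
    theta = (lmax + lmin) / 2.0    # center of the spectral interval
    delta = (lmax - lmin) / 2.0    # half-width of the interval
    sigma = theta / delta
    x = [0.0] * len(b)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    d = [ri / theta for ri in r]   # first correction
    rho = 1.0 / sigma
    for _ in range(iters):
        x = [xi + di for xi, di in zip(x, d)]
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
        rho_new = 1.0 / (2.0 * sigma - rho)
        # three-term Chebyshev recurrence for the next correction
        d = [rho_new * rho * di + (2.0 * rho_new / delta) * ri
             for di, ri in zip(d, r)]
        rho = rho_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]       # SPD, eigenvalues inside [2, 5]
b = [1.0, 2.0]
x = chebyshev(A, b, 2.0, 5.0, 40)
```

Note that, unlike conjugate gradients, the recurrence needs no inner products, which is one reason the abstract's comparison with CG under inexactness is of practical interest.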

2.
Summary. In this paper we perform a round-off error analysis of descent methods for solving a linear system Ax = b, where A is assumed to be symmetric and positive definite. This leads to a general result on the attainable accuracy of the computed sequence {x_i} when the method is performed in floating-point arithmetic. The general theory is applied to the Gauss-Southwell method and the gradient method. Both methods appear to be well-behaved, which means that they compute an approximation x_i to the exact solution A^{-1}b that is the exact solution of a slightly perturbed linear system, i.e. (A + δA)x_i = b with ‖δA‖ of order ε‖A‖, where ε is the relative machine precision and ‖·‖ denotes the spectral norm.
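The gradient method mentioned above is ordinary steepest descent with an exact line search; a minimal exact-arithmetic sketch (the paper's contribution is its floating-point behavior, which this toy example does not probe) might look like:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gradient_method(A, b, iters):
    """Steepest descent for SPD A: step along the residual r = b - Ax."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
        Ar = matvec(A, r)
        denom = dot(r, Ar)
        if denom == 0.0:               # zero residual: x is already exact
            break
        alpha = dot(r, r) / denom      # exact line search along r
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]           # symmetric positive definite
b = [1.0, 2.0]
x = gradient_method(A, b, 60)
```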

3.
Summary. A variety of iterative methods considered in [3] are applied to linear algebraic systems of the form Au = b, where the matrix A is consistently ordered [12] and the iteration matrix of the Jacobi method is skew-symmetric. The related convergence theory is developed and the optimum values of the parameters involved in each scheme are determined. It is shown that, under the aforementioned assumptions, the Extrapolated Successive Underrelaxation method attains a rate of convergence clearly superior to that of the Successive Underrelaxation method [5] when the Jacobi iteration matrix is nonsingular.

4.
Summary. In this paper we consider an extension to nonlinear algebraic systems of the class of algorithms recently proposed by Abaffy, Broyden and Spedicato for general linear systems. We analyze the convergence properties, showing that under the usual assumptions on the function and some mild assumptions on the free parameters available in the class, the algorithm is locally convergent and has a superlinear rate of convergence (per major iteration, which is computationally comparable to a single Newton step). Some particular algorithms satisfying the conditions on the free parameters are considered.

5.
Summary. Classical iterative methods for the solution of algebraic linear systems of equations proceed by solving at each step a simpler system of equations. When this system is itself solved by an (inner) iterative method, the global method is called a two-stage iterative method. If this process is repeated, the resulting method is called a nested iterative method. We study the convergence of such methods and present conditions on the splittings corresponding to the iterative methods which guarantee convergence for any number of inner iterations. We also show that under these conditions the spectral radii of the global iteration matrices decrease as the number of inner iterations increases. The proof uses a new comparison theorem for weak regular splittings. We extend our results to larger classes of iterative methods, including iterative block Gauss-Seidel, and develop a theory for the concatenation of such iterative methods; this concatenation appears when different numbers of inner iterations are performed at each outer step. We also analyze block methods, where different numbers of inner iterations are performed for different diagonal blocks. Dedicated to Richard S. Varga on the occasion of his sixtieth birthday. P.J. Lanzkron was supported by Exxon Foundation Educational grant 12663 and the UNISYS Corporation; D.J. Rose was supported by AT&T Bell Laboratories, the Microelectronic Center of North Carolina and the Office of Naval Research under contract number N00014-85-K-0487; D.B. Szyld was supported by National Science Foundation grant DMS-8807338.
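The two-stage structure can be sketched concretely: pick an outer splitting A = M - N (below, the Gauss-Seidel choice M = lower triangle of A, which is our illustrative choice, not the paper's), and at each outer step solve M y = N x + b only approximately with a fixed number of inner Jacobi sweeps on M.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def jacobi_sweeps(M, c, y, sweeps):
    """Approximate solve of M y = c by a fixed number of Jacobi sweeps."""
    n = len(c)
    for _ in range(sweeps):
        y = [(c[i] - sum(M[i][j] * y[j] for j in range(n) if j != i)) / M[i][i]
             for i in range(n)]
    return y

n = 3
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
# Outer splitting A = M - N with M = lower triangle (incl. diagonal) of A:
M = [[A[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
N = [[M[i][j] - A[i][j] for j in range(n)] for i in range(n)]

x = [0.0] * n
for _ in range(30):                      # outer iterations
    c = [ni + bi for ni, bi in zip(matvec(N, x), b)]
    x = jacobi_sweeps(M, c, x, 3)        # 3 inner iterations per outer step
```

The paper's question is precisely when such a scheme converges for *any* number of inner iterations, and how the spectral radius of the global iteration matrix depends on that number.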

6.
Summary. For a square matrix T ∈ ℂ^{n×n}, where I − T is possibly singular, we investigate the solution of the linear fixed point problem x = Tx + c by applying semiiterative methods (SIMs) to the basic iteration x_0 ∈ ℂ^n, x_k = T x_{k−1} + c (k ≥ 1). Such problems arise if one splits the coefficient matrix of a linear system Ax = b of algebraic equations according to A = M − N (M nonsingular), which leads to x = M^{−1}N x + M^{−1}b ≡ Tx + c. Even if x = Tx + c is consistent, there are cases where the basic iteration fails to converge, namely if T possesses eigenvalues λ ≠ 1 with |λ| ≥ 1, or if λ = 1 is an eigenvalue of T with nonlinear elementary divisors. In these cases, and also if x = Tx + c is incompatible, we derive necessary and sufficient conditions under which a SIM tends to a limit that can be described in terms of the Drazin inverse of I − T. We further give conditions under which this limit is a solution or a least squares solution of (I − T)x = c. Research supported in part by the Alexander von Humboldt-Stiftung.

7.
Summary. The accumulation of rounding errors is studied for a method that computes the solution of an underdetermined system of linear equations at least distance from a given point. The method is based on orthogonal matrix decompositions. This research was partially supported by the Progetto Finalizzato Informatica of the Italian National Research Council.

8.
Summary. The problem of approximating an n × m matrix C by matrices of the form XA + BY, where A, B, X and Y are of appropriate size, is considered. The measure of error is the supremum of the absolute values of the individual entries of the error matrix. The problem is closely related to that of approximating a bivariate function f by sums of functions of the form xh + gy, where x and g are functions of the first variable alone and h and y are functions of the second variable. An old algorithm for constructing best approximations is described, and some of the properties of its convergence are discussed.

9.
Summary. A number of iterative methods for the solution of the singular linear system Ax = b (det(A) = 0 and b in the range of A) are analyzed and studied. Among them are the Stationary k-Step, the Accelerated Overrelaxation (AOR), and the Nonstationary Second-Order Chebyshev Semi-Iterative methods. It is proved that, under certain assumptions, the corresponding optimum semiconvergent schemes, which bear a close resemblance to their analogs for the nonsingular case, can be determined. Finally, a number of numerical examples show how the theory can be used to obtain the optimum parameters for each applicable semiconvergent method.

10.
A class of direct methods for linear systems
Summary. A class of direct-type methods for solving determined or underdetermined, full-rank or deficient-rank linear systems is presented and theoretically analyzed. The class can be considered a generalization of the methods of Brent and Brown restricted to linear systems, and implicitly contains orthogonal, LU and LLᵀ factorization methods.

11.
Summary. We study direct and iterative domain imbedding methods for the Stokes equations on certain non-rectangular domains in two space dimensions. We analyze a continuous analog of numerical domain imbedding for bounded, smooth domains, and give an example of a simple numerical algorithm suggested by the continuous analysis. This algorithm is applicable to simply connected domains which can be covered by rectangular grids with uniformly spaced grid lines in at least one coordinate direction. We also discuss a related FFT-based fast solver for Stokes problems with physical boundary conditions on rectangles, and present some numerical results.

12.
Summary. Given a nonsingular linear system Ax = b, a splitting A = M − N leads to the one-step iteration (1) x_m = T x_{m−1} + c, with T := M^{−1}N and c := M^{−1}b. We investigate semiiterative methods (SIMs) with respect to (1), under the assumption that the eigenvalues of T are contained in some compact set Ω of the complex plane, with 1 ∉ Ω. There exist SIMs which are optimal with respect to Ω but, except for some special sets Ω, such optimal methods are not explicitly known in general. Using results about maximal convergence of polynomials and uniformly distributed nodes from approximation and function theory, we describe SIMs which are asymptotically optimal with respect to Ω. It is shown that the Euler methods, extensively studied by Niethammer-Varga [NV], are special SIMs. Various algorithms for SIMs are also derived, and a one-to-one correspondence between Euler methods and SIMs generated by generalized Faber polynomials is established; this correspondence shows that asymptotically optimal Euler methods are quite close to the optimal SIMs. Dedicated to Professor Karl Zeller (Universität Tübingen) on the occasion of his sixtieth birthday (December 28, 1984).

13.
Summary. An n × n complex matrix B is called paracontracting if ‖B‖₂ ≤ 1 and ‖Bx‖₂ < ‖x‖₂ for every 0 ≠ x ∉ N(I − B). We show that a product B = B_k B_{k−1} ⋯ B_1 of k paracontracting matrices is semiconvergent, and give upper bounds on the subdominant eigenvalue of B in terms of the subdominant singular values of the B_i's and in terms of the angles between certain subspaces. Our results extend earlier results due to Halperin and due to Smith, Solomon and Wagner. We also determine necessary and sufficient conditions for n numbers in the interval [0, 1] to form the spectrum of a product of two orthogonal projections, and hence characterize the subdominant eigenvalue of such a product. In the final part of the paper we apply the upper bounds mentioned above to estimate the subdominant eigenvalue of the SOR iteration matrix associated with an n × n Hermitian positive semidefinite matrix A none of whose diagonal entries vanish. The work of this author was supported in part by NSF Research Grant No. MCS-8400879.
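A tiny worked instance of the product-of-projections case: take P1 the orthogonal projection onto span{(1,0)} and P2 onto span{(1,1)}/√2 (our illustrative choice). The angle between the two ranges is 45°, so the subdominant eigenvalue of B = P2·P1 is cos²(45°) = 1/2, and powers of B decay by exactly that factor, illustrating semiconvergence.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P1 = [[1.0, 0.0], [0.0, 0.0]]    # projection onto the x-axis
P2 = [[0.5, 0.5], [0.5, 0.5]]    # projection onto the line y = x
B = matmul(P2, P1)               # [[0.5, 0.0], [0.5, 0.0]]

# Here B^2 = (1/2) B, so B^k shrinks by the subdominant eigenvalue 1/2
# each step: the iteration x_{k+1} = B x_k semiconverges (to 0, since the
# ranges intersect only at the origin).
Bk = B
for _ in range(5):
    Bk = matmul(Bk, B)           # Bk = B^6
```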

14.
Summary. A generalization of alternating methods for sets of linear equations is described and the number of operations is calculated. It is shown that the lowest number of arithmetic operations is achieved by the SSOR algorithm.

15.
Summary. A parallel projection algorithm is proposed to solve the generalized linear least-squares problem: find a vector that minimizes the 2-norm distance from its image under an affine mapping to a closed convex cone. In each iteration of the algorithm the problem is decomposed into several independent small problems of finding projections onto subspaces, which are simple and can be tackled in parallel. The algorithm can be viewed as a dual version of the algorithm proposed by Han and Lou [8]. For the special problem under consideration, stronger convergence results are established. The algorithm is also related to the block iterative methods of Elfving [6], Dennis and Steihaug [5], and the primal-dual method of Spingarn [14]. This material is based on work supported in part by the National Science Foundation under Grant DMS-8602416 and by the Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign.

16.
Summary. Recently Eiermann, Marek, and Niethammer have shown how to apply semiiterative methods to a fixed point system x = Tx + c which is inconsistent, or in which the powers of the fixed point operator T have no limit, to obtain iterative methods which converge to some approximate solution of the fixed point system. In view of their results we consider here stipulations on a preconditioning QAx = Qb of the system Ax = b and, separately, on a splitting A = M − N, which lead to fixed point systems such that, with the aid of a semiiterative method, the iterative scheme converges to a weighted Moore-Penrose solution of the system Ax = b. We show in several ways that obtaining a meaningful limit point from a semiiterative method requires fewer restrictions on the splittings or the preconditionings than have been required in the classical Picard iterative method (see, e.g., the works of Berman and Plemmons, Berman and Neumann, and Tanabe). We pay special attention to the case when the weighted Moore-Penrose solution which is sought is the minimal norm least squares solution of Ax = b. Research supported by the Deutsche Forschungsgemeinschaft. Partially supported by AFOSR research grant 88-0047.

17.
Summary. A forward error analysis is presented for the Björck-Pereyra algorithms used for solving Vandermonde systems of equations. The analysis applies to the case where the points defining the Vandermonde matrix are nonnegative and arranged in increasing order. It is shown that for a particular class of Vandermonde problems the error bound obtained depends on the dimension n and on the machine precision only, being independent of the condition number of the coefficient matrix. By comparing appropriate condition numbers for the Vandermonde problem with the forward error bounds, it is shown that the Björck-Pereyra algorithms introduce no more uncertainty into the numerical solution than is caused simply by storing the right-hand side vector on the computer. A technique for computing running a posteriori error bounds is derived. Several numerical experiments are presented, and it is observed that the ordering of the points can greatly affect the solution accuracy.
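The O(n²) algorithm analyzed above can be sketched for the dual Vandermonde system Vᵀa = f (i.e. polynomial interpolation): stage 1 builds the Newton divided differences in place, stage 2 converts the Newton form to monomial coefficients. This is a textbook rendering of the Björck-Pereyra scheme; variable names and the test polynomial are ours.

```python
def bjorck_pereyra_dual(x, f):
    """Solve V^T a = f, V[i][j] = x[i]**j: coefficients of the
    interpolating polynomial through (x[i], f[i])."""
    n = len(x)
    c = list(f)
    for k in range(n - 1):               # stage 1: divided differences
        for j in range(n - 1, k, -1):
            c[j] = (c[j] - c[j - 1]) / (x[j] - x[j - k - 1])
    for k in range(n - 2, -1, -1):       # stage 2: Newton -> monomial basis
        for j in range(k, n - 1):
            c[j] -= x[k] * c[j + 1]
    return c                             # c[0] + c[1]*t + ... + c[n-1]*t**(n-1)

# Points nonnegative and increasing (the case covered by the analysis),
# values of p(t) = 2 + 3t - t^2 + t^3 at t = 0, 1, 2, 3:
pts = [0.0, 1.0, 2.0, 3.0]
vals = [2.0, 5.0, 12.0, 29.0]
coeffs = bjorck_pereyra_dual(pts, vals)
```

No Vandermonde matrix is ever formed, which is why the error bound can avoid its (often huge) condition number.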

18.
Optimal successive overrelaxation iterative methods for P-cyclic matrices
Summary. We consider linear systems whose associated block Jacobi matrices J_p are weakly cyclic of index p. In a recent paper, Pierce, Hadjidimos and Plemmons [13] proved that the block two-cyclic successive overrelaxation (SOR) iterative method is numerically more effective than the block q-cyclic SOR method, 2 < q ≤ p, if the eigenvalues of J_p^p are either all non-negative or all non-positive. Based on the theory of stationary p-step methods, we give an alternative proof of their theorem. We further determine the optimal relaxation parameter of the p-cyclic SOR method under the assumption that the eigenvalues of J_p^p are contained in a real interval, thereby extending results due to Young [19] (for the case p = 2) and Varga [15] (for p > 2). Finally, as a counterpart to the result of Pierce, Hadjidimos and Plemmons, we show that, under this more general assumption, the two-cyclic SOR method is not always superior to the q-cyclic SOR method, 2 < q ≤ p. Dedicated to R. S. Varga on the occasion of his 60th birthday. Research supported in part by the Deutsche Forschungsgemeinschaft.
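For readers unfamiliar with SOR itself, a plain point-SOR sweep with relaxation parameter omega looks as follows; this is a minimal generic sketch (the matrix and omega are our illustrative choices, and the example is not p-cyclic), not the block p-cyclic variant whose optimal omega the paper determines.

```python
def sor(A, b, omega, iters):
    """Point SOR for Ax = b; converges for SPD A when 0 < omega < 2."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel value, then relax by omega toward it:
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = sor(A, b, 1.05, 50)
```

Choosing omega well matters: the whole point of optimal-SOR theory (Young for p = 2, Varga for p > 2, extended here) is that the asymptotic convergence rate is very sensitive to the relaxation parameter.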

19.
Summary. Recently, special attention has been given in the literature to the problem of accurately computing the least-squares solution of very large-scale overdetermined systems of linear equations, such as those occurring in geodetic applications. In particular, it has been suggested that such problems can be solved iteratively by applying the block SOR (Successive Overrelaxation) and EGS1 (Extrapolated Gauss-Seidel 1) plus semi-iterative methods to a linear system whose coefficient matrix is 2-cyclic or 3-cyclic. The comparison of 2-block SOR and 3-block SOR was made in [1] and showed that 2-block SOR is better. In [6], the authors also proved that 3-block EGS1-SI is better than 3-block SOR. Here, we first show that the 2-block DJ (Double Jacobi)-SI, GS-SI and EGS1-SI methods are equivalent, and all of them are equivalent to the 3-block EGS1-SI method; then we prove that the composite methods and 2-block SOR have the same asymptotic rate of convergence, but the former have a better average rate of convergence; finally, numerical experiments are reported which confirm that the 3-block EGS1-SI is better than the 2-block SOR.

20.
Summary. Utilizing kernel structure properties, a unified construction of Hankel matrix inversion algorithms is presented. Three types of algorithms are obtained: 1) O(n²) complexity Levinson-type, 2) O(n) parallel complexity Schur-type, and 3) O(n log² n) complexity asymptotically fast ones. All algorithms work without additional assumptions (such as strong nonsingularity).
