Similar Documents
20 similar documents found (search time: 62 ms)
1.
We introduce a new preconditioner, ILUCP, to be used with an iterative method for solving sparse linear systems. It is based on an incomplete LU factorization combining Crout's formulation of Gaussian elimination with pivoting by columns. It is usually faster than ILUTP, which is based on a delayed-update version of Gaussian elimination with pivoting, but it requires more memory. For applications where memory is not a primary concern, ILUCP can be an attractive alternative to ILUTP. Copyright © 2005 John Wiley & Sons, Ltd.
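To illustrate the Crout formulation this preconditioner builds on: at step k, column k of L and row k of U are computed from already-finished factor entries. The dense, pivot-free sketch below is illustrative only; ILUC-type preconditioners add dropping rules (and, in ILUCP, column pivoting) on top of this same loop structure.

```python
def crout_lu(A):
    """Dense Crout LU factorization: A = L U with U having a unit diagonal.

    At step k the k-th column of L and the k-th row of U are formed from
    previously computed entries -- the loop order that ILUC/ILUCP
    preconditioners augment with dropping and pivoting.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # column k of L
        for i in range(k, n):
            L[i][k] = A[i][k] - sum(L[i][m] * U[m][k] for m in range(k))
        # row k of U (unit diagonal); assumes a nonzero pivot L[k][k]
        U[k][k] = 1.0
        for j in range(k + 1, n):
            U[k][j] = (A[k][j] - sum(L[k][m] * U[m][j] for m in range(k))) / L[k][k]
    return L, U
```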

2.
Parallel algorithms for solving tridiagonal and near-circulant systems
Many problems in mathematics and applied science lead to the solution of linear systems having circulant coefficient matrices. This paper presents a new stable method for the exact solution of non-symmetric tridiagonal circulant linear systems of equations. The method is quite competitive with Gaussian elimination in terms of both arithmetic operations and storage requirements, and it is also competitive with the modified double-sweep method. The method can be applied to solve near-circulant tridiagonal systems and is, in addition, modified to allow for parallel processing.
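One classical way to solve such a system exactly (a standard textbook approach, not necessarily the paper's method) combines the Thomas algorithm with the Sherman-Morrison formula, writing the circulant tridiagonal matrix as a plain tridiagonal matrix plus a rank-one correction:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by Gaussian elimination without pivoting
    (the Thomas algorithm); sub[0] and sup[-1] are unused."""
    n = len(diag)
    b, c, d = diag[:], sup[:], rhs[:]
    for i in range(1, n):
        w = sub[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def solve_circulant_tridiag(a, b, c, rhs):
    """Solve the circulant tridiagonal system with constant diagonals:
    main diagonal b, sub-diagonal a, super-diagonal c, and wrap-around
    corners A[0][n-1] = a, A[n-1][0] = c, via Sherman-Morrison:
    A = T + u v^T with T tridiagonal."""
    n = len(rhs)
    gamma = -b  # any nonzero choice that keeps T nonsingular
    diag = [b] * n
    diag[0] = b - gamma
    diag[-1] = b - a * c / gamma
    sub, sup = [a] * n, [c] * n
    u = [0.0] * n
    u[0], u[-1] = gamma, c
    v0, vn = 1.0, a / gamma          # v = (1, 0, ..., 0, a/gamma)
    y = thomas(sub, diag, sup, rhs)  # T y = rhs
    z = thomas(sub, diag, sup, u)    # T z = u
    vy = v0 * y[0] + vn * y[-1]
    vz = v0 * z[0] + vn * z[-1]
    return [y[i] - vy / (1.0 + vz) * z[i] for i in range(n)]
```

The cost is two tridiagonal sweeps plus O(n) corrections, matching the abstract's claim of competitiveness with Gaussian elimination in operations and storage.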

3.
Part I of this work deals with the forward error analysis of Gaussian elimination for general linear algebraic systems. The error analysis is based on a linearization method which determines first-order approximations of the absolute errors exactly. Superposition and cancellation of error effects, and the structure and sparsity of the coefficient matrices, are completely taken into account by this method. The most important results of the paper are new condition numbers and associated optimal component-wise error and residual estimates for the solutions of linear algebraic systems under data perturbations and perturbations by rounding errors in the arithmetic floating-point operations. The estimates do not use vector or matrix norms. The relative data and rounding condition numbers, as well as the associated backward and residual stability constants, are scaling-invariant. The condition numbers can be computed approximately from the input data, the intermediate results, and the solution of the linear system. Numerical examples show that by these means realistic bounds for the errors and the residuals of approximate solutions can be obtained. Using the forward error analysis, typical results of backward error analysis are also deduced. Stability theorems and a priori error estimates for special classes of linear systems are proved in Part II of this work.

4.
On the Automatic Scaling of Matrices for Gaussian Elimination
The usual pivotal strategies for Gaussian elimination are based on the assumption that all the matrix elements are of comparable size. We propose an algorithm for scaling based on the assumption that the given matrix can be scaled to this required form. Some numerical experiments are presented to show that it produces much better results than straightforward row and column norm equilibration, particularly in the sparse case, and that the computing cost is moderate.
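For reference, the "straightforward row and column norm equilibration" baseline mentioned above can be sketched as one pass of max-norm scaling (an illustrative sketch, assuming no zero rows or columns):

```python
def equilibrate(A):
    """One pass of row then column max-norm equilibration: divide each row,
    then each column, by its largest absolute entry, so every entry of the
    scaled matrix lies in [-1, 1].  Returns the scaled matrix and the row
    and column scale factors."""
    n, m = len(A), len(A[0])
    r = [max(abs(x) for x in row) for row in A]          # row scales
    B = [[A[i][j] / r[i] for j in range(m)] for i in range(n)]
    c = [max(abs(B[i][j]) for i in range(n)) for j in range(m)]  # column scales
    B = [[B[i][j] / c[j] for j in range(m)] for i in range(n)]
    return B, r, c
```

This simple scheme ignores the interaction between row and column scales, which is precisely where a more careful algorithm (such as the one the abstract proposes) can do better, especially for sparse matrices.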

5.
In this paper, we consider the problem of finding an inner estimation of the solution set of a fuzzy linear system with a real-valued coefficient matrix and a fuzzy-valued right-hand side vector. The proposed idea is based on using the interval Gaussian elimination procedure to produce an inner estimation of the solution set. To this end, we first apply the interval Gaussian elimination procedure to obtain the solution set of the fuzzy linear system; then, by restricting it through the solution of a crisp linear system, we find an inner estimation of the solution set that satisfies the fuzzy linear system. Finally, several numerical examples are given to show the efficiency and ability of the method.

6.
A new method is proposed for an accurate solution of nearly singular systems of linear equations. The method uses the truncated singular value decomposition of the initial coefficient matrix at the first stage and the Gaussian elimination procedure for a well-conditioned reduced system of linear equations at the last stage. In contrast to various regularization techniques, the method does not require any additional physical information on the problem.
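The truncation stage can be sketched as follows. This is a minimal illustration of truncated SVD only: the reduced problem here is solved trivially in the SVD basis, whereas the paper applies Gaussian elimination to its own well-conditioned reduced system. The tolerance `tol` is an assumed parameter, not from the paper.

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-10):
    """Two-stage solve for a nearly singular system:
    (1) discard singular values below tol * sigma_max,
    (2) solve the remaining well-conditioned diagonal problem.
    Returns the minimum-norm solution of the truncated problem."""
    U, s, Vt = np.linalg.svd(A)
    keep = s > tol * s[0]
    y = (U[:, keep].T @ b) / s[keep]   # reduced system: diag(s) y = U^T b
    return Vt[keep].T @ y
```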

7.
In this paper a Gauss-Jordan algorithm with column interchanges is presented and analysed. We show that, in contrast with Gaussian elimination, the Gauss-Jordan algorithm has essentially different properties when column interchanges rather than row interchanges are used to improve numerical stability. For solutions obtained by Gauss-Jordan with column interchanges, a more satisfactory bound for the residual norm can be given. The analysis gives theoretical evidence that the algorithm yields numerical solutions as good as those obtained by Gaussian elimination and that, in most practical situations, the residuals are equally small. This is confirmed by numerical experiments. Moreover, timing experiments on a Cyber 205 vector computer show that the algorithm has good vectorisation properties.
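A minimal dense sketch of Gauss-Jordan with column interchanges follows; because column swaps permute the unknowns rather than the equations, the permutation is tracked and undone at the end. This illustrates the idea only, not the paper's implementation.

```python
def gauss_jordan_colpivot(A, b):
    """Gauss-Jordan elimination with column interchanges: at step k the
    pivot is the largest entry in row k among columns k..n-1, and columns
    (i.e. unknowns) are swapped instead of rows."""
    n = len(A)
    A = [row[:] for row in A]
    x = b[:]
    perm = list(range(n))  # perm[k] = original index of the unknown in slot k
    for k in range(n):
        p = max(range(k, n), key=lambda j: abs(A[k][j]))
        for row in A:
            row[k], row[p] = row[p], row[k]
        perm[k], perm[p] = perm[p], perm[k]
        piv = A[k][k]
        A[k] = [v / piv for v in A[k]]
        x[k] /= piv
        for i in range(n):          # eliminate above AND below (Gauss-Jordan)
            if i != k:
                f = A[i][k]
                A[i] = [A[i][j] - f * A[k][j] for j in range(n)]
                x[i] -= f * x[k]
    sol = [0.0] * n
    for k in range(n):
        sol[perm[k]] = x[k]         # undo the variable permutation
    return sol
```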

8.
The article is devoted to the computer realization of well-known computational algorithms of linear algebra with sparse matrices. Formal programs of operations of the second kind (subschemes) over sparse matrices are derived, as well as subschemes realizing the normalized expansion of a matrix, the modified Danilewski method, the stepwise choice of the leading element in a process of Gaussian elimination type, the search for approximations to an eigenvalue by the method of traces, etc. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 58, pp. 130–148, 1976.

9.
Different algorithms, based on Gaussian elimination, for the solution of dense linear systems of equations are discussed for a multiprocessor ring. The number of processors is assumed not to exceed the problem size. A fairly general model for data transfer is proposed, and the algorithms are analyzed with respect to their requirements of arithmetic as well as communication times.

10.
Many problems arising in different fields of science and engineering can be reduced, by applying some appropriate discretization, either to a system of linear algebraic equations or to a sequence of such systems. The solution of a system of linear algebraic equations is very often the most time-consuming part of the computational process during the treatment of the original problem, because these systems can be very large (containing up to many millions of equations). It is, therefore, important to select fast, robust and reliable methods for their solution, even when fast modern computers are available. Since the coefficient matrices of the systems are normally sparse (i.e. most of their elements are zeros), the first requirement is to efficiently exploit the sparsity. However, this is normally not sufficient when the systems are very large. The computation of preconditioners based on approximate LU factorizations, and their use in efforts to further increase the efficiency of the calculations, is discussed in this paper. Computational experiments based on comprehensive comparisons of many numerical results, obtained by using ten well-known methods for solving systems of linear algebraic equations (direct Gaussian elimination and nine iterative methods), are reported. Most of the considered methods are preconditioned Krylov subspace algorithms.
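As a small illustration of the preconditioned Krylov idea, here is conjugate gradients with a diagonal (Jacobi) preconditioner standing in for the approximate LU factorizations discussed in the paper; the sketch assumes A is symmetric positive definite.

```python
def jacobi_pcg(A, b, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients with M = diag(A): solve M z = r
    cheaply at each step and build search directions from z instead of r."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]
    x = [0.0] * n
    r = b[:]
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

A stronger preconditioner (e.g. an incomplete LU factorization) replaces the `Minv` solve with a pair of sparse triangular solves; the surrounding iteration is unchanged.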

11.
Iterative methods, and especially Krylov subspace methods (KSM), are a very useful numerical tool for solving large sparse linear systems arising in science and engineering modeling. More recently, nested-loop KSM have been proposed that improve the convergence of the traditional KSM. In this article, we review residual cutting (RC) and generalized residual cutting (GRC), which are nested-loop methods for large sparse linear systems. We also show that GRC is a KSM that is equivalent to Orthomin with variable preconditioning. We use the modified Gram–Schmidt method to derive a stable GRC algorithm. We show that GRC provides a general framework for constructing a class of "hybrid" (nested) KSM based on the choice of inner-loop method. We conduct numerical experiments using nonsymmetric indefinite matrices from a widely used library of sparse matrices that validate the efficiency and robustness of the proposed methods.

12.
An adaptive Richardson iteration method is described for the solution of large sparse symmetric positive definite linear systems of equations with multiple right-hand side vectors. This scheme "learns" about the linear system to be solved by computing inner products of residual matrices during the iterations. These inner products are interpreted as block modified moments. A block version of the modified Chebyshev algorithm is presented which yields a block tridiagonal matrix from the block modified moments and the recursion coefficients of the residual polynomials. The eigenvalues of this block tridiagonal matrix define an interval, which determines the choice of relaxation parameters for Richardson iteration. Only minor modifications are necessary in order to obtain a scheme for the solution of symmetric indefinite linear systems with multiple right-hand side vectors; we outline the changes required. Received April 22, 1993
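The underlying stationary Richardson iteration, with a single fixed relaxation parameter, can be sketched as below; the paper's adaptive scheme instead estimates the spectrum from block modified moments to choose good relaxation parameters on the fly.

```python
def richardson(A, b, omega, tol=1e-10, maxit=1000):
    """Stationary Richardson iteration x <- x + omega * (b - A x).
    For symmetric positive definite A it converges whenever
    0 < omega < 2 / lambda_max(A), which is why estimating the
    eigenvalue interval (as the adaptive scheme does) matters."""
    n = len(b)
    x = [0.0] * n
    for _ in range(maxit):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            break
        x = [x[i] + omega * r[i] for i in range(n)]
    return x
```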

13.
The paper addresses the orthogonal and variational properties of a family of iterative algorithms in Krylov subspaces for solving systems of linear algebraic equations (SLAE) with sparse nonsymmetric matrices. A biconjugate residual method, a squared biconjugate residual method, and a stabilized conjugate residual method are proposed and studied. Results of numerical experiments are also given for a series of model problems.

14.
We consider solving large sparse symmetric singular linear systems. We first introduce an algorithm for right-preconditioned minimum residual (MINRES) and prove that its iterates converge to the preconditioner-weighted least squares solution without breakdown for an arbitrary right-hand-side vector and an arbitrary initial vector, even if the linear system is singular and inconsistent. For the special case when the system is consistent, we prove that the iterates converge to a min-norm solution with respect to the preconditioner if the initial vector is in the range space of the right-preconditioned coefficient matrix. Furthermore, we propose a right-preconditioned MINRES using symmetric successive over-relaxation (SSOR) with Eisenstat's trick. Numerical experiments on semidefinite systems arising in electromagnetic analysis, among others, indicate that the method is efficient and robust. Finally, we show that the residual norm can be further reduced by restarting the iterations.

15.
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303–321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix–vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that CMRH is the only method with a long-term recurrence that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

16.
When convergent Jacobi or Gauss-Seidel iterations can be applied to solve systems of linear equations, a natural question is how convergence rates are affected if the original system is modified by performing some Gaussian elimination. We prove that if the initial iteration matrix is nonnegative, then such elimination improves convergence. Our results extend those contained in [4].
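For concreteness, a plain Gauss-Seidel sweep looks like this; the paper's result concerns how the convergence rate of such an iteration changes after some unknowns are first removed by Gaussian elimination.

```python
def gauss_seidel(A, b, sweeps=50):
    """Gauss-Seidel iteration: each unknown is updated in place using the
    most recent values of the others, so new information propagates within
    a single sweep (unlike Jacobi)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```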

17.
This paper addresses the problem of shape-preserving spline interpolation formulated as a differential multipoint boundary value problem (DMBVP for short). Its discretization by a mesh method yields a five-diagonal linear system which can be ill-conditioned for unequally spaced data. Using the superposition principle we split this system into a set of tridiagonal linear systems with diagonal dominance. The latter can be stably solved either by direct methods (Gaussian elimination) or by iterative methods (the SOR method and finite-difference schemes in fractional steps), and they admit effective parallelization. Numerical examples illustrate the main features of this approach.

19.
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose. This revised version was published online in October 2005 with corrections to the Cover Date.

20.
A large number of pivotal strategies for use in conjunction with Gaussian elimination for solving sparse systems of linear equations are compared on the grounds of how well they preserve sparsity, how many arithmetic operations they involve, and their computational cost. Conclusions are based mainly on the results obtained on test problems.
