Similar Documents
20 similar documents found (search time: 281 ms)
1.
Abaffy, Broyden, and Spedicato (ABS) have recently proposed a class of direct methods for solving nonsparse linear systems. It is the purpose of this paper to demonstrate that with proper choice of parameters, ABS methods exploit sparsity in a natural way. In particular, we study the choice of parameters which corresponds to an LU-decomposition of the coefficient matrix. In many cases, the fill-in, represented by the nonzero elements of the deflection matrix, uses less storage than the corresponding fill-in of an explicit LU factorization. Hence the above can be a viable and simple method for solving sparse linear systems. A simple reordering scheme for the coefficient matrix is also proposed for the purpose of reducing fill-in of the deflection matrices.
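The explicit LU route that the ABS variant is compared against can be illustrated with standard sparse software. The sketch below (Python/SciPy, not the authors' ABS code; the arrowhead test matrix is an illustrative choice) shows why ordering matters for fill-in: factored in natural order the matrix fills in completely, while a fill-reducing column ordering keeps the factors sparse.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Arrowhead matrix: dense first row/column, otherwise diagonal.
# Eliminating the dense row/column first causes complete fill-in.
n = 200
A = sp.lil_matrix((n, n))
A.setdiag(4.0)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n          # keep the matrix diagonally dominant
A = A.tocsc()

lu_nat = spla.splu(A, permc_spec="NATURAL")   # no reordering
lu_amd = spla.splu(A, permc_spec="COLAMD")    # fill-reducing ordering

fill_nat = lu_nat.L.nnz + lu_nat.U.nnz
fill_amd = lu_amd.L.nnz + lu_amd.U.nnz

b = np.ones(n)
x = lu_amd.solve(b)
```

The same motivation drives the reordering scheme proposed in the paper: bring the coefficient matrix into a form whose factors (or deflection matrices) stay sparse.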

2.
We present the results of numerical experiments aimed at comparing two recently proposed sparse approximate inverse preconditioners from the point of view of robustness, cost, and effectiveness. Results for a standard ILU preconditioner are also included. The numerical experiments were carried out on a Cray C98 vector processor. This work was partially supported by the GA AS CR under grant 2030706 and by the grant GA CR 205/96/0921.
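For context, a minimal example of the kind of baseline being compared: an incomplete LU factorization applied as a preconditioner inside a Krylov solver (SciPy sketch; the 2-D Poisson test matrix and drop tolerance are illustrative choices, not the paper's experiments).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix, a standard sparse test problem.
def poisson2d(m):
    I = sp.identity(m)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

A = poisson2d(30)
n = A.shape[0]
b = np.ones(n)

# Incomplete LU factors used as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
```

Approximate inverse preconditioners differ in that the preconditioner is applied by matrix-vector products rather than triangular solves, which vectorizes better on machines like the Cray.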

3.
We describe a procedure for determining a few of the largest singular values of a large sparse matrix. The method by Golub and Kent which uses the method of modified moments for estimating the eigenvalues of operators used in iterative methods for the solution of linear systems of equations is appropriately modified in order to generate a sequence of bidiagonal matrices whose singular values approximate those of the original sparse matrix. A simple Lanczos recursion is proposed for determining the corresponding left and right singular vectors. The potential asynchronous computation of the bidiagonal matrices using modified moments with the iterations of an adapted Chebyshev semi-iterative (CSI) method is an attractive feature for parallel computers. Comparisons in efficiency and accuracy with an appropriate Lanczos algorithm (with selective re-orthogonalization) are presented on large sparse (rectangular) matrices arising from applications such as information retrieval and seismic reflection tomography. This procedure is essentially motivated by the theory of moments and Gauss quadrature. This author's work was supported by the National Science Foundation under grants NSF CCR-8717492 and CCR-910000N (NCSA), the U.S. Department of Energy under grant DOE DE-FG02-85ER25001, and the Air Force Office of Scientific Research under grant AFOSR-90-0044 while at the University of Illinois at Urbana-Champaign Center for Supercomputing Research and Development. This author's work was supported by the U.S. Army Research Office under grant DAAL03-90-G-0105, and the National Science Foundation under grant NSF DCR-8412314.
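Routines built on the same Lanczos/Golub-Kahan machinery are available in standard libraries. A sketch of computing a few largest singular values of a sparse matrix and checking them against a dense SVD (SciPy's `svds`, not the authors' implementation; the random test matrix is an assumption):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
A = sp.random(500, 300, density=0.01, random_state=rng, format="csr")

# A few largest singular values/vectors via a Lanczos-type iteration.
u, s, vt = svds(A, k=5)
s = np.sort(s)[::-1]          # sort descending for comparison

# Reference: dense SVD of the same (small enough) matrix.
s_dense = np.linalg.svd(A.toarray(), compute_uv=False)[:5]
```

For truly large matrices the dense reference is unaffordable, which is exactly the regime the paper targets.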

4.
Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak performance. For sparse matrices, storage and operations can be saved by storing and operating on nonzero elements only. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each of which a dense block is treated. The blocks must be large enough to achieve a high computational speed, but not too large, because this will lead to a large increase in both the computing time and the storage. A special “locally optimized reordering algorithm” (LORA) is described, which reorders the matrix so that dense blocks can be constructed and treated with some standard software, say LAPACK or NAG. These ideas are implemented for linear least-squares problems. The rectangular matrices (that appear in such problems) are decomposed by an orthogonal method. Results obtained on a CRAY C92A computer demonstrate the efficiency of using large dense blocks.
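The dense kernel applied to each block, an orthogonal decomposition of a rectangular matrix for least squares, can be sketched as follows (NumPy's Householder QR; LORA itself is not reproduced here, and the random data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 8))   # a "dense block" of a larger problem
b = rng.standard_normal(100)

# Orthogonal method for min ||Ax - b||_2: factor A = QR,
# then solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # library reference
```

On vector and cache-based machines this dense kernel runs near peak speed, which is what makes the block strategy pay off.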

5.
Our randomized additive preconditioners are readily available and regularly facilitate the solution of linear systems of equations and eigen-solving for a very large class of input matrices. We study the generation of such preconditioners and their impact on the rank and the condition number of a matrix. We also propose some techniques for their refinement and two alternative versions of randomized preprocessing. Our analysis and experiments show the power of our approach even where we employ weak randomization, that is, generating sparse and structured preconditioners defined by a small number of random parameters.
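The basic effect of an additive preconditioner on the condition number can be seen in a few lines. A sketch (NumPy; a fixed rank-one term is used instead of a random one so the example is reproducible; the test matrix is an illustrative assumption):

```python
import numpy as np

# An ill-conditioned (nearly singular) symmetric matrix.
A = np.diag([1.0, 1.0, 1.0, 1e-10])

# Additive preconditioning: add a low-rank term U U^T of moderate
# norm; the sum is far better conditioned than A itself.
U = np.full((4, 1), 0.5)
C = A + U @ U.T

cond_A = np.linalg.cond(A)
cond_C = np.linalg.cond(C)
```

Solves with A can then be reduced to solves with the well-conditioned C plus a low-rank correction (e.g. via the Sherman-Morrison-Woodbury formula).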

6.
We propose a hybrid sparse system solver for handling linear systems using algebraic domain decomposition-based techniques. The solver consists of several stages. The first stage uses a reordering scheme that brings as many of the largest matrix elements as possible closest to the main diagonal. This is followed by partitioning the coefficient matrix into a set of overlapped diagonal blocks that contain most of the largest elements of the coefficient matrix. The only constraint here is to minimize the size of each overlap. Separating these blocks into independent linear systems with the constraint of matching the solution parts of neighboring blocks that correspond to the overlaps, we obtain a balance system. This balance system is not formed explicitly and has a size that is much smaller than the original system. Our novel solver requires only a one-time factorization of each diagonal block, and in each outer iteration, obtaining only the upper and lower tips of a solution vector where the size of each tip is equal to that of the individual overlap. This scheme proves to be scalable on clusters of nodes in which each node has a multicore architecture. Numerical experiments comparing the scalability of our solver with direct and preconditioned iterative methods are also presented.

7.
In this paper, we establish the generalized symmetric SOR method (GSSOR) for solving large sparse augmented systems of linear equations, which is an extension of the SSOR iteration method. The convergence of the GSSOR method for augmented systems is studied. Numerical results show that the method is effective.
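The SSOR/GSSOR family builds on the basic SOR sweep. A minimal dense-matrix SOR iteration for reference (plain SOR, not the generalized symmetric variant of the paper; the model problem and relaxation parameter are illustrative assumptions):

```python
import numpy as np

def sor(A, b, omega, sweeps):
    """Forward SOR sweeps for Ax = b (Gauss-Seidel with relaxation)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]   # uses already-updated x[j], j < i
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# SPD tridiagonal model problem.
n = 50
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
omega = 2 / (1 + np.sin(np.pi / (n + 1)))   # optimal omega for this model problem
x = sor(A, b, omega, sweeps=400)
```

GSSOR applies relaxation of this kind to the 2x2 block structure of an augmented (saddle-point) system.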

8.
Computation of approximate factors for the inverse constitutes an algebraic approach to preconditioning large and sparse linear systems. In this paper, the aim is to combine standard preconditioning ideas with sparse approximate inverse approximation, to have dense approximate inverse approximations (implicitly). For optimality, the approximate factoring problem is associated with a minimization problem involving two matrix subspaces. This task can be converted into an eigenvalue problem for a Hermitian positive semidefinite operator whose smallest eigenpairs are of interest. Because of storage and complexity constraints, the power method appears to be the only admissible algorithm for devising sparse–sparse iterations. The subtle issue of choosing the matrix subspaces is addressed. Numerical experiments are presented.
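A common entry point to sparse approximate inverses is the Frobenius-norm least-squares formulation, which the subspace approach above generalizes. A sketch (classical columnwise SPAI over a fixed pattern, not the paper's two-subspace eigenvalue method; the tridiagonal test matrix is illustrative):

```python
import numpy as np

def spai(A, pattern):
    """Sparse approximate inverse: minimize ||A m_j - e_j||_2 for each
    column j, with m_j restricted to the given sparsity pattern."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        idx = np.flatnonzero(pattern[:, j])          # allowed nonzeros of column j
        mj, *_ = np.linalg.lstsq(A[:, idx], np.eye(n)[:, j], rcond=None)
        M[idx, j] = mj
    return M

n = 40
A = np.diag(3.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
M = spai(A, pattern=(A != 0))          # reuse A's own sparsity pattern
M_jacobi = np.diag(1.0 / np.diag(A))   # diagonal (Jacobi) preconditioner
```

Since the diagonal pattern is a subset of A's pattern, the SPAI residual ||AM - I|| can only be smaller than Jacobi's, which is the sense in which richer patterns buy better preconditioners.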

9.
A backward error for inverse singular value problems with respect to an approximate solution is defined, and an explicit expression for the backward error is derived by extending the approach described in [J.G. Sun, Backward errors for the inverse eigenvalue problem, Numer. Math. 82 (1999) 339-349]. The expression may be useful for testing the stability of practical algorithms.

10.
Wavelet sparse approximate inverse preconditioners
We show how to use wavelet compression ideas to improve the performance of approximate inverse preconditioners. Our main idea is to first transform the inverse of the coefficient matrix into a wavelet basis, before applying standard approximate inverse techniques. In this process, smoothness in the entries of A⁻¹ is converted into small wavelet coefficients, thus allowing a more efficient approximate inverse approximation. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverses. Supported by grants from ONR: ONR-N00014-92-J-1890, and the Army Research Office: DAAL-03-91-C-0047 (Univ. of Tenn. subcontract ORA4466.04 Amendment 1 and 2). The first and the third author also acknowledge support from RIACS/NASA Ames NAS 2-96027 and the Alfred P. Sloan Foundation as Doctoral Dissertation Fellows, respectively. The work was supported by the Natural Sciences and Engineering Research Council of Canada, the Information Technology Research Centre (which is funded by the Province of Ontario), and RIACS/NASA Ames NAS 2-96027.
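The compression effect can be seen with the simplest wavelet. A sketch of a full orthonormal Haar transform applied to samples of a smooth function, illustrating why smooth data yields few significant coefficients (this is the mechanism, not the paper's preconditioner; the test function is an assumption):

```python
import numpy as np

def haar(v):
    """Full orthonormal Haar transform (length must be a power of two)."""
    c = v.astype(float).copy()
    n = len(c)
    while n > 1:
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2.0)   # local averages
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2.0)   # local details
        c[: n // 2] = a          # averages are transformed again next pass
        c[n // 2 : n] = d        # details are kept as final coefficients
        n //= 2
    return c

# Samples of a smooth function: almost all Haar coefficients are tiny.
x = np.linspace(0.0, 1.0, 256)
v = np.exp(-x)
c = haar(v)
big = np.sum(np.abs(c) > 0.01 * np.abs(c).max())   # count significant coefficients
```

Applied row- and column-wise to a matrix with smooth inverse, the same effect concentrates A⁻¹ into few large entries, which a sparse approximate inverse can then capture.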

11.
12.
We consider a time-harmonic electromagnetic scattering problem for an inhomogeneous medium. Some symmetry hypotheses on the refractive index of the medium and on the electromagnetic fields allow us to reduce this problem to a two-dimensional scattering problem. This boundary value problem is defined on an unbounded domain, so its numerical solution cannot be obtained by a straightforward application of usual methods, such as finite difference or finite element methods. A possible way to overcome this difficulty is given by an equivalent integral formulation of this problem, where the scattered field can be computed from the solution of a Fredholm integral equation of the second kind. The numerical approximation of this problem usually produces large dense linear systems. We consider usual iterative methods for the solution of such linear systems, and we study some preconditioning techniques to improve the efficiency of these methods. We show some numerical results obtained with two well-known Krylov subspace methods, i.e., Bi-CGSTAB and GMRES.

13.
We describe how to maintain the triangular factor of a sparse QR factorization when columns are added and deleted and Q cannot be stored for sparsity reasons. The updating procedures could be thought of as a sparse counterpart of Reichel and Gragg’s package QRUP. They allow us to solve a sequence of sparse linear least squares subproblems in which each matrix Bₖ is an independent subset of the columns of a fixed matrix A, and Bₖ₊₁ is obtained by adding or deleting one column. Like Coleman and Hulbert [T. Coleman, L. Hulbert, A direct active set algorithm for large sparse quadratic programs with simple bounds, Math. Program. 45 (1989) 373-406], we adapt the sparse direct methodology of Björck and Oreborn of the late 1980s, but without forming AᵀA, which may be only positive semidefinite. Our Matlab 5 implementation works with a suitable row and column numbering within a static triangular sparsity pattern that is computed in advance by symbolic factorization of AᵀA and preserved with placeholders.
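The flavor of updating R without storing Q can be sketched via the identity RᵀR = AᵀA: when a column is appended, its entries in R follow from a triangular solve. This is a simplified dense analogue of the seminormal-equations idea (the paper's static-pattern sparse machinery is not reproduced; the random data is illustrative):

```python
import numpy as np
from scipy.linalg import solve_triangular

def add_column(R, A, bcol):
    """Append column bcol to A's triangular QR factor R without Q:
    from R^T R = A^T A, the new column r solves R^T r = A^T bcol,
    and the new diagonal entry is sqrt(||bcol||^2 - ||r||^2)."""
    r = solve_triangular(R, A.T @ bcol, trans="T")
    rho = np.sqrt(bcol @ bcol - r @ r)
    n = R.shape[0]
    R_new = np.zeros((n + 1, n + 1))
    R_new[:n, :n] = R
    R_new[:n, n] = r
    R_new[n, n] = rho
    return R_new

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))
bcol = rng.standard_normal(30)
R = np.linalg.qr(A, mode="r")      # triangular factor only, Q discarded
R_new = add_column(R, A, bcol)
A_new = np.column_stack([A, bcol])
```

In the sparse setting the point of the static pattern is that r fits inside a precomputed sparsity structure, so no dynamic storage is needed.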

14.
Summary. The standard approaches to solving overdetermined linear systems construct minimal corrections to the vector c and/or the matrix B such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to c, while in data least squares (DLS) it is restricted to B. In scaled total least squares (STLS) [22], corrections to both c and B are allowed, and their relative sizes depend on a real positive parameter γ. STLS unifies several formulations since it becomes total least squares (TLS) when γ = 1, and in the limit corresponds to LS when γ → 0, and DLS when γ → ∞. This paper analyzes a particularly useful formulation of the STLS problem. The analysis is based on a new assumption that guarantees existence and uniqueness of meaningful STLS solutions for all parameters γ. It makes the whole STLS theory consistent. Our theory reveals the necessary and sufficient condition for preserving the smallest singular value of a matrix while appending (or deleting) a column. This condition represents a basic matrix theory result for updating the singular value decomposition, as well as the rank-one modification of the Hermitian eigenproblem. The paper allows complex data, and the equivalences in the limit of STLS with DLS and LS are proven for such data. It is shown how any linear system can be reduced to a minimally dimensioned core system satisfying our assumption. Consequently, our theory and algorithms can be applied to fully general systems. The basics of practical algorithms for both the STLS and DLS problems are indicated for either dense or large sparse systems. Our assumption and its consequences are compared with earlier approaches. Received June 2, 1999 / Revised version received July 3, 2000 / Published online July 25, 2001
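The classical TLS member of this family (corrections to B and c weighted equally) has a closed-form solution via the SVD of the compound matrix [B c]. A sketch, checked on a compatible system where TLS must recover the exact solution (illustrative data; not the paper's STLS algorithm):

```python
import numpy as np

def tls(B, c):
    """Total least squares: the solution is read off from the right
    singular vector of [B c] belonging to the smallest singular value."""
    M = np.column_stack([B, c])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                 # smallest-singular-value right singular vector
    return -v[:-1] / v[-1]

rng = np.random.default_rng(3)
B = rng.standard_normal((40, 4))
x_true = rng.standard_normal(4)
c = B @ x_true                 # compatible system: [B c] is rank deficient
x = tls(B, c)
```

For a compatible system [x_true; -1] spans the null space of [B c], so the recovered x equals x_true regardless of the sign convention of the SVD.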

15.
A new parallel algorithm for the solution of banded linear systems is proposed. The scheme tears the coefficient matrix into several overlapped independent blocks in which the size of the overlap is equal to the system’s bandwidth. A corresponding splitting of the right-hand side is also provided. The resulting independent, and smaller size, linear systems are solved under the constraint that the solutions corresponding to the overlap regions are identical. This results in a linear system whose size is proportional to the sum of the overlap regions which we refer to as the “balance” system. We propose a solution strategy that does not require obtaining this “balance” system explicitly. Once the balance system is solved, retrieving the rest of the solution can be realized with almost perfect parallelism. Our proposed algorithm is a hybrid scheme that combines direct and iterative methods for solving a single banded system of linear equations on parallel architectures. It has broad applications in finite-element analysis, particularly as a parallel solver of banded preconditioners that can be used in conjunction with outer Krylov iterative schemes.
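The sequential baseline such a parallel scheme competes with is a direct banded solve. For reference, LAPACK-style band storage and a banded solve via SciPy (an illustrative tridiagonal system, not the paper's algorithm):

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system in band storage: row 0 holds the superdiagonal,
# row 1 the main diagonal, row 2 the subdiagonal.
n = 6
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
ab = np.zeros((3, n))
ab[0, 1:] = off     # superdiagonal (first slot unused)
ab[1, :] = main
ab[2, :-1] = off    # subdiagonal (last slot unused)

b = np.arange(1.0, n + 1.0)
x = solve_banded((1, 1), ab, b)   # (1, 1) = one sub- and one superdiagonal

# Dense reference for checking.
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
```

The tearing approach replaces one such global solve by independent solves on overlapped blocks plus a small "balance" system coupling the overlaps.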

16.
Summary. We develop a data-sparse and accurate approximation to parabolic solution operators in the case of a rather general elliptic part given by a strongly P-positive operator [4]. In the preceding papers [12]–[17], a class of matrices (ℋ-matrices) has been analysed which are data-sparse and allow an approximate matrix arithmetic with almost linear complexity. In particular, the matrix-vector/matrix-matrix product with such matrices as well as the computation of the inverse have linear-logarithmic cost. In the present paper, we apply the ℋ-matrix techniques to approximate the exponent of an elliptic operator. Starting with the Dunford-Cauchy representation for the operator exponent, we then discretise the integral by the exponentially convergent quadrature rule involving a short sum of resolvents. The latter are approximated by the ℋ-matrices. Our algorithm inherits a two-level parallelism with respect to both the computation of resolvents and the treatment of different time values. In the case of smooth data (coefficients, boundaries), we prove the linear-logarithmic complexity of the method. Received June 22, 2000 / Revised version received June 6, 2001 / Published online October 17, 2001

17.
Starting from recent formulas for calculating the permanents of some sparse circulant matrices, we obtain more general formulas expressing the permanents of a wider class of matrices as a linear combination of appropriate determinants.
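For small matrices, permanents can be checked directly with Ryser's inclusion-exclusion formula, a general-purpose O(2ⁿn) evaluation (not the circulant-specific formulas of the paper):

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i (sum_{j in S} a_ij)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(S)].sum(axis=1))
    return (-1) ** n * total

A = np.ones((4, 4))   # the permanent of the all-ones n x n matrix is n!
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known, which is why closed-form determinant expressions for special classes are valuable.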

18.
Iterative methods applied to the normal equations AᵀAx = Aᵀb are sometimes used for solving large sparse linear least squares problems. However, when the matrix is rank-deficient many methods, although convergent, fail to produce the unique solution of minimal Euclidean norm. Examples of such methods are the Jacobi and SOR methods as well as the preconditioned conjugate gradient algorithm. We analyze here an iterative scheme that overcomes this difficulty for the case of stationary iterative methods. The scheme combines two stationary iterative methods. The first method produces any least squares solution whereas the second produces the minimum norm solution to a consistent system. This work was supported by the Swedish Research Council for Engineering Sciences, TFR.
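The difficulty being addressed is that a rank-deficient least-squares problem has infinitely many solutions, exactly one of which has minimal norm. A NumPy sketch of that distinction (using the pseudoinverse as ground truth, not the paper's two-stage stationary scheme; the random rank-deficient matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
# Rank-deficient matrix: 5 columns but rank 3.
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 5))
b = rng.standard_normal(20)

# Minimum-norm least squares solution (via the pseudoinverse).
x_min = np.linalg.pinv(A) @ b

# Adding any null-space direction gives another least squares
# solution with the same residual but larger norm.
null_vec = np.linalg.svd(A)[2][-1]    # right singular vector, sigma = 0
x_alt = x_min + 2.0 * null_vec
```

An iterative method that converges to some x_alt has still solved the least-squares problem; the paper's second stage is what removes the null-space component.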

19.
Summary. An adaptive Richardson iteration method is described for the solution of large sparse symmetric positive definite linear systems of equations with multiple right-hand side vectors. This scheme "learns" about the linear system to be solved by computing inner products of residual matrices during the iterations. These inner products are interpreted as block modified moments. A block version of the modified Chebyshev algorithm is presented which yields a block tridiagonal matrix from the block modified moments and the recursion coefficients of the residual polynomials. The eigenvalues of this block tridiagonal matrix define an interval, which determines the choice of relaxation parameters for Richardson iteration. Only minor modifications are necessary in order to obtain a scheme for the solution of symmetric indefinite linear systems with multiple right-hand side vectors. We outline the changes required. Received April 22, 1993
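Stripped of the adaptive moment machinery, Richardson iteration for several right-hand sides is a one-line update. A sketch with the optimal fixed relaxation parameter computed directly from the extreme eigenvalues, which the paper's scheme estimates adaptively instead (model problem and iteration count are illustrative):

```python
import numpy as np

def richardson(A, B, omega, iters):
    """Stationary Richardson iteration X += omega * (B - A X),
    applied to all right-hand sides at once."""
    X = np.zeros_like(B)
    for _ in range(iters):
        X += omega * (B - A @ X)
    return X

n = 30
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
lam = np.linalg.eigvalsh(A)
omega = 2.0 / (lam[0] + lam[-1])    # optimal single parameter for SPD A
B = np.column_stack([np.ones(n), np.arange(n, dtype=float)])  # two right-hand sides
X = richardson(A, B, omega, iters=20000)
```

In practice one cycles through several relaxation parameters chosen from the estimated spectral interval (Chebyshev-like), which converges far faster than the single-parameter sketch above.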

20.
LSQR uses the Golub-Kahan bidiagonalization process to solve sparse least-squares problems with and without regularization. In some cases, projections of the right-hand side vector are required, rather than the least-squares solution itself. We show that projections may be obtained from the bidiagonalization as linear combinations of (theoretically) orthogonal vectors. Even the least-squares solution may be obtained from orthogonal vectors, perhaps more accurately than the usual LSQR solution. (However, LSQR has proved equally good in all examples so far.) Presented at the Cornelius Lanczos International Centenary Conference, North Carolina State University, Raleigh, NC, December 1993. Partially supported by Department of Energy grant DE-FG03-92ER25117, National Science Foundation grants DMI-9204208 and DMI-9500668, and Office of Naval Research grants N00014-90-J-1242 and N00014-96-1-0274.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号