Similar Literature
1.
The Lanczos algorithm is used to compute some eigenvalues of a given symmetric matrix of large order. At each step of the Lanczos algorithm it is valuable to know which eigenvalues of the associated tridiagonal matrix have stabilized at eigenvalues of the given symmetric matrix. We present a robust algorithm which is fast (20j to 40j operations at the jth Lanczos step), uses about 30 words of extra storage, and has a fairly short program (approximately 200 executable statements).
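As a rough illustration of the setting in this abstract, the sketch below (not the paper's stabilization test) runs the plain symmetric Lanczos recurrence with NumPy and compares the Ritz values of the tridiagonal matrix with the extreme eigenvalues of A; the matrix size, step count, and lack of reorthogonalization are illustrative choices.

```python
import numpy as np

def lanczos_tridiag(A, v0, m):
    """Plain m-step Lanczos: returns the tridiagonal matrix T whose
    eigenvalues (Ritz values) approximate eigenvalues of the symmetric
    matrix A.  No reorthogonalization -- illustrative only."""
    n = A.shape[0]
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    for j in range(m):
        w = A @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2
T = lanczos_tridiag(A, rng.standard_normal(200), 30)
print(np.linalg.eigvalsh(T)[-3:])   # Ritz values approximating the largest eigenvalues
print(np.linalg.eigvalsh(A)[-3:])
```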

2.
In this paper we consider algorithms to compute bounds of the A-norm of the error in the preconditioned conjugate gradient (PCG) algorithm. We extend to PCG the formulas that were given in an earlier paper [8]. We give numerical experiments which show that good upper and lower bounds can be obtained, provided estimates of the smallest and largest eigenvalues of the preconditioned matrix are given or adaptively computed.
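A hedged sketch of the kind of bound discussed here, using the classical CG identity ||x − x_k||_A^2 = Σ_{j≥k} γ_j ||r_j||^2 to form a delayed lower bound; it is unpreconditioned and does not reproduce the paper's PCG formulas or its eigenvalue-based upper bounds. The function name and the `delay` parameter are illustrative.

```python
import numpy as np

def cg_with_error_lower_bound(A, b, tol=1e-10, delay=5, maxit=500):
    """CG with a delayed lower bound on the squared A-norm of the error,
    based on  ||x - x_k||_A^2 = sum_{j>=k} gamma_j * ||r_j||^2."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy()
    rr = r @ r
    terms = []                              # gamma_j * ||r_j||^2, one per step
    for k in range(maxit):
        Ap = A @ p
        gamma = rr / (p @ Ap)
        x += gamma * p
        terms.append(gamma * rr)
        r -= gamma * Ap
        rr_new = r @ r
        if k >= delay:
            lb = sum(terms[k - delay:k + 1])    # lower bound at step k - delay
            print(f"step {k - delay:3d}: ||e||_A^2 >= {lb:.3e}")
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((300, 300)); A = M @ M.T + 300 * np.eye(300)
b = rng.standard_normal(300)
x = cg_with_error_lower_bound(A, b)
```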

3.
In this paper we consider the problem of estimating the largest eigenvalue and the corresponding eigenvector of a symmetric matrix. In particular, we consider iterative methods, such as the power method and the Lanczos method. These methods need a starting vector which is usually chosen randomly. We analyze the behavior of these methods when the initial vector is chosen with uniform distribution over the unit n-dimensional sphere. We extend and generalize the results reported earlier. In particular, we give upper and lower bounds on the p-norm of the randomized error, and we improve previously known bounds with a detailed analysis of the role of the multiplicity of the largest eigenvalue.
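For context, a minimal power-method sketch with the starting vector drawn uniformly from the unit sphere (a normalized Gaussian vector); it targets the eigenvalue of largest magnitude and omits the Lanczos variant and the error analysis of the paper.

```python
import numpy as np

def power_method(A, iters=500, rng=None):
    """Power iteration; the start vector is uniform on the unit sphere
    (normalize a standard Gaussian vector).  Converges toward the
    eigenvalue of largest magnitude of the symmetric matrix A."""
    rng = rng or np.random.default_rng()
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ (A @ v), v              # Rayleigh quotient and approximate eigenvector

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 100)); A = (A + A.T) / 2
lam, v = power_method(A, rng=rng)
w = np.linalg.eigvalsh(A)
print(lam)            # should be close to whichever extreme eigenvalue has larger magnitude
print(w[0], w[-1])
```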

4.
Generalized block Lanczos methods for large unsymmetric eigenproblems are presented; they contain the block Arnoldi method as a special case, and block Arnoldi algorithms are developed. The convergence of this class of methods is analyzed when the matrix A is diagonalizable. Upper bounds for the distances between normalized eigenvectors and a block Krylov subspace are derived, and a priori theoretical error bounds for Ritz elements are established. Compared with generalized Lanczos methods, which contain Arnoldi's method, the convergence analysis shows that the block versions have two advantages: first, they may be more efficient for computing clustered eigenvalues; second, they are able to solve multiple eigenproblems. However, a deeper analysis shows that the approximate eigenvectors, or Ritz vectors, obtained by general orthogonal projection methods, including generalized block methods, may fail to converge theoretically for a general unsymmetric matrix A even if the corresponding approximate eigenvalues, or Ritz values, do, since the convergence of Ritz vectors requires stronger sufficient conditions, which may be impossible to satisfy theoretically, than that of Ritz values. The issues of how to restart and how to solve multiple eigenproblems are addressed, and some numerical examples are reported to confirm the theoretical analysis.
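A minimal block Arnoldi sketch, included only to make the block Krylov construction concrete; it uses full block Gram–Schmidt with no breakdown handling or restarting, and the test matrix is an illustrative choice.

```python
import numpy as np

def block_arnoldi(A, V1, m):
    """Block Arnoldi: orthonormal basis V of the block Krylov subspace
    span{V1, A V1, ..., A^(m-1) V1} and the projection H = V^T A V.
    Illustrative only: full block Gram-Schmidt, no breakdown handling."""
    V, _ = np.linalg.qr(V1)
    blocks = [V]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Vj in blocks:                     # orthogonalize against all previous blocks
            W -= Vj @ (Vj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    V = np.hstack(blocks)
    return V, V.T @ A @ V

rng = np.random.default_rng(3)
n = 400
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * rng.standard_normal((n, n))   # unsymmetric test matrix
V, H = block_arnoldi(A, rng.standard_normal((n, 2)), 20)
print(np.sort_complex(np.linalg.eigvals(H))[-3:])   # Ritz values near the rightmost eigenvalues
print(np.sort_complex(np.linalg.eigvals(A))[-3:])
```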

5.
We study algorithms for the approximation of the mild solution of stochastic heat equations on the spatial domain ]0, 1[^d. The error of an algorithm is defined in the L2-sense. We derive lower bounds for the error of every algorithm that uses a total of N evaluations of one-dimensional components of the driving Wiener process W. For equations with additive noise we derive matching upper bounds and construct asymptotically optimal algorithms. The error bounds depend on N and d, and on the decay of the eigenvalues of the covariance of W in the case of nuclear noise. In the latter case the use of nonuniform time discretizations is crucial.

6.
The technique that was used to build the eigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, as in the symmetric case, we can build an algorithm that computes a few smallest-magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals, while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that incrementally computes more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of eigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos method. Furthermore, our incremental eigBiCG, followed by appropriately restarted and deflated BiCGStab, provides a competitive method for systems with multiple right-hand sides.

7.
Many applications, such as subspace-based models in information retrieval and signal processing, require the computation of the singular subspaces associated with the k dominant, or largest, singular values of an m×n data matrix A, where k ≪ min(m,n). Frequently, A is sparse or structured, which usually means that matrix–vector multiplications involving A and its transpose can be done with much less than O(mn) flops, and that A and its transpose can be stored in much less than O(mn) storage locations. Many Lanczos-based algorithms have been proposed through the years because the underlying Lanczos method only accesses A and its transpose through matrix–vector multiplications. We implement a new algorithm, called KSVD, in the Matlab environment for computing approximations to the singular subspaces associated with the k dominant singular values of a real or complex matrix A. KSVD is based upon the Lanczos tridiagonalization method, the WY representation for storing products of Householder transformations, implicit deflation, and the QR factorization. Our Matlab simulations suggest it is a fast and reliable strategy for handling troublesome singular-value spectra.
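Not the KSVD implementation, but a bare Golub–Kahan (Lanczos) bidiagonalization sketch showing how dominant singular values can be approximated from matrix–vector products with A and its transpose alone; it omits the reorthogonalization, WY representation, and implicit deflation the paper relies on.

```python
import numpy as np

def golub_kahan(A, k_steps, rng=None):
    """Golub-Kahan (Lanczos) bidiagonalization: builds a small upper
    bidiagonal matrix whose singular values approximate the dominant
    singular values of A, using only products with A and A.T.
    No reorthogonalization -- approximations are rough."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    alphas, betas = [], []
    u_prev = np.zeros(m)
    beta = 0.0
    for _ in range(k_steps):
        u = A @ v - beta * u_prev
        alpha = np.linalg.norm(u); u /= alpha
        w = A.T @ u - alpha * v
        beta = np.linalg.norm(w); v = w / beta
        alphas.append(alpha); betas.append(beta)
        u_prev = u
    B = np.diag(alphas) + np.diag(betas[:-1], 1)
    return np.linalg.svd(B, compute_uv=False)

rng = np.random.default_rng(4)
A = rng.standard_normal((500, 300))
print(golub_kahan(A, 40, rng)[:3])                  # approximate top singular values
print(np.linalg.svd(A, compute_uv=False)[:3])
```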

8.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
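A minimal block conjugate gradient sketch for several right-hand sides at once, in the spirit of the algorithms described here; it omits the rank monitoring and breakdown handling a robust block implementation needs, and the test matrix is an illustrative SPD example.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, maxit=200):
    """Block conjugate gradient for A X = B with several right-hand sides,
    A symmetric positive definite.  Minimal sketch: no rank monitoring
    or deflation of converged / dependent columns."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    for _ in range(maxit):
        AP = A @ P
        alpha = np.linalg.solve(P.T @ AP, R.T @ R)
        X += P @ alpha
        R_new = R - AP @ alpha
        if np.linalg.norm(R_new) < tol * np.linalg.norm(B):
            return X
        beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
        P = R_new + P @ beta
        R = R_new
    return X

rng = np.random.default_rng(5)
M = rng.standard_normal((300, 300)); A = M @ M.T + 300 * np.eye(300)
B = rng.standard_normal((300, 4))
X = block_cg(A, B)
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))   # relative residual
```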

9.
In this paper, we generalize the algorithm described by Rump and Graillat to compute verified and narrow error bounds such that a slightly perturbed matrix is guaranteed to have an eigenvalue of geometric multiplicity q within the computed error bounds. The corresponding invariant subspace can be obtained directly by our algorithm. Our verification method is based on a bordered-matrix technique. We demonstrate the performance of our algorithm for matrices of dimension up to several hundred with non-defective and defective eigenvalues.

10.
Two natural and efficient stopping criteria are derived for conjugate gradient (CG) methods, based on iteration parameters. The derivation makes use of the inner-product matrix B defining the CG method. In particular, the relationship between the eigenvalues and the B-norm of a matrix is investigated, and it is shown that the ratio of largest to smallest eigenvalues defines the B-condition number of the matrix. Upper and lower bounds on various measures of the error are also given. The compound stopping criterion presented here is an obvious default in software packages because it does not require any additional norm computations.

11.
For a compact spin manifold M isometrically embedded into Euclidean space, we derive extrinsic upper and lower estimates for the eigenvalues of the square of the Dirac operator, which depend on the second fundamental form of the embedding. We also give bounds on the ratios of the eigenvalues. Since the unit sphere and the projective spaces admit standard embeddings into Euclidean spaces, we also obtain the corresponding results for their compact spin submanifolds.

12.
Several methods for computing the smallest eigenvalues of a symmetric and positive definite Toeplitz matrix T have been studied in the literature. Most of them share the disadvantage that they do not reflect symmetry properties of the corresponding eigenvector. In this note we present a Lanczos method which simultaneously approximates the odd and the even spectrum of T at the same cost as the classical Lanczos approach.
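A small numerical illustration (not the paper's method) of the symmetry property such split Lanczos approaches exploit: eigenvectors of a symmetric Toeplitz matrix belonging to simple eigenvalues are either symmetric ("even") or skew-symmetric ("odd") under the reversal matrix J.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(11)
n = 8
T = toeplitz(rng.standard_normal(n))    # random symmetric Toeplitz matrix
w, V = np.linalg.eigh(T)
J = np.eye(n)[::-1]                     # reversal (flip) matrix

# For simple eigenvalues, J v = +v (even) or J v = -v (odd).
for lam, v in zip(w, V.T):
    s = v @ (J @ v)                     # +1 for even, -1 for odd eigenvectors
    print(f"{lam: .3f}  {'even' if s > 0 else 'odd'}")
```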

13.
In each iteration of an interior-point method for semidefinite programming, the maximum step-length that the iterate can take while maintaining the positive semidefiniteness constraint needs to be estimated. In this note, we show how the maximum step-length can be estimated via the Lanczos iteration, a standard iterative method for estimating extremal eigenvalues of a matrix. We also give a posteriori error bounds for the estimate. Numerical results on the performance of the proposed method against two commonly used methods for calculating step-lengths (backtracking via Cholesky factorizations and exact eigenvalue computations) are included.
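A hedged sketch of the underlying step-length computation: with X = L Lᵀ, the largest feasible step is governed by the most negative eigenvalue of L⁻¹ ΔX L⁻ᵀ. A dense eigenvalue solve is used below for clarity where the paper would run a few Lanczos steps instead; names and test data are illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigvalsh

def max_step_length(X, dX):
    """Largest alpha >= 0 such that X + alpha*dX stays positive semidefinite,
    for X symmetric positive definite:
    X + a*dX >= 0  <=>  I + a * L^{-1} dX L^{-T} >= 0  with X = L L^T."""
    L = cholesky(X, lower=True)
    W = solve_triangular(L, dX, lower=True)           # L^{-1} dX
    M = solve_triangular(L, W.T, lower=True).T        # L^{-1} dX L^{-T}
    lam_min = eigvalsh(M)[0]                          # smallest eigenvalue
    return np.inf if lam_min >= 0 else -1.0 / lam_min

rng = np.random.default_rng(6)
G = rng.standard_normal((50, 50)); X = G @ G.T + 50 * np.eye(50)
dX = rng.standard_normal((50, 50)); dX = (dX + dX.T) / 2
alpha = max_step_length(X, dX)
# If alpha is finite, X + alpha*dX should sit (numerically) on the PSD boundary.
print(alpha, np.linalg.eigvalsh(X + alpha * dX)[0])
```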

14.
The use of the fast Fourier transform (FFT) accelerates the Lanczos tridiagonalization method for Hankel and Toeplitz matrices by reducing the complexity of the matrix–vector multiplication. In multiprecision arithmetic, however, the FFT has overheads that make it less competitive than alternative methods when the accuracy exceeds 10000 decimal places. We studied two alternative Hankel matrix–vector multiplication methods, based on multiprecision number decomposition and on recursive Karatsuba-like multiplication, respectively. The first method was uncompetitive because of huge precision losses, while the second turned out to be 5 to 14 times faster than the FFT in the ranges of matrix sizes up to n = 8192 and working precision of b = 32768 bits we were interested in. We successfully applied our approach to eigenvalue calculations in studies of the spectra of matrices that arise in research on the Riemann zeta function. The recursive matrix–vector multiplication significantly outperformed both the FFT and the traditional multiplication in these studies.
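For reference, the standard FFT trick being compared against here: embed the n×n Toeplitz matrix in a 2n×2n circulant and multiply via FFTs in O(n log n). This double-precision NumPy sketch ignores the multiprecision issues that motivate the paper; a Hankel matrix–vector product reduces to the Toeplitz case by reversing the input vector.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(c, r, x):
    """y = T x for the Toeplitz matrix with first column c and first row r
    (c[0] == r[0]): embed T in a 2n x 2n circulant and use the FFT,
    giving O(n log n) work instead of O(n^2)."""
    n = len(c)
    col = np.concatenate([c, [0.0], r[-1:0:-1]])      # first column of the circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

rng = np.random.default_rng(7)
n = 1024
c = rng.standard_normal(n)
x = rng.standard_normal(n)
# compare against the dense symmetric Toeplitz product
print(np.linalg.norm(toeplitz_matvec_fft(c, c, x) - toeplitz(c) @ x))
```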

15.
Theory, algorithms and LAPACK-style software for computing a pair of deflating subspaces with specified eigenvalues of a regular matrix pair (A, B), and error bounds for computed quantities (eigenvalues and eigenspaces), are presented. The reordering of specified eigenvalues is performed with a direct orthogonal transformation method with guaranteed numerical stability. Each swap of two adjacent diagonal blocks in the real generalized Schur form, where at least one of them corresponds to a complex conjugate pair of eigenvalues, involves solving a generalized Sylvester equation and the construction of two orthogonal transformation matrices from certain eigenspaces associated with the diagonal blocks. The swapping of two 1×1 blocks is performed using orthogonal (unitary) Givens rotations. The error bounds are based on estimates of condition numbers for eigenvalues and eigenspaces. The software computes reciprocal values of a condition number for an individual eigenvalue (or a cluster of eigenvalues), a condition number for an eigenvector (or eigenspace), and spectral projectors onto a selected cluster. By computing reciprocal values we avoid overflow. Changes in eigenvectors and eigenspaces are measured by their change in angle. The condition numbers yield both asymptotic and global error bounds. The asymptotic bounds are only accurate for small perturbations (E, F) of (A, B), while the global bounds work for all (E, F) up to a certain bound, whose size is determined by the conditioning of the problem. It is also shown how these upper bounds can be estimated. Fortran 77 software that implements our algorithms for reordering eigenvalues, computing (left and right) deflating subspaces with specified eigenvalues, and condition number estimation is presented. Computational experiments that illustrate the accuracy, efficiency and reliability of our software are also described.
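SciPy exposes comparable functionality through `scipy.linalg.ordqz`, which reorders the generalized Schur form of a pair (A, B). The short sketch below (illustrative data, with the eigenvalues inside the unit circle chosen as the cluster) extracts a right deflating subspace and checks the deflation property; it is not the Fortran 77 software described in the abstract.

```python
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(8)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Reorder the generalized Schur form so that eigenvalues inside the unit
# circle come first; the leading columns of Z then span a right deflating
# subspace for that cluster.
AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='iuc', output='complex')
k = int(np.sum(np.abs(alpha) < np.abs(beta)))     # size of the selected cluster
Zk, Qk = Z[:, :k], Q[:, :k]

# Deflation check: A*Zk and B*Zk both lie in span(Qk).
print(np.linalg.norm(A @ Zk - Qk @ (Qk.conj().T @ (A @ Zk))))
print(np.linalg.norm(B @ Zk - Qk @ (Qk.conj().T @ (B @ Zk))))
```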

16.
This paper is concerned with proving theoretical results related to the convergence of the conjugate gradient (CG) method for solving positive definite symmetric linear systems. Considering the inverse of the projection of the inverse of the matrix, new relations for ratios of the A-norm of the error and the norm of the residual are provided, starting from some earlier results of Sadok (Numer. Algorithms 2005; 40:201–216). The proofs of our results rely on the well-known correspondence between the CG method and the Lanczos algorithm.

17.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX − XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods, such as the block Arnoldi and the block Lanczos algorithms, to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we will show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds of the norm of the error. Numerical experiments will also be given to show the effectiveness of these block methods.
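A minimal sketch of the "A large, B small" case: build an orthonormal basis of the block Krylov subspace K_m(A, C), solve the small projected Sylvester equation with scipy.linalg.solve_sylvester, and lift back. The dimensions, test matrices, and fixed number of blocks m are illustrative, and none of the paper's bounds or restarting strategies are reproduced.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sylvester_block_krylov(A, B, C, m):
    """Galerkin approximation of A X - X B = C with A large and B small:
    project A onto the block Krylov subspace K_m(A, C), solve the small
    projected Sylvester equation, and lift back (no restarting)."""
    V, _ = np.linalg.qr(C)
    blocks = [V]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Vj in blocks:                      # block Gram-Schmidt
            W -= Vj @ (Vj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    V = np.hstack(blocks)                      # orthonormal basis, n x (m*p)
    H = V.T @ A @ V
    Y = solve_sylvester(H, -B, V.T @ C)        # solves H Y - Y B = V^T C
    return V @ Y

rng = np.random.default_rng(9)
n, p = 800, 3
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
B = -np.eye(p)                                 # spectra of A and B must not overlap
C = rng.standard_normal((n, p))
X = sylvester_block_krylov(A, B, C, m=30)
print(np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))   # relative residual
```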

18.
In this paper we explore two sets of polynomials, the orthogonal polynomials and the residual polynomials, associated with a preconditioned conjugate gradient iteration for the solution of the linear system Ax = b. In the context of preconditioning by the matrix C, we show that the roots of the orthogonal polynomials, also known as generalized Ritz values, are the eigenvalues of an orthogonal section of the matrix CA, while the roots of the residual polynomials, also known as pseudo-Ritz values (or roots of kernel polynomials), are the reciprocals of the eigenvalues of an orthogonal section of the matrix (CA)^{-1}. When CA is selfadjoint positive definite, this distinction is minimal, but for the indefinite or nonselfadjoint case this distinction becomes important. We use these two sets of roots to form possibly nonconvex regions in the complex plane that describe the spectrum of CA.

19.
The task of computing a few eigenvalues and associated eigenvectors of a large sparse symmetric matrix arises in many applications. We present new iterative methods designed for the determination of a few extreme or non-extreme eigenvalues and associated eigenvectors. Our methods are based on the recursion formulas of the Implicitly Restarted Lanczos method introduced by Sorensen [1992], but differ from previous applications of these formulas in the selection of the accelerating polynomial. The methods of the present paper require very little computer storage. Numerical examples illustrate that the methods can give rapid convergence. Dedicated to Åke Björck on the occasion of his 60th birthday. This work was supported by NSF grants F377 DMR-8920147 ALCOM, DMS-9409422 and DMS-9205531.

20.
For statistical inferences that involve covariance matrices, it is desirable to obtain an accurate covariance matrix estimate with a well-structured eigen-system. We propose to estimate the covariance matrix through its matrix logarithm based on an approximate log-likelihood function. We develop a generalization of the Leonard and Hsu log-likelihood approximation that no longer requires a nonsingular sample covariance matrix. The matrix log-transformation provides the ability to impose a convex penalty on the transformed likelihood such that the largest and smallest eigenvalues of the covariance matrix estimate can be regularized simultaneously. The proposed method transforms the problem of estimating the covariance matrix into the problem of estimating a symmetric matrix, which can be solved efficiently by an iterative quadratic programming algorithm. The merits of the proposed method are illustrated by a simulation study and two real applications in classification and portfolio optimization. Supplementary materials for this article are available online.
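A toy stand-in for the idea (not the paper's penalized-likelihood fit): shrink the eigenvalues of the matrix logarithm of the sample covariance toward their mean, which regularizes the largest and smallest eigenvalues of the estimate at once. Unlike the paper's approximation, this sketch needs an invertible sample covariance, and the `penalty` weight is an arbitrary illustrative parameter.

```python
import numpy as np
from scipy.linalg import logm

def logm_shrinkage_covariance(X, penalty=0.2):
    """Covariance estimate obtained by shrinking the eigenvalues of the
    matrix logarithm of the sample covariance toward their mean, then
    mapping back with the matrix exponential (via the eigendecomposition)."""
    S = np.cov(X, rowvar=False)
    L = logm(S).real                          # symmetric matrix logarithm of S
    w, V = np.linalg.eigh(L)
    w_shrunk = (1 - penalty) * w + penalty * w.mean()
    return V @ np.diag(np.exp(w_shrunk)) @ V.T

rng = np.random.default_rng(10)
X = rng.standard_normal((60, 40))             # few samples relative to dimension
S = np.cov(X, rowvar=False)
S_hat = logm_shrinkage_covariance(X)
print(np.linalg.cond(S), np.linalg.cond(S_hat))   # shrinkage lowers the condition number
```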

