Similar Documents
20 similar documents found.
1.
Fast algorithms, based on the unsymmetric look-ahead Lanczos and the Arnoldi process, are developed for the estimation of the functional Φ(f) = u^T f(A) v for fixed u, v, and A, where A ∈ ℝ^{n×n} is a large-scale unsymmetric matrix. Numerical results are presented which validate the comparable accuracy of both approaches. Although the Arnoldi process reaches convergence more quickly in some cases, it has greater memory requirements and may not be suitable for especially large applications. Copyright © 2003 John Wiley & Sons, Ltd.
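A minimal sketch of the Arnoldi-based approach described above: project A onto the Krylov space built from v, then estimate u^T f(A) v from the small Hessenberg matrix. The function names and the diagonalization-based evaluation of f(H) are illustrative choices, not the paper's implementation.

```python
import numpy as np

def arnoldi(A, v, k):
    """k steps of the Arnoldi process: orthonormal basis V of the Krylov
    space K_k(A, v) and the truncated upper Hessenberg matrix H = V^T A V."""
    n = len(v)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

def functional_estimate(A, u, v, f, k):
    """Estimate u^T f(A) v ~= ||v|| * u^T V f(H) e1 after k Arnoldi steps.
    f(H) is evaluated via diagonalization (a sketch; assumes H diagonalizable)."""
    V, H = arnoldi(A, v, k)
    w, X = np.linalg.eig(H)
    fH = (X * f(w)) @ np.linalg.inv(X)
    e1 = np.zeros(H.shape[0]); e1[0] = 1.0
    return float(np.real(np.linalg.norm(v) * (u @ V @ fH @ e1)))
```

With k equal to the dimension of A the estimate is exact up to rounding, which gives a simple sanity check.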

2.
The global Arnoldi method can be used to compute exterior eigenpairs of a large non-Hermitian matrix A, but it does not work well for interior eigenvalue problems. Based on the global Arnoldi process that generates an F-orthonormal basis of a matrix Krylov subspace, we propose a global harmonic Arnoldi method for computing certain harmonic F-Ritz pairs that are used to approximate some interior eigenpairs. We propose computing the F-Rayleigh quotients of the large non-Hermitian matrix with respect to harmonic F-Ritz vectors and taking them as new approximate eigenvalues. They are better and more reliable than the harmonic F-Ritz values. The global harmonic Arnoldi method inherits convergence properties of the harmonic Arnoldi method applied to a larger matrix whose distinct eigenvalues are the same as those of the original given matrix. Some properties of the harmonic F-Ritz vectors are presented. As an application, assuming that A is diagonalizable, we show that the global harmonic Arnoldi method is able to solve multiple eigenvalue problems both in theory and in practice. To be practical, we develop an implicitly restarted global harmonic Arnoldi algorithm with certain harmonic F-shifts suggested. In particular, this algorithm can be adaptively used to solve multiple eigenvalue problems. Numerical experiments show that the algorithm is efficient for the eigenproblem and is reliable for quite ill-conditioned multiple eigenproblems.

3.
The equivalence in exact arithmetic of the Lanczos tridiagonalization procedure and the conjugate gradient optimization procedure for solving Ax = b, where A is a real symmetric, positive definite matrix, is well known. We demonstrate that a relaxed equivalence is valid in the presence of errors. Specifically we demonstrate that local ε-orthonormality of the Lanczos vectors guarantees local ε-A-conjugacy of the direction vectors in the associated conjugate gradient procedure. Moreover we demonstrate that all the conjugate gradient relationships are satisfied approximately. Therefore, any statements valid for the conjugate gradient optimization procedure, which we show converges under very weak conditions, apply directly to the Lanczos procedure. We then use this equivalence to obtain an explanation of the Lanczos phenomenon: the empirically observed “convergence” of Lanczos eigenvalue procedures despite total loss of the global orthogonality of the Lanczos vectors.
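The conjugate gradient side of this equivalence can be sketched in a few lines: plain CG produces direction vectors p_0, p_1, ... that are A-conjugate in exact arithmetic, which is the property the abstract relates to Lanczos orthonormality. This is a textbook CG, not the paper's analysis.

```python
import numpy as np

def cg_directions(A, b, k):
    """Plain conjugate gradients on A x = b (A symmetric positive definite);
    returns the final iterate and the search directions p_0, ..., p_{k-1},
    which in exact arithmetic are mutually A-conjugate."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    dirs = []
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        dirs.append(p.copy())
        p = r_new + beta * p
        r = r_new
    return x, dirs
```

Checking p_i^T A p_j for i ≠ j on a small well-conditioned system shows the conjugacy holding to near machine precision, which is the "local ε-A-conjugacy" of the abstract in its benign regime.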

4.
Generalized block Lanczos methods for large unsymmetric eigenproblems are presented, which contain the block Arnoldi method, and the corresponding block Arnoldi algorithms are developed. The convergence of this class of methods is analyzed when the matrix A is diagonalizable. Upper bounds for the distances between normalized eigenvectors and a block Krylov subspace are derived, and a priori theoretical error bounds for Ritz elements are established. Compared with generalized Lanczos methods, which contain Arnoldi's method, the convergence analysis shows that the block versions have two advantages: first, they may be efficient for computing clustered eigenvalues; second, they are able to solve multiple eigenproblems. However, a deeper analysis shows that the approximate eigenvectors, or Ritz vectors, obtained by general orthogonal projection methods, including generalized block methods, may fail to converge theoretically for a general unsymmetric matrix A even if the corresponding approximate eigenvalues, or Ritz values, do, since the convergence of Ritz vectors requires stronger conditions, which may be impossible to satisfy theoretically, than that of Ritz values. The issues of how to restart and how to solve multiple eigenproblems are addressed, and some numerical examples are reported to confirm the theoretical analysis. Received July 7, 1994 / Revised version received March 1, 1997

5.
Many problems in applied mathematics require the evaluation of matrix functionals of the form F(A) := u^T f(A) u, where A is a large symmetric matrix and u is a vector. Golub and collaborators have described how approximations of such functionals can be computed inexpensively by using the Lanczos algorithm. The present note shows that error bounds for these approximations can be computed essentially for free when bounds for derivatives of f on an interval containing the spectrum of A are available.
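The Lanczos-quadrature idea referred to above can be sketched as follows: k Lanczos steps yield a tridiagonal T_k, and the Gauss rule for u^T f(A) u uses the eigenvalues of T_k as nodes and the squared first eigenvector components as weights. A minimal sketch, not the note's own code:

```python
import numpy as np

def lanczos_tridiag(A, u, k):
    """k steps of the Lanczos process on symmetric A with start vector u;
    returns the tridiagonal matrix T_k (no reorthogonalization)."""
    q = u / np.linalg.norm(u)
    q_prev = np.zeros_like(q)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w = w - alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:               # invariant subspace reached
            break
        q_prev, q = q, w / beta
    return (np.diag(alphas)
            + np.diag(betas[:-1], 1)
            + np.diag(betas[:-1], -1))

def gauss_estimate(A, u, f, k):
    """Gauss-quadrature approximation of u^T f(A) u: nodes are the
    eigenvalues of T_k, weights the squared first components of its
    eigenvectors, scaled by ||u||^2."""
    T = lanczos_tridiag(A, u, k)
    theta, S = np.linalg.eigh(T)
    return (u @ u) * float(np.sum(S[0, :] ** 2 * f(theta)))
```

For k equal to the matrix dimension the rule is exact, and for small k it already gives useful approximations whose error the note bounds via derivatives of f.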

6.
The evaluation of matrix functions of the form f(A)v, where A is a large sparse or structured symmetric matrix, f is a nonlinear function, and v is a vector, is frequently subdivided into two steps: first an orthonormal basis of an extended Krylov subspace of fairly small dimension is determined, and then a projection onto this subspace is evaluated by a method designed for small problems. This paper derives short recursion relations for orthonormal bases of extended Krylov subspaces of the type K_{m,mi+1}(A) = span{A^{-m+1}v, ..., A^{-1}v, v, Av, ..., A^{mi}v}, m = 1, 2, 3, ..., with i a positive integer, and describes applications to the evaluation of matrix functions and the computation of rational Gauss quadrature rules.
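To make the subspace K_{m,mi+1}(A) concrete, here is a naive construction from explicit powers of A and A^{-1} followed by a QR factorization. The paper derives short recursions precisely to avoid this kind of construction; this sketch (with an explicit inverse, suitable only for small demos) just illustrates which vectors span the subspace.

```python
import numpy as np

def extended_krylov_basis(A, v, m, i):
    """Orthonormal basis of the extended Krylov subspace
    K_{m,mi+1}(A) = span{A^{-m+1}v, ..., A^{-1}v, v, Av, ..., A^{mi}v},
    built naively: explicit powers plus QR."""
    Ainv = np.linalg.inv(A)           # small demo only; factor A in practice
    neg = []
    w = v.copy()
    for _ in range(m - 1):            # A^{-1}v, ..., A^{-m+1}v
        w = Ainv @ w
        neg.append(w.copy())
    pos = []
    w = v.copy()
    for _ in range(m * i):            # Av, ..., A^{mi}v
        w = A @ w
        pos.append(w.copy())
    cols = neg[::-1] + [v.copy()] + pos
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q
```

For m = 2, i = 1 this spans {A^{-1}v, v, Av, A^2 v}, a four-dimensional space that contains A^{-1}v exactly, which is what makes extended spaces attractive for functions with singularities at the origin.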

7.
Many problems in science and engineering require the evaluation of functionals of the form F_u(A) = u^T f(A) u, where A is a large symmetric matrix, u a vector, and f a nonlinear function. A popular and fairly inexpensive approach to determining upper and lower bounds for such functionals is based on first carrying out a few steps of the Lanczos procedure applied to A with initial vector u, and then evaluating pairs of Gauss and Gauss-Radau quadrature rules associated with the tridiagonal matrix determined by the Lanczos procedure. The present paper extends this approach to allow the use of rational Gauss quadrature rules.

8.
Eigenvalues and eigenvectors of a large sparse symmetric matrix A can be found accurately and often very quickly using the Lanczos algorithm without reorthogonalization. The algorithm gives essentially correct information on the eigensystem of A, although it does not necessarily give the correct multiplicity of multiple, or even single, eigenvalues. It is straightforward to determine a useful bound on the accuracy of every eigenvalue given by the algorithm. The initial behavior of the algorithm is surprisingly good: it produces vectors spanning the Krylov subspace of a matrix very close to A until this subspace contains an exact eigenvector of a matrix very close to A, and up to this point the effective behavior of the algorithm for the eigenproblem is very like that of the Lanczos algorithm using full reorthogonalization. This helps to explain the remarkable behavior of the basic Lanczos algorithm.

9.
We consider solving eigenvalue problems or model reduction problems for a quadratic matrix polynomial λ²I − λA − B with large and sparse A and B. We propose new Arnoldi- and Lanczos-type processes which operate on the same space in which A and B live and construct projections of A and B to produce a quadratic matrix polynomial with coefficient matrices of much smaller size, which is used to approximate the original problem. We apply the new processes to solve eigenvalue problems and model reductions of a second-order linear input-output system and discuss convergence properties. Our new processes are also extendable to cover a general matrix polynomial of any degree.

10.
A rounding error analysis of the symplectic Lanczos method for solving the large-scale sparse Hamiltonian eigenvalue problem is given. If no breakdown occurs in the method, then it can be shown that the requirement of preserving Hamiltonian structure does not destroy the essential features of the nonsymmetric Lanczos algorithm. The relationship between the loss of J-orthogonality among the symplectic Lanczos vectors and the convergence of the Ritz values, as in the symmetric Lanczos algorithm, is discussed. It is demonstrated that under certain assumptions the computed Lanczos vectors lose J-orthogonality when some Ritz values begin to converge. Our analysis closely follows the recent works of Bai and Faßbender. Selected from Journal of Mathematical Research and Exposition, 2004, 24(1): 91-106

11.
In each step of the quasi-minimal residual (QMR) method, which uses a look-ahead variant of the nonsymmetric Lanczos process to generate basis vectors for the Krylov subspaces induced by A, it is necessary to decide whether to construct the Lanczos vectors v_{n+1} and w_{n+1} as regular or inner vectors. For a regular step it is necessary that D_k = W_k^T V_k is nonsingular. Therefore, in floating-point arithmetic, the smallest singular value of the matrix D_k, σ_min(D_k), is computed, and an inner step is performed if σ_min(D_k) < ε, where ε is a suitably chosen tolerance. In practice it is virtually impossible to choose the value of the tolerance ε correctly. The subject of this paper is to show how discrete stochastic arithmetic remedies the problem of this tolerance, as well as the problem of the other tolerances needed in the other checks of the QMR method, by estimating the accuracy of some intermediate results. Numerical examples are used to show the good numerical properties.

12.
We consider the solution of linear systems of equations Ax = b, with A a symmetric positive-definite matrix in ℝ^{n×n}, through Richardson-type iterations or, equivalently, the minimization of the convex quadratic function (1/2)(Ax,x) − (b,x) with a gradient algorithm. The use of step-sizes asymptotically distributed with the arcsine distribution on the spectrum of A then yields an asymptotic rate of convergence after k < n iterations, k → ∞, that coincides with that of the conjugate-gradient algorithm in the worst case. However, the spectral bounds m and M are generally unknown and thus need to be estimated to allow the construction of simple and cost-effective gradient algorithms with fast convergence. It is the purpose of this paper to analyse the properties of estimators of m and M based on moments of probability measures ν_k defined on the spectrum of A and generated by the algorithm on its way towards the optimal solution. A precise analysis of the behavior of the rate of convergence of the algorithm is also given. Two situations are considered: (i) the sequence of step-sizes corresponds to i.i.d. random variables; (ii) the step-sizes are generated through a dynamical system (fractional parts of the golden ratio) producing a low-discrepancy sequence. In the first case, properties of random walks can be used to prove the convergence of simple spectral bound estimators based on the first moment of ν_k. The second option requires a more careful choice of spectral bound estimators but is shown to produce much smaller fluctuations in the rate of convergence of the algorithm.

13.
In 1975 Chen and Gentleman suggested a 3-block SOR method for solving least-squares problems, based on partitioning the observation matrix A as

A = [ A1 ]
    [ A2 ]

where A1 is square and nonsingular. In many cases A1 is obvious from the nature of the problem. This combined direct-iterative method was discussed further and applied to angle adjustment problems in geodesy, where A1 is easily formed and is large and sparse, by Plemmons in 1979. Recently, Niethammer, de Pillis, and Varga have rekindled interest in this method by correcting and extending the SOR convergence interval. The purpose of our paper is to discuss an alternative formulation of the problem leading to a 2-block SOR method. For this formulation it is shown that the resulting direct-iterative method always converges for a sufficiently small SOR parameter, in contrast to the 3-block formulation. Formulas for the optimum SOR parameter and the resulting asymptotic convergence factor, based upon ‖A2 A1^{-1}‖2, are given. Furthermore, it is shown that this 2-cyclic block SOR method always gives better convergence results than the 3-cyclic one for the same amount of work per iteration. The direct part of the algorithm requires only a sparse-matrix factorization of A1. Our purpose here is to establish theoretical convergence results, in line with the purpose of the recent paper by Niethammer, de Pillis, and Varga. Practical considerations of choosing A1 in certain applications and of estimating the resulting ‖A2 A1^{-1}‖2 will be addressed elsewhere.

14.
We provide a comparative study of the Subspace Projected Approximate Matrix method, abbreviated SPAM, which is a fairly recent iterative method for computing a few eigenvalues of a Hermitian matrix A. It falls in the category of inner-outer iteration methods and aims to reduce the cost of matrix-vector products with A within its inner iteration. This is done by choosing an approximation A_0 of A, and then, based on both A and A_0, defining a sequence (A_k)_{k=0}^n of matrices that approximate A increasingly well as the process progresses. The matrix A_k is then used in the k-th inner iteration instead of A. In spite of its main idea being refreshingly new and interesting, SPAM has not yet been studied in detail by the numerical linear algebra community. We would like to change this by explaining the method and showing that, for certain special choices of A_0, SPAM turns out to be mathematically equivalent to known eigenvalue methods. More sophisticated approximations A_0 turn SPAM into a boosted version of Lanczos, whereas it can also be interpreted as an attempt to enhance a certain instance of the preconditioned Jacobi-Davidson method. Numerical experiments are performed that are specifically tailored to illustrate certain aspects of SPAM and its variations. For experiments that test the practical performance of SPAM in comparison with other methods, we refer to other sources. The main conclusion is that SPAM provides a natural transition between the Lanczos method and one-step preconditioned Jacobi-Davidson.

15.
For the Hermitian inexact Rayleigh quotient iteration (RQI), we consider the local convergence of the inexact RQI with the Lanczos method used for the linear systems involved. Some attractive properties are derived for the residual, whose norm is ξ_k, of the linear system obtained by the Lanczos method at outer iteration k+1. Based on them, we make a refined analysis and establish new local convergence results. It is proved that (i) the inexact RQI with Lanczos converges quadratically provided that ξ_k ≤ ξ for a constant ξ < 1, and (ii) the method converges linearly provided that ξ_k is bounded by some multiple of 1/‖r_k‖, with r_k the residual norm of the approximate eigenpair at outer iteration k. The results are fundamentally different from the existing ones, which always require ξ_k < 1, and they have implications for effective implementations of the method. Based on the new theory, we can design practical criteria to control ξ_k to achieve quadratic convergence and implement the method more effectively than before. Numerical experiments confirm our theory and demonstrate that the inexact RQI with Lanczos is competitive with the inexact RQI with MINRES.
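For context, the outer iteration being analyzed above is Rayleigh quotient iteration. The sketch below uses exact inner solves (np.linalg.solve); the paper's subject is the inexact variant where these shifted systems are solved only approximately by Lanczos, with residual norm ξ_k.

```python
import numpy as np

def rqi(A, x0, iters=8):
    """Rayleigh quotient iteration for a symmetric A with exact inner
    solves. Each step solves (A - rho I) y = x and updates the Rayleigh
    quotient rho = x^T A x; convergence is locally cubic."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    rho = x @ A @ x
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - rho * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                     # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        rho = x @ A @ x
    return rho, x
```

Replacing the solve by a few Lanczos (or MINRES) steps with a controlled residual norm gives the inexact RQI whose convergence rates the paper establishes.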

16.
In this paper, we present a convergence result for some Krylov projection methods when applied to the Tikhonov minimization problem in its general form. In particular, we consider the method based on the Arnoldi algorithm and the one based on the Lanczos bidiagonalization process.

17.
Despite its usefulness in solving eigenvalue problems and linear systems of equations, the nonsymmetric Lanczos method is known to suffer from a potential breakdown problem. Previous and recent approaches for handling Lanczos exact and near-breakdowns include, for example, the look-ahead schemes of Parlett-Taylor-Liu [23], Freund-Gutknecht-Nachtigal [9], and Brezinski-Redivo Zaglia-Sadok [4]; the combined look-ahead and restart scheme of Joubert [18]; and the low-rank modified Lanczos scheme of Huckle [17]. In this paper, we present yet another scheme based on a modified Krylov subspace approach for the solution of nonsymmetric linear systems. When a breakdown occurs, our approach seeks a modified dual Krylov subspace, which is the sum of the original subspace and a new Krylov subspace K_m(w_j, A^T), where w_j is a new start vector (this approach has been studied by Ye [26] for eigenvalue computations). Based on this strategy, we have developed a practical algorithm for linear systems called the MLAN/QM algorithm, which also incorporates the residual quasi-minimization proposed in [12]. We present a few convergence bounds for the method as well as numerical results to show its effectiveness. Research supported by the Natural Sciences and Engineering Research Council of Canada.

18.
Let us consider a finite inf-semilattice G with a set 1 of internal binary operations 11, isotone, satisfying certain conditions of non-dispersion, of increase, and of substitution, and such that the greatest lower bound is distributive with respect to 11. Given a finite subset A of G, this article gives a method for enumerating the maximal elements of the subalgebra A1 generated by A with respect to 1, when A1 is finite. This method, called the disengagement algorithm, examines each element only once; it generalizes an algorithm giving the maximal n-rectangles of a subset of a product of distributive lattices, an algorithm which itself generalized a conjecture of Tison in Boolean algebra. Two applications are developed.

19.
In this note we study a variant of the inverted Lanczos method, which computes eigenvalue approximations of a symmetric matrix A as Ritz values of A from a Krylov space of A^{-1}. The method turns out to be slightly faster than the Lanczos method, at least as long as reorthogonalization is not required. The method is applied to the problem of determining the smallest eigenvalue of a symmetric Toeplitz matrix. It is accelerated by taking advantage of symmetry properties of the corresponding eigenvector. This revised version was published online in October 2005 with corrections to the Cover Date.
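The idea above, extracting Ritz values of A (not of A^{-1}) from the Krylov space built with A^{-1}, can be sketched as follows. This illustration forms the inverse explicitly and uses full reorthogonalization for clarity; a real implementation would factor A once (and, per the note, exploit the eigenvector symmetry of Toeplitz matrices).

```python
import numpy as np

def smallest_eig_inverted_lanczos(A, v0, k):
    """Approximate the smallest eigenvalue of a symmetric positive definite
    matrix A as the smallest Ritz value of A taken from the Krylov space
    K_k(A^{-1}, v0)."""
    n = len(v0)
    Ainv = np.linalg.inv(A)           # demo only; factor A in practice
    V = np.zeros((n, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    cols = 1
    for j in range(1, k):
        w = Ainv @ V[:, j - 1]
        w = w - V[:, :j] @ (V[:, :j].T @ w)   # full reorthogonalization
        nw = np.linalg.norm(w)
        if nw < 1e-12:                        # invariant subspace reached
            break
        V[:, j] = w / nw
        cols = j + 1
    V = V[:, :cols]
    ritz = np.linalg.eigvalsh(V.T @ A @ V)    # Ritz values of A itself
    return float(ritz[0])
```

The small eigenvalues of A are exterior eigenvalues of A^{-1}, so the space K_k(A^{-1}, v0) captures them quickly, while projecting A rather than A^{-1} avoids inverting Ritz values.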

20.
An M-matrix as defined by Ostrowski [5] is a matrix that can be split as A = sI − B, where s > 0, B ≥ 0, with s ≥ r(B), the spectral radius of B. Following Plemmons [6], we develop a classification of all M-matrices. We consider v, the index of zero for A, i.e., the smallest nonnegative integer n such that the null spaces of A^n and A^{n+1} coincide. We characterize this index in terms of convergence properties of powers of s^{-1}B. We develop additional characterizations in terms of nonnegativity of the Drazin inverse of A on the range of A^v, extending (as conjectured by Poole and Boullion [7]) the well-known property that A^{-1} ≥ 0 whenever A is nonsingular.
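Ostrowski's definition can be tested directly on small matrices. The sketch below uses the smallest admissible shift s = max_i a_ii; since enlarging s shifts r(B) by exactly the same amount for B ≥ 0, testing this one shift is equivalent to testing all of them. The function name and tolerance handling are illustrative choices.

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Test Ostrowski's definition: A is an M-matrix if A = sI - B for some
    s > 0, B >= 0 (entrywise), with s >= r(B), the spectral radius of B.
    Singular M-matrices (s = r(B)) pass the test as well."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    if (off > tol).any():                 # off-diagonal entries must be <= 0
        return False
    s = float(A.diagonal().max())         # smallest shift with B >= 0
    if s <= 0:
        return False
    B = s * np.eye(A.shape[0]) - A        # B >= 0 by construction
    r = np.abs(np.linalg.eigvals(B)).max()
    return bool(s >= r - tol)
```

For example, [[2, -1], [-1, 2]] is a nonsingular M-matrix, [[1, -1], [-1, 1]] is a singular one, and any matrix with a positive off-diagonal entry is rejected immediately.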

