Similar Documents
20 similar documents found.
1.
We consider the task of computing solutions of linear systems that differ only by a shift with the identity matrix, as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed which exploit either the need for solutions to multiple right-hand sides (e.g. deflation-type methods and block methods) or multiple shifts (e.g. shifted CG) with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features (shifts and multiple right-hand sides) at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior to applying other iterative methods to each of the systems individually and, in typical situations, to shifted or block Krylov subspace methods.
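The structural fact this entry builds on is the shift invariance of Krylov subspaces, K_m(A, b) = K_m(A + sigma*I, b): a single Lanczos process can serve every shifted system. The sketch below illustrates only that half of the idea (one right-hand side, several shifts); it is not the authors' RHMC solver, and the SPD test matrix, its size, and the shift values are assumptions.

```python
# Minimal sketch: one Lanczos process, reused for several shifted systems (A + sigma*I) x = b.
import numpy as np

def lanczos(A, b, m):
    """Symmetric Lanczos: orthonormal basis V (n x m) and tridiagonal T (m x m)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return V, T

rng = np.random.default_rng(0)
n, m = 200, 60
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T      # assumed SPD test matrix
b = rng.standard_normal(n)

V, T = lanczos(A, b, m)                                # one Krylov space ...
for sigma in [0.0, 0.5, 2.0]:                          # ... reused for every shift
    y = np.linalg.solve(T + sigma * np.eye(m), np.linalg.norm(b) * np.eye(m)[:, 0])
    x = V @ y                                          # Galerkin approximation for this shift
    print(sigma, np.linalg.norm((A + sigma * np.eye(n)) @ x - b))
```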

2.
The Generalized Minimal Residual (GMRES) method and the Quasi-Minimal Residual (QMR) method are two Krylov methods for solving linear systems. The main difference between these methods is the generation of the basis vectors for the Krylov subspace. The GMRES method uses the Arnoldi process while QMR uses the Lanczos algorithm for constructing a basis of the Krylov subspace. In this paper we give a new method similar to QMR but based on the Hessenberg process instead of the Lanczos process. We call the new method the CMRH method. The CMRH method is less expensive and requires slightly less storage than GMRES. Numerical experiments suggest that it has behaviour similar to GMRES.

3.
The Lanczos method is a commonly used method for solving large linear systems. Unfortunately, breakdown or numerical instability often occurs during the Lanczos process. This paper presents a deflated Lanczos method, the DLanczos method, for solving large symmetric linear systems. The new algorithm uses an augmented-subspace technique: during the Lanczos process, the eigenvectors corresponding to a few eigenvalues of smallest absolute value are added to the Krylov subspace to achieve deflation. Numerical experiments show that the new algorithm converges faster than the Lanczos method and is well suited to solving ill-conditioned symmetric linear systems.
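A minimal sketch of the deflation idea behind this kind of method (not the paper's DLanczos implementation): project the components belonging to the smallest eigenvalues out of the starting residual, so the Krylov iteration only sees the well-conditioned part of the spectrum. The test matrix, its spectrum, and the use of SciPy's CG in place of a hand-rolled Lanczos solver are assumptions.

```python
# Deflation by starting guess: solve exactly on the span of the smallest eigenvectors,
# then let CG handle the remaining, well-conditioned part of the spectrum.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n, k = 500, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([np.geomspace(1e-5, 1e-2, k), np.linspace(1.0, 10.0, n - k)])
A = Q @ np.diag(evals) @ Q.T                      # SPD, ill conditioned by k tiny eigenvalues
b = rng.standard_normal(n)

# Deflation vectors: eigenvectors of the k smallest eigenvalues (dense eigh here;
# a large-scale code would use a sparse Lanczos-type eigensolver instead).
W = np.linalg.eigh(A)[1][:, :k]
x0 = W @ np.linalg.solve(W.T @ A @ W, W.T @ b)    # exact solve on the deflated subspace

def cg_iters(x0):
    count = [0]
    def cb(xk):
        count[0] += 1
    cg(A, b, x0=x0, callback=cb)
    return count[0]

print("plain CG iterations:   ", cg_iters(None))
print("deflated CG iterations:", cg_iters(x0))    # typically far fewer iterations
```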

4.
The block-Lanczos method serves to compute a moderate number of eigenvalues and the corresponding invariant subspace of a symmetric matrix. In this paper, the convergence behavior of nonrestarted and restarted versions of the block-Lanczos method is analyzed. For the nonrestarted version, we improve an estimate by Saad by means of a change of the auxiliary vector so that the new estimate is much more accurate in the case of clustered or multiple eigenvalues. For the restarted version, an estimate by Knyazev is generalized by extending our previous results on block steepest descent iterations and single-vector restarted Krylov subspace iterations. The new estimates can also be reformulated and applied to invert-block-Lanczos methods for solving generalized matrix eigenvalue problems.

5.
Krylov subspace methods and their variants are presently the favorite iterative methods for solving a system of linear equations. Although it is a purely linear algebra problem, it can be tackled by the theory of formal orthogonal polynomials. This theory helps to understand the origin of the algorithms for the implementation of Krylov subspace methods and, moreover, the use of formal orthogonal polynomials brings a major simplification in the treatment of some numerical problems related to these algorithms. This paper reviews this approach in the case of the Lanczos method and its variants, the novelty being the introduction of a preconditioner.

6.
LSQR, a Lanczos bidiagonalization based Krylov subspace iterative method, and its mathematically equivalent conjugate gradient for least squares problems (CGLS) applied to the normal equations system, are commonly used for large-scale discrete ill-posed problems. It is well known that LSQR and CGLS have regularizing effects, where the number of iterations plays the role of the regularization parameter. However, it has long been unknown whether the regularizing effects are good enough to find best possible regularized solutions. Here a best possible regularized solution means that it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition (TSVD) method. We establish bounds for the distance between the k-dimensional Krylov subspace and the k-dimensional dominant right singular space. They show that the Krylov subspace captures the dominant right singular space better for severely and moderately ill-posed problems than for mildly ill-posed problems. Our general conclusions are that LSQR has better regularizing effects for the first two kinds of problems than for the third kind, and that a hybrid LSQR with additional regularization is generally needed for mildly ill-posed problems. Exploiting the established bounds, we derive an estimate for the accuracy of the rank k approximation generated by Lanczos bidiagonalization. Numerical experiments illustrate that the regularizing effects of LSQR are good enough to compute best possible regularized solutions for severely and moderately ill-posed problems, stronger than our theory predicts, but that they are not for mildly ill-posed problems, where additional regularization is needed.
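A small experiment, on an assumed synthetic ill-posed problem rather than the paper's test set, showing the iteration number of LSQR acting as the regularization parameter: the error first decreases, then grows again once the iterates begin to fit the noise (semiconvergence).

```python
# LSQR with the iteration count as the regularization parameter on an assumed
# severely ill-posed test problem (rapidly decaying singular values, noisy data).
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** (-0.1 * np.arange(n))                  # sigma_i ~ 10^(-0.1 i)
A = U @ np.diag(s) @ V.T
x_true = V @ (s * rng.standard_normal(n))          # solution satisfying a Picard-type condition
b = A @ x_true + 1e-4 * rng.standard_normal(n)     # noisy right-hand side

errors = []
for k in range(1, 60):
    # atol = btol = 0 so that only the iteration limit stops the run.
    x_k = lsqr(A, b, atol=0.0, btol=0.0, conlim=1e12, iter_lim=k)[0]
    errors.append(np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))
print("best relative error %.2e at k = %d" % (min(errors), int(np.argmin(errors)) + 1))
```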

7.
Computing a small number of singular values is required in many practical applications and it is therefore desirable to have efficient and robust methods that can generate such truncated singular value decompositions. A method based on the Lanczos bidiagonalization and the Krylov-Schur method is presented. It is shown that deflation strategies can be easily implemented in this method and possible stopping criteria are discussed. Numerical experiments show the efficiency of the Krylov-Schur method.
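For orientation, a usage sketch of an off-the-shelf Krylov-type truncated SVD solver (scipy's svds, ARPACK-based by default) on an assumed sparse random matrix; the Krylov-Schur bidiagonalization method described above targets the same task.

```python
# Truncated SVD of a large sparse matrix with an iterative Krylov solver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

A = sp.random(5000, 3000, density=1e-3, random_state=0, format="csr")  # assumed test matrix

U, s, Vt = svds(A, k=6)                          # six largest singular triplets
order = np.argsort(s)[::-1]                      # svds returns singular values in ascending order
U, s, Vt = U[:, order], s[order], Vt[order, :]
print(s)
print(np.linalg.norm(A @ Vt.T - U * s))          # residual ||A V - U diag(s)||
```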

8.
In the quadratic eigenvalue problem (QEP) with all coefficient matrices symmetric, there can be complex eigenvalues. However, some applications need to compute real eigenvalues only. We propose a Lanczos-based method for computing all real eigenvalues contained in a given interval of large-scale symmetric QEPs. The method uses matrix inertias of the quadratic polynomial evaluated at different shift values. In this way, for hyperbolic problems, it is possible to make sure that all eigenvalues in the interval have been computed. We also discuss the general nonhyperbolic case. Our implementation is memory-efficient by representing the computed pseudo-Lanczos basis in a compact tensor product representation. We show results of computational experiments with a parallel implementation in the SLEPc library.
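A minimal sketch of the basic building block used above: the inertia of Q(sigma) = sigma^2*M + sigma*C + K obtained from a symmetric indefinite LDL^T factorization instead of a full eigendecomposition. The small dense matrices and the shift are assumptions, and the precise eigenvalue-counting rule over an interval is not reproduced here.

```python
# Inertia of the quadratic matrix polynomial at a shift, via LDL^T (Sylvester's law of inertia).
import numpy as np
from scipy.linalg import ldl

def inertia(S):
    """Number of (negative, zero, positive) eigenvalues of the symmetric matrix S."""
    _, D, _ = ldl(S)                       # S = L D L^T, D block diagonal with 1x1/2x2 blocks
    d = np.linalg.eigvalsh(D)              # eigenvalues of the block-diagonal factor
    tol = 1e-12 * np.abs(d).max()
    return (int((d < -tol).sum()), int((np.abs(d) <= tol).sum()), int((d > tol).sum()))

rng = np.random.default_rng(4)
n = 50
M = np.eye(n)                                           # assumed mass matrix
C = rng.standard_normal((n, n))
C = 5.0 * (C + C.T)                                     # symmetric damping
K = rng.standard_normal((n, n))
K = K @ K.T + n * np.eye(n)                             # SPD stiffness

sigma = -1.0
Q = sigma**2 * M + sigma * C + K
print(inertia(Q))
# Cross-check against a dense eigendecomposition of Q(sigma).
w = np.linalg.eigvalsh(Q)
print(((w < 0).sum(), (np.abs(w) < 1e-10).sum(), (w > 0).sum()))
```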

9.
Algorithms to solve large sparse eigenvalue problems are considered. A new class of algorithms which is based on rational functions of the matrix is described. The Lanczos method, the Arnoldi method, the spectral transformation Lanczos method, and Rayleigh quotient iteration all are special cases, but there are also new algorithms which correspond to rational functions with several poles. In the simplest case a basis of a rational Krylov subspace is found in which the matrix eigenvalue problem is formulated as a linear matrix pencil with a pair of Hessenberg matrices.
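A sketch of the "simplest case" mentioned at the end of the entry: a rational Krylov basis built with a few poles, from which approximate eigenvalues are read off a small pencil of Hessenberg matrices (K_m, H_m). The diagonal test matrix, the poles, and the basis size are assumptions, and the Gram-Schmidt loop is kept deliberately bare (no reorthogonalization, each pole refactored per step for brevity).

```python
# Rational Krylov basis with cycled poles; eigenvalue estimates from the pencil (K_m, H_m).
import numpy as np
from scipy.linalg import eig, lu_factor, lu_solve

rng = np.random.default_rng(5)
n, m = 300, 20
A = np.diag(np.linspace(1.0, 100.0, n))          # assumed symmetric test matrix
poles = [5.0, 25.0, 60.0]                        # several poles, cycled

V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
K = np.zeros((m + 1, m))
V[:, 0] = rng.standard_normal(n)
V[:, 0] /= np.linalg.norm(V[:, 0])

for j in range(m):
    sigma = poles[j % len(poles)]
    w = lu_solve(lu_factor(A - sigma * np.eye(n)), V[:, j])   # (A - sigma I)^{-1} v_j
    for i in range(j + 1):                                     # Gram-Schmidt against V_j
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
    K[:, j] = sigma * H[:, j]
    K[j, j] += 1.0                                             # K[:, j] = e_j + sigma * h_j

theta = eig(K[:m, :m], H[:m, :m], right=False)                 # approximate eigenvalues
theta = np.sort(theta.real[np.isfinite(theta.real)])
print(theta)      # values near the poles should approximate nearby eigenvalues of A
```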

10.
Given a square matrix A, the inverse subspace problem is concerned with determining a closest matrix to A with a prescribed invariant subspace. When A is Hermitian, the closest matrix may be required to be Hermitian. We measure distance in the Frobenius norm and discuss applications to Krylov subspace methods for the solution of large-scale linear systems of equations and eigenvalue problems as well as to the construction of blurring matrices. Extensions that allow the matrix A to be rectangular and applications to Lanczos bidiagonalization, as well as to the recently proposed subspace-restricted SVD method for the solution of linear discrete ill-posed problems, also are considered.

11.
An algorithm for solving the problem of minimizing a quadratic function subject to ellipsoidal constraints is introduced. This algorithm is based on the implicitly restarted Lanczos method to construct a basis for the Krylov subspace, in conjunction with a model trust region strategy to choose the step. The trial step is computed on the low-dimensional subspace that lies inside the trust region.

One of the main advantages of this algorithm is the way that the Krylov subspace is terminated. We introduce a termination condition that allows the gradient to be decreased on that subspace.

A convergence theory for this algorithm is presented. It is shown that this algorithm is globally convergent and it should cope quite well with large-scale minimization problems. This theory is sufficiently general that it holds for any algorithm that projects the problem on a lower dimensional subspace.

12.
Despite its usefulness in solving eigenvalue problems and linear systems of equations, the nonsymmetric Lanczos method is known to suffer from a potential breakdown problem. Previous and recent approaches for handling the Lanczos exact and near-breakdowns include, for example, the look-ahead schemes by Parlett-Taylor-Liu [23], Freund-Gutknecht-Nachtigal [9], and Brezinski-Redivo Zaglia-Sadok [4]; the combined look-ahead and restart scheme by Joubert [18]; and the low-rank modified Lanczos scheme by Huckle [17]. In this paper, we present yet another scheme based on a modified Krylov subspace approach for the solution of nonsymmetric linear systems. When a breakdown occurs, our approach seeks a modified dual Krylov subspace, which is the sum of the original subspace and a new Krylov subspace K_m(w_j, A^T), where w_j is a new start vector (this approach has been studied by Ye [26] for eigenvalue computations). Based on this strategy, we have developed a practical algorithm for linear systems called the MLAN/QM algorithm, which also incorporates the residual quasi-minimization as proposed in [12]. We present a few convergence bounds for the method as well as numerical results to show its effectiveness.

13.
Given a square matrix and single right and left starting vectors, the classical nonsymmetric Lanczos process generates two sequences of biorthogonal basis vectors for the right and left Krylov subspaces induced by the given matrix and vectors. In this paper, we propose a Lanczos-type algorithm that extends the classical Lanczos process for single starting vectors to multiple starting vectors. Given a square matrix and two blocks of right and left starting vectors, the algorithm generates two sequences of biorthogonal basis vectors for the right and left block Krylov subspaces induced by the given data. The algorithm can handle the most general case of right and left starting blocks of arbitrary sizes, while all previously proposed extensions of the Lanczos process are restricted to right and left starting blocks of identical sizes. Other features of our algorithm include a built-in deflation procedure to detect and delete linearly dependent vectors in the block Krylov sequences, and the option to employ look-ahead to remedy the potential breakdowns that may occur in nonsymmetric Lanczos-type methods.

14.
Applications such as the modal analysis of structures and acoustic cavities require a number of eigenvalues and eigenvectors of large-scale Hermitian eigenvalue problems. The most popular method is probably the spectral transformation Lanczos method. An important disadvantage of this method is that a change of pole requires a complete restart. In this paper, we investigate the use of the rational Krylov method for this application. This method does not require a complete restart after a change of pole. It is shown that the change of pole can be considered as a change of Lanczos basis. The major conclusion of this paper is that the method is numerically stable when the poles are chosen in between clusters of the approximate eigenvalues.
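A usage sketch of the spectral transformation Lanczos baseline discussed above, using ARPACK's shift-invert mode for a generalized Hermitian problem K x = lambda M x; the 1-D Laplacian stand-ins for the stiffness and mass matrices and the pole value are assumptions. Changing the pole here means restarting from scratch, which is exactly the cost the rational Krylov method avoids.

```python
# Shift-invert (spectral transformation) Lanczos via ARPACK for K x = lambda M x.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
# Assumed stand-ins: scaled 1-D Laplacian stiffness matrix and identity mass matrix.
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1) ** 2
M = sp.identity(n, format="csc")

sigma = 500.0                                    # pole near the wanted eigenvalues
vals, vecs = eigsh(K, k=6, M=M, sigma=sigma)     # shift-invert Lanczos around sigma
print(np.sort(vals))                             # eigenvalues closest to the pole
```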

15.
In this paper, we present a convergence result for some Krylov projection methods when applied to the Tikhonov minimization problem in its general form. In particular, we consider the method based on the Arnoldi algorithm and the one based on the Lanczos bidiagonalization process.
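A minimal sketch of the Arnoldi-based variant (a hybrid Arnoldi-Tikhonov iteration): run a few Arnoldi steps with starting vector b, then apply Tikhonov regularization to the small projected least-squares problem. The synthetic ill-posed problem, the subspace dimension m, and the regularization parameter are assumptions, not the paper's setup.

```python
# Arnoldi projection followed by Tikhonov regularization on the projected problem.
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: orthonormal V (n x (m+1)) and extended Hessenberg H ((m+1) x m)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(6)
n, m, lam = 400, 25, 1e-2
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** (-0.05 * np.arange(n))) @ U.T          # ill-conditioned test operator
x_true = U @ (10.0 ** (-0.05 * np.arange(n)) * rng.standard_normal(n))
b = A @ x_true + 1e-5 * rng.standard_normal(n)                 # noisy data

V, H = arnoldi(A, b, m)
beta = np.linalg.norm(b)
# Tikhonov on the projected problem: min_y ||H y - beta e1||^2 + lam^2 ||y||^2,
# solved as a stacked least-squares problem.
rhs = np.concatenate([beta * np.eye(m + 1)[:, 0], np.zeros(m)])
y = np.linalg.lstsq(np.vstack([H, lam * np.eye(m)]), rhs, rcond=None)[0]
x = V[:, :m] @ y
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))     # relative reconstruction error
```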

16.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX - XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds on the norm of the error. Numerical experiments are also given to show the effectiveness of these block methods.
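A minimal sketch of the first case above (A large, B small), under assumed data: build an orthonormal basis V of the block Krylov subspace K_m(A, C), then impose a Galerkin condition, which reduces the problem to the small projected Sylvester equation H Y - Y B = V^T C with H = V^T A V. Block Gram-Schmidt is kept simple (no deflation).

```python
# Block Arnoldi / Galerkin projection for AX - XB = C with B of small size.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(7)
n, p, m = 400, 3, 30
A = np.diag(np.linspace(1.0, 100.0, n))              # "large" matrix (dense here for simplicity)
B = -np.diag([1.0, 2.0, 3.0])                        # small matrix; spectra of A and B disjoint
C = rng.standard_normal((n, p))

# Block Krylov basis of K_m(A, C) via blockwise Gram-Schmidt (repeated once for stability).
V, _ = np.linalg.qr(C)
for _ in range(m - 1):
    W = A @ V[:, -p:]
    W -= V @ (V.T @ W)
    W -= V @ (V.T @ W)
    Q, _ = np.linalg.qr(W)
    V = np.hstack([V, Q])

H = V.T @ A @ V
Y = solve_sylvester(H, -B, V.T @ C)                  # solves H Y - Y B = V^T C
X = V @ Y

X_exact = solve_sylvester(A, -B, C)                  # dense reference solve
print(np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))   # Galerkin residual
print(np.linalg.norm(X - X_exact) / np.linalg.norm(X_exact))   # error vs. reference
```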

17.
It is shown that the method of Arnoldi can be successfully used for solving large unsymmetric eigenproblems. Like the symmetric Lanczos method, Arnoldi's algorithm realizes a projection process onto the Krylov subspace K_m spanned by v_1, A v_1, ..., A^{m-1} v_1, where v_1 is the initial vector. We therefore study the convergence of the approximate eigenelements obtained by such a process. In particular, when the eigenvalues of A are real, we obtain bounds for the rates of convergence similar to those for the symmetric Lanczos algorithm. Some practical methods are presented in addition to that of Arnoldi, and several numerical experiments are described.
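A usage sketch of this projection process as packaged in ARPACK's implicitly restarted Arnoldi method (via scipy.sparse.linalg.eigs), applied to an assumed unsymmetric convection-diffusion-like tridiagonal test matrix.

```python
# A few eigenvalues of a large unsymmetric matrix from an Arnoldi-based solver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 3000
# Unsymmetric tridiagonal matrix (central diffusion plus one-sided convection).
A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")

vals, vecs = eigs(A, k=6, which="LM")            # six eigenvalues of largest magnitude
print(np.sort_complex(vals))
print(np.linalg.norm(A @ vecs[:, 0] - vals[0] * vecs[:, 0]))   # residual of one Ritz pair
```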

18.
This paper will present a new method of adaptively constructing block iterative methods based on Local Sensitivity Analysis (LSA). The method can be used in the context of geometric and algebraic multigrid methods for constructing smoothers, and in the context of Krylov methods for constructing block preconditioners. It is suitable for both constant and variable coefficient problems. Furthermore, the method can be applied to systems arising from both scalar and coupled system partial differential equations (PDEs), as well as linear systems that do not arise from PDEs. The simplicity of the method will allow it to be easily incorporated into existing multigrid and Krylov solvers while providing a powerful tool for adaptively constructing methods tuned to a problem.

19.
Uncertainty quantification for linear inverse problems remains a challenging task, especially for problems with a very large number of unknown parameters (e.g., dynamic inverse problems) and for problems where computation of the square root and inverse of the prior covariance matrix is not feasible. This work exploits Krylov subspace methods to develop and analyze new techniques for large-scale uncertainty quantification in inverse problems. In this work, we assume that generalized Golub-Kahan-based methods have been used to compute an estimate of the solution, and we describe efficient methods to explore the posterior distribution. In particular, we use the generalized Golub-Kahan bidiagonalization to derive an approximation of the posterior covariance matrix, and we provide theoretical results that quantify the accuracy of the approximate posterior covariance matrix and of the resulting posterior distribution. Then, we describe efficient methods that use the approximation to compute measures of uncertainty, including the Kullback-Leibler divergence. We present two methods that use the preconditioned Lanczos algorithm to efficiently generate samples from the posterior distribution. Numerical examples from dynamic photoacoustic tomography demonstrate the effectiveness of the described approaches.

20.
We present a probabilistic analysis of two Krylov subspace methods for solving linear systems. We prove a central limit theorem for norms of the residual vectors that are produced by the conjugate gradient and MINRES algorithms when applied to a wide class of sample covariance matrices satisfying some standard moment conditions. The proof involves establishing a four-moment theorem for the so-called spectral measure, implying, in particular, universality for the matrix produced by the Lanczos iteration. The central limit theorem then implies an almost-deterministic iteration count for the iterative methods in question.
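A small numerical illustration, with assumed dimensions, of the concentration phenomenon described above: for sample covariance matrices X^T X / n with the aspect ratio d/n fixed, the CG iteration count to a fixed tolerance is nearly identical across independent random draws.

```python
# CG iteration counts on independent sample covariance matrices concentrate.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(8)
n, d, trials = 4000, 1000, 5
counts = []
for _ in range(trials):
    X = rng.standard_normal((n, d))
    A = X.T @ X / n                              # sample covariance matrix (d x d)
    b = rng.standard_normal(d)
    nit = [0]
    def cb(xk):
        nit[0] += 1
    cg(A, b, atol=1e-8, callback=cb)
    counts.append(nit[0])
print(counts)     # iteration counts should be nearly identical across draws
```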
