Similar Articles
20 similar articles found (search time: 15 ms)
1.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms for solving the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly using them in projecting the eigenvalue problem, it becomes possible to use IFEAST to solve eigenvalue problems using very large dimension Krylov subspaces without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix–vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
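The outline below is a minimal numerical sketch of the idea: an approximate spectral projector is built from a quadrature rule along a circular contour, the shifted systems at the quadrature nodes are solved inexactly with GMRES (the IFEAST ingredient), and a Rayleigh-Ritz step extracts the eigenpairs inside the target interval. The contour, quadrature rule, subspace size, tolerances, and the 1-D Laplacian test matrix are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an inexact FEAST (IFEAST)-style iteration for a real symmetric matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ifeast_sketch(A, lam_min, lam_max, m0=8, n_quad=8, n_iter=8, tol=1e-6):
    """Contour-integral subspace iteration with inexact (GMRES) inner solves."""
    n = A.shape[0]
    c, r = 0.5 * (lam_min + lam_max), 0.5 * (lam_max - lam_min)
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    theta = 0.5 * np.pi * (nodes + 1.0)            # quadrature angles in (0, pi)
    Y = np.random.default_rng(0).standard_normal((n, m0))
    Id = sp.identity(n, format="csc")
    for _ in range(n_iter):
        Q = np.zeros((n, m0))
        for th, wt in zip(theta, weights):
            z = c + r * np.exp(1j * th)
            X = np.empty((n, m0), dtype=complex)
            for k in range(m0):                    # inexact shifted solves
                X[:, k], _ = spla.gmres(z * Id - A, Y[:, k])
            # accumulate the contour quadrature of the spectral projector applied to Y
            Q += 0.5 * wt * np.real(r * np.exp(1j * th) * X)
        Q, _ = np.linalg.qr(Q)                     # orthonormal basis of the filtered block
        evals, W = np.linalg.eigh(Q.T @ (A @ Q))   # Rayleigh-Ritz step
        Y = Q @ W
        inside = (evals > lam_min) & (evals < lam_max)
        resid = np.linalg.norm(A @ Y[:, inside] - Y[:, inside] * evals[inside], axis=0)
        if inside.any() and resid.max() < tol:
            break
    return evals[inside], Y[:, inside]

# Toy problem: 1-D Laplacian; target the four eigenvalues inside (0.5, 0.7).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
vals, vecs = ifeast_sketch(A, 0.5, 0.7)
print(np.sort(vals))
```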

2.
The block-Lanczos method serves to compute a moderate number of eigenvalues and the corresponding invariant subspace of a symmetric matrix. In this paper, the convergence behavior of nonrestarted and restarted versions of the block-Lanczos method is analyzed. For the nonrestarted version, we improve an estimate by Saad by means of a change of the auxiliary vector so that the new estimate is much more accurate in the case of clustered or multiple eigenvalues. For the restarted version, an estimate by Knyazev is generalized by extending our previous results on block steepest descent iterations and single-vector restarted Krylov subspace iterations. The new estimates can also be reformulated and applied to invert-block-Lanczos methods for solving generalized matrix eigenvalue problems.
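As a point of reference, the sketch below runs a fully reorthogonalized block Lanczos/Rayleigh-Ritz procedure for a symmetric matrix. The block size, the step count, and the diagonal test matrix with a clustered pair of eigenvalues are illustrative assumptions; it does not implement the restarting schemes whose estimates are analyzed in the paper.

```python
# Fully reorthogonalized block Lanczos with a Rayleigh-Ritz extraction.
import numpy as np
import scipy.sparse as sp

def block_lanczos_ritz(A, X0, steps):
    """Return Ritz values/vectors from the block Krylov space generated by X0."""
    Q, _ = np.linalg.qr(X0)
    basis = [Q]
    for _ in range(steps):
        W = A @ basis[-1]
        # Full reorthogonalization against all previous blocks (robust variant).
        for V in basis:
            W -= V @ (V.T @ W)
        Q, R = np.linalg.qr(W)
        if np.linalg.norm(R) < 1e-12:        # block Krylov space exhausted
            break
        basis.append(Q)
    V = np.hstack(basis)
    T = V.T @ (A @ V)                         # block tridiagonal up to round-off
    ritz_vals, Y = np.linalg.eigh(T)
    return ritz_vals, V @ Y

# Toy example: a diagonal matrix with a clustered pair of small eigenvalues.
rng = np.random.default_rng(1)
d = np.concatenate([[1.0, 1.0 + 1e-3, 1.5], np.linspace(3, 10, 97)])
A = sp.diags(d)
vals, vecs = block_lanczos_ritz(A, rng.standard_normal((100, 3)), steps=12)
print(vals[:4])   # the clustered pair is resolved thanks to block size > 1
```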

3.
For generalized eigenvalue problems, we consider computing all eigenvalues located in a certain region and their corresponding eigenvectors. Recently, contour integral spectral projection methods have been proposed for solving such problems. In this study, from the analysis of the relationship between the contour integral spectral projection and the Krylov subspace, we conclude that the Rayleigh–Ritz-type of the contour integral spectral projection method is mathematically equivalent to the Arnoldi method with the projected vectors obtained from the contour integration. By this Arnoldi-based interpretation, we then propose a block Arnoldi-type contour integral spectral projection method for solving the eigenvalue problem.

4.
In this paper, an improved block splitting preconditioner for a class of complex symmetric indefinite linear systems is proposed. By adopting two iteration parameters and the relaxation technique, the new preconditioner not only has the same computational cost as the block preconditioners but is also much closer to the original coefficient matrix. The theoretical analysis shows that the corresponding iteration method is convergent under suitable conditions and that the preconditioned matrix can have well-clustered eigenvalues around (0,1) with a reasonable choice of the relaxation parameters. An estimate concerning the dimension of the Krylov subspace for the preconditioned matrix is also obtained. Finally, some numerical experiments are presented to illustrate the effectiveness of the proposed preconditioner.

5.
The Arnoldi-type algorithm proposed by Golub and Greif [G. Golub, C. Greif, An Arnoldi-type algorithm for computing PageRank, BIT 46 (2006) 759-771] is a restarted Krylov subspace method for computing PageRank. However, this algorithm may not be efficient when the damping factor is high and the dimension of the search subspace is small. In this paper, we first develop an extrapolation method based on Ritz values. We then consider how to periodically knit this extrapolation method together with the Arnoldi-type algorithm. The resulting algorithm is the Arnoldi-Extrapolation algorithm. The convergence of the new algorithm is analyzed. Numerical experiments demonstrate the numerical behavior of this algorithm.
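For orientation, the sketch below drives a plain restarted Arnoldi iteration on a tiny Google matrix, restarting from the Ritz vector of the largest Ritz value. The Ritz-value extrapolation step that the paper adds is deliberately omitted, and the toy link matrix, damping factor, and subspace size are illustrative assumptions.

```python
# Restarted Arnoldi iteration for PageRank (extrapolation step omitted).
import numpy as np

def google_matrix(P, alpha=0.99):
    n = P.shape[0]
    return alpha * P + (1.0 - alpha) * np.ones((n, n)) / n

def arnoldi_pagerank(G, m=3, max_restarts=50, tol=1e-9):
    n = G.shape[0]
    q = np.ones(n) / n
    for _ in range(max_restarts):
        # m-step Arnoldi factorization G Q_m = Q_{m+1} H.
        Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        Q[:, 0] = q / np.linalg.norm(q)
        for j in range(m):
            w = G @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
        # Ritz pair for the largest Ritz value restarts the next cycle.
        vals, vecs = np.linalg.eig(H[:m, :m])
        k = np.argmax(np.abs(vals))
        q = np.real(Q[:, :m] @ vecs[:, k])
        q = np.abs(q) / np.abs(q).sum()       # PageRank vector is nonnegative
        if np.linalg.norm(G @ q - q, 1) < tol:
            return q
    return q

# 4-page toy web graph (column-stochastic link matrix).
P = np.array([[0, 0, 1/2, 0], [1/3, 0, 0, 1/2],
              [1/3, 1/2, 0, 1/2], [1/3, 1/2, 1/2, 0]])
print(arnoldi_pagerank(google_matrix(P)))
```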

6.
刘瑶宁 (Liu Yaoning), 《计算数学》 (Mathematica Numerica Sinica), 2022, 44(2): 187-205
The coefficient matrix of the discrete linear system obtained from a finite difference discretization of a class of space fractional diffusion equations is the sum of the products of two diagonal matrices with Toeplitz-type matrices. In this paper, for the discrete linear systems of nearly isotropic two- or three-dimensional space fractional diffusion equations, we adopt preconditioned Krylov subspace iteration methods and, exploiting the special structure and concrete properties of the coefficient matrix, construct a class of block fast regularized Hermitian splitting preconditioners. Through theoretical analysis, we prove that most eigenvalues of the corresponding preconditioned matrix cluster in the neighborhood of 1. Numerical experiments also show that these block fast regularized Hermitian splitting preconditioners can significantly accelerate the convergence of Krylov subspace iteration methods such as the generalized minimal residual (GMRES) method and the stabilized biconjugate gradient (BiCGSTAB) method.
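The sketch below only illustrates the general mechanism of supplying a structure-exploiting preconditioner to GMRES and BiCGSTAB through a LinearOperator. The Strang circulant approximation and the synthetic Toeplitz test matrix are stand-in assumptions; they are not the block fast regularized Hermitian splitting preconditioner constructed in the paper.

```python
# Structure-exploiting (circulant) preconditioning of a Toeplitz system in GMRES/BiCGSTAB.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres, bicgstab

n = 512
# Illustrative symmetric Toeplitz matrix with slowly decaying off-diagonals,
# loosely mimicking a (fractional) diffusion stiffness matrix.
col = np.zeros(n); col[0] = 2.0
col[1:] = -1.0 / np.arange(2, n + 1) ** 2.2
A = toeplitz(col)
b = np.ones(n)

# Strang circulant approximation: wrap the central diagonals of A and apply
# its inverse with FFTs (O(n log n) per application).
c = col.copy()
c[n // 2 + 1:] = col[1:n // 2][::-1]
eig_C = np.fft.fft(c)                         # eigenvalues of the circulant

def apply_Cinv(r):
    return np.real(np.fft.ifft(np.fft.fft(r) / eig_C))

M = LinearOperator((n, n), matvec=apply_Cinv)

x_g, info_g = gmres(A, b, M=M)
x_b, info_b = bicgstab(A, b, M=M)
print(info_g, info_b, np.linalg.norm(A @ x_g - b))
```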

7.
We discuss a class of deflated block Krylov subspace methods for solving large scale matrix eigenvalue problems. The efficiency of an Arnoldi-type method is examined in computing partial or closely clustered eigenvalues of large matrices. As an improvement, we also propose a refined variant of the Arnoldi-type method. Comparisons show that the refined variant can further improve the Arnoldi-type method and both methods exhibit very regular convergence behavior.

8.
A new implicitly-restarted Krylov subspace method for real symmetric/skew-symmetric generalized eigenvalue problems is presented. The new method improves and generalizes the SHIRA method of Mehrmann and Watkins (2001) [37] to the case where the skew-symmetric matrix is singular. It computes a few eigenvalues and eigenvectors of the matrix pencil close to a given target point. Several applications from control theory are presented and the properties of the new method are illustrated by benchmark examples.

9.
Uncertainty quantification for linear inverse problems remains a challenging task, especially for problems with a very large number of unknown parameters (e.g., dynamic inverse problems) and for problems where computation of the square root and inverse of the prior covariance matrix are not feasible. This work exploits Krylov subspace methods to develop and analyze new techniques for large-scale uncertainty quantification in inverse problems. In this work, we assume that generalized Golub-Kahan-based methods have been used to compute an estimate of the solution, and we describe efficient methods to explore the posterior distribution. In particular, we use the generalized Golub-Kahan bidiagonalization to derive an approximation of the posterior covariance matrix, and we provide theoretical results that quantify the accuracy of the approximate posterior covariance matrix and of the resulting posterior distribution. Then, we describe efficient methods that use the approximation to compute measures of uncertainty, including the Kullback-Leibler divergence. We present two methods that use the preconditioned Lanczos algorithm to efficiently generate samples from the posterior distribution. Numerical examples from dynamic photoacoustic tomography demonstrate the effectiveness of the described approaches.
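A simplified sketch of the covariance-approximation idea is given below for the special case of a gamma^2*I prior and sigma^2*I noise, so the plain (rather than generalized) Golub-Kahan bidiagonalization suffices. The forward operator, the number of bidiagonalization steps, and the Woodbury-style low-rank update are illustrative assumptions rather than the paper's algorithm.

```python
# Low-rank approximation of a Gaussian posterior covariance via Golub-Kahan bidiagonalization.
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A.T @ U[:, j] - (B[j, j - 1] if j > 0 else 0.0) * V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)          # reorthogonalize for robustness
        alpha = np.linalg.norm(w); V[:, j] = w / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta; B[j + 1, j] = beta
    return U, B, V

# Toy ill-posed problem: a smoothing (blur-like) forward operator.
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.02 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t)
sigma, gamma = 1e-2, 1.0
b = A @ x_true + sigma * rng.standard_normal(n)

k = 30
U, B, V = golub_kahan(A, b, k)
# Approximate A^T A by V_k (B_k^T B_k) V_k^T and apply a Woodbury-type identity:
# post_cov ~ gamma^2 * (I - Z diag(theta^2 / (theta^2 + sigma^2/gamma^2)) Z^T), Z = V W.
theta2, W = np.linalg.eigh(B.T @ B)
Z = V @ W
shrink = theta2 / (theta2 + sigma ** 2 / gamma ** 2)
post_cov_approx = gamma ** 2 * (np.eye(n) - (Z * shrink) @ Z.T)
print(np.sqrt(np.diag(post_cov_approx))[:5])   # approximate posterior standard deviations
```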

10.
The minimization principle and Cauchy-like interlacing inequalities for the generalized linear response eigenvalue problem are presented. Based on these theoretical results, the best approximations through structure-preserving subspace projection and a locally optimal block conjugate gradient-like algorithm for simultaneously computing the first few smallest eigenvalues with the positive sign are proposed. Numerical results are presented to illustrate essential convergence behaviors of the proposed algorithm.
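As a loosely related illustration of the locally optimal block (preconditioned) conjugate gradient idea, the sketch below calls SciPy's general-purpose lobpcg on an ordinary symmetric eigenproblem with a Jacobi preconditioner. It is not the structure-preserving algorithm for the linear response problem proposed here; the test matrix, block size, and preconditioner are assumptions.

```python
# Locally optimal block preconditioned CG for a few smallest eigenvalues (SciPy's lobpcg).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, LinearOperator

n = 500
main = np.arange(1.0, n + 1)
A = sp.diags([0.5 * np.ones(n - 1), main, 0.5 * np.ones(n - 1)], [-1, 0, 1], format="csr")

# Simple diagonal (Jacobi) preconditioner as a stand-in for a problem-specific one.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((n, 4))                   # block of 4 starting vectors
vals, vecs = lobpcg(A, X0, M=M, largest=False, tol=1e-8, maxiter=200)
print(np.sort(vals))                               # four smallest eigenvalues
```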

11.
The locally optimal block preconditioned 4-d conjugate gradient method (LOBP4dCG) for the linear response eigenvalue problem was proposed by Bai and Li (2013) and later extended to the generalized linear response eigenvalue problem by Bai and Li (2014). We put forward two improvements to the method: a shifting deflation technique and an idea of extending the search subspace. The deflation technique deflates converged eigenpairs away from future computation, and the extended search subspace increases the convergence rate per iterative step. The resulting algorithm is called the extended LOBP4dCG (ELOBP4dCG). Numerical results for the ELOBP4dCG demonstrate the capability of the deflation technique and the effectiveness of the search-subspace extension for solving linear response eigenvalue problems arising from the linear response analysis of two molecular systems.

12.
For the large sparse block two-by-two real nonsingular matrices, we establish a general framework of practical and efficient structured preconditioners through matrix transformation and matrix approximations. For the specific versions such as modified block Jacobi-type, modified block Gauss-Seidel-type, and modified block unsymmetric (symmetric) Gauss-Seidel-type preconditioners, we precisely describe their concrete expressions and deliberately analyze eigenvalue distributions and positive definiteness of the preconditioned matrices. Also, we show that when these structured preconditioners are employed to precondition the Krylov subspace methods such as GMRES and restarted GMRES, fast and effective iteration solvers can be obtained for the large sparse systems of linear equations with block two-by-two coefficient matrices. In particular, these structured preconditioners can lead to efficient and high-quality preconditioning matrices for some typical matrices from the real-world applications.
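A minimal sketch of the mechanism is shown below: a block lower-triangular (Gauss-Seidel-type) preconditioner for a two-by-two block matrix is applied inside GMRES via a LinearOperator. The paper's modified variants replace the diagonal blocks by cheaper approximations; here exact sparse LU sub-solves and made-up blocks are used purely for illustration.

```python
# Block Gauss-Seidel-type preconditioning of a 2-by-2 block system inside GMRES.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

m = 300
A = sp.diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(m, m), format="csc")
D = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m), format="csc")
B = 0.1 * sp.random(m, m, density=0.01, random_state=0, format="csc")
C = 0.1 * sp.random(m, m, density=0.01, random_state=1, format="csc")
K = sp.bmat([[A, B], [C, D]], format="csc")           # block two-by-two matrix
b = np.ones(2 * m)

luA, luD = splu(A), splu(D)

def apply_P_inv(r):
    """Solve P z = r with P = [[A, 0], [C, D]] (block forward substitution)."""
    r1, r2 = r[:m], r[m:]
    z1 = luA.solve(r1)
    z2 = luD.solve(r2 - C @ z1)
    return np.concatenate([z1, z2])

P_inv = LinearOperator((2 * m, 2 * m), matvec=apply_P_inv)

x, info = gmres(K, b, M=P_inv, restart=50, maxiter=200)
print(info, np.linalg.norm(K @ x - b))
```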

13.
Recently, Bai et al. (2013) proposed an effective and efficient matrix splitting iterative method, called the preconditioned modified Hermitian/skew-Hermitian splitting (PMHSS) iteration method, for two-by-two block linear systems of equations. The eigenvalue distribution of the iteration matrix suggests that the splitting matrix could be advantageously used as a preconditioner. In this study, the CGNR method is utilized for solving the PMHSS preconditioned linear systems, and the performance of the method is considered by estimating the condition number of the normal equations. Furthermore, the proposed method is compared with other PMHSS preconditioned Krylov subspace methods by solving linear systems arising from complex partial differential equations and a distributed control problem. The numerical results demonstrate the difference in the performance of the methods under consideration.
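For reference, the sketch below shows the bare CGNR mechanism, i.e., CG applied to the normal equations A^T A x = A^T b through a LinearOperator so that A^T A is never formed explicitly. The PMHSS preconditioning step studied in the paper is omitted, and the nonsymmetric test matrix is an assumption.

```python
# CGNR: conjugate gradients on the normal equations, without forming A^T A.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n = 400
A = sp.diags([-1.2, 3.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")  # nonsymmetric
b = np.ones(n)

normal_op = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))  # applies A^T A
x, info = cg(normal_op, A.T @ b)
print(info, np.linalg.norm(A @ x - b))
```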

14.
For the structured 3×3 block linear saddle-point systems arising from the Galerkin finite element discretization of elliptic PDE-constrained optimization problems, a preconditioner is proposed, and explicit expressions for the eigenvalues and eigenvectors of the preconditioned matrix are derived. Numerical results show that the preconditioner effectively accelerates the convergence rate of Krylov subspace methods and also confirm the theoretical results.

15.
Generalized block Lanczos methods for large unsymmetric eigenproblems, which contain the block Arnoldi method, are presented, and the corresponding block Arnoldi algorithms are developed. The convergence of this class of methods is analyzed when the matrix A is diagonalizable. Upper bounds for the distances between normalized eigenvectors and a block Krylov subspace are derived, and a priori theoretical error bounds for Ritz elements are established. Compared with generalized Lanczos methods, which contain Arnoldi's method, the convergence analysis shows that the block versions have two advantages: first, they may be efficient for computing clustered eigenvalues; second, they are able to solve multiple eigenproblems. A deeper analysis shows, however, that for a general unsymmetric matrix A the approximate eigenvectors (Ritz vectors) obtained by general orthogonal projection methods, including the generalized block methods, may fail to converge theoretically even when the corresponding approximate eigenvalues (Ritz values) do, since the convergence of Ritz vectors requires stronger sufficient conditions, which may be impossible to satisfy theoretically, than that of Ritz values. The issues of how to restart and how to solve multiple eigenproblems are addressed, and some numerical examples are reported to confirm the theoretical analysis.

16.
The paper deals with eigenvalue estimates for block incomplete factorization methods for symmetric matrices. First, some previous results on upper bounds for the maximum eigenvalue of preconditioned matrices are generalized to each eigenvalue. Second, upper bounds for the maximum eigenvalue of the preconditioned matrix are further estimated, which presents a substantial improvement over earlier results. Finally, the results are used to estimate bounds for every eigenvalue of the preconditioned matrices, in particular for the maximum eigenvalue, when a modified block incomplete factorization is used to solve an elliptic equation with variable coefficients in two dimensions. The analysis yields a new upper bound for the condition number of the preconditioned matrix and shows clearly how the coefficients of the differential equation influence the positive constant in this bound.

17.
The global Arnoldi method can be used to compute exterior eigenpairs of a large non-Hermitian matrix A, but it does not work well for interior eigenvalue problems. Based on the global Arnoldi process that generates an F-orthonormal basis of a matrix Krylov subspace, we propose a global harmonic Arnoldi method for computing certain harmonic F-Ritz pairs that are used to approximate some interior eigenpairs. We propose computing the F-Rayleigh quotients of the large non-Hermitian matrix with respect to harmonic F-Ritz vectors and taking them as new approximate eigenvalues. They are better and more reliable than the harmonic F-Ritz values. The global harmonic Arnoldi method inherits convergence properties of the harmonic Arnoldi method applied to a larger matrix whose distinct eigenvalues are the same as those of the original given matrix. Some properties of the harmonic F-Ritz vectors are presented. As an application, assuming that A is diagonalizable, we show that the global harmonic Arnoldi method is able to solve multiple eigenvalue problems both in theory and in practice. To be practical, we develop an implicitly restarted global harmonic Arnoldi algorithm with certain harmonic F-shifts suggested. In particular, this algorithm can be adaptively used to solve multiple eigenvalue problems. Numerical experiments show that the algorithm is efficient for the eigenproblem and is reliable for quite ill-conditioned multiple eigenproblems.

18.
The regularizing properties of the Golub–Kahan bidiagonalization algorithm are powerful when the associated Krylov subspace captures the dominating components of the solution. In some applications the regularized solution can be further improved by enrichment, that is, by augmenting the Krylov subspace with a low-dimensional subspace that represents specific prior information. Inspired by earlier work on GMRES, we demonstrate how to carry these ideas over to the bidiagonalization algorithm, and we describe how to incorporate Tikhonov regularization. This leads to a hybrid iterative method where the choice of regularization parameter in each iteration also provides a stopping rule.
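A brief sketch of the Tikhonov part of such a hybrid method is given below using SciPy's lsqr, which is built on Golub-Kahan bidiagonalization and whose damp argument adds the penalty lambda^2*||x||^2 to the least-squares problem. The subspace enrichment with prior vectors and the per-iteration parameter choice described above are not reproduced, and the blur-type test problem is an assumption.

```python
# Tikhonov-regularized Golub-Kahan bidiagonalization via lsqr's damp parameter.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.01 ** 2))   # smoothing operator
A /= A.sum(axis=1, keepdims=True)
x_true = (t > 0.3).astype(float) - (t > 0.7).astype(float)      # piecewise-constant signal
b = A @ x_true + 1e-3 * rng.standard_normal(n)

for lam in (1e-4, 1e-2, 1e-1):
    x = lsqr(A, b, damp=lam, iter_lim=200)[0]
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"lambda={lam:.0e}  relative error={err:.3f}")
```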

19.
Eigenvalues and eigenvectors of a large sparse symmetric matrix A can be found accurately and often very quickly using the Lanczos algorithm without reorthogonalization. The algorithm gives essentially correct information on the eigensystem of A, although it does not necessarily give the correct multiplicity of multiple, or even single, eigenvalues. It is straightforward to determine a useful bound on the accuracy of every eigenvalue given by the algorithm. The initial behavior of the algorithm is surprisingly good: it produces vectors spanning the Krylov subspace of a matrix very close to A until this subspace contains an exact eigenvector of a matrix very close to A, and up to this point the effective behavior of the algorithm for the eigenproblem is very like that of the Lanczos algorithm using full reorthogonalization. This helps to explain the remarkable behavior of the basic Lanczos algorithm.
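The sketch below runs the plain three-term Lanczos recurrence without any reorthogonalization and reports the Ritz values of the resulting tridiagonal matrix; extreme eigenvalues are located accurately even though duplicate ("ghost") copies of converged eigenvalues may appear, which is the behavior discussed above. The diagonal test matrix and step count are illustrative assumptions.

```python
# Plain Lanczos recurrence without reorthogonalization.
import numpy as np

def lanczos_no_reorth(A, v0, k):
    n = len(v0)
    alpha = np.zeros(k); beta = np.zeros(k - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    for j in range(k):
        w = A @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0.0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)              # Ritz values

rng = np.random.default_rng(0)
d = np.linspace(1, 100, 500)                  # exact eigenvalues of the test matrix
A = np.diag(d)
ritz = lanczos_no_reorth(A, rng.standard_normal(500), k=80)
# Extreme eigenvalues are found accurately; duplicate "ghost" copies may appear.
print(ritz[:3], ritz[-3:])
```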

20.
The aim of this paper is to provide a convergence analysis for a preconditioned subspace iteration, which is designed to determine a modest number of the smallest eigenvalues and the corresponding invariant subspace of eigenvectors of a large, symmetric positive definite matrix. The algorithm is built upon a subspace implementation of preconditioned inverse iteration, i.e., the well-known inverse iteration procedure, where the associated system of linear equations is solved approximately by using a preconditioner. This step is followed by a Rayleigh-Ritz projection so that preconditioned inverse iteration is always applied to the Ritz vectors of the current subspace of approximate eigenvectors. The given theory provides sharp convergence estimates for the Ritz values and is mainly built on arguments exploiting the geometry underlying preconditioned inverse iteration.
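A minimal sketch of the iteration being analyzed is given below: each sweep applies a preconditioned inverse-iteration (residual correction) step to a block of vectors and then performs a Rayleigh-Ritz projection on the updated subspace. The Jacobi preconditioner and the nearly diagonal test matrix are illustrative assumptions, not the setting of the paper's sharp estimates.

```python
# Preconditioned inverse iteration on a block of vectors with Rayleigh-Ritz projection.
import numpy as np
import scipy.sparse as sp

def precond_subspace_iteration(A, apply_Binv, X0, n_iter=50):
    X, _ = np.linalg.qr(X0)
    theta = np.diag(X.T @ (A @ X))            # initial Rayleigh quotients
    for _ in range(n_iter):
        # Preconditioned inverse-iteration step: X <- X - B^{-1} (A X - X diag(theta)).
        R = A @ X - X * theta
        X, _ = np.linalg.qr(X - apply_Binv(R))
        # Rayleigh-Ritz projection onto the updated subspace.
        theta, W = np.linalg.eigh(X.T @ (A @ X))
        X = X @ W
    return theta, X

n = 400
main = np.arange(1.0, n + 1)
A = sp.diags([0.5 * np.ones(n - 1), main, 0.5 * np.ones(n - 1)], [-1, 0, 1], format="csr")
apply_Binv = lambda R: R / main[:, None]      # Jacobi (diagonal) preconditioner, columnwise
rng = np.random.default_rng(0)
theta, X = precond_subspace_iteration(A, apply_Binv, rng.standard_normal((n, 4)))
print(theta)                                   # approximates the 4 smallest eigenvalues
print(np.linalg.eigvalsh(A.toarray())[:4])     # reference values
```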
