Similar Documents (20 results found)
1.
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax = λBx. We first formulate a version of inexact inverse subspace iteration in which the approximation from one step is used as an initial approximation for the next step. We then analyze the convergence property, which relates the accuracy in the inner iteration to the convergence rate of the outer iteration. In particular, the linear convergence property of the inverse subspace iteration is preserved. Numerical examples are given to demonstrate the theoretical results.
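A minimal sketch of the basic scheme described above, assuming NumPy/SciPy (the function name, tolerances and solver choice are illustrative, not the paper's exact algorithm): each outer step solves the block system A Y = B X only approximately with an inner iterative solver, and the resulting subspace seeds the next outer step.

```python
import numpy as np
from scipy.linalg import eig
from scipy.sparse.linalg import gmres

def inexact_inverse_subspace_iteration(A, B, X0, inner_tol=1e-2, outer_steps=20):
    """Illustrative sketch: inexact inverse subspace iteration for A x = lambda B x.

    Each outer step approximately solves A Y = B X column by column (the
    "inner iteration"), re-orthonormalizes Y, and uses it as the starting
    subspace for the next step.  The inner solves are deliberately loose.
    """
    X, _ = np.linalg.qr(X0)
    for _ in range(outer_steps):
        RHS = B @ X
        Y = np.zeros_like(X, dtype=float)
        for j in range(X.shape[1]):
            # inexact inner solve (the keyword is `tol` in older SciPy releases)
            Y[:, j], _ = gmres(A, RHS[:, j], x0=X[:, j], rtol=inner_tol)
        X, _ = np.linalg.qr(Y)          # orthonormal basis of the new subspace
    # Rayleigh-Ritz extraction of approximate eigenpairs from the final subspace
    vals, vecs = eig(X.T @ (A @ X), X.T @ (B @ X))
    return vals, X @ vecs
```

In practice the inner tolerance can be relaxed relative to the outer progress; that coupling between inner accuracy and outer convergence rate is exactly what the analysis summarized above quantifies.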

2.
The finite difference discretization of the spatial fractional diffusion equations gives discretized linear systems whose coefficient matrices have a diagonal-plus-Toeplitz structure. For solving these diagonal-plus-Toeplitz linear systems, we construct a class of diagonal and Toeplitz splitting iteration methods and establish its unconditional convergence theory. In particular, we derive a sharp upper bound on its asymptotic convergence rate and deduce the optimal value of its iteration parameter. The diagonal and Toeplitz splitting iteration method naturally leads to a diagonal and circulant splitting preconditioner. Analysis shows that the eigenvalues of the corresponding preconditioned matrix are clustered around 1, especially when the discretization step-size h is small. Numerical results exhibit that the diagonal and circulant splitting preconditioner can significantly improve the convergence properties of GMRES and BiCGSTAB, and that these preconditioned Krylov subspace iteration methods outperform the conjugate gradient method preconditioned by the approximate inverse circulant-plus-diagonal preconditioner proposed recently by Ng and Pan (M.K. Ng and J.-Y. Pan, SIAM J. Sci. Comput. 2010;32:1442-1464). Moreover, unlike this preconditioned conjugate gradient method, the preconditioned GMRES and BiCGSTAB methods show h-independent convergence behavior even for spatial fractional diffusion equations with discontinuous or big-jump coefficients.
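For illustration only, here is a generic circulant-based preconditioner for a diagonal-plus-Toeplitz matrix, assuming NumPy/SciPy: it uses a Strang-type circulant approximation of the Toeplitz part and, as a deliberate simplification that is not the paper's diagonal and circulant splitting construction, replaces the diagonal part by its mean so that the preconditioner can be applied with FFTs.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

def strang_circulant_column(t_col):
    """First column of a Strang-type circulant approximation to the symmetric
    Toeplitz matrix toeplitz(t_col): the nearest diagonals are wrapped around."""
    c = np.asarray(t_col, dtype=float).copy()
    n = len(c)
    for k in range(n // 2 + 1, n):
        c[k] = t_col[n - k]
    return c

def circulant_plus_mean_diag_preconditioner(d, t_col):
    """Preconditioner sketch for A = diag(d) + toeplitz(t_col): the circulant
    matrix M = mean(d)*I + C is diagonalized by the FFT, so applying M^{-1}
    costs O(n log n) per vector."""
    lam = np.fft.fft(strang_circulant_column(t_col)) + np.mean(d)
    n = len(d)
    apply_inv = lambda r: np.real(np.fft.ifft(np.fft.fft(r) / lam))
    return LinearOperator((n, n), matvec=apply_inv)

# toy usage: preconditioned GMRES on a diagonal-plus-Toeplitz system
n = 256
d = 1.0 + np.random.rand(n)                   # positive diagonal part
t_col = 1.0 / (1.0 + np.arange(n)) ** 1.5     # decaying Toeplitz kernel
A = np.diag(d) + toeplitz(t_col)
b = np.ones(n)
x, info = gmres(A, b, M=circulant_plus_mean_diag_preconditioner(d, t_col), rtol=1e-8)
```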

3.
Inverse iteration is a simple but not very efficient method for computing a few eigenvalues of minimal absolute value, and the corresponding eigenvectors, of a symmetric matrix. The idea is to increase its efficiency by a technique similar to the multigrid methods used for solving linear systems. This approach is not new, but until now multigrid has mostly been used for solving the linear systems which appear in Rayleigh quotient iteration, inverse iteration and related iterative methods. Instead of choosing appropriate coordinates (grids), our algorithm performs inverse iteration on a sequence of subspaces with decreasing dimensions (a multispace). The block Lanczos method is used for the selection of a smaller subspace. This produces a banded matrix, which makes inverse iteration even faster in the smaller dimensions.

4.
We incorporate our recent preconditioning techniques into the classical inverse power (Rayleigh quotient) iteration for computing matrix eigenvectors. Every loop of this iteration essentially amounts to solving an ill-conditioned linear system of equations. Due to our modification we solve a well-conditioned linear system instead. We prove that this modification preserves local quadratic convergence, show experimentally that fast global convergence is preserved as well, and obtain similar results for higher-order inverse iteration, covering the cases of multiple and clustered eigenvalues.
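As a point of reference, here is a minimal NumPy sketch of the classical (unpreconditioned) Rayleigh quotient iteration; the shifted solve in each loop is exactly the ill-conditioned system that the modification described above replaces by a well-conditioned one. This is not the authors' preconditioned variant, and the stopping rule is illustrative.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, maxit=20, tol=1e-12):
    """Classical Rayleigh quotient iteration for a symmetric matrix A.
    Each step solves the nearly singular shifted system (A - rho*I) y = x,
    which becomes ill-conditioned precisely as rho approaches an eigenvalue."""
    x = x0 / np.linalg.norm(x0)
    rho = x @ (A @ x)
    n = A.shape[0]
    for _ in range(maxit):
        y = np.linalg.solve(A - rho * np.eye(n), x)   # ill-conditioned solve
        x = y / np.linalg.norm(y)
        rho_new = x @ (A @ x)
        if abs(rho_new - rho) <= tol * abs(rho_new):
            rho = rho_new
            break
        rho = rho_new
    return rho, x
```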

5.
In order to compute the smallest eigenvalue together with an eigenfunction of a self-adjoint elliptic partial differential operator, one can use the preconditioned inverse iteration scheme, also called the preconditioned gradient iteration. For this iterative eigensolver, estimates on the poorest convergence have been published by several authors. In this paper, estimates on the fastest possible convergence are derived. To this end the convergence problem is reformulated as a two-level constrained optimization problem for the Rayleigh quotient. The new convergence estimates reveal a wide range between the fastest possible and the slowest convergence.
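A compact sketch of the preconditioned inverse iteration / preconditioned gradient iteration in its textbook single-vector form, assuming NumPy; here `precond` is an assumed callable applying an approximate inverse of the discretized operator (e.g. one multigrid cycle), and the stopping rule is illustrative.

```python
import numpy as np

def preconditioned_inverse_iteration(A, precond, x0, maxit=500, tol=1e-8):
    """Preconditioned gradient iteration for the smallest eigenvalue of a
    symmetric positive definite matrix A:
        x <- x - precond(A x - rho(x) x),  rho(x) = Rayleigh quotient,
    where precond(r) approximates A^{-1} r."""
    x = x0 / np.linalg.norm(x0)
    rho = x @ (A @ x)
    for _ in range(maxit):
        r = A @ x - rho * x              # eigenvalue residual (||x|| = 1)
        if np.linalg.norm(r) <= tol * abs(rho):
            break
        x = x - precond(r)               # preconditioned gradient step
        x /= np.linalg.norm(x)
        rho = x @ (A @ x)
    return rho, x
```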

6.
In this paper, we will compare the convergence properties of three basic reduction methods by placing them in a general framework. It covers the reduction to tridiagonal, semiseparable and semiseparable-plus-diagonal form. These reductions are often used as the first step in the computation of the eigenvalues and/or eigenvectors of arbitrary matrices; in this way, the calculation of the eigenvalues using, for example, the QR-algorithm is reduced in complexity. First we will investigate the convergence properties of these three reduction algorithms. It will be shown that for the partially reduced matrices at step k of any of these reduction algorithms, the lower right k × k (already reduced) sub-block will have the Lanczos-Ritz values, w.r.t. a certain starting vector. It will also be shown that the reductions to semiseparable and to semiseparable-plus-diagonal form have an extra convergence behavior: a special type of subspace iteration is performed on the lower right k × k submatrix, which contains these Ritz values. Secondly we look in more detail at the behavior of the involved subspace iteration. It will be shown that the reduction method can be interpreted as a nested type of multi-shift iteration. Theoretical results will be presented, making it possible to predict the convergence behavior of these reduction algorithms. A theoretical bound on the convergence rate is also presented. Finally we illustrate by means of numerical examples how it is possible to tune the convergence behavior so that it becomes a powerful tool for certain applications.

7.
This paper deals with the convergence analysis of various preconditioned iterations to compute the smallest eigenvalue of a discretized self-adjoint and elliptic partial differential operator. For these eigenproblems several preconditioned iterative solvers are known, but unfortunately the convergence theory for some of these solvers is not very well understood. The aim is to show that preconditioned eigensolvers (like the preconditioned steepest descent iteration (PSD) and the locally optimal preconditioned conjugate gradient method (LOPCG)) can be interpreted as truncated approximate Krylov subspace iterations. In the limit of preconditioning with the exact inverse of the system matrix (such preconditioning can be approximated by multiple steps of a preconditioned linear solver), the iterations behave like Invert-Lanczos processes, for which convergence estimates are derived.
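To make the LOPCG idea above concrete, here is a simplified single-vector, locally optimal sketch in NumPy/SciPy (not the exact scheme analyzed in the paper; `precond` is an assumed callable): each step minimizes the Rayleigh quotient over the span of the current iterate, the preconditioned residual and the previous iterate via a small Rayleigh-Ritz problem.

```python
import numpy as np
from scipy.linalg import eigh

def lopcg_sketch(A, precond, x0, maxit=300, tol=1e-8):
    """Locally optimal preconditioned CG sketch (block size one): minimize the
    Rayleigh quotient over span{x_k, precond(r_k), x_{k-1}} in every step."""
    x = x0 / np.linalg.norm(x0)
    x_prev = None
    rho = x @ (A @ x)
    for _ in range(maxit):
        r = A @ x - rho * x                          # eigenvalue residual
        if np.linalg.norm(r) <= tol * abs(rho):
            break
        w = precond(r)                               # preconditioned residual
        cols = [x, w] if x_prev is None else [x, w, x_prev]
        S, _ = np.linalg.qr(np.column_stack(cols))   # orthonormal trial basis
        theta, C = eigh(S.T @ (A @ S))               # small Rayleigh-Ritz problem
        x_prev = x
        x = S @ C[:, 0]                              # Ritz vector of smallest Ritz value
        x /= np.linalg.norm(x)
        rho = theta[0]
    return rho, x
```

Dropping the `x_prev` term from the trial subspace reduces this sketch to the PSD iteration mentioned in the abstract.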

8.
The topic of this paper is the convergence analysis of subspace gradient iterations for the simultaneous computation of a few of the smallest eigenvalues plus eigenvectors of a symmetric and positive definite matrix pair (A, M). The methods are based on subspace iterations for A^{-1}M and use the Rayleigh-Ritz procedure for convergence acceleration. New sharp convergence estimates are proved by generalizing estimates which have been presented for vectorial steepest descent iterations (see SIAM J. Matrix Anal. Appl., 32(2):443-456, 2011). Copyright © 2013 John Wiley & Sons, Ltd.

9.
In this paper, an algorithm based on a shifted inverse power iteration for computing generalized eigenvalues with corresponding eigenvectors of a large scale sparse symmetric positive definite matrix pencil is presented. It converges globally with a cubic asymptotic convergence rate, preserves sparsity of the original matrices and is fully parallelizable. The algebraic multilevel iteration method (AMLI) is used to improve the efficiency when symmetric positive definite linear equations need to be solved.

10.
We construct, analyze, and implement SSOR-like preconditioners for non-Hermitian positive definite systems of linear equations whose coefficient matrices possess either a dominant Hermitian part or a dominant skew-Hermitian part. We derive tight bounds for eigenvalues of the preconditioned matrices and obtain convergence rates of the corresponding SSOR-like iteration methods as well as the corresponding preconditioned GMRES iteration methods. Numerical implementations show that Krylov subspace iteration methods such as GMRES, when accelerated by the SSOR-like preconditioners, are efficient solvers for these classes of non-Hermitian positive definite linear systems. Copyright © 2015 John Wiley & Sons, Ltd.

11.
刘瑶宁. 《计算数学》, 2022, 44(2): 187-205
The finite difference discretization of a class of spatial fractional diffusion equations yields discretized linear systems whose coefficient matrices are a sum of products of diagonal matrices and Toeplitz matrices. In this paper, for the discretized linear systems arising from nearly isotropic two- or three-dimensional spatial fractional diffusion equations, preconditioned Krylov subspace iteration methods are employed, and a class of block fast regularized Hermitian splitting preconditioners is constructed by exploiting the special structure and concrete properties of the coefficient matrices. Theoretical analysis proves that most of the eigenvalues of the corresponding preconditioned matrices are clustered around 1. Numerical experiments also show that this class of block fast regularized Hermitian splitting preconditioners can noticeably accelerate the convergence of Krylov subspace iteration methods such as the generalized minimal residual (GMRES) method and the stabilized biconjugate gradient (BiCGSTAB) method.

12.
白中治, 仇寿霞. 《计算数学》, 2002, 24(1): 113-128
1. Introduction. Consider the large sparse system of linear algebraic equations (1). In order to exploit the sparse structure of the coefficient matrix and to reduce storage and computational cost as much as possible, Krylov subspace iteration algorithms [1,16,23] and their preconditioned variants [6,8,13,18,19] are usually effective and practical methods for solving (1). When the coefficient matrix is symmetric positive definite, the conjugate gradient method (CG(

13.
The discretization of eigenvalue problems for partial differential operators is a major source of matrix eigenvalue problems having very large dimensions, but only some of the smallest eigenvalues together with the eigenvectors are to be determined. Preconditioned inverse iteration (a “matrix-free” method) derives from the well-known inverse iteration procedure in such a way that the associated system of linear equations is solved approximately by using a (multigrid) preconditioner. A new convergence analysis for preconditioned inverse iteration is presented. The preconditioner is assumed to satisfy some bound for the spectral radius of the error propagation matrix, resulting in a simple geometric setup. In this first part the case of poorest convergence, depending on the choice of the preconditioner, is analyzed. In the second part the dependence on all initial vectors having a fixed Rayleigh quotient is considered. The given theory provides sharp convergence estimates for the eigenvalue approximations, showing that multigrid eigenvalue/vector computations can be done with efficiency comparable to that of multigrid methods for boundary value problems.

14.
In this paper we study inexact inverse iteration for solving the generalised eigenvalue problem Ax = λMx. We show that inexact inverse iteration is a modified Newton method and hence obtain convergence rates for various versions of inexact inverse iteration for the calculation of an algebraically simple eigenvalue. In particular, if the inexact solves are carried out with a tolerance chosen proportional to the eigenvalue residual then quadratic convergence is achieved. We also show how modifying the right hand side in inverse iteration still provides a convergent method, but the rate of convergence will be quadratic only under certain conditions on the right hand side. We discuss the implications of this for the preconditioned iterative solution of the linear systems. Finally we introduce a new ILU preconditioner which is a simple modification to the usual preconditioner, but which has advantages both for the standard form of inverse iteration and for the version with a modified right hand side. Numerical examples are given to illustrate the theoretical results. AMS subject classification (2000): 65F15, 65F10
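A hedged NumPy/SciPy sketch of inexact inverse iteration with a residual-proportional inner tolerance, the coupling highlighted above; the shift strategy, constants and solver choice are illustrative rather than the paper's.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_inverse_iteration(A, M, x0, sigma, maxit=30, c=0.1, tol=1e-10):
    """Inexact inverse iteration for A x = lambda M x with shift sigma.
    The inner GMRES tolerance is chosen proportional to the current
    eigenvalue residual, as described in the abstract above."""
    x = x0 / np.linalg.norm(x0)
    lam = (x @ (A @ x)) / (x @ (M @ x))
    for _ in range(maxit):
        res = A @ x - lam * (M @ x)                  # eigenvalue residual
        if np.linalg.norm(res) <= tol:
            break
        inner_tol = max(c * np.linalg.norm(res), 1e-14)
        # inexact solve of the shifted system (keyword is `tol` in older SciPy)
        y, _ = gmres(A - sigma * M, M @ x, x0=x, rtol=inner_tol)
        x = y / np.linalg.norm(y)
        lam = (x @ (A @ x)) / (x @ (M @ x))
    return lam, x
```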

15.
Generalized block Lanczos methods for large unsymmetric eigenproblems are presented, which contain the block Arnoldi method, and the block Arnoldi algorithms are developed. The convergence of this class of methods is analyzed when the matrix A is diagonalizable. Upper bounds for the distances between normalized eigenvectors and a block Krylov subspace are derived, and a priori theoretical error bounds for Ritz elements are established. Compared with generalized Lanczos methods, which contain Arnoldi's method, the convergence analysis shows that the block versions have two advantages: first, they may be efficient for computing clustered eigenvalues; second, they are able to solve multiple eigenproblems. However, a deeper analysis shows that the approximate eigenvectors or Ritz vectors obtained by general orthogonal projection methods, including the generalized block methods, may fail to converge theoretically for a general unsymmetric matrix A even if the corresponding approximate eigenvalues or Ritz values do, since the convergence of Ritz vectors requires stronger sufficient conditions than that of Ritz values, and these conditions may be impossible to satisfy theoretically. The issues of how to restart and how to solve multiple eigenproblems are addressed, and some numerical examples are reported to confirm the theoretical analysis. Received July 7, 1994 / Revised version received March 1, 1997

16.
Ming Zhou. PAMM, 2010, 10(1): 553-554
We consider preconditioned subspace iterations for the numerical solution of discretized elliptic eigenvalue problems. For these iterative solvers, the convergence theory is still an incomplete puzzle. We generalize some results from the classical convergence theory of inverse subspace iterations, as given by Parlett, and some recent results on the convergence of preconditioned vector iterations. To this end, we use a geometric cone representation and prove some new trigonometric inequalities for subspace angles and canonical angles. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

17.
Based on the modified relaxed splitting (MRS) preconditioner proposed by Fan and Zhu (Appl. Math. Lett., 55, 18-26, 2016), an inexact modified relaxed splitting (IMRS) preconditioner is proposed for the generalized saddle point problems arising from the incompressible Navier-Stokes equations. The eigenvalues and eigenvectors of the preconditioned matrix are analyzed, and the convergence property of the corresponding iteration method is also discussed. Numerical experiments are presented to show the effectiveness of the proposed preconditioner when it is used to accelerate the convergence rate of Krylov subspace methods such as GMRES.

18.
We propose a preconditioned variant of the modified HSS (MHSS) iteration method for solving a class of complex symmetric systems of linear equations. Under suitable conditions, we prove the convergence of the preconditioned MHSS (PMHSS) iteration method and discuss the spectral properties of the PMHSS-preconditioned matrix. Numerical implementations show that the resulting PMHSS preconditioner leads to fast convergence when it is used to precondition Krylov subspace iteration methods such as GMRES and its restarted variants. In particular, both the stationary PMHSS iteration and PMHSS-preconditioned GMRES show meshsize-independent and parameter-insensitive convergence behavior for the tested numerical examples.
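For orientation, here is a NumPy sketch of the underlying (unpreconditioned) MHSS iteration for (W + iT)x = b with W symmetric positive definite and T symmetric positive semidefinite; the PMHSS method of the abstract replaces the αI factors below by αV with a symmetric positive definite V, and dense solves are used here purely for illustration.

```python
import numpy as np

def mhss(W, T, b, alpha, x0=None, maxit=500, tol=1e-10):
    """Modified HSS iteration for the complex symmetric system (W + i*T) x = b:
        (alpha*I + W) x_{k+1/2} = (alpha*I - i*T) x_k     + b
        (alpha*I + T) x_{k+1}   = (alpha*I + i*W) x_{k+1/2} - i*b
    Both half-steps only require solves with real symmetric positive
    definite matrices."""
    n = len(b)
    I = np.eye(n)
    A = W + 1j * T
    x = np.zeros(n, dtype=complex) if x0 is None else np.asarray(x0, dtype=complex)
    for _ in range(maxit):
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```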

19.
A shift splitting concept is introduced and, correspondingly, a shift-splitting iteration scheme and a shift-splitting preconditioner are presented for solving the large sparse system of linear equations whose coefficient matrix is an ill-conditioned non-Hermitian positive definite matrix. The convergence property of the shift-splitting iteration method and the eigenvalue distribution of the shift-splitting preconditioned matrix are discussed in depth, and the best possible choice of the shift is investigated in detail. Numerical computations show that the shift-splitting preconditioner can induce accurate, robust and effective preconditioned Krylov subspace iteration methods for solving the large sparse non-Hermitian positive definite systems of linear equations.
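A minimal NumPy/SciPy sketch of the shift-splitting idea described above: the splitting A = (1/2)(αI + A) − (1/2)(αI − A) induces both a stationary iteration and the preconditioner P = (1/2)(αI + A). A dense LU factorization is used here only for illustration; in practice αI + A would be factored or solved approximately in sparse form.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator

def shift_splitting_preconditioner(A, alpha):
    """Shift-splitting preconditioner P = 0.5*(alpha*I + A); applying P^{-1}
    amounts to one solve with the factored matrix alpha*I + A."""
    n = A.shape[0]
    lu = lu_factor(alpha * np.eye(n) + A)
    return LinearOperator((n, n), matvec=lambda r: 2.0 * lu_solve(lu, r))

def shift_splitting_iteration(A, b, alpha, x0=None, maxit=1000, tol=1e-10):
    """Stationary shift-splitting iteration derived from the same splitting:
        (alpha*I + A) x_{k+1} = (alpha*I - A) x_k + 2*b."""
    n = len(b)
    I = np.eye(n)
    lu = lu_factor(alpha * I + A)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(maxit):
        x = lu_solve(lu, (alpha * I - A) @ x + 2.0 * b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```

The `shift_splitting_preconditioner` object can be passed as the `M` argument of SciPy's Krylov solvers (e.g. GMRES), which is the preconditioned use emphasized in the abstract.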

20.
In this paper, we propose a two-parameter preconditioned variant of the deteriorated PSS iteration method (J. Comput. Appl. Math., 273, 41-60, 2015) for solving singular saddle point problems. Semi-convergence analysis shows that the new iteration method converges unconditionally. The new iteration method can also be regarded as a preconditioner to accelerate the convergence of Krylov subspace methods. The eigenvalue distribution of the corresponding preconditioned matrix is presented, which is instructive for the Krylov subspace acceleration. Note that, when the leading block of the saddle point matrix is symmetric, the new iteration method reduces to the preconditioned accelerated HSS iteration method (Numer. Algor., 63(3), 521-535, 2013), the semi-convergence conditions of which can be simplified by the results in this paper. To further improve the effectiveness of the new iteration method, a relaxed variant is given, which has much better convergence and spectral properties. Numerical experiments are presented to investigate the performance of the new iteration methods for solving singular saddle point problems.
