Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues located in any user-defined region of the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST naturally parallelizes the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms to solve the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly, through the solution of shifted linear systems, rather than by directly projecting the eigenvalue problem onto them, IFEAST can solve eigenvalue problems with Krylov subspaces of very large dimension without ever storing a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix-vector multiplication to solve eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
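To make the contour-filtering idea concrete, the sketch below runs a basic FEAST-style filtered subspace iteration: a quadrature rule on a circle approximates the spectral projector, each quadrature node costs one shifted linear solve, and a Rayleigh-Ritz step extracts the eigenpairs. This is an illustration only, not the authors' IFEAST code; the matrix, contour, quadrature order, and subspace size are made-up example values, and the shifted systems are solved directly (IFEAST would solve them inexactly with a Krylov method).

```python
import numpy as np

def feast_sketch(A, center, radius, m0=8, n_quad=8, n_iter=4, seed=0):
    """Contour-filtered subspace iteration in the spirit of FEAST (illustration only).
    A       : real symmetric matrix (dense, for simplicity)
    center, radius : circle in the complex plane enclosing the wanted eigenvalues
    m0      : subspace size, chosen larger than the expected eigenvalue count"""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    I = np.eye(n)
    Y = rng.standard_normal((n, m0))                 # random starting block
    t, w = np.polynomial.legendre.leggauss(n_quad)   # quadrature nodes/weights on [-1, 1]
    for _ in range(n_iter):
        Q = np.zeros((n, m0))
        for tk, wk in zip(t, w):
            theta = np.pi * tk
            z = center + radius * np.exp(1j * theta)     # node on the contour
            # Shifted solve: IFEAST would solve this system inexactly with a
            # Krylov method instead of a dense direct solve.
            X = np.linalg.solve(z * I - A, Y)
            Q += 0.5 * wk * np.real(radius * np.exp(1j * theta) * X)
        Q, _ = np.linalg.qr(Q)                           # orthonormalize the filtered block
        vals, vecs = np.linalg.eigh(Q.T @ A @ Q)         # Rayleigh-Ritz step
        Y = Q @ vecs                                     # Ritz vectors feed the next iteration
    inside = np.abs(vals - center) <= radius
    return vals[inside], Y[:, inside]

# Toy usage: eigenvalues of diag(1, ..., 10) inside the circle |z - 2| <= 1.5
A = np.diag(np.arange(1.0, 11.0))
vals, vecs = feast_sketch(A, center=2.0, radius=1.5)
print(vals)   # the Ritz values inside the contour approximate 1, 2, 3
```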

2.
Block Krylov subspace methods (KSMs) are building blocks of many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block KSMs provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations, with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method relative to extended block KSMs.
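The compression step mentioned above can be pictured as truncating a low-rank factor of the residual after each cycle. The sketch below is my illustration, not the paper's algorithm: it trims a factor Z so that Z Z^T is retained only up to a relative tolerance; the tolerance and matrix sizes are arbitrary.

```python
import numpy as np

def compress_factor(Z, tol=1e-8):
    """Truncate the columns of Z so that Z @ Z.T is kept to a relative tolerance,
    limiting the rank carried from one restart cycle to the next (sketch only)."""
    Q, R = np.linalg.qr(Z, mode='reduced')    # economy QR of the tall factor
    U, s, _ = np.linalg.svd(R)                # SVD of the small triangular factor
    keep = s > tol * s[0]                     # discard directions below the tolerance
    return Q @ U[:, keep] * s[keep]           # rank-reduced factor

# Example: a factor of numerical rank 5 stored with 40 columns
rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 40))
Zc = compress_factor(Z)
print(Z.shape, '->', Zc.shape)
print(np.linalg.norm(Z @ Z.T - Zc @ Zc.T) / np.linalg.norm(Z @ Z.T))
```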

3.
In this paper, we develop an algorithm in which the block shift-and-invert Krylov subspace method is employed to approximate a linear combination of the matrix exponential and related exponential-type functions. Such evaluations play a major role in the class of numerical methods known as exponential integrators. Based on the block shift-and-invert Krylov subspace method, we derive a low-dimensional matrix exponential that approximates the objective function. We obtain an error expansion of the approximation and show that variants of its first term can be used as reliable a posteriori error estimates and correctors. Numerical experiments illustrate that the error estimates are efficient and that the proposed algorithm is worthy of further study.
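As a single-vector illustration of the shift-and-invert idea (the paper works with a block version and with linear combinations of exponential-type functions), the sketch below builds the Krylov space with (I - gamma*A)^{-1} instead of A and undoes the shift on the small projected matrix before taking the exponential. The value of gamma, the subspace dimension, and the test matrix are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def expm_shift_invert(A, b, m=20, gamma=0.1):
    """Approximate exp(A) @ b from the shift-and-invert Krylov subspace
    K_m((I - gamma*A)^{-1}, b); a single-vector sketch of the idea only."""
    n = A.shape[0]
    lu = lu_factor(np.eye(n) - gamma * A)     # factor the shifted matrix once
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):                        # Arnoldi with the shifted inverse
        w = lu_solve(lu, V[:, j])
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:               # lucky breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    Am = (np.eye(m) - np.linalg.inv(Hm)) / gamma   # undo the shift-and-invert transform
    return beta * V[:, :m] @ expm(Am)[:, 0]

# Toy check against the dense matrix exponential
rng = np.random.default_rng(0)
A = -np.diag(np.arange(1.0, 201.0)) + 0.01 * rng.standard_normal((200, 200))
b = rng.standard_normal(200)
print(np.linalg.norm(expm_shift_invert(A, b) - expm(A) @ b))
```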

4.
刘瑶宁 (Liu Yaoning), 《计算数学》 (Mathematica Numerica Sinica), 2022, 44(2): 187-205
After finite-difference discretization, a class of spatial fractional diffusion equations leads to discrete linear systems whose coefficient matrix is the sum of products of two diagonal matrices with Toeplitz-type matrices. In this paper, for the discrete linear systems arising from almost isotropic two- or three-dimensional spatial fractional diffusion equations, we employ preconditioned Krylov subspace iteration methods and, exploiting the special structure and concrete properties of the coefficient matrix, construct a class of block fast regularized Hermitian splitting preconditioners. Through theoretical analysis, we prove that most of the eigenvalues of the corresponding preconditioned matrix cluster around 1. Numerical experiments also show that this class of block fast regularized Hermitian splitting preconditioners markedly accelerates the convergence of Krylov subspace iteration methods such as the generalized minimal residual (GMRES) method and the stabilized biconjugate gradient (BiCGSTAB) method.
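For concreteness, the sketch below assembles a small system with the structure described above (a sum of diagonal-times-Toeplitz products, with made-up coefficients rather than the true fractional-diffusion stencil) and runs preconditioned GMRES. The simple Hermitian-splitting preconditioner used here is only a crude stand-in for the block fast regularized Hermitian splitting preconditioner constructed in the paper.

```python
import numpy as np
from scipy.linalg import toeplitz, lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

# Illustrative system: the coefficient matrix is a sum of products of diagonal
# matrices with Toeplitz matrices (coefficients below are made up for the example).
n = 400
g = 1.0 / np.arange(1, n + 1) ** 2.5                  # decaying Toeplitz coefficients
T = toeplitz(g, np.r_[g[0], np.zeros(n - 1)])         # lower-triangular Toeplitz factor
D1 = np.diag(1.0 + np.linspace(0.0, 1.0, n))
D2 = np.diag(1.0 + np.linspace(0.0, 1.0, n) ** 2)
A = np.eye(n) + D1 @ T + D2 @ T.T
b = np.ones(n)

# A simple Hermitian-splitting preconditioner, M ~ (A + A^T)/2, applied by a direct
# solve.  This is only a crude stand-in for the block fast regularized Hermitian
# splitting preconditioner constructed in the paper.
lu = lu_factor(0.5 * (A + A.T))
M = LinearOperator((n, n), matvec=lambda r: lu_solve(lu, r))

residuals = []
x, info = gmres(A, b, M=M, callback=lambda rk: residuals.append(rk))
print('info =', info, '| iterations =', len(residuals),
      '| true residual =', np.linalg.norm(b - A @ x))
```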

5.
A deflated restarting Krylov subspace method for approximating a function of a matrix times a vector is proposed. The proposed method performs better than comparable Krylov subspace methods. We further show that the deflated restarting algorithm inherits the superlinear convergence property of its unrestarted counterpart for entire functions, and we present the results of numerical experiments.

6.
Krylov subspace projection methods and their application in reservoir numerical simulation (total citations: 3; self-citations: 0; citations by others: 3)
Krylov subspace projection methods are a highly effective class of methods for solving large systems of linear algebraic equations; many well-known methods are obtained by different choices of the left and right spaces Lm and Km. According to the type of the matrix Hm, Krylov subspace methods are divided into two broad classes, and the advantages, disadvantages, and latest developments of these two classes are briefly analyzed. The generalized minimal residual method (GMRES), currently the most reliable and practical of these methods, is applied to reservoir numerical simulation problems; using a matrix partitioning technique, a block pseudo-elimination (PE) method is employed to precondition the coefficient matrix. Computational results show that the preconditioned GMRES method of this paper outperforms the preconditioned orthogonal minimization (ORTHMIN) method that is currently in wide use. Finally, the limitations of projection-type methods and possible directions for future development are discussed.

7.
The inverse-free preconditioned Krylov subspace method of Golub and Ye [G.H. Golub, Q. Ye, An inverse free preconditioned Krylov subspace method for symmetric generalized eigenvalue problems, SIAM J. Sci. Comput. 24 (2002) 312-334] is an efficient algorithm for computing a few extreme eigenvalues of the symmetric generalized eigenvalue problem. In this paper, we first present an analysis of the preconditioning strategy based on incomplete factorizations. We then extend the method by developing a block generalization for computing multiple or severely clustered eigenvalues, and we develop a robust black-box implementation. Numerical examples are given to illustrate the analysis and the efficiency of the block algorithm.
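A bare-bones single-vector sketch of the underlying Golub-Ye inverse-free Krylov iteration is shown below: at each outer step a small Krylov space of A - rho*B is built and a projected generalized eigenvalue problem is solved. It omits both contributions of the paper (the preconditioning analysis and the block generalization), and the inner basis is built by raw powers plus QR purely for brevity; the test pencil and dimensions are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh, qr

def inverse_free_eig(A, B, m=8, n_outer=40, seed=0):
    """Smallest eigenvalue of the symmetric pencil (A, B), B positive definite,
    computed without inverting B: single-vector, unpreconditioned sketch of the
    Golub-Ye inverse-free Krylov iteration."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(n_outer):
        rho = (x @ A @ x) / (x @ B @ x)            # current Rayleigh quotient
        C = A - rho * B
        Z = np.empty((n, m))                       # Krylov basis of K_m(C, x);
        Z[:, 0] = x                                # a careful code would use a Lanczos
        for j in range(1, m):                      # recurrence instead of raw powers
            Z[:, j] = C @ Z[:, j - 1]
        Z, _ = qr(Z, mode='economic')
        # Projected generalized eigenproblem; keep the smallest Ritz pair
        vals, vecs = eigh(Z.T @ A @ Z, Z.T @ B @ Z)
        x = Z @ vecs[:, 0]
        x /= np.linalg.norm(x)
    return vals[0], x

# Toy pencil: 1-D Laplacian versus a variable-coefficient mass matrix
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.eye(n) + 0.5 * np.diag(np.linspace(0.0, 1.0, n))
lam, x = inverse_free_eig(A, B)
print(lam, eigh(A, B, eigvals_only=True)[0])       # iterate vs. exact smallest eigenvalue
```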

8.
In this paper, we present numerical methods for computing approximate solutions of large continuous-time and discrete-time algebraic Riccati equations. The proposed methods are projection methods onto block Krylov subspaces. We use the block Arnoldi process to construct an orthonormal basis of the corresponding block Krylov subspace and then extract low-rank approximate solutions. We consider the sequential version of the block Arnoldi algorithm, incorporating a deflation technique that allows us to delete linearly and almost linearly dependent vectors in the block Krylov subspace sequences. We give some theoretical results and present numerical experiments for large problems.
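The projection idea for the continuous-time equation can be sketched as follows (an illustration under common assumptions, not the paper's deflated sequential block Arnoldi): build an orthonormal basis of a block Krylov subspace, solve the small projected Riccati equation, and expand the result into a low-rank factor. The choice of the subspace K_m(A^T, C^T), the test matrices, and the truncation tolerance are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import qr, solve_continuous_are

def block_arnoldi(A, V1, m):
    """Orthonormal basis of the block Krylov space K_m(A, V1); plain block
    Gram-Schmidt, without the deflation used in the paper's sequential variant."""
    blocks = [qr(V1, mode='economic')[0]]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qi in blocks:
            W -= Qi @ (Qi.T @ W)
        blocks.append(qr(W, mode='economic')[0])
    return np.hstack(blocks)

def care_low_rank(A, B, C, m=12):
    """Galerkin sketch for the CARE  A^T X + X A - X B B^T X + C^T C = 0:
    project onto a block Krylov subspace, solve the small projected equation,
    and return a factor Z with X ~ Z Z^T."""
    V = block_arnoldi(A.T, C.T, m)                     # common subspace choice: K_m(A^T, C^T)
    a, b, q = V.T @ A @ V, V.T @ B, (C @ V).T @ (C @ V)
    Y = solve_continuous_are(a, b, q, np.eye(B.shape[1]))   # small projected CARE
    w, U = np.linalg.eigh(Y)                           # Y = U diag(w) U^T, w >= 0
    keep = w > 1e-12 * w.max()
    return V @ (U[:, keep] * np.sqrt(w[keep]))

# Toy example with a stable nonsymmetric A
rng = np.random.default_rng(0)
n = 300
A = -2.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
Z = care_low_rank(A, B, C)
X = Z @ Z.T
res = A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C
print('low-rank factor shape:', Z.shape,
      '| relative residual:', np.linalg.norm(res) / np.linalg.norm(C.T @ C))
```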

9.
In this paper, we first give a result that links any global Krylov method for solving linear systems with several right-hand sides to the corresponding classical Krylov method. We then propose a general framework for matrix Krylov subspace methods for linear systems with multiple right-hand sides. Our approach uses global projection techniques: it is based on the Global Generalized Hessenberg Process (GGHP), which uses the Frobenius scalar product and constructs a basis of a matrix Krylov subspace, together with a Galerkin or a norm-minimizing condition. To accelerate the convergence of global methods, we introduce weighted global methods, in which the GGHP uses a different scalar product at each restart. Experimental results are presented to show the good performance of the weighted global methods. AMS subject classification 65F10
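A minimal sketch of the Galerkin instance of this global framework (essentially a global FOM, without the weighting or restarts discussed in the paper) is given below: the Arnoldi-like recurrence operates on whole residual matrices and uses the Frobenius inner product. Matrix sizes and the subspace dimension are arbitrary example values.

```python
import numpy as np

def global_fom(A, B, m=25):
    """Global FOM for A X = B with several right-hand sides: an Arnoldi-like
    process acting on matrices, using the Frobenius inner product trace(Y^T Z);
    a basic sketch of the Galerkin branch of the framework (no weighting, no restarts)."""
    frob = lambda Y, Z: np.sum(Y * Z)          # Frobenius inner product
    R0 = B.copy()                              # zero initial guess
    beta = np.linalg.norm(R0, 'fro')
    V = [R0 / beta]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ V[j]
        for i in range(j + 1):                 # F-orthogonalize against earlier blocks
            H[i, j] = frob(V[i], W)
            W -= H[i, j] * V[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        if H[j + 1, j] < 1e-14:
            m = j + 1
            break
        V.append(W / H[j + 1, j])
    y = np.linalg.solve(H[:m, :m], beta * np.eye(m)[:, 0])   # Galerkin condition
    return sum(yi * Vi for yi, Vi in zip(y, V[:m]))

# Toy example with 5 right-hand sides
rng = np.random.default_rng(0)
n, s = 300, 5
A = np.eye(n) + rng.standard_normal((n, n)) / (4 * np.sqrt(n))
B = rng.standard_normal((n, s))
X = global_fom(A, B)
print(np.linalg.norm(A @ X - B, 'fro') / np.linalg.norm(B, 'fro'))
```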

10.
In this paper we consider the problem of approximating the solution of infinite linear systems that are finitely expressed by a sparse coefficient matrix. We analyse an algorithm based on Krylov subspace methods embedded in an adaptive enlargement scheme. The management of the algorithm is not trivial, owing to the irregular convergence behaviour frequently displayed by Krylov subspace methods for nonsymmetric systems. Numerical experiments, carried out on several test problems, indicate that the more robust methods, such as GMRES and QMR, embedded in the adaptive enlargement scheme exhibit good performance.

11.
Krylov subspace methods and their variants are presently the favorite iterative methods for solving a system of linear equations. Although this is a purely linear-algebra problem, it can be tackled by the theory of formal orthogonal polynomials. This theory helps to explain the origin of the algorithms used to implement Krylov subspace methods and, moreover, the use of formal orthogonal polynomials brings a major simplification to the treatment of some numerical problems related to these algorithms. This paper reviews this approach in the case of the Lanczos method and its variants, the novelty being the introduction of a preconditioner.

12.
We consider two Krylov subspace methods for solving linear systems: the minimal residual method and the orthogonal residual method. These two methods are studied without reference to any particular implementation. Using the Petrov–Galerkin condition, we describe the residual norms of the two methods in terms of Krylov vectors, as well as the relationship between these two norms. We define the Ritz singular values and prove that the convergence of both methods is governed by the convergence of the Ritz singular values. AMS subject classification 65F10

13.
14.
We consider the task of computing solutions of linear systems that differ only by a shift with the identity matrix, as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed that exploit either the need for solutions to multiple right-hand sides (e.g. deflation-type methods and block methods) or multiple shifts (e.g. shifted CG), with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features, shifts and multiple right-hand sides, at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior to applying other iterative methods to each of the systems individually and, in typical situations, to shifted or block Krylov subspace methods.
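The shift-invariance that such methods exploit can be illustrated with a plain block FOM sketch: one block Arnoldi basis is built from the right-hand sides and then reused for every shifted system. This is not the Lanczos-based method of the paper (no short recurrences, no deflation), and the test matrix, shifts, and dimensions are made up for the example.

```python
import numpy as np
from scipy.linalg import qr

def shifted_block_fom(A, B, shifts, m=40):
    """Build one block Arnoldi basis of K_m(A, B) and reuse it for every shifted
    system (A + sigma*I) X = B.  Plain block FOM sketch illustrating the
    shift-invariance; not the Lanczos-based method of the paper."""
    n, s = B.shape
    V1, Lam = qr(B, mode='economic')               # B = V1 @ Lam
    Vs = [V1]
    H = np.zeros((s * (m + 1), s * m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):                     # block Gram-Schmidt
            Hij = Vs[i].T @ W
            H[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
            W -= Vs[i] @ Hij
        Q, R = qr(W, mode='economic')
        H[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R
        Vs.append(Q)
    V = np.hstack(Vs[:m])
    Hm = H[:s*m, :s*m]
    E1Lam = np.zeros((s * m, s))
    E1Lam[:s, :] = Lam
    sols = {}
    for sigma in shifts:                           # every shift reuses the same basis
        Y = np.linalg.solve(Hm + sigma * np.eye(s * m), E1Lam)
        sols[sigma] = V @ Y
    return sols

# Toy example: 3 shifts, 4 right-hand sides, one Krylov basis
rng = np.random.default_rng(0)
n = 500
A = np.diag(np.linspace(1.0, 10.0, n))             # SPD test matrix
B = rng.standard_normal((n, 4))
for sigma, X in shifted_block_fom(A, B, [0.0, 0.5, 2.0]).items():
    r = (A + sigma * np.eye(n)) @ X - B
    print(sigma, np.linalg.norm(r) / np.linalg.norm(B))
```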

15.
The block Lanczos method serves to compute a moderate number of eigenvalues and the corresponding invariant subspace of a symmetric matrix. In this paper, the convergence behavior of nonrestarted and restarted versions of the block Lanczos method is analyzed. For the nonrestarted version, we improve an estimate by Saad by means of a change of the auxiliary vector, so that the new estimate is much more accurate in the case of clustered or multiple eigenvalues. For the restarted version, an estimate by Knyazev is generalized by extending our previous results on block steepest descent iterations and single-vector restarted Krylov subspace iterations. The new estimates can also be reformulated and applied to invert-block-Lanczos methods for solving generalized matrix eigenvalue problems.
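For reference, a nonrestarted sketch of the block Lanczos idea analyzed above is given below: a block Krylov basis is built (with full reorthogonalization, for simplicity, instead of the three-term block recurrence) and Ritz pairs are extracted by a Rayleigh-Ritz step. The matrix, block size, and number of blocks are arbitrary example choices.

```python
import numpy as np
from scipy.linalg import qr, eigh

def block_lanczos_ritz(A, X0, n_blocks=15):
    """Ritz values and vectors of a symmetric matrix A from the block Krylov
    space built from X0.  The basis is kept and fully reorthogonalized for
    clarity (a nonrestarted sketch); restarted variants bound the basis size."""
    blocks = [qr(X0, mode='economic')[0]]
    for _ in range(n_blocks - 1):
        W = A @ blocks[-1]
        for Qi in blocks:                      # full reorthogonalization
            W -= Qi @ (Qi.T @ W)
        blocks.append(qr(W, mode='economic')[0])
    V = np.hstack(blocks)
    theta, U = eigh(V.T @ A @ V)               # Rayleigh-Ritz step
    return theta, V @ U

# Toy example: the four largest eigenvalues of a random symmetric matrix
rng = np.random.default_rng(0)
n = 400
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
theta, Y = block_lanczos_ritz(A, rng.standard_normal((n, 4)))
print(theta[-4:])                              # largest Ritz values
print(np.sort(np.linalg.eigvalsh(A))[-4:])     # exact eigenvalues for comparison
```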

16.
The restarted FOM method presented by Simoncini [7], which exploits the natural collinearity of all the residuals, is an efficient method for solving shifted systems: the same Krylov subspace is generated when the shifts are handled simultaneously. However, restarting slows down the convergence. We present a practical method for solving the shifted systems by adding some Ritz vectors to the Krylov subspace to form an augmented Krylov subspace. Numerical experiments illustrate that the augmented FOM approach (restarted version) can converge more quickly than the restarted FOM method.

17.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX − XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds on the norm of the error. Numerical experiments will also be given to show the effectiveness of these block methods.
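For the case where A is large and B is small, the Galerkin projection can be sketched as follows (an illustration, not the paper's full treatment): build a block Krylov basis from A and the right-hand side, then solve a small Sylvester equation for the projected unknown. The test matrices and subspace dimension are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import qr, solve_sylvester

def sylvester_block_krylov(A, B, C, m=25):
    """Approximate the solution of  A X - X B = C  when A is large and B is small:
    project onto the block Krylov space K_m(A, C) and solve a small Sylvester
    equation (Galerkin sketch; the paper also treats the large-A, large-B case)."""
    blocks = [qr(C, mode='economic')[0]]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qi in blocks:                      # block Gram-Schmidt
            W -= Qi @ (Qi.T @ W)
        blocks.append(qr(W, mode='economic')[0])
    V = np.hstack(blocks)
    H = V.T @ A @ V
    Y = solve_sylvester(H, -B, V.T @ C)        # small equation  H Y - Y B = V^T C
    return V @ Y

# Toy example: A is 1000 x 1000, B is 3 x 3, spectra well separated
rng = np.random.default_rng(0)
n, p = 1000, 3
A = np.diag(np.linspace(1.0, 5.0, n)) + np.triu(rng.standard_normal((n, n)), 1) / (10 * np.sqrt(n))
B = -np.diag([1.0, 2.0, 3.0])
C = rng.standard_normal((n, p))
X = sylvester_block_krylov(A, B, C)
print(np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))
```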

18.
By transforming nonsymmetric linear systems into extended skew-symmetric ones, we present skew-symmetric methods for solving nonsymmetric linear systems with multiple right-hand sides. These methods are based on the block and global Arnoldi algorithms, which are formed by orthogonally projecting the initial matrix residual onto a matrix Krylov subspace. The algorithms avoid the tediously long Arnoldi process and greatly reduce the expensive storage. Numerical experiments show that these algorithms are effective and give better practical performance than global GMRES for solving nonsymmetric linear systems with multiple right-hand sides.

19.
The concept of a representative spectrum is introduced in the context of Newton-Krylov methods. This concept allows a better understanding of techniques that accelerate the convergence rate of Krylov-subspace iterative methods in CFD applications of the Newton-Krylov approach to the iterative solution of sets of nonlinear equations. The dependence of the representative spectrum on several parameters, such as the mesh, the Mach number, or the discretization technique, is studied and analyzed.

20.
We consider the approximation of trigonometric operator functions that arise in the numerical solution of wave equations by trigonometric integrators. It is well known that Krylov subspace methods for matrix functions without exponential decay show superlinear convergence behavior only once the number of steps exceeds the norm of the operator; thus, Krylov approximations may fail to converge for unbounded operators. In this paper, we propose and analyze a rational Krylov subspace method which converges not only for finite element or finite difference approximations to differential operators but even for abstract, unbounded operators. In contrast to standard Krylov methods, the convergence is independent of the norm of the operator and thus of its spatial discretization. We discuss efficient implementations for finite element discretizations and illustrate our analysis with numerical experiments. AMS subject classification (2000): 65F10, 65L60, 65M60, 65N22
