Similar Literature
20 similar documents found.
1.
In this paper, we study the shifted restarted full orthogonalization method (FOM) with deflation for simultaneously solving a number of shifted systems of linear equations. Theoretical analysis shows that, with the deflation technique, the residuals of shifted restarted FOM remain collinear with one another. Hence, the new approach can solve the shifted systems simultaneously over the same Krylov subspace. Numerical experiments show that the deflation technique can significantly improve the convergence of shifted restarted FOM.
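For orientation, the collinearity property this abstract relies on can be stated in two lines. These are standard identities for shifted FOM with a zero initial guess (so r_0 = b = β v_1), given here as background rather than quoted from the paper:

```latex
\begin{aligned}
&\mathcal{K}_m(A,\,b)=\mathcal{K}_m(A+\sigma_i I,\,b)
  &&\text{(shift invariance of the Krylov subspace)}\\
&(A+\sigma_i I)\,V_m = V_m\,(H_m+\sigma_i I) + h_{m+1,m}\,v_{m+1}\,e_m^{T}
  &&\text{(shifted Arnoldi relation)}\\
&r_m^{(i)} = b-(A+\sigma_i I)\,V_m\,y^{(i)}
           = -\,h_{m+1,m}\,\bigl(e_m^{T}y^{(i)}\bigr)\,v_{m+1},
  &&\text{where } (H_m+\sigma_i I)\,y^{(i)}=\beta\,e_1 .
\end{aligned}
```

Every shifted residual is therefore a scalar multiple of the same vector v_{m+1}; this is the collinearity that the deflated restart described above has to preserve.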

2.
Restarted Full Orthogonalization Method for Shifted Linear Systems (cited 1 time: 0 self-citations, 1 by others)
Restarted GMRES is known to be inefficient for solving shifted systems when the shifts are handled simultaneously. Variants have been proposed to enhance its performance. We show that another restarted method, the restarted Full Orthogonalization Method (FOM), can effectively be employed. The total number of iterations of restarted FOM applied to all shifted systems simultaneously is the same as that obtained by applying restarted FOM to the shifted system with the slowest convergence rate, while the computational cost grows only sub-linearly with the number of shifts. Numerical experiments are reported.
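The following sketch (not the authors' code) illustrates the mechanism described above: one Arnoldi factorization is built for A and reused for every shift, so each restart cycle of FOM costs one set of matrix-vector products plus a small m-by-m solve per shift. It assumes a zero initial guess; the matrix, shifts and restart length m are arbitrary illustrative choices.

```python
import numpy as np

def arnoldi(A, v0, m):
    """m-step Arnoldi: A @ V[:, :m] = V @ H, with V having orthonormal columns."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def shifted_fom_cycle(A, b, shifts, m):
    """One restart cycle of FOM applied to (A + sigma I) x = b for every shift."""
    beta = np.linalg.norm(b)
    V, H = arnoldi(A, b, m)
    rhs = np.zeros(m)
    rhs[0] = beta
    xs, res_norms = [], []
    for sigma in shifts:
        # the shifted Arnoldi relation only changes H_m into H_m + sigma*I
        y = np.linalg.solve(H[:m, :] + sigma * np.eye(m), rhs)
        xs.append(V[:, :m] @ y)
        # FOM residual is -h_{m+1,m} (e_m^T y) v_{m+1}: collinear for all shifts
        res_norms.append(abs(H[m, m - 1] * y[-1]))
    return xs, res_norms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, shifts = 200, [0.0, 0.5, 2.0]
    A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    xs, res = shifted_fom_cycle(A, b, shifts, m=40)
    for sigma, x, r in zip(shifts, xs, res):
        true_res = np.linalg.norm(b - (A + sigma * np.eye(n)) @ x)
        print(f"sigma={sigma}: true residual {true_res:.2e}, formula {r:.2e}")
```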

3.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms for solving the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly using them in projecting the eigenvalue problem, it becomes possible to use IFEAST to solve eigenvalue problems with Krylov subspaces of very large dimension without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix-vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
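As a rough illustration of the contour-filtering idea behind FEAST (a dense toy version, not the IFEAST implementation of the paper), the spectral projector is approximated by quadrature on a circle, which reduces the problem to shifted linear solves; IFEAST would perform these solves inexactly with a Krylov method instead of the direct solver used here. The matrix, contour and subspace size below are arbitrary test choices.

```python
import numpy as np

def feast_step(A, Y, center, radius, n_quad=16):
    """One subspace-iteration step with a quadrature-based approximate spectral projector."""
    n, k = Y.shape
    Z = np.zeros((n, k), dtype=complex)
    for j in range(n_quad):
        theta = 2.0 * np.pi * (j + 0.5) / n_quad
        zj = center + radius * np.exp(1j * theta)
        # trapezoid-type rule for (1/(2*pi*i)) * integral of (zI - A)^{-1} Y dz over the circle
        wj = radius * np.exp(1j * theta) / n_quad
        # FEAST solves these shifted systems directly; IFEAST would solve them inexactly
        Z += wj * np.linalg.solve(zj * np.eye(n) - A, Y)
    Q, _ = np.linalg.qr(Z.real)              # filtered, orthonormalized subspace (A real symmetric here)
    evals, W = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz extraction
    return evals, Q @ W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 120
    P = rng.standard_normal((n, n))
    A = np.diag(np.linspace(-6.0, 6.0, n)) + 0.01 * (P + P.T)   # eigenvalues near linspace(-6, 6)
    Y = rng.standard_normal((n, 8))          # subspace a bit larger than the eigencount inside the contour
    evals, V = feast_step(A, Y, center=0.0, radius=0.3)
    print(np.sort(evals))                    # Ritz values inside |z| < 0.3 approximate eigenvalues of A there
```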

4.
1 Introduction. The solution of large N×N nonsingular unsymmetric (non-Hermitian) sparse systems of linear equations Ax = b, (1) is one of the most frequently encountered tasks in numerical computations. For example, such systems arise from finite difference or finite element approximations to partial differential equations. A major class of methods for solving (1) is Krylov subspace or conjugate gradient type methods. The most successful schemes among these methods are based on the orthogonal projection…

5.
The restarted block generalized minimum residual method (BGMRES) with deflated restarting (BGMRES-DR) was proposed by Morgan to damp the negative effect of small eigenvalues on the convergence of the BGMRES method. More recently, Wu et al. introduced the shifted BGMRES method (BGMRES-Sh) for solving the sequence of linear systems with multiple shifts and multiple right-hand sides. In this paper, a new shifted block Krylov subspace algorithm that combines the characteristics of both the BGMRES-DR and the BGMRES-Sh methods is proposed. Moreover, our method is enhanced with a seed selection strategy to handle the case of almost linearly dependent right-hand sides. Numerical experiments illustrate the potential of the proposed method to solve efficiently the sequence of linear systems with multiple shifts and multiple right-hand sides, with and without a preconditioner, also in comparison with other state-of-the-art solvers.

6.
The block‐Lanczos method serves to compute a moderate number of eigenvalues and the corresponding invariant subspace of a symmetric matrix. In this paper, the convergence behavior of nonrestarted and restarted versions of the block‐Lanczos method is analyzed. For the nonrestarted version, we improve an estimate by Saad by means of a change of the auxiliary vector so that the new estimate is much more accurate in the case of clustered or multiple eigenvalues. For the restarted version, an estimate by Knyazev is generalized by extending our previous results on block steepest descent iterations and single‐vector restarted Krylov subspace iterations. The new estimates can also be reformulated and applied to invert‐block‐Lanczos methods for solving generalized matrix eigenvalue problems.

7.
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.

8.
An algorithm for solving the problem of minimizing a quadratic function subject to ellipsoidal constraints is introduced. This algorithm is based on the implicitly restarted Lanczos method to construct a basis for the Krylov subspace, in conjunction with a model trust region strategy to choose the step. The trial step is computed on the small-dimensional subspace that lies inside the trust region.

One of the main advantages of this algorithm is the way that the Krylov subspace is terminated. We introduce a termination condition that allows the gradient to be decreased on that subspace.

A convergence theory for this algorithm is presented. It is shown that the algorithm is globally convergent and should cope quite well with large-scale minimization problems. The theory is sufficiently general that it holds for any algorithm that projects the problem onto a lower-dimensional subspace.

9.
We consider the task of computing solutions of linear systems that only differ by a shift with the identity matrix, as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed which exploit either the need for solutions to multiple right-hand sides (e.g. deflation-type methods and block methods) or multiple shifts (e.g. shifted CG) with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features (shifts and multiple right-hand sides) at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior to applying other iterative methods to each of the systems individually as well as, in typical situations, to shifted or block Krylov subspace methods.

10.
We consider the GMRES(m,k) method for the solution of linear systems Ax = b, i.e. restarted GMRES with restart parameter m in which a further subspace of dimension k is added to the standard Krylov subspace of dimension m, resulting in an augmented Krylov subspace. This additional subspace usually approximates an A-invariant subspace. The eigenspaces associated with the eigenvalues closest to zero are commonly used, as those are thought to hinder convergence the most. The behaviour of residual bounds is described for various situations which can arise during the GMRES(m,k) process. The obtained estimates for the norm of the residual vector suggest sufficient conditions for convergence of GMRES(m,k) and illustrate that these augmentation techniques can remove stagnation of GMRES(m) in many cases. All estimates are independent of the choice of the initial approximation. Conclusions and remarks assessing numerically the quality of the proposed bounds conclude the paper.
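A loose dense sketch of the augmentation idea (not the GMRES(m,k) algorithm itself): the residual is minimized over a Krylov subspace enlarged by a few approximate eigenvectors Z associated with the eigenvalues closest to zero. The monomial Krylov basis and the diagonal test matrix are simplifications chosen for brevity; a practical code would build the basis with Arnoldi.

```python
import numpy as np

def augmented_minres_step(A, b, m, Z):
    """Minimize ||b - A w|| over w in span{Krylov basis of dim m} + span{columns of Z}."""
    r0 = b.copy()                              # zero initial guess
    # monomial Krylov basis [r0, A r0, ..., A^{m-1} r0]; Arnoldi would be used in practice
    K = np.column_stack([np.linalg.matrix_power(A, j) @ r0 for j in range(m)])
    W, _ = np.linalg.qr(np.column_stack([K, Z]))       # orthonormal basis of the augmented space
    y, *_ = np.linalg.lstsq(A @ W, r0, rcond=None)     # least-squares (minimum-residual) extraction
    return W @ y

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 300
    # three tiny eigenvalues that stall the plain minimum-residual iterate
    A = np.diag(np.r_[1e-8, 2e-8, 3e-8, np.linspace(1.0, 2.0, n - 3)])
    b = rng.standard_normal(n)
    Z = np.eye(n)[:, :3]                       # here: exact eigenvectors of the small eigenvalues
    x_plain = augmented_minres_step(A, b, m=10, Z=np.zeros((n, 0)))
    x_aug   = augmented_minres_step(A, b, m=10, Z=Z)
    print("plain residual:    ", np.linalg.norm(b - A @ x_plain))
    print("augmented residual:", np.linalg.norm(b - A @ x_aug))
```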

11.
Multistep matrix splitting iterations serve as preconditioners for Krylov subspace methods for solving singular linear systems. The preconditioner is applied to the generalized minimal residual (GMRES) method and the flexible GMRES (FGMRES) method. We present theoretical and practical justifications for this approach. Numerical experiments show that multistep generalized shifted splitting (GSS) and Hermitian and skew-Hermitian splitting (HSS) iteration preconditioning is more robust and efficient than standard preconditioners for some test problems involving large sparse singular linear systems.
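For reference, here is a minimal sketch of the classical HSS splitting iteration that the abstract applies repeatedly ("multistep") as a preconditioner. This is a dense toy version with direct inner solves; the parameter alpha, the step counts and the test matrix are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hss_apply(A, r, alpha, steps):
    """Approximate A^{-1} r by a few HSS iteration steps, starting from zero."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2                 # Hermitian part
    S = (A - A.conj().T) / 2                 # skew-Hermitian part
    I = np.eye(n)
    z = np.zeros(n, dtype=complex)
    for _ in range(steps):
        z_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ z + r)
        z      = np.linalg.solve(alpha * I + S, (alpha * I - H) @ z_half + r)
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 200
    B = rng.standard_normal((n, n))
    A = np.diag(np.linspace(1.0, 5.0, n)) + 0.5 * (B - B.T)   # positive definite Hermitian part
    b = rng.standard_normal(n)
    for k in (2, 4, 8, 16):
        z = hss_apply(A, b, alpha=2.0, steps=k)
        print(k, "steps, residual", np.linalg.norm(b - A @ z))
```

Inside GMRES or FGMRES, a fixed number of such steps (from a zero starting vector) would play the role of the preconditioner applied to each residual.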

12.
We propose a preconditioned variant of the modified HSS (MHSS) iteration method for solving a class of complex symmetric systems of linear equations. Under suitable conditions, we prove the convergence of the preconditioned MHSS (PMHSS) iteration method and discuss the spectral properties of the PMHSS-preconditioned matrix. Numerical implementations show that the resulting PMHSS preconditioner leads to fast convergence when it is used to precondition Krylov subspace iteration methods such as GMRES and its restarted variants. In particular, both the stationary PMHSS iteration and PMHSS-preconditioned GMRES show meshsize-independent and parameter-insensitive convergence behavior for the tested numerical examples.
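A minimal sketch of the MHSS iteration that PMHSS preconditions and generalizes, assuming A = W + iT with W, T real symmetric and W positive definite; the dense solves, the parameter alpha and the diagonal test matrices are illustrative choices, not the paper's setup.

```python
import numpy as np

def mhss(W, T, b, alpha, steps):
    """MHSS iteration for (W + iT) x = b, starting from zero."""
    n = W.shape[0]
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    for _ in range(steps):
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        x      = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 150
    W = np.diag(np.linspace(1.0, 4.0, n))     # symmetric positive definite real part
    T = np.diag(np.linspace(0.0, 2.0, n))     # symmetric part multiplying i
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x = mhss(W, T, b, alpha=1.0, steps=40)
    print(np.linalg.norm(b - (W + 1j * T) @ x))
```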

13.
ON THE BREAKDOWNS OF THE GALERKIN AND LEAST-SQUARES METHODS (cited 3 times: 0 self-citations, 3 by others)
1 Introduction. We consider linear systems of the form Ax = b, (1) where A ∈ C^(N×N) is nonsingular and possibly non-Hermitian. A major class of methods for solving (1) is the class of Krylov subspace methods (see [6], [13] for overviews of such methods), defined by the properties x_m ∈ x_0 + K_m(r_0, A), (2) and r_m ⊥ L_m, (3) where…
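The breakdown phenomenon in the title can be seen on a two-by-two example (a standard textbook-style illustration, not taken from the paper): with the Galerkin condition L_m = K_m (FOM), the projected matrix H_m may be singular and the iterate fails to exist, while the least-squares condition L_m = A K_m (GMRES) always yields an iterate, possibly a stagnating one.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([1.0, 0.0])
beta = np.linalg.norm(b)

# One Arnoldi step with v1 = b/||b||: A v1 = e2, so the projected 1x1 matrix is
#   H_1 = [v1^T A v1] = [0]   and   Hbar_1 = [[0], [1]].
V1 = (b / beta).reshape(-1, 1)
Hbar = np.array([[0.0], [1.0]])

# Galerkin / FOM extraction: solve H_1 y = beta e1.  H_1 = [0] is singular,
# so the Galerkin iterate does not exist (breakdown at step 1).
# Least-squares / GMRES extraction: min || beta e1 - Hbar_1 y ||_2 always exists.
y, *_ = np.linalg.lstsq(Hbar, np.array([beta, 0.0]), rcond=None)
x1 = V1 @ y
print("GMRES iterate:", x1, "residual norm:", np.linalg.norm(b - A @ x1))
# The least-squares iterate exists but stagnates (its residual norm stays at ||b|| = 1);
# Galerkin breakdown at a step corresponds to exact GMRES stagnation at that step.
```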

14.
The CMRH (Changing Minimal Residual method based on the Hessenberg process) method is a Krylov subspace method for solving large linear systems with non-symmetric coefficient matrices. CMRH generates a (non-orthogonal) basis of the Krylov subspace through the Hessenberg process and minimizes a quasi-residual norm. On dense matrices, the CMRH method is less expensive and requires less storage than other Krylov methods. In this work, we describe Matlab codes for the best of these implementations. Fortran codes for sequential and parallel implementations are also presented.

15.
A variant of the simpler GMRES method is developed for solving shifted linear systems (SGMRES-Sh), exhibiting almost the same advantage of simpler GMRES over regular GMRES. Because the remedy adopted by GMRES-Sh is no longer feasible for SGMRES-Sh, owing to the differences between simpler GMRES and GMRES in constructing the residual vectors of the linear systems, we take an alternative strategy: the residual vectors of the added systems are forced to be orthogonal to the subspaces to which the residual vectors of the seed system are orthogonal when the seed system is solved with the simpler GMRES method. In addition, a seed selection strategy is employed for solving the remaining unconverged linear systems. Furthermore, an adaptive version of SGMRES-Sh is presented to improve its stability, based on the adaptive choice of the Krylov subspace basis developed for the adaptive simpler GMRES. Numerical experiments demonstrate the benefits of the presented methods.

16.
The truncated version of the generalized minimal residual method (GMRES), the incomplete generalized minimal residual method (IGMRES), is studied. It is based on an incomplete orthogonalization of the Krylov vectors in question, and gives an approximate or quasi-minimum residual solution over the Krylov subspace. A convergence analysis of this method is given, showing that in the non-restarted version IGMRES can behave like GMRES once the basis vectors of the Krylov subspace generated by the incomplete orthogonalization are strongly linearly independent. Meanwhile, some relationships between the residual norms for IOM and IGMRES are established. Numerical experiments are reported to show the convergence behavior of IGMRES and of its restarted version IGMRES(m). Project supported by the China State Key Basic Researches, the National Natural Science Foundation of China (Grant No. 19571014), the Doctoral Program (97014113), the Foundation of Returning Scholars of China, and the Natural Science Foundation of Liaoning Province.
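A minimal sketch of the incomplete orthogonalization process on which IOM/IGMRES are built: each new Krylov vector is orthogonalized only against the k most recent basis vectors, producing a banded Hessenberg matrix and a basis that is in general not orthogonal. The matrix, m and k below are illustrative choices, not the paper's experiments.

```python
import numpy as np

def incomplete_arnoldi(A, v0, m, k):
    """Incomplete orthogonalization: new vectors are orthogonalized only against the last k."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(max(0, j - k + 1), j + 1):   # only the k most recent basis vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, m = 200, 30
    A = np.diag(np.linspace(1.0, 10.0, n)) + 0.1 * rng.standard_normal((n, n))
    V, H = incomplete_arnoldi(A, rng.standard_normal(n), m, k=5)
    # departure of the incomplete basis from orthogonality
    print(np.linalg.norm(V[:, :m].T @ V[:, :m] - np.eye(m)))
```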

17.
Luc Giraud, Serge Gratton, Xavier Pinel, Xavier Vasseur, PAMM 2007, 7(1):1020701-1020702
The Flexible GMRES (FGMRES [1]) and GMRES with deflated restarting (GMRES-DR [2]) methods are two algorithms derived from GMRES [3] that are considered powerful for solving large non-Hermitian systems of linear equations. GMRES-DR is a variant of GMRES with an improved restarting technique that retains in the Krylov subspace the harmonic Ritz vectors from the previous restart. In situations where the convergence of restarted GMRES is slow and the matrix has a few eigenvalues close to the origin, this technique has proved very efficient. The new method that we propose is the Flexible GMRES with deflated restarting (FGMRES-DR [6]), which combines the two above-mentioned algorithms in order to yield better performance.

18.
1. Introduction. Arnoldi's method [1, 12] is used for computing a few selected eigenpairs of large unsymmetric matrices. It has been investigated since the 1980s; see, e.g., [3-15]. It is well known that the m-step Arnoldi process, as described in detail in Section 2, generates an orthonormal basis {v_i}, i = 1, ..., m, of the Krylov subspace K_m(v_1, A) spanned by v_1, Av_1, ..., A^(m-1)v_1. Here v_1 is an initial unit-norm vector. The projected matrix of A onto K_m(v_1, A) is represented by an m × m upper Hessenberg…
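A minimal sketch of the m-step Arnoldi process described above (toy dense version, not the paper's code): the eigenvalues of the projected m×m upper Hessenberg matrix H_m serve as Ritz approximations to eigenvalues of A, with the extreme ones typically converging first.

```python
import numpy as np

def arnoldi(A, v1, m):
    """m-step Arnoldi: orthonormal basis V of K_m(v1, A) and projected Hessenberg matrix H."""
    n = len(v1)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n, m = 500, 40
    A = np.diag(np.arange(1.0, n + 1.0)) + 0.1 * rng.standard_normal((n, n))
    V, H = arnoldi(A, rng.standard_normal(n), m)
    ritz = np.linalg.eigvals(H[:m, :m])          # Ritz values = eigenvalues of the projected matrix
    print(np.sort(ritz.real)[-3:])               # approximations to the largest eigenvalues of A
```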

19.
Numerical Algorithms - The problem of shifted linear systems is an important and challenging issue in a number of research applications. Krylov subspace methods are effective techniques for...

20.
Numerical Algorithms - A class of novel parallel preconditioning schemes in conjunction with a Krylov subspace iterative method for solving general sparse linear systems is presented. The proposed...
