Similar Documents
 20 similar documents found (search time: 31 ms)
1.
This paper is concerned with a generalization of the Kronecker product splitting (KPS) iteration for solving linear systems arising in implicit Runge–Kutta and boundary value method discretizations of ordinary differential equations. It is shown that the new scheme can outperform the standard KPS method in some situations and can be used as an effective preconditioner for Krylov subspace methods. Numerical experiments are presented to demonstrate the effectiveness of the methods. Copyright © 2014 John Wiley & Sons, Ltd.
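To illustrate the kind of structure a Kronecker product splitting exploits (a generic sketch of the Kronecker-product solve, not the KPS iteration of the paper; matrices and sizes below are hypothetical), the inverse of B ⊗ A can be applied with two small solves via the identity (B ⊗ A) vec(X) = vec(A X B^T):

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 50, 3   # hypothetical sizes: spatial unknowns and RK stages
A = rng.standard_normal((n, n)) + n * np.eye(n)   # illustrative well-conditioned blocks
B = rng.standard_normal((s, s)) + s * np.eye(s)

def kron_solve(A, B, c):
    """Solve (B kron A) x = c via (B kron A) vec(X) = vec(A X B^T):
    only an n-by-n and an s-by-s solve are needed, never the ns-by-ns matrix."""
    n, s = A.shape[0], B.shape[0]
    C = c.reshape((n, s), order="F")        # un-vec (column-major stacking)
    Y = np.linalg.solve(A, C)               # apply A^{-1} from the left
    X = np.linalg.solve(B, Y.T).T           # apply B^{-T} from the right
    return X.flatten(order="F")

c = rng.standard_normal(n * s)
x = kron_solve(A, B, c)
print(np.linalg.norm(np.kron(B, A) @ x - c))   # close to machine precision
```

Wrapped as a linear operator, a solve of this form is the kind of operation one applies as a preconditioner inside a Krylov subspace method.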

2.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms for solving the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly using them in projecting the eigenvalue problem, it becomes possible to use IFEAST to solve eigenvalue problems using very large dimension Krylov subspaces without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix–vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
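As a rough, simplified illustration of the contour-integration filtering that FEAST builds on (assuming a real symmetric matrix, dense direct solves of the shifted systems, and illustrative sizes; IFEAST would replace the direct solves with inexact Krylov solves), a quadrature-based subspace iteration might look like this:

```python
import numpy as np

def feast_sketch(A, center, radius, m0=10, n_quad=16, n_iter=5, seed=0):
    """Simplified FEAST-style subspace iteration for a real symmetric A:
    approximate the spectral projector onto eigenvalues inside the circle
    |z - center| < radius by trapezoidal quadrature of the contour integral
    (1/(2*pi*i)) * oint (zI - A)^{-1} dz, then do Rayleigh-Ritz."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, m0))
    angles = np.pi * (2 * np.arange(n_quad) + 1) / n_quad
    for _ in range(n_iter):
        Q = np.zeros((n, m0))
        for th in angles:
            z = center + radius * np.exp(1j * th)
            Y = np.linalg.solve(z * np.eye(n) - A, V)            # shifted linear solves (direct here)
            Q += np.real(radius * np.exp(1j * th) * Y) / n_quad  # quadrature of the contour integral
        Q, _ = np.linalg.qr(Q)                                   # orthonormalize the filtered subspace
        w, S = np.linalg.eigh(Q.T @ A @ Q)                       # Rayleigh-Ritz
        V = Q @ S
    inside = np.abs(w - center) < radius
    return w[inside], V[:, inside]

# Usage sketch: a symmetric test matrix with known eigenvalues 0, 0.1, ..., 9.9.
rng = np.random.default_rng(1)
n = 100
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q0 @ np.diag(0.1 * np.arange(n)) @ Q0.T
vals, vecs = feast_sketch(A, center=5.0, radius=0.35)
print(np.sort(vals))      # should recover 4.7, 4.8, ..., 5.3
```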

3.
The Generalized Minimal Residual (GMRES) method and the Quasi-Minimal Residual (QMR) method are two Krylov methods for solving linear systems. The main difference between these methods is the generation of the basis vectors for the Krylov subspace. The GMRES method uses the Arnoldi process while QMR uses the Lanczos algorithm for constructing a basis of the Krylov subspace. In this paper we give a new method similar to QMR but based on the Hessenberg process instead of the Lanczos process. We call the new method the CMRH method. The CMRH method is less expensive and requires slightly less storage than GMRES. Numerical experiments suggest that it has behaviour similar to GMRES.
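For reference, the Arnoldi process that GMRES uses to build its orthonormal Krylov basis can be sketched as follows (a minimal NumPy version with modified Gram–Schmidt; CMRH instead uses the cheaper Hessenberg process, which builds a non-orthogonal basis by an LU-like elimination):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: build an orthonormal basis V of K_m(A, b) and the
    (m+1) x m upper Hessenberg matrix H with A V[:, :m] = V[:, :m+1] H."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                   # happy breakdown: the Krylov space is invariant
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

GMRES then obtains its iterate by minimizing the norm of beta*e1 - H y over y and setting x = V[:, :m] @ y; CMRH solves an analogous small problem built from the non-orthogonal Hessenberg basis, which is what makes it cheaper per step.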

4.
The regularizing properties of the Golub–Kahan bidiagonalization algorithm are powerful when the associated Krylov subspace captures the dominating components of the solution. In some applications the regularized solution can be further improved by enrichment, that is, by augmenting the Krylov subspace with a low-dimensional subspace that represents specific prior information. Inspired by earlier work on GMRES, we demonstrate how to carry these ideas over to the bidiagonalization algorithm, and we describe how to incorporate Tikhonov regularization. This leads to a hybrid iterative method where the choice of regularization parameter in each iteration also provides a stopping rule.
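A minimal sketch of the hybrid idea under simplifying assumptions (a fixed Tikhonov parameter lam, no reorthogonalization, dense NumPy arrays): run Golub–Kahan bidiagonalization, then apply Tikhonov regularization to the small projected problem.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k with
    B_k lower bidiagonal of size (k+1) x k (no reorthogonalization here)."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    v = A.T @ U[:, 0]
    alpha = np.linalg.norm(v); V[:, 0] = v / alpha; B[0, 0] = alpha
    for j in range(k):
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
        if j + 1 < k:
            v = A.T @ U[:, j + 1] - B[j + 1, j] * V[:, j]
            B[j + 1, j + 1] = np.linalg.norm(v); V[:, j + 1] = v / B[j + 1, j + 1]
    return U, B, V, beta

def hybrid_tikhonov(A, b, k, lam):
    """Project onto the Krylov subspace from Golub-Kahan bidiagonalization, then
    solve the small Tikhonov problem min ||B y - beta*e1||^2 + lam^2 ||y||^2."""
    U, B, V, beta = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1); rhs[0] = beta
    Baug = np.vstack([B, lam * np.eye(k)])
    raug = np.concatenate([rhs, np.zeros(k)])
    y, *_ = np.linalg.lstsq(Baug, raug, rcond=None)
    return V @ y
```

In the hybrid method described in the abstract, the regularization parameter would instead be chosen afresh at each iteration, which also supplies a stopping rule.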

5.
We consider the task of computing solutions of linear systems that only differ by a shift with the identity matrix as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed which exploit either the need for solutions to multiple right-hand sides (e.g. deflation type methods and block methods) or multiple shifts (e.g. shifted CG) with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features (shifts and multiple right-hand sides) at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior to applying other iterative methods to each of the systems individually as well as, in typical situations, to shifted or block Krylov subspace methods.
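The property that all shifted Krylov methods exploit is shift invariance: K_m(A, b) = K_m(A + sigma*I, b), so a single basis (a single set of matrix–vector products) serves every shifted system. A small numerical check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma = 30, 6, 0.7       # illustrative sizes and shift
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def krylov_basis(M, b, m):
    """Orthonormal basis of K_m(M, b) = span{b, Mb, ..., M^{m-1} b}."""
    K = np.empty((b.size, m))
    K[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        K[:, j] = M @ K[:, j - 1]
        K[:, j] /= np.linalg.norm(K[:, j])     # scaling only; the span is unchanged
    Q, _ = np.linalg.qr(K)
    return Q

Q1 = krylov_basis(A, b, m)
Q2 = krylov_basis(A + sigma * np.eye(n), b, m)
# The two orthogonal projectors agree, so the subspaces coincide up to rounding:
print(np.linalg.norm(Q1 @ Q1.T - Q2 @ Q2.T))
```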

6.
The four vector extrapolation methods, minimal polynomial extrapolation, reduced rank extrapolation, modified minimal polynomial extrapolation and the topological epsilon algorithm, when applied to linearly generated vector sequences are Krylov subspace methods and it is known that they are equivalent to some well-known conjugate gradient type methods. However, the vector ε-algorithm is an extrapolation method, older than the four extrapolation methods above, and no similar results are known for it. In this paper, a determinantal formula for the vector ε-algorithm is given. Then it is shown that, when applied to a linearly generated vector sequence, the algorithm is also a Krylov subspace method and for a class of matrices the method is equivalent to a preconditioned Lanczos method. A new determinantal formula for the CGS method is given, and an algebraic comparison between the vector ε-algorithm for linear systems and CGS is also given.
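For concreteness, here is a minimal implementation of the vector ε-algorithm using the Samelson vector inverse, applied to a linearly generated sequence from a hypothetical fixed-point iteration (an illustrative example, not one from the paper):

```python
import numpy as np

def vector_epsilon(S, k):
    """Vector epsilon-algorithm (Wynn), using the Samelson vector inverse
    v^{-1} = v / <v, v>.  Given a sequence S = [s_0, ..., s_{2k}], returns
    the accelerated value eps_{2k}^{(0)}."""
    S = [np.asarray(s, dtype=float) for s in S]
    eps_prev = [np.zeros_like(S[0]) for _ in range(len(S) + 1)]   # eps_{-1}^{(n)} = 0
    eps_curr = list(S)                                            # eps_0^{(n)} = s_n
    for _ in range(2 * k):
        eps_next = []
        for n in range(len(eps_curr) - 1):
            d = eps_curr[n + 1] - eps_curr[n]
            eps_next.append(eps_prev[n + 1] + d / np.dot(d, d))   # Samelson inverse of d
        eps_prev, eps_curr = eps_curr, eps_next
    return eps_curr[0]

# Usage sketch: accelerate the fixed-point iteration x_{j+1} = M x_j + c
# (a linearly generated sequence), whose limit solves (I - M) x = c.
rng = np.random.default_rng(2)
n = 20
M = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)    # spectral radius well below 1
c = rng.standard_normal(n)
x_true = np.linalg.solve(np.eye(n) - M, c)
seq = [np.zeros(n)]
for _ in range(8):
    seq.append(M @ seq[-1] + c)
err_plain = np.linalg.norm(seq[-1] - x_true)
err_eps = np.linalg.norm(vector_epsilon(seq, 4) - x_true)
print(err_plain, err_eps)    # the extrapolated value is typically much more accurate
```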

7.
By transforming nonsymmetric linear systems into extended skew-symmetric ones, we present skew-symmetric methods for solving nonsymmetric linear systems with multiple right-hand sides. These methods are based on the block and global Arnoldi algorithms, which orthogonally project the initial matrix residual onto a matrix Krylov subspace. The algorithms avoid the long Arnoldi process and greatly reduce storage requirements. Numerical experiments show that these algorithms are effective and give better practical performance than global GMRES for solving nonsymmetric linear systems with multiple right-hand sides.

8.
In this paper, we describe tensor methods for large systems of nonlinear equations based on Krylov subspace techniques for approximately solving the linear systems that are required in each tensor iteration. We refer to a method in this class as a tensor-Krylov algorithm. We describe comparative testing for a tensor-Krylov implementation versus an analogous implementation based on a Newton-Krylov method. The test results show that tensor-Krylov methods are much more efficient and robust than Newton-Krylov methods on hard nonlinear equation problems. Part of this work was performed while the author was a research associate at CERFACS (Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique). Research supported in part by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.

9.
We consider two Krylov subspace methods for solving linear systems, namely the minimal residual method and the orthogonal residual method. These two methods are studied without referring to any particular implementations. Using the Petrov–Galerkin condition, we describe the residual norms of these two methods in terms of Krylov vectors, and the relationship between these two norms. We define the Ritz singular values, and prove that the convergence of these two methods is governed by the convergence of the Ritz singular values. AMS subject classification 65F10

10.
The aim of this paper is to develop fast second-order accurate difference schemes for solving one- and two-dimensional time distributed-order and Riesz space fractional diffusion equations. We adopt the same approach for the one- and two-dimensional problems: we first transform the time distributed-order fractional diffusion problem into a multi-term time-space fractional diffusion problem using the composite trapezoid formula. Then, we propose a second-order accurate difference scheme based on the interpolation approximation at a special point to solve the resultant problem. Meanwhile, the unconditional stability and convergence of the new difference scheme in the $L_2$-norm are proved. Furthermore, we find that the discretizations lead to a series of Toeplitz systems which can be efficiently solved by Krylov subspace methods with suitable circulant preconditioners. Finally, numerical results are presented to show the effectiveness of the proposed difference methods and demonstrate the fast convergence of our preconditioned Krylov subspace methods.
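As a generic illustration of the circulant-preconditioning idea (using a simple stand-in Toeplitz system rather than the fractional-diffusion discretizations of the paper), a Strang-type circulant preconditioner can be applied with FFTs inside SciPy's GMRES:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# A simple stand-in Toeplitz system (not the fractional-diffusion matrices of the paper).
n = 512
col = np.zeros(n); col[[0, 1]] = [2.5, -1.0]     # first column: t_0, t_1, ...
row = np.zeros(n); row[[0, 1]] = [2.5, -1.2]     # first row:    t_0, t_{-1}, ...
T = toeplitz(col, row)                           # dense here for clarity
b = np.ones(n)

# Strang circulant preconditioner: copy the central diagonals of T into a circulant C.
half = n // 2
c = np.zeros(n)
c[:half] = col[:half]                            # t_0, ..., t_{half-1}
c[half + 1:] = row[1:half][::-1]                 # t_{-(half-1)}, ..., t_{-1} wrap around
lam = np.fft.fft(c)                              # eigenvalues of the circulant C

def apply_Cinv(x):
    # C^{-1} x with two FFTs, O(n log n)
    return np.real(np.fft.ifft(np.fft.fft(x) / lam))

M = LinearOperator((n, n), matvec=apply_Cinv)
x, info = gmres(T, b, M=M)
print(info, np.linalg.norm(T @ x - b))           # info == 0 means GMRES converged
```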

11.
This paper gives the truncated version of the Minpert method: the incomplete minimum perturbation algorithm (IMinpert). It is based on an incomplete orthogonalization of the Krylov vectors in question, and gives a quasi-minimum backward error solution over the Krylov subspace. In order to make the practical implementation of IMinpert easy and convenient, we give another approximate version of the IMinpert method: A-IMinpert. Theoretical properties of the latter algorithm are discussed. Numerical experiments are reported to show the proposed method is effective in practice and is competitive with the Minpert algorithm.

12.
Despite its usefulness in solving eigenvalue problems and linear systems of equations, the nonsymmetric Lanczos method is known to suffer from a potential breakdown problem. Previous and recent approaches for handling the Lanczos exact and near-breakdowns include, for example, the look-ahead schemes by Parlett-Taylor-Liu [23], Freund-Gutknecht-Nachtigal [9], and Brezinski-Redivo Zaglia-Sadok [4]; the combined look-ahead and restart scheme by Joubert [18]; and the low-rank modified Lanczos scheme by Huckle [17]. In this paper, we present yet another scheme based on a modified Krylov subspace approach for the solution of nonsymmetric linear systems. When a breakdown occurs, our approach seeks a modified dual Krylov subspace, which is the sum of the original subspace and a new Krylov subspace $K_m(w_j, A^T)$, where $w_j$ is a new start vector (this approach has been studied by Ye [26] for eigenvalue computations). Based on this strategy, we have developed a practical algorithm for linear systems called the MLAN/QM algorithm, which also incorporates the residual quasi-minimization as proposed in [12]. We present a few convergence bounds for the method as well as numerical results to show its effectiveness. Research supported by Natural Sciences and Engineering Research Council of Canada.

13.
For solving least squares problems, the CGLS method is a standard choice among iterative methods. When the least squares problem is ill-conditioned, however, the convergence of CGLS can deteriorate, which motivates considering other Krylov subspace methods. The GMRES method is a suitable candidate because it is derived from a minimal residual norm approach, which matches the least squares setting. Ken Hayami proposed BAGMRES for solving least squares problems in [GMRES Methods for Least Squares Problems, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2400-2430]. Deflation and balancing preconditioners can improve the convergence rate by modifying the spectral distribution. Hence, in this paper we apply preconditioned Krylov subspace methods with deflation and balancing preconditioners to ill-conditioned least squares problems. Numerical experiments show that the methods proposed in this paper outperform the CGLS method.
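For reference, the CGLS baseline can be sketched as conjugate gradients applied to the normal equations without ever forming A^T A (a minimal NumPy version with an illustrative random test problem):

```python
import numpy as np

def cgls(A, b, tol=1e-10, maxiter=500):
    """CGLS: conjugate gradients on the normal equations A^T A x = A^T b,
    implemented with matrix-vector products only (A^T A is never formed)."""
    x = np.zeros(A.shape[1])
    r = b.copy()                         # residual b - A x
    s = A.T @ r                          # normal-equations residual A^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) <= tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Usage sketch on a small well-conditioned problem.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 50)); b = rng.standard_normal(200)
x = cgls(A, b)
print(np.linalg.norm(A.T @ (b - A @ x)))     # small: x satisfies the normal equations
```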

14.
In the present paper, we present numerical methods for the computation of approximate solutions to large continuous-time and discrete-time algebraic Riccati equations. The proposed methods are projection methods onto block Krylov subspaces. We use the block Arnoldi process to construct an orthonormal basis of the corresponding block Krylov subspace and then extract low rank approximate solutions. We consider the sequential version of the block Arnoldi algorithm by incorporating a deflation technique which allows us to delete linearly and almost linearly dependent vectors in the block Krylov subspace sequences. We give some theoretical results and present numerical experiments for large problems.

15.
Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.

16.
Iterative methods, and especially Krylov subspace methods (KSM), are a very useful numerical tool for solving the large sparse linear systems that arise in science and engineering modeling. More recently, nested-loop KSM have been proposed that improve the convergence of traditional KSM. In this article, we review the residual cutting (RC) and the generalized residual cutting (GRC) methods, which are nested-loop methods for large sparse linear systems. We also show that GRC is a KSM that is equivalent to Orthomin with variable preconditioning. We use the modified Gram–Schmidt method to derive a stable GRC algorithm. We show that GRC provides a general framework for constructing a class of "hybrid" (nested) KSM based on the choice of inner-loop method. We conduct numerical experiments using nonsymmetric indefinite matrices from a widely used library of sparse matrices that validate the efficiency and the robustness of the proposed methods.

17.
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters. Copyright © 2006 John Wiley & Sons, Ltd.

18.
The Conjugate Gradient (CG) method and the Conjugate Residual (CR) method are Krylov subspace methods for solving symmetric (positive definite) linear systems. To solve nonsymmetric linear systems, the Bi-Conjugate Gradient (Bi-CG) method has been proposed as an extension of CG. Bi-CG has attractive short-term recurrences, and it is the basis for successful variants such as Bi-CGSTAB. In this paper, we extend CR to nonsymmetric linear systems with the aim of finding an alternative basic solver. Numerical experiments show that the resulting algorithm with short-term recurrences often gives smoother convergence behavior than Bi-CG. Hence, it may take the place of Bi-CG as the basis for such variants.
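For reference, the standard Conjugate Residual method for symmetric systems, which the paper extends to nonsymmetric ones, can be sketched as follows (assuming a symmetric positive definite matrix for simplicity; the test problem is illustrative):

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, maxiter=None):
    """Standard Conjugate Residual method for symmetric A: minimizes ||b - A x||_2
    over the Krylov subspace using short-term recurrences."""
    n = b.size
    maxiter = maxiter or 2 * n
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(maxiter):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p                  # short-term recurrences for p and A p
        Ap = Ar + beta * Ap
        rAr = rAr_new
    return x

# Usage sketch on a symmetric positive definite test matrix.
rng = np.random.default_rng(4)
n = 100
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
b = rng.standard_normal(n)
x = conjugate_residual(A, b)
print(np.linalg.norm(A @ x - b))
```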

19.
This paper concerns the use of Krylov subspace methods for the solution of nearly singular nonsymmetric linear systems. We show that the incomplete orthogonalization methods (IOM) in conjunction with certain deflation techniques of Stewart, Chan, and Saad can be used to solve large nonsymmetric linear systems which are nearly singular. This work was supported by the National Science Foundation, Grants DMS-8403148 and DCR-81-16779, and by the Office of Naval Research, Contract N00014-85-K-0725.

20.
By further generalizing the modified skew-Hermitian triangular splitting iteration methods studied in [L. Wang, Z.-Z. Bai, Skew-Hermitian triangular splitting iteration methods for non-Hermitian positive definite linear systems of strong skew-Hermitian parts, BIT Numer. Math. 44 (2004) 363-386], in this paper, we present a new iteration scheme, called the product-type skew-Hermitian triangular splitting iteration method, for solving the strongly non-Hermitian systems of linear equations with positive definite coefficient matrices. We discuss the convergence property and the optimal parameters of this method. Moreover, when it is applied to precondition the Krylov subspace methods, the preconditioning property of the product-type skew-Hermitian triangular splitting iteration is analyzed in detail. Numerical results show that the product-type skew-Hermitian triangular splitting iteration method can produce high-quality preconditioners for the Krylov subspace methods for solving large sparse positive definite systems of linear equations of strong skew-Hermitian parts.
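As background only (this is the classical Hermitian/skew-Hermitian splitting of Bai, Golub, and Ng, not the product-type triangular splitting proposed in the paper), the basic alternating-splitting iteration that this family of methods generalizes can be sketched as:

```python
import numpy as np

def hss_iteration(A, b, alpha, maxiter=200, tol=1e-10):
    """Classical Hermitian/skew-Hermitian splitting (HSS) iteration:
    write A = H + S with H = (A + A^H)/2 and S = (A - A^H)/2, then alternate
    solves with (alpha*I + H) and (alpha*I + S)."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(maxiter):
        x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)   # Hermitian half-step
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)   # skew-Hermitian half-step
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# Usage sketch: a non-symmetric positive definite test matrix.
rng = np.random.default_rng(5)
n = 80
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
lam = np.linalg.eigvalsh((A + A.T) / 2)
x = hss_iteration(A, b, alpha=np.sqrt(lam[0] * lam[-1]))   # classical choice of alpha
print(np.linalg.norm(A @ x - b))
```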
