Similar Literature
20 similar documents retrieved.
1.
In the present paper, we propose Krylov-based methods for solving large-scale differential Sylvester matrix equations having a low-rank constant term. We present two new approaches for solving such differential matrix equations. The first approach is based on the integral expression of the exact solution and a Krylov method for the computation of the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace and obtain a low-dimensional differential Sylvester matrix equation. The latter problem is then solved by a numerical integration method such as the backward differentiation formula (BDF) or a Rosenbrock method, and the obtained solution is used to build a low-rank approximate solution of the original problem. We give some new theoretical results, such as a simple expression of the residual norm and upper bounds for the norm of the error. Some numerical experiments are given in order to compare the two approaches.
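As an illustration of the integral expression underlying the first approach, the following Python sketch evaluates the exact solution of a small differential Sylvester equation by quadrature on dense matrix exponentials and cross-checks it against an off-the-shelf ODE solver. The test matrices, tolerances, and function names are invented for the toy example; this is a dense reference computation, not the authors' large-scale Krylov method.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

def diff_sylvester_integral(A, B, E, F, X0, t, nodes=200):
    """Evaluate the integral expression of the exact solution of
        X'(t) = A X + X B + E F^T,   X(0) = X0,
    namely X(t) = e^{tA} X0 e^{tB} + int_0^t e^{sA} E F^T e^{sB} ds,
    by composite trapezoidal quadrature on dense matrix exponentials.
    Only sensible for small A and B."""
    C = E @ F.T
    s = np.linspace(0.0, t, nodes)
    vals = np.array([expm(si * A) @ C @ expm(si * B) for si in s])
    ds = s[1] - s[0]
    integral = ds * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
    return expm(t * A) @ X0 @ expm(t * B) + integral

rng = np.random.default_rng(0)
n, p, r = 30, 20, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = -np.eye(p) + 0.1 * rng.standard_normal((p, p))
E, F = rng.standard_normal((n, r)), rng.standard_normal((p, r))
X0 = np.zeros((n, p))
X_int = diff_sylvester_integral(A, B, E, F, X0, t=1.0)

# cross-check against a vectorized off-the-shelf ODE solver
rhs = lambda t_, x: (A @ x.reshape(n, p) + x.reshape(n, p) @ B + E @ F.T).ravel()
X_ode = solve_ivp(rhs, (0.0, 1.0), X0.ravel(), rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, p)
print(np.linalg.norm(X_int - X_ode) / np.linalg.norm(X_ode))
```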

2.

The solution of a large-scale Sylvester matrix equation plays an important role in control and in large scientific computations. In this paper, we are interested in the large Sylvester matrix equation in which the coefficient matrix A has large dimension and B has small dimension, and a popular approach is the global Krylov subspace method. We propose three new algorithms for this problem. We first consider the global GMRES algorithm with a weighting strategy, which can be viewed as a preconditioning technique, and present three new schemes to update the weighting matrix during the iterations. Owing to the growth of memory requirements and computational cost, it is necessary to restart the algorithm effectively. The deflation strategy is efficient for the solution of large linear systems and large eigenvalue problems; to the best of our knowledge, however, little work has been done on applying deflation to the (weighted) global GMRES algorithm for large Sylvester matrix equations. We therefore consider how to combine the weighting strategy with deflated restarting, and propose a weighted global GMRES algorithm with deflation for solving large Sylvester matrix equations. In particular, we are interested in the global GMRES algorithm with deflation, which can be viewed as the special case in which the weighting matrix is the identity. Theoretical analysis is given to show the rationality of the new algorithms. Numerical experiments illustrate the numerical behavior of the proposed algorithms.
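For orientation, a baseline way to attack AX + XB = C with a Krylov solver (not the weighted or deflated global GMRES of the paper) is to wrap the Sylvester operator X ↦ AX + XB as a linear operator on vec(X) and hand it to an off-the-shelf GMRES. The sketch below does this with SciPy on an invented small test problem.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def sylvester_gmres(A, B, C, restart=30):
    """Solve A X + X B = C by applying GMRES to the vectorized Sylvester
    operator x = vec(X) -> vec(A X + X B); A is n x n, B is p x p with
    small p, and the operator is only applied, never formed."""
    n, p = A.shape[0], B.shape[0]
    def matvec(x):
        X = x.reshape(n, p)
        return (A @ X + X @ B).ravel()
    op = LinearOperator((n * p, n * p), matvec=matvec)
    x, info = gmres(op, C.ravel(), restart=restart)
    return x.reshape(n, p), info

rng = np.random.default_rng(1)
n, p = 200, 5                                  # large A, small B
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
B = np.eye(p)
C = rng.standard_normal((n, p))
X, info = sylvester_gmres(A, B, C)
print(info, np.linalg.norm(A @ X + X @ B - C))
```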


3.
We consider the approximation of operator functions in resolvent Krylov subspaces. Besides many other applications, such approximations are currently of high interest for the approximation of φ-functions that arise in the numerical solution of evolution equations by exponential integrators. It is well known that Krylov subspace methods for matrix functions without exponential decay show superlinear convergence behaviour if the number of steps is larger than the norm of the operator. Thus, Krylov approximations may fail to converge for unbounded operators. In this paper, we analyse a rational Krylov subspace method which converges not only for finite element or finite difference approximations to differential operators but even for abstract, unbounded operators whose field of values lies in the left half plane. In contrast to standard Krylov methods, the convergence will be independent of the norm of the discretised operator and thus of the spatial discretisation. We will discuss efficient implementations for finite element discretisations and illustrate our analysis with numerical experiments.

4.
For the large sparse block two-by-two real nonsingular matrices, we establish a general framework of practical and efficient structured preconditioners through matrix transformation and matrix approximations. For the specific versions such as modified block Jacobi-type, modified block Gauss-Seidel-type, and modified block unsymmetric (symmetric) Gauss-Seidel-type preconditioners, we precisely describe their concrete expressions and deliberately analyze eigenvalue distributions and positive definiteness of the preconditioned matrices. Also, we show that when these structured preconditioners are employed to precondition the Krylov subspace methods such as GMRES and restarted GMRES, fast and effective iteration solvers can be obtained for the large sparse systems of linear equations with block two-by-two coefficient matrices. In particular, these structured preconditioners can lead to efficient and high-quality preconditioning matrices for some typical matrices from the real-world applications.
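As a concrete (and much simplified) instance of such structured preconditioning, the sketch below applies a block Gauss-Seidel-type preconditioner M = [[A, 0], [C, D]] to a dense block two-by-two system inside SciPy's GMRES. The test blocks are invented; the paper's modified variants and their eigenvalue analysis are not reproduced here.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
n, m = 150, 50
A = np.diag(np.linspace(1.0, 5.0, n)) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = 0.1 * rng.standard_normal((m, n))
D = 2.0 * np.eye(m)
K = np.block([[A, B], [C, D]])                 # block two-by-two coefficient matrix
b = rng.standard_normal(n + m)

# block Gauss-Seidel-type preconditioner M = [[A, 0], [C, D]]:
# applying M^{-1} needs one solve with A and one with D per call
luA, luD = lu_factor(A), lu_factor(D)
def apply_Minv(r):
    z1 = lu_solve(luA, r[:n])
    z2 = lu_solve(luD, r[n:] - C @ z1)
    return np.concatenate([z1, z2])

M = LinearOperator((n + m, n + m), matvec=apply_Minv)
x, info = gmres(K, b, M=M, restart=60)
print(info, np.linalg.norm(K @ x - b))
```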



5.

The basic aim of this article is to present a novel efficient matrix approach for solving second-order linear matrix partial differential equations (MPDEs) under given initial conditions. For imposing the given initial conditions on the main MPDEs, the associated matrix integro-differential equations (MIDEs) with partial derivatives are obtained by direct integration with respect to the spatial variable x and the time variable t. Operational matrices of differentiation and integration, together with the completeness of the Bernoulli polynomials, are then used to reduce the obtained MIDEs to corresponding algebraic Sylvester equations. Using two well-known Krylov subspace iterative methods (GMRES(10) and Bi-CGSTAB), we provide two algorithms for solving these Sylvester equations. A numerical example is provided to show the efficiency and accuracy of the presented approach.


6.
The approximate solutions in standard iteration methods for linear systems Ax=b, with A an n by n nonsingular matrix, form a subspace. In this subspace, one may try to construct better approximations for the solution x. This is the idea behind Krylov subspace methods, and it has led to very powerful and efficient methods such as conjugate gradients, GMRES, and Bi-CGSTAB. We give an overview of these methods and discuss some relevant properties from the user's perspective. The convergence of Krylov subspace methods depends strongly on the eigenvalue distribution of A and on the angles between eigenvectors of A. Preconditioning is a popular technique for obtaining a better-behaved linear system. We briefly discuss some modern developments in preconditioning; in particular, parallel preconditioners are highlighted: reordering techniques for incomplete decompositions, domain decomposition approaches, and sparsified Schur complements.
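To make the preconditioning idea concrete, here is a minimal SciPy sketch that accelerates GMRES on a discretized Laplacian with an incomplete LU factorization (one of the incomplete decompositions mentioned above); the grid size and drop tolerance are arbitrary choices for the toy example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

# 2D Laplacian on a k x k grid as a stand-in for a discretized PDE
k = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
A = (sp.kron(sp.eye(k), T) + sp.kron(T, sp.eye(k))).tocsc()
b = np.ones(A.shape[0])

# incomplete LU factorization of A used as a preconditioner for GMRES
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(A @ x - b))
```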

7.
In this paper, we study the alternating direction implicit (ADI) iteration for solving the continuous Sylvester equation AX + XB = C, where the coefficient matrices A and B are assumed to be positive semi-definite matrices (not necessarily Hermitian), and at least one of them to be positive definite. We first analyze the convergence of the ADI iteration for solving such a class of Sylvester equations, then derive an upper bound for the contraction factor of this ADI iteration. To reduce its computational complexity, we further propose an inexact variant of the ADI iteration, which employs some Krylov subspace methods as its inner iteration processes at each step of the outer ADI iteration. The convergence is also analyzed in detail. Numerical experiments are given to illustrate the effectiveness of both ADI and inexact ADI iterations.
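A bare-bones version of the ADI iteration for AX + XB = C looks as follows; it uses direct solves for the shifted systems, a single recycled shift pair, and a diagonal toy problem, rather than the inexact Krylov inner solves analyzed in the paper.

```python
import numpy as np
from scipy.linalg import solve

def adi_sylvester(A, B, C, shifts, X0=None):
    """Textbook ADI iteration for A X + X B = C: alternate between a solve
    with a shifted A and a solve with a shifted B.  'shifts' is a list of
    (p, q) pairs used in order; no inexact (Krylov) inner solves here."""
    n, m = A.shape[0], B.shape[0]
    X = np.zeros((n, m)) if X0 is None else X0.copy()
    In, Im = np.eye(n), np.eye(m)
    for p, q in shifts:
        # half step: (A + p I) X = C - X (B - p I)
        X = solve(A + p * In, C - X @ (B - p * Im))
        # full step: X (B + q I) = C - (A - q I) X, solved from the right
        X = solve((B + q * Im).T, (C - (A - q * In) @ X).T).T
    return X

# diagonal toy problem with spectra in [1, 10]; the geometric mean of the
# spectral interval is a reasonable single shift, reused in every sweep
n, m = 80, 40
A = np.diag(np.linspace(1.0, 10.0, n))
B = np.diag(np.linspace(1.0, 10.0, m))
C = np.random.default_rng(3).standard_normal((n, m))
shift = np.sqrt(10.0)
X = adi_sylvester(A, B, C, shifts=[(shift, shift)] * 15)
print(np.linalg.norm(A @ X + X @ B - C))
```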

8.
We consider the approximation of trigonometric operator functions that arise in the numerical solution of wave equations by trigonometric integrators. It is well known that Krylov subspace methods for matrix functions without exponential decay show superlinear convergence behavior if the number of steps is larger than the norm of the operator. Thus, Krylov approximations may fail to converge for unbounded operators. In this paper, we propose and analyze a rational Krylov subspace method which converges not only for finite element or finite difference approximations to differential operators but even for abstract, unbounded operators. In contrast to standard Krylov methods, the convergence will be independent of the norm of the operator and thus of its spatial discretization. We will discuss efficient implementations for finite element discretizations and illustrate our analysis with numerical experiments. AMS subject classification (2000)  65F10, 65L60, 65M60, 65N22

9.
In this paper, we develop an algorithm in which the block shift-and-invert Krylov subspace method can be employed for approximating the linear combination of the matrix exponential and related exponential-type functions. Such evaluation plays a major role in a class of numerical methods known as exponential integrators. We derive a low-dimensional matrix exponential to approximate the objective function based on the block shift-and-invert Krylov subspace methods. We obtain the error expansion of the approximation, and show that the variants of its first term can be used as reliable a posteriori error estimates and correctors. Numerical experiments illustrate that the error estimates are efficient and the proposed algorithm is worthy of further study.
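The shift-and-invert idea can be illustrated in a few lines for the single-vector case: build an Arnoldi basis of the Krylov subspace generated by (I − γA)⁻¹ and b, project A onto it, and exponentiate the small projected matrix. The sketch below (with an invented stiff diagonal test matrix and arbitrary γ and subspace size) shows this basic mechanism; it is not the block algorithm or the a posteriori error estimates developed in the paper.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def sai_krylov_expm(A, b, m=15, gamma=0.05):
    """Approximate exp(A) @ b from the shift-and-invert Krylov subspace
    K_m((I - gamma*A)^{-1}, b): Arnoldi on the shifted inverse, then
    project A onto the resulting basis and exponentiate the small
    projected matrix."""
    n = A.shape[0]
    lu = lu_factor(np.eye(n) - gamma * A)        # factor once, reuse every step
    V = np.zeros((n, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    k = m
    for j in range(m - 1):
        w = lu_solve(lu, V[:, j])                # w = (I - gamma*A)^{-1} v_j
        for i in range(j + 1):                   # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        nw = np.linalg.norm(w)
        if nw < 1e-12:                           # happy breakdown
            k = j + 1
            break
        V[:, j + 1] = w / nw
    V = V[:, :k]
    Am = V.T @ A @ V                             # Rayleigh quotient of A on span(V)
    return beta * (V @ expm(Am)[:, 0])

rng = np.random.default_rng(4)
n = 400
A = -np.diag(np.linspace(1.0, 500.0, n))         # stiff, negative spectrum
b = rng.standard_normal(n)
approx = sai_krylov_expm(A, b)
exact = expm(A) @ b
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```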

10.
Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
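The compression step mentioned above can be pictured as a rank truncation of a factored quantity Z D Zᵀ: orthogonalize the tall factor, push everything into the small middle matrix, and discard negligible eigenvalues. The following sketch shows such a generic truncation (the tolerance and test data are invented; the paper's residual-specific bookkeeping is not reproduced).

```python
import numpy as np
from scipy.linalg import qr, eigh

def compress_low_rank(Z, D, tol=1e-10):
    """Truncate a symmetric factorization Z D Z^T to a smaller rank:
    orthogonalize Z, move everything into the small middle factor, and
    drop eigenvalues below tol times the largest one in magnitude."""
    Q, R = qr(Z, mode='economic')
    w, U = eigh(R @ D @ R.T)
    keep = np.abs(w) > tol * np.abs(w).max()
    return Q @ U[:, keep], np.diag(w[keep])

rng = np.random.default_rng(5)
Z = rng.standard_normal((500, 40))
Z[:, 20:] = Z[:, :20] * (1.0 + 1e-13)            # nearly redundant columns
D = np.eye(40)
Zc, Dc = compress_low_rank(Z, D)
print(Z.shape[1], "->", Zc.shape[1])             # numerical rank drops to ~20
```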

11.
We present a nested splitting conjugate gradient iteration method for solving the large sparse continuous Sylvester equation in which both coefficient matrices are (non-Hermitian) positive semi-definite and at least one of them is positive definite. The method is in fact an inner/outer iteration scheme: it employs the Sylvester conjugate gradient method as the inner iteration to approximate each outer iterate, while each outer iteration is induced by a convergent Hermitian positive definite splitting of the coefficient matrices. Convergence conditions of this method are studied, and numerical experiments show its efficiency. In addition, we show that the quasi-Hermitian splitting can induce accurate, robust and effective preconditioned Krylov subspace methods.

12.
In this paper, we consider large-scale nonsymmetric differential matrix Riccati equations with low-rank right-hand sides. These matrix equations appear in many applications such as control theory, transport theory, applied probability, and others. We show how to apply Krylov-type methods such as the extended block Arnoldi algorithm to get low-rank approximate solutions. The initial problem is projected onto small subspaces to get low dimensional nonsymmetric differential equations that are solved using the exponential approximation or via other integration schemes such as backward differentiation formula (BDF) or Rosenbrock method. We also show how these techniques can be easily used to solve some problems from the well-known transport equation. Some numerical examples are given to illustrate the application of the proposed methods to large-scale problems.

13.
Techniques of Krylov subspace iteration play an important role in computing ε-spectra of large matrices. To obtain results about the reliability of this kind of approximation, we propose to compare the position of the ε-spectrum of A with those of its diagonal submatrices. We give theoretical results that are valid for any block decomposition into four blocks A11, A12, A21, A22, and we then illustrate these results by numerical experiments. The same kind of problem arises when we compute the stability radius of a large matrix. In that context, we propose a new sufficient condition for the stability of a matrix involving readily computable quantities such as the stability radii of small submatrices.
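A direct (if expensive) way to visualize an ε-spectrum is to evaluate σ_min(zI − A) on a grid of points z, since Λ_ε(A) = {z : σ_min(zI − A) ≤ ε}. The sketch below does this for a small non-normal test matrix and for one of its diagonal submatrices; the grid and matrices are invented and no contour plotting is included.

```python
import numpy as np
from scipy.linalg import svdvals

def pseudospectrum_grid(A, re, im):
    """Return sigma_min(z I - A) on a grid of points z = x + i*y; the
    eps-spectrum is the region where this value is at most eps.  One
    dense SVD per grid point, so only meant for small (sub)matrices."""
    n = A.shape[0]
    S = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            S[i, j] = svdvals((x + 1j * y) * np.eye(n) - A)[-1]
    return S

# compare the eps-spectrum of A with that of its leading diagonal submatrix A11
rng = np.random.default_rng(6)
A = np.triu(rng.standard_normal((60, 60))) + np.diag(np.arange(60.0))
A11 = A[:30, :30]
re = np.linspace(-5.0, 65.0, 50)
im = np.linspace(-10.0, 10.0, 31)
S_full = pseudospectrum_grid(A, re, im)
S_sub = pseudospectrum_grid(A11, re, im)
print(S_full.min(), S_sub.min())
```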

14.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms for solving the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly using them in projecting the eigenvalue problem, it becomes possible to use IFEAST to solve eigenvalue problems using very large dimension Krylov subspaces without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix–vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
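In outline, a FEAST-style iteration filters a random block through a quadrature approximation of the contour integral of the resolvent and then applies Rayleigh–Ritz. The sketch below implements this for a real symmetric toy matrix with direct shifted solves (the IFEAST variant of the paper would replace those solves by inexact Krylov iterations); the contour, quadrature order and block size are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import solve, qr, eigh

def feast_sketch(A, center, radius, m0=12, n_quad=8, n_iter=3, seed=0):
    """Contour-integration subspace iteration for a real symmetric A:
    approximate the spectral projector onto the eigenvalues inside the
    circle |z - center| < radius by quadrature of the resolvent, then
    perform Rayleigh-Ritz on the filtered block.  Direct shifted solves
    are used here; an 'inexact' variant would replace them by Krylov
    iterations."""
    n = A.shape[0]
    Y = np.random.default_rng(seed).standard_normal((n, m0))
    theta = (np.arange(n_quad) + 0.5) * 2.0 * np.pi / n_quad   # midpoints avoid the real axis
    z = center + radius * np.exp(1j * theta)                   # quadrature nodes on the circle
    w = radius * np.exp(1j * theta) / n_quad                   # (1/(2*pi*i)) dz weights
    for _ in range(n_iter):
        Q = np.zeros((n, m0))
        for zj, wj in zip(z, w):
            Q += np.real(wj * solve(zj * np.eye(n) - A, Y))    # filtered block
        Q, _ = qr(Q, mode='economic')
        lam, U = eigh(Q.T @ A @ Q)                             # Rayleigh-Ritz
        Y = Q @ U
    inside = np.abs(lam - center) < radius
    return lam[inside], Y[:, inside]

A = np.diag(np.arange(1.0, 101.0))                             # eigenvalues 1, 2, ..., 100
vals, vecs = feast_sketch(A, center=10.0, radius=4.5)
print(np.sort(vals))     # approximately the eigenvalues 6, 7, ..., 14 inside the contour
```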

15.
In this paper, we consider large-scale linear discrete ill-posed problems in which the right-hand side contains noise. Regularization techniques such as Tikhonov regularization are needed to control the effect of the noise on the solution. In many applications, such as image restoration, the coefficient matrix is given as a Kronecker product of two matrices, and the Tikhonov regularization problem then leads to a generalized Sylvester matrix equation. For large-scale problems, we use the global GMRES method, which is an orthogonal projection method onto a matrix Krylov subspace. We present some theoretical results and give numerical tests in image restoration.
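To see where the matrix equation comes from, note that the Tikhonov normal equations for a Kronecker-structured operator can be written entirely in terms of the two small factors. The sketch below sets up a toy deblurring problem and solves those normal equations directly by diagonalizing the two Gram matrices; this is a dense reference computation on invented data, not the global GMRES iteration studied in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def kron_tikhonov(A1, A2, B, lam):
    """Tikhonov solution of the Kronecker-structured least-squares problem
        min_X || A1 X A2^T - B ||_F^2 + lam^2 ||X||_F^2 ,
    whose normal equations form the generalized Sylvester equation
        A1^T A1 X A2^T A2 + lam^2 X = A1^T B A2 .
    Solved directly by diagonalizing the two Gram matrices."""
    s1, U1 = eigh(A1.T @ A1)
    s2, U2 = eigh(A2.T @ A2)
    Rhs = U1.T @ (A1.T @ B @ A2) @ U2
    Xt = Rhs / (np.outer(s1, s2) + lam ** 2)
    return U1 @ Xt @ U2.T

rng = np.random.default_rng(7)
n = 64
A1 = np.tril(np.ones((n, n))) / n                              # toy smoothing/blur operator
A2 = A1.copy()
X_true = np.outer(np.sin(np.linspace(0, 3, n)), np.cos(np.linspace(0, 2, n)))
B = A1 @ X_true @ A2.T + 1e-4 * rng.standard_normal((n, n))    # noisy observed data
X = kron_tikhonov(A1, A2, B, lam=1e-3)
print(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))     # relative reconstruction error
```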

16.
This paper presents a new method for obtaining a matrix M which is an approximate inverse preconditioner for a given matrix A whose eigenvalues all have negative real parts or all have positive real parts. The method is based on the approximate solution of the special Sylvester equation AX + XA = 2I. We use a Krylov subspace method, based on the Arnoldi algorithm and on an integral formula, for obtaining an approximate solution of this Sylvester matrix equation. The computation of the preconditioner can be carried out in parallel, and its implementation requires only the solution of very simple and small Sylvester equations. The sparsity of the preconditioner is preserved by using a proper dropping strategy. Some numerical experiments on test matrices from the Harwell–Boeing collection are presented, comparing the numerical performance of the new method with a well-known existing algorithm.
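Note that when all eigenvalues of A lie strictly in one half plane, the unique solution of AX + XA = 2I is exactly A⁻¹, which is why an approximate solution of this equation serves as an approximate inverse preconditioner. The sketch below checks this on a small invented test matrix using a dense Sylvester solver and then uses the result to precondition GMRES; the paper's sparse Arnoldi/integral-formula construction and dropping strategy are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.sparse.linalg import LinearOperator, gmres

# test matrix whose eigenvalues all have positive real parts (by construction)
rng = np.random.default_rng(8)
n = 200
A = np.diag(np.linspace(2.0, 50.0, n)) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# When the spectrum of A lies in one open half plane, the unique solution of
# A X + X A = 2 I is X = A^{-1}; so an approximate solution of this Sylvester
# equation is an approximate inverse preconditioner.  Here a dense solver
# stands in for the sparse Arnoldi/integral-formula construction.
M_mat = solve_sylvester(A, A, 2.0 * np.eye(n))
print(np.linalg.norm(M_mat @ A - np.eye(n)))                   # ~ 0, i.e. M_mat = A^{-1}

M = LinearOperator((n, n), matvec=lambda r: M_mat @ r)
x, info = gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(A @ x - b))
```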

17.
Krylov subspace projection methods and their application in reservoir numerical simulation
Krylov subspace projection methods are a highly effective class of solvers for large systems of linear algebraic equations; different choices of the left and right subspaces Lm and Km yield many well-known methods. According to the type of the matrix Hm, Krylov subspace methods are divided into two broad classes, and the advantages, drawbacks and recent developments of these two classes are briefly analyzed. The generalized minimal residual method (GMRES), currently the most reliable and practical of these methods, is applied to reservoir numerical simulation; using matrix blocking techniques, a block pseudo-elimination (PE) scheme is employed to precondition the coefficient matrix. Numerical results show that the preconditioned GMRES method presented here outperforms the widely used preconditioned ORTHOMIN (orthogonal minimization) method. Finally, the limitations of projection-type methods and possible directions for their future development are discussed.

18.
The incomplete orthogonalization method (IOM) proposed by Saad for computing a few eigenpairs of large nonsymmetric matrices is generalized into a block incomplete orthogonalization method (BIOM). It is studied how the departure from symmetry A − A^H affects the conditioning of the block basis vectors generated by BIOM, and some relationships are established between the approximate eigenpairs obtained by BIOM and Ritz pairs. It is proved that BIOM behaves much like generalized block Lanczos methods if the basis vectors of the block Krylov subspace generated by it are strongly linearly independent. However, it is shown that BIOM may generate a nearly linearly dependent basis for a general nonsymmetric matrix. Numerical experiments illustrate the convergence behavior of BIOM. This work was supported in part by the Graduiertenkolleg at the University of Bielefeld, Germany.

19.
刘瑶宁 (Liu Yaoning), 《计算数学》 (Mathematica Numerica Sinica), 2022, 44(2): 187-205
For a class of spatial fractional diffusion equations, the coefficient matrix of the discrete linear system obtained after finite difference discretization is the sum of products of two diagonal matrices with Toeplitz matrices. In this paper, for the discrete linear systems arising from nearly isotropic two- or three-dimensional spatial fractional diffusion equations, we adopt preconditioned Krylov subspace iteration methods and, exploiting the special structure and concrete properties of the coefficient matrix, construct a class of block fast regularized Hermitian splitting preconditioners. Theoretical analysis shows that most of the eigenvalues of the corresponding preconditioned matrices cluster around 1. Numerical experiments also show that this class of block fast regularized Hermitian splitting preconditioners can markedly accelerate the convergence of Krylov subspace iteration methods such as the generalized minimal residual (GMRES) method and the stabilized biconjugate gradient (BiCGSTAB) method.
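The Toeplitz structure is what makes "fast" preconditioning possible: circulant approximations of Toeplitz blocks are diagonalized by the FFT and can be inverted in O(n log n). The sketch below illustrates this on a single symmetric Toeplitz system with a Strang circulant preconditioner inside GMRES; the kernel is invented and this is a generic illustration, not the block fast regularized Hermitian splitting preconditioner constructed in the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

def gmres_iters(A, b, M=None):
    """Run GMRES and return the solution together with the iteration count."""
    hist = []
    x, info = gmres(A, b, M=M, restart=50, callback=hist.append,
                    callback_type='pr_norm')
    return x, len(hist)

# symmetric Toeplitz test system T x = b with a slowly decaying kernel
n = 512
t = 1.0 / (1.0 + np.arange(n)) ** 1.1            # first column (= first row) of T
T = toeplitz(t)
b = np.ones(n)

# Strang circulant preconditioner: copy the central diagonals of T into a
# circulant C; C is diagonalized by the FFT, so C^{-1} is applied in O(n log n)
c = t.copy()
c[n // 2 + 1:] = t[1:n // 2][::-1]               # wrap the tail around
eig_C = np.fft.fft(c)                            # eigenvalues of the circulant
M = LinearOperator((n, n),
                   matvec=lambda x: np.real(np.fft.ifft(np.fft.fft(x) / eig_C)))

_, it_plain = gmres_iters(T, b)
_, it_prec = gmres_iters(T, b, M=M)
print("GMRES iterations without / with circulant preconditioning:", it_plain, it_prec)
```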

20.
We discuss a class of deflated block Krylov subspace methods for solving large scale matrix eigenvalue problems. The efficiency of an Arnoldi-type method is examined in computing partial or closely clustered eigenvalues of large matrices. As an improvement, we also propose a refined variant of the Arnoldi-type method. Comparisons show that the refined variant can further improve the Arnoldi-type method and both methods exhibit very regular convergence behavior.
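For reference, a plain (undeflated, unrefined) block Arnoldi procedure that extracts Ritz values from a block Krylov subspace can be written in a few lines; the test matrix, block size and step count below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import qr, eig

def block_arnoldi_ritz(A, V0, steps):
    """Plain block Arnoldi: build an orthonormal basis of the block Krylov
    subspace spanned by V0, A V0, ..., A^{steps-1} V0 and return the Ritz
    values of A on that subspace (no deflation or refinement)."""
    V, _ = qr(V0, mode='economic')
    blocks = [V]
    for _ in range(steps - 1):
        W = A @ blocks[-1]
        for Vj in blocks:                        # block modified Gram-Schmidt
            W -= Vj @ (Vj.T @ W)
        Vnext, _ = qr(W, mode='economic')
        blocks.append(Vnext)
    Q = np.hstack(blocks)
    ritz, _ = eig(Q.T @ A @ Q)
    return np.sort(ritz.real)[::-1]

rng = np.random.default_rng(9)
n, p = 500, 4
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.01 * rng.standard_normal((n, n))
ritz = block_arnoldi_ritz(A, rng.standard_normal((n, p)), steps=10)
print(ritz[:6])        # the largest Ritz values approximate the largest eigenvalues of A
```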
