Similar Articles
20 similar articles found (search time: 15 ms)
1.
We propose a numerical method for solving large-scale differential symmetric Stein equations having a low-rank constant right-hand side. Our approach projects the given problem onto a Krylov subspace, solves the resulting low-dimensional matrix problem with an integration method, and then builds an approximate solution of the original problem from the computed low-rank solution. Using the extended block Arnoldi process and the backward differentiation formula (BDF), we give expressions for the approximate solution and the corresponding residual. Some numerical results are given to show the efficiency of the proposed method.
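As a concrete, heavily simplified illustration of the integration side of this approach, the sketch below takes one assumed form of a differential symmetric Stein equation, dX/dt = A X Aᵀ − X + B Bᵀ, and advances it with a few BDF1 (backward Euler) steps on a small dense problem via Kronecker vectorization; the Krylov projection and low-rank machinery of the paper are deliberately omitted, and all names, sizes, and the model form itself are assumptions for a toy example.

```python
import numpy as np

# Minimal sketch: BDF1 (backward Euler) steps for an assumed differential
# symmetric Stein equation dX/dt = A X A^T - X + B B^T, X(0) = 0, solved densely
# through Kronecker vectorization. A, B, the step size h, and the step count are
# illustrative; no Krylov projection or low-rank factoring is attempted here.
rng = np.random.default_rng(0)
n, p, h = 30, 2, 0.01
A = rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, p))
X = np.zeros((n, n))                          # initial value X(0) = 0

# BDF1 step: (1 + h) X_{k+1} - h A X_{k+1} A^T = X_k + h B B^T,
# vectorized with vec(A X A^T) = (A kron A) vec(X) (column-major vec).
M = (1.0 + h) * np.eye(n * n) - h * np.kron(A, A)
for _ in range(10):
    rhs = (X + h * B @ B.T).reshape(-1, order="F")
    X = np.linalg.solve(M, rhs).reshape((n, n), order="F")

print("symmetry check (should be ~0):", np.linalg.norm(X - X.T))
```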

2.
Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
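The "compression step" mentioned in this abstract can be pictured with a few lines of linear algebra: a residual kept in factored form R ≈ Z Zᵀ has its factor truncated by an SVD so the stored rank stays bounded. The sketch below illustrates only that generic idea; the tolerance, the factor Z, and the helper name are assumptions, not the authors' implementation.

```python
import numpy as np

# Generic low-rank compression of a factored Gramian-like residual R ~= Z Z^T:
# keep only the singular directions above a relative tolerance. Hypothetical
# helper, illustrative data.
def compress_factor(Z, tol=1e-10):
    """Return a thinner factor Zc with Zc @ Zc.T ~= Z @ Z.T up to tol."""
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k]                    # columns scaled by kept singular values

rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 40))   # rank <= 5 factor
Zc = compress_factor(Z)
print("compressed factor shape:", Zc.shape,
      "  Gramian error:", np.linalg.norm(Z @ Z.T - Zc @ Zc.T))
```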

3.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX − XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds on the norm of the error. Numerical experiments are also given to show the effectiveness of these block methods.
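A minimal Galerkin-projection sketch in the spirit of the first case above (A large, B small) is given below: an orthonormal basis of the block Krylov subspace built from A and C is assembled, the projected small Sylvester equation is solved with scipy, and the residual of the lifted solution is checked. The matrices, sizes, and the simple orthogonalization loop are assumptions for a toy example, not the authors' block Arnoldi or block Lanczos implementations.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy Galerkin projection for A X - X B = C with A large, B small:
# build an orthonormal basis V of span{C, AC, ..., A^{m-1}C}, solve the
# projected equation (V^T A V) Y - Y B = V^T C, and lift X ~ V Y.
rng = np.random.default_rng(2)
n, p, m = 400, 3, 20
A = np.diag(np.linspace(1.0, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
B = -np.diag(np.linspace(1.0, 2.0, p))        # small matrix, spectrum separated from A's
C = rng.standard_normal((n, p))

V = np.linalg.qr(C)[0]
for _ in range(m - 1):                        # simple block-Krylov basis build
    W = A @ V[:, -p:]
    W -= V @ (V.T @ W)                        # orthogonalize against the current basis
    W -= V @ (V.T @ W)                        # reorthogonalize for stability
    V = np.hstack([V, np.linalg.qr(W)[0]])

H = V.T @ A @ V                               # projected coefficient matrix
Y = solve_sylvester(H, -B, V.T @ C)           # solves H Y + Y (-B) = V^T C
X = V @ Y
print("relative residual:",
      np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))
```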

4.
Preconditioned Krylov subspace (KSP) methods are widely used for solving large-scale sparse linear systems arising from numerical solutions of partial differential equations (PDEs). These linear systems are often nonsymmetric due to the nature of the PDEs, boundary or jump conditions, or discretization methods. While implementations of preconditioned KSP methods are usually readily available, it is unclear to users which methods are the best for different classes of problems. In this work, we present a comparison of some KSP methods, including GMRES, TFQMR, BiCGSTAB, and QMRCGSTAB, coupled with three classes of preconditioners, namely, Gauss-Seidel, incomplete LU factorization (including ILUT, ILUTP, and multilevel ILU), and algebraic multigrid (including BoomerAMG and ML). Theoretically, we compare the mathematical formulations and operation counts of these methods. Empirically, we compare the convergence and serial performance for a range of benchmark problems from numerical PDEs in two and three dimensions with up to millions of unknowns and also assess the asymptotic complexity of the methods as the number of unknowns increases. Our results show that GMRES tends to deliver better performance when coupled with an effective multigrid preconditioner, but it is less competitive with an ineffective preconditioner due to restarts. BoomerAMG with a proper choice of coarsening and interpolation techniques typically converges faster than ML, but both may fail for ill-conditioned or saddle-point problems, whereas multilevel ILU tends to succeed. We also show that right preconditioning is generally more desirable than left preconditioning. This study helps establish some practical guidelines for choosing preconditioned KSP methods and motivates the development of more effective preconditioners.
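A small runnable version of the kind of comparison described above can be set up with scipy: a nonsymmetric 2-D convection-diffusion matrix is solved with GMRES and BiCGSTAB, with and without an incomplete-LU preconditioner (scipy's spilu, not the ILUT/multilevel ILU, BoomerAMG, or ML packages compared in the paper). The discretization, drop tolerance, and solver settings are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric 2-D convection-diffusion test matrix on a 64 x 64 grid.
n = 64
I1 = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))      # 1-D diffusion stencil
D = sp.diags([-1, 1], [-1, 1], shape=(n, n))             # 1-D convection stencil
A = (sp.kron(I1, T) + sp.kron(T, I1) + 0.5 * sp.kron(I1, D)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU preconditioner wrapped as a LinearOperator.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M_ilu = spla.LinearOperator(A.shape, ilu.solve)

for name, solver in [("GMRES", spla.gmres), ("BiCGSTAB", spla.bicgstab)]:
    for M, label in [(None, "no preconditioner"), (M_ilu, "ILU preconditioner")]:
        x, info = solver(A, b, M=M, maxiter=500)
        print(f"{name:9s}| {label:18s}| info={info:5d}"
              f"| residual={np.linalg.norm(b - A @ x):.2e}")
```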

5.
A non-linear structure preserving matrix method for the computation of a structured low rank approximation of the Sylvester resultant matrix S(f,g) of two inexact polynomials f = f(y) and g = g(y) is considered in this paper. It is shown that considerably improved results are obtained when f(y) and g(y) are preprocessed before this approximation is computed, and that these preprocessing operations introduce two parameters. These parameters can either be held constant during the computation of the approximation, which leads to a linear structure preserving matrix method, or they can be incremented during the computation, which leads to a non-linear structure preserving matrix method. It is shown that the non-linear method yields a better structured low rank approximation of S(f,g), and that the assignment of f(y) and g(y) is important because the computed matrix may be a good structured low rank approximation of S(f,g) but a poor structured low rank approximation of S(g,f), since the numerical rank of the latter is not defined. Examples that illustrate the differences between the linear and non-linear structure preserving matrix methods, and the importance of the assignment of f(y) and g(y), are given.
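To make the central object of this abstract concrete, the sketch below assembles the Sylvester resultant matrix S(f, g) from two coefficient vectors and looks at its smallest singular value before and after a tiny coefficient perturbation: with an exact common root S(f, g) is singular, whereas inexact coefficients only make it numerically close to singular, which is the situation structured low rank approximation methods address. The polynomials, the noise level, and the helper name are illustrative assumptions.

```python
import numpy as np

def sylvester_matrix(f, g):
    """Sylvester resultant matrix of polynomials f, g (coefficients, highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                    # n shifted rows holding f's coefficients
        S[i, i:i + m + 1] = f
    for i in range(m):                    # m shifted rows holding g's coefficients
        S[n + i, i:i + n + 1] = g
    return S

f = np.poly([1.0, 2.0, 3.0])              # roots 1, 2, 3
g = np.poly([3.0, 5.0])                   # shares the root 3 with f, so S(f, g) is singular
noise = 1e-6 * np.random.default_rng(3).standard_normal(f.shape)

s_exact = np.linalg.svd(sylvester_matrix(f, g), compute_uv=False)
s_noisy = np.linalg.svd(sylvester_matrix(f + noise, g), compute_uv=False)
print("smallest singular value, exact coefficients  :", s_exact[-1])
print("smallest singular value, inexact coefficients:", s_noisy[-1])
```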

6.
This paper reports on improvements to recent work on the computation of a structured low rank approximation of the Sylvester resultant matrix S(f,g) of two inexact polynomials f = f(y) and g = g(y). Specifically, it has been shown in previous work that these polynomials must be processed before a structured low rank approximation of S(f,g) is computed. The existing algorithm may still, however, yield a structured low rank approximation of S(f,g), but not a structured low rank approximation of S(g,f), which is unsatisfactory. Moreover, a structured low rank approximation of S(f,g) must be equal to, apart from permutations of its columns, a structured low rank approximation of S(g,f), but the existing algorithm does not guarantee the satisfaction of this condition. This paper addresses these issues by modifying the existing algorithm, such that these deficiencies are overcome. Examples that illustrate these improvements are shown.

7.
A squared Smith type algorithm for solving large-scale discrete-time Stein equations is developed. The algorithm uses restarted Krylov spaces to compute approximations of the squared Smith iterations in low-rank factored form. Fast convergence results when very few iterations of the alternating direction implicit method are applied to the Stein equation beforehand. The convergence of the algorithm is discussed and its performance is demonstrated by several test examples. Copyright © 2013 John Wiley & Sons, Ltd.
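A dense, small-scale sketch of the squared Smith iteration itself (without the restarted Krylov spaces, low-rank factors, or the ADI preprocessing the paper combines it with) is shown below for the discrete-time Stein equation X − A X Aᵀ = C with the spectral radius of A below one. The data and the spectral-radius scaling are assumptions for the toy example.

```python
import numpy as np

# Squared Smith iteration for X - A X A^T = C (rho(A) < 1 assumed):
# X_{k+1} = X_k + A_k X_k A_k^T with A_{k+1} = A_k^2, X_0 = C, A_0 = A,
# which doubles the number of summed Smith terms at every pass.
rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((n, n))
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))    # rescale so rho(A) = 0.5
C = rng.standard_normal((n, 2))
C = C @ C.T                                        # symmetric low-rank right-hand side

X, Ak = C.copy(), A.copy()
for _ in range(6):
    X = X + Ak @ X @ Ak.T
    Ak = Ak @ Ak

print("Stein residual norm:", np.linalg.norm(X - A @ X @ A.T - C))
```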

8.
In this paper, we consider large-scale nonsymmetric differential matrix Riccati equations with low-rank right-hand sides. These matrix equations appear in many applications such as control theory, transport theory, applied probability, and others. We show how to apply Krylov-type methods such as the extended block Arnoldi algorithm to get low-rank approximate solutions. The initial problem is projected onto small subspaces to get low dimensional nonsymmetric differential equations that are solved using the exponential approximation or via other integration schemes such as backward differentiation formula (BDF) or Rosenbrock method. We also show how these techniques can be easily used to solve some problems from the well-known transport equation. Some numerical examples are given to illustrate the application of the proposed methods to large-scale problems.

9.
In this paper, we consider an explicit solution of a system of Sylvester matrix equations of the form A1V1 − E1V1F1 = B1W1 and A2V2 − E2V2F2 = B2W2 with F1 and F2 being arbitrary matrices, where V1, W1, V2, and W2 are the matrices to be determined. First, the matrix polynomial of a block matrix, the Sylvester sum, and the Kronecker product of block matrices are defined. Some definitions, lemmas, and theorems that are needed to propose our method are stated and proved. Numerical test problems are solved to illustrate the suggested technique.
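For one equation of the pair above, a brute-force Kronecker-product solve makes the structure explicit: choose the free matrix W1, vectorize A1 V1 − E1 V1 F1 = B1 W1 using vec(E V F) = (Fᵀ ⊗ E) vec(V), and solve the resulting linear system for V1. The sizes and random data below are illustrative; the paper develops explicit closed-form solutions rather than this numerical shortcut.

```python
import numpy as np

# Solve A1 V1 - E1 V1 F1 = B1 W1 for V1 by vectorization (toy sizes).
rng = np.random.default_rng(5)
n, q, p = 8, 3, 2
A1 = rng.standard_normal((n, n))
E1 = np.eye(n)
F1 = rng.standard_normal((q, q))
B1 = rng.standard_normal((n, p))
W1 = rng.standard_normal((p, q))                   # freely chosen parameter matrix

K = np.kron(np.eye(q), A1) - np.kron(F1.T, E1)     # vec(E V F) = (F^T kron E) vec(V)
v = np.linalg.solve(K, (B1 @ W1).reshape(-1, order="F"))
V1 = v.reshape((n, q), order="F")
print("residual norm:", np.linalg.norm(A1 @ V1 - E1 @ V1 @ F1 - B1 @ W1))
```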

10.
This paper deals with studying some well-known iterative methods in their tensor forms to solve a Sylvester tensor equation. More precisely, the tensor forms of the Arnoldi process and the full orthogonalization method are derived by using a product between two tensors. Then tensor forms of the conjugate gradient and nested conjugate gradient algorithms are also presented. A rough estimate of the number of operations required by the tensor form of the Arnoldi process is obtained, which reveals the advantage of handling the algorithms in tensor format over their classical forms in general. Some numerical experiments are examined, which confirm the feasibility and applicability of the proposed algorithms in practice. Copyright © 2016 John Wiley & Sons, Ltd.
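A matrix-free flavour of such tensor-format Krylov solvers can be sketched with plain conjugate gradients on the vectorized operator of the Sylvester tensor equation X ×1 A + X ×2 B + X ×3 C = D. The sketch assumes A, B, C symmetric positive definite so that CG applies (the paper's Arnoldi/FOM-type variants do not need this), and all data and sizes are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Conjugate gradients on the vectorized Sylvester tensor operator
#   L(X) = X x1 A + X x2 B + X x3 C,
# applied mode-wise with einsum so the full Kronecker matrix is never formed.
# A, B, C are taken symmetric positive definite here so that CG is applicable.
rng = np.random.default_rng(6)
n1, n2, n3 = 10, 12, 14

def spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B, C = spd(n1), spd(n2), spd(n3)
D = rng.standard_normal((n1, n2, n3))

def apply_op(v):
    X = v.reshape(n1, n2, n3)
    Y = (np.einsum("il,ljk->ijk", A, X)      # X x1 A
         + np.einsum("jl,ilk->ijk", B, X)    # X x2 B
         + np.einsum("kl,ijl->ijk", C, X))   # X x3 C
    return Y.ravel()

L = LinearOperator((D.size, D.size), matvec=apply_op)
x, info = cg(L, D.ravel())
print("cg info:", info, "  relative residual:",
      np.linalg.norm(apply_op(x) - D.ravel()) / np.linalg.norm(D))
```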

11.
It is well known that the least-squares QR-factorization (LSQR) algorithm is a powerful method for solving linear systems Ax = b and the unconstrained least-squares problem min_x ∥Ax − b∥. In this paper, the LSQR approach is developed to obtain iterative algorithms for solving the generalized Sylvester-transpose matrix equation, the corresponding minimum Frobenius norm residual problem, and the periodic Sylvester matrix equation. Numerical results are given to illustrate the effectiveness of the proposed algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
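As a reminder of what LSQR itself does (scipy's implementation, not the matrix-form algorithms this abstract develops for the Sylvester-transpose and periodic equations), the snippet below solves a sparse overdetermined least-squares problem min_x ∥Ax − b∥ and reports the recovery error. Problem sizes, density, and tolerances are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Plain LSQR on a sparse least-squares problem min_x ||A x - b||.
rng = np.random.default_rng(7)
A = sp.random(2000, 300, density=0.01, random_state=7, format="csr")
x_true = rng.standard_normal(300)
b = A @ x_true + 1e-8 * rng.standard_normal(2000)   # nearly consistent data

x, istop, itn = lsqr(A, b, atol=1e-12, btol=1e-12)[:3]
print("stop flag:", istop, " iterations:", itn,
      " relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```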

12.
In this work, we propose a new method, termed R-CORK, for the numerical solution of large-scale rational eigenvalue problems, which is based on a linearization and on a compact decomposition of the rational Krylov subspaces corresponding to this linearization. R-CORK is an extension of the compact rational Krylov method (CORK) introduced very recently in the literature to solve a family of nonlinear eigenvalue problems that can be expressed and linearized in certain particular ways and which include arbitrary polynomial eigenvalue problems, but not arbitrary rational eigenvalue problems. The R-CORK method exploits the structure of the linearized problem by representing the Krylov vectors in a compact form in order to reduce the cost of storage, resulting in a method with two levels of orthogonalization. The first level of orthogonalization works with vectors of the same size as the original problem, and the second level works with vectors of size much smaller than the original problem. Since vectors of the size of the linearization are never stored or orthogonalized, R-CORK is more efficient from the point of view of memory and orthogonalization than the classical rational Krylov method applied directly to the linearization. Since the R-CORK method is based on a classical rational Krylov method, it is also possible to implement implicit restarting, and we show how to do so in a memory-efficient way. Finally, some numerical examples are included in order to show that the R-CORK method performs satisfactorily in practice.

13.
We consider the Sylvester equation AX − XB + C = 0, where C is an n×m matrix of low rank and the spectra of the n×n matrix A and the m×m matrix B are separated by a line. We prove that the singular values of the solution X decay exponentially; that means for any ε ∈ (0,1) there exists a matrix Xε of rank k = O(log(1/ε)) such that ∥X − Xε∥2 ≤ ε∥X∥2. As a generalization we prove that if A, B, C are hierarchical matrices then the solution X can be approximated by the hierarchical matrix format described in Hackbusch (Computing 2000; 62: 89–108). The blockwise rank of the approximation is again proportional to log(1/ε). Copyright © 2004 John Wiley & Sons, Ltd.
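The decay statement above is easy to check numerically on a small dense example: solve A X − X B = −C with a rank-one C and spectra of A and B on opposite sides of the imaginary axis, then inspect the singular values of X. All data below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Numerical check of the singular value decay of the solution of A X - X B + C = 0.
rng = np.random.default_rng(10)
n, m = 120, 100
A = -np.diag(np.linspace(1.0, 10.0, n)) + 1e-3 * rng.standard_normal((n, n))  # Re(spec) < 0
B = np.diag(np.linspace(1.0, 10.0, m)) + 1e-3 * rng.standard_normal((m, m))   # Re(spec) > 0
C = np.outer(rng.standard_normal(n), rng.standard_normal(m))                  # rank one

X = solve_sylvester(A, -B, -C)      # scipy solves A X + X(-B) = -C, i.e. A X - X B + C = 0
s = np.linalg.svd(X, compute_uv=False)
print("first normalized singular values:",
      np.array2string(s[:12] / s[0], precision=2))
```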

14.
A Convergence Analysis of GMRES and FOM Methods for Sylvester Equations   (cited 3 times; 0 self-citations, 3 by others)
We discuss convergence properties of the GMRES and FOM methods for solving large Sylvester equations of the form AX − XB = C. In particular we show the importance of the separation between the fields of values of A and B on the convergence behavior of GMRES. We also discuss the stagnation phenomenon in GMRES and its consequence on FOM. We generalize the issue of breakdown in the block-Arnoldi algorithm and explain its consequence on FOM and GMRES methods. Several numerical tests illustrate the theoretical results.
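One plain way to run GMRES on a Sylvester equation, useful for experimenting with the kind of behaviour analyzed above, is to feed its vectorized form to scipy through a matrix-free LinearOperator. The example keeps the fields of values of A and B on opposite sides of the imaginary axis; sizes, data, and solver settings are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# GMRES on the vectorized Sylvester equation A X - X B = C (matrix-free).
rng = np.random.default_rng(11)
n, m = 60, 40
A = -(np.diag(np.linspace(1.0, 4.0, n)) + 0.01 * rng.standard_normal((n, n)))
B = np.diag(np.linspace(1.0, 4.0, m)) + 0.01 * rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

def matvec(v):
    X = v.reshape((n, m), order="F")
    return (A @ X - X @ B).ravel(order="F")

L = LinearOperator((n * m, n * m), matvec=matvec)
x, info = gmres(L, C.ravel(order="F"), restart=50, maxiter=200)
X = x.reshape((n, m), order="F")
print("gmres info:", info, "  relative residual:",
      np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))
```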

15.
An iterative method is proposed to solve generalized coupled Sylvester matrix equations, based on a matrix form of the least-squares QR-factorization (LSQR) algorithm. With this iterative method, by selecting special initial matrices we can obtain the minimum Frobenius norm solutions, or the minimum Frobenius norm least-squares solutions, over some constrained matrices, such as symmetric, generalized bisymmetric and (R,S)-symmetric matrices. Meanwhile, the optimal approximate solutions to the given matrices can be derived by solving the corresponding new generalized coupled Sylvester matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the present method.

16.
Implicit Runge-Kutta methods are among the most popular methods for the solution of stiff ordinary differential equations. For large problems, the main drawback of such methods is the cost required at each integration step for computing the solution of a nonlinear system of equations. In this paper, we propose to reduce the cost of the computation by transforming the linear systems arising in the application of Newton's method into Stein matrix equations. We propose an iterative projection method onto block Krylov subspaces for solving such Stein matrix equations numerically. Numerical examples are given to illustrate the performance of our proposed method.

17.
In this paper, a modified gradient-based algorithm for solving Sylvester equations is presented. Different from the gradient-based method introduced by Ding and Chen [7] and the relaxed gradient-based algorithm proposed by Niu et al. [18], the information generated in the first half-iterative step is fully exploited and used to construct the approximate solution. Theoretical analysis shows that the new method converges under certain assumptions. Numerical results are given to verify the efficiency of the new method.
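The family of gradient-based iterations referred to here can be pictured by the basic steepest-descent update on ∥A X + X B − C∥_F² shown below; it is a generic sketch with a conservative step size, not the modified algorithm of this paper nor the exact schemes of [7] or [18]. All data are illustrative.

```python
import numpy as np

# Steepest descent on f(X) = ||A X + X B - C||_F^2:
#   R = C - A X - X B,   X <- X + mu (A^T R + R B^T).
rng = np.random.default_rng(12)
n = 40
A = np.diag(np.linspace(1.0, 3.0, n)) + 0.01 * rng.standard_normal((n, n))
B = np.diag(np.linspace(1.0, 3.0, n)) + 0.01 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

mu = 1.0 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2   # conservative step size
X = np.zeros_like(C)
for _ in range(2000):
    R = C - A @ X - X @ B
    X = X + mu * (A.T @ R + R @ B.T)

print("relative residual:", np.linalg.norm(C - A @ X - X @ B) / np.linalg.norm(C))
```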

18.
In this paper, we present block Arnoldi-based methods for the computation of low rank approximate solutions of large discrete-time algebraic Riccati equations (DARE). The proposed methods are projection methods onto block or extended block Krylov subspaces. We give new upper bounds for the norm of the error obtained by applying these block Arnoldi-based processes. We also introduce the Newton method combined with the block Arnoldi algorithm and present some numerical experiments with comparisons between these methods.
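For small problems the DARE can be solved densely with scipy, which gives a baseline against which low-rank Krylov approximations like those above are usually compared. The residual identity checked below assumes the standard DARE AᵀXA − X − AᵀXB(R + BᵀXB)⁻¹BᵀXA + Q = 0, and the data are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Dense solve of a small discrete-time algebraic Riccati equation and residual check.
rng = np.random.default_rng(13)
n, m = 30, 2
A = 0.95 * rng.standard_normal((n, n)) / np.sqrt(n)     # spectral radius roughly below 1
B = rng.standard_normal((n, m))
Q = np.eye(n)
R = np.eye(m)

X = solve_discrete_are(A, B, Q, R)
res = (A.T @ X @ A - X
       - A.T @ X @ B @ np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A) + Q)
print("DARE residual norm:", np.linalg.norm(res))
```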

19.
The regularizing properties of the Golub-Kahan bidiagonalization algorithm are powerful when the associated Krylov subspace captures the dominating components of the solution. In some applications the regularized solution can be further improved by enrichment, that is, by augmenting the Krylov subspace with a low-dimensional subspace that represents specific prior information. Inspired by earlier work on GMRES, we demonstrate how to carry these ideas over to the bidiagonalization algorithm, and we describe how to incorporate Tikhonov regularization. This leads to a hybrid iterative method where the choice of regularization parameter in each iteration also provides a stopping rule.
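scipy's LSQR is itself a Golub-Kahan bidiagonalization code, and its damp argument adds Tikhonov regularization, min ∥Ax − b∥² + damp²∥x∥², which gives a cheap way to experiment with the regularization side of the hybrid idea above (the subspace enrichment with prior information is not reproduced). The Gaussian "blur" test matrix, noise level, and damping values are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Tikhonov-regularized LSQR on a severely ill-conditioned test problem:
# compare reconstruction errors over a few damping parameters.
rng = np.random.default_rng(9)
n = 200
t = np.arange(n)
A = np.exp(-((t[:, None] - t[None, :]) / 5.0) ** 2)     # Gaussian "blur" matrix
x_true = np.sin(2 * np.pi * t / n)
b_exact = A @ x_true
b = b_exact + 1e-2 * np.linalg.norm(b_exact) / np.sqrt(n) * rng.standard_normal(n)

for damp in [0.0, 1e-3, 1e-2, 1e-1]:
    x = lsqr(A, b, damp=damp, atol=1e-10, btol=1e-10)[0]
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"damp = {damp:<6g}  relative error = {err:.3e}")
```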

20.
The problem of finding interior eigenvalues of a large nonsymmetric matrix is examined. A procedure for extracting approximate eigenpairs from a subspace is discussed. It is related to the Rayleigh-Ritz procedure, but is designed for finding interior eigenvalues. Harmonic Ritz values and other approximate eigenvalues are generated. This procedure can be applied to the Arnoldi method, to preconditioning methods, and to other methods for nonsymmetric eigenvalue problems that use the Rayleigh-Ritz procedure. The subject of estimating the boundary of the entire spectrum is briefly discussed, and the importance of preconditioning for interior eigenvalue problems is mentioned. © 1998 John Wiley & Sons, Ltd.
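A compact sketch of harmonic Rayleigh-Ritz extraction with a target σ is given below: for W = (A − σI)V the harmonic Ritz values are σ + μ, where (WᵀW)z = μ(WᵀV)z. To make the behaviour easy to verify, the search space deliberately contains one exact eigenvector whose eigenvalue lies near the target, so that eigenvalue should reappear among the harmonic Ritz values. The test matrix, target, and subspace are illustrative; in practice V would come from Arnoldi or a preconditioned iteration as discussed in the abstract.

```python
import numpy as np
from scipy.linalg import eigvals, qr

# Harmonic Rayleigh-Ritz extraction for interior eigenvalues near a target sigma.
rng = np.random.default_rng(14)
n, k, sigma = 200, 12, 5.0
A = np.diag(np.linspace(0.0, 10.0, n)) + 1e-4 * rng.standard_normal((n, n))

evals, evecs = np.linalg.eig(A)
j = np.argmin(np.abs(evals - sigma))                 # eigenpair closest to the target
lam, y = np.real(evals[j]), np.real(evecs[:, j])     # real for this mildly perturbed matrix

# Search space: the chosen eigenvector plus random directions, orthonormalized.
V, _ = qr(np.column_stack([y, rng.standard_normal((n, k - 1))]), mode="economic")

W = (A - sigma * np.eye(n)) @ V
mu = eigvals(W.T @ W, W.T @ V)                       # generalized eigenvalues (theta - sigma)
harmonic = sigma + mu
print("eigenvalue nearest the target     :", lam)
print("closest harmonic Ritz value found :", harmonic[np.argmin(np.abs(harmonic - lam))])
```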

