Similar literature: 20 similar documents found.
1.
In the present work, we propose a numerical method for computing approximate solutions of large continuous-time algebraic Riccati equations. The proposed method is a projection method onto a matrix Krylov subspace, and we use a matrix Arnoldi process to construct an orthonormal basis. We give some theoretical results and numerical experiments for large problems.
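
As a concrete illustration of this kind of Galerkin projection, the following minimal sketch builds a block Krylov basis with a simple Arnoldi-style process, solves the projected small-scale Riccati equation with SciPy's dense solver, and lifts the result back. The subspace choice K_m(A^T, C^T), the test matrices, and the helper names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def block_arnoldi(A, V1, m):
    """Orthonormal basis of the block Krylov subspace K_m(A, V1) (no deflation handling)."""
    Q, _ = np.linalg.qr(V1)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qj in blocks:                      # block modified Gram-Schmidt
            W -= Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

def care_galerkin(A, B, C, m):
    """Sketch of a projection method for A'X + XA - XBB'X + C'C = 0 (illustrative only)."""
    V = block_arnoldi(A.T, C.T, m)             # assumed subspace choice: K_m(A^T, C^T)
    Am, Bm, Cm = V.T @ A @ V, V.T @ B, C @ V
    Y = solve_continuous_are(Am, Bm, Cm.T @ Cm, np.eye(B.shape[1]))   # reduced CARE
    return V, Y                                # X is approximated by V @ Y @ V.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 400
    A = -2.0 * np.eye(n) + 0.02 * rng.standard_normal((n, n))   # stable test matrix
    B = rng.standard_normal((n, 2))
    C = rng.standard_normal((1, n))
    V, Y = care_galerkin(A, B, C, m=20)
    X = V @ Y @ V.T
    res = A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C
    print("Riccati residual norm:", np.linalg.norm(res))
```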

2.
In the present paper, we present block Arnoldi-based methods for the computation of low-rank approximate solutions of large discrete-time algebraic Riccati equations (DARE). The proposed methods are projection methods onto block or extended block Krylov subspaces. We give new upper bounds for the norm of the error obtained by applying these block Arnoldi-based processes. We also introduce the Newton method combined with the block Arnoldi algorithm and present some numerical experiments with comparisons between these methods.
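
A small companion illustration of the projection step for the DARE case: the reduced equation is solved with SciPy's dense DARE solver and the result is truncated to a low-rank factor. The orthonormal matrix V is only a stand-in for a block (or extended block) Krylov basis, and all matrices and tolerances are assumptions made for the illustration.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)
n, p, m = 300, 2, 24
A = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)    # spectral radius well below one
B = rng.standard_normal((n, p))
C = rng.standard_normal((1, n))

V, _ = np.linalg.qr(rng.standard_normal((n, m)))       # stand-in for a block Krylov basis
Am, Bm = V.T @ A @ V, V.T @ B
Qm = (C @ V).T @ (C @ V)
Y = solve_discrete_are(Am, Bm, Qm, np.eye(p))          # reduced DARE on the subspace

w, U = np.linalg.eigh(Y)                               # truncate to a low-rank factor Z Z^T
keep = w > 1e-12 * max(w.max(), 1.0)
Z = V @ (U[:, keep] * np.sqrt(w[keep]))
print("low-rank factor has", Z.shape[1], "columns; X is approximated by Z @ Z.T")
```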

4.
We propose a numerical method for solving large-scale differential symmetric Stein equations having a low-rank constant term in the right-hand side. Our approach is based on projecting the given problem onto a Krylov subspace, solving the resulting low-dimensional matrix problem with an integration method, and then building the solution of the original problem from the obtained low-rank approximate solution. Using the extended block Arnoldi process and the backward differentiation formula (BDF), we give expressions for the approximate solution and the corresponding residual. Some numerical results are given to show the efficiency of the proposed method.

5.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX - XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we will show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds of the norm of the error. Numerical experiments will also be given to show the effectiveness of these block methods.
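
To make the "A large, B small" case concrete, here is a minimal Galerkin sketch: an orthonormal basis of the block Krylov subspace K_m(A, C) is built, the reduced Sylvester equation is solved with scipy.linalg.solve_sylvester, and the approximation X = V Y is formed. The test matrices, the subspace dimension m, and the helper name arnoldi_basis are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def arnoldi_basis(A, V1, m):
    """Orthonormal basis of the block Krylov subspace K_m(A, V1) (sketch, no deflation)."""
    Q, _ = np.linalg.qr(V1)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qj in blocks:
            W -= Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

rng = np.random.default_rng(2)
n, p = 1000, 3                                      # A is n x n (large), B is p x p (small)
A = -np.eye(n) + 0.02 * rng.standard_normal((n, n))
B = 2.0 * np.eye(p) + 0.1 * rng.standard_normal((p, p))   # spectrum well separated from A's
C = rng.standard_normal((n, p))

V = arnoldi_basis(A, C, m=30)
Am, Cm = V.T @ A @ V, V.T @ C
Y = solve_sylvester(Am, -B, Cm)                     # reduced equation  Am Y - Y B = V^T C
X = V @ Y                                           # Galerkin approximation to  A X - X B = C
print("relative residual:", np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))
```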

6.
7.
Tikhonov regularization is a popular method for the solution of linear discrete ill-posed problems with error-contaminated data. Nonstationary iterated Tikhonov regularization is known to be able to determine approximate solutions of higher quality than standard Tikhonov regularization. We investigate the choice of solution subspace in iterative methods for nonstationary iterated Tikhonov regularization of large-scale problems. Generalized Krylov subspaces are compared with Krylov subspaces that are generated by Golub–Kahan bidiagonalization and the Arnoldi process. Numerical examples illustrate the effectiveness of the methods.
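
For readers unfamiliar with the underlying iteration, the following small dense example shows nonstationary iterated Tikhonov on a toy discrete ill-posed problem, with a geometrically decreasing sequence of regularization parameters. It deliberately ignores the paper's actual subject, the choice of (generalized) Krylov solution subspace for large problems; the problem data and parameters are assumptions.

```python
import numpy as np

def nonstationary_iterated_tikhonov(A, b, alphas, x0=None):
    """x_{k+1} = x_k + (A^T A + alpha_k I)^{-1} A^T (b - A x_k)  with decreasing alpha_k."""
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    AtA = A.T @ A
    for alpha in alphas:
        x = x + np.linalg.solve(AtA + alpha * np.eye(n), A.T @ (b - A @ x))
    return x

# Toy discrete ill-posed problem: crude quadrature of an integration operator, noisy data.
rng = np.random.default_rng(3)
n = 200
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
t = np.linspace(0, 1, n)
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)      # error-contaminated data

alphas = 0.1 * (0.5 ** np.arange(10))               # assumed geometric parameter sequence
x = nonstationary_iterated_tikhonov(A, b, alphas)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```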

8.
Generalized block Lanczos methods for large unsymmetric eigenproblems, which contain the block Arnoldi method, are presented, and block Arnoldi algorithms are developed. The convergence of this class of methods is analyzed when the matrix A is diagonalizable. Upper bounds for the distances between normalized eigenvectors and a block Krylov subspace are derived, and a priori theoretical error bounds for Ritz elements are established. Compared with generalized Lanczos methods, which contain Arnoldi's method, the convergence analysis shows that the block versions have two advantages: first, they may be efficient for computing clustered eigenvalues; second, they are able to solve multiple eigenproblems. However, a deeper analysis reveals that the approximate eigenvectors (Ritz vectors) obtained by general orthogonal projection methods, including the generalized block methods, may fail to converge theoretically for a general unsymmetric matrix A even when the corresponding approximate eigenvalues (Ritz values) do, because the convergence of Ritz vectors requires stronger conditions, which may be impossible to satisfy theoretically, than the convergence of Ritz values does. The issues of how to restart and how to solve multiple eigenproblems are addressed, and some numerical examples are reported to confirm the theoretical analysis.
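
A bare-bones block Arnoldi eigensolver along these lines, using a block of starting vectors so that a small cluster of dominant eigenvalues can be captured together. The restart strategy and the error bounds discussed in the abstract are not reproduced, and the matrix sizes, block size, and planted eigenvalue cluster are illustrative assumptions.

```python
import numpy as np

def block_arnoldi_eig(A, V1, m, k):
    """Block Arnoldi sketch: k Ritz pairs of A from the block Krylov subspace K_m(A, V1).
    A block size > 1 is what helps with clustered or multiple eigenvalues."""
    Q, _ = np.linalg.qr(V1)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qj in blocks:                      # block modified Gram-Schmidt
            W -= Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    V = np.hstack(blocks)
    H = V.T @ A @ V                            # projected (block upper Hessenberg) matrix
    theta, Y = np.linalg.eig(H)
    idx = np.argsort(-np.abs(theta))[:k]       # Ritz values of largest magnitude
    return theta[idx], V @ Y[:, idx]           # Ritz values and Ritz vectors

rng = np.random.default_rng(4)
n = 500
A = rng.standard_normal((n, n)) / np.sqrt(n)
A[0, 0] = 3.0                                  # plant a small cluster of dominant eigenvalues
A[1, 1] = 3.0
theta, U = block_arnoldi_eig(A, rng.standard_normal((n, 2)), m=25, k=2)
for t, u in zip(theta, U.T):
    print(abs(t), np.linalg.norm(A @ u - t * u) / np.linalg.norm(u))
```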

9.
We consider the task of computing solutions of linear systems that only differ by a shift with the identity matrix as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed which exploit either the need for solutions to multiple right-hand sides (e.g. deflation type methods and block methods) or multiple shifts (e.g. shifted CG) with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features—shifts and multiple right-hand sides—at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior compared to applying other iterative methods to each of the systems individually as well as, in typical situations, to shifted or block Krylov subspace methods.
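
The shift invariance such methods exploit, namely K_m(A, B) = K_m(A + sigma*I, B), can be demonstrated in a few lines: one block Krylov basis is built and then reused to obtain Galerkin approximations for every shift and every right-hand side. This is only a dense illustration of the principle, with an SPD test matrix and arbitrary shifts, not the block Lanczos method of the paper.

```python
import numpy as np

def block_arnoldi_basis(A, B, m):
    """Orthonormal basis of the block Krylov subspace K_m(A, B) (no deflation handling)."""
    Q, _ = np.linalg.qr(B)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qj in blocks:
            W -= Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

rng = np.random.default_rng(5)
n, s = 800, 4                                     # several right-hand sides
G = rng.standard_normal((n, n))
A = G @ G.T / n + np.eye(n)                       # symmetric positive definite test matrix
B = rng.standard_normal((n, s))
shifts = [0.0, 0.1, 1.0, 10.0]

# Shift invariance: K_m(A, B) = K_m(A + sigma*I, B), so ONE basis serves all shifts.
V = block_arnoldi_basis(A, B, m=20)
H, RHS = V.T @ A @ V, V.T @ B
for sigma in shifts:
    X = V @ np.linalg.solve(H + sigma * np.eye(H.shape[0]), RHS)   # Galerkin solution
    res = np.linalg.norm((A + sigma * np.eye(n)) @ X - B) / np.linalg.norm(B)
    print(f"shift {sigma:>5}: relative residual {res:.2e}")
```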

10.
In the present paper, we propose Krylov-based methods for solving large-scale differential Sylvester matrix equations having a low-rank constant term. We present two new approaches for solving such differential matrix equations. The first approach is based on the integral expression of the exact solution and a Krylov method for the computation of the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace and get a low-dimensional differential Sylvester matrix equation. The latter problem is then solved by numerical integration methods such as the backward differentiation formula (BDF) or the Rosenbrock method, and the obtained solution is used to build the low-rank approximate solution of the original problem. We give some new theoretical results such as a simple expression of the residual norm and upper bounds for the norm of the error. Some numerical experiments are given in order to compare the two approaches.
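
A compressed sketch of the second approach: project the differential Sylvester equation onto two subspaces, integrate the small problem with BDF(1) (implicit Euler, one Sylvester solve per step), and lift the result back. The naive power-basis construction of the subspaces, the step size, the final time, and the zero initial condition are all assumptions made to keep the illustration short; a real implementation would use the (extended) block Arnoldi process.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def krylov_basis(M, U, m):
    """Naive orthonormal basis of the block Krylov subspace K_m(M, U)."""
    K = [U]
    for _ in range(m - 1):
        K.append(M @ K[-1])
    Q, _ = np.linalg.qr(np.hstack(K))
    return Q

rng = np.random.default_rng(6)
n, q = 600, 2
A = -np.eye(n) + 0.02 * rng.standard_normal((n, n))
B = -np.eye(n) + 0.02 * rng.standard_normal((n, n))
E, F = rng.standard_normal((n, q)), rng.standard_normal((n, q))

V = krylov_basis(A, E, 10)                      # subspace for the "A side"
W = krylov_basis(B.T, F, 10)                    # subspace for the "B side"
Am, Bm = V.T @ A @ V, W.T @ B @ W
Em, Fm = V.T @ E, W.T @ F

# BDF(1) on the projected equation Y'(t) = Am Y + Y Bm + Em Fm^T, Y(0) = 0:
# each step solves ((1/h) I - Am) Y_new - Y_new Bm = (1/h) Y_old + Em Fm^T.
h, Y = 0.01, np.zeros((V.shape[1], W.shape[1]))
for _ in range(100):                            # integrate up to t = 1
    Y = solve_sylvester(np.eye(Y.shape[0]) / h - Am, -Bm, Y / h + Em @ Fm.T)
X1 = V @ Y @ W.T                                # low-rank approximation of X(1)
print("approximate Frobenius norm of X(1):", np.linalg.norm(X1))
```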

11.
In this paper, we consider large-scale nonsymmetric differential matrix Riccati equations with low-rank right-hand sides. These matrix equations appear in many applications such as control theory, transport theory, applied probability, and others. We show how to apply Krylov-type methods such as the extended block Arnoldi algorithm to get low-rank approximate solutions. The initial problem is projected onto small subspaces to get low dimensional nonsymmetric differential equations that are solved using the exponential approximation or via other integration schemes such as backward differentiation formula (BDF) or Rosenbrock method. We also show how these techniques can be easily used to solve some problems from the well-known transport equation. Some numerical examples are given to illustrate the application of the proposed methods to large-scale problems.

12.
By transforming nonsymmetric linear systems into extended skew-symmetric ones, we present skew-symmetric methods for solving nonsymmetric linear systems with multiple right-hand sides. These methods are based on the block and global Arnoldi algorithms, which are formed by orthogonal projection of the initial matrix residual onto a matrix Krylov subspace. The algorithms avoid an excessively long Arnoldi process and greatly reduce storage requirements. Numerical experiments show that these algorithms are effective and give better practical performance than global GMRES for solving nonsymmetric linear systems with multiple right-hand sides.

13.
1. Introduction. Arnoldi's method [1, 12] is used for computing a few selected eigenpairs of large unsymmetric matrices. It has been investigated since the 1980s; see, e.g., [3-15]. It is well known that the m-step Arnoldi process, as described in detail in Section 2, generates an orthonormal basis {v1, ..., vm} of the Krylov subspace Km(v1, A) spanned by v1, Av1, ..., A^{m-1}v1. Here v1 is an initial unit-norm vector. The projected matrix of A onto Km(v1, A) is represented by an m x m upper Hessenb…
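
For reference, a minimal m-step Arnoldi process of the kind described: it returns the orthonormal basis and the projected upper Hessenberg matrix, checks the Arnoldi relation A V_m = V_{m+1} H_{m+1,m}, and prints a few Ritz values. The test matrix and dimensions are arbitrary.

```python
import numpy as np

def arnoldi(A, v1, m):
    """m-step Arnoldi process: orthonormal basis V of K_m(v1, A) and the (m+1) x m
    upper Hessenberg matrix H with A V[:, :m] = V H (when no breakdown occurs)."""
    n = len(v1)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                # breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(7)
n, m = 300, 20
A = rng.standard_normal((n, n))
V, H = arnoldi(A, rng.standard_normal(n), m)
print("Arnoldi relation residual:", np.linalg.norm(A @ V[:, :m] - V @ H))
# A few Ritz values (eigenvalues of the square m x m part of H)
print(np.sort_complex(np.linalg.eig(H[:m, :m])[0])[-3:])
```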

14.
In this paper, we develop an algorithm in which the block shift-and-invert Krylov subspace method can be employed for approximating the linear combination of the matrix exponential and related exponential-type functions. Such evaluation plays a major role in a class of numerical methods known as exponential integrators. We derive a low-dimensional matrix exponential to approximate the objective function based on the block shift-and-invert Krylov subspace methods. We obtain the error expansion of the approximation, and show that the variants of its first term can be used as reliable a posteriori error estimates and correctors. Numerical experiments illustrate that the error estimates are efficient and the proposed algorithm is worthy of further study.
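
A single-vector sketch of the shift-and-invert idea (the paper treats the block version with error estimates): the Krylov subspace is built with (I - gamma*A)^{-1} instead of A, which is what makes the approach effective for stiff matrices, and exp(t*A)b is then approximated through a Galerkin projection. The parameters gamma and m, the 1D Laplacian test matrix, and the helper name are assumptions.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def shift_and_invert_expm(A, b, t=1.0, gamma=0.1, m=20):
    """Approximate exp(t*A) @ b from the shift-and-invert Krylov subspace
    K_m((I - gamma*A)^{-1}, b), via a Galerkin projection of A onto that subspace."""
    n = len(b)
    lu = lu_factor(np.eye(n) - gamma * A)          # one factorization, reused every step
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = lu_solve(lu, V[:, j])
        for i in range(j + 1):                     # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                               # small projected matrix
    return V @ (expm(t * Am) @ (V.T @ b))

# Stiff test problem: scaled 1D Laplacian, where shift-and-invert pays off.
n = 400
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A *= (n + 1) ** 2
b = np.ones(n)
y = shift_and_invert_expm(A, b, t=1e-3, gamma=1e-3, m=25)
print("error vs dense expm:", np.linalg.norm(y - expm(1e-3 * A) @ b) / np.linalg.norm(b))
```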

15.
Convergence of the implicitly restarted Arnoldi (IRA) method for nonsymmetric eigenvalue problems has often been studied by deriving bounds for the angle between a desired eigenvector and the Krylov projection subspace. Bounds for residual norms of approximate eigenvectors have been studied less, and this paper derives a new a posteriori residual bound for nonsymmetric matrices with simple eigenvalues. The residual vector is shown to be a linear combination of exact eigenvectors, and a residual bound is obtained as the sum of the magnitudes of the coefficients of the eigenvectors. We numerically illustrate that the convergence of the residual norm to zero is governed by a scalar term, namely the last element of the wanted eigenvector of the projected matrix. Both cases of convergence and non-convergence are illustrated, which validates our theoretical results. We derive an analogous result for the implicitly restarted refined Arnoldi (IRRA) method and, for this algorithm, we numerically illustrate that convergence is governed by two scalar terms appearing in the linear combination which drives the residual norm to zero. We provide a set of numerical results that validate the residual bounds for both variants of Arnoldi methods.

16.
It is shown that the method of Arnoldi can be successfully used for solving large unsymmetric eigenproblems. Like the symmetric Lanczos method, Arnoldi's algorithm realizes a projection process onto the Krylov subspace Km spanned by v1, Av1, ..., A^{m-1}v1, where v1 is the initial vector. We therefore study the convergence of the approximate eigenelements obtained by such a process. In particular, when the eigenvalues of A are real, we obtain bounds for the rates of convergence similar to those for the symmetric Lanczos algorithm. Some practical methods are presented in addition to that of Arnoldi, and several numerical experiments are described.

17.
Novel memory-efficient Arnoldi algorithms for solving matrix polynomial eigenvalue problems are presented. More specifically, we consider the case of matrix polynomials expressed in the Chebyshev basis, which is often numerically more appropriate than the standard monomial basis for larger degrees d. The standard way of solving polynomial eigenvalue problems proceeds by linearization, which increases the problem size by a factor d. Consequently, the memory requirements of Krylov subspace methods applied to the linearization grow by this factor. In this paper, we develop two variants of the Arnoldi method that build the Krylov subspace basis implicitly, in a way that only vectors of length equal to the size of the original problem need to be stored. The proposed variants are generalizations of the so-called quadratic Arnoldi method and the two-level orthogonal Arnoldi procedure, which have been developed for the monomial case. We also show how the typical ingredients of a full implementation of the Arnoldi method, including shift-and-invert and restarting, can be incorporated. Numerical experiments are presented for matrix polynomials up to degree 30 arising from the interpolation of nonlinear eigenvalue problems, which stem from boundary element discretizations of PDE eigenvalue problems.

18.
Nonsymmetric saddle point problems arise in a wide variety of applications in computational science and engineering. The aim of this paper is to discuss the numerical behavior of several nonsymmetric iterative methods applied for solving the saddle point systems via the Schur complement reduction or the null-space projection approach. Krylov subspace methods often produce iterates which fluctuate rather strongly. Here we address the question whether large intermediate approximate solutions reduce the final accuracy of these two-level (inner–outer) iteration algorithms. We extend our previous analysis obtained for symmetric saddle point problems and distinguish between three mathematically equivalent back-substitution schemes which lead to different numerical behavior when applied in finite precision arithmetic. Theoretical results are then illustrated on a simple model example.
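
To fix ideas, here is a dense two-level sketch of the Schur complement reduction route: the outer GMRES iteration acts on S = B A^{-1} B^T through inner solves with A, and x is then recovered by the simplest back-substitution scheme. The matrices are random and well conditioned; none of the finite-precision subtleties analysed in the abstract are reproduced.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

# Schur complement reduction for the saddle point system
#   [A  B^T] [x]   [f]
#   [B   0 ] [y] = [g]
# Outer iteration on  S y = B A^{-1} f - g  with  S = B A^{-1} B^T  applied via inner
# solves with A; x is recovered by back-substitution  A x = f - B^T y.
rng = np.random.default_rng(8)
n, p = 300, 50
A = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned nonsymmetric block
B = rng.standard_normal((p, n))
f, g = rng.standard_normal(n), rng.standard_normal(p)

lu = lu_factor(A)                                   # "inner solver" for systems with A
S = LinearOperator((p, p), matvec=lambda y: B @ lu_solve(lu, B.T @ y))
rhs = B @ lu_solve(lu, f) - g

y, info = gmres(S, rhs)                             # outer iteration on the Schur complement
x = lu_solve(lu, f - B.T @ y)                       # back-substitution
r = np.hstack([A @ x + B.T @ y - f, B @ x - g])
print("GMRES flag:", info, " saddle point residual:", np.linalg.norm(r))
```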

19.
This paper presents a new method for obtaining a matrix M which is an approximate inverse preconditioner for a given matrix A, where the eigenvalues of A either all have negative real parts or all have positive real parts. This method is based on the approximate solution of the special Sylvester equation AX + XA = 2I. We use a Krylov subspace method for obtaining an approximate solution of this Sylvester matrix equation, which is based on the Arnoldi algorithm and on an integral formula. The computation of the preconditioner can be carried out in parallel and its implementation requires only the solution of very simple and small Sylvester equations. The sparsity of the preconditioner is preserved by using a proper dropping strategy. Some numerical experiments on test matrices from the Harwell–Boeing collection, comparing the numerical performance of the new method with a well-known algorithm, are presented.
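
The reason this particular Sylvester equation yields a preconditioner is that, whenever it is uniquely solvable, its exact solution is X = A^{-1} (since A A^{-1} + A^{-1} A = 2I), so an approximate sparse solution is an approximate inverse. The small dense check below verifies that identity with scipy.linalg.solve_sylvester; the paper's Arnoldi/integral-formula solver and dropping strategy are not reproduced, and the test matrix is an assumption.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(9)
n = 100
A = 2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)   # spectrum in the right half-plane
X = solve_sylvester(A, A, 2.0 * np.eye(n))                        # exact solution is inv(A)
print("|| X - inv(A) ||:", np.linalg.norm(X - np.linalg.inv(A)))
print("cond(X A) (ideal preconditioning gives 1):", np.linalg.cond(X @ A))
```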

20.
Solutions of large sparse linear systems of equations are usually obtained iteratively by constructing a smaller dimensional subspace such as a Krylov subspace. The convergence of these methods is sometimes hampered by the presence of small eigenvalues, in which case some form of deflation can help improve convergence. The method presented in this paper enables the solution to be approximated by focusing the attention directly on the ‘small' eigenspace (‘singular vector' space). It is based on embedding the solution of the linear system within the eigenvalue problem (singular value problem) in order to facilitate the direct use of methods such as implicitly restarted Arnoldi or Jacobi–Davidson for the linear system solution. The proposed method, called ‘solution by null-space approximation and projection' (SNAP), differs from other similar approaches in that it converts the non-homogeneous system into a homogeneous one by constructing an annihilator of the right-hand side. The solution then lies in the null space of the resulting matrix. We examine the construction of a sequence of approximate null spaces using a Jacobi–Davidson style singular value decomposition method, called restarted SNAP-JD, from which an approximate solution can be obtained. Relevant theory is discussed and the method is illustrated by numerical examples where SNAP is compared with both GMRES and GMRES-IR.
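
The key construction can be shown in miniature: applying the annihilator P = I - b b^T/(b^T b) turns A x = b into the homogeneous problem (P A) x = 0, so the solution is recovered from a null vector of P A plus one rescaling. Here the null vector comes from a dense SVD purely for illustration; the paper computes it iteratively with a restarted Jacobi-Davidson-type SVD method, and the test problem is an assumption.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 200
A = rng.standard_normal((n, n)) + 3.0 * np.sqrt(n) * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

P = np.eye(n) - np.outer(b, b) / (b @ b)     # annihilator of the right-hand side: P b = 0
_, s, Vt = np.linalg.svd(P @ A)
v = Vt[-1]                                   # right singular vector of the smallest singular value
alpha = (b @ (A @ v)) / (b @ b)              # A v = alpha * b for a null vector v of P A
x = v / alpha                                # rescale the null vector to solve A x = b
print("smallest singular value of P A:", s[-1])
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```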
