Similar Literature
20 similar documents found.
1.
We consider the numerical solution of the continuous algebraic Riccati equation A*X + XA − XFX + G = 0, with F = F*, G = G* of low rank and A large and sparse. We develop an algorithm for the low‐rank approximation of X by means of an invariant subspace iteration on a function of the associated Hamiltonian matrix. We show that the sought‐after approximation can be obtained by a low‐rank update, in the style of the well-known Alternating Direction Implicit (ADI) iteration for the linear equation, from which the new method inherits many algebraic properties. Moreover, we establish new insightful matrix relations with emerging projection‐type methods, which will help increase our understanding of this latter class of solution strategies. Copyright © 2014 John Wiley & Sons, Ltd.
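The Hamiltonian-based solver itself is not easy to condense, but the ADI-style low-rank updates this abstract invokes can be illustrated on the linear (Lyapunov) case it refers to. Below is a minimal Python sketch of the classical low-rank ADI iteration for AX + XAᵀ + BBᵀ = 0, assuming A is stable and a handful of real negative shifts chosen by hand; the function name, the shifts and the test problem are illustrative, not taken from the paper.

```python
import numpy as np

def lr_adi_lyapunov(A, B, shifts):
    """Low-rank ADI sketch for A X + X A^T + B B^T = 0 with A stable.

    Returns Z such that X ~= Z Z^T.  Shifts are assumed real and negative."""
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    blocks = [V]
    for p_new in shifts[1:]:
        W = np.linalg.solve(A + p_new * I, V)
        V = np.sqrt(p_new / p) * (V - (p_new + p) * W)   # low-rank update of the factor
        blocks.append(V)
        p = p_new
    return np.hstack(blocks)

# Toy check on a random stable matrix and a rank-one right-hand side.
rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
Z = lr_adi_lyapunov(A, B, shifts=[-0.5, -1.0, -2.0, -4.0, -8.0])
X = Z @ Z.T
print(np.linalg.norm(A @ X + X @ A.T + B @ B.T))   # residual shrinks as more/better shifts are used
```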

2.
The extended Krylov subspace method has recently arisen as a competitive method for solving large-scale Lyapunov equations. Using the theoretical framework of orthogonal rational functions, in this paper we provide a general a priori error estimate when the known term has rank one. Special cases, such as a symmetric coefficient matrix, are also treated. Numerical experiments confirm the proved theoretical assertions.
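As a rough illustration of the method being analyzed, the extended Krylov projection for AX + XAᵀ + bbᵀ = 0 with a rank-one known term can be sketched as follows: enrich the basis with powers of both A and A⁻¹ applied to b, project, and solve the small Lyapunov equation (here naively through a Kronecker system). This assumes A is nonsingular and uses plain QR orthogonalization instead of the short recurrences used in serious implementations; all names are ad hoc.

```python
import numpy as np

def extended_krylov_lyapunov(A, b, m):
    """Sketch: project A X + X A^T + b b^T = 0 onto the extended Krylov space
    span{b, A^{-1}b, Ab, A^{-2}b, A^2 b, ...} and solve the small projected equation."""
    vecs = [b]
    v_inv, v_dir = b.copy(), b.copy()
    for _ in range(m):
        v_inv = np.linalg.solve(A, v_inv)            # next inverse-power direction
        v_dir = A @ v_dir                            # next direct-power direction
        vecs += [v_inv, v_dir]
    Q, _ = np.linalg.qr(np.column_stack(vecs))       # orthonormal basis of the space
    H = Q.T @ A @ Q                                  # projected coefficient matrix
    c = Q.T @ b                                      # projected right-hand side
    k = H.shape[0]
    # Small Lyapunov equation H Y + Y H^T = -c c^T, solved via vectorization.
    M = np.kron(np.eye(k), H) + np.kron(H, np.eye(k))
    y = np.linalg.solve(M, -np.outer(c, c).ravel(order="F"))
    return Q @ y.reshape((k, k), order="F") @ Q.T    # low-rank approximation of X
```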

3.
4.
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax=λBx. We first formulate a version of inexact inverse subspace iteration in which the approximation from one step is used as an initial approximation for the next step. We then analyze the convergence property, which relates the accuracy of the inner iteration to the convergence rate of the outer iteration. In particular, the linear convergence property of the inverse subspace iteration is preserved. Numerical examples are given to demonstrate the theoretical results.
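For orientation, here is a minimal sketch of (exact) shift-and-invert subspace iteration for Ax = λBx with a final Rayleigh-Ritz extraction; the inexact variant studied in the paper would replace the direct solve by a truncated iterative solver warm-started with the previous approximation. The shift, block size and iteration counts are placeholders.

```python
import numpy as np

def inverse_subspace_iteration(A, B, sigma, k, outer=30):
    """Shift-and-invert subspace iteration for A x = lambda B x (sketch).

    Each outer step solves (A - sigma*B) Y = B V exactly; an inexact version
    would solve this system only approximately."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(outer):
        V, _ = np.linalg.qr(np.linalg.solve(A - sigma * B, B @ V))
    # Rayleigh-Ritz extraction on the final subspace: small k-by-k generalized problem.
    Ak, Bk = V.T @ A @ V, V.T @ B @ V
    theta, W = np.linalg.eig(np.linalg.solve(Bk, Ak))
    return theta, V @ W          # Ritz values near sigma and the corresponding Ritz vectors
```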

5.
A full multigrid scheme is used to compute some eigenvalues of the Laplace eigenvalue problem with the Dirichlet boundary condition. We obtain a system of algebraic equations with the aid of the finite difference method and apply the subspace iteration method to this system to compute the first few eigenvalues. The results show that this approach is very effective for calculating eigenvalues of this problem.
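The combination of a finite difference discretization with subspace iteration is easy to reproduce in a few lines; the sketch below uses a one-dimensional Dirichlet Laplacian and a direct inner solve rather than the paper's full multigrid scheme, and compares the Ritz values against the known eigenvalues of the discrete operator.

```python
import numpy as np

# 1-D Dirichlet Laplacian on (0, 1) with the 3-point finite difference stencil.
N = 200
h = 1.0 / (N + 1)
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

k = 4                                            # number of smallest eigenvalues wanted
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((N, k)))
for _ in range(50):                              # inverse subspace iteration
    V, _ = np.linalg.qr(np.linalg.solve(A, V))
ritz = np.sort(np.linalg.eigvalsh(V.T @ A @ V))  # Rayleigh-Ritz values

exact = 4.0 / h**2 * np.sin(np.arange(1, k + 1) * np.pi * h / 2.0) ** 2
print(ritz)    # should match the exact eigenvalues of the discrete Laplacian below
print(exact)
```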

6.
The paper presents convergence estimates for a class of iterative methods for solving partial generalized symmetric eigenvalue problems, whereby a sequence of subspaces containing approximations to eigenvectors is generated by combining the Rayleigh-Ritz and the preconditioned steepest descent/ascent methods. The paper uses a novel approach of studying the convergence of groups of eigenvalues, rather than individual ones, to obtain new convergence estimates for this class of methods that are cluster robust, i.e., do not involve distances between computed eigenvalues.
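A single-vector instance of this class of methods, written here for the standard case B = I, performs a Rayleigh-Ritz step on the two-dimensional space spanned by the current iterate and the preconditioned residual. The sketch below is only meant to fix ideas; T stands for any symmetric positive definite preconditioner (T = I recovers plain steepest descent), and the names are mine.

```python
import numpy as np

def preconditioned_sd_smallest(A, T, x, iters=200):
    """Preconditioned steepest descent with Rayleigh-Ritz (sketch, B = I case):
    each step does Rayleigh-Ritz on span{x, T @ residual}."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ A @ x                           # Rayleigh quotient
        r = A @ x - rho * x                       # residual
        Q, _ = np.linalg.qr(np.column_stack([x, T @ r]))
        theta, W = np.linalg.eigh(Q.T @ A @ Q)    # 2-by-2 Rayleigh-Ritz problem
        x = Q @ W[:, 0]                           # Ritz vector for the smallest Ritz value
    return x @ A @ x, x
```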

7.
Numerical Algorithms - Bregman-type iterative methods have received considerable attention in recent years due to their ease of implementation and the high quality of the computed solutions they...

8.
The aim of this paper is to provide a convergence analysis for a preconditioned subspace iteration, which is designed to determine a modest number of the smallest eigenvalues and the corresponding invariant subspace of eigenvectors of a large, symmetric positive definite matrix. The algorithm is built upon a subspace implementation of preconditioned inverse iteration, i.e., the well-known inverse iteration procedure in which the associated system of linear equations is solved approximately by using a preconditioner. This step is followed by a Rayleigh-Ritz projection so that preconditioned inverse iteration is always applied to the Ritz vectors of the current subspace of approximate eigenvectors. The given theory provides sharp convergence estimates for the Ritz values and is mainly built on arguments exploiting the geometry underlying preconditioned inverse iteration.
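A compact subspace sketch of the algorithm described above (preconditioned inverse iteration applied to the Ritz vectors of the current subspace) might look as follows; apply_prec stands for an application of the preconditioner, e.g. a solve with an incomplete factorization of A, and all names and defaults are placeholders.

```python
import numpy as np

def pinvit_subspace(A, apply_prec, k, iters=50):
    """Subspace version of preconditioned inverse iteration (sketch):
    Rayleigh-Ritz projection, then V <- V - apply_prec(A V - V diag(theta))."""
    n = A.shape[0]
    rng = np.random.default_rng(2)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        theta, W = np.linalg.eigh(V.T @ A @ V)   # Rayleigh-Ritz projection
        V = V @ W                                # rotate to the Ritz vectors
        V = V - apply_prec(A @ V - V * theta)    # preconditioned inverse iteration step
        V, _ = np.linalg.qr(V)
    theta, W = np.linalg.eigh(V.T @ A @ V)
    return theta, V @ W

# Example preconditioner: a Jacobi (diagonal) approximation of the inverse of A.
# apply_prec = lambda R: R / np.diag(A)[:, None]
```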

9.
We construct a novel multi-step iterative method for solving systems of nonlinear equations by introducing a parameter θ to generalize the multi-step Newton method while keeping its order of convergence and computational cost. By an appropriate selection of θ, the new method can both converge faster and have a larger radius of convergence. The new iterative method requires only one Jacobian inversion per iteration and can therefore be efficiently implemented using Krylov subspace methods. The new method can be used to solve nonlinear systems of partial differential equations, such as complex generalized Zakharov systems, by discretizing them in both the spatial and temporal independent variables, for instance with the Chebyshev pseudo-spectral method, to obtain systems of nonlinear equations. Extensive tests show that the new method can have significantly faster convergence and a significantly larger radius of convergence than the multi-step Newton method.
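The abstract does not spell out the θ-parameterized scheme, so the sketch below only shows the frozen-Jacobian multi-step Newton method it generalizes: one Jacobian evaluation (in practice, one factorization) per outer iteration, reused for several inner corrections. The small test system is my own example.

```python
import numpy as np

def multistep_newton(F, J, x, steps=3, outer=20, tol=1e-12):
    """Frozen-Jacobian multi-step Newton sketch: evaluate J once per outer
    iteration and reuse it for `steps` inner corrections.  In practice one
    would factor J(x) once (e.g. LU) instead of calling solve repeatedly."""
    for _ in range(outer):
        Jx = J(x)
        for _ in range(steps):
            x = x - np.linalg.solve(Jx, F(x))
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(multistep_newton(F, J, np.array([2.0, 0.3])))
```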

10.
We study inexact subspace iteration for solving generalized non-Hermitian eigenvalue problems with spectral transformation, with focus on a few strategies that help accelerate preconditioned iterative solution of the linear systems of equations arising in this context. We provide new insights into a special type of preconditioner with “tuning” that has been studied for this algorithm applied to standard eigenvalue problems. Specifically, we propose an alternative way to use the tuned preconditioner to achieve similar performance for generalized problems, and we show that these performance improvements can also be obtained by solving an inexpensive least squares problem. In addition, we show that the cost of iterative solution of the linear systems can be further reduced by using deflation of converged Schur vectors, special starting vectors constructed from previously solved linear systems, and iterative linear solvers with subspace recycling. The effectiveness of these techniques is demonstrated by numerical experiments.

11.
The governing dynamics of fluid flow is stated as a system of partial differential equations referred to as the Navier-Stokes system. In industrial and scientific applications, fluid flow control becomes an optimization problem in which the governing partial differential equations of the fluid flow are stated as constraints. When discretized, the optimal control of the Navier-Stokes equations leads to large sparse saddle point systems on two levels. In this paper, we consider distributed optimal control for the Stokes system and test the particular case when the arising linear system can be compressed after eliminating the control function. In that case, a system arises in a form that enables the application of an efficient block matrix preconditioner that has previously been applied to solve complex-valued systems in real arithmetic. Under certain conditions, the condition number of the preconditioned matrix is bounded by 2. The numerical and computational efficiency of the method, in terms of number of iterations and execution time, is favorably compared with other published methods.

12.
In this paper, we use a projected gradient algorithm to solve a nonlinear operator equation with an ℓp-norm (1 < p ≤ 2) constraint. Gradient iterations with ℓp-norm constraints have been studied recently both in the context of inverse problems and in that of compressed sensing. In this paper, the constrained gradient iteration is implemented via a projection operator. We establish the ℓ2-norm convergence of the sequence constructed by the constrained gradient iteration when p ∈ (1,2]. The performance of the method is demonstrated by a numerical example.
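The structure of the constrained gradient iteration is a Landweber step followed by a projection onto the constraint set. Projection onto an ℓp ball for 1 < p < 2 requires a small root-finding routine, so the sketch below substitutes the trivial ℓ2-ball projection just to show the shape of the iteration; the step size, radius and test data are arbitrary.

```python
import numpy as np

def project_l2_ball(x, radius):
    """Euclidean projection onto {x : ||x||_2 <= radius} (a stand-in for the
    lp-ball projection, which needs a dedicated routine when 1 < p < 2)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def projected_gradient(A, y, radius, step, iters=500):
    """Constrained Landweber-type iteration x <- P(x - step * A^T (A x - y))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l2_ball(x - step * A.T @ (A @ x - y), radius)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))
y = A @ (0.1 * rng.standard_normal(60))
x = projected_gradient(A, y, radius=1.0, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(A @ x - y), np.linalg.norm(x))   # residual and constraint check
```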

13.
An image segmentation algorithm called "segmentation based on the localized subspace iterations" (SLSI) is proposed in this paper. The basic idea is to combine the strategies of the Ncut algorithm by Shi and Malik in 2000 and the LSI by E, Li and Lu in 2007. The LSI is applied to solve an eigenvalue problem associated with the affinity matrix of an image, which makes the overall algorithm linearly scalable. The choices of the partition number, the supports and the weight functions in SLSI are discussed. Numerical experiments on real images show the applicability of the algorithm.
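To see the eigenvalue problem SLSI is designed to solve cheaply, here is a dense Ncut-style bipartition of a tiny synthetic image: Gaussian affinities from intensity and position, the normalized Laplacian, and a threshold on the second eigenvector. The dense eigensolve below is exactly the step that LSI/SLSI replaces with localized subspace iterations; the parameter values are arbitrary.

```python
import numpy as np

def ncut_bipartition(img, sigma_i=0.2, sigma_x=4.0):
    """Dense Ncut-style sketch: affinity matrix W, normalized Laplacian,
    threshold the second eigenvector (the Fiedler-type vector)."""
    h, w = img.shape
    coords = np.indices((h, w)).reshape(2, -1).T.astype(float)
    vals = img.ravel().astype(float)
    W = (np.exp(-((vals[:, None] - vals[None, :]) ** 2) / sigma_i**2)
         * np.exp(-(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)) / sigma_x**2))
    d = W.sum(axis=1)
    Dm12 = np.diag(1.0 / np.sqrt(d))
    L = Dm12 @ (np.diag(d) - W) @ Dm12        # symmetrized form of (D - W) x = lambda D x
    _, evecs = np.linalg.eigh(L)
    fiedler = Dm12 @ evecs[:, 1]              # second-smallest eigenvector, back-transformed
    return (fiedler > 0).reshape(h, w)

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0   # bright square on a dark background
print(ncut_bipartition(img).astype(int))
```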

14.
Let p(x) be a polynomial of degree n ≥ 2 with coefficients in a subfield K of the complex numbers. For each natural number m ≥ 2, let Lm(x) be the m×m lower triangular matrix whose diagonal entries are p(x) and, for each j=1,…,m−1, whose jth subdiagonal entries are p(j)(x)/j!. For i=1,2, let Lm(i)(x) be the matrix obtained from Lm(x) by deleting its first i rows and its last i columns, with L1(1)(x) ≡ 1. Then the function Bm(x) = x − p(x) det(Lm−1(1)(x))/det(Lm(1)(x)) defines a fixed-point iteration function having mth order convergence rate for simple roots of p(x). For m=2 and 3, Bm(x) coincides with Newton's and Halley's methods, respectively. The function Bm(x) is a member of S(m,m+n−2), where for any M ≥ m, S(m,M) is the set of all rational iteration functions g(x) ∈ K(x) such that, for all roots θ of p(x), g(x) = θ + ∑i=m..M γi(x)(θ−x)^i, with γi(x) ∈ K(x) and well-defined at any simple root θ. Given g ∈ S(m,M) and a simple root θ of p(x), g(i)(θ)=0 for i=1,…,m−1, and the asymptotic constant of convergence of the corresponding fixed-point iteration is, up to sign, γm(θ); for Bm(x) an explicit expression for this constant is obtained. If all roots of p(x) are simple, Bm(x) is the unique member of S(m,m+n−2). By making use of a certain identity, we arrive at two recursive formulas for constructing iteration functions within the S(m,M) family. In particular, the family of Bm(x) can be generated using one of these formulas. Moreover, the other formula gives a simple scheme for constructing a family of iteration functions credited to Euler as well as Schröder, whose mth order member belongs to S(m,mn), m>2. The iteration functions within S(m,M) can be extended to any arbitrary smooth function f, with the uniform replacement of p(j) with f(j) in g as well as in γm(θ).
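A worked illustration of the statement that B2 and B3 coincide with Newton's and Halley's iterations (the general determinant construction is not reproduced here); the test polynomial x^3 − 2x − 5 is my own choice.

```python
def newton_step(p, dp, x):
    """B_2: Newton's iteration, quadratic convergence at simple roots."""
    return x - p(x) / dp(x)

def halley_step(p, dp, ddp, x):
    """B_3: Halley's iteration, cubic convergence at simple roots."""
    return x - 2 * p(x) * dp(x) / (2 * dp(x) ** 2 - p(x) * ddp(x))

p   = lambda x: x**3 - 2*x - 5
dp  = lambda x: 3*x**2 - 2
ddp = lambda x: 6*x

x_newton = x_halley = 2.0
for _ in range(4):
    x_newton = newton_step(p, dp, x_newton)
    x_halley = halley_step(p, dp, ddp, x_halley)
print(x_newton, x_halley)    # both approach the simple root 2.0945514815...
```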

15.
A new method of finding explicit solutions of Lyapunov equations is described, based on a lemma on one-dimensional perturbations of invertible operators. If Y satisfies the equation Y − CYC* = bb* for an appropriate vector b, then X = Y⁻¹ satisfies X − C*XC = aa* for a given vector a. A concrete example [with a = (1,0,…,0)^T] is given.

16.
On the solvability for the mixed-type Lyapunov equation
In this paper, the linear matrix equation X = AXB* + BXA* + Q is considered, which is called the mixed-type Lyapunov equation. Some necessary and sufficient conditions for the existence of a unique solution are presented. Since a Hermitian positive semidefinite solution is important from the application point of view, some sufficient conditions for the existence of a Hermitian positive semidefinite solution are derived.
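For small dense matrices, existence and uniqueness can be checked directly by vectorizing the equation: using vec(AXB*) = (conj(B) ⊗ A) vec(X), the equation X = AXB* + BXA* + Q becomes a linear system in vec(X), which is uniquely solvable exactly when the Kronecker-structured matrix below is nonsingular. This is only a brute-force numerical sketch with made-up data; the paper's solvability conditions are not reproduced.

```python
import numpy as np

def solve_mixed_lyapunov(A, B, Q):
    """Dense sketch: solve X = A X B^* + B X A^* + Q by vectorization,
    using vec(A X B^*) = (conj(B) kron A) vec(X) with column-major vec."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(B.conj(), A) - np.kron(A.conj(), B)
    x = np.linalg.solve(M, Q.ravel(order="F"))   # unique solution iff M is nonsingular
    return x.reshape((n, n), order="F")

rng = np.random.default_rng(4)
n = 5
A = 0.3 * rng.standard_normal((n, n))
B = 0.3 * rng.standard_normal((n, n))
Q = rng.standard_normal((n, n)); Q = Q @ Q.T     # Hermitian positive semidefinite known term
X = solve_mixed_lyapunov(A, B, Q)
print(np.linalg.norm(X - (A @ X @ B.conj().T + B @ X @ A.conj().T + Q)))   # ~ 0
```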

17.
18.
It was shown by Bushell [1] that the equation T^T X T = X^2 has a unique positive-definite solution when T is a real invertible matrix; the proof utilizes the Hilbert projective metric and the Banach fixed-point theorem. I present a simpler proof of a more general result.

19.
This paper is devoted to the time‐fractional gas dynamics equation with the Caputo derivative. Fractional operators are very natural tools to model memory‐dependent phenomena. A modified iteration method is proposed to obtain approximate and analytical solutions of the fractional gas dynamics equation. This method is a combined form of the new iteration method and the Laplace transform, and it is powerful and simple compared with other methods. Existence and uniqueness of the solution are proven. Numerical results for different cases of the equation are obtained. Copyright © 2016 John Wiley & Sons, Ltd.

20.