Similar Articles
20 similar articles found.
1.
We present a Hermitian and skew-Hermitian splitting (HSS) iteration method for solving large sparse continuous Sylvester equations with non-Hermitian and positive definite/semi-definite matrices. The unconditional convergence of the HSS iteration method is proved and an upper bound on the convergence rate is derived. Moreover, to reduce the computing cost, we establish an inexact variant of the HSS iteration method and analyze its convergence property in detail. Numerical results show that the HSS iteration method and its inexact variant are efficient and robust solvers for this class of continuous Sylvester equations.
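For concreteness, the following Python sketch applies the HSS splitting directly to the Sylvester operator of AX + XB = C: each half-step is itself a Sylvester equation whose coefficients are the (shifted) Hermitian, respectively skew-Hermitian, parts, and here it is simply handed to SciPy's dense solver. The parameter alpha, the stopping rule and the random test matrices are illustrative assumptions rather than the paper's experiments, and the exact parameterization of the iteration should be checked against the original paper; for truly large sparse problems the half-step equations would themselves be solved inexactly, which is what the inexact variant above addresses.

import numpy as np
from scipy.linalg import solve_sylvester

def hss_sylvester(A, B, C, alpha=1.0, tol=1e-10, max_iter=500):
    """One common way to run an HSS-type iteration on A X + X B = C."""
    n = A.shape[0]
    HA, SA = (A + A.conj().T) / 2, (A - A.conj().T) / 2   # Hermitian / skew-Hermitian parts of A
    HB, SB = (B + B.conj().T) / 2, (B - B.conj().T) / 2   # Hermitian / skew-Hermitian parts of B
    In = np.eye(n)
    X = np.zeros((n, B.shape[0]))
    nrmC = np.linalg.norm(C)
    for k in range(max_iter):
        # first half-step: Hermitian parts on the left-hand side
        R = (alpha * In - SA) @ X - X @ SB + C
        X = solve_sylvester(alpha * In + HA, HB, R)
        # second half-step: skew-Hermitian parts on the left-hand side
        R = (alpha * In - HA) @ X - X @ HB + C
        X = solve_sylvester(alpha * In + SA, SB, R)
        if np.linalg.norm(A @ X + X @ B - C) <= tol * nrmC:
            return X, k + 1
    return X, max_iter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 30
    A = 3 * np.eye(n) + 0.5 * (rng.random((n, n)) - 0.5)   # non-symmetric, positive definite
    B = 3 * np.eye(n) + 0.5 * (rng.random((n, n)) - 0.5)
    C = rng.random((n, n))
    X, its = hss_sylvester(A, B, C, alpha=6.0)
    print(its, np.linalg.norm(A @ X + X @ B - C))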

2.
Li Xu, Li Mingxiang. Mathematica Numerica Sinica (计算数学), 2021, 43(3): 354-366
For solving large sparse continuous Sylvester equations, Bai proposed the highly effective Hermitian and skew-Hermitian splitting (HSS) iteration method. To further improve the efficiency of solving this class of equations, this paper establishes a generalized positive-definite and skew-Hermitian splitting (GPSS) iteration method, and also proposes an inexact GPSS (IGPSS) iteration method to reduce the computational cost. The convergence of the GPSS iteration method and of its inexact variant is analyzed in detail. In addition, an overrelaxation-accelerated GPSS (AGPSS) method is established and its convergence is discussed. Numerical results demonstrate the efficiency and robustness of the proposed methods.

3.
In this paper, we study the convergence and convergence rates of an inexact Newton–Landweber iteration method for solving nonlinear inverse problems in Banach spaces. In contrast to traditional approaches, we analyze an inexact Newton–Landweber iteration that relies on the Hölder continuity of the inverse mapping when the data are not contaminated by noise. Under this Hölder-type stability and the Lipschitz continuity of DF, we prove convergence and monotonicity of the residuals of the sequence generated by the iteration. Finally, we discuss the convergence rates.

4.
The positive-definite and skew-Hermitian splitting (PSS) method is an unconditionally convergent iterative algorithm for solving large sparse non-Hermitian positive definite systems of linear algebraic equations. Using it as the inner solver of an inexact Newton method, we construct a class of inexact Newton-PSS methods for solving large sparse systems of nonlinear equations with non-Hermitian positive definite Jacobian matrices, and analyze the local and semilocal convergence of the method in detail. Numerical results verify the feasibility and effectiveness of the method.

5.
In this paper, we present a convergence analysis of the inexact Newton method for solving discrete-time algebraic Riccati equations (DAREs) for large and sparse systems. The inexact Newton method requires, at each iteration, the solution of a symmetric Stein matrix equation. These linear matrix equations are solved approximately by the alternating direction implicit (ADI) or Smith's methods. We give some new matrix identities that allow us to derive new theoretical convergence results for the obtained inexact Newton sequences. We show that under certain conditions the approximate solutions satisfy desired properties such as d-stability. The theoretical results developed in this paper extend to the discrete case the analysis performed by Feitzinger et al. (2009) [8] for continuous-time algebraic Riccati equations. In the last section, we give some numerical experiments.

6.
For the large sparse linear complementarity problems, by reformulating them as implicit fixed-point equations based on splittings of the system matrices, we establish a class of modulus-based matrix splitting iteration methods and prove their convergence when the system matrices are positive-definite matrices and H+-matrices. These results naturally present convergence conditions for the symmetric positive-definite matrices and the M-matrices. Numerical results show that the modulus-based relaxation methods are superior to the projected relaxation methods as well as the modified modulus method in computing efficiency. Copyright © 2009 John Wiley & Sons, Ltd.
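As an illustration of the modulus-based idea, the sketch below reformulates LCP(q, A), i.e. find z >= 0 with w = A z + q >= 0 and z^T w = 0, through the implicit fixed-point equation (Omega + M) x = N x + (Omega - A)|x| - gamma*q for a splitting A = M - N, and then recovers z = (|x| + x)/gamma. The Gauss-Seidel-type splitting, the choices Omega = diag(A) and gamma = 1, and the tridiagonal M-matrix test problem are illustrative assumptions, not the paper's setup.

import numpy as np
from scipy.linalg import solve_triangular

def modulus_gs_lcp(A, q, gamma=1.0, tol=1e-10, max_iter=1000):
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)                     # strictly lower part
    U = np.triu(A, 1)                      # strictly upper part
    Omega = D.copy()                       # a common positive diagonal choice
    M_lhs = Omega + D + L                  # lower-triangular matrix of each sweep
    x = np.zeros(n)
    z = np.zeros(n)
    for k in range(max_iter):
        rhs = -U @ x + (Omega - A) @ np.abs(x) - gamma * q
        x = solve_triangular(M_lhs, rhs, lower=True)
        z = (np.abs(x) + x) / gamma        # recovered LCP iterate, nonnegative by construction
        if np.linalg.norm(np.minimum(z, A @ z + q)) <= tol:
            return z, k + 1
    return z, max_iter

if __name__ == "__main__":
    n = 100
    A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal M-matrix
    q = np.random.default_rng(1).standard_normal(n)
    z, its = modulus_gs_lcp(A, q)
    w = A @ z + q
    print(its, z.min(), w.min(), abs(z @ w))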

7.
We present a nested splitting conjugate gradient iteration method for solving the large sparse continuous Sylvester equation, in which both coefficient matrices are (non-Hermitian) positive semi-definite and at least one of them is positive definite. This method is in fact an inner/outer iteration scheme, which employs the Sylvester conjugate gradient method as the inner iteration to approximate each outer iterate, while each outer iteration is induced by a convergent and Hermitian positive definite splitting of the coefficient matrices. Convergence conditions of this method are studied, and numerical experiments show its efficiency. In addition, we show that the quasi-Hermitian splitting can induce accurate, robust and effective preconditioned Krylov subspace methods.

8.
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax=λBx. We first formulate a version of inexact inverse subspace iteration in which the approximation from one step is used as an initial approximation for the next step. We then analyze the convergence property, which relates the accuracy in the inner iteration to the convergence rate of the outer iteration. In particular, the linear convergence property of the inverse subspace iteration is preserved. Numerical examples are given to demonstrate the theoretical results.

9.
In this paper, we study an inexact inverse iteration with inner-outer iterations for solving the generalized eigenvalue problem Ax = λBx, and analyze how the accuracy in the inner iterations affects the convergence of the outer iterations. By considering a special stopping criterion depending on a threshold parameter, we show that the outer iteration converges linearly with the inner threshold parameter as the convergence rate. We also discuss the total amount of work and the asymptotic equivalence between this stopping criterion and a more standard one. Numerical examples are given to illustrate the theoretical results.

10.
In this paper we study inexact inverse iteration for solving the generalised eigenvalue problem Ax = λMx. We show that inexact inverse iteration is a modified Newton method and hence obtain convergence rates for various versions of inexact inverse iteration for the calculation of an algebraically simple eigenvalue. In particular, if the inexact solves are carried out with a tolerance chosen proportional to the eigenvalue residual then quadratic convergence is achieved. We also show how modifying the right hand side in inverse iteration still provides a convergent method, but the rate of convergence will be quadratic only under certain conditions on the right hand side. We discuss the implications of this for the preconditioned iterative solution of the linear systems. Finally we introduce a new ILU preconditioner which is a simple modification to the usual preconditioner, but which has advantages both for the standard form of inverse iteration and for the version with a modified right hand side. Numerical examples are given to illustrate the theoretical results. AMS subject classification (2000): 65F15, 65F10
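A minimal Python sketch of the inner-outer structure discussed above: a fixed-shift inexact inverse iteration for Ax = λMx in which the shifted system is solved only approximately by GMRES, with the inner tolerance tied to the current relative eigenpair residual (one simple instance of choosing the tolerance proportional to the eigenvalue residual). The shift, the tolerance rule and the test pencil are illustrative assumptions, no preconditioner is used, and the rtol keyword assumes a recent SciPy (older releases call it tol).

import numpy as np
from scipy.sparse.linalg import gmres

def inexact_inverse_iteration(A, M, sigma, tol=1e-9, max_outer=100):
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    shifted = A - sigma * M
    lam = sigma
    for k in range(max_outer):
        lam = (x @ (A @ x)) / (x @ (M @ x))          # Rayleigh quotient for the pencil
        r = A @ x - lam * (M @ x)
        rel_res = np.linalg.norm(r) / (np.linalg.norm(A @ x) + abs(lam) * np.linalg.norm(M @ x))
        if rel_res <= tol:
            return lam, x, k
        inner_tol = min(1e-1, 0.1 * rel_res)         # inexact solve: tolerance shrinks with the residual
        y, _ = gmres(shifted, M @ x, rtol=inner_tol, atol=0.0, restart=50)
        x = y / np.linalg.norm(y)
    return lam, x, max_outer

if __name__ == "__main__":
    n = 100
    rng = np.random.default_rng(2)
    A = np.diag(np.arange(1.0, n + 1)) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
    M = np.eye(n) + 0.1 * np.diag(rng.random(n))     # symmetric positive definite "mass" matrix
    lam, x, its = inexact_inverse_iteration(A, M, sigma=0.4)
    print(its, lam, np.linalg.norm(A @ x - lam * (M @ x)))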

11.
Recently, Xue et al. [28] discussed the Smith method for solving the Sylvester equation $AX+XB=C$, where at least one of the matrices $A$ and $B$ is a nonsingular $M$-matrix and the other is a (singular or nonsingular) $M$-matrix. Furthermore, in order to find the minimal non-negative solution of a certain class of non-symmetric algebraic Riccati equations, Gao and Bai (2010) considered a doubling iteration scheme to inexactly solve the Sylvester equations. This paper discusses the iteration error of the standard Smith method used by Gao and Bai (2010) and presents a priori estimates of the exact solution $X$ of the Sylvester equation. Furthermore, we give a new version of the Smith method for solving the discrete-time Sylvester (Stein) equation $AXB+X=C$; this new version can also be used to solve the Sylvester equation $AX+XB=C$ when both $A$ and $B$ are positive definite matrices. We also study the convergence rate of the new Smith method. Finally, numerical examples are given to illustrate the effectiveness of our methods.
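For the Stein form AXB + X = C, a squared ("doubling") Smith-type recursion can be written down directly from the series X = sum_{j>=0} (-1)^j A^j C B^j, which converges when rho(A)rho(B) < 1. The sketch below shows only this generic doubling recursion, not the specific new version or the a priori error estimates developed in the paper, and the scaled random test matrices are an illustrative assumption.

import numpy as np

def smith_stein(A, B, C, tol=1e-12, max_doublings=60):
    # Partial sums of X = sum_{j>=0} (-1)^j A^j C B^j accumulated by squaring:
    # S_1 = C - A C B, and S_{k+1} = S_k + A^(2^k) S_k B^(2^k) for k >= 1.
    X, P, Q = C.copy(), A.copy(), B.copy()
    X = X - P @ X @ Q                      # S_1
    P, Q = P @ P, Q @ Q                    # A^2, B^2
    for _ in range(max_doublings):
        update = P @ X @ Q
        X = X + update
        if np.linalg.norm(update) <= tol * np.linalg.norm(X):
            break
        P, Q = P @ P, Q @ Q
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 40
    A = rng.random((n, n))
    A *= 0.5 / np.linalg.norm(A, 2)        # spectral radius at most 1/2
    B = rng.random((n, n))
    B *= 0.5 / np.linalg.norm(B, 2)
    C = rng.random((n, n))
    X = smith_stein(A, B, C)
    print(np.linalg.norm(A @ X @ B + X - C))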

12.
This paper is concerned with the numerical solution of large scale Sylvester equations AX−XB=C, with Lyapunov equations included as an important special case, where C has very small rank. For stable Lyapunov equations, Penzl (2000) [22] and Li and White (2002) [20] demonstrated that the so-called Cholesky factor ADI method with decent shift parameters can be very effective. In this paper we present a generalization of the Cholesky factor ADI method for Sylvester equations. An easily implementable extension of Penzl's shift strategy for the Lyapunov equation is presented for the current case. It is demonstrated that Galerkin projection via ADI subspaces often produces much more accurate solutions than ADI solutions.

13.
The Newton iteration method can be used to find the minimal non-negative solution of a certain class of non-symmetric algebraic Riccati equations. However, a serious bottleneck in efficiency and storage exists in the implementation of the Newton iteration method, which comes from using direct methods to solve the involved Sylvester equations exactly. In this paper, instead of direct methods, we apply a fast doubling iteration scheme to inexactly solve the Sylvester equations. Hence, a class of inexact Newton iteration methods is obtained that uses the Newton iteration method as the outer iteration and the doubling iteration scheme as the inner iteration. The corresponding procedure is precisely described and two practical, monotonically convergent methods are presented algorithmically. In addition, the convergence property of these new methods is studied and numerical results are given to show their feasibility and effectiveness for solving the non-symmetric algebraic Riccati equations. Copyright © 2010 John Wiley & Sons, Ltd.
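To make the Newton outer iteration concrete, the sketch below applies Newton's method to a Riccati equation of the form XCX - XD - AX + B = 0, for which each Newton step reduces to the Sylvester equation (A - X_k C) X_{k+1} + X_{k+1} (D - C X_k) = B - X_k C X_k. Here that inner equation is solved exactly with SciPy's dense Sylvester solver, which is precisely the step the paper replaces by an inexact doubling scheme; the sign convention and the small M-matrix test data are illustrative assumptions.

import numpy as np
from scipy.linalg import solve_sylvester

def newton_nare(A, B, C, D, tol=1e-12, max_iter=50):
    X = np.zeros((A.shape[0], D.shape[0]))          # X_0 = 0, the usual starting guess
    for k in range(max_iter):
        # Newton step written directly for the new iterate X_{k+1};
        # the right-hand side uses the current iterate X_k.
        X = solve_sylvester(A - X @ C, D - C @ X, B - X @ C @ X)
        res = np.linalg.norm(X @ C @ X - X @ D - A @ X + B)
        if res <= tol:
            return X, k + 1
    return X, max_iter

if __name__ == "__main__":
    A = D = np.array([[3.0, -1.0], [-1.0, 3.0]])
    B = C = np.array([[1.0, 0.5], [0.5, 1.0]])
    X, its = newton_nare(A, B, C, D)
    print(its, np.linalg.norm(X @ C @ X - X @ D - A @ X + B))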

14.
We consider a path following algorithm for solving linear complementarity problems with positive semi-definite matrices. This algorithm can start from any interior solution and attain a linear rate of convergence. Moreover, if the starting solution is appropriately chosen, this algorithm achieves a complexity of O(√m L) iterations, where m is the number of variables and L is the size of the problem encoding in binary. We present a simple complexity analysis for this algorithm, which is based on a new Lyapunov function for measuring the nearness to optimality. This Lyapunov function has itself interesting properties that can be used in a line search to accelerate convergence. We also develop an inexact line search procedure in which the line search stepsize is obtainable in closed form. Finally, we extend this algorithm to handle directly variables which are unconstrained in sign and whose corresponding matrix is positive definite. The rate of convergence of this extended algorithm is shown to be independent of the number of such variables. This research is partially supported by the U.S. Army Research Office, contract DAAL03-86-K-0171 (Center for Intelligent Control Systems), and by the National Science Foundation, grant NSF-ECS-8519058.

15.
By further generalizing the skew-symmetric triangular splitting iteration method studied by Krukier, Chikina and Belokon (Applied Numerical Mathematics, 41 (2002), pp. 89–105), in this paper, we present a new iteration scheme, called the modified skew-Hermitian triangular splitting iteration method, for solving strongly non-Hermitian systems of linear equations with positive definite coefficient matrices. We discuss the convergence property and the optimal parameters of this new method in depth. Moreover, when it is applied to precondition Krylov subspace methods like GMRES, the preconditioning property of the modified skew-Hermitian triangular splitting iteration is analyzed in detail. Numerical results show that, as both solver and preconditioner, the modified skew-Hermitian triangular splitting iteration method is very effective for solving large sparse positive definite systems of linear equations with strong skew-Hermitian parts.

16.
Newton-HSS methods, which are variants of inexact Newton methods different from the Newton–Krylov methods, have been shown to be competitive methods for solving large sparse systems of nonlinear equations with positive-definite Jacobian matrices (J. Comput. Math. 2010; 28:235–260). In that paper, only local convergence was proved. In this paper, we prove a Kantorovich-type semilocal convergence. Then we introduce Newton-HSS methods with a backtracking strategy and analyse their global convergence. Finally, these globally convergent Newton-HSS methods are shown to work well on several typical examples using different forcing terms to stop the inner iterations. Copyright © 2010 John Wiley & Sons, Ltd.

17.
By further generalizing the modified skew-Hermitian triangular splitting iteration methods studied in [L. Wang, Z.-Z. Bai, Skew-Hermitian triangular splitting iteration methods for non-Hermitian positive definite linear systems of strong skew-Hermitian parts, BIT Numer. Math. 44 (2004) 363-386], in this paper, we present a new iteration scheme, called the product-type skew-Hermitian triangular splitting iteration method, for solving strongly non-Hermitian systems of linear equations with positive definite coefficient matrices. We discuss the convergence property and the optimal parameters of this method. Moreover, when it is applied to precondition Krylov subspace methods, the preconditioning property of the product-type skew-Hermitian triangular splitting iteration is analyzed in detail. Numerical results show that the product-type skew-Hermitian triangular splitting iteration method can produce high-quality preconditioners for Krylov subspace methods for solving large sparse positive definite systems of linear equations with strong skew-Hermitian parts.

18.
Based on the separable property of the linear and nonlinear terms and on the Hermitian and skew-Hermitian splitting of the coefficient matrix, we present the Picard-HSS and the nonlinear HSS-like iteration methods for solving a class of large scale systems of weakly nonlinear equations. The advantage of these methods over the Newton and the Newton-HSS iteration methods is that they do not require explicit construction and accurate computation of the Jacobian matrix, and only need to solve linear sub-systems with constant coefficient matrices. Hence, computational workloads and computer memory may be saved in actual implementations. Under suitable conditions, we establish local convergence theorems for both the Picard-HSS and the nonlinear HSS-like iteration methods. Numerical implementations show that both the Picard-HSS and the nonlinear HSS-like iteration methods are feasible, effective, and robust nonlinear solvers for this class of large scale systems of weakly nonlinear equations.
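A minimal Python sketch of the Picard-HSS structure for a weakly nonlinear system A x = G(x), where the linear term dominates: the outer Picard step freezes the nonlinear term, and the resulting constant-coefficient linear system is solved only approximately by a small fixed number of HSS sweeps, so no Jacobian is ever formed. The test problem, the nonlinearity G, the parameter alpha and the fixed inner sweep count are illustrative assumptions.

import numpy as np

def hss_sweeps(A, b, x, alpha, n_sweeps):
    # a fixed number of HSS sweeps on A x = b, starting from x
    n = A.shape[0]
    H, S = (A + A.T) / 2, (A - A.T) / 2
    I = np.eye(n)
    for _ in range(n_sweeps):
        x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)   # Hermitian half-step
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)   # skew-Hermitian half-step
    return x

def picard_hss(A, G, x0, alpha=1.0, n_sweeps=5, tol=1e-10, max_outer=200):
    x = x0.copy()
    for k in range(max_outer):
        b = G(x)                              # freeze the nonlinear term
        x = hss_sweeps(A, b, x, alpha, n_sweeps)
        if np.linalg.norm(A @ x - G(x)) <= tol:
            return x, k + 1
    return x, max_outer

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 50
    A = 2 * np.eye(n) + 0.3 * (rng.random((n, n)) - 0.5)   # non-symmetric, positive definite
    b0 = rng.standard_normal(n)
    G = lambda x: 0.1 * np.sin(x) + b0                     # weak nonlinearity
    x, its = picard_hss(A, G, np.zeros(n))
    print(its, np.linalg.norm(A @ x - G(x)))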

19.
We consider the computation of an eigenvalue and corresponding eigenvector of a Hermitian positive definite matrix A, assuming that good approximations of the wanted eigenpair are already available, as may be the case in applications such as structural mechanics. We analyze efficient implementations of inexact Rayleigh quotient-type methods, which involve the approximate solution of a linear system at each iteration by means of the Conjugate Residuals method. We show that the inexact version of the classical Rayleigh quotient iteration is mathematically equivalent to a Newton approach. New insightful bounds relating the inner and outer recurrences are derived. In particular, we show that even if in the inner iterations the norm of the residual for the linear system decreases very slowly, the eigenvalue residual is reduced substantially. Based on the theoretical results, we examine stopping criteria for the inner iteration. We also discuss and motivate a preconditioning strategy for the inner iteration in order to further accelerate the convergence. Numerical experiments illustrate the analysis.

20.
The Hermitian and skew-Hermitian splitting (HSS) method has proved quite successful in solving large sparse non-Hermitian positive definite systems of linear equations. Recently, by making use of the HSS method as the inner iteration, the Newton-HSS method for solving systems of nonlinear equations with non-Hermitian positive definite Jacobian matrices has been proposed by Bai and Guo. It has been shown that the Newton-HSS method outperforms the Newton-USOR and the Newton-GMRES iteration methods. In this paper, a class of modified Newton-HSS methods for solving large systems of nonlinear equations is discussed. In our method, the modified Newton method, whose R-order of convergence is at least three, is used to solve the nonlinear equations, and the HSS method is applied to approximately solve the Newton equations. For this class of inexact Newton methods, local and semilocal convergence theorems are proved under suitable conditions. Moreover, a globally convergent modified Newton-HSS method is introduced and a basic global convergence theorem is proved. Numerical results are given to confirm the effectiveness of our method.
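The structure of a modified Newton-HSS step can be sketched as follows: each outer step is the two-step modified Newton method with the Jacobian frozen within the step, and both Newton equations are solved only approximately by HSS sweeps built from the Jacobian's Hermitian and skew-Hermitian parts. The test system, the parameter alpha and the fixed inner sweep count are illustrative assumptions; the paper's inner stopping rules and globalization strategy are not reproduced here.

import numpy as np

def hss_solve(J, rhs, alpha=1.0, n_sweeps=10):
    # approximate solve of J d = rhs by a fixed number of HSS sweeps from d = 0
    n = J.shape[0]
    H, S = (J + J.T) / 2, (J - J.T) / 2
    I = np.eye(n)
    d = np.zeros(n)
    for _ in range(n_sweeps):
        d = np.linalg.solve(alpha * I + H, (alpha * I - S) @ d + rhs)
        d = np.linalg.solve(alpha * I + S, (alpha * I - H) @ d + rhs)
    return d

def modified_newton_hss(F, J, x0, alpha=1.0, tol=1e-10, max_outer=50):
    x = x0.copy()
    for k in range(max_outer):
        Jk = J(x)                                  # Jacobian reused for both half-steps
        y = x - hss_solve(Jk, F(x), alpha)
        x = y - hss_solve(Jk, F(y), alpha)
        if np.linalg.norm(F(x)) <= tol:
            return x, k + 1
    return x, max_outer

if __name__ == "__main__":
    n = 20
    L = 2 * np.eye(n) - np.eye(n, k=1)             # non-symmetric positive definite linear part
    F = lambda x: L @ x + 0.1 * x**3 - 1.0         # weakly nonlinear test system
    J = lambda x: L + 0.3 * np.diag(x**2)          # its Jacobian
    x, its = modified_newton_hss(F, J, np.zeros(n))
    print(its, np.linalg.norm(F(x)))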
