1.
An iterative method is proposed for solving generalized coupled Sylvester matrix equations, based on a matrix form of the least-squares QR-factorization (LSQR) algorithm. With this iterative method, by selecting special initial matrices we can obtain the minimum Frobenius norm solutions, or the minimum Frobenius norm least-squares solutions over certain constrained matrices, such as symmetric, generalized bisymmetric and (R,S)-symmetric matrices. Meanwhile, the optimal approximations to given matrices can be derived by solving the corresponding new generalized coupled Sylvester matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the proposed method.
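As a minimal, hedged sketch of the underlying idea only (not the authors' coupled or constrained algorithm), the snippet below applies SciPy's LSQR to a single Sylvester equation AX + XB = C through its vectorized linear operator; the sizes and random matrices are illustrative assumptions.

```python
# A minimal sketch: LSQR applied to one Sylvester equation A X + X B = C via a
# matrix-free linear operator. The coupled/constrained cases of the paper
# build on the same operator idea.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 6, 4
A, B = rng.standard_normal((m, m)), rng.standard_normal((n, n))
X_true = rng.standard_normal((m, n))
C = A @ X_true + X_true @ B

def matvec(x):                      # x = vec(X) -> vec(A X + X B)
    X = x.reshape(m, n)
    return (A @ X + X @ B).ravel()

def rmatvec(y):                     # adjoint: Y -> A^T Y + Y B^T
    Y = y.reshape(m, n)
    return (A.T @ Y + Y @ B.T).ravel()

L = LinearOperator((m * n, m * n), matvec=matvec, rmatvec=rmatvec)
x = lsqr(L, C.ravel(), atol=1e-12, btol=1e-12)[0]   # minimum-norm LS solution
print(np.linalg.norm(x.reshape(m, n) - X_true))     # small error expected
```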

2.
We present a nested splitting conjugate gradient iteration method for solving the large sparse continuous Sylvester equation in which both coefficient matrices are (non-Hermitian) positive semi-definite and at least one of them is positive definite. The method is an inner/outer iteration scheme: each outer iteration is induced by a convergent, Hermitian positive definite splitting of the coefficient matrices, and the Sylvester conjugate gradient method is employed as the inner iteration to approximate each outer iterate. Convergence conditions of this method are studied, and numerical experiments show its efficiency. In addition, we show that the quasi-Hermitian splitting can induce accurate, robust and effective preconditioned Krylov subspace methods.
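The sketch below shows only the inner-iteration ingredient in its simplest setting, under the assumption that A and B are real symmetric positive definite (so the Sylvester operator itself is symmetric positive definite); the outer splitting of the paper is not reproduced, and the data are random stand-ins.

```python
# A minimal sketch of a "Sylvester conjugate gradient" inner solve: CG applied
# to the operator X -> A X + X B, which is SPD when A and B are SPD.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
m, n = 8, 5
A = rng.standard_normal((m, m)); A = A @ A.T + m * np.eye(m)   # SPD
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # SPD
X_true = rng.standard_normal((m, n))
C = A @ X_true + X_true @ B

op = LinearOperator((m * n, m * n),
                    matvec=lambda x: (A @ x.reshape(m, n)
                                      + x.reshape(m, n) @ B).ravel())
x, info = cg(op, C.ravel())
print(info, np.linalg.norm(x.reshape(m, n) - X_true))
```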

3.

The basic aim of this article is to present a novel efficient matrix approach for solving second-order linear matrix partial differential equations (MPDEs) under given initial conditions. To impose the given initial conditions on the main MPDEs, the associated matrix integro-differential equations (MIDEs) with partial derivatives are obtained by direct integration with respect to the spatial variable x and the time variable t. Operational matrices of differentiation and integration, together with the completeness of Bernoulli polynomials, are then used to reduce the obtained MIDEs to corresponding algebraic Sylvester equations. Using two well-known Krylov subspace iterative methods (GMRES(10) and Bi-CGSTAB), we provide two algorithms for solving the resulting Sylvester equations. A numerical example is provided to show the efficiency and accuracy of the presented approach.
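The Bernoulli reduction step is not reproduced here; the hedged sketch below only illustrates the final stage, i.e. handing a (here randomly generated) Sylvester equation AX + XB = C in vectorized Kronecker form to GMRES(10) and Bi-CGSTAB.

```python
# A hedged sketch of the last step only: solve A X + X B = C after
# vectorisation, using GMRES with restart 10 and Bi-CGSTAB.
import numpy as np
from scipy.sparse.linalg import gmres, bicgstab

rng = np.random.default_rng(2)
m, n = 5, 4
A = rng.standard_normal((m, m)) + m * np.eye(m)
B = rng.standard_normal((n, n)) + n * np.eye(n)
X_true = rng.standard_normal((m, n))
C = A @ X_true + X_true @ B

# Row-major vec: vec(A X + X B) = (A kron I + I kron B^T) vec(X)
K = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B.T)
x_g, _ = gmres(K, C.ravel(), restart=10)       # GMRES(10)
x_b, _ = bicgstab(K, C.ravel())                # Bi-CGSTAB
print(np.linalg.norm(x_g.reshape(m, n) - X_true),
      np.linalg.norm(x_b.reshape(m, n) - X_true))
```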


4.
Recently, Xue et al. \cite{28} discussed the Smith method for solving the Sylvester equation $AX+XB=C$, where one of the matrices $A$ and $B$ is at least a nonsingular $M$-matrix and the other is a (singular or nonsingular) $M$-matrix. Furthermore, in order to find the minimal non-negative solution of a certain class of non-symmetric algebraic Riccati equations, Gao and Bai \cite{gao-2010} considered a doubling iteration scheme that solves the Sylvester equations inexactly. This paper discusses the iteration error of the standard Smith method used in \cite{gao-2010} and presents a priori estimates of the exact solution $X$ of the Sylvester equation. Furthermore, we give a new version of the Smith method for solving the discrete-time Sylvester (Stein) equation $AXB+X=C$; this new version can also be used to solve the Sylvester equation $AX+XB=C$ when both $A$ and $B$ are positive definite matrices. We also study the convergence rate of the new Smith method. Finally, numerical examples are given to illustrate the effectiveness of our methods.
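For orientation, the snippet below sketches the classical (squared) Smith iteration, not the paper's new variant: it solves the Stein equation X = A X B + C under the assumption rho(A) rho(B) < 1; the form A X B + X = C is obtained by replacing A with -A. All data are illustrative.

```python
# Classical squared Smith iteration for X = A X B + C, assuming
# rho(A) * rho(B) < 1; each step doubles the number of terms in the
# partial sum  X = sum_j A^j C B^j.
import numpy as np

def smith(A, B, C, iters=20):
    X, Ak, Bk = C.copy(), A.copy(), B.copy()
    for _ in range(iters):
        X = X + Ak @ X @ Bk
        Ak, Bk = Ak @ Ak, Bk @ Bk          # squaring step
    return X

rng = np.random.default_rng(3)
n = 6
A = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # small spectral radius
B = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)
X_true = rng.standard_normal((n, n))
C = X_true - A @ X_true @ B                          # so X_true solves X = A X B + C
print(np.linalg.norm(smith(A, B, C) - X_true))
```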

5.
Recently, Ding and Chen [F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM J. Control Optim. 44 (2006) 2269-2284] developed a gradient-based iterative method for solving a class of coupled Sylvester matrix equations. The basic idea is to regard the unknown matrices to be solved as parameters of a system to be identified, so that the iterative solutions are obtained by applying the hierarchical identification principle. In this note, by considering the coupled Sylvester matrix equation as a linear operator equation, we give a natural way to derive this algorithm. We also propose some faster algorithms and present some numerical results.
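A minimal sketch of the gradient-based idea for the single equation AX + XB = C follows (the paper treats the coupled case): steepest descent on the squared Frobenius-norm residual with a fixed, conservatively chosen step size, which is an assumption here rather than the step rule of the cited work.

```python
# Gradient-based iteration for A X + X B = C: move along the negative gradient
# of ||A X + X B - C||_F^2 with a fixed step mu.
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, n)) + n * np.eye(n)
X_true = rng.standard_normal((n, n))
C = A @ X_true + X_true @ B

X = np.zeros((n, n))
mu = 1.0 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2   # conservative step
for _ in range(5000):
    R = C - A @ X - X @ B               # residual
    X = X + mu * (A.T @ R + R @ B.T)    # negative gradient direction
print(np.linalg.norm(X - X_true))
```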

6.
In this paper, we study the alternating direction implicit (ADI) iteration for solving the continuous Sylvester equation AX + XB = C, where the coefficient matrices A and B are assumed to be positive semi-definite (not necessarily Hermitian), and at least one of them is positive definite. We first analyze the convergence of the ADI iteration for this class of Sylvester equations, then derive an upper bound for the contraction factor of the ADI iteration. To reduce its computational complexity, we further propose an inexact variant of the ADI iteration, which employs Krylov subspace methods as inner iteration processes at each step of the outer ADI iteration. Its convergence is also analyzed in detail. Numerical experiments are given to illustrate the effectiveness of both the ADI and the inexact ADI iterations.
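A minimal sketch of the (exact-solve) ADI iteration with a single fixed shift p > 0 is given below; the shift choice and matrices are illustrative assumptions, and the paper's shift analysis and inexact Krylov inner solves are not reproduced.

```python
# ADI iteration for A X + X B = C with one fixed shift p:
#   (A + p I) X_half = C - X (B - p I)
#   X_new (B + p I) = C - (A - p I) X_half
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T / n + np.eye(n)   # positive definite
B = rng.standard_normal((n, n)); B = B @ B.T / n + np.eye(n)
X_true = rng.standard_normal((n, n))
C = A @ X_true + X_true @ B

p, I = 2.0, np.eye(n)
X = np.zeros((n, n))
for _ in range(50):
    X_half = np.linalg.solve(A + p * I, C - X @ (B - p * I))
    # second half-step solved via the transposed system
    X = np.linalg.solve((B + p * I).T, (C - (A - p * I) @ X_half).T).T
print(np.linalg.norm(X - X_true))
```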

7.
In this paper, we present preconditioned generalized accelerated overrelaxation (GAOR) methods for solving linear systems arising from a class of weighted linear least-squares problems. Two kinds of preconditioning are proposed, each containing three preconditioners. We compare the spectral radii of the iteration matrices of the preconditioned and the original methods. The comparison results show that the convergence rate of the preconditioned GAOR methods is indeed better than that of the original method whenever the original method is convergent. Finally, a numerical example is presented to confirm these theoretical results.

8.
A preconditioned iterative method, namely a preconditioned generalized accelerated overrelaxation (GAOR) method, is presented for solving a class of weighted linear least-squares problems, and some convergence and comparison results are obtained. The comparison results show that whenever the original iterative method converges, the preconditioned iterative method has a better convergence rate than the original one. Moreover, the effectiveness of the new preconditioned iterative method is verified by a numerical example.

9.
We study inexact subspace iteration for solving generalized non-Hermitian eigenvalue problems with spectral transformation, with focus on a few strategies that help accelerate preconditioned iterative solution of the linear systems of equations arising in this context. We provide new insights into a special type of preconditioner with “tuning” that has been studied for this algorithm applied to standard eigenvalue problems. Specifically, we propose an alternative way to use the tuned preconditioner to achieve similar performance for generalized problems, and we show that these performance improvements can also be obtained by solving an inexpensive least squares problem. In addition, we show that the cost of iterative solution of the linear systems can be further reduced by using deflation of converged Schur vectors, special starting vectors constructed from previously solved linear systems, and iterative linear solvers with subspace recycling. The effectiveness of these techniques is demonstrated by numerical experiments.

10.
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to nonlinear eigenvalue problems with very large sparse ill-conditioned matrices monotonically depending on the spectral parameter. To compute the smallest eigenvalue of large-scale matrix nonlinear eigenvalue problems, we suggest preconditioned iterative methods: the preconditioned simple iteration method, the preconditioned steepest descent method, and the preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence and derive grid-independent error estimates for these methods. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a model problem.
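The hedged sketch below shows the flavour of such solvers for the *linear* symmetric special case only (not the paper's nonlinear eigenvalue problem or its specific methods): SciPy's LOBPCG computes the smallest eigenvalue of a large sparse SPD matrix using only matrix-vector products and a preconditioner, here an incomplete LU factorization chosen for illustration.

```python
# Preconditioned eigensolver sketch (linear special case): smallest eigenvalue
# of the 1-D Laplacian via LOBPCG with an ILU preconditioner.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lobpcg, spilu

n = 400
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')  # SPD, ill-conditioned
ilu = spilu(A, drop_tol=1e-4)                        # incomplete factorisation
M = LinearOperator((n, n), matvec=ilu.solve)         # preconditioner-vector product

rng = np.random.default_rng(7)
X0 = rng.standard_normal((n, 1))
vals, vecs = lobpcg(A, X0, M=M, largest=False, tol=1e-10, maxiter=200)
print(vals[0], 4 * np.sin(np.pi / (2 * (n + 1))) ** 2)   # computed vs. exact smallest eigenvalue
```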

11.
Preconditioned iterative solvers for Sylvester tensor equations are considered in this paper. By fully exploiting the structure of the tensor equation, we propose a projection method based on the tensor format, which needs fewer flops and less storage than the standard projection method. The structure of the coefficient matrices of the tensor equation is used to design a nearest Kronecker product (NKP) preconditioner, which is easy to construct and is able to accelerate the convergence of the iterative solver. Numerical experiments are presented to show the good performance of the approaches.
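As a hedged, matrix-free sketch of the object being solved (the NKP preconditioner and the tensor-format projection method themselves are not shown), the Sylvester tensor operator can be applied via mode products and handed to a Krylov solver; sizes and coefficient matrices below are illustrative.

```python
# Sylvester tensor equation  X x_1 A1 + X x_2 A2 + X x_3 A3 = C,
# applied matrix-free with mode products and solved by GMRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(8)
n1, n2, n3 = 6, 5, 4
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
A3 = rng.standard_normal((n3, n3)) + n3 * np.eye(n3)

def sylv_op(x):
    X = x.reshape(n1, n2, n3)
    Y = (np.einsum('ij,jkl->ikl', A1, X)      # mode-1 product
         + np.einsum('ij,kjl->kil', A2, X)    # mode-2 product
         + np.einsum('ij,klj->kli', A3, X))   # mode-3 product
    return Y.ravel()

X_true = rng.standard_normal((n1, n2, n3))
c = sylv_op(X_true.ravel())
L = LinearOperator((n1 * n2 * n3,) * 2, matvec=sylv_op)
x, info = gmres(L, c, restart=30)
print(info, np.linalg.norm(x.reshape(n1, n2, n3) - X_true))
```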

12.
This paper is concerned with solutions to the so-called coupled Sylvester-conjugate matrix equations, which include the generalized Sylvester matrix equation and the coupled Lyapunov matrix equation as special cases. An iterative algorithm is constructed to solve this kind of matrix equation. With the proposed algorithm, the existence of a solution to a coupled Sylvester-conjugate matrix equation can be determined automatically. When the considered matrix equation is consistent, it is proven, by using a real inner product on complex matrix spaces as a tool, that a solution can be obtained within a finite number of iteration steps for any initial values in the absence of round-off errors. Another feature of the proposed algorithm is that it can be implemented using the original coefficient matrices and does not require transforming the coefficient matrices into any canonical forms. The algorithm is also generalized to a more general case. Two numerical examples are given to illustrate the effectiveness of the proposed methods.

13.
Projection methods have emerged as competitive techniques for solving large-scale matrix Lyapunov equations. We explore the numerical solution of this class of linear matrix equations when a Minimal Residual (MR) condition is used during the projection step. We derive both a new direct method and a preconditioned operator-oriented iterative solver based on CGLS for solving the projected reduced least-squares problem. Numerical experiments with benchmark problems show the effectiveness of an MR approach over a Galerkin procedure using the same approximation space.
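For context, the hedged sketch below implements only the Galerkin projection baseline the paper compares against (not the MR variant or the CGLS solver), assuming a rank-one right-hand side and a randomly generated stable coefficient matrix: project onto a Krylov subspace and solve the small Lyapunov equation directly.

```python
# Galerkin-projection sketch for A X + X A^T + b b^T = 0 on K_m(A, b).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(9)
n, m = 60, 20
A = -(rng.standard_normal((n, n)) / np.sqrt(n) + 2 * np.eye(n))   # stable
b = rng.standard_normal(n)

# Arnoldi: orthonormal basis V of the Krylov subspace K_m(A, b)
V = np.zeros((n, m)); V[:, 0] = b / np.linalg.norm(b)
for j in range(m - 1):
    w = A @ V[:, j]
    w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)     # orthogonalise
    V[:, j + 1] = w / np.linalg.norm(w)

H = V.T @ A @ V                                  # projected coefficient matrix
rhs = V.T @ b
Y = solve_continuous_lyapunov(H, -np.outer(rhs, rhs))
X = V @ Y @ V.T                                  # approximate solution
print(np.linalg.norm(A @ X + X @ A.T + np.outer(b, b)))   # residual norm (shrinks as m grows)
```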

14.
In this paper, we propose a new, distinctive version of a generalized Newton method for solving nonsmooth equations. The iterative formula is not of the classic Newton type, but an exponential one. Moreover, it uses matrices from the B-differential instead of the generalized Jacobian. We prove local convergence of the method and present some numerical examples.

15.
In this paper, we consider large-scale linear discrete ill-posed problems in which the right-hand side contains noise. Regularization techniques such as Tikhonov regularization are needed to control the effect of the noise on the solution. In many applications, such as image restoration, the coefficient matrix is given as a Kronecker product of two matrices, and the Tikhonov regularization problem then leads to a generalized Sylvester matrix equation. For large-scale problems, we use the global-GMRES method, which is an orthogonal projection method onto a matrix Krylov subspace. We present some theoretical results and give numerical tests in image restoration.
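A hedged sketch of the structure being exploited: for a Kronecker-product operator K = A ⊗ B, the Tikhonov normal equations become the matrix equation B^T B X A^T A + λX = B^T G A. Below this is solved matrix-free with plain CG as a simple stand-in for the paper's global-GMRES; the random matrices, noise level and λ are illustrative assumptions, not real blur operators.

```python
# Tikhonov regularisation with Kronecker structure, written as a matrix
# equation and solved with CG on the vectorised SPD operator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(10)
p, q, lam = 30, 25, 1e-1
A, B = rng.standard_normal((q, q)), rng.standard_normal((p, p))
X_exact = rng.standard_normal((p, q))
G = B @ X_exact @ A.T + 1e-3 * rng.standard_normal((p, q))   # "blurred" data + noise

BtB, AtA, RHS = B.T @ B, A.T @ A, B.T @ G @ A

op = LinearOperator((p * q, p * q),
                    matvec=lambda x: (BtB @ x.reshape(p, q) @ AtA
                                      + lam * x.reshape(p, q)).ravel())
x, info = cg(op, RHS.ravel(), maxiter=3000)
print(info, np.linalg.norm(x.reshape(p, q) - X_exact) / np.linalg.norm(X_exact))
```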

16.
This paper discusses some applications of statistical condition estimation (SCE) to the problem of solving linear systems. Specifically, triangular and bidiagonal matrices are studied in some detail as typical of structured matrices. Such structure, when properly respected, leads to condition estimates that are much less conservative than those of traditional non-statistical methods of condition estimation. Some examples of linear systems and Sylvester equations are presented. Vandermonde and Cauchy matrices are also studied as representative of linear systems with large condition numbers that can nonetheless be solved accurately; SCE reflects this. Moreover, SCE applied to the solution of very large linear systems by iterative solvers, including conjugate gradient and multigrid methods, performs equally well, and various examples are given to illustrate the performance. SCE for solving large linear systems with direct methods, such as methods for semi-separable structures, is also investigated. In all cases, the advantages of using SCE are manifold: ease of use, efficiency, and reliability.
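As a heavily simplified, hedged illustration of the statistical flavour only (this is a crude Monte-Carlo perturbation estimate, not the SCE algorithm of the paper), one can gauge the sensitivity of x = A^{-1}b by re-solving with small random relative perturbations; all sizes and the perturbation level are assumptions.

```python
# Crude statistical sensitivity estimate for solving A x = b.
import numpy as np

rng = np.random.default_rng(11)
n, delta, samples = 50, 1e-8, 20
A = rng.standard_normal((n, n)); b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

ratios = []
for _ in range(samples):
    dA = delta * rng.standard_normal((n, n)) * np.abs(A)   # small relative perturbations
    db = delta * rng.standard_normal(n) * np.abs(b)
    dx = np.linalg.solve(A + dA, b + db) - x
    ratios.append((np.linalg.norm(dx) / np.linalg.norm(x)) / delta)

print(np.mean(ratios), np.linalg.cond(A))   # statistical estimate vs. 2-norm condition number
```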

17.
The simulation of large-scale fluid flow applications often requires the efficient solution of extremely large nonsymmetric linear and nonlinear sparse systems of equations arising from the discretization of systems of partial differential equations. While preconditioned conjugate gradient methods work well for symmetric, positive-definite matrices, other methods are necessary to treat large, nonsymmetric matrices. The applications may also involve highly localized phenomena which can be addressed via local and adaptive grid refinement techniques. These local refinement methods usually cause non-standard grid connections which destroy the bandedness of the matrices and the associated ease of solution and vectorization of the algorithms. The use of preconditioned conjugate gradient or conjugate-gradient-like iterative methods in large-scale reservoir simulation applications is briefly surveyed. Then, some block preconditioning methods for adaptive grid refinement via domain decomposition techniques are presented and compared. These techniques are being used efficiently in existing large-scale simulation codes.

18.
It is well known that the ordering of the unknowns can have a significant effect on the convergence of a preconditioned iterative method and on its implementation on a parallel computer. To exploit this, we introduce a block red-black coloring to increase the degree of parallelism in the application of the block ILU preconditioner for solving sparse linear systems arising from convection-diffusion equations discretized using a finite difference scheme (the five-point operator). We study the preconditioned GMRES (PGMRES) iterative method for solving these linear systems.
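The hedged sketch below assembles the individual ingredients in a simplified form (a point red-black reordering and a pointwise ILU, not the paper's block coloring or block ILU): a five-point convection-diffusion matrix is reordered and solved with ILU-preconditioned GMRES; the grid size and convection strength are illustrative.

```python
# Five-point convection-diffusion matrix, red-black reordering, and
# ILU-preconditioned GMRES.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

N = 30                                            # N x N interior grid
h, beta = 1.0 / (N + 1), 10.0                     # mesh size, convection strength
I = sp.identity(N)
T = sp.diags([-1 - beta * h / 2, 2, -1 + beta * h / 2], [-1, 0, 1], shape=(N, N))
S = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N))
A = (sp.kron(I, T) + sp.kron(S, I)).tocsr()       # 5-point operator (scaled by h^2)

# red-black permutation: grid points with even i+j first, odd i+j second
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
perm = np.argsort(((i + j) % 2).ravel(), kind='stable')
Ap = A[perm, :][:, perm]

b = np.ones(A.shape[0])
ilu = spilu(Ap.tocsc(), drop_tol=1e-4)
M = LinearOperator(Ap.shape, matvec=ilu.solve)    # preconditioner
x, info = gmres(Ap, b[perm], M=M, restart=30)
print(info, np.linalg.norm(Ap @ x - b[perm]))
```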

19.
20.
Preconditioned SOR methods for generalized least-squares problems
1. Introduction. The generalized least squares problem, defined as

min over x in R^n of (Ax - b)^T W^{-1} (Ax - b),   (1.1)

where A is an m x n real matrix with m > n, b is in R^m, and W is an m x m symmetric and positive definite matrix, is frequently found when solving problems in statistics, engineering and economics. For example, we get generalized least squares problems when solving nonlinear regression analysis by quasi-likelihood estimation, image reconstruction problems, and economic models obtained by the maximum likelihood method (cf. [1,2]). Paige [3,4] investigates the problem explicitly. He changes the orig…
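For reference, the minimal sketch below only sets up and directly solves the generalized least-squares problem via the weighted normal equations A^T W^{-1} A x = A^T W^{-1} b; it is not the preconditioned SOR iteration studied in the paper, and the random data are illustrative.

```python
# Generalized least squares  min (Ax - b)^T W^{-1} (Ax - b)  via the
# weighted normal equations, with W symmetric positive definite.
import numpy as np

rng = np.random.default_rng(12)
m, n = 40, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = rng.standard_normal((m, m)); W = L @ L.T + m * np.eye(m)   # SPD weight matrix

Winv_A = np.linalg.solve(W, A)
x = np.linalg.solve(A.T @ Winv_A, Winv_A.T @ b)                # normal equations
print(np.linalg.norm(A.T @ np.linalg.solve(W, A @ x - b)))     # optimality residual ~ 0
```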
