Similar Documents
20 similar documents found.
1.
An improved gradient-based iterative algorithm is proposed for solving the Sylvester matrix equation and the Lyapunov matrix equation. By constructing a special matrix splitting, the algorithm combines the ideas of the Jacobi iteration and the gradient iteration, and it improves the iteration efficiency compared with known gradient algorithms. The convergence of the algorithm under suitable initial conditions is also studied. Numerical examples verify the effectiveness of the algorithm.
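For context, the basic (unimproved) gradient iteration for the Sylvester equation AX + XB = C can be sketched in numpy as follows. This is a minimal illustration of the standard gradient scheme, not the Jacobi-splitting improvement described in the abstract; the conservative step-size rule is an assumption chosen to guarantee convergence.

```python
import numpy as np

def gradient_sylvester(A, B, C, mu=None, iters=2000):
    """Basic gradient iteration for AX + XB = C (a sketch only; the
    abstract's improved variant uses a special matrix splitting)."""
    n, m = A.shape[0], B.shape[0]
    X = np.zeros((n, m))
    if mu is None:
        # conservative step size: mu * ||L||^2 <= 1 for L(X) = AX + XB
        mu = 1.0 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2
    for _ in range(iters):
        R = C - A @ X - X @ B                 # current residual
        X = X + mu * (A.T @ R + R @ B.T)      # steepest-descent step
    return X

# small well-conditioned example with a known solution
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [1.0, 3.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X_true + X_true @ B
X = gradient_sylvester(A, B, C)
print(np.allclose(X, X_true, atol=1e-6))
```

The iteration is steepest descent on the squared Frobenius residual, so it converges whenever the step size is below twice the reciprocal of the squared operator norm of X ↦ AX + XB; the improved algorithms in the abstract aim to beat this plain scheme's rate.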

2.
3.
4.
This paper is concerned with weighted least squares solutions to general coupled Sylvester matrix equations. Gradient based iterative algorithms are proposed to solve this problem. This type of iterative algorithm covers a wide class of iterative algorithms, two special cases of which are studied in detail in this paper. Necessary and sufficient conditions guaranteeing the convergence of the proposed algorithms are presented, together with sufficient conditions that are easy to compute. The optimal step sizes that maximize the convergence rates of the algorithms, which are properly defined in this paper, are established. Several special cases of the weighted least squares problem are solved simultaneously: a least squares solution to the coupled Sylvester matrix equations, solutions to the general coupled Sylvester matrix equations, and a weighted least squares solution to a linear matrix equation. Several numerical examples are given to illustrate the effectiveness of the proposed algorithms.

5.
Weighted FOM and GMRES for solving nonsymmetric linear systems
Essai, Azeddine. Numerical Algorithms 18(3-4):277-292, 1998.
This paper presents two new methods called WFOM and WGMRES, which are variants of FOM and GMRES, for solving large sparse nonsymmetric linear systems. To accelerate the convergence, these new methods use a different inner product instead of the Euclidean one; furthermore, at each restart a different inner product is chosen. The weighted Arnoldi process is introduced to implement these methods. After describing the weighted methods, we give the relations that link them to FOM and GMRES. Experimental results show the good performance of the new methods compared with FOM(m) and GMRES(m).
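The weighted Arnoldi process underlying WFOM/WGMRES can be sketched as follows: the Krylov basis is orthonormalized in a D-inner product ⟨x, y⟩_D = yᵀDx with D diagonal and positive. The choice of weights below is an arbitrary assumption for illustration; the papers' restart policy for re-choosing D is omitted.

```python
import numpy as np

def weighted_arnoldi(A, b, d, m):
    """Arnoldi process in the D-inner product <x, y>_D = y.T @ (d * x),
    where d holds the positive diagonal of D (sketch of the weighted
    Arnoldi process behind WFOM/WGMRES)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.sqrt(b @ (d * b))        # D-normalize the start vector
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt in D
            H[i, j] = V[:, i] @ (d * w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.sqrt(w @ (d * w))
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, m = 12, 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
d = np.abs(rng.standard_normal(n)) + 0.1      # assumed positive weights
V, H = weighted_arnoldi(A, b, d, m)
G = V.T @ (d[:, None] * V)                    # Gram matrix in D
print(np.allclose(G, np.eye(m + 1), atol=1e-8))
```

The basis is D-orthonormal rather than Euclidean-orthonormal, while the usual Arnoldi relation A V_m = V_{m+1} H still holds, which is what lets WFOM/WGMRES reuse the FOM/GMRES projection machinery.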

6.
We introduce a deflation method that takes advantage of the IRA method by extracting a GMRES solution from the Krylov basis computed within the Arnoldi process of the IRA method itself. The deflation is well suited because it is done with eigenvectors associated with the eigenvalues closest to zero, which are approximated by IRA very quickly. With a slight modification, we adapt it to the FOM algorithm, and then to GMRES enhanced by imposing constraints within the minimization condition. The use of IRA enables us to reduce the number of matrix-vector products while keeping storage low.

7.
8.
9.
In this paper a modified gradient based algorithm for solving Sylvester equations is presented. Unlike the gradient based method introduced by Ding and Chen [7] and the relaxed gradient based algorithm proposed by Niu et al. [18], the information generated in the first half-iteration step is fully exploited and used to construct the approximate solution. Theoretical analysis shows that the new method converges under certain assumptions. Numerical results are given to verify the efficiency of the new method.

10.
Pseudoeigenvalues have been extensively studied for highly nonnormal matrices. This paper focuses on the corresponding pseudoeigenvectors. The properties and uses of pseudoeigenvector bases are investigated, and it is shown that pseudoeigenvector bases can be much better conditioned than eigenvector bases. The stability and varying quality of pseudoeigenvector bases are examined, and applications are considered, including the matrix exponential. Several aspects of GMRES convergence are also discussed, including why using approximate eigenvectors to deflate eigenvalues can be effective even when there is no basis of eigenvectors.
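The defining inequality is easy to check numerically: (z, v) is an ε-pseudoeigenpair if ‖Av − zv‖ ≤ ε‖v‖, and for a fixed z the best unit vector v is the right singular vector of A − zI belonging to its smallest singular value. The matrix and shift below are arbitrary assumptions for illustration, not the paper's basis constructions.

```python
import numpy as np

# For a given z, the minimizer of ||A v - z v|| over unit vectors v is
# the right singular vector of A - z I for the smallest singular value,
# and the minimum equals that singular value (a sketch of the
# pseudoeigenvector definition only).
rng = np.random.default_rng(6)
n = 5
A = np.triu(rng.standard_normal((n, n)))   # a nonnormal (triangular) example
z = A[0, 0] + 0.01                         # assumed shift near an eigenvalue
U, S, Vt = np.linalg.svd(A - z * np.eye(n))
v = Vt[-1]                                 # minimizing unit vector
eps = S[-1]                                # the achieved residual norm
print(np.linalg.norm(A @ v - z * v) <= eps + 1e-10)
```

For highly nonnormal A, eps can be tiny even when z is far from the spectrum, which is exactly why pseudoeigenvectors can behave very differently from eigenvectors.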

11.
This paper is concerned with iterative solutions to a class of complex matrix equations, which include some previously investigated matrix equations as special cases. By applying the hierarchical identification principle, an iterative algorithm is constructed to solve this class of matrix equations. Using the real representation of a complex matrix as a tool, a sufficient condition is presented that guarantees the proposed algorithm converges for an arbitrary initial matrix. Some properties of the real representation also yield a convergence condition, stated in terms of the original coefficient matrices, that is easier to compute. A numerical example is employed to illustrate the effectiveness of the proposed methods.
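One standard real representation of a complex matrix, which turns complex matrix equations into real ones of doubled size, can be sketched as follows. The block layout used here is one common convention and is an assumption; the paper's exact representation may differ.

```python
import numpy as np

def real_rep(M):
    """Map M = M1 + i*M2 to the real block matrix [[M1, -M2], [M2, M1]]
    (one standard real representation; a sketch of the tool the
    abstract mentions, not necessarily the paper's convention)."""
    M1, M2 = M.real, M.imag
    return np.block([[M1, -M2], [M2, M1]])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
# the map is additive and multiplicative, so a complex matrix equation
# can be rewritten as an equivalent real one of doubled dimension
print(np.allclose(real_rep(A @ B), real_rep(A) @ real_rep(B)))
```

Because the map preserves sums and products, convergence conditions derived for the real system translate back to conditions on the original complex coefficient matrices.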

12.
Translated from Vychislitel'nye Kompleksy i Modelirovanie Slozhnykh Sistem, pp. 28-36, Moscow State University, 1989.

13.
14.
An iterative method is proposed to solve generalized coupled Sylvester matrix equations, based on a matrix form of the least-squares QR-factorization (LSQR) algorithm. By selecting special initial matrices for this iterative method, we can obtain the minimum Frobenius norm solutions, or the minimum Frobenius norm least-squares solutions, over certain constrained matrices, such as symmetric, generalized bisymmetric, and (R,S)-symmetric matrices. Meanwhile, the optimal approximations to given matrices can be derived by solving the corresponding new generalized coupled Sylvester matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the present method.
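The core idea of running LSQR "in matrix form" can be sketched with a matrix-free vectorization of a single Sylvester equation AX + XB = C: LSQR only needs the action of the operator and its adjoint, never an explicit Kronecker matrix. This is a generic sketch under that simplification; the paper works with coupled equations and constrained solution sets, which are omitted here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

n, m = 4, 3
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X_true = rng.standard_normal((n, m))
C = A @ X_true + X_true @ B

def mv(x):
    """vec(X) -> vec(A X + X B): the Sylvester operator, matrix-free."""
    X = x.reshape(n, m)
    return (A @ X + X @ B).ravel()

def rmv(y):
    """Adjoint: vec(Y) -> vec(A.T Y + Y B.T)."""
    Y = y.reshape(n, m)
    return (A.T @ Y + Y @ B.T).ravel()

L = LinearOperator((n * m, n * m), matvec=mv, rmatvec=rmv)
x = lsqr(L, C.ravel(), atol=1e-12, btol=1e-12, iter_lim=1000)[0]
X = x.reshape(n, m)
print(np.allclose(A @ X + X @ B, C, atol=1e-6))
```

Starting LSQR from special initial guesses (e.g. within a constrained subspace) is what yields the minimum-norm constrained solutions the abstract describes; the zero start used implicitly here gives the minimum Frobenius norm solution.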

15.
Preconditioned iterative solvers for Sylvester tensor equations are considered in this paper. By fully exploiting the structure of the tensor equation, we propose a projection method based on the tensor format, which needs fewer flops and less storage than the standard projection method. The structure of the coefficient matrices of the tensor equation is used to design the nearest Kronecker product (NKP) preconditioner, which is easy to construct and is able to accelerate the convergence of the iterative solver. Numerical experiments are presented to show the good performance of the approaches.
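The NKP idea can be sketched with Van Loan's classic construction: the best Frobenius-norm approximation M ≈ B ⊗ C is obtained from a rank-1 SVD of a rearranged M. This is a sketch of the general NKP approximation, not the paper's specific preconditioner for Sylvester tensor equations; the block sizes are assumptions.

```python
import numpy as np

def nearest_kronecker(M, shape_B, shape_C):
    """Nearest-Kronecker-product approximation (Van Loan):
    minimize ||M - kron(B, C)||_F via an SVD of a rearranged M."""
    p, q = shape_B
    r, s = shape_C
    # rearrange so each (r x s) block of M becomes one row; then the
    # Kronecker approximation problem becomes a rank-1 approximation
    R = M.reshape(p, r, q, s).transpose(0, 2, 1, 3).reshape(p * q, r * s)
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(S[0]) * U[:, 0].reshape(p, q)
    C = np.sqrt(S[0]) * Vt[0].reshape(r, s)
    return B, C

rng = np.random.default_rng(3)
B0 = rng.standard_normal((2, 2))
C0 = rng.standard_normal((3, 3))
M = np.kron(B0, C0)                      # exactly a Kronecker product
B, C = nearest_kronecker(M, (2, 2), (3, 3))
print(np.allclose(np.kron(B, C), M, atol=1e-10))
```

A Kronecker-structured preconditioner is attractive because applying its inverse reduces to solving small systems with B and C separately, which matches the tensor format of the equation.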

16.
We propose a class of iterative algorithms to solve some tensor equations via the Einstein product. These algorithms use tensor computations with no matricizations involved. For any (special) initial tensor, a solution (the minimal Frobenius norm solution) of the related problems can be obtained within finitely many iteration steps in the absence of roundoff errors. Numerical examples are provided to confirm the theoretical results, which demonstrate that this kind of iterative method is effective and feasible for solving such tensor equations.
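The Einstein product itself is just a contraction over a fixed number of trailing/leading modes, which numpy expresses directly; the tensor sizes below are arbitrary assumptions for illustration, and the iterative algorithms themselves are not reproduced here.

```python
import numpy as np

def einstein_product(A, B, N=2):
    """Einstein product A *_N B: contract the last N modes of A with
    the first N modes of B (the tensor arithmetic the abstract's
    algorithms operate in, with no matricization needed)."""
    return np.tensordot(A, B, axes=N)

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3, 4, 5))
B = rng.standard_normal((4, 5, 2, 3))
C = einstein_product(A, B)               # result has shape (2, 3, 2, 3)
# sanity check: grouping index pairs shows *_2 agrees with ordinary
# matrix multiplication of the corresponding unfoldings
Am = A.reshape(6, 20)
Bm = B.reshape(20, 6)
print(np.allclose(C.reshape(6, 6), Am @ Bm))
```

Although the product agrees with a matricized multiplication, working in the tensor format avoids forming the (possibly huge) unfolded matrices, which is the practical point of matricization-free algorithms.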

17.
We focus on numerically solving a typical type of Hamilton-Jacobi-Bellman (HJB) equation arising from a class of optimal control problems with a standard multidimensional diffusion model. Solving such an equation yields the value function and an optimal feedback control law. Bellman's curse of dimensionality is the main obstacle to the applicability of most numerical algorithms for solving HJB equations. We decompose the HJB equation into a number of lower-dimensional problems and discuss how the usual alternating direction method can be extended to solve it. We present some convergence results as well as preliminary experimental outcomes. This research was funded in part by an RGC grant from the University of Alabama.

18.
In this paper, we consider an explicit solution of the system of Sylvester matrix equations A1V1 - E1V1F1 = B1W1 and A2V2 - E2V2F2 = B2W2, where F1 and F2 are arbitrary matrices and V1, W1, V2, and W2 are the matrices to be determined. First, the matrix polynomial of a block matrix, the Sylvester sum, and the Kronecker product of block matrices are defined. The definitions, lemmas, and theorems needed to develop our method are then stated and proved. Numerical test problems are solved to illustrate the suggested technique.
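For a fixed W, one generic way to make such an equation explicit is Kronecker vectorization: A V − E V F = B W becomes (I ⊗ A − Fᵀ ⊗ E) vec(V) = vec(BW) with column-major vec. This is a standard sketch under that simplification; the paper instead derives parametrized explicit solutions via matrix polynomials, and the sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 4, 2
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
F = rng.standard_normal((p, p))
B = rng.standard_normal((n, p))
W = rng.standard_normal((p, p))

# vec(A V)   = (I kron A)   vec(V)
# vec(E V F) = (F.T kron E) vec(V)   (column-major vec convention)
M = np.kron(np.eye(p), A) - np.kron(F.T, E)
v = np.linalg.solve(M, (B @ W).ravel(order='F'))
V = v.reshape(n, p, order='F')
print(np.allclose(A @ V - E @ V @ F, B @ W, atol=1e-8))
```

The dense Kronecker system has size np x np, which is why explicit parametrized solutions of the kind the paper develops are preferable for larger problems.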

19.
The solution of linear systems continues to play an important role in scientific computing. The problems to be solved are often of very large size, so that solving them requires large computer resources. To solve these problems, supercomputers with large shared memory or massively parallel computer systems with distributed memory are needed.

This paper gives a survey of research on parallel implementations of various direct methods for solving dense linear systems. In particular, the following are considered: Gaussian elimination, Gauss-Jordan elimination, a variant due to Huard (1979), and an algorithm due to Enright (1978), designed in connection with solving (stiff) ODEs so that the stepsize and other method parameters can easily be varied.

Some theoretical results are mentioned, including a new result on the error analysis of Huard's algorithm. Moreover, practical considerations and results of experiments on supercomputers and on a distributed-memory computer system are presented.
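Of the surveyed methods, Gauss-Jordan elimination is the simplest to sketch sequentially: each pivot step eliminates the whole column, so no back-substitution phase remains. This is a plain serial sketch with partial pivoting, not any of the parallel implementations the survey studies.

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Gauss-Jordan elimination with partial pivoting on the augmented
    matrix [A | b] (a serial sketch of one surveyed direct method)."""
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                       # normalize pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # clear the entire column
    return M[:, -1]                           # [A | b] -> [I | x]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = gauss_jordan_solve(A, b)
print(np.allclose(A @ x, b))
```

Eliminating above and below the pivot roughly doubles the flop count versus Gaussian elimination, but the uniform per-column work pattern is part of what made it attractive for the parallel machines discussed in the survey.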


20.