Similar Documents
20 similar documents found.
1.
In this paper we compare Krylov subspace methods with Faber series expansion for approximating the matrix exponential operator on large, sparse, non-symmetric matrices. We consider in particular the case of Chebyshev series, corresponding to an initial estimate of the spectrum of the matrix by a suitable ellipse. Experimental results on large matrices, arising from the space discretization of 2D advection–diffusion problems, demonstrate that the Chebyshev method can be an effective alternative to Krylov techniques. Copyright © 2002 John Wiley & Sons, Ltd.
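
To make the Krylov side of this comparison concrete, below is a minimal NumPy sketch of the standard Arnoldi approximation of exp(A)v from the Krylov subspace K_m(A, v). The helper name arnoldi_expm, the toy 1D advection–diffusion matrix, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm(A, v, m=30):
    """Hypothetical helper: approximate exp(A) @ v from K_m(A, v).

    Builds the Arnoldi relation A V_m ~ V_m H_m and returns the
    standard Krylov estimate beta * V_m @ expm(H_m) @ e_1.
    """
    n = v.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# toy 1D advection-diffusion operator (an illustrative stand-in for the
# paper's 2D test problems) and a short time step tau
n, tau = 200, 1e-6
A = n**2 * (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)) \
    + 0.5 * n * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
v = np.random.default_rng(0).standard_normal(n)
print(np.linalg.norm(arnoldi_expm(tau * A, v) - expm(tau * A) @ v))
```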

2.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX − XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds on the norm of the error. Numerical experiments are also given to show the effectiveness of these block methods.
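
A minimal Galerkin sketch of the large-A/small-B case, assuming AX − XB = C with spec(A) and spec(B) disjoint: project the equation onto a block Krylov subspace built from C, solve the small projected Sylvester equation, and lift back. The power-basis construction below is numerically naive (the paper uses a proper block Arnoldi process), and block_krylov_sylvester is a hypothetical helper name.

```python
import numpy as np
from scipy.linalg import qr, solve_sylvester

def block_krylov_sylvester(A, B, C, m=25):
    """Galerkin sketch for AX - XB = C with A large and B small.

    Builds an orthonormal basis V of K_m(A, C) = span{C, AC, ..., A^{m-1}C},
    projects to (V'AV) Y - Y B = V'C, and lifts X ~ V Y.
    """
    blocks = [C]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        blocks.append(W / np.linalg.norm(W))   # scaling only, for conditioning
    V, _ = qr(np.hstack(blocks), mode='economic')
    Am = V.T @ (A @ V)                         # projected coefficient matrix
    Y = solve_sylvester(Am, -B, V.T @ C)       # small dense Sylvester solve
    return V @ Y

rng = np.random.default_rng(1)
n, p = 500, 4
A = np.diag(np.linspace(1, 10, n)) + 0.01 * rng.standard_normal((n, n))
B = -np.diag(np.linspace(1, 2, p))             # spectra well separated
C = rng.standard_normal((n, p))
X = block_krylov_sylvester(A, B, C)
print(np.linalg.norm(A @ X - X @ B - C))       # Galerkin residual
```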

3.
An overview is given of the simplifications which arise when p-cyclic systems are solved by iterative methods. Besides the classic iterative methods, we treat the Chebyshev semi-iterative method and the OR and MR variants of the class of Krylov subspace methods. Particular emphasis is given to equivalent iterations applied to the cyclically reduced system.
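
For reference, a sketch of the Chebyshev (semi-)iteration in its classical three-term form for an SPD system with known spectral bounds; the p-cyclic simplifications discussed in the paper are not reproduced here.

```python
import numpy as np

def chebyshev_iteration(A, b, lmin, lmax, maxit=200):
    """Chebyshev iteration for Ax = b, A symmetric positive definite
    with spectrum enclosed in [lmin, lmax].

    Needs no inner products, which is the method's main attraction
    over CG on parallel machines.
    """
    theta = 0.5 * (lmax + lmin)      # center of the spectral interval
    delta = 0.5 * (lmax - lmin)      # half-width
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b.copy()
    d = r / theta
    for _ in range(maxit):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x, np.linalg.norm(r)

# demo on a 1D Laplacian, whose eigenvalues are known analytically
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lmin = 2 - 2 * np.cos(np.pi / (n + 1))
lmax = 2 - 2 * np.cos(n * np.pi / (n + 1))
x, res = chebyshev_iteration(A, np.ones(n), lmin, lmax)
print(res)
```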

4.
Jun-Feng Yin, Ken Hayami, Zhong-Zhi Bai, PAMM 2007, 7(1):2020151–2020152
We consider preconditioned Krylov subspace iteration methods, e.g., CG, LSQR and GMRES, for the solution of large sparse least-squares problems min_x ‖Ax − b‖_2, with A ∈ R^{m×n}, based on the Krylov subspaces K_k(BA, Br) and K_k(AB, r), respectively, where B ∈ R^{n×m} is the preconditioning matrix. More concretely, we propose and implement a class of incomplete QR factorization preconditioners based on Givens rotations and analyze in detail the efficiency and robustness of the correspondingly preconditioned Krylov subspace iteration methods. A number of numerical experiments are used to further examine their numerical behaviour. It is shown that for both overdetermined and underdetermined least-squares problems, the preconditioned GMRES methods are the best for large, sparse and ill-conditioned matrices in terms of both CPU time and iteration steps. Also, comparisons with the diagonal scaling and the RIF preconditioners are given to show the superiority of the newly proposed GMRES-type methods. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
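
A small sketch of the left-preconditioned route K_k(BA, Br): run GMRES on the n×n system (BA)x = Bb. For brevity we build B from a complete QR factorization, purely as a stand-in for the paper's incomplete Givens-based QR; with exact QR, BA = I and GMRES converges immediately.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
m, n = 300, 80
A = rng.standard_normal((m, n))               # overdetermined least squares
b = rng.standard_normal(m)

Q, R = qr(A, mode='economic')                 # stand-in for incomplete QR

def apply_B(y):                               # B y = R^{-1} (Q^T y)
    return solve_triangular(R, Q.T @ y)

BA = LinearOperator((n, n), matvec=lambda x: apply_B(A @ x))
x, info = gmres(BA, apply_B(b), restart=20)   # GMRES on K_k(BA, Br)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(info, np.linalg.norm(x - x_ref))
```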

5.
Given a square matrix A, the inverse subspace problem is concerned with determining a closest matrix to A with a prescribed invariant subspace. When A is Hermitian, the closest matrix may be required to be Hermitian. We measure distance in the Frobenius norm and discuss applications to Krylov subspace methods for the solution of large-scale linear systems of equations and eigenvalue problems, as well as to the construction of blurring matrices. Extensions that allow the matrix A to be rectangular, and applications to Lanczos bidiagonalization as well as to the recently proposed subspace-restricted SVD method for the solution of linear discrete ill-posed problems, are also considered. Copyright © 2013 John Wiley & Sons, Ltd.
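
In the unconstrained case, one natural closed form is easy to state: with P = VV* the orthogonal projector onto the prescribed subspace, the invariance constraint only involves the block (I − P)A'P, so the Frobenius-closest matrix simply deletes that block from A. A small numerical check follows; the paper treats the problem in greater generality (Hermitian and rectangular variants).

```python
import numpy as np

def closest_with_invariant_subspace(A, V):
    """Frobenius-closest matrix to A for which range(V) is invariant
    (V assumed to have orthonormal columns): A - (I - P) A P."""
    P = V @ V.T.conj()
    return A - (A @ P - P @ A @ P)

rng = np.random.default_rng(3)
n, k = 8, 3
A = rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
A1 = closest_with_invariant_subspace(A, V)

# invariance check: A1 V stays inside range(V)
print(np.linalg.norm(A1 @ V - V @ (V.T @ A1 @ V)))   # ~ 0
print(np.linalg.norm(A1 - A, 'fro'))                 # distance moved
```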

6.
Newton-HSS methods, which are variants of inexact Newton methods different from the Newton–Krylov methods, have been shown to be competitive methods for solving large sparse systems of nonlinear equations with positive-definite Jacobian matrices (J. Comput. Math. 2010; 28:235–260). In that paper, only local convergence was proved. In this paper, we prove a Kantorovich-type semilocal convergence result. We then introduce Newton-HSS methods with a backtracking strategy and analyse their global convergence. Finally, these globally convergent Newton-HSS methods are shown to work well on several typical examples using different forcing terms to stop the inner iterations. Copyright © 2010 John Wiley & Sons, Ltd.
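
As background, the inner HSS iteration splits A into its Hermitian part H and skew-Hermitian part S and alternates two shifted solves; Newton-HSS applies this to each Jacobian system inside an inexact Newton loop. A minimal dense sketch for a single linear system; hss_solve, the shift choice, and the test matrix are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def hss_solve(A, b, alpha, maxit=100, tol=1e-8):
    """HSS iteration sketch for Ax = b with positive-definite A:
        (alpha*I + H) x_half = (alpha*I - S) x + b
        (alpha*I + S) x_new  = (alpha*I - H) x_half + b
    where H = (A + A')/2 and S = (A - A')/2."""
    n = A.shape[0]
    H = 0.5 * (A + A.T)
    S = 0.5 * (A - A.T)
    I = np.eye(n)
    luH = lu_factor(alpha * I + H)     # factor both shifted matrices once
    luS = lu_factor(alpha * I + S)
    x = np.zeros_like(b)
    for k in range(maxit):
        x_half = lu_solve(luH, (alpha * I - S) @ x + b)
        x = lu_solve(luS, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# non-symmetric but positive-definite test matrix (diffusion + convection)
n = 200
A = (2 * np.eye(n) - 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
     + 0.4 * (np.eye(n, k=1) - np.eye(n, k=-1)))
b = np.ones(n)
x, iters = hss_solve(A, b, alpha=np.sqrt(3.0))   # alpha ~ sqrt(lmin*lmax) of H
print(iters, np.linalg.norm(b - A @ x))
```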

7.
A new iterative scheme is described for the solution of large linear systems of equations with a matrix of the form A = ρU + ζI, where ρ and ζ are constants, U is a unitary matrix and I is the identity matrix. We show that for such matrices a Krylov subspace basis can be generated by recursion formulas with few terms. This leads to a minimal residual algorithm that requires little storage and makes it possible to determine each iterate with fairly little arithmetic work. This algorithm provides a model for iterative methods for non-Hermitian linear systems of equations, in a similar way to the conjugate gradient and conjugate residual algorithms. Our iterative scheme illustrates that results by Faber and Manteuffel [3,4] on the existence of conjugate gradient algorithms with short recurrence relations, and related results by Joubert and Young [13], can be extended.

8.
The convergence problem of many Krylov subspace methods, e.g., FOM, GCR, GMRES and QMR, for solving large unsymmetric (non-Hermitian) linear systems is considered in a unified way when the coefficient matrix A is defective and its spectrum lies in the open right (left) half plane. Related theoretical error bounds are established and some intrinsic relationships between the convergence speed and the spectrum of A are exposed. It is shown that these methods are likely to converge slowly once one of three cases occurs: A is defective, the distribution of its spectrum is not favorable, or the Jordan basis of A is ill conditioned. In the proof, some properties of the higher order derivatives of Chebyshev polynomials in an ellipse in the complex plane are derived, one of which corrects a result that has been used extensively in the literature. Supported by the China State Major Key Project for Basic Researches, the National Natural Science Foundation of China, the Doctoral Program of the Chinese National Educational Commission, the Foundation of Returned Scholars of China and the Liaoning Province Natural Science Foundation.

9.
Novel memory-efficient Arnoldi algorithms for solving matrix polynomial eigenvalue problems are presented. More specifically, we consider the case of matrix polynomials expressed in the Chebyshev basis, which is often numerically more appropriate than the standard monomial basis for larger degrees d. The standard way of solving polynomial eigenvalue problems proceeds by linearization, which increases the problem size by a factor d. Consequently, the memory requirements of Krylov subspace methods applied to the linearization grow by this factor. In this paper, we develop two variants of the Arnoldi method that build the Krylov subspace basis implicitly, so that only vectors of length equal to the size of the original problem need to be stored. The proposed variants are generalizations of the so-called quadratic Arnoldi method and the two-level orthogonal Arnoldi procedure, which were developed for the monomial case. We also show how the typical ingredients of a full implementation of the Arnoldi method, including shift-and-invert and restarting, can be incorporated. Numerical experiments are presented for matrix polynomials up to degree 30 arising from the interpolation of nonlinear eigenvalue problems, which stem from boundary element discretizations of PDE eigenvalue problems. Copyright © 2013 John Wiley & Sons, Ltd.
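
To make the setup concrete, the following builds the dense block colleague linearization of a Chebyshev-basis matrix polynomial, i.e. exactly the d·n-sized pencil whose memory footprint the paper's implicit Arnoldi variants avoid. It uses the recurrence λT_j = (T_{j−1} + T_{j+1})/2; this is one standard linearization, shown as a sketch, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import eig

def chebyshev_linearization_eigs(coeffs):
    """Eigenvalues of P(lam) = sum_j A_j T_j(lam) via a block colleague
    pencil (lam*B - C) w = 0, with w stacking T_0(lam)x, ..., T_{d-1}(lam)x."""
    d = len(coeffs) - 1                      # polynomial degree (d >= 2)
    n = coeffs[0].shape[0]
    B = np.eye(d * n)
    B[-n:, -n:] = coeffs[d]                  # leading block A_d
    C = np.zeros((d * n, d * n))
    C[:n, n:2 * n] = np.eye(n)               # lam*T_0 = T_1
    for j in range(1, d - 1):                # lam*T_j = (T_{j-1} + T_{j+1})/2
        C[j * n:(j + 1) * n, (j - 1) * n:j * n] = 0.5 * np.eye(n)
        C[j * n:(j + 1) * n, (j + 1) * n:(j + 2) * n] = 0.5 * np.eye(n)
    C[-n:, (d - 2) * n:(d - 1) * n] = 0.5 * coeffs[d]
    for j in range(d):                       # fold in P(lam)x = 0 for the last row
        C[-n:, j * n:(j + 1) * n] -= 0.5 * coeffs[j]
    return eig(C, B, right=False)

# scalar sanity check: T_2(lam) = 2*lam^2 - 1 has roots +/- 1/sqrt(2)
A = [np.zeros((1, 1)), np.zeros((1, 1)), np.eye(1)]
print(np.sort(chebyshev_linearization_eigs(A).real))
```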

10.
In this paper, we study the alternating direction implicit (ADI) iteration for solving the continuous Sylvester equation AX + XB = C, where the coefficient matrices A and B are assumed to be positive semi-definite (not necessarily Hermitian), and at least one of them positive definite. We first analyze the convergence of the ADI iteration for solving such a class of Sylvester equations, then derive an upper bound for the contraction factor of this ADI iteration. To reduce its computational complexity, we further propose an inexact variant of the ADI iteration, which employs Krylov subspace methods as its inner iteration processes at each step of the outer ADI iteration. The convergence is also analyzed in detail. Numerical experiments are given to illustrate the effectiveness of both the ADI and the inexact ADI iterations.
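
A single-shift dense sketch of the ADI iteration analysed above; the paper works with general positive semi-definite A and B, optimized shifts and Krylov inner solves, whereas here the shifts are fixed and the shifted systems are solved directly.

```python
import numpy as np

def adi_sylvester(A, B, C, p, q, maxit=50, tol=1e-10):
    """Single-shift ADI sketch for AX + XB = C, alternating
        (A + p*I) X_half = C - X (B - p*I)
        X_new (B + q*I)  = C - (A - q*I) X_half
    with fixed shifts p, q > 0."""
    n, m = C.shape
    In, Im = np.eye(n), np.eye(m)
    X = np.zeros_like(C)
    for j in range(maxit):
        X = np.linalg.solve(A + p * In, C - X @ (B - p * Im))
        X = np.linalg.solve((B + q * Im).T, (C - (A - q * In) @ X).T).T
        res = np.linalg.norm(A @ X + X @ B - C)
        if res <= tol * np.linalg.norm(C):
            return X, j + 1, res
    return X, maxit, res

rng = np.random.default_rng(4)
n, m = 120, 60
A = np.diag(np.linspace(1, 5, n)) + 0.05 * rng.standard_normal((n, n))
B = np.diag(np.linspace(1, 3, m)) + 0.05 * rng.standard_normal((m, m))
C = rng.standard_normal((n, m))
X, iters, res = adi_sylvester(A, B, C, p=2.0, q=2.0)
print(iters, res)
```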

11.
For large sparse block two-by-two real nonsingular matrices, we establish a general framework of practical and efficient structured preconditioners through matrix transformation and matrix approximation. For specific versions such as modified block Jacobi-type, modified block Gauss-Seidel-type, and modified block unsymmetric (symmetric) Gauss-Seidel-type preconditioners, we precisely describe their concrete expressions and deliberately analyze the eigenvalue distributions and the positive definiteness of the preconditioned matrices. We also show that when these structured preconditioners are employed to precondition Krylov subspace methods such as GMRES and restarted GMRES, fast and effective iteration solvers are obtained for large sparse systems of linear equations with block two-by-two coefficient matrices. In particular, these structured preconditioners can lead to efficient and high-quality preconditioning matrices for some typical matrices from real-world applications.
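
A simplified instance of such structured preconditioning, assuming a block lower-triangular (block Gauss-Seidel-type) preconditioner applied within SciPy's GMRES; the paper's modified variants replace the diagonal blocks with cheaper approximations, whereas this sketch factors them exactly.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(5)
n1, n2 = 150, 100
A11 = np.diag(np.linspace(1, 4, n1)) + 0.05 * rng.standard_normal((n1, n1))
A22 = np.diag(np.linspace(1, 2, n2)) + 0.05 * rng.standard_normal((n2, n2))
A12 = 0.1 * rng.standard_normal((n1, n2))
A21 = 0.1 * rng.standard_normal((n2, n1))
K = np.block([[A11, A12], [A21, A22]])     # block two-by-two system
b = rng.standard_normal(n1 + n2)

lu1, lu2 = lu_factor(A11), lu_factor(A22)

def precond(r):
    """Apply P^{-1} for P = [[A11, 0], [A21, A22]] (block forward solve)."""
    y1 = lu_solve(lu1, r[:n1])
    y2 = lu_solve(lu2, r[n1:] - A21 @ y1)
    return np.concatenate([y1, y2])

M = LinearOperator((n1 + n2, n1 + n2), matvec=precond)
x, info = gmres(K, b, M=M, restart=30)
print(info, np.linalg.norm(K @ x - b))
```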

12.
We consider an iterative preconditioning technique for non-convex large-scale optimization. First, we address the solution of large-scale indefinite linear systems by a Krylov subspace method, and describe the iterative construction of a preconditioner that involves neither matrix products nor matrix storage. The set of directions generated by the Krylov subspace method is used, as a by-product, to provide an approximate inverse preconditioner. We then test our preconditioner within truncated Newton schemes for large-scale unconstrained optimization, where we generalize the truncation rule of Nash–Sofer (Oper. Res. Lett. 9:219–221, 1990) to the indefinite case. We use a Krylov subspace method both to approximately solve the Newton equation and to construct the preconditioner used at the current outer iteration. Extensive numerical experience shows that the proposed preconditioning strategy, compared with the unpreconditioned strategy and PREQN (Morales and Nocedal in SIAM J. Optim. 10:1079–1096, 2000), may reduce the overall number of inner iterations. Finally, we show that our proposal has some similarities with the limited memory preconditioners of Gratton et al. (SIAM J. Optim. 21:912–935, 2011).
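
The idea can be sketched under simplifying assumptions (an SPD system and plain CG as the Krylov method): the conjugate directions p_i and curvatures p_i'Ap_i collected during one solve define an operator that acts like A^{-1} on their span and like the identity elsewhere, and this operator preconditions a subsequent system. This is only in the spirit of the paper, not its exact indefinite-case construction.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def cg_collect_directions(A, b, k=30):
    """Run k CG steps on Ax = b, keeping the conjugate directions p_i
    and curvatures p_i' A p_i as a by-product (no extra matvecs)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    P, curv = [], []
    for _ in range(k):
        Ap = A @ p
        d = p @ Ap
        alpha = (r @ r) / d
        P.append(p.copy())
        curv.append(d)
        x += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, np.array(P).T, np.array(curv)

rng = np.random.default_rng(6)
n = 400
Q0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q0 @ np.diag(np.linspace(0.1, 50, n)) @ Q0.T      # SPD test matrix
b = rng.standard_normal(n)

_, P, curv = cg_collect_directions(A, b)
Qp = np.linalg.qr(P)[0]                               # orthonormal basis of span(P)

def apply_Minv(v):                                    # A^{-1}-like on span(P), identity elsewhere
    return P @ ((P.T @ v) / curv) + (v - Qp @ (Qp.T @ v))

M = LinearOperator((n, n), matvec=apply_Minv)
b2 = rng.standard_normal(n)                           # a second, related system

def cg_iters(M=None):
    count = [0]
    cg(A, b2, M=M, maxiter=400, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

print(cg_iters(), cg_iters(M))                        # preconditioned typically needs fewer
```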

13.
The CMRH (Changing Minimal Residual method based on the Hessenberg process) method is a Krylov subspace method for solving large linear systems with non-symmetric coefficient matrices. CMRH generates a (non-orthogonal) basis of the Krylov subspace through the Hessenberg process and minimizes a quasi-residual norm. On dense matrices, the CMRH method is less expensive and requires less storage than other Krylov methods. In this work, we describe Matlab codes for the most efficient of these implementations. Fortran codes for sequential and parallel implementations are also presented.
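
A bare-bones sketch of the method (cmrh is a hypothetical helper, not the published code): generate the non-orthogonal basis with the pivoted Hessenberg process, then minimize the quasi-residual norm ‖βe_1 − H̄y‖. The published Matlab/Fortran codes add in-place storage, pivot bookkeeping and restarting.

```python
import numpy as np

def cmrh(A, b, m=60):
    """CMRH sketch: pivoted Hessenberg process + quasi-residual minimization."""
    n = b.shape[0]
    L = np.zeros((n, m + 1))                 # non-orthogonal Krylov basis
    H = np.zeros((m + 1, m))
    piv = np.zeros(m + 1, dtype=int)         # pivot positions
    u = b.astype(float).copy()
    piv[0] = np.argmax(np.abs(u))
    beta = u[piv[0]]
    L[:, 0] = u / beta
    for j in range(m):
        u = A @ L[:, j]
        for i in range(j + 1):               # eliminate pivot entries (LU-style)
            H[i, j] = u[piv[i]]
            u -= H[i, j] * L[:, i]
        piv[j + 1] = np.argmax(np.abs(u))
        H[j + 1, j] = u[piv[j + 1]]
        if H[j + 1, j] == 0.0:               # breakdown: solution lies in K_j
            m = j + 1
            break
        L[:, j + 1] = u / H[j + 1, j]
    rhs = np.zeros(m + 1)
    rhs[0] = beta
    y = np.linalg.lstsq(H[:m + 1, :m], rhs, rcond=None)[0]
    return L[:, :m] @ y

rng = np.random.default_rng(7)
n = 200
A = 4 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # safely nonsingular
x_true = np.ones(n)
print(np.linalg.norm(cmrh(A, A @ x_true) - x_true))
```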

14.
A method is presented to solve Ax = b by computing optimum iteration parameters for Richardson's method. It requires some information on the location of the eigenvalues of A. The algorithm yields parameters well suited for matrices for which Chebyshev parameters are not appropriate. It therefore supplements the Manteuffel algorithm, developed for the Chebyshev case. Numerical examples are described.
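
For orientation, the simplest member of this family is stationary Richardson with a single parameter: when the spectrum is real and contained in [λ_min, λ_max], α = 2/(λ_min + λ_max) is optimal, and cyclic parameter sets generalize this to spectra the Chebyshev/Manteuffel approach cannot handle. A one-parameter sketch (the paper's contribution, computing good parameter sets, is not reproduced here):

```python
import numpy as np

def richardson(A, b, alpha, maxit=500, tol=1e-8):
    """Stationary Richardson iteration x <- x + alpha * (b - A x)."""
    x = np.zeros_like(b)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x += alpha * r
    return x, maxit

rng = np.random.default_rng(8)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1, 10, n)) @ Q.T   # SPD, spectrum in [1, 10]
b = rng.standard_normal(n)
x, iters = richardson(A, b, alpha=2.0 / (1.0 + 10.0))
print(iters, np.linalg.norm(b - A @ x))
```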

15.
For a given nonderogatory matrix A, formulas are given for functions of A in terms of Krylov matrices of A. Relations between the coefficients of a polynomial of A and the generating vector of a Krylov matrix of A are provided. With these formulas, linear transformations between Krylov matrices and functions of A are introduced, and associated algebraic properties are derived. Hessenberg reduction forms are revisited, equipped with appropriate inner products, and related properties and matrix factorizations are given.
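
The central identity is easy to verify numerically: for nonderogatory A and a cyclic vector v, the Krylov matrix K = [v, Av, ..., A^{n−1}v] satisfies AK = KC, with C the companion matrix of the characteristic polynomial, so p(A) = K p(C) K^{-1} for any polynomial p. A small check (illustrative only, not the paper's general formulas):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6
A = rng.standard_normal((n, n))            # generic, hence nonderogatory
v = rng.standard_normal(n)

# Krylov matrix K = [v, Av, ..., A^{n-1} v]
K = np.column_stack([np.linalg.matrix_power(A, j) @ v for j in range(n)])

coeffs = np.poly(A).real                   # monic characteristic polynomial
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)                 # shift structure of the companion form
C[:, -1] = -coeffs[1:][::-1]               # last column from Cayley-Hamilton
print(np.linalg.norm(A @ K - K @ C))       # ~ 0

p = lambda M: M @ M @ M - 2 * M + 3 * np.eye(n)    # a sample polynomial
print(np.linalg.norm(p(A) - K @ p(C) @ np.linalg.inv(K)))   # ~ 0
```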

16.
In this paper, we present a new type of restarted Krylov method for calculating the smallest eigenvalues of a symmetric positive definite matrix G. The new framework avoids the Lanczos tridiagonalization process and the use of polynomial filtering. This simplifies the restarting mechanism and allows the introduction of several modifications. Convergence is assured by a monotonicity property that pushes the eigenvalues toward their limits. Another innovation is the use of inexact inversions of G to generate the Krylov matrices. In this approach, the inverse of G is approximated by using an iterative method to solve the related linear system. Numerical experiments illustrate the usefulness of the proposed approach.
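
A simplified sketch combining the two main ingredients, assuming SPD G: each outer sweep applies an inexact inverse of G (a few CG steps) to the current basis and extracts Ritz pairs, with no Lanczos tridiagonalization. The helper inexact_invit and all parameters are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.linalg import cg

def inexact_invit(G, k=3, inner=40, outer=15, seed=0):
    """Restarted sketch for the k smallest eigenpairs of SPD G via
    inexact inverse iteration plus Rayleigh-Ritz extraction."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]
    for _ in range(outer):
        W = np.column_stack([cg(G, V[:, j], maxiter=inner)[0]
                             for j in range(k)])   # inexact G^{-1} V
        V = np.linalg.qr(W)[0]
        H = V.T @ (G @ V)                          # Rayleigh quotient matrix
        evals, S = np.linalg.eigh(H)
        V = V @ S                                  # Ritz vectors, ascending
    return evals, V

n = 300
G = np.diag(np.linspace(0.01, 10, n))              # SPD with known spectrum
evals, V = inexact_invit(G)
print(evals[:3])                                   # ~ 0.010, 0.043, 0.077
```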

17.
Norm-minimizing-type methods for solving large sparse linear systems with symmetric and indefinite coefficient matrices are considered. The Krylov subspace can be generated either by the Lanczos approach, as in the methods MINRES, GMRES and QMR, or by a conjugate-gradient approach. Here, we propose an algorithm based on the latter. Some relations among the search directions and the residuals, and how the search directions are related to the Krylov subspace, are investigated. Numerical experiments are reported to verify the convergence properties.
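
For context, the Lanczos-based representative of this family is readily available; below is a quick baseline with SciPy's MINRES on a symmetric indefinite system. The paper's contribution, a conjugate-gradient-style construction of the same norm-minimizing iterates, is not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(10)
n = 500
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
# symmetric indefinite: eigenvalues in [-5, -1] and [1, 5]
A = Q @ np.diag(np.concatenate([np.linspace(-5, -1, n // 2),
                                np.linspace(1, 5, n - n // 2)])) @ Q.T
A = 0.5 * (A + A.T)                     # enforce exact symmetry
b = rng.standard_normal(n)
x, info = minres(A, b)
print(info, np.linalg.norm(A @ x - b))
```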

18.
Zélia da Rocha, Numerical Algorithms 1999, 20(2–3):139–164
This paper is concerned with the Shohat–Favard, Chebyshev and modified Chebyshev methods for d-orthogonal polynomial sequences, d ∈ ℕ. The Shohat–Favard method is presented starting from the concept of the dual sequence of a sequence of polynomials. We deduce the recurrence relations for the Chebyshev and the modified Chebyshev methods for every d ∈ ℕ. The three methods are implemented in the Mathematica programming language, and we present several formal and numerical tests carried out with the software developed.
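
In the classical d = 1 case, the Chebyshev method maps the moments of a measure to the three-term recurrence coefficients of its monic orthogonal polynomials; the sketch below follows the standard σ-table recurrences and checks against the Legendre weight. The algorithm's ill-conditioning for larger n is precisely what motivates the modified Chebyshev method, which is not implemented here.

```python
import numpy as np

def chebyshev_algorithm(mom):
    """Map moments mom[l] = integral of x^l dmu, l = 0..2n-1, to the
    recurrence coefficients (alpha_k, beta_k) of the monic orthogonal
    polynomials (classical sigma-table form of the Chebyshev method)."""
    n = len(mom) // 2
    alpha = np.zeros(n)
    beta = np.zeros(n)
    alpha[0] = mom[1] / mom[0]
    beta[0] = mom[0]
    sig_prev = np.zeros(2 * n)               # sigma_{k-2, l}
    sig = np.array(mom, dtype=float)         # sigma_{k-1, l}
    for k in range(1, n):
        sig_new = np.zeros_like(sig)
        for l in range(k, 2 * n - k):
            sig_new[l] = (sig[l + 1] - alpha[k - 1] * sig[l]
                          - beta[k - 1] * sig_prev[l])
        alpha[k] = sig_new[k + 1] / sig_new[k] - sig[k] / sig[k - 1]
        beta[k] = sig_new[k] / sig[k - 1]
        sig_prev, sig = sig, sig_new
    return alpha, beta

# Legendre weight on [-1, 1]: moments 2/(l+1) for even l, 0 for odd l
n = 5
mom = [2.0 / (l + 1) if l % 2 == 0 else 0.0 for l in range(2 * n)]
alpha, beta = chebyshev_algorithm(mom)
print(alpha)               # ~ 0 (symmetric weight)
print(beta[1:])            # ~ k^2/(4k^2 - 1): 1/3, 4/15, 9/35, 16/63
```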

19.

The basic aim of this article is to present a novel efficient matrix approach for solving second-order linear matrix partial differential equations (MPDEs) under given initial conditions. To impose the given initial conditions on the main MPDEs, the associated matrix integro-differential equations (MIDEs) with partial derivatives are obtained by direct integration with respect to the spatial variable x and the time variable t. Operational matrices of differentiation and integration, together with the completeness of Bernoulli polynomials, are then used to reduce the obtained MIDEs to corresponding algebraic Sylvester equations. Using two well-known Krylov subspace iterative methods (GMRES(10) and Bi-CGSTAB), we provide two algorithms for solving the mentioned Sylvester equations. A numerical example is provided to show the efficiency and accuracy of the presented approach.
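
The final algebraic step can be sketched directly: the Sylvester equation AX + XB = C is vectorized into (I⊗A + Bᵀ⊗I)vec(X) = vec(C) and handed, matrix-free, to the two solvers named above. The test matrices are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres, bicgstab

rng = np.random.default_rng(11)
n, m = 40, 30
A = np.diag(np.linspace(1, 3, n)) + 0.02 * rng.standard_normal((n, n))
B = np.diag(np.linspace(1, 2, m)) + 0.02 * rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

def sylv_matvec(v):
    """Apply (I (x) A + B^T (x) I) to vec(X) without forming it."""
    X = v.reshape(n, m, order='F')          # vec is column-major
    return (A @ X + X @ B).ravel(order='F')

L = LinearOperator((n * m, n * m), matvec=sylv_matvec)
c = C.ravel(order='F')

x1, info1 = gmres(L, c, restart=10)         # GMRES(10)
x2, info2 = bicgstab(L, c)                  # Bi-CGSTAB
X1 = x1.reshape(n, m, order='F')
print(info1, info2, np.linalg.norm(A @ X1 + X1 @ B - C))
```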

20.
We present a comparison of different multigrid approaches for the solution of systems arising from high-order continuous finite element discretizations of elliptic partial differential equations on complex geometries. We consider the pointwise Jacobi, the Chebyshev-accelerated Jacobi and the symmetric successive over-relaxation smoothers, as well as elementwise block Jacobi smoothing. Three approaches for the multigrid hierarchy are compared: (1) high-order h-multigrid, which uses high-order interpolation and restriction between geometrically coarsened meshes; (2) p-multigrid, in which the polynomial order is reduced while the mesh remains unchanged, and the interpolation and restriction incorporate the different-order basis functions; and (3) a first-order approximation multigrid preconditioner constructed using the nodes of the high-order discretization. This latter approach is often combined with algebraic multigrid for the low-order operator and is attractive for high-order discretizations on unstructured meshes, where geometric coarsening is difficult. Based on a simple performance model, we compare the computational cost of the different approaches. Using scalar test problems in two and three dimensions with constant and varying coefficients, we compare the performance of the different multigrid approaches for polynomial orders up to 16. Overall, both h-multigrid and p-multigrid work well; the first-order approximation is less efficient. For constant coefficients, all smoothers work well. For variable coefficients, Chebyshev and symmetric successive over-relaxation smoothing outperform Jacobi smoothing. While all of the tested methods converge in a mesh-independent number of iterations, none of them behaves completely independently of the polynomial order. When multigrid is used as a preconditioner in a Krylov method, the iteration count decreases significantly compared with using multigrid as a solver. Copyright © 2015 John Wiley & Sons, Ltd.
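
A minimal geometric two-grid cycle for the 1D Poisson problem shows the smoother-plus-coarse-correction structure that all three hierarchies above share (weighted Jacobi smoothing, linear transfer operators, exact coarse solve); it is a didactic sketch, far from the paper's high-order 2D/3D setting.

```python
import numpy as np

def two_grid(b, nu=3, omega=2.0 / 3.0, cycles=20):
    """Two-grid V-cycles for the 1D Poisson equation on n interior points."""
    n = b.shape[0]
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    nc = (n - 1) // 2                        # coarse grid: every other node
    Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (2 * h)**2
    D = np.diag(A)
    x = np.zeros(n)
    for _ in range(cycles):
        for _ in range(nu):                  # pre-smoothing: weighted Jacobi
            x += omega * (b - A @ x) / D
        r = b - A @ x
        rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full weighting
        ec = np.linalg.solve(Ac, rc)         # exact coarse-grid correction
        e = np.zeros(n)                      # linear interpolation back
        e[1::2] = ec
        e[0::2] += 0.5 * np.concatenate([ec, [0.0]])
        e[0::2] += 0.5 * np.concatenate([[0.0], ec])
        x += e
        for _ in range(nu):                  # post-smoothing
            x += omega * (b - A @ x) / D
    return x, np.linalg.norm(b - A @ x)

x, res = two_grid(np.ones(127))              # n = 127 so the coarse grid nests
print(res)
```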
