Similar Documents
20 similar documents found.
1.
The global Arnoldi method can be used to compute exterior eigenpairs of a large non-Hermitian matrix A, but it does not work well for interior eigenvalue problems. Based on the global Arnoldi process that generates an F-orthonormal basis of a matrix Krylov subspace, we propose a global harmonic Arnoldi method for computing certain harmonic F-Ritz pairs that are used to approximate some interior eigenpairs. We propose computing the F-Rayleigh quotients of the large non-Hermitian matrix with respect to the harmonic F-Ritz vectors and taking them as new approximate eigenvalues. They are better and more reliable than the harmonic F-Ritz values. The global harmonic Arnoldi method inherits the convergence properties of the harmonic Arnoldi method applied to a larger matrix whose distinct eigenvalues are the same as those of the original matrix. Some properties of the harmonic F-Ritz vectors are presented. As an application, assuming that A is diagonalizable, we show that the global harmonic Arnoldi method is able to solve multiple eigenvalue problems both in theory and in practice. To make the method practical, we develop an implicitly restarted global harmonic Arnoldi algorithm and suggest certain harmonic F-shifts. In particular, this algorithm can be used adaptively to solve multiple eigenvalue problems. Numerical experiments show that the algorithm is efficient for the eigenproblem and is reliable for quite ill-conditioned multiple eigenproblems.
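A minimal Python sketch of the global Arnoldi building block mentioned above, assuming the F-inner product is the Frobenius (trace) inner product <X, Y>_F = trace(X^H Y); the harmonic extraction, F-Rayleigh quotients and implicit restarting of the paper are omitted, and the names (global_arnoldi, etc.) are illustrative.

```python
import numpy as np

def global_arnoldi(A, V, m):
    """Global Arnoldi process: builds an F-orthonormal basis V_1, ..., V_m of the
    matrix Krylov subspace span{V, AV, ..., A^(m-1)V} with respect to the
    Frobenius (trace) inner product <X, Y>_F = trace(X^H Y)."""
    H = np.zeros((m + 1, m), dtype=complex)
    Vs = [V / np.linalg.norm(V, 'fro')]
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):
            H[i, j] = np.trace(Vs[i].conj().T @ W)   # F-inner product
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        if H[j + 1, j] < 1e-14:                      # breakdown
            break
        Vs.append(W / H[j + 1, j])
    return Vs, H

# F-Ritz values: eigenvalues of the leading square part of H approximate
# exterior eigenvalues of A (the harmonic variant targets interior ones).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
Vs, H = global_arnoldi(A, rng.standard_normal((200, 4)), 30)
m = len(Vs) - 1
print(sorted(np.linalg.eigvals(H[:m, :m]), key=abs)[-3:])
```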

2.
In this paper we design a fast new algorithm for reducing an N × N quasiseparable matrix to upper Hessenberg form via a sequence of N − 2 unitary transformations. The new reduction is especially useful when it is followed by the QR algorithm to obtain a complete set of eigenvalues of the original matrix. In particular, it is shown that in a number of cases some recently devised fast adaptations of the QR method for quasiseparable matrices can benefit from using the proposed reduction as a preprocessing step, yielding lower cost and a simplification of implementation.
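For orientation, here is a minimal dense sketch of the Hessenberg-reduction-then-QR pipeline the abstract refers to, using N − 2 Householder similarity transformations; it deliberately ignores the quasiseparable structure (and hence the speedup) that the paper exploits.

```python
import numpy as np

def hessenberg_by_householder(A):
    """Reduce A (N x N) to upper Hessenberg form via N - 2 Householder
    (unitary) similarity transformations; the spectrum is preserved."""
    H = np.array(A, dtype=complex)
    N = H.shape[0]
    for k in range(N - 2):
        x = H[k + 1:, k].copy()
        if np.linalg.norm(x) < 1e-15:
            continue
        alpha = -np.exp(1j * np.angle(x[0])) * np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        v /= np.linalg.norm(v)
        # apply P = I - 2 v v^H from the left and the right (similarity)
        H[k + 1:, k:] -= 2.0 * np.outer(v, v.conj() @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v.conj())
    return H

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
H = hessenberg_by_householder(A)
print(np.allclose(np.tril(H, -2), 0, atol=1e-12))                 # Hessenberg form
print(np.allclose(np.sort(abs(np.linalg.eigvals(H))),
                  np.sort(abs(np.linalg.eigvals(A)))))            # same spectrum
```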

3.
ADI preconditioned Krylov methods for large Lyapunov matrix equations
In the present paper, we propose preconditioned Krylov methods for solving large Lyapunov matrix equations AX + XA^T + BB^T = 0. Such problems appear in control theory, model reduction, circuit simulation and others. Using the Alternating Direction Implicit (ADI) iteration method, we transform the original Lyapunov equation to an equivalent symmetric Stein equation depending on some ADI parameters. We then define the Smith and the low rank ADI preconditioners. To solve the obtained Stein matrix equation, we apply the global Arnoldi method and get low rank approximate solutions. We give some theoretical results and report numerical tests to show the effectiveness of the proposed approaches.
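A minimal sketch of the ADI/Stein reformulation that underlies this approach, assuming a single real negative ADI shift p: the Lyapunov equation AX + XA^T + BB^T = 0 becomes the symmetric Stein equation X = CXC^T + WW^T with C = (A + pI)^{-1}(A − pI) and W = sqrt(−2p)(A + pI)^{-1}B, which is then solved by the plain Smith iteration in low-rank factored form. The paper's global Arnoldi solver and multi-shift preconditioners are not shown.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def smith_lowrank(A, B, p=-1.0, iters=80):
    """Single-shift Smith iteration for AX + XA^T + BB^T = 0 (A stable, p < 0).
    Rewrites the equation as the symmetric Stein equation X = C X C^T + W W^T
    and accumulates the solution in factored form X ~= Z Z^T (no column
    compression is performed in this sketch)."""
    n = A.shape[0]
    I = np.eye(n)
    C = np.linalg.solve(A + p * I, A - p * I)
    W = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    Z, V = W, W
    for _ in range(iters):
        V = C @ V                # next batch of low-rank columns
        Z = np.hstack([Z, V])
    return Z

rng = np.random.default_rng(2)
A = -np.eye(50) + 0.1 * rng.standard_normal((50, 50))      # a stable test matrix
B = rng.standard_normal((50, 2))
Z = smith_lowrank(A, B)
X_ref = solve_continuous_lyapunov(A, -B @ B.T)              # dense reference solution
print(np.linalg.norm(Z @ Z.T - X_ref) / np.linalg.norm(X_ref))
```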

4.
We consider solving eigenvalue problems or model reduction problems for a quadratic matrix polynomial λ²I − λA − B with large and sparse A and B. We propose new Arnoldi and Lanczos type processes which operate on the same space in which A and B live and construct projections of A and B to produce a quadratic matrix polynomial with coefficient matrices of much smaller size, which is used to approximate the original problem. We shall apply the new processes to solve eigenvalue problems and model reduction of a second-order linear input-output system and discuss convergence properties. Our new processes are also extendable to cover a general matrix polynomial of any degree.
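As a point of reference, here is a hedged sketch of the standard companion linearization for a quadratic polynomial of the form Q(λ) = λ²I − λA − B: its eigenvalues are those of the 2n × 2n matrix [[A, B], [I, 0]]. This doubles the problem size, which is exactly what the Arnoldi/Lanczos-type processes of the abstract avoid by projecting A and B directly.

```python
import numpy as np

def quadratic_eigs(A, B):
    """Eigenvalues of Q(lam) = lam^2 I - lam A - B via the companion
    linearization M = [[A, B], [I, 0]]: Q(lam) x = 0 iff lam is an
    eigenvalue of M (with eigenvector [lam*x; x])."""
    n = A.shape[0]
    M = np.block([[A, B], [np.eye(n), np.zeros((n, n))]])
    return np.linalg.eigvals(M)

rng = np.random.default_rng(3)
n = 30
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
lam = quadratic_eigs(A, B)
l0 = lam[np.argmax(np.abs(lam))]            # eigenvalue of largest magnitude
# smallest singular value of Q(l0) should be (numerically) zero
print(np.linalg.svd(l0**2 * np.eye(n) - l0 * A - B, compute_uv=False)[-1])
```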

5.
The harmonic block Arnoldi method can be used to find interior eigenpairs of large matrices. Given a target point or shift τ to which the needed interior eigenvalues are close, the desired interior eigenpairs are the eigenvalues nearest τ and the associated eigenvectors. However, it has been shown that the harmonic Ritz vectors may converge erratically and may even fail to converge. To remedy this, a modified harmonic block Arnoldi method is proposed that replaces the harmonic Ritz vectors by some modified harmonic Ritz vectors. The relationships between the modified harmonic block Arnoldi method and the original one are analyzed. Moreover, how to adaptively adjust shifts during iterations so as to improve convergence is also discussed. Numerical results on the efficiency of the new algorithm are reported.
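A minimal single-vector sketch of harmonic Ritz extraction with respect to a target τ (the building block the block method refines): from an Arnoldi relation A V_m = V_{m+1} Hbar_m, harmonic Ritz pairs are obtained from a small generalized eigenproblem built from Hbar_m and τ. The function names are illustrative, and neither blocking nor the paper's modified vectors are included.

```python
import numpy as np
from scipy.linalg import eig

def arnoldi(A, v, m):
    """Standard Arnoldi factorization A V[:, :m] = V[:, :m+1] Hbar."""
    n = len(v)
    V = np.zeros((n, m + 1))
    Hbar = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            Hbar[i, j] = np.vdot(V[:, i], w)
            w -= Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

def harmonic_ritz(V, Hbar, tau):
    """Harmonic Ritz pairs w.r.t. tau: require (A - tau I)u - (theta - tau)u to be
    orthogonal to (A - tau I) range(V_m), which reduces to a small generalized
    eigenproblem in the Arnoldi quantities."""
    m = Hbar.shape[1]
    Ht = Hbar[:m, :] - tau * np.eye(m)
    em = np.zeros(m); em[-1] = 1.0
    lhs = Ht.conj().T @ Ht + abs(Hbar[m, m - 1])**2 * np.outer(em, em)
    mu, G = eig(lhs, Ht.conj().T)
    return tau + mu, V[:, :m] @ G        # harmonic Ritz values / vectors

rng = np.random.default_rng(4)
A = rng.standard_normal((300, 300))
V, Hbar = arnoldi(A, rng.standard_normal(300), 60)
theta, U = harmonic_ritz(V, Hbar, tau=0.5)
k = np.argmin(np.abs(theta - 0.5))       # approximation nearest the target
u = U[:, k] / np.linalg.norm(U[:, k])
print(theta[k], np.linalg.norm(A @ u - theta[k] * u))
```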

6.
In this note we define EN subspaces by using the Eirola-Nevanlinna algorithm for solving a linear system. We compare this construction with the Arnoldi method for generating Krylov subspaces and computing eigenvalue approximations. Further, we compute Ritz pairs by restricting the updated preconditioner H_k of the EN algorithm to the generated EN subspaces.
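For comparison, a minimal sketch of the generic Rayleigh-Ritz extraction referred to here: given an orthonormal basis Q of any subspace (an ordinary Krylov subspace below; an EN subspace in the note), Ritz pairs come from the small projected matrix Q^H A Q. The Eirola-Nevanlinna updates themselves are not reproduced.

```python
import numpy as np

def ritz_pairs(A, Q):
    """Rayleigh-Ritz: Ritz pairs (theta, Q g) of A with respect to the
    subspace spanned by the orthonormal columns of Q."""
    theta, G = np.linalg.eig(Q.conj().T @ (A @ Q))
    return theta, Q @ G

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 200))
# build an orthonormal basis of a 20-dimensional Krylov subspace
v, cols = np.ones(200), []
for _ in range(20):
    cols.append(v / np.linalg.norm(v))
    v = A @ cols[-1]
Q, _ = np.linalg.qr(np.column_stack(cols))
theta, X = ritz_pairs(A, Q)
i = np.argmax(np.abs(theta))
print(theta[i], np.linalg.norm(A @ X[:, i] - theta[i] * X[:, i]) / np.linalg.norm(X[:, i]))
```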

7.
Stewart's recently introduced Krylov-Schur algorithm is a modification of the implicitly restarted Arnoldi algorithm which employs reordered Schur decompositions to perform restarts and deflations in a numerically reliable manner. This paper describes a variant of the Krylov-Schur algorithm suitable for addressing eigenvalue problems associated with products of large and sparse matrices. It performs restarts and deflations via reordered periodic Schur decompositions and, by taking the product structure into account, it is capable of achieving qualitatively better approximations to eigenvalues of small magnitude. Supported by DFG Research Center Matheon, Mathematics for key technologies, in Berlin.
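A hedged sketch of the basic single-matrix Krylov-Schur restart step, assuming SciPy's sorted complex Schur decomposition; the periodic Schur decompositions and product structure of the paper's variant are not shown, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import schur

def krylov_schur_restart(V, Hbar, k, wanted=np.abs):
    """One Krylov-Schur restart: given A V[:, :m] = V[:, :m+1] Hbar, sort a
    complex Schur form of H = Hbar[:m, :m] so that the k eigenvalues with the
    largest value of `wanted` lead, rotate the basis and truncate.  Returns
    (W, S, b, v_next) with  A W = W S + v_next * b  (b a row vector)."""
    m = Hbar.shape[1]
    H = Hbar[:m, :].astype(complex)
    thresh = np.sort(wanted(np.linalg.eigvals(H)))[-k]
    T, Z, sdim = schur(H, output='complex', sort=lambda z: wanted(z) >= thresh)
    W = V[:, :m] @ Z[:, :k]            # rotated, truncated basis
    S = T[:k, :k]
    b = Hbar[m, m - 1] * Z[m - 1, :k]  # coupling to the next basis vector
    return W, S, b, V[:, m]

# tiny driver: build an Arnoldi relation, then restart to k = 5 wanted values
rng = np.random.default_rng(6)
n, m, k = 400, 40, 5
A = rng.standard_normal((n, n))
V = np.zeros((n, m + 1)); Hbar = np.zeros((m + 1, m))
V[:, 0] = np.ones(n) / np.sqrt(n)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        Hbar[i, j] = V[:, i] @ w
        w -= Hbar[i, j] * V[:, i]
    Hbar[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / Hbar[j + 1, j]
W, S, b, v_next = krylov_schur_restart(V, Hbar, k)
print(np.linalg.norm(A @ W - W @ S - np.outer(v_next, b)))   # ~0: valid Krylov-Schur relation
```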

8.
A new algorithm for the computation of eigenvalues of a nonsymmetric matrix pencil is described. It is a generalization of the shifted and inverted Lanczos (or Arnoldi) algorithm, in which several shifts are used in one run. It computes an orthogonal basis and a small Hessenberg pencil. The eigensolution of the Hessenberg pencil gives Ritz approximations to the solution of the original pencil. It is shown how complex shifts can be used to compute a real block Hessenberg pencil for a real matrix pair. Two applications, one coming from an aircraft stability problem and the other from a hydrodynamic bifurcation, have been tested and results are reported. Dedicated to Carl-Erik Fröberg on the occasion of his 75th birthday.
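A minimal sketch of single-shift shift-and-invert for a matrix pencil (A, B), using SciPy's ARPACK wrapper, which runs Arnoldi on (A − σB)^{-1}B so that pencil eigenvalues nearest the shift σ converge first. The multi-shift, real block Hessenberg machinery of the abstract is not reproduced, and the test matrices are synthetic.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

rng = np.random.default_rng(7)
n = 2000
# a nonsymmetric sparse A and a symmetric positive definite B (as ARPACK expects)
A = sp.diags(np.arange(1.0, n + 1)) + sp.random(n, n, density=1e-3, random_state=rng)
R = sp.random(n, n, density=1e-3, random_state=rng)
B = sp.identity(n, format='csc') + 0.005 * (R + R.T)

sigma = 500.3
vals, vecs = eigs(A.tocsc(), k=4, M=B.tocsc(), sigma=sigma)   # shift-and-invert Arnoldi
print(np.sort(vals.real))        # the four pencil eigenvalues closest to sigma
```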

9.
Two square complex matrices A, B are said to be unitarily congruent if there is a unitary matrix U such that A = UBU^T. The Youla form is a canonical form under unitary congruence. We give a simple derivation of this form using coninvariant subspaces. For the special class of conjugate-normal matrices the associated Youla form is discussed.
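A small numerical illustration, not taken from the paper: two quantities that are invariant under unitary congruence A ↦ UAU^T are the singular values of A and the spectrum of A·conj(A) (since UAU^T · conj(UAU^T) = U (A·conj(A)) U^H), and these are the kinds of invariants the Youla canonical form organizes.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = U @ A @ U.T                          # B is unitarily congruent to A

# invariants of unitary congruence: spectrum of A conj(A) and the singular values
ev = lambda M: np.sort_complex(np.linalg.eigvals(M @ M.conj()))
sv = lambda M: np.linalg.svd(M, compute_uv=False)
print(np.allclose(ev(A), ev(B)))
print(np.allclose(sv(A), sv(B)))
```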

10.
A restarted Arnoldi algorithm is given that computes eigenvalues and eigenvectors. It is related to implicitly restarted Arnoldi, but has a simpler restarting approach. Harmonic and regular Rayleigh-Ritz versions are possible. For multiple eigenvalues, an approach is proposed that first computes eigenvalues with the new harmonic restarted Arnoldi algorithm, then uses random restarts to determine multiplicity. This avoids the need for a block method or for relying on roundoff error to produce the multiple copies.

11.
It is well known that if a matrix A ∈ C^{n×n} solves the matrix equation f(A, A^H) = 0, where f(x, y) is a linear bivariate polynomial, then A is normal; A and A^H can be simultaneously reduced in a finite number of operations to tridiagonal form by a unitary congruence and, moreover, the spectrum of A is located on a straight line in the complex plane. In this paper we present some generalizations of these properties for almost normal matrices which satisfy certain quadratic matrix equations arising in the study of structured eigenvalue problems for perturbed Hermitian and unitary matrices.

12.
In this paper we describe how to compute the eigenvalues of a unitary rank structured matrix in two steps. First we perform a reduction of the given matrix into Hessenberg form; next we compute the eigenvalues of the resulting Hessenberg matrix via an implicit QR-algorithm. Along the way, we explain how the knowledge of a certain 'shift' correction term to the structure can be used to speed up the QR-algorithm for unitary Hessenberg matrices, and how this observation was implicitly used in a paper due to William B. Gragg. We also treat an analogue of this observation in the Hermitian tridiagonal case.

13.
As is well known, a rank-r matrix can be recovered from a cross of r linearly independent columns and rows, and an arbitrary matrix can be interpolated on the cross entries. The remaining entries of this cross or pseudo-skeleton approximation are reproduced with errors that depend on the closeness of the matrix to a rank-r matrix as well as on the choice of the cross. In this paper we extend this construction to d-dimensional arrays (tensors) and suggest a new interpolation formula in which a d-dimensional array is interpolated on the entries of some TT-cross (tensor train-cross). The total number of entries and the complexity of our interpolation algorithm depend linearly on d, so the approach does not suffer from the curse of dimensionality. We also propose a TT-cross method for the computation of d-dimensional integrals and apply it to some examples with dimensionality in the range from d = 100 up to d = 4000 and relative accuracy of order 10^{-10}. In all constructions we capitalize on the new tensor decomposition in the form of tensor trains (TT-decomposition).
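To make the TT format concrete, here is a minimal TT-SVD sketch on a small full tensor; note that TT-cross interpolation, the actual subject of the paper, samples only O(d) selected entries instead of touching the full array, which is what defeats the curse of dimensionality.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """TT (tensor-train) decomposition by sequential truncated SVDs.
    Returns cores G_k of shape (r_{k-1}, n_k, r_k) whose contraction
    reproduces the tensor up to the truncation tolerance."""
    dims = tensor.shape
    d = len(dims)
    cores, r = [], 1
    C = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))            # truncation rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(C.reshape(r, dims[d - 1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back to the full tensor (only for small checks)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
    return full.reshape([G.shape[1] for G in cores])

# a small tensor with TT ranks 2:  T[i1,...,i4] = i1 + i2 + i3 + i4
grids = np.meshgrid(*[np.arange(5.0)] * 4, indexing='ij')
T = sum(grids)
cores = tt_svd(T)
print([G.shape for G in cores])
print(np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T))
```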

14.
Computing the eigenvalues and eigenvectors of a large sparse nonsymmetric matrix arises in many applications and can be a very computationally challenging problem. In this paper we propose the Augmented Block Householder Arnoldi (ABHA) method that combines the advantages of a block routine with an augmented Krylov routine. A public domain MATLAB code ahbeigs has been developed and numerical experiments indicate that the code is competitive with other publicly available codes.

15.
Our goal is to identify and understand matrices A that share essential properties of the unitary Hessenberg matrices M that are fundamental for Szegö's orthogonal polynomials. Those properties include: (i) Recurrence relations connect characteristic polynomials {r_k(x)} of principal minors of A. (ii) A is determined by generators (parameters generalizing the reflection coefficients of unitary Hessenberg theory). (iii) The polynomials {r_k(x)} correspond not only to A but also to a certain "CMV-like" five-diagonal matrix. (iv) The five-diagonal matrix factors into a product BC of block diagonal matrices with 2 × 2 blocks. (v) Submatrices above and below the main diagonal of A have rank 1. (vi) A is a multiplication operator in the appropriate basis of Laurent polynomials. (vii) Eigenvectors of A can be expressed in terms of those polynomials. Condition (v) connects our analysis to the study of quasi-separable matrices, but the factorization requirement (iv) narrows it to the subclass of "Green's matrices" that share properties (i)-(vii). The key tool is "twist transformations" that provide 2n matrices all sharing the characteristic polynomials of principal minors with A. One such twist transformation connects unitary Hessenberg to CMV. Another explains the findings of Fiedler, who noticed that companion matrices give examples outside the unitary Hessenberg framework. We mention briefly the further example of a Daubechies wavelet matrix. Infinite matrices are included.

16.
Let A be a matrix whose sparsity pattern is a tree with maximal degree d_max. We show that if the columns of A are ordered using minimum degree on the pattern of |A| + |A|^T, then factoring A using a sparse LU with partial pivoting algorithm generates only O(d_max n) fill, requires only O(d_max n) operations, and is much more stable than LU with partial pivoting on a general matrix. We also propose an even more efficient and just-as-stable algorithm called sibling-dominant pivoting. This algorithm is a strict partial pivoting algorithm that modifies the column preordering locally to minimize fill and work. It leads to only O(n) work and fill. More conventional column pre-ordering methods that are based (usually implicitly) on the sparsity pattern of |A|^T|A| are not as efficient as the approaches that we propose in this paper.
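A small experiment under assumptions of my own (the random tree, values and orderings below are synthetic): SciPy's SuperLU interface exposes several column pre-orderings, and on a matrix whose pattern is a tree a minimum-degree ordering on the symmetrized pattern keeps fill low, in the spirit of the result above. The paper's sibling-dominant pivoting algorithm itself is not available here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def tree_matrix(n, rng):
    """Unsymmetric matrix whose sparsity pattern is a random tree on n vertices."""
    rows, cols, vals = list(range(n)), list(range(n)), list(4.0 + rng.random(n))
    for v in range(1, n):
        parent = rng.integers(0, v)          # each node attaches to an earlier node
        rows += [v, parent]
        cols += [parent, v]
        vals += list(rng.random(2))
    return sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

rng = np.random.default_rng(9)
A = tree_matrix(2000, rng)
for order in ['NATURAL', 'MMD_AT_PLUS_A', 'COLAMD']:
    lu = splu(A, permc_spec=order)
    print(order, 'fill (nnz of L+U):', lu.L.nnz + lu.U.nnz)
```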

17.
The problem of polynomial least squares fitting in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix of the overdetermined system to be solved in the least squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition of A, the first stage consists of computing the bidiagonal decomposition of the coefficient matrix A. Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of A is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the R-factor of A. Some numerical experiments showing the behavior of this approach are included.
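For context, a minimal sketch of the fitting problem itself: building the rectangular Bernstein-Vandermonde matrix and solving the least squares problem with a generic QR-based solver (numpy.linalg.lstsq). The paper's point is that going through the bidiagonal decomposition of this matrix is numerically preferable to such a generic approach in ill-conditioned cases.

```python
import numpy as np
from scipy.special import comb

def bernstein_vandermonde(x, n):
    """Rectangular Bernstein-Vandermonde matrix: A[i, k] = B_{k,n}(x_i), the
    degree-n Bernstein basis polynomials evaluated at the nodes x_i."""
    k = np.arange(n + 1)
    return comb(n, k) * np.power.outer(x, k) * np.power.outer(1.0 - x, n - k)

# least squares fit of noisy data in the Bernstein basis (degree 5)
rng = np.random.default_rng(10)
x = np.linspace(0.0, 1.0, 100)
y = np.cos(3 * x) + 1e-3 * rng.standard_normal(x.size)
A = bernstein_vandermonde(x, 5)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(A @ coeffs - y))    # residual of the fit
```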

18.
For a given nonderogatory matrix A, formulas are given for functions of A in terms of Krylov matrices of A. Relations between the coefficients of a polynomial of A and the generating vector of a Krylov matrix of A are provided. With these formulas, linear transformations between Krylov matrices and functions of A are introduced, and associated algebraic properties are derived. Hessenberg reduction forms, equipped with appropriate inner products, are revisited, and related properties and matrix factorizations are given.
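A tiny illustration of the basic relation behind these formulas, with illustrative names: for the Krylov matrix K_m(A, b) = [b, Ab, ..., A^{m-1}b], the vector p(A)b equals K_m(A, b) times the coefficient vector of the polynomial p, which is the link between polynomial coefficients and the generating vector.

```python
import numpy as np

def krylov_matrix(A, b, m):
    """Krylov matrix K_m(A, b) = [b, Ab, A^2 b, ..., A^{m-1} b]."""
    cols = [b]
    for _ in range(m - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

# For p(t) = c_0 + c_1 t + ... + c_{m-1} t^{m-1}, the vector p(A) b equals
# K_m(A, b) @ c, linking polynomial coefficients to the generating vector b.
rng = np.random.default_rng(11)
n, m = 8, 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(m)
K = krylov_matrix(A, b, m)
pA = sum(c[j] * np.linalg.matrix_power(A, j) for j in range(m))
print(np.allclose(pA @ b, K @ c))   # True
```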

19.
A fast solution algorithm is proposed for solving block banded block Toeplitz systems with non-banded Toeplitz blocks. The algorithm constructs the circulant transformation of a given Toeplitz system and then, by means of the Sherman-Morrison-Woodbury formula, transforms its inverse into the inverse of the original matrix. The block circulant matrix with Toeplitz blocks is converted to a block diagonal matrix with Toeplitz blocks, and the resulting Toeplitz systems are solved by means of a fast Toeplitz solver. When fast Toeplitz solvers are used, the computational complexity is ξ(m, n, k) = O(mn^3) + O(k^3 n^3) flops, where the matrix has m block rows and m block columns, n is the order of the blocks, and 2k+1 is the bandwidth. The validity of the approach is illustrated by numerical experiments.
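The abstract's method works block-wise on block circulant matrices with Toeplitz blocks plus a Sherman-Morrison-Woodbury correction; the following minimal sketch only shows the scalar building block that the construction rests on, namely that a circulant system is diagonalized by the FFT and can therefore be solved in O(n log n).

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C x = b where C is the circulant matrix with first column c.
    C is diagonalized by the FFT (C = F^{-1} diag(fft(c)) F), so a solve
    costs O(n log n)."""
    return np.real_if_close(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

# check against a dense circulant matrix built explicitly
rng = np.random.default_rng(12)
n = 8
c = rng.standard_normal(n) + 3.0 * np.eye(n)[:, 0]       # well-conditioned first column
C = np.column_stack([np.roll(c, k) for k in range(n)])    # dense circulant for reference
b = rng.standard_normal(n)
x = circulant_solve(c, b)
print(np.allclose(C @ x, b))
```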

20.
For a matrix polynomial P(λ) and a given complex number μ, we introduce a (spectral norm) distance from P(λ) to the matrix polynomials that have μ as an eigenvalue of geometric multiplicity at least κ, and a distance from P(λ) to the matrix polynomials that have μ as a multiple eigenvalue. Then we compute the first distance and obtain bounds for the second one, constructing associated perturbations of P(λ).
