Similar Literature
20 similar documents found (search time: 703 ms)
1.
In this paper we propose a method for computing the roots of a monic matrix polynomial. To this end we compute the eigenvalues of the corresponding block companion matrix C. This is done by implementing the QR algorithm in such a way that it exploits the rank structure of the matrix. Because of this structure, we can represent the matrix in Givens-weight representation. A method similar to that of Chandrasekaran et al. (Oper Theory Adv Appl 179:111–143, 2007), bulge chasing, is used during the QR iteration. For practical use, the matrix C has to be brought into Hessenberg form before the QR iteration starts. During the QR iteration and the transformation to Hessenberg form, the property of the matrix being unitary plus low rank deteriorates numerically; a method to restore this property is used.
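As a point of reference (not the structured Givens-weight method described above), the sketch below builds the block companion matrix of a monic matrix polynomial P(λ) = λ^d I + A_{d-1} λ^{d-1} + ... + A_0 and hands it to a dense QR-based eigensolver. The helper name block_companion and the 2×2 example coefficients are illustrative choices.

```python
import numpy as np

def block_companion(coeffs):
    """Block companion matrix of the monic matrix polynomial
    P(lam) = lam^d * I + coeffs[d-1]*lam^(d-1) + ... + coeffs[0],
    where coeffs is the list of k x k coefficients A_0, ..., A_{d-1}."""
    d = len(coeffs)
    k = coeffs[0].shape[0]
    C = np.zeros((d * k, d * k), dtype=complex)
    # Subdiagonal identity blocks.
    for i in range(1, d):
        C[i * k:(i + 1) * k, (i - 1) * k:i * k] = np.eye(k)
    # Last block column holds -A_0, ..., -A_{d-1}.
    for i, A in enumerate(coeffs):
        C[i * k:(i + 1) * k, (d - 1) * k:d * k] = -A
    return C

# Example: a quadratic 2x2 matrix polynomial lam^2 I + lam A1 + A0.
A0 = np.array([[2.0, 1.0], [0.0, 3.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
C = block_companion([A0, A1])
roots = np.linalg.eigvals(C)      # dense QR eigensolver, not the structured variant
print(np.sort_complex(roots))
```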

2.
3.
This paper describes the use of a generalized isometric Arnoldi algorithm to reduce a unitary matrix, via unitary similarity, to a product of elementary reflectors and permutations. The computation is analogous to the reduction of a unitary matrix to a unitary Hessenberg matrix using the isometric Arnoldi algorithm. In the case in which A is a shift matrix, the reduction provides a novel recurrence for the factor R in the QR factorization of a Toeplitz-like matrix.

4.
Large classes of self-similar (isospectral) flows can be viewed as continuous analogues of certain matrix eigenvalue algorithms. In particular there exist families of flows associated with the QR, LR, and Cholesky eigenvalue algorithms. This paper uses Lie theory to develop a general theory of self-similar flows which includes the QR, LR, and Cholesky flows as special cases. Also included are new families of flows associated with the SR and HR eigenvalue algorithms. The basic theory produces analogues of unshifted, single-step eigenvalue algorithms, but it is also shown how the theory can be extended to include flows which are continuous analogues of shifted and multiple-step eigenvalue algorithms.
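As a concrete instance of such a self-similar flow, the sketch below integrates the Toda/QR flow dL/dt = [L, Π(L)] on a symmetric matrix, with Π(L) the usual skew-symmetric projection, and checks that the spectrum is preserved. This is a textbook formulation used for illustration, not code from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def toda_rhs(t, y, n):
    """Isospectral (Toda/QR) flow dL/dt = [L, Pi(L)], where Pi(L) is the
    skew-symmetric projection L_lower - L_lower^T."""
    L = y.reshape(n, n)
    low = np.tril(L, -1)
    P = low - low.T
    dL = L @ P - P @ L
    return dL.ravel()

n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
L0 = (A + A.T) / 2                       # symmetric starting matrix
sol = solve_ivp(toda_rhs, (0.0, 5.0), L0.ravel(), args=(n,),
                rtol=1e-10, atol=1e-12)
L_end = sol.y[:, -1].reshape(n, n)

# Eigenvalues are invariant along the flow (isospectrality).
print(np.sort(np.linalg.eigvalsh(L0)))
print(np.sort(np.linalg.eigvalsh(L_end)))
```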

5.
A fast method for computing all the eigenvalues of a Hamiltonian matrix M is given. The method relies on orthogonal symplectic similarity transformations which preserve structure and have desirable numerical properties. The algorithm requires about one-fourth the number of floating-point operations and one-half the space of the standard QR algorithm. The computed eigenvalues are shown to be the exact eigenvalues of a matrix M + E where ∥E∥ depends on the square root of the machine precision. The accuracy of a computed eigenvalue depends on both its condition and its magnitude, larger eigenvalues typically being more accurate.
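The sketch below only illustrates the structure such methods exploit: a real Hamiltonian matrix M = [[A, G], [Q, -A^T]] with G and Q symmetric has eigenvalues occurring in ±λ pairs. A generic dense eigensolver is used as a baseline; the structure-preserving orthogonal symplectic algorithm itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
G = (G + G.T) / 2                       # symmetric block
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                       # symmetric block

# Real Hamiltonian matrix M = [[A, G], [Q, -A^T]]; its eigenvalues come in
# {lambda, -lambda} pairs, which structure-preserving methods exploit.
M = np.block([[A, G], [Q, -A.T]])
ev = np.linalg.eigvals(M)               # generic dense eigensolver as a baseline

print(np.sort_complex(ev))
print(np.sort_complex(-ev))             # the same multiset, up to rounding
```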

6.
A new method is presented for the solution of the matrix eigenvalue problem Ax=λBx, where A and B are real symmetric square matrices and B is positive semidefinite. It reduces A and B to diagonal form by congruence transformations that preserve the symmetry of the problem. This method is closely related to the QR algorithm for real symmetric matrices.
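For the positive definite case, SciPy's generalized symmetric eigensolver already realizes such a simultaneous diagonalization by congruence, which the sketch below verifies; handling a merely positive semidefinite B, as in the paper, is not covered by this call and is left out.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # real symmetric
L = rng.standard_normal((n, n))
B = L @ L.T + n * np.eye(n)             # symmetric positive definite here

# Generalized symmetric-definite eigenproblem A x = lambda B x.
w, V = eigh(A, B)

# The eigenvector matrix gives a simultaneous diagonalization by congruence:
# V^T B V = I and V^T A V = diag(w).
print(np.allclose(V.T @ B @ V, np.eye(n)))
print(np.allclose(V.T @ A @ V, np.diag(w)))
```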

7.
In this paper we study both direct and inverse eigenvalue problems for diagonal-plus-semiseparable (dpss) matrices. In particular, we show that the computation of the eigenvalues of a symmetric dpss matrix can be reduced, by a congruence transformation, to solving a generalized symmetric definite tridiagonal eigenproblem. Using this reduction, we devise a set of recurrence relations for evaluating the characteristic polynomial of a dpss matrix in a stable way and in linear time. This in turn allows us to apply divide-and-conquer eigenvalue solvers based on functional iterations directly to dpss matrices without performing any preliminary reduction to tridiagonal form. In the second part of the paper, we exploit the structural properties of dpss matrices to solve the inverse eigenvalue problem of reconstructing a symmetric dpss matrix from its spectrum and some additional information. Finally, applications of our results to the computation of a QR factorization of a Cauchy matrix with real nodes are provided.
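A minimal sketch of the matrix class itself, assuming the common generator convention diag(d) plus the strictly lower part of uv^T mirrored symmetrically; the eigenvalues are computed with a dense baseline rather than the linear-time recurrences of the paper.

```python
import numpy as np

def sym_dpss(d, u, v):
    """Symmetric diagonal-plus-semiseparable matrix: diag(d) plus the strictly
    lower part of u v^T, mirrored into the upper triangle."""
    S = np.tril(np.outer(u, v), -1)
    return np.diag(d) + S + S.T

rng = np.random.default_rng(3)
n = 6
A = sym_dpss(rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n))

# Dense baseline; the paper instead evaluates the characteristic polynomial
# of the dpss matrix by stable linear-time recurrences.
print(np.sort(np.linalg.eigvalsh(A)))
```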

8.
Two recent approaches (Van Overschee, De Moor, N4SID, Automatica 30 (1) (1994) 75; Verhaegen, Int. J. Control 58(3) (1993) 555) in subspace identification problems require the computation of the R factor of the QR factorization of a block-Hankel matrix H, which, in general, has a huge number of rows. Since the data are perturbed by noise, the matrix H is, in general, of full rank. It is well known that, from a theoretical point of view, the R factor of the QR factorization of H is equivalent to the Cholesky factor of the correlation matrix H^T H, apart from multiplication by a sign matrix. In Sima (Proceedings Second NICONET Workshop, Paris-Versailles, December 3, 1999, p. 75), a fast Cholesky factorization of the correlation matrix, exploiting the block-Hankel structure of H, is described. In this paper we consider a fast algorithm to compute the R factor based on the generalized Schur algorithm. The proposed algorithm can also handle the rank-deficient case.
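The equivalence stated above is easy to check numerically: the sketch below compares the R factor of a QR factorization with the (transposed) Cholesky factor of H^T H, up to a diagonal sign matrix. A random tall matrix stands in for the block-Hankel data matrix, and generic NumPy factorizations stand in for the generalized Schur algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
H = rng.standard_normal((200, 10))            # tall full-rank data matrix

R_qr = np.linalg.qr(H, mode='r')              # R factor from QR
R_chol = np.linalg.cholesky(H.T @ H).T        # upper-triangular Cholesky factor of H^T H

# The two factors agree up to the signs of their rows (a diagonal sign matrix).
signs = np.sign(np.diag(R_qr)) * np.sign(np.diag(R_chol))
print(np.allclose(R_qr, np.diag(signs) @ R_chol, atol=1e-8))
```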

9.
Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of a symmetric positive-definite (SPD) Toeplitz matrix. An algorithm for computing upper and lower bounds to the smallest eigenvalue of an SPD Toeplitz matrix has recently been derived (Linear Algebra Appl. 2007; DOI: 10.1016/j.laa.2007.05.008). The algorithm relies on the computation of the R factor of the QR factorization of the Toeplitz matrix and the inverse of R. The simultaneous computation of R and R^{-1} is efficiently accomplished by the generalized Schur algorithm. In this paper, exploiting the properties of the latter algorithm, a numerical method to compute the smallest eigenvalue and the corresponding eigenvector of SPD Toeplitz matrices in an accurate way is proposed. Copyright © 2008 John Wiley & Sons, Ltd.
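A hedged sketch of the general idea of pairing fast Toeplitz solves with an eigenvalue iteration: plain inverse iteration for the smallest eigenvalue, with each linear solve done by SciPy's Levinson-based Toeplitz solver. This is not the bound-refining method of the paper; the example matrix is an arbitrary diagonally dominant SPD Toeplitz matrix.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# SPD Toeplitz matrix defined by its first column (diagonally dominant).
c = np.array([5.0, 2.0, 1.0, 0.5, 0.25, 0.1])
T = toeplitz(c)

# Inverse (power) iteration; each solve uses the fast Toeplitz solver
# instead of a general dense factorization.
x = np.ones(len(c))
for _ in range(200):
    x = solve_toeplitz(c, x)
    x /= np.linalg.norm(x)
lam_min = x @ (T @ x)                      # Rayleigh quotient estimate

print(lam_min, np.linalg.eigvalsh(T)[0])   # compare with a dense baseline
```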

10.
A Schur-type decomposition for Hamiltonian matrices is given that relies on unitary symplectic similarity transformations. These transformations preserve the Hamiltonian structure and are numerically stable, making them ideal for analysis and computation. Using this decomposition and a special singular-value decomposition for unitary symplectic matrices, a canonical reduction of the algebraic Riccati equation is obtained which sheds light on the sensitivity of the nonnegative definite solution. After presenting some real decompositions for real Hamiltonian matrices, we look into the possibility of an orthogonal symplectic version of the QR algorithm suitable for Hamiltonian matrices. A finite-step initial reduction to a Hessenberg-type canonical form is presented. However, no extension of the Francis implicit-shift technique was found, and reasons for the difficulty are given.

11.
The class of eigenvalue problems for upper Hessenberg matrices of banded-plus-spike form includes companion and comrade matrices as special cases. For this class of matrices a factored form is developed in which the matrix is represented as a product of essentially 2×2 matrices and a banded upper-triangular matrix. A non-unitary analogue of Francis’s implicitly-shifted QR algorithm that preserves the factored form and consequently computes the eigenvalues in O(n^2) time and O(n) space is developed. Inexpensive a posteriori tests for stability and accuracy are performed as part of the algorithm. The results of numerical experiments are mixed but promising in certain areas. The single-shift version of the code applied to companion matrices is much faster than the nearest competitor.

12.
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation needed to bring a matrix to unitary form. A similar condition holds for Hermitian matrices: the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank-k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low-rank perturbation. Numerical tests confirm that this straightforward algorithm is effective.

13.
The problem of polynomial least squares fitting in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix of the overdetermined system to be solved in the least squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition of A, the first stage consists of computing the bidiagonal decomposition of the coefficient matrix A. Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of A is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the R-factor of A. Some numerical experiments showing the behavior of this approach are included.
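For orientation, the sketch below assembles a rectangular Bernstein-Vandermonde matrix and solves the least squares problem with an ordinary Householder QR; the high-relative-accuracy route through the bidiagonal decomposition described above is not reproduced. The helper name bernstein_vandermonde and the test data are illustrative.

```python
import numpy as np
from math import comb

def bernstein_vandermonde(x, n):
    """Rectangular Bernstein-Vandermonde matrix: entry (i, j) is the j-th
    Bernstein basis polynomial of degree n evaluated at x[i] in [0, 1]."""
    x = np.asarray(x)[:, None]
    j = np.arange(n + 1)[None, :]
    binom = np.array([comb(n, k) for k in range(n + 1)])[None, :]
    return binom * x**j * (1 - x)**(n - j)

# Least squares fit of noisy samples by a degree-3 polynomial in Bernstein form.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 40)
y = np.cos(3 * x) + 1e-3 * rng.standard_normal(x.size)

A = bernstein_vandermonde(x, 3)
Q, R = np.linalg.qr(A)                      # standard Householder QR here
coef = np.linalg.solve(R, Q.T @ y)          # Bernstein coefficients of the fit
print(coef)
```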

14.
Summary. This paper explores the relationship between certain inverse unitary eigenvalue problems and orthogonal functions. In particular, the inverse eigenvalue problems for unitary Hessenberg matrices and for Schur parameter pencils are considered. The Szegő recursion is known to be identical to the Arnoldi process and can be seen as an algorithm for solving an inverse unitary Hessenberg eigenvalue problem. Reformulation of this inverse unitary Hessenberg eigenvalue problem yields an inverse eigenvalue problem for Schur parameter pencils. It is shown that solving this inverse eigenvalue problem is equivalent to computing Laurent polynomials orthogonal on the unit circle. Efficient and reliable algorithms for solving the inverse unitary eigenvalue problems are given, requiring only O(n^2) arithmetic operations as compared with the O(n^3) operations needed by algorithms that ignore the structure of the problem.

15.
We present an algorithm for mixed precision iterative refinement on the constrained and weighted linear least squares problem, the CWLSQ problem. The approximate solution is obtained by solving the CWLSQ problem with the weighted QR factorization [6]. With backward errors for the weighted QR decomposition together with perturbation bounds for the CWLSQ problem we analyze the convergence behaviour of the iterative refinement procedure. In the unweighted case the initial convergence rate of the error of the iteratively refined solution is determined essentially by the condition number. For the CWLSQ problem the initial convergence behaviour is more complicated. The analysis shows that the initial convergence depends both on the condition of the problem related to the solution x and on the vector Wr, where W is the weight matrix and r is the residual. We test our algorithm on two examples where the solution is known and the condition number of the problem can be varied. The computational test confirms the theoretical results and verifies that mixed precision iterative refinement, using the system matrix and the weighted QR decomposition, is an effective way of improving an approximate solution to the CWLSQ problem.
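A much-simplified sketch of the mixed precision idea for the ordinary (unweighted, unconstrained) least squares problem: the QR factorization is computed and reused in single precision, while residuals are formed in double precision. The CWLSQ-specific weighting and constraints of the paper are not modeled, and refined_lstsq is an illustrative helper, not the paper's algorithm.

```python
import numpy as np

def refined_lstsq(A, b, iters=3):
    """Iterative refinement for min ||b - A x||_2: the QR factorization is
    computed and applied in single precision, residuals in double precision."""
    Q, R = np.linalg.qr(A.astype(np.float32))        # low-precision factorization
    x = np.linalg.solve(R, Q.T @ b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                 # residual in double precision
        dx = np.linalg.solve(R, Q.T @ r.astype(np.float32)).astype(np.float64)
        x += dx                                       # correction from the cheap factors
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 8))
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-6 * rng.standard_normal(100)

x = refined_lstsq(A, b)
print(np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```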

16.
In this paper we design a fast new algorithm for reducing an N × N quasiseparable matrix to upper Hessenberg form via a sequence of N − 2 unitary transformations. The new reduction is especially useful when it is followed by the QR algorithm to obtain a complete set of eigenvalues of the original matrix. In particular, it is shown that in a number of cases some recently devised fast adaptations of the QR method for quasiseparable matrices can benefit from using the proposed reduction as a preprocessing step, yielding lower cost and a simplification of implementation.

17.
The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, this algorithm requires O(n^3) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003), a reduction of a symmetric matrix to a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed in which each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix are computed by applying inverse iteration to this matrix, using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decomposition of semiseparable matrices. Combined with the reduction of a symmetric matrix to semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but has larger lower-order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd.
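For comparison, the standard pipeline the paper contrasts with can be sketched as follows: orthogonal reduction of a symmetric matrix to tridiagonal form, followed by a specialized tridiagonal eigensolver (delegated here to LAPACK via SciPy rather than an explicit QR iteration); the semiseparable variant itself is not implemented.

```python
import numpy as np
from scipy.linalg import hessenberg, eigvalsh_tridiagonal

rng = np.random.default_rng(7)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # real symmetric test matrix

# Classical pipeline: orthogonal similarity reduction to tridiagonal form
# (the Hessenberg form of a symmetric matrix is tridiagonal), then a
# specialized tridiagonal eigensolver in place of the semiseparable QR step.
T = hessenberg(A)
w = eigvalsh_tridiagonal(np.diag(T), np.diag(T, 1))

print(np.allclose(np.sort(w), np.sort(np.linalg.eigvalsh(A))))
```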

18.
In this paper, we investigate variants of the well-known Golub and Welsch algorithm for computing the nodes and weights of Gaussian quadrature rules for symmetric weights w on intervals (−a, a) (not necessarily bounded). The purpose is to reduce the complexity of the Jacobi eigenvalue problem stemming from Wilf’s theorem and to show the effectiveness of Matlab implementations of our variants in reducing computing times compared with some other methods. Numerical examples on three test problems show the benefits of these variants.
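A minimal sketch of the classical Golub-Welsch construction that the variants above aim to accelerate, for the symmetric Legendre weight w(x) = 1 on (−1, 1): the nodes are the eigenvalues of the Jacobi matrix and the weights come from the first components of its eigenvectors. The function name golub_welsch_legendre is an illustrative choice.

```python
import numpy as np

def golub_welsch_legendre(n):
    """Golub-Welsch: nodes and weights of the n-point Gauss-Legendre rule
    (weight w(x) = 1 on (-1, 1)) from the symmetric Jacobi matrix."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)          # off-diagonal recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)      # diagonal is zero for a symmetric weight
    nodes, V = np.linalg.eigh(J)
    weights = 2.0 * V[0, :]**2                    # mu_0 = integral of w over (-1, 1) = 2
    return nodes, weights

x, w = golub_welsch_legendre(5)
print(x)
print(w, w.sum())                                  # weights sum to 2
print(w @ x**4, 2.0 / 5.0)                         # rule is exact for x^4
```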

19.
Summary A rational version of the QR algorithm for symmetric tridiagonal matrices is presented. Stability is ensured by calculating the elements of the transformed matrix by various formulas, depending on the angle of rotation. Virtual origin shifts are determined from perturbation estimates for the leading 2×2 and 3×3 submatrices; the size of these shifts can optionally serve as a convergence criterion. A number of test matrices, including one with several degeneracies, were diagonalized; an average of 1.3–1.5 QR iterations per eigenvalue was needed for 12-figure precision, and of 1.7–2.0 for 22-figure precision.
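For reference, a plain explicitly shifted QR iteration on a symmetric tridiagonal matrix (Wilkinson shift, deflation when the trailing off-diagonal entry is negligible) is sketched below; it uses a dense QR factorization per step for clarity, whereas the rational version described above works square-root-free on the tridiagonal entries directly.

```python
import numpy as np

def tridiag_qr_eigenvalues(T, tol=1e-12, max_iter=500):
    """Explicitly shifted QR iteration on a symmetric tridiagonal matrix:
    Wilkinson shift from the trailing 2x2 block, deflation when the last
    off-diagonal entry becomes negligible."""
    T = T.copy()
    n = T.shape[0]
    eigs = []
    for _ in range(max_iter):
        if n == 1:
            eigs.append(T[0, 0])
            break
        if abs(T[n - 1, n - 2]) < tol * (abs(T[n - 2, n - 2]) + abs(T[n - 1, n - 1])):
            eigs.append(T[n - 1, n - 1])      # deflate the converged eigenvalue
            n -= 1
            T = T[:n, :n]
            continue
        a, b, c = T[n - 2, n - 2], T[n - 1, n - 1], T[n - 1, n - 2]
        d = 0.5 * (a - b)
        sign = 1.0 if d >= 0 else -1.0
        mu = b - sign * c**2 / (abs(d) + np.hypot(d, c))   # Wilkinson shift
        Q, R = np.linalg.qr(T - mu * np.eye(n))            # dense QR step for clarity
        T = R @ Q + mu * np.eye(n)
    return np.sort(np.array(eigs))

rng = np.random.default_rng(8)
d = rng.standard_normal(6)
e = rng.standard_normal(5)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(tridiag_qr_eigenvalues(T))
print(np.sort(np.linalg.eigvalsh(T)))       # dense baseline for comparison
```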

20.
Summary Strassen [2] has described a method for the multiplication of (N, N)-matrices which needs O(N^2.8...) basic operations. Here algorithms are given for QR decomposition, for the unitary transformation of arbitrary complex matrices to upper Hessenberg form, and for the unitary triangularization of Hermitian matrices which, by using a fast matrix multiplication with time bound O(N^2.8...), have nearly the same speed.
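The fast multiplication the abstract builds on is Strassen's recursion with seven half-size products; a minimal sketch is given below, with a cutoff to ordinary multiplication for small blocks and sizes assumed to be powers of two. The QR and Hessenberg reductions built on top of it are not shown.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiplication for square matrices whose size is a power of two;
    falls back to ordinary multiplication below the cutoff."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

rng = np.random.default_rng(9)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
print(np.allclose(strassen(A, B), A @ B, atol=1e-8))
```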
