Similar Documents
 20 similar documents were retrieved.
1.
In this article the unitary equivalence transformation of normal matrices to tridiagonal form is studied. It is well known that any matrix is unitarily equivalent to a tridiagonal matrix. In the case of a normal matrix the resulting tridiagonal matrix inherits a strong relation between its super- and subdiagonal elements: corresponding elements of the super- and subdiagonal have the same absolute value. First, some basic facts about a unitary equivalence transformation of an arbitrary matrix to tridiagonal form are studied. Both an iterative reduction based on Krylov sequences and a direct tridiagonalization procedure via Householder transformations are reconsidered. This equivalence transformation is then applied to the normal case, and equality of the absolute values of the super- and subdiagonal elements is proved. Self-adjointness of the resulting tridiagonal matrix with respect to a specific scalar product is also proved. Properties of the reduction when applied to symmetric, skew-symmetric, Hermitian, skew-Hermitian and unitary matrices, and their relations with, e.g., complex symmetric and pseudo-symmetric matrices, are presented. It is shown that the reduction can then be used to compute the singular value decomposition of normal matrices by means of the Takagi factorization. Finally, some further properties of the reduction as well as an efficient method for computing a unitary complex symmetric decomposition of a normal matrix are given.
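A quick numerical illustration of the Hermitian special case mentioned above, assuming NumPy/SciPy are available: for a Hermitian (hence normal) matrix, the standard unitary similarity reduction to Hessenberg form already yields a tridiagonal matrix whose super- and subdiagonal entries have equal modulus. This is only a sketch of that special case; the unitary equivalence reduction studied in the paper is not part of standard libraries.

import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                  # Hermitian, hence normal

H, Q = hessenberg(A, calc_q=True)         # unitary similarity: A = Q H Q^H
# For Hermitian input the Hessenberg form is (numerically) Hermitian tridiagonal,
# so corresponding super- and subdiagonal entries agree in absolute value.
print(np.allclose(np.abs(np.diag(H, 1)), np.abs(np.diag(H, -1))))
print(np.allclose(np.triu(H, 2), 0))      # tridiagonal up to roundoff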

2.
The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, the algorithm requires O(n^3) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n) per iteration, the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003) a reduction from a symmetric matrix into a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed in which each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix are computed by applying inverse iteration to this matrix, using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decomposition of semiseparable matrices. Combined with the reduction of a symmetric matrix towards semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but larger lower-order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd.
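The semiseparable reduction and its O(n)-per-sweep QR iteration are not available in standard libraries, but the analogous classical pipeline (orthogonal reduction to tridiagonal form, then a structured tridiagonal eigensolver) can be sketched with SciPy as follows; a minimal illustration of the two-step idea, not the paper's algorithm.

import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal, eigh

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                     # real symmetric test matrix

# Step 1: orthogonal similarity reduction; for symmetric input the
# Hessenberg form is tridiagonal.
T = hessenberg(A)
d, e = np.diag(T), np.diag(T, 1)

# Step 2: structured eigensolver on the tridiagonal matrix
# (each implicit QR sweep costs O(n), as in the semiseparable variant).
w = eigh_tridiagonal(d, e, eigvals_only=True)
print(np.allclose(np.sort(w), np.sort(eigh(A, eigvals_only=True))))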

3.
Convergence of CG and GMRES on a tridiagonal Toeplitz linear system
The Conjugate Gradient method (CG), the Minimal Residual method (MINRES), or more generally the Generalized Minimal Residual method (GMRES) are widely used to solve a linear system Ax=b. The choice of method depends on A's symmetry and/or definiteness, and MINRES is really just a special case of GMRES. This paper establishes error bounds on, and sometimes exact expressions for, the residuals of CG, MINRES, and GMRES when solving a tridiagonal Toeplitz linear system, where A is Hermitian or just normal. These expressions and bounds are given in terms of the three parameters that define A and of Chebyshev polynomials of the first or second kind. AMS subject classification (2000)  65F10, 65N22
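As a minimal sketch (not reproducing the paper's Chebyshev-polynomial bounds), one can set up a symmetric positive definite tridiagonal Toeplitz system defined by its three parameters and run CG and GMRES from SciPy; the parameter values below are an arbitrary, diagonally dominant choice made only so that both solvers converge quickly.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, gmres

n = 500
sub, dia, sup = -1.0, 4.0, -1.0           # the three parameters defining A
A = sp.diags([sub * np.ones(n - 1), dia * np.ones(n), sup * np.ones(n - 1)],
             offsets=[-1, 0, 1], format='csr')
b = np.ones(n)

x_cg, info_cg = cg(A, b)                  # info == 0 signals convergence
x_gm, info_gm = gmres(A, b)
print(info_cg, np.linalg.norm(A @ x_cg - b))
print(info_gm, np.linalg.norm(A @ x_gm - b))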

4.
Norm-minimizing-type methods for solving large sparse linear systems with symmetric and indefinite coefficient matrices are considered. The Krylov subspace can be generated either by the Lanczos approach, such as in the methods MINRES, GMRES and QMR, or by a conjugate-gradient approach. Here, we propose an algorithm based on the latter approach. Some relations among the search directions and the residuals, and how the search directions are related to the Krylov subspace, are investigated. Numerical experiments are reported to verify the convergence properties.

5.
In this paper we describe an orthogonal similarity transformation for transforming arbitrary symmetric matrices into a diagonal-plus-semiseparable matrix, where we can freely choose the diagonal. Very recently an algorithm was proposed for transforming arbitrary symmetric matrices into similar semiseparable ones. That reduction is strongly connected to the reduction to tridiagonal form. The class of semiseparable matrices can be considered as a subclass of the diagonal-plus-semiseparable matrices, so the algorithm proposed here can be interpreted as an extension of the reduction to semiseparable form. A numerical experiment is performed comparing the accuracy of this reduction algorithm with that of the traditional reduction to tridiagonal form and of the reduction to semiseparable form. The experiment indicates that all three reduction algorithms are equally accurate. Moreover, the experiments show that asymptotically all three approaches have the same complexity, i.e. the same factor preceding the n^3 term in the computational cost. Finally we illustrate that special choices of the diagonal create a specific convergence behavior. The research was partially supported by the Research Council K.U.Leuven, project OT/05/40 (Large rank structured matrix computations), by the Fund for Scientific Research–Flanders (Belgium), projects G.0078.01 (SMA: Structured Matrices and their Applications), G.0176.02 (ANCILA: Asymptotic aNalysis of the Convergence behavior of Iterative methods in numerical Linear Algebra), G.0184.02 (CORFU: Constructive study of Orthogonal Functions) and G.0455.0 (RHPH: Riemann-Hilbert problems, random matrices and Padé-Hermite approximation), and by the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture, project IUAP V-22 (Dynamical Systems and Control: Computation, Identification & Modelling). The scientific responsibility rests with the authors.

6.
For a class of block two-by-two systems of linear equations with certain skew-Hamiltonian coefficient matrices, we construct additive block diagonal preconditioning matrices and discuss the eigen-properties of the corresponding preconditioned matrices. The additive block diagonal preconditioners can be employed to accelerate the convergence rates of Krylov subspace iteration methods such as MINRES and GMRES. Numerical experiments show that MINRES preconditioned by the exact and the inexact additive block diagonal preconditioners are effective, robust and scalable solvers for the block two-by-two linear systems arising from the Galerkin finite-element discretizations of a class of distributed control problems.

7.
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation needed to bring a matrix to unitary form. A similar condition holds for Hermitian matrices: the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank-k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low-rank perturbation. Numerical tests show that this straightforward algorithm is effective.
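A small numerical check of the quantities involved (the exact necessary-and-sufficient counts are the subject of the paper; the NumPy sketch below only verifies the easy direction): a rank-k perturbation moves at most 2k singular values away from 1 in the unitary case, and at most 2k eigenvalues of the skew-Hermitian part away from 0 in the Hermitian case.

import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
U = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
V = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
A = Q + U @ V.conj().T                       # unitary plus rank-k

# Singular values of a unitary matrix are all 1; a rank-k perturbation can move
# at most 2k of them away from 1, so counting the deviating ones bounds k.
s = np.linalg.svd(A, compute_uv=False)
print(np.sum(np.abs(s - 1) > 1e-10))         # expect at most 2k = 4

H = (Q + Q.conj().T) / 2                     # a Hermitian matrix
B = H + U @ V.conj().T                       # Hermitian plus rank-k
skew = (B - B.conj().T) / 2
ev = np.linalg.eigvalsh(1j * skew)           # i * (skew-Hermitian) is Hermitian
print(np.sum(np.abs(ev) > 1e-10))            # expect at most 2k = 4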

8.
A minimal residual method, called MINRES-N2, that is based on the use of unconventional Krylov subspaces was previously proposed by the authors for solving a system of linear equations Ax = b with a normal coefficient matrix whose spectrum belongs to an algebraic second-degree curve Γ. However, the computational scheme of this method does not cover matrices of the form A = αU + βI, where U is an arbitrary unitary matrix; for such matrices, Γ is a circle. Systems of this type are repeatedly solved when the eigenvectors of a unitary matrix are calculated by inverse iteration. In this paper, a modification of MINRES-N2 suitable for linear polynomials in unitary matrices is proposed. Numerical results are presented demonstrating the significant superiority of the modified method over GMRES as applied to systems of this class.

9.
In this paper we study both direct and inverse eigenvalue problems for diagonal-plus-semiseparable (dpss) matrices. In particular, we show that the computation of the eigenvalues of a symmetric dpss matrix can be reduced, by a congruence transformation, to solving a generalized symmetric definite tridiagonal eigenproblem. Using this reduction, we devise a set of recurrence relations for evaluating the characteristic polynomial of a dpss matrix in a stable way and in linear time. This in turn allows us to apply divide-and-conquer eigenvalue solvers based on functional iterations directly to dpss matrices without performing any preliminary reduction to tridiagonal form. In the second part of the paper, we exploit the structural properties of dpss matrices to solve the inverse eigenvalue problem of reconstructing a symmetric dpss matrix from its spectrum and some additional information. Finally, applications of our results to the computation of a QR factorization of a Cauchy matrix with real nodes are provided.

10.
The generalized qd algorithm for block band matrices is an extension of the block qd algorithm applied to a block tridiagonal matrix. This algorithm is applied to a positive definite symmetric block band matrix. The result concerning the behavior of the eigenvalues of the first and the last diagonal block of the matrix containing the entries q^(k), which was obtained in the tridiagonal case, remains valid for positive definite symmetric block band matrices: the eigenvalues of the first block constitute strictly increasing sequences and those of the last block constitute strictly decreasing sequences. The theorem of convergence, given in Draux and Sadik (Appl Numer Math 60:1300–1308, 2010), also remains valid in this more general case.

11.
The main purpose of the paper is to look for a larger class of matrices that have real spectrum. The first well-known class with this property is that of symmetric matrices, followed by the Hermitian ones. This paper introduces a new class, called Hermitizable matrices. The closely related isospectral problem is also studied, not only for matrices but also for differential operators. The paper provides a way to describe the discrete spectrum, at least for tridiagonal matrices or one-dimensional differential operators. In particular, a perhaps unexpected result in the paper says that each Hermitizable matrix is isospectral to a birth–death type matrix (having positive subdiagonal elements, in the irreducible case for instance). Besides, new efficient algorithms are proposed for computing the maximal eigenpairs of this class of matrices.
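The tridiagonal case can be illustrated directly. The NumPy sketch below shows only the classical symmetrization fact underlying the birth–death connection, not the paper's general construction: a real tridiagonal matrix whose paired off-diagonal entries have positive products is diagonally similar to a symmetric tridiagonal matrix, so its spectrum is real.

import numpy as np

rng = np.random.default_rng(4)
n = 6
diag = rng.standard_normal(n)
sup = rng.uniform(0.5, 2.0, n - 1)        # superdiagonal entries a_i > 0
sub = rng.uniform(0.5, 2.0, n - 1)        # subdiagonal entries  c_i > 0, so a_i*c_i > 0
T = np.diag(diag) + np.diag(sup, 1) + np.diag(sub, -1)

# Diagonal similarity with d_1 = 1 and d_{i+1} = d_i * sqrt(a_i / c_i) turns both
# off-diagonals into sqrt(a_i * c_i), i.e. D T D^{-1} is symmetric tridiagonal.
d = np.concatenate([[1.0], np.cumprod(np.sqrt(sup / sub))])
S = np.diag(d) @ T @ np.diag(1.0 / d)
print(np.allclose(S, S.T))                                 # symmetric
print(np.abs(np.linalg.eigvals(T).imag).max())             # ~0: real spectrum
print(np.allclose(np.sort(np.linalg.eigvals(T).real),
                  np.linalg.eigvalsh(S)))                  # same spectrum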

12.
Summary. We describe a fast matrix eigenvalue algorithm that uses a matrix factorization and reverse-order multiply technique involving three factors; it is based on a symmetric matrix factorization as well as on orthogonal reduction techniques with respect to a weight matrix computed from the given matrix A. The algorithm operates on a similarity reduction of a real matrix A to general tridiagonal form and computes all of A's eigenvalues, with part of the operations possibly performed over the complex field, at a cost below that of the standard QR eigenvalue algorithm. Potential breakdown of the algorithm can occur in the reduction to tridiagonal form and in the weighted orthogonal reductions. Both, however, can be monitored during the computations. The former occurs rather rarely and can essentially be bypassed, while the latter is extremely rare and can be bypassed as well in our conditionally stable implementation of the steps. We prove an implicit theorem which allows implicit shifts, give a convergence proof for the algorithm, and show that it is conditionally stable for general balanced tridiagonal matrices. Received April 25, 1995 / Revised version received February 9, 1996

13.
In this paper, normative matrices and their double LR transformation with origin shifts are defined, and the essential relationship between the double LR transformation of a normative matrix and the QR transformation of the related symmetric tridiagonal matrix is proved. We obtain a stable double LR algorithm for the double LR transformation of normative matrices and give an error analysis of the algorithm. The operation count of the stable double LR algorithm for normative matrices is only four sevenths of that of the rational QR algorithm for real symmetric tridiagonal matrices.

14.
We consider a class of symmetric tridiagonal matrices which may be viewed as perturbations of Toeplitz matrices. The Toeplitz structure is destroyed since two elements on each off-diagonal are perturbed. Based on a careful analysis, we derive sharp bounds for the extremal eigenvalues of this class of matrices in terms of the original data of the given matrix. In this way, we also obtain a lower bound for the smallest singular value of certain matrices. Some numerical results indicate that our bounds are extremely good.
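For orientation, the unperturbed symmetric tridiagonal Toeplitz matrix has the classical closed-form spectrum, and the effect of perturbing two entries per off-diagonal can be examined numerically. The SciPy sketch below uses an arbitrary, hypothetical choice of perturbation positions and values; it does not reproduce the sharp bounds derived in the paper.

import numpy as np
from scipy.linalg import eigh_tridiagonal

n, a, b = 100, 2.0, -1.0
d = a * np.ones(n)
e = b * np.ones(n - 1)

# Closed-form eigenvalues of the unperturbed symmetric tridiagonal Toeplitz matrix:
# lambda_j = a + 2*b*cos(j*pi/(n+1)), j = 1..n.
lam = a + 2 * b * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))

# Perturb two entries on each off-diagonal (hypothetical positions and sizes).
e_pert = e.copy()
e_pert[0] += 0.3
e_pert[-1] -= 0.2

w = eigh_tridiagonal(d, e_pert, eigvals_only=True)
print(lam.min(), lam.max())          # extremal eigenvalues before the perturbation
print(w.min(), w.max())              # extremal eigenvalues after the perturbation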

15.
We present an algorithm for the approximation of the dominant singular values and corresponding right and left singular vectors of a complex symmetric matrix. The method is based on two short-term recurrences first proposed by Saunders, Simon and Yip [24] for a non-Hermitian linear system solver. With symmetric matrices, the recurrence can be modified so as to generate a tridiagonal symmetric matrix from which the sought singular triplets can be approximated. The recurrence formally resembles the Lanczos method, in spite of substantial differences which make the usual convergence results inapplicable. Implementation aspects are discussed, such as re-orthogonalization and the use of alternative representation matrices. The method is considerably more efficient than existing approaches that do not exploit the symmetry of the problem. Numerical experiments on application problems validate the analysis, while showing satisfactory results, especially on dense matrices. © 1997 by John Wiley & Sons, Ltd.

16.
An algorithm for computing the singular value decomposition of normal matrices using intermediate complex symmetric matrices is proposed. This algorithm, like most eigenvalue and singular value algorithms, consists of two steps. It combines the unitary equivalence of normal matrices to complex symmetric tridiagonal form with the symmetric singular value decomposition of complex symmetric matrices. Numerical experiments are included comparing several algorithms, with respect to speed and accuracy, for computing the symmetric singular value decomposition (also known as the Takagi factorization). Next we compare the novel approach with the classical Golub-Kahan method for computing the singular value decomposition of normal matrices: it is faster and consumes less memory, but on the other hand the results are significantly less accurate.

17.
In 1989, Bai and Demmel proposed the multishift QR algorithm for eigenvalue problems. Although the global convergence property of the algorithm (i.e., convergence from any initial matrix) still remains an open question for general nonsymmetric matrices, in 1992 Jiang focused on the symmetric tridiagonal case and gave a global convergence proof for the generalized Rayleigh quotient shifts. In this paper, we propose Wilkinson-like shifts, which reduce to the standard Wilkinson shift in the single-shift case, and show a global convergence theorem.
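For reference, the standard single Wilkinson shift, to which the proposed shifts reduce in the single-shift case, is computed from the trailing 2x2 block of the symmetric tridiagonal matrix. The sketch below shows only that classical formula (as in Golub and Van Loan), not the multishift strategy analyzed in the paper.

import numpy as np

def wilkinson_shift(a_nm1, a_n, b_nm1):
    """Standard Wilkinson shift for a symmetric tridiagonal matrix:
    the eigenvalue of the trailing 2x2 block
        [[a_nm1, b_nm1],
         [b_nm1, a_n ]]
    that is closer to a_n."""
    delta = (a_nm1 - a_n) / 2.0
    sign = 1.0 if delta >= 0 else -1.0
    return a_n - sign * b_nm1**2 / (abs(delta) + np.hypot(delta, b_nm1))

# The shift matches the eigenvalue of the trailing block closest to a_n:
a_nm1, a_n, b = 3.0, 1.0, 0.5
mu = wilkinson_shift(a_nm1, a_n, b)
ev = np.linalg.eigvalsh(np.array([[a_nm1, b], [b, a_n]]))
print(mu, ev)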

18.
A well known numerical task is the inversion of large symmetric tridiagonal Toeplitz matrices, i.e., matrices whose entries equal a on the diagonal and b on the off-diagonals (a, b real). The inverses of such matrices are dense, and there exist well known explicit formulas by which they can be calculated in O(n^2) operations. In this note we present a simplification of the problem that has proven to be rather useful in everyday practice: if |a| > 2|b|, that is, if the matrix is strictly diagonally dominant, its inverse is a band matrix to working precision and the bandwidth is independent of n for sufficiently large n. Employing this observation, we construct a linear-time algorithm for an explicit tridiagonal inversion that only uses O(1) floating point operations. On the basis of this simplified inversion algorithm we outline the cornerstones for an efficient parallelizable approximative equation solver.
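The observation can be checked numerically in a few lines; the sketch below uses arbitrary parameter values satisfying |a| > 2|b| and does not reproduce the explicit inversion formulas or the O(1)-work construction of the note. The entries of the inverse decay geometrically away from the diagonal, so beyond a fixed bandwidth they fall below working precision regardless of n.

import numpy as np
import scipy.sparse as sp

n, a, b = 400, 4.0, 1.0                      # |a| > 2|b|: strictly diagonally dominant
A = sp.diags([b * np.ones(n - 1), a * np.ones(n), b * np.ones(n - 1)],
             offsets=[-1, 0, 1]).toarray()
Ainv = np.linalg.inv(A)

# Off-diagonal decay: the maximal entry on the k-th diagonal of the inverse
# shrinks geometrically with k, so the inverse is effectively a band matrix.
for k in (0, 5, 10, 20, 40):
    print(k, np.abs(np.diag(Ainv, k)).max())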

19.
For large systems of linear equations, iterative methods provide attractive solution techniques. We describe the applicability and convergence of iterative methods of Krylov subspace type for an important class of symmetric and indefinite matrix problems, namely augmented (or KKT) systems. Specifically, we consider preconditioned minimum residual methods and discuss indefinite versus positive definite preconditioning. For a natural choice of starting vector we prove that when the definite and indefinite preconditioners are related in the obvious way, MINRES (which is applicable in the case of positive definite preconditioning) and full GMRES (which is applicable in the case of indefinite preconditioning) give residual vectors with identical Euclidean norm at each iteration. Moreover, we show that the convergence of both methods is related to a system of normal equations for which the LSQR algorithm can be employed. As a side result, we give a rare example of a non-trivial normal matrix for which the corresponding inner product is explicitly known: a conjugate gradient method therefore exists and can be employed in this case. This work was supported by British Council/German Academic Exchange Service Research Collaboration Project 465 and NATO Collaborative Research Grant CRG 960782
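A minimal SciPy sketch of positive definite block-diagonal preconditioning for an augmented (KKT) system, using the common diag(H, B H^{-1} B^T) choice; this particular preconditioner and problem are assumptions for illustration, and the paper's indefinite preconditioner and starting-vector analysis are not reproduced. With the exact Schur complement, the preconditioned matrix is known to have at most three distinct eigenvalues, so MINRES converges in a handful of iterations.

import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(3)
n, m = 60, 20
C = rng.standard_normal((n, n))
H = C @ C.T + np.eye(n)                           # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))                   # full-rank constraint block
K = np.block([[H, B.T], [B, np.zeros((m, m))]])   # symmetric indefinite KKT matrix
rhs = np.ones(n + m)

# Apply the inverse of the block-diagonal preconditioner P = diag(H, B H^{-1} B^T).
Hinv = np.linalg.inv(H)
Sinv = np.linalg.inv(B @ Hinv @ B.T)
apply_Pinv = lambda v: np.concatenate([Hinv @ v[:n], Sinv @ v[n:]])
M = LinearOperator((n + m, n + m), matvec=apply_Pinv)

x, info = minres(K, rhs, M=M)
print(info, np.linalg.norm(K @ x - rhs))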

20.
This paper concerns the LBM^T factorization of unsymmetric tridiagonal matrices, where L and M are unit lower triangular matrices and B is block diagonal with 1×1 and 2×2 blocks. In some applications, it is necessary to form this factorization without row or column interchanges while the tridiagonal matrix is formed. Bunch and Kaufman proposed a pivoting strategy without interchanges specifically for symmetric tridiagonal matrices, and more recently, Bunch and Marcia proposed pivoting strategies that are normwise backward stable for linear systems involving such matrices. In this paper, we extend these strategies to the unsymmetric tridiagonal case and demonstrate that the proposed methods both exhibit bounded growth factors and are normwise backward stable. Copyright © 2009 John Wiley & Sons, Ltd.
