Similar Literature
 20 similar records found
1.
Very recently, an algorithm that reduces any symmetric matrix to a semiseparable one of semiseparability rank 1 by orthogonal similarity transformations was proposed by Vandebril, Van Barel and Mastronardi. Partial execution of this algorithm computes a semiseparable matrix whose eigenvalues are the Ritz values obtained by the Lanczos process applied to the original matrix. A kind of nested subspace iteration is also performed at each step. In this paper, we generalize the above results and propose an algorithm to reduce any symmetric matrix to a similar block-semiseparable one of semiseparability rank k, with k ∈ ℕ, by orthogonal similarity transformations. Also in this case, partial execution of the algorithm computes a block-semiseparable matrix whose eigenvalues are the Ritz values obtained by the block Lanczos process with k starting vectors, applied to the original matrix. Subspace iteration is performed at each step as well. Copyright © 2005 John Wiley & Sons, Ltd.
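The Ritz-value connection mentioned above can be checked numerically. Below is a minimal sketch (plain Lanczos with full reorthogonalization, not the block-semiseparable reduction of the paper; the matrix size, block size k and starting vector are arbitrary illustrative choices) comparing the eigenvalues of the k×k Lanczos tridiagonal matrix with the extremal eigenvalues of the original matrix.

```python
import numpy as np

def lanczos(A, v0, k):
    """Plain Lanczos with full reorthogonalization; returns the k x k tridiagonal matrix T."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization against all previous vectors
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-14:
            break
        V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2
T = lanczos(A, rng.standard_normal(200), 20)
print(np.sort(np.linalg.eigvalsh(T))[[0, -1]])   # extreme Ritz values ...
print(np.sort(np.linalg.eigvalsh(A))[[0, -1]])   # ... approximate the extreme eigenvalues of A
```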

2.
In this paper we describe an orthogonal similarity transformation for transforming arbitrary symmetric matrices into a diagonal-plus-semiseparable matrix, where the diagonal can be chosen freely. Very recently an algorithm was proposed for transforming arbitrary symmetric matrices into similar semiseparable ones. This reduction is strongly connected to the reduction to tridiagonal form. The class of semiseparable matrices can be considered as a subclass of the diagonal-plus-semiseparable matrices, so the algorithm proposed here can be interpreted as an extension of the reduction to semiseparable form. A numerical experiment is performed comparing the accuracy of this reduction algorithm with that of the traditional reduction to tridiagonal form and of the reduction to semiseparable form. The experiment indicates that all three reduction algorithms are equally accurate. Moreover, the experiments show that asymptotically all three approaches have the same complexity, i.e. they have the same factor preceding the n³ term in the computational complexity. Finally we illustrate that special choices of the diagonal create a specific convergence behavior. The research was partially supported by the Research Council K.U.Leuven, project OT/05/40 (Large rank structured matrix computations), by the Fund for Scientific Research–Flanders (Belgium), projects G.0078.01 (SMA: Structured Matrices and their Applications), G.0176.02 (ANCILA: Asymptotic aNalysis of the Convergence behavior of Iterative methods in numerical Linear Algebra), G.0184.02 (CORFU: Constructive study of Orthogonal Functions) and G.0455.0 (RHPH: Riemann-Hilbert problems, random matrices and Padé-Hermite approximation), and by the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture, project IUAP V-22 (Dynamical Systems and Control: Computation, Identification & Modelling). The scientific responsibility rests with the authors.
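To make the matrix class concrete, here is a minimal sketch of building a symmetric diagonal-plus-semiseparable matrix from one common generator representation (the vectors u, v and the diagonal d below are arbitrary illustrative data, not tied to the reduction algorithm of the paper), and checking that every lower-triangular entry comes from a single rank-one product.

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
u, v, d = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)

# Symmetric semiseparable part: S[i, j] = u[i] * v[j] for i >= j, mirrored above the diagonal.
S = np.tril(np.outer(u, v)) + np.triu(np.outer(v, u), 1)
A = S + np.diag(d)                    # diagonal-plus-semiseparable matrix with a freely chosen diagonal

assert np.allclose(A, A.T)            # symmetry
i, j = np.tril_indices(n)
assert np.allclose(S[i, j], u[i] * v[j])   # lower triangle is generated by the rank-one product u v^T
print(np.linalg.eigvalsh(A))
```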

3.
This paper proposes that the study of Sturm sequences is invaluable in the numerical computation and theoretical derivation of eigenvalue distributions of random matrix ensembles. We first explore the use of Sturm sequences to efficiently compute histograms of eigenvalues for symmetric tridiagonal matrices and apply these ideas to random matrix ensembles such as the β-Hermite ensemble. Using our techniques, we reduce the time to compute a histogram of the eigenvalues of such a matrix from O(n² + m) to O(mn), where n is the dimension of the matrix and m is the number of bins (with arbitrary bin centers and widths) desired in the histogram (m is usually much smaller than n). Second, we derive analytic formulas, in terms of iterated multivariate integrals, for the eigenvalue distribution and the largest-eigenvalue distribution of arbitrary symmetric tridiagonal random matrix models. As an example of the utility of this approach, we give a derivation of both distributions for the β-Hermite random matrix ensemble (for general β). Third, we explore the relationship between the Sturm sequence of a random matrix and its shooting eigenvectors. We show, using Sturm sequences, that if the eigenvector contains no zeros, the number of sign changes in a shooting eigenvector with parameter λ equals the number of eigenvalues greater than λ. Finally, we use the techniques presented in the first section to demonstrate experimentally an O(log n) growth relationship between the variance of histogram bin values and the order of the β-Hermite matrix ensemble. This research was supported by NSF Grant DMS-0411962.
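A minimal sketch of the Sturm-sequence counting idea (the textbook version for a symmetric tridiagonal matrix, without the shifting and scaling safeguards of a production code; the test matrix and bin edges are arbitrary): the number of negative pivots in the LDLᵀ factorization of T − xI equals the number of eigenvalues below x, so a histogram with m bins needs only one O(n) count per bin edge, i.e. O(mn) work in total.

```python
import numpy as np

def count_below(a, b, x):
    """Number of eigenvalues of tridiag(a, b) strictly below x (Sturm / LDL^T sign count)."""
    count, d = 0, 1.0
    for i in range(len(a)):
        d = a[i] - x - (b[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:            # crude safeguard against division by zero
            d = 1e-300
        if d < 0:
            count += 1
    return count

rng = np.random.default_rng(0)
n = 500
a, b = rng.standard_normal(n), rng.standard_normal(n - 1)
edges = np.linspace(-4, 4, 41)                        # m = 40 bins
counts = np.array([count_below(a, b, x) for x in edges])
hist = np.diff(counts)                                # eigenvalues per bin, O(mn) total

T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
ref, _ = np.histogram(np.linalg.eigvalsh(T), bins=edges)
print(np.array_equal(hist, ref))                      # should agree (barring exact ties on bin edges)
```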

4.
In this paper we study both direct and inverse eigenvalue problems for diagonal-plus-semiseparable (dpss) matrices. In particular, we show that the computation of the eigenvalues of a symmetric dpss matrix can be reduced, by a congruence transformation, to solving a generalized symmetric definite tridiagonal eigenproblem. Using this reduction, we devise a set of recurrence relations for evaluating the characteristic polynomial of a dpss matrix in a stable way and in linear time. This in turn allows us to apply divide-and-conquer eigenvalue solvers based on functional iterations directly to dpss matrices without performing any preliminary reduction to tridiagonal form. In the second part of the paper, we exploit the structural properties of dpss matrices to solve the inverse eigenvalue problem of reconstructing a symmetric dpss matrix from its spectrum and some additional information. Finally, applications of our results to the computation of a QR factorization of a Cauchy matrix with real nodes are provided.

5.
In this paper we adapt a known method for diagonal scaling of symmetric positive definite tridiagonal matrices to the semiseparable case. Based on the fact that a symmetric positive definite tridiagonal matrix T satisfies property A, one can easily construct a diagonal matrix D̂ such that D̂TD̂ has the lowest condition number over all matrices DTD, for any choice of diagonal matrix D. Knowing that semiseparable matrices are the inverses of tridiagonal matrices, one can derive similar properties for semiseparable matrices. Here, we construct the optimal diagonal scaling of a semiseparable matrix, based on a new inversion formula for semiseparable matrices. Some numerical experiments are performed. In a first experiment we compare the condition numbers of the semiseparable matrices before and after the scaling. In a second numerical experiment we compare the scalability of matrices coming from the reduction to semiseparable form with that of matrices coming from the reduction to tridiagonal form. The research was partially supported by the Research Council K.U. Leuven, project OT/00/16 (SLAP: Structured Linear Algebra Package), by the Fund for Scientific Research–Flanders (Belgium), projects G.0078.01 (SMA: Structured Matrices and their Applications), G.0176.02 (ANCILA: Asymptotic aNalysis of the Convergence behavior of Iterative methods in numerical Linear Algebra), G.0184.02 (CORFU: Constructive study of Orthogonal Functions) and G.0455.0 (RHPH: Riemann–Hilbert problems, random matrices and Padé–Hermite approximation), and by the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture, project IUAP V-22 (Dynamical Systems and Control: Computation, Identification & Modelling). The scientific responsibility rests with the authors. The second author participates in the SCCM program, Gates 2B, Stanford University, CA, USA and is also partially supported by the NSF. The first author visited the second one with a grant from the Fund for Scientific Research–Flanders (Belgium).
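A minimal sketch of the scaling idea for the tridiagonal case (the choice D = diag(1/√T_ii) is the classical unit-diagonal scaling; the optimality claim for property-A matrices is the paper's, and the badly scaled SPD tridiagonal test matrix below is an arbitrary construction, not one coming from a semiseparable reduction):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
b = rng.uniform(-0.4, 0.4, n - 1)
T0 = np.eye(n) + np.diag(b, 1) + np.diag(b, -1)   # SPD: strictly diagonally dominant
s = 10.0 ** rng.uniform(-4, 4, n)                 # wildly different row/column scales
T = np.diag(s) @ T0 @ np.diag(s)                  # still SPD (congruence), but badly scaled

D = np.diag(1.0 / np.sqrt(np.diag(T)))            # scale so that diag(D T D) = I
print(f"cond(T)   = {np.linalg.cond(T):.2e}")
print(f"cond(DTD) = {np.linalg.cond(D @ T @ D):.2e}")
```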

6.
We present a fast algorithm for computing the QR factorization of Cauchy matrices with real nodes. The algorithm works for almost any input matrix, does not require squaring the matrix, and fully exploits the displacement structure of Cauchy matrices. We prove that, if the determinant of a certain semiseparable matrix is non-zero, a three-term recurrence relation among the rows or columns of the factors exists. Copyright © 2002 John Wiley & Sons, Ltd.

7.
In this paper, two accelerated divide-and-conquer (ADC) algorithms are proposed for the symmetric tridiagonal eigenvalue problem; they cost O(N²r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of eigenvalues. Both algorithms use hierarchically semiseparable (HSS) matrices to approximate some intermediate eigenvector matrices, which are Cauchy-like and off-diagonally low-rank. The difference between the two versions lies in the HSS construction algorithm used: one (denoted ADC1) uses a structured low-rank approximation method and the other (ADC2) uses a randomized HSS construction algorithm. For the ADC2 algorithm, a method is proposed to estimate the off-diagonal rank. Numerous experiments have been carried out to show their stability and efficiency. These algorithms are implemented in parallel in a shared-memory environment, and some parallel implementation details are included. Comparing the ADCs with highly optimized multithreaded libraries such as Intel MKL, we find that the ADCs can be more than six times faster for some large matrices with few deflations. Copyright © 2016 John Wiley & Sons, Ltd.

8.
孔繁旭, 卢琳璋 (Kong Fanxu, Lu Linzhang). 数学研究 (Journal of Mathematical Study), 2008, 41(2): 119-125
In this paper, we prove that the orthogonal matrix obtained from a QR factorization of an anti-Krylov (反Krylov) matrix can be used to transform a symmetric matrix with mutually distinct eigenvalues into semiseparable form. This result exhibits the connection between anti-Krylov matrices and semiseparable matrices. In addition, we prove that the structure of this class of symmetric semiseparable matrices is preserved under QR iteration.

9.
10.
The class of eigenvalue problems for upper Hessenberg matrices of banded-plus-spike form includes companion and comrade matrices as special cases. For this class of matrices a factored form is developed in which the matrix is represented as a product of essentially 2×2 matrices and a banded upper-triangular matrix. A non-unitary analogue of Francis's implicitly shifted QR algorithm that preserves the factored form, and consequently computes the eigenvalues in O(n²) time and O(n) space, is developed. Inexpensive a posteriori tests for stability and accuracy are performed as part of the algorithm. The results of numerical experiments are mixed but promising in certain areas. The single-shift version of the code applied to companion matrices is much faster than the nearest competitor.
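As a minimal illustration of the companion-matrix special case (a standard dense construction, not the factored banded-plus-spike representation of the paper; the polynomial below is an arbitrary example): the eigenvalues of the companion matrix of a monic polynomial are exactly its roots.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial x^n + c[0] x^(n-1) + ... + c[n-1]."""
    c = np.asarray(coeffs, dtype=float)
    n = c.size
    C = np.zeros((n, n))
    C[0, :] = -c                    # "spike" row holding the polynomial coefficients
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones (upper Hessenberg form)
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])
print(np.sort(np.linalg.eigvals(C).real))   # ~ [1. 2. 3.]
```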

11.
Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of a symmetric positive-definite (SPD) Toeplitz matrix. An algorithm for computing upper and lower bounds on the smallest eigenvalue of an SPD Toeplitz matrix has recently been derived (Linear Algebra Appl. 2007; DOI: 10.1016/j.laa.2007.05.008). The algorithm relies on the computation of the R factor of the QR factorization of the Toeplitz matrix and the inverse of R. The simultaneous computation of R and R⁻¹ is efficiently accomplished by the generalized Schur algorithm. In this paper, exploiting the properties of the latter algorithm, a numerical method is proposed to compute the smallest eigenvalue and the corresponding eigenvector of SPD Toeplitz matrices in an accurate way. Copyright © 2008 John Wiley & Sons, Ltd.
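For context, a minimal sketch that computes the smallest eigenvalue of an SPD Toeplitz matrix by shift-and-invert Lanczos (scipy's eigsh with sigma=0); this is a generic approach, not the generalized-Schur/QR-based method of the paper, and the exponentially decaying first column below is an arbitrary choice that makes the Toeplitz matrix positive definite.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import eigsh

n = 400
col = 0.5 ** np.arange(n)            # first column r^|i-j| with 0 < r < 1 gives an SPD Toeplitz matrix
T = toeplitz(col)

# Shift-and-invert targets the eigenvalue closest to sigma = 0, i.e. the smallest one for SPD T.
lam_min = eigsh(T, k=1, sigma=0.0, which='LM', return_eigenvectors=False)[0]
print(lam_min, np.linalg.eigvalsh(T)[0])   # the two values should agree
```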

12.
Journal of Complexity, 1993, 9(3): 387-405
We apply a novel approach to approximate all the eigenvalues of an n × n symmetric tridiagonal matrix A to within ϵ using at most n²([3 log₂(625n⁶)] + (83n − 34)[log₂(log₂((λ₁ − λₙ)/(2ϵ))/log₂(25n))]) arithmetic operations, where λ₁ and λₙ denote the extremal eigenvalues of A. The algorithm can be modified to compute any fixed number of the largest and smallest eigenvalues of A and may also be applied to banded symmetric matrices without reducing them to tridiagonal form.
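A minimal sketch of the bisection idea behind such ϵ-approximation guarantees (the textbook Sturm-count bisection with a Gershgorin starting interval, not the paper's specific operation-count scheme; the test matrix is an arbitrary example): each eigenvalue is bracketed and the bracket is halved until its width drops below 2ϵ.

```python
import numpy as np

def eig_count_below(a, b, x):
    """Eigenvalues of tridiag(a, b) strictly below x, via the pivots of the LDL^T of T - xI."""
    cnt, d = 0, 1.0
    for i in range(len(a)):
        d = a[i] - x - (b[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:
            d = 1e-300
        cnt += d < 0
    return cnt

def kth_eigenvalue(a, b, k, eps=1e-10):
    """Approximate the k-th smallest eigenvalue (k = 1..n) to within eps by bisection."""
    r = np.abs(np.r_[b, 0.0]) + np.abs(np.r_[0.0, b])
    lo, hi = np.min(a - r), np.max(a + r)           # Gershgorin bounds on the whole spectrum
    while hi - lo > 2 * eps:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if eig_count_below(a, b, mid) >= k else (mid, hi)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
a, b = rng.standard_normal(50), rng.standard_normal(49)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(kth_eigenvalue(a, b, 1), np.linalg.eigvalsh(T)[0])
```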

13.
Summary. A method is given for calculating the eigenvalues of a symmetric tridiagonal matrix. The method is shown to be stable, and for a large class of matrices it is, asymptotically, faster by an order of magnitude than the QR method.

14.
Summary. A rational version of the QR algorithm for symmetric tridiagonal matrices is presented. Stability is ensured by calculating the elements of the transformed matrix by various formulas, depending on the angle of rotation. Virtual origin shifts are determined from perturbation estimates for the leading 2×2 and 3×3 submatrices; the size of these shifts can optionally serve as a convergence criterion. A number of test matrices, including one with several degeneracies, were diagonalized; an average of 1.3–1.5 QR iterations per eigenvalue was needed for 12-figure precision, and 1.7–2.0 for 22-figure precision.
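A minimal sketch of the shifted QR iteration that these two abstracts build on (a dense, unoptimized version with a Wilkinson shift and deflation of the trailing entry; the rational rotation-by-rotation formulas and the 2×2/3×3 perturbation shifts of the paper are not reproduced, and the test matrix is an arbitrary example):

```python
import numpy as np

def qr_eigvals(T, tol=1e-12, max_iter=10_000):
    """Eigenvalues of a symmetric tridiagonal matrix by Wilkinson-shifted QR with deflation."""
    T = T.copy()
    eigs = []
    while T.shape[0] > 1 and max_iter > 0:
        n = T.shape[0]
        off = T[n - 1, n - 2]
        if abs(off) < tol * (abs(T[n - 1, n - 1]) + abs(T[n - 2, n - 2])):
            eigs.append(T[n - 1, n - 1])            # trailing eigenvalue converged: deflate
            T = T[:n - 1, :n - 1]
            continue
        d = 0.5 * (T[n - 2, n - 2] - T[n - 1, n - 1])
        sgn = 1.0 if d >= 0 else -1.0
        mu = T[n - 1, n - 1] - sgn * off**2 / (abs(d) + np.hypot(d, off))   # Wilkinson shift
        Q, R = np.linalg.qr(T - mu * np.eye(n))
        T = R @ Q + mu * np.eye(n)                  # orthogonal similarity; tridiagonal form is kept
        max_iter -= 1
    eigs.append(T[0, 0])
    return np.sort(np.array(eigs))

rng = np.random.default_rng(4)
a, b = rng.standard_normal(30), rng.standard_normal(29)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(np.allclose(qr_eigvals(T), np.linalg.eigvalsh(T)))
```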

15.
Summary. The symmetric tridiagonal eigenproblem has been the topic of some recent work, and many methods have been advanced for the computation of the eigenvalues of such a matrix. In this paper, we present a divide-and-conquer approach to the computation of the eigenvalues of a symmetric tridiagonal matrix via the evaluation of the characteristic polynomial. The problem of evaluating the characteristic polynomial is partitioned into smaller parts which are solved, and these solutions are then combined to form the solution to the original problem. We give the update equations for the characteristic polynomial and certain auxiliary polynomials used in the computation. Furthermore, this set of recursions can be implemented on a regular tree structure. If the concurrency exhibited by this algorithm is exploited, it can be shown that the time for computation of all the eigenvalues becomes O(n log n) instead of O(n²), as is the case for the approach where the order is increased by only one at every step. We address the numerical problems associated with the use of the characteristic polynomial and present a numerically stable technique for the eigenvalue computation.
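A minimal sketch of the divide-and-conquer evaluation idea, written with the standard 2×2 transfer-matrix form of the three-term recurrence for the characteristic polynomial of a symmetric tridiagonal matrix (the balanced-tree reduction below is a simple serial stand-in for the parallel tree in the paper, and the test matrix is an arbitrary example):

```python
import numpy as np

def charpoly_tree(a, b, lam):
    """det(T - lam*I) for T = tridiag(a, b), via a balanced product of 2x2 transfer matrices."""
    n = len(a)
    # Three-term recurrence: p_k = (a_k - lam) p_{k-1} - b_{k-1}^2 p_{k-2},
    # i.e. [p_k, p_{k-1}]^T = M_k [p_{k-1}, p_{k-2}]^T.
    mats = [np.array([[a[k] - lam, -b[k - 1] ** 2], [1.0, 0.0]]) for k in range(1, n)]

    def tree_prod(ms):                 # balanced tree: O(log n) depth if the halves run in parallel
        if len(ms) == 1:
            return ms[0]
        mid = len(ms) // 2
        return tree_prod(ms[mid:]) @ tree_prod(ms[:mid])

    start = np.array([a[0] - lam, 1.0])             # [p_1, p_0]
    M = tree_prod(mats) if mats else np.eye(2)
    return (M @ start)[0]                           # p_n(lam) = det(T - lam*I)

rng = np.random.default_rng(5)
a, b = rng.standard_normal(8), rng.standard_normal(7)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
lam = 0.3
print(charpoly_tree(a, b, lam), np.linalg.det(T - lam * np.eye(8)))
```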

16.
In this paper, we compare the convergence properties of three basic reduction methods by placing them in a general framework. It covers the reductions to tridiagonal, semiseparable and semiseparable-plus-diagonal form. These reductions are often used as the first step in the computation of the eigenvalues and/or eigenvectors of arbitrary matrices; in this way, the calculation of the eigenvalues using, for example, the QR algorithm reduces in complexity. First we investigate the convergence properties of these three reduction algorithms. It is shown that for the partially reduced matrices at step k of any of these reduction algorithms, the lower right k × k (already reduced) sub-block has the Lanczos–Ritz values with respect to a certain starting vector. It is also shown that the reductions to semiseparable and to semiseparable-plus-diagonal form have an extra convergence behavior: a special type of subspace iteration is performed on the lower right k × k submatrix, which contains these Ritz values. Second, we look in more detail at the behavior of the subspace iteration involved. It is shown that the reduction method can be interpreted as a nested type of multi-shift iteration. Theoretical results are presented, making it possible to predict the convergence behavior of these reduction algorithms. A theoretical bound on the convergence rate is also presented. Finally we illustrate, by means of numerical examples, how the convergence behavior can be tuned so that it becomes a powerful tool for certain applications.
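A minimal sketch of the subspace-iteration effect discussed above (plain orthogonal iteration on a k-dimensional block, not the nested multi-shift variant analysed in the paper; matrix, block size and iteration count are arbitrary choices): the Ritz values of the iterated block approach the k eigenvalues of largest magnitude.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 200, 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2

Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
for _ in range(300):                                  # plain orthogonal (block power) iteration
    Q, _ = np.linalg.qr(A @ Q)

ritz = np.sort(np.linalg.eigvalsh(Q.T @ A @ Q))       # Ritz values of the iterated block
evals = np.linalg.eigvalsh(A)
target = np.sort(evals[np.argsort(np.abs(evals))[-k:]])   # k eigenvalues of largest magnitude
print(ritz)
print(target)
```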

17.
A framework for an efficient low-complexity divide-and-conquer algorithm for computing eigenvalues and eigenvectors of an n × n symmetric band matrix with semibandwidth b ≪ n is proposed and its arithmetic complexity analyzed. The distinctive feature of the algorithm, after subdivision of the original problem into p subproblems and their solution, is a separation of the eigenvalue and eigenvector computations in the central synthesis problem. The eigenvalues are computed recursively by representing the corresponding symmetric rank-b(p−1) modification of a diagonal matrix as a series of rank-one modifications. Each rank-one modification problem can be solved using techniques developed for the tridiagonal divide-and-conquer algorithm. Once the eigenvalues are known, the corresponding eigenvectors can be computed efficiently using modified QR factorizations with restricted column pivoting. It is shown that the complexity of the resulting divide-and-conquer algorithm is O(n²b²) (in exact arithmetic). This revised version was published online in October 2005 with corrections to the cover date.
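The rank-one modification step at the heart of such divide-and-conquer solvers can be sketched as follows (the textbook secular-equation formulation solved here with a generic root finder rather than the specialised solvers used in practice; D, z and rho are arbitrary illustrative data, and distinct diagonal entries with nonzero z are assumed):

```python
import numpy as np
from scipy.optimize import brentq

# Eigenvalues of D + rho * z z^T are the roots of the secular equation
#   f(lam) = 1 + rho * sum_i z_i^2 / (d_i - lam) = 0,
# with one root in each interval (d_i, d_{i+1}) and one beyond d_n when rho > 0.
d = np.array([0.0, 1.0, 2.5, 4.0])
z = np.array([0.6, -0.3, 0.8, 0.2])
rho = 1.5

def f(lam):
    return 1.0 + rho * np.sum(z**2 / (d - lam))

eps = 1e-9
brackets = [(d[i] + eps, d[i + 1] - eps) for i in range(len(d) - 1)]
brackets.append((d[-1] + eps, d[-1] + rho * z @ z))     # upper bound for the last root
roots = np.array([brentq(f, lo, hi) for lo, hi in brackets])

print(np.sort(roots))
print(np.linalg.eigvalsh(np.diag(d) + rho * np.outer(z, z)))   # should match
```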

18.
Computing the extremal eigenvalue bounds of interval matrices is non-deterministic polynomial-time (NP) hard. We investigate bounds on the real eigenvalues of real symmetric tridiagonal interval matrices and prove that, for a given real symmetric tridiagonal interval matrix, the exact range of its smallest and largest eigenvalues can be obtained just by computing the extremal eigenvalues of four symmetric tridiagonal matrices.

19.
Volker Drygalla, PAMM, 2008, 8(1): 10809-10810
The use of higher-precision preconditioning for the symmetric eigenvalue problem and the singular value problem of general non-structured, non-graded matrices is discussed. Using the matrix Q from the QR decomposition as a preconditioner, applied to A in higher precision, in combination with Jacobi's method appears to allow the computation of all eigenvalues of symmetric positive definite matrices, resp. all singular values of general matrices, to nearly full accuracy. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

20.
The main advantage of inverse iteration over the QR algorithm and Divide & Conquer for the symmetric tridiagonal eigenproblem is that subsets of eigenpairs can be computed at reduced cost. The MRRR algorithm (MRRR = multiple relatively robust representations) is a clever variant of inverse iteration without the need for reorthogonalization. STEGR, the current version of MRRR in LAPACK 3.0, does not allow for subset computations. The next release of STEGR is designed to compute a (sub)set of k eigenpairs with O(kn) operations. Because of the special way in which eigenvectors are computed, MRRR subset computations are more complicated than when using inverse iteration. Unlike the latter, MRRR sometimes cannot ignore the unwanted part of the spectrum. We describe the problems with what we call 'false singletons': eigenvalues that appear to be isolated with respect to the wanted eigenvalues but in fact belong to a tight cluster of unwanted eigenvalues. This paper analyses these complications and ways to deal with them. Published in 2006 by John Wiley & Sons, Ltd.
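As a practical illustration of subset computations for the symmetric tridiagonal eigenproblem, here is a minimal sketch using scipy's eigh_tridiagonal with index-range selection (this dispatches to LAPACK bisection/inverse-iteration style routines rather than to STEGR/MRRR specifically; the matrix below is an arbitrary example):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(7)
n = 1000
d, e = rng.standard_normal(n), rng.standard_normal(n - 1)

# Compute only the eigenpairs with indices 0..9 (the 10 smallest eigenvalues) instead of all n.
w, v = eigh_tridiagonal(d, e, select='i', select_range=(0, 9))
print(w.shape, v.shape)                 # (10,), (1000, 10)

w_all = eigh_tridiagonal(d, e, eigvals_only=True)
print(np.allclose(w, w_all[:10]))       # the subset agrees with the full computation
```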
