Similar documents
20 similar documents found (search time: 31 ms)
1.
Summary. An adaptive Richardson iteration method is described for the solution of large sparse symmetric positive definite linear systems of equations with multiple right-hand side vectors. This scheme "learns" about the linear system to be solved by computing inner products of residual matrices during the iterations. These inner products are interpreted as block modified moments. A block version of the modified Chebyshev algorithm is presented which yields a block tridiagonal matrix from the block modified moments and the recursion coefficients of the residual polynomials. The eigenvalues of this block tridiagonal matrix define an interval, which determines the choice of relaxation parameters for Richardson iteration. Only minor modifications are necessary to obtain a scheme for the solution of symmetric indefinite linear systems with multiple right-hand side vectors; we outline the changes required. Received April 22, 1993
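The underlying iteration is easy to sketch. Below is a minimal stationary Richardson iteration for multiple right-hand sides in NumPy; the adaptive parameter selection via block modified moments is the paper's contribution and is not reproduced here — a single relaxation parameter derived from explicitly computed extremal eigenvalues is used instead.

```python
import numpy as np

def richardson(A, B, omega, iters=5000):
    """Stationary Richardson iteration X_{k+1} = X_k + omega*(B - A@X_k)
    for an SPD system with multiple right-hand sides (columns of B)."""
    X = np.zeros_like(B)
    for _ in range(iters):
        X = X + omega * (B - A @ X)
    return X

# SPD test matrix: 1-D second-difference (Laplacian) matrix
n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.random.default_rng(0).standard_normal((n, 3))
lam = np.linalg.eigvalsh(A)
omega = 2.0 / (lam[0] + lam[-1])   # optimal single relaxation parameter
X = richardson(A, B, omega)
```

With the optimal single parameter the convergence factor is (κ−1)/(κ+1), which is exactly why the adaptive scheme's cheap estimate of the eigenvalue interval matters in practice.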

2.
The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, this algorithm requires O(n³) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003) a reduction from a symmetric matrix into a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed where each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices, instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix will be computed by applying inverse iteration to this matrix. This will be achieved by using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decompositions of semiseparable matrices. Combined with the reduction of a symmetric matrix towards semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but has larger lower order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd.
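For contrast with the semiseparable approach, the classical first phase — orthogonal reduction of a symmetric matrix to tridiagonal form — can be sketched as follows (an unblocked Householder reduction, illustrative only; production code would call LAPACK's sytrd):

```python
import numpy as np

def tridiagonalize(A):
    """Unblocked Householder reduction of a symmetric matrix to
    tridiagonal form: returns orthogonal Q and tridiagonal T with
    A = Q @ T @ Q.T."""
    T = A.astype(float).copy()
    n = T.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = T[k+1:, k]
        s = 1.0 if x[0] >= 0 else -1.0
        alpha = -s * np.linalg.norm(x)     # stable sign choice
        v = x.copy()
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv == 0.0:                      # column already reduced
            continue
        v /= nv
        H = np.eye(n)
        H[k+1:, k+1:] -= 2.0 * np.outer(v, v)
        T = H @ T @ H                      # H is symmetric orthogonal
        Q = Q @ H
    return Q, T

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M + M.T
Q, T = tridiagonalize(A)
```

The paper replaces exactly this phase by a reduction to semiseparable form, after which its O(n)-per-step QR iteration takes over.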

3.
Journal of Complexity, 1993, 9(3):387–405
We apply a novel approach to approximate all the eigenvalues of an n × n symmetric tridiagonal matrix A to within ϵ using at most n²([3 log₂(625n⁶)] + (83n − 34)[log₂(log₂((λ₁ − λₙ)/(2ϵ))/log₂(25n))]) arithmetic operations, where λ₁ and λₙ denote the extremal eigenvalues of A. The algorithm can be modified to compute any fixed number of the largest and the smallest eigenvalues of A and may also be applied to band symmetric matrices without their reduction to tridiagonal form.

4.
In this paper we study both direct and inverse eigenvalue problems for diagonal-plus-semiseparable (dpss) matrices. In particular, we show that the computation of the eigenvalues of a symmetric dpss matrix can be reduced by a congruence transformation to solving a generalized symmetric definite tridiagonal eigenproblem. Using this reduction, we devise a set of recurrence relations for evaluating the characteristic polynomial of a dpss matrix in a stable way in linear time. This in turn allows us to apply divide-and-conquer eigenvalue solvers based on functional iterations directly to dpss matrices without performing any preliminary reduction to tridiagonal form. In the second part of the paper, we exploit the structural properties of dpss matrices to solve the inverse eigenvalue problem of reconstructing a symmetric dpss matrix from its spectrum and some additional information. Finally, applications of our results to the computation of a QR factorization of a Cauchy matrix with real nodes are provided.
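The recurrence idea can be illustrated on the plain tridiagonal case (the paper derives analogous linear-time recurrences for dpss matrices; this simpler sketch evaluates det(T − λI) for a symmetric tridiagonal T by the classical three-term recurrence):

```python
import numpy as np

def charpoly_tridiag(a, b, lam):
    """Evaluate det(T - lam*I) for the symmetric tridiagonal matrix with
    diagonal a and off-diagonal b, via the three-term recurrence
    p_k = (a_k - lam) p_{k-1} - b_{k-1}^2 p_{k-2}, in O(n) operations."""
    p_prev, p = 1.0, a[0] - lam
    for k in range(1, len(a)):
        p_prev, p = p, (a[k] - lam) * p - b[k-1]**2 * p_prev
    return p
```

Evaluating the characteristic polynomial this cheaply is what makes functional-iteration eigenvalue solvers practical without a preliminary tridiagonal reduction.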

5.
The paper presents a method for solving the eigenvalue problem Ax = λBx, where A and B are real symmetric but not necessarily positive definite matrices, and B is nonsingular. The method reduces the general case into a form Cz = λz where C is a pseudosymmetric matrix. A further reduction of C produces a tridiagonal pseudosymmetric form to which the iterative HR process is applied. The tridiagonal pseudosymmetric form is invariant under the HR transformations. The amount of computation is significantly less than in treating the problem by a general method.

6.
It is proved that the eigenvectors of a symmetric centrosymmetric matrix of order N are either symmetric or skew symmetric, and that there are ⌈N/2⌉ symmetric and ⌊N/2⌋ skew symmetric eigenvectors. Some previously known but widely scattered facts about symmetric centrosymmetric matrices are presented for completeness. Special cases are considered, in particular tridiagonal matrices of both odd and even order, for which it is shown that the eigenvectors corresponding to the eigenvalues arranged in descending order are alternately symmetric and skew symmetric provided the eigenvalues are distinct.
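The statement is easy to verify numerically. A sketch using the second-difference matrix, which is centrosymmetric with distinct eigenvalues:

```python
import numpy as np

# Symmetric centrosymmetric matrix of odd order N = 7 with distinct
# eigenvalues: the tridiagonal second-difference matrix.
N = 7
A = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
J = np.fliplr(np.eye(N))            # exchange matrix; centrosymmetry: J A J = A

w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
is_sym  = [np.allclose(J @ v,  v) for v in V.T]   # symmetric:     J v =  v
is_skew = [np.allclose(J @ v, -v) for v in V.T]   # skew symmetric: J v = -v
```

For N = 7 this gives ⌈7/2⌉ = 4 symmetric and ⌊7/2⌋ = 3 skew symmetric eigenvectors, alternating when the eigenvalues are arranged in descending order, as the abstract states.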

7.
The Lanczos algorithm is used to compute some eigenvalues of a given symmetric matrix of large order. At each step of the Lanczos algorithm it is valuable to know which eigenvalues of the associated tridiagonal matrix have stabilized at eigenvalues of the given symmetric matrix. We present a robust algorithm which is fast (20j to 40j operations at the jth Lanczos step), uses about 30 words of extra storage, and has a fairly short program (approximately 200 executable statements).
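The setting can be sketched as follows — a basic Lanczos iteration producing the tridiagonal coefficients whose eigenvalues (Ritz values) stabilize at eigenvalues of A. The paper's contribution, the cheap stabilization test itself, is not reproduced; full reorthogonalization is used here for simplicity.

```python
import numpy as np

def lanczos(A, v0, m):
    """Basic Lanczos iteration with full reorthogonalization: returns the
    coefficients (alpha, beta) of the tridiagonal matrix whose Ritz values
    approximate eigenvalues of the symmetric matrix A."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev, b = np.zeros(n), 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - b * v_prev
        alpha[j] = v @ w
        w -= alpha[j] * v
        w -= V[:, :j+1] @ (V[:, :j+1].T @ w)   # reorthogonalize
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    return alpha, beta

rng = np.random.default_rng(2)
M = rng.standard_normal((8, 8))
A = M + M.T
alpha, beta = lanczos(A, rng.standard_normal(8), 8)
Tm = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

After m = n steps (with reorthogonalization) the Ritz values coincide with the eigenvalues of A; the paper's test detects stabilized Ritz values much earlier, at O(j) cost per step.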

8.
This report may be considered as a non-trivial extension of an unpublished report by William Kahan (Accurate Eigenvalues of a symmetric tri-diagonal matrix, Technical Report CS 41, Computer Science Department, Stanford University, 1966). His interplay between matrix theory and computer arithmetic led to the development of algorithms for computing accurate eigenvalues and singular values. His report is generally considered as the precursor for the development of IEEE standard 754 for binary arithmetic. This standard has been universally adopted by virtually all PC, workstation and midrange hardware manufacturers, and tens of billions of such machines have been produced. Now we use the features in this standard to improve the original algorithm. In this paper, we describe an algorithm in floating-point arithmetic to compute the exact inertia of a real symmetric (shifted) tridiagonal matrix. The inertia, denoted by the integer triplet (π, ν, ζ), is defined as the number of positive, negative and zero eigenvalues of a real symmetric (or complex Hermitian) matrix, and the adjective exact refers to the eigenvalues computed in exact arithmetic. This requires the floating-point computation of the diagonal matrix D of the LDLᵀ factorization of the shifted tridiagonal matrix T − τI with the +∞ and −∞ rounding modes defined in the IEEE 754 standard. We are not aware of any other algorithm which gives the exact answer to a numerical problem when implemented in floating-point arithmetic in standard working precisions. The guaranteed intervals for eigenvalues are obtained by bisection or multisection with this exact inertia information. Similarly, using the Golub-Kahan form, guaranteed intervals for singular values of bidiagonal matrices can be computed. The diameter of the eigenvalue (singular value) intervals depends on the number of shifts with inconsistent inertia in the two rounding modes.
Our algorithm not only guarantees the accuracy of the solutions but is also consistent across different IEEE 754 compliant architectures. The unprecedented accuracy provided by our algorithms could also be used to debug and validate standard floating-point algorithms for the computation of eigenvalues (singular values). Accurate eigenvalues (singular values) are also required by certain algorithms to compute accurate eigenvectors (singular vectors). We demonstrate the accuracy of our algorithms using standard matrix examples. For the Wilkinson matrix, the eigenvalues (in IEEE double precision) are very accurate, with an (open) interval diameter of 6 ulps (units in the last place) for one of the eigenvalues and less (down to 2 ulps) for the others. These results are consistent across many architectures including Intel, AMD, SGI and DEC Alpha. However, by enabling IEEE double extended precision arithmetic in Intel/AMD 32-bit architectures at no extra computational cost, the (open) interval diameters were reduced to one ulp, which is the best possible solution for this problem. We have also computed the eigenvalues of a tridiagonal matrix which arises in Gauss-Laguerre quadrature; the results are extremely good in double extended precision but less so in double precision. To demonstrate the accuracy of computed singular values, we have also computed the eigenvalues of the Kac30 matrix, which is the Golub-Kahan form of a bidiagonal matrix. The tridiagonal matrix has known integer eigenvalues. The bidiagonal Cholesky factor of the Gauss-Laguerre tridiagonal is also included in the singular value study.
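A plain double-precision sketch of the underlying inertia computation and its use in bisection (without the paper's directed-rounding refinement that makes the count provably exact):

```python
import numpy as np

def neg_count(a, b, tau):
    """Number of eigenvalues below tau of the symmetric tridiagonal matrix
    with diagonal a and off-diagonal b, from the signs of the pivots of the
    LDL^t factorization of T - tau*I (Sturm count)."""
    count = 0
    d = a[0] - tau
    for k in range(1, len(a) + 1):
        if d == 0.0:
            d = -1e-300          # perturb exact zeros to avoid division by zero
        if d < 0.0:
            count += 1
        if k < len(a):
            d = (a[k] - tau) - b[k-1]**2 / d
    return count

def bisect_eig(a, b, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (0-based) by bisection on the inertia count,
    assuming [lo, hi] brackets the whole spectrum."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if neg_count(a, b, mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The paper's key step is running this pivot recurrence twice, once in each directed rounding mode, so that the two counts bracket the exact inertia.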

9.
The Markov–Bernstein inequalities for the Jacobi measure remained to be studied in detail. Indeed, the tools previously used for obtaining lower and upper bounds of the constant appearing in these inequalities did not work, since this constant is linked with the smallest eigenvalue of a five-diagonal positive definite symmetric matrix. The aim of this paper is to generalize the qd algorithm to positive definite symmetric band matrices and to give the means to expand the determinant of a five-diagonal symmetric matrix. These new tools are then applied to the problem to produce effective lower and upper bounds of the Markov–Bernstein constant in the Jacobi case. In the last part we compare, in the particular case of the Gegenbauer measure, the lower and upper bounds which can be deduced from this paper with those given in Draux and Elhami (Comput J Appl Math 106:203–243, 1999) and Draux (Numer Algor 24:31–58, 2000).

10.
It is proved that a real symmetric tridiagonal matrix with positive codiagonal elements is uniquely determined by its eigenvalues and the eigenvalues of the largest leading principal submatrix, and can be constructed from these data. The matrix depends continuously on the data, but because of rounding errors the investigated algorithm might in practice break down for large matrices.

11.
We show that the block principal pivot algorithm (BPPA) for the linear complementarity problem (LCP) solves the problem for a special class of matrices in at most n block principal pivot steps. We provide cycling examples for the BPPA in which the matrix is positive definite or symmetric positive definite. For LCP of order three, we prove that strict column (row) diagonal dominance is a sufficient condition to avoid cycling.

12.
Starting from the formulas of the conjugate gradient method, this paper derives a similarity relation between a symmetric positive definite matrix A and a tridiagonal matrix B, whose elements are determined by the iteration parameters of the conjugate gradient method. The computation of the condition number of a symmetric positive definite matrix can therefore be reduced to that of a tridiagonal matrix, and can be carried out as a by-product of the conjugate gradient iteration. It requires only O(s) additional operations, where s is the number of iterations, which is negligible compared with the cost of the conjugate gradient method itself. When A is not symmetric positive definite but is nonsingular, the conjugate gradient method can be applied to compute the extremal eigenvalues and condition number of AᵀA, from which the condition number of A follows. Computations on various examples show that this is a fast, effective and simple method.
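A sketch of the idea (my reconstruction, not necessarily the paper's exact formulation): the CG parameters α_k, β_k define the entries of a Lanczos tridiagonal matrix whose extremal eigenvalues estimate those of A, hence its condition number, at negligible extra cost.

```python
import numpy as np

def cg_condition(A, b, iters):
    """Run CG for iters steps, assemble the Lanczos tridiagonal from the
    iteration parameters alpha_k, beta_k, and return the ratio of its
    extremal eigenvalues as a condition-number estimate for SPD A."""
    x = np.zeros(len(b))
    r = b - A @ x
    p = r.copy()
    alphas, betas = [], []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        alphas.append(alpha)
        betas.append(beta)
        r, p = r_new, r_new + beta * p
    m = len(alphas)
    T = np.zeros((m, m))          # standard CG-to-Lanczos correspondence
    for k in range(m):
        T[k, k] = 1.0/alphas[k] + (betas[k-1]/alphas[k-1] if k > 0 else 0.0)
        if k < m - 1:
            T[k, k+1] = T[k+1, k] = np.sqrt(betas[k]) / alphas[k]
    ev = np.linalg.eigvalsh(T)
    return ev[-1] / ev[0]

rng = np.random.default_rng(3)
Qm, _ = np.linalg.qr(rng.standard_normal((12, 12)))
A = Qm @ np.diag(np.linspace(1.0, 10.0, 12)) @ Qm.T   # SPD with cond(A) = 10
est = cg_condition(A, rng.standard_normal(12), 12)
```

Since the tridiagonal matrix is built from quantities CG computes anyway, the estimate really does come "for free" alongside the solve.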

13.
Factorized sparse approximate inverse (FSAI) preconditioners are robust algorithms for symmetric positive definite matrices, which are particularly attractive in a parallel computational environment because of their inherent and almost perfect scalability. Their degree of parallelism even exceeds the capabilities of current computational architectures. In this work, we present two new approaches to FSAI preconditioners with the aim of improving their effectiveness by adding some sequentiality to the native formulation. The first, denoted block tridiagonal FSAI, is based on a block tridiagonal factorization strategy, whereas the second, domain decomposition FSAI, is built by reordering the matrix graph according to a multilevel k-way partitioning method followed by a bandwidth minimization algorithm. We test these preconditioners by solving a set of symmetric positive definite problems arising from different engineering applications. The results are evaluated in terms of performance, scalability, and robustness, showing that both strategies lead to faster-converging schemes, in both iteration count and total computational time, than the native FSAI, with no significant loss in algorithmic parallelism.

14.
While numerically stable techniques have been available for deflating a full n by n matrix, no satisfactory finite technique has been known which preserves Hessenberg form. We describe a new algorithm which explicitly deflates a Hessenberg matrix in floating point arithmetic by means of a sequence of plane rotations. When applied to a symmetric tridiagonal matrix, the deflated matrix is again symmetric tridiagonal. Repeated deflation can be used to find an orthogonal set of eigenvectors associated with any selection of eigenvalues of a symmetric tridiagonal matrix.
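For the symmetric tridiagonal case the idea can be sketched directly: a bottom-up sweep of plane (Givens) rotations that maps a computed eigenvector onto the first coordinate axis decouples the corresponding eigenvalue while, in exact arithmetic, the rest of the matrix stays tridiagonal. This is an illustrative reconstruction under those assumptions, not the paper's Hessenberg algorithm.

```python
import numpy as np

def deflate_tridiag(T, lam, v):
    """Deflate a known eigenpair (lam, v) of a symmetric tridiagonal T by a
    bottom-up sweep of plane rotations; the rotated matrix has lam decoupled
    in position (0, 0) and remains tridiagonal (up to roundoff)."""
    n = T.shape[0]
    T = T.copy()
    v = v.copy()
    for k in range(n - 2, -1, -1):       # zero v[k+1] against v[k]
        r = np.hypot(v[k], v[k+1])
        if r == 0.0:
            continue
        c, s = v[k] / r, v[k+1] / r
        G = np.eye(n)
        G[k, k], G[k, k+1] = c, s
        G[k+1, k], G[k+1, k+1] = -s, c
        v = G @ v                        # rotated vector: v[k+1] becomes 0
        T = G @ T @ G.T                  # similarity transform
    return T

n = 6
T = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w, V = np.linalg.eigh(T)
Td = deflate_tridiag(T, w[2], V[:, 2])   # deflate the third eigenpair
```

The fill-in created by each rotation is annihilated by the next one precisely because v is an eigenvector, which is the mechanism behind the paper's finite explicit deflation.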

15.
A well known numerical task is the inversion of large symmetric tridiagonal Toeplitz matrices, i.e., matrices whose entries equal a on the diagonal and b on the extra diagonals (\(a, b\in \mathbb R\)). The inverses of such matrices are dense and there exist well known explicit formulas by which they can be calculated in \(\mathcal O(n^2)\). In this note we present a simplification of the problem that has proven to be rather useful in everyday practice: If \(\vert a\vert > 2\vert b\vert \), that is, if the matrix is strictly diagonally dominant, its inverse is a band matrix to working precision and the bandwidth is independent of n for sufficiently large n. Employing this observation, we construct a linear time algorithm for an explicit tridiagonal inversion that only uses \(\mathcal O(1)\) floating point operations. On the basis of this simplified inversion algorithm we outline the cornerstones for an efficient parallelizable approximative equation solver.
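The observation is easy to reproduce numerically. A sketch, assuming the standard decay result that the inverse entries behave like q^|i−j| with q the magnitude of the smaller root of b·x² + a·x + b = 0:

```python
import numpy as np

# Strictly diagonally dominant symmetric tridiagonal Toeplitz matrix
# (|a| > 2|b|): its inverse is dense in exact arithmetic, but the entries
# decay geometrically away from the diagonal, so to working precision the
# inverse is banded with an n-independent bandwidth.
a, b, n = 4.0, 1.0, 60
T = a*np.eye(n) + b*(np.eye(n, k=1) + np.eye(n, k=-1))
Tinv = np.linalg.inv(T)

# decay rate: smaller-magnitude root of b*x**2 + a*x + b = 0 is -(2 - sqrt(3))
q = 2.0 - np.sqrt(3.0)
bandwidth = int(np.ceil(np.log(1e-13) / np.log(q)))   # entries below 1e-13 beyond this
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > bandwidth
tail = np.abs(Tinv[mask]).max()
```

Truncating the explicit inverse to this fixed bandwidth is what turns the classical O(n²) formulas into the linear-time solver outlined in the note.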

16.
On the way to establishing a commutative analog to the Gelfand-Kirillov theorem in Lie theory, Kostant and Wallach produced a decomposition of M(n) which we will describe in the language of linear algebra. The “Ritz values” of a matrix are the eigenvalues of its leading principal submatrices of order m=1,2,…,n. There is a unique unit upper Hessenberg matrix H with those eigenvalues. For real symmetric matrices with interlacing Ritz values, we extend their analysis to allow eigenvalues at successive levels to be equal. We also decide whether given Ritz values can come from a tridiagonal matrix.

17.
We derive new perturbation bounds for eigenvalues of Hermitian matrices with block tridiagonal structure. The main message of this paper is that an eigenvalue is insensitive to blockwise perturbation, if it is well-separated from the spectrum of the diagonal blocks nearby the perturbed blocks. Our bound is particularly effective when the matrix is block-diagonally dominant and graded. Our approach is to obtain eigenvalue bounds via bounding eigenvector components, which is based on the observation that an eigenvalue is insensitive to componentwise perturbation if the corresponding eigenvector components are small. We use the same idea to explain two well-known phenomena, one concerning aggressive early deflation used in the symmetric tridiagonal QR algorithm and the other concerning the extremal eigenvalues of Wilkinson matrices.

18.
We are interested in higher-order derivatives of functions of the eigenvalues of real symmetric matrices with respect to the matrix argument. We describe a formula for the k-th derivative of such functions in two general cases. The first case concerns the derivatives of the composition of an arbitrary (not necessarily symmetric) k-times differentiable function with the eigenvalues of symmetric matrices at a symmetric matrix with distinct eigenvalues. The second case describes the derivatives of the composition of a k-times differentiable separable symmetric function with the eigenvalues of symmetric matrices at an arbitrary symmetric matrix. We show that the formula significantly simplifies when the separable symmetric function is k-times continuously differentiable. As an application of the developed techniques, we re-derive the formula for the Hessian of a general spectral function at an arbitrary symmetric matrix. The new tools lead to a shorter, cleaner derivation than the original one. To make the exposition as self-contained as possible, we have included the necessary background results and definitions. Proofs of the intermediate technical results are collected in the appendices.

19.
In this paper, a formula for inverting general band matrices is established. It takes a simple form when the matrices are tridiagonal, and as a special case it includes the Bukhberger-Emel'yanenko algorithm for symmetric tridiagonal matrices.
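For the symmetric tridiagonal special case, an explicit inverse of this kind can be written with the classical theta/phi determinant recurrences (a Usmani-style formula, shown here as a hedged illustration of the type of result the paper specializes):

```python
import numpy as np

def tridiag_inverse(a, b):
    """Explicit inverse of the symmetric tridiagonal matrix with diagonal a
    and off-diagonal b, via the determinant recurrences
    theta_i = a_i*theta_{i-1} - b_{i-1}^2*theta_{i-2}  (leading minors) and
    phi_i   = a_i*phi_{i+1}   - b_i^2*phi_{i+2}        (trailing minors):
    (T^{-1})_{ij} = (-1)^{i+j} (b_i...b_{j-1}) theta_{i-1} phi_{j+1} / theta_n."""
    n = len(a)
    theta = np.zeros(n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i-1]*theta[i-1] - b[i-2]**2 * theta[i-2]
    phi = np.zeros(n + 2)
    phi[n+1], phi[n] = 1.0, a[n-1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i-1]*phi[i+1] - b[i-1]**2 * phi[i+2]
    Tinv = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            prod = np.prod(b[i-1:j-1]) if j > i else 1.0
            Tinv[i-1, j-1] = (-1)**(i+j) * prod * theta[i-1] * phi[j+1] / theta[n]
            Tinv[j-1, i-1] = Tinv[i-1, j-1]     # symmetry
    return Tinv
```

Note theta[n] = det(T), so the formula assumes T is nonsingular.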

20.
Volker Drygalla, PAMM, 2008, 8(1):10809–10810
The use of higher precision preconditioning for the symmetric eigenvalue problem and the singular value problem of general non-structured, non-graded matrices is discussed. The matrix Q from the QR decomposition, applied to A as a preconditioner in higher precision, in combination with Jacobi's method, seems to allow the computation of all eigenvalues of symmetric positive definite matrices, resp. all singular values of general matrices, to nearly full accuracy. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号