Similar Articles
20 similar articles found (search time: 31 ms)
1.
Summary. It is well known that the zeros of a polynomial $p$ are equal to the eigenvalues of the associated companion matrix $A$. In this paper we take a geometric view of the conditioning of these two problems and of the stability of algorithms for polynomial zerofinding. The $\epsilon$-pseudozero set of $p$ is the set of zeros of all polynomials obtained by coefficientwise perturbations of $p$ of size at most $\epsilon$; this is a subset of the complex plane considered earlier by Mosier, and is bounded by a certain generalized lemniscate. The $\epsilon$-pseudospectrum of $A$ is another subset of the complex plane defined as the set of eigenvalues of matrices $A+E$ with $\|E\|\le\epsilon$; it is bounded by a level curve of the norm of the resolvent of $A$. We find that if $A$ is first balanced in the usual EISPACK sense, then these two sets are usually quite close to one another. It follows that the Matlab ROOTS algorithm of balancing the companion matrix, then computing its eigenvalues, is a stable algorithm for polynomial zerofinding. Experimental comparisons with the Jenkins-Traub (IMSL) and Madsen-Reid (Harwell) Fortran codes confirm that these three algorithms have roughly similar stability properties. Received June 15, 1993
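The companion-matrix approach the abstract describes can be sketched in a few lines of numpy. This is only an illustration of the ROOTS idea, not the Matlab/EISPACK code itself; numpy's dense eigenvalue solver applies its own balancing internally.

```python
import numpy as np

# Zeros of p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) via the
# eigenvalues of the companion matrix: first row holds the negated
# coefficients of the monic polynomial, with ones on the subdiagonal.
coeffs = [1.0, -6.0, 11.0, -6.0]
n = len(coeffs) - 1
C = np.zeros((n, n))
C[0, :] = -np.array(coeffs[1:])     # first row: -a1, -a2, -a3
C[1:, :-1] = np.eye(n - 1)          # subdiagonal of ones
zeros = np.sort(np.linalg.eigvals(C).real)
print(zeros)                        # ~ [1. 2. 3.]
```

The characteristic polynomial of `C` is exactly `p`, so its eigenvalues are the polynomial's zeros.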

2.
It is commonplace in many application domains to utilize polynomial eigenvalue problems to model the behaviour of physical systems. Many techniques exist to compute solutions of these polynomial eigenvalue problems. One of the most frequently used techniques is linearization, in which the polynomial eigenvalue problem is turned into an equivalent linear eigenvalue problem with the same eigenvalues, and with easily recoverable eigenvectors. The eigenvalues and eigenvectors of the linearization are usually computed using a backward stable solver such as the QZ algorithm. Such backward stable algorithms ensure that the computed eigenvalues and eigenvectors of the linearization are exactly those of a nearby linear pencil, where the perturbations are bounded in terms of the machine precision and the norms of the matrices defining the linearization. Although we have solved a nearby linear eigenvalue problem, we are not certain that our computed solution is in fact the exact solution of a nearby polynomial eigenvalue problem. Here, we perform a backward error analysis for the solution of a specific linearization for polynomials expressed in the monomial basis. We use a suitable one-sided factorization of the linearization that allows us to map generic perturbations of the linearization onto structured perturbations of the polynomial coefficients. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
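The linearization step can be illustrated for a quadratic P(lam) = lam^2 M + lam D + K using the first companion form; the 2x2 matrices here are a made-up example, and scipy's QZ-based dense solver stands in for the backward stable solver the abstract mentions.

```python
import numpy as np
from scipy.linalg import eigvals

# First companion linearization of P(lam) = lam^2 M + lam D + K:
# the pencil (A, B) has the same eigenvalues as P, and an eigenvector
# of the pencil has the block form [x; lam*x], so x is easily recovered.
M = np.eye(2)
D = np.zeros((2, 2))
K = -np.diag([1.0, 4.0])            # P(lam) = lam^2 I - diag(1, 4)
n = 2
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
lam = eigvals(A, B)                 # generalized eigenvalues via QZ
print(np.sort(lam.real))            # the eigenvalues of P: -2, -1, 1, 2
```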

3.
In this paper we propose a method for computing the roots of a monic matrix polynomial. To this end we compute the eigenvalues of the corresponding block companion matrix C. This is done by implementing the QR algorithm in such a way that it exploits the rank structure of the matrix. Because of this structure, we can represent the matrix in a Givens-weight representation. A method similar to that of Chandrasekaran et al. (Oper Theory Adv Appl 179:111–143, 2007), bulge chasing, is used during the QR iteration. For practical usage, the matrix C has to be brought into Hessenberg form before the QR iteration starts. During the QR iteration and the transformation to Hessenberg form, the property of the matrix being unitary plus low rank numerically deteriorates. A method to restore this property is used.
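The block companion matrix itself is easy to form; the sketch below computes its eigenvalues densely for a small monic matrix polynomial. The paper's contribution is the structured (unitary-plus-low-rank) QR iteration that avoids this dense computation, which is not reproduced here.

```python
import numpy as np

# Block companion matrix of the monic matrix polynomial
# P(lam) = lam^2 I + lam A1 + A0. Its eigenvalues are the roots of P:
# for an eigenvector [x; lam*x], the second block row reads
# -A0 x - A1 (lam x) = lam (lam x), i.e. P(lam) x = 0.
A1 = np.zeros((2, 2))
A0 = -np.diag([4.0, 9.0])           # P(lam) = lam^2 I - diag(4, 9)
C = np.block([[np.zeros((2, 2)), np.eye(2)], [-A0, -A1]])
roots = np.sort(np.linalg.eigvals(C).real)
print(roots)                        # ~ [-3, -2, 2, 3]
```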

4.
One of the most efficient methods for solving the polynomial eigenvalue problem (PEP) is the Sakurai-Sugiura method with Rayleigh-Ritz projection (SS-RR), which finds the eigenvalues contained in a certain domain using the contour integral. The SS-RR method converts the original PEP to a small projected PEP using the Rayleigh-Ritz projection. However, the SS-RR method suffers from backward instability when the norms of the coefficient matrices of the projected PEP vary widely. To improve the backward stability of the SS-RR method, we combine it with a balancing technique for solving a small projected PEP. We then analyze the backward stability of the SS-RR method. Several numerical examples demonstrate that the SS-RR method with the balancing technique reduces the backward error of eigenpairs of PEP.
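One standard way to balance a quadratic whose coefficient norms vary widely is the scaling of Fan, Lin and Van Dooren (lam = gamma*mu, scaled by delta), which equalizes the coefficient norms. This is a well-known technique shown only for illustration; it is not necessarily the specific balancing the SS-RR paper employs.

```python
import numpy as np

# Fan-Lin-Van Dooren scaling of lam^2 M + lam D + K: choose
# gamma = sqrt(||K||/||M||), delta = 2/(||K|| + gamma*||D||), and
# replace the coefficients by (gamma^2*delta*M, gamma*delta*D, delta*K).
M = 1e-6 * np.eye(3)
D = np.eye(3)
K = 1e6 * np.eye(3)                 # norms spread over 12 orders of magnitude
gamma = np.sqrt(np.linalg.norm(K, 2) / np.linalg.norm(M, 2))
delta = 2.0 / (np.linalg.norm(K, 2) + gamma * np.linalg.norm(D, 2))
Ms, Ds, Ks = gamma**2 * delta * M, gamma * delta * D, delta * K
norms = [np.linalg.norm(X, 2) for X in (Ms, Ds, Ks)]
print(norms)                        # all ~ 1 after scaling
```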

5.
We discuss the perturbation analysis for eigenvalues and eigenvectors of structured homogeneous matrix polynomials with Hermitian, skew-Hermitian, H-even and H-odd structure. We construct minimal structured perturbations (structured backward errors) such that an approximate eigenvalue and eigenvector pair (finite or infinite eigenvalues) is an exact eigenvalue-eigenvector pair of an appropriately perturbed structured matrix polynomial. We present various comparisons with unstructured backward errors and previous backward errors constructed for the non-homogeneous case and show that our results generalize previous results.

6.
This paper summarizes the results of comparative testing of (1) Wilf's global bisection method, (2) the Laguerre method, (3) the companion matrix eigenvalue method, (4) the companion matrix eigenvalue method with balancing, and (5) the Jenkins-Traub method, all of which are methods for finding the zeros of polynomials. The test set of polynomials used is that suggested in [5]. The methods were compared on each test polynomial on the basis of the accuracy of the computed roots and the CPU time required to numerically compute all roots.

7.
The bezoutian matrix, which provides information concerning co-primeness and greatest common divisor of polynomials, has recently been generalized by Heinig to the case of square polynomial matrices. Some of the properties of the bezoutian for the scalar case then carry over directly. In particular, the central result of the paper is an extension of a factorization due to Barnett, which enables the bezoutian to be expressed in terms of a Kronecker matrix polynomial in an appropriate block companion matrix. The most important consequence of this result is a determination of the structure of the kernel of the bezoutian. Thus, the bezoutian is nonsingular if and only if the two polynomial matrices have no common eigenvalues (i.e., their determinants are relatively prime); otherwise, the dimension of the kernel is given in terms of the multiplicities of the common eigenvalues of the polynomial matrices. Finally, an explicit basis is developed for the kernel of the bezoutian, using the concept of Jordan chains.
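For the scalar case the abstract builds on, the bezoutian of p and q is the matrix of the bivariate form (p(x)q(y) - p(y)q(x))/(x - y); it is singular exactly when p and q share a root. A minimal sketch of that scalar case (Heinig's matrix-polynomial generalization replaces the scalar coefficients by matrix blocks):

```python
import numpy as np

# Scalar Bezout matrix: entry (i, j) collects the coefficient of
# x^i y^j in (p(x)q(y) - p(y)q(x)) / (x - y), with p, q given by
# coefficients in increasing-degree order.
def bezout(p, q):
    n = max(len(p), len(q)) - 1
    p = np.pad(np.asarray(p, float), (0, n + 1 - len(p)))
    q = np.pad(np.asarray(q, float), (0, n + 1 - len(q)))
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(min(i, n - 1 - j) + 1):
                B[i, j] += p[j + 1 + k] * q[i - k] - q[j + 1 + k] * p[i - k]
    return B

B1 = bezout([-1.0, 0.0, 1.0], [-1.0, 1.0])   # p=(x-1)(x+1), q=x-1: common root 1
B2 = bezout([2.0, -3.0, 1.0], [-3.0, 1.0])   # p=(x-1)(x-2), q=x-3: coprime
print(np.linalg.matrix_rank(B1), np.linalg.det(B2))
```

In the first case the kernel is one-dimensional (one common root of multiplicity one); in the second the matrix is nonsingular, with |det| equal to the resultant.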

8.
For a given polynomial in the usual power form, its associated companion matrix can be applied to investigate qualitative properties, such as the location of the roots of the polynomial relative to regions of the complex plane, or to determine the greatest common divisor of a set of polynomials. If the polynomial is in “generalized” form, i.e. expressed relative to an orthogonal basis, then an analogue to the companion matrix has been termed the comrade form. This followed a special case when the basis is Chebyshev, for which the term colleague matrix had been introduced. When a yet more general basis is used, the corresponding matrix has been named confederate. These constitute the class of congenial matrices, which allow polynomials to be studied relative to an appropriate basis. Block-partitioned forms relate to polynomial matrices.
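The colleague matrix for the Chebyshev case is available directly in numpy, so the comrade/colleague idea can be demonstrated without converting to the power basis:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Colleague matrix: the companion-matrix analogue for a polynomial
# expressed in the Chebyshev basis. Its eigenvalues are the roots,
# computed without a basis transformation to the power form.
c = [0.0, 0.0, 1.0]                 # p = T_2(x) = 2x^2 - 1
A = C.chebcompanion(c)              # the (scaled) colleague matrix
roots = np.sort(np.linalg.eigvals(A).real)
print(roots)                        # ~ [-0.7071, 0.7071]
```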

9.
An algorithm based on the Ehrlich–Aberth root-finding method is presented for the computation of the eigenvalues of a T-palindromic matrix polynomial. A structured linearization of the polynomial represented in the Dickson basis is introduced in order to exploit the symmetry of the roots by halving the total number of the required approximations. The rank structure properties of the linearization allow the design of a fast and numerically robust implementation of the root-finding iteration. Numerical experiments that confirm the effectiveness and the robustness of the approach are provided.
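The underlying Ehrlich–Aberth iteration is a simultaneous Newton-like update with pairwise repulsion terms. The scalar prototype below is only that; the paper applies the same iteration through a structured linearization of a T-palindromic polynomial, which is not reproduced here.

```python
import numpy as np

# Ehrlich-Aberth iteration for all roots of a scalar polynomial:
# z_k <- z_k - w_k / (1 - w_k * sum_{j!=k} 1/(z_k - z_j)),
# where w_k = p(z_k)/p'(z_k) is the Newton correction.
def ehrlich_aberth(coeffs, iters=100):
    p = np.polynomial.Polynomial(coeffs[::-1])   # coeffs given high->low degree
    dp = p.deriv()
    n = len(coeffs) - 1
    z = 4.0 * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)  # circle of starts
    for _ in range(iters):
        newton = p(z) / dp(z)
        rep = np.array([np.sum(1.0 / (z[k] - np.delete(z, k))) for k in range(n)])
        z = z - newton / (1.0 - newton * rep)
    return z

z = ehrlich_aberth([1.0, -6.0, 11.0, -6.0])      # (x-1)(x-2)(x-3)
print(np.sort(z.real))
```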

10.
The computation of zeros of polynomials is a classical computational problem. This paper presents two new zerofinders that are based on the observation that, after a suitable change of variable, any polynomial can be considered a member of a family of Szegő polynomials. Numerical experiments indicate that these methods generally give higher accuracy than computing the eigenvalues of the companion matrix associated with the polynomial.

11.
An n × n Hessenberg matrix A is defined whose characteristic polynomial is expressed relative to an arbitrary basis. This generalizes the companion, colleague, and comrade matrices when the bases are, respectively, power, Chebyshev, and orthogonal, so the term “confederate” matrix is suggested. Some properties of A are derived, including an algorithm for computing powers of A. A scheme is given for inverting the transformation matrix between the arbitrary and power bases. A Vandermonde-type matrix associated with A and a block confederate matrix are defined.

12.
For many applications — such as the look-ahead variants of the Lanczos algorithm — a sequence of formal (block-)orthogonal polynomials is required. Usually, one generates such a sequence by taking suitable polynomial combinations of a pair of basis polynomials. These basis polynomials are determined by a look-ahead generalization of the classical three-term recurrence, where the polynomial coefficients are obtained by solving a small system of linear equations. In finite precision arithmetic, the numerical orthogonality of the polynomials depends on a good choice of the size of the small systems; this size is usually controlled by a heuristic argument such as the condition number of the small matrix of coefficients. However, quite often it happens that orthogonality gets lost. We present a new variant of the Cabay-Meleshko algorithm for numerically computing pairs of basis polynomials, where the numerical orthogonality is explicitly monitored with the help of stability parameters. A corresponding error analysis is given. Our stability parameter is shown to reflect the condition number of the underlying Hankel matrix of moments. This enables us to prove the weak and strong stability of our method, provided that the corresponding Hankel matrix is well-conditioned. This work was partially supported by the HCM project ROLLS, under contract CHRX-CT93-0416.
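The classical three-term recurrence that the look-ahead scheme generalizes can be shown concretely for Chebyshev polynomials, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), generated coefficient-wise:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Build T_0..T_5 by the three-term recurrence; coefficients are stored
# in increasing-degree order, and prepending a zero multiplies by x.
T = [np.array([1.0]), np.array([0.0, 1.0])]      # T_0 = 1, T_1 = x
for n in range(1, 5):
    T.append(P.polysub(2 * np.pad(T[n], (1, 0)), T[n - 1]))
print(T[4])     # T_4 = 8x^4 - 8x^2 + 1  ->  [1, 0, -8, 0, 8]
```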

13.
Theory, algorithms and LAPACK-style software for computing a pair of deflating subspaces with specified eigenvalues of a regular matrix pair (A, B) and error bounds for computed quantities (eigenvalues and eigenspaces) are presented. The reordering of specified eigenvalues is performed with a direct orthogonal transformation method with guaranteed numerical stability. Each swap of two adjacent diagonal blocks in the real generalized Schur form, where at least one of them corresponds to a complex conjugate pair of eigenvalues, involves solving a generalized Sylvester equation and the construction of two orthogonal transformation matrices from certain eigenspaces associated with the diagonal blocks. The swapping of two 1×1 blocks is performed using orthogonal (unitary) Givens rotations. The error bounds are based on estimates of condition numbers for eigenvalues and eigenspaces. The software computes reciprocal values of a condition number for an individual eigenvalue (or a cluster of eigenvalues), a condition number for an eigenvector (or eigenspace), and spectral projectors onto a selected cluster. By computing reciprocal values we avoid overflow. Changes in eigenvectors and eigenspaces are measured by their change in angle. The condition numbers yield both asymptotic and global error bounds. The asymptotic bounds are only accurate for small perturbations (E, F) of (A, B), while the global bounds work for all (E, F) up to a certain bound, whose size is determined by the conditioning of the problem. It is also shown how these upper bounds can be estimated. Fortran 77 software that implements our algorithms for reordering eigenvalues, computing (left and right) deflating subspaces with specified eigenvalues and condition number estimation are presented. Computational experiments that illustrate the accuracy, efficiency and reliability of our software are also described.
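The reordering operation described here is exposed in scipy via `ordqz`, which wraps the corresponding LAPACK routines; the sketch below moves the eigenvalues inside the unit circle to the top of the real generalized Schur form of a random pencil.

```python
import numpy as np
from scipy.linalg import ordqz

# Reorder the generalized Schur form of a regular pencil (A, B) so the
# eigenvalues inside the unit circle ('iuc') appear first. ordqz returns
# (AA, BB, alpha, beta, Q, Z) with A = Q @ AA @ Z^T, B = Q @ BB @ Z^T.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='iuc')
lam = alpha / beta
print(np.abs(lam))      # moduli: the values < 1 come first
```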

14.
The paper continues the investigation of methods for factorizing q-parameter polynomial matrices and considers their applications to solving multiparameter problems of algebra. An extension of the AB-algorithm, suggested earlier as a method for solving spectral problems for matrix pencils of the form A - λB, to the case of q-parameter (q ≥ 1) polynomial matrices of full rank is proposed. In accordance with the AB-algorithm, a finite sequence of q-parameter polynomial matrices such that every subsequent matrix provides a basis of the null-space of polynomial solutions of its transposed predecessor is constructed. A certain rule for selecting specific basis matrices is described. Applications of the AB-algorithm to computing complete polynomials of a q-parameter polynomial matrix and exhausting them from the regular spectrum of the matrix, to constructing irreducible factorizations of rational matrices satisfying certain assumptions, and to computing “free” bases of the null-spaces of polynomial solutions of an arbitrary q-parameter polynomial matrix are considered. Bibliography: 7 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 309, 2004, pp. 127–143.

15.
Correlation matrices—symmetric positive semidefinite matrices with unit diagonal—are important in statistics and in numerical linear algebra. For simulation and testing it is desirable to be able to generate random correlation matrices with specified eigenvalues (which must be nonnegative and sum to the dimension of the matrix). A popular algorithm of Bendel and Mickey takes a matrix having the specified eigenvalues and uses a finite sequence of Givens rotations to introduce 1s on the diagonal. We give improved formulae for computing the rotations and prove that the resulting algorithm is numerically stable. We show by example that the formulae originally proposed, which are used in certain existing Fortran implementations, can lead to serious instability. We also show how to modify the algorithm to generate a rectangular matrix with columns of unit 2-norm. Such a matrix represents a correlation matrix in factored form, which can be preferable to representing the matrix itself, for example when the correlation matrix is nearly singular to working precision.
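A Givens-rotation generator of this kind is available in scipy as `scipy.stats.random_correlation`; the sketch below produces a random correlation matrix with prescribed eigenvalues and checks both defining properties.

```python
import numpy as np
from scipy.stats import random_correlation

# Random correlation matrix with specified eigenvalues (nonnegative,
# summing to the dimension): unit diagonal is introduced by a finite
# sequence of Givens rotations applied to diag(eigs).
eigs = np.array([2.5, 1.0, 0.4, 0.1])     # sum = 4 = dimension
R = random_correlation.rvs(eigs, random_state=7)
print(np.diag(R))                          # unit diagonal
print(np.sort(np.linalg.eigvalsh(R)))      # the specified eigenvalues
```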

16.
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
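An invariant pair (X, S) of P(lam) = sum_i A_i lam^i satisfies sum_i A_i X S^i = 0, bundling several eigenpairs without requiring S to be diagonalizable. A minimal check of that defining residual, on an illustrative quadratic whose eigenpairs are known:

```python
import numpy as np

# For P(lam) = lam^2 M + lam D + K with K = -diag(1, 4), the eigenpairs
# (1, e1) and (2, e2) assemble into the invariant pair X = I, S = diag(1, 2),
# which must satisfy K X + D X S + M X S^2 = 0.
K = -np.diag([1.0, 4.0])
D = np.zeros((2, 2))
M = np.eye(2)
X = np.eye(2)                       # columns: eigenvectors e1, e2
S = np.diag([1.0, 2.0])             # corresponding eigenvalues
residual = K @ X + D @ X @ S + M @ X @ S @ S
print(np.linalg.norm(residual))     # 0: (X, S) is an invariant pair
```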

17.
José-Javier Martínez, Ana Marco, PAMM 2007, 7(1): 1021301–1021302
The class of Bernstein-Vandermonde matrices (a generalization of Vandermonde matrices arising when the monomial basis is replaced by the Bernstein basis) is considered. A convenient ordering of their rows makes these matrices strictly totally positive. By using results related to total positivity and Neville elimination, an algorithm for computing the bidiagonal decomposition of a Bernstein-Vandermonde matrix is constructed. The use of explicit expressions for the determinants involved in the process serves to make the algorithm both fast and accurate. One of the applications of our algorithm is the design of fast and accurate algorithms for solving Lagrange interpolation problems when using the Bernstein basis, an approach useful for the field of Computer Aided Geometric Design since it avoids the stability problems involved with basis transformations between the Bernstein and the monomial bases. A different application consists of the use of the bidiagonal decomposition as an intermediate step of the computation of the eigenvalues and the singular value decomposition of a totally positive Bernstein-Vandermonde matrix. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
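The matrices in question are easy to form: entry (i, j) evaluates the j-th degree-n Bernstein basis polynomial at node t_i. The sketch below builds one and solves a small Lagrange interpolation problem with a dense solver; the paper's point is that the bidiagonal decomposition does this faster and more accurately, which is not reproduced here.

```python
import numpy as np
from math import comb

# Bernstein-Vandermonde matrix for increasing nodes in (0, 1); with this
# row ordering the matrix is strictly totally positive. Interpolating the
# linear function f(t) = 2t + 1 must recover c_j = f(j/n), since the
# Bernstein basis reproduces linear functions exactly.
n = 3
t = np.array([0.1, 0.3, 0.6, 0.9])
A = np.array([[comb(n, j) * ti**j * (1 - ti)**(n - j) for j in range(n + 1)]
              for ti in t])
f = 2 * t + 1
c = np.linalg.solve(A, f)           # Bernstein coefficients of f
print(c)                            # [1, 5/3, 7/3, 3]
```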

18.
We show that the zeros of a trigonometric polynomial of degree $N$ with the usual $(2N +1)$ terms can be calculated by computing the eigenvalues of a matrix of dimension $2N$ with real-valued elements $M_{jk}$. This matrix $\vec{\vec{M}}$ is a multiplication matrix in the sense that, after first defining a vector $\vec{\phi}$ whose elements are the first $2N$ basis functions, $\vec{\vec{M}}\vec{\phi}$ = 2cos($t$)$\vec{\phi}$. This relationship is the eigenproblem; the zeros $t_{k}$ are the arccosine function of $\lambda_{k}/2$ where the $\lambda_{k}$ are the eigenvalues of $\vec{\vec {M}}$. We dub this the "Fourier Division Companion Matrix", or FDCM for short, because it is derived using trigonometric polynomial division. We show through examples that the algorithm computes both real and complex-valued roots, even double roots, to near machine precision accuracy.
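For comparison, the classical route to the same zeros substitutes z = exp(it), turning a degree-N trigonometric polynomial into a degree-2N algebraic polynomial in z with t_k = -i log z_k. This is the standard substitution, not the paper's real-valued FDCM.

```python
import numpy as np

# f(t) = cos(t) - 1/2 has zeros t = +-pi/3. With z = exp(it),
# cos(t) = (z + 1/z)/2, so z*f = z^2/2 - z/2 + 1/2, a degree-2
# algebraic polynomial whose roots lie on the unit circle.
zk = np.roots([0.5, -0.5, 0.5])
tk = np.sort((-1j * np.log(zk)).real)
print(tk)                           # ~ [-pi/3, pi/3]
```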

19.
The heavily damped quadratic eigenvalue problem (QEP) is a special class of QEP characterized by a large gap between the smallest and largest eigenvalues in absolute value. A common way of solving a QEP is to convert the original problem into an equivalent linear one via a linearization. Previous work on the accuracy of eigenpairs of QEPs that are not heavily damped focuses on analyzing the backward error of eigenpairs relative to linearizations. The objective of this paper is to explain why different linearizations lead to different errors when computing small and large eigenpairs. To this end, we bound the backward error of eigenpairs relative to the linearization methods, and use these bounds to build upper bounds on the growth factors for the backward error. We present results of numerical experiments that support the predictions of the proposed methods.
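The normwise backward error of an approximate eigenpair of a quadratic, the quantity this analysis bounds, is standard (Tisseur's formula); the sketch below evaluates it, with illustrative 2x2 matrices. The weighting by |lam|^2||M||, |lam|||C||, ||K|| in the denominator is exactly where small and large eigenvalues behave differently.

```python
import numpy as np

# Backward error of (lam, x) for Q(lam) = lam^2 M + lam C + K:
#   eta = ||Q(lam) x|| / ((|lam|^2 ||M|| + |lam| ||C|| + ||K||) ||x||).
def backward_error(M, C, K, lam, x):
    r = (lam**2 * M + lam * C + K) @ x
    denom = (abs(lam)**2 * np.linalg.norm(M, 2)
             + abs(lam) * np.linalg.norm(C, 2)
             + np.linalg.norm(K, 2)) * np.linalg.norm(x)
    return np.linalg.norm(r) / denom

# Q(lam) = (lam + 1)(lam + 4) I: lam = -1 with x = e1 is an exact pair.
M, C, K = np.eye(2), 5.0 * np.eye(2), 4.0 * np.eye(2)
eta = backward_error(M, C, K, -1.0, np.array([1.0, 0.0]))
print(eta)      # 0 up to rounding
```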

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)