Similar Documents
20 similar documents found.
1.
We develop first order eigenvalue expansions of one-parametric perturbations of square singular matrix polynomials. Although the eigenvalues of a singular matrix polynomial P(λ) are not continuous functions of the entries of the coefficients of the polynomial, we show that for most perturbations they are indeed continuous. Given an eigenvalue λ0 of P(λ) we prove that, for generic perturbations M(λ) of degree at most the degree of P(λ), the eigenvalues of P(λ)+εM(λ) admit convergent series expansions near λ0, and we describe the first order term of these expansions in terms of M(λ0) and certain particular bases of the left and right null spaces of P(λ0). In the important case of λ0 being a semisimple eigenvalue of P(λ), any bases of the left and right null spaces of P(λ0) can be used, and the first order term of the eigenvalue expansions takes a simple form. In this situation we also obtain the limit vector of the associated eigenvector expansions.

2.
Given a pair of distinct eigenvalues (λ1,λ2) of an n×n quadratic matrix polynomial Q(λ) with nonsingular leading coefficient and their corresponding eigenvectors, we show how to transform Q(λ) into a quadratic of the block diagonal form diag(Qd(λ), q(λ)) having the same eigenvalues as Q(λ), with Qd(λ) an (n-1)×(n-1) quadratic matrix polynomial and q(λ) a scalar quadratic polynomial with roots λ1 and λ2. This block diagonalization cannot be achieved by a similarity transformation applied directly to Q(λ) unless the eigenvectors corresponding to λ1 and λ2 are parallel. We identify conditions under which we can construct a family of 2n×2n elementary similarity transformations that (a) are rank-two modifications of the identity matrix, (b) act on linearizations of Q(λ), (c) preserve the block structure of a large class of block symmetric linearizations of Q(λ), thereby defining new quadratic matrix polynomials Q1(λ) that have the same eigenvalues as Q(λ), (d) yield quadratics Q1(λ) with the property that their eigenvectors associated with λ1 and λ2 are parallel and hence can subsequently be deflated by a similarity applied directly to Q1(λ). This is the first attempt at building elementary transformations that preserve the block structure of widely used linearizations and which have a specific action.

3.
The standard way to solve polynomial eigenvalue problems P(λ)x=0 is to convert the matrix polynomial P(λ) into a matrix pencil that preserves its spectral information, a process known as linearization. When P(λ) is palindromic, the eigenvalues, elementary divisors, and minimal indices of P(λ) have certain symmetries that can be lost when using the classical first and second Frobenius companion linearizations for numerical computations, since these linearizations do not preserve the palindromic structure. Recently new families of pencils have been introduced with the goal of finding linearizations that retain whatever structure the original P(λ) might possess, with particular attention to the preservation of palindromic structure. However, no general construction of palindromic linearizations valid for all palindromic polynomials has as yet been achieved. In this paper we present a family of linearizations for odd degree polynomials P(λ) which are palindromic whenever P(λ) is, and which are valid for all palindromic polynomials of odd degree. We illustrate our construction with several examples. In addition, we establish a simple way to recover the minimal indices of the polynomial from those of the linearizations in the new family.
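To make the linearization idea above concrete, here is a small illustrative NumPy sketch (not taken from the paper, which concerns structure-preserving palindromic linearizations): it assembles the first Frobenius companion pencil of a matrix polynomial and checks numerically that the pencil's eigenvalues make P(λ) singular. The function name, the random quadratic example, and the final check are assumptions made only for illustration.

```python
import numpy as np

def companion_pencil(coeffs):
    """First Frobenius companion pencil (X, Y) of P(lam) = sum_i lam**i * coeffs[i],
    so that C(lam) = lam*X + Y is a linearization of P; coeffs[-1] is the leading coefficient."""
    k = len(coeffs) - 1                      # degree of P
    n = coeffs[0].shape[0]
    X = np.eye(k * n)
    X[:n, :n] = coeffs[k]                    # X = diag(A_k, I, ..., I)
    Y = np.zeros((k * n, k * n))
    for j in range(k):                       # first block row of Y: A_{k-1}, ..., A_0
        Y[:n, j * n:(j + 1) * n] = coeffs[k - 1 - j]
    for i in range(1, k):                    # subdiagonal blocks of Y: -I
        Y[i * n:(i + 1) * n, (i - 1) * n:i * n] = -np.eye(n)
    return X, Y

rng = np.random.default_rng(0)
A0, A1, A2 = (rng.standard_normal((4, 4)) for _ in range(3))
X, Y = companion_pencil([A0, A1, A2])

# Eigenvalues of P solve (lam*X + Y) z = 0; A2 is generically nonsingular, so X is invertible.
lams = np.linalg.eigvals(np.linalg.solve(X, -Y))

# Check: at every computed eigenvalue the smallest singular value of P(lam) is tiny.
P = lambda lam: A0 + lam * A1 + lam ** 2 * A2
print(max(np.linalg.svd(P(lam), compute_uv=False)[-1] for lam in lams))
```

The same construction works for any degree; only the block layout of X and Y changes.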

4.
We study the properties of palindromic quadratic matrix polynomials φ(z)=P+Qz+Pz², i.e., quadratic polynomials where the coefficients P and Q are square matrices, and where the constant and the leading coefficients are equal. We show that, for suitable choices of the matrix coefficients P and Q, it is possible to characterize by means of φ(z) well known matrix functions, namely the matrix square root, the matrix polar factor, the matrix sign and the geometric mean of two matrices. Finally we provide some integral representations of these matrix functions.
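One immediate consequence of the palindromic form φ(z)=P+Qz+Pz² is the identity z²φ(1/z)=φ(z), so nonzero finite eigenvalues occur in reciprocal pairs (λ, 1/λ). The sketch below (an illustration of that symmetry only, not of the paper's matrix-function characterizations; the random matrices and the companion-pencil route are assumptions) verifies this numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
P = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))

# phi(z) = P + Q*z + P*z**2 satisfies z**2 * phi(1/z) = phi(z), so its eigenvalues
# occur in reciprocal pairs (lam, 1/lam).  Compute them via a companion pencil.
X = np.block([[P, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
Y = np.block([[Q, P], [-np.eye(n), np.zeros((n, n))]])
lams = np.linalg.eigvals(np.linalg.solve(X, -Y))     # P is generically nonsingular

# Every 1/lam should again be (numerically) an eigenvalue.
err = max(min(abs(1.0 / l - m) for m in lams) for l in lams)
print(err)
```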

5.
We develop a general framework for perturbation analysis of matrix polynomials. More specifically, we show that the normed linear space Lm(Cn×n) of n-by-n matrix polynomials of degree at most m provides a natural framework for perturbation analysis of matrix polynomials in Lm(Cn×n). We present a family of natural norms on the space Lm(Cn×n) and show that the norms on the spaces Cm+1 and Cn×n play a crucial role in the perturbation analysis of matrix polynomials. We define pseudospectra of matrix polynomials in the general framework of the normed space Lm(Cn×n) and show that the pseudospectra of matrix polynomials well known in the literature follow as special cases. We analyze various properties of pseudospectra in the unified framework of the normed space Lm(Cn×n). We analyze critical points of backward errors of approximate eigenvalues of matrix polynomials and show that each critical point is a multiple eigenvalue of an appropriately perturbed polynomial. We show that common boundary points of components of pseudospectra of matrix polynomials are critical points. As a consequence, we show that a solution of Wilkinson's problem for matrix polynomials can be read off from the pseudospectra of matrix polynomials.

6.
Associated with an n×n matrix polynomial P(λ) are the eigenvalue problem P(λ)x=0 and the linear system problem P(ω)x=b, where in the latter case x is to be computed for many values of the parameter ω. Both problems can be solved by conversion to an equivalent problem L(λ)z=0 or L(ω)z=c that is linear in the parameter λ or ω. This linearization process has received much attention in recent years for the eigenvalue problem, but it is less well understood for the linear system problem. We develop a framework in which more general versions of both problems can be analyzed, based on one-sided factorizations connecting a general nonlinear matrix function N(λ) to a simpler function M(λ), typically a polynomial of degree 1 or 2. Our analysis relates the solutions of the original and lower degree problems and in the linear system case indicates how to choose the right-hand side c and recover the solution x from z. For the eigenvalue problem this framework includes many special cases studied in the literature, including the vector spaces of pencils L1(P) and L2(P) recently introduced by Mackey, Mackey, Mehl, and Mehrmann and a class of rational problems. We use the framework to investigate the conditioning and stability of the parametrized linear system P(ω)x=b and thereby study the effect of scaling, both of the original polynomial and of the pencil L. Our results identify situations in which scaling can potentially greatly improve the conditioning and stability, and our numerical results show that dramatic improvements can be achieved in practice.
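For the linear-system half of this framework, here is a minimal sketch of the recovery step in the quadratic case: with the first companion pencil L(ω) and right-hand side c=(b,0), the solution z of L(ω)z=c carries x in its trailing block. The specific linearization, matrix sizes, and the value of ω below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))   # P(w) = w**2*A + w*B + C
b = rng.standard_normal(n)
w = 0.7                                                     # one value of the parameter

# First companion linearization L(w) = w*diag(A, I) + [[B, C], [-I, 0]], with z = [w*x; x].
L = w * np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]]) \
    + np.block([[B, C], [-np.eye(n), np.zeros((n, n))]])
c = np.concatenate([b, np.zeros(n)])        # right-hand side c = (b, 0)

z = np.linalg.solve(L, c)
x_from_pencil = z[n:]                       # the trailing block of z is x
x_direct = np.linalg.solve(w**2 * A + w * B + C, b)
print(np.linalg.norm(x_from_pencil - x_direct))
```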

7.
The nonnegative inverse eigenvalue problem asks: given a family of complex numbers λ={λ1,…,λn}, find a nonnegative matrix of order n with spectrum λ. This problem is difficult and remains only partially solved. In this paper we focus on a generalization in which the reconstructed nonnegative matrices must have some prescribed entries. If no entries are prescribed, this new problem reduces to the ordinary nonnegative inverse eigenvalue problem. A numerical isospectral flow method, developed by combining optimization theory with the steepest descent method, is used to study the reconstruction. Moreover, an error estimate for the numerical iteration of ordinary differential equations on the matrix manifold is presented. After that, a numerical method for the nonnegative symmetric inverse eigenvalue problem with prescribed entries, together with its error estimate, is considered. Finally, the approaches are verified by numerical test results.

8.
Given a complex square matrix A and two complex numbers λ1 and λ2, we present a method to calculate the distance from A to the set of matrices X that have λ1 and λ2 as some of their eigenvalues. We also find the nearest matrix X.

9.
Given a quadratic two-parameter matrix polynomial Q(λ,μ), we develop a systematic approach to generating a vector space of linear two-parameter matrix polynomials. This vector space is constructed so that potential linearizations of Q(λ,μ) lie in it. We then identify a set of linearizations and describe their constructions. Finally, we determine a class of linearizations for a quadratic two-parameter eigenvalue problem.

10.
Let A(λ) be a complex regular matrix polynomial of degree ℓ with g elementary divisors corresponding to the finite eigenvalue λ0. We show that for most complex matrix polynomials B(λ) with degree at most ℓ satisfying rank B(λ0) < g, the perturbed polynomial (A+B)(λ) has exactly g − rank B(λ0) elementary divisors corresponding to λ0, and we determine their degrees. If rank B(λ0) does not exceed the number of λ0-elementary divisors of A(λ) with degree greater than 1, then the λ0-elementary divisors of (A+B)(λ) are the elementary divisors of A(λ) corresponding to λ0 with smallest degree, together with rank(B(λ)-B(λ0)) linear λ0-elementary divisors. Otherwise, the degree of all the λ0-elementary divisors of (A+B)(λ) is one. This behavior happens for any matrix polynomial B(λ) except those in a proper algebraic submanifold of the set of matrix polynomials of degree at most ℓ. If A(λ) has an infinite eigenvalue, the corresponding result follows from considering the zero eigenvalue of the perturbed dual polynomial.

11.
We consider solving eigenvalue problems or model reduction problems for a quadratic matrix polynomial λ²I − λA − B with large and sparse A and B. We propose new Arnoldi and Lanczos type processes which operate on the same space in which A and B live and construct projections of A and B to produce a quadratic matrix polynomial with coefficient matrices of much smaller size, which is used to approximate the original problem. We shall apply the new processes to solve eigenvalue problems and model reductions of a second order linear input-output system and discuss convergence properties. Our new processes are also extendable to cover a general matrix polynomial of any degree.
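A heavily simplified sketch in the spirit of such projection methods (this is not the authors' Arnoldi or Lanczos process; the particular recurrence, the plain QR orthonormalization, and the test matrices are assumptions): build a basis from the second-order recurrence r_j = A r_{j-1} + B r_{j-2}, project A and B onto it, and compare the dominant eigenvalues of the small projected quadratic with those of the full one.

```python
import numpy as np

def second_order_basis(A, B, u, k):
    """Orthonormal basis of span{r_0, ..., r_{k-1}} with r_0 = u, r_1 = A u and
    r_j = A r_{j-1} + B r_{j-2} (a simple second-order Krylov-type recurrence)."""
    R = np.zeros((len(u), k))
    R[:, 0] = u
    R[:, 1] = A @ u
    for j in range(2, k):
        R[:, j] = A @ R[:, j - 1] + B @ R[:, j - 2]
    Q, _ = np.linalg.qr(R)                  # plain QR; no breakdown/deflation handling
    return Q

def quad_eigs(A, B):
    """Eigenvalues of lam**2 * I - lam*A - B via the linearization M = [[A, B], [I, 0]]."""
    n = A.shape[0]
    M = np.block([[A, B], [np.eye(n), np.zeros((n, n))]])
    return np.linalg.eigvals(M)

rng = np.random.default_rng(3)
n, k = 200, 30
A = rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, n)) / np.sqrt(n)

V = second_order_basis(A, B, rng.standard_normal(n), k)
Ak, Bk = V.T @ A @ V, V.T @ B @ V           # projected coefficients, k-by-k

full, small = quad_eigs(A, B), quad_eigs(Ak, Bk)
# Compare the largest eigenvalue magnitudes of the full and projected problems.
print(np.sort(abs(full))[-3:])
print(np.sort(abs(small))[-3:])
```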

12.
The spectral and Jordan structures of the Web hyperlink matrix G(c)=cG+(1−c)ev^T have been analyzed when G is the basic (stochastic) Google matrix, c is a real parameter such that 0<c<1, v is a nonnegative probability vector, and e is the all-ones vector. Typical studies have relied heavily on special properties of nonnegative, positive, and stochastic matrices. There is a unique nonnegative vector y(c) such that y(c)^T G(c)=y(c)^T and y(c)^T e=1. This PageRank vector y(c) can be computed effectively by the power method. We consider a square complex matrix A and nonzero complex vectors x and v such that Ax=λx and v^*x=1. We use standard matrix analytic tools to determine the eigenvalues, the Jordan blocks, and a distinguished left λ-eigenvector of A(c)=cA+(1−c)λxv^* as a function of a complex variable c. If λ is a semisimple eigenvalue of A, there is a uniquely determined projection N such that limc→1 y(c)=Nv for all v; this limit may fail to exist for some v if λ is not semisimple. As a special case of our results, we obtain a complex analog of PageRank for the Web hyperlink matrix G(c) with a complex parameter c. We study regularity, limits, expansions, and conditioning of y(c) and we propose algorithms (e.g., complex extrapolation, power method on a modified matrix, etc.) that may provide an efficient way to compute PageRank also when c is close or equal to 1. An interpretation of the limit vector Nv and a related critical discussion of the model, of its adherence to reality, and of possible ways to improve it represent the contribution of the paper on modeling issues.
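A minimal power-method sketch for the PageRank vector described above, assuming a row-stochastic G; it uses the standard simplification y^T G(c) = c·y^T G + (1−c)v^T, valid whenever y^T e = 1, so G(c) never has to be formed explicitly. The toy link matrix and tolerances are made up for illustration.

```python
import numpy as np

def pagerank(G, v, c=0.85, tol=1e-12, maxit=1000):
    """Power method for y with y^T G(c) = y^T and y^T e = 1, where G(c) = c*G + (1-c)*e*v^T.

    G is assumed row-stochastic.  Because y^T e = 1 is preserved, the update reduces to
    y^T G(c) = c*y^T G + (1-c)*v^T, so G(c) is never formed."""
    y = v.copy()
    for _ in range(maxit):
        y_new = c * (G.T @ y) + (1.0 - c) * v
        if np.abs(y_new - y).sum() < tol:
            return y_new
        y = y_new
    return y

# Toy 4-page link structure (made up for illustration), uniform personalization vector v.
G = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [1/3, 1/3, 0.0, 1/3],
              [0.0, 0.0, 1.0, 0.0]])
v = np.full(4, 0.25)
y = pagerank(G, v, c=0.85)
print(y, y.sum())        # PageRank scores; they sum to 1
```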

13.
For an algebraically closed field F, we show that any matrix polynomial P(λ) ∈ F[λ]^{n×m}, n ≤ m, can be reduced to triangular form, preserving the degree and the finite and infinite elementary divisors. We also characterize the real matrix polynomials that are triangularizable over the real numbers and show that those that are not triangularizable are quasi-triangularizable with diagonal blocks of sizes 1×1 and 2×2. The proofs we present solve the structured inverse problem of building up triangular matrix polynomials starting from lists of elementary divisors.

14.
Pseudospectra of matrix polynomials have been systematically investigated in recent years, since they provide important insights into the sensitivity of polynomial eigenvalue problems. An accurate approximation of the pseudospectrum of a matrix polynomial P(λ) by means of the standard grid method is computationally highly demanding. In this paper, we propose an improvement of the grid method which reduces the computational cost while retaining the robustness and the parallelism of the method. In particular, after giving two lower bounds for the distance from a point to the boundary of the pseudospectrum of P(λ), we present two algorithms for the estimation of the pseudospectrum, using exclusion discs. Furthermore, two illustrative examples and an application of pseudospectra to elliptic (quadratic) eigenvalue problems are given.
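For reference, the plain grid method that the paper improves on can be sketched as follows, using the simple criterion σ_min(P(z)) ≤ ε (normalizations that weight perturbations by the coefficient norms vary across the literature); the exclusion-disc acceleration itself is not implemented here, and the example polynomial, grid, and ε are arbitrary.

```python
import numpy as np

def pseudospectrum_grid(coeffs, eps, re, im):
    """Boolean mask of grid points z with sigma_min(P(z)) <= eps, where
    P(z) = sum_k z**k * coeffs[k] (plain absolute-perturbation convention)."""
    mask = np.zeros((len(im), len(re)), dtype=bool)
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            z = complex(x, y)
            Pz = sum(z**k * Ak for k, Ak in enumerate(coeffs))
            mask[i, j] = np.linalg.svd(Pz, compute_uv=False)[-1] <= eps
    return mask

rng = np.random.default_rng(4)
coeffs = [rng.standard_normal((8, 8)) for _ in range(3)]    # a random quadratic P(z)
re = np.linspace(-3.0, 3.0, 120)
im = np.linspace(-3.0, 3.0, 120)
mask = pseudospectrum_grid(coeffs, eps=0.5, re=re, im=im)
print(mask.sum(), "of", mask.size, "grid points lie in the eps-pseudospectrum")
```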

15.
Given n-square Hermitian matrices A and B, let Ai and Bi denote the principal (n−1)-square submatrices of A and B, respectively, obtained by deleting row i and column i. Let μ and λ be independent indeterminates. The first main result of this paper is the characterization (for fixed i) of the polynomials representable as det(μAi+λBi) in terms of the polynomial det(μA+λB) and the elementary divisors, minimal indices, and inertial signatures of the pencil μA+λB. This result contains, as a special case, the classical interlacing relationship governing the eigenvalues of a principal submatrix of a Hermitian matrix. The second main result is the determination of the number of different values of i to which the characterization just described can be simultaneously applied.
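The classical special case mentioned at the end, Cauchy interlacing for a principal submatrix of a Hermitian matrix, is easy to check numerically; the sketch below does only that (the pencil-level characterization of the paper is not reproduced, and the random real symmetric example is an assumption).

```python
import numpy as np

rng = np.random.default_rng(5)
n, i = 7, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                              # a real symmetric (Hermitian) example

keep = [j for j in range(n) if j != i]         # delete row i and column i
Ai = A[np.ix_(keep, keep)]

lam = np.sort(np.linalg.eigvalsh(A))           # lam_1 <= ... <= lam_n
mu = np.sort(np.linalg.eigvalsh(Ai))           # mu_1 <= ... <= mu_{n-1}

# Cauchy interlacing: lam_j <= mu_j <= lam_{j+1} for every j.
print(all(lam[j] <= mu[j] <= lam[j + 1] for j in range(n - 1)))
```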

16.
The symmetric tridiagonal eigenproblem has been the topic of some recent work. Many methods have been advanced for the computation of the eigenvalues of such a matrix. In this paper, we present a divide-and-conquer approach to the computation of the eigenvalues of a symmetric tridiagonal matrix via the evaluation of the characteristic polynomial. The problem of evaluation of the characteristic polynomial is partitioned into smaller parts which are solved and these solutions are then combined to form the solution to the original problem. We give the update equations for the characteristic polynomial and certain auxiliary polynomials used in the computation. Furthermore, this set of recursions can be implemented on a regular tree structure. If the concurrency exhibited by this algorithm is exploited, it can be shown that the time for computation of all the eigenvalues becomes O(n log n) instead of O(n²), as is the case for the approach where the order is increased by only one at every step. We address the numerical problems associated with the use of the characteristic polynomial and present a numerically stable technique for the eigenvalue computation.
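The per-point building block of such methods is the three-term recurrence for the leading principal minors of T − xI, which also yields the Sturm-count bisection used to locate individual eigenvalues. The sketch below implements only this sequential recurrence, not the paper's tree-structured divide-and-conquer scheme; the function names and the unscaled recurrence (which can overflow for very large n) are illustrative assumptions.

```python
import numpy as np

def char_poly_sequence(a, b, x):
    """Leading principal minors p_k(x) = det(T_k - x*I) of the symmetric tridiagonal T
    with diagonal a (length n) and off-diagonal b (length n-1):
        p_0 = 1,  p_1 = a_0 - x,  p_k = (a_{k-1} - x)*p_{k-1} - b_{k-2}**2 * p_{k-2}."""
    p = [1.0, a[0] - x]
    for k in range(1, len(a)):
        p.append((a[k] - x) * p[-1] - b[k - 1] ** 2 * p[-2])
    return p

def sturm_count(a, b, x):
    """Number of eigenvalues of the (unreduced) tridiagonal T that are <= x,
    i.e. the number of sign changes in the Sturm sequence p_0, ..., p_n."""
    count, prev_sign = 0, 1
    for pk in char_poly_sequence(a, b, x)[1:]:
        sign = -prev_sign if pk == 0.0 else (1 if pk > 0.0 else -1)   # zero takes the opposite sign
        if sign != prev_sign:
            count += 1
        prev_sign = sign
    return count

def eig_by_bisection(a, b, j, lo, hi, tol=1e-12):
    """j-th smallest eigenvalue (1-based) of T by bisection on the Sturm count."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(a, b, mid) >= j:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(6)
n = 50
a, b = rng.standard_normal(n), rng.standard_normal(n - 1)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
bound = np.abs(a).max() + 2 * np.abs(b).max()            # Gershgorin bound on the spectrum
approx = [eig_by_bisection(a, b, j, -bound, bound) for j in (1, 25, 50)]
print(np.allclose(approx, np.sort(np.linalg.eigvalsh(T))[[0, 24, 49]], atol=1e-8))
```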

17.
We consider an infinite Hermitian positive definite matrix M which is the moment matrix associated with a measure μ with infinite and compact support on the complex plane. We prove that if the polynomials are dense in L2(μ) then the smallest eigenvalue λn of the truncated matrix Mn of M of size (n+1)×(n+1) tends to zero when n tends to infinity. In the case of measures in the closed unit disk we obtain some related results.
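A concrete instance, chosen here for illustration and not taken from the paper: for Lebesgue measure on [0,1] the moment matrix truncations are the Hilbert matrices (entries 1/(j+k+1)); polynomials are dense in L2([0,1]), and the smallest eigenvalue of the truncations visibly tends to zero.

```python
import numpy as np

# Moment matrix of Lebesgue measure on [0,1]: M[j,k] = integral_0^1 x**(j+k) dx = 1/(j+k+1),
# i.e. the (n+1)x(n+1) Hilbert matrix.  Polynomials are dense in L2([0,1]), and the smallest
# eigenvalue of the truncations decays to zero very quickly (for larger n it drops below
# double-precision resolution, so only moderate sizes are shown).
for n in (2, 4, 6, 8, 10):
    j = np.arange(n + 1)
    M = 1.0 / (j[:, None] + j[None, :] + 1.0)
    print(n, np.linalg.eigvalsh(M)[0])
```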

18.
This work is concerned with eigenvalue problems for structured matrix polynomials, including complex symmetric, Hermitian, even, odd, palindromic, and anti-palindromic matrix polynomials. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Recently, linearizations have been classified for which the pencil reflects the structure of the original polynomial. A question of practical importance is whether this process of linearization significantly increases the eigenvalue sensitivity with respect to structured perturbations. For all structures under consideration, we show that this cannot happen if the matrix polynomial is well scaled: there is always a structured linearization for which the structured eigenvalue condition number does not differ much. This implies, for example, that a structure-preserving algorithm applied to the linearization fully benefits from a potentially low structured eigenvalue condition number of the original matrix polynomial.

19.
Let σ = (λ1, … , λn) be the spectrum of a nonnegative symmetric matrix A with Perron eigenvalue λ1 and a diagonal entry c, and let τ = (μ1, … , μm) be the spectrum of a nonnegative symmetric matrix B with Perron eigenvalue μ1. We show how to construct a nonnegative symmetric matrix C with the spectrum (λ1+max{0, μ1−c}, λ2, …, λn, μ2, …, μm).

20.
Based on the exact modal expansion method, an arbitrary high-order approximate method is developed for calculating the second-order eigenvalue derivatives and the first-order eigenvector derivatives of a defective matrix. A numerical example shows the validity of the method. If the distinct eigenvalues μ(1),…,μ(q) of the matrix are arranged so that |μ(1)| ≤ ⋯ ≤ |μ(q)| and satisfy |μ(q1)| < |μ(q1+1)| for some q1 < q, and if the approximate method uses only the left and right principal eigenvectors associated with μ(1),…,μ(q1), then for μ(h) (h ≤ q1) the errors of the eigenvalue and eigenvector derivatives obtained by the pth-order approximate method are nearly proportional to |μ(h)/μ(q1+1)|^(p+1).
