Similar Documents
20 similar documents found (search time: 31 ms)
1.
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
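The invariant-pair residual described above is easy to check numerically. The following sketch assumes a quadratic matrix polynomial P(λ) = A0 + λA1 + λ²A2 with random coefficients (the matrices and sizes are illustrative, not from the paper): it extracts a candidate invariant pair (X, S) from eigenpairs of a companion linearization and verifies the defining relation A0X + A1XS + A2XS² = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization: C acts on stacked vectors [x; lam*x]
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.linalg.solve(A2, -A0), np.linalg.solve(A2, -A1)]])
lam, V = np.linalg.eig(C)

# Extract an invariant pair (X, S): leading n rows of k eigenvectors,
# S = diag of the corresponding eigenvalues of the linearization.
X = V[:n, :k]
S = np.diag(lam[:k])

# Defining property of an invariant pair of P: A0 X + A1 X S + A2 X S^2 = 0
res = A0 @ X + A1 @ X @ S + A2 @ X @ S @ S
print(np.linalg.norm(res))  # small, at roundoff level
```

In practice S need not be diagonal (a Schur-like form is numerically preferable), and the paper's refinement procedures would reduce this residual further; the diagonal S here only illustrates the extraction step.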

2.
Through different orthogonal decompositions of computed eigenvectors we can define different Hermitian backward perturbations for a Hermitian eigenvalue problem. Certain optimal Hermitian backward perturbations are studied. The results show that not all the optimal Hermitian backward perturbations are small when the computed eigenvectors have a small residual and are close to orthonormal. Dedicated to Åke Björck on the occasion of his 60th birthday. This work was supported by the Swedish Natural Science Research Council under Contract F-FU 6952-302 and the Department of Computing Science, Umeå University.

3.
We discuss the perturbation analysis for eigenvalues and eigenvectors of structured homogeneous matrix polynomials with Hermitian, skew-Hermitian, H-even and H-odd structure. We construct minimal structured perturbations (structured backward errors) such that an approximate eigenvalue-eigenvector pair (for finite or infinite eigenvalues) is an exact eigenvalue-eigenvector pair of an appropriately perturbed structured matrix polynomial. We present various comparisons with unstructured backward errors and with previous backward errors constructed for the non-homogeneous case, and show that our results generalize previous results.

4.
This paper studies tensor eigenvalue complementarity problems. Basic properties of standard and complementarity tensor eigenvalues are discussed. We formulate tensor eigenvalue complementarity problems as constrained polynomial optimization. When one tensor is strictly copositive, the complementarity eigenvalues can be computed by solving polynomial optimization with normalization by strict copositivity. When no tensor is strictly copositive, we formulate the tensor eigenvalue complementarity problem equivalently as polynomial optimization by a randomization process. The complementarity eigenvalues can be computed sequentially. The formulated polynomial optimization can be solved by Lasserre's hierarchy of semidefinite relaxations. We show that it has finite convergence for generic tensors. Numerical experiments are presented to show the efficiency of the proposed methods.
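The order-2 case of the problem above is the classical (matrix) eigenvalue complementarity problem, which is small enough to solve by brute force over supports: on the support of x the problem reduces to a standard eigenproblem, and off the support the complementarity slack must be nonnegative. The sketch below uses one common sign convention, w = λx − Ax ≥ 0, and an illustrative 2×2 matrix; it is not the paper's semidefinite-relaxation method.

```python
import itertools
import numpy as np

# Find lambda and x >= 0, x != 0, with w = lambda*x - A x >= 0 and x'w = 0.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = A.shape[0]
sols = []
for k in range(1, n + 1):
    for S in itertools.combinations(range(n), k):
        S = list(S)
        lam, V = np.linalg.eig(A[np.ix_(S, S)])
        for l, v in zip(lam.real, V.real.T):
            if np.all(v > 0) or np.all(v < 0):   # positive eigenvector on S
                x = np.zeros(n); x[S] = np.abs(v)
                w = l * x - A @ x
                if np.all(w >= -1e-12):          # slack nonneg off the support
                    sols.append(round(l, 6))
print(sorted(set(sols)))  # [3.618034], the larger eigenvalue (5+sqrt(5))/2
```

The enumeration is exponential in n, which is exactly why the polynomial-optimization formulations in the paper matter for larger (and higher-order tensor) instances.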

5.
We consider matrix eigenvalue problems that are nonlinear in the eigenvalue parameter. One of the most fundamental differences from the linear case is that distinct eigenvalues may have linearly dependent eigenvectors or even share the same eigenvector. This has been a severe hindrance in the development of general numerical schemes for computing several eigenvalues of a nonlinear eigenvalue problem, either simultaneously or successively. The purpose of this work is to show that the concept of invariant pairs offers a way of representing eigenvalues and eigenvectors that is insensitive to this phenomenon. To demonstrate the use of this concept in the development of numerical methods, we have developed a novel block Newton method for computing such invariant pairs. Algorithmic aspects of this method are considered and a few academic examples demonstrate its viability.
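The phenomenon of distinct eigenvalues sharing an eigenvector is easy to exhibit with a toy polynomial problem. In the sketch below (a deliberately degenerate example, not from the paper), P(λ) = λ²I − I has eigenvalues +1 and −1 with every nonzero vector as an eigenvector of both, yet an invariant pair (X, S) with repeated columns in X still encodes both eigenpairs through the relation A0X + A1XS + A2XS² = 0 with A0 = −I, A1 = 0, A2 = I.

```python
import numpy as np

I = np.eye(2)
P = lambda lam: lam**2 * I - I           # P(lam) = lam^2 I - I

x = np.array([1.0, 0.0])
# The distinct eigenvalues +1 and -1 share the eigenvector x:
print(np.linalg.norm(P(1.0) @ x), np.linalg.norm(P(-1.0) @ x))  # 0.0 0.0

# An invariant pair (X, S) still separates the two eigenpairs, even though
# the columns of X are linearly dependent:
X = np.column_stack([x, x])
S = np.diag([1.0, -1.0])
res = -X + X @ S @ S                     # A0 X + A1 X S + A2 X S^2
print(np.linalg.norm(res))               # 0.0
```

The stacked matrix [X; XS] has full column rank here, which is the minimality condition that makes (X, S) a well-posed object even when X itself is rank deficient.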

6.
We propose a numerical method for computing all eigenvalues (and the corresponding eigenvectors) of a nonlinear holomorphic eigenvalue problem that lie within a given contour in the complex plane. The method uses complex integrals of the resolvent operator, applied to at least k column vectors, where k is the number of eigenvalues inside the contour. The theorem of Keldysh is employed to show that the original nonlinear eigenvalue problem reduces to a linear eigenvalue problem of dimension k. No initial approximations of eigenvalues and eigenvectors are needed. The method is particularly suitable for moderately large eigenvalue problems where k is much smaller than the matrix dimension. We also give an extension of the method to the case where k is larger than the matrix dimension. The quadrature errors caused by the trapezoid sum are discussed for the case of analytic closed contours. Using well-known techniques, it is shown that the error decays exponentially with an exponent given by the product of the number of quadrature points and the minimal distance of the eigenvalues to the contour.
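The contour-integral reduction can be sketched in a few lines. The code below follows the standard two-moment construction: trapezoid-rule approximations of the contour integrals of T(z)⁻¹V and zT(z)⁻¹V, followed by an SVD-based Keldysh reduction to a k×k eigenproblem. For verifiability the demo uses a linear T(z) = zI − A with known eigenvalues; any holomorphic T(z) would plug into the same loop.

```python
import numpy as np

A = np.diag([0.5, 1.5, 3.0, 4.0])
T = lambda z: z * np.eye(4) - A          # holomorphic T(z); linear here only
                                         # so the answer is easy to check

c, r, N = 1.0, 1.0, 64                   # circle enclosing 0.5 and 1.5; k = 2
rng = np.random.default_rng(1)
V = rng.standard_normal((4, 2))          # k probe columns

A0 = np.zeros((4, 2), dtype=complex)
A1 = np.zeros((4, 2), dtype=complex)
for j in range(N):                       # trapezoid rule on the circle
    phase = np.exp(2j * np.pi * j / N)
    z = c + r * phase
    w = r * phase / N                    # quadrature weight for dz/(2*pi*i)
    Y = np.linalg.solve(T(z), V)
    A0 += w * Y
    A1 += w * z * Y

# Keldysh reduction: A0 = U S W^H, then B = U^H A1 W S^{-1} is k x k and its
# eigenvalues are the eigenvalues of T inside the contour.
U, s, Wh = np.linalg.svd(A0, full_matrices=False)
B = U.conj().T @ A1 @ Wh.conj().T @ np.diag(1 / s)
print(np.sort(np.linalg.eigvals(B).real))   # ≈ [0.5, 1.5]
```

Consistent with the exponential error estimate quoted above, the eigenvalues here sit at distance 0.5r from the contour, so N = 64 trapezoid points already drive the quadrature error below machine precision.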

7.
We consider the eigenvalues and eigenvectors of finite, low rank perturbations of random matrices. Specifically, we prove almost sure convergence of the extreme eigenvalues and appropriate projections of the corresponding eigenvectors of the perturbed matrix for additive and multiplicative perturbation models.The limiting non-random value is shown to depend explicitly on the limiting eigenvalue distribution of the unperturbed random matrix and the assumed perturbation model via integral transforms that correspond to very well-known objects in free probability theory that linearize non-commutative free additive and multiplicative convolution. Furthermore, we uncover a phase transition phenomenon whereby the large matrix limit of the extreme eigenvalues of the perturbed matrix differs from that of the original matrix if and only if the eigenvalues of the perturbing matrix are above a certain critical threshold. Square root decay of the eigenvalue density at the edge is sufficient to ensure that this threshold is finite. This critical threshold is intimately related to the same aforementioned integral transforms and our proof techniques bring this connection and the origin of the phase transition into focus. Consequently, our results extend the class of ‘spiked’ random matrix models about which such predictions (called the BBP phase transition) can be made well beyond the Wigner, Wishart and Jacobi random ensembles found in the literature. We examine the impact of this eigenvalue phase transition on the associated eigenvectors and observe an analogous phase transition in the eigenvectors. Various extensions of our results to the problem of non-extreme eigenvalues are discussed.
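The phase transition is easy to observe numerically in the simplest spiked Wigner model, where the classical BBP-type prediction says the top eigenvalue of W + θvv^T sticks to the bulk edge 2 for θ ≤ 1 and detaches to θ + 1/θ for θ > 1. The experiment below is a finite-n illustration with an arbitrary seed and size, not a statement about the paper's more general ensembles.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)          # Wigner matrix; spectrum fills [-2, 2]
v = np.ones(n) / np.sqrt(n)             # unit spike direction

tops = {}
for theta in (0.5, 2.0):                # below vs above the critical threshold
    tops[theta] = np.linalg.eigvalsh(W + theta * np.outer(v, v)).max()
    pred = theta + 1 / theta if theta > 1 else 2.0   # BBP prediction
    print(f"theta={theta}: top eigenvalue {tops[theta]:.3f}, predicted {pred:.3f}")
```

The subcritical spike (θ = 0.5) is invisible at the edge, while the supercritical one (θ = 2) produces an outlier near 2.5; the eigenvector overlap with v shows the analogous transition mentioned in the abstract.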

8.
We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is constructed from certain basis vectors of an invariant subspace of A, such as eigenvectors, Jordan vectors, or Schur vectors. We show that most of the eigenvalues of the low-rank perturbed matrix stay unchanged from the eigenvalues of A; the perturbation can only move the eigenvalues of A that are related to the invariant subspace. Existing results mostly concern perturbations built from eigenvectors with full column rank; we generalize them to more general settings. Applications of our results to a few interesting problems, including Google's second eigenvalue problem, are presented.

9.
We study the positive eigenvectors of nonlinear perturbations of a nonsymmetric M-matrix and their Newton iterative solution. It is shown that any number greater than the smallest positive eigenvalue of the M-matrix is an eigenvalue of the nonlinear problem, that the corresponding positive eigenvector is unique, and that the Newton iteration for the positive eigenvector converges. Moreover, such positive eigenvectors form a monotone increasing and continuous function of the corresponding eigenvalues.

10.
This work is concerned with eigenvalue problems for structured matrix polynomials, including complex symmetric, Hermitian, even, odd, palindromic, and anti-palindromic matrix polynomials. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Recently, linearizations have been classified for which the pencil reflects the structure of the original polynomial. A question of practical importance is whether this process of linearization significantly increases the eigenvalue sensitivity with respect to structured perturbations. For all structures under consideration, we show that this cannot happen if the matrix polynomial is well scaled: there is always a structured linearization for which the structured eigenvalue condition number does not differ much. This implies, for example, that a structure-preserving algorithm applied to the linearization fully benefits from a potentially low structured eigenvalue condition number of the original matrix polynomial.

11.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms for solving the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly using them in projecting the eigenvalue problem, it becomes possible to use IFEAST to solve eigenvalue problems using very large dimension Krylov subspaces without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix–vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.  
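The basic FEAST loop is short enough to sketch: a contour quadrature approximates the spectral projector onto the search interval, and Rayleigh-Ritz on the filtered block gives the eigenpairs. The toy below (symmetric matrix, direct dense solves, arbitrary sizes) corresponds to the "traditional" FEAST of the abstract; IFEAST would replace each `np.linalg.solve` with an inexact Krylov solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m0, nq = 50, 8, 16
A = np.diag(np.arange(1.0, n + 1))           # symmetric test matrix, eigs 1..50
emin, emax = 0.0, 5.5                        # search interval: wants eigs 1..5
c, r = (emin + emax) / 2, (emax - emin) / 2

Y = rng.standard_normal((n, m0))             # m0 > #wanted random start block
for it in range(4):                          # FEAST subspace iteration
    Q = np.zeros((n, m0), dtype=complex)
    for j in range(nq):                      # quadrature approximates the
        phase = np.exp(2j * np.pi * (j + 0.5) / nq)   # spectral projector
        Q += (r * phase / nq) * np.linalg.solve((c + r * phase) * np.eye(n) - A, Y)
    Qo, _ = np.linalg.qr(Q.real)             # real symmetric A -> real basis
    theta, S = np.linalg.eigh(Qo.T @ A @ Qo) # Rayleigh-Ritz on the subspace
    Y = Qo @ S

inside = theta[(theta > emin) & (theta < emax)]
print(np.round(inside, 8))                   # ≈ [1, 2, 3, 4, 5]
```

Each of the nq shifted solves is independent, which is the parallelism the abstract refers to; the quality of the rational filter, not the subspace dimension, governs the per-iteration convergence factor.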

12.
We develop a nonlinear spectral graph theory in which the Laplace operator is replaced by the 1-Laplacian Δ1. The eigenvalue problem is to solve a nonlinear system involving a set-valued function. We investigate the structure of the solutions, the minimax characterization of the eigenvalues, the multiplicity theorem, etc. The eigenvalues and eigenvectors are computed for several elementary graphs, and graph-theoretic features of the eigenvalues are also studied. In particular, Cheeger's constant, for which linear spectral theory provides only upper and lower bounds, equals the first nonzero Δ1 eigenvalue for connected graphs.
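For small graphs the Cheeger constant can be computed exactly by enumerating cuts, which makes the identity above checkable by hand. The sketch below uses one common vertex-counting normalization, h(G) = min over cuts of |∂S| / min(|S|, |S^c|) (conventions vary between papers), on the 4-cycle:

```python
import itertools
import numpy as np

# Edges of the 4-cycle C4
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def cheeger(nv, edges):
    """Brute-force h(G) = min over cuts S of |boundary(S)| / min(|S|, |S^c|)."""
    best = np.inf
    for size in range(1, nv // 2 + 1):
        for S in itertools.combinations(range(nv), size):
            Sset = set(S)
            cut = sum((a in Sset) != (b in Sset) for a, b in edges)
            best = min(best, cut / min(len(Sset), nv - len(Sset)))
    return best

print(cheeger(4, E))  # 1.0: cut C4 into two adjacent pairs (2 boundary edges / 2)
```

The enumeration is exponential, which is precisely why an eigenvalue characterization via Δ1, rather than mere two-sided bounds, is of interest.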

13.
One crucial step in the solution of large-scale generalized eigenvalue problems with iterative subspace methods, e.g. Arnoldi or Jacobi-Davidson, is the projection of the original large-scale problem onto a low-dimensional subspace. Here we investigate two-sided methods, in which approximate eigenvalues, together with their right and left eigenvectors of the full-size problem, are extracted from the resulting small eigenproblem. The two-sided Ritz-Galerkin projection can be seen as the most basic form of this approach. It usually provides good convergence towards the extremal eigenvalues of the spectrum. To improve convergence towards interior eigenvalues, we investigate two approaches based on harmonic subspace extractions for the generalized eigenvalue problem. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
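The two-sided Ritz-Galerkin projection itself is a one-liner once right and left search spaces are available: project the pencil (A, B) to (WᵀAV, WᵀBV) and solve the small problem. The sketch below builds the search spaces from plain Krylov sequences (a stand-in for Arnoldi or Jacobi-Davidson) for a pencil constructed to have a well-separated dominant eigenvalue; all sizes and matrices are illustrative.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, m = 80, 12
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
M = np.diag(np.r_[np.linspace(1, 10, n - 1), 25.0])  # separated top eigenvalue
A = B @ M                            # pencil (A, B) has the eigenvalues of M

def krylov_basis(T, v, m):
    """Orthonormal basis of span{v, Tv, ..., T^(m-1) v}."""
    cols = [v / np.linalg.norm(v)]
    for _ in range(m - 1):
        v = T @ cols[-1]
        cols.append(v / np.linalg.norm(v))
    return np.linalg.qr(np.column_stack(cols))[0]

V = krylov_basis(np.linalg.solve(B, A), rng.standard_normal(n), m)      # right
W = krylov_basis(np.linalg.solve(B.T, A.T), rng.standard_normal(n), m)  # left

# Two-sided Ritz-Galerkin: small projected pencil (W'AV, W'BV)
theta = eig(W.T @ A @ V, W.T @ B @ V, right=False)
print(theta[np.argmin(np.abs(theta - 25.0))])   # ≈ 25, the dominant eigenvalue
```

As the abstract notes, this basic extraction favors extremal eigenvalues; harmonic variants replace the projected pencil to target interior ones.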

14.
We study L^p(1

15.
The quadratic finite element model updating problem (QFEMUP), studied in this paper, is concerned with updating a symmetric nonsingular quadratic pencil in such a way that a small set of measured eigenvalues and eigenvectors is reproduced by the updated model. If, in addition, the updated model preserves the large number of unupdated eigenpairs of the original model, the model is said to be updated with no spill-over. QFEMUP is, in general, a difficult and computationally challenging problem due to the practical constraint that only a very small number of eigenvalues and eigenvectors of the associated quadratic eigenvalue problem are available from computation or measurement. Additionally, for practical effectiveness, engineering concerns such as nonorthogonality and incompleteness of the measured eigenvectors must be considered. Most existing methods, including those used in industrial settings, deal with updating a linear model only, ignoring damping. Only in the last few years have a small number of papers been published on quadratic model updating, and several of the above issues have been dealt with from both theoretical and computational points of view. However, a mathematical criterion for the existence of a solution has not been fully developed. In this paper, we first (i) prove a set of necessary and sufficient conditions for the existence of a solution of the no spill-over QFEMUP, then (ii) present a parametric representation of the solution, assuming a solution exists, and finally (iii) propose an algorithm for QFEMUP with no spill-over and incomplete measured eigenvectors. Interestingly, it is shown that the parametric representation can be constructed with knowledge of only the few eigenvalues and eigenvectors that are to be updated and the corresponding measured eigenvalues and eigenvectors; complete knowledge of the eigenvalues and eigenvectors of the original pencil is not needed, which makes the solution readily applicable to real-life structures.

16.
The eigenvalue problem for linear differential operators is important since eigenvalues correspond to the possible energy levels of a physical system. It is also important to have good estimates of the error in the computed eigenvalues. In this work, we use spline interpolation to construct approximate eigenfunctions of a linear operator using the corresponding eigenvectors of a discretized approximation of the operator. We show that an error estimate for the approximate eigenvalues can be obtained by evaluating the residual for an approximate eigenpair. The interpolation scheme is selected in such a way that the residual can be evaluated analytically. To demonstrate that the method gives useful error bounds, we apply it to a problem originating from the study of graphene quantum dots where the goal was to investigate the change in the spectrum from incorporating electron–electron interactions in the potential.
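The residual-based error bound rests on a standard fact for self-adjoint operators L: if s is in the domain of L, then dist(λ̂, spec(L)) ≤ ‖(L − λ̂)s‖/‖s‖. The sketch below applies this to −u'' on (0, π) with Dirichlet conditions (exact smallest eigenvalue 1), splining a finite-difference eigenvector; the residual norms are evaluated here by dense sampling rather than analytically as in the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Finite-difference discretization of -u'' on (0, pi), Dirichlet BCs
n = 40
h = np.pi / n
x = np.linspace(0, np.pi, n + 1)
L = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
lam, U = np.linalg.eigh(L)
lam1, u1 = lam[0], U[:, 0]           # approximate smallest eigenpair

# Spline eigenfunction through the discrete eigenvector (zero at endpoints);
# natural BCs are exact here since u'' = -sin vanishes at 0 and pi.
s = CubicSpline(x, np.r_[0.0, u1, 0.0], bc_type='natural')

# Residual r = -s'' - lam1*s gives dist(lam1, spectrum) <= ||r|| / ||s||
xf = np.linspace(0, np.pi, 4001)
dx = xf[1] - xf[0]
res = -s(xf, 2) - lam1 * s(xf)
bound = np.sqrt(np.sum(res**2) * dx) / np.sqrt(np.sum(s(xf)**2) * dx)
print(abs(lam1 - 1.0), bound)        # the bound dominates the true error
```

Both the true error and the bound are O(h²) here, so the bound is not only rigorous but of the right order.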

17.
We investigate eigenvalues and eigenvectors of certain linear variational eigenvalue inequalities in which the constraints are defined by a convex cone, as in [4], [7], [8], [10]-[12], [17]. The eigenvalues of these eigenvalue problems are of interest in connection with bifurcation from the trivial solution of nonlinear variational inequalities. A rather far-reaching theory is presented for the case that the cone is given by a finite number of linear inequalities, where an eigensolution corresponds to a (+)-Kuhn-Tucker point of the Rayleigh quotient. Applications to a unilaterally supported beam are discussed and numerical results are given.

18.
On the modification of an eigenvalue problem that preserves an eigenspace
Eigenvalue problems arise in many application areas ranging from computational fluid dynamics to information retrieval. In these fields we are often interested in only a few eigenvalues and corresponding eigenvectors of a sparse matrix. In this paper, we comment on the modifications of the eigenvalue problem that can simplify the computation of those eigenpairs. These transformations allow us to avoid difficulties associated with non-Hermitian eigenvalue problems, such as the lack of reliable non-Hermitian eigenvalue solvers, by mapping them into generalized Hermitian eigenvalue problems. Also, they allow us to expose and explore parallelism. They require knowledge of a selected eigenvalue and preserve its eigenspace. The positive definiteness of the Hermitian part is inherited by the matrices in the generalized Hermitian eigenvalue problem. The position of the selected eigenspace in the ordering of the eigenvalues is also preserved under certain conditions. The effect of using approximate eigenvalues in the transformation is analyzed and numerical experiments are presented.

19.
In applications of linear algebra, including nuclear physics and structural dynamics, there is a need to deal with uncertainty in the matrices. We focus on matrices that depend on a set of parameters ω, and we are interested in the minimal eigenvalue of a matrix pencil (A, B) with A, B symmetric and B positive definite. If ω can be interpreted as the realization of random variables, one may be interested in statistical moments of the minimal eigenvalue. In order to obtain statistical moments, we need a fast evaluation of the eigenvalue as a function of ω. Because this is costly for large matrices, we are looking for a small parameterized eigenvalue problem whose minimal eigenvalue makes a small error with respect to the minimal eigenvalue of the large eigenvalue problem. The advantage, in comparison with a global polynomial approximation (on which, e.g., the polynomial chaos approximation relies), is that we do not suffer from the possible nonsmoothness of the minimal eigenvalue. The small-scale eigenvalue problem is obtained by projection of the large-scale problem. Our main contribution is that, for constructing the subspace, we use multiple eigenvectors and derivatives of eigenvectors. We provide theoretical results and document numerical experiments regarding the beneficial effect of adding multiple eigenvectors and derivatives.
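The projection idea can be sketched with the simplest variant: collect minimal eigenvectors of the pencil at a few parameter samples, orthonormalize them into a subspace, and evaluate the small projected pencil at new parameter values. The matrices, samples, and one-parameter dependence below are all illustrative, and the eigenvector *derivatives* that form the paper's main contribution are omitted from this sketch (they would simply enrich the same basis).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 60
A0 = rng.standard_normal((n, n)); A0 = A0 @ A0.T   # symmetric PSD pieces
A1 = rng.standard_normal((n, n)); A1 = A1 @ A1.T
B = np.eye(n)
A = lambda w: A0 + w * A1                          # parameter-dependent pencil

# Subspace from minimal eigenvectors at a few parameter samples
basis = []
for w in (0.0, 0.5, 1.0):
    vals, vecs = eigh(A(w), B)
    basis.append(vecs[:, 0])
V = np.linalg.qr(np.column_stack(basis))[0]

# Cheap reduced evaluation at a new parameter value
w_test = 0.3
small = eigh(V.T @ A(w_test) @ V, V.T @ B @ V, eigvals_only=True)[0]
full = eigh(A(w_test), B, eigvals_only=True)[0]
print(small, full)   # reduced estimate is an upper bound on the full minimum
```

By the Rayleigh-Ritz principle the reduced minimal eigenvalue can never undershoot the true one, which makes the approximation error easy to monitor.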

20.
In this paper, continuous methods are introduced to compute both the extreme and interior eigenvalues and their corresponding eigenvectors for real symmetric matrices. The main idea is to convert the extreme and interior eigenvalue problems into some optimization problems. Then a continuous method which includes both a merit function and an ordinary differential equation (ODE) is introduced for each resulting optimization problem. The convergence of each ODE solution is proved for any starting point. The limit of each ODE solution for any starting point is fully studied. Both the extreme and the interior eigenvalues and their corresponding eigenvectors can be easily obtained under a very mild condition. Promising numerical results are also presented.
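A minimal instance of the optimization-plus-ODE idea is the Rayleigh-quotient gradient flow dx/dt = −(Ax − q(x)x) with q(x) = xᵀAx/xᵀx, whose trajectories from generic starting points converge to the smallest eigenpair of a symmetric A. The sketch below integrates it with explicit Euler on an arbitrary random symmetric matrix; the paper's merit functions for *interior* eigenvalues are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix

x = rng.standard_normal(n)                # generic starting point
dt = 0.01
for _ in range(20000):                    # explicit Euler on the gradient flow
    x /= np.linalg.norm(x)
    q = x @ A @ x                         # Rayleigh quotient
    x -= dt * (A @ x - q * x)             # descend toward the smallest eigenpair
print(q, np.linalg.eigvalsh(A).min())     # the two values agree
```

The Rayleigh quotient decreases monotonically along the flow and the only stable equilibria are eigenvectors of the smallest eigenvalue, which mirrors the "convergence from any starting point" statements in the abstract.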


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号