Similar Documents (20 results)
1.
For a given subspace, the Rayleigh-Ritz method projects the large quadratic eigenvalue problem (QEP) onto it and produces a small dense QEP. As with the Rayleigh-Ritz method for the linear eigenvalue problem, the method defines the Ritz values and Ritz vectors of the QEP with respect to the projection subspace. We analyze the convergence of the method as the angle between the subspace and the desired eigenvector converges to zero. We prove that there is a Ritz value that converges to the desired eigenvalue unconditionally, but the Ritz vector converges only conditionally and may fail to converge. To remedy this possible non-convergence of the Ritz vector, we propose a refined Ritz vector that is mathematically different from the Ritz vector and is proved to converge unconditionally. We construct examples to illustrate our theory.
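The projection step described above can be sketched as follows. This is a minimal illustration, not the paper's method: the matrices M, C, K and the orthonormal basis V are hypothetical, and the small projected QEP is only formed, not solved.

```python
# Sketch of the Rayleigh-Ritz projection step for the quadratic eigenvalue
# problem (lam^2 * M + lam * C + K) x = 0.  All data below is illustrative.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def project(A, V):
    """Form the small projected matrix V^T A V."""
    return matmul(transpose(V), matmul(A, V))

# A 3x3 QEP projected onto the subspace spanned by the first two
# coordinate vectors (an orthonormal basis V).
M = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
C = [[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.3]]
K = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 5.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]

Mk, Ck, Kk = project(M, V), project(C, V), project(K, V)
# For this V the projected matrices are just the leading 2x2 blocks;
# the Ritz values are then the eigenvalues of the small dense QEP.
print(Kk)
```

Solving the small QEP (lam^2 * Mk + lam * Ck + Kk) y = 0 yields the Ritz values, and V y gives the corresponding Ritz vectors.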

2.
We propose a new localization result for the leading eigenvalue and eigenvector of a symmetric matrix A. The result exploits the Frobenius inner product between A and a given rank-one landmark matrix X. Different choices for X may be used, depending on the problem under investigation. In particular, we show that choosing X to be the all-ones matrix allows us to estimate the signature of the leading eigenvector of A, generalizing previous results on Perron-Frobenius properties of matrices with some negative entries. As another application we consider the problem of community detection in graphs and networks. The problem is solved by means of modularity-based spectral techniques, following the ideas pioneered by Miroslav Fiedler in the mid-1970s.

3.
We establish a general convergence theory of the Shift-Invert Residual Arnoldi (SIRA) method for computing a simple eigenvalue nearest to a given target σ and the associated eigenvector. In SIRA, a subspace expansion vector at each step is obtained by solving a certain inner linear system. We prove that the inexact SIRA method mimics the exact SIRA well; that is, the former uses almost the same number of outer iterations to achieve convergence as the latter, provided all the inner linear systems are solved iteratively with low or modest accuracy during the outer iterations. Based on this theory, we design practical stopping criteria for the inner solves. Our analysis concerns a one-step expansion of the subspace, and the approach applies to the Jacobi-Davidson (JD) method with the fixed target σ as well; a similar general convergence theory is obtained for it. Numerical experiments confirm our theory and demonstrate that the inexact SIRA and JD methods are similarly effective and considerably superior to the inexact SIA.

4.
Eigenvalues and eigenvectors of a large sparse symmetric matrix A can be found accurately and often very quickly using the Lanczos algorithm without reorthogonalization. The algorithm gives essentially correct information on the eigensystem of A, although it does not necessarily give the correct multiplicity of multiple, or even single, eigenvalues. It is straightforward to determine a useful bound on the accuracy of every eigenvalue given by the algorithm. The initial behavior of the algorithm is surprisingly good: it produces vectors spanning the Krylov subspace of a matrix very close to A until this subspace contains an exact eigenvector of a matrix very close to A, and up to this point the effective behavior of the algorithm for the eigenproblem is very like that of the Lanczos algorithm using full reorthogonalization. This helps to explain the remarkable behavior of the basic Lanczos algorithm.
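The basic Lanczos recurrence discussed above can be sketched in a few lines. This is a minimal exact-arithmetic illustration on a hypothetical small matrix, without the reorthogonalization issues that only appear in floating point at scale.

```python
# Minimal Lanczos tridiagonalization sketch (no reorthogonalization).
# The matrix A and starting vector q1 are illustrative assumptions.

def lanczos(A, q1, steps):
    n = len(A)
    alphas, betas = [], []
    q_prev = [0.0] * n
    q = q1[:]
    beta = 0.0
    for _ in range(steps):
        w = [sum(A[i][j] * q[j] for j in range(n)) for i in range(n)]  # w = A q
        alpha = sum(w[i] * q[i] for i in range(n))                     # q^T A q
        w = [w[i] - alpha * q[i] - beta * q_prev[i] for i in range(n)]
        alphas.append(alpha)
        beta = sum(x * x for x in w) ** 0.5
        if beta < 1e-14:          # the Krylov subspace is invariant: stop
            break
        betas.append(beta)
        q_prev, q = q, [x / beta for x in w]
    return alphas, betas

A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
q1 = [1.0, 0.0, 0.0]
alphas, betas = lanczos(A, q1, 3)
# The Ritz values are the eigenvalues of the tridiagonal matrix with
# diagonal `alphas` and off-diagonal `betas`.
print(alphas, betas)
```

For this tridiagonal A and starting vector e1, the recurrence simply reproduces A itself, so the Ritz values after three steps are exact.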

5.
We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is constructed from certain basis vectors of an invariant subspace of A, such as eigenvectors, Jordan vectors, or Schur vectors. We show that most of the eigenvalues of the low-rank perturbed matrix stay unchanged from the eigenvalues of A; the perturbation can only change the eigenvalues of A that are related to the invariant subspace. Existing results mostly concern perturbations built from eigenvectors with full column rank; we generalize these results to more general settings. Applications of our results to a few interesting problems, including Google's second eigenvalue problem, are presented.

6.
Principal component analysis here means recursively estimating the eigenvectors and the corresponding eigenvalues of a symmetric matrix A based on its noisy observations A_k = A + N_k, where A is allowed to have arbitrary eigenvalues, possibly with multiplicity greater than one. In the paper, recursive algorithms are proposed and their ordered convergence is established: it is shown that the first algorithm converges almost surely to a unit eigenvector corresponding to the largest eigenvalue; the second algorithm converges almost surely to a unit eigenvector corresponding to either the second largest eigenvalue, if the largest eigenvalue is simple, or the largest eigenvalue, if its multiplicity is greater than one; and so on. The convergence rate is also derived.
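A stochastic-approximation iteration of the kind described above can be sketched as follows. This is not the paper's algorithm: the matrix, the deterministic vanishing "noise", and the step sizes are all illustrative assumptions chosen so the example is reproducible.

```python
# Sketch of a recursive estimate of the leading unit eigenvector from
# noisy observations A_k = A + N_k, with decreasing step sizes a_k = 1/k.
# The matrix A, the noise model, and the start vector are illustrative.

def estimate_leading_eigvec(A, noise, x0, steps):
    n = len(A)
    x = x0[:]
    for k in range(1, steps + 1):
        Ak = [[A[i][j] + noise(k) for j in range(n)] for i in range(n)]
        y = [sum(Ak[i][j] * x[j] for j in range(n)) for i in range(n)]
        a = 1.0 / k                       # decreasing step size
        x = [x[i] + a * y[i] for i in range(n)]
        norm = sum(v * v for v in x) ** 0.5
        x = [v / norm for v in x]         # keep the iterate on the unit sphere
    return x

A = [[3.0, 0.0], [0.0, 1.0]]              # leading eigenvector is e1
noise = lambda k: 0.1 * (-1) ** k / k     # deterministic vanishing perturbation
x = estimate_leading_eigvec(A, noise, [0.6, 0.8], 200)
print(abs(x[0]))   # close to 1: the estimate aligns with e1
```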

7.
For classical orthogonal projection methods for large matrix eigenproblems, it may be much more difficult for a Ritz vector to converge than for its corresponding Ritz value when the matrix in question is non-Hermitian. To address this, a class of new refined orthogonal projection methods has been proposed. It is proved that in some sense each refined method is a composite of two classical orthogonal projections, in which each refined approximate eigenvector is obtained by a new projection, associated with a Hermitian positive semidefinite matrix, onto the same subspace. A priori error bounds on the refined approximate eigenvector are established in terms of the sine of the acute angle between the normalized eigenvector and the subspace involved. It is shown that the sufficient conditions for convergence of the refined vector and of the Ritz value are the same, so the refined methods may be much more efficient than the classical ones.

8.
The approximate solutions in standard iteration methods for linear systems Ax = b, with A an n by n nonsingular matrix, form a subspace. In this subspace, one may try to construct better approximations to the solution x. This is the idea behind Krylov subspace methods. It has led to very powerful and efficient methods such as conjugate gradients, GMRES, and Bi-CGSTAB. We give an overview of these methods and discuss some relevant properties from the user's perspective. The convergence of Krylov subspace methods depends strongly on the eigenvalue distribution of A and on the angles between eigenvectors of A. Preconditioning is a popular technique for obtaining a better-behaved linear system. We briefly discuss some modern developments in preconditioning; in particular, parallel preconditioners are highlighted: reordering techniques for incomplete decompositions, domain decomposition approaches, and sparsified Schur complements.
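The conjugate gradient method mentioned above is the prototypical Krylov subspace method for symmetric positive definite systems; a minimal sketch on a hypothetical 2x2 system:

```python
# Minimal conjugate gradient sketch for Ax = b with A symmetric positive
# definite.  Each iterate lives in a growing Krylov subspace; for an n x n
# system, exact arithmetic converges in at most n steps.  The 2x2 system
# below is an illustrative example.

def cg(A, b, tol=1e-12, max_iter=50):
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual r = b - A x (x = 0 initially)
    p = r[:]                  # first search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
# The exact solution is (1/11, 7/11); CG reaches it in two steps here.
print(x)
```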

9.
One very natural way of describing industrial processes with interference gives rise to particularly unwieldy mathematical expressions; but by a change of language, these expressions can be handled simply in the familiar notation of matrix algebra, where, however, addition and multiplication do not have their usual arithmetical meanings. This line of attack promises to produce useful machinery for the analysis of systems of interacting processes, but first all the usual algebraic questions must be answered, e.g. the question of evaluating the eigenvalues and eigenvectors of a matrix in the new notation. This problem is equivalent to that of determining the steady states of a system of interacting processes, assuming constant activity times. This paper shows that the eigenvalue of a general matrix A can be determined explicitly in terms of A, as can the eigenvector in a particular set of cases. The general eigenvector and eigenvalue computation problem can be simplified into a dual form closely analogous to a formulation of the Assignment Problem by H. W. Kuhn.
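In the max-plus algebra that such scheduling problems lead to (addition replaced by max, multiplication by +), the eigenvalue of an irreducible matrix equals its maximum cycle mean, which can be computed explicitly. The sketch below uses Karp's algorithm for the maximum cycle mean; the example matrix is an illustrative assumption, not from the paper.

```python
# The max-plus eigenvalue of an irreducible matrix equals its maximum
# cycle mean, computed here by Karp's algorithm.  Entries of -inf mean
# "no edge".  The example matrix is illustrative.

NEG_INF = float("-inf")

def max_cycle_mean(A):
    n = len(A)
    # D[k][v] = maximum weight of an edge path of length k from node 0 to v.
    D = [[NEG_INF] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max(
                (D[k - 1][u] + A[u][v] for u in range(n)
                 if D[k - 1][u] > NEG_INF and A[u][v] > NEG_INF),
                default=NEG_INF,
            )
    best = NEG_INF
    for v in range(n):
        if D[n][v] == NEG_INF:
            continue
        worst = min((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] > NEG_INF)
        best = max(best, worst)
    return best

# The cycle 0 -> 1 -> 0 has mean (2 + 4) / 2 = 3; the self-loop at node 1
# has mean 1; so the max-plus eigenvalue is 3.
A = [[NEG_INF, 2.0], [4.0, 1.0]]
print(max_cycle_mean(A))
```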

10.
Let A be a nonnegative square matrix, and let D be a diagonal matrix whose ith diagonal element is (Ax)_i/x_i, where x is a (fixed) positive vector. It is shown that the number of final classes of A equals n - rank(A - D). We also show that null(A - D) = null((A - D)^2), and that this subspace is spanned by a set of nonnegative elements. Our proof uses a characterization of nonnegative matrices having a positive eigenvector corresponding to their spectral radius.

11.
We consider estimating a random vector from its measurements in a fusion frame, in the presence of noise and subspace erasures. A fusion frame is a collection of subspaces for which the sum of the projection operators onto the subspaces is bounded below and above by constant multiples of the identity operator. We first consider the linear minimum mean-squared error (LMMSE) estimation of the random vector of interest from its fusion frame measurements in the presence of additive white noise. Each fusion frame measurement is a vector whose elements are inner products of an orthogonal basis for a fusion frame subspace and the random vector of interest. We derive bounds on the mean-squared error (MSE) and show that the MSE achieves its lower bound if the fusion frame is tight. We then analyze the robustness of the constructed LMMSE estimator to erasures of the fusion frame subspaces. We limit our erasure analysis to the class of tight fusion frames and assume that all erasures are equally important. Under these assumptions, we prove that tight fusion frames consisting of equi-dimensional subspaces have maximum robustness (in the MSE sense) with respect to erasures of one subspace among all tight fusion frames, and that the optimal subspace dimension depends on the signal-to-noise ratio (SNR). We also prove that tight fusion frames consisting of equi-dimensional subspaces with equal pairwise chordal distances are most robust with respect to two and more subspace erasures, among the class of equi-dimensional tight fusion frames. We call such fusion frames equi-distance tight fusion frames. We prove that the squared chordal distance between the subspaces in such fusion frames meets the so-called simplex bound, and thereby establish connections between equi-distance tight fusion frames and optimal Grassmannian packings. Finally, we present several examples for the construction of equi-distance tight fusion frames.
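The tightness condition used above (the projections summing to a constant multiple of the identity) can be checked directly on a small example. This is an illustrative construction, not one of the paper's: three one-dimensional subspaces of R^2 spanned by unit vectors at 60-degree increments form a tight fusion frame.

```python
# Tightness check for a fusion frame of three one-dimensional subspaces
# of R^2, spanned by unit vectors at angles 0, 60 and 120 degrees.
# Tightness means the orthogonal projections sum to c * I for a constant c.

from math import cos, sin, pi

def projection(u):
    """Orthogonal projection onto span{u} for a unit vector u in R^2."""
    return [[u[i] * u[j] for j in range(2)] for i in range(2)]

angles = [0.0, pi / 3, 2 * pi / 3]
P_sum = [[0.0, 0.0], [0.0, 0.0]]
for t in angles:
    P = projection([cos(t), sin(t)])
    for i in range(2):
        for j in range(2):
            P_sum[i][j] += P[i][j]

# The projections sum to (3/2) * I, so this fusion frame is tight.
print(P_sum)
```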

12.
Given a square matrix A, the inverse subspace problem is concerned with determining a closest matrix to A with a prescribed invariant subspace. When A is Hermitian, the closest matrix may be required to be Hermitian. We measure distance in the Frobenius norm and discuss applications to Krylov subspace methods for the solution of large-scale linear systems of equations and eigenvalue problems, as well as to the construction of blurring matrices. Extensions that allow the matrix A to be rectangular and applications to Lanczos bidiagonalization, as well as to the recently proposed subspace-restricted SVD method for the solution of linear discrete ill-posed problems, are also considered.

13.
In El Ghazi et al. [Backward error for the common eigenvector problem, CERFACS Report TR/PA/06/16, Toulouse, France, 2006], we proved the sensitivity of computing the common eigenvector of two matrices A and B, and we designed a new approach to solve this problem based on the notion of the backward error. If one of the two matrices (say A) has n linearly independent eigenvectors, then to find the common eigenvector we need only write the matrix B in the basis formed by the eigenvectors of A. But if there are eigenvalues of multiplicity greater than 1, the common eigenvector belongs to a vector space of dimension greater than 1, and such a strategy would not help compute it. In this paper we use Newton's method to compute a common eigenvector for two matrices, taking the backward error as a stopping criterion. We mention that no assumptions are made on the matrices A and B.

14.
The present qualitative case study on mathematics majors’ visualization of eigenvector–eigenvalue concepts in a dynamic environment highlights the significance of student-generated representations in a theoretical framework drawn from Sierpinska's (2000) modes of thinking in linear algebra. Such an approach seemed to provide the research participants with mathematical freedom, which resulted in an awareness of the multiple ways that eigenvalue–eigenvector relationships could be visualized in a manner that widened students’ repertoire of meta-representational competences (diSessa, 2004) in coordination with their preferred modes of visualization. Students’ expression of visual fluency in the course of making sense of the eigenvalue problem Au = λu associated with a variety of matrices occurred in different, yet not necessarily hierarchical modes of visualizations that differed from matrix to matrix: (i) synthetic/analytic mode manifested in the process of detecting eigenvectors when the sought eigenvector and the matrix-applied product vector were aligned in the same/opposite directions; (ii) analytic arithmetic mode manifested in the case of singular matrices (in the determination of the zero eigenvalue) and invertible matrices with nonreal eigenvalues; (iii) analytic structural mode, though rarely occurred, manifested in making sense of the trajectory (circle, ellipse, line segment) of the matrix-applied product vector and relating trajectory behavior to matrix type. While the connection between the thinking modes (Sierpinska, 2000) and the concreteness–necessity–generalizability triad (Harel, 2000) was not sharp, math majors still frequently implemented the CNG principles, which proved facilitatory tools in the evolution of students’ thinking about the eigenvalue–eigenvector relationships.

15.
Composite orthogonal projection methods for large matrix eigenproblems
For classical orthogonal projection methods for large matrix eigenproblems, it may be much more difficult for a Ritz vector to converge than for its corresponding Ritz value when the matrix in question is non-Hermitian. To address this, a class of new refined orthogonal projection methods has been proposed. It is proved that in some sense each refined method is a composite of two classical orthogonal projections, in which each refined approximate eigenvector is obtained by a new projection, associated with a Hermitian positive semidefinite matrix, onto the same subspace. A priori error bounds on the refined approximate eigenvector are established in terms of the sine of the acute angle between the normalized eigenvector and the subspace involved. It is shown that the sufficient conditions for convergence of the refined vector and of the Ritz value are the same, so the refined methods may be much more efficient than the classical ones. Project supported by the China State Major Key Projects for Basic Researches, the National Natural Science Foundation of China (Grant No. 19571014), the Doctoral Program (97014113), the Foundation for Excellent Young Scholars of the Ministry of Education, the Foundation for Returned Scholars of China, and the Liaoning Province Natural Science Foundation.

16.
By means of an eigenvector and eigenvalue of a real symmetric matrix A, a unitary matrix U is constructed such that U⁻¹AU deflates A and, moreover, the transformation preserves the band structure.
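The deflation step itself can be sketched with a Householder reflector. This only illustrates deflation on a hypothetical 2x2 example; the paper's construction, which additionally preserves the band structure, is more elaborate.

```python
# Deflation sketch: given a unit eigenvector v of a symmetric matrix A with
# eigenvalue lam, a Householder reflector U with U e1 = v yields U^T A U
# with lam isolated in the (1,1) position and zeros elsewhere in the first
# row and column.  The matrix and eigenpair below are illustrative.

def householder_from_e1(v):
    """Symmetric orthogonal reflector U with U e1 = v (so U v = e1 too)."""
    n = len(v)
    w = v[:]
    w[0] -= 1.0                        # w = v - e1
    norm2 = sum(x * x for x in w)
    if norm2 < 1e-30:                  # v is already e1
        return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return [[(1.0 if i == j else 0.0) - 2.0 * w[i] * w[j] / norm2
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]
v = [0.5 ** 0.5, 0.5 ** 0.5]           # unit eigenvector for eigenvalue 3
U = householder_from_e1(v)
B = matmul(matmul(U, A), U)            # U is symmetric, so U^T A U = U A U
# B is diag(3, 1): the eigenvalue 3 has been deflated to the (1,1) entry.
print(B)
```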

17.
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.

18.
The classical Hermitian eigenvalue problem addresses the following question: what are the possible eigenvalues of the sum A + B of two Hermitian matrices A and B, provided we fix the eigenvalues of A and B? A systematic study of this problem was initiated by H. Weyl (1912). By virtue of contributions from a long list of mathematicians, notably Weyl (1912), Horn (1962), Klyachko (1998) and Knutson–Tao (1999), the problem is finally settled. The solution asserts that the eigenvalues of A + B are given in terms of a certain system of linear inequalities in the eigenvalues of A and B. These inequalities (called the Horn inequalities) are given explicitly in terms of certain triples of Schubert classes in the singular cohomology of Grassmannians and the standard cup product. Belkale (2001) gave a smaller set of inequalities for the problem in this case (which was shown to be optimal by Knutson–Tao–Woodward). The Hermitian eigenvalue problem has been extended by Berenstein–Sjamaar (2000) and Kapovich–Leeb–Millson (2009) to any semisimple complex algebraic group G. Their solution is again in terms of a system of linear inequalities obtained from certain triples of Schubert classes in the singular cohomology of the partial flag varieties G/P (P being a maximal parabolic subgroup) and the standard cup product. However, their solution is far from being optimal. In a joint work with P. Belkale, we define a deformation of the cup product in the cohomology of G/P and use this new product to generate our system of inequalities, which solves the problem for any G optimally (as shown by Ressayre). This article is a survey (with more or less complete proofs) of this additive eigenvalue problem. The eigenvalue problem is equivalent to the saturated tensor product problem. We also give an extension of the saturated tensor product problem to the saturated restriction problem for any pair G ⊂ Ĝ of connected reductive algebraic groups. In the appendix by M. Kapovich, a connection between metric geometry and the representation theory of complex semisimple algebraic groups is explained. The connection runs through the theory of buildings and is exploited to give a uniform (though not optimal) saturation factor for any G.
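The simplest instances of the linear inequalities mentioned above are Weyl's original inequalities, which can be stated as:

```latex
% Weyl's inequalities (1912), the simplest Horn-type constraints:
% for n x n Hermitian A and B with eigenvalues listed in decreasing order
% \lambda_1 \ge \dots \ge \lambda_n,
\[
  \lambda_{i+j-1}(A+B) \;\le\; \lambda_i(A) + \lambda_j(B),
  \qquad i + j - 1 \le n .
\]
% Taking i = j = 1 gives the familiar bound
% \lambda_1(A+B) \le \lambda_1(A) + \lambda_1(B).
```

The full Horn system consists of many more such inequalities, indexed by the triples of Schubert classes described in the abstract.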

19.
We prove strictly monotonic error decrease in the Euclidean norm of the Krylov subspace approximation of exp(A)φ, where φ and A are respectively a vector and a symmetric matrix. In addition, we show that the norm of the approximate solution grows strictly monotonically with the subspace dimension.

20.
In this paper we consider the problem of estimating the largest eigenvalue and the corresponding eigenvector of a symmetric matrix. In particular, we consider iterative methods, such as the power method and the Lanczos method. These methods need a starting vector, which is usually chosen randomly. We analyze the behavior of these methods when the initial vector is chosen with uniform distribution over the unit n-dimensional sphere. We extend and generalize results reported earlier. In particular, we give upper and lower bounds on the p-norm of the randomized error, and we improve previously known bounds with a detailed analysis of the role of the multiplicity of the largest eigenvalue.
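The power method analyzed above can be sketched in a few lines. The matrix and starting vector here are illustrative assumptions (a fixed start rather than the random one the paper studies); a random start drawn uniformly from the unit sphere almost surely avoids a start orthogonal to the leading eigenvector.

```python
# Power method sketch for the largest eigenvalue and eigenvector of a
# symmetric matrix.  The matrix and starting vector are illustrative.

def power_method(A, x0, iters=100):
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(n and iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = sum(y[i] * x[i] for i in range(n))      # Rayleigh quotient x^T A x
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]                     # normalize the iterate
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]        # eigenvalues 3 and 1
lam, x = power_method(A, [1.0, 0.0])
# lam converges to 3 at the rate (1/3)^k for this starting vector.
print(lam)
```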
