Similar Documents
20 similar documents found (search time: 31 ms).
1.
For a given pair of a Hermitian matrix A and a rectangular matrix B with the same row number, we reformulate a well-known simultaneous Hermitian-type generalized singular value decomposition (HGSVD) with more precise structure and parameters, and use it to derive algebraic properties of the linear Hermitian matrix function A − BXB*, the Hermitian solutions of the matrix equation BXB* = A, the canonical form of a partitioned Hermitian matrix, and some optimization problems on its inertia and rank. Copyright © 2012 John Wiley & Sons, Ltd.

2.
An outstanding problem when computing a function of a matrix, f(A), by using a Krylov method is to accurately estimate errors when convergence is slow. Apart from the case of the exponential function that has been extensively studied in the past, there are no well-established solutions to the problem. Often, the quantity of interest in applications is not the matrix f(A) itself but rather the matrix–vector products or bilinear forms. When the computation related to f(A) is a building block of a larger problem (e.g., approximately computing its trace), a consequence of the lack of reliable error estimates is that the accuracy of the computed result is unknown. In this paper, we consider the problem of computing tr(f(A)) for a symmetric positive-definite matrix A by using the Lanczos method and make two contributions: (a) an error estimate for the bilinear form associated with f(A) and (b) an error estimate for the trace of f(A). We demonstrate the practical usefulness of these estimates for large matrices and, in particular, show that the trace error estimate is indicative of the number of accurate digits. As an application, we compute the log determinant of a covariance matrix in Gaussian process analysis and underline the importance of error tolerance as a stopping criterion as a means of bounding the number of Lanczos steps to achieve a desired accuracy.
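The Lanczos quadrature machinery the paper builds on is easy to sketch. The Python fragment below is an illustrative stand-in, not the authors' code, and omits the error estimates that are the paper's actual contribution; it combines a plain Lanczos recurrence with Hutchinson-type Rademacher probing to estimate tr(f(A)) for a symmetric positive-definite A:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos(A, v, m):
    """m-step Lanczos on symmetric A; returns tridiagonal coefficients (alpha, beta).
    No reorthogonalization -- adequate for a quadrature sketch, not production use."""
    q = v / np.linalg.norm(v)
    q_prev = np.zeros_like(q)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    for j in range(m):
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return alpha, beta

def trace_f(A, f, m=30, n_probes=20, seed=0):
    """Hutchinson-type estimator of tr(f(A)) via Lanczos (Gauss) quadrature."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    acc = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe, ||v||^2 = n
        alpha, beta = lanczos(A, v, m)
        theta, S = eigh_tridiagonal(alpha, beta)   # Ritz values and vectors
        acc += n * (S[0] ** 2) @ f(theta)          # quadrature for v^T f(A) v
    return acc / n_probes
```

For example, trace_f(A, np.log) then approximates the log-determinant of an SPD covariance matrix, the application mentioned above; m and n_probes trade cost against accuracy.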

3.
A complex square matrix A is called an orthogonal projector if A² = A = A*, where A* is the conjugate transpose of A. In this article, we first give some formulas for calculating the distributions of real eigenvalues of a linear combination of two orthogonal projectors. Then, we establish various expansion formulas for calculating the inertias, ranks and signatures of some 2×2 and 3×3, as well as k×k, block Hermitian matrices consisting of two orthogonal projectors. Many applications of the formulas are presented in characterizing interval distributions of numbers of eigenvalues, and nonsingularity of these block Hermitian matrices. In addition, necessary and sufficient conditions are given for various equalities and inequalities of these block Hermitian matrices to hold.
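For readers who want to experiment with such eigenvalue-distribution formulas, the following NumPy check illustrates the classical two-projector fact (assumed here as background, not quoted from the article) that the spectrum of P + Q is governed by the principal angles between range(P) and range(Q):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 8, 3, 4
U = np.linalg.qr(rng.standard_normal((n, p)))[0]    # orthonormal basis of range(P)
V = np.linalg.qr(rng.standard_normal((n, q)))[0]    # orthonormal basis of range(Q)
P, Q = U @ U.T, V @ V.T                             # two orthogonal projectors

cos_th = np.linalg.svd(U.T @ V, compute_uv=False)   # cosines of the principal angles
eig = np.sort(np.linalg.eigvalsh(P + Q))
pred = np.sort(np.concatenate([1 - cos_th, 1 + cos_th,
                               np.ones(q - p),        # range(Q) directions orthogonal to range(P)
                               np.zeros(n - p - q)])) # directions in neither range
print(np.allclose(eig, pred))                       # True for generic subspaces
```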

4.
Through a Hermitian-type (skew-Hermitian-type) singular value decomposition for a pair of matrices (A, B) introduced by Zha (Linear Algebra Appl. 1996; 240:199–205), where A is Hermitian (skew-Hermitian), we show how to find a Hermitian (skew-Hermitian) matrix X such that the matrix expressions A − BX ± X*B* achieve their maximal and minimal possible ranks, respectively. For the consistent matrix equations BX ± X*B* = A, we give general solutions through the two kinds of generalized singular value decompositions. As applications to the general linear model {y, Xβ, σ²V}, we discuss the existence of a symmetric matrix G such that Gy is the weighted least-squares estimator and the best linear unbiased estimator of Xβ, respectively. Copyright © 2007 John Wiley & Sons, Ltd.

5.
Many applications, such as subspace-based models in information retrieval and signal processing, require the computation of singular subspaces associated with the k dominant, or largest, singular values of an m×n data matrix A, where k ≪ min(m, n). Frequently, A is sparse or structured, which usually means matrix–vector multiplications involving A and its transpose can be done with much less than O(mn) flops, and A and its transpose can be stored with much less than O(mn) storage locations. Many Lanczos-based algorithms have been proposed through the years because the underlying Lanczos method only accesses A and its transpose through matrix–vector multiplications. We implement a new algorithm, called KSVD, in the Matlab environment for computing approximations to the singular subspaces associated with the k dominant singular values of a real or complex matrix A. KSVD is based upon the Lanczos tridiagonalization method, the WY representation for storing products of Householder transformations, implicit deflation, and the QR factorization. Our Matlab simulations suggest it is a fast and reliable strategy for handling troublesome singular-value spectra. Copyright © 2001 John Wiley & Sons, Ltd.
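KSVD itself targets Matlab and is not part of standard Python libraries, but the access pattern it relies on, touching A only through matrix-vector products, can be illustrated with SciPy's Lanczos-type sparse SVD (a rough stand-in, not the KSVD algorithm):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

A = sp.random(2000, 500, density=1e-2, format="csr", random_state=0)

k = 10
U, s, Vt = svds(A, k=k)        # ARPACK-based; touches A only via products with A and A.T
order = np.argsort(s)[::-1]    # svds returns singular values in ascending order
U, s, Vt = U[:, order], s[order], Vt[order]
```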

6.
In this note we study a variant of the inverted Lanczos method, which computes eigenvalue approximations of a symmetric matrix A as Ritz values of A from a Krylov space of A⁻¹. The method turns out to be slightly faster than the Lanczos method, at least as long as reorthogonalization is not required. The method is applied to the problem of determining the smallest eigenvalue of a symmetric Toeplitz matrix, and is accelerated by taking advantage of symmetry properties of the corresponding eigenvector.
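The inverted-Lanczos idea, Ritz values of A extracted from a Krylov space of A⁻¹, can be mimicked with standard tools. The sketch below is illustrative only and does not reproduce the Toeplitz-specific symmetry acceleration of the note; it wraps a fast Toeplitz solve in a LinearOperator so that ARPACK's Lanczos iteration works with A⁻¹, whose largest Ritz value is then inverted back:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.sparse.linalg import LinearOperator, eigsh

n = 400
c = np.concatenate([[2.0], 0.5 ** np.arange(1, n)])  # first column; this Toeplitz matrix is SPD
Ainv = LinearOperator((n, n), matvec=lambda x: solve_toeplitz(c, x), dtype=np.float64)

mu = eigsh(Ainv, k=1, which="LM", return_eigenvectors=False)[0]
lam_min = 1.0 / mu             # smallest eigenvalue of the Toeplitz matrix A
```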

7.
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices; the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank-k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low-rank perturbation. Numerical tests show that this straightforward algorithm is effective.
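The two rank conditions quoted above are easy to observe numerically. In the sketch below (a simple illustration of the characterization, not the paper's detection iteration), a rank-k perturbation of an orthogonal matrix leaves at most 2k singular values away from 1, and a rank-k perturbation of a real diagonal (hence Hermitian) matrix leaves a skew-Hermitian part with at most 2k nonzero eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]                    # orthogonal (unitary) matrix
A = Q + rng.standard_normal((n, k)) @ rng.standard_normal((k, n))   # unitary plus rank k

s = np.linalg.svd(A, compute_uv=False)
print(np.sum(~np.isclose(s, 1.0)))          # at most 2k singular values deviate from 1

D = np.diag(rng.standard_normal(n))                                 # Hermitian
B = D + rng.standard_normal((n, k)) @ rng.standard_normal((k, n))   # Hermitian plus rank k
skew_eigs = np.linalg.eigvalsh(1j * (B - B.T) / 2)                  # i * (skew part) is Hermitian
print(np.sum(~np.isclose(skew_eigs, 0.0)))  # at most 2k eigenvalues deviate from 0
```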

8.
The consistency conditions and general expressions for the Hermitian solutions of the linear matrix equations AXB=C and (AX, XB)=(C, D) are studied in depth, where A, B, C and D are given matrices of suitable sizes. The Hermitian minimum F-norm solutions are obtained for the matrix equations AXB=C and (AX, XB)=(C, D) by the Moore–Penrose generalized inverse, respectively. For both matrix equations, we design iterative methods according to the fundamental idea of the classical conjugate direction method for the standard system of linear equations. Numerical results show that these iterative methods are feasible and effective in actual computations of the solutions of the above-mentioned two matrix equations. Copyright © 2006 John Wiley & Sons, Ltd.
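For the unconstrained version of the first equation, the Moore–Penrose route mentioned above is a two-liner. The sketch below is a generic illustration; the Hermitian-constrained solutions and the conjugate-direction iterations of the paper are more involved. It computes the minimum-F-norm solution of a consistent AXB = C:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((5, 7))
C = A @ rng.standard_normal((4, 5)) @ B          # built to be consistent

X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)    # minimum-F-norm solution of AXB = C
print(np.allclose(A @ X @ B, C))                 # True
```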

9.
Truncated singular value decomposition is a popular method for solving linear discrete ill-posed problems with a small to moderately sized matrix A. Regularization is achieved by replacing the matrix A by its best rank-k approximant, which we denote by A_k. The rank may be determined in a variety of ways, for example, by the discrepancy principle or the L-curve criterion. This paper describes a novel regularization approach, in which A is replaced by the closest matrix in a unitarily invariant matrix norm with the same spectral condition number as A_k. Computed examples illustrate that this regularization approach often yields approximate solutions of higher quality than the replacement of A by A_k. Copyright © 2014 John Wiley & Sons, Ltd.
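As a baseline for the regularization being refined here, plain TSVD is a few lines of NumPy (a standard method, shown for orientation; the paper's condition-number-matching replacement of A is not implemented):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: invert only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```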

10.
Despite its usefulness in solving eigenvalue problems and linear systems of equations, the nonsymmetric Lanczos method is known to suffer from a potential breakdown problem. Previous and recent approaches for handling the Lanczos exact and near-breakdowns include, for example, the look-ahead schemes by Parlett–Taylor–Liu [23], Freund–Gutknecht–Nachtigal [9], and Brezinski–Redivo Zaglia–Sadok [4]; the combined look-ahead and restart scheme by Joubert [18]; and the low-rank modified Lanczos scheme by Huckle [17]. In this paper, we present yet another scheme based on a modified Krylov subspace approach for the solution of nonsymmetric linear systems. When a breakdown occurs, our approach seeks a modified dual Krylov subspace, which is the sum of the original subspace and a new Krylov subspace K_m(w_j, Aᵀ), where w_j is a new start vector (this approach has been studied by Ye [26] for eigenvalue computations). Based on this strategy, we have developed a practical algorithm for linear systems called the MLAN/QM algorithm, which also incorporates the residual quasi-minimization as proposed in [12]. We present a few convergence bounds for the method as well as numerical results to show its effectiveness. Research supported by the Natural Sciences and Engineering Research Council of Canada.

11.
To further study the Hermitian and non-Hermitian splitting methods for a non-Hermitian and positive-definite matrix, we introduce a so-called lopsided Hermitian and skew-Hermitian splitting and then establish a class of lopsided Hermitian/skew-Hermitian (LHSS) methods to solve the non-Hermitian and positive-definite systems of linear equations. These methods include a two-step LHSS iteration and its inexact version, the inexact lopsided Hermitian/skew-Hermitian (ILHSS) iteration, which employs some Krylov subspace methods as its inner process. We theoretically prove that the LHSS method converges to the unique solution of the linear system under a loose restriction on the parameter α. Moreover, the contraction factor of the LHSS iteration is derived. The presented numerical examples illustrate the effectiveness of both the LHSS and ILHSS iterations. Copyright © 2007 John Wiley & Sons, Ltd.
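For orientation, here is a dense sketch of the classical two-step HSS iteration of Bai, Golub and Ng that the lopsided variant modifies; the LHSS splitting itself differs, so this is only the symmetric reference scheme, with H and S the Hermitian and skew-Hermitian parts of A:

```python
import numpy as np

def hss(A, b, alpha=1.0, tol=1e-10, maxit=500):
    """Classical HSS iteration for non-Hermitian positive-definite A and alpha > 0."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2          # Hermitian part
    S = (A - A.conj().T) / 2          # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros_like(b, dtype=A.dtype)
    for _ in range(maxit):
        x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```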

12.
Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix A to a more convenient form. One of the most useful types of such reduction is the orthogonal reduction to (upper) Hessenberg form. This reduction can be computed by the Arnoldi algorithm. When A is Hermitian, the resulting upper Hessenberg matrix is tridiagonal, which is a significant computational advantage. In this paper we study necessary and sufficient conditions on A so that the orthogonal Hessenberg reduction yields a Hessenberg matrix with small bandwidth. This includes the orthogonal reduction to tridiagonal form as a special case. Orthogonality here is meant with respect to some given but unspecified inner product. While the main result is already implied by the Faber-Manteuffel theorem on short recurrences for orthogonalizing Krylov sequences (see Liesen and Strakoš, SIAM Rev 50:485–503, 2008), we consider it useful to present a new, less technical proof. Our proof utilizes the idea of a "minimal counterexample", which is standard in combinatorial optimization, but rarely used in the context of linear algebra. The work of P. Tichy was supported by the Emmy Noether-Program of the Deutsche Forschungsgemeinschaft and by the GAAS grant IAA100300802.
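A bare-bones Arnoldi routine makes the Hermitian special case above easy to observe: in the Euclidean inner product, a symmetric A forces the computed Hessenberg matrix to be tridiagonal. The sketch below is illustrative, not the paper's proof machinery:

```python
import numpy as np

def arnoldi(A, v, m):
    """m Arnoldi steps; returns orthonormal V (n x (m+1)) and Hessenberg H ((m+1) x m)."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(4)
M = rng.standard_normal((30, 30))
A = M + M.T                                 # symmetric test matrix
V, H = arnoldi(A, rng.standard_normal(30), 10)
print(np.allclose(np.triu(H, 2), 0, atol=1e-10))   # True: H collapses to tridiagonal form
```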

13.
Fast algorithms, based on the unsymmetric look-ahead Lanczos and the Arnoldi process, are developed for the estimation of the functional Φ(f) = uᵀf(A)v for fixed u, v and A, where A ∈ ℂ^{n×n} is a large-scale unsymmetric matrix. Numerical results are presented which validate the comparable accuracy of both approaches. Although the Arnoldi process reaches convergence more quickly in some cases, it has greater memory requirements, and may not be suitable for especially large applications. Copyright © 2003 John Wiley & Sons, Ltd.
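For the Arnoldi branch of this comparison, the bilinear form can be approximated by projecting onto a Krylov space of A, f(A)v ≈ ||v|| V_m f(H_m) e₁, and contracting with u. The sketch below reuses the arnoldi routine defined under item 12 above and is only an illustrative variant; the look-ahead Lanczos algorithm of the paper is not shown:

```python
import numpy as np
from scipy.linalg import funm

def bilinear_form(A, u, v, f, m=30):
    """Approximate u^T f(A) v via the Arnoldi projection onto K_m(A, v)."""
    V, H = arnoldi(A, v, m)           # the Arnoldi sketch from item 12
    fH = funm(H[:m, :m], f)           # f evaluated on the small Hessenberg matrix
    return np.linalg.norm(v) * (u @ (V[:, :m] @ fH[:, 0]))
```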

14.
General stationary iterative methods with a singular matrix M for solving range-Hermitian singular linear systems are presented, and some convergence conditions and a representation of the solution are given. It can be verified that the general Ortega–Plemmons theorem and the Keller theorem for the singular matrix M still hold. Furthermore, the singular matrix M can act as a good preconditioner for solving range-Hermitian linear systems. Numerical results have demonstrated the effectiveness of the general stationary iterations and the singular preconditioner M. Copyright © 2009 John Wiley & Sons, Ltd.

15.
For an H-matrix A = D + DᴴHD, with D diagonal and H Hermitian, D is a candidate for being a closest normal matrix to A. The additional, second-order optimality condition is reformulated into an eigenvalue problem involving matrices of the same order. Finally, an example of a locally but not globally best normal approximation is given.

16.
This paper gives a group of expansion formulas for the inertias of the Hermitian matrix polynomials A − A², I − A² and A − A³ through some congruence transformations for block matrices, where A is a Hermitian matrix. Then, the paper derives various expansion formulas for the ranks and inertias of some matrix pencils generated from two or three orthogonal projectors and Hermitian unitary matrices. As applications, the paper establishes necessary and sufficient conditions for many matrix equalities to hold, as well as many inequalities in the Löwner partial ordering to hold.

17.
We construct, analyze, and implement SSOR-like preconditioners for a non-Hermitian positive definite system of linear equations whose coefficient matrix possesses either a dominant Hermitian part or a dominant skew-Hermitian part. We derive tight bounds for the eigenvalues of the preconditioned matrices and obtain convergence rates of the corresponding SSOR-like iteration methods as well as of the corresponding preconditioned GMRES iteration methods. Numerical implementations show that Krylov subspace iteration methods such as GMRES, when accelerated by the SSOR-like preconditioners, are efficient solvers for these classes of non-Hermitian positive definite linear systems. Copyright © 2015 John Wiley & Sons, Ltd.
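A generic (point) SSOR preconditioner is straightforward to wire into GMRES. The sketch below uses the standard factored form M = (D/ω + L)(D/ω)⁻¹(D/ω + U), applied through two triangular solves; it is only a plain-vanilla stand-in for the SSOR-like preconditioners analyzed in the paper:

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(5)
n = 200
A = 2 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)   # (generically) positive definite
b = rng.standard_normal(n)

w = 1.0                                    # relaxation parameter
d = np.diag(A)
lower = np.tril(A, -1) + np.diag(d / w)    # D/w + L
upper = np.triu(A, 1) + np.diag(d / w)     # D/w + U

def apply_Minv(r):
    """Apply M^{-1} r through forward and backward triangular solves."""
    y = solve_triangular(lower, r, lower=True)
    return solve_triangular(upper, (d / w) * y, lower=False)

M = LinearOperator((n, n), matvec=apply_Minv, dtype=np.float64)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))     # info == 0 signals convergence
```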

18.
Suppose that the eigenvalues of a Hermitian matrix A whose graph is a tree T are known, as well as the eigenvalues of the principal submatrix of A corresponding to a certain branch of T. A method for constructing a larger tree T′, in which the branch is 'duplicated', and a Hermitian matrix A′ whose graph is T′ is described. The eigenvalues of A′ are all of those of A, together with those corresponding to the branch, including multiplicities. This idea is applied (1) to give a solution to the inverse eigenvalue problem for stars, (2) to prove that the known diameter lower bound, for the minimum number of distinct eigenvalues among Hermitian matrices with a given graph, is best possible for trees of bounded diameter, and (3) to increase the list of trees for which all possible lists for the possible spectra are known. A generalization of the basic branch duplication method is presented.

19.
A Hermitian matrix X is called a least-squares solution of the inconsistent matrix equation AXA* = B, where B is Hermitian and A* denotes the conjugate transpose of A, if it minimizes the F-norm of B − AXA*; it is called a least-rank solution of AXA* = B if it minimizes the rank of B − AXA*. In this paper, we study these two types of solutions by using generalized inverses of matrices and some matrix decompositions. In particular, we derive necessary and sufficient conditions for the two types of solutions to coincide. Copyright © 2012 John Wiley & Sons, Ltd.
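For the least-squares notion just defined, a classical Moore–Penrose construction gives one such solution explicitly, X = A⁺B(A⁺)*, which is Hermitian whenever B is (this is background material assumed here; the paper's decomposition-based analysis goes further):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
G = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
B = G + G.conj().T                         # Hermitian right-hand side; AXA* = B is inconsistent

Ap = np.linalg.pinv(A)
X = Ap @ B @ Ap.conj().T                   # a least-squares solution
print(np.allclose(X, X.conj().T))          # True: X is Hermitian
print(np.linalg.norm(B - A @ X @ A.conj().T))   # the minimal F-norm residual
```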

20.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX − XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of small rank, we will show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds on the norm of the error. Numerical experiments will also be given to show the effectiveness of these block methods.
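A minimal Galerkin-projection sketch captures the first case above (A large, B small): build an orthonormal basis V of the block Krylov space K_m(A, C), solve the projected small Sylvester equation (VᵀAV)Y − YB = VᵀC, and take X ≈ VY. This is an illustrative implementation of the general idea, without the deflation handling, restarting, or error bounds developed in the paper:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sylvester_galerkin(A, B, C, m=20):
    """Approximate the solution of AX - XB = C by projection onto K_m(A, C)."""
    V = np.linalg.qr(C)[0]                 # orthonormal basis of the first block
    W = V
    for _ in range(m - 1):
        W = A @ W
        W = W - V @ (V.T @ W)              # orthogonalize against the current basis
        W = W - V @ (V.T @ W)              # reorthogonalize (no deflation handling)
        W = np.linalg.qr(W)[0]
        V = np.hstack([V, W])
    Y = solve_sylvester(V.T @ A @ V, -B, V.T @ C)   # solves HY + Y(-B) = V^T C
    return V @ Y

rng = np.random.default_rng(7)
n, p = 500, 3
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
B = -np.diag(np.linspace(1.0, 2.0, p))     # spec(A), spec(B) disjoint: unique solution
C = rng.standard_normal((n, p))
X = sylvester_galerkin(A, B, C, m=30)
print(np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))   # relative residual
```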
