Similar Documents
20 similar documents found
1.
We are interested in the calculation of explicit formulae for the condition numbers of the two factors of the polar decomposition of a full rank real or complex m × n matrix A, where m ≥ n. We use a unified presentation that enables us to compute such condition numbers in the Frobenius norm, in cases where A is a square or a rectangular matrix subjected to real or complex perturbations. We denote by σ1 (respectively σn) the largest (respectively smallest) singular value of A, and by K(A) = σ1/σn the generalized condition number of A. Our main results are that the absolute condition number of the Hermitian polar factor is √2(1 + K(A)²)^(1/2)/(1 + K(A)) and that the absolute condition number of the unitary factor of a rectangular matrix is 1/σn. Copyright © 2000 John Wiley & Sons, Ltd.
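As a minimal sketch of the objects in this abstract, the polar factors and the two condition-number formulae can be computed from the SVD (this is an illustration with NumPy, not the paper's derivation):

```python
import numpy as np

def polar_and_condition_numbers(A):
    """Polar decomposition A = U H via the SVD, plus the two absolute
    condition numbers quoted in the abstract (Frobenius norm, full-rank A)."""
    W, s, Vh = np.linalg.svd(A, full_matrices=False)
    U = W @ Vh                          # orthonormal-columns (unitary) polar factor
    H = Vh.conj().T @ np.diag(s) @ Vh   # Hermitian positive-definite polar factor
    K = s[0] / s[-1]                    # generalized condition number sigma1/sigman
    cond_H = np.sqrt(2.0) * np.sqrt(1.0 + K**2) / (1.0 + K)
    cond_U = 1.0 / s[-1]
    return U, H, cond_H, cond_U

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
U, H, cond_H, cond_U = polar_and_condition_numbers(A)
```

Note that the Hermitian-factor condition number lies in [1, √2): it equals 1 when K(A) = 1 and tends to √2 as K(A) grows.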

2.
Some Simple Estimates for the Singular Values of Matrices
Abstract: We first provide simple estimates for ‖A⁻¹‖_∞ and ‖A⁻¹‖_1 of a strictly diagonally dominant matrix A. On the basis of this result, we obtain an estimate for the smallest singular value of A. Secondly, by scaling with a positive diagonal matrix D, we obtain some simple estimates for the smallest singular value of an H-matrix, which is not necessarily positive definite. Finally, we give some examples to show the effectiveness of the new bounds.
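A bound of this flavor can be sketched as follows. This uses the classical Varah-type estimates ‖A⁻¹‖_∞ ≤ 1/α_row and ‖A⁻¹‖_1 ≤ 1/α_col, combined via ‖A⁻¹‖_2 ≤ √(‖A⁻¹‖_1 ‖A⁻¹‖_∞); it illustrates the idea, not the paper's exact bound:

```python
import numpy as np

def smallest_singular_value_bound(A):
    """Lower bound on sigma_min(A) for a matrix that is strictly
    diagonally dominant by rows and by columns:
    sigma_min(A) >= sqrt(alpha_row * alpha_col), where
    alpha_row = min_i (|a_ii| - sum_{j!=i} |a_ij|), alpha_col analogous."""
    d = np.abs(np.diag(A))
    R = np.abs(A).sum(axis=1) - d    # off-diagonal row sums
    C = np.abs(A).sum(axis=0) - d    # off-diagonal column sums
    alpha_row = np.min(d - R)
    alpha_col = np.min(d - C)
    assert alpha_row > 0 and alpha_col > 0, "A must be strictly diagonally dominant"
    return np.sqrt(alpha_row * alpha_col)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
bound = smallest_singular_value_bound(A)
smin = np.linalg.svd(A, compute_uv=False)[-1]
```

For this symmetric example both dominance margins equal 2, so the bound is 2, while the true smallest singular value is about 2.85.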

3.
Many applications, such as subspace‐based models in information retrieval and signal processing, require the computation of singular subspaces associated with the k dominant, or largest, singular values of an m×n data matrix A, where k ≪ min(m,n). Frequently, A is sparse or structured, which usually means matrix–vector multiplications involving A and its transpose can be done with much less than O(mn) flops, and A and its transpose can be stored with much less than O(mn) storage locations. Many Lanczos‐based algorithms have been proposed through the years because the underlying Lanczos method only accesses A and its transpose through matrix–vector multiplications. We implement a new algorithm, called KSVD, in the Matlab environment for computing approximations to the singular subspaces associated with the k dominant singular values of a real or complex matrix A. KSVD is based upon the Lanczos tridiagonalization method, the WY representation for storing products of Householder transformations, implicit deflation, and the QR factorization. Our Matlab simulations suggest it is a fast and reliable strategy for handling troublesome singular‐value spectra. Copyright © 2001 John Wiley & Sons, Ltd.

4.
The Wilkinson distance of a matrix A is the two-norm of the smallest perturbation E so that A + E has a multiple eigenvalue. Malyshev derived a singular value optimization characterization for the Wilkinson distance. In this work we generalize the definition of the Wilkinson distance as the two-norm of the smallest perturbation so that the perturbed matrix has an eigenvalue of prespecified algebraic multiplicity. We provide a singular value characterization for this generalized Wilkinson distance. Then we outline a numerical technique to solve the derived singular value optimization problems. In particular the numerical technique is applicable to Malyshev’s formula to compute the Wilkinson distance as well as to retrieve a nearest matrix with a multiple eigenvalue.

5.
We prove two basic conjectures on the distribution of the smallest singular value of random n×n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n^(−1/2), which is optimal for Gaussian matrices. Moreover, we give an optimal estimate on the tail probability. This comes as a consequence of a new and essentially sharp estimate in the Littlewood-Offord problem: for i.i.d. random variables Xk and real numbers ak, determine the probability p that the sum Σk akXk lies near some number v. For arbitrary coefficients ak of the same order of magnitude, we show that they essentially lie in an arithmetic progression of length 1/p.
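The n^(−1/2) scaling is easy to observe empirically: for Gaussian n×n matrices, √n · σ_min should stay of order 1 as n grows. A small simulation (illustrative only, not part of the proof):

```python
import numpy as np

# For Gaussian n x n matrices, sqrt(n) * sigma_min should be O(1) in n.
rng = np.random.default_rng(42)

def scaled_smin(n, trials=20):
    vals = []
    for _ in range(trials):
        A = rng.standard_normal((n, n))
        vals.append(np.sqrt(n) * np.linalg.svd(A, compute_uv=False)[-1])
    return np.median(vals)

medians = {n: scaled_smin(n) for n in (20, 40, 80)}
```

The medians stay within a modest constant range rather than growing or shrinking with n.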

6.
Through a Hermitian‐type (skew‐Hermitian‐type) singular value decomposition for a pair of matrices (A, B) introduced by Zha (Linear Algebra Appl. 1996; 240:199–205), where A is Hermitian (skew‐Hermitian), we show how to find a Hermitian (skew‐Hermitian) matrix X such that the matrix expressions A − BX ± X*B* achieve their maximal and minimal possible ranks, respectively. For the consistent matrix equations BX ± X*B* = A, we give general solutions through the two kinds of generalized singular value decompositions. As applications to the general linear model {y, Xβ, σ²V}, we discuss the existence of a symmetric matrix G such that Gy is the weighted least‐squares estimator and the best linear unbiased estimator of Xβ, respectively. Copyright © 2007 John Wiley & Sons, Ltd.

7.
William C. Brown, Communications in Algebra, 2013, 41(8): 2401–2417
Let R be a commutative ring and A ∈ M_{m×n}. The spanning rank of A is the smallest positive integer s for which A = PQ (P of size m×s, Q of size s×n). The spanning rank of the zero matrix is set equal to zero. If R is a field, then the spanning rank of A is just the classical rank of A. In the first section of this paper, various theorems and examples are given which indicate how much of the classical theory of rank is still valid for spanning rank over a commutative ring. If A = PQ (P of size n×s, Q of size s×n) is a spanning rank factorization of a square matrix and D = QP, then D is called a spanning rank partner of A. In the second part of this paper, the null ideals N_A and N_D of A and D respectively are compared. For instance, we show N_A = N_D if s = n and N_A = X·N_D if s < n whenever R is a PID and A ≠ 0. This result sometimes (e.g. s ≪ n) makes the computation of N_A easy.

8.
This paper presents an O(n2) method based on the twisted factorization for computing the Takagi vectors of an n‐by‐n complex symmetric tridiagonal matrix with known singular values. Since the singular values can be obtained in O(n2) flops, the total cost of symmetric singular value decomposition or the Takagi factorization is O(n2) flops. An analysis shows the accuracy and orthogonality of Takagi vectors. Also, techniques for a practical implementation of our method are proposed. Our preliminary numerical experiments have verified our analysis and demonstrated that the twisted factorization method is much more efficient than the implicit QR method, divide‐and‐conquer method and Matlab singular value decomposition subroutine with comparable accuracy. Copyright © 2009 John Wiley & Sons, Ltd.
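For orientation, the Takagi factorization itself (A = U diag(s) Uᵀ with U unitary, for complex symmetric A) can be obtained from a full SVD. This is a textbook O(n³) construction for generic matrices with distinct singular values, shown only to illustrate the object being computed; it is not the paper's O(n²) twisted-factorization method:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.T) / 2                      # complex symmetric: A == A.T

W, s, Vh = np.linalg.svd(A)
# A = A^T forces conj(V) = W D for a diagonal unitary D (generically,
# when singular values are distinct); absorbing D^(1/2) into W gives
# the Takagi vectors.
D = np.diag(W.conj().T @ Vh.T)         # diagonal phases, |D_j| = 1
U = W * np.sqrt(D)                     # scale column j of W by sqrt(D_j)
```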

9.
Let A be an m × n real matrix with singular values σ1 ≥ ··· ≥ σn−1 ≥ σn ≥ 0. In cases where σn ≈ 0, the corresponding right singular vector vn is a natural choice to use for an approximate null vector of A. Using an elementary perturbation analysis, we show that κ = σ1/(σn−1 − σn) provides a quantitative measure of the intrinsic conditioning of the computation of vn from A.
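A small numerical illustration of this abstract's setup: build a nearly rank-deficient matrix with known singular values, take vn from the SVD as the approximate null vector, and form κ (the matrix here is synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 5
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s_true = np.array([3.0, 2.0, 1.0, 0.5, 1e-8])   # sigma_n nearly zero
A = U0 @ np.diag(s_true) @ V0.T

U, s, Vt = np.linalg.svd(A, full_matrices=False)
v_n = Vt[-1]                        # right singular vector for sigma_n
kappa = s[0] / (s[-2] - s[-1])      # large kappa => v_n is ill-determined
residual = np.linalg.norm(A @ v_n)  # equals sigma_n
```

Here κ ≈ 3.0/0.5 = 6, and the residual ‖A vn‖ is on the order of 1e-8, confirming vn as an approximate null vector.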

10.
We investigate the NP-hard absolute value equation (AVE) Ax−|x|=b, where A is an arbitrary n×n real matrix. In this paper, we propose a smoothing Newton method for the AVE. When the singular values of A exceed 1, we show that this proposed method is globally convergent and the convergence rate is quadratic. Preliminary numerical results show that this method is promising.
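A smoothing-Newton iteration of the kind described can be sketched by replacing |x| with the smooth surrogate φ_μ(x) = √(x² + μ²) and driving μ → 0; each step is a Newton step on F(x) = Ax − φ_μ(x) − b. This is an illustrative sketch under the σ_min(A) > 1 assumption; the paper's exact method and globalization differ in detail:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
# Construct A with all singular values > 1, the regime with guaranteed
# unique solvability of the AVE.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(rng.uniform(1.5, 4.0, n)) @ Q2

x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)     # AVE with known solution

x, mu = b.copy(), 1.0
for _ in range(100):
    phi = np.sqrt(x**2 + mu**2)     # smooth surrogate for |x|
    F = A @ x - phi - b
    if np.linalg.norm(F) < 1e-12 and mu < 1e-12:
        break
    J = A - np.diag(x / phi)        # Jacobian; nonsingular since sigma_min(A) > 1
    x = x + np.linalg.solve(J, -F)
    mu = max(0.5 * mu, 1e-14)       # shrink the smoothing parameter

residual = np.linalg.norm(A @ x - np.abs(x) - b)
```

Because σ_min(A) > 1 makes the solution unique, a small residual certifies that x has recovered x_true.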

11.
Let A be an n × n symmetric matrix of bandwidth 2m + 1. The matrix need not be positive definite. In this paper we will present an algorithm for factoring A which preserves symmetry and the band structure and limits element growth in the factorization. With this factorization one may solve a linear system with A as the coefficient matrix and determine the inertia of A, the number of positive, negative, and zero eigenvalues of A. The algorithm requires between nm²/2 and 5nm²/4 multiplications and at most (2m + 1)n locations, compared to non‐symmetric Gaussian elimination which requires between nm² and 2nm² multiplications and at most (3m + 1)n locations. Our algorithm reduces A to block diagonal form with 1 × 1 and 2 × 2 blocks on the diagonal. When pivoting for stability and subsequent transformations produce non‐zero elements outside the original band, column/row transformations are used to retract the bandwidth. To decrease the operation count and the necessary storage, we use the fact that the correction outside the band is rank‐1 and invert the process, applying the transformations that would restore the bandwidth first, followed by a modified correction. This paper contains an element growth analysis and a computational comparison with LAPACK's non‐symmetric band routines and the Snap‐back code of Irony and Toledo. Copyright © 2007 John Wiley & Sons, Ltd.
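The inertia-from-block-diagonal idea can be illustrated with SciPy's dense symmetric-indefinite factorization: factor A = L D Lᵀ with 1×1 and 2×2 blocks on the diagonal of D, then read the inertia of A off D's (cheap) eigenvalues via Sylvester's law of inertia. This uses the dense Bunch-Kaufman routine as a stand-in for the banded algorithm of the paper:

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(11)
n = 12
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                  # symmetric, generically indefinite

L, D, perm = ldl(A)                # A = L @ D @ L.T, D block diagonal
d_eigs = np.linalg.eigvalsh(D)     # D has 1x1/2x2 blocks, so this is cheap
inertia = (int(np.sum(d_eigs > 0)),
           int(np.sum(d_eigs < 0)),
           int(np.sum(np.isclose(d_eigs, 0.0, atol=1e-12))))

a_eigs = np.linalg.eigvalsh(A)     # reference inertia from the full spectrum
```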

12.
A Buckley matrix is an n × n complex symmetric matrix A = I_n + iC, where C is real symmetric positive definite. We prove that, for such A, the growth factor in Gaussian elimination is not greater than (1 + √17)/4 ≈ 1.28078… Copyright © 2000 John Wiley & Sons, Ltd.
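The bound can be probed numerically: run Gaussian elimination without pivoting on a random Buckley matrix and track the element growth. This is a one-off empirical check of the stated theorem, not part of its proof:

```python
import numpy as np

rng = np.random.default_rng(13)
n = 10
M = rng.standard_normal((n, n))
C = M @ M.T + n * np.eye(n)        # real symmetric positive definite
A = np.eye(n) + 1j * C             # Buckley matrix

U = A.astype(complex).copy()
base = np.abs(A).max()
max_elem = base                    # track max |entry| over all stages
for k in range(n - 1):
    factors = U[k+1:, k] / U[k, k]
    U[k+1:, k:] -= np.outer(factors, U[k, k:])
    max_elem = max(max_elem, np.abs(U).max())

growth = max_elem / base           # Gaussian-elimination growth factor
```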

13.
A sequence of least‐squares problems of the form min_y ‖G^(1/2)(A^T y − h)‖_2, where G is an n×n positive‐definite diagonal weight matrix, and A an m×n (m ≥ n) sparse matrix with some dense columns, has many applications in linear programming, electrical networks, elliptic boundary value problems, and structural analysis. We suggest low‐rank correction preconditioners for such problems, and a mixed solver (a combination of a direct solver and an iterative solver). The numerical results show that our technique for selecting the low‐rank correction matrix is very effective. Copyright © 2002 John Wiley & Sons, Ltd.

14.
For a given pair of a Hermitian matrix A and a rectangular matrix B with the same row number, we reformulate a well‐known simultaneous Hermitian‐type generalized singular value decomposition (HGSVD) with more precise structure and parameters and use it to derive some algebraic properties of the linear Hermitian matrix function A − BXB* and the Hermitian solution of the matrix equation BXB* = A, and the canonical form of a partitioned Hermitian matrix and some optimization problems on its inertia and rank. Copyright © 2012 John Wiley & Sons, Ltd.

15.
We describe a randomized Krylov‐subspace method for estimating the spectral condition number of a real matrix A or indicating that it is numerically rank deficient. The main difficulty in estimating the condition number is the estimation of the smallest singular value σ_min of A. Our method estimates this value by solving a consistent linear least squares problem with a known solution using a specific Krylov‐subspace method called LSQR. In this method, the forward error tends to concentrate in the direction of a right singular vector corresponding to σ_min. Extensive experiments show that the method is able to estimate well the condition number of a wide array of matrices. It can sometimes estimate the condition number when running dense singular value decomposition would be impractical due to the computational cost or the memory requirements. The method uses very little memory (it inherits this property from LSQR), and it works equally well on square and rectangular matrices.
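The core idea can be sketched with SciPy's LSQR: solve a consistent problem with a known solution, stop early, and use the Rayleigh quotient of the forward error as a σ_min estimate. This is an illustration of the mechanism only, not the paper's full algorithm or stopping rules:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(17)
m, n = 300, 100
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigmas = np.logspace(0, -4, n)              # known spectrum, condition number 1e4
A = U0 @ np.diag(sigmas) @ V0.T

x_true = rng.standard_normal(n)
b = A @ x_true                              # consistent system with known solution
x = lsqr(A, b, atol=0.0, btol=0.0, iter_lim=30)[0]   # deliberately stop early

# The forward error tends to align with the right singular vector of
# sigma_min, so its Rayleigh quotient estimates sigma_min.
e = x - x_true
est = np.linalg.norm(A @ e) / np.linalg.norm(e)
```

By construction the estimate always lies in [σ_min, σ_max]; the concentration of the error in the σ_min direction is what pulls it toward the lower end.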

16.
A matrix A in the semigroup N_n of non-negative n×n matrices is prime if A is not monomial and A = BC, with B, C ∈ N_n, implies that either B or C is monomial. One necessary and another sufficient condition are given for a matrix in N_n to be prime. It is proved that every prime in N_n is completely decomposable.

17.
A sub‐Stiefel matrix is a matrix that results from deleting simultaneously the last row and the last column of an orthogonal matrix. In this paper, we consider a Procrustes problem on the set of sub‐Stiefel matrices of order n. For n = 2, this problem has arisen in computer vision to solve the surface unfolding problem considered by R. Fereirra, J. Xavier and J. Costeira. An iterative algorithm for computing the solution of the sub‐Stiefel Procrustes problem for an arbitrary n is proposed, and some numerical experiments are carried out to illustrate its performance. For these purposes, we investigate the properties of sub‐Stiefel matrices. In particular, we derive two necessary and sufficient conditions for a matrix to be sub‐Stiefel. We also relate the sub‐Stiefel Procrustes problem with the Stiefel Procrustes problem and compare it with the orthogonal Procrustes problem. Copyright © 2015 John Wiley & Sons, Ltd.
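The definition is easy to instantiate: take a random orthogonal matrix and delete its last row and column. One immediate property (a submatrix of an orthogonal matrix) is that all singular values lie in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(19)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n + 1, n + 1)))   # random orthogonal
S = Q[:n, :n]                      # sub-Stiefel matrix of order n
svals = np.linalg.svd(S, compute_uv=False)
```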

18.
An n × n real matrix A = (aij) is called a bi‐symmetric matrix if A is both symmetric and per‐symmetric, that is, aij = aji and aij = a(n+1−j),(n+1−i) (i, j = 1, 2, ..., n). This paper is mainly concerned with finding the least‐squares bi‐symmetric solutions of the matrix inverse problem AX = B with a submatrix constraint, where X and B are given matrices of suitable sizes. Moreover, in the corresponding solution set, the analytical expression of the optimal approximation solution to a given matrix A* is derived. A direct method for finding the optimal approximation solution is described in detail, and three numerical examples are provided to show the validity of our algorithm. Copyright © 2007 John Wiley & Sons, Ltd.
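The bi-symmetric structure can be made concrete with a small projection: averaging A over the group {A, Aᵀ, JAJ, JAᵀJ}, with J the exchange (flip) matrix, yields the nearest bi-symmetric matrix in the Frobenius norm. This illustrates the structure only; the paper's constrained inverse problem is more involved:

```python
import numpy as np

rng = np.random.default_rng(23)
n = 6
A = rng.standard_normal((n, n))
J = np.fliplr(np.eye(n))           # exchange matrix

# Orthogonal projection onto the bi-symmetric subspace: average over
# the four symmetries (identity, transpose, per-transpose, both).
B = (A + A.T + J @ A @ J + J @ A.T @ J) / 4.0
```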

19.
We study the spectral norm of matrices W that can be factored as W = BA, where A is a random matrix with independent mean zero entries and B is a fixed matrix. Under the (4 + ε)th moment assumption on the entries of A, we show that the spectral norm of such an m × n matrix W is bounded by √m + √n, which is sharp. In other words, in regard to the spectral norm, products of random and deterministic matrices behave similarly to random matrices with independent entries. This result along with the previous work of Rudelson and the author implies that the smallest singular value of a random m × n matrix with i.i.d. mean zero entries and bounded (4 + ε)th moment is bounded below by √m − √(n−1) with high probability.

20.
In a recent paper by J.M. Varah, an upper bound for ‖A⁻¹‖ was determined, under the assumption that A is strictly diagonally dominant, and this bound was then used to obtain a lower bound for the smallest singular value of A. In this note, this upper bound for ‖A⁻¹‖ is sharpened and extended to a wider class of matrices. This bound is then used to obtain an improved lower bound for the smallest singular value of a matrix.
