Similar Articles
20 similar articles found (search time: 31 ms)
1.
Vandermonde matrices defined by pairs of points symmetric with respect to the origin are analysed. Direct methods for the determinant computation and for the linear system solution, in both the primal and the dual case, are proposed; they reduce the computational complexity of the known order-n^2 algorithms by a factor of four and two, respectively. Moreover, perturbation bounds for the error of the new LUD factorization are derived, depending on the condition number of a Vandermonde matrix of halved dimension.
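The halving effect referred to above can be illustrated with a short sketch (not the paper's direct method): when the nodes come in ± pairs, the even- and odd-degree coefficients separate, so the full 2m × 2m system reduces to two m × m Vandermonde systems in the squared nodes. The helper name solve_symmetric_vandermonde and the use of a generic dense solver for the half-size systems are illustrative assumptions.

```python
import numpy as np

def solve_symmetric_vandermonde(xs, f_plus, f_minus):
    """Solve V a = f for the Vandermonde matrix on the symmetric nodes
    +x_1,...,+x_m, -x_1,...,-x_m (x_i nonzero and distinct): splitting the data
    into even/odd parts reduces the problem to two m x m systems in x_i**2."""
    xs = np.asarray(xs, dtype=float)
    W = np.vander(xs**2, increasing=True)                    # half-size Vandermonde in x_i^2
    even = np.linalg.solve(W, (f_plus + f_minus) / 2)        # coefficients a_0, a_2, a_4, ...
    odd = np.linalg.solve(W, (f_plus - f_minus) / (2 * xs))  # coefficients a_1, a_3, a_5, ...
    a = np.empty(2 * len(xs))
    a[0::2], a[1::2] = even, odd
    return a

xs = np.array([0.3, 1.0, 2.0])
a_true = np.arange(1.0, 7.0)                                 # a degree-5 polynomial
nodes = np.concatenate([xs, -xs])
f = np.vander(nodes, increasing=True) @ a_true               # right-hand side of the full 6x6 system
print(np.allclose(solve_symmetric_vandermonde(xs, f[:3], f[3:]), a_true))  # True
```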

2.
A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the border rank of 3 × 3 matrix multiplication is improved, and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^2.7743).
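For readers unfamiliar with the term, a bilinear algorithm expresses each output entry through a fixed set of products of linear forms in the two inputs. The sketch below shows the classical Strassen scheme with 7 bilinear products for a 2 × 2 block product — not the 3 × 3 scheme or the O(n^2.7743) algorithm of this abstract — purely to illustrate the notion; recursing on blocks yields the familiar O(n^log2(7)) bound. The function name strassen_2x2 is illustrative.

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's bilinear algorithm: 7 multiplications (instead of 8) for a
    2x2 product; applied recursively to blocks it gives the O(n^{log2 7}) bound."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
print(np.allclose(strassen_2x2(A, B), A @ B))   # True
```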

3.
This article considers a family of Gram matrices of pairs of bases of a finite dimensional vector space of polynomials with respect to certain indefinite inner products. Such a family includes all the generalized confluent Vandermonde matrices relative to any polynomial basis, like the Chebyshev-Vandermonde matrices, for example. Using the biorthogonality of pairs of bases with respect to a divided difference functional, properties of matrices and functionals, as well as interpolation formulas, are obtained. I show that the computation of the inverse of a Vandermonde-like matrix is essentially equivalent to the computation of the partial fraction decompositions of a set of rational functions with a common denominator. I also explain why the various Chebyshev-Vandermonde matrices are the simplest generalizations of the classic Vandermonde matrices and describe a simple algorithm for the computation of their inverses, which requires a number of multiplications of the order of 3N^2.
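The equivalence between inverting a Vandermonde-type matrix and partial-fraction decomposition can be made concrete for the classical (non-confluent) case: column j of V^(-1) holds the coefficients of the Lagrange polynomial w(t)/((t - x_j) w'(x_j)), and the weights 1/w'(x_j) are exactly the residues in the partial-fraction expansion of 1/w(t). A minimal numpy sketch follows (the function name vandermonde_inverse is illustrative, and this is not the article's Chebyshev-Vandermonde algorithm).

```python
import numpy as np

def vandermonde_inverse(nodes):
    """Column j of the inverse of V (V[i, j] = nodes[i]**j) holds the coefficients
    of the Lagrange polynomial L_j(t) = w(t) / ((t - x_j) * w'(x_j))."""
    x = np.asarray(nodes, dtype=float)
    n = len(x)
    w = np.poly(x)                                    # w(t) = prod_k (t - x_k), highest degree first
    inv = np.zeros((n, n))
    for j in range(n):
        q, _ = np.polydiv(w, np.array([1.0, -x[j]]))  # q_j(t) = w(t) / (t - x_j)
        inv[:, j] = q[::-1] / np.polyval(q, x[j])     # divide by w'(x_j) = q_j(x_j); store low -> high
    return inv

x = np.array([0.5, -1.0, 2.0, 3.0])
V = np.vander(x, increasing=True)                     # V[i, j] = x[i]**j
print(np.allclose(vandermonde_inverse(x) @ V, np.eye(4)))   # True up to rounding
```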

4.
The confluent Cauchy and Cauchy–Vandermonde matrices are considered, which were studied earlier by various authors in different ways. In this paper, we use another approach, based on displacement structure, to deal with matrices of this kind. We show that the Cauchy and Cauchy–Vandermonde matrices satisfy matrix equations of a special type. This leads quite naturally to inversion formulas and fast algorithms for matrices of this kind.
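A quick way to see the kind of special matrix equation involved: a Cauchy matrix C with entries 1/(x_i − y_j) satisfies the Sylvester-type displacement equation D_x C − C D_y = e e^T, whose right-hand side has rank one; it is this low displacement rank that fast algorithms exploit. A small numerical check, with nodes chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.5, 5.0])                 # row nodes
y = np.array([-1.0, 0.25, 4.0, 7.0])               # column nodes, disjoint from x
C = 1.0 / (x[:, None] - y[None, :])                # Cauchy matrix C_ij = 1/(x_i - y_j)

Dx, Dy = np.diag(x), np.diag(y)
R = Dx @ C - C @ Dy                                # Sylvester-type displacement of C
print(np.allclose(R, np.ones((4, 4))))             # True: the displacement is the all-ones matrix
print(np.linalg.matrix_rank(R))                    # 1, i.e. displacement rank one
```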

5.
We give a complete solution of the matrix equation AX + BX^⋆ = 0, where A, B ∈ ℂ^(m×n) are two given matrices, X ∈ ℂ^(n×n) is an unknown matrix, and ⋆ denotes the transpose or the conjugate transpose. We provide a closed formula for the dimension of the solution space of the equation in terms of the Kronecker canonical form of the matrix pencil A + λB, and we also provide an expression for the solution X in terms of this canonical form, together with two invertible matrices leading A + λB to the canonical form by strict equivalence.
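As a sanity check on what "dimension of the solution space" means: once the ⋆-operation is fixed, the equation is linear in X, so for the transpose case one can vectorize the map X ↦ AX + BX^T over a matrix basis and read off its nullity. This brute-force computation is only illustrative and is unrelated to the paper's closed formula via the Kronecker canonical form of A + λB; the helper name solution_space_dim is an assumption.

```python
import numpy as np

def solution_space_dim(A, B):
    """Nullity of the linear map X -> A X + B X^T (the transpose case of the
    equation A X + B X^* = 0), obtained by stacking its action on a matrix basis."""
    m, n = A.shape
    cols = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            cols.append((A @ E + B @ E.T).ravel())
    M = np.column_stack(cols)                       # (m n) x (n^2) matrix of the map
    return n * n - np.linalg.matrix_rank(M)

A = np.random.rand(3, 3)
print(solution_space_dim(A, np.zeros((3, 3))))      # A X = 0 with A invertible: dimension 0
print(solution_space_dim(A, -A))                    # A (X - X^T) = 0 forces X symmetric: dimension 6
```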

6.
Due to their remarkable applications in many branches of applied mathematics such as combinatorics, coding theory, and cryptography, Vandermonde matrices have received a great deal of attention. Maximum distance separable (MDS) codes give rise to MDS matrices, which not only have applications in coding theory but are also of great importance in the design of block ciphers. Lacan and Fimes introduced a method for the construction of an MDS matrix from two Vandermonde matrices over a finite field. In this paper, we first suggest a method that produces an involutory MDS matrix from Vandermonde matrices. Then we propose another method for the construction of 2^n × 2^n Hadamard MDS matrices over the finite field GF(2^q). In addition to introducing this method, we present a direct method for the inversion of a special class of 2^n × 2^n Vandermonde matrices.

7.
The use of the fast Fourier transform (FFT) accelerates the Lanczos tridiagonalisation method for Hankel and Toeplitz matrices by reducing the complexity of the matrix–vector multiplication. In multiprecision arithmetic, the FFT has overheads that make it less competitive than alternative methods when the accuracy exceeds 10000 decimal places. We studied two alternative Hankel matrix–vector multiplication methods, based on multiprecision number decomposition and on recursive Karatsuba-like multiplication, respectively. The first method was uncompetitive because of huge precision losses, while the second turned out to be five to fourteen times faster than the FFT in the range of matrix sizes (up to n = 8192) and working precisions (b = 32768 bits) we were interested in. We successfully applied our approach to eigenvalue calculations in studies of the spectra of matrices that arise in research on the Riemann zeta function. The recursive matrix–vector multiplication significantly outperformed both the FFT and the traditional multiplication in these studies.
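The key observation behind both the FFT and the Karatsuba route is that a Hankel matrix–vector product is a slice of a polynomial product: with H_ij = h_(i+j), the entries of Hv are the middle coefficients of h(x)·rev(v)(x). The sketch below pairs that reduction with a plain recursive Karatsuba multiplication; it is a floating-point toy (the cutoff, helper names, and test sizes are arbitrary choices), not the paper's multiprecision implementation.

```python
import numpy as np

def karatsuba(p, q):
    """Polynomial product (coefficients lowest degree first) with the recursive
    Karatsuba scheme: three half-size products per split instead of four."""
    n = max(len(p), len(q))
    if n <= 32:                                          # small case: plain convolution
        return np.convolve(p, q)
    p = np.pad(np.asarray(p, dtype=float), (0, n - len(p)))
    q = np.pad(np.asarray(q, dtype=float), (0, n - len(q)))
    m = n // 2
    p0, p1, q0, q1 = p[:m], p[m:], q[:m], q[m:]
    low, high = karatsuba(p0, q0), karatsuba(p1, q1)
    mid = karatsuba(np.pad(p0, (0, len(p1) - m)) + p1,
                    np.pad(q0, (0, len(q1) - m)) + q1)
    mid = mid - np.pad(low, (0, len(mid) - len(low))) - high
    out = np.zeros(2 * n - 1)
    out[:len(low)] += low
    out[m:m + len(mid)] += mid
    out[2 * m:2 * m + len(high)] += high
    return out

def hankel_matvec(h, v):
    """(Hv)_i = sum_j h[i+j] v[j]: read the result off the middle coefficients
    of h(x) * rev(v)(x), so any fast polynomial multiplication gives a fast matvec."""
    n = len(v)                                           # h holds the 2n-1 anti-diagonal values
    prod = karatsuba(h, v[::-1])
    return prod[n - 1:2 * n - 1]

n = 500
h, v = np.random.rand(2 * n - 1), np.random.rand(n)
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
print(np.allclose(hankel_matvec(h, v), H @ v))           # True
```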

8.
Journal of Complexity, 1998, 14(2): 257–299
First we study asymptotically fast algorithms for rectangular matrix multiplication. We begin with new algorithms for multiplication of an n × n matrix by an n × n^2 matrix in arithmetic time O(n^ω), ω = 3.333953…, which is less by 0.041 than the previous record 3.375477…. Then we present fast multiplication algorithms for matrix pairs of arbitrary dimensions, estimate the asymptotic running time as a function of the dimensions, and optimize the exponents of the complexity estimates. For a large class of input matrix pairs, we improve the known exponents. Finally we show three applications of our results: (a) we decrease from 2.851 to 2.837 the known exponent of the work bounds for fast deterministic (NC) parallel evaluation of the determinant, the characteristic polynomial, and the inverse of an n × n matrix, as well as for the solution to a nonsingular linear system of n equations; (b) we asymptotically accelerate the known sequential algorithms for univariate polynomial composition mod x^n, yielding the complexity bound O(n^1.667) versus the old record of O(n^1.688), and for univariate polynomial factorization over a finite field; and (c) we improve slightly the known complexity estimates for computing basic solutions to the linear programming problem with m constraints and n variables.

9.
Two issues concerning the construction of square matrices with prescribed singular values and eigenvalues are addressed. First, a necessary and sufficient condition is determined for the existence of an n × n complex matrix with n given nonnegative numbers as its singular values and m (m ≤ n) given complex numbers among its eigenvalues. This extends the classical result of Weyl and Horn treating the case m = n. Second, an algorithm is given to generate a triangular matrix with prescribed singular values and eigenvalues. Unlike earlier algorithms, the eigenvalues can be arranged in any prescribed order on the diagonal. A slight modification of this algorithm allows one to construct a real matrix with specified real and complex-conjugate eigenvalues and specified singular values. The construction is carried out by multiplication by diagonal unitary matrices, permutation matrices, and rotation matrices. It is numerically stable and may be useful in developing test software for numerical linear algebra packages.
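The Weyl–Horn condition mentioned for the case m = n is easy to state and to test: with the eigenvalue moduli and the singular values both sorted in decreasing order, every leading partial product of the |λ_i| must be bounded by the corresponding product of the σ_i, with equality for the full product (both equal |det A|). A small check of the necessary direction on a random matrix; the tolerances and the name weyl_horn_ok are ad hoc.

```python
import numpy as np

def weyl_horn_ok(eigs, sigmas, tol=1e-9):
    """Check prod_{i<=k} |lambda_i| <= prod_{i<=k} sigma_i for k < n and
    equality at k = n (eigenvalues and singular values sorted decreasingly)."""
    lam = np.sort(np.abs(np.asarray(eigs)))[::-1]
    sig = np.sort(np.asarray(sigmas, dtype=float))[::-1]
    lcum, scum = np.cumprod(lam), np.cumprod(sig)
    return bool(np.all(lcum[:-1] <= scum[:-1] + tol)
                and abs(lcum[-1] - scum[-1]) <= tol * max(1.0, scum[-1]))

A = np.random.rand(5, 5)
print(weyl_horn_ok(np.linalg.eigvals(A), np.linalg.svd(A, compute_uv=False)))  # True
```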

10.
We study asymptotically fast multiplication algorithms for matrix pairs of arbitrary dimensions, and optimize the exponents of their arithmetic complexity bounds. For a large class of input matrix pairs, we improve the known exponents. We also show some applications of our results: (i) we decrease from O(n^2 + n^(1+o(1)) log q) to O(n^1.9998 + n^(1+o(1)) log q) the known arithmetic complexity bound for univariate polynomial factorization of degree n over a finite field with q elements; (ii) we decrease from 2.837 to 2.7945 the known exponent of the work and arithmetic processor bounds for fast deterministic (NC) parallel evaluation of the determinant, the characteristic polynomial, and the inverse of an n × n matrix, as well as for the solution to a nonsingular linear system of n equations; (iii) we decrease from O(m^1.575 n) to O(m^1.5356 n) the known bound for computing basic solutions to a linear programming problem with m constraints and n variables.

11.
In this paper the authors develop two algorithms to solve systems of linear equations where the coefficients form a confluent Vandermonde matrix of Hermite type, or its transpose. These algorithms reduce the given system to upper triangular form by means of elementary matrix transformations. Recursive formulas to obtain the upper triangular form in an economical way are derived. Applications and numerical results are included. Algol-60 programs are appended.
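For comparison, the best-known O(n^2) progressive scheme for the non-confluent case is the Björck–Pereyra algorithm, which solves the interpolation-type Vandermonde system Σ_j a_j x_i^j = f_i by a divided-difference sweep followed by a Newton-to-monomial conversion (whether this is called the primal or the dual system depends on the convention). It is shown below only as background; it is not the confluent Hermite-type algorithm of this paper.

```python
import numpy as np

def bjorck_pereyra(x, f):
    """Solve sum_j a_j * x_i**j = f_i in O(n^2): stage 1 builds Newton divided
    differences in place, stage 2 converts the Newton form to monomial coefficients."""
    x = np.asarray(x, dtype=float)
    a = np.array(f, dtype=float)
    n = len(x)
    for k in range(n - 1):                    # divided differences
        for i in range(n - 1, k, -1):
            a[i] = (a[i] - a[i - 1]) / (x[i] - x[i - k - 1])
    for k in range(n - 2, -1, -1):            # Newton form -> monomial coefficients
        for i in range(k, n - 1):
            a[i] -= x[k] * a[i + 1]
    return a

x = np.array([0.0, 1.0, 2.0, 3.5, -1.2])
a_true = np.array([2.0, -1.0, 0.5, 3.0, -0.25])
f = np.vander(x, increasing=True) @ a_true    # f_i = p(x_i)
print(np.allclose(bjorck_pereyra(x, f), a_true))   # True
```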

12.
Summary. It was recently shown that the inverse of a strictly ultrametric matrix is a strictly diagonally dominant Stieltjes matrix. On the other hand, since it is well known that the inverse of a strictly diagonally dominant Stieltjes matrix is a real symmetric matrix with nonnegative entries, it is natural to ask, conversely, whether every strictly diagonally dominant Stieltjes matrix has a strictly ultrametric inverse. Examples show, however, that the converse is not true in general, i.e., there are strictly diagonally dominant Stieltjes matrices in ℝ^(n×n) (for every n ≥ 3) whose inverses are not strictly ultrametric. The question then naturally arises whether one can determine which strictly diagonally dominant Stieltjes matrices in ℝ^(n×n) (n ≥ 3) have inverses that are strictly ultrametric. Here, we develop an algorithm, based on graph theory, which determines whether a given strictly diagonally dominant Stieltjes matrix A has a strictly ultrametric inverse; the algorithm is applied to A itself and requires no computation of inverses. Moreover, if the given strictly diagonally dominant Stieltjes matrix has a strictly ultrametric inverse, our algorithm uniquely determines this inverse as a special sum of rank-one matrices.

13.
The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, this algorithm requires O(n^3) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003) a reduction from a symmetric matrix into a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed in which each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix are computed by applying inverse iteration to this matrix, using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decomposition of semiseparable matrices. Combined with the reduction of a symmetric matrix to semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but larger lower-order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method.
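The tridiagonal route the paper uses as its baseline can be sketched in a few lines: reduce the symmetric matrix to tridiagonal form by orthogonal similarity, then iterate QR steps, which preserve the tridiagonal structure (this is what makes each step cheap). The unshifted iteration and the artificially well-separated spectrum below only keep the demo short; practical codes use shifts and deflation, and this is of course not the paper's semiseparable algorithm.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
Qr, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Qr @ np.diag([1.0, 2.0, 4.0, 8.0, 16.0, 32.0]) @ Qr.T    # known, well-separated spectrum
A = (A + A.T) / 2

T, Q = hessenberg(A, calc_q=True)        # for symmetric A the Hessenberg form is tridiagonal
print(np.allclose(Q @ T @ Q.T, A))       # orthogonal similarity preserved

for _ in range(100):                     # unshifted QR iteration: T <- R Q stays tridiagonal
    q, r = np.linalg.qr(T)
    T = r @ q
print(np.allclose(np.sort(np.diag(T)), [1, 2, 4, 8, 16, 32], atol=1e-8))  # eigenvalues recovered
```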

14.
Numerical algorithms with complexity O(n^2) operations are proposed for solving matrix Nehari and Nehari–Takagi problems with n interpolation points. The algorithms are based on explicit formulas for the solutions and on theorems about the cascade decomposition of rational matrix functions given in state-space form. The method also suggests fast algorithms for LDU factorizations of structured matrices. The numerical behavior of the designed algorithms is studied on a wide set of examples.

15.
Summary. Strassen [2] has described a method for the multiplication of (N, N)-matrices which needs O(N^2.8…) basic operations. Here algorithms are given for QR-decomposition and unitary transformation of arbitrary complex matrices to upper Hessenberg form, and for unitary triangularization of Hermitian matrices, which, by use of a fast matrix multiplication with time bound O(N^2.8…), have nearly the same speed.

16.
The problems under study are connected with the choice of a vector subset from a given finite set of vectors in the Euclidean space ℝ^k. The sum norm and the averaged square of the sum norm are considered as the target functions (to be maximized). Optimal combinatorial algorithms with time complexity O(k^2 n^(2k)) are developed for these problems. Thus, the polynomial solvability of these problems is proved for fixed k.

17.
We present a new approach to the decoding algorithm for Gabidulin codes. In the same way as efficient erasure decoding for generalized Reed–Solomon codes uses the structure of the inverse of Vandermonde matrices, we show that erasure decoding of a Gabidulin code (t erasures meaning that t components of a code vector are erased) can be seen as the computation of three matrices and an affine permutation, instead of computing an inverse from the generator or parity-check matrix. This significantly reduces the decoding complexity compared to other algorithms. For t erasures with t ≤ r, where r = n − k, the erasure decoding algorithm for the Gab_{n,k}(g) Gabidulin code computes the t erased symbols by simple multiplication of three matrices. This requires rt + r(k − 1) Galois field multiplications, t(r − 1) + (t + r)k field additions, r^2 + r(k + 1) field negations, and t(k + 1) field inversions.

18.
We discuss a generalization of the Cohn–Umans method, a potent technique developed for studying the bilinear complexity of matrix multiplication by embedding matrices into an appropriate group algebra. We investigate how the Cohn–Umans method may be used for bilinear operations other than matrix multiplication, with algebras other than group algebras, and we relate it to Strassen's tensor rank approach, the traditional framework for investigating bilinear complexity. To demonstrate the utility of the generalized method, we apply it to find the fastest algorithms for forming structured matrix–vector products, the basic operation underlying iterative algorithms for structured matrices. The structures we study include Toeplitz, Hankel, circulant, symmetric, skew-symmetric, f-circulant, block Toeplitz–Toeplitz block, triangular Toeplitz, Toeplitz-plus-Hankel, and sparse/banded/triangular matrices. Except for the case of skew-symmetric matrices, for which we have only upper bounds, the algorithms derived using the generalized Cohn–Umans method in all other instances are the fastest possible in the sense of having minimum bilinear complexity. We also apply this framework to a few other bilinear operations, including matrix–matrix, commutator, and simultaneous matrix products, and briefly discuss the relation between the tensor nuclear norm and numerical stability.
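The circulant case gives the flavour of the group-algebra viewpoint: circulant matrices are exactly the multiplication operators of the group algebra of Z/nZ, which the discrete Fourier transform diagonalizes, so a circulant matrix–vector product reduces to one cyclic convolution with only n multiplications in the transformed domain. A minimal numpy check, with sizes and data chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.standard_normal(n)                                            # first column of C
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])  # circulant matrix
v = rng.standard_normal(n)

# The DFT diagonalizes C, so C @ v is the cyclic convolution of c and v:
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)).real
print(np.allclose(fast, C @ v))                                       # True
```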

19.
We consider the cost of estimating an error bound for the computed solution of a system of linear equations, i.e., estimating the norm of a matrix inverse. Under some technical assumptions we show that computing even a coarse error bound for the solution of a triangular system of equations costs at least as much as testing whether the product of two matrices is zero. The complexity of the latter problem is in turn conjectured to be the same as matrix multiplication, matrix inversion, etc. Since most error bounds in practical use have much lower complexity, this means they should sometimes exhibit large errors. In particular, it is shown that condition estimators that (1) perform at least one operation on each matrix entry and (2) are asymptotically faster than any zero tester, must sometimes over- or underestimate the inverse norm by a factor of at least , where n is the dimension of the input matrix, k is the bitsize, and where either or grows faster than any polynomial in n. Our results hold for the RAM model with bit complexity, as well as computations over rational and algebraic numbers, but not real or complex numbers. Our results also extend to estimating error bounds or condition numbers for other linear algebra problems such as computing eigenvectors.

20.
Summary. Algorithms are presented which compute the QR factorization of a block-Toeplitz matrix in O(n^2) block operations, where n is the block order of the matrix and a block operation is a multiplication, an inversion, or a set of Householder operations involving one or two blocks. The algorithms are in general analogous to those presented for the scalar Toeplitz case in a previous paper, but the basic operation is the Householder transform rather than the Givens transform, and the computation of the Householder coefficients and other working matrices requires special treatment. Two algorithms are presented: the first computes only R explicitly, and the second computes both Q and R.
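The building block of both algorithms is the Householder transform; for reference, here is a plain dense Householder QR (column-by-column reflections, no Toeplitz structure exploited), kept deliberately unstructured — the block-Toeplitz algorithms of the summary organize such reflections so that only O(n^2) block operations are needed. Function names are illustrative.

```python
import numpy as np

def householder_vector(x):
    """Unit vector w of the reflector H = I - 2 w w^T that maps x to a multiple
    of the first unit vector (sign chosen to avoid cancellation)."""
    w = np.array(x, dtype=float)
    s = np.sign(x[0]) if x[0] != 0 else 1.0
    w[0] += s * np.linalg.norm(x)
    return w / np.linalg.norm(w)

def householder_qr(A):
    """Dense Householder QR: returns orthogonal Q and upper-triangular R with A = Q R."""
    m, n = A.shape
    R = np.array(A, dtype=float)
    Q = np.eye(m)
    for k in range(min(m, n)):
        w = householder_vector(R[k:, k])
        R[k:, k:] -= 2.0 * np.outer(w, w @ R[k:, k:])   # apply reflector on the left
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ w, w)     # accumulate Q on the right
    return Q, np.triu(R)

A = np.random.rand(6, 4)
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(6)))   # True True
```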
