Similar Documents
20 similar documents found (search time: 31 ms)
1.
We study asymptotically fast multiplication algorithms for matrix pairs of arbitrary dimensions, and optimize the exponents of their arithmetic complexity bounds. For a large class of input matrix pairs, we improve the known exponents. We also show some applications of our results: (i) we decrease from O(n^2 + n^{1+o(1)} log q) to O(n^{1.9998} + n^{1+o(1)} log q) the known arithmetic complexity bound for the univariate polynomial factorization of degree n over a finite field with q elements; (ii) we decrease from 2.837 to 2.7945 the known exponent of the work and arithmetic processor bounds for fast deterministic (NC) parallel evaluation of the determinant, the characteristic polynomial, and the inverse of an n×n matrix, as well as for the solution to a nonsingular linear system of n equations; (iii) we decrease from O(m^{1.575} n) to O(m^{1.5356} n) the known bound for computing basic solutions to a linear programming problem with m constraints and n variables.

2.
We discuss several methods for real interval matrix multiplication. First, earlier studies of fast algorithms for interval matrix multiplication are introduced: naive interval arithmetic, interval arithmetic by midpoint-radius form by Oishi-Rump and its fast variant by Ogita-Oishi. Next, three new and fast algorithms are developed. The proposed algorithms require one, two or three matrix products, respectively. The point is that our algorithms quickly predict which terms become dominant radii in interval computations. We propose a hybrid method to predict which algorithm is suitable for optimizing performance and width of the result. Numerical examples are presented to show the efficiency of the proposed algorithms.
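As a rough illustration of the midpoint-radius representation used by the Oishi-Rump family of algorithms, the sketch below (the helper name `interval_matmul` is ours) computes a midpoint matrix product plus an over-estimated radius. It deliberately ignores directed rounding, which the real algorithms switch on to guarantee rigorous enclosures; in exact arithmetic the radius formula |mA|·rB + rA·(|mB| + rB) is a valid over-estimate.

```python
# Sketch of midpoint-radius interval matrix multiplication (helper name ours;
# real implementations control the rounding mode to get rigorous enclosures).
def interval_matmul(mA, rA, mB, rB):
    """Multiply interval matrices [mA - rA, mA + rA] and [mB - rB, mB + rB].

    Matrices are lists of lists; returns (midpoint, radius) of an enclosure.
    """
    n, k, m = len(mA), len(mB), len(mB[0])
    # Midpoint: one ordinary matrix product.
    mC = [[sum(mA[i][p] * mB[p][j] for p in range(k)) for j in range(m)]
          for i in range(n)]
    # Radius: |mA|*rB + rA*(|mB| + rB), a mathematically valid over-estimate.
    rC = [[sum(abs(mA[i][p]) * rB[p][j]
               + rA[i][p] * (abs(mB[p][j]) + rB[p][j]) for p in range(k))
           for j in range(m)] for i in range(n)]
    return mC, rC
```

For the scalar intervals [1, 3] · [2, 2] (midpoints 2 and 2, radii 1 and 0) this returns midpoint 4 and radius 2, i.e. the exact product [2, 6].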

3.
Starting from the Strassen method for rapid matrix multiplication and inversion, as well as from the recursive Cholesky factorization algorithm, we introduced a completely block-recursive algorithm for the generalized Cholesky factorization of a given symmetric, positive semi-definite matrix A ∈ R^{n×n}. We used the Strassen method for matrix inversion together with the recursive generalized Cholesky factorization method, and established an algorithm for computing generalized {2,3} and {2,4} inverses. The introduced algorithms are not harder than matrix-matrix multiplication.
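A minimal sketch of the block-recursive Cholesky idea the abstract builds on (plain Python, positive definite input assumed, no Strassen inversion; the helper name `block_cholesky` is ours): partition A into 2×2 blocks, factor the leading block, solve a triangular system for the off-diagonal block, and recurse on the Schur complement.

```python
def block_cholesky(A):
    """Block-recursive Cholesky: returns lower-triangular L with A = L L^T.

    Plain-Python sketch for a symmetric positive definite list-of-lists A.
    """
    n = len(A)
    if n == 1:
        return [[A[0][0] ** 0.5]]
    h = n // 2
    A11 = [row[:h] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]
    A22 = [row[h:] for row in A[h:]]
    L11 = block_cholesky(A11)                      # factor the leading block
    # Solve L21 @ L11^T = A21 row by row (forward substitution).
    L21 = []
    for i in range(n - h):
        row = [0.0] * h
        for j in range(h):
            s = A21[i][j] - sum(row[k] * L11[j][k] for k in range(j))
            row[j] = s / L11[j][j]
        L21.append(row)
    # Recurse on the Schur complement A22 - L21 @ L21^T.
    S = [[A22[i][j] - sum(L21[i][k] * L21[j][k] for k in range(h))
          for j in range(n - h)] for i in range(n - h)]
    L22 = block_cholesky(S)
    top = [L11[i] + [0.0] * (n - h) for i in range(h)]
    bottom = [L21[i] + L22[i] for i in range(n - h)]
    return top + bottom
```

Replacing the triangular solve by a Strassen-based inversion of L11, as in the abstract, keeps the whole factorization within matrix-multiplication cost.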

4.
In Demmel et al. (Numer. Math. 106(2), 199-224, 2007) we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of n-by-n matrices can be done by any algorithm in O(n^{ω+η}) operations for any η > 0, then it can be done stably in O(n^{ω+η}) operations for any η > 0. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition can also be done stably (in a normwise sense) in O(n^{ω+η}) operations. J. Demmel acknowledges support of NSF under grants CCF-0444486, ACI-00090127, CNS-0325873 and of DOE under grant DE-FC02-01ER25478.

5.
We present an algorithm for multiplying an N × N recursive block Toeplitz matrix by a vector with cost O(N log N). Its application to optimal surface interpolation is discussed.
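The O(N log N) cost for a (non-recursive) Toeplitz matrix-vector product comes from embedding the Toeplitz matrix in a circulant matrix, whose action is diagonal in the Fourier basis. A self-contained sketch with our own helper names (a real implementation would call an optimized FFT library):

```python
import cmath

def fft(a):
    # Radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + w, even[k] - w
    return out

def ifft(a):
    conj = fft([x.conjugate() for x in a])
    return [x.conjugate() / len(a) for x in conj]

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x.

    Embeds the matrix in a circulant of power-of-two size M >= 2N - 1 and
    multiplies in the Fourier domain: O(N log N) arithmetic operations.
    """
    n = len(x)
    M = 1
    while M < 2 * n - 1:
        M *= 2
    # First column of the embedding circulant.
    v = list(c) + [0] * (M - 2 * n + 1) + [r[d] for d in range(n - 1, 0, -1)]
    fx = fft(list(x) + [0] * (M - n))
    fv = fft(v)
    y = ifft([a * b for a, b in zip(fv, fx)])
    return [val.real for val in y[:n]]
```

The recursive block Toeplitz case of the abstract applies the same idea level by level to the blocks.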

6.
Motivated by the symmetric version of matrix multiplication, we study the plethysm S^k(sl_n) of the adjoint representation sl_n of the Lie group SL_n. In particular, we describe the decomposition of this representation into irreducible components for k = 3, and find highest-weight vectors for all irreducible components. Relations to fast matrix multiplication, in particular the Coppersmith-Winograd tensor, are presented.

7.
This paper is concerned with accurate matrix multiplication in floating-point arithmetic. Recently, an accurate summation algorithm was developed by Rump et al. (SIAM J Sci Comput 31(1):189–224, 2008). The key technique of their method is a fast error-free splitting of floating-point numbers. Using this technique, we first develop an error-free transformation of a product of two floating-point matrices into a sum of floating-point matrices. Next, we partially apply this error-free transformation and develop an algorithm which aims to output an accurate approximation of the matrix product. In addition, an a priori error estimate is given. It is a characteristic of the proposed method that in terms of computation as well as in terms of memory consumption, the dominant part of our algorithm is constituted by ordinary floating-point matrix multiplications. The routine for matrix multiplication is highly optimized using BLAS, so that our algorithms show a good computational performance. Although our algorithms require a significant amount of working memory, they are significantly faster than ‘gemmx’ in XBLAS when all sizes of matrices are large enough to realize nearly peak performance of ‘gemm’.  Numerical examples illustrate the efficiency of the proposed method.
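The scalar primitive behind such error-free transformations can be illustrated with Dekker's two-product, which uses Veltkamp splitting so that the rounding error of one floating-point product is recovered exactly; the matrix algorithm of the abstract applies an analogous splitting blockwise so that both the products and the sums can be formed as ordinary BLAS calls. A sketch assuming IEEE double precision (function names are ours, not the paper's):

```python
def split(a):
    # Veltkamp splitting for IEEE doubles: a == hi + lo, each half <= 26 bits.
    factor = 2.0 ** 27 + 1.0
    c = factor * a
    hi = c - (c - a)
    return hi, a - hi

def two_product(a, b):
    # Dekker's algorithm: returns (p, e) with p = fl(a*b) and p + e == a*b
    # exactly (barring overflow/underflow).
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e
```

Applied entrywise this turns a matrix product A·B into an exact sum of floating-point matrices, which is the error-free transformation the abstract refers to.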

8.
The purpose of this paper is to present an algorithm for matrix multiplication based on a formula discovered by Pan [7]. For matrices of order up to 10 000, the nearly optimum tuning of the algorithm results in a rather clear non‐recursive one‐ or two‐level structure with the operation count comparable to that of the Strassen algorithm [9]. The algorithm takes less workspace and has better numerical stability as compared to the Strassen algorithm, especially in Winograd's modification [2]. Moreover, its clearer and more flexible structure is potentially more suitable for efficient implementation on modern supercomputers. Copyright © 1999 John Wiley & Sons, Ltd.

9.
Routines callable from Fortran and C are described which implement matrix-matrix multiplication and transposition for a variety of sparse matrix formats. Conversion routines between various formats are provided. The algorithms and routines described here were developed while both authors were visiting the Center for Applied Mathematics, Department of Mathematics, Purdue University. This research was supported in part by the Office of Naval Research, Grant No. N00014-895-1440.

10.
Let A = (a_{ij}) ∈ R^{n×n} be a row diagonally dominant matrix, i.e.,

|a_{ii}| ≥ Σ_{j≠i} |a_{ij}|,  i = 1, …, n,

with a_{ii} ≠ 0 for all i. We show that no pivoting is necessary when Gaussian elimination is applied to A. Moreover, the growth factor for A does not exceed 2. The same results are true with row diagonal dominance being replaced by column diagonal dominance.
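The no-pivoting claim is easy to experiment with: the sketch below (helper name ours) runs Gaussian elimination without row interchanges and measures the growth factor, i.e. the largest intermediate entry divided by the largest initial entry; the classical bound for diagonally dominant matrices is a growth factor of at most 2.

```python
def ge_growth(A):
    """Gaussian elimination without pivoting; returns (U, growth factor)."""
    n = len(A)
    U = [row[:] for row in A]
    biggest = initial = max(abs(x) for row in A for x in row)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # no row interchanges
            U[i][k] = 0.0
            for j in range(k + 1, n):
                U[i][j] -= m * U[k][j]
                biggest = max(biggest, abs(U[i][j]))
    return U, biggest / initial

# Row diagonally dominant example: |a_ii| >= sum of the other |a_ij| per row.
A = [[4.0, 1.0, 2.0], [1.0, 5.0, 3.0], [2.0, 2.0, 6.0]]
U, growth = ge_growth(A)
```

For this A the growth factor stays well below the bound of 2.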


11.
It is shown that the approximate bilinear complexity of multiplying matrices of the order 2 × 2 by a matrix of the order 2 × 6 does not exceed 19. An approximate bilinear algorithm of complexity 19 is presented for this task.

12.
Let K be a subfield of R. The theory of R viewed as an ordered K-vector space and expanded by a predicate for Z is decidable if and only if K is a real quadratic field.

13.
It is shown that the multiplicative complexity of multiplication of a 3 × 2 matrix by a 2 × 2 matrix is equal to 11 for an arbitrary field of constants.

14.
A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the border rank of multiplying 3 × 3 matrices is improved, and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^{2.7743}).
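The classical bilinear algorithm of this kind is Strassen's, which multiplies 2 × 2 blocks with 7 products instead of 8 and gives O(n^{2.81}); the O(n^{2.7743}) method of the abstract is a different bilinear algorithm, but the recursive structure is analogous. A sketch for power-of-two sizes:

```python
def strassen(A, B):
    """Strassen's recursive matrix multiplication for power-of-two sizes."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    # Strassen's 7 bilinear products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])
```

Any bilinear algorithm multiplying fixed-size blocks with fewer products than the naive count yields, by the same recursion, an exponent below 3.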

15.
In this paper, we study the nearest stable matrix pair problem: given a square matrix pair (E, A), minimize the Frobenius norm of (ΔE, ΔA) such that (E + ΔE, A + ΔA) is a stable matrix pair. We propose a reformulation of the problem with a simpler feasible set by introducing dissipative Hamiltonian matrix pairs: a matrix pair (E, A) is dissipative Hamiltonian if A = (J − R)Q with skew-symmetric J, positive semidefinite R, and an invertible Q such that Q^T E is positive semidefinite. This reformulation has a convex feasible domain onto which it is easy to project. This allows us to employ a fast gradient method to obtain a nearby stable approximation of a given matrix pair.

16.
17.
Let Γ be a closed rectifiable curve and Ω a region in the complex plane. Suppose that for each λ ∈ Ω, R(λ) represents multiplication by an n×n matrix of rational functions and F(λ) is a finite-rank operator, both acting on the Hilbert space L^2_n(Γ). Sufficient conditions are given for the integer-valued function dim ker(R(λ) + F(λ)) to be continuous at all but finitely many points in Ω. This result is applied to singular integral operators. This work was partially supported by the National Science Foundation.

18.
In this paper we present a procedure, based on data dependencies and space–time transformations of index space, to design a unidirectional linear systolic array (ULSA) for computing a matrix–vector product. The obtained array is optimal with respect to the number of processing elements (PEs) for a given problem size. The execution time of the array is the minimal possible for that number of PEs. To achieve this, we first derive an appropriate systolic algorithm for ULSA synthesis. In order to design a ULSA with the optimal number of PEs we then perform an accommodation of the index space to the projection direction vector. The performance of the synthesized array is discussed and compared with the bidirectional linear SA. Finally, we demonstrate how this array can be used to compute the correlation of two given sequences.
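To make the unidirectional data flow concrete, here is a small step-by-step simulation of such an array for y = Ax (our own naming and scheduling, not the paper's synthesis procedure): PE j holds x[j], row i of A is fed in skewed so that its term reaches PE j at step i + j, and the partial sum travels in one direction only, leaving the last PE after m + n − 1 steps.

```python
def ulsa_matvec(A, x):
    """Simulate a unidirectional linear systolic array computing A @ x.

    PE j stores x[j]; the partial sum for row i passes PE j at step i + j.
    """
    m, n = len(A), len(x)
    sums = {}  # (row, pe) -> partial sum leaving that PE
    for t in range(m + n - 1):          # total execution: m + n - 1 steps
        for j in range(n):              # each PE does one multiply-add per step
            i = t - j                   # row whose term PE j handles at step t
            if 0 <= i < m:
                incoming = sums[(i, j - 1)] if j > 0 else 0
                sums[(i, j)] = incoming + A[i][j] * x[j]
    return [sums[(i, n - 1)] for i in range(m)]
```

The dictionary records what crosses each inter-PE link; only n PEs are used, matching the PE-optimality the abstract aims for.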

19.
Any associative bilinear multiplication on the set of n-by-n matrices over some field of characteristic not two, that makes the same vectors orthogonal and has the same trace as ordinary matrix multiplication, must be ordinary matrix multiplication or its opposite.

20.
