Similar Documents (20 results)
1.
In this paper we show that if the step (displacement) vectors generated by the preconditioned conjugate gradient algorithm are scaled appropriately, they may be used to solve equations whose coefficient matrices are the preconditioning matrices of the original equations. The dual algorithms thus obtained are shown to be equivalent to the reverse algorithms of Hegedüs and are subsequently generalised to their block forms. It is finally shown how these may be used to construct dual (or reverse) algorithms for solving equations involving nonsymmetric matrices using only short recurrences, and reasons are suggested why some of these algorithms may be more numerically stable than their primal counterparts. This revised version was published online in June 2006 with corrections to the Cover Date.
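For background, here is a minimal preconditioned conjugate gradient sketch in Python (NumPy assumed) that also records the step vectors alpha_k * p_k, the displacement vectors the abstract refers to; the rescaling that turns them into a dual solver is the paper's contribution and is not reproduced here:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for A x = b (A symmetric
    positive definite).  M_inv applies the inverse preconditioner to a
    vector.  Also returns the step (displacement) vectors alpha_k p_k,
    the quantities the abstract says can be rescaled to solve systems
    whose coefficient matrix is the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    steps = []
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        steps.append(alpha * p)      # displacement vector of this step
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, steps
```

Note that, since x starts at zero, the solution is exactly the sum of the recorded displacement vectors.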

2.
The problem of solving linear equations, or equivalently of inverting matrices, arises in many fields. Efficient recursive algorithms for finding the inverses of Toeplitz or displacement-type matrices have been known for some time. By introducing a way of characterizing matrices in terms of their “distance” from being Toeplitz, a natural extension of these algorithms is obtained. Several new inversion formulas for the representation of the inverse of non-Toeplitz matrices are also presented.

3.
We consider matrices represented as the sum of a diagonal matrix and a semiseparable one. Such matrices belong to a class of structured matrices that arises in numerous applications. Fast O(N) algorithms for their inversion were developed earlier under additional restrictions which are a source of instability. Our aim is to eliminate these restrictions and to develop reliable and stable numerical algorithms. In this paper we obtain such algorithms with the only requirement that the matrix is invertible and its determinant is not close to zero. The case of semiseparable matrices of order one was considered in detail in an earlier paper of the authors.

4.
Fast algorithms, associated with the names of Schur and Levinson, are known for the triangular factorization of symmetric, positive definite Toeplitz matrices and their inverses. In this paper we show that these algorithms can be derived from simple arguments involving causality, symmetry, and energy conservation in discrete lossless transmission lines. The results not only provide a nice interpretation of the classical Schur and Levinson algorithms and a certain Toeplitz inversion formula of Gohberg and Semencul, but they also show immediately that the same fast algorithms apply not only to Toeplitz matrices but to all matrices with so-called displacement inertia (1,1).  The results have been helpful in suggesting new digital filter structures and in the study of nonstationary second-order processes.
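For reference, a textbook Levinson recursion solving a symmetric positive definite Toeplitz system in O(n^2) flops; this is a sketch of the classical algorithm the abstract reinterprets, not the transmission-line derivation itself:

```python
import numpy as np

def levinson_solve(t, b):
    """Solve T x = b, where T is the symmetric positive definite
    Toeplitz matrix with first column t, by the Levinson recursion.
    Costs O(n^2) flops instead of O(n^3) for a general solver."""
    t = np.asarray(t, dtype=float)
    b = np.asarray(b, dtype=float) / t[0]   # normalize so diag(T) = 1
    r = t[1:] / t[0]
    n = len(b)
    x = np.array([b[0]])                    # solves the 1x1 system
    if n == 1:
        return x
    y = np.array([-r[0]])                   # solves T_k y = -r[:k]
    beta, alpha = 1.0, -r[0]
    for k in range(1, n):
        beta *= 1.0 - alpha * alpha
        mu = (b[k] - r[:k] @ x[::-1]) / beta
        x = np.append(x + mu * y[::-1], mu)
        if k < n - 1:
            alpha = -(r[k] + r[:k] @ y[::-1]) / beta
            y = np.append(y + alpha * y[::-1], alpha)
    return x
```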

5.
In the first part of this paper, we investigate the reduced forms of circulant matrices and quasi-skew circulant matrices. By using their properties we present two efficient algorithms to compute the square roots of circulant matrices and quasi-skew circulant matrices, respectively. These methods are faster than the traditional algorithm, which is based on the Schur decomposition. In the second part, we further consider circulant H-matrices with positive diagonal entries and develop two algorithms for computing their principal square roots. These two algorithms have the common advantage that they require only matrix-matrix multiplications in their iterative sequences, an operation which can be performed very efficiently on modern high-performance computers.
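As a hedged illustration of why circulant structure speeds up square-root computation: a circulant matrix is diagonalized by the DFT, so its principal square root can be read off from the FFT of its first column in O(n log n). This is the standard diagonalization route, not necessarily the reduced-form algorithm of the paper:

```python
import numpy as np

def circulant(c):
    """Build the circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def circulant_sqrt(c):
    """First column of the principal square root of the circulant
    matrix with first column c.  The eigenvalues of a circulant are
    the DFT of its first column, so taking principal square roots in
    the Fourier domain gives the principal matrix square root.
    Assumes no eigenvalue lies on the closed negative real axis."""
    lam = np.fft.fft(c)                      # eigenvalues of C
    return np.fft.ifft(np.sqrt(lam))         # first column of sqrt(C)
```

For a real symmetric circulant with positive eigenvalues, the returned first column is real up to rounding.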

6.
For matrices whose elements are scalar linear difference operators, algorithms for checking invertibility (unimodularity) and constructing an inverse matrix (if it exists) are proposed. Their complexity is lower than that of other available algorithms. The differences of these algorithms from their differential analogues are discussed.

7.
Some algorithms are suggested for constructing pseudoinverse matrices and for solving systems with rectangular matrices whose entries depend on a parameter in polynomial and rational ways. The cases of one- and two-parameter matrices are considered. The construction of pseudoinverse matrices is realized on the basis of rank factorization algorithms. In the case of matrices with polynomial occurrence of parameters, these algorithms are the ΔW-1 and ΔW-2 algorithms for one- and two-parameter matrices, respectively. In the case of matrices with rational occurrence of parameters, these algorithms are the irreducible factorization algorithms. This paper is a continuation of the author's studies of the solution of systems with one-parameter matrices and an extension of the results to the case of two-parameter matrices with polynomial and rational entries. Bibliography: 12 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 219, 1994, pp. 176–185. This work was supported by the Russian Foundation of Fundamental Research (grant 94-01-00919). Translated by V. N. Kublanovskaya.

8.
We describe several approximation algorithms for the joint spectral radius and compare their performance on a large number of test cases. The joint spectral radius of a set Σ of $n \times n$ matrices is the maximal asymptotic growth rate that can be obtained by forming products of matrices from Σ. This quantity is NP-hard to compute and appears in many areas, including system theory, combinatorics, and information theory. A dozen algorithms have been proposed in the last decade for approximating the joint spectral radius, but little is known about their practical efficiency. We overview these approximation algorithms and classify them into three categories: approximations obtained by examining long products, by building a specific matrix norm, and by using optimization-based techniques. All these algorithms are now implemented in a (freely available) MATLAB toolbox that was released in 2011. This toolbox allows us to present a comparison of the approximations obtained on a large number of test cases as well as on sets of matrices taken from the literature. Finally, in our comparison we include a method, available in the toolbox, that combines different existing algorithms and that is the toolbox’s default method. This default method was able to find optimal products for all test cases of dimension less than four.
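The "long products" category can be sketched as a brute-force lower bound: for any product P of length k of matrices from Σ, ρ(P)^(1/k) is a lower bound on the joint spectral radius. A minimal exponential-cost sketch (the toolbox's algorithms are far more refined):

```python
from itertools import product

import numpy as np

def jsr_lower_bound(mats, depth):
    """Lower bound on the joint spectral radius of the set `mats`:
    the best value of rho(P)**(1/k) over all products P of length
    k <= depth.  Exhaustive enumeration, so only practical for small
    sets and small depths."""
    best = 0.0
    for k in range(1, depth + 1):
        for combo in product(mats, repeat=k):
            P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
            rho = max(abs(np.linalg.eigvals(P)))   # spectral radius
            best = max(best, rho ** (1.0 / k))
    return best
```

For the classical pair of binary matrices [[1,1],[0,1]] and [[1,0],[1,1]], depth 2 already finds the optimal product and returns the golden ratio.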

9.
In this paper, a viable graph-based bandwidth reduction algorithm for sparse symmetric matrices arising from standard L-structured and Z-structured graphs is presented. Bandwidth results for these matrices are obtained using this algorithm and compared with those of existing algorithms. The algorithm is easily applied to these matrices, and the bandwidths obtained are as good as those produced by the existing algorithms.
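For context, one widely used existing bandwidth-reduction algorithm is reverse Cuthill-McKee; a compact sketch follows (the adjacency-list input format is an assumption of this illustration, not the paper's formulation):

```python
from collections import deque

import numpy as np

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as
    a dict {node: set(neighbors)}.  BFS from a minimum-degree node,
    visiting neighbors in order of increasing degree, then reverse."""
    order, visited = [], set()
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:
            continue                       # new connected component
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(w)
                queue.append(w)
    return order[::-1]

def bandwidth(A):
    """Half-bandwidth of a symmetric matrix: max |i - j| over nonzeros."""
    i, j = np.nonzero(A)
    return int(np.max(np.abs(i - j)))
```

Permuting a badly labeled path graph with the resulting ordering brings its bandwidth down to 1.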

10.
We introduce a new set of algorithms to compute the Jacobi matrices associated with invariant measures of infinite iterated function systems, composed of one-dimensional, homogeneous affine maps. We demonstrate their utility in the study of theoretical problems, like the conjectured almost periodicity of such Jacobi matrices, the singularity of the measures, and the logarithmic capacity of their support. Since our technique is based on a reversible transformation between pairs of Jacobi matrices, it can also be applied to solve an inverse/approximation problem. The proposed algorithms are tested in significant, highly sensitive cases: they perform in a stable fashion, and can reliably compute Jacobi matrices of large order.

11.
Matrices resulting from wavelet transforms have a special “shadow” block structure, that is, their small upper left blocks contain their lower-frequency information. Numerical solutions of linear systems with such matrices require special care. We propose shadow block iterative methods for solving linear systems of this type. Convergence analysis for these algorithms is presented. We apply the algorithms to three applications: linear systems arising in the classical regularization with a single parameter for the signal de-blurring problem, multilevel regularization with multiple parameters for the same problem, and the Galerkin method of solving differential equations. We also demonstrate the efficiency of these algorithms by numerical examples in these applications.

12.
In this paper, two accelerated divide‐and‐conquer (ADC) algorithms are proposed for the symmetric tridiagonal eigenvalue problem, which cost O(N²r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of eigenvalues. Both of these algorithms use hierarchically semiseparable (HSS) matrices to approximate some intermediate eigenvector matrices, which are Cauchy‐like matrices and are off‐diagonally low‐rank. The difference of these two versions lies in using different HSS construction algorithms, one (denoted by ADC1) uses a structured low‐rank approximation method and the other (ADC2) uses a randomized HSS construction algorithm. For the ADC2 algorithm, a method is proposed to estimate the off‐diagonal rank. Numerous experiments have been carried out to show their stability and efficiency. These algorithms are implemented in parallel in a shared memory environment, and some parallel implementation details are included. Comparing the ADCs with highly optimized multithreaded libraries such as Intel MKL, we find that ADCs could be more than six times faster for some large matrices with few deflations. Copyright © 2016 John Wiley & Sons, Ltd.
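The divide step of classical divide-and-conquer (which the ADC variants accelerate at the conquer stage) tears a symmetric tridiagonal matrix into two half-size tridiagonals plus a rank-one update; a sketch of just that step:

```python
import numpy as np

def split_tridiagonal(d, e, m):
    """One divide step of the divide-and-conquer eigensolver.
    T(d, e) is symmetric tridiagonal with diagonal d and off-diagonal
    e.  Returns (d1, e1), (d2, e2), b, v such that
        T = diag(T1, T2) + b * outer(v, v),
    where b = e[m] and v = e_m + e_{m+1} (unit basis vectors)."""
    n = len(d)
    b = e[m]
    d1, e1 = d[:m + 1].copy(), e[:m].copy()
    d2, e2 = d[m + 1:].copy(), e[m + 1:].copy()
    d1[-1] -= b                  # absorb the coupling into the blocks
    d2[0] -= b
    v = np.zeros(n)
    v[m] = v[m + 1] = 1.0
    return (d1, e1), (d2, e2), b, v
```

The conquer stage then merges the spectra of T1 and T2 through the secular equation of the rank-one update; that part (and the HSS acceleration) is not sketched here.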

13.
Semiseparable matrices and many other rank‐structured matrices have been widely used in developing new fast matrix algorithms. In this paper, we generalize the hierarchically semiseparable (HSS) matrix representations and propose some fast algorithms for HSS matrices. We represent HSS matrices in terms of general binary HSS trees and use simplified postordering notation for HSS forms. Fast HSS algorithms including new HSS structure generation and HSS form Cholesky factorization are developed. Moreover, we provide a new linear complexity explicit ULV factorization algorithm for symmetric positive definite HSS matrices with a low‐rank property. The corresponding factors can be used to solve the HSS systems also in linear complexity. Numerical examples demonstrate the efficiency of the algorithms. All these algorithms have nice data locality. They are useful in developing fast‐structured numerical methods for large discretized PDEs (such as elliptic equations), integral equations, eigenvalue problems, etc. Some applications are shown. Copyright © 2009 John Wiley & Sons, Ltd.

14.
An important class of hp discretizations of second-order elliptic equations, widely used in applications, consists of discretizations based on spectral finite elements. The development of fast domain decomposition algorithms for them was restrained by the absence of fast solvers for the basic components of the method, i.e., for local interior problems on decomposition subdomains and their faces. Recently, the authors established that such solvers can be designed using special factorized preconditioners. In turn, factorized preconditioners are constructed using an important analogy between the stiffness matrices of spectral and hierarchical basis hp-elements (the coordinate functions of the latter are defined as tensor products of integrated Legendre polynomials). Owing to this analogy, fast solvers similar to those for matrices of hierarchical elements can be developed for matrices of spectral elements. Based on these facts and previous results on the preconditioning of other components, fast domain decomposition algorithms for spectral discretizations are obtained.

15.
The inversion of polynomial and rational matrices is considered. For regular matrices, three algorithms for computing the inverse matrix in a factored form are proposed. For singular matrices, algorithms for constructing pseudoinverse matrices are considered. The algorithms for inverting rational matrices are based on the minimal factorization, which reduces the problem to the inversion of polynomial matrices. A class of special polynomial matrices is considered whose inverses are also polynomial matrices. The inversion algorithms are applied to the solution of systems with polynomial and rational matrices. Bibliography: 3 titles. Translated by V. N. Kublanovskaya. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 202, 1992, pp. 97–109.

16.
A class of sign‐symmetric P‐matrices including all nonsingular totally positive matrices and their inverses as well as tridiagonal nonsingular H‐matrices is presented and analyzed. These matrices present a bidiagonal decomposition that can be used to obtain algorithms to compute with high relative accuracy their singular values, eigenvalues, inverses, or their LDU factorization. Copyright © 2012 John Wiley & Sons, Ltd.

17.
A general proposal is presented for fast algorithms for multilevel structured matrices. It is based on an investigation of their tensor properties and develops an idea recently introduced by Kamm and Nagy for the block Toeplitz case. We show that the tensor properties of multilevel Toeplitz matrices are related to the separation of variables in the corresponding symbol, present analytical tools to study the latter, describe truncation algorithms preserving the structure, and report numerical results confirming the advantages of the proposal.

18.
Two operations are introduced for complex matrices. In terms of these two operations an infinite series expression is obtained for the unique solution of the Kalman-Yakubovich-conjugate matrix equation. Based on the obtained explicit solution, some iterative algorithms are given for solving this class of matrix equations. Convergence properties of the proposed algorithms are also analyzed by using some properties of the proposed operations for complex matrices.

19.
This paper deals with new variable-metric algorithms for nonsmooth optimization problems, the so-called adaptive algorithms. The essence of these algorithms is that there are two simultaneously working gradient algorithms: the first is in the main space and the second is in the space of the matrices that modify the main variables. The convergence of these algorithms is proved for different cases. The results of numerical experiments are also given.

20.
This paper discusses several properties of the square roots of generalized centrosymmetric matrices and, by exploiting the special structure of these matrices, derives a fast algorithm for computing the square root of a generalized centrosymmetric matrix. Compared with the traditional algorithm, our structured algorithm achieves considerable savings in both memory and computational cost.
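One concrete structural saving of this kind can be sketched as follows, under the assumption that the matrix is symmetric positive definite and centrosymmetric of even order (the paper's own algorithm for generalized centrosymmetric matrices is not reproduced): an orthogonal similarity block-diagonalizes the matrix, so two half-size square roots replace one full-size one.

```python
import numpy as np

def _sym_sqrt(X):
    """Principal square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.sqrt(w)) @ V.T

def centrosymmetric_sqrt(A):
    """Principal square root of a symmetric positive definite
    centrosymmetric matrix of even order n = 2m.  With J the m x m
    exchange matrix, the orthogonal K = [[I, I], [J, -J]] / sqrt(2)
    satisfies K.T @ A @ K = diag(A11 + A12 J, A11 - A12 J), so only
    two half-size square roots are needed."""
    m = A.shape[0] // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    A11, A12 = A[:m, :m], A[:m, m:]
    Xp = A11 + A12 @ J           # half-size symmetric blocks
    Xm = A11 - A12 @ J
    K = np.vstack([np.hstack([I, I]), np.hstack([J, -J])]) / np.sqrt(2)
    Z = np.zeros((m, m))
    S_block = np.block([[_sym_sqrt(Xp), Z], [Z, _sym_sqrt(Xm)]])
    return K @ S_block @ K.T     # square root in the original basis
```

Since the square root is a limit of polynomials in A, it inherits the centrosymmetric structure.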

