Similar Articles (20 results)
1.
Summary: A class of preconditioning methods depending on a relaxation parameter is presented for the solution of large linear systems of equations Ax=b, where A is a symmetric positive definite matrix. The methods are based on an incomplete factorization of the matrix A and include both pointwise and blockwise factorization. We study the dependence of the rate of convergence of the preconditioned conjugate gradient method on the distribution of eigenvalues of C^{-1}A, where C is the preconditioning matrix. We also show graphic representations of the eigenvalues and present numerical tests of the methods.
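As a concrete illustration of the preconditioned conjugate gradient loop such methods plug into, here is a minimal NumPy sketch. It uses a simple diagonal (Jacobi) preconditioner as a stand-in for the incomplete factorizations discussed above; the names (pcg, apply_Cinv) and the test problem are illustrative, not from the paper.

```python
import numpy as np

def pcg(A, b, apply_Cinv, x0=None, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients for SPD A; apply_Cinv(r) applies C^{-1} to r."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = apply_Cinv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Cinv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: SPD system with a diagonal preconditioner C = diag(A),
# standing in for the incomplete (point/block) factorizations of the abstract.
rng = np.random.default_rng(0)
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))
```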

2.
This article develops the preconditioning technique as a method to address the accuracy issue caused by ill-conditioning. Given a preconditioner M for an ill-conditioned linear system Ax=b, we show that, if the inverse of the preconditioner, M^{-1}, can be applied to vectors accurately, then the linear system can be solved accurately. A stability concept called inverse-equivalent accuracy is introduced to describe the high accuracy that is achieved, and an error analysis is presented. Numerical examples illustrate the error analysis and the performance of the methods.

3.
Randomized Kaczmarz solver for noisy linear systems
The Kaczmarz method is an iterative algorithm for solving systems of linear equations Ax=b. Theoretical convergence rates for this algorithm were largely unknown until recently, when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the randomized Kaczmarz method converges with expected exponential rate, independent of the number of equations in the system. Here we analyze the case where the system Ax=b is corrupted by noise, so we consider the system Ax ≈ b + r, where r is an arbitrary error vector. We prove that in this noisy version, the randomized method reaches an error threshold dependent on the matrix A with the same rate as in the error-free case. We provide examples showing our results are sharp in the general context.
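A short NumPy sketch of the randomized Kaczmarz iteration applied to a noisy overdetermined system, with rows sampled with probability proportional to their squared norms; the function name and the synthetic data are illustrative only.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20000, seed=0):
    """Randomized Kaczmarz: project the iterate onto the hyperplane of a row
    chosen with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum("ij,ij->i", A, A)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

# Consistent system plus a noise vector r: the iterates approach the true
# solution up to an error threshold governed by the noise, as in the abstract.
rng = np.random.default_rng(1)
m, n = 500, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
r = 1e-3 * rng.standard_normal(m)      # arbitrary error vector
b = A @ x_true + r
x = randomized_kaczmarz(A, b)
print(np.linalg.norm(x - x_true))
```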

4.
In this paper we present a parallel algorithm for computing the solution of the general restricted linear equations Ax=b, x ∈ T, where T is a subspace of R^n and b ∈ AT. By this algorithm the solution x = A_{T,S}^{(2)} b is obtained in n(log_2 m + log_2(n−s+1) + 7) + log_2 m + 1 steps with P = mn processors when m ≥ 2(n−1), and with P = 2n(n−1) processors otherwise.

5.
Lanczos' method for solving the system of linear equations Ax=b consists in constructing a sequence of vectors (x_k) such that r_k = b − A x_k = P_k(A) r_0, where r_0 = b − A x_0. P_k is an orthogonal polynomial which is computed recursively. The conjugate gradient squared algorithm (CGS) consists in taking r_k = P_k^2(A) r_0. In the recurrence relation for P_k, the coefficients are given as ratios of scalar products. When a scalar product in a denominator is zero, then a breakdown occurs in the algorithm. When such a scalar product is close to zero, then rounding errors can seriously affect the algorithm, a situation known as near-breakdown. In this paper it is shown how to avoid near-breakdown in the CGS algorithm in order to obtain a more stable method.
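For reference, a plain (breakdown-prone) CGS iteration in NumPy, which makes visible the scalar-product denominators the abstract refers to; it does not include the near-breakdown remedies developed in the paper, and all identifiers are illustrative.

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, maxiter=500):
    """Conjugate Gradient Squared (Sonneveld), plain version without
    breakdown-avoiding modifications."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    r_shadow = r.copy()                      # fixed "shadow" residual
    u = r.copy()
    p = r.copy()
    rho = r_shadow @ r
    for _ in range(maxiter):
        v = A @ p
        alpha = rho / (r_shadow @ v)         # breakdown if this denominator is ~0
        q = u - alpha * v
        x += alpha * (u + q)
        r -= alpha * (A @ (u + q))
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_new = r_shadow @ r
        beta = rho_new / rho                 # breakdown if rho is ~0
        u = r + beta * q
        p = u + beta * (q + beta * p)
        rho = rho_new
    return x

rng = np.random.default_rng(2)
n = 100
A = np.eye(n) + 0.03 * rng.standard_normal((n, n))   # mildly nonsymmetric test matrix
b = rng.standard_normal(n)
print(np.linalg.norm(A @ cgs(A, b) - b))
```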

6.
A class of iterative methods is presented for the solution of systems of linear equations Ax=b, where A is a general m × n matrix. The methods are based on a development as a continued fraction of the inner product (r, r), where r = b − Ax is the residual. The methods as defined are quite general and include some well-known methods such as the minimal residual conjugate gradient method with one step.
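One of the simple special cases mentioned, a one-step minimal-residual iteration, can be sketched in a few lines of NumPy; the helper name and test problem are illustrative.

```python
import numpy as np

def minimal_residual_iteration(A, b, x0=None, tol=1e-10, maxiter=2000):
    """One-step minimal residual: at each step pick the scalar alpha minimizing
    ||b - A(x + alpha*r)||_2 along the current residual direction r."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(maxiter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ar = A @ r
        alpha = (r @ Ar) / (Ar @ Ar)
        x += alpha * r
    return x

rng = np.random.default_rng(3)
n = 80
A = np.eye(n) + 0.03 * rng.standard_normal((n, n))   # symmetric part stays positive definite
b = rng.standard_normal(n)
print(np.linalg.norm(A @ minimal_residual_iteration(A, b) - b))
```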

7.
For the unique solution of a special consistent restricted linear system Ax=b, x ∈ M, we derive two different determinantal forms, which both reduce to Cramer's classical rule if A is nonsingular. The representation for the minimum Euclidean-norm solution of Ax=b given recently by Ben-Israel [2] is reobtained in our first approach as a special case. A determinantal formula for any {1, 2}-generalized inverse is also given.
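As a reminder of the classical rule these determinantal forms reduce to, here is Cramer's rule for a nonsingular system in NumPy; this is purely didactic, and the restricted and generalized-inverse representations from the paper are not implemented here.

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule for nonsingular A: x_i = det(A_i) / det(A), where A_i is A
    with its i-th column replaced by b. O(n * n^3) work -- didactic only."""
    n = A.shape[0]
    detA = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b), np.linalg.solve(A, b))   # both give the same solution
```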

8.
The expectation maximization (EM) algorithm is an iterative procedure used to determine maximum likelihood estimators in situations of incomplete data. In the case of independent Poisson variables it converges to a solution of a problem of the form min Σ_i [⟨a_i, x⟩ − b_i log⟨a_i, x⟩] such that x ≥ 0. Thus, it can be used to solve systems of the form Ax = b, x ≥ 0 (with A stochastic and b positive). It converges to a feasible solution if it exists and to an approximate one otherwise (the one that minimizes d(b, Ax), where d is the Kullback–Leibler information divergence). We study the convergence of the multiplicatively relaxed version proposed by Tanaka for use in positron emission tomography. We prove global convergence in the underrelaxed and unrelaxed cases. In the overrelaxed case we present a local convergence theorem together with two partial global results: the sequence generated by the algorithm is bounded and, if it converges, its limit point is a solution of the aforementioned problem.
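A minimal NumPy sketch of the multiplicative EM update for Ax = b, x ≥ 0 with a column-stochastic A. The relaxation used here (raising the update factor to a power omega) is a common choice and is only assumed to correspond to the relaxed scheme analyzed in the paper; omega = 1 is the plain, unrelaxed EM step.

```python
import numpy as np

def em_poisson(A, b, iters=1000, omega=1.0):
    """Multiplicative EM updates for Ax = b, x >= 0, assuming A is column-stochastic
    and b > 0. omega = 1 is plain EM; omega != 1 is an assumed multiplicative relaxation."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        ratio = b / (A @ x)
        x *= (A.T @ ratio) ** omega
    return x

rng = np.random.default_rng(4)
m, n = 30, 20
A = rng.random((m, n))
A /= A.sum(axis=0)                  # make A column-stochastic
x_true = rng.random(n) + 0.1
b = A @ x_true                      # consistent, positive right-hand side
x = em_poisson(A, b)
print(np.max(np.abs(A @ x - b)))
```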

9.
Integral matrices A for which the system Ax=b will have an integral basic solution are considered. Results for these matrices are presented which parallel results concerning other matrices for which the system has integral solutions.

10.
A decomposition method, used in least-weight plastic design, is extended to solve problems with nonlinearity arising from variable structure geometry. The problem considered is that of finding vectors x_1, x_2, and q that minimize l · max{|x_1|, |x_2|}, subject to Ax_1 = b_1 and Ax_2 = b_2, where both the vector l and the matrix A are nonlinear functions of q.

11.
For solving a singular linear system Ax=b by GMRES, it is shown in the literature that if A is range-symmetric, then GMRES converges safely to a solution. In this paper we consider preconditioned GMRES for solving a singular linear system; we construct preconditioners by so-called proper splittings, which ensure that the coefficient matrix of the preconditioned system is range-symmetric.

12.
We consider methods for regularising the least-squares solution of the linear system Ax=b. In particular, we propose iterative methods for solving large problems in which a trust-region bound ‖x‖ ≤ Δ is imposed on the size of the solution, and in which the least value of linear combinations of ‖Ax−b‖_2^q and a regularisation term ‖x‖_2^p for various p and q = 1, 2 is sought. In each case, one or more "secular" equations are derived, and fast Newton-like solution procedures are suggested. The resulting algorithms are available as part of the GALAHAD optimization library. This work was partially supported by EPSRC grants EP/E053351/1 and EP/F005369/1.
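As a simple illustration of one such secular equation, the sketch below solves the trust-region-constrained least-squares problem by finding λ with ‖x(λ)‖_2 = Δ, where x(λ) = (AᵀA + λI)⁻¹Aᵀb. It uses an SVD and bisection rather than the fast Newton-like procedures of the paper, and the function name is illustrative.

```python
import numpy as np

def trust_region_ls(A, b, Delta, tol=1e-12):
    """Solve min ||Ax - b||_2 subject to ||x||_2 <= Delta via the secular equation
    ||x(lam)||_2 = Delta, using an SVD of A and bisection on lam."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b

    def x_of(lam):
        return Vt.T @ (s * beta / (s**2 + lam))

    x0 = x_of(0.0)
    if np.linalg.norm(x0) <= Delta:          # unconstrained minimizer already feasible
        return x0
    lo, hi = 0.0, 1.0
    while np.linalg.norm(x_of(hi)) > Delta:  # bracket the root: ||x(lam)|| decreases in lam
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(x_of(mid)) > Delta:
            lo = mid
        else:
            hi = mid
    return x_of(hi)

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
x = trust_region_ls(A, b, Delta=0.5)
print(np.linalg.norm(x))                     # ~0.5, on the trust-region boundary
```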

13.
The following theorem is proved: given square matrices A, D of the same size, D nonnegative, then either the equation Ax + B|x| = b has a unique solution for each B with |B| ≤ D and for each b, or the equation Ax + B_0|x| = 0 has a nontrivial solution for some matrix B_0 of a very special form, |B_0| ≤ D; the two alternatives exclude each other. Some consequences of this result are drawn. In particular, we define λ to be an absolute eigenvalue of A if |Ax| = λ|x| for some x ≠ 0, and we prove that each square real matrix has an absolute eigenvalue.

14.
The linear equation Ax=b, with A an n×n matrix, is considered, and it is shown that the controllable subspace of (A, b) can intersect the solution set of this equation in at most one point. Some additional properties of the controllable subspace related to the solution set are presented, and the role of controllability in establishing nonsingularity of A is exhibited.

15.
Jun-Feng Yin, Ken Hayami, Zhong-Zhi Bai. PAMM, 2007, 7(1): 2020151–2020152
We consider preconditioned Krylov subspace iteration methods, e.g., CG, LSQR and GMRES, for the solution of large sparse least-squares problems min ‖Ax − b‖_2, with A ∈ R^{m×n}, based on the Krylov subspaces K_k(BA, Br) and K_k(AB, r), respectively, where B ∈ R^{n×m} is the preconditioning matrix. More concretely, we propose and implement a class of incomplete QR factorization preconditioners based on Givens rotations and analyze in detail the efficiency and robustness of the correspondingly preconditioned Krylov subspace iteration methods. A number of numerical experiments are used to further examine their numerical behaviour. It is shown that for both overdetermined and underdetermined least-squares problems, the preconditioned GMRES methods are the best for large, sparse and ill-conditioned matrices in terms of both CPU time and iteration steps. Also, comparisons with the diagonal scaling and the RIF preconditioners are given to show the superiority of the newly-proposed GMRES-type methods.

16.
Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax=b, where A is an m × n sparse matrix. The methods do not require the explicit formation of A^T A or AA^T and therefore are advantageous to use when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = A^T y, AA^T y = b, and (ii) A^T A x = A^T b. First it is shown how the SOR and SSOR methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR method by Chebyshev semi-iteration and the conjugate gradient method is discussed. In particular it is shown that the SSOR-CG method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A, respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction.
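The paper's SSOR-CG scheme works with sweeps over the rows and columns of A; as a simpler stand-in that shares the key property of never forming AᵀA, here is a CGLS sketch in NumPy (illustrative, not the paper's algorithm).

```python
import numpy as np

def cgls(A, b, iters=200):
    """CGLS: conjugate gradients applied to A^T A x = A^T b using only products
    with A and A^T (A^T A is never formed). With x0 = 0 it converges to the
    minimum-norm least-squares (pseudoinverse) solution, also for rank-deficient A."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < 1e-28:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((60, 25)) @ rng.standard_normal((25, 40))   # rank-deficient 60x40
b = rng.standard_normal(60)
print(np.linalg.norm(cgls(A, b) - np.linalg.pinv(A) @ b))
```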

17.
In this paper, our attention is concentrated on the GMRES method for the solution of the system (I − T)x = b of linear algebraic equations with a nonsymmetric matrix. We perform m pre-iterations y_{l+1} = T y_l + b before starting GMRES and take y_m as the initial approximation in GMRES. We derive an upper estimate for the norm of the error vector in dependence on the m-th powers of the eigenvalues of the matrix T. Further, we study under what eigenvalue layout this upper estimate is the best one. The estimate shows, and numerical experiments verify, that it is advisable to perform pre-iterations before starting GMRES, as they require fewer arithmetic operations than GMRES. Towards the end of the paper we present a numerical experiment for a system obtained by the finite difference approximation of convection-diffusion equations.
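A small sketch of this strategy using SciPy's GMRES: run m Richardson pre-iterations y_{l+1} = T y_l + b and pass y_m to GMRES as the starting guess. The test matrix and the choice m = 10 are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(7)
n = 200
T = rng.standard_normal((n, n))
T *= 0.5 / np.abs(np.linalg.eigvals(T)).max()   # scale so the spectral radius of T is 0.5
A = np.eye(n) - T                                # system (I - T) x = b
b = rng.standard_normal(n)

# m Richardson pre-iterations y_{l+1} = T y_l + b, started from y_0 = 0
m = 10
y = np.zeros(n)
for _ in range(m):
    y = T @ y + b

# hand y_m to GMRES as the initial approximation
x, info = gmres(A, b, x0=y)
print(info, np.linalg.norm(A @ x - b))
```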

18.
邓永坤, 王海军, 陈飞. 数学杂志, 2014, 34(6): 1125-1133
This paper studies the solution of the generalized absolute value equation Ax − |Bx − c| = b. A smooth NCP function is used to reformulate the generalized absolute value equation as an equivalent smooth system of equations, and global and superlinear convergence of the resulting algorithm is established. Numerical experiments are reported that confirm the theoretical analysis and the effectiveness of the algorithm.
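The specific smoothing NCP function used in the paper is not given in the abstract; the sketch below uses the common smoothing |t| ≈ sqrt(t² + μ²) together with a plain Newton iteration and a decreasing μ, purely as an assumed, illustrative stand-in.

```python
import numpy as np

def smoothing_newton_gave(A, B, c, b, mu0=1.0, iters=40):
    """Illustrative smoothing method for the generalized absolute value equation
    A x - |B x - c| = b: replace |t| by sqrt(t^2 + mu^2) (a standard smoothing,
    not necessarily the paper's NCP function) and apply Newton's method while
    driving mu -> 0."""
    x = np.zeros(A.shape[1])
    mu = mu0
    for _ in range(iters):
        t = B @ x - c
        phi = np.sqrt(t**2 + mu**2)            # smooth surrogate for |t|
        F = A @ x - phi - b
        J = A - (t / phi)[:, None] * B         # Jacobian of the smoothed map
        x = x - np.linalg.solve(J, F)
        mu = max(0.5 * mu, 1e-12)
    return x

rng = np.random.default_rng(8)
n = 10
A = 10 * np.eye(n) + rng.standard_normal((n, n))   # A dominates B, so the problem is well posed
B = 0.3 * rng.standard_normal((n, n))
c = rng.standard_normal(n)
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(B @ x_true - c)
x = smoothing_newton_gave(A, B, c, b)
print(np.linalg.norm(A @ x - np.abs(B @ x - c) - b))
```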

19.
We consider the solution of linear systems of equations Ax=b, with A a symmetric positive-definite matrix in R^{n×n}, through Richardson-type iterations or, equivalently, the minimization of convex quadratic functions (1/2)(Ax, x) − (b, x) with a gradient algorithm. The use of step-sizes asymptotically distributed with the arcsine distribution on the spectrum of A then yields an asymptotic rate of convergence after k < n iterations, k → ∞, that coincides with that of the conjugate-gradient algorithm in the worst case. However, the spectral bounds m and M are generally unknown and thus need to be estimated to allow the construction of simple and cost-effective gradient algorithms with fast convergence. It is the purpose of this paper to analyse the properties of estimators of m and M based on moments of probability measures ν_k defined on the spectrum of A and generated by the algorithm on its way towards the optimal solution. A precise analysis of the behavior of the rate of convergence of the algorithm is also given. Two situations are considered: (i) the sequence of step-sizes corresponds to i.i.d. random variables, (ii) they are generated through a dynamical system (fractional parts of the golden ratio) producing a low-discrepancy sequence. In the first case, properties of random walks can be used to prove the convergence of simple spectral bound estimators based on the first moment of ν_k. The second option requires a more careful choice of spectral bound estimators but is shown to produce much less fluctuation in the rate of convergence of the algorithm.
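A sketch of the underlying gradient iteration with i.i.d. arcsine-distributed inverse step-sizes, assuming the spectral bounds m and M are already known (estimating them from the iterates is exactly what the paper addresses); all names and the test problem are illustrative.

```python
import numpy as np

def arcsine_gradient(A, b, m, M, iters=1000, seed=0):
    """Richardson/gradient iteration x_{k+1} = x_k - gamma_k (A x_k - b) with
    1/gamma_k drawn i.i.d. from the arcsine distribution on the interval [m, M]."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    for _ in range(iters):
        u = rng.random()
        lam = m + (M - m) * np.sin(0.5 * np.pi * u) ** 2   # arcsine-distributed point in [m, M]
        x -= (1.0 / lam) * (A @ x - b)
    return x

rng = np.random.default_rng(9)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0, 50.0, n)
A = (Q * eigs) @ Q.T                     # SPD with known spectral bounds m=1, M=50
b = rng.standard_normal(n)
x = arcsine_gradient(A, b, m=1.0, M=50.0)
print(np.linalg.norm(A @ x - b))
```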

20.
A full-rank under-determined linear system of equations Ax = b has in general infinitely many possible solutions. In recent years there is a growing interest in the sparsest solution of this equation, i.e., the one with the fewest non-zero entries, measured by ‖x‖_0. Such solutions find applications in signal and image processing, where the topic is typically referred to as "sparse representation". Considering the columns of A as atoms of a dictionary, it is assumed that a given signal b is a linear composition of few such atoms. Recent work established that if the desired solution x is sparse enough, its uniqueness is guaranteed. Also, pursuit algorithms, approximate solvers for the above problem, are guaranteed to succeed in finding this solution. Armed with these recent results, the problem can be reversed and formed as an implied matrix factorization problem: given a set of vectors {b_i}, known to emerge from such sparse constructions, A x_i = b_i, with sufficiently sparse representations x_i, we seek the matrix A. In this paper we present both theoretical and algorithmic studies of this problem. We establish the uniqueness of the dictionary A, depending on the quantity and nature of the set {b_i} and the sparsity of {x_i}. We also describe a recently developed algorithm, the K-SVD, that practically finds the matrix A, in a manner similar to the K-Means algorithm. Finally, we demonstrate this algorithm on several stylized applications in image processing.
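A compact, didactic K-SVD loop in NumPy, with greedy OMP for the sparse-coding stage and rank-one SVD updates for the atoms; this is only a sketch of the algorithm's structure on synthetic data, not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms of D to approximate y."""
    support, r = [], y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x[support] = coef
    return x

def ksvd(B, n_atoms, k, n_iter=15, seed=0):
    """Didactic K-SVD: alternate OMP sparse coding with rank-1 (SVD) atom updates."""
    rng = np.random.default_rng(seed)
    d, N = B.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, B[:, i], k) for i in range(N)])   # sparse coding stage
        for j in range(n_atoms):                                       # dictionary update stage
            users = np.nonzero(X[j])[0]
            if users.size == 0:
                continue
            E = B[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            X[j, users] = S[0] * Vt[0]
    return D, X

# Synthetic test: signals built from a random ground-truth dictionary with k-sparse codes.
rng = np.random.default_rng(10)
d, n_atoms, N, k = 20, 30, 400, 3
D_true = rng.standard_normal((d, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((n_atoms, N))
for i in range(N):
    X_true[rng.choice(n_atoms, k, replace=False), i] = rng.standard_normal(k)
B = D_true @ X_true
D, X = ksvd(B, n_atoms, k)
print(np.linalg.norm(B - D @ X) / np.linalg.norm(B))   # relative reconstruction error
```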
