Similar Documents
20 similar documents found.
1.
For solving large-scale linear least-squares problems by iteration methods, we introduce an effective probability criterion for selecting the working columns from the coefficient matrix and construct a greedy randomized coordinate descent method. It is proved that this method converges to the unique solution of the linear least-squares problem when its coefficient matrix is of full rank, with the number of rows being no less than the number of columns. Numerical results show that the greedy randomized coordinate descent method is more efficient than the randomized coordinate descent method.
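As a rough illustration of the coordinate-descent idea described in this abstract, the sketch below (Python/NumPy) runs a greedy randomized coordinate descent on min ||Ax − b||²; the threshold-based greedy set and the squared-residual sampling weights are assumptions, not necessarily the paper's exact probability criterion.

```python
import numpy as np

def greedy_randomized_cd(A, b, iters=5000, theta=0.5, seed=0):
    """Sketch of greedy randomized coordinate descent for min ||Ax - b||_2^2.

    The column-selection rule (threshold on the normal-equation residual,
    then sampling proportionally to it) is a plausible variant, not
    necessarily the probability criterion of the cited paper."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    col_norms2 = np.sum(A * A, axis=0)            # ||a_j||_2^2 for every column
    for _ in range(iters):
        s = A.T @ (b - A @ x)                     # normal-equation residual A^T (b - A x)
        weights = s**2 / col_norms2               # per-column gain of a coordinate step
        if weights.max() < 1e-28:                 # (near-)solution reached
            break
        mask = weights >= theta * weights.max()   # greedy set: columns with large gain
        probs = np.where(mask, weights, 0.0)
        probs /= probs.sum()
        j = rng.choice(n, p=probs)                # randomized pick within the greedy set
        x[j] += s[j] / col_norms2[j]              # exact minimization along coordinate j
    return x

# tiny full-rank overdetermined example
A = np.random.default_rng(1).standard_normal((50, 10))
b = A @ np.ones(10)
print(np.linalg.norm(greedy_randomized_cd(A, b) - np.ones(10)))
```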

2.
We propose a novel iterative algorithm for solving a large sparse linear system. The method is based on the EM algorithm. If the system has a unique solution, the algorithm guarantees convergence with a geometric rate. Otherwise, convergence to a minimal Kullback–Leibler divergence point is guaranteed. The algorithm is easy to code and competitive with other iterative algorithms.

3.
For solving large sparse systems of linear equations, we construct a paradigm of two-step matrix splitting iteration methods and analyze its convergence properties for the nonsingular and the positive-definite matrix classes. This two-step matrix splitting iteration paradigm adopts only one single splitting of the coefficient matrix, together with several arbitrary iteration parameters. Hence, it can be constructed easily in actual applications, and can also recover a number of representatives of the existing two-step matrix splitting iteration methods. This result provides a systematic treatment of the two-step matrix splitting iteration methods, establishes rigorous theory for their asymptotic convergence, and enriches the algorithmic family of linear iteration solvers for the iterative solution of large sparse linear systems.
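A minimal sketch of a two-step iteration built from one single splitting A = M − N with two free parameters follows; taking M as the diagonal of A and the particular parameter values are assumptions for illustration only, not the paradigm analyzed in the paper.

```python
import numpy as np

def two_step_splitting(A, b, alpha=1.0, beta=0.5, iters=200, x0=None):
    """Two half-steps per sweep, both reusing the single splitting A = M - N;
    here M is the diagonal part of A purely for illustration."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    M = np.diag(np.diag(A))                              # one single splitting A = M - N
    for _ in range(iters):
        x = x + alpha * np.linalg.solve(M, b - A @ x)    # first half-step, parameter alpha
        x = x + beta * np.linalg.solve(M, b - A @ x)     # second half-step, parameter beta
    return x

# diagonally dominant test system
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(two_step_splitting(A, b))
```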

4.
A modification of certain well-known methods of the conjugate direction type is proposed and examined. The modified methods are more stable with respect to the accumulation of round-off errors. Moreover, these methods are applicable for solving ill-conditioned systems of linear algebraic equations that, in particular, arise as approximations of ill-posed problems. Numerical results illustrating the advantages of the proposed modification are presented.

5.
We consider linear systems of equations and solution approximations derived by projection on a low-dimensional subspace. We propose stochastic iterative algorithms, based on simulation, which converge to the approximate solution and are suitable for very large-dimensional problems. The algorithms are extensions of recent approximate dynamic programming methods, known as temporal difference methods, which solve a projected form of Bellman’s equation by using simulation-based approximations to this equation, or by using a projected value iteration method.

6.
The use of modifications of certain well-known methods of the conjugate direction type for solving systems of linear algebraic equations with rectangular matrices is examined. The modified methods are shown to be superior to the original versions with respect to the round-off accumulation; the advantage is especially large for ill-conditioned matrices. Examples are given of the efficient use of the modified methods for solving certain fairly large ill-conditioned problems.

7.
Based on the separable property of the linear and nonlinear terms and on the Hermitian and skew-Hermitian splitting of the coefficient matrix, we present the Picard-HSS and the nonlinear HSS-like iteration methods for solving a class of large-scale systems of weakly nonlinear equations. The advantage of these methods over the Newton and the Newton-HSS iteration methods is that they do not require explicit construction and accurate computation of the Jacobian matrix, and only need to solve linear sub-systems with constant coefficient matrices. Hence, computational workload and computer memory may be saved in actual implementations. Under suitable conditions, we establish local convergence theorems for both the Picard-HSS and the nonlinear HSS-like iteration methods. Numerical implementations show that both the Picard-HSS and the nonlinear HSS-like iteration methods are feasible, effective, and robust nonlinear solvers for this class of large-scale systems of weakly nonlinear equations.
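A minimal sketch of the Picard-HSS idea for Ax = G(x): an outer Picard step freezes the nonlinear term, and each resulting linear system is solved inexactly by a few HSS sweeps. The shift α, the fixed inner iteration count, and the toy mapping G below are assumptions.

```python
import numpy as np

def picard_hss(A, G, x0, alpha=1.0, outer=50, inner=10):
    """Outer Picard iteration for A x = G(x); every linear solve A y = G(x_k)
    is carried out inexactly by `inner` HSS sweeps based on the splitting
    A = H + S with H = (A + A^H)/2 and S = (A - A^H)/2."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    M1 = alpha * I + H            # fixed, constant coefficient matrices: no Jacobian needed
    M2 = alpha * I + S
    x = x0.copy()
    for _ in range(outer):
        rhs = G(x)                # Picard step: freeze the nonlinear term
        y = x.copy()
        for _ in range(inner):    # inexact HSS inner iteration for A y = rhs
            y = np.linalg.solve(M1, (alpha * I - S) @ y + rhs)
            y = np.linalg.solve(M2, (alpha * I - H) @ y + rhs)
        x = y
    return x

# toy weakly nonlinear system: A x = 0.1 sin(x) + c
A = np.array([[3.0, -1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
x = picard_hss(A, lambda v: 0.1 * np.sin(v) + c, x0=np.zeros(2))
print(x, np.linalg.norm(A @ x - 0.1 * np.sin(x) - c))
```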

8.
Second-degree normalized implicit conjugate gradient methods for the numerical solution of self-adjoint elliptic partial differential equations are developed. A proposal for the selection of certain values of the iteration parameters ?_i, γ_i involved in solving two- and three-dimensional elliptic boundary-value problems, leading to substantial savings in computational work, is presented. Experimental results for model problems are given.

9.
The convergence of the parallel matrix multisplitting relaxation methods presented by Wang (Linear Algebra and Its Applications 154/156 (1991) 473-486) is further investigated. The investigation shows that these relaxation methods in fact have considerably larger convergence domains.

10.
Parallel iterative methods are powerful for solving large systems of linear equations (LEs). Existing parallel computing research focuses mainly on sparse systems or systems with other particular structure, and most of it is based on parallel implementations of classical relaxation methods such as the Gauss-Seidel, SOR, and AOR methods, which can be carried out efficiently on multiprocessor systems. In this paper, we propose a novel parallel splitting operator method in which the coefficient matrix is divided into two or three parts. We then convert the original problem (LEs) into a monotone (linear) variational inequality problem (VI) with separable structure. Finally, an inexact parallel splitting augmented Lagrangian method is proposed to solve the variational inequality problem. To avoid dealing with matrix inverse operators, we introduce proper inexact terms in the subproblems such that the complexity of each iteration of the proposed method is O(n²). In addition, the proposed method does not require any special structure of the system of LEs under consideration. Convergence of the proposed methods, for two and three separable operators respectively, is proved. Numerical computations are provided to show the applicability and robustness of the proposed methods.

11.
We consider solving the l_1-norm problem for systems of linear equations. After reformulating it as a split feasibility problem and as a convex feasibility problem, respectively, we design several relaxed projection algorithms, and then apply the proposed methods to signal processing problems.
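A hedged sketch of one way such a projection method can look: a CQ-type iteration for the split feasibility problem with C an l1-ball and Q = {b}, using the standard Euclidean projection onto the l1-ball. The paper's relaxed projection algorithms replace exact projections with projections onto half-space relaxations, which is not reproduced here; the radius and the test data are assumptions.

```python
import numpy as np

def project_l1_ball(v, t):
    """Euclidean projection of v onto {x : ||x||_1 <= t} (standard sort-based algorithm)."""
    if np.sum(np.abs(v)) <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > css - t)[0][-1]
    theta = (css[rho] - t) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def cq_projection(A, b, radius, iters=2000):
    """CQ-type projection iteration for the split feasibility problem
    'find x in C with A x in Q', taking C = {||x||_1 <= radius} and Q = {b}."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2          # step size in (0, 2 / ||A||^2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # projection onto Q = {b} is simply b, so the gradient step uses A x - b
        x = project_l1_ball(x - gamma * A.T @ (A @ x - b), radius)
    return x

# underdetermined system with a sparse solution (illustrative data)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
b = A @ x_true
x = cq_projection(A, b, radius=np.sum(np.abs(x_true)))
print(np.linalg.norm(A @ x - b))
```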

12.
In this paper, we examine three algorithms in the ABS family and consider their storage requirements on sparse band systems. It is shown that, when using the implicit Cholesky algorithm on a band matrix with band width 2q+1, only q additional vectors are required. Indeed, for any matrix with upper band width q, only q additional vectors are needed. More generally, if a_{kj} ≠ 0 for j > k, then the j-th row of H_i is effectively nonzero if j > i > k. The arithmetic operations involved in solving a band matrix by this method are dominated by (1/2)n²q. Special results are obtained for q-band tridiagonal matrices and cyclic band matrices. The implicit Cholesky algorithm may require pivoting if the matrix A does not possess positive-definite principal minors, so two further algorithms were considered that do not require this property. When using the implicit QR algorithm, a matrix with band width q needs at most 2q additional vectors. Similar results for q-band tridiagonal matrices and cyclic band matrices are obtained. For the symmetric Huang algorithm, a matrix with band width q requires q-1 additional vectors. The storage required for q-band tridiagonal matrices and cyclic band matrices is again analyzed. This work was undertaken during the visit of Dr. J. Abaffy to Hatfield Polytechnic, sponsored by SERC Grant No. GR/E-07760.

13.
This paper presents an exponential matrix method for the solutions of systems of high-order linear differential equations with variable coefficients. The problem is considered with the mixed conditions. On the basis of the method, the matrix forms of exponential functions and their derivatives are constructed, and then by substituting the collocation points into the matrix forms, the fundamental matrix equation is formed. This matrix equation corresponds to a system of linear algebraic equations. By solving this system, the unknown coefficients are determined and thus the approximate solutions are obtained. Also, an error estimation based on the residual functions is presented for the method. The approximate solutions are improved by using this error estimation. To demonstrate the efficiency of the method, some numerical examples are given and the comparisons are made with the results of other methods. Copyright © 2012 John Wiley & Sons, Ltd.

14.
The convergence problem of many Krylov subspace methods, e.g., FOM, GCR, GMRES and QMR, for solving large unsymmetric (non-Hermitian) linear systems is considered in a unified way when the coefficient matrix A is defective and its spectrum lies in the open right (left) half plane. Related theoretical error bounds are established and some intrinsic relationships between the convergence speed and the spectrum of A are exposed. It is shown that these methods are likely to converge slowly once one of the three cases occurs: A is defective, the distribution of its spectrum is not favorable, or the Jordan basis of A is ill conditioned. In the proof, some properties of the higher order derivatives of Chebyshev polynomials in an ellipse in the complex plane are derived, one of which corrects a result that has been used extensively in the literature. Supported by the China State Major Key Project for Basic Researches, the National Natural Science Foundation of China, the Doctoral Program of the Chinese National Educational Commission, the Foundation of Returned Scholars of China and the Liaoning Province Natural Science Foundation.

15.
1. Introduction. The classical iterative methods, such as the Jacobi method, the Gauss-Seidel method and the SOR method, as well as their symmetrized variants, play an important role for solving the large sparse system of linear equations. In accordance with the basic extrapolation principle of the linear iterative method, Hadjidimos [1] further proposed a class of accelerated overrelaxation (AOR) methods for solving the linear system (1.1) in 1978. This method includes two arbitrary parameters, and their suitable choices not only can naturally recover the Jacobi, the Gauss-S…
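For reference, the AOR iteration mentioned above can be written for A = D − L − U (diagonal, strictly lower and strictly upper parts) with acceleration parameter r and relaxation parameter ω; the parameter values in the sketch below are arbitrary.

```python
import numpy as np

def aor(A, b, r=0.8, omega=1.0, iters=200, x0=None):
    """Accelerated overrelaxation (AOR) iteration for A x = b with
    A = D - L - U (diagonal / strictly lower / strictly upper parts):
        x_{k+1} = (D - r L)^{-1} [((1 - omega) D + (omega - r) L + omega U) x_k + omega b].
    r = omega gives SOR, r = 0 extrapolated Jacobi, r = omega = 1 Gauss-Seidel."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    x = np.zeros_like(b) if x0 is None else x0.copy()
    M = D - r * L
    T = (1 - omega) * D + (omega - r) * L + omega * U
    for _ in range(iters):
        x = np.linalg.solve(M, T @ x + omega * b)
    return x

# diagonally dominant example; compare against a direct solve
A = np.array([[5.0, -1.0, -2.0], [-1.0, 6.0, -1.0], [-2.0, -1.0, 5.0]])
b = np.array([2.0, 4.0, 2.0])
print(aor(A, b), np.linalg.solve(A, b))
```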

16.
Recently, Freund and Nachtigal proposed the quasi-minimal residual algorithm (QMR) for solving general nonsingular non-Hermitian linear systems. The method is based on the Lanczos process, and thus it involves matrix-vector products with both the coefficient matrix of the linear system and its transpose. Freund developed a variant of QMR, the transpose-free QMR algorithm (TFQMR), that only requires products with the coefficient matrix. In this paper, the use of QMR and TFQMR for solving singular systems is explored. First, a convergence result for the general class of Krylov-subspace methods applied to singular systems is presented. Then, it is shown that QMR and TFQMR both converge for consistent singular linear systems with coefficient matrices of index 1. Singular systems of this type arise in Markov chain modeling. For this particular application, numerical experiments are reported.
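A hedged usage sketch with SciPy's qmr solver (a tfqmr routine is also available in recent SciPy releases): the small Markov-chain-style generator below is a made-up consistent singular system of index 1, not taken from the paper's experiments.

```python
import numpy as np
from scipy.sparse.linalg import qmr   # tfqmr exists too in recent SciPy versions

# Made-up Markov-chain-style example: P is row-stochastic, so A = I - P.T is
# singular with index 1; b is chosen in the range of A, so A x = b is consistent.
rng = np.random.default_rng(0)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)
A = np.eye(6) - P.T
b = A @ rng.standard_normal(6)

x, info = qmr(A, b)                   # info == 0 signals convergence to tolerance
print(info, np.linalg.norm(A @ x - b))
```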

17.
For the block system of weakly nonlinear equations Ax = G(x), where A is a large sparse block matrix and G is a block nonlinear mapping having certain smoothness properties, we present a class of asynchronous parallel multisplitting block two-stage iteration methods in this paper. These methods are in fact block variants and generalizations of the asynchronous multisplitting two-stage iteration methods studied by Bai and Huang (Journal of Computational and Applied Mathematics 93(1) (1998) 13–33), and they can achieve high parallel efficiency on multiprocessor systems, especially when there is load imbalance. Under quite general conditions, namely that A is a block H-matrix of different types and G is a block P-bounded mapping, we establish convergence theories for these asynchronous multisplitting block two-stage iteration methods. Numerical computations show that these new methods are very efficient for solving the block system of weakly nonlinear equations in an asynchronous parallel computing environment.

18.
In this paper, we introduce some new iterative methods to solve linear systems \(Ax=b\). We show that these methods, compared to the classical Jacobi and Gauss-Seidel methods, can be applied to more systems and have faster convergence.
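For context, minimal implementations of the classical Jacobi and Gauss-Seidel baselines referred to in this abstract (the paper's new methods themselves are not reproduced here):

```python
import numpy as np

def jacobi(A, b, iters=100, x0=None):
    """Classical Jacobi sweep: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(iters):
        x = (b - (A @ x - d * x)) / d
    return x

def gauss_seidel(A, b, iters=100, x0=None):
    """Classical Gauss-Seidel sweep: components are updated in place,
    i.e. (D - L) x_{k+1} = U x_k + b."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```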

19.
白中治  仇寿霞 《计算数学》2002,24(1):113-128
1. Introduction. Consider the large sparse system of linear algebraic equations. To exploit the sparse structure of the coefficient matrix and thereby reduce storage and computational cost as much as possible, Krylov subspace iterative algorithms [1,16,23] and their preconditioned variants [6,8,13,18,19] are usually effective and practical methods for solving (1). When the coefficient matrix is symmetric positive definite, the conjugate gradient method (CG(
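A hedged usage sketch of a preconditioned Krylov solver of the kind discussed here: SciPy's conjugate gradient method applied to a sparse symmetric positive definite system with a simple Jacobi (diagonal) preconditioner; the test matrix is illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# 1-D Laplacian: a sparse symmetric positive definite test matrix
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# simple Jacobi (diagonal) preconditioner supplied as a LinearOperator
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = cg(A, b, M=M)               # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```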

20.
Tensor methods for large sparse systems of nonlinear equations
This paper introduces tensor methods for solving large sparse systems of nonlinear equations. Tensor methods for nonlinear equations were developed in the context of solving small to medium-sized dense problems. They base each iteration on a quadratic model of the nonlinear equations, where the second-order term is selected so that the model requires no more derivative or function information per iteration than standard linear model-based methods, and hardly more storage or arithmetic operations per iteration. Computational experiments on small to medium-sized problems have shown tensor methods to be considerably more efficient than standard Newton-based methods, with a particularly large advantage on singular problems. This paper considers the extension of this approach to solve large sparse problems. The key issue considered is how to make efficient use of sparsity in forming and solving the tensor model problem at each iteration. Accomplishing this turns out to require an entirely new way of solving the tensor model that successfully exploits the sparsity of the Jacobian, whether the Jacobian is nonsingular or singular. We develop such an approach and, based upon it, an efficient tensor method for solving large sparse systems of nonlinear equations. Test results indicate that this tensor method is significantly more efficient and robust than an efficient sparse Newton-based method, in terms of iterations, function evaluations, and execution time. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Work supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, US Department of Energy, under Contract W-31-109-Eng-38, by the National Aerospace Agency under Purchase Order L25935D, and by the National Science Foundation, through the Center for Research on Parallel Computation, under Cooperative Agreement No. CCR-9120008. Research supported by AFOSR Grants No. AFOSR-90-0109 and F49620-94-1-0101, ARO Grants No. DAAL03-91-G-0151 and DAAH04-94-G-0228, and NSF Grant No. CCR-9101795.
