Similar Documents
20 similar documents found.
1.
Newton, in notes that he would rather not have seen published, described a process for solving simultaneous equations that later authors applied specifically to linear equations. This method — which Euler did not recommend, which Legendre called “ordinary,” and which Gauss called “common” — is now named after Gauss: “Gaussian” elimination. Gauss’s name became associated with elimination through the adoption, by professional computers, of a specialized notation that Gauss devised for his own least-squares calculations. The notation allowed elimination to be viewed as a sequence of arithmetic operations that were repeatedly optimized for hand computing and eventually were described by matrices.

2.
It is well-known that Gaussian elimination is very sensitive to round-off errors if row interchanges are not performed. In this note we make precise and prove the fact that there exist sets of data for which that algorithm leads to improper results regardless of how many digits are carried in the computation. Our research was supported in part by NSF grant GJ-797.
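As a rough illustration of the phenomenon in item 2 (not the specific data sets constructed in the note), the following Python sketch runs unpivoted elimination on a 2 × 2 system with a tiny leading entry; the computed solution is badly wrong even in double precision, while a pivoted solver recovers it.

```python
import numpy as np

def gauss_no_pivot(A, b):
    """Gaussian elimination without row interchanges (illustration only)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier: huge when A[k, k] is tiny
            A[i, k + 1:] -= m * A[k, k + 1:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The exact solution is approximately (1, 1); without pivoting the first
# component is lost completely, while NumPy's pivoted LU recovers it.
A = [[1e-17, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
print(gauss_no_pivot(A, b))        # e.g. [0. 1.]  (wrong)
print(np.linalg.solve(A, b))       # approximately [1. 1.]
```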

3.
We extend a result of D. J. Rose [9] on perfect Gaussian elimination for symmetric matrices. It is proved that the restriction that all pivots are to be chosen along the main diagonal can be removed without loss of generality.

4.
Complete pivoting is known to be numerically preferable to partial pivoting for solving systems of linear algebraic equations by Gaussian elimination. However, partial pivoting requires less computational work. Hence we should like to use partial pivoting provided we can easily recognize numerical difficulties. We propose an effective and inexpensive test for this purpose.
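The paper's own test is not reproduced here; as a hedged sketch of the general idea of cheaply detecting trouble during partial pivoting, the Python code below monitors the element growth factor while factorizing and flags the factorization as suspicious when the growth exceeds an arbitrary, illustrative threshold.

```python
import numpy as np

def lu_partial_pivot_monitored(A, growth_tol=64.0):
    """LU with partial pivoting that also tracks the element growth factor.

    A large growth factor is a cheap warning sign that partial pivoting may be
    losing accuracy; the threshold `growth_tol` is an illustrative choice,
    not the test proposed in the paper.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    a0 = np.abs(A).max()
    g = a0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # largest entry in the column
        if p != k:
            A[[k, p]] = A[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        A[k + 1:, k] /= A[k, k]                       # multipliers stored below the diagonal
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
        g = max(g, np.abs(A[k + 1:, k + 1:]).max())   # growth of the active submatrix
    growth = g / a0
    return A, piv, growth, growth > growth_tol
```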

5.
Gaussian elimination is not optimal

6.
Certain algebraic operations (in the Boolean sense) are developed for directed graphs. Nonsingular and inverse graphs are defined and some of their characteristics are derived. The results are applied to the Gaussian elimination process. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 58, pp. 72–79, 1976.

7.
Summary Part I of this work deals with the forward error analysis of Gaussian elimination for general linear algebraic systems. The error analysis is based on a linearization method which determines first order approximations of the absolute errors exactly. Superposition and cancellation of error effects, structure and sparsity of the coefficient matrices are completely taken into account by this method. The most important results of the paper are new condition numbers and associated optimal component-wise error and residual estimates for the solutions of linear algebraic systems under data perturbations and perturbations by rounding errors in the arithmetic floating-point operations. The estimates do not use vector or matrix norms. The relative data and rounding condition numbers as well as the associated backward and residual stability constants are scaling-invariant. The condition numbers can be computed approximately from the input data, the intermediate results, and the solution of the linear system. Numerical examples show that by these means realistic bounds of the errors and the residuals of approximate solutions can be obtained. Using the forward error analysis, typical results of backward error analysis are also deduced. Stability theorems and a priori error estimates for special classes of linear systems are proved in Part II of this work.
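As a small, norm-free companion to this kind of component-wise analysis, the sketch below computes the classical Oettli–Prager component-wise relative backward error of a computed solution; it is a standard measure in the same spirit, not the paper's specific condition numbers.

```python
import numpy as np

def componentwise_backward_error(A, x, b):
    """Oettli-Prager component-wise relative backward error of a computed x.

    Smallest eps such that (A + dA) x = b + db for some |dA| <= eps*|A| and
    |db| <= eps*|b| (component-wise); a norm-free quality measure.
    """
    r = b - A @ x
    denom = np.abs(A) @ np.abs(x) + np.abs(b)
    w = np.where(denom > 0, np.abs(r) / np.where(denom > 0, denom, 1.0), 0.0)
    return w.max()

# Usage: for a solution from a pivoted solver this is typically of the order
# of the machine epsilon.
A = np.random.default_rng(1).standard_normal((5, 5))
b = A @ np.ones(5)
x = np.linalg.solve(A, b)
print(componentwise_backward_error(A, x, b))
```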

8.
Neville elimination is an elimination procedure alternative to Gaussian elimination. It is very useful when dealing with totally positive matrices, for which nice stability results are known. Here we include examples, most of them test matrices used in MATLAB that are not totally positive, in which Neville elimination outperforms Gaussian elimination.
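For readers unfamiliar with the procedure, here is a minimal Python sketch of Neville elimination; it assumes no row exchanges are needed (as is the case for nonsingular totally positive matrices) and is meant only to show how the adjacent-row eliminations differ from the usual Gaussian step.

```python
import numpy as np

def neville_elimination(A):
    """Minimal Neville elimination sketch (no row exchanges).

    Zeros below the main diagonal are produced column by column, but each row
    is updated with a multiple of the row immediately above it, working from
    the bottom up, instead of always subtracting a multiple of the pivot row
    as in Gaussian elimination.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for k in range(n - 1):
        for i in range(n - 1, k, -1):
            if U[i, k] != 0.0:
                m = U[i, k] / U[i - 1, k]     # multiplier taken from the adjacent row
                U[i, k:] -= m * U[i - 1, k:]
    return U

# Example with a totally positive (Pascal-like) matrix.
P = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [1.0, 3.0, 6.0]])
print(neville_elimination(P))   # upper triangular, as expected
```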

9.
In the present note it is proved that, given a real n × n matrix A_n = (a_ij) such that |a_ij| ≤ 1, the maximum values in modulus of the pivots p_3 and p_4 in Gaussian elimination with complete pivoting are 2¼ and 4, respectively. In addition, it is shown that |p_5| < 5.005.
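To make the bound concrete, here is a small brute-force Python check (an illustration, not part of the note's proof): it runs complete pivoting on random 3 × 3 matrices with entries in [-1, 1] and confirms that the observed third pivot never exceeds the proven maximum of 2¼, which is attained only by special matrices.

```python
import numpy as np

def complete_pivoting_pivots(A):
    """Pivots of Gaussian elimination with complete pivoting."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    pivots = []
    for k in range(n):
        i, j = divmod(np.abs(A[k:, k:]).argmax(), n - k)   # largest entry of the active block
        A[[k, k + i]] = A[[k + i, k]]                       # row swap
        A[:, [k, k + j]] = A[:, [k + j, k]]                 # column swap
        pivots.append(A[k, k])
        if k < n - 1:
            A[k + 1:, k:] -= np.outer(A[k + 1:, k] / A[k, k], A[k, k:])
    return pivots

rng = np.random.default_rng(0)
worst = max(abs(complete_pivoting_pivots(rng.uniform(-1, 1, (3, 3)))[2])
            for _ in range(2000))
print(worst)   # stays below 2.25
```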

10.
Summary The results of Wilkinson on the stability of Gaussian elimination with column pivoting are generalized.

11.
Summary In order to factorize an indefinite symmetric matrix G in the form G = LDL^T, where L is a trivially invertible matrix and D is a diagonal matrix, we introduce a new kind of pivoting operation. The algorithm suggested maintains the stability and efficiency of the standard Cholesky decomposition while G need not be positive definite. The problem of factorizing G + σuu^T, where u is a vector, σ a scalar, and the factors of G are known, is also discussed. This research was supported in part by the Israeli National Council for Research and Development.
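As an illustration that an indefinite symmetric matrix admits an LDL^T-type factorization once symmetric pivoting is allowed, the snippet below uses SciPy's `ldl` routine (based on the Bunch–Kaufman strategy, not the pivoting operation proposed in the paper); note that D comes out block diagonal rather than a positive diagonal.

```python
import numpy as np
from scipy.linalg import ldl

# G is symmetric and indefinite, with a zero leading entry, so an unpivoted
# Cholesky or LDL^T factorization would break down immediately.
G = np.array([[0.0, 1.0, 2.0],
              [1.0, 3.0, 1.0],
              [2.0, 1.0, 0.0]])
L, D, perm = ldl(G)                      # Bunch-Kaufman based LDL^T with pivoting
print(np.allclose(L @ D @ L.T, G))       # True: the factors reproduce G
print(np.linalg.eigvalsh(G))             # eigenvalues of mixed sign: G is indefinite
```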

12.
To solve linear programming problems by interior point methods an approximately centered interior point has to be known. Such a point can be found by an algorithmic approach – a so-called phase 1 algorithm or centering algorithm. For random linear programming problems distributed according to the rotation symmetry model, especially with normal distribution, we present probabilistic results on the quality of the origin as starting point and the average number of steps of a centering algorithm.

13.
Using the simple vehicle of tridiagonal Toeplitz matrices, the question of whether one must pivot during the Gauss elimination procedure is examined. An exact expression for the multipliers encountered during the elimination process is given. It is then shown that for a prototype Helmholtz problem, one cannot guarantee that elimination without pivoting is stable.
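A small numerical illustration of the same mechanism (with Helmholtz-like coefficients chosen purely for demonstration, not the paper's exact analysis): the recurrence for the pivots and multipliers of a Toeplitz tridiagonal matrix shows that the pivots can come arbitrarily close to zero, so the multipliers of unpivoted elimination are not bounded.

```python
import numpy as np

def toeplitz_tridiag_pivots(a, b, c, n):
    """Pivots d_i and multipliers l_i of unpivoted Gaussian elimination applied
    to the n x n Toeplitz tridiagonal matrix with constant subdiagonal a,
    diagonal b and superdiagonal c:

        d_1 = b,    l_i = a / d_{i-1},    d_i = b - l_i * c   (i = 2, ..., n).
    """
    d = np.empty(n)
    l = np.zeros(n)
    d[0] = b
    for i in range(1, n):
        l[i] = a / d[i - 1]
        d[i] = b - l[i] * c
    return d, l

# Helmholtz-like choice: a = c = -1 and a diagonal of magnitude less than 2.
# Then the pivot recurrence has no real fixed point, the pivots wander and can
# pass close to zero, and the multipliers grow accordingly.
d, l = toeplitz_tridiag_pivots(-1.0, 1.5, -1.0, 50)
print(np.abs(d).min(), np.abs(l).max())
```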

14.
The communication complexity of functions defined in lattices is bounded from above and below, thereby generalizing former results of Lovasz [1] and Ahlswede and Cai [2]. In geometric lattices especially, the upper and lower bounds often differ by at most one bit.

15.
16.
In this article, we consider the factorization of a sparse nonsymmetric matrix using Gaussian elimination with partial pivoting on a multiprocessor having a globally-shared memory. The parallel algorithm makes use of a static data structure developed by George, Liu and Ng in [17]. Some numerical experiments on a Sequent Balance 8000 are presented to demonstrate the efficiency of the parallel implementation. Research supported in part by the Applied Mathematical Sciences Research Program, Office of Energy Research, U.S. Department of Energy under contract DE-AC05-84OR21400 with Martin Marietta Energy Systems, Inc.

17.
This paper contains a review of the authors’ results in the theory of algorithm complexity. The results described concern methods for obtaining lower bounds (containing almost all exponential lower bounds on monotone complexity of monotone functions), synthesis of asymptotically optimal functional networks, minimization of Boolean functions, and the problem of solving Boolean equations.

18.
In this paper, we propose an algorithm for allocating the tasks of the well-known Gaussian elimination algorithm on an MIMD architecture and prove that the schedule is optimal in order of magnitude, up to a polylog factor.

19.
Consider a sports competition among various teams playing against each other in pairs (matches) according to a previously determined schedule. At some stage of the competition one may ask whether a particular team still has a (theoretical) chance to win the competition. The computational complexity of this question depends on the way scores are allocated according to the outcome of a match. For competitions with at most 3 different outcomes of a match the complexity is already known. In practice there are many competitions in which more than 3 outcomes are possible. We determine the complexity of the above problem for competitions with an arbitrary number of different outcomes. Our model also includes competitions that are asymmetric in the sense that teams playing away may receive different scores than teams playing at home.

20.
We present a general method for the linear least-squares solution of overdetermined and underdetermined systems. The method is particularly efficient when the coefficient matrix is quasi-square, that is, when the number of rows and the number of columns are almost the same. The numerical methods for linear least-squares problems and minimum-norm solutions do not generally take account of this special characteristic. The proposed method is based on LU factorization of the original quasi-square matrix A, assuming that A has full rank. In the overdetermined case, the LU factors are used to compute a basis for the null space of A^T. The right-hand side vector b is then projected onto this subspace and the least-squares solution is obtained from the solution of this reduced problem. In the case of underdetermined systems, the desired solution is again obtained through the solution of a reduced system. The use of this method may lead to important savings in computational time for both dense and sparse matrices. It is also shown in the paper that, even in cases where the matrices are quite small, sparse solvers perform better than dense solvers. Some practical examples that illustrate the use of the method are included.
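A hedged Python sketch in the spirit of this approach (a Peters–Wilkinson-style LU-based least-squares solve, not necessarily the authors' exact algorithm) for the overdetermined, full-rank, quasi-square case:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

def lu_least_squares(A, b):
    """LU-based least squares for a quasi-square, full-rank A (m x n, m >= n).

    Factor A = P L U, solve the small least-squares problem in the (typically
    well-conditioned) trapezoidal factor L, then back-substitute with U.
    """
    P, L, U = lu(A)                                   # P (m x m), L (m x n), U (n x n)
    # ||A x - b|| = ||P L U x - b|| = ||L (U x) - P^T b||, since P is orthogonal
    y, *_ = np.linalg.lstsq(L, P.T @ b, rcond=None)   # reduced least-squares problem
    return solve_triangular(U, y)                     # solve U x = y

# Tiny usage example with a hypothetical quasi-square system.
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 10))
b = rng.standard_normal(12)
x = lu_least_squares(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```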
