Similar References (20 results)
1.
A general analysis of the condition of the linear least squares problem is given. The influence of rounding errors is studied in detail for a modified version of the Gram-Schmidt orthogonalization to obtain a factorization A = QR of a given m×n matrix A, where R is upper triangular and Q^T Q = I. Let x be the vector which minimizes ‖b − Ax‖₂ and r = b − Ax. It is shown that if inner products are accumulated in double precision then the errors in the computed x and r are less than the errors resulting from some simultaneous initial perturbation δA, δb such that
$$\|\delta A\|_E / \|A\|_E \approx \|\delta b\|_2 / \|b\|_2 \approx 2 \cdot n^{3/2} \text{ machine units}.$$
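Below is a minimal Python/NumPy sketch of solving min ‖b − Ax‖₂ via a modified Gram-Schmidt QR factorization as in the abstract above; the function names and the final triangular solve are illustrative, not taken from the paper.

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt factorization A = Q R (m x n, assumes full column rank)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):
            # Orthogonalize the remaining (already partially updated) columns against q_k.
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def lstsq_mgs(A, b):
    """Least squares solution x from R x = Q^T b, plus the residual r = b - A x."""
    Q, R = mgs_qr(A)
    x = np.linalg.solve(R, Q.T @ b)
    return x, b - A @ x
```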

2.
A framework and an algorithm for using modified Gram-Schmidt for constrained and weighted linear least squares problems are presented. It is shown that a direct implementation of a weighted modified Gram-Schmidt algorithm is unstable for heavily weighted problems, but that in most cases a stable algorithm can be obtained by a simple modification that incurs no extra computational cost; in particular, it is not necessary to perform reorthogonalization. Solving the weighted and constrained linear least squares problem with the presented weighted modified Gram-Schmidt algorithm is shown to be numerically equivalent to an algorithm based on a weighted Householder-like QR factorization applied to a slightly larger problem. This equivalence is used to explain the instability of the weighted modified Gram-Schmidt algorithm. If orthogonality of the columns of Q with respect to a weighted inner product is important, reorthogonalization can be used, and one way of performing it is described. Computational tests illustrate the main features of the algorithm.
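For orientation, here is a minimal sketch of the straightforward row-scaling formulation of the weighted problem min ‖W^{1/2}(b − Ax)‖₂, which is the kind of direct implementation the abstract above warns can be unstable for heavily weighted problems. This is not the paper's algorithm; NumPy's Householder-based `qr` is used merely as a stand-in for an MGS factorization.

```python
import numpy as np

def weighted_lstsq_rowscaling(A, b, w):
    """Naive approach: scale rows by sqrt(w_i) and solve the ordinary problem."""
    s = np.sqrt(w)                  # w: positive weights, one per row
    As = A * s[:, None]             # W^{1/2} A
    bs = b * s                      # W^{1/2} b
    Q, R = np.linalg.qr(As)         # stand-in for a (modified) Gram-Schmidt factorization
    return np.linalg.solve(R, Q.T @ bs)
```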

3.
4.
A survey is given of direct methods for solving, in the least squares sense, large sparse overdetermined linear systems of full column rank. The survey covers practically all work on this topic published in the seventies and the beginning of the eighties. Translated from Itogi Nauki i Tekhniki, Seriya Matematicheskii Analiz, Vol. 23, pp. 219–285, 1985.

5.
Perturbation bounds for the linear least squares problem min_x ‖Ax − b‖₂ corresponding to component-wise perturbations in the data are derived. These bounds can be computed using a method of Hager and are often much better than the bounds derived from the standard perturbation analysis. In particular this is true for problems where the rows of A are of widely different magnitudes. Generalizing a result by Oettli and Prager, we can use the bounds to compute a posteriori error bounds for computed least squares solutions.
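As a hedged illustration of the Oettli-Prager idea referenced above, the sketch below computes the classical componentwise backward error of an approximate solution of a linear system A x = b; the abstract's contribution is the generalization of such a posteriori bounds to least squares solutions. The choice E = |A|, f = |b| and the zero-denominator convention are assumptions for illustration.

```python
import numpy as np

def oettli_prager(A, b, x_tilde, E=None, f=None):
    """Componentwise backward error: max_i |r_i| / (E|x| + f)_i."""
    E = np.abs(A) if E is None else E
    f = np.abs(b) if f is None else f
    r = b - A @ x_tilde
    denom = E @ np.abs(x_tilde) + f
    with np.errstate(divide='ignore', invalid='ignore'):
        # Convention: 0/0 counts as 0, nonzero residual with zero scale as infinity.
        ratio = np.where(denom > 0, np.abs(r) / denom,
                         np.where(np.abs(r) > 0, np.inf, 0.0))
    return ratio.max()
```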

6.
Recently Miyajima presented algorithms to compute componentwise verified error bounds for the solution of full-rank least squares problems and underdetermined linear systems. In this paper we derive simpler and improved componentwise error bounds which are based on equalities for the error of a given approximate solution. Equalities are not improvable, and the expressions are formulated in a way that direct evaluation yields componentwise and rigorous estimates of good quality. The computed bounds are correct in a mathematical sense, covering all sources of error, in particular rounding errors. Numerical results show a gain in accuracy compared to previous results.

7.
In this paper we describe how to use the Gram-Schmidt factorizations of Daniel et al. [1] to realize the method of [8] for solving linearly constrained linear least squares problems. The main advantage of using these factorizations is that it is relatively easy to take data changes into account when necessary. The algorithm is compared numerically with two other codes, one of them published by Lawson and Hanson [3]. Further computational tests show the efficiency of the proposed methods when a few data items of the original problem are subsequently changed. This paper was sponsored by Deutsche Forschungsgemeinschaft, Bonn-Bad Godesberg.

8.
A direct error analysis is given for orthogonal factorization methods for calculating the least squares solution of an overdetermined system of linear equations. The direct method has the interesting advantage that it permits the separation of errors occurring in the transformation and back-substitution phases of the solution, which shows the partial elimination of potentially significant terms occurring in different stages of the algorithm. Presumably it is prudent to minimize the error at each stage of the algorithm, so it is significant that numerical evidence shows column pivoting can reduce the magnitude of these terms. This is offered as an explanation for the common observation that column pivoting is beneficial in least squares calculations. We also summarize the corresponding error analysis for the calculation of the minimum norm solution of an underdetermined system using orthogonal transformations.
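A brief sketch of a column-pivoted Householder QR least squares solve, illustrating the pivoting device discussed above; the routine relies on SciPy's pivoted QR, and the names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def lstsq_pivoted_qr(A, b):
    """Least squares via Householder QR with column pivoting: A[:, piv] = Q R."""
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    y = solve_triangular(R, Q.T @ b)   # solve R y = Q^T b for the permuted unknowns
    x = np.empty_like(y)
    x[piv] = y                         # undo the column permutation
    return x
```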

9.
Numerical methods for solving linear least squares problems
A common problem in a computer laboratory is that of finding linear least squares solutions. These problems arise in a variety of areas and in a variety of contexts. Linear least squares problems are particularly difficult to solve because they frequently involve large quantities of data and are ill-conditioned by their very nature. In this paper, we consider stable numerical methods for handling these problems. Our basic tool is a matrix decomposition based on orthogonal Householder transformations. Reproduction in whole or in part is permitted for any purpose of the United States Government. This report was supported in part by Office of Naval Research Contract Nonr-225(37) (NR 044-11) at Stanford University.
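The core device of the abstract above, orthogonal Householder transformations, can be sketched as follows: reduce the augmented matrix [A | b] to triangular form with reflectors and back-substitute. This is an illustrative sketch, not the paper's routine, and it assumes full column rank.

```python
import numpy as np

def householder_lstsq(A, b):
    """Least squares via Householder reduction of [A | b]; assumes full column rank."""
    m, n = A.shape
    M = np.column_stack([A, b]).astype(float)          # work on the augmented matrix [A | b]
    for k in range(n):
        x = M[k:, k]
        s = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += s * np.linalg.norm(x)                  # Householder vector for column k
        v /= np.linalg.norm(v)
        M[k:, k:] -= 2.0 * np.outer(v, v @ M[k:, k:])  # apply the reflector I - 2 v v^T
    return np.linalg.solve(M[:n, :n], M[:n, n])        # back-substitution: R x = Q^T b
```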

10.
The least squares solution of linear inequalities appears in many disciplines, such as linear separability problems and inconsistency correction. In this paper we consider this problem with uncertainty in its data. We then prove that its robust counterpart is equivalent to a second-order conic linear optimization problem, which can be solved efficiently using interior point methods.
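The abstract above does not spell out an algorithm, but the underlying nominal (non-robust) problem can be illustrated: the least squares "solution" of A x ≤ b minimizes the squared norm of the violations, f(x) = ‖max(Ax − b, 0)‖₂². The sketch below uses plain gradient descent purely for illustration; the step size rule and iteration count are assumptions, and the paper itself treats the robust version as a second-order cone program.

```python
import numpy as np

def lsq_linear_inequalities(A, b, x0=None, step=None, iters=5000):
    """Minimize ||max(A x - b, 0)||_2^2 by gradient descent (illustrative only)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    # The gradient is 2 A^T (A x - b)_+ and is Lipschitz with constant 2*||A||_2^2,
    # so 1/(2*||A||_2^2) is a safe step size.
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2) if step is None else step
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)
        x -= step * (2.0 * A.T @ viol)
    return x
```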

11.
The weighting method for solving a least squares problem with linear equality constraints multiplies the constraints by a large number and appends them to the top of the least squares problem, which is then solved by standard techniques. In this paper we give a new analysis of the method, based on the QR decomposition, that exhibits many features of the algorithm. In particular it suggests a natural criterion for choosing the weighting factor. This work was supported in part by the National Science Foundation under grant CCR 95503126.
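A minimal sketch of the weighting method described above for min ‖b − Ax‖₂ subject to Cx = d: scale the constraints by a large weight and stack them on top of the least squares problem. The names C, d and the particular weight value are illustrative; choosing the weight well is exactly what the paper's analysis addresses.

```python
import numpy as np

def weighted_equality_lstsq(A, b, C, d, mu=1e8):
    """Approximate solution of min ||b - A x|| s.t. C x = d via the weighting method."""
    A_aug = np.vstack([mu * C, A])              # weighted constraints on top
    b_aug = np.concatenate([mu * d, b])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```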

12.
This article is concerned with iterative techniques for linear systems of equations arising from a least squares formulation of boundary value problems. In its classical form, the solution of the least squares method is obtained by solving the traditional normal equations. However, for nonsmooth boundary conditions or in the case of refinement at a selected set of interior points, the matrix associated with the normal equations tends to be ill-conditioned. In this case, the least squares method may be formulated as a Powell multiplier method and the equations solved iteratively. We use and compare two different iterative algorithms. The first is the preconditioned conjugate gradient method applied to the normal equations, while the second is a new algorithm based on the Powell method and formulated on the stabilized dual problem. The two algorithms are first compared on a one-dimensional problem with poorly conditioned matrices. Results show that, for such problems, the new algorithm gives more accurate results. The new algorithm is then applied to a two-dimensional steady-state diffusion problem and a boundary layer problem. A comparison between the least squares method of Bramble and Schatz and the new algorithm demonstrates the ability of the new method to give highly accurate results on the boundary, or at a given set of interior collocation points, without deterioration of the condition number of the matrix. Conditions for convergence of the proposed algorithm are discussed. © 1997 John Wiley & Sons, Inc.
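The first of the two algorithms above, preconditioned conjugate gradients on the normal equations AᵀA x = Aᵀb, can be sketched as below. The Jacobi (diagonal) preconditioner and stopping rule are assumptions for illustration, and forming AᵀA explicitly squares the condition number, which is part of the ill-conditioning the abstract mentions.

```python
import numpy as np

def pcg_normal_equations(A, b, tol=1e-10, maxiter=1000):
    """PCG on A^T A x = A^T b with a Jacobi preconditioner (illustrative sketch)."""
    AtA, Atb = A.T @ A, A.T @ b
    Minv = 1.0 / np.diag(AtA)            # inverse of M = diag(A^T A)
    x = np.zeros(A.shape[1])
    r = Atb - AtA @ x
    z = Minv * r
    p = z.copy()
    for _ in range(maxiter):
        Ap = AtA @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(Atb):
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```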

13.
Numerical Algorithms - In our work, we consider the linear least squares problem for m × n systems of linear equations Ax = b, m ≥ n, such that the matrix A and right-hand side vector b...

14.
In this paper, we present the preconditioned generalized accelerated overrelaxation (GAOR) method for solving linear systems based on a class of weighted linear least squares problems. Two kinds of preconditioning are proposed, and each one contains three preconditioners. We compare the spectral radii of the iteration matrices of the preconditioned and the original methods. The comparison results show that the convergence rate of the preconditioned GAOR methods is indeed better than the rate of the original method, whenever the original method is convergent. Finally, a numerical example is presented to confirm these theoretical results.

15.
Numerische Mathematik - In 1966 Golub and Wilkinson gave upper bounds for the errors of least squares solutions based on orthogonal transformations, in which the square of the condition number of...

16.
This paper completes our previous discussion of the total least squares (TLS) and least squares (LS) problems for the linear system AX = B, which may have more than one solution [12, 13], and generalizes the work of Golub and Van Loan [1, 2], Van Huffel [8], and Van Huffel and Vandewalle [11]. The TLS problem is extended to a more general case. The sets of solutions and of squared residuals for the TLS and LS problems are compared. The concept of weighted squared residuals is extended and the difference between the TLS and LS approaches is derived. The connection between the approximate subspaces and the perturbation theory is studied. It is proved that under moderate conditions all the corresponding quantities for the solution sets of the TLS and the modified LS problems are close to each other, while the quantities for the solution set of the LS problem are close to the corresponding ones for a subset of the TLS solution set. This work was financially supported by the Education Committee, People's Republic of China.
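For reference, the classical total least squares solution in the single right-hand side case min ‖[δA, δb]‖_F subject to (A + δA)x = b + δb comes from the SVD of the augmented matrix [A, b]; the abstract above studies the general, possibly non-unique case AX = B. The sketch below covers only the generic single-vector case.

```python
import numpy as np

def tls_solution(A, b):
    """Classical TLS solution from the right singular vector of [A, b] for the smallest singular value."""
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]
    if abs(v[n]) < np.finfo(float).eps:
        raise ValueError("generic TLS solution does not exist")
    return -v[:n] / v[n]
```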

17.
A variant of the preconditioned conjugate gradient method to solve generalized least squares problems is presented. If the problem is min (Ax − b)^T W^{−1} (Ax − b) with A ∈ R^{m×n} and W ∈ R^{m×m} symmetric and positive definite, the method needs only a preconditioner A_1 ∈ R^{n×n}, but not the inverse of W or of any of its submatrices. Freund's comparison result for regular least squares problems is extended to generalized least squares problems. An error bound is also given.
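To make the problem formulation concrete, here is a direct reference solution of min (Ax − b)^T W^{−1} (Ax − b) using a Cholesky factor W = L Lᵀ, which turns it into an ordinary least squares problem for L^{−1}A and L^{−1}b. This is only an illustration of the objective; the abstract's point is precisely to avoid such operations with W and use a preconditioned conjugate gradient iteration instead.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def generalized_lstsq(A, b, W):
    """Direct solution of min (Ax-b)^T W^{-1} (Ax-b) via W = L L^T (reference only)."""
    L = cholesky(W, lower=True)
    A_w = solve_triangular(L, A, lower=True)   # L^{-1} A
    b_w = solve_triangular(L, b, lower=True)   # L^{-1} b
    x, *_ = np.linalg.lstsq(A_w, b_w, rcond=None)
    return x
```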

18.
A generalized successive overrelaxation method for least squares problems
In this paper a new iterative method is given for solving large sparse least squares problems and computing the minimum norm solution of underdetermined consistent linear systems. The new scheme, called the generalized successive overrelaxation (GSOR) method, is shown to be convergent if A has full column rank. The GSOR method involves a parameter ρ and an auxiliary matrix P. The matrix P can be chosen so that the GSOR method involves only matrix and vector operations, which makes it suitable for parallel computation. Moreover, the GSOR method can be combined with preconditioning techniques and can therefore be expected to be more effective. This author's work was supported by the Natural Science Foundation of Liaoning Province, China.

19.
Surface interpolation finds application in many aspects of science and technology. Two specific areas of interest are surface reconstruction techniques for plant architecture and approximating cell face fluxes in the finite volume discretisation strategy for solving partial differential equations numerically. An important requirement of both applications is accurate local gradient estimation. In surface reconstruction this gradient information is used to increase the accuracy of the local interpolant, while in the finite volume framework accurate gradient information is essential to ensure second order spatial accuracy of the discretisation.
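As a small illustration of the least-squares gradient estimation that both applications above rely on, the sketch below fits an affine model f(p) ≈ f(p₀) + g·(p − p₀) to values at neighbouring points and reads off the gradient g. The 2-D setting and all names are assumptions for illustration.

```python
import numpy as np

def local_gradient(p0, f0, neighbours, f_neighbours):
    """Estimate the gradient at p0 from neighbouring point values by least squares."""
    D = np.asarray(neighbours) - np.asarray(p0)      # displacement vectors, shape (k, 2)
    df = np.asarray(f_neighbours) - f0               # value differences, shape (k,)
    g, *_ = np.linalg.lstsq(D, df, rcond=None)       # solve D g ~= df in the LS sense
    return g
```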

20.
The accuracy of a Gram–Schmidt algorithm for the solution of linear least squares equations is compared with the accuracy of least squares subroutines in three highly respected mathematical packages that use Householder transformations. Results from the four programs for 13 test problems were evaluated at 16-digit precision on four different desktop computers using four different compilers. Singular values obtained from the different programs are compared, and the effect of pivoting to improve the accuracy is discussed. Solution vectors from the program using the Gram–Schmidt algorithm were generally more accurate than, or comparable to, those from the programs using Householder transformations. © 1997 John Wiley & Sons, Ltd.
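A small experiment in the spirit of the comparison above: measure the loss of orthogonality ‖QᵀQ − I‖ produced by classical Gram-Schmidt, modified Gram-Schmidt, and a Householder-based QR (NumPy's np.linalg.qr) on a synthetic ill-conditioned matrix. The test matrix is an assumption, not one of the paper's 13 problems.

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """Classical (modified=False) or modified (modified=True) Gram-Schmidt Q factor."""
    A = A.astype(float)
    m, n = A.shape
    Q = np.zeros((m, n))
    for k in range(n):
        v = A[:, k].copy()
        for j in range(k):
            # MGS projects out q_j from the partially updated vector, CGS from the original column.
            coeff = Q[:, j] @ (v if modified else A[:, k])
            v -= coeff * Q[:, j]
        Q[:, k] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((60, 60)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U[:, :20] @ np.diag(np.logspace(0, -10, 20)) @ V      # condition number about 1e10

for name, Q in [("CGS", gram_schmidt(A, modified=False)),
                ("MGS", gram_schmidt(A, modified=True)),
                ("Householder", np.linalg.qr(A)[0])]:
    print(name, np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))
```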

