Similar Documents
20 similar documents found.
1.
We present a linear systolic array of O(n) cells that solves the least squares linear prediction problem in time O(n) via an algorithm based on the so-called lattice algorithm. The total storage required is O(n) words, i.e., only a constant number of words are needed at each cell.

2.
A framework and an algorithm for using modified Gram-Schmidt for constrained and weighted linear least squares problems is presented. It is shown that a direct implementation of a weighted modified Gram-Schmidt algorithm is unstable for heavily weighted problems. It is shown that, in most cases, it is possible to get a stable algorithm by a simple modification free from any extra computational cost; in particular, it is not necessary to perform reorthogonalization. Solving the weighted and constrained linear least squares problem with the presented weighted modified Gram-Schmidt algorithm is seen to be numerically equivalent to an algorithm based on a weighted Householder-like QR factorization applied to a slightly larger problem. This equivalence is used to explain the instability of the weighted modified Gram-Schmidt algorithm. If orthogonality of the columns in Q, with respect to a weighted inner product, is important, then reorthogonalization can be used; one way of performing such reorthogonalization is described. Computational tests are given to show the main features of the algorithm.
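For concreteness, the following is a minimal Python/NumPy sketch (not the paper's algorithm) of the straightforward weighted modified Gram-Schmidt approach for min ‖W^{1/2}(Ax − b)‖_2, assuming a diagonal weight matrix stored as a vector w. It is the unmodified variant, so with very large weights it can exhibit exactly the loss of stability discussed above; the function names and demo data are illustrative.

```python
import numpy as np

def weighted_mgs_qr(A, w):
    """Modified Gram-Schmidt QR of A in the inner product <u, v> = u.T @ (w * v),
    where w is a vector of positive weights. Returns Q (W-orthonormal) and R."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.sqrt(Q[:, k] @ (w * Q[:, k]))   # weighted column norm
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):                    # orthogonalize remaining columns
            R[k, j] = Q[:, k] @ (w * Q[:, j])
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def weighted_lsq(A, b, w):
    """Solve min_x || W^(1/2) (A x - b) ||_2 via the weighted MGS factorization."""
    Q, R = weighted_mgs_qr(A, w)
    return np.linalg.solve(R, Q.T @ (w * b))         # triangular system R x = Q^T W b

# Demo: heavy weights on the first two rows mimic (soft) equality constraints.
# Pushing the weights much higher exposes the instability discussed in the abstract.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)
w = np.ones(10); w[:2] = 1e8
x = weighted_lsq(A, b, w)
print(x, A[:2] @ x - b[:2])   # heavily weighted residuals should be near zero
```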

3.
We present a new algorithm for solving a linear least squares problem with linear constraints. These are equality constraint equations and nonnegativity constraints on selected variables. This problem, while appearing to be quite special, is the core problem arising in the solution of the general linearly constrained linear least squares problem. The reduction process of the general problem to the core problem can be done in many ways. We discuss three such techniques. The method employed for solving the core problem is based on combining the equality constraints with differentially weighted least squares equations to form an augmented least squares system. This weighted least squares system, which is equivalent to a penalty function method, is solved with nonnegativity constraints on selected variables. Three types of examples are presented that illustrate applications of the algorithm. The first is rank-deficient, constrained least squares curve fitting. The second is concerned with solving linear systems of algebraic equations with Hilbert matrices and bounds on the variables. The third illustrates a constrained curve fitting problem with inconsistent inequality constraints.
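As a rough illustration of the weighting idea (not the paper's own core solver), the sketch below stacks heavily weighted equality constraints on top of the least squares rows and imposes nonnegativity on selected variables via SciPy's bounded least squares solver. The function name, the weight value, and the demo data are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import lsq_linear

def constrained_lsq(A, b, C, d, nonneg_idx, weight=1e6):
    """Penalty/weighting approach: stack weight*C x = weight*d above the least
    squares rows A x ~ b, then solve with lower bounds of 0 on selected variables."""
    A_aug = np.vstack([weight * C, A])
    b_aug = np.concatenate([weight * d, b])
    lb = np.full(A.shape[1], -np.inf)
    lb[list(nonneg_idx)] = 0.0                 # nonnegativity only on selected variables
    res = lsq_linear(A_aug, b_aug, bounds=(lb, np.inf))
    return res.x

# Demo: fit y ~ x0 + x1*t subject to x0 + x1 = 1 and x1 >= 0.
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([np.ones_like(t), t])
b = 0.3 + 0.9 * t + 0.01 * np.sin(7 * t)
C = np.array([[1.0, 1.0]]); d = np.array([1.0])
x = constrained_lsq(A, b, C, d, nonneg_idx=[1])
print(x, C @ x - d)                             # constraint residual should be tiny
```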

4.
Numerical methods for solving linear least squares problems
A common problem in a Computer Laboratory is that of finding linear least squares solutions. These problems arise in a variety of areas and in a variety of contexts. Linear least squares problems are particularly difficult to solve because they frequently involve large quantities of data, and they are ill-conditioned by their very nature. In this paper, we shall consider stable numerical methods for handling these problems. Our basic tool is a matrix decomposition based on orthogonal Householder transformations. Reproduction in whole or in part is permitted for any purpose of the United States Government. This report was supported in part by Office of Naval Research Contract Nonr-225(37) (NR 044-11) at Stanford University.
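A minimal sketch of the QR-based approach this abstract advocates, leaning on NumPy's QR (itself Householder-based) rather than re-implementing the reflections; the ill-conditioned demo problem is illustrative only.

```python
import numpy as np

def householder_lstsq(A, b):
    """Solve min_x ||A x - b||_2 via an orthogonal (Householder-based) QR
    factorization; np.linalg.qr calls LAPACK's Householder routines internally."""
    Q, R = np.linalg.qr(A, mode='reduced')   # A = Q R with orthonormal columns in Q
    return np.linalg.solve(R, Q.T @ b)       # triangular system R x = Q^T b

# Ill-conditioned demo: a small polynomial (Vandermonde) fit.
t = np.linspace(0, 1, 50)
A = np.vander(t, 8, increasing=True)
b = np.cos(3 * t)
x = householder_lstsq(A, b)
print(np.linalg.norm(A @ x - b), np.linalg.cond(A))
```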

5.
A survey is given of direct methods for solving, in the least squares sense, large sparse overdetermined linear systems of full column rank. The survey covers practically all investigations on this topic published in the 1970s and early 1980s. Translated from Itogi Nauki i Tekhniki, Seriya Matematicheskii Analiz, Vol. 23, pp. 219–285, 1985.

6.
Numerische Mathematik - In 1966 Golub and Wilkinson gave upper bounds for the errors of least squares solutions based on orthogonal transformations, in which the square of the condition number of...

7.
The ABS class of methods for linear and nonlinear systems was recently introduced by Abaffy, Broyden, Galantai and Spedicato. Here we consider various ways of applying these algorithms to the determination of the minimal Euclidean norm solution of over-determined linear systems in the least squares sense. Extensive numerical experiments show that the proposed algorithms are efficient and that one of them usually gives better accuracy than standard implementations of the QR orthogonalization algorithm with Householder reflections.

8.
A linesearch (steplength) algorithm for unconstrained nonlinear least squares problems is described. To estimate the steplength inside the linesearch algorithm, a new method that interpolates the residual vector is used together with a standard method that interpolates the sums of squares. Numerical results are reported that point out the advantage of the new steplength estimation method.

9.
A general analysis of the condition of the linear least squares problem is given. The influence of rounding errors is studied in detail for a modified version of the Gram-Schmidt orthogonalization used to obtain a factorization A = QR of a given m×n matrix A, where R is upper triangular and Q^T Q = I. Let x be the vector which minimizes ‖b − Ax‖_2 and r = b − Ax. It is shown that if inner products are accumulated in double precision, then the errors in the computed x and r are less than the errors resulting from some simultaneous initial perturbation δA, δb such that
$$\|\delta A\|_E / \|A\|_E \approx \|\delta b\|_2 / \|b\|_2 \approx 2 \cdot n^{3/2} \text{ machine units}.$$

10.
Numerical Algorithms - In our work, we consider the linear least squares problem for m × n systems of linear equations Ax = b, m ≥ n, such that the matrix A and right-hand side vector b...

11.
In this paper, we present the preconditioned generalized accelerated overrelaxation (GAOR) method for solving linear systems based on a class of weighted linear least squares problems. Two kinds of preconditioning are proposed, and each one contains three preconditioners. We compare the spectral radii of the iteration matrices of the preconditioned and the original methods. The comparison results show that the convergence rate of the preconditioned GAOR methods is indeed better than the rate of the original method, whenever the original method is convergent. Finally, a numerical example is presented in order to confirm these theoretical results.

12.
In this work we consider the problem of training a linear classifier under the assumption that the number of data points is huge (in particular, the data may be larger than the memory capacity). We propose to adopt a linear least-squares formulation of the problem and an incremental recursive algorithm which requires storing only a square matrix (whose dimension is equal to the number of features of the data). The algorithm (very simple to implement) converges to the solution using each training example once, so that it effectively handles possible memory issues and is a viable method for linear large-scale classification and for real-time applications, provided that the number of features of the data is not too large (say, of the order of thousands). Extensive computational experiments show that the proposed algorithm is at least competitive with the state-of-the-art algorithms for large-scale linear classification.
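A hedged sketch of a recursive least squares classifier in the spirit of the description above (the authors' exact update is not reproduced here): only an n×n matrix is kept in memory, each example is processed once via a Sherman-Morrison rank-one update, and labels are assumed to be ±1. The class and parameter names are invented for the example.

```python
import numpy as np

class IncrementalLSClassifier:
    """Recursive least squares sketch for the streaming setting: store an
    n-by-n matrix (n = number of features) and touch each example once."""
    def __init__(self, n_features, ridge=1e-3):
        self.P = np.eye(n_features) / ridge    # running inverse of the (ridged) Gram matrix
        self.w = np.zeros(n_features)

    def partial_fit(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)                # Sherman-Morrison gain vector
        self.w += k * (y - x @ self.w)         # correct weights by the prediction error
        self.P -= np.outer(k, Px)              # rank-one downdate of the stored inverse

    def predict(self, X):
        return np.sign(X @ self.w)

# Demo on a toy stream of linearly separable points with +/-1 labels.
rng = np.random.default_rng(1)
clf = IncrementalLSClassifier(n_features=2)
for _ in range(5000):
    x = rng.standard_normal(2)
    y = 1.0 if x[0] + 0.5 * x[1] > 0 else -1.0
    clf.partial_fit(x, y)
print(clf.w)
```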

13.
14.
The weighting method for solving a least squares problem with linear equality constraints multiplies the constraints by a large number and appends them to the top of the least squares problem, which is then solved by standard techniques. In this paper we give a new analysis of the method, based on the QR decomposition, that exhibits many features of the algorithm. In particular, it suggests a natural criterion for choosing the weighting factor. This work was supported in part by the National Science Foundation under grant CCR 95503126.
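A small sketch of the weighting method as described: scale the constraints by a large factor, stack them above the least squares rows, and call a standard QR-based solver. The eps^{-1/2}-style choice of the weight below is a common rule of thumb used here as a placeholder; the paper's own criterion is not reproduced.

```python
import numpy as np

def weighted_equality_lsq(A, b, C, d, mu=None):
    """Weighting method: append mu*C x = mu*d above the least squares rows
    A x ~ b and solve the stacked problem with a standard LS solver."""
    if mu is None:
        mu = 1.0 / np.sqrt(np.finfo(float).eps)   # ~6.7e7 in double precision (heuristic)
    M = np.vstack([mu * C, A])
    v = np.concatenate([mu * d, b])
    x, *_ = np.linalg.lstsq(M, v, rcond=None)
    return x

# Demo: minimize ||A x - b||_2 subject to sum(x) = 1.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 4)); b = rng.standard_normal(30)
C = np.ones((1, 4)); d = np.array([1.0])
x = weighted_equality_lsq(A, b, C, d)
print(x.sum())                                    # should be very close to 1
```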

15.
This paper extends prior work by the authors on solving unconstrained nonlinear least squares problems using a factorized quasi-Newton technique. With this aim we use a primal-dual interior-point algorithm for nonconvex nonlinear programming. The factorized quasi-Newton technique is now applied to the Hessian of the Lagrangian function for the transformed problem, which is based on a logarithmic barrier formulation. We emphasize the importance of establishing and maintaining symmetric quasi-definiteness of the reduced KKT system. The algorithm then tries to choose a step size that reduces a merit function and to select a penalty parameter that ensures descent directions along the iterative process. Computational results are included for a variety of constrained least squares problems, and preliminary numerical testing indicates that the algorithm is robust and efficient in practice.

16.
We describe a direct method for solving sparse linear least squares problems. The storage required for the method is no more than that needed for the conventional normal equations approach. However, the normal equations are not computed; orthogonal transformations are applied to the coefficient matrix, thus avoiding the potential numerical instability associated with computing the normal equations. Our approach allows full exploitation of sparsity, and permits the use of a fixed (static) data structure during the numerical computation. Finally, the method processes the coefficient matrix one row at a time, allowing for the convenient use of auxiliary storage and updating operations.
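To illustrate only the row-at-a-time processing (the sparsity exploitation and static data structure, which are the paper's actual contribution, are omitted), here is a dense Givens-rotation sketch that folds one observation row at a time into a triangular factor. Everything in it is an illustrative assumption, not the paper's code.

```python
import numpy as np

def givens_row_update(R, z, row, beta):
    """Rotate one new observation (row, beta) into the triangular factor R and
    the transformed right-hand side z using Givens rotations."""
    n = R.shape[0]
    row = row.astype(float).copy()
    for k in range(n):
        if row[k] == 0.0:
            continue
        r = np.hypot(R[k, k], row[k])
        c, s = R[k, k] / r, row[k] / r          # rotation zeroing row[k] against R[k, k]
        R[k, k:], row[k:] = c * R[k, k:] + s * row[k:], -s * R[k, k:] + c * row[k:]
        z[k], beta = c * z[k] + s * beta, -s * z[k] + c * beta
    return R, z

def row_wise_lstsq(rows, rhs, n):
    R = np.zeros((n, n)); z = np.zeros(n)
    for a_i, b_i in zip(rows, rhs):             # rows could stream from auxiliary storage
        R, z = givens_row_update(R, z, a_i, b_i)
    return np.linalg.solve(R, z)                # back-substitution R x = z

# Demo: agrees with the reference dense solver.
rng = np.random.default_rng(3)
A = rng.standard_normal((100, 5)); b = rng.standard_normal(100)
x = row_wise_lstsq(A, b, 5)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```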

17.
A possible improvement of the Faddeev-Kublanovskaja-Faddeeva lower bound for the least singular value of R by using additional information about R is discussed. A fast algorithm is given for calculating such a bound using the diagonal elements and the elements of largest modulus in each row of R.

18.
We propose a smoothing trust region filter algorithm for nonsmooth nonconvex least squares problems. We present convergence theorems of the proposed algorithm to a Clarke stationary point or a global minimizer of the objective function under certain conditions. Preliminary numerical experiments show the efficiency of the proposed algorithm for finding zeros of a system of polynomial equations with high degrees on the sphere and solving differential variational inequalities.

19.
In this paper the closeness of the total least squares (TLS) and the classical least squares (LS) problem is studied algebraically. Interesting algebraic connections between their solutions, their residuals, their corrections applied to data fitting and their approximate subspaces are proven. All these relationships point out the parameters which mainly determine the equivalences and differences between the two techniques. These parameters also lead to a better understanding of the differences in sensitivity between both approaches with respect to perturbations of the data. In particular, it is shown how the differences between both approaches increase when the equations AX ≈ B become less compatible, when the length of B or X is growing, or when A tends to be rank-deficient. They are maximal when B is parallel with the singular vector of A associated with its smallest singular value. Furthermore, it is shown how TLS leads to a weighted LS problem, and assumptions about the underlying perturbation model of both techniques are deduced. It is shown that many perturbation models correspond with the same TLS solution. Senior Research Assistant of the Belgian N.F.W.O. (National Fund of Scientific Research).

20.
In this paper, an improved algorithm PTLS for solving total least squares (TLS) problems AX ≈ B is presented. As only a basis of the right singular subspace associated with the smallest singular values of the data [A; B] is needed, the computational cost can be reduced considerably by using the partial SVD algorithm. This algorithm computes in an efficient way a basis for the left and/or right singular subspace of a matrix associated with its smallest singular values. An analysis of the operation counts, as well as computational results, shows the relative efficiency of PTLS with respect to the classical TLS algorithm. Typically, PTLS reduces the computation time by a factor of 2.
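For orientation, here is a sketch of the classical TLS solution obtained from the right singular vectors of [A B] associated with the smallest singular values. It uses a full SVD, whereas PTLS computes only that partial singular subspace, which is where the savings come from; the function and demo below are illustrative assumptions, not the PTLS implementation.

```python
import numpy as np

def tls_solution(A, B):
    """Classical TLS solution X = -V12 V22^{-1}, built from the trailing right
    singular vectors of the compound matrix [A  B] (B must be 2-D, m x d)."""
    n = A.shape[1]
    Vt = np.linalg.svd(np.column_stack([A, B]), full_matrices=False)[2]
    V12 = Vt[n:, :n].T          # top block of the trailing right singular vectors
    V22 = Vt[n:, n:].T          # bottom block (must be nonsingular for a TLS solution)
    return -V12 @ np.linalg.inv(V22)

# Demo: TLS vs ordinary LS when both A and b carry noise.
rng = np.random.default_rng(4)
A = rng.standard_normal((200, 3)); x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
A_noisy = A + 0.05 * rng.standard_normal(A.shape)
b_noisy = b + 0.05 * rng.standard_normal(b.shape)
x_tls = tls_solution(A_noisy, b_noisy[:, None]).ravel()
x_ls = np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0]
print(x_tls, x_ls)
```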
