Similar Literature (20 results)
1.
Algorithms for the regularization of ill-conditioned least squares problems
Two regularization methods for ill-conditioned least squares problems are studied from the point of view of numerical efficiency. The regularization methods are formulated as quadratically constrained least squares problems, and it is shown that if they are transformed into a certain standard form, very efficient algorithms can be used for their solution. New algorithms are given, both for the transformation and for the regularization methods in standard form. A comparison to previous algorithms is made and it is shown that the overall efficiency (in terms of the number of arithmetic operations) of the new algorithms is better.
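
This entry gives no code, but the regularization idea it describes can be sketched with standard tools. The illustration below, which is not the paper's algorithm, solves a Tikhonov-regularized least squares problem by stacking the damping term into an ordinary least squares problem; the matrix, right-hand side, and the parameter lam are made-up example data.

    import numpy as np

    def tikhonov_lstsq(A, b, lam):
        """Solve min ||A x - b||^2 + lam^2 ||x||^2 via an augmented least squares problem."""
        n = A.shape[1]
        A_aug = np.vstack([A, lam * np.eye(n)])           # append damping rows lam*I
        b_aug = np.concatenate([b, np.zeros(n)])
        x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        return x

    rng = np.random.default_rng(0)
    A = np.vander(np.linspace(0, 1, 20), 8)               # ill-conditioned Vandermonde matrix
    b = rng.standard_normal(20)
    print(tikhonov_lstsq(A, b, lam=1e-2))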

2.
The ABS class of algorithms for linear and nonlinear systems was recently introduced by Abaffy, Broyden, Galantai and Spedicato. Here we consider various ways of applying these algorithms to determine the minimal Euclidean norm solution of over-determined linear systems in the least squares sense. Extensive numerical experiments show that the proposed algorithms are efficient and that one of them usually gives better accuracy than standard implementations of the QR orthogonalization algorithm with Householder reflections.
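
As a point of reference for the accuracy comparison mentioned above, the sketch below (illustrative only, not the ABS algorithms) computes the least squares solution of a full-rank over-determined system both via a Householder QR factorization and via the SVD-based pseudoinverse; for a full-rank matrix the two coincide up to roundoff.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 5))
    b = rng.standard_normal(50)

    # Householder QR: solve R x = Q^T b using the reduced factorization
    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)

    # Minimum Euclidean norm least squares solution via the pseudoinverse
    x_pinv = np.linalg.pinv(A) @ b

    print(np.linalg.norm(x_qr - x_pinv))   # near machine precision for full-rank A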

3.
This paper is concerned with weighted least squares solutions to general coupled Sylvester matrix equations. Gradient-based iterative algorithms are proposed to solve this problem. This family includes a wide class of iterative algorithms, and two special cases are studied in detail in this paper. Necessary and sufficient conditions guaranteeing the convergence of the proposed algorithms are presented. Sufficient conditions that are easy to compute are also given. The optimal step sizes that maximize the convergence rates of the algorithms, as properly defined in this paper, are also established. Several special cases of the weighted least squares problem, such as a least squares solution to the coupled Sylvester matrix equations problem, solutions to the general coupled Sylvester matrix equations problem, and a weighted least squares solution to the linear matrix equation problem, are solved simultaneously. Several numerical examples are given to illustrate the effectiveness of the proposed algorithms.
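
A rough sketch of the gradient-based iteration idea, reduced to a single Sylvester equation AX + XB = C rather than the general coupled, weighted case: the step size below is a crude heuristic bound, not the optimal step size derived in the paper, and the matrices are synthetic well-conditioned examples.

    import numpy as np

    def sylvester_gradient(A, B, C, mu, iters=2000):
        """Gradient iteration for min ||A X + X B - C||_F^2 (illustrative only)."""
        X = np.zeros((A.shape[1], B.shape[0]))
        for _ in range(iters):
            R = C - A @ X - X @ B                 # residual
            X = X + mu * (A.T @ R + R @ B.T)      # step along the negative gradient
        return X

    rng = np.random.default_rng(2)
    M, N = rng.standard_normal((4, 4)), rng.standard_normal((3, 3))
    A = M @ M.T / 4 + np.eye(4)                   # symmetric positive definite examples
    B = N @ N.T / 3 + np.eye(3)
    X_true = rng.standard_normal((4, 3))
    C = A @ X_true + X_true @ B
    mu = 1.0 / (np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2)   # crude step-size bound
    print(np.linalg.norm(sylvester_gradient(A, B, C, mu) - X_true))  # error shrinks toward zero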

4.
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
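
To make the contrast between the normal-equation and QR-based subproblem solves concrete, here is a single CP-ALS factor update for a small synthetic 3-way tensor. This is only one subproblem, not the full algorithm of the paper, and the unfolding and Khatri-Rao conventions below are chosen for brevity.

    import numpy as np

    rng = np.random.default_rng(3)
    I, J, K, r = 6, 5, 4, 3
    A0, B, C = (rng.standard_normal((d, r)) for d in (I, J, K))
    X = np.einsum('ir,jr,kr->ijk', A0, B, C)             # exact rank-r tensor
    X1 = X.reshape(I, J * K)                             # mode-1 unfolding (row-major)
    M = np.einsum('jr,kr->jkr', B, C).reshape(J * K, r)  # matching Khatri-Rao product

    # Normal-equation update, exploiting the Gram structure (B^T B) * (C^T C)
    G = (B.T @ B) * (C.T @ C)
    A_ne = X1 @ M @ np.linalg.pinv(G)

    # QR-based update: least squares solve of M A^T = X1^T
    Q, R = np.linalg.qr(M)
    A_qr = np.linalg.solve(R, Q.T @ X1.T).T

    print(np.linalg.norm(A_ne - A0), np.linalg.norm(A_qr - A0))   # both recover A0 here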

5.
Motivated by the recently popular probabilistic methods for low-rank approximations and randomized algorithms for least squares problems, we develop randomized algorithms for the total least squares problem with a single right-hand side. We present the Nyström method for medium-sized problems. For the large-scale and ill-conditioned cases, we introduce the randomized truncated total least squares method with the known or estimated rank as the regularization parameter. We analyze the accuracy of the randomized truncated total least squares algorithm and perform numerical experiments to demonstrate the efficiency of our randomized algorithms. The randomized algorithms can greatly reduce the computational time and still maintain good accuracy with very high probability.
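
For orientation, the classical (non-randomized) total least squares solution for a single right-hand side can be read off the SVD of the augmented matrix [A b]; the randomized and truncated variants discussed above are not reproduced here, and the data below are synthetic.

    import numpy as np

    def tls(A, b):
        """Classical total least squares for a single right-hand side via the SVD of [A b]."""
        n = A.shape[1]
        _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
        v = Vt[-1, :]                    # right singular vector of the smallest singular value
        if abs(v[n]) < 1e-12:
            raise ValueError("TLS solution does not exist (last component is zero)")
        return -v[:n] / v[n]

    rng = np.random.default_rng(4)
    A = rng.standard_normal((100, 5))
    x_true = rng.standard_normal(5)
    A_obs = A + 0.01 * rng.standard_normal(A.shape)       # errors in the data matrix
    b_obs = A @ x_true + 0.01 * rng.standard_normal(100)  # errors in the right-hand side
    print(tls(A_obs, b_obs), x_true)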

6.
Algorithms for nonlinear least squares
This paper presents the optimality conditions and geometric characterization of nonlinear least squares problems.

7.
An approach for the creation of high-accuracy versions of the collocations and least squares method for the numerical solution of the Navier-Stokes equations is proposed. New versions of up to the eighth order of accuracy inclusive are implemented. For smooth solutions, numerical experiments on a sequence of grids show that the approximate solutions produced by these versions converge to the exact one with a high order of accuracy as h → 0, where h is the maximal linear cell size of a grid. The numerical results obtained for the benchmark problem of the lid-driven cavity flow suggest that the collocations and least squares method is well suited for the numerical simulation of viscous flows.

8.
Aiming at the identification of nonlinear systems, one of the most challenging problems in system identification, a class of data-driven recursive least squares algorithms is presented in this work. First, a full-form dynamic linearization based linear data model for nonlinear systems is derived. Consequently, a full-form dynamic linearization based data-driven recursive least squares identification method for estimating the unknown parameters of the obtained linear data model is proposed, along with convergence analysis and prediction of the outputs subject to stochastic noise. Furthermore, a partial-form dynamic linearization based data-driven recursive least squares identification algorithm is also developed as a special case of the full-form algorithm. The proposed identification algorithms for nonlinear nonaffine discrete-time systems are flexible in applications and do not rely on any explicit mechanistic model information about the systems. Additionally, the number of parameters in the obtained linear data model can be tuned flexibly to reduce computational complexity. The validity of the two identification algorithms is verified by rigorous theoretical analysis and simulation studies.
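
The dynamic-linearization data models of this paper are not reproduced here, but the recursive estimation step they rely on is the textbook recursive least squares update, sketched below with a forgetting factor for a simple linear-in-parameters model with made-up data.

    import numpy as np

    def rls_update(theta, P, phi, y, lam=1.0):
        """One recursive least squares step; lam is the forgetting factor."""
        K = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + K * (y - phi @ theta)     # correct by the prediction error
        P = (P - np.outer(K, phi) @ P) / lam      # covariance update
        return theta, P

    rng = np.random.default_rng(5)
    true_theta = np.array([0.8, -0.4, 1.5])
    theta, P = np.zeros(3), 1e3 * np.eye(3)
    for _ in range(500):
        phi = rng.standard_normal(3)                          # regressor vector
        y = phi @ true_theta + 0.05 * rng.standard_normal()   # noisy measurement
        theta, P = rls_update(theta, P, phi, y)
    print(theta)                                              # close to true_theta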

9.
We propose and implement new, more general versions of the method of collocations and least squares (the CLS method) and, for a system of linear algebraic equations, an orthogonal method for accelerating the convergence of an iterative solution. The use of the latter method and the proper choice of values of control parameters, based on the results of investigating the dependence of the properties of the CLS method on these parameters, as well as some other improvements of the CLS method suggested in this paper, allow one to solve numerically problems for Navier-Stokes equations in a reasonable time using a single-processor computer even for grids as large as 1280 × 1280. In this case, the total number of unknown variables is about 25 · 10^6. The numerical results for the problem of the lid-driven cavity flow of a viscous fluid are in good agreement with known results of other authors, including those obtained by means of schemes of higher approximation order with a small artificial viscosity. This and some other facts prove that the new versions of the CLS method make it possible to obtain an approximate solution with high accuracy.

10.
In previous work we introduced a construction to produce biorthogonal multiresolutions from given subdivisions. The approach involved estimating the solution to a least squares problem by means of a number of smaller least squares approximations on local portions of the data. In this work we use a result by Dahlquist et al. on the method of averages to make observational comparisons between this local least squares estimation and full least squares approximation. We have explored examples in two problem domains: data reduction and data approximation. We observe that, particularly for design matrices with a repetitive pattern of column entries, the least squares solution is often well estimated by local least squares, that the estimation rapidly improves with the size of the local least squares problems, and that the quality of the estimate is largely independent of the size of the full problem. In memory of Germund Dahlquist (1925–2005). AMS subject classification (2000): 93E24

11.
A novel parallel method for determining an approximate total least squares (TLS) solution is introduced. Based on domain decomposition, the global TLS problem is partitioned into several dependent TLS subproblems. A convergent algorithm using the parallel variable distribution technique (SIAM J. Optim. 1994; 4:815–832) is presented. Numerical results support the development and analysis of the algorithms.

12.
Bivariate least squares approximation with linear constraints
In this article, linear least squares problems with linear equality constraints are considered, where the data points lie on the vertices of a rectangular grid. A fast and efficient computational method is presented for the case when the linear equality constraints can be formulated in tensor product form. Using the solutions of several univariate approximation problems, the solution of the bivariate approximation problem can be derived easily. AMS subject classification (2000): 65D05, 65D07, 65D10, 65F05, 65F20
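
The tensor-product decoupling mentioned above can be illustrated without the equality constraints: on a rectangular grid, the bivariate least squares problem min ||A X B^T - F||_F splits into univariate solves. The bases, grid, and data below are made up, and the constrained case treated in the paper is not handled.

    import numpy as np

    rng = np.random.default_rng(6)
    xs, ys = np.linspace(0, 1, 30), np.linspace(0, 1, 25)
    A = np.vander(xs, 4)                          # univariate design matrix in x
    B = np.vander(ys, 3)                          # univariate design matrix in y
    F = rng.standard_normal((30, 25))             # data on the rectangular grid

    # Bivariate least squares min ||A X B^T - F||_F via two univariate solves
    X = np.linalg.pinv(A) @ F @ np.linalg.pinv(B).T

    # Check against the equivalent flattened (Kronecker) least squares problem
    K = np.kron(B, A)
    x_flat, *_ = np.linalg.lstsq(K, F.flatten(order='F'), rcond=None)
    print(np.linalg.norm(X - x_flat.reshape(4, 3, order='F')))   # agrees to roundoff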

13.
This article is concerned with iterative techniques for linear systems of equations arising from a least squares formulation of boundary value problems. In its classical form, the solution of the least squares method is obtained by solving the traditional normal equation. However, for nonsmooth boundary conditions or in the case of refinement at a selected set of interior points, the matrix associated with the normal equation tends to be ill-conditioned. In this case, the least squares method may be formulated as a Powell multiplier method and the equations solved iteratively. Therein we use and compare two different iterative algorithms. The first algorithm is the preconditioned conjugate gradient method applied to the normal equation, while the second is a new algorithm based on the Powell method and formulated on the stabilized dual problem. The two algorithms are first compared on a one-dimensional problem with poorly conditioned matrices. Results show that, for such problems, the new algorithm gives more accurate results. The new algorithm is then applied to a two-dimensional steady state diffusion problem and a boundary layer problem. A comparison between the least squares method of Bramble and Schatz and the new algorithm demonstrates the ability of the new method to give highly accurate results on the boundary, or at a set of given interior collocation points, without the deterioration of the condition number of the matrix. Conditions for convergence of the proposed algorithm are discussed.
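
The first of the two iterative algorithms compared above, conjugate gradients applied to the normal equations, can be sketched in a matrix-free way with SciPy; the Powell multiplier formulation on the stabilized dual problem is not reproduced, and the dense random matrix and Jacobi preconditioner are only placeholders.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(7)
    A = rng.standard_normal((200, 40))
    b = rng.standard_normal(200)

    # Normal equations A^T A x = A^T b, applied without forming A^T A
    AtA = LinearOperator((40, 40), matvec=lambda v: A.T @ (A @ v), dtype=float)
    d = np.einsum('ij,ij->j', A, A)                                    # diag(A^T A)
    M = LinearOperator((40, 40), matvec=lambda v: v / d, dtype=float)  # Jacobi preconditioner

    x_cg, info = cg(AtA, A.T @ b, M=M)
    x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(info, np.linalg.norm(x_cg - x_ref))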

14.
Ordinary least squares estimation is based on minimizing the squared distance of the response variable to its conditional mean given the predictor variable. We extend this method by including in the criterion function the distance of the squared response variable to its second conditional moment. It is shown that this “second-order” least squares estimator is asymptotically more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, and both estimators have the same asymptotic covariance matrix if the error distribution is symmetric. Simulation studies show that the variance reduction of the new estimator can be as high as 50% for sample sizes lower than 100. As a by-product, the joint asymptotic covariance matrix of the ordinary least squares estimators for the regression parameter and for the random error variance is also derived, which is only available in the literature for very special cases, e.g., when the random error has a normal distribution. The results apply to both linear and nonlinear regression models, where the random error distributions are not necessarily known.

15.
Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets n be the number of constraints and d be the number of variables, with n ≫ d. Then, existing exact methods find a solution vector in O(nd^2) time. We present two randomized algorithms that provide accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the randomized Hadamard transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if n is sufficiently larger than d, the approximate solution can be computed in O(nd ln d) time.
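
A minimal sketch of the "sketch-and-solve" idea: to keep the example short, a Gaussian random projection stands in for the subsampled randomized Hadamard transform described above, and the problem sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(8)
    n, d = 20000, 50
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    s = 1000                                      # sketch size, s << n
    S = rng.standard_normal((s, n)) / np.sqrt(s)  # Gaussian sketch (stand-in for SRHT)
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

    x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
    res_exact = np.linalg.norm(A @ x_exact - b)
    res_sketch = np.linalg.norm(A @ x_sketch - b)
    print((res_sketch - res_exact) / res_exact)   # small relative residual increase, w.h.p.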

16.
In this paper, we present a weighted least squares method to fit scattered data with noise. Existence and uniqueness of a solution are proved and an error bound is derived. The numerical experiments illustrate that our weighted least squares method has better performance than the traditional least squares method in case of noisy data.
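
Generic weighted least squares via row scaling, with inverse-variance weights that down-weight the noisier points; the specific weighting scheme and error bound of the paper are not reproduced, and the quadratic model and noise pattern are invented for illustration.

    import numpy as np

    def weighted_lstsq(A, b, w):
        """Solve min sum_i w_i (a_i^T x - b_i)^2 by scaling rows with sqrt(w_i)."""
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        return x

    rng = np.random.default_rng(9)
    t = np.linspace(0, 1, 200)
    A = np.column_stack([np.ones_like(t), t, t**2])
    sigma = np.where(t > 0.5, 0.5, 0.05)                   # second half of the data is noisier
    b = A @ np.array([1.0, -2.0, 3.0]) + sigma * rng.standard_normal(200)

    print(weighted_lstsq(A, b, w=1.0 / sigma**2))          # weights = inverse noise variance
    print(np.linalg.lstsq(A, b, rcond=None)[0])            # ordinary least squares, for comparison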

17.
Linear mixed models and penalized least squares
Linear mixed-effects models are an important class of statistical models that are used directly in many fields of applications and also are used as iterative steps in fitting other types of mixed-effects models, such as generalized linear mixed models. The parameters in these models are typically estimated by maximum likelihood or restricted maximum likelihood. In general, there is no closed-form solution for these estimates and they must be determined by iterative algorithms such as EM iterations or general nonlinear optimization. Many of the intermediate calculations for such iterations have been expressed as generalized least squares problems. We show that an alternative representation as a penalized least squares problem has many advantageous computational properties including the ability to evaluate explicitly a profiled log-likelihood or log-restricted likelihood, the gradient and Hessian of this profiled objective, and an ECME update to refine this objective.
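
A minimal penalized least squares solve for a random-intercept model, estimating the fixed effects and random effects jointly by stacking the penalty on the random effects into one augmented ordinary least squares problem. The variance ratio lam is treated as known here, whereas profiling and optimizing it is precisely the point of the paper; the data are simulated.

    import numpy as np

    rng = np.random.default_rng(10)
    n_groups, per = 8, 25
    g = np.repeat(np.arange(n_groups), per)                 # group label of each observation
    X = np.column_stack([np.ones(n_groups * per), rng.standard_normal(n_groups * per)])
    Z = np.eye(n_groups)[g]                                 # random-intercept design matrix
    u_true = 0.7 * rng.standard_normal(n_groups)
    y = X @ np.array([2.0, 1.0]) + Z @ u_true + 0.3 * rng.standard_normal(n_groups * per)

    # Penalized least squares: min ||y - X beta - Z u||^2 + lam^2 ||u||^2
    lam = 0.3 / 0.7                                         # sigma_e / sigma_u, assumed known
    top = np.hstack([X, Z])
    bottom = np.hstack([np.zeros((n_groups, X.shape[1])), lam * np.eye(n_groups)])
    coef, *_ = np.linalg.lstsq(np.vstack([top, bottom]),
                               np.concatenate([y, np.zeros(n_groups)]), rcond=None)
    beta_hat, u_hat = coef[:X.shape[1]], coef[X.shape[1]:]
    print(beta_hat)                                         # estimated fixed effects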

18.
Interpolation with radial basis functions is widely used for scattered data approximation. However, it sometimes makes more sense to approximate the solution by a least squares fit, especially when the data are contaminated with noise. A meshfree method, namely the meshless dynamic weighted least squares (MDWLS) method, is presented in this paper to solve least squares problems with noise. The MDWLS method with a Gaussian radial basis function is proposed to fit scattered data with some noisy areas in the problem's domain. Existence and uniqueness of a solution are proved. This method has one parameter that adjusts the accuracy according to the size of the noise. Another advantage of the developed method is that it can be applied to problems with irregular geometric domains. The new approach is applied to some problems in two dimensions, and the obtained results confirm the accuracy and efficiency of the proposed method. The numerical experiments illustrate that our MDWLS method has better performance than the traditional least squares method in the case of noisy data.

19.
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be a local rather than a global optimum. Genetic algorithms have been applied successfully to function optimization and therefore would be effective for nonlinear least squares estimation. This paper provides an illustration of a genetic algorithm applied to a simple nonlinear least squares example.
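
In place of a hand-rolled genetic algorithm, the sketch below uses SciPy's differential_evolution, a related population-based global optimizer, to minimize a nonlinear least squares objective on which gradient-based solvers can stall in local optima; the exponential-plus-sine model, its parameters, and the box bounds are invented for illustration.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(11)
    t = np.linspace(0, 4, 60)
    y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.sin(3.0 * t) + 0.05 * rng.standard_normal(60)

    def sse(p):
        """Nonlinear least squares objective: sum of squared residuals."""
        a, k, c, w = p
        return np.sum((y - (a * np.exp(-k * t) + c * np.sin(w * t)))**2)

    # Population-based global search over box bounds for the four parameters
    result = differential_evolution(sse, bounds=[(0, 5), (0, 5), (-2, 2), (0, 10)], seed=0)
    print(result.x, result.fun)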

20.
A computational procedure is developed for determining the solution of minimal length to a linear least squares problem subject to bounds on the variables. In the first stage, a solution to the least squares problem is computed, and then, in the second stage, the solution of minimal length is determined. The objective function in each step is minimized by an active set method adapted to the special structure of the problem. The systems of linear equations satisfied by the descent direction and the Lagrange multipliers in the minimization algorithm are solved by direct methods based on QR decompositions or iterative preconditioned conjugate gradient methods. The direct and the iterative methods are compared in numerical experiments, where the solutions are sought to a sequence of related, minimal least squares problems subject to bounds on the variables. The application of the iterative methods to large, sparse problems is discussed briefly. This work was supported by The National Swedish Board for Technical Development under contract dnr 80-3341.
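
The first stage described above, a least squares solution subject to bounds on the variables, is available off the shelf in SciPy as lsq_linear; the second stage (selecting the solution of minimal length) and the active set machinery of the paper are not reproduced, and the data are random placeholders.

    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(12)
    A = rng.standard_normal((30, 6))
    b = rng.standard_normal(30)

    # Least squares subject to simple bounds on every variable
    res = lsq_linear(A, b, bounds=(-0.5, 0.5))
    print(res.x, res.cost)

    # Unbounded solution for comparison; note that the bounded solution is not
    # simply a clipped copy of it, since the free variables re-optimize.
    print(np.linalg.lstsq(A, b, rcond=None)[0])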
