Similar Documents (20 results)
1.
In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems. Research supported in part by the Joint Services Electronics Program, contract no. F49620-90-C-0039.
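A minimal NumPy sketch of the rank-1 updating building block (appending one data row and restoring the triangular factor with Givens rotations); the paper's rank-k and downdating variants are not reproduced here, and all names are illustrative:

    import numpy as np

    def qr_add_row(R, x):
        """Return the updated triangular factor after appending row x to the data,
        i.e. R_new with R_new^T R_new = R^T R + x x^T (rank-1 'updating')."""
        n = R.shape[1]
        work = np.vstack([R, x.reshape(1, n)]).astype(float)
        for j in range(n):
            a, b = work[j, j], work[n, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            top, bot = work[j, j:].copy(), work[n, j:].copy()
            work[j, j:] = c * top + s * bot      # Givens rotation zeroes work[n, j]
            work[n, j:] = -s * top + c * bot
        return work[:n, :]

    # quick check that R^T R + x x^T is reproduced
    A = np.random.randn(8, 4)
    R = np.linalg.qr(A, mode="r")
    x = np.random.randn(4)
    R2 = qr_add_row(R, x)
    assert np.allclose(R2.T @ R2, R.T @ R + np.outer(x, x))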

2.
Hybrid methods are developed for improving the Gauss-Newton method in the case of large residual or ill-conditioned nonlinear least-squares problems. These methods are usually used in a form suitable for dense problems, but in the sparse case some standard approaches are unsuitable and some new possibilities appear. We propose efficient hybrid methods for various representations of sparse problems. After describing the basic ideas that help in deriving new hybrid methods, we are concerned with designing hybrid methods for sparse Jacobian and sparse Hessian representations of least-squares problems. The efficiency of the hybrid methods is demonstrated by extensive numerical experiments. This work was supported by the Czech Republic Grant Agency, Grant 201/93/0129. The author is indebted to Jan Vlček for his comments on the first draft of this paper and to anonymous referees for many useful remarks.
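One common hybrid strategy, sketched below for a dense Jacobian, is to try the pure Gauss-Newton step and fall back to a damped step when it fails to reduce the sum of squares; this illustrates the "switch when Gauss-Newton fails" idea only, not the sparse-representation hybrids developed in the paper, and the names and damping parameter are assumptions:

    import numpy as np

    def hybrid_gn_step(r, J, x, mu=1e-2):
        """One hybrid step for min 0.5*||r(x)||^2: pure Gauss-Newton direction,
        with a damped (Levenberg-Marquardt-like) fallback if it does not
        decrease the sum of squares."""
        res, Jac = r(x), J(x)
        f0 = 0.5 * res @ res
        g = Jac.T @ res
        H = Jac.T @ Jac
        d = np.linalg.solve(H, -g)                        # Gauss-Newton direction
        if 0.5 * np.sum(r(x + d) ** 2) < f0:
            return x + d
        d = np.linalg.solve(H + mu * np.eye(len(x)), -g)  # damped fallback direction
        return x + d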

3.
This paper provides a modification to the Gauss-Newton method for nonlinear least squares problems. The new method is based on structured quasi-Newton methods which yield a good approximation to the second derivative matrix of the objective function. In particular, we propose BFGS-like and DFP-like updates in a factorized form which give descent search directions for the objective function. We prove local and q-superlinear convergence of our methods, and give results of computational experiments for the BFGS-like and DFP-like updates. This work was supported in part by the Grant-in-Aid for Encouragement of Young Scientists of the Japanese Ministry of Education: (A)61740133 and (A)62740137.
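A rough sketch of one unfactorized structured BFGS-like update, in which the Gauss-Newton matrix is augmented by an approximation B of the second-order term built from the structured secant pair; the paper's factorized BFGS-like and DFP-like forms are not reproduced, and the safeguards below are only illustrative:

    import numpy as np

    def structured_bfgs_update(B, s, y_sharp):
        """BFGS-like update of B, an approximation to sum_i r_i(x)*Hess r_i(x),
        using s = x_{k+1} - x_k and y_sharp = (J_{k+1} - J_k)^T r_{k+1}."""
        sy = s @ y_sharp
        if sy <= 1e-12:                      # skip update if curvature condition fails
            return B
        Bs = B @ s
        sBs = s @ Bs
        update = np.outer(y_sharp, y_sharp) / sy
        if sBs > 1e-12:
            update -= np.outer(Bs, Bs) / sBs
        return B + update

    def structured_qn_direction(J, r, B):
        """Search direction from the structured model: (J^T J + B) d = -J^T r."""
        return np.linalg.solve(J.T @ J + B, -J.T @ r)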

4.
5.
A three-stage recursive least squares parameter estimation algorithm is derived for controlled autoregressive autoregressive (CARAR) systems. The basic idea is to decompose a CARAR system into three subsystems, each of which contains one parameter vector, and to identify the parameters of each subsystem one by one. Compared with the recursive generalized least squares algorithm, the dimensions of the involved covariance matrices in each subsystem become small, and thus the proposed algorithm has high computational efficiency. Finally, we verify the proposed algorithm with a simulation example.
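The building block shared by each of the three stages is an ordinary recursive least squares update over a small covariance matrix; a generic single-subsystem step is sketched below with illustrative names (not the paper's exact three-stage recursion):

    import numpy as np

    def rls_step(theta, P, phi, y):
        """One recursive least squares update: theta is the current parameter
        estimate, P its covariance matrix, phi the regressor vector, y the new
        output sample."""
        Pphi = P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)           # Kalman-like gain vector
        theta = theta + gain * (y - phi @ theta)   # correct by the prediction error
        P = P - np.outer(gain, Pphi)               # rank-1 covariance downdate
        return theta, P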

6.
Some approximate methods for solving linear hyperbolic systems are presented and analyzed. The methods consist of discretizing with respect to time and solving the resulting hyperbolic system for fixed time by least squares finite element methods. An analysis of least squares approximations is given, including optimal order estimates for piecewise polynomial approximation spaces. Numerical results for the inviscid Burgers' equation are also presented. © 1992 John Wiley & Sons, Inc.

7.
This paper presents a family of methods for the accurate solution of higher-index linear variable-coefficient DAE systems. These methods use the DAE system and some of its first derivatives as constraints in a least squares problem that corresponds to a Taylor series of y, or to an approximate equality derived from a Padé approximation of the exponential function. Accuracy results for systems transformable to standard canonical form are given. Advantages, disadvantages, stability properties and implementation of these methods are discussed, and two numerical examples are given, where we compare our results with results from more traditional methods.

8.
Nonlinear least squares problems over convex sets in R^n are treated here by iterative methods which extend the classical Newton, gradient and steepest descent methods, as well as the methods studied recently by Pereyra and the author. Applications are given to nonlinear least squares problems under linear constraints, and to linear and nonlinear inequalities. Part of the research underlying this report was undertaken for the Office of Naval Research, Contract Nonr-1228(10), Project NR047-021, and for the U.S. Army Research Office, Durham, Contract No. DA-31-124-ARO-D-322 at Northwestern University. Reproduction of this paper in whole or in part is permitted for any purpose of the United States Government.
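As an illustration of the general idea (not the specific methods of the paper), a projected-gradient sketch for a nonlinear least squares problem over a simple convex set, here a box whose projection is coordinate-wise clipping; step size and iteration count are assumptions:

    import numpy as np

    def projected_gradient_nls(r, J, x0, lower, upper, step=1e-2, iters=200):
        """Projected gradient method for min 0.5*||r(x)||^2 subject to
        lower <= x <= upper (projection onto the box is clipping)."""
        x = np.clip(x0, lower, upper)
        for _ in range(iters):
            grad = J(x).T @ r(x)                   # gradient of 0.5*||r(x)||^2
            x = np.clip(x - step * grad, lower, upper)
        return x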

9.
The variable projection algorithm of Golub and Pereyra (SIAM J. Numer. Anal. 10:413-432, 1973) has proven to be quite valuable in the solution of nonlinear least squares problems in which a substantial number of the parameters are linear. Its advantages are efficiency and, more importantly, a better likelihood of finding a global minimizer rather than a local one. The purpose of our work is to provide a more robust implementation of this algorithm, include constraints on the parameters, more clearly identify key ingredients so that improvements can be made, compute the Jacobian matrix more accurately, and make future implementations in other languages easy.
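A minimal sketch of the variable projection idea: the linear parameters are eliminated by a linear least squares solve inside the nonlinear minimization. SciPy's generic minimizer and the exponential basis below are stand-ins for demonstration, not the Golub-Pereyra implementation discussed in the abstract:

    import numpy as np
    from scipy.optimize import minimize

    def varpro_objective(alpha, t, y, Phi):
        """Reduced objective: for fixed nonlinear parameters alpha, solve for
        the linear parameters c by linear least squares, then return the
        residual sum of squares."""
        A = Phi(t, alpha)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)   # eliminate linear parameters
        res = y - A @ c
        return 0.5 * res @ res

    # example basis: two decaying exponentials with unknown rates alpha (assumed model)
    Phi = lambda t, alpha: np.exp(-np.outer(t, alpha))
    t = np.linspace(0.0, 3.0, 50)
    y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)
    fit = minimize(varpro_objective, x0=[0.3, 1.5], args=(t, y, Phi),
                   method="Nelder-Mead")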

10.
A new algorithm for nonlinear least-squares problems is introduced and illustrated in this article. It is shown that the new algorithm is more efficient than other algorithms in current use, and it works for problems where other methods fail. The new method is illustrated by solving a number of classical test problems. In light of the present method, improvements to some other methods in current use are also suggested in this article. Bibliography: 22 titles. Published in Zapiski Nauchnykh Seminarov POMI, Vol. 207, pp. 143-157, 1993.

11.
Least squares estimation has been used extensively in many applications, e.g. system identification and signal prediction. When the stochastic process is stationary, the least squares estimators can be found by solving a Toeplitz or near-Toeplitz matrix system, depending on the knowledge of the data statistics. In this paper, we employ the preconditioned conjugate gradient method with circulant preconditioners to solve such systems. Our proposed circulant preconditioners are derived from the spectral property of the given stationary process. In the case where the spectral density function s(·) of the process is known, we prove that if s(·) is a positive continuous function, then the spectrum of the preconditioned system will be clustered around 1 and the method converges superlinearly. However, if the statistics of the process are unknown, then we prove that with probability 1 the spectrum of the preconditioned system is still clustered around 1, provided that large data samples are taken. For finite impulse response (FIR) system identification problems, our numerical results show that an nth-order least squares estimator can usually be obtained in O(n log n) operations when O(n) data samples are used. Finally, we remark that our algorithm can be modified to suit recursive least squares computations, with proper use of the sliding-window method, arising in signal processing applications. Research supported in part by HKRGC grant no. 221600070, ONR contract no. N00014-90-J-1695 and DOE grant no. DE-FG03-87ER25037.
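A sketch of the approach under simplifying assumptions: a Toeplitz normal-equations matrix generated by a geometrically decaying autocovariance, a classical Strang-type circulant preconditioner applied via the FFT, and SciPy's conjugate gradient solver. The paper's preconditioners derived from the spectral density are not reproduced; the circulant below is a stand-in:

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.sparse.linalg import LinearOperator, cg

    n = 256
    rho = 0.5
    t = rho ** np.arange(n)        # assumed autocovariance of a stationary process
    T = toeplitz(t)                # Toeplitz system matrix of the LS problem
    b = np.random.randn(n)

    # Strang-type circulant: copy the central diagonals of T periodically
    c = t.copy()
    c[n // 2 + 1:] = t[1:n - n // 2][::-1]
    eigs = np.fft.fft(c).real      # eigenvalues of the circulant (real and positive here)

    def apply_Cinv(x):
        # apply the inverse of the circulant preconditioner via the FFT
        return np.real(np.fft.ifft(np.fft.fft(x) / eigs))

    M = LinearOperator((n, n), matvec=apply_Cinv)
    x, info = cg(T, b, M=M)        # preconditioned conjugate gradient solve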

12.
Ordinary least squares estimation is based on minimization of the squared distance of the response variable to its conditional mean given the predictor variable. We extend this method by including in the criterion function the distance of the squared response variable to its second conditional moment. It is shown that this "second-order" least squares estimator is asymptotically more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, and both estimators have the same asymptotic covariance matrix if the error distribution is symmetric. Simulation studies show that the variance reduction of the new estimator can be as high as 50% for sample sizes lower than 100. As a by-product, the joint asymptotic covariance matrix of the ordinary least squares estimators for the regression parameter and for the random error variance is also derived, which is only available in the literature for very special cases, e.g. when the random error has a normal distribution. The results apply to both linear and nonlinear regression models, where the random error distributions are not necessarily known.
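An illustrative, identity-weighted version of the criterion (the general weighting is not shown): both the response and the squared response are matched to their model-implied first and second conditional moments. The toy linear model and skewed errors below are assumptions for demonstration only:

    import numpy as np
    from scipy.optimize import minimize

    def sls_criterion(params, x, y, f):
        """Identity-weighted 'second-order' least squares criterion.
        params = (theta..., sigma2); f(x, theta) is the regression function."""
        theta, sigma2 = params[:-1], params[-1]
        mean = f(x, theta)
        r1 = y - mean                          # first-moment residual
        r2 = y ** 2 - (mean ** 2 + sigma2)     # second-moment residual
        return np.sum(r1 ** 2 + r2 ** 2)

    # toy linear model y = a + b*x + e with skewed (non-symmetric) errors
    f = lambda x, th: th[0] + th[1] * x
    x = np.linspace(0.0, 1.0, 200)
    y = 1.0 + 2.0 * x + 0.3 * (np.random.standard_exponential(200) - 1.0)
    est = minimize(sls_criterion, x0=[0.0, 1.0, 0.1], args=(x, y, f),
                   method="Nelder-Mead")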

13.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
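A bare-bones Dai-Liao conjugate gradient iteration applied to f(x) = 0.5*||r(x)||^2, with a fixed small step in place of a line search; the paper's methods additionally replace y_k by a structured secant vector built from the Jacobian, which is not done in this sketch:

    import numpy as np

    def dai_liao_cg(grad, x0, t=0.1, step=1e-3, iters=500):
        """Plain Dai-Liao conjugate gradient for min f(x); for nonlinear least
        squares, grad(x) = J(x)^T r(x).  Fixed step size for simplicity only."""
        x = x0.copy()
        g = grad(x)
        d = -g
        for _ in range(iters):
            x_new = x + step * d
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            denom = d @ y
            # Dai-Liao beta: g_{k+1}^T (y_k - t*s_k) / (d_k^T y_k)
            beta = (g_new @ (y - t * s)) / denom if abs(denom) > 1e-12 else 0.0
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x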

14.
In order to improve the performance of a gamma camera, it is fundamental to accurately reconstruct the photon hit position on the detector surface. In recent years, the increasing demand for small, highly efficient PET systems has led to the development of new hit position estimation methods, with the purpose of improving performance near the edges of the detector, where most of the information is typically lost. In this paper we apply iterative optimization methods, based on regularization of the nonlinear least squares problem, to estimate the photon hit position. Numerical results show that, compared with the classic Anger algorithm, the proposed methods recover more information near the edges.
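A toy 1-D illustration of the contrast drawn above: the Anger (centroid) estimate versus a nonlinear least squares fit of an assumed Gaussian light-spread model. The detector geometry, the model, and the use of SciPy's least_squares in place of the paper's regularized iterative methods are all assumptions:

    import numpy as np
    from scipy.optimize import least_squares

    # hypothetical 1-D detector: photodetector centres and a Gaussian light-spread model
    centers = np.linspace(-20.0, 20.0, 9)       # mm
    lsf = lambda pos, amp, width: amp * np.exp(-((centers - pos) ** 2) / (2 * width ** 2))

    true_pos = 17.5                              # event near the detector edge
    signals = lsf(true_pos, 100.0, 6.0) + np.random.normal(0.0, 1.0, centers.size)

    # classic Anger (centroid) estimate, typically biased toward the centre near the edges
    anger_pos = np.sum(centers * signals) / np.sum(signals)

    # nonlinear least squares fit of the assumed light-spread model
    fit = least_squares(lambda p: lsf(p[0], p[1], p[2]) - signals,
                        x0=[anger_pos, signals.max(), 5.0])
    nls_pos = fit.x[0]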

15.
16.
Recent theoretical and practical investigations have shown that the Gauss-Newton algorithm is the method of choice for the numerical solution of nonlinear least squares parameter estimation problems. It is shown that when line searches are included, the Gauss-Newton algorithm behaves asymptotically like steepest descent, for a special choice of parameterization. Based on this, a conjugate gradient acceleration is developed. It also converges quickly for large residual problems, where the original Gauss-Newton algorithm has a slow rate of convergence. Several numerical test examples are reported, verifying the applicability of the theory.
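A minimal Gauss-Newton iteration with an Armijo backtracking line search, as a baseline for the behaviour discussed above; the special parameterization and the conjugate gradient acceleration are not reproduced:

    import numpy as np

    def gauss_newton(r, J, x0, iters=50, c1=1e-4):
        """Gauss-Newton for min 0.5*||r(x)||^2 with Armijo backtracking."""
        x = x0.copy()
        for _ in range(iters):
            res, Jac = r(x), J(x)
            g = Jac.T @ res
            d, *_ = np.linalg.lstsq(Jac, -res, rcond=None)   # Gauss-Newton direction
            f0, t = 0.5 * res @ res, 1.0
            while 0.5 * np.sum(r(x + t * d) ** 2) > f0 + c1 * t * (g @ d) and t > 1e-8:
                t *= 0.5                                      # backtrack until Armijo holds
            x = x + t * d
        return x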

17.
Algorithms for the nonlinear least squares method
This paper presents the optimality conditions and geometric characteristics of nonlinear least squares.

18.
The line search subproblem in unconstrained optimization is concerned with finding an acceptable steplength which satisfies certain standard conditions. Prototype algorithms are described which guarantee finding such a step in a finite number of operations. This is achieved by first bracketing an interval of acceptable values and then reducing this bracket uniformly by the repeated use of sectioning in a systematic way. Some new theorems about convergence and termination of the line search are presented. Use of these algorithms to solve the line search subproblem in methods for nonlinear least squares is considered. We show that substantial gains in efficiency can be made by making polynomial interpolations to the individual residual functions rather than the overall objective function. We also study modified schemes in which the Jacobian matrix is evaluated as infrequently as possible, and show that further worthwhile savings can be made. Numerical results are presented. This work was supported by the award of a Syrian Ministry of Higher Education Scholarship.
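A simplified bracketing-and-sectioning prototype (bisection in place of polynomial interpolation) that returns a step satisfying weak Wolfe conditions; it illustrates the two-phase structure only, not the specific algorithms or theorems of the paper:

    def bracket_and_section(phi, dphi, c1=1e-4, c2=0.9, t0=1.0, max_iter=50):
        """Find a step for the 1-D function phi(t) = f(x + t*d) (dphi its derivative,
        with dphi(0) < 0) by first bracketing an acceptable interval and then
        shrinking it by bisection ('sectioning')."""
        phi0, dphi0 = phi(0.0), dphi(0.0)
        lo, hi = 0.0, None
        t = t0
        for _ in range(max_iter):
            if phi(t) > phi0 + c1 * t * dphi0:
                hi = t                      # sufficient-decrease fails: step too long
            elif dphi(t) < c2 * dphi0:
                lo = t                      # curvature condition fails: step too short
            else:
                return t                    # both weak Wolfe conditions hold
            t = 2.0 * t if hi is None else 0.5 * (lo + hi)
        return t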

19.
This paper is a geometric study of finding general exponential observers for discrete-time nonlinear systems. Using center manifold theory for maps, we derive necessary and sufficient conditions for general exponential observers for Lyapunov stable discrete-time nonlinear systems. As an application of our characterization of general exponential observers, we give a construction procedure for identity exponential observers for discrete-time nonlinear systems.

20.
A linesearch (steplength) algorithm for unconstrained nonlinear least squares problems is described. To estimate the steplength inside the linesearch algorithm, a new method that interpolates the residual vector is used together with a standard method that interpolates the sum of squares. Numerical results are reported that point out the advantage of the new steplength estimation method.
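One way to read "interpolating the residual vector" is to minimize the norm of the linearized residual r(x) + t*J(x)d in closed form; the sketch below is that illustrative reading, not necessarily the paper's exact method:

    import numpy as np

    def residual_interpolation_step(r, J, d):
        """Steplength minimizing ||r + t*J d||^2 along the direction d, i.e.
        t* = -(J d)^T r / ||J d||^2.  When d is the Gauss-Newton direction this
        formula returns t = 1."""
        Jd = J @ d
        return -(Jd @ r) / (Jd @ Jd)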
