Similar Articles
 Found 20 similar articles (search time: 15 ms)
1.
Recent theoretical and practical investigations have shown that the Gauss-Newton algorithm is the method of choice for the numerical solution of nonlinear least squares parameter estimation problems. It is shown that when line searches are included, the Gauss-Newton algorithm behaves asymptotically like steepest descent for a special choice of parameterization. Based on this, a conjugate gradient acceleration is developed. It also converges quickly for large-residual problems, where the original Gauss-Newton algorithm has a slow rate of convergence. Several numerical test examples are reported, verifying the applicability of the theory.
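The damped Gauss-Newton iteration that this abstract builds on can be sketched as follows. This is a minimal illustration, not the paper's implementation: the backtracking rule, the tolerances, and the zero-residual test problem are all assumptions chosen for clarity.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton with a simple backtracking line search (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # Gauss-Newton step: solve the linearized least-squares problem min ||J p + r||
        p, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Backtrack until the sum-of-squares objective decreases
        t, f0 = 1.0, 0.5 * (r @ r)
        while 0.5 * (residual(x + t * p) @ residual(x + t * p)) > f0 and t > 1e-8:
            t *= 0.5
        x = x + t * p
        if np.linalg.norm(t * p) < tol:
            break
    return x

# Zero-residual test problem: r(x) = (x0^2 + x1 - 2, x0 + x1^2 - 2), solution (1, 1)
r = lambda x: np.array([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
x_star = gauss_newton(r, J, [2.0, 0.5])
```

In the zero-residual case the iteration converges quadratically; the line search mainly guards the early iterations.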

2.
Summary  The inverse Stefan problem can be understood as a problem in nonlinear approximation theory, which we solve numerically by a generalized Gauss-Newton method introduced by Osborne and Watson [19]. Under some assumptions on the parameter space, we prove its quadratic convergence and demonstrate its high efficiency with three numerical examples.

3.
Summary  Strong uniqueness has proved to be an important condition in demonstrating the second order convergence of the generalised Gauss-Newton method for discrete nonlinear approximation problems [4]. Here we compare strong uniqueness with the multiplier condition, which has also been used for this purpose. We describe strong uniqueness in terms of the local geometry of the unit ball and properties of the problem functions at the minimum point. When the norm is polyhedral, we are able to give necessary and sufficient conditions for the second order convergence of the generalised Gauss-Newton algorithm.

4.
An extension of the Gauss-Newton algorithm is proposed to find local minimizers of penalized nonlinear least squares problems, under generalized Lipschitz assumptions. Convergence results of local type are obtained, as well as an estimate of the radius of the convergence ball. Some applications for solving constrained nonlinear equations are discussed and the numerical performance of the method is assessed on some significant test problems.

5.
In this paper, a Gauss-Newton method is proposed for the solution of large-scale nonlinear least-squares problems, by introducing a truncation strategy into the method presented in [9]. First, sufficient conditions are established for ensuring the convergence of an iterative method employing a truncation scheme for computing the search direction as an approximate solution of a Gauss-Newton type equation. Then, a specific truncated Gauss-Newton algorithm is described, whose global convergence is ensured under standard assumptions, together with a superlinear convergence rate in the zero-residual case. The results of computational experiments on a set of standard test problems are reported.
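The truncation idea can be illustrated with an inner conjugate-gradient solve that is stopped once a forcing condition on the inner residual holds. This is a generic sketch under assumed details (forcing constant `eta`, normal-equations formulation, linear test problem), not the specific algorithm of the cited paper.

```python
import numpy as np

def cg_truncated(Amul, b, tol, max_iter=20):
    """Conjugate gradient on A p = b, stopped early once the residual
    norm drops below tol (the truncation / forcing condition)."""
    p = np.zeros_like(b)
    r = b.copy()
    d = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ad = Amul(d)
        alpha = (r @ r) / (d @ Ad)
        p = p + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return p

def truncated_gauss_newton(residual, jacobian, x0, eta=0.1, tol=1e-8, max_iter=50):
    """Sketch of a truncated Gauss-Newton iteration: the normal equations
    J^T J p = -J^T r are solved only approximately by CG."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        # Forcing condition: inner accuracy proportional to the gradient norm
        p = cg_truncated(lambda v: J.T @ (J @ v), -g, eta * np.linalg.norm(g))
        x = x + p
    return x

# Linear (hence zero-residual at the minimizer) test problem: fit x to A x = b
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x_fit = truncated_gauss_newton(lambda v: A @ v - b, lambda v: A, [0.0, 0.0])
```

For a linear residual, each outer step reduces the gradient norm by at least the factor `eta`, so the iteration converges linearly even though no inner solve is exact.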

6.
Summary  The convergence of the Gauss-Newton algorithm for solving discrete nonlinear approximation problems is analyzed for general norms and families of functions. A quantitative global convergence theorem and several theorems on the rate of local convergence are derived. A general stepsize control procedure and two regularization principles are incorporated. Examples indicate the limits of the convergence theorems.

7.
In this paper, a truncated conjugate gradient method with an inexact Gauss-Newton technique is proposed for solving nonlinear systems. The iterative direction is obtained by the conjugate gradient method solving the inexact Gauss-Newton equation. Global convergence and a local superlinear convergence rate of the proposed algorithm are established under some reasonable conditions. Finally, some numerical results are presented to illustrate the effectiveness of the proposed algorithm.

8.
In this work, a new stabilization scheme for the Gauss-Newton method is defined, where the minimum norm solution of the linear least-squares problem is normally taken as the search direction and the standard Gauss-Newton equation is suitably modified only at a subsequence of the iterates. Moreover, the stepsize is computed by means of a nonmonotone line search technique. The global convergence of the proposed algorithm model is proved under standard assumptions, and the superlinear rate of convergence is ensured for the zero-residual case. A specific implementation is described, where the use of the pure Gauss-Newton iteration is conditioned on the progress made in the minimization process by controlling the stepsize. The results of computational experiments performed on a set of standard test problems are reported.

9.
We investigate the convergence of a two-step modification of the Gauss-Newton method under a generalized Lipschitz condition on the first- and second-order derivatives. The convergence order as well as the convergence radius of the method are studied, and the uniqueness ball of the solution of the nonlinear least squares problem is examined. Finally, we carry out numerical experiments on a set of well-known test problems. (© 2014 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
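A common way to build a two-step modification is to evaluate the Jacobian once per outer iteration and apply two corrections with it, the second reusing the frozen Jacobian on the updated residual. The sketch below follows that pattern as an assumption; the cited paper's scheme may differ in detail.

```python
import numpy as np

def two_step_gauss_newton(residual, jacobian, x0, tol=1e-12, max_iter=50):
    """Two-step Gauss-Newton sketch: one Jacobian evaluation, two
    least-squares corrections per outer iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = jacobian(x)
        # First substep: ordinary Gauss-Newton correction
        p1, *_ = np.linalg.lstsq(J, -residual(x), rcond=None)
        y = x + p1
        # Second substep: fresh residual, frozen Jacobian
        p2, *_ = np.linalg.lstsq(J, -residual(y), rcond=None)
        x_new = y + p2
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Zero-residual test problem with solution (1, 1)
r = lambda x: np.array([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
x_two = two_step_gauss_newton(r, J, [1.5, 0.7])
```

Reusing the Jacobian saves the most expensive evaluation while still raising the local convergence order above that of the one-step method.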

10.
A simple spectral correction for the Gauss-Newton model applied to nonlinear least squares problems is presented. The correction consists in adding a sign-free multiple of the identity to the Hessian of the Gauss-Newton model, the multiple being based on spectral approximations of the Hessians of the residual functions. A detailed local convergence analysis is provided for the resulting method applied to the class of quadratic residual problems. Under mild assumptions, the proposed method is proved to be convergent for problems for which the convergence of the Gauss-Newton method might not be ensured. Moreover, the rate of linear convergence is proved to be better than that of Gauss-Newton for a class of nonzero-residual problems. These theoretical results are illustrated by numerical examples with quadratic and non-quadratic residual problems.
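One concrete way to realize such a correction is a secant-based scalar: the second-order term S(x) = Σᵢ rᵢ(x) ∇²rᵢ(x), which plain Gauss-Newton drops, is approximated by σ·I with σ estimated from successive gradients. The sketch below is a hedged illustration under that assumption and is not the cited paper's exact scheme; the indefiniteness fallback is also an assumption.

```python
import numpy as np

def spectral_corrected_gn(residual, jacobian, x0, max_iter=100, tol=1e-8):
    """Gauss-Newton with a spectral (secant-based, sign-free) scalar
    correction sigma added to J^T J -- an illustrative sketch."""
    x = np.asarray(x0, dtype=float)
    g_prev = s_prev = None
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        B = J.T @ J
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        sigma = 0.0
        if g_prev is not None:
            y = g - g_prev  # gradient change along the last step
            # Secant estimate of the curvature missing from J^T J
            sigma = s_prev @ (y - B @ s_prev) / (s_prev @ s_prev)
        M = B + sigma * np.eye(len(x))
        # Fall back to the pure Gauss-Newton model if the correction
        # makes the model matrix indefinite
        if np.any(np.linalg.eigvalsh(M) <= 1e-12):
            M = B
        p = np.linalg.solve(M, -g)
        x, s_prev, g_prev = x + p, p, g
    return x

# Nonzero-residual linear test problem: b is not in the range of A
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x_ls = spectral_corrected_gn(lambda v: A @ v - b, lambda v: A, [0.0, 0.0])
```

For a linear residual the curvature term vanishes, so σ stays zero and the method reduces to plain Gauss-Newton, reaching the least-squares minimizer in one step.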

11.
In this paper we consider constrained nonlinear parameter estimation problems. The method of choice for solving such problems is the generalized Gauss-Newton method. At each Gauss-Newton iteration we solve the linearized parameter estimation problem and compute the covariance matrix, necessary for the error assessment of the estimates, using an iterative linear algebra technique, namely the LSQR algorithm. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
This paper investigates the generalized least squares estimation and the maximum likelihood estimation of the parameters in a multivariate polychoric correlations model, based on data from a multidimensional contingency table. Asymptotic properties of the estimators are discussed. An iterative procedure based on the Gauss-Newton algorithm is implemented to produce the generalized least squares estimates and the standard error estimates. It is shown that, via an iteratively reweighted method, the algorithm produces the maximum likelihood estimates as well. Numerical results on the finite sample behavior of the methods are reported.

13.
Multiplicative calculus (MUC) measures the rate of change of a function in terms of ratios, which makes exponential functions essentially linear in the framework of MUC. Therefore, a generally nonlinear optimization problem containing exponential functions becomes a linear problem in MUC. Taking this as motivation, this paper lays the mathematical foundation of the well-known classical Gauss-Newton minimization (CGNM) algorithm in the framework of MUC. The paper formulates the mathematical derivation of the proposed method, named the multiplicative Gauss-Newton minimization (MGNM) method, along with its convergence properties. The proposed method is generalized to n variables, and all its theoretical results are verified by simulation. Two case studies have been conducted, incorporating multiplicatively linear and nonlinear exponential functions. From the simulation results, it has been observed that the proposed MGNM method converges for 12972 of the 19600 points considered while optimizing a multiplicatively linear exponential function, whereas the CGNM and multiplicative Newton minimization methods converge for only 2111 and 9922 points, respectively. Furthermore, for a given set of initial values, the proposed MGNM converges after only 2 iterations, compared with 5 iterations taken by the other methods. A similar pattern is observed for a multiplicatively nonlinear exponential function. Therefore, the proposed method converges faster and over a larger range of initial values than the conventional methods.

14.
We investigate the convergence of a one-step modification of the Gauss-Newton method using divided differences and a weak generalized Lipschitz condition for the divided differences. The convergence order of the method is examined, and the uniqueness ball for the solution of the nonlinear least squares problem is established. Numerical experiments are also provided. (© 2009 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
An approach to the numerical solution of optimization problems with equality constraints violating the traditional constraint qualification is developed. According to this approach, an (overdetermined) defining system is constructed based on the Fritz John optimality conditions and the Gauss-Newton method is applied to this system. The assumptions required for the implementability and local superlinear convergence of the resulting algorithm are completely characterized in terms of the original problem.

16.
In this paper, we consider convex composite optimization problems on Riemannian manifolds and discuss the semi-local convergence of the Gauss-Newton method with a quasi-regular initial point under the majorant condition. As special cases, we also discuss the convergence of the sequence generated by the Gauss-Newton method under a Lipschitz-type condition, or under the γ-condition.

17.
In this paper, we consider the Extended Kalman Filter (EKF) for solving nonlinear least squares problems. The EKF is an incremental iterative method based on the Gauss-Newton method that has nice convergence properties. Although the EKF is globally convergent under some conditions, the convergence rate is only sublinear under those same conditions. One of the reasons why the EKF shows slow convergence is the lack of an explicit stepsize. In this paper, we propose a stepsize rule for the EKF and establish global convergence of the algorithm under the boundedness of the generated sequence and appropriate assumptions on the objective function. A notable feature of the stepsize rule is that the stepsize is kept greater than or equal to 1 at each iteration, and increases at a linear rate in k under an additional condition. Therefore, we can expect the proposed method to converge faster than the original EKF. We report some numerical results, which demonstrate that the proposed method is promising.

18.
In this paper, we develop, analyze, and test a new algorithm for nonlinear least-squares problems. The algorithm uses a BFGS update of the Gauss-Newton Hessian when some heuristics indicate that the Gauss-Newton method may not make a good step. Some important elements are that the secant or quasi-Newton equations considered are not the obvious ones, and the method does not build up a Hessian approximation over several steps. The algorithm can be implemented easily as a modification of any Gauss-Newton code, and it seems to be useful for large residual problems.

19.
In this paper, we consider a class of stochastic linear complementarity problems (SLCPs) with finitely many elements. A feasible semismooth damped Gauss-Newton algorithm for the SLCP is proposed. Global convergence and local quadratic convergence of the proposed algorithm are obtained under suitable conditions. Some numerical results are reported, which confirm the good theoretical properties of the proposed algorithm.

20.
In this paper, we discuss the semilocal convergence of Martínez's generalization of Brent's and Brown's methods. Through a careful investigation of the algorithm's structure, we convert Martínez's generalized method into an approximate Newton method with a special error term. Based on this equivalence, we prove a semilocal convergence theorem for Martínez's generalized method. This is a complementary result to the convergence theory of Martínez's generalized method.
