Similar Documents
20 similar documents retrieved (search time: 265 ms)
1.
In this paper, the classical Gauss-Newton method for the unconstrained least squares problem is modified by introducing a quasi-Newton approximation to the second-order term of the Hessian. Various quasi-Newton formulas are considered, and numerical experiments show that most of them are more efficient on large-residual problems than the Gauss-Newton method and a general-purpose minimization algorithm based upon the BFGS formula. A particular quasi-Newton formula is shown numerically to be superior. Further improvements are obtained by using a line search that exploits the special form of the function.
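To make the construction concrete, here is a minimal numerical sketch of this kind of method: the Gauss-Newton model Hessian J^T J is augmented by a quasi-Newton matrix A approximating the second-order term S(x) = sum_i r_i(x) Hess r_i(x). The BFGS-style structured update and the safeguards below are illustrative assumptions, not the specific formulas compared in the paper.

```python
import numpy as np

def structured_gn(r, J, x, iters=50, tol=1e-8):
    """Gauss-Newton with a quasi-Newton approximation A of the
    second-order Hessian term (sketch; the paper compares several
    update formulas, the BFGS-style one below is only an example)."""
    n = x.size
    A = np.zeros((n, n))                     # approximates sum_i r_i * Hess r_i
    rk, Jk = r(x), J(x)
    for _ in range(iters):
        g = Jk.T @ rk
        if np.linalg.norm(g) < tol:
            break
        B = Jk.T @ Jk + A                    # full model Hessian
        d = np.linalg.solve(B + 1e-10 * np.eye(n), -g)
        f0, t = 0.5 * (rk @ rk), 1.0
        while 0.5 * np.linalg.norm(r(x + t * d))**2 > f0 + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                         # Armijo backtracking on f = 0.5||r||^2
        x_new = x + t * d
        r_new, J_new = r(x_new), J(x_new)
        s = x_new - x
        yh = (J_new - Jk).T @ r_new          # structured secant right-hand side
        if s @ yh > 1e-12 * np.linalg.norm(s) * np.linalg.norm(yh):
            As = A @ s                       # BFGS-style update of A
            A = A - np.outer(As, As) / (s @ As + 1e-16) + np.outer(yh, yh) / (s @ yh)
        else:
            A = np.zeros((n, n))             # reset on failed curvature condition
        x, rk, Jk = x_new, r_new, J_new
    return x
```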

2.
An Augmented Lagrangian algorithm that uses Gauss-Newton approximations of the Hessian at each inner iteration is introduced and tested using a family of Hard-Spheres problems. The Gauss-Newton model convexifies the quadratic approximations of the Augmented Lagrangian function, thus increasing the efficiency of the iterative quadratic solver. The resulting method is considerably more efficient than the corresponding algorithm that uses true Hessians. A comparative study using the well-known package LANCELOT is presented.

3.
A simple spectral correction for the Gauss-Newton model applied to nonlinear least squares problems is presented. The correction consists in adding a sign-free multiple of the identity to the Hessian of the Gauss-Newton model, the multiple being based on spectral approximations of the Hessians of the residual functions. A detailed local convergence analysis is provided for the resulting method applied to the class of quadratic residual problems. Under mild assumptions, the proposed method is proved to be convergent for problems for which the convergence of the Gauss-Newton method might not be ensured. Moreover, the rate of linear convergence is proved to be better than that of the Gauss-Newton method for a class of nonzero-residual problems. These theoretical results are illustrated by numerical examples with quadratic and non-quadratic residual problems.
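As an illustration, the sketch below implements one plausible reading of such a correction: a sign-free multiple mu of the identity, obtained from a secant quotient, is added to the Gauss-Newton model Hessian. The precise definition of mu in the paper may differ; the quotient used here is an assumption.

```python
import numpy as np

def spectral_gn(r, J, x, iters=100, tol=1e-8):
    """Gauss-Newton steps from (J^T J + mu I) d = -J^T r, where the
    sign-free multiple mu is a spectral (secant-quotient) estimate of
    the omitted second-order term; full steps, as in a local method."""
    rk, Jk = r(x), J(x)
    n, mu = x.size, 0.0
    for _ in range(iters):
        g = Jk.T @ rk
        if np.linalg.norm(g) < tol:
            break
        try:
            d = np.linalg.solve(Jk.T @ Jk + mu * np.eye(n), -g)
        except np.linalg.LinAlgError:
            d = np.linalg.lstsq(Jk, -rk, rcond=None)[0]   # fall back to plain GN
        x_new = x + d
        r_new, J_new = r(x_new), J(x_new)
        s = x_new - x
        yh = (J_new - Jk).T @ r_new
        mu = (s @ yh) / (s @ s)              # may be negative: sign-free correction
        x, rk, Jk = x_new, r_new, J_new
    return x
```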

4.
We consider Newton-like line search descent methods for solving nonlinear least-squares problems. The basis of our approach is to choose a method, or parameters within a method, by minimizing a variational measure that estimates the error in an inverse Hessian approximation. In one approach we consider sizing methods and choose sizing parameters in an optimal way. In another approach we consider various possibilities for hybrid Gauss-Newton/BFGS methods. We conclude that a simple Gauss-Newton/BFGS hybrid is both efficient and robust, and we illustrate this by a range of comparative tests with other methods. These experiments include not only many well-known test problems but also some new classes of large-residual problems.
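A toy version of such a hybrid is sketched below. The Fletcher–Xu-style switching test (keep taking Gauss-Newton steps while they reduce f = 0.5||r||^2 by a fixed fraction, otherwise take a BFGS step) is an assumption standing in for the paper's variational selection criterion.

```python
import numpy as np

def hybrid_gn_bfgs(r, J, x, iters=100, tol=1e-8, switch=0.2):
    """Hybrid Gauss-Newton/BFGS sketch with a simple switching rule."""
    n = x.size
    H = np.eye(n)                            # BFGS inverse-Hessian approximation
    rk, Jk = r(x), J(x)
    f, use_gn = 0.5 * (rk @ rk), True
    for _ in range(iters):
        g = Jk.T @ rk
        if np.linalg.norm(g) < tol:
            break
        if use_gn:
            d = np.linalg.lstsq(Jk, -rk, rcond=None)[0]   # Gauss-Newton step
        else:
            d = -H @ g                                     # BFGS step
        t = 1.0
        while 0.5 * np.linalg.norm(r(x + t * d))**2 > f + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        r_new, J_new = r(x_new), J(x_new)
        f_new = 0.5 * (r_new @ r_new)
        use_gn = (f - f_new) >= switch * f   # poor reduction signals large residuals
        s, y = x_new - x, J_new.T @ r_new - g
        if s @ y > 1e-12:                    # standard BFGS update of H
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, rk, Jk, f = x_new, r_new, J_new, f_new
    return x
```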

5.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
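The sketch below shows the shape of such a method: a Dai–Liao-type conjugate gradient direction whose parameter is computed from a structured secant vector built out of the least-squares Hessian structure J^T J + S. The particular vector y# used here is an assumption, not necessarily the paper's choice.

```python
import numpy as np

def structured_cg(r, J, x, iters=200, tol=1e-8, t=0.1):
    """Dai-Liao-type CG for f = 0.5||r||^2 with a structured secant
    vector y# = J_+^T J_+ s + (J_+ - J)^T r_+ (sketch)."""
    rk, Jk = r(x), J(x)
    g = Jk.T @ rk
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        a, f0 = 1.0, 0.5 * (rk @ rk)
        while 0.5 * np.linalg.norm(r(x + a * d))**2 > f0 + 1e-4 * a * (g @ d) and a > 1e-12:
            a *= 0.5                          # Armijo backtracking
        x_new = x + a * d
        r_new, J_new = r(x_new), J(x_new)
        g_new = J_new.T @ r_new
        s = x_new - x
        ys = J_new.T @ (J_new @ s) + (J_new - Jk).T @ r_new   # structured y#
        dy = d @ ys
        beta = (g_new @ (ys - t * s)) / dy if abs(dy) > 1e-16 else 0.0
        d = -g_new + beta * d
        if g_new @ d >= 0:                    # safeguard: restart with steepest descent
            d = -g_new
        x, rk, Jk, g = x_new, r_new, J_new, g_new
    return x
```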

6.
Recent theoretical and practical investigations have shown that the Gauss-Newton algorithm is the method of choice for the numerical solution of nonlinear least squares parameter estimation problems. It is shown that when line searches are included, the Gauss-Newton algorithm behaves asymptotically like steepest descent for a special choice of parameterization. Based on this, a conjugate gradient acceleration is developed. It also converges fast for those large-residual problems where the original Gauss-Newton algorithm has a slow rate of convergence. Several numerical test examples are reported, verifying the applicability of the theory.

7.
In this paper, a Gauss-Newton method is proposed for the solution of large-scale nonlinear least-squares problems, by introducing a truncation strategy into the method presented in [9]. First, sufficient conditions are established for ensuring the convergence of an iterative method employing a truncation scheme for computing the search direction, as an approximate solution of a Gauss-Newton type equation. Then, a specific truncated Gauss-Newton algorithm is described, whose global convergence is ensured under standard assumptions, together with the superlinear convergence rate in the zero-residual case. The results of a computational experimentation on a set of standard test problems are reported.
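A minimal sketch of the truncation strategy: the Gauss-Newton equation J^T J d = -J^T r is solved approximately by inner conjugate gradients, stopped once the inner residual falls below a forcing tolerance tied to the gradient norm. The forcing sequence used below is one common choice and is assumed here, not taken from the paper.

```python
import numpy as np

def truncated_gn(r, J, x, iters=50, tol=1e-8):
    """Truncated Gauss-Newton: inner CG on J^T J d = -g, stopped when
    ||J^T J d + g|| <= eta * ||g||, with eta shrinking near a solution."""
    for _ in range(iters):
        rk, Jk = r(x), J(x)
        g = Jk.T @ rk
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        eta = min(0.5, np.sqrt(gnorm))        # forcing term (a common choice)
        d = np.zeros_like(x)
        res = -g.copy()                       # CG residual for J^T J d = -g
        p = res.copy()
        for _ in range(x.size):
            if np.linalg.norm(res) <= eta * gnorm:
                break                         # truncate: approximate solve is enough
            Jp = Jk @ p
            if Jp @ Jp < 1e-32:
                break
            alpha = (res @ res) / (Jp @ Jp)
            d += alpha * p
            res_new = res - alpha * (Jk.T @ Jp)
            p = res_new + ((res_new @ res_new) / (res @ res)) * p
            res = res_new
        f0, t = 0.5 * (rk @ rk), 1.0
        while 0.5 * np.linalg.norm(r(x + t * d))**2 > f0 + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x
```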

8.
Reduced Hessian methods have been shown to be successful for equality constrained problems. However, there are few results on reduced Hessian methods for general constrained problems. In this paper we propose a method for general constrained problems, based on Byrd and Schnabel's basis-independent algorithm. It can be regarded as a smooth extension of the standard reduced Hessian method. Research supported in part by NSF, AFOSR and ONR through NSF grant DMS-8920550.

9.
A new algorithm for solving smooth large-scale minimization problems with bound constraints is introduced. The way of dealing with active constraints is similar to the one used in some recently introduced quadratic solvers. A limited-memory multipoint symmetric secant method for approximating the Hessian is presented. Positive-definiteness of the Hessian approximation is not enforced. A combination of trust-region and conjugate-gradient approaches is used to explore useful information. Global convergence is proved for a general model algorithm. Results of numerical experiments are presented.

10.
Hybrid methods are developed for improving the Gauss-Newton method in the case of large-residual or ill-conditioned nonlinear least-squares problems. These methods are usually used in a form suitable for dense problems. In the sparse case, however, some standard approaches become unsuitable and some new possibilities appear. We propose efficient hybrid methods for various representations of sparse problems. After describing the basic ideas that help in deriving new hybrid methods, we are concerned with designing hybrid methods for sparse Jacobian and sparse Hessian representations of the least-squares problems. The efficiency of the hybrid methods is demonstrated by extensive numerical experiments. This work was supported by the Czech Republic Grant Agency, Grant 201/93/0129. The author is indebted to Jan Vlček for his comments on the first draft of this paper and to anonymous referees for many useful remarks.

11.
In this work, a new stabilization scheme for the Gauss-Newton method is defined, where the minimum-norm solution of the linear least-squares problem is normally taken as the search direction and the standard Gauss-Newton equation is suitably modified only at a subsequence of the iterates. Moreover, the stepsize is computed by means of a nonmonotone line search technique. The global convergence of the proposed algorithm model is proved under standard assumptions, and the superlinear rate of convergence is ensured for the zero-residual case. A specific implementation algorithm is described, in which the use of the pure Gauss-Newton iteration is conditioned on the progress made in the minimization process by controlling the stepsize. The results of a computational experimentation performed on a set of standard test problems are reported.
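A compact sketch of the two ingredients named above, the minimum-norm Gauss-Newton direction and a nonmonotone (max-type) Armijo test; the modification applied at a subsequence of iterates is omitted here.

```python
import numpy as np

def stabilized_gn(r, J, x, iters=100, tol=1e-8, M=5):
    """Minimum-norm Gauss-Newton direction with a nonmonotone
    line search against the largest of the last M objective values."""
    hist = [0.5 * np.linalg.norm(r(x))**2]
    for _ in range(iters):
        rk, Jk = r(x), J(x)
        g = Jk.T @ rk
        if np.linalg.norm(g) < tol:
            break
        d = -np.linalg.pinv(Jk) @ rk          # minimum-norm least-squares solution
        fref, t = max(hist[-M:]), 1.0         # nonmonotone reference value
        while 0.5 * np.linalg.norm(r(x + t * d))**2 > fref + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        hist.append(0.5 * np.linalg.norm(r(x))**2)
    return x
```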

12.
The implementation of the recently proposed semi-monotonic augmented Lagrangian algorithm for the solution of large convex equality constrained quadratic programming problems is considered. It is proved that if the auxiliary problems are approximately solved by the conjugate gradient method, then the algorithm finds an approximate solution of the class of problems with uniformly bounded spectrum of the Hessian matrix in O(1) matrix–vector multiplications. If applied to the class of problems whose Hessian matrices are in addition either sufficiently sparse or expressible as a product of such sparse matrices, then the cost of the solution is proportional to the dimension of the problems. Theoretical results are illustrated by numerical experiments. This research is supported by grants of the Ministry of Education Nos. S3086102, ET400300415 and MSM 6198910027.

13.
This paper concerns the issue of asymptotic acceptance of the true Hessian and the full step by the sequential quadratic programming algorithm for equality-constrained optimization problems. In order to enforce global convergence, the algorithm is equipped with a standard Armijo linesearch procedure for a nonsmooth exact penalty function. The specificity of the considerations here is that the standard assumptions for local superlinear convergence of the method may be violated. The analysis focuses on the case when there exist critical Lagrange multipliers, and does not require regularity assumptions on the constraints or satisfaction of second-order sufficient optimality conditions. The results provide a basis for the application of known acceleration techniques, such as extrapolation, and allow the formulation of algorithms that can outperform the standard SQP method with BFGS approximations of the Hessian on problems with degenerate constraints. This claim is confirmed by some numerical experiments.

14.
In this paper we consider constrained nonlinear parameter estimation problems. The method of choice for solving such problems is the generalized Gauss-Newton method. At each iteration of the Gauss-Newton method we solve the linearized parameter estimation problem and compute the covariance matrix, necessary for the error assessment of the estimates, using an iterative linear algebra technique, namely the LSQR algorithm. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
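For a single (here unconstrained) linearized subproblem, the step and a covariance diagonal estimate can be obtained together from SciPy's LSQR, as in the sketch below; the constraint handling of the generalized Gauss-Newton method is omitted.

```python
from scipy.sparse.linalg import lsqr

def gn_step_with_covariance(rk, Jk):
    """Solve min ||Jk d + rk|| with LSQR; calc_var=True also returns an
    estimate of diag((Jk^T Jk)^{-1}), the usual covariance diagonal
    for the parameter estimates."""
    out = lsqr(Jk, -rk, atol=1e-10, btol=1e-10, calc_var=True)
    d, var = out[0], out[-1]                 # step and variance estimates
    return d, var

# Hypothetical use at the current iterate xk:
#   d, var = gn_step_with_covariance(r(xk), J(xk))
#   xk = xk + d;  stderr = var ** 0.5       # approximate standard errors
```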

15.
Summary. Strong uniqueness has proved to be an important condition in demonstrating the second order convergence of the generalised Gauss-Newton method for discrete nonlinear approximation problems [4]. Here we compare strong uniqueness with the multiplier condition, which has also been used for this purpose. We describe strong uniqueness in terms of the local geometry of the unit ball and properties of the problem functions at the minimum point. When the norm is polyhedral, we are able to give necessary and sufficient conditions for the second order convergence of the generalised Gauss-Newton algorithm.

16.
A DIRECT SEARCH FRAME-BASED CONJUGATE GRADIENTS METHOD
A derivative-free frame-based conjugate gradients algorithm is presented. Convergence is shown for C^1 functions, and this is verified in numerical trials. The algorithm is tested on a variety of low dimensional problems, some of which are ill-conditioned, and is also tested on problems of high dimension. Numerical results show that the algorithm is effective on both classes of problems. The results are compared with those from a discrete quasi-Newton method, showing that the conjugate gradients algorithm is competitive. The algorithm exhibits the conjugate gradients speed-up on problems for which the Hessian at the solution has repeated or clustered eigenvalues. The algorithm is easily parallelizable.

17.
When solving large complex optimization problems, the user is faced with three major problems. These are (i) the cost in human time of obtaining accurate expressions for the derivatives involved; (ii) the need to store second derivative information; and (iii), of lessening importance, the time taken to solve the problem on the computer. For many problems, a significant part of the latter can be attributed to solving Newton-like equations. In the algorithm described, the equations are solved using a conjugate direction method that needs the Hessian at the current point only when it is multiplied by a trial vector. In this paper, we present a method that finds this product using automatic differentiation while requiring only vector storage. The method takes advantage of any sparsity in the Hessian matrix and computes exact derivatives. It avoids the complexity of symbolic differentiation, the inaccuracy of numerical differentiation, the labor of finding analytic derivatives, and the need for matrix storage. When far from a minimum, an accurate solution of the Newton equations is not justified, so an approximate solution is obtained by using a version of Dembo and Steihaug's truncated Newton algorithm (Ref. 1). This paper was presented at the SIAM National Meeting, Boston, Massachusetts, 1986.
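The central mechanism, a Newton direction from conjugate gradients that touches the Hessian only through products Hv, can be sketched as follows. The paper obtains the product exactly via automatic differentiation; the finite-difference product used here is a stand-in with the same vector-only storage requirement.

```python
import numpy as np

def hessvec_fd(grad, x, v, eps=1e-6):
    """Hessian-vector product by a forward difference of gradients
    (a stand-in for the exact automatic-differentiation product)."""
    return (grad(x + eps * v) - grad(x)) / eps

def truncated_newton_dir(grad, x, max_cg=50, eta=0.1):
    """Approximate Newton direction from CG; the Hessian appears only
    through H @ v products, so no matrix is ever stored."""
    g = grad(x)
    d = np.zeros_like(x)
    res = -g.copy()
    p = res.copy()
    for _ in range(max_cg):
        if np.linalg.norm(res) <= eta * np.linalg.norm(g):
            break
        Hp = hessvec_fd(grad, x, p)
        pHp = p @ Hp
        if pHp <= 0:                          # negative curvature: stop early
            break
        alpha = (res @ res) / pHp
        d += alpha * p
        res_new = res - alpha * Hp
        p = res_new + ((res_new @ res_new) / (res @ res)) * p
        res = res_new
    return d if d.any() else -g               # fall back to steepest descent
```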

18.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to take into account the possible lack of positive-definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used. However, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed. The above algorithms were tested on a set of problems. The overall conclusion was that these methods perform well in practice.
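One such sparsity-exploiting procedure can be illustrated for a Hessian known a priori to be tridiagonal: columns are grouped into "colors" whose sparsity patterns do not overlap, so each gradient difference recovers a whole group of columns at once. This is the classical Curtis–Powell–Reid grouping, shown as an example of the kind of procedure examined; it is not claimed to be the paper's new procedure.

```python
import numpy as np

def tridiagonal_hessian_fd(grad, x, eps=1e-6):
    """Estimate a tridiagonal Hessian from 3 extra gradient evaluations,
    independent of the dimension n, by perturbing non-overlapping
    column groups {0,3,...}, {1,4,...}, {2,5,...} simultaneously."""
    n = x.size
    g0 = grad(x)
    H = np.zeros((n, n))
    for color in range(3):
        cols = np.arange(color, n, 3)
        e = np.zeros(n)
        e[cols] = eps                        # perturb one whole color group
        dg = (grad(x + e) - g0) / eps
        for j in cols:                       # rows |i - j| <= 1 belong to column j
            lo, hi = max(0, j - 1), min(n, j + 2)
            H[lo:hi, j] = dg[lo:hi]
    return 0.5 * (H + H.T)                   # symmetrize the estimate
```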

19.
A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques, allowing one to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in quasi-Newton type algorithms is thus completely eliminated, while the storage problem related to the Hessian no longer exists, since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion.

The Newton, truncated Newton and LBFGS methods are general purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed.

20.
This paper is concerned with the solution of nonlinear least squares problems. A new secant method is suggested, which is based on an affine model of the objective function and updates the first-order approximation at each step as the iterations proceed. We present an algorithm that combines the new secant method with the Gauss-Newton method for general nonlinear least squares problems. Furthermore, we prove that this algorithm is Q-superlinearly convergent for large-residual problems under mild conditions.
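A rough sketch of such a secant/Gauss-Newton combination: the affine model r(x) ≈ r(x_k) + B_k (x - x_k) is kept current with a rank-one Broyden update instead of fresh Jacobian evaluations. The paper's specific secant update may differ; Broyden's formula is used here as an assumption.

```python
import numpy as np

def secant_gn(r, x, J0, iters=100, tol=1e-8):
    """Gauss-Newton steps on an affine model whose matrix B is
    maintained by Broyden's rank-one secant update."""
    B = J0.copy()                            # initial Jacobian approximation
    rk = r(x)
    for _ in range(iters):
        g = B.T @ rk
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.lstsq(B, -rk, rcond=None)[0]   # GN step on the model
        x_new = x + d
        r_new = r(x_new)
        s, y = x_new - x, r_new - rk
        B += np.outer(y - B @ s, s) / (s @ s)        # Broyden secant update
        x, rk = x_new, r_new
    return x
```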

