Similar Documents
 20 similar documents found (search time: 46 ms)
1.
This paper provides a modification to the Gauss-Newton method for nonlinear least squares problems. The new method is based on structured quasi-Newton methods which yield a good approximation to the second derivative matrix of the objective function. In particular, we propose BFGS-like and DFP-like updates in a factorized form which give descent search directions for the objective function. We prove local and q-superlinear convergence of our methods, and give results of computational experiments for the BFGS-like and DFP-like updates. This work was supported in part by the Grant-in-Aid for Encouragement of Young Scientists of the Japanese Ministry of Education: (A)61740133 and (A)62740137.
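The plain Gauss-Newton iteration that these structured methods modify can be sketched as follows. This is a minimal illustration, not the paper's factorized BFGS/DFP update; the exponential model and data are hypothetical:

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimize 0.5*||r(x)||^2 by the Gauss-Newton iteration.

    r : callable returning the residual vector r(x)
    J : callable returning the Jacobian of r at x
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linearized least squares subproblem J(x) dx ≈ -r(x)
        dx, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Fit the model y = a*exp(b*t) to noise-free data with a=2, b=0.5
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(0.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t),
                               x[0] * t * np.exp(x[1] * t)])
x_star = gauss_newton(r, J, x0=[1.5, 0.6])
```

Because this is a zero-residual problem, the plain Gauss-Newton step already converges quadratically; the structured updates above matter for large-residual problems, where the neglected second-order term is significant.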

2.
1. Introduction. This paper deals with the problem of minimizing a sum of squares of nonlinear functions, where r_i(x), i = 1, 2, …, m, are twice continuously differentiable, m ≥ n, r(x) = (r_1(x), r_2(x), …, r_m(x))^T, and "T" denotes transpose. The nonlinear least squares problem is an important kind of optimization problem and appears in many fields such as scientific experiments, maximum likelihood estimation, solution of nonlinear equations, pattern recognition, etc. The derivatives of the function f(x) are given in terms of the Jacobian matrix A ∈ R^{m×n} of r(x), whose elements are …

3.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.

4.
An algorithm for solving nonlinear least squares problems with general linear inequality constraints is described. At each step, the problem is reduced to an unconstrained linear least squares problem in a subspace defined by the active constraints, which is solved using the quasi-Newton method. The major update formula is similar to the one given by Dennis, Gay and Welsch (1981). In this paper, we state the detailed implementation of the algorithm, such as the choice of the active set, the solution of the subproblem and the avoidance of zigzagging. We also prove the global convergence of the algorithm.

5.
In this paper, the classical Gauss-Newton method for the unconstrained least squares problem is modified by introducing a quasi-Newton approximation to the second-order term of the Hessian. Various quasi-Newton formulas are considered, and numerical experiments show that most of them are more efficient on large residual problems than the Gauss-Newton method and a general purpose minimization algorithm based upon the BFGS formula. A particular quasi-Newton formula is shown numerically to be superior. Further improvements are obtained by using a line search that exploits the special form of the function.

6.
In this paper, a new quasi-Newton equation is applied to the structured secant methods for nonlinear least squares problems. We show that the new equation is better than the original quasi-Newton equation as it provides a more accurate approximation to the second order information. Furthermore, combining the new quasi-Newton equation with a product structure, a new algorithm is established. It is shown that the resulting algorithm is quadratically convergent for the zero-residual case and superlinearly convergent for the nonzero-residual case. In order to compare the new algorithm with some related methods, our preliminary numerical experiments are also reported.

7.
Partial separability and partitioned quasi-Newton updating have been recently introduced and experimented with success in large scale nonlinear optimization, large nonlinear least squares calculations and in large systems of nonlinear equations. It is the purpose of this paper to apply this idea to large dimensional nonlinear network optimization problems. The method proposed thus uses these techniques for handling the cost function, while more classical tools, such as variable partitioning and specialized data structures, are used in handling the network constraints. The performance of a code implementing this method, as well as more classical techniques, is analyzed on several numerical examples.
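The partial separability idea can be illustrated by assembling the gradient of f(x) = Σ_i f_i from small "element" functions, each touching only a few variables. This is a toy sketch with hypothetical element functions; a partitioned quasi-Newton method would likewise maintain one small Hessian approximation per element:

```python
import numpy as np

# Each element: (variable indices, element function, element gradient).
# Element 1 depends on (x0, x1); element 2 depends on (x1, x2).
elements = [
    ((0, 1),
     lambda v: (v[0] - 2 * v[1]) ** 2,
     lambda v: np.array([2 * (v[0] - 2 * v[1]), -4 * (v[0] - 2 * v[1])])),
    ((1, 2),
     lambda v: (v[0] * v[1] - 1) ** 2,
     lambda v: np.array([2 * v[1] * (v[0] * v[1] - 1),
                         2 * v[0] * (v[0] * v[1] - 1)])),
]

def f_and_grad(x):
    """Evaluate f and its gradient by gathering/scattering element pieces."""
    f, g = 0.0, np.zeros_like(x)
    for idx, fe, ge in elements:
        v = x[list(idx)]          # gather the few variables this element sees
        f += fe(v)
        g[list(idx)] += ge(v)     # scatter the element gradient back
    return f, g

x = np.array([1.0, 2.0, 3.0])
f, g = f_and_grad(x)
```

The storage win is that each element's (approximate) Hessian is tiny, even when the full variable vector is very large.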

8.
This paper is concerned with quadratic and superlinear convergence of structured quasi-Newton methods for solving nonlinear least squares problems. These methods make use of a special structure of the Hessian matrix of the objective function. Recently, Huschens proposed a new kind of structured quasi-Newton methods and dealt with the convex class of the structured Broyden family, and showed its quadratic and superlinear convergence properties for zero and nonzero residual problems, respectively. In this paper, we extend the results by Huschens to a wider class of the structured Broyden family. We prove local convergence properties of the method in a way different from the proof by Huschens.
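The special structure these methods exploit is the split of the Hessian of f(x) = ½‖r(x)‖² into an exactly computable Gauss-Newton part JᵀJ and a second-order part S(x) = Σ_i r_i(x)∇²r_i(x), which is the piece the quasi-Newton update approximates. A tiny illustration with a hypothetical residual:

```python
import numpy as np

# Residual r(x) = (x0^2 - x1, x0 - 1), so f(x) = 0.5*||r(x)||^2.
def residual(x):
    return np.array([x[0] ** 2 - x[1], x[0] - 1.0])

def jacobian(x):
    return np.array([[2 * x[0], -1.0],
                     [1.0, 0.0]])

def residual_hessians(x):
    # ∇²r_1 and ∇²r_2 (both constant for this residual)
    return [np.array([[2.0, 0.0], [0.0, 0.0]]), np.zeros((2, 2))]

x = np.array([2.0, 1.0])
r, J = residual(x), jacobian(x)
gauss_newton_part = J.T @ J                      # available exactly
second_order_part = sum(ri * Hi for ri, Hi in
                        zip(r, residual_hessians(x)))  # approximated in practice
full_hessian = gauss_newton_part + second_order_part
```

For zero-residual problems, S vanishes at the solution, which is why structured methods can recover quadratic convergence there while remaining superlinear for nonzero residuals.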

9.
This paper is concerned with the solution of nonlinear least squares problems. A new secant method is suggested, which is based on an affine model of the objective function and updates the first-order approximation at each step as the iterations proceed. We present an algorithm which combines the new secant method with the Gauss-Newton method for general nonlinear least squares problems. Furthermore, we prove that this algorithm is Q-superlinearly convergent for large residual problems under mild conditions.

10.
We have recently proposed a structured algorithm for solving constrained nonlinear least-squares problems and established its local two-step Q-superlinear convergence rate. The approach is based on an earlier adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method. The structured adaptation makes use of the ideas of Nocedal and Overton for handling quasi-Newton updates of projected Hessians and adapts a structuring scheme due to Engels and Martinez. For robustness, we have employed a specific nonsmooth line search strategy, taking account of the least-squares objective. Numerical results also confirm the practical relevance of our special considerations for the inherent structure of the least squares. Here, we establish global convergence of the proposed algorithm using a weaker condition than the one used by the exact penalty method of Coleman and Conn for general nonlinear programs.

11.
Parameter estimation and data regression represent special classes of optimization problems. Often, nonlinear programming methods can be tailored to take advantage of the least squares, or more generally, the maximum likelihood, objective function. In previous studies we have developed large-scale nonlinear programming methods that are based on tailored quasi-Newton updating strategies and matrix decomposition of the process model. The resulting algorithms converge in a reduced space of the parameters while simultaneously converging the process model. Moreover, tailoring the method to least squares functions leads to significant improvements in the performance of the algorithm. These approaches can be very efficient for both explicit and implicit models (i.e. problems with small and large degrees of freedom, respectively). In the latter case, degrees of freedom are proportional to a potentially large number of data sets. Applications of this case include errors-in-all-variables estimation, data reconciliation and identification of process parameters. To deal with this structure, we apply a decomposition approach that performs a quadratic programming factorization for each data set. Because these are small components of large problems, an efficient and reliable algorithm results. These methods have generally been implemented on standard von Neumann architectures and few studies exist that exploit the parallelism of nonlinear programming algorithms. It is therefore interesting to note that for implicit model parameter estimation and related process applications, this approach can be quite amenable to parallel computation, because the major cost occurs in matrix decompositions for each data set. Here we describe an implementation of this approach on the Alliant FX/8 parallel computer at the Advanced Computing Research Facility at Argonne National Laboratory. Special attention is paid to the architecture of this machine and its effect on the performance of the algorithm.
This approach is demonstrated on five small, undetermined regression problems as well as a larger process example for simultaneous data reconciliation and parameter estimation.

12.
The solution of a system of nonlinear inequalities is recast as a nonlinear least squares problem. By introducing a smoothing auxiliary function, a sequence of new minimization problems is constructed to successively approximate the least squares problem. Under certain conditions, the global convergence of the proposed smoothing Gauss-Newton algorithm is guaranteed, and under appropriate conditions its local quadratic convergence is proved. Numerical experiments reported at the end of the paper show that the algorithm is effective.
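The reformulation idea can be sketched as follows. The inequalities g_i(x) ≤ 0 become residuals max(g_i(x), 0), and the nondifferentiable max is replaced by a smooth approximation. This sketch uses a hypothetical smoothing function and plain gradient descent rather than the paper's Gauss-Newton scheme, purely to show the least-squares recast:

```python
import numpy as np

def smooth_plus(s, mu):
    # Smooth approximation of max(s, 0); tends to max(s, 0) as mu -> 0.
    return 0.5 * (s + np.sqrt(s * s + 4.0 * mu * mu))

def smooth_plus_grad(s, mu):
    return 0.5 * (1.0 + s / np.sqrt(s * s + 4.0 * mu * mu))

# Toy inequality system: g1(x) = x0 + x1 - 1 <= 0, g2(x) = x0^2 - x1 <= 0
def g(x):
    return np.array([x[0] + x[1] - 1.0, x[0] ** 2 - x[1]])

def g_jac(x):
    return np.array([[1.0, 1.0],
                     [2.0 * x[0], -1.0]])

# Minimize F(x) = sum_i smooth_plus(g_i(x), mu)^2 from an infeasible start
mu, x = 1e-3, np.array([2.0, 0.0])
for _ in range(2000):
    gv, Jg = g(x), g_jac(x)
    phi = smooth_plus(gv, mu)
    grad = 2.0 * (phi * smooth_plus_grad(gv, mu)) @ Jg  # chain rule
    x = x - 0.01 * grad
```

At a feasible point the smoothed residuals are of order mu, so driving the least-squares objective to its floor yields an (approximately) feasible point.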

13.
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate: they may fail to converge to an optimal value, or converge to a local rather than a global optimum. Genetic algorithms have been applied successfully to function optimization and are therefore a promising approach for nonlinear least squares estimation. This paper provides an illustration of a genetic algorithm applied to a simple nonlinear least squares example.
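A genetic algorithm for least squares estimation might look like the following sketch, which fits a line by minimizing the sum of squared errors with truncation selection, uniform crossover, annealed Gaussian mutation, and elitism. The data, population sizes, and annealing schedule are all hypothetical choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise-free data from y = a + b*x with a = 1, b = 2
xdata = np.linspace(0.0, 1.0, 10)
ydata = 1.0 + 2.0 * xdata

def sse(p):
    """Sum-of-squares fitness (lower is better)."""
    return np.sum((p[0] + p[1] * xdata - ydata) ** 2)

pop = rng.uniform(-5.0, 5.0, size=(60, 2))           # random initial population
for gen in range(200):
    fitness = np.array([sse(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]          # keep the fittest third
    i = rng.integers(0, 20, size=60)
    j = rng.integers(0, 20, size=60)
    mask = rng.random((60, 2)) < 0.5                 # uniform crossover
    pop = np.where(mask, parents[i], parents[j])
    pop += rng.normal(0.0, 0.5 * 0.97 ** gen, size=pop.shape)  # annealed mutation
    pop[0] = parents[0]                              # elitism: best survives intact
best = min(pop, key=sse)
```

Unlike gradient-based solvers, nothing here needs derivatives, which is why genetic algorithms can escape poor local optima at the cost of many more function evaluations.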

14.
This paper examines a type of symmetric quasi-Newton update for use in nonlinear optimization algorithms. The updates presented here impose additional properties on the Hessian approximations that do not result if the usual quasi-Newton updating schemes are applied to certain Gibbs free energy minimization problems. The updates derived in this paper are symmetric matrices that satisfy a given matrix equation and are least squares solutions to the secant equation. A general representation for this class of updates is given. The update in this class that has the minimum weighted Frobenius norm is also presented. This work was done at Sandia National Laboratories and supported by the US Dept. of Energy under contract no. DE-AC04-76DP00789.

15.
A new quasi-Newton method for nonlinear least squares problems is proposed. Two advantages of the method are accomplished by utilizing special geometrical properties in the problem class. First, fast convergence is established for well-conditioned problems by interpolating both the current and the previous step in each iteration. Second, high accuracy is achieved for certain difficult problems, such as ill-conditioned problems and problems with large curvatures in the tangent space. Numerical results for artificial problems and standard test problems are presented and discussed.

16.
The problem of minimizing a sum of squares of nonlinear functions is studied. To solve this problem one usually takes advantage of the fact that the objective function is of this special form. Doing this gives the Gauss-Newton method or modifications thereof. To study how these specialized methods compare with general purpose nonlinear optimization routines, test problems were generated where parameters determining the local behaviour of the algorithms could be controlled. On the order of 1000 test problems were generated for testing three algorithms: the Gauss-Newton method, the Levenberg-Marquardt method and a quasi-Newton method.
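The Levenberg-Marquardt method tested here damps the Gauss-Newton step, interpolating between Gauss-Newton and steepest descent. A minimal sketch with a simple accept/reject damping rule (the test problem and damping constants are illustrative choices, not the paper's generator):

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-3, max_iter=200, tol=1e-10):
    """Minimize 0.5*||r(x)||^2 by solving (J^T J + lam*I) dx = -J^T r each
    step, shrinking lam after a successful step and growing it otherwise."""
    x = np.asarray(x0, dtype=float)
    f = 0.5 * np.dot(r(x), r(x))
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        grad = Jx.T @ rx
        if np.linalg.norm(grad) < tol:
            break
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), -grad)
        f_new = 0.5 * np.dot(r(x + dx), r(x + dx))
        if f_new < f:
            x, f, lam = x + dx, f_new, 0.5 * lam   # accept, trust model more
        else:
            lam *= 10.0                            # reject, damp more heavily
    return x

# Rosenbrock-type residuals: r(x) = (10*(x1 - x0^2), 1 - x0)
r = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
J = lambda x: np.array([[-20.0 * x[0], 10.0],
                        [-1.0, 0.0]])
x_star = levenberg_marquardt(r, J, x0=[-1.2, 1.0])
```

With large lam the step approaches a short steepest-descent step, which is what distinguishes this method's global behaviour from plain Gauss-Newton in tests like those described above.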

17.
A new optimization formulation for simulating multiphase flow in porous media is introduced. A locally mass-conservative, mixed finite-element method is employed for the spatial discretization. An unconditionally stable, fully-implicit time discretization is used and leads to a coupled system of nonlinear equations that must be solved at each time step. We reformulate this system as a least squares problem with simple bounds involving only one of the phase saturations. Both a Gauss–Newton method and a quasi-Newton secant method are considered as potential solvers for the optimization problem. Each evaluation of the least squares objective function and gradient requires solving two single-phase self-adjoint, linear, uniformly-elliptic partial differential equations for which very efficient solution techniques have been developed.

18.
An application in magnetic resonance spectroscopy quantification models a signal as a linear combination of nonlinear functions. It leads to a separable nonlinear least squares fitting problem, with linear bound constraints on some variables. The variable projection (VARPRO) technique can be applied to this problem, but needs to be adapted in several respects. If only the nonlinear variables are subject to constraints, then the Levenberg–Marquardt minimization algorithm that is classically used by the VARPRO method should be replaced with a version that can incorporate those constraints. If some of the linear variables are also constrained, then they cannot be projected out via a closed-form expression as is the case for the classical VARPRO technique. We show how quadratic programming problems can be solved instead, and we provide details on efficient function and approximate Jacobian evaluations for the inequality constrained VARPRO method.
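The core variable projection idea is that, for fixed nonlinear parameters, the optimal linear coefficients solve a plain linear least squares problem, so only the nonlinear variables need an outer optimizer. A sketch with a hypothetical two-exponential model; the outer loop here is a crude grid search for clarity, not VARPRO's Levenberg-Marquardt inner solver:

```python
import numpy as np

# Model y ≈ c1*exp(-a1*t) + c2*exp(-a2*t): linear in (c1, c2),
# nonlinear in (a1, a2). Hypothetical exact data with c=(3,1), a=(1.0,0.25).
t = np.linspace(0.0, 4.0, 40)
y = 3.0 * np.exp(-1.0 * t) + 1.0 * np.exp(-0.25 * t)

def projected_residual(alpha):
    """For fixed nonlinear parameters alpha, project out the linear
    coefficients via a closed-form linear least squares solve."""
    Phi = np.column_stack([np.exp(-a * t) for a in alpha])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return y - Phi @ c, c

# Search over the nonlinear variables only (ordering a1 > a2 removes the
# permutation symmetry of the two exponentials)
grid = np.linspace(0.05, 2.0, 40)
best = min(((a1, a2) for a1 in grid for a2 in grid if a1 > a2),
           key=lambda al: np.sum(projected_residual(al)[0] ** 2))
res, c = projected_residual(best)
```

The bound-constrained variant discussed in the abstract arises exactly where this sketch uses `lstsq`: once the linear variables are constrained, that closed-form solve must be replaced by a small quadratic program.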

19.
Consider the nonlinear programming problem (1.1). References [1] and [4] discussed the convergence of variable metric update algorithms for the projected Hessian Z(x)^T ∇²_{xx}L(x, λ)Z(x) at a point x. Assume that f(x) and c_i(x), i = 1, …, t, are twice continuously differentiable, that x* is a solution of (1.1) at which the second-order sufficiency conditions hold, and further assume …

20.
The noise contained in data measured by imaging instruments is often primarily of Poisson type. This motivates, in many cases, the use of the Poisson negative-log likelihood function in place of the ubiquitous least squares data fidelity when solving image deblurring problems. We assume that the underlying blurring operator is compact, so that, as in the least squares case, the resulting minimization problem is ill-posed and must be regularized. In this paper, we focus on total variation regularization and show that the problem of computing the minimizer of the resulting total variation-penalized Poisson likelihood functional is well-posed. We then prove that, as the errors in the data and in the blurring operator tend to zero, the resulting minimizers converge to the minimizer of the exact likelihood function. Finally, the practical effectiveness of the approach is demonstrated on synthetically generated data, and a nonnegatively constrained, projected quasi-Newton method is introduced.
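The nonnegatively constrained minimization of the Poisson negative log-likelihood can be sketched with projected gradient steps in place of the paper's projected quasi-Newton method (and without the total variation penalty). The tiny blurring matrix and data are hypothetical, and the data are noise-free so the minimizer satisfies Au = b exactly:

```python
import numpy as np

# Poisson negative log-likelihood J(u) = sum((A u) - b * log(A u)), u >= 0.
A = np.array([[0.6, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.6]])   # hypothetical compact "blur"
u_true = np.array([1.0, 2.0, 3.0])
b = A @ u_true                    # exact (noise-free) data

u = np.ones(3)                    # feasible start
for _ in range(5000):
    Au = A @ u
    grad = A.T @ (1.0 - b / Au)          # gradient of the Poisson likelihood
    u = np.maximum(u - 0.2 * grad, 1e-12)  # project back onto u >= 0
```

With noisy data and an ill-posed A this iteration would need the total variation regularizer discussed above to keep the minimizer stable.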


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号