Similar Literature
20 similar documents found.
1.
A Separated Solution Method for Generalized Nonlinear Least Squares Problems   (Total citations: 4; self-citations: 1; citations by others: 3)
Xu Chengxian, 《计算数学》 (Mathematica Numerica Sinica), 1992, 14(1): 20-26
Nonlinear least squares concerns data-fitting problems. In measurement, experimentation, and scientific research, one often fits a chosen function y = φ(x, t), containing an adjustable parameter vector x ∈ R^n (usually a nonlinear function of x), to a set of error-contaminated data (T_j, y_j), j = 1, 2, …, m. Least squares selects the parameter vector x so that y = φ(x, t) fits these data optimally in the sense of minimizing the sum of squared fitting errors. If the errors in the T_j (j = 1, 2, …, m) are zero or negligible, the problem reduces to the conventional nonlinear least squares problem:
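The classical Gauss-Newton iteration underlying such data-fitting problems can be sketched as follows; this is a minimal illustrative implementation, and the scalar model y = exp(x·t) and the data below are hypothetical, chosen only to show the update x ← x − (JᵀJ)⁻¹Jᵀr.

```python
import math

def gauss_newton_1d(x, ts, ys, iters=20):
    """Fit y = exp(x*t) to data (ts, ys) by Gauss-Newton (scalar parameter)."""
    for _ in range(iters):
        # residuals r_j = phi(x, t_j) - y_j and Jacobian entries d r_j / dx
        r = [math.exp(x * t) - y for t, y in zip(ts, ys)]
        J = [t * math.exp(x * t) for t in ts]
        # normal-equations step: dx = -(J^T r) / (J^T J)
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        if JtJ == 0:
            break
        x -= Jtr / JtJ
    return x

# Noise-free data generated from x = 0.5, so the fit should recover 0.5.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * t) for t in ts]
x_hat = gauss_newton_1d(0.2, ts, ys)
```

For zero-residual problems like this synthetic one, the Gauss-Newton iteration converges quadratically near the solution; for large-residual problems the neglected second-order term degrades this, which motivates the hybrid methods in the items below.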

2.
In this paper, we develop, analyze, and test a new algorithm for nonlinear least-squares problems. The algorithm uses a BFGS update of the Gauss-Newton Hessian when some heuristics indicate that the Gauss-Newton method may not make a good step. Some important elements are that the secant or quasi-Newton equations considered are not the obvious ones, and the method does not build up a Hessian approximation over several steps. The algorithm can be implemented easily as a modification of any Gauss-Newton code, and it seems to be useful for large residual problems.

3.
This paper presents a no-derivative modification of the hybrid Gauss-Newton-BFGS method for nonlinear least-square problems suggested initially by Al-Baali and Fletcher and modified later by Fletcher and Xu. The modification is made in such a way that, in a Gauss-Newton step, Broyden's rank-one updating formula is used to obtain an approximate Jacobian and, in a BFGS step, the Jacobian is estimated using difference formulas. A set of numerical comparisons among the new hybrid method, the Gauss-Newton-Broyden method, and the finite-difference BFGS method is made and shows that the new hybrid method combines the better features of the Gauss-Newton-Broyden method and the finite-difference BFGS method. This paper also extends to the least-square problem the finite-termination property of the Broyden method, proved for a nonsingular system of equations by Gay and for the full-rank rectangular system of equations by Gerber and Luk. The author would like to acknowledge the support of Xian Jiaotong University, Xian, China, and the award of a United Kingdom ORS studentship. The author wishes to express his gratitude to Professor R. Fletcher for his encouragement and to thank Dr. G. A. Watson and Dr. M. C. Bartholomew-Biggs for their useful comments during the preparation of this paper. The author also wishes to acknowledge Professor R. A. Tapia for his valuable suggestions.

4.
This paper presents an approximate Gauss-Newton-based BFGS method with descent directions for solving systems of nonlinear symmetric equations. The direction generated by this method is always a descent direction, regardless of which line search is used. Under suitable conditions, we prove the global and superlinear convergence of the method, and numerical results are reported.

5.
In this work, we present a new hybrid conjugate gradient method based on the approach of the convex hybridization of the conjugate gradient update parameters of DY and HS+, adapting a quasi-Newton philosophy. The computation of the hybridization parameter is obtained by minimizing the distance between the hybrid conjugate gradient direction and the self-scaling memoryless BFGS direction. Furthermore, a significant property of our proposed method is that it ensures sufficient descent independent of the accuracy of the line search. The global convergence of the proposed method is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection indicate that our proposed method is preferable and in general superior to classic conjugate gradient methods in terms of efficiency and robustness.
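The convex-hybridization form described above can be sketched as follows. Note that the fixed value of theta here is purely illustrative: the paper instead chooses theta by minimizing the distance to the self-scaling memoryless BFGS direction, and the gradient/direction vectors below are hypothetical.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hybrid_beta(g_new, g_old, d_old, theta=0.5):
    """Convex hybridization beta = (1 - theta)*beta_HS+ + theta*beta_DY."""
    y = [gn - go for gn, go in zip(g_new, g_old)]   # gradient difference y_k
    dy = dot(d_old, y)                              # d_k^T y_k
    beta_dy = dot(g_new, g_new) / dy                # Dai-Yuan parameter
    beta_hs = max(dot(g_new, y) / dy, 0.0)          # HS+ (truncated Hestenes-Stiefel)
    return (1 - theta) * beta_hs + theta * beta_dy

# Hypothetical consecutive gradients and previous search direction.
g_old = [1.0, -2.0]
g_new = [0.5, -0.5]
d_old = [-1.0, 2.0]
beta = hybrid_beta(g_new, g_old, d_old)
# The next direction would then be d_new = -g_new + beta * d_old.
```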

6.
In this paper, the classical Gauss-Newton method for the unconstrained least squares problem is modified by introducing a quasi-Newton approximation to the second-order term of the Hessian. Various quasi-Newton formulas are considered, and numerical experiments show that most of them are more efficient on large residual problems than the Gauss-Newton method and a general purpose minimization algorithm based upon the BFGS formula. A particular quasi-Newton formula is shown numerically to be superior. Further improvements are obtained by using a line search that exploits the special form of the function.

7.
In this study, we develop and test a strategy for selectively sizing (multiplying by an appropriate scalar) the approximate Hessian matrix before it is updated in the BFGS and DFP trust-region methods for unconstrained optimization. Our numerical results imply that, for use with the DFP update, the Oren-Luenberger sizing factor is completely satisfactory and selective sizing is vastly superior to the alternatives of never sizing or first-iteration sizing and is slightly better than the alternative of always sizing. Numerical experimentation showed that the Oren-Luenberger sizing factor is not a satisfactory sizing factor for use with the BFGS update. Therefore, based on our newly acquired understanding of the situation, we propose a centered Oren-Luenberger sizing factor to be used with the BFGS update. Our numerical experimentation implies that selectively sizing the BFGS update with the centered Oren-Luenberger sizing factor is superior to the alternatives. These results contradict the folk axiom that sizing should be done only at the first iteration. They also show that, without sufficient sizing, DFP is vastly inferior to BFGS; however, when selectively sized, DFP is competitive with BFGS. This research was supported in part by NSF Cooperative Agreement No. CCR-88-09615, AFOSR Grant 89-0363, DOE Grant DEFG05-86-ER25017, and ARO Grant DAAL03-90-G-0093. This paper was presented at the ICIAM 91 International Conference, Washington DC, July 1991. The authors thank all those individuals who took part in the lively discussions concerning this material following the ICIAM 91 presentation; these discussions influenced the current version of the paper. The first author also thanks Maria Cristina Maciel for assistance and support during the earlier stages of the research. This author is currently a Graduate Student in the Mathematics Department, University of California at Riverside, Riverside, California, and has participated in the Summer Student Visitor Program at the Center for Research in Parallel Computation of Rice University for the past two years.
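The Oren-Luenberger sizing factor discussed above is the scalar γ = (yᵀs)/(sᵀBs) by which the Hessian approximation B is multiplied before the quasi-Newton update is applied. A minimal sketch of computing it, on a hypothetical 2-dimensional example:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(B, v):
    return [dot(row, v) for row in B]

def ol_sizing_factor(B, s, y):
    """Oren-Luenberger sizing factor gamma = (y^T s) / (s^T B s)."""
    return dot(y, s) / dot(s, matvec(B, s))

# Hypothetical data: current Hessian approximation B, step s, gradient change y.
B = [[2.0, 0.0], [0.0, 2.0]]
s = [1.0, 0.0]
y = [3.0, 0.0]
gamma = ol_sizing_factor(B, s, y)
# One would then scale B by gamma before applying the BFGS (or DFP) update.
```

The "centered" variant proposed in the paper modifies this factor; the formula above is only the standard Oren-Luenberger version.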

8.
Hybrid methods are developed for improving the Gauss-Newton method in the case of large residual or ill-conditioned nonlinear least-square problems. These methods are usually used in a form suitable for dense problems. But some standard approaches are unsuitable, and some new possibilities appear, in the sparse case. We propose efficient hybrid methods for various representations of the sparse problems. After describing the basic ideas that help derive new hybrid methods, we are concerned with designing hybrid methods for sparse Jacobian and sparse Hessian representations of the least-square problems. The efficiency of hybrid methods is demonstrated by extensive numerical experiments. This work was supported by the Czech Republic Grant Agency, Grant 201/93/0129. The author is indebted to Jan Vlček for his comments on the first draft of this paper and to anonymous referees for many useful remarks.

9.
In this paper, we propose new members of the Broyden family of quasi-Newton methods. We develop, on the basis of well-known least-change results for the BFGS and DFP updates, a measure for the Broyden family which seeks to take into account the change in both the Hessian approximation and its inverse. The proposal is then to choose the formula which gives the least value of this measure in terms of the two parameters available, and hence to produce an update which is optimal in the sense of the given measure. Several approaches to the problem of minimizing the measure are considered, from which new updates are obtained. In particular, one approach yields a new variational result for the Davidon optimally conditioned method and another yields a reasonable modification to this method. The paper is also concerned with the possibility of estimating, in a certain sense, the size of the eigenvalues of the Hessian approximation on the basis of two available scalars. This allows one to derive further modifications to the above-mentioned methods. Comparisons with the BFGS and Davidon methods are made on a set of standard test problems; the results are promising for certain of the new methods. Part of this work was done during the author's visits at the International Centre for Theoretical Physics, Trieste, Italy, the Systems Department, University of Calabria, Cosenza, Italy, and Ajman University College of Science and Technology, Ajman, United Arab Emirates. The author expresses his gratitude to Professor L. Grandinetti for his encouragement and thanks the anonymous referees for their careful reading of an earlier draft of the paper and valuable comments, which led to a substantial improvement of the original paper.

10.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
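A common nonmonotone line search of the kind referred to above accepts a step if the new objective value lies below the maximum of the last M values (plus the usual Armijo decrease term), rather than below the current value only. A minimal one-dimensional sketch, with hypothetical data and constants:

```python
def nonmonotone_armijo(f, x, d, g_dot_d, history,
                       c=1e-4, alpha=1.0, shrink=0.5, max_back=30):
    """Backtracking line search with a nonmonotone (max-type) reference value.

    history holds the most recent objective values f_k, f_{k-1}, ...;
    g_dot_d is the directional derivative f'(x)^T d (negative for descent).
    """
    ref = max(history)  # nonmonotone reference: max of recent f values
    for _ in range(max_back):
        if f(x + alpha * d) <= ref + c * alpha * g_dot_d:
            return alpha
        alpha *= shrink
    return alpha

# 1-D example: f(x) = x^2 at x = 1 with descent direction d = -1,
# so the directional derivative is f'(1)*d = -2.
f = lambda x: x * x
alpha = nonmonotone_armijo(f, 1.0, -1.0, -2.0, history=[1.0, 1.2])
```

Because the reference is a maximum over several past values, the search can accept steps that temporarily increase f, which is the behavior the abstract describes as competitive for large-scale nonlinear problems.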

11.
Robust optimization (RO) is a tractable method to address uncertainty in optimization problems where uncertain parameters are modeled as belonging to uncertainty sets that are commonly polyhedral or ellipsoidal. The two most frequently described methods in the literature for solving RO problems are reformulation to a deterministic optimization problem or an iterative cutting-plane method. There has been limited comparison of the two methods in the literature, and there is no guidance for when one method should be selected over the other. In this paper we perform a comprehensive computational study on a variety of problem instances for both robust linear optimization (RLO) and robust mixed-integer optimization (RMIO) problems using both methods and both polyhedral and ellipsoidal uncertainty sets. We consider multiple variants of the methods and characterize the various implementation decisions that must be made. We measure performance with multiple metrics and use statistical techniques to quantify certainty in the results. We find for polyhedral uncertainty sets that neither method dominates the other, in contrast to previous results in the literature. For ellipsoidal uncertainty sets we find that the reformulation is better for RLO problems, but there is no dominant method for RMIO problems. Given that there is no clearly dominant method, we describe a hybrid method that solves, in parallel, an instance with both the reformulation method and the cutting-plane method. We find that this hybrid approach can reduce runtimes to 50–75% of the runtime for any one method and suggest ways that this result can be achieved and further improved on.

12.
In this paper we consider constrained nonlinear parameter estimation problems. The method of choice to solve such problems is the generalized Gauss-Newton method. At each iteration of the Gauss-Newton method, we solve the linearized parameter estimation problem and compute the covariance matrix, necessary for the error assessment of the estimates, using an iterative linear-algebra technique, namely the LSQR algorithm. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

13.
The application of quasi-Newton methods is widespread in numerical optimization. Independently of the application, the techniques used to update the BFGS matrices seem to play an important role in the performance of the overall method. In this paper, we address precisely this issue. We compare two implementations of the limited memory BFGS method for large-scale unconstrained problems. They differ in the updating technique and the choice of initial matrix. L-BFGS performs continuous updating, whereas SNOPT uses a restarted limited memory strategy. Our study shows that continuous updating techniques are more effective, particularly for large problems.

14.
In this paper, we consider convex composite optimization problems on Riemannian manifolds, and discuss the semi-local convergence of the Gauss-Newton method with quasi-regular initial point and under the majorant condition. As special cases, we also discuss the convergence of the sequence generated by the Gauss-Newton method under Lipschitz-type condition, or under γ-condition.

15.
1. Introduction. In [6], a QPFTH method was proposed for solving the following nonlinear programming problem, where the functions f: R^n → R^1 and g_j: R^n → R^1, j ∈ J, are twice continuously differentiable. The QPFTH algorithm was developed for solving the sparse large-scale problem (1.1) and was two-step Q-quadratically and R-quadratically convergent (see [6]). The global convergence of this algorithm is discussed in detail in this paper. For the following investigation we require some notations and assumptions. The Lagrangian of problem (1.1) is defined by… Foundation of Jiangs…

16.
The problem of minimizing a sum of squares of nonlinear functions is studied. To solve this problem one usually takes advantage of the fact that the objective function is of this special form. Doing this gives the Gauss-Newton method or modifications thereof. To study how these specialized methods compare with general purpose nonlinear optimization routines, test problems were generated where parameters determining the local behaviour of the algorithms could be controlled. On the order of 1000 test problems were generated for testing three algorithms: the Gauss-Newton method, the Levenberg-Marquardt method, and a quasi-Newton method.

17.
This work is an attempt to develop multiobjective versions of some well-known single objective quasi-Newton methods, including BFGS, self-scaling BFGS (SS-BFGS), and the Huang BFGS (H-BFGS). A comprehensive and comparative study of these methods is presented in this paper. The Armijo line search is used for the implementation of these methods. The numerical results show that the Armijo rule does not work the same way for the multiobjective case as for the single objective case, because, in this case, it imposes a large computational effort and significantly decreases the speed of convergence in contrast to the single objective case. Hence, we consider two cases of all multiobjective versions of quasi-Newton methods: in the presence of the Armijo line search and in the absence of any line search. Moreover, the convergence of these methods without using any line search under some mild conditions is shown. Also, by introducing a multiobjective subproblem for finding the quasi-Newton multiobjective search direction, a simple representation of the Karush–Kuhn–Tucker conditions is derived. The H-BFGS quasi-Newton multiobjective optimization method provides a higher-order accuracy in approximating the second order curvature of the problem functions than the BFGS and SS-BFGS methods. Thus, this method has some benefits compared to the other methods, as shown in the numerical results. All the methods proposed in this paper are evaluated and compared with each other in different aspects. To do so, some well-known test problems and performance assessment criteria are employed. Moreover, these methods are compared with each other with regard to the expended CPU time, the number of iterations, and the number of function evaluations.

18.
In this paper we propose a nonmonotone approach to recurrent neural network training for temporal sequence processing applications. This approach allows learning performance to deteriorate in some iterations; nevertheless, the network's performance is improved over time. A self-scaling BFGS is equipped with an adaptive nonmonotone technique that employs approximations of the Lipschitz constant and is tested on a set of sequence processing problems. Simulation results show that the proposed algorithm outperforms the BFGS as well as other methods previously applied to these sequences, providing an effective modification that is capable of training recurrent networks of various architectures.

19.
The “inverse problem” of determining parameter distributions in linear elastic structures has been explored widely in the literature. In the present article we discuss this problem in the context of a particular formulation of linear elastic systems, dividing the associated inverse problems into two classes which we call Case 1 and Case 2. In the first case the elastic parameters can be obtained by solving a certain set of linear algebraic equations, typically poorly conditioned. In the second case the corresponding problem involves nonlinear equations which usually must be solved by approximation methods, including the Gauss-Newton method for overdetermined systems. Here we discuss the application of this method and a related, empirically more stable, method which we call the inverse Gauss-Newton method. Convergence theorems are established and computational results for sample problems are presented.

20.
We consider a subproblem in parameter estimation using the Gauss-Newton algorithm with regularization for NURBS curve fitting. The NURBS curve is fitted to a set of data points in the least-squares sense, where the sum of squared orthogonal distances is minimized. Control points and weights are estimated; the knot vector and the degree of the NURBS curve are kept constant. In the Gauss-Newton algorithm, a search direction is obtained from a linear overdetermined system with a Jacobian and a residual vector. Because of the properties of our problem, the Jacobian has a particular sparse structure which is suitable for performing a splitting of variables. We handle the computational problems and report the accuracy obtained using different methods, as well as the elapsed real computation time. The splitting of variables yields a method about twice as fast as using plain normal equations.
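The linear overdetermined subproblem mentioned above, J p ≈ −r, can be sketched for a tiny two-parameter case via the plain normal equations (JᵀJ) p = −Jᵀr; the hand-rolled 2×2 solve and the Jacobian/residual values are purely illustrative, and a real implementation would exploit the sparse structure instead.

```python
def gn_step(J, r):
    """One Gauss-Newton direction for a 2-parameter problem via normal equations."""
    # Form the 2x2 matrix J^T J and the 2-vector J^T r.
    a = sum(row[0] * row[0] for row in J)
    b = sum(row[0] * row[1] for row in J)
    c = sum(row[1] * row[1] for row in J)
    g0 = sum(row[0] * ri for row, ri in zip(J, r))
    g1 = sum(row[1] * ri for row, ri in zip(J, r))
    det = a * c - b * b
    # Cramer's rule for (J^T J) p = -J^T r.
    return [(-g0 * c + g1 * b) / det, (-g1 * a + g0 * b) / det]

# Hypothetical 3x2 Jacobian and residual vector.
J = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
r = [1.0, 1.0, 2.0]
p = gn_step(J, r)
```

Forming JᵀJ squares the conditioning of the problem, which is one reason the paper's variable-splitting approach (and QR- or LSQR-type solvers in general) can be preferable to plain normal equations.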


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号