Similar Literature
A total of 20 similar records were found.
1.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
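As a rough, hedged illustration of the direction update such methods are built around (the toy residual, the parameter t, and the structured vector z below are assumptions for demonstration, not details taken from the paper), the following sketch forms a Dai-Liao-type conjugate gradient direction in which the usual gradient-difference vector is replaced by a structured vector assembled from the Gauss-Newton part of the Hessian:

```python
import numpy as np

def residual(x):                       # toy residual r(x) with m = 3, n = 2
    return np.array([x[0] - 1.0, x[1] - 2.0, x[0] * x[1] - 2.0])

def jacobian(x):                       # Jacobian J(x) of the toy residual
    return np.array([[1.0, 0.0], [0.0, 1.0], [x[1], x[0]]])

def structured_dai_liao_direction(x_old, x_new, d_old, t=0.1):
    """Next CG direction; t and the structured vector z are illustrative choices."""
    s = x_new - x_old
    J_old, J_new = jacobian(x_old), jacobian(x_new)
    g_new = J_new.T @ residual(x_new)                 # gradient of 0.5*||r||^2
    # structured stand-in for the usual y_k: Gauss-Newton term plus a
    # secant-style approximation of the residual-curvature term
    z = J_new.T @ (J_new @ s) + (J_new - J_old).T @ residual(x_new)
    beta = g_new @ (z - t * s) / (d_old @ z)          # Dai-Liao-type parameter
    return -g_new + beta * d_old

x0 = np.array([0.5, 0.5])
d0 = -jacobian(x0).T @ residual(x0)    # first direction: steepest descent
x1 = x0 + 0.1 * d0                     # a fixed small step stands in for a line search
print("next search direction:", structured_dai_liao_direction(x0, x1, d0))
```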

2.
Local convergence of a secant-type iterative method for approximating a solution of nonlinear least squares problems is investigated in this paper. The radius of convergence is determined, and usable error estimates are given. Numerical examples are also provided.

3.
In this paper, a new quasi-Newton equation is applied to the structured secant methods for nonlinear least squares problems. We show that the new equation is better than the original quasi-Newton equation, as it provides a more accurate approximation to the second-order information. Furthermore, combining the new quasi-Newton equation with a product structure, a new algorithm is established. It is shown that the resulting algorithm is quadratically convergent for the zero-residual case and superlinearly convergent for the nonzero-residual case. In order to compare the new algorithm with some related methods, our preliminary numerical experiments are also reported.

4.
We propose an extension of secant methods for nonlinear equations that uses a population of previous iterates. In contrast to classical secant methods, where exact interpolation is used, we prefer a least squares approach to calibrate the linear model. We propose an explicit control of the numerical stability of the method.
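A minimal sketch of the general idea, with an invented toy system and helper names: the local linear model is calibrated by least squares over a small population of previous iterates rather than by exact interpolation, and the calibrated matrix is then used for a quasi-Newton-style step.

```python
import numpy as np

def F(x):                                  # toy nonlinear system, n = 2
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def population_secant_step(points, values):
    """points, values: previous iterates x_i and the corresponding F(x_i)."""
    x_c, F_c = points[-1], values[-1]               # current iterate
    dX = np.array([p - x_c for p in points[:-1]])   # rows: x_i - x_c
    dF = np.array([v - F_c for v in values[:-1]])   # rows: F(x_i) - F(x_c)
    # least squares calibration of the linear model A (x_i - x_c) ~ F(x_i) - F(x_c),
    # i.e. solve dX @ A.T ~ dF instead of interpolating exactly
    A = np.linalg.lstsq(dX, dF, rcond=None)[0].T
    step = np.linalg.lstsq(A, -F_c, rcond=None)[0]  # quasi-Newton step
    return x_c + step

pts = [np.array([1.0, 1.0]), np.array([1.5, 1.8]), np.array([1.2, 1.9])]
vals = [F(p) for p in pts]
print("new iterate:", population_secant_step(pts, vals))
```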

5.
An algorithm for solving nonlinear least squares problems with general linear inequality constraints is described. At each step, the problem is reduced to an unconstrained linear least squares problem in a subspace defined by the active constraints, which is solved using the quasi-Newton method. The major update formula is similar to the one given by Dennis, Gay and Welsch (1981). In this paper, we describe the detailed implementation of the algorithm, such as the choice of the active set, the solution of the subproblem and the avoidance of zigzagging. We also prove the global convergence of the algorithm.

6.
A new optimization formulation for simulating multiphase flow in porous media is introduced. A locally mass-conservative, mixed finite-element method is employed for the spatial discretization. An unconditionally stable, fully-implicit time discretization is used and leads to a coupled system of nonlinear equations that must be solved at each time step. We reformulate this system as a least squares problem with simple bounds involving only one of the phase saturations. Both a Gauss–Newton method and a quasi-Newton secant method are considered as potential solvers for the optimization problem. Each evaluation of the least squares objective function and gradient requires solving two single-phase self-adjoint, linear, uniformly-elliptic partial differential equations for which very efficient solution techniques have been developed.

7.
The solution of a system of nonlinear inequalities is transformed into a nonlinear least squares problem. With the help of an introduced smoothing auxiliary function, a sequence of new minimization problems is constructed to successively approximate the least squares problem. Under certain conditions, the global convergence of the proposed smoothing Gauss-Newton algorithm is guaranteed, and under suitable conditions its local quadratic convergence is proved. The numerical experiments at the end of the paper show that the proposed algorithm is effective.
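To make the construction concrete, here is a hedged sketch in which a system of nonlinear inequalities g(x) <= 0 is recast as a least squares problem on a smoothed plus function and treated with Gauss-Newton steps while the smoothing parameter mu is reduced; the CHKS-type smoothing function, the toy inequalities, and the schedule for mu are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def g(x):                               # toy inequalities: we want g(x) <= 0
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def Jg(x):                              # Jacobian of g
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

def smooth_plus(t, mu):                 # CHKS-type smoothing of max(0, t)
    return 0.5 * (t + np.sqrt(t**2 + 4.0 * mu**2))

def smooth_plus_deriv(t, mu):
    return 0.5 * (1.0 + t / np.sqrt(t**2 + 4.0 * mu**2))

def smoothing_gauss_newton(x, mu=0.1, iters=20):
    for _ in range(iters):
        r = smooth_plus(g(x), mu)                          # smoothed residuals
        J = smooth_plus_deriv(g(x), mu)[:, None] * Jg(x)   # chain rule
        x = x + np.linalg.lstsq(J, -r, rcond=None)[0]      # Gauss-Newton step
        mu *= 0.5                                          # drive the smoothing to zero
    return x

print("approximately feasible point:", smoothing_gauss_newton(np.array([2.0, 0.5])))
```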

8.
A linesearch (steplength) algorithm for unconstrained nonlinear least squares problems is described. To estimate the steplength inside the linesearch algorithm, a new method that interpolates the residual vector is used together with a standard method that interpolates the sum of squares. Numerical results are reported that demonstrate the advantage of the new steplength estimation method.
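The sketch below illustrates one way a steplength can be estimated by interpolating the residual vector itself: the residual is interpolated linearly between alpha = 0 and alpha = 1 and the norm of the interpolant is minimized in closed form. The formula and the toy residual are assumptions for illustration; the paper's estimator may differ in detail.

```python
import numpy as np

def residual_interpolation_steplength(r0, r1):
    """r0 = r(x), r1 = r(x + p); minimize ||(1 - a)*r0 + a*r1||^2 over a."""
    d = r0 - r1
    if d @ d == 0.0:                       # residual does not change along p
        return 1.0
    return float(r0 @ d / (d @ d))

def residual(x):                           # toy residual vector
    return np.array([x[0] - 1.0, 2.0 * x[1] + 1.0, x[0] * x[1]])

x = np.array([0.0, 0.0])
p = np.array([1.0, -0.5])                  # some search direction
alpha = residual_interpolation_steplength(residual(x), residual(x + p))
print("estimated steplength:", alpha)
print("old / new sum of squares:",
      residual(x) @ residual(x), residual(x + alpha * p) @ residual(x + alpha * p))
```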

9.
Liu Hailin, 《经济数学》 (Mathematics in Economics), 2007, 24(2): 213-216
This paper proposes a new trust-region method for nonlinear least squares in which each trust-region subproblem needs to be solved only once and the steplength factor of each iteration is prescribed in advance, so that the line search is avoided and the efficiency of the algorithm is greatly improved. Global convergence of the algorithm is proved under certain conditions.

10.
We present a superlinearly convergent exact penalty method for solving constrained nonlinear least squares problems, in which the projected exact penalty Hessian is approximated by using a structured secant updating scheme. We give general conditions for the two-step superlinear convergence of the algorithm and prove that the projected structured Broyden–Fletcher–Goldfarb–Shanno (BFGS), Powell-symmetric-Broyden (PSB), and Davidon–Fletcher–Powell (DFP) update formulas satisfy these conditions. Then we extend the results to the projected structured convex Broyden family update formulas. Extensive testing results obtained by an implementation of our algorithms, as compared to the results obtained by several other competent algorithms, demonstrate the efficiency and robustness of the proposed approach.

11.
In 1988, Tapia (Ref. 1) developed and analyzed SQP secant methods for equality-constrained optimization, explicitly taking the additive structure of the problem setting into account. In this paper, we extend Tapia's augmented scale Lagrangian secant method to the case where additional structure coming from the objective function is available. Using the example of nonlinear least squares with equality constraints, we demonstrate these ideas and develop a convergence theory proving local and q-superlinear convergence for this kind of structured SQP algorithm. This research was supported by the Studienstiftung des Deutschen Volkes.

12.
In this study, a new insight into least squares regression is identified and immediately applied to estimating the parameters of nonlinear rational models. The ordinary explicit expression for a linear-in-the-parameters model is first expanded into an implicit expression, and then a generic algorithm in terms of least squares error is developed for the model parameter estimation. It is proved that a nonlinear rational model can be expressed as an implicit linear-in-the-parameters model; therefore, the developed algorithm can be readily adapted for estimating the parameters of rational models. The major advantage of the generic algorithm is its conciseness and efficiency in dealing with the parameter estimation problems associated with nonlinear-in-the-parameters models. Furthermore, the algorithm can handle regression terms that are subject to noise. The algorithm reduces to ordinary least squares in the case of linear or linear-in-the-parameters models. Three simulated examples plus a realistic case study are used to test and illustrate the performance of the algorithm.
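As a small worked illustration of the central observation (the rational model, its parameter values, and the data below are invented), multiplying a rational model through by its denominator yields a regression that is linear in the parameters, after which ordinary least squares applies; note that one regressor then contains the noisy output, which is the kind of noise-contaminated regression term the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(4)
u = np.linspace(0.1, 5.0, 60)
a0, a1, b1 = 1.0, 2.0, 0.5                            # "true" parameters of the toy model
y = (a0 + a1 * u) / (1.0 + b1 * u) + 0.01 * rng.standard_normal(u.size)

# y = (a0 + a1*u) / (1 + b1*u)  =>  y = a0 + a1*u - b1*(u*y)   (implicit linear form)
Phi = np.column_stack([np.ones_like(u), u, -u * y])   # regressors; the last one is noisy
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated (a0, a1, b1):", theta)
```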

13.
A common type of problem encountered in mathematics is the optimization of nonlinear functions. Many popular algorithms currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate: they might not converge to an optimal value, or if they do, it could be to a local rather than a global optimum. Genetic algorithms have been applied successfully to function optimization and are therefore a natural candidate for nonlinear least squares estimation. This paper provides an illustration of a genetic algorithm applied to a simple nonlinear least squares example.
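A toy genetic algorithm for a small nonlinear least squares fit is sketched below; the exponential model, the parameter encoding, and the GA settings (population size, truncation selection, blend crossover, Gaussian mutation) are invented for demonstration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.02, t.size)   # synthetic data, true (a, b) = (2, 0.7)

def sse(params):                        # objective: sum of squared residuals
    a, b = params
    return float(np.sum((y - a * np.exp(-b * t))**2))

def genetic_least_squares(pop_size=40, generations=100, bounds=(0.0, 3.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([sse(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]     # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            i, j = rng.integers(0, len(parents), size=2)
            w = rng.random()
            child = w * parents[i] + (1.0 - w) * parents[j]     # blend crossover
            child += rng.normal(0.0, 0.05, size=2)              # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, kids])
    best = min(pop, key=sse)
    return best, sse(best)

print("estimated (a, b) and SSE:", genetic_least_squares())
```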

14.
In this paper, we develop, analyze, and test a new algorithm for nonlinear least-squares problems. The algorithm uses a BFGS update of the Gauss-Newton Hessian when some heuristics indicate that the Gauss-Newton method may not make a good step. Some important elements are that the secant or quasi-Newton equations considered are not the obvious ones, and the method does not build up a Hessian approximation over several steps. The algorithm can be implemented easily as a modification of any Gauss-Newton code, and it seems to be useful for large residual problems.
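The sketch below shows the general shape of such a hybrid step: the Gauss-Newton model is used while the sum of squares is decreasing quickly, and otherwise the model Hessian is corrected with a BFGS-style update before the step is computed. The switching test, the safeguards, and the toy usage are generic assumptions, not the authors' heuristics or their particular secant equations.

```python
import numpy as np

def hybrid_gn_bfgs_step(r, J, f_prev, f_curr, B, s, y, switch_tol=0.2):
    """r, J: residual and Jacobian at the current iterate; f_prev, f_curr: previous
    and current sums of squares; B: current model Hessian; s, y: last step and the
    corresponding change in the gradient J(x)^T r(x) (None on the first step)."""
    g = J.T @ r
    B_gn = J.T @ J                                    # Gauss-Newton Hessian
    if f_prev - f_curr >= switch_tol * f_prev:        # fast decrease: trust Gauss-Newton
        B = B_gn
    elif s is not None and y @ s > 1e-12 and s @ (B @ s) > 1e-12:
        Bs = B @ s                                    # BFGS correction of the model Hessian
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    step = np.linalg.solve(B + 1e-10 * np.eye(len(g)), -g)
    return step, B

# one illustrative step on the Rosenbrock residual, starting from the Gauss-Newton Hessian
x = np.array([0.0, 0.0])
r = np.array([x[0] - 1.0, 10.0 * (x[1] - x[0]**2)])
J = np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]])
step, B = hybrid_gn_bfgs_step(r, J, f_prev=2.0, f_curr=1.0, B=J.T @ J, s=None, y=None)
print("step:", step)
```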

15.
This paper extends prior work by the authors on solving unconstrained nonlinear least squares problems using a factorized quasi-Newton technique. With this aim, we use a primal-dual interior-point algorithm for nonconvex nonlinear programming. The factorized quasi-Newton technique is now applied to the Hessian of the Lagrangian function for the transformed problem, which is based on a logarithmic barrier formulation. We emphasize the importance of establishing and maintaining symmetric quasi-definiteness of the reduced KKT system. The algorithm then tries to choose a step size that reduces a merit function, and to select a penalty parameter that ensures descent directions along the iterative process. Computational results are included for a variety of constrained least squares problems, and preliminary numerical testing indicates that the algorithm is robust and efficient in practice.

16.
This paper describes a new, efficient conjugate subgradient algorithm that minimizes a convex function containing a least squares fidelity term and an absolute value regularization term. This method is successfully applied to the inversion of ill-conditioned linear problems, in particular for computed tomography with the dictionary learning method. A comparison with other state-of-the-art methods shows a significant reduction of the number of iterations, which makes this algorithm appealing for practical use.

17.
This paper is concerned with algorithms for solving constrained nonlinear least squares problems. We first propose a local Gauss–Newton method with approximate projections for solving the aforementioned problems and study, by using a general majorant condition, its convergence results, including results on its rate. By combining the latter method and a nonmonotone line search strategy, we then propose a global algorithm and analyze its convergence results. Finally, some preliminary numerical experiments are reported in order to illustrate the advantages of the new schemes.
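As a rough illustration of the shape of such an iteration (not the authors' algorithm), the sketch below alternates an unconstrained Gauss-Newton step with a projection onto a simple box; the paper works with approximate projections onto more general sets, so the exact clipping used here is only a stand-in.

```python
import numpy as np

def projected_gauss_newton(residual, jacobian, project, x, iters=15):
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # unconstrained Gauss-Newton step
        x = project(x + step)                          # bring the iterate back to the set
    return x

# toy problem: fit r(x) = [x0 - 2, x1 + 1, x0*x1] subject to the box 0 <= x <= 1
def residual(x):
    return np.array([x[0] - 2.0, x[1] + 1.0, x[0] * x[1]])

def jacobian(x):
    return np.array([[1.0, 0.0], [0.0, 1.0], [x[1], x[0]]])

def project(x):
    return np.clip(x, 0.0, 1.0)          # exact projection onto the box (a stand-in)

print("constrained estimate:",
      projected_gauss_newton(residual, jacobian, project, np.array([0.5, 0.5])))
```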

18.
The G-algorithm was proposed by Bareiss [1] as a method for solving the weighted linear least squares problem. It is a square-root-free algorithm similar to the fast Givens method, except that it triangularizes a rectangular matrix a column at a time instead of one element at a time. In this paper an error analysis of the G-algorithm is presented which shows that it is as stable as any of the standard orthogonal decomposition methods for solving least squares problems. The algorithm is shown to be a competitive method for sparse least squares problems. A pivoting strategy is given for heavily weighted problems, similar to that in [14] for the Householder-Golub algorithm. The strategy is prohibitively expensive, but it is not necessary for most of the least squares problems that arise in practice. The research was supported by the National Science Foundation under contract no. MCS-8201065 and by the Office of Naval Research under contract no. N0014-80-0517.

19.
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
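The numerical point can be illustrated on a single least squares subproblem whose coefficient matrix is a Khatri-Rao product: solving it through the normal equations squares the condition number, whereas a QR-based solve works with the matrix directly. The sizes, the nearly collinear factors, and the helper khatri_rao below are illustrative assumptions.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product (the matrix appearing in CP-ALS subproblems)."""
    return np.vstack([np.kron(B[:, j], C[:, j]) for j in range(B.shape[1])]).T

rng = np.random.default_rng(1)
B = rng.standard_normal((40, 3))
C = rng.standard_normal((50, 3))
B[:, 1] = B[:, 0] + 1e-5 * rng.standard_normal(40)   # nearly collinear factor columns
C[:, 1] = C[:, 0] + 1e-5 * rng.standard_normal(50)   # make the subproblem ill-conditioned
K = khatri_rao(B, C)                                  # 2000 x 3 coefficient matrix
x_true = np.array([1.0, -2.0, 0.5])
b = K @ x_true

x_ne = np.linalg.solve(K.T @ K, K.T @ b)              # normal equations: cond(K)^2
Q, R = np.linalg.qr(K)
x_qr = np.linalg.solve(R, Q.T @ b)                    # QR-based solve: cond(K)

print("cond(K):", np.linalg.cond(K))
print("normal-equations error:", np.linalg.norm(x_ne - x_true))
print("QR error:              ", np.linalg.norm(x_qr - x_true))
```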

20.
This paper examines a type of symmetric quasi-Newton update for use in nonlinear optimization algorithms. The updates presented here impose additional properties on the Hessian approximations that do not result if the usual quasi-Newton updating schemes are applied to certain Gibbs free energy minimization problems. The updates derived in this paper are symmetric matrices that satisfy a given matrix equation and are least squares solutions to the secant equation. A general representation for this class of updates is given. The update in this class that has the minimum weighted Frobenius norm is also presented. This work was done at Sandia National Laboratories and supported by the US Dept. of Energy under contract no. DE-AC04-76DP00789.
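For reference, the classical Powell-symmetric-Broyden (PSB) correction is the minimum-Frobenius-norm symmetric correction that satisfies the secant equation exactly; the sketch below forms it and verifies both properties. It is offered only as a familiar point of comparison, not as the update derived in the paper.

```python
import numpy as np

def psb_update(B, s, y):
    """Symmetric update of B with minimum Frobenius-norm correction and B_new @ s = y."""
    r = y - B @ s                           # secant residual
    ss = s @ s
    return (B
            + (np.outer(r, s) + np.outer(s, r)) / ss
            - (r @ s) * np.outer(s, s) / ss**2)

rng = np.random.default_rng(3)
B = np.eye(4)
s = rng.standard_normal(4)
y = rng.standard_normal(4)
B_new = psb_update(B, s, y)

print("symmetric:      ", np.allclose(B_new, B_new.T))
print("secant equation:", np.allclose(B_new @ s, y))
```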
