Similar Documents
19 similar documents found.
1.
This paper studies whether non-quasi-Newton methods for unconstrained optimization are globally convergent when a nonmonotone line search is used. Under the condition that the objective function is uniformly convex, the non-quasi-Newton family is proved to be globally convergent.

2.
Derivation and Properties of the Pseudo-Newton-B Family (cited 7 times: 0 self-citations, 7 by others)
This paper proposes a new class of approximate Newton methods (the pseudo-Newton-B family) for unconstrained optimization. The methods retain quadratic termination, and the generated matrix sequence preserves symmetry and positive definiteness. Global convergence and superlinear convergence of the algorithm are also proved.

3.
To address the fact that the Hessian at an iterate need not be positive definite when Newton's method is applied to minimizing a general nonconvex function, a refined modified Newton method is proposed. The method makes full use of first- and second-order information of the objective at the current iterate to choose a suitable search direction, and it blends the steepest descent method, Newton's method, and existing modified Newton methods. Global convergence of the algorithm is established under fairly weak conditions. Further numerical experiments verify that, compared with earlier algorithms of the same type, the proposed algorithm …
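As a rough illustration of the idea described in this entry, the sketch below (Python/NumPy, with a generic switching rule that is not taken from the paper) shows how first- and second-order information can be combined to pick a descent direction: the Newton step when the Hessian is positive definite, a shifted Newton step otherwise, and steepest descent as a last resort.

```python
import numpy as np

def hybrid_direction(grad, hess, tau=1e-8):
    """Pick a descent direction from first- and second-order information.

    Generic sketch only (not the paper's exact rule): Newton step if the
    Hessian is positive definite, otherwise a diagonally shifted Newton
    step, with steepest descent as the final fallback.
    """
    try:
        np.linalg.cholesky(hess)                      # succeeds iff hess is positive definite
        return -np.linalg.solve(hess, grad)           # pure Newton direction
    except np.linalg.LinAlgError:
        pass
    lam_min = np.linalg.eigvalsh(hess).min()          # most negative eigenvalue
    shifted = hess + (abs(lam_min) + tau) * np.eye(len(grad))
    d = -np.linalg.solve(shifted, grad)               # modified Newton direction
    return d if grad @ d < 0 else -grad               # steepest descent fallback
```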

4.
A Smoothing Newton Method for Solving Inequality-Constrained Optimization Problems (cited 2 times: 0 self-citations, 2 by others)
By introducing slack variables and the Fischer function, this paper transforms the Karush-Kuhn-Tucker conditions of an inequality-constrained optimization problem into an equivalent nonlinear system and introduces a parameter μ, thereby obtaining a new smoothing Newton method. Under suitable conditions, global convergence of the algorithm is proved, and numerical results are provided.
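For reference, the Fischer(-Burmeister) function mentioned in this entry, together with one common smoothed variant controlled by a parameter μ, can be written as follows (Python/NumPy; the exact smoothing used in the paper may differ in detail):

```python
import numpy as np

def fischer_burmeister(a, b):
    """Fischer(-Burmeister) function: phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def smoothed_fb(a, b, mu):
    """A standard smoothed variant with parameter mu > 0 (differentiable everywhere);
    it recovers fischer_burmeister as mu -> 0."""
    return np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b
```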

5.
Based on the generalized Fischer-Burmeister function, this paper proposes a family of nonmonotone smoothing Newton methods for solving complementarity problems. Global and local convergence of the methods is proved under suitable assumptions, and experimental results are reported.
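A minimal sketch of the generalized Fischer-Burmeister function underlying this family, with the p-norm as the generalization parameter (p = 2 recovers the classical function); this is the standard definition from the literature, not code from the paper:

```python
import numpy as np

def generalized_fb(a, b, p=2.0):
    """Generalized Fischer-Burmeister function: phi_p(a, b) = ||(a, b)||_p - (a + b).

    phi_p(a, b) = 0 exactly when a >= 0, b >= 0 and a*b = 0, which is what
    allows complementarity problems to be recast as (nonsmooth) equations.
    """
    return (np.abs(a) ** p + np.abs(b) ** p) ** (1.0 / p) - (a + b)
```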

6.
Global Convergence of Generalized Quasi-Newton Methods for Nonconvex Unconstrained Optimization (cited 3 times: 0 self-citations, 3 by others)
陈兰平  焦宝聪 《应用数学》2005,18(4):573-579
This paper proposes a new class of generalized quasi-Newton methods for unconstrained optimization and, using a class of inexact line searches, proves global convergence of the algorithm for minimizing general nonconvex objective functions.

7.
1. Introduction. Newton-type methods are an important class of numerical iterative algorithms for solving variational inequalities. Their local convergence properties have been studied with very good results (see [5], among others). In recent years, much progress has also been made on the global convergence of such algorithms, for example in the study of the local superlinear and even quadratic convergence of damped Newton methods (see [4, 6, 9, 11, 12, 14, 16], among others). However, studies of quasi-Newton methods, which are more practical computationally, remain scarce. Based on the successive-approximation Newton-type method given by Qi Liqun et al. in [14], reference [18] established a quasi-Newton method for nonlinear complementarity problems and obtained global convergence of a Broyden-like algorithm. That method, however, has the following two drawbacks: 1. the line search may fail to…

8.
A Family of Superlinearly Convergent Projected Quasi-Newton Algorithms (cited 5 times: 0 self-citations, 5 by others)
Combining gradient projection with quasi-Newton methods, this paper presents a family of algorithms, containing two groups of parameters, for general linearly constrained nonlinear programming problems. Under certain conditions, global convergence of the family and the superlinear convergence rate of a subfamily are proved, and special cases such as the projected DFP and projected BFGS methods are given.
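To fix notation for the projected DFP/BFGS special cases mentioned here, the sketch below shows the unconstrained Broyden-family update of the inverse Hessian approximation (phi = 0 gives DFP, phi = 1 gives BFGS); the projection onto the linear constraints, which is the paper's actual contribution, is omitted.

```python
import numpy as np

def broyden_family_update(H, s, y, phi=1.0):
    """Broyden-family update of the inverse Hessian approximation H.

    s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k); phi = 0 gives the
    DFP update and phi = 1 the BFGS update. Requires the curvature condition
    s.dot(y) > 0 so that positive definiteness is preserved.
    """
    sy = s.dot(y)
    Hy = H.dot(y)
    yHy = y.dot(Hy)
    H_dfp = H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    v = s / sy - Hy / yHy
    return H_dfp + phi * yHy * np.outer(v, v)
```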

9.
李慧茹 《经济数学》2002,19(1):85-94
By defining a new *-differential, this paper gives a Newton method for locally Lipschitz nonsmooth systems of equations and studies its global convergence. The method combines local and global convergence for nonsmooth systems of equations. Finally, we apply this Newton method to systems of smooth compositions of nonsmooth functions and obtain good convergence results.

10.
The study of quasi-Newton methods and their global convergence for minimizing general objective functions has become one of the most fundamental open problems in quasi-Newton theory. This paper investigates the problem further: it proposes a new class of generalized quasi-Newton algorithms for unconstrained optimization and, combined with the Goldstein line search, proves global convergence of the algorithms for minimizing general nonconvex objective functions.
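A minimal sketch of a Goldstein-type line search of the kind referred to in this entry (Python/NumPy); the expand/shrink loop and the constant c are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def goldstein_step(f, x, d, g, c=0.25, alpha=1.0, expand=2.0, shrink=0.5, max_iter=50):
    """Find a step length satisfying the Goldstein conditions (0 < c < 1/2):

        f(x) + (1 - c)*alpha*g'd  <=  f(x + alpha*d)  <=  f(x) + c*alpha*g'd.

    Simple expand/shrink sketch for illustration only.
    """
    fx, slope = f(x), g.dot(d)        # slope < 0 for a descent direction
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        if fa > fx + c * alpha * slope:            # too little decrease: shrink the step
            alpha *= shrink
        elif fa < fx + (1.0 - c) * alpha * slope:  # step too short in the Goldstein sense: expand
            alpha *= expand
        else:
            return alpha
    return alpha
```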

11.
In this work we study a Newton-type method for functions on Riemannian manifolds whose Hessian satisfies a double inequality. The main results concern global convergence and convergence rate estimates.

12.
Convergence of Newton's method for convex best interpolation (cited 7 times: 0 self-citations, 7 by others)
In this paper, we consider the problem of finding a convex function which interpolates given points and has a minimal norm of the second derivative. This problem reduces to a system of equations involving semismooth functions. We study a Newton-type method utilizing Clarke's generalized Jacobian and prove that its local convergence is superlinear. For a special choice of a matrix in the generalized Jacobian, we obtain the Newton method proposed by Irvine et al. [17] and settle the question of its convergence. By using a line search strategy, we present a global extension of the Newton method considered. The efficiency of the proposed global strategy is confirmed with numerical experiments.

13.
The Powell singular function was introduced in 1962 by M.J.D. Powell as an unconstrained optimization problem. The function is also used as a nonlinear least-squares problem and as a system of nonlinear equations. It is a classic test function included in collections of optimization test problems as well as an example problem in textbooks. In the global optimization literature the function is regarded as a difficult test case. The function is convex and the Hessian has a double singularity at the solution. In this paper we consider Newton's method and methods in the Halley class, and we discuss the relationship between these methods on the Powell singular function. We show that these methods have global convergence but only a linear rate of convergence. The function lies in a subclass of unary functions, and the results for Newton's method and the Halley class can be extended to this subclass. Newton's method is often made globally convergent by introducing a line search. We show that a full Newton step satisfies many of the standard step-length rules and that exact line searches yield a slightly faster linear rate of convergence than Newton's method. We illustrate some of these properties with numerical experiments.
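For concreteness, the Powell singular function discussed in this entry, together with its customary starting point from the optimization test-set literature, is as follows (Python/NumPy sketch):

```python
import numpy as np

def powell_singular(x):
    """Powell singular function (Powell, 1962); minimizer x* = 0 with f(x*) = 0.

    The Hessian at the solution is singular (rank-deficient), which is why
    Newton-type methods achieve only a linear rate on this problem.
    """
    x1, x2, x3, x4 = x
    return ((x1 + 10.0 * x2) ** 2
            + 5.0 * (x3 - x4) ** 2
            + (x2 - 2.0 * x3) ** 4
            + 10.0 * (x1 - x4) ** 4)

# Standard starting point used in test-problem collections.
x0 = np.array([3.0, -1.0, 0.0, 1.0])
```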

14.
This paper concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness and Lipschitzian assumptions, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and B-differentiable versions of Newton’s method for nonsmooth Lipschitzian equations.

15.
There has recently been much interest in smoothing Newton methods for solving nonlinear complementarity problems. We extend such methods to symmetric cone complementarity problems (SCCP). In this paper, we first investigate a one-parametric class of smoothing functions in the context of symmetric cones, which contains the Fischer–Burmeister smoothing function and the CHKS smoothing function as special cases. Then we propose a smoothing Newton method for the SCCP based on this one-parametric class of smoothing functions. For the proposed method, besides the classical step length, we provide a new step length, and global convergence is obtained. Finally, preliminary numerical results are reported, which show the effectiveness of the two step lengths in the algorithm and provide efficient domains of the parameter for the complementarity problems.
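In the scalar (NCP) case, the two smoothing functions named in this entry are the smoothed Fischer–Burmeister function and the CHKS function. The snippet below writes both, plus one simple interpolation between them with a parameter tau, shown purely for illustration; the actual one-parametric class studied in the paper is defined over symmetric cones and may be parameterized differently.

```python
import numpy as np

def smoothed_fb(mu, a, b):
    """Smoothed Fischer-Burmeister function (scalar case, same sign convention as chks)."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

def chks(mu, a, b):
    """Chen-Harker-Kanzow-Smale (CHKS) smoothing function."""
    return a + b - np.sqrt((a - b)**2 + 4.0 * mu**2)

def one_parametric(mu, a, b, tau):
    """A simple interpolation with tau in [0, 2]: tau = 0 gives smoothed_fb,
    tau = 2 gives chks. Illustrative assumption only, not the paper's class."""
    return a + b - np.sqrt(a**2 + b**2 - tau * a * b + (2.0 + tau) * mu**2)
```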

16.
In this paper, the global and superlinear convergence of a smoothing Newton method for solving nonsmooth operator equations in Banach spaces are shown. The feature of the smoothing Newton method is to use a smooth function to approximate the nonsmooth mapping. Under suitable assumptions, we prove that the smoothing Newton method is superlinearly convergent. As an application, we use the smoothing Newton method to solve a constrained optimal control problem.

17.
We consider an inverse quadratic programming (QP) problem in which the parameters in both the objective function and the constraint set of a given QP problem need to be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a linear complementarity constrained minimization problem with a positive semidefinite cone constraint. With the help of duality theory, we reformulate this problem as a linear complementarity constrained semismoothly differentiable (SC1) optimization problem with fewer variables than the original one. We propose a perturbation approach to solve the reformulated problem and demonstrate its global convergence. An inexact Newton method is constructed to solve the perturbed problem, and its global convergence and local quadratic convergence rate are shown. As the objective function of the problem is an SC1 function involving the projection operator onto the cone of positive semidefinite symmetric matrices, the analysis requires an implicit function theorem for semismooth functions as well as properties of the projection operator in the symmetric-matrix space. Since an approximate proximal point is required in the inexact Newton method, we also give a Newton method to obtain it. Finally we report our numerical results showing that the proposed approach is quite effective.

18.
On the Nonmonotone Line Search (cited 10 times: 0 self-citations, 10 by others)
The technique of nonmonotone line search has received many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along different lines. The analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between nonmonotone line search and R-linear convergence assuming that the objective function is uniformly convex. In addition, by taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest a standard Armijo line search when the nonmonotone line search condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
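A compact sketch of the nonmonotone (Grippo-Lampariello-Lucidi type) Armijo test analyzed in this entry, in Python/NumPy; the parameter names and the plain backtracking loop are illustrative and do not reproduce the paper's exact procedure, which additionally recommends switching to a standard Armijo search when the first trial step fails the nonmonotone test.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, delta=1e-4, rho=0.5, alpha0=1.0, max_back=30):
    """Backtracking line search with a nonmonotone (GLL-type) Armijo rule.

    Accept a step alpha if
        f(x + alpha*d) <= max(f_hist) + delta * alpha * g.dot(d),
    where f_hist holds the objective values of the last few iterates.
    """
    f_ref = max(f_hist)               # reference value over recent iterates
    slope = g.dot(d)                  # directional derivative, must be negative
    alpha = alpha0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + delta * alpha * slope:
            return alpha
        alpha *= rho                  # backtrack
    return alpha
```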

19.
We give a framework for the globalization of a nonsmooth Newton method. In part one we start by recalling B. Kummer’s approach to the convergence analysis of a nonsmooth Newton method and state his results for local convergence. In part two we give a globalized version of this method. Our approach uses a path search idea to control the descent. After elaborating the single steps, we analyze and prove the global convergence of the algorithm as well as its local superlinear or quadratic convergence. In the third part we illustrate the method for nonlinear complementarity problems.
