Similar Documents
20 similar documents found (search time: 78 ms)
1.
Circular cone programming is an important class of nonsymmetric cone optimization problems. Based on a smoothing function, the optimality conditions of the circular cone program are reformulated as a system of nonlinear equations, and a smoothing Newton method for circular cone programming is then presented. Each iteration of the algorithm solves only one linear system and performs only one line search. Using the theory of Euclidean Jordan algebras, the algorithm is proved to be globally and locally quadratically convergent. Finally, numerical results demonstrate the effectiveness of the algorithm.

2.
3.
A new smoothing Newton method is presented for solving circular cone programming problems. Based on a new smoothing function of the circular cone complementarity function, the problem is reformulated as a system of nonlinear equations, which is then solved by a smoothing Newton method. The algorithm can start from an arbitrary initial point and does not require the intermediate iterates to be interior points. Using Euclidean Jordan algebra theory, the algorithm is proved to be globally convergent with a locally superlinear convergence rate. Numerical examples demonstrate the effectiveness of the algorithm.

4.
Some Results on the Determinacy of Smooth Map Germs
姜广峰 《东北数学》1990,6(2):195-203

5.
马昌凤  梁国平 《数学杂志》2004,24(4):399-402
A smoothing approximation algorithm for solving mixed complementarity problems is proposed, and its global convergence is proved under certain conditions.

6.
谢骊玲  关履泰  覃廉 《计算数学》2005,27(3):257-266
This paper discusses the general convex smoothing (fairing) problem min F(y) := ∫_a^b |D^k y|^2 dt + Σ_{i=1}^N ω_i |y(t_i) − z_i|^2, where k ≥ 3 and y is constrained to a closed convex set K ⊂ L_2^k[a,b]. We reformulate the problem as a system of semismooth equations and give a semismooth Newton algorithm for solving that system. Finally, the superlinear convergence of the algorithm is proved and numerical examples are given.

7.
周正勇  秦丽娜 《应用数学》2020,33(3):690-698
Using piecewise cubic polynomial equations, this paper constructs a twice continuously differentiable smoothing max function with an active-set strategy, and gives methods for computing the active set and for stably evaluating the smoothed max function. Based on this smoothing max function, combined with an Armijo line search, negative-gradient and Newton directions, and an updating strategy for the smoothing parameter, an active-set smoothing algorithm is presented for unconstrained minimax problems with many complicated component functions. Preliminary numerical experiments demonstrate the effectiveness of the algorithm.
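The paper's C², piecewise-cubic max construction is not reproduced here; as a simpler standard alternative (an illustration only, not the paper's method), the log-sum-exp smoothing is infinitely differentiable and approximates the max of n values to within μ·ln n:

```python
import numpy as np

def smooth_max(f_vals, mu):
    """Log-sum-exp smoothing of max(f_1, ..., f_n).

    For mu > 0 the function is infinitely differentiable and satisfies
    max(f) <= smooth_max(f, mu) <= max(f) + mu * ln(n).
    """
    f = np.asarray(f_vals, dtype=float)
    m = f.max()                                   # shift for numerical stability
    return m + mu * np.log(np.sum(np.exp((f - m) / mu)))

vals = [1.0, 3.0, 2.5]
approx = [smooth_max(vals, mu) for mu in (1.0, 0.1, 0.01)]  # tightens toward max(vals) = 3.0
```

Driving μ toward 0 recovers the exact max, at the cost of steeper derivatives, which is why smoothing algorithms pair the parameter with an updating strategy.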

8.
Using interval tools and a special derivative of the objective function, this paper gives an interval algorithm for nonsmooth global optimization. The algorithm provides bounds on the global minimum value and the global minimizers of the objective function (within a prescribed accuracy). We also extend the algorithm to parallel computation. Numerical experiments show that the method is reliable and effective.

9.
An Interior-Point Positive Algorithm for Solving Nonlinear Complementarity Problems
For nonlinear complementarity problems, an interior-point positive algorithm is proposed for an equivalent nonsmooth equation, and a convergence theorem for the algorithm is proved under certain conditions. Numerical results show that the algorithm is highly effective.

10.
马昌凤  王婷 《应用数学》2023,(3):589-601
The nonlinear complementarity problem (NCP) can be reformulated as the solution of a system of nonsmooth equations. By introducing a new smoothing function, the problem is approximated by a parameterized system of smooth equations. Based on this smoothing function, we propose a smoothing Newton method for solving NCPs with P0 and R0 mappings. Each iteration of the algorithm solves only one linear system and performs one line search. Under suitable conditions, the method is proved to be globally and locally quadratically convergent. Numerical results show that the algorithm is effective.
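A generic sketch of the smoothing Newton idea (not the paper's new smoothing function): using the classical Chen–Harker–Kanzow–Smale smoothing of min(a, b) on a linear complementarity problem (linear F(x) = Mx + q for brevity), with one linear solve and one backtracking line search per iteration:

```python
import numpy as np

def chks(a, b, mu):
    # Chen-Harker-Kanzow-Smale smoothing of 2*min(a, b)
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

def smoothing_newton_lcp(M, q, x0, mu=1.0, tol=1e-10, max_iter=100):
    """Smoothing Newton sketch for the LCP: x >= 0, Mx + q >= 0, x'(Mx + q) = 0."""
    x = x0.astype(float)
    for _ in range(max_iter):
        F = M @ x + q
        H = chks(x, F, mu)
        if max(np.linalg.norm(H), mu) < tol:
            break
        r = np.sqrt((x - F) ** 2 + 4.0 * mu ** 2)
        J = np.diag(1.0 - (x - F) / r) + np.diag(1.0 + (x - F) / r) @ M
        d = np.linalg.solve(J, -H)
        t = 1.0                               # backtrack on ||H||
        while np.linalg.norm(chks(x + t * d, M @ (x + t * d) + q, mu)) \
                > (1.0 - 1e-4 * t) * np.linalg.norm(H) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        # shrink the smoothing parameter, keeping it positive
        mu = max(min(0.5 * mu, np.linalg.norm(chks(x, M @ x + q, mu))), 1e-16)
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
x = smoothing_newton_lcp(M, q, np.array([1.0, 1.0]))  # the LCP solution is (1.5, 0)
```

The iterates need not stay interior; only the smoothing parameter keeps the system differentiable while it is driven to zero.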

11.
1. Introduction. Consider the following nonsmooth equations F(x) = 0 (1), where F: R^n → R^n is Lipschitz continuous. A lot of work has been done and is being done to deal with (1). It is basically a generalization of the classic Newton method [8,10,11,14], Newton-like methods [1,18] and quasi-Newton methods [6,7]. As discussed in [7], the latter, quasi-Newton methods, seem to be limited when applied to the nonsmooth case in that a bound of the deterioration of the updating matrix cannot be maintained w…

12.
Implicit Runge-Kutta (IRK) methods (such as the s-stage Radau IIA method with s = 3, 5, or 7) for solving stiff ordinary differential equation systems have excellent stability properties and high solution accuracy orders, but their high computing costs in solving their nonlinear stage equations have seriously limited their applications to large scale problems. To reduce such a cost, several approximate Newton algorithms were developed, including a commonly used one called the simplified Newton method. In this paper, a new approximate Jacobian matrix and two new test rules for controlling the updating of approximate Jacobian matrices are proposed, yielding an improved approximate Newton method. Theoretical and numerical analysis show that the improved approximate Newton method can significantly improve the convergence and performance of the simplified Newton method.
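The simplified Newton method mentioned above freezes the Jacobian (and, in practice, its factorization) across iterations. A minimal sketch on a small algebraic system, with np.linalg.solve standing in for the reused LU factorization:

```python
import numpy as np

def simplified_newton(F, J, x0, max_iter=50, tol=1e-12):
    """Simplified Newton: freeze the Jacobian at x0 and reuse it every step.
    (In IRK solvers the frozen matrix is LU-factored once per step;
    here np.linalg.solve stands in for the reused factorization.)"""
    x = np.asarray(x0, dtype=float)
    J0 = J(x)                      # evaluated (and, in practice, factored) once
    for _ in range(max_iter):
        dx = np.linalg.solve(J0, -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = simplified_newton(F, J, np.array([1.5, 1.5]))  # converges to (sqrt(2), sqrt(2))
```

Freezing the Jacobian trades quadratic for linear convergence, but each iteration then costs only a back-substitution, which is the trade-off the improved test rules above try to manage.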

13.
In this paper, the global and superlinear convergence of smoothing Newton method for solving nonsmooth operator equations in Banach spaces are shown. The feature of smoothing Newton method is to use a smooth function to approximate the nonsmooth mapping. Under suitable assumptions, we prove that the smoothing Newton method is superlinearly convergent. As an application, we use the smoothing Newton method to solve a constrained optimal control problem.

14.
Building on the method of Kantorovich majorants, we give convergence results and error estimates for the two-step Newton method for the approximate solution of a nonlinear operator equation.
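For a scalar equation, the two-step Newton iteration reuses one derivative evaluation for two corrections per iteration; a minimal sketch (the paper itself treats operator equations via Kantorovich majorants, which is not reproduced here):

```python
def two_step_newton(f, df, x0, tol=1e-14, max_iter=20):
    """Two-step Newton: one derivative evaluation serves two corrections
    per iteration, giving cubic local convergence for smooth f."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        y = x - f(x) / d          # Newton predictor
        x_new = y - f(y) / d      # corrector reusing f'(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = two_step_newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.5)  # sqrt(2)
```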

15.
In this work we study an interior penalty method for a finite-dimensional large-scale linear complementarity problem (LCP) arising often from the discretization of stochastic optimal control problems in financial engineering. In this approach, we approximate the LCP by a nonlinear algebraic equation containing a penalty term linked to the logarithmic barrier function for constrained optimization problems. We show that the penalty equation has a solution and establish a convergence theory for the approximate solutions. A smooth Newton method is proposed for solving the penalty equation, and properties of the Jacobian matrix in the Newton method are investigated. Numerical results using three non-trivial test examples are presented to demonstrate the rates of convergence, efficiency and usefulness of the method for solving practical problems.
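A minimal sketch of the interior penalty idea on a small dense LCP. The concrete penalty equation Ax + b − μ/x = 0 below (the stationarity condition of a log-barrier problem for symmetric A) is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def penalty_lcp(A, b, x0):
    """Interior penalty sketch for the LCP x >= 0, Ax + b >= 0, x'(Ax + b) = 0:
    Newton's method on the smooth penalty equation Ax + b - mu/x = 0
    (componentwise), driving mu -> 0 along a decreasing sequence."""
    x = x0.astype(float)
    for mu in np.logspace(0, -8, 9):          # mu: 1, 1e-1, ..., 1e-8
        for _ in range(50):
            G = A @ x + b - mu / x
            if np.linalg.norm(G) < 1e-12:
                break
            J = A + np.diag(mu / x ** 2)
            d = np.linalg.solve(J, -G)
            t = 1.0
            while np.any(x + t * d <= 0.0):   # damp to stay in the interior
                t *= 0.5
            x = x + t * d
    return x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-3.0, -1.0])
x = penalty_lcp(A, b, np.array([1.0, 1.0]))  # approaches the LCP solution (1.5, 0)
```

The barrier term keeps the iterates strictly positive, so complementarity is recovered only in the limit μ → 0, which matches the convergence theory described in the abstract.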

16.
Hiroyuki Sato 《Optimization》2017,66(12):2211-2231
The joint approximate diagonalization of non-commuting symmetric matrices is an important process in independent component analysis. This problem can be formulated as an optimization problem on the Stiefel manifold that can be solved using Riemannian optimization techniques. Among the available optimization techniques, this study utilizes the Riemannian Newton’s method for the joint diagonalization problem on the Stiefel manifold, which has quadratic convergence. In particular, the resultant Newton’s equation can be effectively solved by means of the Kronecker product and the vec and veck operators, which reduce the dimension of the equation to that of the Stiefel manifold. Numerical experiments are performed to show that the proposed method improves the accuracy of the approximate solution to this problem. The proposed method is also applied to independent component analysis for the image separation problem. The proposed Newton method further leads to a novel and fast Riemannian trust-region Newton method for the joint diagonalization problem.

17.
The present paper is concerned with the convergence problem of inexact Newton methods. Assuming that the nonlinear operator satisfies the γ-condition, a convergence criterion for inexact Newton methods is established which includes Smale's type convergence criterion. The concept of an approximate zero for inexact Newton methods is proposed in this paper and the criterion for judging an initial point being an approximate zero is established. Consequently, Smale's α-theory is generalized to inexact Newton methods. Furthermore, a numerical example is presented to illustrate the applicability of our main results.

18.
Inexact Newton methods are variants of the Newton method in which each step satisfies the linear system only approximately (Ref. 1). The local convergence theory given by the authors of Ref. 1, and most of the results based on it, consider the error terms as being provided only by the fact that the linear systems are not solved exactly. The few existing results for the general case (when some perturbed linear systems are considered, which in turn are not solved exactly) do not offer explicit formulas in terms of the perturbations and residuals. We extend this local convergence theory to the general case, characterizing the rate of convergence in terms of the perturbations and residuals. The Newton iterations are then analyzed when, at each step, an approximate solution of the linear system is determined by the following Krylov solvers based on backward error minimization properties: GMRES, GMBACK, MINPERT. We obtain results concerning the following topics: monotone properties of the errors in these Newton–Krylov iterates when the initial guess is taken as 0 in the Krylov algorithms; control of the convergence orders of the Newton–Krylov iterations by the magnitude of the backward errors of the approximate steps; similarities of the asymptotical behavior of GMRES and MINPERT when used in a converging Newton method. At the end of the paper, the theoretical results are verified on some numerical examples.
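A minimal sketch of the inexact Newton framework (not the paper's Newton–Krylov solvers): the standard condition ||F(x) + J(x)s|| <= η||F(x)|| is enforced by construction here, by injecting the residual deliberately, and the iteration still converges linearly for small η:

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_iter=50):
    """Inexact Newton sketch: each step solves the Newton system only up to
    a relative residual eta. Here the residual is injected deliberately; in
    practice it comes from a truncated Krylov solver such as GMRES."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        nf = np.linalg.norm(Fx)
        if nf < tol:
            break
        r = eta * nf * np.array([1.0, 0.0])   # deliberate residual, ||r|| = eta*||F||
        s = np.linalg.solve(J(x), -Fx + r)    # so F(x) + J(x) s = r exactly
        x = x + s
    return x

F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = inexact_newton(F, J, np.array([1.5, 1.5]))  # converges to (sqrt(2), sqrt(2))
```

Shrinking the forcing term η toward 0 as the iteration proceeds is what recovers superlinear convergence in the theory summarized above.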

19.
We show that a modified Milstein scheme combined with explicit Newton’s method enables us to construct fast converging sequences of approximate solutions of stochastic differential equations. The fast uniform convergence of our Newton–Milstein scheme follows from Amano’s probabilistic second-order error estimate, which had been an open problem since 1991. The Newton–Milstein scheme, which is based on a modified Milstein scheme and the symbolic Newton’s method, will be classified as a numerical and computer algebraic hybrid method and it may give a new possibility to the study of computer algebraic method in stochastic analysis.
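A minimal sketch of the plain Milstein scheme for scalar geometric Brownian motion, compared against the closed-form solution driven by the same Brownian increments (the paper's Newton–Milstein hybrid with symbolic Newton corrections is not reproduced here):

```python
import numpy as np

def milstein_gbm(x0, a, b, T, n, rng):
    """Milstein scheme for geometric Brownian motion dX = a X dt + b X dW.
    The extra 0.5*b*b*X*(dW^2 - dt) term lifts the strong order from 0.5 to 1.0."""
    dt = T / n
    x, w = x0, 0.0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x += a * x * dt + b * x * dw + 0.5 * b * b * x * (dw * dw - dt)
        w += dw
    exact = x0 * np.exp((a - 0.5 * b * b) * T + b * w)   # closed-form GBM solution
    return x, exact

rng = np.random.default_rng(0)
approx, exact = milstein_gbm(1.0, 0.05, 0.2, 1.0, 2000, rng)
```

Comparing against the exact solution on the same Brownian path is the standard way to measure strong (pathwise) error, the quantity the second-order estimate above controls.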

20.
We develop general approximate Newton methods for solving Lipschitz continuous equations by replacing the iteration matrix with a consistently approximated Jacobian, thereby reducing the computation in the generalized Newton method. Locally superlinear convergence results are presented under moderate assumptions. To construct a consistently approximated Jacobian, we introduce two main methods: the classic difference approximation method and the -generalized Jacobian method. The former can be applied to problems with specific structures, while the latter is expected to work well for general problems. Numerical tests show that the two methods are efficient. Finally, a norm-reducing technique for the global convergence of the generalized Newton method is briefly discussed.
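A minimal sketch of the classic difference-approximation route mentioned above: build the Jacobian column-by-column from forward differences and use it inside a Newton iteration (shown on a smooth example; the paper's setting is Lipschitz equations):

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Forward-difference approximation of the Jacobian of F at x."""
    n = len(x)
    Fx = F(x)
    J = np.empty((len(Fx), n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def newton_fd(F, x0, tol=1e-10, max_iter=50):
    """Newton's method with a difference-approximated Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(fd_jacobian(F, x), -Fx)
    return x

F = lambda x: np.array([np.exp(x[0]) - 1.0, x[0] + x[1] ** 3 - 1.0])
root = newton_fd(F, np.array([0.5, 0.5]))  # converges to (0, 1)
```

Each Jacobian approximation costs n extra function evaluations, which is the computation the consistently-approximated-Jacobian idea above tries to reduce.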


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号