Similar Literature
20 similar articles found (search time: 828 ms)
1.
This paper extends the classical Langevin equation to fractional order, endowing it with time memory, and solves a class of fractional Langevin equations numerically with a predictor-corrector algorithm. The R0 algorithm first produces a predicted value, which is then substituted into the R2 algorithm to correct the numerical solution, yielding the predictor-corrector numerical solution of the fractional Langevin equation. Error analysis proves that, under the condition 0 < α < 1, the predictor-corrector algorithm is convergent of order 1+α. Numerical experiments also show that the numerical solutions of the predictor-corrector algorithm converge for various choices of α and step size h.
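The R0/R2 scheme of the abstract is not reproduced here; as a hedged sketch, the closely related fractional Adams-Bashforth-Moulton predictor-corrector (fractional rectangle-rule predictor, fractional trapezoidal corrector) for a Caputo equation D^α y = f(t, y) can look as follows. The test equation f(t, y) = t, whose exact solution is t^(1+α)/Γ(2+α), is an illustrative choice, not taken from the paper.

```python
import math
import numpy as np

def frac_adams(f, y0, alpha, T, N):
    """Fractional Adams predictor-corrector for Caputo D^alpha y = f(t, y),
    0 < alpha < 1, on [0, T] with N uniform steps."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.zeros(N + 1); y[0] = y0
    fv = np.zeros(N + 1); fv[0] = f(t[0], y0)
    c1 = h**alpha / math.gamma(alpha + 1)   # predictor weight scale
    c2 = h**alpha / math.gamma(alpha + 2)   # corrector weight scale
    for n in range(N):
        j = np.arange(n + 1)
        # Predictor: fractional rectangle rule
        b = (n + 1 - j)**alpha - (n - j)**alpha
        yp = y0 + c1 * np.dot(b, fv[:n + 1])
        # Corrector: fractional trapezoidal rule, evaluated at the predictor
        a = np.empty(n + 1)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        jj = j[1:]
        a[1:] = ((n - jj + 2)**(alpha + 1) + (n - jj)**(alpha + 1)
                 - 2 * (n - jj + 1)**(alpha + 1))
        y[n + 1] = y0 + c2 * (np.dot(a, fv[:n + 1]) + f(t[n + 1], yp))
        fv[n + 1] = f(t[n + 1], y[n + 1])
    return t, y
```

For f(t, y) = t with y(0) = 0, the computed y(1) should approach 1/Γ(2.5) ≈ 0.752 as the step size shrinks, consistent with the (1+α)-order convergence claimed above.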

2.
Based on the smoothed Fischer-Burmeister function, a predictor-corrector smoothing Newton method for second-order cone programming is presented. The algorithm constructs a system of nonlinear equations equivalent to the optimality conditions and applies Newton's method to a perturbation of this system. Under suitable assumptions, the algorithm is proved to be globally convergent and locally quadratically convergent. Numerical experiments demonstrate its effectiveness.
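A minimal illustration of the smoothed Fischer-Burmeister idea, reduced to a scalar complementarity problem rather than a second-order cone program; the function F, the halving schedule for the smoothing parameter μ, and all tolerances are illustrative assumptions, not the paper's algorithm.

```python
import math

def smoothing_newton_ncp(F, dF, x0, mu=1.0, tol=1e-10, max_iter=100):
    """Solve the scalar complementarity problem x >= 0, F(x) >= 0, x*F(x) = 0
    by Newton's method on the smoothed Fischer-Burmeister equation
    phi_mu(x, F(x)) = x + F(x) - sqrt(x^2 + F(x)^2 + 2*mu^2) = 0,
    while driving the smoothing parameter mu to zero."""
    x = x0
    for _ in range(max_iter):
        a, b = x, F(x)
        r = math.sqrt(a * a + b * b + 2 * mu * mu)
        val = a + b - r                       # smoothed FB residual
        if abs(val) < tol and mu < tol:
            break
        # derivative of phi_mu(x, F(x)) with respect to x (chain rule)
        dval = (1 - a / r) + (1 - b / r) * dF(x)
        x -= val / dval                       # Newton step
        mu *= 0.5                             # shrink the smoothing parameter
    return x
```

For F(x) = x - 1, the complementarity solution is x = 1 (where F vanishes), and the iteration drives both the residual and μ to zero.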

3.
This paper studies methods for solving a special weighted linear complementarity problem. Using a smoothing function with a weight, the problem is reformulated as a system of smooth equations, and a predictor-corrector smoothing Newton method is proposed to solve it. Under suitable conditions, the proposed algorithm is proved to have global and local quadratic convergence. In particular, when the solution set is nonempty, the sequence of merit-function values is shown to converge to zero. Numerical experiments show the algorithm is effective.

4.
Based on a modification of Newton's iteration formula and the idea of predictor-corrector iteration, a new third-order predictor-corrector iterative scheme for solving nonlinear equations is proposed. The iteration formula requires no derivative evaluations, and it is proved to be at least third-order convergent. Numerical experiments verify the effectiveness of the iteration formula.
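The abstract does not give the exact formula; as a hedged sketch in the same spirit, a derivative-free predictor-corrector root finder can combine a Steffensen-type predictor (divided differences in place of the derivative) with a secant-slope corrector. This is an illustrative scheme, not necessarily the paper's.

```python
def pc_solve(f, x0, tol=1e-12, max_iter=50):
    """Derivative-free predictor-corrector root finding for f(x) = 0:
    a Steffensen-type predictor followed by a secant-slope corrector."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx        # divided difference f[x, x + f(x)]
        y = x - fx / g                   # predictor: Steffensen step
        fy = f(y)
        if abs(fy) < tol:
            x = y
            break
        slope = (fy - fx) / (y - x)      # secant slope f[x, y]
        x = y - fy / slope               # corrector: reuse both evaluations
    return x
```

Each pass uses three function evaluations and no derivatives, the trade typically made by such schemes to raise the order beyond Newton's two-evaluation quadratic step.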

5.
Based on infeasible interior-point methods and the predictor-corrector idea, two new interior-point predictor-corrector algorithms for second-order cone programming are proposed. Their predictor directions are the Newton direction and the Euler direction, respectively, and their corrector directions belong to the Alizadeh-Haeberly-Overton (AHO) family. The algorithms apply whether the iterates are feasible or infeasible. The key feature distinguishing them from other interior-point predictor-corrector algorithms is the construction of a simpler neighborhood of the central path. Under certain assumptions, the algorithms are globally convergent with linear and quadratic convergence rates, and achieve an iteration complexity bound of O(r ln(ε0/ε)), where r is the number of second-order cone constraints in the problem. Numerical results show that both proposed algorithms are effective.

6.
Recently, Salahi proposed a Mehrotra-type predictor-corrector algorithm for linear programming based on a new adaptive parameter-correction strategy, which made it possible to prove a polynomial iteration-complexity bound without resorting to safeguards. This paper extends the algorithm to semidefinite programming. Using Zhang's symmetrization technique, a polynomial iteration-complexity bound is obtained, of the same order as that of the corresponding algorithm for linear programming.

7.
黄娜  马昌凤  谢亚君 《计算数学》2013,35(4):401-418
The nonsymmetric algebraic Riccati equation arising in transport theory can be equivalently reformulated as a system of vector equations. This paper proposes several predictor-corrector iterative schemes for solving this system, and proves that the sequences they generate are strictly monotonically increasing, bounded above, and convergent to the minimal positive solution of the vector equations. Finally, numerical experiments are reported, showing that the proposed algorithms are effective.

8.
This paper proposes a Mehrotra-type predictor-corrector algorithm for monotone nonlinear complementarity problems. The new algorithm adopts a different adaptive updating strategy. Under a scaled Lipschitz condition, the iteration complexity of the new algorithm is proved to be O(n^2 log((x^0)^T s^0/ε)), where (x^0, s^0) is the initial point and ε is the accuracy.

9.
The Mehrotra-type predictor-corrector algorithm underlies many interior-point software packages, yet its polynomial iteration complexity was not proved until 2007, by Salahi et al. By choosing a fixed predictor step length and a corrector direction different from that in Salahi's paper, this paper extends the algorithm of Salahi et al. to monotone linear complementarity problems, so that the new algorithm has iteration complexity O(n log((x^0)^T s^0/ε)). Preliminary numerical experiments also confirm that the new algorithm is effective.
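For background, a compact sketch of the classical Mehrotra predictor-corrector in its original linear-programming setting (the LCP extension, fixed predictor step, and modified corrector of the abstract are not reproduced). The all-ones starting point and the 0.99 step-length fraction are conventional choices, not from the paper.

```python
import numpy as np

def mehrotra_lp(A, b, c, tol=1e-8, max_iter=100):
    """Mehrotra-style predictor-corrector for min c^T x s.t. Ax = b, x >= 0."""
    m, n = A.shape
    x = np.ones(n); lam = np.zeros(m); s = np.ones(n)
    for _ in range(max_iter):
        rp = b - A @ x                       # primal residual
        rd = c - A.T @ lam - s               # dual residual
        mu = x @ s / n                       # duality measure
        if max(np.linalg.norm(rp), np.linalg.norm(rd), mu) < tol:
            break
        # Predictor (affine-scaling) direction: aim complementarity at zero
        dxa, dla, dsa = _kkt_step(A, x, s, rp, rd, -x * s)
        ap = _step_len(x, dxa); ad = _step_len(s, dsa)
        mu_aff = (x + ap * dxa) @ (s + ad * dsa) / n
        sigma = (mu_aff / mu) ** 3           # Mehrotra's adaptive centering
        # Corrector: add centering and second-order terms
        rc = -x * s + sigma * mu - dxa * dsa
        dx, dl, ds = _kkt_step(A, x, s, rp, rd, rc)
        ap = 0.99 * _step_len(x, dx); ad = 0.99 * _step_len(s, ds)
        x += ap * dx; lam += ad * dl; s += ad * ds
    return x, lam, s

def _kkt_step(A, x, s, rp, rd, rc):
    """Solve the Newton/KKT system via the normal equations."""
    d = x / s
    M = A @ (d[:, None] * A.T)
    dl = np.linalg.solve(M, rp + A @ ((x * rd - rc) / s))
    ds = rd - A.T @ dl
    dx = (rc - x * ds) / s
    return dx, dl, ds

def _step_len(v, dv):
    """Largest step in [0, 1] keeping v + a*dv nonnegative."""
    neg = dv < 0
    return min(1.0, (v[neg] / -dv[neg]).min()) if neg.any() else 1.0
```

On the tiny LP min x1 + 2*x2 subject to x1 + x2 = 1, x >= 0, the iterates converge to the vertex (1, 0).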

10.
Low-rank matrix completion, an important problem in machine learning, image processing, and other information sciences, has been widely studied. First-order primal-dual algorithms are among the classical methods for solving it, but the data handled in practical applications are often large-scale. For large-scale matrix completion, this paper applies a variable-stepsize correction technique within the primal-dual framework and proposes an improved primal-dual algorithm. At each iteration, the primal and dual variables are first updated by the primal-dual algorithm and then further corrected by the variable-stepsize correction technique. Under certain assumptions, global convergence of the new algorithm is proved. Finally, experiments on random low-rank matrix completion and image inpainting verify the effectiveness of the new algorithm.

11.
In this paper, we focus on the variational inequality problem. Based on the Fischer-Burmeister function with smoothing parameters, the variational inequality problem can be reformulated as a system of parameterized smooth equations, and a non-interior-point smoothing method is presented for solving this system. The proposed algorithm places no restriction on the initial point and has global convergence as well as local quadratic convergence; moreover, the local quadratic convergence is established without a strict complementarity condition. Preliminary numerical results show that the algorithm is promising.

12.
It is well known that Newton’s method for a nonlinear system has quadratic convergence when the Jacobian is a nonsingular matrix in a neighborhood of the solution. Here we present a modification of this method for nonlinear systems whose Jacobian matrix is singular. We prove, under certain conditions, that this modified Newton’s method has quadratic convergence. Moreover, different numerical tests confirm the theoretical results and allow us to compare this variant with the classical Newton’s method.
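The abstract does not spell out the modification; the sketch below shows Newton's method for a square system with an optional Tikhonov-style damping parameter `reg`, one common remedy when the Jacobian is singular or nearly singular. The damping is an assumption for illustration, not the authors' method.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50, reg=0.0):
    """Newton's method for F(x) = 0 via the normal equations
    (J^T J + reg*I) d = -J^T F; reg = 0 recovers the classical step
    when J is nonsingular, reg > 0 damps a (near-)singular Jacobian."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        d = np.linalg.solve(Jx.T @ Jx + reg * I, -Jx.T @ Fx)
        x = x + d
    return x
```

With reg = 0 and a nonsingular Jacobian, the iteration reproduces the quadratic convergence noted in the abstract, e.g. on the system x^2 + y^2 = 1, x = y.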

13.
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported.

14.
A modified Levenberg–Marquardt method for solving singular systems of nonlinear equations was proposed by Fan [J Comput Appl Math. 2003;21;625–636]. Using trust region techniques, the global and quadratic convergence of the method were proved. In this paper, to improve this method, we introduce a new Levenberg–Marquardt parameter and incorporate a new nonmonotone technique into the method. The global and quadratic convergence of the new method is proved under the local error bound condition. Numerical results show the new algorithm is efficient and promising.
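A hedged sketch of the basic Levenberg–Marquardt iteration for F(x) = 0 with the parameter choice mu_k = ||F(x_k)||^2, in the spirit of Fan's method; the paper's new parameter and nonmonotone technique are not reproduced, and the simple monotone safeguard below is an illustrative stand-in for a trust-region update.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, tol=1e-10, max_iter=100):
    """Levenberg-Marquardt for F(x) = 0: solve
    (J^T J + mu*I) d = -J^T F with mu = ||F||^2 at each iterate."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(max_iter):
        Fx = F(x)
        normF = np.linalg.norm(Fx)
        if normF < tol:
            break
        Jx = J(x)
        mu = normF ** 2                      # regularization parameter
        d = np.linalg.solve(Jx.T @ Jx + mu * I, -Jx.T @ Fx)
        # simple monotone safeguard: enlarge mu until the residual drops
        while np.linalg.norm(F(x + d)) >= normF:
            mu *= 4.0
            d = np.linalg.solve(Jx.T @ Jx + mu * I, -Jx.T @ Fx)
        x = x + d
    return x
```

Because mu vanishes with the residual, the step approaches the Gauss-Newton step near a solution, which is what yields the fast local convergence under an error bound condition.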

15.
The Levenberg–Marquardt method is a regularized Gauss–Newton method for solving systems of nonlinear equations. If an error bound condition holds it is known that local quadratic convergence to a non-isolated solution can be achieved. This result was extended to constrained Levenberg–Marquardt methods for solving systems of equations subject to convex constraints. This paper presents a local convergence analysis for an inexact version of a constrained Levenberg–Marquardt method. It is shown that the best results known for the unconstrained case also hold for the constrained Levenberg–Marquardt method. Moreover, the influence of the regularization parameter on the level of inexactness and the convergence rate is described. The paper improves and unifies several existing results on the local convergence of Levenberg–Marquardt methods.

16.
In this paper, we describe a new method for solving the inverse problem associated with a set of ordinary nonlinear differential equations. It is proved that the method has quadratic convergence.

17.
We propose and analyze a perturbed version of the classical Josephy–Newton method for solving generalized equations. This perturbed framework is convenient to treat in a unified way standard sequential quadratic programming, its stabilized version, sequential quadratically constrained quadratic programming, and linearly constrained Lagrangian methods. For the linearly constrained Lagrangian methods, in particular, we obtain superlinear convergence under the second-order sufficient optimality condition and the strict Mangasarian–Fromovitz constraint qualification, while previous results in the literature assume (in addition to second-order sufficiency) the stronger linear independence constraint qualification as well as the strict complementarity condition. For the sequential quadratically constrained quadratic programming methods, we prove primal-dual superlinear/quadratic convergence under the same assumptions as above, which also gives a new result.

18.
Extended Linear-Quadratic Programming (ELQP) problems were introduced by Rockafellar and Wets for various models in stochastic programming and multistage optimization. Several numerical methods with linear convergence rates have been developed for solving fully quadratic ELQP problems, where the primal and dual coefficient matrices are positive definite. We present a two-stage sequential quadratic programming (SQP) method for solving ELQP problems arising in stochastic programming. The first-stage algorithm realizes global convergence and the second-stage algorithm realizes superlinear local convergence under a condition called B-regularity. B-regularity is milder than the fully quadratic condition; the primal coefficient matrix need not be positive definite. Numerical tests are given to demonstrate the efficiency of the algorithm. Solution properties of the ELQP problem under B-regularity are also discussed. Supported by the Australian Research Council.

19.
A Predictor-Corrector Smoothing Method for Convex Quadratic Programming
This paper proposes a smoothing method for convex quadratic programming, extending the smoothing algorithm of Engelke and Kanzow for linear programming. The main idea is to write the Karush-Kuhn-Tucker optimality conditions of the quadratic program as a system of nonlinear, nonsmooth equations and to solve a smooth approximation of this system by a Newton-type method. The method is of predictor-corrector type. Under fairly weak conditions, global convergence and superlinear convergence of the algorithm are proved.

20.
In this paper, a new type of stepsize, the approximate optimal stepsize, for gradient methods is introduced to interpret the Barzilai–Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize for the strictly convex quadratic minimization problem is presented. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then use the two well-known BB stepsizes to truncate it for improved numerical behavior and treat the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
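The paper's approximate optimal stepsize is not reproduced here; for reference, the plain BB method it interprets can be sketched as follows for the strictly convex quadratic min 0.5 x^T A x - b^T x, using the first BB stepsize alpha = s^T s / s^T y. The initial stepsize choice is a conventional safeguard, not from the paper.

```python
import numpy as np

def bb_gradient(A, b, x0, tol=1e-10, max_iter=500):
    """Barzilai-Borwein gradient method for min 0.5 x^T A x - b^T x, A SPD.
    The gradient is g(x) = A x - b; the minimizer solves A x = b."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A, 2)   # safe first stepsize (demo choice)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s = x_new - x                    # step difference
        yv = g_new - g                   # gradient difference (= A s)
        alpha = (s @ s) / (s @ yv)       # BB1 stepsize for the next step
        x, g = x_new, g_new
    return x
```

Note the nonmonotone character of BB: individual steps may increase the objective, yet the iterates converge R-linearly for SPD quadratics, which is the behavior the approximate-optimal-stepsize viewpoint above seeks to explain.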


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号