Similar Articles
20 similar articles found (search time: 109 ms)
1.
黄莎  董云达 《数学杂志》2011,31(5):952-954
This paper studies a projection-contraction algorithm for solving monotone variational inequality problems. Using the analysis technique of Professor He Bingsheng, a new stepsize is derived, and global convergence of the algorithm under this stepsize is proved. Preliminary numerical experiments demonstrate the practicality of the new stepsize.
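To make the projection-contraction idea concrete, here is a minimal sketch of a generic He-type projection-contraction step; the names (`pc_step`, `proj`), the toy problem, and the classical stepsize `alpha` are illustrative assumptions, not the paper's new stepsize rule.

```python
import numpy as np

def pc_step(u, F, proj, beta=0.1, gamma=1.8):
    """One generic projection-contraction step for the monotone
    VI(F, Omega): find u* in Omega with (v - u*)^T F(u*) >= 0 for all
    v in Omega.  `proj` is the projection onto Omega.  The stepsize
    `alpha` is a classical He-type choice, not the paper's new rule."""
    u_bar = proj(u - beta * F(u))              # prediction by projection
    e = u - u_bar                              # projection residual
    d = e - beta * (F(u) - F(u_bar))           # correction direction
    alpha = e.dot(d) / max(d.dot(d), 1e-16)    # contraction stepsize
    return u - gamma * alpha * d               # relaxed correction step

# Toy usage: F(u) = Au + b on the nonnegative orthant (an LCP).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
u = np.zeros(2)
for _ in range(200):
    u = pc_step(u, lambda v: A @ v + b, lambda x: np.maximum(x, 0.0))
```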

2.
后六生 《应用数学》2012,25(2):454-459
Recently, He [3] proved the O(1/t) convergence rate of the projection-contraction algorithm. Inspired by this, this paper proves the O(1/t) convergence rate of the parallel splitting augmented Lagrangian method for structured monotone variational inequalities.
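For context, an O(1/t) rate in this literature is typically stated for the ergodic average of the iterates; a schematic form is given below, where the constant c and the bounded set B_ρ are generic placeholders, not the paper's exact statement.

```latex
% Schematic O(1/t) ergodic statement for a monotone VI(F, \Omega);
% c and the ball B_\rho are generic, not the paper's exact constants.
\[
  \bar{u}_t = \frac{1}{t+1}\sum_{k=0}^{t} u_k ,
  \qquad
  \sup_{u \in \Omega \cap B_\rho} (\bar{u}_t - u)^{\top} F(u)
  \le \frac{c}{t+1}.
\]
```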

3.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage stepsize Zhang H.C.-type nonmonotone variable-metric gradient projection method is established for optimization problems with convex-set constraints, and its global convergence is proved. Numerical experiments show that the algorithm is effective and suitable for large-scale problems.

4.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage stepsize nonmonotone variable-metric gradient projection algorithm is established for optimization problems with convex-set constraints; global convergence and, under certain conditions, a Q-superlinear convergence rate are proved. Numerical results show that the new algorithm is effective and suitable for large-scale problems.
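As a rough illustration of the ingredients shared by entries 3 and 4, the sketch below combines a GLP-style projected-gradient direction with a Zhang-Hager nonmonotone acceptance test; the plain identity-metric direction and all parameter values are assumptions, and the papers' two-stage variable-metric stepsize scheme is not reproduced.

```python
import numpy as np

def glp_nonmonotone_pg(f, grad, proj, x0, eta=0.85, delta=1e-4,
                       max_iter=500, tol=1e-8):
    """Hedged sketch: GLP projected-gradient direction d = P(x - g) - x
    with a Zhang-Hager nonmonotone Armijo test against the reference
    value C_k.  Identity metric and parameter values are assumptions."""
    x = x0.copy()
    C, Q = f(x), 1.0
    for _ in range(max_iter):
        g = grad(x)
        d = proj(x - g) - x              # feasible descent direction
        if np.linalg.norm(d) < tol:
            break
        t = 1.0
        while f(x + t * d) > C + delta * t * g.dot(d):
            t *= 0.5                     # backtrack along d
        x = x + t * d
        Q_next = eta * Q + 1.0           # Zhang-Hager reference update
        C = (eta * Q * C + f(x)) / Q_next
        Q = Q_next
    return x
```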

5.
赵婷婷  王湘美 《经济数学》2019,36(3):104-110
The subgradient method is one of the classical and effective methods for large-scale convex optimization, and the choice of stepsize is crucial to its convergence. Goffin et al. (1999) proposed a dynamic-stepsize subgradient algorithm; by improving one of its parameters, this paper proposes an improved dynamic-stepsize subgradient algorithm and proves its convergence. Finally, numerical experiments show that the improved algorithm is more effective than the original one.
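A minimal sketch of a dynamic (Polyak-type) stepsize subgradient iteration in the Goffin-Kiwiel spirit follows; it assumes a known target level `f_target`, whereas the cited papers adapt the level and tune the relaxation parameter instead.

```python
import numpy as np

def dynamic_subgradient(f, subgrad, x0, f_target, gamma=1.5,
                        max_iter=1000, tol=1e-10):
    """Hedged sketch of a dynamic-stepsize subgradient method:
    step = gamma * (f(x) - level) / ||g||^2 with 0 < gamma < 2.
    A known target level `f_target` is an assumption made here."""
    x, f_best = x0.copy(), np.inf
    for _ in range(max_iter):
        g = subgrad(x)
        fx = f(x)
        f_best = min(f_best, fx)
        gg = g.dot(g)
        if fx - f_target <= tol or gg == 0.0:
            break
        x = x - gamma * (fx - f_target) / gg * g   # dynamic stepsize move
    return x, f_best
```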

6.
This paper discusses numerical solutions of variational inequalities in Banach spaces when the underlying subset is noncompact. An Ishikawa-type iterative algorithm is proposed for the problem, and both subsequential convergence and global convergence of the algorithm are proved. The existence of solutions to the variational inequality is also established.

7.
This paper discusses the variational inequality problem VIP(X,F), where F is a monotone function and the constraint set X is a bounded region. Using a perturbation technique and a class of smooth complementarity functions, the problem is equivalently reformulated as a sequence of nonlinear systems of equations containing two parameters, and on this basis an interior-point continuation algorithm for VIP(X,F) is established. Important properties such as existence and uniqueness of solutions of these systems are analyzed and proved, good global convergence of the algorithm is established, and preliminary numerical experiments are reported.

8.
By introducing a class of smooth merit functions based on the mid (median) function, a new smooth merit function for box-constrained variational inequalities is constructed; it has a simple form and good differentiability properties. A damped Newton algorithm for solving box-constrained variational inequalities is then given. Under fairly weak conditions, global convergence and a local superlinear convergence rate are proved, as well as finite-step convergence for linear box-constrained variational inequalities. Numerical results indicate that the algorithm is reliable and efficient.
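Below is a hedged sketch of one standard way to smooth the mid-function residual of a box-constrained VI, using CHKS smoothing of the plus function; the paper's particular merit function may be built differently.

```python
import numpy as np

def smooth_plus(t, mu):
    """CHKS smoothing of max(0, t): converges to the plus function as
    mu -> 0.  One common choice; the paper's smoothing may differ."""
    return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

def smooth_mid(l, u, z, mu):
    # mid(l, u, z) = z - max(0, z - u) + max(0, l - z), smoothed termwise
    return z - smooth_plus(z - u, mu) + smooth_plus(l - z, mu)

def smoothed_residual(x, F, l, u, mu):
    """x solves the box-constrained VI iff x = mid(l, u, x - F(x));
    a smooth merit function is 0.5 * ||smoothed_residual||^2."""
    return x - smooth_mid(l, u, x - F(x), mu)
```

A damped Newton method would then drive this residual to zero while letting mu tend to zero.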

9.
It is first proved that the generalized monotone set-valued mixed variational inequality is equivalent to a new fixed-point problem. On this basis, iterative algorithms for solving generalized set-valued mixed variational inequalities and related optimization problems are proposed, together with a convergence analysis of these new algorithms. Our results generalize and unify some recent results in this field.

10.
The classical variational inequality with a Lipschitz continuous and strongly monotone operator is studied on a nonempty closed convex subset of a Hilbert space. A new class of three-step relaxed hybrid steepest-descent methods is introduced for solving it, and strong convergence of the algorithm is proved under suitable assumptions on the algorithm parameters.

11.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.
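To illustrate the one-trial-stepsize idea, here is a sketch of a PRP conjugate-gradient iteration whose line search starts from the doubled previously accepted stepsize, so that near a good minimizer the first trial is often accepted; the acceptance rule and safeguards in the paper are more elaborate, and the PRP+ formula is an assumption for this sketch.

```python
import numpy as np

def prp_cg(f, grad, x0, c1=1e-4, max_iter=1000, tol=1e-6):
    """Hedged sketch: PRP conjugate gradient with Armijo backtracking
    whose first trial is the doubled previously accepted stepsize."""
    x = x0.copy()
    g = grad(x)
    d, t = -g, 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0.0:        # safeguard: restart with steepest descent
            d = -g
        fx, slope = f(x), g.dot(d)
        while f(x + t * d) > fx + c1 * t * slope:
            t *= 0.5               # backtrack until Armijo holds
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)  # PRP+, nonnegative
        d = -g_new + beta * d
        x, g = x_new, g_new
        t = 2.0 * t                # next first trial: doubled stepsize
    return x
```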

12.
In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically by using five nonquadratic functions. In order to investigate the effects of the stepsize on the performances of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.

13.
A fundamental problem in constrained nonlinear optimization algorithms is the design of a satisfactory stepsize strategy which converges to unity. In this paper, we discuss stepsize strategies for Newton or quasi-Newton algorithms which require the solution of quadratic optimization subproblems. Five stepsize strategies are considered for three different subproblems, and the conditions under which the stepsizes will converge to unity are established. It is shown that these conditions depend critically on the convergence of the Hessian approximations used in the algorithms. The stepsize strategies are constructed using basic principles from which the conditions for unit stepsizes follow. Numerical results are discussed in an Appendix. Paper presented to the XI Symposium on Mathematical Programming, Bonn, Germany, 1982. This work was completed while the author was visiting the European University in Florence where, in particular, Professors Fitoussi and Velupillai provided the opportunity for its completion. The author is grateful to Dr. L. C. W. Dixon for his helpful comments and criticisms on numerous versions of the paper, and to R. G. Becker for programming the algorithms in Section 3 and for helpful discussions concerning these algorithms.

14.
Recently, similar to Hager and Zhang (SIAM J Optim 16:170–192, 2005), Yu (Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems. Thesis of Doctor's Degree, Sun Yat-Sen University, 2007) and Yuan (Optim Lett 3:11–21, 2009) proposed modified PRP conjugate gradient methods which generate sufficient descent directions without any line searches. In order to obtain the global convergence of their algorithms, they need the assumption that the stepsize is bounded away from zero. In this paper, we make a slight modification to these methods so that the modified methods retain the sufficient descent property. Without requiring a positive lower bound on the stepsize, we prove that the proposed methods are globally convergent. Some numerical results are also reported.

15.
In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms utilize stepsize strategies based on differentiable penalty functions and quadratic programming subproblems. The essential difference between the algorithms is in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that is dependent on the penalty functions. The quadratic objective function utilizes an approximate Hessian of the Lagrangian augmented by the penalty functions. In this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions. The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, the boundedness follows as the strategy is always satisfied for finite values of the parameter. These properties are utilized to establish global convergence and the condition under which unit stepsizes are achieved. There is also a compatibility between the quadratic objective function and the stepsize strategy to ensure the consistency of the properties for unit steps and subsequent convergence rates. This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.

16.
张国凤  赵双锁 《计算数学》2000,22(3):285-294
1. Introduction. Self-starting one-block methods for initial value problems of stiff ODEs have been studied in [4, 5]. Here e = (1,1,…,1)^T denotes the vector of ones; for one case see [4], and for the case 0 < a_1 < a_2 < … < a_r = r see [5]. It is well known that effective methods for solving (1.1) are usually implicit. Such methods can be used effectively in practice only if the variable-stepsize computation problem is solved effectively and an effective iterative method is available for computing their solutions. The latter point is self-evident; the former holds because fixed-stepsize computation often either causes a serious loss of accuracy or a serious increase in computational cost (this occurs when (t0, T] contains two subintervals over which the reasonable integration stepsizes differ drastically)…

17.
In this paper, we propose a unified framework of extragradient-type methods for solving pseudomonotone variational inequalities, which allows one to take different stepsize rules and requires the computation of only two projections at each iteration. It is shown that the modified extragradient method of Ref. 1 falls within this framework with a short stepsize and so does the method of Ref. 2 with a long stepsize. It is further demonstrated that the algorithmic framework is globally convergent under mild assumptions and is sublinearly convergent if in addition a projection-type error bound holds locally. Preliminary numerical experiments are reported.
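The prototype of this two-projection framework is Korpelevich's extragradient method, sketched below under the assumption of a fixed stepsize `tau` small relative to the Lipschitz constant of F; the framework itself also admits adaptive short and long stepsize rules.

```python
import numpy as np

def extragradient(F, proj, x0, tau=0.1, max_iter=2000, tol=1e-10):
    """Korpelevich's extragradient method: two projections per iteration.
    A fixed stepsize tau is assumed, small relative to the Lipschitz
    constant of F."""
    x = x0.copy()
    for _ in range(max_iter):
        y = proj(x - tau * F(x))         # predictor projection
        x_next = proj(x - tau * F(y))    # corrector projection
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```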

18.
It is well-known (see Pang and Chan [8]) that Newton's method, applied to strongly monotone variational inequalities, is locally and quadratically convergent. In this paper we show that Newton's method yields a descent direction for a non-convex, non-differentiable merit function, even in the absence of strong monotonicity. This result is then used to modify Newton's method into a globally convergent algorithm by introducing a linesearch strategy. Furthermore, under strong monotonicity (i) the optimal face is attained after a finite number of iterations, (ii) the stepsize is eventually fixed to the value one, resulting in the usual Newton step. Computational results are presented.
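For illustration only: in the special case Omega = R^n the VI reduces to F(x) = 0, and a globalized Newton method with an Armijo linesearch on the smooth merit 0.5·||F(x)||² looks as follows. The paper instead works with a non-convex, non-differentiable merit function on a general constraint set, so this is a simplified stand-in, not the paper's algorithm.

```python
import numpy as np

def damped_newton(F, J, x0, c1=1e-4, max_iter=100, tol=1e-12):
    """Globalized Newton for F(x) = 0 with Armijo backtracking on
    m(x) = 0.5*||F(x)||^2.  Along d = -J(x)^{-1} F(x) the directional
    derivative of m is -2*m(x), which gives the acceptance test below."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        m = 0.5 * Fx.dot(Fx)
        if m < tol:
            break
        d = np.linalg.solve(J(x), -Fx)   # Newton direction
        t = 1.0
        while 0.5 * np.sum(F(x + t * d) ** 2) > (1.0 - 2.0 * c1 * t) * m:
            t *= 0.5                     # damping (linesearch) step
        x = x + t * d
    return x
```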

19.
Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization
For unconstrained optimization, the two-point stepsize gradient method is preferable over the classical steepest descent method both in theory and in real computations. In this paper we interpret the choice for the stepsize in the two-point stepsize gradient method from the angle of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
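The baseline being modified here is the Barzilai-Borwein two-point stepsize gradient iteration, sketched below; the BB1 formula is the classical choice, and the paper's interpolation-based modifications are not reproduced.

```python
import numpy as np

def bb_gradient(grad, x0, t0=1.0, max_iter=1000, tol=1e-8):
    """Barzilai-Borwein two-point stepsize gradient sketch: the BB1
    stepsize s^T s / s^T y uses information from the last two points."""
    x = x0.copy()
    g = grad(x)
    t = t0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        t = s.dot(s) / sy if sy > 1e-16 else t0   # BB1 stepsize
        x, g = x_new, g_new
    return x
```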

20.
A General Line-Search Model for Unconstrained Optimization and Global Convergence of the BFGS Method
This paper presents a general form of acceptable stepsize rules for line searches in unconstrained optimization algorithms, which subsumes most existing stepsize rules as special cases, and studies its basic properties. Finally, global convergence of the BFGS algorithm for unconstrained optimization combined with this general line-search model is proved.
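One familiar instance of an acceptable-stepsize rule that such a general model covers as a special case is the weak Wolfe conditions; a sketch follows, where the parameter values c1 and c2 are conventional assumptions.

```python
import numpy as np

def wolfe_acceptable(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """The weak Wolfe conditions: sufficient decrease plus a curvature
    condition, with 0 < c1 < c2 < 1 as conventional parameter choices."""
    fx, slope = f(x), grad(x).dot(d)
    sufficient_decrease = f(x + t * d) <= fx + c1 * t * slope
    curvature = grad(x + t * d).dot(d) >= c2 * slope
    return sufficient_decrease and curvature
```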
