Similar Literature
18 similar documents found
1.
Variable metric algorithms with constraints (Cited 3 times: 0 self-citations, 3 by others)
To date, variable metric algorithms have been among the most effective methods for unconstrained optimization. In recent years, the construction of analogous methods for constrained optimization has therefore attracted the interest of many researchers, who proposed the Wilson-Han-Powell algorithm and its refinements and proved that, under certain conditions, these algorithms converge superlinearly. Those conditions, however, are not only demanding but also hard to verify before the computation starts. Using the results of [1] and [2], reference [4] proposed a new class of algorithms for nonlinear programming with linear equality constraints and proved their superlinear convergence. This paper extends that result to the general constrained programming problem.

2.
A trust region algorithm for the Fermat location problem on closed convex sets (Cited 1 time: 1 self-citation, 0 by others)
杨益民 《应用数学》1999,12(4):36-40
This paper proposes a class of trust region algorithms for finding optimal solutions of the Fermat location problem on a closed convex set. The algorithm requires neither that the existing facility locations be non-collinear nor that the sequence of approximate matrices {Bk} generated during the iterations be bounded, and it is globally convergent.

3.
马玉敏  蔡邢菊 《计算数学》2022,44(2):272-288
The augmented Lagrangian method is an effective algorithm for convex optimization with linear constraints. The linearized augmented Lagrangian method linearizes the quadratic penalty term of the augmented Lagrangian function and adds a proximal regularization term, so that the subproblem becomes easy to solve; an appropriate choice of the regularization coefficient is crucial for both convergence and the rate of convergence. A larger coefficient guarantees convergence but tends to force small step sizes, while a smaller coefficient allows larger steps but can destroy convergence. This paper considers convex optimization with linear equality or inequality constraints. Using an adaptive technique, we design a class of indefinite linearized augmented Lagrangian methods that select a suitable regularization coefficient from the information at the current iterate, so that the admissible step-size range of the subproblem is as large as possible while convergence is preserved, thereby improving the convergence speed. We prove global convergence of the algorithm and demonstrate its effectiveness with numerical experiments.
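A minimal numpy sketch of a linearized-ALM iteration of the kind described above, applied to min ‖x‖₁ s.t. Ax = b so that the x-update is a single soft-thresholding step. The fixed regularization coefficient r > β‖A‖² used here is an illustrative assumption and stands in for the paper's adaptive, possibly indefinite choice.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_alm_l1(A, b, beta=1.0, iters=500):
    """Linearized augmented Lagrangian sketch for  min ||x||_1  s.t.  Ax = b.

    The quadratic penalty is linearized at the current iterate and a proximal
    term (r/2)||x - x_k||^2 is added, so the x-update reduces to one
    soft-thresholding step.  Here r is a fixed constant with r > beta*||A||^2,
    which is sufficient for convergence; the adaptive choice of r from the
    paper is not reproduced in this sketch.
    """
    m, n = A.shape
    r = 1.01 * beta * np.linalg.norm(A, 2) ** 2
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        grad = A.T @ (lam + beta * (A @ x - b))    # gradient of linearized AL terms
        x = soft_threshold(x - grad / r, 1.0 / r)  # proximal (x-) update
        lam = lam + beta * (A @ x - b)             # multiplier update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50); x_true[:3] = [1.0, -2.0, 0.5]
    x = linearized_alm_l1(A, A @ x_true)
    print("constraint residual:", np.linalg.norm(A @ x - A @ x_true))
```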

4.
Convergence of nonmonotone trust region algorithms for convexly constrained optimization (Cited 1 time: 0 self-citations, 1 by others)
This paper proposes a new class of nonmonotone trust region algorithms for convexly constrained optimization. Strong convergence is proved when the Hessian matrices {Bk} of the quadratic models are uniformly bounded, and weak convergence is proved when {Bk} grows at most linearly. This generalizes various existing trust region algorithms for constrained or convexly constrained optimization and improves the convergence results.

5.
A set of unboundedness conditions is proposed. On this basis, an efficient, globally convergent algorithm is given for computing K-K-T points of a class of nonconvex nonlinear programming problems over unbounded regions, and its convergence is proved under suitable conditions. The results extend existing work to unbounded regions, further enlarging the region over which the algorithm can be applied.

6.
The alternating minimization algorithm (AMA) was first proposed in [SIAM J. Control Optim., 1991, 29(1): 119-138] and can be used to minimize the sum of a strongly convex function and a convex function. This paper applies AMA directly to minimizing the sum of a strongly convex function and a weakly convex function. Under the assumption that the modulus of strong convexity exceeds the modulus of weak convexity, we prove that the sequence generated by AMA converges globally to a solution of the optimization problem, and that if one of the functions in the problem is smooth, the sequence generated by AMA converges at a linear rate.
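A small sketch of the AMA iteration on a toy splitting min 0.5‖x − a‖² + ‖z‖₁ s.t. x − z = 0. The strongly convex quadratic and the ℓ1 term are illustrative stand-ins (the paper pairs a strongly convex function with a weakly convex one), and the fixed step ρ is an assumed constant.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ama(a, rho=1.0, iters=200):
    """AMA sketch for  min_x,z  0.5*||x - a||^2 + ||z||_1   s.t.  x - z = 0.

    The x-step minimizes the ordinary Lagrangian (no penalty term), which is
    well defined because the first term is strongly convex; the z-step
    minimizes the augmented Lagrangian; the multiplier is then updated by a
    dual ascent step.
    """
    n = a.size
    x, z, lam = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = a - lam                                   # argmin of Lagrangian in x
        z = soft_threshold(x + lam / rho, 1.0 / rho)  # argmin of augmented Lagrangian in z
        lam = lam + rho * (x - z)                     # dual update
    return x, z

if __name__ == "__main__":
    x, z = ama(np.array([3.0, 0.2, -1.5]))
    print(x, z)   # both approach soft_threshold(a, 1) = [2, 0, -0.5]
```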

7.
Based on a new equivalence condition for the existence of solutions to a class of nonlinear programming problems, a continuation method that converges in the large is given and its convergence is proved.

8.
高岳林  井霞 《计算数学》2013,35(1):89-98
A branch-and-bound reduction method is proposed for solving a class of linear multiplicative programming problems, and its convergence is proved. In this method, the convex envelope technique for the product of two variables is used to obtain lower bounds on the products appearing in the objective and constraint functions; this determines a relaxed convex program of the original problem, from which a lower bound on the global optimal value and a feasible solution are obtained. To accelerate the convergence of the proposed algorithm, a hyperrectangle reduction strategy is used. Numerical results show that the proposed algorithm is feasible.
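The building block named in this abstract, the convex envelope of a product of two box-constrained variables, can be sketched as below (McCormick underestimators); the surrounding branch-and-bound loop and the hyperrectangle reduction strategy are not shown.

```python
import numpy as np

def mccormick_underestimate(x, y, xl, xu, yl, yu):
    """Convex envelope of the bilinear term w = x*y over [xl, xu] x [yl, yu].

    The pointwise maximum of the two supporting planes below is the convex
    envelope of x*y on the rectangle.  In a branch-and-bound scheme this value
    replaces each product in the objective/constraints, giving a relaxed
    convex program whose optimum is a lower bound for the original problem.
    """
    return np.maximum(xl * y + yl * x - xl * yl,
                      xu * y + yu * x - xu * yu)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xl, xu, yl, yu = -1.0, 2.0, 0.5, 3.0
    x = rng.uniform(xl, xu, 1000)
    y = rng.uniform(yl, yu, 1000)
    gap = x * y - mccormick_underestimate(x, y, xl, xu, yl, yu)
    print("min gap (should be >= 0):", gap.min())
```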

9.
This paper considers the constrained continuous multifacility location problem (CEMFLC). For this problem we give an algorithm that selects optimal facility locations on a closed set and prove that the algorithm is globally convergent. Finally, we point out that the algorithm can also be used to solve constrained or unconstrained Euclidean multifacility location (EMFL) problems and that it simplifies some existing algorithms for the EMFL problem.

10.
The gradient projection method is an effective class of constrained optimization algorithms and occupies an important place in optimization. However, the projection it uses is an orthogonal projection, which carries no second-derivative information of the objective and constraint functions, so its convergence speed is not entirely satisfactory. This paper introduces the concept of conjugate projection, uses it to construct a conjugate projection variable metric algorithm for general linear or nonlinear constraints, and proves that the algorithm is globally convergent under certain conditions. Since the conjugate projection appropriately incorporates second-derivative information of the objective and constraint functions, the convergence speed can be expected to improve. Numerical results show that the algorithm is effective.
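For contrast with the conjugate projection method described above, here is a minimal sketch of the classical orthogonal-projection gradient scheme the abstract starts from, on a simple box-constrained quadratic; the step size alpha and the box example are assumptions for illustration only.

```python
import numpy as np

def projected_gradient(grad, project, x0, alpha=0.1, iters=500):
    """Classical gradient projection iteration  x <- P(x - alpha * grad(x)).

    This is the orthogonal-projection scheme (no second-order information);
    the paper's conjugate projection variable metric method replaces the
    orthogonal projection with one built from curvature information, which is
    not reproduced here.
    """
    x = x0.copy()
    for _ in range(iters):
        x = project(x - alpha * grad(x))
    return x

if __name__ == "__main__":
    # minimize ||x - c||^2 over the box [0, 1]^n
    c = np.array([1.5, -0.3, 0.4])
    grad = lambda x: 2.0 * (x - c)
    project = lambda x: np.clip(x, 0.0, 1.0)
    print(projected_gradient(grad, project, np.zeros(3)))  # -> [1.0, 0.0, 0.4]
```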

11.
The paper discusses a general framework for outer approximation type algorithms for the canonical DC optimization problem. The algorithms rely on a polar reformulation of the problem and exploit an approximated oracle in order to check global optimality. Consequently, approximate optimality conditions are introduced and bounds on the quality of the approximate global optimal solution are obtained. A thorough analysis of properties which guarantee convergence is carried out; two families of conditions are introduced which lead to the design of six implementable algorithms, whose convergence can be proved within a unified framework.

12.
In this paper, we introduce two Schwarz type domain decomposition algorithms for solving boundary element equations, which decompose the original problem, defined on the global boundary surface, into several problems defined on sub-domains so that they may be solved iteratively or in parallel. The convergence of these methods is also proved.

13.
We present a new algorithm for solving equilibrium problems where the underlying bifunctions are pseudomonotone and not necessarily Lipschitz-type continuous. The algorithm is based on the auxiliary problem principle and Armijo-type linesearch techniques. Convergence properties of the algorithm are established; in particular, global convergence is proved under a few assumptions. Applications to generalized variational inequalities and some numerical results are reported.

14.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies and also those algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.
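As a toy illustration of the kind of algorithm covered by such convergence conditions, the sketch below mixes uniform sampling over the box (which keeps the iterates probabilistically dense in the domain) with Gaussian sampling around the incumbent. It is not one of the algorithms analysed in the paper, and all parameters and the test function are assumptions.

```python
import numpy as np

def adaptive_random_search(f, lower, upper, iters=2000, p_global=0.2, sigma=0.3, seed=0):
    """Toy adaptive stochastic search: each iterate is drawn either uniformly
    over the box (global exploration) or from a Gaussian centred at the
    incumbent (local exploitation).  The uniform component keeps the sampling
    probabilistically dense in the domain, which is the kind of condition the
    paper links to almost sure convergence of the best value found."""
    rng = np.random.default_rng(seed)
    best_x = rng.uniform(lower, upper)
    best_f = f(best_x)
    for _ in range(iters):
        if rng.random() < p_global:
            x = rng.uniform(lower, upper)
        else:
            x = np.clip(best_x + sigma * rng.standard_normal(lower.size), lower, upper)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

if __name__ == "__main__":
    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    lo, hi = -5.12 * np.ones(2), 5.12 * np.ones(2)
    print(adaptive_random_search(rastrigin, lo, hi))
```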

15.
In this paper, two nonmonotone Levenberg–Marquardt algorithms for unconstrained nonlinear least-square problems with zero or small residual are presented. These algorithms allow the sequence of objective function values to be nonmonotone, which accelerates the iteration progress, especially in the case where the objective function is ill-conditioned. Some global convergence properties of the proposed algorithms are proved under mild conditions which exclude the requirement for the positive definiteness of the approximate Hessian T(x). Some stronger global convergence properties and the local superlinear convergence of the first algorithm are also proved. Finally, a set of numerical results is reported which shows that the proposed algorithms are promising and superior to the monotone Levenberg–Marquardt algorithm according to the numbers of gradient and function evaluations.
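A schematic numpy version of a Levenberg–Marquardt step with a max-type nonmonotone acceptance test, in the spirit of the abstract; the damping updates for mu, the memory length, and the test problem are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def nonmonotone_lm(residual, jacobian, x0, mu=1.0, memory=5, iters=100):
    """Levenberg-Marquardt sketch with a nonmonotone acceptance test.

    The step solves (J^T J + mu I) d = -J^T r.  Instead of requiring descent
    with respect to the current value, the trial point is accepted when its
    objective lies below the maximum of the last `memory` objective values,
    which is the basic nonmonotone device described in the abstract.
    """
    x = np.asarray(x0, dtype=float)
    f = lambda z: 0.5 * np.sum(residual(z) ** 2)
    hist = [f(x)]
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        d = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ r)
        if f(x + d) <= max(hist[-memory:]):   # nonmonotone acceptance
            x = x + d
            hist.append(f(x))
            mu = max(mu * 0.5, 1e-12)         # successful step: relax damping
        else:
            mu *= 4.0                          # failed step: increase damping
    return x

if __name__ == "__main__":
    # small zero-residual test: r(x) = [x0^2 - 1, x0*x1 - 1], solution (1, 1)
    res = lambda x: np.array([x[0] ** 2 - 1.0, x[0] * x[1] - 1.0])
    jac = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
    print(nonmonotone_lm(res, jac, np.array([2.0, 2.0])))
```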

16.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
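A compact sketch combining the two ingredients named in the abstract, Oren–Luenberger self-scaling of the inverse Hessian approximation and a max-type nonmonotone Armijo backtracking; it is a plain illustration under assumed parameters rather than the paper's precise algorithm or either of its two line search forms.

```python
import numpy as np

def self_scaling_bfgs(f, grad, x0, iters=100, memory=5, c1=1e-4):
    """Self-scaling BFGS with a (max-type) nonmonotone Armijo line search.

    Before each BFGS update the inverse Hessian approximation H is multiplied
    by the Oren-Luenberger factor tau = (y's)/(y'Hy), which damps overly large
    eigenvalues; the Armijo test is taken against the maximum of the last
    `memory` objective values instead of the current one.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)
    hist = [f(x)]
    for _ in range(iters):
        g = grad(x)
        d = -H @ g
        t, fref = 1.0, max(hist[-memory:])
        while f(x + t * d) > fref + c1 * t * (g @ d):   # nonmonotone Armijo
            t *= 0.5
            if t < 1e-12:
                break
        s = t * d
        x_new = x + s
        y = grad(x_new) - g
        if y @ s > 1e-12:                      # keep H positive definite
            tau = (y @ s) / (y @ H @ y)        # self-scaling factor
            rho = 1.0 / (y @ s)
            V = np.eye(n) - rho * np.outer(s, y)
            H = tau * (V @ H @ V.T) + rho * np.outer(s, s)
        x = x_new
        hist.append(f(x))
    return x

if __name__ == "__main__":
    rosen = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
    rosen_grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
                                     200 * (x[1] - x[0] ** 2)])
    print(self_scaling_bfgs(rosen, rosen_grad, np.array([-1.2, 1.0])))
```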

17.
陈传  孔伟程 《计算数学》1988,10(3):299-310
1. Introduction. The problem discussed in this paper is

    min  f(x),  x ∈ R^n,
    s.t. c_i(x) = 0,  i = 1, …, q,        (1.1)
         c_i(x) ≤ 0,  i = q+1, …, p.

The recursive equality-constrained quadratic approximation algorithm for this problem was proposed by Murray (1969) and later developed by Biggs (1972). Starting from the trajectory of a penalty function, this approach sets up a quadratic programming subproblem that contains only equality constraints, so the search direction can be obtained by algebraic means; a line search along this direction completes one iteration. Biggs used a quadratic penalty function as the merit function in the line search and proved that the algorithm is globally convergent and locally superlinearly convergent.
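The core computational step of such a recursive quadratic approximation method, an equality-constrained quadratic subproblem, can be solved directly from its KKT system. The sketch below shows only this generic step under assumed data; Biggs's penalty-trajectory construction of the subproblem and the line search on the quadratic penalty function are not reproduced.

```python
import numpy as np

def eqp_step(B, g, A, c):
    """Solve the equality-constrained quadratic subproblem
        min  0.5 d'Bd + g'd   s.t.  A d + c = 0
    via its KKT system  [[B, A'], [A, 0]] [d; lam] = [-g; -c].
    """
    n, m = g.size, c.size
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # search direction d and multipliers

if __name__ == "__main__":
    B = np.eye(2); g = np.array([1.0, 1.0])
    A = np.array([[1.0, -1.0]]); c = np.array([0.5])
    d, lam = eqp_step(B, g, A, c)
    print(d, lam)   # d = [-1.25, -0.75], lam = [0.25]
```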

18.
On trust region algorithms for inequality constraints (Cited 3 times: 0 self-citations, 3 by others)
For nonlinear optimization problems with inequality constraints, this paper gives a trust region algorithm. Since the trust region constraint is imposed in the vector ∞-norm, the subproblem becomes a quadratic program, which also makes the algorithm more practical. Under the usual assumptions, global convergence and superlinear convergence of the algorithm are proved.
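A sketch of one ∞-norm trust region subproblem of the kind described above: the trust region becomes simple bounds, so the subproblem is an ordinary quadratic program. Scipy's SLSQP is used here merely as a stand-in QP solver, the example data are assumed, and the outer trust region loop of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def trust_region_subproblem(g, B, c_vals, c_jac, delta):
    """One infinity-norm trust region subproblem (sketch):

        min  g'd + 0.5 d'Bd
        s.t. c_i(x) + grad c_i(x)'d <= 0,   ||d||_inf <= delta.

    With the infinity-norm trust region the constraint ||d||_inf <= delta
    becomes the box  -delta <= d_i <= delta, so the subproblem is a QP.
    """
    n = g.size
    obj = lambda d: g @ d + 0.5 * d @ B @ d
    cons = [{"type": "ineq", "fun": lambda d, i=i: -(c_vals[i] + c_jac[i] @ d)}
            for i in range(len(c_vals))]
    res = minimize(obj, np.zeros(n), bounds=[(-delta, delta)] * n, constraints=cons)
    return res.x

if __name__ == "__main__":
    g = np.array([1.0, -2.0])
    B = np.array([[2.0, 0.0], [0.0, 2.0]])
    # single linearized constraint  c(x) + grad c(x)'d <= 0
    d = trust_region_subproblem(g, B, c_vals=[-0.5], c_jac=[np.array([1.0, 1.0])], delta=1.0)
    print(d)   # approximately [-0.5, 1.0]
```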
