Similar Literature
19 similar documents were retrieved.
1.
安荣  李媛 《计算数学》2013,35(1):11-20
Based on the penalty method and an augmented Lagrangian functional, this paper presents an augmented Lagrangian iterative method for solving a fourth-order obstacle problem with a gradient constraint and proves the convergence of the algorithm. Numerical experiments using a nonconforming finite element discretization show that the algorithm is effective.
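
As a rough, generic sketch (not the paper's exact functional), an augmented Lagrangian treatment of a gradient constraint such as $|\nabla v| \le 1$ typically couples a penalty parameter $r > 0$ with an Uzawa-type multiplier update; writing the constraint as $g(v) = |\nabla v|^2 - 1 \le 0$ and the energy as $J(v)$, one standard form is

```latex
L_r(v,\lambda) = J(v) + \frac{1}{2r}\Bigl(\bigl\|(\lambda + r\,g(v))_+\bigr\|^2 - \|\lambda\|^2\Bigr),
\qquad
v^{k+1} \in \arg\min_v L_r(v,\lambda^k),
\qquad
\lambda^{k+1} = \bigl(\lambda^k + r\,g(v^{k+1})\bigr)_+ .
```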

2.
Starting from the governing equations and the nonlinear complementarity conditions that an impact-contact computational model must satisfy, the equivalence between nonlinear complementarity problems and constrained optimization is used to recast the nonlinear complementarity contact problem as a nonlinear programming problem, and a bi-conjugate projected gradient method for impact-contact problems is derived systematically. The augmented Lagrange multiplier method removes the restriction of penalty-function approaches, which must shrink the iteration step size to keep the computation stable, so that relatively large step sizes can be used even for impact-contact problems. Within the resulting unconstrained programming formulation, which is equivalent to the original complementarity problem, the bi-conjugate projected gradient algorithm improves the nonlinear search speed and the computational efficiency. Numerical results for the model problems show that the proposed bi-conjugate projected gradient theory and method are correct and effective.

3.
柳颜  贺素香 《应用数学》2020,33(1):138-145
This paper proposes a trust region method based on an exponential-type augmented Lagrange function for inequality-constrained optimization problems. Using the exponential-type augmented Lagrange function, the exactly solved subproblem of the classical augmented Lagrangian method is replaced by a trust region subproblem, which reduces the computational cost, and a corresponding trust region algorithm is constructed. Under suitable assumptions, the global convergence of the algorithm is proved, and numerical results on classical test problems are reported.
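
For orientation only, one classical exponential-type augmented Lagrangian for $\min f(x)$ subject to $g_i(x) \le 0$ (the paper's variant may differ in detail) is the exponential multiplier function

```latex
L_c(x,\lambda) = f(x) + \frac{1}{c}\sum_{i=1}^{m} \lambda_i\bigl(e^{\,c\,g_i(x)} - 1\bigr),
\qquad
\lambda_i^{k+1} = \lambda_i^{k}\, e^{\,c\,g_i(x^{k+1})},
```

where, in a trust region variant of the kind described above, the unconstrained minimization of $L_c(\cdot,\lambda^k)$ is replaced by an (approximate) trust region subproblem.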

4.
马玉敏  蔡邢菊 《计算数学》2022,44(2):272-288
The augmented Lagrangian method is an effective algorithm for convex optimization with linear constraints. The linearized augmented Lagrangian method linearizes the quadratic penalty term of the augmented Lagrangian function and adds a proximal regularization term so that the subproblem becomes easy to solve; the choice of the regularization coefficient is crucial for both convergence and convergence speed. A large coefficient guarantees convergence but tends to force small steps, while a small coefficient allows larger steps but may destroy convergence. This paper considers convex optimization problems with linear equality or inequality constraints and designs a class of indefinite linearized augmented Lagrangian methods via an adaptive technique: the regularization coefficient is chosen adaptively from the information at the current iterate, so that the admissible step range of the subproblem is as large as possible while convergence is preserved, thereby accelerating the algorithm. Global convergence is proved theoretically, and numerical experiments illustrate the effectiveness of the method.
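
As a point of reference, a standard linearized augmented Lagrangian template for $\min f(x)$ subject to $Ax = b$ (written here in generic form, not necessarily the paper's exact scheme) reads

```latex
x^{k+1} = \arg\min_x \Bigl\{ f(x) - \langle \lambda^k, Ax - b\rangle
          + \beta\,\langle A^{\mathrm T}(Ax^k - b),\, x - x^k\rangle
          + \tfrac{r}{2}\,\|x - x^k\|^2 \Bigr\},
\qquad
\lambda^{k+1} = \lambda^k - \beta\,(Ax^{k+1} - b),
```

where the classical "safe" choice $r \ge \beta\|A^{\mathrm T}A\|$ guarantees convergence but forces small steps; the adaptive, possibly indefinite choice of $r$ is the contribution summarized above.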

5.
A reduced preconditioned conjugate gradient path method combined with a nonmonotone technique is used to solve nonlinear optimization problems with linear equality constraints. Based on a generalized elimination, the original problem is transformed into an unconstrained problem in the null space of the equality-constraint matrix; a reduced preconditioned equation is obtained through an augmented system, and a conjugate gradient path is constructed to solve the quadratic model, which yields the search direction and the iteration step size. Using the good properties of the conjugate gradient path, and under reasonable assumptions, the algorithm is shown not only to be globally convergent but also to retain a fast superlinear convergence rate. Numerical results further demonstrate the feasibility and effectiveness of the algorithm.

6.
王华 《运筹学学报》2011,15(2):85-94
Nonlinear complementarity problems can be reformulated as nonlinearly constrained optimization problems. A feasible SQP method with a nonmonotone line search is proposed. A feasible descent direction is obtained from the K-T point of the QP subproblem, and a higher-order correction step is introduced to overcome the Maratos effect. The step size is obtained by a nonmonotone line search technique. Global convergence is proved without the strict complementarity condition, and numerical experiments are reported.

7.
This paper studies the case in which the linearized constraints of a trust region method for constrained optimization have no solution within the trust region. Using an approach based on an augmented Lagrange function, an improved trust region method for constrained optimization is obtained whose linearized constraints are solvable within the trust region and which possesses global and superlinear convergence.

8.
An algorithm for solving nonlinearly constrained optimization problems is presented. Using a Lagrange function, the constrained problem is transformed into an unconstrained one and solved in that form. The method only requires solving a linear system of equations at each step, which reduces the computational cost and speeds up the computation. Convergence of the algorithm is proved under certain conditions, and numerical experiments show that the method is effective.
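
The abstract gives no details, but the idea of driving the iteration by a single linear solve per step can be illustrated by a Lagrange-Newton sketch for an equality-constrained toy problem (the objective, constraint, and iteration below are purely hypothetical illustrations, not the paper's algorithm):

```python
import numpy as np

def f_grad(x):                 # gradient of f(x) = x1^2 + x2^2
    return 2.0 * x

def f_hess(x):                 # Hessian of f
    return 2.0 * np.eye(2)

def h(x):                      # equality constraint h(x) = x1 + x2 - 1 = 0
    return np.array([x[0] + x[1] - 1.0])

def h_jac(x):                  # Jacobian of h
    return np.array([[1.0, 1.0]])

x, lam = np.zeros(2), np.zeros(1)
for _ in range(20):
    A = h_jac(x)
    KKT = np.block([[f_hess(x), A.T],
                    [A, np.zeros((1, 1))]])        # KKT matrix of the Lagrangian
    rhs = -np.concatenate([f_grad(x) + A.T @ lam, h(x)])
    step = np.linalg.solve(KKT, rhs)               # one linear solve per iteration
    x, lam = x + step[:2], lam + step[2:]

print(x)    # -> [0.5 0.5], the minimizer of x1^2 + x2^2 on x1 + x2 = 1
```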

9.
Based on properties of the l_∞ norm and the singular value thresholding technique, this paper proposes an algorithm for Hankel matrix completion. The algorithm guarantees that the completed matrix produced at every iteration is a feasible Hankel matrix, which both reduces the time spent on singular value decompositions and yields a more accurate completion. Convergence of the new algorithm is also discussed. Finally, numerical experiments and a simple image-inpainting example show that the new algorithm is more effective than the accelerated proximal gradient algorithm, the thresholding-based augmented Lagrange multiplier algorithm, and the structure-preserving thresholding algorithm for Hankel matrix completion based on the Frobenius norm.
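
A minimal sketch of the two ingredients mentioned above, assuming the usual singular value thresholding operator and anti-diagonal averaging as the projection onto Hankel matrices (the paper's exact iteration, thresholds, and stopping rules are not reproduced here):

```python
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M by tau (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def hankel_project(M):
    """Average each anti-diagonal: the orthogonal projection onto Hankel matrices."""
    m, n = M.shape
    F = np.fliplr(M)                       # anti-diagonals of M = diagonals of F
    H = np.zeros_like(F)
    for k in range(-(m - 1), n):
        mask = np.eye(m, n, k=k, dtype=bool)
        H[mask] = np.diagonal(F, offset=k).mean()
    return np.fliplr(H)

def complete_hankel(observed, known, tau=1.0, iters=300):
    """observed: float data matrix; known: boolean mask of observed entries."""
    X = observed.copy()
    for _ in range(iters):
        X[known] = observed[known]         # keep the observed entries
        X = hankel_project(svt(X, tau))    # low-rank shrinkage, then Hankel feasibility
    return X
```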

10.
A continuous minimization method with fixed step size...
Solving a system of nonlinear equations F(x)=0 can be reformulated as finding a minimizer of the nonlinear least squares problem $\min \tfrac{1}{2} F(x)^{\mathrm T} F(x)$. This paper proposes a continuous minimization method for this nonlinear least squares problem with an explicitly given fixed step size, and proves global and linear convergence. Two numerical examples illustrate the advantages of the method.
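
A minimal fixed-step-size sketch of the idea, using a hypothetical two-dimensional system $F$ (not one of the paper's examples): the objective is $\varphi(x) = \tfrac{1}{2} F(x)^{\mathrm T} F(x)$ with gradient $J(x)^{\mathrm T} F(x)$, and the step size is held constant.

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 - 2.0, x[0] * x[1] - 1.0])

def J(x):                                   # Jacobian of F
    return np.array([[2.0 * x[0], 0.0],
                     [x[1],       x[0]]])

x, alpha = np.array([1.5, 0.7]), 0.05       # fixed step size
for _ in range(2000):
    x = x - alpha * J(x).T @ F(x)           # gradient step on 0.5*||F(x)||^2

print(x, F(x))                              # x is approximately (sqrt(2), 1/sqrt(2))
```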

11.
The $k$-means problem is an important problem in machine learning and combinatorial optimization. It is a classical NP-hard problem and is widely applied in data mining, production decision making, image processing, biomedical technology, and other areas. With the development of the times, people pay increasing attention to personal privacy protection: when decisions are routinely made by artificial intelligence algorithms, how can one extract as much information as possible from the data without leaking personal privacy? Over the past decade, researchers have continued to study the $k$-means problem with privacy protection and have obtained many results of both theoretical and practical value. This paper surveys differentially private algorithms for the $k$-means problem for the reader's reference.

12.
In this paper, a new type of stepsize, the approximate optimal stepsize, for gradient methods is introduced to interpret the Barzilai–Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize for the strictly convex quadratic minimization problem is presented. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then use the two well-known BB stepsizes to truncate it in order to improve the numerical performance, and treat the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
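
For context, a sketch of the classical Barzilai–Borwein gradient method on a strictly convex quadratic, which the approximate-optimal-stepsize method above refines (the BB1 stepsize below is the standard one; the paper's own stepsize rule is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)              # symmetric positive definite matrix
b = rng.standard_normal(50)

x = np.zeros(50)
g = A @ x - b                              # gradient of 0.5*x^T A x - b^T x
alpha = 1.0 / np.linalg.norm(A, 2)         # safe first step
for _ in range(200):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)              # BB1 stepsize s^T s / s^T y
    x, g = x_new, g_new

print(np.linalg.norm(A @ x - b))           # residual close to zero
```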

13.
A brief survey of gradient methods
孙聪  张亚 《运筹学学报》2021,25(3):119-132
Gradient methods are a class of first-order methods for solving optimization problems. Because of their simple form and low per-iteration cost, they are widely used for large-scale problems. This survey systematically introduces the iterative scheme and theoretical framework of gradient methods for smooth unconstrained problems. The most important parameter of a gradient method is the step size, whose choice directly determines the convergence properties and convergence rate. The construction of gradient step sizes and the corresponding convergence results are reviewed from four perspectives: line search frameworks, approximation techniques, stochastic techniques, and alternating or cyclically repeated step sizes. Extensions to nonsmooth and constrained problems, acceleration techniques for gradient methods, and stochastic gradient methods are also briefly discussed.
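
As an illustration of the line-search framework mentioned in the survey, here is a minimal steepest-descent loop with Armijo backtracking on the Rosenbrock function (an illustrative choice, not an example taken from the paper):

```python
import numpy as np

def f(x):                                  # Rosenbrock test function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

x, c, rho = np.array([-1.2, 1.0]), 1e-4, 0.5
for _ in range(20000):
    g = grad(x)
    t = 1.0
    while f(x - t * g) > f(x) - c * t * g @ g:   # Armijo sufficient-decrease test
        t *= rho
    x = x - t * g

print(x)        # slowly approaches the minimizer (1, 1)
```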

14.
The connection between the convergence of the Hestenes method of multipliers and the existence of augmented Lagrange multipliers for the constrained minimum problem (P): minimize f(x), subject to g(x)=0, is investigated under very general assumptions on X, f, and g. In the first part, we use the existence of augmented Lagrange multipliers as a sufficient condition for the convergence of the algorithm. In the second part, we prove that this is also a necessary condition for the convergence of the method and the boundedness of the sequence of the multiplier estimates. Further, we give very simple examples to show that the existence of augmented Lagrange multipliers is independent of smoothness conditions on f and g. Finally, an application to the linear-convex problem is given.
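
For reference, the Hestenes method of multipliers for problem (P) iterates on the quadratic augmented Lagrangian with penalty parameter $c > 0$:

```latex
L_c(x,\lambda) = f(x) + \lambda^{\mathrm T} g(x) + \frac{c}{2}\,\|g(x)\|^2,
\qquad
x^{k} \in \arg\min_x L_c(x,\lambda^{k}),
\qquad
\lambda^{k+1} = \lambda^{k} + c\, g(x^{k}).
```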

15.
For solving an equality constrained nonlinear least squares problem, a globalization scheme for the generalized Gauss-Newton method via damping is proposed. The stepsize strategy is based on a special exact penalty function. Under natural conditions the global convergence of the algorithm is proved. Moreover, if the algorithm converges to a solution with a sufficiently small residual, the algorithm is shown to change automatically into the undamped generalized Gauss-Newton method with a fast linear rate of convergence. The behaviour of the method is demonstrated on some examples taken from the literature.

16.
Based on a modified quasi-Newton equation and using the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage step-size Zhang H.C.-type nonmonotone variable-metric gradient projection method is developed for optimization problems with convex-set constraints, and its global convergence is proved. Numerical experiments show that the algorithm is effective and well suited to large-scale problems.

17.
In optimization theory, convex minimization problems have been intensively investigated in the current literature because of their wide range of applications. A major and effective tool for solving such problems is the forward-backward splitting algorithm. However, to guarantee convergence, it is usually assumed that the gradient of the smooth part is Lipschitz continuous and that the stepsize depends on the Lipschitz constant, which is not easy to obtain in practice. In this work, we propose a modified forward-backward splitting method using new linesearches for choosing suitable stepsizes and discuss the convergence analysis, including its complexity, without any Lipschitz continuity assumption on the gradient. Finally, we provide numerical experiments in signal recovery to demonstrate the computational performance of our algorithm in comparison with some well-known methods. Our results show that the proposed algorithm has good convergence behaviour and can outperform the compared methods.
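
A sketch of forward-backward splitting with a backtracking linesearch for the stepsize, on a small hypothetical LASSO instance (the specific linesearch rules proposed in the paper may differ; the sufficient-decrease test below is the standard one):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1                                  # l1 regularization weight

def smooth(x):                             # smooth part f(x) = 0.5*||Ax - b||^2
    r = A @ x - b
    return 0.5 * r @ r

def grad(x):
    return A.T @ (A @ x - b)

def prox_l1(x, t):                         # prox of t*lam*||.||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x, t = np.zeros(100), 1.0
for _ in range(500):
    g = grad(x)
    while True:                            # shrink t until the sufficient-decrease
        z = prox_l1(x - t * g, t)          # condition holds; no Lipschitz constant
        d = z - x                          # of the gradient is ever needed
        if smooth(z) <= smooth(x) + g @ d + (0.5 / t) * d @ d:
            break
        t *= 0.5
    x = z

print(smooth(x) + lam * np.abs(x).sum())   # final composite objective value
```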

18.
The classical method for optimizing a functional subject to an integral constraint is to introduce the Lagrange multiplier and apply the Euler-Lagrange equations to the augmented integrand. The Lagrange multiplier is a constant whose value is selected such that the integral constraint is satisfied. This value is frequently an eigenvalue of the boundary-value problem and is determined by a trial-and-error procedure. A new approach for solving this isoperimetric problem is presented. The Lagrange multiplier is introduced as a state variable and evaluated simultaneously with the optimum solution. A numerical example is given and is shown to have a large region of convergence.

19.
An effective algorithm is described for solving the general constrained parameter optimization problem. The method is quasi-second-order and requires only function and gradient information. An exterior point penalty function method is used to transform the constrained problem into a sequence of unconstrained problems. The penalty weight r is chosen as a function of the point x such that the sequence of optimization problems is computationally easy. A rank-one optimization algorithm is developed that takes advantage of the special properties of the augmented performance index. The optimization algorithm accounts for the usual difficulties associated with discontinuous second derivatives of the augmented index. Finite convergence is exhibited for a quadratic performance index with linear constraints; accelerated convergence is demonstrated for nonquadratic indices and nonlinear constraints. A computer program has been written to implement the algorithm and its performance is illustrated in fourteen test problems.
