
Received: 2021-08-30

AN ADAPTIVE INDEFINITE LINEARIZED AUGMENTED LAGRANGIAN METHOD FOR CONVEX OPTIMIZATION WITH LINEAR CONSTRAINTS
Ma Yumin, Cai Xingju. AN ADAPTIVE INDEFINITE LINEARIZED AUGMENTED LAGRANGIAN METHOD FOR CONVEX OPTIMIZATION WITH LINEAR CONSTRAINTS[J]. Mathematica Numerica Sinica, 2022, 44(2): 272-288.
Authors: Ma Yumin  Cai Xingju
Institution:School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, China
Abstract: The augmented Lagrangian method is an effective algorithm for solving convex optimization problems with linear constraints. When the primal subproblem must itself be solved iteratively, a common technique is to linearize the quadratic penalty term of the augmented Lagrangian function and add a proximal regularization term (equivalent to adding a proximal term to the objective of the subproblem); this yields the linearized augmented Lagrangian method. The proper choice of the regularization parameter is crucial to both the convergence and the convergence rate of the algorithm: a larger parameter guarantees convergence but tends to produce small stepsizes, while a smaller parameter permits larger stepsizes but may cause divergence. In this paper, we consider convex optimization problems with linear equality or inequality constraints. We design a class of adaptive indefinite linearized augmented Lagrangian methods, which use the information at the current iterate to select an appropriate regularization parameter adaptively, making the range of admissible primal stepsizes as large as possible while still guaranteeing convergence, and thereby improving the convergence rate. We prove the global convergence of the algorithm and illustrate its effectiveness with numerical experiments.
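To make the setup concrete, the following is a minimal sketch of one classical linearized augmented Lagrangian loop for a toy problem min_x 0.5‖x − c‖² s.t. Ax = b, with the conservative choice r = β‖AᵀA‖ for the regularization parameter. The problem data and parameter values are illustrative assumptions; the paper's adaptive, indefinite rule for choosing r is not reproduced here.

```python
import numpy as np

# Toy problem data (hypothetical): min 0.5*||x - c||^2  s.t.  A x = b.
rng = np.random.default_rng(0)
m, n = 5, 8
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)      # ensure the constraint is feasible
c = rng.standard_normal(n)

beta = 1.0                              # penalty parameter
r = beta * np.linalg.norm(A, 2) ** 2    # classical "safe" choice: r >= beta*||A^T A||
lam = np.zeros(m)                       # Lagrange multiplier
x = np.zeros(n)

for _ in range(5000):
    # Linearize the quadratic penalty (beta/2)||Ax - b||^2 at the current x
    # and add the proximal term (r/2)||x - x_k||^2; for this f the
    # regularized subproblem has a closed-form minimizer.
    g = beta * A.T @ (A @ x - b) - A.T @ lam   # gradient of the linearized part
    x = (c + r * x - g) / (1.0 + r)            # argmin of 0.5||x-c||^2 + g^T x + (r/2)||x-x_k||^2
    lam = lam - beta * (A @ x - b)             # multiplier (dual ascent) update

print(np.linalg.norm(A @ x - b))  # constraint residual; should be small
```

The safe parameter r = β‖AᵀA‖ keeps the proximal matrix rI − βAᵀA positive semidefinite, which is what guarantees convergence in the classical analysis; the cost is the small effective stepsize 1/(1+r), which is exactly the trade-off the adaptive indefinite scheme of the paper is designed to relax.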
Keywords: Convex optimization  Augmented Lagrangian method  Adaptive  Global convergence
