Similar Literature
20 similar documents found (search time: 125 ms)
1.
Based on the symmetric alternating direction method of multipliers (ADMM) combined with a relaxation-step technique, this paper proposes a symmetric ADMM with relaxation steps for two-block linearly constrained nonconvex optimization problems. The two multiplier-update steps of the new algorithm use different relaxation factors. Under standard assumptions, subsequence convergence of the new algorithm is proved. Under an error-bound condition, the iterate sequence generated by the new algorithm is shown to converge locally to a stationary point of the problem at a linear rate, and the corresponding augmented Lagrangian function sequence also converges linearly. Preliminary numerical results demonstrate the effectiveness of the new algorithm.
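For orientation, a symmetric ADMM with two relaxation factors for min f(x) + g(y) s.t. Ax + By = b typically takes the form below; this is a generic template in my notation, not the paper's exact scheme, whose contribution includes the admissible ranges of the factors r and s:

    x^{k+1} = \arg\min_x L_\beta(x, y^k, \lambda^k)
    \lambda^{k+1/2} = \lambda^k - r\beta\,(A x^{k+1} + B y^k - b)
    y^{k+1} = \arg\min_y L_\beta(x^{k+1}, y, \lambda^{k+1/2})
    \lambda^{k+1} = \lambda^{k+1/2} - s\beta\,(A x^{k+1} + B y^{k+1} - b)

where L_\beta(x, y, \lambda) = f(x) + g(y) - \langle \lambda, Ax + By - b \rangle + (\beta/2)\|Ax + By - b\|^2 is the augmented Lagrangian.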

2.
We consider a linearly constrained convex optimization model with three blocks of variables, whose objective is the sum of three separable functions. A new splitting algorithm is given: first, the augmented Lagrangian function is minimized with respect to each block variable; then a correction step produces the new iterate. Global convergence of the new algorithm and an O(1/t) convergence rate are proved.
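A typical prediction-correction template for this class of methods (a sketch in generic notation; the paper's correction matrix and step rule are its own): first a Gauss-Seidel prediction sweep over the three blocks,

    \tilde x_i^k = \arg\min_{x_i} L_\beta(\tilde x_1^k, \dots, \tilde x_{i-1}^k, x_i, x_{i+1}^k, \dots, \lambda^k),  i = 1, 2, 3,

then a correction step w^{k+1} = w^k - \alpha M (w^k - \tilde w^k), where w collects the three primal blocks and the multiplier, M is a correction matrix, and \alpha > 0 is a step length.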

3.
The augmented Lagrangian method is an effective method for nonlinear programming. This paper proves, from a new perspective, the convergence of the augmented Lagrangian method for inequality-constrained nonlinear nonsmooth convex optimization: the convergence theorem for the constant-stepsize gradient method is used to establish convergence of a constant-stepsize gradient method applied to the dual problem built from the augmented Lagrangian function, which in turn yields global convergence of the multiplier iteration of the augmented Lagrangian method.
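The dual identification used here can be made concrete. For min f(x) s.t. g_i(x) \le 0 with the Rockafellar augmented Lagrangian

    L_c(x, \lambda) = f(x) + \frac{1}{2c} \sum_i \big( \max\{0, \lambda_i + c\,g_i(x)\}^2 - \lambda_i^2 \big),

the dual function \theta_c(\lambda) = \min_x L_c(x, \lambda) is differentiable with a (1/c)-Lipschitz gradient, and the classical multiplier update

    \lambda_i^{k+1} = \max\{0, \lambda_i^k + c\,g_i(x^k)\}

is exactly a gradient step on \theta_c with constant step c. These are standard facts, sketched here to illustrate the argument of the abstract.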

4.
We consider the generalized alternating direction method of multipliers (GADMM) for minimizing the sum of two functions subject to linear constraints, where one function is convex and the other can be written as the difference of two convex functions. Each GADMM subproblem is handled with the linearization technique from difference-of-convex (DC) programming. Assuming the relevant functions satisfy the Kurdyka-Lojasiewicz inequality, and that the penalty parameter of the augmented Lagrangian is sufficiently large, the iterate sequence generated by GADMM is proved to converge to a stationary point of the augmented Lagrangian function. A convergence rate analysis of the algorithm is also given.
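The DC linearization in each subproblem can be sketched as follows: writing the nonconvex function as h = h_1 - h_2 with h_1, h_2 convex, the corresponding subproblem replaces h_2 by its affine minorant at the current point,

    y^{k+1} \in \arg\min_y \{ h_1(y) - \langle v^k, y \rangle + (\text{multiplier and penalty terms}) \}, \quad v^k \in \partial h_2(y^k),

so each subproblem becomes convex (a generic DC-programming device; the paper's subproblems may carry additional proximal terms).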

5.
The alternating direction method of multipliers (ADMM) is a simple and effective method for separable optimization problems, and its theory is fairly mature. When the objective contains a coupled term, however, the convergence analysis of ADMM is still at an early stage. For nonconvex nonsmooth nonseparable optimization problems, this paper proposes a new linearized symmetric proximal ADMM, built on the symmetric ADMM (SADMM) combined with a linearization technique. Under suitable assumptions, the sequence generated by the algorithm is proved to be bounded and to converge to a stationary point of the augmented Lagrangian function. Moreover, when an auxiliary function satisfies the Kurdyka-Lojasiewicz property, strong convergence of the algorithm is proved. Numerical experiments demonstrate the effectiveness of the algorithm.
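The linearization device for a coupled objective f(x) + g(y) + H(x, y), with H smooth, can be sketched: each block subproblem replaces H by its linearization at the current point and adds a proximal term, e.g.

    x^{k+1} = \arg\min_x \{ f(x) + \langle \nabla_x H(x^k, y^k), x - x^k \rangle + (\text{multiplier and penalty terms}) + \tfrac{1}{2}\|x - x^k\|_P^2 \},

with P a proximal matrix, while the multiplier is updated twice per sweep in the symmetric (SADMM) fashion. This is a generic template, not the paper's exact iteration.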

6.
姜帆  刘雅梅  蔡邢菊 《计算数学》2018,40(4):367-386
The generalized alternating direction method of multipliers (GADMM) is an effective algorithm for convex optimization. When the subproblems arising in practice are hard to solve, a proximal term can be added to each subproblem; the algorithm converges when the proximal matrix is positive definite, but this also shrinks the step size. Recent work shows that the proximal matrix may be indefinite to some extent. Within the framework of GADMM with indefinite proximal terms, this paper proposes an adaptive generalized ADMM that selects the proximal matrix dynamically so as to enlarge the step size. Global convergence of the algorithm is proved under rather weak assumptions. Preliminary numerical experiments verify the effectiveness of the algorithm.
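The proximal device in question augments, say, the x-subproblem with a term (1/2)\|x - x^k\|_D^2. A common linearizing choice is

    D = r I - \beta A^T A,

which is positive definite when r > \beta\|A^T A\| but may be taken mildly indefinite with a smaller r, giving a larger effective step; the adaptive scheme of the paper chooses D dynamically along the iterations (a sketch of the mechanism only; the precise rule is the paper's).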

7.
An adaptive Uzawa block relaxation algorithm for free boundary problems
Combining the augmented Lagrangian multiplier method with an adaptive rule, we obtain an adaptive Uzawa block relaxation method for the free boundary problem with a unilateral obstacle. The problem is discretized into a finite-dimensional linear complementarity problem, which is equivalent to a saddle-point problem expressed through an auxiliary variable and an augmented Lagrangian function. Applying the Uzawa block relaxation algorithm to this problem yields a two-step iteration whose main subproblem is linear, while the auxiliary variable can be computed explicitly. Since the convergence rate of the Uzawa block relaxation algorithm depends strongly on the penalty parameter, which is hard to choose well for a concrete problem, an adaptive rule is proposed that automatically adjusts the penalty parameter at each iteration. Numerical results confirm the theoretical analysis of the algorithm.
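The saddle-point structure can be sketched with an auxiliary variable p and multiplier \lambda via the augmented Lagrangian

    L_r(u, p, \lambda) = J(u) + \Phi(p) + \langle \lambda, u - p \rangle + \tfrac{r}{2}\|u - p\|^2,

where J is the quadratic energy of the discretized problem and \Phi encodes the obstacle condition. One Uzawa block relaxation sweep then minimizes in u (a linear problem), minimizes in p (explicit, thanks to the simple structure of \Phi), and updates \lambda^{k+1} = \lambda^k + r(u^{k+1} - p^{k+1}); the adaptive rule tunes r between sweeps. This is a schematic rendering in my notation, not the paper's exact operators.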

8.
ADMM is one of the classical algorithms for separable convex optimization, but it cannot guarantee convergence of the primal iterate sequence, and its subproblems can be computationally expensive. To guarantee global convergence of the full iterate sequence and to improve efficiency, the golden-ratio proximal ADMM, built on a convex-combination technique with the combination factor Ψ as its key parameter, was proposed. Building on that method, this paper enlarges the admissible range of Ψ and proposes a generalized golden-ratio proximal ADMM with a wider range of convergent step sizes. Under suitable assumptions, global convergence is proved, together with an O(1/N) ergodic sublinear rate for the objective residual and the constraint violation; when either of the objective functions is strongly convex, an O(1/N^2) ergodic rate is established. Finally, numerical experiments demonstrate the effectiveness of the generalized algorithm.
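The convex-combination step that gives the method its family name can be sketched as

    z^k = \frac{\Psi - 1}{\Psi}\, x^k + \frac{1}{\Psi}\, z^{k-1},

with the next x-subproblem anchored at z^k rather than at x^k; in the original method Ψ is tied to the golden ratio (1 + \sqrt 5)/2, and the present paper enlarges the admissible range of Ψ (a generic sketch; the exact placement of the combination step follows the paper).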

9.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a nonmonotone variable-metric gradient projection algorithm with a two-stage step size is established for optimization problems with convex set constraints. Global convergence, and a Q-superlinear convergence rate under suitable conditions, are proved. Numerical results show that the new algorithm is effective and well suited to large-scale problems.
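The basic step being generalized is the GLP projection-arc search: with P_\Omega the projection onto the convex feasible set \Omega and D_k the variable metric built from the modified quasi-Newton equation,

    x^{k+1} = P_\Omega\big( x^k - \alpha_k D_k \nabla f(x^k) \big),

where \alpha_k is fixed by a two-stage, nonmonotone rule that tests the new function value against the maximum over a window of recent values rather than against f(x^k) alone (a generic template; the paper's acceptance test is its own).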

10.
常小凯 《计算数学》2014,36(2):133-142
Based on the factorization X = VV^T, this paper transforms the semidefinite programming problem into a nonlinear programming problem, proposes an augmented Lagrangian algorithm for it, and proves linear convergence of the algorithm. At each iteration, the subproblem is solved using the steepest descent direction with a line search satisfying the Wolfe conditions. Numerical experiments show that the algorithm is effective and outperforms an interior-point algorithm.
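The transformation is the familiar low-rank factorization device: for a standard-form SDP, substituting X = VV^T eliminates the conic constraint,

    \min_X \langle C, X \rangle \ \text{s.t.}\ \mathcal A(X) = b,\ X \succeq 0
        \quad\longrightarrow\quad
    \min_V \langle C, VV^T \rangle \ \text{s.t.}\ \mathcal A(VV^T) = b,

and the augmented Lagrangian L_\sigma(V, \lambda) = \langle C, VV^T \rangle - \lambda^T(\mathcal A(VV^T) - b) + (\sigma/2)\|\mathcal A(VV^T) - b\|^2 is minimized in V at each outer iteration (a sketch of this well-known reformulation, not the paper's exact setup).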

11.
Consider the utilization of a Lagrangian dual method which is convergent for consistent convex optimization problems. When it is used to solve an infeasible optimization problem, its inconsistency will then manifest itself through the divergence of the sequence of dual iterates. Will then the sequence of primal subproblem solutions still yield relevant information regarding the primal program? We answer this question in the affirmative for a convex program and an associated subgradient algorithm for its Lagrange dual. We show that the primal–dual pair of programs corresponding to an associated homogeneous dual function is in turn associated with a saddle-point problem, in which, in the inconsistent case, the primal part amounts to finding a solution in the primal space such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized; the dual part amounts to identifying a feasible steepest ascent direction for the Lagrangian dual function. We present convergence results for a conditional ε-subgradient optimization algorithm applied to the Lagrangian dual problem, and the construction of an ergodic sequence of primal subproblem solutions; this composite algorithm yields convergence of the primal–dual sequence to the set of saddle-points of the associated homogeneous Lagrangian function; for linear programs, convergence to the subset in which the primal objective is at minimum is also achieved.
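The ergodic construction referred to here averages the primal subproblem solutions with weights drawn from the dual step lengths, e.g.

    \bar x^t = \Big( \sum_{s=0}^{t-1} \gamma_s x^s \Big) \Big/ \Big( \sum_{s=0}^{t-1} \gamma_s \Big),

where x^s solves the Lagrangian subproblem at dual iterate s and \gamma_s is the corresponding step length (a standard template for ergodic primal sequences; the paper's exact weighting may differ).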

12.
Recently, an indefinite linearized augmented Lagrangian method (IL-ALM) was proposed for convex programming problems with linear constraints. The IL-ALM differs from the linearized augmented Lagrangian method in that the augmented Lagrangian is linearized by adding an indefinite quadratic proximal term. It nevertheless preserves the algorithmic features of the linearized ALM and usually improves its performance. The IL-ALM was proved convergent from a contraction perspective, but its convergence rate was still missing, mainly because the indefinite setting destroys the structure exploited when the contraction frameworks are applied directly. In this paper, we derive the convergence rate of this algorithm by a different analysis. We prove that a worst-case O(1/t) convergence rate still holds for this algorithm, where t is the number of iterations. Additionally, we show that the customized proximal point algorithm can employ larger step sizes, by proving its equivalence to the linearized ALM.
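The indefinite linearization can be sketched for min f(x) s.t. Ax = b: adding the proximal term (1/2)\|x - x^k\|_D^2 with D = r I - \beta A^T A to the exact ALM subproblem cancels the quadratic penalty and leaves

    x^{k+1} = \arg\min_x \{ f(x) - \langle \lambda^k, Ax - b \rangle + \beta \langle A^T(Ax^k - b), x \rangle + \tfrac{r}{2}\|x - x^k\|^2 \};

taking r < \beta\|A^T A\| makes D indefinite, which is the regime the IL-ALM analyzes (a sketch consistent with the abstract, not a verbatim statement of the method).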

13.
Yi Zhang  Liwei Zhang  Yue Wu 《TOP》2014,22(1):45-79
The focus of this paper is on studying an inverse second-order cone quadratic programming problem, in which the parameters in the objective function need to be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with cone constraints, and its dual, which has fewer variables than the original problem, is a semismoothly differentiable (SC^1) convex programming problem with both a linear inequality constraint and a linear second-order cone constraint. We demonstrate the global convergence of the augmented Lagrangian method with an exact solution to the subproblem and prove that the convergence rate of the primal iterates generated by the augmented Lagrangian method is proportional to 1/r, while the rate of the multiplier iterates is proportional to 1/√r, where r is the penalty parameter in the augmented Lagrangian. Furthermore, a semismooth Newton method with Armijo line search is constructed to solve the subproblems in the augmented Lagrangian approach. Finally, numerical results are reported to show the effectiveness of the augmented Lagrangian method, with both an exact solution and an inexact solution to the subproblem, for solving the inverse second-order cone quadratic programming problem.
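Schematically, the ALM loop analyzed here alternates, for a penalty parameter r,

    x^{k+1} \approx \arg\min_x L_r(x, \mu^k), \qquad \mu^{k+1} = (\text{projection-type multiplier update from } x^{k+1}),

and the stated rates mean the primal error behaves like O(1/r) and the multiplier error like O(1/\sqrt r), so accuracy can be bought by enlarging r (structure only; the actual multiplier update involves the projection onto the second-order cone).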

14.
A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization
We present and analyze an interior-exterior augmented Lagrangian method for solving constrained optimization problems with both inequality and equality constraints. This method, the modified barrier-augmented Lagrangian (MBAL) method, is a combination of the modified barrier and the augmented Lagrangian methods. It is based on the MBAL function, which treats inequality constraints with a modified barrier term and equalities with an augmented Lagrangian term. The MBAL method alternately minimizes the MBAL function in the primal space and updates the Lagrange multipliers. For a large enough fixed barrier-penalty parameter the MBAL method is shown to converge Q-linearly under the standard second-order optimality conditions. Q-superlinear convergence can be achieved by increasing the barrier-penalty parameter after each Lagrange multiplier update. We consider a dual problem that is based on the MBAL function. We prove a basic duality theorem for it and show that it has several important properties that fail to hold for the dual based on the classical Lagrangian.
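For min f(x) s.t. g_i(x) \ge 0, h_j(x) = 0, the MBAL function combines a Polyak-style modified barrier for the inequalities with a classical augmented Lagrangian for the equalities, roughly

    F_\mu(x, \lambda, \nu) = f(x) - \mu \sum_i \lambda_i \log\big(1 + g_i(x)/\mu\big) - \sum_j \nu_j h_j(x) + \frac{1}{2\mu} \sum_j h_j(x)^2,

with \mu the barrier-penalty parameter (a sketch assembled from the standard ingredients named in the abstract; signs and scalings follow one common convention).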

15.
Given a set of corrupted data drawn from a union of multiple subspaces, the subspace recovery problem is to segment the data into their respective subspaces and to correct the possible noise simultaneously. Recently, it was discovered that this task can be characterized, both theoretically and numerically, by solving a convex minimization problem involving the matrix nuclear norm and the ℓ2,1 mixed norm. The minimization model has separable structure in both the objective function and the constraint; it thus falls into the framework of the augmented Lagrangian alternating direction approach. In this paper, we propose and investigate an augmented Lagrangian algorithm. We split the augmented Lagrangian function and minimize the subproblems alternately in one variable while fixing the other. Moreover, we linearize the subproblem and add a proximal point term so as to easily derive closed-form solutions. Global convergence of the proposed algorithm is established under some technical conditions. Extensive experiments on simulated and real data verify that the proposed method is very effective and faster than the state-of-the-art algorithm LRR.
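The convex model in question is the low-rank representation (LRR) program

    \min_{Z, E} \ \|Z\|_* + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E,

with X the data matrix, Z the low-rank representation, and E the columnwise-sparse noise; the algorithm splits the augmented Lagrangian over (Z, E) and linearizes each subproblem with a proximal term so that both updates have closed forms (model as in the LRR literature the abstract cites; the subproblem details are sketched, not quoted).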

16.
Nonnegative tensor factorizations using an alternating direction method
The nonnegative tensor (matrix) factorization finds more and more applications in various disciplines including machine learning, data mining, and blind source separation. In computation, the optimization problem involved is solved by alternately minimizing one factor while the others are fixed. To solve each subproblem efficiently, we first exploit a variable regularization term which keeps the subproblem far from being ill-conditioned. Second, an augmented Lagrangian alternating direction method is employed to solve this convex and well-conditioned regularized subproblem, and two acceleration techniques are also implemented. Some preliminary numerical experiments are performed to show the improvements of the new method.
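One alternating subproblem of the kind described, for a factor X \ge 0 with the other factors fixed, can be sketched as

    \min_{X \ge 0} \ \tfrac{1}{2}\|M - BX\|_F^2 + \tfrac{\tau_k}{2}\|X - X^k\|_F^2,

where the second, variable regularization term controls the conditioning and the nonnegativity is handled by a splitting X = Y, Y \ge 0 inside the augmented Lagrangian alternating direction scheme (a generic sketch; B, M, and \tau_k stand in for the unfolded tensor quantities).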

17.
We present a primal–dual algorithm for solving a constrained optimization problem. The method is based on a Newtonian method applied to a sequence of perturbed KKT systems. These systems follow from a reformulation of the initial problem as a sequence of penalized problems, introducing an augmented Lagrangian for handling the equality constraints and a log-barrier penalty for the inequalities. We detail the updating rules for monitoring the different parameters (Lagrange multiplier estimate, quadratic penalty, and log-barrier parameter) in order to obtain strong global convergence properties. We show that one advantage of this approach is that it introduces a natural regularization of the linear system to be solved at each iteration, for the solution of a problem with a rank-deficient Jacobian of constraints. The numerical experiments show the good practical performance of the proposed method, especially for degenerate problems.
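The perturbed KKT systems referred to can be sketched, for min f(x) s.t. h(x) = 0, g(x) + s = 0, s \ge 0, as

    \nabla f(x) + \nabla h(x)^T \nu + \nabla g(x)^T z = 0, \qquad h(x) + \sigma(\nu - \bar\nu) = 0, \qquad Z S e = \mu e,

where the \sigma-shift in the middle equation comes from the augmented Lagrangian on the equalities (and is what regularizes a rank-deficient \nabla h), and \mu > 0 is the log-barrier parameter driven to zero (a schematic system consistent with the abstract, not the paper's exact formulation).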

18.
We apply a modified subgradient algorithm (MSG) for solving the dual of a nonlinear and nonconvex optimization problem. The dual scheme we consider uses the sharp augmented Lagrangian. A desirable feature of this method is primal convergence, which means that every accumulation point of a primal sequence (which is automatically generated during the process), is a primal solution. This feature is not true in general for available variants of MSG. We propose here two new variants of MSG which enjoy both primal and dual convergence, as long as the dual optimal set is nonempty. These variants have a very simple choice for the stepsizes. Moreover, we also establish primal convergence when the dual optimal set is empty. Finally, our second variant of MSG converges in a finite number of steps.
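The sharp augmented Lagrangian underlying the dual scheme is, for min f(x) s.t. h(x) = 0, usually written

    L(x, u, c) = f(x) + c\,\|h(x)\| - \langle u, h(x) \rangle,

with the norm term unsquared (hence "sharp"); MSG then maximizes the associated dual function over (u, c) with simple stepsize choices (standard form from the sharp-Lagrangian literature; the paper's variants modify the stepsize and update rules).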

19.
We present a primal-dual row-action method for the minimization of a convex function subject to general convex constraints. Constraints are used one at a time, no changes are made in the constraint functions and their Jacobian matrix (thus, the row-action nature of the algorithm), and at each iteration a subproblem is solved consisting of minimization of the objective function subject to one or two linear equations. The algorithm generates two sequences: one of them, called primal, converges to the solution of the problem; the other one, called dual, approximates a vector of optimal KKT multipliers for the problem. We prove convergence of the primal sequence for general convex constraints. In the case of linear constraints, we prove that the primal sequence converges at least linearly and obtain as a consequence the convergence of the dual sequence. The research of the first author was partially supported by CNPq Grant No. 301280/86.
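A generic row-action subproblem of the kind described, when the i-th linear constraint is the row in use at iteration k, is

    x^{k+1} = \arg\min_x f(x) \quad \text{s.t.} \quad \langle a_i, x \rangle = b_i,

followed by an update of the i-th KKT multiplier estimate; a nonlinear constraint enters only through one or two linear equations built from its current data. This is a schematic rendering of the structure the abstract describes, not the authors' precise subproblem.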

20.
In this paper, we propose a primal majorized semismooth Newton-CG augmented Lagrangian method for large-scale linearly constrained convex programming problems, especially for some difficult problems. The basic idea of this method is to apply the majorized semismooth Newton-CG augmented Lagrangian method to the primal convex problem. We take two special nonlinear semidefinite programming problems as examples to illustrate the algorithm. Furthermore, we establish the global convergence and the iteration complexity of the algorithm. Numerical experiments demonstrate that our method works very well for the test problems, especially for many ill-conditioned ones.
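Schematically, each outer iteration of such a method solves the majorized augmented Lagrangian subproblem inexactly by a semismooth Newton method, with CG for the Newton linear systems, then updates the multiplier:

    x^{k+1} \approx \arg\min_x \hat L_\sigma(x; x^k, \lambda^k), \qquad \lambda^{k+1} = \lambda^k - \sigma\,(\text{constraint residual at } x^{k+1}),

where \hat L_\sigma(\cdot\,; x^k, \lambda^k) majorizes the augmented Lagrangian around x^k (structure per the abstract; the residual map depends on the problem's cone).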
