Similar Literature
 19 similar references retrieved.
1.
This paper proposes an inexact gradient mirror-descent algorithm for nonsmooth convex optimization. The algorithm extends the gradient mirror-descent method proposed by Allen-Zhu in 2016 for smooth convex optimization, and it allows errors both in evaluating the gradient of the smooth part of the objective and in evaluating the proximal operator of the nonsmooth part. Under suitable conditions, an O(1/k^2) convergence rate of the function-value sequence is established, where k denotes the iteration counter. Numerical results on Lasso and logistic regression problems show that the algorithm is effective.
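For orientation, the sketch below is a minimal accelerated proximal-gradient (FISTA-type) loop for the Lasso test problem mentioned above. It is only an illustrative stand-in for this algorithm class, not the paper's inexact gradient mirror-descent method, and the names soft_threshold and accelerated_prox_grad are made up for this sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_prox_grad(A, b, lam, n_iter=500):
    """Accelerated proximal-gradient loop for the Lasso
    min 0.5*||Ax - b||^2 + lam*||x||_1 (illustrative only)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)          # gradient of the smooth part
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

A call such as x_hat = accelerated_prox_grad(A, b, lam=0.1) returns an approximate Lasso solution; the method of the paper additionally tolerates errors in the gradient and proximal computations.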

2.
This paper studies a class of special nonsmooth optimization problems with wide applications in compressed sensing, image processing, and related fields. A smoothing gradient method for solving such problems is presented, together with a proof of its global convergence; numerical experiments demonstrate the effectiveness of the algorithm.
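A common way to make such problems amenable to gradient methods is to replace the nonsmooth term by a smooth approximation; for example, the absolute value (and hence the l1 norm) is often smoothed as below. This is a generic illustration with an assumed parameter μ, not necessarily the specific smoothing used in the paper.

```latex
% A standard smoothing of |t| with parameter \mu > 0:
\[
  \phi_\mu(t) = \sqrt{t^2 + \mu^2} - \mu, \qquad
  |t| - \mu \le \phi_\mu(t) \le |t|, \qquad
  \phi_\mu'(t) = \frac{t}{\sqrt{t^2 + \mu^2}},
\]
% so that \|x\|_1 \approx \sum_i \phi_\mu(x_i); a gradient method is applied
% to the smoothed objective while \mu is driven toward 0.
```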

3.
欧宜贵  侯定丕 《数学杂志》2003,23(3):345-348
This paper presents an easily implementable trust-region algorithm for a class of unconstrained composite nonsmooth optimization problems, and proves, under certain conditions, that every accumulation point of the iterate sequence generated by the algorithm is a stationary point of the original problem.

4.
The alternating direction method of multipliers (ADMM) is a simple and effective method for separable optimization problems, and its theory is by now fairly complete. When the objective function contains a coupling term, however, the convergence analysis of ADMM is still at an early stage. For nonconvex, nonsmooth, nonseparable optimization problems, this paper combines the symmetric ADMM (SADMM) with a linearization technique and proposes a new linearized symmetric proximal ADMM. Under suitable assumptions, the sequence generated by the algorithm is shown to be bounded and to converge to a stationary point of the augmented Lagrangian function. Furthermore, when an auxiliary function satisfies the Kurdyka-Lojasiewicz property, strong convergence of the algorithm is established. Numerical experiments illustrate the effectiveness of the algorithm.
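For context, a typical "linearized proximal" subproblem in ADMM-type methods replaces a smooth coupling term by its linearization plus a proximal penalty. The display below is a generic sketch under assumed notation (coupling term H, constraint Ax + By = b, penalty β, proximal weight τ), not the exact update of the paper.

```latex
% Generic linearized proximal x-update for an augmented Lagrangian
% L_\beta(x,y,\lambda) whose coupling term H(x,y) is smooth:
\[
  x^{k+1} \in \arg\min_x \Big\{ f(x)
    + \big\langle \nabla_x H(x^k, y^k),\, x - x^k \big\rangle
    + \big\langle \lambda^k,\, A x \big\rangle
    + \frac{\beta}{2}\,\big\| A x + B y^k - b \big\|^2
    + \frac{\tau}{2}\,\big\| x - x^k \big\|^2 \Big\}
\]
```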

5.
An Interval Algorithm for a Class of Nonsmooth Optimization Problems
1 Introduction. Consider the following discrete minimax problem:
min_{x ∈ X^{(0)}} max_{0 ≤ i ≤ m} f_i(x).   (1.1)

6.
1 Introduction. Consider the following unconstrained nonsmooth optimization problem:
min f(x), x ∈ R^n,   (1)
where f is a locally Lipschitz function on R^n; throughout, ‖·‖_2 is abbreviated as ‖·‖. Denote the following trust-region subproblem by SUB(x, Δ):
min m(x, s) = φ(x, s) + (1/2) s^T B s,
where φ: R^{2m} → R is an iteration function of f. For the unconstrained nonsmooth problem (1), [11], [13], [3], [4] and [5] gave convergence results for trust-region algorithms under special conditions. More recently, [10], [2] and [6] presented, under different assumptions, general models of trust-region algorithms for unconstrained nonsmooth optimization and proved global convergence of these model algorithms when the objective of the subproblem satisfies a local uniform boundedness condition. When the objective function satisfies a certain regularity condition, [11] and [9] gave convergence results for the case in which the quadratic term in the objective of the trust-region subproblem does not satisfy the uniform boundedness condition. This paper obtains the same convergence results as [8], [11] and [9] when the objective function is merely locally Lipschitz.

7.
Based on three classical line-search techniques for finding a separating hyperplane, this paper proposes an adaptive line-search technique. Combining it with the spectral gradient projection method, a spectral gradient projection algorithm for convex-constrained nonsmooth monotone equations is presented. The algorithm requires neither computing nor storing any matrices, and is therefore suitable for solving large-scale nonsmooth nonlinear monotone equations. Global convergence of the method is proved under fairly weak conditions, and its convergence rate is analyzed. Numerical results show that the algorithm is effective and robust.
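As a rough illustration of this algorithm class (matrix-free, projection-based), the sketch below implements a generic hyperplane-projection iteration for monotone equations F(x) = 0 over a convex set C. It uses the plain direction d = -F(x) and a simple backtracking search rather than the spectral direction and adaptive line search of the paper; projection_method, F, and project are placeholder names introduced here.

```python
import numpy as np

def projection_method(F, project, x0, sigma=1e-4, rho=0.5, max_iter=1000, tol=1e-8):
    """Generic hyperplane-projection method for monotone equations
    F(x) = 0 with x constrained to a convex set C (illustrative sketch)."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        d = -Fx                                  # search direction
        alpha = 1.0
        while True:                              # backtracking line search
            z = x + alpha * d
            if -F(z) @ d >= sigma * alpha * np.linalg.norm(d) ** 2:
                break
            alpha *= rho
        Fz = F(z)
        beta = Fz @ (x - z) / (Fz @ Fz)          # projection onto the separating hyperplane
        x = project(x - beta * Fz)               # then onto the feasible set C
    return x
```

For box constraints, for example, project could simply be lambda v: np.clip(v, lo, hi).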

8.
Inspired by the performance estimation problem (PEP) approach, the step-size coefficients of first-order methods for smooth convex minimization whose iterate gradients converge Q-linearly are optimized by examining the convergence bound (i.e., the efficiency) of the worst-case function error. A new efficient first-order method, called QGM, is introduced; it has a computationally efficient form similar to that of the optimized gradient method (OGM).

9.
A class of nonsmooth optimization problems with inequality constraints is studied. Using the Clarke subdifferential and Lagrange multipliers, some properties of the solution set of such problems are investigated, and an example is given to illustrate the main results. The main conclusions improve and generalize corresponding results in some recent literature.

10.
Based on the symmetric alternating direction method of multipliers (ADMM) and a relaxation-step technique, this paper proposes a symmetric ADMM with relaxation steps for two-block linearly constrained nonconvex optimization problems, in which the two multiplier-update steps use different relaxation factors. Under standard assumptions, subsequence convergence of the new algorithm is proved. Under an error-bound condition, the iterate sequence generated by the algorithm is shown to converge locally at a linear rate to a stationary point of the problem, and the corresponding augmented Lagrangian sequence also converges linearly. Preliminary numerical results show that the new algorithm is effective.
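As a point of reference, the classical symmetric ADMM for the two-block problem min f(x) + g(y) subject to Ax + By = b updates as sketched below, with relaxation factors r and s in the two multiplier steps. This is a generic template under assumed notation, not the paper's exact scheme.

```latex
% Augmented Lagrangian with penalty parameter \beta > 0:
%   L_\beta(x,y,\lambda) = f(x) + g(y) + \langle \lambda,\, Ax+By-b \rangle
%                          + \tfrac{\beta}{2}\|Ax+By-b\|^2
\begin{align*}
  x^{k+1}         &\in \arg\min_x \; L_\beta(x, y^k, \lambda^k) \\
  \lambda^{k+1/2} &=   \lambda^k + r\beta\,(A x^{k+1} + B y^k - b) \\
  y^{k+1}         &\in \arg\min_y \; L_\beta(x^{k+1}, y, \lambda^{k+1/2}) \\
  \lambda^{k+1}   &=   \lambda^{k+1/2} + s\beta\,(A x^{k+1} + B y^{k+1} - b)
\end{align*}
```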

11.
12.
张清叶  高岩 《运筹学学报》2016,20(2):113-120
A hybrid bundle method for solving nonsmooth convex programming problems is proposed. The method iterates by adding a proximal term to the objective function and a trust-region constraint to the feasible region; as an organic combination of the proximal bundle method and the trust-region bundle method, the hybrid method switches automatically between the two. A convergence analysis shows that the method is globally convergent, and numerical examples verify its effectiveness.
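A typical subproblem combining the two ingredients just described is sketched below, with a cutting-plane model \hat f_k, proximal weight μ_k, and trust-region radius Δ_k; these symbols are assumed notation for illustration, not necessarily the exact subproblem of the paper.

```latex
\[
  y^{k+1} \in \arg\min_{y} \Big\{ \hat f_k(y) + \frac{\mu_k}{2}\,\|y - x^k\|^2
              \;:\; \|y - x^k\| \le \Delta_k \Big\},
  \qquad
  \hat f_k(y) = \max_{j \in J_k} \big\{ f(y^j) + \langle g^j,\, y - y^j \rangle \big\},
  \quad g^j \in \partial f(y^j)
\]
```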

13.
《Optimization》2012,61(9):1203-1226
This article presents a differential-inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, modelled by a differential inclusion, is a generalization of the steepest-descent neural network. It is proved that the set of equilibrium points of the proposed differential inclusion coincides with the set of optimal solutions of the considered optimization problem. Moreover, it is shown that the solution trajectory converges to an element of the optimal solution set and that the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is designed for solving such problems. Typical examples are given which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.
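For intuition, the unconstrained steepest-descent network that such models generalize can be written as the subgradient-flow differential inclusion below; the constrained network of the paper adds terms enforcing the inequality constraints. This display is only a generic reminder, not the paper's model.

```latex
\[
  \dot{x}(t) \in -\,\partial f\big(x(t)\big), \qquad x(0) = x_0
\]
% equilibria satisfy 0 \in \partial f(x^*), i.e. they are exactly the minimizers of the convex f
```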

14.
Following our previous approach of devising proximal algorithms for nonsmooth convex optimization by applying Nesterov's fast gradient concept to the Moreau–Yosida regularization of a convex function, we develop three new proximal algorithms for nonsmooth convex optimization. In these algorithms, the errors in computing approximate solutions of the Moreau–Yosida regularization are not fixed beforehand, while the complexity estimates already established are preserved. We report some preliminary computational results to give a first estimate of their performance.
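Recall the Moreau–Yosida regularization referred to here: for a proper closed convex f and a parameter μ > 0,

```latex
\[
  F_\mu(x) = \min_{y} \Big\{ f(y) + \frac{1}{2\mu}\,\|y - x\|^2 \Big\},
  \qquad
  \nabla F_\mu(x) = \frac{x - p_\mu(x)}{\mu}
\]
% where p_\mu(x) denotes the unique minimizer (the proximal point of x);
% F_\mu is convex, has the same minimizers as f, and \nabla F_\mu is (1/\mu)-Lipschitz.
```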

15.
B. Jin 《Optimization》2016,65(6):1151-1166
In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems and establish its connections with the standard optimality condition and the saddle-point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms: we derive a Lagrange–Newton algorithm for the nonsmooth convex optimization problem, and establish the nonsingularity of the Newton system and the local convergence of the algorithm.
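For a linearly constrained instance min f(x) subject to Ax = b, used here only as a generic illustration of the objects discussed above (the paper's problem class may differ), the augmented Lagrangian with penalty c > 0 and the classical method read:

```latex
% Model problem: min f(x)  s.t.  Ax = b
\begin{align*}
  L_c(x, \lambda) &= f(x) + \langle \lambda,\, A x - b \rangle + \frac{c}{2}\,\|A x - b\|^2, \\
  x^{k+1} &\in \arg\min_x \; L_c(x, \lambda^k), \qquad
  \lambda^{k+1} = \lambda^k + c\,(A x^{k+1} - b).
\end{align*}
```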

16.
Two distributed algorithms are described that enable all users connected over a network to cooperatively solve the problem of minimizing the sum of all users’ objective functions over the intersection of all users’ constraint sets, where each user has its own private nonsmooth convex objective function and closed convex constraint set, which is the intersection of a number of simple, closed convex sets. One algorithm enables each user to adjust its estimate using the proximity operator of its objective function and the metric projection onto one constraint set randomly selected from a number of simple, closed convex sets. The other determines each user’s estimate using the subdifferential of its objective function instead of the proximity operator. Investigation of the two algorithms’ convergence properties for a diminishing step-size rule revealed that, under certain assumptions, the sequences of all users generated by each of the two algorithms converge almost surely to the same solution. It also showed that the rate of convergence depends on the step size and that a smaller step size results in quicker convergence. The results of numerical evaluation using a nonsmooth convex optimization problem support the convergence analysis and demonstrate the effectiveness of the two algorithms.
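A hedged sketch of the kind of update described for the first algorithm is given below; the mixing weights w_{ij}, step size λ_k, and random index ω_k(i) are notation introduced here purely for illustration, and the paper's exact scheme may differ.

```latex
\[
  x_i^{k+1} = P_{C_{i,\omega_k(i)}}\!\Big( \operatorname{prox}_{\lambda_k f_i}\Big( \sum_{j} w_{ij}\, x_j^{k} \Big) \Big)
\]
```

That is, each user i averages its neighbours' estimates, applies the proximity operator of its own objective f_i, and projects onto one randomly selected simple component C_{i,ω_k(i)} of its constraint set; the second algorithm uses a subgradient step of f_i in place of the proximity operator.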

17.
We propose a proximal Newton method for solving nondifferentiable convex optimization. This method combines the generalized Newton method with Rockafellar’s proximal point algorithm. At each step, the proximal point is found approximately and the regularization matrix is preconditioned to overcome inexactness of this approximation. We show that such a preconditioning is possible within some accuracy and the second-order differentiability properties of the Moreau-Yosida regularization are invariant with respect to this preconditioning. Based upon these, superlinear convergence is established under a semismoothness condition. This work is supported by the Australian Research Council.

18.
We consider the method for constrained convex optimization in a Hilbert space consisting of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized step sizes α_k are exogenously given, satisfying Σ_{k=0}^∞ α_k = ∞ and Σ_{k=0}^∞ α_k^2 < ∞, and ε_k is chosen so that ε_k ≤ μα_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, and much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence-rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research of this author was partially supported by CNPq grant nos. 301280/86 and 300734/95-6.
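In symbols, the iteration discussed is of the following form; the normalization by max{1, ‖g^k‖} is a standard choice written here for concreteness, and the paper's exact normalization may differ.

```latex
\[
  g^k \in \partial_{\varepsilon_k} f(x^k), \qquad
  x^{k+1} = P_C\!\Big( x^k - \frac{\alpha_k}{\max\{1,\,\|g^k\|\}}\, g^k \Big),
  \qquad
  \sum_{k=0}^{\infty} \alpha_k = \infty, \quad
  \sum_{k=0}^{\infty} \alpha_k^2 < \infty, \quad
  \varepsilon_k \le \mu\,\alpha_k
\]
```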

19.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are without employing a scalarization in a classical sense. The main idea of this method is to find descent directions for every objective function separately by utilizing the proximal bundle approach, and then trying to form a common descent direction for every objective function. In addition, we prove that the method is convergent and it finds weakly Pareto optimal solutions. Finally, some numerical experiments are considered.
