19 similar references were retrieved.
3.
This paper proposes an easily implementable trust-region algorithm for a class of unconstrained composite nonsmooth optimization problems, and proves, under certain conditions, that every accumulation point of the iterate sequence generated by the algorithm is a stationary point of the original problem.
4.
The alternating direction method of multipliers (ADMM) is a simple and effective method for solving separable optimization problems, and its theory is by now fairly complete. When the objective function contains a coupled term, however, the study of the convergence of ADMM is still at an early stage. For nonconvex, nonsmooth, nonseparable optimization problems, this paper proposes a new linearized symmetric proximal ADMM, based on the symmetric alternating direction method of multipliers (SADMM) combined with a linearization technique. Under suitable assumptions, the sequence generated by the algorithm is shown to be bounded and to converge to a stationary point of the augmented Lagrangian function. Moreover, when an auxiliary function satisfies the Kurdyka-Łojasiewicz property, strong convergence of the algorithm is proved. Finally, numerical experiments demonstrate the effectiveness of the algorithm.
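For orientation only, the following is a minimal Python/NumPy sketch of the classical ADMM iteration on the separable toy problem min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z; the function names and parameter values are illustrative, and the linearized symmetric proximal ADMM proposed in the paper modifies these updates (symmetric dual updates, linearization of the coupled term), so this sketch is not the proposed algorithm.

# Sketch: classical ADMM for min 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x = z.
# Only for orientation; the paper's linearized symmetric proximal ADMM differs.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled multiplier
    AtA = A.T @ A; Atb = A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))            # cached for the x-update
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                   # x-update (quadratic subproblem)
        z = soft_threshold(x + u, lam / rho)            # z-update (prox of the l1 term)
        u = u + x - z                                   # scaled multiplier update
    return z

# Example use:
# rng = np.random.default_rng(0)
# A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
# print(admm_lasso(A, b))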
6.
刘国山 《高等学校计算数学学报》1997,19(1):77-82
1. Introduction. Consider the unconstrained nonsmooth optimization problem min f(x), x ∈ R^n (1), where f is a locally Lipschitz function on R^n; throughout, ‖·‖_2 is abbreviated as ‖·‖. The trust-region subproblem, denoted SUB(x, Δ), is min m(x, s) = φ(x, s) + (1/2)sᵀBs, where φ: R^{2m} → R is an iteration function of f. For the unconstrained nonsmooth optimization problem (1), [11], [13], [3], [4] and [5] gave convergence results for trust-region algorithms under various special conditions. More recently, [10], [2] and [6] proposed, under different assumptions, general trust-region algorithm models for unconstrained nonsmooth optimization and proved global convergence of these models when the objective of the subproblem satisfies a local uniform-boundedness condition. Under certain regularity conditions on the objective, [11] and [9] obtained convergence results for the case in which the quadratic term in the trust-region subproblem objective does not satisfy the uniform-boundedness condition. In this paper, the same convergence results as in [8], [11] and [9] are obtained assuming only that the objective function is locally Lipschitz.
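As background for the discussion above, here is a generic trust-region skeleton in Python showing the ratio test and radius update; the model m, the subproblem solver and the handling of nonsmoothness are exactly where the cited works differ, so all names below (model, solve_subproblem, the constants eta and delta0) are placeholders rather than the paper's method.

# Generic trust-region skeleton (placeholder model and subproblem solver).
def trust_region(f, model, solve_subproblem, x0, delta0=1.0,
                 eta=0.1, max_iter=100, delta_min=1e-10):
    x, delta = x0, delta0
    for _ in range(max_iter):
        s = solve_subproblem(x, delta)            # approximately minimize m(x, s) over ||s|| <= delta
        pred = model(x, 0 * s) - model(x, s)      # predicted reduction m(x, 0) - m(x, s)
        ared = f(x) - f(x + s)                    # actual reduction in the objective
        rho = ared / pred if pred > 0 else -1.0   # agreement ratio
        if rho >= eta:                            # successful step: accept, possibly expand the region
            x = x + s
            delta = min(2.0 * delta, 1e6) if rho > 0.75 else delta
        else:                                     # unsuccessful step: shrink the trust region
            delta *= 0.5
        if delta < delta_min:
            break
    return x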
12.
A hybrid bundle method is proposed for solving nonsmooth convex programming problems. The method iterates by adding a proximal term to the objective function and a trust-region constraint to the feasible region; as an organic combination of the proximal bundle method and the trust-region bundle method, it switches automatically between the two. A convergence analysis shows that the method is globally convergent, and numerical examples verify its effectiveness.
13.
Optimization, 2012, 61(9): 1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, which is modelled with a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of the equilibrium points of the proposed differential inclusion is equal to that of the optimal solutions of the considered optimization problem. Moreover, it is shown that the trajectory of the solution converges to an element of the optimal solution set and the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is also designed for solving such problems. Typical examples are given which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.
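A crude forward-Euler discretization of an unconstrained subgradient flow dx/dt ∈ −∂f(x) gives the flavour of such "steepest descent" dynamics; the paper's neural network is a constrained differential inclusion and its convergence results do not follow from this naive discretization, so the Python toy below (with f = ‖·‖₁ and an arbitrary step h) is purely illustrative.

# Illustrative forward-Euler discretization of a subgradient flow for f(x) = ||x||_1.
# The paper's model is a constrained differential inclusion; this toy ignores constraints.
import numpy as np

def subgrad_l1(x):
    # One element of the subdifferential of ||.||_1 (0 is chosen at the kinks).
    return np.sign(x)

def euler_subgradient_flow(x0, h=0.01, steps=1000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - h * subgrad_l1(x)   # x_{k+1} = x_k - h * g, with g in the subdifferential
    return x

# print(euler_subgradient_flow([2.0, -1.5, 0.3]))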
14.
Operations Research Letters, 2020, 48(6): 777-783
In line with our previous approach of devising proximal algorithms for nonsmooth convex optimization by applying Nesterov's fast gradient concept to the Moreau–Yosida regularization of a convex function, we develop three new proximal algorithms for nonsmooth convex optimization. In these algorithms, the errors in computing approximate solutions for the Moreau–Yosida regularization are not fixed beforehand, while the complexity estimates already established are preserved. We report some preliminary computational results to give a first estimate of their performance.
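The identity behind this line of work is that the Moreau–Yosida regularization F_λ(x) = min_u { f(u) + ‖u − x‖²/(2λ) } is differentiable with ∇F_λ(x) = (x − prox_{λf}(x))/λ and 1/λ-Lipschitz gradient, so Nesterov's fast gradient scheme can be run on F_λ using only proximal evaluations. The Python sketch below assumes the toy choice f = ‖·‖₁, whose prox is exact soft thresholding; the paper's contribution, the adaptive control of errors in inexact prox evaluations, is not implemented here.

# Fast gradient method applied to the Moreau-Yosida envelope of f(x) = ||x||_1.
# grad F_lam(x) = (x - prox_{lam*f}(x)) / lam, with Lipschitz constant 1/lam,
# so a constant step size equal to lam is admissible.
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fast_gradient_moreau(x0, lam=1.0, iters=100):
    x = np.array(x0, dtype=float)
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = (y - prox_l1(y, lam)) / lam                 # gradient of the envelope at y
        x_new = y - lam * grad                             # gradient step (step size = lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)      # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Minimizers of the envelope coincide with minimizers of f itself.
# print(fast_gradient_moreau([3.0, -2.0, 0.5]))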
15.
B. Jin. Optimization, 2016, 65(6): 1151-1166
In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems, and establish its connections with the standard optimality condition and the saddle point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms: we derive a Lagrange–Newton algorithm for nonsmooth convex optimization, and establish the nonsingularity of the Newton system and the local convergence of the algorithm.
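For context, the classical augmented Lagrangian (method of multipliers) loop for min f(x) subject to c(x) = 0 alternates an inner minimization of L_ρ(x, λ) = f(x) + λᵀc(x) + (ρ/2)‖c(x)‖² with the multiplier update λ ← λ + ρ c(x). The Python sketch below uses a generic black-box inner solver from SciPy as a placeholder and is not the Lagrange–Newton algorithm derived in the paper.

# Classical augmented Lagrangian loop (sketch); the inner solver is a placeholder.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, rho=10.0, outer_iters=20):
    x = np.array(x0, dtype=float)
    lam = np.zeros_like(np.atleast_1d(c(x)))
    for _ in range(outer_iters):
        def L_rho(z):
            cz = np.atleast_1d(c(z))
            return f(z) + lam @ cz + 0.5 * rho * cz @ cz
        x = minimize(L_rho, x, method="Nelder-Mead").x   # inner (approximate) minimization
        lam = lam + rho * np.atleast_1d(c(x))            # multiplier update
    return x, lam

# Example: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0 (solution (0.5, 0.5)).
# f = lambda z: z[0]**2 + z[1]**2
# c = lambda z: np.array([z[0] + z[1] - 1.0])
# print(augmented_lagrangian(f, c, [0.0, 0.0]))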
16.
Hideaki Iiduka. Optimization, 2017, 66(1): 35-59
Two distributed algorithms are described that enable all users connected over a network to cooperatively solve the problem of minimizing the sum of all users' objective functions over the intersection of all users' constraint sets, where each user has its own private nonsmooth convex objective function and closed convex constraint set, which is the intersection of a number of simple, closed convex sets. One algorithm enables each user to adjust its estimate using the proximity operator of its objective function and the metric projection onto one constraint set randomly selected from a number of simple, closed convex sets. The other determines each user's estimate using the subdifferential of its objective function instead of the proximity operator. Investigation of the two algorithms' convergence properties for a diminishing step-size rule revealed that, under certain assumptions, the sequences of all users generated by each of the two algorithms converge almost surely to the same solution. It also showed that the rate of convergence depends on the step size and that a smaller step size results in quicker convergence. The results of numerical evaluation using a nonsmooth convex optimization problem support the convergence analysis and demonstrate the effectiveness of the two algorithms.
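Stripping away the network and consensus aspects, the per-user update described above amounts to a proximal step on the private objective followed by a metric projection onto one randomly selected simple constraint set. The single-user Python sketch below illustrates that pattern for an assumed objective f = ‖·‖₁ and two assumed simple sets (a box and a half-space); the cooperative exchange of estimates between users, which is the point of the paper, is omitted.

# Single-user sketch of "prox of the objective, then projection onto a random simple set".
# The distributed/consensus part of the described algorithms is deliberately omitted.
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    # Projection onto {x : a^T x <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def random_prox_projection(x0, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    a = np.array([1.0, 1.0]); b = 1.0          # half-space data (illustrative)
    for k in range(1, steps + 1):
        alpha = 1.0 / k                        # diminishing step size
        x = prox_l1(x, alpha)                  # proximal step on the private objective
        if rng.integers(2) == 0:               # project onto one randomly selected set
            x = project_box(x, -2.0, 2.0)
        else:
            x = project_halfspace(x, a, b)
    return x

# print(random_prox_projection([3.0, -3.0]))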
17.
We propose a proximal Newton method for solving nondifferentiable convex optimization. This method combines the generalized Newton method with Rockafellar's proximal point algorithm. At each step, the proximal point is found approximately, and the regularization matrix is preconditioned to overcome the inexactness of this approximation. We show that such a preconditioning is possible within some accuracy and that the second-order differentiability properties of the Moreau–Yosida regularization are invariant with respect to this preconditioning. Based upon these results, superlinear convergence is established under a semismoothness condition. This work is supported by the Australian Research Council.
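The outer iteration being accelerated here is Rockafellar's proximal point algorithm, x_{k+1} = prox_{cf}(x_k). The Python sketch below shows that outer loop with an exact prox for the toy choice f = ‖·‖₁; the paper's contribution, computing the proximal point approximately by a preconditioned generalized Newton method, is not shown.

# Rockafellar's proximal point iteration x_{k+1} = prox_{c f}(x_k), sketched for f = ||.||_1
# with an exact prox. The paper instead computes the proximal point approximately by a
# generalized Newton method with a preconditioned regularization matrix.
import numpy as np

def prox_l1(v, c):
    return np.sign(v) * np.maximum(np.abs(v) - c, 0.0)

def proximal_point(x0, c=0.5, iters=50):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = prox_l1(x, c)          # exact proximal step
    return x

# print(proximal_point([4.0, -3.0, 1.0]))   # converges to the minimizer 0 of ||.||_1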
18.
We consider the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an ε_k-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying Σ_{k=0}^∞ α_k = ∞, Σ_{k=0}^∞ α_k² < ∞, and ε_k is chosen so that ε_k ≤ μ α_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, and much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We present also convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research of this author was partially supported by CNPq grant nos. 301280/86 and 300734/95-6.
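With ε_k = 0 and α_k = 1/(k+1), which satisfies Σ α_k = ∞ and Σ α_k² < ∞, a closely related classical projected subgradient iteration can be written down directly. The finite-dimensional Python sketch below (minimizing ‖x − c‖₁ over a Euclidean ball) is only meant to make the update concrete; the paper works with inexact ε_k-subgradients in a Hilbert space and its convergence results are for that setting.

# Projected subgradient sketch: minimize f(x) = ||x - c||_1 over the Euclidean ball of radius r.
# alpha_k = 1/(k+1) satisfies sum(alpha_k) = infinity and sum(alpha_k^2) < infinity.
import numpy as np

def subgrad(x, c):
    return np.sign(x - c)                       # a subgradient of ||x - c||_1

def project_ball(x, r):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def projected_subgradient(c, r, x0, iters=2000):
    x = np.array(x0, dtype=float)
    best = x.copy(); fbest = np.sum(np.abs(x - c))
    for k in range(iters):
        alpha = 1.0 / (k + 1)
        x = project_ball(x - alpha * subgrad(x, c), r)
        fx = np.sum(np.abs(x - c))
        if fx < fbest:                          # keep the best iterate seen so far
            best, fbest = x.copy(), fx
    return best

# c = np.array([2.0, 0.0]); print(projected_subgradient(c, r=1.0, x0=[0.0, 0.0]))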
19.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are without employing a scalarization in a classical sense. The main idea of this method is to find descent directions for every objective function separately by utilizing the proximal bundle approach, and then trying to form a common descent direction for every objective function. In addition, we prove that the method is convergent and it finds weakly Pareto optimal solutions. Finally, some numerical experiments are considered.
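One standard way of turning per-objective (sub)gradients into a common descent direction, in the spirit of the aggregation step described above, is to take the minimum-norm element of their convex hull; for two objectives this has a closed form. The Python sketch below shows only that two-objective case and does not reproduce the paper's proximal bundle machinery or its convergence guarantees.

# Common descent direction for two objectives as the negative of the minimum-norm
# convex combination of their (sub)gradients (closed form for the two-objective case).
import numpy as np

def common_descent_direction(g1, g2):
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        t = 0.5                                    # the two gradients coincide
    else:
        t = np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0)
    g = t * g1 + (1.0 - t) * g2                    # min-norm point on the segment [g1, g2]
    return -g                                      # for differentiable objectives this is a
                                                   # common descent direction whenever g != 0

# g1 = np.array([1.0, 0.0]); g2 = np.array([0.0, 1.0])
# print(common_descent_direction(g1, g2))  # -> [-0.5, -0.5]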