Similar Literature
20 similar documents found.
1.
郭洁  万中 《计算数学》2022,44(3):324-338
Based on an exponential penalty function, we modify a recently proposed three-term conjugate gradient method for unconstrained optimization and apply it to the more complex class of large-scale minimax problems. The search direction generated by the method is shown to be a sufficient descent direction for every smooth subproblem, regardless of the line search rule used. On this basis, an algorithm for large-scale minimax problems is designed, and its global convergence is proved under reasonable assumptions. Numerical experiments show that the algorithm outperforms similar algorithms in the literature.
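For orientation, here is a minimal sketch of the exponential-penalty (log-sum-exp) smoothing that underlies such minimax methods: the nonsmooth objective max_i f_i(x) is replaced by a smooth surrogate whose gradient is a softmax-weighted combination of the component gradients. The function names and the use of numpy are illustrative assumptions; the paper's three-term conjugate gradient update itself is not reproduced.

```python
import numpy as np

def smoothed_minimax(fs, grads, x, p):
    """Exponential-penalty (log-sum-exp) smoothing of max_i f_i(x):
    F_p(x) = p * log(sum_i exp(f_i(x)/p)), with F_p -> max_i f_i as p -> 0+.
    fs, grads: lists of callables for the f_i and their gradients."""
    vals = np.array([f(x) for f in fs])
    shifted = (vals - vals.max()) / p            # shift for numerical stability
    w = np.exp(shifted)
    lam = w / w.sum()                            # softmax weights, sum to 1
    F = vals.max() + p * np.log(w.sum())         # stable log-sum-exp value
    g = sum(l * gr(x) for l, gr in zip(lam, grads))
    return F, g
```

A method of the kind described would then minimize F_p for a decreasing sequence of smoothing parameters p.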

2.
This paper introduces the notion of the degree of nonsmoothness for locally Lipschitz continuous functions, discusses properties of this degree related to descent directions and optimality, and applies it to study the convergence of direct methods for function minimization.

3.
4.
Nonconvex minimax problems are an important recent research frontier at the intersection of optimization with machine learning, signal processing, and related fields; key scientific questions in adversarial learning, reinforcement learning, distributed nonconvex optimization, and other frontier directions reduce to this class of problems. Convex-concave minimax problems have been studied with great success internationally, but nonconvex minimax problems differ from them: they are nonconvex nonsmooth optimization problems with their own structure, are more challenging both in theory and in computation, and are in general NP-hard. This article focuses on recent advances in optimization algorithms and complexity analysis for nonconvex minimax problems.

5.
This paper proposes an inexact gradient mirror descent algorithm for nonsmooth convex optimization. The algorithm generalizes the gradient mirror descent method proposed by Allen-Zhu in 2016 for smooth convex optimization, and it tolerates errors both in the gradient of the smooth part of the objective and in the proximal operator of the nonsmooth part. Under suitable conditions, an O(1/k²) convergence rate is established for the sequence of function values, where k denotes the iteration count. Numerical results on Lasso and logistic problems show that the algorithm is effective.

6.
We consider a proximal gradient algorithm with line search for convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regularizer, together with its convergence analysis. The algorithm is shown to be R-linearly convergent under a local Lipschitz continuity assumption on the gradient; when the nonsmooth part is the sparse block LASSO regularizer, an error bound condition is proved to hold, yielding a linear convergence rate. Finally, numerical experiments confirm the effectiveness of the method.
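As a rough illustration of this algorithm class, here is a generic backtracking proximal gradient method for the Lasso instance; the names and the standard sufficient-decrease test are assumptions for illustration, not the paper's exact line search rule.

```python
import numpy as np

def lasso_prox_grad(A, b, lam, x0, gamma0=1.0, beta=0.5, tol=1e-8, max_iter=500):
    """Proximal gradient with backtracking for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of l1
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    x = x0.copy()
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)                    # gradient of the smooth loss
        gamma = gamma0
        while True:                              # backtrack until sufficient decrease
            x_new = soft(x - gamma * g, gamma * lam)
            d = x_new - x
            if f(x_new) <= f(x) + g @ d + (0.5 / gamma) * (d @ d):
                break
            gamma *= beta
        if np.linalg.norm(x_new - x) <= tol:
            break
        x = x_new
    return x
```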

7.
The maximal entropy principle is applied to solve convex inequality problems. An inequality problem can be transformed into a min-max problem, which in turn can be transformed into an unconstrained parameterized minimization problem by using the entropic function to smooth the min-max problem. The solution of the inequality problem is then obtained by solving the parameterized minimization problems and driving the parameter to zero under a certain principle. By the properties of the aggregate function, however, it suffices to solve a single parameterized inequality problem each time. This article discusses some properties of the aggregate function and presents an algorithm together with its convergence analysis.
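In its standard form, the aggregate function referred to here is the entropic smoothing of the pointwise maximum of the constraint functions g_i; a sketch of its defining property:

```latex
F_p(x) = p \,\ln\!\Big(\sum_{i=1}^{m} e^{\,g_i(x)/p}\Big),
\qquad
\max_{1\le i\le m} g_i(x) \;\le\; F_p(x) \;\le\; \max_{1\le i\le m} g_i(x) + p\ln m .
```

Thus the single smooth inequality F_p(x) ≤ 0 approximates the whole system g_i(x) ≤ 0 uniformly, with an error of at most p ln m that vanishes as the parameter p is driven to zero.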

8.
王丽平  陈晓红 《计算数学》2009,31(2):127-136
The left conjugate gradient method is an emerging Krylov subspace method for solving large sparse linear systems. To overcome its numerical instability and susceptibility to breakdown, this paper reformulates the original method equivalently, derives an alternative iterative scheme for the left conjugate gradient direction, and presents a quasi-minimization left conjugate gradient algorithm. Numerical results confirm the close relationship between the modified algorithm and the original one.

9.
This paper studies the matrix trace minimization problem arising in graph partitioning. Using the Gramian representation of positive semidefinite matrices, the problem is transformed into an unconstrained optimization problem, and a nonlinear conjugate gradient method with Armijo line search is designed to solve it. Numerical examples show that the new method is feasible.

10.
We study separable optimization problems whose objective is the sum of several smooth functions and propose a normalized incremental gradient algorithm. In each sub-iteration the algorithm only needs to compute the unit negative gradient direction of one (or a few) component functions as the search direction. Under certain conditions, convergence of the normalized incremental gradient algorithm with divergent stepsizes is proved. As an application, the new algorithm and the (non-normalized) incremental gradient algorithm proposed by D. P. Bertsekas and J. N. Tsitsiklis ...
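A minimal sketch of what a normalized (unit-direction) incremental gradient iteration of this kind looks like; the function names and the cyclic ordering below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def normalized_incremental_gradient(grads, x0, step, n_epochs=100):
    """Each sub-iteration moves along the *unit* negative gradient of one
    component f_i, with a divergent stepsize rule (sum step(k) = inf,
    step(k) -> 0).  grads: list of gradient callables for the f_i."""
    x, k = x0.copy(), 0
    for _ in range(n_epochs):
        for grad_i in grads:                   # cycle through the components
            g = grad_i(x)
            nrm = np.linalg.norm(g)
            if nrm > 0.0:                      # normalized direction -g/||g||
                x = x - step(k) * g / nrm
            k += 1
    return x

# e.g. step = lambda k: 1.0 / (k + 1), a divergent stepsize sequence
```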

11.
《Optimization》2012,61(12):2587-2597
Our purpose in this paper is to obtain a strong convergence result for the approximation of a solution to a constrained convex minimization problem using a new iterative scheme in a real Hilbert space. Furthermore, we give a numerical analysis of our iterative scheme.

12.
Proximal bundle methods for minimizing a convex function f generate a sequence {x_k} by taking x_{k+1} to be the minimizer of f̂_k(x) + (u_k/2)‖x − x_k‖², where f̂_k is a sufficiently accurate polyhedral approximation to f and u_k > 0. The usual choice of u_k = 1 may yield very slow convergence. A technique is given for choosing {u_k} adaptively that eliminates sensitivity to objective scaling. Some encouraging numerical experience is reported. This research was supported by Project CPBP.02.15.

13.
14.
This paper presents new versions of proximal bundle methods for solving convex constrained nondifferentiable minimization problems. The methods employ ℓ1 or ℓ∞ exact penalty functions with new penalty updates that limit unnecessary penalty growth. In contrast to other methods, some of them are insensitive to problem function scaling. Global convergence of the methods is established, as well as finite termination for polyhedral problems. Some encouraging numerical experience is reported. The ideas presented may also be used in variable metric methods for smooth nonlinear programming. This research was supported by the Polish Academy of Sciences.

15.
In this paper, a new type of stepsize, the approximate optimal stepsize, is introduced for the gradient method to interpret the Barzilai–Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize is presented for the strictly convex quadratic minimization problem. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then use the two well-known BB stepsizes to truncate it in order to improve numerical performance, and take the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
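For reference, here is a sketch of the classical BB iteration on a strictly convex quadratic that the abstract's approximate-optimal-stepsize method interprets and refines; it shows only the standard BB1 stepsize, not the paper's multi-step quasi-Newton model, and the names are illustrative.

```python
import numpy as np

def bb_gradient(A, b, x0, tol=1e-8, max_iter=1000):
    """Barzilai-Borwein gradient method for min 0.5*x'Ax - b'x, A SPD."""
    x = x0.copy()
    g = A @ x - b                                # gradient of the quadratic
    alpha = 1.0 / max(np.linalg.norm(g), 1e-12)  # initial stepsize guess
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)                # BB1 stepsize: s's / s'y
        x, g = x_new, g_new
    return x
```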

16.
《Optimization》2012,61(9):1367-1385
The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. Based on Marino and Xu's method [G. Marino and H.-K. Xu, A general method for nonexpansive mappings in Hilbert space, J. Math. Anal. Appl. 318 (2006), pp. 43–52], we combine the GPA and the averaged mapping approach to propose, for the first time, implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Under suitable conditions, strong convergence theorems are obtained.
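The basic GPA iteration the abstract builds on is x_{k+1} = P_C(x_k − γ∇f(x_k)); below is a minimal Euclidean sketch with illustrative names, not the composite equilibrium scheme itself.

```python
import numpy as np

def gradient_projection(grad_f, project, x0, gamma, tol=1e-8, max_iter=1000):
    """Gradient-projection: x <- P_C(x - gamma * grad f(x))."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = project(x - gamma * grad_f(x))
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: minimize ||x - c||^2 over the unit ball (projection is radial scaling).
c = np.array([2.0, 0.0])
sol = gradient_projection(grad_f=lambda x: 2.0 * (x - c),
                          project=lambda z: z / max(1.0, np.linalg.norm(z)),
                          x0=np.zeros(2), gamma=0.25)
```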

17.
In this paper, we show that an analogue of the classical conjugate gradient method converges linearly when applied to solving the problem of unconstrained minimization of a strictly convex quadratic spline. Since a strictly convex quadratic program with simple bound constraints can be reformulated as unconstrained minimization of a strictly convex quadratic spline, the conjugate gradient method is used to solve the unconstrained reformulation and find the solution of the original quadratic program. In particular, if the solution of the original quadratic program is nondegenerate, then the conjugate gradient method finds the solution in a finite number of iterations. This author's research is partially supported by the NASA/Langley Research Center under grant NCC-1-68 Supplement-15.
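For concreteness, the classical linear conjugate gradient method referred to, on min 0.5·xᵀAx − bᵀx with A symmetric positive definite; this is a textbook sketch, and the quadratic-spline reformulation itself is not shown.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Classical CG for min 0.5*x'Ax - b'x, i.e. Ax = b, with A SPD."""
    x = x0.astype(float).copy()
    r = b - A @ x                               # residual = negative gradient
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)                   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p               # conjugate direction update
        rs = rs_new
    return x
```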

18.
In optimization theory, convex minimization problems have been intensively investigated in the current literature due to their wide range of applications. A major and effective tool for solving such problems is the forward-backward splitting algorithm. However, to guarantee convergence, it is usually assumed that the gradient of the functions is Lipschitz continuous and that the stepsize depends on the Lipschitz constant, which is not easy to determine in practice. In this work, we propose a modified forward-backward splitting method that uses new linesearches for choosing suitable stepsizes, and we discuss its convergence analysis, including complexity, without any Lipschitz continuity assumption on the gradient. Finally, we provide numerical experiments in signal recovery to demonstrate the computational performance of our algorithm in comparison with some well-known methods. Our reports show that the proposed algorithm has good convergence behavior and can outperform the compared methods.
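For reference, the forward-backward splitting step in question, stated in standard notation (the paper's new linesearch for choosing γ_k is not reproduced here):

```latex
x_{k+1} = \operatorname{prox}_{\gamma_k g}\!\big(x_k - \gamma_k \nabla f(x_k)\big),
\qquad
\operatorname{prox}_{\gamma g}(z) = \arg\min_{y}\Big\{\, g(y) + \tfrac{1}{2\gamma}\|y - z\|^2 \Big\},
```

where f is the smooth and g the nonsmooth part of the objective; the contribution described above is a linesearch that selects γ_k without knowing a Lipschitz constant of ∇f.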

19.
We introduce a novel approach for analyzing the worst-case performance of first-order black-box optimization methods. We focus on smooth unconstrained convex minimization over the Euclidean space. Our approach relies on the observation that by definition, the worst-case behavior of a black-box optimization method is by itself an optimization problem, which we call the performance estimation problem (PEP). We formulate and analyze the PEP for two classes of first-order algorithms. We first apply this approach on the classical gradient method and derive a new and tight analytical bound on its performance. We then consider a broader class of first-order black-box methods, which among others, include the so-called heavy-ball method and the fast gradient schemes. We show that for this broader class, it is possible to derive new bounds on the performance of these methods by solving an adequately relaxed convex semidefinite PEP. Finally, we show an efficient procedure for finding optimal step sizes which results in a first-order black-box method that achieves best worst-case performance.
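The tight bound referred to for the classical gradient method x_{k+1} = x_k − (1/L)∇f(x_k) on an L-smooth convex f is, in this line of work,

```latex
f(x_N) - f(x^{\ast}) \;\le\; \frac{L\,\|x_0 - x^{\ast}\|^{2}}{4N + 2},
```

which improves the constant in the classical O(1/N) estimate and is attained by a worst-case instance produced through the PEP.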

20.
We consider the convex composite problem of minimizing the sum of a strongly convex function and a general extended-valued convex function. We present a dual-based proximal gradient scheme for solving this problem. We show that although the dual objective function sequence converges to the optimal value at the rate O(1/k²), the rate of convergence of the primal sequence is of the order O(1/k).
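A sketch of the duality underlying such dual-based schemes, under the stated assumptions: for min_x f(x) + g(x) with f strongly convex, the Fenchel dual can be written as

```latex
\min_{u}\; f^{\ast}(-u) + g^{\ast}(u),
```

and strong convexity of f makes ∇f* Lipschitz continuous, so (accelerated) proximal gradient steps can be applied to the dual with f*(−u) as the smooth term; the O(1/k²)-versus-O(1/k) gap above is between this dual sequence and the primal iterates recovered from it.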
