Similar Literature
A total of 18 related records were retrieved.
1.
徐海文  孙黎明 《计算数学》2017,39(2):200-212
The hybrid descent algorithm for convex optimization obtains a set of descent directions by using known information from approximation criteria and a random-number expansion of the prediction-correction step, while the forward accelerated contraction algorithm exploits Gauss-Seidel iteration techniques and combines the ideas of the proximal point algorithm and the approximate proximal point algorithm to construct expansive descent directions. Drawing on both methods, this paper uses the available approximation-rule information to improve the descent direction of the hybrid descent algorithm and obtains an accelerated hybrid descent algorithm for a class of convex optimization problems. Convergence in probability is then proved using the Markov inequality, properties of convex functions, and basic properties of projections. A series of numerical experiments demonstrates the effectiveness and efficiency of the accelerated hybrid descent algorithm.

2.
郦旭东 《计算数学》2020,42(4):385-404
In the era of big data, with ever-improving data acquisition, large-scale composite convex optimization problems arise extensively in applications including statistical data analysis, machine and statistical learning, and signal and image processing. This paper introduces a class of fast proximal point algorithms for large-scale composite convex optimization. Under easily computable approximation criteria and rather weak stationarity conditions, global convergence and local asymptotic superlinear convergence of the algorithm are established. We also design a semismooth Newton method based on duality to solve the key subproblems of the proximal point algorithm efficiently and stably. Finally, the paper discusses how to exploit the nonsmooth second-order information induced by the nonsmooth regularizer in composite convex optimization to greatly reduce the cost of solving the Newton linear systems in the semismooth Newton method, thereby further accelerating the proximal point algorithm.
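For orientation, a minimal sketch of the generic inexact proximal point scheme this entry builds on, applied to an assumed ℓ1-regularized least-squares instance with each subproblem solved approximately by a few proximal-gradient steps (a toy setup, not the paper's dual semismooth Newton solver):

```python
# Illustrative sketch only: inexact PPA for F(x) = 0.5*||A x - b||^2 + lam*||x||_1,
# with each PPA subproblem solved approximately by a few proximal-gradient steps.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_ppa(A, b, lam, sigma=1.0, outer_iters=50, inner_iters=20):
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 + 1.0 / sigma      # Lipschitz constant of the smooth part
    for _ in range(outer_iters):
        y = x.copy()                                  # inner variable for the PPA subproblem
        for _ in range(inner_iters):                  # inexact subproblem solve
            grad = A.T @ (A @ y - b) + (y - x) / sigma
            y = soft_threshold(y - grad / L, lam / L)
        x = y                                         # accept the approximate minimizer
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 60))
    x_true = np.zeros(60); x_true[:5] = 1.0
    b = A @ x_true
    x = inexact_ppa(A, b, lam=0.1)
    print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.abs(x).sum())
```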

3.
周叔子  孙佑兰 《经济数学》2005,22(3):312-316
This paper proposes an inexact proximal point algorithm for the minimization of DC functions (differences of two convex functions) and proves the descent property and global convergence of the algorithm.
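A rough sketch, in our own notation, of the proximal-point-type step that inexact methods for DC minimization typically approximate (the paper's exact inexactness rule may differ):

```latex
% A proximal-point-type step for the DC objective F(x) = g(x) - h(x), g and h convex;
% illustrative notation only -- the paper's inexactness rule may differ.
\[
  y^{k} \in \partial h(x^{k}), \qquad
  x^{k+1} \approx \operatorname*{arg\,min}_{x}\;
    g(x) - \langle y^{k}, x\rangle + \tfrac{1}{2\lambda_{k}}\,\lVert x - x^{k}\rVert^{2}.
\]
```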

4.
The approximate proximal point algorithm is an effective method for solving monotone variational inequalities: it approximates a solution by solving a sequence of strongly monotone subproblems that generate an approximate proximal point sequence. The extragradient algorithm, in turn, adds an extra projection in each iteration to overcome the overly restrictive requirements of ordinary projection methods. Neither, however, avoids the difficulty of computing projections onto irregular closed convex sets within the iterations. This paper therefore combines the extragradient iteration scheme with a half-space containing the original projection region, performs the projection onto the half-space to simplify the projection step, and imposes suitable restrictions on the new proximal point sequence so that the modified algorithm retains good convergence properties.
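The computational point of replacing the original feasible set by a containing half-space is that projection onto a half-space has a closed form; a small illustrative sketch, with assumed notation H = {x : aᵀx ≤ b}:

```python
# Projection onto the half-space H = { x : a^T x <= b } has a closed form, which is
# what makes the half-space relaxation cheap (illustrative sketch, assumed notation).
import numpy as np

def project_halfspace(x, a, b):
    violation = a @ x - b
    if violation <= 0:                       # x already lies in H
        return x.copy()
    return x - (violation / (a @ a)) * a     # step along a back onto the boundary

if __name__ == "__main__":
    a = np.array([1.0, 2.0])
    b = 1.0
    x = np.array([2.0, 2.0])                 # a^T x = 6 > 1, so x is outside H
    p = project_halfspace(x, a, b)
    print(p, a @ p)                          # a^T p equals b up to rounding
```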

5.
This paper proves a convergence theorem for the proximal point algorithm applied to DC function minimization and proposes a class of inexact proximal point algorithms for this problem.

6.
This paper introduces a class of mixed quasi-variational inclusions involving H-accretive operators in Banach spaces and proposes a proximal point method for solving them. Using the resolvent-operator technique for H-accretive operators, an equivalence is established between the mixed quasi-variational inclusion problem and a proximal operator equation; from this equivalence an iterative algorithm for solving the proximal operator equation is obtained, and the algorithm converges to a solution of the mixed quasi-variational inclusion problem.

7.
ADMM is one of the classical algorithms for separable convex optimization, but it does not guarantee convergence of the primal iterate sequence and its subproblems can be computationally expensive. To guarantee global convergence of all iterate sequences and to improve efficiency, the golden-ratio proximal ADMM based on a convex-combination technique was proposed, in which the convex-combination factor Ψ is the key parameter. Building on the golden-ratio proximal ADMM, this paper enlarges the admissible range of Ψ and proposes a generalized golden-ratio proximal ADMM with a wider range of convergent step sizes. Under suitable assumptions, global convergence of the algorithm and an ergodic O(1/N) sublinear convergence rate for the objective residual and the constraint violation are proved; moreover, when either of the objective functions is strongly convex, an ergodic O(1/N²) convergence rate is established. Finally, numerical experiments demonstrate the effectiveness of the generalized algorithm.
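For context, a sketch of the classical scaled-form ADMM on a lasso-type splitting, which only fixes the separable structure that golden-ratio variants modify; this is not the generalized golden-ratio proximal ADMM of the paper:

```python
# Classical scaled-form ADMM for the lasso splitting
#     min_x 0.5*||A x - b||^2 + lam*||z||_1   s.t.  x - z = 0
# (shown only to fix the separable structure; NOT the generalized golden-ratio
# proximal ADMM proposed in the paper).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)    # u is the scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))    # x-update: ridge-type linear system
        z = soft_threshold(x + u, lam / rho)             # z-update: soft-thresholding
        u = u + x - z                                    # dual update on the constraint x = z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 80))
    x_true = np.zeros(80); x_true[:4] = 2.0
    b = A @ x_true
    print(np.nonzero(np.abs(admm_lasso(A, b, lam=0.5)) > 1e-6)[0])
```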

8.
For a special class of multiobjective optimization problems, in which each objective function is the sum of a twice continuously differentiable convex function and a proper convex but not necessarily differentiable function, proximal Newton methods are proposed. We introduce a proximal Newton method with line search and one without line search. Under suitable conditions, we prove that every accumulation point of the sequences generated by these two algorithms is a Pareto stationary point of the multiobjective optimization problem. In addition, we present their application to constrained multiobjective optimization and robust multiobjective optimization ...

9.
Approximate proximal point algorithms play an important role in the study of optimization theory and methods. Under different error criteria they exhibit different convergence behavior. Using tools such as maximal monotone operators, a concrete example is given that explains the convergence of approximate proximal point algorithms under several error criteria.

10.
This paper proposes a new feasible QP-free algorithm for nonlinear inequality-constrained optimization. The new algorithm retains the advantages of existing methods and has the following features: (1) each iteration only requires solving three systems of linear equations with the same coefficient matrix, so the computational cost is low; (2) a feasible descent direction is obtained by solving a single linear system, avoiding the earlier difficulty of solving two separate systems for a descent direction and a feasibility direction and then taking their convex combination; (3) all iterates are feasible points and need not be strictly interior; (4) a tentative line search is used, which further reduces the computational cost; (5) the algorithm has few parameters, and numerical experiments show good numerical performance and strong stability.
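Feature (1) is cheap precisely because one factorization of the shared coefficient matrix can be reused for all three right-hand sides; a generic illustration with an assumed random matrix (not the algorithm's actual systems):

```python
# Feature (1): the three linear systems share one coefficient matrix, so a single
# LU factorization is reused for every right-hand side (generic illustration, not
# the algorithm's actual systems).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n)) + n * np.eye(n)      # a well-conditioned test matrix
rhs = [rng.standard_normal(n) for _ in range(3)]     # three right-hand sides

lu, piv = lu_factor(M)                               # factor once: O(n^3)
solutions = [lu_solve((lu, piv), r) for r in rhs]    # each solve: only O(n^2)
for r, s in zip(rhs, solutions):
    print(np.allclose(M @ s, r))
```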

11.
We propose and study the iteration-complexity of a proximal-Newton method for finding approximate solutions of the problem of minimizing a twice continuously differentiable convex function on a (possibly infinite dimensional) Hilbert space. We prove global convergence rates for obtaining approximate solutions in terms of function/gradient values. Our main results follow from an iteration-complexity study of a (large-step) inexact proximal point method for solving convex minimization problems.

12.
Convergence analysis of inexact proximal point algorithms for monotone variational inequalities   Cited by: 7 (self-citations: 0, citations by others: 7)
We consider a proximal point algorithm (PPA) for solving monotone variational inequalities. PPA generates a sequence by solving a sequence of strongly monotone subproblems. However, solving these subproblems exactly is either expensive or impossible, and several inexact proximal point algorithms (IPPA) have been developed in the literature. In this paper, we present a criterion for approximately solving the subproblems. It requires only one simple additional computation on top of the original algorithm, and the convergence criterion becomes milder. We show that this method converges globally under the new criterion provided that the solution set of the problem is nonempty.
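As a sketch of the generic inexact PPA idea (not this paper's specific error criterion), consider the assumed toy case of the affine monotone operator T(x) = Mx + q with M symmetric positive semidefinite; each step evaluates the resolvent only approximately by a few conjugate-gradient iterations:

```python
# Sketch of an inexact PPA for finding a zero of the affine monotone operator
# T(x) = M x + q, M symmetric positive semidefinite: the resolvent system
# (I + lam*M) y = x - lam*q is solved only approximately by a few CG iterations.
import numpy as np

def cg(A, b, iters):
    """A few conjugate-gradient iterations for A x = b, A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if r_new @ r_new < 1e-30:            # already (numerically) exact
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def inexact_ppa_zero(M, q, lam=1.0, outer=100, inner=5):
    x = np.zeros_like(q)
    A = np.eye(len(q)) + lam * M
    for _ in range(outer):
        x = cg(A, x - lam * q, inner)        # approximate resolvent step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    B = rng.standard_normal((5, 5))
    M = B.T @ B                              # symmetric positive semidefinite
    q = rng.standard_normal(5)
    x = inexact_ppa_zero(M, q)
    print("residual ||Mx + q|| =", np.linalg.norm(M @ x + q))
```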

13.
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.

14.
《Optimization》2012,61(7):1043-1055
In this article, a new method is proposed for solving a class of structured variational inequalities (SVIs). The proposed method is referred to as the partial inexact proximal alternating direction (piPAD) method. In the method, two subproblems are solved independently: one is handled by an inexact proximal point method and the other is solved directly. This feature is the major difference between the proposed method and some existing alternating-direction-like methods. The convergence of the piPAD method is proved. Two examples of modern convex optimization problems arising in engineering and the information sciences, which can be reformulated as such SVIs, are presented to demonstrate the applicability of the piPAD method. Some preliminary numerical results are also reported to validate the feasibility and efficiency of the piPAD method.

15.
A convergent decomposition algorithm for support vector machines   Cited by: 1 (self-citations: 0, citations by others: 1)
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure global convergence of the algorithm in the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.

16.
This paper studies a general vector optimization problem of finding weakly efficient points for mappings from Hilbert spaces to arbitrary Banach spaces, where the latter are partially ordered by some closed, convex, and pointed cones with nonempty interiors. To find solutions of this vector optimization problem, we introduce an auxiliary variational inequality problem for a monotone and Lipschitz continuous mapping. The approximate proximal method in vector optimization is extended to develop a hybrid approximate proximal method for the general vector optimization problem under consideration by combining an extragradient method to find a solution of the variational inequality problem and an approximate proximal point method for finding a root of a maximal monotone operator. In this hybrid approximate proximal method, the subproblems consist of finding approximate solutions to the variational inequality problem for a monotone and Lipschitz continuous mapping, and then finding weakly efficient points for a suitable regularization of the original mapping. We present both absolute and relative versions of our hybrid algorithm in which the subproblems are solved only approximately. The weak convergence of the generated sequence to a weakly efficient point is established under quite mild assumptions. In addition, we develop some extensions of our hybrid algorithms for vector optimization by using Bregman-type functions.

17.
A proximal bundle method is presented for minimizing a nonsmooth convex function f. At each iteration, it requires only one approximate evaluation of f and its ε-subgradient, and it finds a search direction via quadratic programming. When applied to the Lagrangian decomposition of convex programs, it allows for inexact solutions of the decomposed subproblems; yet, by automatically increasing their required accuracy, it asymptotically finds both the primal and dual solutions. It is an implementable approximate version of the proximal point algorithm. Some encouraging numerical experience is reported. The author thanks two anonymous referees for their valuable comments. Research supported by the State Committee for Scientific Research under Grant 8550502206.

18.
The first-order optimality conditions of linearly constrained convex optimization problems and of saddle-point problems form a monotone variational inequality. When these problems are solved in the variational inequality framework, choosing a suitable matrix G and applying the PPA under the G-norm makes the subproblems in each iteration quite easy to solve. This paper proves that the error bound of this class of customized PPA algorithms has an O(1/k) convergence rate.
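For reference, the generic PPA step under a G-norm can be written in variational-inequality form as follows (illustrative notation; the specific choices of G in the paper are what make the subproblems easy):

```latex
% Generic PPA step under the G-norm (G symmetric positive definite), written in
% variational-inequality form; illustrative notation, not the paper's exact setup.
\[
  w^{k+1} \in \Omega:\qquad
  (w - w^{k+1})^{\top}\bigl(F(w^{k+1}) + G\,(w^{k+1} - w^{k})\bigr) \ge 0
  \quad \forall\, w \in \Omega .
\]
```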
