Similar Documents
20 similar documents found.
1.
1 Introduction. Consider the following optimization problem: min f(x) = Σ_{i=1}^{m} f_i(x), s.t. x ∈ X, (1) where each f_i: R^n → R is a convex but nondifferentiable function and X is a nonempty closed convex subset of R^n. The main methods for solving (1) ...
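As a rough illustration of problem (1), the sketch below runs a plain projected subgradient iteration on a small assumed instance with piecewise-linear terms f_i(x) = |a_i·x - b_i| and X taken as the nonnegative orthant; the data, the 1/(k+1) steplength, and the function names are illustrative choices, not taken from the paper.

import numpy as np

# Assumed instance: f_i(x) = |a_i @ x - b_i|, X = nonnegative orthant.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

def subgrad_sum(x):
    # One subgradient of f(x) = sum_i |a_i @ x - b_i|.
    return A.T @ np.sign(A @ x - b)

x = np.ones(n)
for k in range(200):
    g = subgrad_sum(x)
    x = np.maximum(x - g / (k + 1), 0.0)  # subgradient step followed by projection onto X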

2.
One of the main drawbacks of the subgradient method is the tuning process to determine the sequence of steplengths. In this paper, the radar subgradient method, a heuristic method designed to compute a tuning-free subgradient steplength, is geometrically motivated and algebraically deduced. The unit commitment problem, which arises in the electrical engineering field, is used to compare the performance of the subgradient method with the new radar subgradient method. Communicated by M. Simaan. This research was supported by the Spanish Government, CICYT Grant TAP99-1075-C02-01. We acknowledge the technical support from Logilab (HEC, University of Geneva) and especially the valuable remarks and suggestions of the referees.

3.
We generalize the subgradient optimization method for nondifferentiable convex programming to utilize conditional subgradients. Firstly, we derive the new method and establish its convergence by generalizing convergence results for traditional subgradient optimization. Secondly, we consider a particular choice of conditional subgradients, obtained by projections, which leads to an easily implementable modification of traditional subgradient optimization schemes. To evaluate the subgradient projection method we consider its use in three applications: uncapacitated facility location, two-person zero-sum matrix games, and multicommodity network flows. Computational experiments show that the subgradient projection method performs better than traditional subgradient optimization; in some cases the difference is considerable. These results suggest that our simple modification may improve subgradient optimization schemes significantly. This finding is important as such schemes are very popular, especially in the context of Lagrangean relaxation.

4.
Subgradient projectors play an important role in optimization and for solving convex feasibility problems. For every locally Lipschitz function, we can define a subgradient projector via generalized subgradients even if the function is not convex. The paper consists of three parts. In the first part, we study basic properties of subgradient projectors and give characterizations when a subgradient projector is a cutter, a local cutter, or a quasi-nonexpansive mapping. We present global and local convergence analyses of subgradient projectors. Many examples are provided to illustrate the theory. In the second part, we investigate the relationship between the subgradient projector of a prox-regular function and the subgradient projector of its Moreau envelope. We also characterize when a mapping is the subgradient projector of a convex function. In the third part, we focus on linearity properties of subgradient projectors. We show that, under appropriate conditions, a linear operator is a subgradient projector of a convex function if and only if it is a convex combination of the identity operator and a projection operator onto a subspace. In general, neither a convex combination nor a composition of subgradient projectors of convex functions is a subgradient projector of a convex function.
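For the convex case at a fixed level, the subgradient projector admits a one-line formula; the sketch below is a minimal rendering of that formula, with the level-0 choice and the test function ||x||_1 - 1 as assumptions made only for illustration.

import numpy as np

def subgradient_projector(f, subgrad, x, level=0.0):
    # G(x) = x - ((f(x) - level) / ||s||^2) s if f(x) > level, and G(x) = x otherwise,
    # where s is any subgradient of f at x.
    fx = f(x)
    if fx <= level:
        return x
    s = subgrad(x)
    return x - ((fx - level) / np.dot(s, s)) * s

# Assumed example: f(x) = ||x||_1 - 1 with subgradient sign(x); iterating the projector
# drives x toward the level set {f <= 0}.
f = lambda x: np.abs(x).sum() - 1.0
g = lambda x: np.sign(x)
x = np.array([2.0, -1.0])
for _ in range(50):
    x = subgradient_projector(f, g, x)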

5.
In the present paper, we use subgradient projection algorithms for solving convex feasibility problems. We show that almost all iterates, generated by a subgradient projection algorithm in a Hilbert space, are approximate solutions. Moreover, we obtain an estimate of the number of iterates which are not approximate solutions. In a finite-dimensional case, we study the behavior of the subgradient projection algorithm in the presence of computational errors. Provided computational errors are bounded, we prove that our subgradient projection algorithm generates a good approximate solution after a certain number of iterates.
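A hedged sketch of one cyclic subgradient projection sweep for a convex feasibility problem; linear inequalities are used only so that the subgradients are explicit, and the data and tolerance are assumptions.

import numpy as np

# Assumed feasibility problem: find x with f_i(x) = a_i @ x - b_i <= 0 for all i.
A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.0, -1.0]])
b = np.array([4.0, 1.0, 0.0])

x = np.array([10.0, 10.0])
for _ in range(100):
    for a_i, b_i in zip(A, b):            # cyclic sweep over the constraints
        viol = a_i @ x - b_i
        if viol > 1e-12:
            # subgradient projection step toward {f_i <= 0}; here the subgradient is a_i
            x = x - (viol / (a_i @ a_i)) * a_i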

6.
In this paper, for convex feasibility problems defined by systems of nonlinear inequalities, we propose a sequential block-iterative subgradient projection algorithm and a parallel block-iterative subgradient projection algorithm. The system of nonlinear inequalities is partitioned into several subsystems, and the convex combination of the subgradient projections of the current iterate onto the individual sets of a subsystem is used as the approximate projection of the current iterate onto that subsystem. Convergence of both algorithms is proved under rather weak conditions.
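A rough reading of the block idea, under assumptions: the constraints are split into blocks, and within each block the approximate projection is a uniform convex combination of the subgradient projections onto the individual sets. Linear inequalities, equal weights, and the sequential sweep over blocks are illustrative choices only.

import numpy as np

# Assumed instance: two blocks of linear inequality constraints a @ x <= b.
blocks = [
    (np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([2.0, 0.0])),
    (np.array([[-1.0, 0.0], [0.0, -1.0]]), np.array([0.0, 0.0])),
]

def approx_block_projection(x, A, b):
    # Convex combination (equal weights) of subgradient projections onto each set {a_i @ x <= b_i}.
    w = 1.0 / len(b)
    out = np.zeros_like(x)
    for a_i, b_i in zip(A, b):
        viol = max(a_i @ x - b_i, 0.0)
        out += w * (x - (viol / (a_i @ a_i)) * a_i)
    return out

x = np.array([5.0, -3.0])
for _ in range(200):
    for A_blk, b_blk in blocks:           # sequential variant: sweep the blocks in order
        x = approx_block_projection(x, A_blk, b_blk)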

7.
A generalized subgradient method is considered which uses the subgradients at previous iterations as well as the subgradient at the current point. This method is a direct generalization of the usual subgradient method. We provide two sets of convergence conditions for the generalized subgradient method. Our results provide a larger class of sequences which converge to a minimum point and more freedom of adjustment to accelerate the speed of convergence. Most of this research was performed when the first author was visiting the Decision and Information Systems Department, College of Business, Arizona State University.
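A minimal sketch of the idea of mixing the current subgradient with earlier subgradient information; the fixed mixing weight, the 1/k steplength, and the toy objective ||x - c||_1 are assumptions, and the paper's convergence conditions cover more general combinations.

import numpy as np

# Assumed test problem: minimize f(x) = ||x - c||_1 over R^n.
c = np.array([1.0, -2.0, 3.0])
subgrad = lambda x: np.sign(x - c)

x = np.zeros(3)
d = np.zeros(3)
beta = 0.3                               # weight on accumulated past subgradients (assumed)
for k in range(1, 500):
    g = subgrad(x)
    d = (1.0 - beta) * g + beta * d      # direction: convex combination of current and past subgradients
    x = x - d / k                        # divergent-series steplength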

8.
轩华, 李冰. 《运筹与管理》, 2015, 24(6): 121-127
To reduce the solution complexity and shorten the computation time, an improved Lagrangian relaxation algorithm that incorporates an asynchronous subgradient method is proposed for the total weighted completion time problem in multi-stage hybrid flow shops. An integer programming model is formulated that jointly accounts for limited waiting times and job release times. The asynchronous subgradient method is embedded into the Lagrangian relaxation algorithm, so that a reasonable asynchronous subgradient direction is obtained by solving the Lagrangian relaxed problem only approximately, and searching along this direction gradually reduces the distance to the optimal point. Simulation experiments verify the effectiveness of the proposed algorithm. A comparison with the traditional subgradient-based Lagrangian relaxation algorithm shows that, in terms of overall solution quality and computational efficiency, the proposed algorithm obtains better near-optimal solutions within shorter computation times, especially for large-scale problems.

9.
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
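A toy primal-dual subgradient sketch on a one-dimensional Lagrangian, with running averages standing in for the approximate saddle points; the problem, the constant stepsize, and the averaging scheme are assumptions for illustration only.

# Assumed toy problem: minimize x^2 subject to x >= 1, with Lagrangian
# L(x, lam) = x**2 + lam * (1 - x); its saddle point is (x, lam) = (1, 2).
alpha = 0.05
x, lam = 0.0, 0.0
x_avg, lam_avg = 0.0, 0.0
for k in range(1, 2001):
    gx = 2 * x - lam                     # subgradient of L with respect to x
    glam = 1 - x                         # supergradient of L with respect to lam
    x = x - alpha * gx                   # primal descent step
    lam = max(lam + alpha * glam, 0.0)   # dual ascent step, projected onto lam >= 0
    x_avg += (x - x_avg) / k             # running averages give the approximate saddle point
    lam_avg += (lam - lam_avg) / k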

10.
The subgradient extragradient method for solving the variational inequality (VI) problem, which was introduced by Censor et al. (J. Optim. Theory Appl. 148, 318–335, 2011), replaces the second projection onto the feasible set of the VI in the extragradient method with a subgradient projection onto some constructible half-space. Since the method was introduced, many authors have proposed extensions and modifications with applications to various problems. In this paper, we introduce a modified subgradient extragradient method by improving the stepsize of its second step. Convergence of the proposed method is proved under standard and mild conditions, and preliminary numerical experiments illustrate the performance and advantage of this new subgradient extragradient variant.
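A compact sketch of the two-step structure described here: a metric projection onto the feasible set in the first step and a projection onto a constructed half-space in the second. The affine monotone operator, the feasible set, and the stepsize are assumed test data, and the modified stepsize rule of this paper is not reproduced.

import numpy as np

# Assumed monotone VI: find x* in C with <F(x*), x - x*> >= 0 for all x in C,
# where F(x) = M x + q and C is the nonnegative orthant.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
proj_C = lambda x: np.maximum(x, 0.0)
tau = 0.1

x = np.array([1.0, 1.0])
for _ in range(500):
    z = x - tau * F(x)
    y = proj_C(z)                        # first step: projection onto C
    v = z - y                            # normal of the half-space T = {w : <v, w - y> <= 0}
    u = x - tau * F(y)
    t = v @ (u - y)
    # second step: subgradient projection onto T replaces a second projection onto C
    x = u if t <= 0.0 else u - (t / (v @ v)) * v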

11.
In a Hilbert space, we study the convergence of the subgradient method to a solution of a variational inequality, under the presence of computational errors. Most results known in the literature establish convergence of optimization algorithms, when computational errors are summable. In the present paper, the convergence of the subgradient method for solving variational inequalities is established for nonsummable computational errors. We show that the subgradient method generates a good approximate solution, if the sequence of computational errors is bounded from above by a constant.

12.
A readily implementable algorithm is proposed for minimizing any convex, not necessarily differentiable, function f of several variables subject to a finite number of linear constraints. The algorithm requires only the calculation of f at designated feasible points. At each iteration a lower polyhedral approximation to f is constructed from a few previously calculated subgradients and an aggregate subgradient. The recursively updated aggregate subgradient accumulates the subgradient information to deal with the nondifferentiability of f. The polyhedral approximation and the linear constraints generate the constraints of the search-direction-finding subproblem, which is a quadratic programming problem. A step length is then found by an approximate line search. It is shown that the algorithm converges to a solution, if one exists.

13.
Based on the gradient sampling technique, we present a subgradient algorithm to solve the nondifferentiable convex optimization problem with an extended real-valued objective function. A feature of our algorithm is the approximation of subgradient at a point via random sampling of (relative) gradients at nearby points, and then taking convex combinations of these (relative) gradients. We prove that our algorithm converges to an optimal solution with probability 1. Numerical results demonstrate that our algorithm performs favorably compared with existing subgradient algorithms on applications considered.
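A minimal sketch of approximating a subgradient by averaging gradients sampled near the current point; the uniform sampling radius, the uniform convex combination, the toy function max_i x_i, and the 1/k steplength are all assumptions, and the randomized scheme analyzed in the paper is richer than this.

import numpy as np

# Assumed test function: f(x) = max_i x_i, differentiable wherever the maximizer is unique.
def grad_f(x):
    g = np.zeros_like(x)
    g[np.argmax(x)] = 1.0
    return g

def sampled_subgradient(x, rng, eps=0.1, n_samples=10):
    # Convex combination (here a uniform average) of gradients sampled at nearby points.
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
    return np.mean([grad_f(p) for p in pts], axis=0)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.9, -2.0])
for k in range(1, 200):
    x = x - sampled_subgradient(x, rng) / k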

14.
The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme. Dedicated to B.T. Polyak on the occasion of his 70th birthday.

15.
In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, we cite problems that arise within the context of stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, we address the solution of such optimization problems by exploring the use of successive approximation schemes within subgradient optimization methods. We refer to this new class of methods as inexact subgradient algorithms. With relatively mild conditions imposed on the approximations, we show that the inexact subgradient algorithms inherit properties associated with their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions that we impose allow a relaxation of requirements traditionally imposed on steplengths in stochastic quasi-gradient methods. Additionally, we study methods in which steplengths may be defined adaptively, in a manner that reflects the improvement in the objective function approximations as the iterations proceed. We illustrate the applicability of our approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs. This work was supported by Grant Nos. NSF-DDM-89-10046 and NSF-DDM-9114352 from the National Science Foundation.

16.
We replace orthogonal projections in the Polyak subgradient method for nonnegatively constrained minimization with entropic projections, thus obtaining an interior-point subgradient method. Inexact entropic projections are quite cheap. Global convergence of the resulting method is established.
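The abstract does not spell out the entropic projection, so the sketch below should be read as one plausible interior-point rendering under assumptions: a Polyak steplength (known optimal value) combined with a multiplicative, entropy-kernel update that keeps the iterates strictly positive. The test problem and the guard on the denominator are also assumptions.

import numpy as np

# Assumed problem: minimize f(x) = ||x - c||_1 over x >= 0, with known optimal value f* = 0.
c = np.array([0.5, 2.0, 0.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)
f_star = 0.0

x = np.ones(3)                                 # strictly positive starting point
for _ in range(300):
    g = subgrad(x)
    t = (f(x) - f_star) / max(g @ g, 1e-12)    # Polyak steplength
    x = x * np.exp(-t * g)                     # multiplicative (entropy-kernel) update keeps x > 0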

17.
We propose a weighting subgradient algorithm for solving multiobjective minimization problems on a nonempty closed convex subset of a Euclidean space. This method combines a weighting technique with the classical projected subgradient method, using a divergent-series steplength rule. Under the assumption of convexity, we show that the sequence generated by this method converges to a Pareto optimal point of the problem. Some numerical results are presented.
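A small sketch of the weighting idea on an assumed bi-objective instance: a fixed weight vector combines one subgradient per objective, and a projected step with a divergent-series steplength follows. The weights, the objectives, and the feasible set are illustrative assumptions.

import numpy as np

# Assumed bi-objective instance: minimize (f1, f2) = (||x - a||_1, ||x - b||_1) over x >= 0.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
subgrads = [lambda x: np.sign(x - a), lambda x: np.sign(x - b)]
w = np.array([0.5, 0.5])                      # weight vector (assumed fixed here)

x = np.array([3.0, 3.0])
for k in range(1, 500):
    g = sum(w_i * s(x) for w_i, s in zip(w, subgrads))   # weighted combination of subgradients
    x = np.maximum(x - g / k, 0.0)            # projected step, divergent-series steplength 1/k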

18.
Lagrangean dualization and subgradient optimization techniques are frequently used within the field of computational optimization for finding approximate solutions to large, structured optimization problems. The dual subgradient scheme does not automatically produce primal feasible solutions; there is an abundance of techniques for computing such solutions (via penalty functions, tangential approximation schemes, or the solution of auxiliary primal programs), all of which require a fair amount of computational effort. We consider a subgradient optimization scheme applied to a Lagrangean dual formulation of a convex program, and construct, at minor cost, an ergodic sequence of subproblem solutions which converges to the primal solution set. Numerical experiments performed on a traffic equilibrium assignment problem under road pricing show that the computation of the ergodic sequence results in a considerable improvement in the quality of the primal solutions obtained, compared to those generated in the basic subgradient scheme. Received February 11, 1997 / Revised version received June 19, 1998 / Published online June 28, 1999
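A toy sketch of the ergodic idea on an assumed linear program: the Lagrangian subproblem solutions oscillate between two box corners, so no single iterate is primal optimal, but their running average approaches the optimal face. The LP, the 1/k stepsizes, and the uniform ergodic weights are assumptions.

import numpy as np

# Assumed LP: minimize -x1 - x2 over x in [0, 1]^2 subject to x1 + x2 <= 1.
c = np.array([-1.0, -1.0])
a, b = np.array([1.0, 1.0]), 1.0

lam = 0.0
x_erg = np.zeros(2)
for k in range(1, 2001):
    x_k = np.where(c + lam * a < 0, 1.0, 0.0)    # Lagrangian subproblem over the box
    lam = max(lam + (a @ x_k - b) / k, 0.0)      # dual subgradient step with divergent-series rule
    x_erg += (x_k - x_erg) / k                   # ergodic (running-average) primal sequence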

19.
We study conditions for convergence of a generalized subgradient algorithm in which a relaxation step is taken in a direction, which is a convex combination of possibly all previously generated subgradients. A simple condition for convergence is given and conditions that guarantee a linear convergence rate are also presented. We show that choosing the steplength parameter and convex combination of subgradients in a certain sense optimally is equivalent to solving a minimum norm quadratic programming problem. It is also shown that if the direction is restricted to be a convex combination of the current subgradient and the previous direction, then an optimal choice of stepsize and direction is equivalent to the Camerini—Fratta—Maffioli modification of the subgradient method. Research supported by the Swedish Research Council for Engineering Sciences (TFR).

20.
Surrogate Gradient Algorithm for Lagrangian Relaxation
The subgradient method is used frequently to optimize dual functions in Lagrangian relaxation for separable integer programming problems. In the method, all subproblems must be solved optimally to obtain a subgradient direction. In this paper, the surrogate subgradient method is developed, where a proper direction can be obtained without solving optimally all the subproblems. In fact, only an approximate optimization of one subproblem is needed to get a proper surrogate subgradient direction, and the directions are smooth for problems of large size. The convergence of the algorithm is proved. Compared with methods that take effort to find better directions, this method can obtain good directions with much less effort and provides a new approach that is especially powerful for problems of very large size.
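A hedged sketch of the surrogate idea on an assumed separable toy problem: only one subproblem is re-optimized per iteration, the remaining components are kept at their previous values, and the resulting surrogate subgradient drives the multiplier update. The data, the cyclic choice of subproblem, and the small diminishing steps are assumptions; the paper's stepsize conditions guaranteeing a proper direction are not reproduced.

import numpy as np

# Assumed separable toy problem: minimize sum_i c_i * x_i over x_i in {0, 1}
# subject to the coupling constraint sum_i w_i * x_i >= d.
c = np.array([3.0, 2.0, 4.0, 1.0])
w = np.array([2.0, 1.0, 3.0, 1.0])
d = 4.0

lam = 0.0
x = np.zeros(4)                          # stored subproblem solutions, updated one at a time
for k in range(1, 500):
    i = (k - 1) % 4                      # re-optimize only one subproblem per iteration
    x[i] = 1.0 if c[i] - lam * w[i] < 0 else 0.0
    g = d - w @ x                        # surrogate subgradient: uses stale x_j for j != i
    lam = max(lam + g / (10.0 * k), 0.0) # small diminishing multiplier steps (assumed rule)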
