Similar articles
 20 similar articles retrieved (search time: 78 ms)
1.
A new dual gradient method is given to solve linearly constrained, strongly convex, separable mathematical programming problems. The dual problem can be decomposed into one-dimensional problems whose solutions can be computed extremely easily. The dual objective function is shown to have a Lipschitz continuous gradient, and therefore a gradient-type algorithm can be used for solving the dual problem. The primal optimal solution can be obtained from the dual optimal solution in a straightforward way. Convergence proofs and computational results are given.
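As a rough illustration of the idea in this abstract (not the authors' algorithm), the sketch below applies dual gradient ascent to a separable strongly convex quadratic with linear equality constraints: the Lagrangian minimizers are available coordinate by coordinate in closed form, the dual gradient is the constraint residual with an explicit Lipschitz constant, and the primal solution is recovered from the dual iterate. All problem data are invented.

```python
import numpy as np

# Dual gradient ascent sketch for
#   min  sum_i (a_i/2) x_i^2 - c_i x_i   s.t.  B x = d,   a_i > 0.
rng = np.random.default_rng(0)
n, m = 8, 3
a = rng.uniform(1.0, 3.0, n)           # strong convexity parameters
c = rng.normal(size=n)
B = rng.normal(size=(m, n))
d = rng.normal(size=m)

def x_of_lam(lam):
    # Minimizing the Lagrangian over x separates into n one-dimensional problems.
    return (c - B.T @ lam) / a

# The dual gradient B x(lam) - d is Lipschitz with constant ||B diag(1/a) B^T||,
# so a constant step 1/L can be used.
L = np.linalg.norm(B @ np.diag(1.0 / a) @ B.T, 2)
lam = np.zeros(m)
for _ in range(2000):
    lam = lam + (1.0 / L) * (B @ x_of_lam(lam) - d)

x = x_of_lam(lam)                      # primal solution recovered from the dual
print("constraint residual:", np.linalg.norm(B @ x - d))
```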

2.
We investigate constrained first order techniques for training support vector machines (SVM) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas of incremental gradient technique, gradient acceleration and successive simple calculations of Lagrange multipliers. Both primal and dual formulations are studied and compared. Experiments show that the constrained incremental algorithms working in the dual space achieve the best trade-off between prediction accuracy and training time. We perform comparisons with an unconstrained large scale learning algorithm (Pegasos stochastic gradient) to emphasize that our choice can remain competitive for large scale learning due to the very special structure of the training problem.
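For context, here is a minimal sketch of the Pegasos-style stochastic subgradient baseline that the abstract compares against (the constrained dual algorithms themselves are not reproduced). The synthetic data, the 1/(λt) step-size rule, and the absence of a bias term are simplifying assumptions.

```python
import numpy as np

# Minimal Pegasos-style stochastic subgradient SVM (the unconstrained baseline
# mentioned for comparison); synthetic data, no bias term, illustration only.
rng = np.random.default_rng(1)
n, dim, lam = 500, 20, 0.01
X = rng.normal(size=(n, dim))
true_w = rng.normal(size=dim)
y = np.sign(X @ true_w)

w = np.zeros(dim)
for t in range(1, 20000):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)                  # standard 1/(lambda*t) step size
    if y[i] * (X[i] @ w) < 1.0:            # margin violated: hinge-loss subgradient
        w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
    else:                                  # only the regularizer contributes
        w = (1.0 - eta * lam) * w

acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.3f}")
```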

3.
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
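A generic primal-dual subgradient sketch in the spirit of this line of work (not the paper's exact scheme), run on a small convex problem with one inequality constraint: it averages the primal iterates and reports the objective value and feasibility violation at the average, the two quantities the abstract bounds. The problem data are invented.

```python
import numpy as np

# Primal-dual subgradient sketch: approximate a saddle point of the Lagrangian of
#   min ||x - t||^2   s.t.  sum(x) <= 1,   x in the box [-5, 5]^n.
n = 5
t = np.full(n, 2.0)                      # unconstrained minimizer violates sum(x) <= 1

def grad_f(x):  return 2.0 * (x - t)
def g(x):       return np.sum(x) - 1.0   # constraint g(x) <= 0

x, mu = np.zeros(n), 0.0
x_avg, alpha, iters = np.zeros(n), 0.01, 20000
for k in range(iters):
    gx = g(x)
    # subgradient step on the Lagrangian L(x, mu) = f(x) + mu * g(x)
    x_new = np.clip(x - alpha * (grad_f(x) + mu * np.ones(n)), -5.0, 5.0)
    mu = max(0.0, mu + alpha * gx)       # ascent step on the multiplier
    x = x_new
    x_avg += x / iters                   # running primal average

print("objective at average :", np.sum((x_avg - t) ** 2))
print("feasibility violation:", max(0.0, g(x_avg)))
```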

4.
In this paper, we introduce a transformation that converts a class of linear and nonlinear semidefinite programming (SDP) problems into nonlinear optimization problems. For those problems of interest, the transformation replaces matrix-valued constraints by vector-valued ones, hence reducing the number of constraints by an order of magnitude. The class of transformable problems includes instances of SDP relaxations of combinatorial optimization problems with binary variables as well as other important SDP problems. We also derive gradient formulas for the objective function of the resulting nonlinear optimization problem and show that both function and gradient evaluations have affordable complexities that effectively exploit the sparsity of the problem data. This transformation, together with the efficient gradient formulas, enables the solution of very large-scale SDP problems by gradient-based nonlinear optimization techniques. In particular, we propose a first-order log-barrier method designed for solving a class of large-scale linear SDP problems. This algorithm operates entirely within the space of the transformed problem while still maintaining close ties with both the primal and the dual of the original SDP problem. Global convergence of the algorithm is established under mild and reasonable assumptions. Received: January 5, 2000 / Accepted: October 2001 / Published online: February 14, 2002
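The transformation itself is specific to the SDP structure and is not reproduced here; purely to fix ideas, the following is a generic first-order log-barrier sketch on a small, box-bounded linear program, showing the kind of gradient-based barrier iteration the abstract refers to. The problem data and barrier schedule are arbitrary.

```python
import numpy as np

# Generic first-order log-barrier sketch on a small LP (illustration only; the
# SDP-specific transformation from the paper is not shown):
#   min c^T x  s.t.  A x <= b,   via   phi_mu(x) = c^T x - mu * sum(log(b - A x)).
rng = np.random.default_rng(2)
n, m = 4, 12
A_rand = rng.normal(size=(m, n))
A = np.vstack([A_rand, np.eye(n), -np.eye(n)])        # box rows keep the LP bounded
x = np.zeros(n)
b = np.concatenate([rng.uniform(0.5, 2.0, m), np.full(2 * n, 3.0)])  # x = 0 strictly feasible
c = rng.normal(size=n)

def phi(x, mu):
    slack = b - A @ x
    return np.inf if np.any(slack <= 0) else c @ x - mu * np.sum(np.log(slack))

def grad_phi(x, mu):
    return c + mu * (A.T @ (1.0 / (b - A @ x)))

mu = 1.0
for _ in range(40):                          # outer loop: shrink the barrier parameter
    for _ in range(100):                     # inner loop: gradient steps with backtracking
        g = grad_phi(x, mu)
        step = 1.0
        # Armijo backtracking; phi = inf outside the feasible set keeps iterates strictly feasible
        while phi(x - step * g, mu) > phi(x, mu) - 1e-4 * step * (g @ g) and step > 1e-12:
            step *= 0.5
        x = x - step * g
    mu *= 0.7

print("objective c^T x:", round(c @ x, 4), " min slack:", round(np.min(b - A @ x), 4))
```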

5.
We study dual functionals which have two fundamental properties. First, they have good asymptotic behavior. Second, to each dual sequence of subgradients converging to zero, one can associate a primal sequence which converges to an optimal solution of the primal problem. Furthermore, minimal conditions for the convergence of Gauss-Seidel methods are given and applied to functionals of this kind.
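To recall the scheme whose convergence conditions the abstract refers to, here is a bare Gauss-Seidel (cyclic exact coordinate minimization) sketch on a strongly convex quadratic; the quadratic model and data are arbitrary stand-ins.

```python
import numpy as np

# Gauss-Seidel (cyclic coordinate minimization) sketch on a strongly convex
# quadratic f(x) = 0.5 x^T Q x - p^T x, with exact 1-D minimization per coordinate.
rng = np.random.default_rng(3)
n = 6
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)          # symmetric positive definite
p = rng.normal(size=n)

x = np.zeros(n)
for _ in range(200):
    for i in range(n):               # minimize exactly in coordinate i, others fixed
        x[i] = (p[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]

print("residual ||Qx - p||:", np.linalg.norm(Q @ x - p))
```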

6.
Efficient algorithms for buffer space allocation
This paper describes efficient algorithms for determining how buffer space should be allocated in a flow line. We analyze two problems: a primal problem, which minimizes total buffer space subject to a production rate constraint; and a dual problem, which maximizes production rate subject to a total buffer space constraint. The dual problem is solved by means of a gradient method, and the primal problem is solved using the dual solution. Numerical results are presented. Profit optimization problems are natural generalizations of the primal and dual problems, and we show how they can be solved using essentially the same algorithms.
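The flow-line throughput model needed to evaluate the production rate is not reproduced here; the sketch below only illustrates the structure of the dual problem, maximizing a made-up concave stand-in for the production rate over a total buffer budget by projected gradient ascent, with finite-difference gradients as if the rate evaluator were a black box. The function `production_rate` and all constants are invented.

```python
import numpy as np

# Dual-problem sketch: maximize a (stand-in) production rate over buffer sizes
# subject to a total-buffer budget, by projected gradient ascent.
def production_rate(buf):
    # Made-up concave, increasing surrogate for flow-line throughput.
    return float(np.sum(np.log1p(buf) / (1.0 + np.arange(len(buf)))))

def project_budget(v, budget):
    # Euclidean projection onto {b >= 0, sum(b) = budget} (sort-based simplex rule).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - budget) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - budget) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fd_gradient(f, x, h=1e-5):
    # Central finite differences, treating the rate evaluator as a black box.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

budget, n = 20.0, 5
buf = np.full(n, budget / n)                  # start from an even allocation
for _ in range(500):
    buf = project_budget(buf + 0.5 * fd_gradient(production_rate, buf), budget)

print("allocation:", np.round(buf, 2), " rate:", round(production_rate(buf), 4))
```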

7.
In this paper we analyze from a unique point of view the behavior of path-following and primal-dual potential reduction methods on nonlinear conic problems. We demonstrate that most interior-point methods with efficiency estimates can be considered as different strategies for minimizing a convex primal-dual potential function in an extended primal-dual space. Their efficiency estimate is a direct consequence of the large local norm of the gradient of the potential function along a central path. It is shown that the neighborhood of this path is a region of the fastest decrease of the potential. Therefore the long-step path-following methods are, in a sense, the best potential-reduction strategies. We present three examples of such long-step strategies. We also prove an efficiency estimate for a pure primal-dual potential reduction method, which can be considered as an implementation of a penalty strategy based on a functional proximity measure. Using the convex primal-dual potential, we prove efficiency estimates for Karmarkar-type and Dikin-type methods as applied to a homogeneous reformulation of the initial primal-dual problem.

8.
Application of parallel techniques to a dual algorithm for constrained convex programming problems
Rosen's (1961) projected gradient method is applied to the dual of a constrained convex programming problem. Computing the projected gradient direction requires solving, to optimality, a minimization problem in the primal variables. We compute an approximate solution of this minimization problem with a parallel gradient distribution (PGD) algorithm, prove that the approximate solution can achieve any prescribed accuracy, and show that, when the accuracy is chosen appropriately, Rosen's method remains convergent.

9.
This paper presents a canonical dual approach for finding either an optimal or an approximate solution to the maximum cut problem (MAX CUT). We show that, by introducing a linear perturbation term to the objective function, the maximum cut problem is perturbed to have a dual problem which is a concave maximization problem over a convex feasible domain under certain conditions. Consequently, some global optimality conditions are derived for finding an optimal or approximate solution. A gradient descent algorithm is proposed for this purpose and computational examples are provided to illustrate the proposed approach.

10.
We consider the convex composite problem of minimizing the sum of a strongly convex function and a general extended-valued convex function. We present a dual-based proximal gradient scheme for solving this problem. We show that although the dual objective function sequence converges to the optimal value at the rate $O(1/k^2)$, the rate of convergence of the primal sequence is of the order $O(1/k)$.
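On one concrete instance (a quadratic fidelity term plus lambda*||Ax||_1, which is not the paper's general setting), the dual reduces to a smooth quadratic over a box, and the scheme amounts to projected gradient on the dual with the primal point recovered as x(y) = b - A^T y; adding FISTA-style momentum to the dual update is what typically yields the $O(1/k^2)$ dual rate quoted above. A minimal, unaccelerated sketch with invented data:

```python
import numpy as np

# Dual-based proximal gradient sketch on one concrete instance:
#   min_x 0.5*||x - b||^2 + lam*||A x||_1.
# The dual is a smooth quadratic over the box ||y||_inf <= lam; each dual
# iterate yields a primal point x(y) = b - A^T y.
rng = np.random.default_rng(4)
m, n, lam = 30, 50, 0.1
A = rng.normal(size=(m, n))
b = rng.normal(size=n)

Lc = np.linalg.norm(A @ A.T, 2)          # Lipschitz constant of the dual gradient
y = np.zeros(m)
for _ in range(3000):
    x = b - A.T @ y                      # primal point associated with the dual iterate
    y = np.clip(y + (1.0 / Lc) * (A @ x), -lam, lam)   # projected gradient step on the dual

x = b - A.T @ y
obj = 0.5 * np.sum((x - b) ** 2) + lam * np.sum(np.abs(A @ x))
print("primal objective:", round(obj, 6))
```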

11.
The $k$-means problem is a central problem in machine learning and combinatorial optimization. It is a classical NP-hard problem and is widely applied in data mining, production decision-making, image processing, biomedical technology, and other fields. Personal privacy protection has become increasingly important: with decisions often made by artificial intelligence algorithms, the question is how to extract as much information as possible from data without leaking individual privacy. Over the past decade, researchers have studied the $k$-means problem with privacy protection and obtained many results of both theoretical and practical value. This paper surveys differentially private algorithms for the $k$-means problem for the reader's reference.
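One common pattern behind differentially private $k$-means (not any specific algorithm from this survey) is a Lloyd-style iteration in which the per-cluster counts and coordinate sums are released with Laplace noise before the centroid update. The sketch below assumes data in [0, 1]^d, an even split of the privacy budget across iterations and across the two queries, and uses synthetic data.

```python
import numpy as np

# Minimal differentially-private Lloyd-style k-means sketch (illustration only).
rng = np.random.default_rng(5)
n, d, k, iters, eps = 2000, 2, 3, 8, 2.0
X = np.clip(rng.normal(loc=rng.uniform(size=(k, d))[rng.integers(k, size=n)],
                       scale=0.05), 0.0, 1.0)          # synthetic data in [0, 1]^d

eps_step = eps / iters / 2.0                  # per-iteration, per-query budget (simple composition)
centers = rng.uniform(size=(k, d))
for _ in range(iters):
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    for j in range(k):
        pts = X[labels == j]
        # L1 sensitivity: one point changes a count by 1 and a coordinate sum by at most d
        noisy_count = len(pts) + rng.laplace(scale=1.0 / eps_step)
        noisy_sum = pts.sum(axis=0) + rng.laplace(scale=d / eps_step, size=d)
        if noisy_count > 1.0:
            centers[j] = np.clip(noisy_sum / noisy_count, 0.0, 1.0)

print("noisy centers:\n", np.round(centers, 3))
```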

12.
We are interested in minimizing functionals with ℓ2 data and gradient fitting term and ℓ1 regularization term with higher order derivatives in a discrete setting. We examine the structure of the solution in 1D by reformulating the original problem into a contact problem which can be solved by dual optimization techniques. The solution turns out to be a 'smooth' discrete polynomial spline whose knots coincide with the contact points, while its counterpart in the contact problem is a discrete version of a spline with higher defect and contact points as knots. In 2D we modify Chambolle's algorithm to solve the minimization problem with the ℓ1 norm of interacting second order partial derivatives as regularization term. We show that the algorithm can be implemented efficiently by applying the fast cosine transform. We demonstrate by numerical denoising examples that the ℓ2 gradient fitting term can be used to avoid both edge blurring and staircasing effects.

13.
This paper presents a conjugate gradient method for solving systems of linear inequalities. The method is of dual optimization type and consists of two phases which can be implemented in a common framework. Phase 1 either finds the minimum-norm solution of the system or detects the inconsistency of the system. In the latter event, the method proceeds to Phase 2, in which an approximate least-squares solution to the system is obtained. The method is particularly suitable for large-scale problems because it preserves the sparsity structure of the problem. Its efficiency is shown by computational comparisons with an SOR-type method.
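The two-phase method itself is not reproduced; for reference, the following is a plain conjugate-gradient-on-normal-equations (CGLS-style) routine of the kind the least-squares phase relies on, using only products with A and A^T so that sparsity is preserved. The data are random placeholders.

```python
import numpy as np

# CG on the normal equations (CGLS-style) for min ||A x - b||_2; only the
# least-squares machinery, not the two-phase method for inequality systems.
def cgls(A, b, iters=200, tol=1e-10):
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r                      # residual of the normal equations
    p, gamma = s.copy(), s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(40, 15))
b = rng.normal(size=40)
x = cgls(A, b)
print("||A^T (Ax - b)||:", np.linalg.norm(A.T @ (A @ x - b)))
```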

14.
Recently, Balaji and Xu studied the consistency of stationary points, in the sense of the Clarke generalized gradient, for the sample average approximations to a one-stage stochastic optimization problem in a separable Banach space with separable dual. We present an alternative approach, showing that the restrictive assumptions that the dual space is separable and the Clarke generalized gradient is a (norm) upper semicontinuous and compact-valued multifunction can be dropped. For that purpose, we use two results having independent interest: a strong law of large numbers and a multivalued Komlós theorem in the dual to a separable Banach space, and a result on the weak* closedness of the expectation of a random weak* compact convex set.

15.
We present a procedure for computing lower bounds for the optimal cost in a linear programming problem. Whenever the procedure succeeds, it finds a dual feasible slack and the associated duality gap. Although no projective transformations or problem restatements are used, the method coincides with the procedures by Todd and Burrell and by de Ghellinck and Vial when these procedures are applicable. The procedure applies directly to affine potential reduction algorithms, and improves on existing techniques for finding lower bounds.
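The certificate behind such lower bounds can be checked in a few lines: for min c^T x subject to Ax = b, x >= 0, any y whose slack s = c - A^T y is nonnegative gives the lower bound b^T y, and for a primal-feasible x the duality gap equals s^T x. The paper's construction of y is not reproduced; the dual point below is a hypothetical example on a tiny LP.

```python
import numpy as np

# Checking a dual feasible slack and the resulting lower bound / duality gap
# for the LP:  min c^T x  s.t.  A x = b,  x >= 0.
def lower_bound_from_dual(A, b, c, y):
    s = c - A.T @ y
    if np.all(s >= 0):
        return b @ y, s               # dual feasible slack found: b^T y is a lower bound
    return None, s                    # y does not certify a bound

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0])
y = np.array([1.0, 0.5])              # candidate dual point (hypothetical)

bound, s = lower_bound_from_dual(A, b, c, y)
x_feas = np.array([3.0, 1.0, 0.0])    # primal feasible: A @ x_feas = b, x_feas >= 0
print("lower bound:", bound, " gap at x_feas:", c @ x_feas - bound)  # gap equals s @ x_feas
```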

16.
Numerical methods are proposed for solving finite-dimensional convex problems with inequality constraints satisfying the Slater condition. A method based on solving the dual to the original regularized problem is proposed and justified for problems having a strictly uniformly convex sum of the objective function and the constraint functions. Conditions for the convergence of this method are derived, and convergence rate estimates are obtained for convergence with respect to the functional, convergence with respect to the argument to the set of optimizers, and convergence to the g-normal solution. For more general convex finite-dimensional minimization problems with inequality constraints, two methods with finite-step inner algorithms are proposed. The methods are based on the projected gradient and conditional gradient algorithms. The paper is focused on finite-dimensional problems obtained by approximating infinite-dimensional problems, in particular, optimal control problems for systems with lumped or distributed parameters.

17.
We consider a general equilibrium problem defined on a convex set, whose cost bifunction may not be monotone. We show that this problem can be solved by the inexact proximal point method if there exists a solution to the dual problem. An application of this approach to nonlinearly constrained problems is also suggested.

18.
In this paper, we propose a feasible-direction method for large-scale nonconvex programs, where the gradient projection on a linear subspace defined by the active constraints of the original problem is determined by dual decomposition. Results are extended to dynamic problems which include distributed delays and constraints on both state and control variables. The approach is compared with other feasible-direction approaches, and the method is applied to a power generation problem. Some computational results are included. This work was supported by the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico, Brasilia, Brazil, and by the Fundaçao de Amparo a Pesquisa do Estado de Sao Paulo, Sao Paulo, Brazil.

19.
We present alternative methods for verifying the quality of a proposed solution to a two-stage stochastic program with recourse. Our methods revolve around implications of a dual problem in which dual multipliers on the nonanticipativity constraints play a critical role. Using randomly sampled observations of the stochastic elements, we introduce notions of statistical dual feasibility and sampled error bounds. Additionally, we use the nonanticipativity multipliers to develop connections to reduced gradient methods. Finally, we propose a statistical test based on directional derivatives. We illustrate the applicability of these tests via some examples. This work was supported in part by Grant No. NSF-DMI-9414680 from the National Science Foundation.

20.
For the scalar semicoercive Signorini problem, the dual problem is constructed on the basis of a modified Lagrangian functional. The convergence of the gradient method as applied to solving the dual problem is examined.
