Similar Literature
20 similar documents found.
1.
A New Self-Dual Embedding Method for Convex Programming
In this paper we introduce a conic optimization formulation for constrained convex programming, and propose a self-dual embedding model for solving the resulting conic optimization problem. The primal and dual cones in this formulation are characterized by the original constraint functions and their corresponding conjugate functions, respectively, and hence are completely symmetric. This allows a standard primal-dual path-following approach for solving the embedded problem. Moreover, there are two immediate logarithmic barrier functions for the primal and dual cones, and we show that these two logarithmic barrier functions are conjugate to each other. The explicit form of the conjugate functions is in fact not required by the algorithm. An advantage of the new approach is that no initial feasible solution needs to be assumed. To guarantee the polynomiality of the path-following procedure, we may apply the self-concordant barrier theory of Nesterov and Nemirovski. For this purpose, as one application, we prove that the barrier functions constructed this way are indeed self-concordant when the original constraint functions are convex and quadratic. We pose it as an open question to find general conditions under which the constructed barrier functions are self-concordant.
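For orientation, here is a minimal sketch (standard convex-analytic facts, not the paper's exact construction) of why the primal and dual cones can be symmetric. For a closed proper convex function $f$ with conjugate $f^*(y)=\sup_x\{y^\top x-f(x)\}$, the constraint $f(x)\le 0$ can be encoded by the closed convex cone generated by the perspective of $f$,
\[
K_f \;=\; \operatorname{cl}\big\{(x,t,\kappa)\;:\; t>0,\ t\,f(x/t)\le\kappa\big\},
\]
whose dual cone is, up to sign conventions, the cone $K_{f^*}$ built in the same way from $f^*$. Pairing such cones for the constraint functions and their conjugates yields the symmetric primal-dual structure described above.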

2.
The self-concordant barrier function theory established by Nesterov and Nemirovski [4] provides polynomial-time interior-point algorithms for solving linear and convex optimization problems. The complexity of an interior-point algorithm can be analyzed from the parameter of the self-concordant barrier function. In this paper, we introduce a kernel-function-based local self-concordant barrier function, which satisfies the self-concordance property on the central path of a linear optimization problem and in its neighborhood. By evaluating the local parameter value of this barrier function, we obtain the theoretical iteration bound of a pure-Newton-step interior-point algorithm for linear programming based on this local self-concordant barrier function. This iteration bound matches the best iteration bound currently known.
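For background, the standard Nesterov-Nemirovski definitions referred to throughout this list: a three times continuously differentiable convex function $F$ on an open convex domain is self-concordant if
\[
\big|D^3F(x)[h,h,h]\big| \;\le\; 2\,\big(D^2F(x)[h,h]\big)^{3/2}
\quad\text{for all $x$ in the domain and all directions $h$,}
\]
and it is a $\nu$-self-concordant barrier if, in addition, $\big|DF(x)[h]\big| \le \sqrt{\nu}\,\big(D^2F(x)[h,h]\big)^{1/2}$ and $F(x)\to\infty$ as $x$ approaches the boundary of the domain. The barrier parameter $\nu$ is what drives the iteration bounds quoted in these abstracts.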

3.
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems, using the theory of self-concordant functions developed by Nesterov and Nemirovski in SIAM Studies in Applied Mathematics, SIAM Publications, Philadelphia, 1994. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy, and which one is best to fix. A lemma due to den Hertog et al. in Mathematical Programming Series B 69 (1) (1995) is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
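For context, the bound being optimized has the standard short-step form (a classical result, stated here only for orientation): given a $\nu$-self-concordant barrier for the feasible region, a short-step path-following method reaches an $\varepsilon$-optimal solution in
\[
O\!\big(\sqrt{\nu}\,\log(\nu\mu_0/\varepsilon)\big)
\]
Newton iterations, where $\mu_0$ is the initial centering parameter. Improved complexity results for a structured class therefore amount to exhibiting a barrier with a smaller parameter $\nu$ and tighter constants.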

4.
In this paper, we propose a distributed algorithm for solving large-scale separable convex problems using Lagrangian dual decomposition and the interior-point framework. By adding self-concordant barrier terms to the ordinary Lagrangian, we prove under mild assumptions that the corresponding family of augmented dual functions is self-concordant. This makes it possible to use the Newton method efficiently for tracing the central path. We show that the new algorithm is globally convergent and highly parallelizable, and thus suitable for solving large-scale separable convex problems.
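Schematically (a simplified sketch of the construction, not the paper's exact assumptions): for a separable problem
\[
\min_{x_i\in X_i}\ \sum_{i=1}^m f_i(x_i)\quad\text{s.t.}\quad \sum_{i=1}^m A_i x_i=b,
\]
the barrier-augmented dual function with smoothing parameter $\mu>0$ is
\[
g_\mu(\lambda)\;=\;-\lambda^\top b+\sum_{i=1}^m\min_{x_i}\big\{f_i(x_i)+\lambda^\top A_i x_i+\mu\,\varphi_i(x_i)\big\},
\]
where $\varphi_i$ is a self-concordant barrier for $X_i$. Self-concordance of this family (up to sign, since $g_\mu$ is concave) is what licenses Newton steps on the dual, and the inner minimizations are independent, which is the source of the parallelism.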

5.
楼烨  高越天 《运筹学学报》2012,16(4):112-124
A large number of papers have been published studying low-complexity barrier-function methods for various classes of convex programs. Using the theory of self-concordance, we construct corresponding logarithmic barrier functions for several different classes of convex programming problems, and prove via two lemmas that the logarithmic barrier functions for these problems are all self-concordant. Following the work of Nesterov and Nemirovsky, we then show that the interior-point algorithms for the given problems have polynomial complexity.
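A representative instance of the kind of result such lemmas give (a standard fact, included here only for orientation): if $g$ is a convex quadratic function, then
\[
\varphi(x) \;=\; -\log\big(-g(x)\big)
\]
is a self-concordant barrier with parameter $1$ on $\{x: g(x)<0\}$; summing such terms over the $m$ constraints of a convex quadratically constrained program gives an $m$-self-concordant barrier, and the polynomial iteration bound then follows from the Nesterov-Nemirovsky theory.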

6.
Journal of Optimization Theory and Applications - Many convex optimization problems have structured objective functions written as a sum of functions with different oracle types (e.g., full...

7.
Optimization, 2012, 61(4): 627-643
Recently, the so-called second-order cone optimization problem has received much attention, because it has many applications and can, in theory, be solved efficiently by interior-point methods. In this note we treat duality for second-order cone optimization problems and, in particular, whether a nonzero duality gap can be obtained when casting a convex quadratically constrained optimization problem as a second-order cone optimization problem. Furthermore, we also discuss the p-order cone optimization problem, a natural generalization of the second-order case. Specifically, we suggest a new self-concordant barrier for the p-order cone optimization problem.
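For reference (standard definitions; the new barrier proposed in the note is not reproduced here), the second-order (Lorentz) cone and its p-order generalization are
\[
\mathcal{K}_2=\{(t,x)\in\mathbb{R}\times\mathbb{R}^n: t\ge\|x\|_2\},
\qquad
\mathcal{K}_p=\{(t,x): t\ge\|x\|_p\},\quad p\ge 1,
\]
and the classical barrier for $\mathcal{K}_2$ is $-\log\big(t^2-\|x\|_2^2\big)$, a self-concordant barrier with parameter $2$. A convex quadratic constraint can always be rewritten as a second-order cone constraint, and it is the duality behaviour of this reformulation that the note examines.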

8.
General successive convex relaxation methods (SCRMs) can be used to compute the convex hull of any compact set, in a Euclidean space, described by a system of quadratic inequalities and a compact convex set. Linear complementarity problems (LCPs) form an interesting and rich class of structured nonconvex optimization problems. In this paper, we study a few of the specialized lift-and-project methods and some of the possible ways of applying the general SCRMs to LCPs and related problems.
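To make the connection concrete (a standard reformulation, not specific to this paper): the LCP defined by $M\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^n$ asks for $x$ with
\[
x\ge 0,\qquad Mx+q\ge 0,\qquad x^\top(Mx+q)\le 0,
\]
where the last inequality, combined with the first two, forces $x^\top(Mx+q)=0$. The solution set is thus described by linear inequalities plus a single, generally nonconvex, quadratic inequality, which is exactly the type of quadratically constrained set that successive convex relaxation methods target.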

9.
Recently a number of papers were written that present low-complexity interior-point methods for different classes of convex programs. The goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant. Hence the polynomial complexity results for these convex programs can be derived from the theory of Nesterov and Nemirovsky on self-concordant barrier functions. We also show that the approach can be applied to some other known classes of convex programs. This author's research was supported by a research grant from SHELL. On leave from the Eötvös University, Budapest, Hungary. This author's research was partially supported by OTKA No. 2116.

10.
A progressive hedging method incorporating self-concordant barriers for solving multistage stochastic programs was recently proposed by Zhao [G. Zhao, A Lagrangian dual method with self-concordant barrier for multistage stochastic convex nonlinear programming, Math. Program. 102 (2005) 1-24]. The method relaxes the nonanticipativity constraints by the Lagrangian dual approach and smooths the Lagrangian dual function by self-concordant barrier functions. The convergence and polynomial-time complexity of the method have been established. Although the analysis is done for stochastic convex programming, the method can be applied to the nonconvex situation. In this paper we discuss some details of the implementation of this method, including when to terminate the solution of the specially structured unconstrained subproblems and how to perform an effective line search for a new dual estimate. In particular, the method is used to solve some multistage stochastic nonlinear test problems; the collection of test problems also contains two practical examples from the literature. We report the results of our preliminary numerical experiments. For comparison, we also solve all test problems by the well-known progressive hedging method.

11.
A class of nonconvex minimization problems can be classified as hidden convex minimization problems. A nonconvex minimization problem is called a hidden convex minimization problem if there exists an equivalent transformation under which it becomes a convex minimization problem. Sufficient conditions that are independent of transformations are derived in this paper for identifying such a class of seemingly nonconvex minimization problems that are equivalent to convex minimization problems. Thus, global optimality can be achieved for this class of hidden convex optimization problems by using local search methods. The results presented in this paper extend the reach of convex minimization by identifying problems that are equivalent to convex programs despite having a nonconvex representation.
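A classical illustration of hidden convexity (an example only, not one of the paper's sufficient conditions): a posynomial objective is nonconvex in the original variables, but under the change of variables $x_j=e^{y_j}$,
\[
\min_{x>0}\ \sum_k c_k\prod_j x_j^{a_{kj}}\ \ (c_k>0)
\qquad\Longleftrightarrow\qquad
\min_y\ \sum_k c_k\,e^{a_k^\top y},
\]
and the transformed problem is convex, so any local minimizer found by a local search method is a global minimizer of the original problem.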

12.
We show the importance of exploiting the complementary convex structure for efficiently solving a wide class of specially structured nonconvex global optimization problems. Roughly speaking, a specific feature of these problems is that their nonconvex nucleus can be transformed into a complementary convex structure, which can then be shifted to a subspace of much lower dimension than the original underlying space. This approach leads to quite efficient algorithms for many problems of practical interest, including linear and convex multiplicative programming problems, concave minimization problems with few nonlinear variables, and bilevel linear optimization problems.

13.
In this paper we develop a new affine-invariant primal–dual subgradient method for nonsmooth convex optimization problems. This scheme is based on a self-concordant barrier for the basic feasible set. It is suitable for finding approximate solutions with a certain relative accuracy. We discuss some applications of this technique, including the fractional covering problem, the maximal concurrent flow problem, semidefinite relaxations, and nonlinear online optimization. For all these problems, the rate of convergence of our method does not depend on the problem’s data.

14.
We propose and study the use of convex constrained optimization techniques for solving large-scale Generalized Sylvester Equations (GSEs). To that end, we adapt recently developed globalized variants of the projected gradient method to a convex constrained least-squares approach for solving GSEs. We demonstrate the effectiveness of our approach on two different applications. First, we apply it to solve the GSE that appears after applying left and right preconditioning schemes to the linear problems associated with the discretization of some partial differential equations. Second, we apply the new approach, combined with a Tikhonov regularization term, to restore some blurred and highly noisy images.
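As an illustration of the computational core (a minimal sketch only: a plain fixed-step projected gradient method for the single-term equation A X B = C with a box constraint, chosen for simplicity; the paper studies globalized variants and more general Sylvester-type operators):

```python
import numpy as np

def projected_gradient_sylvester(A, B, C, lb, ub, iters=500):
    """Minimize 0.5 * ||A X B - C||_F^2 subject to lb <= X <= ub
    with a fixed-step projected gradient method (illustrative sketch)."""
    m, n = A.shape[1], B.shape[0]
    X = np.clip(np.zeros((m, n)), lb, ub)              # feasible starting point
    # 1/L step, with L an upper bound on the gradient's Lipschitz constant.
    L = (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)) ** 2
    step = 1.0 / L
    for _ in range(iters):
        R = A @ X @ B - C                              # residual
        G = A.T @ R @ B.T                              # gradient of the objective
        X = np.clip(X - step * G, lb, ub)              # gradient step + box projection
    return X

# Tiny usage example with random data (purely illustrative).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((5, 6))
C = rng.standard_normal((6, 6))
X = projected_gradient_sylvester(A, B, C, lb=-1.0, ub=1.0)
print(np.linalg.norm(A @ X @ B - C, "fro"))
```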

15.
In this paper we develop convex relaxations of chance-constrained optimization problems in order to obtain lower bounds on the optimal value. Unlike existing statistical lower bounding techniques, our approach is designed to provide deterministic lower bounds. We show that a version of the proposed scheme leads to a tractable convex relaxation when the chance constraint function is affine with respect to the underlying random vector and the random vector has independent components. We also propose an iterative improvement scheme for refining the bounds.
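For orientation, the generic problem class (standard form, not the paper's specific relaxation) is
\[
\min_{x\in X}\ c^\top x
\quad\text{s.t.}\quad
\mathbb{P}\big\{G(x,\xi)\le 0\big\}\ \ge\ 1-\alpha,
\]
where $\xi$ is a random vector and $\alpha\in(0,1)$ a prescribed risk level. The probabilistic constraint is nonconvex in general, and the contribution above is a convex relaxation of it whose optimal value is a deterministic lower bound on the true optimal value.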

16.
We present a new duality theory for convex optimization problems and prove that the geometric duality used by Scott and Jefferson in various papers over the last quarter of a century is a special case of it. Moreover, weaker sufficient conditions to achieve strong duality are considered and optimality conditions are derived. Next, we apply our approach to some problems considered by Scott and Jefferson, determining their duals. We give weaker sufficient conditions to achieve strong duality and the corresponding optimality conditions. Finally, posynomial geometric programming is also viewed as a particular case of the duality approach that we present. Communicated by V. F. Demyanov. The first author was supported in part by Gottlieb Daimler and Karl Benz Stiftung 02-48/99. The second author was supported in part by Karl und Ruth Mayer Stiftung.

17.
Modelling convex optimization in the face of data uncertainty often gives rise to families of parametric convex optimization problems. This motivates us to present, in this paper, a duality framework for a family of parametric convex optimization problems. By employing conjugate analysis, we present robust duality for the family of parametric problems by establishing strong duality between the associated dual pair. We first show that robust duality holds whenever a constraint qualification holds. We then show that this constraint qualification is also necessary for robust duality, in the sense that the constraint qualification holds if and only if robust duality holds for every linear perturbation of the objective function. As an application, we obtain a robust duality theorem for best approximation problems with constraint data uncertainty under a strict feasibility condition.

18.
We present a heuristic approach for convex optimization problems containing different types of sparsity constraints. Whenever the support is required to belong to a matroid, we propose an exchange heuristic that adapts the support in every iteration. The entering non-zero is determined by considering the dual multipliers of the bounds on the variables fixed to zero. While this algorithm is purely heuristic, we show experimentally that it often finds near-optimal solutions for cardinality-constrained knapsack problems and for sparse regression problems.
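A toy version of the exchange idea for cardinality-constrained least squares (the uniform-matroid case), using the gradient of the full objective on the zeroed variables as a stand-in for the dual multipliers mentioned above; this is only a sketch of the exchange mechanism, not the algorithm from the paper:

```python
import numpy as np

def exchange_heuristic(A, b, k, iters=50):
    """Heuristically minimize ||A x - b||^2 subject to ||x||_0 <= k by
    swapping one index out of / into the support in each iteration."""
    n = A.shape[1]

    def solve_restricted(S):
        # Solve the least-squares problem restricted to the support S.
        xs, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
        x = np.zeros(n)
        x[S] = xs
        return x, float(np.sum((A @ x - b) ** 2))

    support = list(range(k))                 # arbitrary initial support of size k
    x, best = solve_restricted(support)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the full objective
        outside = [j for j in range(n) if j not in support]
        enter = max(outside, key=lambda j: abs(grad[j]))   # most promising zero variable
        leave = min(support, key=lambda j: abs(x[j]))      # least useful support element
        trial = [enter if j == leave else j for j in support]
        x_new, val = solve_restricted(trial)
        if val < best - 1e-12:               # accept only improving exchanges
            support, x, best = trial, x_new, val
        else:
            break                            # no improving exchange found: stop
    return x

# Tiny usage example on synthetic sparse data (purely illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(exchange_heuristic(A, b, k=3))[0])
```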

19.
In this paper we present penalty and barrier methods for solving general convex semidefinite programming problems. More precisely, the constraint set is described by a convex operator that takes its values in the cone of negative semidefinite symmetric matrices. This class of methods is an extension of penalty and barrier methods for convex optimization to this setting. We provide implementable stopping rules and prove the convergence of the primal and dual paths obtained by these methods under minimal assumptions. The two-parameter approach for penalty methods is also extended. As in usual convex programming, we prove that after a finite number of steps all iterates will be feasible.
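The problem class in question and a prototypical barrier for it (standard setting, stated only for orientation):
\[
\min_x\ f(x)\quad\text{s.t.}\quad G(x)\preceq 0,
\]
where $f$ is convex and $G$ is an operator, convex with respect to the positive semidefinite cone, taking values in the symmetric matrices. A typical barrier term for this constraint is $-\mu\,\log\det\big(-G(x)\big)$, while penalty counterparts instead penalize the amount by which $G(x)$ fails to be negative semidefinite.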

20.
A new decomposition optimization algorithm, called path-following gradient-based decomposition, is proposed to solve separable convex optimization problems. Unlike path-following Newton methods considered in the literature, this algorithm does not require any smoothness assumption on the objective function. This allows us to handle more general classes of problems arising in many real applications than in the path-following Newton methods. The new algorithm is a combination of three techniques, namely smoothing, Lagrangian decomposition and path-following gradient framework. The algorithm decomposes the original problem into smaller subproblems by using dual decomposition and smoothing via self-concordant barriers, updates the dual variables using a path-following gradient method and allows one to solve the subproblems in parallel. Moreover, compared to augmented Lagrangian approaches, our algorithmic parameters are updated automatically without any tuning strategy. We prove the global convergence of the new algorithm and analyze its convergence rate. Then, we modify the proposed algorithm by applying Nesterov’s accelerating scheme to get a new variant which has a better convergence rate than the first algorithm. Finally, we present preliminary numerical tests that confirm the theoretical development.
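Schematically (a simplified view of the update, not the paper's exact step-size rules): for a separable problem with coupling constraint $\sum_i A_i x_i = b$ (as in item 4 above), writing $x_i^{*}(\lambda,\mu)$ for the solution of the $i$-th barrier-smoothed subproblem, the dual gradient and the path-following gradient step are
\[
\nabla g_\mu(\lambda)\;=\;\sum_i A_i\,x_i^{*}(\lambda,\mu)-b,
\qquad
\lambda^{+}\;=\;\lambda+\alpha_\mu\,\nabla g_\mu(\lambda),
\]
with the subproblems solved independently (hence in parallel), the step size $\alpha_\mu$ tied to the smoothing parameter, and $\mu$ driven towards zero along the path; the accelerated variant replaces the plain gradient step by Nesterov's scheme.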
