Similar Documents
20 similar documents found (search time: 31 ms).
1.
We present in this paper alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss–Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009), where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.
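A minimal sketch (ours, not the paper's implementation) of the alternating-linearization pattern: each half-iteration linearizes one of the two functions and solves a proximal subproblem for the other. The concrete choices $f(x)=\frac{1}{2}\|Ax-b\|^2$ (smooth) and $g(x)=\lambda\|x\|_1$, and the step size $\mu$, are illustrative assumptions.

```python
# Illustrative sketch of alternating linearization for min f(x) + g(x),
# with f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1 (assumed, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)); b = rng.standard_normal(40)
lam = 0.1
mu = 1.0 / np.linalg.norm(A, 2) ** 2        # step ~ 1/L, L = ||A||_2^2

def grad_f(x):                              # gradient of the smooth term
    return A.T @ (A @ x - b)

def prox_g(v, t):                           # soft-thresholding: prox of t*lam*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

# prox of mu*f has the closed form (A^T A + I/mu)^{-1} (A^T b + v/mu)
M = np.linalg.inv(A.T @ A + np.eye(20) / mu)

x = np.zeros(20)
for k in range(500):
    y = prox_g(x - mu * grad_f(x), mu)      # linearize f, keep g exact
    gy = lam * np.sign(y)                   # a subgradient of g at y
    x = M @ (A.T @ b + (y - mu * gy) / mu)  # linearize g, keep f exact
```

Each pass costs one gradient of $f$ plus one proximal step per function, which is consistent with the abstract's remark that acceleration adds little per-iteration effort.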

2.
We show the existence of sets with $n$ points ($n\ge 4$) for which every convex decomposition contains more than $\frac{35}{32}n-\frac{3}{2}$ polygons, which refutes the conjecture that for every set of $n$ points there is a convex decomposition with at most $n+C$ polygons. For sets having exactly three extreme points we show that more than $n+\sqrt{2(n-3)}-4$ polygons may be necessary to form a convex decomposition.

3.
Let $I$ be an interval. We consider the non-monotonic convex self-mappings $f:I\rightarrow I$ such that $f^2$ is convex. They have the property that all iterates $f^n$ are convex. In the class of these mappings we study three families of functions possessing convex iterative roots. A function $f$ is said to be iteratively convex if $f$ possesses convex iterative roots of all orders. A mapping $f$ is said to be dyadically convex if for every $n\ge 2$ there exists a convex iterative root $f^{1/2^n}$ of order $2^n$ and the sequence $\{f^{1/2^n}\}$ satisfies the compatibility condition $f^{1/2^n}\circ f^{1/2^n}= f^{1/2^{n-1}}$. A function $f$ is said to be flowly convex if it possesses a convex semi-flow of $f$, that is, a family of convex functions $\{f^t,\ t>0\}$ such that $f^t\circ f^s=f^{t+s}$ for $t,s>0$ and $f^1=f$. We show the relations among these three types of convexity and determine all convex iterative roots of non-monotonic functions. A simple illustration of the definitions is given below.
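As an illustration of the definitions (our example; the paper itself treats the non-monotonic case), take $f(x)=x^2$ on $[0,\infty)$. The family $f^t(x)=x^{2^t}$ is a convex semi-flow through $f$ (each $f^t$ is convex since $2^t>1$):

\[
(f^{t}\circ f^{s})(x) = \bigl(x^{2^{s}}\bigr)^{2^{t}} = x^{2^{t+s}} = f^{t+s}(x), \qquad f^{1}=f,
\]

and the dyadic roots $f^{1/2^n}(x)=x^{2^{1/2^n}}$ satisfy the compatibility condition:

\[
\bigl(f^{1/2^{n}}\circ f^{1/2^{n}}\bigr)(x) = x^{2^{2/2^{n}}} = x^{2^{1/2^{n-1}}} = f^{1/2^{n-1}}(x).
\]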

4.
McCormick (Math Prog 10(1):147–175, 1976) provides the framework for convex/concave relaxations of factorable functions, via rules for the product of functions and compositions of the form $F\circ f$, where $F$ is a univariate function. Herein, the composition theorem is generalized to allow multivariate outer functions $F$, and theory for the propagation of subgradients is presented. The generalization interprets the McCormick relaxation approach as a decomposition method for the auxiliary variable method. In addition to extending the framework, the new result provides a tool for the proof of relaxations of specific functions. Moreover, a direct consequence is an improved relaxation for the product of two functions, at least as tight as McCormick's result, and often tighter. The result also allows the direct relaxation of multilinear products of functions. Furthermore, the composition result is applied to obtain improved convex underestimators for the minimum/maximum and the division of two functions for which current relaxations are often weak. These cases can be extended to allow composition of a variety of functions for which relaxations have been proposed.
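For orientation, a sketch of the classical McCormick product rule that the paper's composition theorem generalizes and tightens. The bounds and evaluation point below are illustrative assumptions.

```python
# Minimal sketch of McCormick's relaxation for a product w = x*y on a box
# [xL, xU] x [yL, yU] (the classical rule; not the paper's improved relaxation).
def mccormick_product(x, y, xL, xU, yL, yU):
    """Return (convex underestimator, concave overestimator) of x*y at (x, y)."""
    under = max(xL * y + x * yL - xL * yL,   # two supporting planes from below
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,    # two supporting planes from above
               xL * y + x * yU - xL * yU)
    return under, over

# Example on [-1, 2] x [0, 3]: the relaxation sandwiches the true product.
lo, hi = mccormick_product(0.5, 1.5, -1.0, 2.0, 0.0, 3.0)
assert lo <= 0.5 * 1.5 <= hi
```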

5.
We consider an inverse quadratic programming (QP) problem in which the parameters in the objective function of a given QP problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with a positive semidefinite cone constraint; its dual is a linearly constrained semismoothly differentiable (SC1) convex programming problem with fewer variables than the original one. We demonstrate the global convergence of the augmented Lagrangian method for the dual problem and prove that the convergence rate of the primal iterates generated by the augmented Lagrangian method is proportional to $1/r$, while the rate of the multiplier iterates is proportional to $1/\sqrt{r}$, where $r$ is the penalty parameter in the augmented Lagrangian. As the objective function of the dual problem is an SC1 function involving the projection operator onto the cone of symmetric positive semidefinite matrices, the analysis requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and properties of the projection operator in the symmetric-matrix space. Furthermore, the semismooth Newton method with Armijo line search is applied to solve the subproblems in the augmented Lagrangian approach, and is proven to have global convergence and a local quadratic rate. Finally, numerical results obtained with the augmented Lagrangian method are reported.

6.
Since the pioneering contribution of Dantzig and Wolfe, the decomposition approach using a pricing mechanism has been developed for a wide class of mathematical programs. For convex programs a linear space of Lagrangian multipliers is enough to define price functions. For general mathematical programs the price functions could be defined by using a subclass of nondecreasing functions. However, the space of nondecreasing functions is no longer finite dimensional. In this paper we consider a specific nonconvex optimization problem $\min\{f(x) : h_j(x) \le g(x),\ j=1,\ldots,m,\ x\in X\}$, where $f(\cdot)$, $h_j(\cdot)$ and $g(\cdot)$ are finite convex functions and $X$ is a closed convex set. We generalize optimal price functions for this problem in such a way that the parameters of the generalized price functions are defined in a finite-dimensional space. Combining convex duality and a nonconvex duality we can develop a decomposition method to find a globally optimal solution. This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday.

7.
Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function of expected values, i.e., a composition of two expected-value functions: the problem $\min_x \mathbf{E}_v\left[ f_v\big(\mathbf{E}_w[g_w(x)]\big)\right]$. In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of the quasi-gradient method. SCGD updates the solution based on noisy sample gradients of $f_v, g_w$ and uses an auxiliary variable to track the unknown quantity $\mathbf{E}_w\left[g_w(x)\right]$. We prove that SCGD converges almost surely to an optimal solution for convex optimization problems, as long as such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, SCGD achieves a convergence rate of $\mathcal{O}(k^{-1/4})$ in the general case and $\mathcal{O}(k^{-2/3})$ in the strongly convex case, after taking $k$ samples. For smooth convex problems, SCGD can be accelerated to converge at a rate of $\mathcal{O}(k^{-2/7})$ in the general case and $\mathcal{O}(k^{-4/5})$ in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide a convergence rate analysis. The stochastic setting in which one wants to optimize compositions of expected-value functions is very common in practice; the proposed SCGD methods find wide applications in learning, estimation, dynamic programming, etc.
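A minimal sketch of the two-timescale SCGD update described above. The quadratic choices of $f$ and $g$, the noise model, and the step-size exponents are illustrative assumptions, not the paper's test problems.

```python
# Sketch of SCGD for min_x E_v[ f_v( E_w[ g_w(x) ] ) ] with toy data (assumed).
import numpy as np

rng = np.random.default_rng(1)
d = 5
x = np.zeros(d)
y = np.zeros(d)                      # auxiliary variable tracking E_w[g_w(x)]

def sample_g(x):                     # noisy inner map: g_w(x) = x + noise
    return x + 0.1 * rng.standard_normal(d)

def sample_jac_g(x):                 # its Jacobian (identity for this toy g)
    return np.eye(d)

def sample_grad_f(y):                # noisy gradient of f(y) = 0.5*||y - 1||^2
    return (y - 1.0) + 0.1 * rng.standard_normal(d)

for k in range(1, 2001):
    alpha, beta = k ** -0.75, k ** -0.5       # two time scales: beta >> alpha
    y = (1 - beta) * y + beta * sample_g(x)   # fast iteration: track E_w[g_w(x)]
    x = x - alpha * sample_jac_g(x).T @ sample_grad_f(y)  # slow iteration
# x approaches the minimizer (the all-ones vector) of the composition.
```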

8.
We consider the convex optimization problem $\min_{\mathbf{x}} \{f(\mathbf{x}): g_j(\mathbf{x})\leq 0,\ j=1,\ldots,m\}$ where $f$ is convex, the feasible set $\mathbf{K}$ is convex and Slater's condition holds, but the functions $g_j$ are not necessarily convex. We show that for any representation of $\mathbf{K}$ that satisfies a mild nondegeneracy assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point and conversely every KKT point is a minimizer. That is, the KKT optimality conditions are necessary and sufficient as in convex programming, where one assumes that the $g_j$ are convex. So in convex optimization, and as far as one is concerned with KKT points, what really matters is the geometry of $\mathbf{K}$ and not so much its representation.
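For reference, and assuming $f$ and the $g_j$ are differentiable, the KKT conditions in question read:

\[
\nabla f(x^{*}) + \sum_{j=1}^{m}\lambda_{j}\,\nabla g_{j}(x^{*}) = 0, \qquad \lambda_{j}\,g_{j}(x^{*}) = 0, \quad \lambda_{j}\ge 0, \quad g_{j}(x^{*})\le 0, \quad j=1,\ldots,m.
\]

The paper's point is that, under the nondegeneracy assumption, these conditions characterize the minimizers even when the $g_j$ are nonconvex, provided the set $\mathbf{K}$ they describe is convex.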

9.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is $O(1/k^2)$, while the convergence rate of the classical augmented Lagrangian method (ALM) is $O(1/k)$. Numerical experiments on the linearly constrained $l_1$–$l_2$ minimization problem are presented to demonstrate the effectiveness of AALM.
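A sketch of the generic acceleration pattern for the augmented Lagrangian method: Nesterov-style extrapolation applied to the multiplier sequence. This is our illustration of the idea, not necessarily the paper's exact AALM scheme; the quadratic objective and random data are assumptions.

```python
# Accelerated ALM sketch for min 0.5*||x||^2 s.t. Ax = b (toy instance).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 10)); b = rng.standard_normal(5)
beta = 1.0
K = np.linalg.inv(np.eye(10) + beta * A.T @ A)   # x-step is a linear solve here

lam = lam_hat = np.zeros(5)
t = 1.0
for k in range(200):
    x = K @ (A.T @ lam_hat + beta * A.T @ b)     # minimize augmented Lagrangian in x
    lam_new = lam_hat - beta * (A @ x - b)       # multiplier ascent (classical ALM step)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    lam_hat = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)  # extrapolation
    lam, t = lam_new, t_new
print(np.linalg.norm(A @ x - b))                 # feasibility residual -> 0
```

Dropping the extrapolation line (setting lam_hat = lam_new) recovers the classical ALM, which is what the $O(1/k)$ versus $O(1/k^2)$ comparison in the abstract contrasts.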

10.
For strictly increasing concave functions $\varphi$ whose inverse functions are log-concave, the $\varphi$-Brunn–Minkowski inequality for planar convex bodies is established. It is shown that for convex bodies in $\mathbb{R}^n$ the $\varphi$-Brunn–Minkowski inequality is equivalent to the $\varphi$-Minkowski mixed volume inequalities.

11.
Necessary conditions are established for the continuity of finite sums of ridge functions defined on convex subsets $E$ of the space $\mathbb{R}^n$. It is shown that, under some constraints imposed on the summed functions $\varphi_i$, in the case when $E$ is open the continuity of the sum implies the continuity of all $\varphi_i$. In the case when $E$ is a convex body with nonsmooth boundary, a logarithmic estimate is obtained for the growth of the functions $\varphi_i$ in neighborhoods of the boundary points of their domains of definition. In addition, an example is constructed that demonstrates the accuracy of the estimate obtained.

12.
A parallel Nesterov algorithm for the unconstrained minimization of large-scale partially separable convex functions is presented. The problem is first transformed into a linearly constrained minimization of a separable function. A fast projected gradient (Nesterov) method is then applied to obtain a decomposition method with an $O(1/k^2)$ rate of convergence (where $k$ is the iteration number). Preliminary numerical experiments show the efficiency of the proposed approach. A generic sketch of the underlying fast projected-gradient iteration is given below.
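A sketch (ours, under illustrative assumptions) of the fast projected-gradient pattern the method builds on: Nesterov extrapolation combined with projection onto linear linking constraints $Cx = d$. The separable quadratic objective and random data are placeholders.

```python
# Fast projected gradient (Nesterov) sketch on {x : Cx = d} (toy instance).
import numpy as np

rng = np.random.default_rng(3)
n = 20
Q = np.diag(rng.uniform(1.0, 5.0, n))            # separable convex quadratic objective
c = rng.standard_normal(n)
C = rng.standard_normal((4, n)); d_vec = rng.standard_normal(4)
P = np.eye(n) - C.T @ np.linalg.inv(C @ C.T) @ C # orthogonal projector onto null(C)
x_feas = C.T @ np.linalg.solve(C @ C.T, d_vec)   # one feasible point

def project(v):                                  # projection onto {x : Cx = d}
    return x_feas + P @ (v - x_feas)

L = np.max(np.diag(Q))                           # Lipschitz constant of the gradient
x = y = project(np.zeros(n)); t = 1.0
for k in range(300):
    x_new = project(y - (Q @ y + c) / L)         # projected gradient step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
    x, t = x_new, t_new
```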

13.
In this paper we exploit a slight variant of a result previously proved in Locatelli and Schoen (Math Program 144:65–91, 2014) to define a procedure which delivers the convex envelope of some bivariate functions over polytopes. The procedure is based on the solution of a KKT system and simplifies the derivation of the convex envelope with respect to previously proposed techniques. The procedure is applied to derive the convex envelope of the bilinear function $xy$ over any polytope, and the convex envelope of the functions $x^n y^m$ over boxes.
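For orientation (a classical fact due to Al-Khayyal and Falk, not a result of this paper): when the polytope is the box $[x_L,x_U]\times[y_L,y_U]$, the convex envelope of $xy$ is

\[
\operatorname{vex}[xy](x,y) = \max\bigl\{\, x_L y + x y_L - x_L y_L,\ \ x_U y + x y_U - x_U y_U \,\bigr\}.
\]

Over general polytopes the envelope is no longer given by these two planes, which is where the paper's KKT-based procedure comes in.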

14.
We prove the Mejia-Pommerenke conjecture that the Taylor coefficients of hyperbolically convex functions in the disk behave like $O(\log^{-2}(n)/n)$ as $n \to \infty$, assuming that the image of the unit disk under such functions is a domain of bounded boundary rotation. Moreover, we obtain some asymptotically sharp estimates for the integral means of the derivatives of such functions and consider an example of a hyperbolically convex function that maps the unit disk onto a domain of infinite boundary rotation.

15.
We study finitely generated submodules in the module $P$ of entire functions bounded by a system of $\rho$-trigonometrically convex weights majorized by a given $\rho$-trigonometrically convex function. Sufficient conditions for the ampleness of a finitely generated submodule are obtained in terms of the relative position of the zeros of its generators. Using these conditions, we prove that each ample submodule in $P$ is generated by two (possibly coinciding) functions.

16.
We consider convex relaxations for the problem of minimizing a (possibly nonconvex) quadratic objective subject to linear and (possibly nonconvex) quadratic constraints. Let $\mathcal{F}$ denote the feasible region for the linear constraints. We first show that replacing the quadratic objective and constraint functions with their convex lower envelopes on $\mathcal{F}$ is dominated by an alternative methodology based on convexifying the range of the quadratic form $\binom{1}{x}\binom{1}{x}^T$ for $x\in\mathcal{F}$. We next show that the use of "$\alpha$BB" underestimators as computable estimates of convex lower envelopes is dominated by a relaxation of the convex hull of the quadratic form that imposes semidefiniteness and linear constraints on diagonal terms. Finally, we show that the use of a large class of D.C. ("difference of convex") underestimators is dominated by a relaxation that combines semidefiniteness with RLT constraints.
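The lifting behind the quadratic-form convexification is the standard one (stated here for orientation, not quoted from the paper): introduce a matrix variable $X$ for $xx^T$, so that

\[
\binom{1}{x}\binom{1}{x}^{T} = \begin{pmatrix} 1 & x^{T} \\ x & X \end{pmatrix}, \qquad X \succeq x x^{T} \;\Longleftrightarrow\; \begin{pmatrix} 1 & x^{T} \\ x & X \end{pmatrix} \succeq 0,
\]

where the equivalence is the Schur complement. Relaxing $X = xx^T$ to this semidefinite condition turns every quadratic function of $x$ into a linear function of $(x, X)$, which is what makes the relaxation tractable.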

17.
W. Hare, Optimization Letters, 2017, 11(7):1217–1227
Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based DFO methods, in which an approximation of the objective function is used to guide the optimization algorithm. Proofs of convergence for such methods often rely on the assumption that the approximations form fully linear models, an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max-function or the $\ell_1$ norm). In this paper, we examine the error bounds resulting from the composition of a convex lower semi-continuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.
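A small self-contained illustration (ours, not the paper's) of the fully linear model concept: a linear model of a smooth $f$ built from $n+1$ samples inside a ball of radius $\Delta$, whose value error shrinks like $O(\Delta^2)$ as $\Delta \to 0$. The test function and sample geometry are assumptions.

```python
# Fully linear model sketch: forward-difference linear model of a smooth f.
import numpy as np

def f(x):                                    # smooth test function (assumed)
    return np.sin(x[0]) + 0.5 * x[1] ** 2

def linear_model(f, center, delta):
    """Fit m(x) = c + g.(x - center) from center and n coordinate samples."""
    c = f(center)
    g = np.array([(f(center + delta * e) - c) / delta
                  for e in np.eye(len(center))])   # forward-difference gradient
    return c, g

center = np.array([0.3, -0.2])
for delta in (0.1, 0.01, 0.001):
    c, g = linear_model(f, center, delta)
    x = center + 0.5 * delta * np.array([1.0, -1.0])   # a point inside the ball
    err = abs(f(x) - (c + g @ (x - center)))
    print(delta, err)                         # value error decays roughly like delta^2
```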

18.
We investigate how well the graph of a bilinear function $b:[0,1]^n\rightarrow\mathbb{R}$ can be approximated by its McCormick relaxation. In particular, we are interested in the smallest number $c$ such that the difference between the concave upper bounding and convex lower bounding functions obtained from the McCormick relaxation approach is at most $c$ times the difference between the concave and convex envelopes. Answering a question of Luedtke, Namazifar and Linderoth, we show that this factor $c$ cannot be bounded by a constant independent of $n$. More precisely, we show that for a random bilinear function $b$ we have asymptotically almost surely $c\geqslant\sqrt{n}/4$. On the other hand, we prove that $c\leqslant 600\sqrt{n}$, which improves the linear upper bound proved by Luedtke, Namazifar and Linderoth. In addition, we present an alternative proof of a result of Misener, Smadbeck and Floudas characterizing the functions $b$ for which the McCormick relaxation equals the convex hull.

19.
In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints. Classical alternating direction methods usually assume that both convex functions have relatively easy proximal mappings. However, many problems arising from statistics, image processing and other fields have the structure that, while one of the two functions has an easy proximal mapping, the other is convex and smooth but does not have an easy proximal mapping; the classical alternating direction methods therefore cannot be applied. To deal with this difficulty, we propose an alternating direction method based on extragradients. Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an $\epsilon$-optimal solution within $O(1/\epsilon)$ iterations. We apply the proposed method to solve a new statistical model called fused logistic regression. Our numerical experiments show that the proposed method performs very well on the test problems. We also test the performance of the proposed method on the lasso problem arising from statistics and compare the results with several existing efficient solvers for this problem; the results are very encouraging.
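A minimal sketch of the extragradient idea the method builds on: a predictor step using the gradient at the current point, then a corrector step using the gradient at the predicted point. Shown here for plain smooth minimization (our toy instance); the ADMM-style embedding into the linking-constraint setting is omitted.

```python
# Extragradient sketch for min f(x) with f(x) = 0.5*x'Qx - q'x (assumed).
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((8, 8))
Q = M.T @ M + np.eye(8)                      # positive definite -> smooth convex f
q = rng.standard_normal(8)
grad = lambda x: Q @ x - q                   # Lipschitz gradient, L = ||Q||_2
tau = 1.0 / np.linalg.norm(Q, 2)             # step size below 1/L

x = np.zeros(8)
for k in range(400):
    x_bar = x - tau * grad(x)                # predictor: the "extra" gradient point
    x = x - tau * grad(x_bar)                # corrector: gradient evaluated at x_bar
print(np.linalg.norm(grad(x)))               # stationarity residual -> 0
```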

20.
We consider the class Co(p) of all conformal maps of the unit disk onto the exterior of a bounded convex set. We prove that the triangle mappings, i.e., the functions that map the unit disk onto the exterior of a triangle, are among the extreme points of the closed convex hull of Co(p). Moreover, we prove a conjecture on the closed convex hull of Co(p) for all p ∈ (0, 1) which had partially been proved by the authors for some values of p ∈ (0, 1).
