Subscription full text: 9 articles
Free: 0 articles
Subject: Mathematics (9 articles)
By year: 2021 (2), 2017 (3), 2015 (1), 2014 (2), 2009 (1)
9 results found (search time: 31 ms)
1.
In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has a Lipschitz continuous gradient, then we prove that our method obtains an $\epsilon$-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives, we show that the new method is faster than methods based on full-gradient information. An analysis of the rate of convergence in probability is also provided. For strongly convex functions, our method converges linearly. Extensive numerical tests confirm that on very large problems our method is much more numerically efficient than methods based on full-gradient information.
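To illustrate the kind of update this abstract refers to, here is a minimal sketch (made-up data, smooth case only, with the nonsmooth composite term omitted) of random two-coordinate descent on a quadratic under a single linear coupling constraint; moving along e_i - e_j touches only two coordinates while keeping the constraint satisfied:

import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # positive definite -> strongly convex f
b = rng.standard_normal(n)
c = 1.0                           # linear constraint: sum(x) = c

x = np.full(n, c / n)             # feasible starting point
for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)   # random coordinate pair
    gi = A[i] @ x - b[i]          # cheap coordinate derivatives: O(n) each,
    gj = A[j] @ x - b[j]          # versus O(n^2) for the full gradient
    Lij = A[i, i] + A[j, j] - 2.0 * A[i, j]       # curvature along e_i - e_j
    t = (gi - gj) / Lij           # exact minimizer along the pair direction
    x[i] -= t                     # the update keeps sum(x) = c, so the
    x[j] += t                     # linear constraint stays satisfied
print("constraint residual:", abs(x.sum() - c))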
2.
Parametric convex programming has received a lot of attention, since it has many applications in chemical engineering, control engineering, signal processing, etc. Further, inverse optimality plays an important role in many contexts, e.g., image processing and motion planning. This paper introduces a constructive solution to the inverse optimality problem for the class of continuous piecewise affine functions. The main idea is based on the convex lifting concept; accordingly, an algorithm to construct convex liftings of a given convexly liftable partition is put forward. Following this idea, an important result is presented: any continuous piecewise affine function defined over a polytopic partition is the solution of a parametric linear/quadratic programming problem. Regarding linear optimal control, it is shown that any continuous piecewise affine control law can be obtained via a linear optimal control problem with a control horizon of at most 2 prediction steps.
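To make the convex lifting concept concrete, here is a one-dimensional toy construction (my own illustration under made-up data, not the paper's algorithm): given a partition of an interval, a linear program finds affine pieces that are continuous at the breakpoints and have strictly increasing slopes, so their pointwise maximum is a convex function whose regions of affinity reproduce the given partition.

import numpy as np
from scipy.optimize import linprog

breaks = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # partition of [0, 4] into 4 cells
K = len(breaks) - 1
# variables: [a_1..a_K, b_1..b_K]; minimize the lifting value at the right end
cost = np.zeros(2 * K)
cost[K - 1], cost[2 * K - 1] = breaks[-1], 1.0
A_eq, b_eq, A_ub, b_ub = [], [], [], []
for k in range(K - 1):
    xk = breaks[k + 1]
    row = np.zeros(2 * K)                       # continuity at breakpoint xk
    row[k], row[K + k], row[k + 1], row[K + k + 1] = xk, 1.0, -xk, -1.0
    A_eq.append(row); b_eq.append(0.0)
    row = np.zeros(2 * K)                       # slopes increase by at least 1
    row[k], row[k + 1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(-1.0)
bounds = [(None, None)] * (2 * K)
bounds[0] = bounds[K] = (0.0, 0.0)              # anchor the first piece at z = 0
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
a, b = res.x[:K], res.x[K:]
print("slopes:", a)         # strictly increasing -> max_k(a_k x + b_k) is convex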
3.
Journal of Optimization Theory and Applications - In this paper, we consider stochastic composite convex optimization problems with the objective function satisfying a stochastic bounded gradient...
4.

In this paper we consider convex feasibility problems where the feasible set is given as the intersection of a collection of closed convex sets. We assume that each set is specified algebraically as a convex inequality, where the associated convex function is general (possibly non-differentiable). To find a point satisfying all the convex inequalities, we design and analyze random projection algorithms using special subgradient iterations and extrapolated stepsizes. Moreover, the iterate updates are performed based on parallel random observations of several constraint components. For these minibatch stochastic subgradient-based projection methods we prove sublinear convergence results and, under a linear regularity condition for the functional constraints, linear convergence rates. We also derive sufficient conditions under which these rates depend explicitly on the minibatch size. To the best of our knowledge, this work is the first to derive conditions showing theoretically when minibatch stochastic subgradient-based projection updates have better complexity than their single-sample variants when parallel computing is used to implement the minibatch. Numerical results also show better performance of our minibatch scheme over its non-minibatch counterpart.

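To fix ideas, the sketch below (an illustrative instance with made-up halfspace constraints f_j(x) = a_j^T x - b_j, not the authors' exact scheme) performs minibatch subgradient projection steps: each violated sampled constraint contributes a Polyak-type subgradient step, and the steps are averaged over the minibatch.

import numpy as np

rng = np.random.default_rng(1)
m, n, batch = 200, 20, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1   # ensures the halfspaces intersect

x = np.zeros(n)
for _ in range(2000):
    J = rng.choice(m, size=batch, replace=False)   # minibatch of constraints
    step = np.zeros(n)
    for j in J:                                    # can run in parallel
        viol = max(A[j] @ x - b[j], 0.0)           # constraint violation
        if viol > 0.0:
            # Polyak-type step toward the halfspace a_j . x <= b_j
            step += (viol / (A[j] @ A[j])) * A[j]
    x -= step / batch              # averaged minibatch update (the paper also
                                   # analyzes extrapolated stepsizes)
print("max violation:", (A @ x - b).max())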
5.
A new decomposition optimization algorithm, called path-following gradient-based decomposition, is proposed to solve separable convex optimization problems. Unlike the path-following Newton methods considered in the literature, this algorithm does not require any smoothness assumption on the objective function, which allows us to handle more general classes of problems arising in many real applications than path-following Newton methods can. The new algorithm combines three techniques: smoothing, Lagrangian decomposition, and a path-following gradient framework. It decomposes the original problem into smaller subproblems by dual decomposition and smoothing via self-concordant barriers, updates the dual variables using a path-following gradient method, and allows the subproblems to be solved in parallel. Moreover, compared to augmented Lagrangian approaches, our algorithmic parameters are updated automatically, without any tuning strategy. We prove the global convergence of the new algorithm and analyze its convergence rate. We then modify the proposed algorithm by applying Nesterov's accelerating scheme, obtaining a new variant with a better convergence rate than the first algorithm. Finally, we present preliminary numerical tests that confirm the theoretical development.
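The following toy sketch (illustrative assumptions throughout, not the authors' implementation) shows the three ingredients on a two-variable separable problem: the coupling constraint is dualized (Lagrangian decomposition), each local subproblem is smoothed with a log-barrier, the dual variable is updated by gradient steps, and the smoothing parameter is driven to zero along a path.

import numpy as np

def solve_sub(d, y, t, u=4.0, iters=30):
    # local subproblem: min_x 0.5*(x-d)^2 + y*x + t*(-log x - log(u-x))
    x = u / 2.0
    for _ in range(iters):
        g = (x - d) + y - t / x + t / (u - x)      # subproblem gradient
        h = 1.0 + t / x ** 2 + t / (u - x) ** 2    # subproblem Hessian (>= 1)
        s = g / h
        while not (0.0 < x - s < u):               # damped step, stay interior
            s *= 0.5
        x -= s
    return x

d, rhs, u = [1.0, 3.0], 5.0, 4.0   # min sum_i 0.5*(x_i-d_i)^2 s.t. x_1+x_2 = rhs
y, t = 0.0, 1.0
for outer in range(40):            # path-following: shrink smoothing parameter
    for _ in range(20):            # dual gradient steps at fixed t
        xs = [solve_sub(di, y, t, u) for di in d]  # solvable in parallel
        y += 0.25 * (sum(xs) - rhs)                # dual gradient ascent
    t *= 0.7
print("x =", xs, "sum =", sum(xs))  # -> approx [1.5, 3.5], sum approx 5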
6.
7.
In this paper we study two inexact fast augmented Lagrangian algorithms for solving linearly constrained convex optimization problems. Our methods rely on a combination of the excessive-gap-like smoothing technique introduced in Nesterov (SIAM J Optim 16(1):235–249, 2005) and the general inexact oracle framework studied in Devolder (Math Program 146:37–75, 2014). We develop and analyze two augmented Lagrangian based algorithmic instances, with constant and adaptive smoothness parameters, and derive a total computational complexity estimate for each algorithm in terms of projections onto a simple primal feasible set. For the constant-parameter algorithm we obtain an overall computational complexity of order \(\mathcal {O}(\frac{1}{\epsilon ^{5/4}})\), while for the adaptive one we obtain \(\mathcal {O}(\frac{1}{\epsilon })\) as the total number of projections onto the primal feasible set needed to achieve an \(\epsilon \)-optimal solution of the original problem.
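For intuition, here is a generic fast augmented Lagrangian sketch (a standard accelerated-multiplier scheme on made-up data, not the paper's excessive-gap algorithm): the inner augmented Lagrangian subproblem over the simple set X is solved inexactly by a few projected gradient steps, and these projections onto X are exactly the operations counted in complexity estimates of this type.

import numpy as np

rng = np.random.default_rng(2)
m, n, rho = 10, 30, 10.0
A = rng.standard_normal((m, n))
b = 0.1 * rng.standard_normal(m)
q = rng.uniform(1.0, 5.0, n)               # f(x) = 0.5 * sum_i q_i x_i^2

proj_X = lambda z: np.clip(z, -1.0, 1.0)   # projection onto the simple set X
L = q.max() + rho * np.linalg.norm(A, 2) ** 2   # gradient Lipschitz constant

x, y, yt, theta = np.zeros(n), np.zeros(m), np.zeros(m), 1.0
for k in range(300):
    for _ in range(5):                # inexact inner solve: a few projected
        grad = q * x + A.T @ (yt + rho * (A @ x - b))   # gradient steps on
        x = proj_X(x - grad / L)                        # L_rho(., yt)
    y_next = yt + rho * (A @ x - b)   # multiplier update
    theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
    yt = y_next + ((theta - 1.0) / theta_next) * (y_next - y)   # momentum
    y, theta = y_next, theta_next
print("||Ax - b|| =", np.linalg.norm(A @ x - b))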
8.
In this paper, we propose a distributed algorithm for solving large-scale separable convex problems using Lagrangian dual decomposition and the interior-point framework. By adding self-concordant barrier terms to the ordinary Lagrangian, we prove under mild assumptions that the corresponding family of augmented dual functions is self-concordant. This makes it possible to use Newton's method efficiently for tracing the central path. We show that the new algorithm is globally convergent and highly parallelizable, and thus suitable for solving large-scale separable convex problems.
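The sketch below reuses the toy separable problem from the item-5 sketch above, but now updates the dual variable with Newton steps on the barrier-smoothed dual, the mechanism this abstract describes: each local solve also returns its subproblem Hessian, which supplies the curvature of the dual. Everything here is an illustrative assumption, not the authors' code.

import numpy as np

def solve_sub(d, y, t, u=4.0, iters=30):
    # local solve of min_x 0.5*(x-d)^2 + y*x + t*(-log x - log(u-x));
    # returns the minimizer and the subproblem Hessian at it
    x = u / 2.0
    for _ in range(iters):
        g = (x - d) + y - t / x + t / (u - x)
        h = 1.0 + t / x ** 2 + t / (u - x) ** 2
        s = g / h
        while not (0.0 < x - s < u):
            s *= 0.5
        x -= s
    return x, h

d, rhs, u = [1.0, 3.0], 5.0, 4.0   # min sum_i 0.5*(x_i-d_i)^2 s.t. x_1+x_2 = rhs
y, t = 0.0, 1.0
for outer in range(40):                    # trace the central path: t -> 0
    for _ in range(5):                     # Newton steps on the smoothed dual
        sols = [solve_sub(di, y, t, u) for di in d]   # parallel local solves
        grad = sum(x for x, _ in sols) - rhs          # dual gradient
        curv = sum(1.0 / h for _, h in sols)          # = -g''(y) > 0
        y += grad / curv                   # Newton ascent on the concave dual
    t *= 0.7
print("x =", [x for x, _ in sols], "y =", y)   # -> x approx [1.5, 3.5], y approx -0.5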
9.
In this paper we develop random block coordinate descent methods for minimizing large-scale linearly constrained convex problems over networks. Since coupled constraints appear in the problem, we devise an algorithm that updates in parallel at each iteration at least two random components of the solution, chosen according to a given probability distribution. These computations can be performed in a distributed fashion according to the structure of the network. The per-iteration complexity of the proposed methods is usually lower than that of the full-gradient method when the number of nodes in the network is much larger than the number of updated components. On smooth convex problems, we prove that these methods exhibit a sublinear worst-case convergence rate in the expected value of the objective function; moreover, this rate depends linearly on the number of components to be updated. On smooth strongly convex problems we prove that our methods converge linearly. We also focus on how to choose the probabilities so that our randomized algorithms converge as fast as possible, which leads us to solving a sparse semidefinite program. We then describe several applications that fit our framework, in particular the convex feasibility problem. Finally, numerical experiments illustrate the behaviour of our methods, showing in particular that updating more than two components in parallel accelerates the method.
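As a minimal illustration of network-restricted pairwise updates (a made-up instance, with uniform edge probabilities rather than the SDP-optimized ones the paper derives), the sketch below solves a coupled resource-allocation problem on a ring graph, updating one randomly sampled edge per iteration so that only neighboring nodes communicate:

import numpy as np

rng = np.random.default_rng(3)
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # ring network
p = np.full(len(edges), 1.0 / len(edges))  # uniform edge probabilities; the
                                           # paper instead optimizes p via an SDP
d = rng.standard_normal(n)
c = 2.0                                    # coupling constraint: sum(x) = c
x = np.full(n, c / n)                      # feasible start

for _ in range(5000):                      # min sum_i 0.5*(x_i - d_i)^2
    e = rng.choice(len(edges), p=p)        # sample one edge of the network
    i, j = edges[e]
    gi, gj = x[i] - d[i], x[j] - d[j]      # local coordinate gradients
    t = (gi - gj) / 2.0                    # exact step along e_i - e_j
    x[i] -= t                              # pairwise update: only neighbors
    x[j] += t                              # i and j act; sum(x) = c is kept
print("x - d:", x - d)                     # approx constant across nodes (KKT)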