Random Gradient-Free Minimization of Convex Functions
Authors: Yurii Nesterov, Vladimir Spokoiny
Institutions: 1. Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), Leuven, Belgium; 2. Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Humboldt University of Berlin, Berlin, Germany
Abstract: In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than standard gradient methods, where n is the dimension of the space of variables. This conclusion holds for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence \(O\left(\frac{n^2}{k^2}\right)\), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence \(O\left(\frac{n}{k^{1/2}}\right)\). We also give some bounds on the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, in both the smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
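As a rough illustration of the idea described in the abstract (a gradient estimate built from function values along a random Gaussian direction), here is a minimal sketch in Python. It is not the authors' exact scheme: the smoothing parameter mu, the step size h, and the iteration count are illustrative placeholders, not the tuned values from the paper's analysis.

```python
import numpy as np

def gf_oracle(f, x, mu, rng):
    """Two-point gradient-free oracle along a Gaussian direction:
    g = ((f(x + mu*u) - f(x)) / mu) * u, with u ~ N(0, I)."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def random_gradient_free(f, x0, steps=5000, mu=1e-4, h=1e-2, seed=0):
    """Plain zero-order descent x_{k+1} = x_k - h * g_mu(x_k).
    mu and h are illustrative constants, not the paper's choices."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * gf_oracle(f, x, mu, rng)
    return x

# Usage: minimize the smooth convex quadratic f(x) = ||x||^2 / 2.
if __name__ == "__main__":
    f = lambda x: 0.5 * np.dot(x, x)
    x_final = random_gradient_free(f, x0=np.ones(10))
    print(f(x_final))  # should be close to 0
```

Consistent with the abstract's claim, in n dimensions such a scheme typically needs on the order of n times more iterations than a standard gradient method, since each Gaussian direction carries only a one-dimensional projection of the gradient.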
This article is indexed in SpringerLink and other databases.