Inexact subgradient methods with applications in stochastic programming
Authors: Kelly T. Au, Julia L. Higle, Suvrajeet Sen
Affiliation: (1) Systems and Industrial Engineering Department, College of Engineering and Mines, University of Arizona, Tucson, AZ 85721, USA
Abstract: In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, we cite problems that arise within the context of stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, we address the solution of such optimization problems by exploring the use of successive approximation schemes within subgradient optimization methods. We refer to this new class of methods as inexact subgradient algorithms. With relatively mild conditions imposed on the approximations, we show that the inexact subgradient algorithms inherit properties associated with their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions that we impose allow a relaxation of requirements traditionally imposed on steplengths in stochastic quasi-gradient methods. Additionally, we study methods in which steplengths may be defined adaptively, in a manner that reflects the improvement in the objective function approximations as the iterations proceed. We illustrate the applicability of our approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs. This work was supported by Grant Nos. NSF-DDM-89-10046 and NSF-DDM-9114352 from the National Science Foundation.
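The flavor of such a method can be illustrated with a minimal sketch (not the paper's algorithm): minimize f(x) = E[|x − ξ|] with ξ ~ U(0,1), whose minimizer is the median 0.5. The exact subgradient is an expectation, so each iteration uses a sample-average approximation whose sample size grows with the iteration count k, paired with a diminishing steplength c/k. The growth rate 10·k and the constant c = 0.5 are illustrative choices, not values from the paper.

```python
import random

def sample_grad(x, n_samples):
    # Sample-average approximation of the subgradient of
    # f(x) = E[|x - xi|], xi ~ U(0,1).
    # An exact subgradient of |x - xi| at x != xi is sign(x - xi).
    total = 0.0
    for _ in range(n_samples):
        xi = random.random()
        total += 1.0 if x > xi else -1.0
    return total / n_samples

def inexact_subgradient(x0, grad_approx, steps=200, c=0.5):
    # Inexact subgradient method: at iteration k the subgradient is
    # only approximated, but the approximation is refined (larger
    # sample) as k grows, while the steplength c/k diminishes.
    x = x0
    for k in range(1, steps + 1):
        g = grad_approx(x, n_samples=10 * k)  # finer approximation later
        x = x - (c / k) * g                   # diminishing steplength
    return x

random.seed(0)
x_star = inexact_subgradient(x0=0.0, grad_approx=sample_grad)
# x_star should be close to the true minimizer, the median 0.5
```

The interplay the abstract describes is visible here: early iterations tolerate crude, cheap subgradient estimates, and accuracy requirements tighten only as the steplengths shrink.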
Keywords: Subgradient methods; Approximations; Stochastic optimization
This article is indexed in SpringerLink and other databases.