Steepest Descent Methods with Generalized Distances for Constrained Optimization
Authors: Alfredo N. Iusem
Institution: (1) Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro, RJ, CEP 22460-320, Brazil
Abstract:We consider the problem 
$$ \min f(x) $$
s.t. 
$$ x \in C $$
, where C is a closed and covex subset of 
$$ {\text{R}}^n $$
with nonempty interior, and introduce a family of interior point methods for this problem, which can be seen as approximate versions of generalized proximal point methods. Each step consists of a one-dimensional search along either a curve or a segment in the interior of C. The information about the boundary of C is contained in a generalized distance which defines the segment of the curve, and whose gradient diverges at the boundary of C. The objective of the search is either f or f plus a regularizing term. When 
$$ C{\text{ = R}}^n $$
, the usual steepest descent method is a particular case of our general scheme, and we manage to extend known convergence results for the steepest descent method to our family: for nonregularized one-dimensional searches,under a level set boundedness assumption on f, the sequence is bounded, the difference between consecutive iterates converges to 0 and every cluster point of the sequence satisfies first-order optimality conditions for the problem, i.e. is a solution if f is convex. For the regularized search and convex f, no boundedness condition on f is needed and full and global convergence of the sequence to a solution of the problem is established.
Keywords: proximal point methods; steepest descent method; interior methods; convex programming; constrained optimization
This article is indexed in SpringerLink and other databases.