Primal-Dual Nonlinear Rescaling Method for Convex Optimization
Authors: R. Polyak, I. Griva
Institution: (1) Department of Systems Engineering and Operations Research and Mathematical Sciences Department, George Mason University, Fairfax, Virginia
Abstract: In this paper, we consider a general primal-dual nonlinear rescaling (PDNR) method for convex optimization with inequality constraints. We prove the global convergence of the PDNR method and estimate the error bounds for the primal and dual sequences. In particular, we prove that, under the standard second-order optimality conditions, the error bounds for the primal and dual sequences converge to zero with linear rate. Moreover, for any given ratio 0 < gamma < 1, there is a fixed scaling parameter k_gamma > 0 such that each PDNR step shrinks the primal-dual error bound by at least a factor gamma, for any k >= k_gamma. The PDNR solver was tested on a variety of NLP problems, including the constrained optimization problems (COPS) set. The results obtained show that the PDNR solver is numerically stable and produces results with high accuracy. Moreover, for most of the problems solved, the number of Newton steps is practically independent of the problem size.
Keywords: nonlinear rescaling, duality, Lagrangian, primal-dual methods, multiplier methods
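The scheme the abstract describes can be illustrated with a minimal sketch of a nonlinear rescaling iteration: an inner Newton minimization of a rescaled Lagrangian followed by a multiplier update. The transformation psi(t) = log(1 + t), the toy one-dimensional problem, and the damped Newton inner solver below are illustrative assumptions, not the authors' exact PDNR algorithm.

```python
# Hedged sketch of a nonlinear rescaling (modified log-barrier) iteration.
# Assumed transformation: psi(t) = log(1 + t), so psi'(t) = 1/(1 + t).
# Toy problem: minimize f(x) = (x - 2)^2  s.t.  c(x) = 1 - x >= 0.
# KKT gives x* = 1 and multiplier lam* = 2 (since 2(x* - 2) + lam* = 0).

def f_grad(x):
    return 2.0 * (x - 2.0)          # f'(x) for f(x) = (x - 2)^2

def c(x):
    return 1.0 - x                   # constraint value; c'(x) = -1

def psi_p(t):
    return 1.0 / (1.0 + t)           # psi'(t)

def psi_pp(t):
    return -1.0 / (1.0 + t) ** 2     # psi''(t)

def pdnr_sketch(k=10.0, lam=1.0, x=0.0, outer_iters=20):
    """Alternate two steps:
       (1) minimize the rescaled Lagrangian
           L(x) = f(x) - (lam / k) * psi(k * c(x))  over x (damped Newton);
       (2) update the multiplier  lam <- lam * psi'(k * c(x))."""
    dc = -1.0  # c'(x) is constant for this toy constraint
    for _ in range(outer_iters):
        # Inner damped Newton on grad L(x) = f'(x) - lam * psi'(k c(x)) * c'(x)
        for _ in range(50):
            g = f_grad(x) - lam * psi_p(k * c(x)) * dc
            h = 2.0 - lam * psi_pp(k * c(x)) * k * dc * dc  # > 0 here
            step = g / h
            # Backtrack so the barrier argument stays in its domain (1 + k c > 0)
            while 1.0 + k * c(x - step) <= 0.0:
                step *= 0.5
            x -= step
            if abs(g) < 1e-12:
                break
        lam *= psi_p(k * c(x))  # fixed point forces c(x) = 0, i.e. x = 1
    return x, lam
```

At the fixed point the update lam <- lam * psi'(k c(x)) leaves lam unchanged only when psi'(k c(x)) = 1, i.e. c(x) = 0, which together with stationarity of L recovers the KKT pair (x*, lam*); the scaling parameter k plays the role of k_gamma in the linear-rate statement above.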
This article is indexed in SpringerLink and other databases.