Equality and inequality constrained optimization algorithms with convergent stepsizes
Authors: B. Rustem
Institution: (1) Department of Computing, Imperial College of Science, Technology, and Medicine, London, UK
Abstract: In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms use stepsize strategies based on differentiable penalty functions and quadratic programming subproblems. The essential difference between the algorithms lies in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that depends on the penalty functions. The quadratic objective function uses an approximate Hessian of the Lagrangian augmented by the penalty functions; in this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions.

The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, boundedness follows because the strategy is always satisfied for finite values of the parameter.

These properties are used to establish global convergence and the conditions under which unit stepsizes are achieved. There is also a compatibility between the quadratic objective function and the stepsize strategy that ensures the consistency of the unit-step properties and the subsequent convergence rates.

Acknowledgments: This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.
Keywords: Nonlinear programming; sequential quadratic programming; nonlinear equality and inequality constraints; augmented Lagrangians; adaptive penalty parameter; convergence analysis
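For readers who want a concrete picture of the iteration described in the abstract, the following is a minimal sketch of one sequential-quadratic-programming step with a penalty-function-based stepsize, restricted to equality constraints for brevity. It is not the paper's algorithm: the exact penalty-dependent linear term in the QP objective, the penalty-parameter update rule, and the treatment of inequality constraints differ, and all names here (sqp_step_sketch, hess_lag, c_pen, and so on) are illustrative assumptions.

```python
import numpy as np

def sqp_step_sketch(x, lam, c_pen, f, grad_f, cons, jac_cons, hess_lag):
    """One illustrative SQP step for: minimize f(x) subject to cons(x) = 0.

    The QP subproblem  min_d 0.5 d'Hd + g'd  s.t.  A d + c = 0  is solved via
    its KKT system, and the stepsize comes from an Armijo search on a
    differentiable penalty (augmented-Lagrangian-type) merit function.
    """
    g = grad_f(x)                 # gradient of the objective
    c = cons(x)                   # equality constraint values, shape (m,)
    A = jac_cons(x)               # constraint Jacobian, shape (m, n)
    H = hess_lag(x, lam)          # approximate Hessian of the augmented Lagrangian
    n, m = H.shape[0], A.shape[0]

    # KKT system of the QP subproblem (assumes H is positive definite on the
    # null space of A, so the system is nonsingular).
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(KKT, -np.concatenate([g, c]))
    d, lam_qp = sol[:n], sol[n:]

    # Differentiable penalty (merit) function: Lagrangian plus quadratic penalty.
    def merit(z, rho):
        cz = cons(z)
        return f(z) + lam_qp @ cz + 0.5 * rho * cz @ cz

    # Directional derivative of the merit function along d (uses A d = -c).
    def merit_slope(rho):
        return g @ d - lam_qp @ c - rho * (c @ c)

    # Increase the penalty parameter until d is a descent direction; a finite
    # value suffices whenever the constraints are violated, so the parameter
    # stays bounded.
    while merit_slope(c_pen) >= 0.0 and c @ c > 0.0:
        c_pen *= 10.0

    # Armijo backtracking line search; near a solution the unit step t = 1
    # is accepted, which is what permits fast local convergence.
    t, phi0, slope = 1.0, merit(x, c_pen), merit_slope(c_pen)
    while merit(x + t * d, c_pen) > phi0 + 1e-4 * t * slope and t > 1e-8:
        t *= 0.5

    return x + t * d, lam_qp, c_pen
```

The point mirrored from the abstract is that the penalty parameter is only increased when a descent test fails, and a finite value always satisfies the test, which keeps the parameter bounded; combined with the compatibility between the QP objective and the stepsize test, this is what allows unit steps to be accepted near a solution.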
This article is indexed in SpringerLink and other databases.