Global Convergence of a Modified Gradient Projection Method for Convex Constrained Problems
Cite this article: Qing-ying Sun, Chang-yu Wang, Zhen-jun Shi. Global Convergence of a Modified Gradient Projection Method for Convex Constrained Problems[J]. Acta Mathematicae Applicatae Sinica (English Series), 2006, 22(2): 227-242. DOI: 10.1007/s10255-006-0299-2
Authors: Qing-ying Sun, Chang-yu Wang, Zhen-jun Shi
Affiliations: (1) School of Mathematics and Computational Sciences, University of Petroleum, Dongying 257061, China; (2) College of Operations Research and Management, Qufu Normal University, Rizhao 276826, China
Funding: Supported by the National Natural Science Foundation of China (No. 10571106).
Abstract: In this paper, the continuously differentiable optimization problem min{f(x) : x ∈ Ω}, where Ω ⊆ R^n is a nonempty closed convex set, is considered. The gradient projection method of Calamai and Moré (Math. Programming, Vol. 39, pp. 93-116, 1987) is modified by a memory gradient to improve its convergence rate. The convergence of the new method is analyzed without assuming that the iteration sequence {x^k} is bounded. Moreover, it is shown that when f(x) is a pseudo-convex (quasi-convex) function, the new method has strong convergence results. Numerical results show that the method in this paper is more effective than the gradient projection method.
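To illustrate the idea described in the abstract, the following is a minimal sketch of a projected gradient method with a memory term, under simplified assumptions: Ω is taken to be a box so that the projection is a componentwise clip, and the memory coefficient `beta` and the Armijo-type backtracking rule are illustrative choices, not the exact parameter rules of the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi], a simple closed convex Omega."""
    return np.clip(x, lo, hi)

def memory_gradient_projection(f, grad, x0, lo, hi, beta=0.4, sigma=1e-4,
                               shrink=0.5, tol=1e-8, max_iter=1000):
    """Illustrative sketch only: a projected gradient method with a memory
    direction d^k = -grad f(x^k) + beta * d^{k-1}. The fixed `beta` and the
    simple backtracking rule below are assumptions for demonstration, not
    the algorithm of Sun, Wang and Shi."""
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    d_prev = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        d = -g + beta * d_prev          # memory gradient direction
        if np.dot(g, d) >= 0:           # fall back to steepest descent if needed
            d = -g
        alpha = 1.0
        x_new = x
        while alpha > 1e-16:            # Armijo-type backtracking with projection
            x_new = project_box(x + alpha * d, lo, hi)
            if f(x_new) <= f(x) + sigma * np.dot(g, x_new - x):
                break
            alpha *= shrink
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        d_prev = d
        x = x_new
    return x
```

For example, minimizing f(x) = ||x - c||^2 over the box [-1, 1]^2 with c = (2, -2) outside the box returns the projection of c onto the box, namely (1, -1).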

Received: 2003-04-14
Revised: 2005-09-16

Keywords: Nonlinear programming, projection, generalized Armijo step size rule, convergence
This article is indexed in CNKI, VIP, SpringerLink, and other databases.