Similar Documents

20 similar documents found.
1.
R-linear convergence of the Barzilai and Borwein gradient method   (total citations: 4; self-citations: 0; citations by others: 4)
Combined with a non-monotone line search, the Barzilai and Borwein (BB) gradient method has been successfully extended to unconstrained optimization problems and is competitive with conjugate gradient methods. In this paper, we establish the R-linear convergence of the BB method for any-dimensional strongly convex quadratics. One corollary of this result is that the BB method is also locally R-linearly convergent for general objective functions, and hence the stepsize in the BB method will always be accepted by the non-monotone line search when the iterate is close to the solution.

2.
In this paper, a projection-type approximation method is introduced for solving a variational inequality problem. The proposed method involves only one projection per iteration, and the underlying operator is pseudo-monotone and L-Lipschitz continuous. Strong convergence of the iterative sequence generated by the proposed method is established, under mild conditions, in real Hilbert spaces. Computational experiments comparing the newly proposed method with the existing state of the art on multiple realistic test problems are reported.
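One well-known scheme in this class, using a single projection per iteration for a Lipschitz (pseudo-)monotone operator, is Tseng's forward-backward-forward method. The sketch below illustrates that scheme on a toy affine operator over the nonnegative orthant; the names and the test problem are assumptions, and the paper's exact algorithm may differ:

```python
import numpy as np

def tseng_vi(F, proj, x0, lam, n_iter=500):
    """Tseng-type single-projection method for VI(F, C) (sketch only).

    Requires lam < 1/L for an L-Lipschitz operator F.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = proj(x - lam * F(x))        # the only projection per iteration
        x = y - lam * (F(y) - F(x))     # Lipschitz correction step, no projection
    return x

# affine monotone operator F(x) = M x + q with M positive definite
M = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
proj = lambda z: np.maximum(z, 0.0)     # projection onto the nonnegative orthant
L = np.linalg.norm(M, 2)                # Lipschitz constant = spectral norm of M
sol = tseng_vi(F, proj, x0=np.zeros(2), lam=0.9 / L)
```

A solution of the VI satisfies the natural residual equation x = P_C(x - F(x)), which gives a convenient stopping test.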

3.
Many mathematical and applied problems can be reduced to finding a common point of a system of convex sets. The aim of this paper is twofold: first, to present a unified framework for the study of all the projection-like methods, both parallel and serial (chaotic, most-remote set, cyclic order, barycentric, extrapolated, etc.); second, to establish strong convergence results for quite general sets of constraints (generalized Slater, generalized uniformly convex, made of affine varieties, complementary, etc.). This is done by introducing the concept of a regular family. We proceed as follows: first, we present definitions, assumptions, theorems, and conclusions; thereafter, we prove them.
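As an illustration of the serial (cyclic-order) methods mentioned above, here is a minimal sketch of cyclic projections onto two convex sets in the plane; the particular sets and names are illustrative, not from the paper:

```python
import numpy as np

def cyclic_projections(projections, x0, n_iter=200):
    """Cyclic (serial) projection method for a convex feasibility problem.

    Repeatedly projects onto each set in a fixed cyclic order.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        for P in projections:
            x = P(x)
    return x

# two sets in R^2: the closed unit ball and the half-plane {x : x[0] >= 0.5}
def proj_ball(z):
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

def proj_halfplane(z):
    return np.array([max(z[0], 0.5), z[1]])

x = cyclic_projections([proj_ball, proj_halfplane], x0=np.array([-2.0, 3.0]))
```

Since the two sets intersect, the iterates settle in the intersection, i.e. the final point is feasible for both constraints.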

4.
The convergence of Rosen's gradient method is a long-standing problem in nonlinear programming. Recently, progress has been made by several researchers. In this paper, we completely resolve the problem. This author's work was supported in part by AF OSR-86-0078, NSF DMS-86-06225, and the NSF of China.

5.
Since Rosen’s gradient projection method was published in 1960, a rigorous convergence proof of his method has remained an open question. A convergence theorem is given in this paper. Part of this author’s work was done while he studied at the Department of Mathematics, University of California at Santa Barbara, and was supported by the National Science Foundation under Grant No. MCS83-14977. Part of this author’s work was done while he visited the Computer Science Department, University of Minnesota, Minneapolis, and was supported by the National Science Foundation under Grant No. MCS81-01214.

6.
This paper develops a procedure for numerically solving continuous games (and also matrix games) using a gradient projection method in a general Hilbert space setting. First, we analyze the symmetric case. Our approach is to introduce a functional which measures how far a strategy deviates from giving zero value (i.e., how near the strategy is to being optimal). We then incorporate this functional into a nonlinear optimization problem with constraints and solve this problem using the gradient projection algorithm. The convergence is studied via the corresponding steepest-descent differential equation. The differential equation is a nonlinear initial-value problem in a Hilbert space; thus, we include a proof of existence and uniqueness of its solution. Finally, nonsymmetric games are handled using the symmetrization techniques of Ref. 1.

7.
《Optimization》2012,61(2):163-179
In this article, we consider the global convergence of the Polak–Ribière–Polyak conjugate gradient method (abbreviated PRP method) for minimizing functions that have Lipschitz continuous partial derivatives. A novel form of non-monotone line search is proposed to guarantee the global convergence of the PRP method. It is also shown that the PRP method has a linear convergence rate under some mild conditions when the non-monotone line search reduces to a related monotone line search. The new non-monotone line search needs to estimate the Lipschitz constant of the gradient of the objective function, for which two practical estimations are proposed to help find a suitable initial step size for the PRP method. Numerical results show that the new line search approach is efficient in practical computation.

8.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method which has good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu–Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.

9.
《Optimization》2012,61(12):2347-2358
In this paper, we consider the varying stepsize gradient projection algorithm (GPA) for solving the split equality problem (SEP) in Hilbert spaces, and study its linear convergence. In particular, we introduce a notion of bounded linear regularity property for the SEP, and use it to establish the linear convergence property for the varying stepsize GPA. We provide some mild sufficient conditions to ensure the bounded linear regularity property, and then conclude the linear convergence rate of the varying stepsize GPA. To the best of our knowledge, this is the first work to study the linear convergence for the SEP.
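A minimal sketch of a gradient projection iteration for the SEP (find x in C and y in Q with A x = B y), obtained by minimizing 0.5*||A x - B y||^2 over C x Q. The fixed stepsize below stands in for the paper's varying stepsize, and the box constraints and names are illustrative assumptions:

```python
import numpy as np

def sep_gpa(A, B, proj_C, proj_Q, x0, y0, n_iter=2000):
    """Gradient projection sketch for the split equality problem.

    Minimizes 0.5 * ||A x - B y||^2 over C x Q with a fixed stepsize
    gamma = 1 / lambda_max(G^T G), where G = [A, -B].
    """
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    G = np.hstack([A, -B])
    gamma = 1.0 / np.linalg.norm(G.T @ G, 2)
    for _ in range(n_iter):
        r = A @ x - B @ y                      # residual of the coupling constraint
        x = proj_C(x - gamma * A.T @ r)
        y = proj_Q(y + gamma * B.T @ r)
    return x, y

A = np.array([[1.0, 2.0]])
B = np.array([[1.0]])
proj_C = lambda z: np.clip(z, 0.0, 1.0)        # C = [0, 1]^2
proj_Q = lambda z: np.clip(z, 0.0, 3.0)        # Q = [0, 3]
x, y = sep_gpa(A, B, proj_C, proj_Q, x0=np.array([1.0, 1.0]), y0=np.array([0.0]))
```

At a solution the coupling residual A x - B y vanishes while both iterates stay feasible.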

10.
11.
We provide sufficient conditions for norm convergence of various projection and reflection methods, as well as giving limiting examples regarding convergence rates.

12.
This work focuses on convergence analysis of the projected gradient method for solving constrained convex minimization problems in Hilbert spaces. We show that the sequence of points generated by the method employing the Armijo line search converges weakly to a solution of the considered convex optimization problem. Weak convergence is established by assuming convexity and Gâteaux differentiability of the objective function, whose Gâteaux derivative is supposed to be uniformly continuous on bounded sets. Furthermore, we propose some modifications of the classical projected gradient method in order to obtain strong convergence. The new variant has the following desirable properties: the sequence of generated points is entirely contained in a ball with diameter equal to the distance between the initial point and the solution set, and the whole sequence converges strongly to the solution of the problem that lies closest to the initial iterate. Convergence analysis of both methods is presented without a Lipschitz continuity assumption.
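The classical projected gradient step with an Armijo search along the feasible direction can be sketched as follows, here in finite dimensions on a toy projection problem; the names and parameters are illustrative assumptions:

```python
import numpy as np

def projected_gradient_armijo(f, grad, proj, x0, n_iter=300, beta=0.5, sigma=1e-4):
    """Projected gradient with Armijo search along the feasible direction.

    Each step: z = P_C(x - grad f(x)), then backtrack on t in x + t*(z - x).
    """
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(n_iter):
        z = proj(x - grad(x))
        d = z - x                              # feasible descent direction
        if np.linalg.norm(d) < 1e-12:          # fixed point of the projection => optimal
            break
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * (grad(x) @ d):
            t *= beta
        x = x + t * d
    return x

# minimize ||x - p||^2 over the closed unit ball
p = np.array([3.0, 4.0])
f = lambda x: np.sum((x - p) ** 2)
g = lambda x: 2.0 * (x - p)
def proj_ball(z):
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

x_opt = projected_gradient_armijo(f, g, proj_ball, x0=np.zeros(2))
```

For this example the solution is the radial projection of p onto the unit sphere, namely (0.6, 0.8).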

13.
The coefficients of the plant are perturbed in an abstract, linear, convex problem on a Hilbert space. Necessary and sufficient conditions are obtained on the perturbations for the continuous dependence of the adjoint variables on the coefficients of the plant. An application is given to the optimal control of ordinary differential equations. This work was supported by CNR-CNAFA, Rome, Italy.

14.
J. K. Liu  X. L. Du 《Applicable analysis》2018,97(12):2122-2131
Many problems arising in machine learning, compressive sensing, linear inverse problems, and statistical inference involve finding sparse solutions to under-determined or ill-conditioned equations. In this paper, a gradient projection method is proposed to recover sparse signals in compressive sensing by solving nonlinear convex constrained equations. Global convergence is established with a backtracking line search. Preliminary numerical experiments on sparse signal reconstruction in compressive sensing show that the proposed method is effective and stable.

15.
We study the projected gradient algorithm for linearly constrained optimization. Wolfe (Ref. 1) has produced a counterexample to show that this algorithm can jam. However, his counterexample is only C^1, and it is conjectured that the algorithm is convergent for C^2 functions. We show that this conjecture is partly right. We also show that one needs more assumptions to prove convergence, since we present a family of counterexamples. We finally give a demonstration that no jamming can occur for quadratic objective functions. This work was supported by the Natural Sciences and Engineering Research Council of Canada.

16.
The conjugate gradient method is one of the most widely used methods in optimization and is extensively applied to large-scale problems; different choices of the parameter β_k yield different conjugate gradient methods. This paper presents a class of conjugate gradient algorithms with three parameters. Under the stated conditions, the chosen β_k is shown to generate a descent direction at every step, and under the strong Wolfe line search the algorithm is globally convergent.

17.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines line search and trust region techniques to generate new iterates at each iteration, and therefore enjoys the advantages of both. It makes full use of multi-step information from previous iterations and avoids storing and computing matrices associated with the Hessian of the objective function, so it is suitable for large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions; using more information from previous steps enables fast, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable, and robust in practical computation compared with other similar methods.

18.
I. V. Konnov 《Optimization》2018,67(5):665-682
We suggest simple implementable modifications of conditional gradient and gradient projection methods for smooth convex optimization problems in Hilbert spaces. Usually, such methods attain only weak convergence. We prove strong convergence of the new versions and establish their complexity estimates, which appear similar to the convergence rates of the weakly convergent versions. Preliminary computational tests confirm the efficiency of the proposed modifications.

19.
In this paper, we present a new generalized gradient projection algorithm for nonlinear programming problems with linear constraints. The algorithm has a simple structure and is practical and stable. Under weak assumptions, we prove its global convergence.

20.
We consider a proximal gradient algorithm with line search for convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regularizer, and analyze its convergence. We prove that the algorithm is R-linearly convergent under a local Lipschitz continuity condition on the gradient, and, when the nonsmooth part is the sparse group LASSO regularizer, we prove that an error bound condition holds, yielding a linear convergence rate. Finally, numerical experiments verify the effectiveness of the method.
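When the nonsmooth part is the plain l1 regularizer, the proximal gradient iteration reduces to soft-thresholding (ISTA). The sketch below uses a fixed 1/L stepsize in place of the paper's line search, and a small synthetic LASSO instance; the names and data are illustrative, and the group-LASSO case is not shown:

```python
import numpy as np

def prox_gradient_lasso(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) sketch for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A.T @ A, 2)            # Lipschitz constant of the smooth gradient
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    # prox of a*||.||_1 is componentwise soft-thresholding
    soft = lambda z, a: np.sign(z) * np.maximum(np.abs(z) - a, 0.0)
    for _ in range(n_iter):
        x = soft(x - t * A.T @ (A @ x - b), t * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]                  # sparse ground truth
b = A @ x_true                                # noiseless measurements
x_hat = prox_gradient_lasso(A, b, lam=0.05)
```

With noiseless data and a small regularization weight, the recovered vector is close to the sparse ground truth up to the usual l1 shrinkage bias.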


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号