Related Articles
20 related articles found (search time: 46 ms).
1.
A new super-memory gradient algorithm for unconstrained optimization is studied. At each iteration the algorithm makes full use of information from previous iterates to generate a descent direction and uses a Wolfe line search to determine the step size; global convergence is proved under fairly weak conditions. Since the new algorithm neither computes nor stores any matrices, it is suitable for large-scale optimization problems.
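As a rough illustration of this family of methods, here is a minimal Python sketch of a memory gradient iteration driven by a Wolfe line search. The mixing parameter beta and the descent safeguard are illustrative assumptions, not the rule proposed in the paper.

```python
import numpy as np

def wolfe_step(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Bisection-style search for a step satisfying the (weak) Wolfe conditions."""
    lo, hi = 0.0, np.inf
    fx = f(x)
    dg = grad(x) @ d
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * dg:    # sufficient decrease fails
            hi = alpha
        elif grad(x + alpha * d) @ d < c2 * dg:        # curvature condition fails
            lo = alpha
        else:
            return alpha
        alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha

def memory_gradient(f, grad, x0, beta=0.3, tol=1e-6, max_iter=1000):
    """Memory gradient sketch: the direction mixes the current negative gradient
    with the previous direction, so no matrices are computed or stored."""
    x = np.asarray(x0, dtype=float)
    d = -grad(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * d        # reuse information from earlier iterates
        if g @ d >= 0:           # safeguard: fall back to steepest descent
            d = -g
        x = x + wolfe_step(f, grad, x, d) * d
    return x
```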

2.
A diagonal sparse quasi-Newton method for unconstrained optimization problems (cited by 3: 0 self-citations, 3 by others)
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm uses an inexact Armijo line search and, at each iteration, approximates the quasi-Newton update matrix by a diagonal matrix, which markedly reduces the storage and work needed to compute the search direction and offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, global convergence and a linear convergence rate are proved, and the superlinear convergence behavior is analyzed. Numerical experiments show that the algorithm is more effective than the conjugate gradient method and is well suited to large-scale unconstrained optimization problems.
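A minimal sketch of the diagonal idea (the update below is a generic coordinate-wise secant rule, not necessarily the paper's exact correction): keeping the Hessian approximation diagonal makes both storage and the cost of computing the search direction O(n).

```python
import numpy as np

def armijo(f, x, d, g, c1=1e-4, rho=0.5, alpha=1.0, max_back=50):
    """Backtracking (inexact) Armijo line search."""
    fx, dg = f(x), g @ d
    for _ in range(max_back):
        if f(x + alpha * d) <= fx + c1 * alpha * dg:
            break
        alpha *= rho
    return alpha

def diagonal_quasi_newton(f, grad, x0, tol=1e-6, max_iter=1000, eps=1e-8):
    """Keep a positive diagonal approximation D of the Hessian, so the
    quasi-Newton direction -g / D needs only O(n) storage and work."""
    x = np.asarray(x0, dtype=float)
    D = np.ones_like(x)                      # diagonal Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g / D
        alpha = armijo(f, x, d, g)
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        mask = np.abs(s) > eps
        D[mask] = np.clip(y[mask] / s[mask], eps, 1e8)  # coordinate-wise secant, kept positive
        x, g = x_new, g_new
    return x
```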

3.
A new class of memory gradient methods for unconstrained optimization problems is proposed, and global convergence and a linear convergence rate are proved under fairly weak conditions. The algorithm uses a curve search, determining the search direction and step size simultaneously at each step; it converges stably and neither computes nor stores matrices, making it suitable for large-scale optimization problems. Numerical experiments show that the algorithm is effective.

4.
In recent years several proposals for the step-size selection have largely improved the gradient methods, in the case of both constrained and unconstrained nonlinear optimization. We introduce a new step-size rule with some crucial properties. We design step-size selection strategies where the new rule and a standard Barzilai-Borwein (BB) rule can be adaptively alternated to get meaningful convergence rate improvements in comparison with other BB-like gradient schemes. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
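For reference, the two standard BB stepsizes that such strategies alternate with, written with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}:

```latex
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}}.
```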

5.
Optimization, 2012, 61(4-5): 395-415
The Barzilai and Borwein (BB) gradient method does not guarantee a descent in the objective function at each iteration, but performs better than the classical steepest descent (SD) method in practice. So far, the BB method has found many successful applications and generalizations in linear systems, unconstrained optimization, convex-constrained optimization, stochastic optimization, etc. In this article, we propose a new gradient method that uses the SD and the BB steps alternately. Hence the name “alternate step (AS) gradient method.” Our theoretical and numerical analyses show that the AS method is a promising alternative to the BB method for linear systems. Unconstrained optimization algorithms related to the AS method are also discussed. Particularly, a more efficient gradient algorithm is provided by exploring the idea of the AS method in the GBB algorithm by Raydan (1997).

To establish a general R-linear convergence result for gradient methods, an important property of the stepsize is identified in this article. Consequently, an R-linear convergence result is established for a large collection of gradient methods, including the AS method. Some interesting insights into gradient methods and a discussion of monotonicity and nonmonotonicity are also given.
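A minimal sketch of the alternation idea for a symmetric positive definite linear system (the exact alternation pattern used in the paper may differ). For quadratics, retaining the previous Cauchy step length is exactly the BB1 step, which makes the alternation cheap:

```python
import numpy as np

def alternate_step(A, b, x0, tol=1e-8, max_iter=10000):
    """AS-style iteration for Ax = b with A symmetric positive definite:
    even iterations take the exact steepest descent (Cauchy) step; odd
    iterations reuse the previous step length, which equals the BB1 step."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    alpha = (g @ g) / (g @ (A @ g))
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if k % 2 == 0:                        # SD step: exact minimizer along -g
            alpha = (g @ g) / (g @ (A @ g))
        x = x - alpha * g                     # on odd k, alpha is the BB1 step
        g = A @ x - b
    return x

# usage: solve a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = alternate_step(A, b, np.zeros(2))
```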

6.
A new super-memory gradient algorithm for unconstrained optimization (cited by 3: 0 self-citations, 3 by others)
Shi Zhenjun, Advances in Mathematics, 2006, 35(3): 265-274
This paper proposes a new super-memory gradient algorithm for unconstrained optimization. The search direction is a linear combination of the negative gradient at the current point and the negative gradient at the previous point, and the step size is determined by an exact line search or an Armijo search. Global convergence and a linear convergence rate are proved under very weak conditions. Because the algorithm avoids storing and computing matrices associated with the objective function, it is suitable for large-scale unconstrained optimization problems. Numerical experiments show that the algorithm is more effective than standard conjugate gradient algorithms.

7.
Tensors have been a hot topic in the past decade, and eigenvalue problems of higher-order tensors have become more and more important in numerical multilinear algebra. Several methods for finding the Z-eigenvalues and generalized eigenvalues of symmetric tensors have been given. However, the convergence of these methods when the tensor is not symmetric but weakly symmetric is not assured. In this paper, we give two convergent gradient projection methods for computing some generalized eigenvalues of weakly symmetric tensors. The gradient projection method with Armijo step-size rule (AGP) can be viewed as a modification of the GEAP method. The spectral gradient projection method, which combines the BB method with the gradient projection method, is superior to the GEAP, AG and AGP methods. We also make comparisons among the four methods. Some competitive numerical results are reported at the end of this paper.
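As background, a generic sketch of a spectral (BB) gradient projection iteration on the unit sphere, the feasible set typically used for tensor eigenvalue problems; the tensor-specific objective and the safeguards of the paper's AGP/spectral methods are abstracted away here:

```python
import numpy as np

def spectral_gradient_projection(grad, x0, tol=1e-8, max_iter=5000,
                                 a_min=1e-10, a_max=1e10):
    """x_{k+1} = P(x_k - alpha_k * grad(x_k)), with P(z) = z / ||z|| the
    projection onto the unit sphere and alpha_k a safeguarded BB1 stepsize."""
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    alpha = 1.0
    g = grad(x)
    for _ in range(max_iter):
        z = x - alpha * g
        x_new = z / np.linalg.norm(z)         # projection onto the sphere
        if np.linalg.norm(x_new - x) < tol:
            break
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 step, clipped into [a_min, a_max] as a safeguard
        alpha = np.clip((s @ s) / sy, a_min, a_max) if abs(sy) > 1e-16 else 1.0
        x, g = x_new, g_new
    return x
```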

8.
Liu Yajun, Liu Xinwei, Mathematica Numerica Sinica, 2016, 38(1): 96-112
Gradient methods are an important class of methods for unconstrained optimization, and their numerical behavior is closely tied to the choice of step size. Noting that the BB step size implicitly carries second-order information about the objective function, this paper combines the BB method with the trust region framework: the reciprocal of the BB step size is used to approximate the Hessian of the objective, and the trust region subproblem is used to choose the gradient step size more flexibly, yielding monotone and nonmonotone trust region BB methods for unconstrained optimization. Global convergence is proved under suitable assumptions. Numerical experiments show that, compared with existing BB-type methods for unconstrained optimization, the decrease of e_k = ||x_k - x*|| in the nonmonotone trust region BB method is more clearly stair-like and monotone, so the method converges faster.
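Because the model Hessian is the scalar matrix (1/alpha_BB) * I, the trust region subproblem has a closed-form solution; a sketch of the resulting step follows (the acceptance test and radius update of the full algorithm are omitted, and this is one reading of the idea rather than the paper's exact rule):

```python
import numpy as np

def tr_bb_step(g, alpha_bb, delta):
    """Solve min g^T d + 0.5 * d^T B d  s.t. ||d|| <= delta with B = (1/alpha_bb) I:
    the minimizer is d = -min(alpha_bb, delta / ||g||) * g (alpha_bb > 0 assumed)."""
    return -min(alpha_bb, delta / np.linalg.norm(g)) * g
```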

9.
A new search method is presented for unconstrained optimization. The method requires the evaluation of first and second derivatives and defines a curve along which a one-dimensional step takes place. For large step-sizes, the method performs as Newton's method, but it does not fail where the latter fails. For small step-sizes, the method behaves as the gradient method.

10.
In this paper, a new type of stepsize, the approximate optimal stepsize, for the gradient method is introduced to interpret the Barzilai–Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize for the strictly convex quadratic minimization problem is presented. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then use the two well-known BB stepsizes to truncate it for improved numerical effects, and treat the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
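One plausible reading of the truncation step, sketched below with a hypothetical helper; the paper's exact truncation rule may differ:

```python
import numpy as np

def truncated_stepsize(alpha_model, s, y):
    """Clamp a model-based 'approximate optimal' stepsize into the interval
    spanned by the two BB stepsizes (illustrative assumption)."""
    bb1 = (s @ s) / (s @ y)   # long BB step
    bb2 = (s @ y) / (y @ y)   # short BB step
    lo, hi = min(bb1, bb2), max(bb1, bb2)
    return float(np.clip(alpha_model, lo, hi))
```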

11.
This paper discusses nonlinear optimization problems with inequality constraints. Drawing on the idea of filter methods, a new generalized gradient projection algorithm is proposed that uses neither a penalty function nor a filter in the strict sense. At each iteration, the search direction is produced by a simple explicit generalized projection, and the step size is produced by an Armijo-type line search requiring sufficient decrease in either the objective value or the constraint violation. The main features of the algorithm are: no boundedness assumption on the iterate sequence is needed; no feasibility restoration phase, indispensable in traditional filter methods, is required; and an ε-active constraint set is used to reduce the computational cost. Global convergence is established under suitable assumptions, and preliminary numerical experiments are reported.

12.
In the prediction-correction method for variational inequality (VI) problems, the step size selection plays an important role in its performance. In this paper, we employ the Barzilai-Borwein (BB) strategy in the prediction step, which is efficient for many optimization problems from a computational point of view. To guarantee convergence, we adopt a line search technique and relax the conditions so as to accept the BB step sizes as large as possible. In the correction step, we utilize a longer step length to calculate the next iteration point. Finally, we present some preliminary numerical results to show the efficiency of the algorithms.

13.
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The first memory gradient methods were proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on standard test problems for a suitable choice of the parameter included in the method.

14.
Optimization, 2012, 61(12): 2275-2290
We suggest simple modifications of the conditional gradient method for smooth optimization problems which maintain the basic convergence properties but substantially reduce the implementation cost of each iteration. Namely, we propose an adaptive step-size procedure without any line search, together with inexact solution of the direction finding subproblem. Preliminary results of computational tests confirm the efficiency of the proposed modifications.
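A minimal sketch of a line-search-free conditional gradient (Frank-Wolfe) iteration; the classical open-loop step gamma_k = 2/(k+2) stands in for the paper's adaptive procedure, and `lmo` is the linear minimization oracle for the direction finding subproblem (which the paper also allows to be solved inexactly):

```python
import numpy as np

def conditional_gradient(grad, lmo, x0, max_iter=500):
    """Frank-Wolfe: move toward the point minimizing the linearized objective."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        s = lmo(grad(x))               # solve (or approximate) the subproblem
        gamma = 2.0 / (k + 2.0)        # step size without any line search
        x = (1 - gamma) * x + gamma * s
    return x

# usage: minimize ||x - c||^2 over the probability simplex
c = np.array([0.2, 0.7, 0.1])
grad = lambda x: 2 * (x - c)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # best simplex vertex
x = conditional_gradient(grad, lmo, np.ones(3) / 3)
```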

15.
The nonmonotone line search approach is a relatively new technique for solving optimization problems. It relaxes the line search range and finds a larger step-size at each iteration, so as to possibly avoid local minimizers and escape narrow curved valleys; this is helpful for finding global minimizers. In this paper we develop a new modification of the matrix-free nonmonotone Armijo line search and analyze the global convergence and convergence rate of the resulting method. We also address several approaches to estimating the Lipschitz constant of the gradient of the objective function for use in line search algorithms. Numerical results show that this new modification of the Armijo line search is efficient for solving large-scale unconstrained optimization problems.
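For concreteness, a sketch of the classical GLL-style nonmonotone Armijo test on which such modifications build (the matrix-free modification and the Lipschitz-constant estimates of the paper are not reproduced here):

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, c1=1e-4, rho=0.5, alpha=1.0, max_back=50):
    """Sufficient decrease is measured against the maximum of the last few
    objective values (f_hist), which admits larger steps than the monotone rule."""
    f_ref = max(f_hist)                # nonmonotone reference value
    dg = g @ d
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + c1 * alpha * dg:
            break
        alpha *= rho                   # backtrack
    return alpha
```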

16.
The aggregate function is a useful smoothing of the max-function of smooth functions and has been used to solve minimax problems, linear and nonlinear programming, generalized complementarity problems, etc. The aggregate function is a single smooth but complicated function, and its gradient and Hessian are time-consuming to compute. In this paper, a truncated aggregate smoothing stabilized Newton method for solving minimax problems is presented. At each iteration, only a small subset of the components in the max-function are aggregated, so the number of gradient and Hessian calculations is reduced dramatically. The subset is adaptively updated by truncating criteria that involve only function values, not gradients or Hessians, so as to guarantee global convergence and, for the inner iteration, locally quadratic convergence at as little computational cost as possible. Numerical results show the efficiency of the proposed algorithm.
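For context, the standard aggregate (exponential penalty) function smooths max_i f_i(x) as F_p(x) = (1/p) log sum_i exp(p f_i(x)), which satisfies max_i f_i <= F_p <= max_i f_i + log(m)/p; a numerically stable sketch:

```python
import numpy as np

def aggregate(fvals, p=100.0):
    """Aggregate smoothing of max_i f_i, computed stably by factoring
    out the maximum before exponentiating."""
    fvals = np.asarray(fvals, dtype=float)
    m = fvals.max()
    return m + np.log(np.exp(p * (fvals - m)).sum()) / p
```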

17.
We study methods for solving a class of quasivariational inequalities in Hilbert space in which the moving set is described by translations of a fixed, closed and convex set. We consider a variant of the gradient-type projection method and an extragradient method. The possibilities for choosing the parameters of the gradient projection method are wider in this case than in the general case of a moving set. At each iteration, the extragradient method makes one trial step along the gradient, and the gradient evaluated at the resulting point is then used at the original point as the direction of the iteration. We establish sufficient conditions for the convergence of the proposed methods and derive a new estimate of their rates of convergence. The main result of this paper is the convergence analysis of the extragradient method.
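A sketch of the classical extragradient (trial step plus correction) iteration for a variational inequality; in the quasivariational case studied in the paper the set, and hence the projector, depends on the current point, which is suppressed here for simplicity:

```python
import numpy as np

def extragradient(F, proj, x0, lam=0.1, tol=1e-8, max_iter=5000):
    """Korpelevich-type iteration: y = P(x - lam*F(x)), then x+ = P(x - lam*F(y))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj(x - lam * F(x))        # trial step along the operator
        x_new = proj(x - lam * F(y))    # correction uses F at the trial point
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```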

18.
The Barzilai–Borwein (BB) gradient method has received much study due to its simplicity and numerical efficiency. By incorporating a nonmonotone line search, Raydan (SIAM J Optim. 1997;7:26–33) successfully extended the BB gradient method to general unconstrained optimization problems, making it competitive with conjugate gradient methods. However, the numerical results reported by Raydan are poor for very ill-conditioned problems because the effect of the degree of nonmonotonicity may be noticeable. In this paper, we focus on the nonmonotone line search technique used in the global Barzilai–Borwein (GBB) gradient method. We improve the performance of the GBB gradient method by proposing an adaptive nonmonotone line search based on the morphology of the objective function. We also prove the global convergence and the R-linear convergence rate of the proposed method under reasonable assumptions. Finally, we report numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection. The results show the efficiency of the proposed method in the sense of the performance profile introduced by Dolan and Moré (Math Program. 2002;91:201–213).

19.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search method and the trust region method to generate new iterates at each iteration and therefore enjoys the advantages of both. It makes full use of multi-step iterative information at each iteration and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Using more information from previous iterative steps in this way enables the design of quickly convergent, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation, compared with other similar methods.

20.
R-linear convergence of the Barzilai and Borwein gradient method (cited by 4: 0 self-citations, 4 by others)
Combined with a non-monotone line search, the Barzilai and Borwein (BB) gradient method has been successfully extended for solving unconstrained optimization problems and is competitive with conjugate gradient methods. In this paper, we establish the R-linear convergence of the BB method for any-dimensional strongly convex quadratics. One corollary of this result is that the BB method is also locally R-linearly convergent for general objective functions, and hence the stepsize in the BB method will always be accepted by the non-monotone line search when the iterate is close to the solution.
