Similar Articles
1.
A Varied-Steplength Nonmonotone Pattern Search Method
A varied-steplength nonmonotone pattern search method is proposed in this paper. The varied-steplength search strategy is designed so that the pattern direction more closely approximates an efficient descent direction. Interpolation and nonmonotone techniques are used to improve the local search and the global convergence. Theoretical and numerical results show that this is an efficient direct search method.
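The abstract does not spell out the varied-steplength rule, so the following is only a minimal sketch of a coordinate pattern search with a nonmonotone acceptance test; the expand/shrink factors, the memory length, and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def nonmonotone_pattern_search(f, x0, step=1.0, shrink=0.5, grow=2.0,
                               memory=5, tol=1e-8, max_iter=1000):
    """Coordinate pattern search with a nonmonotone acceptance test.

    A trial point is accepted if it improves on the maximum of the last
    `memory` accepted objective values (not just the current one); the
    steplength is enlarged after a success and shrunk after a failure."""
    x = np.asarray(x0, dtype=float)
    history = [f(x)]                      # recent objective values
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        f_ref = max(history)              # nonmonotone reference value
        improved = False
        for i in range(n):                # poll the 2n coordinate directions
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < f_ref:
                    x, improved = trial, True
                    history = (history + [ft])[-memory:]
                    break
            if improved:
                break
        step = step * grow if improved else step * shrink  # varied steplength
    return x

# Usage: minimize the Rosenbrock function from a standard starting point.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(nonmonotone_pattern_search(rosen, [-1.2, 1.0]))
```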

2.
In this article, by slightly modifying the search direction of the nonmonotone Hestenes–Stiefel method, a variant Hestenes–Stiefel conjugate gradient method is proposed that satisfies the sufficient descent condition independently of any line search. The search direction also incorporates both gradient and function value information. We establish the global convergence of our methods without assuming that the steplength is bounded away from zero. Numerical results illustrate that our method can efficiently solve the test problems, and therefore is promising.
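The abstract does not give the exact modification, so for reference only the classical Hestenes–Stiefel quantities and the sufficient descent condition they target are written out here:

```latex
d_{k+1} = -g_{k+1} + \beta_k^{\mathrm{HS}} d_k, \qquad
\beta_k^{\mathrm{HS}} = \frac{g_{k+1}^{\top} y_k}{d_k^{\top} y_k}, \qquad
y_k = g_{k+1} - g_k ;
\qquad \text{sufficient descent:} \quad
g_k^{\top} d_k \le -c \, \|g_k\|^2, \quad c > 0 .
```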

3.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
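A minimal sketch of the unpreconditioned spectral projected gradient idea on a box constraint (the preconditioning extension of this work is omitted; the memory length, safeguards, and names are illustrative assumptions):

```python
import numpy as np

def spg_box(f, grad, x0, lo, hi, memory=10, gamma=1e-4,
            alpha=1.0, max_iter=500, tol=1e-6):
    """Spectral projected gradient on the box lo <= x <= hi, with a
    Grippo-Lampariello-Lucidi-style nonmonotone line search."""
    proj = lambda z: np.clip(z, lo, hi)
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    history = [f(x)]
    for _ in range(max_iter):
        d = proj(x - alpha * g) - x          # projected spectral direction
        if np.linalg.norm(d) < tol:
            break
        lam, f_ref = 1.0, max(history)       # nonmonotone reference value
        while f(x + lam * d) > f_ref + gamma * lam * g.dot(d):
            lam *= 0.5                       # nonmonotone backtracking
        x_new = x + lam * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 0 else 1.0   # spectral (BB) steplength
        x, g = x_new, g_new
        history = (history + [f(x)])[-memory:]
    return x
```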

4.
It is proved that any cluster point of a sequence defined by a steepest descent algorithm in a general normed vector space is a critical point. The function is just assumed to be continuously differentiable. The class of algorithms we consider encompasses several choices such as the Cauchy steplength and the Curry steplength.
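For reference, the two named steplengths for the step x_{k+1} = x_k - a_k * grad f(x_k) are (standard definitions, not specific to this article):

```latex
\varphi_k(\alpha) = f\bigl(x_k - \alpha \nabla f(x_k)\bigr), \qquad
\alpha_k^{\mathrm{Cauchy}} \in \operatorname*{arg\,min}_{\alpha > 0} \varphi_k(\alpha), \qquad
\alpha_k^{\mathrm{Curry}} = \min\{\alpha > 0 : \varphi_k'(\alpha) = 0\}.
```

For a strictly convex quadratic f(x) = (1/2) x^T A x - b^T x, both reduce to the exact step a_k = (g_k^T g_k)/(g_k^T A g_k) with g_k = A x_k - b.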

5.
We introduce a gradient descent algorithm for solving large-scale unconstrained nonlinear optimization problems. The initial trial steplength is computed from the quasi-Newton property, with the inverse Hessian approximated by an appropriate scalar matrix; a nonmonotone line search then refines the steplength. The computational and storage complexity of the new method equals that of the Barzilai and Borwein method, while the reported numerical results indicate improvements in favor of the new method over the well-known global Barzilai and Borwein method.
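The abstract does not state the scalar update, so the sketch below assumes a BB-type secant estimate gamma_k * I of the Hessian, which makes the initial trial step 1/gamma_k; all names and tolerances are illustrative:

```python
import numpy as np

def scalar_hessian_gradient(f, grad, x0, memory=8, sigma=1e-4,
                            max_iter=1000, tol=1e-6):
    """Gradient descent whose initial trial steplength comes from a scalar
    approximation gamma*I of the Hessian (a BB-type secant estimate is
    assumed here), refined by a nonmonotone backtracking line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    gamma = 1.0                                  # scalar Hessian estimate
    history = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0 / gamma                          # initial trial steplength
        f_ref = max(history)                     # nonmonotone reference
        while f(x - t * g) > f_ref - sigma * t * g.dot(g):
            t *= 0.5                             # nonmonotone backtracking
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        gamma = max(s.dot(y) / s.dot(s), 1e-10)  # secant scalar update
        x, g = x_new, g_new
        history = (history + [f(x)])[-memory:]
    return x
```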

6.
A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations is presented. It systematically uses the residual vector as a search direction, a spectral steplength that produces a nonmonotone process, and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented, together with an extensive set of numerical experiments indicating that the new combination is competitive with, and frequently better than, well-known Newton–Krylov methods for large-scale problems.
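A minimal sketch of this kind of spectral residual scheme, with the merit function ||F||^2 and both signs of the residual direction tried since no derivatives are available; the paper's exact nonmonotone rule is replaced by a simple max-of-recent-values test, and all constants are illustrative:

```python
import numpy as np

def df_sane(F, x0, memory=10, gamma=1e-4, max_iter=500, tol=1e-8):
    """Derivative-free spectral residual sketch for F(x) = 0: the residual
    is the search direction, a spectral coefficient scales it, and a
    nonmonotone decrease condition on ||F||^2 globalizes the iteration."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    merit = lambda v: float(v.dot(v))          # ||F||^2 merit function
    sigma, history = 1.0, [merit(Fx)]
    for _ in range(max_iter):
        if merit(Fx) ** 0.5 < tol:
            break
        d = -sigma * Fx                        # spectral residual direction
        lam, f_ref, step = 1.0, max(history), None
        while step is None:
            for s in (1.0, -1.0):              # derivative-free: try both signs
                if merit(F(x + s * lam * d)) <= f_ref - gamma * lam**2 * merit(Fx):
                    step = s * lam
                    break
            if step is None:
                lam *= 0.5                     # nonmonotone backtracking
                if lam < 1e-12:
                    step = lam                 # give up on sufficient decrease
        x_new = x + step * d
        F_new = F(x_new)
        s_vec, y_vec = x_new - x, F_new - Fx
        sy = s_vec.dot(y_vec)
        sigma = s_vec.dot(s_vec) / sy if abs(sy) > 1e-12 else 1.0
        x, Fx = x_new, F_new
        history = (history + [merit(Fx)])[-memory:]
    return x
```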

7.
周群艳  陈俊 《应用数学》2012,25(1):202-208
This paper proposes a new globally convergent gradient method for solving large-scale unconstrained optimization problems. The new algorithm chooses its steplength along the negative gradient direction, with the initial steplength determined from a scalar-matrix approximation of the Hessian of the objective function. It is proved that every cluster point of the sequence generated by the new algorithm is a stationary point, and numerical experiments show that the new algorithm is reliable and effective.

8.
We propose a new gradient method for quadratic programming, named SDC, which alternates some steepest descent (SD) iterates with gradient iterates that use a constant steplength computed through the Yuan formula. The SDC method exploits the asymptotic spectral behaviour of the Yuan steplength to foster a selective elimination of the components of the gradient along the eigenvectors of the Hessian matrix, i.e., to push the search into subspaces of smaller and smaller dimensions. The new method is globally and R-linearly convergent. Furthermore, numerical experiments show that it tends to outperform the Dai–Yuan method, which is one of the fastest gradient methods. In particular, SDC appears superior as the Hessian condition number and the accuracy requirement increase. Finally, if the number of consecutive SD iterates is not too small, the SDC method shows monotonic behaviour.

9.
A new subspace minimization conjugate gradient algorithm with a nonmonotone Wolfe line search is proposed and analyzed. In the scheme, we propose two choices of search direction, obtained by minimizing a quadratic approximation of the objective function over special subspaces, and state criteria for choosing between them. Under given conditions, we obtain the significant conclusion that each choice of direction satisfies the sufficient descent property. Based on a measure of how close the function is to a quadratic, a new strategy for choosing the initial stepsize of the line search is presented. With the nonmonotone Wolfe line search, we prove the global convergence of the proposed method for general nonlinear functions under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.

10.
On the Nonmonotone Line Search
The technique of nonmonotone line search has received many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along different lines. The analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between nonmonotone line search and R-linear convergence under the assumption that the objective function is uniformly convex. In addition, taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest falling back to a standard Armijo line search when the nonmonotone condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
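The nonmonotone condition in question is, in its classical Grippo-Lampariello-Lucidi form, the following acceptance test for the steplength a_k along a descent direction d_k:

```latex
f(x_k + \alpha_k d_k) \;\le\; \max_{0 \le j \le m(k)} f(x_{k-j})
\;+\; \delta \, \alpha_k \, g_k^{\top} d_k, \qquad 0 < \delta < 1,
```

where m(0) = 0 and 0 <= m(k) <= min{m(k-1) + 1, M} for a fixed memory M; taking M = 0 recovers the standard monotone Armijo rule.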

11.
In this paper, an adaptive nonmonotone line search method for unconstrained minimization problems is proposed. At every iteration, the new algorithm selects one of two directions, a Newton-type direction or a negative curvature direction, to perform the line search. The nonmonotone technique is included in the backtracking line search when the Newton-type direction is the search direction; when the negative curvature direction is the search direction, we instead increase the steplength under certain conditions. Global convergence to a stationary point satisfying the second-order optimality conditions is established. Some numerical results showing the efficiency of the new algorithm are reported.
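The paper's selection rule is not reproduced in the abstract; the sketch below shows one common way such a choice can be made from the Hessian spectrum (an illustrative rule only, not the paper's):

```python
import numpy as np

def choose_direction(g, H, eps=1e-8):
    """Return a Newton-type or a negative curvature direction, plus a flag
    saying which was chosen (an illustrative rule, not the paper's)."""
    w, V = np.linalg.eigh(H)                  # H symmetric: full spectrum
    if w[0] >= eps:                           # H safely positive definite:
        return -np.linalg.solve(H, g), False  # ... use the Newton direction
    d = V[:, 0]                               # eigenvector of the most
    if d.dot(g) > 0:                          #   negative eigenvalue,
        d = -d                                #   oriented downhill
    return d, True
```

Along a negative curvature direction the quadratic model keeps decreasing even where the gradient term vanishes, which is why a longer rather than a shorter steplength can be justified there, as the abstract indicates.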

12.
On the Newton Interior-Point Method for Nonlinear Programming Problems
Interior-point methods have been developed largely for nonlinear programming problems. In this paper, we generalize the global Newton interior-point method introduced in Ref. 1 and we establish a global convergence theory for it, under the same assumptions as those stated in Ref. 1. The generalized algorithm gives the possibility of choosing different descent directions for a merit function, so that difficulties due to a small steplength for the perturbed Newton direction can be avoided. The particular choice of the perturbation enables us to interpret the generalized method as an inexact Newton method. Also, we suggest a more general criterion for backtracking, which is useful when the perturbed Newton system is not solved exactly. We include numerical experimentation on discrete optimal control problems.

13.
Global Convergence of a Class of Nonmonotone Modified PRP Algorithms
易芳 《经济数学》2006,23(1):99-103
This paper presents a class of modified PRP algorithms under a nonmonotone line search, in which the search direction at every iteration is guaranteed to be a sufficient descent direction. Under fairly weak conditions, we prove that this class of nonmonotone modified PRP algorithms is globally convergent.
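For reference, the classical PRP coefficient being modified, together with the common PRP+ truncation often used to help enforce sufficient descent (shown as a typical example, since the abstract does not give the paper's exact modification):

```latex
\beta_k^{\mathrm{PRP}} = \frac{g_{k+1}^{\top}\,(g_{k+1} - g_k)}{\|g_k\|^2}, \qquad
\beta_k^{\mathrm{PRP+}} = \max\bigl\{\beta_k^{\mathrm{PRP}},\, 0\bigr\}, \qquad
d_{k+1} = -g_{k+1} + \beta_k d_k .
```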

14.
The molecular similarity index measures the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and the gradient vector can be computed analytically, and apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and biomolecules such as proteins. Our computational results show that our algorithm achieves higher accuracy than previous methods and has a 6-fold speedup over the steepest descent method.

15.
Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization
For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in real computations. In this paper we interpret the choice of stepsize in the two-point stepsize gradient method from the viewpoint of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
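The two-point (Barzilai-Borwein) stepsizes in question are, with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1},

```latex
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
```

obtained by solving the secant condition in the least-squares senses min over alpha of ||(1/alpha) s_{k-1} - y_{k-1}|| and of ||s_{k-1} - alpha y_{k-1}||, respectively; the paper reinterprets and modifies this choice via interpolation.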

16.
In this paper, a new spectral PRP conjugate gradient algorithm is developed for solving unconstrained optimization problems, in which the search direction is a combination of the gradient and the previously obtained direction, and the steplength is obtained by a Wolfe-type inexact line search. It is proved that the search direction at each iteration is a descent direction of the objective function. Under mild conditions, we establish the global convergence theorem of the proposed method. Numerical results show that the algorithm is promising, particularly in comparison with several existing mainstream methods.

17.
This article presents enhancement strategies for the Hermitian and skew-Hermitian splitting method based on gradient iterations. The spectral properties are exploited for the parameter estimation, often resulting in better convergence. In particular, steepest descent with early stopping can generate a rough estimate of the optimal upper bound. This is better than an arbitrary choice, since the latter often causes stability problems or slow convergence. In addition, delayed gradient methods are considered as inner solvers for the splitting method. Experiments verify the effectiveness of the proposed estimation strategies and show that delayed gradient methods are competitive with conjugate gradient at low precision.
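The underlying iteration is the standard Hermitian/skew-Hermitian splitting for Ax = b; a minimal dense-matrix sketch follows, with the splitting parameter alpha passed in directly rather than estimated by the gradient iterations the article describes:

```python
import numpy as np

def hss(A, b, alpha, x0=None, tol=1e-10, max_iter=500):
    """Hermitian and skew-Hermitian splitting (HSS) iteration for A x = b,
    where A = H + S, H = (A + A*)/2 is Hermitian, S = (A - A*)/2 is
    skew-Hermitian, and alpha > 0 is the splitting parameter."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.zeros(n, dtype=A.dtype) if x0 is None else np.asarray(x0, dtype=A.dtype)
    for _ in range(max_iter):
        # Half-step 1: (alpha I + H) x_half = (alpha I - S) x + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # Half-step 2: (alpha I + S) x_new = (alpha I - H) x_half + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x
```

For Hermitian positive definite H, the classical bound-minimizing choice is alpha = sqrt(lambda_min(H) * lambda_max(H)), which is what a cheap steepest-descent eigenvalue estimate can approximate.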

18.
On the Barzilai and Borwein choice of steplength for the gradient method
In a recent paper, Barzilai and Borwein presented a new choice of steplength for the gradient method. Their choice does not guarantee descent in the objective function, yet greatly speeds up the convergence of the method. They presented a convergence analysis of their method only in the two-dimensional quadratic case. We establish the convergence of the Barzilai and Borwein gradient method when applied to the minimization of a strictly convex quadratic function of any number of variables.

19.
In this paper, we first present an adaptive nonmonotone term to improve the efficiency of the nonmonotone line search, and then suggest an active set identification technique to obtain a more efficient descent direction, which improves the local convergence behavior of the algorithm and decreases the computational cost. By means of the adaptive nonmonotone line search and the active set identification technique, we put forward a globally convergent gradient-based method for nonnegative matrix factorization (NMF) based on the alternating nonnegative least squares framework, into which we introduce a modified Barzilai-Borwein (BB) step size. The new modified BB step size and a larger-step-size strategy are exploited to accelerate convergence. Finally, the results of extensive numerical experiments using both synthetic and image datasets show that the proposed method is efficient in terms of computational speed.
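A minimal sketch of the alternating nonnegative least squares skeleton with a plain projected Barzilai-Borwein step per subproblem (the adaptive nonmonotone search, the active set identification, and the modified BB step of the paper are all omitted; names and defaults are illustrative):

```python
import numpy as np

def nmf_anls_bb(V, rank, outer=100, inner=10, seed=0):
    """NMF: V ~= W @ H with W, H >= 0, alternating over W and H; each
    subproblem gets a few projected-gradient steps whose steplength is
    the BB1 formula (simplified: no line search)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))

    def bb_steps(X, grad, steps, alpha0=1e-3):
        g, alpha = grad(X), alpha0
        for _ in range(steps):
            X_new = np.maximum(X - alpha * g, 0.0)   # project onto X >= 0
            g_new = grad(X_new)
            s, y = X_new - X, g_new - g
            sy = np.sum(s * y)
            alpha = np.sum(s * s) / sy if sy > 1e-12 else alpha0  # BB1 step
            X, g = X_new, g_new
        return X

    for _ in range(outer):
        # gradient of 0.5*||V - W H||_F^2: w.r.t. H it is W^T (W H - V),
        # and w.r.t. W it is (W H - V) H^T
        H = bb_steps(H, lambda Z: W.T @ (W @ Z - V), inner)
        W = bb_steps(W, lambda Z: (Z @ H - V) @ H.T, inner)
    return W, H
```

Without a line search the raw BB step can oscillate; the paper's adaptive nonmonotone search is precisely what makes this kind of iteration globally convergent.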
