Similar Literature
20 similar records found.
1.
In global optimization, a typical population-based stochastic search method works on a set of sample points from the feasible region. In this paper, we study a recently proposed method of this sort. The method utilizes an attraction-repulsion mechanism to move sample points toward optimality and is thus referred to as the electromagnetism-like method (EM). Computational results have shown that EM is robust in practice, so we further investigate its theoretical structure. After reviewing the original method, we present some modifications that are necessary for the convergence proof. We show that, in the limit, the modified method converges to the vicinity of the global optimum with probability one.
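To make the attraction-repulsion idea concrete, below is a minimal Python sketch of the basic EM mechanism in the style commonly attributed to Birbil and Fang; the charge and force formulas follow that basic scheme, not the modified variant analyzed in this paper, and the objective function and box bounds are placeholder assumptions.

```python
import numpy as np

def em_step(points, f, lo, hi, rng):
    """One iteration of a basic electromagnetism-like (EM) move.

    points : (m, n) array of sample points inside the box [lo, hi]^n
    f      : objective function to minimize
    """
    m, n = points.shape
    vals = np.array([f(x) for x in points])
    best = vals.min()
    # Charges: better points receive larger charge (stronger attraction).
    denom = np.sum(vals - best) + 1e-12
    q = np.exp(-n * (vals - best) / denom)

    new_pts = points.copy()
    best_idx = int(np.argmin(vals))
    for i in range(m):
        if i == best_idx:                   # keep the current best point fixed
            continue
        force = np.zeros(n)
        for j in range(m):
            if j == i:
                continue
            diff = points[j] - points[i]
            dist2 = diff @ diff + 1e-12
            if vals[j] < vals[i]:           # attraction toward a better point
                force += diff * q[i] * q[j] / dist2
            else:                           # repulsion from a worse point
                force -= diff * q[i] * q[j] / dist2
        step = rng.uniform()                # random step length in (0, 1)
        unit = force / (np.linalg.norm(force) + 1e-12)
        new_pts[i] = np.clip(points[i] + step * unit * (hi - lo), lo, hi)
    return new_pts

# Toy usage on a placeholder objective (assumption, not from the paper).
rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, size=(20, 2))
for _ in range(50):
    pop = em_step(pop, sphere, lo, hi, rng)
print(min(sphere(x) for x in pop))
```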

2.
The global optimization problem with general constraints is discussed. A random search algorithm for this problem is given, and it is proved that the algorithm converges to a global optimal solution with probability 1. Numerical results show that the method is effective.

3.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the corresponding nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
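As an illustration of the two ingredients mentioned above, the following Python sketch combines a self-scaling inverse BFGS update (Oren-Luenberger-style scaling factor) with a Grippo-Lampariello-Lucidi-type nonmonotone Armijo backtracking rule; the constants, memory length, and test function are placeholder assumptions and the code is not the authors' exact algorithm.

```python
import numpy as np

def ss_bfgs_nonmonotone(f, grad, x0, iters=200, M=10, c1=1e-4, tol=1e-8):
    """Self-scaling inverse BFGS with a nonmonotone Armijo backtracking search."""
    n = x0.size
    x, H = x0.astype(float), np.eye(n)
    fvals = [f(x)]                          # memory of recent function values
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        # Nonmonotone Armijo: compare against the max of the last M values.
        fmax, t = max(fvals[-M:]), 1.0
        while f(x + t * d) > fmax + c1 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        s = t * d
        x_new = x + s
        y = grad(x_new) - g
        sy = s @ y
        if sy > 1e-12:                      # curvature condition; otherwise skip the update
            tau = sy / (y @ H @ y)          # self-scaling factor
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ (tau * H) @ V.T + rho * np.outer(s, s)
        x = x_new
        fvals.append(f(x))
    return x

# Placeholder test problem (Rosenbrock), purely for illustration.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(ss_bfgs_nonmonotone(f, grad, np.array([-1.2, 1.0])))
```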

4.
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.

5.
Convergence of the BFGS algorithm for nonconvex optimization problems
The BFGS algorithm is one of the best-known numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm with a Wolfe line search applied to a nonconvex objective function and gives a sufficient condition under which the algorithm converges.

6.
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on standard test problems when the parameter included in the method is chosen well.
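To convey the flavor of a memory gradient direction combined with a Wolfe line search, here is a small Python sketch; the specific memory weight β_k used below is just one simple choice that guarantees a descent direction and is an assumption, not the formula proposed in the paper. The Wolfe step is obtained from scipy.optimize.line_search.

```python
import numpy as np
from scipy.optimize import line_search

def memory_gradient(f, grad, x0, gamma=0.5, iters=500, tol=1e-8):
    """Memory gradient sketch: d_k = -g_k + beta_k * d_{k-1} with a Wolfe line search.

    Choosing beta_k = gamma * ||g_k|| / ||d_{k-1}|| gives
    g_k^T d_k <= -(1 - gamma) * ||g_k||^2, so every direction is a descent
    direction (an illustrative choice, not the paper's formula).
    """
    x = x0.astype(float)
    d_prev = None
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        if d_prev is None:
            d = -g
        else:
            beta = gamma * np.linalg.norm(g) / (np.linalg.norm(d_prev) + 1e-12)
            d = -g + beta * d_prev
        # Step size satisfying the (strong) Wolfe conditions.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:           # fall back to a tiny step if the search fails
            alpha = 1e-4
        x = x + alpha * d
        d_prev = d
    return x

# Placeholder ill-conditioned quadratic, for illustration only.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(memory_gradient(f, grad, np.ones(3)))
```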

7.
This paper proposes a class of three-term hybrid conjugate gradient algorithms for solving unconstrained optimization problems. The new algorithm combines the Hestenes-Stiefel method with the Dai-Yuan method, and its convergence under the Wolfe line search rule is proved without requiring a descent condition. Numerical experiments also show the advantage of this hybrid conjugate gradient algorithm over the HS and PRP methods.
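The following Python sketch shows one common way to hybridize the Hestenes-Stiefel and Dai-Yuan parameters, namely β_k = max(0, min(β_HS, β_DY)), inside a standard two-term conjugate gradient iteration with a Wolfe line search; it is a simplified illustration, not the three-term formula of the paper, and the test problem is a placeholder assumption.

```python
import numpy as np
from scipy.optimize import line_search

def hybrid_hs_dy_cg(f, grad, x0, iters=500, tol=1e-8):
    """Conjugate gradient with a hybrid HS/DY beta and a Wolfe line search (sketch)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        if abs(dy) > 1e-12:
            beta_hs = (g_new @ y) / dy       # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / dy   # Dai-Yuan
            beta = max(0.0, min(beta_hs, beta_dy))
        else:
            beta = 0.0                       # restart with steepest descent
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Placeholder convex quadratic test problem.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(hybrid_hs_dy_cg(f, grad, np.zeros(2)))
```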

8.
We propose an algorithm for multistage stochastic linear programs with recourse in which the random quantities in different stages are independent. The algorithm successively approximates the expected recourse functions by building up valid cutting planes that support these functions from below. In each iteration, for the expected recourse function in each stage, one cutting plane is generated using the dual extreme points of the next-stage problem found so far. We prove that the algorithm is convergent with probability one.
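To show the cutting-plane idea on a drastically simplified two-stage example, the Python sketch below supports the expected recourse function from below with cuts built from subgradients. The newsvendor-style recourse function, its analytic subgradient, and all numerical data are illustrative assumptions, and a grid enumeration stands in for the master LP machinery of the actual multistage algorithm.

```python
import numpy as np

# Toy two-stage problem (all data illustrative): choose a stock level x >= 0 at
# unit cost c; after the demand xi is revealed, pay q per unit of unmet demand.
c, q = 1.0, 3.0
scenarios = np.array([2.0, 5.0, 8.0])       # equally likely demand scenarios
probs = np.full(3, 1.0 / 3.0)

def expected_recourse(x):
    return float(probs @ (q * np.maximum(scenarios - x, 0.0)))

def recourse_subgradient(x):
    # A subgradient of E[Q] at x: -q * P(demand > x).
    return float(-q * probs @ (scenarios > x))

cuts = [(0.0, 0.0)]                         # cut list: theta >= a + b * x  (Q >= 0)
grid = np.linspace(0.0, 10.0, 2001)         # enumeration stands in for the master LP
for it in range(20):
    # Master problem: minimize c*x + theta with theta bounded below by all cuts.
    theta = np.max([a + b * grid for a, b in cuts], axis=0)
    master_obj = c * grid + theta
    k = int(np.argmin(master_obj))
    x_k, lower = grid[k], master_obj[k]
    upper = c * x_k + expected_recourse(x_k)
    if upper - lower < 1e-6:
        break
    # New cut supporting the expected recourse function from below at x_k.
    g = recourse_subgradient(x_k)
    cuts.append((expected_recourse(x_k) - g * x_k, g))

print(f"approximate first-stage decision x = {x_k:.3f}, cost = {upper:.3f}")
```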

9.
Matyas' random optimization method (Ref. 1) is applied to the constrained nonlinear minimization problem, and its convergence properties are studied. It is shown that the global minimum can be found with probability one, even if the performance function is multimodal (has several local minima) and even if its differentiability is not ensured. The author would like to thank Professors Y. Sawaragi (Kyoto University), T. Soeda (Tokushima University), and T. Shoman (Tokushima University) for their kind advice.
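A minimal Python sketch of a Matyas-style random optimization step for a constrained problem is given below: trial points that are infeasible or do not improve the objective are simply rejected, and no gradients or smoothness are needed. The constraint, objective, and noise scale are placeholder assumptions, not the setting of Ref. 1.

```python
import numpy as np

def random_optimization(f, feasible, x0, sigma=0.5, iters=5000, seed=0):
    """Matyas-style random search: perturb the current point and keep the trial
    only if it is feasible and improves the objective."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(iters):
        trial = x + rng.normal(scale=sigma, size=x.shape)
        if feasible(trial) and f(trial) < fx:
            x, fx = trial, f(trial)
    return x, fx

# Placeholder multimodal objective and a simple ball constraint (assumptions).
f = lambda x: float(np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))
feasible = lambda x: np.linalg.norm(x) <= 5.0
x_best, f_best = random_optimization(f, feasible, np.array([3.0, -4.0]))
print(x_best, f_best)
```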

10.
A class of simulated annealing algorithms for continuous global optimization is considered in this paper. The global convergence property is analyzed with respect to the objective value sequence and the minimum objective value sequence induced by the simulated annealing algorithms. The convergence analysis provides appropriate conditions on both the generation probability density function and the temperature updating function. Different forms of the temperature updating function are obtained for different kinds of generation probability density functions, leading to different types of simulated annealing algorithms which all guarantee convergence to the global optimum.
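The sketch below is a basic continuous simulated annealing loop with a Gaussian generation density and a logarithmic cooling schedule, which is the classical pairing for this kind of guarantee; the objective and constants are illustrative assumptions, and the paper's precise conditions on the generation density and the temperature update are more general than this single combination.

```python
import numpy as np

def continuous_sa(f, x0, sigma=1.0, T0=10.0, iters=20000, seed=0):
    """Continuous simulated annealing: Gaussian proposals, Metropolis acceptance,
    and logarithmic cooling T_k = T0 / log(k + 2) (a classical, slow schedule)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for k in range(iters):
        T = T0 / np.log(k + 2)
        y = x + rng.normal(scale=sigma, size=x.shape)   # generation density
        fy = f(y)
        if fy <= fx or rng.uniform() < np.exp(-(fy - fx) / T):
            x, fx = y, fy                               # Metropolis acceptance
        if fx < best_f:                                 # minimum objective value sequence
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Placeholder multimodal test function (assumption).
f = lambda x: float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))
print(continuous_sa(f, np.array([4.0, -3.0])))
```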

11.
A new recursive algorithm is proposed for searching for the global minimizer of a function when the function is observed with noise. The algorithm is based on switching between stochastic approximation (SA) and random search (RS). Combining SA with RS is not a new idea; in such a combination, the difficulty lies in creating a good switching rule and in designing an efficient method to reduce the noise effect. The proposed switching rule is easy to implement, the noise-reducing method is effective, and the whole recursive optimization algorithm is simple to compute. It is proved that the algorithm converges a.s. to the global minimizer and is asymptotically normal. In comparison with existing methods, the proposed algorithm not only requires much weaker conditions but is also more efficient, as shown by simulation.
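The following Python sketch conveys the switching idea only schematically: it alternates between a global random-search phase and a local Kiefer-Wolfowitz-type stochastic approximation phase, and reduces noise by averaging repeated observations. The switching rule, gain sequences, noise model, and test function are all illustrative assumptions and differ from the algorithm actually proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Noisy observations of a multimodal function (placeholder noise model).
noisy = lambda x: float(np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10)) + rng.normal(0, 0.5)

def averaged(noisy_f, x, m):
    """Reduce observation noise by averaging m independent evaluations."""
    return np.mean([noisy_f(x) for _ in range(m)])

def sa_rs_hybrid(noisy_f, dim=2, radius=5.0, rounds=30, sa_steps=200, m=20):
    best_x = rng.uniform(-radius, radius, size=dim)
    best_f = averaged(noisy_f, best_x, m)
    for r in range(rounds):
        # Random-search phase: draw a global candidate; switch to SA only if it looks better.
        cand = rng.uniform(-radius, radius, size=dim)
        if averaged(noisy_f, cand, m) >= best_f:
            continue
        # Stochastic-approximation phase: Kiefer-Wolfowitz finite-difference descent.
        x = cand.copy()
        for k in range(1, sa_steps + 1):
            a, c = 0.02 / k, 0.5 / k**0.25            # illustrative gain sequences
            g = np.zeros(dim)
            for i in range(dim):
                e = np.zeros(dim)
                e[i] = c
                g[i] = (averaged(noisy_f, x + e, 3) - averaged(noisy_f, x - e, 3)) / (2*c)
            x = np.clip(x - a * g, -radius, radius)
        fx = averaged(noisy_f, x, m)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

print(sa_rs_hybrid(noisy))
```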

12.
The conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient method and proves its global convergence under a generalized Wolfe line search condition. Finally, numerical experiments are carried out, and the results show that the algorithm converges well and is effective.

13.
Global convergence of a new family of conjugate gradient methods
杜学武, 徐成贤. 《数学研究》, 1999, 32(3): 277-280
A new family of conjugate gradient methods for solving unconstrained optimization problems is proposed, and the descent property and global convergence of one of its subfamilies under an inexact line search are proved.

14.
Global convergence of a class of unconstrained minimization methods that includes the FR method
This paper studies the global convergence of a class of unconstrained optimization methods that includes the Fletcher-Reeves conjugate gradient method. Certain properties of the Fletcher-Reeves method play an important role in the convergence analysis. We prove, in a simple way, that under a Wolfe-type inexact line search this class of methods has the descent property and is globally convergent for smooth nonconvex functions. The global convergence result is also extended to a generalized Wolfe-type inexact line search.

15.
The essence of integral-type methods is explained by means of the integral mean value theorem, their advantages and shortcomings are pointed out, a corresponding improvement, the variable-measure algorithm, is proposed, and the convergence of the variable-measure algorithm is proved.
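The integral mean-value idea behind integral-type methods can be sketched as a Monte Carlo level-set iteration: the current level c is replaced by the mean value of f over the level set {x : f(x) <= c}, which decreases toward the global minimum value. The sampling box, sample size, and objective below are illustrative assumptions; the variable-measure refinement proposed in the paper, which re-concentrates the sampling measure on the current level set, is only indicated in a comment.

```python
import numpy as np

def level_set_mean_value(f, lo, hi, dim=2, n_samples=20000, iters=50, seed=0):
    """Monte Carlo sketch of the integral (mean-value / level-set) iteration:
    c_{k+1} = mean of f over the level set {x : f(x) <= c_k}.
    A variable-measure variant would re-concentrate the sampling distribution
    on the current level set instead of always sampling the whole box."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, size=(n_samples, dim))
    vals = np.array([f(x) for x in xs])
    c = vals.mean()                          # initial level
    for _ in range(iters):
        inside = vals <= c
        if not inside.any():
            break
        c_new = vals[inside].mean()          # mean value of f over the level set
        if c - c_new < 1e-10:
            break
        c = c_new
    return c                                 # approximates the global minimum value

f = lambda x: float(np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10))   # placeholder objective
print(level_set_mean_value(f, -5.0, 5.0))
```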

16.
In this paper we give some new convergence conditions for conjugate gradient algorithms. These conditions generalize existing ones, so that known convergence results for conjugate gradient algorithms become special cases of the results in this paper.

17.
In this paper, a new nonmonotone BFGS algorithm for unconstrained optimization is introduced. Under mild conditions, the global convergence of this new algorithm on convex functions is proved. Some numerical experiments show that the new nonmonotone BFGS algorithm is competitive with the standard BFGS algorithm.

18.
Global convergence of a new conjugate gradient method under the Wolfe line search
The conjugate gradient method is an important method for solving unconstrained optimization problems. This paper proposes a new family of conjugate gradient methods and proves their global convergence under a generalized Wolfe inexact line search condition. Finally, numerical experiments are carried out, and the results confirm the effectiveness of the algorithm.

19.
This paper studies whether non-quasi-Newton methods with a nonmonotone line search are globally convergent for unconstrained optimization problems, and proves that the non-quasi-Newton family is globally convergent under the condition that the objective function is uniformly convex.

20.
A new super-memory gradient algorithm for unconstrained optimization is studied. At each iteration the algorithm makes full use of information from previous iterates to generate a descent direction and uses a Wolfe line search to determine the step size; global convergence is proved under rather weak conditions. The new algorithm requires neither computing nor storing matrices at each iteration and is therefore suitable for large-scale optimization problems.
