Similar Literature
20 similar documents found (search time: 15 ms).
1.
On the Descent Property and Convergence of Conjugate Gradient Methods   (Cited: 2; self-citations: 0; citations by others: 2)
This paper gives a restart criterion designed to guarantee the descent property of conjugate gradient methods. With it, we not only obtain the convergence of general conjugate gradient methods under various parameter choices, but also extend the results of Ref. 1.

2.
Building on the new class of conjugate gradient methods proposed in [1], this paper presents two new classes of nonlinear descent conjugate gradient methods for unconstrained optimization. Both classes generate a descent direction at every iteration without any line search. For general nonconvex functions, global convergence of both new classes is proved under the Wolfe line search conditions.

3.
This paper proposes a new class of conjugate gradient methods related to the HS (Hestenes–Stiefel) method. Under the strong Wolfe line search, these methods guarantee sufficient descent of the search direction, and their global convergence is proved without assuming that the objective function is convex. A particular member of the class is also given, and by adjusting the parameter ρ its effectiveness is verified on a set of test functions.

4.
Global Convergence of a New Class of Conjugate Descent Algorithms   (Cited: 2; self-citations: 1; citations by others: 1)
This paper proposes a new class of conjugate descent algorithms whose iteration directions remain descent directions throughout the iterative process. Global convergence of the algorithms is proved under inexact line search.

5.
Global Convergence of a New Family of Conjugate Gradient Methods   (Cited: 1; self-citations: 0; citations by others: 1)
The conjugate gradient method is an important method for unconstrained optimization, particularly well suited to large-scale problems. This paper proposes a new family of conjugate gradient methods and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments are carried out, and the results confirm the effectiveness of the algorithm.
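Several of the abstracts in this list concern nonlinear conjugate gradient methods with inexact line searches. As a hedged illustration of the general method family (not the algorithm of any one paper), the sketch below implements the Fletcher–Reeves iteration with a simple backtracking Armijo line search standing in for the generalized Wolfe conditions; the quadratic test problem, function names, and restart rule are illustrative assumptions.

```python
import numpy as np

def fr_cg(grad_f, f, x0, max_iter=200, tol=1e-6):
    """Fletcher-Reeves conjugate gradient with a backtracking (Armijo) line search."""
    x = x0.astype(float)
    g = grad_f(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search: shrink alpha until the Armijo condition holds.
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves formula
        d = -g_new + beta * d
        if g_new @ d >= 0 or (k + 1) % len(x) == 0:
            d = -g_new                         # restart if descent is lost
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x, minimizer A^{-1} b.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = fr_cg(grad, f, np.zeros(5))
```

The restart rule is one common way to keep the search direction a descent direction under an inexact line search, echoing the restart criterion of entry 1.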

6.
This paper gives a feasible descent method for a class of nondifferentiable optimization problems with linear constraints, whose objective function is the composition of a convex function and a differentiable function. The algorithm finds feasible descent directions by solving a sequence of quadratic programs, and new iterates are generated by an inexact line search. Global convergence of the algorithm is proved under fairly weak conditions.

7.
In this paper, we consider the continuously differentiable optimization problem min{f(x) : x ∈ Ω}, where Ω ⊆ R^n is a nonempty closed convex set. The gradient projection method of Calamai and Moré (Math. Programming, Vol. 39, pp. 93–116, 1987) is modified with a memory gradient term to improve its convergence rate. Convergence of the new method is analyzed without assuming that the iteration sequence {x^k} is bounded. Moreover, it is shown that when f(x) is a pseudo-convex (quasi-convex) function, the new method has strong convergence results. The numerical results show that the method in this paper is more effective than the gradient projection method.
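For context, here is a minimal sketch of the basic gradient projection iteration that this entry modifies (the paper's memory-gradient term is omitted; the box-constrained example, fixed step size, and stopping rule are assumptions for illustration):

```python
import numpy as np

def projected_gradient(grad_f, project, x0, step=0.1, max_iter=500, tol=1e-8):
    """Gradient projection: take a gradient step, then project back onto the feasible set."""
    x = project(x0)
    for _ in range(max_iter):
        x_new = project(x - step * grad_f(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: minimize ||x - c||^2 over the box Omega = [0, 1]^3.
c = np.array([1.5, -0.3, 0.4])
grad = lambda x: 2 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)   # Euclidean projection onto the box
x_star = projected_gradient(grad, project, np.zeros(3))
# The minimizer is the projection of c onto the box: [1.0, 0.0, 0.4]
```

For a box, the projection is a componentwise clip; for a general closed convex set Ω it is the nearest-point map, which is what makes the projected step well defined.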

8.
This paper gives a class of conjugate gradient methods with four parameters and analyzes two subclasses of it. It is proved that when the step size satisfies a more general Wolfe condition, the methods in both subclasses are descent algorithms; global convergence of both subclasses is also established.

9.
A Global Optimization Method for a Class of Sum-of-Ratios Problems   (Cited: 1; self-citations: 1; citations by others: 0)
A global optimization algorithm is given for a class of sum-of-ratios problems (P). First, an equivalent problem (P1) is derived by exploiting the structure of the linear constraints; then a relaxed linear program (RLP) for (P1) is constructed by a new linear relaxation technique. Through successive refinement of the linear relaxation of the feasible region and the solution of a sequence of linear programs, the proposed branch-and-bound algorithm converges to a global optimum of problem (P). Numerical results demonstrate the feasibility and efficiency of the algorithm.

10.
A general class of convexification and concavification methods is proposed for solving some classes of global optimization problems with certain monotonicity properties. It is shown that these minimization problems can be transformed into an equivalent concave minimization problem, a reverse convex programming problem, or a canonical D.C. programming problem by using the proposed convexification and concavification schemes. Existing algorithms can then be used to find the global solutions of the transformed problems.

11.
This paper presents a family of projected descent direction algorithms with inexact line search for solving large-scale minimization problems subject to simple bounds on the decision variables. The global convergence of algorithms in this family is ensured by conditions on the descent directions and the line search. Whenever a sequence constructed by an algorithm in this family enters a sufficiently small neighborhood of a local minimizer satisfying standard second-order sufficiency conditions, it gets trapped and converges to this local minimizer; furthermore, in this case, the active constraint set at this minimizer is identified in a finite number of iterations. This fact is used to ensure that the rate of convergence to such a local minimizer depends only on the behavior of the algorithm in the unconstrained subspace. As a particular example, we present projected versions of the modified Polak–Ribière conjugate gradient method and the limited-memory BFGS quasi-Newton method that retain the convergence properties associated with those algorithms applied to unconstrained problems.

12.
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which includes the well-known Polak–Ribière and Hestenes–Stiefel methods as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when applied to unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak–Ribière and Hestenes–Stiefel methods in this class are competitive with their monotone counterparts.
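The abstract does not spell out its nonmonotone rule. The sketch below uses the classical Grippo–Lampariello–Lucidi nonmonotone Armijo test, in which a trial step is accepted against the maximum of the last few function values rather than the current one, combined with a Polak–Ribière(+) direction; the window size, test function, and all parameter choices are illustrative assumptions, not the paper's method.

```python
import numpy as np
from collections import deque

def nonmonotone_pr_cg(f, grad_f, x0, memory=5, max_iter=500, tol=1e-6):
    """Polak-Ribiere(+) CG with a Grippo-Lampariello-Lucidi nonmonotone line search:
    a step is accepted if it improves on the MAX of the last `memory` f-values."""
    x = x0.astype(float)
    g = grad_f(x)
    d = -g
    recent = deque([f(x)], maxlen=memory)   # sliding window of recent function values
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, c1 = 1.0, 1e-4
        f_ref = max(recent)                 # nonmonotone reference value
        while f(x + alpha * d) > f_ref + c1 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d_new = -g_new + beta * d
        d = d_new if g_new @ d_new < 0 else -g_new       # keep a descent direction
        x, g = x_new, g_new
        recent.append(f(x))
    return x

# Rosenbrock function as a standard nonconvex test problem.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
x0 = np.array([-1.2, 1.0])
x_star = nonmonotone_pr_cg(rosen, rosen_grad, x0)
```

Allowing a temporary increase in f often lets the iterates take larger steps along a curved valley than a strictly monotone Armijo rule would.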

13.
Journal of Complexity, 1994, 10(1): 64–95
We introduce the notion of the expected hitting time to a goal as a measure of the convergence rate of a Monte Carlo optimization method. The techniques developed apply to simulated annealing, genetic algorithms, and other stochastic search schemes. The expected hitting time can itself be calculated from the more fundamental complementary hitting time distribution (CHTD), which completely characterizes a Monte Carlo method. The CHTD is asymptotically a geometric series, (1/s)/(1 − λ), characterized by two parameters, s and λ, related to the search process in a simple way. The main utility of the CHTD is in comparing Monte Carlo algorithms. In particular, we show that independent, identical Monte Carlo algorithms run in parallel (IIP parallelism) can exhibit superlinear speedup. We give conditions under which this occurs and note that equally likely search is linearly sped up. Further, we observe that a serial Monte Carlo search can have an infinite expected hitting time, while the same algorithm, when parallelized, can have a finite expected hitting time. One consequence of the observed superlinear speedup is an improved uniprocessor algorithm obtained by the technique of in-code parallelism.
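The serial-infinite/parallel-finite phenomenon noted above can be illustrated numerically. The sketch below assumes a hypothetical heavy-tailed (Pareto) hitting-time law, which is our assumption for illustration and not the paper's model: a single search has infinite expected hitting time, while the minimum over two independent identical copies (IIP parallelism) has a finite one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical heavy-tailed hitting time: Pareto with alpha < 1, so E[T] is infinite.
alpha = 0.6
def sample_hits(size):
    # Inverse-CDF sampling; 1 - u lies in (0, 1], support of T is [1, inf).
    return (1.0 - rng.random(size)) ** (-1.0 / alpha)

serial = sample_hits(n)
# Two independent identical searches in parallel (IIP): the goal is hit
# as soon as EITHER copy hits it, i.e. at the minimum of the two hitting times.
parallel = np.minimum(sample_hits(n), sample_hits(n))

# The min of two Pareto(alpha) variables is Pareto(2*alpha); with 2*alpha = 1.2 > 1
# its mean is finite (1.2/0.2 = 6), while the serial sample mean keeps growing with n.
print(serial.mean(), parallel.mean())
```

The serial sample mean is dominated by rare huge draws and never stabilizes, whereas the two-copy parallel mean settles near its finite expectation, so the empirical "speedup" of two processors grows without bound, a superlinear effect in the paper's sense.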

14.
This paper considers a special class of polynomial integer programming problems, which have wide practical applications and are NP-hard. Necessary optimality conditions and sufficient optimality conditions for this class have already been given. In this paper we use these optimality conditions to design optimization algorithms. First, using the necessary optimality condition, we give a new local optimization algorithm. We then combine the sufficient optimality condition, the new local algorithm, and an auxiliary function to design a new global optimization algorithm. The examples given in this paper show that our algorithms are effective and reliable.

15.
A class of parallel characteristical algorithms for global optimization of one-dimensional multiextremal functions is introduced. General convergence and efficiency conditions for the algorithms of the introduced class are established. A generalization to the multidimensional case is considered. Examples of parallel characteristical algorithms and numerical experiments are presented.

16.
A Class of Globally Convergent Memory Gradient Methods and Their Linear Convergence Rate   (Cited: 18; self-citations: 0; citations by others: 18)
This paper studies a new class of memory gradient methods for unconstrained optimization problems and proves their global convergence under the strong Wolfe line search. When the objective function is uniformly convex, the linear convergence rate of the methods is analyzed. Numerical experiments show that the algorithm is very effective.

17.
Using measured data, sediment grain-size distributions are fitted with the sum of three lognormal distribution functions, taking gradient descent as the main method. Numerical experiments show that gradient descent can effectively optimize the parameters of the distribution function and steadily and continuously reduce the fitting residual; the procedure is easy to operate and the fitting results are satisfactory. This provides a new approach to such data fitting, and the method can also be extended to other extremum problems.
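A hedged sketch of the kind of fitting procedure described here: a sum of three weighted lognormal density components is fitted to a synthetic target curve by plain gradient descent on the squared residual. The parametrization, step size, synthetic data, and the use of a finite-difference gradient are all assumptions for illustration; the paper's exact setup is not given in the abstract.

```python
import numpy as np

def lognormal_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))

def model(x, params):
    """Sum of three weighted lognormal components; params = (w, mu, sigma) x 3."""
    w, mu, sigma = params[0::3], params[1::3], params[2::3]
    return sum(wi * lognormal_pdf(x, mi, si) for wi, mi, si in zip(w, mu, sigma))

def loss(params, x, y):
    return np.mean((model(x, params) - y) ** 2)   # fitting residual (mean squared error)

def num_grad(params, x, y, h=1e-6):
    """Central finite-difference gradient of the loss."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        p1, p2 = params.copy(), params.copy()
        p1[i] += h
        p2[i] -= h
        g[i] = (loss(p1, x, y) - loss(p2, x, y)) / (2 * h)
    return g

# Synthetic "measured" grain-size curve generated from known parameters.
true_params = np.array([0.5, 0.0, 0.4,  0.3, 1.2, 0.3,  0.2, 2.0, 0.5])
x = np.linspace(0.1, 15.0, 300)
y = model(x, true_params)

# Start from slightly perturbed parameters and descend on the residual.
params = true_params + 0.05
loss0 = loss(params, x, y)
for _ in range(500):
    params -= 0.05 * num_grad(params, x, y)
loss1 = loss(params, x, y)
```

The steady decrease of `loss1` relative to `loss0` mirrors the "steadily and continuously reduced fitting residual" reported in the abstract; with real data one would replace the synthetic target `y` by the measured grain-size frequency curve.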

18.
The conjugate gradient method is an important method for unconstrained optimization, particularly well suited to large-scale problems. This paper proposes a new family of conjugate gradient methods that includes the FR and CD methods, and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments are carried out, and the results confirm the effectiveness of the algorithm.

19.
A class of two-parameter filled function algorithms for unconstrained global optimization requires the assumption that the problem has only finitely many local minimizers, and the choice of the parameters in the filled function depends on the radii of the basins of the local minimizers. This paper suitably modifies the filled function so that the new filled function algorithm neither needs any assumption on the number of local minimizers nor ties the parameter choice to the basin radii. Numerical experiments show that the algorithm is effective.

20.
Conjugate gradient methods are appealing for large-scale nonlinear optimization problems. Recently, expecting fast convergence, Dai and Liao (2001) used the secant condition of quasi-Newton methods. In this paper, we make use of the modified secant condition given by Zhang et al. (1999) and Zhang and Xu (2001) and propose a new conjugate gradient method in the spirit of Dai and Liao (2001). A new feature of this method is that it uses both the available gradient and function value information, achieving high-order accuracy in approximating the second-order curvature of the objective function. The method is shown to be globally convergent under some assumptions. Numerical results are reported.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号