20 similar documents found; search took 9 ms
1.
On the Descent Property and Convergence of Conjugate Gradient Methods (cited 2 times: 0 self-citations, 2 by others)
This paper gives a restart criterion designed to guarantee the descent property of conjugate gradient methods. We not only obtain the convergence of general conjugate gradient methods with different parameter choices, but also extend the conclusions of Ref. 1.
2.
Building on the class of new conjugate gradient methods proposed in [1], this paper gives two new classes of nonlinear descent conjugate gradient methods for unconstrained optimization. Without any line search, both classes are guaranteed to produce a descent direction at every iteration. For general nonconvex functions, we prove global convergence of the two new classes under the Wolfe line search conditions.
3.
This paper proposes a new class of conjugate gradient methods related to the HS method. Under the strong Wolfe line search, the methods guarantee sufficient descent of the search direction, and global convergence is proved without assuming that the objective function is convex. A particular member of the class is also given, and by tuning the parameter ρ its effectiveness on a set of test functions is verified.
6.
This paper gives a feasible descent method for a class of nondifferentiable optimization problems with linear constraints, whose objective function is the composition of a convex function and a differentiable function. The algorithm finds a feasible descent direction by solving a sequence of quadratic programs, and the new iterate is generated by an inexact line search. Global convergence of the algorithm is proved under rather weak conditions.
7.
Qing-ying Sun, Chang-yu Wang, Zhen-jun Shi, Acta Mathematicae Applicatae Sinica (English Series), 2006, 22(2): 227-242
In this paper, we consider the continuously differentiable optimization problem min{f(x) : x ∈ Ω}, where Ω ⊆ R^n is a nonempty closed convex set. The gradient projection method of Calamai and Moré (Math. Programming, Vol. 39, pp. 93-116, 1987) is modified with memory gradient steps to improve its convergence rate. Convergence of the new method is analyzed without assuming that the iteration sequence {x^k} is bounded. Moreover, it is shown that when f(x) is a pseudoconvex (quasiconvex) function, the new method has strong convergence results. The numerical results show that the new method is more effective than the gradient projection method.
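The basic gradient projection iteration that the paper modifies can be sketched as follows. This is a minimal illustration on a box constraint with a fixed step size; the memory-gradient modification and the Calamai–Moré step-size rules are not reproduced here, and the test problem is an assumption for demonstration only.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_gradient(grad, x0, lo, hi, step=0.1, tol=1e-8, max_iter=1000):
    """Basic gradient projection: x_{k+1} = P_Omega(x_k - t * grad f(x_k))."""
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = project_box(x - step * grad(x), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Minimize f(x) = ||x - c||^2 / 2 over the box [0, 1]^2 with c = (2, -1);
# the constrained minimizer is the projection of c onto the box, i.e. (1, 0).
c = np.array([2.0, -1.0])
x_star = projected_gradient(lambda x: x - c, np.zeros(2), 0.0, 1.0)
```

For this strongly convex objective the fixed-step iteration contracts toward the constrained minimizer; methods like the one in the abstract replace the fixed step and plain gradient with more sophisticated rules.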
10.
A kind of general convexification and concavification method is proposed for solving some classes of global optimization problems with certain monotone properties. It is shown that these minimization problems can be transformed into equivalent concave minimization problems, reverse convex programming problems, or canonical D.C. programming problems by using the proposed convexification and concavification schemes. Existing algorithms can then be used to find the global solutions of the transformed problems.
11.
This paper presents a family of projected descent direction algorithms with inexact line search for solving large-scale minimization problems subject to simple bounds on the decision variables. The global convergence of algorithms in this family is ensured by conditions on the descent directions and line search. Whenever a sequence constructed by an algorithm in this family enters a sufficiently small neighborhood of a local minimizer satisfying standard second-order sufficiency conditions, it gets trapped and converges to this local minimizer. Furthermore, in this case, the active constraint set at the local minimizer is identified in a finite number of iterations. This fact is used to ensure that the rate of convergence to a local minimizer, satisfying standard second-order sufficiency conditions, depends only on the behavior of the algorithm in the unconstrained subspace. As a particular example, we present projected versions of the modified Polak–Ribière conjugate gradient method and the limited-memory BFGS quasi-Newton method that retain the convergence properties associated with those algorithms applied to unconstrained problems.
12.
Liu, G. H., Jing, L. L., Han, L. X., Han, D., Journal of Optimization Theory and Applications, 1999, 101(1): 127-140
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which include the well-known Polak–Ribière method and Hestenes–Stiefel method as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when it is applied to solve unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak–Ribière method and Hestenes–Stiefel method in this nonmonotone conjugate gradient class are competitive vis-à-vis their monotone counterparts.
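For reference, the two β formulas named above can be sketched inside a standard (monotone) conjugate gradient loop. This is a generic illustration with an Armijo backtracking search and a restart safeguard, not the nonmonotone scheme of the paper; the quadratic test problem is an assumption for demonstration.

```python
import numpy as np

def beta_pr(g_new, g_old, d_old):
    # Polak-Ribiere: g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
    y = g_new - g_old
    return float(g_new @ y) / float(g_old @ g_old)

def beta_hs(g_new, g_old, d_old):
    # Hestenes-Stiefel: g_{k+1}^T y_k / (d_k^T y_k)
    y = g_new - g_old
    return float(g_new @ y) / float(d_old @ y)

def cg(f, grad, x0, beta_fn, tol=1e-6, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                      # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * float(g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        d = -g_new + max(beta_fn(g_new, g, d), 0.0) * d
        if float(g_new @ d) >= 0.0:  # restart when d fails to be a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# strictly convex quadratic test problem: min 0.5 x^T A x - b^T x,
# whose exact solution A^{-1} b is (0.2, 0.4)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_pr = cg(f, grad, np.zeros(2), beta_pr)
x_hs = cg(f, grad, np.zeros(2), beta_hs)
```

The nonmonotone variant of the paper relaxes the sufficient-decrease test so that occasional increases of f are allowed; the β formulas themselves are unchanged.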
14.
Journal of Complexity, 1994, 10(1): 64-95
We introduce the notion of expected hitting time to a goal as a measure of the convergence rate of a Monte Carlo optimization method. The techniques developed apply to simulated annealing, genetic algorithms, and other stochastic search schemes. The expected hitting time can itself be calculated from the more fundamental complementary hitting time distribution (CHTD), which completely characterizes a Monte Carlo method. The CHTD is asymptotically a geometric series with sum (1/s)/(1 − λ), characterized by two parameters, s and λ, related to the search process in a simple way. The main utility of the CHTD is in comparing Monte Carlo algorithms. In particular, we show that independent, identical Monte Carlo algorithms run in parallel (IIP parallelism) can exhibit superlinear speedup. We give conditions under which this occurs and note that equally likely search is linearly sped up. Further, we observe that a serial Monte Carlo search can have an infinite expected hitting time, while the same algorithm when parallelized can have a finite expected hitting time. One consequence of the observed superlinear speedup is an improved uniprocessor algorithm obtained by the technique of in-code parallelism.
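The geometric-tail identity above is easy to check numerically: if P(T > k) = (1/s)λ^k, then E[T] = Σ P(T > k) = (1/s)/(1 − λ), and running m independent copies in parallel raises the tail to the m-th power. The values N = 50 and m = 4 below are arbitrary illustrative choices; for equally likely search over N states the measured speedup comes out close to m, consistent with the abstract's remark that such search is (near-)linearly sped up.

```python
# E[T] = sum_{k>=0} P(T > k); for a geometric tail P(T > k) = (1/s) * lam**k
# this sum telescopes to (1/s) / (1 - lam).
def expected_hitting_time(tail, kmax=200000):
    # truncated sum of the complementary hitting time distribution (CHTD)
    return sum(tail(k) for k in range(kmax))

N = 50                                     # equally likely search over N states
tail = lambda k: (1.0 - 1.0 / N) ** k      # here s = 1 and lam = 1 - 1/N
E1 = expected_hitting_time(tail)           # closed form: N = 50

m = 4                                      # m independent copies in parallel (IIP)
tail_par = lambda k: tail(k) ** m          # tail of the minimum of m i.i.d. hitting times
Em = expected_hitting_time(tail_par)       # closed form: 1 / (1 - (1 - 1/N)**m)
speedup = E1 / Em                          # close to m for equally likely search
```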
15.
Vladimir A. Grishagin, Yaroslav D. Sergeyev, Roman G. Strongin, Journal of Global Optimization, 1997, 10(2): 185-206
A class of parallel characteristical algorithms for global optimization of one-dimensional multiextremal functions is introduced. General convergence and efficiency conditions for the algorithms of the introduced class are established. A generalization to the multidimensional case is considered. Examples of parallel characteristical algorithms and numerical experiments are presented.
16.
Using measured data, taking the sum of three log-normal distribution functions as the fitting function and gradient descent as the main method, we fit the grain-size distribution of sediments. Numerical experiments show that gradient descent can effectively optimize the parameters of the distribution function, steadily and continuously reducing the fitting residual; the method is easy to apply and the fitting results are satisfactory. It offers a new approach to data fitting, and the method can also be extended to other extremum problems.
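A minimal version of the fitting procedure described above can be sketched as follows. The synthetic "grain size" curve and all parameter values are illustrative assumptions; the gradient of the squared-residual objective is taken by central differences, and the step size is backtracked so that the residual decreases monotonically, mirroring the "steadily reducing residual" behavior the abstract reports.

```python
import numpy as np

def lognormal_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (x * sigma * np.sqrt(2.0 * np.pi))

def model(x, params):
    # params = (w1, mu1, s1, w2, mu2, s2, w3, mu3, s3): weighted sum of three log-normals
    y = np.zeros_like(x)
    for w, mu, s in params.reshape(3, 3):
        y = y + w * lognormal_pdf(x, mu, s)
    return y

def sse(params, x, y):
    r = model(x, params) - y
    return float(r @ r)

def fit_gd(x, y, params, lr=1e-3, iters=200, eps=1e-6):
    params = params.copy()
    best = sse(params, x, y)
    for _ in range(iters):
        g = np.zeros_like(params)          # central-difference gradient of the SSE
        for i in range(params.size):
            e = np.zeros_like(params); e[i] = eps
            g[i] = (sse(params + e, x, y) - sse(params - e, x, y)) / (2.0 * eps)
        while lr > 1e-12:                  # backtrack until the residual decreases
            trial = params - lr * g
            val = sse(trial, x, y)
            if np.isfinite(val) and val <= best:
                params, best = trial, val
                break
            lr *= 0.5
    return params, best

# synthetic "grain size" curve generated from a known three-component mixture
xs = np.linspace(0.05, 10.0, 200)
true = np.array([0.5, 0.0, 0.3, 0.3, 1.0, 0.4, 0.2, 1.8, 0.5])
ys = model(xs, true)

start = true + 0.05                        # perturbed initial guess
fit, final_sse = fit_gd(xs, ys, start)
```

With real data one would start from moments of the measured distribution rather than a perturbation of the truth, but the descent loop is the same.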
17.
A Class of Globally Convergent Memory Gradient Methods and Their Linear Convergence Rate (cited 18 times: 0 self-citations, 18 by others)
This paper studies a new class of memory gradient methods for unconstrained optimization and proves their global convergence under the strong Wolfe line search. When the objective function is uniformly convex, the linear convergence rate is analyzed. Numerical experiments show the algorithm is very effective.
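The memory gradient direction d_k = −g_k + β_k d_{k−1} can be sketched as follows. The safeguard β_k = ρ‖g_k‖/‖d_{k−1}‖ with ρ ∈ (0, 1) is one common choice that guarantees descent and is an assumption here, not necessarily the rule analyzed in the paper; an Armijo backtracking search also stands in for the strong Wolfe search.

```python
import numpy as np

def memory_gradient(f, grad, x0, rho=0.5, tol=1e-6, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                          # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * float(g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        # memory term: beta bounds |beta * g_k^T d_{k-1}| <= rho * ||g_k||^2,
        # hence g_k^T d_k <= -(1 - rho) * ||g_k||^2 < 0 (guaranteed descent)
        beta = rho * np.linalg.norm(g_new) / max(np.linalg.norm(d), 1e-12)
        d = -g_new + beta * d
        g = g_new
    return x

# strictly convex quadratic test problem; exact minimizer A^{-1} b = (0.2, 0.4)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_mg = memory_gradient(lambda x: 0.5 * x @ A @ x - b @ x,
                       lambda x: A @ x - b, np.zeros(2))
```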
19.
Zhu Wen-xing, Acta Mathematica Scientia (Series A), 1999, (Z1)
A class of two-parameter filled function algorithms for unconstrained global optimization requires the assumption that the problem has only finitely many local minimizers, and the choice of the parameters in the filled function depends on the radii of the basins of those local minimizers. This paper suitably modifies the filled function so that the new filled function algorithm neither requires an assumption on the number of local minimizers nor ties the choice of parameters to the basin radii. Numerical experiments show the algorithm is effective.
20.
Global Convergence Properties of Nonlinear Conjugate Gradient Methods with Modified Secant Condition (cited 2 times: 1 self-citation, 1 by others)
Conjugate gradient methods are appealing for large-scale nonlinear optimization problems. Recently, to speed up convergence, Dai and Liao (2001) used the secant condition of quasi-Newton methods. In this paper, we make use of the modified secant condition given by Zhang et al. (1999) and Zhang and Xu (2001) and propose a new conjugate gradient method in the spirit of Dai and Liao (2001). A new feature is that the method uses both gradient and function value information and achieves high-order accuracy in approximating the second-order curvature of the objective function. The method is shown to be globally convergent under some assumptions. Numerical results are reported.
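For reference, the Dai–Liao choice that this paper builds on can be written down concretely. The numbers below are a toy step, chosen as an assumption for illustration; the modified secant correction of Zhang et al., which folds function values into y_k, is deliberately omitted rather than guessed.

```python
import numpy as np

# Dai-Liao (2001): d_{k+1} = -g_{k+1} + beta_k d_k with
#   beta_k = g_{k+1}^T (y_k - t * s_k) / (d_k^T y_k),
# where y_k = g_{k+1} - g_k, s_k = x_{k+1} - x_k, and t >= 0 is a parameter.
# The modified secant condition would replace y_k by a corrected vector that
# also uses function values; plain y_k is shown here.
def dl_beta(g_new, g_old, s, d, t=0.1):
    y = g_new - g_old
    return float(g_new @ (y - t * s)) / float(d @ y)

# toy step on f(x) = ||x||^2 / 2, so the gradient at x is x itself:
g_old = np.array([1.0, 0.0])
d = -g_old                       # steepest descent direction
s = 0.4 * d                      # step s_k = x_{k+1} - x_k with step length 0.4
g_new = g_old + s                # for this quadratic, y_k = s_k
beta = dl_beta(g_new, g_old, s, d, t=0.1)
d_new = -g_new + beta * d
# d_new remains a descent direction: g_new @ d_new < 0
```

Working the numbers by hand: y = (−0.4, 0), d·y = 0.4, g_new·(y − 0.1 s) = −0.216, so β = −0.54 and d_new = (−0.06, 0).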