20 similar documents found (search time: 0 ms)
1.
This paper presents a unified gradient flow approach to nonlinear constrained optimization problems. The method is based on a continuous gradient flow reformulation of constrained optimization problems and on a two-level time discretization of the gradient flow equation with a splitting parameter $\theta$. The convergence of the scheme is analyzed, and it is shown that the scheme is first order when $\theta \in [0, 1)$ and second order when $\theta = 1$ and the time discretization step length is sufficiently large. Numerical experiments for continuous, discrete, and mixed discrete optimization problems were performed, and the numerical results show that the approach is effective for solving these problems.
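As a rough illustration of the construction just described, the sketch below integrates the gradient flow $\dot{x} = -\nabla f(x)$ with a generic two-level $\theta$-scheme. This is a minimal sketch under assumed details (fixed-point resolution of the implicit term, unconstrained $f$, illustrative names), not the paper's actual scheme for constrained problems.

```python
import numpy as np

def grad_flow_theta(grad, x0, dt=0.1, theta=1.0, outer=200, inner=20):
    """Two-level theta-scheme for the gradient flow dx/dt = -grad f(x):
    (x_{k+1} - x_k)/dt = -[(1 - theta) g(x_k) + theta g(x_{k+1})].
    The implicit part (theta > 0) is resolved by fixed-point iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        gx = grad(x)
        x_new = x - dt * gx                       # explicit predictor
        for _ in range(inner):                    # correct the implicit term
            x_new = x - dt * ((1 - theta) * gx + theta * grad(x_new))
        x = x_new
    return x

# Example: f(x) = 0.5 x^T A x, whose flow decays to the minimizer 0
A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(grad_flow_theta(lambda x: A @ x, [1.0, -1.0]))
```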
2.
Two modified Dai-Yuan nonlinear conjugate gradient methods
Li Zhang 《Numerical Algorithms》2009,50(1):1-16
In this paper, we propose two modified versions of the Dai-Yuan (DY) nonlinear conjugate gradient method. One is based on the MBFGS method (Li and Fukushima, J Comput Appl Math 129:15–35, 2001) and inherits all the nice properties of the DY method; moreover, it converges globally for nonconvex functions even if the standard Armijo line search is used. The other is based on the ideas of Wei et al. (Appl Math Comput 183:1341–1350, 2006) and Zhang et al. (Numer Math 104:561–572, 2006) and inherits the good numerical performance of the Hestenes-Stiefel method. Numerical results are also reported.
This work was supported by the NSF of China (grant 10701018).
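For context, the classical Dai-Yuan update that both modified versions start from is the standard formula

$$d_k = -g_k + \beta_k^{DY} d_{k-1}, \qquad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{\top}(g_k - g_{k-1})};$$

the paper's contribution lies in how this $\beta_k$ is modified.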
3.
In this paper, we discuss the gradient estimate for evolutionary surfaces of prescribed mean curvature with Neumann boundary value under the condition $f_\tau \ge -\kappa$, which is the same as the one in the interior estimate by K. Ecker and generalizes the condition $f_\tau \ge 0$ studied by Gerhardt et al. Also, based on the elliptic result obtained recently, we show the longtime behavior of surfaces moving with velocity equal to the mean curvature.
4.
Pengjie Liu Xiaoyu Wu Hu Shao Yan Zhang Shuhan Cao 《Numerical Linear Algebra with Applications》2023,30(2):e2471
In this work, by considering hyperplane projection and hybrid techniques, three scaled three-term conjugate gradient methods are extended to solve systems of constrained monotone nonlinear equations. The developed methods have the advantages of low storage and using only function values, and they satisfy the sufficient descent condition independently of any line search criterion. It is proved that the three new methods converge globally under mild conditions. Numerical experiments on constrained monotone nonlinear equations and image deblurring problems illustrate that the proposed methods are numerically effective and efficient.
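A minimal sketch of the hyperplane-projection step that methods of this type share, following the classical Solodov-Svaiter scheme; the direction `d`, the line search constants, and the projection `proj_C` are assumptions, not the paper's exact choices.

```python
import numpy as np

def projection_step(F, proj_C, x, d, sigma=1e-4, rho=0.5, max_back=40):
    """One hyperplane-projection iteration for constrained monotone
    equations F(x) = 0 over a convex set C. `d` is a descent-like
    direction (e.g., from a three-term conjugate gradient formula)."""
    # Backtracking: find t with -F(x + t d)^T d >= sigma * t * ||d||^2
    t = 1.0
    for _ in range(max_back):
        if -F(x + t * d) @ d >= sigma * t * (d @ d):
            break
        t *= rho
    z = x + t * d
    Fz = F(z)
    # Project x onto the hyperplane {y : F(z)^T (y - z) = 0}, then onto C
    y = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
    return proj_C(y)

# Example: F(x) = x on the nonnegative orthant, projection = clipping
x = np.array([0.8, -0.3])
print(projection_step(lambda v: v, lambda v: np.maximum(v, 0.0), x, d=-x))
```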
5.
In this paper a new nonmonotone conjugate gradient method is introduced, which can be regarded as a generalization of the Perry and Shanno memoryless quasi-Newton method. For convex objective functions, the proposed nonmonotone conjugate gradient method is proved to be globally convergent. Its global convergence for non-convex objective functions has also been studied. Numerical experiments indicate that it is able to efficiently solve large-scale optimization problems.
6.
Sun Qingying 《Mathematics in Practice and Theory》2002,32(4):621-628
For the unconstrained optimization problem (P): $\min_{x\in R^n} f(x)$, where $f(x)$ is a continuously differentiable function from $R^n$ to $R^1$, a super-memory gradient algorithm is designed. Dropping the assumption that the iterate sequence $\{x_k\}$ is bounded, and using a generalized Armijo line search, the global convergence of the algorithm is discussed, and the algorithm is proved to possess strong convergence properties.
7.
This paper studies conjugate gradient algorithms for unconstrained optimization problems, proposes a new formula for computing the main parameter, and analyzes the global convergence of the algorithm under the Wolfe line search.
8.
Martin Hanke 《Numerical Functional Analysis & Optimization》2013,34(9-10):971-993
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
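A compact sketch of the outer/inner structure described above, under simplifying assumptions: dense Jacobian, CG applied to the normal equations, a fixed truncation fraction `eta`; the names `tau`, `noise`, and `eta` are illustrative, not the paper's notation.

```python
import numpy as np

def truncated_newton_cg(F, jac, y_delta, x0, noise, tau=2.0, eta=0.5,
                        max_outer=50, max_inner=200):
    """Outer Newton iteration stopped by the discrepancy principle
    ||F(x) - y_delta|| <= tau * noise; each linearized system
    F'(x) h = y_delta - F(x) is solved inexactly by CG on the normal
    equations, truncated once the linear residual falls below
    eta * ||initial residual||."""
    x = np.array(x0, dtype=float)
    for _ in range(max_outer):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * noise:      # discrepancy principle
            break
        J = jac(x)
        h = np.zeros_like(x)
        s = J.T @ r                               # CGNR residual J^T (r - J h), h = 0
        p, rs = s.copy(), s @ s
        tol = eta * np.linalg.norm(r)             # prescribed residual percentage
        for _ in range(max_inner):
            if rs == 0.0 or np.linalg.norm(r - J @ h) <= tol:
                break
            Jp = J @ p
            a = rs / (Jp @ Jp)
            h += a * p
            s = s - a * (J.T @ Jp)
            rs_new = s @ s
            p = s + (rs_new / rs) * p
            rs = rs_new
        x += h
    return x
```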
9.
《Optimization》2012,61(2):249-263
New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or a quasi-Newton direction. The use of the latter directions improves the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments on some unconstrained test problems are reported, and the proposed algorithms are compared with some existing similar methods; this comparison demonstrates the efficiency of the proposed combined methods.
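One simple way to combine the two directions is to try the Newton step and fall back to the anti-gradient when the Newton step is unavailable or insufficiently downhill. The sketch below shows this generic idea only; the papers' actual combination rules are more refined, and the constant `c` is an assumption.

```python
import numpy as np

def combined_direction_step(grad, hess, x, c=1e-4):
    """Return a search direction combining Newton and anti-gradient:
    use the Newton direction if it exists and is sufficiently downhill,
    otherwise fall back to the anti-gradient."""
    g = grad(x)
    try:
        d = np.linalg.solve(hess(x), -g)   # Newton (or quasi-Newton) direction
        if g @ d > -c * (g @ g):           # not sufficiently downhill
            d = -g
    except np.linalg.LinAlgError:          # singular Hessian
        d = -g
    return d

# Example on f(x) = x1^4 + x2^2
g = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
H = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])
print(combined_direction_step(g, H, np.array([1.0, 1.0])))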
10.
This paper proposes two classes of generalized reduced-gradient methods for solving constrained optimization problems and studies, from a unified viewpoint, the structure and global convergence of projected gradient methods and reduced gradient methods. The results unify and generalize common feasible direction methods.
11.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey (LS) conjugate gradient method. The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
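The flavor of a nonmonotone line search can be seen in the classical Grippo-Lampariello-Lucidi acceptance test sketched below, which compares against the maximum of recent function values rather than the current one. The paper's own search additionally uses a Lipschitz-constant estimate to choose the step; that refinement is omitted here, and all parameter names are illustrative.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, delta=1e-4, rho=0.5, max_back=30):
    """GLL-type nonmonotone Armijo search: accept step t if
    f(x + t d) <= max(recent f values) + delta * t * g^T d."""
    f_ref = max(f_hist)                  # reference value over recent iterates
    t = 1.0
    for _ in range(max_back):
        if f(x + t * d) <= f_ref + delta * t * (g @ d):
            break
        t *= rho
    return t

# Usage sketch: maintain a sliding window of recent values,
# e.g. f_hist = collections.deque(maxlen=10), appending f(x) each iteration.
```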
12.
The conjugate gradient method is an important approach for solving large-scale unconstrained optimization problems. This paper studies the parameter of the Dai-Liao (DL) conjugate gradient method and proposes a new adaptive DL conjugate gradient method. Under suitable conditions, the global convergence of the method is proved. Numerical results show that our method is effective on the given test problems.
13.
14.
Two fundamental convergence theorems for nonlinear conjugate gradient methods and their applications
1. Introduction. We consider the global convergence of conjugate gradient methods for the unconstrained nonlinear optimization problem $\min f(x)$, where $f: R^n \to R^1$ is continuously differentiable and its gradient is denoted by $g$. We consider only the case where the methods are implemented without regular restarts. The iterative formula is given by $x_{k+1} = x_k + \alpha_k d_k$ (1.1), and the search direction $d_k$ is defined by $d_1 = -g_1$ and $d_k = -g_k + \beta_k d_{k-1}$ for $k \ge 2$, where $\beta_k$ is a scalar, $\alpha_k$ is a steplength, and $g_k$ denotes $g(x_k)$. The best-known formulas fo…
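The framework in (1.1) translates directly into code. The sketch below uses a backtracking Armijo search and the Fletcher-Reeves choice of $\beta_k$ purely for concreteness; both are assumptions, not this paper's setting.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG loop matching (1.1):
    x_{k+1} = x_k + alpha_k d_k, d_1 = -g_1, d_k = -g_k + beta_k d_{k-1}.
    `beta_rule(g_new, g_old, d_old)` supplies the scalar beta_k
    (FR, PR, HS, DY, ...)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx = 1.0, f(x)
        for _ in range(50):                      # backtracking Armijo search
            if f(x + alpha * d) <= fx + 1e-4 * alpha * (g @ d):
                break
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + beta_rule(g_new, g, d) * d
        g = g_new
    return x

# Fletcher-Reeves rule as one concrete choice of beta_k
fr = lambda gn, go, do: (gn @ gn) / (go @ go)
print(nonlinear_cg(lambda x: x @ x, lambda x: 2 * x, [3.0, -2.0], fr))
```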
15.
16.
Global convergence of the Polak-Ribière and Hestenes-Stiefel conjugate gradient methods for unconstrained optimization
Under very weak conditions, this paper establishes new global convergence results for the Polak-Ribière (PR) and Hestenes-Stiefel (HS) conjugate gradient methods for unconstrained optimization, in which the parameters $\beta_k^{PR}$ and $\beta_k^{HS}$ may take values in a certain negative region depending on $k$. These new convergence results improve existing results in the literature. Numerical tests show that the new PR and HS methods are quite effective.
17.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines line search and trust region techniques to generate new iterates at each iteration and therefore enjoys the advantages of both. It makes full use of information from previous multi-step iterations and avoids storing and computing matrices associated with the Hessian of the objective function, so it is suitable for large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Because it uses more information from previous iterative steps, this idea enables us to design fast-converging, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable, and robust in practical computation compared with other similar methods.
18.
It is well known that global convergence has not been established for the Polak-Ribière-Polyak (PRP) conjugate gradient method under the standard Wolfe conditions. In the convergence analysis of the PRP method with Wolfe line search, the (sufficient) descent condition and the restriction $\beta_k \ge 0$ are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed: under suitable conditions, using a modified Wolfe line search, global convergence results are established for the PRP method. Some special choices for $\beta_k$ that ensure the descent property of the search direction are also discussed. Preliminary numerical results on a set of large-scale problems are reported to show that the PRP method's computational efficiency is encouraging.
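For reference, the classical PRP parameter under discussion is

$$\beta_k^{PRP} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|g_{k-1}\|^2},$$

and the relaxation concerns running the method without truncating this quantity at zero.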
19.
A three-term super-memory gradient method is proposed. Its main advantage is that the iterative direction is a sufficient descent direction without any line search. The global convergence of the method is analyzed under rather weak conditions. Preliminary numerical experiments show that the method is effective.
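Schematically, one common form of a three-term direction is

$$d_k = -g_k + \beta_k d_{k-1} + \theta_k y_{k-1},$$

with the scalars $\beta_k$ and $\theta_k$ chosen so that $g_k^{\top} d_k \le -c\,\|g_k\|^2$ for some $c > 0$ holds independently of any line search; the specific parameter choices here are the paper's own and are not reproduced above.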
20.
《Optimization》2012,61(5):1173-1175
In this note, we show that the assumption condition (3.3) in Theorem 3.2 of Babaie-Kafaki can be deleted. As mentioned in Babaie-Kafaki, it is not appropriate to consider condition (3.3) as an assumption. Throughout, we use the same notations and equation numbers as in Babaie-Kafaki.