Similar Documents
20 similar documents found.
1.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has recently been extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
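For reference, the spectral (Barzilai-Borwein) steplength and the projected direction that underlie SPG-type methods take the following standard form (textbook notation; the paper's preconditioned variant is not reproduced here):

$$\alpha_k = \frac{s_{k-1}^{T} s_{k-1}}{s_{k-1}^{T} y_{k-1}}, \qquad d_k = P_{\Omega}\big(x_k - \alpha_k \nabla f(x_k)\big) - x_k,$$

where \(s_{k-1} = x_k - x_{k-1}\), \(y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})\), and \(P_{\Omega}\) denotes the Euclidean projection onto the convex feasible set \(\Omega\).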

2.
In this paper, by means of an active set strategy, we present a projected spectral gradient algorithm for solving large-scale bound constrained optimization problems. A nice property of the active set estimation technique is that it can identify the active set at the optimal point without requiring the strict complementarity condition, which makes it potentially useful for degenerate optimization problems. Under appropriate conditions, we show that the proposed method is globally convergent. We also report numerical experiments on bound constrained problems from the CUTEr library. The numerical comparisons with SPG, TRON, and L-BFGS-B show that the proposed method is effective and promising.
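For reference, for bound constraints \(\ell \le x \le u\) the active set at a point \(x\) is the index set (standard definition; the paper's estimation rule is not reproduced here):

$$\mathcal{A}(x) = \{\, i : x_i = \ell_i \ \text{or} \ x_i = u_i \,\},$$

and active set strategies estimate \(\mathcal{A}(x^{*})\) at each iterate so that the minimization can be restricted to the remaining free variables.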

3.
The spectral projected gradient method SPG is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez, and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai-Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. Therefore, it is natural to test its performance for solving the sub-problems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods which use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested on two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems from the CUTE collection is solved and compared with the performance of LANCELOT. Second, a family of location problems in the minimax formulation is solved against the package FFSQP.
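For context, the bound-constrained subproblems that SPG solves inside such a scheme have the standard augmented Lagrangian form (generic PHR-style notation, not necessarily ALSPG's exact formulation):

$$\min_{\ell \le x \le u}\; L_{\rho}(x,\lambda) = f(x) + \lambda^{T} c(x) + \frac{\rho}{2}\,\|c(x)\|^{2},$$

where \(c(x)\) collects the nonlinear constraints, \(\lambda\) is the current multiplier estimate, and \(\rho > 0\) is the penalty parameter updated between outer iterations.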

4.
The spectral gradient method is a nonmonotone gradient method for large-scale unconstrained minimization. We strengthen the algorithm with modifications which globalize the method, and we present strategies for applying preconditioning techniques. The modified algorithm replaces a condition of uniform positive definiteness of the preconditioning matrices with mild conditions on the search directions. The result is a robust algorithm which is effective on very large problems. Encouraging numerical experiments are presented for a variety of standard test problems, for solving nonlinear Poisson-type equations, and also for finding molecular conformations by distance geometry.

5.
The conjugate gradient method is one of the most effective methods for solving large-scale unconstrained optimization problems. This paper improves the parameter formula of the HS conjugate gradient method, obtains a new formula, and builds an algorithmic framework on it. Independently of any line search condition, the search directions generated by the framework are proved to satisfy the sufficient descent condition, and global convergence is established under the standard Wolfe line search. Finally, numerical tests show that the improved method is effective.
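For reference, the classical Hestenes-Stiefel (HS) parameter that this entry modifies is (standard form; the paper's improved formula is not reproduced here):

$$\beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k}, \qquad d_{k+1} = -g_{k+1} + \beta_k^{HS} d_k, \qquad y_k = g_{k+1} - g_k,$$

where \(g_k = \nabla f(x_k)\) and \(d_k\) is the current search direction.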

6.
Global convergence of a new family of conjugate gradient methods
The conjugate gradient method is an important method for solving unconstrained optimization problems and is especially well suited to large-scale problems. This paper proposes a new family of conjugate gradient methods and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments are carried out, and the results confirm the effectiveness of the algorithm.

7.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
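For context, in nonlinear least squares, where \(f(x) = \tfrac{1}{2}\|r(x)\|^{2}\), the Hessian carries the structure these methods exploit (standard decomposition):

$$\nabla^{2} f(x) = J(x)^{T} J(x) + \sum_{i} r_i(x)\,\nabla^{2} r_i(x),$$

where \(J\) is the Jacobian of the residual vector \(r\). Gauss-Newton keeps only the first term, while structured secant conditions approximate the second; the proposed methods embed that structured information into the Dai-Liao framework.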

8.
郭洁  万中 《计算数学》2022,44(3):324-338
Based on the exponential penalty function, a recently proposed three-term conjugate gradient method for unconstrained optimization is modified and applied to the more difficult large-scale minimax problem. It is proved that the search direction generated by the method is a sufficient descent direction for every smooth subproblem, regardless of the line search rule used. On this basis, an algorithm for large-scale minimax problems is designed, and its global convergence is proved under reasonable assumptions. Numerical experiments show that the algorithm outperforms similar algorithms in the literature.
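For reference, one common exponential-penalty smoothing of the minimax problem \(\min_x \max_{1\le i\le m} f_i(x)\) is the aggregate function (a standard form; the paper's exact penalty may differ):

$$f_p(x) = p \,\ln \sum_{i=1}^{m} \exp\!\left(\frac{f_i(x)}{p}\right), \qquad p > 0,$$

which is smooth and satisfies \(\max_i f_i(x) \le f_p(x) \le \max_i f_i(x) + p\ln m\), so each smooth subproblem can be handled by the modified three-term conjugate gradient method.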

9.
The strong Wolfe conditions do not guarantee global convergence of the standard CD conjugate gradient method. By constructing a new conjugate parameter, this paper proposes a new spectral conjugate gradient method for unconstrained optimization. The method reduces to the standard CD method under exact line search, and it possesses the descent property and global convergence under the standard Wolfe line search. Preliminary numerical results show that the new method is effective and well suited to solving nonlinear unconstrained optimization problems.
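For reference, the standard CD (Conjugate Descent, Fletcher) parameter referred to above is:

$$\beta_k^{CD} = \frac{\|g_{k+1}\|^{2}}{-d_k^{T} g_k}, \qquad d_{k+1} = -g_{k+1} + \beta_k^{CD} d_k,$$

which is well defined whenever \(d_k\) is a descent direction, since then \(d_k^{T} g_k < 0\).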

10.
This article combines techniques from two fields of applied mathematics: optimization theory and inverse problems. We investigate a generalized conditional gradient method and its connection to an iterative shrinkage method, which has recently been proposed for solving inverse problems. The iterative shrinkage method aims at the solution of non-quadratic minimization problems where the solution is expected to have a sparse representation in a known basis. We show that it can be interpreted as a generalized conditional gradient method. We prove the convergence of this generalized method for a general class of functionals, which includes non-convex functionals. This also gives a deeper understanding of the iterative shrinkage method.
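A minimal sketch of the soft-thresholding (shrinkage) operator at the core of iterative shrinkage methods for sparse regularization. The function names and the simple fixed-step iteration below are illustrative assumptions, not the article's exact scheme:

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise shrinkage: argmin_z 0.5*(z - x)**2 + tau*|z|."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def iterative_shrinkage(A, y, tau, steps=200):
    """Illustrative sparse recovery: min_x 0.5*||A x - y||^2 + tau*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = A.T @ (A @ x - y)           # gradient of the smooth data term
        x = soft_threshold(x - grad / L, tau / L)  # gradient step, then shrink
    return x
```

Each pass performs one gradient step on the data term followed by shrinkage, the structure that the article reinterprets as a generalized conditional gradient step.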

11.
Many constrained sets in problems such as signal processing and optimal control can be represented as the fixed point set of a certain nonexpansive mapping, and a number of iterative algorithms have been presented for solving a convex optimization problem over a fixed point set. This paper presents a novel gradient method with a three-term conjugate gradient direction, of the kind used to accelerate conjugate gradient methods for unconstrained optimization problems. The algorithm is guaranteed to converge strongly to the solution of the problem under the standard assumptions. Numerical comparisons with existing gradient methods demonstrate the effectiveness and fast convergence of this algorithm.
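For reference, three-term conjugate gradient directions have the generic form (one common instance; the paper's particular coefficients are not reproduced here):

$$d_{k+1} = -g_{k+1} + \beta_k d_k + \theta_k y_k, \qquad y_k = g_{k+1} - g_k,$$

where \(\beta_k\) and \(\theta_k\) are chosen so that \(d_{k+1}\) remains a sufficient descent direction.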

12.
This paper proposes a new memory gradient algorithm for unconstrained optimization. Under the Armijo line search, the algorithm uses information from previous iterates at each step, which increases the freedom in parameter selection and makes it suitable for large-scale unconstrained optimization problems. The global convergence of the algorithm is analyzed.
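For reference, memory gradient directions generically combine the current gradient with several previous directions (a common generic form, not the paper's specific parameter rule):

$$d_k = -g_k + \sum_{i=1}^{m} \beta_k^{(i)} d_{k-i},$$

where the coefficients \(\beta_k^{(i)}\) are constrained only as far as needed to guarantee descent; this is the extra freedom in parameter selection that the entry refers to.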

13.
《Optimization》2012,61(12):2679-2691
In this article, we present an improved three-term conjugate gradient algorithm for large-scale unconstrained optimization. The search directions in the developed algorithm are proved to satisfy an approximate secant equation as well as the Dai-Liao conjugacy condition. With the standard Wolfe line search and a restart strategy, global convergence of the algorithm is established under mild conditions. On 75 benchmark test problems with dimensions from 1000 to 10,000, the numerical results indicate that the algorithm outperforms state-of-the-art algorithms available in the literature, requiring less CPU time and fewer iterations on large-scale unconstrained optimization problems.
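For reference, the Dai-Liao conjugacy condition mentioned above reads (standard form):

$$d_{k+1}^{T} y_k = -t\, g_{k+1}^{T} s_k, \qquad t \ge 0,$$

where \(s_k = x_{k+1} - x_k\) and \(y_k = g_{k+1} - g_k\); the choice \(t = 0\) recovers the classical conjugacy condition \(d_{k+1}^{T} y_k = 0\).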

14.
刘金魁 《计算数学》2016,38(2):113-124
Building on the well-known PRP conjugate gradient algorithm, this paper studies a derivative-free spectral PRP projection algorithm and proves its global convergence for nonlinear monotone systems of equations with convex constraints. Thanks to its derivative-free nature and low storage requirements, it is well suited to large-scale nonsmooth nonlinear monotone systems of equations. Numerical experiments show that the new algorithm is effective and stable on the given test problems.
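For reference, the classical PRP (Polak-Ribière-Polyak) parameter on which the algorithm builds is:

$$\beta_k^{PRP} = \frac{g_{k+1}^{T}(g_{k+1} - g_k)}{\|g_k\|^{2}};$$

in the derivative-free setting for a monotone system \(F(x) = 0\), the gradient \(g_k\) is replaced by the residual \(F(x_k)\), and each new iterate is obtained by projecting onto a hyperplane that separates the current point from the solution set (a standard construction in projection methods, sketched here from the entry's description).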

15.
《Optimization》2012,61(4):993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have recently been studied extensively. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. The family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the two-parameter family. The numerical results show that the method is efficient for the given test problems. In addition, the methods related to this family are discussed in a unified way.

16.
This paper proposes a new memory gradient algorithm for unconstrained optimization. At each iteration the algorithm uses information from previous iterates, which increases the freedom in parameter selection and makes it suitable for large-scale unconstrained optimization problems. The global convergence of the algorithm is analyzed, and numerical experiments show that the algorithm is effective.

17.
The Barzilai–Borwein (BB) gradient method has received much attention due to its simplicity and numerical efficiency. By incorporating a nonmonotone line search, Raydan (SIAM J Optim. 1997;7:26–33) successfully extended the BB gradient method to general unconstrained optimization problems, making it competitive with conjugate gradient methods. However, the numerical results reported by Raydan are poor for very ill-conditioned problems, because there the effect of the degree of nonmonotonicity may be noticeable. In this paper, we focus on the nonmonotone line search technique used in the global Barzilai–Borwein (GBB) gradient method. We improve the performance of the GBB gradient method by proposing an adaptive nonmonotone line search based on the morphology of the objective function. We also prove the global convergence and the R-linear convergence rate of the proposed method under reasonable assumptions. Finally, we report numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection. The results show the efficiency of the proposed method in the sense of the performance profile introduced by Dolan and Moré (Math Program. 2002;91:201–213).
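A minimal sketch of the classical nonmonotone (Grippo-Lampariello-Lucidi style) Armijo rule that GBB-type methods build on; the paper's adaptive variant is not reproduced, and all names and default parameters here are illustrative assumptions:

```python
import numpy as np

def nonmonotone_armijo(f, x, g, d, f_hist, gamma=1e-4, sigma=0.5, max_back=30):
    """Backtrack until f(x + t*d) <= max(recent f values) + gamma*t*g'd.

    f_hist holds the last M objective values; M controls the degree of
    nonmonotonicity, and M = 1 recovers the ordinary monotone Armijo rule.
    """
    f_ref = max(f_hist)                  # nonmonotone reference value
    gtd = float(g @ d)                   # directional derivative, must be < 0
    t = 1.0                              # e.g. the BB steplength as trial step
    for _ in range(max_back):
        if f(x + t * d) <= f_ref + gamma * t * gtd:
            return t
        t *= sigma                       # backtrack
    return t                             # budget exhausted (sketch only)
```

Accepting occasional increases in \(f\) relative to the recent history is what lets the method keep the fast behaviour of the BB iteration while retaining global convergence; the parameter M is exactly the quantity whose influence entries 17 and 18 discuss.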

18.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with a nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters in the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
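For reference, the two-point (Barzilai-Borwein) stepsizes that give the method its name are:

$$\alpha_k^{BB1} = \frac{s_{k-1}^{T} s_{k-1}}{s_{k-1}^{T} y_{k-1}}, \qquad \alpha_k^{BB2} = \frac{s_{k-1}^{T} y_{k-1}}{y_{k-1}^{T} y_{k-1}},$$

with \(s_{k-1} = x_k - x_{k-1}\) and \(y_{k-1} = g_k - g_{k-1}\); either choice makes \(\alpha_k^{-1} I\) a secant-like approximation of the Hessian along the most recent step.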

19.
In this paper, an improved spectral conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. Unlike existing methods, the spectral and conjugate parameters are chosen such that the obtained search direction is always sufficiently descent as well as close to the quasi-Newton direction. With these choices, the additional assumption on the boundedness of the spectral parameter required in the method proposed by Andrei is removed. Under some mild conditions, global convergence is established. Numerical experiments demonstrate the efficiency of the algorithm for solving large-scale benchmark test problems, particularly in comparison with existing state-of-the-art algorithms in the literature.
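For reference, spectral conjugate gradient directions take the generic form (generic notation; the paper's particular parameter choices are not reproduced here):

$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k d_k,$$

where the spectral parameter \(\theta_{k+1} > 0\) rescales the gradient term and \(\beta_k\) is the conjugate parameter; here both are chosen jointly so that \(d_{k+1}\) is a sufficient descent direction close to a quasi-Newton direction.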

20.
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard collection of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given.
