Similar Articles
20 similar articles found.
1.
An active set limited memory BFGS algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique, and the search direction is determined by a lower-dimensional system of linear equations in the free subspace. Implementations of the method on the CUTE test problems are described and show the efficiency of the proposed algorithm. This work was supported by 973 project grant 2004CB719402 and NSF of China grant 10471036.
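A minimal sketch of the kind of bound-based active-set identification the abstract describes; the tolerance rule and all names here are illustrative assumptions, not the paper's exact identification technique:

```python
import numpy as np

def estimate_active_set(x, g, lo, hi, eps=1e-8):
    """Flag variables whose bound constraints appear active at x.

    A variable is treated as active when it sits (numerically) on a
    bound and the gradient pushes it further into that bound; the
    remaining variables form the free subspace.
    """
    at_lower = (x - lo <= eps) & (g > 0.0)   # pressed onto lower bound
    at_upper = (hi - x <= eps) & (g < 0.0)   # pressed onto upper bound
    active = at_lower | at_upper
    free = ~active
    return active, free
```

The limited memory BFGS system would then be solved only over the `free` indices, which is what makes the direction-finding step a lower-dimensional system of linear equations.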

2.
In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization problems with simple bounds on the variables. The limited memory quasi-Newton method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Newton direction and two directions built from the subspace gradient and a modified gradient. The algorithm can be applied to large-scale problems since no subproblems need to be solved. The global convergence of the method is proved and some numerical results are given.

3.
The application of quasi-Newton methods is widespread in numerical optimization. Independently of the application, the technique used to update the BFGS matrices appears to play an important role in the performance of the overall method. In this paper we address precisely this issue. We compare two implementations of the limited memory BFGS method for large-scale unconstrained problems, which differ in the updating technique and the choice of the initial matrix: L-BFGS performs continuous updating, whereas SNOPT uses a restarted limited memory strategy. Our study shows that continuous updating techniques are more effective, particularly for large problems.
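For reference, continuous updating in L-BFGS is conventionally realized through the standard two-loop recursion; the sketch below is a generic textbook version under the usual gamma = s'y / y'y scaling of the initial matrix, not the specific codes compared in the paper:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion.

    g       -- current gradient
    s_list  -- recent steps s_i = x_{i+1} - x_i (oldest first)
    y_list  -- gradient differences y_i = g_{i+1} - g_i
    Returns the search direction -H_k g built from the stored pairs.
    """
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if s_list:  # scaled identity as the initial Hessian approximation
        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        r += s * (a - rho * y.dot(r))
    return -r
```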

4.
New accelerated nonlinear conjugate gradient algorithms, mainly modifications of Dai and Yuan's method for unconstrained optimization, are proposed. Under exact line search the algorithm reduces to the Dai and Yuan conjugate gradient computational scheme; under inexact line search it satisfies the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary in a very unpredictable manner, the algorithms are equipped with an acceleration scheme able to improve their efficiency. Computational results on a set of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai-Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes-Stiefel, Polak-Ribière-Polyak and CONMIN conjugate gradient algorithms, and the limited memory quasi-Newton algorithm L-BFGS, and compare favorably with CG_DESCENT. Within this numerical study, the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be more robust.
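For context, the Dai-Yuan scheme on which these algorithms are built generates directions with the conjugate gradient parameter

```latex
\beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^{\top}(g_{k+1} - g_k)}, \qquad
d_{k+1} = -g_{k+1} + \beta_k^{DY} d_k ,
```

where $g_k$ is the gradient and $d_k$ the search direction at iteration $k$.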

5.
Optimization, 2012, 61(6): 945-962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables, and as a rule the variables in such problems are restricted to certain meaningful intervals. In this article we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and dual subspace minimization. Preliminary numerical experiments confirm the usability of the method.
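The projection onto the bound constraints used in such projected gradient steps is the usual componentwise clipping onto the box $[l, u]$:

```latex
P_{[l,u]}(x)_i = \min\{\,\max\{x_i,\, l_i\},\, u_i\,\}.
```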

6.
A new diagonal quasi-Newton updating algorithm for unconstrained optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward finite-difference directional derivatives of the components of the gradient. Under mild classical assumptions, the convergence of the algorithm is proved to be linear. Numerical experiments with 80 unconstrained optimization test problems of different structures and complexities, as well as five applications from the MINPACK-2 collection, show that the suggested algorithm is more efficient and more robust than the quasi-Newton diagonal algorithm retaining only the diagonal elements of the BFGS update, the weak quasi-Newton diagonal algorithm, the quasi-Cauchy diagonal algorithm, the diagonal least-change secant update minimizing the trace of the matrix, the Cauchy algorithm with Oren-Luenberger scaling in its complementary form (i.e. the Barzilai-Borwein algorithm), the steepest descent algorithm, and the classical BFGS algorithm. It remains inferior, however, to the limited memory BFGS algorithm (L-BFGS).
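One plausible reading of that construction, assuming the off-diagonal curvature is ignored, estimates the $i$-th diagonal element from a forward difference of the gradient along the current step $s_k$:

```latex
(B_{k+1})_{ii} \approx \frac{\nabla_i f(x_k + s_k) - \nabla_i f(x_k)}{(s_k)_i},
```

which approximates $(H(x_k)s_k)_i/(s_k)_i$ and reduces to the exact diagonal when the Hessian is itself diagonal; the paper's precise scaling is not reproduced here.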

7.
This study presents a novel adaptive trust-region method for solving symmetric nonlinear systems of equations. The new method uses a derivative-free quasi-Newton formula in place of the exact Jacobian. Global convergence and local quadratic convergence are established without the nondegeneracy assumption on the exact Jacobian. Using the compact limited memory BFGS representation, we adapt a version of the new method to large-scale problems and develop a dogleg scheme for solving the associated trust-region subproblems, for which the sufficient decrease condition is established. Adaptive radius techniques improve the efficiency of the trust-region approach, while the compact limited memory BFGS representation adapts it to large-scale symmetric nonlinear systems of equations. Preliminary numerical results for both medium- and large-scale problems are reported.
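In its standard form, a dogleg scheme follows the piecewise-linear path from the Cauchy point $p^{U} = -\frac{g^{\top}g}{g^{\top}Bg}\,g$ to the quasi-Newton point $p^{B} = -B^{-1}g$, truncated at the trust-region boundary:

```latex
p(\tau) =
\begin{cases}
\tau\, p^{U}, & 0 \le \tau \le 1,\\
p^{U} + (\tau - 1)\,(p^{B} - p^{U}), & 1 \le \tau \le 2,
\end{cases}
\qquad \|p(\tau)\| \le \Delta_k .
```

This is the textbook construction; the paper's variant is adapted to the compact limited memory BFGS approximation.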

8.
A Comparative Study of Four Unconstrained Optimization Algorithms
From the standpoint of numerical experiments, the conjugate gradient method, the BFGS quasi-Newton method, the DFP quasi-Newton method, and the truncated Newton method are compared on three test problems, one of which is constructed with an adjustable problem size. Analysis of the test results shows that the truncated Newton method has an advantage in solving large-scale optimization problems, providing a useful reference for research on large-scale optimization algorithms.

9.
One impediment to the use of neural networks in pattern classification problems is the excessive time required for supervised learning in larger multilayer feedforward networks. The use of nonlinear optimization techniques for neural network training offers a means of reducing that computing time. Two key issues in the implementation of nonlinear programming are the choice of a method for computing the search direction and the degree of accuracy required of the subsequent line search. This paper examines these issues through a designed experiment using six different pattern classification tasks, four search direction methods (conjugate gradient, quasi-Newton, and two levels of limited memory quasi-Newton), and three levels of line search accuracy. For the simplest pattern classification problems, the conjugate gradient method performed well; for more complicated problems, the limited memory BFGS or the BFGS should be preferred; and for very large problems, the best choice appears to be the limited memory BFGS. It was also found that, for the line search methods used in this study, increasing accuracy did not improve efficiency.

10.
An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some components are simply defined, while the others are determined by the Barzilai-Borwein gradient method. A nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising and competitive with the well-known SPG method on a subset of bound constrained problems from the CUTEr collection. This work was supported by 973 project grant 2004CB719402 and NSF of China grant 10471036.
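For reference, the two classical Barzilai-Borwein step lengths, with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$, are

```latex
\alpha_k^{BB1} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad
\alpha_k^{BB2} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
```

and the gradient iteration takes $x_{k+1} = x_k - \alpha_k g_k$; which of the two forms the paper uses on the free components is not specified in the abstract.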

11.
In this paper, an active set limited memory BFGS algorithm is proposed for bound constrained optimization. Global convergence is established under suitable conditions, and numerical results show that the method is effective.

12.
We present modifications of the generalized conjugate gradient algorithm of Liu and Storey for unconstrained optimization problems (Ref. 1), extending its applicability to situations where the search directions are not defined. The use of new search directions is proposed and one additional condition is imposed on the inexact line search. The convergence of the resulting algorithm can be established under standard conditions for a twice continuously differentiable function with a bounded level set. Algorithms based on these modifications have been tested on a number of problems, showing considerable improvements. Comparisons with the BFGS and other quasi-Newton methods are also given.

13.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search, an approach recognized in numerical practice as competitive for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
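A typical nonmonotone line search of the Grippo-Lampariello-Lucidi type accepts a step length $\alpha_k$ along $d_k$ when

```latex
f(x_k + \alpha_k d_k) \le \max_{0 \le j \le m(k)} f(x_{k-j}) + \delta\, \alpha_k\, \nabla f(x_k)^{\top} d_k ,
```

with $\delta \in (0,1)$ and memory length $m(k)$; this is one common form, shown for orientation rather than as the paper's exact pair of conditions.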

14.
In this paper, a modified limited memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses the global convergence property without a convexity assumption on the objective function. Under suitable conditions, the global convergence of the proposed method is proved. Numerical results are reported which illustrate that the proposed method is efficient.

15.
This paper extends a certain damped technique, suitable for the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, to the limited memory BFGS method for large-scale unconstrained optimization. It is shown that the proposed technique maintains the global convergence property of the limited memory BFGS method on uniformly convex functions. Some numerical results illustrate the important role of the damped technique. Since the technique safely enforces the positive definiteness of the BFGS update for any value of the steplength, we also consider imposing only the first Wolfe–Powell condition on the steplength; then, as in the backtracking framework, only one gradient evaluation is performed per iteration. The proposed damped methods are reported to work much better than the limited memory BFGS method in several cases.
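The classical damped technique in Powell's sense, shown here in its standard form (the paper's limited memory adaptation may differ in details), replaces $y_k$ in the BFGS update by

```latex
\hat{y}_k = \eta_k y_k + (1 - \eta_k) B_k s_k, \qquad
\eta_k =
\begin{cases}
1, & s_k^{\top} y_k \ge 0.2\, s_k^{\top} B_k s_k,\\[4pt]
\dfrac{0.8\, s_k^{\top} B_k s_k}{s_k^{\top} B_k s_k - s_k^{\top} y_k}, & \text{otherwise},
\end{cases}
```

which guarantees $s_k^{\top}\hat{y}_k \ge 0.2\, s_k^{\top} B_k s_k > 0$ and hence a positive definite update regardless of the steplength.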

16.
A new limited memory quasi-Newton algorithm is developed in which the self-scaling symmetric rank-one update with Davidon's optimal condition is applied. Preliminary numerical tests show that the new algorithm is very efficient for large-scale problems as well as for general nonlinear optimization.
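The plain (unscaled) symmetric rank-one update on which this method is based reads

```latex
B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{(y_k - B_k s_k)^{\top} s_k},
```

with the usual safeguard of skipping the update when the denominator is close to zero; the self-scaling and Davidon's optimal conditioning are refinements on top of this formula.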

17.
We propose a multi-timescale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization, both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it onto the space of positive definite symmetric matrices and (b) invert the projected Hessian; the latter operation is computationally expensive. To save this computational effort, the QN-SF algorithm is based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27-62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with the gradient SF (G-SF) and JN-SF algorithms on two different problems: a simple stochastic function minimization problem and a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both G-SF and JN-SF in both settings. We then extend the QN-SF algorithm to constrained optimization, where it again performs much better than the JN-SF algorithm. Finally, we present the proof of convergence of the QN-SF algorithm in both the unconstrained and the constrained settings.
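For orientation, a common one-measurement form of the Gaussian smoothed functional gradient estimate, shown here generically rather than as the paper's exact estimator, is

```latex
\nabla f(x) \;\approx\; \frac{1}{\beta}\, \mathbb{E}_{\eta \sim \mathcal{N}(0,\, I)}\!\left[\, \eta \left( f(x + \beta \eta) - f(x) \right) \right],
```

where $\beta > 0$ is the smoothing parameter; in practice the expectation is estimated by averaging over sampled perturbations $\eta$.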

18.
王周宏, 《计算数学》, 2005, 27(4): 395-404
For large-scale unconstrained optimization problems, this paper studies a new limited memory trust-region implementation. A method is proposed that solves the trust-region subproblem exactly over a low-dimensional subspace (of dimension at most 2m+1), which greatly reduces the computational cost. The convergence of the method is analyzed, the numerical procedure is described in detail, and numerical experiments verify the effectiveness of the method.
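A plausible reading of the subspace construction, assuming the $m$ stored limited memory pairs $(s_i, y_i)$ plus the current gradient span the subspace (which gives the stated dimension bound of $2m+1$), is the restricted trust-region subproblem

```latex
\min_{d \in \mathcal{S}_k}\; g_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d
\quad \text{s.t.} \quad \|d\| \le \Delta_k,
\qquad
\mathcal{S}_k = \mathrm{span}\{\, g_k,\, s_{k-m}, \dots, s_{k-1},\, y_{k-m}, \dots, y_{k-1} \,\},
```

which can be solved exactly at low cost because $B_k$ acts on $\mathcal{S}_k$ through small matrices.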

19.
The limited memory BFGS method (L-BFGS) is an adaptation of the BFGS method to large-scale unconstrained optimization. However, the L-BFGS method need not converge for nonconvex objective functions, and it is inefficient on highly ill-conditioned problems. In this paper we propose a regularization strategy for the L-BFGS method, in which the regularization parameter plays a compensating role when the Hessian approximation tends to become ill-conditioned. We then obtain a regularized L-BFGS method and establish its global convergence even when the objective function is nonconvex. Numerical results show that the proposed method is efficient.
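A generic form of such a regularized quasi-Newton step, stated here as an assumption about the overall shape of the method rather than the paper's exact update, solves

```latex
(B_k + \mu_k I)\, d_k = -g_k, \qquad \mu_k > 0,
```

so that a larger regularization parameter $\mu_k$ bounds the condition number of the system and pulls the direction toward the steepest descent direction.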

20.
1. Introduction. In [6], a QPFTH method was proposed for solving the following nonlinear programming problem, where the functions f: R^n → R^1 and g_j: R^n → R^1, j ∈ J, are twice continuously differentiable. The QPFTH algorithm was developed for solving the sparse large-scale problem (1.1) and is two-step Q-quadratically and R-quadratically convergent (see [6]). The global convergence of this algorithm is discussed in detail in this paper. For the following investigation we require some notations and assumptions. The Lagrangian of problem (1.1) is defined by … Foundation of Jiangs…
