Similar Literature
20 similar documents found.
1.
《Optimization》2012,61(6):945-962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound-constrained optimization. The method combines the nonsmooth variable metric bundle method with the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and dual subspace minimization. Preliminary numerical experiments confirm the usability of the method.
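A minimal sketch of the projected-gradient ingredient used for constraint handling, assuming simple box constraints; the function names and the fixed step size are illustrative, not the authors' implementation:

```python
import numpy as np

def project_box(x, lb, ub):
    """Project x onto the box {z : lb <= z <= ub} componentwise."""
    return np.minimum(np.maximum(x, lb), ub)

def projected_gradient_step(x, grad, step, lb, ub):
    """One projected gradient step: move along -grad, then project back."""
    return project_box(x - step * grad, lb, ub)

# Toy usage on f(x) = ||x - c||^2 restricted to the unit box.
c = np.array([2.0, -1.0])
x = np.array([0.5, 0.5])
lb, ub = np.zeros(2), np.ones(2)
for _ in range(50):
    x = projected_gradient_step(x, 2.0 * (x - c), 0.1, lb, ub)
print(x)  # approaches the projection of c onto the box: [1, 0]
```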

2.
N. Karmitsa 《Optimization》2016,65(8):1599-1614
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either difficult or impossible. In such cases, the usual subgradient-based optimization methods cannot be used, but derivative-free methods are applicable since they do not require explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. The convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and the numerical experiments confirm the usability and efficiency of the method, especially in the case of large-scale problems.
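As a hedged illustration of the derivative-free idea (only function values are sampled, never subgradients), here is a simplified forward-difference stand-in for a discrete gradient; the actual discrete gradients of the method are more elaborate:

```python
import numpy as np

def discrete_gradient(f, x, h=1e-6):
    """Forward-difference surrogate used in place of a subgradient.

    A simplified stand-in for the discrete gradients of derivative-free
    bundle methods: only function values are evaluated, never gradients.
    """
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Usage on the nonsmooth function f(x) = |x_0| + |x_1|.
f = lambda x: np.abs(x).sum()
print(discrete_gradient(f, np.array([1.0, -2.0])))  # approx [ 1. -1.]
```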

3.
An active-set limited memory BFGS algorithm for large-scale bound-constrained optimization is proposed. The active sets are estimated by an identification technique, and the search direction is determined by a lower-dimensional system of linear equations in the free subspace. Implementations of the method on CUTE test problems are described, which show the efficiency of the proposed algorithm. This work was supported by 973 Project grant 2004CB719402 and NSF of China grant 10471036.
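A minimal sketch of one common active-set identification test for bound constraints: a bound is guessed active when the iterate sits on it and the gradient points out of the box. The specific test is an assumption for illustration, not necessarily the paper's identification technique:

```python
import numpy as np

def estimate_active_set(x, grad, lb, ub, tol=1e-8):
    """Guess which bound constraints are active at x.

    A bound is flagged active when x sits on it (up to tol) and the
    gradient pushes the iterate out of the feasible box.  The remaining
    indices form the free subspace in which the lower-dimensional
    quasi-Newton system is solved.
    """
    at_lower = (x - lb <= tol) & (grad > 0.0)
    at_upper = (ub - x <= tol) & (grad < 0.0)
    active = at_lower | at_upper
    return active, ~active

x = np.array([0.0, 0.5, 1.0])
g = np.array([2.0, -1.0, -3.0])
active, free = estimate_active_set(x, g, np.zeros(3), np.ones(3))
print(active)  # [ True False  True]
```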

4.
《Optimization》2012,61(12):1491-1509
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either difficult or impossible. In such cases derivative-free methods are the better (or only) choice, since they do not require explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions, and the numerical experiments confirm the usability of the method, especially for medium- and large-scale problems.

5.
In this paper, an active-set limited memory BFGS algorithm is proposed for bound-constrained optimization. Global convergence is established under suitable conditions. Numerical results show that the given method is effective.

6.
An algorithm based on a combination of polyhedral and quadratic approximation is given for finding stationary points of unconstrained minimization problems with locally Lipschitz problem functions that are not necessarily convex or differentiable. Global convergence of the algorithm is established. Under additional assumptions, it is shown that the algorithm generates Newton iterations and that the convergence is superlinear. Some encouraging numerical experience is reported. This work was supported by grant No. 201/96/0918 from the Czech Republic Grant Agency.
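The polyhedral part of such a combination is typically a cutting-plane model: the maximum of linearizations collected at earlier trial points. A small sketch (the quadratic term of the combination is omitted; names are illustrative):

```python
import numpy as np

def polyhedral_model(bundle, x):
    """Evaluate the cutting-plane model max_i f(y_i) + g_i^T (x - y_i).

    `bundle` is a list of (y_i, f_i, g_i) triples gathered at previous
    trial points; each triple contributes one linearization.
    """
    return max(fi + gi @ (x - yi) for (yi, fi, gi) in bundle)

# Model of f(x) = |x| built from linearizations at y = -1 and y = 1.
bundle = [
    (np.array([-1.0]), 1.0, np.array([-1.0])),  # f(-1) = 1, subgradient -1
    (np.array([1.0]), 1.0, np.array([1.0])),    # f(1)  = 1, subgradient +1
]
print(polyhedral_model(bundle, np.array([0.25])))  # 0.25, equals |0.25| here
```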

7.
This paper introduces an algorithm for convex minimization which includes quasi-Newton updates within a proximal point algorithm that depends on a preconditioned bundle subalgorithm. The method uses the Hessian of a certain outer function, which depends on the Jacobian of a proximal point mapping, which in turn depends on the preconditioner matrix and on a Lagrangian Hessian relative to a certain tangent space. Convergence is proved under boundedness assumptions on the preconditioner sequence. Research supported by NSF Grant No. DMS-9402018 and by Institut National de Recherche en Informatique et en Automatique, France.
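A minimal sketch of the outer proximal point loop, with the paper's preconditioned bundle subalgorithm replaced by a generic inner solver; the solver choice and constants are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, c=1.0, iters=20):
    """Outer loop: x+ = argmin_z f(z) + ||z - x||^2 / (2c).

    The regularized inner subproblem is handed to a generic
    derivative-free solver here; the paper instead uses a
    preconditioned bundle subalgorithm with quasi-Newton updates.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        prox = lambda z, x=x: f(z) + np.dot(z - x, z - x) / (2.0 * c)
        x = minimize(prox, x, method="Nelder-Mead").x
    return x

print(proximal_point(lambda z: np.abs(z).sum(), [3.0, -2.0]))  # near [0, 0]
```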

8.
9.
We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems, and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage, in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov. This research was supported by the Polish Academy of Sciences and by a grant from the French Ministry of Research and Technology.
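One building block of such level methods, shown as a hedged sketch: projecting the current point onto the level set of a single cutting plane has a closed form (projections onto the full polyhedral level set are quadratic programs). Names and the example data are illustrative:

```python
import numpy as np

def project_onto_cut(x, y, fy, g, level):
    """Project x onto the half-space {z : fy + g^T (z - y) <= level}.

    Each cutting plane whose linearization exceeds the current level
    value pushes the iterate back by this closed-form projection.
    """
    viol = fy + g @ (x - y) - level
    if viol <= 0.0:
        return x  # already inside this cut's level half-space
    return x - (viol / (g @ g)) * g

# Cut built at y = [1, 0] with f(y) = 1 and subgradient g = [1, 0];
# target level 0.5.  The projection moves x to the nearest point
# whose linearized value equals the level.
x = np.array([2.0, 0.0])
print(project_onto_cut(x, np.array([1.0, 0.0]), 1.0, np.array([1.0, 0.0]), 0.5))
# -> [0.5 0. ]
```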

10.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Unlike many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing scalarization in the classical sense. The main idea is to find a descent direction for every objective function separately by utilizing the proximal bundle approach, and then to form a common descent direction for all the objective functions. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are reported.
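For two objectives the common-descent idea can be made concrete with the minimum-norm element of the convex hull of the two (sub)gradients, which has a closed form; this standard construction is shown only as an illustration of combining per-objective directions, not as the paper's exact subproblem:

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative of the minimum-norm point of conv{g1, g2}.

    Minimizing ||t*g1 + (1-t)*g2|| over t in [0, 1] has a closed-form
    solution; whenever the result is nonzero, its negative is a descent
    direction for both objectives simultaneously.
    """
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0.0 else np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0)
    return -(t * g1 + (1.0 - t) * g2)

print(common_descent_direction(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
# -> [-0.5 -0.5]: locally decreases both objectives
```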

11.
This paper presents a descent method for minimizing a sum of possibly nonsmooth convex functions. Search directions are found by solving subproblems obtained by replacing all but one of the component functions with their polyhedral approximations and adding a quadratic term. The algorithm is globally convergent and terminates when the objective function happens to be polyhedral. It yields a new decomposition method for solving large-scale linear programs with dual block-angular structure. Supported by Program CPBP 02.15. The author thanks the two referees for their helpful suggestions.

12.
Robinson has proposed the bundle-based decomposition algorithm to solve a class of structured large-scale convex optimization problems. In this method, the original problem is transformed (by dualization) into an unconstrained nonsmooth concave optimization problem, which is in turn solved by a modified bundle method. In this paper, we give a posteriori error estimates on the approximate primal optimal solution and on the duality gap. We describe an implementation and present computational experience with a special case of this class of problems, namely, block-angular linear programming problems. We observe that the method is efficient in obtaining the approximate optimal solution and compares favorably with MINOS and an advanced implementation of the Dantzig-Wolfe decomposition method.

13.
《Optimization》2012,61(2):249-263
New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or a quasi-Newton direction. The use of the latter directions improves the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments on some unconstrained test problems are reported, and the proposed algorithms are compared with some existing similar methods. This comparison demonstrates the efficiency of the proposed combined methods.
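A hedged sketch of the switching idea: accept the (quasi-)Newton direction when it passes a descent test, otherwise fall back to the anti-gradient. The particular angle test below is a common textbook safeguard, not necessarily the paper's rule:

```python
import numpy as np

def combined_direction(grad, hess, c=1e-4):
    """Newton direction if it is a sufficiently good descent direction,
    anti-gradient otherwise.

    The test -g^T d >= c ||g|| ||d|| guards against ascent or nearly
    orthogonal Newton steps; the switch preserves global convergence
    while keeping the fast local rate when Newton steps are accepted.
    """
    try:
        d = np.linalg.solve(hess, -grad)
        if -(grad @ d) >= c * np.linalg.norm(grad) * np.linalg.norm(d):
            return d  # Newton step accepted
    except np.linalg.LinAlgError:
        pass  # singular Hessian: no Newton step available
    return -grad  # globally safe anti-gradient

g = np.array([1.0, 1.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])  # indefinite Hessian
print(combined_direction(g, H))  # falls back to [-1. -1.]
```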

14.
In this paper, a subspace limited memory BFGS algorithm for solving large-scale bound-constrained optimization problems is developed. It is a modification of the subspace limited memory quasi-Newton method proposed by Ni and Yuan [Q. Ni, Y.X. Yuan, A subspace limited memory quasi-Newton algorithm for large-scale nonlinear bound constrained optimization, Math. Comput. 66 (1997) 1509–1520]. An important property of the proposed method is that more limited memory BFGS updates are used. Under appropriate conditions, the global convergence of the method is established. Implementations of the method on CUTE test problems are presented, which indicate that the modifications are beneficial to the performance of the algorithm.

15.
In this work we consider the problem of minimizing a continuously differentiable function over a feasible set defined by box constraints. We present a decomposition method based on the solution of a sequence of subproblems. In particular, we state conditions on the rule for selecting the subproblem variables that are sufficient to ensure global convergence of the generated sequence without convexity assumptions. The conditions require selecting suitable variables (related to the violation of the optimality conditions) to guarantee the theoretical convergence properties, while leaving the freedom to select any other group of variables to accelerate convergence.
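A sketch of one rule that fits this framework, assuming box constraints: measure each variable's violation of first-order optimality by the projected-gradient residual and put the worst offenders in the working set. The rule and names are illustrative, not the paper's exact conditions:

```python
import numpy as np

def select_working_set(x, grad, lb, ub, k=2):
    """Pick the k variables most violating first-order optimality.

    For box constraints, |x_i - proj_[lb_i, ub_i](x_i - grad_i)| is a
    per-variable stationarity residual: it vanishes exactly when the
    optimality conditions hold for variable i.
    """
    viol = np.abs(x - np.clip(x - grad, lb, ub))
    return np.argsort(viol)[-k:]

x = np.array([0.5, 0.0, 1.0])
g = np.array([0.1, -2.0, 0.05])
print(select_working_set(x, g, np.zeros(3), np.ones(3)))  # indices [0 1]
```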

16.
On the limited memory BFGS method for large scale optimization
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems. This work was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contract DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
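The core of L-BFGS is the two-loop recursion, which applies the implicit inverse Hessian approximation using only the m most recent correction pairs. A compact sketch (this is the standard algorithm; variable names are ours):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: returns -H_k * grad for the L-BFGS matrix H_k.

    s_list / y_list hold the m most recent pairs s = x_{k+1} - x_k and
    y = g_{k+1} - g_k, so each call costs only O(m n) time and storage.
    """
    q = grad.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # simple initial scaling gamma = s^T y / y^T y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q

# With one exact pair from f(x) = x^2, the direction is the Newton step.
print(lbfgs_direction(np.array([2.0]), [np.array([0.5])], [np.array([1.0])]))
# -> [-1.]
```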

17.
18.
This paper presents some variants of the inexact Newton method for solving systems of nonlinear equations defined by locally Lipschitzian functions. These methods use variants of Newton's iteration in association with Krylov subspace methods for solving the Jacobian linear systems. Global convergence of the proposed algorithms is established under a nonmonotonic backtracking strategy. The local convergence, based on the assumptions of semismoothness and BD-regularity at the solution, is characterized, and the way to choose an inexact forcing sequence that preserves the rapid convergence of the proposed methods is also indicated. Numerical examples are given to show the practical viability of these approaches. Copyright © 2009 John Wiley & Sons, Ltd.
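A hedged sketch of the inexact Newton idea: the Newton system J(x) d = -F(x) is solved only to a relative residual eta (the forcing term). For self-containment, the Krylov solver is replaced here by a hand-rolled CG on the normal equations; the paper's variants apply Krylov methods to the Jacobian system directly:

```python
import numpy as np

def inexact_newton(F, J, x0, iters=20, eta=1e-2):
    """Inexact Newton for F(x) = 0: solve J d = -F only loosely.

    The inner solve stops at relative residual eta, the 'forcing term'
    that trades per-step cost against the local convergence rate.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < 1e-12:
            break
        # CG on the normal equations J^T J d = -J^T F, truncated early.
        A, b = Jx.T @ Jx, -Jx.T @ Fx
        d, r = np.zeros_like(x), b.copy()
        p, tol = r.copy(), eta * np.linalg.norm(b)
        while np.linalg.norm(r) > tol:
            Ap = A @ p
            a = (r @ r) / (p @ Ap)
            d = d + a * p
            r_new = r - a * Ap
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        x = x + d
    return x

# Usage: solve x^2 = 4, y^2 = 9 starting from (1, 1).
F = lambda x: np.array([x[0] ** 2 - 4.0, x[1] ** 2 - 9.0])
J = lambda x: np.diag(2.0 * x)
print(inexact_newton(F, J, [1.0, 1.0]))  # approx [2. 3.]
```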

19.
In this paper, we propose a strongly sub-feasible direction method for the solution of inequality constrained optimization problems whose objective functions are not necessarily differentiable. The algorithm combines the subgradient aggregation technique with the ideas of the generalized cutting plane method and of the strongly sub-feasible direction method; as a result, a new direction-finding subproblem and a new line search strategy are presented. The algorithm can not only accept infeasible starting points but also preserve the "strong sub-feasibility" of the current iteration without unduly increasing the objective value. Moreover, once a feasible iterate occurs, the algorithm automatically becomes a feasible descent algorithm. Global convergence is proved, and some preliminary numerical results show that the proposed algorithm is efficient.

20.
This paper attempts to exploit, within limited memory algorithms, the information provided by objective function values. First, using interpolation conditions, a new quadratic function is constructed to approximate the objective, which yields a new weak secant equation. This weak secant equation is then combined with the weak secant equation of Yuan [1], giving a family of limited memory BFGS-type algorithms that includes the standard L-BFGS, and the convergence of this family is proved. Numerical experiments were carried out on test functions selected from the standard test library CUTE; the results show that...
