Similar Documents
20 similar documents found.
1.
Optimization, 2012, 61(12): 1491–1509
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases derivative-free methods are the better (or only) choice, since they do not use explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions, and the numerical experiments to be presented confirm the usability of the method, especially for medium-size and large-scale problems.
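To make the discrete-gradient idea concrete, here is a minimal Python sketch of a finite-difference surrogate for a subgradient; the construction used by the method itself is more careful, and the function `discrete_gradient`, its arguments, and the stepsize `h` are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def discrete_gradient(f, x, d, h=1e-6):
    """Finite-difference surrogate for a subgradient of f near x,
    evaluated along a search direction d (a simplified sketch)."""
    n = x.size
    y = x + h * d                       # perturbed base point
    fy = f(y)
    v = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        v[i] = (f(y + e) - fy) / h      # coordinate-wise divided difference
    return v
```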

2.
Optimization, 2012, 61(6): 945–962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and the dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.
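The projection underlying the bound-constraint handling is a componentwise clipping; a minimal sketch, assuming box constraints lower ≤ x ≤ upper (identifiers are illustrative):

```python
import numpy as np

def project_onto_box(x, lower, upper):
    """Projection onto the box {x : lower <= x <= upper}, the basic
    operation in projected-gradient constraint handling."""
    return np.minimum(np.maximum(x, lower), upper)

# One projected step along a search direction d with stepsize t:
# x_new = project_onto_box(x + t * d, lower, upper)
```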

3.
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19 (2004), pp. 673–692] we have described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.

4.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing a scalarization in the classical sense. The main idea is to find a descent direction for every objective function separately by utilizing the proximal bundle approach, and then to combine these into a common descent direction for all the objectives. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are reported.
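A classical way to realize the common-direction step is to take the min-norm element of the convex hull of the per-objective subgradients; the Python sketch below illustrates that construction (it assumes SciPy is available and is not the paper's exact bundle procedure).

```python
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(subgradients):
    """Min-norm element of the convex hull of per-objective subgradients.
    If g* is that element and g* != 0, then d = -g* is a descent
    direction for every objective (a standard construction)."""
    G = np.asarray(subgradients)          # m x n, one row per objective
    m = G.shape[0]
    obj = lambda lam: 0.5 * np.dot(lam @ G, lam @ G)
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons, method='SLSQP')
    return -(res.x @ G)                   # candidate common descent direction
```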

5.
Zhang Qingye, Gao Yan. Operations Research Transactions (运筹学学报), 2016, 20(2): 113–120
We propose a hybrid bundle method for solving nonsmooth convex programming problems. At each iteration, the method augments the objective function with a proximal term and imposes a trust-region constraint on the feasible region. As an organic combination of the proximal bundle method and the trust-region bundle method, the hybrid method switches automatically between the two. A convergence analysis shows that the method is globally convergent, and concluding numerical examples verify the effectiveness of the algorithm.
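In symbols, one plausible reading of the iteration is the subproblem

$$ y_{k+1} \in \operatorname*{arg\,min}_{y}\; \hat f_k(y) + \frac{u_k}{2}\,\lVert y - x_k\rVert^2 \quad \text{s.t.}\quad \lVert y - x_k\rVert \le \Delta_k, $$

where $\hat f_k$ is the cutting-plane model of the objective at the current point $x_k$, $u_k > 0$ is the proximal weight, and $\Delta_k$ is the trust-region radius; the notation is illustrative rather than taken from the paper.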

6.
7.
We consider a line-search-based proximal gradient algorithm, together with its convergence analysis, for convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regularizer. We prove that the algorithm converges R-linearly when the gradient of the smooth part is locally Lipschitz continuous, and, for the case where the nonsmooth part is the sparse group LASSO regularizer, we prove that an error bound condition holds and obtain a linear convergence rate. Finally, numerical experiments confirm the effectiveness of the method.
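A generic sketch of such a scheme in Python, specialized to the plain ℓ1 regularizer for brevity (the sparse group LASSO prox acts blockwise but is analogous); the identifiers and parameter choices are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1, the simplest sparse regularizer."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(f, grad_f, reg, x0, t0=1.0, beta=0.5, iters=100):
    """Proximal gradient method with backtracking on the stepsize t:
    shrink t until a sufficient-decrease test against the quadratic
    upper model of the smooth part f holds (a generic sketch)."""
    x = x0.copy()
    for _ in range(iters):
        g, t = grad_f(x), t0
        while True:
            z = soft_threshold(x - t * g, t * reg)
            dx = z - x
            if f(z) <= f(x) + g @ dx + (dx @ dx) / (2 * t) or t < 1e-12:
                break
            t *= beta                  # backtrack on the stepsize
        x = z
    return x
```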

8.
This paper presents a descent method for minimizing a sum of possibly nonsmooth convex functions. Search directions are found by solving subproblems obtained by replacing all but one of the component functions with their polyhedral approximations and adding a quadratic term. The algorithm is globally convergent and terminates when the objective function happens to be polyhedral. It yields a new decomposition method for solving large-scale linear programs with dual block-angular structure. Supported by Program CPBP 02.15. The author thanks the two referees for their helpful suggestions.
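For an objective $f = \sum_i f_i$, a plausible rendering of the direction-finding subproblem keeps one component $f_j$ exact, replaces the others by polyhedral (cutting-plane) models $\check f_i$, and adds a quadratic term:

$$ \min_{y}\; f_j(y) + \sum_{i \neq j} \check f_i(y) + \frac{u}{2}\,\lVert y - x_k\rVert^2, \qquad u > 0; $$

the notation here is illustrative rather than the paper's.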

9.
An algorithm for solving linearly constrained optimization problems is proposed. The search direction is computed by a bundle principle and the constraints are treated through an active set strategy. Difficulties that arise when the objective function is nonsmooth require a clever choice of a constraint to relax. A certain nondegeneracy assumption is necessary to obtain convergence. Most of this research was performed when the author was with I.N.R.I.A. (Domaine de Voluceau-Rocquencourt, B.P. 105, 78153 Le Chesnay Cédex, France). This research was supported in part by the National Science Foundation, Grants No. DMC-84-51515 and OIR-85-00108.

10.
In line with our previous approach of devising proximal algorithms for nonsmooth convex optimization by applying Nesterov's fast gradient concept to the Moreau–Yosida regularization of a convex function, we develop three new proximal algorithms for nonsmooth convex optimization. In these algorithms, the errors in computing approximate solutions for the Moreau–Yosida regularization are not fixed beforehand, while the complexity estimates already established are preserved. We report some preliminary computational results to give a first estimate of their performance.
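For reference, the Moreau–Yosida regularization of a convex function $f$ and its gradient are the standard objects

$$ F_\lambda(x) = \min_{y}\Big\{ f(y) + \frac{1}{2\lambda}\,\lVert y - x\rVert^2 \Big\}, \qquad \nabla F_\lambda(x) = \frac{x - p_\lambda(x)}{\lambda}, $$

where $p_\lambda(x)$ is the proximal minimizer; computing $p_\lambda(x)$ only approximately is where the controllable errors mentioned above enter.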

11.
A proximal bundle method is presented for minimizing a nonsmooth convex function f. At each iteration, it requires only one approximate evaluation of f and its ε-subgradient, and it finds a search direction via quadratic programming. When applied to the Lagrangian decomposition of convex programs, it allows for inexact solutions of decomposed subproblems; yet, by increasing their required accuracy automatically, it asymptotically finds both the primal and dual solutions. It is an implementable approximate version of the proximal point algorithm. Some encouraging numerical experience is reported. The author thanks two anonymous referees for their valuable comments. Research supported by the State Committee for Scientific Research under Grant 8550502206.

12.
An algorithm based on a combination of the polyhedral and quadratic approximation is given for finding stationary points for unconstrained minimization problems with locally Lipschitz problem functions that are not necessarily convex or differentiable. Global convergence of the algorithm is established. Under additional assumptions, it is shown that the algorithm generates Newton iterations and that the convergence is superlinear. Some encouraging numerical experience is reported. This work was supported by the grant No. 201/96/0918 given by the Czech Republic Grant Agency.

13.
Conjugate gradient methods are important for large-scale unconstrained optimization. This paper proposes an acceleration of these methods using a modification of the steplength. The idea is to modify, in a multiplicative manner, the steplength α_k computed by the Wolfe line search conditions, by means of a positive parameter η_k, in such a way as to improve the behavior of the classical conjugate gradient algorithms. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with some conjugate gradient algorithms, using a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that the accelerated computational scheme outperforms the corresponding conjugate gradient algorithms.
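A minimal Python sketch of the acceleration step, with η_k obtained here by one-dimensional quadratic interpolation along the search direction; the paper's precise choice of η_k may differ, and all identifiers are illustrative:

```python
import numpy as np

def accelerated_cg_step(f, grad_f, x, d, alpha):
    """Rescale a Wolfe steplength alpha multiplicatively by eta > 0,
    with eta from quadratic interpolation of phi(eta) = f(x + eta*alpha*d).
    Falls back to the unmodified step when interpolation is unreliable."""
    g = grad_f(x)
    z = x + alpha * d
    a = alpha * (g @ d)                  # phi'(0), negative for descent d
    b = alpha * ((grad_f(z) - g) @ d)    # curvature estimate along d
    if b > 0:
        eta = -a / b                     # minimizer of the quadratic model
        x_acc = x + eta * alpha * d
        if eta > 0 and f(x_acc) < f(z):
            return x_acc                 # accelerated step
    return z
```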

14.
We give a bundle method for constrained convex optimization. Instead of using penalty functions, it shifts iterates towards feasibility, by way of a Slater point, assumed to be known. Besides, the method accepts an oracle delivering function and subgradient values with unknown accuracy. Our approach is motivated by a number of applications in column generation, in which constraints are positively homogeneous (so that zero is a natural Slater point) and an exact oracle may be time consuming. Finally, our convergence analysis employs arguments which have been little used so far in the bundle community. The method is illustrated on a number of cutting-stock problems. Research supported by INRIA New Investigation Grant “Convex Optimization and Dantzig–Wolfe Decomposition”.
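Schematically, the shift can be realized by moving a candidate point along the segment towards the known Slater point until the constraints hold; a Python sketch under that reading (all identifiers are assumptions):

```python
def shift_towards_slater(y, s, feasible, tol=1e-8):
    """Bisection on the segment [y, s]: move the candidate y just far
    enough towards the strictly feasible Slater point s to restore
    feasibility; `feasible` tests the constraints."""
    if feasible(y):
        return y
    lo, hi = 0.0, 1.0              # t = 1 gives s, feasible by assumption
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible((1 - mid) * y + mid * s):
            hi = mid               # feasible: try moving less far
        else:
            lo = mid               # infeasible: move further towards s
    return (1 - hi) * y + hi * s
```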

15.
Optimization, 2012, 61(5): 1131–1151
We present a bundle-type method for minimizing non-convex non-smooth functions. Our approach is based on the partition of the bundle into two sets, taking into account the local convex or concave behaviour of the objective function. Termination at a point satisfying an approximate stationarity condition is proved and numerical results are provided.

16.
This paper presents new versions of proximal bundle methods for solving convex constrained nondifferentiable minimization problems. The methods employ ℓ1 or ℓ∞ exact penalty functions with new penalty updates that limit unnecessary penalty growth. In contrast to other methods, some of them are insensitive to problem function scaling. Global convergence of the methods is established, as well as finite termination for polyhedral problems. Some encouraging numerical experience is reported. The ideas presented may also be used in variable metric methods for smooth nonlinear programming. This research was supported by the Polish Academy of Sciences.

17.
In this paper, we present a general scheme for bundle-type algorithms which includes a nonmonotone line search procedure and for which global convergence can be proved. Some numerical examples are reported, showing that the nonmonotonicity can be beneficial from a computational point of view. This work was partially supported by the National Research Program on Metodi di ottimizzazione per le decisioni, Ministero dell'Università e della Ricerca Scientifica e Tecnologica, and by ASI: Agenzia Spaziale Italiana.
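A generic Armijo-style sketch of a nonmonotone acceptance test in Python (in the spirit of Grippo, Lampariello, and Lucidi; the bundle scheme in the paper adapts the idea to nonsmooth models, and the parameters here are illustrative):

```python
def nonmonotone_armijo(f, x, d, g, history, M=10, gamma=1e-4, beta=0.5):
    """Nonmonotone Armijo test: accept the step when f decreases relative
    to the maximum of the last M recorded objective values rather than
    the current value f(x); `history` is that list of recent values,
    and x, d, g are NumPy arrays (point, direction, gradient)."""
    f_ref = max(history[-M:])           # nonmonotone reference value
    alpha = 1.0
    while f(x + alpha * d) > f_ref + gamma * alpha * (g @ d):
        alpha *= beta                   # backtrack
        if alpha < 1e-12:
            break
    return alpha
```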

18.
We present a new bundle method in which the use of the proximal trajectory of the cutting plane function allows the automatic tuning of the proximity parameter. An updating criterion for the stability center, based on the agreement between the objective function and the polyhedral model, is presented. Convergence properties are provided together with some numerical results. This research has been partially supported by the Italian Ministero dell’Istruzione, dell’Università e della Ricerca Scientifica under FIRB Project RBNE01WBBB, Large-Scale Nonlinear Optimization.

19.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines a line search and a trust region to generate the new iterate at each step, and therefore enjoys the advantages of both approaches. It makes full use of multi-step information from previous iterations while avoiding the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for large-scale optimization problems. We also design an implementable version of the method and analyze its global convergence under weak conditions. Because it draws on more information from previous iterative steps, this idea enables the design of fast, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation compared with other similar methods.

20.
Variable metric methods from the Broyden family are well known and commonly used for unconstrained minimization. These methods have good theoretical and practical convergence properties which depend on a selection of free parameters. We demonstrate, using extensive computational experiments, the influence of both the Biggs stabilization parameter and the Oren scaling parameter on 12 individual variable metric updates, two of which are new. This paper focuses on a class of variable metric updates belonging to the so-called preconvex part of the Broyden family. These methods outperform the more familiar BFGS method. We also experimentally demonstrate the efficiency of the controlled scaling strategy for problems of sufficient size and sparsity.
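For reference, the Broyden family of inverse-Hessian updates can be written as the one-parameter blend

$$ H_{k+1}(\phi) = (1-\phi)\,H^{\mathrm{DFP}}_{k+1} + \phi\,H^{\mathrm{BFGS}}_{k+1}, $$

where both terms are built from the differences $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$; this is the standard formula, with the preconvex part corresponding to a particular range of the parameter $\phi$.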
