Similar Literature
20 similar documents found.
1.
《Optimization》2012,61(12):1491-1509
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases, derivative-free methods are the better (or only) choice, since they do not use explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions, and the numerical experiments presented confirm the usability of the method, especially for medium-size and large-scale problems.
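To make the derivative-free idea concrete, the sketch below estimates a (sub)gradient purely from function values by coordinate-wise forward differences. This is a generic illustration under simplifying assumptions, not the discrete gradient construction of the paper; the helper name `finite_difference_subgradient` and the step `h` are chosen here.

```python
import numpy as np

def finite_difference_subgradient(f, x, h=1e-6):
    """Coordinate-wise forward-difference estimate of a (sub)gradient of f at x.

    Hypothetical helper: a crude stand-in for the discrete gradients used by
    derivative-free bundle methods; it needs only function values.
    """
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h   # forward difference along coordinate i
    return g

# Example on f(x) = |x1| + x2**2 at a point where f is differentiable
f = lambda x: abs(x[0]) + x[1] ** 2
print(finite_difference_subgradient(f, np.array([1.0, 2.0])))  # approximately [1, 4]
```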

2.
N. Karmitsa 《Optimization》2016,65(8):1599-1614
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases, the usual subgradient-based optimization methods cannot be used. However, derivative-free methods are applicable, since they do not use explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. The convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and the numerical experiments confirm the usability and efficiency of the method, especially in the case of large-scale problems.

3.
An efficient algorithm for solving nonlinear programs with noisy equality constraints is introduced and analyzed. The unknown exact constraints are replaced by surrogates based on the bundle idea, a well-known strategy from nonsmooth optimization. This concept allows us to compute the surrogates quickly by solving simple quadratic optimization problems, to control the memory needed by the algorithm, and to prove differentiability properties of the surrogate functions. The latter aspect allows us to invoke a sequential quadratic programming method. The overall algorithm is of the quasi-Newton type. Besides convergence theorems, qualification results are given and numerical test runs are discussed.

4.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing a scalarization in the classical sense. The main idea is to find a descent direction for every objective function separately by utilizing the proximal bundle approach, and then to combine these into a common descent direction for all the objectives. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are considered.
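One standard way to turn per-objective information into a single common descent direction is to take the negative of the minimum-norm element of the convex hull of the objectives' subgradients. The sketch below illustrates only that classical construction, not the paper's exact rule; the helper `common_descent_direction` is a name introduced here, and SciPy's SLSQP is used to solve the small simplex-constrained quadratic program.

```python
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(subgrads):
    """Negative min-norm element of the convex hull of the given subgradients.

    Illustrative only: d = -argmin ||sum_i w_i g_i|| over the unit simplex is a
    direction making a negative inner product with every g_i (when one exists).
    """
    G = np.asarray(subgrads, dtype=float)                 # one subgradient per objective
    m = G.shape[0]
    obj = lambda w: float(np.dot(w @ G, w @ G))           # ||G^T w||^2
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints=cons, method="SLSQP")
    return -(res.x @ G)

# Two objectives with subgradients (1, 2) and (2, -1): the returned direction
# has a negative inner product with both of them.
print(common_descent_direction([[1.0, 2.0], [2.0, -1.0]]))
```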

5.
Hemivariational inequalities can be considered as a generalization of variational inequalities. Their origin is in the nonsmooth mechanics of solids, especially in nonmonotone contact problems. The solution of a hemivariational inequality proves to be a substationary point of some functional, and thus it can be found by nonsmooth and nonconvex optimization methods. We consider two types of bundle methods for solving hemivariational inequalities numerically: the proximal bundle and bundle-Newton methods. The proximal bundle method is based on a first-order polyhedral approximation of the locally Lipschitz continuous objective function. To obtain a better convergence rate, the bundle-Newton method also incorporates second-order information about the objective function in the form of an approximate Hessian. Since the optimization problem arising from the hemivariational inequality has a dominating quadratic part, the second-order method should be a good choice. The main question in the functioning of the methods is whether the potentially better convergence rate of the bundle-Newton method outweighs its increased computational demand.

6.
New Bundle Methods for Solving Lagrangian Relaxation Dual Problems
Bundle methods have been used frequently to solve nonsmooth optimization problems. In these methods, subgradient directions from past iterations are accumulated in a bundle, and a trial direction is obtained by solving a quadratic program based on the information contained in the bundle. A line search is then performed along the trial direction, generating a serious step if the function value is sufficiently improved, or a null step otherwise. Bundle methods have been used to maximize the nonsmooth dual function in Lagrangian relaxation for integer optimization problems, where the subgradients are obtained by minimizing the performance index of the relaxed problem. This paper improves bundle methods by making good use of near-minimum solutions that are obtained while solving the relaxed problem. The bundle information is thus enriched, leading to better search directions and fewer null steps. Furthermore, a simplified bundle method is developed, where a fuzzy rule is used to linearly combine directions from near-minimum solutions, replacing quadratic programming and line search. When the simplified bundle method is specialized to an important class of problems where the relaxed problem can be solved by dynamic programming, fuzzy dynamic programming is developed to obtain efficiently near-optimal solutions and their weights for the linear combination. This method is then applied to job shop scheduling problems, leading to better performance than previously reported in the literature.
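For readers unfamiliar with the serious/null step mechanics described above, here is a minimal proximal bundle sketch for an unconstrained convex problem. It is a generic, textbook-style illustration assuming a subgradient oracle and using SciPy's SLSQP to solve the proximal quadratic subproblem; it is not the Lagrangian-relaxation variant proposed in the paper, and the function names and parameters (`t`, `m`) are chosen here.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_bundle(f, subgrad, x0, t=1.0, m=0.1, tol=1e-6, max_iter=100):
    """Minimal proximal bundle sketch for a convex nonsmooth f (illustrative only)."""
    xk = np.asarray(x0, dtype=float)
    n = xk.size
    bundle = [(xk.copy(), f(xk), subgrad(xk))]           # (point, value, subgradient)
    for _ in range(max_iter):
        fk = f(xk)
        # Proximal subproblem in z = (x, v):
        #   min v + ||x - xk||^2 / (2t)   s.t.  v >= f(yi) + gi.(x - yi) for all cuts
        obj = lambda z: z[n] + np.dot(z[:n] - xk, z[:n] - xk) / (2.0 * t)
        cons = [{"type": "ineq",
                 "fun": lambda z, yi=yi, fi=fi, gi=gi:
                        z[n] - (fi + gi @ (z[:n] - yi))}
                for (yi, fi, gi) in bundle]
        res = minimize(obj, np.concatenate([xk, [fk]]), constraints=cons, method="SLSQP")
        x_trial, v_model = res.x[:n], res.x[n]
        predicted = fk - v_model                          # predicted decrease
        if predicted < tol:
            return xk
        f_trial, g_trial = f(x_trial), subgrad(x_trial)
        if fk - f_trial >= m * predicted:                 # serious step: move the center
            xk = x_trial
        # otherwise null step: keep xk and only enrich the bundle
        bundle.append((x_trial, f_trial, g_trial))
    return xk

# Example: minimize f(x) = |x1| + 2|x2| starting from (3, -2)
f = lambda x: abs(x[0]) + 2 * abs(x[1])
g = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
print(proximal_bundle(f, g, [3.0, -2.0]))
```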

7.
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19 (2004), pp. 673–692] we described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.

8.
Image recovery problems can be solved using optimization techniques. They often lead to the solution of either a large-scale convex quadratic program or, equivalently, a nondifferentiable minimization problem. To solve the quadratic program, we use an infeasible predictor-corrector interior-point method, presented in the more general framework of monotone linear complementarity problems (LCP). The algorithm has polynomial complexity and converges with an asymptotically quadratic rate. When implementing the method to recover images, we take advantage of the underlying sparsity of the problem. We obtain good performance, which we assess by comparing the method with a variable-metric proximal bundle algorithm applied to the equivalent nonsmooth problem.

9.
We propose restricted memory level bundle methods for constrained convex nonsmooth optimization problems whose objective and constraint functions are known through oracles (black boxes) that might provide inexact information. Our approach is general and covers many instances of inexact oracles, such as upper, lower and on-demand accuracy oracles. We show that the proposed level bundle methods are convergent as long as the memory is restricted to at least four well-chosen linearizations: two for the objective function and two for the constraints. The proposed methods are particularly suitable for both joint chance-constrained problems and two-stage stochastic programs with risk measure constraints. The approach is assessed on realistic joint constrained energy problems arising in robust cascaded-reservoir management.

10.
This paper introduces a new derivative-free class of mesh adaptive direct search (MADS) algorithms for solving constrained mixed variable optimization problems, in which the variables may be continuous or categorical. This new class of algorithms, called mixed variable MADS (MV-MADS), generalizes both mixed variable pattern search (MVPS) algorithms for linearly constrained mixed variable problems and MADS algorithms for general constrained problems with only continuous variables. The convergence analysis, which makes use of the Clarke nonsmooth calculus, similarly generalizes the existing theory for both MVPS and MADS algorithms, and reasonable conditions are established for ensuring convergence of a subsequence of iterates to a suitably defined stationary point in the nonsmooth and mixed variable sense.

11.
In this paper we propose an alternating block version of a variable metric linesearch proximal gradient method. This algorithm addresses problems where the objective function is the sum of a smooth term, whose variables may be coupled, plus a separable part given by the sum of two or more convex, possibly nonsmooth functions, each depending on a single block of variables. Our approach is characterized by the possibility of performing several proximal gradient steps for updating every block of variables and by the Armijo backtracking linesearch for adaptively computing the steplength parameter. Under the assumption that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and that the gradient of the smooth part is locally Lipschitz continuous, we prove the convergence of the sequence of iterates generated by the method. Numerical experiments on an image blind deconvolution problem show the improvements obtained by adopting a variable number of inner block iterations combined with a variable metric in the computation of the proximal operator.
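As a point of reference for the forward-backward mechanics, the sketch below implements a plain single-block proximal gradient iteration with backtracking on the steplength, using the standard sufficient-decrease test on the smooth part. It is a simplified illustration, not the paper's alternating block, variable metric Armijo method; all names (`prox_grad_backtracking`, `eta`, etc.) are chosen here.

```python
import numpy as np

def prox_grad_backtracking(f, grad_f, prox_g, x0, alpha0=1.0, eta=0.5,
                           max_iter=200, tol=1e-8):
    """Proximal gradient sketch for F = f + g with steplength backtracking.

    f, grad_f : smooth part and its gradient; prox_g(z, alpha) : proximal operator
    of the nonsmooth part g. Illustrative only (hypothetical helper).
    """
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    for _ in range(max_iter):
        grad = grad_f(x)
        while True:
            y = prox_g(x - alpha * grad, alpha)               # forward-backward step
            d = y - x
            # standard sufficient-decrease test on the smooth part
            if f(y) <= f(x) + grad @ d + (d @ d) / (2.0 * alpha):
                break
            alpha *= eta                                       # backtrack the steplength
        if np.linalg.norm(d) <= tol:
            return y
        x = y
    return x

# Toy LASSO-type example:  min 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
f      = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, a: np.sign(z) * np.maximum(np.abs(z) - a * lam, 0.0)  # soft-thresholding
print(prox_grad_backtracking(f, grad_f, prox_g, np.zeros(5)))
```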

12.
In solving certain optimization problems, the corresponding Lagrangian dual problem is often solved simply because the dual problem is easier to solve than the original primal problem. Another reason for solving it is the implication of the weak duality theorem, which states that the optimal dual function value is smaller than or equal to the optimal primal objective value. The dual problem is a special case of a bilevel programming problem involving Lagrange multipliers as upper-level variables and decision variables as lower-level variables. Another interesting aspect of dual problems is that both the lower- and upper-level optimization problems involve only box constraints and no other equality or inequality constraints. In this paper, we propose a coevolutionary dual optimization (CEDO) algorithm for co-evolving two populations, one involving Lagrange multipliers and the other involving decision variables, to find the dual solution. On 11 test problems taken from the optimization literature, we demonstrate the efficacy of the CEDO algorithm by comparing it with nested smooth and nonsmooth algorithms and with previously suggested coevolutionary algorithms. The performance of the CEDO algorithm is also compared with two classical methods involving nonsmooth (bundle) optimization. As a by-product, we analyze the test problems to find their associated duality gap and classify them into three categories having zero, finite or infinite duality gaps. The development of a coevolutionary approach, the assessment of the presence or absence of a duality gap in a number of commonly used test problems, and the demonstrated efficacy of the proposed coevolutionary algorithm compared with the usual nested smooth and nonsmooth algorithms and other existing coevolutionary approaches are the hallmarks of the current study.

13.
A new approximation method is presented for directly minimizing a composite nonsmooth function that is locally Lipschitzian. This method approximates only the generalized gradient vector, enabling us to directly use well-developed smooth optimization algorithms for solving composite nonsmooth optimization problems. The generalized gradient vector is approximated on each design variable coordinate by using only the active components of the subgradient vectors; its usability is then validated numerically via the Pareto optimum concept. In order to show the performance of the proposed method, we solve four academic composite nonsmooth optimization problems and two dynamic response optimization problems with multiple criteria. Specifically, the optimization results of the two dynamic response optimization problems are compared with those obtained by three typical multicriteria optimization strategies: the weighting method, the distance method, and the min–max method, which introduces an artificial design variable in order to replace the max-value cost function with additional inequality constraints. The comparisons show that the proposed approximation method gives more accurate and efficient results than the other methods.
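For reference, the reformulation mentioned for the min–max strategy is the standard one: introducing an artificial variable $\beta$ turns the max-value cost into ordinary inequality constraints (stated here generically, not as the paper's specific test problems):

$$\min_{x}\ \max_{i=1,\dots,m} f_i(x) \quad\Longleftrightarrow\quad \min_{x,\,\beta}\ \beta \quad \text{s.t.}\quad f_i(x) \le \beta,\ \ i=1,\dots,m.$$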

14.
张清叶  高岩 《运筹学学报》2016,20(2):113-120
A hybrid bundle method for solving nonsmooth convex programming problems is proposed. The method iterates by adding a proximal term to the objective function and imposing a trust-region constraint on the feasible region. As an organic combination of the proximal bundle method and the trust-region bundle method, the hybrid bundle method switches automatically between the two. The convergence analysis shows that the method is globally convergent, and concluding numerical examples verify the effectiveness of the algorithm.

15.
We consider the inclusion of the commitment of thermal generation units in the optimal management of the Brazilian power system. By means of Lagrangian relaxation we decompose the problem and obtain a nondifferentiable dual function that is separable. We solve the dual problem with a bundle method. Our purpose is twofold: first, bundle methods are the methods of choice in nonsmooth optimization when it comes to solving large-scale problems with high precision. Second, they give good starting points for recovering primal solutions. We use an inexact augmented Lagrangian technique to find a near-optimal primal feasible solution. We assess our approach with numerical results.
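To illustrate the kind of oracle such a bundle method consumes, the toy sketch below relaxes a single coupling constraint of a small separable problem and returns the dual value together with a subgradient (the constraint residual at the subproblem minimizers). It is a generic illustration, not the paper's hydro-thermal scheduling model; all names are chosen here, and a plain subgradient ascent stands in for the bundle method.

```python
import numpy as np

# Toy separable problem:  min x1^2 + x2^2  s.t.  x1 + x2 = 1,  0 <= xi <= 1.
# Relaxing the coupling constraint gives independent one-dimensional subproblems;
# a subgradient of the (concave) dual is the constraint residual at their minimizers.

def dual_oracle(lam):
    """Return (dual value, subgradient) of theta(lam) = min_x L(x, lam)."""
    x = np.clip(lam / 2.0, 0.0, 1.0) * np.ones(2)        # per-block minimizers of xi^2 - lam*xi
    value = np.sum(x ** 2) + lam * (1.0 - np.sum(x))     # L(x(lam), lam)
    subgrad = 1.0 - np.sum(x)                            # residual of the relaxed constraint
    return value, subgrad

# Plain subgradient ascent on the dual (a bundle method would enrich this with a model).
lam = 0.0
for k in range(100):
    _, g = dual_oracle(lam)
    lam += g / (k + 1.0)
print(lam, dual_oracle(lam))                             # optimal multiplier is lam = 1
```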

16.
In this work, we combine outer-approximation (OA) and bundle method algorithms for dealing with mixed-integer nonlinear programming (MINLP) problems with nonsmooth convex objective and constraint functions. As the convergence analysis of OA methods relies strongly on the differentiability of the involved functions, OA algorithms may fail to solve general nonsmooth convex MINLP problems. In order to obtain OA algorithms that are convergent regardless of the structure of the convex functions, we solve the underlying nonlinear OA subproblems by a specialized bundle method that provides the information necessary to cut off previously visited (non-optimal) integer points. This property is crucial for proving (finite) convergence of OA algorithms. We illustrate the numerical performance of the proposal on a class of hybrid robust and chance-constrained problems that involve a random variable with finite support.

17.
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper, we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines the traditional cutting plane method with some features of bundle methods and with the search direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm generates a sequence of interior points of the epigraph of the objective function, whose accumulation points are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.
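As background for the cutting plane component, here is a bare-bones Kelley-style cutting plane loop for a convex function on a box, with the master linear program solved by SciPy's `linprog`. It is only the classical building block, not the nonconvex interior-point/bundle hybrid proposed in the paper; the helper name and tolerances are chosen here.

```python
import numpy as np
from scipy.optimize import linprog

def kelley_cutting_plane(f, subgrad, lb, ub, x0, max_iter=100, tol=1e-6):
    """Classical Kelley cutting-plane loop for convex f on a box (illustrative only)."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    cuts, best_val, best_x = [], np.inf, x.copy()
    for _ in range(max_iter):
        fx, gx = f(x), subgrad(x)
        if fx < best_val:
            best_val, best_x = fx, x.copy()
        cuts.append((fx, gx, x.copy()))
        # Master LP in (x, v):  min v  s.t.  v >= f(xi) + gi.(x - xi),  lb <= x <= ub
        A = np.array([np.concatenate([gi, [-1.0]]) for _, gi, _ in cuts])
        b = np.array([gi @ xi - fi for fi, gi, xi in cuts])
        c = np.concatenate([np.zeros(n), [1.0]])
        bounds = [(lb[i], ub[i]) for i in range(n)] + [(None, None)]
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        x, v = res.x[:n], res.x[n]
        if best_val - v <= tol:          # model lower bound has caught up with best value
            return best_x
    return best_x

# Example: minimize |x1 - 1| + |x2 + 2| over the box [-5, 5]^2 (optimum at (1, -2))
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
g = lambda x: np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])
print(kelley_cutting_plane(f, g, lb=[-5.0, -5.0], ub=[5.0, 5.0], x0=[0.0, 0.0]))
```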

18.
In this paper, solvability and Lipschitzian stability properties for a special class of nonsmooth parametric generalized systems defined in Banach spaces are studied via a variational analysis approach. Verifiable sufficient conditions for such properties to hold under scalar quasidifferentiability assumptions are formulated by combining the *-difference and the Demyanov difference of convex compact subsets of the dual space with classical quasidifferential calculus constructions. Applications to the formulation of sufficient conditions for metric regularity/open covering of nonsmooth maps, along with their employment in deriving optimality conditions for quasidifferentiable extremum problems, as well as an application to the study of semicontinuity of the optimal value function in parametric optimization, are discussed. In memory of Aleksandr Moiseevich Rubinov (1940–2006).

19.
In this paper we describe a number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games and variational inequalities. We outline the ideas underlying these methods and present rate-of-convergence estimates.

20.
We discuss energy generation expansion planning with environmental constraints, formulated as a nonsmooth convex constrained optimization problem. To solve such problems, methods suitable for constrained nonsmooth optimization need to be employed. We describe a recently developed approach, which applies the usual unconstrained bundle techniques to a dynamically changing "improvement function". Numerical results for the generation expansion planning problem are reported.
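For context, a common form of such an improvement function for a constrained problem $\min f(x)$ subject to $g(x) \le 0$ is the pointwise maximum below, re-centred at the current iterate $x_k$; this is the standard construction used in constrained bundle methods, stated here generically rather than as the paper's exact definition:

$$H_k(x) \;=\; \max\bigl\{\, f(x) - f(x_k),\; g(x) \,\bigr\},$$

so that a point with $H_k(x) < 0$ both improves the objective and is strictly feasible, and the unconstrained bundle machinery can be applied directly to $H_k$.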
