Similar Literature
Found 20 similar documents (search time: 93 ms).
1.
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays an important role in many areas of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper, we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines the traditional cutting plane method with some features of bundle methods and the search-direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm generates a sequence of interior points to the epigraph of the objective function, and the accumulation points of this sequence are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.
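For orientation, the epigraph device described above amounts to the standard identity (our notation, not necessarily the paper's):

```latex
\min_{x \in \mathbb{R}^n} f(x)
\quad\Longleftrightarrow\quad
\min_{(x,z) \in \mathbb{R}^n \times \mathbb{R}}\ z
\quad \text{s.t.}\quad f(x) \le z,
```

with the iterates kept strictly interior to the epigraph, i.e. f(x_k) < z_k at every iteration.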

2.
In this paper, we present several new implementable methods for solving a generalized fractional program with convex data. They are Dinkelbach-type methods where a prox-regularization term is added to avoid the numerical difficulties arising when the solution of the problem is not unique. In these methods, at each iteration a regularized parametric problem is solved inexactly to obtain an approximation of the optimal value of the problem. Since the parametric problem is nonsmooth and convex, we propose to solve it by using a classical bundle method where the parameter is updated after each ‘serious step’. We mainly study two kinds of such steps, and we prove the convergence and the rate of convergence of each of the corresponding methods. Finally, we present some numerical experience to illustrate the behavior of the proposed algorithms, and we discuss the practical efficiency of each one.
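For context, in the classical Dinkelbach scheme that these methods regularize, the generalized fractional program and its parametric subproblem take the form (standard notation, assuming g_i > 0 on the feasible set; not necessarily the authors' formulation):

```latex
\lambda^{*} \;=\; \min_{x \in X} \max_{1 \le i \le m} \frac{f_i(x)}{g_i(x)},
\qquad
F(\lambda) \;=\; \min_{x \in X} \max_{1 \le i \le m} \bigl( f_i(x) - \lambda\, g_i(x) \bigr),
```

where λ* is the unique zero of F and the parameter is updated by λ_{k+1} = max_i f_i(x_k)/g_i(x_k) at the current iterate x_k.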

3.
A rigorous decomposition approach is presented for solving separable mixed-integer nonlinear programs in which the participating functions are nonconvex. The proposed algorithms solve an alternating sequence of Relaxed Master Problems (mixed-integer linear programs) and two nonlinear programming problems (NLPs). The algorithms generate a sequence of valid nondecreasing lower bounds and upper bounds, and converge in a finite number of iterations. A Primal Bounding Problem is introduced: a convex NLP solved at each iteration to derive valid outer approximations of the nonconvex functions in the continuous space. Two decomposition algorithms are presented in this work; on finite termination, the first yields the global solution to the original nonconvex MINLP and the second finds a rigorous bound to the global solution. Convergence and optimality properties, as well as refinements of the algorithms for efficient implementation, are presented. Finally, numerical results on example problems are compared with currently available algorithms, illuminating the potential benefits of the proposed algorithms.

4.
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19 (2004), pp. 673–692] we described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.

5.
In this paper, a nonsmooth bundle algorithm is presented for minimizing the maximum eigenvalue of a nonconvex smooth matrix-valued mapping. The bundle method uses an oracle to compute separately the function and subgradient information for the convex (eigenvalue) function, and the function and derivative values for the smooth mapping. Using this information, in each iteration we replace the smooth inner mapping by its Taylor-series linearization around the current serious step. To solve the convex approximate eigenvalue problem with affine mapping faster, we adopt a second-order bundle method based on UV-decomposition theory. Through a backtracking test, we obtain a better approximation of the objective function. Quadratic convergence of our special bundle method is established under some additional assumptions. We then apply the method to a particular instance of nonconvex eigenvalue optimization, namely bilinear matrix inequality problems.
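Schematically, and in standard composite notation rather than the authors', the objective and the convex model obtained by linearizing the inner mapping around the serious step x̂ are:

```latex
f(x) \;=\; \lambda_{\max}\bigl(A(x)\bigr),
\qquad
\tilde f_{\hat x}(x) \;=\; \lambda_{\max}\bigl(A(\hat x) + DA(\hat x)(x - \hat x)\bigr),
```

where A is the smooth matrix-valued mapping; since λ_max is convex on the symmetric matrices, the model with affine inner mapping is convex.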

6.
We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the group reproducing kernel norm. This class of problems arises naturally in applications such as group Lasso, a popular technique for variable selection. An effective approach to solving such problems is the proximal gradient method. In this paper we derive efficient algorithms of this type for the class of convex problems at hand, and analyze the convergence of the algorithm and of its subalgorithm.
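A minimal sketch of the proximal gradient iteration for this problem class, using a least-squares smooth part for concreteness (the paper treats a general strongly convex differentiable function composed with a linear map; all names below are illustrative, not the authors'):

```python
import numpy as np

def group_prox(v, groups, thresh):
    """Blockwise soft-thresholding: the prox of thresh * sum_g ||v_g||_2."""
    out = v.copy()
    for g in groups:                      # g: array of indices forming one group
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= thresh else (1.0 - thresh / norm) * v[g]
    return out

def group_lasso_pg(A, b, groups, lam, step, n_iter=500):
    """Proximal gradient for min_x 0.5*||A x - b||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = group_prox(x - step * grad, groups, step * lam)  # prox step
    return x
```

A constant step no larger than the reciprocal of the largest eigenvalue of AᵀA guarantees convergence for this particular smooth part.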

7.
N. Karmitsa, Optimization, 2016, 65(8):1599–1614
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is difficult or outright impossible. In such cases, the usual subgradient-based optimization methods cannot be used. Derivative-free methods, however, are applicable since they require no explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. The convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and the numerical experiments confirm its usability and efficiency, especially in the case of large-scale problems.

8.
Robinson has proposed the bundle-based decomposition algorithm to solve a class of structured large-scale convex optimization problems. In this method, the original problem is transformed (by dualization) into an unconstrained nonsmooth concave optimization problem, which is in turn solved by a modified bundle method. In this paper, we give a posteriori error estimates on the approximate primal optimal solution and on the duality gap. We describe the implementation and present computational experience with a special case of this class of problems, namely block-angular linear programming problems. We observe that the method is efficient in obtaining approximate optimal solutions and compares favorably with MINOS and an advanced implementation of the Dantzig-Wolfe decomposition method.

9.
Distributed consensus optimization has received considerable attention in recent years, and several distributed consensus-based algorithms have been proposed for (nonsmooth) convex and (smooth) nonconvex objective functions. However, the behavior of these distributed algorithms on nonconvex, nonsmooth and stochastic objective functions is not understood. This class of functions and this distributed setting are motivated by several applications, including problems in machine learning and signal processing. This paper presents the first convergence analysis of the decentralized stochastic subgradient method for such classes of problems, over networks modeled as fixed undirected graphs.
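The recursion analyzed is, in essence, a mix-then-step update; a minimal sketch under the usual assumption of a doubly stochastic mixing matrix W matched to the fixed undirected graph (stoch_subgrad is a user-supplied oracle returning a stochastic subgradient; all names are illustrative):

```python
import numpy as np

def decentralized_subgradient_step(X, W, stoch_subgrad, step):
    """One round: every node averages its neighbours' iterates through the
    mixing matrix W, then moves along its local stochastic subgradient."""
    mixed = W @ X                          # consensus averaging; X: (n_nodes, dim)
    grads = np.stack([stoch_subgrad(i, mixed[i]) for i in range(X.shape[0])])
    return mixed - step * grads            # local stochastic subgradient steps
```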

10.
Optimization, 2012, 61(7):1057–1073
In this article, the generalization of some mixed-integer nonlinear programming algorithms to convex nonsmooth problems is studied. In the extended cutting plane method, gradients are replaced by subgradients of the convex function, and the resulting algorithm is proved to converge to a global optimum. It is shown through a counterexample that this type of generalization is insufficient for certain versions of the outer approximation algorithm. However, with some modifications to the outer approximation method, one can handle a special type of nonsmooth function whose subdifferential at any point is a convex combination of a finite number of subgradients at that point. Numerical results with the extended cutting plane method are also reported.
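As a reference point, the extended cutting plane idea (gradients replaced by subgradients in a Kelley-style scheme) can be sketched as follows for a nonsmooth convex function over a box; this is a generic illustration, not the authors' implementation, and f_and_subgrad is an assumed user-supplied oracle:

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane(f_and_subgrad, bounds, tol=1e-6, max_iter=200):
    """Kelley-style cutting plane method for a nonsmooth convex f on a box.
    f_and_subgrad(x) returns (f(x), xi) with xi any subgradient at x."""
    n = len(bounds)
    x = np.array([(lo + hi) / 2.0 for lo, hi in bounds])
    cuts, best = [], np.inf
    for _ in range(max_iter):
        fx, xi = f_and_subgrad(x)
        best = min(best, fx)                     # best value seen: upper bound
        cuts.append((xi, fx - xi @ x))           # cut: z >= f(xk) + xi.(x - xk)
        c = np.r_[np.zeros(n), 1.0]              # LP variables (x, z): minimize z
        A = np.array([np.r_[g, -1.0] for g, _ in cuts])
        b = np.array([-off for _, off in cuts])  # i.e. xi.x - z <= xi.xk - f(xk)
        res = linprog(c, A_ub=A, b_ub=b, bounds=list(bounds) + [(None, None)])
        x, z = res.x[:n], res.x[-1]              # z is a valid lower bound
        if best - z <= tol:
            return x, best
    return x, best
```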

11.
In this paper, we present parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems. An example in this class of problems is the block-angular linear programming problem. By dualizing, we transform the original problem to an unconstrained nonsmooth concave optimization problem which is in turn solved by using a modified bundle method. Further, this dual problem consists of a collection of smaller independent subproblems which give rise to the parallel algorithms. We discuss the implementation on the CRYSTAL multi-computer. Finally, we present computational experience with block-angular linear programming problems and observe that more than 70% efficiency can be obtained using up to eleven processors for one group of test problems, and more than 60% efficiency can be obtained for relatively smaller problems using up to five processors for another group of problems.

12.
The semi-infinite programming (SIP) problem is a program with infinitely many constraints. It can be reformulated as a nonsmooth nonlinear programming problem with finitely many constraints by using an integral function. Due to the nondifferentiability of the integral function, gradient-based algorithms cannot be used to solve this nonsmooth nonlinear programming problem. To overcome this difficulty, we present a robust smoothing sequential quadratic programming (SQP) algorithm for solving the nonsmooth nonlinear programming problem. At each iteration of the algorithm, we need to solve only a quadratic program that is always feasible and solvable. The global convergence of the algorithm is established under mild conditions. Numerical results are given.
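The reformulation in question is, schematically, of the following type (one common variant of the integral-function approach; the paper's exact form may differ):

```latex
\min_x f(x)\ \ \text{s.t.}\ \ g(x,t) \le 0\ \ \forall\, t \in T
\qquad\Longrightarrow\qquad
\min_x f(x)\ \ \text{s.t.}\ \ G(x) := \int_T \max\{g(x,t),\, 0\}\,\mathrm{d}t = 0.
```

G is nondifferentiable wherever g(x,·) touches zero, which is exactly the nonsmoothness the smoothing SQP scheme is designed to handle.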

13.
Joydeep Dutta, TOP, 2005, 13(2):185–279
During the early 1960s there was a growing realization that a large number of optimization problems appearing in applications involved the minimization of non-differentiable functions. One of the important areas where such problems appeared was optimal control. The subject of nonsmooth analysis arose out of the need to develop a theory for the minimization of nonsmooth functions. The first impetus in this direction came with the publication of Rockafellar's seminal work titled Convex Analysis, published by Princeton University Press in 1970. It would be impossible to overstate the impact of this book on the development of the theory and methods of optimization. It is also important to note that a large part of convex analysis had already been developed by Werner Fenchel nearly twenty years earlier and circulated through his mimeographed lecture notes titled Convex Cones, Sets and Functions, Princeton University, 1951. In this article we trace the dramatic development of nonsmooth analysis and its applications to optimization in finite dimensions. Beginning with the fundamentals of convex optimization, we quickly move to the path-breaking work of Clarke, which extends the domain of nonsmooth analysis from convex to locally Lipschitz functions. Clarke was the second doctoral student of R.T. Rockafellar. We discuss the notions of the Clarke directional derivative and the Clarke generalized gradient, along with the relevant calculus rules and applications to optimization. While discussing locally Lipschitz optimization we also try to blend in the computational aspects of the theory wherever possible. This is followed by a discussion of the geometry of sets with nonsmooth boundaries. The approach used to develop the notion of the normal cone to an arbitrary set is sequential in nature and does not rely on the standard techniques of convex analysis. The move away from convexity was pioneered by Mordukhovich and later culminated in the monograph Variational Analysis by Rockafellar and Wets. The approach of Mordukhovich relied on a nonconvex separation principle called the extremal principle, while that of Rockafellar and Wets relied on various convergence notions developed to suit the needs of optimization. We then move on to a parallel development in nonsmooth optimization due to Demyanov and Rubinov, called quasidifferentiable optimization. They study the class of directionally differentiable functions whose directional derivatives can be represented as a difference of two sublinear functions. On the other hand, the directional derivative of a convex function, and also the Clarke directional derivative, are sublinear functions of the direction; it had therefore been thought that the most useful generalizations of the directional derivative must be sublinear in the direction, so Demyanov and Rubinov made a major conceptual change in nonsmooth optimization. In this section we define the notion of a quasidifferential, which is a pair of convex compact sets. We study some calculus rules and their applications to optimality conditions. We also study the interesting notion of the Demyanov difference between two sets and its applications to optimization. In the last section of this paper we study some second-order tools used in nonsmooth analysis and examine their relevance to optimization. It is important to note that, unlike the classical case, the second-order theory of nonsmoothness is quite complicated, in the sense that there are many approaches to it. We have chosen to describe those approaches which can be developed from the first-order nonsmooth tools discussed here. We present three different approaches, highlighting the second-order calculus rules and their applications to optimization.
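For reference, the two central objects of Clarke's theory discussed above are defined, for a locally Lipschitz f, by

```latex
f^{\circ}(x; d) \;=\; \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + t d) - f(y)}{t},
\qquad
\partial f(x) \;=\; \bigl\{\, \xi \in \mathbb{R}^n \,:\, \langle \xi, d \rangle \le f^{\circ}(x; d)\ \ \forall\, d \in \mathbb{R}^n \,\bigr\}.
```

Note that f°(x;·) is sublinear in d, which is precisely the property that the quasidifferential of Demyanov and Rubinov relaxes to a difference of sublinear functions.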

14.
B. Jin, Optimization, 2016, 65(6):1151–1166
In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems, and establish its connections with the standard optimality condition and the saddle point condition of the augmented Lagrangian. This provides a powerful tool for developing numerical algorithms: we derive a Lagrange–Newton algorithm for the nonsmooth convex optimization problem, and establish the nonsingularity of the Newton system and the local convergence of the algorithm.
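As a schematic illustration only (the paper's precise problem class and augmented Lagrangian are not reproduced here), for an equality-constrained nonsmooth convex problem min f(x) s.t. Ax = b, the augmented Lagrangian reads

```latex
L_{\rho}(x, \lambda) \;=\; f(x) + \langle \lambda,\ Ax - b \rangle + \frac{\rho}{2}\, \lVert Ax - b \rVert^2,
```

and the Lagrange optimality system couples stationarity of L_ρ in x with primal feasibility; applying a (semismooth) Newton method to this coupled system yields a Lagrange–Newton iteration of the kind studied.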

15.
We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used either to certify approximate criticality or to generate a descent search direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search direction. We discuss the main convergence issues of the line-search based proximal bundle method and provide computational results on a set of academic benchmark test problems.
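In standard DC notation (not necessarily the authors'), with f = f₁ − f₂ and bundles {(y_i, f₁(y_i), ξ_i)} and {(z_j, f₂(z_j), η_j)}, the two piecewise-affine approximations are

```latex
\hat f_1(x) = \max_i \bigl\{ f_1(y_i) + \langle \xi_i,\, x - y_i \rangle \bigr\},
\qquad
\hat f_2(x) = \max_j \bigl\{ f_2(z_j) + \langle \eta_j,\, x - z_j \rangle \bigr\},
```

and the DC model f̂₁ − f̂₂ equals the maximum over i of the concave piecewise-affine functions obtained by subtracting f̂₂ from each affine piece of f̂₁, as stated above.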

16.
Multiobjective DC optimization problems arise naturally in, for example, data classification and cluster analysis, which play a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method designed for nonsmooth multiobjective optimization problems whose objective and constraint functions can be presented as a difference of two convex (DC) functions. The method is of the descent type and generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting plane model tailored to the DC improvement function such that both the convex and the concave behaviour of the function are captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we consider some numerical experiments and compare the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems, in order to validate the use of a method aimed specifically at DC objectives instead of a general nonconvex method.

17.
Two distributed algorithms are described that enable all users connected over a network to cooperatively solve the problem of minimizing the sum of all users’ objective functions over the intersection of all users’ constraint sets, where each user has its own private nonsmooth convex objective function and closed convex constraint set, which is the intersection of a number of simple, closed convex sets. One algorithm enables each user to adjust its estimate using the proximity operator of its objective function and the metric projection onto one constraint set randomly selected from a number of simple, closed convex sets. The other determines each user’s estimate using the subdifferential of its objective function instead of the proximity operator. Investigation of the two algorithms’ convergence properties for a diminishing step-size rule revealed that, under certain assumptions, the sequences of all users generated by each of the two algorithms converge almost surely to the same solution. It also showed that the rate of convergence depends on the step size and that a smaller step size results in quicker convergence. The results of numerical evaluation using a nonsmooth convex optimization problem support the convergence analysis and demonstrate the effectiveness of the two algorithms.
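A minimal sketch of one user's iteration in the first algorithm, under illustrative names (consensus_point, prox_f_i and simple_sets are assumptions, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

def user_update(consensus_point, prox_f_i, simple_sets, step):
    """One iteration (sketch): a proximity step on the user's own objective,
    then metric projection onto one simple closed convex set chosen at
    random from the user's collection."""
    y = prox_f_i(consensus_point, step)                    # prox of step * f_i
    project = simple_sets[rng.integers(len(simple_sets))]  # random simple set
    return project(y)                                      # metric projection
```

The second algorithm replaces the proximity step with a subgradient step, y = consensus_point - step * subgrad_f_i(consensus_point).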

18.
In this paper, we present a global optimization method for solving nonconvex mixed integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions, single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions, two versions of the so-called αBB-underestimator, valid for twice-differentiable functions, are integrated into the reformulation framework. However, in contrast to what is done in branch-and-bound type algorithms, no direct branching is performed in the algorithm itself. Instead, a piecewise convex reformulation is used to convexify the entire problem in an extended variable space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solutions to the convexified and overestimated problem form a sequence converging towards a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems.
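The αBB underestimator referred to has the standard form: for a twice-differentiable g on the box [x^L, x^U],

```latex
\breve g(x) \;=\; g(x) + \alpha \sum_{i=1}^{n} \bigl(x_i^{L} - x_i\bigr)\bigl(x_i^{U} - x_i\bigr),
\qquad
\alpha \;\ge\; \max\Bigl\{0,\ -\tfrac{1}{2} \min_{x \in [x^{L}, x^{U}]} \lambda_{\min}\bigl(\nabla^2 g(x)\bigr)\Bigr\},
```

where the quadratic perturbation is nonpositive on the box and, for α large enough, makes ğ convex.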

19.
Hemivariational inequalities can be considered as a generalization of variational inequalities. Their origin is in nonsmooth solid mechanics, especially in nonmonotone contact problems. The solution of a hemivariational inequality proves to be a substationary point of some functional, and can thus be found by nonsmooth and nonconvex optimization methods. We consider two types of bundle methods for solving hemivariational inequalities numerically: the proximal bundle method and the bundle-Newton method. The proximal bundle method is based on a first-order polyhedral approximation of the locally Lipschitz continuous objective function. To obtain a better convergence rate, the bundle-Newton method also incorporates second-order information about the objective function in the form of an approximate Hessian. Since the optimization problem arising from hemivariational inequalities has a dominating quadratic part, the second-order method should be a good choice. The main question regarding the performance of the methods is whether the potentially better convergence rate of the bundle-Newton method outweighs its increased computational cost.
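The first-order polyhedral model mentioned above leads, at iterate x_k with bundle {(y_i, f(y_i), ξ_i)}, to the standard proximal bundle subproblem

```latex
\min_{d \in \mathbb{R}^n}\ \max_{i} \bigl\{ f(y_i) + \langle \xi_i,\ x_k + d - y_i \rangle \bigr\} \;+\; \frac{u_k}{2}\,\lVert d \rVert^{2},
```

with proximity parameter u_k > 0; the bundle-Newton variant enriches this model with the approximate second-order (Hessian) information, which is why it is expected to suit the dominating quadratic part.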

20.
Optimization, 2012, 61(6):945–962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and the dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.
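The projection used for the constraint handling is the componentwise box projection (a standard fact, not specific to the paper):

```latex
\bigl[P_{[l,u]}(x)\bigr]_i \;=\; \min\bigl\{ \max\{ x_i,\ l_i \},\ u_i \bigr\}, \qquad i = 1, \dots, n.
```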
