Similar Articles
20 similar articles found.
1.
In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build a separate approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which explicitly takes into account both the convex and the concave behavior of the objective function. We design a proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an \(\varepsilon\)-critical point. The algorithm is tested on some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
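As a rough illustration of the nonconvex cutting plane model described in this abstract, the Python sketch below evaluates a piecewise-linear DC model built from bundles of the two components (a minimal sketch under assumed data structures; it is not the authors' implementation, and the way the model enters the proximal bundle subproblem is omitted):

```python
def dc_cutting_plane_model(x, bundle1, bundle2):
    """Evaluate a piecewise-linear model of f = f1 - f2 at x.

    Each bundle is a list of triples (y, fy, gy): a trial point y, the
    value of the corresponding component at y, and one of its
    subgradients there (x, y, gy are assumed to be NumPy arrays of the
    same dimension).  The convex part f1 is under-approximated by the
    maximum of its linearizations; subtracting the analogous model of
    f2 keeps the concave behavior of -f2 in the model.
    """
    cut1 = max(fy + gy @ (x - y) for y, fy, gy in bundle1)  # model of f1
    cut2 = max(fy + gy @ (x - y) for y, fy, gy in bundle2)  # model of f2
    return cut1 - cut2
```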

2.
For solving nonsmooth convex constrained optimization problems, we propose an algorithm which combines the ideas of the proximal bundle methods with the filter strategy for evaluating candidate points. The resulting algorithm inherits some attractive features from both approaches. On the one hand, it allows effective control of the size of quadratic programming subproblems via the compression and aggregation techniques of proximal bundle methods. On the other hand, the filter criterion for accepting a candidate point as the new iterate is sometimes easier to satisfy than the usual descent condition in bundle methods. Some encouraging preliminary computational results are also reported.

3.
Motivated by weakly convex optimization and quadratic optimization problems, we first show that there is no duality gap between a difference of convex (DC) program over DC constraints and its associated dual problem. We then provide certificates of global optimality for a class of nonconvex optimization problems. As an application, we derive characterizations of robust solutions for uncertain general nonconvex quadratic optimization problems over nonconvex quadratic constraints.

4.
In this paper, we consider downside risk measures with cardinality and bounding constraints in portfolio selection. These constraints limit the amount of capital to be invested in each asset as well as the number of assets composing the portfolio. While the standard Markowitz model is a convex quadratic program, this new model is an NP-hard mixed-integer quadratic program. Recognizing the computational intractability of this class of problems, especially large-scale ones, we first reformulate the model as a DC program with the help of exact penalty techniques in Difference of Convex functions (DC) programming and then solve it by DC Algorithms (DCA). To check the global optimality of computed solutions, a global method combining the local algorithm DCA with a Branch-and-Bound algorithm is investigated. Numerical simulations show that DCA is an efficient and promising approach for the considered problem.
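For reference, a generic DCA iteration for a decomposition f = g - h with g and h convex can be sketched as follows (a hedged illustration only; `subgrad_h` and `solve_convex_subproblem` are placeholder routines assumed to be supplied by the caller, and the exact penalty reformulation used in the paper is not shown):

```python
import numpy as np

def dca(x0, subgrad_h, solve_convex_subproblem, max_iter=200, tol=1e-8):
    """Generic DC Algorithm for minimizing f(x) = g(x) - h(x).

    At iteration k, h is replaced by its linearization at x_k, and the
    resulting convex problem  min_x g(x) - y_k @ x  is solved, where
    y_k is a subgradient of h at x_k.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = subgrad_h(x)                    # y_k in the subdifferential of h
        x_new = solve_convex_subproblem(y)  # argmin_x g(x) - y @ x
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```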

5.
In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of the codifferential. It is assumed that a DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using only a few elements of the codifferential. The convergence of the minimization algorithm is studied, and it is compared with different versions of bundle methods using the results of numerical experiments.

6.
In this paper, we investigate the use of DC (Difference of Convex functions) models and algorithms in the application of trust-region methods to a class of nonlinear optimization problems where the feasible set is closed and convex (and, from a practical point of view, where projecting onto the feasible region is computationally affordable). We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of the corresponding trust-region subproblems. The resulting scheme can be proved to be globally convergent to first-order stationary points. The theory requires exact second-order derivatives but, in turn, the computation of the trust-region step requires only one projection onto the feasible region (in contrast to the calculation of the generalized Cauchy point, which may require more). The numerical efficiency and robustness of the proposed scheme applied to bound-constrained problems are measured by comparing its performance against some of the current state-of-the-art nonlinear programming solvers on a large collection of test problems.
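The trust-region mechanism outlined above can be paraphrased roughly as follows (a simplified, hedged sketch; `solve_dc_subproblem` stands in for the paper's primal-dual subgradient solver of the DC model over the trust region intersected with the feasible set, and `f` returns the objective value, neither name being taken from the paper):

```python
def dc_trust_region(x, f, solve_dc_subproblem, radius=1.0,
                    eta=0.1, max_iter=50, min_radius=1e-10):
    """Basic trust-region loop built around a DC model of the objective.

    solve_dc_subproblem(x, radius) is assumed to return a feasible trial
    point and the decrease predicted by the DC model inside the region.
    The step is accepted or rejected with the usual ratio test.
    """
    for _ in range(max_iter):
        candidate, predicted = solve_dc_subproblem(x, radius)
        actual = f(x) - f(candidate)
        ratio = actual / max(predicted, 1e-16)
        if ratio >= eta:          # enough agreement with the model: accept
            x = candidate
            radius *= 2.0
        else:                     # poor agreement: shrink the region
            radius *= 0.5
            if radius < min_radius:
                break
    return x
```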

7.
An algorithm is developed for minimizing nonsmooth convex functions. This algorithm extends the Elzinga–Moore cutting plane algorithm by forcing the search for the next test point to stay not too far from the previous ones, thus removing the compactness assumption. Our method is to the Elzinga–Moore algorithm what a proximal bundle method is to Kelley's algorithm. Instead of the lower approximations used in proximal bundle methods, the present approach is based on objects that regularize translated functions of the objective. We propose some variants and, using some academic test problems, conduct a numerical comparison with the Elzinga–Moore algorithm and two other well-known nonsmooth methods.

8.
Multiobjective DC optimization problems arise naturally, for example, in data classification and cluster analysis, which play a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method designed for nonsmooth multiobjective optimization problems whose objective and constraint functions can be presented as a difference of two convex (DC) functions. The method is of the descent type, and it generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting plane model tailored to the DC improvement function such that both the convex and the concave behaviour of the function are captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we consider some numerical experiments and compare the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems. This is done to validate the use of a method aimed specifically at DC objectives instead of a general nonconvex method.

9.
Image recovery problems can be solved using optimization techniques. They often lead to the solution of either a large-scale convex quadratic program or, equivalently, a nondifferentiable minimization problem. To solve the quadratic program, we use an infeasible predictor-corrector interior-point method, presented in the more general framework of monotone LCP. The algorithm has polynomial complexity and converges with an asymptotic quadratic rate. When implementing the method to recover images, we take advantage of the underlying sparsity of the problem. We obtain good performance, which we assess by comparing the method with a variable-metric proximal bundle algorithm applied to the equivalent nonsmooth problem.

10.
张清叶  高岩 《运筹学学报》2016,20(2):113-120
A hybrid bundle method for solving nonsmooth convex programming problems is proposed. The method iterates by adding a proximal term to the objective function and a trust-region constraint to the feasible set. As an organic combination of the proximal bundle method and the trust-region bundle method, the hybrid bundle method switches automatically between the two. Convergence analysis shows that the method is globally convergent. Finally, numerical examples verify the effectiveness of the algorithm.

11.
We study a new trust-region affine-scaling method for general bound-constrained optimization problems. At each iteration, we compute two trial steps. One is computed along a direction obtained by solving an appropriate quadratic model in an ellipsoidal region. This region is defined by an affine-scaling technique and depends both on the distances of the current iterate to the boundaries and on the trust-region radius. To ensure convergence and to avoid iterates being trapped around nonstationary points, an auxiliary step is defined along a newly defined approximate projected gradient. By choosing, as the trial step for generating the next iterate, whichever of the two steps achieves the greater reduction of the quadratic model, we prove that the iterates generated by the new algorithm are not bounded away from stationary points. Moreover, assuming that the second-order sufficient condition holds at some nondegenerate stationary point, we prove Q-linear convergence of the objective function values. Preliminary numerical experience on problems with bound constraints from the CUTEr collection is also reported.

12.
A descent method is given for minimizing a nondifferentiable function which can be locally approximated by pointwise minima of convex functions. At each iterate the algorithm finds several directions by solving several linear or quadratic programming subproblems. These directions are then used in an Armijo-like search for the next iterate. A feasible direction extension to inequality constrained minimization problems is also presented. The algorithms converge to points satisfying necessary optimality conditions which are sharper than the ones involved in convergence results for algorithms based on the Clarke subdifferential. This research was sponsored by Project 02.15.

13.
New results are established for multiobjective DC programs with infinite convex constraints (MOPIC) that are defined on Banach spaces (finite- or infinite-dimensional) with objectives given as the difference of convex functions. This class of problems can also be called multiobjective DC semi-infinite and infinite programs, where the decision variables run over finite-dimensional and infinite-dimensional spaces, respectively. Such problems have not yet been studied. Necessary and sufficient optimality conditions for weak Pareto efficiency are established. Further, we seek a connection between multiobjective linear infinite programs and MOPIC. Both Wolfe and Mond–Weir dual problems are presented, and the corresponding weak, strong, and strict converse duality theorems are derived for these two problems, respectively. We also extend the above results to multiobjective fractional DC programs with infinite convex constraints. The results obtained are new in both the semi-infinite and infinite frameworks.

14.
In this paper, we consider the proximal point algorithm for the problem of finding zeros of a given maximal monotone operator in an infinite-dimensional Hilbert space. For the usual distance between the origin and the operator's value at each iterate, we put forth a new idea that yields a new result on the global speed at which this distance sequence tends to zero, provided that the problem's solution set is nonempty and the sequence of squares of the regularization parameters is nonsummable. We show that this result is comparable to a classical result of Brézis and Lions in general and becomes better whenever the proximal point algorithm converges strongly. Furthermore, we also reveal its similarity to Güler's classical results in the context of convex minimization in the sense of strictly convex quadratic functions, and we discuss an application to an ε-approximation solution of the problem above.
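In the special case where the operator is the subdifferential of a convex function, the proximal point iteration discussed above takes the familiar form below (a minimal sketch; `prox` is assumed to evaluate the proximal mapping of that function and is not part of the paper):

```python
import numpy as np

def proximal_point(x0, prox, lambdas, tol=1e-10):
    """Proximal point algorithm x_{k+1} = (I + lambda_k * T)^{-1}(x_k)
    for T = subdifferential of a convex function f, where the resolvent
    is the proximal mapping
        prox(x, lam) = argmin_z f(z) + ||z - x||**2 / (2 * lam).
    """
    x = np.asarray(x0, dtype=float)
    for lam in lambdas:                  # regularization parameters lambda_k
        x_new = prox(x, lam)
        # (x - x_new) / lam belongs to T(x_new); its norm is the residual
        # whose global decay rate is the quantity studied in the paper.
        if np.linalg.norm(x - x_new) / lam <= tol:
            return x_new
        x = x_new
    return x
```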

15.
We present an iterative method for minimizing strictly convex quadratic functions over the intersection of a finite number of convex sets. The method consists in computing projections onto the individual sets simultaneously and the new iterate is a convex combination of those projections. We give convergence proofs even for the inconsistent case, i.e. when the intersection of the sets is empty. Work of this author was partially supported by CNPq under grant No. 301280/86-MA.
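Read literally, the iteration described in this abstract can be sketched as follows (an illustrative paraphrase, not the authors' code; the projection routines and the convex weights are assumed to be supplied by the caller):

```python
import numpy as np

def simultaneous_projection_step(x, projections, weights):
    """One iteration: project the current point onto every set
    simultaneously and return a convex combination of the projections.
    `projections` is a list of callables P_i(x); `weights` are
    nonnegative numbers summing to one."""
    projected = [np.asarray(P(x), dtype=float) for P in projections]
    return sum(w * p for w, p in zip(weights, projected))

def simultaneous_projections(x0, projections, weights, max_iter=1000, tol=1e-9):
    """Repeat the step until the iterates stop moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = simultaneous_projection_step(x, projections, weights)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```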

16.
We consider a quadratic d.c. optimization problem on a convex set. The objective function is represented as the difference of two convex functions. By reducing the problem to the equivalent concave programming problem, we prove a sufficient optimality condition in the form of an inequality for the directional derivative of the objective function at admissible points of the corresponding level surface.

17.
Portfolio selection with higher moments is an NP-hard nonconvex polynomial optimization problem. In this paper, we propose an efficient local optimization approach based on DC (Difference of Convex functions) programming, called DCA (DC Algorithm), that consists of solving the nonconvex program via a sequence of convex ones. At each iteration, DCA constructs a suitable convex quadratic subproblem which, thanks to the proposed special DC decomposition, can be solved easily by an explicit method. Computational results show that DCA almost always converges to globally optimal solutions when compared with global optimization methods (Gloptipoly, Branch-and-Bound), and it outperforms several standard local optimization algorithms.

18.
In this paper, we propose a line-search procedure for the logarithmic barrier function in the context of an interior point algorithm for convex quadratic programming. Preliminary testing shows that the proposed procedure is superior to some other line-search methods developed specifically for the logarithmic barrier function in the literature.
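For context, a standard backtracking line search on a logarithmic barrier function looks roughly as follows (a generic textbook-style sketch under assumed names, not the specialized procedure proposed in the paper):

```python
import numpy as np

def barrier_backtracking(x, dx, phi, grad_phi_x, A, b, alpha=0.25, beta=0.5):
    """Backtracking line search on phi(x) = f(x) - mu * sum(log(b - A @ x)).

    The step length is first capped so that the trial point stays
    strictly inside {x : A @ x < b}; then the usual Armijo condition on
    phi is enforced by shrinking the step by the factor beta until it
    holds.  dx is assumed to be a descent direction for phi.
    """
    slack = b - A @ x                 # current slacks, assumed positive
    change = A @ dx                   # how the slacks shrink along dx
    t = 1.0
    blocking = change > 0
    if np.any(blocking):              # stay away from the boundary
        t = min(1.0, 0.99 * np.min(slack[blocking] / change[blocking]))
    descent = float(grad_phi_x @ dx)  # directional derivative of phi at x
    while phi(x + t * dx) > phi(x) + alpha * t * descent:
        t *= beta
    return t
```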

19.
We present an interior proximal method for solving constrained nonconvex optimization problems where the objective function is given by the difference of two convex functions (a DC function). To this end, we consider a linearized proximal method with a proximal distance as regularization. Convergence analysis is carried out for particular choices of the proximal distance, namely second-order homogeneous proximal distances and Bregman distances. Finally, some academic numerical results are presented for a constrained DC problem and generalized Fermat–Weber location problems.
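One way to write a linearized proximal iteration of this type, with a generic proximal (e.g. Bregman) distance, is sketched below (hedged; `subgrad_h`, `solve_subproblem`, and the parameter sequence are placeholders, and the specific distances analysed in the paper are not implemented):

```python
def linearized_interior_proximal(x0, subgrad_h, solve_subproblem, betas):
    """Linearized proximal iteration for minimizing f = g - h (g, h convex)
    over a constrained set:

        x_{k+1} = argmin_x  g(x) - y_k @ x + (1.0 / beta_k) * D(x, x_k),

    where y_k is a subgradient of h at x_k and D is a proximal distance
    (for instance a Bregman distance) that keeps the iterates in the
    interior of the feasible set.  solve_subproblem(x_k, y_k, beta_k) is
    assumed to return the minimizer of this convex subproblem.
    """
    x = x0
    for beta in betas:
        y = subgrad_h(x)              # linearize the concave part -h at x_k
        x = solve_subproblem(x, y, beta)
    return x
```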

20.
We propose a modification of the proximal decomposition method investigated by Spingarn [30] and Mahey et al. [19] for minimizing a convex function on a subspace. For the method to be favorable from a computational point of view, the introduction of approximations in the proximal step is of particular importance. First, we couple decomposition on the graph of the epsilon-subdifferential mapping with cutting plane approximations to obtain an algorithmic pattern that falls within the general framework of Rockafellar's inexact proximal-point algorithms [26]. Recently, Solodov and Svaiter [27] proposed a new proximal-point-like algorithm that uses improved error criteria and an enlargement of the maximal monotone operator defining the problem. We combine their idea with a bundle mechanism to devise an inexact proximal decomposition method with an error condition that is not hard to satisfy in practice. Then, we present some applications of our development. First, we give a new regularized version of the Benders decomposition method in convex programming, called the proximal convex Benders decomposition algorithm. Second, we derive a new algorithm for nonlinear multicommodity flow problems, among which is the message routing problem in telecommunications data networks.
