Similar literature (20 results)
1.
Composite proximal bundle method
We consider minimization of nonsmooth functions which can be represented as the composition of a positively homogeneous convex function and a smooth mapping. This is a sufficiently rich class that includes max-functions, largest eigenvalue functions, and norm-1 regularized functions. The bundle method uses an oracle that is able to compute separately the function and subgradient information for the convex function, and the function and derivatives for the smooth mapping. With this information, it is possible to solve approximately certain proximal linearized subproblems in which the smooth mapping is replaced by its Taylor-series linearization around the current serious step. Our numerical results show the good performance of the Composite Bundle method for a large class of problems.
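To make the subproblem concrete: when the outer convex function is the coordinate max-function, the proximal linearized subproblem reads min_d max_i (c(x) + J(x)d)_i + (mu/2)||d||^2. The sketch below is a minimal illustration of that step, assuming a hypothetical smooth mapping c and solving the subproblem through the standard slack-variable reformulation with an off-the-shelf solver; it is not the authors' specialized machinery.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth mapping c: R^2 -> R^3 and its Jacobian.
def c(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) - x[1], x[0] - x[1]**2])

def jac_c(x):
    return np.array([[2 * x[0], 1.0],
                     [np.cos(x[0]), -1.0],
                     [1.0, -2 * x[1]]])

def prox_linear_step(x, mu=1.0):
    """Solve min_d max_i (c(x) + J(x) d)_i + (mu/2)||d||^2 by introducing
    a slack t:  min t + (mu/2)||d||^2  s.t.  (c(x) + J(x) d)_i <= t."""
    cx, J = c(x), jac_c(x)
    n = x.size
    z0 = np.append(np.zeros(n), cx.max())            # z = (d, t)
    obj = lambda z: z[n] + 0.5 * mu * z[:n] @ z[:n]
    cons = {'type': 'ineq',                          # t - (c + J d)_i >= 0
            'fun': lambda z: z[n] - (cx + J @ z[:n])}
    res = minimize(obj, z0, constraints=[cons], method='SLSQP')
    return res.x[:n]

x = np.array([1.0, 0.5])       # current serious step
print("candidate step d:", prox_linear_step(x))
```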

2.
In this paper, we consider a constrained nonconvex nonsmooth optimization problem in which neither the objective nor the constraint functions need be convex or smooth. With the help of a penalty function, we transform the problem into an unconstrained one and design a proximal bundle algorithm that handles the nonconvexity through a local convexification of the penalty function. We show that, under a suitable constraint qualification, the penalty function is exact, and that the sequence generated by our algorithm converges to the KKT points of the problem under a moderate assumption. Finally, some illustrative examples show the good performance of our algorithm.
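A minimal sketch of the penalty transformation itself, on a hypothetical toy instance; the local convexification and bundle machinery are omitted:

```python
import numpy as np

# Hypothetical nonconvex, nonsmooth instance:
#   min f(x) = |x0| - 0.5*x1**2   s.t.  g(x) = x0 + x1 - 1 <= 0
def f(x): return abs(x[0]) - 0.5 * x[1]**2
def g(x): return x[0] + x[1] - 1.0

def penalty(x, rho):
    """l1-type penalty F_rho(x) = f(x) + rho * max(0, g(x)).
    Under a suitable constraint qualification (as in the paper), the
    penalty is exact: for finite rho its minimizers relate to KKT points."""
    return f(x) + rho * max(0.0, g(x))

# For growing rho the penalized value at an infeasible point increases,
# while feasible points are unaffected:
x_feas, x_infeas = np.array([0.5, 0.5]), np.array([1.0, 0.5])
for rho in (1.0, 10.0):
    print(rho, penalty(x_feas, rho), penalty(x_infeas, rho))
```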

3.
Multiobjective DC optimization problems arise naturally, for example, in data classification and cluster analysis, which play a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method for nonsmooth multiobjective optimization problems whose objective and constraint functions can be represented as a difference of two convex (DC) functions. The method is of the descent type and generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting-plane model tailored to the DC improvement function, so that both the convex and the concave behaviour of the function are captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we report numerical experiments comparing the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems, in order to validate using a method aimed specifically at DC objectives instead of a general nonconvex method.

4.
In this paper, we develop a version of the bundle method for unconstrained difference-of-convex (DC) programming problems, assuming that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build separate approximations of each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting-plane model of the original objective function, which explicitly takes into account both the convex and the concave behavior of the objective function. We design a proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an \(\varepsilon \)-critical point. The algorithm is tested on some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
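The following sketch illustrates the basic ingredient on a one-dimensional toy DC function: each component is approximated by its own convex cutting-plane model built from a bundle of sample points, and the two models are subtracted to obtain a nonconvex piecewise-affine model. This is a simplified illustration of the idea; the paper's combined model is more refined.

```python
import numpy as np

# DC decomposition f = f1 - f2 with f1(x) = |x| and f2(x) = 0.2 x^2 (both convex).
f1, df1 = lambda x: np.abs(x), lambda x: np.sign(x)
f2, df2 = lambda x: 0.2 * x**2, lambda x: 0.4 * x

bundle = np.array([-2.0, -0.5, 0.7, 1.5])     # sample points near the iterate

def cpm(f, df, pts, x):
    """Convex cutting-plane model: max over bundle points y of f(y) + f'(y)(x - y)."""
    cuts = f(pts)[:, None] + df(pts)[:, None] * (x[None, :] - pts[:, None])
    return cuts.max(axis=0)

x = np.linspace(-2.0, 2.0, 9)
model = cpm(f1, df1, bundle, x) - cpm(f2, df2, bundle, x)   # nonconvex DC model
print(np.round(model - (f1(x) - f2(x)), 3))                 # model error on the grid
```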

5.
In the past decade, eigenvalue optimization has gained remarkable attention in various engineering applications. One of the main difficulties in the numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not smooth at points where they are multiple. We propose a new explicit nonsmooth second-order bundle algorithm, based on the idea of the proximal bundle method, for minimizing an arbitrary eigenvalue over an affine family of symmetric matrices; such an eigenvalue function is a special class of D.C. function. To the best of our knowledge, few methods currently exist for minimizing an arbitrary eigenvalue function. In this work, we apply the \({\mathcal {U}}\)-Lagrangian theory to this class of D.C. functions: the arbitrary eigenvalue function λi with affine matrix-valued mappings, where λi is usually not convex. We prove the global convergence of our method in the sense that every accumulation point of the sequence of iterates is stationary. Moreover, under mild conditions we show that, if started close enough to the minimizer x*, the proposed algorithm converges to x* quadratically. The method is tested on some constrained optimization problems, and encouraging preliminary numerical results show its efficiency.

6.
We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations to generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search direction, along with a predicted reduction, that is then explored in a line-search setting. To improve the approximation properties at points far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search direction. We discuss the main convergence issues of the line-search-based proximal bundle method and provide computational results on a set of academic benchmark test problems.
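As a small illustration of the final stage of such a scheme, the sketch below implements a generic backtracking line search that accepts a step once the actual decrease matches a fraction of a model-predicted reduction. The interface (names, constants, acceptance rule) is hypothetical and much simpler than the serious-step/null-step logic of a full bundle method.

```python
import numpy as np

def armijo(f, x, d, pred, m=0.3, beta=0.5, t0=1.0, max_halvings=30):
    """Backtracking line search: accept stepsize t once the actual decrease
    f(x) - f(x + t*d) reaches m * t * pred, where pred > 0 is the reduction
    predicted by the piecewise-affine model along d. Names and acceptance
    rule are hypothetical; a full bundle method also handles null steps."""
    fx, t = f(x), t0
    for _ in range(max_halvings):
        if fx - f(x + t * d) >= m * t * pred:
            return t
        t *= beta
    return 0.0    # no acceptable step: treat as a null step

f = lambda x: np.abs(x[0]) - 0.3 * x[0]**2 + x[1]**2   # toy DC function
x, d = np.array([1.0, 1.0]), np.array([-1.0, -2.0])
print("accepted stepsize:", armijo(f, x, d, pred=2.0))
```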

7.
In this paper, we study nonlinear optimization problems involving eigenvalues of symmetric matrices. One of the difficulties in solving these problems is that the eigenvalue functions are not differentiable at points where the corresponding eigenvalue is multiple. We apply the \({\mathcal {U}}\)-Lagrangian theory to analyze the largest eigenvalue function of a convex matrix-valued mapping, which extends the corresponding results for linear mappings in the literature. We also provide formulas for the first- and second-order derivatives of the \({\mathcal {U}}\)-Lagrangian under mild assumptions. These theoretical results provide new second-order information about the largest eigenvalue function along a suitable smooth manifold and lead to a new algorithmic framework for analyzing the underlying optimization problem.

8.
The problem of finding the best rank-one approximation to higher-order tensors has extensive engineering and statistical applications. It is well known that this problem is equivalent to a homogeneous polynomial optimization problem. In this paper, we study theoretical results and numerical methods for this problem, focusing in particular on the 4th-order symmetric tensor case. First, we reformulate the polynomial optimization problem as a matrix program and show the equivalence between the two problems. Then, we prove that there is no duality gap between the reformulation and its Lagrangian dual problem. To deal with the problem, we propose two relaxed models. The first is a convex quadratic matrix optimization problem regularized by the nuclear norm, while the second is a quadratic matrix program regularized by a truncated nuclear norm, which is a D.C. function and therefore nonconvex. To overcome the difficulty of solving this nonconvex problem, we approximate the nonconvex penalty by a convex term. We propose to use the proximal augmented Lagrangian method to solve these two relaxed models. In order to obtain a global solution, we propose an alternating least-eigenvalue method applied after solving the relaxed models and prove its convergence. Numerical results presented at the end demonstrate the effectiveness and efficiency of our proposed methods, especially for nonpositive tensors.
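For comparison, the classical symmetric higher-order power method is the simplest local approach to the same rank-one approximation problem; the sketch below implements it for a symmetric 4th-order tensor. It is a baseline, not the relaxation method proposed in the paper, and it may converge only to a local solution.

```python
import itertools
import numpy as np

def s_hopm(T, iters=200, tol=1e-10):
    """Symmetric higher-order power method for a symmetric 4th-order tensor T
    (shape n x n x n x n): seeks a stationary point of max_{||x||=1} T(x,x,x,x).
    Classical local baseline; a shift may be needed for guaranteed monotonicity."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    lam = np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
    for _ in range(iters):
        y = np.einsum('ijkl,j,k,l->i', T, x, x, x)   # tensor-vector product T x^3
        x = y / np.linalg.norm(y)
        lam_new = np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x   # best rank-one candidate: lam * (x outer x outer x outer x)

# Symmetrize a random tensor to build a test instance.
n = 4
A = np.random.default_rng(1).standard_normal((n,) * 4)
T = sum(A.transpose(p) for p in itertools.permutations(range(4))) / 24
print("lambda:", s_hopm(T)[0])
```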

9.
The proximal point mapping is the basis of many optimization techniques for convex functions. By means of variational analysis, the concept of proximal mapping was recently extended to nonconvex functions that are prox-regular and prox-bounded. In such a setting, the proximal point mapping is locally Lipschitz continuous and its set of fixed points coincides with the critical points of the original function. This suggests that the many uses of proximal points, and their corresponding proximal envelopes (Moreau envelopes), have a natural extension from convex to nonconvex optimization. For example, the inexact proximal point methods for convex optimization might be redesigned to work for nonconvex functions. In order to begin the practical implementation of proximal points in a nonconvex setting, a first crucial step is to design efficient methods of approximating nonconvex proximal points. This would provide a solid foundation on which future design and analysis of nonconvex proximal point methods could flourish. In this paper we present a methodology based on computing the proximal points of piecewise-affine models of the nonconvex function. These models can be built with only the knowledge obtained from a black box providing, for each point, the function value and one subgradient. Convergence of the method is proved for the class of nonconvex functions that are prox-bounded and lower-${\mathcal{C}}^2$, and encouraging preliminary numerical testing is reported.
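The central object is easy to compute by brute force in one dimension, which is useful for intuition and testing. The sketch below evaluates the proximal point and Moreau envelope of a nonconvex prox-bounded function by a grid search; the paper's contribution is, of course, to approximate these points efficiently via piecewise-affine models rather than enumeration.

```python
import numpy as np

# Nonconvex, prox-bounded, lower-C^2 toy function.
f = lambda y: np.minimum(np.abs(y - 1.0), (y + 1.0) ** 2)

def prox(f, x, r, grid=np.linspace(-5.0, 5.0, 20001)):
    """Brute-force 1-D proximal point: argmin_y f(y) + (1/(2r))(y - x)^2.
    For prox-bounded f and small enough r the minimizer exists; this grid
    search only illustrates the object that the paper approximates via a
    black-box (value, subgradient) oracle."""
    return grid[np.argmin(f(grid) + (grid - x) ** 2 / (2.0 * r))]

r = 0.5
for x in (-2.0, 0.0, 2.0):
    p = prox(f, x, r)
    env = f(p) + (p - x) ** 2 / (2.0 * r)    # Moreau envelope value at x
    print(f"x={x:+.1f}  prox={p:+.4f}  envelope={env:.4f}")
```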

10.
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles that provide inexact information. The errors in function and subgradient evaluations may be unknown, but are bounded. To handle the nonconvexity, we use the redistribution idea, which is further complicated by the inexactness of the available information. We also examine a modified improvement function to address the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experiences are provided.

11.
We propose a trust-region-type method for a class of nonsmooth nonconvex optimization problems where the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed and efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.

12.
A coordinate gradient descent method for nonsmooth separable minimization
We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with $\ell_1$-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the $\ell_1$-regularization of unconstrained optimization problems from Moré et al. (ACM Trans. Math. Softw. 7, 17–41, 1981) and from the CUTEr set (Gould and Orban, ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the $\ell_1$-regularized problem as a bound-constrained optimization problem, is also reported.
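A minimal sketch of the method's best-known special case, coordinate descent for an $\ell_1$-regularized least-squares objective, where each one-dimensional subproblem is solved exactly by soft-thresholding (example data and parameter choices are hypothetical):

```python
import numpy as np

def soft(z, t):
    """Soft-threshold: closed-form minimizer of 0.5*(u - z)^2 + t*|u|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, sweeps=100):
    """Coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1, a simple
    smooth-plus-separable-convex instance of the problem class."""
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # squared column norms
    r = b - A @ x                      # residual, kept up to date
    for _ in range(sweeps):
        for j in range(n):
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x_new = soft(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)
            x[j] = x_new
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
print("nonzeros:", np.count_nonzero(cd_lasso(A, b, lam=2.0)))
```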

13.
Smooth Convex Approximation to the Maximum Eigenvalue Function
In this paper, we consider smooth convex approximations to the maximum eigenvalue function. To make the results applicable to a wide class of applications, the study is conducted on the composite of the maximum eigenvalue function and a linear operator mapping $\mathbb{R}^m$ to $\mathcal{S}^n$, the space of n-by-n symmetric matrices. The composite function is in turn the natural objective function for minimizing the maximum eigenvalue function over an affine space in $\mathcal{S}^n$. This leads to a sequence of smooth convex minimization problems governed by a smoothing parameter. As the parameter goes to zero, the original problem is recovered. We then develop a computable Hessian formula for the smooth convex functions, a matrix representation of the Hessian, and study the regularity conditions which guarantee the nonsingularity of the Hessian matrices. The study of the well-posedness of the smooth convex function leads to a regularization method which is globally convergent.
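One standard way to build such a smooth approximation is exponential smoothing of the spectrum, $f_\mu(X) = \mu \log \sum_i \exp(\lambda_i(X)/\mu)$, which overestimates $\lambda_{\max}$ by at most $\mu \log n$ and has the explicit gradient $Q\,\mathrm{diag}(\mathrm{softmax}(\lambda/\mu))\,Q^\top$. The sketch below implements this particular smoothing; the paper's construction and its Hessian formula may differ in detail.

```python
import numpy as np

def smoothed_lambda_max(X, mu):
    """Exponential smoothing of lambda_max for symmetric X:
        f_mu(X) = mu * log(sum_i exp(lambda_i(X) / mu)),
    with lambda_max(X) <= f_mu(X) <= lambda_max(X) + mu*log(n).
    Gradient: Q diag(softmax(lambda/mu)) Q^T. One standard smoothing;
    the paper's construction may differ in detail."""
    lam, Q = np.linalg.eigh(X)
    z = np.exp((lam - lam.max()) / mu)       # shifted for numerical stability
    w = z / z.sum()                          # softmax weights
    val = lam.max() + mu * np.log(z.sum())
    grad = (Q * w) @ Q.T                     # = sum_i w_i q_i q_i^T
    return val, grad

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
X = (B + B.T) / 2
for mu in (1.0, 0.1, 0.01):
    print(mu, smoothed_lambda_max(X, mu)[0])
print("lambda_max:", np.linalg.eigh(X)[0][-1])   # recovered as mu -> 0
```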

14.
We investigate the problem of minimizing a nonconvex function subject to convex constraints, and we study different techniques for computing a lower bound on the optimal value: the method of convex envelope functions on one hand, and the method of exploiting nonconvex duality on the other. We investigate which technique gives the better bound and develop conditions under which the dual bound is strictly better than the convex envelope bound. As a byproduct, we derive some interesting results on nonconvex duality.

15.
We present an inexact spectral bundle method for solving convex quadratic semidefinite optimization problems. This method is a first-order method, hence requires much less computational cost in each iteration than second-order approaches such as interior-point methods. In each iteration of our method, we solve an eigenvalue minimization problem inexactly, and solve a small convex quadratic semidefinite program as a subproblem. We give a proof of the global convergence of this method using techniques from the analysis of the standard bundle method, and provide a global error bound under a Slater type condition for the problem in question. Numerical experiments with matrices of order up to 3000 are performed, and the computational results establish the effectiveness of this method.

16.
QBB, a global optimization method for twice-differentiable NLPs (nonlinear programs), is developed to operate within a branch-and-bound framework and requires the construction of a relaxed convex problem on the basis of quadratic lower bounding functions for the generic nonconvex structures. Within an exhaustive simplicial division of the constrained region, a rigorous quadratic underestimating function is constructed for the generic nonconvex function structure by means of a maximal-eigenvalue analysis of the interval Hessian matrix. As the division progresses, each valid lower bound of the NLP problem is computed by convex programming on the relaxed optimization problem, obtained by preserving the convex or linear terms, replacing each concave term with its linear convex envelope, and underestimating the special terms and the generic terms with their customized tight convex lower bounding functions or the valid quadratic lower bounding functions, respectively. The standard convergence properties of the QBB algorithm for nonconvex global optimization problems are guaranteed. Preliminary computational studies are presented to evaluate the algorithmic efficiency of the proposed QBB approach.
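The quadratic underestimation principle can be sketched in the simpler alpha-BB form on a box domain: shift f by $\alpha \sum_i (x_i - l_i)(x_i - u_i)$ with $\alpha$ large enough, computed here from a Gershgorin bound on the interval Hessian. QBB itself works on simplices with a maximal-eigenvalue analysis, so this is an illustration of the principle, not the paper's construction.

```python
import numpy as np

def alpha_from_interval_hessian(H_lo, H_hi):
    """Gershgorin lower bound on lambda_min over an interval Hessian [H_lo, H_hi];
    alpha >= max(0, -lambda_min/2) makes the shifted function below convex
    (alpha-BB-style; QBB's simplicial maximal-eigenvalue analysis differs)."""
    n = H_lo.shape[0]
    lam_min = min(H_lo[i, i] - sum(max(abs(H_lo[i, j]), abs(H_hi[i, j]))
                                   for j in range(n) if j != i)
                  for i in range(n))
    return max(0.0, -lam_min / 2.0)

def underestimator(f, x, l, u, alpha):
    # The quadratic shift is nonpositive on the box, so L(x) <= f(x) there.
    return f(x) + alpha * np.sum((x - l) * (x - u))

# f(x) = sin(3*x0) + x0*x1 on the unit box; its Hessian is
# [[-9 sin(3*x0), 1], [1, 0]], enclosed on the box by:
H_lo = np.array([[-9.0, 1.0], [1.0, 0.0]])
H_hi = np.array([[0.0, 1.0], [1.0, 0.0]])
f = lambda x: np.sin(3 * x[0]) + x[0] * x[1]
l, u = np.zeros(2), np.ones(2)

alpha = alpha_from_interval_hessian(H_lo, H_hi)   # = 5.0 here
xm = (l + u) / 2
print(alpha, underestimator(f, xm, l, u, alpha), "<=", f(xm))
```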

17.
Two-phase image segmentation is a fundamental task that partitions an image into foreground and background. In this paper, two types of nonconvex and nonsmooth regularization models are proposed for basic two-phase segmentation. They extend the convex regularization of the characteristic function on the image domain to the nonconvex case, which better recovers piecewise constant regions with neat boundaries. By analyzing the proposed non-Lipschitz model, we combine the proximal alternating minimization framework with support-shrinkage and linearization strategies to design our algorithm. This leads to two alternating strongly convex subproblems which can be easily solved. Similarly, we present an algorithm without the support-shrinkage operation for the nonconvex Lipschitz case. Using the Kurdyka-Łojasiewicz property of the objective function, we prove that the limit point of the generated sequence is a critical point of the original nonconvex nonsmooth problem. Numerical experiments and comparisons illustrate the effectiveness of our method for two-phase image segmentation.

18.
Using the concept of a subdifferential of a vector-valued convex mapping, we provide duality formulas for the minimization of nonconvex composite functions and related optimization problems such as the minimization of a convex function over a vectorial DC constraint.

19.
This paper analyses the properties of the projection mapping onto a set defined by a constraint function whose image is possibly a nonpolyhedral convex set. Under some nondegeneracy assumptions, we prove the (strong) semismoothness of the projection mapping. In particular, we derive the strong semismoothness of the projection mapping when the nonpolyhedral convex set under consideration is taken to be the second-order cone or the semidefinite cone. We also derive the semismoothness of the solution to the Moreau–Yosida regularization of the maximum eigenvalue function.
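The second-order-cone instance of this projection has a well-known closed form, which makes the mapping whose semismoothness is analyzed easy to state concretely:

```python
import numpy as np

def proj_soc(x, t):
    """Closed-form projection of (x, t) onto the second-order cone
    K = {(x, t) : ||x||_2 <= t} -- the SOC instance of the projection
    mapping analyzed in the paper."""
    nx = np.linalg.norm(x)
    if nx <= t:                      # already in the cone
        return x, t
    if nx <= -t:                     # in the polar cone: project to the origin
        return np.zeros_like(x), 0.0
    coef = (t + nx) / 2.0            # boundary case
    return coef * x / nx, coef

x, t = np.array([3.0, 4.0]), 1.0
px, pt = proj_soc(x, t)
print(px, pt, np.linalg.norm(px) <= pt + 1e-12)   # lands on the cone boundary
```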

20.
We present a new bundle algorithm for minimizing convex, not necessarily smooth, functions. The novelty of our approach is a bundle modification strategy, applied whenever the stability center is updated, which aims to replace the points of the bundle with new points having possibly better objective function values. Convergence of the algorithm is proved and numerical results are presented.
