Similar Literature
20 similar documents found.
1.
Computational Optimization and Applications - A method, called an augmented subgradient method, is developed to solve unconstrained nonsmooth difference of convex (DC) optimization problems. At...

2.
This paper presents a descent method for minimizing a sum of possibly nonsmooth convex functions. Search directions are found by solving subproblems obtained by replacing all but one of the component functions with their polyhedral approximations and adding a quadratic term. The algorithm is globally convergent and terminates when the objective function happens to be polyhedral. It yields a new decomposition method for solving large-scale linear programs with dual block-angular structure. Supported by Program CPBP 02.15. The author thanks the two referees for their helpful suggestions.
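A minimal sketch of one such subproblem in one dimension, keeping one component exact and replacing the other by its cutting-plane (polyhedral) model plus a quadratic term; the component functions, bundle points, and grid-based minimization are illustrative assumptions (the method itself solves a QP):

```python
import numpy as np

# Sum f = f1 + f2 of convex functions (illustrative choice).
f1 = lambda x: abs(x - 1.0)          # kept exact in the subproblem
f2 = lambda x: 0.5 * x**2            # replaced by a polyhedral model
g2 = lambda x: x                     # (sub)gradient of f2

x_k, u = 3.0, 1.0                    # current iterate, proximal weight
bundle = [-2.0, 0.0, 2.0, 3.0]       # points where f2 was linearized

def f2_model(x):
    # cutting-plane model: pointwise max of linearizations of f2
    return max(f2(y) + g2(y) * (x - y) for y in bundle)

# Subproblem: keep f1, replace f2 by its model, add a quadratic term.
# (The paper solves this as a QP; a dense grid suffices for a sketch.)
grid = np.linspace(x_k - 5.0, x_k + 5.0, 20001)
vals = [f1(z) + f2_model(z) + 0.5 * u * (z - x_k)**2 for z in grid]
d = grid[int(np.argmin(vals))] - x_k
print("search direction d =", d)     # a descent direction for f1 + f2
```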

3.
This paper considers local convergence and rate-of-convergence results for algorithms for minimizing the composite function \(F(x)=f(x)+h(c(x))\), where \(f\) and \(c\) are smooth but \(h(c)\) may be nonsmooth. Local convergence at a second-order rate is established for the generalized Gauss-Newton method when \(h\) is convex and globally Lipschitz and the minimizer is strongly unique. Local convergence at a second-order rate is established for a generalized Newton method when the minimizer satisfies nondegeneracy, strict complementarity and second-order sufficiency conditions. Assuming the minimizer satisfies these conditions, necessary and sufficient conditions for a superlinear rate of convergence for curvature-approximating methods are established. Necessary and sufficient conditions for a two-step superlinear rate of convergence are also established when only reduced curvature information is available. All these local convergence and rate-of-convergence results are directly applicable to nonlinear programming problems. This work was done while the author was a Research Fellow at the Mathematical Sciences Research Centre, Australian National University.
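As a worked illustration (not the paper's algorithm), take the scalar case with \(h=|\cdot|\) and a proximal weight \(\mu>0\): one generalized Gauss-Newton step minimizes \(f'(x)d + |c(x)+c'(x)d| + (\mu/2)d^2\), which has a closed-form solution by case analysis on the sign of the kink term:

```python
def gauss_newton_step(g, c, a, mu):
    """d = argmin_d  g*d + |c + a*d| + (mu/2)*d**2
    (g = f'(x), c = c(x), a = c'(x) != 0, mu > 0)."""
    d_pos = -(g + a) / mu            # valid if the kink term stays positive
    if c + a * d_pos > 0:
        return d_pos
    d_neg = -(g - a) / mu            # valid if the kink term stays negative
    if c + a * d_neg < 0:
        return d_neg
    return -c / a                    # otherwise the minimizer sits at the kink

# Illustrative use: minimize F(x) = 0.5*x**2 + |x - 1| (minimizer x = 1).
x = 3.0
for _ in range(5):
    x += gauss_newton_step(g=x, c=x - 1.0, a=1.0, mu=1.0)
print(x)  # -> 1.0
```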

4.
Methods for minimization of composite functions with a nondifferentiable polyhedral convex part are considered. This class includes problems involving minimax functions and norms. Local convergence results are given for “active set” methods, in which an equality-constrained quadratic programming subproblem is solved at each iteration. The active set consists of components of the polyhedral convex function which are active or near-active at the current iteration. The effects of solving the subproblem inexactly at each iteration are discussed; rate-of-convergence results which depend on the degree of inexactness are given.

5.
We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas coming from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points that are far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search direction. We discuss the main convergence issues of the line-search based proximal bundle method and provide computational results on a set of academic benchmark test problems.
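A minimal sketch of the model construction only, for an assumed toy DC function \(f = f_1 - f_2\); the local bundle for \(f_1\) and the global bundle for \(f_2\) are illustrative data:

```python
import numpy as np

# Assumed DC objective f = f1 - f2:  f1(x) = |x|,  f2(x) = 0.5*x^2.
f1, g1 = (lambda x: abs(x)), (lambda x: float(np.sign(x)))
f2, g2 = (lambda x: 0.5 * x**2), (lambda x: x)

x_k = 2.0
bundle1 = [1.5, 2.0, 2.5]            # f1-info near the current iterate
bundle2 = [-4.0, -1.0, 1.0, 4.0]     # f2-info spread out (global model)

def model1(x):                       # convex cutting-plane model of f1
    return max(f1(y) + g1(y) * (x - y) for y in bundle1)

def model2(x):                       # convex cutting-plane model of f2
    return max(f2(y) + g2(y) * (x - y) for y in bundle2)

def dc_model(x):                     # DC model: pointwise max of concave
    return model1(x) - model2(x)     # piecewise-affine functions

print(dc_model(x_k), f1(x_k) - f2(x_k))   # model vs. true value at x_k
```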

6.
Newton's method for a class of nonsmooth functions
This paper presents and justifies a Newton iterative process for finding zeros of functions admitting a certain type of approximation. This class includes smooth functions as well as nonsmooth reformulations of variational inequalities. We prove for this method an analogue of the fundamental local convergence theorem of Kantorovich, including optimal error bounds. The research reported here was sponsored by the National Science Foundation under Grants CCR-8801489 and CCR-9109345, by the Air Force Systems Command, USAF, under Grants AFOSR-88-0090 and F49620-93-1-0068, by the U. S. Army Research Office under Grant No. DAAL03-92-G-0408, and by the U. S. Army Space and Strategic Defense Command under Contract No. DASG60-91-C-0144. The U. S. Government has certain rights in this material, and is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
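A one-dimensional sketch of such an iteration on an assumed nonsmooth equation \(\min(x, F(x)) = 0\) (a standard reformulation of a complementarity problem); the map \(F\) and the branch rule for picking a generalized derivative are illustrative choices:

```python
def F(x):
    return x**2 + x - 2.0            # illustrative smooth map

def phi(x):
    return min(x, F(x))              # nonsmooth reformulation: solve phi(x) = 0

def dphi(x):
    # an element of the generalized derivative: differentiate whichever
    # branch attains the min (ties broken toward the first branch)
    return 1.0 if x <= F(x) else 2.0 * x + 1.0

x = 1.3                              # start near the zero, as local theory requires
for _ in range(20):
    if abs(phi(x)) < 1e-12:
        break
    x -= phi(x) / dphi(x)            # Newton step with a generalized derivative
print(x)  # -> 1.0, where min(1, F(1)) = min(1, 0) = 0
```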

7.
8.
Many functions of several variables used in nonlinear programming are factorable, i.e., complicated compositions of transformed sums and products of functions of a single variable. The Hessian matrices of twice-differentiable factorable functions can easily be expressed as sums of outer products (dyads) of vectors. A modified Newton's method for minimizing unconstrained factorable functions which exploits this special form of the Hessian is developed. Computational experience with the method is presented. This material is based upon work supported by the National Science Foundation under Grant No. MCS-79-04106. The author would like to thank Professor G. P. McCormick, George Washington University, for several enlightening discussions on factorable programming and for his valuable comments which improved an earlier version of this paper.
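A small numpy illustration of the structural fact being exploited, for the assumed factorable function \(f(x) = \varphi(a^Tx)\,(b^Tx)\): its Hessian assembles from dyads built on \(a\) and \(b\), verified here against finite differences:

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 1.0, -1.0])
phi, dphi, d2phi = np.exp, np.exp, np.exp   # single-variable transformation

def f(x):                                   # factorable: phi(a@x) * (b@x)
    return phi(a @ x) * (b @ x)

def hessian_dyads(x):
    u, v = a @ x, b @ x
    # Hessian = d2phi(u)*v * aa^T + dphi(u) * (ab^T + ba^T): a sum of dyads
    return (d2phi(u) * v * np.outer(a, a)
            + dphi(u) * (np.outer(a, b) + np.outer(b, a)))

# Check against central finite differences.
x0, h = np.array([0.1, 0.2, -0.3]), 1e-5
n = len(x0)
H_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        e_i, e_j = np.eye(n)[i], np.eye(n)[j]
        H_fd[i, j] = (f(x0 + h*e_i + h*e_j) - f(x0 + h*e_i - h*e_j)
                      - f(x0 - h*e_i + h*e_j) + f(x0 - h*e_i - h*e_j)) / (4*h*h)
print(np.allclose(hessian_dyads(x0), H_fd, atol=1e-4))  # True
```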

9.
A modified gradient method is developed for minimization of nonsmooth exact penalty functions. The complexity of each iteration in the proposed method is lower than in the original method. Translated from Vychislitel'naya i Prikladnaya Matematika, No. 73, pp. 108–112, 1992.

10.
An iterative algorithm is proposed for the constrained minimization of a convex nonsmooth function on a set given as a convex smooth surface. Convergence of the algorithm, in the sense of necessary conditions for a local minimum, is proved.

11.
Multiobjective DC optimization problems arise naturally, for example, in data classification and cluster analysis, which play a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method designed for nonsmooth multiobjective optimization problems whose objective and constraint functions can be presented as a difference of two convex (DC) functions. The method is of the descent type, and it generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting-plane model tailored to the DC improvement function so that both the convex and the concave behaviour of the function are captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we consider some numerical experiments and compare the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems, in order to validate the use of a method aimed specifically at DC objectives instead of a general nonconvex method.

12.
To minimize a continuously differentiable quasiconvex function \(f:\mathbb{R}^n \to \mathbb{R}\), Armijo's steepest descent method generates a sequence \(x^{k+1} = x^k - t_k \nabla f(x^k)\), where \(t_k > 0\). We establish strong convergence properties of this classic method: either \(x^k \to \bar{x}\) such that \(\nabla f(\bar{x}) = 0\); or \(\arg\min f = \emptyset\), \(\Vert x^k \Vert \to \infty\) and \(f(x^k) \downarrow \inf f\). We also discuss extensions to other line searches. The research of the first author was supported by the Polish Academy of Sciences. The second author acknowledges the support of the Department of Industrial Engineering, Hong Kong University of Science and Technology. We wish to thank two anonymous referees for their valuable comments. In particular, one referee has suggested the use of quasiconvexity instead of convexity of \(f\).
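A minimal implementation of the Armijo rule above on an assumed smooth test function; the backtracking parameters are conventional choices, not the paper's:

```python
import numpy as np

def f(x):                            # illustrative objective: 0.5*||x||^2
    return 0.5 * float(x @ x)

def grad_f(x):
    return x

def armijo_steepest_descent(x, s=1.0, beta=0.5, sigma=1e-4, iters=100):
    for _ in range(iters):
        g = grad_f(x)
        t = s
        # Armijo rule: shrink t until sufficient decrease holds
        while f(x - t * g) > f(x) - sigma * t * float(g @ g):
            t *= beta
        x = x - t * g                # x_{k+1} = x_k - t_k * grad f(x_k)
    return x

print(armijo_steepest_descent(np.array([3.0, -4.0])))  # -> ~[0, 0]
```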

13.
A method is described for globally minimizing concave functions over convex sets whose defining constraints may be nonlinear. The algorithm generates linear programs whose solutions minimize the convex envelope of the original function over successively tighter polytopes enclosing the feasible region. The algorithm does not involve cuts of the feasible region, requires only simplex pivot operations and univariate search computations to be performed, allows the objective function to be lower semicontinuous and nonseparable, and is guaranteed to converge to the global solution. Computational aspects of the algorithm are discussed.
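A one-dimensional illustration (an assumption made for concreteness) of the fact the algorithm rests on: over an interval, the convex envelope of a concave function is the chord through the endpoint values, so minimizing the envelope is a linear problem whose solution is attained at a vertex:

```python
import numpy as np

f = lambda x: -(x - 1.0)**2          # concave objective (illustrative)
lo, hi = -1.0, 2.0                   # feasible interval, a 1-D polytope

def envelope(x):
    # convex envelope of concave f over [lo, hi]: the chord (secant line)
    t = (x - lo) / (hi - lo)
    return (1.0 - t) * f(lo) + t * f(hi)

xs = np.linspace(lo, hi, 401)
assert np.all(envelope(xs) <= f(xs) + 1e-9)   # the envelope underestimates f
x_star = xs[int(np.argmin(envelope(xs)))]
print(x_star, f(x_star))             # -> -1.0 -4.0: attained at a vertex
```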

14.
In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point, and it is used to build separate approximations of each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting-plane model of the original objective function, which takes into account explicitly both the convex and the concave behaviour of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an \(\varepsilon \)-critical point. The algorithm is tested on some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
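A minimal sketch of the \(\varepsilon \)-criticality test referred to above, specialized to an assumed DC function with smooth components, where criticality reduces to the two component gradients agreeing:

```python
import numpy as np

# Assumed DC function f = f1 - f2 with smooth components:
# f1(x) = x^4, f2(x) = x^2, so f(x) = x^4 - x^2.
g1 = lambda x: 4.0 * x**3            # gradient of f1
g2 = lambda x: 2.0 * x               # gradient of f2

def is_eps_critical(x, eps=1e-6):
    # criticality: a subgradient of f1 coincides with one of f2;
    # for smooth components this reduces to |g1(x) - g2(x)| <= eps
    return abs(g1(x) - g2(x)) <= eps

for x in (0.0, 1.0 / np.sqrt(2.0), 0.5):
    print(x, is_eps_critical(x))     # the first two points are critical
```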

15.
16.
17.
In this paper, we propose a strongly sub-feasible direction method for the solution of inequality-constrained optimization problems whose objective functions are not necessarily differentiable. The algorithm combines the subgradient aggregation technique with the ideas of the generalized cutting-plane method and of the strongly sub-feasible direction method, and as a result a new search-direction-finding subproblem and a new line-search strategy are presented. The algorithm can not only accept infeasible starting points but also preserve the “strong sub-feasibility” of the current iteration without unduly increasing the objective value. Moreover, once a feasible iterate occurs, the method automatically becomes a feasible descent algorithm. Global convergence is proved, and some preliminary numerical results show that the proposed algorithm is efficient.

18.
An unconstrained nonlinear programming problem with nondifferentiabilities is considered. The nondifferentiabilities arise from terms of the form \(\max[f_1(x),\ldots,f_n(x)]\), which may enter nonlinearly in the objective function. Local convex polyhedral upper approximations to the objective function are introduced. These approximations are used in an iterative method for solving the problem. The algorithm proceeds by solving quadratic programming subproblems to generate search directions. Approximate line searches ensure global convergence of the method to stationary points. The algorithm is conceptually simple and easy to implement. It generalizes efficient variable metric methods for minimax calculations.
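A sketch of one direction-finding step for \(f(x) = \max[f_1(x),\ldots,f_n(x)]\), using a simplified polyhedral model built from linearizations at the current point and a dense grid search as a stand-in for the quadratic programming subproblem; the \(f_i\) and the solver here are illustrative assumptions:

```python
import numpy as np

fs = [lambda x: (x - 2.0)**2, lambda x: (x + 1.0)**2]       # components
gs = [lambda x: 2.0 * (x - 2.0), lambda x: 2.0 * (x + 1.0)] # gradients

def direction(x):
    # polyhedral model: max of linearizations at x, regularized by a
    # quadratic term (solved on a grid here instead of as a QP)
    ds = np.linspace(-5.0, 5.0, 20001)
    model = [max(f(x) + g(x) * d for f, g in zip(fs, gs)) + 0.5 * d * d
             for d in ds]
    return ds[int(np.argmin(model))]

x = 3.0
for _ in range(30):
    x += direction(x)                # plus a line search, in the real method
print(x)  # -> ~0.5, the minimizer of max((x-2)^2, (x+1)^2)
```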

19.
We introduce a trust region algorithm for minimization of nonsmooth functions with linear constraints. At each iteration, the objective function is approximated by a model function that satisfies a set of assumptions stated recently by Qi and Sun in the context of unconstrained nonsmooth optimization. The trust region iteration begins with the resolution of an “easy problem”, as in recent works of Martínez and Santos and of Friedlander, Martínez and Santos for smooth constrained optimization. In practical implementations we use the infinity norm for defining the trust region, which fits well with the domain of the problem. We prove global convergence and report numerical experiments related to a parameter estimation problem. Supported by FAPESP (Grant 90/3724-6), FINEP and FAEP-UNICAMP. Supported by FAPESP (Grants 90/3724-6 and 93/1515-9).
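A small sketch of why the infinity norm fits well: the trust region is a box, so for an assumed diagonal quadratic model the trust-region subproblem is solved coordinatewise by clipping (the model data below are illustrative):

```python
import numpy as np

def box_trust_region_step(g, h, delta):
    """Minimize sum_i (g_i*d_i + 0.5*h_i*d_i**2) over ||d||_inf <= delta
    for a diagonal model Hessian h > 0: clip the unconstrained minimizer."""
    return np.clip(-g / h, -delta, delta)

g = np.array([4.0, -0.2, 1.0])       # model gradient (illustrative)
h = np.array([1.0, 1.0, 2.0])        # diagonal model curvature, h > 0
print(box_trust_region_step(g, h, delta=0.5))  # -> [-0.5  0.2 -0.5]
```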

20.
In this paper we propose a parametrized Newton method for nonsmooth equations with finitely many maximum functions. A convergence result for this method is proved, and numerical experiments are reported.
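The abstract does not spell out the parametrization; a common choice (assumed here, not necessarily the authors') replaces \(\max(a,b)\) by the smooth function \(\tfrac12\big(a+b+\sqrt{(a-b)^2+4\mu^2}\big)\) and applies Newton's method while driving \(\mu \to 0\). A one-dimensional sketch:

```python
import numpy as np

def smooth_max(a, b, mu):
    return 0.5 * (a + b + np.sqrt((a - b)**2 + 4.0 * mu**2))  # -> max(a,b)

# Nonsmooth equation with a max function (illustrative):
#   F(x) = x + max(0, x - 1) - 2 = 0, with solution x = 1.5.
def F_mu(x, mu):
    return x + smooth_max(0.0, x - 1.0, mu) - 2.0

def dF_mu(x, mu):
    t = x - 1.0
    return 1.0 + 0.5 * (1.0 + t / np.sqrt(t * t + 4.0 * mu**2))

x, mu = 0.0, 1.0
for _ in range(40):
    x -= F_mu(x, mu) / dF_mu(x, mu)  # Newton step on the smoothed equation
    mu *= 0.5                        # drive the parameter to zero
print(x)  # -> 1.5
```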
