Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper we study the existence of weakly efficient solutions for some nonsmooth and nonconvex vector optimization problems. We consider problems whose objective functions are defined between infinite- and finite-dimensional Banach spaces. Our results are stated under hypotheses of generalized convexity and make use of variational-like inequalities.

2.
In this work, we combine outer-approximation (OA) and bundle method algorithms for dealing with mixed-integer non-linear programming (MINLP) problems with nonsmooth convex objective and constraint functions. As the convergence analysis of OA methods relies strongly on the differentiability of the involved functions, OA algorithms may fail to solve general nonsmooth convex MINLP problems. In order to obtain OA algorithms that are convergent regardless of the structure of the convex functions, we solve the underlying OA non-linear subproblems by a specialized bundle method that provides the information needed to cut off previously visited (non-optimal) integer points. This property is crucial for proving (finite) convergence of OA algorithms. We illustrate the numerical performance of the given proposal on a class of hybrid robust and chance-constrained problems that involve a random variable with finite support.
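The OA loop this abstract refers to alternates between a piecewise-linear master problem built from linearizations and an NLP subproblem with the integer variables fixed. The following is a toy, self-contained sketch on a 1-D problem: the objective, grid resolution, and all constants are invented for illustration, brute-force search over a grid stands in for the MILP solver, and plain grid minimization replaces the specialized bundle subroutine described above (so repeated integer points are simply revisited rather than cut off).

```python
# Toy outer-approximation (OA) loop for a tiny convex MINLP:
#   min  f(x, y) = (x - 1.5)**2 + |x - y|   over  x in [0, 4], y in {0,1,2,3}.
def f(x, y):
    return (x - 1.5) ** 2 + abs(x - y)

def subgrad(x, y):
    """One subgradient of f at (x, y), jointly in both variables."""
    s = 1.0 if x >= y else -1.0
    return 2.0 * (x - 1.5) + s, -s

X = [4.0 * i / 200 for i in range(201)]    # grid for the continuous variable
Y = [0, 1, 2, 3]                           # small integer set

cuts, best, incumbent = [], float("inf"), None
for _ in range(10):
    def model(x, y):                       # piecewise-linear OA master model
        if not cuts:
            return 0.0
        return max(fk + gx * (x - xk) + gy * (y - yk)
                   for xk, yk, fk, gx, gy in cuts)
    # Master problem: brute force here; a real OA method uses an MILP solver
    # and adds cuts that exclude already-visited integer assignments.
    xm, ym = min(((x, y) for x in X for y in Y), key=lambda p: model(*p))
    # NLP subproblem: fix the integer part, minimize over x (grid search here;
    # the paper's method uses a bundle algorithm instead).
    xs = min(X, key=lambda x: f(x, ym))
    val = f(xs, ym)
    if val < best:
        best, incumbent = val, (xs, ym)
    gx, gy = subgrad(xs, ym)
    cuts.append((xs, ym, val, gx, gy))
```

On this instance the loop finds the optimum f(1.0, 1) = 0.25 within a few iterations; the cuts built from joint (x, y)-subgradients are what let the master problem steer the integer choice.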

3.
The present paper deals with a new type of eigenvalue problems arising in problems involving nonconvex nonsmooth energy functions. They lead to the search for critical points (e.g. local minima) of nonconvex nonsmooth potential functions, which in turn give rise to hemivariational inequalities. For this type of variational expressions the eigenvalue problem is studied here concerning the existence and multiplicity of solutions, by applying a critical point theory appropriate for nonsmooth nonconvex functionals.

4.
In this paper, the authors propose a novel smoothing descent-type algorithm with extrapolation for solving a class of constrained nonsmooth and nonconvex problems, where the nonconvex term is possibly nonsmooth. Their algorithm adopts the proximal gradient algorithm with extrapolation and a safeguarding policy to minimize the smoothed objective function for better practical and theoretical performance. Moreover, the algorithm uses an easily checked rule to update the smoothing parameter, ensuring that any accumulation point of the generated sequence is an (affine-scaled) Clarke stationary point of the original nonsmooth and nonconvex problem. Their experimental results indicate the effectiveness of the proposed algorithm.
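The core loop — an extrapolated gradient step on a smoothed objective, with an easily checked rule shrinking the smoothing parameter — can be sketched on a 1-D example. This is a simplified stand-in for the paper's algorithm (no safeguarding policy, no affine scaling, no constraints); the surrogate sqrt(x² + μ²) for |x|, and all step/momentum constants, are illustrative choices.

```python
import math

# Smoothing gradient method with extrapolation, sketched on
#   minimize F(x) = 0.5*(x - 2)**2 + |x|,   whose minimizer is x* = 1.
# |x| is replaced by the smooth surrogate sqrt(x**2 + mu**2), and mu is
# halved whenever the smoothed gradient becomes small relative to mu.

def grad_smoothed(x, mu):
    return (x - 2.0) + x / math.sqrt(x * x + mu * mu)

x, x_prev = 0.0, 0.0
mu, step, beta = 1.0, 0.5, 0.3          # smoothing, stepsize, extrapolation
for k in range(2000):
    y = x + beta * (x - x_prev)         # extrapolation (momentum) point
    g = grad_smoothed(y, mu)
    x_prev, x = x, y - step * g         # gradient step on the smoothed F
    if abs(g) < mu:                     # easily checked update rule
        mu *= 0.5                       # tighten the smoothing parameter
```

As μ is driven to zero the smoothed stationary points approach the stationary points of the original nonsmooth objective, which is the mechanism behind the accumulation-point guarantee stated above.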

5.
This paper is the first to discuss necessary and sufficient optimality conditions for nondifferentiable multiobjective optimization characterized in terms of fixed points, and studies a fixed-point algorithm for solving such problems together with its global convergence. This provides a new avenue for research on nondifferentiable multiobjective optimization.

6.
Optimization, 2012, 61(12): 1491-1509
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases derivative-free methods are the better (or only) choice since they do not use explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions and the numerical experiments to be presented confirm the usability of the method especially for medium size and large-scale problems.
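The idea of replacing subgradients by finite differences can be made concrete with a minimal sketch. This toy version uses central differences with a shrinking discretization parameter and omits the bundle machinery and limited-memory updates of the actual method; the test function, starting point, and update constants are all invented for illustration.

```python
# Derivative-free descent in the spirit of discrete-gradient methods:
# no subgradient of f is ever evaluated.
def f(x):
    return abs(x[0] - 1.0) + 2.0 * abs(x[1] + 0.5)   # minimizer (1, -0.5)

def discrete_gradient(x, h):
    """Central-difference approximation of a (sub)gradient of f at x."""
    g = []
    for i in range(len(x)):
        e = [h if j == i else 0.0 for j in range(len(x))]
        xp = [a + b for a, b in zip(x, e)]
        xm = [a - b for a, b in zip(x, e)]
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

x, step, h = [3.0, 2.0], 1.0, 0.1
for _ in range(200):
    g = discrete_gradient(x, h)
    trial = [a - step * b for a, b in zip(x, g)]
    if f(trial) < f(x):
        x = trial                   # serious step: accept the trial point
    else:
        step, h = step / 2, h / 2   # null step: refine stepsize and
                                    # the difference parameter together
```

Shrinking h together with the stepsize is what lets the finite-difference estimates behave like subgradients near kinks of f.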

7.
In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption allows us to cover a wide range of problems, including nonsmooth semi-algebraic (or, more generally, tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection method and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems and iterative thresholding procedures for compressive sensing.
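One of the structured settings covered by such abstract results is forward–backward splitting for a smooth-plus-ℓ1 objective, which reduces to iterative soft thresholding. A minimal sketch follows; the matrix A, vector b, regularization weight, and stepsize are illustrative, with the stepsize chosen as 1/L for L = ||AᵀA|| so that the sufficient-decrease property holds.

```python
# Forward-backward splitting (proximal gradient) for
#   min_x 0.5*||A x - b||^2 + lam*||x||_1.
A = [[1.0, 0.0], [0.0, 2.0]]
b = [3.0, 4.0]
lam, t = 1.0, 0.25                      # t = 1/L, with L = 4 for this A

def grad(x):                            # gradient of the smooth part, A^T(Ax-b)
    r = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
    return [sum(A[i][j] * r[i] for i in range(2)) for j in range(2)]

def soft(v, tau):                       # proximal map of tau*|.|
    return max(abs(v) - tau, 0.0) * (1.0 if v > 0 else -1.0)

x = [0.0, 0.0]
for _ in range(500):
    y = [xi - t * gi for xi, gi in zip(x, grad(x))]   # forward (gradient) step
    x = [soft(yi, t * lam) for yi in y]               # backward (proximal) step
```

For this diagonal A the limit can be checked by hand: the optimality conditions give x* = (2, 1.75).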

8.
Great strides have been made in nonlinear programming (NLP) in the last 5 years. In smooth NLP, there are now several reliable and efficient codes capable of solving large problems. Most of these implement GRG or SQP methods, and new software using interior point algorithms is under development. NLP software is now much easier to use, as it is interfaced with many modeling systems, including MSC/NASTRAN and ANSYS for structural problems, GAMS and AMPL for general optimization, Matlab and Mathcad for general mathematical problems, and the widely used Microsoft Excel spreadsheet. For mixed-integer problems, branch-and-bound and outer approximation codes are now available and are coupled to some of the above modeling systems, while search methods like tabu search and genetic algorithms permit combinatorial, nonsmooth, and nonconvex problems to be attacked.

9.
This paper is concerned with optimality for multi-objective programming problems with nonsmooth and nonconvex (but directionally differentiable) objective and constraint functions. The main results are Kuhn-Tucker type necessary conditions for properly efficient solutions and weakly efficient solutions. Our proper efficiency is a natural extension of the Kuhn-Tucker one to the nonsmooth case. Some sufficient conditions for an efficient solution to be proper are also given. As an application, we derive optimality conditions for multi-objective programming problems including extremal-value functions. This work was done while the author was visiting George Washington University, Washington, DC.

10.
Optimization, 2012, 61(4): 333-347
Necessary and sufficient conditions are established for properly efficient solutions of a class of nonsmooth nonconvex variational problems with multiple fractional objective functions and nonlinear inequality constraints. Based on these proper efficiency criteria, two multiobjective dual problems are constructed and appropriate duality theorems are proved. These proper efficiency and duality results also contain, as special cases, similar results for constrained variational problems with multiple fractional and conventional objective functions, which are particular cases of the main variational problem considered in this paper.

11.
N. Karmitsa, Optimization, 2016, 65(8): 1599-1614
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases, the usual subgradient-based optimization methods cannot be used, but derivative-free methods are applicable since they do not rely on explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. The convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and the numerical experiments confirm the usability and efficiency of the method, especially in the case of large-scale problems.

12.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems where the objective function is a summation of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed as well as efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes for improving the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.
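The smooth-plus-convex splitting with a cheap quadratic model can be sketched in one dimension. This is only a hedged illustration of the general idea, not the paper's method: the model below uses a simple 1/(2Δ) quadratic in place of abstract descent directions, there is no truncation step, and the test problem and all ratio/radius constants are invented.

```python
# Trust-region-flavoured proximal step for f(x) = s(x) + phi(x) with
# s smooth nonconvex and phi convex nonsmooth.
def s(x):  return (x * x - 1.0) ** 2          # smooth, nonconvex part
def ds(x): return 4.0 * x * (x * x - 1.0)
def phi(x): return 0.5 * abs(x)               # convex, nonsmooth part
def soft(v, tau): return max(abs(v) - tau, 0.0) * (1.0 if v > 0 else -1.0)

x, delta = 2.0, 1.0                           # iterate and trust-region radius
for _ in range(200):
    # Model: s(x) + ds(x)*d + d^2/(2*delta) + phi(x + d); its minimizer is a
    # soft-thresholding step, so the subproblem is cheap, as in the abstract.
    d = soft(x - delta * ds(x), 0.5 * delta) - x
    ared = (s(x) + phi(x)) - (s(x + d) + phi(x + d))          # actual decrease
    pred = -(ds(x) * d + d * d / (2.0 * delta) + phi(x + d) - phi(x))
    if pred > 0 and ared >= 0.25 * pred:      # good model: accept, grow radius
        x, delta = x + d, min(2.0 * delta, 1.0)
    else:                                     # poor model: shrink radius
        delta *= 0.5
```

The accept/shrink test on the ratio of actual to predicted decrease is what keeps the quadratic model honest on the nonconvex part; the iterate settles at a Clarke stationary point of f.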

13.
This paper presents, within a unified framework, a potentially powerful canonical dual transformation method and an associated generalized duality theory in nonsmooth global optimization. It is shown that by the use of this method, many nonsmooth/nonconvex constrained primal problems in $\mathbb{R}^n$ can be reformulated into certain smooth/convex unconstrained dual problems in $\mathbb{R}^m$ with $m \le n$ and without duality gap, and some NP-hard concave minimization problems can be transformed into unconstrained convex minimization dual problems. The extended Lagrange duality principles proposed recently in finite deformation theory are generalized so as to be suitable for solving a large class of nonconvex and nonsmooth problems. The resulting generalized triality theory can be used to establish nice theoretical results and to develop efficient alternative algorithms for robust computations.

14.
Minimax programming problems involving locally Lipschitz (Φ, ρ)-invex functions are considered. Parametric and non-parametric necessary and sufficient optimality conditions for a class of nonsmooth minimax programming problems are obtained under the nondifferentiable (Φ, ρ)-invexity assumption imposed on the objective and constraint functions. When the sufficient conditions are utilized, parametric and non-parametric dual problems in the sense of Mond-Weir and Wolfe may be formulated, and duality results are derived for the considered nonsmooth minimax programming problem. With reference to these functions, we extend some results of optimality and duality to a larger class of nonsmooth minimax programming problems.

15.
Sufficient conditions for solutions of nonsmooth nonconvex multiobjective programming problems
刘三阳, 《应用数学》 1991, 4(1): 58-63
The sufficiency of Kuhn-Tucker type conditions has long been a notable question in optimization theory. In this paper, several nonconvexity concepts are introduced for nonsmooth functions. The sufficiency of Kuhn-Tucker type and Fritz John type conditions in nonsmooth nonconvex multiobjective programming is then discussed, and a series of sufficient conditions is established under very weak assumptions.

16.
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles that provide inexact information. The errors in function and subgradient evaluations might be unknown, but are merely bounded. To handle the nonconvexity, we first use the redistributed idea, and face additional difficulties by allowing inexactness in the available information. We further examine a modified improvement function to deal with a series of difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experiences are provided.
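Bundle methods build on the cutting-plane idea: accumulate subgradient linearizations and minimize the resulting piecewise-linear model. The following Kelley-style sketch shows only that convex core on an invented 1-D problem, with the model minimized by enumerating cut intersections; it reproduces neither the proximal stabilization nor the inexact oracles and redistributed handling of nonconvexity that the abstract describes.

```python
# Kelley cutting-plane loop for min f(x) over [-5, 5],
# with f(x) = |x - 1| + 0.2*x**2 (convex; minimizer x* = 1, f* = 0.2).
def f(x):
    return abs(x - 1.0) + 0.2 * x * x

def subgrad(x):                     # one subgradient of f at x
    return (1.0 if x >= 1.0 else -1.0) + 0.4 * x

lo, hi = -5.0, 5.0
cuts = []                           # cuts (a, g) with a + g*x <= f(x) for all x
best_x, best_f = lo, f(lo)
xk = lo
for _ in range(100):
    fx, gx = f(xk), subgrad(xk)
    if fx < best_f:
        best_x, best_f = xk, fx
    cuts.append((fx - gx * xk, gx))
    # Minimize the piecewise-linear model max_i (a_i + g_i*x) over [lo, hi]:
    # in 1-D the minimum sits at an endpoint or where two cuts intersect.
    candidates = [lo, hi]
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            (a1, g1), (a2, g2) = cuts[i], cuts[j]
            if g1 != g2:
                xc = (a2 - a1) / (g1 - g2)
                if lo <= xc <= hi:
                    candidates.append(xc)
    model = lambda x: max(a + g * x for a, g in cuts)
    xk = min(candidates, key=model)
```

A proximal bundle method adds a quadratic term centred at the best point to this model, which tames the instability of pure cutting planes; handling inexact oracle values on top of that is the abstract's contribution.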

17.
Optimization, 2012, 61(7): 1439-1469
In the article we use abstract convexity theory in order to unify and generalize many different concepts of nonsmooth analysis. We introduce the concepts of abstract codifferentiability, abstract quasidifferentiability and abstract convex (concave) approximations of a nonsmooth function mapping a topological vector space to an order complete topological vector lattice. We study basic properties of these notions, construct an elaborate calculus of abstract codifferentiable functions and discuss the continuity of the abstract codifferential. We demonstrate that many classical concepts of nonsmooth analysis, such as subdifferentiability and quasidifferentiability, are particular cases of the concepts of abstract codifferentiability and abstract quasidifferentiability. We also show that abstract convex and abstract concave approximations are a very convenient tool for the study of nonsmooth extremum problems. We use these approximations in order to obtain various necessary optimality conditions for nonsmooth nonconvex optimization problems with abstract codifferentiable or abstract quasidifferentiable objective functions and constraints. Then, we demonstrate how these conditions can be transformed into simpler and more constructive conditions in some particular cases.

18.
Optimization, 2012, 61(6): 945-962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and the dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.

19.
In this paper, we investigate the convergence of the proximal iteratively reweighted algorithm for a class of nonconvex and nonsmooth problems. Such problems include numerous models in signal processing and machine learning research. Two extensions of the algorithm are also studied. We provide a unified scheme for these three algorithms. Using the Kurdyka–Łojasiewicz property, we prove that the unified algorithm globally converges to a critical point of the objective function.
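A representative instance of the proximal iteratively reweighted scheme is the nonconvex ℓp-regularized least-squares model: each outer step replaces |xᵢ|ᵖ by a weighted ℓ1 surrogate and then takes one proximal gradient step. The data b and the parameters lam, p, eps, and the stepsize below are illustrative choices, not taken from the paper.

```python
# Proximal iteratively reweighted l1 sketch for
#   min_x 0.5*||x - b||^2 + lam * sum_i |x_i|**p,   0 < p < 1.
b = [3.0, 0.2]
lam, p, eps, t = 1.0, 0.5, 0.1, 1.0

def soft(v, tau):                   # proximal map of tau*|.|
    return max(abs(v) - tau, 0.0) * (1.0 if v > 0 else -1.0)

x = list(b)
for _ in range(100):
    # Reweighting: linearize |x_i|**p around the current iterate.
    w = [p * (abs(xi) + eps) ** (p - 1.0) for xi in x]
    y = [xi - t * (xi - bi) for xi, bi in zip(x, b)]      # gradient step
    x = [soft(yi, t * lam * wi) for yi, wi in zip(y, w)]  # weighted prox step
```

The nonconvexity shows in the outcome: the large coordinate is only mildly shrunk while the small one is driven exactly to zero, which is the sparsity-promoting behaviour these models are used for.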

20.
Clusterwise regression consists of finding a number of regression functions, each approximating a subset of the data. In this paper, a new approach for solving clusterwise linear regression problems is proposed based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. The algorithm incrementally divides the whole data set into groups that can each be easily approximated by one linear regression function. A special procedure is introduced to generate a good starting point for solving the global optimization problems arising at each iteration of the incremental algorithm. Such an approach allows one to find a global or near-global solution to the problem when the data sets are sufficiently dense. The algorithm is compared with the multistart Späth algorithm on several publicly available data sets for regression analysis.
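The classical Späth-style alternation underlying clusterwise regression — assign each point to the line that fits it best, then refit each line by least squares — can be sketched directly. The data set and the two initial lines below are invented for illustration; the paper's incremental construction and its nonsmooth nonconvex starting-point procedure are not reproduced here.

```python
# Alternating assignment/refit for clusterwise linear regression with 2 lines.
data = [(0, 0), (1, 2), (2, 4), (3, 6),      # drawn from y = 2x
        (0, 5), (1, 4), (2, 3), (3, 2)]      # drawn from y = 5 - x

def fit(points):
    """Least-squares line y = a + b*x through the given points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    b = sxy / sxx if sxx else 0.0
    return my - b * mx, b

lines = [(0.0, 1.5), (4.0, 0.0)]             # two initial regression lines
for _ in range(10):
    clusters = [[], []]
    for x, y in data:                        # assignment step
        errs = [(y - (a + b * x)) ** 2 for a, b in lines]
        clusters[errs.index(min(errs))].append((x, y))
    lines = [fit(c) if c else lines[k]       # refit step (empty cluster keeps
             for k, c in enumerate(clusters)]  # its previous line)
```

On this dense, well-separated toy set the alternation recovers the two generating lines exactly; the nonconvexity of the underlying objective is precisely why the paper invests in good incremental starting points for harder data.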


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号