Similar Documents
20 similar documents found.
1.
This paper studies the relationships and properties of two classes of approximately convex sets. First, examples show that neither of the two classes contains the other. Second, under the approximately convex (nearly convex) set condition, it is proved that under certain assumptions the epigraph of a function being approximately convex is equivalent to its being convex. The paper also considers an equivalent characterization of approximately convex functions in terms of their epigraphs being approximately convex sets and the necessity that the level sets of an approximately convex function be approximately convex sets, and uses examples to show that an approximately convex function and its level sets being approximately convex ...

2.
The alternating minimization algorithm (AMA) was first proposed in [SIAM J. Control Optim., 1991, 29(1): 119-138] and can be used to minimize the sum of a strongly convex function and a convex function. This paper applies AMA directly to minimizing the sum of a strongly convex function and a weakly convex function. Under the assumption that the modulus of strong convexity exceeds the modulus of weak convexity, we prove that the sequence generated by AMA converges globally to a solution of the optimization problem, and that the convergence rate of the generated sequence is linear when one of the functions in the problem is smooth.
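
As a concrete illustration, the sketch below runs Tseng's AMA on a toy splitting problem, min (1/2)||x - a||^2 + lam*||z||_1 subject to x - z = 0; the problem, the penalty parameter and the iteration count are our own illustrative choices, not the strongly/weakly convex setting analyzed in the paper above.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ama_toy(a, lam=0.1, rho=1.0, iters=200):
    # Tseng's AMA for: min 0.5*||x - a||^2 + lam*||z||_1  s.t.  x - z = 0.
    # The x-step uses only the multiplier (no quadratic penalty), which is
    # what distinguishes AMA from ADMM; the strongly convex first term
    # keeps that step well defined.
    p = np.zeros_like(a)      # multiplier for the constraint x - z = 0
    z = np.zeros_like(a)
    for _ in range(iters):
        x = a - p                                   # argmin_x 0.5*||x-a||^2 + <p, x>
        z = soft_threshold(x + p / rho, lam / rho)  # argmin_z lam*||z||_1 - <p, z> + (rho/2)*||x-z||^2
        p = p + rho * (x - z)                       # multiplier (dual ascent) update
    return x, z

if __name__ == "__main__":
    a = np.array([1.5, -0.05, 0.3])
    x, z = ama_toy(a)
    print(x, z)   # both approach soft_threshold(a, 0.1), the minimizer of the coupled problem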

3.
This paper considers planar location problems with rectilinear distance and barriers where the objective function is any convex, nondecreasing function of distance. Such problems have a nonconvex feasible region and a nonconvex objective function. Based on an equivalent problem with modified barriers, derived in a companion paper [3], the nonconvex feasible set is partitioned into a network and rectangular cells. The rectangular cells are further partitioned into a polynomial number of convex subcells, called convex domains, on which the distance function, and hence the objective function, is convex. Then the problem is solved over the network and convex domains for an optimal solution. Bounds are given that reduce the number of convex domains to be examined. The number of convex domains is bounded above by a polynomial in the size of the problem.

4.
Generalized polyhedral convex sets, generalized polyhedral convex functions on locally convex Hausdorff topological vector spaces, and related constructions such as the sum of sets, the sum of functions, the directional derivative, the infimal convolution, the normal cone, the conjugate function, and the subdifferential are studied thoroughly in this paper. Among other things, we show how a generalized polyhedral convex set can be characterized through the finiteness of the number of its faces. In addition, it is proved that the infimal convolution of a generalized polyhedral convex function and a polyhedral convex function is a polyhedral convex function. The obtained results can be applied to scalar optimization problems described by generalized polyhedral convex sets and generalized polyhedral convex functions.

5.
A convex function defined on an open convex set of a finite-dimensional space is known to be continuous at every point of this set. In fact, a convex function has a strengthened continuity property. The notion of strong continuity is introduced in this study to show that a convex function has this property. The proof is based on only the definition of convexity and Jensen’s inequality. The definition of strong continuity involves a constant (the constant of strong continuity). An unimprovable value of this constant is given in the case of convex functions. The constant of strong continuity depends, in particular, on the form of a norm introduced in the space of arguments of a convex function. The polyhedral norm is of particular interest. It is straightforward to calculate the constant of strong continuity when it is used. This requires a finite number of values of the convex function.
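
For reference, the finite form of Jensen's inequality used in that proof is the classical one: for a convex function $f$ and weights $\lambda_i \ge 0$ with $\sum_{i=1}^{m}\lambda_i = 1$,

$$f\Bigl(\sum_{i=1}^{m}\lambda_i x_i\Bigr)\;\le\;\sum_{i=1}^{m}\lambda_i f(x_i).$$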

6.
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated by using some examples.
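
The sketch below simulates a generic projection-type neural dynamic for box-constrained convex minimization by forward Euler integration; it only illustrates the general idea of such network models, and the dynamics, parameters and example problem are our assumptions rather than the specific network proposed in the paper.

import numpy as np

def projection_network(Q, c, lo, hi, x0, alpha=0.5, dt=0.05, steps=2000):
    # Euler discretization of the ODE  dx/dt = -x + P_Omega(x - alpha*grad f(x))
    # for f(x) = 0.5*x^T Q x + c^T x over the box Omega = [lo, hi]^n.
    # An equilibrium satisfies x = P_Omega(x - alpha*grad f(x)), i.e. the KKT condition.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        grad = Q @ x + c
        proj = np.clip(x - alpha * grad, lo, hi)  # projection onto the box
        x = x + dt * (proj - x)                   # one Euler step of the dynamics
    return x

if __name__ == "__main__":
    Q = np.array([[2.0, 0.5], [0.5, 1.0]])        # positive definite => strictly convex objective
    c = np.array([-1.0, -1.0])
    print(projection_network(Q, c, lo=0.0, hi=1.0, x0=np.zeros(2)))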

7.
The goal of this Note is to prove results in optimization of two integer variables which correspond to fundamental results in convex analysis of real variables, viz. that a local minimum of a convex function is global; that the marginal function of a convex function is convex; and that two disjoint convex sets can be separated by a hyperplane. To cite this article: C.O. Kiselman, C. R. Acad. Sci. Paris, Ser. I 346 (2008).

8.
Karmarkar's potential function is quasi-convex, but not convex. This note investigates the multiplicative version of the potential function, and shows that it is not necessarily convex in general, but is strictly convex when the corresponding feasible region is bounded. This implies that the multiplicative version of the potential function in Karmarkar's algorithm is convex, since it works on a simplex.
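
For context, and in our own notation (which may differ from the note's), Karmarkar's additive potential on the simplex and its multiplicative version are usually written as

$$\phi(x)=\sum_{j=1}^{n}\ln\frac{c^{\mathsf T}x}{x_j}=n\ln\bigl(c^{\mathsf T}x\bigr)-\sum_{j=1}^{n}\ln x_j,\qquad \Phi(x)=e^{\phi(x)}=\frac{(c^{\mathsf T}x)^{n}}{\prod_{j=1}^{n}x_j}.$$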

9.
Neyman-Pearson classification has been studied in several earlier articles, but they all worked in classes of indicator functions with the indicator function as the loss function, which makes the computation difficult. This paper investigates Neyman-Pearson classification with a convex loss function over an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function yields the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex-loss risk, expressed in terms of Rademacher averages, is studied and produces a tight PAC bound for NP-ERM with a convex loss function.
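
Schematically, and in notation not taken from the paper, the Neyman-Pearson classification problem constrains one class-conditional error while minimizing the other,

$$\min_{f\in\mathcal F}\;R_{1}(f)\quad\text{subject to}\quad R_{0}(f)\le\alpha,$$

where $R_0$ and $R_1$ are the two class-conditional error probabilities and $\alpha\in(0,1)$ is the prescribed bound; NP-ERM replaces both by empirical estimates, and the convex-loss variant replaces the indicator loss inside them by a convex surrogate.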

10.

A function f is said to be iteratively convex if f possesses convex iterative roots of all orders. We give several constructions of iteratively convex diffeomorphisms and explain the phenomenon that the non-existence of convex iterative roots is a typical property of convex functions. We also show that a slight perturbation of an iteratively convex function can cause the loss of iterative convexity. However, every convex function can be approximated by iteratively convex functions.

11.
This paper surveys the similarities and differences among the definitions of a convex function given in more than ten widely used Chinese calculus textbooks, examines the relationship between the two original definitions of a convex function, presents three important inequalities satisfied by convex functions, and proves two important properties that convex functions enjoy within single-variable differential calculus, namely continuity and one-sided differentiability. Using the Hausdorff maximality theorem from real analysis, an example is given of a function that satisfies the first original definition but not the second.
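
The two textbook definitions at issue are presumably convexity with arbitrary weights and midpoint (Jensen) convexity,

$$f\bigl(\lambda x+(1-\lambda)y\bigr)\le\lambda f(x)+(1-\lambda)f(y)\ \ (0\le\lambda\le 1)\qquad\text{and}\qquad f\Bigl(\frac{x+y}{2}\Bigr)\le\frac{f(x)+f(y)}{2};$$

a Hamel-basis type construction obtained from Hausdorff's maximality theorem yields a discontinuous additive function that is midpoint convex but not convex, which is the kind of separating example described above.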

12.
This paper addresses an algorithm for minimizing the product of two nonnegative convex functions over a convex set. It is shown that the global minimum of this nonconvex problem can be obtained by solving a sequence of convex programming problems. The basic idea of this algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a branch-and-bound algorithm using an underestimating function. Computational results indicate that our algorithm is efficient when the objective function is the product of a linear function and a quadratic function and the constraints are linear. An extension of our algorithm for minimizing the sum of a convex function and a product of two convex functions is also discussed.
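
One standard way to reduce such a product to a family of convex programs (given here only as an illustration; it is not necessarily the authors' higher-dimensional embedding) is the identity, valid whenever $f_1(x)>0$ and as an infimum when $f_1(x)=0$,

$$f_1(x)\,f_2(x)=\min_{t>0}\ \frac12\Bigl(t\,f_1(x)^{2}+\frac{1}{t}\,f_2(x)^{2}\Bigr),$$

where for each fixed $t>0$ the inner minimand is convex in $x$ (the square of a nonnegative convex function is convex), so the outer minimization becomes a one-dimensional search over convex subproblems.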

13.
We analyze the least increment function, a convex function of n variables associated to an n-person cooperative game. Another convex representation of cooperative games, the indirect function, has previously been studied. At every point the least increment function is greater than or equal to the indirect function, and both functions coincide in the case of convex games, but an example shows that they do not necessarily coincide if the game is totally balanced but not convex. We prove that the least increment function of a game contains all the information of the game if and only if the game is totally balanced. We also give necessary and sufficient conditions for a function to be the least increment function of a game as well as an expression for the core of a game in terms of its least increment function.

14.
Subgradient projectors play an important role in optimization and for solving convex feasibility problems. For every locally Lipschitz function, we can define a subgradient projector via generalized subgradients even if the function is not convex. The paper consists of three parts. In the first part, we study basic properties of subgradient projectors and give characterizations of when a subgradient projector is a cutter, a local cutter, or a quasi-nonexpansive mapping. We present global and local convergence analyses of subgradient projectors. Many examples are provided to illustrate the theory. In the second part, we investigate the relationship between the subgradient projector of a prox-regular function and the subgradient projector of its Moreau envelope. We also characterize when a mapping is the subgradient projector of a convex function. In the third part, we focus on linearity properties of subgradient projectors. We show that, under appropriate conditions, a linear operator is a subgradient projector of a convex function if and only if it is a convex combination of the identity operator and a projection operator onto a subspace. In general, neither a convex combination nor a composition of subgradient projectors of convex functions is a subgradient projector of a convex function.
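
A minimal sketch of the standard (Polyak-type) subgradient projector onto a 0-sublevel set, the object the paper generalizes to locally Lipschitz functions, is given below; the function names and the example are our own.

import numpy as np

def subgradient_projector(f, subgrad, x):
    # Returns x itself if f(x) <= 0; otherwise steps from x along a
    # subgradient s so that the linearization f(x) + <s, y - x> vanishes
    # at the returned point y.
    fx = f(x)
    if fx <= 0.0:
        return x
    s = subgrad(x)
    return x - (fx / np.dot(s, s)) * s

if __name__ == "__main__":
    # Example: f(x) = ||x||^2 - 1, whose 0-sublevel set is the unit ball.
    f = lambda x: float(np.dot(x, x) - 1.0)
    subgrad = lambda x: 2.0 * x
    x = np.array([3.0, 4.0])
    for _ in range(25):             # repeated projection drives x toward the ball
        x = subgradient_projector(f, subgrad, x)
    print(x, f(x))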

15.
In this paper we consider cardinality-constrained convex programs that minimize a convex function subject to a cardinality constraint and other linear constraints. This class of problems has found many applications, including portfolio selection, subset selection and compressed sensing. We propose a successive convex approximation method for this class of problems in which the cardinality function is first approximated by a piecewise linear DC function (difference of two convex functions) and a sequence of convex subproblems is then constructed by successively linearizing the concave terms of the DC function. Under some mild assumptions, we establish that any accumulation point of the sequence generated by the method is a KKT point of the DC approximation problem. We show that the basic algorithm can be refined by adding strengthening cuts in the subproblems. Finally, we report some preliminary computational results on cardinality-constrained portfolio selection problems.
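
One common piecewise-linear DC surrogate of the cardinality function, a natural candidate for the approximation described above (the paper's exact choice may differ), is the capped-$\ell_1$ function with parameter $\varepsilon>0$:

$$\|x\|_0\;\approx\;\sum_{i=1}^{n}\min\Bigl\{\frac{|x_i|}{\varepsilon},1\Bigr\}\;=\;\frac{1}{\varepsilon}\sum_{i=1}^{n}\Bigl(|x_i|-\max\{|x_i|-\varepsilon,0\}\Bigr),$$

in which both $\sum_i|x_i|$ and $\sum_i\max\{|x_i|-\varepsilon,0\}$ are convex and piecewise linear, so the concave part can be linearized successively as in the method sketched above.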

16.
We consider a class of convex programming problems whose objective function is given as a linear function plus a convex function whose arguments are linear functions of the decision variables and whose feasible region is a polytope. We show that there exists an optimal solution to this class of problems on a face of the constraint polytope of dimension not more than the number of arguments of the convex function. Based on this result, we develop a method to solve this problem that is inspired by the simplex method for linear programming. It is shown that this method terminates in a finite number of iterations in the special case that the convex function has only a single argument. We then use this insight to develop a second algorithm that solves the problem in a finite number of iterations for an arbitrary number of arguments in the convex function. A computational study illustrates the efficiency of the algorithm and suggests that the average-case performance of these algorithms is a polynomial of low order in the number of decision variables. The work of T. C. Sharkey was supported by a National Science Foundation Graduate Research Fellowship. The work of H. E. Romeijn was supported by the National Science Foundation under Grant No. DMI-0355533.

17.
We present a new class of convex underestimators for arbitrarily nonconvex and twice continuously differentiable functions. The underestimators are derived by augmenting the original nonconvex function by a nonlinear relaxation function. The relaxation function is a separable convex function that involves the sum of univariate parametric exponential functions. An efficient procedure that finds the appropriate values for those parameters is developed. This procedure uses interval arithmetic extensively in order to verify whether the new underestimator is convex. For arbitrarily nonconvex functions it is shown that these convex underestimators are tighter than those generated by the αBB method. Computational studies complemented with geometrical interpretations demonstrate the potential benefits of the proposed improved convex underestimators.
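
For comparison, the classical $\alpha$BB underestimator of a twice continuously differentiable $f$ on a box $[x^{L},x^{U}]$ augments $f$ with a separable quadratic rather than an exponential relaxation term:

$$L_{\alpha}(x)=f(x)+\sum_{i=1}^{n}\alpha_i\,(x_i^{L}-x_i)(x_i^{U}-x_i),\qquad \alpha_i\ge 0,$$

where the $\alpha_i$ are chosen large enough (for instance from interval bounds on the Hessian of $f$) to make $L_{\alpha}$ convex; each added term is nonpositive on the box, so $L_{\alpha}$ underestimates $f$ there.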

18.
This paper addresses an algorithm for minimizing the sum of a convex function and a product of two linear functions over a polytope. It is shown that this nonconvex minimization problem can be solved by solving a sequence of convex programming problems. The basic idea of this algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a parametric programming (path following) approach. It is also shown that the same idea can be applied to a generalized linear fractional programming problem whose objective function is the sum of a convex function and a linear fractional function.
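
Concretely, and only as an illustration in our own notation, writing the product term as $(c_1^{\mathsf T}x+d_1)(c_2^{\mathsf T}x+d_2)$ one may fix its second factor at a level $\xi$ and solve the parametric convex program

$$\varphi(\xi)=\min_{x\in P}\Bigl\{f(x)+\xi\,(c_1^{\mathsf T}x+d_1)\;:\;c_2^{\mathsf T}x+d_2=\xi\Bigr\},$$

so that a global minimizer of the original nonconvex problem is recovered by tracing the one-dimensional path $\xi\mapsto\varphi(\xi)$, which is the kind of path-following idea referred to above.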

19.
Shang, S. Acta Mathematica Hungarica, 2021, 164(1): 265-281.
We prove that if $X^{*}$ is strictly convex, then a convex function $f$ is coercive and b-Lipschitzian iff there exist two convex function sequences...

20.
We show that a locally Lipschitz homeomorphism is semismooth at a given point if and only if its inverse is semismooth at the image point. We present a sufficient condition for the semismoothness of solutions to generalized equations over cone reducible (nonpolyhedral) convex sets. We prove that the semismoothness of solutions to the Moreau-Yosida regularization of a lower semicontinuous proper convex function is implied by the semismoothness of the metric projector over the epigraph of the convex function. This paper is dedicated to Terry Rockafellar on the occasion of his seventieth birthday.
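
For reference, the Moreau-Yosida regularization (Moreau envelope) and proximal mapping of a lower semicontinuous proper convex function $f$ mentioned above are

$$e_{\lambda}f(x)=\min_{y}\Bigl\{f(y)+\tfrac{1}{2\lambda}\|y-x\|^{2}\Bigr\},\qquad \operatorname{prox}_{\lambda f}(x)=\arg\min_{y}\Bigl\{f(y)+\tfrac{1}{2\lambda}\|y-x\|^{2}\Bigr\},\qquad\lambda>0,$$

with $\nabla e_{\lambda}f(x)=\bigl(x-\operatorname{prox}_{\lambda f}(x)\bigr)/\lambda$, so smoothness and semismoothness questions about the regularization reduce to properties of the proximal mapping, which the result above relates to the metric projector onto the epigraph of $f$.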

