Similar Articles
1.
We consider the problem of minimizing a smooth convex objective function subject to the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert in our algorithm an inertial term, which is able to take advantage of the history of the iterates. We show weak convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. We also prove convergence for the objective function values to the optimal objective value. The convergence analysis carried out in this paper relies on the celebrated Opial Lemma and generalized Fejér monotonicity techniques. We illustrate the functionality of the method via a numerical experiment addressing image classification via support vector machines.
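The update described in the abstract combines a gradient step on a penalized objective with an inertial extrapolation. A minimal numerical sketch, assuming a fixed penalty parameter `beta` and step size `lam` (a toy instance only; the paper couples these parameters to the Fenchel-conjugate condition):

```python
import numpy as np

# Toy instance: minimize f(x) = 0.5*||x - p||^2 over argmin g, where
# g(x) = 0.5*(c @ x - d)^2, i.e. the hyperplane {x : c @ x = d}.
p = np.array([3.0, 1.0])
c, d = np.array([1.0, 1.0]), 1.0

grad_f = lambda x: x - p
grad_g = lambda x: (c @ x - d) * c

beta = 100.0                        # penalty parameter (held fixed here)
lam = 1.0 / (1.0 + beta * (c @ c))  # step <= 1/Lipschitz(grad(f + beta*g))
alpha = 0.2                         # inertial coefficient

x_prev = x = np.zeros(2)
for _ in range(20000):
    y = x + alpha * (x - x_prev)    # inertial step using the iterate history
    x_prev, x = x, y - lam * (grad_f(y) + beta * grad_g(y))

# x settles near the projection of p onto the hyperplane, (1.5, -0.5),
# up to an O(1/beta) penalization bias.
```

With a growing penalty schedule, as in the paper, the bias term vanishes in the limit.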

2.
For an optimization problem whose objective is the composition of a convex and componentwise increasing function with a convex vector function, we determine a dual problem by means of the conjugacy approach based on the perturbation theory. Necessary and sufficient optimality conditions are derived using strong duality. Furthermore, as a special case of this problem, we consider a location problem where the “distances” are measured by gauges of closed convex sets. We prove that the geometric characterization of the set of optimal solutions for this location problem given by Hinojosa and Puerto in a recently published paper can be obtained via the presented dual problem. Finally, the Weber and the minmax location problems with gauges are given as applications.

3.
For a convex program in a normed vector space with the objective function admitting the Gateaux derivative at an optimal solution, we show that the solution set consists of the feasible points lying in the hyperplane whose normal vector equals the Gateaux derivative. For a general continuous convex program, a feasible point is an optimal solution iff it lies in a hyperplane with a normal vector belonging to the subdifferential of the objective function at this point. In several cases, the solution set of a variational inequality problem is shown to coincide with the solution set of a convex program with its dual gap function as objective function, while the mapping involved can be used to express the above normal vectors. The research was supported by the National Science Council of the Republic of China. The authors are grateful to the referees for valuable comments and constructive suggestions.

4.
With this note we bring back into attention a vector dual problem neglected by the contributions that have recently announced the successful healing of the trouble encountered by the classical duals to the classical linear vector optimization problem. Unlike the mentioned works, which are of set-valued nature, this vector dual problem has a vector objective function. Weak, strong and converse duality for this “new-old” vector dual problem are proven, and we also investigate its connections to other vector duals considered in the same framework in the literature. We also show that the efficient solutions of the classical linear vector optimization problem coincide with its properly efficient solutions (in any sense) when the image space is partially ordered by a nontrivial pointed closed convex cone.

5.
We define a version of the Inverse Linear Programming problem that we call Linear Programming System Identification. This version of the problem seeks to identify both the objective function coefficient vector and the constraint matrix of a linear programming problem that best fit a set of observed vector pairs. One vector of each pair records the actual decisions, which we call outputs; these are regarded as approximations of optimal decision vectors. The other vector consists of the inputs or resources actually used to produce the corresponding outputs. We propose an algorithm for approximating the maximum likelihood solution. The major limitation of the method is that it requires computing exact volumes of convex polytopes. A numerical illustration is given for simulated data.

6.
A method for solving the following inverse linear programming (LP) problem is proposed. For a given LP problem and one of its feasible vectors, it is required to adjust the objective function vector as little as possible so that the given vector becomes optimal. The closeness of vectors is estimated by means of the Euclidean vector norm. The inverse LP problem is reduced to a problem of unconstrained minimization for a convex piecewise quadratic function. This minimization problem is solved by means of the generalized Newton method.
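The paper's route goes through a piecewise quadratic function and a generalized Newton method. As a simplified stand-in (not the paper's algorithm), the KKT view gives the same answer for a small instance: for the LP min c@x subject to A@x >= b, a feasible x0 is optimal exactly when the cost vector lies in the cone spanned by the rows of A active at x0, so the Euclidean-nearest adjusted cost is a nonnegative least-squares (NNLS) projection onto that cone:

```python
import numpy as np
from scipy.optimize import nnls

# LP:  min c @ x  s.t.  A @ x >= b.  x0 is feasible but not optimal for c.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, -1.0])
c = np.array([1.0, -1.0])
x0 = np.array([0.0, 0.0])

active = np.isclose(A @ x0, b)      # constraints tight at x0
y, _ = nnls(A[active].T, c)         # min ||A_act.T @ y - c||  with  y >= 0
c_adj = A[active].T @ y             # nearest cost vector making x0 optimal
```

Here only the first two constraints are active, and the projection replaces c = (1, -1) by c_adj = (1, 0), the minimal Euclidean adjustment under which x0 is optimal.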

7.
This paper considers a dual problem for nondifferentiable convex programming that uses the subdifferential of the perturbation function of the objective function and the cone of outer normal vectors; it differs from known results. The corresponding duality properties are given.

8.
We present a primal-dual row-action method for the minimization of a convex function subject to general convex constraints. Constraints are used one at a time, no changes are made in the constraint functions and their Jacobian matrix (thus, the row-action nature of the algorithm), and at each iteration a subproblem is solved consisting of minimization of the objective function subject to one or two linear equations. The algorithm generates two sequences: one of them, called primal, converges to the solution of the problem; the other one, called dual, approximates a vector of optimal KKT multipliers for the problem. We prove convergence of the primal sequence for general convex constraints. In the case of linear constraints, we prove that the primal sequence converges at least linearly and obtain as a consequence the convergence of the dual sequence. The research of the first author was partially supported by CNPq Grant No. 301280/86.
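A classic member of this primal-dual row-action family (a hedged sketch of Hildreth's method, not the paper's exact algorithm) handles min 0.5*||x - z||^2 subject to A@x <= b by visiting one row of A per step while maintaining a dual iterate:

```python
import numpy as np

# Project z onto {x : A @ x <= b} one constraint row at a time.
z = np.array([2.0, 2.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])

x = z.copy()                  # primal sequence
lam = np.zeros(len(b))        # dual sequence (KKT multipliers), lam >= 0
for _ in range(50):           # cycles through the rows
    for i in range(len(b)):
        a = A[i]
        # dual coordinate ascent on row i, clipped so that lam[i] stays >= 0
        delta = max(-lam[i], (a @ x - b[i]) / (a @ a))
        lam[i] += delta
        x -= delta * a        # maintains x = z - A.T @ lam

# x converges to the projection of z, here (1, 1); lam approximates
# the optimal multipliers, here (1, 1, 0) -- the third row is inactive.
```

Note the row-action character: each step touches a single row of A and never modifies the constraint data.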

9.

We consider the minimization of a convex objective function subject to the set of minima of another convex function, under the assumption that both functions are twice continuously differentiable. We approach this optimization problem from a continuous perspective by means of a second-order dynamical system with Hessian-driven damping and a penalty term corresponding to the constrained function. By constructing appropriate energy functionals, we prove weak convergence of the trajectories generated by this differential equation to a minimizer of the optimization problem as well as convergence for the objective function values along the trajectories. The performed investigations rely on Lyapunov analysis in combination with the continuous version of the Opial Lemma. In case the objective function is strongly convex, we can even show strong convergence of the trajectories.
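The continuous dynamics can be explored numerically. Below is an illustrative Euler integration, with assumed coefficients `gamma`, `beta` and penalty schedule `b(t) = t` (chosen for the toy problem, not taken from the paper's conditions), of the damped second-order system x'' + gamma*x' + beta*Hess_f(x)x' + grad_f(x) + b(t)*grad_g(x) = 0:

```python
import numpy as np

# f(x) = 0.5*||x - p||^2 (so Hess_f = I); g penalizes distance to the
# hyperplane {x : c @ x = d}, whose minima form the constraint set.
p = np.array([3.0, 1.0])
c, d = np.array([1.0, 1.0]), 1.0
gamma, beta = 3.0, 1.0              # viscous and Hessian-driven damping

grad_f = lambda x: x - p
grad_g = lambda x: (c @ x - d) * c

h, T = 1e-3, 50.0                   # step size and time horizon
x, v = np.zeros(2), np.zeros(2)     # position and velocity
for k in range(int(T / h)):
    t = k * h
    a = -(gamma * v + beta * v + grad_f(x) + t * grad_g(x))
    v += h * a                      # semi-implicit Euler update
    x += h * v

# the trajectory settles near the projection of p onto the constraint
# hyperplane, (1.5, -0.5), as the penalty b(t) = t grows
```

The damping terms keep the trajectory from oscillating while the growing penalty drags it toward the feasible set.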

10.
Conjugate maps and duality in multiobjective optimization
This paper considers duality in convex vector optimization. A vector optimization problem requires one to find all the efficient points of the attainable value set for given multiple objective functions. Embedding the primal problem into a family of perturbed problems enables one to define a dual problem in terms of the conjugate map of the perturbed objective function. Every solution of the stable primal problem is associated with a certain solution of the dual problem, which is characterized as a subgradient of the perturbed efficient value map. This pair of solutions also provides a saddle point of the Lagrangian map.

11.
In this paper we present a robust conjugate duality theory for convex programming problems in the face of data uncertainty within the framework of robust optimization, extending the powerful conjugate duality technique. We first establish robust strong duality between an uncertain primal parameterized convex programming model problem and its uncertain conjugate dual by proving strong duality between the deterministic robust counterpart of the primal model and the optimistic counterpart of its dual problem under a regularity condition. This regularity condition is not only sufficient for robust duality but also necessary for it whenever robust duality holds for every linear perturbation of the objective function of the primal model problem. More importantly, we show that robust strong duality always holds for partially finite convex programming problems under scenario data uncertainty and that the optimistic counterpart of the dual is a tractable finite dimensional problem. As an application, we also derive a robust conjugate duality theorem for support vector machines which are a class of important convex optimization models for classifying two labelled data sets. The support vector machine has emerged as a powerful modelling tool for machine learning problems of data classification that arise in many areas of application in information and computer sciences.

12.
《Optimization》2012,61(1):155-165
In this article, we study well-posedness and stability aspects for vector optimization in terms of minimizing sequences defined using the notion of Henig proper efficiency. We justify the importance of set convergence in the study of well-posedness of vector problems by establishing characterization of well-posedness in terms of upper Hausdorff convergence of a minimizing sequence of sets to the set of Henig proper efficient solutions. Under certain compactness assumptions, a convex vector optimization problem is shown to be well-posed. Finally, the stability of vector optimization is discussed by considering a perturbed problem with the objective function being continuous. By assuming the upper semicontinuity of certain set-valued maps associated with the perturbed problem, we establish the upper semicontinuity of the solution map.

13.
We consider a quadratic d.c. optimization problem on a convex set, with the objective function represented as the difference of two convex functions. By reducing the problem to an equivalent concave programming problem, we prove a sufficient optimality condition in the form of an inequality for the directional derivative of the objective function at admissible points of the corresponding level surface.

14.
In this paper, we first derive several characterizations of the nonemptiness and compactness for the solution set of a convex scalar set-valued optimization problem (with or without cone constraints) in which the decision space is finite-dimensional. The characterizations are expressed in terms of the coercivity of some scalar set-valued maps and the well-posedness of the set-valued optimization problem, respectively. Then we investigate characterizations of the nonemptiness and compactness for the weakly efficient solution set of a convex vector set-valued optimization problem (with or without cone constraints) in which the objective space is a normed space ordered by a nontrivial, closed and convex cone with nonempty interior and the decision space is finite-dimensional. We establish that the nonemptiness and compactness for the weakly efficient solution set of a convex vector set-valued optimization problem (with or without cone constraints) can be exactly characterized as those of a family of linearly scalarized convex set-valued optimization problems and the well-posedness of the original problem.

15.
X. B. Li  Z. Lin  Z. Y. Peng 《Optimization》2016,65(8):1615-1627
In this paper, we first discuss the Painlevé–Kuratowski set convergence of the (weak) minimal point set of a convex set, when the set and the ordering cone are both perturbed. Next, we consider a convex vector optimization problem, and take into account perturbations with respect to the feasible set, the objective function and the ordering cone. For this problem, by assuming that the data of the approximate problems converge to the data of the original problem in the sense of Painlevé–Kuratowski convergence and continuous convergence, we establish the Painlevé–Kuratowski set convergence of the (weak) minimal point and (weak) efficient point sets of the approximate problems to the corresponding sets of the original problem. We also compare our main theorems with existing results on the same topic.

16.
An inverse problem for a class of nondifferentiable quadratic programs
This paper solves an inverse problem for a class of quadratic programs, specifically the minimization of an objective function given by the sum of a matrix spectral norm and a vector infinity norm. The problem is first transformed into a convex optimization problem with a separable objective, for which a G-ADMM method is proposed. The corresponding subproblems are solved exactly by combining the singular value thresholding algorithm, the Moreau-Yosida regularization algorithm, and the quadprog function of the MATLAB optimization toolbox. As for the exact solution of one of the subproblems...

17.
Connectedness of the set of super efficient solutions for set-valued optimization problems
In locally convex spaces, this paper introduces the concept of super efficient solutions for set-valued optimization problems. Some important properties of super efficient points are studied first. It is then proved that when the objective function is a cone-convexlike set-valued map, the set of super efficient points in the image space is connected; when the objective function is a cone-convex set-valued map, the super efficient solution set is connected as well.

18.
We study the convergence rate of the proximal-gradient homotopy algorithm applied to norm-regularized linear least squares problems, for a general class of norms. The homotopy algorithm reduces the regularization parameter in a series of steps, and uses a proximal-gradient algorithm to solve the problem at each step. The proximal-gradient algorithm has a linear rate of convergence provided that the objective function is strongly convex and the gradient of its smooth component is Lipschitz continuous. In many applications, the objective function in this type of problem is not strongly convex, especially when the problem is high-dimensional and the regularizers are chosen to induce sparsity or low dimensionality. We show that if the linear sampling matrix satisfies certain assumptions and the regularizing norm is decomposable, the proximal-gradient homotopy algorithm converges at a linear rate even though the objective function is not strongly convex. Our result generalizes results on the linear convergence of the homotopy algorithm for \(\ell _1\)-regularized least squares problems. Numerical experiments are presented that support the theoretical convergence rate analysis.
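The homotopy scheme is easy to sketch for the \(\ell _1\) special case: start with a large regularization parameter, cut it geometrically, and run a few proximal-gradient (ISTA) steps at each stage, warm-started from the previous one. A minimal sketch with assumed stage counts and a 0.5 cut factor (not the paper's tuned schedule):

```python
import numpy as np

# Homotopy for  min 0.5*||X @ w - y||^2 + lam*||w||_1  on synthetic data.
rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -3.0, 1.5]       # sparse ground truth
y = X @ w_true                      # noiseless observations

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
L = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the smooth part
lam_target = 1e-3
lam = np.max(np.abs(X.T @ y))       # above this threshold, w = 0 is optimal
w = np.zeros(p)
while lam > lam_target:
    lam = max(lam * 0.5, lam_target)    # homotopy: shrink the regularizer
    for _ in range(200):                # ISTA steps at this stage, warm-started
        w = soft(w - (X.T @ (X @ w - y)) / L, lam / L)

# w recovers w_true up to a small soft-thresholding bias
```

Each stage keeps the iterate in a neighborhood where the restricted strong convexity invoked by the analysis applies, which is what yields the overall linear rate.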

19.
This paper is devoted to the study of the pseudo-Lipschitz property of the efficient (Pareto) solution map for the perturbed convex semi-infinite vector optimization problem (CSVO). We establish sufficient conditions for the pseudo-Lipschitz property of the efficient solution map of (CSVO) under continuous perturbations of the right-hand side of the constraints and functional perturbations of the objective function. Examples are given to illustrate the obtained results.

20.
《Journal of Complexity》1999,15(2):282-293
We study the complexity of a barrier method for linear-inequality constrained optimization problems in which the objective function is only assumed to be analytic and convex. As special cases, we obtain the usual complexity bounds for the linear programming problem and for the case in which the objective function is convex and quadratic.
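For the linear programming special case, the barrier method minimizes t*c@x - sum(log(b - A@x)) by Newton's method and increases t between stages. A hedged sketch with assumed parameters (fixed stage counts, factor-10 increase of t; not the paper's complexity-optimal schedule):

```python
import numpy as np

# Log-barrier method for  min c @ x  s.t.  A @ x <= b.
c = np.array([-1.0, -1.0])                      # i.e. maximize x1 + x2
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])              # unit box; optimum at (1, 1)

x, t = np.array([0.5, 0.5]), 1.0                # strictly feasible start
for _ in range(10):                             # outer loop over t
    for _ in range(50):                         # inner Newton iterations
        s = b - A @ x                           # slacks, kept strictly > 0
        grad = t * c + A.T @ (1.0 / s)
        hess = A.T @ np.diag(1.0 / s**2) @ A
        dx = np.linalg.solve(hess, -grad)
        step = 1.0                              # backtrack to stay feasible
        while np.any(b - A @ (x + step * dx) <= 0):
            step *= 0.5
        x = x + step * dx
    t *= 10.0                                   # tighten the barrier

# x approaches the vertex (1, 1); the duality gap scales like m/t
```

The complexity analysis in the paper controls exactly how many Newton steps each increase of t requires when the objective is merely analytic and convex rather than linear.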
