Similar Documents
1.
We consider a class of nonsmooth minimax fractional programming problems with nonlinear inequality constraints, in which the numerator of the objective is the sum of a differentiable function and a convex function, the denominator is the difference of a differentiable function and a convex function, and the constraint functions are differentiable. Under the Arrow-Hurwicz-Uzawa constraint qualification, Kuhn-Tucker type necessary conditions for an optimal solution of this class of programs are given. The results improve and generalize the corresponding results in the existing literature.
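For orientation, the class of problems treated here can be written in the following generic form (the notation below is ours, introduced only for illustration):

```latex
% Illustrative form of the nonsmooth minimax fractional program:
\min_{x}\; \max_{1 \le i \le p}\;
  \frac{f_i(x) + g_i(x)}{h_i(x) - k_i(x)}
\quad \text{s.t.} \quad c_j(x) \le 0,\; j = 1, \dots, m,
```

where the f_i, h_i and c_j are differentiable and the g_i, k_i are convex.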

2.
Convex composite multi-objective nonsmooth programming
This paper examines nonsmooth constrained multi-objective optimization problems where the objective function and the constraints are compositions of convex functions, and locally Lipschitz and Gâteaux differentiable functions. Lagrangian necessary conditions, and new sufficient optimality conditions for efficient and properly efficient solutions are presented. Multi-objective duality results are given for convex composite problems which are not necessarily convex programming problems. Applications of the results to new and some special classes of nonlinear programming problems are discussed. A scalarization result and a characterization of the set of all properly efficient solutions for convex composite problems are also discussed under appropriate conditions. This research was partially supported by the Australian Research Council grant A68930162. This author wishes to acknowledge the financial support of the Australian Research Council.
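In this composite setting, each objective and constraint component has the following generic structure (our illustrative notation):

```latex
% Convex composite structure (illustrative):
F_i(x) = h_i\bigl(G_i(x)\bigr), \qquad
h_i \ \text{convex}, \quad
G_i \ \text{locally Lipschitz and G\^{a}teaux differentiable}.
```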

3.
The phrase convex optimization refers to the minimization of a convex function over a convex set. However, the feasible convex set need not always be described by convex inequalities. In this article we consider a convex feasible set described by inequality constraints that are locally Lipschitz and not necessarily convex or differentiable. We show that if the Slater constraint qualification and a simple non-degeneracy condition are satisfied, then the Karush–Kuhn–Tucker type optimality condition is both necessary and sufficient.

4.
In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may be due to various sources, and is manifested in inexact computation of the subgradients and function values. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case, the tolerance is nonzero, but in the second case, the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are the sum of a large number of convex functions, in which case an incremental subgradient method can be used.
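As a minimal sketch of the kind of method analyzed here (illustrative code under our own assumptions; the function names and the example problem are not from the paper):

```python
# Projected subgradient method with a bounded, deterministic error in the
# subgradient computation -- an illustrative sketch, not the paper's code.
import numpy as np

def noisy_projected_subgradient(x0, subgrad, project, stepsize, n_iters):
    """subgrad(x) returns an inexact subgradient (true subgradient plus a
    bounded error); project(x) is the Euclidean projection onto the
    constraint set; stepsize(k) gives the k-th stepsize."""
    x = x0
    for k in range(n_iters):
        g = subgrad(x)                      # inexact subgradient of f at x
        x = project(x - stepsize(k) * g)    # projected subgradient step
    return x

# Example: minimize ||x||_1 over the Euclidean unit ball, with a small
# bounded perturbation added to the subgradient.
rng = np.random.default_rng(0)
subgrad = lambda x: np.sign(x) + 1e-3 * rng.uniform(-1.0, 1.0, x.size)
project = lambda x: x / max(1.0, float(np.linalg.norm(x)))
x = noisy_projected_subgradient(np.ones(5), subgrad, project,
                                stepsize=lambda k: 1.0 / (k + 1), n_iters=500)
```

Consistent with the abstract, one can only expect convergence to the optimal value within a tolerance proportional to the noise bound, unless a sharp-minima condition holds.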

5.
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.
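A minimal sketch of one round of such an update (our notation and example; consensus_subgradient_round and the mixing matrix W are illustrative assumptions, not the paper's code):

```python
# One round of a distributed consensus + subgradient update -- illustrative.
import numpy as np

def consensus_subgradient_round(X, W, subgrads, project, alpha, noise_scale, rng):
    """X: m-by-n array of agent iterates; W: m-by-m doubly stochastic mixing
    matrix; subgrads[i](x) returns a subgradient of agent i's objective."""
    X_mix = W @ X                                   # weighted average of neighbors' iterates
    X_new = np.empty_like(X)
    for i in range(X.shape[0]):
        g = subgrads[i](X_mix[i]) + noise_scale * rng.standard_normal(X.shape[1])
        X_new[i] = project(X_mix[i] - alpha * g)    # noisy subgradient step + projection
    return X_new

# Example: two agents jointly minimizing |x - 1| + |x + 1| over [-2, 2].
rng = np.random.default_rng(0)
subgrads = [lambda x: np.sign(x - 1.0), lambda x: np.sign(x + 1.0)]
project = lambda x: np.clip(x, -2.0, 2.0)
X = np.array([[2.0], [-2.0]])
W = np.full((2, 2), 0.5)
for k in range(300):
    X = consensus_subgradient_round(X, W, subgrads, project,
                                    alpha=1.0 / (k + 1), noise_scale=1e-3, rng=rng)
```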

6.
In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms utilize stepsize strategies based on differentiable penalty functions and quadratic programming subproblems. The essential difference between the algorithms is in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that is dependent on the penalty functions. The quadratic objective function utilizes an approximate Hessian of the Lagrangian augmented by the penalty functions. In this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions. The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, the boundedness follows as the strategy is always satisfied for finite values of the parameter. These properties are utilized to establish global convergence and the condition under which unit stepsizes are achieved. There is also a compatibility between the quadratic objective function and the stepsize strategy to ensure the consistency of the properties for unit steps and subsequent convergence rates. This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.
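For orientation, the quadratic subproblem in methods of this kind has the generic SQP form below (our notation); per the abstract, the paper's variant adds a penalty-dependent linear term to the objective and takes B_k as an approximate Hessian of the Lagrangian augmented by the penalty functions:

```latex
% Generic SQP subproblem at iterate x_k (illustrative):
\min_{d}\; \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d
\quad \text{s.t.} \quad
g_j(x_k) + \nabla g_j(x_k)^{\top} d \le 0, \qquad
h_i(x_k) + \nabla h_i(x_k)^{\top} d = 0.
```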

7.
The problem under consideration is a maximization problem over a constraint set defined by a finite number of inequality and equality constraints over an arbitrary set in a reflexive Banach space. A generalization of the Kuhn-Tucker necessary conditions is developed in which neither the objective function nor the constraint functions are required to be differentiable. A new constraint qualification is imposed in order to validate the optimality criteria. It is shown that this qualification is the weakest possible, in the sense that it is necessary for the optimality criteria to hold at the point under investigation for all families of objective functions having a constrained local maximum at this point.

8.
Two distributed algorithms are described that enable all users connected over a network to cooperatively solve the problem of minimizing the sum of all users’ objective functions over the intersection of all users’ constraint sets, where each user has its own private nonsmooth convex objective function and closed convex constraint set, which is the intersection of a number of simple, closed convex sets. One algorithm enables each user to adjust its estimate using the proximity operator of its objective function and the metric projection onto one constraint set randomly selected from a number of simple, closed convex sets. The other determines each user’s estimate using the subdifferential of its objective function instead of the proximity operator. Investigation of the two algorithms’ convergence properties for a diminishing step-size rule revealed that, under certain assumptions, the sequences of all users generated by each of the two algorithms converge almost surely to the same solution. It also showed that the rate of convergence depends on the step size and that a smaller step size results in quicker convergence. The results of numerical evaluation using a nonsmooth convex optimization problem support the convergence analysis and demonstrate the effectiveness of the two algorithms.
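A minimal sketch of the first algorithm's per-user update (illustrative code under our own assumptions; the prox and projection operators in the example are ours):

```python
# Proximal step on the user's objective followed by metric projection onto
# one randomly selected simple constraint set -- an illustrative sketch.
import numpy as np

def prox_random_projection_step(x, prox_f, projections, step, rng):
    """prox_f(v, t) is the proximity operator of the user's objective;
    projections is a list of projections onto simple closed convex sets."""
    P = projections[rng.integers(len(projections))]  # pick one set at random
    return P(prox_f(x, step))                        # proximal step, then projection

# Example: f(x) = ||x||_1 (prox = soft-thresholding); the constraint set is
# the intersection of the box [-1, 1]^n and the halfspace sum(x) <= 0.
prox_f = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
projections = [
    lambda x: np.clip(x, -1.0, 1.0),                                    # box
    lambda x: x - max(0.0, float(x.sum())) * np.ones_like(x) / x.size,  # halfspace
]
rng = np.random.default_rng(1)
x = np.array([2.0, -3.0, 0.5])
for k in range(200):
    x = prox_random_projection_step(x, prox_f, projections,
                                    step=1.0 / (k + 1), rng=rng)
```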

9.
Second-order optimality conditions are studied for the constrained optimization problem where the objective function and the constraints are compositions of convex functions and twice strictly differentiable functions. A second-order sufficient condition for a global minimizer is obtained by introducing a generalized representation condition. Second-order minimizer characterizations for a convex program and a linear fractional program are derived using the generalized representation condition.

10.
We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike in other branch and bound algorithms, the lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations.
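A minimal sketch of the surrounding branch-and-bound loop (illustrative, one-dimensional; the paper's cubic-regularisation-based underestimators are abstracted into a user-supplied lbound oracle, and the Lipschitz-type bound in the example is ours):

```python
# Generic best-first branch and bound over boxes; lbound(box) must return a
# valid lower bound on f over the box -- an illustrative sketch only.
import heapq

def branch_and_bound(f, lbound, box, tol=1e-6, max_nodes=100000):
    """box = (lo, hi) in one dimension; returns (lower, upper) bounds
    bracketing the global minimum value of f over the box."""
    upper = f(0.5 * (box[0] + box[1]))       # incumbent upper bound
    heap = [(lbound(box), box)]
    lb = heap[0][0]
    nodes = 0
    while heap and nodes < max_nodes:
        lb, (lo, hi) = heapq.heappop(heap)   # most promising box first
        if upper - lb <= tol:
            break                            # bounds have converged
        m = 0.5 * (lo + hi)
        upper = min(upper, f(m))             # improve the incumbent
        for child in ((lo, m), (m, hi)):     # split the box and re-bound
            clb = lbound(child)
            if clb < upper - tol:            # prune boxes that cannot improve
                heapq.heappush(heap, (clb, child))
        nodes += 1
    return lb, upper

# Example: f(x) = x**4 - x**2 on [-2, 2]; |f'| <= 28 there, so a Lipschitz
# bound gives a valid (if crude) lower bound on each box.
f = lambda x: x**4 - x**2
lbound = lambda b: min(f(b[0]), f(b[1])) - 0.5 * 28.0 * (b[1] - b[0])
lower, upper = branch_and_bound(f, lbound, (-2.0, 2.0))  # brackets -0.25
```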

11.
This paper considers a class of generalized fractional programming problems with nonlinear inequality constraints on an open subset X of R^n, in which the numerator of the objective is the sum of a differentiable function and a convex function, the denominator is the difference of a differentiable function and a convex function, and the constraint functions are differentiable. Under the Abadie constraint qualification or the calmness constraint qualification, Kuhn-Tucker type necessary conditions for an optimal solution are given. The results improve and generalize the corresponding results in the existing literature.

12.
Two approximation algorithms for solving convex vector optimization problems (CVOPs) are provided. Both algorithms solve the CVOP and its geometric dual problem simultaneously. The first algorithm is an extension of Benson’s outer approximation algorithm, and the second one is a dual variant of it. Both algorithms provide an inner as well as an outer approximation of the (upper and lower) images. Only one scalar convex program has to be solved in each iteration. We allow objective and constraint functions that are not necessarily differentiable, allow solid pointed polyhedral ordering cones, and relate the approximations to an appropriate ε-solution concept. Numerical examples are provided.

13.
Kuhn-Tucker type necessary optimality conditions are given for the problem of minimizing the sum of a differentiable function and a convex function subject to a set of differentiable nonlinear inequalities on a convex subset C of R^n, under conditions similar to the Kuhn-Tucker constraint qualification or the Arrow-Hurwicz-Uzawa constraint qualification. The case where the set C is open (not necessarily convex) is shown to be a special case of our results, which helps us to improve some of the existing results in the literature.
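In this setting the Kuhn-Tucker type condition takes the following form (our notation, with ∂h the convex subdifferential and N_C the normal cone to C):

```latex
% Kuhn-Tucker type condition for min f(x) + h(x) s.t. g_j(x) <= 0, x in C:
0 \in \nabla f(\bar{x}) + \partial h(\bar{x})
      + \sum_{j=1}^{m} \lambda_j \nabla g_j(\bar{x}) + N_C(\bar{x}),
\qquad \lambda_j \ge 0, \quad \lambda_j\, g_j(\bar{x}) = 0.
```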

14.
We present an algorithm which solves a convex program with faithfully convex (not necessarily differentiable) constraints. While finding a feasible starting point, the algorithm reduces the program to an equivalent program for which Slater's condition is satisfied. Included are algorithms for calculating various objects which have recently appeared in the literature. Stability of the algorithm is discussed.

15.
We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search direction rays onto the feasible set and the search directions are chosen such that a subset approximately generates the cone of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant and the regularity assumptions hold automatically. Numerical experience with the methods in the linearly constrained case is reported.
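A minimal sketch of this sampling scheme (illustrative code; the function names, shrink factor, and example problem are our assumptions):

```python
# Derivative-free feasible descent: sample f along projected search arcs and
# shrink the stepsize when no sampled point improves -- illustrative sketch.
import numpy as np

def arc_sampling_descent(f, project, x0, directions, step0=1.0, shrink=0.5,
                         min_step=1e-8, max_iters=1000):
    x, step = project(x0), step0
    for _ in range(max_iters):
        improved = False
        for d in directions:
            trial = project(x + step * d)    # point on the projected search arc
            if f(trial) < f(x):
                x, improved = trial, True
        if not improved:
            step *= shrink                   # no improvement sampled: reduce stepsize
            if step < min_step:
                break
    return x

# Example: minimize (x1-2)^2 + (x2+1)^2 over the box [0, 1]^2 using the
# +/- coordinate rays as search directions (projection turns them into arcs).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
project = lambda x: np.clip(x, 0.0, 1.0)
dirs = [np.array(d, dtype=float) for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
x = arc_sampling_descent(f, project, np.array([0.5, 0.5]), dirs)  # -> about (1, 0)
```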

16.
In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which come from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient method when the constraint set is compact or the objective function has a set of generalized weak sharp minima. In both cases, using the constant and diminishing stepsize rules, we describe convergence results in both objective values and iterates, and finite convergence to approximate optimality. We also investigate efficiency estimates of iterates and apply the inexact subgradient algorithm to solve the Cobb–Douglas production efficiency problem. The numerical results verify our theoretical analysis and show the high efficiency of our proposed algorithm, especially for the large-scale problems.
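For reference, the standard (non-generalized) weak sharp minima condition on the solution set X* reads as follows; the paper works with a generalized version of it:

```latex
% Weak sharp minima of f over the feasible set S (standard form):
f(x) \;\ge\; f^{*} + \alpha\,\operatorname{dist}(x, X^{*})
\qquad \text{for all } x \in S, \ \text{for some } \alpha > 0.
```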

17.
We study the applicability of the Peaceman–Rachford (PR) splitting method for solving nonconvex optimization problems. When applied to minimizing the sum of a strongly convex Lipschitz differentiable function and a proper closed function, we show that if the strongly convex function has a large enough strong convexity modulus and the step-size parameter is chosen below a computable threshold, then any cluster point of the generated sequence, if one exists, gives a stationary point of the optimization problem. We also give sufficient conditions guaranteeing boundedness of the generated sequence. We then discuss one way to split the objective so that the proposed method can be suitably applied to solving optimization problems with a coercive objective that is the sum of a (not necessarily strongly) convex Lipschitz differentiable function and a proper closed function; this setting covers a large class of nonconvex feasibility problems and constrained least squares problems. Finally, we illustrate the proposed algorithm numerically.
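A minimal sketch of the PR iteration in one common form (illustrative code; the example objective and step size are our assumptions, chosen to match the strongly convex setting of the abstract):

```python
# Peaceman-Rachford splitting for minimizing f + g, written with reflected
# proximal maps: z_{k+1} = (2 prox_{t g} - I)(2 prox_{t f} - I) z_k.
import numpy as np

def peaceman_rachford(prox_f, prox_g, z0, t, n_iters):
    z = z0
    for _ in range(n_iters):
        x = prox_f(z, t)             # first half-step
        y = prox_g(2 * x - z, t)     # second half-step on the reflection
        z = z + 2 * (y - x)          # full reflection update
    return x

# Example: f(x) = 0.5*||x - a||^2 (strongly convex, Lipschitz differentiable),
# g = indicator of the box [0, 1]^n (proper and closed).
a = np.array([1.5, -0.3, 0.7])
prox_f = lambda v, t: (v + t * a) / (1.0 + t)   # prox of 0.5*||x - a||^2
prox_g = lambda v, t: np.clip(v, 0.0, 1.0)      # projection onto the box
x = peaceman_rachford(prox_f, prox_g, np.zeros(3), t=1.0, n_iters=100)
# x approaches clip(a, 0, 1) = [1, 0, 0.7], the constrained minimizer.
```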

18.
We present a primal-dual interior-point method for constrained nonlinear, discrete minimax problems where the objective functions and constraints are not necessarily convex. The algorithm uses two merit functions to ensure progress toward the points satisfying the first-order optimality conditions of the original problem. Convergence properties are described and numerical results are provided.

19.
In this paper we consider optimization problems defined by a quadratic objective function and a finite number of quadratic inequality constraints. Given that the objective function is bounded over the feasible set, we present a comprehensive study of the conditions under which the optimal solution set is nonempty, thus extending the so-called Frank-Wolfe theorem. In particular, we first prove a general continuity result for the solution set defined by a system of convex quadratic inequalities. This result implies immediately that the optimal solution set of the aforementioned problem is nonempty when all the quadratic functions involved are convex. In the absence of the convexity of the objective function, we give examples showing that the optimal solution set may be empty either when there are two or more convex quadratic constraints, or when the Hessian of the objective function has two or more negative eigenvalues. In the case when there exists only one convex quadratic inequality constraint (together with other linear constraints), or when the constraint functions are all convex quadratic and the objective function is quasi-convex (thus allowing one negative eigenvalue in its Hessian matrix), we prove that the optimal solution set is nonempty.
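For reference, the classical Frank-Wolfe theorem states that a quadratic function that is bounded below on a nonempty polyhedral set attains its minimum on that set; the results above extend this existence statement to feasible sets described by quadratic inequality constraints.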
