Similar Literature
 20 similar documents found (search time: 453 ms)
1.
《Optimization》2012,61(3):349-355
In [1], [2] Hoàng Tuy gave an approach to the main theorems of convex analysis and convex optimization based on a lemma, which he proved by means of induction. The proof in [1] of the equivalence of the main theorems of convex optimization given in [1], [2] does not use a separation theorem or equivalent statements. In this note the author proves that Hoàng Tuy's lemma can be characterized as a special separation theorem and can be obtained from a separation theorem of Eidelheit; consequently, the lemma is equivalent to the Hahn-Banach theorem.
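For reference, the Eidelheit separation theorem invoked in this abstract admits the following standard textbook formulation (my restatement, not quoted from [1] or [2]):

```latex
\textbf{Theorem (Eidelheit).} Let $A$ and $B$ be nonempty convex subsets of a
real normed space $X$ with $\operatorname{int} A \neq \emptyset$ and
$\operatorname{int} A \cap B = \emptyset$. Then there exist a continuous linear
functional $x^{*} \in X^{*} \setminus \{0\}$ and $\alpha \in \mathbb{R}$ with
\[
  x^{*}(a) \;\le\; \alpha \;\le\; x^{*}(b)
  \qquad \text{for all } a \in A,\; b \in B .
\]
```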

2.
Finding all solutions of nonlinearly constrained systems of equations
A new approach is proposed for finding all feasible solutions for certain classes of nonlinearly constrained systems of equations. By introducing slack variables, the initial problem is transformed into a global optimization problem (P) whose multiple global minimum solutions with a zero objective value (if any) correspond to all solutions of the initial constrained system of equalities. All globally optimal points of (P) are then localized within a set of arbitrarily small disjoint rectangles. This is based on a branch and bound type global optimization algorithm which attains finite convergence to each of the multiple global minima of (P) through the successive refinement of a convex relaxation of the feasible region and the subsequent solution of a series of nonlinear convex optimization problems. Based on the form of the participating functions, a number of techniques for constructing this convex relaxation are proposed. By taking advantage of the properties of products of univariate functions, customized convex lower bounding functions are introduced for a large number of expressions that are or can be transformed into products of univariate functions. Alternative convex relaxation procedures involve either the difference of two convex functions employed in αBB [23] or the exponential variable transformation based underestimators employed for generalized geometric programming problems [24]. The proposed approach is illustrated with several test problems. For some of these problems additional solutions are identified that existing methods failed to locate.
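The enclosure idea described in this abstract — branch on rectangles, prune with bounds on the function range, and keep arbitrarily small boxes around every solution — can be illustrated on a one-dimensional toy problem. Everything below (helper names, the instance x² = 1 on [-2, 2], the interval bound) is my own illustrative choice, not code from the paper:

```python
# Toy sketch: enclose ALL roots of f(x) = x^2 - 1 on [-2, 2] in arbitrarily
# small intervals by branch and bound, pruning boxes whose bound on the
# range of f excludes zero.

def interval_sq(lo, hi):
    """Exact range of x -> x**2 over [lo, hi]."""
    cands = [lo * lo, hi * hi]
    lo2 = 0.0 if lo <= 0.0 <= hi else min(cands)
    return lo2, max(cands)

def all_roots(lo, hi, tol=1e-6):
    """Return small intervals that together contain every root of x^2 - 1."""
    boxes, found = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = interval_sq(a, b)
        if flo - 1.0 > 0.0 or fhi - 1.0 < 0.0:
            continue                      # box cannot contain a root: prune
        if b - a < tol:
            found.append((a, b))          # box is small enough: accept
        else:
            m = 0.5 * (a + b)
            boxes += [(a, m), (m, b)]     # branch
    return sorted(found)

roots = all_roots(-2.0, 2.0)              # tiny boxes clustered at -1 and +1
```

In the paper the pruning bound comes from a convex relaxation solved as a nonlinear convex program; the interval bound above is a deliberately simple stand-in playing the same role.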

3.
《Optimization》2012,61(9):2039-2041
We provide a counterexample to the remark in Löhne and Schrage [An algorithm to solve polyhedral convex set optimization problems, Optimization 62 (2013), pp. 131-141] that every solution of a polyhedral convex set optimization problem is a pre-solution. A correct statement is that every solution of a polyhedral convex set optimization problem obtained by the algorithm SetOpt is a pre-solution. We also show that every finite infimizer and hence every solution of a polyhedral convex set optimization problem contains a pre-solution.

4.
《Optimization》2012,61(3):555-575
On the basis of a given strictly convex function defined on the Euclidean space E^n (n ≥ 2) we can, without assuming that it is differentiable, introduce certain manifolds in the topological sense. Such manifolds are the sets of all optimal points of a certain parametric non-linear optimization problem. This paper presents, above all, a generalization of some results of [F. Nožička and L. Grygarová (1991). Some topological questions connected with strictly convex functions. Optimization, 22, 177-191. Akademie Verlag, Berlin] and [L. Grygarová (1988). Über Lösungsmengen spezieller konvexer parametrischer Optimierungsaufgaben. Optimization, 19, 215-228. Akademie Verlag, Berlin], under less strict assumptions. The main results are presented in Sections 3 and 4: in Section 3 the geometrical characterization of the set of optimal points of a certain parametric minimization problem is given; in Section 4 we study a maximization non-linear parametric problem assigned to it. The two problems have the same set of optimal points, so they can be regarded as a pair of dual parametric non-linear optimization problems. This paper also presents, mostly in Section 2, a number of interesting geometric facts about strictly convex functions. From the point of view of non-smooth analysis the present article is a certain complement to Chapter 4.3 of the book [B. Bank, J. Guddat, D. Klatte, B. Kummer and K. Tammer (1982). Nonlinear Parametric Optimization. Akademie Verlag, Berlin], where a convex parametric minimization problem is considered under more general and stronger conditions (but without any assumptions concerning strict convexity and without geometrical aspects).

5.
Duality in nonlinear fractional programming
Summary  The purpose of the present paper is to introduce, along lines similar to those of Wolfe [1961], a dual program to a nonlinear fractional program in which the objective function, being the ratio of a convex function to a strictly positive linear function, is a special type of pseudo-convex function and the constraint set is a convex set constrained by convex functions in the form of inequalities. The main results proved are (i) a weak duality theorem, (ii) Wolfe's (direct) duality theorem and (iii) Mangasarian's strict converse duality theorem. Huard's [1963] and Hanson's [1961] converse duality theorems for the present problem are only stated, because they can be obtained as a special case of Mangasarian's theorem [1969, p. 157]. The other important discussion included is to show that the dual program introduced in the present paper can also be obtained through Dinkelbach's parametric replacement [1967] of a nonlinear fractional program. Lastly, duality in convex programming is shown to be a special case of the present problem. The present research is partially supported by the National Research Council of Canada.
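Dinkelbach's parametric replacement mentioned above is easy to sketch: the ratio problem min f(x)/g(x) is reduced to finding the root of F(λ) = min_x {f(x) − λg(x)}, updating λ to the current ratio. The toy instance below (f(x) = x² + 1, g(x) = x on [0.5, 4], with minimum ratio 2 attained at x = 1) is my own, not from the paper:

```python
# Dinkelbach's algorithm on min (x^2 + 1)/x over [0.5, 4]:
# repeatedly minimize f(x) - lam*g(x), then set lam = f(x)/g(x).

def dinkelbach(lam=0.0, tol=1e-10, max_iter=100):
    x = 0.5
    for _ in range(max_iter):
        # minimize x^2 + 1 - lam*x over [0.5, 4]: clipped vertex of a parabola
        x = min(max(lam / 2.0, 0.5), 4.0)
        val = x * x + 1.0 - lam * x
        if abs(val) < tol:                 # F(lam) = 0  <=>  lam is the optimal ratio
            return x, lam
        lam = (x * x + 1.0) / x            # Dinkelbach update
    return x, lam

x_star, ratio = dinkelbach()               # converges to x = 1, ratio = 2
```

The update converges superlinearly here; the subproblem is the convex program that the dual construction in the paper is built from.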

6.
In convex composite NDO one studies the problem of minimizing functions of the form F := h∘f, where h : ℝ^m → ℝ is a finite valued convex function and f : ℝ^n → ℝ^m is continuously differentiable. This problem model has a wide range of application in mathematical programming since many important problem classes can be cast within its framework, e.g. convex inclusions, minimax problems, and penalty methods for constrained optimization. In the present work we extend the second order theory developed by A.D. Ioffe in [11, 12, 13] for the case in which h is sublinear, to arbitrary finite valued convex functions h. Moreover, a discussion of the second order regularity conditions is given that illuminates their essentially geometric nature.

7.
We introduce a new barrier function to build new interior-point algorithms to solve optimization problems with bounded variables. First, we show that this function is a (3/2)n-self-concordant barrier for the unitary hypercube [0,1]^n, thus assuring the polynomial property of related algorithms. Second, using the Hessian metric of that barrier, we present new explicit algorithms from the point of view of Riemannian geometry applications. Third, we prove that the central path defined by the new barrier to solve a certain class of linearly constrained convex problems maintains most of the properties of the central path defined by the usual logarithmic barrier. We also present a primal long-step path-following algorithm with complexity similar to that of the classical barrier. Finally, we introduce a new proximal-point Bregman type algorithm to solve linear problems in [0,1]^n and prove its convergence. P.R. Oliveira was partially supported by CNPq/Brazil.
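For illustration, the central path for a linear objective over [0,1]^n under the classical logarithmic barrier (the benchmark the abstract compares against, not the paper's new barrier) can be computed coordinate-wise in closed form, since the barrier is separable. The toy instance is my own:

```python
import math

def central_point(c, t):
    """Minimizer of t*c^T x - sum_i (log x_i + log(1 - x_i)) over (0,1)^n.

    The log barrier is separable: each coordinate is the root in (0,1) of
    t*c_i - 1/x + 1/(1 - x) = 0, i.e. of t*c_i*x^2 - (t*c_i + 2)*x + 1 = 0.
    """
    xs = []
    for ci in c:
        a = t * ci
        if abs(a) < 1e-12:
            xs.append(0.5)                 # no objective pull: box center
        else:
            xs.append(((a + 2.0) - math.sqrt(a * a + 4.0)) / (2.0 * a))
    return xs

# As t grows the central point approaches the vertex of [0,1]^2 that
# minimizes c^T x, here (0, 1) for c = (1, -1):
path = [central_point([1.0, -1.0], t) for t in (1.0, 10.0, 1000.0)]
```

A long-step path-following method such as the one in the paper increases t geometrically and re-centers; here re-centering is exact, which is what makes the separable case a clean sanity check.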

8.
We address a class of particularly hard-to-solve combinatorial optimization problems, namely multicommodity network optimization when the link cost functions are discontinuous step increasing functions. Unlike usual approaches, which develop relaxations of such problems (in an equivalent form of a large scale mixed integer linear programming problem) in order to derive lower bounds, our d.c. (difference of convex functions) approach deals with the original continuous version and provides upper bounds. More precisely, we approximate step increasing functions as closely as desired by differences of polyhedral convex functions and then apply DCA (difference of convex functions algorithm) to the resulting approximate polyhedral d.c. programs. Preliminary computational experiments are presented on a series of test problems with structures similar to those encountered in telecommunication networks. They show that the d.c. approach and DCA provide feasible multicommodity flows x* such that the relative differences between upper bounds (computed by DCA) and simple lower bounds, r := (f(x*) − LB)/f(x*), lie in the range [4.2%, 16.5%] with an average of 11.5%, where f is the cost function of the problem and LB is a lower bound obtained by solving the linearized program (built from the original problem by replacing step increasing cost functions with simple affine minorizations). Upper bounds of this quality appear to have been obtained for the first time.
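The DCA iteration used above — linearize the concave part at the current point, then solve the resulting convex subproblem — can be shown on a one-dimensional toy d.c. function (my own choice, unrelated to the multicommodity instance):

```python
# Minimal DCA sketch: minimize f(x) = g(x) - h(x) with g(x) = x^2, h(x) = |x|,
# both convex. Each step picks y in the subdifferential of h at x_k and
# solves the convex subproblem min_x g(x) - y*x.

def dca(x, iters=20):
    for _ in range(iters):
        y = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)   # y in subdiff of |x|
        x = y / 2.0            # argmin of x^2 - y*x (derivative 2x - y = 0)
    return x

x_star = dca(0.3)              # converges to the local minimizer 0.5
f_star = x_star**2 - abs(x_star)   # objective value -0.25
```

As the abstract notes for the network problems, DCA delivers a good feasible point (an upper bound) rather than a global certificate; on this toy function it lands on the nearest local minimizer of f.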

9.
We generalize the disjunctive approach of Balas, Ceria, and Cornuéjols [2] and develop a branch-and-cut method for solving 0-1 convex programming problems. We show that cuts can be generated by solving a single convex program. We show how to construct regions similar to those of Sherali and Adams [20] and Lovász and Schrijver [12] for the convex case. Finally, we give some preliminary computational results for our method. Received January 16, 1996 / Revised version received April 23, 1999 / Published online June 28, 1999

10.
In this paper, we study inverse optimization for linearly constrained convex separable programming problems, which have wide applications in industrial and managerial areas. For a given feasible point of a convex separable program, inverse optimization is to determine whether the feasible point can be made optimal by adjusting the parameter values in the problem, and, when the answer is positive, to find the parameter values that require the smallest adjustments. A necessary and sufficient condition is given for a feasible point to be able to become optimal by adjusting parameter values. Inverse optimization formulations are presented with ℓ1 and ℓ2 norms. These inverse optimization problems are either linear programs, when the ℓ1 norm is used in the formulation, or convex quadratic separable programs, when the ℓ2 norm is used.
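As a toy illustration of the inverse-optimization idea (my own instance, far simpler than the paper's general formulation): for the box-constrained LP min c^T x over 0 ≤ x ≤ 1, a feasible x0 is optimal iff c_i ≥ 0 wherever x0_i = 0, c_i ≤ 0 wherever x0_i = 1, and c_i = 0 at interior coordinates. These constraints are separable, so the ℓ2-minimal adjustment of c is a coordinate-wise projection:

```python
# Smallest l2 change of the cost vector c making a given feasible x0
# optimal for the box LP  min c^T x  subject to  0 <= x <= 1.

def inverse_adjust(c, x0):
    out = []
    for ci, xi in zip(c, x0):
        if xi == 0.0:
            out.append(max(ci, 0.0))   # need ci >= 0: project up if violated
        elif xi == 1.0:
            out.append(min(ci, 0.0))   # need ci <= 0: project down if violated
        else:
            out.append(0.0)            # interior coordinate forces ci = 0
    return out

c_new = inverse_adjust([3.0, -2.0, 0.5], [0.0, 1.0, 0.3])
```

Only the interior coordinate is adjusted in this example; the first two already satisfy their sign conditions. In the paper the same structure (separable optimality conditions plus a norm objective) is what makes the ℓ2 formulation a convex separable quadratic program.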

11.
A branch and bound global optimization method, αBB, for general continuous optimization problems involving nonconvexities in the objective function and/or constraints is presented. The nonconvexities are categorized as being either of special structure or generic. A convex relaxation of the original nonconvex problem is obtained by (i) replacing all nonconvex terms of special structure (i.e. bilinear, fractional, signomial) with customized tight convex lower bounding functions and (ii) utilizing the α parameter as defined in [17] to underestimate nonconvex terms of generic structure. The proposed branch and bound type algorithm attains finite convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems. The global optimization method, αBB, is implemented in C and tested on a variety of example problems.
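For context, the generic α-based convex underestimator used in αBB has the following well-known form (restated from the αBB literature, not quoted from this abstract):

```latex
% Generic alpha-underestimator of f over the box x^L <= x <= x^U:
L(x) \;=\; f(x) \;+\; \alpha \sum_{i=1}^{n}
      \bigl(x_i^{L} - x_i\bigr)\bigl(x_i^{U} - x_i\bigr),
\qquad
\alpha \;\ge\; \max\Bigl\{0,\; -\tfrac{1}{2}\,
      \min_{x^{L}\le x\le x^{U}} \lambda_{\min}\bigl(\nabla^{2} f(x)\bigr)\Bigr\}.
```

The added quadratic is nonpositive on the box and vanishes at its corners, and it contributes 2α to every eigenvalue of the Hessian, so L is convex on the box and underestimates f there.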

12.
Generalized Disjunctive Programming (GDP) has been introduced recently as an alternative to mixed-integer programming for representing discrete/continuous optimization problems. The basic idea of GDP consists of representing these problems in terms of sets of disjunctions in the continuous space, and logic propositions in terms of Boolean variables. In this paper we consider GDP problems involving convex nonlinear inequalities in the disjunctions. Based on the work by Stubbs and Mehrotra [21] and Ceria and Soares [6], we propose a convex nonlinear relaxation of the nonlinear convex GDP problem that relies on the convex hull of each of the disjunctions that is obtained by variable disaggregation and reformulation of the inequalities. The proposed nonlinear relaxation is used to formulate the GDP problem as a Mixed-Integer Nonlinear Programming (MINLP) problem that is shown to be tighter than the conventional big-M formulation. A disjunctive branch and bound method is also presented, and numerical results are given for a set of test problems.

13.
We show that a simple and elegant method of Bismut [J. Math. Analysis Appl., 44 (1973), pp. 384–404] for applying conjugate duality to convex problems of Bolza adapts directly to problems of utility maximization with portfolio constraints in mathematical finance. This gives a straightforward construction of an associated dual problem together with Euler–Lagrange and transversality relations, which are then used to establish existence of optimal portfolios in terms of solutions of the dual problem. The approach is completely synthetic, and does not require the rather difficult a priori hypothesis of a fictitious complete market for unconstrained optimization, which has been the standard approach for synthesizing optimal portfolios in problems of utility maximization with trading constraints. It also complements a duality synthesis of Rogers [Lecture Notes in Mathematics, No. LNM-1814, Springer-Verlag, New York, 2003, pp. 95–131] and Klein and Rogers [Math. Finance, 17 (2007), pp. 225–247] for general problems of utility maximization with market imperfections.

14.
In this paper the problem dual to a convex vector optimization problem is defined. Under suitable assumptions, weak, strong and strict converse duality theorems are proved. In the case of linear mappings the formulation of the dual is refined such that the well-known dual problems of Gale, Kuhn and Tucker [8] and of Isermann [12] are generalized by this approach.

15.
Portfolio optimization with linear and fixed transaction costs
We consider the problem of portfolio selection, with transaction costs and constraints on exposure to risk. Linear transaction costs, bounds on the variance of the return, and bounds on different shortfall probabilities are efficiently handled by convex optimization methods. For such problems, the globally optimal portfolio can be computed very rapidly. Portfolio optimization problems with transaction costs that include a fixed fee, or discount breakpoints, cannot be directly solved by convex optimization. We describe a relaxation method which yields an easily computable upper bound via convex optimization. We also describe a heuristic method for finding a suboptimal portfolio, which is based on solving a small number of convex optimization problems (and hence can be done efficiently). Thus, we produce a suboptimal solution, and also an upper bound on the optimal solution. Numerical experiments suggest that for practical problems the gap between the two is small, even for large problems involving hundreds of assets. The same approach can be used for related problems, such as that of tracking an index with a portfolio consisting of a small number of assets.
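One way the nonconvex fixed-fee cost can be relaxed, sketched here with my own toy constants (the paper's exact construction may differ): on a trading interval |u| ≤ U, the fixed-plus-linear cost is replaced by its convex envelope, which is a linear cost with an adjusted slope:

```python
# Fixed-plus-linear transaction cost and its convex envelope on |u| <= U.
# The envelope (lin + fix/U)*|u| lower-bounds the true cost, agrees with it
# at u = 0 and |u| = U, and is what a convex relaxation would optimize.

def cost(u, fix=1.0, lin=0.1, U=10.0):
    return (fix if u != 0 else 0.0) + lin * abs(u)

def envelope(u, fix=1.0, lin=0.1, U=10.0):
    return (lin + fix / U) * abs(u)

gap_at_half = cost(5.0) - envelope(5.0)   # envelope is strictly below inside
```

Optimizing with the envelope in place of the true cost yields the computable lower-level convex problem; evaluating the true cost at the resulting portfolio gives the feasible (suboptimal) solution the abstract mentions.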

16.
Summary  Necessary and sufficient conditions are derived which are fulfilled by quadratic functions that can be written as a product of two affine-linear functions plus an additive constant. This criterion characterizes the convex and nonconvex quadratic programming problems which can be solved by one of Swarup's [1966a, 1966b, 1966c] algorithms.
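A homogeneous special case of such a criterion can be stated as follows (my restatement for the pure quadratic-form part only, not the paper's exact conditions):

```latex
% When is a quadratic form a product of two linear forms?
x^{\top} Q x \;=\; (a^{\top} x)(b^{\top} x)\ \text{for some } a, b \in \mathbb{R}^{n}
\quad\Longleftrightarrow\quad
\operatorname{rank} Q \le 1,\ \text{or}\
\operatorname{rank} Q = 2 \text{ with one positive and one negative eigenvalue,}
```

where Q is the symmetric matrix of the form (Q = (ab^⊤ + ba^⊤)/2 in the factored case); the rank-2 direction follows by writing Q = λ₁uu^⊤ − λ₂vv^⊤ with λ₁, λ₂ > 0 and factoring the resulting difference of squares.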

17.
Solutions to non-convex variational problems typically exhibit enforced finer and finer oscillations, called microstructures, such that the infimal energy is not attained. Those oscillations are physically meaningful, but finite element approximations typically experience dramatic difficulty in their reproduction. The relaxation of the non-convex minimisation problem by (semi-)convexification leads to a macroscopic model for the effective energy. The resulting discrete macroscopic problem is degenerate in the sense that it is convex but not strictly convex. This paper discusses a modified discretisation obtained by adding a stabilisation term to the discrete energy. It is announced that, for a wide class of problems, this stabilisation technique leads to strong H1-convergence of the macroscopic variables even on unstructured triangulations. This is in contrast to the work [2] for quasi-uniform triangulations and enables the use of adaptive algorithms for the stabilised formulations. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

18.
Joydeep Dutta, TOP (2005) 13(2):185-279
During the early 1960s there was a growing realization that a large number of optimization problems which appeared in applications involved minimization of non-differentiable functions. One of the important areas where such problems appeared was optimal control. The subject of nonsmooth analysis arose out of the need to develop a theory to deal with the minimization of nonsmooth functions. The first impetus in this direction came with the publication of Rockafellar's seminal work titled Convex Analysis, published by Princeton University Press in 1970. It would be impossible to overstate the impact of this book on the development of the theory and methods of optimization. It is also important to note that a large part of convex analysis was already developed by Werner Fenchel nearly twenty years earlier and was circulated through his mimeographed lecture notes titled Convex Cones, Sets and Functions, Princeton University, 1951. In this article we trace the dramatic development of nonsmooth analysis and its applications to optimization in finite dimensions. Beginning with the fundamentals of convex optimization we quickly move over to the path-breaking work of Clarke, which extends the domain of nonsmooth analysis from convex to locally Lipschitz functions. Clarke was the second doctoral student of R.T. Rockafellar. We discuss the notions of the Clarke directional derivative and the Clarke generalized gradient, along with the relevant calculus rules and applications to optimization. While discussing locally Lipschitz optimization we also try to blend in the computational aspects of the theory wherever possible. This is followed by a discussion of the geometry of sets with nonsmooth boundaries. The approach used to develop the notion of the normal cone to an arbitrary set is sequential in nature and does not rely on the standard techniques of convex analysis.
The move away from convexity was pioneered by Mordukhovich and later culminated in the monograph Variational Analysis by Rockafellar and Wets. The approach of Mordukhovich relied on a nonconvex separation principle called the extremal principle, while that of Rockafellar and Wets relied on various convergence notions developed to suit the needs of optimization. We then move on to a parallel development in nonsmooth optimization due to Demyanov and Rubinov called quasidifferentiable optimization. They study the class of directionally differentiable functions whose directional derivatives can be represented as a difference of two sublinear functions. On the other hand, the directional derivative of a convex function, and also the Clarke directional derivative, are sublinear functions of the directions. Thus it was thought that the most useful generalizations of directional derivatives must be sublinear functions of the directions, and so Demyanov and Rubinov made a major conceptual change in nonsmooth optimization. In this section we define the notion of a quasidifferential, which is a pair of convex compact sets. We study some calculus rules and their applications to optimality conditions. We also study the interesting notion of the Demyanov difference between two sets and its applications to optimization. In the last section of this paper we study some second-order tools used in nonsmooth analysis and try to see their relevance in optimization. It is important to note that, unlike the classical case, the second-order theory of nonsmoothness is quite complicated in the sense that there are many approaches to it. We have chosen to describe those approaches which can be developed from the first-order nonsmooth tools discussed here. We present three different approaches, highlighting the second-order calculus rules and their applications to optimization.
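As a minimal computational companion to the convex part of this survey (a generic textbook example, not taken from the article): the subgradient method with diminishing steps, which is the basic algorithm for minimizing a nonsmooth convex function once a subgradient can be computed:

```python
# Subgradient method on the nonsmooth convex function f(x) = |x - 2|,
# with diminishing, non-summable steps 1/k and best-iterate tracking
# (the standard convergence guarantee is for the best iterate, since
# the method is not a descent method).

def subgradient_method(x=5.0, iters=2000):
    best = x
    for k in range(1, iters + 1):
        g = 1.0 if x > 2.0 else (-1.0 if x < 2.0 else 0.0)   # g in subdiff f(x)
        x -= g / k
        if abs(x - 2.0) < abs(best - 2.0):
            best = x
    return best

x_best = subgradient_method()   # best iterate approaches the minimizer 2
```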

19.
The convex feasibility problem asks to find a point in the intersection of finitely many closed convex sets in Euclidean space. This problem is of fundamental importance in the mathematical and physical sciences, and it can be solved algorithmically by the classical method of cyclic projections. In this paper, the case where one of the constraints is an obtuse cone is considered. Because the nonnegative orthant as well as the set of positive-semidefinite symmetric matrices form obtuse cones, we cover a large and substantial class of feasibility problems. Motivated by numerical experiments, the method of reflection-projection is proposed: it modifies the method of cyclic projections by replacing the projection onto the obtuse cone with the corresponding reflection. This new method is not covered by the standard frameworks of projection algorithms because of the reflection. The main result states that the method does converge to a solution whenever the underlying convex feasibility problem is consistent. As prototypical applications, we discuss in detail the implementation of two-set feasibility problems aiming to find a nonnegative [resp. positive semidefinite] solution to linear constraints in ℝ^n [resp. in S^n, the space of symmetric n×n matrices] and we report on numerical experiments. The behavior of the method for two inconsistent constraints is analyzed as well.
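The reflection-projection idea can be sketched on a tiny instance of the first prototypical application (nonnegative solution of linear equations; the instance is my own): the reflection of x across the nonnegative orthant is simply |x| (twice the projection minus x), and the projection onto the affine set Ax = b is available in closed form:

```python
import numpy as np

# Find x >= 0 with A x = b by alternating the REFLECTION across the
# nonnegative orthant (x -> |x|) with the exact projection onto Ax = b.

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
AAt_inv = np.linalg.inv(A @ A.T)

def project_affine(x):
    """Orthogonal projection onto {x : A x = b}."""
    return x - A.T @ (AAt_inv @ (A @ x - b))

x = np.array([-3.0, 2.0, 0.5])           # arbitrary starting point
for _ in range(200):
    x = project_affine(np.abs(x))        # reflect, then project

feasible = bool(np.all(x >= -1e-8)) and np.allclose(A @ x, b)
```

Once an iterate is nonnegative and satisfies Ax = b it is a fixed point of both maps, so the loop stabilizes at a solution; the paper's contribution is proving convergence of this scheme in general, which standard projection-method frameworks do not cover because of the reflection.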

20.
A mathematical programming problem is said to have separated nonconvex variables when the variables can be divided into two groups, x = (x_1,...,x_n) and y = (y_1,...,y_n), such that the objective function and every constraint function is the sum of a convex function of (x, y) jointly and a nonconvex function of x alone. A method is proposed for solving a class of such problems which includes Lipschitz optimization, reverse convex programming problems and also more general nonconvex optimization problems.
