20 related articles found (search time: 15 ms)
1.
《Optimization》2012,61(11):1331-1345
Li and Sun [D. Li and X.L. Sun, Existence of a saddle point in nonconvex constrained optimization, J. Global Optim. 21 (2001), pp. 39–50; D. Li and X.L. Sun, Convexification and existence of saddle point in a p-th-power reformulation for nonconvex constrained optimization, Nonlinear Anal. 47 (2001), pp. 5611–5622] established the existence of a global saddle point of the p-th power Lagrangian functions for constrained nonconvex optimization under second-order sufficiency conditions, together with the additional conditions that the feasible set is compact and that the global solution of the primal problem is unique. In this article, it is shown that the same results can be obtained under alternative assumptions that require neither the compactness of the feasible set nor the uniqueness of the global solution of the primal problem.
2.
Local and global saddle point conditions for a general augmented Lagrangian function proposed by Mangasarian are investigated in this paper for inequality- and equality-constrained nonconvex optimization problems. Under second-order sufficiency conditions, it is proved that the augmented Lagrangian admits a local saddle point, without requiring the strict complementarity condition. The existence of a global saddle point is then obtained under additional assumptions that require neither the compactness of the feasible set nor the uniqueness of the global solution of the original problem.
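To make the saddle-point machinery concrete, the sketch below runs the classical Hestenes–Powell–Rockafellar augmented Lagrangian method (not Mangasarian's specific function) on a hypothetical one-dimensional problem, min (x − 2)² subject to x − 1 ≤ 0, whose saddle point is (x*, λ*) = (1, 2); all names and parameters are illustrative:

```python
def augmented_lagrangian_demo(rho=10.0, outer_iters=20, inner_iters=200, lr=0.01):
    """Classical augmented Lagrangian method (sketch) for the toy problem
       min (x - 2)**2  s.t.  x - 1 <= 0,  with saddle point (x*, lam*) = (1, 2)."""
    f_grad = lambda x: 2.0 * (x - 2.0)        # gradient of the objective
    g = lambda x: x - 1.0                     # inequality constraint g(x) <= 0
    x, lam = 0.0, 0.0
    for _ in range(outer_iters):
        for _ in range(inner_iters):          # approximately minimize L_rho(., lam) in x
            mult = max(0.0, lam + rho * g(x)) # active multiplier estimate
            x -= lr * (f_grad(x) + mult)      # d/dx of the augmented Lagrangian (g'(x) = 1)
        lam = max(0.0, lam + rho * g(x))      # first-order multiplier update
    return x, lam
```

The outer multiplier update converges to λ* = 2 even though no strict complementarity or compactness assumption is used in the toy setup.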
3.
Zero duality gap for a class of nonconvex optimization problems (cited by: 8; self-citations: 0, citations by others: 8)
D. Li 《Journal of Optimization Theory and Applications》1995,85(2):309-324
By an equivalent transformation using the p-th power of the objective function and the constraint, a saddle point can be generated for a general class of nonconvex optimization problems. A zero duality gap is thus guaranteed when the primal-dual method is applied to the constructed equivalent form. The author very much appreciates the comments from Prof. Douglas J. White.
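As a sketch of the transformation this abstract refers to (assuming f and the g_i are positive on the feasible region), the p-th power reformulation can be written as:

```latex
% p-th power reformulation (sketch): for f(x) > 0 and g_i(x) > 0,
% the problem  min f(x)  s.t.  g_i(x) \le b_i  is equivalent to
\min_x \; [f(x)]^p
\quad \text{s.t.} \quad [g_i(x)]^p \le b_i^p, \qquad i = 1,\dots,m.
% For sufficiently large p, a saddle point of the associated Lagrangian
%   L_p(x,\mu) = [f(x)]^p + \sum_{i=1}^m \mu_i \bigl([g_i(x)]^p - b_i^p\bigr)
% can exist even though the original problem is nonconvex.
```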
4.
Local saddle point and a class of convexification methods for nonconvex optimization problems (cited by: 1; self-citations: 1, citations by others: 0)
A class of general transformation methods is proposed to convert a nonconvex optimization problem into another, equivalent problem. It is shown that under certain assumptions the existence of a local saddle point, or local convexity of the Lagrangian function of the equivalent problem (EP), can be guaranteed. Numerical experiments are given to demonstrate the main results geometrically.
5.
In this article, by introducing a class of nonlinear separation functions, the image space analysis is employed to investigate a class of constrained optimization problems. Furthermore, the equivalence between the existence of a nonlinear separation function and a saddle point condition for a generalized Lagrangian function associated with the given problem is proved.
6.
In this paper, we develop sufficient conditions for the existence of local and global saddle points of two classes of augmented Lagrangian functions for nonconvex optimization problems with both equality and inequality constraints, which improve the corresponding results in the literature. The main feature of our sufficient condition for the existence of global saddle points is that we do not need the uniqueness of the optimal solution. Furthermore, we show that the existence of global saddle points is a necessary and sufficient condition for the exact penalty representation in the framework of augmented Lagrangians. Based on these results, we convert a class of generalized semi-infinite programming problems into standard semi-infinite programming problems via augmented Lagrangians. Some new first-order optimality conditions are also discussed.
This research was supported by the National Natural Science Foundation of P.R. China (Grant No. 10571106 and No. 10701047).
7.
Guoyin Li, Tianxiang Liu, Ting Kei Pong 《Computational Optimization and Applications》2017,68(2):407-436
We study the applicability of the Peaceman–Rachford (PR) splitting method for solving nonconvex optimization problems. When applied to minimizing the sum of a strongly convex Lipschitz differentiable function and a proper closed function, we show that if the strongly convex function has a large enough strong convexity modulus and the step-size parameter is chosen below a computable threshold, then any cluster point of the sequence generated, if one exists, gives a stationary point of the optimization problem. We also give sufficient conditions guaranteeing boundedness of the sequence generated. We then discuss one way to split the objective so that the proposed method can be suitably applied to solving optimization problems with a coercive objective that is the sum of a (not necessarily strongly) convex Lipschitz differentiable function and a proper closed function; this setting covers a large class of nonconvex feasibility problems and constrained least squares problems. Finally, we illustrate the proposed algorithm numerically.
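As a minimal illustration of the setting studied here (strongly convex Lipschitz-differentiable f plus proper closed g), the sketch below applies Peaceman–Rachford splitting to the hypothetical scalar problem min_x ½(x − 1)² + 0.3·|x|, whose minimizer is x* = 0.7; the function names and parameter values are illustrative, not taken from the paper:

```python
def soft_threshold(v, t):
    """Prox of t*|.| : shrink v toward zero by t."""
    return max(v - t, 0.0) + min(v + t, 0.0)

def peaceman_rachford(a=1.0, mu=0.3, gamma=0.5, iters=60):
    """PR splitting for  min_x 0.5*(x - a)**2 + mu*|x|.
    f(x) = 0.5*(x - a)**2 is strongly convex with Lipschitz gradient,
    g(x) = mu*|x| is proper and closed, matching the paper's setting."""
    prox_f = lambda v: (v + gamma * a) / (1.0 + gamma)  # prox of gamma*f (closed form)
    prox_g = lambda v: soft_threshold(v, gamma * mu)    # prox of gamma*g
    z = 0.0
    for _ in range(iters):
        x = prox_f(z)
        y = prox_g(2.0 * x - z)   # reflect through f, then prox of g
        z = z + 2.0 * (y - x)     # full (unrelaxed) PR update
    return prox_f(z)
```

With the chosen step-size the reflected map is a contraction, so the iterates converge to x* = 0.7; in the nonconvex regime analyzed in the paper only cluster-point stationarity is guaranteed.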
8.
We present in this paper new sufficient conditions for verifying a zero duality gap in nonconvex quadratically/linearly constrained quadratic programs (QP). Based on a saddle point condition and the conic duality theorem, we first derive a sufficient condition for a zero duality gap between a quadratically constrained QP and its Lagrangian dual or SDP relaxation. We then use a distance measure to characterize the duality gap for nonconvex QP with linear constraints. We show that this distance can be computed via a cell enumeration technique from discrete geometry. Finally, we revisit two sufficient optimality conditions in the literature for two classes of nonconvex QPs and show that these conditions actually imply a zero duality gap.
9.
10.
Hoang Tuy 《Journal of Global Optimization》1992,2(1):21-40
We show the importance of exploiting the complementary convex structure for efficiently solving a wide class of specially structured nonconvex global optimization problems. Roughly speaking, a specific feature of these problems is that their nonconvex nucleus can be transformed into a complementary convex structure, which can then be shifted to a subspace of much lower dimension than the original underlying space. This approach leads to quite efficient algorithms for many problems of practical interest, including linear and convex multiplicative programming problems, concave minimization problems with few nonlinear variables, and bilevel linear optimization problems.
11.
Hoang Tuy 《Journal of Global Optimization》2010,47(3):485-501
For solving global optimization problems with nonconvex feasible sets, existing methods compute an approximate optimal solution which is not guaranteed to be close, within a given tolerance, to the actual optimal solution, nor even to be feasible. To overcome these limitations, a robust solution approach is proposed that can be applied to a wide class of problems called D(C)-optimization problems. DC optimization and monotonic optimization are particular cases of D(C)-optimization, so this class includes virtually every nonconvex global optimization problem of interest. The approach is a refinement and extension of an earlier version proposed for DC and monotonic optimization.
12.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem where the first-order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
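The combined problem described in this abstract can be sketched as follows, where V(x) denotes the (generally nonsmooth) value function of the lower level; the notation is illustrative:

```latex
% Simple bilevel program:  min_{x,y} F(x,y)  s.t.  y solves  min_{y' \in Y} f(x,y').
% Combined reformulation (sketch): keep both the value-function constraint
% and the lower-level first-order condition in the constraint set,
\min_{x,\, y \in Y} \; F(x,y)
\quad \text{s.t.} \quad
f(x,y) \le V(x), \qquad 0 \in \nabla_y f(x,y) + N_Y(y),
% where  V(x) = \min_{y' \in Y} f(x,y')  and  N_Y(y)  is the normal cone to Y at y.
```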
13.
The paper is a manifestation of the fundamental importance of the linear program with linear complementarity constraints (LPCC) in disjunctive and hierarchical programming as well as in some novel paradigms of mathematical programming. In addition to providing a unified framework for bilevel and inverse linear optimization, nonconvex piecewise linear programming, indefinite quadratic programs, quantile minimization, and ℓ₀-minimization, the LPCC provides a gateway to the mathematical program with equilibrium constraints, which is itself an important class of constrained optimization problems with broad applications. We describe several approaches for the global resolution of the LPCC, including a logical Benders approach that can be applied to problems that may be infeasible or unbounded.
14.
Francisco Guerra-Vázquez, Jan-J. Rückmann, Ralf Werner 《Journal of Global Optimization》2012,54(3):433-447
In this paper we apply two convexification procedures to the Lagrangian of a nonconvex semi-infinite programming problem. Under the reduction approach it is shown that, locally around a local minimizer, this problem can be transformed equivalently in such a way that the transformed Lagrangian fulfills saddle point optimality conditions, where for the first procedure both the original objective function and constraints (and for the second procedure only the constraints) are substituted by their p-th powers with sufficiently large power p. These results allow local duality theory and corresponding numerical methods (e.g. dual search) to be applied to a broader class of nonconvex problems.
15.
16.
Zhiqing Meng, Rui Shen, Chuangyin Dang, Min Jiang 《Numerical Functional Analysis & Optimization》2013,34(11):1471-1492
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality-constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming, the saddle point exists when the Karush-Kuhn-Tucker condition holds. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. A necessary and sufficient stability condition for the modified augmented Lagrangian objective penalty function to be exact at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality-constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions.
17.
18.
In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs), which shows the one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA), which exploits the locality of constraints in nonlinear optimization problems. CPSA achieves much lower complexity than CSA by partitioning the constraints of a problem into significantly simpler subproblems, solving each independently, and resolving the violated global constraints across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. This result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to discrete constrained optimization. Moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.
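A drastically simplified sketch of the descent/ascent idea (not the full ESP theory): alternate probabilistic descent in the variable with probabilistic ascent in a penalty multiplier, here on the hypothetical discrete problem min x² over x ∈ {0, …, 10} subject to x ≥ 3; all names and constants are illustrative:

```python
import math
import random

def constrained_sa(seed=0, iters=4000):
    """Toy sketch of CSA-style search: probabilistic descent in x and
    probabilistic ascent in the multiplier lam on
        L(x, lam) = f(x) + lam * violation(x).
    Hypothetical problem: min x**2 over x in {0,...,10} s.t. x >= 3."""
    rng = random.Random(seed)
    f = lambda x: x * x
    viol = lambda x: max(0, 3 - x)            # amount of constraint violation
    L = lambda x, lam: f(x) + lam * viol(x)
    x, lam, T = 10, 1.0, 10.0
    best = x                                  # best feasible point seen (x=10 is feasible)
    for _ in range(iters):
        if rng.random() < 0.5:                # descent proposal in the variable
            xn = min(10, max(0, x + rng.choice((-1, 1))))
            d = L(xn, lam) - L(x, lam)
            if d <= 0 or rng.random() < math.exp(-d / T):
                x = xn
        else:                                 # ascent proposal in the multiplier
            ln = max(0.0, lam + rng.choice((-0.5, 0.5)))
            d = L(x, ln) - L(x, lam)
            if d >= 0 or rng.random() < math.exp(d / T):
                lam = ln
        if viol(x) == 0 and f(x) < f(best):
            best = x
        T *= 0.999                            # geometric cooling
    return best
```

The ascent direction on the multiplier pushes the search back toward feasibility, mimicking the penalty-subspace ascents that CSA performs; the sketch returns the best feasible point visited.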
19.
It is shown that, for very general classes of nonconvex global optimization problems, the duality gap obtained by solving a corresponding Lagrangian dual is reduced to zero in the limit when combined with suitably refined partitioning of the feasible set. A similar result holds for partly convex problems where exhaustive partitioning is applied only in the space of nonconvex variables. Applications include branch-and-bound approaches for linearly constrained problems where convex envelopes can be computed, certain generalized bilinear problems, linearly constrained optimization of the sum of ratios of affine functions, and concave minimization under reverse convex constraints.
20.
Yixin Zhao, Torbjörn Larsson, Elina Rönnberg, Panos M. Pardalos 《Journal of Global Optimization》2018,70(3):517-549
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles that provide inexact information. The errors in function and subgradient evaluations might be unknown, but are bounded. To handle the nonconvexity, we first use the redistributed idea, and further account for the difficulties introduced by the inexactness of the available information. We also employ a modified improvement function to handle the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experiences are provided.