Similar Articles
20 similar articles found.
1.
Existence theorems for saddle points of vector-valued maps
In this paper, we prove some new existence theorems for loose saddle points and for saddle points of set-valued maps or vector-valued functions. These theorems generalize the corresponding results of Tanaka and those of Luc and Vargas via different proofs. The authors would like to thank two referees for their careful reading of the paper and helpful comments.

2.
Drawing on the ideas of the alternating direction method of multipliers (ADMM) and sequential quadratic programming (SQP), this work develops a new and efficient algorithm for linearly constrained two-block nonconvex optimization. First, within the SQP framework, the ADMM idea is introduced into the solution of the quadratic programming (QP) subproblem, which is decomposed into two mutually independent small-scale QPs. Second, a new iterate for the primal variables is generated with the help of the augmented Lagrangian function and an Armijo line search. Finally, the dual variables are updated by an explicit analytic formula. A new ADMM-SQP algorithm is thereby constructed. Under relatively weak conditions, global convergence of the algorithm in the usual sense is analyzed, and preliminary numerical experiments are reported.
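The abstract outlines a concrete iteration. A minimal sketch of that structure, written for the two-block model problem min f(x) + g(y) subject to Ax + By = b, is given below; the identity-Hessian quadratic models, the step-size constants, and all function names are assumptions of this sketch, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch of one iteration of an ADMM-SQP scheme for the
# two-block linearly constrained problem
#     min f(x) + g(y)   s.t.   A x + B y = b.
# Following the abstract: the SQP quadratic subproblem is split, ADMM-style,
# into two independent small QPs (here with identity Hessians, so each has a
# closed-form solution), the primal point is updated via an Armijo search on
# the augmented Lagrangian, and the multiplier is updated explicitly.
# All names and parameter choices are assumptions for illustration only.

def aug_lag(f, g, A, B, b, x, y, lam, rho):
    r = A @ x + B @ y - b
    return f(x) + g(y) - lam @ r + 0.5 * rho * (r @ r)

def admm_sqp_step(f, g, grad_f, grad_g, A, B, b, x, y, lam, rho=1.0):
    r = A @ x + B @ y - b
    # Two independent QP subproblems with identity Hessians: each reduces to
    # a gradient step on its block of the augmented Lagrangian.
    dx = -(grad_f(x) - A.T @ lam + rho * A.T @ r)   # x-block direction
    dy = -(grad_g(y) - B.T @ lam + rho * B.T @ r)   # y-block direction

    # Armijo line search on the augmented Lagrangian.
    t, sigma, beta = 1.0, 1e-4, 0.5
    base = aug_lag(f, g, A, B, b, x, y, lam, rho)
    slope = dx @ dx + dy @ dy
    while aug_lag(f, g, A, B, b, x + t * dx, y + t * dy, lam, rho) > base - sigma * t * slope:
        t *= beta
    x_new, y_new = x + t * dx, y + t * dy

    # Explicit multiplier update.
    lam_new = lam - rho * (A @ x_new + B @ y_new - b)
    return x_new, y_new, lam_new
```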

3.
This article presents a novel neural network (NN) based on an NCP function for solving the nonconvex nonlinear optimization (NCNO) problem subject to nonlinear inequality constraints. We first apply p-power convexification to the Lagrangian function of the NCNO problem. The proposed NN is a gradient model constructed from an NCP function and an unconstrained minimization problem. The main feature of this NN is that its equilibrium point coincides with the optimal solution of the original problem. Under a proper assumption and utilizing a suitable Lyapunov function, it is shown that the proposed NN is Lyapunov stable and convergent to an exact optimal solution of the original problem. Finally, simulation results on two numerical examples and two practical examples are given to show the effectiveness and applicability of the proposed NN. © 2015 Wiley Periodicals, Inc. Complexity 21: 130–141, 2016
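As a rough illustration of how a gradient-model network can be built from an NCP function, the sketch below rewrites the KKT system of an inequality-constrained problem with the Fischer-Burmeister function and integrates the negative gradient of the resulting energy by explicit Euler steps. The choice of NCP function, the energy, the finite-difference gradient, and the omission of the p-power convexification step are all simplifying assumptions of this sketch, not the construction of the article.

```python
import numpy as np

# Rough sketch of a gradient-model network built from an NCP function for
#   min f(x)  s.t.  c(x) <= 0.
# The KKT system is rewritten with the Fischer-Burmeister function
#   phi(a, b) = sqrt(a^2 + b^2) - a - b,
# an energy E = 0.5 * ||Phi||^2 is formed, and the state follows the gradient
# flow  dz/dt = -grad E(z), discretized by explicit Euler steps with a
# finite-difference gradient (kept only to make the sketch short).

def fb(a, b):
    return np.sqrt(a**2 + b**2) - a - b

def gradient_flow(grad_f, c, jac_c, x0, u0, dt=1e-3, steps=20000):
    n = np.size(x0)
    z = np.concatenate([np.asarray(x0, float), np.asarray(u0, float)])

    def energy(zz):
        x, u = zz[:n], zz[n:]
        stat = grad_f(x) + jac_c(x).T @ u      # stationarity residual
        comp = fb(u, -c(x))                    # complementarity via FB function
        r = np.concatenate([stat, comp])
        return 0.5 * (r @ r)

    h = 1e-6
    for _ in range(steps):
        g = np.zeros_like(z)
        for i in range(z.size):                # central-difference gradient
            e = np.zeros_like(z)
            e[i] = h
            g[i] = (energy(z + e) - energy(z - e)) / (2 * h)
        z = z - dt * g                         # Euler step along -grad E
    return z[:n], z[n:]
```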

4.
J. Dutta, TOP, 2005, 13(1): 127-143
In this article we study approximate optimality in the setting of a Banach space. We study various solution concepts existing in the literature and develop very general necessary optimality conditions in terms of limiting subdifferentials. We also study saddle point conditions and relate them to various solution concepts. Part of this research was carried out while the author was a post-doctoral fellow at UAB, Barcelona, supported by Grant No. SB99-B0771103B of the Spanish Ministry of Education and Culture. The hospitality and facilities provided at CODE, UAB are gratefully acknowledged.

5.
For vector-valued functions, cone saddle points are defined, and some existence theorems for them are established in infinite-dimensional spaces. Most of our results rely on Condition 2.1, which was given by Sterna-Karwat. In particular, it shows that the scalarization of vector-valued functions plays an important role in this condition. Some interesting examples in infinite-dimensional spaces are presented. Moreover, necessary conditions for the existence of cone saddle points are investigated. The author would like to thank the referees for their valuable suggestions on the original draft.

6.
A class of general transformation methods is proposed to convert a nonconvex optimization problem into another equivalent problem. It is shown that under certain assumptions the existence of a local saddle point or local convexity of the Lagrangian function of the equivalent problem (EP) can be guaranteed. Numerical experiments are given to demonstrate the main results geometrically.

7.
Zero duality gap for a class of nonconvex optimization problems
By an equivalent transformation using the pth power of the objective function and the constraint, a saddle point can be generated for a general class of nonconvex optimization problems. Zero duality gap is thus guaranteed when the primal-dual method is applied to the constructed equivalent form. The author very much appreciates the comments from Prof. Douglas J. White.
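For concreteness, one common way of stating such a pth power transformation is sketched below; the shift constants c_0 and c_j are assumed large enough to make the shifted functions positive on the feasible region, and the exact construction used in the paper may differ.

```latex
% Illustrative pth power reformulation (a sketch, not the paper's exact form).
\begin{aligned}
\text{(P)}\;   & \min_{x\in X} \; f(x) & &\text{s.t. } g_j(x)\le b_j,\; j=1,\dots,m,\\
\text{(P}_p)\; & \min_{x\in X} \; \bigl[f(x)+c_0\bigr]^{p} & &\text{s.t. } \bigl[g_j(x)+c_j\bigr]^{p}\le\bigl[b_j+c_j\bigr]^{p},\; j=1,\dots,m .
\end{aligned}
```

Both problems have the same feasible set and the same minimizers; the point of the abstract is that, for p sufficiently large, the Lagrangian of the transformed problem admits a saddle point, so the primal-dual method applied to it closes the duality gap.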

8.
Optimization, 2012, 61(3): 403-419
In this article, the application of the electromagnetism-like method (EM) for solving constrained optimization problems is investigated. A number of penalty functions have been tested with EM in this investigation, and their merits and demerits have been discussed. We have also provided motivations for such an investigation. Finally, we have compared EM with two recent global optimization algorithms from the literature. We have shown that EM is a suitable alternative to these methods and that it has a role to play in solving constrained global optimization problems.

9.
We classify in this paper different augmented Lagrangian functions into three unified classes. Based on two unified formulations, we construct, respectively, two convergent augmented Lagrangian methods that do not require the global solvability of the Lagrangian relaxation and whose global convergence properties do not require the boundedness of the multiplier sequence or any constraint qualification. In particular, when the sequence of iteration points does not converge, we give a sufficient and necessary condition for the convergence of the objective value of the iteration points. We further derive two multiplier algorithms which require the same convergence condition and possess the same properties as the proposed convergent augmented Lagrangian methods. The existence of a global saddle point is crucial to guarantee the success of a dual search. We generalize in the second half of this paper the existence theorems for a global saddle point in the literature under the framework of the unified classes of augmented Lagrangian functions.
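As one familiar member of such classes (shown only as an example; the unified formulations in the paper are more general), the classical Hestenes-Powell-Rockafellar augmented Lagrangian for min f(x) subject to g_j(x) <= 0 can be written as

```latex
% Classical quadratic augmented Lagrangian for inequality constraints,
% shown as one familiar member of the broader classes discussed above.
L_r(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2r}\sum_{j=1}^{m}
   \Bigl( \bigl[\max\{0,\; \lambda_j + r\, g_j(x)\}\bigr]^{2} - \lambda_j^{2} \Bigr),
\qquad r > 0 .
```

A pair (x*, λ*) is a global saddle point of L_r when L_r(x*, λ) ≤ L_r(x*, λ*) ≤ L_r(x, λ*) for all x and all λ ≥ 0, which is exactly the property whose existence the second half of the paper characterizes.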

10.
In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs) that shows the one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA) that exploits the locality of constraints in nonlinear optimization problems. CPSA leads to much lower complexity as compared to that of CSA by partitioning the constraints of a problem into significantly simpler subproblems, solving each independently, and resolving those violated global constraints across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. The result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to that in discrete constrained optimization. Moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to solve some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.
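A heavily simplified sketch of the CSA search scheme described above is given below: probabilistic descents in the variable subspace and probabilistic ascents in the penalty subspace of a penalty function L(x, λ) = f(x) + λᵀ|h(x)|. The neighborhood moves, acceptance rules, and cooling schedule are generic simulated-annealing choices of this sketch, not the specific ones analyzed in the paper.

```python
import numpy as np

# Simplified sketch of constrained simulated annealing (CSA) on the penalty
# function  L(x, lam) = f(x) + lam . |h(x)|  for  min f(x) s.t. h(x) = 0:
# probabilistic descents in the variable subspace x and probabilistic
# ascents in the penalty (multiplier) subspace lam, with geometric cooling.
# All concrete choices here are generic SA defaults, not the paper's.

def csa(f, h, x0, lam0, T0=1.0, cooling=0.99, iters=20000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, lam, T = np.array(x0, float), np.array(lam0, float), T0
    L = lambda x_, lam_: f(x_) + lam_ @ np.abs(h(x_))
    for _ in range(iters):
        if rng.random() < 0.5:
            # Trial move in the variable subspace: accept descents always,
            # ascents with Metropolis probability exp(-dL / T).
            x_try = x + rng.normal(scale=0.1, size=x.shape)
            dL = L(x_try, lam) - L(x, lam)
            if dL <= 0 or rng.random() < np.exp(-dL / T):
                x = x_try
        else:
            # Trial move in the penalty subspace: accept ascents always,
            # descents with probability exp(dL / T) (mirror of the above).
            lam_try = np.maximum(0.0, lam + rng.normal(scale=0.1, size=lam.shape))
            dL = L(x, lam_try) - L(x, lam)
            if dL >= 0 or rng.random() < np.exp(dL / T):
                lam = lam_try
        T *= cooling
    return x, lam
```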

11.
We establish the following theorems: (i) an existence theorem for weak type generalized saddle points; (ii) an existence theorem for strong type generalized saddle points; (iii) a generalized minimax theorem for a vector-valued function. These theorems are generalizations and extensions of the author's recent results. For such extensions, we propose new concepts of convexity and continuity of vector-valued functions, which are weaker than ordinary ones. Some of the proofs are based on a few key observations and also on the Browder coincidence theorem or the Tychonoff fixed-point theorem. Also, the minimax theorem follows from the existence theorem for weak type generalized saddle points. The main spaces considered are real locally convex spaces and real ordered topological vector spaces. This paper is dedicated to Professor Kensuke Tanaka on his sixtieth birthday. This paper was written when the author was a visitor at the Department of Mathematical Science, Graduate School of Science and Technology, Niigata University, Niigata, Japan. The author is indebted to Prof. K. Tanaka for suggesting this work. The author is very grateful to Prof. P. L. Yu for his useful encouragement and suggestions and to the referees for their valuable suggestions and comments.

12.
A trust region algorithm for equality constrained optimization
A trust region algorithm for equality constrained optimization is proposed that employs a differentiable exact penalty function. Under certain conditions global convergence and local superlinear convergence results are proved.

13.
In this paper we adopt and generalize the basic idea of the method presented in [3] and [4] to construct test problems that involve arbitrary, not necessarily quadratic, concave functions, for both Concave Minimization and Reverse Convex Programs.

14.
In Floudas and Visweswaran (1990), a new global optimization algorithm (GOP) was proposed for solving constrained nonconvex problems involving quadratic and polynomial functions in the objective function and/or constraints. In this paper, the application of this algorithm to the special case of polynomial functions of one variable is discussed. The special nature of polynomial functions enables considerable simplification of the GOP algorithm. The primal problem is shown to reduce to a simple function evaluation, while the relaxed dual problem is equivalent to the simultaneous solution of two linear equations in two variables. In addition, the one-to-one correspondence between the x and y variables in the problem enables the iterative improvement of the bounds used in the relaxed dual problem. The simplified approach is illustrated through a simple example that shows the significant improvement in the underestimating function obtained from the application of the modified algorithm. The application of the algorithm to several unconstrained and constrained polynomial function problems is demonstrated.

15.
Matyas' random optimization method (Ref. 1) is applied to the constrained nonlinear minimization problem, and its convergence properties are studied. It is shown that the global minimum can be found with probability one, even if the performance function is multimodal (has several local minima) and even if its differentiability is not ensured. The author would like to thank Professors Y. Sawaragi (Kyoto University), T. Soeda (Tokushima University), and T. Shoman (Tokushima University) for their kind advice.
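A compact sketch of random optimization in the spirit described above follows; the Gaussian perturbation scale, the feasibility test, and the iteration budget are illustrative choices rather than the paper's setting.

```python
import numpy as np

# Sketch of Matyas-style random optimization for  min f(x)  s.t.  x in S:
# perturb the current point with Gaussian noise and move only when the trial
# point is feasible and improves the objective.  No gradients and no
# unimodality are needed, which matches the setting of the abstract.

def random_optimization(f, feasible, x0, sigma=0.1, iters=100000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, float)
    fx = f(x)
    for _ in range(iters):
        x_try = x + rng.normal(scale=sigma, size=x.shape)
        if feasible(x_try) and f(x_try) < fx:   # accept only feasible improvements
            x, fx = x_try, f(x_try)
    return x, fx

# Example: minimize a multimodal function on the box [-5, 5]^2.
f = lambda x: np.sum(x**2) + 10 * np.sum(np.cos(3 * x))
feasible = lambda x: np.all(np.abs(x) <= 5.0)
# x_best, f_best = random_optimization(f, feasible, x0=[4.0, -4.0])
```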

16.
The paper analyzes the rate of local convergence of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Under the constraint nondegeneracy condition and the strong second-order sufficient condition, we demonstrate that the sequence of iterates generated by the augmented Lagrangian method locally converges to a local minimizer at a linear rate whose ratio constant is proportional to 1/τ, where the penalty parameter τ is not less than a threshold. Importantly and interestingly enough, the analysis does not require the strict complementarity condition. Supported by the National Natural Science Foundation of China under Project 10771026 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

17.
The iterative Uzawa method with a modified Lagrangian functional is examined in the framework of the Signorini problem.
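For orientation, the classical Uzawa iteration for min_u J(u) subject to g(u) ≤ 0, with Lagrangian L(u, λ) = J(u) + (λ, g(u)), alternates a primal minimization with a projected multiplier ascent; the modified Lagrangian functional examined in the paper replaces L, and only this generic form is shown here:

```latex
% Classical Uzawa iteration (generic form, for reference only).
u^{k+1} = \arg\min_{u}\,\mathcal{L}(u,\lambda^{k}), \qquad
\lambda^{k+1} = \Pi_{+}\!\bigl(\lambda^{k} + \rho\, g(u^{k+1})\bigr), \qquad \rho > 0,
```

where Π₊ denotes projection onto the nonnegative cone of multipliers.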

18.
Geometric consideration of duality in vector optimization
Recently, duality in vector optimization has been attracting the interest of many researchers. In order to derive duality in vector optimization, it seems natural to introduce some vector-valued Lagrangian functions with matrix (or linear operator, in some cases) multipliers. This paper gives an insight into the geometry of vector-valued Lagrangian functions and duality in vector optimization. It is observed that supporting cones for convex sets play a key role, as well as supporting hyperplanes, traditionally used in single-objective optimization. The author would like to express his sincere gratitude to Prof. T. Tanino of Tohoku University and to some anonymous referees for their valuable comments.

19.
Mangasarian and Solodov (Ref. 1) proposed to solve nonlinear complementarity problems by seeking the unconstrained global minima of a new merit function, which they called implicit Lagrangian. A crucial point in such an approach is to determine conditions which guarantee that every unconstrained stationary point of the implicit Lagrangian is a global solution, since standard unconstrained minimization techniques are only able to locate stationary points. Some authors partially answered this question by giving sufficient conditions which guarantee this key property. In this paper, we settle the issue by giving a necessary and sufficient condition for a stationary point of the implicit Lagrangian to be a global solution and, hence, a solution of the nonlinear complementarity problem. We show that this new condition easily allows us to recover all previous results and to establish new sufficient conditions. We then consider a constrained reformulation based on the implicit Lagrangian in which nonnegativity constraints on the variables are added to the original unconstrained reformulation. This is motivated by the fact that often, in applications, the function which defines the complementarity problem is defined only on the nonnegative orthant. We consider the KKT-points of this new reformulation and show that the same necessary and sufficient condition which guarantees, in the unconstrained case, that every unconstrained stationary point is a global solution, also guarantees that every KKT-point of the new problem is a global solution.
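One standard statement of the Mangasarian-Solodov implicit Lagrangian for NCP(F) (find x ≥ 0 with F(x) ≥ 0 and xᵀF(x) = 0) is reproduced below for orientation; it should be checked against Ref. 1 for the precise definition used in the paper.

```latex
% Implicit Lagrangian for the NCP (one standard form from the literature).
M_\alpha(x) \;=\; x^{\top}F(x) \;+\; \frac{1}{2\alpha}\Bigl(
   \bigl\|\,[x - \alpha F(x)]_{+}\bigr\|^{2} - \|x\|^{2}
 + \bigl\|\,[F(x) - \alpha x]_{+}\bigr\|^{2} - \|F(x)\|^{2} \Bigr),
\qquad \alpha > 1 .
```

In this form M_α is nonnegative and vanishes exactly at solutions of the complementarity problem, which is why conditions forcing its stationary points (or the KKT-points of the nonnegatively constrained reformulation) to be global minimizers are the central issue.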

20.
We present a branch and cut algorithm that yields, in finite time, a globally ε-optimal solution (with respect to feasibility and optimality) of the nonconvex quadratically constrained quadratic programming problem. The idea is to estimate all quadratic terms by successive linearizations within a branching tree using Reformulation-Linearization Techniques (RLT). To do so, four classes of linearizations (cuts), depending on one to three parameters, are detailed. For each class, we show how to select the best member with respect to a precise criterion. The cuts introduced at any node of the tree are valid in the whole tree, and not only within the subtree rooted at that node. In order to enhance the computational speed, the structure created at any node of the tree is flexible enough to be used at other nodes. Computational results are reported that include standard test problems taken from the literature. Some of these problems are solved for the first time with a proof of global optimality. Received December 19, 1997 / Revised version received July 26, 1999 / Published online November 9, 1999
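As an illustration of the kind of linearization involved (a generic RLT bound-factor construction rather than the four parameterized classes of the paper): each product x_i x_j is replaced by a new variable w_{ij}, and multiplying pairs of the bound factors x_i − l_i ≥ 0 and u_i − x_i ≥ 0 before linearizing yields valid cuts such as

```latex
% Generic RLT / McCormick bound-factor cuts for w_{ij} standing in for x_i x_j
% on the box  l <= x <= u.
\begin{aligned}
w_{ij} &\ge l_j x_i + l_i x_j - l_i l_j, & w_{ij} &\ge u_j x_i + u_i x_j - u_i u_j,\\
w_{ij} &\le u_j x_i + l_i x_j - l_i u_j, & w_{ij} &\le l_j x_i + u_i x_j - u_i l_j,
\end{aligned}
```

which remain valid throughout the branching tree on the box l ≤ x ≤ u.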
