Similar articles
20 similar articles found (search time: 275 ms)
1.
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove that the dual properties of the augmented Lagrangian objective penalty function are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker (KKT) condition; in particular, for convex programming the KKT condition implies the existence of such a saddle point. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the modified augmented Lagrangian objective penalty function is exact, its saddle point exists. A necessary and sufficient stability condition for the modified augmented Lagrangian objective penalty function to be exact at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions.
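The abstract does not reproduce the specific augmented Lagrangian objective penalty function studied in the paper. As background, the sketch below shows the classical Powell–Hestenes–Rockafellar (PHR) augmented Lagrangian for inequality constraints together with the usual first-order multiplier update; the function names and the toy problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def phr_aug_lagrangian(f, g, x, lam, rho):
    """Classical PHR augmented Lagrangian for  min f(x)  s.t.  g_i(x) <= 0:
    L_rho(x, lam) = f(x) + (1/(2 rho)) * sum_i [ max(0, lam_i + rho g_i(x))^2 - lam_i^2 ].
    """
    gx = np.asarray(g(x), dtype=float)
    shifted = np.maximum(0.0, lam + rho * gx)
    return f(x) + np.sum(shifted**2 - lam**2) / (2.0 * rho)

# Toy problem (illustrative only): min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0.
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: np.array([1.0 - x[0] - x[1]])

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
for _ in range(10):
    # inner minimization in x, then first-order multiplier update
    x = minimize(lambda z: phr_aug_lagrangian(f, g, z, lam, rho), x).x
    lam = np.maximum(0.0, lam + rho * np.asarray(g(x)))
print(x)  # approaches the optimum (0.5, 0.5)
```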

2.
We present a primal–dual algorithm for solving a constrained optimization problem. The method applies a Newton-type iteration to a sequence of perturbed KKT systems, which arise from reformulating the initial problem as a sequence of penalized problems: an augmented Lagrangian handles the equality constraints and a log-barrier penalty handles the inequalities. We detail the updating rules for the different parameters (Lagrange multiplier estimate, quadratic penalty and log-barrier parameter) that yield strong global convergence properties. One advantage of this approach is that it introduces a natural regularization of the linear system solved at each iteration, which helps when the Jacobian of the constraints is rank deficient. Numerical experiments show the good practical performance of the proposed method, especially on degenerate problems.
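The abstract does not state the perturbed KKT system explicitly. As a generic sketch (not necessarily the exact system used in the paper), for min f(x) s.t. c(x) = 0 and g(x) + s = 0 with s > 0, a primal–dual system combining an augmented-Lagrangian perturbation of the equalities with a log-barrier for the inequalities reads

$$
\begin{aligned}
\nabla f(x) + \nabla c(x)\,\lambda + \nabla g(x)\,z &= 0,\\
c(x) + \sigma\,(\lambda - \lambda^{E}) &= 0,\\
g(x) + s &= 0,\\
S z - \mu e &= 0, \qquad s > 0,\ z > 0,
\end{aligned}
$$

where λ^E is the current multiplier estimate, σ the quadratic-penalty (regularization) parameter, μ the barrier parameter, S = diag(s) and e the all-ones vector. A Newton step is taken on this system, and σ, μ and λ^E are updated between iterations.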

3.
In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to search for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs), which establishes a one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA), which exploits the locality of constraints in nonlinear optimization problems. CPSA has much lower complexity than CSA because it partitions the constraints of a problem into significantly simpler subproblems, solves each independently, and resolves the global constraints violated across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. This result extends conventional simulated annealing (SA), which guarantees asymptotic convergence for discrete unconstrained optimization, to discrete constrained optimization, and it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA on continuous constrained optimization benchmarks and compare their performance with that of other penalty methods.
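A minimal sketch of the CSA idea described above — alternating probabilistic descents in the variable subspace with probabilistic ascents in the penalty (multiplier) subspace under a cooling schedule — is given below. The penalty form, proposal moves, acceptance rule and cooling schedule are simplified illustrations, not the exact ones from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def csa(f, h, x0, n_iters=20000, T0=1.0, cooling=0.9995, step=0.2):
    """Sketch of constrained simulated annealing on L(x, lam) = f(x) + sum_i lam_i*|h_i(x)|.

    Moves in x that lower L are favoured (probabilistic descent); moves in lam
    that raise L are favoured (probabilistic ascent), driving constraint satisfaction."""
    x = np.asarray(x0, dtype=float)
    lam = np.ones(len(h(x)))
    L = lambda x, lam: f(x) + np.dot(lam, np.abs(h(x)))
    T = T0
    for _ in range(n_iters):
        if rng.random() < 0.8:                      # trial move in the variable subspace
            x_new = x + step * rng.standard_normal(x.size)
            dE = L(x_new, lam) - L(x, lam)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x = x_new
        else:                                       # trial move in the penalty subspace
            lam_new = np.maximum(0.0, lam + step * rng.standard_normal(lam.size))
            dE = L(x, lam_new) - L(x, lam)          # accept if it *increases* the penalty value
            if dE >= 0 or rng.random() < np.exp(dE / T):
                lam = lam_new
        T *= cooling
    return x, lam

# Toy problem (illustrative): min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0  (optimum (0.5, 0.5)).
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: np.array([x[0] + x[1] - 1.0])
x_best, _ = csa(f, h, x0=[0.0, 0.0])
print(np.round(x_best, 2))                          # should land near (0.5, 0.5)
```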

4.
This paper proposes a self-adaptive penalty function and presents a penalty-based algorithm for solving nonsmooth and nonconvex constrained optimization problems. We prove that the general constrained optimization problem is equivalent to a bound constrained problem in the sense that they have the same global solutions. The global minimizer of the penalty function subject to a set of bound constraints may be obtained by a population-based meta-heuristic. Further, a hybrid self-adaptive penalty firefly algorithm, with a local intensification search, is designed, and its convergence analysis is established. The numerical experiments and a comparison with other penalty-based approaches show the effectiveness of the new self-adaptive penalty algorithm in solving constrained global optimization problems.

5.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem in which the first order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
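For reference, one generic way to write a combined problem of the kind mentioned above — with both the optimal value function and a first-order condition among the constraints — is sketched below; the notation and the precise form are illustrative and are not quoted from the paper. For the simple bilevel program min over (x, y) of F(x, y) subject to y solving the lower level min over z in Y of f(x, z), with Y convex, the combined problem can be written as

$$
\min_{x,\ y\in Y}\ F(x,y)\quad\text{s.t.}\quad f(x,y)-V(x)\le 0,\qquad 0\in \nabla_y f(x,y)+N_Y(y),
$$

where V(x) = min over z in Y of f(x, z) is the (generally nonsmooth) optimal value function of the lower level and N_Y(y) is the normal cone to Y at y.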

6.
In order to solve the constrained global optimization problem, we apply penalty functions not only to the constraints but also to the objective function. Then, within the framework of interval analysis, an interval branch-and-bound algorithm is given which does not need to solve a sequence of unconstrained problems. Global convergence is proved. Numerical examples show that this algorithm is efficient.

7.
This paper describes a new algorithm for solving constrained optimization problems, based on a method proposed by Chattopadhyay. The proposed algorithm replaces the original problem with m constraints, m > 1, by a sequence of optimization problems with one constraint. Here, we modify the algorithm given by Chattopadhyay in order to make it applicable to a larger class of optimization problems and to improve its convergence characteristics.
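The abstract does not spell out how the m constraints are folded into a single one. As one standard illustration (not necessarily Chattopadhyay's construction), m inequality constraints g_i(x) ≤ 0 can be replaced by the single equivalent constraint

$$
G(x) \;=\; \sum_{i=1}^{m} \bigl[\max\{0,\, g_i(x)\}\bigr]^2 \;\le\; 0,
$$

since G(x) ≥ 0 everywhere and G(x) = 0 exactly when all original constraints hold; the price is that G is in general only once continuously differentiable even when the g_i are smooth.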

8.
Double penalty method for bilevel optimization problems
A penalty function approach for solving constrained bilevel optimization problems is proposed. In the algorithm, both the upper level and the lower level problems are approximated by minimization problems of augmented objective functions. A convergence theorem is presented. The method is applicable to the case of a non-singleton lower-level reaction set. Constraint qualifications which imply the assumptions of the general convergence theorem are given. A part of this paper was presented in a talk at the 11th Symposium on Mathematical Programming with Data Perturbations, Washington, DC, May 1989.

9.
This article introduces a smoothing technique for the l1 exact penalty function. An application of the technique yields a twice continuously differentiable penalty function and a smoothed penalty problem. Under some mild conditions, the optimal solution of the smoothed penalty problem becomes an approximate optimal solution of the original constrained optimization problem. Based on the smoothed penalty problem, we propose an algorithm to solve the constrained optimization problem. Every limit point of the sequence generated by the algorithm is an optimal solution. Several numerical examples are presented to illustrate the performance of the proposed algorithm.
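The particular smoothing function used in the article is not given in the abstract. The sketch below uses one common infinitely differentiable (hence twice continuously differentiable) approximation of the plus function, the scaled softplus ε ln(1 + e^{t/ε}), purely as an illustration of how an l1 exact penalty can be smoothed and minimized; the toy problem and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_plus(t, eps):
    """Softplus smoothing of max(0, t): eps*log(1 + exp(t/eps)), written stably."""
    return eps * np.logaddexp(0.0, t / eps)

def smoothed_l1_penalty(f, g, rho, eps):
    """Smoothed version of the l1 exact penalty  f(x) + rho * sum_i max(0, g_i(x))."""
    return lambda x: f(x) + rho * np.sum(smoothed_plus(np.asarray(g(x)), eps))

# Toy problem (illustrative): min (x1 - 2)^2 + (x2 - 2)^2  s.t.  x1 + x2 - 2 <= 0.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 2.0)**2
g = lambda x: np.array([x[0] + x[1] - 2.0])

x = np.zeros(2)
for eps in (1.0, 0.1, 0.01):            # drive the smoothing parameter toward zero
    x = minimize(smoothed_l1_penalty(f, g, rho=10.0, eps=eps), x).x
print(np.round(x, 3))                   # near the constrained optimum (1, 1)
```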

10.
陈传  孔伟程 《计算数学》1988,10(3):299-310
1. Introduction. The problem discussed in this paper is: min f(x), x ∈ R^n, s.t. c_i(x) = 0, i = 1, …, q; c_i(x) ≤ 0, i = q+1, …, p. (1.1) The recursive equality-constrained quadratic approximation algorithm for this problem was proposed by Murray (1969) and later developed by Biggs (1972). Starting from the trajectory of a penalty function, the approach builds a quadratic programming subproblem containing only equality constraints, so that the search direction can be obtained by algebraic means; a line search along this direction completes one iteration. Biggs used the quadratic penalty function as the merit function in the line search and proved that the algorithm is globally convergent and locally superlinearly convergent.
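Following the translated description above, the heart of this type of method is an equality-constrained quadratic subproblem whose solution is obtained by linear algebra alone. The snippet below shows this elementary step — solving the KKT system of min ½dᵀBd + gradᵀd s.t. Ad + c = 0 — as a generic illustration; it is not the specific subproblem built from the penalty-function trajectory in the paper, and the data are made up.

```python
import numpy as np

def eq_qp_direction(B, grad, A, c):
    """Solve  min_d 0.5*d^T B d + grad^T d  s.t.  A d + c = 0  via its KKT system:
        [ B  A^T ] [ d  ]   [ -grad ]
        [ A   0  ] [ mu ] = [  -c   ]
    Returns the search direction d and the multiplier estimate mu."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Illustrative data: identity Hessian model, one linearized equality constraint.
B = np.eye(2)
grad = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
c = np.array([-1.0])                     # linearized constraint  d1 + d2 - 1 = 0

d, mu = eq_qp_direction(B, grad, A, c)
print(d, mu)                             # d = [1, 0], mu = [-2]
```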

11.
The recently proposed quasi-Newton method for constrained optimization has very attractive local convergence properties. To force global convergence of the method, Han proposed a descent method that uses Zangwill's penalty function and an exact line search. In this paper, a new method that adopts a differentiable penalty function and an approximate line search is presented. The proposed penalty function has the form of the augmented Lagrangian function. An algorithm for updating the parameters that appear in the penalty function is described. Global convergence of the given method is proved.

12.
In this article, we aim to extend the firefly algorithm (FA) to solve bound constrained mixed-integer nonlinear programming (MINLP) problems. An exact penalty continuous formulation of the MINLP problem is used. The continuous penalty problem is obtained by relaxing the integrality constraints and adding to the objective function a penalty term that penalizes integrality constraint violation. Two penalty terms are proposed, one based on the hyperbolic tangent function and the other on the inverse hyperbolic sine function. We prove that both penalties can be used to define the continuous penalty problem, in the sense that it is equivalent to the MINLP problem. The solutions of the penalty problem are obtained using a variant of the metaheuristic FA for global optimization. Numerical experiments are carried out on a set of benchmark problems to analyze the quality of the obtained solutions and the convergence speed. We show that, in terms of convergence speed, the firefly penalty-based algorithm compares favourably with the penalty algorithm when the deterministic DIRECT or the simulated annealing solvers are invoked.
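The precise hyperbolic-tangent and inverse-hyperbolic-sine penalty terms are defined in the paper itself; the function below is only a plausible illustration of the idea — charging each relaxed integer variable the tanh of its distance to the nearest integer — and should not be read as the paper's exact formula. The function name, parameter mu and example values are assumptions.

```python
import numpy as np

def integrality_penalty_tanh(x, int_idx, mu=10.0):
    """Illustrative tanh-based penalty on integrality violation:
    each relaxed integer variable x_i is charged tanh of its distance
    to the nearest integer, scaled by the penalty parameter mu."""
    xi = np.asarray(x, dtype=float)[list(int_idx)]
    dist = np.abs(xi - np.round(xi))        # zero exactly at integer points
    return mu * np.sum(np.tanh(dist))

print(integrality_penalty_tanh([0.0, 2.4, 1.0], int_idx=[1, 2]))  # only x[1] is penalized
```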

13.
In this paper, we consider convergence properties of a class of penalization methods for a general vector optimization problem with cone constraints in infinite dimensional spaces. Under certain assumptions, we show that any efficient point of the cone constrained vector optimization problem can be approached by a sequence of efficient points of the penalty problems. We also show, on the other hand, that any limit point of a sequence of approximate efficient solutions to the penalty problems is a weakly efficient solution of the original cone constrained vector optimization problem. Finally, when the constraint space is finite dimensional, we show that any limit point of a sequence of stationary points of the penalty problems is a KKT stationary point of the original cone constrained vector optimization problem if the Mangasarian–Fromovitz constraint qualification holds at the limit point. This work is supported by the Postdoctoral Fellowship of Hong Kong Polytechnic University.

14.
In this paper, we consider a class of optimal control problems subject to equality terminal state constraints and continuous state and control inequality constraints. By using the control parametrization technique and a time scaling transformation, the constrained optimal control problem is approximated by a sequence of optimal parameter selection problems with equality terminal state constraints and continuous state inequality constraints. Each of these constrained optimal parameter selection problems can be regarded as an optimization problem subject to equality constraints and continuous inequality constraints. On this basis, an exact penalty function method is used to devise a computational method for solving these optimization problems with equality constraints and continuous inequality constraints. The main idea is to augment the exact penalty function constructed from the equality constraints and continuous inequality constraints to the objective function, forming a new objective function. This gives rise to a sequence of unconstrained optimization problems. It is shown that, for a sufficiently large penalty parameter value, any local minimizer of the unconstrained optimization problem is a local minimizer of the optimization problem with equality constraints and continuous inequality constraints. The convergence properties of the optimal parameter selection problems with equality constraints and continuous inequality constraints to the original optimal control problem are also discussed. For illustration, three examples are solved, showing the effectiveness and applicability of the proposed approach.

15.
We consider a nondifferentiable square-root penalty function for nonlinear constrained optimization. A new smoothing function for this nonsmooth penalty function and the corresponding smoothed penalty problem are proposed, and error estimates between the objective values of the original problem and the smoothed penalty problem are obtained. Based on this penalty function, an algorithm is proposed and its convergence is proved. Numerical examples show that the algorithm is effective for solving nonlinear constrained optimization problems.

16.
The max-bisection problem is an NP-hard combinatorial optimization problem. In this paper, a new Lagrangian net algorithm is proposed to solve max-bisection problems. First, we relax the bisection constraint into the objective function through a penalty function. Second, a bisection solution is calculated by a discrete Hopfield neural network (DHNN). The increasing penalty factor helps the DHNN escape from local minima and reach a satisfactory bisection. The convergence analysis of the proposed algorithm is also presented. Finally, numerical results on large-scale G-set problems show that the proposed method can find better solutions.
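A minimal sketch of the general scheme — an energy combining the (negated) cut value with a penalty on the balance constraint, minimized by asynchronous discrete updates while the penalty factor grows — is given below. The specific network dynamics and penalty schedule of the proposed Lagrangian net are not reproduced here; the function names, parameter values and random graph are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x, W, beta):
    """Energy for max-bisection: negative cut value plus a balance penalty.
    x in {-1, +1}^n; cut = 0.25 * sum_ij W_ij (1 - x_i x_j)."""
    cut = 0.25 * np.sum(W * (1.0 - np.outer(x, x)))
    return -cut + beta * float(np.sum(x))**2

def hopfield_bisection(W, n_sweeps=50, beta0=0.1, growth=1.2):
    n = W.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)
    beta = beta0
    for _ in range(n_sweeps):
        for i in rng.permutation(n):            # asynchronous coordinate updates
            for s in (-1.0, 1.0):
                x_try = x.copy()
                x_try[i] = s
                if energy(x_try, W, beta) < energy(x, W, beta):
                    x = x_try
        beta *= growth                          # growing penalty enforces the bisection
    return x

# Illustrative random graph with an even number of vertices.
n = 8
W = rng.random((n, n))
W = np.triu(W, 1); W = W + W.T
x = hopfield_bisection(W)
print(x, int(np.sum(x)))                        # balance residual should end up at 0
```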

17.
We present a null-space primal-dual interior-point algorithm for solving nonlinear optimization problems with general inequality and equality constraints. The algorithm approximately solves a sequence of equality constrained barrier subproblems by computing a range-space step and a null-space step in every iteration. The ℓ2 penalty function is taken as the merit function. Under very mild conditions on range-space steps and approximate Hessians, without assuming any regularity, it is proved that either every limit point of the iterate sequence is a Karush-Kuhn-Tucker point of the barrier subproblem and the penalty parameter remains bounded, or there exists a limit point that is either an infeasible stationary point of minimizing the ℓ2 norm of constraint violations of the original problem, or a Fritz-John point of the original problem. In addition, we analyze the local convergence properties of the algorithm, and prove that by suitably controlling the exactness of range-space steps and selecting the barrier parameter and Hessian approximation, the algorithm generates a superlinearly or quadratically convergent step. Conditions guaranteeing that all slack variables remain positive for a full step are also presented.

18.
Penalty functions are an important tool for solving many constrained optimization problems in areas such as industrial design and management. In this paper, we study the exactness of an objective penalty function for inequality constrained optimization and an associated algorithm. In terms of exactness, this objective penalty function is at least as good as traditional exact penalty functions; in the case of a global solution, its exactness shows a significant advantage. The sufficient and necessary stability condition used to determine whether the objective penalty function is exact for a global solution is proved. Based on the objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, the sufficient and necessary calmness condition on the exactness of the objective penalty function is proved for a local solution. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions. Finally, numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.

19.
An RQP method based on the augmented Lagrangian function
王秀国  薛毅 《计算数学》2003,25(4):393-406
Recursive quadratic programming (RQP) is a family of techniques developed by Bartholomew-Biggs and other authors for solving nonlinear programming problems. This paper describes a new method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. It avoids driving the penalty parameter to infinity. We employ Fletcher's exact penalty function as a merit function and use an approximate directional derivative of that function, which avoids the need to evaluate second order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are also reported.

20.
This paper introduces a smoothing technique with second-order differentiability for the classical l1 exact penalty function for constrained optimization problems (COP). Error estimates among the optimal objective values of the nonsmooth penalty problem, the smoothed penalty problem and the original optimization problem are obtained. Based on the smoothed problem, an algorithm for solving COP is proposed, and some preliminary numerical results indicate that the algorithm is quite promising.
