Found 20 similar documents (search time: 80 ms)
1.
A Globally and Superlinearly Convergent SQP Algorithm for Nonlinear Constrained Optimization (Total citations: 2; self: 0; others: 2)
Based on a continuously differentiable exact penalty function and a regularization technique for handling inconsistent subproblems in the SQP method, we present a new SQP algorithm for nonlinear constrained optimization problems. The proposed algorithm incorporates automatic adjustment rules for the choice of the parameters and uses an approximate directional derivative of the merit function to avoid evaluating second-order derivatives of the problem functions. Under mild assumptions the algorithm is proved to be globally convergent; in particular, the superlinear convergence rate is established without assuming that strict complementarity holds at the solution. The reported numerical results show that the proposed algorithm is promising.
2.
Smoothing Method for Minimax Problems (Total citations: 7; self: 0; others: 7)
Song Xu, Computational Optimization and Applications, 2001, 20(3): 267-279
In this paper, we propose a smoothing method for the minimax problem. The method is based on the exponential penalty function of Kort and Bertsekas for constrained optimization. Under suitable conditions, the method is globally convergent. Preliminary numerical experiments indicate that the algorithm is promising.
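The exponential-penalty smoothing idea behind such methods can be sketched with a log-sum-exp approximation of the max; the two-function example, the step size, and the fixed smoothing parameter below are assumptions for illustration, not the paper's exact algorithm:

```python
import math

def smooth_max(fs, x, p):
    """Exponential smoothing of max_i f_i(x): (1/p) * log(sum_i exp(p * f_i(x))).
    Shifting by the true max keeps the exponentials from overflowing."""
    vals = [f(x) for f in fs]
    m = max(vals)
    return m + math.log(sum(math.exp(p * (v - m)) for v in vals)) / p

def minimize_smoothed(fs, x0, p, steps=4000, lr=0.01, h=1e-6):
    """Plain gradient descent on the smoothed max, using a central-difference gradient."""
    x = x0
    for _ in range(steps):
        g = (smooth_max(fs, x + h, p) - smooth_max(fs, x - h, p)) / (2 * h)
        x -= lr * g
    return x

# Assumed example: minimize max(x^2, (x - 2)^2); the minimax solution is x = 1.
fs = [lambda x: x * x, lambda x: (x - 2) ** 2]
x_star = minimize_smoothed(fs, x0=0.0, p=20.0)
```

As p grows, the smoothed maximum approaches max_i f_i from above (the gap is at most log(m)/p for m functions), but the smoothed problem also becomes stiffer; this trade-off is why smoothing methods typically increase p gradually.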
3.
4.
In this paper, a new sequential penalty algorithm, based on the L∞ exact penalty function, is proposed for general nonlinear constrained optimization problems. The algorithm has the following characteristics: it can start from an arbitrary initial point; the feasibility of the subproblem is guaranteed; the penalty parameter is adjusted automatically; and global convergence is proved without any regularity assumption. The update formula for the penalty parameter is new. It is proved that the algorithm behaves equivalently to the standard SQP method after sufficiently many iterations; hence, the local convergence results of the standard SQP method apply to this algorithm. Preliminary numerical experiments show the efficiency and stability of the algorithm.
5.
B. Rustem, Journal of Optimization Theory and Applications, 1993, 76(3): 429-453
In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms utilize stepsize strategies based on differentiable penalty functions and quadratic programming subproblems; the essential difference between the algorithms lies in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that depends on the penalty functions. The quadratic objective function utilizes an approximate Hessian of the Lagrangian augmented by the penalty functions; in this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions. The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, boundedness follows because the strategy is always satisfied for finite values of the parameter. These properties are used to establish global convergence and the condition under which unit stepsizes are achieved. There is also a compatibility between the quadratic objective function and the stepsize strategy that ensures the consistency of the properties for unit steps and the subsequent convergence rates. This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.
6.
Guodong Ma, Acta Mathematica Scientia (Series A), 2020, (3): 641-649
This paper considers minimax optimization problems with nonlinear inequality and equality constraints. Drawing on the idea of semi-penalty functions, a new generalized projection algorithm is proposed. The algorithm has the following features: the search direction, generated by an explicit generalized gradient projection formula, is a feasible descent direction; a new optimality identification and control function is constructed; and under appropriate assumptions the algorithm is globally and strongly convergent. Finally, preliminary numerical experiments verify the effectiveness of the algorithm.
7.
Zhiqing Meng, Chuangyin Dang, Min Jiang, Xinsheng Xu, Rui Shen, Journal of Global Optimization, 2013, 56(2): 691-711
The penalty function is an important tool for solving constrained optimization problems in areas such as industrial design and management. In this paper, we study the exactness of an objective penalty function for inequality constrained optimization, together with an algorithm based on it. In terms of exactness, this objective penalty function is at least as good as traditional exact penalty functions; in the case of a global solution, its exactness shows a significant advantage. A sufficient and necessary stability condition for determining whether the objective penalty function is exact at a global solution is proved. Based on the objective penalty function, an algorithm is developed for finding a global solution of an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a sufficient and necessary calmness condition for the exactness of the objective penalty function at a local solution is proved. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions. Finally, numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.
8.
An RQP Method Based on the Augmented Lagrangian Function (Total citations: 3; self: 0; others: 3)
Recursive quadratic programming (RQP) is a family of techniques developed by Bartholomew-Biggs and other authors for solving nonlinear programming problems. This paper describes a new method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. It avoids letting the penalty parameter tend to infinity. We employ Fletcher's exact penalty function as a merit function, together with an approximate directional derivative of that function which avoids the need to evaluate the second-order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are also reported.
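The augmented-Lagrangian mechanism this abstract builds on can be sketched in its simplest form: a multiplier update wrapped around an inner unconstrained minimization. The toy problem, the fixed penalty parameter, and the plain gradient-descent inner solver below are assumptions for illustration, not the paper's RQP method:

```python
def aug_lagrangian_solve(grad_f, h, grad_h, x0, mu=10.0, outer=15, inner=300, lr=0.05):
    """Augmented-Lagrangian method for: minimize f(x) subject to h(x) = 0.
    Inner loop: gradient descent on L(x) = f(x) + lam*h(x) + (mu/2)*h(x)^2.
    Outer loop: multiplier update lam <- lam + mu*h(x)."""
    x, lam = list(x0), 0.0
    for _ in range(outer):
        for _ in range(inner):
            hv = h(x)
            g = [gf + (lam + mu * hv) * gh for gf, gh in zip(grad_f(x), grad_h(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += mu * h(x)
    return x, lam

# Assumed example: minimize (x-1)^2 + (y-1)^2 subject to x + y = 1.
# Solution: (0.5, 0.5) with multiplier lam = 1.
grad_f = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 1.0)]
h = lambda x: x[0] + x[1] - 1.0
grad_h = lambda x: [1.0, 1.0]
xs, lam = aug_lagrangian_solve(grad_f, h, grad_h, x0=[0.0, 0.0])
```

Unlike a pure quadratic penalty, the multiplier update lets mu stay fixed and bounded while the iterates converge to the exact constrained minimizer, which is precisely the "avoids the penalty parameter tending to infinity" property the abstract emphasizes.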
9.
D. G. Luenberger, Journal of Optimization Theory and Applications, 1974, 14(5): 477-495
A new programming algorithm for nonlinear constrained optimization problems is proposed. The method is based on the penalty function approach and thereby circumvents the necessity to maintain feasibility at each iteration, but it also behaves much like the gradient projection method. Although only first-order information is used, the algorithm converges asymptotically at a rate which is independent of the magnitude of the penalty term; hence, unlike the simple gradient method, the asymptotic rate of the proposed method is not affected by the ill-conditioning associated with the introduction of the penalty term. It is shown that the asymptotic rate of convergence of the proposed method is identical with that of the gradient projection method. Dedicated to Professor M. R. Hestenes. This research was supported by the National Science Foundation, Grant No. GK-16125.
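The basic penalty-function transformation underlying this method can be sketched as follows: a quadratic penalty with an increasing parameter, minimized by plain gradient descent. The toy problem and the schedule are assumptions for illustration; the paper's algorithm is more sophisticated and is specifically designed to escape the ill-conditioning this naive version suffers from as mu grows:

```python
def penalty_solve(grad_f, c, grad_c, x0, mus=(1.0, 10.0, 100.0, 1000.0), steps=2000):
    """Classical quadratic-penalty method: for an increasing sequence mu, minimize
    f(x) + (mu/2) * c(x)^2 by gradient descent, warm-starting from the previous mu."""
    x = list(x0)
    for mu in mus:
        lr = 0.1 / mu  # smaller steps as the penalized problem gets stiffer
        for _ in range(steps):
            cv = c(x)
            g = [gf + mu * cv * gc for gf, gc in zip(grad_f(x), grad_c(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Assumed example: minimize x + y subject to x^2 + y^2 = 1.
# Solution: x = y = -1/sqrt(2).
grad_f = lambda x: [1.0, 1.0]
c = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_c = lambda x: [2.0 * x[0], 2.0 * x[1]]
sol = penalty_solve(grad_f, c, grad_c, x0=[0.5, -0.5])
```

The minimizer for each finite mu is slightly infeasible and approaches the constrained solution only as mu grows, while the penalized Hessian's conditioning worsens proportionally to mu; that trade-off is what penalty-based first-order methods like the one above must contend with.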
10.
11.
In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs), which shows the one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA), which exploits the locality of constraints in nonlinear optimization problems. CPSA achieves much lower complexity than CSA by partitioning the constraints of a problem into significantly simpler subproblems, solving each independently, and resolving the violated global constraints across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. The result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to discrete constrained optimization. Moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.
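A bare-bones illustration of annealing on a penalty function for a discrete problem is sketched below. This is a plain static-penalty SA under assumed settings (fixed multiplier, geometric cooling, toy integer problem), not the authors' CSA, which additionally performs probabilistic ascents in the multiplier subspace:

```python
import math
import random

def penalty_sa(f, violation, states, x0, mu=100.0, steps=3000, t0=5.0, seed=0):
    """Simulated annealing on the penalized objective f(x) + mu * violation(x)
    over a finite range of integer states, with a geometric cooling schedule."""
    rng = random.Random(seed)
    p = lambda x: f(x) + mu * violation(x)
    x, best, t = x0, x0, t0
    for _ in range(steps):
        y = min(max(x + rng.choice((-1, 1)), states[0]), states[-1])  # random neighbor
        d = p(y) - p(x)
        if d <= 0 or rng.random() < math.exp(-d / t):  # Metropolis acceptance
            x = y
        if p(x) < p(best):
            best = x
        t *= 0.999  # cooling
    return best

# Assumed example: minimize x^2 over the integers in [-5, 5] subject to x >= 2.
# The constrained minimum is x = 2.
states = list(range(-5, 6))
best = penalty_sa(lambda x: x * x, lambda x: max(0, 2 - x), states, x0=-5)
```

With mu large enough, the penalized landscape's global minimum coincides with the constrained one, so the annealer's best visited state is the constrained solution; CSA replaces this hand-tuned mu by controlled ascents on the penalty multipliers.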
12.
In this paper, we propose a new class of penalty functions for nonlinear programming problems with inequality constraints; a subclass of these functions smoothly approximates the $l_1$ penalty function. Based on this new class of penalty functions, we give a penalty algorithm whose distinctive feature is that each iteration computes a globally exact or inexact solution of the penalized problem. The algorithm is well defined under very weak conditions. We prove global convergence of the algorithm without requiring any constraint qualification. Finally, numerical experiments are reported.
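One standard way to smooth the $l_1$ penalty, in the spirit described above, replaces each plus-function max(0, g_i) with a softplus term. The constraint, the smoothing parameter, and the gradient-descent solver below are assumptions for illustration; the paper's penalty class is more general:

```python
import math

def softplus(t):
    """Numerically stable log(1 + exp(t)); a smooth approximation of max(0, t)."""
    return t + math.log1p(math.exp(-t)) if t > 0 else math.log1p(math.exp(t))

def smoothed_l1_penalty(f, gs, x, rho, p):
    """Smooth approximation of f(x) + rho * sum_i max(0, g_i(x)) for constraints
    g_i(x) <= 0: each plus-function becomes softplus(p*t)/p, which tends to
    max(0, t) as p grows."""
    return f(x) + rho * sum(softplus(p * g(x)) for g in gs) / p

def solve_smoothed_l1(f, gs, x0, rho=10.0, p=200.0, steps=5000, lr=1e-3, h=1e-6):
    """Gradient descent on the smoothed penalty with a central-difference gradient."""
    x = x0
    for _ in range(steps):
        g = (smoothed_l1_penalty(f, gs, x + h, rho, p)
             - smoothed_l1_penalty(f, gs, x - h, rho, p)) / (2 * h)
        x -= lr * g
    return x

# Assumed example: minimize x^2 subject to x >= 1 (i.e. g(x) = 1 - x <= 0); solution x = 1.
x_star = solve_smoothed_l1(lambda x: x * x, [lambda x: 1.0 - x], x0=0.0)
```

Because the $l_1$ penalty is exact for finite rho, the smoothed problem's minimizer lies within O(1/p) of the constrained solution, which is what makes driving only the smoothing parameter p (rather than rho) to infinity attractive.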
13.
A Superlinearly Convergent SSLE Algorithm for Optimization Problems with Linear Complementarity Constraints (Total citations: 2; self: 0; others: 2)
In this paper we study a special kind of optimization problem with linear complementarity constraints. First, using a generalized complementarity function and a perturbation technique, the problem is transformed into a family of parameterized general nonlinear optimization problems. Then, using a special penalty function as a merit function, we establish a sequential systems of linear equations (SSLE) algorithm. The three systems of equations solved at each iteration share the same coefficient matrix. Under suitable conditions, the algorithm is proved to possess not only global convergence but also strong and superlinear convergence. Finally, some preliminary numerical experiments are reported.
14.
Zhiqing Meng, Rui Shen, Chuangyin Dang, Min Jiang, Numerical Functional Analysis & Optimization, 2013, 34(11): 1471-1492
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, a saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming, when the Karush-Kuhn-Tucker condition holds, the saddle point exists. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. A sufficient and necessary stability condition for determining whether the modified augmented Lagrangian objective penalty function is exact at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed to find a global solution of an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a sufficient and necessary calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions.
15.
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved; in the second phase, the objective function value is reduced on an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between consecutive iterations; a suitable updating procedure allows the parameter to be increased or decreased along consecutive iterations. The conditions for feasibility improvement in the first phase and for optimality improvement in the second phase are mild, and large-scale implementation of the resulting method is possible. We prove that, under suitable conditions, which include neither regularity nor the existence of second derivatives, all the limit points of an infinite sequence generated by the algorithm are feasible, and a suitable optimality measure can be made as small as desired. The algorithm is implemented and tested against the LANCELOT algorithm on a set of hard-spheres problems.
16.
To address the nondifferentiability of the constrained nonlinear $l_1$ problem, a smoothing approximation algorithm is proposed. The method uses a smooth approximation of the " " function together with penalty-function techniques to transform the nonlinear $l_1$ problem into an unconstrained differentiable problem, and under appropriate assumptions the algorithm is globally convergent. Preliminary numerical experiments demonstrate the effectiveness of the algorithm.
17.
This paper presents a primal-dual interior-point algorithm for solving general constrained nonlinear programming problems. The inequality constraints are incorporated into the objective function by means of a logarithmic barrier function, and satisfaction of the equality constraints is enforced through an adaptive quadratic penalty function. The penalty parameter is determined using a strategy that ensures a descent property for a merit function. Global convergence of the algorithm is achieved through the monotonic decrease of the merit function. Finally, extensive computational results show that the algorithm can solve large and difficult problems in an efficient and robust way. Communicated by L. C. W. Dixon. The research reported in this paper was done while the first author was at Imperial College. The authors gratefully acknowledge constructive comments from Professor L. C. W. Dixon and an anonymous referee. They are also grateful to Dr. Stanislav Zakovic for helpful suggestions and comments. Financial support was provided by EPSRC Grants M16016 and GR/G51377/01.
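The logarithmic-barrier mechanism described above can be sketched on a one-dimensional toy problem. The example, the schedule for the barrier parameter t, and the bisection inner solver are assumptions for illustration; the paper additionally handles equality constraints with an adaptive quadratic penalty, which is omitted here:

```python
def barrier_solve(df, g, dg, lo, hi, ts=(1.0, 10.0, 100.0, 1000.0), iters=200):
    """Log-barrier method for: minimize f(x) subject to g(x) <= 0, with f and g smooth
    and the barrier function phi_t(x) = f(x) - (1/t)*log(-g(x)) convex on [lo, hi].
    Each barrier problem is solved by bisection on phi_t'(x) = f'(x) - g'(x)/(t*g(x))."""
    x = None
    for t in ts:
        a, b = lo, hi
        for _ in range(iters):
            m = 0.5 * (a + b)
            dphi = df(m) - dg(m) / (t * g(m))
            if dphi < 0:
                a = m  # minimizer lies to the right
            else:
                b = m  # minimizer lies to the left
        x = 0.5 * (a + b)
    return x

# Assumed example: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
# The constrained solution is x = 1; the barrier minimizers approach it as t grows.
x_star = barrier_solve(df=lambda x: 2.0 * (x - 2.0), g=lambda x: x - 1.0,
                       dg=lambda x: 1.0, lo=-10.0, hi=1.0 - 1e-9)
```

The barrier keeps every iterate strictly feasible; as t increases, the minimizer of phi_t traces the central path toward the boundary of the feasible set, which is how interior-point methods of this kind approach the constrained solution.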
18.
Zhiqing Meng, Chuangyin Dang, Xiaoqi Yang, Computational Optimization and Applications, 2006, 35(3): 375-398
In this paper we propose two methods for smoothing a nonsmooth square-root exact penalty function for inequality constrained optimization. Error estimates are obtained among the optimal objective function values of the smoothed penalty problem, of the nonsmooth penalty problem, and of the original optimization problem. We develop an algorithm for solving the optimization problem based on the smoothed penalty function and prove its convergence. The efficiency of the smoothed penalty function is illustrated with some numerical examples, which suggest that the algorithm is efficient.
19.
In this paper, a primal-dual interior point method is proposed for general constrained optimization; it incorporates a penalty function and a new technique for identifying the active set. At each iteration, the proposed algorithm only needs to solve two or three reduced systems of linear equations with the same coefficient matrix. The size of these systems can be decreased thanks to the introduction of the working set, an estimate of the active set. The penalty parameter is updated automatically, and the uniform positive-definiteness condition on the Hessian approximation of the Lagrangian is relaxed. The proposed algorithm possesses global and superlinear convergence under some mild conditions. Finally, some preliminary numerical results are reported.