20 similar documents found
1.
B. Rustem 《Journal of Optimization Theory and Applications》1993,76(3):429-453
In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms utilize stepsize strategies based on differentiable penalty functions and quadratic programming subproblems; the essential difference between the algorithms lies in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that depends on the penalty functions. The quadratic objective function uses an approximate Hessian of the Lagrangian augmented by the penalty functions; in this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions. The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, boundedness follows because the strategy is always satisfied for finite values of the parameter. These properties are used to establish global convergence and the condition under which unit stepsizes are achieved. The quadratic objective function and the stepsize strategy are also compatible, ensuring that the properties required for unit steps, and hence the subsequent convergence rates, hold consistently. This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.
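For concreteness, a minimal sketch of one SQP iteration of the flavor described, simplified to equality constraints only and using a plain quadratic penalty in the line search; the function names and the simple decrease test are illustrative assumptions, not the paper's exact strategy:

```python
import numpy as np

def sqp_step(x, f, grad_f, c, jac_c, B, rho=10.0):
    # One iteration for min f(x) s.t. c(x) = 0: solve the KKT system of the
    # quadratic subproblem with approximate Lagrangian Hessian B, then take a
    # backtracking step measured by a differentiable quadratic penalty.
    g, A, cv = grad_f(x), jac_c(x), c(x)
    n, m = len(x), len(cv)
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    d = np.linalg.solve(K, np.concatenate([-g, -cv]))[:n]
    phi = lambda y: f(y) + 0.5 * rho * (c(y) @ c(y))     # differentiable penalty
    alpha = 1.0
    while phi(x + alpha * d) > phi(x) and alpha > 1e-8:  # simple decrease test
        alpha *= 0.5
    return x + alpha * d
```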
2.
A constrained minimax problem is converted to the minimization of a sequence of unconstrained, continuously differentiable functions, in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to perform the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed: the first is simpler to program, and the second is generally faster. To show the efficiency of the algorithms even for unconstrained problems, examples are used to compare the two algorithms with recent methods in the literature. The second algorithm is found to converge faster than the other methods. Several constrained examples are also solved and the results presented.
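As an illustration of the conversion idea (though not Morrison's exact construction), one can smooth the max with a log-sum-exp term, penalize the constraints quadratically, and hand the smooth result to any gradient-based minimizer; mu, rho, and the function names below are assumptions of the sketch:

```python
import numpy as np
from scipy.optimize import minimize

def smooth_minimax(fs, gs, x0, mu=1e-2, rho=1e3):
    # min_x max_i f_i(x) s.t. g_j(x) <= 0, replaced by one smooth function:
    # log-sum-exp smoothing of the max plus a quadratic constraint penalty.
    def phi(x):
        fv = np.array([f(x) for f in fs])
        m = fv.max()                                    # shift for stability
        lse = m + mu * np.log(np.exp((fv - m) / mu).sum())
        pen = sum(max(g(x), 0.0) ** 2 for g in gs)
        return lse + rho * pen
    return minimize(phi, x0, method='BFGS').x
```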
3.
Global convergence properties are established for a quite general form of algorithms for solving nonlinearly constrained minimization problems. A useful feature of the methods considered is that they can be implemented easily either with or without quadratic programming techniques. A particular implementation, designed to be both efficient and robust, is described in detail. Numerical results are presented and discussed. This work was carried out by the first author under the support of the Science Research Council (UK), Grant Nos. B/RG/95124 and GR/A/44480, which are gratefully acknowledged.
4.
Superlinear and quadratic convergence of some primal-dual interior point methods for constrained optimization
This paper proves local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems. The conditions to be satisfied at a solution are the usual Jacobian uniqueness conditions. Convergence-rate proofs are given for three kinds of step size rules: (i) the rule adopted by Zhang et al. in their convergence analysis of a primal-dual interior point method for linear programs, which uses a single step size for the primal and dual variables; (ii) the rule used in the software package OB1, which uses different step sizes for the primal and dual variables; and (iii) the rule used by Yamashita in his globally convergent primal-dual interior point method for general constrained optimization problems, which also uses different step sizes for the primal and dual variables. Conditions on the barrier parameter and on the parameters in the step size rules are given for each case. For these step size rules, local quadratic convergence of the Newton method and local superlinear convergence of the quasi-Newton method are proved.
A preliminary version of this paper was presented at the conference “Optimization-Models and Algorithms” held at the Institute of Statistical Mathematics, Tokyo, March 1993.
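Rules (ii) and (iii) hinge on choosing separate primal and dual step sizes. For reference, a standard fraction-to-boundary rule of the kind such methods use (tau is an assumed parameter; this is not the exact OB1 rule):

```python
import numpy as np

def fraction_to_boundary(z, dz, tau=0.995):
    # Largest alpha in (0, 1] keeping z + alpha * dz strictly positive.
    neg = dz < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, tau * np.min(-z[neg] / dz[neg]))

# Different step sizes for primal slacks s and dual multipliers z:
#   alpha_p = fraction_to_boundary(s, ds)
#   alpha_d = fraction_to_boundary(z, dz)
```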
5.
6.
Chun-ming Tang Jin-bao Jian 《European Journal of Operational Research》2012,218(1):28-37
In this paper, we propose a strongly sub-feasible direction method for inequality constrained optimization problems whose objective functions are not necessarily differentiable. The algorithm combines the subgradient aggregation technique with the ideas of the generalized cutting plane method and of the strongly sub-feasible direction method; as a result, a new search-direction-finding subproblem and a new line search strategy are presented. The algorithm not only accepts infeasible starting points but also preserves the “strong sub-feasibility” of the current iterate without unduly increasing the objective value. Moreover, once a feasible iterate occurs, the method automatically becomes a feasible descent algorithm. Global convergence is proved, and some preliminary numerical results show that the proposed algorithm is efficient.
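The cutting-plane ingredient can be sketched generically: a piecewise-linear lower model of a convex nonsmooth objective assembled from subgradient cuts (the textbook model, not the paper's aggregated subproblem):

```python
import numpy as np

def cutting_plane_model(x, points, fvals, subgrads):
    # Lower model of a convex f from cuts collected at earlier iterates:
    #   m(x) = max_j [ f(x_j) + g_j^T (x - x_j) ],  g_j a subgradient at x_j.
    return max(fv + g @ (x - p)
               for p, fv, g in zip(points, fvals, subgrads))
```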
7.
Aiping Liao 《Computational Optimization and Applications》1993,2(2):129-143
Reduced Hessian methods have been shown to be successful for equality constrained problems, but there are few results on reduced Hessian methods for general constrained problems. In this paper we propose a method for general constrained problems, based on Byrd and Schnabel's basis-independent algorithm. It can be regarded as a smooth extension of the standard reduced Hessian method. Research supported in part by NSF, AFOSR and ONR through NSF grant DMS-8920550.
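A minimal sketch of the reduced-Hessian construction itself, assuming the constraint Jacobian A has full row rank; Z is a null-space basis obtained here via QR, one of several possible (basis-dependent) choices:

```python
import numpy as np

def reduced_hessian(H, A):
    # H: n x n Hessian of the Lagrangian; A: m x n constraint Jacobian.
    # The last n - m columns of the full QR factor of A^T form an
    # orthonormal basis Z for null(A); the reduced Hessian is Z^T H Z.
    q, _ = np.linalg.qr(A.T, mode='complete')
    Z = q[:, A.shape[0]:]
    return Z.T @ H @ Z
```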
8.
This paper presents a novel framework for developing globally convergent algorithms without evaluating the value of a given function. One way to implement such a framework for a twice continuously differentiable function is to apply linear bounding functions (LBFs) to its gradient function. The resulting algorithm obtains a better point in each iteration without using a line search. Under certain conditions, it achieves a superlinear convergence rate of at least 1.618 without explicitly calculating the Hessian. Furthermore, the strategy of switching from the negative gradient direction to the Newton-like direction is derived in a natural manner and is computationally effective. Numerical examples are used to show how the algorithm works.
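The 1.618 rate is the classical secant order. A one-dimensional illustration of achieving it on the gradient without any second derivative (a generic secant iteration, not the paper's LBF mechanism):

```python
def secant_on_gradient(grad, x0, x1, tol=1e-10, max_iter=100):
    # Secant iteration on grad(x) = 0: no Hessian is ever evaluated, and the
    # local convergence order is (1 + sqrt(5)) / 2 ~= 1.618.
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, grad(x1)
        if abs(g1) < tol:
            break
    return x1
```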
9.
T. L. Vincent B. S. Goh K. L. Teo 《Journal of Optimization Theory and Applications》1992,75(3):501-519
In this paper, we consider a class of nonlinear minimum-maximum optimization problems subject to boundedness constraints on the decision vectors. Three algorithms are developed for finding the min-max point using the concept of solving an associated dynamical system. In the first and third algorithms, solutions are obtained by solving systems of differential equations; the second algorithm is a discrete version of the first. The trajectories generated by the first and second algorithms may move inside or on the boundary of the constraint set, while the third algorithm ensures that any trajectory that begins inside the constraint region remains in its interior. Sufficient conditions for global convergence of the two algorithms are also established. For illustration, four numerical examples are solved. This work was partially supported by a research grant from the Australian Research Committee.
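A crude discrete analogue of such dynamics, assuming box constraints handled by clipping (forward Euler with descent in x and ascent in y; the paper's actual systems and convergence conditions are more refined):

```python
import numpy as np

def minmax_flow(grad_x, grad_y, x, y, lo, hi, h=1e-2, steps=20000):
    # Integrate dx/dt = -grad_x f(x, y), dy/dt = +grad_y f(x, y),
    # keeping both trajectories inside the box [lo, hi].
    x, y = np.clip(x, lo, hi), np.clip(y, lo, hi)
    for _ in range(steps):
        x = np.clip(x - h * grad_x(x, y), lo, hi)
        y = np.clip(y + h * grad_y(x, y), lo, hi)
    return x, y
```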
10.
T. F. Coleman 《Mathematical Programming》1978,15(1):239-242
An unconstrained minimax algorithm of Charalambous and Conn is easily modified to solve the constrained case. Here we present some numerical results and find that this algorithm compares favourably to those of Dutta and Vidyasagar. This research was supported in part by Natural Sciences and Engineering Research Council grant number A8639.
11.
In this paper, motivated by the methods of Zhu et al. [Z.B. Zhu, K.C. Zhang, J.B. Jian, An improved SQP algorithm for inequality constrained optimization, Math. Meth. Oper. Res. 58 (2003) 271-282; Zhibin Zhu, Jinbao Jian, An efficient feasible SQP algorithm for inequality constrained optimization, Nonlinear Anal. Real World Appl. 10(2) (2009) 1220-1228], we propose a class of efficient feasible SQP algorithms for nonlinear inequality constrained optimization problems. By solving only one QP subproblem, built from a subset of the constraints estimated as active, a revised feasible descent direction is generated at each iteration. These methods are implementable and globally convergent, and we prove that the algorithms have a superlinear convergence rate under some mild conditions.
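The active-set estimate that drives the single QP subproblem can be sketched as follows; eps is an assumed tolerance, and the cited papers use more refined identification rules:

```python
import numpy as np

def estimated_active_set(g_vals, eps=1e-6):
    # For constraints g_i(x) <= 0, treat near-binding ones as active;
    # only this subset enters the QP subproblem at the current iterate.
    return np.flatnonzero(np.asarray(g_vals) >= -eps)
```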
12.
We classify in this paper different augmented Lagrangian functions into three unified classes. Based on two unified formulations, we construct, respectively, two convergent augmented Lagrangian methods that do not require the global solvability of the Lagrangian relaxation and whose global convergence properties require neither the boundedness of the multiplier sequence nor any constraint qualification. In particular, when the sequence of iteration points does not converge, we give a necessary and sufficient condition for the convergence of the objective values of the iteration points. We further derive two multiplier algorithms that require the same convergence condition and possess the same properties as the proposed convergent augmented Lagrangian methods. The existence of a global saddle point is crucial to guarantee the success of a dual search. In the second half of this paper, we generalize the existence theorems for a global saddle point in the literature under the framework of the unified classes of augmented Lagrangian functions.
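For orientation, the classical Hestenes-Powell augmented Lagrangian with its first-order multiplier update, one member of the classes unified in the paper (the inner solver and parameters below are assumptions of the sketch):

```python
import numpy as np
from scipy.optimize import minimize

def method_of_multipliers(f, grad_f, c, jac_c, x0, rho=10.0, outer=20):
    # Classical form for equality constraints c(x) = 0:
    #   L_A(x, lam) = f(x) + lam^T c(x) + (rho / 2) ||c(x)||^2
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(c(x))
    for _ in range(outer):
        La = lambda z: f(z) + lam @ c(z) + 0.5 * rho * (c(z) @ c(z))
        gLa = lambda z: grad_f(z) + jac_c(z).T @ (lam + rho * c(z))
        x = minimize(La, x, jac=gLa, method='BFGS').x
        lam = lam + rho * c(x)      # first-order multiplier update
    return x, lam
```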
13.
This research presents a new constrained optimization approach for solving systems of nonlinear equations. Particular advantages are realized when all of the equations are convex. For example, a global algorithm for finding the zero of a convex real-valued function of one variable is developed. If the algorithm terminates finitely, then either the algorithm has computed a zero or determined that none exists; if an infinite sequence is generated, either that sequence converges to a zero or again no zero exists. For solving n-dimensional convex equations, the constrained optimization algorithm has the capability of determining that the system of equations has no solution. Global convergence of the algorithm is established under weaker conditions than previously known and, in this case, the algorithm reduces to Newton’s method together with a constrained line search at each iteration. It is also shown how this approach has led to a new algorithm for solving the linear complementarity problem.
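A hedged sketch of the one-dimensional case, in the spirit of Newton's method with a safeguard (not the paper's constrained-optimization formulation): bracket the zero and fall back to bisection whenever the Newton step leaves the bracket:

```python
def safeguarded_newton(f, fprime, lo, hi, tol=1e-12, max_iter=200):
    # Zero of a convex f with f(lo) < 0 < f(hi): Newton steps, with the
    # bracket [lo, hi] maintained and bisection used as the fallback.
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        if fx < 0:
            lo = x
        else:
            hi = x
        d = fprime(x)
        step_ok = d != 0 and lo < x - fx / d < hi
        x = x - fx / d if step_ok else 0.5 * (lo + hi)
    return x
```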
14.
José Herskovits 《Mathematical Programming》1986,36(1):19-38
We present a feasible directions algorithm, based on Lagrangian concepts, for the solution of the nonlinear programming problem with equality and inequality constraints. At each iteration a descent direction is defined; by modifying it, we obtain a feasible descent direction. The line search procedure ensures the global convergence of the method and the feasibility of all the iterates. We prove the global convergence of the algorithm and apply it to the solution of some test problems. Although the present version of the algorithm does not include any second-order information, as quasi-Newton methods do, the numerical results exhibit a behavior comparable to that of the best methods currently known for nonlinear programming.
Research performed while the author was on a two-year appointment at INRIA, Rocquencourt, France, and partially supported by the Brazilian Research Council (CNPq).
15.
A new SQP algorithm for equality constrained optimization problems is proposed. By applying a quasi-Newton method to an augmented Lagrangian function, the algorithm derives an equality constrained quadratic programming subproblem, from which a descent direction is obtained. The penalty parameter is adjusted automatically and is prevented from tending to infinity. To overcome the Maratos effect, the augmented Lagrangian is adopted as the merit function, combined with a second-order step correction. Under suitable conditions, the algorithm is proved to be globally convergent with a superlinear convergence rate.
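The second-order correction mentioned above can be sketched in its standard form, assuming a full-row-rank constraint Jacobian A (an illustration of the usual anti-Maratos device, not necessarily the paper's exact step):

```python
import numpy as np

def second_order_correction(A, c_trial):
    # Least-norm correction d_cor with A d_cor = -c(x + d); adding it to the
    # SQP step restores constraint accuracy and lets unit steps be accepted.
    return -A.T @ np.linalg.solve(A @ A.T, c_trial)
```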
16.
S. M. Grzegórski 《Mathematical Programming》1987,37(1):91-116
This paper deals with the application of multilevel least-change Newton-like methods for solving twice continuously differentiable equality constrained optimization problems. We define multilevel partial-inverse least-change updates, multilevel least-change Newton-like methods without derivatives, and multilevel projections of fragments of the matrix for Newton-like methods without derivatives. Local and q-superlinear convergence of these methods is proved. The theorems here also imply local and q-superlinear convergence of many standard Newton-like methods for unconstrained and equality constrained optimization problems.
17.
G. Haeser 《Operations Research Letters》2018,46(3):295-299
In second-order algorithms, we investigate the relevance of the constant rank of the full set of active constraints in ensuring global convergence to a second-order stationary point under a constraint qualification. We show that second-order stationarity is not to be expected in the non-constant-rank case if the growth of the so-called tangent AKKT2 sequences is not controlled. Since no algorithm controls their growth, we argue that there is a theoretical limitation of algorithms in finding second-order stationary points without constant rank assumptions.
18.
Self-adaptive velocity particle swarm optimization for solving constrained optimization problems
Particle swarm optimization (PSO) was originally developed as an unconstrained optimization technique and therefore lacks an explicit mechanism for handling constraints. When solving constrained optimization problems (COPs) with PSO, the existing research mainly focuses on how to handle constraints, and the impact of constraints on the inherent search mechanism of PSO has scarcely been explored. Motivated by this fact, in this paper we mainly investigate how to utilize the impact of constraints (or the knowledge about the feasible region) to improve the optimization ability of the particles. Based on these investigations, we present a modified PSO, called self-adaptive velocity particle swarm optimization (SAVPSO), for solving COPs. To handle constraints, SAVPSO adopts our recently proposed dynamic-objective constraint-handling method (DOCHM), which is essentially a constituent part of the inherent search mechanism of the integrated SAVPSO, i.e., DOCHM + SAVPSO. The performance of the integrated SAVPSO is tested on a well-known benchmark suite, and the experimental results show that appropriately utilizing the knowledge about the feasible region can substantially improve the performance of the underlying algorithm in solving COPs.
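For context, the canonical PSO update that variants such as SAVPSO modify (the parameter values below are conventional defaults, not those of the paper):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    # Baseline PSO velocity/position update; x, v, pbest have shape
    # (n_particles, dim) and gbest has shape (dim,).
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```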
19.
A trust region algorithm for equality constrained optimization
A trust region algorithm for equality constrained optimization is proposed that employs a differentiable exact penalty function. Under certain conditions global convergence and local superlinear convergence results are proved.
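A basic trust-region building block, for orientation only (the paper's method couples the trust region with a differentiable exact penalty): the Cauchy point, i.e., the minimizer of the quadratic model along the steepest-descent direction within the radius:

```python
import numpy as np

def cauchy_step(g, B, delta):
    # Minimize m(d) = g^T d + 0.5 d^T B d along d = -t g, subject to
    # ||d|| <= delta; assumes g != 0.
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    t = delta / gnorm
    if gBg > 0:
        t = min(t, (gnorm ** 2) / gBg)
    return -t * g
```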