Similar Literature
20 similar documents found (query time: 0 ms)
1.
We classify in this paper different augmented Lagrangian functions into three unified classes. Based on two unified formulations, we construct, respectively, two convergent augmented Lagrangian methods that do not require the global solvability of the Lagrangian relaxation and whose global convergence properties do not require the boundedness of the multiplier sequence or any constraint qualification. In particular, when the sequence of iteration points does not converge, we give a necessary and sufficient condition for the convergence of the objective value of the iteration points. We further derive two multiplier algorithms which require the same convergence condition and possess the same properties as the proposed convergent augmented Lagrangian methods. The existence of a global saddle point is crucial to guarantee the success of a dual search. We generalize in the second half of this paper the existence theorems for a global saddle point in the literature under the framework of the unified classes of augmented Lagrangian functions.
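The first-order multiplier update common to this family of methods can be sketched on a toy quadratic program. This is an illustrative sketch only: the problem data, function name, and parameter choices below are assumptions, not taken from the paper.

```python
import numpy as np

# Method-of-multipliers sketch for a hypothetical quadratic toy problem:
#   min (1/2) x^T Q x   s.t.   a^T x = b
# The inner minimization of the augmented Lagrangian
#   L_rho(x, lam) = (1/2) x^T Q x + lam (a^T x - b) + (rho/2)(a^T x - b)^2
# reduces to a linear solve here; the outer step is the classical
# first-order update lam <- lam + rho (a^T x - b).
def method_of_multipliers(Q, a, b, rho=10.0, iters=50):
    lam = 0.0
    for _ in range(iters):
        # stationarity of L_rho in x: (Q + rho a a^T) x = (rho b - lam) a
        x = np.linalg.solve(Q + rho * np.outer(a, a), (rho * b - lam) * a)
        lam += rho * (a @ x - b)   # first-order multiplier update
    return x, lam

Q = np.eye(2) * 2.0
a = np.array([1.0, 1.0])
x, lam = method_of_multipliers(Q, a, b=1.0)
# converges to x = (0.5, 0.5), lam = -1 for this data
```

Here the inner subproblem is solved exactly; the abstract's methods are precisely about relaxing such global-solvability requirements on the Lagrangian relaxation.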

2.
A tolerant algorithm for linearly constrained optimization calculations   (cited by 3: 0 self-citations, 3 by others)
Two extreme techniques when choosing a search direction in a linearly constrained optimization calculation are to take account of all the constraints or to use an active set method that satisfies selected constraints as equations, the remaining constraints being ignored. We prefer an intermediate method that treats all inequality constraints with small residuals as inequalities with zero right hand sides and that disregards the other inequality conditions. Thus the step along the search direction is not restricted by any constraints with small residuals, which can help efficiency greatly, particularly when some constraints are nearly degenerate. We study the implementation, convergence properties and performance of an algorithm that employs this idea. The implementation considerations include the choice and automatic adjustment of the tolerance that defines the small residuals, the calculation of the search directions, and the updating of second derivative approximations. The main convergence theorem imposes no conditions on the constraints except for boundedness of the feasible region. The numerical results indicate that a Fortran implementation of our algorithm is much more reliable than the software that was tested by Hock and Schittkowski (1981). Therefore the algorithm seems to be very suitable for general use, and it is particularly appropriate for semi-infinite programming calculations that have many linear constraints that come from discretizations of continua.
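The "small residual" partitioning can be illustrated as follows. This is a hypothetical sketch of the constraint classification only (function name, tolerance, and data are invented); it is not Powell's implementation, which also adjusts the tolerance automatically.

```python
import numpy as np

# Sketch: at a feasible point x of A x <= b, split the inequalities into
# a working set of "small residual" constraints -- which the tolerant
# approach treats as equalities with zero right-hand sides -- and the
# remaining constraints, which are ignored when choosing the direction.
def working_set(A, b, x, tol=1e-6):
    residuals = b - A @ x          # nonnegative at a feasible point
    small = residuals <= tol       # nearly active (possibly degenerate)
    return np.where(small)[0], np.where(~small)[0]

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
x = np.array([1.0, 1.0 - 1e-8])    # second constraint nearly active
active, ignored = working_set(A, b, x, tol=1e-6)
# active -> [0, 1], ignored -> [2] for this data
```

Grouping nearly active constraints this way, rather than by exact activity, is what avoids tiny steps when constraints are almost degenerate.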

3.
In this paper, we study inverse optimization for linearly constrained convex separable programming problems that have wide applications in industrial and managerial areas. For a given feasible point of a convex separable program, the inverse optimization is to determine whether the feasible point can be made optimal by adjusting the parameter values in the problem, and when the answer is positive, find the parameter values that have the smallest adjustments. A necessary and sufficient condition is given for a feasible point to be able to become optimal by adjusting parameter values. Inverse optimization formulations are presented with the ℓ1 and ℓ2 norms. These inverse optimization problems are either linear programs, when the ℓ1 norm is used in the formulation, or convex quadratic separable programs, when the ℓ2 norm is used.

4.
An algorithm for nonlinear programming problems with equality constraints is presented which is globally and superlinearly convergent. The algorithm employs a recursive quadratic programming scheme to obtain a search direction and uses a differentiable exact augmented Lagrangian as line search function to determine the steplength along this direction. It incorporates an automatic adjustment rule for the selection of the penalty parameter and avoids the need to evaluate second-order derivatives of the problem functions. Some numerical results are reported.

5.
A trust region algorithm for equality constrained optimization   (cited by 2: 0 self-citations, 2 by others)
A trust region algorithm for equality constrained optimization is proposed that employs a differentiable exact penalty function. Under certain conditions global convergence and local superlinear convergence results are proved.

6.
Duality for Multiobjective Optimization via Nonlinear Lagrangian Functions   (cited by 1: 0 self-citations, 1 by others)
In this paper, a strong nonlinear Lagrangian duality result is established for an inequality constrained multiobjective optimization problem. This duality result improves and unifies existing strong nonlinear Lagrangian duality results in the literature. As a direct consequence, a strong nonlinear Lagrangian duality result for an inequality constrained scalar optimization problem is obtained. Also, a variant set of conditions is used to derive another version of the strong duality result via nonlinear Lagrangian for an inequality constrained multiobjective optimization problem.

7.
The augmented Lagrangian SQP subroutine OPALQP was originally designed for small-to-medium sized constrained optimization problems in which the main calculation on each iteration, the solution of a quadratic program, involves dense, rather than sparse, matrices. In this paper, we consider some reformulations of OPALQP which are better able to take advantage of sparsity in the objective function and constraints. The modified versions of OPALQP differ from the original in using sparse data structures for the Jacobian matrix of constraints and in replacing the dense quasi-Newton estimate of the inverse Hessian of the Lagrangian by a sparse approximation to the Hessian. We consider a very simple sparse update for estimating ∇²L and also investigate the benefits of using exact second derivatives, noting in the latter case that safeguards are needed to ensure that a suitable search direction is obtained when ∇²L is not positive definite on the null space of the active constraints. The authors are grateful to John Reid and Nick Gould of the Rutherford Appleton Laboratory for a number of helpful and interesting discussions. Thanks are also due to Laurence Dixon for comments which led to the clarification of some parts of the paper. This work has been partly supported by a CAPES Research Studentship funded by the Brazilian Government.

8.
Reduced Hessian methods have been shown to be successful for equality constrained problems. However, there are few results on reduced Hessian methods for general constrained problems. In this paper we propose a method for general constrained problems, based on Byrd and Schnabel's basis-independent algorithm. It can be regarded as a smooth extension of the standard reduced Hessian method. Research supported in part by NSF, AFOSR and ONR through NSF grant DMS-8920550.

9.
A potential reduction algorithm is proposed for optimization of a convex function subject to linear constraints. At each step of the algorithm, a system of linear equations is solved to get a search direction, and the Armijo rule is used to determine a stepsize. It is proved that the algorithm is globally convergent. Computational results are reported.
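A minimal backtracking implementation of the Armijo rule the abstract refers to is sketched below; the sufficient-decrease constant, shrink factor, and test function are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Armijo backtracking: shrink the step alpha until the sufficient-decrease
# condition  f(x + alpha d) <= f(x) + sigma * alpha * grad(x)^T d  holds.
def armijo_step(f, grad, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    alpha, fx, slope = alpha0, f(x), grad(x) @ d
    assert slope < 0, "d must be a descent direction"
    while f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= beta
    return alpha

f = lambda x: float(x @ x)          # hypothetical test function ||x||^2
grad = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
d = -grad(x)                        # steepest descent direction
alpha = armijo_step(f, grad, x, d)  # accepts alpha = 0.5 on this data
```

The full step alpha = 1 overshoots the minimizer here, so one halving is needed before the sufficient-decrease test passes.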

10.
In this paper we study two inexact fast augmented Lagrangian algorithms for solving linearly constrained convex optimization problems. Our methods rely on a combination of the excessive-gap-like smoothing technique introduced in Nesterov (SIAM J Optim 16(1):235–249, 2005) and the general inexact oracle framework studied in Devolder (Math Program 146:37–75, 2014). We develop and analyze two augmented-Lagrangian-based algorithmic instances with constant and adaptive smoothness parameters, and derive a total computational complexity estimate in terms of projections onto a simple primal feasible set for each algorithm. For the constant parameter algorithm we obtain an overall computational complexity of order \(\mathcal {O}(\frac{1}{\epsilon ^{5/4}})\), while for the adaptive one we obtain \(\mathcal {O}(\frac{1}{\epsilon })\) total projections onto the primal feasible set in order to achieve an \(\epsilon \)-optimal solution for the original problem.

11.
Optimization, 2012, 61(4-5): 459-466
The algorithm presented in this article incorporates the trust region method (TR) into the restricted decomposition algorithm for convex-constrained nonlinear problems (RSDCC) to solve the master problem of RSDCC. The global convergence is proved. The computational comparison between the presented algorithm and RSDCC is given. The results show that the former is much better than the latter.

12.
Algorithms to solve constrained optimization problems are derived. These schemes combine an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Since an augmented Lagrangian can be ill conditioned, a preconditioning strategy is developed to eliminate the instabilities associated with the penalty term. A criterion for deciding when to increase the penalty is presented. This work was supported by the National Science Foundation, Grant Nos. MCS-81-01892, DMS-84-01758, and DMS-85-20926, and by the Air Force Office of Scientific Research, Grant No. AFOSR-ISSA-860091.

13.
This paper considers the problem of minimizing a special convex function subject to one linear constraint. Based upon a theorem for lower and upper bounds on the Lagrange multiplier, a fully polynomial time approximation scheme is proposed. The efficiency of the algorithm is demonstrated by a computational experiment.
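The role of multiplier bounds can be illustrated on the simplest instance of this problem class: once the optimal Lagrange multiplier is bracketed, bisection locates it, because the constraint residual is monotone in the multiplier. A hypothetical sketch (the quadratic objective, the bounds, and all names are assumptions, not from the paper):

```python
import numpy as np

# Toy separable problem:  min sum_i (x_i - c_i)^2  s.t.  sum_i x_i = b.
# The Lagrangian minimizer is x_i(lam) = c_i - lam/2, and the constraint
# residual sum_i x_i(lam) - b is decreasing in lam, so bisection between
# lower and upper multiplier bounds finds the optimal multiplier.
def solve_by_bisection(c, b, lo=-1e6, hi=1e6, tol=1e-10):
    residual = lambda lam: np.sum(c - lam / 2.0) - b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid     # multiplier too small: residual still positive
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return c - lam / 2.0, lam

c = np.array([1.0, 2.0, 3.0])
x, lam = solve_by_bisection(c, b=3.0)
# for this data: lam = 2, x = (0, 1, 2)
```

An approximation scheme of the kind in the abstract refines exactly such a multiplier bracket to a prescribed accuracy.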

14.
In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms utilize stepsize strategies based on differentiable penalty functions and quadratic programming subproblems. The essential difference between the algorithms is in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that is dependent on the penalty functions. The quadratic objective function utilizes an approximate Hessian of the Lagrangian augmented by the penalty functions. In this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions. The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, the boundedness follows as the strategy is always satisfied for finite values of the parameter. These properties are utilized to establish global convergence and the condition under which unit stepsizes are achieved. There is also a compatibility between the quadratic objective function and the stepsize strategy to ensure the consistency of the properties for unit steps and subsequent convergence rates. This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.

15.
The usual approach to Newton's method for mathematical programming problems with equality constraints leads to the solution of linear systems of n + m equations in n + m unknowns, where n is the dimension of the space and m is the number of constraints. Moreover, these linear systems are never positive definite. It is our feeling that this approach is somewhat artificial, since in the unconstrained case the linear systems are very often positive definite. With this in mind, we present an alternate Newton-like approach for the constrained problem in which all the linear systems are of order less than or equal to n. Furthermore, when the Hessian of the Lagrangian at the solution is positive definite (a situation frequently occurring), all our systems will be positive definite. Hence, in all cases, our Newton-like method offers greater numerical stability. We demonstrate that the convergence properties of this Newton-like method are superior to those of the standard approach to Newton's method. The operation count for the new method using Gaussian elimination is of the same order as the operation count for the standard method. However, if the Hessian of the Lagrangian at the solution is positive definite and we use Cholesky decomposition, then the order of the operation count for the new method is half that for the standard approach to Newton's method. This theory is generalized to problems with both equality and inequality constraints.
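For reference, the standard approach the abstract argues against solves one indefinite (n + m) × (n + m) KKT system per Newton step. A small numerical illustration with invented toy data (not from the paper):

```python
import numpy as np

# One Newton step for  min f(x)  s.t.  A x = b  via the KKT system
#   [ H  A^T ] [ dx   ]   [ -grad f(x) ]
#   [ A   0  ] [ dlam ] = [ -(A x - b) ]
# which is indefinite even when H is positive definite.
H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the Lagrangian
A = np.array([[1.0, 1.0]])               # one linear equality constraint
g = np.array([2.0, 0.0])                 # gradient of f = ||x||^2 at x = (1, 0)
c = np.array([0.0])                      # residual A x - b, with b = 1

n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = -np.concatenate([g, c])
step = np.linalg.solve(K, rhs)
dx, dlam = step[:n], step[n:]
# dx = (-0.5, 0.5): the step lands on the constrained minimizer (0.5, 0.5)
```

The saddle-point matrix K has a negative eigenvalue here despite H being positive definite, which is the "never positive definite" drawback the abstract's reduced, order-n systems avoid.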

16.
A method of constructing test problems with known global solution for a class of reverse convex programs or linear programs with an additional reverse convex constraint is presented. The initial polyhedron is assumed to be a hypercube. The method then systematically generates cuts that slice the cube in such a way that a prespecified global solution on its edge remains intact. The proposed method does not require the solution of linear programs or systems of linear equations as is often required by existing techniques. The author would like to thank Prof. S. E. Jacobsen for his valuable remarks on initial drafts of this paper and the referees for their constructive suggestions.

17.
Trust region methods are powerful and effective optimization methods. The conic model method is a new type of method with more information available at each iteration than standard quadratic-based methods. The advantages of the above two methods can be combined to form a more powerful method for constrained optimization. The trust region subproblem of our method is to minimize a conic function subject to the linearized constraints and trust region bound. At the same time, the new algorithm still possesses robust global properties. The global convergence of the new algorithm under standard conditions is established.

18.
We study the convergence properties of reduced Hessian successive quadratic programming for equality constrained optimization. The method uses a backtracking line search, and updates an approximation to the reduced Hessian of the Lagrangian by means of the BFGS formula. Two merit functions are considered for the line search: the ℓ1 function and the Fletcher exact penalty function. We give conditions under which local and superlinear convergence is obtained, and also prove a global convergence result. The analysis allows the initial reduced Hessian approximation to be any positive definite matrix, and does not assume that the iterates converge, or that the matrices are bounded. The effects of a second order correction step, a watchdog procedure and of the choice of null space basis are considered. This work can be seen as an extension to reduced Hessian methods of the well known results of Powell (1976) for unconstrained optimization. This author was supported, in part, by National Science Foundation grant CCR-8702403, Air Force Office of Scientific Research grant AFOSR-85-0251, and Army Research Office contract DAAL03-88-K-0086. This author was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contracts W-31-109-Eng-38 and DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
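The BFGS update referred to in the abstract has a compact closed form; a sketch on invented data (the formula is the standard one, the data are not from the paper):

```python
import numpy as np

# BFGS update of a (reduced) Hessian approximation B:
#   B+ = B - (B s s^T B) / (s^T B s) + (y y^T) / (y^T s)
# where s is the step and y the change in the (reduced) gradient.
# Positive definiteness is preserved whenever the curvature s^T y > 0.
def bfgs_update(B, s, y):
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

B = np.eye(2)                    # initial positive definite approximation
s = np.array([1.0, 0.0])         # step
y = np.array([2.0, 0.5])         # gradient change; s @ y = 2 > 0
B_new = bfgs_update(B, s, y)
# B_new satisfies the secant condition B_new @ s == y
```

The abstract's point that the initial approximation may be *any* positive definite matrix is natural here: the update itself keeps the approximation positive definite as long as the curvature condition holds.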

19.
An Augmented Lagrangian algorithm that uses Gauss-Newton approximations of the Hessian at each inner iteration is introduced and tested using a family of Hard-Spheres problems. The Gauss-Newton model convexifies the quadratic approximations of the Augmented Lagrangian function, thus increasing the efficiency of the iterative quadratic solver. The resulting method is considerably more efficient than the corresponding algorithm that uses true Hessians. A comparative study using the well-known package LANCELOT is presented.

20.
The primary concern of this paper is to investigate stability conditions for the mathematical program: find x ∈ Eⁿ that maximizes f(x) subject to g_j(x) ≤ 0 for some j ∈ J, where f is a real scalar-valued function and each g_j is a real vector-valued function of possibly infinite dimension. It should be noted that we allow, possibly infinitely many, disjunctive forms. In an earlier work, Evans and Gould established stability theorems when g is a continuous finite-dimensional real-vector function and J = {1}. It is pointed out that the results of this paper reduce to the Evans-Gould results under their assumptions. Furthermore, since we use a slightly more general definition of lower and upper semicontinuous point-to-set mappings, we can dispense with the continuity of g (except in a few instances where it is implied by convexity assumptions).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号