20 related records found.
1.
R. H. Byrd, Journal of Optimization Theory and Applications, 1978, 26(4):485-500
In this study, we consider a modification of the method of multipliers of Hestenes and Powell in which the iteration is diagonalized, that is, only a fixed finite number of iterations of Newton's method are taken in the primal minimization stage. Conditions are obtained for quadratic convergence of the standard method, and it is shown that a diagonalization where two Newton steps are taken preserves the quadratic convergence for all multiplier update formulas satisfying these conditions. This work constitutes part of the author's doctoral dissertation in the Department of Mathematical Sciences, Rice University, under the direction of Professor R. A. Tapia and was supported in part by ERDA Contract No. E-(40-1)-5046. The author would like to thank Professor Richard Tapia for his comments, suggestions, and discussions on this material.
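For orientation, the classical Hestenes–Powell scheme referred to above can be written (in our notation, not the paper's) as alternately minimizing the augmented Lagrangian and updating the multipliers:

    L_c(x, λ) = f(x) + λᵀg(x) + (c/2)‖g(x)‖²,
    x_k ≈ argmin_x L_c(x, λ_k),    λ_{k+1} = λ_k + c g(x_k).

In the diagonalized variant studied here, the inner minimization is replaced by a fixed number of Newton steps (two, in the main result).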
2.
D. Sciutti, Journal of Optimization Theory and Applications, 1977, 22(2):227-237
The connection between the convergence of the Hestenes method of multipliers and the existence of augmented Lagrange multipliers for the constrained minimum problem (P): minimize f(x), subject to g(x) = 0, is investigated under very general assumptions on X, f, and g. In the first part, we use the existence of augmented Lagrange multipliers as a sufficient condition for the convergence of the algorithm. In the second part, we prove that this is also a necessary condition for the convergence of the method and the boundedness of the sequence of the multiplier estimates. Further, we give very simple examples to show that the existence of augmented Lagrange multipliers is independent of smoothness conditions on f and g. Finally, an application to the linear-convex problem is given.
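As a reminder of the notion involved (roughly speaking, and in our notation rather than the paper's): λ* is an augmented Lagrange multiplier for (P) if, for some penalty parameter c > 0, every solution of (P) minimizes the augmented Lagrangian

    L_c(x, λ*) = f(x) + λ*ᵀg(x) + (c/2)‖g(x)‖²

over the whole space, so that the constrained problem reduces to a single unconstrained minimization.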
3.
Recently, Kort and Bertsekas (Ref. 1) and Hartman (Ref. 2) independently presented a new penalty function algorithm of exponential type for solving inequality-constrained minimization problems. The main purpose of this work is to prove a rate-of-convergence result for a modification of the exponential penalty method proposed by these authors. We show that the sequence of points generated by the modified algorithm converges linearly to the solution of the original nonconvex problem, and that the sequence of estimates of the optimal Lagrange multiplier converges to this multiplier superlinearly. The question of convergence of the modified method is also discussed. The present paper hinges on ideas of Mangasarian (Ref. 3), but the case considered here is not covered by Mangasarian's theory.
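For context, an exponential-type penalty scheme of the general kind introduced in Refs. 1–2 augments the objective for constraints g_i(x) ≤ 0 with a term such as (our illustration of the generic form, not the authors' exact algorithm)

    f(x) + (1/c) Σ_i λ_i ( exp(c g_i(x)) − 1 ),

paired with a multiplicative multiplier update λ_i ← λ_i exp(c g_i(x_k)).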
4.
S. T. Glad, Journal of Optimization Theory and Applications, 1979, 28(2):135-156
The convergence properties of different updating methods for the multipliers in augmented Lagrangians are considered. It is assumed that the updating of the multipliers takes place after each line search of a quasi-Newton method. Two of the updating methods are shown to be locally linearly convergent, while a third has local superlinear convergence. Modifications of the algorithms to ensure global convergence are considered. The results of a computational comparison with other methods are presented. This work was supported by the Swedish Institute of Applied Mathematics.
5.
Operations Research Letters, 2019, 47(3):185-189
We consider the augmented Lagrangian method (ALM) for constrained optimization problems in the presence of convex inequality and convex abstract constraints. We focus on the case where the Lagrangian sub-problems are solved only up to approximate stationary points, with increasing accuracy. We analyze two different criteria of approximate stationarity for the sub-problems and prove global convergence of ALM to stationary points in both cases.
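A minimal sketch, under our own assumptions, of the kind of inexact ALM loop described above: the quadratic augmented Lagrangian for inequality constraints is minimized only to a stationarity tolerance tol_k that is tightened at every outer iteration. The problem data f, g and the choice of inner solver are placeholders, not taken from the paper.

```python
# Hypothetical inexact augmented Lagrangian loop for  min f(x)  s.t.  g(x) <= 0,
# where g maps R^n to R^m. Each subproblem is solved only approximately.
import numpy as np
from scipy.optimize import minimize

def inexact_alm(f, g, x0, m, c=10.0, outer_iters=20):
    x, lam = np.asarray(x0, dtype=float), np.zeros(m)
    for k in range(outer_iters):
        tol_k = 10.0 ** (-k)              # increasingly accurate stationarity criterion

        def aug_lag(z):                   # classical quadratic augmented Lagrangian
            gz = np.asarray(g(z))
            return f(z) + np.sum(np.maximum(0.0, lam + c * gz) ** 2 - lam ** 2) / (2 * c)

        res = minimize(aug_lag, x, method="L-BFGS-B",
                       options={"gtol": tol_k, "maxiter": 200})
        x = res.x
        lam = np.maximum(0.0, lam + c * np.asarray(g(x)))   # first-order multiplier update
    return x, lam
```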
6.
In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual augmented Lagrangian method except that it uses an exponential penalty function in place of the usual quadratic one. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm except that it uses a logarithmic/entropy proximal term in place of a quadratic one. We strengthen substantially the available convergence results for these methods, and we derive their convergence rate when applied to linear programs. Research supported by the National Science Foundation under Grant DDM-8903385 and the Army Research Office under Grant DAAL03-86-K-0171.
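A short sketch of the exponential multiplier iteration described above (our own construction, with placeholder problem data); in the dual, each such iteration corresponds to a proximal step with a logarithmic/entropy term rather than a quadratic one.

```python
# Exponential method of multipliers for  min f(x)  s.t.  g(x) <= 0:
# the quadratic penalty is replaced by an exponential one, and the
# multipliers are updated multiplicatively (hence stay positive).
import numpy as np
from scipy.optimize import minimize

def exp_multiplier_method(f, g, x0, m, c=1.0, outer_iters=30):
    x, lam = np.asarray(x0, dtype=float), np.ones(m)
    for _ in range(outer_iters):
        def exp_lagrangian(z):
            gz = np.asarray(g(z))
            return f(z) + np.sum(lam * (np.exp(c * gz) - 1.0)) / c

        x = minimize(exp_lagrangian, x, method="BFGS").x
        lam = lam * np.exp(c * np.asarray(g(x)))   # multiplicative multiplier update
    return x, lam
```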
7.
8.
E. J. Beltrami, Journal of Optimization Theory and Applications, 1985, 45(3):477-480
Implementation of the penalty function method for constrained optimization poses numerical difficulties as the penalty parameter increases. To offset this problem, one often resorts to Newton's method. In this note, working in the context of the penalty function method, we establish an intimate connection between the second-order updating formulas that result from Newton's method on the primal problem and Newton's method on the dual problem. The author wishes to thank Professor R. A. Tapia for his careful review of this note; he has contributed significantly to its content through several crucial observations.
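For context (our notation): with equality constraints g(x) = 0, the classical quadratic penalty function is

    P_c(x) = f(x) + (c/2)‖g(x)‖²,

and its minimization by Newton's method becomes increasingly ill-conditioned as c grows, which is the numerical difficulty referred to above.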
9.
M. A. Ibiejugba, Journal of Optimization Theory and Applications, 1985, 47(2):195-216
The paper studies the role of the multipliers when the multiplier method is applied as a computational technique for minimizing penalized cost functionals for optimal control problems characterized by linear systems and integral quadratic costs. The author gratefully thanks two anonymous referees for many helpful suggestions, which led to a major improvement in both the quality and clarity of the paper, and Professor Angelo Miele for his encouragement.
10.
We analyze the rate of local convergence of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and variational analysis of the projection operator in the symmetric matrix space. Without requiring strict complementarity, we prove that, under the constraint nondegeneracy condition and the strong second-order sufficient condition, the rate of convergence is linear and the ratio constant is proportional to 1/c, where c is the penalty parameter, provided that c exceeds a certain threshold.
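Stated schematically (our notation, not the paper's), the claim is a local estimate of the form

    ‖(x_{k+1}, λ_{k+1}) − (x*, λ*)‖ ≤ (κ/c) ‖(x_k, λ_k) − (x*, λ*)‖   for all c above the threshold,

with κ independent of c, so larger penalty parameters yield a smaller linear ratio.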
The research of Defeng Sun is partly supported by the Academic Research Fund from the National University of Singapore. The research of Jie Sun and Liwei Zhang is partly supported by Singapore–MIT Alliance and by Grants RP314000-042/057-112 of the National University of Singapore. The research of Liwei Zhang is also supported by the National Natural Science Foundation of China under project grant no. 10471015 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China.
11.
J. T. Betts, Journal of Optimization Theory and Applications, 1977, 21(2):137-174
This paper describes an accelerated multiplier method for solving the general nonlinear programming problem. The algorithm poses a sequence of unconstrained optimization problems, which are solved using a rank-one recursive algorithm described in an earlier paper. Multiplier estimates are obtained by minimizing the error in the Kuhn-Tucker conditions using a quadratic programming algorithm. The convergence of the sequence of unconstrained problems is accelerated by a Newton-Raphson extrapolation process. The numerical effectiveness of the algorithm is demonstrated on a relatively large set of test problems. This work was supported by the US Air Force under Contract No. F04701-74-C-0075.
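As an illustration of the multiplier-estimation step described above, the following sketch (our own construction, using a nonnegative least-squares solver rather than the paper's quadratic programming algorithm) picks λ ≥ 0 minimizing the norm of the Kuhn-Tucker residual for constraints g(x) ≥ 0.

```python
# Hypothetical multiplier estimate at the current iterate x: choose lam >= 0
# minimizing  || grad_f(x) - J(x)^T lam ||, where J is the Jacobian of g.
# grad_f and jac_g are assumed to be callables returning 1-D and 2-D arrays.
import numpy as np
from scipy.optimize import nnls

def multiplier_estimate(grad_f, jac_g, x):
    J = np.atleast_2d(jac_g(x))            # m x n Jacobian of the constraints
    lam, residual = nnls(J.T, grad_f(x))   # nonnegative least-squares fit of grad_f by J^T lam
    return lam, residual
```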
12.
In this paper, we construct appropriate aggregate mappings and a new aggregate constraint homotopy (ACH) equation by converting equality constraints to inequality constraints and introducing two variable parameters. We then propose an ACH method for nonlinear programming problems with inequality and equality constraints. Under suitable conditions, we obtain global convergence of this ACH method, which allows us to prove the existence of a bounded smooth path connecting a given point to a Karush–Kuhn–Tucker point of the nonlinear programming problem. Numerical tracking of this path leads to an implementable, globally convergent algorithm. A numerical procedure is given to implement the proposed ACH method, and computational results are reported.
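For readers unfamiliar with aggregate functions: a common choice (not necessarily the one used in this paper) replaces the constraints g_i(x) ≤ 0, i = 1, …, m, by the single smooth aggregate

    g(x, t) = t · ln Σ_{i=1}^{m} exp( g_i(x) / t ),   t > 0,

which tends to max_i g_i(x) as t → 0 and underlies many aggregate constraint homotopy methods.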
13.
We study the projected gradient algorithm for linearly constrained optimization. Wolfe (Ref. 1) has produced a counterexample to show that this algorithm can jam. However, his counterexample is only C¹, and it has been conjectured that the algorithm is convergent for C² functions. We show that this conjecture is partly right. We also show that one needs more assumptions to prove convergence, since we present a family of counterexamples. We finally give a demonstration that no jamming can occur for quadratic objective functions. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
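A minimal sketch of a projected gradient iteration of the kind analyzed above, specialized for illustration to simple bound constraints (the paper treats general linear constraints); the fixed step size and stopping rule are our own placeholders.

```python
# Projected gradient for  min f(x)  s.t.  lo <= x <= hi  (a simple linearly
# constrained feasible set, used here only for illustration).
import numpy as np

def projected_gradient(grad_f, x0, lo, hi, step=1e-2, iters=1000, tol=1e-8):
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x_new = np.clip(x - step * grad_f(x), lo, hi)   # gradient step, then projection
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```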
14.
N. Watanabe, Y. Nishimura, M. Matsubara, Journal of Optimization Theory and Applications, 1978, 25(2):181-193
Although the method of multipliers can close the duality gaps that often appear between the true optimum and the saddle point of the ordinary Lagrangian in large-scale system optimization, the generalized Lagrangian cannot be decomposed in a straightforward manner because of its quadratic character. A technique using a linear approximation of the perturbed generalized Lagrangian has recently been proposed by Stephanopoulos and Westerberg for the decomposition. In this paper, another attractive decomposition technique, which transforms the nonseparable cross-product terms into the minimum of a sum of separable terms, is proposed. The computational effort required for large-scale system optimization can be reduced considerably by adopting the latter technique in place of the former, as illustrated by applying both techniques to an optimization problem for a chemical reactor system. The authors would like to acknowledge the valuable comments given by Professor D. G. Luenberger of Stanford University.
15.
R. Mifflin, Journal of Optimization Theory and Applications, 1976, 18(2):199-228
Convergence of a method of centers algorithm for solving nonlinear programming problems is considered. The algorithm is defined so that the subproblems that must be solved during its execution may be solved by finite-step procedures. Conditions are given under which the algorithm generates sequences of feasible points and constraint multiplier vectors that have accumulation points satisfying the Fritz John or the Kuhn-Tucker optimality conditions. Under stronger assumptions, linear convergence rates are established for the sequences of objective function, constraint function, feasible point, and multiplier values. This work was supported in part by the National Aeronautics and Space Administration, Predoctoral Traineeship No. NsG(T)-117, and by the National Science Foundation, Grants No. GP-25081 and No. GK-32710. The author wishes to thank Donald M. Topkis for his valuable criticism of an earlier version of this paper and a referee for his helpful comments.
16.
On convergence properties of a least-distance programming procedure for minimization problems under linear constraints
G. L. Xue, Journal of Optimization Theory and Applications, 1986, 50(2):365-370
In Ref. 1, Bazaraa and Goode provided an algorithm for solving a nonlinear programming problem with linear constraints. In this paper, we show that this algorithm possesses good convergence properties. This paper was written under the guidance of Associate Professor C. Y. Wang; the author takes great pleasure in thanking him.
17.
18.
The paper analyzes the rate of local convergence of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Under the constraint nondegeneracy condition and the strong second-order sufficient condition, we demonstrate that the sequence of iterates generated by the augmented Lagrangian method locally converges to a local minimizer at a linear rate, whose ratio constant is proportional to 1/τ, where the penalty parameter τ is not less than a certain threshold. Importantly, the analysis does not require the strict complementarity condition.
Supported by the National Natural Science Foundation of China under Project 10771026 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
19.
Application of the alternating direction method of multipliers to separable convex programming problems
Masao Fukushima, Computational Optimization and Applications, 1992, 1(1):93-111
This paper presents a decomposition algorithm for solving convex programming problems with separable structure. The algorithm is obtained by applying the alternating direction method of multipliers to the dual of the convex programming problem to be solved. In particular, the algorithm reduces to the ordinary method of multipliers when the problem is regarded as nonseparable. Under the assumption that both the primal and the dual problem have at least one solution and that the solution set of the primal problem is bounded, global convergence of the algorithm is established.
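For reference, the generic alternating direction method of multipliers for a problem written as minimize f(x) + h(z) subject to Ax + Bz = b (our notation; the paper applies the scheme to the dual of the separable program) iterates

    x^{k+1} = argmin_x L_c(x, z^k, λ^k),
    z^{k+1} = argmin_z L_c(x^{k+1}, z, λ^k),
    λ^{k+1} = λ^k + c (A x^{k+1} + B z^{k+1} − b),

where L_c(x, z, λ) = f(x) + h(z) + λᵀ(Ax + Bz − b) + (c/2)‖Ax + Bz − b‖².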
20.
Iterative methods such as Newton's behave poorly on ill-conditioned problems: they slow down to first-order convergence and lose accuracy. In this paper we give a detailed analysis of the convergence of a modified Newton method, which we call perturbed Newton, designed to overcome these usual drawbacks of Newton's method. The key feature of the method is its dependence on a parameter that affords a degree of freedom and introduces regularization. Choices for this parameter are proposed, and the theoretical analysis is illustrated through examples.
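A minimal sketch of one regularized Newton step of the general kind discussed above, using a Levenberg–Marquardt-style perturbation of our own choosing for a system F(x) = 0; the paper's specific parameterization may differ.

```python
# One perturbed Newton step for solving F(x) = 0: the normal-equations matrix
# J^T J is shifted by mu * I before solving, which regularizes ill-conditioned
# systems; for large mu the step bends toward a gradient-like direction on ||F||^2.
import numpy as np

def perturbed_newton_step(F, J, x, mu):
    Jx = np.atleast_2d(J(x))
    n = Jx.shape[1]
    return x - np.linalg.solve(Jx.T @ Jx + mu * np.eye(n), Jx.T @ F(x))
```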