20 similar references found.
1.
L.F. Escudero 《European Journal of Operational Research》1984,17(3):401-414
We present an algorithm for super-scale linearly constrained nonlinear programming (LCNP) based on Newton's method. In large-scale programming, solving the Newton equation at each iteration can be expensive and may not be justified when far from a local solution. For super-scale problems, the truncated Newton method (in which an approximate solution is computed by the conjugate-gradient method) is recommended; a diagonal BFGS preconditioning of the gradient is used so that the number of iterations needed to solve the equation is reduced. The procedure for updating that preconditioning is described for LCNP when the set of active constraints or the partition of basic, superbasic and nonbasic (structural) variables has changed.
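For orientation, here is a minimal sketch of the kind of truncated-Newton inner solve the abstract describes: the Newton equation is solved only approximately by preconditioned conjugate gradients, with a diagonal preconditioner standing in for the BFGS-based one. The function names and stopping rule are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def truncated_newton_step(grad, hess_vec, diag_precond, tol=0.1, max_iter=50):
    """Approximately solve H p = -grad by preconditioned conjugate gradients,
    truncating once the residual is small relative to ||grad||.  hess_vec(v)
    returns H @ v; diag_precond is a positive vector approximating diag(H)
    (e.g. maintained by diagonal BFGS-style updates)."""
    p = np.zeros_like(grad)
    r = -grad.copy()                     # residual of H p = -grad at p = 0
    z = r / diag_precond                 # apply the diagonal preconditioner
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:                     # negative curvature: stop early
            break
        alpha = rz / dHd
        p += alpha * d
        r -= alpha * Hd
        if np.linalg.norm(r) <= tol * np.linalg.norm(grad):
            break                        # truncation: accept inexact solution
        z = r / diag_precond
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    # If CG stopped immediately, fall back to preconditioned steepest descent.
    return p if np.any(p) else -grad / diag_precond
```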
2.
On the limited memory BFGS method for large scale optimization
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems. This work was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contract DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
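The L-BFGS direction is usually computed with the two-loop recursion sketched below, and the "simple scaling" of the initial Hessian approximation is in the spirit of the gamma = s'y / y'y factor shown here. This is a generic textbook sketch, not the authors' code.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion returning the L-BFGS direction -H @ grad, given the
    m most recent pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i
    (assumes at least one stored pair with y_i @ s_i > 0)."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
    # Scaled initial Hessian approximation H0 = gamma * I.
    s_last, y_last = s_list[-1], y_list[-1]
    gamma = (s_last @ y_last) / (y_last @ y_last)
    r = gamma * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```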
3.
E. Spedicato 《Journal of Optimization Theory and Applications》1982,36(2):175-190
We show that Tapia's quasi-Newton diagonalized approach to constrained minimization can be formulated in such a way that no linear systems have to be solved of dimension larger than the natural ones or that present singularities. Numerical experiments indicate fast local convergence, but also substantial difficulties with global convergence. This work was supported by the National Science Council of Italy in the framework of the SOFTMAT project. Part of the work was done during visits at the Computer Science Department, Stanford University, and at the Numerical Optimization Centre, Hatfield Polytechnic; the author thanks Prof. G. Golub and Prof. L. Dixon for providing a stimulating atmosphere. Thanks are also due to Dr. M. Bertocchi, University of Bergamo, for collaboration in performing the numerical experiments.
4.
J. E. Dennis Jr, H. J. Martinez, R. A. Tapia 《Journal of Optimization Theory and Applications》1989,61(2):161-178
In 1981, Dennis and Walker developed a convergence theory for structured secant methods which included the PSB and the DFP secant methods but not the straightforward structured version of the BFGS secant method. Here, we fill this gap in the theory by establishing a convergence theory for the structured BFGS secant method. A direct application of our new theory gives the first proof of local and q-superlinear convergence of the important structured BFGS secant method for the nonlinear least-squares problem, which is used by Dennis, Gay, and Welsch in the current version of the popular and successful NL2SOL code. This research was sponsored by SDIO/IST/ARO, AFOSR-85-0243, and DOE-DEFG05-86 ER-25017. A portion of this work is contained in the second author's doctoral thesis under the supervision of the other two authors in the Department of Mathematical Sciences, Rice University. The second author would like to thank Universidad del Valle, Cali, Colombia, for support during his graduate studies. An early draft of this work was presented at the SIAM 35th Anniversary Meeting, October 12–15, 1987, Denver, Colorado.
5.
Mehiddin Al‐Baali 《Numerical Algorithms》1999,22(1):99-112
This paper considers simple modifications of the limited memory BFGS (L-BFGS) method for large scale optimization. It describes algorithms in which alternating ways of re-using a given set of stored difference vectors are outlined. The proposed algorithms resemble the L-BFGS method, except that the initial Hessian approximation is defined implicitly, like the L-BFGS Hessian, in terms of some stored vectors rather than the usual choice of a multiple of the unit matrix. Numerical experiments show that the new algorithms yield a desirable improvement over the L-BFGS method.
6.
Narges Bidabadi 《Optimization》2013,62(6):797-815
We present an exact penalty approach for solving constrained non-linear least-squares problems in which the projected structured Hessian is approximated by a projected version of the structured BFGS formula, and we prove its local two-step Q-superlinear convergence. The numerical results obtained in an extensive comparative testing experiment confirm that the approach is reliable and efficient.
7.
R. Fontecilla 《Journal of Optimization Theory and Applications》1988,58(3):431-442
In this paper, a heuristic algorithm for nonlinear programming is presented. The algorithm uses two search directions, and the Hessian of the Lagrangian function is approximated with the BFGS secant update. We show that the sequence of iterates converges q-superlinearly if the sequence of approximating matrices satisfies a particular condition. Numerical results are presented.
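The BFGS secant update referred to here is the standard formula below; the curvature safeguard is a common convention added for the sketch and is not necessarily the rule used in the paper.

```python
import numpy as np

def bfgs_update(B, s, y, skip_tol=1e-8):
    """Standard BFGS update of a Hessian approximation B from the step
    s = x_new - x_old and gradient change y = g_new - g_old:
        B+ = B - (B s)(B s)' / (s'Bs) + y y' / (y's).
    The update is skipped when the curvature condition y's > 0 fails."""
    Bs = B @ s
    ys = y @ s
    if ys <= skip_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return B                          # safeguard: keep the old matrix
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / ys
```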
8.
This paper extends prior work by the authors on solving unconstrained nonlinear least squares problems using a factorized quasi-Newton technique. With this aim we use a primal-dual interior-point algorithm for nonconvex nonlinear programming. The factorized quasi-Newton technique is now applied to the Hessian of the Lagrangian function for the transformed problem, which is based on a logarithmic barrier formulation. We emphasize the importance of establishing and maintaining symmetric quasi-definiteness of the reduced KKT system. The algorithm then tries to choose a step size that reduces a merit function, and to select a penalty parameter that ensures descent directions along the iterative process. Computational results are included for a variety of constrained least squares problems, and preliminary numerical testing indicates that the algorithm is robust and efficient in practice.
9.
Y. C. Cheng 《Journal of Optimization Theory and Applications》1987,53(2):237-246
A new dual gradient method is given to solve linearly constrained, strongly convex, separable mathematical programming problems. The dual problem can be decomposed into one-dimensional problems whose solutions can be computed extremely easily. The dual objective function is shown to have a Lipschitz continuous gradient, and therefore a gradient-type algorithm can be used for solving the dual problem. The primal optimal solution can be obtained from the dual optimal solution in a straightforward way. Convergence proofs and computational results are given.
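To make the decomposition concrete, the toy sketch below applies dual gradient ascent to a separable strongly convex quadratic with linear equality constraints; each dual evaluation splits into closed-form one-dimensional minimizations. The quadratic instance and step-size rule are illustrative assumptions, since the paper treats general strongly convex separable objectives.

```python
import numpy as np

def dual_gradient_separable_qp(A, b, q, c, steps=500):
    """Dual gradient ascent for
        min  sum_i 0.5*q_i*x_i**2 + c_i*x_i   s.t.  A x = b,   q_i > 0.
    The Lagrangian minimizers x_i(lam) are available in closed form, and the
    dual gradient A x(lam) - b is Lipschitz continuous with constant L."""
    lam = np.zeros(A.shape[0])
    L = np.linalg.norm(A @ np.diag(1.0 / q) @ A.T, 2)    # spectral norm
    for _ in range(steps):
        x = -(c + A.T @ lam) / q              # n independent 1-D problems
        lam = lam + (1.0 / L) * (A @ x - b)   # gradient step on the dual
    return x, lam
```

For instance, `dual_gradient_separable_qp(np.array([[1.0, 1.0]]), np.array([1.0]), np.ones(2), np.zeros(2))` recovers the minimizer x = (0.5, 0.5) of 0.5*(x1**2 + x2**2) subject to x1 + x2 = 1.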
10.
A General Technique for Dealing with Degeneracy in Reduced Gradient Methods for Linearly Constrained Nonlinear Programming. HAN JIYE (韩继业); HU X...
11.
12.
On the formulation and theory of the Newton interior-point method for nonlinear programming
A. S. El-Bakry, R. A. Tapia, T. Tsuchiya, Y. Zhang 《Journal of Optimization Theory and Applications》1996,89(3):507-541
In this work, we first study in detail the formulation of the primal-dual interior-point method for linear programming. We show that, contrary to popular belief, it cannot be viewed as a damped Newton method applied to the Karush-Kuhn-Tucker conditions for the logarithmic barrier function problem. Next, we extend the formulation to general nonlinear programming, and then validate this extension by demonstrating that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions. We also establish a global convergence theory for this algorithm and include promising numerical experimentation. The first two authors were supported in part by NSF Cooperative Agreement No. CCR-8809615, by Grants AFOSR 89-0363, DOE DEFG05-86ER25017, ARO 9DAAL03-90-G-0093, and the REDI Foundation. The fourth author was supported in part by NSF DMS-9102761 and DOE DE-FG02-93ER25171. The authors would like to thank Sandra Santos for painstakingly proofreading an earlier version of this paper.
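For the linear programming case the abstract starts from, the primal-dual iteration is driven by Newton steps on the perturbed primal-dual KKT system (primal feasibility, dual feasibility, and XSe = mu*e). The sketch below forms that system explicitly for a small dense problem; it is a generic illustration with assumed helper names, not the paper's algorithm.

```python
import numpy as np

def newton_kkt_step(A, b, c, x, y, s, mu):
    """One damped Newton step on the perturbed KKT conditions of the LP
        min c'x   s.t.  A x = b,  x >= 0,
    with strictly positive iterates x, s."""
    m, n = A.shape
    X, S, e = np.diag(x), np.diag(s), np.ones(n)
    r_prim = A @ x - b                      # primal feasibility residual
    r_dual = A.T @ y + s - c                # dual feasibility residual
    r_comp = x * s - mu * e                 # perturbed complementarity
    # Full Newton system in (dx, dy, ds), assembled for clarity.
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    d = np.linalg.solve(J, -np.concatenate([r_prim, r_dual, r_comp]))
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Damping: keep x and s strictly positive (fraction-to-the-boundary rule).
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```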
13.
In this paper, a new SQP algorithm is presented to solve general nonlinear programs with mixed equality and inequality constraints. Following P. Spellucci (see [9]), this method may be named a sequential equality-constrained quadratic programming (SECQP) algorithm. At each iteration, based on an active-set strategy (see [9]), the SECQP algorithm requires solving only equality-constrained quadratic programming subproblems or systems of linear equations. The theoretical analysis shows that global and superlinear convergence can be established under suitable conditions.
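Once the working set is fixed, the equality-constrained QP subproblem that an SECQP-type iteration solves reduces to a single symmetric indefinite (KKT) linear system, roughly as sketched below; this is the generic subproblem, not the paper's full algorithm.

```python
import numpy as np

def eq_constrained_qp(H, g, A, c):
    """Solve   min_p 0.5*p'Hp + g'p   s.t.  A p + c = 0
    via the KKT system  [H A'; A 0] [p; lam] = [-g; -c].
    Returns the step p and the multipliers lam of the working constraints."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]
```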
14.
A variation of the Polak method of feasible directions for solving nonlinear programming problems is shown to be related to the Topkis and Veinott method of feasible directions. This new method is proven to converge to a Fritz John point under rather weak assumptions. Finally, numerical results show that, with a proper choice of parameters, the method converges in fewer iterations than Polak's method.
15.
J. T. Betts 《Journal of Optimization Theory and Applications》1978,24(4):523-548
This paper describes a gradient projection-multiplier method for solving the general nonlinear programming problem. The algorithm poses a sequence of unconstrained optimization problems which are solved using a new projection-like formula to define the search directions. The unconstrained minimization of the augmented objective function determines points where the gradient of the Lagrangian function is zero. Points satisfying the constraints are located by applying an unconstrained algorithm to a penalty function. New estimates of the Lagrange multipliers and basis constraints are made at points satisfying either a Lagrangian condition or a constraint satisfaction condition. The penalty weight is increased only to prevent cycling. The numerical effectiveness of the algorithm is demonstrated on a set of test problems. The author gratefully acknowledges the helpful suggestions of W. H. Ailor, J. L. Searcy, and D. A. Schermerhorn during the preparation of this paper. The author would also like to thank D. M. Himmelblau for supplying a number of interesting test problems.
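As background for this and the later multiplier-method entries, a textbook augmented-Lagrangian (multiplier) iteration for equality constraints looks like the sketch below; Betts' projection-like search directions and penalty-weight logic are not reproduced, and the unconstrained subproblem solver is just SciPy's BFGS for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def multiplier_method(f, h, x0, lam0, mu=10.0, outer_iters=20):
    """Multiplier (augmented Lagrangian) framework for  min f(x) s.t. h(x)=0:
    repeatedly minimize  f(x) + lam'h(x) + 0.5*mu*||h(x)||^2  in x, then
    apply the first-order multiplier update  lam <- lam + mu*h(x)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer_iters):
        aug = lambda z: f(z) + lam @ h(z) + 0.5 * mu * np.sum(h(z) ** 2)
        x = minimize(aug, x, method="BFGS").x     # unconstrained subproblem
        lam = lam + mu * h(x)                     # multiplier estimate update
    return x, lam
```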
16.
《Optimization》2012,61(6):945-962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and the dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.
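The constraint handling mentioned here rests on projecting iterates back onto the box of simple bounds; a minimal projected-(sub)gradient loop is sketched below as a generic illustration of that ingredient only, with none of the bundle or limited-memory machinery.

```python
import numpy as np

def projected_gradient(grad_f, x0, lower, upper, step=1e-2, iters=1000):
    """Plain projected-(sub)gradient method for  min f(x), lower <= x <= upper:
    step along -grad_f(x) and project back onto the box with np.clip."""
    x = np.clip(np.asarray(x0, float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - step * grad_f(x), lower, upper)
    return x
```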
17.
J. T. Betts 《Journal of Optimization Theory and Applications》1977,21(2):137-174
This paper describes an accelerated multiplier method for solving the general nonlinear programming problem. The algorithm poses a sequence of unconstrained optimization problems. The unconstrained problems are solved using a rank-one recursive algorithm described in an earlier paper. Multiplier estimates are obtained by minimizing the error in the Kuhn-Tucker conditions using a quadratic programming algorithm. The convergence of the sequence of unconstrained problems is accelerated by using a Newton-Raphson extrapolation process. The numerical effectiveness of the algorithm is demonstrated on a relatively large set of test problems. This work was supported by the US Air Force under Contract No. F04701-74-C-0075.
18.
Filter methods are one class of recently developed methods for solving optimization problems. In this paper we attach a multidimensional filter to the Gauss-Newton-based BFGS method given by Li and Fukushima [D. Li, M. Fukushima, A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations, SIAM Journal on Numerical Analysis 37(1) (1999) 152-172] in order to reduce the number of backtracking steps. The proposed filter method for unconstrained minimization problems converges globally under the standard assumptions. It can also be successfully used in solving systems of symmetric nonlinear equations. Numerical results show reasonably good performance of the proposed algorithm.
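The multidimensional filter works roughly as follows: the absolute residual vector of a trial point is accepted unless some stored filter entry already dominates it componentwise (up to a small margin), which is what lets the method skip many backtracking steps. The margin rule and filter management below are simplified assumptions for illustration, not the paper's exact conditions.

```python
import numpy as np

def filter_acceptable(trial_res, filter_entries, gamma=1e-4):
    """Return True if |trial_res| is not dominated (componentwise, up to a
    small margin) by any stored filter entry."""
    trial = np.abs(trial_res)
    for entry in filter_entries:
        if np.all(trial >= entry - gamma * np.linalg.norm(entry)):
            return False                  # dominated: reject the trial point
    return True

def filter_add(filter_entries, accepted_res):
    """Insert an accepted residual vector and drop entries it dominates."""
    new = np.abs(accepted_res)
    kept = [e for e in filter_entries if not np.all(e >= new)]
    kept.append(new)
    return kept
```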
19.
《Applied Mathematics: A Journal of Chinese Universities (English Edition)》2017,(1)
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is O(1/k^2), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained l_1-l_2 minimization problem are presented to demonstrate the effectiveness of AALM.
20.
In this paper, the classical Gauss-Newton method for the unconstrained least squares problem is modified by introducing a quasi-Newton approximation to the second-order term of the Hessian. Various quasi-Newton formulas are considered, and numerical experiments show that most of them are more efficient on large residual problems than the Gauss-Newton method and a general-purpose minimization algorithm based upon the BFGS formula. A particular quasi-Newton formula is shown numerically to be superior. Further improvements are obtained by using a line search that exploits the special form of the function.
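The modification described here augments the Gauss-Newton matrix J'J with a quasi-Newton approximation B of the second-order term sum_i r_i * Hess(r_i). Below is a minimal sketch of the resulting search direction, together with one commonly used choice of secant data for updating B; the secant condition shown is an assumption made for illustration, since the paper compares several formulas.

```python
import numpy as np

def augmented_gauss_newton_step(J, r, B):
    """Search direction for f(x) = 0.5*||r(x)||^2 from
        (J'J + B) p = -J'r,
    where B approximates the second-order term of the Hessian."""
    return np.linalg.solve(J.T @ J + B, -(J.T @ r))

def structured_secant_data(J_new, J_old, r_new, s):
    """One common secant condition for B: require the updated B to map the
    step s approximately onto  y_sharp = (J_new - J_old)' r_new  in whichever
    quasi-Newton formula is chosen."""
    return s, (J_new - J_old).T @ r_new
```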