Found 20 similar records.
1.
E. Spedicato 《Mathematical Programming》1978,15(1):123-129
A conjecture of Dixon relating to the behaviour of variable metric methods on functions with special symmetry is validated under suitable conditions. The relation between Huang's class and Oren's class is explored. Then the equivalence of Davidon's and Oren and Spedicato's approaches to optimal conditioning is demonstrated.
2.
Kenneth W. Brodlie 《Mathematical Programming》1977,12(1):344-355
Two recent suggestions in the field of variable metric methods for function minimization are reviewed: the self-scaling method, first introduced by Oren and Luenberger, and the method of Biggs. The two proposals are considered from both theoretical and computational aspects. They are compared with methods which use correction formulae from the Broyden one-parameter family, in particular the BFGS formula and the Fletcher switching strategy.
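As a minimal sketch of the Broyden one-parameter family mentioned above (my own illustrative code, not taken from the paper): the inverse-Hessian updates of DFP and BFGS are the endpoints φ = 0 and φ = 1 of the family, and every member satisfies the quasi-Newton (secant) relation H⁺y = s.

```python
import numpy as np

def broyden_inverse_update(H, s, y, phi):
    """One-parameter Broyden family update of an inverse-Hessian
    approximation H, given step s and gradient change y.
    phi = 0 gives DFP, phi = 1 gives BFGS."""
    Hy = H @ y
    sy = s @ y           # curvature s^T y (assumed > 0)
    yHy = y @ Hy
    H_dfp = H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    v = np.sqrt(yHy) * (s / sy - Hy / yHy)
    return H_dfp + phi * np.outer(v, v)

# Every member of the family satisfies the quasi-Newton relation H+ y = s,
# because v^T y = 0, so the phi-term never disturbs the secant equation.
rng = np.random.default_rng(0)
H = np.eye(3)
s = rng.standard_normal(3)
A = np.diag([1.0, 2.0, 5.0])   # a model positive definite Hessian
y = A @ s                      # for a quadratic, y = A s, hence s^T y > 0
for phi in (0.0, 0.5, 1.0):    # DFP, a convex combination, BFGS
    Hp = broyden_inverse_update(H, s, y, phi)
    assert np.allclose(Hp @ y, s)
```

The update also preserves symmetry, which the check below confirms for the BFGS endpoint.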
3.
R. M. Chamberlain 《Mathematical Programming》1979,16(1):378-383
Although variable metric methods for constrained minimization generally give good numerical results, many of their convergence properties are still open. In this note two examples are presented to show that variable metric methods may cycle between two points instead of converging to the required solution.
4.
Jean-Louis Goffin 《Mathematical Programming》1984,30(2):147-162
The deepest, or least shallow, cut ellipsoid method is a polynomial (time and space) method which finds an ellipsoid, representable by polynomial-space integers, such that the maximal ellipsoidal distance relaxation method using this fixed ellipsoid is polynomial: this is equivalent to finding a linear transformation such that the maximal distance relaxation method of Agmon, Motzkin and Schoenberg in this transformed space is polynomial. If perfect arithmetic is used, then the sequence of ellipsoids generated by the method converges to a set of ellipsoids which share some of the properties of the classical Hessian at an optimum point of a function; thus the ellipsoid method is quite analogous to a variable metric quasi-Newton method.
This research was supported in part by the F.C.A.C. of Quebec and the N.S.E.R.C. of Canada under Grant A 4152.
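For context, a hedged sketch of one step of the classical central-cut ellipsoid method (the simpler relative of the deepest-cut variant studied above; the code and the example data are my own, not Goffin's algorithm). The volume shrinks by a fixed factor that depends only on the dimension.

```python
import numpy as np

def ellipsoid_central_cut(c, P, a):
    """One central-cut ellipsoid step.
    Current ellipsoid: {x : (x - c)^T P^{-1} (x - c) <= 1}.
    The cut a^T x <= a^T c keeps one half; return the smallest
    ellipsoid containing that half."""
    n = len(c)
    Pa = P @ a
    aPa = a @ Pa
    c_new = c - Pa / ((n + 1) * np.sqrt(aPa))
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / aPa)
    return c_new, P_new

n = 3
c, P = np.zeros(n), np.eye(n)
a = np.array([1.0, 0.0, 0.0])
c1, P1 = ellipsoid_central_cut(c, P, a)
# Volume is proportional to sqrt(det P); it shrinks by a fixed,
# dimension-dependent factor at every step.
ratio = np.sqrt(np.linalg.det(P1) / np.linalg.det(P))
assert ratio < np.exp(-1.0 / (2 * (n + 1)))   # classical shrink bound
```

The geometric shrink rate is what makes the method polynomial, and the matrix P plays the variable-metric role the abstract alludes to.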
5.
The class of nondifferentiable problems treated in this paper constitutes the dual of a class of convex differentiable problems. The primal problem involves faithfully convex functions of linear mappings of the independent variables in the objective function and in the constraints. The points of the dual problem where the objective function is nondifferentiable are known: the method presented here takes advantage of this fact to propose modifications necessary in the reduced gradient method to guarantee convergence.
6.
We first present an ε-descent basic method for minimizing a convex minmax problem. We use first- and second-order information to generate the search direction. Preliminarily, we introduce some properties of the second-order information, the subhessian, and its characterization for max functions. The algorithm has ε-global convergence. Finally, we give a generalization of this algorithm for an unconstrained convex problem having second-order information; in this case, we obtain global ε-convergence.
This research was supported in part by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and in part by IM-COPPE/UFRJ, Brazil. The authors wish to thank Nguyen Van Hien and Jean-Jacques Strodiot for their constructive remarks on an earlier draft of the paper. This work was completed while the first author was with the Department of Mathematics of the Facultés Universitaires de Namur. The authors are also grateful for the helpful comments of two anonymous referees.
7.
In this paper, we present an algorithm for minimizing a class of locally Lipschitz functions. The method generalizes the ε-smeared method to a class of functions whose Clarke generalized gradients are singletons at almost all differentiable points. We analyze the global convergence of the method and report some numerical results.
This work was supported by the National Science Foundation of China. The authors would like to thank the referees for their valuable comments, which led to significant improvements in the presentation.
8.
We present an algorithm for minimax optimization that combines LP methods and quasi-Newton methods. The quasi-Newton algorithm is used only if an irregular solution is detected, in which case second-order derivative information is needed in order to obtain a fast final rate of convergence. We prove that the algorithm can converge only to a stationary point and that normally the final rate of convergence will be either quadratic or superlinear. The performance is illustrated through some numerical examples.
9.
A. G. Buckley 《Mathematical Programming》1978,15(1):200-210
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm will be described which uses an intermediate (and variable) amount of storage and which demonstrates convergence which is also intermediate, that is, generally better than that observed for conjugate gradient algorithms but not so good as in a quasi-Newton approach. The new algorithm uses a strategy of generating a form of conjugate gradient search direction for most iterations, but it periodically uses a quasi-Newton step to improve the convergence. Some theoretical background for the new algorithm has been presented in an earlier paper; here we examine properties of the new algorithm and its implementation. We also present the results of some computational experience.
This research was supported by the National Research Council of Canada grant number A-8962.
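The storage/convergence trade-off above can be illustrated with the modern two-loop recursion of limited-memory quasi-Newton methods (this is an L-BFGS-style sketch of the same idea, not Buckley's actual algorithm): only m recent step/gradient-change pairs are stored, giving O(mn) memory instead of the O(n²) of a full matrix.

```python
import numpy as np

def two_loop_direction(g, pairs):
    """L-BFGS-style two-loop recursion: implicitly apply an inverse-Hessian
    approximation built from the m stored (s, y) pairs to the gradient g,
    without ever forming an n-by-n matrix."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):          # first loop: newest pair first
        alpha = (s @ q) / (y @ s)
        q -= alpha * y
        alphas.append(alpha)
    if pairs:                             # standard initial scaling
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), alpha in zip(pairs, reversed(alphas)):  # second loop
        beta = (y @ q) / (y @ s)
        q += (alpha - beta) * s
    return -q                             # search direction

# With s^T y > 0 for every stored pair, the implicit matrix is positive
# definite, so the result is a descent direction: g^T d < 0.
rng = np.random.default_rng(1)
A = np.diag([1.0, 3.0, 10.0, 30.0])       # model quadratic Hessian
pairs = []
for _ in range(3):                        # m = 3 stored pairs
    s = rng.standard_normal(4)
    pairs.append((s, A @ s))              # for a quadratic, y = A s
g = rng.standard_normal(4)
d = two_loop_direction(g, pairs)
assert g @ d < 0
```

With zero stored pairs the recursion reduces to steepest descent, and with more pairs it interpolates toward full quasi-Newton behaviour, which is exactly the intermediate regime the abstract describes.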
10.
Larry Nazareth 《Mathematical Programming》1984,30(1):99-104
We describe a variational principle based upon minimizing the extent to which the inverse Hessian approximation, say H, violates the quasi-Newton relation on the step immediately prior to the step used to construct H. Its application to the case when line searches are exact suggests use of the BFGS update.
This paper is based upon results first presented at the 1979 Mathematical Programming Symposium, Montreal, Canada.
11.
J. L. Goffin 《Mathematical Programming》1982,22(1):239-260
A variable metric optimization method for nondifferentiable functions, due to Shor, Yudin and Nemirovskii, and Khacian, is applied to a full-rank system of linear equalities under a cyclic implementation. At each step of the iteration, the variable metric matrix is used to construct an ellipsoid around the current iterate; this ellipsoid contains the solution. The variable metric matrix is updated in such a way that this inclusion property always holds, and that the volume of the ellipsoid shrinks as a geometric progression whose rate depends only on the dimension of the space.
For the problem of linear equalities with cyclic implementation, it is shown that every axis of the ellipsoid shrinks more or less as a geometric progression (with the same rate for every axis), and thus that the distance between the iterates and the solution converges to zero as a geometric series whose rate depends only on the dimension of the space. The variable metric matrix is shown to build up information about the inverse of the matrix defining the system of equalities.
A somewhat different implementation of the method is shown to converge in a number of steps equal at most to the dimension of the space.
This research was supported in part by the D.G.E.S. (Quebec), the N.S.E.R.C. of Canada under Grant A4152, and the S.S.H.R.C. of Canada.
12.
In this paper, we are concerned with a class of nondifferentiable minimax programming problems and their two types of second-order dual models. Weak, strong, and strict converse duality theorems are established from the viewpoint of generalized convexity. Our study naturally unifies and extends some previously known results on minimax programming.
13.
M. J. D. Powell 《Mathematical Programming》1986,34(1):34-47
We study the use of the BFGS and DFP algorithms with step-lengths of one for minimizing quadratic functions of only two variables. The updating formulae in this case imply nonlinear three term recurrence relations between the eigenvalues of consecutive second derivative approximations, which are analysed in order to explain some gross inefficiencies that can occur. Specifically, the BFGS algorithm may require more than 10 iterations to achieve the first decimal place of accuracy, while the performance of the DFP method is far worse. The results help to explain why the DFP method is often less suitable than the BFGS algorithm for general unconstrained optimization calculations, and they show that quadratic functions provide much information about efficiency when the current vector of variables is too far from the solution for an asymptotic convergence analysis.
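Powell's analysis concerns step-lengths fixed at one. As a contrast, a quick sketch (my own illustration, assuming the quadratic f(x) = ½xᵀAx and exact line searches) of the classical result that BFGS then terminates on an n-variable quadratic in n steps, with the final inverse approximation equal to A⁻¹:

```python
import numpy as np

def bfgs_exact_line_search(A, x, iters):
    """BFGS on the quadratic f(x) = 0.5 x^T A x with exact line searches.
    Classical theory: at most n iterations to the minimizer, after which
    H equals A^{-1}. (Powell's paper shows how badly this degrades when
    the step length is instead fixed at one.)"""
    H = np.eye(len(x))
    for _ in range(iters):
        g = A @ x
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g
        alpha = -(g @ d) / (d @ A @ d)    # exact minimizer along d
        s = alpha * d
        x = x + s
        y = A @ x - g                     # gradient change over the step
        Hy = H @ y
        sy = s @ y
        # Inverse BFGS update
        H = (H - (np.outer(s, Hy) + np.outer(Hy, s)) / sy
               + (1.0 + (y @ Hy) / sy) * np.outer(s, s) / sy)
    return x, H

A = np.diag([1.0, 100.0])                 # an ill-conditioned 2-variable case
x, H = bfgs_exact_line_search(A, np.array([1.0, 1.0]), iters=2)
assert np.linalg.norm(A @ x) < 1e-8       # minimizer reached in n = 2 steps
assert np.allclose(H, np.linalg.inv(A))   # final H recovers A^{-1}
```

The same matrix A with unit steps is exactly the regime in which the abstract reports gross inefficiency, especially for DFP.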
14.
This paper gives first and in particular second order necessary and sufficient conditions for a class of nondifferentiable optimization problems in which there are both objective and constraint functions defined in terms of a norm. The conditions are expressed in terms of a Lagrangian function and its derivatives, and use the ideas of feasible directional sequence and subgradients. Certain regularity assumptions are required and for the second order necessary conditions it is shown that the assumption is realistic for polyhedral norms. Illustrative examples are discussed.
15.
In this paper, we consider a class of nondifferentiable penalty functions for the solution of nonlinear programming problems without convexity assumptions. Preliminarily, we introduce a notion of exactness which appears to be of relevance in connection with the solution of the constrained problem by means of unconstrained minimization methods. Then, we show that the class of penalty functions considered is exact according to this notion.
This research was partially supported by the National Research Program on “Modelli e Algoritmi per l'Ottimizzazione,” Ministero della Pubblica Istruzione, Roma, Italy.
16.
Zengkun Xu 《Journal of Mathematical Analysis and Applications》2005,302(2):282-290
Kuhn-Tucker type necessary optimality conditions are given for the problem of minimizing the sum of a differentiable function and a convex function subject to a set of differentiable nonlinear inequalities on a convex subset C of R^n, under conditions similar to the Kuhn-Tucker constraint qualification or the Arrow-Hurwicz-Uzawa constraint qualification. The case when the set C is open (not necessarily convex) is shown to be a special case of our results, which helps us to improve some of the existing results in the literature.
17.
Krzysztof C. Kiwiel 《Mathematical Programming》1991,52(1-3):285-302
This paper presents new versions of proximal bundle methods for solving convex constrained nondifferentiable minimization problems. The methods employ ℓ1 or ℓ∞ exact penalty functions with new penalty updates that limit unnecessary penalty growth. In contrast to other methods, some of them are insensitive to problem function scaling. Global convergence of the methods is established, as well as finite termination for polyhedral problems. Some encouraging numerical experience is reported. The ideas presented may also be used in variable metric methods for smooth nonlinear programming.
This research was supported by the Polish Academy of Sciences.
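A toy illustration of what "exact" means for a nondifferentiable ℓ1-type penalty (my own one-dimensional example, not Kiwiel's bundle method): once the penalty parameter exceeds the Lagrange multiplier, the unconstrained minimizer of the penalty function coincides with the constrained solution.

```python
import numpy as np

# Toy problem: minimize f(x) = x subject to g(x) = 1 - x <= 0.
# The constrained solution is x* = 1, with multiplier 1.
f = lambda x: x
g = lambda x: 1.0 - x

def l1_penalty(x, c):
    """Nondifferentiable exact penalty: f(x) + c * max(0, g(x))."""
    return f(x) + c * np.maximum(0.0, g(x))

xs = np.linspace(-2.0, 4.0, 60001)   # crude grid search is enough in 1-D
# For any c above the multiplier, the penalty is exact: its unconstrained
# minimizer is the constrained solution x* = 1, despite the kink there.
for c in (1.5, 2.0, 10.0):
    x_min = xs[np.argmin(l1_penalty(xs, c))]
    assert abs(x_min - 1.0) < 1e-3
# Below the threshold the penalty is unbounded below (it keeps decreasing
# as x -> -inf), so the grid argmin runs to the left edge of the grid.
x_min = xs[np.argmin(l1_penalty(xs, 0.5))]
assert x_min == xs[0]
```

The kink at x = 1 is essential: a smooth quadratic penalty would only approach the solution as c grows, never reach it exactly at finite c.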
18.
To minimize a convex function, we combine Moreau-Yosida regularizations, quasi-Newton matrices and bundling mechanisms. First we develop conceptual forms using “reversal” quasi-Newton formulae and we state their global and local convergence. Then, to produce implementable versions, we incorporate a bundle strategy together with a “curve-search”. No convergence results are given for the implementable versions; however, some numerical illustrations show their good behaviour even for large-scale problems.
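A small sketch of the Moreau-Yosida building block used above, for the concrete case f = |·| (my own example, not the paper's algorithm): the regularized function is the smooth Huber function, its minimizer agrees with that of f, and the associated proximal operator is soft-thresholding.

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal operator of f = |.|: argmin_y |y| + (1/(2*lam))(y - x)^2.
    Closed form: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_envelope_abs(x, lam):
    """Moreau-Yosida regularization of |.|: the optimal value of the
    minimization above. It is differentiable everywhere (a Huber
    function) even though |.| is not differentiable at 0."""
    p = prox_abs(x, lam)
    return np.abs(p) + (p - x) ** 2 / (2.0 * lam)

lam = 1.0
assert prox_abs(3.0, lam) == 2.0      # shrink toward 0 by lam
assert prox_abs(-0.5, lam) == 0.0     # inside the threshold: mapped to 0
assert np.isclose(moreau_envelope_abs(3.0, lam), 2.5)   # |2| + (1)^2 / 2
assert np.isclose(moreau_envelope_abs(0.0, lam), 0.0)
# The envelope and f share the same minimizers, which is why minimizing
# the smooth envelope (e.g. with quasi-Newton steps) solves the
# nonsmooth problem.
```

Methods in the family above apply quasi-Newton machinery to this smooth envelope, using bundle information to evaluate the prox approximately.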
19.
Xu Chengxian, Chen Zhiping 《Applied Mathematics: A Journal of Chinese Universities (English Series)》1995,10(2):141-154
The minimization of nonconvex, nondifferentiable functions that are compositions of max-type functions formed by nondifferentiable convex functions is discussed in this paper. It is closely related to practical engineering problems. By utilizing the globality of the ε-subdifferential and the theory of quasidifferentials, and by introducing a new scheme which selects several search directions and considers them simultaneously at each iteration, a minimizing algorithm is derived. It is simple in structure, implementable, numerically efficient, and globally convergent. The shortcomings of existing algorithms are thus overcome both in theory and in application.
20.
Philip Wolfe 《Mathematical Programming》1974,7(1):380-383
An algorithm is described for finding the minimum of any convex, not necessarily differentiable, function f of several variables. The algorithm yields a sequence of points tending to the solution of the problem, if any; it requires the calculation of f and one subgradient of f at designated points. Its rate of convergence is estimated for convex and also for twice differentiable convex functions. It is an extension of the method of conjugate gradients, and terminates when f is quadratic.
Editor's note. The communications of Wolfe and Lemarechal which follow (received almost simultaneously) display different points of view, but deal with the same problem and use similar techniques. They are preliminary versions of promising attacks on the problem of minimizing a convex, but not necessarily differentiable, function of many variables. MATHEMATICAL PROGRAMMING STUDY 3, entitled Nondifferentiable optimization, is to be devoted to this subject.
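For readers new to the area, here is the plainest member of the family Wolfe extends: the subgradient method with diminishing steps (a generic textbook sketch, not Wolfe's conjugate-subgradient algorithm). It needs only one subgradient per point, exactly the oracle the abstract describes.

```python
import numpy as np

# f(x) = |x1 - 1| + |x2 + 2| is convex and nondifferentiable at its
# minimizer (1, -2); a vector of signs is a valid subgradient.
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
subgrad = lambda x: np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])

def subgradient_method(subgrad, x0, steps):
    """Plain subgradient method with diminishing step sizes 1/(k+1).
    For convex f the best value seen tends to the minimum, even though
    individual iterates need not decrease f monotonically."""
    x = np.asarray(x0, dtype=float)
    best, best_val = x.copy(), f(x)
    for k in range(steps):
        x = x - subgrad(x) / (k + 1.0)     # step along minus a subgradient
        if f(x) < best_val:                 # track the best point so far
            best, best_val = x.copy(), f(x)
    return best, best_val

best, best_val = subgradient_method(subgrad, [0.0, 0.0], steps=2000)
assert best_val < 0.05
assert np.allclose(best, [1.0, -2.0], atol=0.05)
```

Its sublinear rate is what motivated the conjugate-gradient-style accelerations of Wolfe and Lemarechal.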