Found 20 similar documents (search time: 0 ms)
1.
It is known that the problem of minimizing a convex function f(x) over a compact subset X of ℝ^n can be expressed as minimizing max{g(x, y) | y ∈ X}, where g is a support function for f [f(x) ≥ g(x, y) for all y ∈ X, and f(x) = g(x, x)]. Standard outer-approximation theory can then be employed to obtain outer-approximation algorithms with procedures for dropping previous cuts. It is shown here how this methodology can be extended to nonconvex nondifferentiable functions.
This research was supported by the Science and Engineering Research Council, UK, and by the National Science Foundation under Grant No. ECS-79-13148.
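As a quick numerical check of the support-function identity in the abstract above, the sketch below uses the first-order linearization of a smooth convex f as g (a standard choice; the particular f, grid, and evaluation point are assumptions for illustration only):

```python
import numpy as np

# For a smooth convex f, the linearization g(x, y) = f(y) + f'(y)*(x - y)
# is a support function in the abstract's sense: f(x) >= g(x, y) for all y,
# and f(x) = g(x, x).  Hence minimizing f equals minimizing max_y g(x, y).
f = lambda x: x ** 2
df = lambda x: 2.0 * x
g = lambda x, y: f(y) + df(y) * (x - y)

X = np.linspace(-1.0, 1.0, 201)   # compact set X, discretized
x0 = 0.3                          # a grid point, so the max is attained exactly

vals = g(x0, X)                   # g(x0, y) for every y in the grid
assert vals.max() <= f(x0) + 1e-12       # g never exceeds f
assert abs(vals.max() - f(x0)) < 1e-9    # equality at y = x0
```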
2.
Omissions from the list of references of Ref. 1 are corrected.
3.
A nonlinear programming problem with nondifferentiabilities is considered. The nondifferentiabilities are due to terms of the form min(f_1(x), ..., f_n(x)), which may enter nonlinearly in the cost and the constraints. Necessary and sufficient conditions are developed. Two algorithms for solving this problem are described, and their convergence is studied. A duality framework for interpretation of the algorithms is also developed.
This work was supported in part by the National Science Foundation under Grant No. ENG-74-19332 and Grant No. ECS-79-19396, in part by the U.S. Air Force under Grant AFOSR-78-3633, and in part by the Joint Services Electronics Program (U.S. Army, U.S. Navy, and U.S. Air Force) under Contract N00014-79-C-0424.
4.
Variable-metric methods are presented which do not need an accurate one-dimensional search and eliminate roundoff error problems which can occur in updating the metric for large-dimension systems. The methods are based on updating the square root of the metric, so that a positive-definite metric always results. The disadvantage of intentionally relaxing the accuracy of the one-dimensional search is that the number of iterations (and hence, gradient evaluations) increases. For problems involving a large number of variables, the square-root method is presented in a triangular form to reduce the amount of computation. Also, for usual optimization problems, the square-root procedure can be carried out entirely in terms of the metric, eliminating storage and computer time associated with computations of the square root of the metric.
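The square-root representation is easy to illustrate (with a generic stand-in update, not the paper's triangular factorization formulas): since the metric is reconstructed as H = S Sᵀ, it remains symmetric positive semidefinite no matter how roundoff perturbs the factor S.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
S = np.eye(n)                    # square-root factor of the metric H = S @ S.T

for _ in range(100):
    # stand-in for a variable-metric correction: a small rank-one change,
    # deliberately polluted with roundoff-sized noise
    u = rng.standard_normal(n)
    S += 1e-2 * np.outer(u, u) + 1e-12 * rng.standard_normal((n, n))
    H = S @ S.T                  # reconstructed metric
    # positive semidefiniteness survives every update by construction
    assert np.linalg.eigvalsh(H).min() > -1e-10
```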
5.
6.
T. L. Vincent, B. S. Goh, K. L. Teo, Journal of Optimization Theory and Applications, 1992, 75(3): 501-519
In this paper, we consider a class of nonlinear minimum-maximum optimization problems subject to boundedness constraints on the decision vectors. Three algorithms are developed for finding the min-max point using the concept of solving an associated dynamical system. In the first and third algorithms, solutions are obtained by solving systems of differential equations. The second algorithm is a discrete version of the first algorithm. The trajectories generated by the first and second algorithms may move inside or on the boundary of the constraint set, while the third algorithm ensures that any trajectory that begins inside the constraint region remains in its interior. Sufficient conditions for global convergence of the two algorithms are also established. For illustration, four numerical examples are solved.
This work was partially supported by a research grant from the Australian Research Committee.
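A minimal discrete analogue of the continuous-trajectory idea (with an assumed toy saddle function and box constraints, not the authors' examples) can be written as projected Euler steps that descend in x and ascend in y:

```python
import numpy as np

# Toy saddle problem: min over x, max over y of
#   f(x, y) = x**2 - y**2 + 0.5*x*y,  subject to |x| <= 1, |y| <= 1.
# Euler steps follow -grad_x for x and +grad_y for y, projecting onto the box.
def step(x, y, h=0.05):
    gx = 2.0 * x + 0.5 * y               # df/dx
    gy = -2.0 * y + 0.5 * x              # df/dy
    x = np.clip(x - h * gx, -1.0, 1.0)   # descent in x
    y = np.clip(y + h * gy, -1.0, 1.0)   # ascent in y
    return x, y

x, y = 0.9, -0.8
for _ in range(500):
    x, y = step(x, y)

# the unique saddle point of this toy problem is (0, 0)
assert abs(x) < 1e-6 and abs(y) < 1e-6
```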
7.
R. D. Brownrigg, Journal of Optimization Theory and Applications, 1979, 28(2): 163-184
This paper presents a potentially parallel iterative algorithm for the solution of the unconstrained N-stage decision problem of dynamic programming. The basis of the algorithm is the use of variable-metric minimization techniques to develop a quadratic approximation to the cost function at each stage. The algorithm is applied to various problems, and comparisons with other algorithms are made.
This research forms part of the author's PhD program, and is supported by the Department of Scientific and Industrial Research of the New Zealand Government. The author is indebted to Dr. B. A. Murtagh, PhD supervisor, for his encouragement and support during the preparation of this paper.
8.
Eliane R. Panier, Mathematical Programming, 1987, 37(3): 269-292
An algorithm for solving linearly constrained optimization problems is proposed. The search direction is computed by a bundle principle, and the constraints are treated through an active-set strategy. Difficulties that arise when the objective function is nonsmooth require a clever choice of a constraint to relax. A certain nondegeneracy assumption is necessary to obtain convergence.
Most of this research was performed when the author was with I.N.R.I.A. (Domaine de Voluceau-Rocquencourt, B.P. 105, 78153 Le Chesnay Cédex, France).
This research was supported in part by the National Science Foundation, Grants No. DMC-84-51515 and OIR-85-00108.
9.
D. F. Shanno, Journal of Optimization Theory and Applications, 1985, 46(1): 87-94
An example is given where truncation error, caused by finite computer arithmetic, leads to the BFGS variable-metric method becoming stuck, despite the approximated Hessian matrix, the gradient vector, and the search vector satisfying analytical conditions for convergence. A restart criterion to eliminate the problem is suggested.
10.
The most time-consuming part of the Karmarkar algorithm for linear programming is the projection of a vector onto the nullspace of a matrix that changes at each iteration. We present a variant of the Karmarkar algorithm that uses standard variable-metric techniques in an innovative way to approximate this projection. In limited tests, this modification greatly reduces the number of matrix factorizations needed for the solution of linear programming problems.
Research sponsored by DOE DE-AS05-82ER13016, ARO DAAG-29-83-K-0035, AFOSR 85-0243.
Research sponsored by ARO DAAG-29-83-K-0035, AFOSR 85-0243, Shell Development Company.
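For reference, the exact projection the abstract refers to is v ↦ v − Aᵀ(AAᵀ)⁻¹Av; a small sketch of that baseline (random data assumed), against which any variable-metric approximation could be compared:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 7))   # full row rank with probability 1
v = rng.standard_normal(7)

# exact projection of v onto the nullspace of A:
#   p = v - A.T @ (A A.T)^{-1} @ (A v)
p = v - A.T @ np.linalg.solve(A @ A.T, A @ v)

assert np.allclose(A @ p, 0.0)            # p lies in the nullspace of A
assert np.allclose(A @ (v - p), A @ v)    # v - p carries the row-space part
```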
11.
New least-square algorithms
W. C. Davidon, Journal of Optimization Theory and Applications, 1976, 18(2): 187-197
New algorithms are presented for approximating the minimum of the sum of squares of M real and differentiable functions over an N-dimensional space. These algorithms update estimates for the location of a minimum after each one of the functions and its first derivatives are evaluated, in contrast with other least-square algorithms, which evaluate all M functions and their derivatives at one point before using any of this information to make an update. These new algorithms give estimates which fluctuate about a minimum rather than converging to it. For many least-square problems, they give an adequate approximation for the solution more quickly than do other algorithms.
It is a pleasure to thank J. Chesick of Haverford College for suggesting and implementing the application of this algorithm to x-ray crystallography.
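The one-function-at-a-time idea can be illustrated with plain stochastic gradient steps on the squared residuals (a hedged stand-in; Davidon's algorithms also maintain a variable metric, which is omitted here):

```python
import numpy as np

# Incremental least squares: process one squared residual
#   f_i(x) = (a_i @ x - b_i)**2
# at a time, updating x after each single evaluation instead of after
# a full pass over all M residuals.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                     # consistent data (zero residual at x_true)

x = np.zeros(3)
step = 0.05
for _ in range(200):               # sweeps over the data
    for a_i, b_i in zip(A, b):
        r = a_i @ x - b_i          # single residual
        x -= step * r * a_i        # update before seeing the next residual

# with consistent data the iterates settle at the least-squares solution;
# with noisy data they would fluctuate about it, as the abstract notes
assert np.linalg.norm(x - x_true) < 1e-3
```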
12.
A class of nondifferentiable control problems
S. Chandra, B. D. Craven, I. Husain, Journal of Optimization Theory and Applications, 1988, 56(2): 227-243
Optimality conditions and duality results are obtained for a class of control problems having a nondifferentiable term in the integrand of the objective functional. These results generalize many well-known results in optimal control theory involving differentiable functions, and also provide a relationship with certain nondifferentiable mathematical programming problems. Some extensions concerning the unified treatment of optimal control theory and continuous programming are also mentioned. Finally, a control problem containing an arbitrary norm, along with its appropriate norm, is given.
13.
A constrained minimax problem is converted to minimization of a sequence of unconstrained and continuously differentiable functions in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to do the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed, where the first one is simpler to program, and the second one is faster in general. To show the efficiency of the algorithms even for unconstrained problems, examples are taken to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster with respect to the other methods. Several constrained examples are also tried and the results are presented.
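The flavor of the smoothing sequence can be sketched with log-sum-exp surrogates (an assumed stand-in for the Morrison-style construction; the two-piece problem, step sizes, and schedule are illustrative):

```python
import numpy as np

# minimize max(f1, f2) with f1(x) = x and f2(x) = 3 - 2x; the minimax point
# is x = 1 (where the two pieces cross).  Each outer step minimizes the
# smooth surrogate mu*log(exp(f1/mu) + exp(f2/mu)) by gradient descent,
# then shrinks mu, mimicking a sequence of differentiable problems.
def grad_smooth(x, mu):
    f = np.array([x, 3.0 - 2.0 * x])
    df = np.array([1.0, -2.0])
    w = np.exp((f - f.max()) / mu)
    w /= w.sum()                   # softmax weights over the pieces
    return w @ df                  # gradient of the log-sum-exp surrogate

x = 5.0
for mu in [1.0, 0.1, 0.01]:
    for _ in range(2000):
        x -= 0.05 * mu * grad_smooth(x, mu)   # step scaled with mu for stability

# the surrogate minimizers approach the minimax point as mu shrinks
assert abs(x - 1.0) < 1e-2
```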
14.
K. C. Kiwiel, Journal of Optimization Theory and Applications, 1986, 48(3): 437-449
This paper presents a method for minimizing the sum of a possibly nonsmooth convex function and a continuously differentiable function. As in the convex case developed by the author, the algorithm is a descent method which generates successive search directions by solving quadratic programming subproblems. An inexact line search ensures global convergence of the method to stationary points.
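Kiwiel's method generates directions from quadratic programming subproblems; for the same composite structure (nonsmooth convex plus smooth), the simpler proximal-gradient scheme gives a compact runnable illustration (problem data assumed):

```python
import numpy as np

# minimize 0.5*||x - b||**2 + lam*||x||_1  (smooth quadratic + nonsmooth l1)
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

b = np.array([3.0, -0.2, 1.5])
lam = 0.5
step = 1.0                       # 1/L for the smooth part (L = 1 here)

x = np.zeros(3)
for _ in range(100):
    grad = x - b                 # gradient of the smooth term only
    x = soft_threshold(x - step * grad, step * lam)

# this toy problem has the closed-form solution soft_threshold(b, lam)
assert np.allclose(x, soft_threshold(b, lam))
```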
15.
J. C. Allwright, Journal of Optimization Theory and Applications, 1978, 26(3): 367-372
Convergence to the minimal value is studied for the important type of descent algorithm which, at each iteration, uses a search direction making an angle with the negative gradient which is smaller than a prespecified angle. Improvements on existing convergence-rate results are obtained.
Paper received on October 4, 1977; in revised form, April 3, 1978.
16.
The composite functions which appear in various optimal feedback system design problems, as well as in open-loop optimal control problems, can lead to severely ill-conditioned minimax problems. This ill-conditioning can cause first-order minimax algorithms to converge very slowly. We propose a variable-metric technique which substantially mitigates this ill-conditioning. The technique does not require the evaluation of second derivatives and can be used to speed the convergence of any first-order minimax algorithm which produces estimates of the optimal multipliers. Numerical experiments are presented which show that the variable-metric technique increases the speed of two algorithms.
17.
Krzysztof C. Kiwiel, Mathematical Programming, 1995, 69(1-3): 89-109
We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems, and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage, in contrast to the original level methods of Lemaréchal, Nemirovskii, and Nesterov.
This research was supported by the Polish Academy of Sciences.
Supported by a grant from the French Ministry of Research and Technology.
18.
This paper presents the results of computational studies of the properties of cutting plane algorithms as applied to posynomial geometric programs. The four cutting planes studied represent the gradient method of Kelley and an extension to develop tangential cuts; the geometric inequality of Duffin and an extension to generate several cuts at each iteration. As a result of over 200 problem solutions, we draw conclusions regarding the effectiveness of acceleration procedures, feasible and infeasible starting points, and the effect of the initial bounds on the variables. As a result of these experiments, certain cutting plane methods are seen to be attractive means of solving large-scale geometric programs.
This author's research was supported in part by the Center for the Study of Environmental Policy, The Pennsylvania State University.
19.
Dorit S. Hochbaum, Annals of Operations Research, 2007, 153(1): 257-296
Nonlinear optimization algorithms are rarely discussed from a complexity point of view. Even the concept of solving nonlinear problems on digital computers is not well defined. The focus here is on a complexity approach for designing and analyzing algorithms for nonlinear optimization problems providing optimal solutions with prespecified accuracy in the solution space. We delineate the complexity status of convex problems over network constraints, dual of flow constraints, dual of multi-commodity, constraints defined by a submodular rank function (a generalized allocation problem), tree networks, diagonal dominant matrices, and nonlinear knapsack problem's constraint. All these problems, except for the latter in integers, have polynomial time algorithms which may be viewed within a unifying framework of a proximity-scaling technique or a threshold technique. The complexity of many of these algorithms is furthermore best possible in that it matches lower bounds on the complexity of the respective problems.
In general, nonseparable optimization problems are shown to be considerably more difficult than separable problems. We compare the complexity of continuous versus discrete nonlinear problems and list some major open problems in the area of nonlinear optimization.
An earlier version of this paper appeared in 4OR, 3:3, 171–216, 2005.
20.
Semidefinite relaxations of certain combinatorial optimization problems lead to approximation algorithms with performance guarantees. For large-scale problems, it may not be computationally feasible to solve the semidefinite relaxations to optimality. In this paper, we investigate the effect on the performance guarantees of an approximate solution to the semidefinite relaxation for MaxCut, Max2Sat, and Max3Sat. We show that it is possible to make simple modifications to the approximate solutions and obtain performance guarantees that depend linearly on the most negative eigenvalue of the approximate solution, the size of the problem, and the duality gap. In every case, we recover the original performance guarantees in the limit as the solution approaches the optimal solution to the semidefinite relaxation.
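The rounding step such analyses build on is the standard random-hyperplane rounding of a (possibly approximate) Gram-matrix solution; a small sketch on an assumed toy graph:

```python
import numpy as np

rng = np.random.default_rng(3)

# a 4-cycle: the maximum cut has value 4 (alternate the two sides)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# unit vectors standing in for an (approximate) SDP solution V.T @ V
V = rng.standard_normal((3, n))
V /= np.linalg.norm(V, axis=0)

best = 0
for _ in range(100):
    r = rng.standard_normal(3)          # random hyperplane normal
    side = np.sign(V.T @ r)             # rounding: sign of the projection
    cut = sum(side[i] != side[j] for i, j in edges)
    best = max(best, cut)

# any nonzero cut of a cycle crosses an even number of edges
assert 2 <= best <= 4
```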