Similar Documents
1.
An algorithm for minimization of functions of many variables, subject possibly to linear constraints on the variables, is described. In it a subproblem is solved in which a quadratic approximation is made to the objective function and minimized over a region in which the approximation is valid. A strategy for deciding when this region should be expanded or contracted is given. The quadratic approximation involves estimating the Hessian of the objective function by a matrix which is updated at each iteration by a formula recently reported by Powell [6]. This formula enables convergence of the algorithm from any feasible point to be proved. Use of such an approximation, as against using exact second derivatives, also enables a reduction of about 60% to be made in the number of operations to solve the subproblem. Numerical evidence is reported showing that the algorithm is efficient in the number of function evaluations required to solve well known test problems. This paper was presented at the 7th International Mathematical Programming Symposium 1970, The Hague, The Netherlands.
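The expand-or-contract decision in trust-region methods of this kind is usually driven by comparing the reduction actually achieved in the objective with the reduction predicted by the quadratic model. The sketch below illustrates that generic ratio test; the thresholds and scaling factors are illustrative assumptions, not the specific rule proposed in the paper.

```python
def update_trust_radius(actual_red, predicted_red, radius, step_norm,
                        eta_bad=0.25, eta_good=0.75, shrink=0.5, grow=2.0):
    """Generic trust-region radius update based on the agreement ratio.

    actual_red    : f(x) - f(x + step), the reduction actually achieved
    predicted_red : model(0) - model(step), the reduction predicted by the
                    quadratic approximation
    """
    rho = actual_red / predicted_red if predicted_red > 0 else 0.0
    if rho < eta_bad:                       # poor agreement: contract the region
        radius = shrink * step_norm
    elif rho > eta_good and step_norm >= 0.99 * radius:
        radius = grow * radius              # good agreement at the boundary: expand
    accept_step = rho > 0.0                 # accept the step only if f decreased
    return radius, accept_step
```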

2.
Many real-life problems in economics, finance, management, engineering and other fields can be stated as minimax problems, which demonstrates the importance of having reliable methods to tackle them. In this paper, an algorithm for linearly constrained minimax problems is presented in which we combine trust-region methods with line-search and curve-search methods. By means of this hybrid technique, the algorithm avoids having to solve the trust-region subproblem many times and makes better use of the advantages of the different methods. Under weaker conditions, global and superlinear convergence are achieved. Numerical experiments show that the new algorithm is robust and efficient.
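For context, a finite minimax problem with linear constraints is commonly recast as an equivalent smooth nonlinear program by introducing an auxiliary variable; the standard reformulation below is a generic illustration, not a construction taken from this particular paper:

$$
\min_{x,\,t}\; t \quad \text{subject to} \quad f_i(x) \le t,\; i = 1,\dots,m, \qquad Ax \le b,
$$

where $\max_{1\le i\le m} f_i(x)$ is the original objective and $Ax \le b$ collects the linear constraints.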

3.
In this work, an improved SQP method is proposed for solving minimax problems, together with a new technique of small computational cost for avoiding the Maratos effect. Global and superlinear convergence are established under suitable conditions.

4.
In this paper an algorithm for solving a linearly constrained nonlinear programming problem is developed. Given a feasible point, a correction vector is computed by solving a least distance programming problem over a polyhedral cone defined in terms of the gradients of the “almost” binding constraints. Mukai's approximate scheme for computing the step size is generalized to handle the constraints. This scheme provides an estimate of the step size based on a quadratic approximation of the function. The estimate is used in conjunction with an Armijo line search to calculate a new point. It is shown that each accumulation point is a Kuhn-Tucker point of a slight perturbation of the original problem. Furthermore, under suitable second-order optimality conditions, it is shown that eventually only one trial is needed to compute the step size.
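A step-size estimate from a one-dimensional quadratic model combined with an Armijo backtracking test can be sketched as follows; the way the initial estimate and the constants are chosen here is an illustrative assumption, not the exact scheme generalized in the paper.

```python
import numpy as np

def armijo_step(f, grad_f, x, d, c=1e-4, beta=0.5, max_backtracks=30):
    """Armijo backtracking along a descent direction d, starting from a step
    estimated by a one-dimensional quadratic model of f along d."""
    g_dot_d = float(grad_f(x) @ d)            # directional derivative (should be < 0)
    f0 = f(x)
    f1 = f(x + d)                             # trial value at unit step
    curvature = 2.0 * (f1 - f0 - g_dot_d)     # model: f0 + g_dot_d*t + 0.5*curvature*t^2
    t = -g_dot_d / curvature if curvature > 0 else 1.0   # minimizer of the model
    for _ in range(max_backtracks):
        if f(x + t * d) <= f0 + c * t * g_dot_d:          # Armijo sufficient decrease
            return t
        t *= beta                                         # backtrack
    return t

# Example on a simple quadratic objective:
f = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
x0 = np.array([1.0, -2.0])
print(armijo_step(f, grad_f, x0, -grad_f(x0)))   # returns the exact minimizer 0.5
```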

5.
A readily implementable algorithm is proposed for minimizing any convex, not necessarily differentiable, function f of several variables subject to a finite number of linear constraints. The algorithm requires only the calculation of f at designated feasible points. At each iteration a lower polyhedral approximation to f is constructed from a few previously calculated subgradients and an aggregate subgradient. The recursively updated aggregate subgradient accumulates the subgradient information to deal with the nondifferentiability of f. The polyhedral approximation and the linear constraints generate the constraints of the search-direction-finding subproblem, which is a quadratic programming problem. A step length is then found by an approximate line search. It is shown that the algorithm converges to a solution, if one exists.
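The lower polyhedral (cutting-plane) approximation built from previously computed subgradients has the standard form shown below; the aggregate subgradient contributes one more affine piece of the same kind. This is the generic model only, not the paper's particular aggregation rule:

$$
\hat f_k(x) \;=\; \max_{j \in J_k}\bigl\{\, f(x_j) + g_j^{\mathsf{T}}(x - x_j) \,\bigr\} \;\le\; f(x),
\qquad g_j \in \partial f(x_j),
$$

which is a valid lower bound for every $x$ because $f$ is convex.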

6.
An algorithm for solving linearly constrained optimization problems is proposed. The search direction is computed by a bundle principle and the constraints are treated through an active set strategy. Difficulties that arise when the objective function is nonsmooth require a clever choice of the constraint to relax. A certain nondegeneracy assumption is necessary to obtain convergence. Most of this research was performed when the author was with I.N.R.I.A. (Domaine de Voluceau-Rocquencourt, B.P. 105, 78153 Le Chesnay Cédex, France). This research was supported in part by the National Science Foundation, Grants No. DMC-84-51515 and OIR-85-00108.

7.
In this paper, an interior point algorithm based on trust region techniques is proposed for solving nonlinear optimization problems with linear equality constraints and nonnegative variables. Unlike existing interior-point trust-region methods, the proposed method does not require that a general quadratic subproblem with a trust region bound be solved at each iteration. Instead, a system of linear equations is solved to obtain a search direction, and a line search of Armijo type is then performed in this direction to obtain a new iterate. From a computational point of view, this approach may in general reduce the computational effort and thus improve computational efficiency. Under suitable conditions, it is proven that any accumulation point of the sequence generated by the algorithm satisfies the first-order optimality condition.

8.
Based on the NEWUOA algorithm, a new derivative-free algorithm is developed, named LCOBYQA. The main aim of the algorithm is to find a minimizer $x^{*} \in\mathbb{R}^{n}$ of a non-linear function, whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on a model of the given function constructed from a set of interpolation points. LCOBYQA is iterative; at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions and leaves some freedom in the model. The remaining freedom is resolved by minimizing the Frobenius norm of the change to the second derivative matrix of the model. The model is then minimized over a trust region, using the conjugate gradient method, to obtain a new iterate. At times the new iterate is instead found from a model-improvement iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.
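In NEWUOA-style methods the freedom left by the interpolation conditions is removed by a least-change condition on the model Hessian; a generic statement of that subproblem (an illustration consistent with the description above, not necessarily LCOBYQA's exact formulation) is:

$$
\min_{Q_{k+1}}\;\bigl\lVert \nabla^2 Q_{k+1} - \nabla^2 Q_k \bigr\rVert_F
\quad\text{subject to}\quad Q_{k+1}(y_i) = f(y_i),\; i = 1,\dots,m,
$$

where $\{y_i\}$ are the current interpolation points and $Q_k$ is the previous quadratic model.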

9.
A concave function defined on a polytope may have many local minima (in fact every extreme point may be a local minimum). Sufficient conditions are given such that if they are satisfied at a point, this point is known to be a global minimum. It is only required to solve a single linear program to test whether the sufficient conditions are satisfied. This test has been incorporated into an earlier algorithm to give improved performance. Computational results presented show that these sufficient conditions are satisfied for certain types of problems and may substantially reduce the effort needed to find and recognize a global minimum.

10.
《Optimization》2012,61(1):101-131
In this article, non-linear minimax problems with general constraints are discussed. By solving one quadratic programming subproblem, an improved direction is obtained, and a second-order correction direction is also available via one system of linear equations. A new algorithm for solving the discussed problems is thus presented. In connection with a special merit function, a generalized monotone line search is used to determine the step size at each iteration. Under mild conditions, global and superlinear convergence are ensured. Finally, some numerical experiments are conducted to test our algorithm, and the results demonstrate that it is promising.

11.
In this paper, we consider linearly constrained multiobjective minimization, and we propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a suitable descent direction for all the objective functions, and then uses a bisection algorithm to find an optimal step size along this direction. We prove, under natural assumptions, that the proposed algorithm is well defined and converges globally to Pareto critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative results of numerical experiments are reported.

12.
Z. Akbari 《Optimization》2017,66(9):1519-1529
In this paper, we present a nonsmooth trust region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic approximation of the objective function is constructed. The null space technique is applied to handle the constraints of the quadratic subproblem. Next, the CG-Steihaug method is applied to solve the resulting quadratic model subject only to the trust region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and numerical results are reported.
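The CG-Steihaug (truncated conjugate gradient) routine used at this step solves the model problem $\min_p\, g^{\mathsf T} p + \tfrac12 p^{\mathsf T} B p$ subject to $\lVert p\rVert \le \Delta$. The following is a standard textbook implementation offered as a sketch; it is not claimed to match the paper's null-space variant in its details.

```python
import numpy as np

def cg_steihaug(g, B, delta, tol=1e-8, max_iter=None):
    """Truncated CG (Steihaug) for:  minimize g^T p + 0.5 p^T B p,  ||p|| <= delta."""
    n = g.size
    max_iter = max_iter or 2 * n
    p = np.zeros(n)
    r = g.copy()                      # gradient of the model at p: B p + g (here p = 0)
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                  # negative curvature: go to the boundary
            return p + _tau_to_boundary(p, d, delta) * d
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:   # step leaves the region: stop on boundary
            return p + _tau_to_boundary(p, d, delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _tau_to_boundary(p, d, delta):
    """Positive tau with ||p + tau*d|| = delta (larger root of a scalar quadratic)."""
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```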

13.
A parallel stochastic algorithm is presented for solving the linearly constrained concave global minimization problem. The algorithm is a multistart method and makes use of a Bayesian stopping rule to identify the global minimum with high probability. Computational results are presented for more than 200 problems on a Cray X-MP EA/464 supercomputer.

14.
A modification of an existing barrier-function method is presented. The modified algorithm may be used to solve semi-infinite minimax problems arising in engineering design. The modification preserves the global convergence properties, simple structure, and numerical robustness of the original algorithm, while substantially reducing the computational cost. This research was supported by the National Science Foundation Grant ECS-8517362, the Air Force Office of Scientific Research Grant 86-0116, and the California State MICRO program.

15.
A primal-dual version of the proximal point algorithm is developed for linearly constrained convex programming problems. The algorithm is an iterative method for finding a saddle point of the Lagrangian of the problem. At each iteration, we compute an approximate saddle point of the Lagrangian function augmented by quadratic proximal terms in both the primal and the dual variables. Specifically, we first minimize the function with respect to the primal variables and then approximately maximize the resulting function of the dual variables. The merit of this approach lies in the fact that the latter function is differentiable and its maximization is subject to no constraints. We discuss convergence properties of the algorithm and report numerical results for network flow problems with separable quadratic costs.
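One generic way to write an iteration of such a primal-dual proximal point scheme for the Lagrangian $L(x,y)$, with proximal parameter $c_k > 0$, is sketched below; the notation is an assumption made for illustration and need not match the authors' exact formulation:

$$
\varphi_k(y) \;=\; \min_{x}\Bigl\{\, L(x,y) + \tfrac{1}{2c_k}\lVert x - x^k\rVert^2 \Bigr\} \;-\; \tfrac{1}{2c_k}\lVert y - y^k\rVert^2,
\qquad
y^{k+1} \approx \arg\max_{y}\, \varphi_k(y),
$$

with $x^{k+1}$ taken as the inner minimizer at $y = y^{k+1}$. The primal proximal term makes the inner minimizer unique, so $\varphi_k$ is differentiable, and its maximization is unconstrained.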

16.
To solve the quadratically constrained quadratic programming problem (QCQP) more effectively, this paper proposes an adaptive linear relaxation technique within a branch-and-bound framework and proves theoretically that this new bounding technique is suitable for solving (QCQP). The branching operation uses a conditional bisection rule that allows rectangles to be partitioned efficiently, and a reduction technique removes portions of the region that cannot contain a global optimal solution, thereby accelerating the convergence of the algorithm. Finally, numerical results show that the proposed algorithm is effective and feasible.
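As background on how linear relaxations of quadratic terms over a rectangle are usually constructed (a standard construction given here for illustration, not necessarily the adaptive technique proposed in the paper): for a single variable $x \in [l, u]$,

$$
x^2 \;\ge\; 2\hat x\, x - \hat x^{\,2} \quad\text{(tangent underestimator at any } \hat x\text{)},
\qquad
x^2 \;\le\; (l + u)\, x - l u \quad\text{(secant overestimator)},
$$

so squared terms can be sandwiched between affine functions on the rectangle (bilinear terms are treated similarly with McCormick envelopes), which yields a linear programming relaxation whose optimal value bounds the QCQP value on that rectangle.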

17.
We consider a minimax resource allocation problem in which each term of the objective function is a strictly decreasing, invertible function of a single decision variable. The objective is to minimize the maximum term subject to non-negativity constraints and a set of linear constraints with only non-negative parameters. We develop an algorithm that finds an optimal solution by repeatedly solving a relaxed minimax problem. In general, each relaxed problem is solved by simple search methods; however, for certain non-linear functions the algorithm employs closed-form expressions.
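To see why relaxed problems of this kind can sometimes be solved in closed form, consider a simple illustrative case (an assumption made here for exposition, not the paper's general setting): a single constraint $\sum_j a_j x_j \le b$ with $a_j > 0$, objective $\max_j f_j(x_j)$, and each $f_j$ strictly decreasing. Requiring $f_j(x_j) \le t$ is equivalent to $x_j \ge f_j^{-1}(t)$, so the smallest achievable value $t$ solves

$$
\sum_{j} a_j\, f_j^{-1}(t) \;=\; b,
$$

provided the resulting $f_j^{-1}(t)$ are nonnegative; variables whose bound falls below zero are fixed at zero and the equation is re-solved over the remaining terms.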

19.
An ODE-based approach to nonlinearly constrained minimax problems
We consider the following problem: choose $x_1, \ldots, x_n$ to … where $m_1$, $m_2$, $m_3$ are integers with $0 \le m_1 \le m_2 \le m_3$, the $f_i$ are given real numbers, and the $g_i$ are given smooth functions. Constraints of the form $g_i(x_1, \ldots, x_n) = 0$ can also be handled without problem. Each iteration of our algorithm involves approximately solving a certain non-linear system of first-order ordinary differential equations to get a search direction for a line search and using a Newton-like approach to correct back into the feasible region when necessary. The algorithm and our Fortran implementation of it will be discussed along with some examples. Our experience to date has been that the program is more robust than any of the library routines we have tried, although it generally requires more computer time. We have found this program to be an extremely useful tool in diverse areas, including polymer rheology, computer vision, and computation of convexity-preserving rational splines.

20.
We introduce a first-order Mirror-Descent (MD) type algorithm for solving nondifferentiable convex problems having a combination of a simple constraint set X (ball, simplex, etc.) and an additional functional constraint. The method is tuned to exploit the structure of X by employing an appropriate non-Euclidean distance-like function. Convergence results and efficiency estimates are derived. The performance of the algorithm is demonstrated by solving certain image deblurring problems.
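To illustrate how a non-Euclidean distance-like function exploits the structure of X, the classical entropy-prox mirror-descent step on the probability simplex has the closed form sketched below. This is the generic MD update only; the objective in the example and the step-size rule are assumptions for illustration, and the paper's treatment of the additional functional constraint is not reproduced here.

```python
import numpy as np

def md_entropy_step(x, subgrad, step):
    """One mirror-descent step on the probability simplex with the negative-entropy
    distance-generating function: a multiplicative update followed by renormalization."""
    w = x * np.exp(-step * subgrad)
    return w / w.sum()

# Example: minimize f(x) = max_i c_i * x_i, a nondifferentiable convex function,
# over the simplex, using one subgradient of f at each iterate.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for k in range(1, 51):
    i = int(np.argmax(c * x))            # active index -> subgradient c_i * e_i
    g = np.zeros(3)
    g[i] = c[i]
    x = md_entropy_step(x, g, step=1.0 / np.sqrt(k))
print(x)   # mass shifts away from coordinates with large c_i as the max terms balance
```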
