Similar documents
 20 similar documents found (search time: 15 ms)
1.
2.
A dual algorithm for minimax problems
In this paper, a dual algorithm, based on a smoothing function of Bertsekas (1982), is established for solving unconstrained minimax problems. It is proven that the sequence of points generated by unconstrained minimization of the smoothing function, as the parameter t changes, converges locally to a Kuhn-Tucker point at a Q-superlinear rate under mild conditions. The relationship between the condition number of the Hessian matrix of the smoothing function and the parameter is studied, which also validates the convergence theory. Finally, numerical results are reported to show the effectiveness of the algorithm.
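The smoothing idea behind this dual algorithm can be illustrated with a minimal log-sum-exp sketch. The abstract does not reproduce Bertsekas's exact smoothing function or the rule for updating the parameter t, so the function below is simply the standard exponential smoothing of the max, which converges to the true maximum from above as t shrinks:

```python
import math

def smooth_max(values, t):
    # Log-sum-exp smoothing of the max function:
    #   f_t(v) = t * log(sum_i exp(v_i / t))
    # As t -> 0+, f_t(v) -> max_i v_i from above.
    m = max(values)  # shift by the max for numerical stability
    return m + t * math.log(sum(math.exp((v - m) / t) for v in values))

vals = [1.0, 2.5, 2.0]
for t in (1.0, 0.1, 0.01):
    print(round(smooth_max(vals, t), 4))
```

For a fixed t the smoothed function is infinitely differentiable, so a sequence of smooth unconstrained minimizations with decreasing t approximates the minimax solution, which is the structure the abstract describes.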

3.
Based on the NEWUOA algorithm, a new derivative-free algorithm, named LCOBYQA, is developed. Its aim is to find a minimizer $x^{*} \in\mathbb{R}^{n}$ of a non-linear function whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on a model of the given function constructed from a set of interpolation points. LCOBYQA is iterative: at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions and leaves some freedom in the model, which is resolved by minimizing the Frobenius norm of the change to the second derivative matrix of the model. A new iterate is then obtained by minimizing the model over a trust region using the conjugate gradient method. At times the new iterate is instead found from a model-improvement iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.

4.
In this paper, a modified nonmonotone line search SQP algorithm for nonlinear minimax problems is presented. At each iteration of the proposed algorithm, a main search direction is obtained by solving a reduced quadratic program (QP). To avoid the Maratos effect, a correction direction is generated by solving a reduced system of linear equations. Under mild conditions, global and superlinear convergence are achieved. Finally, some preliminary numerical results are reported.

5.
This paper presents a proximal point algorithm for solving discrete $l_\infty$ approximation problems of the form minimize $\|Ax-b\|_\infty$. Let ε be a preassigned positive constant and let ε_l, l = 0, 1, 2, ..., be a sequence of positive real numbers with 0 < ε_l < ε. Then, starting from an arbitrary point z_0, the proposed method generates a sequence of points z_l, l = 0, 1, 2, ..., via the rule . One feature that characterizes this algorithm is its finite termination property: a solution is reached within a finite number of iterations. The smaller the numbers ε_l, the smaller the number of iterations; in fact, if ε_0 is sufficiently small then z_1 solves the original minimax problem. The practical value of the proposed iteration depends on the availability of an efficient code for solving a regularized minimax problem of the form minimize , where ε is a given positive constant. It is shown that the dual of this problem has the form maximize , and if y solves the dual then x = A^T y solves the primal. The simple structure of the dual enables us to apply a wide range of methods. In this paper we design and analyze a row relaxation method suitable for solving large sparse problems. Numerical experiments illustrate the feasibility of our ideas.

6.
A hybrid algorithm for nonlinear minimax problems
In this paper, a hybrid algorithm for solving finite minimax problems is presented. The algorithm combines trust-region methods with line-search and curve-search methods. By means of this hybrid technique, the algorithm can, according to the specific situation at each iteration, adaptively perform a trust-region step, a line-search step or a curve-search step, so as to avoid solving the trust-region subproblem many times and to make better use of the advantages of the different methods. Moreover, a second-order correction step is used to circumvent the Maratos effect that occurs in nonsmooth optimization. Under mild conditions, we prove that the new algorithm is globally convergent and locally superlinearly convergent. Preliminary experiments show that the new algorithm performs efficiently.

7.
A new method for nonlinear minimax problems is presented. The method is of the trust region type and based on sequential linear programming. It is a first order method that only uses first derivatives and does not approximate Hessians. The new method is well suited for large sparse problems as it only requires that software for sparse linear programming and a sparse symmetric positive definite equation solver are available. On each iteration a special linear/quadratic model of the function is minimized, but contrary to the usual practice in trust region methods the quadratic model is only defined on a one-dimensional path from the current iterate to the boundary of the trust region. Conjugate gradients are used to define this path. One iteration involves one LP subproblem and requires three function evaluations and one gradient evaluation. Promising numerical results obtained with the method are presented. In fact, we find that the number of iterations required is comparable to that of state-of-the-art quasi-Newton codes. Research supported by The Nordic Council of Ministers, The Icelandic Science Council, The University of Iceland Research Fund and The Danish Natural Science Research Council.

8.
Many real life problems in economics, finance, management, engineering and other fields can be stated as minimax problems, which demonstrates the importance of having reliable methods to tackle them. In this paper, an algorithm for linearly constrained minimax problems is presented which combines trust-region methods with line-search and curve-search methods. By means of this hybrid technique, the algorithm avoids solving the trust-region subproblem many times and makes better use of the advantages of the different methods. Under weak conditions, global and superlinear convergence are achieved. Numerical experiments show that the new algorithm is robust and efficient.

9.
In this paper, we propose pattern search methods for finite minimax problems. Due to the nonsmoothness of this class of problems, we convert the original problem into a smooth one by using a smoothing technique based on the exponential penalty function of Kort and Bertsekas; the technique depends on a smoothing parameter that controls the approximation to the finite minimax problem. The proposed methods are based on sampling the smooth function along a set of suitable search directions and on an updating rule for the step-control parameter. Under suitable conditions, we obtain global convergence results, despite the fact that pattern search methods have no explicit gradient information and consequently cannot explicitly enforce a notion of sufficient feasible decrease.
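A minimal sketch of this combination, assuming the standard log-sum-exp (exponential penalty) smoothing and a simple coordinate pattern search with a halving step-control rule. The paper's actual search directions and parameter updates are not specified in the abstract, so this is an illustration of the idea, not the proposed method:

```python
import math

def smoothed_minimax(fs, x, mu):
    # Exponential-penalty smoothing of max_i f_i(x), shifted for stability.
    vals = [f(x) for f in fs]
    m = max(vals)
    return m + mu * math.log(sum(math.exp((v - m) / mu) for v in vals))

def pattern_search(fs, x0, mu=0.05, step=1.0, tol=1e-6, max_iter=10000):
    # Sample the smoothed function along the +/- coordinate directions;
    # halve the step whenever no direction improves the value.
    x = list(x0)
    fx = smoothed_minimax(fs, x, mu)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = smoothed_minimax(fs, y, mu)
                if fy < fx - 1e-12:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
        it += 1
    return x, fx

# min over x of max((x-1)^2, (x+1)^2): minimizer x = 0, value 1
fs = [lambda z: (z[0] - 1.0) ** 2, lambda z: (z[0] + 1.0) ** 2]
x, fx = pattern_search(fs, [3.0])
```

No gradients are evaluated anywhere; convergence is driven entirely by the sampling pattern and the shrinking step, which is the point the abstract makes.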

10.
Many real life problems in economics, finance, management, engineering and other fields can be stated as minimax optimization problems. In this paper, we present an algorithm with a nonmonotone strategy and a second-order correction technique for minimax optimization problems. Using this scheme, the new algorithm overcomes the Maratos effect that occurs in nonsmooth optimization, and global and superlinear convergence are achieved accordingly. Numerical experiments indicate some advantages of this scheme.

11.
A stochastic approximation algorithm for minimax optimization problems is analyzed. At each iterate, it performs one random experiment, based on which it computes a direction vector. It is shown that, under suitable conditions, the algorithm converges almost surely to the set of points satisfying necessary optimality conditions. The algorithm and its analysis bring together ideas from stochastic approximation and nondifferentiable optimization.

12.
An algorithm is presented for the numerical solution of nonlinear programming problems in which the objective function is to be minimized over feasible x after having been maximized over feasible y. The vectors x and y are subject to separate nonlinear constraints. The algorithm is obtained as follows: one starts with an outer algorithm for the minimization over x, here taken to be a method of centers; then, one inserts into this algorithm an adaptive inner procedure, which is designed to compute a suitable approximation to the maximizer y in a finite number of steps. The outer and inner algorithms are blended in such a way as to cause the inner one to converge more rapidly. The results on convergence and rate of convergence for the outer algorithm continue to hold (essentially) for the composite algorithm. Thus, what is considered here, for the first time for this type of problem, is the question of how one inserts an approximation procedure into an algorithm so as to preserve its convergence and its rate of convergence.

13.
We present a subgradient algorithm for minimizing the maximum of a finite collection of functions. It is assumed that each function is the sum of a finite collection of basic convex functions and that the number of different subgradient sets associated with nondifferentiable points of each basic function is finite on any bounded set. Problems belonging to this class include the linear approximation problem and both the minimax and minisum problems of location theory. Convergence of the algorithm to an epsilon-optimal solution is proven and its effectiveness is demonstrated by solving a number of location problems and linear approximation problems. This research was partially supported by the Army Research Office, Triangle Park, NC, under contract number DAH-CO4-75-G-0150, and by NSF grants ENG 16-24294 and ENG 75-10225.
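As a rough illustration of the subgradient idea (a generic sketch, not the paper's algorithm, which handles sums of basic convex functions): a subgradient of max_i f_i at x is any gradient of a currently active f_i, and a diminishing step size gives convergence toward the minimax solution:

```python
def subgradient_minimax(fs, grads, x0, iters=2000):
    # fs: list of convex functions; grads: their gradients.
    # A gradient of the active (maximal) function is a subgradient of the max.
    x = list(x0)
    best_x, best_v = x[:], max(f(x) for f in fs)
    for k in range(1, iters + 1):
        vals = [f(x) for f in fs]
        i = max(range(len(fs)), key=lambda j: vals[j])  # active index
        g = grads[i](x)                                  # subgradient of the max
        step = 1.0 / k                                   # diminishing step size
        x = [xi - step * gi for xi, gi in zip(x, g)]
        v = max(f(x) for f in fs)
        if v < best_v:                                   # track the best point seen
            best_x, best_v = x[:], v
    return best_x, best_v

# min over x of max((x-1)^2, (x+1)^2): optimum at x = 0, value 1
fs = [lambda z: (z[0] - 1.0) ** 2, lambda z: (z[0] + 1.0) ** 2]
grads = [lambda z: [2.0 * (z[0] - 1.0)], lambda z: [2.0 * (z[0] + 1.0)]]
best_x, best_v = subgradient_minimax(fs, grads, [2.0])
```

Because the objective is nonsmooth at the minimizer, the iterates oscillate across the kink, which is why the best value seen so far is recorded rather than the last iterate.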

14.
We consider a minimax resource allocation problem in which each term of the objective function is a strictly decreasing, invertible function of a single decision variable. The objective is to minimize the maximum term subject to non-negativity constraints and a set of linear constraints with only non-negative parameters. We develop an algorithm that finds an optimal solution by repeatedly solving a relaxed minimax problem. In general, each relaxed problem is solved by simple search methods; however, for certain non-linear functions the algorithm employs closed form expressions.
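For the simplest special case, a single budget constraint, the optimum equalizes all terms at a common level lam, so the relaxed problem reduces to a one-dimensional search for the level whose total resource use matches the budget. The sketch below (an illustration under that simplifying assumption, not the paper's algorithm for general linear constraints) uses bisection on lam:

```python
def minimax_allocation(f_inv, budget, lo, hi, tol=1e-10):
    # f_inv: list of inverses f_i^{-1}. Each f_i is strictly decreasing in x_i,
    # so f_i^{-1}(lam) (resource needed to reach level lam) decreases in lam.
    # Find the smallest achievable common level: sum_i f_i^{-1}(lam) = budget.
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        used = sum(g(lam) for g in f_inv)
        if used > budget:
            lo = lam   # level too ambitious: needs more resource than available
        else:
            hi = lam   # level achievable: try to lower the maximum further
    lam = 0.5 * (lo + hi)
    return lam, [g(lam) for g in f_inv]

# Example: f_i(x) = c_i / x  =>  f_i^{-1}(lam) = c_i / lam,
# so the optimal level is sum(c) / budget.
c = [1.0, 2.0, 3.0]
f_inv = [lambda lam, ci=ci: ci / lam for ci in c]
lam, x = minimax_allocation(f_inv, budget=6.0, lo=1e-3, hi=1e3)
```

With c = [1, 2, 3] and budget 6 the common level is 1 and the allocation is (1, 2, 3), saturating the budget, which matches the equalization property the abstract relies on.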

15.
A DERIVATIVE-FREE ALGORITHM FOR UNCONSTRAINED OPTIMIZATION
In this paper a hybrid algorithm for unconstrained optimization, combining the pattern search method and the genetic algorithm, is presented. The algorithm is a deterministic pattern search algorithm, but in the search step the trial points are produced in the manner of a genetic algorithm: at each iteration a finite set of points is generated by replication, crossover and mutation. In theory, the algorithm is globally convergent. Most strikingly, the numerical results show that it can find the global minimizer for some problems on which other pattern search algorithms fail.

16.
In this study, we propose an algorithm for solving a minimax problem over a polyhedral set defined by a system of linear inequalities. At each iteration a direction is found by solving a quadratic programming problem, and a suitable step size along that direction is then taken through an extension of Armijo's approximate line search technique. We show that each accumulation point is a Kuhn-Tucker solution and give a condition that guarantees convergence of the whole sequence of iterates. Through the use of an exact penalty function, the algorithm can be used for solving constrained nonlinear programming problems. In this case, our algorithm resembles that of Han, but differs from it in both the direction-finding and the line search steps.
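The Armijo step-size rule mentioned above can be sketched as follows. This is the generic backtracking version; the paper's extension to the approximate minimax setting is not reproduced here:

```python
def armijo(phi, phi0, dphi0, alpha0=1.0, beta=0.5, sigma=1e-4):
    # Backtracking (Armijo) line search along a descent direction:
    # shrink alpha until  phi(alpha) <= phi(0) + sigma * alpha * phi'(0),
    # where phi(a) = f(x + a*d), phi0 = phi(0), dphi0 = phi'(0) < 0.
    alpha = alpha0
    while phi(alpha) > phi0 + sigma * alpha * dphi0 and alpha > 1e-12:
        alpha *= beta
    return alpha

# f(x) = x^2 at x = 1 along d = -1:  phi(a) = (1 - a)^2, phi'(0) = -2
alpha = armijo(lambda a: (1.0 - a) ** 2, phi0=1.0, dphi0=-2.0)
```

The sufficient-decrease parameter sigma is kept small so that the full step alpha0 = 1 is accepted whenever possible, which matters for preserving fast local convergence of the outer QP-based iteration.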

17.
In this work, an improved SQP method is proposed for solving minimax problems, and a new method with small computational cost is proposed to avoid the Maratos effect. In addition, its global and superlinear convergence are obtained under some suitable conditions.

18.
A two-step derivative-free iterative algorithm is presented for solving nonlinear equations. Error analysis shows that the algorithm is fourth-order with efficiency index equal to 1.5874. Extensive numerical results show that the algorithm is effective and preferable to some existing derivative-free methods in terms of computational cost.
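The abstract does not spell out the two-step scheme itself. As a baseline for comparison, the classical one-step derivative-free Steffensen iteration (second-order; the kind of method fourth-order two-step schemes improve on) looks like this:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    # Steffensen's derivative-free iteration for f(x) = 0:
    #   x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n))
    # The finite difference replaces f'(x_n); convergence is quadratic.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0:
            break  # degenerate difference; give up at the current point
        x = x - fx * fx / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)  # root of x^2 - 2
```

Steffensen uses two function evaluations per step (order 2, efficiency index 2^{1/2} ≈ 1.414); a fourth-order two-step method with three evaluations reaches 4^{1/3} ≈ 1.5874, the index quoted in the abstract.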

19.
We focus on the numerical solution of closed-loop stochastic problems and propose a perturbed gradient algorithm to achieve this goal. The main hurdle in such problems is that the control variables are infinite-dimensional, due, e.g., to the information constraints. In other words, the controls are feedbacks, i.e., functions, and hence have to be represented in a finite way in order to solve the problem numerically. In the same way, the gradient of the criterion is itself an infinite-dimensional object. Our algorithm replaces this exact (and unknown) gradient by a perturbed one, which consists of the product of the true gradient evaluated at a random point and a kernel function which extends this gradient to a neighbourhood of the random point. Proceeding this way, we explore the whole space iteration after iteration through random points. Since each kernel function is perfectly known by a small number of parameters, say N, the control at iteration k is perfectly known as an infinite-dimensional object by at most N × k parameters. The main strength of this method is that it avoids any discretization of the underlying space, provided that we can sample as many points as needed in this space. Moreover, our algorithm can take into account the possible measurability constraints of the problem in a new way. Finally, the randomized strategy implemented by the algorithm causes the most probable parts of the space to be the most explored ones, which is a priori an interesting feature. In this paper, we first prove two convergence results for this algorithm in the strongly convex and convex cases, and then give some numerical examples showing the interest of this method for practical stochastic optimization problems. In Memoriam: Jean-Sébastien Roy passed away July 04, 2007. He was 33 years old.

20.
In this paper, we present a scenario aggregation algorithm for the solution of the dynamic minimax problem in stochastic programming. We consider the case where the joint probability distribution has known finite support. The algorithm applies the Alternating Direction Method of Multipliers (ADMM) to a reformulation of the minimax problem using a double duality framework. The problem is solved by decomposition into scenario sub-problems, which are deterministic multi-period problems. Convergence properties are deduced from those of ADMM. The resulting algorithm can be seen as an extension of Rockafellar and Wets' Progressive Hedging algorithm to the dynamic minimax context.

