Similar Documents
20 similar documents found.
1.
An algorithm for solving linearly constrained optimization problems is proposed. The search direction is computed by a bundle principle, and the constraints are treated through an active-set strategy. The difficulties that arise when the objective function is nonsmooth require a careful choice of which constraint to relax. A certain nondegeneracy assumption is necessary to obtain convergence. Most of this research was performed when the author was with I.N.R.I.A. (Domaine de Voluceau-Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France). This research was supported in part by the National Science Foundation, Grants No. DMC-84-51515 and OIR-85-00108.

2.
In this paper, we study inverse optimization for linearly constrained convex separable programming problems, which have wide applications in industrial and managerial areas. For a given feasible point of a convex separable program, inverse optimization determines whether the feasible point can be made optimal by adjusting the parameter values in the problem and, when the answer is positive, finds the parameter values that require the smallest adjustments. A necessary and sufficient condition is given for a feasible point to be able to become optimal by adjusting parameter values. Inverse optimization formulations are presented with the $\ell_1$ and $\ell_2$ norms. These inverse optimization problems are linear programs when the $\ell_1$ norm is used in the formulation, and convex quadratic separable programs when the $\ell_2$ norm is used.
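To make the $\ell_1$ case concrete, here is a minimal Python sketch of the idea for the purely linear special case min c'x subject to Ax >= b (the paper itself treats general convex separable objectives, and the helper name inverse_lp_l1 is ours): by the KKT conditions, a feasible x0 is optimal exactly when the adjusted cost c + d lies in the cone spanned by the gradients of the constraints active at x0, and the smallest $\ell_1$ adjustment is then found by a linear program.

import numpy as np
from scipy.optimize import linprog

def inverse_lp_l1(A, b, c, x0, tol=1e-9):
    # Smallest l1-norm adjustment d of the cost c that makes the feasible
    # point x0 optimal for  min c'x  s.t.  A x >= b  (linear special case).
    active = np.abs(A @ x0 - b) <= tol        # constraints tight at x0
    A_J = A[active]
    m_J, n = A_J.shape
    # KKT: x0 is optimal iff  c + d = A_J' y  for some y >= 0.
    # LP variables: [y, d_plus, d_minus], all >= 0, with d = d_plus - d_minus.
    obj = np.concatenate([np.zeros(m_J), np.ones(2 * n)])
    A_eq = np.hstack([A_J.T, -np.eye(n), np.eye(n)])
    res = linprog(obj, A_eq=A_eq, b_eq=c, bounds=(0, None), method="highs")
    if not res.success:
        return None                           # x0 cannot be made optimal this way
    y = res.x[:m_J]
    d = res.x[m_J:m_J + n] - res.x[m_J + n:]
    return d, y

# Tiny example:  min x1 - x2  s.t.  x >= 0,  with the feasible point x0 = (0, 0).
A = np.eye(2); b = np.zeros(2); c = np.array([1.0, -1.0]); x0 = np.zeros(2)
d, y = inverse_lp_l1(A, b, c, x0)
print(d)      # approximately [0, 1]: raising c_2 to 0 makes x0 optimal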

3.
Large-scale linearly constrained optimization
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems. This research was supported by the U.S. Office of Naval Research (Contract N00014-75-C-0267), the National Science Foundation (Grants MCS71-03341 A04, DCR75-04544), the U.S. Energy Research and Development Administration (Contract E(04-3)-326 PA #18), the Victoria University of Wellington, New Zealand, and the Department of Scientific and Industrial Research, Wellington, New Zealand.

4.
Z. Akbari 《Optimization》2017,66(9):1519-1529
In this paper, we present a nonsmooth trust-region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic approximation of the objective function is constructed. The null-space technique is applied to handle the constraints of the quadratic subproblem. Next, the CG-Steihaug method is applied to solve the resulting quadratic model subject only to the trust-region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and numerical results are reported.
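For illustration, here is a minimal Python sketch of the standard CG-Steihaug (Steihaug-Toint truncated conjugate gradient) step for the model min g'p + 0.5 p'Bp subject to ||p|| <= delta; this is the generic method named in the abstract rather than the authors' code, and it assumes the linear constraints have already been eliminated through the null-space substitution, so only the trust-region constraint remains.

import numpy as np

def _to_boundary(p, d, delta):
    # Positive tau with ||p + tau*d|| = delta (larger root of a quadratic).
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def cg_steihaug(g, B, delta, tol=1e-8, max_iter=None):
    # Steihaug-Toint truncated CG for  min g'p + 0.5 p'Bp  s.t.  ||p|| <= delta.
    n = g.size
    max_iter = max_iter or 2 * n
    p = np.zeros(n)
    r = np.asarray(g, dtype=float).copy()     # model gradient at p = 0
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                          # negative curvature: go to the boundary
            return p + _to_boundary(p, d, delta) * d
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:   # step would leave the trust region
            return p + _to_boundary(p, d, delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

# Example: with B = I and a large radius this returns the Newton step -g.
print(cg_steihaug(np.array([1.0, 1.0]), np.eye(2), delta=10.0))   # -> [-1. -1.]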

5.
A procedure is described for preventing cycling in active-set methods for linearly constrained optimization, including the simplex method. The key ideas are a limited acceptance of infeasibilities in all variables, and maintenance of a working feasibility tolerance that increases over a long sequence of iterations. The additional work per iteration is nominal, and stalling cannot occur with exact arithmetic. The method appears to be reliable, based on computational results for the first 53 linear programming problems in the Netlib set. The material contained in this report is based upon research supported by the Air Force Office of Scientific Research Grant 87-01962; the U.S. Department of Energy Grant DE-FG03-87ER25030; National Science Foundation Grants CCR-8413211 and ECS-8715153; and the Office of Naval Research Contract N00014-87-K-0142.

6.
A tolerant algorithm for linearly constrained optimization calculations
Two extreme techniques when choosing a search direction in a linearly constrained optimization calculation are to take account of all the constraints or to use an active set method that satisfies selected constraints as equations, the remaining constraints being ignored. We prefer an intermediate method that treats all inequality constraints with small residuals as inequalities with zero right-hand sides and that disregards the other inequality conditions. Thus the step along the search direction is not restricted by any constraints with small residuals, which can help efficiency greatly, particularly when some constraints are nearly degenerate. We study the implementation, convergence properties and performance of an algorithm that employs this idea. The implementation considerations include the choice and automatic adjustment of the tolerance that defines the small residuals, the calculation of the search directions, and the updating of second derivative approximations. The main convergence theorem imposes no conditions on the constraints except for boundedness of the feasible region. The numerical results indicate that a Fortran implementation of our algorithm is much more reliable than the software that was tested by Hock and Schittkowski (1981). Therefore the algorithm seems to be very suitable for general use, and it is particularly appropriate for semi-infinite programming calculations that have many linear constraints that come from discretizations of continua.

7.
This paper describes a direct search method for a class of linearly constrained optimization problems. We show that the problem can be treated as an unconstrained optimization problem, and because the number of variables that must be computed in the algorithm is reduced, establishing convergence to KKT points is simplified to some extent. Convergence is shown under mild conditions that allow successive frames to be rotated, translated, and scaled relative to one another.

8.
Based on the NEWUOA algorithm, a new derivative-free algorithm is developed, named LCOBYQA. The main aim of the algorithm is to find a minimizer $x^{*} \in\mathbb{R}^{n}$ of a non-linear function, whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on a model of the given function constructed from a set of interpolation points. LCOBYQA is iterative; at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions and leaves some freedom in the model. The remaining freedom is resolved by minimizing the Frobenius norm of the change to the second derivative matrix of the model. The model is then minimized over a trust-region subproblem using the conjugate gradient method to obtain a new iterate. At times the new iterate is found from a model-improvement iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.
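For reference, the freedom mentioned above is resolved in the NEWUOA manner; a standard statement of the least Frobenius norm update (written here in generic notation, with interpolation points $y_1, \ldots, y_m$, rather than the paper's own) is that the new model $Q_{k+1}$ solves

$$\min_{Q}\ \|\nabla^2 Q - \nabla^2 Q_{k}\|_F \quad \text{subject to} \quad Q(y_j) = f(y_j), \ j = 1, \ldots, m,$$

so the model interpolates the objective while changing its curvature estimate as little as possible.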

9.
An algorithm for minimization of functions of many variables, subject possibly to linear constraints on the variables, is described. In it a subproblem is solved in which a quadratic approximation is made to the objective function and minimized over a region in which the approximation is valid. A strategy for deciding when this region should be expanded or contracted is given. The quadratic approximation involves estimating the Hessian of the objective function by a matrix which is updated at each iteration by a formula recently reported by Powell [6]. This formula enables convergence of the algorithm from any feasible point to be proved. Use of such an approximation, as opposed to using exact second derivatives, also enables a reduction of about 60% to be made in the number of operations needed to solve the subproblem. Numerical evidence is reported showing that the algorithm is efficient in the number of function evaluations required to solve well-known test problems. This paper was presented at the 7th International Mathematical Programming Symposium 1970, The Hague, The Netherlands.

10.
11.
In this paper, we first establish a Lagrange multiplier condition characterizing a regularized Lagrangian duality for quadratic minimization problems with finitely many linear equality and quadratic inequality constraints, where the linear constraints are not relaxed in the regularized Lagrangian dual. In particular, in the case of a quadratic optimization problem with a single quadratic inequality constraint, such as linearly constrained trust-region problems, we show that the Slater constraint qualification (SCQ) is necessary and sufficient for the regularized Lagrangian duality, in the sense that the regularized duality holds for each quadratic objective function over the constraints if and only if (SCQ) holds. A new theorem of the alternative for systems involving both equality constraints and two quadratic inequality constraints plays a key role. We also provide classes of quadratic programs, including a class of CDT subproblems with linear equality constraints, where (SCQ) ensures regularized Lagrangian duality.

12.
In this paper, we first establish characterizations of the nonemptiness and compactness of the set of weakly efficient solutions of a convex vector optimization problem with a general ordering cone (with or without a cone constraint) defined in a finite-dimensional space. Using one of the characterizations, we further establish, for a convex vector optimization problem with a general ordering cone and a cone constraint defined in a finite-dimensional space, the equivalence between the nonemptiness and compactness of its weakly efficient solution set and the generalized type I Levitin-Polyak well-posedness. Finally, for a cone-constrained convex vector optimization problem defined in a Banach space, we derive sufficient conditions for guaranteeing the generalized type I Levitin-Polyak well-posedness of the problem.

13.
1. Introduction. The problem considered in this paper is $\min\{f(x) : x \in X\}$, where $X = \{x \in \mathbb{R}^n \mid a_j^T x \le h_j,\ j \in I = \{1, \ldots, m\}\}$, the $a_j \in \mathbb{R}^n$ ($j \in I$) are column vectors, the $h_j \in \mathbb{R}$ ($j \in I$) are scalars, and $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. We consider only inequality constraints here, since any equality can be expressed as two inequalities. Without assuming regularity of the linear constraints, there is no difficulty in extending the results to the genera… (This research is supported by the National Natural Sciences Foundation of China and the Natural Sciences Foundation of Hunan Province.)

14.
Matrix augmentation is used for the inversion of bases associated with large linearly constrained control problems. It is shown how an efficient data structure can be maintained by keeping all state variables in the basis, and then nullifying some of them explicitly by using additional constraints. The proposed methodology, together with a basis updating scheme based on augmentation, forms the skeleton for an in-core algorithm using either the revised simplex method or the generalized reduced gradient method.

15.
In this paper, we consider linearly constrained multiobjective minimization and propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a suitable descent direction for all the objective functions, and then uses a bisection algorithm to find an optimal stepsize along this direction. We prove, under natural assumptions, that the proposed algorithm is well-defined and converges globally to Pareto critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative results of numerical experiments are reported.
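As an illustration of the kind of subproblem involved, the Python sketch below computes a common descent direction for several objectives by solving the classical convex QP min over (d, t) of t + 0.5||d||^2 subject to grad f_i(x)'d <= t for every objective i (a Fliege-Svaiter-style formulation; the paper's own reduced-gradient subproblem, which also accounts for the linear constraints, is not reproduced here, and the helper name common_descent_direction is ours).

import numpy as np
from scipy.optimize import minimize

def common_descent_direction(grads):
    # Direction d that is descent for every objective, from the QP
    #   min_{d,t}  t + 0.5*||d||^2   s.t.  g_i'd <= t  for all i.
    # `grads` is a list of objective gradients at the current iterate.
    G = np.asarray(grads)              # shape (m, n)
    m, n = G.shape
    z0 = np.zeros(n + 1)               # variables z = (d, t)

    def obj(z):
        d, t = z[:n], z[n]
        return t + 0.5 * (d @ d)

    cons = [{"type": "ineq", "fun": lambda z, g=g: z[n] - g @ z[:n]} for g in G]
    res = minimize(obj, z0, constraints=cons, method="SLSQP")
    d, t = res.x[:n], res.x[n]
    return d if t < -1e-10 else np.zeros(n)   # d = 0 signals a Pareto-critical point

# Example: gradients of two objectives at the current point.
grads = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
print(common_descent_direction(grads))        # roughly [-1, -1]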

16.
Chance-constrained (probabilistically constrained) optimization problems are an important class of stochastic programs, with wide applications in finance, management, and engineering planning. Chance-constrained optimization has received considerable attention in recent years, and substantial progress has been made in applied modelling, theory, and algorithms. This paper surveys and summarizes the main methods and ideas for handling probabilistic constraints, including convex inner-approximation methods, scenario approximation methods, DC (difference-of-convex) methods, and integer programming methods, and discusses prospects for future research on chance-constrained optimization.

17.
Projected gradient methods for linearly constrained problems
The aim of this paper is to study the convergence properties of the gradient projection method and to apply these results to algorithms for linearly constrained problems. The main convergence result is obtained by defining a projected gradient, and proving that the gradient projection method forces the sequence of projected gradients to zero. A consequence of this result is that if the gradient projection method converges to a nondegenerate point of a linearly constrained problem, then the active and binding constraints are identified in a finite number of iterations. As an application of our theory, we develop quadratic programming algorithms that iteratively explore a subspace defined by the active constraints. These algorithms are able to drop and add many constraints from the active set, and can either compute an accurate minimizer by a direct method, or an approximate minimizer by an iterative method of the conjugate gradient type. Thus, these algorithms are attractive for large scale problems. We show that it is possible to develop a finite terminating quadratic programming algorithm without non-degeneracy assumptions. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
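For orientation, here is a minimal Python sketch of the gradient projection iteration x_{k+1} = P(x_k - alpha * grad f(x_k)) with a fixed step length (the paper analyses far more general step-size rules); for general linear constraints the projection P is itself a small quadratic program, so the runnable example below uses simple bounds, where P reduces to a componentwise clip. The function name projected_gradient is ours.

import numpy as np

def projected_gradient(grad_f, project, x0, step=1e-2, tol=1e-8, max_iter=1000):
    # Fixed-step gradient projection: x_{k+1} = P(x_k - step * grad_f(x_k)).
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        x_new = project(x - step * grad_f(x))
        # (x - x_new)/step is the "projected gradient" whose convergence to
        # zero drives the theory described above.
        if np.linalg.norm(x - x_new) / step < tol:
            return x_new
        x = x_new
    return x

# Example with simple bounds 0 <= x <= 1, where the projection is a clip.
l, u = np.zeros(2), np.ones(2)
proj_box = lambda z: np.clip(z, l, u)
grad = lambda x: 2 * (x - np.array([2.0, -1.0]))   # f(x) = ||x - (2, -1)||^2
print(projected_gradient(grad, proj_box, x0=[0.5, 0.5]))   # -> about [1, 0]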

18.
This note serves two purposes. Firstly, we construct a counterexample to show that the statement on the convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex optimization problems in a highly influential paper by Boyd et al. (Found Trends Mach Learn 3(1):1–122, 2011) can be false if no prior condition on the existence of solutions to all the subproblems involved is assumed to hold. Secondly, we present fairly mild conditions to guarantee the existence of solutions to all the subproblems of the ADMM and provide a rigorous convergence analysis of the ADMM with a computationally more attractive large step-length that can even exceed the practically much preferred golden ratio of \((1+\sqrt{5})/2\).
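For orientation, here is a sketch of standard scaled-form ADMM on one concrete linearly constrained convex problem of the type covered by the Boyd et al. framework, the lasso min 0.5||Ax - b||^2 + lam*||z||_1 subject to x - z = 0, with unit (not enlarged) step length; in this particular instance the x-subproblem is a strongly convex quadratic and the z-subproblem is an l1 proximal step, so both subproblems always have solutions, which is exactly the kind of condition the note shows cannot be taken for granted in general. The function names are ours.

import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    # Scaled-form ADMM for  min 0.5||Ax - b||^2 + lam||z||_1  s.t.  x - z = 0.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # x-subproblem (always solvable)
        z = soft_threshold(x + u, lam / rho)            # z-subproblem (l1 prox)
        u = u + x - z                                   # scaled dual update
    return z

# Small synthetic example.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))   # sparse estimate close to x_true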

19.
Based on the general law governing the maximum velocity of a rocket, this paper improves the mathematical model commonly used in earlier work on the cost of a rocket launch, namely the minimum-propellant scheme. The cost problem of tandem (serially staged) multistage rockets is studied in detail for various cases, and numerical examples verify the effectiveness of the new cost-calculation model obtained.

20.
Most papers concerning nonlinear programming problems with linear constraints assume linear independence of the gradients of the active constraints at any feasible point. In this paper we remove this assumption, give an algorithm, and prove its convergence. Also, under appropriate assumptions on the objective function, including one which could be viewed as an extension of the strict complementary slackness condition at the optimal solution, we prove that the rate of convergence of the algorithm is superlinear.
