Similar Literature
1.
Multiobjective optimization has a large number of real-life applications. Motivated by this, we present a new method for solving multiobjective optimization problems with both linear constraints and bound constraints on the variables. The method extends, to the multiobjective setting, the classical reduced gradient method for scalar-valued optimization. The proposed algorithm generates a feasible descent direction by solving an appropriate quadratic subproblem, without using any scalarization approach. We prove that the sequence generated by the algorithm converges to Pareto-critical points of the problem. We also present numerical results to show the efficiency of the proposed method.
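As an illustration of the kind of quadratic direction-finding subproblem involved, the sketch below computes a common descent direction for several objectives by solving min_{d,t} t + 0.5*||d||^2 subject to grad f_i(x)^T d <= t. This is a generic multiobjective steepest-descent subproblem, not the paper's reduced-gradient formulation, and the two objectives are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(grads):
    """Solve min_{d,t} t + 0.5*||d||^2  s.t.  g_i^T d <= t for each objective
    gradient g_i.  The optimal d is a descent direction for every objective
    unless the point is Pareto-critical (then d = 0)."""
    m, n = grads.shape
    z0 = np.zeros(n + 1)                      # z = (d, t)
    obj = lambda z: z[n] + 0.5 * z[:n] @ z[:n]
    cons = [{"type": "ineq", "fun": lambda z, g=g: z[n] - g @ z[:n]}
            for g in grads]
    res = minimize(obj, z0, constraints=cons, method="SLSQP")
    return res.x[:n]

# Two hypothetical objectives: f1(x) = x1^2 + x2^2, f2(x) = (x1-1)^2 + x2^2
x = np.array([0.5, 1.0])
grads = np.array([2 * x, 2 * (x - np.array([1.0, 0.0]))])
print(common_descent_direction(grads))   # descent direction for both objectives
```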

2.
Z. Akbari, Optimization, 2017, 66(9): 1519-1529
In this paper, we present a nonsmooth trust region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic approximation of the objective function is constructed. The null-space technique is applied to handle the constraints of the quadratic subproblem. The CG-Steihaug method is then applied to solve the resulting quadratic model subject only to the trust region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and numerical results are reported.
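The CG-Steihaug step mentioned above approximately minimizes the quadratic model g^T p + 0.5*p^T B p within the trust region ||p|| <= delta; a minimal sketch of the standard method (not the paper's null-space variant):

```python
import numpy as np

def _boundary_step(p, d, delta):
    """Positive tau with ||p + tau*d|| = delta."""
    a, b, c = d @ d, 2 * p @ d, p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def cg_steihaug(B, g, delta, tol=1e-8, max_iter=100):
    """Approximately minimize g^T p + 0.5 p^T B p subject to ||p|| <= delta.
    Stops at the trust-region boundary on negative curvature or overshoot."""
    p = np.zeros(g.size)
    r, d = g.copy(), -g.copy()
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                                 # negative curvature: go to boundary
            return p + _boundary_step(p, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(p + alpha * d) >= delta:   # step would leave the region
            return p + _boundary_step(p, d, delta) * d
        p = p + alpha * d
        r_new = r + alpha * Bd                       # residual = model gradient at p
        if np.linalg.norm(r_new) < tol:
            return p
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

B = np.array([[4.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])
print(cg_steihaug(B, g, delta=0.5))
```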

3.
An algorithm for solving linearly constrained optimization problems is proposed. The search direction is computed by a bundle principle and the constraints are treated through an active set strategy. Difficulties that arise when the objective function is nonsmooth require a clever choice of the constraint to relax. A certain nondegeneracy assumption is necessary to obtain convergence. Most of this research was performed when the author was with I.N.R.I.A. (Domaine de Voluceau-Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France). This research was supported in part by the National Science Foundation, Grants No. DMC-84-51515 and OIR-85-00108.

4.
An algorithm for the minimization of functions of many variables, subject possibly to linear constraints on the variables, is described. A subproblem is solved in which a quadratic approximation is made to the objective function and minimized over a region in which the approximation is valid. A strategy for deciding when this region should be expanded or contracted is given. The quadratic approximation involves estimating the Hessian of the objective function by a matrix which is updated at each iteration by a formula recently reported by Powell [6]. This formula enables convergence of the algorithm from any feasible point to be proved. Use of such an approximation, as against using exact second derivatives, also enables a reduction of about 60% to be made in the number of operations to solve the subproblem. Numerical evidence is reported showing that the algorithm is efficient in the number of function evaluations required to solve well-known test problems. This paper was presented at the 7th International Mathematical Programming Symposium, 1970, The Hague, The Netherlands.
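In modern terms, the expand/contract strategy is the standard trust-region radius update based on the ratio of actual to predicted reduction; a minimal sketch with conventional threshold values, which are assumptions rather than the paper's own constants:

```python
def trust_region_update(f_old, f_new, pred_decrease, step_norm, delta,
                        eta1=0.25, eta2=0.75):
    """Decide whether to accept the step and how to resize the region in
    which the quadratic model is trusted.  pred_decrease = m(0) - m(s) is
    the reduction predicted by the model."""
    rho = (f_old - f_new) / pred_decrease   # actual vs. predicted reduction
    accept = rho > eta1
    if rho < eta1:                          # model poor: shrink the region
        delta = 0.5 * step_norm
    elif rho > eta2 and step_norm >= 0.99 * delta:
        delta = 2.0 * delta                 # model good, step at boundary: expand
    return accept, delta

print(trust_region_update(10.0, 9.0, 1.2, 0.5, 0.5))  # -> (True, 1.0)
```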

5.
6.
In this paper a general constrained minimization problem is studied. The minima are determined by searching for the asymptotic values of the solutions of a suitable system of ordinary differential equations. For this system, if the initial point is feasible, then any trajectory remains inside the set of constraints and tends towards a set of critical points. Each critical point that is not a relative minimum is unstable. For one-step numerical integration formulas, an estimate of the integration step is given so that the above-mentioned qualitative properties of the system of ordinary differential equations are preserved.
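A toy sketch of the approach under assumptions the abstract does not state: box constraints, a projected gradient flow dx/dt = -grad f(x), explicit Euler integration, and a step chosen small relative to 1/L for L a Lipschitz constant of the gradient, so the trajectory stays feasible and descends:

```python
import numpy as np

def euler_gradient_flow(grad, x0, lo, hi, h, steps):
    """Explicit Euler on dx/dt = -grad f(x), re-projected onto the box
    [lo, hi] so the trajectory stays feasible.  The step h should be small
    relative to 1/L, where L is a Lipschitz constant of grad f, to keep
    the descent property of the continuous flow."""
    x = x0.copy()
    for _ in range(steps):
        x = np.clip(x - h * grad(x), lo, hi)
    return x

# f(x) = ||x - c||^2 minimized over the box [0,1]^2; c lies outside the box
c = np.array([1.5, -0.5])
grad = lambda x: 2 * (x - c)                  # Lipschitz constant L = 2
x_star = euler_gradient_flow(grad, np.array([0.2, 0.8]), 0.0, 1.0, h=0.1, steps=200)
print(x_star)   # approaches the projection of c onto the box: [1.0, 0.0]
```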

7.
In this paper, we provide a new generalized gradient projection algorithm for nonlinear programming problems with linear constraints. The algorithm has a simple structure and is practical and stable. Under weaker assumptions, we prove the global convergence of the algorithm.
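For intuition only (the paper's generalized scheme is more elaborate), the classical gradient projection direction for linear equality constraints Ax = b projects the negative gradient onto the null space of A:

```python
import numpy as np

def projected_gradient_direction(A, g):
    """Project -g onto the null space of A:
        d = -(I - A^T (A A^T)^{-1} A) g.
    Moving along d keeps Ax = b satisfied exactly, since the constraints
    are linear."""
    return -(g - A.T @ np.linalg.solve(A @ A.T, A @ g))

A = np.array([[1.0, 1.0, 1.0]])        # constraint: x1 + x2 + x3 = const
g = np.array([3.0, 1.0, 2.0])          # gradient at the current point
d = projected_gradient_direction(A, g)
print(d, A @ d)                        # A @ d is (numerically) zero
```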

8.
A parallel stochastic algorithm is presented for solving the linearly constrained concave global minimization problem. The algorithm is a multistart method and makes use of a Bayesian stopping rule to identify the global minimum with high probability. Computational results are presented for more than 200 problems on a Cray X-MP EA/464 supercomputer.
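One standard Bayesian stopping rule for multistart, due to Boender and Rinnooy Kan (the paper may use a variant), estimates the total number of local minima from the number w found after N searches and stops when the estimate is within one half of w; a sketch on a hypothetical smooth multimodal objective rather than a concave one:

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, max_starts=200, seed=0):
    """Multistart local search with a Boender-Rinnooy Kan stopping rule:
    after N starts yielding w distinct local minima, the posterior estimate
    of the total number of minima is w*(N-1)/(N-w-2); stop once it is
    within 0.5 of w."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    minima = []
    for N in range(1, max_starts + 1):
        x0 = rng.uniform(lo, hi, size=len(lo))
        res = minimize(f, x0, method="L-BFGS-B", bounds=list(zip(lo, hi)))
        if not any(np.linalg.norm(res.x - m) < 1e-4 for m in minima):
            minima.append(res.x)
        w = len(minima)
        if N > w + 2 and w * (N - 1) / (N - w - 2) < w + 0.5:
            break
    return min(minima, key=f), len(minima)

# Hypothetical objective with four minima at (+-1, +-1) on [-2, 2]^2
f = lambda x: (x[0]**2 - 1)**2 + (x[1]**2 - 1)**2
best, w = multistart(f, (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
print(best, w)
```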

9.
Hirabayashi, R., Jongen, H. Th., Shida, M., Mathematical Programming, 1994, 66(1-3): 351-360
We deal with finite-dimensional differentiable optimization problems under linear constraints. Stability of stationary solutions under restricted perturbations of the constraints is characterized. The restriction on the constraint perturbations is given by means of a certain rank condition; in particular, right-hand-side perturbations are allowed.

10.
Based on the NEWUOA algorithm, a new derivative-free algorithm named LCOBYQA is developed. The main aim of the algorithm is to find a minimizer $x^{*} \in \mathbb{R}^{n}$ of a nonlinear function whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on a model of the given function constructed from a set of interpolation points. LCOBYQA is iterative: at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions and leaves some freedom in the model. The remaining freedom is resolved by minimizing the Frobenius norm of the change to the second derivative matrix of the model. A new iterate is then obtained by minimizing the model over a trust region using the conjugate gradient method. At times the new iterate is instead found from a model-improvement iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.
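To illustrate only the interpolation-model idea, and nothing like LCOBYQA's minimum-Frobenius-norm update, the sketch below fits a fully determined quadratic model through (n+1)(n+2)/2 sample points by solving the interpolation conditions directly:

```python
import numpy as np

def fit_quadratic_model(points, values):
    """Fit m(x) = c + g.x + 0.5 x^T H x (H symmetric) through the given
    points by solving the interpolation conditions.  Needs exactly
    (n+1)(n+2)/2 well-poised points; LCOBYQA instead uses fewer points and
    fixes the remaining freedom by a minimum-Frobenius-norm condition."""
    n = points.shape[1]
    def basis(x):
        quad = [0.5 * x[i] * x[j] * (2.0 if i != j else 1.0)
                for i in range(n) for j in range(i, n)]
        return np.concatenate(([1.0], x, quad))
    M = np.array([basis(x) for x in points])
    return np.linalg.solve(M, values)   # [c, g (n entries), upper triangle of H]

# Sample f(x) = 1 + x1 + x1^2 + x1*x2 + 2*x2^2 at 6 poised points in R^2
f = lambda x: 1 + x[0] + x[0]**2 + x[0]*x[1] + 2*x[1]**2
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2]], dtype=float)
vals = np.array([f(p) for p in pts])
print(fit_quadratic_model(pts, vals))   # -> [1, 1, 0, 2, 1, 4]
```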

11.
Projected gradient methods for linearly constrained problems
The aim of this paper is to study the convergence properties of the gradient projection method and to apply these results to algorithms for linearly constrained problems. The main convergence result is obtained by defining a projected gradient and proving that the gradient projection method forces the sequence of projected gradients to zero. A consequence of this result is that if the gradient projection method converges to a nondegenerate point of a linearly constrained problem, then the active and binding constraints are identified in a finite number of iterations. As an application of our theory, we develop quadratic programming algorithms that iteratively explore a subspace defined by the active constraints. These algorithms are able to drop and add many constraints from the active set, and can either compute an accurate minimizer by a direct method or an approximate minimizer by an iterative method of the conjugate gradient type. Thus, these algorithms are attractive for large-scale problems. We show that it is possible to develop a finitely terminating quadratic programming algorithm without nondegeneracy assumptions. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
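A minimal sketch of the bound-constrained case, with the projected gradient ||P(x - grad f(x)) - x|| used as the stationarity measure; the fixed step length is an illustrative simplification, not the paper's method:

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, alpha=0.1, tol=1e-8, max_iter=1000):
    """Gradient projection for min f(x) s.t. lo <= x <= hi.  The stopping
    test uses the projected gradient ||P(x - grad f(x)) - x||, which
    vanishes exactly at stationary points of the constrained problem."""
    x = np.clip(x0, lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        pg = np.clip(x - g, lo, hi) - x        # projected gradient measure
        if np.linalg.norm(pg) < tol:
            break
        x = np.clip(x - alpha * g, lo, hi)     # fixed-step projection step
    active = (x <= lo) | (x >= hi)             # identified active bounds
    return x, active

grad = lambda x: 2 * (x - np.array([2.0, -1.0, 0.3]))
x, active = projected_gradient(grad, np.zeros(3), 0.0, 1.0)
print(x, active)   # x -> [1.0, 0.0, 0.3]; the first two bounds are active
```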

12.
Based on the maximum entropy principle and the idea of a penalty function, an evaluation function is derived for solving multiobjective optimization problems with equality constraints. Combining this with interval analysis, we define a generalized Krawczyk operator, design an interval iteration with constraint functions and new region-deletion test rules, present an interval algorithm for equality-constrained multiobjective optimization problems, and prove the relevant properties. A theoretical analysis and numerical results indicate that the algorithm is effective and reliable.

13.
In this study, we propose an algorithm for solving a minimax problem over a polyhedral set defined in terms of a system of linear inequalities. At each iteration a direction is found by solving a quadratic programming problem, and a suitable step size along that direction is then taken through an extension of Armijo's approximate line search technique. We show that each accumulation point is a Kuhn-Tucker solution and give a condition that guarantees convergence of the whole sequence of iterates. Through the use of an exact penalty function, the algorithm can be used for solving constrained nonlinear programs. In this case, our algorithm resembles that of Han, but differs from it in both the direction-finding and the line search steps.
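A sketch of an Armijo-type backtracking rule on F(x) = max_i f_i(x), with the directional decrease estimated from the gradients of the active functions; this is an illustrative simplification of the paper's step-size rule:

```python
import numpy as np

def armijo_minimax(fs, grads, x, d, beta=0.5, sigma=1e-4, t0=1.0):
    """Backtracking line search for F(x) = max_i f_i(x) along direction d.
    Accept the first t = t0 * beta^k with
        F(x + t*d) <= F(x) + sigma * t * D,
    where D = max over active i of grad f_i(x)^T d estimates F'(x; d)."""
    F = lambda z: max(f(z) for f in fs)
    Fx = F(x)
    active = [i for i, f in enumerate(fs) if f(x) > Fx - 1e-10]
    D = max(grads[i](x) @ d for i in active)   # directional derivative bound
    t = t0
    while F(x + t * d) > Fx + sigma * t * D:
        t *= beta
        if t < 1e-12:
            break
    return t

fs = [lambda x: (x[0] - 1)**2 + x[1]**2, lambda x: (x[0] + 1)**2 + x[1]**2]
grads = [lambda x: np.array([2 * (x[0] - 1), 2 * x[1]]),
         lambda x: np.array([2 * (x[0] + 1), 2 * x[1]])]
x = np.array([0.5, 0.5])
d = -grads[1](x)          # descent direction for the active function f_2
print(armijo_minimax(fs, grads, x, d))
```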

14.
Most papers concerning nonlinear programming problems with linear constraints assume linear independence of the gradients of the active constraints at any feasible point. In this paper we remove this assumption, give an algorithm, and prove its convergence. Also, under appropriate assumptions on the objective function, including one which could be viewed as an extension of the strict complementary slackness condition at the optimal solution, we prove that the rate of convergence of the algorithm is superlinear.

15.
A method for linearly constrained optimization which modifies and generalizes recent box-constrained optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and commented on. Software supporting this paper is available through the Tango Project web page.
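The Spectral Projected Gradient step the method relaxes combines a Barzilai-Borwein step length with projection onto the feasible set; a minimal box-constrained sketch, without the safeguarding line search and the face iterations of the actual method:

```python
import numpy as np

def spg(grad, x0, lo, hi, max_iter=500, tol=1e-8):
    """Spectral projected gradient: the step length is the Barzilai-Borwein
    quotient s^T s / s^T y, which makes plain projected gradient steps
    behave far better in practice than a fixed step."""
    proj = lambda z: np.clip(z, lo, hi)
    x = proj(x0)
    g = grad(x)
    lam = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(proj(x - g) - x) < tol:   # projected-gradient test
            break
        x_new = proj(x - lam * g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = (s @ s) / sy if sy > 1e-12 else 1.0   # spectral step length
        x, g = x_new, g_new
    return x

grad = lambda x: np.array([[10.0, 0.0], [0.0, 1.0]]) @ x - np.array([3.0, 2.0])
print(spg(grad, np.array([5.0, 5.0]), 0.0, 4.0))    # -> [0.3, 2.0]
```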

16.
A new superlinearly convergent gradient projection type algorithm for linearly constrained problems is presented. This algorithm is more practical than the algorithm presented in [5].

17.
This paper describes a direct search method for a class of linearly constrained optimization problems. We find that such a problem can be treated as an unconstrained optimization problem, and, since the algorithm then works with variables of reduced dimension, establishing convergence to KKT points is simplified to some extent. Convergence is shown under mild conditions which allow successive frames to be rotated, translated, and scaled relative to one another.

18.
We propose a first-order interior-point method for linearly constrained smooth optimization that unifies and extends the first-order affine-scaling method and the replicator dynamics method for standard quadratic programming. Global convergence and, in the case of quadratic programs, (sub)linear convergence rate and iterate convergence results are derived. Numerical experience on simplex-constrained problems with 1000 variables is reported.
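For the standard quadratic programming case (maximizing x^T A x over the simplex), the replicator dynamics iteration has a simple multiplicative form; a sketch assuming a nonnegative symmetric A:

```python
import numpy as np

def replicator(A, x0, iters=200):
    """Discrete replicator dynamics for max x^T A x over the simplex
    {x >= 0, sum x = 1}: x_i <- x_i * (Ax)_i / (x^T A x).  The simplex is
    invariant and x^T A x is nondecreasing for nonnegative symmetric A."""
    x = x0 / x0.sum()
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)
    return x

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])     # adjacency matrix of a triangle
x = replicator(A, np.array([0.5, 0.3, 0.2]))
print(x, x @ A @ x)                 # -> uniform [1/3, 1/3, 1/3], value 2/3
```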

19.
Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), a linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar's notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QPs arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernels.
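Because of the single equality constraint, each coordinate block must contain at least two variables; the sketch below performs one SMO-style two-coordinate update on an SVM-type QP, with a greedy best-pair selection standing in for the paper's Gauss-Southwell rule:

```python
import numpy as np

def t_range(a_k, c_k, C):
    """Feasible range of t for the update a_k + t*c_k in [0, C], c_k = +-1."""
    return (-a_k, C - a_k) if c_k > 0 else (a_k - C, a_k)

def smo_step(Q, p, y, alpha, C):
    """One two-coordinate descent step for
        min 0.5 a^T Q a + p^T a   s.t. y^T a = 0, 0 <= a <= C, y in {-1,1}^n.
    The direction d = y_i e_i - y_j e_j keeps y^T a constant; for each pair
    the exact minimizer of the 1-D quadratic is clipped to the box, and the
    pair giving the largest decrease is taken."""
    n = len(alpha)
    g = Q @ alpha + p                                        # objective gradient
    best_dec, best = 0.0, None
    for i in range(n):
        for j in range(i + 1, n):
            slope = y[i] * g[i] - y[j] * g[j]                # phi'(0) along d
            curv = Q[i, i] - 2 * y[i] * y[j] * Q[i, j] + Q[j, j]
            lo_i, hi_i = t_range(alpha[i], y[i], C)
            lo_j, hi_j = t_range(alpha[j], -y[j], C)
            lo, hi = max(lo_i, lo_j), min(hi_i, hi_j)
            t = -slope / curv if curv > 1e-12 else (hi if slope < 0 else lo)
            t = min(max(t, lo), hi)                          # clip to the box
            dec = -(slope * t + 0.5 * curv * t * t)          # phi(0) - phi(t)
            if dec > best_dec:
                best_dec, best = dec, (i, j, t)
    if best is None:
        return alpha, 0.0                                    # KKT point reached
    i, j, t = best
    alpha = alpha.copy()
    alpha[i] += t * y[i]
    alpha[j] -= t * y[j]
    return alpha, best_dec

Q = np.array([[2.0, 0.5], [0.5, 1.0]]); p = -np.ones(2)
y = np.array([1.0, -1.0]); alpha = np.zeros(2)
print(smo_step(Q, p, y, alpha, C=1.0))   # -> ([0.5, 0.5], 0.5)
```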

20.
Reduced Hessian methods have been shown to be successful for equality constrained problems. However, there are few results on reduced Hessian methods for generally constrained problems. In this paper we propose a method for general constrained problems, based on Byrd and Schnabel's basis-independent algorithm. It can be regarded as a smooth extension of the standard reduced Hessian method. Research supported in part by NSF, AFOSR and ONR through NSF grant DMS-8920550.
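The reduced Hessian idea for the equality-constrained case that such methods build on: with Z a basis for the null space of the constraint Jacobian, Newton steps are computed from the reduced system Z^T H Z; a sketch for linear equality constraints (the paper's general constrained setting is more involved):

```python
import numpy as np
from scipy.linalg import null_space

def reduced_newton_step(H, g, A):
    """Null-space (reduced Hessian) Newton step for min f subject to Ax = b,
    taken at a feasible x with gradient g and Hessian H: solve
    (Z^T H Z) u = -Z^T g and step along d = Z u, which satisfies A d = 0."""
    Z = null_space(A)                            # orthonormal basis of ker(A)
    u = np.linalg.solve(Z.T @ H @ Z, -Z.T @ g)
    return Z @ u

# min 0.5*||x||^2 - c^T x  subject to  x1 + x2 + x3 = 1
H = np.eye(3)
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
x = np.array([1.0, 0.0, 0.0])                    # feasible starting point
d = reduced_newton_step(H, x - c, A)             # gradient is x - c
print(x + d, A @ (x + d))                        # KKT point; constraint still = 1
```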

