20 similar documents found (search time: 78 ms)
1.
Uriel G. Rothblum 《Mathematical Programming》1978,15(1):77-86
The purpose of this paper is to establish sufficient conditions for the existence of solutions to mathematical programs where the variables of the solution satisfy given proportions. These conditions rely on convergence properties of powers of nonnegative matrices when these powers form a bounded sequence. We assume that if an arbitrary vector x is premultiplied by elements of this sequence, the limit of the sequence (which might be a Cesàro (C, 1) limit) gives an improvement of the objective. This research was supported by NSF Grants ENG 76-15599 and ENG 76-12266 and ONR Contract N00014-75-C-0493.
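The Cesàro (C, 1) limit mentioned above can be illustrated with a toy example: for a permutation matrix the ordinary powers oscillate forever, yet their running averages converge. This sketch is only an illustration of that convergence property, not the paper's method; the matrix is an assumption chosen for clarity.

```python
# Cesàro (C, 1) averaging of the powers of a nonnegative matrix.
# For the 2x2 permutation matrix A below, the powers A^k alternate between
# A and the identity and never converge, but (A^0 + ... + A^(n-1)) / n does.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cesaro_average(A, n_terms):
    n = len(A)
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0 = I
    total = [[0.0] * n for _ in range(n)]
    for _ in range(n_terms):
        for i in range(n):
            for j in range(n):
                total[i][j] += power[i][j]
        power = mat_mul(power, A)
    return [[total[i][j] / n_terms for j in range(n)] for i in range(n)]

A = [[0.0, 1.0], [1.0, 0.0]]       # powers oscillate: A, I, A, I, ...
avg = cesaro_average(A, 1000)      # Cesàro limit: [[0.5, 0.5], [0.5, 0.5]]
```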
2.
Gabriele Eichfelder 《Computational Optimization and Applications》2009,44(2):249-273
In this paper several parameter dependent scalarization approaches for solving nonlinear multi-objective optimization problems are discussed. It is shown that they can be considered as special cases of a scalarization problem by Pascoletti and Serafini (or a modification of this problem). Based on these connections, theoretical results as well as a new algorithm for adaptively controlling the choice of the parameters for generating almost equidistant approximations of the efficient set, lately developed for the Pascoletti-Serafini scalarization, can be applied to these problems. For instance, for such well-known scalarizations as the ε-constraint or the normal boundary intersection problem, algorithms for adaptively generating high-quality approximations are derived.
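A minimal sketch of the ε-constraint scalarization named above: minimize one objective subject to an upper bound ε on the other, and vary ε to trace the efficient set. The bi-objective toy problem and the grid search standing in for a real NLP solver are illustrative assumptions, not the paper's algorithm.

```python
# epsilon-constraint scalarization on the bi-objective toy problem
# min (f1, f2) with f1(x) = x^2, f2(x) = (x - 2)^2 over x in [0, 2].
# Each choice of eps yields one efficient point.

def f1(x):
    return x ** 2

def f2(x):
    return (x - 2.0) ** 2

def eps_constraint(eps, grid):
    """Minimize f1 over the grid subject to f2(x) <= eps."""
    feasible = [x for x in grid if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

grid = [i / 1000.0 for i in range(2001)]           # x in [0, 2]
front = [(f1(x), f2(x)) for eps in (0.25, 1.0, 2.25)
         for x in [eps_constraint(eps, grid)]]      # three efficient points
```

Tightening ε pushes the solution along the Pareto front, which is exactly the parameter dependence the adaptive algorithms above control.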
3.
Fang Lu 《Applicable analysis》2013,92(8):1567-1586
In the context of Euclidean spaces, we present an extension of the Newton-like method for solving vector optimization problems, with respect to the partial orders induced by a pointed, closed and convex cone with a nonempty interior. We study both exact and inexact versions of the Newton-like method. Under reasonable hypotheses, we prove stationarity of accumulation points of the sequences produced by Newton-like methods. Moreover, assuming strict cone-convexity of the objective map to the vector optimization problem, we establish convergence of the sequences to an efficient point whenever the initial point is in a compact level set.
4.
This paper presents a numerical method for solving quantile optimization problems, i.e. stochastic programming problems in which the quantile of the distribution of an objective function is the criterion to be optimized.
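The quantile criterion can be sketched with a Monte Carlo sample: estimate the α-quantile of a random loss for each candidate decision and pick the minimizer. The toy loss, sample size and grid are assumptions for illustration; the paper's actual numerical method is not reproduced here.

```python
import random

def empirical_quantile(values, alpha):
    """Order statistic estimate of the alpha-quantile of a sample."""
    ordered = sorted(values)
    return ordered[int(alpha * (len(ordered) - 1))]

def quantile_objective(x, noise, alpha=0.9):
    """0.9-quantile of the random loss (x - xi)^2 under samples xi."""
    return empirical_quantile([(x - xi) ** 2 for xi in noise], alpha)

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(4000)]    # common random numbers
candidates = [i / 10.0 for i in range(-10, 11)]          # decisions x in [-1, 1]
best_x = min(candidates, key=lambda x: quantile_objective(x, noise))
```

Using one common sample for all candidates keeps the comparison between decisions stable even though each quantile estimate is noisy.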
5.
Self-adaptive velocity particle swarm optimization for solving constrained optimization problems
Particle swarm optimization (PSO) was originally developed as an unconstrained optimization technique and therefore lacks an explicit mechanism for handling constraints. When solving constrained optimization problems (COPs) with PSO, existing research mainly focuses on how to handle constraints, while the impact of constraints on the inherent search mechanism of PSO has scarcely been explored. Motivated by this fact, in this paper we investigate how to exploit the impact of constraints (or the knowledge about the feasible region) to improve the optimization ability of the particles. Based on these investigations, we present a modified PSO, called self-adaptive velocity particle swarm optimization (SAVPSO), for solving COPs. To handle constraints, SAVPSO adopts our recently proposed dynamic-objective constraint-handling method (DOCHM), which is essentially a constituent part of the inherent search mechanism of the integrated SAVPSO, i.e., DOCHM + SAVPSO. The performance of the integrated SAVPSO is tested on a well-known benchmark suite, and the experimental results show that appropriately utilizing the knowledge about the feasible region can substantially improve the performance of the underlying algorithm in solving COPs.
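A minimal constrained-PSO sketch: standard velocity/position updates plus a feasibility-based comparison rule. The comparison rule here is Deb's well-known feasibility rule used as a generic stand-in; the paper's DOCHM and self-adaptive velocity update are not reproduced. Problem, seed and parameters are illustrative assumptions.

```python
import random

random.seed(1)

def f(p):                       # objective: minimize x^2 + y^2
    return p[0] ** 2 + p[1] ** 2

def violation(p):               # constraint x + y >= 1; 0 means feasible
    return max(0.0, 1.0 - (p[0] + p[1]))

def better(p, q):
    """Deb-style feasibility rule: feasible beats infeasible, feasible
    points compare by objective, infeasible points by violation."""
    vp, vq = violation(p), violation(q)
    if vp == vq == 0.0:
        return f(p) < f(q)
    return vp < vq

n, dim, iters = 30, 2, 200
pos = [[random.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: (violation(p), f(p)))

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if better(pos[i], pbest[i]):
            pbest[i] = pos[i][:]
            if better(pbest[i], gbest):
                gbest = pbest[i][:]
```

The constrained optimum of this toy COP is (0.5, 0.5) with objective value 0.5, on the constraint boundary.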
6.
Cosimo Resina 《European Journal of Operational Research》1985,21(1):93-100
In this paper a general problem of constrained minimization is studied. The minima are determined by searching for the asymptotic values of the solutions of a suitable system of ordinary differential equations. For this system, if the initial point is feasible, then any trajectory always remains inside the set of constraints and tends towards a set of critical points. Each critical point that is not a relative minimum is unstable. For one-step numerical integration formulas, an estimate of the integration step is given so that the above-mentioned qualitative properties of the system of ordinary differential equations are preserved.
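The trajectory idea can be sketched with a projected gradient flow integrated by explicit Euler: the projection keeps every iterate inside the constraint set, and the trajectory settles at a constrained critical point. The specific problem, box constraint and step size are illustrative assumptions; the paper's system and step-size estimate are not reproduced.

```python
# Explicit Euler integration of the gradient flow x' = -grad f(x),
# kept inside a box constraint by projection at every step.

def grad_f(p):                          # f(x, y) = (x - 1)^2 + (y - 2)^2
    return [2.0 * (p[0] - 1.0), 2.0 * (p[1] - 2.0)]

def project(p, lo=0.0, hi=1.5):         # keep the trajectory inside [0, 1.5]^2
    return [min(max(v, lo), hi) for v in p]

def integrate(p, step=0.05, n_steps=500):
    for _ in range(n_steps):
        g = grad_f(p)
        p = project([p[0] - step * g[0], p[1] - step * g[1]])
    return p

x_star = integrate([0.2, 0.1])          # constrained minimum is (1.0, 1.5)
```

The unconstrained minimum (1, 2) lies outside the box, so the trajectory comes to rest on the boundary at (1, 1.5).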
7.
A cooperative strategy for solving dynamic optimization problems
Optimization in dynamic environments is a very active and important area which tackles problems that change with time (as most real-world problems do). In this paper we present a new centralized cooperative strategy based on trajectory methods (tabu search) for solving Dynamic Optimization Problems (DOPs). Two additional methods are included for comparison purposes. The first is a Particle Swarm Optimization variant with multiple swarms and different types of particles, with implicit cooperation within each swarm and competition among different swarms. The second is an explicit decentralized cooperation scheme where multiple agents cooperate to improve a grid of solutions. The main goals are: firstly, to assess the possibilities of trajectory methods in the context of DOPs, where population-based methods have traditionally been the recommended option; and secondly, to draw attention to explicitly including cooperation schemes in methods for DOPs. The results show that the proposed strategy consistently outperforms the two other methods.
8.
R. Fletcher 《Mathematical Programming》1972,2(1):133-165
An algorithm for minimization of functions of many variables, subject possibly to linear constraints on the variables, is described. In it a subproblem is solved in which a quadratic approximation is made to the objective function and minimized over a region in which the approximation is valid. A strategy for deciding when this region should be expanded or contracted is given. The quadratic approximation involves estimating the hessian of the objective function by a matrix which is updated at each iteration by a formula recently reported by Powell [6]. This formula enables convergence of the algorithm from any feasible point to be proved. Use of such an approximation, as against using exact second derivatives, also enables a reduction of about 60% to be made in the number of operations to solve the subproblem. Numerical evidence is reported showing that the algorithm is efficient in the number of function evaluations required to solve well-known test problems. This paper was presented at the 7th International Mathematical Programming Symposium 1970, The Hague, The Netherlands.
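The expand/contract strategy described above can be sketched in one dimension: minimize a quadratic model of the objective inside a trust radius, then grow or shrink the radius according to how well the model's predicted decrease matched the actual decrease. This is a generic trust-region sketch with an exact second derivative standing in for the Powell hessian update; test function and thresholds are illustrative assumptions.

```python
import math

def f(x):   return math.exp(x) - 2.0 * x        # minimizer at x = ln 2
def df(x):  return math.exp(x) - 2.0
def d2f(x): return math.exp(x)

def trust_region_1d(x, delta=1.0, n_iter=50):
    for _ in range(n_iter):
        g, h = df(x), d2f(x)
        p = -g / h                                # minimizer of the quadratic model
        p = max(-delta, min(delta, p))            # clip step to the trust region
        predicted = -(g * p + 0.5 * h * p * p)    # model decrease
        actual = f(x) - f(x + p)
        rho = actual / predicted if predicted > 0 else 0.0
        if rho < 0.25:
            delta *= 0.5                          # model unreliable: contract
        elif rho > 0.75 and abs(p) >= delta - 1e-12:
            delta *= 2.0                          # model good at the boundary: expand
        if rho > 0.0:                             # accept only improving steps
            x += p
    return x

x_min = trust_region_1d(0.0)
```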
9.
Mohammedi R. Abdel-Aziz 《Numerical Functional Analysis & Optimization》2013,34(3-4):319-336
An algorithm for solving the problem of minimizing a quadratic function subject to ellipsoidal constraints is introduced. This algorithm is based on the implicitly restarted Lanczos method to construct a basis for the Krylov subspace, in conjunction with a model trust region strategy to choose the step. The trial step is computed on the small dimensional subspace that lies inside the trust region. One of the main advantages of this algorithm is the way that the Krylov subspace is terminated. We introduce a termination condition that allows the gradient to be decreased on that subspace. A convergence theory for this algorithm is presented. It is shown that this algorithm is globally convergent and should cope quite well with large scale minimization problems. This theory is sufficiently general that it holds for any algorithm that projects the problem on a lower dimensional subspace.
10.
《Optimization》2012,61(2):249-263
New algorithms for solving unconstrained optimization problems are presented based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or quasi-Newton direction. The use of the latter directions allows one to improve the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments using some unconstrained test problems are reported. Also, the proposed algorithms are compared with some existing similar methods using the results of these experiments. This comparison demonstrates the efficiency of the proposed combined methods.
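The combination idea can be sketched as follows: take the Newton direction when the second derivative makes it a well-defined descent direction, fall back to the anti-gradient otherwise, and globalize with a backtracking (Armijo) line search. The switching rule, test function and constants are generic illustrative assumptions; the paper's precise algorithms are not reproduced.

```python
def f(x):   return 0.25 * x ** 4 - 0.5 * x ** 2   # minimizers at x = -1 and x = 1
def df(x):  return x ** 3 - x
def d2f(x): return 3.0 * x ** 2 - 1.0

def combined_descent(x, n_iter=100):
    for _ in range(n_iter):
        g = df(x)
        if abs(g) < 1e-12:
            break
        h = d2f(x)
        # Newton direction if the curvature is safely positive, else anti-gradient.
        d = -g / h if h > 1e-12 else -g
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g * d:   # Armijo backtracking
            t *= 0.5
        x += t * d
    return x

x_min = combined_descent(0.1)    # near 0 the hessian is negative: anti-gradient
                                 # steps escape, then Newton steps take over
```

Near x = 0 the second derivative is negative, so pure Newton would head for the wrong stationary point; the anti-gradient fallback is what restores global convergence, and the Newton steps recover the fast local rate.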
11.
This paper deals with the generalized Nash equilibrium problem (GNEP), i.e. a noncooperative game in which the strategy set of each player, as well as his payoff function, depends on the strategies of all players. We consider an equivalent optimization reformulation of GNEP using a regularized Nikaido–Isoda function so that solutions of GNEP coincide with global minima of the optimization problem. We then propose a derivative-free descent type method with inexact line search to solve the equivalent optimization problem and we prove that our algorithm is globally convergent. The convergence analysis is not based on conditions guaranteeing that every stationary point of the optimization problem is a solution of GNEP. Finally, we present the performance of our algorithm on some examples.
12.
E. G. Birgin L. F. Bueno N. Krejić J. M. Martínez 《Journal of Global Optimization》2011,51(4):715-742
In Low Order-Value Optimization (LOVO) problems the sum of the r smallest values of a finite sequence of q functions is involved as the objective to be minimized or as a constraint. The latter case is considered in the present paper. Portfolio optimization problems with a constraint on the admissible Value at Risk (VaR) can be modeled in terms of a LOVO problem with constraints given by low order-value functions. Different algorithms for the practical solution of this problem will be presented. Using these techniques, portfolio optimization problems with transaction costs will be solved.
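The low order-value function itself is simple to state in code: the sum of the r smallest of q function values at a point. This helper and the toy scenario functions are illustrative assumptions; the paper's constrained LOVO algorithms are not reproduced.

```python
# Low Order-Value (LOVO) function: sum of the r smallest of q values f_i(x).
# In the VaR interpretation, the q - r largest (worst) scenario losses
# are discarded.

def lovo(x, functions, r):
    """Sum of the r smallest values f_i(x) among the given functions."""
    return sum(sorted(f(x) for f in functions)[:r])

# q = 4 hypothetical scenario losses of a scalar decision x.
scenarios = [lambda x: (x - 1) ** 2,
             lambda x: (x - 2) ** 2,
             lambda x: (x + 3) ** 2,
             lambda x: x ** 2]

value = lovo(0.0, scenarios, r=2)   # two smallest of [1, 4, 9, 0] -> 0 + 1
```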
13.
Li-Yeh Chuang 《Applied mathematics and computation》2011,217(16):6900-6916
Chaotic catfish particle swarm optimization (C-CatfishPSO) is a novel optimization algorithm proposed in this paper. C-CatfishPSO introduces chaotic maps into catfish particle swarm optimization (CatfishPSO), which increases the search capability of CatfishPSO via the chaos approach. Simple CatfishPSO relies on the incorporation of catfish particles into particle swarm optimization (PSO). The introduced catfish particles improve the performance of PSO considerably. Unlike other ordinary particles, the catfish particles initialize a new search from extreme points of the search space when the gbest fitness value (global optimum at each iteration) has not changed for a certain number of consecutive iterations. This results in further opportunities to find better solutions for the swarm by guiding the entire swarm to promising new regions of the search space and accelerating the search. The introduced chaotic maps strengthen the solution quality of PSO and CatfishPSO significantly. The resulting improved PSO and CatfishPSO are called chaotic PSO (C-PSO) and chaotic CatfishPSO (C-CatfishPSO), respectively. PSO, C-PSO, CatfishPSO, C-CatfishPSO, as well as other advanced PSO procedures from the literature, were extensively compared on several benchmark test functions. Statistical analysis of the experimental results indicates that C-CatfishPSO performs better than PSO, C-PSO and CatfishPSO, and that it is also superior to advanced PSO methods from the literature.
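A chaotic map of the kind used in such algorithms can be as simple as the logistic map x_{k+1} = μ x_k (1 - x_k) with μ = 4, whose iterates wander over [0, 1] and can replace uniform random numbers in the velocity update. This sketches only the sequence generation; which maps and parameters C-CatfishPSO actually uses is not reproduced here.

```python
# Logistic map with mu = 4: a standard chaotic sequence on [0, 1].

def logistic_map(x0, mu=4.0, n=1000):
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

seq = logistic_map(0.7)   # deterministic, yet irregular enough to drive a search
```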
14.
《European Journal of Operational Research》1999,114(2):430-436
The two-dimensional layout problem is known to be NP-complete, and current research work is basically heuristic. In this paper, we mainly discuss methods for solving the layout problem for an artificial satellite module by means of graph theory and group theory. An algorithm for global optimization is also presented for the first time. The method given here can be extended to solve other types of layout problems.
15.
Organization of parallel calculations with the use of MPI functions in problems of discrete optimization is considered. The branch and bound method is applied to problems of integer linear and integer quadratic programming as well as to problems of set covering. The efficiency of the algorithms is analyzed on the basis of numerical experiments.
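A serial branch-and-bound sketch on a classic 0/1 knapsack instance shows the structure being parallelized: the subproblems in the queue are exactly what MPI worker processes would receive. The instance and the fractional-relaxation bound are standard textbook choices, not taken from the paper.

```python
# Branch and bound for 0/1 knapsack: maximize value within capacity.
# Items are pre-sorted by value/weight ratio so the fractional bound is valid.

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def bound(i, value, room):
    """Upper bound from the fractional (LP) relaxation over items i..end."""
    b = value
    for j in range(i, len(values)):
        if weights[j] <= room:
            room -= weights[j]
            b += values[j]
        else:
            b += values[j] * room / weights[j]
            break
    return b

best = 0
stack = [(0, 0, capacity)]              # subproblem: (next item, value, room)
while stack:
    i, value, room = stack.pop()
    if value > best:
        best = value                    # update incumbent
    if i == len(values) or bound(i, value, room) <= best:
        continue                        # prune: relaxation cannot beat incumbent
    stack.append((i + 1, value, room))  # branch: skip item i
    if weights[i] <= room:              # branch: take item i (if it fits)
        stack.append((i + 1, value + values[i], room - weights[i]))
```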
16.
We formulate minmax flow problems as a DC optimization problem. We then apply a DC primal-dual algorithm to solve the resulting problem. The obtained computational results show that the proposed algorithm is efficient thanks to particular structures of the minmax flow problems.
17.
Fuzzy optimization models and methods have been among the most widely studied topics within the broad area of Soft Computing. Particularly relevant is the field of fuzzy linear programming (FLP), whose applications and practical realizations can be found throughout real-world domains. As FLP problems constitute the basis for solving fuzzy optimization problems, this paper presents a basic introduction to the main models and methods in FLP and, as a whole, analyzes linear programming problems with fuzzy costs, fuzzy constraints and fuzzy coefficients in the technological matrix. But fuzzy-sets-based optimization methods do not end with FLP, and hence, in order to solve more complex optimization problems, fuzzy-sets-based meta-heuristics are considered and two main operative approaches are described. Given that these techniques obtain efficient and/or effective solutions, we present a fuzzy rule based methodology for coordinating meta-heuristics and, in addition, to provide intelligence, we propose a knowledge-extraction process to conduct the coordination of the system.
18.
Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems
W.F. Abd-El-Wahed, A.A. Mousa, M.A. El-Shorbagy 《Journal of Computational and Applied Mathematics》2011,235(5):1446-1453
Heuristic optimization provides a robust and efficient approach for solving complex real-world problems. The aim of this paper is to introduce a hybrid approach combining two heuristic optimization techniques, particle swarm optimization (PSO) and genetic algorithms (GA). Our approach integrates the merits of both GA and PSO and has two characteristic features. Firstly, the algorithm is initialized by a set of random particles which travel through the search space, and during this travel an evolution of these particles is performed by integrating PSO and GA. Secondly, to restrict and control the velocity of the particles, we introduce a modified constriction factor. Finally, the results of various experimental studies using a suite of multimodal test functions taken from the literature demonstrate the superiority of the proposed approach in finding the global optimal solution.
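For reference, the standard Clerc–Kennedy constriction factor the modified version builds on is χ = 2 / |2 - φ - sqrt(φ² - 4φ)| with φ = c1 + c2 > 4; the paper's modification itself is not reproduced here.

```python
import math

def constriction_factor(c1, c2):
    """Clerc-Kennedy constriction factor; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4.0:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction_factor(2.05, 2.05)   # the classic setting, chi close to 0.7298
```

Multiplying the whole velocity update by χ damps oscillations and keeps particle trajectories bounded without an explicit velocity clamp.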
19.
Membrane algorithms (MAs), which inherit from P systems, constitute a new parallel and distributed framework for approximate computation. In this paper, a membrane algorithm is proposed in which the involved parameters can be chosen adaptively. In the algorithm, some membranes can evolve dynamically during the computing process to specify the values of the requested parameters. The new algorithm is tested on a well-known combinatorial optimization problem, the travelling salesman problem. The empirical evidence suggests that the proposed approach is efficient and reliable on 11 benchmark instances, obtaining the best known solutions in eight of them. Compared with a genetic algorithm, a simulated annealing algorithm, a neural network and a fine-tuned non-adaptive membrane algorithm, our algorithm performs better. In practice, to design an airline network that minimizes the total routing cost on the CAB data with twenty-five US cities, we can quickly obtain high quality solutions using our algorithm.
20.
Interval methods have shown their ability to locate and prove the existence of a global optimum in a safe and rigorous way. Unfortunately, these methods are rather slow. Efficient solvers for optimization problems are based on linear relaxations. However, the latter are unsafe, and thus may overestimate, or, worse, underestimate the global minimum. This paper introduces QuadOpt, an efficient and safe framework to rigorously bound the global optimum as well as its location. QuadOpt uses consistency techniques to speed up the initial convergence of the interval narrowing algorithms. A lower bound is computed on a linear relaxation of the constraint system and the objective function. All these computations are based on a safe and rigorous implementation of linear programming techniques. First experimental results are very promising.
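The safety property interval methods rely on can be sketched in a few lines: evaluating a function over an interval yields a guaranteed (if overestimated) enclosure of its range, so the lower endpoint is a valid bound on the global minimum. This toy omits the outward rounding a truly rigorous floating-point implementation needs; the example function is an illustrative assumption.

```python
# Minimal interval arithmetic: each interval is a (lo, hi) pair.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(products), max(products))

def i_sqr(a):
    lo, hi = a
    if lo <= 0.0 <= hi:                 # zero inside: square's minimum is 0
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

# Enclose f(x) = x^2 - 2x over x in [0, 3]; the true range is [-1, 3].
x = (0.0, 3.0)
enclosure = i_add(i_sqr(x), i_mul((-2.0, -2.0), x))   # a superset of [-1, 3]
```

The enclosure (-6, 9) overestimates the true range [-1, 3] (the so-called dependency problem), but it can never underestimate it, which is exactly the guarantee the unsafe linear relaxations above lack.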