Similar Literature
20 similar documents found (search time: 500 ms)
1.
Optimization, 2012, 61(10): 1661-1686
ABSTRACT

Optimization over the efficient set of a multi-objective optimization problem is a mathematical model for the problem, arising in multiple criteria decision-making, of selecting a most preferred solution while accounting for trade-offs between objectives within the set of efficient solutions. In this paper, we consider a particular case of this problem, namely that of optimizing a linear function over the image of the efficient set in the objective space of a convex multi-objective optimization problem. We present both primal and dual algorithms for this task. The algorithms are based on recent algorithms for solving convex multi-objective optimization problems in objective space, with suitable modifications to exploit specific properties of the problem of optimization over the efficient set. We first present the algorithms for the case in which the underlying problem is a multi-objective linear programme. We then extend them to problems with an underlying convex multi-objective optimization problem. We compare the new algorithms with several state-of-the-art algorithms from the literature on a set of randomly generated instances to demonstrate that they are considerably faster than the competitors.
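As a rough illustration of the problem class (generic notation, not taken from the paper), the underlying convex multi-objective problem and the optimization over its nondominated image in objective space can be written as

\[
\min_{x \in X} \; f(x) = \bigl(f_1(x),\dots,f_p(x)\bigr), \qquad
\mathcal{Y}_N := \{\, f(x) \;:\; x \in X,\ f(x) \text{ nondominated} \,\},
\]
\[
\min \; c^\top y \quad \text{s.t.} \quad y \in \mathcal{Y}_N ,
\]

where c is the given linear objective over the efficient set. Working with the image set in the (typically low-dimensional) objective space, rather than with the efficient set in decision space, is what the objective-space algorithms exploit.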

2.
New versions and extensions of Benson’s outer approximation algorithm for solving linear vector optimization problems are presented. Primal and dual variants are provided in which only one scalar linear program has to be solved in each iteration rather than two or three as in previous versions. Extensions are given to problems with arbitrary pointed solid polyhedral ordering cones. Numerical examples are provided, one of them involving a new set-valued risk measure for multivariate positions.

3.
The geometric duality theory of Heyde and Löhne (2006) defines a dual to a multiple objective linear programme (MOLP). In objective space, the primal problem can be solved by Benson’s outer approximation method (Benson 1998a,b), while the dual problem can be solved by a dual variant of Benson’s algorithm (Ehrgott et al. 2007). Duality theory then assures that it is possible to find the (weakly) nondominated set of the primal MOLP by solving its dual. In this paper, we propose an algorithm to solve the dual MOLP approximately, but within a specified tolerance. This approximate solution set can be used to calculate an approximation of the weakly nondominated set of the primal. We show that this set is a weakly ε-nondominated set of the original primal MOLP and provide numerical evidence that this approach can be faster than solving the primal MOLP approximately.

4.
Two approximation algorithms for solving convex vector optimization problems (CVOPs) are provided. Both algorithms solve the CVOP and its geometric dual problem simultaneously. The first algorithm is an extension of Benson’s outer approximation algorithm, and the second one is a dual variant of it. Both algorithms provide an inner as well as an outer approximation of the (upper and lower) images. Only one scalar convex program has to be solved in each iteration. We allow objective and constraint functions that are not necessarily differentiable, allow solid pointed polyhedral ordering cones, and relate the approximations to an appropriate ε-solution concept. Numerical examples are provided.
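The scalar convex program solved in each iteration of such Benson-type algorithms is commonly a Pascoletti–Serafini-type problem of the following form (a sketch in generic notation; the exact subproblem used in the paper may differ):

\[
P(v):\qquad \min_{x,\;z}\; z \quad \text{s.t.}\quad g(x) \le 0, \qquad f(x) - z\,c \le_C v,
\]

where c ∈ int C is a fixed direction, v is a vertex of the current outer approximation of the upper image, and ≤_C denotes the ordering induced by the cone C. The optimal value measures how far v lies from the upper image along c, and an optimal solution supplies both a boundary point and a supporting hyperplane used to refine the approximation.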

5.
In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex (Evans–Steuer) algorithm (Math Program 5(1):54–72, 1973). Unlike that algorithm, the proposed one works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a solution in the sense of Löhne (Vector optimization with infimum and supremum. Springer, Berlin, 2011), that is, a subset of efficient solutions that allows one to generate the whole efficient frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which was originally designed for solving single-objective linear optimization problems and was modified to solve two-objective bounded LVOPs with the positive orthant as the ordering cone in Ruszczyński and Vanderbei (Econometrica 71(4):1287–1297, 2003). The algorithm proposed here works for any dimension, any solid pointed polyhedral ordering cone C, and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective-space-based LVOP algorithm [Benson’s algorithm in Hamel et al. (J Global Optim 59(4):811–836, 2014)], which also provides a solution in the sense of Löhne (2011), and with the Evans–Steuer algorithm (1973). The results show that for non-degenerate problems the proposed algorithm outperforms Benson’s algorithm and is on par with the Evans–Steuer algorithm. For highly degenerate problems, Benson’s algorithm (Hamel et al. 2014) outperforms the simplex-type algorithms; however, for these problems the parametric simplex algorithm is computationally much more efficient than the Evans–Steuer algorithm.
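The parameter space mentioned here is, roughly, the space of weights of the weighted-sum scalarization (a sketch in generic notation; details in the paper may differ):

\[
\text{LVOP:}\quad \min_C \; P x \ \ \text{s.t.}\ \ A x \ge b,
\qquad\qquad
P(w):\quad \min_x \; w^\top P x \ \ \text{s.t.}\ \ A x \ge b, \quad w \in C^+\setminus\{0\},
\]

where C^+ is the dual cone of the ordering cone C. Optimal solutions of P(w) for w in the interior of C^+ are efficient for the LVOP, so a parametric simplex method that partitions the weight space into cells on which a single basis remains optimal can generate the whole efficient frontier from finitely many bases.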

6.
Multiplicative programming problems are global optimisation problems known to be NP-hard. In this paper we propose an objective space cut and bound algorithm for approximately solving convex multiplicative programming problems. This method is based on an objective space approximation algorithm for convex multi-objective programming problems. We show that this multi-objective optimisation algorithm can be turned into a cut and bound algorithm for solving convex multiplicative programming problems. We use an illustrative example to demonstrate how the algorithm works. Computational experiments illustrate the superior performance of our algorithm compared to other methods from the literature.
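A convex multiplicative programme and its link to multi-objective optimisation can be sketched as follows (generic notation, not taken from the paper):

\[
\min_{x \in X} \; \prod_{i=1}^{p} f_i(x), \qquad f_i \ \text{convex and positive on the convex set } X .
\]

Because the product is increasing in each coordinate on the positive orthant, an optimal solution can be found among the (weakly) efficient solutions of the associated multi-objective problem \min_{x\in X} (f_1(x),\dots,f_p(x)); this is what allows an objective-space approximation algorithm for the multi-objective problem to be reused, with cuts and bounds, for the multiplicative problem.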

7.
Multiobjective linear optimization problems (MOLPs) arise when several linear objective functions have to be optimized over a convex polyhedron. In this paper, we propose a new method for generating the entire efficient set of an MOLP in the outcome space. This method is based on the concept of adjacency between efficient extreme points. It uses a local exploration approach to generate simultaneously efficient extreme points and maximal efficient faces. We therefore define an efficient face as the combination of the adjacent efficient extreme points that define its border. We propose to use an iterative simplex pivoting algorithm to find adjacent efficient extreme points. Concurrently, maximal efficient faces are generated by testing relative interior points. The proposed method is constructive in the sense that each extreme point, while searching for incident faces, can transmit some local information to its adjacent efficient extreme points in order to complete the construction of the faces. The performance of our method is reported and computational results based on randomly generated MOLPs are discussed.

8.
Outcome space methods construct the set of nondominated points in the objective (outcome) space of a multiple objective linear programme. In this paper, we employ results from geometric duality theory for multiple objective linear programmes to derive a dual variant of Benson’s “outer approximation algorithm” for solving multiobjective linear programmes in objective space. We also suggest some improvements of the original version of the algorithm and prove that solving the dual provides a weight set decomposition. We compare both algorithms on small illustrative examples and on practically relevant ones.

9.
Based on a new efficient technique for identifying active constraints introduced in this paper, a new sequential systems of linear equations (SSLE) algorithm that generates feasible iterates is proposed for solving nonlinear optimization problems with inequality constraints. We introduce a new technique for constructing the systems of linear equations, which resorts to a perturbation of the gradients of the constraint functions. At each iteration of the new algorithm, a feasible descent direction is obtained by solving only one system of linear equations, without forming a convex combination. To ensure global convergence and avoid the Maratos effect, the algorithm needs to solve two additional reduced systems of linear equations with the same coefficient matrix after finitely many iterations. The proposed algorithm is proved to be globally and superlinearly convergent under some mild conditions. What distinguishes this algorithm from previous feasible SSLE algorithms is that an improving direction is obtained easily and the computational cost of generating a new iterate is reduced. Finally, a preliminary implementation has been tested.

10.
Image space analysis of generalized fractional programs  (total citations: 2; self-citations: 0; citations by others: 2)
The solution of a particular nonconvex program is usually very dependent on the structure of the problem. In this paper we identify classes of nonconvex problems involving either sums or products of ratios of linear terms which may be treated by analysis in a transformed space. In each class, the image space is defined by a mapping which associates a new variable with each original ratio of linear terms. In the image space, optimization is easy in certain directions, and the overall solution may be realized by sequentially optimizing in these directions. In addition to these ratio problems, we also show how to use image space analysis to treat the subclass of problems whose objective is to optimize a product of linear terms. For each class of nonconvex problems, we present an algorithm that locates global solutions by computing both upper and lower bounds on the solution and then solving a sequence of linear programming sub-problems. We also demonstrate the algorithms described in this paper by solving several example problems.
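The image-space transformation for these ratio classes can be sketched as follows (generic notation; the paper’s classes and assumptions are more specific):

\[
y_i \;=\; \frac{c_i^\top x + c_{i0}}{d_i^\top x + d_{i0}}, \qquad i = 1,\dots,p,
\qquad\qquad
\min_{x \in X}\; \sum_{i=1}^{p} y_i
\quad\text{or}\quad
\min_{x \in X}\; \prod_{i=1}^{p} y_i ,
\]

so that the optimization is carried out over the p-dimensional image \{\, y : y_i = (c_i^\top x + c_{i0})/(d_i^\top x + d_{i0}),\ x \in X \,\} rather than over the original, typically higher-dimensional, feasible set X.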

11.
Current sequential quadratic programming (SQP) type algorithms have two drawbacks: (i) to obtain a search direction, one or more quadratic programming subproblems must be solved per iteration, which is computationally expensive and makes these algorithms unsuitable for large-scale problems; (ii) SQP algorithms require the related quadratic programming subproblems to be solvable at every iteration, a requirement that is difficult to guarantee. By using an ε-active set procedure with a special penalty function as the merit function, a new algorithm of sequential systems of linear equations for general nonlinear optimization problems with an arbitrary initial point is presented. The new algorithm only needs to solve three systems of linear equations having the same coefficient matrix per iteration, and has global convergence and local superlinear convergence. To some extent, the new algorithm overcomes the shortcomings of the SQP algorithms mentioned above. Project partly supported by the National Natural Science Foundation of China and the Tianyuan Foundation of China.

12.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
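A minimal Python sketch of the two ingredients, self-scaling of the inverse Hessian approximation and a max-type (GLL) nonmonotone Armijo line search, is given below. It is illustrative only: the scaling factor, memory length and line-search constants are common textbook choices (Oren–Luenberger scaling, Grippo–Lampariello–Lucidi test), not necessarily the two line search forms analysed in the paper.

import numpy as np

def self_scaling_bfgs(f, grad, x0, max_iter=200, tol=1e-6, M=10, c1=1e-4, beta=0.5):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                      # inverse Hessian approximation
    g = grad(x)
    recent_f = [f(x)]                  # memory for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -H @ g                     # quasi-Newton direction
        # Nonmonotone Armijo backtracking: compare against the largest of the
        # last M function values plus a sufficient-decrease term.
        f_ref = max(recent_f[-M:])
        t = 1.0
        while f(x + t * d) > f_ref + c1 * t * (g @ d):
            t *= beta
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; otherwise skip update
            # Self-scaling factor: shrink H before the update to damp large
            # eigenvalues (one common choice; variants exist in the literature).
            tau = sy / (y @ (H @ y))
            H *= tau
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS update of H
        x, g = x_new, g_new
        recent_f.append(f(x))
    return x

# Usage example on a simple nonconvex test function (Rosenbrock).
if __name__ == "__main__":
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    rosen_grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                                     200 * (z[1] - z[0]**2)])
    print(self_scaling_bfgs(rosen, rosen_grad, [-1.2, 1.0]))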

13.
Two of the main approaches in multiple criteria optimization are optimization over the efficient set and utility function programming. These are nonconvex optimization problems in which local optima can differ from global optima. Existing global optimization methods for solving such problems work well only for problems of moderate dimension. In this article, we propose some ways to reduce the number of criteria and the dimension of a linear multiple criteria optimization problem. Using the concept of so-called representative and extreme criteria, which is motivated by the concept of redundant (or nonessential) objective functions of Gal and Leberling, we can reduce the number of criteria without altering the set of efficient solutions. Furthermore, by using linearly independent criteria, the linear multiple criteria optimization problem under consideration can be transformed into an equivalent linear multiple criteria optimization problem in the space of linearly independent criteria. This equivalence is understood in the sense that efficient solutions of each problem can be derived from efficient solutions of the other by some affine transformation. As a result, such criteria and dimension reduction techniques can help to increase the efficiency of existing algorithms and to develop new methods for handling global optimization problems arising from multiple objective optimization.

14.
Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal–dual interior-point methods that are based on solving the Schur complement equation would encounter severe computational bottlenecks if they were applied to solve these SDPs. In this paper, we consider a customized inexact primal–dual path-following interior-point algorithm for solving large-scale log-det SDP problems arising from sparse covariance selection. Our inexact algorithm solves the large and ill-conditioned linear system of equations in each iteration by a preconditioned iterative solver. By exploiting the structure of sparse covariance selection problems, we are able to design highly effective preconditioners that efficiently solve these large and ill-conditioned linear systems. Numerical experiments on both synthetic and real covariance selection problems show that our algorithm is highly efficient and outperforms other existing algorithms.
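A standard formulation behind sparse covariance selection is the ℓ1-regularized log-det program (a sketch; the paper works with an equivalent log-det SDP in which the ℓ1 term is expressed through linear constraints):

\[
\max_{X \succ 0}\;\; \log\det X \;-\; \langle S, X\rangle \;-\; \rho \sum_{i,j} |X_{ij}| ,
\]

where S is the sample covariance matrix, X estimates the inverse covariance (precision) matrix, and ρ > 0 controls sparsity (some variants penalize only the off-diagonal entries). Replacing each |X_{ij}| by an auxiliary variable U_{ij} with the linear constraints −U_{ij} ≤ X_{ij} ≤ U_{ij} yields the log-det SDP with a large number of linear constraints referred to above.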

15.
Linear multiplicative programming is the minimization of a product of affine functions over a polyhedral set. The problem with two affine functions reduces to a parametric linear program and can be solved efficiently. When the objective is a product of more than two affine functions, no efficient algorithm that solves the problem to optimality has been proposed; however, Benson and Boger have proposed a heuristic algorithm that exploits links between the problem and concave minimization and multicriteria optimization. We propose a heuristic method for the problem, as well as a modification of it that enhances the accuracy of the approximation. Computational experiments demonstrate that the method and its modification solve randomly generated problems within a few percent of relative error.
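The problem class and the parametric-LP idea for the two-factor case can be sketched as follows (generic notation, with each factor assumed positive on the feasible set):

\[
\min_{x}\;\prod_{i=1}^{p} \bigl(c_i^\top x + d_i\bigr) \quad \text{s.t.}\quad A x \le b .
\]

For p = 2, one may fix the value of the second factor, ξ = c_2^\top x + d_2, solve the linear program v(ξ) = \min \{ c_1^\top x + d_1 : A x \le b,\ c_2^\top x + d_2 = ξ \}, and then minimize the one-dimensional function ξ · v(ξ) over the attainable range of ξ. This one-parameter structure is what makes the two-factor case efficiently solvable.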

16.
A Single Component Mutation Evolutionary Programming  (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, a novel evolutionary programming algorithm is proposed for solving optimization problems with upper and lower bounds on the variables as well as linearly constrained optimization problems. The algorithm has two characteristics: first, only one component of the current solution is mutated in each iteration; second, it can solve linearly constrained optimization problems directly, without converting them into unconstrained problems. On both kinds of optimization problems, the algorithm not only finds optimal or near-optimal solutions effectively but also reduces the number of function evaluations compared with other heuristic algorithms.
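The single-component-mutation idea for the box-constrained case can be sketched in Python as follows (illustrative only; the selection scheme, step-size control and the direct handling of linear constraints described in the paper are not reproduced here):

import numpy as np

def single_component_ep(f, lower, upper, pop_size=30, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, n))   # initial population
    fit = np.array([f(x) for x in pop])
    sigma = 0.1 * (upper - lower)          # per-coordinate mutation step sizes
    for _ in range(iters):
        for i in range(pop_size):
            child = pop[i].copy()
            j = rng.integers(n)            # mutate exactly one component
            child[j] += sigma[j] * rng.standard_normal()
            child[j] = np.clip(child[j], lower[j], upper[j])
            fc = f(child)                  # one function evaluation per offspring
            if fc < fit[i]:                # greedy replacement
                pop[i], fit[i] = child, fc
    best = np.argmin(fit)
    return pop[best], fit[best]

# Usage example: minimize a shifted sphere function on [-5, 5]^5.
if __name__ == "__main__":
    sphere = lambda x: float(np.sum((x - 1.0) ** 2))
    x_best, f_best = single_component_ep(sphere, [-5] * 5, [5] * 5)
    print(x_best, f_best)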

17.
Traditionally, minimum cost transshipment problems have been simplified to linear cost problems, which is not realistic in practical applications. Some advanced local search algorithms have been developed to solve concave cost bipartite network problems. These have been found to be more effective than the traditional linear approximation methods and local search methods. Recently, a genetic algorithm and an ant colony system algorithm were employed to develop two global search algorithms for solving concave cost transshipment problems. These two global search algorithms were found to be more effective than the advanced local search algorithms for such problems. Although the particle swarm optimization algorithm has obtained good results in many applications, to the best of our knowledge it has not yet been applied to minimum concave cost network flow problems. Thus, in this study, we employ an arc-based particle swarm optimization algorithm, coupled with some genetic algorithm and threshold accepting techniques, as well as concave cost network heuristics, to develop a hybrid global search algorithm for efficiently solving minimum cost network flow problems with concave arc costs. The proposed algorithm is evaluated by solving several randomly generated network flow problems. The results indicate that the proposed algorithm is more effective than several other recently designed methods, such as local search algorithms, genetic algorithms and ant colony system algorithms, for solving minimum cost network flow problems with concave arc costs.
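For reference, the classical (continuous) particle swarm update that hybrid schemes of this kind build upon is sketched below in Python; the arc-based encoding and the genetic-algorithm and threshold-accepting components described in the abstract are not reproduced.

import numpy as np

def pso(f, lower, upper, n_particles=30, iters=500, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = len(lower)
    x = rng.uniform(lower, upper, size=(n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                            # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])      # personal bests
    g = pbest[np.argmin(pbest_f)].copy()                        # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lower, upper)                        # position update
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Usage example: minimize the sphere function on [-10, 10]^4.
if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    best_x, best_f = pso(sphere, [-10] * 4, [10] * 4)
    print(best_x, best_f)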

18.
In this paper, we focus on approximating convex compact bodies. For a convex body described as the feasible set in the objective space of a multiple objective programme, we show that finding it is equivalent to finding the non-dominated set of a multiple objective programme. This equivalence implies that convex bodies can be approximated using multiple objective optimization algorithms. Therefore, we propose a revised outer approximation algorithm for convex multiple objective programming problems to approximate convex bodies. Finally, we apply the algorithm to approximate reachable sets of control systems and use numerical examples to show its effectiveness.

19.
In Ref. 1, a new superlinearly convergent algorithm of sequential systems of linear equations (SSLE) for nonlinear optimization problems with inequality constraints was proposed. At each iteration, this algorithm only needs to solve four systems of linear equations having the same coefficient matrix, which requires much less computation than existing SQP algorithms. Moreover, unlike the quadratic programming subproblems of the SQP algorithms (which may not have a solution), the subproblems of the SSLE algorithm are always solvable. In Ref. 2, it is shown that the new algorithm can also be used to deal with nonlinear optimization problems having both equality and inequality constraints, by solving an auxiliary problem. However, the algorithm of Ref. 2 has to perform a pivoting operation to adjust the penalty parameter at each iteration. In this paper, we improve the work of Ref. 2 and present a new algorithm of sequential systems of linear equations for general nonlinear optimization problems. This new algorithm preserves the advantages of the SSLE algorithms while overcoming the aforementioned shortcomings. Some numerical results are also reported.

20.
For certain optimization problems, the corresponding Lagrangian dual problem is often solved simply because it is easier to solve than the original primal problem. Another reason is the weak duality theorem, which states that the optimal dual function value is less than or equal to the optimal primal objective value. The dual problem is a special case of a bilevel programming problem, with the Lagrange multipliers as upper-level variables and the decision variables as lower-level variables. Another interesting aspect of dual problems is that both the lower- and upper-level optimization problems involve only box constraints and no other equality or inequality constraints. In this paper, we propose a coevolutionary dual optimization (CEDO) algorithm that co-evolves two populations, one of Lagrange multipliers and the other of decision variables, to find the dual solution. On 11 test problems taken from the optimization literature, we demonstrate the efficacy of the CEDO algorithm by comparing it with a couple of nested smooth and nonsmooth algorithms and a couple of previously suggested coevolutionary algorithms. The performance of the CEDO algorithm is also compared with two classical methods involving nonsmooth (bundle) optimization. As a by-product, we analyse the test problems to find their associated duality gap and classify them into three categories having zero, finite or infinite duality gaps. The development of a coevolutionary approach, the identification of the presence or absence of a duality gap in a number of commonly used test problems, and the efficacy of the proposed algorithm compared with the usual nested smooth and nonsmooth algorithms and other existing coevolutionary approaches are the hallmarks of the current study.
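For reference, the Lagrangian dual and the weak duality relation referred to here can be written as follows (generic notation):

\[
\text{(P)}\quad \min_{x}\; f(x)\ \ \text{s.t.}\ \ g(x) \le 0,
\qquad
L(x,\lambda) = f(x) + \lambda^\top g(x),
\]
\[
q(\lambda) = \min_{x} L(x,\lambda),
\qquad
\text{(D)}\quad \max_{\lambda \ge 0}\; q(\lambda),
\qquad
q(\lambda) \le f(x)\ \ \text{for every feasible } x \text{ and every } \lambda \ge 0 .
\]

Evaluating q(λ) is itself an inner optimization over the decision variables x, which is the bilevel structure exploited by the coevolutionary algorithm: the upper-level population searches over λ ≥ 0 (box constraints only) and the lower-level population over x. The duality gap analysed for the test problems is the difference between the optimal primal value and the optimal dual value.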

