Similar Documents
20 similar documents retrieved.
1.
2.
We consider optimization problems for minimizing conditional value-at-risk (CVaR) from a computational point of view, with an emphasis on financial applications. As a general solution approach, we suggest reformulating these CVaR optimization problems as two-stage recourse problems of stochastic programming. Specializing the L-shaped method leads to a new algorithm for minimizing conditional value-at-risk. We implemented the algorithm as the solver CVaRMin. To illustrate the performance of this algorithm, we present comparative computational results with two kinds of test problems. First, we consider portfolio optimization problems with 5 random variables. Such problems involving conditional value-at-risk play an important role in financial risk management. Therefore, besides testing the performance of the proposed algorithm, we also present computational results of interest in finance. Second, with the explicit aim of testing algorithm performance, we present comparative computational results with randomly generated test problems involving 50 random variables. In all our tests, the experimental solver based on the new approach outperformed all general-purpose solvers by at least one order of magnitude, with solution accuracy in the same range as that of the LP solvers. János Mayer: Financial support by the national center of competence in research "Financial Valuation and Risk Management" is gratefully acknowledged. The national centers of competence in research are managed by the Swiss National Science Foundation on behalf of the federal authorities.
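The two-stage reformulation above builds on the classical Rockafellar–Uryasev linearization of CVaR. A minimal sketch of that LP for a scenario-based portfolio problem, assuming simulated returns, a 95% confidence level, and scipy's LP solver (none of which come from the paper, and this is not the CVaRMin solver):

```python
# Minimal CVaR-minimization LP in the Rockafellar-Uryasev form -- an
# illustrative sketch, not the specialized L-shaped algorithm of the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, n_scen, alpha = 5, 200, 0.95
R = rng.normal(0.01, 0.05, size=(n_scen, n_assets))  # simulated scenario returns (assumption)

# Variables: weights w (n_assets), VaR level t (1), excess losses u (n_scen).
# min  t + 1/((1-alpha)*S) * sum(u)
# s.t. u_s >= -R_s.w - t,  u_s >= 0,  sum(w) = 1,  w >= 0
c = np.concatenate([np.zeros(n_assets), [1.0], np.full(n_scen, 1.0 / ((1 - alpha) * n_scen))])
A_ub = np.hstack([-R, -np.ones((n_scen, 1)), -np.eye(n_scen)])  # -R_s.w - t - u_s <= 0
b_ub = np.zeros(n_scen)
A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("CVaR:", res.fun, "weights:", res.x[:n_assets].round(3))
```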

3.
While research in robust optimization has attracted considerable interest over the last decades, its algorithmic development has been hindered by several factors. One of them is a missing set of benchmark instances that would make algorithm performance better comparable and reproducing instances unnecessary. Such a benchmark set should in particular contain hard instances, but so far, the standard approach to producing instances has been to sample values randomly from a uniform distribution. In this paper we introduce a new method to produce hard instances for min-max combinatorial optimization problems, which is based on an optimization model itself. Our approach makes no assumptions on the problem structure and can thus be applied to any combinatorial problem. Using the Selection and Traveling Salesman problems as examples, we show that it is possible to produce instances which are up to 500 times harder to solve for a mixed-integer programming solver than the current state-of-the-art instances.
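For reference, the min-max Selection problem used as one of the two testbeds can be stated in a few lines; the sketch below solves a tiny uniformly sampled instance by brute force, i.e. the baseline instance generation the paper sets out to improve on (all data are assumptions):

```python
# Brute-force solver for the min-max Selection problem: pick p of n items so
# that the worst-case scenario cost is minimal. Costs are sampled uniformly,
# which is exactly the "easy" generation scheme the paper's optimization-based
# generator is designed to beat.
from itertools import combinations
import random

random.seed(1)
n, p, n_scenarios = 10, 4, 5
cost = [[random.randint(1, 100) for _ in range(n)] for _ in range(n_scenarios)]

def worst_case(items):
    # Robust objective: maximum total cost over all scenarios.
    return max(sum(c[i] for i in items) for c in cost)

best = min(combinations(range(n), p), key=worst_case)
print("chosen items:", best, "min-max cost:", worst_case(best))
```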

4.
Semidefinite relaxations of certain combinatorial optimization problems lead to approximation algorithms with performance guarantees. For large-scale problems, it may not be computationally feasible to solve the semidefinite relaxations to optimality. In this paper, we investigate the effect on the performance guarantees of an approximate solution to the semidefinite relaxation for MaxCut, Max2Sat, and Max3Sat. We show that it is possible to make simple modifications to the approximate solutions and obtain performance guarantees that depend linearly on the most negative eigenvalue of the approximate solution, the size of the problem, and the duality gap. In every case, we recover the original performance guarantees in the limit as the solution approaches the optimal solution to the semidefinite relaxation.
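A sketch of the rounding step in question: repair an inexact (possibly indefinite) SDP solution by clipping negative eigenvalues, then apply Goemans–Williamson random-hyperplane rounding. The random graph and the perturbed matrix standing in for an approximate SDP solution are illustrative assumptions:

```python
# Random-hyperplane rounding of an *approximate* MaxCut SDP solution, with the
# simple eigenvalue repair the abstract alludes to: clip negative eigenvalues
# so the matrix is PSD before factoring it.
import numpy as np

rng = np.random.default_rng(0)
n = 30
W = rng.integers(0, 2, size=(n, n)); W = np.triu(W, 1); W = W + W.T  # random graph (assumption)

X = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stand-in for an inexact SDP solution
X = (X + X.T) / 2
vals, vecs = np.linalg.eigh(X)
V = vecs * np.sqrt(np.clip(vals, 0, None))   # repaired factorization: V V^T is PSD

r = rng.standard_normal(n)
signs = np.sign(V @ r)                       # side of a random hyperplane
cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if signs[i] != signs[j])
print("cut value:", cut)
```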

5.
We study the spatial Branch-and-Bound algorithm for the global optimization of nonlinear problems. In particular, we are interested in a method to find good feasible solutions quickly. Most spatial Branch-and-Bound-based solvers use a non-global solver at a few nodes to try to find better incumbents. We show that it is possible to improve the branching rules and the node priority by exploiting the solutions from the non-global solver. We also propose several smart adaptive strategies for choosing when to run the non-global solver. We show that despite the time spent solving more NLP problems at the nodes, the new strategies enable the algorithm to find the first good incumbents faster and to prove global optimality faster. Easy, medium-size, and hard NLP instances from the Coconut library are benchmarked. All experiments are run using the open-source solver Couenne.
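A toy illustration of the interplay the abstract describes: in the spatial Branch-and-Bound loop below, a cheap "local solve" (here just a midpoint evaluation) supplies incumbents early, which tightens pruning. The 1-D test function and its Lipschitz bound are assumptions, not the Couenne setup of the paper:

```python
# Toy spatial branch-and-bound on a 1-D nonconvex function: Lipschitz lower
# bounds per node, incumbents from a cheap local evaluation at each node.
import heapq, math

f = lambda x: math.sin(x) + math.sin(3 * x)    # nonconvex test function (assumption)
L = 4.0                                        # valid Lipschitz constant: |f'| <= 1 + 3

def lower_bound(a, b):
    # Lipschitz lower bound of f over [a, b].
    return min(f(a), f(b)) - L * (b - a) / 2

best_x, best_val = 0.0, f(0.0)
heap = [(lower_bound(0.0, 6.0), 0.0, 6.0)]
while heap:
    lb, a, b = heapq.heappop(heap)
    if lb >= best_val - 1e-6:                  # prune: node cannot beat incumbent
        continue
    m = (a + b) / 2
    if f(m) < best_val:                        # "local solver" stand-in: midpoint evaluation
        best_x, best_val = m, f(m)
    for lo, hi in ((a, m), (m, b)):
        heapq.heappush(heap, (lower_bound(lo, hi), lo, hi))
print(f"global min ~ f({best_x:.4f}) = {best_val:.4f}")
```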

6.
In this paper, an ensemble of discrete differential evolution algorithms with parallel populations is presented. In a single-population discrete differential evolution (DDE) algorithm, the destruction and construction (DC) procedure is employed to generate the mutant population, whereas the trial population is obtained through a crossover operator. The performance of the DDE algorithm is substantially affected by the parameters of the DC procedure as well as the choice of crossover operator. In order to enable the DDE algorithm to make use of different parameter values and crossover operators simultaneously, we propose an ensemble of DDE (eDDE) algorithms where each parameter set and crossover operator is assigned to one of the parallel populations. Each parallel parent population competes not only with the offspring population generated from its own population but also with the offspring populations generated by all other parallel populations, which use different parameter settings and crossover operators. As an application area, the well-known generalized traveling salesman problem (GTSP) is chosen, where the set of nodes is divided into clusters and the objective is to find a minimum-cost tour passing through exactly one node from each cluster. The experimental results show that none of the single-population variants was effective in solving all the GTSP instances, whereas the eDDE performed substantially better than the single-population variants on a set of problem instances. Furthermore, through the experimental analysis of results, the performance of the eDDE algorithm is also compared against the best performing algorithms from the literature. Ultimately, all of the best known averaged solutions for larger instances are further improved by the eDDE algorithm.
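The destruction-construction procedure that generates mutants is easy to sketch; below it is shown on a plain TSP tour with toy distances (the paper applies it to the GTSP, and the destruction size d is one of the parameters the ensemble varies):

```python
# Sketch of the destruction-construction (DC) move: drop d random cities, then
# greedily reinsert each at its best position. Distances are a toy assumption.
import random

random.seed(0)
n = 12
dist = [[abs(i - j) for j in range(n)] for i in range(n)]  # toy distance matrix (assumption)

def tour_len(t):
    return sum(dist[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

def destruct_construct(tour, d=3):
    tour = tour[:]
    removed = random.sample(tour, d)
    for city in removed:                      # destruction: drop d random cities
        tour.remove(city)
    for city in removed:                      # construction: best-position reinsertion
        best = min(range(len(tour) + 1),
                   key=lambda pos: tour_len(tour[:pos] + [city] + tour[pos:]))
        tour.insert(best, city)
    return tour

t = list(range(n)); random.shuffle(t)
print(tour_len(t), "->", tour_len(destruct_construct(t)))
```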

7.
This paper introduces a new cutting plane method for two-stage stochastic mixed-integer programming (SMIP) called Fenchel decomposition (FD). FD uses a class of valid inequalities termed FD cuts, which are derived based on Fenchel cutting planes from integer programming. First, we derive FD cuts based on both the first- and second-stage variables, devise an FD algorithm for SMIP, and establish finite convergence for the case of binary first-stage variables. Second, we derive FD cuts based on the second-stage variables only and use an idea from disjunctive programming to lift the cuts to the higher-dimensional space including the first-stage variables. We then devise an alternative algorithm (FD-L algorithm) based on the lifted FD cuts. Finally, we report computational results on several test instances from the literature involving the special structure of knapsack problems with nonnegative left-hand-side coefficients. The results are promising and show that both algorithms can outperform a standard direct solver and a disjunctive decomposition algorithm on large-scale instances. Furthermore, the FD-L algorithm generally performs better than the FD algorithm. Since Fenchel cuts can be computationally expensive in general and are best suited to problems with special structure, both algorithms exploit the special structure of the test instances by reducing the size of the cut generation problems based on the number of nonzero components in the non-integer solution that needs to be cut off.
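To make the cut-generation step concrete: a Fenchel cut for a 0-1 knapsack set X separates a fractional point x* by maximizing g(pi) = pi.x* - max_{x in X} pi.x. The sketch below does this by projected subgradient ascent with a brute-force inner maximization, viable only for tiny n; the paper instead exploits the knapsack structure to keep this step cheap. All data are assumptions:

```python
# Fenchel cut separation for a small 0-1 knapsack set via subgradient ascent.
# The inner max is brute force over all feasible binary points (n = 6 only).
import itertools
import numpy as np

a, b = np.array([3, 5, 4, 2, 6, 1]), 9             # knapsack data (assumption)
X = [np.array(x) for x in itertools.product([0, 1], repeat=6) if a @ np.array(x) <= b]
x_star = np.array([0.5, 0.9, 0.4, 1.0, 0.3, 1.0])  # fractional point to cut off (assumption)

pi = np.zeros(6)
for it in range(200):
    x_max = max(X, key=lambda x: pi @ x)           # inner maximization over X
    pi += (1.0 / (1 + it)) * (x_star - x_max)      # subgradient of g at pi
    pi = np.clip(pi, -1, 1)                        # projection: ||pi||_inf <= 1

rhs = max(pi @ x for x in X)
if pi @ x_star > rhs + 1e-6:
    print(f"Fenchel cut: pi.x <= {rhs:.3f} cuts off x* (pi.x* = {pi @ x_star:.3f})")
else:
    print("x* lies in conv(X); no cut found")
```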

8.
9.
In this paper we study an extension of the Resource-Constrained Project Scheduling Problem (RCPSP) with minimum-makespan objective by introducing a special type of precedence constraints called "Feeding Precedences" (FP). To the best of our knowledge no exact algorithm exists for this problem. Exploiting the lower bound and the mathematical formulation proposed in Bianco and Caramia (4OR 9(4):371–389, 2011) for the RCPSP with FP, in this paper we propose an exact algorithm based on branch-and-bound rules. Computational experiments on randomly generated instances and a comparison with the results achieved by a commercial solver show that the proposed approach performs satisfactorily.

10.
This paper presents a detailed analysis of the scalability and parallelization of Local Search algorithms for constraint-based and SAT (Boolean satisfiability) solvers. We propose a framework to estimate the parallel performance of a given algorithm by analyzing the runtime behavior of its sequential version. Indeed, by approximating the runtime distribution of the sequential process with statistical methods, the runtime behavior of the parallel process can be predicted by a model based on order statistics. We apply this approach to study the parallel performance of a constraint-based Local Search solver (Adaptive Search), two SAT Local Search solvers (namely Sparrow and CCASAT), and a propagation-based constraint solver (Gecode, with a random labeling heuristic). We compare the performance predicted by our model to actual parallel implementations of those methods using up to 384 processes. We show that the model is accurate and predicts performance close to the empirical data. Moreover, as we study different types of problems, we observe that the solvers tested exhibit different behaviors and that their runtime distributions can be approximated by two types of distributions: exponential (shifted and non-shifted) and lognormal. Our results show that the proposed framework estimates the runtime of the parallel algorithm with an average discrepancy of 21% w.r.t. the empirical data across all the experiments with the maximum allowed number of processors for each technique.
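The order-statistics model is simple to reproduce for the shifted-exponential case (one of the two distribution families observed): the parallel runtime with m independent processes is the minimum of m i.i.d. draws, whose expectation is shift + scale/m. A sketch with illustrative parameters:

```python
# Order-statistics prediction of parallel speedup for a shifted-exponential
# runtime distribution: E[min of m iid Exp(scale)] = scale/m, plus the shift.
# The shift and scale values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
shift, scale = 2.0, 10.0                        # sequential runtime ~ shift + Exp(scale)

for m in (1, 4, 16, 64, 384):
    # Empirical: minimum over m independent sequential runs, averaged.
    sims = shift + rng.exponential(scale, size=(20_000, m)).min(axis=1)
    predicted = shift + scale / m               # order-statistics prediction
    print(f"m={m:4d}  predicted {predicted:7.3f}  simulated {sims.mean():7.3f}")
```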

11.
A robust search algorithm should ideally exhibit reasonable performance on a diverse and varied set of problems. In an earlier paper (Lim et al., Computational Optimization and Applications, vol. 15, no. 3, 2000), we outlined a class of hybrid genetic algorithms based on the k-gene exchange local search for solving the quadratic assignment problem (QAP). We follow up on our development of the algorithms by reporting in this paper the results of comprehensive testing of the hybrid genetic algorithms (GA) in solving QAP. Over a hundred instances of QAP benchmarks were tested using a standard parameter setting, and the results are presented along with the results obtained using a simple GA for comparison. Our testing on all the benchmarks shows that the hybrid GA obtains good-quality solutions within 2.5% of the best-known solution for 98% of the QAP benchmark instances tested. The computation time is also reasonable: all instances except one required no more than one hour. The results will serve as a useful baseline for performance comparison against other algorithms using the QAP benchmarks as a basis for testing.
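The k-gene exchange local search embedded in the hybrid GA reduces, for k = 2, to the classical pairwise-exchange descent sketched below on random flow and distance matrices (illustrative data, not the QAP benchmark instances):

```python
# Pairwise-exchange (2-gene) local search for the QAP on toy random data.
import random

random.seed(3)
n = 8
F = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]  # flows (assumption)
D = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]  # distances (assumption)

def cost(p):
    # QAP objective: sum of flow(i,j) * distance(location of i, location of j).
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

p = list(range(n)); random.shuffle(p)
best = cost(p)
improved = True
while improved:
    improved = False
    for i in range(n):
        for j in range(i + 1, n):
            p[i], p[j] = p[j], p[i]            # try a 2-exchange
            c = cost(p)
            if c < best:
                best, improved = c, True       # keep the improving swap
            else:
                p[i], p[j] = p[j], p[i]        # revert
print("local optimum cost:", best)
```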

12.
Diethelm, Kai; Walz, Guido. Numerical Algorithms 16(3–4):231–253, 1997
We present an extrapolation-type algorithm for the numerical solution of fractional order differential equations. It is based on the new result that the sequence of approximate solutions of these equations, computed by means of a recently published algorithm by Diethelm [6], possesses an asymptotic expansion with respect to the stepsize. From this we conclude that the application of extrapolation is justified, and we obtain a very efficient differential equation solver with practically no additional numerical cost. This is also illustrated by a number of numerical examples.
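The extrapolation principle is worth a miniature example: once an approximation is known to admit an asymptotic expansion A(h) = A + c1*h^p + ..., combining two stepsizes cancels the leading error term at almost no extra cost. Shown below for the trapezoidal rule (p = 2), not for the fractional-order solver of the paper:

```python
# Richardson extrapolation in miniature: the trapezoidal rule has an error
# expansion in h^2, so (4*T(h/2) - T(h)) / 3 cancels the leading term.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f, a, b = math.exp, 0.0, 1.0
exact = math.e - 1
T_h, T_h2 = trapezoid(f, a, b, 64), trapezoid(f, a, b, 128)
extrap = (4 * T_h2 - T_h) / 3                  # cancels the h^2 term (p = 2)
print(f"h error     : {abs(T_h - exact):.2e}")
print(f"h/2 error   : {abs(T_h2 - exact):.2e}")
print(f"extrapolated: {abs(extrap - exact):.2e}")
```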

13.
We consider the 1.52-approximation algorithm of Mahdian et al. for the metric uncapacitated facility location problem. We show that their algorithm does not close the gap with the lower bound on approximability, 1.463, by providing a construction of instances for which its approximation ratio is not better than 1.494.

14.
Let (MQP) be a general mixed-integer quadratic program that consists of minimizing a quadratic function subject to linear constraints. In this paper, we present a convex reformulation of (MQP), i.e. we reformulate (MQP) into an equivalent program with a convex objective function. Such a reformulation can be solved by a standard solver that uses a branch-and-bound algorithm. We prove that our reformulation is the best one within a convex reformulation scheme, from the continuous relaxation point of view. This reformulation, which we call MIQCR (Mixed Integer Quadratic Convex Reformulation), is based on the solution of an SDP relaxation of (MQP). Computational experiments are carried out on instances of (MQP) including one equality constraint or one inequality constraint. The results show that most of the considered instances with up to 40 variables can be solved within 1 h of CPU time by a standard solver.
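The idea behind such convex reformulations for the binary case can be sketched in a few lines: since x_i^2 = x_i on {0,1}, adding u*(x_i^2 - x_i) leaves the objective unchanged on feasible points, and a large enough u makes it convex. Below is the simplest uniform eigenvalue shift; MIQCR itself chooses the perturbation optimally via an SDP. Data are assumptions:

```python
# Diagonal perturbation of a binary QP: shift Q by u*I and compensate in the
# linear term, so the objective agrees on all binary points but becomes convex.
import numpy as np

rng = np.random.default_rng(7)
n = 6
Q = rng.integers(-4, 5, size=(n, n)); Q = (Q + Q.T) / 2   # nonconvex quadratic (assumption)
c = rng.integers(-3, 4, size=n).astype(float)

u = max(0.0, -np.linalg.eigvalsh(Q).min())     # smallest uniform shift making Q + uI PSD
Q_cvx, c_cvx = Q + u * np.eye(n), c - u * np.ones(n)

x = rng.integers(0, 2, size=n).astype(float)   # any binary point
orig = x @ Q @ x + c @ x
reform = x @ Q_cvx @ x + c_cvx @ x             # identical value, since x_i^2 = x_i
print(f"u = {u:.3f}, objectives agree on binaries: {np.isclose(orig, reform)}")
```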

15.
A practical nurse rostering problem, which arises at a ward of an Italian private hospital, is considered. In this problem, shifts must be assigned each month to the nursing staff subject to various requirements. A matheuristic approach is introduced, based on a set of neighborhoods iteratively searched by a commercial integer programming solver within a defined global time limit, relying on a starting solution generated by the solver running on the general integer programming formulation of the problem. Generally speaking, a matheuristic algorithm is a heuristic algorithm that uses nontrivial optimization and mathematical programming tools to explore the solution space with the aim of analyzing large-scale neighborhoods. Randomly generated instances, based on the considered nurse rostering problem, were solved, and the solutions computed by the proposed procedure are compared to the solutions achieved by stand-alone solvers within the same time limit. The results show that the proposed solution approach outperforms the solvers in terms of solution quality. The proposed approach has also been tested on the well-known Nurse Rostering Competition instances, where several new best results were reached.

16.
How much can randomness help computation? Motivated by this general question and by volume computation, one of the few instances where randomness provably helps, we analyze a notion of dispersion and connect it to asymptotic convex geometry. We obtain a nearly quadratic lower bound on the complexity of randomized volume algorithms for convex bodies in R^n (the current best algorithm has complexity roughly n^4, conjectured to be n^3). Our main tools, dispersion of random determinants and dispersion of the length of a random point from a convex body, are of independent interest and applicable more generally; in particular, the latter is closely related to the variance hypothesis from convex geometry. This geometric dispersion also leads to lower bounds for matrix problems and property testing.

17.
Rollout Algorithms for Stochastic Scheduling Problems
Stochastic scheduling problems are difficult stochastic control problems with combinatorial decision spaces. In this paper we focus on a class of stochastic scheduling problems, the quiz problem and its variations. We discuss the use of heuristics for their solution, and we propose rollout algorithms based on these heuristics which approximate the stochastic dynamic programming algorithm. We show how the rollout algorithms can be implemented efficiently, with considerable savings in computation over optimal algorithms. We delineate circumstances under which the rollout algorithms are guaranteed to perform better than the heuristics on which they are based. We also show computational results which suggest that the performance of the rollout policies is near-optimal and substantially better than the performance of their underlying heuristics.
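A one-step rollout for the quiz problem fits in a few lines: questions are answered until the first failure, question i paying v_i with success probability p_i, so a sequence's expected reward has a closed form. The sketch uses a greedy p*v ordering as the base heuristic, with illustrative data:

```python
# One-step rollout for the quiz problem: for each candidate next question,
# complete the sequence with the base heuristic and score it exactly.
p = [0.9, 0.6, 0.8, 0.5]                        # success probabilities (assumption)
v = [1.0, 5.0, 2.0, 6.0]                        # rewards (assumption)

def expected_reward(order):
    # E[reward] = sum_i v_i * prod_{j <= i} p_j  (quiz stops at first miss).
    total, alive = 0.0, 1.0
    for i in order:
        alive *= p[i]
        total += alive * v[i]
    return total

def heuristic(rest):
    # Greedy base policy: order remaining questions by p * v.
    return sorted(rest, key=lambda i: p[i] * v[i], reverse=True)

def rollout():
    rest, order = set(range(len(p))), []
    while rest:
        best = max(rest, key=lambda i:
                   expected_reward(order + [i] + heuristic(rest - {i})))
        order.append(best); rest.remove(best)
    return order

print("heuristic:", expected_reward(heuristic(set(range(4)))))
print("rollout  :", expected_reward(rollout()))
```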

18.
Approximation Algorithms for Dispersion Problems
Given a collection of weighted sets, each containing at most k elements drawn from a finite base set, the k-set packing problem is to find a maximum-weight sub-collection of disjoint sets. A greedy algorithm for this problem approximates it to within a factor of k, and a natural local search has been shown to approximate it to within a factor of roughly k − 1. However, neither paradigm can yield approximations that improve on this. We present an approximation algorithm for the weighted k-set packing problem that combines the two paradigms by starting with an initial greedy solution and then repeatedly choosing the best possible local improvement. The algorithm has a performance ratio of 2(k + 1)/3, which we show is asymptotically tight. This is the first asymptotic improvement over the straightforward ratio of k.
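The two paradigms being combined are easy to sketch: a greedy initial packing followed by local improvement. The sketch below uses a simple one-set-swap neighborhood, a simplification of the paper's "best possible local improvement", on randomly generated weighted sets (assumptions):

```python
# Greedy weighted k-set packing followed by local improvement: add a set after
# evicting the conflicting ones whenever this strictly increases total weight.
import random

random.seed(5)
base, k = range(20), 3
sets = [(frozenset(random.sample(base, k)), random.randint(1, 10)) for _ in range(30)]

def disjoint(s, chosen):
    return all(s.isdisjoint(t) for t, _ in chosen)

# Greedy initial solution: take sets in order of decreasing weight.
solution = []
for s, w in sorted(sets, key=lambda sw: -sw[1]):
    if disjoint(s, solution):
        solution.append((s, w))

# Local improvement: swap in a set if it outweighs everything it conflicts with.
improved = True
while improved:
    improved = False
    for s, w in sets:
        conflicts = [(t, wt) for t, wt in solution if not s.isdisjoint(t)]
        if (s, w) not in solution and w > sum(wt for _, wt in conflicts):
            solution = [x for x in solution if x not in conflicts] + [(s, w)]
            improved = True
print("packed weight:", sum(w for _, w in solution))
```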

19.
The optimization of stochastic linear problems, via scenario analysis, based on Benders decomposition requires appending feasibility and/or optimality cuts to the master problem until the iterative procedure reaches the optimal solution. The cuts are identified by solving the auxiliary submodels attached to the scenarios. In this work, we propose an algorithm named scenario Cluster Benders Decomposition (CBD) for dealing with feasibility cut identification in the Benders method for solving large-scale two-stage stochastic linear problems. The scenario tree is decomposed into a set of scenario clusters, and tighter feasibility cuts are obtained by solving the auxiliary submodel for each cluster instead of each individual scenario. The scenario-cluster scheme thus identifies tighter feasibility cuts that yield feasible second-stage decisions in reasonable computing time. Computational experience is reported using CPLEX as the solver of choice for the auxiliary LP submodels at each iteration of the CBD algorithm. The reported results show the favorable performance of the new approach over the traditional single-scenario-based Benders decomposition; it also outperforms the plain use of CPLEX for medium-large and large instances.
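Feasibility-cut identification can be sketched with a phase-1 LP: for fixed first-stage x, minimize the slack needed to make the second stage feasible; if positive slack remains, the dual multipliers define a cut lambda.(h - Tx) >= 0 for the master. In the cluster scheme, one subproblem stacks all scenario blocks of a cluster. The matrices, data, and the use of scipy/HiGHS marginals for the duals are assumptions:

```python
# Benders feasibility cut from a phase-1 LP:  min 1.s  s.t.  W y - s <= h - T x,
# y, s >= 0.  Positive optimum means infeasibility; duals give the cut.
import numpy as np
from scipy.optimize import linprog

def feasibility_cut(W, T, h, x):
    m, ny = W.shape
    c = np.concatenate([np.zeros(ny), np.ones(m)])
    A_ub = np.hstack([W, -np.eye(m)])
    res = linprog(c, A_ub=A_ub, b_ub=h - T @ x, bounds=(0, None), method="highs")
    if res.fun < 1e-9:
        return None                              # second stage feasible, no cut
    lam = -res.ineqlin.marginals                 # duals of the <= rows (assumption: HiGHS)
    return lam                                   # cut for the master: lam.(h - T x) >= 0

# A "cluster" subproblem: two scenarios stacked block-diagonally (toy data).
W1 = np.array([[1.0, 1.0]]); W2 = np.array([[1.0, 2.0]])
W = np.block([[W1, np.zeros((1, 2))], [np.zeros((1, 2)), W2]])
T = np.array([[-1.0], [-1.0]]); h = np.array([-3.0, -2.0])

lam = feasibility_cut(W, T, h, x=np.array([1.0]))
print("no cut needed" if lam is None else f"feasibility cut multipliers: {lam}")
```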

20.
The capacitated p-median problem (CPMP) consists of finding p nodes (the median nodes) minimizing the total distance to the other nodes of the graph, with the constraint that the total demand of the nodes assigned to each median does not exceed its given capacity. In this paper we propose a cutting plane algorithm, based on Fenchel cuts, which allows us to considerably reduce the integrality gap of hard CPMP instances. The formulation strengthened with Fenchel cuts is solved by a commercial MIP solver. Computational results show that this approach is effective in solving hard instances or considerably reducing their integrality gap.

