Found 20 similar documents (search time: 15 ms)
1.
Linear Programming (LP) problems with finite optimal value have a zero duality gap and a primal–dual strictly complementary optimal solution pair. On the other hand, there exist Semidefinite Programming (SDP) problems which have a nonzero duality gap (different primal and dual optimal values; not both infinite). The duality gap is assured to be zero if a constraint qualification, e.g., Slater’s condition (strict feasibility), holds. Measures of strict feasibility, also called distance to infeasibility, have been used in complexity analysis, and it is known that (near) loss of strict feasibility results in numerical difficulties. In addition, there exist SDP problems which have a zero duality gap but no strictly complementary primal–dual optimal solution. We refer to these problems as hard instances of SDP. The assumption of strict complementarity is essential for proofs of asymptotic superlinear and quadratic convergence rates. In this paper, we introduce a procedure for generating hard instances of SDP with a specified complementarity nullity (the dimension of the common nullspace of primal–dual optimal pairs). We then show, empirically, that the complementarity nullity correlates well with the observed local convergence rate and the number of iterations required to obtain optimal solutions to a specified accuracy; i.e., we show, even when Slater’s condition holds, that the loss of strict complementarity results in numerical difficulties. We include two new measures of hardness that correlate well with the complementarity nullity.
2.
In robust optimization, the uncertainty set is used to model all possible outcomes of uncertain parameters. In the classic setting, one assumes that this set is provided by the decision maker based on the data available to her. Only recently has it been recognized that the process of building useful uncertainty sets is in itself a challenging task that requires mathematical support. In this paper, we propose an approach that goes beyond the classic setting by assuming that multiple uncertainty sets are prepared, each with a weight showing the degree of belief that the set is a “true” model of uncertainty. We consider theoretical aspects of this approach and show that it is as easy to model as the classic setting. In an extensive computational study using a shortest path problem based on real-world data, we auto-tune uncertainty sets to the available data and show that, with regard to out-of-sample performance, the combination of multiple sets can give better results than each set on its own.
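The paper’s exact formulation is not given in the abstract; as a minimal sketch of one natural way to combine weighted uncertainty sets, the following evaluates a belief-weighted worst-case objective. The discrete scenario representation and the function name are illustrative assumptions, not the authors’ model:

```python
# Minimal sketch (assumption): each uncertainty set is represented by a
# finite list of cost-vector scenarios, and sets are combined by weighting
# their individual worst-case values by the decision maker's degree of belief.
import numpy as np

def weighted_worst_case(x, uncertainty_sets, weights):
    """Weighted combination of per-set worst-case costs for decision x.

    x: 0/1 incidence vector of a solution (e.g., edges of a path).
    uncertainty_sets: list of 2-D arrays, one row per scenario cost vector.
    weights: degrees of belief, one per set (nonnegative, summing to 1).
    """
    total = 0.0
    for U, w in zip(uncertainty_sets, weights):
        worst = max(float(c @ x) for c in U)  # worst case within this set
        total += w * worst
    return total

# Example: one candidate solution evaluated under two discrete sets.
U1 = np.array([[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]])
U2 = np.array([[3.0, 1.0, 1.0]])
x = np.array([1, 0, 1])
print(weighted_worst_case(x, [U1, U2], [0.7, 0.3]))
```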
3.
As most robust combinatorial min–max and min–max regret problems with discrete uncertainty sets are NP-hard, research on approximation algorithms and approximability bounds has been a fruitful area of recent work. A simple and well-known approximation algorithm is the midpoint method, where one takes the average over all scenarios and solves a problem of nominal type. Despite its simplicity, this method still gives the best-known bound for a wide range of problems, such as robust shortest path or robust assignment problems. In this paper, we present a simple extension of the midpoint method based on scenario aggregation, which improves the current best K-approximation result to an \((\varepsilon K)\)-approximation for any desired \(\varepsilon > 0\). Our method can be applied to min–max as well as min–max regret problems.
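As a minimal sketch of the midpoint method described above (assuming a finite scenario list and a solver for the nominal problem; the `solve_nominal` callback interface is a hypothetical stand-in):

```python
import numpy as np

def midpoint_method(scenarios, solve_nominal):
    """Midpoint heuristic for min-max robust problems over discrete scenarios.

    scenarios: 2-D array, one row per scenario cost vector.
    solve_nominal: callback that minimizes c @ x over the feasible set
                   for a single cost vector c (hypothetical interface).
    The returned solution's worst-case cost is within a factor K
    (the number of scenarios) of the robust optimum.
    """
    c_mid = np.mean(scenarios, axis=0)  # average over all scenarios
    return solve_nominal(c_mid)         # one nominal-type solve

# Example with a toy feasible set: choose exactly one of three items.
def solve_nominal(c):
    x = np.zeros_like(c)
    x[int(np.argmin(c))] = 1.0
    return x

S = np.array([[4.0, 1.0, 5.0], [1.0, 4.0, 5.0]])
x = midpoint_method(S, solve_nominal)
print(x, max(float(s @ x) for s in S))  # solution and its worst-case cost
```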
4.
We survey some recent advances in the field of polynomially solvable special cases of hard combinatorial optimization problems such as the travelling salesman problem, quadratic assignment problems, and Steiner tree problems. Such special cases can be found by considering special cost structures, the geometry of the problem, the special topology of the underlying graph structure, or by analyzing special algorithms. In particular, we stress the importance of recognition algorithms. We comment on open problems in this area and outline some lines for future research in this field.
This research has been supported by the Spezialforschungsbereich F 003 “Optimierung und Kontrolle”, Projektbereich Diskrete Optimierung.
6.
This paper presents a novel Information Extraction system able to generate complex instances from free text available on the Web. The approach is based on non-monotonic processing over ontologies and makes use of entity recognizers and disambiguators in order to adequately extract and combine instances and their relations. Experiments conducted over the archaeology research domain provide encouraging results in both efficiency and efficacy and suggest that the tool is suitable for application to other, similar Semantic Web resources.
7.
We improve the well-known result presented in Bertsimas and Sim (Math Program B98:49–71, 2003) regarding the computation of optimal solutions of Robust Combinatorial Optimization problems with interval uncertainty in the objective function coefficients. We also extend this improvement to a more general class of Combinatorial Optimization problems with interval uncertainty.
8.
Multi-start methods strategically sample the solution space of an optimization problem. The most successful of these methods have two phases that are alternated for a certain number of global iterations. The first phase generates a solution and the second seeks to improve the outcome. Each global iteration produces a solution that is typically a local optimum, and the best overall solution is the output of the algorithm. The interaction between the two phases creates a balance between search diversification (structural variation) and search intensification (improvement), to yield an effective means for generating high-quality solutions. This survey briefly sketches historical developments that have motivated the field, and then focuses on modern contributions that define the current state-of-the-art. We consider two categories of multi-start methods: memory-based and memoryless procedures. The former are based on identifying and recording specific types of information (attributes) to exploit in future constructions. The latter are based on order statistics of sampling and generate unconnected solutions. An interplay between the features of these two categories provides an inviting area for future exploration.
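As a minimal sketch of the memoryless two-phase template described above (the `construct` and `improve` callbacks are illustrative assumptions, not from any specific paper):

```python
import random

def multi_start(construct, improve, evaluate, iterations, seed=0):
    """Memoryless multi-start: alternate construction and improvement,
    keep the best local optimum found across all global iterations."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(iterations):
        x = construct(rng)   # phase 1: diversification (new start point)
        x = improve(x)       # phase 2: intensification (local search)
        val = evaluate(x)
        if val < best_val:
            best, best_val = x, val
    return best, best_val

# Toy usage: minimize f(x) = (x - 3)^2 over integers via random restarts
# followed by greedy +/-1 local search.
f = lambda x: (x - 3) ** 2

def construct(rng):
    return rng.randint(-100, 100)

def improve(x):
    while True:
        better = min((x - 1, x + 1, x), key=f)
        if better == x:
            return x
        x = better

print(multi_start(construct, improve, f, iterations=5))
```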
10.
Graph colorability (COL) is a typical constraint satisfaction problem in which phase transition phenomena (PTs) play an important role in the computational complexity of combinatorial search algorithms. PTs are significant and subtle because, in the PT region, extraordinarily hard problem instances are found, which may require exponential-order computational time to solve. To clarify the PT mechanism, many studies have been undertaken to produce very hard instances, most of them based on generate-and-test approaches. We propose a rather systematic, constructive algorithm that repeatedly embeds 4-critical graphs to generate arbitrarily large, extraordinarily hard 3-colorability instances. Using several actual coloring and constraint satisfaction algorithms, we demonstrate experimentally that the computational cost of solving our generated instances grows exponentially with the number of vertices.
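The 4-critical-graph embedding procedure itself is not spelled out in the abstract; as a minimal harness for the kind of cost measurement described, a plain backtracking 3-coloring solver that counts explored search nodes might look like this (all names are ours):

```python
def three_colorable(adj, node_counter):
    """Backtracking 3-coloring; adj maps vertex -> set of neighbours.
    node_counter is a one-element list used to count explored search nodes,
    a proxy for the exponential cost discussed above."""
    vertices = sorted(adj)
    color = {}

    def extend(i):
        node_counter[0] += 1
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(3):
            if all(color.get(u) != c for u in adj[v]):
                color[v] = c
                if extend(i + 1):
                    return True
                del color[v]
        return False

    return extend(0)

# Example: K4 (a 4-critical graph) is not 3-colorable.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
count = [0]
print(three_colorable(k4, count), "nodes explored:", count[0])
```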
11.
Mathematical Programming - We commence an algorithmic study of Bulk-Robustness, a new model of robustness in combinatorial optimization. Unlike most existing models, Bulk-Robust combinatorial...
14.
CHIP (Constraint Handling In Prolog) is a new logic programming language combining the declarative aspect of logic programming for stating search problems with the efficiency of constraint handling techniques for solving them. CHIP has been applied to many real-life problems in Operations Research and hardware design with an efficiency comparable to specific programs written in procedural languages. The main advantage of CHIP is the short development time of the programs and their great modifiability and extensibility. In this paper, we discuss the application of the finite domain part of CHIP to solving discrete combinatorial problems occurring in Operations Research. The basic mechanisms underlying CHIP are explained through simple examples. Solutions in CHIP of several real-life problems (e.g., cutting stock and warehouse location problems) are presented and compared with usual approaches, showing the versatility and the interest of the approach.
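CHIP’s own syntax is not shown in the abstract; as an analogous illustration of finite-domain constraint solving (a minimal Python sketch of ours, not CHIP code), consider plain backtracking over explicit variable domains with declaratively stated constraints:

```python
def solve(domains, constraints, assignment=None):
    """Tiny finite-domain backtracking solver.
    domains: dict var -> iterable of candidate values.
    constraints: list of (vars, predicate) pairs; a predicate is checked
    once all of its variables are assigned (no propagation shown)."""
    assignment = dict(assignment or {})
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    var = unassigned[0]
    for val in domains[var]:
        assignment[var] = val
        if all(not set(vs) <= assignment.keys()
               or pred(*(assignment[v] for v in vs))
               for vs, pred in constraints):
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Example: X, Y, Z in 1..3, all different, X < Y.
domains = {"X": [1, 2, 3], "Y": [1, 2, 3], "Z": [1, 2, 3]}
constraints = [(("X", "Y"), lambda x, y: x != y and x < y),
               (("X", "Z"), lambda x, z: x != z),
               (("Y", "Z"), lambda y, z: y != z)]
print(solve(domains, constraints))
```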
15.
We present the main results in the author’s Ph.D. thesis (Iori 2004), defended at the University of Bologna in April 2004 and supervised by S. Martello. The thesis is written in English and is available from the author upon request. It proposes exact and metaheuristic algorithms for solving some relevant combinatorial optimization problems, with particular emphasis on scheduling, two-dimensional cutting and packing, and capacitated vehicle routing. The performance of each algorithm is tested through extensive computational experiments and comparison with other approaches in the literature.
Received: 21 September 2004. AMS classification: 90-08, 90C27, 90C59
16.
This paper proposes an efficient computational technique for the optimal control of linear discrete-time systems subject to bounded disturbances with mixed linear constraints on the states and inputs. The problem of computing an optimal state feedback control policy, given the current state, is non-convex. A recent breakthrough has been the application of robust optimization techniques to reparameterize this problem as a convex program. While the reparameterized problem is theoretically tractable, the number of variables is quadratic in the number of stages or horizon length N and has no apparent exploitable structure, leading to a computational time of \(O(N^6)\) per iteration of an interior-point method. We focus on the case when the disturbance set is ∞-norm bounded or the linear map of a hypercube, and the cost function involves the minimization of a quadratic cost. Here we make use of state variables to regain a sparse problem structure that is related to the structure of the original problem; that is, the policy optimization problem may be decomposed into a set of coupled finite horizon control problems. This decomposition can then be formulated as a highly structured quadratic program, solvable by primal–dual interior-point methods in which each iteration requires \(O(N^3)\) time. This cubic iteration time can be guaranteed using a Riccati-based block factorization technique, which is standard in discrete-time optimal control. Numerical results are presented, using a standard sparse primal–dual interior point solver, that illustrate the efficiency of this approach.
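The two per-iteration costs quoted above are consistent with standard interior-point complexity counting (a sketch under the usual assumption that a dense Newton/KKT solve on \(n\) variables costs \(O(n^3)\); the \(O(N^6)\) figure is reconstructed from the text, which states a quadratic variable count and later a cubic iteration time):

```latex
% Dense reparameterized problem: n = O(N^2) decision variables, so
\[
  O\bigl(n^3\bigr) \;=\; O\bigl((N^2)^3\bigr) \;=\; O(N^6)
  \quad\text{per interior-point iteration,}
\]
% whereas exploiting the stagewise structure via the Riccati-based
% block factorization reduces this to
\[
  O(N^3) \quad\text{per iteration.}
\]
```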
17.
Robust optimization problems, which have uncertain data, are considered. We prove surrogate duality theorems for robust quasiconvex optimization problems and surrogate min–max duality theorems for robust convex optimization problems. We give necessary and sufficient constraint qualifications for surrogate duality and surrogate min–max duality, and show some examples in which such duality results are used effectively. Moreover, we obtain a surrogate duality theorem and a surrogate min–max duality theorem for semi-definite optimization problems in the face of data uncertainty.
18.
There are significant research opportunities in the integration of Machine Learning (ML) methods and Combinatorial Optimization Problems (COPs). In this work, we focus on metaheuristics to solve COPs that have an important learning component. These algorithms must explore a solution space and learn from the information they obtain in order to find high-quality solutions. Among the metaheuristics, we study Hyper-Heuristics (HHs), algorithms that, given a number of low-level heuristics, iteratively select and apply heuristics to a solution. The HH we consider has a Markov model to produce sequences of low-level heuristics, which we combine with a Multi-Armed Bandit Problem (MAB)-based method to learn its parameters. This work proposes several improvements to the HH metaheuristic that yield better learning when solving problem instances. Specifically, this is the first work in HHs to present Exponential Weights for Exploration and Exploitation (EXP3) as a learning method, an algorithm that is able to deal with adversarial settings. We also present a case study for the Vehicle Routing Problem with Time Windows (VRPTW), for which we include a list of low-level heuristics that have been proposed in the literature. We show that our algorithms can handle a large and diverse list of heuristics, illustrating that they can be easily configured to solve COPs of different nature. The computational results indicate that our algorithms are competitive methods for the VRPTW (2.16% gap on average with respect to the best known solutions), demonstrating the potential of these algorithms to solve COPs. Finally, we show how our algorithms can even detect low-level heuristics that do not contribute to finding better solutions to the problem.
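As a minimal sketch of the EXP3 update used as the learning component (the reward scaling to [0, 1] and the toy heuristic interface are illustrative assumptions; the update rule itself is the standard EXP3 algorithm):

```python
import math
import random

class EXP3:
    """EXP3 bandit for selecting among k low-level heuristics.
    Rewards must be scaled to [0, 1]."""
    def __init__(self, k, gamma=0.1, seed=0):
        self.k, self.gamma = k, gamma
        self.weights = [1.0] * k
        self.rng = random.Random(seed)

    def probabilities(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.k
                for w in self.weights]

    def select(self):
        return self.rng.choices(range(self.k),
                                weights=self.probabilities())[0]

    def update(self, arm, reward):
        # importance-weighted reward estimate keeps the update unbiased
        p = self.probabilities()[arm]
        x_hat = reward / p
        self.weights[arm] *= math.exp(self.gamma * x_hat / self.k)

# Toy usage: heuristic 2 is secretly best; EXP3 learns to favour it.
bandit = EXP3(k=3)
true_mean = [0.2, 0.5, 0.8]
for _ in range(2000):
    arm = bandit.select()
    reward = 1.0 if bandit.rng.random() < true_mean[arm] else 0.0
    bandit.update(arm, reward)
print([round(p, 2) for p in bandit.probabilities()])
```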
19.
Inverse multi-objective combinatorial optimization consists of finding a minimal adjustment of the objective functions’ coefficients such that a given set of feasible solutions becomes efficient. An algorithm is proposed for rendering a given feasible solution efficient; this is a simplified version of the inverse problem in which the cardinality of the set equals one. The adjustment is measured by the Chebyshev distance. It is shown how to build an optimal adjustment in linear time based on this distance, and why a binary search correctly determines the optimal distance. These results led us to develop an approach based on the resolution of mixed-integer linear programs. A second approach, based on branch-and-bound, is proposed to handle any distance function that can be linearized. Finally, the initial inverse problem is solved by a cutting plane algorithm.
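As a minimal sketch of the binary-search step (the feasibility oracle `becomes_efficient_within` is a hypothetical stand-in for the paper’s linear-time adjustment test, and a monotone oracle is assumed):

```python
def optimal_chebyshev_distance(becomes_efficient_within, hi, tol=1e-6):
    """Binary search for the smallest Chebyshev-distance budget such
    that the target solution can be made efficient.

    becomes_efficient_within(delta) -> bool must be monotone: if a
    budget delta works, any larger budget works too (hypothetical oracle).
    hi: any budget known to be sufficient.
    """
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if becomes_efficient_within(mid):
            hi = mid          # feasible: try a smaller adjustment
        else:
            lo = mid          # infeasible: need a larger adjustment
    return hi

# Toy oracle: suppose the true threshold is 0.37.
print(round(optimal_chebyshev_distance(lambda d: d >= 0.37, hi=10.0), 4))
```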
20.
We study the loss in objective value when an inaccurate objective is optimized instead of the true one, and show that “on average” this loss is very small, for an arbitrary compact feasible region.
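As a tiny numeric illustration of the phenomenon (our construction, not the paper’s: the feasible region is taken to be the unit simplex, so minimizing a linear objective reduces to picking the smallest coefficient, and the objective is perturbed by small Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, eps = 20, 1000, 0.05
losses = []
for _ in range(trials):
    c = rng.standard_normal(n)                 # true linear objective
    c_hat = c + eps * rng.standard_normal(n)   # inaccurate objective
    x_hat = np.argmin(c_hat)   # optimizer of the inaccurate objective
    x_star = np.argmin(c)      # optimizer of the true objective
    losses.append(c[x_hat] - c[x_star])        # loss in true objective value
print("average loss:", np.mean(losses))
```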