Similar Articles
20 similar articles found.
1.
An n-vertex graph is said to be decomposable if for any partition (λ1,…,λp) of the integer n, there exists a sequence (V1,…,Vp) of connected vertex-disjoint subgraphs with |Vi|=λi. In this paper, we focus on decomposable trees. We show that a decomposable tree has maximum degree at most 4. Moreover, each degree-4 vertex of a decomposable tree is adjacent to a leaf. This leads to a polynomial-time algorithm to decide whether a multipode (a tree with only one vertex of degree greater than 2) is decomposable. We also exhibit two families of decomposable trees: arbitrarily large trees with one vertex of degree 4, and trees with an arbitrary number of degree-3 vertices.
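The two structural results above (maximum degree at most 4, and every degree-4 vertex adjacent to a leaf) give a cheap necessary-condition test. Below is a minimal Python sketch, assuming the tree is supplied as an adjacency list; it checks only these necessary conditions and does not decide decomposability in general.

```python
def passes_necessary_conditions(adj):
    """Check two necessary conditions for a tree to be decomposable:
    (1) every vertex has degree <= 4;
    (2) every vertex of degree 4 is adjacent to a leaf.
    adj: dict mapping each vertex to the list of its neighbours (a tree).
    Passing the test does NOT imply the tree is decomposable.
    """
    for v, nbrs in adj.items():
        if len(nbrs) > 4:
            return False
        if len(nbrs) == 4 and not any(len(adj[u]) == 1 for u in nbrs):
            return False
    return True

# Example: the star K_{1,4} passes the necessary test (though the test is not sufficient).
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(passes_necessary_conditions(star))  # True
```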

2.
We study a class of mixed-integer programs for solving linear programs with joint probabilistic constraints from random right-hand side vectors with finite distributions. We present greedy and dual heuristic algorithms that construct and solve a sequence of linear programs. We provide optimality gaps for our heuristic solutions via the linear programming relaxation of the extended mixed-integer formulation of Luedtke et al. (2010) [13] as well as via lower bounds produced by their cutting plane method. While we demonstrate through an extensive computational study the effectiveness and scalability of our heuristics, we also prove that the theoretical worst-case solution quality for these algorithms is arbitrarily far from optimal. Our computational study compares our heuristics against both the extended mixed-integer programming formulation and the cutting plane method of Luedtke et al. (2010) [13]. Our heuristics efficiently and consistently produce solutions with small optimality gaps, while for larger instances the extended formulation becomes intractable and the optimality gaps from the cutting plane method increase to over 5%.

3.
George Dantzig created the simplex algorithm for linear programming, perhaps the most important algorithm developed in the 20th century. This paper traces a single historical thread: Dantzig’s work on linear programming and its application and extension to combinatorial optimization, and the investigations it has stimulated about the performance of the simplex algorithm and the intrinsic complexity of linear programming and combinatorial optimization.

4.
This paper presents two effective algorithms for clustering n entities into p mutually exclusive and exhaustive groups where the ‘size’ of each group is restricted. As its objective, the clustering model minimizes the sum of distances between each entity and a designated group median. Empirical results using both a primal heuristic and a hybrid heuristic-subgradient method for problems having n ≤ 100 (i.e. 10,100 binary variables) show that the algorithms locate close-to-optimal solutions without resorting to tree enumeration. The capacitated clustering model is applied to the problem of sales force territorial design.
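For reference, a standard capacitated p-median formulation has exactly the variable count quoted above (n² assignment variables plus n median variables, i.e. 10,100 binaries for n = 100). The paper’s model may differ in detail, so this is only a hedged sketch, with d_ij the distance from entity i to median j, w_i the size of entity i and Q an assumed common group capacity:

```latex
\min \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij}\,x_{ij}
\quad\text{s.t.}\quad
\sum_{j=1}^{n} x_{ij}=1 \;(\forall i),\qquad
\sum_{j=1}^{n} y_{j}=p,\qquad
\sum_{i=1}^{n} w_{i}\,x_{ij}\le Q\,y_{j} \;(\forall j),\qquad
x_{ij}\le y_{j},\; x_{ij},y_{j}\in\{0,1\}.
```

The capacity constraint is what makes the model “capacitated”; dropping it recovers the ordinary p-median problem.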

5.
This article presents algorithms for computing optima in decision trees with imprecise probabilities and utilities. In tree models involving uncertainty expressed as intervals and/or relations, the evaluation requires computing upper and lower bounds of the expected values. Already in its simplest form, computing a maximum of expected values leads to quadratic programming (QP) problems. Unfortunately, standard optimization methods based on QP (and BLP – bilinear programming) are too slow for the evaluation of decision trees in computer tools with interactive response times. Needless to say, the problems with computational complexity are even more pronounced in the multi-linear programming (MLP) problems arising from multi-level decision trees. Since standard techniques are not particularly useful for these purposes, other, non-standard algorithms must be used. The algorithms presented here enable user interaction in decision tools and are equally applicable to all multi-linear programming problems sharing the same structure as a decision tree.
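For intuition: when the utilities are fixed numbers and only the branch probabilities are interval-valued, the upper and lower bounds on the expected value of a one-level tree reduce to small linear programs over the probability simplex intersected with the interval box; the quadratic and multilinear difficulties described above arise when utilities are imprecise as well, when intervals are linked by relations, and in multi-level trees. A hedged scipy sketch of that linear special case (all names are illustrative, not from the article):

```python
import numpy as np
from scipy.optimize import linprog

def expected_value_bounds(utils, p_low, p_high):
    """Lower/upper bound on E[u] when each branch probability p_i lies in
    [p_low[i], p_high[i]] and the probabilities must sum to 1."""
    utils = np.asarray(utils, dtype=float)
    n = len(utils)
    A_eq, b_eq = np.ones((1, n)), [1.0]
    bounds = list(zip(p_low, p_high))
    lo = linprog(utils, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun      # minimize u.p
    hi = -linprog(-utils, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun    # maximize u.p
    return lo, hi

# Example: three outcomes with utilities 0, 50, 100 and loosely known probabilities.
print(expected_value_bounds([0, 50, 100], [0.2, 0.1, 0.1], [0.6, 0.5, 0.5]))
```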

6.
Shmuel Onn, Discrete Mathematics, 2009, 309(9): 2934–2936
The convex hull ψ_{n,n} of certain (n!)^2 tensors was considered recently in connection with graph isomorphism. We consider the convex hull ψ_n of the n! diagonals among these tensors. We show: (1) the polytope ψ_n is a face of ψ_{n,n}; (2) deciding if a graph G has a subgraph isomorphic to H reduces to optimization over ψ_n; (3) optimization over ψ_n reduces to optimization over ψ_{n,n}. In particular, this implies that the subgraph isomorphism problem reduces to optimization over ψ_{n,n}.

7.
8.
Multi-level production planning problems in which multiple items compete for the same resources frequently occur in practice, yet remain daunting in their difficulty to solve. In this paper, we propose a heuristic framework that can generate high quality feasible solutions quickly for various kinds of lot-sizing problems. In addition, unlike many other heuristics, it generates high quality lower bounds using strong formulations, and its simple scheme allows it to be easily implemented in the Xpress-Mosel modeling language. Extensive computational results from widely used test sets that include a variety of problems demonstrate the efficiency of the heuristic, particularly for challenging problems.

9.
10.
We give a bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity problem conditions by the point considered. When the point satisfies the linear inequalities of the complementarity problem, the residual consists of the complementarity condition plus its square root. This latter term is essential, and without it the error bound cannot hold. We also show that another natural residual that has been employed to bound errors for strictly monotone linear complementarity problems fails to bound errors for the monotone case considered here. Sponsored by the United States Army under contract No. DAAG29-80-C-0041. This material is based on research sponsored by National Science Foundation Grant DCR-8420963 and Air Force Office of Scientific Research Grant AFOSR-ISSA-85-00080.
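Written out in hedged notation (the symbols τ and S̄ below are illustrative, not necessarily the paper’s): for the monotone LCP of finding x ≥ 0 with Mx + q ≥ 0 and xᵀ(Mx+q) = 0, the bound described above takes the form

```latex
\operatorname{dist}\bigl(x,\bar S\bigr)\;\le\;\tau\Bigl(x^{\mathsf T}(Mx+q)\;+\;\sqrt{x^{\mathsf T}(Mx+q)}\Bigr)
\qquad\text{whenever } x\ge 0,\ Mx+q\ge 0,
```

where S̄ is the solution set and τ a condition constant depending only on M and q; as the abstract stresses, dropping the square-root term invalidates the bound.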

11.
12.
We prove polynomial-time solvability of a large class of clustering problems where a weighted set of items has to be partitioned into clusters subject to balancing constraints. The data points are weighted with respect to different features, and the clusters adhere to given lower and upper bounds on the total weight of their points with respect to each of these features. Further, the weight contribution of a point to a cluster can depend on the cluster it is assigned to. Our interest in these types of clustering problems is motivated by an application in land consolidation where the ability to perform this kind of balancing is crucial. Our framework maximizes an objective function that is convex in the summed-up utility of the items in each cluster. Despite the hardness of convex maximization and many related problems, for fixed dimension and number of clusters we show that our clustering model is solvable in time polynomial in the number of items if the weight-balancing restrictions are defined using vectors from a fixed, finite domain. We conclude our discussion with a new, efficient model and algorithm for land consolidation.

13.
Given a double round-robin tournament, the traveling umpire problem (TUP) consists of determining which games will be handled by each one of several umpire crews during the tournament. The objective is to minimize the total distance traveled by the umpires, while respecting constraints that include visiting every team at home, and not seeing a team or venue too often. We strengthen a known integer programming formulation for the TUP and use it to implement a relax-and-fix heuristic that improves the quality of 24 out of 25 best-known feasible solutions to instances in the TUP benchmark. We also improve all best-known lower bounds for those instances and, for the first time, provide lower bounds for instances with more than 16 teams.
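As a rough illustration of the relax-and-fix idea (not the paper’s TUP-specific strengthened formulation), the sketch below assumes a PuLP model whose binary variables have been grouped into ordered blocks, e.g. by tournament round; the block ordering, solver choice and rounding rule are all illustrative assumptions.

```python
import pulp

def relax_and_fix(prob, var_blocks):
    """Generic relax-and-fix skeleton over a PuLP model (illustrative only).

    var_blocks: lists of binary variables of `prob`, ordered e.g. by round.
    Each pass restores integrality for one block, keeps later blocks relaxed,
    solves, and then fixes the block at its solution values."""
    for block in var_blocks:            # start fully relaxed (bounds stay 0..1)
        for v in block:
            v.cat = pulp.LpContinuous
    for block in var_blocks:
        for v in block:
            v.cat = pulp.LpInteger      # integer with 0..1 bounds, i.e. binary
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for v in block:                 # fix this block before moving on
            val = int(round(v.value()))
            v.lowBound = v.upBound = val
    return prob
```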

14.
In this paper, we propose a new greedy-like heuristic method, which is primarily intended for the general multidimensional knapsack problem (MDKP) but also proves effective for the 0-1 MDKP. Our heuristic differs from existing greedy-like heuristics in two respects. First, existing heuristics rely on each item’s aggregate consumption of resources to make item selection decisions, whereas our heuristic uses the effective capacity, defined as the maximum number of copies of an item that can be accepted if the entire knapsack were used for that item alone, as the criterion for item selection. Second, other methods increment the value of each decision variable only by one unit, whereas our heuristic adds decision variables to the solution in batches and consequently improves computational efficiency significantly for large-scale problems. We demonstrate that the new heuristic significantly improves on the computational efficiency of existing methods and generates robust and near-optimal solutions. The new heuristic proves especially efficient for high-dimensional knapsack problems with small-to-moderate numbers of decision variables, usually considered “hard” MDKPs, for which no computationally efficient heuristic was previously available. Supported in part by the NSF grant DMI 9812994.
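A minimal sketch of the effective-capacity idea for the general (integer) MDKP follows. The selection score (profit times effective capacity) and the batch fraction are assumptions made for illustration only, since the abstract does not spell out the exact rules.

```python
def greedy_mdkp(values, weights, capacities, batch_frac=0.5):
    """Greedy heuristic for the general (integer) multidimensional knapsack.

    values[j]     -- profit per copy of item j (assumed positive)
    weights[i][j] -- integer consumption of resource i per copy of item j
    capacities[i] -- integer capacity of resource i
    The score (profit x effective capacity) and the batch size are
    illustrative assumptions; the paper's exact rules may differ."""
    m, n = len(capacities), len(values)
    remaining = list(capacities)
    x = [0] * n

    def effective_capacity(j):
        # max copies of j if the whole remaining knapsack were used for j alone
        caps = [remaining[i] // weights[i][j] for i in range(m) if weights[i][j] > 0]
        return min(caps) if caps else 0  # items consuming nothing are skipped

    while True:
        best_j, best_score, best_ec = -1, 0.0, 0
        for j in range(n):
            ec = effective_capacity(j)
            if ec >= 1 and values[j] * ec > best_score:
                best_j, best_score, best_ec = j, values[j] * ec, ec
        if best_j < 0:
            break
        batch = max(1, int(best_ec * batch_frac))   # add copies in batches
        x[best_j] += batch
        for i in range(m):
            remaining[i] -= weights[i][best_j] * batch
    return x

# Example: 2 resources, 3 item types.
print(greedy_mdkp([10, 7, 3], [[2, 1, 1], [3, 2, 1]], [10, 12]))
```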

15.
Approximation algorithms for Hamming clustering problems
We study Hamming versions of two classical clustering problems. The Hamming radius p-clustering problem (HRC) for a set S of k binary strings, each of length n, is to find p binary strings of length n that minimize the maximum Hamming distance between a string in S and the closest of the p strings; this minimum value is termed the p-radius of S and is denoted by ρ. The related Hamming diameter p-clustering problem (HDC) is to split S into p groups so that the maximum of the Hamming group diameters is minimized; this latter value is called the p-diameter of S. We provide an integer programming formulation of HRC which yields exact solutions in polynomial time whenever k is constant. We also observe that HDC admits straightforward polynomial-time solutions when k = O(log n) and p = O(1), or when p = 2. Next, by reduction from the corresponding geometric p-clustering problems in the plane under the L1 metric, we show that neither HRC nor HDC can be approximated within any constant factor smaller than two unless P = NP. We also prove that for any ε > 0 it is NP-hard to split S into at most p·k^{1/7−ε} clusters whose Hamming diameter does not exceed the p-diameter, and that solving HDC exactly is an NP-complete problem already for p = 3. Furthermore, we note that by adapting Gonzalez' farthest-point clustering algorithm [T. Gonzalez, Theoret. Comput. Sci. 38 (1985) 293–306], HRC and HDC can be approximated within a factor of two in time O(pkn). Next, we describe a 2^{O(pρ/ε)}·k^{O(p/ε)}·n^2-time (1+ε)-approximation algorithm for HRC. In particular, it runs in polynomial time when p = O(1) and ρ = O(log(k+n)). Finally, we show how to find a set L of O(p·log k) strings of length n such that for each string in S there is at least one string in L within distance (1+ε)ρ, for any constant 0 < ε < 1.
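The factor-two approximation mentioned above is Gonzalez’ farthest-point clustering adapted to Hamming distance; a short sketch (centres are chosen among the input strings, which suffices for the factor-2 guarantee):

```python
def hamming(a, b):
    return sum(ca != cb for ca, cb in zip(a, b))

def gonzalez_hamming(strings, p):
    """Farthest-point clustering (Gonzalez) under Hamming distance.

    Picks an arbitrary first centre, then p-1 times adds the string farthest
    from the current centres; assigning every string to its nearest centre
    yields a factor-2 approximation for both HRC and HDC, in O(p*k*n) time
    for k strings of length n."""
    centers = [strings[0]]
    dist = [hamming(s, centers[0]) for s in strings]
    for _ in range(1, p):
        far = max(range(len(strings)), key=lambda i: dist[i])
        centers.append(strings[far])
        dist = [min(d, hamming(s, centers[-1])) for d, s in zip(dist, strings)]
    labels = [min(range(len(centers)), key=lambda c: hamming(s, centers[c]))
              for s in strings]
    return centers, labels

S = ["0000", "0011", "1111", "1100"]
print(gonzalez_hamming(S, 2))
```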

16.
This paper addresses multi-depot location arc routing problems with vehicle capacity constraints. Two mixed integer programming models are presented for single and multi-depot problems. Relaxing these formulations leads to other integer programming models whose solutions provide good lower bounds for the total cost. A powerful insertion heuristic has been developed for solving the underlying capacitated arc routing problem. This heuristic is used together with a novel location–allocation heuristic to solve the problem within a simulated annealing framework. Extensive computational results demonstrate that the proposed algorithm can find high quality solutions. We also show that the potential cost saving resulting from adding location decisions to the capacitated arc routing problem is significant.
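For reference, the generic simulated-annealing acceptance scheme that such a framework typically wraps around problem-specific moves; the cooling schedule and parameters below are illustrative assumptions, not those of the paper.

```python
import math, random

def simulated_annealing(initial, cost, neighbour, t0=100.0, alpha=0.995, iters=20000):
    """Generic SA loop: accept worse neighbours with probability exp(-delta/T)
    under a geometric cooling schedule."""
    current = best = initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        delta = cost(cand) - c_cur
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, c_cur = cand, c_cur + delta
            if c_cur < c_best:
                best, c_best = current, c_cur
        t = max(t * alpha, 1e-9)        # cool down, avoid division by zero
    return best, c_best
```

In this paper’s setting, `neighbour` would presumably apply moves such as arc-route insertions or depot re-allocations produced by the insertion and location-allocation heuristics.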

17.
A code C ⊆ Z_2^n, where Z_2 = {0,1}, is said to be a binary μ-fold R-covering code if for any word v ∈ Z_2^n there are at least μ distinct codewords which differ from v in at most R coordinates. The size of the smallest binary μ-fold R-covering code of length n is denoted by K(n, R, μ). In this paper we use integer programming and exhaustive search to improve 57 lower bounds on K(n, R, μ) for 6 ≤ n ≤ 16, 1 ≤ R ≤ 4 and 2 ≤ μ ≤ 4.
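One natural integer-programming view of K(n, R, μ) is a μ-fold set-covering model over the 2^n words; the paper presumably works with relaxations, restrictions or symmetry reductions of formulations in this spirit, so this is only the basic form in hedged notation:

```latex
K(n,R,\mu)\;=\;\min\Bigl\{\sum_{c\in\mathbb{Z}_2^{\,n}} x_c \;:\;
\sum_{c\,:\,d_H(c,v)\le R} x_c \;\ge\; \mu \ \ \forall v\in\mathbb{Z}_2^{\,n},\ \
x_c\in\{0,1\}\Bigr\},
```

where d_H denotes Hamming distance; lower bounds can, for instance, be derived from linear programming relaxations of such a system.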

18.
The last decade has seen paper-and-pencil (P&P) tests being replaced by computerized adaptive tests (CATs) within many testing programs. A CAT may yield several advantages relative to a conventional P&P test. A CAT can determine which questions or test items to administer, allowing each test form to be tailored to a test taker’s skill level: subsequent items are chosen to match the capability of the test taker. By adapting to a test taker’s ability, a CAT can acquire more information about a test taker while administering fewer items. A multiple-stage adaptive test (MST) provides a means to implement a CAT that allows review before administration. The MST format is a hybrid between the conventional P&P and CAT formats. This paper presents mixed integer programming models for MST assembly problems. Computational results with commercial optimization software are reported and the advantages of the models are evaluated.

19.
We show the existence of a fully polynomial-time approximation scheme (FPTAS) for the problem of maximizing a non-negative polynomial over mixed-integer sets in convex polytopes, when the number of variables is fixed. Moreover, using a weaker notion of approximation, we show the existence of a fully polynomial-time approximation scheme for the problem of maximizing or minimizing an arbitrary polynomial over mixed-integer sets in convex polytopes, when the number of variables is fixed. A conference version of this article, containing a part of the results presented here, appeared in Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, Miami, FL, January 22–24, 2006, pp. 743–748. The first author gratefully acknowledges support from NSF grant DMS-0608785, a 2003 UC-Davis Chancellor’s fellow award, the Alexander von Humboldt foundation, and IMO Magdeburg. The remaining authors were supported by the European TMR network ADONET 504438.

20.
The blocks relocation problem (BRP) may be defined as follows: given a set of homogeneous blocks stored in a two-dimensional stock, which relocations are necessary to retrieve the blocks from the stock in a predefined order while minimizing the number of relocations? In this paper, we first prove NP-hardness of the BRP as well as of a special case, closing open research questions. Moreover, we propose different solution approaches. First, a mathematical model is presented that provides optimal solutions to the general BRP for small instances. To overcome this limitation, a realistic assumption taken from the literature is introduced, leading to a binary linear programming model; in terms of computational time, this approach is fast enough to solve medium-sized instances. In addition, we propose a simple heuristic based on a set of relocation rules. This heuristic is used to generate good-quality solutions for larger instances in very short computational time and is therefore proposed for problem instances where solutions are required (almost) immediately. The solution quality of the heuristic is measured against optimal solutions obtained with a state-of-the-art commercial solver, and both are compared with reference results from the literature.
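To make the “relocation rules” idea concrete, here is a small retrieval simulation with one commonly used rule; the rule and all names are a generic illustration and not necessarily the rule set of this paper.

```python
def retrieve_all(stacks, max_height):
    """Retrieve blocks 1..n in order, counting relocations (BRP simulation).

    stacks: list of lists, bottom to top, together holding the labels 1..n.
    Rule (generic, not necessarily the paper's): put a blocking block on a
    stack whose smallest label is still larger than it (tightest such stack),
    otherwise on the stack with the largest smallest label. Assumes a
    non-full destination stack always exists."""
    n = sum(len(s) for s in stacks)
    relocations = 0
    for target in range(1, n + 1):
        src = next(i for i, s in enumerate(stacks) if target in s)
        while stacks[src][-1] != target:
            b = stacks[src].pop()
            cands = [i for i, s in enumerate(stacks) if i != src and len(s) < max_height]
            key = lambda i: min(stacks[i]) if stacks[i] else n + 1
            good = [i for i in cands if key(i) > b]
            dest = min(good, key=key) if good else max(cands, key=key)
            stacks[dest].append(b)
            relocations += 1
        stacks[src].pop()   # retrieve the target block itself
    return relocations

# Two stacks with height limit 3; one relocation is needed to free block 1.
print(retrieve_all([[2, 1, 3], [4]], 3))  # -> 1
```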

