Similar Documents
20 similar documents found (search time: 31 ms)
1.
The field of cluster analysis is primarily concerned with the sorting of data points into different clusters so as to optimize a certain criterion. Rapid advances in technology have made it possible to address clustering problems via optimization theory. In this paper, we present a global optimization algorithm to solve the hard clustering problem, where each data point is to be assigned to exactly one cluster. The hard clustering problem is formulated as a nonlinear program, for which a tight linear programming relaxation is constructed via the Reformulation-Linearization Technique (RLT) in concert with additional valid inequalities that serve to defeat the inherent symmetry in the problem. This construct is embedded within a specialized branch-and-bound algorithm to solve the problem to global optimality. Pertinent implementation issues that can enhance the efficiency of the branch-and-bound algorithm are also discussed. Computational experience is reported using several standard data sets from the literature as well as synthetically generated larger problem instances. The results validate the robustness of the proposed algorithmic procedure and exhibit its dominance over the popular k-means clustering technique. Finally, a heuristic procedure that obtains good-quality solutions with relatively little computational effort is also described.
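
For orientation, the classical minimum sum-of-squares model behind hard clustering can be written as follows; this is the standard textbook formulation, not necessarily the exact nonlinear program or RLT relaxation used in the paper:

```latex
\min_{w,\,c}\ \sum_{i=1}^{n}\sum_{j=1}^{k} w_{ij}\,\lVert x_i - c_j\rVert^2
\quad \text{s.t.} \quad \sum_{j=1}^{k} w_{ij} = 1 \;\; (i=1,\dots,n),
\qquad w_{ij} \in \{0,1\},
```

where x_1, ..., x_n are the data points, c_j the cluster centers, and w_ij = 1 exactly when point i is assigned to cluster j. The symmetry that the valid inequalities defeat is the invariance of this model under permutations of the cluster indices j.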

2.
The paper proposes a new exact approach, based on a Branch, Bound, and Remember (BB&R) algorithm that uses the Cyclic Best First Search (CBFS) strategy, for the 1|r_i|∑U_i scheduling problem, a single-machine scheduling problem where the objective is to find a schedule with the minimum number of tardy jobs. The search space is reduced using new and improved dominance properties and tighter upper bounds, based on a new dynamic programming algorithm. Computational results establish the effectiveness of the BB&R algorithm with CBFS for a broad spectrum of problem instances. In particular, this algorithm was able to solve all problem instances with up to 300 jobs, while the best previously known algorithms only solve problem instances with up to 200 jobs. Furthermore, the BB&R algorithm with CBFS runs one to two orders of magnitude faster than the current best known algorithm on comparable instances.
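
As background, the relaxation of this problem in which all release dates coincide, 1||∑U_i, is solved exactly by the classical Moore-Hodgson algorithm; a minimal sketch follows. This is illustrative only: with arbitrary release dates r_i the problem is NP-hard, which is why branch-and-bound is needed.

```python
# Moore-Hodgson algorithm for the special case 1||sum U_i (all jobs
# released at time 0). Processes jobs in earliest-due-date order and,
# whenever the schedule becomes infeasible, drops the longest job so far.
import heapq

def min_tardy_jobs(jobs):
    """jobs: list of (processing_time, due_date).
    Returns the minimum number of tardy jobs."""
    on_time = []          # max-heap of processing times (stored negated)
    t = 0                 # completion time of the current on-time set
    tardy = 0
    for p, d in sorted(jobs, key=lambda j: j[1]):   # EDD order
        heapq.heappush(on_time, -p)
        t += p
        if t > d:         # infeasible: remove the longest scheduled job
            t += heapq.heappop(on_time)             # negated, so subtracts
            tardy += 1
    return tardy

print(min_tardy_jobs([(2, 3), (4, 5), (3, 7), (1, 8)]))  # -> 1
```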

3.
This paper describes a Diversification-Driven Tabu Search (D2TS) algorithm for solving unconstrained binary quadratic problems. D2TS is distinguished by the introduction of a perturbation-based diversification strategy guided by long-term memory. The performance of the proposed algorithm is assessed on the largest instances from the OR-Library (up to 2500 variables) as well as still larger instances from the literature (up to 7000 variables). The computational results show that D2TS is highly competitive in terms of both solution quality and computational efficiency relative to some of the best-performing heuristics in the literature.
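
To make the setting concrete, here is a generic one-flip tabu search skeleton for maximizing x'Qx over binary x. It is not the authors' D2TS: the perturbation-based diversification and long-term memory are omitted, and all names are illustrative.

```python
import random

def tabu_search_ubqp(Q, iters=2000, tenure=15, seed=0):
    """Maximize x'Qx over x in {0,1}^n with a one-flip tabu search."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    best_x, best_f = x[:], fx
    tabu_until = [0] * n            # iteration until which flipping i is tabu

    def gain(i):                    # change in x'Qx if bit i is flipped
        d = 1 - 2 * x[i]
        return d * (Q[i][i] + sum((Q[i][j] + Q[j][i]) * x[j]
                                  for j in range(n) if j != i))

    for it in range(iters):
        move, move_gain = None, None
        for i in range(n):
            g = gain(i)
            # A tabu move is admissible if it beats the best value (aspiration).
            if (tabu_until[i] <= it or fx + g > best_f) and \
               (move is None or g > move_gain):
                move, move_gain = i, g
        if move is None:
            break
        x[move] = 1 - x[move]       # worsening moves are allowed by design
        fx += move_gain
        tabu_until[move] = it + tenure
        if fx > best_f:
            best_x, best_f = x[:], fx
    return best_x, best_f

Q = [[2, -1, 0], [-1, 3, -2], [0, -2, 1]]
print(tabu_search_ubqp(Q))
```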

4.
The Node Weighted Steiner Tree Problem (NW-STP) is a generalization of the Steiner Tree Problem. A Lagrangean heuristic presented in EngevallS:StrLBN:98, based on the work in Lucena:92, solves the problem by relaxing an exponential family of generalized subtour elimination constraints and taking into account only the violated ones as the computation proceeds. In EngevallS:StrLBN:98 the computational results refer to complete graphs with up to one hundred vertices. In this paper, we present a branch-and-bound algorithm based on this formulation. Its performance on the instances from the literature confirms the effectiveness of the approach. Experimentation on a newly generated set of benchmark problems, closer to real-world applications, shows that the approach remains valid, provided that suitable refinements of the bounding procedures and a preprocessing phase are introduced. The algorithm solves to optimality all of the considered instances with up to one thousand vertices, with the exception of 11 hard instances derived from the literature on a similar problem, the Prize Collecting Steiner Tree Problem.

5.
We introduce the prize-collecting generalized minimum spanning tree problem. In this problem a network of node clusters needs to be connected via a tree architecture using exactly one node per cluster. Nodes in each cluster compete by offering a payment for selection. This problem is NP-hard, and we describe several heuristic strategies, including local search and a genetic algorithm. Further, we present a simple and computationally efficient branch-and-cut algorithm. Our computational study indicates that our branch-and-cut algorithm finds optimal solutions for networks with up to 200 nodes within two hours of CPU time, while the heuristic search procedures rapidly find near-optimal solutions for all of the test instances.

6.
7.
The field of cluster analysis is primarily concerned with the partitioning of data points into different clusters so as to optimize a certain criterion. Rapid advances in technology have made it possible to address clustering problems via optimization theory. In this paper, we present a global optimization algorithm to solve the fuzzy clustering problem, where each data point may be assigned to several clusters, with a membership grade for each assignment that reflects the likelihood of the data point belonging to that cluster. The fuzzy clustering problem is formulated as a nonlinear program, for which a tight linear programming relaxation is constructed via the Reformulation-Linearization Technique (RLT) in concert with additional valid inequalities. This construct is embedded within a specialized branch-and-bound (B&B) algorithm to solve the problem to global optimality. Computational experience is reported using several standard data sets from the literature as well as synthetically generated larger problem instances. The results validate the robustness of the proposed algorithmic procedure and exhibit its dominance over the popular fuzzy c-means algorithmic technique and the commercial global optimizer BARON.
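
For reference, the fuzzy c-means baseline mentioned above alternates, for a fixed fuzzifier m > 1, between the standard membership and center updates (textbook form, independent of the paper's exact model):

```latex
u_{ij} = \left[ \sum_{l=1}^{k}
  \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_l \rVert} \right)^{\frac{2}{m-1}}
\right]^{-1},
\qquad
c_j = \frac{\sum_{i=1}^{n} u_{ij}^{\,m}\, x_i}{\sum_{i=1}^{n} u_{ij}^{\,m}},
```

where u_ij is the membership grade of point x_i in cluster j. This iteration only finds local optima, which is what motivates the global optimization approach of the paper.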

8.
This paper presents comparative computational results using three decomposition algorithms on a battery of instances drawn from two different applications. In order to preserve the commonalities among the algorithms in our experiments, we have designed a testbed which is used to study instances arising in server location under uncertainty and strategic supply chain planning under uncertainty. Insights related to alternative implementation choices leading to more efficient implementations, benchmarks for serial processing, and the scalability of the methods are also presented. The computational experience demonstrates the promising potential of the disjunctive decomposition (D2) approach towards solving several large-scale problem instances from the two application areas. Furthermore, the study shows that convergence of the D2 methods for stochastic combinatorial optimization (SCO) is in fact attainable in practice, since the methods scale well with the number of scenarios.
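
Both applications fit the two-stage stochastic programming mold with a finite scenario set, which is the problem class such decomposition methods address; a generic statement (not the paper's specific models) is:

```latex
\min_{x}\ c^{\top}x + \sum_{s=1}^{S} p_s\, Q_s(x)
\quad\text{s.t.}\quad Ax \ge b,\ x \in \{0,1\}^{n},
\qquad
Q_s(x) = \min_{y \ge 0}\ \{\, q_s^{\top} y \;:\; W y \ge r_s - T_s x \,\},
```

where scenario s occurs with probability p_s and Q_s(x) is the recourse cost of the first-stage decision x under that scenario. Decomposition exploits the fact that the S recourse problems separate once x is fixed.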

9.
When dealing with extremely hard global optimization problems, i.e. problems with a large number of variables and a huge number of local optima, heuristic procedures are the only possible choice. In this situation, lacking any possibility of guaranteeing global optimality for most problem instances, it is quite difficult to establish rules for discriminating among different algorithms. We think that, in order to judge the quality of new global optimization methods, different criteria might be adopted, e.g.:
1.  efficiency – measured in terms of the computational effort necessary to obtain the putative global optimum
2.  robustness – measured in terms of the “percentage of successes”, i.e. the fraction of runs in which the algorithm, restarted with different seeds or starting points, ends up at the putative global optimum
3.  discovery capability – measured in terms of the possibility that an algorithm discovers, for the first time, a putative optimum for a given problem that is better than the best known so far.
Of course the third criterion cannot be considered compulsory, as it might be the case that, for a given problem, the best known putative global optimum is indeed the global one, so that no algorithm will ever be able to discover a better one. In this paper we present a computational framework based on a population-based stochastic method in which different candidate solutions for a single problem are maintained in a population which evolves in such a way as to guarantee sufficient diversity among solutions. This diversity enforcement is obtained through the definition of a dissimilarity measure whose definition depends on the specific problem class. We show in the paper that, for some well-known and particularly hard test classes, the proposed method satisfies the above criteria, in that it is both more efficient and more robust than other published approaches. Moreover, for the very hard problem of determining the minimum-energy conformation of a cluster of particles which interact through a short-range Morse potential, our approach was able to discover four new putative optima.
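
A minimal sketch of the diversity-enforcement idea, under the assumption that the population stores (solution, objective) pairs for a minimization problem; `dissimilarity` and `d_min` stand for the problem-class-dependent measure and threshold and are hypothetical names, not the paper's API:

```python
def try_insert(population, candidate, f_candidate, dissimilarity, d_min):
    """Admit `candidate` only if it is sufficiently different from every
    stored solution; otherwise it may replace the similar one it beats."""
    for idx, (sol, f_sol) in enumerate(population):
        if dissimilarity(candidate, sol) < d_min:
            # Too similar to an existing solution: keep only the better one.
            if f_candidate < f_sol:
                population[idx] = (candidate, f_candidate)
            return
    population.append((candidate, f_candidate))
```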

10.
We study both weighted and unweighted unconstrained two-dimensional guillotine cutting problems. We develop a hybrid approach which combines two heuristics from the literature. The first one (DH) uses a tree-search procedure introducing two strategies: depth-first search and hill-climbing. The second one (KD) is based on a series of one-dimensional knapsack problems solved by dynamic programming. The DH/KD algorithm starts with a good initial lower bound obtained by using the KD algorithm. At each level of the tree search, the proposed algorithm also uses the KD algorithm to construct new lower bounds, and solves another one-dimensional knapsack problem to construct refined upper bounds. The resulting algorithm can be seen as a generalization of the two heuristics, and solves large problem instances very well within a small computational time. Our algorithm is compared to Morabito et al.'s algorithm (for the unweighted case) and to Beasley's [2] approach (for the weighted case) on examples taken from the literature as well as on randomly generated instances.
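
The kind of one-dimensional knapsack recursion that such an approach solves by dynamic programming looks as follows (illustrative; the paper's exact subproblems differ in how piece sizes and values are derived from the cutting pattern):

```python
def knapsack(capacity, lengths, values):
    """Best value packable into a strip of size `capacity` when piece i
    has size lengths[i] and value values[i]; unlimited copies allowed."""
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for l, v in zip(lengths, values):
            if l <= c:
                best[c] = max(best[c], best[c - l] + v)
    return best[capacity]

print(knapsack(10, [3, 4, 5], [4, 5, 7]))  # two pieces of size 5 -> 14
```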

11.
For a number of computational search problems, the mere existence of a polynomial-time algorithm for the problem implies that such an algorithm is constructively known. Some instances of such self-witnessing polynomial-time complexity are presented. Our main result demonstrates this property for the problem of computing the prime factorization of a positive integer, based on a lemma which shows that a certificate for primality or compositeness can be constructed for a positive integer p in deterministic polynomial time, given a complete factorization of p - 1.
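
The key ingredient behind such primality certificates is the classical Lucas test: given a complete factorization of p - 1, a witness a certifies that p is prime if a^(p-1) ≡ 1 (mod p) while a^((p-1)/q) ≢ 1 (mod p) for every prime q dividing p - 1. A sketch of the verification step (not the paper's full construction):

```python
def is_lucas_certificate(p, a, prime_factors_of_p_minus_1):
    """Check that witness `a` proves p prime, given the primes dividing p-1."""
    if pow(a, p - 1, p) != 1:
        return False
    return all(pow(a, (p - 1) // q, p) != 1
               for q in set(prime_factors_of_p_minus_1))

# 2 is a primitive root mod 13, and 13 - 1 = 2^2 * 3:
print(is_lucas_certificate(13, 2, [2, 2, 3]))  # -> True
```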

12.
The Capacitated Facility Location Problem (CFLP) asks to locate a set of facilities, subject to capacity constraints, so as to satisfy the demands of a set of clients at minimum cost. A multi-source version of the problem is considered, in which each client can be served by more than one facility. In this paper we present a reformulation of the CFLP based on Mixed Dicut Inequalities, a family of minimum knapsack inequalities of a mixed type, containing both binary and continuous (flow) variables. By aggregating flow variables, any Mixed Dicut Inequality turns into a binary minimum knapsack inequality with a single continuous variable. We refer to the convex hull of the feasible solutions of this minimum knapsack problem as the Mixed Dicut polytope. We observe that the Mixed Dicut polytope is a rich source of valid inequalities for the CFLP: basic families of valid CFLP inequalities, like Variable Upper Bound, Cover, Flow Cover and Effective Capacity Inequalities, are valid for the Mixed Dicut polytope. Furthermore, new families of valid inequalities for the CFLP can be derived from the lifting procedures studied for the minimum knapsack problem with a single continuous variable. To deal with large-scale instances, we have developed a Branch-and-Cut-and-Price algorithm, where the separation algorithm consists of the complete enumeration of the facets of the Mixed Dicut polytope for a set of candidate Mixed Dicut Inequalities. We observe that our procedure returns inequalities that dominate most of the known classes of inequalities presented in the literature. We report on computational experience with instances with up to 1000 facilities and 1000 clients to validate the approach.
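
For context, the standard multi-source CFLP formulation that such valid inequalities strengthen is (textbook form, not the paper's reformulation):

```latex
\min\ \sum_{i} f_i\, y_i + \sum_{i}\sum_{j} c_{ij}\, x_{ij}
\quad\text{s.t.}\quad
\sum_{i} x_{ij} = d_j \ \ \forall j, \qquad
\sum_{j} x_{ij} \le s_i\, y_i \ \ \forall i, \qquad
x_{ij} \ge 0,\ \ y_i \in \{0,1\},
```

where f_i is the opening cost and s_i the capacity of facility i, d_j the demand of client j, and x_ij the flow from facility i to client j; the model is multi-source because a client's demand may be split among several open facilities.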

13.
The purpose of this article is to present and solve the Biobjective Travelling Purchaser Problem, which consists of determining a route through a subset of markets in order to collect a set of products, minimizing the travel distance and the purchasing cost simultaneously. The most convenient purchase of each product in the visited markets is easily computed once the route has been determined. Therefore, this problem contains a finite set of solutions (one for each route), and it belongs to the field of Biobjective Combinatorial Optimization. It is formulated here as a Biobjective Mixed Integer Linear Programming model with an exponential number of valid inequalities, and this model is used within a cutting plane algorithm to generate the set of all supported and non-supported efficient points in the objective space. A variant of the algorithm computes only supported efficient points. For each efficient point in the objective space, exactly one Pareto optimal solution in the decision space is computed by solving a single-objective problem. Each of these single-objective problems, in turn, is solved by a specific branch-and-cut approach. A heuristic improvement based on saving previously generated cuts in a common cut-pool structure has also been developed with the aim of speeding up the algorithm. Results based on benchmark instances from the literature show that the common cut-pool heuristic is very useful, and that the proposed algorithm manages to solve instances containing up to 100 markets and 200 different products. The general procedure can be extended to address other biobjective combinatorial optimization problems whenever a branch-and-cut algorithm is available to solve a single-objective linear combination of the criteria.
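
The distinction between supported and non-supported efficient points is standard (not specific to this paper): supported points are optima of a weighted-sum scalarization

```latex
\min_{x \in X}\ \lambda\, f_1(x) + (1 - \lambda)\, f_2(x), \qquad \lambda \in (0, 1),
```

for some weight λ, whereas non-supported efficient points are Pareto optimal yet minimize no such weighted sum (they lie inside the convex hull of the supported points in the objective space), so they must be generated by other means, such as the cutting plane machinery described above.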

14.
The Vehicle Routing Problem (VRP) is one of the most well-studied problems in operations research, both for real-life applications and for scientific research purposes. During the last 50 years a number of different formulations have been proposed, together with an even greater number of algorithms for the solution of the problem. In this paper, the VRP is formulated as a problem with two decision levels. In the first level, the decision maker assigns customers to the vehicles, checking the feasibility of the constructed routes (vehicle capacity constraints) but without taking into account the sequence in which the vehicles will visit the customers. In the second level, the decision maker finds the optimal routes for these assignments. The decision maker of the first level, once the cost of each routing has been calculated in the second level, chooses the best assignment. Based on this formulation, a bilevel genetic algorithm is proposed. In the first level of the proposed algorithm, a genetic algorithm is used for calculating the population of the most promising assignments of customers to vehicles. In the second level, a Traveling Salesman Problem (TSP) is solved, independently for each member of the population and for each vehicle's assignment. The algorithm was tested on two sets of benchmark instances and gave very satisfactory results. In both sets of instances the average gap from the best known solutions is less than 1%. More specifically, on the set of 14 classic instances proposed by Christofides the gap is 0.479%, and on the second set of 20 large-scale vehicle routing problems it is 0.826%. The algorithm ranks tenth among the 36 best-known and most effective algorithms in the literature for the first set of instances, and sixth among 16 algorithms for the second set. The computational time of the algorithm is significantly lower than that of other heuristic and metaheuristic algorithms because the Expanding Neighborhood Search strategy is used.
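
A minimal sketch of how a second-level evaluation of a first-level assignment can work: each vehicle's customer set is routed independently, here by a crude nearest-neighbour TSP heuristic standing in for the paper's TSP solver (all names illustrative):

```python
def route_cost(depot, customers, dist):
    """Nearest-neighbour TSP tour cost for one vehicle's customers."""
    cost, cur, todo = 0.0, depot, set(customers)
    while todo:
        nxt = min(todo, key=lambda c: dist[cur][c])   # closest unvisited
        cost += dist[cur][nxt]
        cur = nxt
        todo.remove(nxt)
    return cost + dist[cur][depot]                    # return to depot

def evaluate_assignment(depot, assignment, dist):
    """assignment: dict vehicle -> list of customer indices."""
    return sum(route_cost(depot, custs, dist)
               for custs in assignment.values() if custs)

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(evaluate_assignment(0, {1: [1, 3], 2: [2]}, dist))  # -> 34.0
```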

15.
The method fast inverse using nested dissection (FIND) was proposed to calculate the diagonal entries of the inverse of a large sparse symmetric matrix. In this paper, we show how the FIND algorithm can be generalized to calculate off-diagonal entries of the inverse that correspond to 'short' geometric distances within the computational mesh of the original matrix. The idea is to extend the downward pass in FIND that eliminates all nodes outside of each node cluster. Our advanced downward pass eliminates all nodes outside of each 'node cluster pair', for a subset of all node cluster pairs. The complexity depends on how far the entry (i,j) is from the main diagonal. In the extension of the algorithm, all entries of the inverse that correspond to vertex pairs that are geometrically closer than a predefined length limit l are calculated. More precisely, let α be the total number of nodes in a two-dimensional square mesh. We show that our algorithm can compute O(α^(3/2 + 2ε)) entries of the inverse in O(α^(3/2 + 2ε)) time, where l = O(α^(1/4 + ε)) and 0 ≤ ε ≤ 1/4. Numerical examples are given to illustrate the efficiency of the proposed method.

16.
We present a biased random-key genetic algorithm (BRKGA) for finding small covers of computationally difficult set covering problems that arise in computing the 1-width of incidence matrices of Steiner triple systems. Using a parallel implementation of the BRKGA, we compute improved covers for the two largest instances in a standard set of test problems used to evaluate solution procedures for this problem. The new covers for instances A405 and A729 have sizes 335 and 617, respectively. On all other, smaller instances our algorithm consistently produces covers of optimal size.
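
The defining feature of a BRKGA is that it evolves vectors of random keys in [0,1) and maps each vector to a feasible solution through a problem-specific decoder. A sketch of one plausible decoder for set covering (illustrative; the paper's decoder may differ): columns are considered in increasing key order and added while they cover new rows.

```python
def decode(keys, columns, n_rows):
    """keys: one random key per column; columns[j] = set of rows covered
    by column j. Returns the chosen columns in order of selection."""
    order = sorted(range(len(keys)), key=lambda j: keys[j])
    cover, uncovered = [], set(range(n_rows))
    for j in order:
        if uncovered & columns[j]:      # column still covers something new
            cover.append(j)
            uncovered -= columns[j]
        if not uncovered:
            break
    return cover                        # feasible iff `uncovered` is empty

cols = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(decode([0.7, 0.1, 0.3, 0.9], cols, 4))  # -> [1, 2, 0]
```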

17.
In this paper, we study the k-Travelling Repairmen Problem, where the objective is to minimize the sum of the clients' waiting times before receiving service. This problem is relevant in applications that involve the distribution of humanitarian aid in disaster areas, the delivery and collection of perishable products, and personnel transportation, where reaching demand points quickly and fairly to perform service is a priority. This paper presents a new mixed integer formulation and a simple and efficient metaheuristic algorithm. The proposed formulation consumes less computational time and allows instances more than three times larger than those handled by the previous formulation published in the literature to be solved to optimality, even outperforming a recently published Branch-and-Price-and-Cut algorithm for this problem. The proposed metaheuristic algorithm solved to optimality 386 out of 389 tested instances in a very short computational time. For larger instances, the algorithm was assessed against the best results reported in the literature for the Cumulative Capacitated Vehicle Routing Problem.

18.
We describe a deterministic algorithm for computing the diameter of a finite set of points in R^3, that is, the maximum distance between any pair of points in the set. The algorithm runs in optimal time O(n log n) for a set of n points. The first optimal, but randomized, algorithm for this problem was proposed more than 10 years ago by Clarkson and Shor [11] in their ground-breaking paper on geometric applications of random sampling. Our algorithm is relatively simple, except for a procedure by Matoušek [25] for the efficient deterministic construction of epsilon-nets. This work improves on previous deterministic algorithms by Ramos [31] and Bespamyatnikh [7], both with running time O(n log^2 n). The diameter algorithm appears to be the last one in Clarkson and Shor's paper that up to now had no deterministic counterpart with a matching running time.
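
For comparison with the O(n log n) algorithm, the problem itself is trivial in quadratic time; the following baseline is handy for checking a fast implementation on small inputs:

```python
# Trivial O(n^2) diameter of a point set in R^3.
from itertools import combinations
from math import dist

def diameter_bruteforce(points):
    """Maximum Euclidean distance over all pairs of points."""
    return max(dist(p, q) for p, q in combinations(points, 2))

print(diameter_bruteforce([(0, 0, 0), (1, 2, 2), (3, 0, 0)]))  # ~3.4641
```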

19.
Airline companies seek to solve the problem of determining a minimum-total-cost assignment of crews to a pre-determined flight schedule, called the Crew Pairing Problem (CPP). Most of the existing studies focus on the CPP of North American airlines, which differs widely from that of most European airline companies in terms of the objective function, the flight structure, and the planning horizon. In this study, we develop an optimization-driven heuristic algorithm that can efficiently handle large-scale instances of the CPP that must be solved on a monthly basis. We perform computational experiments using flight schedules of a European airline company to test the performance of the solution method. Our computational results demonstrate that our algorithm is able to provide high-quality solutions to monthly instances with up to 27,000 flight legs.

20.
In the Capacitated Clustering Problem (CCP), a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centers that minimizes the total scatter of the points allocated to them. In this paper a new constructive method, a general framework to improve the performance of greedy constructive heuristics, and a problem-space search procedure for the CCP are proposed. The constructive heuristic finds patterns of natural subgrouping in the input data using the concept of point density. Elements of adaptive computation and periodic construction-deconstruction concepts are implemented within the constructive heuristic to develop a general framework for building efficient heuristics. The problem-space search procedure is based on perturbations of the input data, for which a controlled perturbation strategy and intensification and diversification strategies are developed. The implemented algorithms are compared with existing methods on a standard set of benchmarks and on new sets of large-sized instances. The results illustrate the strengths of our algorithms in terms of solution quality and computational efficiency.
