Similar Literature
20 similar documents found.
1.
In this paper we analyze several approaches to the Maximum Independent Set (MIS) problem in hypergraphs with degree bounded by a parameter Δ. Since independent sets in hypergraphs can be weak or strong, we denote by MIS (respectively MSIS) the problem of finding a maximum weak (respectively strong) independent set in a hypergraph. We propose a general technique that reduces the worst-case analysis of certain algorithms on hypergraphs to their analysis on ordinary graphs. This technique allows us to show that the greedy algorithm for MIS that corresponds to the classical greedy set cover algorithm has a performance ratio of (Δ+1)/2. It also allows us to apply results on local search algorithms on graphs to obtain a (Δ+1)/2 approximation for weighted MIS and a (Δ+3)/5−ε approximation for the unweighted case. We improve the bound in the weighted case to ⌈(Δ+1)/3⌉ using a simple partitioning algorithm. We also consider another natural greedy algorithm for MIS that adds vertices of minimum degree and achieves only a ratio of Δ−1, significantly worse than on ordinary graphs. For MSIS, we give two variations of the basic greedy algorithm and describe a family of hypergraphs on which both algorithms approach the bound of Δ.
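To make the greedy approach discussed above concrete, here is a minimal Python sketch (not the paper's algorithm, and not carrying its approximation guarantee): it scans vertices in nondecreasing degree order and keeps a vertex whenever the set remains weakly independent, i.e., no hyperedge becomes fully contained in it. The function name and the toy hypergraph are illustrative only.

    # Hedged sketch of a degree-ordered greedy for a weak independent set in a
    # hypergraph; degrees are computed once and not updated, unlike a true
    # minimum-degree greedy.
    def greedy_weak_independent_set(num_vertices, hyperedges):
        degree = [sum(1 for e in hyperedges if v in e) for v in range(num_vertices)]
        selected = set()
        for v in sorted(range(num_vertices), key=lambda u: degree[u]):
            candidate = selected | {v}
            # weak independence: no hyperedge may lie entirely inside the chosen set
            if all(not e.issubset(candidate) for e in hyperedges):
                selected.add(v)
        return selected

    # toy hypergraph: two hyperedges of size 3 on four vertices
    print(greedy_weak_independent_set(4, [{0, 1, 2}, {1, 2, 3}]))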

2.
In this paper we present a genetic algorithm-based heuristic for the weighted maximum independent set problem (IS); the proposed approach also handles several equivalent combinatorial optimization problems. We introduce two modifications to the basic genetic algorithm: (i) a crossover, called the two-fusion operator, which creates two new, distinct children, and (ii) the replacement of the mutation operator by a heuristic-feasibility operator tailored specifically to the weighted independent set problem. The performance of our algorithm was evaluated on several randomly generated instances of the weighted independent set problem and on instances from the DIMACS Workshop for the particular case of the unweighted maximum clique problem. Computational results show that the proposed approach produces high-quality solutions within reasonable computational times. The algorithm is also easily parallelizable, which is one of its important features.
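As a rough illustration only, the sketch below shows a stripped-down genetic search for weighted independent set in which infeasible offspring are repaired greedily; the paper's two-fusion crossover and heuristic-feasibility operator are more elaborate and are not reproduced here. All names and parameter values are illustrative.

    import random

    def ga_weighted_is(adj, weights, pop_size=30, generations=300):
        n = len(weights)

        def repair(candidate):
            # crude feasibility operator: keep heavy vertices that stay independent
            kept = set()
            for v in sorted(candidate, key=lambda u: -weights[u]):
                if all(u not in kept for u in adj[v]):
                    kept.add(v)
            return kept

        def fitness(s):
            return sum(weights[v] for v in s)

        pop = [repair(set(random.sample(range(n), random.randint(1, n))))
               for _ in range(pop_size)]
        for _ in range(generations):
            a, b = random.sample(pop, 2)
            child = repair(a | b)          # combine two parents, then repair
            pop.sort(key=fitness)
            if fitness(child) > fitness(pop[0]):
                pop[0] = child             # replace the current worst member
        return max(pop, key=fitness)

    # usage: a path on four vertices; optimum is {0, 3} with weight 8
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(ga_weighted_is(adj, [4, 1, 1, 4]))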

3.
We consider local Markov chain Monte Carlo algorithms for sampling from the weighted distribution of independent sets with activity λ, where the weight of an independent set I is λ^|I|. A recent result established that Gibbs sampling mixes rapidly when sampling this distribution on graphs of maximum degree d for λ < λc(d), where λc(d) is the critical activity for uniqueness of the Gibbs measure (i.e., for decay of correlations with distance in the weighted distribution over independent sets) on the d-regular infinite tree. We show that for d ≥ 3 and λ just above λc(d), with high probability over random d-regular bipartite graphs, any local Markov chain Monte Carlo algorithm takes exponential time before getting close to the stationary distribution. Our results provide a rigorous justification for "replica" method heuristics. These heuristics were invented in theoretical physics and are used to derive predictions about Gibbs measures on random graphs in terms of Gibbs measures on trees. A major theoretical challenge in recent years has been to provide rigorous proofs of the correctness of such predictions; our results establish such proofs for the hard-core model on bipartite graphs. We conjecture that λc is in fact the exact threshold for this computational problem, i.e., that for λ > λc it is NP-hard to approximate the above weighted sum over independent sets to within a factor polynomial in the size of the graph.
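For readers unfamiliar with the chain being analyzed, the following is a minimal sketch of single-site Glauber (Gibbs) dynamics for the hard-core model with activity lam, whose stationary distribution weights an independent set I by lam^|I|. It illustrates the kind of local Markov chain covered by the result; the graph and parameter below are illustrative, and nothing here reproduces the paper's slow-mixing construction.

    import random

    def glauber_step(adj, occupied, lam):
        """One heat-bath update: pick a uniformly random vertex; if a neighbour is
        occupied it must be empty, otherwise occupy it with probability lam/(1+lam)."""
        v = random.randrange(len(adj))
        if any(u in occupied for u in adj[v]):
            occupied.discard(v)
        elif random.random() < lam / (1.0 + lam):
            occupied.add(v)
        else:
            occupied.discard(v)
        return occupied

    # usage: a 4-cycle with activity 1.0 (uniform distribution over independent sets)
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    state = set()
    for _ in range(10000):
        state = glauber_step(adj, state, 1.0)
    print(state)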

4.
A k-cluster in a graph is an induced subgraph on k vertices that maximizes the number of edges. Both the k-cluster problem and the k-dominating set problem are NP-complete for general graphs. In this paper we investigate the complexity status of these problems on various subclasses of perfect graphs. In particular, we examine comparability graphs, chordal graphs, bipartite graphs, split graphs, cographs and k-trees. For example, it is shown that the k-cluster problem is NP-complete for both bipartite and chordal graphs, and the independent k-dominating set problem is NP-complete for bipartite graphs. Furthermore, where the k-cluster problem is polynomially solvable, we study the weighted and connected versions as well. Similarly, we also consider the minimum k-dominating set problem on families that admit polynomial-time k-dominating set algorithms.
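To make the k-cluster definition concrete, here is a minimal brute-force sketch (illustrative only, and obviously exponential in general, which is consistent with the NP-completeness results above): it enumerates all k-vertex subsets and returns one inducing the most edges.

    from itertools import combinations

    def k_cluster_bruteforce(vertices, edges, k):
        best, best_edges = None, -1
        for subset in combinations(vertices, k):
            s = set(subset)
            count = sum(1 for u, v in edges if u in s and v in s)  # induced edges
            if count > best_edges:
                best, best_edges = s, count
        return best, best_edges

    # usage: a 4-cycle with one chord, k = 3
    print(k_cluster_bruteforce(range(4), [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 3))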

5.
An even factor in a digraph is a vertex-disjoint collection of directed cycles of even length and directed paths. An even factor is called independent if it satisfies a certain matroid constraint. The problem of finding an independent even factor of maximum size is a common generalization of the nonbipartite matching and matroid intersection problems. In this paper, we present a primal-dual algorithm for the weighted independent even factor problem in odd-cycle-symmetric weighted digraphs. Cunningham and Geelen have shown that this problem is solvable via valuated matroid intersection. Their method yields a combinatorial algorithm running in O(n³γ + n⁶m) time, where n and m are the numbers of vertices and edges, respectively, and γ is the time for an independence test. In contrast, by combining the weighted even factor and independent even factor algorithms, our algorithm works more directly and runs in O(n⁴γ + n⁵) time. The algorithm is fully combinatorial, and thus provides a new dual integrality theorem that commonly extends the total dual integrality theorems for matching and matroid intersection.

6.
This paper investigates the problem of robust L2–L∞ filtering for continuous-time switched systems under asynchronous switching. When there is asynchronous switching between the filter and the system, sufficient conditions, based on the average dwell time approach, are derived for the existence of a linear filter that guarantees the filtering error system is exponentially stable with a prescribed weighted L2–L∞ performance; the filter parameters can be obtained by solving a set of matrix inequalities. Finally, a numerical example is given to illustrate the effectiveness of the proposed method.
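For reference, a commonly used form of the weighted L2–L∞ performance index mentioned above is sketched below in LaTeX; the exact weighting used in the paper may differ, so this should be read as an assumed standard formulation rather than the authors' definition.

    % weighted L2--L_infty performance (assumed standard form, zero initial conditions):
    % e(t) is the filtering error, w(t) the disturbance, \gamma the prescribed level,
    % and \alpha > 0 a decay rate tied to the average dwell time of the switching signal.
    \[
      \sup_{t \ge 0} \, e(t)^{\mathsf{T}} e(t)
        \;\le\; \gamma^{2} \int_{0}^{\infty} e^{-\alpha s}\, w(s)^{\mathsf{T}} w(s)\, \mathrm{d}s
    \]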

7.
In traditional edge searching one tries to clean all of the edges in a graph using the fewest searchers, under the assumption that each edge of the graph initially has a weight equal to one. In this paper we modify the problem and introduce the Weighted Edge Searching Problem by considering graphs with arbitrary positive integer weights assigned to their edges. We give bounds on the weighted search number in terms of related graph parameters, including pathwidth. We characterize the graphs for which two searchers are sufficient to clear all edges. We show that for every weighted graph the minimum number of searchers needed for a not-necessarily-monotonic weighted edge search strategy is also enough for a monotonic weighted edge search strategy, in which each edge is cleaned only once. This result proves the NP-completeness of the problem.

8.
A well-known industrial application that allows controllable processing times is manufacturing on CNC machines. For each turning operation, for example, there is a nonlinear relationship between the manufacturing cost and the required processing time on a CNC turning machine. If we consider the total manufacturing cost (F1) and the total weighted completion time (F2) objectives simultaneously on a single CNC machine, making appropriate processing-time decisions is as critical as making job-sequencing decisions. We first give an effective model for the problem of minimizing F1 subject to a given F2 level and deduce some optimality properties for this problem. Based on these properties, we propose a heuristic algorithm to generate an approximate set of efficient solutions. Our computational results indicate that the proposed algorithm performs better than the GAMS/MINOS commercial solver in terms of both solution quality and computational requirements: the average CPU time is only 8% of that required by GAMS/MINOS.

9.
An apple Ak is the graph obtained from a chordless cycle Ck of length k ≥ 4 by adding a vertex that has exactly one neighbor on the cycle. The class of apple-free graphs is a common generalization of claw-free graphs and chordal graphs, two classes enjoying many attractive properties, including polynomial-time solvability of the maximum weight independent set problem. Recently, Brandstädt et al. showed that this property extends to the class of apple-free graphs. In the present paper, we study a further generalization of this class, graphs without large apples: these are (Ak, Ak+1, . . .)-free graphs for values of k strictly greater than 4. The complexity of the maximum weight independent set problem is unknown even for k = 5. By exploring the structure of graphs without large apples, we discover a sufficient condition for claw-freeness of such graphs. We show that the condition is satisfied by bounded-degree and apex-minor-free graphs of sufficiently large tree-width. This implies an efficient solution to the maximum weight independent set problem for those graphs without large apples that either have bounded vertex degree or exclude a fixed apex graph as a minor.

10.
The nonlinear convex programming problem of finding the minimum covering weighted ball of a given finite set of points in ℝ^n is solved by generating a finite sequence of subsets of the points and by finding the minimum covering weighted ball of each subset in the sequence until all points are covered. Each subset has at most n + 1 points and is affinely independent. The radii of the covering weighted balls are strictly increasing. The minimum covering weighted ball of each subset is found by using a directional search along either a ray or a circular arc, starting at the solution to the previous subset. The step size is computed explicitly at each iteration.

11.
O(n³) algorithms to solve the weighted domination and weighted independent domination problems in permutation graphs, and an O(n²) algorithm to solve the cardinality domination problem in permutation graphs, are presented.

12.
We develop data-dependent worst-case bounds for three simple greedy algorithms for the maximum weighted independent set problem, applied to maximum weighted set packing. We exploit the property that the generated output attains at least a certain weight; these weight quantities are a function of the individual weights corresponding to the vertices of the problem. Using an argument based on linear programming duality, we develop a priori bounds that are a function of the minimum guaranteed weight quantities, the highest average reward for a ground item, and the cardinality of the ground set. This extends the current bounds, which are only a function of the maximum vertex degree in the associated conflict graph. Examples are given that show the benefits of incorporating this data-dependent information into the bounds.
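As a point of reference for the kind of greedy being analyzed, here is a minimal sketch of one natural weight-ordered greedy for weighted set packing (it is not claimed to be one of the paper's three algorithms, and the data are illustrative): sets are scanned in nonincreasing weight order and kept whenever they are disjoint from everything already packed, which is exactly a greedy independent set in the associated conflict graph.

    def greedy_set_packing(weighted_sets):
        """weighted_sets: list of (weight, frozenset of ground items)."""
        packed, covered, total = [], set(), 0.0
        for w, s in sorted(weighted_sets, key=lambda ws: -ws[0]):
            if covered.isdisjoint(s):      # no conflict with previously packed sets
                packed.append((w, s))
                covered |= s
                total += w
        return total, packed

    # usage
    sets = [(5.0, frozenset({1, 2})), (4.0, frozenset({2, 3})), (3.0, frozenset({3, 4}))]
    print(greedy_set_packing(sets))   # packs the first and third sets, total weight 8.0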

13.
In this paper, we address the problem of complex object tracking within the particle filter framework, which essentially amounts to estimating high-dimensional distributions by a sequential Monte Carlo algorithm. For this purpose, we first exploit Dynamic Bayesian Networks to determine conditionally independent subspaces of the object's state space, which allows us to perform the particle filter's propagations and corrections independently over small spaces. Second, we propose a swapping process to transform the weighted particle set provided by the update step of the particle filter into a "new particle set" that better focuses on the high peaks of the posterior distribution. This new methodology, called Swapping-Based Partitioned Sampling, is proved to be mathematically sound and is successfully tested and validated on synthetic video sequences for single and multiple articulated object tracking.
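For orientation, the sketch below shows one generic propagate-weight-resample cycle of a particle filter on a toy one-dimensional model; the swapping process and the partitioning over conditionally independent subspaces described above are not reproduced, and all model choices are illustrative.

    import math
    import random

    def particle_filter_step(particles, transition, likelihood, observation):
        """particles: list of states; transition(x) -> next state; likelihood(z, x) -> weight."""
        propagated = [transition(x) for x in particles]                 # propagation
        weights = [likelihood(observation, x) for x in propagated]      # correction
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # multinomial resampling concentrates particles on high-posterior regions
        return random.choices(propagated, weights=weights, k=len(particles))

    # usage: 1-D random-walk state observed with Gaussian noise
    transition = lambda x: x + random.gauss(0.0, 0.5)
    likelihood = lambda z, x: math.exp(-0.5 * (z - x) ** 2)
    particles = [random.gauss(0.0, 1.0) for _ in range(200)]
    particles = particle_filter_step(particles, transition, likelihood, observation=1.2)
    print(sum(particles) / len(particles))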

14.
In Ahlswede et al. [Discrete Math. 273(1-3) (2003) 9-21] we posed a series of extremal (set system) problems under dimension constraints. In the present paper, we study one of them: the intersection problem. The geometrical formulation of our problem is as follows. Given integers 0 ≤ t, k ≤ n, determine or estimate the maximum number of (0,1)-vectors in a k-dimensional subspace of the Euclidean n-space ℝ^n, such that the inner product ("intersection") of any two is at least t. We are also interested in the restricted (or uniform) case of the problem, namely, the problem considered for the (0,1)-vectors of the same weight ω. The paper consists of two parts, which concern similar questions but are essentially independent with respect to the methods used. In Part I, we consider the unrestricted case of the problem. Surprisingly, in this case the problem can be reduced to a weighted version of the intersection problem for systems of finite sets. A general conjecture for this problem is proved for the cases mentioned in Ahlswede et al. [Discrete Math. 273(1-3) (2003) 9-21]. We also consider a diametric problem under dimension constraint. In Part II, we study the restricted case and solve the problem for t = 1 and k < 2ω, and also for any fixed 1 ≤ t ≤ ω and k large.

15.
In the classical sequential assignment problem, "machines" are to be allocated sequentially to "jobs" so as to maximize the expected total return, where the return from allocating job j to machine k is the product of the value x_j of the job and the weight p_k of the machine. The set of m machines and their weights are given ahead of time, but the n jobs arrive in sequential order, and their values are usually treated as independent, identically distributed random variables from a known univariate probability distribution with known parameter values. In this paper, we consider a rank-based version of the sequential selection and assignment problem that minimizes the sum of weighted ranks of jobs and machines. The so-called "secretary problem" is shown to be a special case of our sequential assignment problem (i.e., m = 1). Owing to its distribution-free property, our rank-based assignment strategy can be applied to various managerial decision problems such as machine scheduling, job interviews, kidney allocation for transplantation, and emergency evacuation planning for patients in a mass-casualty situation.
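Since the classical secretary problem is cited as the m = 1 special case, the following minimal sketch of the well-known observe-then-commit rule (reject roughly the first n/e candidates, then take the first one better than all of them) may help fix ideas; it is not the paper's rank-based assignment strategy, and the simulation is illustrative.

    import math
    import random

    def secretary_rule(values):
        n = len(values)
        cutoff = max(1, int(n / math.e))          # observation phase of about n/e candidates
        best_seen = max(values[:cutoff])
        for v in values[cutoff:]:
            if v > best_seen:                     # first candidate beating the sample
                return v
        return values[-1]                         # otherwise forced to take the last one

    # rough check of the ~37% probability of picking the overall best candidate
    trials = 10000
    wins = sum(secretary_rule(random.sample(range(100), 100)) == 99 for _ in range(trials))
    print(wins / trials)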

16.
In the capacitated p-median problem with a single-source constraint, also known as the capacitated clustering problem, a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centres that minimizes the total scatter of the points allocated to these clusters. In this paper, a (λ, μ)-interchange neighbourhood based on the concept of λ-interchange of points restricted to μ-adjacent clusters is proposed. Structural properties of centres are identified and exploited to derive special data structures for their efficient evaluation. Different search and selection strategies, including variable neighbourhood search descent with respect to the μ-nearest points, are investigated. The most efficient strategies are then embedded in a guided construction search metaheuristic framework, based either on a periodic local search procedure or on a greedy random adaptive search procedure, to solve the problem. Computational experience reported on a standard set of benchmarks demonstrates the competitive performance of the proposed algorithms compared to the best-known procedures in the literature in terms of solution quality and computational requirements.

17.
A priori estimates of the solution to the Dirichlet problem and of its first derivatives, in terms of weighted Lebesgue norms, are obtained for linear and quasilinear equations with degeneracy from A_p Muckenhoupt classes.

18.
This paper deals with the problem of fault estimation for a class of switched nonlinear systems of neutral type. The nonlinearities are assumed to satisfy global Lipschitz conditions and appear in both the state and measured-output equations. By employing a switched observer-based fault estimator, the problem is formulated as an H∞ filtering problem. Sufficient delay-dependent conditions for the existence of the H∞ fault estimator (H∞-FE) are given in terms of certain matrix inequalities based on the average dwell time approach. In addition, by using the cone complementarity algorithm, the observer gain matrices are obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is provided to demonstrate the effectiveness of the proposed approach.

19.
Let G be an undirected graph on n vertices and let S(G) be the set of all real symmetric n×n matrices whose nonzero off-diagonal entries occur in exactly the positions corresponding to the edges of G. The inverse inertia problem for G asks which inertias can be attained by a matrix in S(G). We give a complete answer to this question for trees in terms of a new family of graph parameters, the maximal disconnection numbers of a graph. We also give a formula for the inertia set of a graph with a cut vertex in terms of inertia sets of proper subgraphs. Finally, we give an example of a graph that is not inertia-balanced, which settles an open problem from the October 2006 AIM Workshop on Spectra of Families of Matrices described by Graphs, Digraphs and Sign Patterns. We also determine some restrictions on the inertia set of any graph.

20.
In the Capacitated Clustering Problem (CCP), a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centres that minimises the total scatter of the points allocated to them. In this paper a new constructive method, a general framework for improving the performance of greedy constructive heuristics, and a problem-space search procedure for the CCP are proposed. The constructive heuristic finds patterns of natural subgrouping in the input data using the concept of density of points. Elements of adaptive computation and periodic construction–deconstruction concepts are implemented within the constructive heuristic to develop a general framework for building efficient heuristics. The problem-space search procedure is based on perturbations of the input data, for which a controlled perturbation strategy and intensification and diversification strategies are developed. The implemented algorithms are compared with existing methods on a standard set of benchmarks and on new sets of large-sized instances. The results illustrate the strengths of our algorithms in terms of solution quality and computational efficiency.
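To make the CCP objective and capacity constraint concrete, the sketch below greedily assigns each point to the nearest centre that still has spare capacity and reports the resulting total scatter; this is only a toy assignment step under assumed one-dimensional data and fixed centres, not the constructive heuristic or problem-space search described above.

    def ccp_greedy_assign(points, weights, centres, capacity):
        load = {c: 0.0 for c in centres}
        assignment, scatter = {}, 0.0
        # place heavier points first so large demands are less likely to be stranded
        for i in sorted(range(len(points)), key=lambda j: -weights[j]):
            feasible = [c for c in centres if load[c] + weights[i] <= capacity]
            if not feasible:
                return None                       # this greedy pass found no feasible assignment
            c = min(feasible, key=lambda centre: abs(points[i] - centre))
            assignment[i] = c
            load[c] += weights[i]
            scatter += abs(points[i] - c)         # 1-D distance stands in for the scatter term
        return scatter, assignment

    # usage: four 1-D points, two centres, capacity 5 per cluster
    print(ccp_greedy_assign([0.0, 1.0, 4.0, 5.0], [2, 3, 3, 2], [1.0, 4.5], 5.0))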
