Similar Literature (20 records found)
1.
This article introduces a new approach to studying difference sets via their additive properties. We introduce the concept of special subsets, which are interesting combinatorial objects in their own right, but also provide a mechanism for measuring additive regularity. Skew Hadamard difference sets are given special attention, and the structure of their special subsets leads to several results on multipliers, including a categorisation of the full multiplier group of an abelian skew Hadamard difference set. We also count the number of ways to write elements as a product of any number of elements of a skew Hadamard difference set.
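As a concrete companion to the definitions (not taken from the article itself), the sketch below checks the skew Hadamard property in Z_p: D and -D must partition the nonzero elements, and every nonzero element must occur equally often as a difference. It verifies the classical Paley construction (nonzero quadratic residues mod p, p ≡ 3 mod 4); the function names are our own.

```python
from collections import Counter

def is_skew_hadamard_difference_set(D, p):
    """Check D in Z_p: (i) D and -D partition the nonzero elements, and
    (ii) every nonzero element occurs equally often as a difference d1 - d2."""
    D = set(D)
    negD = {(-d) % p for d in D}
    if D & negD or D | negD != set(range(1, p)):
        return False
    diffs = Counter((d1 - d2) % p for d1 in D for d2 in D if d1 != d2)
    return len(diffs) == p - 1 and len(set(diffs.values())) == 1

# Paley construction: nonzero quadratic residues mod p, with p ≡ 3 (mod 4)
p = 19
D = {pow(x, 2, p) for x in range(1, p)}
print(is_skew_hadamard_difference_set(D, p))  # True
```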

2.
In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which comprises a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
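A minimal sketch of the first step, assuming interval-valued evaluations and one common possibility-degree formula for comparing intervals; the paper's exact dominance degree may differ.

```python
def possibility_geq(a, b):
    """Possibility degree that interval a = (a_lo, a_hi) is at least b = (b_lo, b_hi).
    One standard definition; an assumption, not necessarily the paper's formula."""
    (al, ah), (bl, bh) = a, b
    width = (ah - al) + (bh - bl)
    if width == 0:
        return 1.0 if al >= bl else 0.0
    return min(1.0, max(0.0, (ah - bl) / width))

def valued_dominance(x, y):
    """Degree to which object x dominates object y: the minimum, over all
    criteria, of the per-criterion possibility degrees."""
    return min(possibility_geq(a, b) for a, b in zip(x, y))

# Two objects evaluated on two gain-type criteria by intervals
x = [(3.0, 5.0), (2.0, 4.0)]
y = [(2.0, 4.0), (1.0, 3.0)]
print(valued_dominance(x, y))  # 0.75
```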

3.
In this paper we introduce an algorithm for creating polyhedral approximations of certain kinds of digital objects in three-dimensional space. The objects are sets of voxels represented as strongly connected subsets of an abstract cell complex. The proposed algorithm generates the convex hull of a given object and then modifies the hull by recursively generating convex hulls of subsets of the given voxel set or of the background voxels. The result of this method is a polyhedron that separates object voxels from background voxels. Both the objects processed by this algorithm and the background voxel components inside their convex hulls are restricted to genus 0. The second aim of this paper is to present some practical improvements to the discussed convex hull algorithm to reduce computation time.
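A sketch of the first stage only, assuming voxels are given by their integer centre coordinates and using scipy's ConvexHull; the recursive carving of misclassified background voxels, which is the heart of the paper's method, is omitted.

```python
import numpy as np
from scipy.spatial import ConvexHull

def voxel_convex_hull(voxels):
    """Convex hull of the voxel centres: the algorithm's starting polyhedron."""
    pts = np.asarray(voxels, dtype=float)
    hull = ConvexHull(pts)
    return pts[hull.vertices], hull.equations  # vertices and facet planes

def background_inside_hull(hull_eqs, candidates, eps=1e-9):
    """Background voxels strictly inside the hull: the subsets on which the
    algorithm would recurse to carve the final separating polyhedron."""
    pts = np.asarray(candidates, dtype=float)
    # equations store [normal | offset] with normal.x + offset <= 0 inside
    inside = (pts @ hull_eqs[:, :3].T + hull_eqs[:, 3] < -eps).all(axis=1)
    return pts[inside]

object_voxels = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2), (2, 2, 2)]
verts, eqs = voxel_convex_hull(object_voxels)
print(background_inside_hull(eqs, [(1, 1, 1), (3, 3, 3)]))  # only (1,1,1)
```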

4.
We study the complete set packing problem (CSPP), where the family of feasible subsets may include all possible combinations of objects. This setting arises in applications such as combinatorial auctions (for selecting optimal bids) and cooperative game theory (for finding optimal coalition structures). Although the set packing problem has been well studied in the literature, where exact and approximation algorithms can solve very large instances with up to hundreds of objects and thousands of feasible subsets, these methods do not extend to the CSPP, since the number of feasible subsets is exponentially large. Formulating the CSPP as an MILP and solving it directly, using CPLEX for example, becomes intractable for problems with more than 20 objects. We propose a new mathematical formulation for the CSPP that directly leads to an efficient algorithm for finding feasible set packings (upper bounds). We also propose a new formulation for finding tighter lower bounds compared to LP relaxation and develop an efficient method for solving the corresponding large-scale MILP. We test the algorithm with the winner determination problem in spectrum auctions, the coalition structure generation problem in coalitional skill games, and a number of other simulated problems that appear in the literature.
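To make the feasibility structure of the CSPP concrete, here is a greedy stand-in (not the paper's algorithm): it enumerates every subset, which is exactly the exponential blow-up the abstract describes, so it is viable only for toy instances; the valuation function is hypothetical.

```python
from itertools import combinations

def greedy_complete_set_packing(objects, value):
    """Greedy feasible packing for the CSPP: enumerate all candidate subsets
    (exponential, hence toy-sized only), then repeatedly take the best-value
    subset disjoint from everything chosen so far."""
    candidates = [frozenset(c)
                  for r in range(1, len(objects) + 1)
                  for c in combinations(objects, r)]
    candidates.sort(key=value, reverse=True)
    chosen, used = [], set()
    for c in candidates:
        if not (c & used):       # disjointness keeps the packing feasible
            chosen.append(c)
            used |= c
    return chosen

# Toy superadditive valuation: a bonus for grouping objects 0 and 1 together
value = lambda s: len(s) + (2 if {0, 1} <= s else 0)
print(greedy_complete_set_packing(range(4), value))
```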

5.
In this paper we handle the general problem of finding q (> 1) central relations on a set of objects that best fit the information contained in a finite number of given relations on that set. The proposed CAR (clusterwise aggregation of relations) algorithm allows one to consider the well-known situation of determining a single central relation as a special case (q = 1), and exploits the fact that representing appropriately selected subsets of relations by different central relations can reveal whether distinct clusters or segments of relations exist in the given set of relations. Two examples demonstrate the usefulness of the suggested approach.
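A k-means-style sketch of clusterwise aggregation, assuming relations are sets of ordered pairs and fit is measured by symmetric-difference distance, for which the majority relation is a best single centre; initialization and tie handling are simplified relative to the paper's CAR algorithm.

```python
def sym_diff(r1, r2):
    return len(r1 ^ r2)

def majority_relation(relations):
    """Centre minimizing total symmetric-difference distance: keep a pair
    iff it appears in more than half of the cluster's relations."""
    pairs = set().union(*relations)
    half = len(relations) / 2
    return frozenset(p for p in pairs if sum(p in r for r in relations) > half)

def car(relations, q, iters=20):
    """Alternate assignment to the nearest central relation and
    recomputation of each centre (a simplified CAR-style loop)."""
    centres = relations[:q]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(q)]
        for r in relations:
            k = min(range(q), key=lambda i: sym_diff(r, centres[i]))
            clusters[k].append(r)
        centres = [majority_relation(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

rels = [frozenset({(1, 2), (2, 3)}), frozenset({(1, 2)}),
        frozenset({(3, 1)}), frozenset({(3, 1), (2, 3)})]
print(car(rels, q=2)[0])
```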

6.
Isomorph-Free Exhaustive Generation
We describe a very general technique for generating families of combinatorial objects without isomorphs. It applies to almost any class of objects for which an inductive construction process exists. In one form of our technique, no explicit isomorphism testing is required. In the other form, isomorph testing is restricted to within small subsets of the entire set of objects. A variety of different examples are presented, including the generation of graphs with some hereditary property, the generation of Latin rectangles and the generation of balanced incomplete block designs. The technique can also be used to perform unbiased statistical analysis, including approximate counting, of sets of objects too large to generate exhaustively.
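For contrast with the paper's approach, here is the brute-force baseline it improves on: deduplicate by an explicit canonical form (the lexicographically smallest relabeled edge list). This is exactly the explicit isomorph testing the paper's technique avoids, and it only scales to tiny n.

```python
from itertools import combinations, permutations

def canonical(edges, n):
    """Canonical form by brute force: smallest relabeled edge list over all
    vertex permutations (feasible only for very small n)."""
    return min(
        tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        for p in permutations(range(n))
    )

def nonisomorphic_graphs(n):
    """All graphs on n vertices up to isomorphism, via canonical-form dedup."""
    possible = list(combinations(range(n), 2))
    seen = set()
    for r in range(len(possible) + 1):
        for edges in combinations(possible, r):
            seen.add(canonical(edges, n))
    return seen

print(len(nonisomorphic_graphs(4)))  # 11 pairwise non-isomorphic graphs
```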

7.
The original rough set approach proved to be very useful in dealing with inconsistency problems following from information granulation. It operates on a data table composed of a set U of objects (actions) described by a set Q of attributes. Its basic notions are: the indiscernibility relation on U, lower and upper approximations of either a subset or a partition of U, dependence and reduction of attributes from Q, and decision rules derived from lower approximations and boundaries of subsets identified with decision classes. The original rough set idea fails, however, when preference orders of attribute domains (criteria) are to be taken into account. Precisely, it cannot handle inconsistencies following from violation of the dominance principle. This inconsistency is characteristic of preferential information used in multicriteria decision analysis (MCDA) problems, like sorting, choice or ranking. In order to deal with this kind of inconsistency, a number of methodological changes to the original rough set theory are necessary. The main change is the substitution of the indiscernibility relation by a dominance relation, which permits approximation of ordered sets in multicriteria sorting. To approximate preference relations in multicriteria choice and ranking problems, another change is necessary: substitution of the data table by a pairwise comparison table, where each row corresponds to a pair of objects described by binary relations on particular criteria. In all these MCDA problems, the new rough set approach ends with a set of decision rules playing the role of a comprehensive preference model. It is more general than the classical functional or relational model and more understandable for users because of its natural syntax. In order to work out a recommendation in one of the MCDA problems, we propose exploitation procedures for the set of decision rules. Finally, some other recently obtained results are given: rough approximations by means of similarity relations, rough set handling of missing data, comparison of the rough set model with Sugeno and Choquet integrals, and results on the equivalence of a decision rule preference model and a conjoint measurement model that is neither additive nor transitive.
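A minimal sketch of the dominance-based substitution described above, using the standard definitions: x dominates y if it is at least as good on every criterion, and the lower approximation of the upward union Cl_t≥ contains the objects whose dominating set lies entirely in classes ≥ t. The variable names are ours.

```python
def dominates(x, y):
    """x dominates y iff x is at least as good on every (gain-type) criterion."""
    return all(a >= b for a, b in zip(x, y))

def lower_approx_upward(objects, labels, t):
    """Lower approximation of the upward union Cl_t>=: objects x such that
    every object dominating x belongs to a class >= t."""
    return {
        i for i, x in enumerate(objects)
        if all(labels[j] >= t
               for j, y in enumerate(objects) if dominates(y, x))
    }

# Evaluations on two gain-type criteria, decision classes 1..3
objs = [(3, 2), (2, 3), (1, 1), (3, 3)]
labels = [2, 1, 1, 3]
print(lower_approx_upward(objs, labels, 2))  # {0, 3}
```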

8.
This paper first presents an improved method of temporal decomposition for reducing the search space of scheduling problems. Next, the effects of the temporal decomposition procedure on scheduling problems are analysed. It is theoretically shown that the complexity of a scheduling algorithm using decomposed subsets varies inversely with that of the decomposition procedure. Therefore, the efficiency of the overall scheduling algorithm is strongly related to the decomposability of the set of operations to be processed on each machine. This decomposability is evaluated using a probabilistic approach in which the probability distributions of the scheduling parameters are obtained from historical workshop data. The average number of decomposed subsets and the average size of these subsets are estimated. Both theoretical analysis and simulation results show that the decomposition procedure is most effective when certain conditions on the scheduling parameters are met.
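To illustrate what a temporal decomposition can look like, here is a simplified split: operations on one machine, each with a (release, deadline) window, are cut wherever no pending window overlaps the next release, so each subset can be scheduled independently. This is an assumption-level criterion for illustration; the paper's method refines when such splits are valid.

```python
def temporal_decompose(ops):
    """Split (release, deadline) operations into independent subsets at gaps
    where every earlier window has closed before the next release."""
    ops = sorted(ops, key=lambda o: o[0])
    subsets, current, frontier = [], [], None
    for r, d in ops:
        if current and r >= frontier:     # gap: close the current subset
            subsets.append(current)
            current, frontier = [], None
        current.append((r, d))
        frontier = d if frontier is None else max(frontier, d)
    if current:
        subsets.append(current)
    return subsets

jobs = [(0, 4), (2, 5), (6, 8), (7, 9), (12, 14)]
print(temporal_decompose(jobs))  # [[(0,4),(2,5)], [(6,8),(7,9)], [(12,14)]]
```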

9.
Parametric identification for a class of nonlinear objects with lumped parameters described by systems of ordinary differential equations is studied. The problem is to recover the coefficients of a dynamical system depending on the phase state. For that purpose, the phase space is subdivided into a finite set of subsets or zones in which the coefficients are assumed to be constant or linear functions of state. Once the coefficients in such a form are obtained, interpolation and approximation can be used to represent the coefficients as functions of the phase variables.
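A scalar toy version of the zone-wise scheme, assuming a model dx/dt = a(x)·x with a piecewise-constant coefficient and least-squares fitting per zone; the zone boundaries and data are hypothetical.

```python
import numpy as np

def zonewise_coefficients(x, dxdt, edges):
    """Estimate a piecewise-constant a(x) in dx/dt = a(x) * x: split the phase
    axis into zones at `edges` and least-squares fit a constant per zone."""
    bins = np.digitize(x, edges)
    coeffs = {}
    for z in np.unique(bins):
        m = bins == z
        # Scalar least squares on zone z: a = <x, dxdt> / <x, x>
        coeffs[int(z)] = float(np.dot(x[m], dxdt[m]) / np.dot(x[m], x[m]))
    return coeffs

x = np.linspace(0.1, 3.0, 60)
true_a = np.where(x < 1.5, -0.5, -2.0)   # coefficient switches between zones
dxdt = true_a * x
print(zonewise_coefficients(x, dxdt, edges=[1.5]))  # ~ {0: -0.5, 1: -2.0}
```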

10.
Normal categories are pointed categorical counterparts of 0-regular varieties, i.e., varieties where each congruence is uniquely determined by the equivalence class of a fixed constant 0. In this paper, we give a new axiomatic approach to normal categories, which uses self-dual axioms on a functor defined using subobjects of objects in the category. We also show that a similar approach can be developed for 0-regular varieties, if we replace subobjects with subsets of algebras containing 0.

11.
The Moran fractal considered in this paper is an extension of the self-similar sets satisfying the open set condition. We consider those subsets of the Moran fractal that are unions of uncountably many sets, each consisting of the points whose location codes have prescribed mixed group frequencies. It is proved that the Hausdorff and packing dimensions of each of these subsets coincide and are equal to the supremum of the Hausdorff (or packing) dimensions of the sets in the union. An approach is given to calculate their Hausdorff and packing dimensions. The main advantage of our approach is that we treat these subsets in a unified manner. Another advantage is that the values of the Hausdorff and packing dimensions do not need to be guessed a priori.
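For orientation, here is the classical fact that this Moran setting generalizes (a standard result, not taken from the abstract): under the open set condition the Hausdorff and packing dimensions of a self-similar set already coincide, at the root of the Moran equation.

```latex
% For a self-similar set with contraction ratios r_1, ..., r_m satisfying
% the open set condition, the Hausdorff and packing dimensions coincide
% and equal the unique root s of the Moran equation
\[
  \sum_{i=1}^{m} r_i^{\,s} = 1 .
\]
% Example: the middle-thirds Cantor set (m = 2, r_1 = r_2 = 1/3) gives
% 2\,(1/3)^s = 1, hence s = \log 2 / \log 3 \approx 0.6309.
```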

12.
The traditional maximum likelihood estimator (MLE) is often of limited use in complex high-dimensional data due to the intractability of the underlying likelihood function. Maximum composite likelihood estimation (McLE) avoids full likelihood specification by combining a number of partial likelihood objects depending on small data subsets, thus enabling inference for complex data. A fundamental difficulty in making the McLE approach practicable is the selection, from numerous candidates, of the likelihood objects used to construct the composite likelihood function. In this article, we propose a flexible Gibbs sampling scheme for optimal selection of sub-likelihood components. The sampled composite likelihood functions are shown to converge in equilibrium to the one maximally informative about the unknown parameters, since sub-likelihood objects are chosen with probability depending on the variance of the corresponding McLE. A penalized version of our method generates sparse likelihoods with a relatively small number of components when the data complexity is high. Our algorithms are illustrated through numerical examples on simulated data as well as real genotype single nucleotide polymorphism (SNP) data from a case–control study.
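To fix ideas, a sketch of the object being optimized: a pairwise composite log-likelihood for Gaussian data, where only the selected coordinate pairs contribute. The paper's Gibbs sampler would choose these pairs adaptively by McLE variance; here they are simply fixed, and the model is a hypothetical example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def pairwise_composite_loglik(X, mu, Sigma, pairs):
    """Composite log-likelihood built from bivariate marginal sub-likelihoods:
    sum of the log-densities of the selected coordinate pairs."""
    total = 0.0
    for i, j in pairs:
        idx = [i, j]
        total += multivariate_normal.logpdf(
            X[:, idx], mean=mu[idx], cov=Sigma[np.ix_(idx, idx)]
        ).sum()
    return total

rng = np.random.default_rng(0)
mu = np.zeros(4)
Sigma = 0.5 * np.eye(4) + 0.5            # equicorrelated toy covariance
X = rng.multivariate_normal(mu, Sigma, size=200)
print(pairwise_composite_loglik(X, mu, Sigma, pairs=[(0, 1), (2, 3)]))
```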

13.
We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set.
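For reference, a sketch of the pairing baseline the paper improves on: triangulate the points (Delaunay here), then greedily pair edge-adjacent triangles whose union is a convex quadrilateral. The greedy matching and convexity test are our simplifications.

```python
import numpy as np
from scipy.spatial import Delaunay

def convex_quad(pts, t1, t2, shared):
    """True if merging triangles t1, t2 across edge `shared` gives a convex quad."""
    a, b = shared
    p = (set(t1) - shared).pop()
    q = (set(t2) - shared).pop()
    quad = np.array([pts[a], pts[p], pts[b], pts[q]])  # interleave the boundary
    v = np.roll(quad, -1, axis=0) - quad
    w = np.roll(v, -1, axis=0)
    cross = v[:, 0] * w[:, 1] - v[:, 1] * w[:, 0]      # turn direction per corner
    return bool(np.all(cross > 0) or np.all(cross < 0))

def greedy_quad_pairing(pts):
    """Pairing method baseline: Delaunay triangulate, then greedily match
    adjacent triangles into convex quadrilaterals."""
    tri = Delaunay(pts)
    simps = [tuple(s) for s in tri.simplices]
    edge_to_tris = {}
    for i, s in enumerate(simps):
        for e in (frozenset(s[:2]), frozenset(s[1:]), frozenset((s[0], s[2]))):
            edge_to_tris.setdefault(e, []).append(i)
    used, quads = set(), []
    for e, ts in edge_to_tris.items():
        if len(ts) == 2 and not (set(ts) & used):
            if convex_quad(pts, simps[ts[0]], simps[ts[1]], set(e)):
                used |= set(ts)
                quads.append((simps[ts[0]], simps[ts[1]]))
    return quads

pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5], [2, 0.5]])
print(len(greedy_quad_pairing(pts)))
```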

14.
Rough sets are efficient for data pre-processing during data mining. However, some important problems in rough sets, such as attribute reduction, are NP-hard, and the algorithms required to solve them are mostly greedy. Transversal matroids are an important part of matroid theory, which provides a well-established platform for greedy algorithms. In this study, we investigate transversal matroids using the rough set approach. First, we construct a covering induced by a family of subsets and propose approximation operators and an upper approximation number based on this covering. We present a sufficient condition under which a subset is a partial transversal, and also a necessary condition. Furthermore, we characterize the transversal matroid with the covering-based approximation operator and construct some types of circuits. Second, we explore the relationships between closure operators in transversal matroids and upper approximation operators based on the covering induced by a family of subsets. Finally, we study two types of axiomatic characterizations of the covering approximation operators, based on set theory and matroid theory, respectively. These results provide more methods for investigating the combination of transversal matroids with rough sets.
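To ground the central notion: a set T is a partial transversal of a family of subsets iff its elements can be matched to distinct member sets containing them, which is a bipartite matching question. A minimal augmenting-path checker (our own illustration, not the paper's characterization):

```python
def is_partial_transversal(T, family):
    """T is a partial transversal of `family` iff there is a matching of the
    elements of T to distinct member sets containing them."""
    match = {}  # set index -> matched element

    def augment(x, seen):
        # Try to match x to some set, recursively re-homing earlier matches
        for i, S in enumerate(family):
            if x in S and i not in seen:
                seen.add(i)
                if i not in match or augment(match[i], seen):
                    match[i] = x
                    return True
        return False

    return all(augment(x, set()) for x in T)

family = [{1, 2}, {2, 3}, {3, 4}]
print(is_partial_transversal({1, 2, 3}, family))  # True
print(is_partial_transversal({2}, [{1}, {3}]))    # False
```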

15.
16.
In this paper, belief functions, defined on the lattice of intervals of partitions of a set of objects, are investigated as a suitable framework for combining multiple clusterings. We first show how to represent clustering results as masses of evidence allocated to sets of partitions. Then a consensus belief function is obtained using a suitable combination rule. Tools for synthesizing the results are also proposed. The approach is illustrated using synthetic and real data sets.
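A small sketch of the combination step, assuming Dempster's rule as the "suitable combination rule" and representing focal elements as frozensets of named candidate partitions; the paper's lattice machinery is abstracted away.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions whose focal elements are sets of
    candidate partitions; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# Candidate partitions of {x, y, z}, named for brevity
P1, P2, P3 = "xy|z", "xz|y", "x|y|z"
m1 = {frozenset({P1, P2}): 0.7, frozenset({P1, P2, P3}): 0.3}
m2 = {frozenset({P1}): 0.6, frozenset({P1, P2, P3}): 0.4}
print(dempster_combine(m1, m2))
```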

17.
Various computational difficulties arise in using decision set-based vector maximization methods to solve multiple objective linear programming problems. As a result, several researchers have begun to explore the possibility of solving these problems by examining subsets of their outcome sets, rather than of their decision sets. In this article, we present and validate a basic weight set decomposition approach for generating the set of all efficient extreme points in the outcome set of a multiple objective linear program. Based upon this approach, we then develop an algorithm, called the Weight Set Decomposition Algorithm, for generating this set. A sample problem is solved using this algorithm, and the main potential computational and practical advantages of the algorithm are indicated.
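A crude stand-in for the idea (sampling, where the paper decomposes the weight set exactly): maximize weighted sums of the objectives over the feasible region and collect the distinct outcome vectors; strictly positive weights yield efficient points. Uses scipy.optimize.linprog; the toy problem is ours.

```python
import numpy as np
from scipy.optimize import linprog

def efficient_outcome_points(C, A_ub, b_ub, n_weights=50, seed=0):
    """Sample weights from the unit simplex, maximize each weighted objective
    sum of Cx, and collect the distinct outcome vectors found."""
    rng = np.random.default_rng(seed)
    found = set()
    for _ in range(n_weights):
        w = rng.dirichlet(np.ones(C.shape[0]))       # positive weights
        res = linprog(-(w @ C), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        if res.success:
            found.add(tuple(np.round(C @ res.x, 6)))
    return found

# max [x1, x2] subject to x1 + 2*x2 <= 4, 2*x1 + x2 <= 4, x >= 0
C = np.eye(2)
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 4.0])
print(sorted(efficient_outcome_points(C, A, b)))
```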

18.
The graph partitioning problem is to partition the vertex set of a graph into a number of nonempty subsets so that the total weight of edges connecting distinct subsets is minimized. Previous research requires the input of the cardinalities of the subsets, or of the number of subsets for equipartition. In this paper, the problem is formulated as a zero-one quadratic programming problem without the input of cardinalities. We also present three equivalent zero-one linear integer programming reformulations. Because of its importance in data biclustering, the bipartite graph partitioning problem is also studied. Several new methods to determine the number of subsets and the cardinalities are presented for practical applications. In addition, hierarchical partitioning and the partitioning of bipartite graphs without reordering one vertex set are studied.
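The zero-one quadratic objective can be written with an assignment matrix X (X[i, c] = 1 iff vertex i is in subset c), so that the cut is half the weight between differently assigned vertices. Below is that objective plus a simple local-search heuristic that keeps subsets nonempty; the heuristic is our illustration, not the paper's exact reformulations.

```python
import numpy as np

def cut_weight(W, assign, k):
    """Zero-one quadratic objective: total weight of edges joining vertices
    assigned to distinct subsets."""
    n = len(assign)
    X = np.zeros((n, k))
    X[np.arange(n), assign] = 1.0
    same = X @ X.T                        # same[i, j] = 1 iff same subset
    return 0.5 * float(np.sum(W * (1.0 - same)))

def greedy_partition(W, k, iters=10):
    """Move each vertex to the subset that most reduces the cut, skipping
    moves that would empty a subset."""
    n = W.shape[0]
    assign = np.arange(n) % k
    for _ in range(iters):
        for v in range(n):
            if np.sum(assign == assign[v]) == 1:
                continue                  # moving v would empty its subset
            assign[v] = min(range(k), key=lambda c: cut_weight(
                W, np.where(np.arange(n) == v, c, assign), k))
    return assign, cut_weight(W, assign, k)

W = np.array([[0, 3, 1, 0], [3, 0, 0, 1], [1, 0, 0, 4], [0, 1, 4, 0]], float)
print(greedy_partition(W, k=2))           # optimal cut weight here is 2
```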

19.
The multiobjective approach is a common way of generalizing single-criterion dynamic programming models. Another way is to consider partially ordered criteria structures; that approach is rather rare. The aim of this paper is to present such a model. A generalization of Bellman's principle of optimality is employed to create a forward procedure that finds the set of all maximal elements. As this set is usually large, the second problem under consideration is to find suitable subsets of it. To reduce the number of solutions presented to the decision maker, we propose applying a family of narrowing relations; that approach is similar to scalarization in multiobjective programming. Ordered structures of random variables based on mean–variance, stochastic dominance and inverse stochastic dominance are considered. A numerical illustration is given at the end of the paper.
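A minimal sketch of such a forward procedure under the simplest partial order (componentwise dominance on additive criteria vectors): extend every surviving vector by every stage decision and prune anything dominated, so only maximal elements are carried forward. The stage data are hypothetical.

```python
def dominates(u, v):
    return all(a >= b for a, b in zip(u, v)) and u != v

def maximal(vectors):
    """Maximal elements of a set of criteria vectors under componentwise order."""
    return {u for u in vectors if not any(dominates(v, u) for v in vectors)}

def pareto_dp(stages):
    """Forward procedure in the spirit of a generalized Bellman principle:
    extend each partial solution by each decision, keep only maximal vectors."""
    front = {(0, 0)}
    for decisions in stages:
        front = maximal({
            tuple(x + d for x, d in zip(vec, dec))
            for vec in front for dec in decisions
        })
    return front

# Two criteria, three stages, each offering additive (gain1, gain2) decisions
stages = [[(3, 1), (1, 3)], [(2, 2), (0, 4)], [(1, 0), (0, 2)]]
print(sorted(pareto_dp(stages)))  # {(6,3), (5,5), (3,7), (1,9)}
```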

20.
In this paper, we propose the THESEUS method, a new approach to multi-criteria sorting problems based on fuzzy outranking relations. Compared with other outranking-based methods, THESEUS is inspired by a different view of multi-criteria classification problems. It evaluates the assignment of an object to an element of a set of previously defined ordered categories in a new way: every possible assignment is compared with the information from several preference relations derived from a fuzzy outranking relation defined on the universe of objects. The appropriate assignment is determined by solving a simple selection problem. The capacity of a reference set to make appropriate assignments depends on a good characterization of the categories. A single reference action characterizing a category may be insufficient to achieve well-determined assignments. In this paper, the capacity of the reference set to perform appropriate assignments is characterized by some new concepts. This capacity may be increased when more objects are added to the reference set. THESEUS is a method for handling the preference information contained in such larger reference sets.
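A deliberately simplified sketch of the selection idea: a crude concordance-only stand-in for the fuzzy outranking index, strict preference cut at a credibility threshold, and assignment to the ordered category minimizing a count of inconsistencies with the reference set. All formulas here are illustrative assumptions, not THESEUS's actual definitions.

```python
import numpy as np

def credibility(a, b, weights):
    """Concordance-only stand-in for the outranking index sigma(a, b):
    the weighted share of criteria on which a is at least as good as b."""
    return float(np.sum(weights * (np.asarray(a) >= np.asarray(b))))

def assign(a, refs, weights, lam=0.7):
    """Assign `a` to the ordered category minimizing inconsistencies with the
    strict preferences derived from the credibility index."""
    cats = sorted({c for _, c in refs})

    def inconsistencies(k):
        n = 0
        for b, cb in refs:
            aPb = credibility(a, b, weights) >= lam > credibility(b, a, weights)
            bPa = credibility(b, a, weights) >= lam > credibility(a, b, weights)
            # a strictly preferred to b should not sit in a lower-or-equal
            # category than b, and symmetrically
            n += (aPb and cb >= k) + (bPa and cb <= k)
        return n

    return min(cats, key=inconsistencies)

weights = np.array([0.5, 0.3, 0.2])
refs = [((5, 5, 5), 3), ((3, 3, 3), 2), ((1, 1, 2), 1)]
print(assign((4, 4, 4), refs, weights))  # category 2 on this toy data
```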
