Similar Articles
20 similar articles found.
1.
Covering rough sets generalize traditional rough sets by considering coverings of the universe instead of partitions, and neighborhood-covering rough sets have been shown to be a reasonable choice for attribute reduction with covering rough sets. In this paper, numerical algorithms for attribute reduction with neighborhood-covering rough sets are developed using evidence theory. We first employ belief and plausibility functions to measure the lower and upper approximations in neighborhood-covering rough sets, and then characterize the attribute reductions of covering information systems and decision systems by these respective functions. The concepts of the significance and relative significance of coverings are also developed to design algorithms for finding reducts. Based on these discussions, connections between neighborhood-covering rough sets and evidence theory are established, providing a basic framework for numerical characterizations of attribute reduction with these sets.
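As an illustrative sketch (the data, names, and measure definitions below are our own toy simplification, not the paper's exact algorithm), the neighborhood-based lower and upper approximations and the belief and plausibility measures built from them can be computed as follows:

```python
# Toy sketch of neighborhood-based rough approximations and the
# belief/plausibility measures derived from them (illustrative only).

def approximations(neighborhood, universe, target):
    """neighborhood maps each object x to its neighborhood N(x)."""
    lower = {x for x in universe if neighborhood[x] <= target}  # N(x) inside X
    upper = {x for x in universe if neighborhood[x] & target}   # N(x) meets X
    return lower, upper

U = {1, 2, 3, 4, 5}
N = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4, 5}, 5: {4, 5}}
X = {1, 2, 4}

lower, upper = approximations(N, U, X)
belief = len(lower) / len(U)        # Bel(X) = |lower approximation| / |U|
plausibility = len(upper) / len(U)  # Pl(X)  = |upper approximation| / |U|
```

Here objects 1 and 2 certainly belong to X (their whole neighborhoods lie in X), while 4 and 5 only possibly belong to it, so belief is strictly below plausibility.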

2.
3.
Fuzzy rough sets in cut-set form and their properties
Using cut sets of fuzzy sets, rough sets of fuzzy sets are constructed, and a more rigorous mathematical definition of fuzzy rough sets is given. The equivalence with the definition in [1] is proved, and the corresponding properties of fuzzy rough sets are derived under the new definition.
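As a minimal illustration of the cut-set machinery underlying this construction (toy membership values, our own naming), the α-cut of a fuzzy set A is the crisp set of elements with membership at least α:

```python
# Alpha-cut of a fuzzy set given as a membership dictionary.
def alpha_cut(membership, alpha):
    """Return the crisp set {x : A(x) >= alpha}."""
    return {x for x, mu in membership.items() if mu >= alpha}

A = {"a": 0.9, "b": 0.6, "c": 0.3}
cut = alpha_cut(A, 0.5)  # elements with membership >= 0.5
```

Varying α from 1 down to 0 yields a nested family of crisp sets, from which rough approximations of the fuzzy set can then be built.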

4.
In this paper, belief functions defined on the lattice of intervals of partitions of a set of objects are investigated as a suitable framework for combining multiple clusterings. We first show how to represent clustering results as masses of evidence allocated to sets of partitions. A consensus belief function is then obtained using a suitable combination rule. Tools for synthesizing the results are also proposed. The approach is illustrated using synthetic and real data sets.
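The combination step can be sketched with the generic unnormalized conjunctive rule over focal sets of candidate partitions (the partition labels and mass values below are hypothetical; the paper's combination rule on the lattice of intervals of partitions is more refined):

```python
# Unnormalized conjunctive combination of two mass functions whose
# focal elements are sets of candidate partitions (frozensets).
def conjunctive_combine(m1, m2):
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B  # the sources agree on the common partitions
            out[C] = out.get(C, 0.0) + a * b
    return out

m1 = {frozenset({"P1", "P2"}): 0.7, frozenset({"P1", "P2", "P3"}): 0.3}
m2 = {frozenset({"P2", "P3"}): 1.0}
consensus = conjunctive_combine(m1, m2)
```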

5.
Automatic image annotation is the task of assigning one or more semantic concepts to a given image; it is a typical multi-label classification problem. This paper presents MLNRS, a novel multi-label classification framework for automatic image annotation based on neighborhood rough sets, which accounts for the uncertainty of the mapping from the visual feature space to the semantic concept space. Given a new instance, its neighbors in the training set are first identified. Then, based on the upper and lower approximations of neighborhood rough sets, all possible labels of the given instance are found. Finally, using the statistical information gained from the label sets of the neighbors, the maximum a posteriori (MAP) principle is applied to determine the label set of the given instance. Experiments on three image datasets show that MLNRS achieves more promising performance than some well-known multi-label learning algorithms.
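A heavily simplified sketch of the decision step (the frequency threshold and toy label sets are our own; the paper's MAP rule uses richer statistics from the neighbors' label sets):

```python
from collections import Counter

def map_style_labels(neighbor_label_sets, threshold=0.5):
    """Keep a label if it occurs in more than `threshold` of the
    neighbors' label sets -- a crude stand-in for the MAP decision."""
    k = len(neighbor_label_sets)
    counts = Counter(label for s in neighbor_label_sets for label in s)
    return {label for label, c in counts.items() if c / k > threshold}

# Label sets of the new image's three nearest neighbors (toy data).
neighbors = [{"sky", "sea"}, {"sky"}, {"sky", "boat"}]
labels = map_style_labels(neighbors)
```

Lowering the threshold moves the prediction from the lower-approximation style (certain labels only) toward the upper-approximation style (all possible labels).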

6.
Kernel methods and rough sets are two general pursuits in the domain of machine learning and intelligent systems. Kernel methods map data into a higher-dimensional feature space, where the resulting structure of the classification task is linearly separable, while rough sets granulate the universe using relations and employ the induced knowledge granules to approximate arbitrary concepts in the problem at hand. Although there seems to be no connection between the two methodologies, both kernel methods and rough sets explicitly or implicitly rely on relation matrices to represent the structure of sample information. Based on this observation, we combine the two methodologies by incorporating the Gaussian kernel into fuzzy rough sets and propose a fuzzy rough set model based on Gaussian kernel approximation. Fuzzy T-equivalence relations constitute the foundation of most fuzzy rough set models. It is proven that the fuzzy relations induced by the Gaussian kernel are reflexive, symmetric and transitive. Gaussian kernels are introduced to acquire fuzzy relations between samples described by fuzzy or numeric attributes in order to carry out fuzzy rough data analysis. Moreover, we discuss information entropy to evaluate the kernel matrix and compute the uncertainty of the approximation. Several functions are constructed for evaluating the significance of features based on kernel approximation and fuzzy entropy, and algorithms for feature ranking and reduction based on these functions are designed. Experimental results are included to quantify the effectiveness of the proposed methods.
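The kernel-induced fuzzy relation at the heart of the model can be sketched as follows (minimal code of our own; the bandwidth sigma and the sample points are hypothetical):

```python
import math

def gaussian_relation(samples, sigma=1.0):
    """R[i][j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)): a fuzzy
    similarity relation, reflexive and symmetric by construction."""
    def rel(x, y):
        d2 = sum((a - b) ** 2 for a, b in zip(x, y))
        return math.exp(-d2 / (2.0 * sigma ** 2))
    return [[rel(x, y) for y in samples] for x in samples]

R = gaussian_relation([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)])
```

Each row of R can then serve as the fuzzy granule of a sample when computing lower and upper fuzzy rough approximations.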

7.
Multi-sensor data fusion is an evolving technology whereby data from multiple sensor inputs are processed and combined. The data derived from multiple sensors can, however, be uncertain, imperfect, and conflicting. The present study contributes to the continuing search for viable approaches to overcoming the problems associated with data conflict and imperfection. Sensor readings, represented by belief functions, have to be fused according to their corresponding weights. Previous studies have often estimated the weights of sensor readings based on a single criterion; such mono-criterion approaches are, however, often unreliable and inadequate reflections of reality. Accordingly, this work opts for a multi-criteria decision aid. A modified Analytical Hierarchy Process (AHP) that incorporates several criteria is proposed to determine the weights of a set of sensor readings. The approach relies on the automation of pairwise comparisons to eliminate subjectivity and reduce inconsistency. It assesses the weight of each sensor reading and fuses the weighted readings using a modified average combination rule. The efficiency of this approach is evaluated in a target recognition context. Several tests, a sensitivity analysis, and comparisons with other approaches from the literature are described.
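The weighted fusion step can be sketched as a plain weighted average of mass functions (weights and masses below are hypothetical, and the paper's modified average rule is not necessarily this exact formula):

```python
def weighted_average_masses(masses, weights):
    """Fuse mass functions m_i with weights w_i:
    m(A) = sum_i w_i * m_i(A) / sum_i w_i."""
    total = sum(weights)
    fused = {}
    for m, w in zip(masses, weights):
        for A, v in m.items():
            fused[A] = fused.get(A, 0.0) + w * v / total
    return fused

m1 = {"target": 0.8, "decoy": 0.2}  # reliable sensor, AHP weight 3
m2 = {"target": 0.4, "decoy": 0.6}  # less reliable sensor, weight 1
fused = weighted_average_masses([m1, m2], [3.0, 1.0])
```

Because each input is a normalized mass function and the weights are normalized by their sum, the fused result is again a mass function.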

8.
Inference algorithms in directed evidential networks (DEVNs) obtain their efficiency by exploiting the independencies represented between variables in the model. This can be done using the disjunctive rule of combination (DRC) and the generalized Bayesian theorem (GBT), both proposed by Smets [Ph. Smets, Belief functions: the disjunctive rule of combination and the generalized Bayesian theorem, International Journal of Approximate Reasoning 9 (1993) 1–35]. These rules make it possible to use conditional belief functions for reasoning in directed evidential networks, avoiding the computation of joint belief functions on the product space. In this paper, new algorithms based on these two rules are proposed for the propagation of belief functions in singly and multiply connected directed evidential networks.
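The DRC itself is simple to state; a minimal sketch with frozenset focal elements and hypothetical masses (the propagation algorithms in the paper build on this rule, not on this toy code):

```python
def disjunctive_combine(m1, m2):
    """Smets' disjunctive rule of combination:
    m(C) = sum over A, B with A | B == C of m1(A) * m2(B)."""
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            C = A | B  # union instead of the conjunctive rule's intersection
            out[C] = out.get(C, 0.0) + a * b
    return out

m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4}
m2 = {frozenset({"y"}): 1.0}
m = disjunctive_combine(m1, m2)
```

Unlike the conjunctive rule, the DRC never produces mass on the empty set, which is what makes it suitable when at most one source is assumed reliable.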

9.
10.
This paper proposes a general study of (I,T)-interval-valued fuzzy rough sets on two universes of discourse integrating the rough set theory with the interval-valued fuzzy set theory by constructive and axiomatic approaches. Some primary properties of interval-valued fuzzy logical operators and the construction approaches of interval-valued fuzzy T-similarity relations are first introduced. Determined by an interval-valued fuzzy triangular norm and an interval-valued fuzzy implicator, a pair of lower and upper generalized interval-valued fuzzy rough approximation operators with respect to an arbitrary interval-valued fuzzy relation on two universes of discourse is then defined. Properties of I-lower and T-upper interval-valued fuzzy rough approximation operators are examined based on the properties of interval-valued fuzzy logical operators discussed above. Connections between interval-valued fuzzy relations and interval-valued fuzzy rough approximation operators are also established. Finally, an operator-oriented characterization of interval-valued fuzzy rough sets is proposed, that is, interval-valued fuzzy rough approximation operators are characterized by axioms. Different axiom sets of I-lower and T-upper interval-valued fuzzy set-theoretic operators guarantee the existence of different types of interval-valued fuzzy relations which produce the same operators.
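For orientation (notation ours, written for the ordinary fuzzy special case; the paper replaces I, T, R and A by their interval-valued counterparts), the (I,T)-approximation operators for a fuzzy relation R on two universes U and V take the familiar form:

```latex
\underline{R}_I A(x) = \inf_{y \in V} I\bigl(R(x,y),\, A(y)\bigr),
\qquad
\overline{R}_T A(x) = \sup_{y \in V} T\bigl(R(x,y),\, A(y)\bigr),
\quad x \in U ,
```

where I is the implicator, T the triangular norm, and A a fuzzy set on V.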

11.
In this paper, we address the problem of identifying the potential sources of conflict between information sources in the framework of belief function theory. To this end, we propose a decomposition of the global measure of conflict as a function defined over the power set of the discernment frame. This decomposition, which associates a part of the conflict with particular hypotheses, allows the origin of the conflict to be identified and treated as "local" to those hypotheses, which is more informative than the usual global measures of conflict or disagreement between sources. Having shown the uniqueness of this decomposition, we illustrate its use on two examples. The first is a toy example in which the fact that the conflict is mainly brought by one hypothesis allows its origin to be identified. The second is a real application, namely robot localization, where we show that focusing the conflict measure on the "favored" hypothesis (the one that would be decided upon) helps make the fusion process more robust.
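The global measure being decomposed can be sketched as the classical conflict mass, i.e. the mass that the unnormalized conjunctive combination assigns to the empty set (the hypothesis labels and masses below are hypothetical; the paper's contribution is decomposing this quantity over the power set):

```python
def global_conflict(m1, m2):
    """Mass assigned to the empty set by the conjunctive combination:
    k = sum over A, B with A & B empty of m1(A) * m2(B)."""
    return sum(a * b
               for A, a in m1.items()
               for B, b in m2.items()
               if not (A & B))

m1 = {frozenset({"h1"}): 0.9, frozenset({"h1", "h2"}): 0.1}
m2 = {frozenset({"h2"}): 0.8, frozenset({"h1", "h2"}): 0.2}
k = global_conflict(m1, m2)  # only the pair ({h1}, {h2}) conflicts
```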

12.
Nowadays, with the volume of data growing at an unprecedented rate, large-scale data mining and knowledge discovery have become a new challenge. Rough set theory has been successfully applied in data mining for knowledge acquisition. The recently introduced MapReduce technique has received much attention from both the scientific community and industry for its applicability to big data analysis. To mine knowledge from big data, we present parallel large-scale rough set based methods for knowledge acquisition using MapReduce. We implemented them on several representative MapReduce runtime systems, Hadoop, Phoenix and Twister, and report performance comparisons on these runtime systems. The experimental results show that (1) computational time is generally lowest on Twister when employing the same number of cores; (2) Hadoop achieves the best speedup for larger data sets; (3) Phoenix achieves the best speedup for smaller data sets. The excellent speedups also demonstrate that the proposed parallel methods can effectively process very large data sets on different runtime systems. The pitfalls and advantages of these runtime systems are also illustrated through our experiments, which can help users decide which runtime system to use in their applications.
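The equivalence-class computation at the core of rough set methods parallelizes naturally; a single-machine sketch of the map and reduce steps (the record layout is our own and the code is not tied to Hadoop, Phoenix, or Twister):

```python
from collections import defaultdict

def map_phase(records, attrs):
    """Emit (key, value) pairs: key = tuple of attribute values,
    value = object id. Objects with equal keys are indiscernible."""
    for rid, rec in records:
        yield tuple(rec[a] for a in attrs), rid

def reduce_phase(pairs):
    """Group object ids by key: each group is an equivalence class."""
    classes = defaultdict(set)
    for key, rid in pairs:
        classes[key].add(rid)
    return dict(classes)

records = [(1, {"color": "red",  "size": "S"}),
           (2, {"color": "red",  "size": "S"}),
           (3, {"color": "blue", "size": "S"})]
classes = reduce_phase(map_phase(records, ["color", "size"]))
```

In a real MapReduce deployment the shuffle stage performs the grouping, so each reducer receives all objects of one equivalence class.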

13.
Rough set theory, a mathematical tool to deal with inexact or uncertain knowledge in information systems, has originally described the indiscernibility of elements by equivalence relations. Covering rough sets are a natural extension of classical rough sets by relaxing the partitions arising from equivalence relations to coverings. Recently, some topological concepts such as neighborhood have been applied to covering rough sets. In this paper, we further investigate the covering rough sets based on neighborhoods by approximation operations. We show that the upper approximation based on neighborhoods can be defined equivalently without using neighborhoods. To analyze the coverings themselves, we introduce unary and composition operations on coverings. A notion of homomorphism is provided to relate two covering approximation spaces. We also examine the properties of approximations preserved by the operations and homomorphisms, respectively.

14.
In this note, we show by examples that Theorem 5.3, part of the proof of Theorem 5.3′, Lemma 5.4 and Remark 5.2 in [1] contain slight flaws, and we provide corrected versions.

15.
An expert system is a computer program that is designed to solve problems at a level comparable to that of a human expert in a given domain. Often expert systems require a representation of uncertainty. This paper highlights some of the key developments in the history of representing uncertainty in expert systems. An uncertainty representation called belief networks is then introduced and its use in expert systems is motivated. The paper concludes with a discussion of current directions in belief network research.

16.
We address in this paper the problem of defining belief functions, typically for multi-source classification applications in image processing. We propose to use mathematical morphology for introducing imprecision in the mass and belief functions while estimating disjunctions of hypotheses. The basic idea relies on the similarity between some properties of morphological operators and properties of belief functions. The framework of mathematical morphology guarantees that the derived functions have all required properties. We illustrate the proposed approach on synthetic and real images.

17.
Difference systems of sets (DSS) are combinatorial configurations that arise in connection with code synchronization. This paper proposes a new method to construct DSSs that uses known DSSs to partition some of the cosets of Z_v relative to a subgroup of order k, where v = km is a composite number. As applications, we obtain some new optimal DSSs.

18.
In this paper we present new methods for solving multi-criteria decision-making problems in an intuitionistic fuzzy environment. First, we define an evaluation function for the decision-making problem to measure the degrees to which alternatives satisfy and do not satisfy the decision-maker's requirement. Then, we introduce and discuss the concept of intuitionistic fuzzy point operators. Using intuitionistic fuzzy point operators, we can reduce the degree of uncertainty of the elements in a universe corresponding to an intuitionistic fuzzy set. Furthermore, a series of new score functions for multi-criteria decision-making are defined based on the intuitionistic fuzzy point operators and the evaluation function, and their effectiveness and advantages are illustrated by examples.
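A minimal sketch of the two ingredients (the point operator shown is Atanassov's D_alpha and the score is the common mu minus nu form; the paper's operators and score functions are more general, and the numbers below are hypothetical):

```python
def d_alpha(mu, nu, alpha):
    """Point operator D_alpha: redistributes the hesitancy
    pi = 1 - mu - nu onto membership and non-membership,
    reducing the uncertainty of the intuitionistic fuzzy value."""
    pi = 1.0 - mu - nu
    return mu + alpha * pi, nu + (1.0 - alpha) * pi

def score(mu, nu):
    """A common score function for ranking alternatives."""
    return mu - nu

mu2, nu2 = d_alpha(0.5, 0.3, 0.5)  # hesitancy 0.2 split evenly
```

After applying the operator the hesitancy is fully redistributed (mu2 + nu2 = 1), so alternatives can be ranked by the resulting scores without residual uncertainty.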

19.
Some methods of making fuzzy decisions involve comparing fuzzy sets on the same space. Previously published methods suffer from a lack of discrimination between alternatives and occasional conflict with intuitive choice. These methods are reviewed in this paper, and a new approach is then described that overcomes their drawbacks. Methods are given for evaluating the parameter used in decision-making, which can be varied to incorporate different utility functions.

20.
In this paper, we deal with a class of multi-objective programming problems with random coefficients and present an application to the multi-item inventory problem. The P-model is proposed to maximize the probability of the objective functions, and rough approximation is applied to handle the feasible set with random parameters. The fuzzy programming technique and a genetic algorithm are then applied to solve the resulting crisp programming problem. Finally, an application to Auchan's inventory system is given to show the efficiency of the proposed models and algorithms.

