Similar Documents
Found 20 similar documents (search time: 156 ms).
1.
Attribute Reduction in Set-Valued Decision Tables Based on Neighborhood Relations
Set-valued information systems are a generalized form of complete information systems: some objects may take more than one value on certain attributes, reflecting uncertainty in the information. This paper introduces a neighborhood relation on the objects of a set-valued information system and, taking each object's neighborhood as a basic set, establishes a rough set method for set-valued information systems. To simplify knowledge representation, we further discuss positive-region reducts of neighborhood-consistent set-valued decision tables and approximate-distribution reducts of neighborhood-inconsistent set-valued decision tables, give equivalent characterizations of both kinds of reducts, and present methods for computing them by means of discernibility functions.
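A minimal sketch of one way such a neighborhood relation can be realized, assuming (as one common tolerance-style choice) that two objects are neighbors when their value sets intersect on every attribute; the paper's exact relation may differ:

# Set-valued information system: each object maps each attribute to a *set* of values.
table = {
    "x1": {"a1": {0, 1}, "a2": {2}},
    "x2": {"a1": {1},    "a2": {2, 3}},
    "x3": {"a1": {2},    "a2": {3}},
}

def neighborhood(x, attrs):
    """Objects whose value sets overlap x's on every attribute in attrs."""
    return {y for y in table
            if all(table[x][a] & table[y][a] for a in attrs)}

for x in table:
    print(x, sorted(neighborhood(x, ["a1", "a2"])))
# x1's and x2's value sets intersect on both attributes, so each lies in
# the other's neighborhood; x3 is disjoint from both on a1, so its
# neighborhood is just {x3}.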

2.
This paper compares two different definitions of the refinement relation between knowledge bases, obtaining some interesting connections among equivalence relations and partitions, binary relations and coverings, and binary relations and covering reducts; it then examines the different roles the two definitions play in raising the degree of certainty of knowledge in rough sets. These conclusions should be of help to rough-set-based research on uncertainty.

3.
Ever since Pawlak introduced the concept of rough sets, the approximation accuracy of rough sets has attracted steady interest, and a considerable literature on accuracy measures has appeared. In rough set theory, accuracy is an important numerical characteristic that quantifies the imprecision caused by the boundary of a rough set. Building on an analysis of the classical accuracy measure and of the approximation accuracy based on the excess entropy of equivalence-relation graphs, this paper proposes a new definition of accuracy; comparison shows the new definition to be more reasonable. The new accuracy measure is then applied to attribute reduction, and comparison on worked examples shows that the resulting attribute reduction is more practical.
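For orientation, the classical Pawlak accuracy that such refinements start from is the ratio of the cardinalities of the lower and upper approximations of a set X under an equivalence relation R:

\alpha_R(X) = \frac{|\underline{R}X|}{|\overline{R}X|}, \qquad 0 \le \alpha_R(X) \le 1,

with \alpha_R(X) = 1 exactly when the boundary region \overline{R}X \setminus \underline{R}X is empty, i.e. when X is R-definable.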

4.
This paper explains the practical significance of set-valued information systems for knowledge representation; introduces a maximal-tolerance classification method based on a tolerance relation to classify the objects of the universe, ensuring that the objects in each tolerance class share common attribute features; discusses attribute reduction in set-valued information systems; and uses discernibility functions to give methods for computing the core and the reducts.

5.
Covering generalized rough sets are an important extension of Pawlak rough sets, and their attribute reduction is one of the most important problems in rough set theory. Tsang et al. designed an attribute reduction algorithm for covering information systems based on an induced covering, but did not state explicitly which types of covering rough sets it applies to. In this paper we first show that Tsang's reduction algorithm applies to covering rough sets of the fifth, sixth, and seventh types. Second, by establishing an equivalence between coverings and reflexive, transitive binary relations, we propose an attribute reduction algorithm of lower time complexity and prove that our reduction method is a special case of the attribute reduction for general binary relations proposed by Wang et al. This paper not only simplifies the reduction algorithm but also, for the first time, connects covering attribute reduction with binary-relation attribute reduction, which is of both theoretical and practical significance.

6.
Interval-valued information systems are a generalized model of single-valued information systems. By introducing a variable-precision tolerance relation and maximal variable-precision tolerance classes, this paper proposes attribute reduction of interval-valued information systems and relative attribute reduction with respect to objects. Furthermore, based on discernibility matrices, a discernibility function and a relative discernibility function are defined, yielding concrete procedures for computing reducts and relative reducts of interval-valued information systems.

7.
The set cover problem and rough set attribute reduction are both active research topics, and both have broad application backgrounds. To date, the cross-study of set covering theory and rough set theory is still in its infancy. The main work of this paper is to transform the set cover problem into a test-cost-sensitive rough set attribute reduction problem, so that rough set theory can be applied to the set cover problem, with the aim of enriching the cross-study of the two theories. First a discernibility matrix for the set cover is constructed, and then, on this discernibility matrix...

8.
A Study of Multi-Granulation Fuzzy Rough Sets
李聪 《数学杂志》2016,36(1):124-134
This paper studies attribute reduction in fuzzy rough sets. Combining the respective advantages of fuzzy rough sets and multi-granulation rough sets, two types of multi-granulation fuzzy rough set models are proposed in which the lower and upper approximation operators are dual with respect to the negation operator. Properties of multi-granulation fuzzy rough sets and their relationship with single-granulation fuzzy rough sets are also studied, and an approximate reduction method for one of the two models is given by constructing discernibility functions. Finally, a worked example verifies the effectiveness of the approximate reduction method for this type of multi-granulation fuzzy rough decision system.
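As background, in the crisp multi-granulation setting that such models fuzzify, the optimistic lower and upper approximations of X over granulations R_1, ..., R_m are commonly defined as follows (this is the standard crisp formulation, not the paper's exact fuzzy operators, which would replace inclusion and intersection with fuzzy implications and t-norms):

\underline{\sum_{i=1}^{m} R_i}^{O}(X) = \{ x \in U : \exists i,\ [x]_{R_i} \subseteq X \}, \qquad
\overline{\sum_{i=1}^{m} R_i}^{O}(X) = \{ x \in U : \forall i,\ [x]_{R_i} \cap X \neq \emptyset \}.

These two operators are dual under set complement, matching the duality requirement mentioned in the abstract.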

9.
Attribute reduction is one of the core topics in rough set theory. This paper introduces information quantity and attribute significance into set-valued information systems, gives their properties and their relationship to attribute reduction, and proposes an attribute reduction algorithm for set-valued information systems based on information quantity and attribute significance, together with the algorithm's time complexity. A worked example shows that the algorithm is effective.
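A minimal sketch of the greedy pattern behind significance-based reduction, using a Liang-style information quantity as a stand-in measure (the paper's set-valued definitions, built on tolerance classes, would replace the partition step; all names here are illustrative):

def partition(rows, attrs):
    """U/attrs: group objects by their value tuple on attrs."""
    blocks = {}
    for x, row in rows.items():
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(x)
    return blocks.values()

def info_quantity(rows, attrs):
    """I(attrs) = sum |b|/|U| * (1 - |b|/|U|); grows as the partition refines."""
    n = len(rows)
    return sum((len(b) / n) * (1 - len(b) / n) for b in partition(rows, attrs))

def greedy_reduct(rows, conditions):
    """Add the most significant attribute until the full information quantity is reached."""
    full, red = info_quantity(rows, conditions), []
    while info_quantity(rows, red) + 1e-12 < full:
        red.append(max((a for a in conditions if a not in red),
                       key=lambda a: info_quantity(rows, red + [a])))
    return red

rows = {"x1": {"a": 0, "b": 1}, "x2": {"a": 0, "b": 0}, "x3": {"a": 1, "b": 0}}
print(greedy_reduct(rows, ["a", "b"]))   # ['a', 'b']: neither attribute alone suffices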

10.
A Covering-Based Probabilistic Rough Set Model and Its Bayes Decisions
The classical Pawlak probabilistic rough set model is built on an equivalence relation over the universe, yet in practical applications an equivalence relation is hard to obtain. Many researchers have therefore built Pawlak rough set models on general relations (such as tolerance and similarity relations). This paper establishes a probabilistic rough set model based on a covering relation, generalizing and summarizing this earlier work, and presents a Bayes decision method under the model together with an application example.
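As background, probabilistic rough set models of this kind typically threshold the conditional probability of membership with a pair 0 \le \beta < \alpha \le 1 obtained from Bayesian misclassification costs (a standard decision-theoretic formulation; here K(x) stands for the granule containing x, an equivalence class classically and a covering-induced neighborhood in this model):

\underline{apr}_{(\alpha,\beta)}(X) = \{ x \in U : P(X \mid K(x)) \ge \alpha \}, \qquad
\overline{apr}_{(\alpha,\beta)}(X) = \{ x \in U : P(X \mid K(x)) > \beta \}.

Objects whose conditional probability falls strictly between \beta and \alpha form the boundary region, on which the Bayes decision defers judgment.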

11.
The original rough set approach proved to be very useful in dealing with inconsistency problems following from information granulation. It operates on a data table composed of a set U of objects (actions) described by a set Q of attributes. Its basic notions are: the indiscernibility relation on U, lower and upper approximations of either a subset or a partition of U, dependence and reduction of attributes from Q, and decision rules derived from lower approximations and boundaries of subsets identified with decision classes. The original rough set idea fails, however, when preference orders of attribute domains (criteria) are to be taken into account. Precisely, it cannot handle inconsistencies following from violation of the dominance principle. This inconsistency is characteristic of preferential information used in multicriteria decision analysis (MCDA) problems, like sorting, choice or ranking. In order to deal with this kind of inconsistency, a number of methodological changes to the original rough set theory are necessary. The main change is the substitution of the indiscernibility relation by a dominance relation, which permits approximation of ordered sets in multicriteria sorting. To approximate preference relations in multicriteria choice and ranking problems, another change is necessary: substitution of the data table by a pairwise comparison table, where each row corresponds to a pair of objects described by binary relations on particular criteria. In all these MCDA problems, the new rough set approach ends with a set of decision rules playing the role of a comprehensive preference model. It is more general than the classical functional or relational model and more understandable for users because of its natural syntax. In order to work out a recommendation in one of the MCDA problems, we propose exploitation procedures for the set of decision rules. Finally, some other recently obtained results are given: rough approximations by means of similarity relations, rough set handling of missing data, comparison of the rough set model with Sugeno and Choquet integrals, and results on the equivalence of a decision rule preference model and a conjoint measurement model which is neither additive nor transitive.
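A minimal sketch of the dominance relation that replaces indiscernibility in this approach, assuming gain-type criteria (larger is better); the dominating set D+(x) is what the approach uses to approximate upward unions of decision classes:

def dominates(y, x, rows, criteria):
    """y dominates x if y is at least as good on every criterion."""
    return all(rows[y][c] >= rows[x][c] for c in criteria)

def dominating_set(x, rows, criteria):
    """D+(x): all objects that dominate x."""
    return {y for y in rows if dominates(y, x, rows, criteria)}

rows = {"x1": {"price": 2, "quality": 3},
        "x2": {"price": 3, "quality": 3},
        "x3": {"price": 1, "quality": 2}}
print(sorted(dominating_set("x3", rows, ["price", "quality"])))
# ['x1', 'x2', 'x3']: every object is at least as good as x3 on both criteria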

12.
Classical rough set theory is based on the conventional indiscernibility relation and is not suitable for analyzing incomplete information. Several successful extended rough set models based on different non-equivalence relations have been proposed; the data-driven valued tolerance relation is one such non-equivalence relation. However, its method of calculating the tolerance degree has some limitations. In this paper, a known-same-probability dominant valued tolerance relation is proposed to solve this problem. On this basis, an extended rough set model based on the known-same-probability dominant valued tolerance relation is presented, and some properties of the new model are analyzed. In order to compare the classification performance of different generalized indiscernibility relations, an incomplete category utility function, based on the category utility function of cluster analysis, is proposed, which can effectively measure the classification performance of different generalized indiscernibility relations. Experimental results show that the known-same-probability dominant valued tolerance relation achieves better classification results than other generalized indiscernibility relations.
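A sketch of the classical valued tolerance degree that this line of work refines, assuming missing values are uniformly distributed over each attribute's domain (the paper's known-same-probability variant changes how the missing cases are weighted; this is the baseline formulation, not the paper's):

MISSING = None

def tolerance_degree(x, y, rows, attrs, domain_size):
    """Probability-style degree that x and y agree on every attribute in attrs."""
    deg = 1.0
    for a in attrs:
        u, v = rows[x][a], rows[y][a]
        if u is MISSING or v is MISSING:
            deg *= 1.0 / domain_size[a]   # an unknown value matches by chance 1/|Dom(a)|
        elif u != v:
            return 0.0                    # known and different: incompatible
    return deg

rows = {"x1": {"a": 1, "b": None}, "x2": {"a": 1, "b": 2}}
print(tolerance_degree("x1", "x2", rows, ["a", "b"], {"a": 3, "b": 4}))  # 0.25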

13.
Rough set theory, a mathematical tool to deal with inexact or uncertain knowledge in information systems, originally described the indiscernibility of elements by equivalence relations. Covering rough sets are a natural extension of classical rough sets, obtained by relaxing the partitions arising from equivalence relations to coverings. Recently, some topological concepts such as neighborhood have been applied to covering rough sets. In this paper, we further investigate covering rough sets based on neighborhoods via approximation operations. We show that the neighborhood-based upper approximation can be defined equivalently without using neighborhoods. To analyze the coverings themselves, we introduce unary and composition operations on coverings. A notion of homomorphism is provided to relate two covering approximation spaces. We also examine the properties of approximations preserved by the operations and homomorphisms, respectively.
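A minimal sketch of the neighborhood operator commonly used in this setting, where N(x) is the intersection of all covering blocks containing x, together with the induced lower approximation (illustrative code, not the paper's notation):

from functools import reduce

def neighborhood(x, cover):
    """N(x): intersection of all blocks of the covering that contain x."""
    return reduce(set.intersection, [K for K in cover if x in K])

def lower_approx(X, universe, cover):
    """x belongs to the lower approximation if its whole neighborhood lies in X."""
    return {x for x in universe if neighborhood(x, cover) <= X}

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
print(neighborhood(2, C))           # {2}: blocks {1, 2} and {2, 3} intersect in {2}
print(lower_approx({1, 2}, U, C))   # {1, 2}: N(1) = {1} and N(2) = {2} both fit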

14.
Applications of Rough Set Theory to Attribute Reduction and Knowledge Classification
For two definitions of attribute reduction in incomplete information systems, this paper proves their equivalence. On this basis, combining rough set theory, the concepts of similarity matrix and similarity interval are proposed and applied to the problem of knowledge classification in incomplete information systems.

15.
Knowledge reduction is one of the most important problems in the study of rough set theory. In the real world, however, most information systems are based on dominance relations instead of the classical equivalence relation, owing to various factors, and the ordering of attribute values plays a crucial role in such systems. To extract concise decision rules from these systems, knowledge reductions are needed, and the main objective of this paper is to address this problem. The distribution reduction and the maximum distribution reduction are proposed for inconsistent ordered information systems, and their properties and mutual relationship are discussed. Furthermore, a judgment theorem and a discernibility matrix are obtained, from which an approach to knowledge reduction in inconsistent ordered information systems can be derived.

16.
Standard rough sets describe concepts using equivalence classes as granules. This paper weakens the requirement of an equivalence relation and carries a broader granular computing model over to pansystems rough sets, inducing granular computing models on pansystems rough sets from partitions and coverings of the universe.

17.
In this paper, we prove that the set of probability measures which are ergodic with respect to an analytic equivalence relation is an analytic set. This is obtained by approximating analytic equivalence relations by measures, and is used to give an elementary proof of an ergodic decomposition theorem of Kechris.

18.
Function S-rough sets (function singular rough sets) are defined by means of equivalence classes of R-functions; since a function is a law, function S-rough sets have law-like characteristics. Function S-rough sets generalize Z. Pawlak's rough sets. Using function S-rough sets, this paper discusses law generation and law separation and proposes a law separation theorem. The results are applied to the estimation of investment risk laws.

19.
Attribute reduction is one of the key issues in rough set theory. Many heuristic attribute reduction algorithms, such as positive-region reduction, information entropy reduction and discernibility matrix reduction, have been proposed. However, these methods are usually computationally time-consuming for large data. Moreover, a single attribute significance measure cannot discriminate between attributes that share the same greatest value. To overcome these shortcomings, we first introduce a counting sort algorithm with time complexity O(|C||U|) for dealing with redundant and inconsistent data in a decision table and computing positive regions and core attributes (|C| and |U| denote the cardinalities of the condition attribute set and the object set, respectively). Then, hybrid attribute measures are constructed which reflect the significance of an attribute in both positive regions and boundary regions. Finally, hybrid approaches to attribute reduction based on the indiscernibility and discernibility relations are proposed, with time complexity no more than max(O(|C|^2|U/C|), O(|C||U|)), where |U/C| denotes the cardinality of the set of equivalence classes U/C. The experimental results show that the proposed hybrid algorithms are effective and feasible for large data.
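A sketch of the grouping step the paper speeds up: partition U by the condition-attribute vectors, then collect the classes whose decision value is unique. The paper does this grouping with counting sort; the hash-map grouping below is only a stand-in that yields the same positive region:

def positive_region(rows, conditions, decision):
    """POS_C(D): union of the C-equivalence classes with a single decision value."""
    classes = {}                            # U/C: value vector -> objects
    for x, row in rows.items():
        classes.setdefault(tuple(row[a] for a in conditions), []).append(x)
    pos = set()
    for objs in classes.values():
        if len({rows[x][decision] for x in objs}) == 1:   # consistent class
            pos.update(objs)
    return pos

rows = {"x1": {"a": 0, "b": 1, "d": "yes"},
        "x2": {"a": 0, "b": 1, "d": "no"},    # conflicts with x1 -> boundary
        "x3": {"a": 1, "b": 0, "d": "yes"}}
print(sorted(positive_region(rows, ["a", "b"], "d")))   # ['x3']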

20.
Transfer algorithms are usually used to optimize an objective function that is defined on the set of partitions of a finite set X. In this paper we define an equivalence relation ∼ on the set of fuzzy equivalence relations on X and establish a bijection from the set of hierarchies on X to the set of equivalence classes with respect to ∼. Thus, hierarchies can be identified with fuzzy equivalence relations, and the transfer algorithm can be modified in order to optimize an objective function that is defined on the set of hierarchies on X.
