19 similar documents found (search time: 125 ms)
1.
This paper first analyzes shortcomings of the attribute dependency measures currently common in rough set theory, then, drawing on the chi-square distribution from probability and statistics, proposes a new attribute dependency measure. Based on this measure, a definition of attribute significance is given, and their properties are discussed in detail.
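The chi-square construction described above can be sketched as a normalized chi-square dependency between one condition attribute and the decision attribute; the function name and the normalization (Cramér's V squared) are illustrative assumptions, not the paper's exact definition:

```python
from collections import Counter

def chi_square_dependency(attr_values, decision_values):
    """Chi-square statistic between a condition attribute and the decision,
    normalized by n * (min(r, c) - 1) so it lies in [0, 1].
    A generic illustration, not the paper's exact measure."""
    n = len(attr_values)
    rows = sorted(set(attr_values))
    cols = sorted(set(decision_values))
    joint = Counter(zip(attr_values, decision_values))
    row_tot = Counter(attr_values)
    col_tot = Counter(decision_values)
    chi2 = 0.0
    for a in rows:
        for d in cols:
            expected = row_tot[a] * col_tot[d] / n  # expected count under independence
            observed = joint[(a, d)]
            chi2 += (observed - expected) ** 2 / expected
    denom = n * (min(len(rows), len(cols)) - 1)
    return chi2 / denom if denom else 0.0
```

A perfectly determining attribute scores 1, a statistically independent one scores 0.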
2.
Attribute significance measures in fuzzy information systems (cited 2 times: 0 self-citations, 2 by others)
Using the inclusion degree as a tool, the rough set method is applied to fuzzy information systems. A method for computing attribute significance measures in fuzzy information systems is given, and an example illustrates the limitations of the attribute significance concept of [3].
3.
4.
P-sets are obtained by introducing dynamic characteristics into a finite ordinary set X, thereby improving X. Hierarchical P-sets extend ordinary P-sets and possess both hierarchical and chain structures. Using the properties of hierarchical P-sets, this paper studies their attribute elements and attribute laws, gives the relations and measures between attribute elements across the hierarchical structure, gives the relations and measures of attribute elements within the chain structure, and presents the attribute laws.
5.
Attribute reduction is one of the core topics of rough set theory. This paper introduces information quantity and attribute significance into set-valued information systems, gives their properties and their relationship to attribute reduction, proposes an attribute reduction algorithm for set-valued information systems based on information quantity and attribute significance, and analyzes its time complexity. An example shows that the algorithm is effective.
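The greedy significance-based reduction scheme described here can be sketched with the plain Pawlak dependency degree standing in for the paper's information quantity; the encoding of objects as dicts is an assumption:

```python
from collections import defaultdict

def dependency(table, cond, dec):
    # Fraction of objects whose cond-indiscernibility class is decision-consistent.
    classes = defaultdict(list)
    for row in table:
        classes[tuple(row[a] for a in cond)].append(row[dec])
    return sum(len(v) for v in classes.values() if len(set(v)) == 1) / len(table)

def greedy_reduct(table, attrs, dec):
    """Greedy attribute reduction: repeatedly add the attribute with the
    largest significance (gain in dependency) until the dependency of the
    full attribute set is reached. Illustrative sketch of the scheme."""
    target = dependency(table, attrs, dec)
    reduct = []
    while dependency(table, reduct, dec) < target:
        best = max((a for a in attrs if a not in reduct),
                   key=lambda a: dependency(table, reduct + [a], dec))
        reduct.append(best)
    return reduct
```

Here significance is measured as the dependency gain of adding one attribute to the current candidate reduct.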
6.
7.
To address data redundancy in complex-system analysis, an attribute reduction algorithm based on the information entropy of Vague rough sets is proposed. First, the relevant concepts of Vague rough sets are extended, and models of extended information entropy and generalized information entropy for Vague rough sets are proposed. Second, entropy-based attribute significance measures and the principles of attribute reduction are studied, leading to a supervised attribute reduction algorithm based on the information entropy of Vague rough sets. Finally, UCI datasets are used to validate the algorithm; the computational results show that it is practical and effective.
8.
Using the length spectrum of Riemann surfaces, this paper gives a metric on Teichmüller space and proves that this metric is topologically equivalent to the Teichmüller metric. This result solves a problem posed by Sorvali in 1975.
9.
10.
This paper studies multi-attribute decision-making problems in which the attribute weights are completely unknown and the attribute values are L-R fuzzy numbers. Based on fuzzy numbers and α-cut theory, a maximizing-deviation method for estimating the attribute weights is proposed; its properties are studied, the solution of the resulting equations is given, and a ranking of the alternatives is obtained. The study shows that the proposed method is effective and feasible.
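The maximizing-deviation idea can be sketched for crisp scores: each attribute's weight is proportional to the total pairwise deviation of the alternatives on that attribute. The paper's version works on L-R fuzzy numbers via α-cuts; this crisp simplification and the function name are assumptions:

```python
def max_deviation_weights(matrix):
    """Maximizing-deviation weights for a decision matrix (rows = alternatives,
    columns = attributes): w_j is proportional to the sum of absolute pairwise
    deviations on attribute j. Crisp illustrative sketch."""
    m = len(matrix)      # number of alternatives
    n = len(matrix[0])   # number of attributes
    dev = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        dev.append(sum(abs(x - y) for x in col for y in col))
    total = sum(dev)
    return [d / total for d in dev]
```

An attribute on which all alternatives agree gets zero weight, since it cannot discriminate between them.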
11.
For conflict-type group decision-making under triangular fuzzy preferences, this paper proposes a new decision method. In the conflict-resolution stage, expert preferences are represented by triangular fuzzy numbers, and a similarity between two triangular-fuzzy preference vectors is defined; each expert's conflict measure is then defined via the similarity between the expert's preference vector for each alternative and the group preference vector for that alternative. A threshold and a negotiation mechanism regulate the experts' conflict measures until every expert's conflict measure falls below the given threshold, at which point the decision stage begins. In the decision stage, attribute weights are determined by the expectation function of triangular fuzzy numbers, the weighted similarity between each alternative's group preference vector and the ideal alternative's preference vector is computed, and the alternatives are ranked by weighted similarity to select the best one. Finally, a case study is given; Matlab is used to plot the conflict measures of the alternatives, and the numerical results demonstrate the feasibility and effectiveness of the method.
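One common distance-based similarity for triangular fuzzy numbers on [0, 1], averaged over a preference vector, gives a stand-in for the similarity used in the conflict-measure step; the paper's exact definition may differ:

```python
def tfn_similarity(x, y):
    """Similarity of two triangular fuzzy numbers (l, m, u) with components
    in [0, 1]: one distance-based definition, assumed here for illustration."""
    return 1 - sum(abs(a - b) for a, b in zip(x, y)) / 3

def vector_similarity(u, v):
    """Similarity of two triangular-fuzzy preference vectors: the average
    component-wise similarity."""
    return sum(tfn_similarity(x, y) for x, y in zip(u, v)) / len(u)
```

An expert's conflict measure could then be defined as one minus the similarity between the expert's vector and the group vector.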
12.
Entropy-based data discretization in rough set modeling of expert knowledge bases (cited 2 times: 0 self-citations, 2 by others)
This paper first analyzes the problems of discretizing continuous data in rough set modeling of expert knowledge bases and points out that allowing a small amount of conflict is beneficial to the modeling analysis. It then proposes an information-entropy-based data discretization method, analyzes the entropy measure of the discretization, and designs a genetic algorithm for solving the resulting problem. Finally, the rough set modeling of the task-dispatching knowledge base of the scheduling agent in a multi-agent job-shop scheduling system is used as an example to illustrate the method.
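The core entropy criterion, choosing the cut that minimizes the weighted class entropy of the two resulting intervals, can be sketched as follows; this is a single-cut simplification, whereas the paper embeds the criterion in a genetic algorithm:

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a class-label multiset.
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Pick the single cut point on a continuous attribute that minimizes the
    weighted class entropy of the two resulting intervals."""
    pairs = sorted(zip(values, labels))
    best_w, best_c = float('inf'), None
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # no cut between equal values
        cut = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [l for v, l in pairs if v <= cut]
        right = [l for v, l in pairs if v > cut]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if w < best_w:
            best_w, best_c = w, cut
    return best_c
```

A cut that separates the classes perfectly drives the weighted entropy to zero.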
13.
Daisuke Yamaguchi, International Journal of Approximate Reasoning, 2009, 51(1): 89-98
Pawlak’s attribute dependency degree model is applicable to feature selection in pattern recognition. However, the dependency degrees given by the model are often inadequately computed as a result of the indiscernibility relation. This paper discusses an improvement to Pawlak’s model and presents a new attribute dependency function. The proposed model is based on decision-relative discernibility matrices and measures how many times condition attributes are used to determine the decision value by referring to the matrix. The proposed dependency degree is computed by considering the two cases that two decision values are equal or unequal. A feature of the proposed model is that attribute dependency degrees have significant properties related to those of Armstrong’s axioms. An advantage of the proposed model is that data efficiency is considered in the computation of dependency degrees. It is shown through examples that the proposed model is able to compute dependency degrees more strictly than Pawlak’s model.
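For contrast, the classical Pawlak dependency degree that the paper improves on can be computed directly from the positive region. A minimal sketch; representing table rows as dicts is an assumed encoding:

```python
from collections import defaultdict

def pawlak_dependency(table, cond, dec):
    """Pawlak's dependency degree gamma_C(D): the fraction of objects whose
    C-indiscernibility class is consistent on the decision attribute."""
    classes = defaultdict(list)
    for row in table:
        key = tuple(row[a] for a in cond)  # indiscernibility class w.r.t. cond
        classes[key].append(row[dec])
    pos = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return pos / len(table)
```

Objects whose class contains conflicting decisions fall outside the positive region, which is exactly the behavior the discernibility-matrix model refines.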
14.
15.
16.
17.
This paper explains the practical significance of set-valued information systems for knowledge representation, introduces a maximal compatible classification method based on a tolerance relation to classify the objects of the universe so that the objects in each tolerance class share common attribute characteristics, discusses the attribute reduction problem for set-valued information systems, and, using the discernibility function, gives methods for computing the core and the reducts.
18.
International Journal of Approximate Reasoning, 2014, 55(3): 908-923
Covering rough sets generalize traditional rough sets by considering coverings of the universe instead of partitions, and neighborhood-covering rough sets have been demonstrated to be a reasonable selection for attribute reduction with covering rough sets. In this paper, numerical algorithms of attribute reduction with neighborhood-covering rough sets are developed by using evidence theory. We firstly employ belief and plausibility functions to measure lower and upper approximations in neighborhood-covering rough sets, and then, the attribute reductions of covering information systems and decision systems are characterized by these respective functions. The concepts of the significance and the relative significance of coverings are also developed to design algorithms for finding reducts. Based on these discussions, connections between neighborhood-covering rough sets and evidence theory are set up to establish a basic framework of numerical characterizations of attribute reduction with these sets.
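The correspondence used above, belief and plausibility as normalized lower and upper approximation sizes, can be sketched for a neighborhood operator; the set-based encoding is an assumption and this is a minimal illustration, not the paper's algorithms:

```python
def approximations(universe, neighborhoods, X):
    """Lower/upper approximations of X under a neighborhood map, and the
    induced degrees Bel(X) = |lower| / |U| and Pl(X) = |upper| / |U|."""
    lower = {x for x in universe if neighborhoods[x] <= X}  # neighborhood inside X
    upper = {x for x in universe if neighborhoods[x] & X}   # neighborhood meets X
    n = len(universe)
    return len(lower) / n, len(upper) / n
```

Belief never exceeds plausibility, mirroring the lower approximation being contained in the upper one.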
19.
In this paper, we propose a new statistical dependency measure for two random vectors based on copulas, called the copula dependency coefficient (CDC). The CDC is proved to be robust to outliers and easy to implement. In particular, it is powerful and applicable to high-dimensional problems. These properties make the CDC practically important in related applications. Both experimental and application results show that the CDC is a good robust dependence measure for association detection.
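The CDC itself is defined in the paper, but the underlying copula idea can be illustrated with Spearman's rho, a classical dependence measure that depends on the data only through ranks and hence through the empirical copula; ties are not handled in this sketch:

```python
def rank(xs):
    # 1-based ranks of the values in xs (assumes no ties).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation via the classical d-squared formula:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Because only ranks enter the formula, the measure is invariant to monotone transformations and robust to outliers, the same properties claimed for the CDC.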