Similar documents: 20 results found.
1.
2.
As an extension of the Pawlak rough set model, the decision-theoretic rough set model (DTRS) adopts Bayesian decision theory to compute the thresholds required in probabilistic rough set models. It gives a new semantic interpretation of the positive, boundary and negative regions by means of three-way decisions. DTRS has been widely discussed and applied in data mining and decision making. However, one limitation of DTRS is its inability to deal with numerical data directly. To overcome this disadvantage and extend the theory of DTRS, this paper proposes a neighborhood-based decision-theoretic rough set model (NDTRS) within the framework of DTRS. Basic concepts of NDTRS are introduced. A positive-region-related attribute reduct and a minimum-cost attribute reduct in the proposed model are defined and analyzed. Experimental results show that our methods can obtain short reducts. Furthermore, a new neighborhood classifier based on three-way decisions is constructed and compared with other classifiers. Comparison experiments show that the proposed classifier achieves high accuracy and low misclassification cost.
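The abstract does not spell out how the thresholds and regions are obtained; the sketch below is a minimal illustration, assuming the standard DTRS loss-function formulas, of how an (alpha, beta) pair might be derived and used for three-way assignment. The loss values, the neighborhood-based probabilities and all names are illustrative, not taken from the paper.

```python
# Minimal sketch: DTRS-style thresholds from six Bayesian loss values,
# then three-way assignment of objects by conditional probability.
# All numbers below are illustrative placeholders.

def dtrs_thresholds(lpp, lbp, lnp, lnn, lbn, lpn):
    """Compute (alpha, beta) from the six loss values.
    lXP = loss of action X when the object belongs to the concept,
    lXN = loss when it does not (X in {P: accept, B: defer, N: reject})."""
    alpha = (lpn - lbn) / ((lpn - lbn) + (lbp - lpp))
    beta = (lbn - lnn) / ((lbn - lnn) + (lnp - lbp))
    return alpha, beta

def three_way_regions(prob, alpha, beta):
    """Split objects into positive / boundary / negative regions by
    their conditional probability Pr(X | neighborhood)."""
    pos = {x for x, p in prob.items() if p >= alpha}
    neg = {x for x, p in prob.items() if p <= beta}
    bnd = set(prob) - pos - neg
    return pos, bnd, neg

if __name__ == "__main__":
    alpha, beta = dtrs_thresholds(lpp=0, lbp=2, lnp=6, lnn=0, lbn=1, lpn=4)
    # prob[x] would come from a neighborhood, e.g. objects within radius delta
    prob = {"x1": 0.9, "x2": 0.55, "x3": 0.1}
    print(alpha, beta, three_way_regions(prob, alpha, beta))
```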

3.
In rough set theory, crisp and/or fuzzy binary relations play an important role in both constructive and axiomatic studies of various generalized rough sets. This paper considers the uniqueness problem of the (fuzzy) relation in some generalized rough set models. Our results show that, using the axiomatic approach, the (fuzzy) relation determined by the (fuzzy) approximation operators is unique in some (fuzzy) double-universe models.

4.
In this paper, a variable-precision dominance-based rough set approach (VP-DRSA) is proposed, together with several VP-DRSA-based approaches to attribute reduction. The properties of VP-DRSA are shown in comparison with previous dominance-based rough set approaches. An advantage of VP-DRSA over the variable-consistency dominance-based rough set approach in decision rule induction is emphasized. Some relations among the VP-DRSA-based attribute reduction approaches are investigated.

5.
For decision-theoretic rough sets, a key issue is determining the thresholds of the probabilistic rough set model by setting appropriate cost functions. However, correct cost functions are hard to obtain because of a lack of prior knowledge, and few previous studies have addressed learning thresholds and cost functions from data. In the present study, a multi-objective optimization model is proposed for threshold learning. In our model, an objective function that minimizes the decision cost is integrated with another that decreases the size of the boundary region. The ranges of the thresholds and two types of F_measure are used as constraints. In addition, a multi-objective genetic algorithm is employed to obtain the Pareto optimal set. We used 12 UCI datasets to validate the performance of our method; the experimental results demonstrate the trade-off between the two objectives and show that the thresholds obtained by our method are more intuitive than those obtained by other methods. The classification abilities of the solutions are improved by the F_measure constraints.
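As a concrete illustration of the two competing objectives, the hedged sketch below evaluates the overall decision cost and the boundary-region size for a few candidate (alpha, beta) pairs. The loss values and probabilities are placeholders, and the genetic search over the Pareto front described in the abstract is not reproduced.

```python
# Sketch of the two objectives used for threshold learning: overall decision
# cost and boundary-region size for a candidate (alpha, beta).

def objectives(prob, alpha, beta, lpp=0.0, lbp=1.0, lnp=4.0,
               lnn=0.0, lbn=1.0, lpn=4.0):
    cost, boundary = 0.0, 0
    for p in prob:                       # p = Pr(X | [x]) for one object
        if p >= alpha:                   # positive region: accept
            cost += lpp * p + lpn * (1 - p)
        elif p <= beta:                  # negative region: reject
            cost += lnp * p + lnn * (1 - p)
        else:                            # boundary region: defer
            cost += lbp * p + lbn * (1 - p)
            boundary += 1
    return cost, boundary

probs = [0.95, 0.8, 0.6, 0.4, 0.2, 0.05]
for a, b in [(0.9, 0.1), (0.7, 0.3), (0.55, 0.45)]:
    print((a, b), objectives(probs, a, b))   # cost vs. boundary size trade-off
```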

6.
7.
Rough set theory formalizes the notion of knowledge and provides a rigorous toolkit for analysing and processing uncertain, incomplete, large-scale data. However, the algebraic formulation of rough set concepts and operations is often difficult to grasp. Addressing this, this paper introduces the information entropy of knowledge in a knowledge base, proves that certain information-theoretic representations of knowledge are equivalent to their algebraic representations, and finally discusses some properties of rough dynamical systems over a knowledge base.
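For readers unfamiliar with the information-theoretic view, the following small sketch computes the entropy of knowledge regarded as a partition of the universe and illustrates the intuition that finer knowledge never has lower entropy; the definitions are the common textbook ones, not necessarily the exact ones used in the paper.

```python
# Sketch: information entropy of "knowledge" viewed as a partition of the
# universe U. A finer partition (more discernment) never has lower entropy.
from math import log2

def partition_entropy(partition, universe_size):
    """H(P) = -sum |Xi|/|U| * log2(|Xi|/|U|) over the blocks Xi of P."""
    return -sum((len(b) / universe_size) * log2(len(b) / universe_size)
                for b in partition)

coarse = [{0, 1, 2}, {3, 4, 5}]          # coarser knowledge
fine = [{0, 1}, {2}, {3, 4}, {5}]        # finer knowledge (refines `coarse`)
print(partition_entropy(coarse, 6))      # 1.0
print(partition_entropy(fine, 6))        # higher than the coarse partition
```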

8.
This paper investigates the relationship between topology and generalized rough sets induced by binary relations. Some known results regarding relation-based rough sets are reviewed, and some new results are given. In particular, the relationship between different topologies corresponding to the same rough set model is examined. The generalized rough sets considered are induced by inverse serial relations, reflexive relations and pre-order relations, respectively. We point out that inverse serial relations are the weakest relations that can induce topological spaces, and that different relation-based generalized rough set models induce different topological spaces. We prove that two recently proposed topologies corresponding to the reflexive-relation-based rough set model are different, and give a condition under which the two coincide.
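To make the connection concrete, here is a minimal sketch, under the usual assumption that the lower approximation of a pre-order acts as an interior operator, which enumerates the open sets (fixed points of the interior) induced by a small pre-order; the relation and universe are toy examples, not taken from the paper.

```python
# Sketch: the relation-based lower approximation as an interior operator.
# For a pre-order R on U, the sets X with interior(X) == X form a topology.
from itertools import chain, combinations

U = [0, 1, 2]
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}   # reflexive + transitive

def successors(x):
    return {y for (a, y) in R if a == x}

def interior(X):
    """Lower approximation: {x in U : R(x) subset of X}."""
    return frozenset(x for x in U if successors(x) <= set(X))

# Collect all fixed points of the interior operator: the induced open sets.
subsets = chain.from_iterable(combinations(U, k) for k in range(len(U) + 1))
opens = {interior(s) for s in subsets if interior(s) == frozenset(s)}
print(sorted(map(sorted, opens)))   # closed under union and intersection
```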

9.
Application of an entropy-weight-based investment evaluation model in venture capital
Guided by the principle of practicality and operational realism, and based on the actual practice of venture capital evaluation, this paper introduces rough set information entropy theory and derives an entropy-weight investment model for multi-criteria evaluation. Evaluation criteria and their weights are determined through an empirical questionnaire survey, and a real (suitably simplified) case is worked through in detail to verify the model's operability in actual venture capital practice. The aim is to overcome the shortcomings of existing studies in this area, which largely remain at the methodological stage and rely on examples too simple to have practical value, and to explore the application of rough set theory in venture capital management.

10.
Human beings often observe objects or deal with data hierarchically structured at different levels of granulation. In this paper, we study optimal scale selection in multi-scale decision tables from the perspective of granular computing. A multi-scale information table is an attribute-value system in which each object under each attribute is represented at different scales corresponding to different levels of granulation, with a granular information transformation from finer to coarser labelled values. The concept of multi-scale information tables in the context of rough sets is introduced. Lower and upper approximations with respect to different levels of granulation in multi-scale information tables are defined and their properties are examined. Optimal scale selection under various requirements in multi-scale decision tables is discussed for the standard rough set model and for a dual probabilistic rough set model, respectively. Relationships among different notions of optimal scale in multi-scale decision tables are further analyzed.
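The sketch below illustrates, on a toy two-scale decision table, one plausible reading of optimal scale selection: choosing the coarsest scale at which the table remains consistent. The consistency criterion used here is the standard rough set one; the probabilistic variant discussed in the paper is not reproduced, and the table is invented.

```python
# Sketch: optimal scale selection in a toy multi-scale decision table.
# Each attribute value is recorded at two granulation levels; the coarsest
# level under which the table is still consistent is taken as optimal.

table = {            # object: ((value at scale 1, value at scale 2), decision)
    "x1": (("low", "cold"), "yes"),
    "x2": (("low", "cold"), "yes"),
    "x3": (("mid", "cold"), "yes"),
    "x4": (("high", "hot"), "no"),
}

def consistent(scale):
    """True if objects with the same value at this scale share a decision."""
    seen = {}
    for (values, decision) in table.values():
        v = values[scale]
        if seen.setdefault(v, decision) != decision:
            return False
    return True

# Prefer the coarsest (largest index) scale that keeps the table consistent.
optimal = max(s for s in range(2) if consistent(s))
print("optimal scale:", optimal + 1)
```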

11.
In this paper, we discuss in detail the properties of the probabilistic rough set over two universes. We present the parameter dependence, that is, the continuity of the lower and upper approximations with respect to the parameters, for the probabilistic rough set over two universes. We also investigate some properties of the uncertainty measures, i.e., the rough degree and the precision, for the probabilistic rough set over two universes. Meanwhile, we point out the limitations of the traditional uncertainty measures and then define a general Shannon entropy based on a covering of the universe. Using this concept, we discuss knowledge granularity and rough entropy as uncertainty measures for the probabilistic rough set over two universes. Finally, the validity of the methods and conclusions is tested by a numerical example.
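A minimal sketch of the two-universe setting and the precision / rough degree measures follows; the relation, target concept and thresholds are illustrative, and the paper's exact definitions of the measures may differ in detail.

```python
# Sketch: precision and rough degree for a probabilistic rough set over
# two universes U and V linked by a relation R.

U = ["u1", "u2", "u3", "u4"]
V = ["v1", "v2", "v3"]
R = {"u1": {"v1", "v2"}, "u2": {"v2"}, "u3": {"v2", "v3"}, "u4": {"v3"}}
Y = {"v2", "v3"}                     # target concept, a subset of V
alpha, beta = 0.7, 0.3               # probabilistic thresholds

def prob(x):
    """Conditional probability Pr(Y | R(x))."""
    return len(R[x] & Y) / len(R[x])

lower = {x for x in U if prob(x) >= alpha}
upper = {x for x in U if prob(x) > beta}
precision = len(lower) / len(upper) if upper else 1.0
rough_degree = 1.0 - precision
print(lower, upper, precision, rough_degree)
```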

12.
The aim is to investigate the combination of precision and grade and to establish and study a new extended rough set model. Motivated by the need for the logical difference of grade and precision, a difference-operation model of the graded lower approximation operator and the variable-precision upper approximation operator is proposed, and the macroscopic essence, precise description and basic properties of this difference operation are obtained. A medical example illustrates the meaning and application of the model. The difference-operation model of the graded lower approximation operator and the variable-precision upper approximation operator partially extends the graded rough set model and the classical rough set model.
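The following sketch computes the difference operation the abstract describes, the graded lower approximation minus the variable-precision upper approximation, using the common textbook definitions of the two operators (which may differ in detail from the paper's) on a toy partitioned universe.

```python
# Sketch: graded (grade-k) lower approximation minus variable-precision
# (beta) upper approximation, on an invented partition and concept.

def block_of(partition, x):
    return next(b for b in partition if x in b)

def graded_lower(U, partition, X, k):
    """{x : |[x] - X| <= k} — at most k elements of [x] fall outside X."""
    return {x for x in U if len(block_of(partition, x) - X) <= k}

def vp_upper(U, partition, X, beta):
    """{x : |[x] & X| / |[x]| > beta} — variable-precision upper approximation."""
    return {x for x in U
            if len(block_of(partition, x) & X) / len(block_of(partition, x)) > beta}

U = set(range(8))
partition = [{0, 1, 2}, {3, 4}, {5, 6, 7}]
X = {0, 1, 3, 5}
diff = graded_lower(U, partition, X, k=1) - vp_upper(U, partition, X, beta=0.5)
print(diff)   # objects in the graded lower but not the variable-precision upper
```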

13.
Classical rough set theory is based on the conventional indiscernibility relation and is not suitable for analyzing incomplete information. Several successful extended rough set models based on different non-equivalence relations have been proposed; the data-driven valued tolerance relation is one such non-equivalence relation. However, the method for calculating the tolerance degree has some limitations. In this paper, the known-same-probability dominant valued tolerance relation is proposed to address this problem. On this basis, an extended rough set model based on the known-same-probability dominant valued tolerance relation is presented, and some of its properties are analyzed. In order to compare the classification performance of different generalized indiscernibility relations, an incomplete category utility function, based on the category utility function used in cluster analysis, is proposed; it can measure the classification performance of different generalized indiscernibility relations effectively. Experimental results show that the known-same-probability dominant valued tolerance relation yields better classification results than other generalized indiscernibility relations.
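As background for the tolerance-degree calculation the abstract refers to, here is a hedged sketch of a plain valued tolerance degree for incomplete data, assuming missing values are uniformly distributed over the values observed in a column; the paper's known-same-probability dominant refinement is not reproduced, and the table is invented.

```python
# Sketch: a valued tolerance degree for incomplete data ("*" = missing).

data = {                      # object: attribute values
    "x1": ["a", "*", "c"],
    "x2": ["a", "b", "*"],
    "x3": ["d", "b", "c"],
    "x4": ["d", "e", "f"],
}

def domain(j):
    return {row[j] for row in data.values() if row[j] != "*"}

def tolerance(x, y):
    """Probability that x and y take equal values on every attribute."""
    t = 1.0
    for j, (u, v) in enumerate(zip(data[x], data[y])):
        n = len(domain(j))
        if u == "*" or v == "*":
            t *= 1.0 / n                 # an unknown value matches with prob 1/n
        else:
            t *= 1.0 if u == v else 0.0  # known values must agree exactly
    return t

print(tolerance("x1", "x2"), tolerance("x1", "x3"))
```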

14.
Feature reduction based on rough set theory is an effective feature selection method in pattern recognition applications. Finding a minimal subset of the original features is inherent in the rough set approach to feature selection. As feature reduction is an NP-hard problem, it is necessary to develop fast optimal or near-optimal feature selection algorithms. This article proposes an exact feature selection algorithm in rough sets that is efficient in terms of computation time. The proposed algorithm examines a solution tree using a breadth-first strategy, and the pruned nodes are held in a variant of the trie data structure. By the monotonic property of the dependency degree, no subset of a pruned node can be an optimal solution; by detecting these subsets in the trie, their dependency degrees need not be calculated. The search on the tree continues until the optimal solution is found. The algorithm is further improved by selecting an initial search level determined by a hill-climbing method instead of searching the tree from the level below the root. The length of the minimal reduct and the size of the data set influence which starting search level is more efficient. Experimental results on several standard UCI data sets demonstrate that the proposed algorithm is effective and efficient for data sets with more than 30 features. © 2014 Wiley Periodicals, Inc. Complexity 20: 50–62, 2015
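The dependency degree and its monotonicity, which the pruning argument relies on, can be illustrated with a short sketch on an invented decision table; gamma(B) is the fraction of objects whose B-indiscernibility class is pure with respect to the decision.

```python
# Sketch: dependency degree gamma(B) = |POS_B(D)| / |U| on a toy table,
# showing its non-decreasing behaviour as attributes are added to B.

rows = [                                  # (conditional values, decision)
    ({"a": 1, "b": 0, "c": 1}, "yes"),
    ({"a": 1, "b": 0, "c": 0}, "yes"),
    ({"a": 0, "b": 1, "c": 1}, "no"),
    ({"a": 0, "b": 1, "c": 0}, "yes"),
]

def gamma(B):
    """Fraction of objects whose B-indiscernibility class is decision-pure."""
    classes = {}
    for values, d in rows:
        classes.setdefault(tuple(values[a] for a in sorted(B)), set()).add(d)
    pure = sum(1 for values, _ in rows
               if len(classes[tuple(values[a] for a in sorted(B))]) == 1)
    return pure / len(rows)

print(gamma({"a"}), gamma({"a", "c"}), gamma({"a", "b", "c"}))  # non-decreasing
```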

15.
In the past, the choice of β values used to find β-reducts in VPRS for an information system has been somewhat arbitrary. In this study, a systematic method that bridges fuzzy set methodology and the probabilistic approach of rough sets is proposed to solve the problem of determining the threshold value β in variable-precision rough sets (VPRS). Unlike existing probabilistic methods, the proposed method relies on the fuzzy membership degrees of each attribute of the objects to calculate β. The method gives the membership degrees and fuzzy aggregation operators probabilistic interpretations. Based on these interpretations, the threshold value β of VPRS is derived directly from the fuzzy membership degrees through implication relations and fuzzy algorithms, where the membership degrees are obtained by the standard fuzzy C-means method. The argument is that classification errors arise in the fuzzy-clustering phase prior to information classification; therefore the threshold value β should be constrained by the probability that an object belongs to the fuzzy clusters, i.e., by the values of the membership functions. A few examples are given in the paper to demonstrate the differences from other β-determining methods.
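As one possible, heavily hedged reading of the abstract's idea, the sketch below derives a β value from a given fuzzy C-means membership matrix by averaging each object's non-membership in its best cluster; the matrix is invented, and the paper's implication-relation construction is not reproduced.

```python
# Sketch: deriving a VPRS threshold beta from fuzzy membership degrees.
# The membership matrix would normally come from fuzzy C-means; here it is
# given directly and is purely illustrative.

memberships = {            # object: membership degrees over the fuzzy clusters
    "x1": [0.92, 0.08],
    "x2": [0.75, 0.25],
    "x3": [0.60, 0.40],
    "x4": [0.15, 0.85],
}

def beta_from_memberships(u):
    """Average probability that an object does NOT belong to its best cluster."""
    return sum(1.0 - max(m) for m in u.values()) / len(u)

beta = beta_from_memberships(memberships)
print(round(beta, 3))      # admissible classification error for the VPRS model
```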

16.
Optimization, 2012, 61(5): 603–611
Classical mathematics is usually crisp, while most real-life problems are not; therefore, classical mathematics is often unsuitable for dealing with real-life problems. In this article, we present a systematic and focused study of the application of rough sets (Z. Pawlak, Rough sets, Int. J. Comput. Inform. Sci. 11 (1982), pp. 341–356) to a basic area of decision theory, namely mathematical programming. This new framework concerns mathematical programming in a rough environment and is called 'rough programming' (L. Baoding, Theory and Practice of Uncertain Programming, 1st ed., Physica-Verlag, Heidelberg, 2002; E.A. Youness, Characterizing solutions of rough programming problems, Eur. J. Oper. Res. 168 (2006), pp. 1019–1029). It allows roughness in any part of the problem, as a result of leakage, uncertainty and vagueness in the available information. We classify rough programming problems into three classes according to where the roughness occurs. In rough programming, wherever roughness exists, new concepts such as rough feasibility and rough optimality come to the fore. The study of convexity for rough programming problems plays a key role in understanding global optimality in a rough environment. For this, a theoretical framework of convexity in rough programming and a conceptualization of the solution are created along the lines of their crisp counterparts.
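To give the notion of rough feasibility some shape, here is a small hedged sketch in which the feasible region is known only through a lower (surely feasible) and an upper (possibly feasible) set; the constraint and numbers are invented and are not from the article.

```python
# Sketch: "rough feasibility" in the spirit of rough programming. The
# feasible region is known only roughly; a candidate solution is classified
# as surely feasible, possibly feasible, or infeasible.

def classify(x, lower_ok, upper_ok):
    """Return the rough feasibility status of a candidate solution x."""
    if lower_ok(x):
        return "surely feasible"
    if upper_ok(x):
        return "possibly feasible"
    return "infeasible"

# Rough constraint: the true bound lies somewhere between 8 and 10.
lower_ok = lambda x: 2 * x[0] + x[1] <= 8      # pessimistic (lower approximation)
upper_ok = lambda x: 2 * x[0] + x[1] <= 10     # optimistic (upper approximation)

for x in [(1, 3), (3, 3), (5, 5)]:
    print(x, classify(x, lower_ok, upper_ok))
```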

17.
The aim is to combine precision and grade and to explore a new extended rough set model. Starting from the logical difference operation of grade and precision, a logical-difference rough set model of grade and precision is defined. Within the model, the logical-difference approximation operators of grade and precision are studied via transformation formulas between variable-precision approximations and graded approximations, and properties such as the iterated (power) action of the approximation operators are obtained. The logical-difference rough set model of grade and precision extends the graded rough set model and the classical rough set model, and the corresponding properties of the iterated action of the approximation operators are obtained in these models.

18.
One-direction singular rough sets are studied by means of fuzzy sets and fuzzy equivalence relations, and the mathematical structure of one-direction singular rough fuzzy sets is given together with their union, intersection and complement operations and properties. It is also proved that one-direction singular rough fuzzy sets form a completely, infinitely distributive soft algebra under union, intersection and complement.

19.
20.
To address data and information redundancy in the analysis of complex systems, an attribute reduction algorithm based on the information entropy of Vague rough sets is proposed. First, the relevant concepts of Vague rough sets are extended, and models of extended information entropy and generalized information entropy for Vague rough sets are proposed. Second, entropy-based attribute significance measures and the principles of attribute reduction are studied, leading to a supervised attribute reduction algorithm based on the information entropy of Vague rough sets. Finally, the algorithm is validated on datasets from the UCI repository, and the computational results show that it is practical and effective.
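For illustration only, the sketch below runs a supervised, entropy-driven greedy attribute-reduction loop of the general kind the abstract describes, but on a crisp decision table; the extended and generalized entropies defined for Vague rough sets in the paper are not reproduced, and the table is invented.

```python
# Sketch: greedy attribute reduction driven by conditional entropy H(D | B).
from math import log2

rows = [({"a": 1, "b": 0, "c": 1}, "yes"),
        ({"a": 1, "b": 1, "c": 0}, "yes"),
        ({"a": 0, "b": 1, "c": 1}, "no"),
        ({"a": 0, "b": 0, "c": 0}, "no")]

def cond_entropy(B):
    """H(D | B): remaining uncertainty of the decision given attributes B."""
    groups = {}
    for values, d in rows:
        groups.setdefault(tuple(values[x] for x in sorted(B)), []).append(d)
    h = 0.0
    for ds in groups.values():
        for d in set(ds):
            p = ds.count(d) / len(ds)
            h -= (len(ds) / len(rows)) * p * log2(p)
    return h

# Greedy reduction: repeatedly add the attribute that lowers H(D | B) the most.
attrs, reduct = {"a", "b", "c"}, set()
while cond_entropy(reduct) > 0 and reduct != attrs:
    reduct.add(min(attrs - reduct, key=lambda x: cond_entropy(reduct | {x})))
print(reduct)
```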
