Similar Literature
20 similar documents found.
1.
Granular Computing is an emerging conceptual and computing paradigm of information processing. A central notion is an information-processing pyramid with different levels of abstraction, where each level is usually represented by 'chunks' of data, known as information granules. Rough Set Theory is one of the most widely used methodologies for handling and defining granules. Ontologies are used to represent the knowledge of a domain for specific applications; a challenge is to define semantic knowledge at different levels of human-dependent detail. In this paper we propose four operations that provide several granular perspectives on a specific ontological commitment. These operations are then used to obtain various views of an ontology built with a rough-set approach. In particular, a rough methodology is introduced to construct a specific granular view of an ontology.
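The lower/upper approximation machinery that rough-set granulation rests on can be pictured with a short, self-contained sketch (a generic illustration, not the authors' construction; the partition and concept are made up):

```python
# Minimal rough-set sketch: a partition of the universe supplies the granules,
# and a concept X is approximated from below/above by unions of granules.

def approximations(partition, concept):
    """Return (lower, upper) approximations of `concept` w.r.t. `partition`."""
    concept = set(concept)
    lower, upper = set(), set()
    for granule in partition:
        g = set(granule)
        if g <= concept:   # granule lies entirely inside the concept
            lower |= g
        if g & concept:    # granule overlaps the concept
            upper |= g
    return lower, upper

# Universe {1..6} granulated into three equivalence classes.
partition = [{1, 2}, {3, 4}, {5, 6}]
print(approximations(partition, {1, 2, 3}))
# -> ({1, 2}, {1, 2, 3, 4}); the boundary {3, 4} is where the concept is "rough".
```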

2.
Group decision making is a type of decision problem in which multiple experts, acting collectively, analyze problems, evaluate alternatives, and select a solution from a collection of alternatives. As natural language is the standard representation of the concepts that humans use for communication, it seems natural that they use words (linguistic terms) instead of numerical values to provide their opinions. However, while linguistic information is readily available, it is not operational, and thus it has to be made usable through expressing it in terms of information granules. To do so, Granular Computing, which has emerged as a unified and coherent framework for designing, processing, and interpreting information granules, can be used. The aim of this paper is to present an information granulation of the linguistic information used in group decision making problems defined in heterogeneous contexts, i.e., where the experts have associated importance degrees reflecting their ability to handle the problem. The granulation of the linguistic terms is formulated as an optimization problem, solved using particle swarm optimization, in which a performance index is maximized by a suitable mapping of the linguistic terms onto information granules formalized as sets. This performance index is expressed as a weighted aggregation of the individual consistency achieved by each expert.
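A toy sketch of this optimization flavor follows; the fitness function, expert weights, and interval granules are all invented for illustration and merely stand in for the paper's weighted consistency index:

```python
import random

# All numbers below are invented: three experts with importance degrees,
# five linguistic terms anchored at prototype values in [0, 1], and granules
# modeled as symmetric intervals of a common half-width around the prototypes.
TERMS = 5
EXPERT_WEIGHTS = [0.5, 0.3, 0.2]
OPINIONS = [[0.1, 0.3, 0.5, 0.7, 0.9]] * 3   # each expert's numeric anchors

def fitness(width):
    """Toy performance index: weighted fraction of opinions covered by the
    matching granule, penalized so that overly wide granules do not win."""
    centers = [k / (TERMS - 1) for k in range(TERMS)]
    total = 0.0
    for w, ops in zip(EXPERT_WEIGHTS, OPINIONS):
        covered = sum(abs(o - c) <= width for o, c in zip(ops, centers))
        total += w * covered / TERMS
    return total - width

# Plain-vanilla PSO over the single parameter `width`.
random.seed(0)
particles = [random.uniform(0.0, 0.5) for _ in range(10)]
vel = [0.0] * 10
pbest = particles[:]
gbest = max(particles, key=fitness)
for _ in range(50):
    for i, x in enumerate(particles):
        vel[i] = (0.7 * vel[i]
                  + 1.5 * random.random() * (pbest[i] - x)
                  + 1.5 * random.random() * (gbest - x))
        particles[i] = min(max(x + vel[i], 0.0), 0.5)
        if fitness(particles[i]) > fitness(pbest[i]):
            pbest[i] = particles[i]
    gbest = max(pbest, key=fitness)
print(f"best granule half-width: {gbest:.3f}")
```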

3.
Attribute reduction is a key step in discovering interesting patterns in decision systems with many attributes. In recent years, with the fast development of data processing tools, an information system may grow quickly in attributes over time. Since the result of attribute reduction may change as attributes are added, efficiently updating attribute reducts under attribute generalization has become an important task in knowledge discovery. This paper investigates incremental attribute reduction algorithms based on knowledge granularity in decision systems under variation of attributes. Incremental mechanisms to calculate the new knowledge granularity are first introduced. The corresponding incremental algorithms for attribute reduction based on the calculated knowledge granularity are then presented for the case where multiple attributes are added to the decision system. Finally, experiments on UCI data sets and a complexity analysis show that the proposed incremental methods are effective and efficient at updating attribute reducts as attributes increase.
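Knowledge granularity, the quantity these incremental algorithms maintain, has the standard form GK(A) = Σ|Xi|²/|U|² over the equivalence classes Xi induced by an attribute subset A; a small sketch with made-up data (the incremental update itself is not reproduced):

```python
from collections import defaultdict

def knowledge_granularity(table, attrs):
    """GK(A) = sum(|Xi|^2) / |U|^2 over the equivalence classes Xi that the
    attribute subset `attrs` induces on the object table."""
    classes = defaultdict(int)
    for obj in table:
        classes[tuple(obj[a] for a in attrs)] += 1
    n = len(table)
    return sum(cnt * cnt for cnt in classes.values()) / (n * n)

# Invented 4-object table; adding attribute "b" refines the granulation.
table = [{"a": 0, "b": 1}, {"a": 0, "b": 1}, {"a": 1, "b": 0}, {"a": 1, "b": 1}]
print(knowledge_granularity(table, ["a"]))       # 0.5   (coarser)
print(knowledge_granularity(table, ["a", "b"]))  # 0.375 (finer)
```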

4.
Human beings often observe objects or deal with data hierarchically structured at different levels of granulation. In this paper, we study optimal scale selection in multi-scale decision tables from the perspective of granular computing. A multi-scale information table is an attribute-value system in which each object under each attribute is represented by different scales at different levels of granulation, with a granular information transformation from finer to coarser labelled values. The concept of multi-scale information tables in the context of rough sets is introduced. Lower and upper approximations with reference to different levels of granulation in multi-scale information tables are defined and their properties are examined. Optimal scale selection under various requirements in multi-scale decision tables is discussed for the standard rough set model and for a dual probabilistic rough set model, respectively. Relationships among different notions of optimal scale in multi-scale decision tables are further analyzed.
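The finer-to-coarser transformation can be pictured with a tiny hypothetical example (the values and bucketing functions below are invented, not the paper's):

```python
# One attribute observed at its finest scale (invented values)...
fine_values = {"x1": 3, "x2": 7, "x3": 12, "x4": 14}

def to_scale2(v):  # hypothetical coarser labelling
    return "low" if v < 5 else "mid" if v < 10 else "high"

def to_scale3(v):  # hypothetical coarsest labelling
    return "small" if v < 10 else "large"

for name, f in [("scale 2", to_scale2), ("scale 3", to_scale3)]:
    print(name, {x: f(v) for x, v in fine_values.items()})
# scale 2 {'x1': 'low', 'x2': 'mid', 'x3': 'high', 'x4': 'high'}
# scale 3 {'x1': 'small', 'x2': 'small', 'x3': 'large', 'x4': 'large'}
# Equivalence classes grow coarser up the scale hierarchy, which is the
# trade-off that optimal scale selection navigates against decision quality.
```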

5.
With the rapid growth of data sets nowadays, the object set of an information system may evolve over time as new information arrives. In order to deal with missing data and incomplete information in real decision problems, this paper presents a matrix-based incremental approach for dynamic incomplete information systems. Three matrices (a support matrix, an accuracy matrix, and a coverage matrix) under four different extended relations (the tolerance relation, similarity relation, limited tolerance relation, and characteristic relation) are introduced to incomplete information systems for inducing knowledge dynamically. An illustration shows the procedure of the proposed method for knowledge updating. Extensive experimental evaluations on nine UCI datasets and a big dataset with millions of records validate the feasibility of the proposed approach.
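Of the four extended relations, the tolerance relation is the simplest to sketch: two objects are tolerant if they agree wherever both values are known (the table below is invented; None marks a missing value):

```python
import numpy as np

# Invented incomplete information table; None marks a missing value.
table = [
    [1, None, 0],
    [1, 2,    0],
    [0, 2,    None],
]

def tolerance_matrix(table):
    """M[i, j] = 1 iff objects i and j agree on every attribute where both
    values are known (the tolerance relation on an incomplete system)."""
    n = len(table)
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            M[i, j] = all(a is None or b is None or a == b
                          for a, b in zip(table[i], table[j]))
    return M

print(tolerance_matrix(table))
# [[1 1 0]
#  [1 1 0]
#  [0 0 1]]
# An incremental variant would only recompute the rows/columns affected by
# newly arriving objects rather than rebuilding the whole matrix.
```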

6.
Reduction about approximation spaces of covering generalized rough sets
The introduction of covering generalized rough sets has made a substantial contribution to the traditional theory of rough sets. The notion of attribute reduction can be regarded as one of the strongest and most significant results in rough sets. However, the efforts made on attribute reduction of covering generalized rough sets are far from sufficient. In this work, covering reduction is examined and discussed. We first construct a new reduction theory by redefining the approximation spaces and the reducts of covering generalized rough sets. This theory is applicable to all types of covering generalized rough sets and generalizes some existing reduction theories; moreover, the new reduction improves upon the currently insufficient reducts of covering generalized rough sets. We then investigate in detail the procedure for obtaining the reducts of a covering. The reduction of a covering also provides a technique for data reduction in data mining.
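One classical notion behind covering reduction removes "reducible" members, i.e., those expressible as unions of other members of the covering; a minimal sketch of that baseline (the paper's redefined reducts are more general):

```python
def reduct(cover):
    """Repeatedly delete any member that is a union of other members."""
    cover = [frozenset(k) for k in cover]
    changed = True
    while changed:
        changed = False
        for k in cover:
            others = [m for m in cover if m != k and m <= k]
            if others and frozenset().union(*others) == k:
                cover.remove(k)   # k is reducible: drop it and rescan
                changed = True
                break
    return cover

print(reduct([{1}, {2}, {1, 2}, {2, 3}]))
# -> [frozenset({1}), frozenset({2}), frozenset({2, 3})]; {1, 2} was reducible.
```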

7.
Standard rough sets use equivalence classes as the granules that describe concepts. This paper weakens the requirement of an equivalence relation and builds a more general granular computing model on pansystems rough sets. Granular computing models on pansystems rough sets are induced through partitions and coverings of the universe.

8.
9.
To obtain a true hybrid framework for taking operational decisions from data, we extend the Algorithmic Inference approach to the Granular Computing paradigm. The key idea is that whether we need to make decisions, rather than mere computations, depends on the fact that the collected data are not sufficiently definite; rather, they are representative of whole sets of data that could be virtually observed, and we need to manage this indeterminacy. The distinguishing feature is that we face indeterminacy exactly where it affects the quality of the decision. This gives rise to a family of inference algorithms that can be tailored to many specific decision problems which are generally solved only in approximate ways. In the paper we discuss the bases of the paradigm and provide some examples of its implementation.

10.
Traditional c-means clustering partitions a group of objects into a number of non-overlapping sets. For a given dataset, rough sets provide a more flexible and objective representation than classical sets with their hard partitions or fuzzy sets with their subjective membership functions. Rough c-means clustering and its extensions have been introduced and successfully applied in many real-life applications in recent years. Each cluster is represented by a reasonable pair of lower and upper approximations. However, most available algorithms pay no attention to the influence of imbalanced spatial distribution within a cluster. We analyze the limitation of the iterative mean-calculation function, which assigns the same weight to all data objects in a lower or upper approximation. A hybrid imbalance measure combining distance and density for rough c-means clustering is defined, and a modified rough c-means clustering algorithm is presented in this paper. To evaluate the proposed algorithm, it has been applied to several real-world data sets from the UCI repository. The validity of the algorithm is demonstrated by the results of comparative experiments.
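For orientation, a compact Lingras-style rough c-means baseline is sketched below; the data, threshold eps, and weights are illustrative, and the paper's density-weighted modification is not reproduced:

```python
import numpy as np

def rough_cmeans(X, c=2, w_low=0.7, eps=1.3, iters=20, seed=0):
    """Lingras-style rough c-means: unambiguous points go to a cluster's lower
    approximation, ambiguous ones to the boundaries of all nearby clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    for _ in range(iters):
        lower = [[] for _ in range(c)]
        boundary = [[] for _ in range(c)]
        for x in X:
            d = np.linalg.norm(centers - x, axis=1)
            best = int(np.argmin(d))
            near = [k for k in range(c) if d[k] <= eps * d[best]]
            if len(near) == 1:
                lower[best].append(x)        # certainly in this cluster
            else:
                for k in near:
                    boundary[k].append(x)    # possibly in several clusters
        for k in range(c):
            if lower[k] and boundary[k]:
                centers[k] = (w_low * np.mean(lower[k], axis=0)
                              + (1 - w_low) * np.mean(boundary[k], axis=0))
            elif lower[k]:
                centers[k] = np.mean(lower[k], axis=0)
            elif boundary[k]:
                centers[k] = np.mean(boundary[k], axis=0)
    return centers

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
print(np.round(rough_cmeans(X), 2))  # two well-separated centers
```

Note that this baseline weighs every object in an approximation equally, which is exactly the limitation the paper's imbalance measure addresses.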

11.
Traditional cluster analysis has mainly been studied through different designs of similarity measures and choices of discriminant functions. Starting from quotient space theory and the principle of information granularity, this paper introduces a hierarchical structure and discusses the essence of cluster analysis, as well as fuzzy cluster analysis as a soft statistical method.

12.
Advances in fuzzy rough set theory
This paper introduces the concept of fuzzy rough sets and traces their development. Three stages in the construction of the theory are identified, characterized respectively by the generalization to fuzzy sets, the introduction of fuzzy logic operators, and the extension to two universes. The representative theories of each stage are analyzed and compared, and prospects for the future development of fuzzy rough sets are given.

13.
In the present paper, we concentrate on a class of multi-objective programming problems with random coefficients and present an application to the multi-item inventory problem. The P-model is proposed to obtain the maximum probability of the objective functions, and rough approximation is applied to handle the feasible set with random parameters. The fuzzy programming technique and a genetic algorithm are then applied to solve the resulting crisp programming problem. Finally, an application to Auchan's inventory system is given to show the efficiency of the proposed models and algorithms.

14.
A sequential pattern mining algorithm using rough set theory
Sequential pattern mining is a crucial but challenging task in many applications, e.g., analyzing the behavior of transaction data and discovering frequent patterns in time series. The task becomes difficult when valuable patterns are involved only locally or implicitly in noisy data. In this paper, we propose a method for mining such local patterns from sequences. Using rough set theory, we describe an algorithm for generating decision rules that take local patterns into account when arriving at a particular decision. To apply sequential data to rough set theory, the size of the local patterns is specified, allowing a set of sequences to be transformed into a sequential information system. We use the discernibility of decision classes to establish evaluation criteria for the decision rules in the sequential information system.
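The transformation into a sequential information system can be pictured as sliding a fixed-size window over each sequence, with window positions acting as condition attributes (a hypothetical sketch along the lines described above; the sequences are invented):

```python
def windows(sequence, w):
    """All length-w local patterns (windows) of a sequence."""
    return [tuple(sequence[i:i + w]) for i in range(len(sequence) - w + 1)]

# Two invented sequences; window size w = 3 fixes the local-pattern size.
seqs = {"s1": "abcab", "s2": "abcbc"}
for name, s in seqs.items():
    for i, win in enumerate(windows(s, 3)):
        print(name, i, win)   # one object per window in the resulting table
# Each object now has w condition attributes -- the window positions --
# so decision rules can be induced with ordinary rough-set discernibility.
```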

15.
Large-scale organizations have used social computing platforms for various purposes. This research focuses on how hospitals utilize these platforms to attract potential customers (the "extensivity" of a social computing platform) and to generate interest in specific topics (the "intensivity" of a platform). Specifically, we examine the effects of hospital size ("size") and of the time the social computing platform has been in existence ("time") on extensivity and intensivity. Our findings show that time is a significant variable on both dimensions, whereas size affects intensivity under certain conditions. We discuss the implications of these findings and set the stage for future research.

16.
17.
We present in this paper an improved non-smooth Discrete Element Method (DEM) in 3D based on the Non-Smooth Contact Dynamics (NSCD) method. We consider a three-dimensional collection of rigid particles (spheres) whose contacts can form or break during motion. Dry friction is modeled by Coulomb's law, which is typically non-associated; this non-associativity of the constitutive law poses numerical challenges. By adopting the bi-potential concept within the NSCD DEM framework, a faster and more robust time-stepping algorithm can be devised, with only one predictor-corrector step in which contact and friction are coupled. This contrasts with the classical method, where contact and friction are treated separately, leading to a time-stepping algorithm that involves two predictor-corrector steps. The algorithm has been implemented in a 3D version of the NSCD DEM software MULTICOR. Numerical applications show the robustness of the algorithm and the capabilities of the MULTICOR software for solving three-dimensional problems.

18.
Quine's theory New Foundations (NF) was introduced in [14]. This theory is finitely axiomatizable, as proved in [9]; a similar result is shown in [8] using a system called K. Particular subsystems of NF, inspired by [8] and [9], have models in ZF. Very little is known about subsystems of NF satisfying typical properties of ZF; for example, in [11] it is shown that the existence of some sets which appear naturally in ZF is an axiom independent of NF (see also [12]). Here we discuss a model of subsystems of NF in which there is a set that is a model of ZF. MSC: 03E70.

19.
This paper proposes a continuous covering location model with risk consideration. The investigated model extends discrete covering location models to continuous space. The objective function consists of installation and risk costs. Because the covering radius is uncertain, a customer satisfaction degree for the covering radius is introduced via fuzzy concepts. Since this uncertainty may create a risk of leaving customers uncovered, the risk cost is added to the objective function. The installation cost is assigned to a zone with a predetermined radius from its center. The model is solved by the fuzzy α-cut method. After solving the model for different α-values, the zones with the largest possibilities are determined for locating new facilities, and the best locations are calculated based on the obtained possibilities. The model is then solved to determine the best covering values. This paper also introduces a risk analysis method based on Response Surface Methodology (RSM) to incorporate risk management into location models. Finally, a numerical example illustrates the proposed model.
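For a triangular fuzzy covering radius, the α-cut is simply the interval of radii whose membership is at least α; a small sketch (the fuzzy number and α-values are invented):

```python
def alpha_cut(a, b, c, alpha):
    """alpha-cut of a triangular fuzzy number (a, b, c): the interval of
    values whose membership degree is at least alpha."""
    return a + alpha * (b - a), c - alpha * (c - b)

# Hypothetical fuzzy covering radius "about 3 km" with support [2, 5].
for alpha in (0.2, 0.5, 0.8):
    lo, hi = alpha_cut(2.0, 3.0, 5.0, alpha)
    print(f"alpha={alpha}: radius in [{lo:.2f}, {hi:.2f}]")
# Larger alpha narrows the admissible radii, trading coverage risk against
# customer satisfaction as in the model above.
```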

20.
The article considers estimating a parameter θ in an imprecise probability model which consists of coherent upper previsions. After defining a minimum distance estimator in this setup and summarizing its main properties, the focus lies on applications. It is shown that approximate minimum distances on the discretized sample space can be calculated by linear programming. After a discussion of some computational aspects, the estimator is applied in a simulation study consisting of two different models. Finally, the estimator is applied to a real data set in a linear regression model.
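The classical (precise-probability) minimum distance idea can be sketched as follows: pick the parameter whose model cdf is closest, in Kolmogorov distance, to the empirical cdf. A plain grid search stands in for the paper's linear-programming approach on upper previsions, and the exponential model is invented for illustration:

```python
import numpy as np

def kolmogorov_distance(sample, cdf):
    """Sup-distance between the empirical cdf of `sample` and a model cdf."""
    xs = np.sort(sample)
    n = len(xs)
    F = cdf(xs)
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(n) / n))

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=200)   # true rate = 0.5
thetas = np.linspace(0.1, 2.0, 191)             # candidate rates
dists = [kolmogorov_distance(sample, lambda x, t=t: 1 - np.exp(-t * x))
         for t in thetas]
print("minimum distance estimate:", round(thetas[int(np.argmin(dists))], 2))
```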
