Similar Documents
20 similar documents found.
1.
Attribute reduction is very important in rough set-based data analysis (RSDA) because it can be used to simplify the induced decision rules without reducing the classification accuracy. The notion of reduct plays a key role in rough set-based attribute reduction. In rough set theory, a reduct is generally defined as a minimal subset of attributes that can classify the same domain of objects as unambiguously as the original set of attributes. Nevertheless, from a relational perspective, RSDA relies on a kind of dependency principle. That is, the relationship between the class labels of a pair of objects depends on component-wise comparison of their condition attributes. The larger the number of condition attributes compared, the greater the probability that the dependency will hold. Thus, elimination of condition attributes may cause more object pairs to violate the dependency principle. Based on this observation, a reduct can be defined alternatively as a minimal subset of attributes that does not increase the number of object pairs violating the dependency principle. While the alternative definition coincides with the original one in ordinary RSDA, it is more easily generalized to cases of fuzzy RSDA and relational data analysis.
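A brute-force sketch of this alternative definition (the function names and toy table below are ours, for illustration only): count the object pairs that are indiscernible on a candidate attribute subset yet carry different decision labels, and accept a subset as a reduct when that count does not grow beyond the count for the full attribute set.

```python
from itertools import combinations

def violations(table, attrs):
    """Count object pairs that violate the dependency principle:
    indiscernible on attrs yet labelled with different decisions."""
    return sum(
        1
        for (x, dx), (y, dy) in combinations(table, 2)
        if all(x[a] == y[a] for a in attrs) and dx != dy
    )

def reducts(table, all_attrs):
    """Enumerate minimal attribute subsets whose violation count does
    not exceed that of the full attribute set (brute force)."""
    base = violations(table, all_attrs)
    found = []
    for k in range(1, len(all_attrs) + 1):
        for subset in combinations(all_attrs, k):
            if violations(table, subset) == base and \
               not any(set(r) <= set(subset) for r in found):
                found.append(subset)
    return found

# Toy decision table: (condition-attribute dict, decision label).
table = [
    ({"a": 1, "b": 0, "c": 1}, "yes"),
    ({"a": 1, "b": 1, "c": 0}, "no"),
    ({"a": 0, "b": 0, "c": 1}, "yes"),
    ({"a": 0, "b": 1, "c": 0}, "no"),
]
print(reducts(table, ["a", "b", "c"]))  # -> [('b',), ('c',)]
```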

2.
Rough set theory is a data mining approach for managing vagueness that is capable of discovering important facts hidden in data. The literature indicates that current rough set based approaches cannot guarantee that the classification of a decision table is credible, and that they cannot generate robust decision rules when new attributes are incrementally added. In this study, an incremental attribute-oriented rule-extraction algorithm is proposed to address this deficiency, which is commonly observed in the literature on decision rule induction. The proposed approach handles incremental attributes based on the alternative rule extraction algorithm (AREA), which was presented for discovering preference-based rules according to the reducts with the maximum strength index (SI), specifically for the case where the desired reducts are not necessarily unique, since several reducts may share the same SI value. Under AREA, an alternative rule is defined as a rule that holds a preference identical to that of the original decision rule and may be more attractive to a decision maker than the original one. The proposed approach can operate effectively when new attributes are added to the database/information system: it is not required to re-compute the updated data set from scratch, as at the initial stage. The proposed algorithm also excludes repetitive rules during the solution search stage, whereas most rule induction approaches generate such repetitive rules. It can thus efficiently and effectively generate complete, robust, and non-repetitive decision rules. The rules derived from the data set indicate how to study this problem effectively in further investigations.

3.
In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which is comprised of a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
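As a sketch of the first step, the dominance degree between two objects with interval-valued evaluations can be computed per criterion and then aggregated. The possibility-degree formula and the min aggregation below are common choices that we assume for illustration; the paper's exact definitions may differ.

```python
def dom_degree(a, b):
    """Possibility degree that interval a = (lo, hi) is >= interval b.
    A common interval-comparison formula, assumed here for
    illustration."""
    (al, au), (bl, bu) = a, b
    width = (au - al) + (bu - bl)
    if width == 0:
        return 1.0 if al >= bl else 0.0
    return min(max((au - bl) / width, 0.0), 1.0)

def dominance(x, y):
    """Degree to which object x dominates y: the weakest
    criterion-wise dominance degree (min aggregation assumed)."""
    return min(dom_degree(xi, yi) for xi, yi in zip(x, y))

# Two objects evaluated on two criteria by intervals.
x = [(3, 5), (2, 4)]
y = [(2, 3), (1, 2)]
print(dominance(x, y))  # 1.0: x certainly dominates y here
```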

4.
The original rough set approach proved to be very useful in dealing with inconsistency problems following from information granulation. It operates on a data table composed of a set U of objects (actions) described by a set Q of attributes. Its basic notions are: the indiscernibility relation on U, lower and upper approximations of either a subset or a partition of U, dependence and reduction of attributes from Q, and decision rules derived from lower approximations and boundaries of subsets identified with decision classes. The original rough set idea fails, however, when preference orders of attribute domains (criteria) are to be taken into account. Precisely, it cannot handle inconsistencies following from violation of the dominance principle. This inconsistency is characteristic of preferential information used in multicriteria decision analysis (MCDA) problems, like sorting, choice or ranking. In order to deal with this kind of inconsistency, a number of methodological changes to the original rough set theory are necessary. The main change is the substitution of the indiscernibility relation by a dominance relation, which permits approximation of ordered sets in multicriteria sorting. To approximate preference relations in multicriteria choice and ranking problems, another change is necessary: substitution of the data table by a pairwise comparison table, where each row corresponds to a pair of objects described by binary relations on particular criteria. In all those MCDA problems, the new rough set approach ends with a set of decision rules playing the role of a comprehensive preference model. It is more general than the classical functional or relational model and it is more understandable for the users because of its natural syntax. In order to work out a recommendation in one of the MCDA problems, we propose exploitation procedures for the set of decision rules. Finally, some other recently obtained results are given: rough approximations by means of similarity relations, rough set handling of missing data, comparison of the rough set model with Sugeno and Choquet integrals, and results on the equivalence of a decision rule preference model and a conjoint measurement model which is neither additive nor transitive.
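The dominance principle at the heart of this inconsistency is simple to state operationally: an object that is at least as good as another on every criterion must not be assigned to a strictly worse class. A minimal check, in our own notation:

```python
def violates_dominance(x, y):
    """x and y are (criteria-vector, ordinal class) pairs. The pair
    violates the dominance principle when x is at least as good as y
    on every criterion yet is assigned to a strictly worse class."""
    (cx, dx), (cy, dy) = x, y
    return all(a >= b for a, b in zip(cx, cy)) and dx < dy

print(violates_dominance(([3, 2], 1), ([1, 1], 2)))  # True: inconsistent
```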

5.
Among the large number of genes present in microarray gene expression data, only a small fraction is effective for performing a certain diagnostic test. In this regard, a new feature selection algorithm is presented based on rough set theory. It selects a set of genes from microarray data by maximizing the relevance and significance of the selected genes. A theoretical analysis is presented to justify the use of both relevance and significance criteria for selecting a reduced gene set with high predictive accuracy. The importance of rough set theory for computing both the relevance and the significance of the genes is also established. The performance of the proposed algorithm, along with a comparison with other related methods, is studied using the predictive accuracy of the K-nearest neighbor rule and support vector machine on five cancer and two arthritis microarray data sets. Among the seven data sets, the proposed algorithm attains 100% predictive accuracy for three cancer and two arthritis data sets, while the two existing rough set based algorithms attain this accuracy only for one cancer data set.
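A hedged sketch of one such relevance-plus-significance selection, using the classical rough set dependency degree on (assumed already discretized) expression values; the greedy objective and the weight `w` are placeholders, not the paper's exact criterion.

```python
from collections import defaultdict

def dependency(data, labels, attrs):
    """Rough set dependency degree: the fraction of samples whose
    equivalence class (w.r.t. attrs) is consistent in label."""
    blocks = defaultdict(set)
    for i, row in enumerate(data):
        blocks[tuple(row[a] for a in attrs)].add(i)
    pos = sum(len(ix) for ix in blocks.values()
              if len({labels[i] for i in ix}) == 1)
    return pos / len(data)

def greedy_select(data, labels, genes, k, w=0.5):
    """Greedy gene selection: at each step add the gene maximizing a
    weighted sum of relevance (its own dependency) and significance
    (dependency gain over the current set). The weight w and the
    linear combination are illustrative assumptions."""
    selected = []
    for _ in range(k):
        best = max((g for g in genes if g not in selected),
                   key=lambda g: w * dependency(data, labels, [g])
                   + (1 - w) * (dependency(data, labels, selected + [g])
                                - dependency(data, labels, selected)))
        selected.append(best)
    return selected

data = [{"g1": 0, "g2": 1}, {"g1": 1, "g2": 1}, {"g1": 0, "g2": 0}]
labels = ["neg", "pos", "neg"]
print(greedy_select(data, labels, ["g1", "g2"], k=1))  # ['g1']
```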

6.
We consider the problem of multi-criteria classification. In this problem, a set of “if … then …” decision rules is used as a preference model to classify objects evaluated on a set of criteria and regular attributes. Given a sample of classification examples, called the learning data set, the rules are induced from dominance-based rough approximations of preference-ordered decision classes, according to the Variable Consistency Dominance-based Rough Set Approach (VC-DRSA). The main question answered in this paper is how to classify an object using decision rules when it is covered by (i) no rule, (ii) exactly one rule, or (iii) several rules. The proposed classification scheme can be applied both to the learning data set (to restore the classification known from the examples) and to a testing data set (to predict the classification of new objects). A hypothetical example from the area of telecommunications is used to illustrate the proposed classification method and to compare it with some previous proposals.
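The three coverage cases can be handled along the following lines; the conflict-resolution strategy here (strength-weighted voting, with a default class for uncovered objects) is one plausible scheme, not necessarily the one proposed in the paper.

```python
def classify(obj, rules, default):
    """Classify obj with decision rules; each rule is a triple
    (condition predicate, class, strength)."""
    matched = [(cls, s) for cond, cls, s in rules if cond(obj)]
    if not matched:                      # case (i): no rule covers obj
        return default
    if len(matched) == 1:                # case (ii): exactly one rule
        return matched[0][0]
    votes = {}                           # case (iii): several rules
    for cls, s in matched:
        votes[cls] = votes.get(cls, 0.0) + s
    return max(votes, key=votes.get)

rules = [
    (lambda o: o["income"] >= 3, "accept", 0.8),
    (lambda o: o["debt"] >= 2,   "reject", 0.6),
]
print(classify({"income": 4, "debt": 1}, rules, default="reject"))
```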

7.
A Condorcet domain is a subset of the set of linear orders on a finite set of candidates (the alternatives in a vote) such that, if voters' preferences are linear orders belonging to this subset, the simple majority rule does not yield cycles. It is well known that the set of linear orders forms the Bruhat lattice. We prove that a maximal Condorcet domain is a distributive sublattice of the Bruhat lattice. An explicit lattice formula for the simple majority rule is given. We introduce the notion of a symmetric Condorcet domain and characterize symmetric Condorcet domains of maximal size.
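The defining property, absence of majority cycles, can be checked directly for a small preference profile; the brute-force sketch below (our own naming) illustrates it on Condorcet's classic cyclic example, without touching the paper's lattice-theoretic results.

```python
from itertools import permutations

def majority_acyclic(profile, candidates):
    """True iff the strict simple-majority relation over profile
    (a list of linear orders, best candidate first) has no cycle."""
    def prefers(order, a, b):
        return order.index(a) < order.index(b)
    beats = {(a, b) for a, b in permutations(candidates, 2)
             if sum(prefers(o, a, b) for o in profile) * 2 > len(profile)}
    # Detect a cycle in the majority digraph by depth-first search.
    def reachable(src, dst, seen=()):
        return any(y == dst or (y not in seen and
                   reachable(y, dst, seen + (y,)))
                   for x, y in beats if x == src)
    return not any(reachable(c, c) for c in candidates)

# Condorcet's paradox: three voters whose majorities cycle a>b>c>a.
profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(majority_acyclic(profile, ["a", "b", "c"]))   # False
```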

8.
This paper is concerned with the problem of deciding whether a semialgebraic set S of an algebraic variety X over R is basic. Furthermore, in such a case, we determine the sharp number of inequalities defining S. For that, it suffices to desingularize X, as well as the boundary of S, and then ask the same question for the trace of S on its boundary. In this way, after a finite number of blowing-ups, we lower the dimension of the data, and by induction we obtain a finite decision procedure to solve this problem. The decidability of other known criteria is also analyzed.

9.
A method is proposed for solving optimization problems with continuous variables and a function taking a large finite set of values. Problems of this type arise in the multicriteria construction of a control rule for a discrete-time dynamical system whose performance criteria coincide with the number of violations of requirements imposed on the system. The rule depends on a finite set of parameters whose set of admissible values defines a collection of admissible control rules. An example is the problem of choosing a control rule for a cascade of reservoirs. The optimization method is based on solving a modified problem in which the original function is replaced by a continuous ersatz function. A theorem on the relation between the average-minimal values of the original and ersatz functions is proved. Optimization problems are solved with power-law ersatz functions, and the influence exerted by the exponent on the quality of the solution is determined. It is experimentally shown that the solutions produced by the method are of fairly high quality.
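To illustrate the construction, the sketch below replaces a piecewise-constant count-of-violations objective with a smooth power-law ersatz and minimizes it with a derivative-free method; the requirement functions, the exponent, and the use of SciPy's Nelder-Mead are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Requirements g_i(x) <= 0; the true objective counts violations and is
# piecewise constant, so gradient-based methods stall on it.
g = [lambda x: x[0] + x[1] - 3.0,   # x0 + x1 <= 3
     lambda x: 1.0 - x[0],          # x0 >= 1
     lambda x: x[1] - 2.0]          # x1 <= 2

def violations(x):
    return sum(gi(x) > 1e-8 for gi in g)

def ersatz(x, p=2.0):
    """Power-law ersatz: a continuous surrogate that is zero exactly
    where no requirement is violated (the exponent p is an assumption)."""
    return sum(max(0.0, gi(x)) ** p for gi in g)

res = minimize(ersatz, x0=np.array([5.0, 5.0]), method="Nelder-Mead")
print(res.x, violations(res.x))  # a feasible point, 0 violations
```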

10.
The paper describes a methodology for selecting the most relevant clinical features and for generating decision rules based on the selected attributes from a medical data set with missing values. These rules will help emergency room (ER) medical personnel in the triage (initial assessment) of children with abdominal pain. The presented approach is based on rough set theory, extended with the ability to handle missing values and with fuzzy measures that allow estimating the value of the information contributed by particular attributes. The proposed methodology was applied to a data set containing records of patients with abdominal pain, collected in the emergency room of the cooperating hospital. The generated rules will be embedded into a computer decision support system to be used in the emergency room. A system based on the results of the presented approach should improve the triage accuracy of the emergency room staff and reduce management costs.
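One standard way to extend indiscernibility to missing values is a tolerance relation in which an absent value matches anything; the sketch below assumes this Kryszkiewicz-style extension, which may differ from the paper's exact mechanism.

```python
def similar(x, y, attrs, missing=None):
    """Tolerance relation for incomplete data: two cases agree on an
    attribute when the values match or either value is missing. This
    extension is our assumption, not necessarily the paper's."""
    return all(x[a] == y[a] or missing in (x[a], y[a]) for a in attrs)

p1 = {"pain_site": "rlq", "fever": None}   # fever not recorded
p2 = {"pain_site": "rlq", "fever": "yes"}
print(similar(p1, p2, ["pain_site", "fever"]))  # True
```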

11.
Many rule systems generated from decision trees (like CART, ID3, C4.5, etc.) or from direct frequency-counting methods (like Apriori) contain rules that are non-significant or even contradictory. Nevertheless, most papers on this subject demonstrate that important reductions can be made to generated rule sets by searching for and removing redundancies and conflicts and by simplifying similarities between rules. The objective of this paper is to present an algorithm (RBS: Reduction Based on Significance) for allocating a significance value to each rule in the system, so that experts may select the rules that should be considered preferable and understand the exact degree of correlation between the different rule attributes. Significance is calculated from the antecedent frequency and rule frequency parameters of each rule; if the former is above a minimal level and the rule frequency is in a critical interval, its significance ratio is computed by the algorithm. The critical boundaries are calculated by an incremental method, and the rule space is divided according to them. The significance function is defined over these intervals. As with other methods of rule reduction, our approach can be applied to rule sets generated by decision trees or frequency-counting algorithms, independently and after the rule set has been created. Three simulated data sets are used to carry out a computational experiment. Other standard data sets from the UCI Machine Learning Repository and two particular data sets with expert interpretations are also used, in order to obtain greater consistency. The proposed method yields a smaller and more easily understandable rule set than the original, and highlights the most significant attribute correlations, quantifying their influence on the consequent attribute.
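A heavily hedged sketch of the significance computation as we read it: the thresholds, the critical interval, and the ratio below are illustrative placeholders, since the paper derives its critical boundaries incrementally from the data.

```python
def significance(rule_freq, antecedent_freq, n,
                 min_ante=0.05, low=0.01, high=0.2):
    """Placeholder RBS-style significance. It is computed only when
    the antecedent is frequent enough and the rule frequency falls in
    a critical interval (low, high); the constants and the confidence
    ratio returned here are illustrative assumptions, not the paper's
    incrementally derived boundaries."""
    fa, fr = antecedent_freq / n, rule_freq / n
    if fa < min_ante or not (low <= fr <= high):
        return 0.0
    return fr / fa  # how strongly the antecedent implies the consequent

print(significance(rule_freq=40, antecedent_freq=80, n=1000))  # 0.5
```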

12.
This work promotes a novel point of view in rough set applications: rough set rule learning for ordinal prediction based on a rough graphical representation of the rules. Our approach tackles two barriers of rule learning. Unlike typical rule learning, we construct ordinal predictions with a mathematical approach, rough sets, rather than with rule quality measures alone. This construction results in few but significant rules. Moreover, the rules are given in terms of ordinal predictions rather than unique values. This study also focuses on advancing rough set theory in favor of soft computing. Both the theoretical foundations and a designed architecture are presented. The features of our proposed approach are illustrated using an experiment in survival analysis. A case study has been performed on melanoma data. The results demonstrate that this innovative system improves rule learning both in the computing performance of finding the rules and in the usefulness of the derived rules.

13.
This paper clarifies the connection between multiple criteria decision-making and decision under uncertainty in a qualitative setting relying on a finite value scale. While their mathematical formulations are very similar, the underlying assumptions differ and the latter problem turns out to be a special case of the former. Sugeno integrals are very general aggregation operations that can represent preference relations between uncertain acts or between multifactorial alternatives where attributes share the same totally ordered domain. This paper proposes a generalized form of the Sugeno integral that can cope with attributes having distinct domains via the use of qualitative utility functions. It is shown that in the case of decision under uncertainty, this model corresponds to state-dependent preferences on consequences of acts. Axiomatizations of the corresponding preference functionals are proposed in the cases where uncertainty is represented by possibility measures, by necessity measures, and by general order-preserving set-functions, respectively. This is achieved by weakening previously proposed axiom systems for Sugeno integrals.
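For concreteness, the classical single-scale Sugeno integral that the paper generalizes can be computed directly from its max-min formula; the toy capacity below is ours.

```python
def sugeno(scores, mu):
    """Sugeno integral of scores (attribute -> grade in [0, 1]) w.r.t.
    the capacity mu, a dict mapping frozensets of attributes to [0, 1].
    Classic formula: max over i of min(f(x_(i)), mu(A_(i))), with the
    grades sorted in ascending order and A_(i) the upper level set."""
    items = sorted(scores.items(), key=lambda kv: kv[1])
    best = 0.0
    for i, (_, grade) in enumerate(items):
        upper = frozenset(a for a, _ in items[i:])   # A_(i)
        best = max(best, min(grade, mu[upper]))
    return best

# Toy monotone capacity on two criteria, with mu(all) = 1.
mu = {frozenset(): 0.0, frozenset({"x"}): 0.3,
      frozenset({"y"}): 0.6, frozenset({"x", "y"}): 1.0}
print(sugeno({"x": 0.8, "y": 0.4}, mu))   # 0.4
```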

14.
This paper presents a review of the extended finite element method (X-FEM) for computational fracture mechanics. The work is dedicated to discussing the basic ideas and formulation of the newly developed X-FEM method. The advantage of the method is that the element topology need not conform to the surfaces of the cracks. Moreover, X-FEM coupled with the level set method (LSM) makes possible the accurate solution of engineering problems in complex domains, which may be practically impossible to solve using the standard finite element method.
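For reference, the enriched displacement approximation at the core of X-FEM for cracks is commonly written as follows (the standard Heaviside-plus-branch-function form; details such as the choice of enriched node sets vary between formulations):

```latex
u^{h}(\mathbf{x}) \;=\; \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
\;+\; \sum_{j \in J} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
\;+\; \sum_{k \in K} N_k(\mathbf{x}) \sum_{l=1}^{4} F_l(\mathbf{x})\,\mathbf{b}_{kl}
```

where the $N_i$ are standard shape functions, $H$ is the generalized Heaviside function across the crack faces (nodes in $J$), and the $F_l$ are the near-tip branch functions $\sqrt{r}\,\{\sin\tfrac{\theta}{2},\ \cos\tfrac{\theta}{2},\ \sin\tfrac{\theta}{2}\sin\theta,\ \cos\tfrac{\theta}{2}\sin\theta\}$ (nodes in $K$).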

15.
16.
Many thin-plate and thin-shell problems are set on plane reference domains with a curved boundary. Their approximation by conforming finite element methods requires C¹ curved finite elements entirely compatible with the associated C¹ rectilinear finite elements. In this contribution we introduce a C¹ curved finite element compatible with the P5 Argyris element, we study its approximation properties, and we then use such an element to approximate the solution of thin-plate or thin-shell problems set on a plane domain with a curved boundary. We prove convergence and obtain a priori asymptotic error estimates which show the very high degree of accuracy of the method. Moreover, we obtain criteria to observe when choosing the numerical integration schemes in order to preserve the order of the error estimates obtained for exact integration.

17.
We study rule induction from two decision tables as a basis for rough set analysis of more than one decision table. We regard the rule induction process as enumerating minimal conditions satisfied by positive examples but not by negative examples and/or negative decision rules. From this point of view, we show that seven kinds of rule induction are conceivable for a single decision table. We point out that the set of all decision rules from two decision tables can be split into two levels: a first-level decision rule is positively supported by one decision table and does not conflict with the other decision table, while a second-level decision rule is positively supported by both decision tables. For each level, we propose rule induction methods based on decision matrices. Through these discussions, we demonstrate that many kinds of rule induction are conceivable.
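A sketch of the decision-matrix idea for a single table (our own naming): for each positive object, each matrix entry lists the attribute-value descriptors discerning it from one negative object, and minimal rule conditions are built by choosing one descriptor per entry and pruning supersets. The paper's two-table levels add conditions involving the second table, which are omitted here.

```python
from itertools import product

def decision_matrix_rules(pos, neg, attrs):
    """For each positive object, build the row of matrix entries
    (descriptors discerning it from each negative object), then take
    one descriptor per entry and keep only minimal condition sets."""
    rules = set()
    for x in pos:
        entries = [[(a, x[a]) for a in attrs if x[a] != y[a]] for y in neg]
        if any(not e for e in entries):
            continue            # x is indiscernible from some negative y
        candidates = {frozenset(choice) for choice in product(*entries)}
        for c in candidates:
            if not any(o < c for o in candidates):   # keep minimal sets
                rules.add(c)
    return rules

pos = [{"a": 1, "b": 0}]
neg = [{"a": 0, "b": 0}, {"a": 0, "b": 1}]
print(decision_matrix_rules(pos, neg, ["a", "b"]))  # {('a', 1)}: a=1 -> positive
```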

18.
In this paper, on the basis of Lyapunov stability theory and a finite-time stability lemma, the finite-time synchronization problem for memristive neural networks with time-varying delays is studied via two control methods. First, a discontinuous state-feedback control rule containing an integral term of the square sum of the synchronization error, and a discontinuous adaptive control rule, are designed for realizing synchronization of drive-response memristive neural networks in finite time. Then, by using some important inequalities and defining suitable Lyapunov functions, some algebraic sufficient criteria guaranteeing finite-time synchronization are deduced for the drive-response memristive neural networks. Furthermore, we give estimates of the upper bounds of the settling time of finite-time synchronization. Lastly, the effectiveness of the obtained sufficient criteria is validated by simulation.
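Settling-time estimates of this kind usually rest on the standard finite-time stability lemma; a commonly used form (we assume this is essentially the lemma invoked) reads:

```latex
\dot{V}(t) \le -\alpha\, V(t)^{\eta}, \quad \alpha > 0,\ 0 < \eta < 1
\;\Longrightarrow\;
V(t) \equiv 0 \ \text{for } t \ge T^{*}, \qquad
T^{*} \le \frac{V(0)^{\,1-\eta}}{\alpha\,(1-\eta)}.
```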

19.
The subject of this paper is to study the problem of the minimum distance to the complement of a convex set. Nirenberg has stated a duality theorem treating the minimum norm problem for a convex set. We state a duality result which presents some analogy with the Nirenberg theorem, and we apply this result to polyhedral convex sets. First, we assume that the polyhedral set is expressed as the intersection of some finite collection of m given half-spaces. We show that a global solution is determined by solving m convex programs. If the polyhedral set is expressed as the convex hull of a given finite set of extreme points, we show that a global minimum for a polyhedral norm is obtained by solving a finite number of linear programs.
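In the Euclidean case, the m convex programs for a half-space representation are closed-form: the complement of the polyhedron is a union of half-spaces, so from a point inside, the distance to the complement is the smallest distance to any bounding hyperplane. A minimal sketch (names and the toy region are ours):

```python
import numpy as np

def dist_to_complement(A, b, x):
    """Euclidean distance from a point x inside the polyhedron
    {y : A y <= b} to the complement of the polyhedron: the minimum
    over the m per-half-space subproblems, each closed-form here."""
    slack = b - A @ x                          # nonnegative inside
    return float(np.min(slack / np.linalg.norm(A, axis=1)))

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([2.0, 2.0, 0.0])                  # x <= 2, y <= 2, x + y >= 0
print(dist_to_complement(A, b, np.array([1.0, 1.0])))  # 1.0
```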

20.
We present a new axiomatization of logic for dependencies in data with grades, which includes ordinal data and data over domains with similarity relations, and an efficient reasoning method that is based on the axiomatization. The logic has its ordinary-style completeness characterizing the ordinary, bivalent entailment as well as the graded-style completeness characterizing the general, possibly intermediate degrees of entailment. A core of the method is a new inference rule, called the rule of simplification, from which we derive convenient equivalences that allow us to simplify sets of dependencies while retaining semantic closure. The method makes it possible to compute a closure of a given collection of attributes with respect to a collection of dependencies, decide whether a given dependency is entailed by a given collection of dependencies, and more generally, compute the degree to which the dependency is entailed by a collection of dependencies. We also present an experimental evaluation of the presented method.
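As a concrete instance of the closure computation, the ordinary bivalent special case reduces to the classical attribute-closure algorithm; the graded version would additionally propagate degrees, which the sketch below (our own naming) omits.

```python
def closure(attrs, deps):
    """Attribute-set closure w.r.t. dependencies (the ordinary,
    bivalent special case): repeatedly fire every dependency whose
    left-hand side is contained in the current set."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in deps:
            if lhs <= closed and not rhs <= closed:
                closed |= rhs
                changed = True
    return closed

deps = [({"A"}, {"B"}), ({"B", "C"}, {"D"})]
print(closure({"A", "C"}, deps))   # {'A', 'B', 'C', 'D'}
# A dependency X -> Y is entailed iff Y is inside closure(X, deps).
```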
