Similar Documents
20 similar documents found (search time: 109 ms)
1.
2.
《Fuzzy Sets and Systems》2004,141(1):47-58
This paper presents a novel boosting algorithm for genetic learning of fuzzy classification rules. The method is based on the iterative rule learning approach to fuzzy rule base system design. The fuzzy rule base is generated in an incremental fashion, in that the evolutionary algorithm optimizes one fuzzy classifier rule at a time. The boosting mechanism reduces the weight of those training instances that are classified correctly by the new rule. Therefore, the next rule generation cycle focuses on fuzzy rules that account for the currently uncovered or misclassified instances. The weight of a fuzzy rule reflects the relative strength the boosting algorithm assigns to the rule's class when it aggregates the cast votes. The approach is compared with other classification algorithms on a number of problem sets from the UCI repository.
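The instance-reweighting step described above can be sketched as follows. This is a minimal illustration, assuming a fixed down-weighting factor `beta` and a `rule` callable that reports whether the new rule classifies an instance correctly; both names are hypothetical, not taken from the paper.

```python
def reweight(instances, weights, rule, beta=0.5):
    """Shrink the weights of training instances the new rule classifies
    correctly, so the next rule-generation cycle focuses on the currently
    uncovered or misclassified ones."""
    return [w * beta if rule(x) else w
            for x, w in zip(instances, weights)]
```

In the iterative rule learning loop, the evolutionary algorithm would then evaluate candidate rules against these updated weights.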

3.
This paper examines the interpretability-accuracy tradeoff in fuzzy rule-based classifiers using a multiobjective fuzzy genetics-based machine learning (GBML) algorithm. Our GBML algorithm is a hybrid version of the Michigan and Pittsburgh approaches, implemented in the framework of evolutionary multiobjective optimization (EMO). Each fuzzy rule is represented by its antecedent fuzzy sets as an integer string of fixed length. Each fuzzy rule-based classifier, which is a set of fuzzy rules, is represented as a concatenated integer string of variable length. Our GBML algorithm simultaneously maximizes the accuracy of rule sets and minimizes their complexity. The accuracy is measured by the number of correctly classified training patterns, while the complexity is measured by the number of fuzzy rules and/or the total number of antecedent conditions of fuzzy rules. We examine the interpretability-accuracy tradeoff for training patterns through computational experiments on some benchmark data sets. A clear tradeoff structure is visualized for each data set. We also examine the interpretability-accuracy tradeoff for test patterns. Due to overfitting to the training patterns, a clear tradeoff structure is not always obtained in computational experiments for test patterns.

4.
A Dual-Objective Evolutionary Algorithm for Rules Extraction in Data Mining
This paper presents a dual-objective evolutionary algorithm (DOEA) for extracting multiple decision rule lists in data mining, which aims at satisfying the classification criteria of high accuracy and ease of user comprehension. Unlike existing approaches, the algorithm incorporates the concept of Pareto dominance to evolve a set of non-dominated decision rule lists, each having a different classification accuracy and number of rules over a specified range. The classification results of the DOEA are analyzed and compared with existing rule-based and non-rule-based classifiers on 8 test problems from the UCI Machine Learning Repository. It is shown that the DOEA produces comprehensible rules with classification accuracy competitive with many methods in the literature. Results obtained from box plots and t-tests further examine its invariance to random partitioning of the datasets. An erratum to this article is available.
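Pareto dominance over the two criteria (maximize accuracy, minimize rule count) can be sketched as below; representing each rule list as an `(accuracy, n_rules)` pair is an assumption made for illustration.

```python
def dominates(a, b):
    """a and b are (accuracy, n_rules) pairs: higher accuracy is better,
    fewer rules are better. a dominates b when it is at least as good on
    both criteria and strictly better on at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def non_dominated(solutions):
    """Keep only the rule lists that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```

The evolved population approximates this non-dominated set, exposing the accuracy/comprehensibility trade-off to the user.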

5.
When combining classifiers in the Dempster-Shafer framework, Dempster's rule is generally used. However, this rule assumes the classifiers to be independent. This paper investigates the use of other operators for combining non-independent classifiers, including the cautious rule and, more generally, t-norm based rules whose behavior ranges between Dempster's rule and the cautious rule. Two strategies are investigated for learning an optimal combination scheme based on a parameterized family of t-norms. The first one learns a single rule by minimizing an error criterion. The second strategy is a two-step procedure, in which groups of classifiers with similar outputs are first identified using a clustering algorithm. Then, within- and between-cluster rules are determined by minimizing an error criterion. Experiments with various synthetic and real data sets demonstrate the effectiveness of both the single-rule and two-step strategies. Overall, optimizing a single t-norm based rule yields better results than using a fixed rule, including Dempster's rule, and the two-step strategy brings further improvements.
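For reference, Dempster's rule for two mass functions over a finite frame can be sketched as follows; the cautious and t-norm based rules studied in the paper replace this product-based conjunctive combination. Mass functions are modeled here as dicts mapping focal sets to masses.

```python
def dempster(m1, m2):
    """Combine two mass functions (frozenset -> mass) by Dempster's rule:
    multiply the masses of intersecting focal sets, then renormalize by
    the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

This combination treats the two items of evidence as independent, which is exactly the assumption the paper relaxes.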

6.
Reinforcement learning schemes perform direct on-line search in control space. This makes them appropriate for modifying control rules to obtain improvements in the performance of a system. The effectiveness of a reinforcement learning strategy is studied here through the training of a learning classifier system (LCS) that controls the movement of an autonomous vehicle in simulated paths including left and right turns. The LCS comprises a set of condition-action rules (classifiers) that compete to control the system and evolve by means of a genetic algorithm (GA). Evolution and operation of classifiers depend upon an appropriate credit assignment mechanism based on reinforcement learning. Different design options and the role of various parameters have been investigated experimentally. The performance of vehicle movement under the proposed evolutionary approach is superior compared with that of other (neural) approaches based on reinforcement learning that have been applied previously to the same benchmark problem.
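The credit assignment at the heart of such an LCS can be reduced to a one-line strength update; this is a generic reinforcement sketch (the learning rate is illustrative), not the exact mechanism of the paper.

```python
def update_strength(strength, reward, rate=0.1):
    """Move the strength of the classifier that just controlled the
    system toward the reward it earned. The GA later uses these
    strengths as fitness when evolving the rule population."""
    return strength + rate * (reward - strength)
```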

7.
A fuzzy random forest
When individual classifiers are combined appropriately, a statistically significant increase in classification accuracy is usually obtained. Multiple classifier systems are the result of combining several individual classifiers. Following Breiman's methodology, this paper proposes a multiple classifier system based on a "forest" of fuzzy decision trees, i.e., a fuzzy random forest. This approach combines the robustness of multiple classifier systems, the power of randomness to increase the diversity of the trees, and the flexibility of fuzzy logic and fuzzy sets for imperfect data management. Various combination methods to obtain the final decision of the multiple classifier system are proposed and compared. Some of them are weighted combination methods, which weight the decisions of the different elements of the multiple classifier system (leaves or trees). A comparative study with several datasets is made to show the efficiency of the proposed multiple classifier system and the various combination methods. The proposed multiple classifier system exhibits good classification accuracy, comparable to that of the best classifiers when tested on conventional data sets. However, unlike other classifiers, the proposed classifier maintains a similar accuracy when tested on imperfect datasets (with missing and fuzzy values) and on datasets with noise.
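One of the weighted combination methods can be sketched as a weighted vote over per-tree class supports; the data layout below is an assumption for illustration, not the paper's exact formulation.

```python
def weighted_forest_vote(tree_outputs, tree_weights):
    """tree_outputs: one dict (class -> support degree) per fuzzy tree;
    tree_weights: one weight per tree. Return the class with the largest
    weighted total support."""
    scores = {}
    for out, w in zip(tree_outputs, tree_weights):
        for cls, support in out.items():
            scores[cls] = scores.get(cls, 0.0) + w * support
    return max(scores, key=scores.get)
```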

8.
Diversity among classifiers is widely acknowledged as an important issue in constructing successful classifier ensembles. Although many statistics have been employed in the literature to measure diversity among classifiers and to ascertain whether it correlates with ensemble performance, most of these measures are formulated and explained in a non-evidential context. In this paper, we provide a model that formulates classifier outputs as triplet mass functions, together with a uniform notation for defining diversity measures. We then assess the relationship between the diversity obtained by four pairwise and non-pairwise diversity measures and the improvement in accuracy of classifiers combined in different orders by Dempster's rule of combination, Smets' conjunctive rule, the Proportion rule and Yager's rule in the framework of belief functions. Our experimental results demonstrate that the accuracy of classifiers combined by Dempster's rule is not strongly correlated with the diversity obtained by the four measures, and that the correlation between diversity and ensemble accuracy under the Proportion and Yager's rules is negative, which does not support the claim that increasing diversity reduces the generalization error of classifier ensembles.
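A typical pairwise diversity statistic of the kind assessed here is the disagreement measure; a minimal sketch on hard label outputs (the paper works with mass-function outputs, so this is only an analogue).

```python
def disagreement(pred_a, pred_b, truth):
    """Fraction of instances on which exactly one of the two classifiers
    is correct: 0 means they succeed and fail together, higher values
    mean more diverse behavior."""
    n = len(truth)
    return sum((a == t) != (b == t)
               for a, b, t in zip(pred_a, pred_b, truth)) / n
```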

9.
I-binomial scrambling of digital nets and sequences
The computational complexity of the integration problem in terms of the expected error has recently been an important topic in Information-Based Complexity. In this setting, we assume some sample space of integration rules from which we randomly choose one. The most popular sample space is based on Owen's random scrambling scheme, whose theoretical advantage is the fast convergence rate for certain smooth functions. This paper considers a reduction of the randomness required for Owen's random scrambling by using the notion of the i-binomial property. We first establish a set of necessary and sufficient conditions for digital (0,s)-sequences to have the i-binomial property. Then, based on these conditions, the left and right i-binomial scramblings are defined. We show that Owen's key lemma (Lemma 4, SIAM J. Numer. Anal. 34 (1997) 1884) remains valid with the left i-binomial scrambling, and thereby conclude that all the results on the expected errors of the integration problem so far obtained with Owen's scrambling also hold with the left i-binomial scrambling.
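For context, Owen's nested uniform scrambling in base 2 flips each binary digit by a random bit drawn once per digit prefix. A single-point sketch follows; to scramble a whole digital net the `flips` table must be shared across all points, and the i-binomial scramblings reduce the amount of independent randomness this table consumes.

```python
import random

def owen_scramble(x, depth=16, seed=0):
    """Base-2 Owen scrambling of one point in [0, 1): digit k is XORed
    with a random bit that depends on all earlier (scrambling-input)
    digits, i.e., one bit per node of the binary digit tree."""
    rng = random.Random(seed)
    flips, digits, y, frac = {}, [], 0.0, x
    for k in range(1, depth + 1):
        frac *= 2
        d = int(frac)          # next binary digit of x
        frac -= d
        prefix = tuple(digits)
        if prefix not in flips:
            flips[prefix] = rng.randint(0, 1)
        digits.append(d)
        y += (d ^ flips[prefix]) / 2 ** k
    return y
```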

10.
This paper proposes a method of interval estimation (based on the autoregressive method) that exploits a priori information about activity level (traffic intensity ρ) and data skewness in a queueing simulation. The method relies on two rules: one determines when to start collecting data during a run and the other determines when to stop collection. The rules are designed to use the a priori information in a way that mitigates the effects of the initial conditions in the simulation and ensures representative congested behavior in the collected data. Experiments with a simulation of the M/M/c queue, with c = 1, 2, 4 and ρ = 0.7, 0.8, 0.9, 0.95, produce favorable results. For ρ = 0.7, 0.8, 0.9, the coverage rates for interval estimates are close to the specified theoretical coverage rates and are higher than those reported in the literature for other methods of interval estimation. The sample sizes needed to obtain these coverage rates are moderate and are insensitive to variation in the number of servers and the activity level. Experiments with a fixed truncation starting rule and a fixed sample-size stopping rule clearly demonstrate the effectiveness of the proposed method. It is anticipated that a priori information also exists in more complex simulations, and that such information can be used in an analogous way to achieve desired coverage levels.
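The effect of a fixed truncation starting rule can be illustrated with a Lindley-recursion simulation of M/M/1 waiting times that discards an initial warm-up segment; function and parameter names are illustrative, not the paper's.

```python
import random

def mm1_waits(lam, mu, n, warmup, seed=0):
    """Simulate n customer waiting times of an M/M/1 queue via the
    Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1}), then drop
    the first `warmup` observations to mitigate initial-condition bias
    (a fixed truncation starting rule)."""
    rng = random.Random(seed)
    w, waits = 0.0, []
    for _ in range(n):
        waits.append(w)
        w = max(0.0, w + rng.expovariate(mu) - rng.expovariate(lam))
    return waits[warmup:]
```

The a priori traffic intensity ρ = λ/μ would guide how large `warmup` should be: the more congested the system, the slower it forgets its empty initial state.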

11.
Let (U,V) be a random vector with U ≤ 0, V ≤ 0. The random variables Z = V/(U+V), C = U+V are the Pickands coordinates of (U,V). They are a useful tool for the investigation of the tail behavior in bivariate peaks-over-threshold models in extreme value theory. We compute the distribution of (Z,C), among others under the assumption that the distribution function H of (U,V) is in a smooth neighborhood of a generalized Pareto distribution (GP) with uniform marginals. It turns out that if H is a GP, then Z and C are independent, conditional on C > c ≥ −1. These results are used to derive approximations of the empirical point process of the exceedances (Z_i, C_i) with C_i > c in an iid sample of size n. Local asymptotic normality is established for the approximating point process in a parametric model, where c = c(n) ↑ 0 as n → ∞.
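The coordinate transform itself is elementary:

```python
def pickands(u, v):
    """Pickands coordinates of (u, v) with u, v <= 0: the angular part
    Z = V/(U+V) and the radial part C = U+V."""
    c = u + v
    return v / c, c
```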

12.
A learning process for fuzzy control rules using genetic algorithms
The purpose of this paper is to present a genetic learning process for learning fuzzy control rules from examples. It is developed in three stages: the first is a genetic fuzzy-rule generating process based on an iterative rule learning approach; the second combines two kinds of rules, expert rules (when available) and the previously generated fuzzy control rules, removing redundant fuzzy rules; and the third is a tuning process for adjusting the membership functions of the fuzzy rules. The three components of the learning process are developed by formulating suitable genetic algorithms.

13.
Classifying magnetic resonance spectra is often difficult due to the curse of dimensionality: scenarios in which a high-dimensional feature space is coupled with a small sample size. We present an aggregation strategy that combines predicted disease states from multiple classifiers using several fuzzy integration variants. Rather than using all input features for each classifier, the multiple classifiers are presented with different, randomly selected subsets of the spectral features. Results from a set of detailed experiments using this strategy are carefully compared against classification performance benchmarks. We empirically demonstrate that the aggregated predictions are consistently superior to the corresponding prediction from the best individual classifier.
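The random-subspace part of the strategy can be sketched as follows; plain majority voting stands in here for the fuzzy integral variants the paper actually studies.

```python
def random_subspace_predict(x, classifiers, subsets):
    """Each classifier sees only its own randomly chosen feature subset
    of x; the individual predictions are then aggregated by majority
    vote."""
    votes = {}
    for clf, idx in zip(classifiers, subsets):
        label = clf([x[i] for i in idx])
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```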

14.
《Fuzzy Sets and Systems》2004,141(2):281-299
In this paper, we consider the issue of clustering when outliers exist. The outlier set is defined as the complement of the data set. Following this concept, a specially designed fuzzy-membership-weighted objective function is proposed and the corresponding optimal membership is derived. Unlike the membership of fuzzy c-means, the derived fuzzy membership does not shrink as the number of clusters increases. With a suitable redefinition of the distance metric, we demonstrate that the objective function can be used to extract c spherical shells. A hard clustering algorithm that alleviates the prototype under-utilization problem is also derived. Artificially generated data are used for comparisons.

15.
In this paper, a theoretical method is presented to select fuzzy implication operators for the fuzzy inference sentence "if x is A, then y is B". By applying representation theorems, thirty-two fuzzy implication operators are obtained. It is shown that the obtained operators are generalizations of the classical inference rules A→B, A^c→B, A→B^c and A^c→B^c, respectively, and can be divided into four classes. It is found that thirty of the 420 fuzzy implication operators presented by Li can be derived by applying representation theorems, and another two new ones are obtained by the use of our methods.
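The four classes can be illustrated with one S-implication representative each, built from max and the standard negation 1 − x; the particular choices below are illustrative and are not the thirty-two operators of the paper.

```python
def imp_classes(a, b):
    """One fuzzy generalization per classical inference rule; a and b
    in [0, 1] are the truth degrees of "x is A" and "y is B"."""
    return {
        "A=>B":   max(1 - a, b),        # Kleene-Dienes implication
        "Ac=>B":  max(a, b),
        "A=>Bc":  max(1 - a, 1 - b),
        "Ac=>Bc": max(a, 1 - b),
    }
```

Restricting a and b to {0, 1} recovers the truth tables of the four classical rules.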

16.
One of the problems on which research in the linguistic fuzzy modeling area focuses is the trade-off between interpretability and accuracy. To deal with this problem, different approaches can be found in the literature. Recently, a new linguistic rule representation model was presented to perform a genetic lateral tuning of membership functions. It is based on the linguistic 2-tuple representation, which allows the lateral displacement of a label by means of a single parameter. This representation reduces the search space, which eases the derivation of optimal models and therefore improves the mentioned trade-off. Based on the 2-tuple rule representation, this work proposes a new method to obtain linguistic fuzzy systems by means of an evolutionary a priori learning of the data base (number of labels and lateral displacements) together with a simple rule generation method to quickly learn the associated rule base. Since this rule generation method is run for each data base definition generated by the evolutionary algorithm, its selection is an important aspect. In this work, we also propose two new ad hoc data-driven rule generation methods, analyzing their influence, and that of other rule generation methods, within the proposed learning approach. The developed algorithms are tested on two different real-world problems.
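The lateral displacement of a label on a uniform partition of [0, 1] can be sketched as follows; assuming symmetric triangular membership functions is an illustrative choice, not a detail stated in the abstract.

```python
def lateral_label(i, alpha, n_labels):
    """2-tuple representation (s_i, alpha): label i of a uniform fuzzy
    partition of [0, 1], displaced laterally by alpha label units
    (alpha in [-0.5, 0.5)). Returns the (left, peak, right) vertices of
    the displaced triangular membership function."""
    step = 1.0 / (n_labels - 1)
    peak = (i + alpha) * step
    return (peak - step, peak, peak + step)
```

The single parameter `alpha` is what the genetic lateral tuning optimizes, instead of the three vertices separately — hence the reduced search space.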

17.
In this paper we introduce new results on fuzzy connected spaces. Among the results obtained, we can mention the good extension of local connectedness. We also prove that in a T1 fuzzy compact space the notions c-zero dimensional, strong c-zero dimensional and totally c_i-disconnected are equivalent.

18.
For an element w in the Weyl algebra generated by D and U with the relation DU = UD + 1, the normally ordered form is w = ∑ c_{i,j} U^i D^j. We demonstrate that the normal order coefficients c_{i,j} of a word w are rook numbers on a Ferrers board. We use this interpretation to give a new proof of the rook factorization theorem, which we use to provide an explicit formula for the coefficients c_{i,j}. We calculate the Weyl binomial coefficients: the normal order coefficients of the element (D+U)^n in the Weyl algebra. We extend these results to the q-analogue of the Weyl algebra, and discuss further generalizations using i-rook numbers.
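The Weyl binomial coefficients can be computed directly by repeatedly multiplying a normal-ordered expression on the right by (D+U), rewriting with DU = UD + 1, so that U^i D^j · U = U^{i+1} D^j + j U^i D^{j-1}:

```python
def weyl_binomials(n):
    """Return {(i, j): c_ij} with (D+U)^n = sum of c_ij * U^i D^j."""
    coeffs = {(0, 0): 1}
    for _ in range(n):
        nxt = {}
        for (i, j), c in coeffs.items():
            nxt[(i, j + 1)] = nxt.get((i, j + 1), 0) + c       # term * D
            nxt[(i + 1, j)] = nxt.get((i + 1, j), 0) + c       # term * U, ordered part
            if j > 0:                                          # term * U, commutator part
                nxt[(i, j - 1)] = nxt.get((i, j - 1), 0) + j * c
        coeffs = nxt
    return coeffs
```

For n = 2 this reproduces (D+U)^2 = U^2 + 2UD + D^2 + 1.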

19.
In this paper, we present two classification approaches based on Rough Sets (RS) that are able to learn decision rules from uncertain data. We assume that the uncertainty exists only in the decision attribute values of the Decision Table (DT) and is represented by belief functions. The first technique, named the Belief Rough Set Classifier (BRSC), is based only on the basic concepts of Rough Sets. The second, called BRSC-GDT, is more sophisticated: it is based on a hybridization of the Generalization Distribution Table and Rough Sets (GDT-RS). Both classifiers aim at simplifying the Uncertain Decision Table (UDT) in order to generate significant decision rules for the classification process. Furthermore, to improve the time complexity of the construction procedure of the two classifiers, we apply a heuristic method of attribute selection based on rough sets. To evaluate the performance of each classification approach, we carry out experiments on a number of standard real-world databases, artificially introducing uncertainty into the decision attribute values. In addition, we test our classifiers on a naturally uncertain web usage database. We compare our belief rough set classifiers with traditional classification methods for the certain case only. For the uncertain case, we compare the results with those given by another similar classifier, the Belief Decision Tree (BDT), which also deals with uncertain decision attribute values.
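The basic rough set machinery that BRSC builds on is the pair of lower and upper approximations of a concept with respect to an indiscernibility partition:

```python
def rough_approximations(partition, target):
    """partition: list of equivalence classes (sets of objects) induced
    by the condition attributes; target: the concept (set of objects).
    The lower approximation collects blocks surely inside the concept,
    the upper approximation collects blocks that possibly intersect it."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper
```

Objects in the boundary region (upper minus lower) are the ones for which only uncertain decision rules can be generated.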

20.
A family of fuzzification schemes is proposed that can be used to transform cardinality-based similarity measures for ordinary sets into similarity measures for fuzzy sets in a finite universe. The family is based on rules for fuzzy set cardinality and for the standard operations on fuzzy sets. In particular, the fuzzy set intersections are generated pointwise by Frank t-norms. The fuzzification schemes are applied to a variety of previously studied rational cardinality-based similarity measures for ordinary sets, and it is demonstrated that transitivity is preserved in the fuzzification process.
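As a concrete instance of such a scheme, here is the Jaccard measure fuzzified with sigma-count cardinality and the minimum t-norm (a limiting member of the Frank family); this specific combination is an illustrative choice.

```python
def fuzzy_jaccard(a, b):
    """a, b: membership vectors over the same finite universe.
    Intersection uses the minimum t-norm, union its dual maximum;
    cardinality is the sigma-count (sum of membership degrees)."""
    inter = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union if union else 1.0
```

On crisp (0/1) membership vectors this reduces to the ordinary Jaccard similarity.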


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号