Similar Literature
20 similar documents retrieved (search time: 62 ms)
1.
The classification problem statement of multicriteria decision analysis is to model the classification of the alternatives/actions according to the decision maker's preferences. These models are based on outranking relations, utility functions or (linear) discriminant functions. Model parameters can be given explicitly or learnt from a preclassified set of alternatives/actions. In this paper we propose a novel approach, the Continuous Decision (CD) method, to learn the parameters of a discriminant function, and we also introduce its extension, the Continuous Decision Tree (CDT) method, which describes the classification more accurately. The proposed methods result from integrating Machine Learning methods into Decision Analysis. From a Machine Learning point of view, the CDT method can be considered an extension of the C4.5 decision tree building algorithm that handles only numeric criteria but applies more complex tests in the inner nodes of the tree. For the sake of easier interpretation, the decision trees are transformed to rules.
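A minimal sketch of the tree-to-rules step mentioned above, using a standard decision tree with axis-parallel tests (the CDT method itself applies more complex tests in the inner nodes; the data set and tree depth here are illustrative assumptions):

```python
# Hypothetical sketch: extract one "if ... then ..." rule per leaf of a fitted tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
t = tree.tree_

def rules(node=0, conds=()):
    """Walk the tree; yield (conditions, predicted class) for every leaf."""
    if t.children_left[node] == -1:                       # leaf node
        yield conds, int(t.value[node].argmax())
        return
    f, thr = t.feature[node], t.threshold[node]
    yield from rules(t.children_left[node],  conds + ((f, "<=", thr),))
    yield from rules(t.children_right[node], conds + ((f, ">",  thr),))

for conds, cls in rules():
    body = " and ".join(f"x[{f}] {op} {thr:.2f}" for f, op, thr in conds)
    print(f"if {body} then class {cls}")
```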

2.
In this paper a Decision Support System Architecture is proposed for the heart sound diagnosis problem, and in general for complex medical diagnosis problems. It is based on the division of a complex diagnostic problem into simpler sub-problems, each of which is handled by a specialized decision tree. This Multiple Decision Trees Architecture in general consists of a network of detection decision trees and arbitration decision trees, and can also incorporate other classification methods (e.g. pattern recognition, neural networks, etc.). The initial motivation for developing this Multiple Decision Trees Architecture has been the problem of differentiation among the Opening Snap (OS), the 2nd Heart Sound Split (A2_P2), and the 3rd Heart Sound (S3), which is a crucial and at the same time difficult and complicated part of the heart sound diagnosis problem. The Multiple Decision Trees Architecture developed for the above diagnosis/differentiation problem has been tested with real heart sound signals, and its performance and generalisation capabilities were found to be higher than those of the previous traditional architectures. Mathematics Subject Classification (2000): 68U35. D. Koutsouris: The authors would like to thank the clinician Dr D.E. Skarpalezos for his clinical support, and Dr G. Koundourakis and Neurosoft S.A. for their support and provision of Envisioner, a data-mining tool that was used to execute algorithms related to the Decision Trees.

3.
Supervised classification learning can be considered an important tool for decision support. In this paper, we present a method for supervised classification learning which ensembles decision trees obtained via convex sets of probability distributions (also called credal sets) and uncertainty measures. Our method forces the use of different decision trees and has mainly the following characteristics: it obtains a good percentage of correct classifications and an improvement in processing time compared with known classification methods; it does not need to fix the number of decision trees to be used; and it can be parallelized for application to very large data sets.

4.
Cancer classification using genomic data is one of the major research areas in the medical field. Therefore, a number of binary classification methods have been proposed in recent years. The Top Scoring Pair (TSP) method is one of the most promising techniques that classify genomic data in a lower dimensional subspace using a simple decision rule. In the present paper, we propose a supervised classification technique that utilizes incremental generalized eigenvalue and top scoring pair classifiers to obtain higher classification accuracy with a small training set. We validate our method by applying it to well-known microarray data sets.
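For illustration, a minimal sketch of the basic TSP decision rule on synthetic data (the incremental generalized eigenvalue part of the proposed technique is not shown; the feature matrix and labels are toy assumptions):

```python
import numpy as np

def fit_tsp(X, y):
    """Return the pair (i, j) maximising |P(X_i < X_j | y=0) - P(X_i < X_j | y=1)|."""
    X0, X1 = X[y == 0], X[y == 1]
    best, best_score = None, -1.0
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            p0 = np.mean(X0[:, i] < X0[:, j])     # ordering frequency in class 0
            p1 = np.mean(X1[:, i] < X1[:, j])     # ordering frequency in class 1
            if abs(p0 - p1) > best_score:
                best, best_score = (i, j, p0 > p1), abs(p0 - p1)
    return best

def predict_tsp(model, x):
    i, j, class0_if_less = model                  # simple decision rule on one pair
    return 0 if (x[i] < x[j]) == class0_if_less else 1

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10)); y = rng.integers(0, 2, size=40)
pair = fit_tsp(X, y)
print("top pair:", pair[:2], "prediction for first sample:", predict_tsp(pair, X[0]))
```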

5.
In this paper, the classification power of the eigenvalues of six graph-associated matrices is investigated. Each matrix contains a certain type of geometric/spatial information, which may be important for the classification process. The performance of the different feature types is evaluated on two data sets. The first is a benchmark data set for optical character recognition, where the extracted eigenvalues were utilized as feature vectors for multi-class classification using support vector machines. Classification results are presented for all six feature types, as well as for classifier combinations at decision level. For the decision-level combination, probabilistic output support vector machines have been applied, with a performance of up to 92.4%. To investigate the power of the spectra for time-dependent tasks as well, a second data set was examined, consisting of human activities in video streams. To model the time dependency, hidden Markov models were utilized and the classification rate reached 98.3%.
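A hedged sketch of the spectral-feature idea, assuming the Laplacian as the graph-associated matrix and random graphs in place of the character/activity graphs (the HMM modeling for video is not shown):

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def spectrum_features(G, k=8):
    """Sorted Laplacian eigenvalues, padded/truncated to a fixed length k."""
    eig = np.sort(nx.laplacian_spectrum(G))[::-1]
    out = np.zeros(k)
    out[:min(k, len(eig))] = eig[:k]
    return out

# Toy graphs standing in for the benchmark graphs: sparse vs. dense random graphs.
graphs, labels = [], []
for s in range(30):
    graphs.append(nx.gnp_random_graph(8, 0.25, seed=s));      labels.append(0)
    graphs.append(nx.gnp_random_graph(8, 0.75, seed=s + 30)); labels.append(1)

X = np.array([spectrum_features(g) for g in graphs])
clf = SVC(probability=True).fit(X, labels)   # probabilistic outputs for decision-level fusion
print(clf.predict_proba(spectrum_features(nx.gnp_random_graph(8, 0.75, seed=99)).reshape(1, -1)))
```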

6.
The topic of clustering has been widely studied in the field of Data Analysis, where it is defined as an unsupervised process of grouping objects together based on notions of similarity. Clustering in the field of Multi-Criteria Decision Aid (MCDA) has seen a few adaptations of methods from Data Analysis, most of them however using concepts native to that field, such as the notions of similarity and distance measures. As in MCDA we model the preferences of a decision maker over a set of decision alternatives, we can find more diverse ways of comparing them than in Data Analysis. As a result, these alternatives may also be arranged into different potential structures. In this paper we wish to formally define the problem of clustering in MCDA using notions that are native to this field alone, and highlight the different structures which we may try to uncover through this process. Following this we propose a method for finding these structures. As in any clustering problem, finding the optimal result in an exact manner is impractical, and so we propose a stochastic heuristic approach, which we validate through tests on a large set of artificially generated benchmarks.

7.
Advances in Data Analysis and Classification - In a standard classification framework a set of trustworthy learning data are employed to build a decision rule, with the final aim of classifying...

8.
The selection of the optimal ensemble of classifiers in the multiple-classifier selection technique is undecidable in many cases and is potentially subject to a trial-and-error search. This paper introduces a quantitative meta-learning approach based on neural networks and rough set theory for the selection of the best predictive model. The approach depends directly on the characteristic meta-features of the input data sets. The employed meta-features are the degree of discreteness and the distribution of the features in the input data set, the fuzziness of these features with respect to the target class labels, and finally the correlation and covariance between the different features. The experimental work that considers these criteria is applied to twenty-nine data sets using different classification techniques, including support vector machines, decision tables and the Bayesian belief model. The measures of these criteria and the best-performing classification technique are used to build a meta data set. The role of the neural network is to perform a black-box prediction of the optimal, best-fitting classification technique. The role of rough set theory is the generation of the decision rules that control this prediction approach. Finally, formal concept analysis is applied for the visualization of the generated rules.
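A loose sketch of the meta-learning loop described above, with stand-in meta-features and base classifiers (the rough-set rule generation and the formal concept analysis steps are omitted):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

base = {"svm": SVC(), "tree": DecisionTreeClassifier(), "bayes": GaussianNB()}

def meta_features(X, y):
    """Illustrative meta-features: size, dimensionality, mean |correlation|, spread, #classes."""
    corr = np.corrcoef(X, rowvar=False)
    return [X.shape[0], X.shape[1], np.mean(np.abs(corr)), np.std(X), len(np.unique(y))]

meta_X, meta_y = [], []
for seed in range(12):                               # 12 synthetic "data sets"
    X, y = make_classification(n_samples=200, n_features=8, random_state=seed)
    scores = {k: cross_val_score(m, X, y, cv=3).mean() for k, m in base.items()}
    meta_X.append(meta_features(X, y))
    meta_y.append(max(scores, key=scores.get))       # label = best base classifier

meta_model = MLPClassifier(max_iter=2000).fit(meta_X, meta_y)   # the "black-box" predictor
print(meta_model.predict([meta_features(*make_classification(n_samples=200, n_features=8, random_state=99))]))
```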

9.
We consider a problem of ranking alternatives based on their deterministic performance evaluations on multiple criteria. We apply additive value theory and assume the Decision Maker's (DM) preferences to be representable with general additive monotone value functions. The DM provides indirect preference information in the form of pairwise comparisons of reference alternatives, and we use this to derive the set of compatible value functions. Then, this set is analyzed to describe (1) the possible and necessary preference relations, (2) probabilities of the possible relations, (3) ranges of ranks the alternatives may obtain, and (4) the distributions of these ranks. Our work combines previous results from Robust Ordinal Regression, Extreme Ranking Analysis and Stochastic Multicriteria Acceptability Analysis under a unified decision support framework. We show how the four different results complement each other, discuss extensions of the main proposal, and demonstrate practical use of the approach by considering a problem of ranking 20 European countries in terms of 4 criteria reflecting the quality of their universities.
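As a sketch of the underlying model (our notation, not necessarily the paper's): a general additive value function over criteria g_1, …, g_n takes the form

\[ U(a) = \sum_{j=1}^{n} u_j\bigl(g_j(a)\bigr), \]

with each marginal value function u_j monotone non-decreasing, and a value function is compatible with the DM's pairwise comparisons if, for some small ε > 0,

\[ a \succ b \;\Rightarrow\; U(a) \ge U(b) + \varepsilon, \qquad a \sim b \;\Rightarrow\; U(a) = U(b). \]

The possible and necessary preference relations then hold when U(a) ≥ U(b) for at least one, respectively all, compatible value functions.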

10.
We are considering the problem of multi-criteria classification. In this problem, a set of “if … then …” decision rules is used as a preference model to classify objects evaluated by a set of criteria and regular attributes. Given a sample of classification examples, called the learning data set, the rules are induced from dominance-based rough approximations of preference-ordered decision classes, according to the Variable Consistency Dominance-based Rough Set Approach (VC-DRSA). The main question to be answered in this paper is how to classify an object using decision rules in situations where it is covered by (i) no rule, (ii) exactly one rule, or (iii) several rules. The proposed classification scheme can be applied both to the learning data set (to restore the classification known from the examples) and to the testing data set (to predict the classification of new objects). A hypothetical example from the area of telecommunications is used to illustrate the proposed classification method and to compare it with some previous proposals.
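A simplified sketch of the three coverage situations (the conflict-resolution step here is a plain majority vote, a stand-in for the scheme actually proposed in the paper; the rules and the object are toy assumptions):

```python
from collections import Counter

# A rule = (list of (attribute, operator, threshold) conditions, assigned class).
rules = [
    ([("bandwidth", ">=", 10), ("latency", "<=", 50)], "good"),
    ([("latency", ">", 100)], "poor"),
    ([("bandwidth", ">=", 5)], "medium"),
]
OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b, ">": lambda a, b: a > b}

def classify(obj, rules, default="medium"):
    covering = [cls for conds, cls in rules
                if all(OPS[op](obj[att], thr) for att, op, thr in conds)]
    if not covering:                 # (i) no rule covers the object
        return default
    if len(set(covering)) == 1:      # (ii) exactly one recommendation
        return covering[0]
    # (iii) several, possibly conflicting rules: here a simple majority vote
    return Counter(covering).most_common(1)[0][0]

print(classify({"bandwidth": 12, "latency": 40}, rules))   # covered by two rules
```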

11.
A recently developed data separation/classification method, called isotonic separation, is applied to breast cancer prediction. Two breast cancer data sets, one with clean and sufficient data and the other with insufficient data, are used for the study and the results are compared against those of decision tree induction methods, linear programming discrimination methods, learning vector quantization, support vector machines, adaptive boosting, and other methods. The experiment results show that isotonic separation is a viable and useful tool for data classification in the medical domain.

12.
In this paper, we present two classification approaches based on Rough Sets (RS) that are able to learn decision rules from uncertain data. We assume that the uncertainty exists only in the decision attribute values of the Decision Table (DT) and is represented by belief functions. The first technique, named the Belief Rough Set Classifier (BRSC), is based only on the basic concepts of Rough Sets. The second, the Belief Rough Set Classifier based on the Generalization Distribution Table (BRSC-GDT), is more sophisticated: it builds on GDT-RS, a hybridization of the Generalization Distribution Table and Rough Sets. The two classifiers aim at simplifying the Uncertain Decision Table (UDT) in order to generate significant decision rules for the classification process. Furthermore, to improve the time complexity of the construction procedure of the two classifiers, we apply a heuristic method of attribute selection based on rough sets. To evaluate the performance of each classification approach, we carry out experiments on a number of standard real-world databases by artificially introducing uncertainty into the decision attribute values. In addition, we test our classifiers on a naturally uncertain web usage database. We compare our belief rough set classifiers with traditional classification methods for the certain case only. Besides, we compare the results for the uncertain case with those given by another similar classifier, called the Belief Decision Tree (BDT), which also deals with uncertain decision attribute values.

13.
Classification systems are very important for decision making and have attracted much attention from many researchers. Traditional classifiers are usually either domain-specific or produce unsatisfactory results on classification problems with large and imbalanced data. Hence, genetic algorithms (GA) have recently been combined with traditional classifiers to find useful knowledge for decision making. However, the main concerns of such GA-based systems are limited coverage of the search space and the increase in computational cost as the population grows. In this paper, a rule-based knowledge discovery model is introduced, combining C4.5 (a decision-tree-based rule-inductive algorithm) and a new parallel genetic algorithm based on the idea of massive parallelism. The prime goal of the model is to produce a compact set of informative rules from any kind of classification problem. More specifically, the proposed model uses C4.5 as a base method to generate rules, which are then refined by our proposed parallel GA. The strength of the developed system has been compared with pure C4.5 as well as with a hybrid system (C4.5 + sequential genetic algorithm) on six real-world benchmark data sets collected from the UCI (University of California at Irvine) machine learning repository. Experiments on these data sets validate the effectiveness of the new model. The presented results especially indicate that the model is powerful for voluminous data sets.
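A loose, sequential sketch of the refinement idea (the paper's contribution is a parallel GA; the rule encoding, fitness function and GA parameters below are illustrative assumptions):

```python
import random
random.seed(0)

# Toy rules: each maps a feature vector to a class label, or None if it does not fire.
rules = [
    lambda x: 1 if x[0] > 0.5 else None,
    lambda x: 0 if x[1] < 0.3 else None,
    lambda x: 1 if x[0] + x[1] > 1.2 else None,
    lambda x: 0 if x[0] < 0.2 else None,
]
xs = [[random.random(), random.random()] for _ in range(100)]
data = [(x, int(x[0] > 0.5)) for x in xs]                    # ground truth of the toy task

def classify(x, selected):
    for rule in selected:
        label = rule(x)
        if label is not None:
            return label                                     # first firing rule decides
    return 0                                                 # default class

def fitness(mask):
    selected = [r for keep, r in zip(mask, rules) if keep]
    acc = sum(classify(x, selected) == y for x, y in data) / len(data)
    return acc - 0.05 * sum(mask)                            # reward compact rule sets

pop = [[random.random() < 0.5 for _ in rules] for _ in range(20)]
for _ in range(50):                                          # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(rules))                # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(len(rules))] ^= True          # bit-flip mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("kept rules:", [i for i, keep in enumerate(best) if keep],
      "fitness:", round(fitness(best), 3))
```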

14.
15.
Although Operational Research (OR) has successfully provided many methodologies to address complex decision problems, in particular based on the rationality principle, there has been too little discussion regarding their limited consideration in IT evaluation practice and associated decision making satisfaction levels in an organisational context. The aim of this paper is to address these issues through providing a current account of diffusion and infusion of OR methodologies in IT decision making practice, and by analysing factors affecting decision making satisfaction from a Technological, Organisational, and Environmental (TOE) framework in the context of IT induced business transformations. We developed a structural equation model and conducted an empirical survey, which supported four out of five developed research hypotheses. Our results show that while Decision Support Systems (DSSs), holistic IT evaluation methods, and management support seem to positively affect individual satisfaction, legislative regulation has an adverse effect. Results also revealed a persistent methodology diffusion and infusion gap. The paper discusses implications in each of these aspects and presents opportunities for future work.

16.
Multiple Criteria Decision Aid methods are increasingly used in financial decision making in order to capture the multifaceted character of modern enterprises operating in a complex and versatile market environment. This paper presents a multiple criteria approach for the selection of firms applying for financial support from public funds. Besides the budget constraint, the specific decision situation imposes the consideration of additional policy constraints that prevent the direct exploitation of rankings provided by a multiple criteria method. In such a case the problem solution is to find a set of alternatives satisfying the constraints and at the same time maximizing a measure of global performance. The proposed procedure relies on the PROMETHEE V method, which belongs to the well-known PROMETHEE family of multiple criteria outranking methods and is combined with an integer programming formulation capable of effectively dealing with the problem’s combinatorial character. This method is modified in order to avoid any bias in the selection of the optimal set that may arise because of the apparent contradiction between the rate of resources consumption and the coefficients of the alternatives in the additive objective function.
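As a sketch of the underlying 0–1 program in our notation (the paper's modification of the objective coefficients, which removes the bias mentioned above, is not reproduced here): with φ(a_i) the PROMETHEE II net outranking flow of firm a_i and x_i a binary selection variable,

\[ \max \sum_{i=1}^{n} \phi(a_i)\, x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} c_i x_i \le B, \quad x \in X_{\mathrm{policy}}, \quad x_i \in \{0,1\}, \]

where c_i is the support requested by firm a_i, B the available budget, and X_policy the set of additional policy constraints.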

17.
Managers increasingly face netsourcing decisions of whether and how to outsource selected software applications over the Internet. This paper illustrates the development of a netsourcing decision support system (DSS) that provides support for the first netsourcing decision: whether to netsource or not. The development follows a five-stage methodology focusing on empirical modeling with internal validation during the development. It begins with identifying potential decision criteria from the literature, followed by the collection of empirical data. Logistic regression is then used as a statistical method for selecting relevant decision criteria. Applying the logistic regression analysis to the data set delivers competitive relevance and strategic vulnerability as the relevant decision criteria. The development concludes with designing a core and a complementary DSS module. The paper critiques the developed DSS and its underlying development methodology. Recommendations for further research are offered.
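An illustrative sketch (on synthetic data, not the paper's survey) of using logistic regression to screen candidate decision criteria; the criterion names and effect sizes below are assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
criteria = ["competitive_relevance", "strategic_vulnerability", "cost_pressure"]
X = rng.normal(size=(n, 3))
# Simulate a netsource/do-not-netsource decision driven by the first two criteria only.
signal = 1.2 * X[:, 0] - 1.0 * X[:, 1]
y = (signal + rng.logistic(size=n) > 0).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
for name, coef, p in zip(["const"] + criteria, model.params, model.pvalues):
    keep = "keep" if p < 0.05 and name != "const" else "drop"
    print(f"{name:24s} coef={coef:+.2f}  p={p:.3f}  -> {keep}")
```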

18.
Several interactive methods exist to identify nondominated solutions in a Multiple Objective Mixed Integer Linear Program. But what if the Decision Maker is also interested in sorting those solutions (assigning them to pre-established ordinal categories)? We propose an interactive “branch-and-bound like” technique to progressively build the nondominated set, combined with the ELECTRE TRI method (pessimistic procedure) to sort the identified nondominated solutions. A disaggregation approach is considered in order to avoid the direct definition of all ELECTRE TRI preference parameters. Weight-importance coefficients are inferred and category reference profiles are determined based on assignment examples provided by the Decision Maker. A computation tool was developed with a twofold purpose: to support the Decision Maker involved in a decision process and to provide a test bed for research purposes.
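A sketch of the ELECTRE TRI pessimistic assignment rule with a simplified crisp outranking test (weighted concordance against a cutting level, no discordance/veto); the weights, profiles and cutting level are illustrative assumptions:

```python
weights = [0.4, 0.3, 0.3]          # criterion weights (sum to 1)
profiles = [                       # category boundary profiles, in ascending order
    [10, 5, 2],                    # b1: boundary between C1 and C2
    [20, 8, 6],                    # b2: boundary between C2 and C3
]
LAMBDA = 0.7                       # cutting level

def outranks(a, b):
    """Crisp outranking a S b: weighted share of criteria where a is at least as good as b."""
    concordance = sum(w for w, ai, bi in zip(weights, a, b) if ai >= bi)
    return concordance >= LAMBDA

def assign_pessimistic(a):
    # Compare with profiles from the best one downwards; stop at the first outranked profile.
    for h in range(len(profiles) - 1, -1, -1):
        if outranks(a, profiles[h]):
            return h + 2           # categories numbered C1 (worst) .. C3 (best)
    return 1

print(assign_pessimistic([22, 9, 1]))   # outranks b2 on 2 of 3 criteria -> C3
print(assign_pessimistic([12, 4, 3]))   # fails b2, outranks b1 -> C2
print(assign_pessimistic([5, 4, 1]))    # outranks no profile -> C1
```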

19.
Classification is one of the most extensively studied problems in the fields of multivariate statistical analysis, operations research and artificial intelligence. Decisions involving a classification of the alternative solutions are of major interest in finance, since several financial decision problems are best studied by classifying a set of alternative solutions (firms, loan applications, investment projects, etc.) into predefined classes. This paper proposes an alternative approach to the classical statistical methodologies that have been extensively used for the study of financial classification problems. The proposed methodology combines the preference disaggregation approach (a multicriteria decision aid method) with decision support systems. More specifically, the FINancial CLASsification (FINCLAS) multicriteria decision support system is presented. The system incorporates a plethora of financial modeling tools, along with powerful preference disaggregation methods that lead to the development of additive utility models for the classification of the considered alternatives into predefined classes. An application in credit granting is used to illustrate the capabilities of the system.

20.
This paper presents a comparative study of the use of two different methods of data analysis on a common set of data. The first is a method based on rough sets theory and the second is the location model method from the field of discriminant analysis. To investigate the comparative performance of these methods, a set of real medical data has been used. The data considered are of both discrete and continuous character. During the comparison, particular attention is paid to data reduction and to the derivation of decision rules and classification functions from the reduced set.
