Similar Documents
20 similar documents found
1.
Credit scoring is a risk evaluation task considered a critical decision for financial institutions, as wrong decisions may result in huge losses. Classification models are one of the most widely used groups of data mining approaches that greatly help decision makers and managers reduce the credit risk of granting credit to customers, instead of relying on intuitive experience or portfolio management. Accuracy is one of the most important criteria for choosing a credit-scoring model; hence, research directed at improving the effectiveness of credit scoring models has never stopped. In this article, a hybrid binary classification model, namely FMLP, is proposed for credit scoring, based on the basic concepts of fuzzy logic and artificial neural networks (ANNs). In the proposed model, instead of the crisp weights and biases used in traditional multilayer perceptrons (MLPs), fuzzy numbers are used in order to better model the uncertainties and complexities in financial data sets. Empirical results on three well-known benchmark credit data sets indicate that the proposed hybrid model outperforms its components as well as other classification models such as support vector machines (SVMs), K-nearest neighbor (KNN), quadratic discriminant analysis (QDA), and linear discriminant analysis (LDA). It can therefore be concluded that the proposed model is an appropriate alternative tool for financial binary classification problems, especially under high uncertainty. © 2013 Wiley Periodicals, Inc. Complexity 18: 46–57, 2013
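The core idea, replacing crisp connection weights with fuzzy numbers, can be shown with a toy sketch. The snippet below is a minimal illustration, not the paper's FMLP architecture or training procedure: it evaluates a single neuron whose weights and bias are triangular fuzzy numbers, all values are made up, and the endpoint-wise arithmetic is only valid under the stated assumption that inputs are non-negative.

```python
import numpy as np

# Toy illustration of a single "fuzzy" neuron: every weight is a triangular
# fuzzy number (lower, mode, upper) instead of a crisp scalar.  Assuming the
# inputs are scaled to [0, 1] (non-negative), the lower/mode/upper endpoints
# of the weighted sum can be computed with three ordinary dot products.
rng = np.random.default_rng(0)

n_features = 4
x = rng.uniform(0, 1, n_features)          # one applicant, features scaled to [0, 1]

# Triangular fuzzy weights: shape (n_features, 3) -> columns = (lower, mode, upper)
mode = rng.normal(0, 1, n_features)
spread = 0.3                                # hypothetical fuzziness of each weight
W = np.stack([mode - spread, mode, mode + spread], axis=1)
b = np.array([-0.1, 0.0, 0.1])              # triangular fuzzy bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fuzzy weighted sum: with x >= 0, endpoint-wise dot products keep the ordering
# lower <= mode <= upper, and the monotone sigmoid preserves it as well.
z = x @ W + b                               # (lower, mode, upper) of the pre-activation
out = sigmoid(z)                            # fuzzy output of the neuron

# Centroid defuzzification of a triangular fuzzy number (l, m, u): (l + m + u) / 3
score = out.mean()
print("fuzzy output (l, m, u):", np.round(out, 3), " defuzzified score:", round(score, 3))
```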

2.
In this paper, we study the performance of various state-of-the-art classification algorithms applied to eight real-life credit scoring data sets. Some of the data sets originate from major Benelux and UK financial institutions. Different types of classifiers are evaluated and compared. Besides the well-known classification algorithms (e.g. logistic regression, discriminant analysis, k-nearest neighbour, neural networks and decision trees), this study also investigates the suitability and performance of some recently proposed, advanced kernel-based classification algorithms such as support vector machines and least-squares support vector machines (LS-SVMs). The performance is assessed using the classification accuracy and the area under the receiver operating characteristic curve. Statistically significant performance differences are identified using the appropriate test statistics. It is found that both the LS-SVM and neural network classifiers yield a very good performance, but also simple classifiers such as logistic regression and linear discriminant analysis perform very well for credit scoring.
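The evaluation mechanics of such a benchmark, scoring several classifier families on accuracy and area under the ROC curve via cross-validation, can be sketched as follows. This is only an illustration on synthetic data (the study's own data sets are proprietary), using scikit-learn defaults rather than the tuned models of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a credit-scoring data set (roughly 30% defaulters).
X, y = make_classification(n_samples=1000, n_features=15, n_informative=8,
                           weights=[0.7, 0.3], random_state=42)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "linear discriminant": LinearDiscriminantAnalysis(),
    "k-nearest neighbour": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "decision tree":       DecisionTreeClassifier(max_depth=5, random_state=42),
    "SVM (RBF kernel)":    make_pipeline(StandardScaler(), SVC(probability=True)),
}

# 10-fold cross-validation, reporting mean accuracy and mean area under the ROC curve.
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=10, scoring=["accuracy", "roc_auc"])
    print(f"{name:22s} acc={scores['test_accuracy'].mean():.3f} "
          f"auc={scores['test_roc_auc'].mean():.3f}")
```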

3.
Corporate credit granting is a key commercial activity of financial institutions nowadays. A critical first step in the credit granting process usually involves a careful financial analysis of the creditworthiness of the potential client. Wrong decisions result either in forgoing valuable clients or, more severely, in substantial capital losses if the client subsequently defaults. It is thus of crucial importance to develop models that estimate the probability of corporate bankruptcy with a high degree of accuracy. Many studies focused on the use of financial ratios in linear statistical models, such as linear discriminant analysis and logistic regression. However, the obtained error rates are often high. In this paper, Least Squares Support Vector Machine (LS-SVM) classifiers, also known as kernel Fisher discriminant analysis, are applied within the Bayesian evidence framework in order to automatically infer and analyze the creditworthiness of potential corporate clients. The inferred posterior class probabilities of bankruptcy are then used to analyze the sensitivity of the classifier output with respect to the given inputs and to assist in the credit assignment decision making process. The suggested nonlinear kernel based classifiers yield better performances than linear discriminant analysis and logistic regression when applied to a real-life data set concerning commercial credit granting to mid-cap Belgian and Dutch firms.
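At the heart of an LS-SVM classifier is a single linear system rather than a quadratic program. The sketch below shows the standard Suykens-style formulation in plain NumPy on synthetic data; it leaves out the Bayesian evidence framework the paper uses to infer hyperparameters (here the regularisation gamma and the kernel width are simply fixed by hand).

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row-sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    """Solve the LS-SVM classification dual: one (n+1) x (n+1) linear system."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, support values alpha

def lssvm_predict(X_train, y, b, alpha, X_new, sigma=1.0):
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y) + b)

# Tiny demonstration on linearly inseparable data (XOR-like pattern).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # class labels must be +/-1
b, alpha = lssvm_fit(X, y, gamma=10.0, sigma=0.7)
pred = lssvm_predict(X, y, b, alpha, X, sigma=0.7)
print("training accuracy:", (pred == y).mean())
```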

4.
The purpose of the present study is the development of classification models for the identification of acquirers and targets in the Asian banking sector. We use a sample of 52 targets and 47 acquirers that were involved in acquisitions in 9 Asian banking markets during 1998–2004 and match them by country and time with an equal number of non-involved banks. The models are developed and validated through a tenfold cross-validation approach using two multicriteria decision aid techniques. For comparison purposes we also develop models through discriminant analysis. The results indicate that the multicriteria decision aid models are more efficient than the ones developed through discriminant analysis. Furthermore, in all the cases the models are more efficient in distinguishing between acquirers and non-involved banks than between targets and non-involved banks. Finally, the models with a binary outcome achieve higher accuracies than the ones which simultaneously distinguish between acquirers, targets and non-involved banks.

5.
An expert system was desired for a group decision-making process. A highly variable data set from previous groups' decisions was available to simulate past group decisions. This data set has much missing information and contains many possible errors. Classification and regression trees (CART) was selected for rule induction, and compared with multiple linear regression and discriminant analysis. We conclude that CART's decision rules can be used for rule induction. CART uses all available information and can predict observations with missing data. Errors in results from CART compare well with those from multiple linear regression and discriminant analysis. CART results are easier to understand.
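The rule-induction side of this approach, fitting a tree and reading off its decision rules, can be sketched with scikit-learn's CART-style trees. Scikit-learn does not implement the surrogate splits classical CART uses for missing data, so this minimal example (synthetic data, hypothetical criterion names) only illustrates how induced rules are obtained and inspected.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for the group-decision records.
X, y = make_classification(n_samples=300, n_features=5, n_informative=3, random_state=0)
feature_names = [f"criterion_{i}" for i in range(X.shape[1])]   # hypothetical names

# A small CART-style tree; limiting depth keeps the induced rules readable.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(X, y)

# The fitted tree can be dumped as explicit if/then rules, which is the
# property valued here for rule induction in an expert system.
print(export_text(tree, feature_names=feature_names))
```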

6.
In repetitive judgmental discrete decision-making with multiple criteria, the decision maker usually behaves as if there is a set of appropriate criterion weights such that the decisions chosen are based on the weighted sum of all the criteria. Many different procedures for estimating these implied criterion weights have been proposed. Most of these procedures emphasize the preference trade-off among the multiple criteria of the decision maker, and thus the criterion weights obtained are not directly related to the hit ratio of matching decisions. Based on past data, statistical discriminant analysis can be used to determine the implied criterion weights that would reflect the past decisions. The most interesting performance measure is the hit ratio. In this work, we use the integer linear goal-programming technique to determine optimal criterion weights which minimize the number of misclassified decisions. The linear goal-programming formulation has m constraints and m + k + 1 variables, where m is the number of cases and k is the number of criteria. An empirical study is done using two different procedures on the actual past admission data of an M.B.A. programme. The hit ratios of the different procedures are compared.
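A standard way to express this kind of misclassification-minimising weight estimation is a big-M mixed integer program with one binary deviation variable per case, which gives the m + k + 1 variables mentioned above (m binaries, k weights, one cut-off). The sketch below, using PuLP and made-up admission data, is a generic formulation in that spirit rather than the paper's exact goal-programming model.

```python
import numpy as np
import pulp

# Hypothetical past admission data: k criteria per case and the decision taken (1 = admit, 0 = reject).
rng = np.random.default_rng(0)
m, k = 60, 3
X = rng.uniform(0, 1, size=(m, k))
true_w = np.array([0.5, 0.3, 0.2])
admit = (X @ true_w + rng.normal(0, 0.05, m) > 0.5).astype(int)

M, eps = 100.0, 1e-3                        # big-M constant and a small separation margin
prob = pulp.LpProblem("min_misclassifications", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", lowBound=0) for j in range(k)]          # criterion weights
c = pulp.LpVariable("cutoff")                                          # cut-off score
d = [pulp.LpVariable(f"d{i}", cat="Binary") for i in range(m)]         # 1 = case i misclassified

prob += pulp.lpSum(d)                       # objective: number of misclassified decisions
prob += pulp.lpSum(w) == 1                  # normalise the weights
for i in range(m):
    score = pulp.lpSum(X[i, j] * w[j] for j in range(k))
    if admit[i] == 1:
        prob += score + M * d[i] >= c       # admitted cases should score above the cut-off
    else:
        prob += score - M * d[i] <= c - eps # rejected cases should score below it

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("misclassified:", int(pulp.value(prob.objective)),
      "weights:", [round(v.value(), 3) for v in w])
```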

7.
The classification problem statement of multicriteria decision analysis is to model the classification of the alternatives/actions according to the decision maker's preferences. These models are based on outranking relations, utility functions or (linear) discriminant functions. Model parameters can be given explicitly or learnt from a preclassified set of alternatives/actions. In this paper we propose a novel approach, the Continuous Decision (CD) method, to learn the parameters of a discriminant function, and we also introduce its extension, the Continuous Decision Tree (CDT) method, which describes the classification more accurately. The proposed methods are the result of integrating Machine Learning methods into Decision Analysis. From a Machine Learning point of view, the CDT method can be considered an extension of the C4.5 decision tree building algorithm that handles only numeric criteria but applies more complex tests in the inner nodes of the tree. For the sake of easier interpretation, the decision trees are transformed into rules.

8.
Classification models can be developed by statistical or mathematical programming discriminant analysis techniques. Variable selection extensions of these techniques allow the development of classification models with a limited number of variables. Although stepwise statistical variable selection methods are widely used, the performance of the resultant classification models may not be optimal because of the stepwise selection protocol and the nature of the group separation criterion. A mixed integer programming approach for selecting variables for maximum classification accuracy is developed in this paper and the performance of this approach, measured by the leave-one-out hit rate, is compared with the published results from a statistical approach in which all possible variable subsets were considered. Although this mixed integer programming approach can only be applied to problems with a relatively small number of observations, it may be of great value where classification decisions must be based on a limited number of observations.
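The selection criterion at stake here, the leave-one-out hit rate of a model restricted to a candidate variable subset, is easy to make concrete. The sketch below (synthetic data, LDA as the classifier) simply enumerates all subsets and scores each with leave-one-out validation; the paper instead encodes the search as a mixed integer program, which avoids explicit enumeration.

```python
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Small synthetic data set: few observations, several candidate variables.
X, y = make_classification(n_samples=40, n_features=6, n_informative=3,
                           n_redundant=1, random_state=3)

def loo_hit_rate(cols):
    """Leave-one-out hit rate of an LDA model restricted to the given columns."""
    scores = cross_val_score(LinearDiscriminantAnalysis(), X[:, list(cols)], y, cv=LeaveOneOut())
    return scores.mean()

# Brute-force search over all non-empty variable subsets (feasible only for few variables).
best = max(
    (subset for r in range(1, X.shape[1] + 1) for subset in combinations(range(X.shape[1]), r)),
    key=loo_hit_rate,
)
print("best variable subset:", best, "LOO hit rate:", round(loo_hit_rate(best), 3))
```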

9.
Banking crises can be damaging for the economy, and as recent experience has shown, nowadays they can spread rapidly across the globe with contagious effects. Therefore, the assessment of the stability of a country’s banking sector is important for regulators, depositors, investors and the general public. In the present study, we propose the development of classification models that assign the banking sectors of various countries to three classes, labelled “low stability”, “medium stability”, and “high stability”. The models are developed using three multicriteria decision aid techniques, which are well-suited to ordinal classification problems. We use a sample of 114 banking sectors (i.e., countries), and a set of criteria that includes indicators of the macroeconomic, institutional and regulatory environment, as well as basic characteristics of the banking and financial sector. The models are developed and tested using a tenfold cross-validation approach and they are benchmarked against models developed with discriminant analysis and logistic regression.

10.
The main objective is to present a framework for analysing decisions under risk. The nature of much information available to decision makers is vague and imprecise, be it information for human managers in organisations or for process agents in a distributed computer environment. Some approaches address the problem of uncertainty, but many of them concentrate more on representation and less on evaluation. The emphasis in this paper is on evaluation and even though the representation used is that of probability theory, other well-established formalisms can be used. The approach allows the decision maker to be as deliberately imprecise as he feels is natural and provides him with the means for expressing varying degrees of imprecision in the input sentences. The framework we present is intended to be tolerant and to provide means for evaluating decision situations using several decision rules besides the conventional maximisation of the expected utility.
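One simple way to evaluate an alternative under this kind of imprecision is to compute the range of expected utilities consistent with the decision maker's interval statements. The sketch below (made-up numbers, interval probability bounds only) does this with two small linear programs; it is one illustrative evaluation rule, not necessarily the procedure proposed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# One alternative with three possible outcomes.  The decision maker only states
# imprecise (interval) probabilities for each outcome; utilities are crisp here.
utility = np.array([100.0, 20.0, -50.0])
p_lower = np.array([0.2, 0.3, 0.1])
p_upper = np.array([0.6, 0.6, 0.4])

# Expected utility is linear in the probabilities, so its extreme values over all
# probability vectors consistent with the intervals are two small linear programs.
A_eq, b_eq = np.ones((1, 3)), np.array([1.0])          # probabilities must sum to one
bounds = list(zip(p_lower, p_upper))

lo = linprog(c=utility, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
hi = linprog(c=-utility, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(f"expected utility lies in [{lo.fun:.1f}, {-hi.fun:.1f}]")
```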

11.
This paper describes a framework that combines decision theory and stochastic optimisation techniques to address tide routing (i.e. optimisation of cargo loading and ship scheduling decisions in tidal ports and shallow seas). Unlike weather routing, tidal routing has been little investigated so far, especially from the perspective of risk analysis. Considering the journey of a bulk carrier between N ports, a shipping decision model is designed to compute cargo loading and scheduling decisions, given the time series of the sea level point forecasts in these ports. Two procedures based on particle swarm optimisation and Monte Carlo simulations are used to solve the shipping net benefit constrained optimisation problem. The outputs of probabilistic risk minimisation are compared with those of net benefit maximisation, the latter including the possibility of a ‘rule-of-thumb’ safety margin. Distributional robustness is discussed as well, with respect to the modelling of sea level residuals. Our technique is assessed on two realistic case studies in British ports. Results show that the decision taking into account the stochastic dimension of sea levels is not only robust in real port and weather conditions, but also closer to optimality than standard practices using a fixed safety margin. Furthermore, it is shown that the proposed technique remains preferable when sea level variations are artificially increased beyond the extremes of the current residual models.

12.
Credit-risk evaluation decisions are important for the financial institutions involved due to the high level of risk associated with wrong decisions. The process of making credit-risk evaluation decisions is complex and unstructured. Neural networks are known to perform reasonably well compared to alternate methods for this problem. However, a drawback of using neural networks for credit-risk evaluation decisions is that once a decision is made, it is extremely difficult to explain the rationale behind that decision. Researchers have developed methods using neural networks to extract rules, which are then used to explain the reasoning behind a given neural network output. These rules do not capture the learned knowledge well enough. Neurofuzzy systems have recently been developed utilizing the desirable properties of both fuzzy systems as well as neural networks. These neurofuzzy systems can be used to develop fuzzy rules naturally. In this study, we analyze the beneficial aspects of using both neurofuzzy systems as well as neural networks for credit-risk evaluation decisions.

13.
This paper reports on a decision support system for assigning a liver from a donor to a recipient on a waiting-list that maximises the probability of belonging to the survival graft class one year after transplant and/or minimises the probability of belonging to the non-survival graft class in a two-objective framework. This is done with two models of neural networks for classification obtained from the Pareto front built by a multi-objective evolutionary algorithm called MPENSGA2. This type of neural network is a new model of generalised radial basis functions for obtaining optimal values of C (Correctly Classified Rate) and MS (Minimum Sensitivity) in the classifier, and is compared with other competitive classifiers. The decision support system has been proposed using, as simply as possible, those models which lead to making the correct decision about recipient choice based on efficient and impartial criteria.
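The two objectives used to pick classifiers from the Pareto front, the Correctly Classified Rate (C) and the Minimum Sensitivity (MS), are simple functions of the confusion matrix, as the short sketch below shows for a made-up set of predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# C  = Correctly Classified Rate (overall accuracy)
# MS = Minimum Sensitivity (worst per-class recall), here computed from a
#      hypothetical set of predictions for the two graft classes.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])     # 1 = graft survives one year
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])

cm = confusion_matrix(y_true, y_pred)                  # rows = true class, cols = predicted
ccr = np.trace(cm) / cm.sum()
per_class_sensitivity = np.diag(cm) / cm.sum(axis=1)
ms = per_class_sensitivity.min()

print(f"C (correct classification rate) = {ccr:.2f}")
print(f"MS (minimum sensitivity)        = {ms:.2f}")
```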

14.
We study six real-world major strategic decisions and discuss the role that analytic Multiple Criteria Decision Making (MCDM) models could play in helping decision makers structure and solve such problems. We have interviewed successful and well-educated managers who had access to quantitative decision models, but did not use them as part of their decision process. Our approach is a clinical one that takes a close look at the decision processes. We believe that the normative MCDM framework is oversimplified and does not always fit well with complex, real-world organizational decision processes. This may be one reason why decision tools are not used more widely for solving high-level decision problems. We believe that it would be worthwhile to revise some of the MCDM mainstream postulates and practices to make existing models and tools more suitable for practical purposes. Mainstream MCDM research has until now focused on the choice among alternatives. One should realize that MCDM models could also be used in creating alternatives, in assessing the importance of criteria, in providing the decision makers with “post-commitment support”, and as part of a devil's advocate approach.

15.
Recent progress in data processing technology has made the accumulation and systematic organization of large volumes of data a routine activity. As a result of these developments, there is an increasing need for data-based or data-driven methods of model development. This paper describes data-driven classification methods and shows that the automatic development and refinement of decision support models is now possible when the machine is given a large (or sometimes even a small) number of observations that express instances of a certain task domain. The classifier obtained may be used to build a decision support system, to refine or update an existing system and to understand or improve a decision-making process. The described AI classification methods are compared with statistical classification methods for a marketing application. They can act as a basis for data-driven decision support systems that have two basic components: an automated knowledge module and an advice module or, in different terms, an automated knowledge acquisition/retrieval module and a knowledge processing module. When these modules are integrated or linked, a decision support system can be created which enables an organization to make better-quality decisions, with reduced variance, probably using fewer people.

16.
Weighted voting classifiers (WVCs) consist of N units that each provide individual classification decisions. The entire system output is based on tallying the weighted votes for each decision and choosing the winning one (plurality voting) or one which has the total weight of supporting votes greater than some specified threshold (threshold voting). Each individual unit may abstain from voting. The entire system may also abstain from voting if no decision is ultimately winning. Existing methods of evaluating the correct classification probability (CCP) of WVCs can be applied to limited special cases of these systems (threshold voting) and impose some restrictions on their parameters. In this paper a method is suggested which allows the CCP of WVCs with both plurality and threshold voting to be exactly evaluated without imposing constraints on unit weights. The method is based on using the modified universal generating function technique.
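For a handful of units the correct classification probability can be checked by brute-force enumeration of every combination of unit outcomes, which is a useful reference point for the generating-function method. The sketch below is that enumeration, not the universal generating function technique itself, and it simplifies matters by assuming all wrong votes support a single incorrect decision; the weights, probabilities and threshold are made up.

```python
from itertools import product

# Each voting unit classifies correctly, incorrectly, or abstains with the given
# probabilities, and carries a weight.  For a small number of units the correct
# classification probability (CCP) can be found by enumerating all outcomes.
units = [
    # (weight, P(correct), P(incorrect), P(abstain))
    (3.0, 0.80, 0.15, 0.05),
    (2.0, 0.70, 0.20, 0.10),
    (1.0, 0.60, 0.30, 0.10),
    (1.0, 0.55, 0.35, 0.10),
]
threshold = 4.0      # threshold voting: the correct decision must gather this much weight

ccp_plurality = 0.0
ccp_threshold = 0.0
for outcome in product((0, 1, 2), repeat=len(units)):      # 0=correct, 1=incorrect, 2=abstain
    prob = 1.0
    w_correct = w_incorrect = 0.0
    for (w, pc, pi, pa), o in zip(units, outcome):
        prob *= (pc, pi, pa)[o]
        w_correct += w * (o == 0)
        w_incorrect += w * (o == 1)
    if w_correct > w_incorrect:                             # plurality: correct decision wins
        ccp_plurality += prob
    if w_correct > threshold:                               # threshold: enough supporting weight
        ccp_threshold += prob

print(f"CCP (plurality voting): {ccp_plurality:.4f}")
print(f"CCP (threshold voting): {ccp_threshold:.4f}")
```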

17.
We consider in this paper the robustness of decisions based on probabilistic thresholds. To this end, we propose the same-decision probability as a query that can be used as a confidence measure for threshold-based decisions. More specifically, the same-decision probability is the probability that we would have made the same threshold-based decision, had we known the state of some hidden variables pertaining to our decision. We study a number of properties of the same-decision probability. First, we analyze its computational complexity. We then derive a bound on its value, which we can compute using a variable elimination algorithm that we propose. Finally, we consider decisions based on noisy sensors in particular, showing through examples that the same-decision probability can be used to reason about threshold-based decisions in a more refined way.
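In a model small enough to enumerate, the same-decision probability reduces to summing the posterior mass of the hidden-variable states under which the threshold decision would not change. The toy sketch below uses made-up numbers and a single binary hidden variable; the paper's contribution is computing (a bound on) this quantity in general Bayesian networks via variable elimination.

```python
# Toy illustration of the same-decision probability (SDP).  A threshold decision
# is taken from P(target | evidence); the SDP asks how likely we would be to take
# the same decision had we also observed a hidden variable H.  All numbers below
# are hypothetical.
threshold = 0.5

p_h_given_e = {0: 0.4, 1: 0.6}            # posterior over the hidden variable H
p_y_given_e_h = {0: 0.3, 1: 0.9}          # P(target = 1 | evidence, H = h)

# Current decision, based on evidence alone: compare P(target = 1 | e) to the threshold.
p_y_given_e = sum(p_h_given_e[h] * p_y_given_e_h[h] for h in p_h_given_e)
decision = p_y_given_e >= threshold

# SDP: probability mass of the hidden states under which the same decision holds.
sdp = sum(p for h, p in p_h_given_e.items()
          if (p_y_given_e_h[h] >= threshold) == decision)

print(f"P(target | evidence) = {p_y_given_e:.2f}, decision = {decision}")
print(f"same-decision probability = {sdp:.2f}")
```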

18.
All UK companies are required by company law to prepare financial statements that must comply with law and accounting standards. With the exception of very small companies, financial accounts must then be audited by UK registered auditors who must express an opinion on whether these statements are free from material misstatements, and have been prepared in accordance with legislation and relevant accounting standards (unqualified opinion) or not (qualified opinion). The objective of the present study is to explore the potential of developing multicriteria decision aid models for reproducing, as accurately as possible, the auditors’ opinion on the financial statements of the firms. A sample of 625 audited company-years with qualified statements and 625 with unqualified financial statements over the period 1998–2003, drawn from 823 private and public manufacturing companies, is used, in contrast to most of the previous UK studies, which have mainly focused on very small or very large public companies. Furthermore, the models are developed and tested using the walk-forward approach, as opposed to previous studies that employ simple holdout tests or resampling techniques. Discriminant analysis and logit analysis are also used for comparison purposes. The out-of-time and out-of-sample testing results indicate that the two multicriteria decision aid techniques achieve almost equal classification accuracies and are both more efficient than discriminant and logit analysis.
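The walk-forward (out-of-time) protocol mentioned here trains on all observations up to a given year and tests on the following year, rolling forward one year at a time. The sketch below illustrates that split logic on synthetic data with a simple discriminant-analysis benchmark; the year labels and model are placeholders, not the study's data or MCDA models.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Walk-forward (out-of-time) testing: train on all years up to t, test on year t + 1,
# then roll forward.  Synthetic data; the year labels are hypothetical.
rng = np.random.default_rng(0)
years = np.repeat([1998, 1999, 2000, 2001, 2002, 2003], 200)
X = rng.normal(size=(len(years), 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, len(years)) > 0).astype(int)  # 1 = qualified opinion

for test_year in range(1999, 2004):
    train, test = years < test_year, years == test_year
    model = LinearDiscriminantAnalysis().fit(X[train], y[train])
    acc = model.score(X[test], y[test])
    print(f"train <= {test_year - 1}, test on {test_year}: accuracy = {acc:.3f}")
```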

19.
Many decisions people make are based on multitudes of inferences. People have been shown to generate sense quite effortlessly, and even compulsively, in highly opaque situations. Recently, it has been suggested that the making of an inference-based decision may be accompanied by an increase in the coherence of assessments of the individual arguments related to the alternatives at hand. This suggests a constraint satisfaction reasoning process. In two complex and ambiguous law-related emerging decisions, assessments of inferences increasingly spread apart, even if no additional information was provided. Two approaches for studying emerging coherence are developed. First, the structures that emerge as participants progress from stage to stage in the judgment process are captured as principal components through factor analysis. Second, discriminant analysis is employed to test the predictive strength of the emerging cognitive structures vis-à-vis each sequential decision.

20.
Hurricane forecasts are intended to convey information that is useful in helping individuals and organizations make decisions. For example, decisions include whether a mandatory evacuation should be issued, where emergency evacuation shelters should be located, and what quantities of emergency supplies should be stockpiled at various locations. This paper incorporates one of the National Hurricane Center's official prediction models into a Bayesian decision framework to address complex decisions made in response to an observed tropical cyclone. The Bayesian decision process accounts for the trade-off between improving forecast accuracy and deteriorating cost efficiency (with respect to implementing a decision) as the storm evolves, which is characteristic of the above-mentioned decisions. The specific application addressed in this paper is a single-supplier, multi-retailer supply chain system in which demand at each retailer location is a random variable that is affected by the trajectory of an observed hurricane. The solution methodology is illustrated through numerical examples, and the benefit of the proposed approach compared to a traditional approach is discussed.
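The flavour of such a Bayesian decision process can be conveyed with a deliberately small sketch: a few track scenarios with prior probabilities imply demands at a single retailer, the scenario probabilities are updated as a new storm position is observed, and a newsvendor-style stocking quantity is re-optimised against expected over- and under-stocking cost. All numbers and the Gaussian observation model are hypothetical and stand in for the National Hurricane Center prediction model used in the paper.

```python
import numpy as np

# A few hurricane-track scenarios, each implying a demand for emergency supplies
# at one retailer.  As a new storm position is observed, scenario probabilities
# are updated and the stocking decision is re-optimised.
scenarios = np.array([0.0, 50.0, 120.0])          # predicted landfall offsets (km) per scenario
demand = np.array([200.0, 800.0, 1500.0])         # implied demand per scenario (units)
prior = np.array([0.5, 0.3, 0.2])

overage_cost, underage_cost = 1.0, 5.0            # cost per unit over- / under-stocked

def expected_cost(q, probs):
    over = np.maximum(q - demand, 0.0)
    under = np.maximum(demand - q, 0.0)
    return np.dot(probs, overage_cost * over + underage_cost * under)

def best_quantity(probs):
    grid = np.arange(0, 2001, 10)                 # candidate stocking quantities
    return min(grid, key=lambda q: expected_cost(q, probs))

print("stock before new observation:", best_quantity(prior))

# New position fix: observed offset of 60 km with roughly 30 km observation noise.
obs, sigma = 60.0, 30.0
likelihood = np.exp(-0.5 * ((obs - scenarios) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()

print("posterior over scenarios:", np.round(posterior, 3))
print("stock after new observation:", best_quantity(posterior))
```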
