Similar Articles
Found 20 similar articles (search time: 140 ms)
1.
We developed an end-to-end process for inducing models of behavior from expert task performance through in-depth case study. A subject matter expert (SME) performed navigational and adversarial tasks in a virtual tank combat simulation, using the dTank and Unreal platforms. Using eye tracking and Cognitive Task Analysis, we identified the key goals pursued and attributes used by the SME, including reliance on an egocentric spatial representation and on-the-fly re-representation of terrain in qualitative terms such as “safe” and “risky”. We demonstrated methods for automatic extraction of these qualitative higher-order features from combinations of surface features present in the simulation, producing a terrain map that was visually similar to the SME-annotated map. The application of decision-tree and instance-based machine learning methods to the transformed task data supported prediction of SME task selection with greater than 95% accuracy, and SME action selection at a frequency of 10 Hz with greater than 63% accuracy, with real-time constraints placing limits on algorithm selection. A complete processing model is presented for a path driving task, with the induced generative model deviating from the SME-chosen path by less than 2 meters on average. The derived attributes also enabled environment portability: path driving models induced from dTank performance and deployed in Unreal demonstrated accuracy equivalent to those induced and deployed completely within Unreal.
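A minimal sketch of the induction step, assuming hypothetical derived features and synthetic labels (the study's actual attributes and data are not reproduced here): a shallow decision tree is trained to predict task selection from qualitative terrain features, with depth kept small so inference stays cheap under real-time constraints.

```python
# Hypothetical sketch: inducing a task-selection model from derived
# qualitative features with a decision tree. Feature names, data and
# task labels are illustrative, not the study's actual attributes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),      # terrain_risky: 0 = "safe", 1 = "risky"
    rng.uniform(-180, 180, n),  # egocentric bearing to target (degrees)
    rng.uniform(0, 500, n),     # range to nearest threat (metres)
])
y = rng.integers(0, 4, n)       # task id, e.g. navigate/engage/hide/scout

clf = DecisionTreeClassifier(max_depth=5)  # shallow tree keeps 10 Hz inference cheap
print(cross_val_score(clf, X, y, cv=5).mean())
```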

2.
The Korean government has been funding small and medium enterprises (SMEs) with superior technology, selected on the basis of a scorecard. However, a high default rate among funded SMEs has been reported. In order to manage such government funds effectively, it is important to develop an accurate scoring model for SMEs. In this paper, we provide a random effects logistic regression model to predict the default of funded SMEs based on both financial and non-financial factors. The advantage of such a random effects model lies in its ability to accommodate not only the individual characteristics of each SME but also the uncertainty that cannot be explained by such individual factors. We expect that our study can contribute to the effective management of government funds by proposing prediction models for the default of funded SMEs.
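In generic form (the paper's exact covariates and grouping structure are not given in the abstract), a random effects logit can be written as:

```latex
\Pr(y_{ij} = 1 \mid \mathbf{x}_{ij}, u_j)
  = \frac{\exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j)}
         {1 + \exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j)},
\qquad u_j \sim \mathcal{N}(0, \sigma_u^2),
```

where $y_{ij}$ indicates default of SME $i$ in group $j$, $\mathbf{x}_{ij}$ collects the financial and non-financial factors, and the random effect $u_j$ absorbs the uncertainty not explained by the individual covariates.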

3.
The 2004 Basel II Accord has pointed out the benefits of credit risk management through internal models using internal data to estimate risk components: probability of default (PD), loss given default, exposure at default and maturity. Internal data are the primary data source for PD estimates; banks are permitted to use statistical default prediction models to estimate the borrowers’ PD, subject to some requirements concerning accuracy, completeness and appropriateness of data. In practice, however, internal records are usually incomplete or do not contain adequate history to estimate the PD. Missing data are especially critical with regard to low-default portfolios, which are characterised by inadequate default records, making it difficult to design statistically significant prediction models. Several methods might be used to deal with missing data, such as list-wise deletion, application-specific list-wise deletion, substitution techniques or imputation models (simple and multiple variants). List-wise deletion is an easy-to-use method widely applied by social scientists, but it loses substantial data and reduces the diversity of information, resulting in bias in the model's parameters, results and inferences. The choice of the best method to handle missing data largely depends on the nature of the missing values (MCAR, MAR and MNAR processes), but there is a lack of empirical analysis of their effect on credit risk, which limits the validity of the resulting models. In this paper, we analyse the nature and effects of missing data in credit risk modelling (MCAR, MAR and MNAR processes) using a scarce data set on consumer borrowers that includes different percentages and distributions of missing data. The findings are used to analyse the performance of several methods for dealing with missing data, such as list-wise deletion, simple imputation methods, MLE models and advanced multiple imputation (MI) alternatives based on Markov chain Monte Carlo and re-sampling methods. Results are evaluated and discussed across models in terms of robustness, accuracy and complexity. In particular, MI models are found to provide very valuable solutions with regard to credit risk missing data.
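A minimal multiple-imputation sketch on synthetic data (not the paper's consumer data set or its exact MI algorithm): each completed data set is generated by an iterative imputer with posterior sampling, a logistic default model is fit to each, and the coefficients are pooled.

```python
# Sketch: multiple imputation (M completed data sets) followed by
# pooling of logistic-regression coefficients. Data are synthetic with
# 20% of values deleted completely at random (MCAR).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan

M = 5
coefs = []
for m in range(M):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    coefs.append(LogisticRegression().fit(imp.fit_transform(X), y).coef_[0])

# Pooled point estimates (the mean step of Rubin's rules).
print(np.mean(coefs, axis=0))
```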

4.
Mixture cure models were originally proposed in medical statistics to model long-term survival of cancer patients in terms of two distinct subpopulations - those that are cured of the event of interest and will never relapse, along with those that are uncured and are susceptible to the event. In the present paper, we introduce mixture cure models to the area of credit scoring, where, similarly to the medical setting, a large proportion of the dataset may not experience the event of interest during the loan term, i.e. default. We estimate a mixture cure model predicting (time to) default on a UK personal loan portfolio, and compare its performance to the Cox proportional hazards method and standard logistic regression. Results for credit scoring at an account level and prediction of the number of defaults at a portfolio level are presented; model performance is evaluated through cross-validation on discrimination and calibration measures. Discrimination performance for all three approaches was found to be high and competitive. Calibration performance for the survival approaches was found to be superior to logistic regression for intermediate time intervals and useful for fixed 12-month time horizon estimates, reinforcing the flexibility of survival analysis both as a risk ranking tool and for providing robust estimates of probability of default over time. Furthermore, the mixture cure model’s ability to distinguish between two subpopulations can offer additional insights by estimating the parameters that determine susceptibility to default, in addition to parameters that influence the time to default of a borrower.
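The mixture cure decomposition takes the standard two-subpopulation form (a generic statement of the model class, not the paper's fitted specification):

```latex
S(t \mid \mathbf{x}, \mathbf{z})
  = \pi(\mathbf{z}) + \bigl(1 - \pi(\mathbf{z})\bigr)\, S_u(t \mid \mathbf{x}),
\qquad
\pi(\mathbf{z}) = \frac{1}{1 + \exp(-\mathbf{z}^{\top}\boldsymbol{\gamma})},
```

where $\pi(\mathbf{z})$ is the probability of belonging to the cured subpopulation (borrowers who will never default), typically modeled by a logistic incidence component, and $S_u(t \mid \mathbf{x})$ is the latency survival function of the uncured, susceptible borrowers.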

5.
The paper proposes a novel model for the prediction of bank failures, on the basis of both macroeconomic and bank-specific microeconomic factors. As bank failures are rare, we apply a regression method for binary data based on extreme value theory, which turns out to be more effective than classical logistic regression models, as it better leverages the information in the tail of the default distribution. The application of this model to the occurrence of bank defaults in a highly bank-dependent economy (Italy) shows that, while microeconomic factors as well as regulatory capital are significant in explaining failures proper, macroeconomic factors are relevant only when failures are defined not only in terms of actual defaults but also in terms of mergers and acquisitions. In terms of predictive accuracy, the model based on extreme value theory outperforms classical logistic regression models.
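Binary regression based on extreme value theory typically replaces the symmetric logit link with a generalized extreme value (GEV) link; shown generically here (the paper's exact specification may differ):

```latex
\Pr(y_i = 1 \mid \mathbf{x}_i)
  = \exp\!\Bigl[ -\bigl( 1 + \xi\, \mathbf{x}_i^{\top}\boldsymbol{\beta} \bigr)^{-1/\xi} \Bigr],
\qquad 1 + \xi\, \mathbf{x}_i^{\top}\boldsymbol{\beta} > 0,
```

where the shape parameter $\xi$ controls tail weight; the asymmetry of the GEV link is what lets the model exploit the information in the sparse tail of rare failures, with $\xi \to 0$ recovering the complementary log-log model.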

6.
Traditionally, credit scoring aimed at distinguishing good payers from bad payers at the time of application. The timing of customer default is also interesting to investigate, since it can give the bank the ability to do profit scoring. Analysing when customers default is typically tackled using survival analysis. In this paper, we discuss and contrast statistical and neural network approaches to survival analysis. Compared to the proportional hazards model, neural networks may offer an interesting alternative because of their universal approximation property and the fact that no baseline hazard assumption is needed. Several neural network survival analysis models are discussed and evaluated according to their way of dealing with censored observations, time-varying inputs, the monotonicity of the generated survival curves and their scalability. In the experimental part, we contrast the performance of a neural network survival analysis model with that of the proportional hazards model for predicting both loan default and early repayment, using data from a UK financial institution.
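One common way to train a neural network on survival data, sketched below on assumed synthetic loans (not necessarily the architecture evaluated in the paper), is the discrete-time reduction: expand each loan into one record per period at risk and fit a binary classifier on the discrete hazard, which handles censoring naturally.

```python
# Sketch: discrete-time survival via person-period expansion, with an
# MLP estimating the hazard h(t | x). Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n, horizon = 300, 24                      # loans, months observed
X = rng.normal(size=(n, 3))               # application characteristics
t = rng.integers(1, horizon + 1, n)       # observed month (event or censoring)
event = rng.integers(0, 2, n)             # 1 = default, 0 = censored

rows, labels = [], []
for i in range(n):
    for month in range(1, t[i] + 1):      # one record per loan-month at risk
        rows.append(np.append(X[i], month))
        labels.append(1 if (event[i] == 1 and month == t[i]) else 0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf.fit(np.array(rows), np.array(labels))
# Survival curve: S(t|x) = prod_{s<=t} (1 - h(s|x)), monotone by construction.
```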

7.
Bankruptcy prediction by generalized additive models
We compare several accounting-based models for bankruptcy prediction. The models are developed and tested on large data sets containing annual financial statements for Norwegian limited liability firms. Out-of-sample and out-of-time validation shows that generalized additive models significantly outperform popular models like linear discriminant analysis, generalized linear models and neural networks at all levels of risk. Further, important issues like the default horizon and performance depreciation are examined. We clearly see performance depreciation as the default horizon is increased and as time goes by. Finally, a multi-year model, developed on all available data from three consecutive years, is compared with a one-year model, developed on data from the most recent year only. The multi-year model exhibits a desirable robustness to yearly fluctuations that is not present in the one-year model.
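A minimal sketch of a generalized additive model for default prediction, using the third-party `pygam` package on synthetic ratios (illustrative only; not the paper's Norwegian data or chosen smoothers):

```python
# GAM sketch: one smooth term per accounting ratio. The nonlinear
# shapes are what let a GAM outperform linear discriminant analysis or
# a plain GLM when the ratio-risk relationships are non-monotone.
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))          # e.g. leverage, liquidity, profitability
logit = 1.5 * np.sin(X[:, 0]) - X[:, 1] ** 2 + 0.5 * X[:, 2]
y = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X, y)
print(gam.predict_proba(X[:5]))         # estimated bankruptcy probabilities
```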

8.
This paper establishes a first passage time model based on Merton's structural model, using geometric Brownian motion. We consider accounting noise and the historical default record, and introduce a new incomplete-information hypothesis. In addition, we introduce the stock's liquidity value into the model, and apply its measurement method, based on Merton's structural model, to the first passage time model to obtain an endogenous default boundary. Under incomplete information, the conditional default probability is derived using the default boundary. Finally, we analyse the effect of the correlation between the stock price and company assets on the default probability.
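A Monte Carlo sketch with assumed parameters (not the paper's calibration): the core quantity in a first-passage-time structural model is the probability that a geometric Brownian motion asset path first crosses the default boundary within the horizon.

```python
# Estimate P(first passage <= T) for a GBM asset value hitting a flat
# default boundary B. The discrete time grid slightly understates the
# continuous crossing probability.
import numpy as np

rng = np.random.default_rng(4)
V0, mu, sigma = 100.0, 0.05, 0.25          # initial asset value, drift, volatility
B, T, steps, n_paths = 60.0, 5.0, 500, 10_000   # boundary, horizon (years)

dt = T / steps
z = rng.standard_normal((n_paths, steps))
log_paths = np.log(V0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
defaulted = (log_paths <= np.log(B)).any(axis=1)   # boundary ever breached?
print(f"P(first passage <= T) ~ {defaulted.mean():.4f}")
```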

9.
The combined qualitative-quantitative nonlinear dynamics model proposed in this paper fills a gap in nonlinear dynamics modelling with respect to combined qualitative-quantitative methods: it lets the qualitative and quantitative models complement each other, with each using its strengths to make up for the other's deficiencies. The combined model overcomes both the weakness that a qualitative model cannot be applied and verified quantitatively, and the high cost and long time of repeatedly constructing and verifying a quantitative model, making it more practical and efficient, which is of great significance for nonlinear dynamics. The combined modelling and analysis method proposed here applies not only to nonlinear dynamics but can also be adopted and drawn on in the modelling and analysis of other fields; it also satisfactorily resolves the problems with existing analytical methods for nonlinear dynamics models of the price system. The three-dimensional dynamics model of price, supply-demand ratio and selling rate established in this paper estimates optimal commodity prices from the model results, thereby providing a theoretical basis for the government's macro-control of prices, and offering theoretical guidance on how to enhance purchasing power and consumption levels, and hence living standards, through price regulation.
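The abstract does not reproduce the paper's three-dimensional system, so the right-hand side below is a purely hypothetical placeholder; the sketch only shows how such a price/supply-demand-ratio/selling-rate model would be integrated numerically once its equations are specified.

```python
# Skeleton only: hypothetical couplings standing in for the paper's
# actual three-dimensional dynamics of price p, supply-demand ratio r
# and selling rate s. Replace rhs() with the published equations.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, a=1.0, b=0.5, c=0.2):
    p, r, s = y
    dp = a * (1 - r) * p          # hypothetical: excess demand raises price
    dr = b * (s - r)              # hypothetical: selling rate feeds supply-demand ratio
    ds = c * (r - p / (1 + p))    # hypothetical: price dampens the selling rate
    return [dp, dr, ds]

sol = solve_ivp(rhs, (0, 50), [1.0, 1.0, 0.5], dense_output=True)
print(sol.y[:, -1])   # long-run state under the placeholder dynamics
```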

10.
Using a model based on credit rationing, this paper explains why small and medium-sized enterprises (SMEs) have difficulty obtaining finance, and argues that relationship lending is one way to resolve the credit-rationing problem. Relationship lending is an institution endogenous to market exchange. The "relationship" here is a long-term, closed and standardised contractual relationship between a firm and one primary bank (or a small number of banks); maintaining it helps the bank collect information about the firm's prospects and repayment probability, which in turn facilitates lending decisions and makes it easier for SMEs to obtain the loans they need.

11.
This paper proposes a proportional odds model to combine systemic and non-systemic risk for prediction of default and prepayment performance in cohorts of booked loan accounts. We assume that the performance odds are proportional to two independent factors: one based on age-dependent systemic, possibly external, global disruptions to a cohort of individual accounts; the other on traditional non-systemic information odds based on demographic, behavioural and financial payment patterns of the individual accounts. A proportional odds model provides a natural formulation that can combine hazard rate predictions of baseline defaults, prepayments and active accounts with traditional non-systemic risk scores of individuals within the cohort. Theoretical comparisons with proportional hazards models are illustrated. Although our model is developed in terms of Good/Bad performance, it can accommodate late payments, prepayments and defaults, as well as responses to offers and other classifications. We make 60-month default and prepayment forecasts under two different systemic risk scenarios for a portfolio of Alt-A mortgages with 24-month ‘teaser rates’ originated in 2004.
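The proportionality assumption can be written compactly (generic notation, not the paper's):

```latex
O(\text{Good} \mid t, \mathbf{x})
  \;=\; O_{\mathrm{sys}}(t) \,\times\, O_{\mathrm{ind}}(\mathbf{x}),
```

where $O_{\mathrm{sys}}(t)$ is the age-dependent systemic odds factor shared by the cohort (capturing global disruptions) and $O_{\mathrm{ind}}(\mathbf{x})$ is the individual information odds implied by a traditional credit score, so that the two sources of risk multiply on the odds scale.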

12.
This paper evaluates the resurrection event for defaulted firms and incorporates observable cure events in the default prediction of SMEs. Thanks to the additional cure-related observable data, a completely new information set is applied to predict individual default and cure events. This is a new approach in credit risk that, to our knowledge, has not been followed before. Different firm-specific and macroeconomic risk drivers influencing default and cure events are identified. The significant variables allow a firm-specific default risk evaluation combined with an individual risk-reducing cure probability. The identification and incorporation of cure-relevant factors in the default risk framework enable lenders to support the complete resurrection of a firm in the case of its default and hence reduce the default risk itself. The estimations are developed with a database that contains 5930 mostly small and medium-sized German firms and a total of more than 23,000 financial statements over a time horizon from January 2002 to December 2007. Given its significant influence on the default risk probability, as well as the bank’s possible profit prospects for a cured firm, it seems essential for risk management to incorporate the additional cure information into credit risk evaluation.

13.
New Bayesian cohort models designed to resolve the identification problem in cohort analysis are proposed in this paper. First, the basic cohort model, which represents the statistical structure of time-series social survey data in terms of age, period and cohort effects, is explained. The logit cohort model for qualitative data from a binomial distribution and the normal-type cohort model for quantitative data from a normal distribution are considered as two special cases of the basic model. In order to overcome the identification problem in cohort analysis, a Bayesian approach is adopted, based on the assumption that the effect parameters change gradually. A Bayesian information criterion, ABIC, is introduced for the selection of the optimal model. This approach is so flexible that both the logit and the normal-type cohort models can be applied not only to standard cohort tables but also to general cohort tables in which the range of the age groups is not equal to the interval between periods. The practical utility of the proposed models is demonstrated by analysing two data sets from the literature on cohort analysis.
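The logit special case can be written generically as an age-period-cohort decomposition (the normal-type model replaces the logit link with an identity link on normal data):

```latex
\operatorname{logit} p_{ijk} = \mu + \alpha_i^{A} + \alpha_j^{P} + \alpha_k^{C},
\qquad k = j - i + \text{const.},
```

with the gradual-change prior $\alpha_{i+1}^{A} - \alpha_i^{A} \sim \mathcal{N}(0, \sigma_A^2)$, and similarly for the period and cohort effects. Because cohort is a linear function of age and period, the three effects are not separately identified from the likelihood alone; the gradually-changing-parameters prior supplies the missing information, and ABIC selects among candidate settings.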

14.
Credit risk models are commonly based on large internal data sets to produce reliable estimates of the probability of default (PD) that should be validated with time. However, in the real world, a substantial portion of the exposures is included in low-default portfolios (LDPs) in which the number of defaulted loans is usually much lower than the number of non-default observations. Modelling of these imbalanced data sets is particularly problematic with small portfolios in which the absence of information increases the specification error. Sovereigns, banks, or specialised retail exposures are recent examples of post-crisis portfolios with insufficient data for PD estimates, which require specific tools for risk quantification and validation. This paper explores the suitability of cooperative strategies for managing such scarce LDPs. In addition to the use of statistical and machine-learning classifiers, this paper explores the suitability of cooperative models and bootstrapping strategies for default prediction and multi-grade PD setting using two real-world credit consumer data sets. The performance is assessed in terms of out-of-sample and out-of-time discriminatory power, PD calibration, and stability. The results indicate that combinational approaches based on correlation-adjusted strategies are promising techniques for managing sparse LDPs and providing accurate and well-calibrated credit risk estimates.
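A sketch of one bootstrapping strategy for a sparse LDP, on synthetic data (illustrating the general idea rather than the paper's exact cooperative models): each resample keeps all defaults and draws a bootstrap of the non-defaults, and the ensemble averages the resulting PDs for stability.

```python
# Bootstrap ensemble for a low-default portfolio (~2% defaults):
# averaging over resampled fits stabilises PD estimates when default
# observations are scarce.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 4))
y = (rng.random(2000) < 0.02).astype(int)

def_idx, good_idx = np.where(y == 1)[0], np.where(y == 0)[0]
probs = []
for b in range(50):
    boot = np.concatenate([def_idx,   # keep every default in each resample
                           rng.choice(good_idx, size=len(good_idx), replace=True)])
    model = LogisticRegression().fit(X[boot], y[boot])
    probs.append(model.predict_proba(X)[:, 1])

pd_estimate = np.mean(probs, axis=0)  # averaged, more stable PDs
print(pd_estimate[:5])
```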

15.
周颖 (Zhou Ying), 《运筹与管理》 (Operations Research and Management Science), 2021, 30(1): 209-216
Credit rating measures the likelihood that a debt will default, i.e. the magnitude of default risk. This paper builds a credit rating model using the information-gain method and conducts an empirical analysis on loan data for small industrial enterprises. Its contributions are threefold. First, following the idea that the larger an indicator's information gain, the better it separates defaulting from non-defaulting firms, we select indicators with a large influence on default status, remedying the failure of existing studies to use default discrimination ability as the selection criterion. Second, within each pair of highly correlated, redundant indicators, we delete the one with the smaller information gain, i.e. the weaker default discriminator, avoiding both redundant information and the accidental deletion of indicators with strong discrimination ability. Third, we weight the indicators by their information-gain values, guaranteeing that indicators with stronger default discrimination ability receive larger weights, correcting the drawback of existing weighting schemes that do not reflect discrimination ability. The empirical results show that the 31 selected indicators, including the asset-liability ratio, the industry climate index, and collateral and guarantees, discriminate default status significantly without redundant information, and that solvency is the key factor in the credit rating of small industrial enterprises.
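A sketch of the pipeline described above, on synthetic data (illustrative thresholds; not the paper's 31 indicators): score indicators by information gain with respect to default status, drop the weaker member of each highly correlated pair, and weight the survivors by normalised information gain.

```python
# Information-gain selection and weighting sketch. mutual_info_classif
# serves as the information-gain score of each indicator w.r.t. the
# default label; the 0.8 correlation cut-off is illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(6)
X = rng.normal(size=(800, 6))
X[:, 5] = X[:, 0] + 0.05 * rng.normal(size=800)   # a redundant indicator
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=800) > 0).astype(int)

ig = mutual_info_classif(X, y, random_state=0)    # information gain per indicator
corr = np.corrcoef(X, rowvar=False)

keep = set(range(X.shape[1]))
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        if abs(corr[i, j]) > 0.8 and i in keep and j in keep:
            keep.discard(i if ig[i] < ig[j] else j)  # drop weaker discriminator

keep = sorted(keep)
weights = ig[keep] / ig[keep].sum()               # IG-proportional weights
print(keep, np.round(weights, 3))
```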

16.
Retail credit models are implemented using discrete survival analysis, enabling macroeconomic conditions to be included as time-varying covariates. In consequence, these models can be used to estimate changes in the probability of default under downturn economic scenarios. Compared with traditional models, we offer improved methodologies for scenario generation and for using these scenarios to predict default rates. Monte Carlo simulation is used to generate a distribution of estimated default rates, from which Value at Risk and Expected Shortfall are computed as a means of stress testing. Several macroeconomic variables are considered and, in particular, factor analysis is employed to model the structure between these variables. Two large UK data sets are used to test this approach, resulting in plausible dynamic models and stress test outcomes.
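A toy version of the stress-testing step (all coefficients illustrative, not estimates from the paper's UK data sets): simulated macro scenarios feed a hazard-style PD model, Monte Carlo yields a default-rate distribution, and VaR and Expected Shortfall are read off its tail.

```python
# Monte Carlo stress test: default-rate distribution -> VaR and ES.
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_loans = 10_000, 50_000
macro = rng.standard_normal(n_sims)            # one macro-factor draw per scenario

base_hazard, beta_macro = 0.02, 0.5            # illustrative hazard model
pd_scenario = np.clip(base_hazard * np.exp(beta_macro * macro), 0, 1)
default_rate = rng.binomial(n_loans, pd_scenario) / n_loans

var_99 = np.quantile(default_rate, 0.99)
es_99 = default_rate[default_rate >= var_99].mean()   # Expected Shortfall
print(f"VaR(99%) = {var_99:.4f}, ES(99%) = {es_99:.4f}")
```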

17.
Managerial strategies, especially at the higher echelons of management, are often linguistically stated. This is because they need to be based on information which often defies quantification. Such verbal strategies and qualitative information have often been found difficult to incorporate in quantitative models. Thus, the quantitative effects of implementing one strategy as opposed to another have generally been difficult to forecast.

In this paper, we show that, through the use of fuzzy logic, we can incorporate such qualitative (linguistically stated) information. Furthermore, we show that a fuzzy controller can be designed so as to reach desired goals while being cognizant of linguistically stated strategies, scenarios and decision rules, as well as quantitative data types.

The approach is applied to the modeling and control of market penetration, a field which has attracted considerable attention in recent years.
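A toy fuzzy-rule sketch with a hypothetical two-rule base (not the paper's controller): a crisp market-share reading is fuzzified, linguistic rules fire to degrees, and the aggregated output is defuzzified by centroid into an advertising adjustment.

```python
# Min-max fuzzy inference with triangular output sets and centroid
# defuzzification. Membership functions and rules are hypothetical.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

penetration = 0.35                                        # crisp input: market share
low = float(np.clip((0.5 - penetration) / 0.5, 0, 1))     # degree "share is low"
high = float(np.clip((penetration - 0.3) / 0.7, 0, 1))    # degree "share is high"

u = np.linspace(-1, 1, 201)                # output universe: change in ad spend
# IF share is low THEN increase spend; IF share is high THEN cut spend.
agg = np.maximum(np.minimum(low, tri(u, 0.0, 0.6, 1.0)),
                 np.minimum(high, tri(u, -1.0, -0.6, 0.0)))
action = np.sum(u * agg) / np.sum(agg)     # centroid defuzzification
print(f"recommended change in advertising: {action:+.2f}")
```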

18.
Data semantics plays a fundamental role in computer science in general, and in computing with words in particular. The semantics of words is a sophisticated problem, since words, being inherently vague linguistic terms, are pieces of information characterized by impreciseness, incompleteness, uncertainty and/or vagueness. The qualitative semantics and the quantitative semantics are two closely related aspects of vague linguistic information. However, the qualitative semantics of linguistic terms, and even of the symbolic approaches, seem not to have been elaborated on directly in the literature. In this study, we propose an interpretation of the inherent order-based semantics of terms through their qualitative semantics modeled by hedge algebra structures. The quantitative semantics of terms are developed based on the quantification of hedge algebras. With this explicit approach, we propose two concepts of assessment scales to address decision problems: linguistic scales used for representing expert linguistic assessments, and semantic linguistic scales based on a 4-tuple linguistic representation model, which forms a formalized structure useful for computing with words. An example of a simple multi-criteria decision problem is examined in a comparative study. We also analyze the main advantages of the proposed approach.

19.
The internal-rating-based Basel II approach increases the need for the development of more realistic default probability models. In this paper, we follow the approach taken by McNeil and Wendin [7] (J. Empirical Finance, 2007) by constructing generalized linear mixed models for estimating default probabilities from annual data on companies with different credit ratings. In contrast to McNeil and Wendin [7], the models considered allow parsimonious parametric specifications to capture simultaneously the dependence of the default probabilities on time and credit ratings. Macro-economic variables can also be included. Estimation of all model parameters is facilitated with a Bayesian approach using Markov chain Monte Carlo methods. Special emphasis is given to the investigation of the predictive capabilities of the models considered; in particular, predictable model specifications are used. The empirical study, using default data from Standard and Poor's, gives evidence that the correlation between credit ratings further apart decreases and is higher than the one induced by the autoregressive time dynamics.
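A binomial GLMM consistent with this description, in generic form (the paper's parsimonious specification additionally structures dependence across rating classes):

```latex
d_{rt} \sim \operatorname{Binomial}(n_{rt},\, p_{rt}),
\qquad
\operatorname{probit}(p_{rt}) = \mu_r + \mathbf{z}_t^{\top}\boldsymbol{\gamma} + b_t,
\qquad
b_t = \phi\, b_{t-1} + \varepsilon_t,\quad \varepsilon_t \sim \mathcal{N}(0, \sigma^2),
```

where $d_{rt}$ and $n_{rt}$ are the default count and the number of firms in rating class $r$ in year $t$, $\mathbf{z}_t$ holds macro-economic covariates, and the latent autoregressive factor $b_t$ induces dependence over time; all parameters are estimated jointly by MCMC.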

20.
Operations research models are used in many business and non-business entities to support a variety of decision making activities, primarily well-defined, operational decisions. This is due to the traditional emphasis of these models on optimal solutions to pre-specified problems. Some attempts have been made to use OR models in support of more complex, strategic decision making. Traditionally, these models have been developed without explicit consideration of the information processing abilities and limitations of the decision makers who interact with, provide input to, and receive output from such models.

Research in judgement and decision making shows that human decisions are influenced by a number of factors including, but not limited to, information presentation modes; information content modes (e.g., quantitative versus qualitative); order effects such as primacy and recency; and simultaneous versus sequential presentation of data.

This article presents empirical research findings involving executive business decision makers and their preferences for information in decision making scenarios. These preference functions were evaluated using OR techniques. The results indicate that decision makers view information in different ways. Some decision makers prefer qualitative, narrative, social information, whereas others prefer quantitative, numerical, firm-specific information. Results also show that decision making tasks influence the preference structure of decision makers, but that, in general, the preferences are relatively stable across tasks.

The results imply that for OR models to be more useful in support of non-routine decision making, attention needs to be focused on the information content and presentation effects of model inputs and outputs.
