Similar literature: 20 records found.
1.
In recent years, financial regulations such as Basel II and Solvency II have highlighted the utility of credit risk assessments through internal rating systems, particularly for estimating the probability of default (PD) of credit exposures.

2.
The 2004 Basel II Accord has pointed out the benefits of credit risk management through internal models using internal data to estimate risk components: probability of default (PD), loss given default, exposure at default and maturity. Internal data are the primary data source for PD estimates; banks are permitted to use statistical default prediction models to estimate the borrowers’ PD, subject to some requirements concerning accuracy, completeness and appropriateness of data. However, in practice, internal records are usually incomplete or do not contain adequate history to estimate the PD. Missing data are especially critical for low-default portfolios, which are characterised by inadequate default records, making it difficult to design statistically significant prediction models. Several methods may be used to deal with missing data, such as list-wise deletion, application-specific list-wise deletion, substitution techniques or imputation models (simple and multiple variants). List-wise deletion is an easy-to-use method widely applied by social scientists, but it discards substantial data and reduces the diversity of information, resulting in biased model parameters, results and inferences. The choice of the best method to handle missing data largely depends on the nature of the missing values (MCAR, MAR and MNAR processes), but there is a lack of empirical analysis of their effect on credit risk, which limits the validity of the resulting models. In this paper, we analyse the nature and effects of missing data in credit risk modelling (MCAR, MAR and MNAR processes), using scarce data sets on consumer borrowers that include different percentages and distributions of missing data. The findings are used to analyse the performance of several methods for dealing with missing data, such as list-wise deletion, simple imputation methods, maximum-likelihood (MLE) models and advanced multiple imputation (MI) alternatives based on Markov chain Monte Carlo and re-sampling methods. The models are evaluated and compared in terms of robustness, accuracy and complexity. In particular, MI models are found to provide very valuable solutions to the missing data problem in credit risk.
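As a rough illustration of the multiple-imputation alternative discussed above, the sketch below draws several imputed data sets and pools the fitted PDs across them. It is a minimal sketch assuming scikit-learn and synthetic data; the feature set, missingness rate and number of imputations are illustrative, not taken from the paper.

```python
# Minimal multiple-imputation sketch for PD modelling (assumes scikit-learn).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # borrower features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)
X[rng.random(X.shape) < 0.15] = np.nan              # 15% values missing (MCAR here)

# Draw several imputed data sets and pool the fitted PDs across them.
pds = []
for seed in range(5):                               # 5 imputations, illustrative
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    X_imp = imputer.fit_transform(X)
    model = LogisticRegression().fit(X_imp, y)
    pds.append(model.predict_proba(X_imp)[:, 1])
pd_pooled = np.mean(pds, axis=0)                    # Rubin-style pooling of PDs
print("mean pooled PD:", pd_pooled.mean())
```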

3.
Estimation of the probability of default is of considerable importance in risk management applications, where default risk is referred to as credit risk. Basel II (Committee on Banking Supervision) proposes a revision to the international capital accord that implies a more prominent role for internal credit risk assessments based on the determination of the default probability of borrowers. In our study, we classify borrower firms into rating classes with respect to their default probability. Classifying firms into rating classes necessitates finding threshold values that separate the rating classes. We aim at solving two problems: to distinguish defaults from non-defaults, and to order the firms by credit quality and classify them into sub-rating classes. Receiver Operating Characteristic (ROC) analysis is employed to assess the discriminative power of the model used to obtain each firm's probability of default. In our new functional approach, we optimise the area under the ROC curve for a balanced choice of the thresholds, and we incorporate the accuracy of the solution into the program. Thus, a constrained optimisation problem on the area under the curve (or its complement) is carefully modelled, discretised and turned into a penalised sum-of-squares problem of nonlinear regression, to which we apply the Levenberg–Marquardt algorithm. We present numerical evaluations and their interpretations based on real-world data from firms in the Turkish manufacturing sector. We conclude with a discussion of structural frontiers, parametrical and computational features, and an invitation to future work.
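A much simpler stand-in for the thresholding idea above: compute the ROC curve of a PD model and derive rating-class thresholds from score quantiles. This sketch uses scikit-learn rather than the authors' constrained Levenberg–Marquardt formulation; the data and number of classes are illustrative.

```python
# ROC analysis plus quantile-based rating thresholds (simplified stand-in).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(size=2000) > 2).astype(int)  # defaults

pd_hat = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, pd_hat))             # discriminative power

# Split the PD scale into 5 sub-rating classes at score quantiles;
# the paper instead chooses the thresholds by optimising the area under the curve.
thresholds = np.quantile(pd_hat, [0.2, 0.4, 0.6, 0.8])
rating = np.digitize(pd_hat, thresholds)            # 0 = best, 4 = worst
print("class sizes:", np.bincount(rating))
```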

4.
In order to manage model risk, financial institutions need to set up validation processes so as to monitor the quality of their models on an ongoing basis. Validation can be considered from both a quantitative and a qualitative point of view. Backtesting and benchmarking are key quantitative validation tools, and the focus of this paper. In backtesting, the predicted risk measurements (PD, LGD, EAD) are contrasted with observed measurements using a workbench of available test statistics to evaluate the calibration, discrimination and stability of the model. Timely detection of reduced performance is crucial since it directly impacts profitability and risk management strategies. The aim of benchmarking is to compare internal risk measurements with external risk measurements so as to better gauge the quality of the internal rating system. This paper focuses on the quantitative PD validation process within a Basel II context. We set forth a traffic light indicator approach that employs all relevant statistical tests to quantitatively validate the PD model in use, and document this approach with a real-life case study. The methodology and tests set forth summarise the authors’ statistical expertise and experience of business practices observed world-wide.
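A minimal sketch of one common calibration test in such a traffic-light scheme: a one-sided binomial test per rating grade, colouring grades by p-value. The grade data and the green/yellow/red cut-offs are illustrative assumptions, not the paper's actual test battery.

```python
# Traffic-light PD calibration check via one-sided binomial tests (sketch).
from scipy.stats import binomtest

# Per grade: forecast PD, number of obligors, observed defaults (made-up data).
grades = {"A": (0.001, 5000, 9), "B": (0.01, 3000, 38), "C": (0.05, 1000, 49)}

for grade, (pd_forecast, n, defaults) in grades.items():
    # H0: true PD <= forecast PD; a small p-value flags underestimation.
    p = binomtest(defaults, n, pd_forecast, alternative="greater").pvalue
    colour = "green" if p > 0.05 else ("yellow" if p > 0.01 else "red")
    print(f"grade {grade}: observed rate {defaults/n:.4f}, p={p:.4f} -> {colour}")
```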

5.
Regulatory authorities pay considerable attention to setting minimum capital levels for different kinds of financial institutions. Solvency II, the European Commission’s planned reform of the regulation of insurance companies, is well underway. One of its consequences will be a shift in focus to internally based models for determining the regulatory capital needed to cover unexpected losses. This evolution emphasises the importance of credit risk assessment through internal ratings. In light of this new prudential regulation, this paper suggests a Basel II compliant approach to predicting credit ratings for non-rated corporations and evaluates its performance against external ratings. The paper models non-financial European companies rated by S&P. In developing the model, broad applicability is set as an important boundary condition. Even though the model developed is fairly simple and maintains a high level of granularity, it achieves high rates of accuracy and is highly interpretable.
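Rating prediction of this kind is often cast as ordinal classification from financial ratios. The sketch below fits an ordered logit with statsmodels; the ratio names, the rating scale and the model family are assumptions for illustration, not the paper's specification.

```python
# Ordered logit sketch for predicting rating classes from financial ratios.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 1500
ratios = pd.DataFrame({                     # illustrative ratios
    "leverage": rng.normal(size=n),
    "roa": rng.normal(size=n),
    "size": rng.normal(size=n),
})
latent = -ratios["leverage"] + ratios["roa"] + 0.5 * ratios["size"]
rating = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf],
                labels=["B", "BB", "BBB", "A"], ordered=True)

model = OrderedModel(rating, ratios, distr="logit")
res = model.fit(method="bfgs", disp=False)
probs = np.asarray(res.predict())           # class probabilities per firm
pred = probs.argmax(axis=1)
print("in-sample accuracy:", (pred == rating.cat.codes.to_numpy()).mean())
```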

6.
The internal-rating-based Basel II approach increases the need for the development of more realistic default probability models. In this paper, we follow the approach taken in McNeil and Wendin (J. Empirical Finance, 2007) by constructing generalized linear mixed models for estimating default probabilities from annual data on companies with different credit ratings. In contrast to McNeil and Wendin, the models considered allow parsimonious parametric models to capture dependencies of the default probabilities on time and credit ratings simultaneously. Macro-economic variables can also be included. Estimation of all model parameters is facilitated with a Bayesian approach using Markov chain Monte Carlo methods. Special emphasis is given to investigating the predictive capabilities of the models considered; in particular, predictive model specifications are used. The empirical study using default data from Standard and Poor's gives evidence that the correlation between credit ratings decreases as the ratings move further apart, and is higher than the correlation induced by the autoregressive time dynamics.
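A minimal Bayesian GLMM in this spirit, sketched in PyMC (a library choice of ours, not the paper's): rating-specific fixed effects plus a yearly random effect on the logit of the PD. The paper's models additionally impose autoregressive time dynamics, which this sketch omits; all data, priors and dimensions are illustrative.

```python
# Bayesian GLMM sketch: rating fixed effects + yearly random effect (PyMC).
import numpy as np
import pymc as pm

rng = np.random.default_rng(3)
n_years, n_ratings = 20, 4
exposures = rng.integers(200, 800, size=(n_years, n_ratings))
true_logit = np.array([-6.0, -5.0, -4.0, -3.0]) + rng.normal(0, 0.3, size=(n_years, 1))
defaults = rng.binomial(exposures, 1.0 / (1.0 + np.exp(-true_logit)))

with pm.Model():
    alpha = pm.Normal("alpha", mu=-4.0, sigma=2.0, shape=n_ratings)  # rating effects
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    year = pm.Normal("year", mu=0.0, sigma=sigma, shape=n_years)     # systematic year effect
    p = pm.math.invlogit(alpha[None, :] + year[:, None])
    pm.Binomial("d", n=exposures, p=p, observed=defaults)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)

print(float(idata.posterior["sigma"].mean()))   # size of the systematic year effect
```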

7.
One of the issues highlighted by the Basel Accord was that, though techniques for estimating the probability of default, and hence the credit risk of loans to individual consumers, are well established, there were no models for the credit risk of portfolios of such loans. Motivated by the reduced-form models for credit risk in corporate lending, we seek to exploit the obvious parallels between behavioural scores and the ratings ascribed to corporate bonds to build consumer-lending equivalents. We incorporate both consumer-specific ratings and macroeconomic factors in the framework of Cox proportional hazards models. Our results show that the default intensities of consumers are significantly influenced by macro factors. Such models can then be used as the basis for simulation approaches to estimate the credit risk of portfolios of consumer loans.
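A sketch of the modelling idea with the lifelines library (a choice of ours, not the authors'): a Cox model on consumer spells in long format, with the behavioural score and a macro factor as time-varying covariates. The data layout, hazard link and column names are illustrative.

```python
# Cox PH with time-varying behavioural score and macro covariate (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(4)
rows = []
for cid in range(500):                      # one row per consumer-quarter spell
    score = rng.normal(600, 50)
    for q in range(8):
        macro = np.sin(q / 3) + rng.normal(0, 0.1)   # e.g. unemployment proxy
        haz = 1 / (1 + np.exp(0.02 * (score - 600) - macro + 4))
        event = rng.random() < haz
        rows.append((cid, q, q + 1, score, macro, int(event)))
        if event:
            break
        score += rng.normal(0, 10)          # behavioural score drifts over time
df = pd.DataFrame(rows, columns=["id", "start", "stop", "score", "macro", "default"])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="default", start_col="start", stop_col="stop")
ctv.print_summary()                         # macro coefficient drives default intensity
```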

8.
Credit risk models are commonly based on large internal data sets to produce reliable estimates of the probability of default (PD) that should be validated over time. In the real world, however, a substantial portion of the exposures is included in low-default portfolios (LDPs), in which the number of defaulted loans is usually much lower than the number of non-default observations. Modelling these imbalanced data sets is particularly problematic with small portfolios, in which the absence of information increases the specification error. Sovereigns, banks and specialised retail exposures are recent examples of post-crisis portfolios with insufficient data for PD estimates, which require specific tools for risk quantification and validation. This paper explores the suitability of cooperative strategies for managing such scarce LDPs: in addition to statistical and machine-learning classifiers, it examines cooperative models and bootstrapping strategies for default prediction and multi-grade PD setting, using two real-world consumer credit data sets. Performance is assessed in terms of out-of-sample and out-of-time discriminatory power, PD calibration, and stability. The results indicate that combinational approaches based on correlation-adjusted strategies are promising techniques for managing sparse LDPs and providing accurate and well-calibrated credit risk estimates.
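One simple bootstrapping strategy of the kind examined above, sketched with scikit-learn: average PDs from logistic models fitted on balanced bootstrap resamples of a low-default data set. The resampling scheme and model are illustrative; the paper's correlation-adjusted combination is more elaborate.

```python
# Balanced-bootstrap ensemble sketch for a low-default portfolio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + rng.normal(size=5000) > 3).astype(int)   # very few defaults
idx_def, idx_good = np.where(y == 1)[0], np.where(y == 0)[0]

pds = np.zeros(len(y))
n_models = 25
for _ in range(n_models):
    # Resample all defaulters with replacement plus an equal-sized good sample.
    boot = np.concatenate([rng.choice(idx_def, len(idx_def), replace=True),
                           rng.choice(idx_good, len(idx_def), replace=True)])
    model = LogisticRegression().fit(X[boot], y[boot])
    pds += model.predict_proba(X)[:, 1]
pds /= n_models              # ensemble PD; still needs re-calibration to the
print(pds.mean(), y.mean())  # portfolio default rate before multi-grade use
```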

9.
Fierce competition, as well as the recent financial crisis in the financial and banking industries, has made credit scoring gain in importance. An accurate estimation of credit risk helps organizations to decide whether or not to grant credit to potential customers. Many classification methods have been suggested in the literature to handle this problem. This paper proposes a model for evaluating credit risk based on binary quantile regression, using Bayesian estimation. The paper points out the distinct advantages of this approach: (i) the method provides accurate predictions of which customers may default in the future; (ii) the approach provides detailed insight into the effects of the explanatory variables on the probability of default; and (iii) the methodology is ideally suited to building a segmentation scheme of the customers in terms of risk of default and the corresponding uncertainty about the prediction. An often-studied dataset from a German bank is used to show the applicability of the proposed method. The results demonstrate that the methodology can be an important tool for credit companies that want to take the credit risk of their customers fully into account.
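To convey the segmentation-with-uncertainty idea, here is a Bayesian logistic regression sketch in PyMC. It is plainly a stand-in: the paper uses binary quantile regression, which replaces the logistic likelihood with an asymmetric-Laplace-based latent formulation. Data and segment cut-offs are illustrative.

```python
# Bayesian logistic sketch: segment customers by posterior PD and uncertainty.
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
X = rng.normal(size=(800, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=800) > 1).astype(int)

with pm.Model():
    beta = pm.Normal("beta", 0, 2, shape=3)
    intercept = pm.Normal("intercept", 0, 2)
    p = pm.math.invlogit(intercept + pm.math.dot(X, beta))
    pm.Bernoulli("obs", p=p, observed=y)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)

# Posterior draws of each customer's PD -> mean (risk) and sd (uncertainty).
b = idata.posterior["beta"].stack(s=("chain", "draw")).values       # (3, n_draws)
a = idata.posterior["intercept"].stack(s=("chain", "draw")).values  # (n_draws,)
pd_draws = 1 / (1 + np.exp(-(X @ b + a)))                           # (800, n_draws)
risk, unc = pd_draws.mean(1), pd_draws.std(1)
segment = (risk > 0.5).astype(int) * 2 + (unc > np.median(unc)).astype(int)
print(np.bincount(segment))   # 4 segments: low/high risk x low/high uncertainty
```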

10.
In credit scoring, low-default portfolios (LDPs) are those for which very little default history exists. This makes it problematic for financial institutions to estimate a reliable probability of a customer defaulting on a loan. Banking regulation (the Basel II Capital Accord) and best practice, however, necessitate an accurate and valid estimate of the probability of default. In this article the suitability of semi-supervised one-class classification (OCC) algorithms as a solution to the LDP problem is evaluated. The performance of OCC algorithms is compared with that of supervised two-class classification algorithms. The study also investigates the suitability of oversampling, a common approach to dealing with LDPs. The performance of one- and two-class classification algorithms is assessed using nine real-world banking data sets that have been modified to replicate LDPs. Our results demonstrate that only in the near or complete absence of defaulters should semi-supervised OCC algorithms be used instead of supervised two-class classification algorithms. Furthermore, we demonstrate that for data sets whose class labels are unevenly distributed, optimising the threshold value on the classifier output yields, in many cases, an improvement in classification performance. Finally, our results suggest that oversampling produces no overall improvement to the best-performing two-class classification algorithms.
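A minimal contrast of the two families compared above, using scikit-learn: a one-class SVM trained only on non-defaulters versus a two-class logistic model, both scored by AUC. Data and hyperparameters are illustrative.

```python
# One-class vs two-class classification on an imbalanced (LDP-like) data set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
X = rng.normal(size=(4000, 4))
y = (X[:, 0] + X[:, 1] + rng.normal(size=4000) > 3.5).astype(int)  # rare defaults
X_tr, y_tr, X_te, y_te = X[:3000], y[:3000], X[3000:], y[3000:]

# Semi-supervised OCC: fit on non-defaulters only; low scores = outliers/defaults.
occ = OneClassSVM(nu=0.05, gamma="scale").fit(X_tr[y_tr == 0])
auc_occ = roc_auc_score(y_te, -occ.decision_function(X_te))

# Supervised two-class benchmark using both classes.
clf = LogisticRegression().fit(X_tr, y_tr)
auc_two = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"OCC AUC={auc_occ:.3f}, two-class AUC={auc_two:.3f}")
```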

11.
Survival analysis can be applied to build models for time to default on debt. In this paper, we report an application of survival analysis to model default on a large data set of credit card accounts. We explore the hypothesis that the probability of default (PD) is affected by general conditions in the economy over time. These macroeconomic variables (MVs) cannot readily be included in logistic regression models; survival analysis, however, provides a framework for their inclusion as time-varying covariates. Various MVs, such as the interest rate and the unemployment rate, are included in the analysis. We show that the inclusion of these indicators improves model fit and affects PD, yielding a modest improvement in predictions of default on an independent test set.
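The practical step that makes time-varying macro covariates possible is reshaping account histories into start/stop episodes joined to the macro series. A small pandas sketch with made-up account and macro tables; all column names are illustrative.

```python
# Episode-split account histories and merge monthly macro covariates (pandas).
import pandas as pd

accounts = pd.DataFrame({          # one row per account (made-up data)
    "id": [1, 2],
    "open_month": [0, 1],
    "end_month": [3, 4],           # month of default or censoring
    "default": [1, 0],
})
macro = pd.DataFrame({"month": range(5),
                      "unemployment": [5.0, 5.2, 5.5, 6.1, 6.4],
                      "interest_rate": [4.0, 4.0, 4.25, 4.5, 4.5]})

# One row per account-month at risk; the event flag fires in the final month.
episodes = []
for _, a in accounts.iterrows():
    for m in range(a["open_month"], a["end_month"]):
        event = int(a["default"] and m == a["end_month"] - 1)
        episodes.append({"id": a["id"], "start": m, "stop": m + 1, "event": event})
long_df = pd.DataFrame(episodes).merge(macro, left_on="start", right_on="month")
print(long_df)   # ready for a time-varying survival model fit
```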

12.
Behavioural scoring models are generally used to estimate the probability that a customer of a financial institution who owns a credit product will default on this product within a fixed time horizon. However, a single customer usually holds many credit products from an institution, while behavioural scoring models generally treat each of these products independently. In order to make credit risk management easier and more efficient, it is of interest to develop customer default scoring models, which estimate the probability that a customer of a financial institution will have credit issues with at least one product within a fixed time horizon. In this study, three strategies for developing customer default scoring models are described: one regularly used by financial institutions, and two proposed herein. The performance of these strategies is compared using a real data set supplied by a financial institution and a Monte Carlo simulation study.
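The basic construct behind customer-level default scoring is the aggregated target: a customer is flagged if any of their products defaults. A minimal pandas sketch with illustrative column names.

```python
# Build a customer-level default target from product-level records (pandas).
import pandas as pd

products = pd.DataFrame({       # one row per (customer, product); made-up data
    "customer_id": [1, 1, 2, 2, 2, 3],
    "product":     ["card", "loan", "card", "loan", "overdraft", "card"],
    "default_12m": [0, 1, 0, 0, 0, 1],
})

# Strategy: a customer defaults if at least one of their products defaults.
customer_target = (products.groupby("customer_id")["default_12m"]
                   .max()
                   .rename("customer_default_12m"))
print(customer_target)
# Customer-level features (e.g. product counts, total exposure) would then be
# joined to this target to train the customer default scoring model.
```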

13.
We propose a structural credit risk model for consumer lending using option theory and the concept of the value of the consumer’s reputation. Using Brazilian empirical data, with a credit bureau score as a proxy for creditworthiness, we compare a number of alternative models before suggesting one that leads to a simple analytical solution for the probability of default. We apply the proposed model to portfolios of consumer loans, introducing a factor to account for the mean influence of systemic economic factors on individuals; this results in a hybrid structural/reduced-form model. Comparisons are made with the Basel II approach. Our conclusions partially support that approach for modelling the credit risk of portfolios of retail credit.
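For orientation, the classical structural (Merton-style) default probability that such models build on, with the consumer's reputation value playing the role of the firm's asset value: default occurs when the value falls below the debt threshold at the horizon. The parameter values are illustrative, and the paper's analytical solution differs in its specifics.

```python
# Merton-style structural PD: P(V_T < D) under lognormal value dynamics.
from math import log, sqrt
from scipy.stats import norm

def structural_pd(V0, D, mu, sigma, T):
    """PD = N(-d2), d2 = [ln(V0/D) + (mu - sigma^2/2) T] / (sigma sqrt(T))."""
    d2 = (log(V0 / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm.cdf(-d2)

# Illustrative numbers: reputation value 1.2x the debt level, 1-year horizon.
print(structural_pd(V0=1.2, D=1.0, mu=0.05, sigma=0.3, T=1.0))  # ~0.27
```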

14.
In this document a method is discussed to incorporate stochastic loss given default (LGD) in factor models, i.e. structural models for credit risk. The general idea is to introduce a common dependence of the LGD and the probability of default (PD) on a latent variable representing the systemic risk. Though the theory can be applied to any firm-value model and any underlying distribution for the LGD, provided its support is a compact subset of [0,1], special attention is given to the extension of the well-known cases of the Gaussian copula framework and the shifted Gamma one-factor model (a particular case of the generic one-factor Lévy model), with the LGD modelled by a Beta distribution, in accordance with rating agency models and the CreditMetrics model. In order to introduce stochastic LGD, a monotonically decreasing relation is derived between the loss rate L, i.e. the loss as a percentage of the total exposure, and the standardized log-return R of the obligor’s asset value, which is assumed to be a function of one or more systematic and idiosyncratic risk factors. The decreasing relation guarantees that the LGD is negatively correlated with R and hence positively correlated with the default rate. From this relation, expressions are derived for the cumulative distribution function (CDF) and the expected value of the loss rate and the LGD, conditional on a realization of the systematic risk factor(s). All results are derived under the large homogeneous portfolio (LHP) assumption and are fully consistent with the IRB approach outlined by the Basel II Capital Accord. We demonstrate the impact of incorporating stochastic LGD, and of using models based on skewed and fat-tailed distributions, on the determination of adequate capital requirements, and we also briefly explore the potential application of the proposed framework in a credit risk environment. It turns out that both building blocks, i.e. stochastic LGD and fat-tailed distributions, separately increase the projected loss and thus the required capital charge; hence a model based on a fat-tailed underlying distribution that also accounts for stochastic LGD will lead to sound capital requirements.
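A compact Monte Carlo sketch of the Gaussian one-factor case with PD and LGD linked through the systematic factor, under the LHP assumption: conditional on the factor Z, the default rate is the usual Vasicek expression, and the LGD is taken as a Beta quantile driven by the same Z so that the two rise together in bad states. The link function and all parameters are illustrative assumptions, not the paper's derivation.

```python
# LHP loss sketch: Gaussian one-factor PD with systematically linked Beta LGD.
import numpy as np
from scipy.stats import beta, norm

pd_uncond, rho, lgd_a, lgd_b = 0.02, 0.15, 2.0, 5.0   # illustrative parameters
rng = np.random.default_rng(8)
Z = rng.standard_normal(100_000)                      # systematic factor draws

# Vasicek conditional default rate given Z (bad states: Z very negative).
cond_dr = norm.cdf((norm.ppf(pd_uncond) - np.sqrt(rho) * Z) / np.sqrt(1 - rho))
# Link LGD to the same factor: worse systematic states -> higher LGD quantile.
cond_lgd = beta.ppf(norm.cdf(-Z), lgd_a, lgd_b)
loss = cond_dr * cond_lgd                             # conditional portfolio loss

for label, l in [("stochastic LGD", loss),
                 ("fixed LGD", cond_dr * beta.mean(lgd_a, lgd_b))]:
    print(f"{label}: EL={l.mean():.4%}, 99.9% VaR={np.quantile(l, 0.999):.4%}")
```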

15.
In the consumer credit industry, assessment of default risk is critically important for the financial health of both the lender and the borrower. Methods for predicting an applicant's risk using credit bureau and application data, typically based on logistic regression or survival analysis, are universally employed by credit card companies. Because the predictive models are fit on large historical sets of existing customer data extending over many years, default trends, anomalies, and other temporal phenomena that result from dynamic economic conditions are not brought to light. We introduce a modification of the proportional hazards survival model that includes a time-dependency mechanism for capturing temporal phenomena, and we develop a maximum likelihood algorithm for fitting the model. Using a very large real data set, we demonstrate that incorporating the time dependency can provide more accurate risk scoring, as well as important insight into dynamic market effects that can inform and enhance related decision making.
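One simple way to realise a time-dependency mechanism of this kind is a discrete-time hazard model with calendar-period effects, fitted by maximum likelihood. The sketch below uses scipy.optimize on synthetic account-period data; the likelihood and covariates are illustrative, not the paper's formulation.

```python
# Discrete-time hazard MLE with calendar-period effects (illustrative).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 20_000
score = rng.normal(size=n)                    # applicant risk score
period = rng.integers(0, 4, size=n)           # calendar quarter of observation
true_period_eff = np.array([0.0, 0.3, 0.8, 0.2])
logit = -4 + 0.7 * score + true_period_eff[period]
event = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def neg_loglik(theta):
    b0, b1, *gamma = theta                    # gamma: quarters 1-3 vs baseline 0
    g = np.concatenate(([0.0], gamma))
    eta = b0 + b1 * score + g[period]
    p = 1 / (1 + np.exp(-eta))
    return -np.sum(event * np.log(p) + (1 - event) * np.log(1 - p))

res = minimize(neg_loglik, x0=np.zeros(5), method="BFGS")
print("estimates:", np.round(res.x, 2))       # recovers b0, b1, period effects
```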

16.
This paper analyzes the level and cyclicality of regulatory bank capital for asset portfolio securitizations in relation to the cyclicality of capital requirements for the underlying loan portfolio under Basel II/III. We find that the cyclicality of capital requirements is higher (i) for asset portfolio securitizations relative to primary loan portfolios, (ii) for the Ratings Based Approach (RBA) relative to the Supervisory Formula Approach, (iii) within the RBA, for a point-in-time rating methodology relative to a rate-and-forget rating methodology, and (iv) under the passive reinvestment rule relative to alternative rules. Capital requirements of the individual tranches reveal that the volatility of aggregated capital charges for the securitized portfolio is driven by the most senior tranches, owing to the fact that senior tranches are more sensitive to the macroeconomy. An empirical analysis provides evidence that current credit ratings are time-constant and that economic losses for securitizations exceeded the required capital in the recent financial crisis.

17.
Traditional aggregate credit risk models assume, in measuring credit risk, that the loss given default is fixed; recent empirical work shows, however, that in real financial markets the loss given default varies. To address this shortcoming of the traditional models, this paper explicitly accounts for the variation in loss severity at default, characterising it through a credit rating transition matrix and a default-risk-adjusted short-term interest rate, and measures credit risk using Panjer's recursive algorithm for the distribution of total claims. This improves on the probability generating function algorithms of references [4,5] and develops the model further.
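Panjer's recursion, as used above, computes the aggregate loss distribution of a compound claim-count model from the discretised severity distribution; for Poisson counts it reads g_s = (λ/s) Σ_{j=1..s} j f_j g_{s−j} with g_0 = exp(λ(f_0 − 1)). A minimal sketch with an illustrative count intensity and severity.

```python
# Panjer recursion for a compound Poisson aggregate loss distribution.
import numpy as np

def panjer_poisson(lam, severity, max_s=200):
    """severity[j] = P(single loss = j units); returns g[s] = P(S = s)."""
    f = np.zeros(max_s + 1)
    f[:len(severity)] = severity
    g = np.zeros(max_s + 1)
    g[0] = np.exp(lam * (f[0] - 1.0))             # P(S = 0)
    for s in range(1, max_s + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

# Illustrative: 3 expected defaults; loss of 1, 2 or 5 units per default.
g = panjer_poisson(lam=3.0, severity=[0.0, 0.5, 0.3, 0.0, 0.0, 0.2])
print("mean loss:", (np.arange(len(g)) * g).sum())   # ~ 3 * E[severity] = 6.3
```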

18.
In banking, the default behaviour of the counterpart is of interest not only for the pricing of transactions under credit risk but also for the assessment of portfolio credit risk. We develop a test against the hypothesis that default intensities are constant over time within a group of similar counterparts, e.g. a rating class. The Kolmogorov–Smirnov-type test builds on the asymptotic normality of counting processes in event history analysis. Right censoring accommodates Markov processes with more than one non-absorbing state. A simulation study and two examples of rating systems demonstrate that partial homogeneity can be assumed; occasionally, however, certain migrations must be modelled and estimated inhomogeneously.
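A simplified version of the test idea: under a constant default intensity, event times within an observation window are uniform given their number, so a Kolmogorov–Smirnov test against the uniform distribution can flag time-inhomogeneity. Sketch with scipy; the paper's counting-process test additionally handles censoring and multiple states.

```python
# KS check of chronologically constant default intensity (simplified).
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(10)
T = 10.0                                   # years observed for a rating class

# Under H0 (homogeneous Poisson), event times given their count are uniform.
constant_times = rng.uniform(0, T, size=60)
# An inhomogeneous alternative: intensity rising over time (events cluster late).
rising_times = T * rng.power(3, size=60)   # density ~ t^2 on [0, T]

for label, times in [("constant", constant_times), ("rising", rising_times)]:
    stat, p = kstest(times / T, "uniform")
    print(f"{label} intensity: KS={stat:.3f}, p={p:.4f}")
```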

19.
Building a credit risk model for rural credit cooperatives is of great significance for improving the rural financial risk management system and the operation and management of rural credit cooperatives. Starting from both willingness to repay and ability to repay, this paper systematically analyses the main factors influencing the default probability of rural credit cooperative borrowers, builds a logistic model on this basis to predict borrower default probability, and validates the model's discrimination and identification power by means of the Gini coefficient. The empirical results show that the borrower's age, region, ratio of loan amount to household income, closeness of the credit relationship with the cooperative and household registration status are all significant; the default prediction model discriminates defaults well both in-sample and out-of-sample, and can thus provide a solid reference for the cooperatives' pre-lending borrower credit assessment, loan granting and risk management.
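The model-plus-validation pipeline described above, in a minimal scikit-learn sketch: a logistic PD model assessed with the Gini coefficient (Gini = 2·AUC − 1). Feature names and data are illustrative.

```python
# Logistic default model validated with the Gini coefficient (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 3000
X = np.column_stack([rng.normal(size=n),        # age (standardised)
                     rng.normal(size=n),        # loan-to-income ratio
                     rng.integers(0, 2, n)])    # credit relationship flag
y = (0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
for label, Xs, ys in [("in-sample", X_tr, y_tr), ("out-of-sample", X_te, y_te)]:
    gini = 2 * roc_auc_score(ys, model.predict_proba(Xs)[:, 1]) - 1
    print(f"{label} Gini: {gini:.3f}")
```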

20.
The contagion credit risk model is used to describe the contagion effect among different financial institutions. Under such a model, the default intensities are driven not only by common risk factors but also by the defaults of the other firms considered. In this paper, we consider a two-dimensional credit risk model with contagion and regime switching. We assume that the default intensity of one firm jumps when the other firm defaults, and that before the other firm's default the intensity follows a Vasicek model whose coefficients are allowed to switch between regimes. By a change of measure, we derive the marginal distributions and the joint distribution of the default times. We obtain closed-form results for pricing the fair spreads of first- and second-to-default credit default swaps (CDSs). Numerical results are presented to show the impact of the model parameters on the fair spreads.
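A bare-bones Monte Carlo of the contagion mechanism, using constant intensities instead of the paper's regime-switching Vasicek dynamics (a deliberate simplification): each firm's intensity jumps when the other defaults, and the fair spread is approximated as expected discounted loss over expected discounted premium annuity. All parameters are illustrative.

```python
# Monte Carlo sketch: two-firm contagion, first- and second-to-default spreads.
import numpy as np

rng = np.random.default_rng(12)
lam1, lam2, jump = 0.02, 0.03, 0.04      # base intensities and contagion jump
T, r, rec = 5.0, 0.03, 0.4               # maturity, rate, recovery
legs = {"first": [0.0, 0.0], "second": [0.0, 0.0]}   # [loss leg, annuity]

for _ in range(200_000):
    t1 = rng.exponential(1 / lam1)
    t2 = rng.exponential(1 / lam2)
    first = min(t1, t2)
    # Contagion: the survivor's intensity jumps by `jump` after the first
    # default; by memorylessness we redraw its remaining lifetime.
    second = first + rng.exponential(1 / ((lam2 if t1 < t2 else lam1) + jump))
    for name, tau in (("first", first), ("second", second)):
        if tau < T:
            legs[name][0] += (1 - rec) * np.exp(-r * tau)
        legs[name][1] += (1 - np.exp(-r * min(tau, T))) / r  # premium annuity

for name, (loss, annuity) in legs.items():
    print(f"{name}-to-default spread ~ {1e4 * loss / annuity:.1f} bp")
```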
