Similar Documents
20 similar documents were retrieved (search time: 382 ms).
1.
Loss given default (LGD) models predict losses as a proportion of the outstanding loan, in the event a debtor goes into default. The literature on corporate sector LGD models suggests that LGD is correlated with the economy, so changes in the economy could translate into different predictions of losses. In this work, the role of macroeconomic variables in loan-level retail LGD models is examined by testing the inclusion of macroeconomic variables in two different retail LGD models: a two-stage model for a residential mortgage loans data set and an ordinary least squares model for an unsecured personal loans data set. To improve loan-level predictions of LGD, indicators relating to the macroeconomy are considered, with mixed results: the selected macroeconomic variable seemed able to improve the predictive performance of mortgage loan LGD estimates, but not of personal loan LGD estimates. For mortgage loan LGD, the interest rate was the most beneficial indicator, but it only predicted better during downturn periods, underestimating LGD during non-downturn periods. For personal loan LGD, only net lending growth was statistically significant, and including it brought no improvement in R².
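As a rough illustration of the modelling set-up described above (not the authors' actual models or data), the following sketch adds a single macroeconomic covariate to a loan-level OLS LGD regression and compares R² with and without it; all variable names and the synthetic data generator are assumptions.

```python
# Minimal sketch: effect of a macroeconomic covariate on a loan-level OLS LGD fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000
ltv = rng.uniform(0.3, 1.2, n)             # loan-to-value at default (illustrative)
interest_rate = rng.normal(0.05, 0.02, n)  # macro indicator (e.g. base rate), assumed
lgd = np.clip(0.1 + 0.4 * ltv + 2.0 * interest_rate + rng.normal(0, 0.15, n), 0, 1)

X_base = ltv.reshape(-1, 1)
X_macro = np.column_stack([ltv, interest_rate])

r2_base = r2_score(lgd, LinearRegression().fit(X_base, lgd).predict(X_base))
r2_macro = r2_score(lgd, LinearRegression().fit(X_macro, lgd).predict(X_macro))
print(f"R^2 without macro variable: {r2_base:.3f}, with macro variable: {r2_macro:.3f}")
```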

2.
On the basis of two data sets containing Loss Given Default (LGD) observations of home equity and corporate loans, we consider non-linear and non-parametric techniques to model and forecast LGD. These techniques include non-linear Support Vector Regression (SVR), a regression tree, a transformed linear model and a two-stage model combining a linear regression with SVR. We compare these models with an ordinary least squares linear regression. In addition, we incorporate several variants of 11 macroeconomic indicators to estimate the influence of the economic state on loan losses. The out-of-time set-up is complemented with an out-of-sample set-up to mitigate the limited number of credit crisis observations available in credit risk data sets. The two-stage/transformed model outperforms the other techniques when forecasting out-of-time for the home equity/corporate data set, while the non-parametric regression tree is the best performer when forecasting out-of-sample. The incorporation of macroeconomic variables significantly improves the prediction performance. The downturn impact ranges up to 5% depending on the data set and the macroeconomic conditions defining the downturn. These conclusions can help financial institutions when estimating LGD under the internal ratings-based approach of the Basel Accords in order to estimate the downturn LGD needed to calculate the capital requirements. Banks are also required as part of stress test exercises to assess the impact of stressed macroeconomic scenarios on their Profit and Loss (P&L) and banking book, which favours the accurate identification of relevant macroeconomic variables driving LGD evolutions.
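The benchmark described above can be sketched roughly as follows: an illustrative comparison of OLS, non-linear SVR and a regression tree on synthetic LGD-style data. This is not the paper's data, feature set or tuning; the data generator and hyperparameters are assumptions.

```python
# Illustrative model comparison on synthetic LGD-style data (not the paper's benchmark).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 4))   # loan-level and macro features, assumed
y = np.clip(0.4 + 0.2 * np.tanh(X[:, 0]) - 0.1 * X[:, 1] ** 2
            + 0.05 * X[:, 2] + rng.normal(0, 0.1, 3000), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
models = {
    "OLS": LinearRegression(),
    "SVR (RBF)": SVR(kernel="rbf", C=1.0, epsilon=0.01),
    "Regression tree": DecisionTreeRegressor(max_depth=5, random_state=1),
}
for name, model in models.items():
    mse = mean_squared_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:16s} out-of-sample MSE = {mse:.4f}")
```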

3.
Internal estimates of Loss Given Default (LGD) must reflect economic downturn conditions, that is, they must estimate the “downturn LGD”, as the new Basel Capital Accord (Basel II) establishes. We suggest a methodology for estimating the downturn LGD distribution that overcomes the arbitrariness of the methods suggested by Basel II. We assume that LGD is a mixture of an expansion distribution and a recession distribution. In this work, we propose an accurate parametric model for LGD and estimate its parameters by the EM algorithm. Finally, we apply the proposed model to empirical data on Italian bank loans.
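A minimal sketch of the mixture idea: a two-component mixture fitted by EM, with a Gaussian mixture standing in for the paper's parametric expansion/recession components (which this abstract does not specify). The synthetic LGD data and the component family are assumptions.

```python
# Two-component "expansion vs. recession" mixture fitted by EM (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
lgd = np.concatenate([rng.beta(2, 8, 700),    # milder losses (expansion regime)
                      rng.beta(6, 3, 300)])   # heavier losses (recession regime)

gm = GaussianMixture(n_components=2, random_state=0).fit(lgd.reshape(-1, 1))
print("mixture weights:", gm.weights_.round(3))
print("component means:", gm.means_.ravel().round(3))
# Posterior probability that each observation belongs to the heavier-loss regime:
recession_prob = gm.predict_proba(lgd.reshape(-1, 1))[:, gm.means_.argmax()]
```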

4.
In this article we study penalized regression splines (P-splines), which are low-order basis splines with a penalty to avoid undersmoothing. Such P-splines are typically not spatially adaptive, and hence can have trouble when functions are varying rapidly. Our approach is to model the penalty parameter inherent in the P-spline method as a heteroscedastic regression function. We develop a full Bayesian hierarchical structure to do this and use Markov chain Monte Carlo techniques for drawing random samples from the posterior for inference. The advantage of using a Bayesian approach to P-splines is that it allows for simultaneous estimation of the smooth functions and the underlying penalty curve in addition to providing uncertainty intervals of the estimated curve. The Bayesian credible intervals obtained for the estimated curve are shown to have pointwise coverage probabilities close to nominal. The method is extended to additive models with simultaneous spline-based penalty functions for the unknown functions. In simulations, the approach achieves very competitive performance with the current best frequentist P-spline method in terms of frequentist mean squared error and coverage probabilities of the credible intervals, and performs better than some of the other Bayesian methods.
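The P-spline building block can be sketched as follows: a cubic B-spline basis with a second-order difference penalty, fitted by penalized least squares with one global penalty λ. The article's contribution, a Bayesian hierarchical model in which the penalty itself is a heteroscedastic regression function estimated by MCMC, is not reproduced here; basis size, λ and the test function are assumptions.

```python
# Frequentist P-spline fit with a single global penalty (illustration of the building block).
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_basis=20, degree=3):
    """Evaluate a clamped cubic B-spline basis over the range of x."""
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    knots = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]
    return np.column_stack([
        BSpline(knots, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)
    ])

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(8 * x) + rng.normal(0, 0.3, 300)

B = bspline_basis(x)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-order difference penalty matrix
lam = 5.0                                      # one global penalty; the paper lets this vary
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fitted_curve = B @ coef
```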

5.
The New Basel Accord, which was implemented in 2007, has made a significant difference to the use of modelling within financial organisations. In particular, it has highlighted the importance of Loss Given Default (LGD) modelling. We propose a decision tree approach to modelling LGD for unsecured consumer loans in which the uncertainty in some of the nodes is modelled using a mixture model whose parameters are obtained by regression. A case study based on default data from the in-house collections department of a UK financial organisation is used to show how such regression can be undertaken.

6.
By comparing the default characteristics of residential mortgage loans and automobile consumer loans, this paper shows that, in an environment where interest rates are set uniformly by the central bank, applying the same down-payment policy to mortgage loans and auto loans is unreasonable, and that this is a reason why the default rate on auto consumer loans remains persistently high. For residential mortgage loans, a bank can moderately lower the down payment to strengthen the competitiveness of its mortgage lending in the market; for auto consumer loans, raising the down-payment requirement can lower the default rate and control the risk of these collateralized loans.

7.
A Principal Factor Analysis of Non-Performing Corporate Loans at State-Owned Commercial Banks
Starting from a review of the existing research on the causes of banks' non-performing corporate loans, this paper investigates the main factors behind non-performing corporate loans in the current operating environment. Based on interviews and questionnaires with middle and senior managers in the lending departments of state-owned commercial banks, an exploratory factor analysis model is used to extract and interpret the principal factors causing non-performing loans. Finally, a hierarchical framework of the causes of banking non-performing loans is constructed to support the credit-granting decisions of state-owned commercial banks.
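For illustration only, a minimal exploratory-factor-analysis step on synthetic questionnaire responses; the item set, the number of factors and the rotation choice are assumptions, not those of the paper (the varimax rotation option assumes a reasonably recent scikit-learn).

```python
# Exploratory factor analysis on synthetic survey responses (illustrative sketch).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n_respondents, n_items = 200, 8
latent = rng.normal(size=(n_respondents, 2))          # two underlying causes, assumed
loadings = rng.normal(size=(2, n_items))
responses = latent @ loadings + rng.normal(0, 0.5, (n_respondents, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
print("rotated loadings (items x factors):")
print(fa.components_.T.round(2))
```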

8.
In applications involving count data, it is common to encounter an excess number of zeros. In the study of outpatient service utilization, for example, the number of utilization days will take on integer values, with many subjects having no utilization (zero values). Mixed-distribution models, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB), are often used to fit such data. A more general class of mixture models, called hurdle models, can be used to model zero-deflation as well as zero-inflation. Several authors have proposed frequentist approaches to fitting zero-inflated models for repeated measures. We describe a practical Bayesian approach which incorporates prior information, has optimal small-sample properties, and allows for tractable inference. The approach can be easily implemented using standard Bayesian software. A study of psychiatric outpatient service use illustrates the methods.
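A minimal sketch of the zero-inflated Poisson likelihood for counts with excess zeros, fitted here by plain maximum likelihood on synthetic data; the article's Bayesian repeated-measures treatment (priors, covariates, standard Bayesian software) is not shown.

```python
# ZIP likelihood: P(0) = pi + (1-pi)exp(-lam), P(y) = (1-pi)Poisson(y; lam) for y > 0.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(5)
n = 1000
structural_zero = rng.random(n) < 0.3              # "never users", assumed share
counts = np.where(structural_zero, 0, rng.poisson(2.5, n))

def zip_negloglik(params, y):
    pi, lam = expit(params[0]), np.exp(params[1])  # keep pi in (0,1) and lam > 0
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="BFGS")
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"estimated zero-inflation = {pi_hat:.2f}, Poisson mean = {lam_hat:.2f}")
```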

9.
This paper presents a mathematical model, based on probabilistic inventory models, for determining the optimal amount of short-term commercial bank loans to the corporate sector in Slovenia. The goal of this paper is the complete optimisation of the cash inventories of the corporate sector in the national economy. The results of the optimisation are important for the corporate sector and for commercial banks. The optimal order quantity is the amount of short-term commercial bank loans to corporates and defines the lending potential of commercial banks in a national economy; as such, it is important for the central bank when conducting monetary policy. Special emphasis is given to the determinants of the optimal order quantity, which reflect market conditions in the national economy.
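For orientation, the classic single-period probabilistic inventory (newsvendor) rule on which such models rest sets the optimal quantity at the critical fractile, Q* = F⁻¹(c_u / (c_u + c_o)). The sketch below evaluates it for assumed, illustrative cost parameters and a normal demand distribution, not the paper's calibration.

```python
# Critical-fractile rule of a single-period probabilistic inventory model (illustrative).
from scipy.stats import norm

mu, sigma = 100.0, 20.0        # corporate sector's short-term cash demand, assumed units
underage_cost = 0.06           # margin lost per unit of unmet loan demand (assumption)
overage_cost = 0.02            # cost per unit of excess lending capacity (assumption)

critical_fractile = underage_cost / (underage_cost + overage_cost)
optimal_loan_volume = norm.ppf(critical_fractile, loc=mu, scale=sigma)
print(f"critical fractile = {critical_fractile:.2f}, "
      f"optimal short-term loan volume = {optimal_loan_volume:.1f}")
```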

10.
In a buyer's market for loans, or in a fully competitive financial environment, the lending rate cannot be dictated by the bank alone, so building a loan pricing model that both the bank and the borrowing firm can accept is of real practical importance. This paper uses interval numbers to capture the uncertainty in pricing indicators such as the deposit interest expense rate and the default risk compensation rate. Taking as the target the loan pricing efficiency interval formed by the minimum and maximum pricing efficiencies of settled loans, and treating the interest rate of a new loan as the decision variable, the loan rate interval for the new loan is derived by solving an interval-number DEA model in reverse, yielding a loan pricing model based on interval DEA. The contributions are threefold. First, using the deposit interest expense rate, target profit rate and similar indicators of settled loans as inputs, and their loan rates as outputs, the DEA model is used to obtain the actual minimum and maximum efficiencies of the settled loans. Second, taking the pricing efficiency interval acceptable to both bank and firm as the target, and the interval-valued loan costs of the new loan (such as its deposit interest expense rate) as inputs, the admissible interval of the new loan's interest rate is derived in reverse. Third, expressing the uncertainty of pricing indicators such as the default risk compensation rate and the target profit rate as interval numbers corrects the unreasonable practice in existing research of treating target profit, loan expenses and default losses as constants when pricing. The results show that the deposit interest expense rate, expense rate, default risk compensation rate and target profit rate are all positively related to the loan rate, and that a firm can lower its loan rate by raising the share of its settlement funds and its deposit-to-loan ratio at the lending bank.
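Only the forward DEA step (scoring settled loans from cost-type inputs and the realised loan rate as output) is sketched below, using a standard input-oriented CCR model solved as a linear program; the paper's interval-valued inputs and the inverse step that backs out a rate interval for a new loan are not reproduced, and the numbers are illustrative.

```python
# Input-oriented CCR efficiency of each settled loan via linear programming (sketch).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Efficiency of DMU j0. X: inputs (m x n), Y: outputs (s x n), columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta over [theta, lambdas]
    A_inputs = np.hstack([-X[:, [j0]], X])        # sum_j lam_j x_j <= theta * x_j0
    A_outputs = np.hstack([np.zeros((s, 1)), -Y]) # sum_j lam_j y_j >= y_j0
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Settled loans: inputs = (deposit interest expense rate, expense rate), output = loan rate.
X = np.array([[0.020, 0.025, 0.018, 0.030],
              [0.005, 0.004, 0.006, 0.007]])
Y = np.array([[0.055, 0.060, 0.050, 0.065]])
for j in range(X.shape[1]):
    print(f"settled loan {j}: CCR efficiency = {ccr_efficiency(X, Y, j):.3f}")
```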

11.
In this paper a comparative evaluation study on popular non-homogeneous Poisson models for count data is performed. For the study, the standard homogeneous Poisson model (HOM) and three non-homogeneous variants, namely a Poisson changepoint model (CPS), a Poisson free mixture model (MIX), and a Poisson hidden Markov model (HMM), are implemented in both conceptual frameworks: a frequentist and a Bayesian framework. This yields eight models in total, and the goal of the presented study is to shed some light on their relative merits and shortcomings. The first major objective is to cross-compare the performances of the four models (HOM, CPS, MIX and HMM) independently for both modelling frameworks (Bayesian and frequentist). Subsequently, a pairwise comparison between the four Bayesian and the four frequentist models is performed to elucidate to which extent the results of the two paradigms (‘Bayesian vs. frequentist’) differ. The evaluation study is performed on various synthetic Poisson data sets as well as on real-world taxi pick-up counts, extracted from the recently published New York City Taxi database.
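As a small illustration of the simplest non-homogeneous variant above, the sketch below locates a single Poisson changepoint by profile maximum likelihood on synthetic counts; the Bayesian counterparts and the MIX/HMM models are not shown, and the rates and sample sizes are assumptions.

```python
# Single Poisson changepoint located by profile maximum likelihood (frequentist sketch).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
counts = np.r_[rng.poisson(3, 60), rng.poisson(8, 40)]   # rate jumps at t = 60

def changepoint_loglik(y, tau):
    lam1, lam2 = y[:tau].mean(), y[tau:].mean()          # segment-wise MLEs of the rates
    return poisson.logpmf(y[:tau], lam1).sum() + poisson.logpmf(y[tau:], lam2).sum()

candidates = range(5, len(counts) - 5)                   # keep both segments non-trivial
tau_hat = max(candidates, key=lambda t: changepoint_loglik(counts, t))
print("estimated changepoint:", tau_hat)
```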

12.
The authors describe the structural solution of the loan rate as a function of default and response risk that maximizes expected return on equity for a lender's portfolio of risky loans. Under the assumptions of our model, the non-linear differential equation for the optimizing price is found to be separable in transformed financial, response and risk variables. With an end-point condition where default-free borrowers are willing to borrow at loan rates higher than the lender's cost of funds, general solutions are obtained for cases where default probabilities may depend explicitly on the offered loan rate and where adverse selection may or may not be present. For the general solution, we suggest a numerical algorithm that involves the sequential solutions of two separate transcendental equations each one of which depends on parameters of the risk and response scores. For the special case where the borrower's default probability is conditionally independent of loan rate, it is shown that the optimal solution is independent of Basel regulations on equity capital.

13.
In this paper, the decision process of commercial loan officers is modelled. Specifically, commercial loan applications from a major Canadian bank were analysed. Credit-scoring models using discriminant analysis and logit regression were created in an effort to replicate the decision process. Results indicate that credit-scoring models for small-business loans are effective and that further analysis in this area is warranted.
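The credit-scoring idea can be sketched with a logit model on synthetic application data; the feature names, data generator and hold-out split below are assumptions, not the Canadian bank's variables.

```python
# Logit credit-scoring sketch on synthetic small-business application data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2000
debt_ratio = rng.uniform(0, 1, n)                     # illustrative applicant features
years_in_business = rng.integers(0, 30, n)
true_logit = -1.0 + 3.0 * debt_ratio - 0.08 * years_in_business
defaulted = rng.random(n) < 1 / (1 + np.exp(-true_logit))   # synthetic outcomes

X = np.column_stack([debt_ratio, years_in_business])
X_tr, X_te, y_tr, y_te = train_test_split(X, defaulted, test_size=0.3, random_state=7)
scorecard = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, scorecard.predict_proba(X_te)[:, 1])
print(f"hold-out AUC of the logit scorecard: {auc:.3f}")
```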

14.
Some models of loan default are binary, simply modelling the probability of default, while others go further and model the extent of default (e.g. number of outstanding payments; amount of arrears). The double-hurdle model, originally due to Cragg (Econometrica, 1971), and conventionally applied to household consumption or labour supply decisions, contains two equations, one which determines whether or not a customer is a potential defaulter (the ‘first hurdle’), and the other which determines the extent of default. In separating these two processes, the model recognizes that there exists a subset of the observed non-defaulters who would never default whatever their circumstances. A Box-Cox transformation applied to the dependent variable is a useful generalization of the model. Estimation is relatively easy using the maximum likelihood routine available in Stata. The model is applied to a sample of 2515 loan applicants for whom loans were approved, a sizeable proportion of whom defaulted in varying degrees. The dependent variables used are amount in arrears and number of days in arrears. The value of the hurdle approach is confirmed by finding that certain key explanatory variables have very different effects between the two equations. Most notably, the effect of loan amount is strongly positive on arrears, while being U-shaped on the probability of default. The former effect is seriously under-estimated when the first hurdle is ignored.
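A rough sketch of the Cragg double-hurdle likelihood (without the Box-Cox generalization): a probit "potential defaulter" hurdle combined with a normal equation for the extent of arrears, fitted by maximum likelihood on synthetic data. The single covariate, the parameter values and the optimizer settings are assumptions, and the estimates are only indicative.

```python
# Double-hurdle likelihood: P(y=0) = 1 - Phi(z'g)Phi(x'b/s); for y>0, f(y) = Phi(z'g)phi((y-x'b)/s)/s.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 2000
loan_amount = rng.uniform(1, 10, n)
d_star = -1.0 + 0.15 * loan_amount + rng.normal(size=n)       # first hurdle (potential defaulter)
y_star = 0.5 + 0.4 * loan_amount + rng.normal(0, 1.0, n)      # extent-of-default equation
arrears = np.where((d_star > 0) & (y_star > 0), y_star, 0.0)

def negloglik(params, x, y):
    g0, g1, b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    p_hurdle = norm.cdf(g0 + g1 * x)                 # probability of being a potential defaulter
    mu = b0 + b1 * x
    ll_zero = np.log(1 - p_hurdle * norm.cdf(mu / sigma))
    ll_pos = np.log(p_hurdle) + norm.logpdf(y, mu, sigma)
    return -np.sum(np.where(y > 0, ll_pos, ll_zero))

fit = minimize(negloglik, x0=np.zeros(5), args=(loan_amount, arrears),
               method="Nelder-Mead", options={"maxiter": 5000})
print("rough estimates (g0, g1, b0, b1, log sigma):", fit.x.round(2))
```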

15.
Finite mixture models have been used to fit data from heterogeneous populations in many applications. The Expectation-Maximization (EM) algorithm is the most popular method for estimating the parameters of a finite mixture model; a Bayesian approach is another. However, the EM algorithm often converges to a local maximum and is sensitive to the choice of starting points, while in the Bayesian approach the Markov chain Monte Carlo (MCMC) sampler sometimes converges to a local mode and has difficulty moving to another mode. Hence, in this paper we propose a new method to overcome this limitation of the EM algorithm so that it can estimate the parameters in the global maximum region, and we develop a more effective Bayesian approach so that the MCMC chain moves from one mode to another more easily in the mixture model. Our approach uses both simulated annealing (SA) and adaptive rejection Metropolis sampling (ARMS). Although SA is a well-known approach for detecting distinct modes, its limitation is the difficulty of choosing a sequence of proper proposal distributions for a target distribution. Since ARMS uses a piecewise linear envelope function as a proposal distribution, we incorporate ARMS into the SA approach so that we can start from a more suitable proposal distribution and detect separate modes. As a result, we can detect the global maximum region and estimate the parameters in that region. We refer to this approach as ARMS annealing. By combining ARMS annealing with the EM algorithm and with the Bayesian approach, respectively, we propose two approaches: an EM-ARMS annealing algorithm and a Bayesian-ARMS annealing approach. Simulations show that the two approaches are comparable to each other and perform better than the EM algorithm alone and the Bayesian approach alone, detecting the global maximum region well and estimating the parameters in that region. We demonstrate the advantage of our approaches using a mixture of two Poisson regression models, applied to survey data on the number of charitable donations.

16.
In the typical analysis of a data set, a single method is selected for statistical reporting even when equally applicable methods yield very different results. Examples of equally applicable methods can correspond to those of different ancillary statistics in frequentist inference and of different prior distributions in Bayesian inference. More broadly, choices are made between parametric and nonparametric methods and between frequentist and Bayesian methods. Rather than choosing a single method, it can be safer, in a game-theoretic sense, to combine those that are equally appropriate in light of the available information. Since methods of combining subjectively assessed probability distributions are not objective enough for that purpose, this paper introduces a method of distribution combination that does not require any assignment of distribution weights. It does so by formalizing a hedging strategy in terms of a game between three players: nature, a statistician combining distributions, and a statistician refusing to combine distributions. The optimal move of the first statistician reduces to the solution of a simpler problem of selecting an estimating distribution that minimizes the Kullback–Leibler loss maximized over the plausible distributions to be combined. The resulting combined distribution is a linear combination of the most extreme of the distributions to be combined that are scientifically plausible. The optimal weights are close enough to each other that no extreme distribution dominates the others. The new methodology is illustrated by combining conflicting empirical Bayes methods in the context of gene expression data analysis.

17.
In pricing decisions for a two-echelon supply chain under cash-on-delivery payment, the behaviour of supply chain firms that deposit idle funds with a bank or borrow from a bank when capital-constrained (bank deposits and loans) is an important factor that cannot be ignored, so how to build a Stackelberg pricing model for a two-echelon supply chain under cash-on-delivery that accounts for bank deposits and loans is an important question. This paper first specifies the market demand function. Then, under the cash-on-delivery payment mode and for the cases in which either the manufacturer or the retailer is capital-constrained, pricing decision models are constructed for different supply chain power structures. Solving the models yields the optimal strategies of the manufacturer and the retailer under the different cases and power structures, and the influence of the model parameters on the optimal strategies is analysed. Finally, comparative analyses are given of the optimal strategies across capital-constraint cases and power structures, and of the effect of bank interest rates on the optimal strategies and profits. The results show that all three bank interest rates affect the optimal strategies, and that which party is capital-constrained makes a clear difference.

18.
To evaluate consumer loan applications, loan officers use many techniques such as judgmental systems, statistical models, or simply intuitive experience. In recent years, fuzzy systems and neural networks have attracted the growing interest of researchers and practitioners. This study compares the performance of adaptive neuro-fuzzy inference systems (ANFIS) and multiple discriminant analysis models for screening potential defaulters on consumer loans. Using a modeling sample and a test sample, we find that the neuro-fuzzy system performs better than the multiple discriminant analysis approach at identifying bad credit applications. Further, neuro-fuzzy systems have many advantages over traditional computational methods. Neuro-fuzzy system models are flexible, more tolerant of imprecise data, and can model non-linear functions of arbitrary complexity.
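Only the discriminant-analysis benchmark is easy to sketch with standard libraries; ANFIS has no canonical scikit-learn implementation, so the neuro-fuzzy side of the comparison is omitted here. The applicant features and data generator below are illustrative assumptions.

```python
# Discriminant-analysis benchmark on synthetic consumer-loan data (sketch of the baseline only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 1500
income, debt = rng.normal(50, 15, n), rng.normal(20, 10, n)
defaulted = (0.08 * debt - 0.04 * income + rng.normal(0, 1, n)) > 0

X = np.column_stack([income, debt])
X_tr, X_te, y_tr, y_te = train_test_split(X, defaulted, test_size=0.3, random_state=9)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("hold-out accuracy of the discriminant benchmark:", round(lda.score(X_te, y_te), 3))
```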

19.
A fundamental problem of interest to contemporary natural resource scientists is that of assessing whether a critical population parameter such as population proportion p has been maintained above (or below) a specified critical threshold level pc. This problem has been traditionally analyzed using frequentist estimation of parameters with confidence intervals or frequentist hypothesis testing. Bayesian statistical analysis provides an alternative approach that has many advantages. It has a more intuitive interpretation, providing probability assessments of parameters. It provides the Bayesian logic of “if (data), then probability (parameters)” rather than the frequentist logic of “if (parameters), then probability (data).” It provides a sequential, cumulative, scientific approach to analysis, using prior information and reassessing the probability distribution of parameters for adaptive management decision making. It has been integrated with decision theory and provides estimates of risk. Natural resource scientists have the opportunity of using Bayesian statistical analysis to their advantage now that this alternative approach to statistical inference has become practical and accessible.
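A minimal example of the Bayesian assessment described above: with a Beta(1, 1) prior and binomial monitoring data, the posterior probability that the proportion p exceeds the critical threshold pc is a single Beta tail probability. The counts and threshold below are illustrative assumptions.

```python
# Posterior probability that a population proportion exceeds a critical threshold.
from scipy.stats import beta

successes, trials = 46, 80          # e.g. occupied sites out of sites surveyed (assumed)
p_c = 0.5                           # critical threshold for the proportion (assumed)
posterior = beta(1 + successes, 1 + trials - successes)   # Beta(1,1) prior updated by the data
print(f"P(p > {p_c} | data) = {posterior.sf(p_c):.3f}")
```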

20.
This article introduces a sequence of four systematic methods to examine the extent to which the economic efficiency of Taiwan’s commercial banks persists and to uncover the potential dynamic link between bank performance and various financial indicators. Quasi-fixed inputs are explicitly incorporated in the DEA model to account for possible adjustment costs, regulation, or indivisibilities. Among the four methods, the dynamic panel data model and the Markov model appear to be exploited for the first time in the area of the DEA approach. Evidence is found that bank efficiency exhibits moderate persistence over the sample period, implying that the given sample banks fail to adjust their production techniques in a timely manner. It is suggested that regulatory authorities and bank managers pay attention to the level of undesirable non-performing loans, given their close relationship with bank performance.
