Similar Articles
1.
Credit scoring discriminates between ‘good’ and ‘bad’ credit risks to assist credit-grantors in making lending decisions. Such discrimination may not be a good indicator of profit, whereas survival analysis allows profit to be modelled. The paper explores the application of parametric accelerated failure time and proportional hazards models, and of the Cox non-parametric model, to retail-card (revolving credit) data from three European countries. The predictive performance of three national models is tested for different timescales of default and then compared to that of a single generic model for a timescale of 25 months. Both the national and generic survival-analysis models are found to produce predictive quality very close to that of the current industry standard, logistic regression. Stratification is investigated as a way of extending the Cox non-parametric proportional hazards model to handle heterogeneous segments in the population.
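As a minimal illustration of the survival-analysis view of default (a sketch, not the paper's actual models), the non-parametric Kaplan–Meier estimator below computes the probability of surviving (not defaulting) past each observed month for a hypothetical toy portfolio:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: months on book; events: 1 = default observed, 0 = censored."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    surv, curve = 1.0, {}
    for t in event_times:
        n_at_risk = sum(1 for ti in times if ti >= t)  # accounts still at risk just before t
        d = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        surv *= 1.0 - d / n_at_risk                    # product-limit update
        curve[t] = surv
    return curve

# hypothetical portfolio: months to default (event=1) or to censoring (event=0)
times  = [3, 5, 5, 8, 12, 12, 15, 20]
events = [1, 1, 0, 1,  0,  1,  0,  0]
km = kaplan_meier(times, events)
# e.g. km[12] is the estimated probability of surviving past month 12
```

1 − km[t] then gives the estimated probability of default by month t, the quantity a survival-based scorecard reports for each timescale.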

2.
Received on 1 July 1991. The benefit to consumers from the use of informative credit reports is demonstrated by showing the improvement in credit decisions when generic scoring models based on credit reports are implemented. If these models are highly predictive, then the truncation of credit reports will reduce the predictive power of bureau-based generic scoring systems. As a result, more good credit risks will be denied credit, and more poor credit risks will be granted credit. It is shown that, even when applied to credit applications that had already been screened and approved, the use of generic scoring models significantly improves credit grantors' ability to predict and eliminate bankruptcies, charge-offs, and delinquencies. As applied to existing accounts, bureau-based generic scores are shown to have predictive value for at least 3 months, while scores 12 months old may not be very powerful. Even though bureau-based scores shift towards the high-risk end of the distribution during a recession, they continue to rank risk very well. When coupled with application-based credit-scoring models, scores based on credit-bureau data further improve the predictive power of the model, the improvements being greater with more complete bureau information. We conclude that government-imposed limits on credit information are anti-consumer by fostering more errors in credit decisions.

3.
Numerous results about capturing complexity classes of queries by means of logical languages work for ordered structures only, and deal with non-generic, or order-dependent, queries. Recent attempts to improve the situation by characterizing wide classes of finite models where linear order is definable by certain simple means have not been very promising, as certain commonly believed conjectures were recently refuted (Dawar's Conjecture). We take on another approach that has to do with normalization of a given order (rather than with defining a linear order from scratch). To this end, we show that normalizability of linear order is a strictly weaker condition than definability (say, in the least fixpoint logic), and still allows for extending Immerman-Vardi-style results to generic queries. It seems to be the weakest such condition. We then conjecture that linear order is normalizable in the least fixpoint logic for any finitely axiomatizable class of rigid structures. Truth of this conjecture, which is a strengthened version of Stolboushkin's conjecture, would have the same practical implications as Dawar's Conjecture. Finally, we suggest a series of reductions of the two conjectures to specialized classes of graphs, which we believe should simplify further work. Received: 13 July 1996

4.
Although credit-scoring models represent a widely used managerial aid for large financial intermediaries, the vast majority of U.S. credit unions, relatively small cooperatively owned retail intermediaries constrained by sample and funding limitations, have yet to adopt such techniques. Lovie & Lovie (1986) have theorized that the flat-maximum effect or curve of insensitivity associated with linear scoring models could be advantageous in areas of applied prediction such as credit scoring. In this context, we reported the relative predictive power of generic credit-scoring models versus customized models in an earlier paper (Overstreet et al. 1992). Unfortunately, those findings were not readily adaptable to the credit-union industry due to a dated sample with incomplete credit-bureau information. Consequently, from 1988 to 1991, we gathered a refined database from which to further develop and field-test generic scoring models in the credit-union environment. The results reported herein not only confirm, but amplify, the relative predictive power of such models found earlier. Relative costs and benefits of generic versus customized models are modelled for a representative credit union. Future research directions are set forth in the conclusions.

5.
This paper considers post-J test inference in non-nested linear regression models. Post-J test inference means that the inference problem is considered by taking the first-stage J test into account. We first propose a post-J test estimator and derive its asymptotic distribution. We then consider testing the unknown parameters, and propose a Wald statistic based on the post-J test estimator. A simulation study shows that the proposed Wald statistic performs as well as the two-stage test in terms of empirical size and power in large samples, and even better when the sample size is small. As a result, the new Wald statistic can be used directly to test hypotheses on the unknown parameters in non-nested linear regression models.

6.
We propose an empirical likelihood method to test whether the coefficients in a possibly high-dimensional linear model are equal to given values. The asymptotic distribution of the test statistic is independent of the number of covariates in the linear model.

7.
The purpose of the present paper is to explore the ability of neural networks such as multilayer perceptrons and modular neural networks, and traditional techniques such as linear discriminant analysis and logistic regression, in building credit scoring models in the credit union environment. Also, since funding and small sample size often preclude the use of customized credit scoring models at small credit unions, we investigate the performance of generic models and compare them with customized models. Our results indicate that customized neural networks offer a very promising avenue if the measure of performance is percentage of bad loans correctly classified. However, if the measure of performance is percentage of good and bad loans correctly classified, logistic regression models are comparable to the neural networks approach. The performance of generic models was not as good as the customized models, particularly when it came to correctly classifying bad loans. Although we found significant differences in the results for the three credit unions, our modular neural network could not accommodate these differences, indicating that more innovative architectures might be necessary for building effective generic models.
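The two performance measures the paper distinguishes, percentage of bad loans correctly classified versus percentage of all loans correctly classified, can be computed as below. The predictions here are hypothetical toy values, not the paper's results:

```python
def class_rates(actual, predicted):
    """actual/predicted: 1 = bad loan, 0 = good loan.
    Returns (share of bad loans caught, share of all loans correct)."""
    bad_total = sum(1 for a in actual if a == 1)
    bad_hit = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    overall = sum(1 for a, p in zip(actual, predicted) if a == p)
    return bad_hit / bad_total, overall / len(actual)

actual  = [1, 1, 1, 0, 0, 0, 0, 0]
nn_pred = [1, 1, 1, 0, 0, 1, 1, 0]  # hypothetical NN: catches every bad loan, two false alarms
lr_pred = [1, 1, 0, 0, 0, 0, 0, 0]  # hypothetical logistic model: misses one bad loan

nn_bad, nn_all = class_rates(actual, nn_pred)
lr_bad, lr_all = class_rates(actual, lr_pred)
```

In this toy case the NN-style predictions win on bad-loan capture (3/3 vs 2/3) while the logistic-style predictions win on overall accuracy (7/8 vs 6/8), mirroring how the choice of measure can reverse the ranking of models.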

8.
9.
10.
We investigate the problem of testing equality and inequality constraints on regression coefficients in linear models with the multivariate power exponential (MPE) distribution. This distribution has received considerable attention in recent years and provides a useful generalization of the multivariate normal distribution. Using Monte Carlo simulations, we examine the power of the likelihood ratio, Wald, and score tests for grouped data and in the presence of regressors, in small and moderate sample sizes. Additionally, we present a real example to illustrate the performance of the proposed tests under the MPE model.

11.
12.
If a credit scoring model is built using only applicants who have previously been accepted for credit, such non-random sample selection may bias the estimated model parameters, and accordingly the model's predictions of repayment performance may not be optimal. Previous empirical research suggests that omission of rejected applicants has a detrimental impact on model estimation and prediction. This paper explores the extent to which, given the previous cutoff score applied to decide on accepted applicants, the number of included variables influences the efficacy of a commonly used reject inference technique, reweighting. The analysis benefits from the availability of a rare sample in which virtually no applicant was denied credit. The general indication is that the efficacy of reject inference is little influenced by either model leanness or the interaction between model leanness and the rejection rate that determined the sample. However, there remains some hint that very lean models may benefit from reject inference when modelling is conducted on data characterized by a very high rate of applicant rejection.
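Reweighting, as commonly applied in reject inference, up-weights each accepted applicant so that the accepts represent the full through-the-door population within each score band. A minimal sketch, with hypothetical score bands:

```python
from collections import Counter

def reweight(applicants):
    """applicants: list of (score_band, accepted) pairs.
    Returns, per band, the weight to attach to each accepted case,
    i.e. total applicants in the band / accepted applicants in the band."""
    total = Counter(band for band, _ in applicants)
    accepted = Counter(band for band, acc in applicants if acc)
    return {band: total[band] / accepted[band] for band in accepted}

# hypothetical sample: 'low' band mostly rejected, 'high' band mostly accepted
apps = [('low', True), ('low', False), ('low', False),
        ('high', True), ('high', True), ('high', False)]
w = reweight(apps)
```

Each accepted low-band case then counts three times in model estimation, restoring the band's share of the through-the-door population; the paper's question is how much this correction actually helps as the model is made leaner.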

13.
We consider the problem of estimating the parameters in generalized linear models (GLM) with binary data when it is suspected, with some degree of uncertainty, that the parameter vector obeys some exact, linearly independent restrictions. Based on minimum φ-divergence estimation (MφE), we consider several estimators for the parameters of the GLM: the unrestricted MφE, restricted MφE, preliminary MφE, shrinkage MφE, shrinkage preliminary MφE, James–Stein MφE, positive-part Stein-rule MφE, and modified preliminary MφE. Asymptotic bias as well as risk under a quadratic loss function are studied under contiguous alternative hypotheses. Some discussion about dominance among the studied estimators is presented. Finally, a simulation study is carried out.

14.
This paper suggests a modified serial correlation test for linear panel data models, based on the parameter estimates of an artificial autoregression formed by differencing and centering the residual vectors. Specifically, the differencing operator over the time index and the centering operator over the individual index are used, respectively, to eliminate the potential individual effects and time effects, so that the resulting serial correlation test is robust to both. The test is also robust to potential correlation between the covariates and the random effects, and it is asymptotically chi-squared distributed under the null hypothesis. A power study shows that the test can detect local alternatives distinct from the null hypothesis at the parametric rate. The finite-sample properties of the test are investigated by means of Monte Carlo simulation experiments, and a real data example is analyzed for illustration.
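The differencing-and-centering idea can be illustrated on a residual matrix: first-differencing each individual's residuals over time cancels any individual-specific constant, and centering across individuals at each period cancels any period-specific constant. A sketch with made-up residuals (not the paper's construction of the artificial autoregression itself):

```python
def difference_then_center(resid):
    """resid[i][t]: residual of individual i at time t.
    First-difference over t (removes individual effects), then
    subtract the cross-individual mean at each t (removes time effects)."""
    diff = [[r[t] - r[t - 1] for t in range(1, len(r))] for r in resid]
    T = len(diff[0])
    means = [sum(row[t] for row in diff) / len(diff) for t in range(T)]
    return [[row[t] - means[t] for t in range(T)] for row in diff]

# made-up residuals for 2 individuals over 3 periods
resid = [[1.0, 2.0, 4.0],
         [3.0, 3.5, 4.5]]
out = difference_then_center(resid)
```

Adding a constant to either an entire row (individual effect) or an entire column (time effect) of `resid` leaves `out` unchanged, which is exactly the robustness property the test exploits.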

15.
Mixture cure models were originally proposed in medical statistics to model long-term survival of cancer patients in terms of two distinct subpopulations: those that are cured of the event of interest and will never relapse, and those that are uncured and remain susceptible to the event. In the present paper, we introduce mixture cure models to the area of credit scoring, where, similarly to the medical setting, a large proportion of the dataset may not experience the event of interest, i.e. default, during the loan term. We estimate a mixture cure model predicting (time to) default on a UK personal loan portfolio, and compare its performance to the Cox proportional hazards method and standard logistic regression. Results for credit scoring at an account level and prediction of the number of defaults at a portfolio level are presented; model performance is evaluated through cross-validation on discrimination and calibration measures. Discrimination performance for all three approaches was found to be high and competitive. Calibration performance for the survival approaches was found to be superior to logistic regression for intermediate time intervals and useful for fixed 12-month time horizon estimates, reinforcing the flexibility of survival analysis both as a risk-ranking tool and as a source of robust estimates of probability of default over time. Furthermore, the mixture cure model's ability to distinguish between two subpopulations can offer additional insight, by estimating the parameters that determine susceptibility to default in addition to the parameters that influence a borrower's time to default.
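In a mixture cure model, the population survival function is S(t) = π + (1 − π)·S_u(t), where π is the cured (never-default) fraction and S_u the survival function of the uncured subpopulation. A sketch assuming, purely for illustration, an exponential latency distribution for the uncured (the paper does not specify this form):

```python
import math

def mixture_cure_default_prob(t, cure_prob, rate):
    """P(default by time t) under a mixture cure model with an
    exponential(rate) time-to-default for the uncured subpopulation:
    S(t) = pi + (1 - pi) * exp(-rate * t), so
    P(default by t) = (1 - pi) * (1 - exp(-rate * t))."""
    return (1.0 - cure_prob) * (1.0 - math.exp(-rate * t))

# hypothetical portfolio: 90% effectively cured, monthly hazard 0.05 for the rest
p12 = mixture_cure_default_prob(12, cure_prob=0.9, rate=0.05)
```

Note that P(default by t) is bounded above by 1 − π no matter how long the horizon, which is what lets the model capture a loan book where most accounts never default.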

16.
In this article, the Bayes linear unbiased estimator (BALUE) of the parameters is derived for multivariate linear models. The superiority of the BALUE over the least squares estimator (LSE) is studied in terms of the mean square error matrix (MSEM) criterion and the Bayesian Pitman closeness (PC) criterion.

17.
Mathematical Modelling, 1986, 7(2–3): 301–340
Some of the results in the literature on simple one-dimensional, density-dependent, discrete and continuous models—with and without harvesting—are reviewed. Both deterministic and stochastic models are included. Some comparisons of the various models are made, and the results are discussed in terms of their ramifications in population model building.

18.
In memoriam Deane Montgomery

19.
In actuarial practice, regression models serve as a popular statistical tool for analyzing insurance data and tariff ratemaking. In this paper, we consider classical credibility models that can be embedded within the framework of mixed linear models. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators are commonly pursued. However, it is well known that these standard and fully efficient estimators are extremely sensitive to small deviations from the hypothesized normality of random components, as well as to the occurrence of outliers. To obtain better estimators for premium calculation and prediction of future claims, various robust methods have been successfully adapted to credibility theory in the actuarial literature. The objective of this work is to develop robust and efficient methods for credibility when heavy-tailed claims are approximately log-location-scale distributed. To accomplish that, we first show how to express additive credibility models, such as the Bühlmann–Straub and Hachemeister ones, as mixed linear models with symmetric or asymmetric errors. Then, we adjust adaptively truncated likelihood methods and compute highly robust credibility estimates for the ordinary but heavy-tailed part of the claims. Finally, we treat the identified excess claims separately and find robust-efficient credibility premiums. Practical performance of this approach is examined, via simulations, under several contaminating scenarios. A widely studied real-data set from workers' compensation insurance is used to illustrate the functional capabilities of the new robust credibility estimators.
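For context, the classical (non-robust) Bühlmann credibility premium weights a class's own claims experience against the portfolio mean via Z = n/(n + k), where k is the ratio of the expected process variance to the variance of the hypothetical means. A minimal sketch with made-up variance components (the robust, truncated-likelihood version in the paper replaces these plug-in estimates):

```python
def buhlmann_premium(claims_i, mu, s2, a):
    """Classical Buhlmann credibility premium for one risk class.
    claims_i: observed claims for the class; mu: portfolio mean;
    s2: expected process variance; a: variance of hypothetical means."""
    n = len(claims_i)
    k = s2 / a
    z = n / (n + k)                      # credibility factor in [0, 1)
    xbar = sum(claims_i) / n
    return z * xbar + (1.0 - z) * mu     # blend of own experience and portfolio mean

# made-up inputs: three years of class experience against a portfolio mean of 90
prem = buhlmann_premium([100.0, 120.0, 110.0], mu=90.0, s2=600.0, a=200.0)
```

With more years of experience n, Z approaches 1 and the premium relies increasingly on the class's own data, which is why outliers in that data can distort the standard estimator and motivate the robust treatment above.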

20.