Similar Documents
20 similar documents found.
1.
In retail banking, predictive statistical models called ‘scorecards’ are used to assign customers to classes, and hence to appropriate actions or interventions. Such assignments are made on the basis of whether a customer's predicted score is above or below a given threshold. The predictive power of such scorecards gradually deteriorates over time, so that performance needs to be monitored. Common performance measures used in the retail banking sector include the Gini coefficient, the Kolmogorov–Smirnov statistic, the mean difference, and the information value. However, all of these measures use irrelevant information about the magnitude of scores, and fail to use crucial information relating to numbers misclassified. The result is that such measures can sometimes be seriously misleading, resulting in poor quality decisions being made, and mistaken actions being taken. The weaknesses of these measures are illustrated. Performance measures not subject to these risks are defined, and simple numerical illustrations are given.
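To make the contrast concrete, here is a minimal sketch (not the paper's proposed measures) of the two-sample Kolmogorov–Smirnov statistic on scorecard outputs alongside a simple misclassification count at a cut-off; the score values are made up for illustration.

```python
# Illustrative sketch: KS statistic on good/bad score samples vs. a direct
# count of misclassifications at a chosen cut-off (all data hypothetical).
def ks_statistic(goods, bads):
    """Maximum vertical distance between the empirical CDFs of two score samples."""
    cuts = sorted(set(goods) | set(bads))
    def ecdf(sample, x):
        return sum(1 for s in sample if s <= x) / len(sample)
    return max(abs(ecdf(goods, c) - ecdf(bads, c)) for c in cuts)

def misclassified(goods, bads, threshold):
    """Goods scored below the cut-off plus bads scored at or above it."""
    return (sum(1 for s in goods if s < threshold)
            + sum(1 for s in bads if s >= threshold))

goods = [620, 640, 660, 700, 720, 750]   # hypothetical scores of good payers
bads = [480, 510, 540, 580, 630]         # hypothetical scores of bad payers
d = ks_statistic(goods, bads)            # separation of the two distributions
n = misclassified(goods, bads, 600)      # errors at a 600 cut-off
```

The KS statistic depends only on the score distributions, whereas the misclassification count depends on the operating threshold, which is the abstract's point about what the common measures ignore.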

2.
The development of sensor networks has enabled detailed tracking of customer behavior in stores. Shopping path data, which record each customer's position over time, are attracting attention as a new kind of marketing data. However, no marketing models have been proposed that can identify good customers from the huge amounts of time-series data on customer movement in the store. This research uses shopping path data obtained by tracking customer behavior in the store, including the sequence in which each customer visits the product zones and the time spent in each zone, to find how these factors affect purchasing. To discover knowledge useful for store management, the shopping path data were transformed into sequence data encoding visit order and staying times, and LCMseq was applied to extract frequent sequence patterns. Using actual data from a Japanese supermarket, we identify characteristic in-store behavior patterns of good customers.

3.
Many credit scoring systems depend on scorecards which order applicants by credit risk. However the scorecards may also have other properties with certain scores reflecting certain good:bad odds or differences in scores having the same property throughout the score range. Other properties like positivity of attribute points may be required for palatability or internal marketing reasons. The paper outlines the results of a small survey of what properties scorecard builders require of their scorecards. It then discusses how these properties can be obtained and describes a linear programming formulation which recalibrates scorecards so as to produce the best approximate scorecard with the properties required.

4.
We consider two balking queue models with different types of information about delays. Potential customers arrive according to a Poisson process, and they decide whether to stay or balk based on the available delay information. In the first model, an arriving customer learns a rough range of the current queue length. In the second model, each customer’s service time is the sum of a geometric number of i.i.d. exponential phases, and an arriving customer learns the total number of phases remaining in the system. For each information model, we compare two systems, identical except that one has more precise information. In many cases, better information increases throughput and thus benefits the service provider. But this is not always so. The effect depends on the shape of the distribution describing customers’ sensitivities to delays. We also study the effects of information on performance as seen by customers. Again, more information is often good for customers, but not always.
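A much simpler special case than the paper's models, useful for intuition: if every arrival who sees K or more customers in the system balks, the queue is an M/M/1/K system, and throughput is the arrival rate times the probability the system is not full. All parameter values below are hypothetical.

```python
# Hedged illustration (not the paper's model): throughput of an M/M/1 queue
# with threshold balking, i.e. an M/M/1/K queue.
def mm1k_throughput(lam, mu, K):
    """lam: arrival rate, mu: service rate, K: balking threshold (capacity)."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]  # unnormalised stationary probs
    p_full = weights[K] / sum(weights)
    return lam * (1 - p_full)                   # arrivals that actually join
```

With lam < mu, raising K (customers tolerate longer queues) pushes throughput toward lam, matching the abstract's observation that information which reduces balking can raise throughput.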

5.
Selection bias is a perennial problem when constructing and evaluating scorecards. It is familiar in the context of reject inference, but crops up in many other situations as well. In this paper, we examine how accepting or rejecting customers using one scorecard leads to biased comparisons of performance between that scorecard and others. This has important implications for organisations seeking to improve or replace scorecards.

6.
Credit scoring is one of the most widely used applications of quantitative analysis in business. Behavioural scoring is a type of credit scoring that is performed on existing customers to assist lenders in decisions like increasing the balance or promoting new products. This paper shows how using survival analysis tools from reliability and maintenance modelling, specifically Cox's proportional hazards regression, allows one to build behavioural scoring models. Their performance is compared with that of logistic regression. Also the advantages of using survival analysis techniques in building scorecards are illustrated by estimating the expected profit from personal loans. This cannot be done using the existing risk behavioural systems.
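The survival-analysis ingredient can be illustrated with the standard Kaplan–Meier estimator (a nonparametric cousin of the Cox model used in the paper, not the paper's own method): it estimates the probability a loan survives past each month from data where many loans are censored (still open). The data below are invented.

```python
# Sketch: Kaplan–Meier estimate of loan survival (no default) from censored data.
def kaplan_meier(times, events):
    """times: observed month of default or censoring; events: 1 = default, 0 = censored."""
    surv, curve = 1.0, {}
    for t in sorted(set(t for t, e in zip(times, events) if e)):
        at_risk = sum(1 for ti in times if ti >= t)   # loans still observed at t
        defaults = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        surv *= 1 - defaults / at_risk                # product-limit update
        curve[t] = surv
    return curve

# Five hypothetical loans: defaults at months 3, 5, 8; censored at 5 and 10.
km = kaplan_meier([3, 5, 5, 8, 10], [1, 1, 0, 1, 0])
```

Because the estimate handles censoring, it uses loans that have not yet defaulted, which is exactly what a static good/bad classification throws away.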

7.
Massive amounts of data about individual electrical consumption are now provided by new metering technologies and smart grids. These data are especially useful for load profiling and load modeling at different scales of the electrical network. A new methodology based on mixtures of high-dimensional regression models is used to cluster individual customers, uncovering clusters that correspond to different regression models. Temporal information is incorporated to prepare the next step, fitting a forecasting model within each cluster. Only the electrical signal is used: it is sliced into consecutive curves and treated as a discrete time series of curves. The models are interpreted on a real smart meter dataset of Irish customers.

8.
Traditionally, credit scoring aimed at distinguishing good payers from bad payers at the time of the application. The timing when customers default is also interesting to investigate since it can provide the bank with the ability to do profit scoring. Analysing when customers default is typically tackled using survival analysis. In this paper, we discuss and contrast statistical and neural network approaches for survival analysis. Compared to the proportional hazards model, neural networks may offer an interesting alternative because of their universal approximation property and the fact that no baseline hazard assumption is needed. Several neural network survival analysis models are discussed and evaluated according to their way of dealing with censored observations, time-varying inputs, the monotonicity of the generated survival curves and their scalability. In the experimental part, we contrast the performance of a neural network survival analysis model with that of the proportional hazards model for predicting both loan default and early repayment using data from a UK financial institution.

9.
Choice behaviour prediction is valuable for developing suitable customer segmentation and finding target customers in marketing management. Constructing good choice models for choice behaviour prediction usually requires a sufficient amount of customer data. However, there is only a small amount of data in many marketing applications due to resource constraints. In this paper, we focus on choice behaviour prediction with a small sample size by introducing the idea of transfer learning and present a method that is applicable to choice prediction. The new model called transfer bagging extracts information from similar customers from different areas to improve the performance of the choice model for customers of interest. We illustrate an application of the new model for customer mode choice analysis in the long-distance communication market and compare it with other benchmark methods without information transfer. The results show that the new model can provide significant improvements in choice prediction.

10.
Lenders are under increasing pressure to consider measures of affordability and indebtedness as well as risk, when assessing consumer credit applications. In order to evaluate the affordability of a new credit product, a lender needs information about the applicant's income and outgoings. However, while most lenders obtain information about income and credit commitments, many have little, if any, information pertaining to other expenditure. Therefore, they are not well positioned to determine an individual's ability to fund new borrowing. This paper demonstrates that using only data captured on a typical application form, combined with data from a credit bureau, it is possible to develop good predictive models of expenditure and over-indebtedness that can be used in conjunction with measures of risk to reject applications from individuals who are likely to already be over-indebted, or to restrict the volume of credit advanced to that which the applicant can afford.

11.
In consumer credit markets lending decisions are usually represented as a set of classification problems. The objective is to predict the likelihood of customers ending up in one of a finite number of states, such as good/bad payer, responder/non-responder and transactor/non-transactor. Decision rules are then applied on the basis of the resulting model estimates. However, this represents a misspecification of the true objectives of commercial lenders, which are better described in terms of continuous financial measures such as bad debt, revenue and profit contribution. In this paper, an empirical study is undertaken to compare predictive models of continuous financial behaviour with binary models of customer default. The results show models of continuous financial behaviour to outperform classification approaches. They also demonstrate that scoring functions developed to specifically optimize profit contribution, using genetic algorithms, outperform scoring functions derived from optimizing more general functions such as sum of squared error.

12.
Data-based scorecards, such as those used in credit scoring, age with time and need to be rebuilt or readjusted. Unlike the huge literature on modelling the replacement and maintenance of equipment there have been hardly any models that deal with this problem for scorecards. This paper identifies an effective way of describing the predictive ability of the scorecard and from this describes a simple model for how its predictive ability will develop. Using a dynamic programming approach one is then able to find when it is optimal to rebuild and when to readjust a scorecard. Failing to readjust or rebuild scorecards as they aged was one of the defects in credit scoring identified in the investigations into the sub-prime mortgage crisis.
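The dynamic-programming idea can be sketched as a toy finite-horizon recursion (entirely hypothetical numbers, not the paper's model): the scorecard's age drives a per-period loss, and each period we keep it, readjust it (cheap, partial reset), or rebuild it (expensive, full reset).

```python
from functools import lru_cache

def plan(horizon, loss_per_age=1.0, readjust_cost=3.0,
         rebuild_cost=8.0, readjust_age=2):
    """Minimal total cost of maintaining a scorecard over `horizon` periods.
    All costs and the linear ageing loss are illustrative assumptions."""
    @lru_cache(maxsize=None)
    def v(t, age):
        if t == horizon:
            return 0.0
        keep = age * loss_per_age + v(t + 1, age + 1)
        adj = min(age, readjust_age)              # readjust partially resets age
        readjust = readjust_cost + adj * loss_per_age + v(t + 1, adj + 1)
        rebuild = rebuild_cost + v(t + 1, 1)      # rebuild fully resets age
        return min(keep, readjust, rebuild)
    return v(0, 0)
```

Over a short horizon keeping the ageing scorecard is cheapest, but over a long horizon the recursion starts scheduling readjustments or rebuilds, which is the trade-off the paper optimizes.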

13.
14.
For firms in exceedingly competitive environments, where customers have access to an increasing array of information, the importance of accurately measuring consumer preference for service quality management cannot be overstated. There has been a resurgence of interest in consumer preference measurement and service quality management, specifically real-time service management, as more data about customer behavior and means to process these data to generate actionable policies become available. Recent years have also witnessed the incorporation of Radio-Frequency Identification (RFID) tags in a wide variety of applications where item-level information can be beneficially leveraged to provide competitive advantage. We propose a knowledge-based framework for real-time service management incorporating RFID-generated item-level identification data. We consider the economic motivations for adopting RFID solutions for customer service management through analysis of service quality, response speed and service dependability. We conclude by providing managerial insights on when and where managers should consider RFID-generated identification information to improve their customer services.

15.
Mobile phone carriers in a saturated market must focus on customer retention to maintain profitability. This study investigates the incorporation of social network information into churn prediction models to improve accuracy, timeliness, and profitability. Traditional models are built using customer attributes; however, these data are often incomplete for prepaid customers. Alternatively, call record graphs that are current and complete for all customers can be analysed. A procedure was developed to build the call graph and extract relevant features from it to be used in classification models. The scalability and applicability of this technique are demonstrated on a telecommunications data set containing 1.4 million customers and over 30 million calls each month. The models are evaluated based on ROC plots, lift curves, and expected profitability. The results show how using network features can improve performance over local features while retaining high interpretability and usability.
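A toy version of the feature-extraction step (not the paper's actual feature set) shows the idea: build an undirected contact graph from call records, then derive per-customer network features such as degree and the share of contacts who have already churned.

```python
from collections import defaultdict

def network_features(calls, churned):
    """calls: iterable of (caller, callee) pairs; churned: set of churned ids.
    Returns {customer: (degree, fraction of contacts who churned)}."""
    contacts = defaultdict(set)
    for a, b in calls:
        contacts[a].add(b)
        contacts[b].add(a)
    return {c: (len(n), sum(1 for x in n if x in churned) / len(n))
            for c, n in contacts.items()}

# Hypothetical call records among four customers; customer 2 has churned.
feats = network_features([(1, 2), (1, 3), (2, 3), (3, 4)], churned={2})
```

Features like these are defined for every customer with any call activity, which is why the abstract notes they remain available even when prepaid customers' attribute data are missing.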

16.
Classification of items as good or bad can often be achieved more economically by examining the items in groups rather than individually. If the result of a group test is good, all items within it can be classified as good; otherwise, one or more items within the group are bad. Whether the bad items need to be identified, and if so how, is described by the screening policy. Over time, a spectrum of group screening models has been studied, each including some policy. However, the majority ignore that, in real-life situations, items may arrive at the testing center at random time epochs. This dynamic aspect leads to two decision variables: the minimum and maximum group size. In this paper, we analyze a discrete-time batch-service queueing model with a general dependency between the service time of a batch and the number of items within it. We deduce several important quantities by which the decision variables can be optimized. In addition, we highlight that, in principle, every possible screening policy can be studied by defining this dependency appropriately.

17.
The main purpose of this paper is to investigate the retailer’s optimal cycle time and optimal payment time under the supplier’s cash discount and trade credit policy within the economic production quantity (EPQ) framework. In this paper, we assume that the retailer will provide a full trade credit to his/her good credit customers and request his/her bad credit customers pay for the items as soon as receiving them. Under this assumption, we model the retailer’s inventory system as a cost-minimization problem to determine the retailer’s optimal inventory cycle time and optimal payment time when the replenishment rate is finite. Then, an algorithm is established to obtain the optimal strategy. Finally, numerical examples are given to illustrate the theoretical results and to draw some managerial insights.
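A stripped-down EPQ illustration (holding and setup cost only; the paper's model adds cash-discount and trade-credit terms, and all parameter values here are invented) shows the kind of cycle-time optimisation involved: cost per unit time is setup cost spread over the cycle plus average holding cost, minimised over the cycle length T.

```python
import math

def epq_cost(T, K=100.0, d=400.0, p=1000.0, h=2.0):
    """Cost per unit time for cycle length T.
    K: setup cost, d: demand rate, p: production rate, h: holding cost."""
    return K / T + h * d * (1 - d / p) * T / 2

# Grid search versus the closed-form EPQ cycle time.
best_T = min((t / 1000 for t in range(1, 2001)), key=epq_cost)
T_star = math.sqrt(2 * 100.0 / (2.0 * 400.0 * (1 - 400.0 / 1000.0)))
```

With trade-credit and payment-time terms added, the cost function loses this simple closed form, which is why the paper develops an algorithm rather than a formula.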

18.
One of the major challenges associated with the measurement of customer lifetime value is selecting an appropriate model for predicting customer future transactions. Among such models, the Pareto/negative binomial distribution (Pareto/NBD) is the most prevalent in noncontractual relationships characterized by latent customer defections; i.e., defections are not observed by the firm when they happen. However, this model and its applications have some shortcomings. Firstly, a methodological shortcoming is that the Pareto/NBD, like all lifetime transaction models based on statistical distributions, assumes that the number of transactions by a customer follows a Poisson distribution. However, many applications have an empirical distribution that does not fit a Poisson model. Secondly, a computational concern is that the implementation of the Pareto/NBD model presents some estimation challenges, specifically related to the numerous evaluations of the Gaussian hypergeometric function. Finally, the model provides four parameters as output, which is insufficient to link individual purchasing behavior to socio-demographic information and to predict the behavior of new customers. In this paper, we model a customer's lifetime transactions using the Conway-Maxwell-Poisson distribution, which is a generalization of the Poisson distribution, offering more flexibility and a better fit to real-world discrete data. To estimate parameters, we propose a Markov chain Monte Carlo algorithm, which is easy to implement. Use of this Bayesian paradigm provides individual customer estimates, which help link purchase behavior to socio-demographic characteristics and an opportunity to target individual customers.
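The Conway-Maxwell-Poisson pmf itself is simple to write down (this is just the standard definition, computed in log space for stability, not the paper's MCMC sampler): P(X = k) is proportional to λ^k / (k!)^ν, with a normalising sum, and ν = 1 recovers the Poisson distribution.

```python
import math

def cmp_pmf(k, lam, nu, terms=100):
    """Conway-Maxwell-Poisson pmf, normalised over a truncated support.
    nu > 1 gives underdispersion, nu < 1 overdispersion, nu = 1 is Poisson."""
    def logw(j):
        return j * math.log(lam) - nu * math.lgamma(j + 1)
    m = max(logw(j) for j in range(terms))        # shift for numerical stability
    z = sum(math.exp(logw(j) - m) for j in range(terms))
    return math.exp(logw(k) - m) / z
```

The extra parameter ν is what lets the model fit transaction counts whose variance does not match their mean, addressing the first shortcoming the abstract raises.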

19.
Propensity scorecards forecast which bank customers are likely to apply for new loans in the near future by assessing their willingness to take on new credit. Kalman filtering can help to monitor scorecard performance. Data from successive months are used to update the baseline model. The updated scorecard is the output of the Kalman filter. There is no assumption concerning the scoring model specification and no specific estimation method is presupposed. Thus, the estimator covariance is derived from the bootstrap. The focus is on a relationship between the score and the natural logarithm of the odds for that score, which is used to determine a customer's propensity level. The propensity levels corresponding to the baseline and updated scores are compared. That comparison allows for monitoring whether the scorecard is still up-to-date in terms of assigning the odds. The presented technique is illustrated with an example of a propensity scorecard developed on the basis of credit bureau data.
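A minimal scalar sketch of the filtering idea (hypothetical numbers, far simpler than the paper's bootstrap-based setup): treat one scorecard coefficient as a slowly drifting random-walk state, and each month's re-estimated value as a noisy observation; the Kalman update blends the two in proportion to their variances.

```python
# One predict/update step of a scalar Kalman filter for a scorecard coefficient.
def kalman_update(x, p, z, q=0.01, r=0.25):
    """x, p: prior mean/variance; z: monthly re-estimate;
    q: assumed drift variance; r: assumed estimation variance."""
    p = p + q                    # predict: the coefficient may have drifted
    k = p / (p + r)              # Kalman gain
    x = x + k * (z - x)          # pull the baseline toward the new estimate
    return x, (1 - k) * p

x, p = 1.0, 1.0                  # vague prior on the baseline coefficient
for z in [0.9, 0.85, 0.8]:       # three months of (hypothetical) re-estimates
    x, p = kalman_update(x, p, z)
```

As the monthly estimates keep drifting downward, the filtered coefficient follows them while its variance shrinks, which is the signal the monitoring procedure compares against the baseline scorecard.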

20.
We introduce a new approach to assigning bank account holders to ‘good’ or ‘bad’ classes based on their future behaviour. Traditional methods simply treat the classes as qualitatively distinct, and seek to predict them directly, using statistical techniques such as logistic regression or discriminant analysis based on application data or observations of previous behaviour. We note, however, that the ‘good’ and ‘bad’ classes are defined in terms of variables such as the amount overdrawn at the time at which the classification is required. This permits an alternative, ‘indirect’, form of classification model in which, first, the variables defining the classes are predicted, for example using regression, and then the class membership is derived deterministically from these predicted values. We compare traditional direct methods with these new indirect methods using both real bank data and simulated data. The new methods appear to perform very similarly to the traditional methods, and we discuss why this might be. Finally, we note that the indirect methods also have certain other advantages over the traditional direct methods.
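The 'indirect' idea can be sketched in a few lines (toy data and a single invented predictor, not the paper's models): first regress the class-defining variable, the amount overdrawn, on a predictor, then classify deterministically by thresholding the prediction.

```python
# Indirect classification sketch: predict the class-defining variable first,
# then derive the class from the prediction (all data hypothetical).
def fit_ols(xs, ys):
    """Simple one-variable least squares; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def classify_indirect(x, coef, limit=500.0):
    """'bad' if the predicted amount overdrawn exceeds the limit."""
    a, b = coef
    return 'bad' if a + b * x > limit else 'good'

months_active = [2, 4, 6, 8, 10]            # hypothetical predictor
overdrawn = [900, 700, 500, 300, 100]       # hypothetical amounts overdrawn
coef = fit_ols(months_active, overdrawn)
```

A direct method would fit, say, a logistic regression straight to the good/bad labels; the indirect route keeps the intermediate prediction, which is one of the extra advantages the abstract alludes to.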
