Case Study: Credit Scoring
STATISTICA Solutions for Business Intelligence,
Data Mining, Quality Control, and
Web-based Analytics
Table of Contents
INTRODUCTION: WHAT IS CREDIT SCORING?
CREDIT SCORING: BUSINESS OBJECTIVES
1. Marketing Aspect
2. Application Aspect
3. Performance Aspect
4. Bad Debt Management
CASE STUDY: CONSUMER CREDIT SCORING
DATA ANALYSIS WITH STATISTICA
STATISTICA Data Miner Workspace
Decision Tree - CHAID
Classification Matrix - CHAID Model
COMPARATIVE EVALUATION OF THE MODELS
Classification Matrix - Boosting Trees
DEPLOYING THE MODEL FOR PREDICTION
Introduction: What is Credit Scoring?
In the financial industry, consumers regularly request credit to make purchases. The risk a financial institution takes in extending the requested credit depends on how well it can distinguish good credit applicants from bad ones. One widely adopted technique for solving this problem is "Credit Scoring." Credit scoring is the set of decision models, and their underlying techniques, that aid lenders in granting consumer credit. These techniques decide who will get credit, how much credit they should get, and what operational strategies will enhance the profitability of the borrowers to the lenders.
Further, it helps to assess the risk in lending. Credit scoring is a dependable assessment of a person's creditworthiness because it is based on actual data.
A lender commonly makes two types of decisions: first, whether to grant credit to a new applicant or not, and second, how to deal with existing applicants, including whether to increase their credit limits or not. In both cases, whatever the techniques used, it is critical that there is a large sample of previous customers with their application details, behavioral patterns, and subsequent credit history available.
Most of the techniques use this sample to identify the connection between the characteristics of the consumers (annual income, age, number of years in employment with their current employer, etc.) and how “good” or “bad” their subsequent history is.
Typical application areas in the consumer market include: credit cards, auto loans, home mortgages, home equity loans, mail catalog orders, and a wide variety of personal loan products.
Credit Scoring: Business Objectives
The application of scoring models in today's business environment covers a wide range of objectives.
The original task of estimating the risk of default has been augmented by credit scoring models to include other aspects of credit risk management: at the pre-application stage (identification of potential applicants), at the application stage (identification of acceptable applicants), and at the performance stage (identification of possible behavior of current customers). Scoring models with different objectives have been developed. They can be generalized into four categories as listed below.
1. Marketing Aspect
Purposes
1.1. Identify credit-worthy customers most likely to respond to promotional activity in order to reduce the cost of customer acquisition and minimize customer dissatisfaction.
1.2. Predict the likelihood of losing valuable customers and enable organizations to formulate effective customer retention strategy.
Examples
Response scoring. Scoring models that estimate how likely a consumer is to respond to a direct mailing of a new product.
Retention/attrition scoring. Scoring models that predict how likely a consumer is to keep using the product, or to switch to another lender, after the introductory offer period is over.
2. Application Aspect
Purposes
2.1. Decide whether to extend credit, and how much credit to extend.
2.2. Forecast the future behavior of a new credit applicant by predicting loan-default chances or poor repayment behaviors at the time the credit is granted.
Example
Applicant scoring. Scoring models that estimate how likely a new credit applicant is to default.
3. Performance Aspect
Purpose
3.1. Predict the future payment behavior of existing debtors in order to identify/isolate bad customers to direct more attention and assistance to them, thereby reducing the likelihood that these debtors will later become a problem.
Example
Behavioral scoring. Scoring models that evaluate the risk levels of existing debtors.
4. Bad Debt Management
Purpose
4.1. Select optimal collections policies in order to minimize the cost of administering collections or to maximize the amount recovered from a delinquent's account.
Example
Scoring models for collection decisions. Scoring models that decide when action should be taken on a delinquent's account, and which of several alternative collection techniques might be more appropriate and successful.
Thus, the overall objective of credit scoring is not only to determine whether the applicant is credit worthy, but also to attract quality credit applicants who can subsequently be retained and controlled while maintaining an overall profitable portfolio.
Case Study: Consumer Credit Scoring
Case Description
In the credit business, banks are interested in learning whether a prospective customer will pay back their credit. The goal of this study is to model or predict the probability with which a credit applicant can be categorized as a good or bad customer.
The techniques explained in this case will illustrate how to build a credit-scoring model using STATISTICA Data Miner to identify the inputs or predictors that differentiate “risky” customers from others (based on patterns pertaining to previous customers), identify predictive techniques that perform well on test data, and later deploy those models to predict new risky customers.
Data File
The example data set used in this case, CreditScoring.sta, contains 1,000 cases and 20 variables (or predictors) with information pertaining to past and current customers who borrowed from a German bank (source: http://www.stat.uni-muenchen.de/service/datenarchiv/kredit/kredit_e.html) for various reasons. The data set contains information related to the customers' financial standing, reason for the loan, employment, demographic information, etc. The example data file is found in the STATISTICA example data folder. For each customer, the binary outcome "Creditability" is also available. This variable indicates whether each customer's credit is deemed Good or Bad. The data set has a distribution of 70% creditworthy (good) customers and 30% not creditworthy (bad) customers.
Customers who have missed 90 days of payment can be thought of as bad risks, and customers who have missed no payment can be thought of as good risks. Other typical measures for determining good and bad customers are the amount over the overdraft limit, current account turnover, number of months of missed payments, or a function of these and other variables.
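A labeling rule like the one just described can be sketched as a small Python function. This is a hypothetical helper for illustration only; a bank's actual good/bad definition typically combines several of the measures mentioned above:

```python
def risk_label(days_past_due: int, months_missed: int = 0) -> str:
    """Classify a customer as a 'good' or 'bad' credit risk.

    Follows the simple rule described above: 90+ days of missed
    payment marks a bad risk; no missed payments marks a good risk.
    Everything in between is labeled 'indeterminate' -- such cases
    are often excluded when training a scoring model.
    """
    if days_past_due >= 90:
        return "bad"
    if days_past_due == 0 and months_missed == 0:
        return "good"
    return "indeterminate"
```

For example, `risk_label(120)` returns `"bad"`, while `risk_label(0)` returns `"good"`.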
Following is the complete list of variables used in this data set:
In this example, we will look at how well the variables listed above enable us to discriminate between whether someone has Good or Bad Credit Standing. If we can discriminate between these two groups, we can then use the predictive model to classify or predict new cases where we have the abovementioned information but do not know the person’s credit standing. This would be useful, for example, to decide whether to qualify a person for a loan.
Data Analysis with STATISTICA
Data Preparation
With STATISTICA Data Miner, it is straightforward to apply powerful modeling tools to data and judge the value of resulting models based on their predictive or descriptive value. This does not diminish the role of careful attention to data preparation efforts. Data is the main resource for data mining; therefore it should be prepared properly before applying any data-mining tool. Otherwise, it would be just a case of Garbage-In Garbage-Out (GIGO). Since major strategic decisions are impacted by these results, any error might give rise to huge losses. Thus, it is important to preprocess the data and improve the accuracy of the model so that one can make the best possible decision.
The following aspects of the data were noted during this stage:
• Insight into data: descriptive statistics (distributions, means, minimum and maximum values, quartiles, etc.)
• There are no outliers in the data
• There are no missing values in the data
• No transformations are required
• Feature selection: variables reduced from 20 to 10
Feature Selection
In order to reduce the complexity of the problem, the data set can be transformed into a data set of lower dimension. The Feature Selection and Variable Screening tool available in STATISTICA Data Miner automatically found important predictors that clearly discriminate between good and bad customers.
The bar plot and spreadsheet of the predictor importance give insight into the variables that are related to the prediction of the dependent variable of interest. For example, shown below is the bar plot of predictor importance for the dependent variable “Creditability.” In this case, variables Balance of current account, Payment of previous credits, and Duration in months stand out as the most important predictors.
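STATISTICA's Feature Selection tool ranks predictors by how strongly each one separates the two outcome classes. The same idea can be sketched in plain Python; this is a simplified, hypothetical re-implementation that scores one categorical predictor with a chi-square statistic, not STATISTICA's actual algorithm:

```python
from collections import Counter

def chi_square(pairs):
    """Chi-square statistic between a categorical predictor and a
    binary outcome; a larger value suggests a more important predictor.

    `pairs` is a list of (predictor_value, outcome) tuples, e.g.
    ("no account", "bad").
    """
    n = len(pairs)
    row = Counter(x for x, _ in pairs)   # totals per predictor level
    col = Counter(y for _, y in pairs)   # totals per outcome class
    cell = Counter(pairs)                # observed cell counts
    stat = 0.0
    for x in row:
        for y in col:
            expected = row[x] * col[y] / n
            stat += (cell[(x, y)] - expected) ** 2 / expected
    return stat
```

Ranking every candidate predictor by such a score, and keeping the top ones, mirrors the reduction from 20 variables to 10 described above: a predictor that perfectly separates good from bad customers scores high, while an uninformative one scores near zero.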
These predictors will be further examined using a wide array of data mining and machine learning
algorithms available in STATISTICA Data Miner such as:
• Standard Classification Trees with Deployment
• Standard Classification CHAID with Deployment
• Boosting Classification Trees with Deployment
• STATISTICA Automated Neural Networks with Deployment
• Support Vector Machine with Deployment (Classification)
• MARSplines for Classification with Deployment
The novelty and abundance of available techniques and algorithms involved in the modeling phase make this the most interesting part of the data mining process. Classification methods are the most commonly used data mining techniques applied in the domain of credit scoring to predict the risk level of credit takers. Moreover, it is good practice to experiment with a number of different methods when modeling or mining data. Different techniques may shed new light on a problem or confirm previous conclusions.
STATISTICA Data Miner is a comprehensive and user-friendly set of complete data mining tools designed to enable users to more easily and quickly analyze their data to uncover hidden trends, explain known patterns, and predict the future. From querying databases and drilling down, to generating final reports and graphs, it offers ease of use without sacrificing power or comprehensiveness. Moreover, STATISTICA Data Miner features the largest selection of algorithms on the market for classification, prediction, clustering, and modeling as well as an intuitive icon-based interface. It offers simple techniques such as C&RT and CHAID to more advanced techniques such as Neural Networks, Boosted Trees, Random Forests, Support Vector Machines, MARSplines, etc.
The following steps summarize the data preparation and analysis flow:
1. Split the original data set into two subsets: 66% of cases were used for model building and 34% of cases were retained for testing.
2. Used Stratified Random Sampling method to extract equal numbers of observations for both good and bad risk customers.
3. Used Feature Selection tool to rank the best predictor variables for distinguishing good and bad customers.
4. Reduced the number of possible predictors from 20 to 10 based on the results of Feature Selection.
5. Used different advanced predictive models (machine learning algorithms) to detect and understand relationships among variables.
6. Used comparative tools such as Lift Charts, Gains Charts, Cross tabulation, etc., to find the best model for prediction purposes.
7. Applied the model to the Test Set (hold-out sample) to validate prediction accuracy.
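The sampling portion of the steps above (steps 1 and 2) can be sketched in plain Python. This is an illustrative stand-in for STATISTICA's workspace nodes, not their actual implementation; `stratified_split` and `label_of` are hypothetical names:

```python
import random

def stratified_split(cases, label_of, test_frac=0.34, seed=0):
    """Split cases into train/test subsets while preserving the class mix.

    Each class ('good', 'bad') is shuffled and split separately, so
    both subsets keep roughly the original good/bad proportions --
    the essence of stratified random sampling.
    """
    rng = random.Random(seed)
    by_class = {}
    for case in cases:
        by_class.setdefault(label_of(case), []).append(case)
    train, test = [], []
    for members in by_class.values():
        members = members[:]          # copy before shuffling
        rng.shuffle(members)
        cut = int(len(members) * test_frac)
        test.extend(members[:cut])
        train.extend(members[cut:])
    return train, test
```

With the 700 good / 300 bad cases of this data set, the hold-out sample keeps approximately the same 70/30 mix as the original data, so test-set accuracy is not distorted by an accidental class imbalance.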
Analyzing Results
Next, we will review the analysis results to better understand the characteristics of bad and good customers. Let's first start with the CHAID decision tree results.
Decision Tree - CHAID
Decision trees are powerful and popular tools for classification and prediction. The fact that decision trees can readily be summarized graphically makes them particularly easy to interpret.
Note that the results you will see on your computer may vary, because different training and testing samples are created every time you update the project, at which point the input data are split anew. In general, however, the results should be similar with respect to the major split variables and types of splits depicted in the tree.
Looking at the tree shown here, you can see that the CHAID algorithm created a tree with 6 terminal nodes (highlighted in red), resulting from 4 if-then conditions to predict good/bad customers. Terminal nodes (or terminal leaves as they are sometimes called) are those where no further splits could be applied to further improve the predictive accuracy of the solution (given the current parameters that were selected to guide the tree-building process). The tree starts with the top decision node (also called the root node) with 411 cases in the training data set with approximately equal proportions of customers from both “good” and “bad” categories obtained by using the Stratified Random Sampling tool. The legend identifying which bars in the node histograms correspond to the two categories is located in the upper-left corner of the graph.
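Once a tree like this is fitted, deployment amounts to walking its if-then splits until a terminal node is reached. The sketch below is illustrative only: the split variables are the top predictors named earlier (balance of current account, payment of previous credits, duration in months), but the thresholds and outcomes are hypothetical, not the fitted CHAID model:

```python
def score_applicant(applicant):
    """Walk illustrative if-then splits to reach a terminal node.

    `applicant` is a dict of predictor values; the return value is
    the predicted class at the terminal leaf. All thresholds below
    are made up for illustration.
    """
    # Split 1: balance of the current account
    if applicant["balance_of_current_account"] == "no account":
        return "bad"
    # Split 2: payment record on previous credits
    if applicant["payment_of_previous_credits"] == "problems":
        return "bad"
    # Split 3: loan duration
    if applicant["duration_in_months"] > 36:
        return "bad"
    return "good"
```

Because a fitted tree reduces to a handful of such conditions, it can be deployed as a simple rule set in a loan-origination system and applied to applicants whose credit standing is not yet known.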