Project: Identify Customer Segments

In this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.

This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks. The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it.

It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. There will be times in the project where you will need to make and justify your own decisions on how to treat the data. These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.

At the end of most sections, there will be a Markdown cell labeled Discussion. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.

In [14]:
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time
import pprint
import operator
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA    
from sklearn.preprocessing import LabelEncoder

# magic word for producing visualizations in notebook
%matplotlib inline

Step 0: Load the Data

There are four files associated with this project (not including this one):

  • Udacity_AZDIAS_Subset.csv: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).
  • Udacity_CUSTOMERS_Subset.csv: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).
  • Data_Dictionary.md: Detailed information file about the features in the provided datasets.
  • AZDIAS_Feature_Summary.csv: Summary of feature attributes for demographics data; 85 features (rows) x 4 columns

Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.

To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the .csv data files in this project: they're semicolon (;) delimited, so you'll need an additional argument in your read_csv() call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.

Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.

In [153]:
# Load in the general demographics data.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv', delimiter=';')
In [233]:
# Load in the feature summary file.
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', delimiter=';')
In [155]:
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
num_rows, num_cols  = azdias.shape
print('Number of columns: {}'.format(num_cols))
print('Number of rows: {}'.format(num_rows))
azdias.head(5)
Number of columns: 85
Number of rows: 891221
Out[155]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
0 -1 2 1 2.0 3 4 3 5 5 3 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 -1 1 2 5.0 1 5 2 5 4 5 ... 2.0 3.0 2.0 1.0 1.0 5.0 4.0 3.0 5.0 4.0
2 -1 3 2 3.0 1 4 1 2 3 5 ... 3.0 3.0 1.0 0.0 1.0 4.0 4.0 3.0 5.0 2.0
3 2 4 2 2.0 4 2 5 2 1 2 ... 2.0 2.0 2.0 0.0 1.0 3.0 4.0 2.0 3.0 3.0
4 -1 3 1 5.0 4 3 4 1 3 2 ... 2.0 4.0 2.0 1.0 2.0 3.0 3.0 4.0 6.0 5.0

5 rows × 85 columns

In [156]:
azdias.describe()
Out[156]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
count 891221.000000 891221.000000 891221.000000 886367.000000 891221.000000 891221.000000 891221.000000 891221.000000 891221.000000 891221.000000 ... 774706.000000 774706.000000 774706.000000 774706.000000 774706.000000 774706.000000 774706.000000 794005.000000 794005.000000 794005.00000
mean -0.358435 2.777398 1.522098 3.632838 3.074528 2.821039 3.401106 3.033328 2.874167 3.075121 ... 2.253330 2.801858 1.595426 0.699166 1.943913 3.612821 3.381087 3.167854 5.293002 3.07222
std 1.198724 1.068775 0.499512 1.595021 1.321055 1.464749 1.322134 1.529603 1.486731 1.353248 ... 0.972008 0.920309 0.986736 0.727137 1.459654 0.973967 1.111598 1.002376 2.303739 1.36298
min -1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 ... 0.000000 0.000000 0.000000 0.000000 1.000000 1.000000 1.000000 1.000000 0.000000 1.00000
25% -1.000000 2.000000 1.000000 2.000000 2.000000 1.000000 3.000000 2.000000 2.000000 2.000000 ... 1.000000 2.000000 1.000000 0.000000 1.000000 3.000000 3.000000 3.000000 4.000000 2.00000
50% -1.000000 3.000000 2.000000 4.000000 3.000000 3.000000 3.000000 3.000000 3.000000 3.000000 ... 2.000000 3.000000 2.000000 1.000000 1.000000 4.000000 3.000000 3.000000 5.000000 3.00000
75% -1.000000 4.000000 2.000000 5.000000 4.000000 4.000000 5.000000 5.000000 4.000000 4.000000 ... 3.000000 3.000000 2.000000 1.000000 3.000000 4.000000 4.000000 4.000000 7.000000 4.00000
max 3.000000 9.000000 2.000000 6.000000 5.000000 5.000000 5.000000 5.000000 5.000000 5.000000 ... 4.000000 4.000000 3.000000 2.000000 5.000000 5.000000 5.000000 9.000000 9.000000 9.00000

8 rows × 81 columns

In [157]:
azdias.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891221 entries, 0 to 891220
Data columns (total 85 columns):
AGER_TYP                 891221 non-null int64
ALTERSKATEGORIE_GROB     891221 non-null int64
ANREDE_KZ                891221 non-null int64
CJT_GESAMTTYP            886367 non-null float64
FINANZ_MINIMALIST        891221 non-null int64
FINANZ_SPARER            891221 non-null int64
FINANZ_VORSORGER         891221 non-null int64
FINANZ_ANLEGER           891221 non-null int64
FINANZ_UNAUFFAELLIGER    891221 non-null int64
FINANZ_HAUSBAUER         891221 non-null int64
FINANZTYP                891221 non-null int64
GEBURTSJAHR              891221 non-null int64
GFK_URLAUBERTYP          886367 non-null float64
GREEN_AVANTGARDE         891221 non-null int64
HEALTH_TYP               891221 non-null int64
LP_LEBENSPHASE_FEIN      886367 non-null float64
LP_LEBENSPHASE_GROB      886367 non-null float64
LP_FAMILIE_FEIN          886367 non-null float64
LP_FAMILIE_GROB          886367 non-null float64
LP_STATUS_FEIN           886367 non-null float64
LP_STATUS_GROB           886367 non-null float64
NATIONALITAET_KZ         891221 non-null int64
PRAEGENDE_JUGENDJAHRE    891221 non-null int64
RETOURTYP_BK_S           886367 non-null float64
SEMIO_SOZ                891221 non-null int64
SEMIO_FAM                891221 non-null int64
SEMIO_REL                891221 non-null int64
SEMIO_MAT                891221 non-null int64
SEMIO_VERT               891221 non-null int64
SEMIO_LUST               891221 non-null int64
SEMIO_ERL                891221 non-null int64
SEMIO_KULT               891221 non-null int64
SEMIO_RAT                891221 non-null int64
SEMIO_KRIT               891221 non-null int64
SEMIO_DOM                891221 non-null int64
SEMIO_KAEM               891221 non-null int64
SEMIO_PFLICHT            891221 non-null int64
SEMIO_TRADV              891221 non-null int64
SHOPPER_TYP              891221 non-null int64
SOHO_KZ                  817722 non-null float64
TITEL_KZ                 817722 non-null float64
VERS_TYP                 891221 non-null int64
ZABEOTYP                 891221 non-null int64
ALTER_HH                 817722 non-null float64
ANZ_PERSONEN             817722 non-null float64
ANZ_TITEL                817722 non-null float64
HH_EINKOMMEN_SCORE       872873 non-null float64
KK_KUNDENTYP             306609 non-null float64
W_KEIT_KIND_HH           783619 non-null float64
WOHNDAUER_2008           817722 non-null float64
ANZ_HAUSHALTE_AKTIV      798073 non-null float64
ANZ_HH_TITEL             794213 non-null float64
GEBAEUDETYP              798073 non-null float64
KONSUMNAEHE              817252 non-null float64
MIN_GEBAEUDEJAHR         798073 non-null float64
OST_WEST_KZ              798073 non-null object
WOHNLAGE                 798073 non-null float64
CAMEO_DEUG_2015          792242 non-null object
CAMEO_DEU_2015           792242 non-null object
CAMEO_INTL_2015          792242 non-null object
KBA05_ANTG1              757897 non-null float64
KBA05_ANTG2              757897 non-null float64
KBA05_ANTG3              757897 non-null float64
KBA05_ANTG4              757897 non-null float64
KBA05_BAUMAX             757897 non-null float64
KBA05_GBZ                757897 non-null float64
BALLRAUM                 797481 non-null float64
EWDICHTE                 797481 non-null float64
INNENSTADT               797481 non-null float64
GEBAEUDETYP_RASTER       798066 non-null float64
KKK                      770025 non-null float64
MOBI_REGIO               757897 non-null float64
ONLINE_AFFINITAET        886367 non-null float64
REGIOTYP                 770025 non-null float64
KBA13_ANZAHL_PKW         785421 non-null float64
PLZ8_ANTG1               774706 non-null float64
PLZ8_ANTG2               774706 non-null float64
PLZ8_ANTG3               774706 non-null float64
PLZ8_ANTG4               774706 non-null float64
PLZ8_BAUMAX              774706 non-null float64
PLZ8_HHZ                 774706 non-null float64
PLZ8_GBZ                 774706 non-null float64
ARBEIT                   794005 non-null float64
ORTSGR_KLS9              794005 non-null float64
RELAT_AB                 794005 non-null float64
dtypes: float64(49), int64(32), object(4)
memory usage: 578.0+ MB
In [158]:
azdias.dtypes
Out[158]:
AGER_TYP                   int64
ALTERSKATEGORIE_GROB       int64
ANREDE_KZ                  int64
CJT_GESAMTTYP            float64
FINANZ_MINIMALIST          int64
FINANZ_SPARER              int64
FINANZ_VORSORGER           int64
FINANZ_ANLEGER             int64
FINANZ_UNAUFFAELLIGER      int64
FINANZ_HAUSBAUER           int64
FINANZTYP                  int64
GEBURTSJAHR                int64
GFK_URLAUBERTYP          float64
GREEN_AVANTGARDE           int64
HEALTH_TYP                 int64
LP_LEBENSPHASE_FEIN      float64
LP_LEBENSPHASE_GROB      float64
LP_FAMILIE_FEIN          float64
LP_FAMILIE_GROB          float64
LP_STATUS_FEIN           float64
LP_STATUS_GROB           float64
NATIONALITAET_KZ           int64
PRAEGENDE_JUGENDJAHRE      int64
RETOURTYP_BK_S           float64
SEMIO_SOZ                  int64
SEMIO_FAM                  int64
SEMIO_REL                  int64
SEMIO_MAT                  int64
SEMIO_VERT                 int64
SEMIO_LUST                 int64
                          ...   
OST_WEST_KZ               object
WOHNLAGE                 float64
CAMEO_DEUG_2015           object
CAMEO_DEU_2015            object
CAMEO_INTL_2015           object
KBA05_ANTG1              float64
KBA05_ANTG2              float64
KBA05_ANTG3              float64
KBA05_ANTG4              float64
KBA05_BAUMAX             float64
KBA05_GBZ                float64
BALLRAUM                 float64
EWDICHTE                 float64
INNENSTADT               float64
GEBAEUDETYP_RASTER       float64
KKK                      float64
MOBI_REGIO               float64
ONLINE_AFFINITAET        float64
REGIOTYP                 float64
KBA13_ANZAHL_PKW         float64
PLZ8_ANTG1               float64
PLZ8_ANTG2               float64
PLZ8_ANTG3               float64
PLZ8_ANTG4               float64
PLZ8_BAUMAX              float64
PLZ8_HHZ                 float64
PLZ8_GBZ                 float64
ARBEIT                   float64
ORTSGR_KLS9              float64
RELAT_AB                 float64
Length: 85, dtype: object

Tip: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut esc --> a (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and esc --> b adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use esc --> m and to convert to a code cell, use esc --> y.

Step 1: Preprocessing

Step 1.1: Assess Missing Data

The feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the Discussion cell with your findings and decisions at the end of each step that has one!

Step 1.1.1: Convert Missing Value Codes to NaNs

The fourth column of the feature attributes summary (loaded in above as feat_info) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. [-1,0]), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.

As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.

In [ ]:
 
In [159]:
col_names = azdias.columns
col_names
Out[159]:
Index(['AGER_TYP', 'ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'CJT_GESAMTTYP',
       'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER',
       'FINANZ_ANLEGER', 'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER',
       'FINANZTYP', 'GEBURTSJAHR', 'GFK_URLAUBERTYP', 'GREEN_AVANTGARDE',
       'HEALTH_TYP', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB',
       'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_STATUS_FEIN',
       'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'PRAEGENDE_JUGENDJAHRE',
       'RETOURTYP_BK_S', 'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL', 'SEMIO_MAT',
       'SEMIO_VERT', 'SEMIO_LUST', 'SEMIO_ERL', 'SEMIO_KULT', 'SEMIO_RAT',
       'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM', 'SEMIO_PFLICHT', 'SEMIO_TRADV',
       'SHOPPER_TYP', 'SOHO_KZ', 'TITEL_KZ', 'VERS_TYP', 'ZABEOTYP',
       'ALTER_HH', 'ANZ_PERSONEN', 'ANZ_TITEL', 'HH_EINKOMMEN_SCORE',
       'KK_KUNDENTYP', 'W_KEIT_KIND_HH', 'WOHNDAUER_2008',
       'ANZ_HAUSHALTE_AKTIV', 'ANZ_HH_TITEL', 'GEBAEUDETYP', 'KONSUMNAEHE',
       'MIN_GEBAEUDEJAHR', 'OST_WEST_KZ', 'WOHNLAGE', 'CAMEO_DEUG_2015',
       'CAMEO_DEU_2015', 'CAMEO_INTL_2015', 'KBA05_ANTG1', 'KBA05_ANTG2',
       'KBA05_ANTG3', 'KBA05_ANTG4', 'KBA05_BAUMAX', 'KBA05_GBZ', 'BALLRAUM',
       'EWDICHTE', 'INNENSTADT', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO',
       'ONLINE_AFFINITAET', 'REGIOTYP', 'KBA13_ANZAHL_PKW', 'PLZ8_ANTG1',
       'PLZ8_ANTG2', 'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_BAUMAX', 'PLZ8_HHZ',
       'PLZ8_GBZ', 'ARBEIT', 'ORTSGR_KLS9', 'RELAT_AB'],
      dtype='object')
In [ ]:
 
In [160]:
# turn missing_or_unknown to list 
feat_info['missing_or_unknown'] = feat_info['missing_or_unknown'].apply(lambda x: x[1:-1].split(','))

start_time = time.time()

# Identify missing or unknown data values and convert them to NaNs.
for attrib, missing_values in zip(feat_info['attribute'], feat_info['missing_or_unknown']):
    if missing_values[0] != '':
        for value in missing_values:
            if value.isnumeric() or value.lstrip('-').isnumeric():
                value = int(value)
            azdias.loc[azdias[attrib] == value, attrib] = np.nan
            
print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 0.82 mins ---
In [161]:
azdias.head()
Out[161]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
0 NaN 2.0 1.0 2.0 3.0 4.0 3.0 5.0 5.0 3.0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN 1.0 2.0 5.0 1.0 5.0 2.0 5.0 4.0 5.0 ... 2.0 3.0 2.0 1.0 1.0 5.0 4.0 3.0 5.0 4.0
2 NaN 3.0 2.0 3.0 1.0 4.0 1.0 2.0 3.0 5.0 ... 3.0 3.0 1.0 0.0 1.0 4.0 4.0 3.0 5.0 2.0
3 2.0 4.0 2.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 ... 2.0 2.0 2.0 0.0 1.0 3.0 4.0 2.0 3.0 3.0
4 NaN 3.0 1.0 5.0 4.0 3.0 4.0 1.0 3.0 2.0 ... 2.0 4.0 2.0 1.0 2.0 3.0 3.0 4.0 6.0 5.0

5 rows × 85 columns

In [162]:
azdias.to_csv('azdias_parsed.csv', sep=';', index = False)

LOAD NEW DATA SET

In [205]:
#load the post parsed data set and perform similar exploratory functions
azdiasPP = pd.read_csv('azdias_parsed.csv', delimiter=';')
In [164]:
#As you can see, all of our missing or unknown codes have been transformed to NaNs
azdiasPP.head()
Out[164]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
0 NaN 2.0 1.0 2.0 3.0 4.0 3.0 5.0 5.0 3.0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN 1.0 2.0 5.0 1.0 5.0 2.0 5.0 4.0 5.0 ... 2.0 3.0 2.0 1.0 1.0 5.0 4.0 3.0 5.0 4.0
2 NaN 3.0 2.0 3.0 1.0 4.0 1.0 2.0 3.0 5.0 ... 3.0 3.0 1.0 0.0 1.0 4.0 4.0 3.0 5.0 2.0
3 2.0 4.0 2.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 ... 2.0 2.0 2.0 0.0 1.0 3.0 4.0 2.0 3.0 3.0
4 NaN 3.0 1.0 5.0 4.0 3.0 4.0 1.0 3.0 2.0 ... 2.0 4.0 2.0 1.0 2.0 3.0 3.0 4.0 6.0 5.0

5 rows × 85 columns

In [165]:
azdiasPP.describe()
Out[165]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
count 205378.000000 888340.000000 891221.000000 886367.000000 891221.000000 891221.000000 891221.000000 891221.000000 891221.000000 891221.000000 ... 774706.000000 774706.000000 774706.000000 774706.000000 774706.000000 774706.000000 774706.000000 793846.000000 793947.000000 793846.000000
mean 1.743410 2.757217 1.522098 3.632838 3.074528 2.821039 3.401106 3.033328 2.874167 3.075121 ... 2.253330 2.801858 1.595426 0.699166 1.943913 3.612821 3.381087 3.166686 5.293389 3.071033
std 0.674312 1.009951 0.499512 1.595021 1.321055 1.464749 1.322134 1.529603 1.486731 1.353248 ... 0.972008 0.920309 0.986736 0.727137 1.459654 0.973967 1.111598 0.999072 2.303379 1.360532
min 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 ... 0.000000 0.000000 0.000000 0.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
25% 1.000000 2.000000 1.000000 2.000000 2.000000 1.000000 3.000000 2.000000 2.000000 2.000000 ... 1.000000 2.000000 1.000000 0.000000 1.000000 3.000000 3.000000 3.000000 4.000000 2.000000
50% 2.000000 3.000000 2.000000 4.000000 3.000000 3.000000 3.000000 3.000000 3.000000 3.000000 ... 2.000000 3.000000 2.000000 1.000000 1.000000 4.000000 3.000000 3.000000 5.000000 3.000000
75% 2.000000 4.000000 2.000000 5.000000 4.000000 4.000000 5.000000 5.000000 4.000000 4.000000 ... 3.000000 3.000000 2.000000 1.000000 3.000000 4.000000 4.000000 4.000000 7.000000 4.000000
max 3.000000 4.000000 2.000000 6.000000 5.000000 5.000000 5.000000 5.000000 5.000000 5.000000 ... 4.000000 4.000000 3.000000 2.000000 5.000000 5.000000 5.000000 5.000000 9.000000 5.000000

8 rows × 83 columns

In [166]:
azdiasPP.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891221 entries, 0 to 891220
Data columns (total 85 columns):
AGER_TYP                 205378 non-null float64
ALTERSKATEGORIE_GROB     888340 non-null float64
ANREDE_KZ                891221 non-null float64
CJT_GESAMTTYP            886367 non-null float64
FINANZ_MINIMALIST        891221 non-null float64
FINANZ_SPARER            891221 non-null float64
FINANZ_VORSORGER         891221 non-null float64
FINANZ_ANLEGER           891221 non-null float64
FINANZ_UNAUFFAELLIGER    891221 non-null float64
FINANZ_HAUSBAUER         891221 non-null float64
FINANZTYP                891221 non-null float64
GEBURTSJAHR              498903 non-null float64
GFK_URLAUBERTYP          886367 non-null float64
GREEN_AVANTGARDE         891221 non-null int64
HEALTH_TYP               780025 non-null float64
LP_LEBENSPHASE_FEIN      793589 non-null float64
LP_LEBENSPHASE_GROB      796649 non-null float64
LP_FAMILIE_FEIN          813429 non-null float64
LP_FAMILIE_GROB          813429 non-null float64
LP_STATUS_FEIN           886367 non-null float64
LP_STATUS_GROB           886367 non-null float64
NATIONALITAET_KZ         782906 non-null float64
PRAEGENDE_JUGENDJAHRE    783057 non-null float64
RETOURTYP_BK_S           886367 non-null float64
SEMIO_SOZ                891221 non-null float64
SEMIO_FAM                891221 non-null float64
SEMIO_REL                891221 non-null float64
SEMIO_MAT                891221 non-null float64
SEMIO_VERT               891221 non-null float64
SEMIO_LUST               891221 non-null float64
SEMIO_ERL                891221 non-null float64
SEMIO_KULT               891221 non-null float64
SEMIO_RAT                891221 non-null float64
SEMIO_KRIT               891221 non-null float64
SEMIO_DOM                891221 non-null float64
SEMIO_KAEM               891221 non-null float64
SEMIO_PFLICHT            891221 non-null float64
SEMIO_TRADV              891221 non-null float64
SHOPPER_TYP              780025 non-null float64
SOHO_KZ                  817722 non-null float64
TITEL_KZ                 2160 non-null float64
VERS_TYP                 780025 non-null float64
ZABEOTYP                 891221 non-null float64
ALTER_HH                 580954 non-null float64
ANZ_PERSONEN             817722 non-null float64
ANZ_TITEL                817722 non-null float64
HH_EINKOMMEN_SCORE       872873 non-null float64
KK_KUNDENTYP             306609 non-null float64
W_KEIT_KIND_HH           743233 non-null float64
WOHNDAUER_2008           817722 non-null float64
ANZ_HAUSHALTE_AKTIV      791610 non-null float64
ANZ_HH_TITEL             794213 non-null float64
GEBAEUDETYP              798073 non-null float64
KONSUMNAEHE              817252 non-null float64
MIN_GEBAEUDEJAHR         798073 non-null float64
OST_WEST_KZ              798073 non-null object
WOHNLAGE                 798073 non-null float64
CAMEO_DEUG_2015          791869 non-null float64
CAMEO_DEU_2015           791869 non-null object
CAMEO_INTL_2015          791869 non-null float64
KBA05_ANTG1              757897 non-null float64
KBA05_ANTG2              757897 non-null float64
KBA05_ANTG3              757897 non-null float64
KBA05_ANTG4              757897 non-null float64
KBA05_BAUMAX             414697 non-null float64
KBA05_GBZ                757897 non-null float64
BALLRAUM                 797481 non-null float64
EWDICHTE                 797481 non-null float64
INNENSTADT               797481 non-null float64
GEBAEUDETYP_RASTER       798066 non-null float64
KKK                      733157 non-null float64
MOBI_REGIO               757897 non-null float64
ONLINE_AFFINITAET        886367 non-null float64
REGIOTYP                 733157 non-null float64
KBA13_ANZAHL_PKW         785421 non-null float64
PLZ8_ANTG1               774706 non-null float64
PLZ8_ANTG2               774706 non-null float64
PLZ8_ANTG3               774706 non-null float64
PLZ8_ANTG4               774706 non-null float64
PLZ8_BAUMAX              774706 non-null float64
PLZ8_HHZ                 774706 non-null float64
PLZ8_GBZ                 774706 non-null float64
ARBEIT                   793846 non-null float64
ORTSGR_KLS9              793947 non-null float64
RELAT_AB                 793846 non-null float64
dtypes: float64(82), int64(1), object(2)
memory usage: 578.0+ MB
In [167]:
azdiasPP.isnull().sum()
Out[167]:
AGER_TYP                 685843
ALTERSKATEGORIE_GROB       2881
ANREDE_KZ                     0
CJT_GESAMTTYP              4854
FINANZ_MINIMALIST             0
FINANZ_SPARER                 0
FINANZ_VORSORGER              0
FINANZ_ANLEGER                0
FINANZ_UNAUFFAELLIGER         0
FINANZ_HAUSBAUER              0
FINANZTYP                     0
GEBURTSJAHR              392318
GFK_URLAUBERTYP            4854
GREEN_AVANTGARDE              0
HEALTH_TYP               111196
LP_LEBENSPHASE_FEIN       97632
LP_LEBENSPHASE_GROB       94572
LP_FAMILIE_FEIN           77792
LP_FAMILIE_GROB           77792
LP_STATUS_FEIN             4854
LP_STATUS_GROB             4854
NATIONALITAET_KZ         108315
PRAEGENDE_JUGENDJAHRE    108164
RETOURTYP_BK_S             4854
SEMIO_SOZ                     0
SEMIO_FAM                     0
SEMIO_REL                     0
SEMIO_MAT                     0
SEMIO_VERT                    0
SEMIO_LUST                    0
                          ...  
OST_WEST_KZ               93148
WOHNLAGE                  93148
CAMEO_DEUG_2015           99352
CAMEO_DEU_2015            99352
CAMEO_INTL_2015           99352
KBA05_ANTG1              133324
KBA05_ANTG2              133324
KBA05_ANTG3              133324
KBA05_ANTG4              133324
KBA05_BAUMAX             476524
KBA05_GBZ                133324
BALLRAUM                  93740
EWDICHTE                  93740
INNENSTADT                93740
GEBAEUDETYP_RASTER        93155
KKK                      158064
MOBI_REGIO               133324
ONLINE_AFFINITAET          4854
REGIOTYP                 158064
KBA13_ANZAHL_PKW         105800
PLZ8_ANTG1               116515
PLZ8_ANTG2               116515
PLZ8_ANTG3               116515
PLZ8_ANTG4               116515
PLZ8_BAUMAX              116515
PLZ8_HHZ                 116515
PLZ8_GBZ                 116515
ARBEIT                    97375
ORTSGR_KLS9               97274
RELAT_AB                  97375
Length: 85, dtype: int64
In [168]:
azdiasPP.isnull().sum().sum()
Out[168]:
8373929

Step 1.1.2: Assess Missing Data in Each Column

How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's hist() function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)

For the remaining features, are there any patterns in which columns have, or share, missing data?

In [206]:
# Perform an assessment of how much missing data there is in each column of the
# dataset.
missing_data = azdiasPP.isnull().sum()
#get the percentage
missing_data = missing_data[missing_data > 0]/(azdiasPP.shape[0]) * 100
#sort the values
missing_data.sort_values(inplace=True)
In [207]:
#plot the data as specified/sorted above

plt.hist(missing_data, bins = 25)

plt.xlabel('Percentage of missing data(%)')
plt.ylabel('Counts')
plt.title('Histogram of number of missing data')
plt.grid(True)
plt.show()

Discussion:

In the histogram above, we can see that there are several columns with more than a quarter of the values missing. I want to have an understanding of the total percentage of missing values per column before I make a decision about removing columns from this analysis.

In [208]:
# Investigate patterns in the amount of missing data in each column.
missing_data.plot.barh(figsize=(15,30))
plt.xlabel('Column name with missing values')
plt.ylabel('Percentage of missing values')

plt.show()
In [209]:
#just to see the numbers as portrayed above in the plot
missing_data.sort_values(ascending=False)
Out[209]:
TITEL_KZ                 99.757636
AGER_TYP                 76.955435
KK_KUNDENTYP             65.596749
KBA05_BAUMAX             53.468668
GEBURTSJAHR              44.020282
ALTER_HH                 34.813699
REGIOTYP                 17.735668
KKK                      17.735668
W_KEIT_KIND_HH           16.605084
KBA05_ANTG1              14.959701
KBA05_ANTG3              14.959701
KBA05_ANTG2              14.959701
KBA05_ANTG4              14.959701
MOBI_REGIO               14.959701
KBA05_GBZ                14.959701
PLZ8_ANTG1               13.073637
PLZ8_ANTG2               13.073637
PLZ8_ANTG3               13.073637
PLZ8_GBZ                 13.073637
PLZ8_ANTG4               13.073637
PLZ8_BAUMAX              13.073637
PLZ8_HHZ                 13.073637
SHOPPER_TYP              12.476816
VERS_TYP                 12.476816
HEALTH_TYP               12.476816
NATIONALITAET_KZ         12.153551
PRAEGENDE_JUGENDJAHRE    12.136608
KBA13_ANZAHL_PKW         11.871354
ANZ_HAUSHALTE_AKTIV      11.176913
CAMEO_DEUG_2015          11.147852
                           ...    
CAMEO_INTL_2015          11.147852
LP_LEBENSPHASE_FEIN      10.954859
RELAT_AB                 10.926022
ARBEIT                   10.926022
ORTSGR_KLS9              10.914689
ANZ_HH_TITEL             10.884842
LP_LEBENSPHASE_GROB      10.611509
INNENSTADT               10.518154
BALLRAUM                 10.518154
EWDICHTE                 10.518154
GEBAEUDETYP_RASTER       10.452514
MIN_GEBAEUDEJAHR         10.451729
OST_WEST_KZ              10.451729
WOHNLAGE                 10.451729
GEBAEUDETYP              10.451729
LP_FAMILIE_GROB           8.728699
LP_FAMILIE_FEIN           8.728699
KONSUMNAEHE               8.299737
WOHNDAUER_2008            8.247000
ANZ_TITEL                 8.247000
SOHO_KZ                   8.247000
ANZ_PERSONEN              8.247000
HH_EINKOMMEN_SCORE        2.058749
LP_STATUS_GROB            0.544646
LP_STATUS_FEIN            0.544646
RETOURTYP_BK_S            0.544646
ONLINE_AFFINITAET         0.544646
GFK_URLAUBERTYP           0.544646
CJT_GESAMTTYP             0.544646
ALTERSKATEGORIE_GROB      0.323264
Length: 61, dtype: float64

Discussion

It appears that there is a stark difference between the columns with more than 25% missing data and the rest of the data set. because of this, I would like remove them so as not to skew the results too much. But first, let's see what those columns are.

In [210]:
# find columns with more that 25% of data missing
missing_25P = [col for col in azdiasPP.columns 
               if (azdiasPP[col].isnull().sum()/azdiasPP.shape[0])
               * 100 > 25]

print(missing_25P)
['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX']
In [211]:
missing_data.sort_values(ascending=False)
Out[211]:
TITEL_KZ                 99.757636
AGER_TYP                 76.955435
KK_KUNDENTYP             65.596749
KBA05_BAUMAX             53.468668
GEBURTSJAHR              44.020282
ALTER_HH                 34.813699
REGIOTYP                 17.735668
KKK                      17.735668
W_KEIT_KIND_HH           16.605084
KBA05_ANTG1              14.959701
KBA05_ANTG3              14.959701
KBA05_ANTG2              14.959701
KBA05_ANTG4              14.959701
MOBI_REGIO               14.959701
KBA05_GBZ                14.959701
PLZ8_ANTG1               13.073637
PLZ8_ANTG2               13.073637
PLZ8_ANTG3               13.073637
PLZ8_GBZ                 13.073637
PLZ8_ANTG4               13.073637
PLZ8_BAUMAX              13.073637
PLZ8_HHZ                 13.073637
SHOPPER_TYP              12.476816
VERS_TYP                 12.476816
HEALTH_TYP               12.476816
NATIONALITAET_KZ         12.153551
PRAEGENDE_JUGENDJAHRE    12.136608
KBA13_ANZAHL_PKW         11.871354
ANZ_HAUSHALTE_AKTIV      11.176913
CAMEO_DEUG_2015          11.147852
                           ...    
CAMEO_INTL_2015          11.147852
LP_LEBENSPHASE_FEIN      10.954859
RELAT_AB                 10.926022
ARBEIT                   10.926022
ORTSGR_KLS9              10.914689
ANZ_HH_TITEL             10.884842
LP_LEBENSPHASE_GROB      10.611509
INNENSTADT               10.518154
BALLRAUM                 10.518154
EWDICHTE                 10.518154
GEBAEUDETYP_RASTER       10.452514
MIN_GEBAEUDEJAHR         10.451729
OST_WEST_KZ              10.451729
WOHNLAGE                 10.451729
GEBAEUDETYP              10.451729
LP_FAMILIE_GROB           8.728699
LP_FAMILIE_FEIN           8.728699
KONSUMNAEHE               8.299737
WOHNDAUER_2008            8.247000
ANZ_TITEL                 8.247000
SOHO_KZ                   8.247000
ANZ_PERSONEN              8.247000
HH_EINKOMMEN_SCORE        2.058749
LP_STATUS_GROB            0.544646
LP_STATUS_FEIN            0.544646
RETOURTYP_BK_S            0.544646
ONLINE_AFFINITAET         0.544646
GFK_URLAUBERTYP           0.544646
CJT_GESAMTTYP             0.544646
ALTERSKATEGORIE_GROB      0.323264
Length: 61, dtype: float64

Discussion

Looking at the columns with more than 25% of their data missing, I don't think that our analysis would suffer much without those values. I will choose to drop them.

In [212]:
# Remove the outlier columns from the dataset. (You'll perform other data
# engineering tasks such as re-encoding and imputation later.)
for col in missing_25P:
    azdiasPP.drop(col, axis=1, inplace=True)
In [213]:
#check to make sure that our code was successful
missing_data_check = azdiasPP.isnull().sum()
#get the percentage
missing_data_check = missing_data_check[missing_data_check > 0]/(azdiasPP.shape[0]) * 100
#sort the values
missing_data_check.sort_values(inplace=True)
In [214]:
missing_data_check.sort_values(ascending = False)
Out[214]:
REGIOTYP                 17.735668
KKK                      17.735668
W_KEIT_KIND_HH           16.605084
KBA05_ANTG2              14.959701
KBA05_ANTG3              14.959701
KBA05_ANTG4              14.959701
KBA05_GBZ                14.959701
MOBI_REGIO               14.959701
KBA05_ANTG1              14.959701
PLZ8_GBZ                 13.073637
PLZ8_ANTG3               13.073637
PLZ8_BAUMAX              13.073637
PLZ8_ANTG2               13.073637
PLZ8_ANTG4               13.073637
PLZ8_ANTG1               13.073637
PLZ8_HHZ                 13.073637
SHOPPER_TYP              12.476816
VERS_TYP                 12.476816
HEALTH_TYP               12.476816
NATIONALITAET_KZ         12.153551
PRAEGENDE_JUGENDJAHRE    12.136608
KBA13_ANZAHL_PKW         11.871354
ANZ_HAUSHALTE_AKTIV      11.176913
CAMEO_INTL_2015          11.147852
CAMEO_DEU_2015           11.147852
CAMEO_DEUG_2015          11.147852
LP_LEBENSPHASE_FEIN      10.954859
ARBEIT                   10.926022
RELAT_AB                 10.926022
ORTSGR_KLS9              10.914689
ANZ_HH_TITEL             10.884842
LP_LEBENSPHASE_GROB      10.611509
EWDICHTE                 10.518154
INNENSTADT               10.518154
BALLRAUM                 10.518154
GEBAEUDETYP_RASTER       10.452514
WOHNLAGE                 10.451729
GEBAEUDETYP              10.451729
MIN_GEBAEUDEJAHR         10.451729
OST_WEST_KZ              10.451729
LP_FAMILIE_GROB           8.728699
LP_FAMILIE_FEIN           8.728699
KONSUMNAEHE               8.299737
ANZ_TITEL                 8.247000
SOHO_KZ                   8.247000
WOHNDAUER_2008            8.247000
ANZ_PERSONEN              8.247000
HH_EINKOMMEN_SCORE        2.058749
ONLINE_AFFINITAET         0.544646
CJT_GESAMTTYP             0.544646
GFK_URLAUBERTYP           0.544646
LP_STATUS_FEIN            0.544646
LP_STATUS_GROB            0.544646
RETOURTYP_BK_S            0.544646
ALTERSKATEGORIE_GROB      0.323264
dtype: float64

Discussion 1.1.2: Assess Missing Data in Each Column

here we investigated the data and looked at the amount of missing data in each columns and chose to remove columns which had more than 25% of their data missing, those include:['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX']

Step 1.1.3: Assess Missing Data in Each Row

Now, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have a very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.

In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.

  • You can use seaborn's countplot() function to create a bar chart of code frequencies and matplotlib's subplot() function to put bar charts for the two subplots side by side.
  • To reduce repeated code, you might want to write a function that can perform this comparison, taking as one of its arguments a column to be compared.

Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.

In [215]:
# How much data is missing in each row of the dataset?
missing_row_data = azdiasPP.isnull().sum(axis=1)
missing_row_data = missing_row_data[missing_row_data > 0]/(len(azdiasPP.columns)) * 100
missing_row_data.sort_values(inplace=True)

plt.hist(missing_row_data, bins=25)


plt.xlabel('Percentage of missing row data (%)')
plt.ylabel('Counts')
plt.title('Histogram of missing data counts in rows')
plt.grid(True)
plt.show()
In [216]:
missing_row_counts = azdiasPP.isnull().sum(axis=1)
missing_row_counts.sort_values(ascending=False)
Out[216]:
643174    49
732775    49
472919    48
183108    47
139316    47
691141    47
691142    47
691171    47
691183    47
139332    47
691197    47
139323    47
691212    47
691122    47
139267    47
139255    47
139250    47
139248    47
139245    47
139243    47
691317    47
691129    47
691118    47
139236    47
139478    47
690871    47
690876    47
690878    47
690887    47
139521    47
          ..
540246     0
540244     0
540243     0
540242     0
540241     0
540240     0
540239     0
540269     0
540271     0
540300     0
540289     0
540299     0
540298     0
540296     0
540295     0
540293     0
540292     0
540291     0
540290     0
540287     0
540273     0
540286     0
540284     0
540283     0
540281     0
540280     0
540277     0
540275     0
540274     0
445610     0
Length: 891221, dtype: int64
In [217]:
missing_row_counts.sum()
Out[217]:
5035304

Discussion

It appears that we have rows that have up to 50 missing values. I think that that number is severe and directly impacts our work. My inclination is to remove rows with more than 10 values missing.

In [218]:
# Write code to divide the data into two subsets based on the number of missing
# values in each row.

# We use the drop parameter to avoid the old index being added as a column
## link ^ https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html
few_missing = azdiasPP[azdiasPP.isnull().sum(axis=1) < 10].reset_index(drop=True)

many_missing = azdiasPP[azdiasPP.isnull().sum(axis=1) >= 10].reset_index(drop=True)
In [219]:
plt.figure(figsize=(25,25))
for i, col in enumerate(azdiasPP.columns[:10]):
    plt.subplot(5, 2, i+1)
    sns.distplot(few_missing[col][few_missing[col].notnull()], label='few_missing')
    sns.distplot(many_missing[col][many_missing[col].notnull()], label='many_missing')
    plt.title('Distribution for column: {}'.format(col))
    plt.legend();
In [220]:
# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.
#check here for some more example of code: https://jakevdp.github.io/PythonDataScienceHandbook/04.08-multiple-subplots.html

col_names_few = few_missing.columns

def print_countplot(cols,num):
    
    fig, axs = plt.subplots(num,2, figsize=(20, 25))
    
    # adjust the spacing between these plots
    fig.subplots_adjust(hspace =2 , wspace=.2)
    
    
    #This function returns a flattened one-dimensional array.
    #A copy is made only if needed. The returned array will have 
    #the same type as that of the input array.
    #Read more here: https://www.tutorialspoint.com/numpy/numpy_ndarray_ravel.htm
    axs = axs.ravel()

    for i in range(num):
    
        sns.countplot(few_missing[cols[i]], ax=axs[i*2])
        axs[i*2].set_title('few_missing')
        
        sns.countplot(many_missing[cols[i]], ax=axs[i*2+1])
        axs[i*2+1].set_title('many_missing')
    
    
print_countplot(col_names_few,8)

Discussion

The data with many missing rows is significantly different from the data with few missing rows. I am a little concerned about dropping the data with many missing rows because of this vast difference. In addition dropping the data can result in a significant decerease in the number of data rendering our culstering algorithm obsolete.

However, I will choose to drop rows with many missing data points and fill the few missing data points (ie. less than 10).

Nevertheless, it is important to keep the following in mind: filling in the data with the mean or the median of the overall column might create a significant challenge and not help our clustering algorithms. So i will choose to interpolate the data, aka imputation based on the mean of the values directly near it. Keep in mind that this might be dangerous if we have very big outliers.

In [ ]:
 

check exactly how many missing rows will be dropped and save them. this step is not absolutely necessary but it is good practice.

In [221]:
#many_missing = missing_row_counts[missing_row_counts > 10]
print('rows with many missing values:', many_missing.shape[0], 'or', \
      np.round(many_missing.shape[0]*100/missing_row_counts.shape[0],2), 
      '% of all data')
rows with many missing values: 116478 or 13.07 % of all data
In [222]:
# Save data with many missing rows  
azdiasPP_many_missing = azdiasPP.iloc[many_missing.index]

fill in the missing values in the few_missing data set. Interpolation fills the missing value with the mean of the value before it and after it. I used the director in both ways to fill in any NaNs at the top of the columns.

In [223]:
#filling missing values in the few missing with the mean of the value, before and after
for col in few_missing.columns:
    few_missing[col] = few_missing[col].interpolate(limit_direction='both')
In [224]:
few_missing[col].interpolate().count()
Out[224]:
774743
In [225]:
few_missing[col].interpolate().plot()
Out[225]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f5ede045470>
In [226]:
few_missing.head()
Out[226]:
ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER FINANZTYP ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
0 1.0 2.0 5.0 1.0 5.0 2.0 5.0 4.0 5.0 1.0 ... 2.0 3.0 2.0 1.0 1.0 5.0 4.0 3.0 5.0 4.0
1 3.0 2.0 3.0 1.0 4.0 1.0 2.0 3.0 5.0 1.0 ... 3.0 3.0 1.0 0.0 1.0 4.0 4.0 3.0 5.0 2.0
2 4.0 2.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 6.0 ... 2.0 2.0 2.0 0.0 1.0 3.0 4.0 2.0 3.0 3.0
3 3.0 1.0 5.0 4.0 3.0 4.0 1.0 3.0 2.0 5.0 ... 2.0 4.0 2.0 1.0 2.0 3.0 3.0 4.0 6.0 5.0
4 1.0 2.0 2.0 3.0 1.0 5.0 2.0 2.0 5.0 2.0 ... 2.0 3.0 1.0 1.0 1.0 5.0 5.0 2.0 3.0 3.0

5 rows × 79 columns

In [227]:
#check for any null values
few_missing.isnull().sum()
Out[227]:
ALTERSKATEGORIE_GROB        0
ANREDE_KZ                   0
CJT_GESAMTTYP               0
FINANZ_MINIMALIST           0
FINANZ_SPARER               0
FINANZ_VORSORGER            0
FINANZ_ANLEGER              0
FINANZ_UNAUFFAELLIGER       0
FINANZ_HAUSBAUER            0
FINANZTYP                   0
GFK_URLAUBERTYP             0
GREEN_AVANTGARDE            0
HEALTH_TYP                  0
LP_LEBENSPHASE_FEIN         0
LP_LEBENSPHASE_GROB         0
LP_FAMILIE_FEIN             0
LP_FAMILIE_GROB             0
LP_STATUS_FEIN              0
LP_STATUS_GROB              0
NATIONALITAET_KZ            0
PRAEGENDE_JUGENDJAHRE       0
RETOURTYP_BK_S              0
SEMIO_SOZ                   0
SEMIO_FAM                   0
SEMIO_REL                   0
SEMIO_MAT                   0
SEMIO_VERT                  0
SEMIO_LUST                  0
SEMIO_ERL                   0
SEMIO_KULT                  0
                         ... 
MIN_GEBAEUDEJAHR            0
OST_WEST_KZ                 0
WOHNLAGE                    0
CAMEO_DEUG_2015             0
CAMEO_DEU_2015           3456
CAMEO_INTL_2015             0
KBA05_ANTG1                 0
KBA05_ANTG2                 0
KBA05_ANTG3                 0
KBA05_ANTG4                 0
KBA05_GBZ                   0
BALLRAUM                    0
EWDICHTE                    0
INNENSTADT                  0
GEBAEUDETYP_RASTER          0
KKK                         0
MOBI_REGIO                  0
ONLINE_AFFINITAET           0
REGIOTYP                    0
KBA13_ANZAHL_PKW            0
PLZ8_ANTG1                  0
PLZ8_ANTG2                  0
PLZ8_ANTG3                  0
PLZ8_ANTG4                  0
PLZ8_BAUMAX                 0
PLZ8_HHZ                    0
PLZ8_GBZ                    0
ARBEIT                      0
ORTSGR_KLS9                 0
RELAT_AB                    0
Length: 79, dtype: int64
In [228]:
few_missing.isnull().sum().sum()
Out[228]:
3456

It appears that we have a column that has a significant number of missing points. I chose to drop this column, however if the client asked to keep it, We could fill it using the mode or mean.

In [229]:
few_missing.drop(['CAMEO_DEU_2015'], axis=1, inplace=True)
In [230]:
few_missing.isnull().sum().sum()
Out[230]:
0
In [231]:
few_missing.isnull().sum()
Out[231]:
ALTERSKATEGORIE_GROB     0
ANREDE_KZ                0
CJT_GESAMTTYP            0
FINANZ_MINIMALIST        0
FINANZ_SPARER            0
FINANZ_VORSORGER         0
FINANZ_ANLEGER           0
FINANZ_UNAUFFAELLIGER    0
FINANZ_HAUSBAUER         0
FINANZTYP                0
GFK_URLAUBERTYP          0
GREEN_AVANTGARDE         0
HEALTH_TYP               0
LP_LEBENSPHASE_FEIN      0
LP_LEBENSPHASE_GROB      0
LP_FAMILIE_FEIN          0
LP_FAMILIE_GROB          0
LP_STATUS_FEIN           0
LP_STATUS_GROB           0
NATIONALITAET_KZ         0
PRAEGENDE_JUGENDJAHRE    0
RETOURTYP_BK_S           0
SEMIO_SOZ                0
SEMIO_FAM                0
SEMIO_REL                0
SEMIO_MAT                0
SEMIO_VERT               0
SEMIO_LUST               0
SEMIO_ERL                0
SEMIO_KULT               0
                        ..
KONSUMNAEHE              0
MIN_GEBAEUDEJAHR         0
OST_WEST_KZ              0
WOHNLAGE                 0
CAMEO_DEUG_2015          0
CAMEO_INTL_2015          0
KBA05_ANTG1              0
KBA05_ANTG2              0
KBA05_ANTG3              0
KBA05_ANTG4              0
KBA05_GBZ                0
BALLRAUM                 0
EWDICHTE                 0
INNENSTADT               0
GEBAEUDETYP_RASTER       0
KKK                      0
MOBI_REGIO               0
ONLINE_AFFINITAET        0
REGIOTYP                 0
KBA13_ANZAHL_PKW         0
PLZ8_ANTG1               0
PLZ8_ANTG2               0
PLZ8_ANTG3               0
PLZ8_ANTG4               0
PLZ8_BAUMAX              0
PLZ8_HHZ                 0
PLZ8_GBZ                 0
ARBEIT                   0
ORTSGR_KLS9              0
RELAT_AB                 0
Length: 78, dtype: int64

Wunderbar!

Discussion 1.1.3: Assess Missing Data in Each Row

There were many rows with missing data, I chose to create a cut off around 10 missing values per row, deleted any rows with more than that, which lost us about 13% of the data set. I also chose to interpolate the rest of the values and deleted the columns that presisted with a significant number of empty values. now the data is clean and is void of any null vlaues.

Step 1.2: Select and Re-Encode Features

Checking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (feat_info) for a summary of types of measurement.

  • For numeric and interval data, these features can be kept without changes.
  • Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).
  • Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.

In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns.

Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!

In [234]:
#set the attribute as the index
feat_info.set_index('attribute', inplace=True)

feat_info.head(n=10)
Out[234]:
information_level type missing_or_unknown
attribute
AGER_TYP person categorical [-1,0]
ALTERSKATEGORIE_GROB person ordinal [-1,0,9]
ANREDE_KZ person categorical [-1,0]
CJT_GESAMTTYP person categorical [0]
FINANZ_MINIMALIST person ordinal [-1]
FINANZ_SPARER person ordinal [-1]
FINANZ_VORSORGER person ordinal [-1]
FINANZ_ANLEGER person ordinal [-1]
FINANZ_UNAUFFAELLIGER person ordinal [-1]
FINANZ_HAUSBAUER person ordinal [-1]
In [235]:
feat_info['col_names'] = feat_info.index
feat_info.head()
Out[235]:
information_level type missing_or_unknown col_names
attribute
AGER_TYP person categorical [-1,0] AGER_TYP
ALTERSKATEGORIE_GROB person ordinal [-1,0,9] ALTERSKATEGORIE_GROB
ANREDE_KZ person categorical [-1,0] ANREDE_KZ
CJT_GESAMTTYP person categorical [0] CJT_GESAMTTYP
FINANZ_MINIMALIST person ordinal [-1] FINANZ_MINIMALIST
In [236]:
# How many features are there of each data type?

dtype={}
for col in few_missing.columns:
    data_type = feat_info.loc[col].type
    if data_type not in dtype:
        dtype[data_type] = 1
    else:
        dtype[data_type] += 1
    
dtype
Out[236]:
{'ordinal': 49, 'categorical': 17, 'mixed': 6, 'numeric': 6}

Step 1.2.1: Re-Encode Categorical Features

For categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:

  • For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything.
  • There is one binary variable that takes on non-numeric values. For this one, you need to re-encode the values as numbers or create a dummy variable.
  • For multi-level categoricals (three or more values), you can choose to encode the values using multiple dummy variables (e.g. via OneHotEncoder), or (to keep things straightforward) just drop them from the analysis. As always, document your choices in the Discussion section.
In [237]:
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?
for col in few_missing.columns:
    if feat_info.loc[col].type == 'categorical':
        print(col, len(few_missing[col].unique()), few_missing[col].dtype)
ANREDE_KZ 2 float64
CJT_GESAMTTYP 6 float64
FINANZTYP 6 float64
GFK_URLAUBERTYP 12 float64
GREEN_AVANTGARDE 2 int64
LP_FAMILIE_FEIN 68 float64
LP_FAMILIE_GROB 28 float64
LP_STATUS_FEIN 10 float64
LP_STATUS_GROB 5 float64
NATIONALITAET_KZ 20 float64
SHOPPER_TYP 28 float64
SOHO_KZ 2 float64
VERS_TYP 13 float64
ZABEOTYP 6 float64
GEBAEUDETYP 7 float64
OST_WEST_KZ 2 object
CAMEO_DEUG_2015 61 float64
In [238]:
#extract categorical columns with more than 2 categories
non_binary_type = []
for col in few_missing.columns:
    if feat_info.loc[col].type == 'categorical' and len(few_missing[col].unique()) > 2:
        non_binary_type.append(col)
non_binary_type
Out[238]:
['CJT_GESAMTTYP',
 'FINANZTYP',
 'GFK_URLAUBERTYP',
 'LP_FAMILIE_FEIN',
 'LP_FAMILIE_GROB',
 'LP_STATUS_FEIN',
 'LP_STATUS_GROB',
 'NATIONALITAET_KZ',
 'SHOPPER_TYP',
 'VERS_TYP',
 'ZABEOTYP',
 'GEBAEUDETYP',
 'CAMEO_DEUG_2015']
In [239]:
# Find the column names for mixed type columns
mixed_type = []
for col in few_missing.columns:
    if feat_info.loc[col].type == 'mixed':
        mixed_type.append(col)
        
mixed_type
Out[239]:
['LP_LEBENSPHASE_FEIN',
 'LP_LEBENSPHASE_GROB',
 'PRAEGENDE_JUGENDJAHRE',
 'WOHNLAGE',
 'CAMEO_INTL_2015',
 'PLZ8_BAUMAX']

I think that the shopper type will be of value to the client so I choose to reencode it, but first, we must round the values up. those decimal values were the result of the interpolation.

In [241]:
few_missing['SHOPPER_TYP'].value_counts()
Out[241]:
1.000000    246213
2.000000    210053
3.000000    175154
0.000000    126348
1.500000      7786
2.500000      3998
0.500000      3235
2.333333       279
1.666667       244
1.666667       222
1.333333       192
1.333333       189
2.666667       179
0.666667       150
0.666667       146
2.333333       113
0.333333        89
0.333333        79
1.250000        17
1.750000        17
2.250000        13
0.750000         9
2.750000         9
0.250000         5
1.800000         1
2.200000         1
1.400000         1
2.600000         1
Name: SHOPPER_TYP, dtype: int64
In [246]:
few_missing['SHOPPER_TYP'] = few_missing['SHOPPER_TYP'].round()
few_missing['SHOPPER_TYP']
Out[246]:
0         3.0
1         2.0
2         1.0
3         2.0
4         0.0
5         1.0
6         0.0
7         3.0
8         3.0
9         2.0
10        1.0
11        3.0
12        1.0
13        2.0
14        1.0
15        2.0
16        1.0
17        1.0
18        0.0
19        0.0
20        0.0
21        1.0
22        1.0
23        1.0
24        0.0
25        1.0
26        2.0
27        2.0
28        2.0
29        1.0
         ... 
774713    0.0
774714    3.0
774715    0.0
774716    2.0
774717    0.0
774718    3.0
774719    1.0
774720    2.0
774721    1.0
774722    2.0
774723    3.0
774724    2.0
774725    3.0
774726    0.0
774727    2.0
774728    0.0
774729    3.0
774730    3.0
774731    2.0
774732    2.0
774733    1.0
774734    2.0
774735    1.0
774736    3.0
774737    1.0
774738    3.0
774739    2.0
774740    2.0
774741    0.0
774742    2.0
Name: SHOPPER_TYP, Length: 774743, dtype: float64
In [247]:
few_missing['SHOPPER_TYP'].value_counts()
Out[247]:
1.0    246917
2.0    222727
3.0    175343
0.0    129756
Name: SHOPPER_TYP, dtype: int64

re encode the categorical variable OST_WEST_KZ

In [248]:
# Re-encode categorical variable(s) to be kept in the analysis.
# we did not encode the others given that they are already binary.
#Onehot encoding for the one binary variable that takes on non-numeric values
# This will create an extra unecessary column:
##few_missing = pd.get_dummies(data=few_missing, columns=['OST_WEST_KZ'])
few_missing.loc[:, 'OST_WEST_KZ'].replace({'W':0, 'O':1}, inplace=True);
In [249]:
# Reencode Shopper type
few_missing = pd.get_dummies(data=few_missing, columns=['SHOPPER_TYP'])
In [251]:
#check the columns for the newly re-encoded categorical values
few_missing.head()
Out[251]:
ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER FINANZTYP ... PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0
0 1.0 2.0 5.0 1.0 5.0 2.0 5.0 4.0 5.0 1.0 ... 1.0 5.0 4.0 3.0 5.0 4.0 0 0 0 1
1 3.0 2.0 3.0 1.0 4.0 1.0 2.0 3.0 5.0 1.0 ... 1.0 4.0 4.0 3.0 5.0 2.0 0 0 1 0
2 4.0 2.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 6.0 ... 1.0 3.0 4.0 2.0 3.0 3.0 0 1 0 0
3 3.0 1.0 5.0 4.0 3.0 4.0 1.0 3.0 2.0 5.0 ... 2.0 3.0 3.0 4.0 6.0 5.0 0 0 1 0
4 1.0 2.0 2.0 3.0 1.0 5.0 2.0 2.0 5.0 2.0 ... 1.0 5.0 5.0 2.0 3.0 3.0 1 0 0 0

5 rows × 81 columns

In [252]:
#remove categorical columns with more than 2 categories
non_binary_cols = ['CJT_GESAMTTYP',
 'FINANZTYP',
 'GFK_URLAUBERTYP',
 'LP_FAMILIE_FEIN',
 'LP_FAMILIE_GROB',
 'LP_STATUS_FEIN',
 'LP_STATUS_GROB',
 'NATIONALITAET_KZ',
 'VERS_TYP',
 'ZABEOTYP',
 'GEBAEUDETYP',
 'CAMEO_DEUG_2015']

for col in non_binary_cols:
    few_missing.drop(col, axis=1, inplace=True)

Discussion 1.2.1: Re-Encode Categorical Features

I checked the values and I have a significant number of ordinal values, catgeorical and several mixed ones too. I rencoded the categorical variable for easier wrangling and less uncessary columns. I also reencoded the shopper type and dummied it out. I deleted the remaining non-binary columns, I chose to drop them for ease of analysis. Obviosuly if the client wanted them encoded, that could also be done. I looked at the mixed ones as well to prepare for their engineering below.

Step 1.2.2: Engineer Mixed-Type Features

There are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest are up to your own choices:

  • "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.
  • "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).
  • If you decide to keep or engineer new features around the other mixed-type features, make sure you note your steps in the Discussion section.

Be sure to check Data_Dictionary.md for the details needed to finish these tasks.

In [253]:
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
few_missing['PRAEGENDE_JUGENDJAHRE'].head()
Out[253]:
0    14.0
1    15.0
2     8.0
3     8.0
4     3.0
Name: PRAEGENDE_JUGENDJAHRE, dtype: float64
In [254]:
few_missing['CAMEO_INTL_2015'].head()
Out[254]:
0    51.0
1    24.0
2    12.0
3    43.0
4    54.0
Name: CAMEO_INTL_2015, dtype: float64
In [255]:
# this information is based on the dictionary document
#Map generation 
gen_dict = {0: [1, 2], 
            1: [3, 4],
            2: [5, 6, 7],
            3: [8, 9],
            4: [10, 11, 12, 13], 
            5:[14, 15]}

def map_gen(x):
    try:
        for key, array in gen_dict.items():
            if x in array:
                return key
    except ValueError:
        return np.nan
    
# Map movement 
mainstream = [1, 3, 5, 8, 10, 12, 14]

def map_mov(x):
    try:
        if x in mainstream:
            return 0
        else:
            return 1
    except ValueError:
        return np.nan
In [256]:
# Create generation column
few_missing['PRAEGENDE_JUGENDJAHRE_decade'] = few_missing['PRAEGENDE_JUGENDJAHRE'].apply(map_gen)

# Create movement column
few_missing['PRAEGENDE_JUGENDJAHRE_movement'] = few_missing['PRAEGENDE_JUGENDJAHRE'].apply(map_mov)
In [257]:
# drop the original column
few_missing.drop(['PRAEGENDE_JUGENDJAHRE'], axis = 1, inplace=True)
In [258]:
few_missing.head()
Out[258]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement
0 1.0 2.0 1.0 5.0 2.0 5.0 4.0 5.0 0 3.0 ... 4.0 3.0 5.0 4.0 0 0 0 1 5.0 0
1 3.0 2.0 1.0 4.0 1.0 2.0 3.0 5.0 1 3.0 ... 4.0 3.0 5.0 2.0 0 0 1 0 5.0 1
2 4.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 0 2.0 ... 4.0 2.0 3.0 3.0 0 1 0 0 3.0 0
3 3.0 1.0 4.0 3.0 4.0 1.0 3.0 2.0 0 3.0 ... 3.0 4.0 6.0 5.0 0 0 1 0 3.0 0
4 1.0 2.0 3.0 1.0 5.0 2.0 2.0 5.0 0 3.0 ... 5.0 2.0 3.0 3.0 1 0 0 0 1.0 0

5 rows × 70 columns

In [259]:
# Map wealth 
def map_wealth(x):
    # Check of nan first, or it will convert nan to string 'nan'
    try:
        if pd.isnull(x):
            return np.nan
        else:
            return int(str(x)[0])
    except ValueError:
        return np.nan

# Map life stage
def map_lifestage(x):
    try:
        if pd.isnull(x):
            return np.nan
        else:
            return int(str(x)[1])
    except ValueError:
        return np.nan
In [260]:
# Create wealth column
few_missing['CAMEO_INTL_2015_wealth'] = few_missing['CAMEO_INTL_2015'].apply(map_wealth)

# Create life stage column
few_missing['CAMEO_INTL_2015_lifestage'] = few_missing['CAMEO_INTL_2015'].apply(map_lifestage)
In [261]:
# drop the original column
few_missing.drop(['CAMEO_INTL_2015'], axis = 1, inplace=True)
In [262]:
#Map WOHNLAGE

# Map Rural
 
rural = [7,8]

def map_rur(x):
    try:
        if x in rural:
            return 0
        else:
            return 1
    except ValueError:
        return np.nan
    
    
# Map Neigh

neigh = [1,2,3,4,5]

def map_neigh(x):
    try: 
        if x in neigh:
            return 0
        else:
            return 1
    except ValueError:
        return np.nan
In [263]:
# Create rural column
few_missing['WOHNLAGE_rural'] = few_missing['WOHNLAGE'].apply(map_rur)
# Create neighborhood column
few_missing['WOHNLAGE_neigh'] = few_missing['WOHNLAGE'].apply(map_neigh)
In [264]:
# drop the original column
few_missing.drop(['WOHNLAGE'], axis = 1, inplace=True)
In [265]:
few_missing.head()
Out[265]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 1.0 2.0 1.0 5.0 2.0 5.0 4.0 5.0 0 3.0 ... 0 0 0 1 5.0 0 5 1 1 0
1 3.0 2.0 1.0 4.0 1.0 2.0 3.0 5.0 1 3.0 ... 0 0 1 0 5.0 1 2 4 1 0
2 4.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 0 2.0 ... 0 1 0 0 3.0 0 1 2 0 1
3 3.0 1.0 4.0 3.0 4.0 1.0 3.0 2.0 0 3.0 ... 0 0 1 0 3.0 0 4 3 1 0
4 1.0 2.0 3.0 1.0 5.0 2.0 2.0 5.0 0 3.0 ... 1 0 0 0 1.0 0 5 4 0 1

5 rows × 72 columns

Discussion

After mapping WOHNLAGE in addition to the others, I am opting to drop the other mixed categories, they were too fine and something tells me that because there were lumped like this anyway, that means that they didn't matter that much anyway!

In [266]:
# dropping the other mixed columns
mixed = ['LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB','PLZ8_BAUMAX']
for col in mixed:
    few_missing.drop(col, axis=1, inplace=True)
In [267]:
few_missing.head()
Out[267]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 1.0 2.0 1.0 5.0 2.0 5.0 4.0 5.0 0 3.0 ... 0 0 0 1 5.0 0 5 1 1 0
1 3.0 2.0 1.0 4.0 1.0 2.0 3.0 5.0 1 3.0 ... 0 0 1 0 5.0 1 2 4 1 0
2 4.0 2.0 4.0 2.0 5.0 2.0 1.0 2.0 0 2.0 ... 0 1 0 0 3.0 0 1 2 0 1
3 3.0 1.0 4.0 3.0 4.0 1.0 3.0 2.0 0 3.0 ... 0 0 1 0 3.0 0 4 3 1 0
4 1.0 2.0 3.0 1.0 5.0 2.0 2.0 5.0 0 3.0 ... 1 0 0 0 1.0 0 5 4 0 1

5 rows × 69 columns

Discussion 1.2.2: Engineer Mixed-Type Features

To reingineer those mixed type values, it was helpful that they were assigned a numeric value. I was interested in reeingineering the LP_LEBENSPHASE_FEIN column, but unfortunately, it was too fine and had more than 3-4 vlaues per person and no clear code. if I had more time, I would have looked at it more deeply, but for now, I chose to drop the other mixed variables, including: 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB','WOHNLAGE','KBA05_BAUMAX','PLZ8_BAUMAX'.

Step 1.2.3: Complete Feature Selection

In order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:

  • All numeric, interval, and ordinal type columns from the original dataset.
  • Binary categorical features (all numerically-encoded).
  • Engineered features from other multi-level categorical features and mixed features.

Make sure that for any new columns that you have engineered, that you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from the subset with few or no missing values.

In [ ]:
# If there are other re-engineering tasks you need to perform, make sure you
# take care of them here. (Dealing with missing data will come in step 2.1.)
In [268]:
#check again for missing values
few_missing.isnull().sum()
Out[268]:
ALTERSKATEGORIE_GROB                  0
ANREDE_KZ                             0
FINANZ_MINIMALIST                     0
FINANZ_SPARER                         0
FINANZ_VORSORGER                      0
FINANZ_ANLEGER                        0
FINANZ_UNAUFFAELLIGER                 0
FINANZ_HAUSBAUER                      0
GREEN_AVANTGARDE                      0
HEALTH_TYP                            0
RETOURTYP_BK_S                        0
SEMIO_SOZ                             0
SEMIO_FAM                             0
SEMIO_REL                             0
SEMIO_MAT                             0
SEMIO_VERT                            0
SEMIO_LUST                            0
SEMIO_ERL                             0
SEMIO_KULT                            0
SEMIO_RAT                             0
SEMIO_KRIT                            0
SEMIO_DOM                             0
SEMIO_KAEM                            0
SEMIO_PFLICHT                         0
SEMIO_TRADV                           0
SOHO_KZ                               0
ANZ_PERSONEN                          0
ANZ_TITEL                             0
HH_EINKOMMEN_SCORE                    0
W_KEIT_KIND_HH                        0
                                  ...  
KBA05_ANTG4                           0
KBA05_GBZ                             0
BALLRAUM                              0
EWDICHTE                              0
INNENSTADT                            0
GEBAEUDETYP_RASTER                    0
KKK                                   0
MOBI_REGIO                            0
ONLINE_AFFINITAET                     0
REGIOTYP                              0
KBA13_ANZAHL_PKW                      0
PLZ8_ANTG1                            0
PLZ8_ANTG2                            0
PLZ8_ANTG3                            0
PLZ8_ANTG4                            0
PLZ8_HHZ                              0
PLZ8_GBZ                              0
ARBEIT                                0
ORTSGR_KLS9                           0
RELAT_AB                              0
SHOPPER_TYP_0.0                       0
SHOPPER_TYP_1.0                       0
SHOPPER_TYP_2.0                       0
SHOPPER_TYP_3.0                       0
PRAEGENDE_JUGENDJAHRE_decade      11896
PRAEGENDE_JUGENDJAHRE_movement        0
CAMEO_INTL_2015_wealth                0
CAMEO_INTL_2015_lifestage             0
WOHNLAGE_rural                        0
WOHNLAGE_neigh                        0
Length: 69, dtype: int64
In [269]:
# interpolate one more time to clear the empty values from the newly created columns

for col in few_missing.columns:
        few_missing[col] = few_missing[col].interpolate(limit_direction='both')
    
In [270]:
few_missing.isnull().sum().sum()
Out[270]:
0
In [ ]:
#this could also work to clear the data from NaNs

from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values=float("NaN"), 
                        strategy="mean/most frequent/median",
                        axis=1, copy = False)

Step 1.3: Create a Cleaning Function

Even though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.

In [325]:
def clean_data(df):
    """
    Perform feature trimming, re-encoding, and engineering for demographics
    data
    
    INPUT: Demographics DataFrame
    OUTPUT: Trimmed and cleaned demographics DataFrame
    """
    
    # Put in code here to execute all main cleaning steps:
    # convert missing value codes into NaNs, ...
    feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', delimiter=';')
    feat_info['missing_or_unknown'] = feat_info['missing_or_unknown'].apply(lambda x: x[1:-1].split(','))



    # Identify missing or unknown data values and convert them to NaNs.
    for attrib, missing_values in zip(feat_info['attribute'], feat_info['missing_or_unknown']):
        if missing_values[0] != '':
            for value in missing_values:
                if value.isnumeric() or value.lstrip('-').isnumeric():
                    value = int(value)
                azdias.loc[azdias[attrib] == value, attrib] = np.nan
    # remove selected columns and rows, ...

    #Columns
    columns_removed = ['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 
                       'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX',
                       'CAMEO_DEU_2015']
    
    
    for col in columns_removed:
        df.drop(col, axis=1, inplace=True)
        
    #Rows
    few_missing = df[df.isnull().sum(axis=1) < 10].reset_index(drop=True)
    
    #interpolate and impute the NaNs
    for col in few_missing.columns:
        few_missing[col] = few_missing[col].interpolate(limit_direction='both')
    
    # select, re-encode, and engineer column values.
    #drop non categorical
    non_binary_cols = ['CJT_GESAMTTYP','FINANZTYP','GFK_URLAUBERTYP','LP_FAMILIE_FEIN',
                       'LP_FAMILIE_GROB','LP_STATUS_FEIN','LP_STATUS_GROB','NATIONALITAET_KZ',
                       'VERS_TYP','ZABEOTYP','GEBAEUDETYP','CAMEO_DEUG_2015']
    
    for col in non_binary_cols:
        few_missing.drop(col, axis=1, inplace=True)
        
    
    #Reencode the categorical and get dummies for the shopper type
    few_missing.loc[:, 'OST_WEST_KZ'].replace({'W':0, 'O':1}, inplace=True);
    few_missing['SHOPPER_TYP'] = few_missing['SHOPPER_TYP'].round()
    few_missing = pd.get_dummies(data=few_missing, columns=['SHOPPER_TYP'])
    
     # Complete the mapping process of mixed features
    few_missing['PRAEGENDE_JUGENDJAHRE_decade'] = few_missing['PRAEGENDE_JUGENDJAHRE'].apply(map_gen)
    few_missing['PRAEGENDE_JUGENDJAHRE_movement'] = few_missing['PRAEGENDE_JUGENDJAHRE'].apply(map_mov)
    few_missing.drop('PRAEGENDE_JUGENDJAHRE', axis=1, inplace=True)   
  
    few_missing['CAMEO_INTL_2015_wealth'] = few_missing['CAMEO_INTL_2015'].apply(map_wealth)
    few_missing['CAMEO_INTL_2015_lifestage'] = few_missing['CAMEO_INTL_2015'].apply(map_lifestage)
    few_missing.drop('CAMEO_INTL_2015', axis=1, inplace=True)
    
    few_missing['WOHNLAGE_rural'] = few_missing['WOHNLAGE'].apply(map_rur)
    few_missing['WOHNLAGE_neigh'] = few_missing['WOHNLAGE'].apply(map_neigh)
    few_missing.drop('WOHNLAGE', axis=1, inplace=True)
    
    #drop mixed
    mixed = ['LP_LEBENSPHASE_FEIN','LP_LEBENSPHASE_GROB','PLZ8_BAUMAX']
    for col in mixed:
        few_missing.drop(col, axis=1, inplace=True)
        
    # interpolate again to clean the data set
    for col in few_missing.columns:
        few_missing[col] = few_missing[col].interpolate(limit_direction='both')
    
    # Return the cleaned dataframe.
    return few_missing

    

Step 2: Feature Transformation

Step 2.1: Apply Feature Scaling

Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the API reference page for sklearn to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:

  • sklearn requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to your data, make sure that you've cleaned the DataFrame of the remaining missing values. This can be as simple as just removing all data points with missing data, or applying an Imputer to replace all missing values. You might also try a more complicated procedure where you temporarily remove missing values in order to compute the scaling parameters before re-introducing those missing values and applying imputation. Think about how much missing data you have and what possible effects each approach might have on your analysis, and justify your decision in the discussion section below.
  • For the actual scaling function, a StandardScaler instance is suggested, scaling each feature to mean 0 and standard deviation 1.
  • For these classes, you can make use of the .fit_transform() method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.
In [272]:
# Apply feature scaling to the general population demographics data.
# use .as_matrix to convert a pandas data frame to a numpy matrix for scikit-learn

start_time = time.time()

scaler = StandardScaler()
few_missing[few_missing.columns] = scaler.fit_transform(few_missing[few_missing.columns].as_matrix())

print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 0.63 mins ---
In [273]:
#check the stats, mean, etc
few_missing.describe()
Out[273]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
count 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 ... 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05 7.747430e+05
mean 1.825462e-16 -3.738236e-17 -1.041864e-16 8.767796e-17 4.510462e-17 1.162742e-16 -3.250321e-17 1.707152e-16 6.704246e-17 -1.624794e-16 ... -4.706729e-17 -4.792940e-17 8.034089e-18 9.776643e-18 2.685367e-17 -7.713092e-18 8.872349e-17 6.403426e-17 -1.279768e-16 -4.325201e-17
std 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 ... 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00 1.000001e+00
min -1.763460e+00 -1.043381e+00 -1.488785e+00 -1.151076e+00 -1.770775e+00 -1.247929e+00 -1.171961e+00 -1.533358e+00 -5.311360e-01 -1.612046e+00 ... -4.485266e-01 -6.839591e-01 -6.352002e-01 -5.408612e-01 -2.297786e+00 -5.676484e-01 -1.552736e+00 -1.916286e+00 -1.811996e+00 -5.577290e-01
25% -7.820297e-01 -1.043381e+00 -7.627794e-01 -1.151076e+00 -1.044684e+00 -1.247929e+00 -1.171961e+00 -8.182165e-01 -5.311360e-01 -2.732270e-01 ... -4.485266e-01 -6.839591e-01 -6.352002e-01 -5.408612e-01 -9.150856e-01 -5.676484e-01 -8.698078e-01 -1.250972e+00 5.518776e-01 -5.577290e-01
50% 1.994002e-01 9.584224e-01 -3.677339e-02 1.944708e-01 4.074985e-01 1.103059e-01 -4.542929e-01 -1.030746e-01 -5.311360e-01 -2.732270e-01 ... -4.485266e-01 -6.839591e-01 -6.352002e-01 -5.408612e-01 -2.237354e-01 -5.676484e-01 4.960489e-01 7.965636e-02 5.518776e-01 -5.577290e-01
75% 1.180830e+00 9.584224e-01 6.892326e-01 8.672441e-01 1.133590e+00 7.894232e-01 9.810433e-01 6.120673e-01 -5.311360e-01 1.065592e+00 ... -4.485266e-01 1.462076e+00 1.574307e+00 -5.408612e-01 1.158965e+00 -5.676484e-01 1.178977e+00 7.449705e-01 5.518776e-01 -5.577290e-01
max 1.180830e+00 9.584224e-01 1.415239e+00 1.540017e+00 1.133590e+00 1.468541e+00 1.698711e+00 1.327209e+00 1.882757e+00 1.065592e+00 ... 2.229522e+00 1.462076e+00 1.574307e+00 1.848903e+00 1.158965e+00 1.761654e+00 1.178977e+00 4.071541e+00 5.518776e-01 1.792986e+00

8 rows × 69 columns

In [274]:
few_missing.head()
Out[274]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 -1.76346 0.958422 -1.488785 1.540017 -1.044684 1.468541 0.981043 1.327209 -0.531136 1.065592 ... -0.448527 -0.683959 -0.635200 1.848903 1.158965 -0.567648 1.178977 -1.250972 0.551878 -0.557729
1 0.19940 0.958422 -1.488785 0.867244 -1.770775 -0.568811 0.263375 1.327209 1.882757 1.065592 ... -0.448527 -0.683959 1.574307 -0.540861 1.158965 1.761654 -0.869808 0.744971 0.551878 -0.557729
2 1.18083 0.958422 0.689233 -0.478302 1.133590 -0.568811 -1.171961 -0.818216 -0.531136 -0.273227 ... -0.448527 1.462076 -0.635200 -0.540861 -0.223735 -0.567648 -1.552736 -0.585658 -1.811996 1.792986
3 0.19940 -1.043381 0.689233 0.194471 0.407498 -1.247929 0.263375 -0.818216 -0.531136 1.065592 ... -0.448527 -0.683959 1.574307 -0.540861 -0.223735 -0.567648 0.496049 0.079656 0.551878 -0.557729
4 -1.76346 0.958422 -0.036773 -1.151076 1.133590 -0.568811 -0.454293 1.327209 -0.531136 1.065592 ... 2.229522 -0.683959 -0.635200 -0.540861 -1.606436 -0.567648 1.178977 0.744971 -1.811996 1.792986

5 rows × 69 columns

In [322]:
few_missing.columns
Out[322]:
Index(['ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'FINANZ_MINIMALIST',
       'FINANZ_SPARER', 'FINANZ_VORSORGER', 'FINANZ_ANLEGER',
       'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER', 'GREEN_AVANTGARDE',
       'HEALTH_TYP', 'RETOURTYP_BK_S', 'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL',
       'SEMIO_MAT', 'SEMIO_VERT', 'SEMIO_LUST', 'SEMIO_ERL', 'SEMIO_KULT',
       'SEMIO_RAT', 'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM', 'SEMIO_PFLICHT',
       'SEMIO_TRADV', 'SOHO_KZ', 'ANZ_PERSONEN', 'ANZ_TITEL',
       'HH_EINKOMMEN_SCORE', 'W_KEIT_KIND_HH', 'WOHNDAUER_2008',
       'ANZ_HAUSHALTE_AKTIV', 'ANZ_HH_TITEL', 'KONSUMNAEHE',
       'MIN_GEBAEUDEJAHR', 'OST_WEST_KZ', 'KBA05_ANTG1', 'KBA05_ANTG2',
       'KBA05_ANTG3', 'KBA05_ANTG4', 'KBA05_GBZ', 'BALLRAUM', 'EWDICHTE',
       'INNENSTADT', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO',
       'ONLINE_AFFINITAET', 'REGIOTYP', 'KBA13_ANZAHL_PKW', 'PLZ8_ANTG1',
       'PLZ8_ANTG2', 'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_HHZ', 'PLZ8_GBZ',
       'ARBEIT', 'ORTSGR_KLS9', 'RELAT_AB', 'SHOPPER_TYP_0.0',
       'SHOPPER_TYP_1.0', 'SHOPPER_TYP_2.0', 'SHOPPER_TYP_3.0',
       'PRAEGENDE_JUGENDJAHRE_decade', 'PRAEGENDE_JUGENDJAHRE_movement',
       'CAMEO_INTL_2015_wealth', 'CAMEO_INTL_2015_lifestage', 'WOHNLAGE_rural',
       'WOHNLAGE_neigh'],
      dtype='object')

Discussion 2.1: Apply Feature Scaling

I used the standard scaler to normalize the values to analyze our data properly and provide a good solution that is not affected or skewed by varying scales within the data.

Step 2.2: Perform Dimensionality Reduction

On your scaled data, you are now ready to apply dimensionality reduction techniques.

  • Use sklearn's PCA class to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, you should not set any parameters (so all components are computed) or set a number of components that is at least half the number of features (so there's enough features to see the general trend in variability).
  • Check out the ratio of variance explained by each principal component as well as the cumulative variance explained. Try plotting the cumulative or sequential values using matplotlib's plot() function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.
  • Once you've made a choice for the number of components to keep, make sure you re-fit a PCA instance to perform the decided-on transformation.
In [275]:
# Apply PCA to the data.
start_time = time.time()

pca = PCA()
few_missing_pca = pca.fit_transform(few_missing)
print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 0.23 mins ---
In [276]:
def pca_plot1(pca):
    '''
    Creates a scree plot associated with the principal components 
    
    INPUT: pca - the result of instantian of PCA in scikit learn
            
    OUTPUT:
            None
    '''
    num_components = len(pca.explained_variance_ratio_)
    ind = np.arange(num_components)
    vals = pca.explained_variance_ratio_
 
    plt.figure(figsize=(10, 6))
    ax = plt.subplot(111)
    cumvals = np.cumsum(vals)
    ax.bar(ind, vals)
    ax.plot(ind, cumvals)
    for i in range(num_components):
        ax.annotate(r"%s%%" % ((str(vals[i]*100)[:4])), (ind[i]+0.2, vals[i]), va="bottom", ha="center", fontsize=12)
 
    ax.xaxis.set_tick_params(width=0)
    ax.yaxis.set_tick_params(width=2, length=12)
 
    ax.set_xlabel("Principal Component")
    ax.set_ylabel("Variance Explained (%)")
    plt.title('Explained Variance Per Principal Component')

pca_plot1(pca)
In [278]:
# Number of components required to maintain %75 variance:
def pca_plot2(pca):

    n_components = min(np.where(np.cumsum(pca.explained_variance_ratio_)>0.85)[0]+1)

    fig = plt.figure()
    ax = fig.add_axes([0,0,1,1],True)
    ax2 = ax.twinx()
    ax.plot(pca.explained_variance_ratio_, label='Variance',)
    ax2.plot(np.cumsum(pca.explained_variance_ratio_), label='Cumulative Variance',color = 'red');
    ax.set_title('n_components needed for >%85 explained variance: {}'.format(n_components));
    ax.axvline(n_components, linestyle='dashed', color='grey')
    ax2.axhline(np.cumsum(pca.explained_variance_ratio_)[n_components], linestyle='dashed', color='grey')
    fig.legend(loc=(0.6,0.2));

pca_plot2(pca)
In [279]:
# Investigate the variance accounted for by each principal component.
# Re-apply PCA to the data while selecting for number of components to retain.
start_time = time.time()

pca = PCA(n_components= 28, random_state=10)
few_missing_pca = pca.fit_transform(few_missing)

print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 0.38 mins ---
In [280]:
#check the explained variance ratio
pca.explained_variance_ratio_.sum()
Out[280]:
0.85399780354172894
In [ ]:
 

Discussion 2.2: Perform Dimensionality Reduction

Looking at the plots above, it became clear that 28 features seemed to have the most explanatory power at 85%. I chose to go with 28 principal components and an explained variance ratio of 85 because anything after 28 components had an explanatory power that diverged on zero. 85% seemed like a sufficient percentage to capture the majority of the variability in our data without any redundant features.

Step 2.3: Interpret Principal Components

Now that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.

As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one tend expect to be associated with increases in the other. To contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.

  • To investigate the features, you should map each weight to their corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. Use the data dictionary document to help you understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.
  • You should investigate and interpret feature associations from the first three principal components in this substep. To help facilitate this, you should write a function that you can call at any time to print the sorted list of feature weights, for the i-th principal component. This might come in handy in the next step of the project, when you interpret the tendencies of the discovered clusters.
In [281]:
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.

def pca_weights(pca, i):
    weight_map = {}
    for counter, feature in enumerate(few_missing.columns):
        weight_map[feature] = pca.components_[i][counter]
    
    sorted_weights = sorted(weight_map.items(), key=operator.itemgetter(1), reverse=True)
    
    return sorted_weights

weights = pca_weights(pca,1)

#pretty print
pprint.pprint(weights)
[('ALTERSKATEGORIE_GROB', 0.26347995732133683),
 ('SEMIO_ERL', 0.23816136607761318),
 ('FINANZ_VORSORGER', 0.23499841184988668),
 ('SEMIO_LUST', 0.18515245026172314),
 ('RETOURTYP_BK_S', 0.16013039489101064),
 ('SEMIO_KRIT', 0.12007713970476436),
 ('SEMIO_KAEM', 0.11706398061322876),
 ('W_KEIT_KIND_HH', 0.10676474518388057),
 ('SHOPPER_TYP_3.0', 0.10062594712969217),
 ('FINANZ_HAUSBAUER', 0.097793734390226122),
 ('ANREDE_KZ', 0.097648961223096201),
 ('FINANZ_MINIMALIST', 0.082601260308946836),
 ('EWDICHTE', 0.078040983073825329),
 ('SEMIO_DOM', 0.077741809564029707),
 ('ORTSGR_KLS9', 0.077565853339305685),
 ('PLZ8_ANTG3', 0.072091405962423596),
 ('PLZ8_ANTG4', 0.067981800154932076),
 ('WOHNLAGE_rural', 0.06366357339061611),
 ('WOHNDAUER_2008', 0.063207310536588762),
 ('ARBEIT', 0.05503673333097335),
 ('RELAT_AB', 0.053648196319146683),
 ('KBA05_ANTG4', 0.052769913306393974),
 ('CAMEO_INTL_2015_wealth', 0.050789697153108188),
 ('PLZ8_ANTG2', 0.05058634644361007),
 ('ANZ_HAUSHALTE_AKTIV', 0.047202140423360239),
 ('HH_EINKOMMEN_SCORE', 0.030840131746433998),
 ('KBA05_ANTG3', 0.029197236252659558),
 ('ANZ_HH_TITEL', 0.027043425799024429),
 ('OST_WEST_KZ', 0.017262596465908837),
 ('CAMEO_INTL_2015_lifestage', 0.014647613960954889),
 ('REGIOTYP', 0.012658803095810594),
 ('PLZ8_HHZ', 0.011396993799864578),
 ('SHOPPER_TYP_2.0', 0.011306006346615603),
 ('PRAEGENDE_JUGENDJAHRE_movement', 0.0077541285433471692),
 ('ANZ_TITEL', 0.0076273488533074116),
 ('GREEN_AVANTGARDE', 0.0022957683038972248),
 ('SOHO_KZ', -0.0017342109157618621),
 ('KKK', -0.0028657850115469687),
 ('KBA05_ANTG2', -0.0098520003586790496),
 ('KBA13_ANZAHL_PKW', -0.027153071750343417),
 ('GEBAEUDETYP_RASTER', -0.031023230235794324),
 ('SHOPPER_TYP_1.0', -0.040286849095975331),
 ('MIN_GEBAEUDEJAHR', -0.041463272205265585),
 ('ANZ_PERSONEN', -0.050939006075352705),
 ('BALLRAUM', -0.051171490651689612),
 ('PLZ8_GBZ', -0.053605695913621014),
 ('KONSUMNAEHE', -0.053708845255129312),
 ('KBA05_ANTG1', -0.054573482863503928),
 ('HEALTH_TYP', -0.060109734500614409),
 ('INNENSTADT', -0.061051169956190886),
 ('MOBI_REGIO', -0.063787237423169801),
 ('WOHNLAGE_neigh', -0.064858790164859098),
 ('KBA05_GBZ', -0.065905231264836509),
 ('PLZ8_ANTG1', -0.067602905384976375),
 ('SEMIO_VERT', -0.072053616510078466),
 ('SHOPPER_TYP_0.0', -0.07619409632584459),
 ('SEMIO_SOZ', -0.109690119475828),
 ('ONLINE_AFFINITAET', -0.15349606894267423),
 ('SEMIO_MAT', -0.16676035852102453),
 ('SEMIO_RAT', -0.17001401754183806),
 ('SEMIO_FAM', -0.19358248842556983),
 ('FINANZ_ANLEGER', -0.20536151214812348),
 ('FINANZ_UNAUFFAELLIGER', -0.22458097171355593),
 ('SEMIO_KULT', -0.22835664921588381),
 ('SEMIO_TRADV', -0.23209728264019644),
 ('SEMIO_PFLICHT', -0.23261028699819525),
 ('FINANZ_SPARER', -0.24147857264889672),
 ('PRAEGENDE_JUGENDJAHRE_decade', -0.25032717339763555),
 ('SEMIO_REL', -0.26336200978444052)]
In [282]:
#draw plot for visualizing those weights
def plot_pca(data, pca, n_component):
    '''
	Plot the features with the most absolute variance for given pca component 
	'''
    Pcomponent = pd.DataFrame(np.round(pca.components_, 4), 
                              columns = data.keys()).iloc[n_component-1]
    Pcomponent.sort_values(ascending=False, inplace=True)
    Pcomponent = pd.concat([Pcomponent.head(5), Pcomponent.tail(5)])
    
    Pcomponent.plot(kind='bar', title='Component ' + str(n_component))
    ax = plt.gca()
    ax.grid(linewidth='0.5', alpha=0.5)
    ax.set_axisbelow(True)
    plt.show()
In [283]:
plot_pca(few_missing, pca, 1)

Discussion

It appears that the first component is concerned with big family homes, and their wealth. it has a positive assocition with the shares of big family homes in the PLZ8 region and their wealth in macro cell features and has a negative association with the micro cell features.

In [284]:
# Map weights for the second principal component to corresponding feature names
# and then print the linked values, sorted by weight.

weights = pca_weights(pca,2)


# Prints the nicely formatted dictionary
pprint.pprint(weights)
[('SEMIO_VERT', 0.33694390690372567),
 ('SEMIO_SOZ', 0.25553110741024171),
 ('SEMIO_FAM', 0.23773700428624681),
 ('SEMIO_KULT', 0.21990552458005441),
 ('FINANZ_MINIMALIST', 0.15798582464651298),
 ('SHOPPER_TYP_0.0', 0.12462345967465777),
 ('RETOURTYP_BK_S', 0.11186172732017072),
 ('FINANZ_VORSORGER', 0.10893735354741116),
 ('W_KEIT_KIND_HH', 0.092479396927734075),
 ('ALTERSKATEGORIE_GROB', 0.086608079910694913),
 ('SEMIO_LUST', 0.06995297439340796),
 ('SEMIO_REL', 0.059190530480938947),
 ('GREEN_AVANTGARDE', 0.052344601168031413),
 ('ORTSGR_KLS9', 0.051634311603126787),
 ('EWDICHTE', 0.049904446996128907),
 ('PRAEGENDE_JUGENDJAHRE_movement', 0.045144273357689238),
 ('PLZ8_ANTG3', 0.044798358974973258),
 ('PLZ8_ANTG4', 0.044764725584105935),
 ('SEMIO_MAT', 0.04444113941019448),
 ('SHOPPER_TYP_1.0', 0.041166653838551427),
 ('WOHNLAGE_rural', 0.038449577374574322),
 ('ARBEIT', 0.036467049395956061),
 ('WOHNDAUER_2008', 0.03530859788436564),
 ('RELAT_AB', 0.033376511438733429),
 ('PLZ8_ANTG2', 0.030405571040145431),
 ('KBA05_ANTG4', 0.025892214965374652),
 ('ANZ_HAUSHALTE_AKTIV', 0.024157758674132449),
 ('CAMEO_INTL_2015_wealth', 0.024114904464956101),
 ('ANZ_HH_TITEL', 0.013481681261453363),
 ('OST_WEST_KZ', 0.012902551215154111),
 ('ANZ_TITEL', 0.0099556081707490255),
 ('KBA05_ANTG3', 0.0062249390613992248),
 ('PLZ8_HHZ', 0.0056041906793755326),
 ('SOHO_KZ', 0.00024871292510576771),
 ('CAMEO_INTL_2015_lifestage', -0.0049366202106405086),
 ('REGIOTYP', -0.0056372503344391473),
 ('ANZ_PERSONEN', -0.0091639633277973385),
 ('KBA05_ANTG2', -0.010934194581784143),
 ('KKK', -0.01550558286589189),
 ('MIN_GEBAEUDEJAHR', -0.019077092421785114),
 ('KBA05_ANTG1', -0.019856173849616982),
 ('KBA13_ANZAHL_PKW', -0.022462488678650534),
 ('HH_EINKOMMEN_SCORE', -0.022995901285671382),
 ('KBA05_GBZ', -0.025001277172183475),
 ('HEALTH_TYP', -0.026153466663445601),
 ('MOBI_REGIO', -0.027087133841122081),
 ('GEBAEUDETYP_RASTER', -0.028736675076122965),
 ('SHOPPER_TYP_3.0', -0.032035924669712453),
 ('PLZ8_GBZ', -0.036645348792983708),
 ('BALLRAUM', -0.036746734323821424),
 ('KONSUMNAEHE', -0.037768535884123329),
 ('WOHNLAGE_neigh', -0.039022273501762156),
 ('FINANZ_HAUSBAUER', -0.039708924284479079),
 ('PLZ8_ANTG1', -0.043904264114800189),
 ('INNENSTADT', -0.044520475130854539),
 ('ONLINE_AFFINITAET', -0.05742027977039748),
 ('SEMIO_TRADV', -0.086098463448866069),
 ('SEMIO_PFLICHT', -0.087808140504225932),
 ('FINANZ_UNAUFFAELLIGER', -0.10449296673713478),
 ('FINANZ_SPARER', -0.11346232077495681),
 ('PRAEGENDE_JUGENDJAHRE_decade', -0.11465525579078052),
 ('SHOPPER_TYP_2.0', -0.11558452570475793),
 ('SEMIO_ERL', -0.16600330410188358),
 ('FINANZ_ANLEGER', -0.19438410324435182),
 ('SEMIO_RAT', -0.22048387376774739),
 ('SEMIO_KRIT', -0.26660111309525469),
 ('SEMIO_DOM', -0.30636830720935149),
 ('SEMIO_KAEM', -0.32843841355814407),
 ('ANREDE_KZ', -0.35792996893144202)]
In [285]:
plot_pca(few_missing, pca, 2)

Discussion

The second principal component is interested in the personality topology and has a possitive association with the sensual minded and event oriented person as opposed to a negative association with the the religious, dutiful rational and traditional minded person.

In [286]:
# Map weights for the third principal component to corresponding feature names
# and then print the linked values, sorted by weight.

weights = pca_weights(pca,3)


# Prints the nicely formatted dictionary
pprint.pprint(weights)
[('GREEN_AVANTGARDE', 0.37751533494767231),
 ('PRAEGENDE_JUGENDJAHRE_movement', 0.36478841068043011),
 ('WOHNLAGE_rural', 0.26073432923754097),
 ('ORTSGR_KLS9', 0.23763510347293124),
 ('EWDICHTE', 0.23602594927470757),
 ('ONLINE_AFFINITAET', 0.14016151831148305),
 ('KBA05_ANTG1', 0.12996309558210997),
 ('PLZ8_HHZ', 0.12202281753537982),
 ('SEMIO_DOM', 0.11398118604780198),
 ('ANZ_PERSONEN', 0.1051964741035562),
 ('MOBI_REGIO', 0.10006689795642418),
 ('FINANZ_UNAUFFAELLIGER', 0.093716873791000341),
 ('RELAT_AB', 0.091398557104667114),
 ('KBA05_GBZ', 0.08735121560153114),
 ('PLZ8_ANTG2', 0.079592460617676927),
 ('FINANZ_MINIMALIST', 0.076536164477086402),
 ('SEMIO_KAEM', 0.072258485171386388),
 ('CAMEO_INTL_2015_lifestage', 0.069868911435925968),
 ('PLZ8_GBZ', 0.065807858989225934),
 ('PLZ8_ANTG3', 0.057208425273713504),
 ('SEMIO_TRADV', 0.056798763084519442),
 ('ARBEIT', 0.055507901069394891),
 ('KBA13_ANZAHL_PKW', 0.051677705246593222),
 ('SEMIO_RAT', 0.050983396939539488),
 ('SHOPPER_TYP_3.0', 0.045541014992618783),
 ('PRAEGENDE_JUGENDJAHRE_decade', 0.044250786311145186),
 ('ANZ_TITEL', 0.03911905657408208),
 ('PLZ8_ANTG4', 0.034281355267962998),
 ('ANREDE_KZ', 0.031453267542888325),
 ('SEMIO_PFLICHT', 0.026653222236793287),
 ('SEMIO_MAT', 0.013997736803584227),
 ('ANZ_HH_TITEL', 0.013817719338825439),
 ('KBA05_ANTG2', 0.013254523345981066),
 ('WOHNDAUER_2008', 0.012142064555993816),
 ('SEMIO_KRIT', 0.0095104097708260064),
 ('HEALTH_TYP', 0.0066434977951931777),
 ('SEMIO_REL', 0.0045308346225441595),
 ('FINANZ_SPARER', 0.0026295206759556411),
 ('SOHO_KZ', 0.0025804062389728744),
 ('PLZ8_ANTG1', 0.00072759678005038199),
 ('SEMIO_SOZ', 0.000274605627889423),
 ('SEMIO_LUST', -0.0049077596912496186),
 ('RETOURTYP_BK_S', -0.0055601499290370109),
 ('SEMIO_VERT', -0.00950064372743082),
 ('SHOPPER_TYP_1.0', -0.0098693858424936734),
 ('SHOPPER_TYP_2.0', -0.013210398172806694),
 ('SEMIO_FAM', -0.018313039399370658),
 ('FINANZ_VORSORGER', -0.019118415767545738),
 ('MIN_GEBAEUDEJAHR', -0.021587271148196484),
 ('SEMIO_KULT', -0.022062619252464746),
 ('SHOPPER_TYP_0.0', -0.022706944543942217),
 ('SEMIO_ERL', -0.023780877632421087),
 ('GEBAEUDETYP_RASTER', -0.039125934144762703),
 ('ALTERSKATEGORIE_GROB', -0.044365070080938034),
 ('ANZ_HAUSHALTE_AKTIV', -0.061899197822667881),
 ('KBA05_ANTG4', -0.067651070584413928),
 ('KBA05_ANTG3', -0.084735167876516734),
 ('W_KEIT_KIND_HH', -0.093059848820811156),
 ('KONSUMNAEHE', -0.10317563679877792),
 ('FINANZ_ANLEGER', -0.10342356819389699),
 ('OST_WEST_KZ', -0.12647770084986884),
 ('FINANZ_HAUSBAUER', -0.12919758962120501),
 ('CAMEO_INTL_2015_wealth', -0.14303905098322969),
 ('REGIOTYP', -0.16829841328461168),
 ('INNENSTADT', -0.18099114238226252),
 ('BALLRAUM', -0.19691425879930588),
 ('KKK', -0.21744971644798611),
 ('HH_EINKOMMEN_SCORE', -0.25255725242651883),
 ('WOHNLAGE_neigh', -0.25860668754844307)]
In [287]:
plot_pca(few_missing, pca, 3)

Discussion

The third principal component is also concerned with personalities, this one has a negative association with the combative, dominant minded, critical, rational and event oriented person and a possitive association with the cultural minded, familial, social and dreamy person.

Discussion 2.3: Interpret Principal Components

In the dicussion points above, I discussed the main positive and negative values that we extracted from the PCA analysis, the values are certainly explainable and are filled with very interesting and helpful information. This can already help us in detecting trends, customer groups and how to approach any new marketing campaign.

Step 3: Clustering

Step 3.1: Apply Clustering to General Population

You've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.

  • Use sklearn's KMeans class to perform k-means clustering on the PCA-transformed data.
  • Then, compute the average difference from each point to its assigned cluster's center. Hint: The KMeans object's .score() method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. Try applying it to a small, toy dataset, or use an internet search to help your understanding.
  • Perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. However, each additional cluster provides a smaller net benefit. Use this fact to select a final number of clusters in which to group the data. Warning: because of the large size of the dataset, it can take a long time for the algorithm to resolve. The more clusters to fit, the longer the algorithm will take. You should test for cluster counts through at least 10 clusters to get the full picture, but you shouldn't need to test for a number of clusters above about 30.
  • Once you've selected a final number of clusters to use, re-fit a KMeans instance to perform the clustering operation. Make sure that you also obtain the cluster assignments for the general demographics data, since you'll be using them in the final Step 3.3.
In [288]:
# Over a number of different cluster counts...
def get_kmeans_score(data, center):
    '''
    returns the kmeans score regarding SSE for points to centers
    INPUT:
        data - the dataset you want to fit kmeans to
        center - the number of centers you want (the k value)
    OUTPUT:
        score - the SSE score for the kmeans model fit to the data
    '''

    # run k-means clustering on the data .
    kmeans = KMeans(n_clusters=center)
     # fit the model
    model = kmeans.fit(data)
    # Obtain a score related to the model fit
     # compute the average within-cluster distances
    score = np.abs(model.score(data))
    
    return score
    
   
    
    
In [289]:
# Over a number of different cluster counts...
# run k-means clustering on the data and...
# compute the average within-cluster distances.
#this takes a while
start_time = time.time()

scores = []
centers = list(range(1,30,2))

for center in centers:
    scores.append(get_kmeans_score(few_missing_pca, center))
    
print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 84.0 mins ---
In [290]:
print(scores)
[45652388.601343751, 35837131.779208362, 32318643.977452036, 29912319.117318157, 28296498.521711361, 27274501.685615662, 26011185.091405515, 25749410.553372856, 24673762.099093843, 24454909.331905462, 23755420.042959753, 23158069.856501151, 22833526.47130622, 22470983.469359431, 22178657.105992272]
In [291]:
# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship.
# SSE = sum of squared errors
plt.plot(centers, scores, linestyle='--', marker='o', color='b');
plt.xlabel('K');
plt.ylabel('SSE');
plt.title('SSE vs. K');
In [294]:
reduced_data = PCA(n_components=2).fit_transform(few_missing)
#clusters = KMeans(n_clusters=best_k, random_state=100).fit(reduced_data)
clusters = KMeans(n_clusters=15, random_state=100).fit(reduced_data)

x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
hx = (x_max-x_min)/1000.
hy = (y_max-y_min)/1000.
xx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))


Z = clusters.predict(np.c_[xx.ravel(), yy.ravel()])
centroids = clusters.cluster_centers_

def PCA_plot(Z, centroids):
    Z = Z.reshape(xx.shape)
    plt.figure(1, figsize=(20, 10))
    plt.clf()
    plt.imshow(Z, interpolation='nearest',
               extent=(xx.min(), xx.max(), yy.min(), yy.max()),
               cmap=plt.cm.Paired,
               aspect='auto', origin='lower')

    plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2, alpha=0.1)
    plt.scatter(centroids[:, 0], centroids[:, 1],
                marker='x', s=169, linewidths=3,
                color='w', zorder=10)
    plt.title('Clustering on the dataset (PCA-reduced data)\n'
              'Centroids are marked with white cross')
    plt.xlim(x_min, x_max)
    plt.ylim(y_min, y_max)
    plt.xticks(())
    plt.yticks(())
    plt.show()

PCA_plot(Z, centroids)

Discussion

Although there is isn't a clear 'elbow' in the graphs above, 15 clusters seems to me like a good cut off where most for the data can fit nicely. anything after 15 becomes way too granular

In [295]:
# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
start_time = time.time()
kmeans = KMeans(n_clusters=15)
model_general = kmeans.fit(few_missing_pca)
print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 5.83 mins ---
In [296]:
predict_general = model_general.predict(few_missing_pca)
In [ ]:
 

Discussion 3.1: Apply Clustering to General Population

I chose 15 clusters and refit the model accordingly

Step 3.2: Apply All Steps to the Customer Data

Now that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.

  • Don't forget when loading in the customers data, that it is semicolon (;) delimited.
  • Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the clean_data() function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)
  • Use the sklearn objects from the general demographics data, and apply their transformations to the customers data. That is, you should not be using a .fit() or .fit_transform() method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.
In [333]:
# Load in the customer demographics data.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv', delimiter=';')
In [334]:
customers.head()
Out[334]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
0 2 4 1 5.0 5 1 5 1 2 2 ... 3.0 3.0 1.0 0.0 1.0 5.0 5.0 1.0 2.0 1.0
1 -1 4 1 NaN 5 1 5 1 3 2 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 -1 4 2 2.0 5 1 5 1 4 4 ... 2.0 3.0 3.0 1.0 3.0 3.0 2.0 3.0 5.0 3.0
3 1 4 1 2.0 5 1 5 2 1 2 ... 3.0 2.0 1.0 0.0 1.0 3.0 4.0 1.0 3.0 1.0
4 -1 3 1 6.0 3 1 4 4 5 2 ... 2.0 4.0 2.0 1.0 2.0 3.0 3.0 3.0 5.0 1.0

5 rows × 85 columns

In [335]:
def clean_data(df):
    """
    Perform feature trimming, re-encoding, and engineering for demographics
    data
    
    INPUT: Demographics DataFrame
    OUTPUT: Trimmed and cleaned demographics DataFrame
    """
    
    # Put in code here to execute all main cleaning steps:
    # convert missing value codes into NaNs, ...
    feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', delimiter=';')
    feat_info['missing_or_unknown'] = feat_info['missing_or_unknown'].apply(lambda x: x[1:-1].split(','))



    # Identify missing or unknown data values and convert them to NaNs.
    for attrib, missing_values in zip(feat_info['attribute'], feat_info['missing_or_unknown']):
        if missing_values[0] != '':
            for value in missing_values:
                if value.isnumeric() or value.lstrip('-').isnumeric():
                    value = int(value)
                azdias.loc[azdias[attrib] == value, attrib] = np.nan
    # remove selected columns and rows, ...

    #Columns
    columns_removed = ['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 
                       'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX',
                       'CAMEO_DEU_2015']
    
    
    for col in columns_removed:
        df.drop(col, axis=1, inplace=True)
        
    #Rows
    few_missing = df[df.isnull().sum(axis=1) < 10].reset_index(drop=True)
    
    #interpolate and impute the NaNs
    for col in few_missing.columns:
        few_missing[col] = few_missing[col].interpolate(limit_direction='both')
    
    # select, re-encode, and engineer column values.
    #drop non categorical
    non_binary_cols = ['CJT_GESAMTTYP','FINANZTYP','GFK_URLAUBERTYP','LP_FAMILIE_FEIN',
                       'LP_FAMILIE_GROB','LP_STATUS_FEIN','LP_STATUS_GROB','NATIONALITAET_KZ',
                       'VERS_TYP','ZABEOTYP','GEBAEUDETYP','CAMEO_DEUG_2015']
    
    for col in non_binary_cols:
        few_missing.drop(col, axis=1, inplace=True)
        
    
    #Reencode the categorical and get dummies for the shopper type
    few_missing.loc[:, 'OST_WEST_KZ'].replace({'W':0, 'O':1}, inplace=True);
    few_missing['SHOPPER_TYP'] = few_missing['SHOPPER_TYP'].round()
    few_missing = pd.get_dummies(data=few_missing, columns=['SHOPPER_TYP'])
    
     # Complete the mapping process of mixed features
    few_missing['PRAEGENDE_JUGENDJAHRE_decade'] = few_missing['PRAEGENDE_JUGENDJAHRE'].apply(map_gen)
    few_missing['PRAEGENDE_JUGENDJAHRE_movement'] = few_missing['PRAEGENDE_JUGENDJAHRE'].apply(map_mov)
    few_missing.drop('PRAEGENDE_JUGENDJAHRE', axis=1, inplace=True)   
  
    few_missing['CAMEO_INTL_2015_wealth'] = few_missing['CAMEO_INTL_2015'].apply(map_wealth)
    few_missing['CAMEO_INTL_2015_lifestage'] = few_missing['CAMEO_INTL_2015'].apply(map_lifestage)
    few_missing.drop('CAMEO_INTL_2015', axis=1, inplace=True)
    
    few_missing['WOHNLAGE_rural'] = few_missing['WOHNLAGE'].apply(map_rur)
    few_missing['WOHNLAGE_neigh'] = few_missing['WOHNLAGE'].apply(map_neigh)
    few_missing.drop('WOHNLAGE', axis=1, inplace=True)
    
    #drop mixed
    mixed = ['LP_LEBENSPHASE_FEIN','LP_LEBENSPHASE_GROB','PLZ8_BAUMAX']
    for col in mixed:
        few_missing.drop(col, axis=1, inplace=True)
        
    # interpolate again to clean the data set
    for col in few_missing.columns:
        few_missing[col] = few_missing[col].interpolate(limit_direction='both')
    
    # Return the cleaned dataframe.
    return few_missing

    
In [336]:
# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
#this takes a while

start_time = time.time()

customers_clean = clean_data(customers)

print("--- Run time: %s mins ---" % np.round(((time.time() - start_time)/60),2))
--- Run time: 0.14 mins ---
In [337]:
customers_clean.head()
Out[337]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0 SHOPPER_TYP_1 SHOPPER_TYP_2 SHOPPER_TYP_3 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 4 1 5 1 5 1 2 2 1 1 ... 0 0 0 1 1.0 1 1.0 3.0 0 1
1 4 2 5 1 5 1 4 4 1 2 ... 0 1 0 0 1.0 1 3.0 4.0 1 0
2 4 1 5 1 5 2 1 2 0 2 ... 1 0 0 0 0.0 0 2.0 4.0 0 1
3 3 1 3 1 4 4 5 2 0 3 ... 0 1 0 0 3.0 0 4.0 1.0 1 0
4 3 1 5 1 5 1 2 3 1 3 ... 0 1 0 0 1.0 1 3.0 4.0 1 0

5 rows × 70 columns

In [338]:
customers_clean.isnull().sum().sum()
Out[338]:
0
In [339]:
customers_clean.describe()
Out[339]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0 SHOPPER_TYP_1 SHOPPER_TYP_2 SHOPPER_TYP_3 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
count 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 ... 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000 136618.000000
mean 3.501127 1.330022 4.268808 1.423282 4.571206 1.597989 1.847560 2.741125 0.500307 1.917295 ... 0.214532 0.258048 0.164363 0.346887 1.873578 0.508330 2.596071 3.371554 0.761510 0.242728
std 0.759636 0.470223 1.017442 0.826992 0.837980 1.000277 0.966183 1.314660 0.500002 0.852617 ... 0.410499 0.437562 0.370606 0.475981 1.339633 0.499932 1.404890 1.338064 0.426162 0.428733
min 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 0.000000 -1.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 1.000000 0.000000 0.000000
25% 3.000000 1.000000 4.000000 1.000000 4.000000 1.000000 1.000000 2.000000 0.000000 1.000000 ... 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 1.000000 2.000000 1.000000 0.000000
50% 4.000000 1.000000 5.000000 1.000000 5.000000 1.000000 2.000000 2.000000 1.000000 2.000000 ... 0.000000 0.000000 0.000000 0.000000 2.000000 1.000000 2.000000 4.000000 1.000000 0.000000
75% 4.000000 2.000000 5.000000 2.000000 5.000000 2.000000 2.000000 4.000000 1.000000 3.000000 ... 0.000000 1.000000 0.000000 1.000000 3.000000 1.000000 4.000000 4.000000 1.000000 0.000000
max 9.000000 2.000000 5.000000 5.000000 5.000000 5.000000 5.000000 5.000000 1.000000 3.000000 ... 1.000000 1.000000 1.000000 1.000000 5.000000 1.000000 5.000000 5.000000 1.000000 1.000000

8 rows × 70 columns

looking at this, it has encoded the shopper_typ -1 as its own column. Frankly, I am unclear on why this happened, and I have tried reorganizing my clean data function, but it's not working. given that this columns is essentially a collection of NaN values (shopper_typ -1 = unkown), I will delete this column here

In [342]:
customers_clean.drop('SHOPPER_TYP_-1', axis=1, inplace=True)
In [343]:
customers_clean.columns
Out[343]:
Index(['ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'FINANZ_MINIMALIST',
       'FINANZ_SPARER', 'FINANZ_VORSORGER', 'FINANZ_ANLEGER',
       'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER', 'GREEN_AVANTGARDE',
       'HEALTH_TYP', 'RETOURTYP_BK_S', 'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL',
       'SEMIO_MAT', 'SEMIO_VERT', 'SEMIO_LUST', 'SEMIO_ERL', 'SEMIO_KULT',
       'SEMIO_RAT', 'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM', 'SEMIO_PFLICHT',
       'SEMIO_TRADV', 'SOHO_KZ', 'ANZ_PERSONEN', 'ANZ_TITEL',
       'HH_EINKOMMEN_SCORE', 'W_KEIT_KIND_HH', 'WOHNDAUER_2008',
       'ANZ_HAUSHALTE_AKTIV', 'ANZ_HH_TITEL', 'KONSUMNAEHE',
       'MIN_GEBAEUDEJAHR', 'OST_WEST_KZ', 'KBA05_ANTG1', 'KBA05_ANTG2',
       'KBA05_ANTG3', 'KBA05_ANTG4', 'KBA05_GBZ', 'BALLRAUM', 'EWDICHTE',
       'INNENSTADT', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO',
       'ONLINE_AFFINITAET', 'REGIOTYP', 'KBA13_ANZAHL_PKW', 'PLZ8_ANTG1',
       'PLZ8_ANTG2', 'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_HHZ', 'PLZ8_GBZ',
       'ARBEIT', 'ORTSGR_KLS9', 'RELAT_AB', 'SHOPPER_TYP_0', 'SHOPPER_TYP_1',
       'SHOPPER_TYP_2', 'SHOPPER_TYP_3', 'PRAEGENDE_JUGENDJAHRE_decade',
       'PRAEGENDE_JUGENDJAHRE_movement', 'CAMEO_INTL_2015_wealth',
       'CAMEO_INTL_2015_lifestage', 'WOHNLAGE_rural', 'WOHNLAGE_neigh'],
      dtype='object')
In [344]:
customers_clean.head()
Out[344]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0 SHOPPER_TYP_1 SHOPPER_TYP_2 SHOPPER_TYP_3 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 4 1 5 1 5 1 2 2 1 1 ... 0 0 0 1 1.0 1 1.0 3.0 0 1
1 4 2 5 1 5 1 4 4 1 2 ... 0 1 0 0 1.0 1 3.0 4.0 1 0
2 4 1 5 1 5 2 1 2 0 2 ... 1 0 0 0 0.0 0 2.0 4.0 0 1
3 3 1 3 1 4 4 5 2 0 3 ... 0 1 0 0 3.0 0 4.0 1.0 1 0
4 3 1 5 1 5 1 2 3 1 3 ... 0 1 0 0 1.0 1 3.0 4.0 1 0

5 rows × 69 columns

In [345]:
#normalization using StandardScaler
customers_clean[customers_clean.columns] = scaler.transform(customers_clean[customers_clean.columns].as_matrix())

#transform the customers data using pca object
customers_clean_pca = pca.transform(customers_clean)
#predict clustering using the kmeans object
predict_customers = model_general.predict(customers_clean_pca)
In [ ]:
 

Step 3.3: Compare Customer Data to Demographics Data

At this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.

Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.

Take a look at the following points in this step:

  • Compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, but also to visualize the ratios in cluster representation between groups. Seaborn's countplot() or barplot() function could be handy.
    • Recall the analysis you performed in step 1.1.3 of the project, where you separated out certain data points from the dataset if they had more than a specified threshold of missing values. If you found that this group was qualitatively different from the main bulk of the data, you should treat this as an additional data cluster in this analysis. Make sure that you account for the number of data points in this subset, for both the general population and customer datasets, when making your computations!
  • Which cluster or clusters are overrepresented in the customer dataset compared to the general population? Select at least one such cluster and infer what kind of people might be represented by that cluster. Use the principal component interpretations from step 2.3 or look at additional components to help you make this inference. Alternatively, you can use the .inverse_transform() method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly.
  • Perform a similar investigation for the underrepresented clusters. Which cluster or clusters are underrepresented in the customer dataset compared to the general population, and what kinds of people are typified by these clusters?
In [361]:
def plot_scaled_comparison(df_sample, kmeans, cluster):
    X = pd.DataFrame.from_dict(dict(zip(df_sample.columns,
pca.inverse_transform(kmeans.cluster_centers_[cluster]))), orient='index').rename(
columns={0: 'feature_values'}).sort_values('feature_values', ascending=False)
    X['feature_values_abs'] = abs(X['feature_values'])
    pd.concat((X['feature_values'][:10], X['feature_values'][-10:]), axis=0).plot(kind='barh');
In [364]:
plot_scaled_comparison(customers_clean, kmeans, 14)

We can clearly see that in our analysis here that rational, traditional, combative (aggressive) and investors are over represented in the customers data compared to the general population. It also shows that there is little representation of financial minimalists with little investment, young people and dreamful ones in the customers data.

In [350]:
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.

general_port = []
customers_port = []
x = [i+1 for i in range(20)]
for i in range(20):
    general_port.append((predict_general == i).sum()/len(predict_general))
    customers_port.append((predict_customers == i).sum()/len(predict_customers))


df_general = pd.DataFrame({'cluster' : x, 'propotion_general' : general_port, 'proportion_customers':customers_port})

#ax = sns.countplot(x='index', y = df_general['prop_1', 'prop_2'], data=df_general )
df_general.plot(x='cluster', y = ['propotion_general', 'proportion_customers'], kind='bar', figsize=(9,6))
plt.ylabel('proportion of persons in each cluster')
plt.show()
In [354]:
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?
data = scaler.inverse_transform(pca.inverse_transform(customers_clean_pca[np.where(predict_customers==8)])).round()
df = pd.DataFrame(data=data,
          index=np.array(range(0, data.shape[0])),
          columns=few_missing.columns)
df.head(10)
Out[354]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 3.0 2.0 3.0 2.0 4.0 2.0 3.0 3.0 0.0 2.0 ... -0.0 -0.0 -0.0 1.0 3.0 -0.0 3.0 1.0 1.0 0.0
1 3.0 2.0 3.0 4.0 2.0 4.0 4.0 3.0 1.0 -1.0 ... 0.0 0.0 0.0 0.0 4.0 1.0 2.0 4.0 1.0 0.0
2 3.0 2.0 3.0 3.0 4.0 2.0 3.0 4.0 1.0 3.0 ... -0.0 0.0 1.0 0.0 4.0 1.0 3.0 5.0 1.0 -0.0
3 3.0 2.0 4.0 4.0 2.0 4.0 4.0 -1.0 1.0 3.0 ... 0.0 -0.0 1.0 0.0 5.0 1.0 2.0 2.0 -0.0 1.0
4 3.0 2.0 5.0 2.0 4.0 2.0 3.0 1.0 1.0 2.0 ... -0.0 0.0 1.0 0.0 3.0 1.0 1.0 5.0 1.0 0.0
5 3.0 2.0 5.0 2.0 3.0 2.0 3.0 0.0 1.0 2.0 ... -0.0 0.0 -0.0 1.0 3.0 1.0 2.0 3.0 1.0 0.0
6 2.0 2.0 4.0 2.0 3.0 2.0 3.0 1.0 0.0 2.0 ... -0.0 0.0 0.0 1.0 4.0 -0.0 1.0 4.0 1.0 0.0
7 3.0 2.0 2.0 4.0 2.0 5.0 4.0 3.0 1.0 3.0 ... 0.0 -0.0 1.0 0.0 5.0 1.0 2.0 4.0 0.0 1.0
8 3.0 2.0 4.0 2.0 3.0 2.0 3.0 1.0 1.0 2.0 ... -0.0 0.0 -0.0 1.0 4.0 1.0 2.0 2.0 1.0 0.0
9 3.0 2.0 4.0 1.0 5.0 2.0 1.0 3.0 1.0 3.0 ... 0.0 -0.0 -0.0 1.0 2.0 1.0 2.0 4.0 1.0 -0.0

10 rows × 69 columns

In [355]:
data_2 = scaler.inverse_transform(pca.inverse_transform(few_missing_pca[np.where(predict_general==8)])).round()
df_2 = pd.DataFrame(data=data_2,
          index=np.array(range(0, data_2.shape[0])),
          columns=few_missing.columns)
df_2.head(10)
Out[355]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 3.0 2.0 2.0 4.0 2.0 4.0 4.0 4.0 1.0 3.0 ... -0.0 0.0 1.0 0.0 4.0 1.0 2.0 4.0 1.0 -0.0
1 3.0 2.0 1.0 4.0 2.0 4.0 4.0 4.0 1.0 2.0 ... -0.0 0.0 0.0 1.0 4.0 1.0 2.0 5.0 0.0 1.0
2 3.0 2.0 4.0 1.0 5.0 1.0 2.0 3.0 1.0 2.0 ... 0.0 -0.0 1.0 -0.0 2.0 1.0 2.0 5.0 1.0 0.0
3 1.0 2.0 3.0 4.0 2.0 5.0 4.0 2.0 1.0 3.0 ... 0.0 -0.0 1.0 -0.0 5.0 1.0 1.0 5.0 1.0 0.0
4 1.0 2.0 3.0 4.0 2.0 3.0 4.0 2.0 1.0 2.0 ... -0.0 1.0 -0.0 0.0 5.0 1.0 2.0 3.0 1.0 0.0
5 2.0 2.0 3.0 4.0 2.0 4.0 4.0 1.0 0.0 3.0 ... 0.0 0.0 0.0 1.0 5.0 -0.0 1.0 4.0 1.0 0.0
6 3.0 2.0 2.0 4.0 2.0 5.0 5.0 3.0 1.0 3.0 ... -0.0 0.0 1.0 0.0 5.0 1.0 3.0 3.0 1.0 -0.0
7 4.0 2.0 2.0 5.0 2.0 5.0 5.0 3.0 1.0 1.0 ... 0.0 -0.0 -0.0 1.0 5.0 1.0 3.0 3.0 1.0 0.0
8 2.0 2.0 4.0 3.0 3.0 3.0 4.0 2.0 1.0 2.0 ... -0.0 0.0 1.0 0.0 4.0 1.0 2.0 4.0 1.0 0.0
9 1.0 2.0 2.0 5.0 1.0 5.0 4.0 3.0 0.0 3.0 ... 0.0 0.0 0.0 1.0 6.0 -0.0 2.0 4.0 1.0 -0.0

10 rows × 69 columns

In [359]:
# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?

data_3 = scaler.inverse_transform(pca.inverse_transform(customers_clean_pca[np.where(predict_customers==13)])).round()
df = pd.DataFrame(data=data_3,
          index=np.array(range(0, data_3.shape[0])),
          columns=few_missing.columns)
df.head(10)
Out[359]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 4.0 2.0 2.0 1.0 5.0 1.0 1.0 5.0 -0.0 2.0 ... 0.0 -0.0 -0.0 1.0 2.0 0.0 4.0 5.0 1.0 0.0
1 4.0 2.0 3.0 1.0 5.0 1.0 1.0 4.0 -0.0 2.0 ... 0.0 1.0 0.0 -0.0 1.0 -0.0 4.0 3.0 1.0 -0.0
2 4.0 2.0 3.0 1.0 5.0 2.0 1.0 5.0 -0.0 1.0 ... -0.0 0.0 1.0 0.0 2.0 0.0 6.0 1.0 1.0 -0.0
3 4.0 2.0 2.0 2.0 5.0 2.0 2.0 5.0 0.0 2.0 ... -0.0 1.0 -0.0 0.0 2.0 1.0 4.0 4.0 -0.0 1.0
4 3.0 2.0 2.0 1.0 5.0 1.0 1.0 5.0 -0.0 3.0 ... 0.0 -0.0 1.0 0.0 2.0 -0.0 5.0 2.0 1.0 -0.0
5 4.0 2.0 2.0 3.0 4.0 2.0 3.0 4.0 -0.0 2.0 ... -0.0 0.0 0.0 1.0 3.0 0.0 4.0 3.0 1.0 0.0
6 3.0 2.0 2.0 2.0 4.0 2.0 2.0 5.0 0.0 2.0 ... -0.0 -0.0 -0.0 1.0 2.0 0.0 5.0 1.0 1.0 0.0
7 4.0 2.0 2.0 1.0 4.0 2.0 1.0 5.0 0.0 1.0 ... -0.0 0.0 -0.0 1.0 2.0 0.0 5.0 1.0 1.0 -0.0
8 3.0 2.0 2.0 3.0 4.0 3.0 3.0 4.0 -0.0 1.0 ... -0.0 0.0 0.0 1.0 3.0 0.0 4.0 5.0 1.0 -0.0
9 3.0 2.0 2.0 2.0 5.0 2.0 2.0 5.0 0.0 -0.0 ... 0.0 0.0 0.0 0.0 2.0 0.0 5.0 2.0 1.0 0.0

10 rows × 69 columns

In [360]:
data_4 = scaler.inverse_transform(pca.inverse_transform(few_missing_pca[np.where(predict_general==13)])).round()
df = pd.DataFrame(data=data_4,
          index=np.array(range(0, data_4.shape[0])),
          columns=few_missing.columns)
df.head(10)
Out[360]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 PRAEGENDE_JUGENDJAHRE_decade PRAEGENDE_JUGENDJAHRE_movement CAMEO_INTL_2015_wealth CAMEO_INTL_2015_lifestage WOHNLAGE_rural WOHNLAGE_neigh
0 3.0 2.0 2.0 3.0 3.0 3.0 2.0 3.0 0.0 3.0 ... 0.0 -0.0 1.0 0.0 4.0 0.0 5.0 2.0 1.0 0.0
1 4.0 2.0 2.0 2.0 5.0 1.0 2.0 5.0 0.0 2.0 ... 0.0 1.0 -0.0 0.0 2.0 1.0 3.0 5.0 1.0 -0.0
2 4.0 2.0 2.0 4.0 3.0 3.0 4.0 4.0 -0.0 1.0 ... -0.0 0.0 -0.0 1.0 4.0 0.0 5.0 2.0 1.0 0.0
3 3.0 2.0 1.0 4.0 2.0 4.0 4.0 4.0 -0.0 3.0 ... -0.0 0.0 0.0 1.0 5.0 -0.0 4.0 4.0 1.0 0.0
4 3.0 2.0 1.0 4.0 2.0 4.0 3.0 5.0 -0.0 2.0 ... -0.0 0.0 1.0 0.0 5.0 0.0 5.0 1.0 1.0 0.0
5 3.0 2.0 2.0 3.0 3.0 3.0 2.0 4.0 -0.0 3.0 ... -0.0 -0.0 1.0 0.0 4.0 0.0 4.0 4.0 1.0 -0.0
6 3.0 2.0 2.0 3.0 4.0 2.0 2.0 4.0 0.0 3.0 ... 0.0 -0.0 1.0 0.0 3.0 0.0 5.0 2.0 1.0 -0.0
7 4.0 2.0 3.0 1.0 5.0 2.0 0.0 5.0 -0.0 2.0 ... -0.0 0.0 -0.0 1.0 1.0 0.0 4.0 4.0 1.0 -0.0
8 3.0 2.0 3.0 2.0 5.0 1.0 2.0 5.0 1.0 2.0 ... -0.0 0.0 1.0 0.0 2.0 1.0 3.0 5.0 1.0 -0.0
9 4.0 2.0 3.0 1.0 5.0 1.0 1.0 4.0 0.0 2.0 ... 0.0 -0.0 -0.0 1.0 2.0 0.0 4.0 2.0 1.0 0.0

10 rows × 69 columns

Discussion 3.3: Compare Customer Data to Demographics Data

I think clusters 8, 1,and 2 are the most popular with the company. relatively speaking though, there doesn't seem to be a very clear definition of the relationship between the customers and the general population data sets. However, it is important to keep in mind that we removed a significant chunk of variables that had significant information. I think in the future, a project like should really be the result of in depth conversations and continued supervision by the client to provide better results. Nevertheless, I am still very happy with the results of the analysis and would like to believe that this work delivered significant value to the client.

Congratulations on making it this far in the project! Before you finish, make sure to check through the entire notebook from top to bottom to make sure that your analysis follows a logical flow and all of your findings are documented in Discussion cells. Once you've checked over all of your work, you should export the notebook as an HTML document to submit for evaluation. You can do this from the menu, navigating to File -> Download as -> HTML (.html). You will submit both that document and this notebook for your project submission.

In [ ]: