
Chapter 2: Exploratory Data Analysis and Visualization


Activity 2: Summary Statistics and Missing Values

Solution

The steps to complete this activity are as follows:
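
All of the solution code assumes the standard imports used throughout the chapter are already in place. A minimal setup (using the aliases pd, np, plt, sns, and msno that appear in the snippets below) would be:

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import seaborn as sns
    import missingno as msno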

  1. Read the data. Use pandas' .read_csv method to read the CSV file into a pandas DataFrame:

    data = pd.read_csv('house_prices.csv')
  2. Use pandas' .info() and .describe() methods to view the summary statistics of the dataset:

    data.info()
    data.describe().T

    The output of info() will be:

    Figure 2.39: The output of the info() method

    The output of describe() will be:

    Figure 2.40: The output of the describe() method
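
    Note that .describe() summarizes only the numeric columns by default. To also get summary statistics (count, unique values, top value, and its frequency) for the object columns, you can pass include='all':

    data.describe(include='all').T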

  3. Find the total count and total percentage of missing values in each column of the DataFrame and display them for columns having at least one null value, in descending order of missing percentages.

    As we did in Exercise 12: Visualizing Missing Values, we use the .isnull() function on the DataFrame to get a mask of its null values. Applying .sum() over this mask gives the count of nulls in each column, and applying .mean() (multiplied by 100 to convert the fraction to a percentage) gives the percentage of nulls. We then use pd.concat() to combine the totals and percentages into a single DataFrame and sort the rows by the percentage of missing values:

    mask = data.isnull()
    total = mask.sum()
    percent = 100*mask.mean()
    
    missing_data = pd.concat([total, percent], axis=1, join='outer',
                             keys=['count_missing', 'perc_missing'])
    missing_data.sort_values(by='perc_missing', ascending=False, inplace=True)
    
    missing_data[missing_data.count_missing > 0]

    The output will be:

    Figure 2.41: Total count and percentage of missing values in each column
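
    As an equivalent alternative (a stylistic variation, not part of the original solution), the same summary can be built in one chained expression by aggregating over the mask:

    missing_data = (mask.agg(['sum', 'mean']).T
                        .rename(columns={'sum': 'count_missing',
                                         'mean': 'perc_missing'}))
    missing_data['perc_missing'] *= 100  # fraction to percentage
    missing_data[missing_data.count_missing > 0].sort_values(
        by='perc_missing', ascending=False)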

  4. Plot the nullity matrix and nullity correlation heatmap. First, we find the list of column names for those having at least one null value. Then, we use the missingno library to plot the nullity matrix (as we did in Exercise 12: Visualizing Missing Values) for a sample of 500 points, and the nullity correlation heatmap for the data in those columns:

    nullable_columns = data.columns[mask.any()].tolist()
    msno.matrix(data[nullable_columns].sample(500))
    plt.show()
    
    msno.heatmap(data[nullable_columns], figsize=(18,18))
    plt.show()

    The nullity matrix will look like this:

    Figure 2.42: Nullity matrix

    The nullity correlation heatmap will look like this:

    Figure 2.43: Nullity correlation heatmap
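
    Note that .sample(500) draws a random sample, so the nullity matrix will look slightly different on each run; to make it reproducible, pass a fixed seed (the value 42 here is arbitrary):

    msno.matrix(data[nullable_columns].sample(500, random_state=42))
    plt.show()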

  5. Delete the columns having more than 80% of their values missing. Use the .loc indexer on data, together with the index of the summary DataFrame we created in Step 3, to keep only those columns that have less than 80% of their values missing:

    data = data.loc[:,missing_data[missing_data.perc_missing < 80].index]
  6. Replace null values in the FireplaceQu column with NA values. Use the .fillna() method to replace the nulls with the string 'NA':

    data['FireplaceQu'] = data['FireplaceQu'].fillna('NA')
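
    As a quick sanity check (not part of the original activity), you can confirm that the column no longer contains nulls and that 'NA' now appears as a regular category:

    print(data['FireplaceQu'].isnull().sum())  # should print 0
    data['FireplaceQu'].value_counts()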

Activity 3: Visually Representing the Distribution of Values

Solution

  1. Plot a histogram using Matplotlib for the target variable, SalePrice. First, we initialize the figure using the plt.figure command and set the figure size. Then, we use Matplotlib's .hist() function as our primary plotting function, to which we pass the SalePrice series object for plotting the histogram. Lastly, we specify the axes labels and show the plot:

    plt.figure(figsize=(8,6))
    
    plt.hist(data.SalePrice, bins=range(0,800000,50000))
    
    plt.ylabel('Number of data points')
    plt.xlabel('SalePrice')
    plt.show()

    The output will be as follows:

    Figure 2.44: Histogram for the target variable

  2. Find the number of unique values within each column having the object type. Create a new DataFrame called object_variables by using the .select_dtypes function on the original DataFrame to select those columns with the object data type. Then, find the number of unique values for each column in this DataFrame using the .nunique() function, and sort the resultant series:

    object_variables = data.select_dtypes(include=['object'])
    object_variables.nunique().sort_values()

    The output will be:

    Figure 2.45: Number of unique values within each column having the object type

  3. Create a DataFrame representing the number of occurrences for each categorical value in the HouseStyle column. Use the .value_counts() function to calculate the frequencies of each value, in decreasing order, as a pandas Series, then reset the index to obtain a DataFrame and sort its rows by the index:

    counts = data.HouseStyle.value_counts(dropna=False)
    counts.reset_index().sort_values(by='index')

    The output will be:

    Figure 2.46: Number of occurrences for each categorical value in the HouseStyle column
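
    Note: the columns produced by .reset_index() changed in pandas 2.0; the counts column is now named 'count' and the index column takes the Series' name, so sort_values(by='index') raises a KeyError on recent versions. A version-robust equivalent (the column names here are illustrative) would be:

    counts = data.HouseStyle.value_counts(dropna=False)
    counts.rename_axis('HouseStyle').reset_index(name='counts') \
          .sort_values(by='HouseStyle')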

  4. Plot a pie chart representing these counts. As in Step 1, we initialize the figure using plt.figure() and use the plt.title() and plt.show() methods to set the figure title and display the plot, respectively. The primary plotting function used is plt.pie(), to which we pass the series we created in the previous step:

    plt.figure(figsize=(10,10))
    plt.pie(counts, labels=counts.index)
    plt.title('Pie chart showing counts for\nHouseStyle categories')
    plt.show()

    The output will be as follows:

    Figure 2.47: Pie chart representing the counts
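
    Optionally, plt.pie() also accepts an autopct parameter to annotate each wedge with its percentage share, for example:

    plt.figure(figsize=(10,10))
    plt.pie(counts, labels=counts.index, autopct='%1.1f%%')
    plt.title('Pie chart showing counts for\nHouseStyle categories')
    plt.show()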

  5. Find the number of unique values within each column having the number type. As done in Step 2, now select columns having the numpy.number data type and find the number of unique values in each column using .nunique(). Sort the resultant series in descending order:

    numeric_variables = data.select_dtypes(include=[np.number])
    numeric_variables.nunique().sort_values(ascending=False)

    The output will be as follows:

    Figure 2.48: Number of unique values within each column having the number type

  6. Plot a histogram using Seaborn for the LotArea variable. Use Seaborn's .distplot() function as the primary plotting function, to which the LotArea series in the DataFrame needs to be passed (without any null values; use .dropna() on the series to remove them). To improve the plot view, also set the bins parameter and specify the x-axis limits using plt.xlim():

    plt.figure(figsize=(10,7))
    sns.distplot(data.LotArea.dropna(), bins=range(0,100000,1000))
    plt.xlim(0,100000)
    plt.show()

    The output will be as follows:

    Figure 2.49: Histogram for the LotArea variable
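
    Note that .distplot() is deprecated in Seaborn 0.11+ and has since been removed; on recent versions, a rough equivalent using .histplot() would be:

    plt.figure(figsize=(10,7))
    sns.histplot(data.LotArea.dropna(), bins=range(0,100000,1000),
                 kde=True, stat='density')
    plt.xlim(0,100000)
    plt.show()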

  7. Calculate the skew and kurtosis values for the values in each column:

    data.skew().sort_values()
    data.kurt()

    The output for skew values will be:

    Figure 2.50: Skew values for each column

    The output for kurtosis values will be:

    Figure 2.51: Kurtosis values for each column
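
    Note: on pandas 2.0+, .skew() and .kurt() raise a TypeError if the DataFrame contains non-numeric columns instead of silently dropping them; there, restrict the computation to the numeric columns:

    data.skew(numeric_only=True).sort_values()
    data.kurt(numeric_only=True)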

Activity 4: Relationships Within the Data

Solution

  1. Plot the correlation heatmap for the dataset. As we did in Exercise 23: Correlation Heatmap, plot the heatmap using Seaborn's .heatmap() function and pass the feature correlation matrix (as determined by using pandas' .corr() function on the DataFrame). Additionally, set the color map to RdBu using the cmap parameter and the minimum and maximum values on the color scale to -1 and 1 using the vmin and vmax parameters respectively:

    plt.figure(figsize = (12,10))
    sns.heatmap(data.corr(), square=True, cmap="RdBu", vmin=-1, vmax=1)
    plt.show()

    The output will be:

    Figure 2.52: Heatmap for the dataset
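
    Note: as with .skew() and .kurt() above, pandas 2.0+ no longer drops non-numeric columns automatically in .corr(); on recent versions, pass numeric_only=True (the same applies to the subset correlation in the next step):

    sns.heatmap(data.corr(numeric_only=True), square=True, cmap="RdBu",
                vmin=-1, vmax=1)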

  2. Plot a more compact heatmap having annotations for correlation values using the following subset of features:

    feature_subset = [
        'GarageArea', 'GarageCars','GarageCond','GarageFinish','GarageQual','GarageType',
        'GarageYrBlt','GrLivArea','LotArea','MasVnrArea','SalePrice'
    ]

    Now do the same as in the previous step, this time selecting only the above columns in the dataset and passing annot=True to the primary plotting function, all else remaining the same:

    plt.figure(figsize = (12,10))
    sns.heatmap(data[feature_subset].corr(), square=True, annot=True, cmap="RdBu", vmin=-1, vmax=1)
    plt.show()

    The output will be as follows:

    Figure 2.53: Heatmap with annotations for correlation values

  3. Display the pairplot for the same subset of features, with the KDE plot on the diagonals and scatter plots elsewhere. Use Seaborn's .pairplot() function to plot the pairplot for the non-null values in the selected columns of the DataFrame. To make the diagonal plots KDE plots, pass 'kde' to the diag_kind parameter; to render all other plots as scatter plots, pass 'scatter' to the kind parameter:

    sns.pairplot(data[feature_subset].dropna(), kind ='scatter', diag_kind='kde')
    plt.show()

    The output will be:

    Figure 2.54: Pairplot for the same subset of features

  4. Create a boxplot to show the variation in SalePrice for each category of GarageCars. The primary plotting function used here will be Seaborn's .boxplot() function, to which we pass the DataFrame along with the x and y parameters: x is the categorical variable and y is the continuous variable over which we want to see the variation within each category, that is, GarageCars and SalePrice respectively:

    plt.figure(figsize=(10, 10))
    sns.boxplot(x='GarageCars', y="SalePrice", data=data)
    plt.show()

    The output will be as follows:

    Figure 2.55: Boxplot showing variation in SalePrice for each category of GarageCars

  5. Plot a line graph using Seaborn to show the variation in SalePrice for older and more recently built flats. Here, we plot a line plot using Seaborn's .lineplot() function. Since we want to see the variation in SalePrice, we take this as the y variable, and since the variation is across a period of time, we take YearBuilt as the x variable. Keeping this in mind, we pass the respective series as values to the y and x parameters of the primary plotting function. We also pass ci=None to hide the confidence interval band that would otherwise be drawn around the line:

    plt.figure(figsize=(10,7))
    sns.lineplot(x=data.YearBuilt, y=data.SalePrice, ci=None)
    plt.show()

    The output will be as follows:

    Figure 2.56: Line graph showing the variation in SalePrice for older and more recently built flats
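
    Note: the ci parameter is deprecated in Seaborn 0.12+ in favor of errorbar; on recent versions, the equivalent call would be:

    sns.lineplot(x=data.YearBuilt, y=data.SalePrice, errorbar=None)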
