
You're reading from  Learning Predictive Analytics with Python

Product type: Book
Published in: Feb 2016
Reading level: Intermediate
ISBN-13: 9781783983261
Edition: 1st
Authors (2): Ashish Kumar

Ashish Kumar is a seasoned data science professional, a published author, and a thought leader in the field of data science and machine learning. An IIT Madras graduate and a Young India Fellow, he has around 7 years of experience in implementing and deploying data science and machine learning solutions for challenging industry problems, in both hands-on and leadership roles. Natural Language Processing, IoT analytics, R Shiny product development, and ensemble ML methods are his core areas of expertise. He is fluent in Python and R and teaches a popular ML course at Simplilearn. When not crunching data, Ashish sneaks off to the nearest hip beach and enjoys the company of his Kindle. He also trains and mentors data science aspirants and fledgling start-ups.

Chapter 2. Data Cleaning

Without any further ado, let's kick-start the engine and begin our foray into the world of predictive analytics. Remember, though, that our fuel is data: to do any predictive analysis, one first needs to access and import data for the engine to rev up.

I assume that you have already installed Python and the required packages, along with an IDE of your choice. Predictive analytics, like any other art, is best learnt hands-on and practiced as frequently as possible. This book will serve you best if you open a Python IDE of your choice and try out the explained concepts on your own. So, if you haven't installed Python and its packages yet, now is the time. If not all the packages, at least pandas should be installed, as it is the mainstay of everything we will learn in this chapter.

After reading this chapter, you should be familiar with the following topics:

  • Handling various kinds of data importing scenarios, that is, importing various kinds of...

Reading the data – variations and examples


Before we delve deeper into the realm of data, let us familiarize ourselves with a few terms that will appear frequently from now on.

Data frames

A data frame is one of the most common data structures available in Python. Data frames are very similar to tables in a spreadsheet or SQL tables. In pandas vocabulary, a data frame can also be thought of, structurally, as a dictionary of Series objects. A data frame, like a spreadsheet, has index labels (analogous to rows) and column labels (analogous to columns). It is the most commonly used pandas object and is a 2D structure whose columns can be of different types. Most of the standard operations, such as aggregation, filtering, and pivoting, which can be applied to a spreadsheet or SQL table can be applied to data frames using methods in pandas.
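As an illustrative sketch (the names and values below are made up, not from the Titanic data), a data frame can be built as a dictionary of Series and queried with spreadsheet-style operations:

```python
import pandas as pd

# Each dictionary key becomes a column label; the shared index
# supplies the row labels.
df = pd.DataFrame({
    'name': pd.Series(['Alice', 'Bob', 'Carol']),
    'age': pd.Series([34, 29, 41]),
    'fare': pd.Series([72.5, 13.0, 26.55]),
})

# Spreadsheet-style operations map directly to pandas methods:
adults = df[df['age'] > 30]       # filtering
mean_fare = df['fare'].mean()     # aggregation
```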

The following screenshot is an illustrative picture of a data frame. We will learn more about working with them as we progress in the chapter:

Fig...

Various methods of importing data in Python


pandas is the Python library/package of choice for importing, wrangling, and manipulating datasets. Datasets come in various forms, the most frequent being the .csv format. The delimiter (a special character that separates the values in a dataset) in a CSV file is a comma. Now we will look at the various methods in which you can read a dataset in Python.

Case 1 – reading a dataset using the read_csv method

Open an IPython Notebook by typing ipython notebook in the command line.

Download the Titanic dataset from the shared Google Drive folder (either the .xls or the .xlsx version would do). Save this file in CSV format and we are good to go. This is a very popular dataset that contains information about the passengers travelling on the famous ship Titanic on the fateful voyage that saw it sink. If you wish to know more about this dataset, you can go to the Google Drive folder and look for it.

A common practice is to share a variable description file with the dataset...

The read_csv method


The name of the method doesn't reveal its full might. It is something of a misnomer, in that it suggests the method can read only CSV files, which is not the case. Various kinds of files, including .txt files with delimiters of various kinds, can be read using this method.

Let's learn a little bit more about the various arguments of this method in order to assess its true potential. Although the read_csv method has close to 30 arguments, the ones listed in the next section are the ones that are most commonly used.

The general form of a read_csv statement is something similar to:

pd.read_csv(filepath, sep=',', dtype=None, header=None, skiprows=None, index_col=None, skip_blank_lines=True, na_filter=True)

Now, let us understand the significance and usage of each of these arguments one by one:

  • filepath: filepath is the complete address of the dataset or file that you are trying to read. The complete address includes the address of the directory in which the...
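To see a few of these arguments in action, here is a small sketch; the inline string stands in for a file on disk (a real call would pass a file path as the first argument), and the semicolon delimiter and skipped line are invented for illustration:

```python
import io
import pandas as pd

# Hypothetical file contents: a junk first line, then a header row,
# then two data rows, delimited by semicolons rather than commas.
raw = "skip this line\nname;age\nAlice;34\nBob;29\n"

data = pd.read_csv(
    io.StringIO(raw),   # stands in for a file path
    sep=';',            # the delimiter actually used in the file
    skiprows=1,         # ignore the first line of the file
    header=0,           # first remaining row holds the column names
    index_col=None,     # do not use any column as the row index
)
```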

Use cases of the read_csv method


The read_csv method can be put to a variety of uses. Let us look at some such use cases.

Passing the directory address and filename as variables

Sometimes it is easier and more maintainable to pass the directory address and filename as variables, to avoid hard-coding. This matters even more when one doesn't want to hardcode the full address of the file and intends to use that full address many times. Let us see how we can do so while importing a dataset.

import pandas as pd
path = 'E:/Personal/Learning/Datasets/Book'
filename = 'titanic3.csv'
fullpath = path+'/'+filename
data = pd.read_csv(fullpath)

For such cases, one can alternatively use the following snippet, which uses the os.path.join method from the os package:

import pandas as pd
import os
path = 'E:/Personal/Learning/Datasets/Book'
filename = 'titanic3.csv'
fullpath = os.path.join(path,filename)
data = pd.read_csv(fullpath)

One advantage of using the latter method is that os.path.join inserts the path separator appropriate to the operating system, so the same code works across platforms...

Case 2 – reading a dataset using the open method of Python


pandas is a very robust and comprehensive library for reading, exploring, and manipulating a dataset. However, it might not give optimal performance with very big datasets, as it reads the entire dataset at once and can occupy the majority of the computer's memory. Instead, you can try Python's built-in file handling method, open. One can read the dataset line by line, or in chunks, by running a for loop over the rows and deleting each chunk from memory once it has been processed. Let us look at some use case examples of the open method.

Reading a dataset line by line

As you might be aware, while reading a file using the open method, we can specify a particular mode, that is, read, write, and so on. By default, the method opens a file in read mode. This method can be useful while reading a big dataset, as it reads the data line by line (not all at once, unlike pandas). You can read datasets in chunks using...
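A minimal sketch of this pattern, using a small throwaway file so the snippet is self-contained (with the real Titanic file you would simply pass its path to open):

```python
import os
import tempfile

# Write a tiny stand-in CSV; in practice you would open an existing
# dataset such as titanic3.csv instead.
tmp = tempfile.NamedTemporaryFile('w', suffix='.csv', delete=False)
tmp.write("pclass,survived,name\n1,1,Allen\n3,0,Abbing\n")
tmp.close()

rows = []
with open(tmp.name) as f:      # 'r' (read) mode is the default
    header = next(f).strip().split(',')   # first line holds column names
    for line in f:             # one line at a time; the whole file
        rows.append(line.strip().split(','))  # is never held in memory

os.unlink(tmp.name)            # clean up the throwaway file
```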

Case 3 – reading data from a URL


Often, we need to read data directly from a web URL. The URL might contain the data written in it or might point to a file that contains the data. For example, navigate to this website, http://winterolympicsmedals.com/, which lists the medals won by various countries in different sports during the Winter Olympics. Now type the following address in the URL address bar: http://winterolympicsmedals.com/medals.csv.

A CSV file will be downloaded automatically. If you chose to download it manually, saving it and then specifying the directory path for the read_csv method would be a time-consuming process. Instead, Python allows us to read such files directly from the URL. Apart from the significant saving in time, this also makes it easy to loop over the files when there are many such files to be downloaded and read in.

A simple read_csv statement is required to read the data directly from the URL:

import pandas as pd
medal_data = pd.read_csv('http://winterolympicsmedals.com/medals.csv')

Case 4 – miscellaneous cases


Apart from the standard cases described previously, there are certain less frequent cases of data file handling that might need to be taken care of. Let's have a look at two of them.

Reading from an .xls or .xlsx file

Go to the Google Drive and look for the .xls and .xlsx versions of the Titanic dataset. They will be named titanic3.xls and titanic3.xlsx. Download both of them and save them on your computer. The ability to read Excel files, with all their sheets, is a very powerful technique available in pandas. It is done using the read_excel method, as shown in the following code:

import pandas as pd
data=pd.read_excel('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/titanic3.xls','titanic3')

import pandas as pd
data=pd.read_excel('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/titanic3.xlsx','titanic3')

It works with both .xls and .xlsx files. The second argument of the read_excel method is the name of the sheet you want to read in.

Another available...

Basics – summary, dimensions, and structure


After reading in the data, there are certain tasks that need to be performed to get the touch and feel of the data:

  • To check whether the data has read in correctly or not

  • To determine how the data looks; its shape and size

  • To summarize and visualize the data

  • To get the column names and summary statistics of numerical variables

Let us go back to the example of the Titanic dataset and import it again. The head() method is used to look at the first few rows of the data, as shown:

import pandas as pd
data=pd.read_csv('E:/Personal/Learning/Datasets/Book/titanic3.csv')
data.head()

The result will look similar to the following screenshot:

Fig. 2.6: Thumbnail view of the Titanic dataset obtained using the head() method

In the head() method, one can also specify the number of rows they want to see. For example, head(10) will show the first 10 rows.

The next attribute of the dataset that concerns us is its dimension, that is, the number of rows and columns present...
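These checks can be sketched as follows; the tiny frame below stands in for the Titanic data read in earlier, so the attribute and method names, not the values, are the point:

```python
import pandas as pd

# Stand-in for the `data` frame read in with read_csv above.
data = pd.DataFrame({
    'age': [34, 29, None, 41],       # None becomes NaN in a float column
    'fare': [72.5, 13.0, 26.55, 8.05],
})

shape = data.shape          # (number of rows, number of columns)
cols = list(data.columns)   # the column names
summary = data.describe()   # count, mean, std, min, quartiles, max
                            # for each numerical column
```

Note that describe() counts only non-missing values, which is what makes it useful for spotting gaps later on.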

Handling missing values


Checking for missing values and handling them properly is an important step in the data preparation process; if they are left untreated, they can:

  • Lead to the behavior between the variables not being analyzed correctly

  • Lead to incorrect interpretation and inference from the data

To see how, move back a few pages to where the describe method is explained. Look at the output table: why do the counts for many of the variables differ from one another? There are 1310 rows in the dataset, as we saw earlier in the section. Why, then, is the count 1046 for age, 1309 for pclass, and 121 for body? This is because the dataset doesn't have a value for 264 (1310-1046) entries in the age column, 1 (1310-1309) entry in the pclass column, and 1189 (1310-121) entries in the body column. In other words, that many entries have missing values in their respective columns. If a column has a count value less than the number of rows in the dataset, it is almost certainly because the...
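A short sketch of counting and treating missing values; the toy frame below is invented to mimic the gaps discussed above, and mean imputation is just one of several possible treatments:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Titanic columns, with deliberate gaps.
data = pd.DataFrame({
    'age': [34.0, np.nan, 29.0, np.nan],
    'pclass': [1, 3, 2, 3],
})

missing_per_col = data.isnull().sum()   # missing values per column

dropped = data.dropna()                 # deletion: keep only complete rows

imputed = data.copy()                   # imputation: fill gaps with the
imputed['age'] = imputed['age'].fillna(data['age'].mean())  # column mean
```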

Creating dummy variables


Creating dummy variables is a method to create a separate variable for each category of a categorical variable. Although a categorical variable contains plenty of information and might show a causal relationship with the output variable, it can't be used in predictive models such as linear and logistic regression without processing.

In our dataset, sex is a categorical variable with two categories that are male and female. We can create two dummy variables out of this, as follows:

dummy_sex=pd.get_dummies(data['sex'],prefix='sex')

The result of this statement is, as follows:

Fig. 2.17: Dummy variable for the sex variable in the Titanic dataset

This process is called dummifying the variable. It creates two new variables that take a value of either 1 or 0, depending on the sex of the passenger. If the sex was female, sex_female would be 1 and sex_male would be 0; if the sex was male, sex_male would be 1 and sex_female would be 0. In general, all but one dummy variable...
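In practice, the dummies are then attached back to the data frame. A small sketch, with a three-row frame standing in for the Titanic data:

```python
import pandas as pd

# Stand-in for the `sex` column of the Titanic data.
data = pd.DataFrame({'sex': ['female', 'male', 'male']})

# One 0/1 column per category, prefixed as in the statement above.
dummy_sex = pd.get_dummies(data['sex'], prefix='sex')

# Attach the dummy columns alongside the original frame.
data = pd.concat([data, dummy_sex], axis=1)
```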

Visualizing a dataset by basic plotting


Plots are a great way to visualize a dataset and gauge possible relationships between the columns of a dataset. There are various kinds of plots that can be drawn. For example, a scatter plot, histogram, box-plot, and so on.

Let's import the Customer Churn Model dataset and try some basic plots:

import pandas as pd
data=pd.read_csv('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.txt')

While plotting any kind of plot, it helps to keep these things in mind:

  • If you are using IPython Notebook, write %matplotlib inline in the input cell and run it before plotting to see the output plot inline (in the output cell).

  • To save a plot in your local directory as a file, you can use the savefig method. Let's go back to the example where we plotted four scatter plots in a 2x2 panel. The name of this image is specified in the beginning of the snippet, as a figure parameter of the plot. To save this image one can write the following code...
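As a minimal sketch of saving a plot to disk (the values below are made up to stand in for two columns of the churn dataset, and the Agg backend and temp-file path are choices made here so the snippet runs without a display):

```python
import os
import tempfile

import matplotlib
matplotlib.use('Agg')       # render off-screen; no display needed
import matplotlib.pyplot as plt

# Made-up values standing in for two columns of the churn data.
day_minutes = [265.1, 161.6, 243.4, 299.4]
day_charge = [45.07, 27.47, 41.38, 50.90]

plt.scatter(day_minutes, day_charge)
plt.xlabel('Day Mins')
plt.ylabel('Day Charge')

out = os.path.join(tempfile.gettempdir(), 'scatter_demo.png')
plt.savefig(out)            # write the figure to a file instead of showing it
```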

Summary


The main learning outcomes of this chapter are summarized as follows:

  • Various methods of, and variations in, importing a dataset using pandas: read_csv and its variations, reading a dataset using the open method in Python, reading a file in chunks using the open method, reading directly from a URL, specifying the column names from a list, changing the delimiter of a dataset, and so on.

  • Basic exploratory analysis of data: observing a thumbnail of data, shape, column names, column types, and summary statistics for numerical variables

  • Handling missing values: The reason for incorporation of missing values, why it is important to treat them properly, how to treat them properly by deletion and imputation, and various methods of imputing data.

  • Creating dummy variables: creating dummy variables for categorical variables to be used in the predictive models.

  • Basic plotting: scatter plotting, histograms and boxplots; their meaning and relevance; and how they are plotted.

This chapter is a head start into...
