Data Science with Python

By Rohan Chopra , Aaron England , Mohamed Noordeen Alaudeen

About this book

Data Science with Python begins by introducing you to data science and teaches you to install the packages you need to create a data science coding environment. You will learn three major techniques in machine learning: unsupervised learning, supervised learning, and reinforcement learning. You will also explore basic classification and regression techniques, such as support vector machines, decision trees, and logistic regression.

As you make your way through the chapters, you will study the basic functions, data structures, and syntax of the Python language that are used to handle large datasets with ease. You will learn about the NumPy and pandas libraries for matrix calculations and data manipulation, study how to use Matplotlib to create highly customizable visualizations, and apply the boosting algorithm XGBoost to make predictions. In the concluding chapters, you will explore convolutional neural networks (CNNs), the deep learning algorithms used to predict what is in an image. You will also learn how to feed human sentences to a neural network, make the model process contextual information, and create natural language processing systems that predict outcomes.

By the end of this book, you will be able to understand and implement any new data science algorithm and have the confidence to experiment with tools or libraries other than those covered in the book.

Publication date: July 2019
Publisher: Packt
Pages: 426
ISBN: 9781838552862

 

Introduction to Data Science and Data Pre-Processing

Learning Objectives

By the end of this chapter, you will be able to:

  • Use various Python machine learning libraries
  • Handle missing data and deal with outliers
  • Perform data integration to bring together data from different sources
  • Perform data transformation to convert data into a machine-readable form
  • Scale data to avoid problems with values of different magnitudes
  • Split data into train and test datasets
  • Describe the different types of machine learning
  • Describe the different performance measures of a machine learning model

This chapter introduces data science and covers the various processes included in the building of machine learning models, with a particular focus on pre-processing.

 

Introduction

We live in a world where we are constantly surrounded by data. As such, being able to understand and process data is an absolute necessity.

Data Science is a field that deals with the description, analysis, and prediction of data. Consider an example from our daily lives: every day, we utilize multiple social media applications on our phones. These applications gather and process data in order to create a more personalized experience for each user – for example, showing us news articles that we may be interested in, or tailoring search results according to our location. This branch of data science is known as machine learning.

Machine learning is the study of the algorithms and statistical models that computers use to accomplish tasks without human intervention. In other words, it is the process of teaching a computer to perform tasks by itself, without explicit instructions, relying only on patterns and inference. Some common uses of machine learning algorithms are email filtering, computer vision, and computational linguistics.

This book will focus on machine learning and other aspects of data science using Python. Python is a popular language for data science, as it is versatile and relatively easy to use. It also has several ready-made libraries that are well equipped for processing data.

 

Python Libraries

Throughout this book, we'll be using various Python libraries, including pandas, NumPy, Matplotlib, Seaborn, and scikit-learn.

pandas

pandas is an open source package that has many functions for loading and processing data in order to prepare it for machine learning tasks. It also has tools that can be used to analyze and manipulate data. Data can be read from many formats using pandas. We will mainly be using CSV data throughout this book. To read CSV data, you can use the read_csv() function by passing filename.csv as an argument. An example of this is shown here:

>>> import pandas as pd

>>> pd.read_csv("data.csv")

In the preceding code, pd is an alias given to pandas. Using an alias is not mandatory, but it is conventional. To inspect a pandas DataFrame, you can use the head() function to list the top five rows. This will be demonstrated in one of the following exercises.
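
For instance (a minimal sketch, again using the hypothetical data.csv filename):

>>> df = pd.read_csv("data.csv")
>>> df.head()  # displays the first five rows of the DataFrame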

Note

Please visit the following link to learn more about pandas: https://pandas.pydata.org/pandas-docs/stable/.

NumPy

NumPy is one of the main packages that Python has to offer. It is mainly used for scientific computing and mathematical operations. It comprises tools that enable us to work with arrays and array objects.
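
For example (a brief sketch; np is the conventional alias for NumPy):

>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> a * 2  # arithmetic is applied element-wise across the array
array([2, 4, 6])
>>> a.mean()
2.0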

Matplotlib

Matplotlib is a data visualization package. It is useful for plotting data points in a 2D space with the help of NumPy.
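
For example, a minimal sketch that plots ten points against their squares:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> x = np.arange(10)
>>> plt.plot(x, x ** 2)  # draw a 2D line through the points
>>> plt.show()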

Seaborn

Seaborn is another data visualization library, built on top of Matplotlib. Visualizations created using Seaborn are generally more attractive out of the box than those created using Matplotlib alone.
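
For example (a sketch using Seaborn's tips example dataset, which load_dataset fetches from Seaborn's online example-data repository):

>>> import seaborn as sns
>>> tips = sns.load_dataset("tips")
>>> sns.scatterplot(x="total_bill", y="tip", data=tips)  # scatter plot of bills against tips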

scikit-learn

scikit-learn is a Python package used for machine learning. It is designed to interoperate with the other numeric and scientific libraries in Python, and it provides ready-made implementations of many machine learning algorithms.
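
scikit-learn models share a consistent interface built around fit() and predict(). A minimal sketch with toy data:

>>> from sklearn.linear_model import LinearRegression
>>> model = LinearRegression()
>>> model.fit([[1], [2], [3]], [2, 4, 6])  # learn from features and targets
>>> model.predict([[4]])
array([8.])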

These ready-to-use libraries have gained interest and attention from developers, especially in the data science space. Now that we have covered the various libraries in Python, in the next section we'll explore the roadmap for building machine learning models.

 

Roadmap for Building Machine Learning Models

The roadmap for building machine learning models is straightforward and consists of five major steps, which are explained here:

  • Data Pre-processing

    This is the first step in building a machine learning model. Data pre-processing refers to the transformation of data before feeding it into the model. It deals with the techniques that are used to convert unusable raw data into clean reliable data.

    Since data collection is often not performed in a controlled manner, raw data often contains outliers (for example, age = 120), nonsensical data combinations (for example, model: bicycle, type: 4-wheeler), missing values, scale problems, and so on. Because of this, raw data cannot be fed into a machine learning model because it might compromise the quality of the results. As such, this is the most important step in the process of data science.

  • Model Learning

    After pre-processing the data and splitting it into train/test sets (more on this later), we move on to modeling. Models are simply well-defined algorithms that use pre-processed data to learn patterns, which can later be used to make predictions. There are different types of learning algorithms, including supervised, semi-supervised, unsupervised, and reinforcement learning; these will be discussed later. A minimal code sketch of the learning, evaluation, and prediction steps follows this list.

  • Model Evaluation

    In this stage, the models are evaluated with the help of specific performance metrics. With these metrics, we can go on to tune the hyperparameters of a model in order to improve it. This process is called hyperparameter optimization. We will repeat this step until we are satisfied with the performance.

  • Prediction

    Once we are happy with the results from the evaluation step, we will then move on to predictions. Predictions are made by the trained model when it is exposed to a new dataset. In a business setting, these predictions can be shared with decision makers to make effective business choices.

  • Model Deployment

    The whole process of machine learning does not stop with model building and prediction. It also involves using the model to build an application on new data. Depending on the business requirements, the deployment may take the form of a report, or of repeated data science steps that are executed automatically. After deployment, a model needs proper management and maintenance at regular intervals to keep it up and running.
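
To make the middle steps concrete, here is a minimal sketch of model learning, evaluation, and prediction using scikit-learn. The synthetic dataset, the choice of linear regression, and the R-squared metric are illustrative assumptions, not prescriptions:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Generate a small synthetic dataset (a stand-in for pre-processed data)
X, y = make_regression(n_samples=100, n_features=3, noise=0.1, random_state=0)

# Split the data into train and test sets (covered later in this chapter)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Model learning: fit an algorithm to the training data
model = LinearRegression()
model.fit(X_train, y_train)

# Model evaluation: measure performance on the held-out test set
print(r2_score(y_test, model.predict(X_test)))

# Prediction: apply the trained model to new, unseen data
print(model.predict([[0.5, -1.2, 0.3]]))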

This chapter will mainly focus on pre-processing. We will cover the different tasks involved in data pre-processing, such as data representation, data cleaning, and others.

 

Data Representation

The main objective of machine learning is to build models that understand data and find underlying patterns. In order to do so, it is very important to feed the data in a way that is interpretable by the computer. To feed the data into a model, it must be represented as a table or a matrix of the required dimensions. Converting your data into the correct tabular form is one of the first steps before pre-processing can properly begin.

Data Represented in a Table

Data should be arranged in a two-dimensional space made up of rows and columns. This type of data structure makes it easy to understand the data and pinpoint any problems. An example of some raw data stored as a CSV (comma separated values) file is shown here:

Figure 1.1: Raw data in CSV format

The representation of the same data in a table is as follows:

Figure 1.2: CSV data in table format

If you compare the data in CSV and table formats, you will see that there are missing values in both. We will cover what to do with these later in the chapter. To load a CSV file and work on it as a table, we use the pandas library. The data here is loaded into tables called DataFrames.

Note

To learn more about pandas, visit the following link: http://pandas.pydata.org/pandas-docs/version/0.15/tutorials.html.

Independent and Target Variables

The DataFrame that we use contains variables or features that can be classified into two categories. These are independent variables (also called predictor variables) and dependent variables (also called target variables). Independent variables are used to predict the target variable. As the name suggests, independent variables should be independent of each other. If they are not, this will need to be addressed in the pre-processing (cleaning) stage.

Independent Variables

These are all the features in the DataFrame except the target variable. They are of size (m, n), where m is the number of observations and n is the number of features. These variables must be normally distributed and should NOT contain:

  • Missing or NULL values
  • Highly categorical data features or high cardinality (these terms will be covered in more detail later)
  • Outliers
  • Data on different scales
  • Human error
  • Multicollinearity (independent variables that are correlated with each other; see the sketch after this list)
  • Very large independent feature sets (too many independent variables to be manageable)
  • Sparse data
  • Special characters
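
As a quick illustration of how a couple of these checks might look (a sketch that assumes the independent variables are held in a pandas DataFrame named df):

>>> df.isnull().sum()  # count missing or NULL values in each column
>>> df.corr()          # pairwise correlations; values near +1 or -1 flag multicollinearity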

Feature Matrix and Target Vector

A single piece of data is called a scalar. A group of scalars is called a vector, and a group of vectors is called a matrix. A matrix is represented in rows and columns. Feature matrix data is made up of independent columns, and the target vector depends on the feature matrix columns. To get a better understanding of this, let's look at the following table:

Figure 1.3: Table containing car details

As you can see in the table, there are various columns: Car Model, Car Capacity, Car Brand, and Car Price. All columns except Car Price are independent variables and make up the feature matrix. Car Price is the dependent variable, which depends on the other columns (Car Model, Car Capacity, and Car Brand); it is the target vector. In the next section, we'll go through an exercise based on a feature matrix and target vector to get a thorough understanding.
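
Before that, here is a minimal sketch of separating a feature matrix from a target vector, using hypothetical values that mirror the columns of Figure 1.3:

import pandas as pd

# Hypothetical car data with the same columns as Figure 1.3
cars = pd.DataFrame({"Car Model": ["Model A", "Model B"],
                     "Car Capacity": [4, 2],
                     "Car Brand": ["Brand X", "Brand Y"],
                     "Car Price": [20000, 35000]})

X = cars.drop("Car Price", axis=1)  # feature matrix: the independent columns
y = cars["Car Price"]               # target vector: depends on the features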

Note 

All exercises and activities will be primarily developed in Jupyter Notebook. It is recommended to keep a separate notebook for different assignments unless advised not to. Also, to load a sample dataset, the pandas library will be used, because it displays the data as a table. Other ways to load data will be explained in further sections.

Exercise 1: Loading a Sample Dataset and Creating the Feature Matrix and Target Matrix

In this exercise, we will be loading the House_price_prediction dataset into the pandas DataFrame and creating feature and target matrices. The House_price_prediction dataset is taken from the UCI Machine Learning Repository. The data was collected from various suburbs of the USA and consists of 5,000 entries and 6 features related to houses. Follow these steps to complete this exercise:

Note

The House_price_prediction dataset can be found at this location: https://github.com/TrainingByPackt/Data-Science-with-Python/blob/master/Chapter01/Data/USA_Housing.csv.

  1. Open a Jupyter notebook and add the following code to import pandas:

    import pandas as pd

  2. Now we need to load the dataset into a pandas DataFrame. As the dataset is a CSV file, we'll be using the read_csv() function to read the data. Add the following code to do this:

    # Use the raw file URL so that pandas receives CSV data rather than GitHub's HTML page
    dataset = "https://raw.githubusercontent.com/TrainingByPackt/Data-Science-with-Python/master/Chapter01/Data/USA_Housing.csv"

    df = pd.read_csv(dataset, header = 0)

    As you can see in the preceding code, the data is stored in a variable named df.

  3. To print all the column names of the DataFrame, we'll use the df.columns command. Write the following code in the notebook:

    df.columns

    The preceding code generates the following output:

    Figure 1.4: List of columns present in the dataframe
  4. The dataset contains a number of data points. We can find the total number of rows using the following command:

    df.index

    The preceding code generates the following output:

    Figure 1.5: Total Index in the dataframe

    As you can see in the preceding figure, our dataset contains 5000 rows, from index 0 to 4999.

    Note

    You can use the set_index() function in pandas to convert a column into an index of rows in a DataFrame. This is a bit like using the values in that column as your row labels.

    DataFrame.set_index('column name', inplace=True)

  5. Let's set the Address column as an index and reset it back to the original DataFrame. The pandas library provides the set_index() method to convert a column into an index of rows in a DataFrame. Add the following code to implement this:

    df.set_index('Address', inplace=True)

    df

    The preceding code generates the following output:

    Figure 1.6: DataFrame with an indexed Address column

    The inplace parameter of the set_index() function is set to False by default. If it is changed to True, the operation modifies the DataFrame directly, in place, rather than returning a modified copy.

  6. In order to reset the index of the given object, we use the reset_index() function. Write the following code to implement this:

    df.reset_index(inplace=True)

    df

    The preceding code generates the following output:

    Figure 1.7: DataFrame with the index reset

    Note

    An index is like a name given to a row or column. Both rows and columns have an index. You can index by row/column number or by row/column name.

  7. We can retrieve the first four rows and the first three columns using a row number and column number. This can be done using the iloc indexer in pandas, which retrieves data using index positions. Add the following code to do this:

    df.iloc[0:4 , 0:3]

    Figure 1.8: Dataset of four rows and three columns
  8. To retrieve the data using labels, we use the loc indexer. Add the following code to retrieve the first five rows of the Income and Age columns:

    df.loc[0:4 , ["Avg. Area Income", "Avg. Area House Age"]]

    Figure 1.9: Dataset of five rows and two columns
  9. Now create a variable called X to store the independent features. In our dataset, we will consider all features except Price as independent variables, and we will use the drop() function to remove the Price column. Once this is done, we print out the top five instances of the X variable. Add the following code to do this:

    X = df.drop('Price', axis=1)

    X.head()

    The preceding code generates the following output:

    Figure 1.10: Dataset showing the first five rows of the feature matrix

    Note

    By default, head() outputs five rows; if you don't specify a number, it will return five observations. The axis parameter in the preceding code denotes whether you want to drop the label from rows (axis = 0) or columns (axis = 1).

  10. Print the shape of your newly created feature matrix using the X.shape command. Add the following code to do this:

    X.shape

    The preceding code generates the following output:

    Figure 1.11: Shape of the feature matrix

    In the preceding figure, the first value indicates the number of observations in the dataset (5000), and the second value represents the number of features (6).

  11. Similarly, we will create a variable called y that will store the target values. We will use indexing to grab the target column. Indexing allows you to access a section of a larger element. In this case, we want to grab the column named Price from the df DataFrame. Then, we want to print out the top 10 values of the variable. Add the following code to implement this:

    y = df['Price']

    y.head(10)

    The preceding code generates the following output:

    Figure 1.12: Dataset showing the first 10 rows of the target matrix
  12. Print the shape of your new variable using the y.shape command. The shape should be one-dimensional, with a length equal to the number of observations (5000) only. Add the following code to implement this:

    y.shape

    The preceding code generates the following output:

Figure 1.13: Shape of the target matrix

You have successfully created the feature and target matrices of a dataset. You have completed the first step in the process of building a predictive model. This model will learn the patterns from the feature matrix (columns in X) and how they map to the values in the target vector (y). These patterns can then be used to predict house prices from new data based on the features of those new houses.
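
As a preview of how these matrices feed into a model (a sketch only, not part of this exercise; it assumes scikit-learn is installed and drops the non-numeric Address column before fitting):

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# The Address column contains text, so exclude it from the numeric feature matrix
X_numeric = X.drop('Address', axis=1)

# Hold out 20% of the data for testing (train/test splitting is covered later)
X_train, X_test, y_train, y_test = train_test_split(X_numeric, y, test_size=0.2, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R-squared on the held-out test set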

In the next section, we will explore more steps involved in pre-processing.

About the Authors

  • Rohan Chopra

    Rohan Chopra graduated from Vellore Institute of Technology with a bachelor's degree in computer science. Rohan has more than 2 years' experience in designing, implementing, and optimizing end-to-end deep neural network systems. His research is centered on using deep learning to solve computer vision problems, and he has hands-on experience working on self-driving cars. He is a data scientist at Absolutdata.

  • Aaron England

    Aaron England earned a Ph.D. from the University of Utah in Exercise and Sports Science with a cognate in Biostatistics. Currently, he resides in Scottsdale, Arizona, where he works as a data scientist at Natural Partners Fullscript.

  • Mohamed Noordeen Alaudeen

    Mohamed Noordeen Alaudeen is a lead data scientist at Logitech. Noordeen has over 7 years of experience in building and developing end-to-end big data and deep neural network systems. It all started when he decided to devote the rest of his life to data science. He is a seasoned data science and big data trainer with both Imarticus Learning and Great Learning, two renowned data science institutes in India. Apart from teaching, he contributes to open source: he has over 90 repositories on GitHub containing his technical work and data science material. He is also an active influencer on LinkedIn, with over 22,000 connections, helping the data science community.
