The Data Pipeline

Up until this point, we've explored how to load data into Python and process it into a bidimensional NumPy array of numerical values (your dataset). Now, we are ready to immerse ourselves fully in data science, extract meaning from data, and develop potential data products. This chapter, on data treatment and transformations, and the next one, on machine learning, are the most challenging sections of the entire book.

In this chapter, you will learn how to do the following:

  • Briefly explore data and create new features
  • Reduce the dimensionality of data
  • Spot and treat outliers
  • Decide on the best score or loss metrics for your project
  • Apply scientific methodology and effectively test the performance of your machine learning hypothesis
  • Reduce the complexity of the data science problem by decreasing the number of features
  • Optimize your learning parameters...

Introducing EDA

Exploratory data analysis (EDA), or data exploration, is the first step in the data science process. John Tukey coined the term in 1977, when he wrote his book Exploratory Data Analysis to emphasize the importance of the practice. EDA is required to understand a dataset better, check its features and its shape, validate the initial hypotheses that you have in mind, and get a preliminary idea about the next steps that you want to pursue in subsequent data science tasks.

In this section, you will work on the Iris dataset, which was already used in the previous chapter. First, let's load the dataset:

In: import pandas as pd
    iris_filename = 'datasets-uci-iris.csv'
    iris = pd.read_csv(iris_filename, header=None,
                       names=['sepal_length', 'sepal_width',
                              'petal_length', 'petal_width', 'target'])
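
With the DataFrame loaded, a handful of standard pandas calls already give a first feel for the data. The following is a minimal sketch of that kind of quick look (these are all standard DataFrame methods; the exact exploration steps covered in the rest of the section may differ, and the fifth column is assumed to be named target, as above):

In: iris.head()                         # first five rows, to confirm parsing worked
    iris.describe()                     # count, mean, std, and quartiles per numeric column
    iris['target'].value_counts()       # number of examples per class
    iris.drop('target', axis=1).corr()  # pairwise correlations between features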

Building new features

Sometimes, you'll find yourself in a situation where the features and the target variable are not really related. In that case, you can modify the input dataset: you can apply linear or nonlinear transformations that improve the accuracy of the system, and so on. This is a very important step in the overall process because it depends entirely on the skills of the data scientist, who is responsible for artificially changing the dataset and shaping the input data so that it fits the learning model better. Although this step intuitively just adds complexity, it often boosts the performance of the learner; that's why it is used by bleeding-edge techniques, such as deep learning.

For example, if you're trying to predict the value of a house and you know the height, width, and length of each room, you can artificially build...
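
To make the idea concrete, here is a minimal sketch of such a derived feature. The column names (room_height and so on) are hypothetical and purely illustrative:

In: import pandas as pd
    # hypothetical housing data; column names are illustrative only
    houses = pd.DataFrame({'room_height': [2.7, 3.0, 2.5],
                           'room_width':  [4.0, 3.5, 5.0],
                           'room_length': [5.0, 6.0, 4.5]})
    # a nonlinear (multiplicative) feature derived from the raw measurements
    houses['room_volume'] = (houses['room_height'] *
                             houses['room_width'] *
                             houses['room_length'])

A learner that only combines inputs linearly could never recover such a product on its own, which is why handcrafted features like this one can help.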

Dimensionality reduction

Oftentimes, you will have to deal with a dataset containing a large number of features, many of which may be unnecessary. This is a typical problem: some features are very informative for the prediction, some are somewhat related to one another, and some are completely unrelated (that is, they contain only noise or irrelevant information). Keeping only the interesting features is a way not only to make your dataset more manageable but also to have predictive algorithms work better, instead of being fooled in their predictions by the noise in the data.

Hence, dimensionality reduction is the operation of eliminating some features of the input dataset and creating a restricted set of features that contains all of the information you need to predict the target variable in a more effective and reliable way. As mentioned previously, reducing the number of features usually...
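
As a brief, hedged illustration of the idea (principal component analysis is one common technique among several; it is used here only as an example and assumes the iris DataFrame loaded earlier):

In: from sklearn.decomposition import PCA
    # project the four numeric Iris features onto two principal components
    X_iris = iris.drop('target', axis=1).values
    pca = PCA(n_components=2)
    X_reduced = pca.fit_transform(X_iris)
    print(X_reduced.shape)                       # (150, 2)
    print(pca.explained_variance_ratio_.sum())   # share of the variance retained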

The detection and treatment of outliers

In data science, examples are at the core of the learning-from-data process. If unusual, inconsistent, or erroneous data is fed into the learning process, the resulting model may be unable to generalize correctly to new data. An unusually high value in a variable, apart from skewing descriptive measures such as the mean and variance, may also distort how many machine learning algorithms learn from the data, causing distorted predictions as a result.

When a data point deviates markedly from the others in a sample, it is called an outlier. Any other expected observation is labeled as an inlier.
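
A very simple way to spot candidate outliers, shown here only as a minimal sketch (a z-score threshold; it is one rule of thumb among many and not necessarily the approach detailed later), is to flag values that lie several standard deviations away from the mean:

In: import numpy as np
    rng = np.random.RandomState(0)
    # 100 well-behaved values plus one injected extreme value
    x = np.append(rng.normal(loc=2.0, scale=0.2, size=100), 9.5)
    z_scores = (x - x.mean()) / x.std()
    print(x[np.abs(z_scores) > 3])   # only the injected 9.5 is flagged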

A data point may be an outlier due to the following three general causes (and each one implies different remedies):

  • The point represents a rare occurrence, but it is also a possible value, given the fact that the available data...

Validation metrics

In order to evaluate the performance of the data science system that you have built and check how close you are to the goal that you have in mind, you need to use a function that scores the outcome. Typically, different scoring functions are used to deal with binary classification, multilabel classification, regression, or clustering problems. Now, let's see the most popular functions for each of these tasks and how they are used by machine learning algorithms.
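
As a brief, hedged preview of what such functions look like in practice (the functions below are standard members of sklearn.metrics; which measures the section then discusses in detail may differ):

In: from sklearn.metrics import accuracy_score, f1_score, mean_squared_error
    # classification: compare true labels with predicted labels
    y_true, y_pred = [0, 1, 1, 0, 1], [0, 1, 0, 0, 1]
    print(accuracy_score(y_true, y_pred))    # fraction of correct predictions
    print(f1_score(y_true, y_pred))          # harmonic mean of precision and recall
    # regression: compare true values with predicted values
    print(mean_squared_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.3]))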

Learning how to choose the right score/error measure for your data science project is really a matter of experience. We found it very helpful to consult (and participate in) the data science competitions held by Kaggle (kaggle.com), a company devoted to organizing data challenges between data scientists from all over the world. By observing the various challenges and what score or error measure...

Testing and validating

After loading our data, preprocessing it, creating new, useful features, checking for outliers and other inconsistent data points, and finally choosing the right metric, we are ready to apply a machine learning algorithm.

A machine learning algorithm, by observing a series of examples and pairing them with their outcomes, is able to extract a series of rules that can be successfully generalized to new examples by correctly guessing their outcomes. This is the supervised learning approach, in which a series of highly specialized learning algorithms is applied that we expect to predict (and generalize) correctly on any new data.

But how can we correctly apply the learning process in order to achieve the best model for prediction, one that can be used reliably on similar yet new data?

In data science, there are some best practices to be followed that can assure...
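
One such practice is to hold out part of the data as a test set that the model never sees during training. Here is a minimal sketch using scikit-learn's train_test_split (the digits dataset and the SVC classifier simply anticipate the choices used later in the chapter, and the split proportion is an illustrative assumption):

In: from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    digits = load_digits()
    X, y = digits.data, digits.target
    # hold out 20% of the examples as a test set
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    clf = SVC(gamma='scale').fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # accuracy on data never seen during training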

Cross-validation

If you have run the previous experiment, you may have realized that:

  • Both the validation and test results vary, as their samples are different.
  • The chosen hypothesis is often the best one, but this is not always the case.

Unfortunately, the sampling behind the validation and testing phases brings uncertainty, along with a reduction in the number of learning examples dedicated to training (the fewer the examples, the higher the variance of the estimates from the model).

A solution would be to use cross-validation, and Scikit-learn offers a complete module for cross-validation and performance evaluation (sklearn.model_selection).

By resorting to cross-validation, you'll just need to separate your data into a training and test set, and you will be able to use the training data for both model optimization and model training.

How does cross-validation work? The idea is...
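
As a brief, hedged sketch of what this looks like in code (the 10-fold setting, the digits data, and the SVC estimator are illustrative choices, in line with the surrounding sections):

In: from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    digits = load_digits()
    X, y = digits.data, digits.target
    # 10 folds: train on 9 of them, score on the held-out one, then rotate
    scores = cross_val_score(SVC(gamma='scale'), X, y, cv=10, scoring='accuracy')
    print(scores.mean(), scores.std())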

Hyperparameter optimization

A machine learning hypothesis is not determined simply by the learning algorithm but also by its hyperparameters (the parameters of the algorithm that have to be fixed beforehand and cannot be learned during the training process) and by the selection of the variables used to achieve the best learned parameters.

In this section, we will explore how to extend the cross-validation approach to find the best hyperparameters that are able to generalize to our test set. We will keep on using the handwritten digits dataset offered by the Scikit-learn package. Here's a useful reminder about how to load the dataset:

In: from sklearn.datasets import load_digits
    digits = load_digits()
    X, y = digits.data, digits.target

In addition, we will keep on using support vector machines as our learning algorithm:

In: from sklearn import svm
    h = svm.SVC()
...
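
To show how hyperparameter search ties in with cross-validation, here is a minimal sketch of an exhaustive grid search over two SVC hyperparameters (the grid values and the fold count are illustrative assumptions, not necessarily the ones adopted later in the section), reusing h, X, and y from above:

In: from sklearn.model_selection import GridSearchCV
    # an illustrative, deliberately small grid of candidate values
    search_grid = {'C': [1, 10, 100], 'gamma': [0.001, 0.01, 0.1]}
    search = GridSearchCV(h, param_grid=search_grid, cv=5, scoring='accuracy')
    search.fit(X, y)
    print(search.best_params_, search.best_score_)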

Feature selection

With respect to the machine learning algorithm that you are going to use, irrelevant and redundant features may contribute to a lack of interpretability in the resulting model, to long training times and, most importantly, to overfitting and poor generalization.

Overfitting is related to the ratio between the number of observations and the number of variables available in your dataset. When there are many variables compared to the observations, your learning algorithm has a greater chance of ending up in some local optimum or of fitting spurious noise, due to the correlation between variables.

Apart from dimensionality reduction, which requires you to transform data, feature selection can be the solution to the aforementioned problems. It simplifies high-dimensional structures by choosing the most predictive set of variables; that is, it picks the features that work...
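
As a minimal, hedged sketch of one common approach, univariate selection with SelectKBest (an illustrative choice; other selectors, such as recursive feature elimination, exist and may be the ones the section covers in detail):

In: from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, f_classif
    X, y = load_iris(return_X_y=True)
    # keep the two features with the highest ANOVA F-score against the target
    selector = SelectKBest(score_func=f_classif, k=2)
    X_selected = selector.fit_transform(X, y)
    print(X.shape, '->', X_selected.shape)   # (150, 4) -> (150, 2)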

Wrapping everything in a pipeline

As a concluding topic, we will discuss how to wrap together the transformation and selection operations we have seen so far into a single command: a pipeline that takes your data from its source all the way to your machine learning algorithm.

Wrapping all of your data operations into a single command offers some advantages:

  • Your code becomes clear and more logically constructed because pipelines force you to rely on functions for your operations (each step is a function).
  • You treat the test data in the exact same way as your train data without code repetitions or the possibility of any mistakes being made in the process.
  • You can easily grid search the best parameters on all the data pipelines you have devised, not just on the machine learning hyperparameters.
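
To make these advantages concrete, here is a minimal sketch of such a pipeline (the particular steps chained here, a scaler, a univariate selector, and an SVC, are illustrative choices rather than the exact ones built later in the section):

In: from sklearn.datasets import load_iris
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    X, y = load_iris(return_X_y=True)
    # every step but the last must be a transformer; the last is the estimator
    pipe = Pipeline([('scale', StandardScaler()),
                     ('select', SelectKBest(f_classif, k=2)),
                     ('clf', SVC(gamma='scale'))])
    print(cross_val_score(pipe, X, y, cv=5).mean())

When grid searching over such a pipeline, the hyperparameters of each step are addressed with the step name followed by a double underscore (for example, clf__C).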

We distinguish between two kinds of wrappers, depending on the data flow you need to build...

Summary

In this chapter, we extracted significant meanings from data by applying a number of advanced data operations, from EDA and feature creation to dimensionality reduction and outlier detection.

More importantly, we started developing, with the help of many examples, our data pipeline. This was achieved by encapsulating a train/cross-validation/test setting into our hypothesis, which was expressed in terms of various activities from data selection and transformation to the choice of learning algorithm and its best hyperparameters.

In the next chapter, we will delve into the principal machine learning algorithms offered by the Scikit-learn package, such as linear models, support vector machines, ensembles of trees, and unsupervised techniques for clustering, among others.
