Python Machine Learning Cookbook

Python Machine Learning Cookbook: 100 recipes that teach you how to perform various machine learning tasks in the real world

Authors: Joshi, Vahid Mirjalili
$65.99
4.4 (5 Ratings)
Paperback | Jun 2016 | 304 pages | 1st Edition

eBook: $9.99 $51.99
Paperback: $65.99
Subscription: Free Trial (renews at $19.99/month)

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM free: read whenever, wherever, and however you want

Python Machine Learning Cookbook

Chapter 2. Constructing a Classifier

In this chapter, we will cover the following recipes:

  • Building a simple classifier
  • Building a logistic regression classifier
  • Building a Naïve Bayes classifier
  • Splitting the dataset for training and testing
  • Evaluating the accuracy using cross-validation
  • Visualizing the confusion matrix
  • Extracting the performance report
  • Evaluating cars based on their characteristics
  • Extracting validation curves
  • Extracting learning curves
  • Estimating the income bracket


Introduction


In the field of machine learning, classification refers to the process of using the characteristics of data to separate it into a certain number of classes. This differs from regression, which we discussed in the previous chapter, where the output is a real number. A supervised learning classifier builds a model from labeled training data and then uses this model to classify unknown data.

A classifier can be any algorithm that implements classification. In simple cases, it can be a straightforward mathematical function; in more realistic cases, it can take very complex forms. Over the course of this study, we will see that classification can be either binary, where we separate data into two classes, or multiclass, where we separate data into more than two classes. The mathematical techniques devised for classification tend to deal with two classes, so we extend them in various ways to handle the multiclass problem...
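Most scikit-learn classifiers handle multiclass data directly, but the binary-to-multiclass extension mentioned above can be made explicit with a one-vs-rest wrapper. A minimal sketch with made-up data (the points and labels here are illustrative, not from the book):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Three classes of made-up 2-D points, one cluster per class
X = np.array([[1, 1], [2, 1], [1, 2],
              [8, 8], [9, 8], [8, 9],
              [1, 8], [2, 9], [1, 9]])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# One-vs-rest trains one binary classifier per class and predicts with
# whichever one is most confident
ovr = OneVsRestClassifier(LogisticRegression()).fit(X, y)
print(len(ovr.estimators_))   # one underlying binary classifier per class
print(ovr.predict([[9, 9]]))  # a point inside the class-1 cluster
```

Each of the three underlying estimators only ever solves a two-class problem: "my class" versus "everything else".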

Building a simple classifier


Let's see how to build a simple classifier using some training data.

How to do it…

  1. We will use the simple_classifier.py file, provided to you as a reference. Assuming that you have imported the numpy and matplotlib.pyplot packages as we did in the last chapter, let's create some sample data:

    X = np.array([[3,1], [2,5], [1,8], [6,4], [5,2], [3,5], [4,7], [4,-1]])
  2. Let's assign some labels to these points:

    y = [0, 1, 1, 0, 0, 1, 1, 0]
  3. As we have only two classes, the y list contains 0s and 1s. In general, if you have N classes, then the values in y will range from 0 to N-1. Let's separate the data into classes based on the labels:

    class_0 = np.array([X[i] for i in range(len(X)) if y[i]==0])
    class_1 = np.array([X[i] for i in range(len(X)) if y[i]==1])
  4. To get an idea about our data, let's plot it, as follows:

    plt.figure()
    plt.scatter(class_0[:,0], class_0[:,1], color='black', marker='s')
    plt.scatter(class_1[:,0], class_1[:,1], color='black', marker='x')

    This is a...
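Putting steps 1 to 4 together: in this dataset, every class-0 point lies below the line y = x and every class-1 point lies above it, so a minimal stand-in classifier is a single comparison per point. A self-contained sketch, with the plotting calls omitted:

```python
import numpy as np

# Sample data and labels from the recipe
X = np.array([[3, 1], [2, 5], [1, 8], [6, 4], [5, 2], [3, 5], [4, 7], [4, -1]])
y = np.array([0, 1, 1, 0, 0, 1, 1, 0])

# Boolean-mask indexing is equivalent to the list comprehensions in step 3
class_0 = X[y == 0]
class_1 = X[y == 1]

# The line y = x separates the classes: class 0 below it, class 1 above it
predictions = np.where(X[:, 0] >= X[:, 1], 0, 1)
print((predictions == y).all())  # the rule reproduces every label
```

This is of course the degenerate case; the rest of the chapter replaces the hand-picked line with boundaries learned from the data.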

Building a logistic regression classifier


Despite the word regression being present in the name, logistic regression is actually used for classification purposes. Given a set of datapoints, our goal is to build a model that can draw linear boundaries between our classes. It extracts these boundaries by solving a set of equations derived from the training data.

How to do it…

  1. Let's see how to do this in Python. We will use the logistic_regression.py file that is provided to you as a reference. Assuming that you imported the necessary packages, let's create some sample data along with training labels:

    import numpy as np
    from sklearn import linear_model 
    import matplotlib.pyplot as plt
    
    X = np.array([[4, 7], [3.5, 8], [3.1, 6.2], [0.5, 1], [1, 2], [1.2, 1.9], [6, 2], [5.7, 1.5], [5.4, 2.2]])
    y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

    Here, we assume that we have three classes.

  2. Let's initialize the logistic regression classifier:

    classifier = linear_model.LogisticRegression(solver='liblinear', C=100)
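The recipe's snippet, completed into a runnable sketch (the solver and C value follow the text; the plot_classifier helper from the book's support files is omitted here):

```python
import numpy as np
from sklearn import linear_model

X = np.array([[4, 7], [3.5, 8], [3.1, 6.2],
              [0.5, 1], [1, 2], [1.2, 1.9],
              [6, 2], [5.7, 1.5], [5.4, 2.2]])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# C controls the regularization strength: a larger C penalizes
# misclassification more heavily, giving sharper class boundaries
classifier = linear_model.LogisticRegression(solver='liblinear', C=100)
classifier.fit(X, y)

print(classifier.predict([[5, 7]]))  # a point near the class-0 cluster
print(classifier.score(X, y))        # training accuracy on this tiny set
```

With three well-separated clusters and a large C, the linear boundaries classify all nine training points correctly.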

Building a Naive Bayes classifier


A Naive Bayes classifier is a supervised learning classifier that uses Bayes' theorem to build the model. Let's go ahead and build one.

How to do it…

  1. We will use the naive_bayes.py file, provided to you as a reference. Let's import a couple of things:

    from sklearn.naive_bayes import GaussianNB 
    from logistic_regression import plot_classifier
  2. You were provided with a data_multivar.txt file, which contains the data we will use here: comma-separated numerical values on each line. Let's load the data from this file:

    input_file = 'data_multivar.txt'
    
    X = []
    y = []
    with open(input_file, 'r') as f:
        for line in f.readlines():
            data = [float(x) for x in line.split(',')]
            X.append(data[:-1])
            y.append(data[-1]) 
    
    X = np.array(X)
    y = np.array(y)

    We have now loaded the input data into X and the labels into y.

  3. Let's build the Naive Bayes classifier:

    classifier_gaussiannb = GaussianNB()
    classifier_gaussiannb.fit(X, y)
    y_pred = classifier_gaussiannb.predict(X)
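data_multivar.txt ships with the book's code bundle, so for a self-contained check we can substitute synthetic data of the same shape (two feature columns and a label). This sketch follows the recipe's steps with that substitution:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_blobs

# Stand-in for data_multivar.txt: four well-separated 2-D clusters
X, y = make_blobs(n_samples=400, centers=[[0, 0], [5, 5], [0, 5], [5, 0]],
                  cluster_std=0.8, random_state=3)

# GaussianNB assumes each feature is normally distributed within a class,
# which is exactly how this synthetic data was generated
classifier_gaussiannb = GaussianNB()
classifier_gaussiannb.fit(X, y)
y_pred = classifier_gaussiannb.predict(X)

accuracy = 100.0 * (y == y_pred).sum() / X.shape[0]
print("Accuracy of the classifier =", round(accuracy, 2), "%")
```

Because the clusters match the model's Gaussian assumption, the training accuracy here is close to 100%; on the book's data the number will differ.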

Splitting the dataset for training and testing


Let's see how to split our data properly into training and testing datasets.

How to do it…

  1. Add the following code snippet into the same Python file as the previous recipe:

    from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older scikit-learn releases
    
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=5)
    classifier_gaussiannb_new = GaussianNB()
    classifier_gaussiannb_new.fit(X_train, y_train)

    Here, we allocated 25% of the data for testing, as specified by the test_size parameter. The remaining 75% of the data will be used for training.

  2. Let's evaluate the classifier on test data:

    y_test_pred = classifier_gaussiannb_new.predict(X_test)
  3. Let's compute the accuracy of the classifier:

    accuracy = 100.0 * (y_test == y_test_pred).sum() / X_test.shape[0]
    print("Accuracy of the classifier =", round(accuracy, 2), "%")
  4. Let's plot the datapoints and the boundaries on test data:

    plot_classifier(classifier_gaussiannb_new, X_test, y_test)
  5. You should see the following...
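The sklearn.cross_validation module used at the time of writing has since been removed from scikit-learn in favour of sklearn.model_selection. The same recipe on current versions, with synthetic stand-in data since data_multivar.txt is not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split  # was sklearn.cross_validation
from sklearn.naive_bayes import GaussianNB

# Stand-in for data_multivar.txt: four well-separated 2-D clusters
X, y = make_blobs(n_samples=400, centers=[[0, 0], [5, 5], [0, 5], [5, 0]],
                  cluster_std=0.8, random_state=3)

# test_size=0.25 holds out 25% of the rows for testing; the other 75% train
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=5)

classifier_gaussiannb_new = GaussianNB()
classifier_gaussiannb_new.fit(X_train, y_train)
y_test_pred = classifier_gaussiannb_new.predict(X_test)

accuracy = 100.0 * (y_test == y_test_pred).sum() / X_test.shape[0]
print("Accuracy of the classifier =", round(accuracy, 2), "%")
```

Fixing random_state makes the split reproducible, which matters when you want to compare classifiers on exactly the same held-out data.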

Evaluating the accuracy using cross-validation


Cross-validation is an important concept in machine learning. In the previous recipe, we split the data into training and testing datasets. However, to make the model more robust, we need to repeat this process with different subsets. If we fine-tune the model for one particular subset, we may end up overfitting it. Overfitting refers to a situation where we tune a model too closely to one dataset and it then fails to perform well on unknown data. We want our machine learning model to perform well on unknown data.

Getting ready…

Before we discuss how to perform cross-validation, let's talk about performance metrics. When we are dealing with machine learning models, we usually care about three things: precision, recall, and the F1 score. We can request the required performance metric using the scoring parameter. Precision refers to the number of items correctly classified as a percentage of the overall number of items in the list. Recall...
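cross_val_score repeats the split-train-evaluate cycle over several folds and accepts the scoring parameter mentioned above; a sketch with synthetic stand-in data (on scikit-learn versions contemporary with the book, the function lived in sklearn.cross_validation):

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in data: four well-separated 2-D clusters
X, y = make_blobs(n_samples=400, centers=[[0, 0], [5, 5], [0, 5], [5, 0]],
                  cluster_std=0.8, random_state=3)
classifier = GaussianNB()

# cv=5: five rounds, each training on 4 folds and scoring on the held-out fold
for metric in ('accuracy', 'precision_weighted', 'recall_weighted', 'f1_weighted'):
    scores = cross_val_score(classifier, X, y, scoring=metric, cv=5)
    print(metric + ": " + str(round(100 * scores.mean(), 2)) + "%")
```

The weighted variants average the per-class precision, recall, and F1 scores, weighting each class by its number of samples.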


Key benefits

  • Understand which algorithms to use in a given context with the help of this exciting recipe-based guide
  • Learn about perceptrons and see how they are used to build neural networks
  • Stuck while making sense of images, text, speech, and real estate? This guide will come to your rescue, showing you how to perform machine learning for each one of these using various techniques

Description

Machine learning is becoming increasingly pervasive in the modern data-driven world. It is used extensively across many fields such as search engines, robotics, self-driving cars, and more. With this book, you will learn how to perform various machine learning tasks in different environments. We’ll start by exploring a range of real-life scenarios where machine learning can be used, and look at various building blocks. Throughout the book, you’ll use a wide variety of machine learning algorithms to solve real-world problems and use Python to implement these algorithms. You’ll discover how to deal with various types of data and explore the differences between machine learning paradigms such as supervised and unsupervised learning. We also cover a range of regression techniques, classification algorithms, predictive modeling, data visualization techniques, recommendation engines, and more with the help of real-world examples.

Who is this book for?

This book is for Python programmers who are looking to use machine learning algorithms to create real-world applications. It is friendly to Python beginners, but familiarity with Python programming will certainly help you play around with the code.

What you will learn

  • Explore classification algorithms and apply them to the income bracket estimation problem
  • Use predictive modeling and apply it to real-world problems
  • Understand how to perform market segmentation using unsupervised learning
  • Explore data visualization techniques to interact with your data in diverse ways
  • Find out how to build a recommendation engine
  • Understand how to interact with text data and build models to analyze it
  • Work with speech data and recognize spoken words using Hidden Markov Models
  • Analyze stock market data using Conditional Random Fields
  • Work with image data and build systems for image recognition and biometric face recognition
  • Grasp how to use deep neural networks to build an optical character recognition system

Product Details

Publication date: Jun 23, 2016
Length: 304 pages
Edition: 1st
Language: English
ISBN-13: 9781786464477





Table of Contents

13 Chapters
1. The Realm of Supervised Learning
2. Constructing a Classifier
3. Predictive Modeling
4. Clustering with Unsupervised Learning
5. Building Recommendation Engines
6. Analyzing Text Data
7. Speech Recognition
8. Dissecting Time Series and Sequential Data
9. Image Content Analysis
10. Biometric Face Recognition
11. Deep Neural Networks
12. Visualizing Data
Index

Customer reviews

Rating distribution: 4.4 (5 Ratings)
5 star 60%
4 star 20%
3 star 20%
2 star 0%
1 star 0%

Amazon Customer, Dec 10, 2016 (5 stars)
The cookbook is excellent: focused and relevant to the needs of the machine learning community. The author communicates with clarity for anyone who would like to learn the practical aspects of implementing the learning algorithms of today and the future. Excellent work, up to date, and very relevant for the applications of the day! Every algorithm works and is easily applicable.
Amazon Verified review

Spoorthi V., Jul 28, 2016 (5 stars)
I'm relatively new to Python and I would like to say that this book is very friendly to Python beginners. The projects were easy to understand and the code is explained step by step. It was interesting to learn how to work with different types of data like images, text, and audio. I would definitely recommend this book to people who want to get started with machine learning in Python.
Amazon Verified review

Nari, Aug 08, 2016 (5 stars)
I would say this book is ideal for anyone who knows some machine learning basics and has experience with Python, but it's also a great book for beginners who want to learn about practical ML problems. I've taken Andrew Ng's Stanford Machine Learning courses in the past, and converting the theories into code isn't always intuitive. However, this book teaches you how to implement those algorithms in code, with lots of practical problems and easy-to-understand example code. Also, the additional graphs and images helped me visualize the concepts. Highly recommended!
Amazon Verified review

Rajesh Ranjan, Dec 04, 2018 (4 stars)
👍🏻
Amazon Verified review

P. Sebastien, May 28, 2017 (3 stars)
I did not fall in love with this book at all: many recipes, but treated very quickly. I found a couple of tips, but other books from the same publisher are better, I believe.
Amazon Verified review