Hands-On Unsupervised Learning with Python: Implement machine learning and deep learning models using Scikit-Learn, TensorFlow, and more

Giuseppe Bonaccorso
3.7 (3 ratings) | Paperback | Feb 2019 | 386 pages | 1st Edition
eBook: $34.19 (list price $37.99) | Paperback: $51.99


Hands-On Unsupervised Learning with Python

Clustering Fundamentals

In this chapter, we are going to introduce the fundamental concepts of cluster analysis, focusing on the main principles shared by many algorithms and the most important techniques that can be employed to evaluate the performance of a method.

In particular, we are going to discuss:

  • An introduction to clustering and distance functions
  • K-means and K-means++
  • Evaluation metrics
  • K-Nearest Neighbors (KNN)
  • Vector Quantization (VQ)

Technical requirements

The code presented in this chapter requires:

  • Python 3.5+ (Anaconda distribution: https://www.anaconda.com/distribution/ is highly recommended)
  • Libraries:
    • SciPy 0.19+
    • NumPy 1.10+
    • scikit-learn 0.20+
    • pandas 0.22+
    • Matplotlib 2.0+
    • seaborn 0.9+

The dataset can be obtained through UCI. The CSV file can be downloaded from https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data and doesn't need any preprocessing except for the addition of the column names, which occurs during the loading stage.
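As a sketch of that loading step, the snippet below supplies the missing header at read time. The two inline rows and the generic feature names (`feature_0` … `feature_29`) are stand-ins of ours, not the file's documented attribute names; in practice, the UCI URL above can be passed directly to `pd.read_csv`.

```python
import io
import pandas as pd

# 'id' and 'diagnosis' follow the documented file layout; the 30 feature
# names here are generic placeholders for the real attribute names.
column_names = ['id', 'diagnosis'] + ['feature_{}'.format(i) for i in range(30)]

# Two truncated sample rows standing in for the downloaded wdbc.data file
sample = io.StringIO(
    "842302,M," + ",".join("0.0" for _ in range(30)) + "\n" +
    "842517,B," + ",".join("1.0" for _ in range(30)) + "\n"
)

# The file has no header row, so the names are supplied while loading
df = pd.read_csv(sample, header=None, names=column_names)

print(df.shape)                   # (2, 32)
print(df['diagnosis'].tolist())   # ['M', 'B']
```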

The examples are available on the GitHub repository:

https://github.com/PacktPublishing/HandsOn-Unsupervised-Learning-with-Python/Chapter02.

Introduction to clustering

As we explained in Chapter 1, Getting Started with Unsupervised Learning, the main goal of a cluster analysis is to group the elements of a dataset according to a similarity measure or a proximity criterion. In the first part of this chapter, we are going to focus on the former approach, while in the second part and in the next chapter, we will analyze more generic methods that exploit other geometric features of the dataset.
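As a quick illustration of the similarity measures involved, here is a minimal Minkowski distance in NumPy (p=1 yields the Manhattan distance, p=2 the Euclidean one); the function name and the sample points are illustrative choices of ours:

```python
import numpy as np

def minkowski_distance(x, y, p):
    """General Minkowski distance; p=1 is Manhattan, p=2 is Euclidean."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

x = np.array([0.0, 0.0])
y = np.array([3.0, 4.0])

print(minkowski_distance(x, y, 1))  # Manhattan: 7.0
print(minkowski_distance(x, y, 2))  # Euclidean: 5.0
# As p grows, the value approaches the Chebyshev (max-coordinate) distance
print(minkowski_distance(x, y, 5))
```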

Let's take a data generating process p_data(x) and draw N samples from it:

X = {x_1, x_2, ..., x_N} with x_i drawn from p_data(x)

It's possible to assume that the probability space of p_data(x) is partitionable into (potentially infinite) configurations containing K (for K = 1, 2, ...) regions, so that p_data(x; k) represents the probability of a sample belonging to a cluster k. In this way, we are stating that every possible clustering structure already exists when pdata(x...

K-means

K-means is the simplest implementation of the principle of maximum separation and maximum internal cohesion. Let's suppose we have a dataset X ∈ ℜ^(M×N) (that is, M N-dimensional samples) that we want to split into K clusters and a set of K centroids corresponding to the means of the samples assigned to each cluster K_j:

M = {μ_1, μ_2, ..., μ_K}

The set M and the centroids have an additional index (as a superscript) indicating the iterative step. Starting from an initial guess M(0), K-means tries to minimize an objective function called inertia (that is, the total average intra-cluster distance between samples assigned to a cluster K_j and its centroid μ_j):

S(t) = Σ_j Σ_{x_i ∈ K_j} ||x_i − μ_j(t)||²

It's easy to understand that S(t) cannot be considered as an absolute measure because its value is highly influenced by the variance of the samples. However, S(t+1) < S(t) implies that the centroids are moving...
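The iterative procedure can be sketched in plain NumPy. This is an illustrative implementation of Lloyd's algorithm (not the book's code; in practice, scikit-learn's KMeans would be used), recording the inertia at each step to show the non-increasing sequence S(t):

```python
import numpy as np

def kmeans(X, K, n_iter=10, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update,
    recording the inertia S(t) at every step."""
    rng = np.random.RandomState(seed)
    # Initial guess: K samples chosen at random as centroids
    centroids = X[rng.choice(len(X), K, replace=False)]
    history = []
    for _ in range(n_iter):
        # Assignment step: each sample goes to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Inertia: sum of squared distances to the assigned centroids
        history.append(np.sum((X - centroids[labels]) ** 2))
        # Update step: centroids move to the mean of their assigned samples
        for j in range(K):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids, history

# Two well-separated synthetic blobs
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 6.0])
labels, centroids, history = kmeans(X, K=2)

# The inertia sequence is non-increasing: S(t+1) <= S(t)
print(history[0], history[-1])
```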

Analysis of the Breast Cancer Wisconsin dataset

In this chapter, we are using the well-known Breast Cancer Wisconsin dataset to perform a cluster analysis. Originally, the dataset was proposed in order to train classifiers; however, it can also be very helpful for a non-trivial cluster analysis. It contains 569 records made up of 32 attributes (including the diagnosis and an identification number). All the attributes are strictly related to biological and morphological properties of the tumors, but our goal is to validate generic hypotheses considering the ground truth (benign or malignant) and the statistical properties of the dataset. Before moving on, it's important to clarify some points. The dataset is high-dimensional and the clusters are non-convex (so we cannot expect a perfect segmentation). Moreover, our goal is not to use a clustering algorithm to obtain the results of...

Evaluation metrics

In this section, we are going to analyze some common methods that can be employed to evaluate the performance of a clustering algorithm and to help find the optimal number of clusters.

Minimizing the inertia

One of the biggest drawbacks of K-means and similar algorithms is the explicit request for the number of clusters. Sometimes this piece of information is imposed by external constraints (for example, in the breast cancer example, there are only two possible diagnoses), but in many cases (when an exploratory analysis is needed), the data scientist has to check different configurations and evaluate them. The simplest way to evaluate K-means performance and choose an appropriate number of clusters...
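The approach can be sketched by fitting scikit-learn's KMeans for a range of K values on synthetic blobs and inspecting the final inertia; the data and parameter choices below are illustrative, not the book's exact setup:

```python
import numpy as np
from sklearn.cluster import KMeans

# Three synthetic blobs; the true number of clusters is 3
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + [0, 0],
               rng.randn(100, 2) + [8, 0],
               rng.randn(100, 2) + [0, 8]])

# Fit K-means for each candidate K and collect the final inertia S
inertias = {}
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    km.fit(X)
    inertias[k] = km.inertia_

for k, s in inertias.items():
    print(k, round(s, 1))
# The inertia always decreases as K grows; the "elbow" (where the decrease
# slows down sharply, here at K=3) suggests an appropriate number of clusters.
```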

K-Nearest Neighbors

K-Nearest Neighbors (KNN) is a method belonging to a category called instance-based learning. In this case, there's no parametrized model, but rather a rearrangement of the samples in order to speed up specific queries. In the simplest case (also known as brute-force search), let's say we have a dataset X containing M samples x_i ∈ ℜ^N. Given a distance function d(x_i, x_j), it's possible to define the radius neighborhood of a test sample x_i as:

The set ν(x_i) is a ball centered on x_i, including all the samples whose distance is less than or equal to R. Alternatively, it's possible to compute only the top k nearest neighbors, which are the k samples closest to x_i (in general, this set is a subset of ν(x_i), but the opposite can also happen when k is very large). The procedure is straightforward but, unfortunately...
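Both query types can be sketched with scikit-learn's NearestNeighbors; the dataset, radius, and parameter values below are illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.randn(50, 3)

# Build the query structure (leaf_size only matters for tree-based methods)
nn = NearestNeighbors(algorithm='ball_tree', leaf_size=25)
nn.fit(X)

query = X[0].reshape(1, -1)

# Radius query: all samples with d(x_i, x_q) <= R (the ball centered on x_q)
radius_idx = nn.radius_neighbors(query, radius=1.5, return_distance=False)[0]

# Top-k query: the k nearest samples (the query itself is included here,
# since it belongs to the dataset, so its own distance is 0)
dist, knn_idx = nn.kneighbors(query, n_neighbors=5)

print(len(radius_idx), knn_idx[0])
```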

Vector Quantization

Vector Quantization (VQ) is a method that exploits unsupervised learning in order to perform a lossy compression of a sample x_i ∈ ℜ^N (for simplicity, we are supposing that the multi-dimensional samples are flattened) or an entire dataset X. The main idea is to find a codebook Q with a number of entries C << N and to associate each element with an entry q_i ∈ Q. In the case of a single sample, each entry will represent one or more groups of features (for example, it can be the mean); therefore, the process can be described as a transformation T whose general representation is:

The codebook is defined as Q = (q_1, q_2, ..., q_C). Hence, given a synthetic dataset made up of a group of feature aggregates (for example, a group of two consecutive elements), VQ associates a single codebook entry:

As the input sample is represented using a combination...
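A minimal sketch of VQ on a dataset, assuming the codebook is built from K-means centroids (a common choice, though not the only one); all names and sizes here are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# A dataset of 500 two-dimensional samples
X = rng.randn(500, 2) * 2.0

# Build a codebook Q with C = 16 entries using the K-means centroids
C = 16
km = KMeans(n_clusters=C, n_init=10, random_state=0)
codes = km.fit_predict(X)          # index of the codebook entry per sample
codebook = km.cluster_centers_     # Q = (q_1, ..., q_C)

# Lossy compression: each sample is replaced by its codebook entry
X_quantized = codebook[codes]

# Average distortion introduced by the quantization
distortion = np.mean(np.sum((X - X_quantized) ** 2, axis=1))
print(codebook.shape, round(distortion, 3))
```

Storing only the 16 codebook entries plus one small integer per sample is what makes the compression lossy but compact.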

Summary

In this chapter, we explained the fundamental concepts of cluster analysis, starting from the notion of similarity and how to measure it. We discussed the K-means algorithm and its optimized variant, K-means++, and we analyzed the Breast Cancer Wisconsin dataset. Then we discussed the most important evaluation metrics (with or without knowledge of the ground truth) and learned which factors can influence performance. The next two topics were KNN, a very famous algorithm that can be employed to find the most similar samples given a query vector, and VQ, a technique that exploits clustering algorithms to find a lossy representation of a sample (for example, an image) or a dataset.

In the next chapter, we are going to introduce some of the most important advanced clustering algorithms, showing how they can easily solve non-convex problems.

...

Questions

  1. If two samples have a Minkowski distance (p=5) equal to 10, what can you say about their Manhattan distance?
  2. The main factor that negatively impacts the convergence speed of K-means is the dimensionality of the dataset. Is this correct?
  3. One of the most important factors that can positively impact the performance of K-means is the convexity of the clusters. Is this correct?
  4. The homogeneity score of a clustering application is equal to 0.99. What does it mean?
  5. What is the meaning of an adjusted Rand score equal to -0.5?
  6. Considering the previous question, can a different number of clusters yield a better score?
  7. An application based on KNN requires on average 100 5-NN base queries per minute. Every minute, 2 50-NN queries are executed (each of them requires 4 seconds with a leaf size=25) and, immediately after them, a 2-second blocking task is performed. Assuming...

Further reading

  • On the Surprising Behavior of Distance Metrics in High Dimensional Space, Aggarwal C. C., Hinneburg A., Keim D. A., ICDT, 2001
  • K-means++: The Advantages of Careful Seeding, Arthur D., Vassilvitskii S., Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007
  • Visualizing Data using t-SNE, van der Maaten L., Hinton G., Journal of Machine Learning Research 9, 2008
  • Robust Linear Programming Discrimination of Two Linearly Inseparable Sets, Bennett K. P., Mangasarian O. L., Optimization Methods and Software 1, 1992
  • Breast Cancer Diagnosis and Prognosis via Linear Programming, Mangasarian O. L., Street W. N., Wolberg W. H., Operations Research, 43(4), pages 570-577, July-August 1995
  • V-Measure: A conditional entropy-based external cluster evaluation measure, Rosenberg A., Hirschberg J., Proceedings of the 2007 Joint Conference on Empirical Methods...

Key benefits

  • Explore unsupervised learning with clustering, autoencoders, restricted Boltzmann machines, and more
  • Build your own neural network models using modern Python libraries
  • Learn from practical examples how to implement different machine learning and deep learning techniques

Description

Unsupervised learning is about making use of raw, untagged data and applying learning algorithms to it to help a machine predict its outcome. With this book, you will explore the concept of unsupervised learning to cluster large sets of data and analyze them repeatedly until the desired outcome is found using Python. This book starts with the key differences between supervised, unsupervised, and semi-supervised learning. You will be introduced to the most widely used libraries and frameworks from the Python ecosystem and address unsupervised learning in both the machine learning and deep learning domains. You will explore the various algorithms and techniques that are used to implement unsupervised learning in real-world use cases. You will learn a variety of unsupervised learning approaches, including randomized optimization, clustering, feature selection and transformation, and information theory. You will get hands-on experience with how neural networks can be employed in unsupervised scenarios. You will also explore the steps involved in building and training a GAN in order to process images. By the end of this book, you will have learned the art of unsupervised learning for different real-world challenges.

Who is this book for?

This book is intended for statisticians, data scientists, machine learning developers, and deep learning practitioners who want to build smart applications by implementing the key building blocks of unsupervised learning, and to master all the new techniques and algorithms offered in machine learning and deep learning using real-world examples. Some prior knowledge of machine learning concepts and statistics is desirable.

What you will learn

  • Use clustering algorithms to identify and optimize natural groups of data
  • Explore advanced non-linear and hierarchical clustering in action
  • Apply soft label assignments for fuzzy c-means and Gaussian mixture models
  • Detect anomalies through density estimation
  • Perform principal component analysis using neural network models
  • Create unsupervised models using GANs

Product Details

Publication date : Feb 28, 2019
Length: 386 pages
Edition : 1st
Language : English
ISBN-13 : 9781789348279





Table of Contents

11 Chapters
  1. Getting Started with Unsupervised Learning
  2. Clustering Fundamentals
  3. Advanced Clustering
  4. Hierarchical Clustering in Action
  5. Soft Clustering and Gaussian Mixture Models
  6. Anomaly Detection
  7. Dimensionality Reduction and Component Analysis
  8. Unsupervised Neural Network Models
  9. Generative Adversarial Networks and SOMs
  10. Assessments
  11. Other Books You May Enjoy

Customer reviews

Rating distribution: 3.7 (3 ratings)
5 star: 66.7%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 33.3%
Ellery Lin, May 24, 2019 (5 stars, Amazon Verified review)
I appreciate the introduction of Shape - to cluster time-series very much.
Diana, Sep 08, 2023 (5 stars, Amazon Verified review)
I like that have the theory and examples in how to program it in Python. This book who's a little bit about mathematics equations. So if you are interested in the demonstration of mathematics equations then you need other book. This is practical book in Python and I love it.
Danielle W, Jun 08, 2020 (1 star, Amazon Verified review)
This is a paperback re-print made in black and white. Nowhere in the description or preview does it show that it's not in color. For most of the figures having color is pretty important to follow along. Disappointing.

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page (found in the top right of the page, or at https://subscription.packtpub.com/my-account/subscription). From here you will see the 'cancel subscription' button in the grey box with your subscription information.

What are credits?

Credits can be earned from reading 40 sections of any title within the payment cycle - a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage - subscription.packtpub.com - by clicking on the 'my library' dropdown and selecting 'credits'.

What happens if an Early Access course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content, as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.
