2. Hierarchical Clustering
In this chapter, we will implement the hierarchical clustering algorithm from scratch using common Python packages and perform agglomerative clustering. We will also compare k-means with hierarchical clustering. By the end of this chapter, we will be able to use hierarchical clustering to build stronger groupings that make more logical sense.
In this chapter, we will expand on the basic ideas that we built in Chapter 1, Introduction to Clustering, by grounding clustering in the concept of similarity. Once again, we will implement forms of the Euclidean distance to capture the notion of similarity. It is important to bear in mind that the Euclidean distance just happens to be one of the most popular distance metrics; it is not the only one. Through these distance metrics, we will expand on the simple neighbor calculations that we explored in the previous chapter by introducing the concept of hierarchy. By using hierarchy to convey clustering information, we can build stronger groupings that make more logical sense. Like k-means, hierarchical clustering can be helpful for cases such as customer segmentation or identifying similar product types, but it has the added benefit of letting you explain the results in a clearer fashion. In this chapter, we will outline...
Chapter 1, Introduction to Clustering, covered both the high-level concepts and in-depth details of one of the most basic clustering algorithms: k-means. While it is indeed a simple approach, do not discredit it; it will be a valuable addition to your toolkit as you continue your exploration of the unsupervised learning world. In many real-world use cases, companies make valuable discoveries through the simplest methods, such as k-means or linear regression (for supervised learning). As an example, consider a large selection of customer data: if you were to evaluate it directly in a table, you would be unlikely to find anything helpful. However, even a simple clustering algorithm can identify where groups within the data are similar and dissimilar. As a refresher, let's quickly walk through what clusters are and how k-means works to find them:
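To make this refresher concrete, here is a minimal sketch of k-means in action, assuming scikit-learn is available; the blob dataset and the choice of k=3 are purely illustrative:

```python
# A minimal k-means refresher; the synthetic blob data is illustrative only.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate 300 points scattered around 3 hypothetical cluster centers
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Fit k-means with k=3 and assign each point to its nearest centroid
model = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = model.fit_predict(X)

print(labels[:10])             # cluster assignment of the first 10 points
print(model.cluster_centers_)  # the 3 learned centroids
```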
The Organization of the Hierarchy
Both the natural and human-made worlds contain many examples of systems organized into hierarchies, and, for the most part, this organization makes a lot of sense. A common representation of these hierarchies is the tree-based data structure. Imagine that you have a parent node with any number of child nodes that can subsequently be parent nodes themselves. By organizing information into a tree structure, you can build an information-dense diagram that clearly shows how things relate to their peers and to larger abstract concepts.
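To make the tree idea concrete, here is a minimal sketch of such a parent/child structure in Python; the Node class and the labels used are hypothetical, not part of any library:

```python
# A minimal parent/child tree node; the names here are purely illustrative.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def show(self, depth=0):
        # Indent each level to make the hierarchy visible
        print("  " * depth + self.name)
        for child in self.children:
            child.show(depth + 1)

# A tiny hierarchy: one parent node with nested child nodes
animals = Node("Animals", [
    Node("Dogs", [Node("Terrier"), Node("Retriever")]),
    Node("Cats", [Node("Siamese"), Node("Tabby")]),
])
animals.show()
```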
An example from the natural world to help illustrate this concept can be seen in how we view the hierarchy of animals, which goes from parent classes to individual species:
In the preceding diagram, you can see an example of how relational information between varieties of animals can be...
Introduction to Hierarchical Clustering
So far, we have shown you that hierarchies can be excellent structures to organize information that clearly shows nested relationships among data points. While this helps us gain an understanding of the parent/child relationships between items, it can also be very handy when forming clusters. Expanding on the animal example in the previous section, imagine that you were simply presented with two features of animals: their height (measured from the tip of the nose to the end of the tail) and their weight. Using this information, you then have to recreate a hierarchical structure in order to identify which records in your dataset correspond to dogs and cats, as well as their relative subspecies.
Since you are only given animal heights and weights, you won't be able to deduce the specific names of each species. However, by analyzing the features that you have been provided with, you can develop a structure within the data that serves as...
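As a sketch of how such a structure could be built in practice, the following uses SciPy's agglomerative routines on made-up height/weight pairs; the measurements and the choice of two flat clusters are assumptions for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical (height, weight) measurements for a mix of cats and dogs
X = np.array([
    [46, 4.0], [48, 4.5], [50, 5.0],      # smaller animals (cat-like)
    [90, 25.0], [95, 30.0], [100, 32.0],  # larger animals (dog-like)
])

# Build the hierarchy bottom-up using centroid linkage
Z = linkage(X, method="centroid")

# Cut the hierarchy into two flat clusters
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # e.g., [1 1 1 2 2 2]
```

Plotting Z with scipy.cluster.hierarchy.dendrogram would show the same hierarchy visually.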
In Exercise 2.01, Building a Hierarchy, you implemented hierarchical clustering using what is known as Centroid Linkage. Linkage determines how the distances between clusters are calculated, and the right choice depends on the type of problem you are facing. Centroid linkage was chosen for Exercise 2.02, Applying Linkage Criteria, as it essentially mirrors the centroid search that we used in k-means. However, it is not the only option when it comes to clustering data points. Two other popular choices for determining distances between clusters are single linkage and complete linkage.
Single Linkage works by finding the minimum distance between any pair of points across two clusters and using it as the criterion for linkage. Simply put, it combines clusters based on the closest points between them. This is expressed mathematically as follows:
dist(a,b) = min( dist( a[i], b[j] ) )
In the preceding equation,
a[i] is the ith point within...
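As a quick numerical sketch of this definition, the following computes the single linkage distance between two hypothetical clusters with SciPy; complete linkage, by contrast, would take the maximum instead of the minimum:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Two hypothetical clusters of 2D points
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[4.0, 0.0], [5.0, 3.0]])

# All pairwise Euclidean distances dist(a[i], b[j])
pairwise = cdist(a, b)

single_linkage = pairwise.min()    # closest pair between the clusters
complete_linkage = pairwise.max()  # farthest pair between the clusters

print(single_linkage)    # 3.0 (from [1, 0] to [4, 0])
print(complete_linkage)  # ~5.83 (from [0, 0] to [5, 3])
```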
Agglomerative versus Divisive Clustering
So far, our instances of hierarchical clustering have all been agglomerative – that is, they have been built from the bottom up. While this is typically the most common approach for this type of clustering, it is important to know that it is not the only way a hierarchy can be created. The opposite hierarchical approach, built from the top down, can also be used to create your taxonomy. This approach is called divisive hierarchical clustering and works by starting with all the data points in your dataset in one massive cluster, then recursively splitting it. Many of the internal mechanics of the divisive approach will prove to be quite similar to the agglomerative approach:
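As a minimal sketch of the agglomerative (bottom-up) direction, the following uses scikit-learn's AgglomerativeClustering on made-up points; note that popular Python libraries such as scikit-learn and SciPy ship only agglomerative implementations, so divisive clustering typically has to be implemented by hand:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical 2D points forming two loose groups
X = np.array([[0, 0], [0, 1], [1, 0],
              [5, 5], [5, 6], [6, 5]])

# Bottom-up: every point starts as its own cluster, and the two
# closest clusters are merged repeatedly until 2 clusters remain
model = AgglomerativeClustering(n_clusters=2, linkage="single")
labels = model.fit_predict(X)
print(labels)  # e.g., [0 0 0 1 1 1]
```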
As with most problems in unsupervised learning, deciding on the best approach is often highly dependent on the problem you are faced with solving.
Imagine that you are an entrepreneur who has just bought...
k-means versus Hierarchical Clustering
In the previous chapter, we explored the merits of k-means clustering. Now, it is important to explore where hierarchical clustering fits into the picture. As we mentioned in the Linkage section, there is some potential direct overlap when it comes to grouping data points together using centroids. Universal to all of the approaches we've mentioned so far is the use of a distance function to determine similarity. We used the Euclidean distance here because of our in-depth exploration of it in the previous chapter, but any distance function can be used to determine similarity.
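To illustrate that the distance function is pluggable, the following sketch feeds Manhattan (cityblock) distances, rather than Euclidean ones, into SciPy's linkage; the points are made up:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])

# Condensed pairwise distances under two different metrics
d_euclidean = pdist(X, metric="euclidean")
d_manhattan = pdist(X, metric="cityblock")

# Single linkage accepts any precomputed condensed distance matrix
Z_euclidean = linkage(d_euclidean, method="single")
Z_manhattan = linkage(d_manhattan, method="single")
print(Z_manhattan)
```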
In practice, here are some quick highlights for choosing one clustering method over another:
- Hierarchical clustering benefits from not needing to pass in an explicit "k" number of clusters a priori. This means that you can find all the potential clusters and decide which clusters make the most sense after the algorithm has completed...
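As a sketch of that first point, you can build the hierarchy once with no "k" specified and then cut it at several candidate depths after the fact; the data below is illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 2D data; in practice this would be your feature matrix
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(8, 1, (20, 2))])

# Build the full hierarchy once, with no "k" specified up front
Z = linkage(X, method="centroid")

# Decide on the number of clusters afterward by cutting at different depths
for k in (2, 3, 4):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, np.bincount(labels)[1:])  # cluster sizes for each candidate k
```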
In this chapter, we discussed how hierarchical clustering works and where it may be best employed. In particular, we discussed various aspects of how clusters can be subjectively chosen through the evaluation of a dendrogram plot. This is a huge advantage over k-means clustering if you have absolutely no idea of what you're looking for in the data. Two key parameters that drive the success of hierarchical clustering were also discussed: the agglomerative versus divisive approach and linkage criteria. Agglomerative clustering takes a bottom-up approach by recursively grouping nearby data together until it results in one large cluster. Divisive clustering takes a top-down approach by starting with the one large cluster and recursively breaking it down until each data point falls into its own cluster. Divisive clustering has the potential to be more accurate since it has a complete view of the data from the start; however, it adds a layer of complexity that can decrease the...