Learning Predictive Analytics with R

Product type: Book
Published: Sep 2015
Publisher: Packt
ISBN-13: 9781782169352
Pages: 332
Edition: 1st
Author: Eric Mayor

Table of Contents (23 chapters)

Learning Predictive Analytics with R
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
Setting GNU R for Predictive Analytics
Visualizing and Manipulating Data Using R
Data Visualization with Lattice
Cluster Analysis
Agglomerative Clustering Using hclust()
Dimensionality Reduction with Principal Component Analysis
Exploring Association Rules with Apriori
Probability Distributions, Covariance, and Correlation
Linear Regression
Classification with k-Nearest Neighbors and Naïve Bayes
Classification Trees
Multilevel Analyses
Text Analytics with R
Cross-validation and Bootstrapping Using Caret and Exporting Predictive Models Using PMML
Exercises and Solutions
Further Reading and References
Index

Chapter 5. Agglomerative Clustering Using hclust()

Unlike partition clustering, which requires the user to specify the number of clusters, k, and then creates k homogeneous groups, hierarchical clustering defines clusters from the distances in the data without user intervention, and builds a tree of clusters from them. Hierarchical clustering is particularly useful when the data is suspected to be hierarchical (leaves nested in nodes, nested in higher-level nodes). It can also be used to determine the number of clusters to use in k-means clustering, as sketched below. Hierarchical clustering can be agglomerative or divisive. Agglomerative clustering usually yields a higher number of clusters, with fewer leaf nodes per cluster.
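As a minimal sketch of that last point, we can cut a dendrogram at a chosen number of groups and feed that number to k-means. The simulated matrix x, the seed, and the choice of two clusters are illustrative assumptions, not taken from the book:

# Illustrative data: two simulated groups of 25 points each
set.seed(42)
x <- rbind(matrix(rnorm(50, mean = 0), ncol = 2),
           matrix(rnorm(50, mean = 4), ncol = 2))

# Build the hierarchy, inspect the dendrogram for large height jumps,
# and cut the tree at the number of clusters that seems natural
hc <- hclust(dist(x))
plot(hc)
groups <- cutree(hc, k = 2)

# Use that number of clusters to seed a k-means partition
km <- kmeans(x, centers = 2)
table(groups, km$cluster)   # compare the two solutions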

Agglomerative clustering refers to algorithms that start with as many clusters as there are cases (each case being its own cluster) and merge the clusters iteratively, one pair at a time, until a single cluster corresponds to the entire dataset. Divisive clustering is...

The inner workings of agglomerative clustering


As briefly mentioned, agglomerative clustering refers to a family of algorithms. Let's start with an example using the data we last used in the previous chapter:

1  rownames(life.scaled) <- life$country
2  a <- hclust(dist(life.scaled))
3  par(mfrow = c(1, 2))
4  plot(a, hang = -1, xlab = "Case number", main = "Euclidean")

We started by adding the name of each country as the row name of the related case (line 1), in order to display it on the graph. The function hclust() was then used to generate a hierarchical agglomerative clustering solution from the data (line 2). The algorithm determines how to create the hierarchy of clusters from a distance matrix provided as an argument (here computed by dist(), whose default is the Euclidean distance). We discussed measures of distance in the previous chapter; please refer to that explanation if in doubt. Finally, the hclust object a created at line 2 was plotted as a dendrogram (line 4), shown in the following diagram. At line 3, we set the plotting area to...
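The text is truncated here, but the distance argument deserves a short, hedged illustration. The following sketch (not from the book) swaps in the Manhattan distance and inspects the merge history that hclust() records in the fitted object; it assumes life.scaled is the scaled matrix prepared in the previous chapter:

# Manhattan distance instead of the default Euclidean
d.man <- dist(life.scaled, method = "manhattan")
b <- hclust(d.man)
plot(b, hang = -1, xlab = "Case number", main = "Manhattan")

# Row i of b$merge names the two clusters joined at step i;
# b$height gives the distance at which each merge happened
head(b$merge)
head(b$height)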

Agglomerative clustering with hclust()


In what follows, we are going to explore the use of agglomerative clustering with hclust() using numerical and binary data in two datasets.

Exploring the results of votes in Switzerland

In this section, we will examine another dataset, which contains the percentage of acceptance of the themes of federal (national) voting objects in Switzerland in 2001. The first rows of the data are shown in the following table. The rows represent the cantons (the Swiss equivalent of states). The columns (except the first) represent the topics of the votes, and the values are the percentage of acceptance of each topic. The data was retrieved from the Swiss Statistics Office (www.bfs.admin.ch) and is provided in the folder for this chapter (file swiss_votes.dat).

The first five rows of the dataset

To load the data, save the file in your working directory, or change the working directory to the location of the file using setwd(), and type the following line of...
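The exact read command sits in the locked portion of the chapter, so the following is only a plausible sketch. It assumes swiss_votes.dat is a tab-separated file with a header row and the canton names in its first column; adjust sep and row.names if the file differs:

# Assumed layout: tab-separated, header row, canton names in column 1
swiss_votes <- read.table("swiss_votes.dat", header = TRUE,
                          sep = "\t", row.names = 1)
head(swiss_votes)   # inspect the first rows, as in the table above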

Summary


In this chapter, we discovered hierarchical (or nested) clustering, particularly in its agglomerative form. We used several distance metrics (Euclidean, Manhattan, and binary) as well as several linkage functions. We discussed how to interpret the results of clustering and how cluster analysis can be used to investigate the data further, drawing on real-life examples. Another popular application we did not discuss here is text classification. We also saw that datasets sometimes require preprocessing before they meet the requirements of an analysis. In the next chapter, we will see how to use principal component analysis, notably to perform dimensionality reduction.
