As you may be already aware, Apache Mahout is an open source library of scalable machine learning algorithms that focuses on clustering, classification, and recommendations.
This chapter will provide an introduction to machine learning and Apache Mahout.
In this chapter, we will cover the following topics:
Machine learning in a nutshell
Machine learning applications
Machine learning libraries
The history of machine learning
Apache Mahout
Setting up Apache Mahout
How Apache Mahout works
From Hadoop MapReduce to Spark
When is it appropriate to use Apache Mahout?
A detailed explanation of machine learning is beyond the scope of this book; for that, there are other excellent resources, such as the following:
Machine Learning by Andrew Ng at Coursera (https://www.coursera.org/course/ml)
Foundations of Machine Learning (Adaptive Computation and Machine Learning series) by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar
However, basic machine learning concepts are explained very briefly here for those who are not familiar with them.
Machine learning is an area of artificial intelligence that focuses on learning from the available data to make predictions on unseen data without explicit programming.
To solve real-world problems using machine learning, we first need to represent the characteristics of the problem domain using features.
A feature is a distinct, measurable, heuristic property of the item of interest being perceived. We need to consider the features that have the greatest potential in discriminating between different categories.
Let's explain the difference between supervised learning and unsupervised learning using a simple example of pebbles:

Supervised learning: Take a collection of mixed pebbles, as given in the preceding figure, and categorize (label) them as small, medium, and large pebbles. Examples of supervised learning are regression and classification.
Unsupervised learning: Here, just group them based on similar sizes but don't label them. An example of unsupervised learning is clustering.
For a machine to perform learning tasks, it requires features such as the diameter and weight of each pebble.
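To make the pebble example concrete, here is a minimal sketch in plain Java (not using Mahout's API). The feature values, thresholds, and seed points are invented for illustration: the `label` method plays the role of the human-supplied labels used in supervised learning, while `nearestSeed` groups pebbles purely by their features, in the spirit of unsupervised learning.

```java
public class Pebbles {
    // Each pebble is described by two features: diameter (cm) and weight (g).
    static double[][] pebbles = {
        {1.0, 5.0}, {1.2, 6.0},     // small
        {3.0, 40.0}, {3.3, 45.0},   // medium
        {6.0, 200.0}, {6.5, 220.0}  // large
    };

    // Supervised view: a human supplies a label for each pebble, and a
    // model would learn to predict these labels from the features.
    static String label(double diameterCm) {
        if (diameterCm < 2.0) return "small";
        if (diameterCm < 5.0) return "medium";
        return "large";
    }

    // Unsupervised view: assign each pebble to the nearest of k seed
    // points using only the features -- no labels involved.
    static int nearestSeed(double[] pebble, double[][] seeds) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < seeds.length; i++) {
            double dx = pebble[0] - seeds[i][0];
            double dy = pebble[1] - seeds[i][1];
            double d = dx * dx + dy * dy;
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(label(1.0));                      // small
        double[][] seeds = {{1.0, 5.0}, {3.0, 40.0}, {6.0, 200.0}};
        System.out.println(nearestSeed(pebbles[3], seeds));  // 1
    }
}
```

Note that the same feature vectors serve both settings; only the presence or absence of labels distinguishes supervised from unsupervised learning.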
This book will cover how to implement the following machine learning techniques using Apache Mahout:
Clustering
Classification and regression
Recommendations
Did you know that machine learning has a significant impact on real-life, day-to-day applications? The world's most popular organizations, such as Google, Facebook, Yahoo!, and Amazon, use machine learning algorithms in their applications.
Information retrieval is an area where machine learning is widely applied in industry. Some examples include Google News, Google's targeted advertisements, and Amazon product recommendations.
Google News uses machine learning to categorize large volumes of online news articles:

The relevance of Google's targeted advertisements can be improved by using machine learning:

Amazon, like most e-commerce websites, uses machine learning to understand which products will interest its users:

Even though information retrieval is the area that has commercialized most of the machine learning applications, machine learning can be applied in various other areas, such as business and health care.
Machine learning is applied to solve different business problems, such as market segmentation, business analytics, risk classification, and stock market predictions.
A few of them are explained here.
In market segmentation, clustering techniques can be used to identify the homogeneous subsets of consumers, as shown in the following figure:

Take an example of a Fast-Moving Consumer Goods (FMCG) company that introduces a shampoo for personal use. They can use clustering to identify the different market segments, by considering features such as the number of people who have hair fall, colored hair, dry hair, and normal hair. Then, they can decide on the types of shampoo required for different market segments, which will maximize the profit.
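The segmentation idea above can be sketched with a tiny k-means implementation in plain Java. This is not Mahout's implementation (a real Mahout job would run at scale over HDFS); the feature names (hair-fall score, dry-hair score) and data values are invented for illustration.

```java
import java.util.Arrays;

public class SegmentSketch {
    // Assign each consumer to the nearest centroid, then recompute
    // centroids as the mean of their assigned points, for a fixed
    // number of iterations.
    static int[] kMeans(double[][] points, double[][] centroids, int iterations) {
        int[] assignment = new int[points.length];
        for (int it = 0; it < iterations; it++) {
            for (int p = 0; p < points.length; p++) {
                double best = Double.MAX_VALUE;
                for (int c = 0; c < centroids.length; c++) {
                    double d = 0;
                    for (int f = 0; f < points[p].length; f++) {
                        double diff = points[p][f] - centroids[c][f];
                        d += diff * diff;
                    }
                    if (d < best) { best = d; assignment[p] = c; }
                }
            }
            // Recompute each centroid from its assigned points.
            double[][] sums = new double[centroids.length][points[0].length];
            int[] counts = new int[centroids.length];
            for (int p = 0; p < points.length; p++) {
                counts[assignment[p]]++;
                for (int f = 0; f < points[p].length; f++)
                    sums[assignment[p]][f] += points[p][f];
            }
            for (int c = 0; c < centroids.length; c++)
                if (counts[c] > 0)
                    for (int f = 0; f < sums[c].length; f++)
                        centroids[c][f] = sums[c][f] / counts[c];
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Feature vectors: {hairFallScore, dryHairScore}, both in [0, 1].
        double[][] consumers = {
            {0.9, 0.1}, {0.8, 0.2},  // mostly hair-fall concerns
            {0.1, 0.9}, {0.2, 0.8}   // mostly dry-hair concerns
        };
        double[][] centroids = {{1.0, 0.0}, {0.0, 1.0}};
        System.out.println(Arrays.toString(kMeans(consumers, centroids, 5)));
        // Two segments emerge: [0, 0, 1, 1]
    }
}
```

Each resulting cluster is a candidate market segment; the company could then design a shampoo variant per segment.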
Machine learning libraries can be categorized using different criteria, which are explained in the sections that follow.
Free and open source libraries are cost-effective solutions, and most of them provide a framework that allows you to implement new algorithms on your own. However, the support for these libraries is not as good as that available for proprietary libraries, although some open source libraries have very active mailing lists that address this issue.
Apache Mahout, OpenCV, MLlib, and Mallet are some open source libraries.
MATLAB is a commercial numerical environment that contains a machine learning library.
Machine learning algorithms are resource-intensive (CPU, memory, and storage) operations. Also, most of the time, they are applied on large volumes of datasets. So, decentralization (for example, data and algorithms), distribution, and replication techniques are used to scale out a system:
Apache Mahout (data distributed over clusters and parallel algorithms)
Spark MLlib (distributed memory-based Spark architecture)
MLPACK (low memory or CPU requirements due to the use of C++)
GraphLab (multicore parallelism)
Most of the machine learning libraries are implemented using languages such as Java, C#, C++, Python, and Scala.
Machine learning libraries such as R and Weka have many machine learning algorithms implemented, but they are not scalable. When it comes to scalable machine learning libraries, Apache Mahout has better algorithm support than Spark MLlib at the moment, as Spark MLlib is relatively young.
Stream processing mechanisms (for example, Jubatus and Samoa) use incremental learning to update the model as soon as data is received.
In batch processing, data is collected over a period of time and then processed together. In the context of machine learning, the model is updated after collecting data for a period of time. The batch processing mechanism (for example, Apache Mahout) is mostly suitable for processing large volumes of data.
LIBSVM implements support vector machines and it is specialized for that purpose.
A comparison of some of the popular machine learning libraries is given in the following table.

Table 1: Comparison between popular machine learning libraries

Machine learning library | Open source or commercial | Scalable? | Language used | Algorithm support
---|---|---|---|---
MATLAB | Commercial | No | Mostly C | High
R packages | Open source | No | R | High
Weka | Open source | No | Java | High
Sci-Kit Learn | Open source | No | Python |
Apache Mahout | Open source | Yes | Java | Medium
Spark MLlib | Open source | Yes | Scala | Low
Samoa | Open source | Yes | Java |
In this section, we will have a quick look at Apache Mahout.
Do you know how Mahout got its name?

As you can see in the logo, a mahout is a person who drives an elephant, and Hadoop's logo is an elephant. So, the name hints that Mahout's goal is to drive Hadoop in the right manner.
The following are the features of Mahout:
It is a project of the Apache Software Foundation
It is a scalable machine learning library
The MapReduce implementation scales linearly with the data
Fast sequential algorithms (the runtime does not depend on the size of the dataset)
It mainly contains clustering, classification, and recommendation (collaborative filtering) algorithms
Machine learning algorithms can be executed in sequential (in-memory) mode or in distributed mode (with MapReduce enabled)
Most of the algorithms are implemented using the MapReduce paradigm
It runs on top of the Hadoop framework for scaling
Data is stored in HDFS (data storage) or in memory
It is a Java library (no user interface!)
The latest released version is 0.9, and 1.0 is coming soon
It is not a domain-specific but a general purpose library
Note
For those of you who are curious: what are the problems that Mahout is trying to solve? They are as follows:
The amount of available data is growing drastically.
The computer hardware market keeps delivering better-performing machines, and machine learning algorithms are computationally expensive. However, there was no framework capable of harnessing the power of multicore computers to gain better performance.
The need for a parallel programming framework to speed up machine learning algorithms.
Mahout provides a general parallelization approach for machine learning algorithms (the parallelization method is not algorithm-specific).
No specialized optimizations are required to improve the performance of each algorithm; you just need to add some more cores.
Linear speed up with number of cores.
Each algorithm, such as Naïve Bayes, K-Means, and Expectation-maximization, is expressed in the summation form. (I will explain this in detail in future chapters.)
For more information, please read Map-Reduce for Machine Learning on Multicore, which can be found at http://www.cs.stanford.edu/people/ang/papers/nips06-mapreducemulticore.pdf.
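The summation form mentioned above can be illustrated with a small plain-Java sketch (the data values and split boundaries are invented for illustration). A statistic such as the mean decomposes into partial (sum, count) pairs that each mapper can compute independently over its data split, and a reducer merges the partial results into the final answer.

```java
import java.util.Arrays;
import java.util.List;

public class SummationForm {
    // Mapper: compute a partial sum and count over one data split.
    static double[] mapPartial(double[] split) {
        double sum = 0;
        for (double v : split) sum += v;
        return new double[]{sum, split.length};
    }

    // Reducer: merge the partial results into a global mean.
    static double reduceMean(List<double[]> partials) {
        double sum = 0, count = 0;
        for (double[] p : partials) { sum += p[0]; count += p[1]; }
        return sum / count;
    }

    public static void main(String[] args) {
        double[] split1 = {1.0, 2.0, 3.0};  // handled by one mapper
        double[] split2 = {4.0, 5.0};       // handled by another mapper
        double mean = reduceMean(Arrays.asList(mapPartial(split1), mapPartial(split2)));
        System.out.println(mean);  // 3.0
    }
}
```

Any algorithm whose update can be decomposed this way (partial results computed per split, then merged) parallelizes naturally over MapReduce; this is the core idea of the paper cited above.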
Download the latest release of Mahout from https://mahout.apache.org/general/downloads.html.
If you are referencing Mahout as a Maven project, add the following dependency in the pom.xml file:
<dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-core</artifactId>
    <version>${mahout.version}</version>
</dependency>
If required, add the following Maven dependencies as well:
<dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-math</artifactId>
    <version>${mahout.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-integration</artifactId>
    <version>${mahout.version}</version>
</dependency>
Tip
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
More details on setting up a Maven project can be found at http://maven.apache.org/.
Follow the instructions given at https://mahout.apache.org/developers/buildingmahout.html to build Mahout from the source.
The Mahout command-line launcher is located at bin/mahout.
Let's take a look at the various components of Mahout.
The following figure represents the high-level design of a Mahout implementation. Machine learning applications access the API, which provides support for implementing different machine learning techniques, such as clustering, classification, and recommendations.
Also, if the application requires preprocessing (for example, stop word removal and stemming) for text input, it can be achieved with Apache Lucene. Apache Hadoop provides data processing and storage to enable scalable processing.
Also, there will be performance optimizations using Java Collections and the Mahout-Math library. The Mahout-integration library contains utilities such as displaying the data and results.

MapReduce is a programming paradigm to enable parallel processing. When it is applied to machine learning, we assign one MapReduce engine to one algorithm (for each MapReduce engine, one master is assigned).
Input is provided as Hadoop sequence files, which consist of binary key-value pairs. The master node manages the mappers and reducers. Once the input is represented as sequence files and sent to the master, it splits data and assigns the data to different mappers, which are other nodes. Then, it collects the intermediate outcome from mappers and sends them to related reducers for further processing. Lastly, the final outcome is generated.
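The map-shuffle-reduce flow described above can be simulated in a few lines of plain Java, using the classic word-count example. This is only an in-memory sketch: a real Hadoop job would read binary key-value pairs from sequence files on HDFS, and the mappers and reducers would run on separate nodes coordinated by the master.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MapReduceSketch {
    // Map phase: each "mapper" emits (word, 1) pairs for its input split.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.split("\\s+"))
            pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
        return pairs;
    }

    // Shuffle: group the intermediate pairs by key, as the framework
    // does before routing each key to its reducer.
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs)
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        return grouped;
    }

    // Reduce phase: sum the grouped values for each key.
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        pairs.addAll(map("to be or not"));  // split handled by mapper 1
        pairs.addAll(map("to be"));         // split handled by mapper 2
        System.out.println(reduce(shuffle(pairs)));  // {be=2, not=1, or=1, to=2}
    }
}
```

The same three-stage pattern (independent maps, grouping by key, merging reduces) underlies Mahout's MapReduce-based algorithm implementations.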
Let's take a look at the journey from MapReduce to Spark.
Even though MapReduce provides a suitable programming model for batch data processing, it does not perform well with real-time data processing. When it comes to iterative machine learning algorithms, it is necessary to carry information across iterations. Moreover, an intermediate outcome needs to be persisted during each iteration. Therefore, it is necessary to store and retrieve temporary data from the Hadoop Distributed File System (HDFS) very frequently, which incurs significant performance degradation.
Machine learning algorithms that can be written in a certain form of summation (algorithms that fit in the statistical query model) can be implemented in the MapReduce programming model. However, some of the machine learning algorithms are hard to implement by adhering to the MapReduce programming paradigm. MapReduce cannot be applied if there are any computational dependencies between the data.
Therefore, this constrained programming model is a barrier for Apache Mahout as it can limit the number of supported distributed algorithms.
Apache Spark is a large-scale, scalable data processing framework with a distributed memory-based architecture; it claims to be 100 times faster than Hadoop MapReduce in memory and 10 times faster on disk. H2O is an open source, parallel processing engine for machine learning by 0xdata.
As a solution to the problems of the Hadoop MapReduce approach mentioned previously, Apache Mahout is working on integrating Apache Spark and H2O as backends (via the Mahout-Math library).
With Spark, there can be better support for iterative machine learning algorithms using the in-memory approach. In-memory applications are self-optimizing. An algebraic expression optimizer is used for distributed linear algebra. One significant example is the Distributed Row Matrix (DRM), which is a huge matrix partitioned by rows.
Further, programming with Spark is easier than programming with MapReduce because Spark decouples the machine learning logic from the distributed backend. Accordingly, the distribution is hidden from users of the machine learning API, which can then be used much like R or MATLAB.
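The idea behind the Distributed Row Matrix can be illustrated with a plain-Java sketch (this is not Mahout's DRM API; the matrix values are invented for illustration). Because the matrix is partitioned by rows, a product such as A'A can be computed as the sum of per-row outer products, each contributed independently by whichever node holds that row, with no communication between rows.

```java
public class DrmSketch {
    // Contribution of one row: the outer product row' * row.
    static double[][] outer(double[] row) {
        int n = row.length;
        double[][] out = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                out[i][j] = row[i] * row[j];
        return out;
    }

    // Sum the per-row contributions to obtain A'A. In a distributed
    // setting, each partition would compute its partial sum locally
    // and only the small n-by-n results would be merged.
    static double[][] ata(double[][] rows) {
        int n = rows[0].length;
        double[][] result = new double[n][n];
        for (double[] row : rows) {
            double[][] o = outer(row);
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    result[i][j] += o[i][j];
        }
        return result;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] ata = ata(a);
        System.out.println(ata[0][0] + " " + ata[0][1]);  // 10.0 14.0
        System.out.println(ata[1][0] + " " + ata[1][1]);  // 14.0 20.0
    }
}
```

Row-wise decompositions like this are what make distributed linear algebra over huge, row-partitioned matrices practical.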
You should consider the following aspects before making a decision to use Apache Mahout as your machine learning library:
Are you looking for a machine learning algorithm for industry use with performance as a critical evaluation factor?
Are you looking for a free and open source solution?
Is your dataset large and growing at an alarming rate? (MATLAB, Weka, Octave, and R can be used to process KBs and MBs of data, but if your data volume is growing up to the GB level, then it is better to use Mahout.)
Do you want batch data processing as opposed to real-time data processing?
Are you looking for a mature library that has been in the market for a few years?
If all or most of the preceding considerations are met, then Mahout is the right solution for you.
Machine learning is about discovering hidden insights or patterns from the available data. Machine learning algorithms can be divided in two categories: supervised learning and unsupervised learning. There are many real-world applications of machine learning in diverse domains, such as information retrieval, business, and health care.
Apache Mahout is a scalable machine learning library that runs on top of the Hadoop framework. In v0.10, Apache Mahout is shifting toward Apache Spark and H2O to address the performance and usability issues that arise from the MapReduce programming paradigm.
In the upcoming chapters, we will dive deep into different machine learning techniques.