The key strength of Apache Mahout lies in its ability to scale. This is achieved by implementing machine learning algorithms according to the MapReduce programming paradigm.
If your dataset is small enough to fit into memory, you can run Mahout in local mode. If it grows to the point where it no longer fits, you should consider moving the computation to a Hadoop cluster. A complete guide to Hadoop installation is given in Chapter 5, Apache Mahout in Production.
In this section, we will explain how the K-Means algorithm is implemented in Apache Mahout in a scalable manner.
However, note that a thorough understanding of MapReduce is not required in order to use the algorithms in your applications; read on only if you are interested in the internals.
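To make the idea concrete before diving into Mahout's implementation, the following is a minimal, self-contained sketch of how one K-Means iteration decomposes into a map step and a reduce step. It does not use Mahout or Hadoop classes; the class and method names are hypothetical, chosen only to mirror the structure of a MapReduce job (mappers assign each point to its nearest centroid, reducers average the points assigned to each centroid).

```java
import java.util.*;

// Hypothetical, non-Mahout sketch: one K-Means iteration split into
// a "map" phase and a "reduce" phase, mirroring how a MapReduce job
// would partition the work across a cluster.
public class KMeansIterationSketch {

    // Map phase: group every point under the index of its nearest centroid.
    // In a real MapReduce job, each mapper would emit (centroidIndex, point).
    static Map<Integer, List<double[]>> mapPhase(double[][] points,
                                                 double[][] centroids) {
        Map<Integer, List<double[]>> grouped = new HashMap<>();
        for (double[] p : points) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double dx = p[0] - centroids[c][0];
                double dy = p[1] - centroids[c][1];
                double d = dx * dx + dy * dy; // squared Euclidean distance
                if (d < bestDist) { bestDist = d; best = c; }
            }
            grouped.computeIfAbsent(best, k -> new ArrayList<>()).add(p);
        }
        return grouped;
    }

    // Reduce phase: average the points assigned to each centroid to get
    // the updated centroid. A centroid with no points keeps its position.
    static double[][] reducePhase(Map<Integer, List<double[]>> grouped,
                                  double[][] oldCentroids) {
        double[][] updated = new double[oldCentroids.length][];
        for (int c = 0; c < oldCentroids.length; c++) {
            List<double[]> members =
                grouped.getOrDefault(c, Collections.emptyList());
            if (members.isEmpty()) {
                updated[c] = oldCentroids[c].clone();
                continue;
            }
            double sx = 0, sy = 0;
            for (double[] p : members) { sx += p[0]; sy += p[1]; }
            updated[c] = new double[] { sx / members.size(),
                                        sy / members.size() };
        }
        return updated;
    }

    public static void main(String[] args) {
        // Two-feature points (height, weight), echoing the running example.
        double[][] points = { {160, 55}, {162, 58}, {180, 85}, {183, 90} };
        double[][] centroids = { {160, 55}, {185, 90} };
        double[][] next = reducePhase(mapPhase(points, centroids), centroids);
        for (double[] c : next) {
            System.out.println(Arrays.toString(c));
        }
    }
}
```

Because each mapper only needs the current centroids (a small broadcast) plus its own slice of the points, the assignment step parallelizes cleanly across the cluster; a full run simply repeats this map/reduce pair until the centroids stop moving.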
Let's continue with the earlier example of people's sizes, using height and weight as features. The data distribution...