You're reading from Apache Spark for Data Science Cookbook
Data exploration and preparation techniques are typically applied before modeling the data, and they also help in developing complex statistical models. These techniques are equally important for eliminating or sharpening a potential hypothesis that can be addressed with the data. The time spent on preprocessing and data exploration determines the quality of the input, which in turn decides the quality of the output. Once the business hypothesis is ready, a careful series of data exploration and preparation steps determines the accuracy of the model and the reliability of its results.
In this chapter, we are going to look at the following common data analysis techniques: univariate analysis, bivariate analysis, missing values treatment, identifying outliers, and techniques for variable transformation.
Once the data is available, we have to spend considerable time and effort on data exploration, cleaning, and preparation, because the quality of the input data decides the quality of the output. Hence, once we identify the business questions, the first step of data exploration/analysis is univariate analysis, which explores the variables one by one. The methods of univariate analysis depend on whether the variable is categorical or continuous.
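To make the distinction concrete, here is a minimal plain-Scala sketch of univariate analysis, independent of Spark: summary statistics for a continuous variable, and a frequency table for a categorical one. The object and method names are illustrative, not part of any Spark API.

```scala
// A minimal plain-Scala sketch of univariate analysis, outside Spark.
object Univariate {
  // Continuous variable: mean and (population) standard deviation.
  def summarize(xs: Seq[Double]): (Double, Double) = {
    val mean = xs.sum / xs.size
    val variance = xs.map(x => math.pow(x - mean, 2)).sum / xs.size
    (mean, math.sqrt(variance))
  }

  // Categorical variable: frequency counts of each level.
  def frequencies[A](xs: Seq[A]): Map[A, Int] =
    xs.groupBy(identity).map { case (k, v) => (k, v.size) }
}
```

On a Spark cluster the same statistics would typically be computed with MLlib's summary-statistics utilities over an RDD, but the underlying computation is the one shown here.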
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that it downloads the related libraries and the API can be used. Optionally install Hadoop, and install Scala and Java.
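The build.sbt entry might look like the following; the Scala and Spark version numbers shown are assumptions, so match them to your installed versions.

```scala
// build.sbt -- version numbers are assumptions; match your installation.
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.0",
  "org.apache.spark" %% "spark-mllib" % "2.0.0"
)
```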
Bivariate analysis examines the relationship between two variables. Here, we look for association or dissociation between the variables at a predefined significance level. This analysis can be performed for any combination of categorical and continuous variables: both variables categorical, one categorical and one continuous, or both continuous.
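For the continuous-continuous case, the standard measure of association is the Pearson correlation coefficient. Below is a minimal plain-Scala sketch of it; in Spark the same statistic is available through MLlib, but the formula is simply covariance divided by the product of the standard deviations.

```scala
// A minimal plain-Scala sketch of bivariate analysis for two continuous
// variables: the Pearson correlation coefficient,
//   r = cov(x, y) / (stddev(x) * stddev(y))
object Bivariate {
  def pearson(xs: Seq[Double], ys: Seq[Double]): Double = {
    require(xs.size == ys.size && xs.nonEmpty)
    val n  = xs.size
    val mx = xs.sum / n
    val my = ys.sum / n
    val cov = xs.zip(ys).map { case (x, y) => (x - mx) * (y - my) }.sum
    val sx  = math.sqrt(xs.map(x => math.pow(x - mx, 2)).sum)
    val sy  = math.sqrt(ys.map(y => math.pow(y - my, 2)).sum)
    cov / (sx * sy)
  }
}
```

A value near +1 indicates a strong positive association, near -1 a strong negative one, and near 0 no linear association.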
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that it downloads the related libraries and the API can be used. Optionally install Hadoop, and install Scala and Java.
Missing data in the training dataset can reduce the fit of a model or lead to a biased model, because we have not analyzed the behavior of, and relationships with, the other variables correctly. This can also lead to wrong predictions or classifications. Missing values often arise when extracting data from multiple sources; using a hashing procedure to verify the extraction helps ensure the data was extracted correctly. Errors that occur at data collection time are harder to correct, since values may be missing at random, and the missingness may also depend on unobserved predictors.
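One common treatment is imputation. The sketch below, in plain Scala, replaces missing entries (modeled here as Option[Double]) with the mean of the observed values; mean imputation is only one strategy among several, such as median imputation or dropping the affected rows.

```scala
// A minimal sketch of mean imputation for missing values.
// Missing entries are modeled as None; observed entries as Some(x).
object MissingValues {
  def imputeWithMean(xs: Seq[Option[Double]]): Seq[Double] = {
    val observed = xs.flatten                  // keep only observed values
    val mean     = observed.sum / observed.size
    xs.map(_.getOrElse(mean))                  // fill gaps with the mean
  }
}
```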
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package...
Outliers are infrequent observations, that is, data points that do not appear to follow the characteristic distribution of the rest of the data. They lie far away from, and diverge from, the overall pattern of the data. They may occur due to measurement errors or other anomalies, and they can distort estimates. Outliers can be univariate or multivariate. Univariate outliers can be found by looking at the distribution of a single variable, whereas multivariate outliers exist in an n-dimensional space and are found by examining the distributions in multiple dimensions.
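A common rule of thumb for univariate outliers is the interquartile-range (IQR) rule: flag points below Q1 - 1.5*IQR or above Q3 + 1.5*IQR. The sketch below uses a simple nearest-rank quartile approximation; it is illustrative, not a Spark API.

```scala
// A minimal sketch of univariate outlier detection via the IQR rule.
object Outliers {
  // Simple nearest-rank quartile on pre-sorted data (an approximation).
  private def quartile(sorted: Seq[Double], q: Double): Double =
    sorted(math.min(sorted.size - 1, (q * sorted.size).toInt))

  def iqrOutliers(xs: Seq[Double]): Seq[Double] = {
    val sorted = xs.sorted
    val q1  = quartile(sorted, 0.25)
    val q3  = quartile(sorted, 0.75)
    val iqr = q3 - q1
    xs.filter(x => x < q1 - 1.5 * iqr || x > q3 + 1.5 * iqr)
  }
}
```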
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that it downloads the related libraries and the API can be used...
In the previous recipes, we saw various steps of performing data analysis. In this recipe, let's download a dataset commonly used for movie recommendations, known as the MovieLens dataset. The dataset is well suited to recommender systems, and potentially to other machine learning tasks as well.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that it downloads the related libraries and the API can be used. Optionally install Hadoop, and install Scala and Java.
Let's see how to analyse the MovieLens dataset.
Let's download the MovieLens dataset from the following location: https://drive.google.com/file/d/0Bxr27gVaXO5sRUZnMjBQR0lqNDA/view?usp=sharing...
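Once the file is downloaded, each ratings record can be parsed into a small case class before analysis. The sketch below assumes the "::"-separated userId::movieId::rating::timestamp layout used in some MovieLens releases; check the README of the copy you downloaded and adjust the separator if needed.

```scala
// A sketch of parsing one MovieLens ratings record. The "::" separator
// and field order are assumptions based on some MovieLens releases.
case class Rating(userId: Int, movieId: Int, rating: Double, timestamp: Long)

object MovieLensParser {
  def parse(line: String): Rating = {
    val fields = line.split("::")
    Rating(fields(0).toInt, fields(1).toInt, fields(2).toDouble, fields(3).toLong)
  }
}
```

In Spark, this parse function would typically be applied with a map over the RDD produced by textFile.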
In the previous recipes, we saw various steps of performing data analysis. In this recipe, let's download the Uber dataset and try to solve some of the analytical questions that arise on such data.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that it downloads the related libraries and the API can be used. Optionally install Hadoop, and install Scala and Java.
In this section, let's see how to analyse the Uber dataset.
Let's download the Uber dataset from the following location: https://github.com/ChitturiPadma/datasets/blob/master/uber.csv. The dataset contains four columns: dispatching_base_number, date, active_vehicles, and trips. Let's load the data and see what the...
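A row of this dataset can be parsed into a case class mirroring the four columns above. This is a plain-Scala sketch; the sample row in the test is illustrative, the date is kept as a raw string, and in Spark the function would be mapped over the lines of the CSV after skipping the header.

```scala
// A sketch of parsing one row of the Uber dataset, with the four columns
// dispatching_base_number, date, active_vehicles, and trips.
case class UberRecord(dispatchingBaseNumber: String, date: String,
                      activeVehicles: Int, trips: Int)

object UberParser {
  def parse(line: String): UberRecord = {
    val f = line.split(",")
    UberRecord(f(0), f(1), f(2).toInt, f(3).toInt)
  }
}
```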