Mastering Predictive Analytics with Python
Joseph Babcock | Packt Publishing | 1st Edition, August 2016 | ISBN-13: 9781785882715

Chapter 2. Exploratory Data Analysis and Visualization in Python

Analytic pipelines are not built from raw data in a single step. Rather, development is an iterative process that involves understanding the data in greater detail and systematically refining both model and inputs to solve a problem. A key part of this cycle is interactive data analysis and visualization, which can provide initial ideas for features in our predictive modeling or clues as to why an application is not behaving as expected.

Spreadsheet programs are one kind of interactive tool for this sort of exploration: they allow the user to import tabular information, pivot and summarize data, and generate charts. However, what if the data in question is too large for such a spreadsheet application? What if the data is not tabular, or is not displayed effectively as a line or bar chart? In the former case, we could simply obtain a more powerful computer, but the latter is more problematic. Simply put, many traditional data...

Exploring categorical and numerical data in IPython


We will start our explorations in IPython by loading a text file into a DataFrame, calculating some summary statistics, and visualizing distributions. For this exercise we'll use a set of movie ratings and metadata from the Internet Movie Database (http://www.imdb.com/) to investigate what factors might correlate with high ratings for films on this website. Such information might be helpful, for example, in developing a recommendation system based on this kind of user feedback.
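
As a minimal sketch of this workflow (the filename movie_metadata.csv and the column name imdb_score are hypothetical placeholders, not necessarily the book's actual dataset), the first steps might look like:

>>> import pandas as pd
>>> imdb = pd.read_csv('movie_metadata.csv')   # hypothetical filename
>>> imdb.head()                                # inspect the first few rows
>>> imdb['imdb_score'].describe()              # summary statistics for the rating column
>>> imdb['imdb_score'].hist(bins=20)           # visualize the distribution of ratings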

Installing IPython notebook

To follow along with the examples, you should have a Windows, Linux, or Mac OS X operating system and access to the Internet. There are a number of options available for installing IPython; since each of these resources includes its own installation guide, we provide a summary of the available sources and direct the reader to the relevant documentation for more in-depth instructions.

  • For most users, a pre-bundled Python environment...

Time series analysis


While the IMDb data contained movie release years, fundamentally the objects of interest were the individual films and their ratings, not a linked series of events over time that might be correlated with one another. This latter type of data, a time series, raises a different set of questions. Are data points correlated with one another? If so, over what timeframe are they correlated? How noisy is the signal? Pandas DataFrames have many built-in tools for time series analysis, which we will examine in the next section.
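
For example, Pandas can answer the first of these questions directly with Series.autocorr, which computes the correlation of a series with a lagged copy of itself (the prices series below is a hypothetical yearly Series used for illustration):

>>> # assuming prices is a Pandas Series indexed by year (hypothetical)
>>> prices.autocorr(lag=1)   # correlation with the series shifted by one year
>>> prices.autocorr(lag=5)   # correlation at a five-year lag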

Cleaning and converting

In our previous example, we were able to use the data more or less in the form in which it was supplied. However, there is no guarantee that this will always be the case. In our second example, we'll look at a time series of oil prices in the US by year over the last century (Makridakis, Spyros, Steven C. Wheelwright, and Rob J. Hyndman. Forecasting Methods and Applications. John Wiley & Sons, Inc., New York, 1998). We'll start...
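
A minimal sketch of the kind of cleaning and conversion such a file might require (the filename oil_prices.csv and the column names Year and Price are assumptions for illustration, not the book's actual data):

>>> oil = pd.read_csv('oil_prices.csv')                          # hypothetical filename
>>> oil['Price'] = pd.to_numeric(oil['Price'], errors='coerce')  # coerce malformed entries to NaN
>>> oil['Year'] = pd.to_datetime(oil['Year'], format='%Y')       # parse years as timestamps
>>> oil = oil.set_index('Year').sort_index()                     # index by time for resampling and plotting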

Working with geospatial data


For our last case study, let us explore the analysis of geospatial data using an extension to the Pandas library, GeoPandas. You will need to have GeoPandas installed in your IPython environment to follow this example. If it is not already installed, you can add it using easy_install or pip.
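
For example, from within the notebook you can shell out to pip (the exclamation mark is IPython's shell-escape syntax; this assumes pip is available on your path):

>>> !pip install geopandas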

Loading geospatial data

In addition to our other dependencies, we will import the GeoPandas library using the command:

>>> import geopandas as geo

We load the dataset for this example, the coordinates of countries in Africa ("Africa." Maplibrary.org. Web. 02 May 2016. http://www.mapmakerdata.co.uk.s3-website-eu-west-1.amazonaws.com/library/stacks/Africa/), which is contained in a shape (.shp) file, into a GeoDataFrame, an extension of the Pandas DataFrame, using:

>>> africa_map = geo.GeoDataFrame.from_file('Africa_SHP/Africa.shp')

Examining the first few lines using head():
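>>> africa_map.head()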

We can see that the data consists of identifier columns, along with a geometry...

Introduction to PySpark


So far we've mainly focused on datasets that can fit on a single machine. For larger datasets, we may need to access them through distributed file systems such as Amazon S3 or HDFS. For this purpose, we can utilize the open-source distributed computing framework Apache Spark through its Python API, PySpark (http://spark.apache.org/docs/latest/api/python/). Spark uses the abstraction of Resilient Distributed Datasets (RDDs), parallel collections of objects that allow us to programmatically access a dataset as if it fit on a single machine. In later chapters we will demonstrate how to build predictive models in PySpark, but for this introduction we focus on its data manipulation functions.

Creating the SparkContext

The first step in any Spark application is the generation of the SparkContext. The SparkContext holds any job-specific configuration (such as memory settings or the number of worker tasks) and allows us to connect to a Spark cluster...
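
A minimal sketch of this step (the application name and the local[2] master setting, meaning two local worker threads, are illustrative choices rather than the book's configuration):

>>> from pyspark import SparkConf, SparkContext
>>> conf = SparkConf().setAppName('eda_example').setMaster('local[2]')  # hypothetical app name
>>> sc = SparkContext(conf=conf)
>>> sc.textFile('movie_metadata.csv').count()   # quick check: count lines of a file (hypothetical path)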

Summary


We have now examined many of the tasks needed to start building analytical applications. Using the IPython notebook, we have covered how to load data from a file into a Pandas DataFrame, rename columns in the dataset, filter unwanted rows, convert column data types, and create new columns. In addition, we have joined data from different sources and performed some basic statistical analyses using aggregations and pivots. We have visualized the data using histograms, scatter plots, and density plots, as well as autocorrelation and log plots for time series. We also visualized geospatial data, using coordinate files to overlay data on maps. Finally, we processed the movies dataset using PySpark, creating both an RDD and a PySpark DataFrame, and performed some basic operations on these datatypes.

We will build on these tools in future sections, manipulating the raw input to develop features for building predictive analytics pipelines. We will later utilize similar tools to visualize...


About the author

Joseph Babcock has spent more than a decade working with big data and AI in the e-commerce, digital streaming, and quantitative finance domains. Over his career he has worked on recommender systems, petabyte-scale cloud data pipelines, A/B testing, causal inference, and time series analysis. He completed his PhD at Johns Hopkins University, applying machine learning to drug discovery and genomics.