You're reading from Mastering Predictive Analytics with Python

Published in Aug 2016 (1st Edition)
ISBN-13: 9781785882715
Author: Joseph Babcock

Joseph Babcock has spent more than a decade working with big data and AI in the e-commerce, digital streaming, and quantitative finance domains. Through his career he has worked on recommender systems, petabyte scale cloud data pipelines, A/B testing, causal inference, and time series analysis. He completed his PhD studies at Johns Hopkins University, applying machine learning to the field of drug discovery and genomics.

Chapter 6. Words and Pixels – Working with Unstructured Data

Most of the data we have looked at thus far is composed of rows and columns with numerical or categorical values. This sort of information fits both traditional spreadsheet software and the interactive Python notebooks used in the previous exercises. Increasingly, however, data arrives not only in this tabular form, usually called structured data, but also in more complex formats such as images and free text. These other data types, known as unstructured data, are more challenging than tabular information to parse and transform into features that can be used in machine learning algorithms.

What makes unstructured data challenging to use? Largely that images and text are extremely high dimensional, consisting of many more columns, or features, than we have seen previously: a document may contain thousands of distinct words, and an image thousands of individual pixels. Each of these components...

Working with textual data


In the following example, we will consider the problem of classifying text messages sent between cell phone users. Some of these messages are spam advertisements, and the objective is to separate these from normal communications (Almeida, Tiago A., José María G. Hidalgo, and Akebo Yamakami. Contributions to the study of SMS spam filtering: new collection and results. Proceedings of the 11th ACM Symposium on Document Engineering. ACM, 2011). By looking for patterns of words typically found in spam advertisements, we could potentially derive a smart filter that would automatically remove these messages from a user's inbox. However, while in previous chapters we were concerned with fitting a predictive model for this kind of problem, here we shift focus to cleaning up the data, removing noise, and extracting features. Once these tasks are done, the resulting simplified or lower-dimensional features can be input into many of the algorithms we have already studied...
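To give a flavor of the feature extraction discussed here, the sketch below computes tf-idf weights by hand for a few invented messages (a hand-rolled version for illustration, not the chapter's actual pipeline or dataset):

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphanumeric tokens only
    return re.findall(r"[a-z0-9]+", text.lower())

def tf_idf(docs):
    # Term frequency per document, weighted by inverse document frequency
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    features = []
    for doc in tokenized:
        counts = Counter(doc)
        features.append({
            t: (c / len(doc)) * math.log(n / df[t])
            for t, c in counts.items()
        })
    return features

# Invented example messages: two spam-like, one normal
messages = [
    "WINNER!! Claim your free prize now",
    "are we still meeting for lunch today",
    "free entry claim your prize, text WIN now",
]
feats = tf_idf(messages)
```

Words that occur in fewer messages receive a larger idf weight, so terms distinctive of one message (such as "winner") outweigh terms shared across the corpus (such as "free"), which is the property a spam filter can exploit.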

Principal component analysis


One of the most commonly used methods of dimensionality reduction is Principal Component Analysis (PCA). Conceptually, PCA computes the axes along which the variation in the data is greatest. You may recall that in Chapter 3, Finding Patterns in the Noise – Clustering and Unsupervised Learning, we calculated the eigenvalues of the adjacency matrix of a dataset to perform spectral clustering. In PCA, we also want to find eigenvalues, but here, instead of an adjacency matrix, we will use the covariance matrix of the data, which measures the variation within and between columns. The covariance for columns xi and xj in the data matrix X, with n rows and column means μi and μj, is given by:

cov(xi, xj) = (1/n) Σk (xki − μi)(xkj − μj)

This is the average product of the offsets from the mean column values. We saw this quantity before when we computed the correlation coefficient in Chapter 3, Finding Patterns in the Noise – Clustering and Unsupervised Learning, as it is the numerator of the Pearson coefficient. Let us use a simple...
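This definition translates directly into code. The sketch below builds the covariance matrix of mean-centered data and takes its eigenvectors as the principal axes (a minimal NumPy illustration on invented toy data, not the chapter's worked example):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 points stretched much more along the first axis
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

# Covariance of the mean-centered columns, per the formula above
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(X)

# Eigenvectors of the covariance matrix are the principal axes;
# sort by descending eigenvalue (variance explained)
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Project onto the top component to reduce from 2 to 1 dimension
X_reduced = Xc @ vecs[:, :1]
```

The first eigenvalue dominates because the data was stretched along one direction, so a single component captures most of the variance.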

Images


Like textual data, images are potentially noisy and complex. Unlike language, however, which has a structure of words, sentences, and paragraphs, images have no predefined rules that we might use to simplify the raw data. Thus, much of image analysis involves extracting patterns from the input pixels and turning them into features that are ideally interpretable to a human analyst.

Cleaning image data

One of the common operations we will perform on images is to enhance contrast or change their color scale. For example, let us start with an example image of a coffee cup from the skimage package, which you can import and visualize using the following commands:

>>> import matplotlib.pyplot as plt
>>> from skimage import data, io, segmentation
>>> image = data.coffee()
>>> io.imshow(image)
>>> plt.axis('off');

This produces the following image:

In Python, this image is represented as a three-dimensional matrix with the dimensions corresponding to height, width, and color channels...
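In skimage, an image is exactly such a NumPy array. As a minimal sketch of a contrast adjustment on this representation, the snippet below applies a simple linear stretch (using a synthetic random array in place of the coffee image so it runs without skimage; the values and dimensions are invented):

```python
import numpy as np

# A stand-in for an RGB image: height x width x 3 channels of 8-bit values,
# deliberately confined to a narrow (low-contrast) intensity range
rng = np.random.default_rng(0)
image = rng.integers(60, 180, size=(400, 600, 3), dtype=np.uint8)

# Linear contrast stretch: map the observed min/max onto the full 0-255 range
lo, hi = image.min(), image.max()
stretched = ((image.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)
```

After the stretch, the darkest pixel maps to 0 and the brightest to 255, spreading the narrow input range across the full intensity scale.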

Case Study: Training a Recommender System in PySpark


To close this chapter, let us look at an example of how we might generate a large-scale recommendation system using dimensionality reduction. The dataset we will work with comes from a set of user transactions from an online store (Chen, Daqing, Sai Laing Sain, and Kun Guo. Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining. Journal of Database Marketing & Customer Strategy Management 19.3 (2012): 197-208). In this model, we will input a matrix in which the rows are users and the columns represent items in the catalog of an e-commerce site. Items purchased by a user are indicated by a 1. Our goal is to factorize this matrix, using k components, into 1 x k user factors (row components) and k x 1 item factors (column components). Then, presented with a new user and their purchase history, we can predict what items they are likely to buy in the future, and thus what we might...
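The factorize-and-score idea can be sketched on a toy matrix before scaling it up. The snippet below uses a truncated SVD in plain NumPy as a stand-in for the PySpark factorization used in the chapter (the purchase matrix is invented; only the shape of the computation matches the case study):

```python
import numpy as np

# Toy purchase matrix: rows are users, columns are catalog items (invented data);
# users 0-1 buy the first items, users 2-3 the last, overlapping at item 2
R = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

k = 2  # number of latent components
U, s, Vt = np.linalg.svd(R, full_matrices=False)
user_factors = U[:, :k] * s[:k]   # one length-k vector per user (row components)
item_factors = Vt[:k, :]          # one length-k vector per item (column components)

# Reconstructed scores: high values suggest items a user is likely to buy
scores = user_factors @ item_factors

# For user 0, rank the items they have not purchased by predicted score
unseen = np.where(R[0] == 0)[0]
recommended = unseen[np.argsort(scores[0, unseen])[::-1]]
```

User 0 never bought item 2, but a similar user (user 1) did, so the low-rank reconstruction scores item 2 well above the items bought only by the other cluster; that imputed score is what drives the recommendation.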

Summary


In this chapter, we have examined complex, unstructured data. We cleaned and tokenized text and examined several ways of extracting features from documents, such as n-grams and tf-idf scores, that can be incorporated into predictive models. We also examined dimensionality reduction techniques such as the HashingVectorizer; matrix decompositions such as PCA, CUR, and NMF; and probabilistic models such as LDA. We then examined image data, including normalization and thresholding operations, and saw how dimensionality reduction techniques can find common patterns among images. Finally, we used a matrix factorization algorithm to prototype a recommender system in PySpark.

In the next chapter, we will look at image data again, but in a different context: capturing complex features from these data using sophisticated deep learning models.
