Author: Nirant Kasliwal

Nirant Kasliwal maintains an awesome list of natural language processing (NLP) resources, which GitHub's machine learning collection features as a go-to guide. Nobel Laureate Dr. Paul Romer found his programming notes on Jupyter Notebooks helpful. Nirant won the first-ever NLP Google Kaggle Kernel Award. At Soroco, he works on challenges in image segmentation and intent categorization. His state-of-the-art language modeling results are available as Hindi2vec.
Deep Learning for NLP

In the previous chapter, we used classic machine learning techniques to build our text classifiers. In this chapter, we will replace those with deep learning techniques via the use of recurrent neural networks (RNNs).

In particular, we will use a relatively simple bidirectional LSTM model. If this is new to you, keep reading; if not, please feel free to skip ahead!
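To make the idea concrete, here is a minimal sketch of a bidirectional LSTM text classifier in PyTorch. The class name, layer sizes, and hyperparameters are illustrative assumptions, not the exact model we build later in the chapter:

    import torch
    import torch.nn as nn

    class BiLSTMClassifier(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, n_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                bidirectional=True)
            # both directions are concatenated, hence 2 * hidden_dim
            self.fc = nn.Linear(2 * hidden_dim, n_classes)

        def forward(self, x):                       # x: (batch, seq_len) token ids
            emb = self.embedding(x)
            _, (h_n, _) = self.lstm(emb)             # h_n: (2, batch, hidden_dim)
            h = torch.cat([h_n[0], h_n[1]], dim=1)   # final forward + backward states
            return self.fc(h)                        # unnormalized class scores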

The dataset attribute of the batch variable should point to the trn variable, which is of the torchtext.data.TabularDataset type. This is a useful checkpoint to understand how data flow differs when training deep learning models.
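As a quick illustration of that checkpoint, here is a sketch using the torchtext API as it existed when this book was written; the file names, column layout, and field names are assumptions:

    from torchtext import data

    TEXT = data.Field(lower=True, batch_first=True)
    LABEL = data.LabelField()

    # assumes CSV files with a text column followed by a label column
    trn, val = data.TabularDataset.splits(
        path='data', train='train.csv', validation='valid.csv',
        format='csv', skip_header=True,
        fields=[('text', TEXT), ('label', LABEL)])

    TEXT.build_vocab(trn)
    LABEL.build_vocab(trn)

    train_iter = data.BucketIterator(trn, batch_size=32,
                                     sort_key=lambda ex: len(ex.text))

    batch = next(iter(train_iter))
    assert batch.dataset is trn    # each batch keeps a reference to its source dataset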

Let's begin by touching upon the overhyped terms, that is, the deep in deep learning and the neural in deep neural networks. Before we do that, let's take a moment to explain why I use PyTorch and compare it to TensorFlow and Keras, the other popular deep learning frameworks...

What is deep learning?

Deep learning is a subset of machine learning: a new take on learning from data that puts an emphasis on learning successive layers of increasingly meaningful representations. But what does the deep in deep learning mean?

"The deep in deep learning isn't a reference to any kind of deeper understanding achieved by the approach; rather, it stands for this idea of successive layers of representations."
– F. Chollet, Lead Developer of Keras

The depth of the model indicates how many layers of such representations we use. F. Chollet suggested layered representations learning and hierarchical representations learning as better names for this. Another name could have been differentiable programming.

The term differentiable programming, coined by Yann LeCun, stems from the fact that what our deep learning methods have in common is not...

Understanding deep learning

Loosely speaking, machine learning is about mapping inputs (such as images or movie reviews) to targets (such as the label cat or positive). The model does this by looking at (or training from) many pairs of inputs and targets.

Deep neural networks perform this input-to-target mapping using a long sequence of simple data transformations (layers). The length of this sequence is referred to as the depth of the network. The entire sequence from input to target is referred to as a model that learns about the data. These data transformations are learned through repeated observation of examples. Let's look at how this learning happens.
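As a small illustration of depth as stacked transformations, here is a three-layer network in PyTorch; the layer sizes are arbitrary assumptions:

    import torch
    import torch.nn as nn

    model = nn.Sequential(      # the input-to-target mapping as stacked layers
        nn.Linear(100, 64),     # transformation 1
        nn.ReLU(),
        nn.Linear(64, 32),      # transformation 2
        nn.ReLU(),
        nn.Linear(32, 2),       # transformation 3: scores for two classes
    )

    x = torch.randn(8, 100)     # a batch of 8 examples with 100 features each
    print(model(x).shape)       # torch.Size([8, 2])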

Puzzle pieces

We are looking at a particular subclass of challenges...

Putting it all together – the training loop

We now have a shared vocabulary. You have a notional understanding of what terms such as layers, model weights, loss function, and optimizer mean. But how do they work together? How do we train them on arbitrary data, so that they can recognize cat pictures or fraudulent reviews on Amazon?

Here is the rough outline of the steps that occur inside a training loop:

  • Initialize:
    • The network/model weights are assigned small random values, usually drawn from the range (-1, 1) or (0, 1).
    • The model is very far from the target, because it is simply executing a series of random transformations.
    • The loss is very high.
  • With every example that the network processes, the following occurs:
    • The weights are adjusted a little in the correct direction
    • The loss score decreases

This is the training loop, which is repeated...
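Here is a minimal sketch of that loop in PyTorch. The model, loss function, learning rate, and dummy data are all placeholder assumptions:

    import torch
    import torch.nn as nn

    model = nn.Linear(100, 2)                  # weights start as random values
    loss_fn = nn.CrossEntropyLoss()            # measures how far we are from the target
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(64, 100)                   # dummy inputs
    y = torch.randint(0, 2, (64,))             # dummy targets

    for epoch in range(10):
        optimizer.zero_grad()                  # clear gradients from the last step
        loss = loss_fn(model(x), y)            # the loss starts out high
        loss.backward()                        # compute gradients
        optimizer.step()                       # nudge weights in the correct direction
        print(epoch, loss.item())              # the loss score decreases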

Kaggle – text categorization challenge

Getting the data

Note that you will need to accept the terms and conditions of the competition and data usage to get this dataset.

For a direct download, you can get the train and test data from the data tab on the challenge website.

Alternatively, you can use the official Kaggle API (available on GitHub) to download the data via a Terminal or a Python program.
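For instance, the download can be scripted from Python as follows; the competition slug is a placeholder, and credentials in ~/.kaggle/kaggle.json are assumed:

    from kaggle.api.kaggle_api_extended import KaggleApi

    api = KaggleApi()
    api.authenticate()          # reads credentials from ~/.kaggle/kaggle.json
    api.competition_download_files('competition-name', path='data/')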

In the case of both direct download and Kaggle API, you have to split your train data into smaller train and validation...
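One common way to do that split, sketched here with pandas and scikit-learn (the file names and the 80/20 ratio are assumptions):

    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.read_csv('data/train.csv')
    trn_df, val_df = train_test_split(df, test_size=0.2, random_state=42)
    trn_df.to_csv('data/train_split.csv', index=False)
    val_df.to_csv('data/valid.csv', index=False)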

Summary

This was our first brush with deep learning for NLP. It was a very thorough introduction to torchtext and how we can leverage it with PyTorch. We also got a very broad view of deep learning as a puzzle of only a few broad pieces: the model, the optimizer, and the loss function. This is true irrespective of which framework or dataset you use.

We did skimp a bit on the model architecture explanation in the interest of keeping this short. In other sections, we will avoid using concepts that have not been explained here.

When we work with modern ensembling methods, we don't always know how a particular prediction is being made; it is a black box to us, in the same sense that all deep learning model predictions are a black box.

In the next chapter, we will look at some tools and techniques that will help us look into these boxes at least a...
