
You're reading from  Natural Language Processing with Python Quick Start Guide

Product type: Book
Published in: Nov 2018
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781789130386
Edition: 1st Edition
Author: Nirant Kasliwal

Nirant Kasliwal maintains an awesome list of natural language processing (NLP) resources, which GitHub's machine learning collection features as a go-to guide. Nobel Laureate Dr. Paul Romer found his programming notes on Jupyter Notebooks helpful. Nirant won the first ever NLP Google Kaggle Kernel Award. At Soroco, he works on challenges in image segmentation and intent categorization. His state-of-the-art language modeling results are available as Hindi2vec.

Tidying your Text

Data cleaning is one of the most important and time-consuming tasks when it comes to natural language processing (NLP):

"There's the joke that 80 percent of data science is cleaning the data and 20 percent is complaining about cleaning the data."
– Kaggle founder and CEO Anthony Goldbloom in a Verge Interview

In this chapter, we will discuss some of the most common text pre-processing ideas. This task is universal, tedious, and unavoidable, and most people working in data science or NLP understand that it's an underrated value addition. Some of these tasks don't work well in isolation, but they have a powerful effect when used in the right combination and order. This chapter will introduce several new terms and tools, since the field has a rich history drawn from two worlds: it borrows from both traditional NLP and machine learning.

Bread and butter – most common tasks

There are several well-known text cleaning ideas. They have all made their way into the most popular tools today such as NLTK, Stanford CoreNLP, and spaCy. I like spaCy for two main reasons:

  • It's an industry-grade NLP library, unlike NLTK, which is mainly meant for teaching.
  • It has a good speed-to-performance trade-off: spaCy is written in Cython, which gives it C-like performance with a Python API.

spaCy is actively maintained and developed, and incorporates the best methods available for most challenges.

By the end of this section, you will be able to do the following:

  • Understand tokenization and do it manually yourself using spaCy
  • Understand why stop word removal and case standardization works, with spaCy examples
  • Differentiate between stemming and lemmatization, with spaCy lemmatization examples
...

Tokenization

Given a character sequence and a defined document unit, tokenization is the task of chopping it up into pieces, called tokens, perhaps at the same time throwing away certain characters, such as punctuation.
Here is an example of tokenization:

Input: Friends, Romans, Countrymen, lend me your ears;
Output: [Friends] [Romans] [Countrymen] [lend] [me] [your] [ears]

It is, in fact, sometimes useful to distinguish between tokens and words. But here, for ease of understanding, we will use them interchangeably.

We will convert the raw text into a list of words. This should preserve the original ordering of the text.

There are several ways to do this, so let's try a few of them out. We will program two methods from scratch to build our intuition, and then check how spaCy handles tokenization.
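To build that intuition, here is a minimal sketch of the split-based approach (these helper names are illustrative, not the book's exact code): a naive whitespace split, and a slightly better variant that strips surrounding punctuation.

```python
import string

def tokenize_split(text):
    # Naive approach: split on whitespace only; punctuation stays glued to words.
    return text.split()

def tokenize_strip_punct(text):
    # Slightly better: split on whitespace, then strip leading/trailing
    # punctuation from each token and drop any empties left behind.
    tokens = (tok.strip(string.punctuation) for tok in text.split())
    return [tok for tok in tokens if tok]

text = "Friends, Romans, Countrymen, lend me your ears;"
print(tokenize_split(text))       # ['Friends,', 'Romans,', 'Countrymen,', 'lend', 'me', 'your', 'ears;']
print(tokenize_strip_punct(text)) # ['Friends', 'Romans', 'Countrymen', 'lend', 'me', 'your', 'ears']
```

Note that even the better variant mishandles clitics such as don't or we're, which is one reason a trained tokenizer like spaCy's is preferable in practice.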

Intuitive – split by...

Stemming and lemmatization

Stemming and lemmatization are two very popular ideas that are used to reduce the vocabulary size of your corpus.

Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes.

Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.

If confronted with the token saw, stemming might return just s, whereas lemmatization would attempt to return either see or saw, depending on whether the use of the token was as a verb or a noun.
- Dr. Christopher Manning et al, 2008, [IR-Book]
(Chris Manning is a Professor in machine...
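To make the "crude heuristic" nature of stemming concrete, here is a toy suffix-stripping stemmer (purely illustrative; real stemmers such as Porter's use ordered rule sets with measure conditions to avoid over-stripping):

```python
def crude_stem(word):
    # Toy stemmer: chop off the first matching suffix, but only if the
    # remaining stem would be at least three characters long.
    for suffix in ("ational", "ization", "ing", "ly", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

for w in ["cats", "running", "relational", "saw"]:
    print(w, "->", crude_stem(w))
# cats -> cat, running -> runn, relational -> rel, saw -> saw
```

Notice the characteristic failure modes: running becomes the non-word runn, while saw is untouched because no rule fires. A lemmatizer, by contrast, would use a vocabulary and part-of-speech information to map running to run and the verb saw to see.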

spaCy compared with NLTK and CoreNLP

The following is a comparison of spaCy, NLTK, and CoreNLP:

Feature                       spaCy   NLTK   CoreNLP
Native Python support/API     Y       Y      Y
Multi-language support        Y       Y      Y
Tokenization                  Y       Y      Y
Part-of-speech tagging        Y       Y      Y
Sentence segmentation         Y       Y      Y
Dependency parsing            Y       N      Y
Entity recognition            Y       Y      Y
Integrated word vectors       Y       N      N
Sentiment analysis            Y       Y      Y
Coreference resolution        N       N      Y

Correcting spelling

One of the most frequently seen text challenges is correcting spelling errors. This is all the more true when data is entered by casual human users, for instance, in shipping addresses and similar fields.

Let's look at an example. We want to correct Gujrat, Gujart, and other minor misspellings to Gujarat. There are several good ways to do this, depending on your dataset and level of expertise. We will discuss two or three popular ways, and discuss their pros and cons.
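As a quick taste of the simpler end of the spectrum, the standard library's difflib can already handle the Gujarat case when you have a known vocabulary of correct spellings (the place list and cutoff below are illustrative assumptions, not values from the book):

```python
import difflib

# A hypothetical lookup list of canonical spellings.
known_places = ["Gujarat", "Goa", "Punjab", "Kerala"]

def correct(word, vocabulary, cutoff=0.7):
    # get_close_matches ranks candidates by a SequenceMatcher similarity
    # ratio; return the closest known spelling, or the word unchanged.
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else word

for misspelling in ["Gujrat", "Gujart", "Kerela"]:
    print(misspelling, "->", correct(misspelling, known_places))
# Gujrat -> Gujarat, Gujart -> Gujarat, Kerela -> Kerala
```

This dictionary-lookup approach only works when you can enumerate the correct forms; the more general packages discussed next handle open-vocabulary text.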

Before I begin, we need to pay homage to the legendary Peter Norvig's Spell Correct. It's still worth a read on how to think about solving a problem and exploring implementations. Even the way he refactors his code and writes functions is educational.

His spell-correction module is not the simplest or best way of doing this. I recommend two packages: one with a bias toward simplicity, one...

Cleaning a corpus with FlashText

But what about a web-scale corpus with millions of documents and a few thousand keywords? Regex can take several days to run over such exact searches because of its linear time complexity. How can we improve this?

We can use FlashText for this very specific use case:

  • A few million documents with a few thousand keywords
  • Exact keyword matches either by replacing or searching for the presence of those keywords

Of course, there are several different possible solutions to this problem. I recommend this for its simplicity and focus on solving one problem. It does not require us to learn new syntax or set up specific tools such as ElasticSearch.
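The core idea behind FlashText can be sketched in a few lines of plain Python (this is a simplified illustration, not the library's implementation): one pass over the document with O(1) dictionary lookups per token, so runtime does not grow with the number of keywords, unlike a regex alternation.

```python
def replace_keywords(text, mapping):
    # One pass over whitespace tokens; each lookup is O(1) regardless of
    # how many keywords are in the mapping. FlashText generalizes this
    # with a character-level trie, so keywords need not align with
    # whitespace tokens.
    return " ".join(mapping.get(tok, tok) for tok in text.split())

mapping = {"Gujrat": "Gujarat", "Gujart": "Gujarat"}
print(replace_keywords("Shipping to Gujrat via Gujart route", mapping))
# Shipping to Gujarat via Gujarat route
```

The library itself exposes this through a KeywordProcessor object with add_keyword, extract_keywords, and replace_keywords methods.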

The following table gives you a comparison of FlashText versus compiled regex for searching:

The following table gives you a comparison of FlashText versus compiled regex for substitutions...

Summary

This chapter covered a lot of new ground. We started by performing linguistic processing on our text. We met spaCy, which we will continue to dive deeper into as we move on in this book. We covered the following foundational ideas from linguistics: tokenization (with and without spaCy), stop word removal, case standardization, and lemmatization (we skipped stemming) using spaCy, along with its peculiarities, such as the -PRON- lemma.

But what do we do with spaCy, other than text cleaning? Can we build something? Yes!

Not only can we extend our simple linguistics-based text cleaning using spaCy pipelines, but we can also do part-of-speech tagging, named entity recognition, and other common tasks. We will look at this in the next chapter.

We looked at spelling correction or the closest word match problem. We discussed FuzzyWuzzy and Jellyfish in this context. To ensure that we can scale...
