In this chapter, we will cover:
Tokenizing text into sentences
Tokenizing sentences into words
Tokenizing sentences using regular expressions
Filtering stopwords in a tokenized sentence
Looking up synsets for a word in WordNet
Looking up lemmas and synonyms in WordNet
Calculating WordNet synset similarity
Discovering word collocations
NLTK is the Natural Language Toolkit, a comprehensive Python library for natural language processing and text analytics. Originally designed for teaching, it has been adopted in the industry for research and development due to its usefulness and breadth of coverage.
This chapter will cover the basics of tokenizing text and using WordNet. Tokenization is a method of breaking up a piece of text into many pieces, and is an essential first step for recipes in later chapters.
WordNet is a dictionary designed for programmatic access by natural language processing systems. NLTK includes a WordNet corpus reader, which we will use to access and explore WordNet. We'll be using WordNet again in later chapters, so it's important to familiarize yourself with the basics first.
Tokenization is the process of splitting a string into a list of pieces, or tokens. We'll start by splitting a paragraph into a list of sentences.
Installation instructions for NLTK are available at http://www.nltk.org/download and the latest version as of this writing is 2.0b9. NLTK requires Python 2.4 or higher, but is not compatible with Python 3.0. The recommended Python version is 2.6.
Once you've installed NLTK, you'll also need to install the data by following the instructions at http://www.nltk.org/data. We recommend installing everything, as we'll be using a number of corpora and pickled objects. The data is installed in a data directory, which on Mac and Linux/Unix is usually /usr/share/nltk_data, and on Windows is C:\nltk_data. Make sure that tokenizers/punkt.zip is in the data directory and has been unpacked so that there's a file at tokenizers/punkt/english.pickle.
Finally, to run the code examples, you'll need to start a Python console. Instructions on how to do so are available at http://www.nltk.org/getting-started. Mac and Linux/Unix users can open a terminal and type python.
Once NLTK is installed and you have a Python console running, we can start by creating a paragraph of text:
>>> para = "Hello World. It's good to see you. Thanks for buying this book."
Now we want to split
para into sentences. First we need to import the sentence tokenization function, and then we can call it with the paragraph as an argument.
>>> from nltk.tokenize import sent_tokenize
>>> sent_tokenize(para)
['Hello World.', "It's good to see you.", 'Thanks for buying this book.']
So now we have a list of sentences that we can use for further processing.
sent_tokenize uses an instance of
PunktSentenceTokenizer from the
nltk.tokenize.punkt module. This instance has already been trained on, and works well for, many European languages, so it knows which punctuation and characters mark the end of a sentence and the beginning of the next.
The instance used in
sent_tokenize() is actually loaded on demand from a pickle file. So if you're going to be tokenizing a lot of sentences, it's more efficient to load the
PunktSentenceTokenizer once, and call its
tokenize() method instead.
>>> import nltk.data
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> tokenizer.tokenize(para)
['Hello World.', "It's good to see you.", 'Thanks for buying this book.']
If you want to tokenize sentences in languages other than English, you can load one of the other pickle files in
tokenizers/punkt and use it just like the English sentence tokenizer. Here's an example for Spanish:
>>> spanish_tokenizer = nltk.data.load('tokenizers/punkt/spanish.pickle')
>>> spanish_tokenizer.tokenize('Hola amigo. Estoy bien.')
['Hola amigo.', 'Estoy bien.']
In this recipe, we'll split a sentence into individual words. The simple task of creating a list of words from a string is an essential part of all text processing.
Basic word tokenization is very simple: use the word_tokenize() function:
>>> from nltk.tokenize import word_tokenize
>>> word_tokenize('Hello World.')
['Hello', 'World', '.']
word_tokenize() is a convenience wrapper that calls tokenize() on an instance of TreebankWordTokenizer, so the following is equivalent:
>>> from nltk.tokenize import TreebankWordTokenizer
>>> tokenizer = TreebankWordTokenizer()
>>> tokenizer.tokenize('Hello World.')
['Hello', 'World', '.']
It works by separating words using spaces and punctuation. And as you can see, it does not discard the punctuation, allowing you to decide what to do with it.
Ignoring the obviously named WhitespaceTokenizer, there are two other word tokenizers worth looking at: PunktWordTokenizer and WordPunctTokenizer. These differ from the TreebankWordTokenizer in how they handle punctuation and contractions, but they all inherit from the same TokenizerI interface.
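If you'd like to verify this relationship yourself, here's a quick sketch, assuming the TokenizerI interface is importable from nltk.tokenize.api (where NLTK defines it):
>>> from nltk.tokenize.api import TokenizerI
>>> from nltk.tokenize import TreebankWordTokenizer, WordPunctTokenizer
>>> issubclass(TreebankWordTokenizer, TokenizerI)
True
>>> issubclass(WordPunctTokenizer, TokenizerI)
True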
TreebankWordTokenizer uses conventions found in the Penn Treebank corpus, which we'll be using for training in Chapter 4, Part-of-Speech Tagging and Chapter 5, Extracting Chunks. One of these conventions is to separate contractions. For example:
>>> word_tokenize("can't") ['ca', "n't"]
If you find this convention unacceptable, then read on for alternatives, and see the next recipe for tokenizing with regular expressions.
An alternative word tokenizer is the
PunktWordTokenizer. It splits on punctuation, but keeps it with the word instead of creating separate tokens.
>>> from nltk.tokenize import PunktWordTokenizer
>>> tokenizer = PunktWordTokenizer()
>>> tokenizer.tokenize("Can't is a contraction.")
['Can', "'t", 'is', 'a', 'contraction.']
Another alternative word tokenizer is
WordPunctTokenizer. It splits all punctuation into separate tokens.
>>> from nltk.tokenize import WordPunctTokenizer
>>> tokenizer = WordPunctTokenizer()
>>> tokenizer.tokenize("Can't is a contraction.")
['Can', "'", 't', 'is', 'a', 'contraction', '.']
Regular expressions can be used if you want complete control over how to tokenize text. As regular expressions can get complicated very quickly, we only recommend using them if the word tokenizers covered in the previous recipe are unacceptable.
First you need to decide how you want to tokenize a piece of text, as this will determine how you construct your regular expression. The choices are:
Match on the tokens
Match on the separators, or gaps
We'll start with an example of the first, matching alphanumeric tokens plus single quotes so that we don't split up contractions.
We'll create an instance of the
RegexpTokenizer, giving it a regular expression string to use for matching tokens.
>>> from nltk.tokenize import RegexpTokenizer
>>> tokenizer = RegexpTokenizer("[\w']+")
>>> tokenizer.tokenize("Can't is a contraction.")
["Can't", 'is', 'a', 'contraction']
There's also a simple helper function you can use in case you don't want to instantiate the class.
>>> from nltk.tokenize import regexp_tokenize
>>> regexp_tokenize("Can't is a contraction.", "[\w']+")
["Can't", 'is', 'a', 'contraction']
Now we finally have something that can treat contractions as whole words, instead of splitting them into tokens.
RegexpTokenizer works by compiling your pattern, then calling
re.findall() on your text. You could do all this yourself using the
re module, but the
RegexpTokenizer implements the
TokenizerI interface, just like all the word tokenizers from the previous recipe. This means it can be used by other parts of the NLTK package, such as corpus readers, which we'll cover in detail in Chapter 3, Creating Custom Corpora. Many corpus readers need a way to tokenize the text they're reading, and can take optional keyword arguments specifying an instance of a
TokenizerI subclass. This way, you have the ability to provide your own tokenizer instance if the default tokenizer is unsuitable.
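As a rough sketch of what that looks like, here's the PlaintextCorpusReader with a custom tokenizer passed as its word_tokenizer keyword argument; the directory and filename are hypothetical placeholders for your own data:
>>> from nltk.corpus.reader import PlaintextCorpusReader
>>> from nltk.tokenize import RegexpTokenizer
>>> reader = PlaintextCorpusReader('my_corpus', ['sample.txt'], word_tokenizer=RegexpTokenizer("[\w']+"))
>>> # reader.words('sample.txt') would now be tokenized with our regular expression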
RegexpTokenizer can also work by matching the gaps instead of the tokens. Instead of using re.findall(), the RegexpTokenizer will use re.split(). This is how the BlanklineTokenizer in nltk.tokenize is implemented.
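Here's a minimal sketch of gap matching, using whitespace as the gap pattern; note that the trailing punctuation stays attached to the last word:
>>> tokenizer = RegexpTokenizer('\s+', gaps=True)
>>> tokenizer.tokenize("Can't is a contraction.")
["Can't", 'is', 'a', 'contraction.']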
Stopwords are common words that generally do not contribute to the meaning of a sentence, at least for the purposes of information retrieval and natural language processing. Most search engines will filter stopwords out of search queries and documents in order to save space in their index.
NLTK comes with a stopwords corpus that contains word lists for many languages. Be sure to unzip the datafile so NLTK can find these word lists in nltk_data/corpora/stopwords/.
We're going to create a set of all English stopwords, then use it to filter stopwords from a sentence.
>>> from nltk.corpus import stopwords
>>> english_stops = set(stopwords.words('english'))
>>> words = ["Can't", 'is', 'a', 'contraction']
>>> [word for word in words if word not in english_stops]
["Can't", 'contraction']
The stopwords corpus is an instance of
nltk.corpus.reader.WordListCorpusReader. As such, it has a
words() method that can take a single argument for the file ID, which in this case is
'english', referring to a file containing a list of English stopwords. You could also call
stopwords.words() with no argument to get a list of all stopwords in every language available.
You can see the list of all English stopwords using
stopwords.words('english') or by examining the word list file at
nltk_data/corpora/stopwords/english. There are also stopword lists for many other languages. You can see the complete list of languages using the fileids() method:
>>> stopwords.fileids()
['danish', 'dutch', 'english', 'finnish', 'french', 'german', 'hungarian', 'italian', 'norwegian', 'portuguese', 'russian', 'spanish', 'swedish', 'turkish']
Any of these
fileids can be used as an argument to the
words() method to get a list of stopwords for that language.
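For example, here's a quick sketch using the Spanish word list (the output assumes 'de' appears in that list, which it does in the standard stopwords corpus):
>>> spanish_stops = set(stopwords.words('spanish'))
>>> 'de' in spanish_stops
True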
If you'd like to create your own stopwords corpus, see the Creating a word list corpus recipe in Chapter 3, Creating Custom Corpora, to learn how to use the
WordListCorpusReader. We'll also be using stopwords in the Discovering word collocations recipe, later in this chapter.
WordNet is a lexical database for the English language. In other words, it's a dictionary designed specifically for natural language processing.
NLTK comes with a simple interface for looking up words in WordNet. What you get is a list of synset instances, which are groupings of synonymous words that express the same concept. Many words have only one synset, but some have several. We'll now explore a single synset, and in the next recipe, we'll look at several in more detail.
Be sure you've unzipped the
wordnet corpus in
nltk_data/corpora/wordnet. This will allow the
WordNetCorpusReader to access it.
Now we're going to look up the synset for cookbook, and explore some of the properties and methods of a synset.
>>> from nltk.corpus import wordnet
>>> syn = wordnet.synsets('cookbook')[0]
>>> syn.name
'cookbook.n.01'
>>> syn.definition
'a book of recipes and cooking directions'
You can look up any word in WordNet using
wordnet.synsets(word) to get a list of synsets. The list may be empty if the word is not found. The list may also have quite a few elements, as some words can have many possible meanings and therefore many synsets.
Each synset in the list has a number of attributes you can use to learn more about it. The
name attribute will give you a unique name for the synset, which you can use to get the synset directly.
>>> wordnet.synset('cookbook.n.01')
Synset('cookbook.n.01')
The definition attribute should be self-explanatory. Some synsets also have an
examples attribute, which contains a list of phrases that use the word in context.
>>> wordnet.synsets('cooking')[0].examples
['cooking can be a great art', 'people are needed who have experience in cookery', 'he left the preparation of meals to his wife']
Hypernyms provide a way to categorize and group words based on their similarity to each other. The synset similarity recipe details the functions used to calculate similarity based on the distance between two words in the hypernym tree.
>>> syn.hypernyms()
[Synset('reference_book.n.01')]
>>> syn.hypernyms()[0].hyponyms()
[Synset('encyclopedia.n.01'), Synset('directory.n.01'), Synset('source_book.n.01'), Synset('handbook.n.01'), Synset('instruction_book.n.01'), Synset('cookbook.n.01'), Synset('annual.n.02'), Synset('atlas.n.02'), Synset('wordbook.n.01')]
>>> syn.root_hypernyms()
[Synset('entity.n.01')]
As you can see, reference book is a hypernym of cookbook, but cookbook is only one of many hyponyms of reference book. All these types of books have the same root hypernym, entity, one of the most abstract terms in the English language. You can trace the entire path from entity down to cookbook using the hypernym_paths() method:
>>> syn.hypernym_paths()
[[Synset('entity.n.01'), Synset('physical_entity.n.01'), Synset('object.n.01'), Synset('whole.n.02'), Synset('artifact.n.01'), Synset('creation.n.02'), Synset('product.n.02'), Synset('work.n.02'), Synset('publication.n.01'), Synset('book.n.01'), Synset('reference_book.n.01'), Synset('cookbook.n.01')]]
This method returns a list of lists, where each list starts at the root hypernym and ends with the original
Synset. Most of the time you'll only get one nested list of synsets.
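For instance, building on the output shown above:
>>> paths = syn.hypernym_paths()
>>> len(paths)
1
>>> len(paths[0])
12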
You can also look up a simplified part-of-speech tag.
>>> syn.pos
'n'
There are four common parts of speech found in WordNet: noun (n), adjective (a), adverb (r), and verb (v).
These POS tags can be used for looking up specific
synsets for a word. For example, the word
great can be used as a noun or an adjective. In WordNet,
great has one noun synset and six adjective synsets.
>>> len(wordnet.synsets('great'))
7
>>> len(wordnet.synsets('great', pos='n'))
1
>>> len(wordnet.synsets('great', pos='a'))
6
These POS tags will be referenced more in the Using WordNet for Tagging recipe of Chapter 4, Part-of-Speech Tagging.
In the next two recipes, we'll explore lemmas and how to calculate synset similarity. In Chapter 2, Replacing and Correcting Words, we'll use WordNet for lemmatization, synonym replacement, and then explore the use of antonyms.
In the following block of code, we'll find that there are two lemmas for the cookbook synset by using the lemmas attribute:
>>> from nltk.corpus import wordnet
>>> syn = wordnet.synsets('cookbook')[0]
>>> lemmas = syn.lemmas
>>> len(lemmas)
2
>>> lemmas[0].name
'cookbook'
>>> lemmas[1].name
'cookery_book'
>>> lemmas[0].synset == lemmas[1].synset
True
As you can see, cookery_book and cookbook are two distinct
lemmas in the same
synset. In fact, a lemma can only belong to a single synset. In this way, a synset represents a group of lemmas that all have the same meaning, while a lemma represents a distinct word form.
Since lemmas in a synset all have the same meaning, they can be treated as synonyms. So if you wanted to get all synonyms for a
synset, you could do:
>>> [lemma.name for lemma in syn.lemmas]
['cookbook', 'cookery_book']
As mentioned before, many words have multiple
synsets because the word can have different meanings depending on the context. But let's say you didn't care about the context, and wanted to get all possible synonyms for a word.
>>> synonyms = []
>>> for syn in wordnet.synsets('book'):
...     for lemma in syn.lemmas:
...         synonyms.append(lemma.name)
>>> len(synonyms)
38
As you can see, there appear to be 38 possible synonyms for the word
book. But in fact, some are verb forms, and many are just different usages of
book. Instead, if we take the set of synonyms, there are fewer unique words.
>>> len(set(synonyms))
25
Some lemmas also have antonyms. The word
good, for example, has 27
synsets, five of which have
lemmas with antonyms.
>>> gn2 = wordnet.synset('good.n.02')
>>> gn2.definition
'moral excellence or admirableness'
>>> evil = gn2.lemmas[0].antonyms()[0]
>>> evil.name
'evil'
>>> evil.synset.definition
'the quality of being morally wrong in principle or practice'
>>> ga1 = wordnet.synset('good.a.01')
>>> ga1.definition
'having desirable or positive qualities especially those suitable for a thing specified'
>>> bad = ga1.lemmas[0].antonyms()[0]
>>> bad.name
'bad'
>>> bad.synset.definition
'having undesirable or negative qualities'
The antonyms() method returns a list of lemmas. In the first case here, we see that the second synset for good as a noun is defined as
moral excellence, and its first antonym is
evil, defined as
morally wrong. In the second case, when
good is used as an adjective to describe positive qualities, the first antonym is
bad, which describes negative qualities.
In the next recipe, we'll learn how to calculate
synset similarity. Then in Chapter 2, Replacing and Correcting Words, we'll revisit lemmas for lemmatization, synonym replacement, and antonym replacement.
Synsets are organized in a hypernym tree. This tree can be used for reasoning about the similarity between the synsets it contains. The closer two synsets are in the tree, the more similar they are.
If you were to look at all the hyponyms of
reference book (which is the hypernym of
cookbook) you'd see that one of them is
instruction_book. This seems intuitively very similar to
cookbook, so let's see what WordNet similarity has to say about it.
>>> from nltk.corpus import wordnet
>>> cb = wordnet.synset('cookbook.n.01')
>>> ib = wordnet.synset('instruction_book.n.01')
>>> cb.wup_similarity(ib)
0.91666666666666663
So they are over 91% similar!
wup_similarity is short for Wu-Palmer Similarity, which is a scoring method based on how similar the word senses are and where the synsets occur relative to each other in the hypernym tree. One of the core metrics used to calculate similarity is the shortest path distance between the two synsets and their common hypernym.
>>> ref = cb.hypernyms()[0]
>>> cb.shortest_path_distance(ref)
1
>>> ib.shortest_path_distance(ref)
1
>>> cb.shortest_path_distance(ib)
2
cookbook and instruction book must be very similar, because they are only one step away from the same hypernym,
reference book, and therefore only two steps away from each other.
Let's look at two dissimilar words to see what kind of score we get. We'll compare dog with cookbook, two seemingly very different words.
>>> dog = wordnet.synsets('dog')[0]
>>> dog.wup_similarity(cb)
0.38095238095238093
dog and cookbook are apparently 38% similar! This is because they share common hypernyms farther up the tree.
>>> dog.common_hypernyms(cb)
[Synset('object.n.01'), Synset('whole.n.02'), Synset('physical_entity.n.01'), Synset('entity.n.01')]
The previous comparisons were all between nouns, but the same can be done for verbs as well.
>>> cook = wordnet.synset('cook.v.01')
>>> bake = wordnet.synset('bake.v.02')
>>> cook.wup_similarity(bake)
0.75
The previous synsets were obviously handpicked for demonstration, and the reason is that the hypernym tree for verbs has a lot more breadth and a lot less depth. While most nouns can be traced up to
object, thereby providing a basis for similarity, many verbs do not share common hypernyms, making WordNet unable to calculate similarity. For example, if you were to use the
bake.v.01 here, instead of
bake.v.02, the return value would be
None. This is because the root hypernyms of the two synsets are different, with no overlapping paths. For this reason, you also cannot calculate similarity between words with different parts of speech.
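Here's a brief sketch of that case, reusing the cook synset from the previous example (print is used because the interactive console doesn't display a bare None):
>>> bake1 = wordnet.synset('bake.v.01')
>>> print cook.wup_similarity(bake1)
None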
Two other similarity metrics you can try are path similarity and Leacock-Chodorow (LCH) similarity:
>>> cb.path_similarity(ib)
0.33333333333333331
>>> cb.path_similarity(dog)
0.071428571428571425
>>> cb.lch_similarity(ib)
2.5389738710582761
>>> cb.lch_similarity(dog)
0.99852883011112725
Collocations are two or more words that tend to appear frequently together, such as "United States". Of course, there are many other words that can come after "United", for example "United Kingdom", "United Airlines", and so on. As with many aspects of natural language processing, context is very important, and for collocations, context is everything!
In the case of collocations, the context will be a document in the form of a list of words. Discovering collocations in this list of words means that we'll find common phrases that occur frequently throughout the text. For fun, we'll start with the script for Monty Python and the Holy Grail.
The script for Monty Python and the Holy Grail is found in the
webtext corpus, so be sure that it's unzipped in nltk_data/corpora/webtext/.
We're going to create a list of all lowercased words in the text, and then produce a
BigramCollocationFinder, which we can use to find bigrams, or pairs of words. These bigrams are found using the association measurement functions in the nltk.metrics package:
>>> from nltk.corpus import webtext
>>> from nltk.collocations import BigramCollocationFinder
>>> from nltk.metrics import BigramAssocMeasures
>>> words = [w.lower() for w in webtext.words('grail.txt')]
>>> bcf = BigramCollocationFinder.from_words(words)
>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
[("'", 's'), ('arthur', ':'), ('#', '1'), ("'", 't')]
Well that's not very useful! Let's refine it a bit by adding a word filter to remove punctuation and stopwords.
>>> from nltk.corpus import stopwords
>>> stopset = set(stopwords.words('english'))
>>> filter_stops = lambda w: len(w) < 3 or w in stopset
>>> bcf.apply_word_filter(filter_stops)
>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
[('black', 'knight'), ('clop', 'clop'), ('head', 'knight'), ('mumble', 'mumble')]
Much better—we can clearly see four of the most common bigrams in Monty Python and the Holy Grail. If you'd like to see more than four, simply increase the number to whatever you want, and the collocation finder will do its best.
The BigramCollocationFinder constructs two frequency distributions: one for each word, and another for bigrams. A frequency distribution, or FreqDist in NLTK, is basically an enhanced dictionary where the keys are what's being counted, and the values are the counts. Any filtering functions that are applied reduce the size of these two FreqDists by eliminating any words that don't pass the filter. By using a filtering function to eliminate all words that are one or two characters, and all English stopwords, we can get a much cleaner result. After filtering, the collocation finder is ready to accept a generic scoring function for finding collocations. Additional scoring functions are covered in the Scoring functions section further in this chapter.
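As a small illustration of the FreqDist counting behavior described above (this snippet is not part of the original recipe):
>>> from nltk.probability import FreqDist
>>> fd = FreqDist(['the', 'knight', 'the'])
>>> fd['the']
2
>>> fd['knight']
1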
In addition to BigramCollocationFinder, there's also a TrigramCollocationFinder for finding triples instead of pairs. This time, we'll look for trigrams in the singles.txt file from the webtext corpus:
>>> from nltk.collocations import TrigramCollocationFinder
>>> from nltk.metrics import TrigramAssocMeasures
>>> words = [w.lower() for w in webtext.words('singles.txt')]
>>> tcf = TrigramCollocationFinder.from_words(words)
>>> tcf.apply_word_filter(filter_stops)
>>> tcf.apply_freq_filter(3)
>>> tcf.nbest(TrigramAssocMeasures.likelihood_ratio, 4)
[('long', 'term', 'relationship')]
Now, we don't know whether people are looking for a long-term relationship or not, but clearly it's an important topic. In addition to the stopword filter, we also applied a frequency filter which removed any trigrams that occurred less than three times. This is why only one result was returned when we asked for four—because there was only one result that occurred more than twice.
There are many more scoring functions available besides
likelihood_ratio(). But other than
raw_freq(), you may need a bit of a statistics background to understand how they work. Consult the NLTK API documentation for
NgramAssocMeasures in the
nltk.metrics package, to see all the possible scoring functions.
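For example, you could rank the same bigrams by raw frequency instead; the resulting ordering will differ from the likelihood ratio results shown earlier (output omitted here):
>>> bcf.nbest(BigramAssocMeasures.raw_freq, 4)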
In addition to the nbest() method, there are two other ways to get ngrams (a generic term for describing bigrams and trigrams) from a collocation finder; a short sketch of both follows this list:
above_score(score_fn, min_score) can be used to get all ngrams with scores that are at least min_score. The min_score that you choose will depend heavily on the score_fn you use.
score_ngrams(score_fn) will return a list with tuple pairs of (ngram, score). This can be used to inform your choice of min_score in the previous step.
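Here's a rough sketch of both methods, reusing the bigram finder from the Monty Python example (the min_score of 10 is an arbitrary threshold chosen just for illustration):
>>> scored = bcf.score_ngrams(BigramAssocMeasures.likelihood_ratio)
>>> # scored is a list of (ngram, score) tuples, sorted from highest to lowest score
>>> best = bcf.above_score(BigramAssocMeasures.likelihood_ratio, 10)
>>> # best yields every bigram whose likelihood ratio score is at least 10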
The nltk.metrics module will be used again in Chapter 7, Text Classification.