The Natural Language Processing Workshop

Product type: Book
Published: Aug 2020
Publisher: Packt
ISBN-13: 9781800208421
Pages: 452
Edition: 1st
Authors (6): Rohan Chopra, Aniruddha M. Godbole, Nipun Sadvilkar, Muzaffar Bashir Shah, Sohom Ghosh, Dwight Gunning

Table of Contents (10 chapters)

Preface
1. Introduction to Natural Language Processing
2. Feature Extraction Methods
3. Developing a Text Classifier
4. Collecting Text Data with Web Scraping and APIs
5. Topic Modeling
6. Vector Representation
7. Text Generation and Summarization
8. Sentiment Analysis
Appendix

Word Sense Disambiguation

There's a popular saying: "A man is known by the company he keeps." Similarly, a word's meaning depends on its association with the other words in a sentence. This means two or more words with the same spelling may carry different meanings in different contexts, which often leads to ambiguity. Word sense disambiguation is the process of mapping a word to the sense it carries in a given context, so that words with the same spelling can be treated as distinct entities during analysis. The following figure shows an example of how ambiguity arises from the use of the same word in different sentences:

Figure 1.3: Word sense disambiguation

One algorithm for solving word sense disambiguation is the Lesk algorithm. It relies on a large background corpus (generally WordNet) that contains definitions, or glosses, for every sense of every word in a language. Given a word and its context as input, the algorithm compares the context against each of the word's definitions and returns the sense whose definition has the highest overlap with the context.

For example, suppose a given text contains the sentence "We play only soccer," and we need to determine the sense of the word "play" in it. In the Lesk algorithm, every ambiguous word is stored in the background sense inventory (synsets) along with all of its possible definitions. Let's say we have two definitions of the word "play":

  1. Play: Participating in a sport or game
  2. Play: Using a musical instrument

Then, we find the similarity between the context of the word "play" in the text and each of the preceding definitions using text similarity techniques. The definition best suited to that context is taken as the meaning of the word in the sentence. In this case, the first definition fits best, as the definition words "sport" and "game" are closely related to the context word "soccer."
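The overlap-counting idea behind the Lesk algorithm can be sketched in plain Python. The `simple_lesk` helper, the two glosses, and the extended context sentence below are illustrative stand-ins (the context is lengthened so it shares the word "sport" with a gloss); they are not NLTK's actual implementation or WordNet's actual entries:

```python
def simple_lesk(context_tokens, senses):
    """Pick the sense whose gloss shares the most tokens with the context.

    senses: dict mapping a sense label to its gloss string.
    Returns (best sense label, overlap count).
    """
    context = {t.lower() for t in context_tokens}
    best, best_overlap = None, -1
    for label, gloss in senses.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = label, overlap
    return best, best_overlap

# Toy sense inventory for "play", echoing the two definitions above
senses = {
    "play.sport": "participating in a sport or game",
    "play.music": "using a musical instrument",
}

context = "We play only soccer because we love the sport".split()
print(simple_lesk(context, senses))  # ('play.sport', 1)
```

Real implementations refine this raw overlap count, for example by ignoring stopwords or weighting rarer words, but the core idea is the same.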

In the next exercise, we will be using the lesk() function from NLTK's nltk.wsd module. It takes a tokenized sentence and the ambiguous word as input and returns a Synset object whose name identifies the matched sense. The human-readable gloss of that synset can then be retrieved with its definition() method.

To get a better understanding of this process, let's look at an exercise.

Exercise 1.10: Word Sense Disambiguation

In this exercise, we will find the sense of the word "bank" in two different sentences. Follow these steps to implement this exercise:

  1. Open a Jupyter Notebook.
  2. Insert a new cell and add the following code to import the necessary libraries:
    import nltk
    nltk.download('wordnet')
    from nltk.wsd import lesk
    from nltk import word_tokenize
  3. Declare two variables, sentence1 and sentence2, and assign them with appropriate strings. Insert a new cell and the following code to implement this:
    sentence1 = "Keep your savings in the bank"
    sentence2 = "It's so risky to drive over the banks of the road"
  4. To find the sense of the word "bank" in the preceding two sentences, use the Lesk algorithm provided by the nltk.wsd library. Insert a new cell and add the following code to implement this:
    def get_synset(sentence, word):
        return lesk(word_tokenize(sentence), word)
    get_synset(sentence1,'bank')

    This code generates the following output:

    Synset('savings_bank.n.02')
  5. Here, savings_bank.n.02 refers to a container for keeping money safely at home. To check the other sense of the word "bank," write the following code:
    get_synset(sentence2,'bank')

    This code generates the following output:

    Synset('bank.v.07')

    Here, bank.v.07 refers to a slope in the turn of a road.

    Thus, with the help of the Lesk algorithm, we were able to identify the sense of a word in a given context.
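The flow of this exercise can be mimicked without NLTK by pairing each sense ID with a toy gloss and scoring overlaps on content words. The sense IDs below echo the exercise's output, but the glosses, the stopword list, and the `get_sense` helper are illustrative assumptions, not WordNet's actual entries or NLTK's code:

```python
# A tiny stopword list so function words don't dominate the overlap score
STOPWORDS = {"a", "an", "and", "the", "in", "of", "or", "for",
             "at", "so", "to", "over", "your", "it's"}

def content_words(text):
    """Lowercase, split on whitespace, and drop stopwords."""
    return set(text.lower().split()) - STOPWORDS

def get_sense(sentence, senses):
    """Return the sense ID whose gloss shares the most content words
    with the sentence."""
    tokens = content_words(sentence)
    return max(senses, key=lambda sid: len(tokens & content_words(senses[sid])))

# Toy glosses paraphrasing the senses discussed in the exercise
bank_senses = {
    "savings_bank.n.02": "a container for keeping money and savings safely at home",
    "bank.v.07": "a slope in the turn of a road or track",
}

print(get_sense("Keep your savings in the bank", bank_senses))
# savings_bank.n.02
print(get_sense("It's so risky to drive over the banks of the road", bank_senses))
# bank.v.07
```

The first sentence wins on the shared word "savings," the second on "road"; with real WordNet glosses the overlaps are noisier, which is why NLTK's lesk() works over the full sense inventory.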

    Note

    To access the source code for this specific section, please refer to https://packt.live/399JCq5.

    You can also run this example online at https://packt.live/30haCQ6.

In the next section, we will focus on sentence boundary detection, which helps detect the start and end points of sentences.
