Matching Tokenizers and Datasets

When studying transformer models, we tend to focus on the models’ architecture and the datasets provided to train them. We have explored the original Transformer, fine-tuned a BERT-like model, trained a RoBERTa model, explored a GPT-3 model, trained a GPT-2 model, implemented a T5 model, and more. We have also gone through the main benchmark tasks and datasets.

We trained a RoBERTa tokenizer and used tokenizers to encode data. However, we did not explore the limits of tokenizers or evaluate how well they fit the models we build. AI is data-driven. Raffel et al. (2019), like all the authors cited in this book, spent time preparing datasets for transformer models.

In this chapter, we will go through some of the limits of tokenizers that hinder the quality of downstream transformer tasks. Do not take pretrained tokenizers at face value. You might have a specific dictionary of words you use (advanced medical language, for example) with words...

Matching datasets and tokenizers

Downloading benchmark datasets to train transformers has many advantages. The data has been prepared, and every research lab uses the same references. Also, the performance of a transformer model can be compared to that of another model trained on the same data.

However, more needs to be done to improve the performance of transformers. Furthermore, implementing a transformer model in production requires careful planning and well-defined best practices.

In this section, we will define some best practices to avoid critical stumbling blocks.

Then we will go through a few examples in Python using cosine similarity to measure the limits of tokenization and encoding datasets.
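As a reminder of the metric we will rely on, cosine similarity measures the angle between two vectors and ranges from -1 to 1. The following is a minimal sketch using NumPy, with toy vectors standing in for word embeddings; the values are illustrative and not taken from the chapter's notebooks:

import numpy as np

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (||u|| * ||v||)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy vectors standing in for two word embeddings
u = np.array([0.2, 0.8, 0.5])
v = np.array([0.1, 0.9, 0.4])
print(f"cosine similarity: {cosine_similarity(u, v):.4f}")

A value close to 1 indicates that two embeddings point in nearly the same direction; a value close to 0 indicates that they are unrelated.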

Let’s start with best practices.

Best practices

Raffel et al. (2019) defined a standard text-to-text T5 transformer model. They also went further, dispelling the myth that raw data can be used to train a model without preprocessing it first.

Preprocessing data reduces training time...

Standard NLP tasks with specific vocabulary

This section focuses on Case 4: Rare words and Case 5: Replacing rare words from the Word2Vec tokenization section of this chapter.
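As a rough illustration of those two cases, the sketch below checks whether a word is present in a Word2Vec vocabulary before computing a similarity. It assumes gensim 4.x and a model saved as word2vec.model; the file name and the example words are chosen for illustration, not taken from the notebook:

from gensim.models import Word2Vec

model = Word2Vec.load("word2vec.model")  # assumed file name

def similarity(word1, word2):
    # Case 4: a rare word that is missing from the vocabulary
    for word in (word1, word2):
        if word not in model.wv.key_to_index:
            print(f"[{word}] is not in the vocabulary: rare-word case")
            return None
    # Both words are known: return their cosine similarity
    return model.wv.similarity(word1, word2)

# Case 5: if a word is missing, a close in-vocabulary word can replace it,
# for example "amoeba" standing in for "amoeboid"
print(similarity("amoeboid", "amoeba"))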

We will use Training_OpenAI_GPT_2_CH09.ipynb, a renamed version of the notebook we used to train a dataset in Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines.

Two changes were made to the notebook:

  • dset, the dataset, was renamed mdset and contains medical content
  • A Python function was added to control the text that was tokenized using byte-level BPE
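The following is a minimal sketch of what such a control function can look like, here using the Hugging Face GPT-2 tokenizer as a stand-in for the byte-level BPE encoder in the notebook; the sample sentence is only an example:

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def control_tokenized_text(text):
    # Flag words that byte-level BPE splits into several subword tokens,
    # which often signals out-of-dictionary or domain-specific vocabulary
    for word in text.split():
        tokens = tokenizer.tokenize(" " + word)  # leading space: GPT-2 encodes word-initial tokens differently
        if len(tokens) > 1:
            print(f"'{word}' is split into {tokens}")

control_tokenized_text("The amoeboid cells can migrate through tissue")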

We will not describe Training_OpenAI_GPT_2_CH09.ipynb, which we covered in Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines, and Appendices III and IV. Make sure you upload the necessary files before beginning, as explained in Chapter 7.

There is no limit on how long you can train the model. Interrupt the training whenever you want to save the model.

The files are on GitHub in the gpt-2-train_files...

Exploring the scope of GPT-3

Even the most powerful transformers such as OpenAI GPT-3 have their limits. Let’s see how GPT-3 reacts to the word amoeboid, which is closer to a medical term than a mainstream word. We will need technical jargon in many projects. Matching datasets requires quality control of how a transformer organizes its dictionary and embeddings.

We humans can detect errors and correct each other. For example, we explored the word amoeboid in the Controlling tokenized data section of this chapter.

Let’s first ask GPT-3 what amoeboid means:


Figure 9.4: Asking GPT-3 what “amoeboid” means

amoeboid (resembling an amoeba) is an adjective, yet GPT-3 states that it is a noun in the output:

A: Amoeboid is a noun which means "resembling an amoeba"

We then ask GPT-3 a more precise question and still obtain an incorrect answer:

Q: Is amoeboid a noun or an adjective?
A: Amoeboid is a noun.
...
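The query in Figure 9.4 can also be reproduced programmatically. The following is a minimal sketch, assuming the legacy openai Python client (v0.x) and a davinci engine as used with the book's GPT-3 examples; the engine name and parameters are assumptions, not taken from this chapter:

import openai

openai.api_key = "[YOUR_API_KEY]"  # placeholder

response = openai.Completion.create(
    engine="davinci",          # assumed engine name
    prompt="Q: What does amoeboid mean?\nA:",
    max_tokens=50,
    temperature=0,
    stop=["\n"]
)
print(response["choices"][0]["text"].strip())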

Summary

In this chapter, we measured the impact of the tokenization and subsequent data encoding process on transformer models. A transformer model can only attend to tokens from the embedding and positional encoding sub-layers of a stack. It does not matter if the model is an encoder-decoder, encoder-only, or decoder-only model. It does not matter if the dataset seems good enough to train.

If the tokenization process fails, even partly, the transformer model we are running will miss critical tokens.

We first saw that for standard language tasks, raw datasets might be enough to train a transformer.

However, we discovered that even if a pretrained tokenizer has gone through a billion words, it only creates a dictionary with a small portion of the vocabulary it comes across. Like us, a tokenizer captures the essence of the language it is learning and only remembers the most important words if these words are also frequently used. This approach works well for a standard task...

Questions

  1. A tokenized dictionary contains every word that exists in a language. (True/False)
  2. Pretrained tokenizers can encode any dataset. (True/False)
  3. It is good practice to check a database before using it. (True/False)
  4. It is good practice to eliminate obscene data from datasets. (True/False)
  5. It is good practice to delete data containing discriminating assertions. (True/False)
  6. Raw datasets might sometimes produce relationships between noisy content and useful content. (True/False)
  7. A standard pretrained tokenizer contains the English vocabulary of the past 700 years. (True/False)
  8. Old English can create problems when encoding data with a tokenizer trained in modern English. (True/False)
  9. Medical and other types of jargon can create problems when encoding data with a tokenizer trained in modern English. (True/False)
  10. Controlling the output of the encoded data produced by a pretrained tokenizer is good practice. (True/False...

References

You have been reading a chapter from Transformers for Natural Language Processing - Second Edition (Packt, March 2022, ISBN-13: 9781803247335).
About the author

Denis Rothman

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, designing one of the very first patented word2matrix embeddings and patented AI conversational agents. He began his career authoring one of the first AI cognitive Natural Language Processing (NLP) chatbots, applied as an automated language teacher for Moet et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers. He then authored an Advanced Planning and Scheduling (APS) solution used worldwide.