Summarization with T5 and ChatGPT

During the first seven chapters, we explored the architecture, training, fine-tuning, and usage of several transformer ecosystems. In Chapter 7, The Generative AI Revolution with ChatGPT, we discovered that OpenAI has begun experimenting with zero-shot models that require no fine-tuning or development and can be implemented in a few lines of code.

The underlying concept of such an evolution relies on how transformers strive to teach a machine how to understand a language and express itself in a human-like manner. Thus, we have gone from training a model to teaching languages to machines.

ChatGPT, New Bing, Gemini, and other end-user software can summarize, so why bother with T5? Because Hugging Face T5 might be the right solution for your project, as we will see. It has unique qualities, such as task-specific parameters for summarizing.
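
As an illustration of these task-specific parameters, here is a minimal sketch, assuming the public t5-base checkpoint on the Hugging Face Hub: its configuration ships a task_specific_params dictionary with a dedicated summarization entry. The exact values may vary between checkpoints and library versions.

```python
# Minimal sketch: inspect T5's built-in task-specific parameters.
# Assumes the public "t5-base" checkpoint; exact values may differ.
from transformers import T5Config

config = T5Config.from_pretrained("t5-base")

# The checkpoint's config includes generation settings per task, e.g. a
# "summarization" entry with its own prefix, beam count, and length limits.
print(config.task_specific_params["summarization"])
```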

Raffel et al. (2019) designed a transformer meta-model based on a simple assertion: every NLP problem can be...

Designing a universal text-to-text model

Google’s NLP technical revolution started in 2017 with Vaswani et al. (2017), the Original Transformer. Attention Is All You Need toppled more than 30 years of artificial intelligence’s reliance on RNNs and CNNs for NLP tasks. It took us from the Stone Age of NLP/NLU to the 21st century in a long-overdue evolution.

Chapter 7, The Generative AI Revolution with ChatGPT, summed up a second revolution that boiled up and erupted between Google’s Vaswani et al. (2017) Original Transformer, OpenAI’s Brown et al. (2020) GPT-3 transformers, and now ChatGPT’s GPT-4 models. The Original Transformer focused on performance to prove that attention was all we needed for NLP/NLU tasks.

OpenAI’s second revolution, through GPT-3, focused on taking transformer models from fine-tuned pretrained models to few-shot trained models that required no fine-tuning. ChatGPT with GPT-4 continued the progression that will continue...

The rise of text-to-text transformer models

Raffel et al. (2019) set out on a journey as pioneers with one goal: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. From the start, the Google team working on this approach emphasized that it would not modify the Original Transformer’s fundamental architecture.

At that point, Raffel et al. (2019) wanted to focus on concepts, not techniques. Therefore, they showed no interest in producing yet another so-called silver-bullet transformer model with n parameters and layers. This time, the T5 team wanted to find out how good transformers could be at understanding a language.

Humans learn a language and then apply that knowledge to a wide range of NLP tasks through transfer learning. The core concept of a T5 model is to find an abstract model that can do things like us. Remember, transformers learn to reproduce human-like responses through statistical pattern...

A prefix instead of task-specific formats

Raffel et al. (2019) still had one problem to solve: unifying task-specific formats. The idea was to find a way to have one input format for every task submitted to the transformer. That way, the model parameters would be trained for all types of tasks in one text-to-text format.

The Google T5 team devised a simple solution: adding a prefix to an input sequence. Without the invention of the prefix by some long-forgotten genius, we would need thousands of additional words in our vocabularies, in many languages. For example, if we did not use “pre” as a prefix, we would need separate words for prepayment, prehistoric, Precambrian, and thousands of other terms.

Raffel et al. (2019) proposed adding a prefix to an input sequence. A T5 prefix is not just a tag or indicator like [CLS] for classification in some transformer models. Instead, a T5 prefix contains the essence of a task a transformer needs to solve. A prefix conveys meaning...
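
To make the idea concrete, here is a minimal sketch of the prefix format, assuming the public t5-small checkpoint from the Hugging Face Hub. The prefixes shown follow the conventions used by the T5 checkpoints; the generation settings are illustrative.

```python
# Minimal sketch of T5's prefix-driven, text-to-text input format.
# Assumes the public "t5-small" checkpoint; generation settings are illustrative.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The same model and hyperparameters handle different tasks:
# only the text prefix changes.
prompts = [
    "translate English to German: The house is wonderful.",
    "cola sentence: The course is jumping well.",
    "summarize: state authorities dispatched emergency crews to survey the damage after the storm.",
]

for text in prompts:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```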

The T5 model

Raffel et al. (2019) focused on designing a standard input format to obtain text output. The Google T5 team did not want to try new architectures derived from the Original Transformer, such as BERT-like encoder-only layers or GPT-like decoder-only layers. Instead, the team focused on defining NLP tasks in a standard format.

They chose to use the Original Transformer model we defined in Chapter 2, Getting Started with the Architecture of the Transformer Model, as we can see in Figure 13.4:

Figure 13.4: The Original Transformer model used by T5

Raffel et al. (2019) kept most of the Original Transformer architecture and terms. However, they emphasized some key aspects. Also, they made some vocabulary and functional changes. The following list contains some of the main aspects of the T5 model:

  • The encoder and decoder remain in the model. The encoder and decoder layers become “blocks,” and the sublayers become “subcomponents...
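
To see this terminology in practice, here is a small inspection sketch, assuming the public t5-small checkpoint and Hugging Face’s transformers implementation (the class names printed are the library’s, not the paper’s).

```python
# Small inspection sketch: the encoder and decoder are stacks of "blocks",
# each made of subcomponents (self-attention, cross-attention, feedforward).
# Assumes the public "t5-small" checkpoint.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

print(len(model.encoder.block), "encoder blocks")
print(len(model.decoder.block), "decoder blocks")

# Subcomponents of the first decoder block:
# T5LayerSelfAttention, T5LayerCrossAttention, T5LayerFF
for subcomponent in model.decoder.block[0].layer:
    print(type(subcomponent).__name__)
```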

Text summarization with T5

NLP summarizing tasks extract succinct parts of a text. This section will start by presenting the Hugging Face resources we will use in this chapter. Then, we will initialize a T5-large transformer model. Finally, we will see how to use T5 to summarize any document, including legal and corporate documents.

Let’s begin by introducing Hugging Face’s framework.

Hugging Face

Hugging Face designed a framework to implement transformers at a higher level. For example, we already used Hugging Face to fine-tune a BERT model in Chapter 5, Diving into Fine-Tuning Through BERT, and train a RoBERTa model in Chapter 6, Pretraining a Transformer from Scratch through RoBERTa.

This chapter will use Hugging Face’s framework again to implement a T5-large model.
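
As a preview of what this looks like in code, here is a minimal summarization sketch, assuming the public t5-large checkpoint. The "summarize:" prefix, the truncation length, and the beam-search settings are illustrative choices, not the chapter’s exact notebook.

```python
# Minimal T5 summarization sketch with Hugging Face.
# Assumes the public "t5-large" checkpoint; generation settings are illustrative.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large").to(device)

def summarize(text: str, max_length: int = 120) -> str:
    # Prepend the task prefix and truncate long documents to the encoder limit.
    inputs = tokenizer(
        "summarize: " + text.strip().replace("\n", " "),
        return_tensors="pt", max_length=512, truncation=True,
    ).to(device)
    summary_ids = model.generate(
        inputs["input_ids"],
        num_beams=4, min_length=30, max_length=max_length,
        no_repeat_ngram_size=3, early_stopping=True,
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

document = "Replace this string with any long legal or corporate document."
print(summarize(document))
```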

Selecting a Hugging Face transformer model

In this subsection, we will choose the T5 model we will implement in this chapter.

A wide range of models can be found on the...

From text-to-text to new word predictions with OpenAI ChatGPT

The choice between T5 and ChatGPT (GPT-4) to perform summarization will always remain yours, depending on the project you implement. Hugging Face T5 offers many advantages with its text-to-text approach. ChatGPT has proven its efficiency. Ultimately, the requirements of a project will determine which model you will decide to use.

In this section, we will first compare some key points of each model. Then, we will create a program to summarize text with ChatGPT.
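
As a sketch of that second part, here is a minimal ChatGPT summarization program, assuming the openai Python package (v1.x client), an OPENAI_API_KEY environment variable, and the gpt-4 model name. The prompt wording and settings are illustrative, not the chapter’s exact program.

```python
# Minimal ChatGPT summarization sketch.
# Assumes the openai v1.x client and an OPENAI_API_KEY environment variable;
# the model name and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_with_chatgpt(text: str, model: str = "gpt-4") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You summarize documents concisely."},
            {"role": "user", "content": f"Summarize the following text:\n\n{text}"},
        ],
        temperature=0.2,  # keep the summary close to deterministic
        max_tokens=300,
    )
    return response.choices[0].message.content

print(summarize_with_chatgpt("Replace this string with any long document."))
```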

Comparing T5 and ChatGPT’s summarization methods

This section aims to compare T5 and ChatGPT’s summarization methods, not their performances, which depend on factors you will have to evaluate: datasets, hyperparameters, the scope of the project, and other project-level considerations.

In this section, the term “T5” refers to the T5 models described in the Selecting a Hugging Face transformer model section. The term ChatGPT...

Summary

In this chapter, we saw how the T5 transformer models standardized the input of the encoder and decoder stacks of the Original Transformer. The Original Transformer architecture has an identical structure for each block (or layer) of the encoder and decoder stacks. However, the Original Transformer did not have a standardized input format for NLP tasks.

Raffel et al. (2019) designed a standard input for a wide range of NLP tasks by defining a text-to-text model. They added a prefix to an input sequence, indicating the NLP problem type to solve. This led to a standard text-to-text format. The Text-To-Text Transfer Transformer (T5) was born. This deceptively simple evolution made it possible to use the same model and hyperparameters for a wide range of NLP tasks. The invention of T5 takes the standardization process of transformer models a step further.

We then implemented a T5 model that could summarize any text. We tested the model on texts that were not part of ready...

Questions

  1. T5 models only have encoder stacks like BERT models. (True/False)
  2. T5 models have both encoder and decoder stacks. (True/False)
  3. T5 models use relative positional encoding, not absolute positional encoding. (True/False)
  4. Text-to-text models are only designed for summarization. (True/False)
  5. Text-to-text models apply a prefix to the input sequence that determines the NLP task. (True/False)
  6. T5 models require specific hyperparameters for each task. (True/False)
  7. One of the advantages of text-to-text models is that they use the same hyperparameters for all NLP tasks. (True/False)
  8. T5 transformers do not contain a feedforward network. (True/False)
  9. Hugging Face is a framework that makes transformers easier to implement. (True/False)
  10. OpenAI’s transformer models are the best for summarization tasks. (True/False)

References

Further reading

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://www.packt.link/Transformers
