Further Reading

Transformers could not have reached their full potential without hardware innovation. Nvidia, for example, offers interesting insights into transformers and the related hardware:

  • https://blogs.nvidia.com/blog/2022/03/25/what-is-a-transformer-model/
  • https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html

The paradigm shift: What is an NLP task?

ChatGPT stunned the world when it suddenly became mainstream in late 2022 and early 2023. An AI could generate human-like text on practically any topic. Thousands of tasks were submitted to this incredible Generative AI transformer. ChatGPT Plus with GPT-4 seemed to be able to perform any task an end user came up with.

However, OpenAI could not possibly have pretrained ChatGPT on thousands of tasks that could not be anticipated beforehand, nor could it have fine-tuned its GPT models for everything end users might come up with.

Of course, a transformer model can be trained for specific, predetermined downstream tasks such as summarization. However, models such as ChatGPT can perform downstream tasks for which they were not trained.
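
As a rough illustration of this behavior (a minimal sketch, not code from this chapter; the model name and candidate labels are assumptions), a zero-shot classification pipeline can label text with categories the model was never explicitly fine-tuned on:

from transformers import pipeline

# Minimal sketch: zero-shot classification with a model that was not
# fine-tuned on these specific labels (model name and labels are assumptions).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The new GPU doubled our training throughput overnight.",
    candidate_labels=["hardware", "cooking", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score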

This section takes us inside the head of a transformer model to see how the architecture described in Chapter 2, Getting Started with the Architecture of the Transformer Model, applies...

Investigating the potential of downstream tasks

Transformers, like humans, can be fine-tuned to perform downstream tasks by inheriting the properties of a pretrained model. The pretrained model provides its architecture and language representations through its parameters.

A pretrained model trains on key tasks to acquire general knowledge of the language. A fine-tuned model trains on downstream tasks. Not every transformer model uses the same pretraining tasks, but potentially any task can be learned through pretraining or fine-tuning.
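
As a minimal sketch of this pretraining/fine-tuning relationship (the checkpoint, dataset, and hyperparameters below are illustrative assumptions, not the book's code), fine-tuning a pretrained model on a downstream GLUE task with the Hugging Face Trainer typically looks like this:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Illustrative choices: a BERT checkpoint and the SST-2 task from GLUE.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

raw = load_dataset("glue", "sst2")          # a downstream task the model was not pretrained on

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)

args = TrainingArguments(output_dir="sst2-finetuned",
                         per_device_train_batch_size=16,
                         num_train_epochs=1)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  tokenizer=tokenizer)
trainer.train()

The pretrained checkpoint supplies the architecture and language representations; only the fine-tuning step adapts the model to the downstream task.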

Organizing downstream tasks provides a scientific framework for implementing and measuring NLP models. However, every NLP model needs to be evaluated with a standard method.

This section will first go through some of the key measurement methods. Then, we will go through some of the main benchmark tasks and datasets.

Let’s start by going through some of the key metrics.

Evaluating models with metrics

It is impossible to compare...
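
As a minimal illustration of how benchmark predictions are typically scored (the metric choices and toy labels below are assumptions for this sketch, not results from the book), accuracy, F1, and the Matthews Correlation Coefficient can be computed with scikit-learn:

from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Toy gold labels and model predictions (made-up values for illustration only)
references  = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy:", accuracy_score(references, predictions))
print("F1      :", f1_score(references, predictions))
print("MCC     :", matthews_corrcoef(references, predictions))

Reporting the same metrics on the same datasets is what makes leaderboard comparisons between models meaningful.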

Running downstream tasks

In this section, we will jump into some transformer cars and drive them around a bit to see what they do. There are many models and tasks. We will run a few of them in this section. We will be going through variants of these models during our journey in the book. Once you understand the process of running a few tasks, you will quickly understand all of them. After all, the human baseline for all these tasks is us!

A downstream task is a fine-tuned transformer task that inherits the model and parameters from a pretrained transformer model.

A downstream task is thus defined from the perspective of a pretrained model running fine-tuned tasks. That means that, depending on the model, a task is downstream if it was not used to fully pretrain the model. In this section, we will consider all of the tasks downstream since the models were not pretrained on them.
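
For instance (a minimal sketch; the exact checkpoint the pipeline selects is an assumption of this example), a sentiment-analysis pipeline runs a model that was fine-tuned on SST-2, a downstream task it was not pretrained on:

from transformers import pipeline

# Minimal sketch: running a downstream task through a ready-made
# fine-tuned checkpoint (the default model choice is an assumption).
sentiment = pipeline("sentiment-analysis")
print(sentiment("The benchmark results exceeded every expectation."))
# Expected output along the lines of: [{'label': 'POSITIVE', 'score': 0.99}]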

Models will evolve, as will databases, benchmark methods, accuracy measurement methods, and leaderboard criteria. However, the...

Summary

The paradigm shift triggered by ChatGPT compelled us to redefine what an NLP task is. We saw that ChatGPT, like other LLMs, can perform tasks it was not trained for, including many SuperGLUE tasks, through advanced emergence. We explored the outputs of the attention heads to bridge the gap between numerical calculations and producing sequences of words.

We then explored how to measure the performance of multi-task transformers. Transformers’ ability to obtain top-ranking results for downstream tasks is unique in NLP history. We went through the demanding SuperGLUE tasks that brought transformers up to the top ranks of the GLUE and SuperGLUE leaderboards.

BoolQ, CB, WiC, and the many other tasks we covered are by no means easy to process, even for humans. We went through examples of several downstream tasks that show the difficulty transformer models face in proving their efficiency.

Transformers have proven their value by outperforming the former...

Questions

  1. Machine intelligence uses the same data as humans to make predictions. (True/False)
  2. SuperGLUE is more difficult than GLUE for NLP models. (True/False)
  3. BoolQ expects a binary answer. (True/False)
  4. WiC stands for Words in Context. (True/False)
  5. Recognizing Textual Entailment (RTE) detects whether one sequence entails another sequence. (True/False)
  6. A Winograd schema predicts whether a verb is spelled correctly. (True/False)
  7. Transformer models now occupy the top ranks of GLUE and SuperGLUE. (True/False)
  8. Human Baselines standards are not defined once and for all. They were made tougher to attain by SuperGLUE. (True/False)
  9. Transformer models will never beat SuperGLUE Human Baselines standards. (True/False)
  10. Variants of transformer models have outperformed RNN and CNN models. (True/False)

References

  • Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman, 2019, SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems: https://w4ngatang.github.io/static/papers/superglue.pdf
  • Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman, 2019, GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding: https://arxiv.org/abs/1804.07461
  • Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang, 2019, ERNIE 2.0: A Continual Pretraining Framework for Language Understanding: https://arxiv.org/pdf/1907.12412.pdf
  • Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon, 2011, Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning: https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF
  • Richard Socher, Alex...

Further reading

You can examine many other LLM benchmarking approaches, including the following tools:

Ultimately, the decision to use a benchmarking framework depends on each project.

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://www.packt.link/Transformers
