Downstream NLP Tasks with Transformers

Transformers reveal their full potential when we unleash pretrained models and watch them perform downstream Natural Language Understanding (NLU) tasks. It takes a lot of time and effort to pretrain and fine-tune a transformer model, but the effort is worthwhile when we see a multi-million parameter transformer model in action on a range of NLU tasks.

We will begin this chapter with the quest of outperforming the human baseline. The human baseline represents the performance of humans on an NLU task. Humans learn transduction at an early age and quickly develop inductive thinking. We humans perceive the world directly with our senses. Machine intelligence relies entirely on our perceptions transcribed into words to make sense of our language.

We will then see how to measure the performance of transformers. Measuring the performance of Natural Language Processing (NLP) tasks remains a straightforward process involving accuracy scores in various forms based...

Transduction and the inductive inheritance of transformers

The emergence of Automated Machine Learning (AutoML), meaning APIs in automated cloud AI platforms, has deeply changed the job description of every AI specialist. Google Vertex, for example, boasts an 80% reduction in the development effort required to implement ML. This suggests that anybody can implement ML with ready-to-use systems. Does that mean an 80% reduction in the developer workforce? I don't think so. I see Industry 4.0 AI specialists assembling AI that adds value to a project.

Industry 4.0 NLP AI specialists invest less in source code and more in knowledge to become the AI gurus of a team.

Transformers possess the unique ability to apply their knowledge to tasks they did not learn. A BERT transformer, for example, acquires language through masked language modeling and next-sentence prediction. The BERT transformer can then be fine-tuned to perform downstream tasks that it did not learn from...
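
To make the masked language modeling objective concrete, here is a minimal sketch using the Hugging Face transformers library (an illustration added here, not code from the book; it assumes transformers and torch are installed and uses the public bert-base-uncased checkpoint):

```python
# Minimal sketch of BERT's masked language modeling pretraining task.
# Assumes: pip install transformers torch. The checkpoint is the
# standard public bert-base-uncased, not a model trained in this chapter.
from transformers import pipeline

# The "fill-mask" pipeline asks the model to predict the [MASK] token,
# which is exactly the skill BERT acquires during pretraining.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```

The model ranks candidate tokens by probability, displaying the general language knowledge that fine-tuning later builds on.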

Transformer performance versus Human Baselines

Transformers, like humans, can be fine-tuned to perform downstream tasks by inheriting the properties of a pretrained model. The pretrained model provides its architecture and language representations through its parameters.

A pretrained model trains on key tasks to acquire general knowledge of the language. A fine-tuned model trains on downstream tasks. Not every transformer model uses the same pretraining tasks, and, potentially, any task can be used either for pretraining or for fine-tuning.
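
The inheritance described above can be sketched in a few lines with the Hugging Face transformers library (a hypothetical illustration, not the book's code; the checkpoint and label count are assumptions for a binary classification task):

```python
# Sketch of inheriting a pretrained model for a downstream task.
# Assumes: pip install transformers torch. num_labels=2 assumes a
# binary downstream task such as sentiment classification.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The pretrained encoder weights are reused; a new, randomly
# initialized classification head is stacked on top. Fine-tuning
# then trains this model on task-specific (text, label) pairs.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

inputs = tokenizer("Transformers inherit pretrained knowledge.",
                   return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]): one score per candidate label
```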

Every NLP model needs to be evaluated with a standard method.

This section will first cover some of the key measurement methods and then review the main benchmark tasks and datasets.

Let's start with the key metrics.

Evaluating models with metrics

It is impossible to compare one transformer model to another (or to any other NLP model) without a universal measurement system...
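
As a concrete illustration (toy data, not benchmark results), three metrics that recur across GLUE-style leaderboards, namely accuracy, F1, and the Matthews Correlation Coefficient (MCC), can be computed with scikit-learn:

```python
# Illustrative only: common GLUE-style metrics on made-up toy labels.
# Assumes: pip install scikit-learn.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # gold labels (toy data)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions (toy data)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1:      ", f1_score(y_true, y_pred))
# MCC remains informative on imbalanced classes; GLUE uses it for CoLA.
print("MCC:     ", matthews_corrcoef(y_true, y_pred))
```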

Running downstream tasks

In this section, we will just jump into some transformer cars and drive them around a bit to see what they do. There are many models and tasks. We will run a few of them in this section. Once you understand the process of running a few tasks, you will quickly understand all of them. After all, the human baseline for all these tasks is us!

A downstream task is a fine-tuned transformer task that inherits the model and parameters from a pretrained transformer model.

A downstream task is thus a matter of perspective: for a given pretrained model, a task is downstream if it was not used to fully pretrain that model. In this section, we will consider all of the tasks as downstream since we did not pretrain the models on them.
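
Before the chapter's own examples, here is one way to take such fine-tuned models for a spin, sketched with the Hugging Face pipeline API (the library and its default checkpoints are assumptions for illustration, not the book's exact setup):

```python
# Sketch of running two downstream tasks through ready-made pipelines.
# Assumes: pip install transformers torch. Each pipeline downloads a
# default fine-tuned checkpoint chosen by the library, not by the book.
from transformers import pipeline

# Sentiment analysis: an SST-2-style downstream classification task.
classifier = pipeline("sentiment-analysis")
print(classifier("The movie was surprisingly good."))

# Question answering: a SQuAD-style downstream extraction task.
qa = pipeline("question-answering")
print(qa(question="Who wrote the report?",
         context="The report was written by the audit team in 2021."))
```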

Models will evolve, as will databases, benchmark methods, accuracy measurement methods, and leaderboard criteria. But the structure of human thought reflected through the downstream tasks in this chapter...

Summary

This chapter analyzed the difference between the human language representation process and the way machine intelligence performs transduction. We saw that transformers must rely on the outputs of our incredibly complex thought processes expressed in written language. Language remains the most precise way to express a massive amount of information. The machine has no senses and must convert speech to text to extract meaning from raw datasets.

We then explored how to measure the performance of multi-task transformers. Transformers’ ability to obtain top-ranking results for downstream tasks is unique in NLP history. We went through the tough SuperGLUE tasks that brought transformers up to the top ranks of the GLUE and SuperGLUE leaderboards.

BoolQ, CB, WiC, and the many other tasks we covered are by no means easy to process, even for humans. We went through an example of several downstream tasks that show the difficulty transformer models face in proving their...

Questions

  1. Machine intelligence uses the same data as humans to make predictions. (True/False)
  2. SuperGLUE is more difficult than GLUE for NLP models. (True/False)
  3. BoolQ expects a binary answer. (True/False)
  4. WiC stands for Words in Context. (True/False)
  5. Recognizing Textual Entailment (RTE) detects whether one sequence entails another sequence. (True/False)
  6. A Winograd schema predicts whether a verb is spelled correctly. (True/False)
  7. Transformer models now occupy the top ranks of GLUE and SuperGLUE. (True/False)
  8. Human Baselines standards are not defined once and for all. They were made tougher to attain by SuperGLUE. (True/False)
  9. Transformer models will never beat SuperGLUE Human Baselines standards. (True/False)
  10. Variants of transformer models have outperformed RNN and CNN models. (True/False)

References

  • Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman, 2019, SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems: https://w4ngatang.github.io/static/papers/superglue.pdf
  • Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman, 2019, GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding: https://arxiv.org/abs/1804.07461
  • Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang, 2019, ERNIE 2.0: A Continual Pretraining Framework for Language Understanding: https://arxiv.org/pdf/1907.12412.pdf
  • Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon, 2011, Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning: https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF
  • Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher...