Summary

In this chapter, we explored the process of fine-tuning a transformer model by implementing the fine-tuning of a pretrained Hugging Face BERT model.

We began by analyzing the architecture of BERT, which uses only the encoder stack of the transformer and applies bidirectional attention. BERT was designed as a two-step framework: the first step is to pretrain a model, and the second is to fine-tune it. We then configured a BERT model for fine-tuning on an Acceptability Judgment downstream task and went through every phase of the fine-tuning process. We installed the Hugging Face transformers library and considered the hardware constraints, including selecting CUDA as the device for torch. We retrieved the CoLA dataset from GitHub and loaded the in-domain (training data) sentences, label lists, and BERT tokens. The training data was processed with the BERT tokenizer and other data preparation functions, including the attention masks...
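The setup described above can be condensed into a short sketch. This is not the chapter's exact notebook code: the file name, the column layout of the CoLA TSV, and the tokenizer settings are assumptions used only for illustration.

import torch
import pandas as pd
from transformers import BertTokenizer

# Select CUDA as the torch device when a GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the in-domain CoLA training sentences and acceptability labels
# (column names are assumed; adjust them to the file you downloaded).
df = pd.read_csv("in_domain_train.tsv", delimiter="\t", header=None,
                 names=["sentence_source", "label", "label_notes", "sentence"])
sentences = df.sentence.values
labels = torch.tensor(df.label.values)

# Tokenize with the pretrained BERT tokenizer; padding and truncation
# yield the attention masks mentioned in the summary.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
encodings = tokenizer(list(sentences), padding="max_length", truncation=True,
                      max_length=128, return_tensors="pt")
input_ids = encodings["input_ids"].to(device)
attention_masks = encodings["attention_mask"].to(device)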

Questions

  1. BERT stands for Bidirectional Encoder Representations from Transformers. (True/False)
  2. BERT is a two-step framework. Step 1 is pretraining. Step 2 is fine-tuning. (True/False)
  3. Fine-tuning a BERT model implies training parameters from scratch. (True/False)
  4. BERT only pretrains using all downstream tasks. (True/False)
  5. BERT pretrains with Masked Language Modeling (MLM). (True/False)
  6. BERT pretrains with Next Sentence Predictions (NSP). (True/False)
  7. BERT pretrains mathematical functions. (True/False)
  8. A question-answer task is a downstream task. (True/False)
  9. A BERT pretraining model does not require tokenization. (True/False)
  10. Fine-tuning a BERT model takes less time than pretraining. (True/False)

References

  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, 2017, Attention Is All You Need: https://arxiv.org/abs/1706.03762
  • Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman, 2018, Neural Network Acceptability Judgments: https://arxiv.org/abs/1805.12471
  • The Corpus of Linguistic Acceptability (CoLA): https://nyu-mll.github.io/CoLA/
  • Documentation on Hugging Face models:
    https://huggingface.co/transformers/pretrained_models.html
    https://huggingface.co/transformers/model_doc/bert.html
    https://huggingface.co/transformers/model_doc/roberta.html
    https://huggingface.co/transformers/model_doc/distilbert.html

Further Reading

  • Fine-Tuning Transformers: Vocabulary Transfer, Mosin et al. (2021): https://arxiv.org/abs/2112.14569
  • Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers, Tay et al. (2022): https://arxiv.org/abs/2109.10686

Join our book's Discord space

Join the book's Discord workspace: https://www.packt.link/Transformers


Summary

In this chapter, we built KantaiBERT, a RoBERTa-like transformer model, from scratch using the building blocks provided by Hugging Face.

We first started by loading a customized dataset on a specific topic related to the works of Immanuel Kant. Depending on your goals, you can load an existing dataset or create your own. We saw that using a customized dataset provides insights into how a transformer model thinks. However, this experimental approach has its limits. Training a model beyond educational purposes would take a much larger dataset.

The KantaiBERT project was used to train a tokenizer on the kant.txt dataset. The trained merges.txt and vocab.json files were saved. A tokenizer was recreated with our pretrained files. KantaiBERT built the customized dataset and defined a data collator to process the training batches for backpropagation. The trainer was initialized, and we explored the parameters of the RoBERTa model in detail. The model was trained and saved...
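That pipeline can be sketched in a few lines. This is a minimal illustration rather than the chapter's exact notebook: the paths (kant.txt, the KantaiBERT output directory), the configuration values, and the use of LineByLineTextDataset are assumptions for illustration.

import os
from tokenizers import ByteLevelBPETokenizer
from transformers import (RobertaConfig, RobertaTokenizerFast, RobertaForMaskedLM,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

# 1. Train a byte-level BPE tokenizer on the dataset; saving it writes
#    merges.txt and vocab.json to the output directory.
os.makedirs("KantaiBERT", exist_ok=True)
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["kant.txt"], vocab_size=52_000, min_frequency=2,
                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
tokenizer.save_model("KantaiBERT")

# 2. Recreate the tokenizer from the saved files and define a small RoBERTa model.
tokenizer = RobertaTokenizerFast.from_pretrained("KantaiBERT", model_max_length=512)
config = RobertaConfig(vocab_size=52_000, max_position_embeddings=514,
                       num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1)
model = RobertaForMaskedLM(config=config)

# 3. Build the dataset and a data collator that masks tokens for MLM training batches.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="kant.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                                mlm_probability=0.15)

# 4. Initialize the trainer, train, and save the model.
training_args = TrainingArguments(output_dir="KantaiBERT", overwrite_output_dir=True,
                                  num_train_epochs=1, per_device_train_batch_size=64)
trainer = Trainer(model=model, args=training_args,
                  data_collator=data_collator, train_dataset=dataset)
trainer.train()
trainer.save_model("KantaiBERT")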

Questions

  1. RoBERTa uses a byte-level byte-pair encoding tokenizer. (True/False)
  2. A trained Hugging Face tokenizer produces merges.txt and vocab.json. (True/False)
  3. RoBERTa does not use token-type IDs. (True/False)
  4. DistilBERT has 6 layers and 12 heads. (True/False)
  5. A transformer model with 80 million parameters is enormous. (True/False)
  6. We cannot train a tokenizer. (True/False)
  7. A BERT-like model has six decoder layers. (True/False)
  8. MLM predicts a word contained in a mask token in a sentence. (True/False)
  9. A BERT-like model has no self-attention sublayers. (True/False)
  10. Data collators are helpful for backpropagation. (True/False)

Further reading

  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, 2018, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: https://arxiv.org/abs/1810.04805
  • Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov, 2019, RoBERTa: A Robustly Optimized BERT Pretraining Approach: https://arxiv.org/abs/1907.11692

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://www.packt.link/Transformers


Author
Denis Rothman

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, designing one of the very first patented word2matrix embeddings and patented AI conversational agents. He began his career authoring one of the first AI cognitive Natural Language Processing (NLP) chatbots, applied as an automated language teacher for Moet et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers. He then authored an Advanced Planning and Scheduling (APS) solution used worldwide.