Questions

  1. Machine translation has now exceeded human baselines. (True/False)
  2. Machine translation requires large datasets. (True/False)
  3. There is no need to compare transformer models using the same datasets. (True/False)
  4. BLEU is the French word for blue and is the acronym of an NLP metric. (True/False)
  5. Smoothing techniques enhance BERT. (True/False)
  6. German-English is the same as English-German for machine translation. (True/False)
  7. The original Transformer multi-head attention sub-layer has 2 heads. (True/False)
  8. The original Transformer encoder has 6 layers. (True/False)
  9. The original Transformer encoder has 6 layers but only 2 decoder layers. (True/False)
  10. You can train transformers without decoders. (True/False)

References

  • English-German BLEU scores with reference papers and code: https://paperswithcode.com/sota/machine-translation-on-wmt2014-english-german
  • The 2014 Workshop on Machine Translation (WMT): https://www.statmt.org/wmt14/translation-task.html
  • European Parliament Proceedings Parallel Corpus 1996-2011, parallel corpus French-English: https://www.statmt.org/europarl/v7/fr-en.tgz
  • Jason Brownlee, Ph.D., How to Prepare a French-to-English Dataset for Machine Translation: https://machinelearningmastery.com/prepare-french-english-dataset-machine-translation/
  • Jason Brownlee, Ph.D., A Gentle Introduction to Calculating the BLEU Score for Text in Python: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
  • Boxing Chen and Colin Cherry, 2014, A Systematic Comparison of Smoothing Techniques for Sentence-Level BLEU: http://acl2014.org/acl2014/W14-33/pdf/W14-3346.pdf
  • Trax repository: https://github.com/google/trax
  • Trax tutorial: https://trax-ml.readthedocs.io/en/latest/

Further reading

  • Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, 2002, BLEU: a Method for Automatic Evaluation of Machine Translation: https://aclanthology.org/P02-1040.pdf
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, 2017, Attention Is All You Need: https://arxiv.org/abs/1706.03762

Building a Python interface to interact with the model

In this section, we will first save the model and then build an interface to interact with our trained model.

Let’s first save the model, should we choose to do so.

Saving the model

The following code will save the model’s files:

import torch

# Specify a directory to save your model and tokenizer
save_directory = "/content/model"
# If your model is wrapped in DataParallel, access the original model
# through .module before saving
if isinstance(model, torch.nn.DataParallel):
    model.module.save_pretrained(save_directory)
else:
    model.save_pretrained(save_directory)
# Save the tokenizer
tokenizer.save_pretrained(save_directory)

The saved /content/model directory contains:

  • tokenizer_config.json: Configuration details specific to the tokenizer.
  • special_tokens_map.json: Mappings for any special tokens.
  • vocab.txt: The vocabulary of tokens that the tokenizer can recognize.
  • added_tokens...
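
With these files in place, a simple way to interact with the trained model is to reload it and run a sentence through it. The following is a minimal sketch, assuming the model was fine-tuned for single-sentence classification (such as the CoLA acceptability judgments of this chapter); the example sentence and the label interpretation are illustrative assumptions, not part of the saved model.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Reload the tokenizer and the fine-tuned model from the saved directory
save_directory = "/content/model"
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = AutoModelForSequenceClassification.from_pretrained(save_directory)
model.eval()

# Classify a single sentence (illustrative input)
sentence = "The book was read by the student."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
prediction = int(torch.argmax(logits, dim=-1))
print(prediction)  # for CoLA-style fine-tuning: 1 = acceptable, 0 = unacceptable

An interactive interface can then wrap this reload-and-predict step in a loop or a simple web widget that feeds each user input to the model.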

Summary

In this chapter, we explored the process of fine-tuning a transformer model by fine-tuning a pretrained Hugging Face BERT model.

We began by analyzing the architecture of BERT, which uses only the encoder stack of the original Transformer and relies on bidirectional attention. BERT was designed as a two-step framework: the first step is to pretrain a model, and the second step is to fine-tune it.

We then configured a BERT model for fine-tuning on an acceptability judgment downstream task and went through every phase of the fine-tuning process.

We installed the Hugging Face Transformers library and considered the hardware constraints, including selecting CUDA as the device for torch. We retrieved the CoLA dataset from GitHub and loaded the in-domain (training) sentences, label lists, and BERT tokens.

The training data was processed with the BERT tokenizer and other data preparation functions, including...
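
To make this recap concrete, here is a minimal, illustrative sketch of the kind of device selection and tokenization described above; the example sentences, labels, and the bert-base-uncased checkpoint are assumptions for illustration, not the chapter’s exact pipeline.

import torch
from transformers import BertTokenizer

# Select CUDA as the device for torch if it is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative in-domain sentences and acceptability labels (not the actual CoLA data)
sentences = ["We yelled ourselves hoarse.", "We yelled ourselves."]
labels = [1, 0]

# Tokenize with padding, truncation, and attention masks
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encodings = tokenizer(sentences, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
input_ids = encodings["input_ids"].to(device)
attention_mask = encodings["attention_mask"].to(device)
labels = torch.tensor(labels).to(device)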

Questions

  1. BERT stands for Bidirectional Encoder Representations from Transformers. (True/False)
  2. BERT is a two-step framework. Step 1 is pretraining. Step 2 is fine-tuning. (True/False)
  3. Fine-tuning a BERT model implies training parameters from scratch. (True/False)
  4. BERT only pretrains using all downstream tasks. (True/False)
  5. BERT pretrains with MLM. (True/False)
  6. BERT pretrains with NSP. (True/False)
  7. BERT pretrains on mathematical functions. (True/False)
  8. A question-answer task is a downstream task. (True/False)
  9. A BERT pretraining model does not require tokenization. (True/False)
  10. Fine-tuning a BERT model takes less time than pretraining. (True/False)

Further reading

  • Vladislav Mosin, Igor Samenko, Alexey Tikhonov, Borislav Kozlovskii, and Ivan P. Yamshchikov, 2021, Fine-Tuning Transformers: Vocabulary Transfer: https://arxiv.org/abs/2112.14569
  • Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler, 2022, Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers: https://arxiv.org/abs/2109.10686

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://www.packt.link/Transformers
