Training and performance

The Original Transformer was trained on a 4.5 million sentence-pair English-German dataset and a 36 million sentence-pair English-French dataset.

The datasets come from the Workshops on Machine Translation (WMT). If you wish to explore them, they are available at http://www.statmt.org/wmt14/.
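
If you want to take a quick look at the data in Python, the WMT 2014 corpora are also mirrored on the Hugging Face Hub. The following is a minimal sketch, assuming the datasets library is installed and the "wmt14" dataset identifier is available:

```python
# Minimal sketch: loading the WMT 2014 English-French corpus with Hugging Face Datasets.
# Assumes the `datasets` library is installed and the "wmt14" dataset exists on the Hub.
from datasets import load_dataset

# "fr-en" selects the English-French sentence pairs; streaming avoids downloading the full corpus
wmt14 = load_dataset("wmt14", "fr-en", split="train", streaming=True)

# Each record holds a {"translation": {"en": ..., "fr": ...}} dictionary
for i, example in enumerate(wmt14):
    print(example["translation"]["en"], "->", example["translation"]["fr"])
    if i == 2:  # show only the first three pairs
        break
```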

The training of the Original Transformer base models took 12 hours for 100,000 steps on a machine with 8 NVIDIA P100 GPUs. The big models took 3.5 days for 300,000 steps.

The Original Transformer outperformed all previous machine translation models, reaching a BLEU score of 41.8 on the WMT 2014 English-to-French dataset.

BLEU stands for Bilingual Evaluation Understudy. It is an algorithm that evaluates the quality of machine translation output by comparing it with reference translations.
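
To get a feel for the metric, here is a minimal sketch using the sacrebleu package (our choice of tool; the sentences are illustrative only):

```python
# Minimal sketch: scoring a translation hypothesis against a reference with sacreBLEU.
# Assumes the `sacrebleu` package is installed.
import sacrebleu

hypotheses = ["The cat sits on the mat."]          # system output, one string per sentence
references = [["The cat is sitting on the mat."]]  # one list of references per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")  # higher is better; 100 means a perfect n-gram match
```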

The Google Research and Google Brain team applied optimization strategies to improve the performance of the Transformer. For example, the Adam optimizer was used with a varying learning rate: the rate increases linearly over an initial number of warmup steps and then decreases in proportion to the inverse square root of the step number.
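
This schedule can be written as a small function. The sketch below uses the d_model=512 and warmup_steps=4000 values reported for the base model in the original paper:

```python
# Sketch of the learning-rate schedule used in the original Transformer paper:
# lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
def transformer_lr(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate rises linearly during warmup, peaks around step 4,000, then decays
print(transformer_lr(100), transformer_lr(4_000), transformer_lr(100_000))
```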

Several regularization techniques were applied, such as residual dropout on the output of each sub-layer and dropout on the sums of the embeddings and the positional encodings. The Transformer also applies label smoothing to avoid overfitting to overconfident one-hot outputs. Label smoothing makes the model less certain of each prediction, which hurts perplexity but forces the model to generalize better, improving accuracy and the BLEU score.
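
Both regularizers are available out of the box in modern frameworks. The following is a minimal PyTorch sketch, assuming the 0.1 dropout and label-smoothing rates reported for the base model (the tensors are illustrative placeholders):

```python
# Minimal sketch: the two regularizers mentioned above, expressed in PyTorch.
import torch
import torch.nn as nn

# Residual dropout (p=0.1 for the base model), applied here to the summed
# embeddings and positional encodings of an illustrative (batch, seq, d_model) tensor.
dropout = nn.Dropout(p=0.1)
embeddings_plus_positional = torch.randn(8, 10, 512)
regularized = dropout(embeddings_plus_positional)

# Label smoothing (epsilon = 0.1) softens the one-hot targets inside the loss.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
logits = torch.randn(8, 32000)            # illustrative (batch, vocab_size) logits
targets = torch.randint(0, 32000, (8,))   # illustrative target token ids
print(criterion(logits, targets).item())
```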

Variations of the Original Transformer have led to other models and usages that we will explore in the subsequent chapters.

Before the end of the chapter, let's get a feel for the simplicity of ready-to-use transformer models, using Hugging Face as an example.
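
As a preview of that simplicity, a translation pipeline can be created in a couple of lines with the transformers library. This sketch is an assumption of ours and not necessarily the exact example the chapter uses; the default model for the task is downloaded on first use:

```python
# Minimal sketch: a ready-to-use translation pipeline from Hugging Face Transformers.
# Assumes the `transformers` library is installed.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")  # English-to-French, as in the WMT results above
print(translator("The Transformer outperformed previous machine translation models."))
```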
