You're reading from Transformers for Natural Language Processing and Computer Vision - Third Edition (Packt, February 2024, ISBN-13: 9781805128724).

Author: Denis Rothman

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, designing one of the very first word2matrix patented embeddings and patented AI conversational agents. He began his career authoring one of the first AI cognitive Natural Language Processing (NLP) chatbots, applied as an automated language teacher for Moet et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers. He then authored an Advanced Planning and Scheduling (APS) solution used worldwide.
Pretraining a Generative AI Customer Support Model on X Data

In this section, we will pretrain a Hugging Face RobertaForCausalLM model to be a generative AI customer support chat agent for X (formerly Twitter). RoBERTa is an encoder-only model; as such, it is mainly designed to understand and encode inputs. In Chapter 2, Getting Started with the Architecture of the Transformer Model, we saw how the encoder learns to understand the input and then sends that information to the decoder, which generates content. In this section, however, we will use Hugging Face functionality to adapt a RoBERTa model to an autoregressive generative AI task. The experiment has limitations, but it shows the inner workings of content generation. The knowledge you acquired in this chapter by building KantaiBERT from scratch will enable you to enjoy the ride! The generative model and dataset are free, making the exercise particularly interesting. With some work, domain-specific generative AI agents can help companies...
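As a minimal sketch of the adaptation step, the snippet below loads a RoBERTa checkpoint as a causal language model. The checkpoint name roberta-base and the example prompt are assumptions chosen for illustration, not necessarily the exact configuration used in the chapter.

```python
from transformers import RobertaConfig, RobertaTokenizer, RobertaForCausalLM

# Assumption: we start from the public roberta-base checkpoint.
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True  # switch the encoder to causal (left-to-right) attention

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)

# The model can now be fine-tuned on conversational data and used autoregressively.
inputs = tokenizer("Hello, I need help with my order", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch size, sequence length, vocabulary size)
```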

Next steps

You have trained two transformers from scratch. Take some time to imagine what you could do in your personal or corporate environment. You could create a dataset for a specific task and train a model on it from scratch. Many other Hugging Face models are available for training: the BERT family, GPT models, T5, and more!

Use your areas of interest or company projects to experiment with the fascinating world of transformer construction kits!

Once you have built a model you like, you can share it with the Hugging Face community. Your model will appear on the Hugging Face models page: https://huggingface.co/models

You can upload your model in a few steps using the instructions described on this page: https://huggingface.co/transformers/model_sharing.html

You can also download models the Hugging Face community has shared to get new ideas for your personal and professional projects.
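As a rough sketch of the sharing step: the repository name KantaiBERT and the local directory ./KantaiBERT below are hypothetical, and you must first authenticate with huggingface-cli login.

```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

# Assumption: the trained model and tokenizer were saved to ./KantaiBERT.
model = RobertaForMaskedLM.from_pretrained("./KantaiBERT")
tokenizer = RobertaTokenizerFast.from_pretrained("./KantaiBERT")

# Push both to a Hub repository named "KantaiBERT" under your account.
model.push_to_hub("KantaiBERT")
tokenizer.push_to_hub("KantaiBERT")
```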

Summary

In this chapter, we built KantaiBERT, a RoBERTa-like transformer model, from scratch using the building blocks provided by Hugging Face.

We started by loading a customized dataset on a specific topic related to the works of Immanuel Kant. Depending on your goals, you can load an existing dataset or create your own. We saw that using a customized dataset provides insights into how a transformer model thinks. However, this experimental approach has its limits: training a model beyond educational purposes would require a much larger dataset.

The KantaiBERT project was used to train a tokenizer on the kant.txt dataset. The trained merges.txt and vocab.json files were saved, and a tokenizer was recreated from our pretrained files. KantaiBERT built the customized dataset and defined a data collator to process the training batches for backpropagation. The trainer was initialized, and we explored the parameters of the RoBERTa model in detail. The model was trained and saved. We saved the...
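For reference, a minimal sketch of the tokenizer-training and data-collator steps summarized above follows. The vocabulary size, special tokens, and output directory are assumptions chosen for illustration, not necessarily the exact values used in the chapter.

```python
import os
from tokenizers import ByteLevelBPETokenizer
from transformers import RobertaTokenizerFast, DataCollatorForLanguageModeling

# Train a byte-level BPE tokenizer on the Kant dataset (vocab size is an assumption).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["kant.txt"],
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
os.makedirs("KantaiBERT", exist_ok=True)
tokenizer.save_model("KantaiBERT")  # writes merges.txt and vocab.json

# Recreate a fast tokenizer from the saved files and build an MLM data collator.
roberta_tokenizer = RobertaTokenizerFast.from_pretrained("KantaiBERT", model_max_length=512)
data_collator = DataCollatorForLanguageModeling(
    tokenizer=roberta_tokenizer,
    mlm=True,
    mlm_probability=0.15,  # mask 15% of tokens for masked language modeling
)
```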

Questions

  1. RoBERTa uses a byte-level byte-pair encoding tokenizer. (True/False)
  2. A trained Hugging Face tokenizer produces merges.txt and vocab.json. (True/False)
  3. RoBERTa does not use token-type IDs. (True/False)
  4. DistilBERT has 6 layers and 12 heads. (True/False)
  5. A transformer model with 80 million parameters is enormous. (True/False)
  6. We cannot train a tokenizer. (True/False)
  7. A BERT-like model has 6 decoder layers. (True/False)
  8. Masked Language Modeling (MLM) predicts a word contained in a mask token in a sentence. (True/False)
  9. A BERT-like model has no self-attention sublayers. (True/False)
  10. Data collators are helpful for backpropagation. (True/False)

Further Reading

Join our book’s Discord space

Join the book’s Discord workspace: https://www.packt.link/Transformers


Questions

  1. A zero-shot method trains the parameters once. (True/False)
  2. Gradient updates are performed when running zero-shot models. (True/False)
  3. GPT models only have a decoder stack. (True/False)
  4. OpenAI GPT models are not GPTs. (True/False)
  5. The diffusion of generative transformer models is very slow in everyday applications. (True/False)
  6. GPT-3 models have been useless since GPT-4 was made public. (True/False)
  7. ChatGPT models are not completion models. (True/False)
  8. Gradio is a transformer model. (True/False)
  9. Supercomputers with 285,000 CPUs do not exist. (True/False)
  10. Supercomputers with thousands of GPUs are game changers in AI. (True/False)

References

Further Reading

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://www.packt.link/Transformers

