
Getting Started with Google BERT

By Sudharsan Ravichandiran
  1. Free Chapter
    A Primer on Transformers
About this book
BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through MBERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT. By the end of this BERT book, you'll be well-versed in using BERT and its variants for performing practical NLP tasks.
Publication date: January 2021
Publisher: Packt
Pages: 352
ISBN: 9781838821593

 
A Primer on Transformers

The transformer is one of the most popular state-of-the-art deep learning architectures, used mostly for natural language processing (NLP) tasks. Ever since its advent, the transformer has replaced RNNs and LSTMs for various tasks. Several new NLP models, such as BERT, GPT, and T5, are based on the transformer architecture. In this chapter, we will look into the transformer in detail and understand how it works.

We will begin the chapter by getting a basic idea of the transformer. Then, we will learn how the transformer uses an encoder-decoder architecture for a language translation task. Following this, we will inspect how the encoder of the transformer works in detail by exploring each of its components. After understanding the encoder, we will dive deep into the decoder and look into each of the decoder components in detail. At the...

 

Introduction to the transformer

RNN and LSTM networks are widely used in sequential tasks such as next-word prediction, machine translation, text generation, and more. However, one of the major challenges with these recurrent models is capturing long-term dependencies.

To overcome this limitation of RNNs, a new architecture called the Transformer was introduced in the paper Attention Is All You Need. The transformer is currently the state-of-the-art model for several NLP tasks. Its advent created a major breakthrough in the field of NLP and also paved the way for new revolutionary architectures such as BERT, GPT-3, T5, and more.

The transformer model is based entirely on the attention mechanism and completely gets rid of recurrence. The transformer uses a special type of attention mechanism called self-attention. We will learn about this in detail in the upcoming sections.
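
To make this concrete, here is a minimal sketch of scaled dot-product self-attention, assuming Python with PyTorch (the library used throughout this book); the tensor sizes and the helper name self_attention are illustrative, not taken from the paper:

    import torch
    import torch.nn.functional as F

    def self_attention(x, W_q, W_k, W_v):
        # x: (seq_len, d_model) input embeddings;
        # W_q, W_k, W_v: (d_model, d_k) projection matrices.
        Q = x @ W_q                    # query matrix
        K = x @ W_k                    # key matrix
        V = x @ W_v                    # value matrix
        d_k = K.size(-1)
        scores = Q @ K.T / d_k ** 0.5  # how strongly each word attends to every word
        weights = F.softmax(scores, dim=-1)
        return weights @ V             # attention-weighted representation

    # Toy example: 4 tokens, d_model = d_k = 8 (sizes chosen arbitrarily).
    torch.manual_seed(0)
    x = torch.randn(4, 8)
    W_q, W_k, W_v = torch.randn(8, 8), torch.randn(8, 8), torch.randn(8, 8)
    z = self_attention(x, W_q, W_k, W_v)
    print(z.shape)  # torch.Size([4, 8])

Dividing the scores by the square root of d_k keeps the dot products from growing too large before the softmax is applied.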

Let's understand how the transformer works with a language translation task. The transformer...

 

Understanding the encoder of the transformer

The transformer consists of a stack of N encoders. The output of one encoder is sent as input to the encoder above it. As shown in the following figure, we have a stack of N encoders. Each encoder sends its output to the encoder above it. The final encoder returns the representation of the given source sentence as output. We feed the source sentence as input to the encoder and get the representation of the source sentence as output:

Figure 1.2 – A stack of N encoders

Note that in the transformer paper Attention Is All You Need, the authors use N = 6, meaning that they stack up six encoders one above the other. However, we can try out different values of N. For simplicity and better understanding, let's keep N = 2:

Figure 1.3 – A stack of encoders
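
As a rough sketch of this stacking, PyTorch's built-in encoder modules can be composed in the same way; d_model = 512 and nhead = 8 are the base values from the paper, and this is not the book's own implementation:

    import torch
    import torch.nn as nn

    # A stack of N = 2 encoders; each layer contains multi-head
    # self-attention followed by a feedforward sublayer.
    N = 2
    encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
    encoder = nn.TransformerEncoder(encoder_layer, num_layers=N)

    # A toy source sentence of 3 tokens (batch size 1), already embedded.
    src = torch.randn(3, 1, 512)
    representation = encoder(src)  # output of the final (topmost) encoder
    print(representation.shape)    # torch.Size([3, 1, 512])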

Okay, the question is how exactly does the encoder work? How is it generating the representation for the given source sentence (input sentence)...

 

Understanding the decoder of a transformer

Suppose we want to translate the English sentence (source sentence) I am good to the French sentence (target sentence) Je vais bien. To perform this translation, we feed the source sentence I am good to the encoder. The encoder learns the representation of the source sentence. In the previous section, we learned how exactly the encoder learns the representation of the source sentence. Now, we take this encoder's representation and feed it to the decoder. The decoder takes the encoder representation as input and generates the target sentence Je vais bien, as shown in the following figure:

Figure 1.35 – Encoder and decoder of the transformer

In the encoder section, we learned that, instead of having one encoder, we can have a stack of encoders. Similar to the encoder, we can also have a stack of decoders. For simplicity, let's set N = 2. As shown in the following figure, the output of one decoder is sent as the input to the decoder...
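
Continuing the PyTorch sketch from the encoder section, a stack of N = 2 decoders that consumes the encoder's representation might look as follows; all shapes are illustrative:

    import torch
    import torch.nn as nn

    N = 2
    decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
    decoder = nn.TransformerDecoder(decoder_layer, num_layers=N)

    memory = torch.randn(3, 1, 512)  # encoder's representation of "I am good"
    tgt = torch.randn(4, 1, 512)     # embeddings of the target tokens generated so far
    out = decoder(tgt, memory)       # attends over both the target and the encoder output
    print(out.shape)                 # torch.Size([4, 1, 512])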

 

Putting the encoder and decoder together

To give more clarity, the complete transformer architecture with the encoder and decoder is shown in the following figure:

Figure 1.63 – Encoder and decoder of the transformer

In the preceding figure, Nx denotes that we can stack N encoders and decoders. As we can observe, once we feed the input sentence (source sentence), the encoder learns the representation and sends the representation to the decoder, which in turn generates the output sentence (target sentence).
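
For illustration, PyTorch's nn.Transformer wires the same encoder and decoder stacks together; num_encoder_layers and num_decoder_layers play the role of N (6 in the paper), and the toy tensors below are stand-ins for embedded sentences:

    import torch
    import torch.nn as nn

    model = nn.Transformer(d_model=512, nhead=8,
                           num_encoder_layers=6, num_decoder_layers=6)

    src = torch.randn(3, 1, 512)  # embedded source sentence, e.g. "I am good"
    tgt = torch.randn(4, 1, 512)  # embedded target sentence, e.g. "<sos> Je vais bien"
    out = model(src, tgt)         # decoder output for each target position
    print(out.shape)              # torch.Size([4, 1, 512])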

 

Training the transformer

We can train the transformer network by minimizing the loss function. Okay, but what loss function should we use? We learned that the decoder predicts the probability distribution over the vocabulary and we select the word that has the highest probability as output. So, we have to minimize the difference between the predicted probability distribution and the actual probability distribution. How can we find the difference between the two distributions? We can use cross-entropy for that. Thus, we define our loss function as a cross-entropy loss and train the network by minimizing it, using Adam as the optimizer.
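
A minimal training sketch, assuming PyTorch and a single linear layer as a stand-in for the full transformer; the vocabulary size and target indices are made up for illustration:

    import torch
    import torch.nn as nn

    vocab_size, d_model, seq_len = 10000, 512, 4

    # Stand-in for the network: projects decoder output vectors to
    # one score per vocabulary word at each target position.
    model = nn.Linear(d_model, vocab_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()  # softmax + cross-entropy in one step

    decoder_out = torch.randn(seq_len, d_model)  # decoder representations
    targets = torch.tensor([5, 42, 7, 99])       # indices of the actual words

    logits = model(decoder_out)      # predicted distribution over the vocabulary
    loss = loss_fn(logits, targets)  # difference between predicted and actual

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()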

One additional point to note is that, to prevent overfitting, we apply dropout to the output of each sublayer and also to the sum of the embeddings and the positional encoding.

Thus, in this chapter, we learned...

 

Summary

We started off the chapter by understanding what the transformer model is and how it uses encoder-decoder architecture. We looked into the encoder section of the transformer and learned about different sublayers used in encoders, such as multi-head attention and feedforward networks.

We learned that the self-attention mechanism relates a word to all the words in the sentence to better understand the word. To compute self-attention, we used three different matrices, called the query, key, and value matrices. Following this, we learned how to compute positional encoding and how it is used to capture the word order in a sentence. Next, we learned how the feedforward network works in the encoder and then we explored the add and norm component.

After understanding the encoder, we understood how the decoder works. We explored three sublayers used in the decoder in detail, which are the masked multi-head attention, encoder-decoder attention, and feedforward network. Following this...

 

Questions

Let's put our newly acquired knowledge to the test. Try answering the following questions:

  1. What are the steps involved in the self-attention mechanism?
  2. What is scaled dot product attention?
  3. How do we create the query, key, and value matrices?
  4. Why do we need positional encoding?
  5. What are the sublayers of the decoder?
  6. What are the inputs to the encoder-decoder attention layer of the decoder?
 

Further reading

About the Author
  • Sudharsan Ravichandiran

    Sudharsan Ravichandiran is a data scientist and artificial intelligence enthusiast. He holds a Bachelor's degree in Information Technology from Anna University. His research focuses on practical implementations of deep learning and reinforcement learning, including natural language processing and computer vision. He is an open-source contributor and loves answering questions on Stack Overflow.
