Deep Learning with TensorFlow 2 and Keras - Second Edition

Product type: Book
Published: December 2019
Publisher: Packt
ISBN-13: 9781838823412
Pages: 646
Edition: 2nd
Authors (3): Antonio Gulli, Amita Kapoor, Sujit Pal

Table of Contents (19 chapters)

1. Preface
2. Neural Network Foundations with TensorFlow 2.0
3. TensorFlow 1.x and 2.x
4. Regression
5. Convolutional Neural Networks
6. Advanced Convolutional Neural Networks
7. Generative Adversarial Networks
8. Word Embeddings
9. Recurrent Neural Networks
10. Autoencoders
11. Unsupervised Learning
12. Reinforcement Learning
13. TensorFlow and Cloud
14. TensorFlow for Mobile and IoT and TensorFlow.js
15. An introduction to AutoML
16. The Math Behind Deep Learning
17. Tensor Processing Unit
18. Other Books You May Enjoy
19. Index

Transformer architecture

Even though the transformer architecture differs from recurrent networks, it builds on many ideas that originated in them. It represents the next evolutionary step for deep learning architectures that work with text and, as such, should be an essential part of your toolbox. The transformer architecture is a variant of the Encoder-Decoder architecture in which the recurrent layers have been replaced with Attention layers. It was proposed by Vaswani et al. [30], who also provided a reference implementation, which we will refer to throughout this discussion.
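To make the replacement of recurrence with attention concrete, the following is a minimal sketch of scaled dot-product attention, the core operation of the transformer described in [30]. It is an illustrative, single-head version without masking; the function name and simplified shapes are ours, not taken from the reference implementation.

```python
import tensorflow as tf

def scaled_dot_product_attention(query, key, value):
    # query, key, value: tensors of shape (batch, seq_len, d_model).
    d_k = tf.cast(tf.shape(key)[-1], tf.float32)
    # Similarity of every query position with every key position,
    # scaled by sqrt(d_k) to keep the softmax in a well-behaved range.
    scores = tf.matmul(query, key, transpose_b=True) / tf.math.sqrt(d_k)
    weights = tf.nn.softmax(scores, axis=-1)  # attention distribution
    return tf.matmul(weights, value)          # weighted sum of values
```

In the encoder, query, key, and value are all derived from the same sequence (self-attention); in the decoder's cross-attention, the queries come from the target side while the keys and values come from the encoder output.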

Figure 7 shows a seq2seq network with attention side by side with a transformer network. The transformer is similar to the seq2seq with Attention model in the following ways (a code sketch of an encoder block follows the list):

  1. Both the source and the target are sequences.
  2. The output of the last block of the encoder is used as the context, or thought vector, for computing the Attention model on the decoder side.
  3. The target sequences...
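To ground the comparison, here is a minimal sketch of a single transformer encoder block in tf.keras, with the recurrent layer replaced by self-attention. It is a simplified single-head version; the class name, the default dimensions, and the omission of dropout, masking, and positional encodings are our illustrative choices, not the reference implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

class EncoderBlock(layers.Layer):
    """One transformer encoder block: self-attention plus a feed-forward
    network, each wrapped in a residual connection and layer normalization."""

    def __init__(self, d_model=512, d_ff=2048):
        super().__init__()
        self.wq = layers.Dense(d_model)  # query projection
        self.wk = layers.Dense(d_model)  # key projection
        self.wv = layers.Dense(d_model)  # value projection
        self.ffn = tf.keras.Sequential(
            [layers.Dense(d_ff, activation="relu"), layers.Dense(d_model)]
        )
        self.norm1 = layers.LayerNormalization()
        self.norm2 = layers.LayerNormalization()

    def call(self, x):
        # Single-head self-attention: queries, keys, and values
        # are all projections of the same input sequence x.
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        d_k = tf.cast(tf.shape(k)[-1], tf.float32)
        scores = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(d_k)
        attn = tf.matmul(tf.nn.softmax(scores, axis=-1), v)
        x = self.norm1(x + attn)             # residual + layer norm
        return self.norm2(x + self.ffn(x))   # feed-forward sublayer
```

For example, `EncoderBlock()(tf.random.uniform((2, 10, 512)))` returns a tensor of the same shape, `(2, 10, 512)`; stacking several such blocks yields the encoder half of the transformer.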