Advanced Deep Learning with TensorFlow 2 and Keras - Second Edition

Product type: Book
Published: February 2020
Publisher: Packt
ISBN-13: 9781838821654
Pages: 512
Edition: 2nd
Author: Rowel Atienza

Table of Contents (16 chapters)

Preface
1. Introducing Advanced Deep Learning with Keras
2. Deep Neural Networks
3. Autoencoders
4. Generative Adversarial Networks (GANs)
5. Improved GANs
6. Disentangled Representation GANs
7. Cross-Domain GANs
8. Variational Autoencoders (VAEs)
9. Deep Reinforcement Learning
10. Policy Gradient Methods
11. Object Detection
12. Semantic Segmentation
13. Unsupervised Learning Using Mutual Information
14. Other Books You May Enjoy
15. Index

What this book covers

Chapter 1, Introducing Advanced Deep Learning with Keras, covers the key concepts of deep learning, such as optimization, regularization, loss functions, fundamental layers, and networks, and their implementation in tf.keras. This chapter serves as a review of both deep learning and tf.keras using the sequential API.
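
For a flavor of the sequential API the chapter reviews, here is a minimal sketch (not taken from the book) of an MLP classifier; the layer widths and dropout rate are illustrative only.

```python
# A minimal tf.keras sequential-API sketch: a small MLP classifier with
# dropout regularization and a softmax output. Sizes are illustrative.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(256, activation='relu', input_shape=(784,)),
    layers.Dropout(0.45),                    # regularization
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.45),
    layers.Dense(10, activation='softmax'),  # 10-class output
])
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
```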

Chapter 2, Deep Neural Networks, discusses the functional API of tf.keras. Two widely used deep network architectures, ResNet and DenseNet, are examined and implemented in tf.keras using the functional API.
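
To illustrate the functional API, here is a minimal sketch of a single residual block, the building block behind ResNet; the shapes and filter counts are illustrative, not the book's implementation.

```python
# A toy residual block in the tf.keras functional API: two convolutions plus a
# skip connection added back to the input.
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(32, 32, 16))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
x = layers.Conv2D(16, 3, padding='same')(x)
x = layers.Add()([inputs, x])           # the skip connection at the heart of ResNet
outputs = layers.Activation('relu')(x)

block = Model(inputs, outputs, name='residual_block')
block.summary()
```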

Chapter 3, Autoencoders, covers a common network structure called the autoencoder, which is used to discover the latent representation of input data. Two example applications of autoencoders, denoising and colorization, are discussed and implemented in tf.keras.
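
As a rough illustration of the idea, here is a minimal fully connected autoencoder sketch; the book's denoising and colorization examples use convolutional encoders and decoders instead.

```python
# A minimal autoencoder sketch: an encoder compresses the input to a small
# latent vector, and a decoder reconstructs the input from it. Sizes are
# illustrative; training simply minimizes reconstruction error.
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(784,))
latent = layers.Dense(16, activation='relu', name='latent')(inputs)   # encoder
outputs = layers.Dense(784, activation='sigmoid')(latent)             # decoder

autoencoder = Model(inputs, outputs, name='autoencoder')
autoencoder.compile(loss='mse', optimizer='adam')
# For denoising, fit corrupted inputs against clean targets, e.g.:
# autoencoder.fit(x_noisy, x_clean, epochs=10, batch_size=128)
```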

Chapter 4, Generative Adversarial Networks (GANs), discusses one of the most significant recent advances in deep learning. GANs are used to generate new synthetic data that appears real. This chapter explains the principles of GANs. Two examples, DCGAN and CGAN, are examined and implemented in tf.keras.
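
The adversarial setup can be summarized in a short sketch (illustrative layer sizes, not the book's DCGAN): the discriminator learns to separate real from fake, while the generator is trained through a frozen discriminator to make its fakes look real.

```python
# A conceptual GAN sketch: the discriminator classifies real vs. fake, and the
# adversarial model (generator + frozen discriminator) trains the generator to
# fool it. Layer sizes are illustrative only.
from tensorflow.keras import layers, Model, Sequential

latent_dim = 64

generator = Sequential([
    layers.Dense(128, activation='relu', input_shape=(latent_dim,)),
    layers.Dense(784, activation='sigmoid'),
], name='generator')

discriminator = Sequential([
    layers.Dense(128, activation='relu', input_shape=(784,)),
    layers.Dense(1, activation='sigmoid'),
], name='discriminator')
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# The adversarial model: noise -> generator -> (frozen) discriminator.
discriminator.trainable = False
z = layers.Input(shape=(latent_dim,))
adversarial = Model(z, discriminator(generator(z)), name='adversarial')
adversarial.compile(loss='binary_crossentropy', optimizer='adam')
```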

Chapter 5, Improved GANs, covers algorithms that improve the basic GAN. The algorithms address the difficulty in training GANs and improve the perceptual quality of synthetic data. WGAN, LSGAN, and ACGAN are discussed and implemented in tf.keras.
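
As one small example of these improvements, a sketch of the Wasserstein loss used by WGAN is shown below; the sign convention here (labels of +1 for real and -1 for fake) is one common choice, not necessarily the book's.

```python
# A minimal Wasserstein-loss sketch for WGAN. With labels of +1 (real) and
# -1 (fake), minimizing -label * score pushes real and fake critic scores apart.
from tensorflow.keras import backend as K

def wasserstein_loss(y_label, y_pred):
    return -K.mean(y_label * y_pred)
```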

Chapter 6, Disentangled Representation GANs, discusses how to control the attributes of the synthetic data generated by GANs. The attributes can be controlled if the latent representations are disentangled. Two techniques in disentangling representations, InfoGAN and StackedGAN, are covered and implemented in tf.keras.
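
A rough sketch of the InfoGAN idea follows: the generator input is split into unstructured noise and interpretable codes, and training maximizes the mutual information between the codes and the generated output so that the codes end up controlling attributes. The dimensions and code interpretations below are the typical MNIST choices, used here only for illustration.

```python
# An illustrative InfoGAN-style generator input: entangled noise concatenated
# with a discrete code and a couple of continuous codes.
import numpy as np

batch = 16
z_noise = np.random.uniform(-1.0, 1.0, size=(batch, 62))     # entangled noise
code_class = np.eye(10)[np.random.randint(0, 10, batch)]     # one-hot discrete code
code_cont = np.random.normal(scale=0.5, size=(batch, 2))     # continuous codes
generator_input = np.concatenate([z_noise, code_class, code_cont], axis=1)
# After training, varying only the codes changes the corresponding attributes
# (e.g., digit identity or writing style) of the generated images.
```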

Chapter 7, Cross-Domain GANs, covers a practical application of GAN, translating images from one domain to another, commonly known as cross-domain transfer. CycleGAN, a widely used cross-domain GAN, is discussed and implemented in tf.keras. This chapter demonstrates CycleGAN performing colorization and style transfer.
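
The key constraint that lets CycleGAN learn without paired data is cycle consistency; a minimal sketch of that loss term is shown below (the weight is a typical value, not necessarily the book's).

```python
# A minimal cycle-consistency sketch: an image translated to the target domain
# and back should reconstruct the original, enforced with an L1 penalty.
import tensorflow as tf

def cycle_consistency_loss(real_x, reconstructed_x, weight=10.0):
    return weight * tf.reduce_mean(tf.abs(real_x - reconstructed_x))
```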

Chapter 8, Variational Autoencoders (VAEs), discusses another important topic in DL. Like a GAN, a VAE is a generative model that is used to produce synthetic data. Unlike a GAN, a VAE focuses on a decodable, continuous latent space that is suitable for variational inference. VAE and its variations, CVAE and β-VAE, are covered and implemented in tf.keras.
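
Two pieces of the VAE that the chapter builds on can be sketched briefly: the reparameterization trick that keeps sampling differentiable, and the KL divergence term that regularizes the latent space. This is a minimal illustration, not the book's code.

```python
# A minimal VAE sketch: reparameterized sampling z = mu + sigma * eps, and the
# KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
import tensorflow as tf

def sample_z(z_mean, z_log_var):
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

def kl_loss(z_mean, z_log_var):
    return -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
```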

Chapter 9, Deep Reinforcement Learning, explains the principles of reinforcement learning and Q-learning. Two techniques for implementing Q-learning on a discrete action space are presented: the Q-table update and Deep Q-Networks (DQNs). Implementations of Q-learning in Python and of DQN in tf.keras are demonstrated in OpenAI Gym environments.
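
The tabular variant can be sketched in a few lines; the grid size, learning rate, and discount factor below are illustrative, not the book's settings.

```python
# A minimal Q-table update sketch for a discrete state/action space:
# Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

n_states, n_actions = 16, 4
q_table = np.zeros((n_states, n_actions))
gamma, lr = 0.9, 0.1   # discount factor and learning rate (illustrative)

def q_update(state, action, reward, next_state):
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += lr * (target - q_table[state, action])
```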

Chapter 10, Policy Gradient Methods, explains how to use neural networks to learn the policy for decision making in reinforcement learning. Four methods are covered and implemented in tf.keras and OpenAI Gym environments: REINFORCE, REINFORCE with Baseline, Actor-Critic, and Advantage Actor-Critic. The example presented in this chapter demonstrates policy gradient methods on a continuous action space.
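
The core REINFORCE update can be sketched as below. It assumes a hypothetical policy_net whose output exposes a log_prob() method (for example, a tensorflow_probability distribution); the book's implementations wrap this logic in tf.keras models.

```python
# A conceptual REINFORCE step: scale the log-probability of each sampled action
# by the discounted return that followed it, and ascend that objective.
import tensorflow as tf

def reinforce_step(policy_net, optimizer, states, actions, returns):
    with tf.GradientTape() as tape:
        # Assumes policy_net(states) returns a distribution-like object with
        # a log_prob() method (e.g., from tensorflow_probability).
        log_probs = policy_net(states).log_prob(actions)
        loss = -tf.reduce_mean(log_probs * returns)   # gradient ascent on return
    grads = tape.gradient(loss, policy_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy_net.trainable_variables))
    return loss
```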

Chapter 11, Object Detection, discusses one of the most common applications of computer vision: object detection, or identifying and localizing objects in an image. Key concepts of a multi-scale object detection algorithm called SSD are covered, and an implementation is built step by step using tf.keras. An example technique for dataset collection and labeling is presented. Afterward, the tf.keras implementation of SSD is trained and evaluated using the dataset.
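
One building block that any SSD-style detector relies on is intersection-over-union (IoU), the overlap measure used to match anchor boxes to ground-truth boxes; a minimal sketch is shown below.

```python
# A minimal IoU sketch for corner-format boxes (x_min, y_min, x_max, y_max).
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```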

Chapter 12, Semantic Segmentation, discusses another common application of computer vision: semantic segmentation, or identifying the object class of each pixel in an image. Principles of segmentation are discussed first, and then semantic segmentation is covered in more detail. An example implementation of a semantic segmentation algorithm called FCN is built and evaluated using tf.keras. The same dataset collected in the previous chapter is used, but relabeled for semantic segmentation.
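
Viewed abstractly, semantic segmentation is per-pixel classification; the short sketch below (with illustrative shapes, not the book's FCN) shows a pixel-wise categorical cross-entropy applied to a per-pixel class-probability map.

```python
# A conceptual per-pixel classification sketch: predictions are class
# probabilities per pixel, targets are class indices per pixel.
import tensorflow as tf

y_pred = tf.nn.softmax(tf.random.uniform((2, 64, 64, 4)), axis=-1)  # (batch, H, W, classes)
y_true = tf.random.uniform((2, 64, 64), maxval=4, dtype=tf.int32)   # class index per pixel

loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(loss.shape)   # one loss value per pixel: (2, 64, 64)
```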

Chapter 13, Unsupervised Learning Using Mutual Information, looks at why DL is not going to keep advancing if it depends heavily on human labels. Unsupervised learning focuses on algorithms that do not require human labels. One effective technique for achieving unsupervised learning is to take advantage of the concept of Mutual Information (MI). By maximizing MI, unsupervised clustering/classification is implemented and evaluated using tf.keras.
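
For reference, mutual information itself can be sketched in a few lines for a discrete joint distribution; the MI-based clustering in this chapter maximizes a quantity of this form computed from paired network outputs.

```python
# A minimal sketch of mutual information for a discrete joint distribution:
# I(X;Z) = sum_{x,z} P(x,z) * log( P(x,z) / (P(x) * P(z)) )
import numpy as np

def mutual_information(p_joint, eps=1e-12):
    p_x = p_joint.sum(axis=1, keepdims=True)   # marginal P(x)
    p_z = p_joint.sum(axis=0, keepdims=True)   # marginal P(z)
    return np.sum(p_joint * (np.log(p_joint + eps) - np.log(p_x @ p_z + eps)))
```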
