
What's New in TensorFlow 2.0

By Ajay Baranwal, Alizishaan Khatri, and Tanish Baranwal

About this book
TensorFlow is an end-to-end machine learning platform for experts as well as beginners, and its new version, TensorFlow 2.0 (TF 2.0), improves its simplicity and ease of use. This book will help you understand and utilize the latest TensorFlow features. What's New in TensorFlow 2.0 starts by focusing on advanced concepts such as the new TensorFlow Keras APIs, eager execution, and efficient distribution strategies that help you to run your machine learning models on multiple GPUs and TPUs. The book then takes you through the process of building data ingestion and training pipelines, and it provides recommendations and best practices for feeding data to models created using the new tf.keras API. You'll explore the process of building an inference pipeline using TF Serving and other multi-platform deployments before moving on to explore the newly released AIY, which is essentially do-it-yourself AI. This book delves into the core APIs to help you build unified convolutional and recurrent layers and use TensorBoard to visualize deep learning models using what-if analysis. By the end of the book, you'll have learned about compatibility between TF 2.0 and TF 1.x and be able to migrate to TF 2.0 smoothly.
Publication date: August 2019
Publisher: Packt
Pages: 202
ISBN: 9781838823856

 

Getting Started with TensorFlow 2.0

This book aims to familiarize you with the new features introduced in TensorFlow 2.0 (TF 2.0) and to empower you to unlock its potential while building machine learning applications. This chapter provides a bird's-eye view of the new architectural and API-level changes in TF 2.0. We will cover TF 2.0 installation and setup, and compare the changes with respect to TensorFlow 1.x (TF 1.x) in areas such as the Keras and layers APIs. We will also cover the addition of rich extensions, such as TensorFlow Probability, Tensor2Tensor, Ragged Tensors, and the newly available custom training logic for loss functions. This chapter also summarizes the changes to the layers API and other APIs.

The following topics will be covered in this chapter:

  • What's new?
  • TF 2.0 installation and setup
  • Using TF 2.0
  • Rich extensions
 

Technical requirements

You will need the following before you can start executing the steps described in the sections ahead:

  • Python 3.4 or higher
  • A computer with Ubuntu 16.04 or later (The instructions remain similar for most *NIX-based systems such as macOS or other Linux variants)
 

What's new?

The philosophy of TF 2.0 is based on simplicity and ease of use. The major updates include easy model building with tf.keras and eager execution, robust model deployment for production and commercial use on any platform, powerful experimentation techniques and tools for research, and a simplified, more intuitively organized API.

The new organization of TF 2.0 is summarized in the following diagram:

The preceding diagram is focused on using the Python API for training and deploying; however, the same process is followed with the other supported languages including Julia, JavaScript, and R. The flow of TF 2.0 is separated into two sections—model training and model deployment, where model training includes the data pipelines, model creation, training, and distribution strategies; and model deployment includes the variety of means of deployment, such as TF Serving, TFLite, TF.js, and other language bindings. The components in this diagram will each be elaborated upon in their respective chapters.

The biggest change in TF 2.0 is the addition of eager execution. Eager execution is an imperative programming environment that evaluates operations immediately, without necessarily building graphs. All operations return concrete values instead of constructing a computational graph that the user can compute later. 

This makes it significantly easier to build and train TensorFlow models and reduces much of the boilerplate code that characterized TF 1.x. Eager execution has an intuitive interface that follows the standard Python code flow. Code written in eager execution is also much easier to debug, as standard Python debugging modules, such as pdb, can be used to inspect code for sources of error. The creation of custom models is also easier due to the natural Python control flow and support for iteration.
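
For instance, the following minimal sketch (any TF 2.0 installation will do) evaluates immediately and prints a concrete value, with no session or explicit graph construction involved:

import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.add(a, 1)   # executed immediately; b holds a concrete value
print(b.numpy())   # [[2 3] [4 5]]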

Another major change in TF 2.0 is the migration to tf.keras as the standard module for creating and training TensorFlow models. The Keras API is the central high-level API in TF 2.0, making it easy to get started with TensorFlow. Although Keras is an independent implementation of deep learning concepts, the tf.keras implementation contains enhancements such as eager execution for immediate iteration and debugging, and tf.data is also included for building scalable input pipelines.

An example workflow in tf.keras would be to first load the data using the tf.data module. This allows large amounts of data to be streamed from disk without storing all of it in memory. Then, the developer builds, trains, and validates the model using tf.keras or the premade estimators. The next step would be to run the model and debug it using the benefits of eager execution. Once the model is ready for full-fledged training, a distribution strategy can be used for distributed training. Finally, when the model is ready for deployment, it can be exported as a SavedModel for deployment through any of the deployment options shown in the diagram.
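
The following is a minimal sketch of this workflow; the in-memory random data is a hypothetical stand-in for a real dataset, and the export path is purely illustrative:

import tensorflow as tf

# 1. Build an input pipeline with tf.data
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 10]), tf.random.uniform([64], maxval=2, dtype=tf.int32))
).batch(16)

# 2. Build and train the model with tf.keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(dataset, epochs=1)

# 3. Export as a SavedModel for deployment
tf.saved_model.save(model, '/tmp/example_saved_model')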

Changes from TF 1.x

The first major difference between TF 1.x and TF 2.0 is the API organization. TF 2.0 has reduced the redundancies in the API structure. Major changes include the removal of tf.app, tf.flags, and tf.logging in favor of other Python modules, such as absl-py and the built-in logging module.

The tf.contrib library has also been removed from the main TensorFlow repository. The code implemented in this library has either been moved to a different location or been shifted to the TensorFlow Addons library. The reason for this move is that the contrib module had grown beyond what could be maintained in a single repository.

Other changes include the removal of the QueueRunner module in favor of using tf.data, the removal of graph collections, and changes in how variables are treated. The QueueRunner module was a way of providing data to a model for training, but was quite complicated and harder to use than tf.data, which is now the default way of feeding data to a model. Other benefits of using tf.data for the data pipeline are explained in Chapter 3, Designing and Constructing Input Data Pipelines.
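
As a quick illustration, the following sketch feeds batches to a loop using tf.data; the in-memory features and labels are hypothetical stand-ins for real training data:

import tensorflow as tf

features = tf.random.normal([8, 4])
labels = tf.random.uniform([8], maxval=2, dtype=tf.int32)

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=8).batch(4)

# Datasets are directly iterable in TF 2.0, with no queues or sessions
for batch_features, batch_labels in dataset:
    print(batch_features.shape, batch_labels.shape)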

Another major change in TF 2.0 is that there are no more global variables. In TF 1.x, variables created using tf.Variable would be put on the default graph and would still be recoverable through their names. TF 1.x had a range of mechanisms to help users recover their variables, such as variable scopes, global collections, and helper methods such as tf.get_global_step and tf.global_variables_initializer. All of this is removed in TF 2.0 in favor of the default variable behavior of Python: a variable behaves like any other Python object.
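
The following short sketch illustrates this default behavior: a tf.Variable is just an ordinary Python object, created and used without any global collections or initializers:

import tensorflow as tf

v = tf.Variable(3.0)   # an ordinary Python object; no default graph involved
v.assign_add(1.0)      # mutate the variable in place
print(v.numpy())       # 4.0 -- no session or tf.global_variables_initializer() needed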

 

TF 2.0 installation and setup

This section describes the steps required to install TF 2.0 on your system using different methods and on different system configurations. Entry-level users are recommended to start with the pip- and virtualenv-based methods. For users of the GPU version, Docker is the recommended method.

Installing and using pip

For the uninitiated, pip is a popular package management system in the Python community. If this is not installed on your system, please install it before proceeding further. On many Linux installations, Python and pip are installed by default. You can check whether pip is installed by typing the following command:

python3 -m pip --help

If you see a blurb describing the different commands that pip supports, pip is installed on your system. If pip is not installed, you will see an error message similar to No module named pip.

It usually is a good idea to isolate your development environment. This greatly simplifies dependency management and streamlines the software development process. We can achieve environment isolation by using a tool in Python called virtualenv. This step is optional but highly recommended:
mkdir .venv
virtualenv --python=python3.6 .venv/
source .venv/bin/activate

You can install TensorFlow using pip, as shown in the following command: 

pip3 install tensorflow==version_tag

For example, if you want to install version 2.0.0-beta1, your command should be as follows:

pip3 install tensorflow==2.0.0-beta1

A complete list of the most recent package updates is available at https://pypi.org/project/tensorflow/#history.

You can test your installation by running the following command:

python3 -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"

Using Docker

If you would like to isolate your TensorFlow installation from the rest of your system, you might want to consider installing it using a Docker image. This would require you to have Docker installed on your system. Installation instructions are available at https://docs.docker.com/install/.

In order to use Docker without sudo on a Linux system, please follow the post-install steps at:
https://docs.docker.com/install/linux/linux-postinstall/.

The TensorFlow team officially supports Docker images as a mode of installation. To the user, one implication of this is that updated Docker images will be made available for download at https://hub.docker.com/r/tensorflow/tensorflow/.

Download a Docker image locally using the following command:

docker pull tensorflow/tensorflow:YOUR_TAG_HERE

The preceding command downloads the Docker image from the central repository. To run the code using this image, you need to start a new container and type the following:

docker run -it --rm tensorflow/tensorflow:YOUR_TAG_HERE \
python -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"

A Docker-based installation is also a good option if you intend to use GPUs. Detailed instructions for this are provided in the next section. 

GPU installation

Installing the GPU version of TensorFlow is slightly different from the process for the CPU version. It can be installed using both pip and Docker. The choice of installation process boils down to the end objective. The Docker-based process is easier as it involves installing fewer additional components. It also helps avoid library conflict. This can, though, introduce an additional overhead of managing the container environment. The pip-based version involves installing more additional components but offers a greater degree of flexibility and efficiency. It enables the resultant installation to run directly on the local host without any virtualization.

To proceed, assuming you have the necessary hardware set up, you will need, at a minimum, the NVIDIA GPU drivers. Detailed installation instructions are provided at https://www.nvidia.com/Download/index.aspx?lang=en-us.

Installing using Docker

At the time of writing this book, this option is only available for NVIDIA GPUs running on Linux hosts. If you meet the platform constraints, then this is an excellent option, as it significantly simplifies the process. It also minimizes the number of additional software components that you need to install by leveraging a pre-built container. To proceed, we need to install nvidia-docker; please refer to the nvidia-docker project documentation for additional details.

Once you've completed the nvidia-docker installation, take the following steps:

  1. Test whether the GPU is available:
lspci | grep -i nvidia
  2. Verify your nvidia-docker installation (for v2 of nvidia-docker):
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
  3. Download a Docker image locally:
docker pull tensorflow/tensorflow:YOUR_TAG_HERE
  4. Let's say you're trying to run the most recent version of the GPU-based image. You'd type the following:
docker pull tensorflow/tensorflow:latest-gpu
  5. Start the container and run the code:
docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"

Installing using pip

If you would like to use TensorFlow with an NVIDIA GPU, you need to install additional software on your system: the NVIDIA GPU drivers, the CUDA Toolkit, and the cuDNN library. Detailed installation instructions are available on the respective NVIDIA download pages.

Once all the previous components have been installed, this is a fairly straightforward process. 

Install TensorFlow using pip:

pip3 install tensorflow-gpu==version_tag

For example, if you want to install version 2.0.0-alpha0, then you'd have to type in the following command:

pip3 install tensorflow-gpu==2.0.0-alpha0

A complete list of the most recent package updates is available at https://pypi.org/project/tensorflow/#history.

You can test your installation by running the following command:

python3 -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
 

Using TF 2.0

TF 2.0 can be used in two main ways: using the low-level APIs and using the high-level APIs. The low-level style revolves around APIs such as tf.GradientTape and tf.function.

The code flow for writing low-level code is to define a forward pass inside a function that takes the input data as an argument. This function is then annotated with the tf.function decorator in order to run it in graph mode, along with all of its benefits. To record and get the gradients of the forward pass, the decorated function and the loss computation are run inside the tf.GradientTape context manager, from which gradients can be calculated and applied to the model variables.
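
The following is a minimal sketch of this pattern, using a hypothetical one-parameter linear model with variables w and b:

import tensorflow as tf

w = tf.Variable(2.0)
b = tf.Variable(1.0)

@tf.function  # runs the forward pass in graph mode
def forward(x):
    return w * x + b

x, y_true = tf.constant(3.0), tf.constant(10.0)

# Record the forward pass and loss computation on the tape
with tf.GradientTape() as tape:
    y_pred = forward(x)
    loss = tf.square(y_true - y_pred)

# Calculate gradients and apply them to the model variables
grads = tape.gradient(loss, [w, b])
tf.keras.optimizers.SGD(0.01).apply_gradients(zip(grads, [w, b]))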

Training code can also be written using the low-level APIs for tf.keras models by using tf.GradientTape. This is for when more control and customizability is needed over the default tf.keras.Model.fit method. Training methods and pipelines are explained in depth in Chapter 4, Model Training and Use of TensorBoard.

A simple comparison between TF 2.0 and TF 1.x is that the graph that was run using sess.run in TF 1.x is now a function, and the feed dict and placeholders become the arguments of that function. This is the philosophical change between TF 2.0 and TF 1.x; there is a shift toward complete object-oriented code, where all APIs and modules are callable objects.
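
The following minimal sketch shows this shift, with the TF 1.x idiom retained as comments for comparison:

# TF 1.x (for comparison):
#   x = tf.placeholder(tf.float32)
#   y = x ** 2
#   outputs = sess.run(y, feed_dict={x: 4.0})
#
# TF 2.0: the computation is a function and the input is a plain argument:
import tensorflow as tf

@tf.function
def square(x):
    return x ** 2

outputs = square(tf.constant(4.0))
print(outputs.numpy())  # 16.0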

Using the high-level APIs in TF 2.0 is easier, where tf.keras is the default high-level API used. tf.keras has three different methods of model creation. These methods are as follows:

  • The Sequential API: This reflects another change brought about in TF 2.0. The previous high-level API for model creation in TF 1.x was the tf.layers module. This module has been converted to tf.keras.layers, where nearly all the methods from tf.layers are replicated. This makes it easy to convert from tf.layers to tf.keras.layers, as the code is nearly identical.

Using the Sequential API to create a model is done by creating a linear model with the symbolic tf.keras layer classes. This style is used for completely linear models and is the easiest style to use.

The following code block is an example of a Sequential API model:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10, activation='softmax')
])

train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

# The training flag toggles training-time behavior such as
# Dropout and BatchNormalization statistics
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
  • The functional API: This API has more flexibility than the Sequential API in the sense that it's based on calling the layer classes on the output tensor of the layer preceding it. This means that non-linear models and architectures can be implemented, such as the Inception and ResNet architectures.

The following code block is an example of a model created with the functional API:

from tensorflow import keras
from tensorflow.keras import layers

encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name='encoder')
  • The model subclassing technique: This is very similar to the low-level approach in the sense that it is used to create custom models and layers that implement technologies and techniques not included in TensorFlow. The model subclassing technique involves creating a class that inherits from the tf.keras.Model base class and has a call method defined that takes an input argument and a training argument, and then computes and returns the result of a forward pass through the model.

The following code block is an example of a model created with model subclassing:

import tensorflow as tf
from tensorflow.keras import layers

class ResNet(tf.keras.Model):

    def __init__(self):
        super(ResNet, self).__init__()
        # ResNetBlock is a custom layer assumed to be defined elsewhere,
        # and num_classes is assumed to be set beforehand
        self.block_1 = ResNetBlock()
        self.block_2 = ResNetBlock()
        self.global_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

    def call(self, inputs):
        x = self.block_1(inputs)
        x = self.block_2(x)
        x = self.global_pool(x)
        return self.classifier(x)

resnet = ResNet()
dataset = ...
resnet.compile(optimizer='adam', loss='sparse_categorical_crossentropy')  # fit requires a compiled model
resnet.fit(dataset, epochs=10)

 

Rich extensions

Rich extensions are a set of features that have been introduced in TensorFlow to boost user productivity and expand capabilities. In this section, we will cover Ragged Tensors and how to use them, and we will also cover the new modules introduced in TF 2.0.

Ragged Tensors

Variable-sized data is a common occurrence when both training and serving machine learning models. This issue persists across different underlying media types and model architectures. The conventional solution is to use the size of the largest record and pad the smaller records to match it. This is inefficient not only in terms of memory and storage, but also in terms of computation; for example, when dealing with inputs to a recurrent model.

Ragged Tensors help address this issue. At a very high level, Ragged Tensors can be thought of as the TensorFlow analog of variable-length linked lists. An important fact to note here is that this variability can be present in nested dimensions as well. This means that it is possible to have a list of variable-sized elements. Generalizing this property to multiple dimensions opens the door to a variety of interesting use cases. One important restriction to keep in mind, though, is that all values in a Ragged Tensor must be of the same type. Some common kinds of non-uniformly shaped data that Ragged Tensors can be used for include the following:

  • Variable-length features:
    • Example—the number of characters in a word
  • Batches of variable-length sequential inputs:
    • Example—sentences, time-series data, and audio clips
  • Hierarchical inputs:
    • Example—text documents that are subdivided into sections, paragraphs, sentences, words and characters; organizational hierarchies 
  • Individual fields in structured inputs:
    • Example—HTTP Request payloads, protocol buffers, and JSON data

In the following subsections, we shall look at the main properties of Ragged Tensors and write some code to see them in action.

What are Ragged Tensors, really?

Ragged Tensors can also be defined as tensors with one or more ragged dimensions; in other words, dimensions with variable-length slices. As most common use cases involve dealing with a finite number of records, Ragged Tensors require the outermost dimension to be uniform; in other words, all slices of that dimension should have the same length. The dimensions that follow the outermost one can be either ragged or uniform. To summarize these points, we can state that the shape of a Ragged Tensor is currently restricted to the following form:

  • A single uniform dimension
  • Followed by one or more ragged dimensions
  • Followed by zero or more uniform dimensions

Constructing a Ragged Tensor

TF 2.0 provides a large number of methods that can be used to create or return Ragged Tensors. One of the most straightforward ones is tf.ragged.constant(). Let's use it to create a Ragged Tensor of dimension (num_sentences, (num_words)). Please note that we've used round brackets to indicate the dimension that is ragged:

sentences = tf.ragged.constant([
    ["Hello", "World", "!"],
    ["We", "are", "testing", "tf.ragged.constant", "."]
])
print(sentences)

You should see something like this:

<tf.RaggedTensor [[b'Hello', b'World', b'!'], [b'We', b'are', b'testing', b'tf.ragged.constant', b'.']]>

It is also possible to create a Ragged Tensor from an old-style tensor or Python list with padded elements. This can be very useful in building efficient TF 2.0 models that consume data from a lower-stage pipeline written for earlier versions of TensorFlow. The functionality is exposed by the tf.RaggedTensor.from_tensor() function, with the padding value provided by the padding keyword argument. If used correctly, this can save users significant amounts of memory, especially in the case of sparse arrays.

Consider the following example, in which we define a Python list. Each element of this list is a further list containing a variable number of numerical values. Some of the numbers listed here are padding values, indicated by the digit 0. This can also be looked at as a matrix of 4 records containing 5 attributes each; in other words, a 4 x 5 matrix:

x = [
    [1, 7, 0, 0, 0],
    [2, 0, 0, 0, 0],
    [4, 5, 8, 9, 1],
    [1, 0, 0, 0, 0]
]
print(tf.RaggedTensor.from_tensor(x, padding=0))

We can see that the majority of records in the preceding matrix contain padding values. These values occupy memory. As seen in the following output, converting the preceding matrix to a Ragged Tensor eliminates the trailing 0 (padding) values. This results in a memory-efficient representation of the data:

<tf.RaggedTensor [[1, 7], [2], [4, 5, 8, 9, 1], [1]]>

The preceding example is a small illustration of how using ragged representations saves memory. As the number of records and/or dimensions grow, the memory savings provided by this representation would become more pronounced. 

Basic operations on Ragged Tensors

Ragged Tensors can be used in a manner similar to regular tensors in many cases. TensorFlow provides over 100 operators that support Ragged Tensors. These operators can be broadly classified as fundamental mathematical operators, array operators, or string operators, among others.

The following code block shows the process of adding two Ragged Tensors:

x = tf.ragged.constant([
    [1, 2, 3, 4],
    [1, 2]
])
y = tf.ragged.constant([
    [4, 3, 2, 1],
    [5, 6]
])
print(tf.add(x, y))

This results in the following output:

<tf.RaggedTensor [[5, 5, 5, 5], [6, 8]]>

Another interesting feature is that operator overloading is defined for Ragged Tensors. This means that a programmer can intuitively use operators such as +, -, *, /, //, %, **, &, |, ^, <, <=, >, and >=, just like they would with other tensors.

The following code block shows the multiplication of a Ragged Tensor using an overloaded operator:

x = tf.ragged.constant([
    [1, 2, 3, 4],
    [1, 2]
])
print(x * 2)  # multiply a ragged tensor by a scalar
print(x * x)  # multiply a ragged tensor by another ragged tensor

The resultant output is as follows:

<tf.RaggedTensor [[2, 4, 6, 8], [2, 4]]>
<tf.RaggedTensor [[1, 4, 9, 16], [1, 4]]>

In addition, a variety of Ragged Tensor-specific operators are defined in the tf.ragged package. It is worth checking out the documentation of this package to learn more.

New and important packages

The arrival of TF 2.0 also comes with the arrival of many more interesting and useful packages under TensorFlow that can be installed separately. Some of these packages include TensorFlow Datasets, TensorFlow Addons, TensorFlow Text, and TensorFlow Probability.

TensorFlow Datasets is a Python module that provides easy access to over 100 datasets, ranging from audio to natural language to images. These datasets can be easily downloaded and used in models via the following code:

import tensorflow as tf
import tensorflow_datasets as tfds

dataset = tfds.load(name="mnist", split=tfds.Split.TRAIN)
dataset = dataset.shuffle(1024).batch(32).prefetch(tf.data.experimental.AUTOTUNE)

Datasets taken from this library are tf.data.Dataset objects, which means that all tf.data methods can be used to modify the base dataset. More details on the TensorFlow Datasets module are in Chapter 3, Designing and Constructing Input Data Pipelines.

TensorFlow Addons (TF Addons) is another TensorFlow module. This module contains most of the tf.contrib module from TF 1.x, other than the methods that were moved into the main tf module. TF Addons contains many experimental and state-of-the-art layers, loss functions, initializers, and optimizers, all in the form of TF 2.0 objects. This means that APIs taken from TF Addons can be seamlessly incorporated into a normal tf.keras model without any extra changes.

TensorFlow Text is a very recent module, which adds NLP APIs to TF 2.0. This module includes methods such as sentence and word tokenization, among other popular techniques in the NLP field. Something to note is that this module is very new and so is subject to multiple changes in the API structure.
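
As a quick illustration, the following sketch tokenizes a sentence with the WhitespaceTokenizer from TensorFlow Text; it assumes the tensorflow-text package is installed, and the API may shift given how new the module is:

import tensorflow_text as text

tokenizer = text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(['everything not saved will be lost'])
print(tokens)  # a RaggedTensor of word tokens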

TensorFlow Probability is a module that adds APIs for probability calculations in TensorFlow. This module allows researchers and developers to take advantage of TensorFlow's optimized operations and computations in order to perform a multitude of probability-related tasks.
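
For example, the following minimal sketch draws samples from a standard normal distribution and evaluates their log densities, assuming the tensorflow-probability package is installed:

import tensorflow_probability as tfp

normal = tfp.distributions.Normal(loc=0., scale=1.)   # a standard normal distribution
samples = normal.sample(5)                            # draw five samples
log_probs = normal.log_prob(samples)                  # evaluate their log densities
print(samples, log_probs)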

All the aforementioned packages can be installed using pip, following the tensorflow-<module> naming format; for example, pip install tensorflow-datasets.

 

Summary

TF 2.0 contains many major changes, such as API cleanup, eager execution, and an object-oriented philosophy. The API cleanup includes deprecating redundant modules that have equivalent standard Python libraries, as well as removing and reorganizing the tf.contrib module into the main API and into the TensorFlow Addons package. Eager execution and object-oriented APIs allow debugging to be much more efficient and straightforward, and also lead to variables being treated as normal Python variables. This means that variable collections and other APIs dedicated to dealing with global variables are no longer needed, and thus are removed in TF 2.0.

TF 2.0 also shifts the default high-level API from estimators in TF 1.x to tf.keras in TF 2.0 for both simplicity and scalability. The tf.keras API has three different programming types, each providing different levels of abstraction and customizability. Low-level TF 2.0 code can be written using tf.GradientTape to handle gradients of operations, and tf.function for graph-based execution.

This chapter also covered the different ways to install TF 2.0, including installation through pip and Docker, as well as the installation of the GPU version. There are many modules that are compatible with and have been released alongside TF 2.0, which further enhance and augment the possibilities of the base API. These include TensorFlow Datasets, TensorFlow Addons, TensorFlow Text, and TensorFlow Probability.

This chapter also covered Ragged Tensors, which are useful for storing data with variable length and shape, as well as hierarchical inputs. This makes Ragged Tensors particularly useful for storing language and sequence data.

In the next chapter, we will learn about Keras' default integration and eager execution.

About the Authors
  • Ajay Baranwal

    Ajay Baranwal works as a director at the Center for Deep Learning in Electronics Manufacturing, where he is responsible for researching and developing TensorFlow-based deep learning applications for the semiconductor and electronics manufacturing industry. Part of his role is to train professionals in deep learning techniques. He has a solid background in software engineering and management, which is where he got hooked on deep learning. He then moved into natural language understanding (NLU) at Abzooba to pursue deep learning further, and built an information retrieval system for the finance sector. He has also worked at Ansys Inc. as a senior manager (engineering) and a technical fellow (data science), where he introduced several ML applications.

  • Alizishaan Khatri

    Alizishaan Khatri works as a machine learning engineer in Silicon Valley. He uses TensorFlow to build, design, and maintain production-grade systems that use deep learning for NLP applications. A major system he has built is a deep learning-based system for detecting offensive content in chats. His other work includes text classification and named entity recognition (NER) systems for different use cases. He is passionate about sharing ideas with the community and frequently speaks at tech conferences across the globe. He holds a master's degree in computer science from the State University of New York at Buffalo. His thesis proposed a solution to the problem of overfitting in deep learning. Outside of work, he enjoys skiing and mountaineering.

  • Tanish Baranwal

    Tanish Baranwal is a sophomore in high school who lives in California with his family and has worked with his dad on deep learning projects using TensorFlow for the last three years. He has been coding for nine years (since first grade) and is well versed in Python and JavaScript; he is now learning C++. He has certificates from various online courses and has won the Entrepreneurship Showcase Award at his school. Some of his deep learning projects include anomaly detection systems for transaction fraud, a system to save energy by turning off domestic water heaters when not in use, and a fully functional style transfer program that can recreate any photograph in another style. He has also written blogs on deep learning on Medium, with over 1,000 views.
