TensorFlow began its life in 2011 as DistBelief, an internal, closed-source project at Google. DistBelief was a machine learning system that employed deep learning neural networks. This system morphed into TensorFlow, which was released to the developer community under an Apache 2.0 open source license on November 9, 2015. Version 1.0.0 made its appearance on February 11, 2017. There have been a number of point releases since then that have incorporated a wealth of new features.

At the time of writing this book, the most recent version is TensorFlow 2.0.0 alpha release, which was announced at the TensorFlow Dev Summit on March 6, 2019.

TensorFlow takes its name from, well, tensors. A tensor is a generalization of vectors and matrices to possibly higher dimensions. The rank of a tensor is the number of indices it takes to uniquely specify each element of that tensor. A scalar (a simple number) is a tensor of rank 0, a vector is a tensor of rank 1, a matrix is a tensor of rank 2, and a 3-dimensional array is a tensor of rank 3. A tensor has a datatype and a shape (all of the data items in a tensor must have the same type). An example of a 4-dimensional tensor (that is, rank 4) is an image, where the dimensions are example-within-`batch`, `height`, `width`, and color `channel` (for example):

image1 = tf.zeros([7, 28, 28, 3]) # example-within-batch by height by width by color

Although TensorFlow can be leveraged for many areas of numerical computing in general, and machine learning in particular, its main area of research and development has been in the applications of **Deep Neural Networks** (**DNN**), where it has been used in diverse areas such as voice and sound recognition, for example, in the now widespread voice-activated assistants; text-based applications such as language translators; image recognition such as exo-planet hunting, cancer detection, and diagnosis; and time series applications such as recommendation systems.

In this chapter, we will discuss the following:

- Looking at the modern TensorFlow ecosystem
- Installing TensorFlow
- Housekeeping and eager operations
- Providing useful TensorFlow operations

Let's discuss **eager execution**. The first incarnation of TensorFlow involved constructing a computational graph made up of operations and tensors, which had to be subsequently evaluated in what Google termed a session (this is known as *declarative* programming). This is still a common way to write TensorFlow programs. However, eager execution, available from release 1.5 onward in research form and baked into TensorFlow proper from release 1.7, involves the immediate evaluation of operations, with the consequence that tensors can be treated like NumPy arrays (this is known as *imperative* programming).

Google says that eager execution is the preferred method for research and development but that computational graphs are to be preferred for serving TensorFlow production applications.
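As a minimal sketch of the difference (the values here are arbitrary), under eager execution an operation's result is available immediately, with no graph construction or session required:

```python
import tensorflow as tf

# With eager execution, operations evaluate immediately and
# tensors can be inspected like NumPy arrays.
a = tf.constant([[1., 2.], [3., 4.]])
b = a + 10  # no session needed; the result exists right away

print(b.numpy())
```

Under the graph (declarative) style, `b` would merely describe a node to be evaluated later in a session.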

`tf.data` is an API that allows you to build complicated data input pipelines from simpler, reusable parts. The highest-level abstraction is `Dataset`, which comprises both elements that are nested structures of tensors and a plan of transformations that are to act on those elements. There are classes for the following:

- A `Dataset` consisting of fixed-length record sets from at least one binary file (`FixedLengthRecordDataset`)
- A `Dataset` consisting of records from at least one TFRecord file (`TFRecordDataset`)
- A `Dataset` consisting of records that are lines from at least one text file (`TextLineDataset`)
- A class that represents the state of iterating through a `Dataset` (`tf.data.Iterator`)
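As a minimal sketch of the idea (the data and transformations here are purely illustrative), a `Dataset` can be built from in-memory tensors and composed with reusable transformations; under eager execution it can then be iterated over directly:

```python
import tensorflow as tf

# Build a pipeline from an in-memory list, square each element, and batch.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
dataset = dataset.map(lambda x: x * x).batch(3)

# Under eager execution, a Dataset is directly iterable.
for batch in dataset:
    print(batch.numpy())
```

In real pipelines, the source would typically be one of the file-backed classes listed above rather than an in-memory list.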

Let's move on to the **estimator**, which is a high-level API that allows you to build greatly simplified machine learning programs. Estimators take care of training, evaluation, prediction, and exports for serving.

**TensorFlow.js** is a collection of APIs that allow you to build and train models using either the low-level JavaScript linear algebra library or the high-level layers API. Hence, models can be trained and run in a browser.

**TensorFlow Lite** is a lightweight version of TensorFlow for mobile and embedded devices. It consists of a runtime interpreter and a set of utilities. The idea is that you train a model on a higher-powered machine and then convert your model into the `.tflite` format using the utilities. You then load the model into your device of choice. At the time of writing, TensorFlow Lite is supported on Android and iOS with a C++ API and has a Java wrapper for Android. If an Android device supports the **Android Neural Networks** (**ANN**) API for hardware acceleration, then the interpreter will use this, or else it will default to the CPU for execution.
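As a sketch of the conversion step (the model here is an untrained placeholder, and this assumes the TensorFlow 2 `from_keras_model` constructor; in practice you would train the model first), `tf.lite.TFLiteConverter` serializes a Keras model into the `.tflite` format:

```python
import tensorflow as tf

# A placeholder model; in practice you would train this first
# on a higher-powered machine.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(4,))
])

# Convert the model into a .tflite FlatBuffer for the on-device interpreter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# Write the serialized model; this file would then be loaded on the device.
with open('model.tflite', 'wb') as f:
    f.write(tflite_bytes)
```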

**TensorFlow Hub** is a library designed to foster the publication, discovery, and use of reusable modules of machine learning models. In this context, a module is a self-contained piece of a TensorFlow graph together with its weights and other assets. The module can be reused in different tasks in a method known as *transfer learning*. The idea is that you train a model on a large dataset and then re-purpose the appropriate module for your different but related task. This approach brings a number of advantages: you can train a model with a smaller dataset, you can improve generalization, and you can significantly speed up training.

For example, the ImageNet dataset, together with a number of different neural network architectures such as `inception_v3`, has been very successfully used to jump-start many other image processing training problems.

**TensorFlow Extended** (**TFX**) is a TensorFlow-based general-purpose machine learning platform. Libraries released to open source to date include TensorFlow Transform, TensorFlow Model Analysis, and TensorFlow Serving.

`tf.keras` is a high-level neural networks API, written in Python, that interfaces to TensorFlow (and various other tensor tools). `tf.keras` supports fast prototyping and is user friendly, modular, and extensible. It supports both convolutional and recurrent networks and will run on CPUs and GPUs. Keras is the API of choice for developing in TensorFlow 2.
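As a minimal illustrative sketch (the layer sizes and input dimension here are arbitrary), the `tf.keras` Sequential API lets you define and compile a network in a few lines:

```python
import tensorflow as tf

# A small fully connected network; the sizes here are arbitrary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compiling attaches an optimizer, a loss, and metrics ready for training.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.summary()
```

We will use this API extensively in the following chapters.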

**TensorBoard** is a suite of visualization tools supporting the understanding, debugging, and optimizing of TensorFlow programs. It is compatible with both eager and graph execution environments. You can use TensorBoard to visualize various metrics of your model during training.

One recent development, and at the time of writing still very much in experimental form, integrates TensorFlow directly into the Swift programming language. TensorFlow applications in Swift are written using imperative code, that is, code that executes eagerly (at runtime). The Swift compiler automatically turns this source code into a single TensorFlow graph, and this compiled code then executes with the full performance of TensorFlow Sessions on CPU, GPU, and TPU.

In this book, we will focus on those TensorFlow tools that allow us to get up and running with TensorFlow, using Python 3.6 and the TensorFlow 2.0.0 alpha release. In particular, we will use eager execution as opposed to computational graphs and we will leverage the power of `tf.keras` for building networks wherever possible, as it is the modern way for research and experimentation.

The best programming support for TensorFlow is provided for Python (although libraries do exist for Java, C, and Go, while those for other languages are under active development).

There is a wealth of information on the web for installing TensorFlow for Python.

It is standard practice, also recommended by Google, to install TensorFlow in a virtual environment, that is, an environment that isolates a set of APIs and code from other APIs and code and from the system-wide environment.

There are two distinct versions of TensorFlow: one for execution on a CPU and another for execution on a GPU. The latter requires that the numerical libraries CUDA and cuDNN are installed. TensorFlow will default to GPU execution where possible. See https://www.tensorflow.org/alpha/guide/using_gpu.

Rather than attempt to reinvent the wheel here, the following resources cover creating virtual environments and installing TensorFlow.

In summary, TensorFlow may be installed for Windows 7 or later, Ubuntu Linux 16.04 or later, and macOS 10.12.6 or later.

There is a thorough introduction to virtual environments at http://docs.python-guide.org/.

There is a very detailed set of information on all aspects of what is required to install TensorFlow in the official Google documentation at https://www.tensorflow.org/install/.

Once installed, you can check your TensorFlow installation from a command terminal. There are instructions for doing this at http://www.laurencemoroney.com/tensorflow-to-gpu-or-not-to-gpu/ and for installing the nightly build of TensorFlow, which contains all of the latest updates.

We will first look at how to import TensorFlow, then TensorFlow coding style, and how to do some basic housekeeping. After this, we will look at some basic TensorFlow operations. You can either create a Jupyter Notebook for these snippets or use your favorite IDE to create your source code. The code is all available in the GitHub repository.

Importing TensorFlow is straightforward. Note a couple of system checks:

import tensorflow as tf
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution is: {}".format(tf.executing_eagerly()))
print("Keras version: {}".format(tf.keras.__version__))

For Python applications, Google adheres to the PEP8 standard conventions. In particular, they use CamelCase for classes (for example, `hub.LatestModuleExporter`) and `snake_case` for functions, methods, and properties (for example, `tf.math.squared_difference`). Google also adheres to the Google Python Style Guide, which can be found at https://github.com/google/styleguide/blob/gh-pages/pyguide.md.
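To make the conventions concrete (the class and method names below are invented purely for illustration):

```python
import tensorflow as tf

class ImageBatcher:                       # classes use CamelCase (cf. hub.LatestModuleExporter)
    def batch_size_for(self, n_images):   # methods use snake_case
        return min(n_images, 32)

# TensorFlow's own functions follow snake_case too:
d = tf.math.squared_difference(3., 5.)
print(d.numpy())
```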

Eager execution is the default in TensorFlow 2 and, as such, needs no special setup.

The following code can be used to find out whether a CPU or GPU is in use and, if it's a GPU, whether that GPU is #0.

We suggest typing the code in rather than using copy and paste; this way you will get a feel for the commands:

var = tf.Variable([3, 3])

if tf.test.is_gpu_available():
    print('Running on GPU')
    print('GPU #0?')
    print(var.device.endswith('GPU:0'))
else:
    print('Running on CPU')

The way to declare a TensorFlow eager variable is as follows:

t0 = 24 # python variable
t1 = tf.Variable(42) # rank 0 tensor
t2 = tf.Variable([[[0., 1., 2.], [3., 4., 5.]], [[6., 7., 8.], [9., 10., 11.]]]) # rank 3 tensor
t0, t1, t2

The output will be as follows:

**(24,
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=42>,
<tf.Variable 'Variable:0' shape=(2, 2, 3) dtype=float32, numpy=
array([[[ 0., 1., 2.],
[ 3., 4., 5.]],
[[ 6., 7., 8.],
[ 9., 10., 11.]]], dtype=float32)>)**

TensorFlow will infer the datatype, defaulting to `tf.float32` for floats and `tf.int32` for integers (see the preceding examples).

Alternatively, the datatype can be explicitly specified, as here:

f64 = tf.Variable(89, dtype = tf.float64)
f64.dtype

TensorFlow has a large number of built-in datatypes.

Examples include those seen previously, `tf.int16`, `tf.complex64`, and `tf.string`. See https://www.tensorflow.org/api_docs/python/tf/dtypes/DType. To reassign a variable, use `var.assign()`, as here:

f1 = tf.Variable(89.)
f1 # <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=89.0>

f1.assign(98.)
f1 # <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=98.0>

TensorFlow constants may be declared as in the following example:

m_o_l = tf.constant(42)
m_o_l # <tf.Tensor: id=45, shape=(), dtype=int32, numpy=42>
m_o_l.numpy() # 42

Again, TensorFlow will infer the datatype, or it can be explicitly specified, as is the case with variables:

unit = tf.constant(1, dtype = tf.int64)
unit # <tf.Tensor: id=48, shape=(), dtype=int64, numpy=1>

The shape of a tensor is accessed via a property (rather than a function):

t2 = tf.Variable([[[0., 1., 2.], [3., 4., 5.]], [[6., 7., 8.], [9., 10., 11.]]]) # tensor variable
print(t2.shape)

The output will be as follows:

**(2, 2, 3)**

Tensors may be reshaped and retain the same values, as is often required for constructing neural networks.

Here is an example:

```
r1 = tf.reshape(t2, [2, 6]) # 2 rows 6 cols
r2 = tf.reshape(t2, [1, 12]) # 1 row 12 cols
r1
```

**# <tf.Tensor: id=33, shape=(2, 6), dtype=float32,
numpy= array([[ 0., 1., 2., 3., 4., 5.], [ 6., 7., 8., 9., 10., 11.]], dtype=float32)>**

Here is another example:

```
r2 = tf.reshape(t2, [1, 12]) # 1 row 12 columns
r2
```

**# <tf.Tensor: id=36, shape=(1, 12), dtype=float32,
numpy= array([[ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.]], dtype=float32)>**

The rank of a tensor is the number of dimensions it has, that is, the number of indices that are required to specify any particular element of that tensor.

The rank of a tensor can be ascertained with this, for example:

tf.rank(t2)

The output will be as follows:

**<tf.Tensor: id=53, shape=(), dtype=int32, numpy=3>** (the shape is `()` because the output here is a scalar value)

Specifying an element of a tensor is performed, as you would expect, by specifying the required indices.

Take this, for example:

t3 = t2[1, 0, 2] # slice 1, row 0, column 2
t3

The output will be as follows:

**<tf.Tensor: id=75, shape=(), dtype=float32, numpy=8.0>**

Should you need to, you can cast a tensor to a `numpy` variable as follows:

print(t2.numpy())

The output will be as follows:

[[[ 0. 1. 2.]
  [ 3. 4. 5.]]

 [[ 6. 7. 8.]
  [ 9. 10. 11.]]]

Take this, also:

print(t2[1, 0, 2].numpy())

The output will be as follows:

**8.0**

The number of elements in a tensor is easily obtained. Notice also, again, the use of the `.numpy()` function to extract the Python value from the tensor:

s = tf.size(input=t2).numpy()
s

The output will be as follows:

**12**

TensorFlow supports all of the datatypes you would expect. A full list is available at https://www.tensorflow.org/versions/r1.1/programmers_guide/dims_types and includes `tf.int32` (the default integer type), `tf.float32` (the default floating point type), and `tf.complex64` (the complex type).

To find the datatype of a tensor, use the `dtype` property:

t3.dtype

The output will be as follows:

**tf.float32**

Element-wise primitive tensor operations are specified using, as you would expect, the overloaded operators `+`, `-`, `*`, and `/`, as here:

t2*t2

The output will be as follows:

**<tf.Tensor: id=555332, shape=(2, 2, 3), dtype=float32, numpy= array([[[ 0., 1., 4.], [ 9., 16., 25.]], [[ 36., 49., 64.], [ 81., 100., 121.]]], dtype=float32)>**

Element-wise tensor operations support broadcasting in the same way that NumPy arrays do. The simplest example is that of multiplying a tensor by a scalar:

t4 = t2 * 4
print(t4)

The output will be as follows:

**tf.Tensor(
[[[ 0. 4. 8.]
  [12. 16. 20.]]
 [[24. 28. 32.]
  [36. 40. 44.]]], shape=(2, 2, 3), dtype=float32)**

In this example, the scalar multiplier 4 is, conceptually at least, expanded into an array that can be multiplied element-wise with `t2`. There is a very detailed discussion of broadcasting at https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html.
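As a further sketch of this expansion (the values here are arbitrary), a rank-1 tensor combined with a rank-2 tensor is conceptually stretched to match the larger shape:

```python
import tensorflow as tf

matrix = tf.constant([[1., 2., 3.],
                      [4., 5., 6.]])     # shape (2, 3)
row = tf.constant([10., 20., 30.])       # shape (3,)

# row is conceptually expanded to shape (2, 3) and added element-wise.
result = matrix + row
print(result.numpy())
```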

To transpose a matrix and perform matrix multiplication eagerly, use the following:

u = tf.constant([[3, 4, 3]])
v = tf.constant([[1, 2, 1]])
tf.matmul(u, tf.transpose(a=v))

The output will be as follows:

**<tf.Tensor: id=555345, shape=(1, 1), dtype=int32, numpy=array([[14]], dtype=int32)>**

Note, again, that the default integer type is `tf.int32`

and the default float type is `tf.float32`

.

All of the operations that are available for tensors that form part of a computational graph are also available for eager execution variables.

There is a complete list of these operations at https://www.tensorflow.org/api_guides/python/math_ops.

TensorFlow variables of one type may be cast (coerced) to another type. More details may be found at https://www.tensorflow.org/api_docs/python/tf/cast.

Take the following example:

i = tf.cast(t1, dtype=tf.int32) # 42
i

The output will be as follows:

**<tf.Tensor: id=116, shape=(), dtype=int32, numpy=42>**


With truncation, it would be as follows:

j = tf.cast(tf.constant(4.9), dtype=tf.int32) # 4
j

The output will be as follows:

**<tf.Tensor: id=119, shape=(), dtype=int32, numpy=4>**

A ragged tensor is a tensor with one or more ragged dimensions. Ragged dimensions are dimensions that have slices that may have different lengths.

There are a variety of methods for declaring ragged arrays, the simplest being a constant ragged array.

The following example shows how to declare a constant ragged array and the lengths of the individual slices:

ragged = tf.ragged.constant([[5, 2, 6, 1], [], [4, 10, 7], [8], [6, 7]])

print(ragged)
print(ragged[0, :])
print(ragged[1, :])
print(ragged[2, :])
print(ragged[3, :])
print(ragged[4, :])

The output is as follows:

**<tf.RaggedTensor [[5, 2, 6, 1], [], [4, 10, 7], [8], [6, 7]]>
tf.Tensor([5 2 6 1], shape=(4,), dtype=int32)
tf.Tensor([], shape=(0,), dtype=int32)
tf.Tensor([ 4 10 7], shape=(3,), dtype=int32)
tf.Tensor([8], shape=(1,), dtype=int32)
tf.Tensor([6 7], shape=(2,), dtype=int32)**

Note the shape of the individual slices.

A common way of creating a ragged array is by using the `tf.RaggedTensor.from_row_splits()` method, which has the following signature:

@classmethod
from_row_splits(
    cls,
    values,
    row_splits,
    name=None
)

Here, `values` is a list of the values to be turned into the ragged array, and `row_splits` is a list of the positions where the value list is to be split, so that the values for row `ragged[i]` are stored in `ragged.values[ragged.row_splits[i]:ragged.row_splits[i+1]]`:

print(tf.RaggedTensor.from_row_splits(values=[5, 2, 6, 1, 4, 10, 7, 8, 6, 7],
                                      row_splits=[0, 4, 4, 7, 8, 10]))

The resulting `RaggedTensor` is as follows:

**<tf.RaggedTensor [[5, 2, 6, 1], [], [4, 10, 7], [8], [6, 7]]>**

There is a complete list of all TensorFlow Python modules, classes,Â and functions atÂ https://www.tensorflow.org/api_docs/python/tf.

All of the maths functions can be found atÂ https://www.tensorflow.org/api_docs/python/tf/math.

In this section, we will look at some useful TensorFlow operations, especially within the context of neural network programming.

Later in this book, we will need to find the square of the difference between two tensors. The method is as follows:

tf.math.squared_difference(x, y, name=None)

Take the following example:

x = [1, 3, 5, 7, 11]
y = 5
s = tf.math.squared_difference(x, y)
s

The output will be as follows:

**<tf.Tensor: id=279, shape=(5,), dtype=int32, numpy=array([16, 4, 0, 4, 36], dtype=int32)>**

Note that the Python variables, `x` and `y`, are cast into tensors and that `y` is then broadcast across `x` in this example. So, for example, the first calculation is *(1 - 5)² = 16*.

The following is the signature of `tf.reduce_mean()`.

Note that, in what follows, all TensorFlow operations have a name argument that can safely be left to the default of `None` when using eager execution, as its purpose is to identify the operation in a computational graph.

Note that this is equivalent to `np.mean`, except that it infers the return datatype from the input tensor, whereas `np.mean` allows you to specify the output type (defaulting to `float64`):

**tf.reduce_mean(input_tensor, axis=None, keepdims=None, name=None)**

It is frequently necessary to find the mean value of a tensor. When this is done across a single axis, this axis is said to be reduced.

Here are some examples:

numbers = tf.constant([[4., 5.], [7., 3.]])

Find the mean across all axes (that is, use the default `axis=None`) with this:

tf.reduce_mean(input_tensor=numbers) #( 4. + 5. + 7. + 3.)/4 = 4.75

The output will be as follows:

**<tf.Tensor: id=272, shape=(), dtype=float32, numpy=4.75>**

Find the mean across columns (that is, reduce rows) with this:

tf.reduce_mean(input_tensor=numbers, axis=0) # [ (4. + 7. )/2 , (5. + 3.)/2 ] = [5.5, 4.]

The output will be as follows:

**<tf.Tensor: id=61, shape=(2,), dtype=float32, numpy=array([5.5, 4. ], dtype=float32)>**

When `keepdims` is `True`, the reduced axis is retained with a length of 1:

tf.reduce_mean(input_tensor=numbers, axis=0, keepdims=True)

The output is as follows:

**array([[5.5, 4.]]) (1 row, 2 columns)**

Find the mean across rows (that is, reduce columns) with this:

tf.reduce_mean(input_tensor=numbers, axis=1) # [ (4. + 5. )/2 , (7. + 3. )/2] = [4.5, 5]

The output will be as follows:

**<tf.Tensor: id=64, shape=(2,), dtype=float32, numpy=array([4.5, 5. ], dtype=float32)>**

When `keepdims` is `True`, the reduced axis is retained with a length of 1:

tf.reduce_mean(input_tensor=numbers, axis=1, keepdims=True)

The output is as follows:

**([[4.5], [5]]) (2 rows, 1 column)**

Random values are frequently required when developing neural networks, for example, when initializing weights and biases. TensorFlow provides a number of methods for generating these random values.

`tf.random.normal()` outputs a tensor of the given shape filled with values of the `dtype` type from a normal distribution.

The required signature is as follows:

tf.random.normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)

Take this, for example:

ran = tf.random.normal(shape=(3, 2), mean=10.0, stddev=2.0)
print(ran)

The output will be as follows:

**<tf.Tensor: id=13, shape=(3, 2), dtype=float32, numpy= array([[ 8.537131 , 7.6625767], [10.925293 , 11.804686 ], [ 9.3763075, 6.701221 ]], dtype=float32)>**

The required signature is this:

tf.random.uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)

This outputs a tensor of the given shape filled with values from a uniform distribution in the range `minval` to `maxval`, where the lower bound is inclusive but the upper bound isn't.

Take this, for example:

tf.random.uniform(shape=(2, 2), minval=0, maxval=13, dtype=tf.int32, seed=None, name=None)

The output will be as follows:

**tf.Tensor( [[ 6 7] [ 0 12]], shape=(2, 2), dtype=int32)**


Note that, for both of these random operations, if you want the random values generated to be repeatable, then use `tf.random.set_seed()`. Use of a non-default datatype is also shown here:

tf.random.set_seed(11)
ran1 = tf.random.uniform(shape=(2, 2), maxval=10, dtype=tf.int32)
ran2 = tf.random.uniform(shape=(2, 2), maxval=10, dtype=tf.int32)
print(ran1) # Call 1
print(ran2)

tf.random.set_seed(11) # same seed
ran1 = tf.random.uniform(shape=(2, 2), maxval=10, dtype=tf.int32)
ran2 = tf.random.uniform(shape=(2, 2), maxval=10, dtype=tf.int32)
print(ran1) # Call 2
print(ran2)

`Call 1` and `Call 2` will return the same set of values.

The output will be as follows:

tf.Tensor(
[[4 6]
 [5 2]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[9 7]
 [9 4]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[4 6]
 [5 2]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[9 7]
 [9 4]], shape=(2, 2), dtype=int32)

Here is a little example adapted for eager execution from https://colab.research.google.com/notebooks/mlcc/creating_and_manipulating_tensors.ipynb#scrollTo=6UUluecQSCvr.

Notice that this example shows how to initialize an eager variable with a call to a TensorFlow function.

dice1 = tf.Variable(tf.random.uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
dice2 = tf.Variable(tf.random.uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))

# We may add dice1 and dice2 since they share the same shape and size.
dice_sum = dice1 + dice2

# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(values=[dice1, dice2, dice_sum], axis=1)

print(resulting_matrix)

The sample output will be as follows:

tf.Tensor(
[[ 5 4 9]
 [ 5 1 6]
 [ 2 4 6]
 [ 5 6 11]
 [ 4 4 8]
 [ 4 6 10]
 [ 2 2 4]
 [ 5 6 11]
 [ 2 6 8]
 [ 5 4 9]], shape=(10, 3), dtype=int32)

We will now look at how to find the indices of the elements with the largest and smallest values, respectively, across the axes of a tensor.

The signatures of the functions are as follows:

tf.argmax(input, axis=None, name=None, output_type=tf.int64)
tf.argmin(input, axis=None, name=None, output_type=tf.int64)

Take this, for example:

# 1-D tensor
t5 = tf.constant([2, 11, 5, 42, 7, 19, -6, -11, 29])
print(t5)

i = tf.argmax(input=t5)
print('index of max; ', i)
print('Max element: ', t5[i].numpy())

i = tf.argmin(input=t5, axis=0).numpy()
print('index of min: ', i)
print('Min element: ', t5[i].numpy())

t6 = tf.reshape(t5, [3, 3])
print(t6)

i = tf.argmax(input=t6, axis=0).numpy() # max arg down rows
print('indices of max down rows; ', i)
i = tf.argmin(input=t6, axis=0).numpy() # min arg down rows
print('indices of min down rows ; ', i)

print(t6)
i = tf.argmax(input=t6, axis=1).numpy() # max arg across cols
print('indices of max across cols: ', i)
i = tf.argmin(input=t6, axis=1).numpy() # min arg across cols
print('indices of min across cols: ', i)

The output will be as follows:

**tf.Tensor([ 2 11 5 42 7 19 -6 -11 29], shape=(9,), dtype=int32)
index of max; tf.Tensor(3, shape=(), dtype=int64)
Max element: 42
index of min: tf.Tensor(7, shape=(), dtype=int64)
Min element: -11
tf.Tensor( [[ 2 11 5] [ 42 7 19] [ -6 -11 29]], shape=(3, 3), dtype=int32)
indices of max down rows; tf.Tensor([1 0 2], shape=(3,), dtype=int64)
indices of min down rows ; tf.Tensor([2 2 0], shape=(3,), dtype=int64)
tf.Tensor( [[ 2 11 5] [ 42 7 19] [ -6 -11 29]], shape=(3, 3), dtype=int32)
indices of max across cols: tf.Tensor([1 0 2], shape=(3,), dtype=int64)
indices of min across cols: tf.Tensor([0 1 1], shape=(3,), dtype=int64)**

In order to save and load the values of tensors, here is the best method (see Chapter 2, *Keras, a High-Level API for TensorFlow 2*, for methods to save complete models):

variable = tf.Variable([[1, 3, 5, 7], [11, 13, 17, 19]])
checkpoint = tf.train.Checkpoint(var=variable)
save_path = checkpoint.save('./vars')

variable.assign([[0, 0, 0, 0], [0, 0, 0, 0]])
variable

checkpoint.restore(save_path)
print(variable)

The output will be as follows:

**<tf.Variable 'Variable:0' shape=(2, 4) dtype=int32, numpy= array([[ 1, 3, 5, 7], [11, 13, 17, 19]], dtype=int32)>**

`tf.function` is a function that will take a Python function and return a TensorFlow graph. The advantage of this is that graphs can apply optimizations and exploit parallelism in the Python function (`func`). `tf.function` is new to TensorFlow 2.

Its signature is as follows:

tf.function(func=None, input_signature=None, autograph=True, experimental_autograph_options=None)

An example is as follows:

def f1(x, y):
    return tf.reduce_mean(input_tensor=tf.multiply(x ** 2, 5) + y ** 2)

f2 = tf.function(f1)

x = tf.constant([4., -5.])
y = tf.constant([2., 3.])

# f1 and f2 return the same value, but f2 executes as a TensorFlow graph
assert f1(x, y).numpy() == f2(x, y).numpy()

The assert passes, so there is no output.

In this chapter, we started to become familiar with TensorFlow by looking at a number of snippets of code illustrating some basic operations. We had a look at an overview of the modern TensorFlow ecosystem and how to install TensorFlow. We also examined some housekeeping operations, some eager operations, and a variety of TensorFlow operations that will be useful in the rest of this book. There is an excellent introduction to TensorFlow 2 at www.youtube.com/watch?v=k5c-vg4rjBw.

Also check out *Appendix A* for details of a `tf1.12` to `tf2` conversion tool. In the next chapter, we will take a look at Keras, which is a high-level API for TensorFlow 2.