Machine learning REST service

Now that we've got our Dockerfile built, we're going to run a REST service inside our container. In this section, we will take a look at running Docker with the correct command-line arguments, the URL exposed by our REST service, and finally we'll verify that Keras is fully installed and operational.

And now for the payoff: we're actually going to run our container using the docker run command. There are a couple of switches we're going to pass here: -p tells Docker to map port 8888 on the container to port 8888 on our PC, and -v mounts our local working directory (which is where we cloned the source code from GitHub) as the /src volume inside the container:

C:\11519>docker run -p 8888:8888 -v C:/11519/:/src keras
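If you're working on Linux or macOS instead, only the host-side path in the -v mount changes; a minimal sketch, assuming you cloned the repository to ~/kerasvideo:

docker run -p 8888:8888 -v ~/kerasvideo/:/src keras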

Press Enter, and you'll be presented with a token that we're actually going to use to test logging in to the IPython container with our web browser:

Output—docker run

Note that this token will be unique for each run and will differ on your PC.
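If you lose track of the token, you don't have to restart the container: Jupyter can list its running servers along with their tokens. A minimal sketch, with <container-id> standing in for the ID that docker ps reports:

docker ps
docker exec <container-id> jupyter notebook list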

Now, if you have a GPU on a Linux-based machine, there is a separate Dockerfile in the gpu folder that you can use to build a Docker container with accelerated GPU support. So, as you can see here, we're just building that Docker container and calling it keras-gpu:

Building Docker container
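The screenshot shows the build in progress; if you want to type the command yourself, it's analogous to the CPU build. A sketch, assuming the GPU Dockerfile is named Dockerfile and lives in a folder called gpu inside the clone:

docker build -t keras-gpu gpu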

It takes a little while to build the container. There's really nothing important to notice in the output; you just need to make sure that the container was actually built successfully at the end:

Building Docker container

Now, with the container built, we're going to go ahead and run it. We're going to run it with nvidia-docker, which exposes the GPU device through to your Docker container:

sudo nvidia-docker run -p 8888:8888 -v ~/kerasvideo/:/src keras-gpu

Otherwise, the command-line switches are the same as for running the plain Keras container, except that the command is nvidia-docker and the image is keras-gpu. Once the container is up and running, you'll get a URL; take that URL and paste it into your browser to access the IPython Notebook being served by the container:

Output—docker run on Ubuntu system
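As an aside, nvidia-docker was the standard wrapper when this book was written; on Docker 19.03 and later, GPU support is built into Docker itself, and the equivalent command (assuming the NVIDIA container toolkit is installed) would be:

sudo docker run --gpus all -p 8888:8888 -v ~/kerasvideo/:/src keras-gpu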

Now, we'll quickly make a new IPython Notebook. When it launches, we'll import keras and make sure it loads, which takes a second to come up:

Loading Keras

Then, we'll use the following code that uses TensorFlow in order to detect GPU support:

# List every compute device TensorFlow can see (CPUs and any GPUs)
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

So, we'll be running the preceding bit of code in order to see the libraries and devices:

Detecting libraries and devices

Now, we can see that we have a GPU.
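If you just want a yes/no answer rather than the full device list, TensorFlow 1.x (the version this book was written against) also offers a one-line check; a minimal sketch:

import tensorflow as tf
# True when a CUDA-capable GPU has been registered with TensorFlow
print(tf.test.is_gpu_available())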

Flipping over to our web browser, go ahead and paste that URL and go:

Browser window (localhost)

Oops! It can't be reached: 0.0.0.0 is the address the server binds to inside the container, not an address your browser can visit. We'll switch that to localhost, hit Enter, and sure enough we have an IPython Notebook:

IPython Notebook
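To make that address change concrete, you edit only the host part of the URL; the token is whatever your own container printed (shown here as a placeholder):

http://0.0.0.0:8888/?token=<your-token> becomes http://localhost:8888/?token=<your-token>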

We'll go ahead and create a new Python 3 Notebook, and give it a quick test by seeing if we can import the keras library and make sure everything's okay.
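That test is a single line in a notebook cell; on import, Keras announces which backend it is using, which is how we can tell TensorFlow is wired up correctly:

import keras
# On load, Keras prints a line such as: Using TensorFlow backend.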

Looks like we're all set. Our TensorFlow backend is good to go!

This is the environment that we'll be using throughout this book: a Docker container, fully prepared and ready to go, so that all you need to do is start it, run it, and then work with the Keras and IPython Notebooks hosted inside. That gives you an easy, repeatable environment every time.
