Getting Started with Mobile TensorFlow
This chapter covers how to set up your development environments for building the iOS and Android apps with TensorFlow discussed in the rest of the book. We won't discuss in detail all the supported TensorFlow, OS, Xcode, and Android Studio versions that can be used for development, as that kind of information can easily be found on the TensorFlow website (http://www.tensorflow.org) or via Google. Instead, we'll just talk briefly about sample working environments in this chapter so that we can quickly dive into all the amazing apps you can build with them.
If you already have TensorFlow, Xcode, and Android Studio installed, and can run and test the sample TensorFlow iOS and Android apps, and if you already have an NVIDIA GPU installed for faster deep learning model training, you can skip this chapter. Or you can jump directly to the section that you're unfamiliar with.
We're going to cover the following topics in this chapter (how to set up the Raspberry Pi development environment will be discussed in Chapter 12, Developing TensorFlow Apps on Raspberry Pi):
- Setting up TensorFlow
- Setting up Xcode
- Setting up Android Studio
- TensorFlow Mobile vs TensorFlow Lite
- Running sample TensorFlow iOS apps
- Running sample TensorFlow Android apps
Setting up TensorFlow
TensorFlow is the leading open source framework for machine intelligence. When Google released TensorFlow as an open source project in November 2015, there were already several other similar open source frameworks for deep learning: Caffe, Torch, and Theano. Two years later, TensorFlow had already become the most popular open source framework for training and deploying deep learning models (it also has good support for traditional machine learning). As of January 2018, TensorFlow had close to 85k stars (https://github.com/tensorflow/tensorflow) on GitHub, while the other three leading open source deep learning frameworks, Caffe (https://github.com/BVLC/caffe), CNTK (https://github.com/Microsoft/CNTK), and Mxnet (https://github.com/apache/incubator-mxnet), had over 22k, 13k, and 12k stars, respectively. By Google I/O 2018 on May 10, TensorFlow had reached 99k stars on GitHub, an increase of 14k stars in 4 months, while Caffe had increased by only 2k to 24k stars.
We assume you already have a basic understanding of TensorFlow, but if you don't, you should check out the Get Started (https://www.tensorflow.org/get_started) and Tutorials (https://www.tensorflow.org/tutorials) parts of the TensorFlow website or the Awesome TensorFlow tutorials (https://github.com/jtoy/awesome-tensorflow). Two good books on the topic are Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition by Sebastian Raschka and Vahid Mirjalili, and Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron.
TensorFlow can be installed on MacOS, Ubuntu, or Windows. We'll cover the steps to install TensorFlow 1.4 from source on OS X El Capitan (10.11.6), macOS Sierra (10.12.6), and Ubuntu 16.04. If you have a different OS or version, you can refer to the TensorFlow Install (https://www.tensorflow.org/install) documentation for more information. By the time you read this book, a newer TensorFlow version will likely have come out. Although you should still be able to run the code in the book with the newer version, that's not guaranteed, which is why we use the TensorFlow 1.4 release source code to set up TensorFlow on Mac and Ubuntu; that way, you can easily test run and play with the apps in the book.
Overall, we'll use TensorFlow on Mac to develop iOS and Android TensorFlow apps, and TensorFlow on Ubuntu to train deep learning models used in the apps.
Setting up TensorFlow on MacOS
Generally, you should use a virtualenv, Docker, or Anaconda installation to install TensorFlow in an isolated environment. But as we have to build iOS and Android TensorFlow apps using the TensorFlow source code, we might as well build TensorFlow itself from source, in which case using the native pip installation could be easier than the other options. If you want to experiment with different TensorFlow versions, we recommend you install them using one of the virtualenv, Docker, or Anaconda options. Here, we'll install TensorFlow 1.4 directly on your MacOS system using native pip and Python 2.7.10.
Follow these steps to download and install TensorFlow 1.4 on MacOS:
- Download the TensorFlow 1.4.0 source code (zip or tar.gz) from the TensorFlow releases page on GitHub: https://github.com/tensorflow/tensorflow/releases
- Uncompress the downloaded file and drag the tensorflow-1.4.0 folder to your home directory
- Make sure you have Xcode 8.2.1 or above installed (if not, read the Setting up Xcode section first)
- Open a new Terminal window, then cd tensorflow-1.4.0
- Run xcode-select --install to install command-line tools
- Run the following commands to install other tools and packages needed to build TensorFlow:
sudo pip install six numpy wheel
brew install automake
brew install libtool
./configure
brew upgrade bazel
- Build from the TensorFlow source with CPU-only support (we'll cover the GPU support in the next section) and generate a pip package file with the .whl file extension:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
- Install the TensorFlow 1.4.0 CPU package:
sudo pip install --upgrade /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27m-macosx_10_12_intel.whl
If you encounter any error during the process, googling the error message is, to be honest, the best way to fix it, as in this book we intend to focus on the tips and knowledge, not easily available elsewhere, gained from our long hours of building and debugging practical mobile TensorFlow apps. One particular error you may see is an Operation not permitted error when running the sudo pip install commands. To fix it, you can disable your Mac's System Integrity Protection (SIP) by restarting the Mac and pressing Cmd + R to enter recovery mode, then, under Utilities | Terminal, running csrutil disable before restarting the Mac. If you're uncomfortable with disabling SIP, you can follow the TensorFlow documentation to try one of the more complicated installation methods, such as virtualenv.
If everything goes well, you should be able to launch Python, or preferably IPython, in your Terminal window, run import tensorflow as tf followed by tf.__version__, and see 1.4.0 as the output.
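As a quick sanity check beyond the version string, here is a minimal sketch that builds and runs a trivial graph with the freshly installed package:

```
import tensorflow as tf

# print the installed version; the build above should show 1.4.0
print(tf.__version__)

# build and run a trivial graph to confirm the runtime works end to end
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))
```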
Setting up TensorFlow on GPU-powered Ubuntu
One of the benefits of using a good deep learning framework such as TensorFlow is its seamless support for using a Graphics Processing Unit (GPU) in model training. Training a non-trivial TensorFlow-based model is a lot faster on a GPU than on a CPU, and currently NVIDIA offers the best and most cost-effective GPUs supported by TensorFlow. Ubuntu is also the best OS for running NVIDIA GPUs with TensorFlow. You can easily buy a GPU for a few hundred bucks and install it on an inexpensive desktop running Ubuntu. You can also install an NVIDIA GPU on Windows, but TensorFlow's support for Windows is not as good as its support for Ubuntu.
To train the models deployed in the apps in this book, we use an NVIDIA GTX 1070, which you can purchase on Amazon or eBay for about $400. There's a good blog post by Tim Dettmers on which GPUs to use for deep learning (http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/). After you get such a GPU and install it on your Ubuntu system, and before you install the GPU-enabled TensorFlow, you need to install NVIDIA CUDA 8.0 (or 9.0) and cuDNN (CUDA Deep Neural Network library) 6.0 (or 7.0), both of which are supported by TensorFlow 1.4.
Follow these steps to install CUDA 8.0 and cuDNN 6.0 on Ubuntu 16.04 (you should be able to download and install CUDA 9.0 and cuDNN 7.0 in a similar way):
- Find the NVIDIA CUDA 8.0 GA2 release at https://developer.nvidia.com/cuda-80-ga2-download-archive, and make the selections shown in the following screenshot:

- Download the base installer as shown in the following screenshot:

- Open a new Terminal and run the following commands (you'll also need to add the last two commands to your .bashrc file for the two environment variables to take effect next time you launch a new Terminal):
sudo dpkg -i /home/jeff/Downloads/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda-8-0
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
- Download the NVIDIA cuDNN 6.0 for CUDA 8.0 at https://developer.nvidia.com/rdp/cudnn-download – you'll be asked to sign up (for free) with an NVIDIA Developer account first before you can download it, as shown in the next screenshot (choose the highlighted cuDNN v6.0 Library for Linux):

- Unzip the downloaded file, assuming it's under the default ~/Downloads directory, and you'll see a folder named cuda, with two subfolders named include and lib64
- Copy the cuDNN include and lib64 files to the CUDA_HOME's lib64 and include folders:
sudo cp ~/Downloads/cuda/lib64/* /usr/local/cuda/lib64
sudo cp ~/Downloads/cuda/include/cudnn.h /usr/local/cuda/include
Now we're ready to install the GPU-enabled TensorFlow 1.4 on Ubuntu (the first two steps given here are the same as those described in the section Setting up TensorFlow on MacOS):
- Download the TensorFlow 1.4.0 source code (zip or tar.gz) from the TensorFlow releases page on GitHub: https://github.com/tensorflow/tensorflow/releases
- Uncompress the downloaded file and drag the folder to your home directory
- Download the bazel installer, bazel-0.5.4-installer-linux-x86_64.sh at https://github.com/bazelbuild/bazel/releases
- Open a new Terminal Window, then run the following commands to install the tools and packages needed to build TensorFlow:
sudo pip install six numpy wheel
cd ~/Downloads
chmod +x bazel-0.5.4-installer-linux-x86_64.sh
./bazel-0.5.4-installer-linux-x86_64.sh --user
- Build from the TensorFlow source with GPU-enabled support and generate a pip package file with the .whl file extension:
cd ~/tensorflow-1.4.0
./configure
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
- Install the TensorFlow 1.4.0 GPU package:
sudo pip install --upgrade /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27mu-linux_x86_64.whl
Now, if all goes well, you can launch IPython and enter the following commands to see the GPU information TensorFlow is using:
In [1]: import tensorflow as tf
In [2]: tf.__version__
Out[2]: '1.4.0'
In [3]: sess=tf.Session()
2017-12-28 23:45:37.599904: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-12-28 23:45:37.600173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.60GiB
2017-12-28 23:45:37.600186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
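If you prefer a programmatic check instead of reading the session startup log, the following minimal sketch lists the devices TensorFlow can see and pins a small computation to the GPU; note that device_lib lives under tensorflow.python.client and isn't part of the public API, so treat it only as a convenience check:

```
import tensorflow as tf
from tensorflow.python.client import device_lib

# list every device TensorFlow can use; a working GPU setup
# should include a /device:GPU:0 entry of type GPU
for device in device_lib.list_local_devices():
    print('%s (%s)' % (device.name, device.device_type))

# optionally, place a small computation on the GPU and log op placement
with tf.device('/device:GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])
    b = a * 2
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(b))
```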
Congratulations! You're now ready to train the deep learning models used in the apps in this book. Before we start having fun with our new toy and use it to train our cool models and then deploy and run them on mobile devices, let's first see what it takes to be ready for developing mobile apps.
Setting up Xcode
Xcode is used to develop iOS apps, and you need a Mac computer and a free Apple ID to download and install it. If your Mac is relatively old and runs OS X El Capitan (version 10.11.6), you can download Xcode 8.2.1 from https://developer.apple.com/download/more. If you have macOS Sierra (version 10.12.6) or later installed, you can download Xcode 9.2 or 9.3, the latest version as of May 2018, from the preceding link. All the iOS apps in the book have been tested in Xcode 8.2.1, 9.2, and 9.3.
To install Xcode, simply double-click the downloaded file and follow the steps on the screen. It's pretty straightforward. You can then run apps on the iOS simulator that comes with Xcode or on your own iOS device. Starting with Xcode 7, you can run and debug your iOS apps on an iOS device for free, but if you want to distribute or publish your apps, you need to enroll in the Apple Developer Program for $99 a year as an individual: https://developer.apple.com/programs/enroll.
Although you can test run many apps in the book with the Xcode simulator, some apps in the book require the camera on an actual iOS device to take a picture before processing it with a deep learning model trained with TensorFlow. In addition, it's generally better to test a model on an actual device for accurate performance and memory usage: a model that runs fine in the simulator may crash or run too slowly on an actual device. So it's strongly recommended, if not required, that you test and run the iOS apps in the book on your actual iOS device at least once, if not always.
This book assumes you're comfortable with iOS programming, but if you're new to iOS development, you can learn from many excellent online tutorials such as the iOS tutorials by Ray Wenderlich (https://www.raywenderlich.com). We won't cover complicated iOS programming; we'll mainly show you how to use the TensorFlow C++ API in our iOS apps to run the TensorFlow trained models to perform all kinds of intelligent tasks. Both Objective-C and Swift code, the two official iOS programming languages from Apple, will be used to interact with the C++ code in our mobile AI apps.
Setting up Android Studio
Android Studio is the best tool for Android app development, and TensorFlow has great support for using it. Unlike Xcode, you can install and run Android Studio on Mac, Windows, or Linux. For detailed system requirements, see the Android Studio website (https://developer.android.com/studio/index.html). Here, we'll just cover how to set up Android Studio 3.0 or 3.0.1 on Mac – all the apps in the book have been tested on both versions.
First, download Android Studio 3.0.1, or the latest version if it's newer than 3.0.1 and you don't mind fixing possible minor issues, from the preceding link. You can also download 3.0.1 or 3.0 from the archives at https://developer.android.com/studio/archive.html.
Then, double-click the downloaded file and drag and drop the Android Studio.app icon to Applications. If you have a previously installed Android Studio, you'll get a prompt asking you if you want to replace it with the newer one. You can just select Replace.
Next, open Android Studio. You'll need to provide the path to the Android SDK, which by default is in ~/Library/Android/sdk if you have a previous version of Android Studio installed. Then select Open an existing Android Studio project, go to the TensorFlow 1.4 source directory created in the section Setting up TensorFlow on MacOS, and open the tensorflow/examples/android folder. After that, you can download the Android SDK by either clicking the link in an Install Build Tools message or going to Android Studio's Tools | Android | SDK Manager, as shown in the following screenshot. From the SDK Tools tab there, you can check the box next to a specific version of Android SDK Tools and click the OK button to install that version:

Finally, as the TensorFlow Android apps use the native TensorFlow library in C++ to load and run TensorFlow models, you need to install the Android Native Development Kit (NDK), which you can do either from the Android SDK Manager shown in the preceding screenshot, or by downloading the NDK directly from https://developer.android.com/ndk/downloads/index.html. Both NDK versions r16b and r15c have been tested to run the Android apps in the book. If you download the NDK directly, you may also need to set the Android NDK location after opening your project, by selecting Android Studio's File | Project Structure, as shown in the following screenshot:

With both Android SDK and NDK installed and set up, you're ready to test run sample TensorFlow Android apps.
TensorFlow Mobile vs TensorFlow Lite
Before we start running the sample TensorFlow iOS and Android apps, let's clarify the big picture. TensorFlow currently offers two approaches to developing and deploying deep learning apps on mobile devices: TensorFlow Mobile and TensorFlow Lite. TensorFlow Mobile has been part of TensorFlow from the beginning, while TensorFlow Lite is a newer way to develop and deploy TensorFlow apps, offering better performance and a smaller app size. But there's one key factor that leads us to focus on TensorFlow Mobile in this book, while still covering TensorFlow Lite in one chapter: TensorFlow Lite is still in developer preview as of TensorFlow 1.8 and Google I/O 2018 in May 2018. So to develop production-ready mobile TensorFlow apps now, you have to use TensorFlow Mobile, as recommended by Google.
Another reason we decided to focus on TensorFlow Mobile is that, while TensorFlow Lite offers only limited support for model operators, TensorFlow Mobile can be customized to add new operators that it doesn't support by default, which, as you'll see, happens pretty often with the various models used in our different AI apps.
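A quick way to gauge whether a particular model will run into this operator limitation is to list the distinct op types in its frozen graph and compare them against what your chosen runtime supports. Below is a minimal sketch; model.pb is just a placeholder path for whatever frozen TensorFlow graph you want to inspect:

```
import tensorflow as tf

# load a frozen TensorFlow graph (a GraphDef protobuf);
# 'model.pb' is a placeholder path for the model you plan to deploy
graph_def = tf.GraphDef()
with tf.gfile.GFile('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# collect the distinct op types the graph uses; compare this list against
# the operators supported by TensorFlow Lite or your TensorFlow Mobile build
ops_used = sorted({node.op for node in graph_def.node})
print('%d distinct ops used:' % len(ops_used))
for op in ops_used:
    print('  ' + op)
```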
But in the future, when TensorFlow Lite is out of developer preview, it's likely to replace TensorFlow Mobile, or at least overcome its current limitations. To get yourself ready for that, we'll cover TensorFlow Lite in detail in a later chapter.
Running sample TensorFlow iOS apps
In the last two sections of this chapter, we'll test run three sample iOS apps and four sample Android apps that come with TensorFlow 1.4, to make sure your mobile TensorFlow development environments are set up correctly and to give you a quick preview of what some TensorFlow mobile apps can do.
The source code of the three sample TensorFlow iOS apps, simple, camera, and benchmark, is located at tensorflow/examples/ios. To run these samples successfully, you first need to download a pretrained deep learning model from Google, called Inception (https://github.com/tensorflow/models/tree/master/research/inception), for image recognition. There are several versions of Inception, v1 to v4, with better accuracy in each newer version. Here we'll use Inception v1, as the samples were developed for it. After downloading the model file, copy the model-related files to each sample's data folder:
curl -o ~/graphs/inception5h.zip https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
unzip ~/graphs/inception5h.zip -d ~/graphs/inception5h
cd tensorflow/examples/ios
cp ~/graphs/inception5h/* simple/data/
cp ~/graphs/inception5h/* camera/data/
cp ~/graphs/inception5h/* benchmark/data/
Now, go to each app folder and run the following commands to download the required pod for each app before opening and running the apps:
cd simple
pod install
open tf_simple_example.xcworkspace
cd ../camera
pod install
open tf_camera_example.xcworkspace
cd ../benchmark
pod install
open tf_benchmark_example.xcworkspace
You can then run the three apps on an iOS device, or the simple and benchmark apps on an iOS simulator. If you tap the Run Model button after running the simple app, you'll see a text message saying that the TensorFlow Inception model is loaded, followed by several top recognition results along with confidence values.
If you tap the Benchmark Model button after running the benchmark app, you'll see the average time it takes to run the model over 20 runs. For example, it takes an average of about 0.2089 seconds on my iPhone 6, and 0.0359 seconds on the iPhone 6 simulator.
Finally, running the camera app on an iOS device and pointing the device camera around shows you the objects the app sees and recognizes in real time.
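Under the hood, the simple and camera apps just feed a 224 x 224 image into the Inception v1 graph you copied above and read back the top-scoring labels, while the benchmark app times repeated runs of the same inference. Here's a rough desktop Python sketch of that flow; the file names inside the inception5h package and the input:0 / output:0 tensor names are assumptions based on that package, not something shown in the sample code above:

```
import numpy as np
import tensorflow as tf

# assumed contents of the inception5h package downloaded above:
# tensorflow_inception_graph.pb and imagenet_comp_graph_label_strings.txt,
# with 'input:0' / 'output:0' as the graph's input and output tensors
MODEL_DIR = 'simple/data'

# load the frozen Inception v1 graph into the default graph
graph_def = tf.GraphDef()
with tf.gfile.GFile(MODEL_DIR + '/tensorflow_inception_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

# load the label strings, one per line
with open(MODEL_DIR + '/imagenet_comp_graph_label_strings.txt') as f:
    labels = [line.strip() for line in f]

# a random 224x224 RGB "image" just to exercise the graph; feed real,
# mean-subtracted pixels to get meaningful predictions
image = np.random.randint(0, 256, (1, 224, 224, 3)).astype(np.float32)

with tf.Session() as sess:
    scores = sess.run('output:0', feed_dict={'input:0': image})[0]
    top3 = np.argsort(scores)[-3:][::-1]
    for i in top3:
        label = labels[i] if i < len(labels) else 'unknown'
        print('%s: %f' % (label, scores[i]))
```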
Running sample TensorFlow Android apps
There are four sample TensorFlow Android apps named TF Classify, TF Detect, TF Speech, and TF Stylize, located in tensorflow/examples/android. The easiest way to run these samples is to just open the project in the preceding folder using Android Studio, as shown in the Setting up Android Studio section, then make a single change by editing the project's build.gradle file and changing def nativeBuildSystem = 'bazel' to def nativeBuildSystem = 'none'.
Now connect an Android device to your computer, then build, install, and run the app by selecting Android Studio's Run | Run 'android'. This installs four Android apps named TF Classify, TF Detect, TF Speech, and TF Stylize on your device. TF Classify is just like the iOS camera app, using the TensorFlow Inception v1 model to do real-time object classification with the device camera. TF Detect uses a different model, a Single Shot Multibox Detector (SSD) with MobileNet, a family of deep learning models Google released that target mobile and embedded devices in particular, to perform object detection, drawing rectangles around detected objects. TF Speech uses yet another deep learning model, for speech recognition, to listen for and recognize a small set of words such as Yes, No, Left, Right, Stop, and Go. TF Stylize uses yet another model to change the style of the images the camera sees. For more detailed information on these apps, you can check out the TensorFlow Android example documentation at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android.
Summary
In this chapter, we covered how to install TensorFlow 1.4 on Mac and Ubuntu, how to set up a cost-effective NVIDIA GPU on Ubuntu for faster model training, and how to set up Xcode and Android Studio for mobile AI app development. We also showed you how to run some cool sample TensorFlow iOS and Android apps. In the rest of the book, we'll discuss in detail how to build, train, or retrain each of the models used in those apps, and many others, on our GPU-powered Ubuntu system, and show you how to deploy the models in iOS and Android apps and write the code to use them in your mobile AI apps. Now that we're all set and ready, we can't wait to hit the road. It'll be an exciting journey, one we'd certainly be happy to share with our friends. So why not start with our best friends, and see what it takes to build a dog breed recognition app?