
You're reading from Caffe2 Quick Start Guide

Product type: Book
Published in: May 2019
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781789137750
Edition: 1st
Author (1)
Ashwin Nanjappa

Ashwin Nanjappa is a senior architect at NVIDIA, working in the TensorRT team on improving deep learning inference on GPU accelerators. He holds a PhD from the National University of Singapore, where he developed GPU algorithms for the fundamental computational geometry problem of 3D Delaunay triangulation. As a postdoctoral research fellow at the BioInformatics Institute (Singapore), he developed GPU-accelerated machine learning algorithms for pose estimation using depth cameras. As an algorithms research engineer at Visenze (Singapore), he implemented computer vision algorithm pipelines in C++, developed a training framework built on Caffe in Python, and trained deep learning models for some of the world's most popular online shopping portals.

What this book covers

Chapter 1, Introduction and Installation, introduces Caffe2 and examines how to build and install it.

Chapter 2, Composing Networks, teaches you about Caffe2 operators and how to compose them to build a simple computation graph and a neural network to recognize handwritten digits.
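Caffe2 composes networks from operators that read and write named blobs in a workspace. The chapter uses Caffe2's own APIs for this; as a framework-agnostic sketch of the idea only, the hypothetical `fc` and `relu` helpers below mimic that style in NumPy (this is not the Caffe2 API):

```python
import numpy as np

# Toy "operators" in the spirit of a Caffe2 computation graph:
# each one reads named input blobs from a workspace and writes output blobs.
def fc(workspace, inp, weights, bias, out):
    workspace[out] = workspace[inp] @ workspace[weights] + workspace[bias]

def relu(workspace, inp, out):
    workspace[out] = np.maximum(workspace[inp], 0.0)

# A "network" is then just an ordered list of operators over the workspace.
workspace = {
    "data": np.array([[1.0, -2.0]]),
    "w":    np.array([[0.5, 1.0], [0.25, -1.0]]),
    "b":    np.array([0.0, 0.0]),
}
net = [
    lambda ws: fc(ws, "data", "w", "b", "hidden"),
    lambda ws: relu(ws, "hidden", "out"),
]
for op in net:
    op(workspace)

print(workspace["out"])  # the negative pre-activation is clamped to 0 by relu
```

In Caffe2 itself, the analogous roles are played by the workspace of named blobs and by operators added to a net, but the compose-then-run structure is the same.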

Chapter 3, Training Networks, shows how to use Caffe2 to compose a network for training and how to train that network to solve the MNIST problem.
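Training, whatever the framework, repeats the same step: run the network forward, measure the loss, and nudge the parameters against the gradient. As a minimal framework-agnostic sketch (not the chapter's Caffe2 code), here is one gradient-descent step for a single linear neuron with squared loss:

```python
import numpy as np

# One SGD step for a linear neuron: loss = 0.5 * (w.x + b - y)^2.
# A training loop repeats this step over the dataset many times.
def sgd_step(w, b, x, y, lr=0.1):
    pred = w @ x + b
    err = pred - y              # d(loss)/d(pred)
    w_new = w - lr * err * x    # d(loss)/d(w) = err * x
    b_new = b - lr * err        # d(loss)/d(b) = err
    return w_new, b_new

w = np.array([0.0, 0.0])
b = 0.0
x = np.array([1.0, 2.0])
y = 1.0

for _ in range(50):
    w, b = sgd_step(w, b, x, y)

print(w @ x + b)  # the prediction converges toward the target 1.0
```

Caffe2 automates exactly these pieces: gradient operators are generated from the forward net, and learning-rate and parameter-update operators apply the step.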

Chapter 4, Working with Caffe, explores the relationship between Caffe and Caffe2 and how to work with models trained in Caffe.

Chapter 5, Working with Other Frameworks, looks at contemporary deep learning frameworks such as TensorFlow and PyTorch and how to exchange models between Caffe2 and these frameworks.

Chapter 6, Deploying Models to Accelerators for Inference, talks about inference engines and how they are an essential tool for the final deployment of a trained Caffe2 model on accelerators. We focus on two types of popular accelerators: NVIDIA GPUs and Intel CPUs. We look at how to install and use TensorRT for deploying our Caffe2 model on NVIDIA GPUs. We also look at the installation and use of OpenVINO for deploying our Caffe2 model on Intel CPUs and accelerators.

Chapter 7, Caffe2 at the Edge and in the Cloud, covers two applications of Caffe2 that demonstrate its ability to scale. As an application of Caffe2 on edge devices, we look at how to build Caffe2 on Raspberry Pi single-board computers and how to run Caffe2 applications on them. As an application of Caffe2 in the cloud, we look at the use of Caffe2 in Docker containers.
