Caffe2 model file formats

To use Caffe models in Caffe2, we also need to understand the model file formats that Caffe2 can import. Like Caffe, Caffe2 uses Protobuf for serialization and deserialization of its model files. Caffe2 imports a trained model from two files (a short loading example follows this list):

  1. The structure of the neural network, stored as a predict_net.pb file or as a predict_net.pbtxt file
  2. The weights of the operators of the neural network, stored as an init_net.pb file
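
To make this concrete, here is a minimal Python sketch of loading the two files into a Caffe2 Predictor and running one inference call. The file names predict_net.pb and init_net.pb and the input shape are assumptions; adjust them to match your own model.

    import numpy as np
    from caffe2.python import workspace

    # Read both serialized protobuf files as raw bytes.
    with open("init_net.pb", "rb") as f:
        init_net = f.read()
    with open("predict_net.pb", "rb") as f:
        predict_net = f.read()

    # The Predictor runs init_net once to populate the weight blobs and then
    # executes predict_net for every inference call.
    predictor = workspace.Predictor(init_net, predict_net)

    # Hypothetical input: one 3x224x224 image in NCHW layout.
    img = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = predictor.run([img])
    print(results[0].shape)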

predict_net file

The predict_net binary file, which is usually named predict_net.pb, holds the list of operators in the neural network, the parameters of each operator, and the connections between the operators. This file is a serialization of the neural network...
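
Because the predict_net file is just a serialized NetDef protobuf message, it can be opened and inspected directly in Python. The following sketch, which again assumes the default file name predict_net.pb, prints each operator's type together with its input and output blobs:

    from caffe2.proto import caffe2_pb2

    # predict_net.pb is a binary serialization of the NetDef protobuf message.
    net = caffe2_pb2.NetDef()
    with open("predict_net.pb", "rb") as f:
        net.ParseFromString(f.read())

    # Each op entry records the operator type, its input and output blob names
    # (the connections between operators), and its arguments (e.g. kernel
    # size, stride).
    for op in net.op:
        print(op.type, "inputs:", list(op.input), "outputs:", list(op.output))

The human-readable predict_net.pbtxt variant holds the same message in protobuf text format and can be parsed with google.protobuf.text_format.Merge() instead of ParseFromString().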

You have been reading a chapter from Caffe2 Quick Start Guide, published by Packt in May 2019 (ISBN-13: 9781789137750).

About the author

Ashwin Nanjappa is a senior architect at NVIDIA, working in the TensorRT team on improving deep learning inference on GPU accelerators. He has a PhD from the National University of Singapore in developing GPU algorithms for the fundamental computational geometry problem of 3D Delaunay triangulation. As a post-doctoral research fellow at the BioInformatics Institute (Singapore), he developed GPU-accelerated machine learning algorithms for pose estimation using depth cameras. As an algorithms research engineer at Visenze (Singapore), he implemented computer vision algorithm pipelines in C++, developed a training framework built upon Caffe in Python, and trained deep learning models for some of the world's most popular online shopping portals.