Residual connections


While very deep architectures (with many layers) perform better, they are harder to train, because the signal weakens as it propagates through many layers and the gradients vanish during backpropagation. Some have tried to work around this by training deep networks in multiple stages, layer by layer.

An alternative to this layer-wise training is to add a supplementary connection that shortcuts a block of layers. This identity connection passes the signal through without modification, in parallel with the classic convolutional layers, which learn the residual; together they form a residual block.

Such a residual block is composed of six layers.
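
As a minimal sketch, such a block could be written with Lasagne as follows, assuming a pre-activation ordering (batch normalization, ReLU, convolution, repeated twice) for the six layers of the residual branch; the residual_block helper name, the 3x3 filter size, and the assumption that the block keeps the number of feature maps unchanged are illustrative choices:

from lasagne.layers import BatchNormLayer, NonlinearityLayer, ElemwiseSumLayer
from lasagne.layers.dnn import Conv2DDNNLayer   # cuDNN-backed convolution
from lasagne.nonlinearities import rectify, identity

def residual_block(incoming, num_filters):
    # Residual branch: BN -> ReLU -> conv -> BN -> ReLU -> conv (six layers)
    residual = BatchNormLayer(incoming)
    residual = NonlinearityLayer(residual, nonlinearity=rectify)
    residual = Conv2DDNNLayer(residual, num_filters, 3, pad='same',
                              nonlinearity=identity)
    residual = BatchNormLayer(residual)
    residual = NonlinearityLayer(residual, nonlinearity=rectify)
    residual = Conv2DDNNLayer(residual, num_filters, 3, pad='same',
                              nonlinearity=identity)
    # Identity connection: the block input is added back without modification
    return ElemwiseSumLayer([incoming, residual])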

A residual network is a network composed of multiple residual blocks. The input is first processed by a convolution, followed by batch normalization and a non-linearity.
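
As a minimal sketch, this stem and the stacking of residual blocks could look as follows in Lasagne, reusing the residual_block helper from the previous sketch; the build_resnet name, the 28x28 grayscale input shape, and the defaults of eight filters and two blocks are assumptions chosen to match the example below:

from lasagne.layers import InputLayer, BatchNormLayer, NonlinearityLayer
from lasagne.layers.dnn import Conv2DDNNLayer
from lasagne.nonlinearities import rectify, identity

def build_resnet(input_var=None, num_filters=8, num_blocks=2):
    # Input images: (batch, channels, rows, cols)
    net = InputLayer(shape=(None, 1, 28, 28), input_var=input_var)
    # First convolution, followed by batch normalization and a ReLU non-linearity
    net = Conv2DDNNLayer(net, num_filters, 3, pad='same', nonlinearity=identity)
    net = BatchNormLayer(net)
    net = NonlinearityLayer(net, nonlinearity=rectify)
    # Stack the residual blocks on top of the stem
    for _ in range(num_blocks):
        net = residual_block(net, num_filters)
    return net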

For example, for a residual network composed of two residual blocks, with eight feature maps in the first convolution and an input image of size 28x28, the layer output shapes will be the following:

InputLayer                       (None, 1, 28, 28)
Conv2DDNNLayer...
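
To obtain this kind of per-layer summary, one hedged option with Lasagne is to walk the layer graph and print each layer's class and output shape; the snippet below reuses the hypothetical build_resnet helper sketched earlier:

from lasagne.layers import get_all_layers, get_output_shape

def print_layers(output_layer):
    # Layers are returned in topological order, from the input to the output
    for layer in get_all_layers(output_layer):
        print('{:<32} {}'.format(layer.__class__.__name__,
                                 get_output_shape(layer)))

network = build_resnet(num_filters=8, num_blocks=2)
print_layers(network)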