This chapter provides a collection of recipes covering the many aspects of training a neural network. The overall objective is to provide concise, practical techniques for boosting network performance. We will cover the following recipes:
- Visualizing training with TensorBoard and Keras (see the first sketch after this list)
- Working with batches and mini-batches
- Using grid search for parameter tuning
- Learning rates and learning rate schedulers (see the second sketch after this list)
- Comparing optimizers
- Determining the depth of the network
- Adding dropouts to prevent overfitting
- Making a model more robust with data augmentation
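As a quick preview of the first recipe, the following is a minimal sketch of attaching a TensorBoard callback to a Keras training run. The toy dataset, layer sizes, and the `./logs` directory are illustrative assumptions, not the recipe's actual code.

```python
import numpy as np
from tensorflow import keras

# Toy data, purely for illustration: 1,000 samples, 20 features, binary labels
X = np.random.rand(1000, 20)
y = np.random.randint(2, size=1000)

# A small placeholder model (assumed architecture, not the chapter's)
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# The TensorBoard callback writes event logs during training; inspect them
# afterwards with `tensorboard --logdir=./logs`
tensorboard_cb = keras.callbacks.TensorBoard(log_dir='./logs')
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2,
          callbacks=[tensorboard_cb])
```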
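In the same spirit, the learning rate recipe builds on Keras's `LearningRateScheduler` callback. The step-decay schedule below (halving the rate every 10 epochs) is an assumed example that reuses the toy model and data from the sketch above, not the schedule used later in the chapter.

```python
from tensorflow import keras

def step_decay(epoch, lr):
    # Halve the current learning rate every 10 epochs, keep it otherwise
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

# Attach the schedule to training via a callback; verbose=1 prints the
# learning rate in use at the start of every epoch
lr_cb = keras.callbacks.LearningRateScheduler(step_decay, verbose=1)

# Reusing the model and data from the previous sketch:
# model.fit(X, y, epochs=30, batch_size=32, callbacks=[lr_cb])
```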