Codeless Deep Learning with KNIME


Product type: Book
Published: November 2020
Publisher: Packt
ISBN-13: 9781800566613
Pages: 384
Edition: 1st
Authors (3): Kathrin Melcher, KNIME AG, Rosaria Silipo

Table of Contents (16)

Preface
Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Chapter 1: Introduction to Deep Learning with KNIME Analytics Platform
Chapter 2: Data Access and Preprocessing with KNIME Analytics Platform
Chapter 3: Getting Started with Neural Networks
Chapter 4: Building and Training a Feedforward Neural Network
Section 2: Deep Learning Networks
Chapter 5: Autoencoder for Fraud Detection
Chapter 6: Recurrent Neural Networks for Demand Prediction
Chapter 7: Implementing NLP Applications
Chapter 8: Neural Machine Translation
Chapter 9: Convolutional Neural Networks for Image Classification
Section 3: Deployment and Productionizing
Chapter 10: Deploying a Deep Learning Network
Chapter 11: Best Practices and Other Deployment Options
Other Books You May Enjoy

Designing your Network

In the previous section, we learned that neural networks are characterized by a topology, weights, and activation functions. In particular, feedforward neural networks have an input layer and an output layer, plus a certain number of hidden layers in between. While the values of the network weights are estimated automatically by the training procedure, the network topology and the activation functions must be fixed during network design, before training begins. Different network architectures and different activation functions implement different input-output tasks. Designing the appropriate network architecture for a given task is still an active field of research in deep learning (Goodfellow, I., Bengio, Y., Courville, A. (2016). Deep Learning, MIT Press).
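Although this book builds networks codelessly in KNIME, the split between design-time choices and trained parameters can be sketched in a few lines of Python. In this hedged illustration (all names are our own, not from the book), the layer sizes and activation functions are fixed up front, while the weights start out random and would later be adjusted by training:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases, activations):
    """One forward pass through the fixed topology."""
    h = x
    for W, b, act in zip(weights, biases, activations):
        h = act(h @ W + b)
    return h

rng = np.random.default_rng(0)

# Design-time decisions: topology (4 inputs -> 8 hidden -> 1 output)
# and the activation function of each layer.
sizes = [4, 8, 1]
activations = [relu, sigmoid]

# The weights, by contrast, are only initialized here; their final
# values would be estimated by the training procedure.
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(1, 4))
y = forward(x, weights, biases, activations)
print(y.shape)  # (1, 1)
```

Changing `sizes` or `activations` changes the input-output function the network can represent, which is exactly why these choices must be made before training.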

Other parameters are involved in the training algorithm of a neural network, such as the learning rate and the loss function. We have also seen that neural networks are prone to overfitting; this means...
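To make the role of these two training parameters concrete, here is a hedged sketch (again our own illustration, not the book's KNIME workflow) of gradient descent on a mean squared error loss for a single linear neuron. The learning rate scales each weight update; the loss function defines what "error" the updates minimize:

```python
import numpy as np

def mse_loss(w, X, y):
    """Loss function: mean squared error of the linear model X @ w."""
    return np.mean((X @ w - y) ** 2)

def mse_grad(w, X, y):
    """Gradient of the MSE loss with respect to the weights."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
learning_rate = 0.1  # step size of each weight update

for _ in range(200):
    w -= learning_rate * mse_grad(w, X, y)

print(np.round(w, 2))  # close to [ 1.  -2.   0.5]
```

A learning rate that is too large makes the updates overshoot and diverge; one that is too small makes convergence needlessly slow, which is why it is typically tuned rather than fixed once and for all.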
