Codeless Deep Learning with KNIME

Product type: Book
Published: Nov 2020
Publisher: Packt
ISBN-13: 9781800566613
Pages: 384
Edition: 1st
Authors (3): Kathrin Melcher, KNIME AG, Rosaria Silipo

Table of Contents (16 Chapters)

Preface
Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Chapter 1: Introduction to Deep Learning with KNIME Analytics Platform
Chapter 2: Data Access and Preprocessing with KNIME Analytics Platform
Chapter 3: Getting Started with Neural Networks
Chapter 4: Building and Training a Feedforward Neural Network
Section 2: Deep Learning Networks
Chapter 5: Autoencoder for Fraud Detection
Chapter 6: Recurrent Neural Networks for Demand Prediction
Chapter 7: Implementing NLP Applications
Chapter 8: Neural Machine Translation
Chapter 9: Convolutional Neural Networks for Image Classification
Section 3: Deployment and Productionizing
Chapter 10: Deploying a Deep Learning Network
Chapter 11: Best Practices and Other Deployment Options
Other Books You May Enjoy

Training a Neural Network

After choosing the network architecture and the activation functions, the last design step before you can start training a neural network is the choice of loss function.

We will start with an overview of possible loss functions for regression, binary classification, and multiclass classification problems. Then, we will introduce some optimizers and additional parameters of the training algorithm.

Loss Functions

In order to train a feedforward neural network, an appropriate error function, often called a loss function, and a matching last layer have to be selected. Let's start with an overview of commonly used loss functions for regression problems.
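The conventional pairings of last-layer activation and loss function, by problem type, can be sketched as a small lookup table. (This is a general summary of standard practice, not KNIME-specific configuration; the dictionary and function names are illustrative only.)

```python
# Conventional pairings of output-layer activation and loss function,
# keyed by problem type. These are standard choices across frameworks.
LOSS_PAIRINGS = {
    "regression": {
        "last_layer": "linear",
        "loss": "mean squared error",
    },
    "binary classification": {
        "last_layer": "sigmoid",
        "loss": "binary cross-entropy",
    },
    "multiclass classification": {
        "last_layer": "softmax",
        "loss": "categorical cross-entropy",
    },
}

def recommended_setup(problem_type):
    """Return the conventional (last layer, loss) pairing for a problem type."""
    return LOSS_PAIRINGS[problem_type]
```

For example, `recommended_setup("regression")` returns a linear last layer paired with a mean squared error loss, which is exactly the combination discussed in the next subsection.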

Loss Functions for Regression Problems

In the case of a regression problem, where the goal is to predict one single numerical value, the output layer should have one unit only and use the linear activation function. Possible loss functions to train this kind of network must refer to numerical error...
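The two most common numerical-error losses for regression are mean squared error (MSE) and mean absolute error (MAE). As a minimal illustration (these are the standard textbook definitions, not code quoted from the book), they can be computed in plain Python:

```python
def mean_squared_error(y_true, y_pred):
    """MSE: average of squared differences; penalizes large errors heavily."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    """MAE: average of absolute differences; more robust to outliers."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Example: targets vs. network predictions
y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 3.5]
print(mean_squared_error(y_true, y_pred))   # 0.4166...
print(mean_absolute_error(y_true, y_pred))  # 0.5
```

Because MSE squares each difference, a single large error dominates the loss, whereas MAE weights all errors linearly; which behavior is preferable depends on how much the application should tolerate outliers.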
