You're reading from Codeless Deep Learning with KNIME

Product type: Book
Published in: Nov 2020
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781800566613
Edition: 1st Edition

Authors (3):

Kathrin Melcher

Kathrin Melcher is a data scientist at KNIME. She holds a master's degree in mathematics from the University of Konstanz, Germany. She joined the evangelism team at KNIME in 2017 and has a strong interest in data science and machine learning algorithms. She enjoys teaching and sharing her data science knowledge with the community, for example, in the book From Excel to KNIME, as well as in various blog posts and at training courses, workshops, and conference presentations.

Rosaria Silipo

Rosaria Silipo, Ph.D., now head of data science evangelism at KNIME, has spent 25+ years in applied AI, predictive analytics, and machine learning at Siemens, Viseca, Nuance Communications, and private consulting. Sharing her practical experience in a broad range of industries and deployments, including IoT, customer intelligence, financial services, social media, and cybersecurity, Rosaria has authored 50+ technical publications, including her recent books Guide to Intelligent Data Science (Springer) and Codeless Deep Learning with KNIME (Packt).

Chapter 3: Getting Started with Neural Networks

Before we dive into the practical implementation of deep learning networks using KNIME Analytics Platform and its integration with the Keras library, we will briefly introduce a few theoretical concepts behind neural networks and deep learning. This is the only purely theoretical chapter in this book, and it is needed to understand the how and why of the following practical implementations.

Throughout this chapter, we will cover the following topics:

  • Neural Networks and Deep Learning – Basic Concepts
  • Designing your Network
  • Training a Neural Network

We will start with the basic concepts of neural networks and deep learning: from the first artificial neuron as a simulation of the biological neuron to the training of a network of neurons, a fully connected feedforward neural network, using the backpropagation algorithm.
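To make the notion of an artificial neuron concrete before the formal treatment, here is a minimal sketch in Python with NumPy: a weighted sum of the inputs plus a bias, passed through a non-linear activation function. The input values, weights, bias, and the choice of a sigmoid activation are illustrative assumptions, not values from the book.

```python
import numpy as np

def artificial_neuron(x, w, b):
    """A single artificial neuron: a weighted sum of the inputs
    plus a bias, passed through a sigmoid activation function."""
    z = np.dot(w, x) + b             # net input (weighted sum + bias)
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation, output in (0, 1)

# Illustrative inputs, weights, and bias for a neuron with three inputs
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
b = 0.1
print(artificial_neuron(x, w, b))
```

A feedforward network is nothing more than layers of such units, where the outputs of one layer become the inputs of the next.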

We will then discuss the design of a neural architecture as well as the training of the...

Neural Networks and Deep Learning – Basic Concepts

All you hear about at the moment is deep learning. Deep learning stems from the traditional discipline of neural networks, in the realm of machine learning.

The field of neural networks has gone through a number of stop-and-go phases: from the early excitement over the first perceptron in the '60s and the subsequent lull when it became evident what the perceptron could not do, through the renewed enthusiasm for the backpropagation algorithm applied to multilayer feedforward neural networks and the subsequent lull when it became apparent that training recurrent networks required hardware capabilities not available at the time, right up to today's new deep learning paradigms, units, and architectures running on much more powerful, possibly GPU-equipped, hardware.

Let's start from the beginning and, in this section, go through the basic concepts behind neural networks and deep learning. While these...

Designing your Network

In the previous section, we learned that neural networks are characterized by a topology, weights, and activation functions. In particular, feedforward neural networks have an input and an output layer, plus a certain number of hidden layers in between. While the values for the network weights are automatically estimated via the training procedure, the network topology and the activation functions have to be predetermined during network design before training. Different network architectures and different activation functions implement different input-output tasks. Designing the appropriate neural architecture for a given task is still an active research field in the deep learning area (Goodfellow I., Bengio Y., Courville A. (2016). Deep Learning, MIT Press).
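Although this book builds such architectures codelessly with the KNIME Keras integration, the same design decisions (topology, layer sizes, and activation functions) can be written down compactly in Keras itself. The sketch below is only an illustration: the ten input features, the two hidden layers with ReLU activations, and the single sigmoid output unit are assumptions chosen for the example, not a recommendation.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Illustrative topology: 10 inputs, two hidden layers, one output unit
model = Sequential([
    Dense(16, activation="relu", input_shape=(10,)),  # first hidden layer
    Dense(8, activation="relu"),                      # second hidden layer
    Dense(1, activation="sigmoid"),                   # output layer (binary task)
])
model.summary()  # prints the topology and the number of trainable weights
```

Note that only the topology and the activation functions appear here; the weights themselves are left to the training procedure.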

Other parameters are involved in the training algorithm of neural networks, such as the learning rate or the loss function. We have also seen that neural networks are prone to overfitting; this means...
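One commonly used countermeasure against overfitting is dropout, which randomly disables a fraction of the units during training so that the network cannot rely too heavily on any single connection. The following sketch adds dropout layers to the illustrative architecture above; the dropout rates are assumptions for the example, and dropout is only one of several possible regularization techniques.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# The same illustrative topology as before, with dropout inserted
# between the layers to reduce overfitting during training.
model = Sequential([
    Dense(16, activation="relu", input_shape=(10,)),
    Dropout(0.3),  # illustrative rate: drop 30% of activations while training
    Dense(8, activation="relu"),
    Dropout(0.3),
    Dense(1, activation="sigmoid"),
])
```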

Training a Neural Network

After choosing the network architecture and the activation functions, the last design step before you can start training a neural network is the choice of the loss function.

We will start with an overview of possible loss functions for regression, binary classification, and multiclass classification problems. Then, we will introduce some optimizers and additional parameters of the training algorithm.
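In Keras terms, the optimizer, its learning rate, and the loss function are all fixed when the network is compiled, before training starts. The sketch below is illustrative: SGD and Adam are two common optimizer choices, and the learning rates and the binary cross-entropy loss are example values, not prescriptions from the book.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD, Adam

# A minimal illustrative model, in the same spirit as the earlier sketches
model = Sequential([
    Dense(8, activation="relu", input_shape=(10,)),
    Dense(1, activation="sigmoid"),
])

# Two common optimizers; the learning rates are illustrative defaults
sgd = SGD(learning_rate=0.01)     # classic stochastic gradient descent
adam = Adam(learning_rate=0.001)  # adaptive per-weight learning rates

# Optimizer and loss function are fixed at compile time
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=["accuracy"])
```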

Loss Functions

In order to train a feedforward neural network, an appropriate error function, often called a loss function, and a matching last layer have to be selected. Let's start with an overview of commonly used loss functions for regression problems.

Loss Functions for Regression Problems

In the case of a regression problem, where the goal is to predict one single numerical value, the output layer should have one unit only and use the linear activation function. Possible loss functions to train this kind of network must refer to numerical error...
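In Keras terms, such a regression network could be sketched as follows. The single linear output unit follows the description above; the hidden layer size is an illustrative assumption, and the mean squared error is one common choice of numerical loss (the mean absolute error is another).

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Regression: a single output unit with a linear activation function
model = Sequential([
    Dense(16, activation="relu", input_shape=(10,)),  # illustrative hidden layer
    Dense(1, activation="linear"),                    # one unit, linear output
])
model.compile(optimizer="adam", loss="mean_squared_error")  # numerical error loss
```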

Summary

We have reached the end of this chapter, where we have learned the basic theoretical concepts behind neural networks and deep learning networks. All of this will help you understand the steps of the practical implementation of deep learning networks described in the coming chapters.

We started with the artificial neuron and moved on to describe how to assemble and train a network of neurons, a fully connected feedforward neural network, via a variant of the gradient descent algorithm, using the backpropagation algorithm to calculate the gradient.

We concluded the chapter with a few hints on how to design and train a neural network. First, we described some commonly used network topologies, neural layers, and activation functions to design the appropriate neural architecture.

We then moved to analyze the effects of some parameters involved in the training algorithm. We introduced a few more parameters and techniques to optimize the training algorithm against...

Questions and Exercises

Test how well you have understood the concepts in this chapter by answering the following questions:

  1. A feedforward neural network is an architecture where:

    a. Each neuron from the previous layer is connected to each neuron in the next layer.

    b. There are auto and backward connections.

    c. There is just one unit in the output layer.

    d. There are as many input units as there are output units.

  2. Why do we need hidden layers in a feedforward neural network?

    a. For more computational power

    b. To speed up calculations

    c. To implement more complex functions

    d. For symmetry

  3. The backpropagation algorithm updates the network weights proportionally to:

    a. The output errors backpropagated through the network

    b. The input values forward propagated through the network

    c. The batch size

    d. The deltas calculated at the output layer and backpropagated through the network

  4. Which loss function is commonly used for a multiclass classification problem?

    a. MAE

    b. RMSE

    c. Categorical...
