Codeless Deep Learning with KNIME

By Kathrin Melcher and Rosaria Silipo, KNIME AG
Published by Packt, November 2020 (1st edition, 384 pages)
ISBN-13: 9781800566613

Table of Contents (16 chapters)

Preface
Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
  Chapter 1: Introduction to Deep Learning with KNIME Analytics Platform
  Chapter 2: Data Access and Preprocessing with KNIME Analytics Platform
  Chapter 3: Getting Started with Neural Networks
  Chapter 4: Building and Training a Feedforward Neural Network
Section 2: Deep Learning Networks
  Chapter 5: Autoencoder for Fraud Detection
  Chapter 6: Recurrent Neural Networks for Demand Prediction
  Chapter 7: Implementing NLP Applications
  Chapter 8: Neural Machine Translation
  Chapter 9: Convolutional Neural Networks for Image Classification
Section 3: Deployment and Productionizing
  Chapter 10: Deploying a Deep Learning Network
  Chapter 11: Best Practices and Other Deployment Options
Other Books You May Enjoy

Introducing Autoencoders

In previous chapters, we have seen that neural networks are very powerful algorithms, and that the power of each network lies in its architecture, activation functions, and regularization terms, among other features. Among the many neural architectures, one is especially versatile and useful for three tasks: detecting unknown events, detecting unexpected events, and reducing the dimensionality of the input space. This neural network is the autoencoder.

Architecture of the Autoencoder

The autoencoder (or autoassociator) is a multilayer feedforward neural network trained to reproduce its input vector on the output layer. Like many neural networks, it is trained with the gradient descent algorithm, or one of its modern variants, to minimize a loss function such as the Mean Squared Error (MSE). It can have as many hidden layers as desired. Regularization terms and other general parameters that are useful for avoiding overfitting or for improving...
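To make this training setup concrete, here is a minimal sketch in Python with Keras, the library that KNIME's Deep Learning extension builds on. The layer sizes, the synthetic data, and the reconstruction-error scoring at the end are illustrative assumptions, not the book's KNIME workflow, which is built codelessly.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 30   # assumed width of the input vector
bottleneck = 8    # assumed size of the smallest hidden layer

# Encoder and decoder mirror each other; the output layer has the same
# width as the input, because the network must reproduce its input.
autoencoder = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(bottleneck, activation="relu"),    # compressed representation
    layers.Dense(16, activation="relu"),
    layers.Dense(n_features, activation="linear"),  # reconstruction of the input
])

# A modern variant of gradient descent (Adam) against the MSE loss.
autoencoder.compile(optimizer="adam", loss="mse")

# Illustrative synthetic data; note that the input is also the target.
x_train = np.random.rand(1000, n_features).astype("float32")
autoencoder.fit(x_train, x_train, epochs=20, batch_size=32, verbose=0)

# Per-example reconstruction error: unusually large values flag the
# unknown or unexpected events mentioned above.
x_new = np.random.rand(5, n_features).astype("float32")
reconstruction = autoencoder.predict(x_new, verbose=0)
errors = np.mean((reconstruction - x_new) ** 2, axis=1)
print(errors)

The activations of the bottleneck layer can also be read out as a compressed representation of each input, which is the dimensionality-reduction use named at the start of this section.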
