Hands-On Neural Network Programming with C#


Product type Book
Published in Sep 2018
Publisher Packt
ISBN-13 9781789612011
Pages 328
Edition 1st Edition
Author: Matt Cole

Table of Contents (16 chapters)

  • Preface
  • A Quick Refresher
  • Building Our First Neural Network Together
  • Decision Trees and Random Forests
  • Face and Motion Detection
  • Training CNNs Using ConvNetSharp
  • Training Autoencoders Using RNNSharp
  • Replacing Back Propagation with PSO
  • Function Optimizations: How and Why
  • Finding Optimal Parameters
  • Object Detection with TensorFlowSharp
  • Time Series Prediction and LSTM Using CNTK
  • GRUs Compared to LSTMs, RNNs, and Feedforward Networks
  • Activation Function Timings
  • Function Optimization Reference
  • Other Books You May Enjoy

Training Autoencoders Using RNNSharp

In this chapter, we will discuss autoencoders and their usage. We will talk about what an autoencoder is and the different types of autoencoder, and present samples to help you better understand how to use this technology in your applications. By the end of this chapter, you will know how to design your own autoencoder, save it to and load it from disk, and train and test it.

The following topics are covered in this chapter:

  • What is an autoencoder?
  • Different types of autoencoder
  • Creating your own autoencoder

Technical requirements

You will require Microsoft Visual Studio.

What is an autoencoder?

An autoencoder is an unsupervised learning algorithm that applies back propagation while setting the target values equal to the inputs. An autoencoder learns to compress data from the input layer into a shorter code, and then to decompress that code into something that closely matches the original data. The compression step is a form of dimensionality reduction.

The following is a depiction of an autoencoder. The original images are encoded, and then decoded to reconstruct the original:
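To make the encode/decode round trip concrete, here is a minimal sketch in C#. This is not the book's framework code: the class, method names, and hand-picked weights are purely illustrative (a real autoencoder learns its weights), and the decoder simply reuses the encoder's weights transposed (so-called tied weights).

```csharp
using System;

// Illustrative sketch only: a single-hidden-layer autoencoder's
// forward pass with fixed (untrained) weights.
class AutoencoderSketch
{
    // Encode: project the input down to a smaller code.
    public static double[] Encode(double[] input, double[][] w)
    {
        var code = new double[w.Length];
        for (int i = 0; i < w.Length; i++)
            for (int j = 0; j < input.Length; j++)
                code[i] += w[i][j] * input[j];
        return code;
    }

    // Decode: project the code back up to the input's dimensionality,
    // reusing the same weights transposed (tied weights).
    public static double[] Decode(double[] code, double[][] w)
    {
        var output = new double[w[0].Length];
        for (int i = 0; i < w.Length; i++)
            for (int j = 0; j < output.Length; j++)
                output[j] += w[i][j] * code[i];
        return output;
    }

    static void Main()
    {
        // Four inputs compressed to a two-value code, then reconstructed.
        double[] input = { 1.0, 0.0, 0.0, 1.0 };
        double[][] w = {
            new[] { 0.5, 0.5, 0.0, 0.0 },
            new[] { 0.0, 0.0, 0.5, 0.5 }
        };
        double[] code = Encode(input, w);           // length 2
        double[] reconstruction = Decode(code, w);  // length 4
        Console.WriteLine(string.Join(", ", reconstruction));
    }
}
```

Training would adjust `w` via back propagation so that the reconstruction matches the input; here, the point is only the shape of the computation: wide in, narrow code, wide out.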

Different types of autoencoder

The following are different types of autoencoder:

  • Standard autoencoder
  • Denoising autoencoder
  • Sparse autoencoder
  • Variational autoencoder

Let's briefly discuss autoencoders and the variants we have just seen. Please note that there are other variants out there; these are simply the most common ones, and the ones I thought you should at least be familiar with.

Standard autoencoder

An autoencoder learns to compress data from the input layer into a smaller code, and then to decompress that code into something that (hopefully) matches the original data. The basic idea behind a standard autoencoder is to encode information automatically, hence the name. In terms of shape, the network always resembles an hourglass: the hidden layers contain fewer neurons than the input and output layers. Everything...
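The hourglass shape can be expressed as a simple property of the layer sizes. The following sketch (names and sizes are illustrative, not from the book) checks that a layout narrows to a code layer and widens back out symmetrically:

```csharp
using System;
using System.Linq;

class HourglassCheck
{
    // Hypothetical layer layout: 784 inputs narrow to a 64-value code,
    // then widen symmetrically back to 784 outputs.
    public static int[] LayerSizes = { 784, 256, 64, 256, 784 };

    // Hourglass test: the layout is symmetric, and no hidden layer
    // is wider than the input layer.
    public static bool IsHourglass(int[] sizes)
    {
        return sizes.SequenceEqual(sizes.Reverse())
            && sizes.Skip(1).Take(sizes.Length - 2).All(s => s <= sizes[0]);
    }

    static void Main() => Console.WriteLine(IsHourglass(LayerSizes));
}
```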

Creating your own autoencoder

Now that you are an expert on autoencoders, let's move on to less theory and more practice. Let's take a bit of a different route on this one. Instead of using an open-source package and showing you how to use it, let's write our own autoencoder framework that you can enhance to make your own. We'll discuss and implement the basic pieces needed, and then write some sample code showing how to use it. We will make this chapter unique in that we won't finish the usage sample; we'll do just enough to get you started along your own path to autoencoder creation. With that in mind, let's begin.

Let's start off by thinking about what an autoencoder is and what things we would want to include. First off, we're going to need to keep track of the number of layers that we have. These layers will be Restricted Boltzmann...
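As a starting point, a skeleton along these lines might track the layers. This is a hypothetical sketch, not the book's accompanying source: `RestrictedBoltzmannMachine` here is a bare placeholder for the layer type the chapter goes on to build, and the only behavior shown is that consecutive layers must chain together (each layer's visible side matches the previous layer's hidden side).

```csharp
using System;
using System.Collections.Generic;

// Placeholder for the RBM layer type discussed above; a real
// implementation would also hold weights, biases, and training logic.
class RestrictedBoltzmannMachine
{
    public int VisibleUnits { get; }
    public int HiddenUnits { get; }
    public RestrictedBoltzmannMachine(int visible, int hidden)
    {
        VisibleUnits = visible;
        HiddenUnits = hidden;
    }
}

class Autoencoder
{
    // The autoencoder keeps track of its stack of layers.
    public List<RestrictedBoltzmannMachine> Layers { get; } =
        new List<RestrictedBoltzmannMachine>();

    // Each new layer's visible side must match the previous hidden side.
    public void AddLayer(int visible, int hidden)
    {
        if (Layers.Count > 0 && Layers[Layers.Count - 1].HiddenUnits != visible)
            throw new ArgumentException("Layer sizes must chain together.");
        Layers.Add(new RestrictedBoltzmannMachine(visible, hidden));
    }
}

class Program
{
    static void Main()
    {
        var ae = new Autoencoder();
        ae.AddLayer(784, 256);
        ae.AddLayer(256, 64);
        Console.WriteLine(ae.Layers.Count);
    }
}
```

From here, you would block in the remaining pieces (save/load, train, test) and complete them one by one, as the chapter describes.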

Summary

Well, folks, I think it's time to wrap this chapter up and move on. You should commend yourself, as you've written a complete autoencoder from start to (almost) finish. In the accompanying source code, I have added even more functions to make this more complete, and for you to have a better starting point from which to make this a powerful framework for you to use. As you are enhancing this, think about the things you need your autoencoder to do, block in those functions, and then complete them as we have done. Rather than learn to use an open-source framework, you've built your own—congratulations!

I have taken the liberty of developing a bit more of our autoencoder framework with the supplied source code. You can feel free to use it, discard it, or modify it to suit your needs. It's useful, but, as I mentioned, please feel free to embellish...

References

  • Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P. A., Extracting and Composing Robust Features with Denoising Autoencoders, Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pages 1096-1103, ACM, 2008.
  • Kingma, Diederik P., and Max Welling, Auto-Encoding Variational Bayes, arXiv preprint arXiv:1312.6114 (2013).
  • Ranzato, Marc'Aurelio, Christopher Poultney, Sumit Chopra, and Yann LeCun, Efficient Learning of Sparse Representations with an Energy-Based Model, Proceedings of NIPS, 2007.
  • Bourlard, Hervé, and Yves Kamp, Auto-Association by Multilayer Perceptrons and Singular Value Decomposition, Biological Cybernetics 59.4-5 (1988): 291-294.