In this chapter, we will investigate unsupervised learning using TensorFlow 2. The objective of unsupervised learning is to find patterns or relationships in data whose points have not been previously labeled; hence, we have only features. This contrasts with supervised learning, where we are supplied with both features and their labels, and we want to predict the labels of new, previously unseen features. In unsupervised learning, we want to find out whether there is an underlying structure to our data. For example, can it be grouped or organized in some way without any prior knowledge of its structure? This is known as clustering. For example, Amazon uses unsupervised learning in its recommendation system to make suggestions as to what you might like to buy, such as books, by identifying genre clusters in your previous...
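To make the idea of clustering concrete, here is a minimal sketch of the classic k-means algorithm in plain NumPy: with no labels at all, it alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points. The function name, iteration count, and the two synthetic blobs are illustrative choices, not anything from the book.

```python
import numpy as np

def kmeans(points, k, iterations=20, seed=0):
    """A minimal k-means clustering sketch: assign each point to its
    nearest centroid, then move each centroid to the mean of its points."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Distance from every point to every centroid, shape (n_points, k).
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        labels = distances.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic blobs; k-means recovers the grouping unsupervised.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
points = np.vstack([blob_a, blob_b])
labels, centroids = kmeans(points, k=2)
```

Note that at no point do we tell the algorithm which blob a point came from; the structure emerges from the features alone, which is exactly the sense in which clustering is unsupervised.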
You're reading from TensorFlow 2.0 Quick Start Guide
Tony Holdroyd's first degree, from Durham University, was in maths and physics. He also has technical qualifications, including MCSD, MCSD.net, and SCJP. He holds an MSc in computer science from London University. He was a senior lecturer in computer science and maths in further education, designing and delivering programming courses in many languages, including C, C++, Java, C#, and SQL. His passion for neural networks stems from research he did for his MSc thesis. He has developed numerous machine learning, neural network, and deep learning applications, and has advised in the media industry on deep learning as applied to image and music processing. Tony lives in Gravesend, Kent, UK, with his wife, Sue McCreeth, who is a renowned musician.
Autoencoders
Autoencoding is a data compression and decompression algorithm implemented with an ANN. Since it is an unsupervised learning algorithm, only unlabeled data is required. It works by generating a compressed version of the input, forcing it through a bottleneck: a layer, or layers, that is narrower than the original input. To reconstruct the input, that is, to decompress, we reverse the process. We use backpropagation both to create the representation of the input in the intermediate layer(s) and to recreate the input as the output from that representation.
Autoencoding is lossy, that is, the decompressed output will be degraded in comparison to the original input, much as with the MP3 and JPEG compression formats.
Autoencoding is data-specific, that is, only data that is similar to the data on which the autoencoder has been trained...
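The bottleneck idea described above can be sketched in a few lines of tf.keras. The layer sizes here (64-dimensional inputs squeezed through an 8-unit bottleneck) and the random training data are illustrative assumptions, not figures from the book; the point is only the shape of the model: input, narrower encoding, reconstruction, with the input itself used as the training target.

```python
import numpy as np
import tensorflow as tf

# A minimal autoencoder sketch: 64-dimensional inputs are forced through
# an 8-unit bottleneck layer and then reconstructed at the output.
inputs = tf.keras.Input(shape=(64,))
encoded = tf.keras.layers.Dense(8, activation="relu")(inputs)       # bottleneck
decoded = tf.keras.layers.Dense(64, activation="sigmoid")(encoded)  # reconstruction
autoencoder = tf.keras.Model(inputs, decoded)

# Unsupervised: the input doubles as the target, so no labels are needed.
autoencoder.compile(optimizer="adam", loss="mse")

data = np.random.rand(256, 64).astype("float32")
autoencoder.fit(data, data, epochs=1, batch_size=32, verbose=0)

reconstruction = autoencoder.predict(data, verbose=0)
```

Because backpropagation minimizes the reconstruction error through the 8-unit layer, the network is pushed to learn a compressed representation; the reconstruction is necessarily lossy, as noted above.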
Summary
In this chapter, we looked at two applications of autoencoders in unsupervised learning: firstly for compressing data, and secondly for denoising, meaning the removal of noise from images.
In the next chapter, we will look at how neural networks are used in image processing and identification.