
You're reading from R Machine Learning Projects
Product type: Book | Published in: Jan 2019 | Publisher: Packt | ISBN-13: 9781789807943 | Edition: 1st | Reading level: Expert
Author: Dr. Sunil Kumar Chinnamgari

Dr. Sunil Kumar Chinnamgari has a Ph.D. in computer science and specializes in machine learning and natural language processing. He is an AI researcher with more than 14 years of industry experience. Currently, he works in the capacity of lead data scientist with a US financial giant. He has published several research papers in Scopus and IEEE journals and is a frequent speaker at various meetups. He is an avid coder and has won multiple hackathons. In his spare time, Sunil likes to teach, travel, and spend time with family.

Automatic Prose Generation with Recurrent Neural Networks

We have been interacting through this book for almost 200 pages, but I realized that I have not introduced myself properly to you! I guess it's time. You already know some bits about me through the author profile in this book; however, I want to tell you a bit about the city I live in. I am based in South India, in a city called Bengaluru, also known as Bangalore. The city is known for its IT talent and population diversity. I love the city, as it is filled with loads of positive energy. Each day, I get to meet people from all walks of life: people from multiple ethnicities, multiple backgrounds, people who speak multiple languages, and so on. Kannada is the official language spoken in the state of Karnataka, where Bangalore is located. Though I can speak bits and pieces of Kannada, my proficiency with speaking...

Understanding language models

In the English language, the character a appears much more often in words and sentences than the character x. Similarly, we can also observe that the word is occurs more frequently than the word specimen. It is possible to learn the probability distributions of characters and words by examining large volumes of text. The following screenshot is a chart showing the probability distribution of letters given a corpus (text dataset):

Probability distribution of letters in a corpus

We can observe that the probability distributions of characters are non-uniform. This essentially means that we can recover the characters in a word even if they are lost due to noise: if a particular character is missing from a word, it can often be reconstructed from the characters surrounding it. The reconstruction of the missing character is...
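To make the idea concrete, the following is a minimal R sketch (not taken from the book's code) of how such a letter distribution could be estimated from a plain-text corpus; the file name corpus.txt is only a placeholder:

# Estimate the probability distribution of letters in a text corpus
# ("corpus.txt" is a placeholder for any plain-text file)
corpus <- tolower(paste(readLines("corpus.txt", warn = FALSE), collapse = " "))

# Split the text into individual characters and keep only the letters a-z
chars <- unlist(strsplit(corpus, split = ""))
chars <- chars[chars %in% letters]

# Relative frequency of each letter, i.e. its estimated probability
letter_probs <- table(chars) / length(chars)
print(round(sort(letter_probs, decreasing = TRUE), 4))

# Visualize the non-uniform distribution
barplot(letter_probs, xlab = "Letter", ylab = "Probability")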

Exploring recurrent neural networks

Recurrent neural networks (RNNs) are a family of neural networks for processing sequential data. RNNs are generally used to implement language models. We, as humans, base much of our language understanding on the context. For example, let's consider the sentence Christmas falls in the month of --------. It is easy to fill in the blank with the word December. The essential idea here is that there is information about the last word encoded in the previous elements of the sentence.

The central theme behind the RNN architecture is to exploit the sequential structure of the data. As the name suggests, RNNs operate in a recurrent way. Essentially, this means that the same operation is performed for every element of a sequence or sentence, with its output depending on the current input and the previous operations.

An RNN works by looping an output...
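The recurrence itself can be pictured with a small sketch in plain R (an illustrative toy, not the book's implementation): the new hidden state is computed from the current input and the previous hidden state, and the same weights are reused at every time step.

# One step of a vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_prev + b_h)
# The sizes below are arbitrary illustrative choices
set.seed(1)
input_size  <- 5    # size of the input vector x_t
hidden_size <- 3    # size of the hidden state h_t

W_xh <- matrix(rnorm(hidden_size * input_size, sd = 0.1), hidden_size, input_size)
W_hh <- matrix(rnorm(hidden_size * hidden_size, sd = 0.1), hidden_size, hidden_size)
b_h  <- rep(0, hidden_size)

rnn_step <- function(x_t, h_prev) {
  # The same weights are applied to every element of the sequence
  tanh(W_xh %*% x_t + W_hh %*% h_prev + b_h)
}

x_t    <- rnorm(input_size)      # current input element
h_prev <- rep(0, hidden_size)    # previous hidden state (zeros at t = 1)
h_t    <- rnn_step(x_t, h_prev)
print(h_t)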

Backpropagation through time

We are already aware that RNNs are cyclical graphs, unlike feedforward networks, which are directed acyclic graphs. In feedforward networks, the error derivatives are calculated from the layer above. However, in an RNN we don't have such layering to perform error derivative calculations. A simple solution to this problem is to unroll the RNN so that it resembles a feedforward network. To enable this, the hidden units of the RNN are replicated at each time step. Each replication forms a layer similar to a layer in a feedforward network, and the layer at time step t connects to the layer at time step t+1. Therefore, we randomly initialize the weights, unroll the network, and then use backpropagation to optimize the weights in the hidden layer. The lowest layer is initialized by passing parameters. These parameters...
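As a rough sketch, reusing the toy rnn_step() function from the previous snippet, unrolling simply means applying the shared-weight step once per time step and caching every intermediate hidden state; those cached states are what backpropagation through time walks back over when computing the error derivatives.

# Unroll the RNN over a short input sequence and cache the hidden states
# (rnn_step, input_size and hidden_size come from the earlier sketch)
seq_len <- 4
x_seq   <- lapply(1:seq_len, function(t) rnorm(input_size))   # toy input sequence

h <- matrix(0, nrow = hidden_size, ncol = seq_len + 1)   # h[, 1] is the initial state
for (t in 1:seq_len) {
  # Each time step becomes one "layer" of the unrolled network
  h[, t + 1] <- rnn_step(x_seq[[t]], h[, t])
}

# BPTT would now walk back from t = seq_len to t = 1 over these cached states
print(h)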

Problems and solutions to gradients in RNN

RNNs are not perfect; they suffer from two main issues, namely exploding gradients and vanishing gradients. To understand these issues, let's first understand what a gradient is. A gradient is the partial derivative of a function with respect to its inputs. In layman's terms, a gradient measures how much the output of a function changes if you change the inputs a little bit.
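As a tiny illustration of that definition, a gradient can be approximated numerically in R by nudging the input a little and measuring how much the output moves:

# Gradient of f(x) = x^2 at x = 3, approximated with a small nudge of the input
f <- function(x) x^2

numeric_gradient <- function(f, x, eps = 1e-6) {
  (f(x + eps) - f(x - eps)) / (2 * eps)   # central finite difference
}

numeric_gradient(f, 3)   # roughly 6, matching the analytic derivative 2 * x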

Exploding gradients

Exploding gradients refer to a situation where the BPTT algorithm assigns an unreasonably high importance to the weights, without a rationale. The problem results in an unstable network. In extreme situations, the values of the weights can become so large that the values overflow...
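A toy illustration of why this happens: when the same recurrent weight is multiplied in over and over across many time steps, anything larger than 1 grows exponentially (and, conversely, anything smaller than 1 shrinks toward zero, which is the vanishing-gradient side of the problem).

# Repeatedly multiplying by the same recurrent weight across time steps
steps <- 1:50
print(1.5 ^ steps)   # weight > 1: values grow exponentially (exploding)
print(0.5 ^ steps)   # weight < 1: values shrink toward zero (vanishing)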

Building an automated prose generator with an RNN

In this project, we will attempt to build a character-level language model using an RNN to generate prose given some initial seed characters. The main task of a character-level language model is to predict the next character given all previous characters in a sequence of data. In other words, the function of an RNN is to generate text character by character.

To start with, we feed the RNN a huge chunk of text as input and ask it to model the probability distribution of the next character in the sequence, given the sequence of previous characters. The probability distributions learned by the RNN model then allow us to generate new text, one character at a time.
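The generation loop can be pictured with a small hedged sketch: given the model's probability distribution over the next character, we sample one character, append it to the output, and repeat. The vocabulary and probabilities below are hard-coded stand-ins for whatever a trained RNN would actually produce.

# Toy next-character sampling; in the real model, probs would come from the RNN
vocab <- c("a", "b", "c", " ")
probs <- c(0.5, 0.2, 0.1, 0.2)   # made-up distribution over the vocabulary

set.seed(42)
generated <- character(0)
for (i in 1:10) {
  next_char <- sample(vocab, size = 1, prob = probs)
  generated <- c(generated, next_char)
  # In the full model, the sampled character is fed back in as the next input
}
cat(paste(generated, collapse = ""), "\n")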

The first requirement for building a language model is to secure a corpus of text that the model can use to compute the probability distribution of various characters...
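As the chapter summary notes, Alice's Adventures in Wonderland serves as the corpus. A minimal preprocessing sketch (the file name wonderland.txt and the sequence length are assumptions for illustration) turns the raw text into fixed-length character windows paired with the character that follows each window:

# Read the corpus and build a character vocabulary ("wonderland.txt" is assumed)
raw_text <- tolower(paste(readLines("wonderland.txt", warn = FALSE), collapse = "\n"))
chars    <- unlist(strsplit(raw_text, split = ""))

vocab      <- sort(unique(chars))
char_to_id <- setNames(seq_along(vocab), vocab)   # map each character to an integer

# Build training pairs: a window of seq_length characters and the character after it
seq_length <- 25
n_examples <- length(chars) - seq_length
X <- t(sapply(1:n_examples, function(i) char_to_id[chars[i:(i + seq_length - 1)]]))
y <- char_to_id[chars[(seq_length + 1):(seq_length + n_examples)]]

dim(X)   # one row per training example, one column per position in the window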

Summary

The major theme of this chapter was generating text automatically using RNNs. We started the chapter with a discussion of language models and their applications in the real world. We then gave an in-depth overview of recurrent neural networks and their suitability for language modeling tasks. The differences between traditional feedforward networks and RNNs were discussed to build a clearer understanding of RNNs. We then went on to discuss the problems of exploding and vanishing gradients experienced by RNNs and their solutions. After acquiring a detailed theoretical foundation of RNNs, we went ahead and implemented a character-level language model with an RNN. We used Alice's Adventures in Wonderland as the text corpus to train the RNN model and then generated a string as output. Finally, we discussed some ideas for improving our character...

