Sentiment Analysis of Twitter data, Part 2

Janu Verma

January 21st, 2016

Sentiment Analysis aims to determine how a certain person or group reacts to a specific topic. Traditionally, we would run surveys to gather data and do statistical analysis. With Twitter, it works by extracting tweets containing references to the desired topic, computing the sentiment polarity and strength of each tweet, and then aggregating the results for all such tweets. Companies use this to gather public opinion on their products and services, and make data-informed decisions.

In Part 1 we explored what sentiment analysis is and why it is effective. We also looked at the two main methods used: lexical and machine learning. Now in this Part 2 post we will examine some actual examples of using sentiment analysis.

Let’s start by examining the AFINN model.

AFINN Model

In the AFINN model, the authors computed sentiment scores for a list of words relevant to microblogging. The sentiment of a tweet is defined as the sum of the sentiment scores of the terms it contains. The AFINN-111 dictionary contains 2477 English words rated for valence with an integer value between -5 and +5. The words were manually labelled by Finn Årup Nielsen in 2009-2010. The list includes plenty of words/phrases from Internet lingo such as ‘wtf’, ‘wow’, ‘wowow’, ‘lol’, ‘lmao’, ‘dipshit’ etc. Think of the AFINN list as more Urban Dictionary than Oxford dictionary.

Some entries are grammatically different forms of the same stem: for example, ‘favorite’ and ‘favorites’ are listed as two different words with different valence scores.

When computing the sentiment of a tweet, words that are not in the AFINN list are assumed to have a sentiment score of zero.
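The scoring rule above can be sketched in a few lines. The tiny dictionary below is a hypothetical stand-in for the full AFINN-111 list of 2477 scored words:

```python
# Minimal sketch of AFINN-style scoring. The dictionary here is a small
# hypothetical sample; the real AFINN-111 list has 2477 entries.
AFINN = {"wow": 4, "lol": 3, "good": 3, "bad": -3, "wtf": -4}

def tweet_sentiment(tweet):
    """Sum the valence scores of the tweet's words.

    Words not in the dictionary contribute a score of zero.
    """
    return sum(AFINN.get(word, 0) for word in tweet.lower().split())

print(tweet_sentiment("wow this is good"))  # 4 + 0 + 0 + 3 = 7
print(tweet_sentiment("wtf that was bad"))  # -4 + 0 + 0 - 3 = -7
```

A production version would also strip punctuation and handle the multi-word phrases that appear in the AFINN list, but the aggregation step is exactly this sum.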

Implementations of the AFINN model can be found here.

Naive Bayes Classifier

The Naive Bayes classifier can be trained on a corpus of labeled (positive and negative) tweets and then employed to assign polarity to a new tweet. The features used in this model are the words with their frequencies in the tweet strings. You may want to keep or remove URLs, emoticons and short tokens, depending on the application.

This classifier essentially computes the probability of a tweet being positive or negative.

One first computes the probability that a word appears in a positive or negative tweet. This can be easily estimated from the training data:

Prob(word | +ve tweet) = frequency of occurrence of this word in positive tweets, i.e., the fraction of positive tweets that contain this word.

The computation for negative tweets is similar. A tweet contains many words, and the probability of a set of words appearing in a positive tweet is defined as the product of the probabilities for each word. This is the naive (= independence) assumption; without it, estimating the joint probability would be much harder.

Using these pre-estimated probabilities, you can compute the probability that a tweet is positive or negative using Bayes' theorem.

Whenever a new tweet is fed to the classifier, it predicts the polarity with the higher probability.
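The whole pipeline can be sketched as follows. The labelled tweets and the smoothing constant `eps` are hypothetical choices for illustration; a real corpus would be far larger and would use proper smoothing:

```python
from collections import Counter

# Hypothetical labelled training data; a real corpus would be much larger.
positive = ["great movie loved it", "what a great day", "loved the show"]
negative = ["terrible movie hated it", "what a terrible day"]

def word_probs(tweets):
    """P(word | class): fraction of the class's tweets containing each word."""
    counts = Counter(w for t in tweets for w in set(t.split()))
    return {w: c / len(tweets) for w, c in counts.items()}

p_pos, p_neg = word_probs(positive), word_probs(negative)
prior_pos = len(positive) / (len(positive) + len(negative))
prior_neg = 1 - prior_pos

def classify(tweet, eps=0.01):
    """Naive Bayes: multiply the per-word likelihoods (using a small floor
    `eps` for unseen words) and compare the two unnormalised posteriors."""
    score_pos, score_neg = prior_pos, prior_neg
    for w in tweet.split():
        score_pos *= p_pos.get(w, eps)
        score_neg *= p_neg.get(w, eps)
    return "positive" if score_pos > score_neg else "negative"

print(classify("loved the movie"))  # positive
print(classify("terrible show"))    # negative
```

Because the two scores share the same normalising constant, comparing the unnormalised products is equivalent to comparing the full Bayes posteriors.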

An implementation of Naive Bayes classifier for classifying spam and non-spam messages can be found here. The same script can be used for classifying positive and negative tweets.

In most of the cases, we want to include a third category of neutral tweets, or tweets that have zero polarity with respect to the given topic.

The methods described above were chosen for simplicity; several other methods in both categories are prevalent today. Many companies using sentiment analysis employ lexical methods, building proprietary dictionaries tailored to their trade and the domain of the application. For machine-learning-based analysis, instead of Naive Bayes one can use more sophisticated algorithms such as support vector machines (SVMs).
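As a sketch of that swap, here is an SVM trained on bag-of-words features using scikit-learn (assumed to be installed); the training tweets and labels are hypothetical:

```python
# Swapping in an SVM for Naive Bayes, using scikit-learn (assumed installed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Hypothetical labelled training tweets.
train_tweets = [
    "great movie loved it", "what a great day", "loved the show",
    "terrible movie hated it", "what a terrible day", "hated the ending",
]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]

# Bag-of-words features: word frequencies in each tweet string.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_tweets)

# A linear SVM is a common choice for sparse text features.
clf = LinearSVC()
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["loved that movie"])))
```

The same vectorizer-plus-classifier pattern works for any scikit-learn estimator, so comparing Naive Bayes against an SVM is a one-line change.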

Challenges

Sentiment analysis is very useful, but there are many challenges that need to be overcome to achieve good results. The very first step in opinion mining, something I have swept under the rug so far, is identifying tweets that are relevant to our topic. Tweets containing the given word are a decent choice, although not a perfect one. Once we have identified the tweets to be analyzed, we need to make sure that the tweets DO contain sentiment. Neutral tweets can be part of our model, but only polarized tweets tell us something subjective. Even when a tweet is polarized, we still need to make sure that its sentiment relates to the topic we are studying. For example, suppose we are studying sentiment about the movie Mission Impossible, and we encounter the tweet: “Tom Cruise in Mission Impossible is pathetic!”

Now this tweet has a negative sentiment, but it is directed at the actor rather than the movie. This is not a perfect example, as sentiment toward the actor and toward the movie are related.

The main challenge in sentiment analysis using lexical methods is building a dictionary of words/phrases and their sentiment scores. It is very hard to do so in full generality, and often the best approach is to choose a subject and build a list for it. Sentiment analysis is thus highly domain-centric: techniques developed for stocks may not work for movies.

Solving these problems requires expertise in NLP and computational linguistics. In NLP terminology, they correspond to entity extraction, named-entity recognition (NER), and entity pattern extraction.

Beyond Twitter

Facebook performed an experiment to measure the effect of removing positive (or negative) posts from people's news feeds on how positive (or negative) their own posts were in the days after the change. They found that people from whose news feeds negative posts were removed produced a larger percentage of positive words and a smaller percentage of negative words in their posts; the group from whose news feeds positive posts were removed showed the opposite tendency. The procedure and results of this experiment were published in a paper in the Proceedings of the National Academy of Sciences. Though I don't subscribe to the idea of using users as subjects in a psychological experiment without their knowledge, this is a cool application of sentiment analysis.

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning, and he leverages tools from these areas to answer questions in biology. Janu holds a Master's in Theoretical Physics from the University of Cambridge in the UK, and left a mathematics PhD program (after 3 years) at Kansas State University.
