Our approach in this book is to use statistics and social science theory to mine social media, with R as our base programming language. We will walk you through many important and recent developments in the field of social media mining. We'll cover advanced topics such as Open Authorization (OAuth), Twitter's OAuth API, Facebook's Graph API, and so on, along with some interesting references and resources. We assume that you have a basic understanding of R, along with the basic concepts of the social sciences.
In this chapter, we will cover the following topics:
Importance of social media mining
Basics of social media mining
Social media mining techniques
Basic data mining algorithms
"A group of Internet-based applications that build on the ideological and technological foundations of Web 2.0 and that allow the creation and exchange of user-generated content".
Social media spans many Internet-based platforms that facilitate human interaction, such as:
Networking, for example, Facebook, LinkedIn, and so on
Microblogging, for example, Twitter, Tumblr, and so on
Photo sharing, for example, Instagram, Flickr, and so on
Video sharing, for example, YouTube, Vimeo, and so on
Knowledge exchange, for example, Stack Overflow, GitHub, and so on
Instant messaging, for example, WhatsApp, Hike, and so on
Traditional media, such as radio, newspapers, or television, facilitates one-way communication with a limited scope of reach and usability. Though the audience can interact (two-way communication) with these channels, particularly radio, the quality and frequency of such communication are very limited. On the other hand, Internet-based social media offers multi-way communication with features such as immediacy and permanence. It is important to understand all aspects of social media today because real customers are using it.
Today's corporate marketing departments are maturing in their understanding of the promise and impact of social media. In the early years, social media was perceived as yet another broadcasting medium for publishing banner advertisements to the world. Unfortunately, many still believe this to be the only use of social media. While it's undeniable that social media is a great tool for banner advertisements in terms of cost and reach, it's not limited to that. There is another use of social media that can turn out to be more influential in the long term. Businesses need to heed the opinions of consumers by mining social networks. By gathering information on the opinions of consumers, they can understand current and potential customers' outlook, and such informative data can guide business decisions, in the long run influencing the fate of any business.
Current customer relationship management (CRM) systems create consumer profiles to support marketing decisions using a mixture of demographics, past buying patterns, and other prior actions. These methods basically empower companies to keep a close eye on their consumers. The customer data available via communities such as LinkedIn or Facebook is quite detailed. A financial business with access to such data would know not only the intricate details of a customer, but also the customer's interests, and evidence that might be beneficial in preparing future marketing plans. Every minute of every day, Facebook, Twitter, LinkedIn, and other online communities generate enormous amounts of this data. If it could be mined, it might work like a real-time CRM, persistently revealing new trends and opportunities.
In simple terms, social media mining is the systematic analysis of the information generated by social media. It has become necessary to tap into this enormous social media data with the help of today's technology, which is not without its challenges. Social media data streams are a prime example of Big Data. Dealing with data sets measured in petabytes is challenging, and things like the signal-to-noise ratio need to be taken into consideration. It is estimated that around 20 percent of such social media data streams contain relevant information.
The set of tools and techniques used to mine such information is collectively called data mining, and in the context of social media it's called social media mining (SMM). SMM can generate insights about how much someone influences others on the Web. SMM can help businesses identify the pain points of their customers in real time; in turn, this can be used for proactive planning. Identifying potential customers is a very important problem that every business has been trying to solve for ages. SMM can help us identify potential customers based on their online activities and those of their friends. There has been a lot of research on social media in multiple disciplines:
Why does social media mining matter?
If you can measure it, you can improve it
Social media mining is currently in its infancy, and its practitioners are still learning and developing new approaches. Social media mining draws its roots from many fields, such as statistics, machine learning, information retrieval, pattern recognition, and bioinformatics. The parent fields themselves are not without their challenges. The sheer amount of data generated daily is staggering, yet it also enables novel data mining solutions and scalable computational models built on these fields' fundamental concepts, theories, and algorithms.
In social media theory, people are considered to be the basic building blocks of a world created on the grounds provided by social media. Measuring the interactions between these building blocks and other entities, such as sites, networks, and content, leads to the discovery of human nature. The knowledge gained via these measurements constitutes the soul of these social worlds. Finding insights in this data, where social relationships play a critical role, can be termed the mining of social media data. This problem faces not only the basic data mining challenges but also those that emerge from the social-relationship aspect. We have listed some of the important challenges here:
Big Data: Should we use the tastes of a friend of a friend of the person of interest, who studied at one particular college and whose hometown was one particular city, to recommend something to the person of interest? In some applications this might be overkill, while in others this information could lead to a very small but differentiating performance increase. The content available in social media data can be very deep. However, this can lead to a problem called overfitting, which is well known in the domain of machine learning. Using multiple sources of data can complicate overall performance in a similar fashion.
Sufficiency: Should we restrict ourselves to the person of interest's alma mater and his/her hometown to recommend something, and not use the tastes of his/her friends? Common sense says this is not correct and we may be missing out on something. This is a problem commonly known as underfitting. This problem can also arise because most social media networks restrict the amount of information that can be accessed in a certain time frame, so sometimes the data is not sufficient to generate patterns and/or recommendations.
Noise removal error: Preprocessing steps are more or less always required in any application of data mining. These steps not only make the actual application run faster on the cleaned data, but also improve overall accuracy. Due to all the clutter present in most social data, a large amount of noise is always expected, but effectively removing the noise from the data we have is a very tricky business. You can always end up losing some information while trying to remove this noise. What counts as noise is, by definition, subjective and easy to misjudge; hence, this step can end up introducing more error into pattern recognition.
Evaluation dilemma: Because of the sheer size of social media data, it's often not possible to obtain a properly annotated dataset to train a supervised machine-learning algorithm. Without proper ground truth data, there is no way to judge the accuracy of any off-the-shelf classification algorithm. Since there can't be any accuracy measure without ground truth data, only a clustering (unsupervised machine learning) algorithm can be applied. But the problem is that such algorithms rely heavily on domain expertise.
Network graphs make up the dominant data structure and appear, essentially, in all forms of social media data/information. Typically, user communities constitute a group of nodes in such graphs where nodes within the same community or cluster tend to share common features.
Graph mining can be described as the process of extracting useful knowledge (patterns, outliers, and so on) from the social relationships between community members, represented as a graph. The most influential example of graph mining is Facebook Graph Search.
Extracting meaning from the unstructured text data present in social media is described as text mining. The primary targets of this type of mining are blogs and microblogs such as Twitter. It's also applicable to other social networks, such as Facebook, that contain links to posts, blogs, and other news articles.
Any data mining activity follows some generic steps to gain some useful insights from the data. Since social media is the central theme of this book, let's discuss these steps by taking example data from Twitter:
Getting authentication from the social website
Cleaning and preprocessing
Data modeling using standard algorithms such as opinion mining, clustering, anomaly/spam detection, correlations and segmentations, recommendations
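To give a taste of the cleaning and preprocessing step, here is a minimal base-R sketch that strips URLs, @mentions, retweet markers, and punctuation from raw tweet text. The `tweets` vector below is made-up sample data, not output from any real API call:

```r
# Hypothetical raw tweets (made-up sample data for illustration)
tweets <- c(
  "Loving the new #rstats release! https://t.co/abc123 @rlang",
  "RT @user: Big Data is everywhere...  #bigdata"
)

clean_tweet <- function(x) {
  x <- gsub("http\\S+", "", x)                # drop URLs
  x <- gsub("@\\w+", "", x)                    # drop @mentions
  x <- gsub("#", "", x)                        # keep hashtag words, drop '#'
  x <- gsub("^RT\\b", "", x)                   # drop leading retweet marker
  x <- gsub("[^[:alnum:][:space:]]", " ", x)   # strip punctuation
  x <- tolower(x)
  gsub("\\s+", " ", trimws(x))                 # collapse whitespace
}

cleaned <- vapply(tweets, clean_tweet, character(1), USE.NAMES = FALSE)
print(cleaned)
```

A real pipeline would apply the same idea to tweets pulled via the Twitter API, usually followed by stop-word removal and tokenization.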
Most social media websites provide API access to their data. To do the mining, we (as a third party) need some mechanism to get access to users' data available on these websites. But the problem is that a user will not share their credentials with anyone, for obvious security reasons. This is where OAuth comes into the picture. According to its home page (http://oauth.net/), OAuth can be defined as follows:
An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications.
To understand it better, let's take the example of Instagram, where a user can allow a printing service access to his/her private photographs stored on Instagram's servers, without sharing his/her credentials with the printing service. Instead, the user authenticates directly with Instagram, which issues the printing service delegation-specific permissions. The user here is the owner of the resource and the printing service is the third-party client. Social media websites such as Instagram, Twitter, and Facebook allow various applications to access user data for purposes such as advertisements or recommendations. Almost all cab service applications access user location this way.
Here's a diagram illustrating the concept:
OAuth 2.0 provides various methods by which different levels of authorization for various resources can reliably be granted to the requesting client application. One of the most frequently used and most important use cases is the authorization of one web server/application to access data held by another.
The following image shows the authentication process:
The client accesses the web app with the button Login via Twitter (or Login via LinkedIn or Login via Facebook).
This takes the user to the authenticating app, which asks the user to allow the client app access to his/her resources, that is, the profile data. The user needs to accept this to go to the next step.
The authenticating app then redirects the user to a redirect link that the client app provided. Usually, the redirect link is established when the client app is registered with the authenticating app, which at the same time issues the client app its client credentials.
Via the redirect link, the user returns to the client app with an authorization code in the redirect request parameters. The client app then exchanges this authorization code, together with its client credentials, with the authenticating app for an access token.
Depending on the network, the access provided by the access token can be constrained not only in terms of the information available but also in the lifetime of the access token itself. As soon as the client app obtains an access token, it can send this token to the respective social media organizations, such as Facebook, LinkedIn, Twitter, and so on, to access the resources on their servers belonging to the users who granted permission via the tokens.
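To make the flow concrete, here is a base-R sketch of the first step: constructing the authorization URL the user is redirected to. The endpoint, client ID, and redirect link below are hypothetical placeholders, not real credentials; in practice, R packages such as httr (for example, its oauth2.0_token function) handle the full flow, including the code-for-token exchange:

```r
# Build the URL for step 1 of the OAuth 2.0 authorization code flow.
build_authorize_url <- function(endpoint, client_id, redirect_uri, scope) {
  params <- c(
    response_type = "code",
    client_id     = client_id,
    redirect_uri  = URLencode(redirect_uri, reserved = TRUE),  # percent-encode
    scope         = URLencode(scope, reserved = TRUE)
  )
  paste0(endpoint, "?", paste(names(params), params, sep = "=", collapse = "&"))
}

url <- build_authorize_url(
  endpoint     = "https://example.com/oauth/authorize",   # placeholder endpoint
  client_id    = "my-client-id",                           # placeholder ID
  redirect_uri = "https://myapp.example.com/callback",     # placeholder link
  scope        = "read"
)
print(url)
```

The user visits this URL, approves access, and is sent back to the redirect link with an authorization code, which the client app then trades for an access token.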
OAuth 2.0 does not require the client app to implement cryptographic signing
OAuth 2.0 offers much less complicated signatures
OAuth 2.0 generates short-lived access tokens, hence it is more secure
OAuth 2.0 has a clearer segregation of roles concerning the server responsible for handling user authorization and the server handling OAuth requests
A number of R packages are available to visualize text data. Depending on the available data and the objective, these libraries provide options varying from simple clusters of words to plots aligned with semantic analysis or topic modeling of the corpus. These libraries help us understand text data better. In this book, we'll use the following libraries:
One of the simplest and most frequently used visualizations is the word cloud. The basic intent behind using a word cloud is to visualize the relative weights of the words present. The wordcloud R package helps the user understand the weight of a word/term with respect to the tf-idf matrix: the size and color of a word in the plot are proportional to its weight. Here's an example of one such simple word cloud based on a corpus created from tweets:
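Under the hood, a word cloud is driven by a vector of terms and their weights. Here is a base-R sketch that computes raw term frequencies (a simpler stand-in for tf-idf weights) from a tiny made-up corpus and, if the wordcloud package happens to be installed, plots them:

```r
# Tiny made-up corpus of already-cleaned tweets (for illustration only)
corpus <- c("data mining is fun", "social media data", "mining social data")

words <- unlist(strsplit(corpus, "\\s+"))       # tokenize on whitespace
freqs <- sort(table(words), decreasing = TRUE)   # raw term frequencies
print(freqs)

# Plot only if the wordcloud package is available; the sketch runs without it
if (requireNamespace("wordcloud", quietly = TRUE)) {
  wordcloud::wordcloud(names(freqs), as.numeric(freqs), min.freq = 1)
}
```

In a real application, the frequencies would come from a term-document matrix built with a package such as tm, typically weighted by tf-idf rather than raw counts.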
There are R packages that can generate a word cloud similar to the preceding figure, along with the sentiment each word represents. Such plots are one step ahead of the basic word cloud because they let the user understand what kinds of sentiments are present and why particular documents (collections of tweets) are of a particular nature (joy, sadness, disgust, love, and so on). Timothy Jurka developed one such package, which we are going to use. The two main functions of this package are as follows:
classify_emotion: As the name suggests, this procedure helps the user understand the type of sentiment present. It also clusters the words in the query based on the sentiment and the level of emotion that a particular word presents. A voting-based classification is one of the algorithms used in this particular procedure. The Naive Bayes algorithm is also used for more enhanced results. The training dataset used by these algorithms is from Carlo Strapparava and Alessandro Valitutti. Here's a sample output:
classify_polarity: This procedure indicates the overall polarity of the emotions (positive or negative). It is, in a way, an extension of the previous procedure. The training data used here comes from Janyce Wiebe's subjectivity lexicon.
The most commonly used visualization tool for Facebook data is Gephi. The key differences between Facebook and Twitter are the richness of a user's profile and the social connections one shares on Facebook. Gephi helps users visualize both of these distinctions in a very pleasant way. It enables a user to understand the impact one Facebook profile has, or could have, over the network. Gephi is a highly customizable and user-friendly tool. We'll discuss this in Chapter 3, Find Friends on Facebook. As a working example, here's the graph representation of a social network of two friends.
Preprocessing and cleaning are the very first and most basic steps in any data-mining problem. A learning algorithm run on a unified and cleaned dataset can not only run faster, but also produce more accurate results. The first steps involve annotating the target data, in the case of classification problems, and understanding the feature vector space, in order to apply an appropriate distance measure, for clustering problems. Identifying noise samples and cleaning them up is a very tricky task, but the better it's done, the more accurate the results one can expect. As mentioned previously, you need to be careful with cleaning tasks, as they can lead to the rejection of good samples. Furthermore, the preprocessing steps need to be reversible because, at the end of the exercise, the results need to be mapped back to the original sample space for them to make sense.
In simple words, opinion mining or sentiment analysis is the method by which we try to assess the opinion/sentiment present in a given phrase. The phrase could be any sentence. Though our examples will be in English, sentiment analysis is not limited to any particular language. Also, the sentence could come from any source: it could be a 140-character tweet, a Facebook post/chat, an SMS, and so on. Consider the following examples:
Visiting to the wonderful places in Europe. Feeling real happy—Positive.
I love little sunshine in winters, make me feel live—Positive.
I am stuck in a same place, feeling sad—Negative.
The cab driver was a nice person. Think many of them are actually good people—Positive.
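As a toy illustration of how such labels could be assigned, here is a deliberately tiny, hand-made lexicon scorer in base R, applied to two of the sample sentences above. This is a simplified stand-in for real polarity classifiers, not the algorithm of any particular package:

```r
# Tiny hand-made sentiment lexicons (made up for illustration)
positive <- c("wonderful", "happy", "love", "nice", "good")
negative <- c("stuck", "sad", "awful", "hate")

polarity <- function(sentence) {
  # lowercase tokens with punctuation stripped
  tokens <- tolower(unlist(strsplit(gsub("[[:punct:]]", " ", sentence), "\\s+")))
  score  <- sum(tokens %in% positive) - sum(tokens %in% negative)
  if (score > 0) "Positive" else if (score < 0) "Negative" else "Neutral"
}

sentences <- c(
  "Visiting to the wonderful places in Europe. Feeling real happy",
  "I am stuck in a same place, feeling sad"
)
sapply(sentences, polarity, USE.NAMES = FALSE)
```

Real classifiers go well beyond word counting, handling negation ("not happy"), intensity, and context, which is why probabilistic and machine-learning approaches are preferred in practice.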
Sentiment analysis can play a crucial role in understanding customer sentiment, which can actually affect the growth of any business. With social media platforms such as Twitter, the saying that words are mightier than swords has reached a whole new level. In the next chapter, we'll see how customer sentiment can affect the growth of a business. Also, there is nothing like word-of-mouth marketing, and again, social media platforms can help you win more business via the words of real customers. This field has become so advanced that people have actually predicted the outcomes of major elections based on the sentiments of voters. Similarly, stock market forecasts are now being generated based on the analysis of customer tweets.
oj: This is the object (for example, a product). It is identified via named entity extraction.
fjk: This is a feature of the object oj. It is extracted using information extraction techniques.
SOijkl: This is the sentiment value of the opinion of the opinion holder hi on feature fjk of object oj at time tl.
hi: This is the opinion holder.
tl: This is the time at which the opinion was expressed.
Perform the following steps to get the sentiment value SOijkl:
We look at the sentiment orientation (SO) of the patterns we mined. For example, we may have extracted Remarkable + Handset, that is, [JJ] + [NN] (an adjective followed by a noun). The opposite might be Awful, for instance. In this phase, the system attempts to position the terms on an emotive scale.
"Usually individuals like the fresh Handset." They recommend it
"Usually individuals hate the fresh Handset." They don't recommend it
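The pattern-extraction and scoring steps above can be sketched in base R. The hand-tagged tokens and the toy sentiment-orientation scale below are made up for illustration; a real pipeline would obtain the POS tags from a tagger:

```r
# A toy sentence, hand-tagged with Penn-Treebank-style POS tags
tokens <- c("Usually", "individuals", "like", "the", "remarkable", "handset")
tags   <- c("RB",      "NNS",         "VBP",  "DT",  "JJ",         "NN")

# Toy sentiment-orientation scores for adjectives (made up for illustration)
so_scale <- c(remarkable = 1, awful = -1)

# Find positions where an adjective (JJ) is immediately followed by a noun (NN)
hits     <- which(tags == "JJ" & c(tags[-1], "") == "NN")
patterns <- paste(tokens[hits], tokens[hits + 1])
scores   <- so_scale[tolower(tokens[hits])]

print(patterns)
print(unname(scores))
```

Aggregating such scores over many mined patterns is what lets a system conclude whether individuals, on the whole, recommend the handset or not.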
It's not easy to classify sentiments; nonetheless, various classification algorithms have been employed to aid opinion mining. These algorithms vary from simple probabilistic classifiers such as Naïve Bayes (a probability classifier that assumes all features are independent and does not use any prior information) to more advanced classifiers such as maximum entropy (which uses prior information to a certain extent).
Many hyperspace classifiers, such as Support Vector Machines (SVM) and Neural Networks (NN), have also been used to classify sentiments. Between SVM and NN, SVM in general works wonders due to the kernel trick.
Other methods are being explored as well, for example, anomaly/spam detection or social spammer detection. Fake profiles created with malicious intent are known as spam or anomalous profiles. Users who create such profiles often pretend to be someone they are not and try to perform inappropriate activities, which can eventually cause problems for the person they are imitating as well as for others. There has been an increase in the number of cases of online bullying, trolling, and so on, which are direct consequences of social spamming. We'll show you various classification algorithms to detect these fake profiles in Chapter 3, Find Friends on Facebook.
The algorithms we'll use to identify spam and/or spammers based on example datasets fall under the general class of algorithms known as supervised machine learning algorithms. The example dataset used by these algorithms is called the training set. For notational consistency, let's denote the ith record in the training set as a pair consisting of an input vector xi and an output label yi. The vector xi consists of a set of features representative of the ith sample point. The task of such an algorithm is to infer a function f (from a given possible set of functions F) that can map the xi's to the respective yi's with a high level of accuracy. This function f is sometimes also called a learned/trained model. The process of inferring f using the training data is called learning. Once the model is trained, we use it on new records to predict their labels. The ability of such a model/algorithm to correctly identify the labels of a new example set (also called the test set) that differs from the training set is known as generalization.
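The notation above can be made concrete with a tiny sketch: a 1-nearest-neighbour classifier in base R playing the role of the learned function f, trained on made-up two-feature records (xi, yi):

```r
# Made-up training set: x_i feature vectors with y_i labels
train_x <- rbind(c(1, 1), c(1, 2), c(8, 8), c(9, 8))
train_y <- c("genuine", "genuine", "spam", "spam")

# The "learned" function f: label a new point by its nearest training point
f <- function(x_new) {
  diffs <- train_x - matrix(x_new, nrow(train_x), 2, byrow = TRUE)
  dists <- sqrt(rowSums(diffs^2))   # Euclidean distance to each x_i
  train_y[which.min(dists)]
}

f(c(2, 1))   # a point near the "genuine" cluster
f(c(8, 9))   # a point near the "spam" cluster
```

Generalization is exactly this: f was inferred from the four training pairs, yet it assigns sensible labels to points it has never seen.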
There are many algorithms in the class of supervised machine learning algorithms, such as the Naïve Bayes classifier, the decision tree classifier, and so on. One such algorithm is the SVM. In a two-class (binary) classification problem, an SVM is the maximal margin hyperplane that separates the two classes with the largest possible margin. If there are more than two classes, then multiple SVMs are learned under one-versus-rest or one-versus-one methods; discussing these two methods is beyond the scope of this book.
The following figure illustrates binary classification by an SVM. The red and black dots are training data points xi, representing the two types of label yi. SVM comes with a neat transformation that can map the current feature space to a new feature space using various kernels. Discussing the details is beyond the scope of this book.
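Training a real SVM requires a quadratic-programming solver (in R, the e1071 package's svm function is a common choice). As a self-contained stand-in, here is a plain perceptron in base R: it also learns a separating hyperplane w·x + b = 0 for linearly separable data, although without the SVM's maximal-margin guarantee:

```r
# Made-up, linearly separable training data: two clusters of points
X <- rbind(c(1, 1), c(2, 1), c(1, 2),   # class -1 (the "black dots")
           c(6, 6), c(7, 6), c(6, 7))   # class +1 (the "red dots")
y <- c(-1, -1, -1, 1, 1, 1)

# Perceptron learning rule: nudge (w, b) whenever a point is misclassified
w <- c(0, 0); b <- 0
for (epoch in 1:100) {
  for (i in seq_len(nrow(X))) {
    if (y[i] * (sum(w * X[i, ]) + b) <= 0) {
      w <- w + y[i] * X[i, ]
      b <- b + y[i]
    }
  }
}

# Classify a new point by which side of the hyperplane it falls on
predict_side <- function(x) sign(sum(w * x) + b)
predict_side(c(1.5, 1.5))
predict_side(c(6.5, 6.5))
```

An SVM would pick, among all such separating hyperplanes, the one with the largest margin; the perceptron merely stops at the first one it finds.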
In graph analogy, a community is a set of nodes between which communications/interactions are more frequent than with those outside the set. From a marketing point of view, community detection becomes very crucial and has been proven to be very rewarding in terms of return on investment (ROI). For example, travel enthusiasts can be identified on various social media websites based on the places they have visited, their posts, comments, tweets, and so on. If such segmentation can be done, then selling them a travel-related product (such as a handheld compass, travel pillow, global alarm clock, binoculars, slim digital camera, noise-cancelling headphones, and so on) would stand a higher chance of success. Hence, with a focused marketing effort, the business can achieve a higher ROI.
While spam detection is a supervised machine-learning task, community detection or clustering falls under the class of unsupervised learning algorithms. Social media offers two types of communities. Some are explicitly created groups of people with a common location, hobby, or occupation. Then there are people who are not connected through such groups; identifying them is a clustering task. This is performed using their interactions (for example, mentioning a common thing in their comments/posts/tweets) as feature sets (xi's), without label information (unlike supervised machine learning). These features are passed to various unsupervised machine learning algorithms to find the commonalities and hence the communities. Many algorithms also provide the extent/degree/affinity score with which a particular person belongs to a specific community.
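Here is a minimal unsupervised sketch in base R: k-means clustering on made-up per-user interaction counts (say, travel-related versus tech-related posts), recovering the two communities without any labels:

```r
set.seed(42)  # for reproducible cluster assignments

# Made-up feature vectors: (travel posts, tech posts) per user
features <- rbind(
  c(9, 1), c(8, 2), c(10, 1),   # users posting mostly about travel
  c(1, 9), c(2, 8), c(1, 10)    # users posting mostly about tech
)

# k-means with 2 centers; nstart restarts guard against poor local optima
km <- kmeans(features, centers = 2, nstart = 10)
km$cluster   # users 1-3 share one label, users 4-6 the other
```

Unlike the spam classifier earlier, no yi labels were supplied: the algorithm discovers the two communities purely from the similarity of the feature vectors.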
There are many algorithms and techniques proposed in academia that we'll discuss in detail in the following chapters. Basically, these methods are based on calculating the influence carried by the links between various nodes (people, locations, and other such entities). Similar people are likely to be linked, linked users influence each other and become more similar over time, and two users are more likely to belong to the same group or community if they have higher similarity.
Visualization helps one understand more about the data in hand. A picture is worth a thousand words. We get a better understanding of the feature space by representing data on a graphical platform. Trends, anomalies, relationships, and other similar patterns help us think about the possible algorithms and heuristics to use on the given data for a given problem. There can be various levels of abstraction and granularity present in the data. Here's a list of a few standard methods used to visualize data:
Various social network analysis tools such as igraph, MuxViz, NetworkX, and so on
In the next chapters, we'll show you how these help us understand the results better. How to interpret the results is a crucial part of the mining process.
What are people talking about right now?
Mining entities from users' tweets
Gender analysis of Facebook post likes
Analysis of Facebook friends network
Inferring community behavior dynamically
Questions such as "Who influences whom?"
In this chapter, we tried to familiarize you with the concepts of social media and social media mining.
We discussed OAuth, which offers a technique for users to grant third parties access to their resources without sharing their credentials. It also offers a way to grant controlled access in terms of scope and duration.
We saw examples of various R packages available to visualize text data. We discussed innovative ways to analyze and study text data via plots. The application of sentiment analysis along with topic mining was also discussed in the same sections. To many, this is a new way to look at such data. Historically, people have used plots for numerical data; plotting words on 2D graphs is relatively new, and advances have gone beyond 2D plots. With Facebook and LinkedIn data, the Gephi tool allows visualizing social networks in 3D.
Next, you learned the basic steps of any data-mining problem along with various machine learning algorithms. We'll see the applications of many of these algorithms in the coming chapters. We briefly talked about sentiment analysis, anomaly detection, and various community detection algorithms. So far, we have not gone deep into any of the algorithms, but will dive into them in the later chapters.
In the next chapter, we will apply the knowledge gained so far to mine Twitter and give detailed information of the methods and techniques used there.