Chapter 2. Processing Text

A significant part of the time spent on any modeling or analysis activity goes into accessing, preprocessing, and cleaning the data. We need the capability to access data from diverse sources, load it into our statistical analysis environment, and process it in a manner conducive to advanced analysis.

In this chapter, we will learn to access data from a wide variety of sources and load it into our R environment. We will also learn to perform some standard text processing.

By the time you finish this chapter, you should be equipped to retrieve data from most common data sources and process it into a custom corpus for further analysis. We will cover:

  • Accessing text from diverse sources

  • Processing text using regular expressions

  • Normalizing texts

  • Lexical diversity

  • Language detection

Accessing text from diverse sources


Reading data from diverse sources for analysis, and exporting the results to another system for reporting, can be a daunting task that sometimes takes even more time than the analysis itself. Text can be gathered from many sources: HTML pages, social media, RSS feeds, JSON or XML documents, enterprise environments, and so on. The source plays an important role in the quality of the textual data and in the way we access it. In an enterprise environment, for instance, the common sources of text are databases and log files. In the web ecosystem, web pages are the source of data. For web service applications, the sources can be JSON or XML delivered over HTTP or HTTPS. We will look at various data sources and the ways in which we can collect data from them.
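A couple of these sources can be tapped directly from R. The following is a minimal sketch, assuming the jsonlite and rvest packages are installed; the URLs are placeholders, not real endpoints:

library(jsonlite)
library(rvest)

# JSON over HTTP(S): fromJSON() fetches and parses in one step
# (the URL is a placeholder for illustration)
api.data <- fromJSON("https://example.com/api/articles.json")

# HTML page: parse the page, then extract the text of all <p> nodes
page <- read_html("https://example.com/some-article")
paragraphs <- html_text(html_nodes(page, "p"))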

File system

Reading from a file system is a very basic capability that any programming language should provide. We may have collections of...
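As a minimal sketch, the tm package can load every plain-text file in a directory into a corpus; the directory name and encoding below are illustrative assumptions:

library(tm)

# Read all files under "texts/" into a volatile (in-memory) corpus;
# the directory name is an assumption for illustration
docs <- VCorpus(DirSource("texts/", encoding = "UTF-8"),
                readerControl = list(language = "en"))

inspect(docs[1])  # peek at the first document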

Processing text using regular expressions


The web consists predominantly of unstructured text. One of the main tasks in web scraping is to collect the relevant information from heaps of textual data. Within unstructured text, we are often interested in specific pieces of information, especially when we want to analyze the data using quantitative methods: phone numbers, zip codes, latitudes and longitudes, or addresses.

First, we gather the unstructured text; next, we determine the recurring patterns behind the information we are looking for; and then we apply these patterns to the unstructured text to extract the information. When web scraping, we have to identify and extract the parts of the document that contain the relevant information. Ideally, we can do so using XPath, although sometimes the crucial information is hidden within attribute values, or the relevant information might be scattered across an HTML document. In such cases, we need to write regular expressions...
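As an illustration, base R alone can perform this kind of pattern extraction. The following minimal sketch pulls five-digit zip codes out of invented example strings:

# The input strings are invented for illustration
raw <- c("Send the parcel to Washington, DC 20500.",
         "Our office moved from 94103 to 10001 last year.")

# gregexpr() locates every match; regmatches() extracts the substrings
matches <- regmatches(raw, gregexpr("\\b[0-9]{5}\\b", raw))
unlist(matches)
# [1] "20500" "94103" "10001"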

Normalizing texts


Normalization of text basically refers to the standardization, or canonicalization, of the tokens we derived from documents in the previous step. In the simplest scenario, the query tokens are an exact match for the tokens in the document; however, there are cases when that is not true. The intent of normalization is to bring the query and index terms into the same form. For instance, if you query U.K., you might also expect documents that contain UK.

Token normalization can be performed either by implicitly creating equivalence classes or by maintaining relations between unnormalized tokens. There can be superficial differences in the character sequences of tokens, which make matching query and index terms difficult. Consider the words anti-disciplinary and antidisciplinary. If both of these words are mapped onto one term, named after one of the members of the set, for example antidisciplinary, text retrieval becomes more efficient...
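A minimal sketch with the tm package illustrates the idea, assuming a small in-memory corpus; the two sentences are invented:

library(tm)

# Two invented variants that should match after normalization
docs <- VCorpus(VectorSource(c("Anti-disciplinary approaches in the U.K.",
                               "Antidisciplinary approaches in the UK")))

docs <- tm_map(docs, content_transformer(tolower))  # case-folding
docs <- tm_map(docs, removePunctuation)             # drop hyphens and periods
docs <- tm_map(docs, stripWhitespace)               # collapse extra spaces

sapply(docs, as.character)  # both documents now read identically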

Lexical diversity


Consider a speaker who uses the term allow multiple times throughout a speech, compared to another speaker who uses the terms allow, concur, acquiesce, accede, and avow for the same idea. The latter speech has more lexical diversity than the former. Lexical diversity is widely regarded as an important parameter for rating a document in terms of textual richness and effectiveness.

Lexical diversity, in simple terms, is a measurement of the breadth and variety of vocabulary used in a document. The different measures of lexical diversity are TTR, MSTTR, MATTR, C, R, CTTR, U, S, K, Maas, HD-D, MTLD, and MTLD-MA.

The koRpus package in R provides functions to estimate lexical diversity, or complexity.

If N is the total number of tokens and V is the number of types (distinct tokens), the common measures and their wrapper functions are:

Measure | Description | Wrapper function (koRpus package in R)
TTR | Type-Token Ratio: TTR = V / N | TTR
MSTTR | Mean Segmental Type-Token Ratio: average TTR over consecutive, equal-sized segments of text | MSTTR
C | Herdan's C (log TTR): C = log(V) / log(N) | C.ld
R | Guiraud's Root TTR: R = V / sqrt(N) | R.ld
CTTR | Carroll's Corrected TTR: CTTR = V / sqrt(2N) | CTTR
U | Dugast's Uber Index: U = (log N)^2 / (log N - log V) | U.ld
...
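A minimal sketch of computing a few of these measures with koRpus, assuming an English plain-text file named sample.txt (recent koRpus versions additionally require the koRpus.lang.en language package):

library(koRpus)

# Tokenize a plain-text file; the file name is an illustrative assumption
tokens <- tokenize("sample.txt", lang = "en")

TTR(tokens)      # plain type-token ratio, V/N
lex.div(tokens)  # summary across the supported diversity measures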

Language detection


TextCat is a text classification utility whose primary use is language identification. The textcat package in R provides wrapper functions for n-gram-based text categorization and language detection. It can detect up to 75 languages:

> library(textcat)
> my.profiles <- TC_byte_profiles[names(TC_byte_profiles)]
> my.profiles

A textcat profile db of length 75.

> my.text <- c("This book is in the English language",
+              "Das ist ein deutscher Satz.",
+              "Il s'agit d'une phrase française.",
+              "Esta es una frase en español.")
> textcat(my.text, p = my.profiles)

[1] "english" "german"  "french"  "spanish"

Summary


Having accessed the data, processed it in different ways, and inspected it using multiple measures, we now have clean data ready for advanced analysis. In the next chapter, you will learn about advanced text processing techniques and text categorization.
