Practical Big Data Analytics

Product type: Book
Published: Jan 2018
Publisher: Packt
ISBN-13: 9781783554393
Pages: 412
Edition: 1st
Author: Nataraj Dasgupta

Table of Contents (16 Chapters)

  • Title Page
  • Packt Upsell
  • Contributors
  • Preface
  • Too Big or Not Too Big
  • Big Data Mining for the Masses
  • The Analytics Toolkit
  • Big Data With Hadoop
  • Big Data Mining with NoSQL
  • Spark for Big Data Analytics
  • An Introduction to Machine Learning Concepts
  • Machine Learning Deep Dive
  • Enterprise Data Science
  • Closing Thoughts on Big Data
  • External Data Science Resources
  • Other Books You May Enjoy

Chapter 8. Machine Learning Deep Dive

The previous chapter provided a preliminary overview of machine learning, including its different classes and core concepts. This chapter delves deeper into the theoretical aspects of the subject, such as the limits of algorithms and how different algorithms work.

Machine learning is a vast and complex subject, so this chapter focuses on the breadth of topics rather than their depth. Concepts are introduced at a high level, and the reader may refer to other sources to deepen their understanding of each topic.

We will start by discussing a few fundamental theories in machine learning, such as gradient descent and the VC Dimension. Next, we will look at bias and variance, two of the most important factors in any modelling process, and the concept of the bias-variance trade-off.

We'll then discuss various machine learning algorithms, their strengths, and their areas of application.

We'll conclude with...

The bias, variance, and regularization properties


Bias, variance, and the closely related topic of regularization hold very special and fundamental positions in the field of machine learning.

Bias arises when a machine learning model is too 'simple', leading to results that are consistently off from the actual values.

Variance arises when a model is too 'complex', leading to results that are very accurate on the training data but that do not generalize well to unseen/new data.
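To make the contrast concrete, here is a small illustrative sketch in Python (the book's own examples use R; the data here is invented). A model that always predicts the training mean is too simple and errs everywhere (high bias); a 1-nearest-neighbour model memorizes the training set perfectly but does worse on new data (high variance):

```python
import random

random.seed(42)

def make_data(n):
    """Noisy samples from the true relationship y = 3x."""
    pts = []
    for _ in range(n):
        x = random.uniform(0, 10)
        pts.append((x, 3 * x + random.gauss(0, 1.0)))
    return pts

train, test = make_data(50), make_data(50)

def mse(pairs, predict):
    return sum((y - predict(x)) ** 2 for x, y in pairs) / len(pairs)

# High bias: ignore x entirely and always predict the training mean.
mean_y = sum(y for _, y in train) / len(train)
def bias_model(x):
    return mean_y

# High variance: 1-nearest-neighbour, which memorizes the training set.
def nn_model(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

print(mse(train, bias_model), mse(test, bias_model))  # large on both: underfit
print(mse(train, nn_model), mse(test, nn_model))      # ~0 on train, worse on test
```

The mean model's error is large on both sets, while the nearest-neighbour model's error is essentially zero on the training set and noticeably larger on the test set, which is the classic high-variance signature.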

Once users become familiar with the process of creating machine learning models, the process can seem quite simple: get the data, create a training set and a test set, build a model, apply the model to the test dataset, and the exercise is complete. Creating models is easy; creating a good model is much more challenging. But how can one test the quality of a model? And, perhaps more importantly, how does one go about building a 'good' model?

The answer lies in a technique called regularization. It's arguably...
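As a concrete illustration of how a regularization penalty works, the following one-dimensional ridge regression sketch is in Python (the book's tutorials use R), with made-up data and penalty values:

```python
# One-dimensional ridge regression: minimizing
#   J(w) = sum((y - w*x)^2) + lam * w**2
# has the closed-form solution w = sum(x*y) / (sum(x*x) + lam).
# The penalty lam * w**2 shrinks the coefficient toward zero,
# trading a little bias for lower variance.

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]   # roughly y = 2x

def ridge_slope(lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

print(round(ridge_slope(0.0), 4))   # 2.0357 -- ordinary least squares
print(round(ridge_slope(10.0), 4))  # 1.1875 -- penalized: coefficient shrinks
```

Larger penalty values pull the coefficient further toward zero, which is exactly how regularization restrains an overly complex model.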

The gradient descent and VC Dimension theories


Gradient descent and the VC Dimension are two fundamental theories in machine learning. In general, gradient descent provides a structured approach to finding the optimal coefficients of a function. The hypothesis space of a function can be large, and with gradient descent the algorithm iteratively moves toward a minimum, where the cost function (for example, the squared sum of errors) is lowest.
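As a minimal sketch of the idea (in Python, for illustration; the book's tutorials use R), the following fits a single coefficient by repeatedly stepping against the gradient of a squared-error cost:

```python
# Fit y = w * x by gradient descent, minimizing the squared-error cost
#   J(w) = sum((y - w*x)^2)
# The data below has an exact slope of 2.0, so w should converge to 2.0.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0               # initial guess for the coefficient
learning_rate = 0.01

for step in range(500):
    # dJ/dw = -2 * sum(x * (y - w*x))
    grad = -2.0 * sum(x * (y - w * x) for x, y in zip(xs, ys))
    w -= learning_rate * grad   # step against the gradient

print(round(w, 4))    # 2.0
```

Each update moves the coefficient a small step in the direction that reduces the cost; with a suitable learning rate, the iterates settle at the minimum.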

The VC (Vapnik-Chervonenkis) Dimension measures the capacity, or richness, of a class of functions, and provides a structured way of assessing the limits of a hypothesis. It is the largest number of points that a hypothesis class can shatter, that is, classify correctly under every possible labeling of those points. For example, a linear boundary in two dimensions can shatter any 3 points in general position, but no set of 4; hence, the VC Dimension of linear classifiers in 2-dimensional space is 3.
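The shattering idea can be checked directly on small point sets. The sketch below (Python, illustrative only) uses a simple perceptron as the linear classifier and brute-forces every labeling; non-convergence within the sweep cap is treated as 'not separable', which is adequate for these tiny examples:

```python
from itertools import product

def separable(points, labels, sweeps=1000):
    """Try to find a line w1*x + w2*y + b = 0 separating +1 from -1 labels,
    using the perceptron rule. The perceptron converges whenever the labeling
    is linearly separable; we cap the sweeps and treat non-convergence as
    'not separable'."""
    w1 = w2 = b = 0.0
    for _ in range(sweeps):
        errors = 0
        for (x, y), t in zip(points, labels):
            pred = 1 if w1 * x + w2 * y + b > 0 else -1
            if pred != t:
                w1 += t * x
                w2 += t * y
                b += t
                errors += 1
        if errors == 0:
            return True
    return False

def shatters(points):
    """A hypothesis class shatters a point set if every labeling is separable."""
    return all(separable(points, labels)
               for labels in product([-1, 1], repeat=len(points)))

three = [(0, 0), (1, 0), (0, 1)]          # 3 points in general position
four = [(0, 0), (1, 1), (1, 0), (0, 1)]   # includes the XOR labeling

print(shatters(three))  # True: some line realizes every one of the 8 labelings
print(shatters(four))   # False: the XOR labeling defeats every line
```

The 4-point case fails precisely on the XOR labeling, which is the standard demonstration that lines in the plane have VC Dimension 3.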

VC Dimension, like many other topics...

Popular machine learning algorithms


There are various classes of machine learning algorithms. Since an algorithm can belong to multiple classes or categories at the same time at a conceptual level, it is hard to state that an algorithm belongs exclusively to a single class. In this section, we will briefly discuss a few of the most commonly used and well-known algorithms.

These include:

  • Regression models
  • Association rules
  • Decision trees
  • Random forest
  • Boosting algorithms
  • Support vector machines
  • K-means
  • Neural networks

Note that in the examples, we have shown the basic use of the R functions on the entire dataset. In practice, we'd split the data into a training set and a test set and, once we have built a satisfactory model, apply it to the test dataset to evaluate the model's performance.
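The split itself can be sketched as follows (in Python, for illustration; the book's own examples use R, and the function name here is ours):

```python
import random

def train_test_split(rows, test_fraction=0.3, seed=123):
    """Shuffle the rows, then cut them into a training set and a test set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # seeded so the split is reproducible
    cut = round(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

data = list(range(100))
train_set, test_set = train_test_split(data)
print(len(train_set), len(test_set))   # 70 30
```

Shuffling before cutting avoids any ordering bias in the source data, and fixing the seed keeps the experiment reproducible.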

Regression models

Regression models range from the commonly used linear, logistic, and multiple regression algorithms of classical statistics to Ridge and Lasso regression, which penalize...

Tutorial - association rules mining with CMS data


This tutorial will implement an interface for accessing rules created using the Apriori Package in R.
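Before turning to the CMS data, the core of the Apriori algorithm can be sketched in a few lines of Python on a toy basket dataset (the tutorial itself uses R against real CMS data; the transactions and threshold below are invented for illustration):

```python
# Toy transactions (hypothetical; the tutorial itself mines CMS data in R).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

min_support = 0.6  # an itemset must appear in at least 60% of transactions

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Apriori works level by level: only frequent itemsets are extended, because
# no superset of an infrequent itemset can itself be frequent.
items = sorted({i for t in transactions for i in t})
frequent = {frozenset([i]) for i in items if support({i}) >= min_support}
level = set(frequent)
while level:
    size = len(next(iter(level))) + 1
    candidates = {a | b for a in level for b in frequent if len(a | b) == size}
    level = {c for c in candidates if support(c) >= min_support}
    frequent |= level

# Confidence of the rule {bread} -> {milk}:
# support of both items together over support of the antecedent.
confidence = support({"bread", "milk"}) / support({"bread"})
print(len(frequent))         # 6 frequent itemsets: 3 singletons + 3 pairs
print(round(confidence, 2))  # 0.75
```

The pruning step is what makes Apriori tractable on large transaction databases: candidate itemsets are only ever built from itemsets already known to be frequent.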

We'll be downloading data from the CMS OpenPayments website. The site hosts data on payments made to physicians and hospitals by companies:

The site provides various ways of downloading data. Users can select the dataset of interest and download it manually. In our case, we will download the data using one of the web-based APIs available to all users.

Downloading the data

The dataset can be downloaded either at the Unix terminal (in the virtual machine) or by accessing the site directly from the browser. If you are downloading the dataset in the virtual machine, run the following command in the terminal window:

time wget -O cms2016_2.csv 'https://openpaymentsdata.cms.gov/resource/vq63-hu5i.csv?$query=select Physician_First_Name as firstName,Physician_Last_Name as lastName,Recipient_City as city,Recipient_State as state,Submitting_Applicable_Manufacturer_or_Applicable_GPO_Name...

Summary


Machine learning practitioners are often of the opinion that creating models is easy, but creating a good one is much more difficult. Indeed, not only is creating a good model important, but, perhaps more importantly, knowing how to identify a good model is what distinguishes successful from less successful machine learning endeavors.

In this chapter, we explored some of the deeper theoretical concepts in machine learning. Bias, variance, regularization, and other common concepts were explained with examples where needed. With accompanying R code, we also learnt about some of the common machine learning algorithms, such as Random Forest and Support Vector Machines. We concluded with a tutorial on how to create a web-based application for association rules mining against CMS OpenPayments data.

In the next chapter, we will read about some of the technologies that are being used in enterprises for both big data as well as machine learning. We will also discuss...
