
You're reading from Spark Cookbook

Product type: Book
Published in: Jul 2015
ISBN-13: 9781783987061
Edition: 1st

Author: Rishi Yadav

Rishi Yadav has 19 years of experience in designing and developing enterprise applications. He is an open source software expert and advises American companies on big data and public cloud trends. Rishi was honored as one of Silicon Valley's 40 under 40 in 2014. He earned his bachelor's degree from the prestigious Indian Institute of Technology, Delhi, in 1998. About 12 years ago, Rishi started InfoObjects, a company that helps data-driven businesses gain new insights into data. InfoObjects combines the power of open source and big data to solve business challenges for its clients and has a special focus on Apache Spark. The company has been on the Inc. 5000 list of the fastest growing companies for 6 years in a row. InfoObjects has also been named the best place to work in the Bay Area in 2014 and 2015. Rishi is an open source contributor and active blogger.

This book is dedicated to my parents, Ganesh and Bhagwati Yadav; I would not be where I am without their unconditional support, trust, and providing me the freedom to choose a path of my own. Special thanks go to my life partner, Anjali, for providing immense support and putting up with my long, arduous hours (yet again). Our 9-year-old son, Vedant, and niece, Kashmira, were the unrelenting force behind keeping me and the book on track. Big thanks to InfoObjects' CTO and my business partner, Sudhir Jangir, for providing valuable feedback and also contributing recipes on enterprise security, a topic he is passionate about; to our SVP, Bart Hickenlooper, for taking charge in leading the company to the next level; to Tanmoy Chowdhury and Neeraj Gupta for their valuable advice; to Yogesh Chandani, Animesh Chauhan, and Katie Nelson for running operations skillfully so that I could focus on this book; and to our internal review team (especially Rakesh Chandran) for ironing out the kinks. I would also like to thank Marcel Izumi for, as always, providing creative visuals.

I cannot miss thanking our dog, Sparky, for giving me company on my long nights out. Last but not least, special thanks to our valuable clients, partners, and employees, who have made InfoObjects the best place to work at and, needless to say, an immensely successful organization.
Chapter 7. Supervised Learning with MLlib – Regression

This chapter is divided into the following recipes:

  • Using linear regression

  • Understanding the cost function

  • Doing linear regression with lasso

  • Doing ridge regression

Introduction


The following is Wikipedia's definition of supervised learning:

"Supervised learning is the machine learning task of inferring a function from labeled training data."

Supervised learning has two steps:

  • Train the algorithm with the training dataset; this is like giving it questions along with their answers first

  • Use the test dataset to ask the trained algorithm a new set of questions
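These two steps can be sketched in a few lines of plain Python. This is an illustration of the idea only, not the book's Spark code; the data and the trivial one-parameter model are made up for the sketch:

```python
# Step 1 data: "questions" (x) paired with their "answers" (y)
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
# Step 2 data: new questions whose answers the model has not seen
test = [(4.0, 8.0), (5.0, 10.0)]

# Step 1: training -- fit y = w * x by least squares through the origin
w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Step 2: ask the trained model the new questions and compare with the
# known answers to measure how well it generalizes
errors = [abs(w * x - y) for x, y in test]
print(w)            # 2.0
print(max(errors))  # 0.0 -- the test points lie on the same line
```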

There are two types of supervised learning algorithms:

  • Regression: This predicts continuous value output, such as house price.

  • Classification: This predicts discrete-valued output (0 or 1), called a label, such as whether an e-mail is spam or not. Classification is not limited to two values; it can have multiple values, such as marking an e-mail important, not important, urgent, and so on (0, 1, 2…).

Note

We are going to cover regression in this chapter and classification in the next.

As an example dataset for regression, we will use recently sold house data from the City of Saratoga, CA, as a training set to train the algorithm...

Using linear regression


Linear regression is an approach to modeling the value of a response variable y based on one or more predictor variables, or features, x.

Getting ready

Let's use some housing data to predict the price of a house based on its size. The following are the sizes and prices of houses in the City of Saratoga, CA, in early 2014:

House size (sq ft)    Price
2100                  $1,620,000
2300                  $1,690,000
2046                  $1,400,000
4314                  $2,000,000
1244                  $1,060,000
4608                  $3,830,000
2173                  $1,230,000
2750                  $2,400,000
4010                  $3,380,000
1959                  $1,480,000

Here's a graphical representation of the same:
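The best-fit line through the table above can also be computed directly. The following is a sketch (not from the book) of ordinary least squares in plain Python; MLlib's LinearRegressionWithSGD arrives at a similar line iteratively instead of in closed form:

```python
# House data from the table above: sizes in sq ft, prices in dollars
sizes = [2100, 2300, 2046, 4314, 1244, 4608, 2173, 2750, 4010, 1959]
prices = [1620000, 1690000, 1400000, 2000000, 1060000,
          3830000, 1230000, 2400000, 3380000, 1480000]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# slope = covariance(x, y) / variance(x); the intercept makes the line
# pass through the point of means (mean_x, mean_y)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices))
         / sum((x - mean_x) ** 2 for x in sizes))
intercept = mean_y - slope * mean_x

print(round(slope))      # price increase per extra square foot
print(round(intercept))
```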

How to do it…

  1. Start the Spark shell:

    $ spark-shell
    
  2. Import the statistics and related classes:

    scala> import org.apache.spark.mllib.linalg.Vectors
    scala> import org.apache.spark.mllib.regression.LabeledPoint
    scala> import org.apache.spark.mllib.regression.LinearRegressionWithSGD
    
  3. Create the LabeledPoint array with the house price as the label:

    scala> val points = Array(
    LabeledPoint(1620000...

Understanding the cost function


The cost function, or loss function, is a very important concept in machine learning algorithms. Most algorithms have some form of cost function, and the goal is to minimize it. Parameters that affect the cost function, such as stepSize in the last recipe, need to be set by hand. Therefore, understanding the concept of the cost function is very important.

In this recipe, we are going to analyze the cost function for linear regression. Linear regression is a simple algorithm to understand, and it will help readers understand the role of cost functions in even complex algorithms.

Let's go back to linear regression. The goal is to find the best-fitting line so that the mean squared error is minimized. Here, error refers to the difference between the value predicted by the best-fitting line and the actual value of the response variable in the training dataset.
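A minimal sketch of this squared-error cost, using a few of the Saratoga points from the earlier table. The (1/2m) scaling is a common convention (the 2 cancels when differentiating), not necessarily the book's exact formulation, and the candidate parameter values are made up for illustration:

```python
# A few (size, price) points from the Saratoga table
data = [(2100, 1620000), (2300, 1690000), (1244, 1060000)]

def cost(theta0, theta1):
    # mean squared error between the line's predictions and actual prices,
    # for the candidate line y = theta0 + theta1 * x
    m = len(data)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in data) / (2 * m)

# A reasonable line has far lower cost than the flat line y = 0;
# training searches for the (theta0, theta1) pair minimizing this value.
print(cost(0, 0) > cost(100000, 700))  # True
```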

For the simple case of a single predictor variable, the best-fit line can be written as:

This function is also called...

Doing linear regression with lasso


The lasso is a shrinkage and selection method for linear regression. It minimizes the usual sum of squared errors, with a bound on the sum of the absolute values of the coefficients. It is based on the original lasso paper found at http://statweb.stanford.edu/~tibs/lasso/lasso.pdf.

The least squares method we used in the last recipe is also called ordinary least squares (OLS). OLS has two challenges:

  • Prediction accuracy: Predictions made using OLS usually have low forecast bias and high variance. Prediction accuracy can be improved by shrinking some coefficients (or even making them zero). There will be some increase in bias, but overall prediction accuracy will improve.

  • Interpretation: With a large number of predictors, it is desirable to find a subset of them that exhibits the strongest effect (correlation).
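The shrink-to-zero behavior described above can be sketched for the simplest case. For a single standardized feature, the lasso solution reduces to soft-thresholding of the OLS coefficient; this is an illustration of the penalty's effect, not MLlib's implementation:

```python
def soft_threshold(ols_coef, lam):
    # Shrink the OLS coefficient toward zero by lam; coefficients whose
    # magnitude is below lam are set to exactly zero -- this is how the
    # lasso eliminates weak predictors from the model.
    if ols_coef > lam:
        return ols_coef - lam
    if ols_coef < -lam:
        return ols_coef + lam
    return 0.0

print(soft_threshold(5.0, 1.5))   # 3.5 -- shrunk but kept
print(soft_threshold(0.8, 1.5))   # 0.0 -- eliminated from the model
```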

Note

Bias versus variance

There are two primary reasons behind prediction error: bias and variance. The best way to understand bias and variance is to look...

Doing ridge regression


An alternative to lasso for improving prediction quality is ridge regression. While in lasso a lot of features get their coefficients set to zero, and are therefore eliminated from the equation, in ridge regression predictors or features are penalized but never set to zero.
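The contrast with lasso can be sketched for a single centered feature, where ridge has a simple closed form: the L2 penalty adds lambda to the denominator of the OLS coefficient, so the coefficient shrinks smoothly but never reaches exactly zero. The data here is made up for the illustration:

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

def ridge_coef(lam):
    # Single-feature ridge solution: OLS numerator divided by
    # (sum of squares + lambda). lam = 0 recovers plain OLS; larger
    # lam shrinks the coefficient toward (but never exactly to) zero.
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

print(ridge_coef(0.0))    # 2.0 -- plain OLS
print(ridge_coef(14.0))   # 1.0 -- shrunk, but still nonzero
```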

How to do it…

  1. Start the Spark shell:

    $ spark-shell
    
  2. Import the statistics and related classes:

    scala> import org.apache.spark.mllib.linalg.Vectors
    scala> import org.apache.spark.mllib.regression.LabeledPoint
    scala> import org.apache.spark.mllib.regression.RidgeRegressionWithSGD
    
  3. Create the LabeledPoint array with the house price as the label:

    scala> val points = Array(
    LabeledPoint(1,Vectors.dense(5,3,1,2,1,3,2,2,1)),
    LabeledPoint(2,Vectors.dense(9,8,8,9,7,9,8,7,9))
    )
    
  4. Create an RDD of the preceding data:

    scala> val rdd = sc.parallelize(points)
    
  5. Train a model with this data using 100 iterations. Here, the step size and regularization parameter have been set by hand:

    scala> val model = RidgeRegressionWithSGD...
