Predicting Employee Attrition Using Ensemble Models

If you review recent machine learning competitions, one key observation you are sure to make is that the recipes of the top winning entries in most competitions include very good feature engineering, along with well-tuned ensemble models. One conclusion I draw from this observation is that good feature engineering and building well-performing models are two areas that deserve equal emphasis in order to deliver successful machine learning solutions.

While feature engineering is, more often than not, dependent on the creativity and domain expertise of the person building the model, building a well-performing model is something that can be achieved through a philosophy called ensembling. Machine learning practitioners often use ensembling techniques to beat the performance benchmarks yielded by...

Philosophy behind ensembling

Ensembling, which is hugely popular among ML practitioners, can be understood well through a simple real-world, non-ML example.

Assume that you have applied for a job at a very reputable corporate organization and you have been called for an interview. You are unlikely to be selected based on a single interview with one interviewer. In most cases, you will go through multiple rounds of interviews with several interviewers or with a panel of interviewers. The organization's expectation is that each interviewer is an expert in a particular area and evaluates your fitness for the job based on your experience in that interviewer's area of expertise. Your selection for the job, of course, depends on the consolidated feedback from all of the interviewers who talked to you. The organization deems that you...

Getting started

To get started with this section, you will have to download the WA_Fn-UseC_-HR-Employee-Attrition.csv dataset from the GitHub link for the code in this chapter.

Understanding the attrition problem and the dataset

HR analytics helps with interpreting organizational data. It uncovers people-related trends in the data and helps the HR department take the appropriate steps to keep the organization running smoothly and profitably. Attrition in a corporate setup is one of the complex challenges that people managers and HR personnel have to deal with. Interestingly, machine learning models can be deployed to predict potential attrition cases, thereby helping the appropriate HR personnel or people managers take the necessary steps to retain the employee.

In this chapter, we are going to build ML ensembles that will predict such potential cases of attrition. The job attrition dataset used for the project is a fictional dataset created by data scientists at IBM. The rsample library incorporates this dataset and we can make use of this...
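As a quick orientation, the following is a minimal sketch of loading the data and checking the class balance. It reads the WA_Fn-UseC_-HR-Employee-Attrition.csv file mentioned in the Getting started section (the file path is an assumption; adjust it to wherever you saved the download), and the attrition_data name is introduced here purely for illustration and is reused in the later sketches in this chapter.

```r
# Read the IBM HR attrition data downloaded from the chapter's GitHub repo;
# adjust the path to wherever you saved the file
attrition_data <- read.csv("WA_Fn-UseC_-HR-Employee-Attrition.csv",
                           stringsAsFactors = TRUE)

# Drop constant columns (for example EmployeeCount, Over18, and StandardHours
# in this dataset); they carry no signal and can upset some model fits
attrition_data <- attrition_data[, sapply(attrition_data,
                                          function(x) length(unique(x)) > 1)]

str(attrition_data)                         # variables and their types
prop.table(table(attrition_data$Attrition)) # roughly 84% No vs 16% Yes
```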

K-nearest neighbors model for benchmarking the performance

In this section, we will implement the k-nearest neighbors (KNN) algorithm to build a model on our IBM attrition dataset. We already know from EDA that the dataset at hand suffers from a class imbalance problem. However, we will not treat the dataset for class imbalance for now: this is an entire area on its own, with several techniques available, and is therefore out of scope for the ML ensembling topic covered in this chapter. For now, we will take the dataset as is and build ML models. Also, for class-imbalanced datasets, Kappa, precision and recall, or the area under the receiver operating characteristic curve (AUROC) are the appropriate metrics to use. However, for simplicity, we will use accuracy as the performance metric. We will adopt 10-fold cross-validation repeated...
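For reference, here is a minimal sketch of such a KNN benchmark built with the caret package on the attrition_data frame from the earlier loading sketch; the number of repeats and the size of the tuning grid are illustrative assumptions rather than the book's exact settings.

```r
library(caret)

set.seed(10000)  # for reproducibility

# 10-fold cross-validation, repeated (3 repeats chosen for illustration)
train_control <- trainControl(method = "repeatedcv",
                              number = 10,
                              repeats = 3)

# KNN is distance-based, so centering and scaling the predictors matters
knn_model <- train(Attrition ~ .,
                   data = attrition_data,
                   method = "knn",
                   trControl = train_control,
                   preProcess = c("center", "scale"),
                   tuneLength = 5)

print(knn_model)  # accuracy for each candidate value of k
```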

Bagging

Bootstrap aggregation, or bagging, is the earliest ensemble technique to be widely adopted by the ML-practicing community. Bagging involves creating multiple different models from a single dataset. To understand bagging, it is important to first understand a statistical technique called bootstrapping.

Bootstrapping involves creating multiple random subsets of a dataset. The same data sample may be picked more than once, both within a subset and across subsets; this is termed bootstrapping with replacement. The advantage of this approach is that it lets us assess the standard error of an estimated quantity, something a single computation on the whole dataset cannot give us. This technique can be better explained with an example.

Assume you have a small dataset of 1,000 samples. Based on the samples, you are asked to compute the average of the population that the sample represents. Now, a direct...
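To make the idea concrete, here is a minimal sketch, in base R, of bootstrapping the mean of a 1,000-sample dataset, followed by a bagged-tree model fitted with caret's treebag method on the attrition_data frame loaded earlier; the toy data, the number of resamples, and the choice of caret/ipred are illustrative assumptions.

```r
set.seed(10000)

# --- Bootstrapping the mean of a 1,000-sample dataset (toy data) ---
samples <- rnorm(1000, mean = 50, sd = 10)

# Draw many resamples with replacement and record the mean of each
boot_means <- replicate(500,
                        mean(sample(samples, size = 1000, replace = TRUE)))

mean(boot_means)  # bootstrap estimate of the mean
sd(boot_means)    # bootstrap estimate of the standard error of the mean

# --- Bagging applies the same idea to models: fit one decision tree per
#     bootstrap sample and aggregate their predictions ---
library(caret)
library(ipred)    # backend used by caret's "treebag" method

bagged_model <- train(Attrition ~ .,
                      data = attrition_data,
                      method = "treebag",
                      trControl = trainControl(method = "repeatedcv",
                                               number = 10, repeats = 3))
print(bagged_model)
```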

Randomization with random forests

As we've seen in bagging, we create a number of bags, and a model is trained on each of them. Each bag consists of a subset of the actual dataset; however, the number of features or variables remains the same in each bag. In other words, what we performed in bagging is subsetting the dataset's rows.

In random forests, while we create bags from the dataset through subsetting the rows, we also subset the features (columns) that need to be included in each of the bags.

Assume that you have 1,000 observations with 20 features in your dataset. We could create 20 bags, where each bag has 100 observations (this is possible because of bootstrapping with replacement) and five features. Now, 20 models are trained, where each model gets to see only the bag it is assigned. The final prediction is arrived at by voting or averaging...
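As a rough illustration, the sketch below fits a random forest with the randomForest package on the attrition_data frame loaded earlier. Note that in this implementation the random feature subset (mtry) is drawn afresh at every split of every tree rather than once per bag, and the ntree and mtry values are illustrative assumptions.

```r
library(randomForest)

set.seed(10000)

# Random forest: each tree sees a bootstrap sample of the rows and, at every
# split, only a random subset of the features (mtry of them)
rf_model <- randomForest(Attrition ~ .,
                         data = attrition_data,
                         ntree = 500,
                         mtry = 5,          # features considered per split
                         importance = TRUE)

print(rf_model)      # out-of-bag error estimate and confusion matrix
varImpPlot(rf_model) # which features drive the predictions
```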

Boosting

A weak learner is an algorithm that performs relatively poorly; generally, the accuracy obtained with a weak learner is only just above chance. It is often, if not always, observed that weak learners are computationally simple. Decision stumps or 1R algorithms are examples of weak learners. Boosting converts weak learners into strong learners. This essentially means that boosting is not itself an algorithm that makes predictions; rather, it works with an underlying weak ML algorithm to obtain better performance.

A boosting model is a sequence of models learned on subsets of data, similar to the bagging ensembling technique. The difference lies in how the subsets of data are created. Unlike bagging, the subsets used for model training are not all created prior to the start of training. Rather, boosting builds a first model with an ML algorithm that...
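As one hedged illustration, the sketch below fits a gradient boosting model (one popular flavor of boosting; AdaBoost is another) through caret's gbm method on the attrition_data frame loaded earlier; the tooling and resampling settings are assumptions, not the book's exact recipe.

```r
library(caret)
library(gbm)

set.seed(10000)

# Gradient boosting: trees are added sequentially, each new tree focusing on
# the records the ensemble so far handles poorly
gbm_model <- train(Attrition ~ .,
                   data = attrition_data,
                   method = "gbm",
                   trControl = trainControl(method = "repeatedcv",
                                            number = 10, repeats = 3),
                   verbose = FALSE)

print(gbm_model)  # accuracy across the tuning grid of boosting parameters
```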

Stacking

In all the ensembles we have learned about so far, we have manipulated the dataset in certain ways and exposed subsets of the data for model building. In stacking, however, we are not going to do anything to the dataset; instead, we are going to apply a different technique that involves using multiple ML algorithms. In stacking, we build multiple models with various ML algorithms. Each algorithm possesses a unique way of learning the characteristics of the data, and the final stacked model indirectly incorporates all of those unique ways of learning. Stacking obtains the combined power of several ML algorithms by arriving at the final prediction through voting or averaging, as we do in other types of ensembles.
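The sketch below shows one common way of stacking in R, using the caretEnsemble package on the attrition_data frame loaded earlier. The package choice, the base learners, and the use of a simple GLM meta-learner to blend the base predictions (instead of plain voting or averaging) are illustrative assumptions rather than the book's exact implementation.

```r
library(caret)
library(caretEnsemble)

set.seed(10000)

# Shared resampling scheme so all base learners see the same folds
control <- trainControl(method = "repeatedcv", number = 10, repeats = 3,
                        savePredictions = "final", classProbs = TRUE)

# Train several base learners with different learning biases
base_models <- caretList(Attrition ~ .,
                         data = attrition_data,
                         trControl = control,
                         methodList = c("knn", "rpart", "glm"))

# Combine them: here a GLM meta-learner blends the base models' predictions
stacked <- caretStack(base_models,
                      method = "glm",
                      trControl = trainControl(method = "cv", number = 5))

print(stacked)
```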

Building attrition prediction model with stacking

...

Summary

To recap, we used a class-imbalanced dataset to build the attrition model. Applying techniques to resolve the class imbalance prior to model building is another key route to better model performance measurements. We used bagging, randomization, boosting, and stacking to implement models that predict attrition. We were able to accomplish 91% accuracy just by using the features that were readily available in the dataset. Feature engineering is a crucial aspect whose role cannot be ignored in ML models, and it is one more path to explore to improve model performance further.

In the next chapter, we will explore the secret recipe for recommending products or content by building a personalized recommendation engine. We are all set to implement a project to recommend jokes. Turn to the next chapter to continue the journey of learning.
