# Encoding Categorical Variables

Categorical variables are those whose values are selected from a group of categories or labels. For example, the **Gender** variable with the values of **Male** and **Female** is categorical, and so is the **marital status** variable with the values of **never married**, **married**, **divorced**, and **widowed**. In some categorical variables, the labels have an intrinsic order; for example, in the **Student’s grade** variable, the values of **A**, **B**, **C**, and **Fail** are ordered, with **A** being the highest grade and **Fail** being the lowest. These are called ordinal categorical variables. Variables in which the categories do not have an intrinsic order are called nominal categorical variables, such as the **City** variable, with the values of **London**, **Manchester**, **Bristol**, and so on.

The values of categorical variables are often encoded as strings. To train mathematical or machine learning models, we need to transform those strings into numbers. The act of replacing strings with numbers is called **categorical encoding**. In this chapter, we will discuss multiple categorical encoding methods.

This chapter will cover the following recipes:

- Creating binary variables through one-hot encoding
- Performing one-hot encoding of frequent categories
- Replacing categories with counts or the frequency of observations
- Replacing categories with ordinal numbers
- Performing ordinal encoding based on the target value
- Implementing target mean encoding
- Encoding with the Weight of Evidence
- Grouping rare or infrequent categories
- Performing binary encoding

# Technical requirements

In this chapter, we will use the pandas, NumPy, and Matplotlib Python libraries, as well as scikit-learn and Feature-engine. For guidelines on how to obtain these libraries, visit the *Technical requirements* section of *Chapter 1*, *Imputing Missing Data*.

We will also use the open-source `Category Encoders` Python library, which can be installed using `pip`:
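The package is published on PyPI as `category_encoders`, so one way to install it is as follows:

```
pip install category_encoders
```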

To learn more about `Category Encoders`, visit the following link: https://contrib.scikit-learn.org/category_encoders/.

We will also use the Credit Approval dataset, which is available in the UCI Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/credit+approval.

To prepare the dataset, follow these steps:

- Visit http://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/ and click on `crx.data` to download the data:

Figure 2.1 – The index directory for the Credit Approval dataset

- Save `crx.data` to the folder where you will run the following commands.

After downloading the data, open up a Jupyter Notebook and run the following commands (a consolidated sketch of all these steps follows this list):

- Import the required libraries:
- Load the data:
- Create a list containing the variable names:
- Add the variable names to the DataFrame:
- Replace the question marks in the dataset with NumPy NaN values:
- Cast some numerical variables as `float` data types:
- Encode the target variable as binary:
- Rename the target variable:
- Make lists that contain categorical and numerical variables:
- Fill in the missing data:
- Save the prepared data:
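The exact code lives in the notebook linked below; what follows is a minimal sketch of these steps. The column names (`A1` to `A16`), the `+`/`-` target mapping, the simple mean/string imputation, and the output filename `credit_approval_uci.csv` are assumptions based on the dataset’s documentation and the recipes of *Chapter 1*:

```python
import numpy as np
import pandas as pd

# Load the data; crx.data ships without a header row
data = pd.read_csv("crx.data", header=None)

# Create a list containing the variable names and add it to the DataFrame
varnames = [f"A{i}" for i in range(1, 17)]
data.columns = varnames

# Replace the question marks in the dataset with NumPy NaN values
data = data.replace("?", np.nan)

# Cast some numerical variables as float data types
data["A2"] = data["A2"].astype("float")
data["A14"] = data["A14"].astype("float")

# Encode the target variable as binary and rename it
data["A16"] = data["A16"].map({"+": 1, "-": 0})
data = data.rename(columns={"A16": "target"})

# Make lists that contain categorical and numerical variables
cat_cols = [c for c in data.columns if data[c].dtype == "O"]
num_cols = [c for c in data.columns if c not in cat_cols + ["target"]]

# Fill in the missing data with simple imputation values
data[num_cols] = data[num_cols].fillna(data[num_cols].mean())
data[cat_cols] = data[cat_cols].fillna("Missing")

# Save the prepared data
data.to_csv("credit_approval_uci.csv", index=False)
```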

You can find a Jupyter Notebook that contains these commands in this book’s GitHub repository at https://github.com/PacktPublishing/Python-Feature-Engineering-Cookbook-Second-Edition/blob/main/ch02-categorical-encoding/donwload-prepare-store-credit-approval-dataset.ipynb.

Note

Some libraries require that you have **already imputed missing data**, for which you can use any of the recipes from *Chapter 1*, *Imputing Missing Data*.

# Creating binary variables through one-hot encoding

In one-hot encoding, we represent a categorical variable as a group of binary variables, where each binary variable represents one category. The binary variable takes a value of 1 if the category is present in an observation, or 0 otherwise.

The following table shows the one-hot encoded representation of the **Gender** variable with the categories of **Male** and **Female**:

Figure 2.2 – One-hot encoded representation of the Gender variable

As shown in *Figure 2.2*, from the **Gender** variable, we can derive the binary variable of **Female**, which shows the value of **1** for females, or the binary variable of **Male**, which takes the value of **1** for the males in the dataset.

For the categorical variable of **Color** with the values of **red**, **blue**, and **green**, we can create three variables called red, blue, and green. These variables will take the value of **1** if the observation is red, blue, or green, respectively, or 0 otherwise.

A categorical variable with *k* unique categories can be encoded using *k-1* binary variables. For **Gender**, *k* is 2 as it contains two labels (male and female), so we only need to create one binary variable (*k - 1 = 1*) to capture all of the information. For the **Color** variable, which has three categories (*k=3*; red, blue, and green), we need to create two (*k - 1 = 2*) binary variables to capture all the information so that the following occurs:

- If the observation is red, it will be captured by the **red** variable (red = 1, blue = 0).
- If the observation is blue, it will be captured by the **blue** variable (red = 0, blue = 1).
- If the observation is green, it will be captured by the combination of **red** and **blue** (red = 0, blue = 0).

Encoding into *k-1* binary variables is well-suited for linear models. There are a few occasions in which we may prefer to encode the categorical variables with *k* binary variables:

- When training decision trees since they do not evaluate the entire feature space at the same time
- When selecting features recursively
- When determining the importance of each category within a variable

In this recipe, we will compare the one-hot encoding implementations of pandas, scikit-learn, Feature-engine, and Category Encoders.

## How to do it...

First, let’s make a few imports and get the data ready:

- Import `pandas` and the `train_test_split` function from scikit-learn:
- Let’s load the Credit Approval dataset:
- Let’s separate the data into train and test sets:
- Let’s inspect the unique categories of the `A4` variable:
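A sketch of these four steps, assuming the file saved earlier (`credit_approval_uci.csv`); the 70/30 split and the random seed are illustrative choices:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the prepared Credit Approval dataset
data = pd.read_csv("credit_approval_uci.csv")

# Separate the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    data.drop("target", axis=1),
    data["target"],
    test_size=0.3,
    random_state=0,
)

# Inspect the unique categories of the A4 variable
X_train["A4"].unique()
```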

We can see the unique values of `A4` in the following output:

- Let’s encode `A4` into *k-1* binary variables using pandas and then inspect the first five rows of the resulting DataFrame:
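A sketch of this step; with `drop_first=True`, `get_dummies()` returns *k-1* dummies:

```python
# One-hot encode A4 into k-1 binary variables
dummies = pd.get_dummies(X_train["A4"], drop_first=True)
dummies.head()
```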

Note

With pandas `get_dummies()`, we can either ignore or encode missing data through the `dummy_na` parameter. By setting `dummy_na=True`, missing data will be encoded in a new binary variable. To encode the variable into *k* dummies, use `drop_first=False` instead.

Here, we can see the output of *step 5*, where each label is now a binary variable:

- Now, let’s encode all of the categorical variables into *k-1* binaries, capturing the result in a new DataFrame:
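One way to do this, assuming a helper list, `cat_vars`, with the names of the categorical columns (the name and the dtype-based selection are illustrative):

```python
# Identify the categorical columns (those of object type)
cat_vars = X_train.select_dtypes(include="O").columns.tolist()

# Encode all categorical variables into k-1 dummies each
X_train_enc = pd.get_dummies(X_train[cat_vars], drop_first=True)
X_test_enc = pd.get_dummies(X_test[cat_vars], drop_first=True)

X_train_enc.head()
```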

Note

The `get_dummies()` method from pandas will automatically encode all variables of the object or categorical type. We can encode a subset of the variables by passing the variable names in a list to the `columns` parameter.

Note

When encoding more than one variable, `get_dummies()` captures the variable name – say, `A1` – and places an underscore followed by the category name to identify the resulting binary variables.

We can see the binary variables in the following output:

Figure 2.3 – Transformed DataFrame showing the dummy variables on the right

Note

The `get_dummies()` method will create one binary variable per **seen** category. Hence, if there are more categories in the train set than in the test set, `get_dummies()` will return more columns in the transformed train set than in the transformed test set, and vice versa. To avoid this, it is better to carry out one-hot encoding with scikit-learn or Feature-engine, as we will discuss later in this recipe.

- Let’s concatenate the binary variables to the original dataset:
- Now, let’s drop the categorical variables from the data:
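A sketch of these two steps, reusing the `cat_vars` list from the earlier sketch:

```python
# Concatenate the binary variables to the original dataset
X_train_enc = pd.concat([X_train, X_train_enc], axis=1)
X_test_enc = pd.concat([X_test, X_test_enc], axis=1)

# Drop the original categorical variables
X_train_enc = X_train_enc.drop(columns=cat_vars)
X_test_enc = X_test_enc.drop(columns=cat_vars)
```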

And that’s it! Now, we can use our categorical variables to train mathematical models. To inspect the result, use `X_test_enc.head()`.

Now, let’s do one-hot encoding using scikit-learn.

- Import the encoder from scikit-learn:
- Let’s set up the transformer. By setting `drop` to `"first"`, we encode into *k-1* binary variables, and by setting `sparse` to `False`, the transformer will return a NumPy array (instead of a sparse matrix):
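A sketch of the import and setup. Note that scikit-learn 1.2 renamed the `sparse` parameter to `sparse_output`; use whichever your version supports:

```python
from sklearn.preprocessing import OneHotEncoder

# Encode into k-1 dummies and return a dense NumPy array
encoder = OneHotEncoder(drop="first", sparse=False)
```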

Tip

We can encode variables into *k* dummies by setting the `drop` parameter to `None`. We can also encode into *k-1* if variables contain two categories and into *k* if variables contain more than two categories by setting the `drop` parameter to `"if_binary"`. The latter is useful because encoding binary variables into *k* dummies is redundant.

- First, let’s create a list containing the variable names:
- Let’s fit the encoder to a slice of the train set with the categorical variables:
- Let’s inspect the categories for which dummy variables will be created:
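A sketch of these three steps; the `vars_categorical` name is illustrative:

```python
# Create a list containing the categorical variable names
vars_categorical = X_train.select_dtypes(include="O").columns.tolist()

# Fit the encoder to a slice of the train set with the categorical variables
encoder.fit(X_train[vars_categorical])

# Inspect the categories for which dummy variables will be created
encoder.categories_
```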

We can see the result of the preceding command here:

Figure 2.4 – Arrays with the categories that will be encoded into binary variables, one array per variable

Note

Scikit-learn’s `OneHotEncoder()` will only encode the categories learned from the train set. If there are new categories in the test set, we can instruct the encoder to ignore them or to return an error by setting the `handle_unknown` parameter to `'ignore'` or `'error'`, respectively.

- Let’s create the NumPy arrays with the binary variables for the train and test sets:
- Let’s extract the names of the binary variables:
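A sketch of these two steps; `get_feature_names_out()` is available from scikit-learn 1.0, with earlier versions using `get_feature_names()` instead:

```python
# Create the NumPy arrays with the binary variables
X_train_ohe = encoder.transform(X_train[vars_categorical])
X_test_ohe = encoder.transform(X_test[vars_categorical])

# Extract the names of the binary variables
feature_names = encoder.get_feature_names_out()
```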

We can see the binary variable names that were returned in the following output:

Figure 2.5 – Arrays with the names of the one-hot encoded variables

- Let’s convert the array into a pandas DataFrame and add the variable names:
- To concatenate the one-hot encoded data to the original dataset, we need to make their indexes match:
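A sketch of these two steps:

```python
# Convert the arrays into DataFrames and add the variable names
X_train_enc = pd.DataFrame(X_train_ohe, columns=feature_names)
X_test_enc = pd.DataFrame(X_test_ohe, columns=feature_names)

# Make the indexes match those of the original data for concatenation
X_train_enc.index = X_train.index
X_test_enc.index = X_test.index
```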

Now, we are ready to concatenate the one-hot encoded variables to the original data and then remove the categorical variables using *steps 8* and *9* from this recipe.

Next, let’s perform one-hot encoding with Feature-engine.

- Let’s import the encoder from Feature-engine:
- Next, let’s set up the encoder so that it returns *k-1* binary variables:
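A sketch of the import and setup; in Feature-engine, `drop_last=True` makes the encoder return *k-1* dummies:

```python
from feature_engine.encoding import OneHotEncoder

# Return k-1 binary variables per categorical variable
ohe_enc = OneHotEncoder(drop_last=True)
```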

Tip

Feature-engine automatically finds the categorical variables. To encode only a subset of the variables, we can pass the variable names in a list: `OneHotEncoder(variables=["A1", "A4"])`. To encode numerical variables, we can set the `ignore_format` parameter to `True` or cast the variables as the object type. This is useful because sometimes, numerical variables are used to represent categories, such as postcodes.

- Let’s fit the encoder to the train set so that it learns the categories and variables to encode:
- Let’s explore the variables that will be encoded:
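A sketch of these two steps; `variables_` is the fitted attribute where Feature-engine stores the variables it will encode:

```python
# Learn the categories and variables to encode from the train set
ohe_enc.fit(X_train)

# Explore the variables that will be encoded
ohe_enc.variables_
```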

The transformer found and stored the variables of the object or categorical type, as shown in the following output:

Note

Feature-engine’s `OneHotEncoder` has the option to encode most variables into *k* dummies, while only returning *k-1* dummies for binary variables. For this behavior, set the `drop_last_binary` parameter to `True`.

The following dictionary contains the categories that will be encoded in each variable:
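This dictionary can be displayed by printing the `encoder_dict_` attribute that Feature-engine’s encoders expose after fitting:

```python
# Dictionary mapping each variable to the categories that will become dummies
print(ohe_enc.encoder_dict_)
```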

- Let’s encode the categorical variables in train and test sets:
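A sketch of the transformation step:

```python
# Add the dummies and drop the original categorical variables
X_train_enc = ohe_enc.transform(X_train)
X_test_enc = ohe_enc.transform(X_test)
```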

Tip

Feature-engine’s `OneHotEncoder()` returns a copy of the original dataset plus the binary variables and without the original categorical variables. Thus, this data is ready to train machine learning models.

If we execute `X_train_enc.head()`, we will see the following DataFrame:

Figure 2.6 – Transformed DataFrame with the one-hot encoded variables on the right

Note how the **A4** categorical variable was replaced with **A4_u**, **A4_y**, and so on.

Note

We can get the names of all the variables in the transformed dataset by executing `ohe_enc.get_feature_names_out()`.

## How it works...

In this recipe, we performed a one-hot encoding of categorical variables using pandas, scikit-learn, Feature-engine, and Category Encoders.

With `get_dummies()` from pandas, we automatically created binary variables for each of the categories in the categorical variables.

The `OneHotEncoder` transformers from the scikit-learn and Feature-engine libraries share the `fit()` and `transform()` methods. With `fit()`, the encoders learned the categories for which the dummy variables should be created. With `transform()`, they returned the binary variables, either in a NumPy array or added to the original DataFrame.

Tip

One-hot encoding expands the feature space. From nine original categorical variables, we created 36 binary ones. If our datasets contain many categorical variables or variables with high cardinality, the feature space can grow dramatically, which increases the computational cost of training machine learning models or obtaining their predictions and may also deteriorate their performance.

## There’s more...

We can also perform one-hot encoding using `OneHotEncoder` from the Category Encoders library.
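A minimal sketch with Category Encoders; `use_cat_names=True` appends the category names to the variable names, and the encoder returns a pandas DataFrame:

```python
from category_encoders import OneHotEncoder

# Find and encode the categorical variables automatically
ohe_enc = OneHotEncoder(use_cat_names=True)

X_train_enc = ohe_enc.fit_transform(X_train)
X_test_enc = ohe_enc.transform(X_test)
```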

`OneHotEncoder()` from Feature-engine and Category Encoders can automatically identify and encode categorical variables – that is, those of the object or categorical type. So does pandas `get_dummies()`. Scikit-learn’s `OneHotEncoder()`, on the other hand, will encode all variables in the dataset.

With pandas, Feature-engine, and Category Encoders, we can encode only a subset of the variables by indicating their names in a list. With scikit-learn, we need to use an additional class, `ColumnTransformer()`, to slice the data before the transformation.

With Feature-engine and Category Encoders, the dummy variables are added to the original dataset and the categorical variables are removed after the encoding. With scikit-learn and pandas, we need to manually perform these procedures.

Finally, using `OneHotEncoder()` from scikit-learn, Feature-engine, and Category Encoders, we can perform the encoding step within a scikit-learn pipeline, which is more convenient if we have various feature engineering steps or want to put the pipelines into production. pandas `get_dummies()`, in turn, is well suited for data analysis and visualization.