The problem of large categorical variables in machine learning

Recently, I was writing an article about dealing with categorical variables using techniques like one-hot encoding or dummy coding. I wondered what the correct approach is when the categorical variable has many unique values. After all, any encoding would create a vast number of new features.

The first approach is not very sophisticated. We can replace individual categories with broader groups of categories. For example, if the feature contains the names of products in a grocery store, we can replace the names with generic product categories such as vegetables, cheese, bread, and so on.
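For illustration, here is a minimal sketch of such a grouping in pandas. The product names and the mapping are made up for this example:

import pandas as pd

# hypothetical product names (made up for this example)
products = pd.Series(['carrot', 'cheddar', 'baguette', 'gouda', 'lettuce'])

# hand-made mapping from individual products to broader groups
product_to_group = {
    'carrot': 'vegetable',
    'lettuce': 'vegetable',
    'cheddar': 'cheese',
    'gouda': 'cheese',
    'baguette': 'bread'
}

# replace every product name with its group; unmapped products become NaN
product_groups = products.map(product_to_group)
print(product_groups)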

Feature hashing

What if there is no hierarchy? What if it is not possible to group categories in any meaningful way? I started looking for a solution, and I found a technique called “feature hashing.”

In short, we define a hashing function that shrinks the space of the categorical variable by mapping many categories to the same hash value. Fortunately, if we use Scikit-learn, we don't need to implement it ourselves, because such a function already exists.
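To illustrate the idea (this is just a toy sketch, not what Scikit-learn does internally), we can hash every category and take the result modulo the number of output columns:

import hashlib

def hash_bucket(category, n_buckets=3):
    # toy hashing function: map a category name to one of n_buckets columns
    digest = hashlib.md5(category.encode('utf-8')).hexdigest()
    return int(digest, 16) % n_buckets

for value in ['value_1', 'value_2', 'value_3', 'value_4', 'value_5']:
    print(value, '->', hash_bucket(value))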

As an input, we must give the FeatureHasher the number of features (the n_features parameter). This value denotes the number of columns in the output: the number of columns the hasher can use to encode the categories. It is not the number of groups we want to get!

from sklearn.feature_extraction import FeatureHasher
import pandas as pd

data = pd.DataFrame([
    ['value_1', 23],
    ['value_2', 13],
    ['value_3', 42],
    ['value_4', 13],
    ['value_2', 46],
    ['value_1', 28],
    ['value_2', 32],
    ['value_3', 87],
    ['value_4', 98],
    ['value_5', 86],
    ['value_3', 45],
    ['value_2', 73],
    ['value_1', 36],
    ['value_3', 93]
], columns=['feature1', 'feature2'])

# n_features is the number of output columns, not the number of groups
feature_hasher = FeatureHasher(n_features=3, input_type='string')

# with input_type='string', every sample must be an iterable of strings,
# so each category value is wrapped in a single-element list
hashed_features = feature_hasher.fit_transform(
    data['feature1'].apply(lambda value: [value])
)

pd.concat([
    pd.DataFrame(hashed_features.toarray()),
    data['feature2']
], axis=1)
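Note that collisions are expected here: with only three output columns, different category values can land in the same column. FeatureHasher also uses a signed hash by default (the alternate_sign parameter), so some entries in the hashed columns can be negative, and colliding values may partially cancel each other out instead of simply adding up.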

There is one problem with the FeatureHasher class in Scikit-learn: I could not get it running inside a ColumnTransformer pipeline because it throws an error.

I have reported the issue. If you want it fixed too, please upvote it ;)

