Bootstrapping vs. bagging
These terms often appear in the same texts and tutorials, and some people use them as synonyms.
They are not the same thing. They are not even similar. Sure, we see them used in the same context, but that is because they describe two different steps of a single machine learning process.
Bootstrapping is a method of sample selection. The formal definition describes it as “random sampling with replacement.” Never mind the definition for a while; let’s build some intuition around the term first.
In short, it allows us to pick the same observation more than once while sampling (for example, when selecting observations to be used for training). It may be useful when we have a small dataset but the algorithm requires a lot of data. Don’t get too excited, though. It won’t magically let you use deep learning successfully when you have only 10 examples in the training set.
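Here is a minimal sketch of bootstrapping using NumPy (the toy array below just stands in for a real dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.arange(10)  # a toy "dataset" of 10 observations

# Bootstrapping: draw a sample of the same size as the original
# dataset, allowing the same observation to be picked more than once.
bootstrap_sample = rng.choice(data, size=len(data), replace=True)
print(bootstrap_sample)                  # duplicates are allowed
print(np.unique(bootstrap_sample).size)  # some observations never get picked
```

Because of the replacement, a bootstrap sample of the same size as the original dataset contains, on average, only about 63% of the unique original observations; the remaining slots are filled by duplicates.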
Now, we can move on to “bagging.” Bagging is a technique of fitting multiple classifiers and combining them into one ensemble model.
Each of the classifiers gets a different training set, and that is why the words “bootstrapping” and “bagging” are often used together: the dataset for every classifier may be generated using bootstrapping.
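To make that concrete, here is a hand-rolled sketch of bagging (the make_classification dataset and the decision trees are just illustrative choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)

# Fit every classifier on its own bootstrap sample of the rows.
classifiers = []
for _ in range(25):
    idx = rng.choice(len(X), size=len(X), replace=True)
    classifiers.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Combine the classifiers into one ensemble by majority vote.
predictions = np.stack([clf.predict(X) for clf in classifiers])
majority_vote = (predictions.mean(axis=0) > 0.5).astype(int)
print((majority_vote == y).mean())  # training accuracy of the ensemble
```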
Bootstrapping and bagging
In scikit-learn the problem is nicely encapsulated (and not so nicely generalized). We have the BaggingClassifier, which in its default configuration uses bootstrapping to choose samples for the training set of every classifier, but it can also be configured to choose a random subset of features or to use random sampling without replacement.
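Here is a short sketch of those three configurations (the make_classification dataset is just a stand-in; the parameter names come from scikit-learn’s BaggingClassifier):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=42)

# Default: every classifier is trained on a bootstrap sample (bootstrap=True).
bagging = BaggingClassifier(n_estimators=50, random_state=42)

# Random sampling *without* replacement instead.
no_replacement = BaggingClassifier(n_estimators=50, bootstrap=False,
                                   max_samples=0.8, random_state=42)

# Additionally pick a random subset of features for every classifier.
feature_subsets = BaggingClassifier(n_estimators=50, bootstrap_features=True,
                                    max_features=0.5, random_state=42)

for name, model in [("bagging", bagging), ("no replacement", no_replacement),
                    ("feature subsets", feature_subsets)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```

Note that when sampling without replacement, max_samples has to be below 1.0; otherwise every classifier would simply receive the full training set and the ensemble would lose its diversity.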