Ensemble methods: from decision trees to deep learning
Introduction
Hello! In this article I will try to give you an intuitive presentation of the most popular ensemble methods used in machine learning, and if everything goes well I will also venture into deep learning to see how the same concepts can be applied to neural networks.
At its core, the concept is rather simple: ensemble learning algorithms combine the predictions of multiple models, trained on the whole dataset or on different samples of it, using various techniques to aggregate the results.
By using ensemble methods we obtain a more general and robust solution than with a single estimator. The idea is to combine multiple weak learners that individually tend to have higher bias or variance, but that together produce quite good predictions. Decision trees are among the most widely used weak learners for building ensemble predictors, mostly because they are easy and fast to train, which helps with the “horizontal” kind of scaling this approach relies on.
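To make the aggregation idea concrete, here is a minimal sketch of majority voting. The "weak learners" below are hypothetical hand-written rules (not real trained trees) on a toy dataset where the true label is 1 when x > 3; each learner is wrong on one different point, so their combined vote is right everywhere:

```python
from collections import Counter

# Toy dataset: label is 1 when x > 3 (illustrative, not from the article).
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]

def learner_a(x):  # wrong only at x=1
    return 1 if x > 3 or x == 1 else 0

def learner_b(x):  # wrong only at x=5
    return 1 if x > 3 and x != 5 else 0

def learner_c(x):  # wrong only at x=2
    return 1 if x > 3 or x == 2 else 0

learners = [learner_a, learner_b, learner_c]

def majority_vote(x, learners):
    """Aggregate individual predictions by taking the most common class."""
    votes = [f(x) for f in learners]
    return Counter(votes).most_common(1)[0][0]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Each weak learner alone scores 5/6; the ensemble scores 6/6.
for f in learners:
    assert accuracy(f, data) < 1.0
assert accuracy(lambda x: majority_vote(x, learners), data) == 1.0
```

The key ingredient is that the learners make *different* mistakes; if all three were wrong on the same points, voting would not help at all.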
Standard Ensemble Methods
We have three main categories of ensemble learning algorithms:
- Bagging
- Boosting
- Stacking
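Before looking at each category in detail, here is a tiny numeric sketch of how the three families differ in the way they combine models. All the numbers and weights below are made up for illustration, not outputs of real trained models:

```python
# Suppose the true regression target for one example is 10.0, and we have
# three base models (values below are hypothetical placeholders).

# Bagging: models are trained in parallel on bootstrap samples,
# and their predictions are simply averaged.
bagged_preds = [9.0, 10.5, 10.8]
bagging_pred = sum(bagged_preds) / len(bagged_preds)

# Boosting: models are trained sequentially, each one fitting the
# error left by the previous ones, so predictions are *added*.
boosting_pred = 0.0
for residual_fit in [7.0, 2.0, 0.8]:  # each step predicts the residual
    boosting_pred += residual_fit

# Stacking: a meta-model learns how to weight the base predictions
# (the weights here stand in for what the meta-model would learn).
base_preds = [9.0, 10.5, 10.8]
meta_weights = [0.2, 0.3, 0.5]
stacking_pred = sum(w * p for w, p in zip(meta_weights, base_preds))
```

So the distinction is mainly *how* the pieces are trained and merged: in parallel with averaging (bagging), sequentially with addition (boosting), or through a second-level learner (stacking).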