Machine Learning with Tree-Based Models in Python

Level: Beginner

Access: Paid

Certificate: Paid

In this course, you’ll learn how to use tree-based models and ensembles for regression and classification using scikit-learn.

Course Description

Decision trees are supervised learning models used for problems involving classification and regression. Tree models offer high flexibility that comes at a price: on one hand, trees can capture complex non-linear relationships; on the other hand, they are prone to memorizing the noise present in a dataset. By aggregating the predictions of trees that are trained differently, ensemble methods take advantage of the flexibility of trees while reducing their tendency to memorize noise. Ensemble methods are used across a variety of fields and have a proven track record of winning many machine learning competitions.

What You’ll Learn

Classification and Regression Trees

Classification and Regression Trees (CART) are a set of supervised learning models used for problems involving classification and regression. In this chapter, you’ll be introduced to the CART algorithm.
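A minimal sketch of the kind of CART workflow this chapter covers, assuming scikit-learn's `DecisionTreeClassifier` and the bundled breast-cancer dataset (the course does not specify a dataset; the hyperparameter values here are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small binary-classification dataset for illustration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

# max_depth caps tree growth, limiting how much noise the tree can memorize
dt = DecisionTreeClassifier(max_depth=4, random_state=1)
dt.fit(X_train, y_train)

acc = accuracy_score(y_test, dt.predict(X_test))
```

The same API shape applies to regression via `DecisionTreeRegressor`, swapping accuracy for a regression metric such as mean squared error.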

Bagging and Random Forests

Bagging is an ensemble method involving training the same algorithm many times using different subsets sampled from the training data. In this chapter, you’ll understand how bagging can be used to create a tree ensemble. You’ll also learn how the random forests algorithm can lead to further ensemble diversity through randomization at the level of each split in the trees forming the ensemble.

Model Tuning

The hyperparameters of a machine learning model are parameters that are not learned from data; they must be set before the model is fit to the training set. In this chapter, you’ll learn how to tune the hyperparameters of a tree-based model using grid search cross-validation.
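As a sketch of grid search cross-validation with scikit-learn's `GridSearchCV` (the hyperparameter grid below is an illustrative assumption, not the course's):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate hyperparameter values to try exhaustively
params = {"max_depth": [2, 3, 4],
          "min_samples_leaf": [1, 5, 10]}

# Every combination is scored with 5-fold cross-validation
grid = GridSearchCV(DecisionTreeClassifier(random_state=1), params, cv=5)
grid.fit(X, y)

best_params = grid.best_params_
best_score = grid.best_score_
```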

The Bias-Variance Tradeoff

The bias-variance tradeoff is one of the fundamental concepts in supervised machine learning. In this chapter, you’ll understand how to diagnose the problems of overfitting and underfitting. You’ll also be introduced to the concept of ensembling, in which the predictions of several models are aggregated to produce predictions that are more robust.
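A minimal sketch of the aggregation idea, assuming scikit-learn's `VotingClassifier` with three deliberately different learners (the choice of models and dataset is illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

# Three models with different biases; the majority vote of their
# predictions is often more robust than any single model's output
vc = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=5000)),
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(max_depth=4, random_state=1)),
])
vc.fit(X_train, y_train)

acc = accuracy_score(y_test, vc.predict(X_test))
```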

Boosting

Boosting refers to an ensemble method in which several models are trained sequentially with each model learning from the errors of its predecessors. In this chapter, you’ll be introduced to the two boosting methods of AdaBoost and Gradient Boosting.
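Both boosting variants can be sketched with scikit-learn's `AdaBoostClassifier` and `GradientBoostingClassifier` (dataset and hyperparameters are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

# AdaBoost: each new weak learner upweights the examples its
# predecessors misclassified
ada = AdaBoostClassifier(n_estimators=100, random_state=1)

# Gradient boosting: each new tree is fit to the residual errors
# of the ensemble built so far
gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                random_state=1)

ada_acc = accuracy_score(y_test, ada.fit(X_train, y_train).predict(X_test))
gb_acc = accuracy_score(y_test, gb.fit(X_train, y_train).predict(X_test))
```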

