Machine Learning with PySpark

Level: Beginner

Access: Paid

Certificate: Paid

Learn how to make predictions from data with Apache Spark, using decision trees, logistic regression, linear regression, ensembles, and pipelines.


Course Description

Learn to Use Apache Spark for Machine Learning

Spark is a powerful, general-purpose tool for working with Big Data. It transparently handles the distribution of compute tasks across a cluster, so operations are fast and you can focus on the analysis rather than on technical details. In this course you’ll learn how to get data into Spark and then work through fundamental Spark machine learning techniques: linear regression, logistic regression and other classifiers, and building pipelines.

Build and Test Decision Trees

Building your own decision trees is a great way to start exploring machine learning models. You’ll use an algorithm called recursive partitioning: find the predictor in your data whose split divides the records into the two classes most informatively, then repeat the process at each resulting node. You can then use your decision tree to make predictions on new data.
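The core of one partitioning step can be sketched in plain Python (the data and the misclassification criterion here are illustrative, not the course’s own code; Spark’s implementation uses impurity measures such as Gini):

```python
def misclassified(side):
    """Errors if this partition predicts its majority class."""
    if not side:
        return 0
    majority = max(set(side), key=side.count)
    return sum(1 for label in side if label != majority)

def best_split(xs, labels):
    """Scan candidate thresholds on one predictor and keep the split
    that separates the two classes most cleanly."""
    best_t, best_err = None, len(labels) + 1
    for t in sorted(set(xs))[:-1]:          # candidate thresholds
        left  = [l for x, l in zip(xs, labels) if x <= t]
        right = [l for x, l in zip(xs, labels) if x > t]
        err = misclassified(left) + misclassified(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Small made-up data set: class 0 clusters low, class 1 clusters high
xs     = [1.0, 1.5, 2.0, 7.0, 8.0, 9.0]
labels = [0,   0,   0,   1,   1,   1]
print(best_split(xs, labels))  # → (2.0, 0): splitting at x <= 2.0 is perfect
```

Recursive partitioning then applies the same search again inside each half until the nodes are pure enough or too small to split.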

Master Logistic and Linear Regression in PySpark

Logistic and linear regression are essential machine learning techniques that are supported by PySpark. You’ll learn to build and evaluate logistic regression models, before moving on to linear regression models and techniques for narrowing your predictors down to only the most relevant ones.

By the end of the course, you’ll feel confident in applying your new-found machine learning knowledge, thanks to hands-on tasks and practice data sets found throughout the course.

What You’ll Learn

Introduction

Spark is a framework for working with Big Data. In this chapter you’ll cover some background about Spark and Machine Learning. You’ll then find out how to connect to Spark using Python and load CSV data.

Regression

Next you’ll learn to create Linear Regression models. You’ll also find out how to augment your data by engineering new predictors as well as a robust approach to selecting only the most relevant predictors.

Classification

Now that you are familiar with getting data into Spark, you’ll move on to building two types of classification model: Decision Trees and Logistic Regression. You’ll also find out about a few approaches to data preparation.

Ensembles & Pipelines

Finally you’ll learn how to make your models more efficient. You’ll find out how to use pipelines to make your code clearer and easier to maintain, then use cross-validation to test your models more rigorously and select good model parameters. You’ll finish by trying out two types of ensemble model.
