Building Data Engineering Pipelines in Python

Language:
Level: Beginner
Access: Paid
Certificate: Paid

Learn how to build and test data engineering pipelines in Python using PySpark and Apache Airflow.

Course Description

Build a Data Pipeline in Python

Learn how to use Python to build data engineering pipelines with this 4-hour course.

In any data-driven company, you will undoubtedly cross paths with data engineers. Among other things, they facilitate work by making data readily available to everyone within the organization and may also bring machine learning models into production.

One way to speed up this process is to build an understanding of what it means to bring processes into production and what characterizes high-quality code. In this course, we’ll look at various data pipelines that data engineers build, and at how some of the tools they use can help you get your models into production or run repetitive tasks consistently and efficiently.

Use PySpark to Create a Data Transformation Pipeline

In this course, we illustrate common elements of data engineering pipelines. In Chapter 1, you will learn what a data platform is and how to ingest data.

Chapter 2 will go one step further with cleaning and transforming data, using PySpark to create a data transformation pipeline.

In Chapter 3, you will learn how to safely deploy code, looking at the different forms of testing. Finally, in Chapter 4, you will schedule complex dependencies between applications, using the basics of Apache Airflow to trigger the various components of an ETL pipeline on a certain time schedule and execute tasks in a specific order.

Learn How to Manage and Orchestrate Workflows

By the end of this course, you’ll understand how to build data pipelines in Python for data engineering. You’ll also know how to orchestrate and manage your workflows with Apache Airflow DAG schedules, and how to test your pipelines and Airflow deployment automatically.

What You’ll Learn

Ingesting Data

After completing this chapter, you will be able to explain what a data platform is, how data ends up in it, and how data engineers structure its foundations. You will be able to ingest data from a RESTful API into the data platform’s data lake using a self-written ingestion pipeline built with Singer’s taps and targets.
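
To give a flavour of the kind of ingestion code this involves, here is a minimal sketch of a Singer-style tap. It assumes the `singer` and `requests` packages are installed; the endpoint URL, stream name, and schema are purely hypothetical.

```python
# A minimal Singer-style tap: pull records from a (hypothetical) REST endpoint
# and emit them as SCHEMA and RECORD messages on stdout, ready to be piped
# into any Singer target.
import requests
import singer

API_URL = "https://api.example.com/shops"  # hypothetical endpoint

schema = {
    "properties": {
        "shop_id": {"type": "integer"},
        "name": {"type": "string"},
    }
}

def main():
    # Announce the structure of the "shops" stream to the downstream target.
    singer.write_schema(stream_name="shops", schema=schema, key_properties=["shop_id"])

    # Fetch the data from the REST API and emit each row as a RECORD message.
    response = requests.get(API_URL)
    response.raise_for_status()
    singer.write_records(stream_name="shops", records=response.json())

if __name__ == "__main__":
    main()
```

Because a tap writes Singer messages to stdout, its output can be piped straight into a target, for example `python tap_shops.py | target-csv`, to land the records in the data lake.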

Creating a data transformation pipeline with PySpark

You will learn how to process data in the data lake in a structured way using PySpark. Of course, you must first understand when PySpark is the right choice for the job.
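
As a taste of such a transformation step, here is a minimal sketch; it assumes a working Spark installation, and the data lake paths and column names are illustrative only.

```python
# A small PySpark transformation step: read raw data from the data lake,
# clean and reshape it, and write the result back in a columnar format.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("clean_ratings").getOrCreate()

# Read the raw landing-zone data (CSV with a header row).
ratings = spark.read.csv("s3a://data-lake/landing/ratings.csv", header=True)

# Select and clean the columns we care about.
clean = (
    ratings
    .select(
        col("user_id").cast("integer"),
        col("rating").cast("float"),
        to_date(col("rated_at")).alias("rating_date"),
    )
    .dropna()                     # drop rows with missing values
    .filter(col("rating") >= 0)   # keep only valid ratings
)

# Persist the cleaned data to the "clean" zone as Parquet.
clean.write.mode("overwrite").parquet("s3a://data-lake/clean/ratings")
```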

Testing your data pipeline

Stating “it works on my machine” is not a guarantee that it will work reliably elsewhere or in the future. Requirements for your project will change. In this chapter, we explore different forms of testing and learn how to write unit tests for our PySpark data transformation pipeline, so that we build robust and reusable components.
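
A minimal sketch of what such a unit test could look like with pytest, assuming the cleaning logic has been factored into a small function; all names here are illustrative.

```python
# A pytest-style unit test for a PySpark transformation. The transformation is
# a plain function, so it can be tested on a small in-memory DataFrame without
# touching the data lake.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

def keep_valid_ratings(df):
    """The unit under test: drop nulls and out-of-range ratings."""
    return df.dropna().filter(col("rating") >= 0)

@pytest.fixture(scope="session")
def spark():
    # A local SparkSession is enough for unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_keep_valid_ratings_filters_bad_rows(spark):
    rows = [(1, 4.0), (2, -1.0), (3, None)]
    df = spark.createDataFrame(rows, schema=["user_id", "rating"])

    result = keep_valid_ratings(df)

    # Only the first row survives the cleaning step.
    assert result.count() == 1
    assert result.first().user_id == 1
```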

Managing and orchestrating a workflow

We will explore the basics of Apache Airflow, a popular piece of software that allows you to trigger the various components of an ETL pipeline on a certain time schedule and execute tasks in a specific order. Here too, we illustrate how a deployment of Apache Airflow can be tested automatically.
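
For a flavour of what that looks like, here is a minimal DAG sketch in the Airflow 2.x style; the task commands, file names, and schedule are placeholders.

```python
# A minimal Airflow DAG that runs the three steps of an ETL pipeline once per
# day, in a fixed order. Task commands are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # trigger the pipeline once a day
    catchup=False,
) as dag:
    ingest = BashOperator(task_id="ingest", bash_command="python tap_shops.py | target-csv")
    transform = BashOperator(task_id="transform", bash_command="spark-submit clean_ratings.py")
    load = BashOperator(task_id="load", bash_command="python load_to_warehouse.py")

    # Execute tasks in a specific order: ingest -> transform -> load.
    ingest >> transform >> load
```

The `>>` operator encodes the dependencies between tasks, so Airflow will only start `transform` once `ingest` has finished, and `load` once `transform` has finished.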
