Batch Data Pipeline with Spark

Data Ingestion and Cleaning

This free project is part of the liveProject series End-to-End Batch Data Pipeline with Spark
Mahdi Karabiben
1 week · 4-6 hours per week · BEGINNER


Imagine you’re a data engineer working at an enterprise. In this liveProject, you’ll set up a Databricks platform, creating clusters and notebooks, interacting with the Databricks File System (DBFS), and leveraging important Databricks features. You’ll also gain first-hand experience with Apache Spark—the world’s most widely used distributed processing framework—on tasks like reading the input data in CSV and JSON format, filtering, and writing the data to the data lake’s curated layer on DBFS.
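To give a concrete feel for the Spark tasks involved, here is a minimal PySpark sketch of that read-filter-write flow. The file paths, column names, and filter conditions below are illustrative assumptions, not the project's actual data.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks, a SparkSession named `spark` is predefined in every
    # notebook; getOrCreate() simply returns it when run there.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical raw-layer paths on DBFS (the project's actual files differ).
    orders = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("dbfs:/FileStore/raw/orders.csv")
    )
    events = spark.read.json("dbfs:/FileStore/raw/events.json")
    events.printSchema()  # inspect the schema Spark inferred from the JSON

    # Example cleaning step: keep only rows with a non-null key and a valid amount.
    cleaned = (
        orders
        .filter(F.col("order_id").isNotNull())
        .filter(F.col("amount") > 0)
    )

    # Write the cleaned data to an assumed curated-layer path, in Parquet format.
    cleaned.write.mode("overwrite").parquet("dbfs:/FileStore/curated/orders")

Parquet is used for the curated write here because it is the common columnar choice on Databricks, though the project may call for a different format.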

project author

Mahdi Karabiben

Mahdi is a senior data engineer at Zendesk. With four years of experience in data engineering, he has worked on multiple large-scale projects within the AdTech and financial sectors. He's a Cloudera-certified Apache Spark developer and works with Big Data technologies on a daily basis, designing and building data pipelines, data lakes, and data services that rely on petabytes of data. Thanks to his degree in software engineering (with a minor in big data), he is comfortable with a wide range of technologies and concepts. He additionally writes for major Medium publications (Towards Data Science, The Startup) and technology websites (TheNextWeb, Software Engineering Daily, freeCodeCamp).

prerequisites

This liveProject is for software engineers and data professionals interested in building big data processing skills, including processing large amounts of data and building cloud-based data lakes. To begin this liveProject you'll need to be familiar with the following:


TOOLS
  • Beginner Python
  • Basics of Jupyter Notebook
TECHNIQUES
  • Basic distributed computing
  • Basic SQL

you will learn

In this liveProject, you’ll learn to set up your workspace on the Databricks platform and push the data into the first two layers of the data lake.


  • Create clusters and notebooks
  • Interact with the Databricks File System (DBFS) (see the sketch after this list)
  • Use Databricks functionalities
  • Read input data in CSV and JSON format using Apache Spark
  • Understand the characteristics and importance of the first two layers of the widely used three-layer data lake design
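
Two of the items above, interacting with DBFS and working with the first two lake layers, lend themselves to a quick illustration. The following sketch uses the dbutils.fs utilities built into Databricks notebooks; the directory layout is an assumption for illustration, not the project's prescribed structure.

    # `dbutils` is available in Databricks notebooks without any import.
    # Create assumed directories for the first two data lake layers:
    # raw (data exactly as ingested) and curated (cleaned, query-ready data).
    dbutils.fs.mkdirs("dbfs:/FileStore/lake/raw")
    dbutils.fs.mkdirs("dbfs:/FileStore/lake/curated")

    # List the contents of the raw layer to verify that files landed correctly.
    for file_info in dbutils.fs.ls("dbfs:/FileStore/lake/raw"):
        print(file_info.path, file_info.size)

In the three-layer design, data lands in the raw layer exactly as ingested, and only cleaned, validated data is written onward to the curated layer, so downstream consumers never have to re-apply the cleaning logic.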

features

Self-paced
You choose the schedule and decide how much time to invest as you build your project.
Project roadmap
Each project is divided into several achievable steps.
Get help
Within the liveProject platform, get help from other participants.
Compare with others
For each step, compare your deliverable to the solutions by the author and other participants.