Automated Data Processing Pipeline on Jenkins Using PySpark in a Dockerized Jupyter Environment

This session describes a data processing pipeline that extracts raw data, applies ETL processes to prepare it for analysis, stores the results in Delta format on Amazon S3, and uses PySpark within a Jupyter environment running in a Docker container managed by Jenkins. The result is scalable, secure, and automated data processing and analysis. The pipeline involves several key components and technologies:

Data Source: The pipeline begins by reading data from a raw data source. This data source could be a set of raw files, a database, or any other data repository.

ETL (Extract, Transform, Load): After extracting the data from the raw source, it goes through an ETL process. In this stage, data is transformed and cleaned to ensure it's in a suitable format for further analysis or usage.
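
As a rough sketch of the extract and transform steps, the PySpark code might look like the following; the input path, column names, and cleaning rules are hypothetical placeholders rather than the exact code covered in the session:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Entry point for the PySpark job (runs inside the Jupyter container).
    spark = SparkSession.builder.appName("raw-data-etl").getOrCreate()

    # Extract: read raw files (hypothetical path and layout).
    raw_df = spark.read.option("header", True).csv("data/raw/events/*.csv")

    # Transform: basic cleaning -- drop duplicates, cast types, filter bad rows.
    processed_df = (
        raw_df.dropDuplicates()
        .withColumn("event_time", F.to_timestamp("event_time"))
        .withColumn("event_date", F.to_date("event_time"))
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("amount").isNotNull())
    )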

Processed Data: The transformed data is then referred to as "processed data." This data has been cleaned, structured, and prepared for various downstream applications and analyses.

Storage Format: The processed data is stored in Delta format. Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads, making it well suited to data warehousing and analytical use cases.

Storage Location: The processed and Delta-formatted data is hosted on Amazon S3, a popular cloud storage service. S3 is highly scalable and provides secure, durable, and cost-effective storage for a wide range of data types.
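
Continuing the sketch above, the load step could write the processed DataFrame as a Delta table on S3. This assumes the Delta Lake libraries (for example, the delta-spark package and its matching io.delta jars) and S3 credentials are already configured on the Spark session, and the bucket and path below are placeholders:

    # Load: write the processed data as a Delta table on S3 (bucket is hypothetical).
    # Requires the Delta Lake libraries to be available on the Spark session.
    (
        processed_df.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")  # partition column produced in the transform step above
        .save("s3a://example-data-lake/processed/events")
    )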

IAM Role Credentials: To access and manage data on S3, IAM (Identity and Access Management) roles are used to provide secure and controlled access to AWS resources. These roles are associated with the necessary permissions to interact with S3 data.
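
As one possible way to wire this up (an assumption about configuration, not necessarily the exact setup shown in the talk), the S3A connector can be pointed at a role-based credentials provider so that no static access keys appear in the notebook; the provider class name depends on the hadoop-aws and AWS SDK versions in the image:

    from pyspark.sql import SparkSession

    # Configure the S3A connector to pick up credentials from the IAM role
    # attached to the host/container, rather than hard-coded access keys.
    # The provider class below comes from the AWS SDK bundled with hadoop-aws.
    spark = (
        SparkSession.builder.appName("s3-iam-role-access")
        .config(
            "spark.hadoop.fs.s3a.aws.credentials.provider",
            "com.amazonaws.auth.InstanceProfileCredentialsProvider",
        )
        .getOrCreate()
    )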

PySpark: Data processing is carried out with PySpark, the Python API for Apache Spark, a powerful distributed data processing framework. PySpark distributes work across a cluster, making it suitable for handling large datasets efficiently.
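
For illustration, downstream analysis of the processed data might read the Delta table back from S3 and run a distributed aggregation; this reuses the hypothetical paths and columns from the sketches above:

    from pyspark.sql import functions as F

    # "spark" is the SparkSession created in the earlier sketches.
    events = spark.read.format("delta").load("s3a://example-data-lake/processed/events")

    # Distributed aggregation: daily totals computed across the cluster.
    daily_totals = (
        events.groupBy("event_date")
        .agg(F.sum("amount").alias("total_amount"))
        .orderBy("event_date")
    )
    daily_totals.show()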

Jupyter Image: PySpark is executed within a Jupyter image, which is a popular choice for interactive data analysis and visualization. Jupyter notebooks provide an interactive environment for writing and executing PySpark code.

Docker in Jenkins: The entire pipeline is containerized using Docker. Docker containers encapsulate the PySpark and Jupyter environment, making it easy to set up and deploy the processing pipeline in a consistent and reproducible manner. Jenkins, a popular automation server, is used to manage and orchestrate the containerized pipeline. Jenkins can schedule and trigger the execution of the pipeline as needed, ensuring it runs automatically and reliably.

Room:
Ballroom H
Time:
Friday, March 15, 2024 - 14:30 to 15:30