How to add dependencies as jar files or Python scripts to PySpark

This article is a part of my "100 data engineering tutorials in 100 days" challenge. (89/100)

When we want to use external dependencies in PySpark code, we have two options: we can pass them either as jar files or as Python scripts.

In this article, I will show how to do that when running a PySpark job using AWS EMR. The jar and Python files will be stored on S3 in a location accessible from the EMR cluster (remember to set the permissions).

First, we have to add the --jars and --py-files parameters to the spark-submit command while starting a new PySpark job:

spark-submit --deploy-mode cluster \
    --jars s3://some_bucket/java_code.jar \
    --py-files s3://some_bucket/python_code.py \
    s3://some_bucket/pyspark_job.py 
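
As a side note, Python dependencies can also be attached at runtime from inside the job with SparkContext.addPyFile, which accepts .py, .zip, and .egg paths, so multi-module dependencies can be zipped and shipped the same way. The snippet below is a minimal sketch reusing the placeholder S3 path from the command above; jars are best left to the --jars parameter (or the spark.jars configuration) because the classpath is set up when the JVM starts.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark_job").getOrCreate()

# Download the module to the driver and executors and put it on the PYTHONPATH.
# addPyFile also accepts .zip and .egg archives for multi-module dependencies.
spark.sparkContext.addPyFile("s3://some_bucket/python_code.py")

import python_code  # importable on the driver and inside tasks after addPyFile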

In the pyspark_job.py file, I can import the code from the Python file passed with --py-files just like any other dependency:

from python_code import something
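
The jar passed with --jars does not become a Python import, though. It lands on the driver and executor classpaths, and one way to reach a class from it is through the Py4J gateway exposed by the SparkSession. The class and method names below are made-up placeholders, so treat this as a sketch of the pattern rather than working code for a specific jar:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Classes from java_code.jar are reachable through the Py4J gateway.
# com.example.SomeHelper and doSomething are hypothetical names for illustration.
helper = spark._jvm.com.example.SomeHelper()
result = helper.doSomething("some input")

# A jar that ships a custom Spark data source can also be used declaratively,
# e.g. spark.read.format("com.example.someformat").load(...)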
