How to add dependencies as jar files or Python scripts to PySpark

This article is a part of my "100 data engineering tutorials in 100 days" challenge. (89/100)

When we want to use external dependencies in PySpark code, we have two options: we can pass them either as jar files or as Python scripts.

In this article, I will show how to do that when running a PySpark job using AWS EMR. The jar and Python files will be stored on S3 in a location accessible from the EMR cluster (remember to set the permissions).

First, we have to add the --jars and --py-files parameters to the spark-submit command while starting a new PySpark job:

spark-submit --deploy-mode cluster \
    --jars s3://some_bucket/java_code.jar \
    --py-files s3://some_bucket/python_code.py \
    s3://some_bucket/pyspark_job.py 
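Both parameters accept comma-separated lists, so we can pass several dependencies at once (`--py-files` also accepts .zip and .egg archives). A minimal sketch of assembling such a command — the `extra.jar` and `helpers.zip` paths are hypothetical examples, not files from this article:

```python
# Sketch: building a spark-submit command with multiple dependencies.
# --jars and --py-files each take a single comma-separated argument,
# not repeated flags. The extra.jar and helpers.zip paths are made up.
jars = ["s3://some_bucket/java_code.jar", "s3://some_bucket/extra.jar"]
py_files = ["s3://some_bucket/python_code.py", "s3://some_bucket/helpers.zip"]

command = " ".join([
    "spark-submit", "--deploy-mode", "cluster",
    "--jars", ",".join(jars),
    "--py-files", ",".join(py_files),
    "s3://some_bucket/pyspark_job.py",
])
print(command)
```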

In the pyspark_job.py file, I can import the code from the python_code.py file just like any other dependency (the jar file, in turn, is put on the JVM classpath for Spark itself):

import python_code.something
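This works because Spark distributes the files listed in `--py-files` to the driver and executors and adds them to the Python search path, so a plain import finds them. We can simulate that mechanism locally with a zip archive — the `python_code.something` module and its `VALUE` constant are invented here purely for illustration:

```python
# Sketch of what --py-files does under the hood: the dependency ends up
# on sys.path, so "import python_code.something" just works.
# Here we build a throwaway zip locally instead of downloading from S3.
import os
import sys
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "python_code.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("python_code/__init__.py", "")
    zf.writestr("python_code/something.py", "VALUE = 42\n")  # hypothetical module

sys.path.insert(0, zip_path)  # roughly what Spark does for --py-files entries

import python_code.something
print(python_code.something.VALUE)  # → 42
```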





Bartosz Mikulski * data/machine learning engineer * conference speaker * co-founder of Software Craft Poznan & Poznan Scala User Group
