How to add dependencies as jar files or Python scripts to PySpark
When we want to use external dependencies in PySpark code, we have two options: we can pass them either as jar files or as Python scripts.
In this article, I will show how to do that when running a PySpark job using AWS EMR. The jar and Python files will be stored on S3 in a location accessible from the EMR cluster (remember to set the permissions).
First, we have to add the --jars and --py-files parameters to the spark-submit command while starting a new PySpark job:
spark-submit --deploy-mode cluster \
    --jars s3://some_bucket/java_code.jar \
    --py-files s3://some_bucket/python_code.py \
    s3://some_bucket/pyspark_job.py
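If it is more convenient to configure the dependencies from inside the job itself (for example, in a notebook running on the cluster), a similar effect can be achieved programmatically. This is only a minimal sketch, assuming the same S3 locations as above and that the cluster is allowed to read them:

from pyspark.sql import SparkSession

# spark.jars must be set before the session is created,
# so the jar ends up on the driver and executor classpaths.
spark = (
    SparkSession.builder
    .appName("pyspark_job")
    .config("spark.jars", "s3://some_bucket/java_code.jar")
    .getOrCreate()
)

# Ship the Python dependency to the executors at runtime;
# this is the programmatic counterpart of --py-files for a single file.
spark.sparkContext.addPyFile("s3://some_bucket/python_code.py")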
In the pyspark_job.py file, I can import the code from python_code.py just like any other dependency:
from python_code import something
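The jar, in turn, is available on the JVM side of the job. As an illustration only (the class name below is made up; the real one depends on what java_code.jar actually contains), a Java UDF shipped in the jar could be registered and called like this:

from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

# "com.example.SomeUdf" is a hypothetical class name; replace it with a UDF
# class that really exists inside java_code.jar.
spark.udf.registerJavaFunction("some_udf", "com.example.SomeUdf", StringType())

spark.sql("SELECT some_udf('input') AS result").show()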