How to write to a SQL database using JDBC in PySpark

This article is a part of my "100 data engineering tutorials in 100 days" challenge. (90/100)

To write a PySpark DataFrame to a table in a SQL database using JDBC, we need a few things.

First, we have to add the JDBC driver to the driver node and the worker nodes. We can do that using the --jars parameter while submitting a new PySpark job:

spark-submit --deploy-mode cluster \
    --jars s3://some_bucket/jdbc_driver.jar \
    s3://some_bucket/pyspark_job.py 
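
Alternatively, if we build the SparkSession ourselves, we can achieve the same with the spark.jars configuration property. A minimal sketch, assuming the driver jar is stored at the same S3 path:

from pyspark.sql import SparkSession

# The jar path is a placeholder; it must be readable by the driver and the workers.
spark = (
    SparkSession.builder
    .appName("jdbc-write-example")
    .config("spark.jars", "s3://some_bucket/jdbc_driver.jar")
    .getOrCreate()
)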

After that, we have to prepare the JDBC connection URL. The URL consists of three parts: the jdbc:postgresql prefix that identifies the database type, the host with an optional port, and the database (schema) name. If I want to connect to Postgres running on the local machine, the URL looks like this:

url = "jdbc:postgresql://localhost/database_name"
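
If the database runs on another host or a non-default port, both go into the host part of the URL. For example, with Postgres listening on its default port 5432 (the hostname below is a placeholder):

url = "jdbc:postgresql://db.example.com:5432/database_name"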

In addition to that, I have to prepare a dictionary of properties, which contains the username and password used to connect to the database:

properties = {
    "user": "the_username",
    "password": "the_password"
}
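
Spark usually infers the JDBC driver class from the URL, but we can also set it explicitly with the driver key in the same dictionary. For Postgres, the driver class is org.postgresql.Driver:

properties = {
    "user": "the_username",
    "password": "the_password",
    # optional: set explicitly when Spark cannot infer the driver from the URL
    "driver": "org.postgresql.Driver"
}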

Please do not store the credentials in the code. It is better to use AWS Secrets Manager (if you run your code on EMR) or any other method of passing the credentials securely from an external source.
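
For example, here is a minimal sketch of fetching the credentials from AWS Secrets Manager with boto3; the secret name and the keys inside the secret's JSON are assumptions made for this example:

import json

import boto3

# The secret name and the JSON keys are placeholders for this example.
secrets_client = boto3.client("secretsmanager")
secret = secrets_client.get_secret_value(SecretId="my-database-credentials")
credentials = json.loads(secret["SecretString"])

properties = {
    "user": credentials["username"],
    "password": credentials["password"]
}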

I have to decide how Spark should behave when there is already some data in the table. The supported write modes are 'append', 'overwrite', 'ignore', and 'error' (the default, which fails if the table already contains data). Let's assume that I want to overwrite the existing data with the content of the DataFrame df. In this case, I have to set the write mode to 'overwrite'.

The last piece of information I need is the name of the table that will be populated with the DataFrame. When I have all of the required information, I can call the write.jdbc function:

df.write.jdbc(url=url, table="the_table_name", mode='overwrite', properties=properties)
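
Putting it all together, a minimal end-to-end sketch may look like the snippet below; the sample DataFrame, the table name, and the credentials are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-write-example").getOrCreate()

# A tiny placeholder DataFrame; in a real job the data would come from elsewhere.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

url = "jdbc:postgresql://localhost/database_name"
properties = {
    "user": "the_username",      # in practice, load these from a secret store
    "password": "the_password"
}

# Overwrite the target table with the DataFrame content.
df.write.jdbc(url=url, table="the_table_name", mode="overwrite", properties=properties)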
