How to read multiple Parquet files with different schemas in Apache Spark
When we read multiple Parquet files with Apache Spark, we may run into problems caused by schema differences between the files. When Spark gets a list of files to read, it picks the schema from either the Parquet summary file or a randomly chosen input file:
spark.read.parquet(
  List(
    "file_a",
    "file_b",
    "file_c"
  ): _*
)
Most likely, you don't have a Parquet summary file, because generating one is not a common practice. In that case, Spark tries to apply the schema of a randomly chosen file to every file in the list.
This is an annoying problem: if some files contain additional columns, we may end up with a dataset that silently lacks those columns, simply because Spark happened to read the schema from a file that does not have them.
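Here is a minimal sketch that reproduces the issue (the /tmp/demo paths, column names, and sample rows are made up for illustration): two files share the id and name columns, but only one of them contains score.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("parquet-schema-demo")
  .getOrCreate()
import spark.implicits._

// file_a has columns (id, name); file_b additionally has score.
Seq((1L, "alice")).toDF("id", "name")
  .write.mode("overwrite").parquet("/tmp/demo/file_a")
Seq((2L, "bob", 0.9)).toDF("id", "name", "score")
  .write.mode("overwrite").parquet("/tmp/demo/file_b")

// Without schema merging, Spark may pick the schema of file_a,
// and the score column silently disappears from the result.
val df = spark.read.parquet("/tmp/demo/file_a", "/tmp/demo/file_b")
df.printSchema() // may print only id and name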
How to merge Parquet schemas in Apache Spark?
To solve the issue, we must instruct Apache Spark to merge the schemas of all the given files into one common schema. We can do that with the mergeSchema option:
spark.read.option("mergeSchema", "true").parquet(...)
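Continuing the sketch above, with mergeSchema enabled the resulting DataFrame contains the union of all columns, and rows coming from files that lack a given column get null in it:

val merged = spark.read
  .option("mergeSchema", "true")
  .parquet("/tmp/demo/file_a", "/tmp/demo/file_b")

merged.printSchema()
// Expected output (column order may vary):
// root
//  |-- id: long (nullable = true)
//  |-- name: string (nullable = true)
//  |-- score: double (nullable = true)

Alternatively, you can enable merging for the whole session by setting the spark.sql.parquet.mergeSchema configuration property to true. Keep in mind that schema merging is a relatively expensive operation, because Spark has to inspect the footer of every file, which is why it is disabled by default.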