Posted to issues@spark.apache.org by "Christopher Burns (JIRA)" <ji...@apache.org> on 2018/09/23 02:55:00 UTC

[jira] [Comment Edited] (SPARK-25480) Dynamic partitioning + saveAsTable with multiple partition columns create empty directory

    [ https://issues.apache.org/jira/browse/SPARK-25480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16624913#comment-16624913 ] 

Christopher Burns edited comment on SPARK-25480 at 9/23/18 2:54 AM:
--------------------------------------------------------------------

I can confirm this happens with Spark 2.3 / HDFS 2.7.4 + write.parquet()
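For reference, a minimal sketch of the kind of write.parquet() call that reproduces it for us (the HDFS path and partition column names here are hypothetical, and a DataFrame df is assumed to be in scope):
{code}
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
df.write.mode("overwrite").partitionBy("year", "month").parquet("hdfs:///tmp/users_test")
{code}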


was (Author: chris-topher):
I can confirm this happens with Spark 2.3 / HDFS 2.7.4

> Dynamic partitioning + saveAsTable with multiple partition columns create empty directory
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-25480
>                 URL: https://issues.apache.org/jira/browse/SPARK-25480
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: Daniel Mateus Pires
>            Priority: Minor
>         Attachments: dynamic_partitioning.json
>
>
> We use .saveAsTable and dynamic partitioning as our only way to write data to S3 from Spark.
> When only 1 partition column is defined for a table, .saveAsTable behaves as expected (see the sketch after this list):
> - with Overwrite mode, it will create the table if it doesn't exist and write the data
> - with Append mode, it will append to a given partition
> - with Overwrite mode, if the table exists, it will overwrite the partition
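> For illustration, a minimal sketch of the single-partition-column case that works as expected (the bucket, path, and table name here are placeholders, not from the original report):
> {code}
> spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
> df.write.mode("overwrite").partitionBy("year").option("path", "s3://bucket/integration/users_single").saveAsTable("users_single")
> {code}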
> If 2 partition columns are used, however, the directory is created on S3 with the SUCCESS file, but no data is actually written.
> Our workaround is to check whether the table exists and, if it does not, set the partitioning mode back to static before running saveAsTable (a sketch of that check follows the snippet below):
> {code}
> spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
> df.write.mode("overwrite").partitionBy("year", "month").option("path", "s3://hbc-data-warehouse/integration/users_test").saveAsTable("users_test")
> {code}
>  
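> A minimal sketch of that workaround, assuming spark.catalog.tableExists as the existence check (this is our approach, not something built into Spark):
> {code}
> val tableName = "users_test"
> // Fall back to static overwrite for the very first write, then use
> // dynamic partition overwrite once the table exists.
> val mode = if (spark.catalog.tableExists(tableName)) "dynamic" else "static"
> spark.conf.set("spark.sql.sources.partitionOverwriteMode", mode)
> df.write.mode("overwrite").partitionBy("year", "month").option("path", "s3://hbc-data-warehouse/integration/users_test").saveAsTable(tableName)
> {code}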



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org