Posted to issues@spark.apache.org by "nirav patel (JIRA)" <ji...@apache.org> on 2018/08/01 18:12:00 UTC

[jira] [Commented] (SPARK-17861) Store data source partitions in metastore and push partition pruning into metastore

    [ https://issues.apache.org/jira/browse/SPARK-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565749#comment-16565749 ] 

nirav patel commented on SPARK-17861:
-------------------------------------

[~rxin] can this also be supported via the DataFrame API, so that the following gives the same behavior?

`df.write.mode(SaveMode.Overwrite).partitionBy(partitionCols: _*).parquet(tableLocation)`

 

Currently, with Spark 2.2.1, this overwrites all partitions of the table, not just the ones present in the DataFrame.
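
For what it's worth, newer Spark releases added a setting for exactly this; a minimal sketch, assuming Spark 2.3+ where the spark.sql.sources.partitionOverwriteMode option was introduced, and reusing the df, partitionCols and tableLocation placeholders from the snippet above:

    import org.apache.spark.sql.SaveMode

    // Assumed Spark 2.3+: with "dynamic" overwrite mode, only the partitions
    // present in df are replaced; all other partitions are left untouched.
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

    df.write
      .mode(SaveMode.Overwrite)
      .partitionBy(partitionCols: _*)
      .parquet(tableLocation)

On 2.2.x, without this setting, SaveMode.Overwrite still truncates the whole table location.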

> Store data source partitions in metastore and push partition pruning into metastore
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-17861
>                 URL: https://issues.apache.org/jira/browse/SPARK-17861
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Reynold Xin
>            Assignee: Eric Liang
>            Priority: Critical
>             Fix For: 2.1.0
>
>
> Spark SQL does not store any partition information in the catalog for data source tables, because it was originally designed to work with arbitrary files. This, however, causes a few issues for catalog tables:
> 1. Listing partitions for a large table (with millions of partitions) can be very slow during cold start.
> 2. Heterogeneous partition naming schemes are not supported.
> 3. Partition pruning cannot be pushed into the metastore.
> This ticket tracks the work required to push the tracking of partitions into the metastore. This change should be feature-flagged.
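
For readers landing here, a hedged sketch of how the feature-flagged behavior described above surfaces in Spark 2.1+ (config and statement names as documented for Spark, not quoted from this ticket; the table and column names below are made up):

    // Assumed Spark 2.1+ settings: keep file-source partition metadata in the
    // Hive metastore and delegate partition pruning to it.
    spark.conf.set("spark.sql.hive.manageFilesourcePartitions", "true")
    spark.conf.set("spark.sql.hive.metastorePartitionPruning", "true")

    // Register on-disk partitions of an existing table with the metastore.
    spark.sql("MSCK REPAIR TABLE my_partitioned_table")   // hypothetical table name

    // A filter on the partition column (dt, hypothetical) can now be pruned in the
    // metastore instead of listing every partition directory on the filesystem.
    spark.sql("SELECT * FROM my_partitioned_table WHERE dt = '2018-08-01'").show()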



