Posted to issues@spark.apache.org by "Patrick Duin (JIRA)" <ji...@apache.org> on 2017/10/11 08:58:00 UTC

[jira] [Created] (SPARK-22247) Hive partition filter very slow

Patrick Duin created SPARK-22247:
------------------------------------

             Summary: Hive partition filter very slow
                 Key: SPARK-22247
                 URL: https://issues.apache.org/jira/browse/SPARK-22247
             Project: Spark
          Issue Type: Bug
          Components: Spark Core, SQL
    Affects Versions: 2.1.1, 2.0.2
            Reporter: Patrick Duin


I found an issue where filtering partitions through a DataFrame results in very poor performance.

To reproduce:
Create a Hive table with a large number of partitions and write a Spark query on that table that filters on the partition column.

In my use case I've got a table with about 30k partitions. 
I filter the partitions using some Scala via spark-shell:
{{table.filter("partition=x or partition=y")}}
This results in the Hive Thrift API call {{get_partitions('db', 'table', -1)}}, which is very slow (minutes) and loads all partition metadata from the metastore into memory.
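
For reference, a minimal spark-shell sketch of the setup described above (database, table and column names are illustrative, and a Hive-enabled spark-shell is assumed):

{code:scala}
// Hypothetical partitioned Hive table; in my case the real table has ~30k partitions.
spark.sql("CREATE DATABASE IF NOT EXISTS db")
spark.sql("CREATE TABLE db.events (id BIGINT) PARTITIONED BY (`partition` STRING) STORED AS PARQUET")

val table = spark.table("db.events")

// Filtering on a disjunction of partition values triggers the slow path:
// the metastore receives get_partitions('db', 'events', -1) and all
// partition metadata is loaded and then filtered on the Spark side.
table.filter("`partition` = 'x' or `partition` = 'y'").count()
{code}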

Doing a simpler filter:
{{table.filter("partition=x")}}
results in the Hive Thrift API call {{get_partitions_by_filter('db', 'table', 'partition = "x"', -1)}}, which is very fast (seconds) and only fetches partition x into memory.
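
For comparison, the single-equality form of the same query (same illustrative names as above) takes the fast path:

{code:scala}
// A single equality predicate is pushed to the metastore as
// get_partitions_by_filter('db', 'events', 'partition = "x"', -1),
// so only the matching partition's metadata is fetched into memory.
table.filter("`partition` = 'x'").count()
{code}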

If possible, Spark should translate the filter into the more performant Thrift call, or fall back to a more scalable solution that filters out partitions without having to load them all into memory first (for instance fetching the partitions in batches).
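
To illustrate the kind of translation I mean, here is a rough sketch (purely hypothetical, not Spark's actual code) of turning a disjunction of partition equalities into the filter string that {{get_partitions_by_filter}} accepts:

{code:scala}
// Hypothetical helper: build a Hive metastore filter string from the wanted
// values of a single string-typed partition column.
def metastoreFilter(column: String, values: Seq[String]): String =
  values.map(v => column + " = \"" + v + "\"").mkString(" or ")

// metastoreFilter("partition", Seq("x", "y"))
//   returns: partition = "x" or partition = "y"
// which could be passed to get_partitions_by_filter instead of falling
// back to get_partitions and loading all 30k partitions into memory.
{code}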

I've posted my original question on [SO|https://stackoverflow.com/questions/46152526/how-should-i-configure-spark-to-correctly-prune-hive-metastore-partitions].





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org