Posted to issues@spark.apache.org by "chenliang (Jira)" <ji...@apache.org> on 2020/12/08 08:36:00 UTC

[jira] [Created] (SPARK-33707) Support multiple types of function partition pruning on hive metastore

chenliang created SPARK-33707:
---------------------------------

             Summary: Support multiple types of function partition pruning on hive metastore
                 Key: SPARK-33707
                 URL: https://issues.apache.org/jira/browse/SPARK-33707
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 3.0.0, 2.4.3, 2.3.4, 2.2.2
            Reporter: chenliang


In the current version, partition pruning on the Hive metastore only supports a limited set of predicate shapes.

Let's look at the implementation of the source code:
 [https://github.com/apache/spark/blob/031c5ef280e0cba8c4718a6457a44b6cccb17f46/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala#L840]

Hive's getPartitionsByFilter() takes a string representing the partition predicates, e.g. "str_key=\"value\" and int_key=1 ...", but predicates that apply ordinary functions such as concat/concat_ws/substr to a partition column are not supported.
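The limitation can be sketched as follows. This is a hypothetical, heavily simplified model of the conversion step (the real logic lives in the Scala HiveShim linked above; the function and predicate names here are illustrative only): predicates that compare a partition column to a literal can be rendered into Hive's filter string, while any predicate wrapping the column in a function forces the conversion to give up, so every partition is fetched.

```python
# Hypothetical sketch of how partition predicates are turned into the filter
# string accepted by Hive's getPartitionsByFilter(). Not Spark's actual code.

def to_hive_filter(predicates):
    """Return a Hive filter string if every predicate is convertible,
    otherwise None -- meaning all partitions must be listed and scanned."""
    parts = []
    for kind, key, value in predicates:
        if kind == "eq_string":
            # string partition key: value is quoted
            parts.append(f'{key} = "{value}"')
        elif kind == "eq_int":
            # integral partition key: value is emitted bare
            parts.append(f"{key} = {value}")
        else:
            # A function call such as concat/concat_ws/substr on a partition
            # column has no filter-string representation, so pruning is
            # abandoned for the whole predicate set.
            return None
    return " and ".join(parts)

# Convertible predicates produce a filter string...
print(to_hive_filter([("eq_string", "str_key", "value"),
                      ("eq_int", "int_key", 1)]))
# ...but one function-based predicate disables pruning entirely.
print(to_hive_filter([("func", "substr(dt, 1, 7)", "2020-12")]))
```

In this model, supporting more predicate types would mean teaching the converter to render additional expression shapes instead of returning None.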



This defect can cause a large number of partitions to be scanned, which increases the amount of data involved in the computation and adds pressure on the metastore service.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org