Posted to issues@spark.apache.org by "chenliang (Jira)" <ji...@apache.org> on 2020/12/09 02:48:00 UTC

[jira] [Reopened] (SPARK-33707) Support multiple types of function partition pruning on hive metastore

     [ https://issues.apache.org/jira/browse/SPARK-33707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenliang reopened SPARK-33707:
-------------------------------

[~hyukjin.kwon] thanks for your time. I have added support for some commonly used functions
 and will submit the code as soon as possible. This enhancement covers roughly 90% of the cases in our production environment, and it helps users migrate Hive SQL to Spark SQL. Thanks.
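
As an illustration of the kind of rewrite this involves, here is a hypothetical sketch in Scala (not the actual patch; rewriteSubstrPrefix is an invented name): an equality on a substr prefix of a string partition key can be turned into a range filter that the metastore already accepts.

    object SubstrRewriteSketch extends App {
      // Hypothetical sketch, not the actual SPARK-33707 patch: rewrite an
      // equality on a prefix of a string partition key, e.g.
      //   substr(dt, 1, 4) = "2020"
      // into a range filter the metastore already understands:
      //   dt >= "2020" and dt < "2021"
      def rewriteSubstrPrefix(col: String, prefix: String): Option[String] = {
        if (prefix.isEmpty) None
        else {
          // Successor of the prefix: bump its last character ("2020" -> "2021").
          // (The Char.MaxValue edge case is ignored for brevity.)
          val upper = prefix.init + (prefix.last + 1).toChar
          Some(col + " >= \"" + prefix + "\" and " + col + " < \"" + upper + "\"")
        }
      }

      println(rewriteSubstrPrefix("dt", "2020"))
      // prints: Some(dt >= "2020" and dt < "2021")
    }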

> Support multiple types of function partition pruning on hive metastore
> ----------------------------------------------------------------------
>
>                 Key: SPARK-33707
>                 URL: https://issues.apache.org/jira/browse/SPARK-33707
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.2, 2.3.4, 2.4.3, 3.0.0
>            Reporter: chenliang
>            Priority: Major
>
> In the current version, partition pruning is supported only in a limited set of cases.
> Let's look at the implementation in the source code:
>  [https://github.com/apache/spark/blob/031c5ef280e0cba8c4718a6457a44b6cccb17f46/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala#L840]
> Hive's getPartitionsByFilter() takes a string that represents partition predicates like "str_key=\"value\" and int_key=1 ...", but predicates built from ordinary functions such as concat/concat_ws/substr are not supported.
> This limitation can cause a large number of partitions to be scanned, which increases the amount of data involved in the computation and puts extra pressure on the metastore service (see the conversion sketch below the quoted description).
>
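
For illustration, here is a minimal sketch of the conversion described in the issue above (with simplified stand-in types, not Spark's actual HiveShim code): comparisons between a partition column and a literal convert to Hive's filter syntax, while anything that wraps the column in a function call falls through, so the filter comes back empty and every partition is listed.

    object FilterConversionSketch extends App {
      // Simplified stand-ins for Catalyst expressions; not Spark's real classes.
      sealed trait Expr
      case class Attr(name: String) extends Expr                  // partition column
      case class Lit(value: String) extends Expr                  // literal
      case class Func(name: String, args: Seq[Expr]) extends Expr // e.g. substr(dt, 1, 4)
      case class EqualTo(left: Expr, right: Expr) extends Expr
      case class And(left: Expr, right: Expr) extends Expr

      // Convert what we can into Hive's filter syntax; unsupported shapes drop out.
      def toMetastoreFilter(e: Expr): Option[String] = e match {
        case EqualTo(Attr(col), Lit(v)) => Some(col + " = \"" + v + "\"")
        case And(l, r) =>
          (toMetastoreFilter(l), toMetastoreFilter(r)) match {
            case (Some(a), Some(b)) => Some("(" + a + " and " + b + ")")
            // For AND it is safe to push only the convertible half: the
            // metastore then returns a superset of partitions, never a subset.
            case (a, b) => a.orElse(b)
          }
        case _ => None // function calls such as concat/substr end up here
      }

      // dt = "20201209"           -> pruned listing
      println(toMetastoreFilter(EqualTo(Attr("dt"), Lit("20201209"))))
      // substr(dt, 1, 4) = "2020" -> None: every partition is fetched
      println(toMetastoreFilter(EqualTo(Func("substr", Seq(Attr("dt"), Lit("1"), Lit("4"))), Lit("2020"))))
    }

Falling back to a superset of partitions is what keeps the partial conversion safe: pruning may be incomplete, but query results stay correct.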



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org