Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/24 18:51:17 UTC

[GitHub] [spark] sunchao commented on pull request #36995: [SPARK-39607][SQL][DSV2] Distribution and ordering support V2 function in writing

sunchao commented on PR #36995:
URL: https://github.com/apache/spark/pull/36995#issuecomment-1226108103

   > How can we use this feature to implement bucket writing? We can use the expression (a v2 function) that calculates the bucket ID as the clustering expression. Then Spark will ensure that records with the same bucket ID end up in the same partition. However, the problem with this approach is low parallelism (at most the number of buckets).
   
   @cloud-fan I think you raised a good point. With the double hashing mentioned above, the parallelism could even be lower than the number of buckets due to collisions: Spark hashes the clustering key (itself an already-hashed bucket ID) again to pick a shuffle partition, so two distinct bucket IDs can land in the same partition. I guess this is a minor thing, though, since the chance is low. And even though the actual number of Spark tasks may be much larger than the number of buckets, most of those tasks will receive empty input in this scenario.
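   
   To make the trade-off concrete, here is a minimal sketch of that first approach against the DSv2 write API (assuming Spark 3.3+ with this PR; the `bucket` function name, the bucket count, and the `id` column are all hypothetical, with the function assumed to be registered in the connector's FunctionCatalog):
   
   ```scala
   import org.apache.spark.sql.connector.distributions.{Distribution, Distributions}
   import org.apache.spark.sql.connector.expressions.{Expression, Expressions, SortOrder}
   import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
   
   // Cluster the write by the computed bucket ID: all records with the same
   // bucket ID are shuffled to the same task, so at most numBuckets tasks
   // (fewer, if the shuffle hash collides) receive any input.
   class ClusterByBucketIdWrite extends RequiresDistributionAndOrdering {
     private val numBuckets = 16 // hypothetical
   
     override def requiredDistribution(): Distribution =
       Distributions.clustered(Array[Expression](
         Expressions.apply("bucket",
           Expressions.literal(numBuckets), Expressions.column("id"))))
   
     override def requiredOrdering(): Array[SortOrder] = Array.empty[SortOrder]
   }
   ```
   
   (A real `Write` would also implement `toBatch`/`toStreaming`; only the distribution part is sketched here.)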
   
   > A different approach is to use the bucket columns as the clustering expressions. Spark will make sure records with the same bucket column values end up in the same partition. Then the v2 write can require a local sort by bucket ID (a v2 function), so that records with the same bucket ID are grouped together.
   
   This means the write now relies on Spark's hash function for distributing the data, though, which could differ from the hash other engines use. I think that would cause compatibility issues, right?
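   
   For reference, a sketch of this second approach with the same hypothetical names: the shuffle key is the raw column values (hashed by Spark, per the concern above), and the bucket ID computed by the V2 function only drives a task-local sort:
   
   ```scala
   import org.apache.spark.sql.connector.distributions.{Distribution, Distributions}
   import org.apache.spark.sql.connector.expressions.{Expression, Expressions, SortDirection, SortOrder}
   import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
   
   // Cluster by the raw bucket column, so parallelism is bounded by the number
   // of distinct column values rather than numBuckets; then sort locally by the
   // computed bucket ID so each task sees one bucket's records at a time.
   class ClusterByColumnsWrite extends RequiresDistributionAndOrdering {
     private val numBuckets = 16 // hypothetical
   
     override def requiredDistribution(): Distribution =
       Distributions.clustered(Array[Expression](Expressions.column("id")))
   
     override def requiredOrdering(): Array[SortOrder] = Array(
       Expressions.sort(
         Expressions.apply("bucket",
           Expressions.literal(numBuckets), Expressions.column("id")),
         SortDirection.ASCENDING))
   }
   ```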
   
   > That said, I think most users will not use the bucket transform as the clustering expression.
   
   Hmm, I'm not sure whether that is true. @aokolnychyi may know more from the Iceberg side.
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

