Posted to commits@hudi.apache.org by "danny0405 (via GitHub)" <gi...@apache.org> on 2023/02/07 04:39:23 UTC

[GitHub] [hudi] danny0405 commented on a diff in pull request #7873: [MINOR] Added safety-net check to catch any potential issue to deduce parallelism from the incoming `Dataset` appropriately

danny0405 commented on code in PR #7873:
URL: https://github.com/apache/hudi/pull/7873#discussion_r1098169437


##########
hudi-client/hudi-spark-client/src/main/scala/org/apache/hudi/HoodieDatasetBulkInsertHelper.scala:
##########
@@ -203,6 +203,17 @@ object HoodieDatasetBulkInsertHelper
       .values
   }
 
+  override protected def deduceShuffleParallelism(input: DataFrame, configuredParallelism: Int): Int = {
+    val deduceParallelism = super.deduceShuffleParallelism(input, configuredParallelism)
+    // NOTE: In case parallelism deduction fails to accurately deduce the parallelism level
+    //       of the incoming dataset, we fall back to the default parallelism level set for this Spark session
+    if (deduceParallelism > 0) {
+      deduceParallelism
+    } else {
+      input.sparkSession.sparkContext.defaultParallelism

Review Comment:
   Curious, does only bulk_insert have this issue of the deduced parallelism being 0 or negative?
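
   For context, a minimal, runnable sketch of the failure mode being guarded against.
   This is illustrative only: the deduceShuffleParallelism body and the object name
   ParallelismFallbackSketch below are hypothetical stand-ins (not Hudi's actual base
   implementation), assuming the base deduction derives parallelism from the incoming
   Dataset's partition count, which can come out as 0 for an empty DataFrame:

   import org.apache.spark.sql.{DataFrame, SparkSession}

   object ParallelismFallbackSketch {
     // Hypothetical stand-in for the base deduction: prefer the user-configured
     // value, otherwise use the incoming DataFrame's current partition count.
     def deduceShuffleParallelism(input: DataFrame, configuredParallelism: Int): Int =
       if (configuredParallelism > 0) configuredParallelism
       else input.rdd.getNumPartitions

     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder().master("local[*]").appName("sketch").getOrCreate()

       // An empty DataFrame is typically backed by an RDD with 0 partitions, so the
       // naive deduction above returns 0, which is not a usable shuffle parallelism.
       val deduced = deduceShuffleParallelism(spark.emptyDataFrame, configuredParallelism = 0)

       // The safety net from the diff: fall back to the session default.
       val safe = if (deduced > 0) deduced else spark.sparkContext.defaultParallelism
       println(s"deduced = $deduced, after fallback = $safe")

       spark.stop()
     }
   }

   Note that nothing in this sketch is specific to bulk_insert: any code path that
   deduces shuffle parallelism purely from the incoming Dataset's partition count is
   exposed to the same 0-partition edge case.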



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org