Posted to reviews@spark.apache.org by "wangyum (via GitHub)" <gi...@apache.org> on 2023/07/21 03:50:04 UTC

[GitHub] [spark] wangyum commented on a diff in pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

wangyum commented on code in PR #42090:
URL: https://github.com/apache/spark/pull/42090#discussion_r1270198582


##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:
##########
@@ -79,8 +79,9 @@ class HadoopTableReader(
   private val _minSplitsPerRDD = if (sparkSession.sparkContext.isLocal) {
     0 // will be split based on blocks by default.
   } else {
-    math.max(hadoopConf.getInt("mapreduce.job.maps", 1),
-      sparkSession.sparkContext.defaultMinPartitions)
+    val value = sparkSession.sessionState.conf.hiveMinPartitionNum
+      .getOrElse(sparkSession.sparkContext.defaultMinPartitions)
+    math.max(hadoopConf.getInt("mapreduce.job.maps", 1), value)

Review Comment:
   Could we get the configuration value from `sparkSession.sessionState.conf` instead of `hadoopConf`?
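
   For reference, a minimal sketch of how such an option could be declared in
   `SQLConf` so that `sparkSession.sessionState.conf.hiveMinPartitionNum`
   resolves. The key name, version, and doc text below are assumptions for
   illustration, not the PR's actual definition:

       // Hypothetical SQLConf entry, modeled on existing options such as
       // spark.sql.files.minPartitionNum; the key and details are assumed.
       val HIVE_MIN_PARTITION_NUM = buildConf("spark.sql.hive.minPartitionNum")
         .doc("Minimum number of partitions to use when reading Hive tables. " +
           "If not set, falls back to sparkContext.defaultMinPartitions.")
         .version("3.5.0")
         .intConf
         .checkValue(_ > 0, "The minimum partition number must be positive.")
         .createOptional

       // Accessor on SQLConf, so callers read it via sessionState.conf:
       def hiveMinPartitionNum: Option[Int] = getConf(HIVE_MIN_PARTITION_NUM)

   With something like that in place, the minimum could be set per session,
   e.g. spark.conf.set("spark.sql.hive.minPartitionNum", "100") (assumed key),
   instead of going through the Hadoop configuration.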



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org