Posted to reviews@spark.apache.org by "dh20 (via GitHub)" <gi...@apache.org> on 2023/07/20 07:19:03 UTC

[GitHub] [spark] dh20 opened a new pull request, #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

dh20 opened a new pull request, #42090:
URL: https://github.com/apache/spark/pull/42090

   
   ### What changes were proposed in this pull request?
   Add a session configuration so that the parallelism (the minimum number of file partitions) used by Spark SQL when reading a Hive table can be increased.
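   As a rough sketch of the intended usage (the configuration key `spark.sql.hive.minPartitionNum` is an assumed name for illustration; this PR does not state the final key):
   
       // Hypothetical usage sketch; the config key is an assumption,
       // not necessarily the name introduced by this PR.
       import org.apache.spark.sql.SparkSession
   
       val spark = SparkSession.builder()
         .enableHiveSupport()
         .getOrCreate()
   
       // Raise the minimum number of file partitions used when scanning a Hive table.
       spark.conf.set("spark.sql.hive.minPartitionNum", "200")
       spark.table("my_db.my_table").count()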
   
   ### Why are the changes needed?
   Currently, users cannot set the parallelism for reading a Hive table at the session level.
   
   ### Does this PR introduce _any_ user-facing change?
   Yes. It adds a configuration that users can set to control the parallelism when reading a Hive table.
   
   ### How was this patch tested?
   Existing tests. Also tested manually in a cluster.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


Re: [PR] [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings [spark]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] closed pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings
URL: https://github.com/apache/spark/pull/42090




Re: [PR] [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings [spark]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #42090:
URL: https://github.com/apache/spark/pull/42090#issuecomment-1793262486

   We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
   If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!




[GitHub] [spark] dh20 commented on a diff in pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

Posted by "dh20 (via GitHub)" <gi...@apache.org>.
dh20 commented on code in PR #42090:
URL: https://github.com/apache/spark/pull/42090#discussion_r1273070183


##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:
##########
@@ -79,8 +79,9 @@ class HadoopTableReader(
   private val _minSplitsPerRDD = if (sparkSession.sparkContext.isLocal) {
     0 // will be split based on blocks by default.
   } else {
-    math.max(hadoopConf.getInt("mapreduce.job.maps", 1),
-      sparkSession.sparkContext.defaultMinPartitions)
+    val value = sparkSession.sessionState.conf.hiveMinPartitionNum
+      .getOrElse(sparkSession.sparkContext.defaultMinPartitions)
+    math.max(hadoopConf.getInt("mapreduce.job.maps", 1), value)

Review Comment:
   @wangyum Hi, are you still following this PR?
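   For context, an accessor like `conf.hiveMinPartitionNum` would typically be backed by an optional entry in SQLConf. A minimal sketch of such an entry, assuming the key name, doc text, and version (this is not the PR's actual definition):
   
       // Sketch of an optional SQLConf entry backing `conf.hiveMinPartitionNum`;
       // key, doc text, and version are assumptions.
       val HIVE_MIN_PARTITION_NUM = buildConf("spark.sql.hive.minPartitionNum")
         .doc("Minimum number of partitions to use when reading a Hive table. " +
           "If not set, falls back to sparkContext.defaultMinPartitions.")
         .version("3.5.0")
         .intConf
         .checkValue(_ > 0, "The minimum partition number must be positive.")
         .createOptional
   
       // Matching accessor on SQLConf, returning Option[Int] so callers can
       // fall back with getOrElse, as the diff above does.
       def hiveMinPartitionNum: Option[Int] = getConf(HIVE_MIN_PARTITION_NUM)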





[GitHub] [spark] dh20 commented on pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

Posted by "dh20 (via GitHub)" <gi...@apache.org>.
dh20 commented on PR #42090:
URL: https://github.com/apache/spark/pull/42090#issuecomment-1644881011

   @dongjoon-hyun Hi, could you help review this PR? Thanks!
   




[GitHub] [spark] dh20 commented on pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

Posted by "dh20 (via GitHub)" <gi...@apache.org>.
dh20 commented on PR #42090:
URL: https://github.com/apache/spark/pull/42090#issuecomment-1650826053

   @cloud-fan Hi, could you help review this PR? Thanks!




[GitHub] [spark] wangyum commented on a diff in pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

Posted by "wangyum (via GitHub)" <gi...@apache.org>.
wangyum commented on code in PR #42090:
URL: https://github.com/apache/spark/pull/42090#discussion_r1270198582


##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:
##########
@@ -79,8 +79,9 @@ class HadoopTableReader(
   private val _minSplitsPerRDD = if (sparkSession.sparkContext.isLocal) {
     0 // will be split based on blocks by default.
   } else {
-    math.max(hadoopConf.getInt("mapreduce.job.maps", 1),
-      sparkSession.sparkContext.defaultMinPartitions)
+    val value = sparkSession.sessionState.conf.hiveMinPartitionNum
+      .getOrElse(sparkSession.sparkContext.defaultMinPartitions)
+    math.max(hadoopConf.getInt("mapreduce.job.maps", 1), value)

Review Comment:
   Could we get the configuration value from `sparkSession.sessionState.conf` instead of `hadoopConf`?
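   Read literally, that suggestion would drop the hadoopConf lookup entirely. An assumed reading in sketch form (not code from this PR):
   
       // Assumed shape of the suggestion: rely only on the session conf and
       // Spark's default, ignoring mapreduce.job.maps from hadoopConf.
       private val _minSplitsPerRDD = if (sparkSession.sparkContext.isLocal) {
         0 // will be split based on blocks by default
       } else {
         sparkSession.sessionState.conf.hiveMinPartitionNum
           .getOrElse(sparkSession.sparkContext.defaultMinPartitions)
       }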





[GitHub] [spark] dh20 commented on a diff in pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

Posted by "dh20 (via GitHub)" <gi...@apache.org>.
dh20 commented on code in PR #42090:
URL: https://github.com/apache/spark/pull/42090#discussion_r1270253264


##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:
##########
@@ -79,8 +79,9 @@ class HadoopTableReader(
   private val _minSplitsPerRDD = if (sparkSession.sparkContext.isLocal) {
     0 // will be split based on blocks by default.
   } else {
-    math.max(hadoopConf.getInt("mapreduce.job.maps", 1),
-      sparkSession.sparkContext.defaultMinPartitions)
+    val value = sparkSession.sessionState.conf.hiveMinPartitionNum
+      .getOrElse(sparkSession.sparkContext.defaultMinPartitions)
+    math.max(hadoopConf.getInt("mapreduce.job.maps", 1), value)

Review Comment:
   That would also work; we could simply discard the value obtained from hadoopConf. But for compatibility with users who have already set `mapreduce.job.maps` in hadoopConf, I think it is worth keeping the hadoopConf lookup here as well.





[GitHub] [spark] dh20 commented on a diff in pull request #42090: [SPARK-44483] [SQL] When using Spark to read the hive table, the number of file partitions cannot be set using Spark's configuration settings

Posted by "dh20 (via GitHub)" <gi...@apache.org>.
dh20 commented on code in PR #42090:
URL: https://github.com/apache/spark/pull/42090#discussion_r1271203786


##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:
##########
@@ -79,8 +79,9 @@ class HadoopTableReader(
   private val _minSplitsPerRDD = if (sparkSession.sparkContext.isLocal) {
     0 // will be split based on blocks by default.
   } else {
-    math.max(hadoopConf.getInt("mapreduce.job.maps", 1),
-      sparkSession.sparkContext.defaultMinPartitions)
+    val value = sparkSession.sessionState.conf.hiveMinPartitionNum
+      .getOrElse(sparkSession.sparkContext.defaultMinPartitions)
+    math.max(hadoopConf.getInt("mapreduce.job.maps", 1), value)

Review Comment:
   @wangyum Hi, could you re-trigger the tests? The failures appear unrelated to my changes.




