Posted to notifications@kyuubi.apache.org by "bowenliang123 (via GitHub)" <gi...@apache.org> on 2023/06/08 09:05:19 UTC

[GitHub] [kyuubi] bowenliang123 opened a new pull request, #4933: [DOCS] [MINOR] Mark spark.sql.optimizer.insertRepartitionNum for Spark 3.1 only

bowenliang123 opened a new pull request, #4933:
URL: https://github.com/apache/kyuubi/pull/4933

   <!--
   Thanks for sending a pull request!
   
   Here are some tips for you:
     1. If this is your first time, please read our contributor guidelines: https://kyuubi.readthedocs.io/en/latest/community/CONTRIBUTING.html
     2. If the PR is related to an issue in https://github.com/apache/kyuubi/issues, add '[KYUUBI #XXXX]' in your PR title, e.g., '[KYUUBI #XXXX] Your PR title ...'.
     3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP][KYUUBI #XXXX] Your PR title ...'.
   -->
   
   ### _Why are the changes needed?_
   <!--
   Please clarify why the changes are needed. For instance,
     1. If you add a feature, you can talk about the use case of it.
     2. If you fix a bug, you can clarify why it is a bug.
   -->
   - Update the docs to note that the Spark plugin's config `spark.sql.optimizer.insertRepartitionNum` is used for Spark 3.1 only (see the sketch below)
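
   For context, a minimal sketch of how the configs involved might be set on a Spark 3.1 engine; the app name and partition value are illustrative assumptions, and the extension class is the one Kyuubi's Spark rules doc describes:

   ```scala
   // Minimal sketch (assumptions noted above): a Spark 3.1 session with the
   // Kyuubi extension that inserts a repartition node before writes.
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder()
     .appName("insert-repartition-sketch") // illustrative name
     .config("spark.sql.extensions", "org.apache.kyuubi.sql.KyuubiSparkSQLExtension")
     .config("spark.sql.optimizer.insertRepartitionBeforeWrite.enabled", "true")
     // With AQE disabled, pin the write partition number explicitly;
     // with AQE enabled, leave insertRepartitionNum unset and defer to AQE.
     .config("spark.sql.adaptive.enabled", "false")
     .config("spark.sql.optimizer.insertRepartitionNum", "200") // illustrative value
     .getOrCreate()
   ```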
   
   ### _How was this patch tested?_
   - [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
   
   - [ ] Add screenshots for manual tests if appropriate
   
   - [ ] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request
   




[GitHub] [kyuubi] bowenliang123 closed pull request #4933: [DOCS] [MINOR] Mark `spark.sql.optimizer.insertRepartitionNum` config for Spark 3.1 only

Posted by "bowenliang123 (via GitHub)" <gi...@apache.org>.
bowenliang123 closed pull request #4933: [DOCS] [MINOR] Mark `spark.sql.optimizer.insertRepartitionNum` config for Spark 3.1 only
URL: https://github.com/apache/kyuubi/pull/4933




[GitHub] [kyuubi] bowenliang123 commented on pull request #4933: [DOCS] [MINOR] Mark `spark.sql.optimizer.insertRepartitionNum` config for Spark 3.1 only

Posted by "bowenliang123 (via GitHub)" <gi...@apache.org>.
bowenliang123 commented on PR #4933:
URL: https://github.com/apache/kyuubi/pull/4933#issuecomment-1583669574

   Thanks, merged to master.




[GitHub] [kyuubi] codecov-commenter commented on pull request #4933: [DOCS] [MINOR] Mark `spark.sql.optimizer.insertRepartitionNum` config for Spark 3.1 only

Posted by "codecov-commenter (via GitHub)" <gi...@apache.org>.
codecov-commenter commented on PR #4933:
URL: https://github.com/apache/kyuubi/pull/4933#issuecomment-1582519428

   ## [Codecov](https://app.codecov.io/gh/apache/kyuubi/pull/4933?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) Report
   > Merging [#4933](https://app.codecov.io/gh/apache/kyuubi/pull/4933?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) (280a6af) into [master](https://app.codecov.io/gh/apache/kyuubi/commit/787028ec3ecaa44a66a702a957795f2f5634dabb?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) (787028e) will **not change** coverage.
   > The diff coverage is `n/a`.
   
   > :exclamation: Current head 280a6af differs from pull request most recent head 5ed6e28. Consider uploading reports for the commit 5ed6e28 to get more accurate results
   
   ```diff
   @@          Coverage Diff           @@
   ##           master   #4933   +/-   ##
   ======================================
     Coverage    0.00%   0.00%           
   ======================================
     Files         563     563           
     Lines       30884   30884           
     Branches     4030    4030           
   ======================================
     Misses      30884   30884           
   ```
   
   
   
   




[GitHub] [kyuubi] bowenliang123 commented on a diff in pull request #4933: [DOCS] [MINOR] Mark spark.sql.optimizer.insertRepartitionNum for Spark 3.1 only

Posted by "bowenliang123 (via GitHub)" <gi...@apache.org>.
bowenliang123 commented on code in PR #4933:
URL: https://github.com/apache/kyuubi/pull/4933#discussion_r1222988973


##########
docs/extensions/engines/spark/rules.md:
##########
 @@ -66,7 +66,7 @@ Kyuubi provides some configs to make these features easy to use.
 |                                Name                                 |             Default Value              |                                                                                                                                                                     Description                                                                                                                                                                      | Since |
 |---------------------------------------------------------------------|----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|
 | spark.sql.optimizer.insertRepartitionBeforeWrite.enabled            | true                                   | Add repartition node at the top of query plan. An approach of merging small files.                                                                                                                                                                                                                                                                   | 1.2.0 |
-| spark.sql.optimizer.insertRepartitionNum                            | none                                   | The partition number if `spark.sql.optimizer.insertRepartitionBeforeWrite.enabled` is enabled. If AQE is disabled, the default value is `spark.sql.shuffle.partitions`. If AQE is enabled, the default value is none that means depend on AQE.                                                                                                       | 1.2.0 |
+| spark.sql.optimizer.insertRepartitionNum                            | none                                   | The partition number if `spark.sql.optimizer.insertRepartitionBeforeWrite.enabled` is enabled. If AQE is disabled, the default value is `spark.sql.shuffle.partitions`. If AQE is enabled, the default value is none that means depend on AQE. This config is used for Spark 3.1.x only.                                                             | 1.2.0 |

Review Comment:
   Adopted.
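
   For readers of the table row above, a hedged sketch (not Kyuubi's actual code) of the default resolution it describes: an unset `spark.sql.optimizer.insertRepartitionNum` falls back to `spark.sql.shuffle.partitions` when AQE is off, and defers to AQE when it is on:

   ```scala
   // Simplified, assumed logic mirroring the table's description on Spark 3.1.
   import org.apache.spark.sql.SparkSession

   def effectiveRepartitionNum(spark: SparkSession): Option[Int] = {
     val configured =
       spark.conf.getOption("spark.sql.optimizer.insertRepartitionNum").map(_.toInt)
     // AQE defaults to false on Spark 3.1.
     val aqeEnabled = spark.conf.get("spark.sql.adaptive.enabled", "false").toBoolean
     configured.orElse {
       if (aqeEnabled) None // none: let AQE decide the partition number
       else Some(spark.conf.get("spark.sql.shuffle.partitions").toInt)
     }
   }
   ```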





[GitHub] [kyuubi] pan3793 commented on a diff in pull request #4933: [DOCS] [MINOR] Mark spark.sql.optimizer.insertRepartitionNum for Spark 3.1 only

Posted by "pan3793 (via GitHub)" <gi...@apache.org>.
pan3793 commented on code in PR #4933:
URL: https://github.com/apache/kyuubi/pull/4933#discussion_r1222961468


##########
docs/extensions/engines/spark/rules.md:
##########
 @@ -66,7 +66,7 @@ Kyuubi provides some configs to make these features easy to use.
 |                                Name                                 |             Default Value              |                                                                                                                                                                     Description                                                                                                                                                                      | Since |
 |---------------------------------------------------------------------|----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|
 | spark.sql.optimizer.insertRepartitionBeforeWrite.enabled            | true                                   | Add repartition node at the top of query plan. An approach of merging small files.                                                                                                                                                                                                                                                                   | 1.2.0 |
-| spark.sql.optimizer.insertRepartitionNum                            | none                                   | The partition number if `spark.sql.optimizer.insertRepartitionBeforeWrite.enabled` is enabled. If AQE is disabled, the default value is `spark.sql.shuffle.partitions`. If AQE is enabled, the default value is none that means depend on AQE.                                                                                                       | 1.2.0 |
+| spark.sql.optimizer.insertRepartitionNum                            | none                                   | The partition number if `spark.sql.optimizer.insertRepartitionBeforeWrite.enabled` is enabled. If AQE is disabled, the default value is `spark.sql.shuffle.partitions`. If AQE is enabled, the default value is none that means depend on AQE. This config is used for Spark 3.1.x only.                                                             | 1.2.0 |

Review Comment:
   nit: 3.1.x => 3.1



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@kyuubi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@kyuubi.apache.org
For additional commands, e-mail: notifications-help@kyuubi.apache.org