Posted to commits@spark.apache.org by we...@apache.org on 2022/07/15 15:44:13 UTC

[spark] branch master updated: [SPARK-39773][SQL][DOCS] Update document of JDBC options for `pushDownOffset`

This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 187d43da7a0 [SPARK-39773][SQL][DOCS] Update document of JDBC options for `pushDownOffset`
187d43da7a0 is described below

commit 187d43da7a0c92471dd2ad41f11dc4e2669f2d3e
Author: Jiaan Geng <be...@163.com>
AuthorDate: Fri Jul 15 23:43:57 2022 +0800

    [SPARK-39773][SQL][DOCS] Update document of JDBC options for `pushDownOffset`
    
    ### What changes were proposed in this pull request?
    Because the DS v2 pushdown framework added a new JDBC option, `pushDownOffset`, for OFFSET push-down, we should update sql-data-sources-jdbc.md accordingly.
    
    ### Why are the changes needed?
    Add doc for `pushDownOffset`.
    
    ### Does this PR introduce _any_ user-facing change?
    No. This only updates documentation for a new feature.
    
    ### How was this patch tested?
    N/A
    
    Closes #37186 from beliefer/SPARK-39773.
    
    Authored-by: Jiaan Geng <be...@163.com>
    Signed-off-by: Wenchen Fan <we...@databricks.com>
---
 docs/sql-data-sources-jdbc.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/docs/sql-data-sources-jdbc.md b/docs/sql-data-sources-jdbc.md
index dec7bc36116..833e0805ec6 100644
--- a/docs/sql-data-sources-jdbc.md
+++ b/docs/sql-data-sources-jdbc.md
@@ -281,7 +281,16 @@ logging into the data sources.
     <td><code>pushDownLimit</code></td>
     <td><code>false</code></td>
     <td>
-     The option to enable or disable LIMIT push-down into V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT , a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if sets to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, SPARK still applies LIMIT or LIMIT with SORT on the result from data source ev [...]
+     The option to enable or disable LIMIT push-down into the V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT, a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if set to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, Spark still applies LIMIT or LIMIT with SORT on the result from data source ev [...]
+    </td>
+    <td>read</td>
+  </tr>
+
+  <tr>
+    <td><code>pushDownOffset</code></td>
+    <td><code>false</code></td>
+    <td>
+     The option to enable or disable OFFSET push-down into the V2 JDBC data source. The default value is false, in which case Spark will not push down OFFSET to the JDBC data source. Otherwise, if set to true, Spark will try to push down OFFSET to the JDBC data source. If <code>pushDownOffset</code> is true and <code>numPartitions</code> is equal to 1, OFFSET will be pushed down to the JDBC data source. Otherwise, OFFSET will not be pushed down and Spark still applies OFFSET on the result f [...]
     </td>
     <td>read</td>
   </tr>
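For context, the documented option is passed like any other JDBC reader option. Below is a minimal sketch of how `pushDownOffset` might be supplied to a PySpark `DataFrameReader`; the JDBC URL, table name, and the surrounding query are placeholders, not part of this commit. Note the interaction the doc describes: OFFSET is only pushed down when `numPartitions` is 1.

```python
# Hypothetical option map for a V2 JDBC read; url/dbtable are placeholders.
jdbc_options = {
    "url": "jdbc:postgresql://localhost:5432/testdb",  # hypothetical endpoint
    "dbtable": "public.orders",                        # hypothetical table
    "pushDownOffset": "true",  # enable OFFSET push-down (default: false)
    "numPartitions": "1",      # per the doc, OFFSET is pushed down only when this is 1
}

# With a live SparkSession this would be wired up as:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
# df.offset(10).show()  # with the options above, the OFFSET may run in the database
```

Since all JDBC options are strings, boolean options such as `pushDownOffset` are passed as `"true"`/`"false"`.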


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org