Posted to commits@kyuubi.apache.org by ch...@apache.org on 2023/01/09 14:09:59 UTC
[kyuubi] branch master updated: [KYUUBI #4133] [Doc] Remove improper code sample for saving dataframe with JDBC Driver in PySpark
This is an automated email from the ASF dual-hosted git repository.
chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kyuubi.git
The following commit(s) were added to refs/heads/master by this push:
new 5ff98f28f [KYUUBI #4133] [Doc] Remove improper code sample for saving dataframe with JDBC Driver in PySpark
5ff98f28f is described below
commit 5ff98f28f3439aaaef213aca807f945eb27f3f9b
Author: liangbowen <li...@gf.com.cn>
AuthorDate: Mon Jan 9 22:09:49 2023 +0800
[KYUUBI #4133] [Doc] Remove improper code sample for saving dataframe with JDBC Driver in PySpark
### _Why are the changes needed?_
Remove the improper code sample for saving a DataFrame in PySpark, as the Hive-like JDBC driver does not support the `addBatch` method, which is required by the Spark JDBC data source in `JDBCUtils`.
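For context, a minimal sketch (not part of the patch) contrasting the JDBC read path, which remains in the docs, with the removed write path. The connection details (`kyuubi_server_ip:port`, user, password, table names) are the same placeholders used in the documentation, and the reference to Spark batching inserts via `addBatch` follows the rationale stated above.

```python
# Sketch only: option dicts mirroring the documented PySpark JDBC usage.
# Server address and credentials below are placeholders from the docs.

read_options = {
    "driver": "org.apache.hive.jdbc.HiveDriver",
    "url": "jdbc:hive2://kyuubi_server_ip:port",
    "user": "user",
    "password": "password",
    "query": "select * from testdb.src_table",
}

# Reading works, since Spark only issues SELECT statements here:
#   jdbcDF = spark.read.format("jdbc").options(**read_options).load()
#
# Writing fails: Spark's JDBC data source batches INSERTs through
# PreparedStatement.addBatch(), which the Hive JDBC driver does not
# implement. That is why the write sample below was removed:
#   jdbcDF.write.format("jdbc").options(**write_options).save()
```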
### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request
Closes #4133 from bowenliang123/pyspark-remove-doc.
Closes #4133
45c9aa7f [liangbowen] remove improper docs for saving a dataframe in pyspark, as the hive-like JDBC driver does not support `addBatch`, which is required by the Spark JDBC datasource in `JDBCUtils`
Authored-by: liangbowen <li...@gf.com.cn>
Signed-off-by: Cheng Pan <ch...@apache.org>
---
docs/client/python/pyspark.md | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/docs/client/python/pyspark.md b/docs/client/python/pyspark.md
index 2039b250c..cb459996d 100644
--- a/docs/client/python/pyspark.md
+++ b/docs/client/python/pyspark.md
@@ -92,18 +92,6 @@ jdbcDF = spark.read \
query="select * from testdb.src_table"
) \
.load()
-
-
-# Saving data to Kyuubi via HiveDriver as JDBC datasource
-jdbcDF.write \
- .format("jdbc") \
- .options(driver="org.apache.hive.jdbc.HiveDriver",
- url="jdbc:hive2://kyuubi_server_ip:port",
- user="user",
- password="password",
- dbtable="testdb.tgt_table"
- ) \
- .save()
```
### Using as JDBC Datasource table with SQL