Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/09/21 07:25:31 UTC

[GitHub] [spark] gaborgsomogyi commented on a change in pull request #34042: [SPARK-36801][DOCS] ADD "All columns are automatically converted to be nullable for compatibility reasons." IN SPARK SQL JDBC DOCUMENT

gaborgsomogyi commented on a change in pull request #34042:
URL: https://github.com/apache/spark/pull/34042#discussion_r712770187



##########
File path: docs/sql-data-sources-jdbc.md
##########
@@ -29,7 +29,9 @@ as a DataFrame and they can easily be processed in Spark SQL or joined with othe
 The JDBC data source is also easier to use from Java or Python as it does not require the user to
 provide a ClassTag.
 (Note that this is different than the Spark SQL JDBC server, which allows other applications to
-run queries using Spark SQL).
+run queries using Spark SQL). 
+
+All columns are automatically converted to be nullable for compatibility reasons.

Review comment:
       +1 on what @HyukjinKwon suggested.
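The doc sentence being added ("All columns are automatically converted to be nullable for compatibility reasons.") can be illustrated with a small, hedged sketch. This is a toy model, not Spark's actual JDBC reader code: it just mimics the documented effect that a database schema's NOT NULL constraints are dropped and every column becomes nullable in the resulting DataFrame schema.

```python
# Toy sketch (hypothetical types, not Spark's API): model the documented
# behavior where Spark's JDBC data source forces every column to nullable.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Field:
    name: str
    dtype: str
    nullable: bool


def to_spark_jdbc_schema(db_fields):
    # Per the doc line under review: all columns are converted to be
    # nullable for compatibility reasons, even if the source database
    # declared them NOT NULL.
    return [replace(f, nullable=True) for f in db_fields]


db_schema = [
    Field("id", "BIGINT", nullable=False),   # NOT NULL in the database
    Field("name", "VARCHAR", nullable=True),
]
spark_schema = to_spark_jdbc_schema(db_schema)
print([f.nullable for f in spark_schema])  # → [True, True]
```

In real usage, the same effect is visible by inspecting `df.schema` after `spark.read.jdbc(...)`: fields report `nullable = true` regardless of the database's constraints.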




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org