Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/09/20 01:33:58 UTC

[GitHub] [spark] HyukjinKwon commented on a change in pull request #34042: [SPARK-36801][DOCS] ADD "All columns are automatically converted to be nullable for compatibility reasons." IN SPARK SQL JDBC DOCUMENT

HyukjinKwon commented on a change in pull request #34042:
URL: https://github.com/apache/spark/pull/34042#discussion_r711838027



##########
File path: docs/sql-data-sources-jdbc.md
##########
@@ -29,7 +29,9 @@ as a DataFrame and they can easily be processed in Spark SQL or joined with othe
 The JDBC data source is also easier to use from Java or Python as it does not require the user to
 provide a ClassTag.
 (Note that this is different than the Spark SQL JDBC server, which allows other applications to
-run queries using Spark SQL).
+run queries using Spark SQL). 
+
+All columns are automatically converted to be nullable for compatibility reasons.

Review comment:
       I actually think this is something we should fix, but we couldn't because it would be too much of a breaking change. This is not only the case for JDBC but also for other file-based sources.
   
   We would be better off adding an option or configuration to set the nullability correctly, and making it disabled by default.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org