Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/10/04 07:54:40 UTC

[GitHub] [spark] maropu commented on a change in pull request #29837: [SPARK-32463][SQL][DOCS] SQL data type compatibility

maropu commented on a change in pull request #29837:
URL: https://github.com/apache/spark/pull/29837#discussion_r499217171



##########
File path: docs/sql-ref-datatypes.md
##########
@@ -314,3 +314,49 @@ SELECT COUNT(*), c2 FROM test GROUP BY c2;
 |        3| Infinity|
 +---------+---------+
 ```
+
+#### Data type compatibility
+
+The following is the hierarchy of data type compatibility and the possible implicit conversions that can be made. In an operation involving different and compatible data types, these will be promoted to the lowest common top type to perform the operation.

Review comment:
       The current description looks ambiguous, and many topics get mixed up, I think. What are the topics you would like to describe in this section? If you want to describe type conversion in the default mode (ansi=false), I think we need to cover three categories: [explicit casting, type coercion, and store assignment casting](https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html#type-conversion). Either way, we need a clearer structure for describing type behaviours so the user documentation is easy to read.
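
       For reference, the three categories could be illustrated with minimal SQL snippets like the ones below (a sketch only, assuming the default mode `spark.sql.ansi.enabled=false`; the table name `t` is hypothetical):

       ```sql
       -- Explicit casting: the user requests a conversion directly.
       -- In the default (non-ANSI) mode, an invalid cast returns NULL
       -- rather than raising a runtime error.
       SELECT CAST('abc' AS INT);

       -- Type coercion: Spark implicitly promotes operands of mixed
       -- but compatible types so the expression can be evaluated.
       SELECT 1 + 2.5;

       -- Store assignment casting: on INSERT, values are cast to the
       -- target table's declared column types.
       CREATE TABLE t(c INT) USING parquet;
       INSERT INTO t VALUES (1.0);
       ```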




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org