Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/03/01 09:38:19 UTC

[GitHub] [spark] cloud-fan commented on pull request #30421: [SPARK-33474][SQL] Support TypeConstructed partition spec value

cloud-fan commented on pull request #30421:
URL: https://github.com/apache/spark/pull/30421#issuecomment-787808014


   I don't think we want different partition-related behaviors for DS v2. If you see an inconsistency, it's probably a bug.
   
   > can't use interval in a table schema?
   
   Do we allow it anywhere? I don't think Spark allows writing out interval values.
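   
   Roughly, here's the kind of thing I'd expect to fail, sketched for spark-shell (table name `iv_tbl` and output path `/tmp/iv_out` are just placeholders; exact error messages vary by version, so treat this as an illustration rather than verified output):
   
   ```scala
   // Assumes a spark-shell session where `spark` is predefined.
   // `iv_tbl` and /tmp/iv_out are placeholder names for illustration.

   // Declaring an interval column in a table schema should be rejected
   // at analysis time.
   spark.sql("CREATE TABLE iv_tbl (i INTERVAL) USING parquet")

   // Writing interval values out to a data source should fail as well;
   // the exact exception differs across versions.
   spark.range(1)
     .selectExpr("interval 1 day AS i")
     .write.parquet("/tmp/iv_out")
   ```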
   
   > if the partition column is a date and we pass a partition value as Spark SQL, DS v2 will treat it as null, but others will throw an exception that the Spark SQL value can't be converted to TimestampType.
   
   Can you create a JIRA for this bug?
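   
   To make that concrete, a quick sketch of the scenario I understand you're describing (hypothetical table `t`; I haven't verified the exact messages, so this is only illustrative):
   
   ```scala
   // Assumes a spark-shell session where `spark` is predefined.
   // Hypothetical table `t` with a DATE partition column.
   spark.sql("CREATE TABLE t (id INT, p DATE) USING parquet PARTITIONED BY (p)")

   // A typed-constructor value in the partition spec, which is what this PR
   // is about. Per the report, the DS v2 path resolves `p` to null while
   // other paths raise a conversion error; the two should be consistent.
   spark.sql("ALTER TABLE t ADD PARTITION (p = date'2021-01-01')")
   spark.sql("SHOW PARTITIONS t").show(false)
   ```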

