Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/06/02 04:53:11 UTC

[GitHub] [spark] xuanyuanking commented on pull request #27627: [SPARK-28067][SQL] Fix incorrect results for decimal aggregate sum by returning null on decimal overflow

xuanyuanking commented on pull request #27627:
URL: https://github.com/apache/spark/pull/27627#issuecomment-637271156


   > How about we merge it to master only first, and wait for the schema incompatibility check to be done?
   
   Agree.
   
   > Just to avoid redundant efforts, have you looked into #24173? If your approach is different from #24173, what approach will you be proposing?
   
   @HeartSaVioR Thanks for the reminder. I also looked into #24173 before. My approach checks the underlying unsafe row format instead of adding a new schema file to the checkpoint. This is driven by the need to detect format changes during migration, where the user has no opportunity to create a schema file.
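
   The idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Spark implementation: `min_row_bytes` and `maybe_compatible` are made-up names, and the check shown is only the simplest possible structural test (row size against the fixed-width region implied by UnsafeRow's documented layout of a padded null bitset plus one 8-byte slot per field).

   ```python
   # Hypothetical sketch: reject a stored state-store row that cannot have
   # been written under the expected schema, by inspecting the row's binary
   # size against the UnsafeRow layout the schema implies -- rather than
   # consulting a separate schema file in the checkpoint.

   def min_row_bytes(num_fields: int) -> int:
       """Minimum size of an UnsafeRow with num_fields fields: a null
       bitset padded to 8-byte words (one word per 64 fields), plus an
       8-byte fixed-width slot per field."""
       bitset_words = (num_fields + 63) // 64
       return bitset_words * 8 + num_fields * 8

   def maybe_compatible(row_bytes: bytes, num_fields: int) -> bool:
       """A stored row shorter than the fixed region required by the
       expected schema cannot have been written with that schema."""
       return len(row_bytes) >= min_row_bytes(num_fields)
   ```

   For example, a two-field row occupies at least 8 + 16 = 24 bytes; reading those 24 bytes back under a three-field schema (minimum 32 bytes) would fail the check. A real check would of course need to look deeper than the total size.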
   
   But I think our approaches can complement each other. Let's discuss it in my newly created PR; I'll submit one later today.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org