Posted to commits@hudi.apache.org by "berniedurfee-renaissance (via GitHub)" <gi...@apache.org> on 2023/03/15 16:49:05 UTC

[GitHub] [hudi] berniedurfee-renaissance opened a new issue, #8196: [SUPPORT] Hudi not evolving Hive decimal to higher precision and scale

berniedurfee-renaissance opened a new issue, #8196:
URL: https://github.com/apache/hudi/issues/8196

   An older version of a table schema had a column `xyz` typed as `DECIMAL(10,3)`, while a newer data file has the same column typed as `DECIMAL(28,6)`.
   
   ```
   Caused by: org.apache.hudi.hive.HoodieHiveSyncException: Could not convert field Type from DECIMAL(10,3) to DECIMAL(28,6) for field xyz
   	at org.apache.hudi.hive.util.HiveSchemaUtil.getSchemaDifference(HiveSchemaUtil.java:103)
   	at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:249)
   	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:184)
   	at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:129)
   	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:115)
   	... 48 more
   ```
   
   I would expect this to work because values in a field of type `DECIMAL(10,3)` will fit into a field of type `DECIMAL(28,6)`. I think this change would be considered 'backward compatible', and Hudi should just update the schema in Hive to set the column type to `DECIMAL(28,6)`, which would fit the old values as well as the new values.
   
   Shouldn't the schema evolve if the new data type is 'bigger' than the previous data type?
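   For reference, the widening rule this report assumes can be sketched as follows (illustrative Python, not Hudi code — the function name is hypothetical): `DECIMAL(p1,s1)` values always fit into `DECIMAL(p2,s2)` when the new type keeps at least as many fractional digits (`s2 >= s1`) and at least as many integer digits (`p2 - s2 >= p1 - s1`).
   
   ```python
   def decimal_widens(old: tuple, new: tuple) -> bool:
       """Return True if every DECIMAL(old) value fits in DECIMAL(new).
   
       old/new are (precision, scale) pairs. Hypothetical helper for
       illustration; this is not a Hudi or Hive API.
       """
       (p1, s1), (p2, s2) = old, new
       # Need enough fractional digits and enough integer digits.
       return s2 >= s1 and (p2 - s2) >= (p1 - s1)
   
   print(decimal_widens((10, 3), (28, 6)))  # True: 22 integer digits >= 7, 6 >= 3
   print(decimal_widens((10, 3), (10, 6)))  # False: only 4 integer digits remain
   ```
   
   Under this rule, `DECIMAL(10,3)` to `DECIMAL(28,6)` is a safe widening.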
   
   I'm running this as a Glue 3 job.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [hudi] ad1happy2go commented on issue #8196: [SUPPORT] Hudi not evolving Hive decimal to higher precision and scale

Posted by "ad1happy2go (via GitHub)" <gi...@apache.org>.
ad1happy2go commented on issue #8196:
URL: https://github.com/apache/hudi/issues/8196#issuecomment-1531342880

   @berniedurfee-renaissance 
   As you said, ideally this should be supported, since we are only increasing the precision here. Can you post reproducible code for this? I tried a normal write and it worked.




[GitHub] [hudi] danny0405 commented on issue #8196: [SUPPORT] Hudi not evolving Hive decimal to higher precision and scale

Posted by "danny0405 (via GitHub)" <gi...@apache.org>.
danny0405 commented on issue #8196:
URL: https://github.com/apache/hudi/issues/8196#issuecomment-1475558435

   The error stack trace indicates that this is a constraint of Hive sync; I'm not sure whether Hive supports such a schema evolution use case yet. I took a quick look at the code and found that we cannot actually get the decimal precision from Hive through the Hive metastore client API, which is an open question.
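   One possible angle (a hedged sketch, not Hudi or Hive code): the metastore's `FieldSchema` carries the column type as a string such as `decimal(10,3)`, so the precision and scale could in principle be recovered by parsing that string. The helper below is hypothetical and only illustrates the parsing step.
   
   ```python
   import re
   
   # Matches Hive decimal type strings like "decimal(10,3)" (case-insensitive).
   _DECIMAL_RE = re.compile(r"^decimal\(\s*(\d+)\s*,\s*(\d+)\s*\)$", re.IGNORECASE)
   
   def parse_decimal(type_str: str):
       """Return (precision, scale) for a Hive decimal type string, else None."""
       m = _DECIMAL_RE.match(type_str.strip())
       return (int(m.group(1)), int(m.group(2))) if m else None
   
   print(parse_decimal("DECIMAL(10,3)"))  # (10, 3)
   print(parse_decimal("bigint"))         # None
   ```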

