Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2023/01/11 09:35:26 UTC

[GitHub] [hudi] danny0405 commented on pull request #7295: [HUDI-5275] Fix reading data using the HoodieHiveCatalog will cause the Spark write to fail

danny0405 commented on PR #7295:
URL: https://github.com/apache/hudi/pull/7295#issuecomment-1378472232

   Yeah, I have reviewed and applied a patch: 
   [5275.zip](https://github.com/apache/hudi/files/10390683/5275.zip)
   
   The idea of the fix is that we had better not add extra options like `FlinkOptions.HIVE_STYLE_PARTITIONING` when reading the table; a more proper way to fix the issue is to merge the table config options for the reader/writer paths.
   
   Can you apply the patch and add a test case in `TestHoodieTableFactory` covering two cases (to verify that the write config options always take higher priority):
   
   1. the table source merges the built-in table config options that are not defined in the write config.
   2. the table source cannot override the existing write config if the option value changes.
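
   The merge rule behind both cases can be sketched with plain maps (this is an illustrative sketch, not the actual patch; the option keys and class name are assumptions for the example):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical sketch of the merge rule described above: table config
   // options are folded into the effective options, but any option the
   // write config already defines keeps its write-config value.
   public class OptionMergeSketch {
       static Map<String, String> merge(Map<String, String> tableConfig,
                                        Map<String, String> writeConfig) {
           Map<String, String> merged = new HashMap<>(tableConfig);
           merged.putAll(writeConfig); // write config always wins on conflicts
           return merged;
       }

       public static void main(String[] args) {
           Map<String, String> tableConfig = new HashMap<>();
           tableConfig.put("hive_style_partitioning", "false");
           tableConfig.put("table.name", "t1");

           Map<String, String> writeConfig = new HashMap<>();
           writeConfig.put("hive_style_partitioning", "true");

           Map<String, String> merged = merge(tableConfig, writeConfig);
           // Case 1: table config fills in options absent from write config.
           System.out.println(merged.get("table.name"));
           // Case 2: write config keeps priority when both define an option.
           System.out.println(merged.get("hive_style_partitioning"));
       }
   }
   ```

   The two `println` calls mirror the two test cases above: the first shows a table-config option surviving the merge, the second shows the write config overriding a conflicting table-config value.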
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org