Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/05/18 16:09:06 UTC

[GitHub] [hudi] fengjian428 commented on issue #5074: [SUPPORT] Flink use different record_key format from spark

fengjian428 commented on issue #5074:
URL: https://github.com/apache/hudi/issues/5074#issuecomment-1130213190

   ![image](https://user-images.githubusercontent.com/4403474/169089471-d62d64dd-6e4d-41c0-b19e-8793714a799e.png)
   
   Actually, we create the table with Spark SQL, ingest data with Flink SQL, and then delete data with Spark SQL.
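   For reference, a hypothetical Java sketch of the Spark SQL side of that workflow (the table name, schema, and session configuration below are illustrative, not taken from this issue; the Flink SQL ingestion runs as a separate job against the same table):

   ```java
   import org.apache.spark.sql.SparkSession;

   public class SparkSqlSideSketch {
       public static void main(String[] args) {
           // Hypothetical session setup; the Hudi Spark SQL extension and Kryo
           // serializer are the usual prerequisites for Spark SQL DML on Hudi tables.
           SparkSession spark = SparkSession.builder()
                   .appName("hudi-delete-sketch")
                   .master("local[*]")
                   .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                   .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
                   .getOrCreate();

           // 1. Table created through Spark SQL with a single primary key column.
           spark.sql("CREATE TABLE t (id INT, name STRING, ts BIGINT) USING hudi "
                   + "TBLPROPERTIES (primaryKey = 'id', preCombineField = 'ts')");

           // 2. Data is ingested by a separate Flink SQL job writing into the same table.

           // 3. Delete issued back through Spark SQL; this is where the key-format
           //    mismatch described below shows up.
           spark.sql("DELETE FROM t WHERE id = 1");

           spark.stop();
       }
   }
   ```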
   
   I went through the code and found that Flink checks the number of record key fields: if it equals 1, it uses the simple key generator, but Spark SQL always uses ComplexKeyGenerator whether the length is 1 or not (see the sketch after the screenshot below).
   ![image](https://user-images.githubusercontent.com/4403474/169090145-0d5d796c-330b-48ec-a1f7-d1d43a9c2565.png)
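   To make the consequence concrete, here is a minimal standalone sketch (not actual Hudi code; class and method names are made up) of the two record-key formats that result from that difference: with a single key column, the Flink path ends up with the bare value, while Spark SQL's ComplexKeyGenerator produces `field:value`, so a Spark SQL delete on a Flink-written table looks up keys that do not exist.

   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.Map;
   import java.util.stream.Collectors;

   // Illustrative mimic of the two key formats; not Hudi's implementation.
   public class RecordKeyFormatSketch {

       // Flink-style: with a single key field, keep just the raw value, e.g. "1".
       static String simpleKey(Map<String, String> row, List<String> keyFields) {
           return row.get(keyFields.get(0));
       }

       // Spark-SQL-style complex format: "field:value" pairs joined by commas, e.g. "id:1".
       static String complexKey(Map<String, String> row, List<String> keyFields) {
           return keyFields.stream()
                   .map(f -> f + ":" + row.get(f))
                   .collect(Collectors.joining(","));
       }

       public static void main(String[] args) {
           Map<String, String> row = Map.of("id", "1", "name", "foo");
           List<String> keyFields = Arrays.asList("id");   // single primary key column

           // Record written by the Flink path -> record key "1"
           System.out.println("flink-style key : " + simpleKey(row, keyFields));

           // DELETE issued through Spark SQL looks up "id:1" instead,
           // so it never matches the Flink-written record above.
           System.out.println("spark-style key : " + complexKey(row, keyFields));
       }
   }
   ```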
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org