Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/01/08 06:34:40 UTC

[GitHub] [hudi] bvaradar commented on issue #2414: [SUPPORT]

bvaradar commented on issue #2414:
URL: https://github.com/apache/hudi/issues/2414#issuecomment-756578166


   Answer provided : https://apache-hudi.slack.com/archives/C4D716NPQ/p1609922670490200?thread_ts=1609360627.455000&cid=C4D716NPQ
   
   Regarding your first question: bulk_insert simply honors the parallelism you provide and uses it to create separate files. Please see https://cwiki.apache.org/confluence/display/HUDI/FAQ#FAQ-Whatperformance/ingestlatencycanIexpectforHudiwriting. If you want file sizing to be taken care of for you, use the insert operation for the first commit.
   
   Regarding your second question: with sorting inside a partition (note: this is a Spark partition), records within one Spark partition that fall under different Hive partitions end up as separate files. With global sort mode, by contrast, each Spark partition is likely to see records from only one Hive partition, so far fewer files are written. The Spark stage name might be misleading; the time you are seeing could be the index lookup time. Try global sort mode with insert first, then run an upsert and observe the upsert time.
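   To make the file-count effect concrete, here is a small toy model (plain Python, not Hudi code; the function and names are illustrative). It assumes each Spark partition writes one file per Hive partition it contains, so the number of files is the number of distinct (Spark partition, Hive partition) pairs. Globally sorting the records by partition path before splitting them into Spark partitions clusters identical Hive-partition keys together, which is the behavior the comment above describes:
   
   ```python
   import random
   
   def file_count(records, num_spark_partitions, global_sort):
       """records: list of Hive-partition keys, one per record.
       Returns the number of files written under this toy model."""
       if global_sort:
           # Global sort clusters identical Hive-partition keys together
           # before the data is split into Spark partitions.
           records = sorted(records)
       # Split into contiguous chunks, one chunk per Spark partition.
       chunk = -(-len(records) // num_spark_partitions)  # ceiling division
       pairs = set()
       for spark_part in range(num_spark_partitions):
           for key in records[spark_part * chunk:(spark_part + 1) * chunk]:
               # One file per distinct (spark partition, hive partition) pair.
               pairs.add((spark_part, key))
       return len(pairs)
   
   # 1000 records spread over 10 daily Hive partitions, parallelism 20.
   random.seed(0)
   recs = [f"2021/01/{d:02d}" for d in random.choices(range(1, 11), k=1000)]
   
   # Without sorting, nearly every Spark partition contains records from
   # every Hive partition, so the file count approaches 20 * 10.
   print(file_count(recs, 20, global_sort=False))
   # With global sort, each Spark partition covers at most a couple of
   # Hive partitions, so the file count stays close to the parallelism.
   print(file_count(recs, 20, global_sort=True))
   ```
   
   In actual Hudi usage, the knobs involved here are (as of recent releases; check the configuration reference for your version) the bulk_insert shuffle parallelism and `hoodie.bulkinsert.sort.mode` (e.g. GLOBAL_SORT vs NONE).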


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org