Posted to commits@hudi.apache.org by "Gatsby-Lee (via GitHub)" <gi...@apache.org> on 2023/02/06 00:06:05 UTC

[GitHub] [hudi] Gatsby-Lee commented on a diff in pull request #3549: [HUDI-2369] Blog on bulk_insert sort modes

Gatsby-Lee commented on code in PR #3549:
URL: https://github.com/apache/hudi/pull/3549#discussion_r1096835196


##########
website/blog/2021-08-27-bulk-insert-sort-modes.md:
##########
@@ -0,0 +1,93 @@
+---
+title: "Bulk Insert Sort Modes with Apache Hudi"
+excerpt: "Different sort modes available with BulkInsert"
+author: shivnarayan
+category: blog
+---
+
+Apache Hudi supports a `bulk_insert` operation, in addition to `insert` and `upsert`, for ingesting data into a Hudi table. 
+There are different sort modes one can employ while using `bulk_insert`. This blog covers the 
+sort modes available out of the box and how each compares with the others. 
+<!--truncate-->
+
+Apache Hudi supports `bulk_insert` to assist with the initial loading of data into a Hudi table. It is expected
+to be faster than the `insert` or `upsert` operations. Bulk insert differs from insert in two
+aspects: existing records are never looked up, and some writer-side optimizations, such as small-file
+handling, are skipped. 
+
+Bulk insert offers three different sort modes to cater to different user needs, based on the following principles.
+
+- Sorting gives us good compression and upsert performance if the data is laid out well. In particular, if your record keys 
+  have some ordering characteristic (timestamp, etc.), sorting helps trim down a lot of files 
+  during upserts. If data is sorted by frequently queried columns, queries can leverage parquet predicate pushdown 
+  to prune the data and ensure lower latency as well.
+  
+- Additionally, parquet writing is quite a memory-intensive operation. When writing large volumes of data into a table 
+  that is also partitioned into thousands of partitions, without sorting of any kind, the writer may have to keep thousands of 
+  parquet writers open simultaneously, incurring unsustainable memory pressure and eventually leading to crashes.
+  
+- It's also desirable to start with the smallest number of files possible when bulk importing data, so as to avoid 
+  metadata overhead later on for writers and queries. 
+
+The three sort modes supported out of the box are: `PARTITION_SORT`, `GLOBAL_SORT` and `NONE`.
+
+## Configurations 
+One can set the config [`hoodie.bulkinsert.sort.mode`](https://hudi.apache.org/docs/configurations.html#withBulkInsertSortMode) to one 
+of three values, namely `NONE`, `GLOBAL_SORT` and `PARTITION_SORT`. The default sort mode is `GLOBAL_SORT`.
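+
+For example, with the Spark datasource writer, the sort mode can be passed as a write option (an illustrative sketch; the DataFrame `df`, table name, and output path are placeholders):
+
+```scala
+df.write.format("hudi").
+  // use bulk_insert instead of the default upsert operation
+  option("hoodie.datasource.write.operation", "bulk_insert").
+  // choose the sort mode: NONE, GLOBAL_SORT or PARTITION_SORT
+  option("hoodie.bulkinsert.sort.mode", "GLOBAL_SORT").
+  option("hoodie.table.name", "my_table").
+  mode("append").
+  save("/tmp/hudi/my_table")
+```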
+
+## Different Sort Modes
+
+### Global Sort
+
+As the name suggests, Hudi sorts the records globally across the input partitions, which maximizes the number of files 

Review Comment:
   hudi sorts the records globally by which column? recordKey?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org