Posted to commits@hudi.apache.org by "boneanxs (via GitHub)" <gi...@apache.org> on 2023/02/17 08:19:10 UTC

[GitHub] [hudi] boneanxs commented on a diff in pull request #7978: [HUDI-5812] Optimize the data size check in HoodieBaseParquetWriter

boneanxs commented on code in PR #7978:
URL: https://github.com/apache/hudi/pull/7978#discussion_r1109424306


##########
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieBaseParquetWriter.java:
##########
@@ -65,14 +63,18 @@ public HoodieBaseParquetWriter(Path file,
   }
 
   public boolean canWrite() {
-    // TODO we can actually do evaluation more accurately:
-    //      if we cache last data size check, since we account for how many records
-    //      were written we can accurately project avg record size, and therefore
-    //      estimate how many more records we can write before cut off
-    if (lastCachedDataSize == -1 || getWrittenRecordCount() % WRITTEN_RECORDS_THRESHOLD_FOR_FILE_SIZE_CHECK == 0) {
-      lastCachedDataSize = getDataSize();
+    if (getWrittenRecordCount() >= recordNumForNextCheck) {
+      long dataSize = getDataSize();
+      long avgRecordSize = dataSize / getWrittenRecordCount();
+      // Follow the parquet block size check logic here, return false
+      // if it is within ~2 records of the limit
+      if (dataSize > (maxFileSize - avgRecordSize * 2)) {
+        return false;

Review Comment:
   Follow this:



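For readers following along, below is a minimal, self-contained sketch of the adaptive size check described by the hunk above. Only the average-record-size projection and the "within ~2 records of the limit" cutoff come from the quoted diff; the initial check interval, the halfway-to-limit rescheduling of recordNumForNextCheck, and all class and field names here are assumptions for illustration, not the PR's actual implementation.

// Standalone sketch (hypothetical names), not the code in HoodieBaseParquetWriter.
public class AdaptiveSizeCheckSketch {
  private final long maxFileSize;
  private long recordNumForNextCheck = 1000; // assumed initial check interval
  private long writtenRecordCount = 0;
  private long dataSize = 0;

  public AdaptiveSizeCheckSketch(long maxFileSize) {
    this.maxFileSize = maxFileSize;
  }

  // Stand-in for ParquetWriter#getDataSize(); here we simply track bytes ourselves.
  private long getDataSize() {
    return dataSize;
  }

  public boolean canWrite() {
    if (writtenRecordCount >= recordNumForNextCheck) {
      long currentSize = getDataSize();
      long avgRecordSize = Math.max(1, currentSize / writtenRecordCount);
      // Mirror the parquet block size check: stop once we are within ~2 records of the limit.
      if (currentSize > maxFileSize - avgRecordSize * 2) {
        return false;
      }
      // Assumption: schedule the next check roughly halfway to the projected limit,
      // so checks grow less frequent while the file is still far from full.
      long remainingRecords = (maxFileSize - currentSize) / avgRecordSize;
      recordNumForNextCheck = writtenRecordCount + Math.max(1, remainingRecords / 2);
    }
    return true;
  }

  public void write(long recordSizeBytes) {
    writtenRecordCount++;
    dataSize += recordSizeBytes;
  }
}

The intent of rescheduling in this way is to avoid calling getDataSize() on every record while still checking often enough near the size limit; the exact scheduling policy used by the PR may differ.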