Posted to commits@hudi.apache.org by "Prashant Wason (Jira)" <ji...@apache.org> on 2021/01/06 00:53:00 UTC

[jira] [Comment Edited] (HUDI-1509) Major performance degradation due to rewriting records with default values

    [ https://issues.apache.org/jira/browse/HUDI-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259323#comment-17259323 ] 

Prashant Wason edited comment on HUDI-1509 at 1/6/21, 12:52 AM:
----------------------------------------------------------------

I timed the various code fragments involved in the above commit. The timings are as follows:

  private static LinkedHashSet<Field> getCombinedFieldsToWrite(Schema oldSchema, Schema newSchema) {
    LinkedHashSet<Field> allFields = new LinkedHashSet<>(oldSchema.getFields());  // 75usec average for this line

    // 200usec average for the lines below
    for (Schema.Field f : newSchema.getFields()) {
      if (!allFields.contains(f) && !isMetadataField(f.name())) {
        allFields.add(f);
      }
    }
    return allFields;
  }

  private static GenericRecord rewrite(GenericRecord record, LinkedHashSet<Field> fieldsToWrite, Schema newSchema) {
    GenericRecord newRecord = new GenericData.Record(newSchema);
    for (Schema.Field f : fieldsToWrite) {
      if (record.get(f.name()) == null) {
        if (f.defaultVal() instanceof JsonProperties.Null) {
          newRecord.put(f.name(), null);
        } else {
          newRecord.put(f.name(), f.defaultVal());
        }
      } else {
        newRecord.put(f.name(), record.get(f.name()));
      }
    }
    // 3usec for the code above

    // 75usec for the code below
    if (!GenericData.get().validate(newSchema, newRecord)) {
      throw new SchemaCompatibilityException(
          "Unable to validate the rewritten record " + record + " against schema " + newSchema);
    }
    return newRecord;
  }
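
So the per-record cost of the new code path is dominated by recomputing the combined field set (~275usec) and by the per-record GenericData.validate() call (~75usec), while the actual copying of field values is only ~3usec. Below is a minimal sketch of one possible mitigation, not the fix adopted in Hudi (the class name and the isMetadataField stub are illustrative only): memoize the combined field set per (oldSchema, newSchema) pair, so it is computed once per schema pair instead of once per record.

  // Sketch only (not the actual Hudi change): memoize the combined field set per schema pair.
  // Avro's Schema implements equals()/hashCode() over the full schema, so a List<Schema> key works.
  import java.util.Arrays;
  import java.util.LinkedHashSet;
  import java.util.List;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  import org.apache.avro.Schema;
  import org.apache.avro.Schema.Field;

  public class CombinedFieldsCache {

    private static final Map<List<Schema>, LinkedHashSet<Field>> CACHE = new ConcurrentHashMap<>();

    // Returns the cached field set, computing it only the first time a schema pair is seen.
    // Callers should treat the returned set as read-only.
    public static LinkedHashSet<Field> getCombinedFieldsToWrite(Schema oldSchema, Schema newSchema) {
      return CACHE.computeIfAbsent(Arrays.asList(oldSchema, newSchema),
          key -> computeCombinedFieldsToWrite(oldSchema, newSchema));
    }

    // Same logic as the method timed above, now paid once per schema pair instead of once per record.
    private static LinkedHashSet<Field> computeCombinedFieldsToWrite(Schema oldSchema, Schema newSchema) {
      LinkedHashSet<Field> allFields = new LinkedHashSet<>(oldSchema.getFields());
      for (Schema.Field f : newSchema.getFields()) {
        if (!allFields.contains(f) && !isMetadataField(f.name())) {
          allFields.add(f);
        }
      }
      return allFields;
    }

    // Stand-in for the Hudi metadata-field check so the sketch compiles on its own;
    // Hudi's real check compares against its fixed list of _hoodie_* metadata columns.
    private static boolean isMetadataField(String fieldName) {
      return fieldName.startsWith("_hoodie_");
    }
  }

The other per-record hotspot, GenericData.validate(), could likewise be revisited (for example, made optional behind a write config), but that is a separate trade-off from the field-set caching shown here.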


> Major performance degradation due to rewriting records with default values
> --------------------------------------------------------------------------
>
>                 Key: HUDI-1509
>                 URL: https://issues.apache.org/jira/browse/HUDI-1509
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Prashant Wason
>            Priority: Blocker
>
> During in-house testing of the 0.5.x to 0.6.x release upgrade, I detected a performance degradation for writes into HUDI. I traced the issue to the changes in the following commit:
> [[HUDI-727]: Copy default values of fields if not present when rewriting incoming record with new schema|https://github.com/apache/hudi/commit/6d7ca2cf7e441ad19d32d7a25739e454f39ed253]
> I wrote a unit test to reduce the scope of testing as follows:
> 1. Take an existing parquet file from a production dataset (size = 690MB, record count = 960K)
> 2. Read all the records from this parquet file into a JavaRDD
> 3. Time the call to HoodieWriteClient.bulkInsertPrepped() (bulkInsertParallelism = 1)
> The above scenario is taken directly from our production pipelines, where each executor ingests about a million records, creating a single parquet file in a COW dataset. This is a bulk-insert-only dataset.
> The time to complete the bulk insert prepped *decreased from 680 seconds to 380 seconds* when I reverted the above commit.
> Schema details: This HUDI dataset uses a large schema with 51 fields in the record.
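
As a cross-check on these numbers: the per-record overhead measured in the comment above (~275usec to rebuild the combined field set plus ~75usec for validation), multiplied by the ~960K records in the test file, comes to roughly 260-340 seconds, which lines up with the observed ~300-second regression (680s vs 380s). A standalone micro-benchmark sketch along those lines, using plain Avro and a synthetic 51-field schema (no Hudi dependency, no JMH; absolute numbers will vary by machine), might look like this:

  // Micro-benchmark sketch that isolates the two per-record hotspots identified above
  // against a synthetic 51-field schema of optional strings.
  import java.util.LinkedHashSet;

  import org.apache.avro.Schema;
  import org.apache.avro.Schema.Field;
  import org.apache.avro.SchemaBuilder;
  import org.apache.avro.generic.GenericData;
  import org.apache.avro.generic.GenericRecord;

  public class RewriteHotspotBench {

    public static void main(String[] args) {
      // Synthetic record schema with 51 optional string fields, similar in width to the dataset above.
      SchemaBuilder.FieldAssembler<Schema> assembler = SchemaBuilder.record("wide").fields();
      for (int i = 0; i < 51; i++) {
        assembler = assembler.optionalString("f" + i);
      }
      Schema schema = assembler.endRecord();

      // A fully populated record to validate.
      GenericRecord record = new GenericData.Record(schema);
      for (int i = 0; i < 51; i++) {
        record.put("f" + i, "value" + i);
      }

      final int iterations = 100_000;
      long sink = 0;  // consume results so the loops are not optimized away

      // Hotspot 1: rebuilding the combined field set for every record.
      long t0 = System.nanoTime();
      for (int i = 0; i < iterations; i++) {
        LinkedHashSet<Field> allFields = new LinkedHashSet<>(schema.getFields());
        sink += allFields.size();
      }
      long fieldSetNanos = (System.nanoTime() - t0) / iterations;

      // Hotspot 2: validating every rewritten record against the schema.
      long t1 = System.nanoTime();
      for (int i = 0; i < iterations; i++) {
        sink += GenericData.get().validate(schema, record) ? 1 : 0;
      }
      long validateNanos = (System.nanoTime() - t1) / iterations;

      System.out.println("field set rebuild: ~" + (fieldSetNanos / 1000) + " usec/record");
      System.out.println("validate:          ~" + (validateNanos / 1000) + " usec/record");
      System.out.println("(sink=" + sink + ")");
    }
  }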


