Posted to commits@hudi.apache.org by "aidendong (Jira)" <ji...@apache.org> on 2022/06/22 11:34:00 UTC
[jira] [Updated] (HUDI-4300) Add sync clean and archive for compaction service in Spark Env
[ https://issues.apache.org/jira/browse/HUDI-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
aidendong updated HUDI-4300:
----------------------------
Status: In Progress (was: Open)
> Add sync clean and archive for compaction service in Spark Env
> --------------------------------------------------------------
>
> Key: HUDI-4300
> URL: https://issues.apache.org/jira/browse/HUDI-4300
> Project: Apache Hudi
> Issue Type: Improvement
> Components: compaction, spark
> Reporter: aidendong
> Priority: Minor
> Fix For: 0.11.1
>
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> Currently, clean and archive are performed asynchronously as part of the compaction service.
>
> {code:java}
> // SparkRDDWriteClient.java
> @Override
> protected HoodieWriteMetadata<JavaRDD<WriteStatus>> compact(String compactionInstantTime, boolean shouldComplete) {
>   HoodieSparkTable<T> table = HoodieSparkTable.create(config, context);
>   preWrite(compactionInstantTime, WriteOperationType.COMPACT, table.getMetaClient());
>   // ...
> } {code}
> The asynchronous archive will acquire the distributed lock when {color:#FF0000}hoodie.write.concurrency.mode=OPTIMISTIC_CONCURRENCY_CONTROL{color}.
> *Archive may hold the lock for a long time*
> For example, in a Spark environment running an offline scheduleAndCompaction job with {color:#172b4d}hoodie.write.concurrency.mode=OPTIMISTIC_CONCURRENCY_CONTROL{color}:
> {color:#172b4d}all tasks may be busy with compaction work, so the archive function does not have enough resources to make progress once it acquires the lock.{color}
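> A minimal, self-contained Java sketch of the starvation scenario described above (this is not Hudi code; the class and step names are hypothetical). With a saturated worker pool, an asynchronous archive task queues behind compaction work, while a synchronous archive runs immediately on the caller thread:
>
> ```java
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.TimeUnit;
>
> public class SyncVsAsyncArchive {
>     // Records the order in which work items actually execute.
>     static final List<String> executionOrder = new ArrayList<>();
>
>     static synchronized void record(String step) {
>         executionOrder.add(step);
>     }
>
>     public static void main(String[] args) throws Exception {
>         // Async mode: archive shares a saturated pool with compaction,
>         // so it (and any lock it would hold while running) waits behind
>         // the queued compaction tasks.
>         ExecutorService pool = Executors.newSingleThreadExecutor();
>         pool.submit(() -> record("compaction-1"));
>         pool.submit(() -> record("compaction-2"));
>         pool.submit(() -> record("archive-async"));
>         pool.shutdown();
>         pool.awaitTermination(5, TimeUnit.SECONDS);
>
>         // Sync mode: archive runs on the caller thread right away,
>         // independent of how busy the worker pool is.
>         record("archive-sync");
>
>         System.out.println(executionOrder);
>     }
> }
> ```
>
> A single-thread pool makes the ordering deterministic here; in a real cluster the same effect shows up as the archive waiting (or holding the distributed lock) until compaction tasks release resources.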
> I think we can provide synchronous clean and archive as an option for users.
>
>
--
This message was sent by Atlassian Jira
(v8.20.7#820007)