Posted to commits@hudi.apache.org by "Ethan Guo (Jira)" <ji...@apache.org> on 2023/02/08 02:25:00 UTC
[jira] [Updated] (HUDI-5066) Support hoodie source metaclient cache for flink planner
[ https://issues.apache.org/jira/browse/HUDI-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ethan Guo updated HUDI-5066:
----------------------------
Fix Version/s: (was: 0.13.0)
> Support hoodie source metaclient cache for flink planner
> --------------------------------------------------------
>
> Key: HUDI-5066
> URL: https://issues.apache.org/jira/browse/HUDI-5066
> Project: Apache Hudi
> Issue Type: Improvement
> Components: flink-sql, performance
> Reporter: Shizhi Chen
> Assignee: Shizhi Chen
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 0.12.2
>
>
> h2. Change Logs
> The Flink Table Planner invokes `HoodieTableSource.copy()` when applying a `RelOptRule` such as
> `PushPartitionIntoTableSourceScanRule`. This results in multiple meta client instantiations, which
> slows down job startup.
> {code:java}
> // apply push down
> DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
> PartitionPushDownSpec partitionPushDownSpec =
>         new PartitionPushDownSpec(remainingPartitions);
> partitionPushDownSpec.apply(dynamicTableSource, SourceAbilityContext.from(scan));
> {code}
> Here we propose caching the meta client so that it can be reused across copies.
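The caching proposed above can be sketched as follows. This is a minimal, self-contained illustration, not Hudi's actual implementation: `CachedTableSource` and `MetaClient` are hypothetical stand-ins for `HoodieTableSource` and `HoodieTableMetaClient`. The idea is that `copy()` hands the already-built meta client to the new instance, so planner rules that copy the source do not trigger a rebuild.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CachedTableSource {
    // Counts meta client constructions so the savings are observable.
    static final AtomicInteger CREATIONS = new AtomicInteger();

    // Stand-in for HoodieTableMetaClient (hypothetical); assume the real
    // constructor is expensive because it scans table metadata.
    static class MetaClient {
        MetaClient() {
            CREATIONS.incrementAndGet();
        }
    }

    private MetaClient metaClient; // lazily created, shared by copies

    public synchronized MetaClient getMetaClient() {
        if (metaClient == null) {
            metaClient = new MetaClient(); // built at most once per source
        }
        return metaClient;
    }

    // copy() propagates the cached client instead of leaving it null,
    // so each planner-driven copy reuses the same instance.
    public CachedTableSource copy() {
        CachedTableSource other = new CachedTableSource();
        other.metaClient = this.metaClient;
        return other;
    }

    public static void main(String[] args) {
        CachedTableSource source = new CachedTableSource();
        source.getMetaClient();                 // first access creates it
        CachedTableSource copy1 = source.copy();
        copy1.getMetaClient();                  // reuses the cached client
        CachedTableSource copy2 = copy1.copy();
        copy2.getMetaClient();                  // still the same client
        System.out.println("meta client creations: " + CREATIONS.get());
        // prints: meta client creations: 1
    }
}
```

Without the propagation in `copy()`, each copy would lazily build its own client on first access, giving one construction per planner rule application instead of one per source.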
> h2. Impact
> Speeds up Flink SQL job startup by avoiding repeated, unnecessary meta client creation.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)