Posted to dev@hudi.apache.org by leesf <le...@gmail.com> on 2022/07/17 15:28:00 UTC

[ANNOUNCE] Hudi Community Update(2022-07-04 ~ 2022-07-17)

Dear community,

Happy to share the Hudi community bi-weekly update for 2022-07-04 ~
2022-07-17, covering new features and bug fixes.


=======================================
Features

[Flink] Column stats data skipping for Flink [1]
[Spark] Add call procedure for UpgradeOrDowngradeCommand (see the
spark-shell sketch after the reference list) [2]
[Spark] Add call procedure for MetadataCommand [3]
[Spark] Add a new HoodieDropPartitionsTool to let users drop table
partitions through a standalone job [4]
[Spark] Support show_fs_path_detail command on Call Procedure Command [5]
[Flink] Support flink 1.15.x [6]
[Flink] Flink offline compaction supports compacting multiple compaction
plans at once [7]
[Spark] Support copyToTable on call [8]
[Spark] Add call procedure for RepairsCommand [9]
[Flink] Bump Flink versions to 1.14.5 and 1.15.1 [10]
[Flink] Changed the Flink inline clustering and compaction plan
distribution strategy from rebalance to hash, to avoid multiple threads
potentially accessing the same file [11]
[Spark] Add call procedure for CleanCommand [12]


[1] https://issues.apache.org/jira/browse/HUDI-4353
[2] https://issues.apache.org/jira/browse/HUDI-3505
[3] https://issues.apache.org/jira/browse/HUDI-3511
[4] https://issues.apache.org/jira/browse/HUDI-3116
[5] https://issues.apache.org/jira/browse/HUDI-4359
[6] https://issues.apache.org/jira/browse/HUDI-4357
[7] https://issues.apache.org/jira/browse/HUDI-4152
[8] https://issues.apache.org/jira/browse/HUDI-4367
[9] https://issues.apache.org/jira/browse/HUDI-3500
[10] https://issues.apache.org/jira/browse/HUDI-4379
[11] https://issues.apache.org/jira/browse/HUDI-4397
[12] https://issues.apache.org/jira/browse/HUDI-3503
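
For the Spark call procedures above ([2], [3], [5], [9], [12]), here is a
minimal spark-shell sketch of how such procedures are invoked. The
procedure and parameter names (upgrade_table, show_fs_path_detail,
run_clean) follow the Hudi documentation, but treat the exact names,
arguments, and the table/path values as assumptions to verify against
your Hudi version:

import org.apache.spark.sql.SparkSession

// The Hudi session extension registers the CALL statement syntax.
val spark = SparkSession.builder()
  .appName("hudi-call-procedures")
  .config("spark.sql.extensions",
    "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
  .getOrCreate()

// Upgrade a table's layout version (UpgradeOrDowngradeCommand, [2]).
spark.sql("CALL upgrade_table(table => 'hudi_table', to_version => 'FIVE')").show()

// Inspect file system usage under a path (show_fs_path_detail, [5]).
spark.sql("CALL show_fs_path_detail(path => '/tmp/hudi_table')").show()

// Trigger a clean (CleanCommand, [12]).
spark.sql("CALL run_clean(table => 'hudi_table')").show()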

=======================================
Bugs

[Spark] Fix the exception thrown by Merge Into when an update expression
such as "col=s.col+2" is applied to the precombine field [1]
[Spark] Fix Spark 3.2 repartition error [2]
[Spark] Reconcile schema: inject null values for missing fields and add
new fields [3]
[Spark] Allow users to set hoodie.datasource.read.paths to read only the
necessary files (see the sketch after the reference list) [4]



[1] https://issues.apache.org/jira/browse/HUDI-4219
[2] https://issues.apache.org/jira/browse/HUDI-4309
[3] https://issues.apache.org/jira/browse/HUDI-4267
[4] https://issues.apache.org/jira/browse/HUDI-4170
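
For [4], a minimal spark-shell sketch of reading a fixed set of paths via
hoodie.datasource.read.paths; the bucket and partition paths below are
placeholders, and whether .load() also needs an explicit base path may
vary by Hudi version:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hudi-read-paths").getOrCreate()

// Read only the listed partition paths instead of scanning the whole
// table; the option value is a comma-separated list of paths.
val df = spark.read
  .format("hudi")
  .option("hoodie.datasource.read.paths",
    "s3://bucket/hudi_table/2022/07/04,s3://bucket/hudi_table/2022/07/05")
  .load()

df.show()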



Best,
Leesf