Posted to commits@hudi.apache.org by "Alexey Kudinkin (Jira)" <ji...@apache.org> on 2022/10/06 23:14:00 UTC
[jira] [Updated] (HUDI-4992) Spark Row-writing Bulk Insert produces incorrect Bloom Filter metadata
[ https://issues.apache.org/jira/browse/HUDI-4992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Alexey Kudinkin updated HUDI-4992:
----------------------------------
Status: In Progress (was: Open)
> Spark Row-writing Bulk Insert produces incorrect Bloom Filter metadata
> ----------------------------------------------------------------------
>
> Key: HUDI-4992
> URL: https://issues.apache.org/jira/browse/HUDI-4992
> Project: Apache Hudi
> Issue Type: Bug
> Affects Versions: 0.12.0
> Reporter: Alexey Kudinkin
> Assignee: Alexey Kudinkin
> Priority: Blocker
> Fix For: 0.12.1
>
>
> Troubleshooting a duplicates issue with Abhishek Modi from Notion, we found that the min/max record key stats are currently being persisted incorrectly into the Parquet footer metadata, leading to duplicate records being produced in their pipeline after the initial bulk insert.
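>
> For anyone trying to verify whether a file is affected, below is a minimal sketch (not Hudi's actual code) that reads the min/max record key entries persisted into a Parquet file's footer. The footer key names "hoodie_min_record_key" and "hoodie_max_record_key" follow Hudi's HoodieAvroWriteSupport conventions, but treat the exact names, and this snippet as a whole, as assumptions to check against your Hudi version:
> {code:java}
> // Minimal sketch: inspect the min/max record key entries that Hudi
> // persists into a Parquet file's footer. Footer key names below are
> // assumptions based on Hudi's conventions; verify against your version.
> import java.util.Map;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.parquet.hadoop.ParquetFileReader;
> import org.apache.parquet.hadoop.util.HadoopInputFile;
>
> public class FooterKeyRangeCheck {
>   public static void main(String[] args) throws Exception {
>     Path file = new Path(args[0]); // a data file produced by bulk insert
>     Configuration conf = new Configuration();
>
>     try (ParquetFileReader reader =
>              ParquetFileReader.open(HadoopInputFile.fromPath(file, conf))) {
>       Map<String, String> footer =
>           reader.getFooter().getFileMetaData().getKeyValueMetaData();
>
>       // If this range fails to cover keys known to be in the file, the
>       // bloom index can wrongly prune the file during index lookup.
>       System.out.println("min record key: " + footer.get("hoodie_min_record_key"));
>       System.out.println("max record key: " + footer.get("hoodie_max_record_key"));
>     }
>   }
> }
> {code}
> If the persisted min/max range does not cover record keys that are actually present in the file, the bloom index can wrongly prune that file when tagging incoming records, so an upsert of an existing key is treated as a fresh insert, which is how the duplicates surface downstream.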
--
This message was sent by Atlassian Jira
(v8.20.10#820010)