Posted to commits@hudi.apache.org by "Udit Mehrotra (Jira)" <ji...@apache.org> on 2021/08/12 22:27:00 UTC

[jira] [Updated] (HUDI-1138) Re-implement marker files via timeline server

     [ https://issues.apache.org/jira/browse/HUDI-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Udit Mehrotra updated HUDI-1138:
--------------------------------
    Status: In Progress  (was: Open)

> Re-implement marker files via timeline server
> ---------------------------------------------
>
>                 Key: HUDI-1138
>                 URL: https://issues.apache.org/jira/browse/HUDI-1138
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: Writer Core
>    Affects Versions: 0.9.0
>            Reporter: Vinoth Chandar
>            Assignee: Ethan Guo
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 0.9.0
>
>
> Even if one argues that RFC-15/consolidated metadata removes the need to delete partial files written due to Spark task failures/stage retries, those partial files would still be left inside the table (and users will pay for them every month), so we still need the marker mechanism to delete them.
> Here we explore whether we can improve the current marker file mechanism, which creates one marker file per data file written, by
> delegating the createMarker() call to the driver/timeline server and having it write marker metadata into a single file handle that is flushed for durability guarantees.
>  
> P.S.: I was tempted to think the Spark listener mechanism could help us deal with failed tasks, but it has no guarantees; the writer job could die without deleting a partial file. In other words, it can improve things, but it cannot provide guarantees.
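The single-file-handle idea in the description could be sketched roughly as below. This is a hypothetical illustration, not Hudi's actual implementation: class and method names (BatchedMarkerWriter, createMarker, readMarkers) are made up, and in the real design the appends would be routed through the timeline server rather than a local class.

```java
// Hypothetical sketch: instead of one marker file per data file written,
// the driver/timeline server appends each marker entry to a single file
// handle and flushes it, so rollback can still find partial files.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class BatchedMarkerWriter {
    private final Path markerFile;

    public BatchedMarkerWriter(Path markerDir) throws IOException {
        Files.createDirectories(markerDir);
        // One file holds all marker entries for the write, instead of
        // one marker file per data file.
        this.markerFile = markerDir.resolve("MARKERS");
    }

    // Executors would send this request to the driver/timeline server,
    // which serializes the appends onto the single handle.
    public synchronized void createMarker(String dataFileName) throws IOException {
        Files.write(markerFile,
                (dataFileName + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE,
                StandardOpenOption.APPEND,
                StandardOpenOption.SYNC); // flushed for durability guarantees
    }

    // On rollback/finalize, read the markers back to identify the
    // (possibly partial) data files that were written.
    public synchronized List<String> readMarkers() throws IOException {
        return Files.readAllLines(markerFile, StandardCharsets.UTF_8);
    }
}
```

Compared to one marker file per data file, this turns N small-file creations into N appends on a single file, which is the improvement the description is after.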



--
This message was sent by Atlassian Jira
(v8.3.4#803005)