Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2007/09/12 04:21:32 UTC
[jira] Commented: (HADOOP-89) files are not visible until they are closed
[ https://issues.apache.org/jira/browse/HADOOP-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12526659 ]
Konstantin Shvachko commented on HADOOP-89:
-------------------------------------------
This patch does not apply anymore.
With HADOOP-1700 on the horizon does it make sense to introduce tail -f here?
Tail -f works only while the client writing to the file is alive. If that client dies before closing the file, all
the information reported by tail -f no longer exists in the system. So in a way tail -f reports illusory data that is
not guaranteed to be present in the future.
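To make the semantics concrete, here is a minimal sketch of what a tail -f style reader does: poll the file and report any bytes appended since the last poll. This is a generic illustration using plain java.io (not the HDFS client API); the class and method names are hypothetical. The point of the comment above is that nothing in this loop guarantees the reported bytes will ever be durably committed if the writer dies before close.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class TailF {
    // Poll a file for newly appended bytes, like `tail -f`.
    // Appends whatever was written since `offset` to `out` and returns
    // the new offset, which the caller passes to the next poll.
    // Note: the bytes reported here may never be durably committed --
    // if the writer dies before closing the file, a filesystem like
    // DFS may discard them, so the reader has seen "illusory" data.
    static long readNew(String path, long offset, StringBuilder out)
            throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            long len = f.length();
            if (len > offset) {
                f.seek(offset);
                byte[] buf = new byte[(int) (len - offset)];
                f.readFully(buf);
                out.append(new String(buf, StandardCharsets.UTF_8));
                return len;
            }
            return offset; // nothing new appended yet
        }
    }
}
```

A real tail -f would call readNew in a sleep loop; the sketch only shows the per-poll step so the visibility question stays in focus.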
This patch contains a lot of important internal changes, such as removing pendingCreates.
But introducing tail -f at this point may cause confusion among potential users.
> files are not visible until they are closed
> -------------------------------------------
>
> Key: HADOOP-89
> URL: https://issues.apache.org/jira/browse/HADOOP-89
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.1.0
> Reporter: Yoram Arnon
> Assignee: dhruba borthakur
> Priority: Critical
> Attachments: tail2.patch
>
>
> the current behaviour, whereby a file is not visible until it is closed, has several flaws, including:
> 1. no practical way to know if a file/job is progressing
> 2. no way to implement files that never close, such as log files
> 3. failure to close a file results in loss of the file
> The part of the file that's written should be visible.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.