Posted to hdfs-dev@hadoop.apache.org by "Tsz Wo Nicholas Sze (JIRA)" <ji...@apache.org> on 2014/07/21 21:36:41 UTC

[jira] [Resolved] (HDFS-196) File length not reported correctly after application crash

     [ https://issues.apache.org/jira/browse/HDFS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo Nicholas Sze resolved HDFS-196.
--------------------------------------

    Resolution: Not a Problem

sync() does not update the length in the NN.  So getFileStatus() will not return the correct length immediately, as Dhruba mentioned.

Anyway, sync() has already been removed from trunk (HDFS-3034).  hsync(..) with the UPDATE_LENGTH flag can be used instead, so this is no longer a problem.  Resolving ...
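For readers unfamiliar with the replacement API: a minimal sketch of hsync(..) with UPDATE_LENGTH, assuming a running HDFS cluster where fs.create() returns a stream that can be cast to HdfsDataOutputStream; the path and payload below are illustrative, not from the original report.

```java
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

public class HsyncLengthSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path log = new Path("/tmp/txn.log");  // hypothetical path

    HdfsDataOutputStream out = (HdfsDataOutputStream) fs.create(log);
    byte[] header = new byte[7];          // hypothetical payload
    out.write(header, 0, header.length);

    // Flush to the datanodes AND tell the NameNode the new length,
    // so getFileStatus() reflects the synced bytes even if the
    // writer later crashes without closing the file.
    out.hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));

    long len = fs.getFileStatus(log).getLen();
    out.close();
  }
}
```

Note that UPDATE_LENGTH costs an extra NameNode round trip per sync, so it is worth using only where readers need an accurate length before the file is closed.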

> File length not reported correctly after application crash
> ----------------------------------------------------------
>
>                 Key: HDFS-196
>                 URL: https://issues.apache.org/jira/browse/HDFS-196
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Doug Judd
>
> Our application (Hypertable) creates a transaction log in HDFS.  This log is written with the following pattern:
> out_stream.write(header, 0, 7);
> out_stream.sync();
> out_stream.write(data, 0, amount);
> out_stream.sync();
> [...]
> However, if the application crashes and then comes back up again, the following statement
> length = mFilesystem.getFileStatus(new Path(fileName)).getLen();
> returns the wrong length.  Apparently this is because the method fetches the length from the NameNode, which is stale.  Ideally, a call to getFileStatus() would return the accurate file length by fetching the size of the last block from the primary datanode.
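One client-side workaround for the stale length described above (a hedged sketch, assuming a DistributedFileSystem where fs.open() returns a stream that can be cast to HdfsDataInputStream; the path is illustrative) is to ask the open input stream for its visible length instead of trusting the NameNode's cached value:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class VisibleLengthSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path log = new Path("/tmp/txn.log");  // hypothetical path

    // fs.getFileStatus(log).getLen() may be stale after a crash, because
    // the NameNode only learns the final length when the file is closed.
    // The client-side stream consults the last block's datanode instead.
    try (HdfsDataInputStream in = (HdfsDataInputStream) fs.open(log)) {
      long visible = in.getVisibleLength();  // includes synced, unclosed bytes
      System.out.println("visible length: " + visible);
    }
  }
}
```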



--
This message was sent by Atlassian JIRA
(v6.2#6252)