Posted to common-user@hadoop.apache.org by Martin Schaaf <ms...@101tec.com> on 2008/05/23 01:14:20 UTC

Hadoop fsck displays open files as corrupt.

Hi,

we wrote a program that uses a Writer to append keys and values to a file.
While these writes are in progress, an fsck reports the open files as corrupt,
and the file size stays at zero until they are closed. On the other hand, if
we copy a file from the local fs to the Hadoop fs, the size increases steadily and the
files aren't reported as corrupt. So my question is: is this the expected behaviour? What
is the difference between these two operations?

Thanks in advance for your help
martin

Re: Hadoop fsck displays open files as corrupt.

Posted by stack <st...@duboce.net>.
The first case sounds like HADOOP-2703.
St.Ack
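
For anyone checking the same symptom, a hedged sketch of the fsck invocation (the path is hypothetical, and the -openforwrite flag is only available in Hadoop releases that include it; older versions simply count open files among the corrupt/missing ones, as described above):

```shell
# Run fsck over the directory that holds the file still being written.
# -files and -blocks print per-file and per-block detail; with
# -openforwrite, files that are open for writing are listed explicitly
# rather than being flagged as corrupt.
hadoop fsck /user/martin/output -files -blocks -openforwrite
```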

