Posted to hdfs-user@hadoop.apache.org by Gokulakannan M <go...@huawei.com> on 2010/04/09 12:07:46 UTC

Exceptions I got in HDFS - append problem?

Hi,

 

 I got the following exceptions when using HDFS to write the logs coming from
Scribe:

            

 1. java.io.IOException: Filesystem closed           

     <stack trace>

     ........

     ........

     call to org.apache.hadoop.fs.FSDataOutputStream::write failed!

            

 2. org.apache.hadoop.ipc.RemoteException:
    org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create
    file xxx-2010-04-01-12-40_00000 for DFSClient_1355960219 on client
    10.18.22.55 because current leaseholder is trying to recreate file

      <stack trace>

     ........

     ........

     call to org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)failed!

 

  I haven't applied the HDFS-265 patch to my hadoop yet.

 

  Are these exceptions due to bugs in the existing append feature, or some
other reason?

 Do I need to apply the complete append patch, or will a simpler patch solve
this?

            

 Thanks,

  Gokul

 

  


RE: Exceptions I got in HDFS - append problem?

Posted by Gokulakannan M <go...@huawei.com>.
	

@Stack,

>> what hadoop version are you running?  hdfs-265 won't apply to hadoop
>> 0.20.x if that is what you are running	

	I am using hadoop 0.20.1. So hdfs-265 cannot be applied to it?
hmmmm.

>> Do you have hdfs-200 and friends applied to your cluster?

        No, I haven't applied them yet. Can you specify the patches other
than HDFS-200?

>> I haven't looked, but my guess is that scribe documentation probably
>> has description of the patchset required to run on hadoop.
 
        No, they are not mentioned :(

   Regards,
   Gokul
 


Re: Exceptions I got in HDFS - append problem?

Posted by Stack <st...@duboce.net>.
On Fri, Apr 9, 2010 at 3:07 AM, Gokulakannan M <go...@huawei.com> wrote:
> Hi,
>  I got the following exceptions when using HDFS to write the logs
> coming from Scribe:
>  1. java.io.IOException: Filesystem closed
>
>      <stack trace>
>      ........
>      ........
>      call to org.apache.hadoop.fs.FSDataOutputStream::write failed!
>

Above seems to be saying that the filesystem is closed and, as a
consequence, you are not able to write to it.
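
A common cause on 0.20.x is that FileSystem.get() hands back a JVM-wide cached
instance; once any code path calls close() on it (or FileSystem.closeAll() runs,
say from a shutdown hook), every other writer holding that same instance starts
failing with "Filesystem closed". A minimal sketch of the pattern that avoids
this is below. The ScribeHdfsWriter wrapper is made up for illustration (it is
not Scribe's actual HDFS store); it keeps the shared FileSystem open for the
life of the process and only closes the per-file streams:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical wrapper, for illustration only.
    public class ScribeHdfsWriter {

        // FileSystem.get() returns a cached instance shared across the JVM;
        // closing it anywhere invalidates it for every caller, and the next
        // write fails with "java.io.IOException: Filesystem closed".
        private final FileSystem fs;

        public ScribeHdfsWriter(Configuration conf) throws IOException {
            this.fs = FileSystem.get(conf);
        }

        public void writeLog(Path file, byte[] record) throws IOException {
            // Open (or re-open for append, where the cluster allows it),
            // write, flush, and close the per-file stream, but never close
            // fs itself here.
            FSDataOutputStream out = fs.exists(file) ? fs.append(file)
                                                     : fs.create(file);
            try {
                out.write(record);
                out.sync();   // 0.20.x name; a durable flush only with the append/sync patches
            } finally {
                out.close();
            }
        }
    }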

>  2. org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to
> create
>       file xxx-2010-04-01-12-40_00000 for DFSClient_1355960219 on client
> 10.18.22.55 because current leaseholder is trying to recreate file
>       <stack trace>
>      ........
>      ........
>      call to
> org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)failed!
>

Someone holds the lease on the file you are trying to open?
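
The "current leaseholder is trying to recreate file" wording usually means the
lease is held by the same client name (here DFSClient_1355960219): an earlier
stream on that file was never closed successfully, and the retry asks the
NameNode to open the same path again before the old lease has expired or been
recovered. A rough sketch of that failure mode, with a made-up directory and
the same imports as the writer sketch above:

    // Not a fix, just the failure mode; the /scribe path is hypothetical.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/scribe/xxx-2010-04-01-12-40_00000");

    FSDataOutputStream first = fs.append(p);   // NameNode grants this client the lease
    // ... the write or close on `first` fails, the error is swallowed, and the
    // retry logic immediately re-opens the same file from the same process ...
    FSDataOutputStream second = fs.append(p);  // rejected with AlreadyBeingCreatedException:
                                               // "current leaseholder is trying to recreate file"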

You mention scribe.  Do you have hdfs-200 and friends applied to your cluster?

>   I haven't applied the HDFS-265 patch to my hadoop yet.
>

What hadoop version are you running?  hdfs-265 won't apply to hadoop
0.20.x if that is what you are running.
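
As far as I know, on a stock 0.20.x build the append() call is also gated
behind the dfs.support.append flag (default false), independent of whether the
HDFS-200/HDFS-265 work has been applied. A quick check from client code,
assuming hdfs-site.xml with the cluster settings is on the classpath (the flag
is enforced on the NameNode, so this only reflects your local copy of the
config):

    // Reads the local configuration; false means the NameNode will reject
    // append() calls unless its own copy of the config enables the flag.
    Configuration conf = new Configuration();
    boolean appendEnabled = conf.getBoolean("dfs.support.append", false);
    System.out.println("dfs.support.append = " + appendEnabled);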

>
>   Are these exceptions due to bugs in the existing append feature, or some
> other reason?
>
>  Do I need to apply the complete append patch, or will a simpler patch
> solve this?
>
I haven't looked, but my guess is that scribe documentation probably
has description of the patchset required to run on hadoop.

St.Ack