Posted to common-user@hadoop.apache.org by Luca Telloli <te...@yahoo-inc.com> on 2008/04/16 18:43:18 UTC
Lease expired on open file
Hello everyone,
I wrote a small application that gunzips files directly from a local
filesystem into an HDFS installation, writing to an FSDataOutputStream.
However, while expanding a very big file, I got this exception:
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.dfs.LeaseExpiredException: No lease on
/user/luca/testfile File is not open for writing. [Lease. Holder: 44 46
53 43 6c 69 65 6e 74 5f 2d 31 39 31 34 34 39 36 31 34 30, heldlocks: 0,
pendingcreates: 1]
I wonder what the cause of this exception might be, and whether there is a
way to find out the default lease period for a file and possibly prolong it.
Ciao,
Luca
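As an aside, the lease holder in the exception message is printed as raw hex
bytes; decoded as ASCII they spell out the DFSClient's name. A quick
standalone snippet (not part of Hadoop) to check:

```java
// Decode the space-separated hex bytes from the LeaseExpiredException
// message into the ASCII client name that holds the lease.
public class DecodeLeaseHolder {
    static String decode(String hexBytes) {
        StringBuilder sb = new StringBuilder();
        for (String b : hexBytes.trim().split("\\s+")) {
            sb.append((char) Integer.parseInt(b, 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String holder = "44 46 53 43 6c 69 65 6e 74 5f 2d 31 39 31 34 34 39 36 31 34 30";
        System.out.println(decode(holder)); // prints DFSClient_-1914496140
    }
}
```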
RE: Lease expired on open file
Posted by Runping Qi <ru...@yahoo-inc.com>.
Sounds like you also hit this problem:
https://issues.apache.org/jira/browse/HADOOP-2669
Runping
Re: Lease expired on open file
Posted by Luca <ra...@yahoo.it>.
dhruba Borthakur wrote:
> The DFSClient has a thread that renews leases periodically for all files
> that are being written to. I suspect that this thread is not getting a
> chance to run because the gunzip program is eating all the CPU. You
> might want to put in a sleep() every few seconds while unzipping.
>
> Thanks,
> dhruba
>
Thanks Dhruba,
with your suggestion and a small sleep() after every block (more or less),
it worked perfectly. Good hint!
Ciao,
Luca
RE: Lease expired on open file
Posted by dhruba Borthakur <dh...@yahoo-inc.com>.
The DFSClient has a thread that renews leases periodically for all files
that are being written to. I suspect that this thread is not getting a
chance to run because the gunzip program is eating all the CPU. You
might want to put in a sleep() every few seconds while unzipping.
Thanks,
dhruba
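The workaround discussed in this thread — a copy loop that pauses
periodically so the DFSClient's background lease-renewal thread gets CPU
time — can be sketched roughly as below. This is a minimal, self-contained
sketch using only java.util.zip and an in-memory stream; in the real
application the output stream would be the FSDataOutputStream returned by
FileSystem.create(path), and the buffer size and sleep interval here are
arbitrary assumptions, not values from the thread:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GunzipWithSleep {
    // Copy gzip-compressed data from 'in' to 'out', pausing briefly every
    // few buffers so background threads (such as the DFSClient lease
    // renewer) get a chance to run. In the real application 'out' would be
    // the FSDataOutputStream from FileSystem.create(path).
    static void gunzipTo(InputStream in, OutputStream out)
            throws IOException, InterruptedException {
        GZIPInputStream gzip = new GZIPInputStream(in);
        byte[] buf = new byte[64 * 1024];
        int n, buffers = 0;
        while ((n = gzip.read(buf)) > 0) {
            out.write(buf, 0, n);
            if (++buffers % 16 == 0) {
                Thread.sleep(10); // yield the CPU to the lease-renewal thread
            }
        }
        out.flush();
    }

    public static void main(String[] args) throws Exception {
        // Round-trip demo with in-memory streams instead of HDFS.
        byte[] original = new byte[1 << 20];
        Arrays.fill(original, (byte) 'x');
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
            gz.write(original);
        }
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        gunzipTo(new ByteArrayInputStream(compressed.toByteArray()), result);
        System.out.println(Arrays.equals(original, result.toByteArray())); // prints true
    }
}
```

Sleeping in the copy loop is a pragmatic fix rather than a principled one;
as Runping notes, the underlying issue is tracked in HADOOP-2669.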