Posted to mapreduce-user@hadoop.apache.org by Dmitry Simonov <di...@gmail.com> on 2015/02/24 09:10:11 UTC

recoverLeaseInternal: why is the current leaseholder forbidden to append to the file?

Hello!

Could you please explain why this check exists in the class
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem", method
"recoverLeaseInternal":

Lease leaseFile = leaseManager.getLeaseByPath(src);
if (leaseFile != null && leaseFile.equals(lease)) {
  throw new AlreadyBeingCreatedException(
      "failed to create file " + src + " for " + holder +
      " for client " + clientMachine +
      " because current leaseholder is trying to recreate file.");
}

It prevents the leaseholder from recovering the lease if the lease already
belongs to him.

This method is called from both the append() and create() methods
of org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.

It seems to me that the current leaseholder should be able to append to the
file normally. Am I wrong?
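
For context, a minimal sketch of how a client runs into this check, assuming
HDFS is the default filesystem and the file already exists (the path and
class name are hypothetical):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DoubleAppendDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // One FileSystem instance means one DFSClient, hence one holder string.
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/lease-demo.txt"); // hypothetical, must already exist

    // First append: this client becomes the leaseholder for the file.
    FSDataOutputStream first = fs.append(path);

    try {
      // Second append while the first stream is still open: the NameNode
      // sees that the requesting holder already owns the lease, and
      // recoverLeaseInternal throws AlreadyBeingCreatedException
      // (it reaches the client wrapped in a RemoteException).
      FSDataOutputStream second = fs.append(path);
      second.close();
    } catch (IOException e) {
      System.out.println("second append rejected: " + e.getMessage());
    } finally {
      first.close();
    }
  }
}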

Hadoop version: 2.5.1

--
Best regards, Dmitrii Simonov.

Re: recoverLeaseInternal: why is the current leaseholder forbidden to append to the file?

Posted by Ravi Prakash <ra...@ymail.com>.
Hi Dmitry!
I suspect it's because we don't want two streams from the same DFSClient writing to the same file. Lease.holder is a simple string, which usually corresponds to DFSClient_<someid>.
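
To make that concrete, here is a toy model of why a second stream from the
same client is indistinguishable from the first: leases are keyed by the
holder string alone. This is my own sketch, not the actual Hadoop classes,
and the holder value is hypothetical.

import java.util.HashMap;
import java.util.Map;

// Toy model of the leaseholder check; the real logic lives in
// FSNamesystem.recoverLeaseInternal and LeaseManager.
public class LeaseTableModel {
  private final Map<String, String> holderByPath = new HashMap<>();

  void startWrite(String path, String holder) {
    if (holder.equals(holderByPath.get(path))) {
      // The same situation recoverLeaseInternal rejects: the current
      // leaseholder is trying to create/append the file again.
      throw new IllegalStateException(
          "current leaseholder is trying to recreate " + path);
    }
    holderByPath.put(path, holder);
  }

  public static void main(String[] args) {
    LeaseTableModel leases = new LeaseTableModel();
    String holder = "DFSClient_123";      // hypothetical holder string
    leases.startWrite("/tmp/f", holder);  // first stream: lease granted
    leases.startWrite("/tmp/f", holder);  // second stream: rejected, same holder
  }
}

Since both streams present the identical holder string, the NameNode cannot
tell a legitimate second stream apart from a buggy or retried recreate.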
HTH
Ravi. 
