Posted to common-user@hadoop.apache.org by unmesha sreeveni <un...@gmail.com> on 2013/11/12 11:52:14 UTC

LeaseExpiredException: Lease mismatch in Hadoop MapReduce | How to solve?

While running a job with a 90 MB input file, I am getting a LeaseExpiredException:

13/11/12 15:46:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/12 15:46:42 INFO input.FileInputFormat: Total input paths to process : 1
13/11/12 15:46:43 INFO mapred.JobClient: Running job: job_201310301645_25033
13/11/12 15:46:44 INFO mapred.JobClient:  map 0% reduce 0%
13/11/12 15:46:56 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /user/hdfs/in/map owned by DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by DFSClient_NONMAPREDUCE_-1561990512_1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
    at org.
attempt_201310301645_25033_m_000000_0: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_0: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
13/11/12 15:47:02 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000000_1, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /user/hdfs/in/map owned by DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by DFSClient_NONMAPREDUCE_-1662926329_1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java
attempt_201310301645_25033_m_000000_1: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_1: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
attempt_201310301645_25033_m_000000_1: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201310301645_25033_m_000000_1: log4j:WARN Please initialize the log4j system properly.
attempt_201310301645_25033_m_000000_1: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13/11/12 15:47:10 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000001_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hdfs/in/map: File is not open for writing. Holder DFSClient_NONMAPREDUCE_-1622335545_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)

Why is this happening?
My mapper code is:
public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Write the current input record to a fixed HDFS path.
    Path inputfile = new Path("in/map");
    BufferedWriter getdatabuffer = new BufferedWriter(
            new OutputStreamWriter(fs.create(inputfile)));
    getdatabuffer.write(value.toString());
    getdatabuffer.close();

    Path Attribute = new Path("in/Attribute");

    // Read the file back to count rows and columns.
    int row = 0;
    BufferedReader read = new BufferedReader(
            new InputStreamReader(fs.open(inputfile)));
    String str = null;
    while ((str = read.readLine()) != null) {
        row++; // total row count
        StringTokenizer st = new StringTokenizer(str, " ");
        col = st.countTokens(); // col appears to be a class field
    }
    read.close();
    ...........
Further computation is based on the above "map" file.

Why does this happen?
I think it fails when writing into in/map repeatedly.
How do I get rid of this?
*Any suggestions?*
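
A note on the failure pattern: each failed attempt reports /user/hdfs/in/map as "owned by" one DFSClient but "accessed by" another. Every map task attempt (including retries and speculative attempts) calls fs.create() on the same fixed path "in/map", so concurrent attempts keep taking the lease on that one file away from each other. A minimal sketch of one possible workaround, assuming the new-API Mapper, that the scratch file is only needed for the lifetime of the task, and with LeaseSafeMapper, the in/map_<attemptId> naming, and the Text/Text output types all illustrative rather than taken from the original post:

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch: give every task attempt its own scratch file so retries and
// speculative attempts never compete for the lease on a single HDFS path.
public class LeaseSafeMapper extends Mapper<Object, Text, Text, Text> {

    private FileSystem fs;
    private Path scratch; // unique per task attempt

    @Override
    protected void setup(Context context) throws IOException {
        fs = FileSystem.get(context.getConfiguration());
        // e.g. in/map_attempt_201310301645_25033_m_000000_0
        scratch = new Path("in/map_" + context.getTaskAttemptID());
    }

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Overwrite the attempt-local scratch file, just as the original
        // code overwrote in/map on every call to map().
        BufferedWriter out = new BufferedWriter(
                new OutputStreamWriter(fs.create(scratch, true)));
        out.write(value.toString());
        out.close();
        // ... re-read the scratch file and count rows/columns as before ...
    }
}

Another option along the same lines is to write under the task's attempt-scoped work directory (FileOutputFormat.getWorkOutputPath(context)), which gives each attempt its own isolated directory and lets the framework discard the files of losing attempts.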

-- 
*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*

Re: LeaseExpiredException: Lease mismatch in Hadoop MapReduce | How to solve?

Posted by unmesha sreeveni <un...@gmail.com>.
You are most welcome :)


On Fri, Nov 15, 2013 at 12:46 PM, chandu banavaram <
chandu.banavaram@gmail.com> wrote:

> Thanks.
>
>
> On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> @chandu banavaram:
>> This exception usually happens if HDFS is trying to write into a file
>> that no longer exists in HDFS.
>>
>> I think in my case certain files were not created in HDFS; the creation
>> failed due to some permissions issue.
>>
>> I am trying it out.
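
For what it's worth, the permissions theory above can be probed directly before re-running the job. A small standalone sketch (HdfsProbe is a hypothetical helper, not from this thread; in/map is the path the mapper writes):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical probe: report whether in/map and its parent directory exist
// in HDFS, and who owns the parent, to rule the permission theory in or out.
public class HdfsProbe {
    public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("in/map"); // the path the mapper writes
        Path dir = file.getParent();    // resolves to "in"
        System.out.println("dir exists:  " + fs.exists(dir));
        if (fs.exists(dir)) {
            FileStatus st = fs.getFileStatus(dir);
            System.out.println("owner " + st.getOwner()
                    + ", group " + st.getGroup()
                    + ", perms " + st.getPermission());
        }
        System.out.println("file exists: " + fs.exists(file));
    }
}

If the parent directory were missing or unwritable, fs.create() would fail outright with a permission error rather than a lease mismatch, so the log above points more toward concurrent attempts on one path than toward permissions.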
>>
>>
>> On Wed, Nov 13, 2013 at 9:25 AM, unmesha sreeveni <un...@gmail.com>wrote:
>>
>>> :) OK.
>>> Have you also experienced the same issue?
>>>
>>>
>>> On Tue, Nov 12, 2013 at 5:14 PM, chandu banavaram <
>>> chandu.banavaram@gmail.com> wrote:
>>>
>>>> Please send me the answer to this query.
>>>>
>>>>
>>>> On Tue, Nov 12, 2013 at 2:52 AM, unmesha sreeveni <
>>>> unmeshabiju@gmail.com> wrote:
>>>>
>>>>> [quoted original message snipped]
>>>>
>>>
>>>
>>> --
>>> *Thanks & Regards*
>>>
>>> Unmesha Sreeveni U.B
>>>
>>> *Junior Developer*
>>>
>>>
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>>
>> *Junior Developer*
>>
>>
>>
>


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*


Re: LeaseExpiredException: Lease mismatch in Hadoop MapReduce | How to solve?

Posted by chandu banavaram <ch...@gmail.com>.
Thanks.


On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni <un...@gmail.com>wrote:

> @chandu banavaram:
> This exception usually happens if HDFS is trying to write into a file
> that no longer exists in HDFS.
>
> I think in my case certain files were not created in HDFS; the creation
> failed due to some permissions issue.
>
> I am trying it out.
>
>
> On Wed, Nov 13, 2013 at 9:25 AM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> :) OK.
>> Have you also experienced the same issue?
>>
>>
>> On Tue, Nov 12, 2013 at 5:14 PM, chandu banavaram <
>> chandu.banavaram@gmail.com> wrote:
>>
>>> Please send me the answer to this query.
>>>
>>>
>>> On Tue, Nov 12, 2013 at 2:52 AM, unmesha sreeveni <unmeshabiju@gmail.com
>>> > wrote:
>>>
>>>> [quoted original message snipped]
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>>
>> *Junior Developer*
>>
>>
>>
>
>
> --
> *Thanks & Regards*
>
> Unmesha Sreeveni U.B
>
> *Junior Developer*
>
>
>

Re: LeaseExpiredException : Lease mismatch in Hadoop mapReduce| How to solve?

Posted by chandu banavaram <ch...@gmail.com>.
thanks


On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni <un...@gmail.com>wrote:

> @chandu banavaram:
> This exception usually happens if hdfs is trying to write into a file
> which is no more in hdfs..
>
> I think in my case certain files are not created in my hdfs.it failed to
> create due to some permissions.
>
> I am trying out.
>
>
> On Wed, Nov 13, 2013 at 9:25 AM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> :) Ok
>> Why u also experienced the same?
>>
>>
>> On Tue, Nov 12, 2013 at 5:14 PM, chandu banavaram <
>> chandu.banavaram@gmail.com> wrote:
>>
>>> plz send the answer to me  for this query
>>>
>>>
>>> On Tue, Nov 12, 2013 at 2:52 AM, unmesha sreeveni <unmeshabiju@gmail.com
>>> > wrote:
>>>
>>>> While running job with 90 Mb file i am getting LeaseExpiredException
>>>>
>>>> 13/11/12 15:46:41 WARN mapred.JobClient: Use GenericOptionsParser for
>>>> parsing the arguments. Applications should implement Tool for the same.
>>>> 13/11/12 15:46:42 INFO input.FileInputFormat: Total input paths to
>>>> process : 1
>>>> 13/11/12 15:46:43 INFO mapred.JobClient: Running job:
>>>> job_201310301645_25033
>>>>  13/11/12 15:46:44 INFO mapred.JobClient:  map 0% reduce 0%
>>>> 13/11/12 15:46:56 INFO mapred.JobClient: Task Id :
>>>> attempt_201310301645_25033_m_000000_0, Status : FAILED
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>>>> Lease mismatch on /user/hdfs/in/map owned by
>>>> DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by
>>>> DFSClient_NONMAPREDUCE_-1561990512_1
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
>>>> at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
>>>>  at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
>>>>  at org.
>>>> attempt_201310301645_25033_m_000000_0: SLF4J: Class path contains
>>>> multiple SLF4J bindings.
>>>> attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in
>>>> [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in
>>>> [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> attempt_201310301645_25033_m_000000_0: SLF4J: See
>>>> http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>  13/11/12 15:47:02 INFO mapred.JobClient: Task Id :
>>>> attempt_201310301645_25033_m_000000_1, Status : FAILED
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>>>> Lease mismatch on /user/hdfs/in/map owned by
>>>> DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by
>>>> DFSClient_NONMAPREDUCE_-1662926329_1
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
>>>>  at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
>>>>  at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
>>>>  at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java
>>>> attempt_201310301645_25033_m_000000_1: SLF4J: Class path contains
>>>> multiple SLF4J bindings.
>>>> attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in
>>>> [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in
>>>> [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> attempt_201310301645_25033_m_000000_1: SLF4J: See
>>>> http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>> attempt_201310301645_25033_m_000000_1: log4j:WARN No appenders could be
>>>> found for logger (org.apache.hadoop.hdfs.DFSClient).
>>>> attempt_201310301645_25033_m_000000_1: log4j:WARN Please initialize the
>>>> log4j system properly.
>>>> attempt_201310301645_25033_m_000000_1: log4j:WARN See
>>>> http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>>>> 13/11/12 15:47:10 INFO mapred.JobClient: Task Id :
>>>> attempt_201310301645_25033_m_000001_0, Status : FAILED
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>>>> No lease on /user/hdfs/in/map: File is not open for writing. Holder
>>>> DFSClient_NONMAPREDUCE_-1622335545_1 does not have any open files.
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
>>>>  at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
>>>> at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
>>>>  at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
>>>>
>>>> Why is it so?
>>>> My mapper code is
>>>>
>>>> public void map(Object key, Text value, Context context)
>>>> throws IOException, InterruptedException {
>>>>  Configuration conf = new Configuration();
>>>> FileSystem fs = FileSystem.get(conf);
>>>>
>>>>  Path inputfile = new Path("in/map");
>>>>  BufferedWriter getdatabuffer = new BufferedWriter(new
>>>> OutputStreamWriter(fs.create(inputfile)));
>>>>    getdatabuffer.write(value.toString());
>>>>    getdatabuffer.close();
>>>> Path Attribute = new Path("in/Attribute");
>>>>     int row =0;
>>>>         BufferedReader read = new BufferedReader(new
>>>> InputStreamReader(fs.open(inputfile)));
>>>>         String str = null;
>>>>         while((str = read.readLine())!=null){
>>>>
>>>>         row++; //total row count
>>>>         StringTokenizer st =new StringTokenizer(str," ");
>>>>         col = st.countTokens();
>>>>       }
>>>>         read.close();
>>>> ...........
>>>> ...........
>>>> .............
>>>> ............
>>>> Further computation is based on the above "map" file.
>>>>
>>>> Why this happens?
>>>> I think it is unable to write into in/map for several times.
>>>> How to get rid of this?
>>>> *Any Suggestions?*
>>>>
>>>> --
>>>> *Thanks & Regards*
>>>>
>>>> Unmesha Sreeveni U.B
>>>>
>>>> *Junior Developer*
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>>
>> *Junior Developer*
>>
>>
>>
>
>
> --
> *Thanks & Regards*
>
> Unmesha Sreeveni U.B
>
> *Junior Developer*
>
>
>

Re: LeaseExpiredException : Lease mismatch in Hadoop mapReduce| How to solve?

Posted by chandu banavaram <ch...@gmail.com>.
thanks


On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni <un...@gmail.com> wrote:

> @chandu banavaram:
> This exception usually happens when a client tries to write to a file
> that is no longer in HDFS, or whose lease is now held by another client.
>
> I think in my case certain files are not getting created in HDFS; the
> create failed due to some permission problem.
>
> I am still trying it out.
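
A quick way to test the permission theory quoted above is to stat the path
with the plain FileSystem API. This is only a sketch: the PathProbe class
name is made up, and it assumes the probe runs as the same user as the job.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathProbe {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // the path named in the LeaseExpiredException stack trace
        Path p = new Path("/user/hdfs/in/map");
        if (!fs.exists(p)) {
            System.out.println(p + " does not exist");
        } else {
            FileStatus st = fs.getFileStatus(p);
            // owner, group and mode decide whether create() succeeds
            System.out.println(st.getOwner() + " " + st.getGroup()
                    + " " + st.getPermission());
        }
    }
}

If the owner or mode is not what the task user expects, fs.create() fails
before any lease is taken at all.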
>

Re: LeaseExpiredException : Lease mismatch in Hadoop mapReduce| How to solve?

Posted by unmesha sreeveni <un...@gmail.com>.
@chandu banavaram:
This exception usually happens when a client tries to write to a file
that is no longer in HDFS, or whose lease is now held by another client.

I think in my case certain files are not getting created in HDFS; the
create failed due to some permission problem.

I am still trying it out.
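
For anyone hitting the same trace: the usual fix is to stop sharing one
fixed HDFS path between concurrent map attempts. Below is a minimal sketch
of that idea; ScratchFileMapper and the in/map-<attemptId> layout are just
illustrative names, and it assumes the new-API Mapper over text input.

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ScratchFileMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // reuse the job configuration instead of building a new one per record
        Configuration conf = context.getConfiguration();
        FileSystem fs = FileSystem.get(conf);

        // one scratch file per task attempt: a retried or speculative
        // attempt writes somewhere else, so two DFSClient instances never
        // fight over the lease of a single path
        Path scratch = new Path("in/map-" + context.getTaskAttemptID());

        BufferedWriter writer = new BufferedWriter(
                new OutputStreamWriter(fs.create(scratch, true)));
        try {
            writer.write(value.toString());
        } finally {
            writer.close();
        }
    }
}

The "Lease mismatch" lines in the log, where the path is owned by one
DFSClient but accessed by another, are exactly this race; per-attempt file
names rule it out.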


On Wed, Nov 13, 2013 at 9:25 AM, unmesha sreeveni <un...@gmail.com> wrote:

> :) Ok.
> Have you also run into the same issue?
>


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*

Re: LeaseExpiredException : Lease mismatch in Hadoop mapReduce| How to solve?

Posted by unmesha sreeveni <un...@gmail.com>.
:) Ok.
Have you also run into the same issue?


On Tue, Nov 12, 2013 at 5:14 PM, chandu banavaram <
chandu.banavaram@gmail.com> wrote:

> Please send me the answer to this query.
>


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*

Re: LeaseExpiredException : Lease mismatch in Hadoop mapReduce| How to solve?

Posted by chandu banavaram <ch...@gmail.com>.
Please send me the answer to this query.


On Tue, Nov 12, 2013 at 2:52 AM, unmesha sreeveni <un...@gmail.com> wrote:

> While running job with 90 Mb file i am getting LeaseExpiredException
> [snip: original message quoted in full above]
>
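
One more sketch under the same caveats: the write-then-read-back through HDFS
in the quoted mapper exists only to count rows and tokens, and those counts
can be taken straight from the record already in memory, which removes the
shared file (and with it the lease traffic) entirely. The helper class below
is hypothetical:

    import java.util.StringTokenizer;

    public final class RecordShape {

        // Hypothetical helper: computes what the quoted mapper derived by
        // writing value.toString() to "in/map" and re-reading it, without
        // touching HDFS at all.
        public static int[] rowsAndCols(String record) {
            int row = 0;   // total row count
            int col = 0;   // token count of the line seen last
            for (String line : record.split("\n")) {
                row++;
                StringTokenizer st = new StringTokenizer(line, " ");
                col = st.countTokens();
            }
            return new int[] { row, col };
        }
    }

Inside map() this becomes int[] shape = RecordShape.rowsAndCols(value.toString());
with shape[0] the row count and shape[1] the column count of the last line,
which is exactly what the original loop left in "col".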
