Posted to user@hadoop.apache.org by jamal sasha <ja...@gmail.com> on 2013/07/29 19:59:14 UTC

Error on running a Hadoop job

Hi,
I am getting a weird error:

13/07/29 10:50:58 INFO mapred.JobClient: Task Id : attempt_201307102216_0145_r_000016_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /wordcount_raw/_temporary/_attempt_201307102216_0145_r_000016_0/part-r-00016 File does not exist. Holder DFSClient_attempt_201307102216_0145_r_000016_0 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1629)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1620)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy2.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy2.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

I am guessing that in the intermediate step (after the map phase), some temp
files were deleted while the reducer was still trying to read them?
How do I resolve this?
I am trying to run a simple word count example, but on the complete wiki dump.
Thanks

Re: Error on running a Hadoop job

Posted by Harsh J <ha...@cloudera.com>.
You could see this if you had removed /wordcount_raw/ while the reducer was
running, perhaps. Otherwise, the path includes an attempt ID, so I am not
seeing how two writers could have conflicted on it to cause this. Which
version are you running, if you didn't remove /wordcount_raw?
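
If it helps, here is a minimal driver sketch along those lines. The class
and path names are illustrative, not taken from your job; the point is to
fail fast on an existing output directory rather than ever deleting it, so
nothing can race a job that is still writing:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path in = new Path(args[0]);
    Path out = new Path(args[1]);   // e.g. /wordcount_raw

    // Fail fast instead of deleting: nothing should remove the output
    // directory (or its _temporary subtree) while reducers are writing.
    FileSystem fs = FileSystem.get(conf);
    if (fs.exists(out)) {
      System.err.println("Output " + out + " already exists; use a fresh path.");
      System.exit(1);
    }

    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(TokenCounterMapper.class); // stock tokenizing mapper
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);     // stock summing reducer
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, in);
    FileOutputFormat.setOutputPath(job, out);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The stock TokenCounterMapper and IntSumReducer are only there to keep the
sketch self-contained; your own classes go in the same slots.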

On Mon, Jul 29, 2013 at 11:32 PM, Pavan Sudheendra <pa...@gmail.com> wrote:
> I'm getting the exact same error. Only thing is I'm trying to write to a
> sequence file.
>
> Regards,
> Pavan
>

-- 
Harsh J

Re: Error on running a Hadoop job

Posted by Pavan Sudheendra <pa...@gmail.com>.
I'm getting the exact same error. The only difference is that I'm trying to
write to a sequence file.
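
For context, the output side of my job looks roughly like this. This is a
sketch from memory with illustrative names, not my exact code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class SeqFileJob {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "seqfile job");
    job.setJarByClass(SeqFileJob.class);
    job.setMapperClass(TokenCounterMapper.class);
    job.setReducerClass(IntSumReducer.class);
    // Same shape as plain-text output, except the reducers emit a binary
    // SequenceFile; the part files are still written under
    // <output>/_temporary/_<attempt-id>/ until the job commits them.
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}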

Regards,
Pavan
