Posted to user@hadoop.apache.org by AnilKumar B <ak...@gmail.com> on 2014/02/24 11:01:32 UTC

job failed on hadoop 2

Hi,

When I try to run a MapReduce job on Hadoop 2, I am facing the issue below.

What could be the problem?

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running
in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1229)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        ... 2 more
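
For context on the error itself: the NameNode rejects the reducer's addBlock call because the inode ID the client captured when it created the file no longer matches the inode now sitting at that path, which is what happens when the file has been overwritten in between. A paraphrased sketch of the check in org.apache.hadoop.hdfs.server.namenode.INodeId (from the Hadoop 2.x source; details may differ by version):

// Paraphrased sketch, not the verbatim Hadoop source.
public static void checkId(long requestId, INode inode)
    throws FileNotFoundException {
  // requestId: the inode ID the client saved when it opened the file.
  // inode.getId(): the ID of whatever currently lives at that path.
  if (requestId != GRANDFATHER_INODE_ID && requestId != inode.getId()) {
    throw new FileNotFoundException("ID mismatch. Request id and saved id: "
        + requestId + " , " + inode.getId());
  }
}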



Thanks & Regards,
B Anil Kumar.

Re: job failed on hadoop 2

Posted by AnilKumar B <ak...@gmail.com>.
Hi Vinay,

Actually, I was facing that issue when using MultipleOutputs with
AvroKeyOutputFormat. Once I removed the multiple outputs and used the
regular context.write(), it started working.

I still need to debug this; it may be an issue in my code.

Thanks for your inputs; I will debug with your suggestions.
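
For reference, this is roughly the working setup described above (a minimal sketch, not the actual job: the schema, field names, and class names here are invented for illustration). The reducer emits Avro records through the normal context.write() path with AvroKeyOutputFormat, so each task only ever writes its own attempt-scoped part file:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.mapreduce.AvroKeyOutputFormat;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class AvroEventReducer
    extends Reducer<Text, Text, AvroKey<GenericRecord>, NullWritable> {

  // Illustrative one-field schema; the real job's schema is not in the thread.
  static final Schema SCHEMA = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
      + "{\"name\":\"key\",\"type\":\"string\"}]}");

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws java.io.IOException, InterruptedException {
    GenericRecord record = new GenericData.Record(SCHEMA);
    record.put("key", key.toString());
    // Plain context.write(): the output file name is derived from the task
    // attempt, so no two reducers ever create the same HDFS path.
    context.write(new AvroKey<GenericRecord>(record), NullWritable.get());
  }

  // Driver-side wiring for this output format.
  public static void configure(Job job) {
    job.setOutputFormatClass(AvroKeyOutputFormat.class);
    AvroJob.setOutputKeySchema(job, SCHEMA);
  }
}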

Thanks & Regards,
B Anil Kumar.


On Mon, Feb 24, 2014 at 5:28 PM, Vinayakumar B <vi...@huawei.com> wrote:

>  Hi Anil,
>
>
>
> I think the Avro output emitted in the reducers is being written to the
> same file from different tasks.
>
> I am pretty sure this problem occurs only in that case, because the
> previous writer is fenced off by the new writer.
>
>
>
> To find out:
>
> 1. Enable hdfs-audit logs for the namenode (if not already done).
>
> 2. Run the job again.
>
> 3. Using the hdfs-audit log, find the files written by the reducers and
> identify the exact file which is overwritten before it is closed.
>
>
>
> Regards,
>
> Vinayakumar B
>
>
>
> *From:* AnilKumar B [mailto:akumarb2010@gmail.com]
> *Sent:* 24 February 2014 16:15
> *To:* user@hadoop.apache.org
> *Subject:* Re: job failed on hadoop 2
>
>
>
> Thanks Vinay.
>
>
>
> I am checking my code, but this exception comes only after the map phase
> reaches 100%, so I cannot tell where the issue could be.
>
>
>
> 14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running in uber mode : false
> 14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
> 14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
> 14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
> Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
>         at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
>
>
>
>
>
> In my mappers and reducers, I am emitting output in Avro format.
>
>
>
>
>
>
>  Thanks & Regards,
> B Anil Kumar.
>
>
>
> On Mon, Feb 24, 2014 at 3:35 PM, Vinayakumar B <vi...@huawei.com>
> wrote:
>
> Hi Anil,
>
>
>
> I think multiple clients/tasks are trying to write to the same file with
> *overwrite* enabled.
>
> The second client overwrites the first client's file, and the first
> client gets the exception mentioned below.
>
> Please check.
>
>
>
> Regards,
>
> Vinayakumar B
>
>
>
> *From:* AnilKumar B [mailto:akumarb2010@gmail.com]
> *Sent:* 24 February 2014 15:32
> *To:* user@hadoop.apache.org
> *Subject:* job failed on hadoop 2
>
>
>
> Hi,
>
>
>
> When I try to run a MapReduce job on Hadoop 2, I am facing the issue below.
>
>
>
> What could be the problem?
>
>
>
> [Full job output and stack trace quoted here were identical to those in
> the original message at the top of this thread.]
>
>
>
>
>
>
>
> Thanks & Regards,
> B Anil Kumar.
>
>
>

RE: job failed on hadoop 2

Posted by Vinayakumar B <vi...@huawei.com>.
Hi Anil,

I think the Avro output emitted in the reducers is being written to the same file from different tasks.

I am pretty sure this problem occurs only in that case, because the previous writer is fenced off by the new writer.

To find out:

1. Enable hdfs-audit logs for the namenode (if not already done); a sample log4j configuration is sketched after this list.

2. Run the job again.

3. Using the hdfs-audit log, find the files written by the reducers and identify the exact file which is overwritten before it is closed.
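
As a rough guide to step 1, here is a minimal sketch following the property names in the stock Hadoop 2 log4j.properties; the RFAAUDIT appender name and the log file location are conventional defaults, not requirements:

# Route the NameNode audit logger to its own rolling file instead of the
# default NullAppender.
hdfs.audit.logger=INFO,RFAAUDIT
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n

For step 3, every audit line carries cmd= and src= fields, so a cmd=create entry that appears a second time for the same src= path before the first writer has closed the file is the overwrite to look for.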

Regards,
Vinayakumar B

From: AnilKumar B [mailto:akumarb2010@gmail.com]
Sent: 24 February 2014 16:15
To: user@hadoop.apache.org
Subject: Re: job failed on hadoop 2

Thanks Vinay.

I am checking my code, but this exception comes only after the map phase reaches 100%, so I cannot tell where the issue could be.

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)


In my mappers and reducers, I am emitting output in Avro format.



Thanks & Regards,
B Anil Kumar.

On Mon, Feb 24, 2014 at 3:35 PM, Vinayakumar B <vi...@huawei.com> wrote:
Hi Anil,

I think multiple clients/tasks are trying to write to the same file with overwrite enabled.

The second client overwrites the first client's file, and the first client gets the exception mentioned below; a short sketch of that sequence follows.

Please check.
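
To illustrate the failure mode, here is a minimal sketch against the plain HDFS client API; the path, write sizes, and timing are made up for illustration and assume the default 128 MB block size:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverwriteFencingDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path p = new Path("/tmp/demo/output.avro");  // illustrative path

    // Writer A creates the file and starts writing.
    FileSystem fsA = FileSystem.newInstance(conf);
    FSDataOutputStream a = fsA.create(p, true /* overwrite */);
    a.write(new byte[4 * 1024]);

    // Writer B (say, another task attempt) re-creates the same path with
    // overwrite=true; the NameNode replaces A's inode with a new one.
    FileSystem fsB = FileSystem.newInstance(conf);
    FSDataOutputStream b = fsB.create(p, true /* overwrite */);

    // A still holds the old inode ID, so once it needs another block from
    // the NameNode, the addBlock call fails with "ID mismatch. Request id
    // and saved id: ...", surfaced as a FileNotFoundException as in the
    // logs above.
    a.write(new byte[128 * 1024 * 1024]);  // large enough to need a new block
    a.close();
    b.close();
  }
}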

Regards,
Vinayakumar B

From: AnilKumar B [mailto:akumarb2010@gmail.com]
Sent: 24 February 2014 15:32
To: user@hadoop.apache.org
Subject: job failed on hadoop 2

Hi,

When I try to run a MapReduce job on Hadoop 2, I am facing the issue below.

What could be the problem?

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1229)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        ... 2 more



Thanks & Regards,
B Anil Kumar.


        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        ... 2 more



Thanks & Regards,
B Anil Kumar.


Re: job failed on hadoop 2

Posted by AnilKumar B <ak...@gmail.com>.
Thanks Vinay.

I am checking my code, but this exception appears only after map reaches
100%, so I cannot figure out where the issue could be.

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running
in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id :
attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id:
731113654 , 731113656
        at
org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)


In my mappers and reducers, I am emitting output in Avro format.
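
If each reduce attempt wrote to its own file, the inode ids could never collide. A minimal hypothetical sketch of that idea (the directory is a placeholder, and the usual org.apache.hadoop.fs imports plus the reducer's context are assumed; this is not my real job code):

    // Derive a per-attempt file name so no two writers ever hold
    // a lease on the same HDFS path.
    FileSystem fs = FileSystem.get(context.getConfiguration());
    Path out = new Path("/user/anil/output",                // placeholder dir
        "part-" + context.getTaskAttemptID() + ".avro");    // unique per attempt
    FSDataOutputStream stream = fs.create(out, false);      // false = never overwrite
    // With overwrite=false a competing writer fails fast with
    // FileAlreadyExistsException instead of silently fencing this one.

The standard record writers already behave this way: FileOutputFormat writes each attempt under a temporary directory and lets the output committer promote it on success, so staying on the framework's writer avoids the collision entirely.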



Thanks & Regards,
B Anil Kumar.


On Mon, Feb 24, 2014 at 3:35 PM, Vinayakumar B <vi...@huawei.com> wrote:

>  Hi Anil,
>
>
>
> I think multiple clients/tasks are trying to write to the same file with
> *overwrite* enabled.
>
>
>
> The second client overwrites the first client's file, and the first
> client gets the exception below.
>
>
>
> Please check.
>
>
>
> Regards,
>
> Vinayakumar B
>
>
>
> *From:* AnilKumar B [mailto:akumarb2010@gmail.com]
> *Sent:* 24 February 2014 15:32
> *To:* user@hadoop.apache.org
> *Subject:* job failed on hadoop 2
>
>
>
> Hi,
>
>
>
> When I try to run a MapReduce job on Hadoop 2, I am facing the issue below.
>
>
>
> What could be the problem?
>
>
>
> 14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running
> in uber mode : false
>
> 14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
>
> 14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
>
> 14/02/24 02:24:22 INFO mapreduce.Job: Task Id :
> attempt_1392973982912_14477_r_000000_0, Status : FAILED
>
> Error: java.io.FileNotFoundException: ID mismatch. Request id and saved
> id: 731113654 , 731113656
>
>         at
> org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
>
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)
>
>
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>
>         at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>
>         at
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1229)
>
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>
> Caused by:
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): ID
> mismatch. Request id and saved id: 731113654 , 731113656
>
>         at
> org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
>
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)
>
>
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>
>         at $Proxy10.addBlock(Unknown Source)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>
>         at java.lang.reflect.Method.invoke(Method.java:597)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>
>         at $Proxy10.addBlock(Unknown Source)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>
>         ... 2 more
>
>
>
>
>
>
>
> Thanks & Regards,
> B Anil Kumar.
>


RE: job failed on hadoop 2

Posted by Vinayakumar B <vi...@huawei.com>.
Hi Anil,

I think multiple clients/tasks are trying to write to the same file with overwrite enabled.

The second client overwrites the first client's file, and the first client gets the exception below.

Please check.
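
For illustration, a minimal hypothetical sketch of that pattern (conf, the path and the payload are placeholders, not your actual code). If two tasks run this concurrently, the second create() replaces the file's inode, and the first writer's next addBlock() fails with exactly this ID mismatch:

    // Both tasks open the SAME fixed path with overwrite enabled.
    FileSystem fs = FileSystem.get(conf);                   // conf: the job Configuration
    Path out = new Path("/user/anil/output/result.avro");   // shared path (placeholder)
    FSDataOutputStream stream = fs.create(out, true);       // true = overwrite
    // The second task's create() deletes the first task's file and allocates a
    // new inode id. The first task still holds the old file id, so its next
    // addBlock() RPC is checked against the new inode and the NameNode throws
    // "FileNotFoundException: ID mismatch. Request id and saved id: ...".
    stream.write(payload);
    stream.close();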

Regards,
Vinayakumar B

From: AnilKumar B [mailto:akumarb2010@gmail.com]
Sent: 24 February 2014 15:32
To: user@hadoop.apache.org
Subject: job failed on hadoop 2

Hi,

When I try to run a MapReduce job on Hadoop 2, I am facing the issue below.

What could be the problem?

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1229)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        ... 2 more



Thanks & Regards,
B Anil Kumar.

RE: job failed on hadoop 2

Posted by Vinayakumar B <vi...@huawei.com>.
Hi Anil,

I think multiple clients/tasks are trying to write to same file with overwrite enabled

Second client is overwriting the first client's file, and first client is getting the below mentioned exception.

Please check ..

Regards,
Vinayakumar B

From: AnilKumar B [mailto:akumarb2010@gmail.com]
Sent: 24 February 2014 15:32
To: user@hadoop.apache.org
Subject: job failed on hadoop 2

Hi,

When I try to run MapReduce job on Hadoop 2, I am facing below issue.

What could be the problem?

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1229)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        ... 2 more



Thanks & Regards,
B Anil Kumar.

RE: job failed on hadoop 2

Posted by Vinayakumar B <vi...@huawei.com>.
Hi Anil,

I think multiple clients/tasks are trying to write to same file with overwrite enabled

Second client is overwriting the first client's file, and first client is getting the below mentioned exception.

Please check ..

Regards,
Vinayakumar B

From: AnilKumar B [mailto:akumarb2010@gmail.com]
Sent: 24 February 2014 15:32
To: user@hadoop.apache.org
Subject: job failed on hadoop 2

Hi,

When I try to run MapReduce job on Hadoop 2, I am facing below issue.

What could be the problem?

14/02/24 02:24:05 INFO mapreduce.Job: Job job_1392973982912_14477 running in uber mode : false
14/02/24 02:24:05 INFO mapreduce.Job:  map 0% reduce 0%
14/02/24 02:24:14 INFO mapreduce.Job:  map 100% reduce 0%
14/02/24 02:24:22 INFO mapreduce.Job: Task Id : attempt_1392973982912_14477_r_000000_0, Status : FAILED
Error: java.io.FileNotFoundException: ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1229)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): ID mismatch. Request id and saved id: 731113654 , 731113656
        at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2773)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2567)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2480)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2056)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1547)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2054)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        ... 2 more



Thanks & Regards,
B Anil Kumar.
