Posted to mapreduce-user@hadoop.apache.org by Silvina Caíno Lores <si...@gmail.com> on 2014/01/08 09:59:36 UTC

Re: Authentication issue on Hadoop 2.2.0

I found out that building my code with Maven (along with the Pipes examples)
made it work. Any clues why?
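
For context, a Pipes job like this is a C++ program built against the Hadoop
Pipes library, along the lines of the bundled wordcount example. A minimal
sketch, assuming the standard Pipes C++ API (class names are illustrative and
header paths may differ between Hadoop versions):

#include <string>
#include <vector>

#include "Pipes.hh"
#include "TemplateFactory.hh"
#include "StringUtils.hh"

// Mapper: emit each word of the input line with a count of 1.
class WordCountMap : public HadoopPipes::Mapper {
public:
  WordCountMap(HadoopPipes::TaskContext& context) {}
  void map(HadoopPipes::MapContext& context) {
    std::vector<std::string> words =
        HadoopUtils::splitString(context.getInputValue(), " ");
    for (size_t i = 0; i < words.size(); ++i) {
      context.emit(words[i], "1");
    }
  }
};

// Reducer: sum the counts for each word.
class WordCountReduce : public HadoopPipes::Reducer {
public:
  WordCountReduce(HadoopPipes::TaskContext& context) {}
  void reduce(HadoopPipes::ReduceContext& context) {
    int sum = 0;
    while (context.nextValue()) {
      sum += HadoopUtils::toInt(context.getInputValue());
    }
    context.emit(context.getInputKey(), HadoopUtils::toString(sum));
  }
};

int main(int argc, char** argv) {
  // runTask drives the binary protocol with the Java parent process,
  // including the authentication step that fails in the trace below.
  return HadoopPipes::runTask(
      HadoopPipes::TemplateFactory<WordCountMap, WordCountReduce>());
}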

Thanks,
Silvina


On 19 December 2013 13:16, Silvina Caíno Lores <si...@gmail.com> wrote:

> And I've found out that it is caused by this exception:
>
> 2013-12-19 13:14:28,237 ERROR [pipe-uplink-handler]
> org.apache.hadoop.mapred.pipes.BinaryProtocol: java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:267)
> at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
> at
> org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:125)
>
>
> On 18 December 2013 10:52, Silvina Caíno Lores <si...@gmail.com> wrote:
>
>> I forgot to mention that the wordcount pipes example runs successfully.
>>
>>
>> On 18 December 2013 10:50, Silvina Caíno Lores <si...@gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> I'm working with a single-node cluster and a Hadoop Pipes job that
>>> throws the following exception on execution:
>>>
>>> 13/12/18 10:44:55 INFO mapreduce.Job: Running job: job_1387359324416_0002
>>> 13/12/18 10:45:03 INFO mapreduce.Job: Job job_1387359324416_0002 running
>>> in uber mode : false
>>> 13/12/18 10:45:03 INFO mapreduce.Job: map 0% reduce 0%
>>> 13/12/18 10:45:08 INFO mapreduce.Job: Task Id :
>>> attempt_1387359324416_0002_m_000000_0, Status : FAILED
>>> Error: java.io.IOException
>>> at
>>> org.apache.hadoop.mapred.pipes.OutputHandler.waitForAuthentication(OutputHandler.java:186)
>>> at
>>> org.apache.hadoop.mapred.pipes.Application.waitForAuthentication(Application.java:195)
>>> at
>>> org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:150)
>>> at
>>> org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
>>> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:171)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:166)
>>>
>>> /// same exception several times ///
>>>
>>> 13/12/18 10:45:25 INFO mapreduce.Job: map 93% reduce 0%
>>> 13/12/18 10:45:26 INFO mapreduce.Job: map 100% reduce 100%
>>> 13/12/18 10:45:26 INFO mapreduce.Job: Job job_1387359324416_0002 failed
>>> with state FAILED due to: Task failed task_1387359324416_0002_m_000000
>>> Job failed as tasks failed. failedMaps:1 failedReduces:0
>>>
>>>
>>> By the way, I don't really get why the job shows 100% progress when it
>>> actually failed.
>>>
>>> Any ideas? Thanks in advance!
>>>
>>> Best,
>>> Silvina
>>>
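
For anyone hitting the same trace: the IOException above is raised while
OutputHandler.waitForAuthentication waits for the C++ child to complete an
authentication exchange with the Java parent over the pipes socket, and the
later EOFException in BinaryProtocol's uplink reader is consistent with the
child going away before that exchange finishes. The exchange is, roughly, a
challenge/response keyed with a secret shared between the two processes. The
sketch below is only a generic illustration of that pattern (HMAC-SHA1 via
OpenSSL), not the actual Hadoop Pipes implementation, and the function name
is made up for illustration.

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <string>

// Generic challenge/response digest, NOT the Hadoop Pipes code: the child
// proves knowledge of a shared secret by returning HMAC-SHA1(secret,
// challenge) for a challenge chosen by the parent, which computes the same
// digest and compares.
std::string respondToChallenge(const std::string& secret,
                               const std::string& challenge) {
  unsigned char digest[EVP_MAX_MD_SIZE];
  unsigned int len = 0;
  HMAC(EVP_sha1(),
       secret.data(), static_cast<int>(secret.size()),
       reinterpret_cast<const unsigned char*>(challenge.data()),
       challenge.size(),
       digest, &len);
  return std::string(reinterpret_cast<char*>(digest), len);
}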