Posted to common-user@hadoop.apache.org by Adarsh Sharma <ad...@orkash.com> on 2011/03/30 08:47:54 UTC

Hadoop Pipes Error

Dear all,

Today I faced a problem while running a map-reduce job in C++. I am not 
able to figure out the reason for the error below:


11/03/30 12:09:02 INFO mapred.JobClient: Task Id : 
attempt_201103301130_0011_m_000000_0, Status : FAILED
java.io.IOException: pipe child exception
        at 
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at 
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at 
org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at 
org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at 
org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

attempt_201103301130_0011_m_000000_0: Hadoop Pipes Exception: failed to 
open  at wordcount-nopipe.cc:82 in 
WordCountReader::WordCountReader(HadoopPipes::MapContext&)
11/03/30 12:09:02 INFO mapred.JobClient: Task Id : 
attempt_201103301130_0011_m_000001_0, Status : FAILED
java.io.IOException: pipe child exception
        at 
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at 
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at 
org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at 
org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at 
org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

attempt_201103301130_0011_m_000001_0: Hadoop Pipes Exception: failed to 
open  at wordcount-nopipe.cc:82 in 
WordCountReader::WordCountReader(HadoopPipes::MapContext&)
11/03/30 12:09:02 INFO mapred.JobClient: Task Id : 
attempt_201103301130_0011_m_000002_0, Status : FAILED
java.io.IOException: pipe child exception
        at 
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at 
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at 
org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at 
org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at 
org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)
attempt_201103301130_0011_m_000002_1: Hadoop Pipes Exception: failed to 
open  at wordcount-nopipe.cc:82 in 
WordCountReader::WordCountReader(HadoopPipes::MapContext&)
11/03/30 12:09:15 INFO mapred.JobClient: Task Id : 
attempt_201103301130_0011_m_000000_2, Status : FAILED
java.io.IOException: pipe child exception
        at 
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at 
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:35

I tried to run *wordcount-nopipe.cc* program in 
*/home/hadoop/project/hadoop-0.20.2/src/examples/pipes/impl* directory.


make  wordcount-nopipe
bin/hadoop fs -put wordcount-nopipe   bin/wordcount-nopipe
bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D 
hadoop.pipes.java.recordwriter=true -input gutenberg -output 
gutenberg-out11 -program bin/wordcount-nopipe
                                                                         
                              or
bin/hadoop pipes -D hadoop.pipes.java.recordreader=false -D 
hadoop.pipes.java.recordwriter=false -input gutenberg -output 
gutenberg-out11 -program bin/wordcount-nopipe

but the error remains the same. I have also attached my Makefile.
Please share your comments on it.

I am able to run a simple wordcount.cpp program on the Hadoop cluster, but 
I don't know why this program fails with a broken-pipe error.
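
The EOFException frames in the trace above are a secondary symptom: the C++ child process has already died (here because it could not open its input file, per the "failed to open at wordcount-nopipe.cc:82" line), and the parent task then hits end-of-stream while reading a varint from the uplink pipe. A stdlib-only sketch of that condition, using hypothetical names rather than Hadoop's actual classes:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class VIntEofDemo {
    // Hypothetical helper modeled loosely on WritableUtils.readVInt:
    // it must pull at least one length byte before anything else.
    static int readFirstVIntByte(DataInputStream in) throws IOException {
        return in.readByte(); // throws EOFException if the stream is already closed
    }

    public static void main(String[] args) {
        // An empty stream stands in for the pipe after the C++ child dies.
        DataInputStream deadPipe =
                new DataInputStream(new ByteArrayInputStream(new byte[0]));
        try {
            readFirstVIntByte(deadPipe);
            System.out.println("read ok");
        } catch (EOFException e) {
            // Same condition as the "Caused by: java.io.EOFException ...
            // readVLong/readVInt" frames in the task log above.
            System.out.println("EOFException: uplink stream ended mid-read");
        } catch (IOException e) {
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```

In other words, fix the C++ "failed to open" error first; the Java-side exception will disappear with it.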



Thanks & best regards
Adarsh Sharma

Re: Hadoop Pipes Error

Posted by Steve Loughran <st...@apache.org>.
On 31/03/11 07:53, Adarsh Sharma wrote:
> Thanks Amareshwari,
>
> here is the posting :
> The *nopipe* example needs more documentation. It assumes that it is run
> with the InputFormat from src/test/org/apache/*hadoop*/mapred/*pipes*/
> *WordCountInputFormat*.java, which has a very specific input split
> format. By running with a TextInputFormat, it will send binary bytes as
> the input split and won't work right. The *nopipe* example should
> probably be recoded *to* use libhdfs *too*, but that is more complicated
> *to* get running as a unit test. Also note that since the C++ example is
> using local file reads, it will only work on a cluster if you have nfs
> or something working across the cluster.
>
> Please correct me if I'm wrong.
>
> I need to run it with TextInputFormat.
>
> If possible, please explain the above post more clearly.


Here goes.

1.
 > The *nopipe* example needs more documentation. It assumes that it is run
 > with the InputFormat from src/test/org/apache/*hadoop*/mapred/*pipes*/
 > *WordCountInputFormat*.java, which has a very specific input split
 > format. By running with a TextInputFormat, it will send binary bytes as
 > the input split and won't work right.

The input for the pipe is the content generated by
src/test/org/apache/hadoop/mapred/pipes/WordCountInputFormat.java

This is covered here.
http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Example%3A+WordCount+v1.0

I would recommend following the tutorial there, or either of the books 
"Hadoop: The Definitive Guide" or "Hadoop in Action". Both authors earn 
their living by explaining how to use Hadoop, which is why both books 
explain it well.
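
To make the input-split point concrete: the split reaches the C++ child as serialized bytes, and a Java FileSplit-style serialization is length-prefixed binary with numeric offset fields, not the bare path string a reader like WordCountReader tries to open. The sketch below is illustrative only (Hadoop's real wire format differs in detail):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SplitBytesDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical serialization, loosely shaped like a FileSplit:
        // a length-prefixed path plus 8-byte start and length fields.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF("hdfs://ws-test:54310/user/hadoop/gutenberg/part-0");
        out.writeLong(0L);      // split start offset
        out.writeLong(65536L);  // split length
        out.flush();

        byte[] binarySplit = buf.toByteArray();
        byte[] plainPath =
                "/user/hadoop/gutenberg/part-0".getBytes(StandardCharsets.UTF_8);

        // A C++ reader expecting a bare path cannot parse the binary form:
        // the leading bytes are a UTF length prefix, and the trailing 16
        // bytes are raw longs, not printable characters.
        System.out.println("binary split bytes: " + binarySplit.length);
        System.out.println("plain path bytes:   " + plainPath.length);
    }
}
```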

2.
 >The *nopipe* example should
 > probably be recoded *to* use libhdfs *too*, but that is more complicated
 > *to* get running as a unit test.

Ignore that; it's irrelevant for your problem, as Owen is discussing 
automated testing.

3.

 > Also note that since the C++ example is
 > using local file reads, it will only work on a cluster if you have nfs
 > or something working across the cluster.

Unless your cluster has a shared filesystem at the OS level, it won't 
work. Either have a shared filesystem like NFS, or run it on a single 
machine.

-Steve





Re: Hadoop Pipes Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Thanks Amareshwari, I found it; unfortunately it results in another error:

bash-3.2$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D 
hadoop.pipes.java.recordwriter=true -libjars 
/home/hadoop/project/hadoop-0.20.2/hadoop-0.20.2-test.jar -inputformat 
org.apache.hadoop.mapred.pipes.WordCountInputFormat -input gutenberg 
-output gutenberg-out101  -program bin/wordcount-nopipe
11/03/31 16:36:26 WARN mapred.JobClient: No job jar file set.  User 
classes may not be found. See JobConf(Class) or JobConf#setJar(String).
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: 
hdfs://ws-test:54310/user/hadoop/gutenberg, expected: file:
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
        at 
org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
        at 
org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:273)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:721)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:746)
        at 
org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:465)
        at 
org.apache.hadoop.mapred.pipes.WordCountInputFormat.getSplits(WordCountInputFormat.java:57)
        at 
org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:810)
        at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:781)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
        at 
org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:248)
        at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:479)
        at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)
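
For context on the "Wrong FS" failure: WordCountInputFormat is test code that lists its input through the local filesystem (note the RawLocalFileSystem frames in the trace), so an hdfs:// input path fails the filesystem's scheme check. A rough sketch of that kind of check, not Hadoop's actual implementation:

```java
import java.net.URI;

public class CheckPathDemo {
    // Hypothetical re-creation of the check FileSystem.checkPath performs:
    // a filesystem bound to one URI scheme rejects paths from another.
    static void checkPath(URI fsUri, URI path) {
        String expected = fsUri.getScheme();
        String actual = path.getScheme();
        if (actual != null && !actual.equals(expected)) {
            throw new IllegalArgumentException(
                    "Wrong FS: " + path + ", expected: " + expected + ":");
        }
    }

    public static void main(String[] args) {
        URI localFs = URI.create("file:///");
        URI input = URI.create("hdfs://ws-test:54310/user/hadoop/gutenberg");
        try {
            checkPath(localFs, input);
            System.out.println("path accepted");
        } catch (IllegalArgumentException e) {
            // Same shape as the "Wrong FS: hdfs://..., expected: file:"
            // message in the trace above.
            System.out.println(e.getMessage());
        }
    }
}
```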

Best regards,  Adarsh


Amareshwari Sri Ramadasu wrote:
> Adarsh,
>
> The input format is in the test jar, so pass -libjars <full path to testjar> to your command. The -libjars option must come before program-specific options, i.e. just after your -D parameters.
>
> -Amareshwari
>
> On 3/31/11 3:45 PM, "Adarsh Sharma" <ad...@orkash.com> wrote:
>
> Amareshwari Sri Ramadasu wrote:
> Re: Hadoop Pipes Error You cannot run it with TextInputFormat. You should run it with org.apache.hadoop.mapred.pipes.WordCountInputFormat. You can pass the input format via the -inputformat option.
> I did not try it myself, but it should work.
>
>
>
>
> Here is the command that I am trying and it results in exception:
>
> bash-3.2$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true  -inputformat org.apache.hadoop.mapred.pipes.WordCountInputFormat -input gutenberg -output gutenberg-out101 -program bin/wordcount-nopipe
> Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.mapred.pipes.WordCountInputFormat
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:247)
>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
>         at org.apache.hadoop.mapred.pipes.Submitter.getClass(Submitter.java:372)
>         at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:421)
>         at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)
>
>
> Thanks , Adarsh
>
>
>   


Re: Hadoop Pipes Error

Posted by Amareshwari Sri Ramadasu <am...@yahoo-inc.com>.
Also see TestPipes.java for more details.


On 3/31/11 4:29 PM, "Amareshwari Sriramadasu" <am...@yahoo-inc.com> wrote:

Adarsh,

The input format is in the test jar, so pass -libjars <full path to testjar> to your command. The -libjars option must come before program-specific options, i.e. just after your -D parameters.

-Amareshwari

On 3/31/11 3:45 PM, "Adarsh Sharma" <ad...@orkash.com> wrote:

Amareshwari Sri Ramadasu wrote:
Re: Hadoop Pipes Error You cannot run it with TextInputFormat. You should run it with org.apache.hadoop.mapred.pipes.WordCountInputFormat. You can pass the input format via the -inputformat option.
I did not try it myself, but it should work.




Here is the command that I am trying and it results in exception:

bash-3.2$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true  -inputformat org.apache.hadoop.mapred.pipes.WordCountInputFormat -input gutenberg -output gutenberg-out101 -program bin/wordcount-nopipe
Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.mapred.pipes.WordCountInputFormat
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
        at org.apache.hadoop.mapred.pipes.Submitter.getClass(Submitter.java:372)
        at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:421)
        at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)


Thanks , Adarsh


Re: Hadoop Pipes Error

Posted by Amareshwari Sri Ramadasu <am...@yahoo-inc.com>.
Adarsh,

The input format is in the test jar, so pass -libjars <full path to testjar> to your command. The -libjars option must come before program-specific options, i.e. just after your -D parameters.

-Amareshwari

On 3/31/11 3:45 PM, "Adarsh Sharma" <ad...@orkash.com> wrote:

Amareshwari Sri Ramadasu wrote:
Re: Hadoop Pipes Error You cannot run it with TextInputFormat. You should run it with org.apache.hadoop.mapred.pipes.WordCountInputFormat. You can pass the input format via the -inputformat option.
I did not try it myself, but it should work.




Here is the command that I am trying and it results in exception:

bash-3.2$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true  -inputformat org.apache.hadoop.mapred.pipes.WordCountInputFormat -input gutenberg -output gutenberg-out101 -program bin/wordcount-nopipe
Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.mapred.pipes.WordCountInputFormat
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
        at org.apache.hadoop.mapred.pipes.Submitter.getClass(Submitter.java:372)
        at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:421)
        at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)


Thanks , Adarsh


Re: Hadoop Pipes Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Amareshwari Sri Ramadasu wrote:
> You cannot run it with TextInputFormat. You should run it with 
> org.apache.hadoop.mapred.pipes.*WordCountInputFormat*. You can pass 
> the input format via the -inputformat option.
> I did not try it myself, but it should work.
>
>

Here is the command that I am trying and it results in exception:

bash-3.2$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D 
hadoop.pipes.java.recordwriter=true  -inputformat 
org.apache.hadoop.mapred.pipes.WordCountInputFormat -input gutenberg 
-output gutenberg-out101 -program bin/wordcount-nopipe
Exception in thread "main" java.lang.ClassNotFoundException: 
org.apache.hadoop.mapred.pipes.WordCountInputFormat
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
        at 
org.apache.hadoop.mapred.pipes.Submitter.getClass(Submitter.java:372)
        at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:421)
        at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)


Thanks , Adarsh

Re: Hadoop Pipes Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Hi, good morning to all of you.

Any update on the problem below?


Thanks & best Regards,
Adarsh Sharma

Amareshwari Sri Ramadasu wrote:
> You cannot run it with TextInputFormat. You should run it with 
> org.apache.hadoop.mapred.pipes.*WordCountInputFormat*. You can pass 
> the input format via the -inputformat option.
> I did not try it myself, but it should work.
>
> -Amareshwari
>
> On 3/31/11 12:23 PM, "Adarsh Sharma" <ad...@orkash.com> wrote:
>
>     Thanks Amareshwari,
>
>     here is the posting :
>     The *nopipe* example needs more documentation.  It assumes that it
>     is  
>     run with the InputFormat from
>     src/test/org/apache/*hadoop*/mapred/*pipes*/
>     *WordCountInputFormat*.java, which has a very specific input split  
>     format. By running with a TextInputFormat, it will send binary bytes  
>     as the input split and won't work right. The *nopipe* example should  
>     probably be recoded *to* use libhdfs *too*, but that is more
>     complicated  
>     *to* get running as a unit test. Also note that since the C++
>     example  
>     is using local file reads, it will only work on a cluster if you
>     have  
>     nfs or something working across the cluster.
>
>     Please correct me if I'm wrong.
>
>     I need to run it with TextInputFormat.
>
>     If possible, please explain the above post more clearly.
>
>
>     Thanks & best Regards,
>     Adarsh Sharma
>
>
>
>     Amareshwari Sri Ramadasu wrote:
>
>
>         Here is an answer for your question in old mail archive:
>         http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html
>
I don't understand the reason for this, or its solution.
>
>
>         On 3/31/11 10:15 AM, "Adarsh Sharma"
>         <ad...@orkash.com> <ma...@orkash.com>
>          wrote:
>
>         Any update on the below error.
>
>         Please guide.
>
>
>         Thanks & best Regards,
>         Adarsh Sharma
>
>
>
>         Adarsh Sharma wrote:
>           
>          
>
>
>             Dear all,
>
>             Today I faced a problem while running a map-reduce job in
>             C++. I am
>             not able to figure out the reason for the error below:
>
>
>             11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
>             attempt_201103301130_0011_m_000000_0, Status : FAILED
>             java.io.IOException: pipe child exception
>                     at
>             org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
>                     at
>             org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
>                     at
>             org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
>                     at
>             org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>                     at org.apache.hadoop.mapred.Child.main(Child.java:170)
>             Caused by: java.io.EOFException
>                     at
>             java.io.DataInputStream.readByte(DataInputStream.java:250)
>                     at
>             org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
>                     at
>             org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
>                     at
>             org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)
>
>             attempt_201103301130_0011_m_000000_0: Hadoop Pipes
>             Exception: failed
>             to open  at wordcount-nopipe.cc:82 in
>             WordCountReader::WordCountReader(HadoopPipes::MapContext&)
>             11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
>             attempt_201103301130_0011_m_000001_0, Status : FAILED
>             java.io.IOException: pipe child exception
>                     at
>             org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
>                     at
>             org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
>                     at
>             org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
>                     at
>             org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>                     at org.apache.hadoop.mapred.Child.main(Child.java:170)
>             Caused by: java.io.EOFException
>                     at
>             java.io.DataInputStream.readByte(DataInputStream.java:250)
>                     at
>             org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
>                     at
>             org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
>                     at
>             org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)
>
>             attempt_201103301130_0011_m_000001_0: Hadoop Pipes
>             Exception: failed
>             to open  at wordcount-nopipe.cc:82 in
>             WordCountReader::WordCountReader(HadoopPipes::MapContext&)
>             11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
>             attempt_201103301130_0011_m_000002_0, Status : FAILED
>             java.io.IOException: pipe child exception
>                     at
>             org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
>                     at
>             org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
>                     at
>             org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
>                     at
>             org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>                     at org.apache.hadoop.mapred.Child.main(Child.java:170)
>             Caused by: java.io.EOFException
>                     at
>             java.io.DataInputStream.readByte(DataInputStream.java:250)
>                     at
>             org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
>                     at
>             org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
>                     at
>             org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)
>             attempt_201103301130_0011_m_000002_1: Hadoop Pipes
>             Exception: failed
>             to open  at wordcount-nopipe.cc:82 in
>             WordCountReader::WordCountReader(HadoopPipes::MapContext&)
>             11/03/30 12:09:15 INFO mapred.JobClient: Task Id :
>             attempt_201103301130_0011_m_000000_2, Status : FAILED
>             java.io.IOException: pipe child exception
>                     at
>             org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
>                     at
>             org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
>                     at
>             org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:35
>
>             I tried to run *wordcount-nopipe.cc* program in
>             */home/hadoop/project/hadoop-0.20.2/src/examples/pipes/impl*
>             directory.
>
>
>             make  wordcount-nopipe
>             bin/hadoop fs -put wordcount-nopipe   bin/wordcount-nopipe
>             bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D
>             hadoop.pipes.java.recordwriter=true -input gutenberg -output
>             gutenberg-out11 -program bin/wordcount-nopipe
>
>                                              or
>             bin/hadoop pipes -D hadoop.pipes.java.recordreader=false -D
>             hadoop.pipes.java.recordwriter=false -input gutenberg -output
>             gutenberg-out11 -program bin/wordcount-nopipe
>
>             but the error remains the same. I have also attached my Makefile.
>             Please share your comments on it.
>
>             I am able to run a simple wordcount.cpp program on the Hadoop
>             cluster, but
>             I don't know why this program fails with a broken-pipe error.
>
>
>
>             Thanks & best regards
>             Adarsh Sharma
>                 
>              
>
>
>
>
>
>           
>
>
>


Re: Hadoop Pipes Error

Posted by Amareshwari Sri Ramadasu <am...@yahoo-inc.com>.
You cannot run it with TextInputFormat. You should run it with org.apache.hadoop.mapred.pipes.WordCountInputFormat. You can pass the input format via the -inputformat option.
I did not try it myself, but it should work.

-Amareshwari

On 3/31/11 12:23 PM, "Adarsh Sharma" <ad...@orkash.com> wrote:

Thanks Amareshwari,

here is the posting :
The nopipe example needs more documentation.  It assumes that it is
run with the InputFormat from src/test/org/apache/hadoop/mapred/pipes/
WordCountInputFormat.java, which has a very specific input split
format. By running with a TextInputFormat, it will send binary bytes
as the input split and won't work right. The nopipe example should
probably be recoded to use libhdfs too, but that is more complicated
to get running as a unit test. Also note that since the C++ example
is using local file reads, it will only work on a cluster if you have
nfs or something working across the cluster.

Please correct me if I'm wrong.

I need to run it with TextInputFormat.

If possible, please explain the above post more clearly.


Thanks & best Regards,
Adarsh Sharma



Amareshwari Sri Ramadasu wrote:

Here is an answer for your question in old mail archive:
http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html

On 3/31/11 10:15 AM, "Adarsh Sharma" <ad...@orkash.com> <ma...@orkash.com>  wrote:

Any update on the below error.

Please guide.


Thanks & best Regards,
Adarsh Sharma



Adarsh Sharma wrote:



Dear all,

Today I faced a problem while running a map-reduce job in C++. I am
not able to figure out the reason for the error below:


11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
attempt_201103301130_0011_m_000000_0, Status : FAILED
java.io.IOException: pipe child exception
        at
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at
org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at
org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at
org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

attempt_201103301130_0011_m_000000_0: Hadoop Pipes Exception: failed
to open  at wordcount-nopipe.cc:82 in
WordCountReader::WordCountReader(HadoopPipes::MapContext&)
11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
attempt_201103301130_0011_m_000001_0, Status : FAILED
java.io.IOException: pipe child exception
        at
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at
org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at
org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at
org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

attempt_201103301130_0011_m_000001_0: Hadoop Pipes Exception: failed
to open  at wordcount-nopipe.cc:82 in
WordCountReader::WordCountReader(HadoopPipes::MapContext&)
11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
attempt_201103301130_0011_m_000002_0, Status : FAILED
java.io.IOException: pipe child exception
        at
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:250)
        at
org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at
org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at
org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)
attempt_201103301130_0011_m_000002_1: Hadoop Pipes Exception: failed
to open  at wordcount-nopipe.cc:82 in
WordCountReader::WordCountReader(HadoopPipes::MapContext&)
11/03/30 12:09:15 INFO mapred.JobClient: Task Id :
attempt_201103301130_0011_m_000000_2, Status : FAILED
java.io.IOException: pipe child exception
        at
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
        at
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:35

I tried to run *wordcount-nopipe.cc* program in
*/home/hadoop/project/hadoop-0.20.2/src/examples/pipes/impl* directory.


make  wordcount-nopipe
bin/hadoop fs -put wordcount-nopipe   bin/wordcount-nopipe
bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D
hadoop.pipes.java.recordwriter=true -input gutenberg -output
gutenberg-out11 -program bin/wordcount-nopipe

                                 or
bin/hadoop pipes -D hadoop.pipes.java.recordreader=false -D
hadoop.pipes.java.recordwriter=false -input gutenberg -output
gutenberg-out11 -program bin/wordcount-nopipe

but the error remains the same. I have also attached my Makefile.
Please share your comments on it.

I am able to run a simple wordcount.cpp program on the Hadoop cluster, but
I don't know why this program fails with a broken-pipe error.



Thanks & best regards
Adarsh Sharma










Re: Hadoop Pipes Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Thanks Amareshwari,

here is the posting :
The *nopipe* example needs more documentation.  It assumes that it is  
run with the InputFormat from src/test/org/apache/*hadoop*/mapred/*pipes*/
*WordCountInputFormat*.java, which has a very specific input split  
format. By running with a TextInputFormat, it will send binary bytes  
as the input split and won't work right. The *nopipe* example should  
probably be recoded *to* use libhdfs *too*, but that is more complicated  
*to* get running as a unit test. Also note that since the C++ example  
is using local file reads, it will only work on a cluster if you have  
nfs or something working across the cluster.

Please correct me if I'm wrong.

I need to run it with TextInputFormat.

If possible, please explain the above post more clearly.


Thanks & best Regards,
Adarsh Sharma



Amareshwari Sri Ramadasu wrote:
> Here is an answer for your question in old mail archive:
> http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html
>
> On 3/31/11 10:15 AM, "Adarsh Sharma" <ad...@orkash.com> wrote:
>
> Any update on the below error.
>
> Please guide.
>
>
> Thanks & best Regards,
> Adarsh Sharma
>
>
>


Re: Hadoop Pipes Error

Posted by Adarsh Sharma <ad...@orkash.com>.
What are the steps needed to debug the error and get wordcount-nopipe.cc
running properly?

If possible, please guide me through the steps.

Thanks & best  Regards,
Adarsh Sharma


Amareshwari Sri Ramadasu wrote:
> Here is an answer for your question in old mail archive:
> http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html
>
> On 3/31/11 10:15 AM, "Adarsh Sharma" <ad...@orkash.com> wrote:
>
> Any update on the below error.
>
> Please guide.
>
>
> Thanks & best Regards,
> Adarsh Sharma
>
>
>


Re: Hadoop Pipes Error

Posted by Amareshwari Sri Ramadasu <am...@yahoo-inc.com>.
Here is an answer for your question in old mail archive:
http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html

On 3/31/11 10:15 AM, "Adarsh Sharma" <ad...@orkash.com> wrote:

Any update on the below error.

Please guide.


Thanks & best Regards,
Adarsh Sharma






Re: Hadoop Pipes Error

Posted by Adarsh Sharma <ad...@orkash.com>.
Any update on the below error.

Please guide.


Thanks & best Regards,
Adarsh Sharma


