Posted to hdfs-user@hadoop.apache.org by Visioner Sadak <vi...@gmail.com> on 2012/08/27 10:05:32 UTC

Error while executing HAR job

Hello experts,
While creating a HAR file, the job sometimes executes successfully and sometimes throws an error. Any idea why this is happening? It is a really weird error. I am running Hadoop in pseudo-distributed mode on Windows using Cygwin, and I am getting the error below:

12/08/27 13:20:11 INFO mapreduce.Job: Task Id : attempt_201208271315_0001_m_000001_2, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:225)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:212)
12/08/27 13:20:11 WARN mapreduce.Job: Error reading task output http://dell-PC:50060/tasklog?plaintext=true&attemptid=attempt_201208271315_0001_m_000001_2&filter=stdout
12/08/27 13:20:11 WARN mapreduce.Job: Error reading task output http://dell-PC:50060/tasklog?plaintext=true&attemptid=attempt_201208271315_0001_m_000001_2&filter=stderr
12/08/27 13:20:23 INFO mapreduce.Job: Job complete: job_201208271315_0001
12/08/27 13:20:23 INFO mapreduce.Job: Counters: 4
        Job Counters
                Total time spent by all maps waiting after reserving slots (ms)=0
                Total time spent by all reduces waiting after reserving slots (ms)=0
                SLOTS_MILLIS_MAPS=74611
                SLOTS_MILLIS_REDUCES=0
Job failed!
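
For context, the HAR is created with the "hadoop archive" command, which is what launches the MapReduce job whose output is shown above. A minimal sketch, assuming placeholder paths and archive name (none of these are taken from the original post):

    # Pack /user/hadoop/input into foo.har under /user/hadoop/archives;
    # this kicks off the MapReduce job that is failing above.
    hadoop archive -archiveName foo.har -p /user/hadoop input /user/hadoop/archives

    # Once the job succeeds, the archive can be browsed through the har:// scheme.
    hadoop fs -ls har:///user/hadoop/archives/foo.har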

Re: Error while executing HAR job

Posted by Visioner Sadak <vi...@gmail.com>.
On Mon, Aug 27, 2012 at 1:35 PM, Visioner Sadak <vi...@gmail.com> wrote:

> Hello experts,
> While creating a HAR file, the job sometimes executes successfully and sometimes throws an error. Any idea why this is happening? It is a really weird error. I am running Hadoop in pseudo-distributed mode on Windows using Cygwin, and I am getting the error below:
>
> 12/08/27 13:20:11 INFO mapreduce.Job: Task Id : attempt_201208271315_0001_m_000001_2, Status : FAILED
> java.lang.Throwable: Child Error
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:225)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:212)
> 12/08/27 13:20:11 WARN mapreduce.Job: Error reading task output http://dell-PC:50060/tasklog?plaintext=true&attemptid=attempt_201208271315_0001_m_000001_2&filter=stdout
> 12/08/27 13:20:11 WARN mapreduce.Job: Error reading task output http://dell-PC:50060/tasklog?plaintext=true&attemptid=attempt_201208271315_0001_m_000001_2&filter=stderr
> 12/08/27 13:20:23 INFO mapreduce.Job: Job complete: job_201208271315_0001
> 12/08/27 13:20:23 INFO mapreduce.Job: Counters: 4
>         Job Counters
>                 Total time spent by all maps waiting after reserving slots (ms)=0
>                 Total time spent by all reduces waiting after reserving slots (ms)=0
>                 SLOTS_MILLIS_MAPS=74611
>                 SLOTS_MILLIS_REDUCES=0
> Job failed!
>

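The "Error reading task output" warnings in the quoted log mean the client could not fetch the failed task's stdout/stderr over HTTP, so the real cause of the "Child Error" is not visible here. In pseudo-distributed mode the child logs can usually be read directly from the local TaskTracker log directory instead; a hedged sketch, assuming a default Hadoop 1.x layout with logs under $HADOOP_HOME/logs (the exact path depends on the version and on hadoop.log.dir):

    # Child task logs are written under the TaskTracker's userlogs directory;
    # on some versions the attempt directory sits inside a job-id subdirectory.
    cd $HADOOP_HOME/logs/userlogs
    cat attempt_201208271315_0001_m_000001_2/stderr
    cat attempt_201208271315_0001_m_000001_2/stdout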