Posted to common-user@hadoop.apache.org by kaveh minooie <ka...@plutoz.com> on 2013/04/22 10:34:05 UTC

common error in map tasks

Hi,

Regardless of what job I run, there are always a few map tasks that fail 
with the following, very unhelpful, message (this is the entire error 
message):

java.lang.Throwable: Child Error
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 126.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


I would appreciate it if someone could show me how I could figure out 
why this error keeps happening.

thanks,

Re: common error in map tasks

Posted by 姚吉龙 <ge...@gmail.com>.
Mainly it is caused by java.child.opt and the number of map tasks.
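
Presumably this refers to the child JVM options and the per-node map slot count. As a sketch of where those knobs live on Hadoop 1.x (the property names are the standard mapred-site.xml ones; the values are illustrative only, not recommendations):

```xml
<!-- mapred-site.xml (Hadoop 1.x); values below are illustrative only -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>            <!-- heap for each child task JVM -->
</property>
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>                   <!-- concurrent map slots per TaskTracker -->
</property>
```

If the slot count times the child heap exceeds the node's physical memory, child JVMs can die at startup before writing any task logs.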
—
Sent from Mailbox for iPhone

On Tue, Apr 23, 2013 at 6:15 AM, kaveh minooie <ka...@plutoz.com> wrote:

> Thanks, Chris. I only run Nutch, so no to the external command. And I 
> just checked: it happens, or has happened, on all the nodes at some 
> point. I have to say, though, that it doesn't cause the job to fail or 
> anything; the map tasks that fail finish when they are re-spawned. It 
> is just annoying, and makes me think that some value somewhere in the 
> config files is either not correct or not optimal.
> On 04/22/2013 02:49 PM, Chris Nauroth wrote:
>> I'm not aware of any Hadoop-specific meaning for exit code 126.
>>   Typically, this is a standard Unix exit code used to indicate that a
>> command couldn't be executed.  Some reasons for this might be that the
>> command is not an executable file, or the command is an executable file
>> but the user doesn't have execute permissions.  (See below for an
>> example of each of these.)
>>
>> Does your job code attempt to exec an external command?  Also, are the
>> task failures consistently happening on the same set of nodes in your
>> cluster?  If so, then I recommend checking that the command has been
>> deployed and has the correct permissions on those nodes.
>>
>> Even if your code doesn't exec an external command, various parts of the
>> Hadoop code do this internally, so you still might have a case of a
>> misconfigured node.
>>
>> Hope this helps,
>> --Chris
>>
>> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>>  > ./BUILDING.txt
>> -bash: ./BUILDING.txt: Permission denied
>> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>>  > echo $?
>> 126
>>
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > ls -lrt exec
>> -rwx------  1 root  staff     0B Apr 22 14:43 exec*
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > whoami
>> chris
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > ./exec
>> bash: ./exec: Permission denied
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > echo $?
>> 126
>>
>>
>>
>> On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <kaveh@plutoz.com
>> <ma...@plutoz.com>> wrote:
>>
>>     thanks. that is the issue, there is no other log files. when i go to
>>     the attempt directory of that failed map task (e.g.
>>     userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0
>>     ) it is empty. there is no other log file. thou based on the counter
>>     value, I can say that it happens right at the beginning of the map
>>     task (counter is only 1 )
>>
>>
>>
>>
>>     On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>>
>>         Hi
>>
>>
>>         I have the same problem before
>>         I think this is caused by a memory shortage for the map task.
>>         It is just a suggestion; you can post your log
>>
>>
>>         BRs
>>         Geelong
>>         —
>>         Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>>
>>
>>
>>         On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com
>>         <ma...@plutoz.com>
>>         <mailto:kaveh@plutoz.com <ma...@plutoz.com>>> wrote:
>>
>>              HI
>>
>>              regardless of what job I run, there are always a few map
>>         tasks that
>>              fail with the following, very unhelpful, message: ( that is the
>>              entire error message)
>>
>>              java.lang.Throwable: Child Error
>>                  at
>>         org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>              Caused by: java.io.IOException: Task process exit with
>>         nonzero status of 126.
>>                  at
>>         org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>>
>>              I would appreciate it if someone could show me how I could
>>         figure
>>              out why this error keeps happening.
>>
>>              thanks,
>>
>>
>>
>>     --
>>     Kaveh Minooie
>>
>>
> -- 
> Kaveh Minooie

Re: common error in map tasks

Posted by kaveh minooie <ka...@plutoz.com>.
Thanks, Chris. I only run Nutch, so no to the external command. And I 
just checked: it happens, or has happened, on all the nodes at some 
point. I have to say, though, that it doesn't cause the job to fail or 
anything; the map tasks that fail finish when they are re-spawned. It is 
just annoying, and makes me think that some value somewhere in the 
config files is either not correct or not optimal.
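
For what it's worth, a quick sanity check one could run on a suspect node is to look at the generated task launch scripts. Everything below is an assumption, not something from this thread: the path stands in for whatever mapred.local.dir is set to in mapred-site.xml, and taskjvm.sh is the script Hadoop 1.x generates to launch the child JVM (if it can't be executed, the child exits with 126):

```shell
# Hypothetical diagnostic on one node (adjust LOCAL_DIR to your
# mapred.local.dir): count any generated child-launch scripts, and flag
# filesystems mounted noexec -- a noexec mount under the task dirs is one
# way an otherwise-correct script can fail to execute with status 126.
LOCAL_DIR=${LOCAL_DIR:-/tmp/hadoop-mapred/local}   # assumed path
found=$(find "$LOCAL_DIR" -name 'taskjvm.sh' 2>/dev/null | wc -l)
echo "taskjvm.sh scripts under $LOCAL_DIR: $found"
noexec=$(mount 2>/dev/null | grep -c noexec || true)
echo "noexec mounts: $noexec"
```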


On 04/22/2013 02:49 PM, Chris Nauroth wrote:
> I'm not aware of any Hadoop-specific meaning for exit code 126.
>   Typically, this is a standard Unix exit code used to indicate that a
> command couldn't be executed.  Some reasons for this might be that the
> command is not an executable file, or the command is an executable file
> but the user doesn't have execute permissions.  (See below for an
> example of each of these.)
>
> Does your job code attempt to exec an external command?  Also, are the
> task failures consistently happening on the same set of nodes in your
> cluster?  If so, then I recommend checking that the command has been
> deployed and has the correct permissions on those nodes.
>
> Even if your code doesn't exec an external command, various parts of the
> Hadoop code do this internally, so you still might have a case of a
> misconfigured node.
>
> Hope this helps,
> --Chris
>
> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>  > ./BUILDING.txt
> -bash: ./BUILDING.txt: Permission denied
> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>  > echo $?
> 126
>
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > ls -lrt exec
> -rwx------  1 root  staff     0B Apr 22 14:43 exec*
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > whoami
> chris
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > ./exec
> bash: ./exec: Permission denied
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > echo $?
> 126
>
>
>
> On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <kaveh@plutoz.com
> <ma...@plutoz.com>> wrote:
>
>     thanks. that is the issue, there is no other log files. when i go to
>     the attempt directory of that failed map task (e.g.
>     userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0
>     ) it is empty. there is no other log file. thou based on the counter
>     value, I can say that it happens right at the beginning of the map
>     task (counter is only 1 )
>
>
>
>
>     On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>
>         Hi
>
>
>         I have the same problem before
>         I think this is caused by a memory shortage for the map task.
>         It is just a suggestion; you can post your log
>
>
>         BRs
>         Geelong
>         —
>         Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>
>
>
>         On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com
>         <ma...@plutoz.com>
>         <mailto:kaveh@plutoz.com <ma...@plutoz.com>>> wrote:
>
>              HI
>
>              regardless of what job I run, there are always a few map
>         tasks that
>              fail with the following, very unhelpful, message: ( that is the
>              entire error message)
>
>              java.lang.Throwable: Child Error
>                  at
>         org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>              Caused by: java.io.IOException: Task process exit with
>         nonzero status of 126.
>                  at
>         org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
>
>              I would appreciate it if someone could show me how I could
>         figure
>              out why this error keeps happening.
>
>              thanks,
>
>
>
>     --
>     Kaveh Minooie
>
>

-- 
Kaveh Minooie

Re: common error in map tasks

Posted by Chris Nauroth <cn...@hortonworks.com>.
I'm not aware of any Hadoop-specific meaning for exit code 126.  Typically,
this is a standard Unix exit code used to indicate that a command couldn't
be executed.  Some reasons for this might be that the command is not an
executable file, or the command is an executable file but the user doesn't
have execute permissions.  (See below for an example of each of these.)

Does your job code attempt to exec an external command?  Also, are the task
failures consistently happening on the same set of nodes in your cluster?
 If so, then I recommend checking that the command has been deployed and
has the correct permissions on those nodes.

Even if your code doesn't exec an external command, various parts of the
Hadoop code do this internally, so you still might have a case of a
misconfigured node.

Hope this helps,
--Chris

[chris@Chriss-MacBook-Pro:ttys000] hadoop-common
> ./BUILDING.txt
-bash: ./BUILDING.txt: Permission denied
[chris@Chriss-MacBook-Pro:ttys000] hadoop-common
> echo $?
126

[chris@Chriss-MacBook-Pro:ttys000] test
> ls -lrt exec
-rwx------  1 root  staff     0B Apr 22 14:43 exec*
[chris@Chriss-MacBook-Pro:ttys000] test
> whoami
chris
[chris@Chriss-MacBook-Pro:ttys000] test
> ./exec
bash: ./exec: Permission denied
[chris@Chriss-MacBook-Pro:ttys000] test
> echo $?
126



On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <ka...@plutoz.com> wrote:

> thanks. that is the issue, there are no other log files. when i go to the
> attempt directory of that failed map task (e.g.
> userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0 ) it is
> empty. there is no other log file. though based on the counter value, I can
> say that it happens right at the beginning of the map task (counter is only 1 )
>
>
>
>
> On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>
>> Hi
>>
>>
>> I have the same problem before
>> I think this is caused by a memory shortage in the map task.
>> It is just a suggestion; you can post your log
>>
>>
>> BRs
>> Geelong
>> —
>> Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>>
>>
>>
>> On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com
>> <ma...@plutoz.com>> wrote:
>>
>>     HI
>>
>>     regardless of what job I run, there are always a few map tasks that
>>     fail with the following, very unhelpful, message: ( that is the
>>     entire error message)
>>
>>     java.lang.Throwable: Child Error
>>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>     Caused by: java.io.IOException: Task process exit with nonzero status
>> of 126.
>>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>>
>>     I would appreciate it if someone could show me how I could figure
>>     out why this error keeps happening.
>>
>>     thanks,
>>
>>
>>
> --
> Kaveh Minooie
>
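Chris's transcript above boils down to a small self-contained reproduction. This sketch uses only a temp directory, nothing cluster-specific, and shows the shell returning 126 for a file without the execute bit and 0 once the bit is set:

```shell
# Reproduce exit status 126: command found but not executable.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho hi\n' > "$tmpdir/task.sh"

# Without the execute bit, the shell refuses to run the file.
chmod 644 "$tmpdir/task.sh"
"$tmpdir/task.sh" 2>/dev/null
rc_noexec=$?
echo "without +x -> exit $rc_noexec"

# With the execute bit, the same file runs normally.
chmod 755 "$tmpdir/task.sh"
"$tmpdir/task.sh" > /dev/null
rc_ok=$?
echo "with +x    -> exit $rc_ok"

rm -rf "$tmpdir"
```

This matters for the thread because in MRv1 the TaskRunner launches the child JVM through a generated shell script under mapred.local.dir, so a 126 at task start can also mean that script could not be executed (for instance, the directory sits on a noexec mount), a cause worth checking alongside the ones Chris lists.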

Re: common error in map tasks

Posted by kaveh minooie <ka...@plutoz.com>.
thanks. that is the issue, there are no other log files. when i go to the 
attempt directory of that failed map task (e.g. 
userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0 ) it 
is empty. there is no other log file. though based on the counter value, I 
can say that it happens right at the beginning of the map task (the counter 
is only 1 )



On 04/22/2013 02:12 AM, 姚吉龙 wrote:
> Hi
>
>
> I have the same problem before
> I think this is caused by a memory shortage in the map task.
> It is just a suggestion; you can post your log
>
>
> BRs
> Geelong
> —
> Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>
>
> On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com
> <ma...@plutoz.com>> wrote:
>
>     HI
>
>     regardless of what job I run, there are always a few map tasks that
>     fail with the following, very unhelpful, message: ( that is the
>     entire error message)
>
>     java.lang.Throwable: Child Error
>     	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>     Caused by: java.io.IOException: Task process exit with nonzero status of 126.
>     	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
>
>     I would appreciate it if someone could show me how I could figure
>     out why this error keeps happening.
>
>     thanks,
>
>

-- 
Kaveh Minooie

Re: common error in map tasks

Posted by 姚吉龙 <ge...@gmail.com>.
Hi


I have had the same problem before.
I think this is caused by a memory shortage in the map task.
It is just a suggestion; you can post your log.




BRs
Geelong
—
Sent from Mailbox for iPhone

On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <ka...@plutoz.com> wrote:

> HI
> regardless of what job I run, there are always a few map tasks that fail 
> with the following, very unhelpful, message: ( that is the entire error 
> message)
> java.lang.Throwable: Child Error
> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 126.
> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
> I would appreciate it if someone could show me how I could figure out 
> why this error keeps happening.
> thanks,
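Elsewhere in the thread Geelong points at `mapred.child.java.opts` and the number of map tasks as likely knobs. A quick way to see what a node is actually configured with is to read them out of mapred-site.xml; the property names are the real Hadoop 1.x ones, but the config path below is an assumption, so substitute your own:

```shell
# Print the child-JVM opts and the per-node map-slot limit from
# mapred-site.xml (Hadoop 1.x property names; CONF is an assumed path).
CONF=${CONF:-/etc/hadoop/conf/mapred-site.xml}
props_out=$(
  for prop in mapred.child.java.opts mapred.tasktracker.map.tasks.maximum; do
    # Show the <name>/<value> pair if present, else note that defaults apply.
    grep -A1 "$prop" "$CONF" 2>/dev/null || echo "$prop: not set (defaults apply)"
  done
)
echo "$props_out"
```

If `mapred.child.java.opts` is unset, the MRv1 default heap (-Xmx200m) applies, which is small enough that a memory-hungry mapper can plausibly die at startup.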
