Posted to common-user@hadoop.apache.org by Vjeran Marcinko <vj...@email.t-com.hr> on 2013/04/19 23:03:10 UTC

Collision with Hadoop (1.0.4) libs?

Hi,

 

I created "fat jar" to run my M/R "driver" application, and this fat jar
contains beside other libs:

slf4j-api-1.7.5.jar

slf4j-simple-1.7.5.jar

. and to delegate all commons-logging calls to slf4j.

jcl-over-slf4j-1.7.5.jar  

 

Unfortunately, when I start my application using the jar command:

"hadoop jar myfat.jar ."

I get the following:

 

13/04/19 22:50:21 INFO property.AppProps: using app.id: F4278122BFBA5B98991997F8B15E68F6

Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;[Ljava/lang/Object;Ljava/lang/Throwable;)V
        at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:133)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:139)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:205)
        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
        at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:466)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1494)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1395)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
        at cascading.tap.hadoop.Hfs.getDefaultFileSystem(Hfs.java:307)

 

It seems that my application uses my bundled "jcl-over-slf4j-1.7.5.jar" to
delegate calls to some older slf4j-api.jar (not my bundled 1.7.5 version of
SLF4J), and I guess it can only be the "slf4j-api-1.4.3.jar" found under
<hadoop_home>/lib? That would mean this old Hadoop SLF4J lib takes
precedence over the one that came bundled within my fat jar?!
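
One way to confirm which slf4j-api is actually winning is to print where the
JVM resolved LoggerFactory from. A minimal Java sketch (the class name
Slf4jProbe is hypothetical; run it via "hadoop jar" the same way as the
driver):

    import org.slf4j.LoggerFactory;

    public class Slf4jProbe {
        public static void main(String[] args) {
            // Prints the jar that org.slf4j.LoggerFactory was loaded from.
            // If it points at <hadoop_home>/lib/slf4j-api-1.4.3.jar, the
            // parent classpath is shadowing the fat jar's 1.7.5 classes.
            System.out.println(LoggerFactory.class
                    .getProtectionDomain().getCodeSource().getLocation());
        }
    }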

 

Any suggestion to resolve this?

 

Regards,

Vjeran

 


Re: common error in map tasks

Posted by 姚吉龙 <ge...@gmail.com>.
Mainly it is caused by java.child.opt and the number of map tasks.
—
Sent from Mailbox for iPhone
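
For reference, a minimal Hadoop 1.x sketch of the knobs this reply seems to
point at (the values are placeholders, not recommendations):

    import org.apache.hadoop.mapred.JobConf;

    public class TaskKnobs {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Child JVM options for each spawned map/reduce task
            conf.set("mapred.child.java.opts", "-Xmx512m");
            // Hint for the number of map tasks in a job
            conf.setNumMapTasks(10);
        }
    }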

On Tue, Apr 23, 2013 at 6:15 AM, kaveh minooie <ka...@plutoz.com> wrote:

> thanks Chris. I only run nutch, so no to the external command. And I
> just checked and it happens, or has happened, on all the nodes at some
> point. I have to say though that it doesn't cause the job to fail or
> anything; the map tasks that fail will finish when they are re-spawned.
> It is just annoying and makes me think that some value somewhere
> in the config files is either not correct or not optimal.
> On 04/22/2013 02:49 PM, Chris Nauroth wrote:
>> I'm not aware of any Hadoop-specific meaning for exit code 126.
>>   Typically, this is a standard Unix exit code used to indicate that a
>> command couldn't be executed.  Some reasons for this might be that the
>> command is not an executable file, or the command is an executable file
>> but the user doesn't have execute permissions.  (See below for an
>> example of each of these.)
>>
>> Does your job code attempt to exec an external command?  Also, are the
>> task failures consistently happening on the same set of nodes in your
>> cluster?  If so, then I recommend checking that the command has been
>> deployed and has the correct permissions on those nodes.
>>
>> Even if your code doesn't exec an external command, various parts of the
>> Hadoop code do this internally, so you still might have a case of a
>> misconfigured node.
>>
>> Hope this helps,
>> --Chris
>>
>> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>>  > ./BUILDING.txt
>> -bash: ./BUILDING.txt: Permission denied
>> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>>  > echo $?
>> 126
>>
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > ls -lrt exec
>> -rwx------  1 root  staff     0B Apr 22 14:43 exec*
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > whoami
>> chris
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > ./exec
>> bash: ./exec: Permission denied
>> [chris@Chriss-MacBook-Pro:ttys000] test
>>  > echo $?
>> 126
>>
>>
>>
>> On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <kaveh@plutoz.com> wrote:
>>
>>     thanks. that is the issue, there are no other log files. when I go to
>>     the attempt directory of that failed map task (e.g.
>>     userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0
>>     ) it is empty. there is no other log file. though based on the counter
>>     value, I can say that it happens right at the beginning of the map
>>     task (counter is only 1 )
>>
>>
>>
>>
>>     On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>>
>>         Hi
>>
>>
>>         I have had the same problem before.
>>         I think this is caused by a memory shortage for the map tasks.
>>         It is just a suggestion; you can post your log
>>
>>
>>         BRs
>>         Geelong
>>         —
>>         Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>>
>>
>>
>>         On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com> wrote:
>>
>>              HI
>>
>>              regardless of what job I run, there are always a few map
>>         tasks that
>>              fail with the following, very unhelpful, message: ( that is the
>>              entire error message)
>>
>>              java.lang.Throwable: Child Error
>>                  at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>              Caused by: java.io.IOException: Task process exit with nonzero status of 126.
>>                  at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>>
>>              I would appreciate it if someone could show me how I could
>>         figure
>>              out why this error keeps happening.
>>
>>              thanks,
>>
>>
>>
>>     --
>>     Kaveh Minooie
>>
>>
> -- 
> Kaveh Minooie

Re: common error in map tasks

Posted by kaveh minooie <ka...@plutoz.com>.
thanks Chris. I only run nutch, so no to the external command. And I
just checked and it happens, or has happened, on all the nodes at some
point. I have to say though that it doesn't cause the job to fail or
anything; the map tasks that fail will finish when they are re-spawned.
It is just annoying and makes me think that some value somewhere
in the config files is either not correct or not optimal.
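
The re-spawn behavior matches how Hadoop 1.x retries a failed map attempt up
to mapred.map.max.attempts times (4 by default) before failing the whole job.
A minimal sketch to read the effective value (the class name MaxAttempts is
hypothetical):

    import org.apache.hadoop.mapred.JobConf;

    public class MaxAttempts {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // getMaxMapAttempts() reads mapred.map.max.attempts (default 4)
            System.out.println("map attempts: " + conf.getMaxMapAttempts());
        }
    }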


On 04/22/2013 02:49 PM, Chris Nauroth wrote:
> I'm not aware of any Hadoop-specific meaning for exit code 126.
>   Typically, this is a standard Unix exit code used to indicate that a
> command couldn't be executed.  Some reasons for this might be that the
> command is not an executable file, or the command is an executable file
> but the user doesn't have execute permissions.  (See below for an
> example of each of these.)
>
> Does your job code attempt to exec an external command?  Also, are the
> task failures consistently happening on the same set of nodes in your
> cluster?  If so, then I recommend checking that the command has been
> deployed and has the correct permissions on those nodes.
>
> Even if your code doesn't exec an external command, various parts of the
> Hadoop code do this internally, so you still might have a case of a
> misconfigured node.
>
> Hope this helps,
> --Chris
>
> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>  > ./BUILDING.txt
> -bash: ./BUILDING.txt: Permission denied
> [chris@Chriss-MacBook-Pro:ttys000] hadoop-common
>  > echo $?
> 126
>
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > ls -lrt exec
> -rwx------  1 root  staff     0B Apr 22 14:43 exec*
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > whoami
> chris
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > ./exec
> bash: ./exec: Permission denied
> [chris@Chriss-MacBook-Pro:ttys000] test
>  > echo $?
> 126
>
>
>
> On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <kaveh@plutoz.com> wrote:
>
>     thanks. that is the issue, there are no other log files. when I go to
>     the attempt directory of that failed map task (e.g.
>     userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0
>     ) it is empty. there is no other log file. though based on the counter
>     value, I can say that it happens right at the beginning of the map
>     task (counter is only 1 )
>
>
>
>
>     On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>
>         Hi
>
>
>         I have had the same problem before.
>         I think this is caused by a memory shortage for the map tasks.
>         It is just a suggestion; you can post your log
>
>
>         BRs
>         Geelong
>         —
>         Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>
>
>
>         On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com> wrote:
>
>              HI
>
>              regardless of what job I run, there are always a few map
>         tasks that
>              fail with the following, very unhelpful, message: ( that is the
>              entire error message)
>
>              java.lang.Throwable: Child Error
>                  at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>              Caused by: java.io.IOException: Task process exit with nonzero status of 126.
>                  at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
>
>              I would appreciate it if someone could show me how I could
>         figure
>              out why this error keeps happening.
>
>              thanks,
>
>
>
>     --
>     Kaveh Minooie
>
>

-- 
Kaveh Minooie

Re: common error in map tasks

Posted by Chris Nauroth <cn...@hortonworks.com>.
I'm not aware of any Hadoop-specific meaning for exit code 126.  Typically,
this is a standard Unix exit code used to indicate that a command couldn't
be executed.  Some reasons for this might be that the command is not an
executable file, or the command is an executable file but the user doesn't
have execute permissions.  (See below for an example of each of these.)

Does your job code attempt to exec an external command?  Also, are the task
failures consistently happening on the same set of nodes in your cluster?
 If so, then I recommend checking that the command has been deployed and
has the correct permissions on those nodes.

Even if your code doesn't exec an external command, various parts of the
Hadoop code do this internally, so you still might have a case of a
misconfigured node.

Hope this helps,
--Chris

[chris@Chriss-MacBook-Pro:ttys000] hadoop-common
 > ./BUILDING.txt
-bash: ./BUILDING.txt: Permission denied
[chris@Chriss-MacBook-Pro:ttys000] hadoop-common
 > echo $?
126

[chris@Chriss-MacBook-Pro:ttys000] test
 > ls -lrt exec
-rwx------  1 root  staff     0B Apr 22 14:43 exec*
[chris@Chriss-MacBook-Pro:ttys000] test
 > whoami
chris
[chris@Chriss-MacBook-Pro:ttys000] test
 > ./exec
bash: ./exec: Permission denied
[chris@Chriss-MacBook-Pro:ttys000] test
 > echo $?
126
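
A minimal Java sketch of the same failure mode, since the task launcher
ultimately shells out the child command ("./exec" is the permission-less
file from the example above, so the working directory is an assumption):

    import java.io.IOException;

    public class ExitCode126Demo {
        public static void main(String[] args)
                throws IOException, InterruptedException {
            // The shell finds ./exec but cannot execute it, so it exits
            // with the conventional "command not executable" status.
            Process p = new ProcessBuilder("/bin/sh", "-c", "./exec").start();
            System.out.println("exit code: " + p.waitFor()); // 126
        }
    }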



On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <ka...@plutoz.com> wrote:

> thanks. that is the issue, there are no other log files. when I go to the
> attempt directory of that failed map task (e.g. userlogs/job_201304191712_
> 0015/attempt_201304191712_0015_m_000019_0 ) it is empty. there is no
> other log file. though based on the counter value, I can say that it happens
> right at the beginning of the map task (counter is only 1 )
>
>
>
>
> On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>
>> Hi
>>
>>
>> I have had the same problem before.
>> I think this is caused by a memory shortage for the map tasks.
>> It is just a suggestion; you can post your log
>>
>>
>> BRs
>> Geelong
>> —
>> Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>>
>>
>>
>> On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com> wrote:
>>
>>     HI
>>
>>     regardless of what job I run, there are always a few map tasks that
>>     fail with the following, very unhelpful, message: ( that is the
>>     entire error message)
>>
>>     java.lang.Throwable: Child Error
>>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>     Caused by: java.io.IOException: Task process exit with nonzero status of 126.
>>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>>
>>     I would appreciate it if someone could show me how I could figure
>>     out why this error keeps happening.
>>
>>     thanks,
>>
>>
>>
> --
> Kaveh Minooie
>

Re: common error in map tasks

Posted by kaveh minooie <ka...@plutoz.com>.
thanks. that is the issue, there are no other log files. when I go to the
attempt directory of that failed map task (e.g.
userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0 ) it
is empty. there is no other log file. though based on the counter value, I
can say that it happens right at the beginning of the map task (counter
is only 1 )



On 04/22/2013 02:12 AM, 姚吉龙 wrote:
> Hi
>
>
> I have had the same problem before.
> I think this is caused by a memory shortage for the map tasks.
> It is just a suggestion; you can post your log
>
>
> BRs
> Geelong
> —
> Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>
>
> On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <kaveh@plutoz.com> wrote:
>
>     HI
>
>     regardless of what job I run, there are always a few map tasks that
>     fail with the following, very unhelpful, message: ( that is the
>     entire error message)
>
>     java.lang.Throwable: Child Error
>     	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>     Caused by: java.io.IOException: Task process exit with nonzero status of 126.
>     	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
>
>     I would appreciate it if someone could show me how I could figure
>     out why this error keeps happening.
>
>     thanks,
>
>

-- 
Kaveh Minooie

Re: common error in map tasks

Posted by 姚吉龙 <ge...@gmail.com>.
Hi


I have had the same problem before.
I think this is caused by a memory shortage for the map tasks.
It is just a suggestion; you can post your log.




BRs
Geelong
—
Sent from Mailbox for iPhone

On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <ka...@plutoz.com> wrote:

> HI
> regardless of what job I run, there are always a few map tasks that fail 
> with the following, very unhelpful, message: ( that is the entire error 
> message)
> java.lang.Throwable: Child Error
> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 126.
> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
> I would appreciate it if someone could show me how I could figure out 
> why this error keeps happening.
> thanks,

common error in map tasks

Posted by kaveh minooie <ka...@plutoz.com>.
Hi

regardless of what job I run, there are always a few map tasks that fail
with the following, very unhelpful, message (that is the entire error
message):

java.lang.Throwable: Child Error
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 126.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


I would appreciate it if someone could show me how I could figure out 
why this error keeps happening.

thanks,

Re: error while running TestDFSIO

Posted by Ling Kun <lk...@gmail.com>.
I have got the same problem, and I also need help.

Ling Kun
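
Before digging further, a minimal Hadoop 1.x sketch to check whether the
result file the benchmark tries to read actually exists (the path is taken
from the log below; the default Configuration and the class name are
assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckDfsioOutput {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path p = new Path("/benchmarks/TestDFSIO/io_write/part-00000");
            // The FileNotFoundException below suggests this prints "false"
            System.out.println(p + " exists: " + fs.exists(p));
        }
    }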


On Sat, Apr 20, 2013 at 8:35 AM, kaveh minooie <ka...@plutoz.com> wrote:

> Hi everyone
>
> I am getting this error when I run TestDFSIO. The job actually finishes
> successfully (according to the jobtracker at least), but this is what I get
> on the console:
>
> crawler@d1r2n2:/hadoop$ bin/hadoop jar hadoop-test-1.1.1.jar  TestDFSIO
> -write -nrFiles 10 -fileSize 1000
> TestDFSIO.0.0.4
> 13/04/19 17:23:43 INFO fs.TestDFSIO: nrFiles = 10
> 13/04/19 17:23:43 INFO fs.TestDFSIO: fileSize (MB) = 1000
> 13/04/19 17:23:43 INFO fs.TestDFSIO: bufferSize = 1000000
> 13/04/19 17:23:43 INFO fs.TestDFSIO: creating control file: 1000 mega
> bytes, 10 files
> 13/04/19 17:23:44 INFO fs.TestDFSIO: created control files for: 10 files
> 13/04/19 17:23:44 INFO mapred.FileInputFormat: Total input paths to
> process : 10
> 13/04/19 17:23:44 INFO mapred.JobClient: Running job: job_201304191712_0002
> 13/04/19 17:23:45 INFO mapred.JobClient:  map 0% reduce 0%
> 13/04/19 17:24:06 INFO mapred.JobClient:  map 20% reduce 0%
> 13/04/19 17:24:07 INFO mapred.JobClient:  map 30% reduce 0%
> 13/04/19 17:24:09 INFO mapred.JobClient:  map 50% reduce 0%
> 13/04/19 17:24:11 INFO mapred.JobClient:  map 60% reduce 0%
> 13/04/19 17:24:12 INFO mapred.JobClient:  map 90% reduce 0%
> 13/04/19 17:24:13 INFO mapred.JobClient:  map 100% reduce 0%
> 13/04/19 17:24:21 INFO mapred.JobClient:  map 100% reduce 33%
> 13/04/19 17:24:22 INFO mapred.JobClient:  map 100% reduce 100%
> 13/04/19 17:24:23 INFO mapred.JobClient: Job complete:
> job_201304191712_0002
> 13/04/19 17:24:23 INFO mapred.JobClient: Counters: 33
> 13/04/19 17:24:23 INFO mapred.JobClient:   Job Counters
> 13/04/19 17:24:23 INFO mapred.JobClient:     Launched reduce tasks=1
> 13/04/19 17:24:23 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=210932
> 13/04/19 17:24:23 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 13/04/19 17:24:23 INFO mapred.JobClient:     Total time spent by all maps
> waiting after reserving slots (ms)=0
> 13/04/19 17:24:23 INFO mapred.JobClient:     Rack-local map tasks=2
> 13/04/19 17:24:23 INFO mapred.JobClient:     Launched map tasks=10
> 13/04/19 17:24:23 INFO mapred.JobClient:     Data-local map tasks=8
> 13/04/19 17:24:23 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=8650
> 13/04/19 17:24:23 INFO mapred.JobClient:   File Input Format Counters
> 13/04/19 17:24:23 INFO mapred.JobClient:     Bytes Read=1120
> 13/04/19 17:24:23 INFO mapred.JobClient:   SkippingTaskCounters
> 13/04/19 17:24:23 INFO mapred.JobClient:     MapProcessedRecords=10
> 13/04/19 17:24:23 INFO mapred.JobClient:     ReduceProcessedGroups=5
> 13/04/19 17:24:23 INFO mapred.JobClient:   File Output Format Counters
> 13/04/19 17:24:23 INFO mapred.JobClient:     Bytes Written=79
> 13/04/19 17:24:23 INFO mapred.JobClient:   FileSystemCounters
> 13/04/19 17:24:23 INFO mapred.JobClient:     FILE_BYTES_READ=871
> 13/04/19 17:24:23 INFO mapred.JobClient:     HDFS_BYTES_READ=2330
> 13/04/19 17:24:23 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=272508
> 13/04/19 17:24:23 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=10485760079
> 13/04/19 17:24:23 INFO mapred.JobClient:   Map-Reduce Framework
> 13/04/19 17:24:23 INFO mapred.JobClient:     Map output materialized
> bytes=925
> 13/04/19 17:24:23 INFO mapred.JobClient:     Map input records=10
> 13/04/19 17:24:23 INFO mapred.JobClient:     Reduce shuffle bytes=925
> 13/04/19 17:24:23 INFO mapred.JobClient:     Spilled Records=100
> 13/04/19 17:24:23 INFO mapred.JobClient:     Map output bytes=765
> 13/04/19 17:24:23 INFO mapred.JobClient:     Total committed heap usage
> (bytes)=7996702720
> 13/04/19 17:24:23 INFO mapred.JobClient:     CPU time spent (ms)=104520
> 13/04/19 17:24:23 INFO mapred.JobClient:     Map input bytes=260
> 13/04/19 17:24:23 INFO mapred.JobClient:     SPLIT_RAW_BYTES=1210
> 13/04/19 17:24:23 INFO mapred.JobClient:     Combine input records=0
> 13/04/19 17:24:23 INFO mapred.JobClient:     Reduce input records=50
> 13/04/19 17:24:23 INFO mapred.JobClient:     Reduce input groups=5
> 13/04/19 17:24:23 INFO mapred.JobClient:     Combine output records=0
> 13/04/19 17:24:23 INFO mapred.JobClient:     Physical memory (bytes)
> snapshot=7111999488
> 13/04/19 17:24:23 INFO mapred.JobClient:     Reduce output records=5
> 13/04/19 17:24:23 INFO mapred.JobClient:     Virtual memory (bytes)
> snapshot=28466053120
> 13/04/19 17:24:23 INFO mapred.JobClient:     Map output records=50
> java.io.FileNotFoundException: File does not exist:
> /benchmarks/TestDFSIO/io_write/part-00000
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:1975)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1944)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1936)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:731)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
>         at org.apache.hadoop.fs.TestDFSIO.analyzeResult(TestDFSIO.java:339)
>         at org.apache.hadoop.fs.TestDFSIO.run(TestDFSIO.java:462)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>         at org.apache.hadoop.fs.TestDFSIO.main(TestDFSIO.java:317)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.test.AllTestDriver.main(AllTestDriver.java:81)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> crawler@d1r2n2:/hadoop$ bin/hadoop fs -ls /benchmarks/TestDFSIO/io_write
> Found 3 items
> -rw-r--r--   2 crawler supergroup          0 2013-04-19 17:24 /benchmarks/TestDFSIO/io_write/_SUCCESS
> drwxr-xr-x   - crawler supergroup          0 2013-04-19 17:23 /benchmarks/TestDFSIO/io_write/_logs
> -rw-r--r--   2 crawler supergroup         79 2013-04-19 17:24 /benchmarks/TestDFSIO/io_write/part-00000.deflate
> crawler@d1r2n2:/hadoop$
>
>
> Does anyone have any idea what might be wrong here?
>
>


-- 
http://www.lingcc.com

error while running TestDFSIO

Posted by kaveh minooie <ka...@plutoz.com>.
Hi everyone,

I am getting this error when I run TestDFSIO. The job actually finishes
successfully (according to the jobtracker, at least), but this is what I
get on the console:

crawler@d1r2n2:/hadoop$ bin/hadoop jar hadoop-test-1.1.1.jar  TestDFSIO 
-write -nrFiles 10 -fileSize 1000
TestDFSIO.0.0.4
13/04/19 17:23:43 INFO fs.TestDFSIO: nrFiles = 10
13/04/19 17:23:43 INFO fs.TestDFSIO: fileSize (MB) = 1000
13/04/19 17:23:43 INFO fs.TestDFSIO: bufferSize = 1000000
13/04/19 17:23:43 INFO fs.TestDFSIO: creating control file: 1000 mega 
bytes, 10 files
13/04/19 17:23:44 INFO fs.TestDFSIO: created control files for: 10 files
13/04/19 17:23:44 INFO mapred.FileInputFormat: Total input paths to 
process : 10
13/04/19 17:23:44 INFO mapred.JobClient: Running job: job_201304191712_0002
13/04/19 17:23:45 INFO mapred.JobClient:  map 0% reduce 0%
13/04/19 17:24:06 INFO mapred.JobClient:  map 20% reduce 0%
13/04/19 17:24:07 INFO mapred.JobClient:  map 30% reduce 0%
13/04/19 17:24:09 INFO mapred.JobClient:  map 50% reduce 0%
13/04/19 17:24:11 INFO mapred.JobClient:  map 60% reduce 0%
13/04/19 17:24:12 INFO mapred.JobClient:  map 90% reduce 0%
13/04/19 17:24:13 INFO mapred.JobClient:  map 100% reduce 0%
13/04/19 17:24:21 INFO mapred.JobClient:  map 100% reduce 33%
13/04/19 17:24:22 INFO mapred.JobClient:  map 100% reduce 100%
13/04/19 17:24:23 INFO mapred.JobClient: Job complete: job_201304191712_0002
13/04/19 17:24:23 INFO mapred.JobClient: Counters: 33
13/04/19 17:24:23 INFO mapred.JobClient:   Job Counters
13/04/19 17:24:23 INFO mapred.JobClient:     Launched reduce tasks=1
13/04/19 17:24:23 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=210932
13/04/19 17:24:23 INFO mapred.JobClient:     Total time spent by all 
reduces waiting after reserving slots (ms)=0
13/04/19 17:24:23 INFO mapred.JobClient:     Total time spent by all 
maps waiting after reserving slots (ms)=0
13/04/19 17:24:23 INFO mapred.JobClient:     Rack-local map tasks=2
13/04/19 17:24:23 INFO mapred.JobClient:     Launched map tasks=10
13/04/19 17:24:23 INFO mapred.JobClient:     Data-local map tasks=8
13/04/19 17:24:23 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=8650
13/04/19 17:24:23 INFO mapred.JobClient:   File Input Format Counters
13/04/19 17:24:23 INFO mapred.JobClient:     Bytes Read=1120
13/04/19 17:24:23 INFO mapred.JobClient:   SkippingTaskCounters
13/04/19 17:24:23 INFO mapred.JobClient:     MapProcessedRecords=10
13/04/19 17:24:23 INFO mapred.JobClient:     ReduceProcessedGroups=5
13/04/19 17:24:23 INFO mapred.JobClient:   File Output Format Counters
13/04/19 17:24:23 INFO mapred.JobClient:     Bytes Written=79
13/04/19 17:24:23 INFO mapred.JobClient:   FileSystemCounters
13/04/19 17:24:23 INFO mapred.JobClient:     FILE_BYTES_READ=871
13/04/19 17:24:23 INFO mapred.JobClient:     HDFS_BYTES_READ=2330
13/04/19 17:24:23 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=272508
13/04/19 17:24:23 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=10485760079
13/04/19 17:24:23 INFO mapred.JobClient:   Map-Reduce Framework
13/04/19 17:24:23 INFO mapred.JobClient:     Map output materialized 
bytes=925
13/04/19 17:24:23 INFO mapred.JobClient:     Map input records=10
13/04/19 17:24:23 INFO mapred.JobClient:     Reduce shuffle bytes=925
13/04/19 17:24:23 INFO mapred.JobClient:     Spilled Records=100
13/04/19 17:24:23 INFO mapred.JobClient:     Map output bytes=765
13/04/19 17:24:23 INFO mapred.JobClient:     Total committed heap usage 
(bytes)=7996702720
13/04/19 17:24:23 INFO mapred.JobClient:     CPU time spent (ms)=104520
13/04/19 17:24:23 INFO mapred.JobClient:     Map input bytes=260
13/04/19 17:24:23 INFO mapred.JobClient:     SPLIT_RAW_BYTES=1210
13/04/19 17:24:23 INFO mapred.JobClient:     Combine input records=0
13/04/19 17:24:23 INFO mapred.JobClient:     Reduce input records=50
13/04/19 17:24:23 INFO mapred.JobClient:     Reduce input groups=5
13/04/19 17:24:23 INFO mapred.JobClient:     Combine output records=0
13/04/19 17:24:23 INFO mapred.JobClient:     Physical memory (bytes) 
snapshot=7111999488
13/04/19 17:24:23 INFO mapred.JobClient:     Reduce output records=5
13/04/19 17:24:23 INFO mapred.JobClient:     Virtual memory (bytes) 
snapshot=28466053120
13/04/19 17:24:23 INFO mapred.JobClient:     Map output records=50
java.io.FileNotFoundException: File does not exist: 
/benchmarks/TestDFSIO/io_write/part-00000
	at 
org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:1975)
	at 
org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1944)
	at 
org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1936)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:731)
	at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
	at org.apache.hadoop.fs.TestDFSIO.analyzeResult(TestDFSIO.java:339)
	at org.apache.hadoop.fs.TestDFSIO.run(TestDFSIO.java:462)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
	at org.apache.hadoop.fs.TestDFSIO.main(TestDFSIO.java:317)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.test.AllTestDriver.main(AllTestDriver.java:81)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
crawler@d1r2n2:/hadoop$ bin/hadoop fs -ls /benchmarks/TestDFSIO/io_write
Found 3 items
-rw-r--r--   2 crawler supergroup          0 2013-04-19 17:24 
/benchmarks/TestDFSIO/io_write/_SUCCESS
drwxr-xr-x   - crawler supergroup          0 2013-04-19 17:23 
/benchmarks/TestDFSIO/io_write/_logs
-rw-r--r--   2 crawler supergroup         79 2013-04-19 17:24 
/benchmarks/TestDFSIO/io_write/part-00000.deflate
crawler@d1r2n2:/hadoop$


Does anyone have any idea what might be wrong here?
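
One clue is in the listing itself: the reduce output was written compressed as part-00000.deflate, while TestDFSIO.analyzeResult (visible in the stack trace) opens the plain part-00000, which does not exist, hence the FileNotFoundException. That suggests output compression (mapred.output.compress) is enabled cluster-wide. A minimal sketch of a re-run with compression disabled for just this job; since the stack trace shows TestDFSIO going through ToolRunner, the generic -D option should be honored:

# Hedged: assumes a cluster-wide mapred.output.compress=true is what
# produced part-00000.deflate; -D overrides it for this run only.
bin/hadoop jar hadoop-test-1.1.1.jar TestDFSIO \
    -D mapred.output.compress=false \
    -write -nrFiles 10 -fileSize 1000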

