Posted to hdfs-user@hadoop.apache.org by longfei li <hb...@163.com> on 2015/07/18 06:51:44 UTC

issues about hadoop-0.20.0

Hello!
I built a Hadoop cluster of 12 nodes based on ARM (Cubietruck). When I run a simple program like WordCount to count how many times the word "h" appears in "hello", it runs perfectly. But when I run a more demanding program like pi, I run it like this:
./hadoop jar hadoop-example-0.21.0.jar pi 100 1000000000
Information:
15/07/18 11:38:54 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
15/07/18 11:38:54 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
15/07/18 11:38:55 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/07/18 11:38:55 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/07/18 11:38:55 INFO input.FileInputFormat: Total input paths to process : 1
15/07/18 11:38:58 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/07/18 11:38:58 INFO mapreduce.JobSubmitter: number of splits:1
15/07/18 11:38:58 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
15/07/18 11:38:59 INFO mapreduce.Job: Running job: job_201507181137_0001
15/07/18 11:39:00 INFO mapreduce.Job:  map 0% reduce 0%
15/07/18 11:39:20 INFO mapreduce.Job:  map 100% reduce 0%
15/07/18 11:39:35 INFO mapreduce.Job:  map 100% reduce 10%
15/07/18 11:39:36 INFO mapreduce.Job:  map 100% reduce 20%
15/07/18 11:39:38 INFO mapreduce.Job:  map 100% reduce 90%
15/07/18 11:39:58 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 11:49:47 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 11:49:49 INFO mapreduce.Job:  map 100% reduce 19%
15/07/18 11:49:54 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000000_0, Status : FAILED
Task attempt_201507181137_0001_r_000000_0 failed to report status for 602 seconds. Killing!
15/07/18 11:49:57 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 11:49:57 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 11:49:58 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000002_0, Status : FAILED
Task attempt_201507181137_0001_r_000002_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:00 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 11:50:00 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 11:50:00 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000003_0, Status : FAILED
Task attempt_201507181137_0001_r_000003_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:03 WARN mapreduce.Job: Error reading task outputhadoop-slave12
15/07/18 11:50:03 WARN mapreduce.Job: Error reading task outputhadoop-slave12
15/07/18 11:50:03 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000004_0, Status : FAILED
Task attempt_201507181137_0001_r_000004_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:06 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 11:50:06 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 11:50:06 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000007_0, Status : FAILED
Task attempt_201507181137_0001_r_000007_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:08 WARN mapreduce.Job: Error reading task outputhadoop-slave11
15/07/18 11:50:08 WARN mapreduce.Job: Error reading task outputhadoop-slave11
15/07/18 11:50:08 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000008_0, Status : FAILED
Task attempt_201507181137_0001_r_000008_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:11 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 11:50:11 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 11:50:11 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000009_0, Status : FAILED
Task attempt_201507181137_0001_r_000009_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:13 WARN mapreduce.Job: Error reading task outputhadoop-slave4
15/07/18 11:50:13 WARN mapreduce.Job: Error reading task outputhadoop-slave4
15/07/18 11:50:13 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000006_0, Status : FAILED
Task attempt_201507181137_0001_r_000006_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:16 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 11:50:16 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 11:50:16 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000005_0, Status : FAILED
Task attempt_201507181137_0001_r_000005_0 failed to report status for 601 seconds. Killing!
15/07/18 11:50:18 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 11:50:18 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 11:50:19 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 11:50:28 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000001_0, Status : FAILED
Task attempt_201507181137_0001_r_000001_0 failed to report status for 602 seconds. Killing!
15/07/18 11:50:30 WARN mapreduce.Job: Error reading task outputhadoop-slave2
15/07/18 11:50:30 WARN mapreduce.Job: Error reading task outputhadoop-slave2
15/07/18 11:50:46 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 11:50:50 INFO mapreduce.Job:  map 100% reduce 79%
15/07/18 11:50:54 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000000_1, Status : FAILED
Task attempt_201507181137_0001_r_000000_1 failed to report status for 600 seconds. Killing!
15/07/18 11:50:57 WARN mapreduce.Job: Error reading task outputhadoop-slave10
15/07/18 11:50:57 WARN mapreduce.Job: Error reading task outputhadoop-slave10
15/07/18 11:50:58 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000001_1, Status : FAILED
Task attempt_201507181137_0001_r_000001_1 failed to report status for 601 seconds. Killing!
15/07/18 11:51:01 WARN mapreduce.Job: Error reading task outputhadoop-slave3
15/07/18 11:51:01 WARN mapreduce.Job: Error reading task outputhadoop-slave3
15/07/18 11:51:11 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 11:51:17 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 12:00:21 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:00:22 INFO mapreduce.Job:  map 100% reduce 59%
15/07/18 12:00:24 INFO mapreduce.Job:  map 100% reduce 49%
15/07/18 12:00:25 INFO mapreduce.Job:  map 100% reduce 19%
15/07/18 12:00:29 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000007_1, Status : FAILED
Task attempt_201507181137_0001_r_000007_1 failed to report status for 600 seconds. Killing!
15/07/18 12:00:32 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 12:00:32 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 12:00:33 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000004_1, Status : FAILED
Task attempt_201507181137_0001_r_000004_1 failed to report status for 600 seconds. Killing!
15/07/18 12:00:35 WARN mapreduce.Job: Error reading task outputhadoop-slave11
15/07/18 12:00:35 WARN mapreduce.Job: Error reading task outputhadoop-slave11
15/07/18 12:00:35 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000006_1, Status : FAILED
Task attempt_201507181137_0001_r_000006_1 failed to report status for 600 seconds. Killing!
15/07/18 12:00:38 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 12:00:38 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 12:00:38 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000009_1, Status : FAILED
Task attempt_201507181137_0001_r_000009_1 failed to report status for 600 seconds. Killing!
15/07/18 12:00:41 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 12:00:41 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 12:00:41 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000003_1, Status : FAILED
Task attempt_201507181137_0001_r_000003_1 failed to report status for 602 seconds. Killing!
15/07/18 12:00:43 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 12:00:43 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 12:00:43 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000002_1, Status : FAILED
Task attempt_201507181137_0001_r_000002_1 failed to report status for 602 seconds. Killing!
15/07/18 12:00:46 WARN mapreduce.Job: Error reading task outputhadoop-slave12
15/07/18 12:00:46 WARN mapreduce.Job: Error reading task outputhadoop-slave12
15/07/18 12:00:46 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000008_1, Status : FAILED
Task attempt_201507181137_0001_r_000008_1 failed to report status for 602 seconds. Killing!
15/07/18 12:00:48 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 12:00:48 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 12:00:49 INFO mapreduce.Job:  map 100% reduce 79%
15/07/18 12:00:49 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000005_1, Status : FAILED
Task attempt_201507181137_0001_r_000005_1 failed to report status for 602 seconds. Killing!
15/07/18 12:00:52 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 12:00:52 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 12:00:53 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:00:58 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 12:01:21 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:01:25 INFO mapreduce.Job:  map 100% reduce 79%
15/07/18 12:01:29 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000000_2, Status : FAILED
Task attempt_201507181137_0001_r_000000_2 failed to report status for 602 seconds. Killing!
15/07/18 12:01:32 WARN mapreduce.Job: Error reading task outputhadoop-slave4
15/07/18 12:01:32 WARN mapreduce.Job: Error reading task outputhadoop-slave4
15/07/18 12:01:33 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000001_2, Status : FAILED
Task attempt_201507181137_0001_r_000001_2 failed to report status for 600 seconds. Killing!
15/07/18 12:01:35 WARN mapreduce.Job: Error reading task outputhadoop-slave10
15/07/18 12:01:35 WARN mapreduce.Job: Error reading task outputhadoop-slave10
15/07/18 12:01:47 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:01:50 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 12:10:54 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:10:55 INFO mapreduce.Job:  map 100% reduce 79%
15/07/18 12:10:57 INFO mapreduce.Job:  map 100% reduce 59%
15/07/18 12:11:00 INFO mapreduce.Job:  map 100% reduce 49%
15/07/18 12:11:01 INFO mapreduce.Job:  map 100% reduce 39%
15/07/18 12:11:02 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000004_2, Status : FAILED
Task attempt_201507181137_0001_r_000004_2 failed to report status for 600 seconds. Killing!
15/07/18 12:11:04 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 12:11:04 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 12:11:05 INFO mapreduce.Job:  map 100% reduce 29%
15/07/18 12:11:05 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000009_2, Status : FAILED
Task attempt_201507181137_0001_r_000009_2 failed to report status for 600 seconds. Killing!
15/07/18 12:11:08 WARN mapreduce.Job: Error reading task outputhadoop-slave3
15/07/18 12:11:08 WARN mapreduce.Job: Error reading task outputhadoop-slave3
15/07/18 12:11:08 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000003_2, Status : FAILED
Task attempt_201507181137_0001_r_000003_2 failed to report status for 600 seconds. Killing!
15/07/18 12:11:11 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 12:11:11 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 12:11:11 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000007_2, Status : FAILED
Task attempt_201507181137_0001_r_000007_2 failed to report status for 602 seconds. Killing!
15/07/18 12:11:13 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 12:11:13 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 12:11:14 INFO mapreduce.Job:  map 100% reduce 19%
15/07/18 12:11:14 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000002_2, Status : FAILED
Task attempt_201507181137_0001_r_000002_2 failed to report status for 602 seconds. Killing!
15/07/18 12:11:17 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 12:11:17 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 12:11:17 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000008_2, Status : FAILED
Task attempt_201507181137_0001_r_000008_2 failed to report status for 602 seconds. Killing!
15/07/18 12:11:19 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 12:11:19 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 12:11:19 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000005_2, Status : FAILED
Task attempt_201507181137_0001_r_000005_2 failed to report status for 602 seconds. Killing!
15/07/18 12:11:22 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 12:11:22 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 12:11:23 INFO mapreduce.Job:  map 100% reduce 59%
15/07/18 12:11:23 INFO mapreduce.Job: Task Id : attempt_201507181137_0001_r_000006_2, Status : FAILED
Task attempt_201507181137_0001_r_000006_2 failed to report status for 601 seconds. Killing!
15/07/18 12:11:25 WARN mapreduce.Job: Error reading task outputhadoop-slave2
15/07/18 12:11:25 WARN mapreduce.Job: Error reading task outputhadoop-slave2
15/07/18 12:11:26 INFO mapreduce.Job:  map 100% reduce 79%
15/07/18 12:11:27 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:11:39 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 12:11:55 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 12:12:00 INFO mapreduce.Job:  map 100% reduce 79%
15/07/18 12:12:12 INFO mapreduce.Job: Job complete: job_201507181137_0001
15/07/18 12:12:12 INFO mapreduce.Job: Counters: 20
FileInputFormatCounters
BYTES_READ=5
FileSystemCounters
FILE_BYTES_WRITTEN=388
HDFS_BYTES_READ=138
Job Counters 
Total time spent by all maps waiting after reserving slots (ms)=0
Total time spent by all reduces waiting after reserving slots (ms)=0
Failed reduce tasks=1
Rack-local map tasks=1
SLOTS_MILLIS_MAPS=22762
SLOTS_MILLIS_REDUCES=723290
Launched map tasks=1
Launched reduce tasks=40
Map-Reduce Framework
Combine input records=0
Failed Shuffles=0
GC time elapsed (ms)=200
Map input records=1
Map output bytes=60
Map output records=10
Merged Map outputs=0
Spilled Records=10
SPLIT_RAW_BYTES=133
The job status shows FAILED:
  reduce tasks total: 40, successful: 0, failed: 32, killed: 8

Re: issues about hadoop-0.20.0

Posted by Ulul <ha...@ulul.org>.
Hi

I'd say that no matter what version is running, the parameters don't seem to 
fit the cluster: it cannot handle 100 maps that each process a 
billion samples, so it is hitting the MapReduce task timeout of 600 seconds.

I'd try with something like 20 100000
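
For instance, reusing the jar name from your command (just a sketch, adjust the path to your install):

./hadoop jar hadoop-example-0.21.0.jar pi 20 100000

And if your reduces genuinely need more than ten minutes, the timeout itself is controlled by mapred.task.timeout (milliseconds, default 600000) in mapred-site.xml; I believe the 0.21 name is mapreduce.task.timeout, so check which one your build honours:

<!-- mapred-site.xml: raise the task timeout to 30 minutes -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>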

Ulul

On 18/07/2015 12:17, Harsh J wrote:
> Apache Hadoop 0.20 and 0.21 are both very old and unmaintained 
> releases at this point, and may carry some issues unfixed via further 
> releases. Please consider using a newer release.
>
> Is there a specific reason you intend to use 0.21.0, which came out of 
> a branch long since abandoned?
>
> On Sat, Jul 18, 2015 at 1:27 PM longfei li <hblongfei@163.com> wrote:
>
>     Hello!
>     I built a Hadoop cluster of 12 nodes based on ARM (Cubietruck).
>     When I run a simple program like WordCount to count how many times
>     the word "h" appears in "hello", it runs perfectly. But when I run
>     a more demanding program like pi, I run it like this:
>     ./hadoop jar hadoop-example-0.21.0.jar pi 100 1000000000
>     Information:
>     15/07/18 11:38:54 WARN conf.Configuration: DEPRECATED:
>     hadoop-site.xml found in the classpath. Usage of hadoop-site.xml
>     is deprecated. Instead use core-site.xml, mapred-site.xml and
>     hdfs-site.xml to override properties of core-default.xml,
>     mapred-default.xml and hdfs-default.xml respectively
>     15/07/18 11:38:54 INFO security.Groups: Group mapping
>     impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
>     cacheTimeout=300000
>     15/07/18 11:38:55 WARN conf.Configuration: mapred.task.id is
>     deprecated. Instead, use mapreduce.task.attempt.id
>     15/07/18 11:38:55 WARN mapreduce.JobSubmitter: Use
>     GenericOptionsParser for parsing the arguments. Applications
>     should implement Tool for the same.
>     15/07/18 11:38:55 INFO input.FileInputFormat: Total input paths to
>     process : 1
>     15/07/18 11:38:58 WARN conf.Configuration: mapred.map.tasks is
>     deprecated. Instead, use mapreduce.job.maps
>     15/07/18 11:38:58 INFO mapreduce.JobSubmitter: number of splits:1
>     15/07/18 11:38:58 INFO mapreduce.JobSubmitter: adding the
>     following namenodes' delegation tokens:null
>     15/07/18 11:38:59 INFO mapreduce.Job: Running job:
>     job_201507181137_0001
>     15/07/18 11:39:00 INFO mapreduce.Job:  map 0% reduce 0%
>     15/07/18 11:39:20 INFO mapreduce.Job:  map 100% reduce 0%
>     15/07/18 11:39:35 INFO mapreduce.Job:  map 100% reduce 10%
>     15/07/18 11:39:36 INFO mapreduce.Job:  map 100% reduce 20%
>     15/07/18 11:39:38 INFO mapreduce.Job:  map 100% reduce 90%
>     15/07/18 11:39:58 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 11:49:47 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 11:49:49 INFO mapreduce.Job:  map 100% reduce 19%
>     15/07/18 11:49:54 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000000_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000000_0 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 11:49:58 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000002_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000002_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 11:50:00 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000003_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000003_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave12
>     15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave12
>     15/07/18 11:50:03 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000004_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000004_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 11:50:06 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000007_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000007_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave11
>     15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave11
>     15/07/18 11:50:08 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000008_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000008_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 11:50:11 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000009_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000009_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:13 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave4
>     15/07/18 11:50:13 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave4
>     15/07/18 11:50:13 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000006_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000006_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:16 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 11:50:16 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 11:50:16 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000005_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000005_0 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:50:18 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 11:50:18 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 11:50:19 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 11:50:28 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000001_0, Status : FAILED
>     Task attempt_201507181137_0001_r_000001_0 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 11:50:30 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave2
>     15/07/18 11:50:30 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave2
>     15/07/18 11:50:46 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 11:50:50 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 11:50:54 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000000_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000000_1 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 11:50:57 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave10
>     15/07/18 11:50:57 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave10
>     15/07/18 11:50:58 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000001_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000001_1 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 11:51:01 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave3
>     15/07/18 11:51:01 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave3
>     15/07/18 11:51:11 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 11:51:17 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:00:21 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:00:22 INFO mapreduce.Job:  map 100% reduce 59%
>     15/07/18 12:00:24 INFO mapreduce.Job:  map 100% reduce 49%
>     15/07/18 12:00:25 INFO mapreduce.Job:  map 100% reduce 19%
>     15/07/18 12:00:29 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000007_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000007_1 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:00:32 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 12:00:32 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 12:00:33 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000004_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000004_1 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:00:35 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave11
>     15/07/18 12:00:35 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave11
>     15/07/18 12:00:35 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000006_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000006_1 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:00:38 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 12:00:38 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 12:00:38 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000009_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000009_1 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:00:41 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 12:00:41 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 12:00:41 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000003_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000003_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:43 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:00:43 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:00:43 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000002_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000002_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:46 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave12
>     15/07/18 12:00:46 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave12
>     15/07/18 12:00:46 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000008_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000008_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:48 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:00:48 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:00:49 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:00:49 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000005_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000005_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:52 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:00:52 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:00:53 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:00:58 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:01:21 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:01:25 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:01:29 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000000_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000000_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:01:32 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave4
>     15/07/18 12:01:32 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave4
>     15/07/18 12:01:33 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000001_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000001_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:01:35 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave10
>     15/07/18 12:01:35 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave10
>     15/07/18 12:01:47 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:01:50 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:10:54 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:10:55 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:10:57 INFO mapreduce.Job:  map 100% reduce 59%
>     15/07/18 12:11:00 INFO mapreduce.Job:  map 100% reduce 49%
>     15/07/18 12:11:01 INFO mapreduce.Job:  map 100% reduce 39%
>     15/07/18 12:11:02 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000004_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000004_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:11:04 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 12:11:04 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 12:11:05 INFO mapreduce.Job:  map 100% reduce 29%
>     15/07/18 12:11:05 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000009_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000009_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:11:08 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave3
>     15/07/18 12:11:08 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave3
>     15/07/18 12:11:08 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000003_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000003_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:11:11 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 12:11:11 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 12:11:11 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000007_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000007_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:13 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 12:11:13 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 12:11:14 INFO mapreduce.Job:  map 100% reduce 19%
>     15/07/18 12:11:14 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000002_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000002_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:17 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:11:17 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:11:17 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000008_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000008_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:19 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:11:19 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:11:19 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000005_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000005_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:22 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:11:22 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:11:23 INFO mapreduce.Job:  map 100% reduce 59%
>     15/07/18 12:11:23 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000006_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000006_2 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 12:11:25 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave2
>     15/07/18 12:11:25 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave2
>     15/07/18 12:11:26 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:11:27 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:11:39 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:11:55 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:12:00 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:12:12 INFO mapreduce.Job: Job complete:
>     job_201507181137_0001
>     15/07/18 12:12:12 INFO mapreduce.Job: Counters: 20
>     FileInputFormatCounters
>     BYTES_READ=5
>     FileSystemCounters
>     FILE_BYTES_WRITTEN=388
>     HDFS_BYTES_READ=138
>     Job Counters
>     Total time spent by all maps waiting after reserving slots (ms)=0
>     Total time spent by all reduces waiting after reserving slots (ms)=0
>     Failed reduce tasks=1
>     Rack-local map tasks=1
>     SLOTS_MILLIS_MAPS=22762
>     SLOTS_MILLIS_REDUCES=723290
>     Launched map tasks=1
>     Launched reduce tasks=40
>     Map-Reduce Framework
>     Combine input records=0
>     Failed Shuffles=0
>     GC time elapsed (ms)=200
>     Map input records=1
>     Map output bytes=60
>     Map output records=10
>     Merged Map outputs=0
>     Spilled Records=10
>     SPLIT_RAW_BYTES=133
>     The job status shows FAILED:
>       reduce tasks total: 40, successful: 0, failed: 32, killed: 8
>


>     Task attempt_201507181137_0001_r_000003_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:43 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:00:43 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:00:43 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000002_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000002_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:46 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave12
>     15/07/18 12:00:46 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave12
>     15/07/18 12:00:46 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000008_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000008_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:48 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:00:48 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:00:49 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:00:49 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000005_1, Status : FAILED
>     Task attempt_201507181137_0001_r_000005_1 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:00:52 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:00:52 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:00:53 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:00:58 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:01:21 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:01:25 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:01:29 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000000_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000000_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:01:32 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave4
>     15/07/18 12:01:32 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave4
>     15/07/18 12:01:33 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000001_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000001_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:01:35 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave10
>     15/07/18 12:01:35 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave10
>     15/07/18 12:01:47 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:01:50 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:10:54 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:10:55 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:10:57 INFO mapreduce.Job:  map 100% reduce 59%
>     15/07/18 12:11:00 INFO mapreduce.Job:  map 100% reduce 49%
>     15/07/18 12:11:01 INFO mapreduce.Job:  map 100% reduce 39%
>     15/07/18 12:11:02 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000004_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000004_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:11:04 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 12:11:04 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave1
>     15/07/18 12:11:05 INFO mapreduce.Job:  map 100% reduce 29%
>     15/07/18 12:11:05 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000009_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000009_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:11:08 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave3
>     15/07/18 12:11:08 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave3
>     15/07/18 12:11:08 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000003_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000003_2 failed to report status
>     for 600 seconds. Killing!
>     15/07/18 12:11:11 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 12:11:11 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave9
>     15/07/18 12:11:11 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000007_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000007_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:13 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 12:11:13 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave6
>     15/07/18 12:11:14 INFO mapreduce.Job:  map 100% reduce 19%
>     15/07/18 12:11:14 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000002_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000002_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:17 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:11:17 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave7
>     15/07/18 12:11:17 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000008_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000008_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:19 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:11:19 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave5
>     15/07/18 12:11:19 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000005_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000005_2 failed to report status
>     for 602 seconds. Killing!
>     15/07/18 12:11:22 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:11:22 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave8
>     15/07/18 12:11:23 INFO mapreduce.Job:  map 100% reduce 59%
>     15/07/18 12:11:23 INFO mapreduce.Job: Task Id :
>     attempt_201507181137_0001_r_000006_2, Status : FAILED
>     Task attempt_201507181137_0001_r_000006_2 failed to report status
>     for 601 seconds. Killing!
>     15/07/18 12:11:25 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave2
>     15/07/18 12:11:25 WARN mapreduce.Job: Error reading task
>     outputhadoop-slave2
>     15/07/18 12:11:26 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:11:27 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:11:39 INFO mapreduce.Job:  map 100% reduce 100%
>     15/07/18 12:11:55 INFO mapreduce.Job:  map 100% reduce 89%
>     15/07/18 12:12:00 INFO mapreduce.Job:  map 100% reduce 79%
>     15/07/18 12:12:12 INFO mapreduce.Job: Job complete:
>     job_201507181137_0001
>     15/07/18 12:12:12 INFO mapreduce.Job: Counters: 20
>     FileInputFormatCounters
>     BYTES_READ=5
>     FileSystemCounters
>     FILE_BYTES_WRITTEN=388
>     HDFS_BYTES_READ=138
>     Job Counters
>     Total time spent by all maps waiting after reserving slots (ms)=0
>     Total time spent by all reduces waiting after reserving slots (ms)=0
>     Failed reduce tasks=1
>     Rack-local map tasks=1
>     SLOTS_MILLIS_MAPS=22762
>     SLOTS_MILLIS_REDUCES=723290
>     Launched map tasks=1
>     Launched reduce tasks=40
>     Map-Reduce Framework
>     Combine input records=0
>     Failed Shuffles=0
>     GC time elapsed (ms)=200
>     Map input records=1
>     Map output bytes=60
>     Map output records=10
>     Merged Map outputs=0
>     Spilled Records=10
>     SPLIT_RAW_BYTES=133
>     the status shows failed
>       reduce total  tasks:40, success tasks: 0,failed task: 32,killed
>     tasks: 8
>


Re: issues about hadoop-0.20.0

Posted by Ulul <ha...@ulul.org>.
Hi

I'd say that no matter what version is running, the parameters don't fit 
the cluster: it can't handle 100 maps that each process a billion 
samples, and the tasks are hitting the MapReduce timeout of 600 seconds.

I'd try something smaller, like pi 20 100000.

Ulul
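
A minimal sketch of that smaller run, assuming the same example jar and
working directory as in the original post:

    ./hadoop jar hadoop-example-0.21.0.jar pi 20 100000

If the full-size job really has to run, the other knob is the MapReduce
task timeout, which is where the 600 seconds in the log comes from. The
property is mapred.task.timeout in 0.20-era configurations and
mapreduce.task.timeout in later releases; the value is in milliseconds
and goes in mapred-site.xml. A sketch (not verified on 0.21), raising it
to 30 minutes:

    <property>
      <name>mapreduce.task.timeout</name>
      <!-- milliseconds; the default of 600000 matches the 600s in the log -->
      <value>1800000</value>
    </property>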

On 18/07/2015 12:17, Harsh J wrote:
> Apache Hadoop 0.20 and 0.21 are both very old, unmaintained releases 
> at this point, and may carry issues that were only fixed in later 
> releases. Please consider using a newer release.
>
> Is there a specific reason you intend to use 0.21.0, which came out of 
> a branch long since abandoned?


Re: issues about hadoop-0.20.0

Posted by Harsh J <ha...@cloudera.com>.
Apache Hadoop 0.20 and 0.21 are both very old, unmaintained releases at
this point, and may carry issues that were only fixed in later releases.
Please consider using a newer release.

Is there a specific reason you intend to use 0.21.0, which came out of a
branch long since abandoned?


Re: issues about hadoop-0.20.0

Posted by Harsh J <ha...@cloudera.com>.
Apache Hadoop 0.20 and 0.21 are both very old and unmaintained releases at
this point, and may carry some issues unfixed via further releases. Please
consider using a newer release.

Is there a specific reason you intend to use 0.21.0, which came out of a
branch long since abandoned?

On Sat, Jul 18, 2015 at 1:27 PM longfei li <hb...@163.com> wrote:

> Hello!
> I built a hadoop cluster including 12 nodes which is based on
> arm(cubietruck), I run simple program wordcount to find how many words of h
> in hello, it runs perfectly. But I run a mutiple program like pi,i run like
> this:
> ./hadoop jar hadoop-example-0.21.0.jar pi 100 1000000000
> infomation
> 15/07/18 11:38:54 WARN conf.Configuration: DEPRECATED: hadoop-site.xml
> found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use
> core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of
> core-default.xml, mapred-default.xml and hdfs-default.xml respectively
> 15/07/18 11:38:54 INFO security.Groups: Group mapping
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
> cacheTimeout=300000
> 15/07/18 11:38:55 WARN conf.Configuration: mapred.task.id is deprecated.
> Instead, use mapreduce.task.attempt.id
> 15/07/18 11:38:55 WARN mapreduce.JobSubmitter: Use GenericOptionsParser
> for parsing the arguments. Applications should implement Tool for the same.
> 15/07/18 11:38:55 INFO input.FileInputFormat: Total input paths to process
> : 1
> 15/07/18 11:38:58 WARN conf.Configuration: mapred.map.tasks is deprecated.
> Instead, use mapreduce.job.maps
> 15/07/18 11:38:58 INFO mapreduce.JobSubmitter: number of splits:1
> 15/07/18 11:38:58 INFO mapreduce.JobSubmitter: adding the following
> namenodes' delegation tokens:null
> 15/07/18 11:38:59 INFO mapreduce.Job: Running job: job_201507181137_0001
> 15/07/18 11:39:00 INFO mapreduce.Job:  map 0% reduce 0%
> 15/07/18 11:39:20 INFO mapreduce.Job:  map 100% reduce 0%
> 15/07/18 11:39:35 INFO mapreduce.Job:  map 100% reduce 10%
> 15/07/18 11:39:36 INFO mapreduce.Job:  map 100% reduce 20%
> 15/07/18 11:39:38 INFO mapreduce.Job:  map 100% reduce 90%
> 15/07/18 11:39:58 INFO mapreduce.Job:  map 100% reduce 100%
> 15/07/18 11:49:47 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 11:49:49 INFO mapreduce.Job:  map 100% reduce 19%
> 15/07/18 11:49:54 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000000_0, Status : FAILED
> Task attempt_201507181137_0001_r_000000_0 failed to report status for 602
> seconds. Killing!
> 15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
> outputhadoop-slave7
> 15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
> outputhadoop-slave7
> 15/07/18 11:49:58 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000002_0, Status : FAILED
> Task attempt_201507181137_0001_r_000002_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
> outputhadoop-slave5
> 15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
> outputhadoop-slave5
> 15/07/18 11:50:00 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000003_0, Status : FAILED
> Task attempt_201507181137_0001_r_000003_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
> outputhadoop-slave12
> 15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
> outputhadoop-slave12
> 15/07/18 11:50:03 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000004_0, Status : FAILED
> Task attempt_201507181137_0001_r_000004_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
> outputhadoop-slave8
> 15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
> outputhadoop-slave8
> 15/07/18 11:50:06 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000007_0, Status : FAILED
> Task attempt_201507181137_0001_r_000007_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
> outputhadoop-slave11
> 15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
> outputhadoop-slave11
> 15/07/18 11:50:08 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000008_0, Status : FAILED
> Task attempt_201507181137_0001_r_000008_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
> outputhadoop-slave9
> 15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
> outputhadoop-slave9
> 15/07/18 11:50:11 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000009_0, Status : FAILED
> Task attempt_201507181137_0001_r_000009_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:13 WARN mapreduce.Job: Error reading task
> outputhadoop-slave4
> 15/07/18 11:50:13 WARN mapreduce.Job: Error reading task
> outputhadoop-slave4
> 15/07/18 11:50:13 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000006_0, Status : FAILED
> Task attempt_201507181137_0001_r_000006_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:16 WARN mapreduce.Job: Error reading task
> outputhadoop-slave6
> 15/07/18 11:50:16 WARN mapreduce.Job: Error reading task
> outputhadoop-slave6
> 15/07/18 11:50:16 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000005_0, Status : FAILED
> Task attempt_201507181137_0001_r_000005_0 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:50:18 WARN mapreduce.Job: Error reading task
> outputhadoop-slave1
> 15/07/18 11:50:18 WARN mapreduce.Job: Error reading task
> outputhadoop-slave1
> 15/07/18 11:50:19 INFO mapreduce.Job:  map 100% reduce 100%
> 15/07/18 11:50:28 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000001_0, Status : FAILED
> Task attempt_201507181137_0001_r_000001_0 failed to report status for 602
> seconds. Killing!
> 15/07/18 11:50:30 WARN mapreduce.Job: Error reading task
> outputhadoop-slave2
> 15/07/18 11:50:30 WARN mapreduce.Job: Error reading task
> outputhadoop-slave2
> 15/07/18 11:50:46 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 11:50:50 INFO mapreduce.Job:  map 100% reduce 79%
> 15/07/18 11:50:54 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000000_1, Status : FAILED
> Task attempt_201507181137_0001_r_000000_1 failed to report status for 600
> seconds. Killing!
> 15/07/18 11:50:57 WARN mapreduce.Job: Error reading task
> outputhadoop-slave10
> 15/07/18 11:50:57 WARN mapreduce.Job: Error reading task
> outputhadoop-slave10
> 15/07/18 11:50:58 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000001_1, Status : FAILED
> Task attempt_201507181137_0001_r_000001_1 failed to report status for 601
> seconds. Killing!
> 15/07/18 11:51:01 WARN mapreduce.Job: Error reading task
> outputhadoop-slave3
> 15/07/18 11:51:01 WARN mapreduce.Job: Error reading task
> outputhadoop-slave3
> 15/07/18 11:51:11 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 11:51:17 INFO mapreduce.Job:  map 100% reduce 100%
> 15/07/18 12:00:21 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:00:22 INFO mapreduce.Job:  map 100% reduce 59%
> 15/07/18 12:00:24 INFO mapreduce.Job:  map 100% reduce 49%
> 15/07/18 12:00:25 INFO mapreduce.Job:  map 100% reduce 19%
> 15/07/18 12:00:29 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000007_1, Status : FAILED
> Task attempt_201507181137_0001_r_000007_1 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:00:32 WARN mapreduce.Job: Error reading task
> outputhadoop-slave9
> 15/07/18 12:00:32 WARN mapreduce.Job: Error reading task
> outputhadoop-slave9
> 15/07/18 12:00:33 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000004_1, Status : FAILED
> Task attempt_201507181137_0001_r_000004_1 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:00:35 WARN mapreduce.Job: Error reading task
> outputhadoop-slave11
> 15/07/18 12:00:35 WARN mapreduce.Job: Error reading task
> outputhadoop-slave11
> 15/07/18 12:00:35 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000006_1, Status : FAILED
> Task attempt_201507181137_0001_r_000006_1 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:00:38 WARN mapreduce.Job: Error reading task
> outputhadoop-slave1
> 15/07/18 12:00:38 WARN mapreduce.Job: Error reading task
> outputhadoop-slave1
> 15/07/18 12:00:38 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000009_1, Status : FAILED
> Task attempt_201507181137_0001_r_000009_1 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:00:41 WARN mapreduce.Job: Error reading task
> outputhadoop-slave6
> 15/07/18 12:00:41 WARN mapreduce.Job: Error reading task
> outputhadoop-slave6
> 15/07/18 12:00:41 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000003_1, Status : FAILED
> Task attempt_201507181137_0001_r_000003_1 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:00:43 WARN mapreduce.Job: Error reading task
> outputhadoop-slave8
> 15/07/18 12:00:43 WARN mapreduce.Job: Error reading task
> outputhadoop-slave8
> 15/07/18 12:00:43 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000002_1, Status : FAILED
> Task attempt_201507181137_0001_r_000002_1 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:00:46 WARN mapreduce.Job: Error reading task
> outputhadoop-slave12
> 15/07/18 12:00:46 WARN mapreduce.Job: Error reading task
> outputhadoop-slave12
> 15/07/18 12:00:46 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000008_1, Status : FAILED
> Task attempt_201507181137_0001_r_000008_1 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:00:48 WARN mapreduce.Job: Error reading task
> outputhadoop-slave7
> 15/07/18 12:00:48 WARN mapreduce.Job: Error reading task
> outputhadoop-slave7
> 15/07/18 12:00:49 INFO mapreduce.Job:  map 100% reduce 79%
> 15/07/18 12:00:49 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000005_1, Status : FAILED
> Task attempt_201507181137_0001_r_000005_1 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:00:52 WARN mapreduce.Job: Error reading task
> outputhadoop-slave5
> 15/07/18 12:00:52 WARN mapreduce.Job: Error reading task
> outputhadoop-slave5
> 15/07/18 12:00:53 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:00:58 INFO mapreduce.Job:  map 100% reduce 100%
> 15/07/18 12:01:21 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:01:25 INFO mapreduce.Job:  map 100% reduce 79%
> 15/07/18 12:01:29 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000000_2, Status : FAILED
> Task attempt_201507181137_0001_r_000000_2 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:01:32 WARN mapreduce.Job: Error reading task
> outputhadoop-slave4
> 15/07/18 12:01:32 WARN mapreduce.Job: Error reading task
> outputhadoop-slave4
> 15/07/18 12:01:33 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000001_2, Status : FAILED
> Task attempt_201507181137_0001_r_000001_2 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:01:35 WARN mapreduce.Job: Error reading task
> outputhadoop-slave10
> 15/07/18 12:01:35 WARN mapreduce.Job: Error reading task
> outputhadoop-slave10
> 15/07/18 12:01:47 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:01:50 INFO mapreduce.Job:  map 100% reduce 100%
> 15/07/18 12:10:54 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:10:55 INFO mapreduce.Job:  map 100% reduce 79%
> 15/07/18 12:10:57 INFO mapreduce.Job:  map 100% reduce 59%
> 15/07/18 12:11:00 INFO mapreduce.Job:  map 100% reduce 49%
> 15/07/18 12:11:01 INFO mapreduce.Job:  map 100% reduce 39%
> 15/07/18 12:11:02 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000004_2, Status : FAILED
> Task attempt_201507181137_0001_r_000004_2 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:11:04 WARN mapreduce.Job: Error reading task
> outputhadoop-slave1
> 15/07/18 12:11:04 WARN mapreduce.Job: Error reading task
> outputhadoop-slave1
> 15/07/18 12:11:05 INFO mapreduce.Job:  map 100% reduce 29%
> 15/07/18 12:11:05 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000009_2, Status : FAILED
> Task attempt_201507181137_0001_r_000009_2 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:11:08 WARN mapreduce.Job: Error reading task
> outputhadoop-slave3
> 15/07/18 12:11:08 WARN mapreduce.Job: Error reading task
> outputhadoop-slave3
> 15/07/18 12:11:08 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000003_2, Status : FAILED
> Task attempt_201507181137_0001_r_000003_2 failed to report status for 600
> seconds. Killing!
> 15/07/18 12:11:11 WARN mapreduce.Job: Error reading task
> outputhadoop-slave9
> 15/07/18 12:11:11 WARN mapreduce.Job: Error reading task
> outputhadoop-slave9
> 15/07/18 12:11:11 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000007_2, Status : FAILED
> Task attempt_201507181137_0001_r_000007_2 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:11:13 WARN mapreduce.Job: Error reading task
> outputhadoop-slave6
> 15/07/18 12:11:13 WARN mapreduce.Job: Error reading task
> outputhadoop-slave6
> 15/07/18 12:11:14 INFO mapreduce.Job:  map 100% reduce 19%
> 15/07/18 12:11:14 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000002_2, Status : FAILED
> Task attempt_201507181137_0001_r_000002_2 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:11:17 WARN mapreduce.Job: Error reading task
> outputhadoop-slave7
> 15/07/18 12:11:17 WARN mapreduce.Job: Error reading task
> outputhadoop-slave7
> 15/07/18 12:11:17 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000008_2, Status : FAILED
> Task attempt_201507181137_0001_r_000008_2 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:11:19 WARN mapreduce.Job: Error reading task
> outputhadoop-slave5
> 15/07/18 12:11:19 WARN mapreduce.Job: Error reading task
> outputhadoop-slave5
> 15/07/18 12:11:19 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000005_2, Status : FAILED
> Task attempt_201507181137_0001_r_000005_2 failed to report status for 602
> seconds. Killing!
> 15/07/18 12:11:22 WARN mapreduce.Job: Error reading task
> outputhadoop-slave8
> 15/07/18 12:11:22 WARN mapreduce.Job: Error reading task
> outputhadoop-slave8
> 15/07/18 12:11:23 INFO mapreduce.Job:  map 100% reduce 59%
> 15/07/18 12:11:23 INFO mapreduce.Job: Task Id :
> attempt_201507181137_0001_r_000006_2, Status : FAILED
> Task attempt_201507181137_0001_r_000006_2 failed to report status for 601
> seconds. Killing!
> 15/07/18 12:11:25 WARN mapreduce.Job: Error reading task
> outputhadoop-slave2
> 15/07/18 12:11:25 WARN mapreduce.Job: Error reading task
> outputhadoop-slave2
> 15/07/18 12:11:26 INFO mapreduce.Job:  map 100% reduce 79%
> 15/07/18 12:11:27 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:11:39 INFO mapreduce.Job:  map 100% reduce 100%
> 15/07/18 12:11:55 INFO mapreduce.Job:  map 100% reduce 89%
> 15/07/18 12:12:00 INFO mapreduce.Job:  map 100% reduce 79%
> 15/07/18 12:12:12 INFO mapreduce.Job: Job complete: job_201507181137_0001
> 15/07/18 12:12:12 INFO mapreduce.Job: Counters: 20
> FileInputFormatCounters
>     BYTES_READ=5
> FileSystemCounters
>     FILE_BYTES_WRITTEN=388
>     HDFS_BYTES_READ=138
> Job Counters
>     Total time spent by all maps waiting after reserving slots (ms)=0
>     Total time spent by all reduces waiting after reserving slots (ms)=0
>     Failed reduce tasks=1
>     Rack-local map tasks=1
>     SLOTS_MILLIS_MAPS=22762
>     SLOTS_MILLIS_REDUCES=723290
>     Launched map tasks=1
>     Launched reduce tasks=40
> Map-Reduce Framework
>     Combine input records=0
>     Failed Shuffles=0
>     GC time elapsed (ms)=200
>     Map input records=1
>     Map output bytes=60
>     Map output records=10
>     Merged Map outputs=0
>     Spilled Records=10
>     SPLIT_RAW_BYTES=133
> The job status shows failed.
>   Reduce tasks: 40 total, 0 succeeded, 32 failed, 8 killed.
>
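For context on the repeated "failed to report status for ~600 seconds. Killing!" messages above: MapReduce kills any task attempt that does not report progress within the task timeout, mapred.task.timeout (600000 ms, i.e. 10 minutes, by default on the 0.20/0.21 line; later releases call it mapreduce.task.timeout). In user code, a long-running reduce loop normally keeps its attempt alive by calling progress() on the context. The following is only a minimal sketch of that pattern; the class name, types, and status text are illustrative and not taken from the pi job above.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch: a reducer that pings the framework during long-running work so the
// TaskTracker does not kill the attempt for inactivity.
public class ProgressReportingReducer
    extends Reducer<Text, LongWritable, Text, LongWritable> {

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context context)
      throws IOException, InterruptedException {
    long sum = 0;
    long seen = 0;
    for (LongWritable value : values) {
      sum += value.get();
      seen++;
      if (seen % 100000 == 0) {
        // Report liveness and a human-readable status every 100000 values.
        context.progress();
        context.setStatus("processed " + seen + " values for key " + key);
      }
    }
    context.write(key, new LongWritable(sum));
  }
}

Raising mapred.task.timeout in mapred-site.xml would only mask the symptom here; ten minutes of silence from every reduce attempt suggests the attempts are stuck rather than merely slow.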

Re: issues about hadoop-0.20.0

Posted by Harsh J <ha...@cloudera.com>.
Apache Hadoop 0.20 and 0.21 are both very old, unmaintained releases at this
point, and may carry issues that were only fixed in later releases. Please
consider using a newer release.

Is there a specific reason you intend to use 0.21.0, which came out of a
branch long since abandoned?

On Sat, Jul 18, 2015 at 1:27 PM longfei li <hb...@163.com> wrote:

> Hello!
> I built a Hadoop cluster of 12 ARM (Cubietruck) nodes. A simple WordCount
> job (counting how often a word such as "hello" appears) runs perfectly, but
> when I run a bigger, multi-task program like pi, I run it like this:
> ./hadoop jar hadoop-example-0.21.0.jar pi 100 1000000000
> The job status shows failed.
>   Reduce tasks: 40 total, 0 succeeded, 32 failed, 8 killed.
>
