Posted to dev@hbase.apache.org by Irfan Mohammed <ir...@gmail.com> on 2009/07/02 22:22:20 UTC

performance help

Hi,

Hbase/Hadoop Setup:
1. 3 regionservers 
2. Run the task using 20 Map Tasks and 20 Reduce Tasks. 
3. Using an older hbase version from the trunk [ Version: 0.20.0-dev, r786695, Sat Jun 20 18:01:17 EDT 2009 ]
4. Using hadoop [ 0.20.0 ]

Test Data:
1. The input is a CSV file with 1M rows, about 20 columns, and 4 metrics. 
2. The output is 4 hbase tables: "txn_m1", "txn_m2", "txn_m3", "txn_m4". 

The task is to parse the CSV file and, for each metric m1, create an entry in the hbase table "txn_m1" with the columns as needed. Attached is a PDF [exported from Excel] that explains how a single row of the CSV is converted into hbase data in the mapper and reducer stages. The code is attached as well.

Processing 1M records takes about 38 minutes. I am using HTable.incrementColumnValue() in the reduce pass to create the records in the hbase tables. 
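The per-record write in the reduce pass looks roughly like this (a simplified sketch; the row key, family, and qualifier names below are placeholders, not the real schema):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementSketch {
    public static void main(String[] args) throws IOException {
        // One HTable instance per output table, created once and reused,
        // so region locations are not re-resolved on every call.
        HTable txnM1 = new HTable(new HBaseConfiguration(), "txn_m1");

        // For each reduce input record, bump the counter cell by the metric
        // value. Each incrementColumnValue() is a synchronous round trip
        // to a regionserver; it is not buffered client-side.
        byte[] row = Bytes.toBytes("some-row-key");   // placeholder key
        txnM1.incrementColumnValue(row,
                Bytes.toBytes("data"),                // placeholder family
                Bytes.toBytes("m1"),                  // placeholder qualifier
                42L);                                 // metric value
    }
}
```

This runs once per reduce input record, i.e. ~17M synchronous round trips in this job.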

Is there anything I should be doing differently, or anything inherently incorrect here? I would like to run this task in about 1 minute.

Thanks for the help,
Irfan

Here is the output of the process. Let me know if I should attach any other log. 

09/07/02 15:19:11 INFO mapred.JobClient: Running job: job_200906192236_5114
09/07/02 15:19:12 INFO mapred.JobClient:  map 0% reduce 0%
09/07/02 15:19:29 INFO mapred.JobClient:  map 30% reduce 0%
09/07/02 15:19:32 INFO mapred.JobClient:  map 46% reduce 0%
09/07/02 15:19:35 INFO mapred.JobClient:  map 64% reduce 0%
09/07/02 15:19:38 INFO mapred.JobClient:  map 75% reduce 0%
09/07/02 15:19:44 INFO mapred.JobClient:  map 76% reduce 0%
09/07/02 15:19:47 INFO mapred.JobClient:  map 99% reduce 1%
09/07/02 15:19:50 INFO mapred.JobClient:  map 100% reduce 3%
09/07/02 15:19:53 INFO mapred.JobClient:  map 100% reduce 4%
09/07/02 15:19:56 INFO mapred.JobClient:  map 100% reduce 10%
09/07/02 15:19:59 INFO mapred.JobClient:  map 100% reduce 12%
09/07/02 15:20:02 INFO mapred.JobClient:  map 100% reduce 16%
09/07/02 15:20:05 INFO mapred.JobClient:  map 100% reduce 25%
09/07/02 15:20:08 INFO mapred.JobClient:  map 100% reduce 33%
09/07/02 15:20:11 INFO mapred.JobClient:  map 100% reduce 36%
09/07/02 15:20:14 INFO mapred.JobClient:  map 100% reduce 39%
09/07/02 15:20:17 INFO mapred.JobClient:  map 100% reduce 41%
09/07/02 15:20:29 INFO mapred.JobClient:  map 100% reduce 42%
09/07/02 15:20:32 INFO mapred.JobClient:  map 100% reduce 44%
09/07/02 15:20:38 INFO mapred.JobClient:  map 100% reduce 46%
09/07/02 15:20:49 INFO mapred.JobClient:  map 100% reduce 47%
09/07/02 15:20:55 INFO mapred.JobClient:  map 100% reduce 50%
09/07/02 15:21:01 INFO mapred.JobClient:  map 100% reduce 51%
09/07/02 15:21:34 INFO mapred.JobClient:  map 100% reduce 52%
09/07/02 15:21:39 INFO mapred.JobClient:  map 100% reduce 53%
09/07/02 15:22:06 INFO mapred.JobClient:  map 100% reduce 54%
09/07/02 15:22:28 INFO mapred.JobClient:  map 100% reduce 55%
09/07/02 15:22:44 INFO mapred.JobClient:  map 100% reduce 56%
09/07/02 15:23:02 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000002_0, Status : FAILED
attempt_200906192236_5114_r_000002_0: [2009-07-02 15:20:27.230] fetching new record writer ...
attempt_200906192236_5114_r_000002_0: [2009-07-02 15:22:51.429] failed to initialize the hbase configuration
09/07/02 15:23:08 INFO mapred.JobClient:  map 100% reduce 53%
09/07/02 15:23:08 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000013_0, Status : FAILED
org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:863)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
	at com.qwapi.txnload.LoadMultipleCubes$CubeOutputFormat.getRecordWriter(LoadMultipleCubes.java:442)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)

attempt_200906192236_5114_r_000013_0: [2009-07-02 15:20:33.183] fetching new record writer ...
attempt_200906192236_5114_r_000013_0: [2009-07-02 15:23:04.369] failed to initialize the hbase configuration
09/07/02 15:23:09 INFO mapred.JobClient:  map 100% reduce 50%
09/07/02 15:23:14 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000012_0, Status : FAILED
org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:863)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
	at com.qwapi.txnload.LoadMultipleCubes$CubeOutputFormat.getRecordWriter(LoadMultipleCubes.java:442)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)

attempt_200906192236_5114_r_000012_0: [2009-07-02 15:20:48.434] fetching new record writer ...
attempt_200906192236_5114_r_000012_0: [2009-07-02 15:23:10.185] failed to initialize the hbase configuration
09/07/02 15:23:15 INFO mapred.JobClient:  map 100% reduce 48%
09/07/02 15:23:17 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000014_0, Status : FAILED
org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:863)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
	at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
	at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
	at com.qwapi.txnload.LoadMultipleCubes$CubeOutputFormat.getRecordWriter(LoadMultipleCubes.java:442)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)

attempt_200906192236_5114_r_000014_0: [2009-07-02 15:20:47.442] fetching new record writer ...
attempt_200906192236_5114_r_000014_0: [2009-07-02 15:23:13.285] failed to initialize the hbase configuration
09/07/02 15:23:18 INFO mapred.JobClient:  map 100% reduce 45%
09/07/02 15:23:21 INFO mapred.JobClient:  map 100% reduce 46%
09/07/02 15:23:29 INFO mapred.JobClient:  map 100% reduce 47%
09/07/02 15:23:32 INFO mapred.JobClient:  map 100% reduce 48%
09/07/02 15:23:36 INFO mapred.JobClient:  map 100% reduce 49%
09/07/02 15:23:39 INFO mapred.JobClient:  map 100% reduce 51%
09/07/02 15:23:42 INFO mapred.JobClient:  map 100% reduce 56%
09/07/02 15:23:45 INFO mapred.JobClient:  map 100% reduce 58%
09/07/02 15:24:20 INFO mapred.JobClient:  map 100% reduce 59%
09/07/02 15:25:11 INFO mapred.JobClient:  map 100% reduce 60%
09/07/02 15:25:17 INFO mapred.JobClient:  map 100% reduce 61%
09/07/02 15:25:26 INFO mapred.JobClient:  map 100% reduce 62%
09/07/02 15:25:32 INFO mapred.JobClient:  map 100% reduce 64%
09/07/02 15:25:38 INFO mapred.JobClient:  map 100% reduce 65%
09/07/02 15:26:20 INFO mapred.JobClient:  map 100% reduce 66%
09/07/02 15:26:40 INFO mapred.JobClient:  map 100% reduce 67%
09/07/02 15:26:48 INFO mapred.JobClient:  map 100% reduce 68%
09/07/02 15:27:16 INFO mapred.JobClient:  map 100% reduce 69%
09/07/02 15:27:21 INFO mapred.JobClient:  map 100% reduce 70%
09/07/02 15:27:46 INFO mapred.JobClient:  map 100% reduce 71%
09/07/02 15:28:25 INFO mapred.JobClient:  map 100% reduce 72%
09/07/02 15:28:46 INFO mapred.JobClient:  map 100% reduce 73%
09/07/02 15:29:08 INFO mapred.JobClient:  map 100% reduce 74%
09/07/02 15:29:45 INFO mapred.JobClient:  map 100% reduce 76%
09/07/02 15:30:42 INFO mapred.JobClient:  map 100% reduce 77%
09/07/02 15:31:06 INFO mapred.JobClient:  map 100% reduce 78%
09/07/02 15:31:12 INFO mapred.JobClient:  map 100% reduce 79%
09/07/02 15:31:36 INFO mapred.JobClient:  map 100% reduce 81%
09/07/02 15:31:37 INFO mapred.JobClient:  map 100% reduce 82%
09/07/02 15:32:00 INFO mapred.JobClient:  map 100% reduce 83%
09/07/02 15:32:09 INFO mapred.JobClient:  map 100% reduce 84%
09/07/02 15:32:30 INFO mapred.JobClient:  map 100% reduce 86%
09/07/02 15:38:42 INFO mapred.JobClient:  map 100% reduce 88%
09/07/02 15:39:49 INFO mapred.JobClient:  map 100% reduce 89%
09/07/02 15:41:13 INFO mapred.JobClient:  map 100% reduce 90%
09/07/02 15:41:16 INFO mapred.JobClient:  map 100% reduce 91%
09/07/02 15:41:28 INFO mapred.JobClient:  map 100% reduce 93%
09/07/02 15:44:34 INFO mapred.JobClient:  map 100% reduce 94%
09/07/02 15:45:41 INFO mapred.JobClient:  map 100% reduce 95%
09/07/02 15:45:50 INFO mapred.JobClient:  map 100% reduce 96%
09/07/02 15:46:17 INFO mapred.JobClient:  map 100% reduce 98%
09/07/02 15:55:29 INFO mapred.JobClient:  map 100% reduce 99%
09/07/02 15:57:08 INFO mapred.JobClient:  map 100% reduce 100%
09/07/02 15:57:14 INFO mapred.JobClient: Job complete: job_200906192236_5114
09/07/02 15:57:14 INFO mapred.JobClient: Counters: 18
09/07/02 15:57:14 INFO mapred.JobClient:   Job Counters 
09/07/02 15:57:14 INFO mapred.JobClient:     Launched reduce tasks=24
09/07/02 15:57:14 INFO mapred.JobClient:     Rack-local map tasks=2
09/07/02 15:57:14 INFO mapred.JobClient:     Launched map tasks=20
09/07/02 15:57:14 INFO mapred.JobClient:     Data-local map tasks=18
09/07/02 15:57:14 INFO mapred.JobClient:   FileSystemCounters
09/07/02 15:57:14 INFO mapred.JobClient:     FILE_BYTES_READ=1848609562
09/07/02 15:57:14 INFO mapred.JobClient:     HDFS_BYTES_READ=57982980
09/07/02 15:57:14 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2768325646
09/07/02 15:57:14 INFO mapred.JobClient:   Map-Reduce Framework
09/07/02 15:57:14 INFO mapred.JobClient:     Reduce input groups=4863
09/07/02 15:57:14 INFO mapred.JobClient:     Combine output records=0
09/07/02 15:57:14 INFO mapred.JobClient:     Map input records=294786
09/07/02 15:57:14 INFO mapred.JobClient:     Reduce shuffle bytes=883803390
09/07/02 15:57:14 INFO mapred.JobClient:     Reduce output records=0
09/07/02 15:57:14 INFO mapred.JobClient:     Spilled Records=50956464
09/07/02 15:57:14 INFO mapred.JobClient:     Map output bytes=888797024
09/07/02 15:57:14 INFO mapred.JobClient:     Map input bytes=57966580
09/07/02 15:57:14 INFO mapred.JobClient:     Combine input records=0
09/07/02 15:57:14 INFO mapred.JobClient:     Map output records=16985488
09/07/02 15:57:14 INFO mapred.JobClient:     Reduce input records=16985488


Re: performance help

Posted by Jonathan Gray <jl...@streamy.com>.
Are you having HDFS issues?  Did a RegionServer go down at all?  Not 
sure what's up with the root location issue.

With only 3 nodes and 20 reducers you are pushing things pretty hard...

The best way to investigate how to improve your performance is to look 
at the load characteristics of the nodes.  I definitely recommend 
setting up Ganglia as a first step.

Irfan Mohammed wrote:
> Hi,
> 
> Hbase/Hadoop Setup:
> 1. 3 regionservers 
> 2. Run the task using 20 Map Tasks and 20 Reduce Tasks. 
> 3. Using an older hbase version from the trunk [ Version: 0.20.0-dev, r786695, Sat Jun 20 18:01:17 EDT 2009 ]
> 4. Using hadoop [ 0.20.0 ]
> 
> Test Data:
> 1. The input is a CSV file with a 1M rows and about 20 columns and 4 metrics. 
> 2. Output is 4 hbase tables "txn_m1", "txn_m2", "txn_m3", "txn_m4". 
> 
> The task is to parse through the CSV file and for each metric m1 create an entry into the hbase table "txn_m1" with the columns as needed. Attached is an pdf [from an excel] which explains how a single row in the CSV is converted into hbase data in the mapper and reducer stage. Attached is the code as well.
> 
> For processing a 1M records, it is taking about 38 minutes. I am using HTable.incrementColumnValue() in the reduce pass to create the records in the hbase tables. 
> 
> Is there anything I should be doing differently or inherently incorrect? I would like run this task in 1 minute.
> 
> Thanks for the help,
> Irfan
> 
> Here is the output of the process. Let me know if I should attach any other log. 
> 
> [...]

Re: performance help

Posted by stack <st...@duboce.net>.
Why 4 tables?  Why not one table and four column families, one for each 
metric?  (Looking at the Excel spreadsheet, each row has the same key.)  Then 
you'd be doing one insert against a single table rather than four separate ones.
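Roughly, with a single Put carrying all four metrics (the combined table name "txn" and the family names "m1".."m4" here are just illustrative, not your schema):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class OneTableSketch {
    public static void main(String[] args) throws IOException {
        HTable txn = new HTable(new HBaseConfiguration(), "txn");

        // Same row key for every metric, so all four land in one Put:
        // one write against one table instead of four separate inserts
        // into four tables.
        Put p = new Put(Bytes.toBytes("some-row-key"));
        p.add(Bytes.toBytes("m1"), Bytes.toBytes("v"), Bytes.toBytes(1L));
        p.add(Bytes.toBytes("m2"), Bytes.toBytes("v"), Bytes.toBytes(2L));
        p.add(Bytes.toBytes("m3"), Bytes.toBytes("v"), Bytes.toBytes(3L));
        p.add(Bytes.toBytes("m4"), Bytes.toBytes("v"), Bytes.toBytes(4L));
        txn.put(p);
    }
}
```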

Looking at your MR output below, it looks like it takes about 40 seconds to 
complete the map tasks.  The report says there are 294786 input records and 
that the mapper outputs ~17M records.  Is that expected?

A few of your reducers failed and were done over again.  The redos were 
probably a significant part of the overall elapsed time.  The failures are 
timeouts trying to find the root region.  The root region location is kept 
in ZooKeeper; odd that it can't be found there.

The fetching of map output and the sort are taking a considerable amount of 
the overall time.  Do you need the reduce step at all?  (I couldn't tell from 
the Excel spreadsheet; there didn't seem to be any summing going on.)  If 
not, dropping it could make for savings too.

You might try outputting to hdfs first to see how fast the job runs with no 
hbase involved.  See how long that takes and tune this part of the job first.  
Then add hbase back in and see how much it slows things down.
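Something like this for the hdfs-only run (the output path is hypothetical; input path, mapper, and reducer stay wired up as in your current job):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;

public class HdfsOnlyRun {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(HdfsOnlyRun.class);
        conf.setJobName("load-cubes-hdfs-only");

        // Swap the custom hbase OutputFormat for a plain file one, so the
        // job writes its reduce output to hdfs and hbase is out of the
        // picture entirely.
        conf.setOutputFormat(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(conf, new Path("/tmp/txn_out")); // hypothetical

        // ... input path, mapper, and reducer configured as in the original job ...
        JobClient.runJob(conf);
    }
}
```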

Looking at your code, nothing obviously onerous.

St.Ack

On Thu, Jul 2, 2009 at 1:22 PM, Irfan Mohammed <ir...@gmail.com> wrote:

> Hi,
>
> Hbase/Hadoop Setup:
> 1. 3 regionservers
> 2. Run the task using 20 Map Tasks and 20 Reduce Tasks.
> 3. Using an older hbase version from the trunk [ Version: 0.20.0-dev,
> r786695, Sat Jun 20 18:01:17 EDT 2009 ]
> 4. Using hadoop [ 0.20.0 ]
>
> Test Data:
> 1. The input is a CSV file with a 1M rows and about 20 columns and 4
> metrics.
> 2. Output is 4 hbase tables "txn_m1", "txn_m2", "txn_m3", "txn_m4".
>
> The task is to parse through the CSV file and for each metric m1 create an
> entry into the hbase table "txn_m1" with the columns as needed. Attached is
> an pdf [from an excel] which explains how a single row in the CSV is
> converted into hbase data in the mapper and reducer stage. Attached is the
> code as well.
>
> For processing a 1M records, it is taking about 38 minutes. I am using
> HTable.incrementColumnValue() in the reduce pass to create the records in
> the hbase tables.
>
> Is there anything I should be doing differently or inherently incorrect? I
> would like run this task in 1 minute.
>
> Thanks for the help,
> Irfan
>
> Here is the output of the process. Let me know if I should attach any other
> log.
>
> 09/07/02 15:19:11 INFO mapred.JobClient: Running job: job_200906192236_5114
> 09/07/02 15:19:12 INFO mapred.JobClient:  map 0% reduce 0%
> 09/07/02 15:19:29 INFO mapred.JobClient:  map 30% reduce 0%
> 09/07/02 15:19:32 INFO mapred.JobClient:  map 46% reduce 0%
> 09/07/02 15:19:35 INFO mapred.JobClient:  map 64% reduce 0%
> 09/07/02 15:19:38 INFO mapred.JobClient:  map 75% reduce 0%
> 09/07/02 15:19:44 INFO mapred.JobClient:  map 76% reduce 0%
> 09/07/02 15:19:47 INFO mapred.JobClient:  map 99% reduce 1%
> 09/07/02 15:19:50 INFO mapred.JobClient:  map 100% reduce 3%
> 09/07/02 15:19:53 INFO mapred.JobClient:  map 100% reduce 4%
> 09/07/02 15:19:56 INFO mapred.JobClient:  map 100% reduce 10%
> 09/07/02 15:19:59 INFO mapred.JobClient:  map 100% reduce 12%
> 09/07/02 15:20:02 INFO mapred.JobClient:  map 100% reduce 16%
> 09/07/02 15:20:05 INFO mapred.JobClient:  map 100% reduce 25%
> 09/07/02 15:20:08 INFO mapred.JobClient:  map 100% reduce 33%
> 09/07/02 15:20:11 INFO mapred.JobClient:  map 100% reduce 36%
> 09/07/02 15:20:14 INFO mapred.JobClient:  map 100% reduce 39%
> 09/07/02 15:20:17 INFO mapred.JobClient:  map 100% reduce 41%
> 09/07/02 15:20:29 INFO mapred.JobClient:  map 100% reduce 42%
> 09/07/02 15:20:32 INFO mapred.JobClient:  map 100% reduce 44%
> 09/07/02 15:20:38 INFO mapred.JobClient:  map 100% reduce 46%
> 09/07/02 15:20:49 INFO mapred.JobClient:  map 100% reduce 47%
> 09/07/02 15:20:55 INFO mapred.JobClient:  map 100% reduce 50%
> 09/07/02 15:21:01 INFO mapred.JobClient:  map 100% reduce 51%
> 09/07/02 15:21:34 INFO mapred.JobClient:  map 100% reduce 52%
> 09/07/02 15:21:39 INFO mapred.JobClient:  map 100% reduce 53%
> 09/07/02 15:22:06 INFO mapred.JobClient:  map 100% reduce 54%
> 09/07/02 15:22:28 INFO mapred.JobClient:  map 100% reduce 55%
> 09/07/02 15:22:44 INFO mapred.JobClient:  map 100% reduce 56%
> 09/07/02 15:23:02 INFO mapred.JobClient: Task Id :
> attempt_200906192236_5114_r_000002_0, Status : FAILED
> attempt_200906192236_5114_r_000002_0: [2009-07-02 15:20:27.230] fetching
> new record writer ...
> attempt_200906192236_5114_r_000002_0: [2009-07-02 15:22:51.429] failed to
> initialize the hbase configuration
> 09/07/02 15:23:08 INFO mapred.JobClient:  map 100% reduce 53%
> 09/07/02 15:23:08 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000013_0, Status : FAILED
> org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:863)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
>        at com.qwapi.txnload.LoadMultipleCubes$CubeOutputFormat.getRecordWriter(LoadMultipleCubes.java:442)
>        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> attempt_200906192236_5114_r_000013_0: [2009-07-02 15:20:33.183] fetching new record writer ...
> attempt_200906192236_5114_r_000013_0: [2009-07-02 15:23:04.369] failed to initialize the hbase configuration
> 09/07/02 15:23:09 INFO mapred.JobClient:  map 100% reduce 50%
> 09/07/02 15:23:14 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000012_0, Status : FAILED
> org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:863)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
>        at com.qwapi.txnload.LoadMultipleCubes$CubeOutputFormat.getRecordWriter(LoadMultipleCubes.java:442)
>        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> attempt_200906192236_5114_r_000012_0: [2009-07-02 15:20:48.434] fetching new record writer ...
> attempt_200906192236_5114_r_000012_0: [2009-07-02 15:23:10.185] failed to initialize the hbase configuration
> 09/07/02 15:23:15 INFO mapred.JobClient:  map 100% reduce 48%
> 09/07/02 15:23:17 INFO mapred.JobClient: Task Id : attempt_200906192236_5114_r_000014_0, Status : FAILED
> org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:863)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:514)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:523)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:496)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:628)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:527)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:490)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:107)
>        at com.qwapi.txnload.LoadMultipleCubes$CubeOutputFormat.getRecordWriter(LoadMultipleCubes.java:442)
>        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> attempt_200906192236_5114_r_000014_0: [2009-07-02 15:20:47.442] fetching new record writer ...
> attempt_200906192236_5114_r_000014_0: [2009-07-02 15:23:13.285] failed to initialize the hbase configuration
> 09/07/02 15:23:18 INFO mapred.JobClient:  map 100% reduce 45%
> 09/07/02 15:23:21 INFO mapred.JobClient:  map 100% reduce 46%
> 09/07/02 15:23:29 INFO mapred.JobClient:  map 100% reduce 47%
> 09/07/02 15:23:32 INFO mapred.JobClient:  map 100% reduce 48%
> 09/07/02 15:23:36 INFO mapred.JobClient:  map 100% reduce 49%
> 09/07/02 15:23:39 INFO mapred.JobClient:  map 100% reduce 51%
> 09/07/02 15:23:42 INFO mapred.JobClient:  map 100% reduce 56%
> 09/07/02 15:23:45 INFO mapred.JobClient:  map 100% reduce 58%
> 09/07/02 15:24:20 INFO mapred.JobClient:  map 100% reduce 59%
> 09/07/02 15:25:11 INFO mapred.JobClient:  map 100% reduce 60%
> 09/07/02 15:25:17 INFO mapred.JobClient:  map 100% reduce 61%
> 09/07/02 15:25:26 INFO mapred.JobClient:  map 100% reduce 62%
> 09/07/02 15:25:32 INFO mapred.JobClient:  map 100% reduce 64%
> 09/07/02 15:25:38 INFO mapred.JobClient:  map 100% reduce 65%
> 09/07/02 15:26:20 INFO mapred.JobClient:  map 100% reduce 66%
> 09/07/02 15:26:40 INFO mapred.JobClient:  map 100% reduce 67%
> 09/07/02 15:26:48 INFO mapred.JobClient:  map 100% reduce 68%
> 09/07/02 15:27:16 INFO mapred.JobClient:  map 100% reduce 69%
> 09/07/02 15:27:21 INFO mapred.JobClient:  map 100% reduce 70%
> 09/07/02 15:27:46 INFO mapred.JobClient:  map 100% reduce 71%
> 09/07/02 15:28:25 INFO mapred.JobClient:  map 100% reduce 72%
> 09/07/02 15:28:46 INFO mapred.JobClient:  map 100% reduce 73%
> 09/07/02 15:29:08 INFO mapred.JobClient:  map 100% reduce 74%
> 09/07/02 15:29:45 INFO mapred.JobClient:  map 100% reduce 76%
> 09/07/02 15:30:42 INFO mapred.JobClient:  map 100% reduce 77%
> 09/07/02 15:31:06 INFO mapred.JobClient:  map 100% reduce 78%
> 09/07/02 15:31:12 INFO mapred.JobClient:  map 100% reduce 79%
> 09/07/02 15:31:36 INFO mapred.JobClient:  map 100% reduce 81%
> 09/07/02 15:31:37 INFO mapred.JobClient:  map 100% reduce 82%
> 09/07/02 15:32:00 INFO mapred.JobClient:  map 100% reduce 83%
> 09/07/02 15:32:09 INFO mapred.JobClient:  map 100% reduce 84%
> 09/07/02 15:32:30 INFO mapred.JobClient:  map 100% reduce 86%
> 09/07/02 15:38:42 INFO mapred.JobClient:  map 100% reduce 88%
> 09/07/02 15:39:49 INFO mapred.JobClient:  map 100% reduce 89%
> 09/07/02 15:41:13 INFO mapred.JobClient:  map 100% reduce 90%
> 09/07/02 15:41:16 INFO mapred.JobClient:  map 100% reduce 91%
> 09/07/02 15:41:28 INFO mapred.JobClient:  map 100% reduce 93%
> 09/07/02 15:44:34 INFO mapred.JobClient:  map 100% reduce 94%
> 09/07/02 15:45:41 INFO mapred.JobClient:  map 100% reduce 95%
> 09/07/02 15:45:50 INFO mapred.JobClient:  map 100% reduce 96%
> 09/07/02 15:46:17 INFO mapred.JobClient:  map 100% reduce 98%
> 09/07/02 15:55:29 INFO mapred.JobClient:  map 100% reduce 99%
> 09/07/02 15:57:08 INFO mapred.JobClient:  map 100% reduce 100%
> 09/07/02 15:57:14 INFO mapred.JobClient: Job complete: job_200906192236_5114
> 09/07/02 15:57:14 INFO mapred.JobClient: Counters: 18
> 09/07/02 15:57:14 INFO mapred.JobClient:   Job Counters
> 09/07/02 15:57:14 INFO mapred.JobClient:     Launched reduce tasks=24
> 09/07/02 15:57:14 INFO mapred.JobClient:     Rack-local map tasks=2
> 09/07/02 15:57:14 INFO mapred.JobClient:     Launched map tasks=20
> 09/07/02 15:57:14 INFO mapred.JobClient:     Data-local map tasks=18
> 09/07/02 15:57:14 INFO mapred.JobClient:   FileSystemCounters
> 09/07/02 15:57:14 INFO mapred.JobClient:     FILE_BYTES_READ=1848609562
> 09/07/02 15:57:14 INFO mapred.JobClient:     HDFS_BYTES_READ=57982980
> 09/07/02 15:57:14 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2768325646
> 09/07/02 15:57:14 INFO mapred.JobClient:   Map-Reduce Framework
> 09/07/02 15:57:14 INFO mapred.JobClient:     Reduce input groups=4863
> 09/07/02 15:57:14 INFO mapred.JobClient:     Combine output records=0
> 09/07/02 15:57:14 INFO mapred.JobClient:     Map input records=294786
> 09/07/02 15:57:14 INFO mapred.JobClient:     Reduce shuffle bytes=883803390
> 09/07/02 15:57:14 INFO mapred.JobClient:     Reduce output records=0
> 09/07/02 15:57:14 INFO mapred.JobClient:     Spilled Records=50956464
> 09/07/02 15:57:14 INFO mapred.JobClient:     Map output bytes=888797024
> 09/07/02 15:57:14 INFO mapred.JobClient:     Map input bytes=57966580
> 09/07/02 15:57:14 INFO mapred.JobClient:     Combine input records=0
> 09/07/02 15:57:14 INFO mapred.JobClient:     Map output records=16985488
> 09/07/02 15:57:14 INFO mapred.JobClient:     Reduce input records=16985488
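
The counters above show ~17M reduce input records, so calling HTable.incrementColumnValue() once per record means roughly one RPC per record. One common pattern on the 0.20 client is to aggregate the counts in memory per reducer and flush them through a single buffered HTable instance. The sketch below is illustrative only, assuming a hypothetical BufferedCounterWriter wrapper and a "metric"/"value" column that are not from the attached LoadMultipleCubes code:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: aggregate per-row deltas in memory, then issue one
// incrementColumnValue() RPC per distinct row instead of one per record.
public class BufferedCounterWriter {
    private final HTable table;
    private final Map<String, Long> counts = new HashMap<String, Long>();

    public BufferedCounterWriter(String tableName) throws IOException {
        table = new HTable(new HBaseConfiguration(), tableName);
        table.setAutoFlush(false);                 // buffer client-side Puts
        table.setWriteBufferSize(12 * 1024 * 1024); // e.g. 12 MB write buffer
    }

    // Accumulate locally; no HBase call here.
    public void add(String row, long delta) {
        Long cur = counts.get(row);
        counts.put(row, cur == null ? delta : cur + delta);
    }

    // Flush once, at the end of the reduce task.
    public void close() throws IOException {
        for (Map.Entry<String, Long> e : counts.entrySet()) {
            table.incrementColumnValue(Bytes.toBytes(e.getKey()),
                    Bytes.toBytes("metric"), Bytes.toBytes("value"),
                    e.getValue());
        }
        table.flushCommits();
    }
}
```

With 16,985,488 reduce input records but only 4,863 reduce input groups, collapsing increments per distinct row before touching HBase would cut the RPC count by several orders of magnitude. This sketch is untested against a live cluster, so treat it as a starting point rather than a drop-in fix.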