Posted to user@hive.apache.org by S Imig <si...@richrelevance.com> on 2013/08/19 19:02:31 UTC

Number of reducers in a join

Hello, I'm curious why there is only one reducer for the join below, and why I cannot change the number of reducers using mapred.reduce.tasks.  Can someone shed some light?  Thanks!

hive> set mapred.reduce.tasks=5;
hive> create table nAllTran as select nTranDtl.tran_key,  tran_line_qty, tran_line_original_amt, day_skey, tran_total_amt  from nTranDtl join nTranHdr where nTranDtl.tran_key = nTranHdr.tran_key;
Total MapReduce jobs = 1
Stage-1 is selected by condition resolver.
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306301104_66167, Tracking URL = http://sf-namenode-02.richrelevance.com:50030/jobdetails.jsp?jobid=job_201306301104_66167
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_201306301104_66167
Hadoop job information for Stage-1: number of mappers: 33; number of reducers: 1
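A likely cause (my assumption, not confirmed by the output above): Hive treats a JOIN written without an ON clause as a cross product, and cross-product joins are compiled with a single reducer, overriding mapred.reduce.tasks. The query above puts the join condition in the WHERE clause, so Hive may not recognize it as an equi-join at plan time. Moving the condition into an ON clause lets Hive plan a shuffle-based equi-join, whose reducer count can then be tuned. A sketch of the rewritten query, using the same tables and columns:

hive> set mapred.reduce.tasks=5;
hive> create table nAllTran as
    > select nTranDtl.tran_key, tran_line_qty, tran_line_original_amt,
    >        day_skey, tran_total_amt
    > from nTranDtl
    > join nTranHdr on nTranDtl.tran_key = nTranHdr.tran_key;

If the reducer count is still chosen at compile time, hive.exec.reducers.bytes.per.reducer and hive.exec.reducers.max (listed in the job output above) control how Hive estimates it.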



Thanks,
Imig
--
S. Imig | Senior Data Scientist Engineer | richrelevance | m: 425.999.5725