Posted to dev@hbase.apache.org by "Billy Pearson (JIRA)" <ji...@apache.org> on 2009/04/14 03:06:15 UTC

[jira] Commented: (HBASE-1287) Partitioner class not used in TableMapReduceUtil.initTableReduceJob()

    [ https://issues.apache.org/jira/browse/HBASE-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698606#action_12698606 ] 

Billy Pearson commented on HBASE-1287:
--------------------------------------

Yeah, that was my mess-up. I should have used partitioner in the function, but I just hard-coded the HRegionPartitioner one for no reason.

The idea behind setting the number of reducers is to lower it when it is set higher than the number of regions the HRegionPartitioner will handle. If the number of reduce tasks is set lower than the number of regions, the partitioner just falls back to behaving like the default HashPartitioner. If it is set higher than the number of regions, the job will still work, but the extra reducers are a waste since they will have no work to do (e.g. a 10-reducer job against a 5-region table launches five idle reducers).

{code}
if (partitioner != null) {
  job.setPartitionerClass(HRegionPartitioner.class);
  HTable outputTable = new HTable(new HBaseConfiguration(job), table);
  int regions = outputTable.getRegionsInfo().size();
  if (job.getNumReduceTasks() > regions) {
    job.setNumReduceTasks(outputTable.getRegionsInfo().size());
  }
}
{code}

It should be something like this:

{code}
if (partitioner == HRegionPartitioner.class) {
  job.setPartitionerClass(HRegionPartitioner.class);
  HTable outputTable = new HTable(new HBaseConfiguration(job), table);
  int regions = outputTable.getRegionsInfo().size();
  if (job.getNumReduceTasks() > regions) {
    // Avoid launching reducers that no region will ever be routed to.
    job.setNumReduceTasks(regions);
  }
} else if (partitioner != null) {
  // Use whatever partitioner the caller handed in.
  job.setPartitionerClass(partitioner);
}
{code}
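
For context, a call site against this (old mapred-based) API might then look roughly like the sketch below. The table name is a placeholder and the reducer is the stock IdentityTableReduce, so treat it as illustration rather than anything from the patch.

{code}
// Sketch of a call site (illustration only; "mytable" is a placeholder).
JobConf job = new JobConf(new HBaseConfiguration());

// Ask for more reducers than the table has regions; with the check above,
// initTableReduceJob() trims the count down to the number of regions when
// the HRegionPartitioner is chosen, so no empty reducers are launched.
job.setNumReduceTasks(32);

TableMapReduceUtil.initTableReduceJob("mytable", IdentityTableReduce.class,
    job, HRegionPartitioner.class);
{code}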



> Partitioner class not used in TableMapReduceUtil.initTableReduceJob()
> ---------------------------------------------------------------------
>
>                 Key: HBASE-1287
>                 URL: https://issues.apache.org/jira/browse/HBASE-1287
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Lars George
>            Assignee: Lars George
>         Attachments: 1287-2.patch, 1287.patch
>
>
> Upon checking the available utility methods in TableMapReduceUtil I came across this code
> {code}
>   public static void initTableReduceJob(String table,
>     Class<? extends TableReduce> reducer, JobConf job, Class partitioner)
>   throws IOException {
>     job.setOutputFormat(TableOutputFormat.class);
>     job.setReducerClass(reducer);
>     job.set(TableOutputFormat.OUTPUT_TABLE, table);
>     job.setOutputKeyClass(ImmutableBytesWritable.class);
>     job.setOutputValueClass(BatchUpdate.class);
>     if (partitioner != null) {
>       job.setPartitionerClass(HRegionPartitioner.class);
>       HTable outputTable = new HTable(new HBaseConfiguration(job), table);
>       int regions = outputTable.getRegionsInfo().size();
>       if (job.getNumReduceTasks() > regions){
>     	job.setNumReduceTasks(outputTable.getRegionsInfo().size());
>       }
>     }
>   }
> {code}
> It seems, though, that it should be
> {code}
>     if (partitioner != null) {
>       job.setPartitionerClass(partitioner);
> {code}
> and the provided HRegionPartitioner can be handed in to that call or a custom one can be provided.
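
Putting the two suggestions in this thread together, the corrected body of initTableReduceJob() might look roughly like the following. This is only an untested sketch; the attached patches are the authoritative fix.

{code}
public static void initTableReduceJob(String table,
    Class<? extends TableReduce> reducer, JobConf job, Class partitioner)
throws IOException {
  job.setOutputFormat(TableOutputFormat.class);
  job.setReducerClass(reducer);
  job.set(TableOutputFormat.OUTPUT_TABLE, table);
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(BatchUpdate.class);
  if (partitioner != null) {
    // Use whatever partitioner the caller handed in, not a hard-coded one.
    job.setPartitionerClass(partitioner);
    if (partitioner == HRegionPartitioner.class) {
      // Only the region-aware partitioner needs the reduce count capped
      // at the number of regions in the output table.
      HTable outputTable = new HTable(new HBaseConfiguration(job), table);
      int regions = outputTable.getRegionsInfo().size();
      if (job.getNumReduceTasks() > regions) {
        job.setNumReduceTasks(regions);
      }
    }
  }
}
{code}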

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.