Posted to user@hbase.apache.org by Saptarshi Guha <sa...@gmail.com> on 2012/11/13 17:53:54 UTC

configureIncrementalLoad: 1 reduce pending, never running

Hello,

I am using

HBase: 0.90.6-cdh3u4
Hadoop: 0.20.2-cdh3u4

I'm using configureIncrementalLoad to bulk load a small 1000-record
data set (it will run on larger datasets later).

The maps run to completion, but the single reducer is stuck in Pending.
There is nothing wrong with the cluster, since other jobs complete.

What could I be doing wrong?
Here is a fragment of the code:

    public int run(String[] args) throws Exception {
        // args: jobname, tablename, input-source, output-path
        Configuration conf = new Configuration();
        // Load hbase-site.xml / hbase-default.xml onto the configuration
        HBaseConfiguration.addHbaseResources(conf);

        Job job = new Job(conf, args[0]);
        job.setJarByClass(IndexMapper.class);
        job.setMapperClass(IndexMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);

        job.setInputFormatClass(IndexerInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[2]));
        FileOutputFormat.setOutputPath(job, new Path(args[3]));

        // Pass the configured conf; the no-conf HTable constructor
        // would build its own configuration and ignore this one
        HTable hTable = new HTable(conf, args[1]);

        // Auto-configure partitioner and reducer for the bulk load
        job.setWorkingDirectory(new Path("/user/sguha/tmp/"));
        HFileOutputFormat.configureIncrementalLoad(job, hTable);

        // Tool convention: return 0 on success, non-zero on failure
        return job.waitForCompletion(true) ? 0 : 1;

        // Load generated HFiles into the table (not wired up yet):
        // LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        // loader.doBulkLoad(new Path(args[3]), hTable);
    }
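
For context, configureIncrementalLoad sets the job's reduce count to the table's region count and installs a total-order partitioner over the region start keys, so each reducer produces the HFiles for one region; a fresh single-region table therefore yields exactly one reducer. A rough, self-contained sketch of that bucketing idea (the start keys below are made up for illustration; the real code uses HBase's Bytes comparator, not this hand-rolled one):

```java
import java.util.Arrays;

public class RegionBucketSketch {
    // Hypothetical region start keys. A freshly created table has one
    // region whose start key is the empty byte array; here we pretend
    // the table was pre-split at "m", giving two regions.
    static final byte[][] START_KEYS = { "".getBytes(), "m".getBytes() };

    // A row belongs to the last region whose start key is <= the row key,
    // mimicking how the total-order partitioner routes map output.
    static int regionFor(byte[] row) {
        int idx = 0;
        for (int i = 0; i < START_KEYS.length; i++) {
            if (compare(START_KEYS[i], row) <= 0) idx = i;
        }
        return idx;
    }

    // Lexicographic comparison of unsigned bytes (stand-in for
    // org.apache.hadoop.hbase.util.Bytes.compareTo).
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        System.out.println(regionFor("apple".getBytes())); // region 0
        System.out.println(regionFor("zebra".getBytes())); // region 1
    }
}
```

So with a one-region table, a single pending reducer is expected; the question is why it never gets scheduled out of Pending.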