Posted to dev@hbase.apache.org by Ted Yu <yu...@gmail.com> on 2010/04/10 01:22:55 UTC

Re: org.apache.hadoop.hbase.mapreduce.Export fails with an NPE

Seeking clarification from hbase-dev.

Export class calls:
    TableMapReduceUtil.initTableMapperJob(tableName, s, Exporter.class,
        null, null, job);
which in turn calls:
    job.getConfiguration().set(TableInputFormat.INPUT_TABLE, table);

But TableInputFormat.setConf() is never called - thus this line (TableInputFormat, line 87) never executes:
      setHTable(new HTable(new HBaseConfiguration(conf), tableName));
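
To make the gap concrete, here is a minimal sketch of both halves as I read the 0.20.3 source (paraphrased; the variable names and job name are placeholders, not the exact Export code):

    // Client side: Export.createSubmittableJob() only records the table
    // name and the serialized scan in the job configuration.
    Job job = new Job(conf, "export_" + tableName);
    job.setJarByClass(Exporter.class);
    Scan s = new Scan();
    TableMapReduceUtil.initTableMapperJob(tableName, s, Exporter.class,
        null, null, job);

    // Task side: the HTable is only instantiated when the framework calls
    // TableInputFormat.setConf() - TableInputFormat implements Configurable,
    // so ReflectionUtils.newInstance() is what should trigger it:
    //     setHTable(new HTable(new HBaseConfiguration(conf), tableName));
    // If that hook never fires, htable stays null and
    // TableRecordReader.restart() throws the NPE George reported.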

Please confirm this is a bug in 0.20.3.

The JIRA site is down, so I cannot file an issue for now.
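
In the meantime, a possible defensive workaround for anyone blocked on this is to subclass TableInputFormat and initialize the table lazily when setConf() has not run. This is an untested sketch against the 0.20.3 API: SafeTableInputFormat is a hypothetical name, and it only guards the htable field (a null scan would need the same treatment):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    public class SafeTableInputFormat extends TableInputFormat {
      @Override
      public RecordReader<ImmutableBytesWritable, Result> createRecordReader(
          InputSplit split, TaskAttemptContext context) throws IOException {
        if (getHTable() == null) {
          // setConf() apparently never ran; build the HTable from the
          // task's configuration, mirroring what setConf() would do.
          Configuration conf = context.getConfiguration();
          setHTable(new HTable(new HBaseConfiguration(conf),
              conf.get(INPUT_TABLE)));
        }
        return super.createRecordReader(split, context);
      }
    }

The job would then need job.setInputFormatClass(SafeTableInputFormat.class) after initTableMapperJob(), since that helper pins TableInputFormat.class.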

On Fri, Apr 9, 2010 at 2:56 PM, George Stathis <gs...@gmail.com> wrote:

> No dice. Classpath is now set. Same error. Meanwhile, I'm running "$ hadoop
> org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1" just fine,
> so MapRed is working at least.
>
> Still looking for suggestions then I guess.
>
> -GS
>
> On Fri, Apr 9, 2010 at 5:31 PM, George Stathis <gs...@gmail.com> wrote:
>
> > RTFMing
> > http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapreduce/package-summary.html
> > right now... Hadoop classpath not being set properly could be the issue...
> >
> >
> > On Fri, Apr 9, 2010 at 5:26 PM, George Stathis <gs...@gmail.com>
> wrote:
> >
> >> Hi folks,
> >>
> >> I hope this is just a newbie problem.
> >>
> >> Context:
> >> - Running 0.20.3 tag locally in pseudo cluster mode
> >> - $HBASE_HOME is in env and $PATH
> >> - Running org.apache.hadoop.hbase.mapreduce.Export in the shell such as:
> >> $ hbase org.apache.hadoop.hbase.mapreduce.Export channels /bkps/channels/01
> >>
> >> Symptom:
> >> - Getting an NPE at
> >> org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:110):
> >>
> >> [...]
> >> 110      this.scanner = this.htable.getScanner(newScan);
> >> [...]
> >>
> >> Full output is below. Not sure why htable is still null at that point.
> >> User error?
> >>
> >> Any help is appreciated.
> >>
> >> -GS
> >>
> >> Full output:
> >>
> >> $ hbase org.apache.hadoop.hbase.mapreduce.Export channels /bkps/channels/01
> >> 2010-04-09 17:13:57.407::INFO:  Logging to STDERR via org.mortbay.log.StdErrLog
> >> 2010-04-09 17:13:57.408::INFO:  verisons=1, starttime=0, endtime=9223372036854775807
> >> 10/04/09 17:13:58 DEBUG zookeeper.ZooKeeperWrapper: Read ZNode /hbase/root-region-server got 192.168.1.16:52159
> >> 10/04/09 17:13:58 DEBUG client.HConnectionManager$TableServers: Found ROOT at 192.168.1.16:52159
> >> 10/04/09 17:13:58 DEBUG client.HConnectionManager$TableServers: Cached location for .META.,,1 is 192.168.1.16:52159
> >> 10/04/09 17:13:58 DEBUG client.HConnectionManager$TableServers: Cached location for channels,,1270753106916 is 192.168.1.16:52159
> >> 10/04/09 17:13:58 DEBUG client.HConnectionManager$TableServers: Cache hit for row <> in tableName channels: location server 192.168.1.16:52159, location region name channels,,1270753106916
> >> 10/04/09 17:13:58 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> 192.168.1.16:,
> >> 10/04/09 17:13:58 INFO mapred.JobClient: Running job: job_201004091642_0009
> >> 10/04/09 17:13:59 INFO mapred.JobClient:  map 0% reduce 0%
> >> 10/04/09 17:14:09 INFO mapred.JobClient: Task Id : attempt_201004091642_0009_m_000000_0, Status : FAILED
> >> java.lang.NullPointerException
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:110)
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.init(TableInputFormatBase.java:119)
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:262)
> >>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:588)
> >>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> >>   at org.apache.hadoop.mapred.Child.main(Child.java:170)
> >>
> >> 10/04/09 17:14:15 INFO mapred.JobClient: Task Id : attempt_201004091642_0009_m_000000_1, Status : FAILED
> >> java.lang.NullPointerException
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:110)
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.init(TableInputFormatBase.java:119)
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:262)
> >>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:588)
> >>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> >>   at org.apache.hadoop.mapred.Child.main(Child.java:170)
> >>
> >> 10/04/09 17:14:21 INFO mapred.JobClient: Task Id : attempt_201004091642_0009_m_000000_2, Status : FAILED
> >> java.lang.NullPointerException
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:110)
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.init(TableInputFormatBase.java:119)
> >>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:262)
> >>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:588)
> >>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> >>   at org.apache.hadoop.mapred.Child.main(Child.java:170)
> >>
> >> 10/04/09 17:14:30 INFO mapred.JobClient: Job complete: job_201004091642_0009
> >> 10/04/09 17:14:30 INFO mapred.JobClient: Counters: 3
> >> 10/04/09 17:14:30 INFO mapred.JobClient:   Job Counters
> >> 10/04/09 17:14:30 INFO mapred.JobClient:     Launched map tasks=4
> >> 10/04/09 17:14:30 INFO mapred.JobClient:     Data-local map tasks=4
> >> 10/04/09 17:14:30 INFO mapred.JobClient:     Failed map tasks=1
> >> 10/04/09 17:14:30 DEBUG zookeeper.ZooKeeperWrapper: Closed connection with ZooKeeper
> >>
> >>
> >>
> >>
> >
>