Posted to common-user@hadoop.apache.org by Mike Forrest <mf...@trailfire.com> on 2008/01/23 23:54:09 UTC

TableMap fails - "Expecting at least one region"

Hello,
After updating to the latest trunk version, I find that jobs that use a 
mapper extending from TableMap now fail with:

Exception in thread "main" java.io.IOException: Expecting at least one region
        at org.apache.hadoop.hbase.mapred.TableInputFormat.getSplits(TableInputFormat.java:168)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:544)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:805)
...

I have verified that the input table does exist and that I can get data 
from it using the HBase shell.  It probably goes without saying, but the 
same code ran fine before I updated and rebuilt Hadoop/HBase (it had been 
a week or two since my last update).  Any ideas?

Thanks,
Mike

Re: TableMap fails - "Expecting at least one region"

Posted by Mike Forrest <mf...@trailfire.com>.
Oops, I didn't realize there was already a patch available, in spite of 
your mentioning it.  Sorry for any confusion.

Mike Forrest wrote:
> Hi,
> Thanks for the tip, St.Ack.  The problem ended up being in the 
> getStartKeys() method of HTable, specifically the labeled break 
> statement on line 232.  The code gets a list of unique table names, 
> then iterates through them to find the desired table.  But if the 
> first table in the list isn't the desired table, the break statement 
> takes execution back to line 219 (SCANNER_LOOP:), and this time the 
> call to server.next(scannerId) returns an empty list, so there are no 
> map entries to iterate through.  Removing the label from the break on 
> line 232 lets the for loop iterate through all the values and add keys 
> for the desired table.  I'm not sure of the full ramifications of this 
> change, but it fixes my issue.
> Mike
>
> stack wrote:
>> Here is the code:
>>
>>    Text[] startKeys = m_table.getStartKeys();
>>    if(startKeys == null || startKeys.length == 0) {
>>      throw new IOException("Expecting at least one region");
>>    }
>>
>> HADOOP-2631 fixed getStartKeys for the case where there is more than 
>> one table, but otherwise something's broken if it's not returning at 
>> least one region for your table.  Maybe you can figure it out?
>>
>> Thanks Mike,
>> St.Ack
>>
>>
>>
>> Mike Forrest wrote:
>>> Hello,
>>> After updating to the latest trunk version, I find that jobs that 
>>> use a mapper extending from TableMap now fail with:
>>>
>>> Exception in thread "main" java.io.IOException: Expecting at least one region
>>>        at org.apache.hadoop.hbase.mapred.TableInputFormat.getSplits(TableInputFormat.java:168)
>>>        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:544)
>>>        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:805)
>>> ...
>>>
>>> I have verified that the input table does exist and that I can get 
>>> data from it using the HBase shell.  It probably goes without saying, 
>>> but the same code ran fine before I updated and rebuilt Hadoop/HBase 
>>> (it had been a week or two since my last update).  Any ideas?
>>>
>>> Thanks,
>>> Mike
>>
>


Re: TableMap fails - "Expecting at least one region"

Posted by Mike Forrest <mf...@trailfire.com>.
Hi,
Thanks for the tip, St.Ack.  The problem ended up being in the 
getStartKeys() method of HTable, specifically the labeled break statement 
on line 232.  The code gets a list of unique table names, then iterates 
through them to find the desired table.  But if the first table in the 
list isn't the desired table, the break statement takes execution back to 
line 219 (SCANNER_LOOP:), and this time the call to 
server.next(scannerId) returns an empty list, so there are no map entries 
to iterate through.  Removing the label from the break on line 232 lets 
the for loop iterate through all the values and add keys for the desired 
table.  I'm not sure of the full ramifications of this change, but it 
fixes my issue.
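
To make the control flow concrete, here is a stripped-down sketch (made-up 
data and names, not the actual HTable code) of how a labeled break differs 
from a plain one inside a scanner-style loop:

    import java.util.Arrays;
    import java.util.List;

    public class LabeledBreakSketch {
      public static void main(String[] args) {
        // Two pretend batches of META rows; we want the keys for "mytable".
        List<List<String>> batches = Arrays.asList(
            Arrays.asList("othertable,,1", "mytable,,1"),
            Arrays.asList("mytable,abc,2"));
        SCANNER_LOOP:
        for (List<String> batch : batches) {  // stands in for server.next(scannerId)
          for (String row : batch) {
            if (!row.startsWith("mytable,")) {
              // The labeled break abandons the entire scan right here, so
              // the "mytable" rows that follow are never examined.  A plain
              // break would only leave this inner for loop and let the
              // outer loop move on to the next batch.
              break SCANNER_LOOP;
            }
            System.out.println("would add start key from: " + row);
          }
        }
      }
    }

With the label, nothing gets printed because the scan bails on the first 
row that belongs to another table; without it, only the rest of that one 
batch would be skipped.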
Mike

stack wrote:
> Here is the code:
>
>    Text[] startKeys = m_table.getStartKeys();
>    if(startKeys == null || startKeys.length == 0) {
>      throw new IOException("Expecting at least one region");
>    }
>
> HADOOP-2631 fixed getStartKeys for the case where there is more than 
> one table, but otherwise something's broken if it's not returning at 
> least one region for your table.  Maybe you can figure it out?
>
> Thanks Mike,
> St.Ack
>
>
>
> Mike Forrest wrote:
>> Hello,
>> After updating to the latest trunk version, I find that jobs that use 
>> a mapper extending from TableMap now fail with:
>>
>> Exception in thread "main" java.io.IOException: Expecting at least one region
>>        at org.apache.hadoop.hbase.mapred.TableInputFormat.getSplits(TableInputFormat.java:168)
>>        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:544)
>>        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:805)
>> ...
>>
>> I have verified that the input table does exist and that I can get 
>> data from it using the HBase shell.  It probably goes without saying, 
>> but the same code ran fine before I updated and rebuilt Hadoop/HBase 
>> (it had been a week or two since my last update).  Any ideas?
>>
>> Thanks,
>> Mike
>


Re: TableMap fails - "Expecting at least one region"

Posted by stack <st...@duboce.net>.
Here is the code:

    Text[] startKeys = m_table.getStartKeys();
    if(startKeys == null || startKeys.length == 0) {
      throw new IOException("Expecting at least one region");
    }

HADOOP-2631 fixed getStartKeys for the case where there is more than one 
table, but otherwise something's broken if it's not returning at least 
one region for your table.  Maybe you can figure it out?
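
For context, getSplits basically turns those start keys into one map 
split per region, roughly like the sketch below (simplified, and the 
names are approximate -- check the actual TableInputFormat source):

    // One split per region: each split is bounded by consecutive start
    // keys, with an empty end key meaning "to the end of the table".
    InputSplit[] splits = new InputSplit[startKeys.length];
    for (int i = 0; i < startKeys.length; i++) {
      Text endKey = (i + 1 < startKeys.length) ? startKeys[i + 1] : new Text();
      splits[i] = new TableSplit(m_tableName, startKeys[i], endKey);
    }
    return splits;

So if getStartKeys comes back empty there would be zero splits and the 
job would have nothing to map over, hence the check above.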

Thanks Mike,
St.Ack



Mike Forrest wrote:
> Hello,
> After updating to the latest trunk version, I find that jobs that use 
> a mapper extending from TableMap now fail with:
>
> Exception in thread "main" java.io.IOException: Expecting at least one region
>        at org.apache.hadoop.hbase.mapred.TableInputFormat.getSplits(TableInputFormat.java:168)
>        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:544)
>        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:805)
> ...
>
> I have verified that the input table does exist and that I can get data 
> from it using the HBase shell.  It probably goes without saying, but the 
> same code ran fine before I updated and rebuilt Hadoop/HBase (it had 
> been a week or two since my last update).  Any ideas?
>
> Thanks,
> Mike