Posted to user@hbase.apache.org by Hari Sreekumar <hs...@clickable.com> on 2011/02/22 15:27:53 UTC

Trying to contact region "Some region"

What does this exception signify:

org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server, retryOnlyOne=true, index=0, islastrow=false, tries=9, numtries=10, i=0, listsize=405, region=NwKeywordTest,20927_57901_277247_8728141,1298383184948 for region KeywordTest,20927_57901_277247_8728141,1298383184948, row '20927_57902_277417_8744379', but failed after 10 attempts.
Exceptions:

        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1157)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1238)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:666)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:510)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:55)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:498)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
        at com.clickable.dataengine.hbase.upload.BulkUploadtoHBase$BulkUploadMapper.map(Unknown Source)
        at com.clickable.dataengine.hbase.upload.BulkUploadtoHBase$BulkUploadMapper.map(Unknown Source)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)

How can I avoid it?

Thanks,
Hari
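
For context on where this is thrown: the user frames in the trace are the
standard TableOutputFormat write path. The mapper calls context.write(),
TableRecordWriter turns that into HTable.put(), and the client ships the
buffered puts via flushCommits(). A mapper of roughly this shape would
produce that trace. This is only a sketch: the input format, column family,
and qualifier below are made up, not taken from the original job.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Sketch of a bulk-upload mapper that writes through TableOutputFormat.
    public class BulkUploadMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // Hypothetical input line: "rowkey<TAB>value".
        String[] fields = value.toString().split("\t", 2);
        Put put = new Put(Bytes.toBytes(fields[0]));
        // "cf" and "v" are made-up family/qualifier names.
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("v"), Bytes.toBytes(fields[1]));
        // This is the top user frame in the stack trace; it lands in
        // TableOutputFormat$TableRecordWriter.write() -> HTable.put().
        context.write(new ImmutableBytesWritable(put.getRow()), put);
      }
    }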

Re: Trying to contact region "Some region"

Posted by Ryan Rawson <ry...@gmail.com>.
We fixed a lot of the exception handling in 0.90.  The exception text
is much better. Check it out!

-ryan

Re: Trying to contact region "Some region"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
It could be due to slow splits, heavy GC, etc. Make sure your machines
don't swap at all, that HBase has plenty of memory, and that you're not
trying to use more CPUs than your machines actually have (like running
4 map slots on a 4-core machine that's also running HBase), etc.

Also upgrading to 0.90.1 will help.

J-D
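
For reference, the retry behavior behind "failed after 10 attempts" comes
from two client-side settings that can be raised in the job configuration
while the root cause is being chased. A sketch against the 0.90-style
client API; the property names are the stock ones, but the values below
are only illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    // Retries before RetriesExhaustedException is thrown; the default of
    // 10 matches the numtries=10 in the message above.
    conf.setInt("hbase.client.retries.number", 20);
    // Pause between retries, in milliseconds (default 1000). A longer
    // pause gives splits and GC pauses more time to clear.
    conf.setLong("hbase.client.pause", 2000);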

Re: Trying to contact region "Some region"

Posted by Hari Sreekumar <hs...@clickable.com>.
Thanks Ted. Is there any way I can fix this in 0.20.6? How can a single
Put refer to two rows? Is there any coding practice with which I can
avoid this? The exception is not fatal in the sense that the job still
completes and I only have a few failed tasks, but it wastes time.

Hari

Re: Trying to contact region "Some region"

Posted by Ted Yu <yu...@gmail.com>.
The put() call handles more than one row, destined for more than one
region server. HConnectionManager wasn't able to find the region server
that serves the row, hence the error.

Please upgrade to 0.90.1.
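
To make the buffering concrete: with auto-flush off, HTable queues Puts
client-side, and a flush ships the whole buffer, so one flush can touch
many regions and fail on any of them. A sketch of keeping that window
small, written against the 0.90-style client API; the table name is the
one from the log, while the family, qualifier, and buffer size are made
up:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    HTable table = new HTable(HBaseConfiguration.create(), "KeywordTest");
    table.setAutoFlush(false);              // buffer Puts instead of one RPC each
    table.setWriteBufferSize(1024 * 1024);  // 1 MB instead of the 2 MB default,
                                            // so each flush spans fewer regions
    List<Put> puts = new ArrayList<Put>();
    Put put = new Put(Bytes.toBytes("20927_57902_277417_8744379")); // row from the log
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("v"), Bytes.toBytes("x")); // made-up names
    puts.add(put);

    table.put(puts);       // queued locally; flushed when the buffer fills
    table.flushCommits();  // push whatever is still buffered
    table.close();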
