Posted to user@hbase.apache.org by big data <bi...@outlook.com> on 2016/10/19 07:50:15 UTC

java.lang.OutOfMemoryError when counting an HBase table

Dear all,

I have an HBase table in which one row has a huge KeyValue, about 100 MB
in size.

When I run count on the table in the HBase shell, the shell crashes back
to bash and displays an error like this:

hbase(main):005:0> count 'table', CACHE=>10000
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid17735.hprof ...
Unable to create java_pid17735.hprof: Permission denied
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 17735"...
Killed

Meanwhile, my Java client executes a get operation on this table and
hangs for a long time.

Which parameters can I adjust to support such a huge KeyValue?




Re: java.lang.OutOfMemoryError when counting an HBase table

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
IIRC this is fixed in recent versions of HBase. Which version are you
using? If you are hitting this issue, I don't think you can fix it with
any setting :( You might have to upgrade your version of HBase...

2016-10-19 8:16 GMT-04:00 吴国泉wgq <wg...@qunar.com>:

> hi big data:
>
>         You can try scan.setBatch() or a filter to limit the
> number of columns returned.
>
>         This is because there is a very large row in your table; when you
> try to retrieve it, an OOM will happen.
>
>         As far as I can see, there is no other way around this problem.
> By default, HBase retrieves the entire row on every read operation if you
> don't limit the columns.
>
>         Don't insert very large rows. You can delete this one, but don't
> forget to trigger a major compaction after you delete the row.

Re: java.lang.OutOfMemoryError when counting an HBase table

Posted by Ted Yu <yu...@gmail.com>.
Storing the value on HDFS and keeping a reference to the HDFS location in
the KeyValue is an option.
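
A minimal sketch of that pattern, assuming the HBase 1.x Java client API;
the table name 'table', column family 'cf', qualifier 'blobref', row key
'row1', and the HDFS path are all hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BlobByReference {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1. Write the ~100 MB payload to HDFS instead of into the cell.
        Path blobPath = new Path("/blobs/row1.bin");  // hypothetical location
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(blobPath, true)) {
            out.write(loadPayload());
        }

        // 2. Store only the short HDFS path string in HBase.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("table"))) {
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("blobref"),
                    Bytes.toBytes(blobPath.toString()));
            table.put(put);
        }
    }

    // Placeholder for however the application produces the blob.
    private static byte[] loadPayload() {
        return new byte[0];
    }
}

Readers do the reverse: get the small reference cell, then stream the blob
from HDFS with FileSystem.open(). (HBase 2.0+ also ships a MOB feature for
medium-sized objects, but it targets values up to roughly 10 MB, so a
100 MB blob is still better kept on HDFS.)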

> On Oct 19, 2016, at 6:49 PM, big data <bi...@outlook.com> wrote:
> 
> Actually, there is only one huge value in the HBase cell, and it is
> larger than 100 MB; maybe it's not a good idea to store such a huge value
> in HBase.
> 
> Any suggestions for storing such huge objects?

Re: java.lang.OutOfMemoryError when counting an HBase table

Posted by big data <bi...@outlook.com>.
Actually, there is only one huge value in the HBase cell, and it is larger
than 100 MB; maybe it's not a good idea to store such a huge value in HBase.

Any suggestions for storing such huge objects?


On 16/10/19 8:16 PM, 吴国泉wgq wrote:
> hi big data:
>
>         You can try scan.setBatch() or a filter to limit the number of columns returned.
>
>         This is because there is a very large row in your table; when you try to retrieve it, an OOM will happen.
>
>         As far as I can see, there is no other way around this problem. By default, HBase retrieves the entire row on every read operation if you don't limit the columns.
>
>         Don't insert very large rows. You can delete this one, but don't forget to trigger a major compaction after you delete the row.


Re: java.lang.OutOfMemoryError when counting an HBase table

Posted by 吴国泉wgq <wg...@qunar.com>.
hi big data:

        You can try scan.setBatch() or a filter to limit the number of columns returned; a sketch follows below.

        This is because there is a very large row in your table; when you try to retrieve it, an OOM will happen.

        As far as I can see, there is no other way around this problem. By default, HBase retrieves the entire row on every read operation if you don't limit the columns.

        Don't insert very large rows. You can delete this one, but don't forget to trigger a major compaction after you delete the row.
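
A minimal sketch of both suggestions, assuming the HBase 1.x Java client
API; the table name 'table' and row key 'huge-row' are hypothetical:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class LimitedScanAndCleanup {

    public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("table");  // hypothetical table
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(tn);
             Admin admin = conn.getAdmin()) {

            // Chunk wide rows: return at most 100 cells per Result so a
            // wide row arrives in pieces rather than one giant RPC.
            Scan scan = new Scan();
            scan.setBatch(100);
            scan.setCaching(1);  // fetch Results one at a time, no big buffer
            long cells = 0;
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    cells += r.rawCells().length;
                }
            }
            System.out.println("cells seen: " + cells);

            // Drop the oversized row, then major-compact so the space is
            // actually reclaimed (deletes are logical until compaction).
            table.delete(new Delete(Bytes.toBytes("huge-row")));
            admin.majorCompact(tn);
        }
    }
}

Note that setBatch() splits a row by cell count, not by bytes: a single
100 MB cell still travels in one piece, which is why the other replies
point at the client heap size and at keeping such blobs out of HBase
entirely.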



吴国泉   wgq.wu
Post: DBA, HBase
Email: wgq.wu@qunar.com
Tel: 13051697997
Adr: 17th Floor, China Electronics Building



On Oct 19, 2016, at 7:43 PM, big data <bi...@outlook.com> wrote:

I've adjusted the JVM -Xmx in hbase-env.sh; now count runs fine in the
HBase shell.

But the Java client still crashes with:

Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol
message was too large.  May be malicious.  Use
CodedInputStream.setSizeLimit() to increase the size limit.

I've browsed Apache's JIRA, but I still don't know how to call
CodedInputStream.setSizeLimit() on the client side.




Re: java.lang.OutOfMemoryError when counting an HBase table

Posted by big data <bi...@outlook.com>.
I've adjusted the JVM -Xmx in hbase-env.sh; now count runs fine in the
HBase shell.

But the Java client still crashes with:

Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol
message was too large.  May be malicious.  Use
CodedInputStream.setSizeLimit() to increase the size limit.

I've browsed Apache's JIRA, but I still don't know how to call
CodedInputStream.setSizeLimit() on the client side.
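
For what it's worth, setSizeLimit() is plain protobuf-java API, but HBase
constructs the CodedInputStream deep inside its RPC layer, so application
code normally has no way to reach it; this is consistent with the advice
elsewhere in the thread to upgrade HBase. For reference only, the raw call
looks like this sketch, where SomeMessage is a placeholder for any
protobuf-generated class:

import com.google.protobuf.CodedInputStream;

import java.io.IOException;
import java.io.InputStream;

public class LargeMessageParser {

    // Hypothetical helper: parse a message that may exceed protobuf's
    // default 64 MB size limit. SomeMessage stands in for any generated
    // protobuf class.
    static SomeMessage parseLarge(InputStream in) throws IOException {
        CodedInputStream cis = CodedInputStream.newInstance(in);
        cis.setSizeLimit(256 * 1024 * 1024);  // raise the limit to 256 MB
        return SomeMessage.parseFrom(cis);
    }
}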


On 16/10/19 6:38 PM, Jean-Marc Spaggiari wrote:
> Interesting. Can you bump the client heap size? How much do you have for
> the client?
>
> JMS

Re: java.lang.OutOfMemoryError when counting an HBase table

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Interesting. Can you bump the client heap size? How much do you have for
the client?

JMS

2016-10-19 3:50 GMT-04:00 big data <bi...@outlook.com>:

> Dear all,
>
> I have an HBase table in which one row has a huge KeyValue, about 100 MB
> in size.
>
> When I run count on the table in the HBase shell, the shell crashes back
> to bash and displays an error like this:
>
> hbase(main):005:0> count 'table', CACHE=>10000
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to java_pid17735.hprof ...
> Unable to create java_pid17735.hprof: Permission denied
> #
> # java.lang.OutOfMemoryError: Java heap space
> # -XX:OnOutOfMemoryError="kill -9 %p"
> #   Executing /bin/sh -c "kill -9 17735"...
> Killed
>
> Meanwhile, my Java client executes a get operation on this table and
> hangs for a long time.
>
> Which parameters can I adjust to support such a huge KeyValue?