Posted to dev@hbase.apache.org by N Dm <ni...@gmail.com> on 2013/04/26 22:07:13 UTC

[Question] better way to deal with Out of Memory on Region Server?

hi, folks,

I am pretty sure this question has been discussed a few times before and
addressed to some degree. I am wondering whether there is an active JIRA
or best practice to improve this? I would appreciate a few pointers.

Currently, if a Region Server runs out of memory, checkOOM() is called, and
the Region Server is killed to protect the Master.

For example: assuming each row of 'usertable' is ~1 KB and HBASE_HEAPSIZE is
1 GB (the default),
@hbase shell> count 'usertable', INTERVAL => 2000000, CACHE => 1000000
the count will bring down one of the Region Servers.

The above problem can be fixed either by using a smaller CACHE or by
increasing HEAPSIZE. A 1 GB heap is small and a 1M-row cache is rather large
anyway, so this particular example doesn't concern me too much, and the
Region Server can be restarted within a minute.
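
For what it's worth, here is a minimal Java client sketch of the same fix
(assuming the 0.94-era client API and the 'usertable' above; adjust to your
version). Capping the scanner caching keeps each scan RPC to roughly
caching * row-size bytes of Region Server heap:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class BoundedCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "usertable");
    Scan scan = new Scan();
    scan.setCaching(1000);       // ~1 KB rows => roughly 1 MB per response, not ~1 GB
    scan.setCacheBlocks(false);  // a full-table count shouldn't churn the block cache
    ResultScanner scanner = table.getScanner(scan);
    long count = 0;
    for (Result r : scanner) {
      count++;
    }
    scanner.close();
    table.close();
    System.out.println("rows: " + count);
  }
}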

What worries me is this scenario:
1) a production system with 20 Region Servers, each with a reasonable
heap size (8~16 GB), where increasing the heap dynamically is not a good
option without adding physical memory.
2) a few hundred client threads, each running a reasonable application,
but together requesting a large amount of memory. At some point the
HEAPSIZE is reached on one of the Region Servers and brings it down. That
alone is not too bad, since we still have 19 up. However, the clients can
(and most likely will) resubmit their jobs, just as I can resubmit the
count command with two keystrokes, which brings down the next Region Server.
In this case, I can't stop the client requests and can't add new hardware
immediately (at least not within minutes). The only thing I can do is watch
the whole cluster be brought down by the domino effect.

With that, I am wondering:
1) Is there an active item to prevent the first Region Server from going
down? For example, treating 90% of HEAPSIZE as a threshold?
2) Or is there a way to prevent clients from resubmitting jobs while the
system is unhealthy? For example, queueing the jobs if a few Region Servers
are down?
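
(To illustrate what I mean in 2), a purely hypothetical client-side guard
might look like the sketch below. Nothing like this exists in HBase as far
as I know; only the 0.94-era admin calls are real API.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ResubmitGuard {
  // Hypothetical helper: only resubmit a job if no more than
  // maxDeadServers Region Servers are currently down.
  public static boolean safeToResubmit(int maxDeadServers) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      ClusterStatus status = admin.getClusterStatus();
      return status.getDeadServers() <= maxDeadServers;
    } finally {
      admin.close();
    }
  }
}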

I was able to find some of the discussions from 2009 and 2011 in the email
archive. Is there anything active or new? I am new to this community and
really appreciate any input.

Thanks

Demai

Re: [Question] better way to deal with Out of Memory on Region Server?

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Inline.

J-D


On Fri, Apr 26, 2013 at 1:07 PM, N Dm <ni...@gmail.com> wrote:

> hi, folks,
>
> I am pretty sure this question has been discussed a few times before and
> addressed to some degree. I am wondering whether there is an active JIRA
> or best practice to improve this? I would appreciate a few pointers.
>
> Currently, if a Region Server runs out of memory, checkOOM() is called, and
> the Region Server is killed to protect the Master.
>

Actually now we kill -9 on OOME.


>
> For example: assuming each row of 'usertable' is ~1 KB and HBASE_HEAPSIZE is
> 1 GB (the default),
> @hbase shell> count 'usertable', INTERVAL => 2000000, CACHE => 1000000
> the count will bring down one of the Region Servers.
>
> The above problem can be fixed either by using a smaller CACHE or by
> increasing HEAPSIZE. A 1 GB heap is small and a 1M-row cache is rather large
> anyway, so this particular example doesn't concern me too much, and the
> Region Server can be restarted within a minute.
>
> What worries me is this scenario:
> 1) a production system with 20 Region Servers, each with a reasonable
> heap size (8~16 GB), where increasing the heap dynamically is not a good
> option without adding physical memory.
> 2) a few hundred client threads, each running a reasonable application,
> but together requesting a large amount of memory. At some point the
> HEAPSIZE is reached on one of the Region Servers and brings it down. That
> alone is not too bad, since we still have 19 up. However, the clients can
> (and most likely will) resubmit their jobs, just as I can resubmit the
> count command with two keystrokes, which brings down the next Region Server.
> In this case, I can't stop the client requests and can't add new hardware
> immediately (at least not within minutes). The only thing I can do is watch
> the whole cluster be brought down by the domino effect.
>
> With that, I am wondering:
> 1) Is there an active item to prevent the first Region Server from going
> down? For example, treating 90% of HEAPSIZE as a threshold?
>

There are a few scattered JIRAs that aim to guard the region server from
bad clients; recent HBase versions now block put requests that are too big.
Regarding the response size, there's hbase.client.scanner.max.result.size
(coming from HBASE-1996) that you can set to limit the size of scan results.
A better implementation was done in HBASE-2214, since we couldn't break RPC
compatibility back when 1996 was done. HBase also limits the amount of total
heap that can be dedicated to the block cache and memstore: their sum cannot
exceed 80%.
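
To make that concrete, here is a rough sketch of where those knobs live
(property names as of the 0.94/0.95 era; the memstore one was renamed later,
so double-check your version). These would normally go in hbase-site.xml,
but setting them on a Configuration shows the names and typical values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class GuardrailConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Cap the bytes a single scan RPC may return (HBASE-1996 / HBASE-2214).
    conf.setLong("hbase.client.scanner.max.result.size", 2L * 1024 * 1024);
    // Block cache and global memstore shares of the heap; HBase caps their
    // combined share at 0.8, as noted above.
    conf.setFloat("hfile.block.cache.size", 0.25f);
    conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
    System.out.println(conf.get("hbase.client.scanner.max.result.size"));
  }
}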

But there's no system-wide safeguard that will block
requests/compactions/etc. if the memory consumption goes over a certain
percentage.


> 2) Or is there a way to prevent clients from resubmitting jobs while the
> system is unhealthy? For example, queueing the jobs if a few Region Servers
> are down?
>

Are we talking about MapReduce jobs here, or "jobs" in the general sense,
meaning any client request? How would that work?


>
> I was able to find some of the discussions from 2009 and 2011 in the email
> archive. Is there anything active or new? I am new to this community and
> really appreciate any input.
>
> Thanks
>
> Demai
>