Posted to issues@hbase.apache.org by "Sean Busbey (JIRA)" <ji...@apache.org> on 2017/01/25 16:37:26 UTC

[jira] [Comment Edited] (HBASE-17531) Memory tuning should account for undefined max heap

    [ https://issues.apache.org/jira/browse/HBASE-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838060#comment-15838060 ] 

Sean Busbey edited comment on HBASE-17531 at 1/25/17 4:37 PM:
--------------------------------------------------------------

The logic in {{CacheConfig}} kills the RegionServer if we're configured to use Bucket Cache as a % of heap and we get an undefined value for Max Heap.
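
For illustration, a minimal sketch of the kind of guard needed when sizing the cache as a fraction of heap. This is a hypothetical helper, not the actual {{CacheConfig}} code; the one API fact it relies on is that {{MemoryUsage.getMax()}} is documented to return -1 when the max is undefined:

{code:java}
import java.lang.management.ManagementFactory;

public class BucketCacheSizing {
  /**
   * Returns the bucket cache size in bytes for the given fraction of the
   * max heap, or -1 if the JVM reports the max heap as undefined.
   */
  static long bucketCacheBytes(float fractionOfHeap) {
    // MemoryUsage.getMax() returns -1 when the max heap is undefined.
    long maxHeap = ManagementFactory.getMemoryMXBean()
        .getHeapMemoryUsage().getMax();
    if (maxHeap < 0) {
      // Refuse to size the cache rather than computing a nonsense
      // (negative or zero) size and aborting the RegionServer.
      return -1;
    }
    return (long) (maxHeap * fractionOfHeap);
  }
}
{code}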


was (Author: busbey):
The logic in {{CacheConfig}} kills the RegionServer if we're configured to use Bucket Cache and we get an undefined value for Max Heap.

> Memory tuning should account for undefined max heap
> ---------------------------------------------------
>
>                 Key: HBASE-17531
>                 URL: https://issues.apache.org/jira/browse/HBASE-17531
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: Sean Busbey
>
> While going through HBASE-17522, I noticed that our calculations for heap usage don't account for the JVM returning -1 to mean "undefined" for the max heap. For example, {{getOnheapGlobalMemstoreSize}} does the calculation as if a normal number were returned, so we end up with -1 or 0, and then the logic in {{RegionServerAccounting.isAboveHighWaterMark()}} will always say "we need to flush".
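> A minimal sketch of the failure mode, using hypothetical variable names rather than the actual accounting code:
> {code:java}
> import java.lang.management.ManagementFactory;
>
> public class UndefinedMaxHeapDemo {
>   public static void main(String[] args) {
>     // getMax() returns -1 when the max heap is undefined.
>     long maxHeap = ManagementFactory.getMemoryMXBean()
>         .getHeapMemoryUsage().getMax();
>     // e.g. hbase.regionserver.global.memstore.size defaults to 0.4
>     float memstoreFraction = 0.4f;
>     // With maxHeap == -1 this comes out as 0 (or -1 if the raw value is
>     // used directly), so any positive memstore usage is "above" it.
>     long globalMemstoreLimit = (long) (maxHeap * memstoreFraction);
>     long currentMemstoreSize = 1; // any positive usage
>     boolean aboveHighWaterMark = currentMemstoreSize >= globalMemstoreLimit;
>     System.out.println(aboveHighWaterMark); // always true when undefined
>   }
> }
> {code}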



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)