Posted to dev@hbase.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2008/11/22 06:34:44 UTC

[jira] Commented: (HBASE-900) Regionserver memory leak causing OOME during relatively modest bulk importing

    [ https://issues.apache.org/jira/browse/HBASE-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649899#action_12649899 ] 

Andrew Purtell commented on HBASE-900:
--------------------------------------

This is a recurring issue presently causing pain on current trunk. It seems to be worse now than in 0.18.1. Heap gets out of control (> 1GB) for regionservers hosting only ~20 regions or so. Much of the heap is tied up in byte arrays referenced by HStoreKeys (HSKs), which are in turn referenced by the WritableComparable[] arrays used by MapFile indexes.

From a jgray server:

class                                               instances   total bytes
class [B                                              3525873     615313626
class org.apache.hadoop.hbase.HStoreKey               1605046      51361472
class java.util.TreeMap$Entry                         1178067      48300747
class [Lorg.apache.hadoop.io.WritableComparable;           56       4216992

Approximately 56 mapfile indexes were resident. Approximately 15-20 regions were being hosted at the time of the crash. 

On an apurtell server, >900MB of heap was observed to be consumed by mapfile indexes for 48 store files corresponding to 16 regions.
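For a sense of scale, here is a back-of-the-envelope sketch. The helper class, its numbers, and the ~400 bytes/key figure are illustrative guesses of mine, not measurements from the histograms above. MapFile.Writer records one index key per io.map.index.interval entries (default 128), and MapFile.Reader can be told via io.map.index.skip to keep only every (skip+1)-th of those in its in-memory WritableComparable[], cutting resident index heap proportionally:

```java
// Hypothetical estimator (names and constants are mine, not from HBase):
// approximates heap resident in MapFile index arrays for a store file.
public class IndexMemoryEstimate {

    // entries: key/value pairs in the MapFile
    // indexInterval: io.map.index.interval used at write time (default 128)
    // indexSkip: io.map.index.skip used at read time (default 0)
    // bytesPerKey: rough heap cost of one resident key (HSK + row byte[])
    static long estimateIndexHeap(long entries, int indexInterval,
                                  int indexSkip, long bytesPerKey) {
        long residentKeys = entries / indexInterval / (indexSkip + 1);
        return residentKeys * bytesPerKey;
    }

    public static void main(String[] args) {
        // 128M entries, default interval, ~400 bytes per resident key:
        long skip0 = estimateIndexHeap(128_000_000L, 128, 0, 400L);
        long skip7 = estimateIndexHeap(128_000_000L, 128, 7, 400L);
        System.out.println("skip=0: " + skip0 + " bytes"); // 400000000
        System.out.println("skip=7: " + skip7 + " bytes"); // 50000000
    }
}
```

If the regression turns out to be index retention rather than a true leak, raising io.map.index.skip could be a stopgap on memory-constrained regionservers while the root cause is found.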


> Regionserver memory leak causing OOME during relatively modest bulk importing
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-900
>                 URL: https://issues.apache.org/jira/browse/HBASE-900
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.2.1, 0.18.0
>            Reporter: Jonathan Gray
>            Assignee: stack
>            Priority: Critical
>         Attachments: memoryOn13.png
>
>
> I have recreated this issue several times and it appears to have been introduced in 0.2.
> During an import to a single table, memory usage of individual region servers grows w/o bounds and when set to the default 1GB it will eventually die with OOME.  This has happened to me as well as Daniel Ploeg on the mailing list.  In my case, I have 10 RS nodes and OOME happens w/ 1GB heap at only about 30-35 regions per RS.  In previous versions, I have imported to several hundred regions per RS with default heap size.
> I am able to get past this by increasing the max heap to 2GB.  However, the appearance of this in newer versions leads me to believe there is now some kind of memory leak happening in the region servers during import.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.