Posted to dev@hbase.apache.org by "Bryan Duxbury (JIRA)" <ji...@apache.org> on 2008/04/07 20:01:24 UTC

[jira] Updated: (HBASE-512) Add configuration for global aggregate memcache size

     [ https://issues.apache.org/jira/browse/HBASE-512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Duxbury updated HBASE-512:
--------------------------------

    Attachment: 512.patch

First draft. There's a new test, TestGlobalMemcacheLimit, that checks this behavior. This test passes, and I'm running the rest of the suite right now.

> Add configuration for global aggregate memcache size
> ----------------------------------------------------
>
>                 Key: HBASE-512
>                 URL: https://issues.apache.org/jira/browse/HBASE-512
>             Project: Hadoop HBase
>          Issue Type: Sub-task
>          Components: regionserver
>            Reporter: Bryan Duxbury
>            Assignee: Bryan Duxbury
>             Fix For: 0.2.0
>
>         Attachments: 512.patch
>
>
> Currently, we have a configuration parameter for the size a Memcache must reach before it is flushed. This leads to fairly evenly sized mapfiles when flushes run, which is nice. However, as noted in the parent issue, we can often get to a point where we run out of memory because too much data is hanging around in Memcaches.
> I think that we should add a new configuration parameter that governs the total amount of memory the region server should spend on Memcaches. This would have to be some number less than the heap size - we'll have to discover the proper values through experimentation. Then, when a put comes in and the global aggregate size of all the Memcaches for all the stores has reached the threshold, we should block the current and any subsequent put operations from completing until forced flushes bring the memory usage back down to a safe level. The existing strategy for triggering flushes will still be in play, just augmented with this blocking behavior.
> This approach has the advantage of helping us avoid OOME situations by warning us well in advance of overflow. Additionally, it becomes something of a performance tuning knob, allowing you to allocate more memory to improve write performance. This is superior to the previously suggested PhantomReference approach, because that could cause us to bump into further OOMEs while we're trying to flush to avoid them. 
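
For illustration, here is a rough, self-contained sketch of the blocking idea described in the quoted text above. It is not the attached 512.patch: the class name (GlobalMemcacheAccounting), the property name (hbase.regionserver.globalMemcacheLimit), and the separate "safe level" used to release blocked puts are assumptions made for this sketch, not necessarily what the patch actually does.

    // Sketch only, not the HBASE-512 patch. Shows one way a region server could
    // account for the aggregate size of all Memcaches and block puts at a limit.
    import java.util.concurrent.atomic.AtomicLong;

    public class GlobalMemcacheAccounting {

        // Hypothetical config key; the real property name may differ.
        public static final String GLOBAL_LIMIT_KEY =
            "hbase.regionserver.globalMemcacheLimit";

        private final long globalLimit; // block puts once aggregate size reaches this
        private final long safeLevel;   // resume puts once flushes get us back under this

        // Aggregate size of all Memcaches on this region server.
        private final AtomicLong globalSize = new AtomicLong(0);
        private final Object blockLock = new Object();

        public GlobalMemcacheAccounting(long globalLimit, long safeLevel) {
            this.globalLimit = globalLimit;
            this.safeLevel = safeLevel;
        }

        /** Called on the put path before data is added to a Memcache. */
        public void checkGlobalLimit() throws InterruptedException {
            if (globalSize.get() < globalLimit) {
                return; // common case: plenty of headroom, no locking needed
            }
            synchronized (blockLock) {
                // Block this put (and any that pile up behind it) until forced
                // flushes bring the aggregate back down to the safe level.
                while (globalSize.get() >= safeLevel) {
                    requestForcedFlush();
                    blockLock.wait(1000); // re-check periodically in case a signal is missed
                }
            }
        }

        /** Called whenever a Memcache grows (positive delta) or is flushed (negative). */
        public void adjustGlobalSize(long delta) {
            long now = globalSize.addAndGet(delta);
            if (delta < 0 && now < safeLevel) {
                synchronized (blockLock) {
                    blockLock.notifyAll(); // wake blocked puts once we are under the safe level
                }
            }
        }

        /** Placeholder: a real region server would ask its flusher thread to flush
         *  Memcaches until enough memory is reclaimed. */
        private void requestForcedFlush() {
            // no-op in this sketch
        }
    }

The gap between the hard limit and the lower safe level is just one way to keep puts from thrashing right at the threshold; in a real region server the forced flushes would presumably go through the existing flush mechanism rather than a placeholder like the one above.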

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.