Posted to dev@hbase.apache.org by "Bryan Duxbury (JIRA)" <ji...@apache.org> on 2008/02/04 22:27:08 UTC

[jira] Updated: (HBASE-69) [hbase] Make cache flush triggering less simplistic

     [ https://issues.apache.org/jira/browse/HBASE-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Duxbury updated HBASE-69:
-------------------------------

    Component/s: regionserver

> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
>                 Key: HBASE-69
>                 URL: https://issues.apache.org/jira/browse/HBASE-69
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: stack
>            Assignee: Jim Kellerman
>         Attachments: patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt
>
>
> When the flusher runs (it is triggered when the sum of all Store memcache sizes in a Region exceeds a configurable maximum), we flush all Stores, even though a Store's memcache might hold only a few bytes.
> I would think Stores should only dump their memcache to disk if they have some substance.
> The problem becomes more acute the more families you have in a Region.
> Possible behaviors would be to dump only the biggest Store, or only those Stores holding more than 50% of the maximum memcache size (see the sketch below).  Behavior could vary depending on the prompt that provoked the flush.  We would also log why the flush is running: optional, or over the maximum size.
> This issue comes out of HADOOP-2621.
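
Below is a minimal, hypothetical sketch of the selective-flush idea described above: flush only the Stores whose memcache holds more than 50% of the configured maximum, and fall back to flushing just the biggest Store if none qualify. The class and method names (SelectiveFlushPolicy, chooseStoresToFlush) and the per-Store size map are assumptions for illustration, not the actual HBase regionserver API.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of a selective flush policy: instead of flushing every
 * Store when the Region's total memcache exceeds the configured maximum,
 * flush only Stores carrying a meaningful share of the data.
 * Names are illustrative and not the actual HBase API.
 */
public class SelectiveFlushPolicy {

  /** Fraction of the per-memcache maximum a Store must reach to be flushed. */
  private static final double FLUSH_SHARE = 0.5;

  /**
   * Choose which Stores to flush.
   *
   * @param memcacheSizes   per-Store memcache sizes in bytes (Store name -> size)
   * @param maxMemcacheSize configured maximum memcache size in bytes
   * @return names of the Stores that should be flushed
   */
  public static List<String> chooseStoresToFlush(
      Map<String, Long> memcacheSizes, long maxMemcacheSize) {
    long threshold = (long) (maxMemcacheSize * FLUSH_SHARE);
    List<String> toFlush = new ArrayList<>();
    String biggest = null;
    long biggestSize = -1L;

    for (Map.Entry<String, Long> e : memcacheSizes.entrySet()) {
      long size = e.getValue();
      // Flush any Store holding more than 50% of the max memcache size.
      if (size > threshold) {
        toFlush.add(e.getKey());
      }
      if (size > biggestSize) {
        biggestSize = size;
        biggest = e.getKey();
      }
    }

    // If nothing crosses the threshold, fall back to flushing only the
    // biggest Store so the Region still frees memory.
    if (toFlush.isEmpty() && biggest != null) {
      toFlush.add(biggest);
    }
    return toFlush;
  }
}

The fallback is the interesting design choice: if the Region crossed the global limit but no single Store crossed the per-Store threshold, flushing only the biggest Store still frees memory without rewriting every family's memcache as a tiny file on disk.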

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.