Posted to dev@hbase.apache.org by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org> on 2009/04/09 19:03:13 UTC

[jira] Updated: (HBASE-1058) Prevent runaway compactions

     [ https://issues.apache.org/jira/browse/HBASE-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jean-Daniel Cryans updated HBASE-1058:
--------------------------------------

    Attachment: hbase-1058.patch

I'm using this patch on a cluster of old machines to load the whole Wikipedia articles dump into a table. Without it, I saw a store with 150+ store files while it was still trying to compact the first 20 or so (which caused the machine to start swapping, all hell broke loose, etc.).

Basically, after we get the updatesLock write lock, it checks that no single store has more than hbase.hregion.memcache.store.maximum files (default is 6); if one does, it sleeps 500ms at a time until the condition is resolved.

This patch was made against 0.19 branch.
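The gating described above can be sketched roughly as follows. This is a minimal illustration only, not the actual patch code: the CompactionGate class, the waitForCompactions method, and the AtomicInteger standing in for a store's file count are all hypothetical names.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the check described in the comment above.
public class CompactionGate {
    // Mirrors hbase.hregion.memcache.store.maximum (default 6 per the comment).
    private final int maxStoreFiles;

    public CompactionGate(int maxStoreFiles) {
        this.maxStoreFiles = maxStoreFiles;
    }

    // Called after the updatesLock write lock is acquired: blocks the writer,
    // sleeping 500 ms per iteration, while the store still holds too many files,
    // giving the compactor time to catch up.
    public void waitForCompactions(AtomicInteger storeFileCount) {
        while (storeFileCount.get() > maxStoreFiles) {
            try {
                Thread.sleep(500);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

In the real patch the store-file count would come from the region's stores rather than a bare counter; the point is simply that writers stall instead of letting flushes pile up faster than compactions can absorb them.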

> Prevent runaway compactions
> ---------------------------
>
>                 Key: HBASE-1058
>                 URL: https://issues.apache.org/jira/browse/HBASE-1058
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: hbase-1058.patch
>
>
> A rapid upload will easily outrun our compaction ability, dropping flushes faster than we can compact them. Fix.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.