Posted to dev@hbase.apache.org by "Jonathan Gray (JIRA)" <ji...@apache.org> on 2009/07/07 09:28:14 UTC

[jira] Updated: (HBASE-1618) Investigate further into the MemStoreFlusher StoreFile limit

     [ https://issues.apache.org/jira/browse/HBASE-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1618:
---------------------------------

    Attachment: HBASE-1618-v1.patch

Patch fixes it and adds a good deal of debug output.  It is a quick fix that still needs to be cleaned up.

I want to spend more time on this issue, so the patch is not for commit.  But do test with it; it definitely seems to help, and you can watch how it behaves (very verbose).

Once we reach the StoreFile limit during a flush, we hold off the flush and wait for compactions.  Once we drop below the limit, or we have waited past the maximum time limit, we proceed with the flush.
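The gating logic described above might be sketched roughly like this (an illustrative sketch only, not the patch's actual code; the limit and max-wait constants here are assumed values):

```java
// Illustrative sketch of the flush-gating idea described above.
// STOREFILE_LIMIT and MAX_WAIT_MS are assumed values, not the patch's constants.
public class FlushGate {
    static final int STOREFILE_LIMIT = 7;    // hypothetical per-store StoreFile limit
    static final long MAX_WAIT_MS = 90_000;  // hypothetical max wait for compactions

    // A flush may proceed once compactions drop the StoreFile count below
    // the limit, or once we have waited past the max time limit.
    static boolean mayFlush(int storeFileCount, long waitedMs) {
        return storeFileCount < STOREFILE_LIMIT || waitedMs >= MAX_WAIT_MS;
    }
}
```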

During the wait period, MemStores will start to back up.  Once they reach the multiplier blocking size (I have it set to 4, not the default 2), the blocking-updates wall comes up.
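The blocking-updates check works off a multiplier of the MemStore flush size; a rough sketch of that condition (the 64 MB flush size is an assumption for the example, the multiplier of 4 is the value described above):

```java
// Illustrative sketch of the blocking-updates threshold, not the patch code.
// FLUSH_SIZE is an assumed value; the multiplier is 4 as described above.
public class BlockingCheck {
    static final long FLUSH_SIZE = 64L * 1024 * 1024; // assumed 64 MB flush size
    static final int BLOCK_MULTIPLIER = 4;            // set to 4 here, default is 2

    // While flushes are held off waiting on compactions, updates get blocked
    // once the MemStore backs up to multiplier * flush size.
    static boolean shouldBlockUpdates(long memstoreSizeBytes) {
        return memstoreSizeBytes >= (long) BLOCK_MULTIPLIER * FLUSH_SIZE;
    }
}
```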

My upload is behaving much better now, sustaining about 5k rows per second (rows have 12 columns, totaling about 2K of data).  This is on 3 slow nodes (2 cores, 2 GB RAM, single HDD).

> Investigate further into the MemStoreFlusher StoreFile limit
> ------------------------------------------------------------
>
>                 Key: HBASE-1618
>                 URL: https://issues.apache.org/jira/browse/HBASE-1618
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: Jonathan Gray
>            Assignee: Jonathan Gray
>             Fix For: 0.20.0
>
>         Attachments: HBASE-1618-v1.patch
>
>
> This seems to cause some weird behavior and does not accomplish its original intent (preventing a region from growing to hundreds of StoreFiles).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.