Posted to issues@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2018/03/28 00:33:00 UTC

[jira] [Commented] (HBASE-13884) Fix Compactions section in HBase book

    [ https://issues.apache.org/jira/browse/HBASE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416514#comment-16416514 ] 

stack commented on HBASE-13884:
-------------------------------

What we have here is conflation of two notions of 'stuckness'. The doc is talking about being stuck because the memstore is full but can't flush because the store is in excess of the blocking file count. The mayBeStuck in the compaction policy is about the policy itself getting stuck because it is unable to find a candidate set of files to compact.
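
To make the distinction concrete, here is a side-by-side paraphrase of the two conditions (illustrative names only, not the actual HBase classes or methods):

{code}
// Illustrative sketch only -- made-up class, method, and variable names, not the
// actual HBase code. It just restates the two conditions side by side.
import java.util.List;

public class StucknessSketch {

  // Notion 1 (what the doc's "Being Stuck" paragraph is about): the region can't
  // flush its MemStore because the store already has too many files; it has to
  // wait for compaction to bring the count back under the blocking threshold.
  static boolean flushBlocked(int storeFileCount, int blockingStoreFiles) {
    return storeFileCount >= blockingStoreFiles;  // hbase.hstore.blockingStoreFiles
  }

  // Notion 2 (what mayBeStuck in the ratio policy means): even counting the one
  // file the in-flight compaction will produce, the files the policy is not
  // compacting would still be at or above the blocking count, so the policy
  // itself can't make enough progress with its normal candidate selection.
  static boolean mayBeStuck(List<?> candidateFiles, List<?> filesCompacting,
      int blockingFileCount) {
    int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
    return (candidateFiles.size() - filesCompacting.size() + futureFiles)
        >= blockingFileCount;
  }
}
{code}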

Let me attach a patch here that makes the doc clearer about which stuckness it refers to...

> Fix Compactions section in HBase book
> -------------------------------------
>
>                 Key: HBASE-13884
>                 URL: https://issues.apache.org/jira/browse/HBASE-13884
>             Project: HBase
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Vladimir Rodionov
>            Priority: Trivial
>
> http://hbase.apache.org/book.html#_compaction
> {quote}
> Being Stuck
> When the MemStore gets too large, it needs to flush its contents to a StoreFile. However, a Store can only have hbase.hstore.blockingStoreFiles files, so the MemStore needs to wait for the number of StoreFiles to be reduced by one or more compactions. However, if the MemStore grows larger than hbase.hregion.memstore.flush.size, it is not able to flush its contents to a StoreFile. If the MemStore is too large and the number of StoreFiles is also too high, the algorithm is said to be "stuck". The compaction algorithm checks for this "stuck" situation and provides mechanisms to alleviate it.
> {quote}
> According to the source code, this "stuck" situation has nothing to do with MemStore size.
> {code}
>     // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
>     // able to compact more if stuck and compacting, because ratio policy excludes some
>     // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
>     int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
>     boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
>         >= storeConfigInfo.getBlockingFileCount();
> {code}
> If the number of store files that are not yet being compacted, plus (potentially) one file produced by the in-flight compaction, is at least the blocking file count, the policy reports that compaction may be stuck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)