Posted to issues@hbase.apache.org by "Sergey Shelukhin (JIRA)" <ji...@apache.org> on 2013/02/14 02:26:12 UTC

[jira] [Commented] (HBASE-7842) Add compaction policy that explores more storefile groups

    [ https://issues.apache.org/jira/browse/HBASE-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578081#comment-13578081 ] 

Sergey Shelukhin commented on HBASE-7842:
-----------------------------------------

What I don't understand is the first new condition applied to every file. E.g. (assume ratio 1), in the order 10 7 4 5 the files are good to compact, but in the order 7 10 4 5 they are not (if files were reordered by size while preserving contiguousness this particular case would go away, but some other example could probably be invented).
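To make the order-dependence concrete, here is a minimal sketch of that per-file condition (hypothetical helper, not the actual HBase code; satisfiesRatio and its signature are made up for illustration):

static boolean satisfiesRatio(long[] sizes, double ratio) {
  // Checks FileSize(fi) <= ratio * Sum(0, i-1, FileSize(fx)) for every file.
  long sumSoFar = 0;
  for (int i = 0; i < sizes.length; i++) {
    if (i > 0 && sizes[i] > ratio * sumSoFar) {
      return false;
    }
    sumSoFar += sizes[i];
  }
  return true;
}

// satisfiesRatio(new long[] {10, 7, 4, 5}, 1.0) -> true
// satisfiesRatio(new long[] {7, 10, 4, 5}, 1.0) -> false (10 > 7)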

Then, maximizing the ratio without regard for the number of files can be bad. E.g. files 0..3 with sizes 100 3 2 2, with max files 3 and min files 2, and assuming "2 2" passes the ratio test: we can compact 2..3 for 2/4, or 1..3 for 3/7. 3/7 is less, but we probably want to do it anyway to minimize I/O amplification later.

Just thinking aloud: what if we apply the ratio/size filter *after* we find the best set out of all permutations (by # of files / sum of sizes)? Presumably, good permutations already won't rewrite a lot of data needlessly, because otherwise they would have a large sum of sizes and thus lower preference for being the "best set".
For example, with 10 2 2 2 2 3 2 and max files 6: if we choose among size-6 sets, 0..5 gets us 6/21 and 1..6 gets us 6/13; we don't need a ratio test on each set to establish that 1..6 is better.
So the only thing we need to do is check whether the chosen set itself satisfies some ratio test... probably the ratio against the biggest file (without regard to the ordering of files, e.g. 6 10 6 is good). If it passes, it's the best compaction we can do (according to this criterion); if not, the best compaction we can do is not good enough.
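A rough sketch of one reading of the above (hypothetical code, not a patch; pickWindow is made up for illustration): prefer the window with the most files, tie-break on the smaller total size, and only then apply the biggest-file ratio test to the winner. On 100 3 2 2 (min 2, max 3) it picks 1..3, and on 10 2 2 2 2 3 2 (max 6) it picks 1..6, matching the examples above.

static int[] pickWindow(long[] sizes, int minFiles, int maxFiles, double ratio) {
  int bestStart = -1, bestCount = 0;
  long bestSum = Long.MAX_VALUE;
  for (int start = 0; start < sizes.length; start++) {
    long sum = 0;
    for (int end = start; end < sizes.length && end - start + 1 <= maxFiles; end++) {
      sum += sizes[end];
      int count = end - start + 1;
      if (count < minFiles) continue;
      // More files wins; among equal counts, the smaller total size wins.
      if (count > bestCount || (count == bestCount && sum < bestSum)) {
        bestCount = count;
        bestSum = sum;
        bestStart = start;
      }
    }
  }
  if (bestStart < 0) {
    return null; // no window with at least minFiles files
  }
  // Ratio test on the chosen window only, against its biggest file,
  // without regard to ordering (so 6 10 6 passes with ratio 1: 10 <= 6 + 6).
  long max = 0;
  for (int i = bestStart; i < bestStart + bestCount; i++) {
    max = Math.max(max, sizes[i]);
  }
  if (max > ratio * (bestSum - max)) {
    return null; // even the best window is not good enough
  }
  return new int[] { bestStart, bestStart + bestCount - 1 };
}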


                
> Add compaction policy that explores more storefile groups
> ---------------------------------------------------------
>
>                 Key: HBASE-7842
>                 URL: https://issues.apache.org/jira/browse/HBASE-7842
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>    Affects Versions: 0.96.0
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>
> Some workloads that are not as stable can produce compactions that are too large or too small with the current storefile selection algorithm.
> Currently:
> * Find the first file fi such that FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> * Ensure that there are the min number of files (if there aren't then bail out)
> * If there are too many files, keep the larger ones.
> I would propose something like:
> * Find all sets of storefiles where every file satisfies 
> ** FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> ** Num files in set <= max
> ** Num files in set >= min
> * Then pick the set of files that maximizes ((# storefiles in set) / Sum(FileSize(fx)))
> The thinking is that the above algorithm is pretty easy to reason about: all files satisfy the ratio, and it should rewrite the least amount of data to get the biggest impact on seeks.
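For reference, a minimal sketch of the selection the quoted proposal describes (hypothetical code, not Elliott's patch; selectByProposal is made up for illustration): enumerate contiguous candidate sets in which every file satisfies FileSize(fi) <= Sum(0, i-1, FileSize(fx)) and min <= count <= max, then pick the set maximizing count / totalSize.

static int[] selectByProposal(long[] sizes, int minFiles, int maxFiles) {
  int bestStart = -1, bestEnd = -1;
  double bestScore = -1;
  for (int start = 0; start < sizes.length; start++) {
    long sum = sizes[start];
    for (int end = start + 1; end < sizes.length; end++) {
      // Per-file condition violated; any longer set from this start
      // contains the same violation, so stop extending.
      if (sizes[end] > sum) break;
      sum += sizes[end];
      int count = end - start + 1;
      if (count > maxFiles) break;
      if (count < minFiles) continue;
      double score = (double) count / sum;
      if (score > bestScore) {
        bestScore = score;
        bestStart = start;
        bestEnd = end;
      }
    }
  }
  return bestStart < 0 ? null : new int[] { bestStart, bestEnd };
}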
