Posted to issues@hbase.apache.org by "Lars George (JIRA)" <ji...@apache.org> on 2015/09/10 21:15:46 UTC

[jira] [Commented] (HBASE-7842) Add compaction policy that explores more storefile groups

    [ https://issues.apache.org/jira/browse/HBASE-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739407#comment-14739407 ] 

Lars George commented on HBASE-7842:
------------------------------------

Hey [~eclark], could you help me understand why the {{ExploringCompactionPolicy}} is overloading the {{hbase.hstore.compaction.max.size}} parameter? In the original ratio-based policy it is used _only_ to exclude store files that are larger than this threshold. The ECP does the same, but later on reuses the same threshold (if set) to drop a candidate selection when the sum of all store files in the selection exceeds this limit. Why?

Here is the code:

{code}
        if (size > comConf.getMaxCompactSize()) {
          continue;
        }
{code}
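To make the distinction concrete, here is a minimal sketch (illustrative names only, not the actual HBase source) of the two different ways the same threshold ends up being consulted: once against individual store files, and once against the summed size of a candidate selection.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two distinct uses of the single threshold
// (hbase.hstore.compaction.max.size); names are illustrative and do not
// match the ExploringCompactionPolicy source.
public class MaxSizeUsageSketch {
    static final long MAX_COMPACT_SIZE = 100L; // stand-in for comConf.getMaxCompactSize()

    // Use 1 (ratio-based policy and ECP alike): drop individual files
    // larger than the threshold before any selection happens.
    static List<Long> filterLargeFiles(List<Long> fileSizes) {
        List<Long> kept = new ArrayList<>();
        for (long size : fileSizes) {
            if (size <= MAX_COMPACT_SIZE) {
                kept.add(size);
            }
        }
        return kept;
    }

    // Use 2 (the snippet quoted above): reject a whole candidate
    // selection when the *sum* of its file sizes exceeds the same threshold.
    static boolean selectionTooLarge(List<Long> selection) {
        long total = 0;
        for (long size : selection) {
            total += size;
        }
        return total > MAX_COMPACT_SIZE;
    }
}
```

With a threshold of 100, a 150-byte file is excluded per file under use 1, while a selection of two 60-byte files (each individually fine) is rejected as a group under use 2, which is the overloading being questioned here.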

The ref guide says this:

{noformat}
* Do size-based sanity checks against each StoreFile in this set of StoreFiles.
** If the size of this StoreFile is larger than `hbase.hstore.compaction.max.size`, take it out of consideration.
** If the size is greater than or equal to `hbase.hstore.compaction.min.size`, sanity-check it against the file-based ratio to see whether it is too large to be considered.
{noformat}

This seems wrong, no? The code does not perform this check against each store file, but against the current selection candidate as a whole. The guide still speaks of the max size key in the traditional sense, i.e. eliminating single store files that exceed the limit, but that is not what the code does at this spot. Please advise?

> Add compaction policy that explores more storefile groups
> ---------------------------------------------------------
>
>                 Key: HBASE-7842
>                 URL: https://issues.apache.org/jira/browse/HBASE-7842
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>             Fix For: 0.98.0, 0.95.1
>
>         Attachments: HBASE-7842-0.patch, HBASE-7842-2.patch, HBASE-7842-3.patch, HBASE-7842-4.patch, HBASE-7842-5.patch, HBASE-7842-6.patch, HBASE-7842-7.patch, HBASE-7842-ADD.patch
>
>
> Some workloads that are not as stable can have compactions that are too large or too small using the current storefile selection algorithm.
> Currently:
> * Find the first file where FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> * Ensure that there are the min number of files (if there aren't then bail out)
> * If there are too many files, keep the larger ones.
> I would propose something like:
> * Find all sets of storefiles where every file satisfies 
> ** FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> ** Num files in set <= max
> ** Num Files in set >= min
> * Then pick the set of files that maximizes ((# storefiles in set) / Sum(FileSize(fx)))
> The thinking is that the above algorithm is pretty easy to reason about: all files satisfy the ratio, and it should rewrite the least amount of data to get the biggest impact in seeks.
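The proposed selection can be sketched roughly as follows. This is a simplified illustration of the idea described in the issue, not the committed ExploringCompactionPolicy code; the ratio check below (each file no larger than the ratio times the sum of the other files in the set) is one plausible reading of the per-file condition.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the proposed exploring selection: enumerate
// contiguous candidate sets, keep those where every file is within the
// ratio of the rest of the set, then pick the set that maximizes
// (# files) / (total size), i.e. the most seek savings per byte rewritten.
public class ExploringSelectionSketch {
    static List<Long> pickBestSelection(List<Long> sizes, int minFiles,
                                        int maxFiles, double ratio) {
        List<Long> best = new ArrayList<>();
        double bestScore = -1.0;
        for (int start = 0; start < sizes.size(); start++) {
            int maxEnd = Math.min(sizes.size(), start + maxFiles);
            for (int end = start + minFiles; end <= maxEnd; end++) {
                List<Long> candidate = sizes.subList(start, end);
                if (!filesInRatio(candidate, ratio)) continue;
                long total = 0;
                for (long s : candidate) total += s;
                double score = candidate.size() / (double) total;
                if (score > bestScore) {
                    bestScore = score;
                    best = new ArrayList<>(candidate);
                }
            }
        }
        return best; // empty if no candidate set satisfied the constraints
    }

    // Every file must be no larger than ratio * (sum of the other files).
    static boolean filesInRatio(List<Long> candidate, double ratio) {
        long total = 0;
        for (long s : candidate) total += s;
        for (long s : candidate) {
            if (s > ratio * (total - s)) return false;
        }
        return true;
    }
}
```

For example, with file sizes [100, 10, 10, 10], a min of 2, a max of 4, and a ratio of 1.2, any set containing the 100-byte file fails the ratio check, so the selection settles on a set of small files, avoiding the oversized rewrite the current algorithm could produce.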



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)