Posted to issues@hbase.apache.org by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org> on 2011/03/23 00:58:06 UTC

[jira] [Commented] (HBASE-3649) Separate compression setting for flush files

    [ https://issues.apache.org/jira/browse/HBASE-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009914#comment-13009914 ] 

Jean-Daniel Cryans commented on HBASE-3649:
-------------------------------------------

Are you working on this, Andrew? Would you mind if we punt it, as we are trying to get 0.90.2 ready soon?

> Separate compression setting for flush files
> --------------------------------------------
>
>                 Key: HBASE-3649
>                 URL: https://issues.apache.org/jira/browse/HBASE-3649
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>             Fix For: 0.90.2, 0.92.0
>
>
> In this thread on user@hbase (http://search-hadoop.com/m/WUnLM6ojHm1), J-D conjectures that compressing flush files leads to a suboptimal situation where "the puts are sometimes blocked on the memstores which are blocked by the flusher thread which is blocked because there's too many files to compact because the compactor is given too many small files to compact and has to compact the same data a bunch of times."
> We already have a separate compression setting for major compactions vs. store files written during minor compactions, for background/archival apps. Add a separate compression setting for flush files, defaulting to none, to avoid the above condition (a configuration sketch follows below).
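
To make the proposed setting concrete, here is a minimal Java sketch of how a flush path could pick its compression algorithm separately from the column family's setting. The configuration key "hbase.hregion.memstore.flush.compression" and the flushCompression() helper are illustrative assumptions only, not the actual HBASE-3649 patch; the HColumnDescriptor and Compression.Algorithm calls are the existing client API of that era.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.io.hfile.Compression;

    public class FlushCompressionSketch {

      // Hypothetical key, for illustration only; the real name (if any) is
      // whatever the HBASE-3649 patch defines.
      static final String FLUSH_COMPRESSION_KEY =
          "hbase.hregion.memstore.flush.compression";

      /**
       * Compression for store files written by a memstore flush.
       * Defaults to NONE so the flusher never spends time compressing data
       * that minor compactions will soon rewrite anyway; compactions keep
       * using the column family's configured algorithm.
       */
      static Compression.Algorithm flushCompression(Configuration conf) {
        String name = conf.get(FLUSH_COMPRESSION_KEY, "NONE");
        return Compression.Algorithm.valueOf(name.toUpperCase());
      }

      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        HColumnDescriptor family = new HColumnDescriptor("cf");
        family.setCompressionType(Compression.Algorithm.GZ); // compaction output stays GZ
        System.out.println("compaction writes use: " + family.getCompressionType());
        System.out.println("flush writes use:      " + flushCompression(conf));
      }
    }

With a NONE default, a heavy write load can fill and flush memstores at full speed, and the data pays the compression cost only once, when compactions rewrite it with the family's algorithm.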

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira