Posted to issues@hbase.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2016/08/01 19:04:20 UTC

[jira] [Commented] (HBASE-16288) HFile intermediate block level indexes might recurse forever creating multi TB files

    [ https://issues.apache.org/jira/browse/HBASE-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15402636#comment-15402636 ] 

Hudson commented on HBASE-16288:
--------------------------------

SUCCESS: Integrated in HBase-1.3-IT #770 (See [https://builds.apache.org/job/HBase-1.3-IT/770/])
HBASE-16288 HFile intermediate block level indexes might recurse forever (enis: rev 6dfaed98f248a2ef764e1875e4b6c4976511d032)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java


> HFile intermediate block level indexes might recurse forever creating multi TB files
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-16288
>                 URL: https://issues.apache.org/jira/browse/HBASE-16288
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>            Priority: Critical
>             Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3
>
>         Attachments: hbase-16288_v1.patch, hbase-16288_v2.patch, hbase-16288_v3.patch, hbase-16288_v4.patch
>
>
> Mighty [~elserj] was debugging an OpenTSDB cluster where a region directory ended up having 5TB+ files under <regiondir>/.tmp/.
> With further debugging and analysis, we were able to reproduce the problem locally: we recurse forever in this code path for writing intermediate-level indices:
> {code:title=HFileBlockIndex.java}
> if (curInlineChunk != null) {
>   while (rootChunk.getRootSize() > maxChunkSize) {
>     rootChunk = writeIntermediateLevel(out, rootChunk);
>     numLevels += 1;
>   }
> }
> {code}
> The problem happens if we end up with a very large row key (larger than "hfile.index.block.max.size") as the first key in a block, which then moves all the way up into root-level index building. We keep writing and building the next level of intermediate-level indices with that single very-large key, so the root never shrinks below the max chunk size. This can happen during flush / compaction / region recovery, causing cluster inoperability due to ever-growing files.
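> To make the non-termination concrete, here is a minimal, self-contained sketch (hypothetical names and sizes, not the actual HFileBlockIndex internals) modeling what each intermediate level does: entries are packed into child chunks of at most maxChunkSize bytes, and each child promotes one entry into the new root. A single entry larger than the limit survives every level unchanged, while the real code writes another block on every pass:
> {code:title=IndexLoopSketch.java (illustrative only)}
> import java.util.List;
> import java.util.ArrayList;
>
> public class IndexLoopSketch {
>   // Default for hfile.index.block.max.size is 128 KB.
>   static final int MAX_CHUNK_SIZE = 128 * 1024;
>
>   // Simplified stand-in for writeIntermediateLevel(): pack entries into
>   // child chunks of at most MAX_CHUNK_SIZE bytes; the new root holds one
>   // promoted entry per child. An oversized entry always becomes a child
>   // of its own, so it reappears in the new root at full size.
>   static List<Integer> writeIntermediateLevel(List<Integer> entrySizes) {
>     List<Integer> newRoot = new ArrayList<>();
>     int currentChunk = 0;
>     for (int size : entrySizes) {
>       if (newRoot.isEmpty() || currentChunk + size > MAX_CHUNK_SIZE) {
>         newRoot.add(size); // start a new child; promote its first key
>         currentChunk = size;
>       } else {
>         currentChunk += size;
>       }
>     }
>     return newRoot;
>   }
>
>   static int rootSize(List<Integer> entrySizes) {
>     int total = 0;
>     for (int size : entrySizes) total += size;
>     return total;
>   }
>
>   public static void main(String[] args) {
>     // One 256 KB row key: already twice the max chunk size.
>     List<Integer> rootChunk = List.of(256 * 1024);
>     int numLevels = 1;
>     while (rootSize(rootChunk) > MAX_CHUNK_SIZE) {
>       rootChunk = writeIntermediateLevel(rootChunk);
>       numLevels += 1;
>       if (numLevels > 10) { // the real loop has no such bail-out
>         System.out.println("root still " + rootSize(rootChunk)
>             + " bytes after " + numLevels + " levels; never terminates");
>         return;
>       }
>     }
>   }
> }
> {code}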
> It seems the issue was also reported earlier, with a temporary workaround: 
> https://github.com/OpenTSDB/opentsdb/issues/490
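> For reference, the committed change bounds the loop. The sketch below shows the shape of that guard as I read the patch; the name minIndexNumEntries and a config along the lines of "hfile.index.block.min.entries" are assumptions here, so check the actual diff:
> {code:title=Bounded loop (assumed names)}
> // Assumption: stop building further levels once the root is down to a
> // small number of entries, even if one oversized key keeps its byte size
> // above maxChunkSize; this caps numLevels instead of looping forever.
> if (curInlineChunk != null) {
>   while (rootChunk.getRootSize() > maxChunkSize
>       && rootChunk.getNumEntries() > minIndexNumEntries) {
>     rootChunk = writeIntermediateLevel(out, rootChunk);
>     numLevels += 1;
>   }
> }
> {code}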



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)