Posted to issues@hbase.apache.org by "Qianxi Zhang (JIRA)" <ji...@apache.org> on 2016/01/04 03:56:39 UTC

[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

    [ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080643#comment-15080643 ] 

Qianxi Zhang commented on HBASE-11625:
--------------------------------------

I encountered the same problem with HBase 1.0 + Hadoop 2.6.0:
Caused by: java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
        at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
        at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:252)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1644)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1467)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:430)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:865)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:254)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)

> Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum 
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-11625
>                 URL: https://issues.apache.org/jira/browse/HBASE-11625
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.21, 0.98.4, 0.98.5
>            Reporter: qian wang
>            Assignee: Pankaj Kumar
>         Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When HBase checksums are enabled, readBlockDataInternal() in HFileBlock.java can read a corrupted block from disk, but it only switches to an HDFS-checksum input stream after validateBlockChecksum() detects a mismatch. If the data block's header is corrupted, the constructor call b = new HFileBlock() throws "Invalid HFile block magic" before that fallback is reached, and the RPC call fails.
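
Not the actual HBase code, just a minimal sketch of the ordering described in the issue: the block magic is parsed before any checksum validation, so a zeroed-out header fails fast and the reader never gets a chance to fall back to HDFS checksums. The class and method names (BlockReadSketch, readBlockSketch, checksumMatches, reReadWithHdfsChecksum) are hypothetical.

    import java.io.IOException;
    import java.util.Arrays;

    public class BlockReadSketch {

        // 8-byte magic that prefixes an HFile data block on disk
        static final byte[] DATA_MAGIC = "DATABLK*".getBytes();

        static byte[] readBlockSketch(byte[] onDiskBytes, boolean useHBaseChecksum)
                throws IOException {
            // 1. Parse the block header first. If corruption zeroed the first
            //    8 bytes, this throws immediately (mirroring BlockType.parse
            //    in the stack trace above).
            byte[] magic = Arrays.copyOfRange(onDiskBytes, 0, 8);
            if (!Arrays.equals(magic, DATA_MAGIC)) {
                throw new IOException("Invalid HFile block magic: "
                        + Arrays.toString(magic));
            }

            // 2. Only after the header is parsed is the HBase checksum checked;
            //    this is the sole point where the reader can switch back to a
            //    stream that relies on HDFS checksums.
            if (useHBaseChecksum && !checksumMatches(onDiskBytes)) {
                return reReadWithHdfsChecksum(onDiskBytes);
            }
            return onDiskBytes;
        }

        // Placeholders standing in for the real checksum and re-read logic.
        static boolean checksumMatches(byte[] bytes) { return true; }
        static byte[] reReadWithHdfsChecksum(byte[] bytes) { return bytes; }
    }

If the fallback is meant to cover on-disk corruption in general, it would seemingly need to handle header-parse failures as well as checksum mismatches, which is the gap this issue points at.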



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)