Posted to issues@hbase.apache.org by "Stephen Yuan Jiang (JIRA)" <ji...@apache.org> on 2014/11/05 19:56:35 UTC

[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

    [ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14198856#comment-14198856 ] 

Stephen Yuan Jiang commented on HBASE-11625:
--------------------------------------------

The fix in HBASE-5885 stated very clearly what the issue is (a Hadoop bug that I think is still not fixed, or at least not fully fixed - I am not sure whether a ticket is open for it):

{code}
diff --git c/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java w/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
index d6a4705..7e3a68c 100644

+    // If this is the local file system hadoop has a bug where seeks
+    // do not go to the correct location if setVerifyChecksum(false) is called.
+    // This manifests itself in that incorrect data is read and HFileBlocks won't be able to read
+    // their header magic numbers. See HBASE-5885
+    if (useHBaseChecksum && !(fs instanceof LocalFileSystem)) {
{code}
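
To make the symptom concrete, here is a minimal, hedged sketch (not code from HBase or Hadoop; the file path and offsets are arbitrary illustration values) that exercises the combination HBASE-5885 warns about: seeking a LocalFileSystem stream after setVerifyChecksum(false). On an affected Hadoop version the byte read after the seek would not match its offset, which is exactly what breaks the HFile block magic check.

{code}
// Hedged reproduction sketch of the behaviour HBASE-5885 works around.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalSeekCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path p = new Path("/tmp/seek-check.dat");   // arbitrary scratch file

    // Write 1 MB where each byte encodes its own offset, so a mis-seek is visible.
    try (FSDataOutputStream out = fs.create(p, true)) {
      for (int i = 0; i < 1 << 20; i++) {
        out.write(i & 0xff);
      }
    }

    fs.setVerifyChecksum(false);  // the call that triggers the bad seek behaviour
    try (FSDataInputStream in = fs.open(p)) {
      long offset = 123_456;
      in.seek(offset);
      int b = in.read();
      // On a fixed Hadoop this prints "ok"; on an affected one the byte does not
      // match the offset, which is what makes HFile block magic checks fail.
      System.out.println((b == (int) (offset & 0xff)) ? "ok" : "mis-seek: got " + b);
    }
  }
}
{code}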

However, it looks like HBASE-11218 re-introduced the issue:

{code}
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
index 8178787..f8cf7b3 100644

@@ -81,6 +81,13 @@ public class HFileSystem extends FilterFileSystem {
     this.useHBaseChecksum = useHBaseChecksum;
     
     fs.initialize(getDefaultUri(conf), conf);
+    
+    // disable checksum verification for local fileSystem, see HBASE-11218
+    if (fs instanceof LocalFileSystem) {
+      fs.setWriteChecksum(false);
+      fs.setVerifyChecksum(false);
+    }
{code}
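
Putting the two diffs side by side makes the conflict visible. The sketch below is hypothetical (class, field, and constructor names are illustrative, not the real HFileSystem source); it only combines the two snippets quoted above to show that the HBASE-11218 branch now calls setVerifyChecksum(false) on exactly the LocalFileSystem case that the HBASE-5885 guard was written to avoid.

{code}
// Hypothetical, simplified merge of the two quoted snippets, to show where they collide.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;

import java.io.IOException;

public class HFileSystemInteractionSketch {
  private final FileSystem fs;
  private final boolean useHBaseChecksum;

  HFileSystemInteractionSketch(Configuration conf, boolean useHBaseChecksum)
      throws IOException {
    this.useHBaseChecksum = useHBaseChecksum;
    this.fs = FileSystem.get(FileSystem.getDefaultUri(conf), conf);

    // HBASE-11218: unconditionally turn filesystem checksums off for the local filesystem.
    if (fs instanceof LocalFileSystem) {
      fs.setWriteChecksum(false);
      fs.setVerifyChecksum(false);   // <-- the call HBASE-5885 deliberately avoided
    }

    // HBASE-5885: only rely on HBase-level checksums when the backing fs is NOT
    // LocalFileSystem, because a LocalFileSystem with verification off mis-seeks
    // and HFile block magic checks then fail.
    if (useHBaseChecksum && !(fs instanceof LocalFileSystem)) {
      // ... set up the separate no-checksum read path here ...
    }
  }
}
{code}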


> Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum 
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-11625
>                 URL: https://issues.apache.org/jira/browse/HBASE-11625
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.21, 0.98.4, 0.98.5
>            Reporter: qian wang
>         Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When using HBase checksums, readBlockDataInternal() in HFileBlock.java can encounter file corruption, but it can only switch to the HDFS checksum input stream once it reaches validateBlockChecksum(). If the data block's header is corrupted when b = new HFileBlock() is constructed, it throws the exception "Invalid HFile block magic" and the RPC call fails.
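
For readers who have not looked at the code path the description refers to, here is a heavily simplified, hypothetical sketch of that control flow (names and the header handling are illustrative, not the actual HFileBlock.readBlockDataInternal implementation). It shows why a corrupted header fails the RPC instead of falling back: the block magic check happens while the header is parsed, before the checksum validation that would trigger the switch to HDFS checksums.

{code}
// Simplified, hypothetical sketch of the failure path described above.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ReadBlockFlowSketch {
  private static final byte[] DATA_BLOCK_MAGIC = "DATABLK*".getBytes(StandardCharsets.US_ASCII);

  static byte[] readBlock(byte[] rawBlock, boolean useHBaseChecksum) throws IOException {
    // 1. Parse the header. A corrupted header dies here with
    //    "Invalid HFile block magic" -- no checksum fall-back has happened yet.
    byte[] magic = Arrays.copyOfRange(rawBlock, 0, DATA_BLOCK_MAGIC.length);
    if (!Arrays.equals(magic, DATA_BLOCK_MAGIC)) {
      throw new IOException("Invalid HFile block magic: " + Arrays.toString(magic));
    }

    // 2. Only now is the HBase-level checksum validated; a mismatch at this
    //    point is what switches the read over to the HDFS checksum path.
    if (useHBaseChecksum && !validateBlockChecksum(rawBlock)) {
      return readBlockWithHdfsChecksum(rawBlock);
    }
    return rawBlock;
  }

  // Placeholder for HBase's per-block checksum validation.
  private static boolean validateBlockChecksum(byte[] rawBlock) { return true; }

  // Placeholder for re-reading through the checksum-verifying filesystem.
  private static byte[] readBlockWithHdfsChecksum(byte[] rawBlock) { return rawBlock; }
}
{code}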



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)