Posted to issues@hbase.apache.org by "Jonathan Hsieh (JIRA)" <ji...@apache.org> on 2014/03/11 02:46:43 UTC

[jira] [Commented] (HBASE-10718) TestHLogSplit fails when it sets a KV size to be negative

    [ https://issues.apache.org/jira/browse/HBASE-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13929863#comment-13929863 ] 

Jonathan Hsieh commented on HBASE-10718:
----------------------------------------

If we expect the exception, we should fail if we don't get it.

{code}
+    // length -1
+    try {
+      // even though we now have a good kv in dis, just pass a length of -1 for simplicity
+      KeyValue kv_3 = KeyValue.create(-1, dis);
+    } catch (Exception e) {
+      assertEquals("Failed read -1 bytes, stream corrupt?", e.getMessage());
+      return;
+    }
+    fail("Expected corrupt stream");
+  }
+
{code}
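
For what it's worth, the usual JUnit idiom puts the fail() inside the try block, right after the call that is expected to throw. A minimal equivalent sketch (same assertion; assumes the same dis setup as in the test above):

{code}
    try {
      // KeyValue.create(-1, dis) should throw before we ever reach fail()
      KeyValue.create(-1, dis);
      fail("Expected corrupt stream");
    } catch (Exception e) {
      // fail() throws AssertionError, so this catch does not swallow it
      assertEquals("Failed read -1 bytes, stream corrupt?", e.getMessage());
    }
{code}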

> TestHLogSplit fails when it sets a KV size to be negative
> ---------------------------------------------------------
>
>                 Key: HBASE-10718
>                 URL: https://issues.apache.org/jira/browse/HBASE-10718
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.98.0, 0.99.0, 0.96.1.1, 0.94.17
>            Reporter: Esteban Gutierrez
>            Assignee: Esteban Gutierrez
>         Attachments: HBASE-10718.v0.txt
>
>
> From [~jdcryans]:
> {code}
> java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2259)
> 	at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2266)
> 	at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueDecoder.parseCell(KeyValueCodec.java:64)
> 	at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:46)
> 	at org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFields(WALEdit.java:222)
> 	at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2114)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2242)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:245)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:214)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:799)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:727)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:307)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:217)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:180)
> 	at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.testMiddleGarbageCorruptionSkipErrorsReadsHalfOfFile(TestHLogSplit.java:363)
> ...
> {code}
> It seems to me that we're reading a negative length, which we then use to allocate the byte array, and since a NegativeArraySizeException is not an IOException we don't treat the log as corrupted. I'm surprised that not a single build has failed like this in the past 3 years.
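
A minimal sketch of the kind of guard that fixes this (illustrative only, not the actual HBASE-10718 patch; the class and method names are made up):

{code}
import java.io.DataInput;
import java.io.IOException;

public class SafeKeyValueRead {
  // Hypothetical helper showing the shape of the length check.
  static byte[] readKeyValueBytes(final DataInput in) throws IOException {
    int length = in.readInt();
    if (length < 0) {
      // Surface a bad length as an IOException so log splitting treats the
      // file as corrupt, instead of letting "new byte[length]" throw
      // NegativeArraySizeException.
      throw new IOException("Failed read " + length + " bytes, stream corrupt?");
    }
    byte[] bytes = new byte[length];
    in.readFully(bytes);
    return bytes;
  }
}
{code}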


