Posted to issues@hbase.apache.org by "Duo Zhang (JIRA)" <ji...@apache.org> on 2016/02/11 05:04:18 UTC

[jira] [Comment Edited] (HBASE-15252) Data loss when replaying wal if HDFS timeout

    [ https://issues.apache.org/jira/browse/HBASE-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142216#comment-15142216 ] 

Duo Zhang edited comment on HBASE-15252 at 2/11/16 4:03 AM:
------------------------------------------------------------

Attached a testcase that reproduces the bug.

In this testcase I mock the DFSInputStream to throw an IOException just after reading the WALHeader.

{code:title=TestWALReplay.java}
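    // Reopen the region over the spied filesystem so WAL replay hits the
    // injected IOException right after the header.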
    WAL wal2 = createWAL(this.conf);
    HRegion region2 = HRegion.openHRegion(conf, spyFs, hbaseRootDir, hri, htd, wal2);
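    // With the bug, the exception is swallowed and the data is silently
    // lost, so this assertion fails.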
    assertEquals(result.size(), region2.get(g).size());
{code}

With the current implementation we simply swallow the exception, so {{openHRegion}} exits normally and the assertEquals fails.
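
For reference, a minimal sketch of one way such a fault could be injected follows. Everything in it is illustrative: the wrapper class, the {{headerLength}} bookkeeping and the Mockito wiring are assumptions made for the sketch, not necessarily what the attached patch does.

{code:title=FailAfterHeaderStream.java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.PositionedReadable;
import org.apache.hadoop.fs.Seekable;

/**
 * Illustrative only: delegates to the real stream but throws once the WAL
 * header has been consumed, simulating an HDFS read timeout in mid-file.
 */
class FailAfterHeaderStream extends InputStream implements Seekable, PositionedReadable {
  private final FSDataInputStream delegate;
  private final long headerLength; // assumed size of the serialized WALHeader
  private long consumed = 0;

  FailAfterHeaderStream(FSDataInputStream delegate, long headerLength) {
    this.delegate = delegate;
    this.headerLength = headerLength;
  }

  private void maybeFail() throws IOException {
    if (consumed >= headerLength) {
      throw new IOException("injected HDFS timeout");
    }
  }

  @Override
  public int read() throws IOException {
    maybeFail();
    int b = delegate.read();
    if (b >= 0) {
      consumed++;
    }
    return b;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    maybeFail();
    int n = delegate.read(b, off, len);
    if (n > 0) {
      consumed += n;
    }
    return n;
  }

  // Seekable / PositionedReadable must be implemented so the wrapper can be
  // handed back inside a new FSDataInputStream; they simply delegate.
  @Override
  public void seek(long pos) throws IOException {
    delegate.seek(pos);
  }

  @Override
  public long getPos() throws IOException {
    return delegate.getPos();
  }

  @Override
  public boolean seekToNewSource(long targetPos) throws IOException {
    return delegate.seekToNewSource(targetPos);
  }

  @Override
  public int read(long position, byte[] buffer, int offset, int length) throws IOException {
    return delegate.read(position, buffer, offset, length);
  }

  @Override
  public void readFully(long position, byte[] buffer, int offset, int length) throws IOException {
    delegate.readFully(position, buffer, offset, length);
  }

  @Override
  public void readFully(long position, byte[] buffer) throws IOException {
    delegate.readFully(position, buffer);
  }
}
{code}

The {{spyFs}} used by the snippet above can then come from a Mockito spy whose {{open(Path)}} hands back the wrapper (again a fragment; whichever {{open}} overload the WAL reader actually calls would need the same stubbing):

{code:title=SpyFsWiring.java}
    // headerLength is assumed to be known to the test (the size of the
    // serialized WALHeader in the file under test).
    FileSystem spyFs = spy(fs);
    doAnswer(new Answer<FSDataInputStream>() {
      @Override
      public FSDataInputStream answer(InvocationOnMock invocation) throws Throwable {
        FSDataInputStream real = (FSDataInputStream) invocation.callRealMethod();
        return new FSDataInputStream(new FailAfterHeaderStream(real, headerLength));
      }
    }).when(spyFs).open(any(Path.class));
{code}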


> Data loss when replaying wal if HDFS timeout
> --------------------------------------------
>
>                 Key: HBASE-15252
>                 URL: https://issues.apache.org/jira/browse/HBASE-15252
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>         Attachments: HBASE-15252-testcase.patch
>
>
> This is a problem introduced by HBASE-13825, where we changed the exception type in the catch block of the {{readNext}} method of {{ProtobufLogReader}}.
> {code:title=ProtobufLogReader.java}
>       try {
>           ......
>           ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
>             (int)size);
>         } catch (IOException ipbe) { // <------ used to be InvalidProtocolBufferException
>           throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
>             originalPosition + ", currentPosition=" + this.inputStream.getPos() +
>             ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
>         }
> {code}
> Here, if the {{inputStream}} throws an {{IOException}} due to a timeout or similar, we just convert it to an {{EOFException}}, and at the bottom of this method we ignore {{EOFException}} and return false. This causes the upper layer to think we have reached the end of the file. So when replaying, we treat the HDFS timeout error as a normal end of file, and data is lost.
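>
> One direction a fix could take (an illustrative sketch only, not the attached patch): translate only a genuinely truncated protobuf message into an {{EOFException}}, and let every other {{IOException}} propagate so replay fails loudly instead of faking an end of file:
> {code:title=ReadNextSketch.java}
>       try {
>           ......
>           ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
>             (int)size);
>         } catch (InvalidProtocolBufferException ipbe) {
>           // A truncated PB can legitimately mean end of file, so keep the
>           // EOF translation for this case only.
>           throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
>             originalPosition + ", currentPosition=" + this.inputStream.getPos() +
>             ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
>         }
>         // Any other IOException (e.g. an HDFS timeout) now propagates to the
>         // caller instead of being disguised as a normal end of file.
> {code}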



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)