Posted to issues@hbase.apache.org by "Duo Zhang (JIRA)" <ji...@apache.org> on 2016/02/12 13:45:18 UTC

[jira] [Reopened] (HBASE-15252) Data loss when replaying wal if HDFS timeout

     [ https://issues.apache.org/jira/browse/HBASE-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang reopened HBASE-15252:
-------------------------------

The patch for 0.98 breaks hadoop-1.1 compatibility.

> Data loss when replaying wal if HDFS timeout
> --------------------------------------------
>
>                 Key: HBASE-15252
>                 URL: https://issues.apache.org/jira/browse/HBASE-15252
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.17
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Blocker
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
>         Attachments: HBASE-15252-testcase.patch, HBASE-15252-v1.patch, HBASE-15252.patch
>
>
> This is a problem introduced by HBASE-13825, where we changed the exception type caught in the catch block of the {{readNext}} method of {{ProtobufLogReader}}.
> {code:title=ProtobufLogReader.java}
>       try {
>           ......
>           ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
>             (int)size);
>         } catch (IOException ipbe) { // <------ used to be InvalidProtocolBufferException
>           throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
>             originalPosition + ", currentPosition=" + this.inputStream.getPos() +
>             ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
>         }
> {code}
> Here, if the {{inputStream}} throws an {{IOException}} due to a timeout or some other transient error, we just convert it to an {{EOFException}}, and at the bottom of this method we swallow the {{EOFException}} and return false. This causes the upper layer to think we have reached the end of the file. So when replaying the wal, we treat the HDFS timeout as a normal end of file and lose the remaining entries, which is data loss.
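> One way to restore the old behavior (a minimal sketch only, not necessarily what the attached patches do) is to narrow the catch back so that only a genuine protobuf parse failure is translated into an {{EOFException}}, while any other {{IOException}}, such as an HDFS read timeout, propagates to the caller:
> {code:title=ProtobufLogReader.java (sketch)}
>       try {
>           ......
>           ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
>             (int) size);
>         } catch (InvalidProtocolBufferException ipbe) {
>           // Only a truncated or corrupt PB message may indicate end of file.
>           // Any other IOException (e.g. an HDFS timeout) is no longer caught
>           // here, so it reaches the caller instead of being masked as EOF.
>           throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
>             originalPosition + ", currentPosition=" + this.inputStream.getPos() +
>             ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
>         }
> {code}
> Since {{InvalidProtocolBufferException}} extends {{IOException}}, this compiles against the same {{ProtobufUtil.mergeFrom}} signature; the only difference is which exceptions get converted into the "end of file" signal.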



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)