Posted to issues@hbase.apache.org by "Duo Zhang (JIRA)" <ji...@apache.org> on 2016/02/11 04:59:18 UTC

[jira] [Updated] (HBASE-15252) Data loss when replaying wal if HDFS timeout

     [ https://issues.apache.org/jira/browse/HBASE-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-15252:
------------------------------
    Description: 
This is a problem introduced by HBASE-13825, where we changed the exception type caught in the {{readNext}} method of {{ProtobufLogReader}}.

{code:title=ProtobufLogReader.java}
      try {
          ......
          ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
            (int)size);
        } catch (IOException ipbe) { // <------ used to be InvalidProtocolBufferException
          throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
            originalPosition + ", currentPosition=" + this.inputStream.getPos() +
            ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
        }
{code}

Here, if the {{inputStream}} throws an {{IOException}} due to a timeout or some other stream error, we just convert it to an {{EOFException}}, and at the bottom of this method we swallow the {{EOFException}} and return false. This causes the upper layer to think we have reached the end of file, so when replaying a WAL we treat the HDFS timeout as a normal end of file, which leads to data loss.
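
One possible direction for a fix, sketched below against the snippet above (a minimal illustration, not the patch that was eventually committed), is to narrow the catch back to {{InvalidProtocolBufferException}} so that other stream errors such as an HDFS read timeout propagate to the caller instead of being masked as end of file:

{code:title=Sketch: narrow the catch back to InvalidProtocolBufferException}
      try {
          ......
          ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
            (int)size);
        } catch (InvalidProtocolBufferException ipbe) {
          // A truncated protobuf message genuinely indicates a (possibly partial)
          // end of file, so converting it to EOFException is reasonable here.
          throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
            originalPosition + ", currentPosition=" + this.inputStream.getPos() +
            ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
        }
        // Any other IOException (e.g. an HDFS timeout) is no longer caught here, so
        // it propagates out of readNext instead of being treated as a normal EOF.
{code}

Note that {{InvalidProtocolBufferException}} extends {{IOException}}, so this is strictly a narrowing of the original catch; an alternative would be to keep catching {{IOException}} but rethrow anything that is not an {{InvalidProtocolBufferException}}.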

> Data loss when replaying wal if HDFS timeout
> --------------------------------------------
>
>                 Key: HBASE-15252
>                 URL: https://issues.apache.org/jira/browse/HBASE-15252
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)