Posted to issues@hbase.apache.org by "Ashish Singhi (JIRA)" <ji...@apache.org> on 2017/01/05 12:25:58 UTC
[jira] [Comment Edited] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles
[ https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801226#comment-15801226 ]
Ashish Singhi edited comment on HBASE-17290 at 1/5/17 12:25 PM:
----------------------------------------------------------------
Sorry for the delay, got stuck with company work.
I have attached the patch.
Added a new RS observer, ReplicationObserver to solve this bug.
Please review.
was (Author: ashish singhi):
Sorry, got stuck with company work.
I have attached the patch.
Added a new RS observer, ReplicationObserver to solve this bug.
Please review.
> Potential loss of data for replication of bulk loaded hfiles
> ------------------------------------------------------------
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.3.0
> Reporter: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the bulk load marker written to the WAL.
> The move of the bulk loaded hfile(s) into the region directory may succeed while the write of the bulk load marker fails.
> This means that although the bulk loaded hfile is being served in the source cluster, replication will not happen.
> Normally the operator is supposed to retry the bulk load, but relying on human retry is not a robust solution.
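The failure window described above can be illustrated with a minimal, self-contained simulation. This is not HBase code: the class and method names (`BulkLoadSequence`, `moveIntoRegion`, `writeBulkLoadMarker`) are hypothetical stand-ins for the two steps of the bulk-load commit, showing how a WAL append failure after a successful file move leaves an hfile served locally but never queued for replication.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the two-step bulk-load commit from the issue
// description; names are illustrative, not actual HBase APIs.
public class BulkLoadSequence {
    final List<String> servedHFiles = new ArrayList<>();     // files visible to reads
    final List<String> replicationQueue = new ArrayList<>(); // files signalled via WAL marker

    // Step 1: move the hfile into the region directory (can succeed on its own).
    void moveIntoRegion(String hfile) {
        servedHFiles.add(hfile);
    }

    // Step 2: write the bulk load marker to the WAL (can fail independently).
    void writeBulkLoadMarker(String hfile, boolean walWriteFails) {
        if (walWriteFails) {
            throw new RuntimeException("WAL append failed");
        }
        replicationQueue.add(hfile);
    }

    public static void main(String[] args) {
        BulkLoadSequence rs = new BulkLoadSequence();
        rs.moveIntoRegion("hfile-1");
        try {
            rs.writeBulkLoadMarker("hfile-1", true); // simulated WAL failure
        } catch (RuntimeException e) {
            // Step 1 already took effect: the hfile is served in the source
            // cluster, but no marker was written, so replication never sees it.
        }
        System.out.println("served=" + rs.servedHFiles
            + " replicated=" + rs.replicationQueue);
        // prints: served=[hfile-1] replicated=[]
    }
}
```

This is the inconsistency a region server observer can close: by registering the hfile for replication via a hook on the commit path rather than relying solely on the WAL marker, the move and the replication signal no longer fail independently.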
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)