Posted to issues@hbase.apache.org by "Zheng Hu (JIRA)" <ji...@apache.org> on 2018/03/20 06:54:00 UTC

[jira] [Comment Edited] (HBASE-20138) Find a way to deal with the conflicts when updating replication position

    [ https://issues.apache.org/jira/browse/HBASE-20138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405884#comment-16405884 ] 

Zheng Hu edited comment on HBASE-20138 at 3/20/18 6:53 AM:
-----------------------------------------------------------

bq. This does not work. The rs which has the region online may be blocked because of the previous range has not been finished yet...
That's true, so we need another way to handle this.

As the comment [1] in HBASE-20147 says, we'll reopen all regions to save the initialized last pushed sequence id when changing a non-serial peer to a serial peer. There are two scenarios:
1. For WAL entries whose seq id > the initialized last pushed seq id, replication will be serial, meaning only one RS will update the region's last pushed seq id, so no conflict can happen.
2. For WAL entries whose seq id <= the initialized last pushed seq id, there may be many RSes pushing different ranges of log for one region. In this case, we only need to check whether the seq id of the log entry is <= the current last pushed seq id; if so, we skip updating the last pushed seq id. No CAS is needed here (in fact, CAS is hard to implement with the ZK multi API), because every concurrently pushing RS will skip the update, so there is no conflict here either.

Finally, I think the patch will be quite simple. Thanks to HBASE-20147.

[1] https://issues.apache.org/jira/browse/HBASE-20147?focusedCommentId=16404693&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16404693
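The skip-if-stale check in scenario 2 can be sketched as below. This is a minimal illustration, not HBase's actual code: the class name {{LastPushedSeqIdTracker}} and its in-memory map are assumptions for the example (the real implementation keeps the last pushed sequence ids in the replication queue storage, e.g. ZooKeeper).

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "skip instead of CAS" idea from scenario 2.
public class LastPushedSeqIdTracker {
  // encodedRegionName -> last pushed sequence id (stand-in for queue storage)
  private final ConcurrentHashMap<String, Long> lastPushed = new ConcurrentHashMap<>();

  /**
   * Try to advance the last pushed seq id for a region.
   * If seqId <= the stored value, some other source has already covered
   * this range, so we skip the write entirely. No CAS is needed: every
   * concurrently pushing RS that sees a stale range skips in the same way,
   * and once replication is serial only one RS writes at a time.
   *
   * @return true if the stored value was advanced, false if skipped.
   */
  public boolean tryAdvance(String region, long seqId) {
    Long current = lastPushed.get(region);
    if (current != null && seqId <= current) {
      return false; // scenario 2: stale range, skip the update
    }
    lastPushed.put(region, seqId); // scenario 1: single serial writer
    return true;
  }

  public long get(String region) {
    return lastPushed.getOrDefault(region, -1L);
  }
}
```

With this, a source pushing an old range simply observes that its seq ids are already covered and leaves the stored position untouched.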


> Find a way to deal with the conflicts when updating replication position
> ------------------------------------------------------------------------
>
>                 Key: HBASE-20138
>                 URL: https://issues.apache.org/jira/browse/HBASE-20138
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Duo Zhang
>            Assignee: Zheng Hu
>            Priority: Major
>
> For now, if a table is not created with SERIAL_REPLICATION_SCOPE and is later converted to SERIAL_REPLICATION_SCOPE, we may have multiple replication sources which replicate different ranges for the same region and update the queue storage concurrently. This will cause problems if the lower range finishes last, since the replication position will be wrong...
> We need to find a way to deal with this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)