Posted to issues@hbase.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2018/06/01 20:36:00 UTC

[jira] [Resolved] (HBASE-18116) Replication source in-memory accounting should not include bulk transfer hfiles

     [ https://issues.apache.org/jira/browse/HBASE-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell resolved HBASE-18116.
------------------------------------
    Resolution: Fixed

> Replication source in-memory accounting should not include bulk transfer hfiles
> -------------------------------------------------------------------------------
>
>                 Key: HBASE-18116
>                 URL: https://issues.apache.org/jira/browse/HBASE-18116
>             Project: HBase
>          Issue Type: Bug
>          Components: Replication
>            Reporter: Andrew Purtell
>            Assignee: Xu Cang
>            Priority: Major
>             Fix For: 3.0.0, 2.1.0, 1.5.0
>
>         Attachments: HBASE-18116.master.001.patch, HBASE-18116.master.002.patch, HBASE-18116.master.003.patch
>
>
> In ReplicationSourceWALReaderThread we maintain a global quota on enqueued replication work to prevent OOM from queuing too many edits on heap. When calculating the size of a given replication queue entry, if it has associated hfiles (i.e. it is a bulk load to be replicated as a batch of hfiles), we get the file sizes and include their sum, then apply that result to the quota. This isn't quite right. Those hfiles will be pulled by the sink as a file copy, not pushed by the source. The cells in those files are never queued in memory at the source and therefore shouldn't be counted against the quota.
> Related, the sum of the hfile sizes is also included when checking whether queued work exceeds the configured replication queue capacity, which defaults to 64 MB. HFiles are commonly much larger than this.
> So when we encounter a bulk load replication entry, typically both the quota and capacity limits are exceeded; we break out of the batching loops and send right away. What is actually transferred on the wire via HBase RPC, though, bears only a partial relationship to that calculation.
> Depending on how you look at it, it makes sense to count hfile file sizes against the replication queue capacity limit: the sink will be occupied transferring those files at the HDFS level. In any case, this is how we have been doing it and it is too late to change now. I do not, however, think it is correct to count hfile file sizes against a quota for in-memory state on the source. The source doesn't queue, or even transfer, those bytes.
> Something I noticed while working on HBASE-18027.
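The distinction the description draws can be sketched as two sizing functions, one per limit. This is an illustrative sketch only (the class and method names are hypothetical, not the actual HBase patch): hfile bytes count toward the batch/capacity size the sink must move, but not toward the heap bytes the source actually holds.

```java
import java.util.List;

// Hypothetical sketch of the two-way sizing described above; names are
// illustrative and do not match HBase internals.
public class ReplicationEntrySizing {

    /** A simplified replication queue entry: edit bytes plus any bulk-load hfiles. */
    static final class Entry {
        final long editBytes;          // WAL edit cells held on the source's heap
        final List<Long> hfileSizes;   // bulk-load hfiles, pulled by the sink as a file copy

        Entry(long editBytes, List<Long> hfileSizes) {
            this.editBytes = editBytes;
            this.hfileSizes = hfileSizes;
        }
    }

    /** Bytes held in memory on the source; this is what the global quota should see. */
    static long heapSize(Entry e) {
        return e.editBytes;            // hfile bytes are never queued on the source's heap
    }

    /** Bytes the sink must transfer; this is what the capacity check should see. */
    static long batchSize(Entry e) {
        long total = e.editBytes;
        for (long s : e.hfileSizes) {
            total += s;                // hfiles still occupy the sink at the HDFS level
        }
        return total;
    }

    public static void main(String[] args) {
        // A bulk-load entry: 1 KB of edit metadata plus two 100 MB hfiles.
        Entry bulkLoad = new Entry(1024, List.of(100L << 20, 100L << 20));
        // Only the edit bytes count against the in-memory quota...
        assert heapSize(bulkLoad) == 1024;
        // ...while the full transfer size is checked against the 64 MB capacity.
        assert batchSize(bulkLoad) == 1024 + (200L << 20);
    }
}
```

Under the pre-patch behavior described above, a single such entry would blow through both the quota and the 64 MB capacity at once; separating the two measures lets each limit govern the resource it was designed to protect.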



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)