Posted to issues@hbase.apache.org by "Ted Yu (JIRA)" <ji...@apache.org> on 2016/04/19 19:56:25 UTC
[jira] [Commented] (HBASE-15669) HFile size is not considered correctly in a replication request
[ https://issues.apache.org/jira/browse/HBASE-15669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248289#comment-15248289 ]
Ted Yu commented on HBASE-15669:
--------------------------------
{code}
3063 * @param storeFilesSize Map of store files and its length
{code}
'its length' -> 'their lengths'
{code}
3083 builder.setStoreFileSize(storeFilesSize.get(name));
{code}
What if there is no size for this file?
I see the LOG.warn() below. Is that enough?
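One defensive option, as a sketch only: {{storeFilesSize.get(name)}} returns a {{Long}}, so a missing entry would auto-unbox {{null}} into a {{long}} parameter and throw an NPE before any later LOG.warn() could run. The helper name {{sizeOrZero}} and the fallback behaviour below are hypothetical, not the actual patch:

{code}
import java.util.HashMap;
import java.util.Map;

public class StoreFileSizeGuard {
  // Hypothetical helper: make the missing-entry case explicit instead of
  // letting auto-unboxing of a null Long throw a NullPointerException.
  static long sizeOrZero(Map<String, Long> storeFilesSizes, String name) {
    Long size = storeFilesSizes.get(name);
    if (size == null) {
      // Real code might instead read the length from the file system,
      // or skip the file entirely with a warning.
      return 0L;
    }
    return size;
  }

  public static void main(String[] args) {
    Map<String, Long> sizes = new HashMap<>();
    sizes.put("hfile-1", 1024L);
    System.out.println(sizeOrZero(sizes, "hfile-1")); // 1024
    System.out.println(sizeOrZero(sizes, "missing")); // 0
  }
}
{code}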
{code}
5314 Map<String, Long> storeFilesSize = new HashMap<String, Long>();
{code}
Rename the variable to storeFilesSizes.
{code}
762 LOG.error("Failed to deserialize bulk load entry from wal edit. "
763 + "This its hfiles count will not be added into metric.");
{code}
Can more detail be added to the above log message so that the user has more context?
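For example, a message carrying which WAL the edit came from, the position, and the cause. All of the field names below are illustrative, not the actual surrounding code:

{code}
public class BulkLoadLogSketch {
  // Hypothetical: build a log message with enough context (which WAL,
  // where in it, and why deserialization failed) for a user to act on,
  // instead of the bare one-liner quoted above.
  static String deserializationError(String walName, long position, Throwable cause) {
    return "Failed to deserialize bulk load entry from wal edit"
        + " (wal=" + walName + ", position=" + position
        + ", cause=" + cause + ")."
        + " Its hfiles count will not be added into metric.";
  }

  public static void main(String[] args) {
    String msg = deserializationError("wal-0001", 762L,
        new java.io.IOException("truncated descriptor"));
    System.out.println(msg);
  }
}
{code}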
> HFile size is not considered correctly in a replication request
> ---------------------------------------------------------------
>
> Key: HBASE-15669
> URL: https://issues.apache.org/jira/browse/HBASE-15669
> Project: HBase
> Issue Type: Bug
> Components: Replication
> Affects Versions: 1.3.0
> Reporter: Ashish Singhi
> Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15669.patch
>
>
> In a single replication request from the source cluster, a RS can send at most either {{replication.source.size.capacity}} bytes of data or {{replication.source.nb.capacity}} entries.
> The size is currently calculated from the cells in each entry, which is wrong for bulk-loaded data replication; in that case we need to consider the size of the HFiles, not the cells.
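The capacity rule described in the issue can be sketched as follows. All names here are illustrative, not the actual HBase implementation; the point is only that a batch closes on whichever limit is hit first, so under-counting entry sizes (cells instead of HFiles) lets batches grow far past the intended size capacity:

{code}
import java.util.ArrayList;
import java.util.List;

public class ReplicationBatchSketch {
  // Sketch: stop adding entries once either the size capacity or the
  // entry-count capacity is reached. Entry sizes are plain longs here;
  // for a bulk-load entry the size should come from the referenced
  // HFiles, not from its cells.
  static List<Long> takeBatch(List<Long> entrySizes, long sizeCapacity, int nbCapacity) {
    List<Long> batch = new ArrayList<>();
    long total = 0;
    for (long size : entrySizes) {
      if (batch.size() >= nbCapacity
          || (!batch.isEmpty() && total + size > sizeCapacity)) {
        break;
      }
      batch.add(size);
      total += size;
    }
    return batch;
  }

  public static void main(String[] args) {
    List<Long> sizes = List.of(40L, 40L, 40L, 40L);
    System.out.println(takeBatch(sizes, 100L, 10).size()); // 2: size-limited
    System.out.println(takeBatch(sizes, 1000L, 3).size()); // 3: count-limited
  }
}
{code}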
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)