Posted to issues@hbase.apache.org by "Laxman (Commented) (JIRA)" <ji...@apache.org> on 2012/03/12 16:16:37 UTC
[jira] [Commented] (HBASE-5564) Bulkload is discarding duplicate records
[ https://issues.apache.org/jira/browse/HBASE-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227597#comment-13227597 ]
Laxman commented on HBASE-5564:
-------------------------------
I think this is a bug, not intentional behavior.
The use of TreeSet in the code snippet below causes the issue: KeyValue.COMPARATOR compares duplicate cells as equal, so TreeSet.add() silently drops all but the first of them.
PutSortReducer.reduce()
======================
TreeSet<KeyValue> map = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
long curSize = 0;
// stop at the end or the RAM threshold
while (iter.hasNext() && curSize < threshold) {
  Put p = iter.next();
  for (List<KeyValue> kvs : p.getFamilyMap().values()) {
    for (KeyValue kv : kvs) {
      // add() is a no-op for a KeyValue that compares equal to one
      // already in the set, so duplicate records are lost here
      map.add(kv);
      curSize += kv.getLength();
    }
  }
}
Changing this to a List and then sorting it explicitly will solve the issue.
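To illustrate the difference outside of HBase (this is a standalone sketch, not HBase code -- the String records and the key-only comparator below just stand in for KeyValue and KeyValue.COMPARATOR), a TreeSet built on a comparator that treats two distinct records as equal keeps only the first one, while a List sorted with the same comparator keeps both:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class DedupDemo {
    // Compare only by the part before ':' -- records with the same key
    // but different values compare as equal, like duplicate KeyValues
    // under KeyValue.COMPARATOR.
    static final Comparator<String> BY_KEY =
        Comparator.comparing(s -> s.split(":", 2)[0]);

    // Current PutSortReducer approach: TreeSet drops duplicates.
    static int treeSetCount(List<String> records) {
        TreeSet<String> set = new TreeSet<>(BY_KEY);
        set.addAll(records);
        return set.size();
    }

    // Proposed fix: List plus an explicit sort keeps every record.
    static int sortedListCount(List<String> records) {
        List<String> list = new ArrayList<>(records);
        list.sort(BY_KEY);
        return list.size();
    }

    public static void main(String[] args) {
        List<String> records = Arrays.asList("row1:v1", "row1:v2", "row2:v1");
        System.out.println(treeSetCount(records));    // 2 -- "row1:v2" discarded
        System.out.println(sortedListCount(records)); // 3 -- all records survive
    }
}
```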
> Bulkload is discarding duplicate records
> ----------------------------------------
>
> Key: HBASE-5564
> URL: https://issues.apache.org/jira/browse/HBASE-5564
> Project: HBase
> Issue Type: Bug
> Components: mapreduce
> Affects Versions: 0.90.7, 0.92.2, 0.94.0, 0.96.0
> Environment: HBase 0.92
> Reporter: Laxman
> Assignee: Laxman
> Labels: bulkloader
>
> Duplicate records are getting discarded when they exist in the same input file, and more specifically in the same split.
> Duplicate records are retained only if they come from different splits.
> Version under test: HBase 0.92