Posted to dev@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2009/09/01 21:56:32 UTC
[jira] Resolved: (HBASE-1784) Missing rows after medium intensity insert
[ https://issues.apache.org/jira/browse/HBASE-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack resolved HBASE-1784.
--------------------------
Resolution: Fixed
Assignee: stack
Hadoop Flags: [Reviewed]
Committed to branch and trunk. Thanks for the testing and review, Mathias.
> Missing rows after medium intensity insert
> ------------------------------------------
>
> Key: HBASE-1784
> URL: https://issues.apache.org/jira/browse/HBASE-1784
> Project: Hadoop HBase
> Issue Type: Bug
> Affects Versions: 0.20.0
> Reporter: Jean-Daniel Cryans
> Assignee: stack
> Priority: Blocker
> Fix For: 0.20.0
>
> Attachments: 1784-v2.patch, 1784.patch, DataLoad.java, dbl-assignment-20090831, double-assignment, HBASE-1784-StoreFileScanner-hack.patch, HBASE-1784.log, META.log, post-1784v2.log, processSplitRegion-check-regionIsOpening.patch
>
>
> This bug was uncovered by Mathias in his mail "Issue on data load with 0.20.0-rc2". Basically, somehow, after a medium-intensity insert a lot of rows go missing. Easy way to reproduce: PE. Doing a PE scan or randomRead afterwards won't uncover anything, since neither checks for null rows. Simply do a count in the shell; it's easy to test (I increased my scanner caching in the shell to make it faster).
> I tested some light insertions with force flush/compact/split in the shell and it doesn't break.
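> A rough sketch of the shell check described above (the table name 'TestTable' is the default created by PE; the CACHE option shown here is from later HBase shell versions and the value 1000 is an assumption, so on 0.20 scanner caching would need to be raised another way):

```
hbase(main):001:0> count 'TestTable', CACHE => 1000
```

> Unlike a PE scan or randomRead, count walks every row, so a shortfall against the number of rows PE inserted exposes the missing data.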
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.