Posted to dev@hbase.apache.org by "Jonathan Gray (JIRA)" <ji...@apache.org> on 2009/08/24 20:07:59 UTC
[jira] Commented: (HBASE-1784) Missing rows after medium intensity insert
[ https://issues.apache.org/jira/browse/HBASE-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12746983#action_12746983 ]
Jonathan Gray commented on HBASE-1784:
--------------------------------------
Current thinking is that this is caused by one or more failed compactions. Mathias is turning on DEBUG logging and running smaller jobs, increasing the load each run, until the missing rows are triggered.
> Missing rows after medium intensity insert
> ------------------------------------------
>
> Key: HBASE-1784
> URL: https://issues.apache.org/jira/browse/HBASE-1784
> Project: Hadoop HBase
> Issue Type: Bug
> Affects Versions: 0.20.0
> Reporter: Jean-Daniel Cryans
> Priority: Blocker
> Attachments: DataLoad.java
>
>
> This bug was uncovered by Mathias in his mail "Issue on data load with 0.20.0-rc2". Basically, somehow, after a medium-intensity insert a lot of rows go missing. Easy way to reproduce: PE (PerformanceEvaluation). Doing a PE scan or randomRead afterwards won't uncover anything, since those don't complain about null rows. Simply do a count in the shell; it's easy to test (I increased my scanner caching in the shell to make it faster).
> I tested some light insertions with forced flush/compact/split in the shell and it doesn't break.
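The reproduction steps quoted above can be sketched as a command-line/HBase shell session. This is a hedged sketch only: it assumes a running HBase 0.20 cluster, and 'TestTable' is the default table name created by the PE tool; exact shell option syntax may differ in 0.20 (the reporter notes he had to change scanner caching in the shell itself).

```
# Sketch only -- requires a running HBase 0.20 cluster.
# 'TestTable' is the default table created by PerformanceEvaluation (PE).

# 1. Load data at medium intensity with the PE tool:
$ hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1

# 2. Count rows from the shell. A PE scan or randomRead will not reveal
#    the problem because it does not check for null rows; count will.
$ hbase shell
hbase> count 'TestTable'

# 3. The light-insert case that did NOT break: forced flush/compact/split.
hbase> flush 'TestTable'
hbase> compact 'TestTable'
hbase> split 'TestTable'
```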
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.