Posted to issues@hbase.apache.org by "Jonathan Hsieh (JIRA)" <ji...@apache.org> on 2013/12/05 07:02:35 UTC
[jira] [Commented] (HBASE-10079) Race in TableName cache
[ https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13839847#comment-13839847 ]
Jonathan Hsieh commented on HBASE-10079:
----------------------------------------
Rig came back clean. Committing to 0.96/0.98/0.99. Not relevant to 0.94.
Thanks to those who took a look.
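The thread itself doesn't spell out the shape of the race; the actual fix is in the attached patches (10079.v1.patch, hbase-10079.v2.patch). A common shape for a cache race like this is an unsynchronized check-then-put, where two threads both miss and hand out different instances. As a minimal, hypothetical sketch (all names here are illustrative stand-ins, not the real HBase TableName code), the atomic putIfAbsent pattern guarantees every caller sees one canonical cached instance:

```java
import java.util.*;
import java.util.concurrent.*;

public class NameCacheRaceSketch {
    // Stand-in for the TableName cache; String stands in for TableName.
    static final ConcurrentMap<String, String> CACHE = new ConcurrentHashMap<>();

    // putIfAbsent makes check-then-put atomic: several threads may build a
    // candidate, but exactly one wins, and every caller gets that winner.
    static String intern(String name) {
        String candidate = new String(name);  // stand-in for constructing a TableName
        String prior = CACHE.putIfAbsent(name, candidate);
        return prior != null ? prior : candidate;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<String>> results = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            results.add(pool.submit(() -> intern("usertable")));
        }
        // Count distinct instances by identity, not equals(): with the atomic
        // pattern all 100 lookups must resolve to the same object.
        Set<String> identities = Collections.newSetFromMap(new IdentityHashMap<>());
        for (Future<String> f : results) {
            identities.add(f.get());
        }
        pool.shutdown();
        System.out.println("distinct instances: " + identities.size());
    }
}
```

With a plain HashMap and a separate containsKey/put, the identity count can exceed 1 under contention; the putIfAbsent version is deterministic.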
> Race in TableName cache
> -----------------------
>
> Key: HBASE-10079
> URL: https://issues.apache.org/jira/browse/HBASE-10079
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 0.96.1
> Reporter: Jonathan Hsieh
> Assignee: Jonathan Hsieh
> Priority: Blocker
> Fix For: 0.98.0, 0.96.1, 0.99.0
>
> Attachments: 10079.v1.patch, hbase-10079.v2.patch
>
>
> Testing 0.96.1rc1.
> With one process incrementing a single column of a single row in a table, we flush the table or kill/kill -9 the region server, and data is lost. The flush and kill cases are likely the same problem (a kill triggers a flush); kill -9 may or may not share the same root cause.
> 5 nodes
> Hadoop 2.1.0 (a pre-CDH5b1 HDFS).
> HBase 0.96.1 RC1
> Test: 250000 increments on a single row and single column with a varying number of client threads (IncrementBlaster). Verify a count of 250000 after the run (IncrementVerifier).
> Run 1: no fault injection. 5 runs, count = 250000 on every run. Correctness verified. 1638 inc/s throughput.
> Run 2: flush of the table with the incrementing row. count = 246875 != 250000. Correctness failed. 1517 inc/s throughput.
> Run 3: kill of the RS hosting the incremented row. count = 243750 != 250000. Correctness failed. 1451 inc/s throughput.
> Run 4: one kill -9 of the RS hosting the incremented row. count = 246878 != 250000. Correctness failed. 1395 inc/s (including recovery).
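IncrementBlaster and IncrementVerifier themselves are not attached to this thread. As a rough local illustration of the blast-then-verify shape they describe (the thread counts and an AtomicLong are stand-ins here; the real test increments one HBase cell and reads it back):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class IncrementSketch {
    public static void main(String[] args) throws Exception {
        final int THREADS = 10;
        final int PER_THREAD = 25_000;            // 10 * 25000 = 250000, as in the runs above
        final AtomicLong counter = new AtomicLong();  // stands in for the single row/column

        // Blast phase: all threads hammer the same counter concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                for (int i = 0; i < PER_THREAD; i++) {
                    counter.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // Verify phase: anything below 250000 means increments were lost,
        // which is what the flush/kill runs above observed against HBase.
        System.out.println("count = " + counter.get());
    }
}
```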
--
This message was sent by Atlassian JIRA
(v6.1#6144)