Posted to dev@hbase.apache.org by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org> on 2008/10/21 22:20:44 UTC

[jira] Commented: (HBASE-946) Row with 55k deletes timesout scanner lease

    [ https://issues.apache.org/jira/browse/HBASE-946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641570#action_12641570 ] 

Jean-Daniel Cryans commented on HBASE-946:
------------------------------------------

The issue is easy to recreate. In the shell:

{code}
(1..500000).each do |i|
  put 't1', 'r1', 'f1:', 'value'
  delete 't1', 'r1', 'f1:'
end
{code}

Then see the speed of that random read:

{code}
hbase(main):017:0> put 't1', 'r1', 'f1:', 'value'
0 row(s) in 0.0020 seconds
hbase(main):018:0> get 't1', 'r1'                
COLUMN                       CELL                                                                             
 f1:                         timestamp=1224620378984, value=value                                             
1 row(s) in 0.1090 seconds
{code}

0.1 seconds instead of 0.01.

> Row with 55k deletes timesout scanner lease
> -------------------------------------------
>
>                 Key: HBASE-946
>                 URL: https://issues.apache.org/jira/browse/HBASE-946
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>            Priority: Blocker
>             Fix For: 0.18.1, 0.19.0
>
>
> Made a blocker because it was found by Jon Gray (smile)
> So, Jon Gray has a row with 55k deletes in it.  When he tries to scan, his scanner times out when it gets to this row.  The root cause is the mechanism we use to make sure a delete in a new store file overshadows an entry at the same address in an old file.  We accumulate a List of all deletes encountered.  Before adding a delete to the List, we check whether it has already been added.  This check is what's killing us.  One issue is that it does a super-inefficient check of whether the table is the root table, but even after fixing that inefficiency -- and then removing the root check entirely, since it's redundant -- we're still too slow.
> Chatting with Jim K, he suggested that the ArrayList check is linear.  Changing the aggregation of deletes to use a HashSet instead makes everything run an order of magnitude faster (see the sketch below).
> Also as part of this issue, we need to figure out why compaction is not letting go of these deletes.
> Filing this issue against 0.18.1 so it gets into the RC2 (after chatting w/ J-D and JK -- J-D is seeing the issue also).
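
As an aside on the data-structure point in the description above, here is a minimal standalone sketch (plain Java, not the actual HBase scanner code; the class name, key format, and timing harness are made up purely for illustration) of why tracking ~55k deletes in an ArrayList turns quadratic while a HashSet keeps the same bookkeeping roughly linear:

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeleteTrackingSketch {
    public static void main(String[] args) {
        int numDeletes = 55_000;           // roughly the size of the problem row

        // List-based tracking: contains() scans the whole list, so the loop
        // below does O(n^2) work over the deletes in the row.
        List<String> deletesList = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < numDeletes; i++) {
            String key = "r1/f1:/" + i;    // stand-in for a cell address
            if (!deletesList.contains(key)) {
                deletesList.add(key);
            }
        }
        System.out.printf("ArrayList: %d ms%n", (System.nanoTime() - start) / 1_000_000);

        // Set-based tracking: contains()/add() are O(1) on average, so the
        // same work stays linear in the number of deletes.
        Set<String> deletesSet = new HashSet<>();
        start = System.nanoTime();
        for (int i = 0; i < numDeletes; i++) {
            String key = "r1/f1:/" + i;
            deletesSet.add(key);           // add() already skips duplicates
        }
        System.out.printf("HashSet:   %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}
{code}

HashSet.add() refuses duplicates on its own, so the explicit membership check disappears entirely; with 55k deletes the List version does on the order of a billion element comparisons, which fits the order-of-magnitude speedup reported for the HashSet change.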

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.