Posted to common-issues@hadoop.apache.org by "Jing Zhao (JIRA)" <ji...@apache.org> on 2013/09/04 22:00:53 UTC
[jira] [Commented] (HADOOP-9932) Improper synchronization in RetryCache
[ https://issues.apache.org/jira/browse/HADOOP-9932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13758272#comment-13758272 ]
Jing Zhao commented on HADOOP-9932:
-----------------------------------
+1 as well. Thanks for the fix Kihwal!
> Improper synchronization in RetryCache
> --------------------------------------
>
> Key: HADOOP-9932
> URL: https://issues.apache.org/jira/browse/HADOOP-9932
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Priority: Blocker
> Attachments: HADOOP-9932.patch, HADOOP-9932.patch
>
>
> In LightWeightCache#evictExpiredEntries(), the precondition check can fail. [~patwhitey2007] ran an HA failover test and it occurred while the SBN was catching up with edits during a transition to active. This caused the NN to terminate.
> Here is my theory: if an RPC handler calls waitForCompletion() and it happens to remove the head of the queue in get(), it will race with evictExpiredEntries() from put().
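The interleaving described in the theory above can be sketched with a deliberately simplified, hypothetical model of a queue-backed cache (the class and method names here are illustrative, not Hadoop's actual LightWeightCache API). Eviction observes the head of the queue, a concurrent get() removes that same head, and eviction's subsequent removal then takes a different entry, so its precondition check fails:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical, simplified model of a cache whose entries live in an
// eviction-ordered queue. Not Hadoop code; a sketch of the race only.
class SketchCache {
    private final Deque<String> queue = new ArrayDeque<>();

    void put(String e) { queue.addLast(e); }

    // Models get() in waitForCompletion(): it may remove the head.
    String removeHead() { return queue.pollFirst(); }

    String peekHead() { return queue.peekFirst(); }

    // Models eviction: remove the head and check it is the entry the
    // caller observed earlier -- the precondition that fails under the race.
    boolean tryEvict(String observedHead) {
        String removed = queue.pollFirst();
        return observedHead != null && observedHead.equals(removed);
    }
}

public class RaceSketch {
    public static void main(String[] args) {
        SketchCache cache = new SketchCache();
        cache.put("entry-1");
        cache.put("entry-2");

        // Replay the racy interleaving sequentially:
        String observed = cache.peekHead();    // eviction reads head: entry-1
        cache.removeHead();                    // concurrent get() removes entry-1
        boolean ok = cache.tryEvict(observed); // eviction now removes entry-2

        // The precondition no longer holds: the entry eviction removed is
        // not the one it observed, which in the real cache terminates the NN.
        System.out.println(ok ? "precondition held" : "precondition failed");
    }
}
```

In the real cache the cure is to make the head-removing get() and the eviction in put() mutually exclusive under the same lock, which is presumably what the attached patch does.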
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira