Posted to hdfs-dev@hadoop.apache.org by "Vinay (JIRA)" <ji...@apache.org> on 2013/10/04 07:15:41 UTC

[jira] [Created] (HDFS-5299) DFS client hangs in updatePipeline RPC when failover happened

Vinay created HDFS-5299:
---------------------------

             Summary: DFS client hangs in updatePipeline RPC when failover happened
                 Key: HDFS-5299
                 URL: https://issues.apache.org/jira/browse/HDFS-5299
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 2.1.0-beta, 3.0.0
            Reporter: Vinay
            Assignee: Vinay
            Priority: Blocker


The DFSClient hung in an updatePipeline call to the NameNode when a failover happened at exactly the same time.


When we dug into it, the issue turned out to be in the handling of the RetryCache in updatePipeline.

Here are the steps:
1. The client was writing slowly.
2. One of the datanodes went down, and updatePipeline was called on the active NameNode (ANN).
3. The call reached the ANN, but the ANN was shut down while processing the updatePipeline call.
4. The client retried (since the API is marked @AtMostOnce) against the other NameNode, which was still in STANDBY state, so the call failed with a StandbyException.
5. One more client failover happened.
6. The standby NameNode (SNN) became active.
7. The client called the current ANN again for updatePipeline.

This time the client call hung in the NN, waiting for the cached call with the same call ID to complete. But that cached call had already finished the previous time with a StandbyException, and its result was never recorded in the cache entry, so the wait could never be satisfied.
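To make the hang concrete, here is a minimal, self-contained sketch of the failure mode. This is a simplified model, not the real org.apache.hadoop.ipc.RetryCache; all class and method names are illustrative. The first attempt creates a cache entry and then throws before ever marking the entry complete, so a retry with the same call ID blocks forever:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified model of the NameNode retry cache -- illustrative only,
// not the real org.apache.hadoop.ipc.RetryCache.
public class RetryCacheHangDemo {

  static class CacheEntry {
    private boolean completed = false;
    private boolean success = false;

    // A retried call with the same call id waits here until the
    // first attempt marks the entry as completed.
    synchronized void waitForCompletion() throws InterruptedException {
      while (!completed) {
        wait();
      }
    }

    // Record the outcome and wake up any waiting retries.
    synchronized void setState(boolean success) {
      this.completed = true;
      this.success = success;
      notifyAll();
    }

    synchronized boolean isSuccess() {
      return success;
    }
  }

  static final Map<Long, CacheEntry> cache = new HashMap<>();

  // Buggy handler: a failure path (here, a StandbyException) leaves the
  // cache entry in place but never calls setState, so it never completes.
  static void buggyUpdatePipeline(long callId) throws Exception {
    CacheEntry entry;
    boolean isRetry;
    synchronized (cache) {
      entry = cache.get(callId);
      isRetry = (entry != null);
      if (!isRetry) {
        cache.put(callId, new CacheEntry());
      }
    }
    if (isRetry) {
      entry.waitForCompletion(); // blocks forever: no one ever calls setState
      return;
    }
    // First attempt: fails before the entry's state is recorded.
    throw new Exception("StandbyException: operation not allowed in standby state");
  }

  public static void main(String[] args) throws Exception {
    try {
      buggyUpdatePipeline(42);   // the attempt that hits the standby NN
    } catch (Exception e) {
      System.out.println("first attempt failed: " + e.getMessage());
    }
    System.out.println("retrying with the same call id (this hangs)...");
    buggyUpdatePipeline(42);     // waits forever on the stale cache entry
  }
}
{code}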

Conclusion:
Whenever a new entry is added to the retry cache, the result of the call must be recorded in that entry before the call returns or throws an exception. I can see a similar issue in multiple RPCs in FSNamesystem.
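
Here is a sketch of the suggested fix, reusing the CacheEntry model from the sketch above (again illustrative; the real fix would go through HDFS's retry cache machinery in FSNamesystem, and doUpdatePipeline is a placeholder): a finally block records the outcome on every exit path, so a retry never waits on an entry that can no longer complete.

{code:java}
// Fixed handler, reusing CacheEntry and cache from the sketch above:
// the finally block records the call's outcome on every exit path,
// so a retry never waits on an entry that can no longer complete.
static void fixedUpdatePipeline(long callId) throws Exception {
  CacheEntry entry;
  boolean isRetry;
  synchronized (cache) {
    entry = cache.get(callId);
    isRetry = (entry != null);
    if (!isRetry) {
      entry = new CacheEntry();
      cache.put(callId, entry);
    }
  }
  if (isRetry) {
    entry.waitForCompletion();  // always returns: the outcome is always recorded
    if (entry.isSuccess()) {
      return;                   // previous attempt succeeded; nothing to redo
    }
    throw new Exception("previous attempt failed");
  }
  boolean success = false;
  try {
    doUpdatePipeline();         // the actual RPC work; may throw
    success = true;
  } finally {
    entry.setState(success);    // record the result before returning or throwing
  }
}

// Placeholder for the real pipeline-update work.
static void doUpdatePipeline() throws Exception {
  // ... actual block/pipeline bookkeeping would go here ...
}
{code}

With this pattern, the StandbyException in step 4 would have marked the cache entry as failed before propagating, and the retry in step 7 would have returned promptly instead of hanging.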



--
This message was sent by Atlassian JIRA
(v6.1#6144)