Posted to mapreduce-issues@hadoop.apache.org by "Jason Lowe (JIRA)" <ji...@apache.org> on 2012/11/14 03:26:12 UTC

[jira] [Commented] (MAPREDUCE-4797) LocalContainerAllocator can loop forever trying to contact the RM

    [ https://issues.apache.org/jira/browse/MAPREDUCE-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496769#comment-13496769 ] 

Jason Lowe commented on MAPREDUCE-4797:
---------------------------------------

The code looks like it will only try to connect so many times before giving up, but there's a bug in LocalContainerAllocator.heartbeat:

{code:title=LocalContainerAllocator.heartbeat}
AllocateResponse allocateResponse = scheduler.allocate(allocateRequest);
AMResponse response;
try {
  response = allocateResponse.getAMResponse();
  // Reset retry count if no exception occurred.
  retrystartTime = System.currentTimeMillis();
} catch (Exception e) {
{code}

Note that the try block only surrounds the retrieval of the response *after* the {{allocate}} RPC call, so the exception from the RPC itself is thrown outside the try block and never reaches the retry-count logic here.  The exception then bubbles up to the RMCommunicator allocator thread, where, if it isn't a {{YarnException}}, the thread simply loops around and tries again, forever.
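
A minimal sketch of one possible fix, assuming the surrounding retry fields shown above (illustrative only, not the committed patch), would move the RPC itself inside the try block so a connection failure is counted against the retry window:

{code:title=LocalContainerAllocator.heartbeat (sketch of a possible fix)}
AMResponse response;
try {
  // Make the allocate RPC inside the try block so a connection failure
  // is caught here and handled by the retry-count logic below.
  AllocateResponse allocateResponse = scheduler.allocate(allocateRequest);
  response = allocateResponse.getAMResponse();
  // Reset retry count if no exception occurred.
  retrystartTime = System.currentTimeMillis();
} catch (Exception e) {
  // Existing catch body (elided above): once the retry window has elapsed,
  // fail the job instead of letting the exception escape and loop forever.
  ...
}
{code}

With the call inside the try block, a non-YarnException from the RPC no longer escapes heartbeat(), so the RMCommunicator allocator thread is not left retrying indefinitely.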
                
> LocalContainerAllocator can loop forever trying to contact the RM
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-4797
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4797
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster
>    Affects Versions: 0.23.3, 2.0.1-alpha
>            Reporter: Jason Lowe
>
> If LocalContainerAllocator has trouble communicating with the RM it can end up retrying forever if the nature of the error is not a YarnException.
> This can be particularly bad if the connection went down because the cluster was reset, such that the RM and NM have lost track of the process and nothing will ever kill it.  In this scenario the looping AM continues to pelt the RM with connection requests every second using a stale token, and the RM logs the SASL exceptions over and over.
