Posted to common-issues@hadoop.apache.org by "Ayush Saxena (JIRA)" <ji...@apache.org> on 2019/06/20 18:26:00 UTC

[jira] [Comment Edited] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

    [ https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868821#comment-16868821 ] 

Ayush Saxena edited comment on HADOOP-16385 at 6/20/19 6:25 PM:
----------------------------------------------------------------

HADOOP-16028 can be cherry-picked directly to 3.1; it doesn't require a separate patch. I will ping there.
The 3.1.1 mentioned is our internal one; we have HADOOP-16028 in it. Will check once again tomorrow too.
If so, this shouldn't be the issue.


was (Author: ayushtkn):
HADOOP-16028 can be cherry-picked directly to 3.1; it doesn't require a separate patch. I will ping there.
The 3.1.1 mentioned is our internal one; we have HADOOP-16028 in it.
So this can't be the issue.

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-16385
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16385
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.1.1
>            Reporter: krishna reddy
>            Assignee: Ayush Saxena
>            Priority: Major
>         Attachments: HADOOP-16385.branch-3.1.001.patch
>
>
> *Description:* While removing dead nodes, the NameNode went down with the error "RedundancyMonitor thread received Runtime exception".
> *Environment:*
> Server OS: Ubuntu
>  No. of cluster nodes: 1 NN / 225 DNs / 3 ZK / 2 RM / 4850 NMs
> 240 machines in total; each machine runs 21 docker containers (1 DN & 20 NMs)
> *Steps:*
> 1. Total number of containers in running state: ~53000
> 2. Because of the load, machines were running out of memory; each such machine was restarted, and all its docker containers (NMs and DNs) were started again.
> 3. At some point, while removing a node, the NameNode threw the error below and went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be positive.
>         at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>         at org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
>         at org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
>         at org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
>         at org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
>         at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.IllegalArgumentException: 247 should >= 248, and both should be positive.
> 2019-06-19 05:54:07,298 INFO org.apache.hadoop.hdfs.server.common.HadoopAuditLogger.audit: process=Namenode     operation=shutdown      result=invoked
> 2019-06-19 05:54:07,298 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at namenode/255.255.182.104
> ************************************************************/
> {noformat}
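
For context, the stack trace ends in a Guava Preconditions.checkArgument inside NetworkTopology.chooseRandom, and the RedundancyMonitor treats the resulting IllegalArgumentException as fatal, which is why the NameNode exits right after (the ExitUtil and SHUTDOWN_MSG lines in the log). Below is a minimal, hypothetical sketch of a check of that shape; the class, method, and variable names are illustrative only, not the actual HDFS source. It shows how two node counts that drift apart while dead nodes are being removed concurrently produce exactly the "247 should >= 248" message:

{noformat}
// Illustrative sketch only (hypothetical names, not the real HDFS code):
// a Guava precondition of this shape throws the IllegalArgumentException
// seen in the log when two node counters are read without a consistent
// snapshot while datanodes are being removed from the topology.
import com.google.common.base.Preconditions;

public class ChooseRandomSketch {

  // Hypothetical stand-in for the counters consulted when choosing a random target node.
  static void checkCounts(int totalInScopeNodes, int availableNodes) {
    // Same message template as in the log: "<a> should >= <b>, and both should be positive."
    Preconditions.checkArgument(
        totalInScopeNodes >= availableNodes && availableNodes >= 0,
        "%s should >= %s, and both should be positive.",
        totalInScopeNodes, availableNodes);
  }

  public static void main(String[] args) {
    // If a datanode disappears between the two reads, the "total" count can
    // lag behind the "available" count and the precondition fails:
    checkCounts(247, 248);
    // -> java.lang.IllegalArgumentException: 247 should >= 248, and both should be positive.
  }
}
{noformat}

Because the exception escapes the RedundancyMonitor loop as a RuntimeException, the monitor shuts the whole NameNode down rather than retrying, which matches the "Exiting with status 1" line above.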


