Posted to hdfs-dev@hadoop.apache.org by "Harsh J (Resolved) (JIRA)" <ji...@apache.org> on 2011/12/29 15:57:30 UTC
[jira] [Resolved] (HDFS-47) dead datanodes because of OutOfMemoryError
[ https://issues.apache.org/jira/browse/HDFS-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Harsh J resolved HDFS-47.
-------------------------
Resolution: Not A Problem
This has gone stale. FWIW, I haven't seen DNs go OOM on their own in recent years. Probably a leak that has since been fixed?
Resolving as Not a Problem (anymore).
> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
> Key: HDFS-47
> URL: https://issues.apache.org/jira/browse/HDFS-47
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "org.apache.hadoop.dfs.DataBlockScanner@18166e5" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space
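When a DataNode dies with `java.lang.OutOfMemoryError: Java heap space` as above, the usual first mitigation is to give the DataNode JVM a larger heap. A minimal sketch, assuming a `hadoop-env.sh` from that era; the `-Xmx` value is illustrative only, not a recommendation, and should be sized to the node's block count and available RAM:

```shell
# hadoop-env.sh -- raise the heap for the DataNode JVM only,
# leaving other daemons (NameNode, JobTracker, etc.) untouched.
# The 2048m figure is an assumption for illustration.
export HADOOP_DATANODE_OPTS="-Xmx2048m $HADOOP_DATANODE_OPTS"
```

Setting `HADOOP_DATANODE_OPTS` (rather than the global `HADOOP_HEAPSIZE`) scopes the change to the DataNode process; if the OOMs persist after a reasonable heap bump, a leak is the more likely culprit.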