Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2006/06/01 04:40:29 UTC
[jira] Created: (HADOOP-267) Data node should shutdown when a "critical" error is returned by the name node
Data node should shutdown when a "critical" error is returned by the name node
------------------------------------------------------------------------------
Key: HADOOP-267
URL: http://issues.apache.org/jira/browse/HADOOP-267
Project: Hadoop
Type: Bug
Components: dfs
Reporter: Konstantin Shvachko
Priority: Minor
Currently the data node does not distinguish between critical and non-critical exceptions.
Any exception is treated as a signal to sleep and then try again; see
org.apache.hadoop.dfs.DataNode.run()
This happens because RPC always throws the same RemoteException.
In some cases (such as UnregisteredDatanodeException or IncorrectVersionException) the data
node should shut down rather than retry.
This logic naturally belongs in
org.apache.hadoop.dfs.DataNode.offerService()
but can only be implemented cleanly (without examining the RemoteException.className
field) after HADOOP-266 (2) is fixed.
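A minimal sketch of the proposed decision, assuming HADOOP-266 lets the client see typed exceptions instead of a bare RemoteException. The stub exception classes below are stand-ins for the real org.apache.hadoop.dfs types, not the actual Hadoop code:

```java
import java.io.IOException;

public class CriticalErrorSketch {
    // Hypothetical stand-ins for the real Hadoop exception classes.
    static class UnregisteredDatanodeException extends IOException {}
    static class IncorrectVersionException extends IOException {}

    // In offerService(), a critical exception would trigger shutdown;
    // anything else keeps the existing sleep-and-retry behavior.
    static boolean shouldShutdown(IOException e) {
        return e instanceof UnregisteredDatanodeException
            || e instanceof IncorrectVersionException;
    }

    public static void main(String[] args) {
        // Critical: the name node no longer recognizes this data node.
        System.out.println(shouldShutdown(new UnregisteredDatanodeException())); // true
        // Non-critical: e.g. a transient communication failure -> retry.
        System.out.println(shouldShutdown(new IOException("transient")));        // false
    }
}
```

Without typed exceptions, the same decision would require string-matching on RemoteException.className, which is exactly what the issue argues against.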