Posted to common-user@hadoop.apache.org by Tali K <nc...@hotmail.com> on 2010/12/07 19:40:16 UTC

Help: 1) Hadoop processes are still running after we stopped Hadoop. 2) How to exclude a dead node?

1) When we stopped Hadoop, we checked all the nodes and found that 2 or 3 Java/Hadoop processes were still running on each node. So we went to each node and did a 'killall java' - in some cases we had to do 'killall -9 java'.
My question: why is this happening, and what would you recommend to make sure that there are no Hadoop processes left running after I stop Hadoop with stop-all.sh?
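
For reference, what we ran on each node looked roughly like this (the grep check is just how we spotted the leftover processes; exact output varied per node):

    ps aux | grep -i hadoop    # still showed 2-3 java processes per node
    killall java               # sometimes not enough
    killall -9 java            # needed on some nodes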
 
2) Also, we have a dead node. We removed this node from $HADOOP_HOME/conf/slaves, the file that is supposed to tell the namenode which machines should be datanodes/tasktrackers.
We started Hadoop again and were surprised to still see the dead node in the Hadoop report ("$HADOOP_HOME/bin/hadoop dfsadmin -report | less").
Only after blocking the dead node and restarting Hadoop did it stop showing up in the report.
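
For completeness, the sequence we ran on the master was roughly this (paths are our installation's defaults):

    $HADOOP_HOME/bin/stop-all.sh
    # edit $HADOOP_HOME/conf/slaves and delete the dead host's line
    $HADOOP_HOME/bin/start-all.sh
    $HADOOP_HOME/bin/hadoop dfsadmin -report | less    # dead node still listed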
Any recommendations on how to deal with dead nodes?
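
Is the exclude-file route what we should be using instead? If I understand the decommissioning mechanism correctly, it would be something like the following (the exclude file path here is just an example):

    <!-- in conf/hdfs-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/home/hadoop/conf/excludes</value>
    </property>

    # list the dead host in that file, then, without a full restart:
    $HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes

Or is editing conf/slaves alone supposed to be enough?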