Posted to mapreduce-user@hadoop.apache.org by Ulul <ha...@ulul.org> on 2015/03/01 13:12:11 UTC

Re: Hadoop 2.6.0 - No DataNode to stop

Hi

Did you check that your slaves file is correct?
That the datanode process is actually running?
Did you check its log file?
That the datanode is available? (dfsadmin -report, or through the web UI)
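
Something along these lines (a rough sketch; the hdfs service user and the
default log location under $HADOOP_PREFIX/logs are assumptions, adjust them
to your setup):

  cat $HADOOP_CONF_DIR/slaves
  ps -fu hdfs | grep -i datanode        # is the process running at all?
  less $HADOOP_PREFIX/logs/hadoop-hdfs-datanode-*.log
  hdfs dfsadmin -report                 # does the NameNode see the DataNode?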

We need more detail

Ulul

On 28/02/2015 22:05, Daniel Klinger wrote:
> Thanks, but I know how to kill a process in Linux. That doesn’t answer the question, though: why does the command say "No DataNode to stop" instead of stopping the DataNode?
>   
> $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
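>
> (As far as I can tell, that message comes from the PID-file check in
> sbin/hadoop-daemon.sh; roughly paraphrased, not the verbatim 2.6.0 script:
>
>     pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-datanode.pid
>     if [ -f "$pid" ] && kill -0 "$(cat "$pid")" 2>/dev/null; then
>       kill "$(cat "$pid")"          # stop the datanode it started
>     else
>       echo "no datanode to stop"    # PID file missing/stale or process not signalable
>     fi
>
> So the script only stops what it can find through that PID file.)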
>   
>   
>
> *From:* Surbhi Gupta [mailto:surbhi.gupta01@gmail.com]
> *Sent:* Saturday, 28 February 2015 20:16
> *To:* user@hadoop.apache.org
> *Subject:* Re: Hadoop 2.6.0 - No DataNode to stop
>
> Issue jps and get the process id of the datanode.
>
> Or issue ps -fu <userid> for the user the datanode is running as.
>
> Then kill the process using kill -9.
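>
> For example (a quick sketch; "hdfs" is just an assumed service user, and the
> PID is whatever the commands above report, so adjust both to your setup):
>
>     jps | grep DataNode
>     ps -fu hdfs | grep -i datanode
>     kill -9 <datanode-pid>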
>
> On 28 Feb 2015 09:38, "Daniel Klinger" <dk@web-computing.de> wrote:
>
>     Hello,
>
>     I have used a lot of Hadoop distributions. Now I’m trying to install
>     a vanilla Hadoop on a little "cluster" for testing (2 CentOS VMs: one
>     Name+DataNode, one DataNode). I followed the instructions on the
>     documentation site:
>     http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html.
>
>     I start the cluster as described in the chapter "Operating the
>     Hadoop Cluster" (with different users). The start-up works fine:
>     the PID files are created in /var/run, you can see folders and
>     files being created in the DataNode and NameNode directories, and
>     there are no errors in the log files.
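>
>     The start commands are essentially the ones from that page (run as
>     hdfs for the HDFS daemons and as yarn for the YARN daemons; paths
>     depend on HADOOP_PREFIX, HADOOP_YARN_HOME and HADOOP_CONF_DIR):
>
>     $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
>     $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
>     $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
>     $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager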
>
>     When I stop the cluster, all services stop as expected (NameNode,
>     ResourceManager, etc.). But when I stop the DataNodes, I get the
>     message "No DataNode to stop". The PID file and the in_use.lock
>     file are still there, and if I try to start the DataNode again I
>     get the error that the process is already running. When I stop the
>     DataNode as hdfs instead of root, the PID and in_use.lock files are
>     removed, but I still get the message "No DataNode to stop".
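>
>     To show what I mean: the stop script compares a PID file against the
>     running process, so I can check them side by side (the path and file
>     name below are only an assumed example; they depend on HADOOP_PID_DIR
>     and on the user that started the daemon):
>
>     cat /var/run/hadoop/hadoop-hdfs-datanode.pid
>     ps -p $(cat /var/run/hadoop/hadoop-hdfs-datanode.pid)
>     ls -l /var/run/hadoop/      # who owns the PID files?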
>
>     What am I doing wrong?
>
>     Greets
>
>     dk
>