Posted to user@hadoop.apache.org by Daniel Klinger <dk...@web-computing.de> on 2015/02/28 18:37:10 UTC

Hadoop 2.6.0 - No DataNode to stop

Hello,

 

I have used a number of Hadoop distributions. Now I'm trying to install plain
Hadoop on a small test "cluster" (2 CentOS VMs: one running NameNode +
DataNode, one running only a DataNode). I followed the instructions on the
documentation site:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

 

I'm starting the cluster as described in the chapter "Operating the Hadoop
Cluster" (with separate users per daemon). Startup works fine: the PID files
are created in /var/run, you can see folders and files being created in the
DataNode and NameNode directories, and the log files show no errors.

 

When I stop the cluster, all services shut down cleanly (NameNode,
ResourceManager, etc.) except the DataNodes, which report "No DataNode to
stop". The PID file and the in_use.lock file are still there, and if I try
to start the DataNode again I get an error saying the process is already
running. When I stop the DataNode as the hdfs user instead of root, the PID
and in_use.lock files are removed, but I still get the message: "No DataNode
to stop".

 

What am I doing wrong?

 

Greets

dk


Re: Hadoop 2.6.0 - No DataNode to stop

Posted by Surbhi Gupta <su...@gmail.com>.
Run jps to get the DataNode's process id, or

issue ps -fu <userid> (with the user the DataNode is running as) to find it
in the process list.

Then kill the process using kill -9.
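As a sketch, the steps above look like this (the hdfs user and the pid
placeholder are assumptions; substitute your own):

```shell
# Find the DataNode's process id. jps (ships with the JDK) lists
# running Java processes by class name; if jps is not on the PATH,
# ps -fu <user> shows every process owned by the user the DataNode
# runs as.
jps 2>/dev/null | grep DataNode || true
ps -fu "$(whoami)"

# Then force-kill it, substituting the real pid for the placeholder,
# and remove the leftover PID file under /var/run by hand (its exact
# name depends on which user started the daemon):
#   kill -9 <pid>
#   rm /var/run/<datanode pid file>
```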
