Posted to user@nutch.apache.org by Ilya Vishnevsky <Il...@e-legion.com> on 2007/06/07 14:32:36 UTC

Why datanode does not work properly on slave?

 Hello! I'm deploying Nutch on two computers. When I run the
start-all.sh script everything goes well, but the datanode on the slave
computer does not log anything. All other parts of Hadoop (the
namenode, the jobtracker, both tasktrackers, and the datanode on the
master) log their information properly.
 Also, when I put files from the local file system into the Hadoop fs,
they end up only in the master's data folder; the slave's data folder
stays empty.
 At the same time, when I run the stop-all.sh script I get a message
that the slave's datanode is being stopped, so it must have been
running before.
 Do you know what may cause this problem?
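
Since stop-all.sh does report a datanode being stopped on the slave, the
process is clearly being launched there. One common reason for it then
staying silent (an assumption to verify, not a diagnosis) is that the
slave's configuration points the datanode at localhost rather than at the
master, so it never registers with the namenode. In the Hadoop 0.x layout
bundled with Nutch at the time, the relevant setting is fs.default.name in
conf/hadoop-site.xml; it should be identical on every node and use a
hostname the slave can resolve ("master-host" below is a placeholder):

```xml
<!-- conf/hadoop-site.xml on every node (Hadoop 0.x, as bundled with Nutch) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- The master's real hostname, reachable from the slave.
         "localhost" here would make the slave's datanode try to
         register with itself instead of with the namenode. -->
    <value>master-host:9000</value>
  </property>
</configuration>
```

It is also worth looking for a logs/hadoop-*-datanode-*.log file on the
slave itself: the datanode writes its log locally, not on the master.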

RE: Why datanode does not work properly on slave?

Posted by Ilya Vishnevsky <Il...@e-legion.com>.
I've just changed the dfs.replication property in hadoop-site.xml from
1 to 2. Now I get "ArithmeticException: / by zero" when I try to put
something into the DFS.
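
One plausible mechanism for that exception (an assumption; this is not
Hadoop's actual code) is that the namenode sees zero live datanodes, so
any per-node average it computes while choosing replication targets
divides by the node count. A minimal Python sketch of that failure mode:

```python
def average_block_load(live_datanode_loads):
    """Average blocks per datanode, given one load figure per live node.

    Illustrative sketch only (not Hadoop source): with zero registered
    datanodes the denominator is 0, the Python analogue of Java's
    "ArithmeticException: / by zero".
    """
    return sum(live_datanode_loads) / len(live_datanode_loads)

# Two live datanodes: fine.
print(average_block_load([4, 6]))   # 5.0

# No datanode ever registered: raises ZeroDivisionError.
try:
    average_block_load([])
except ZeroDivisionError:
    print("no live datanodes -> division by zero")
```

If that is what is happening, raising dfs.replication did not cause the
problem; it only surfaced the fact that the slave's datanode was never
registered with the namenode in the first place.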
 Also, I should mention that both nodes are running Windows.
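
For reference, the replication factor lives in conf/hadoop-site.xml; it
must not exceed the number of datanodes that are actually alive, so with
the slave's datanode still silent it is safer to set it back to 1 until
that node registers (file and property names as in Hadoop 0.x, which
Nutch bundled at the time):

```xml
<!-- conf/hadoop-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- Keep at 1 while only the master's datanode is alive;
         raise to 2 once the slave's datanode registers. -->
    <value>1</value>
  </property>
</configuration>
```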



-----Original Message-----
From: Ilya Vishnevsky [mailto:Ilya.Vishnevsky@e-legion.com] 
Sent: Thursday, June 07, 2007 4:33 PM
To: nutch-user@lucene.apache.org
Subject: Why datanode does not work properly on slave?
