Posted to common-user@hadoop.apache.org by Jeremy Chow <co...@gmail.com> on 2008/09/26 10:03:43 UTC

Failed to start datanodes

Hi list,
  I've set up my Hadoop cluster following the tutorial at
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster),
but it failed. When I run bin/hadoop dfsadmin -report,
it shows that only one datanode is up:
$ bin/hadoop dfsadmin -report
Safe mode is ON
Total raw bytes: 20317106176 (18.92 GB)
Remaining raw bytes: 12607342427 (11.74 GB)
Used raw bytes: 834834432 (796.16 MB)
% used: 4.11%

Total effective bytes: 0 (0 KB)
Effective replication multiplier: Infinity
-------------------------------------------------
Datanodes available: 1

Name: 192.168.3.8:50010
State          : In Service
Total raw bytes: 20317106176 (18.92 GB)
Remaining raw bytes: 12607342427 (11.74 GB)
Used raw bytes: 834834432 (796.16 MB)
% used: 4.11%
Last contact: Fri Sep 26 15:46:19 CST 2008

Then I checked the logs of the datanodes that should be running on the other
hosts, and found:

2008-09-26 15:47:54,744 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = localhost.jobui.com/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 694836;
compiled by 'hadoopqa' on Fri Sep 12 23:29:35 UTC 2008
************************************************************/
2008-09-26 15:48:15,896 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /192.168.3.8:54310. Already tried 0 time(s).
2008-09-26 15:48:36,898 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /192.168.3.8:54310. Already tried 1 time(s).
2008-09-26 15:48:57,900 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /192.168.3.8:54310. Already tried 2 time(s).
...
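The retry messages above mean the DataNode's IPC client cannot open a TCP
connection to the namenode at 192.168.3.8:54310. A minimal sketch of the same
reachability check, assuming Python is available on the datanode host
(`port_open` is a hypothetical helper, not part of Hadoop):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    This mirrors what the DataNode's IPC client is attempting; if it
    fails from a datanode host, the namenode port is unreachable
    (firewall, wrong address, or namenode not listening).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Values from this thread -- substitute your own namenode host/port:
# port_open("192.168.3.8", 54310)
```

If this returns False from a datanode but the namenode process is running,
something between the two hosts (typically a firewall) is dropping the
connection.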

I use 192.168.3.8 as the namenode, and 192.168.3.7, 192.168.3.8, and
192.168.3.9 as datanodes.
But obviously the remote datanodes cannot start successfully.
When I run jps on 192.168.3.7, it seems to work fine:
$ jps
5131 Jps
4561 DataNode

But the namenode cannot find it.

Can anyone give me a solution?

Thanks a lot.

Jeremy
-- 
My research interests are distributed systems, parallel computing, and
bytecode-based virtual machines.

http://coderplay.javaeye.com

Re: Failed to start datanodes

Posted by Jeremy Chow <co...@gmail.com>.
Hey,

I've fixed it. :)  The server had a firewall turned on.


Regards,
Jeremy

Re: Failed to start datanodes

Posted by 叶双明 <ye...@gmail.com>.
Did you configure the hostnames correctly on all the nodes?
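This is worth checking: the STARTUP_MSG line `host = localhost.jobui.com/127.0.0.1`
in the log suggests the datanode's own hostname resolves to the loopback
address, which prevents it from registering under a reachable address. A quick
sketch of that check, assuming Python is available on the node
(`resolves_to_loopback` is a hypothetical helper):

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if hostname resolves to a 127.x.x.x address.

    A datanode whose hostname resolves to loopback (often via a bad
    /etc/hosts entry) registers with the namenode under an address
    other machines cannot reach.
    """
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return False
    return addr.startswith("127.")

# Check the node's own fully qualified name:
# resolves_to_loopback(socket.getfqdn())
```

If this returns True for the node's own hostname, fix /etc/hosts so the
hostname maps to the machine's LAN address instead of 127.0.0.1.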




-- 
Sorry for my English!! 明
Please help me correct my English expression and errors in syntax.