Posted to common-user@hadoop.apache.org by sigma syd <si...@yahoo.com.mx> on 2008/05/10 03:15:04 UTC

I need help setting up Hadoop

Hello!

I am trying to set up Hadoop on two PCs.

I have this in conf/masters:
master master.visid.com

and this in conf/slaves:
master.visid.com
slave3.visid.com.

When I run bin/start-dfs.sh and bin/start-mapred.sh, the following error appears in logs/hadoop-hadoop-datanode-slave3.visid.com.log:
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = slave3.visid.com/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.16.2
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16 -r 642481; compiled by 'hadoopqa' on Sat Mar 29 01:59:04 UTC 2008
************************************************************/
2008-05-08 21:15:40,133 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 1 time(s).
2008-05-08 21:15:41,133 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 2 time(s).
2008-05-08 21:15:42,134 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 3 time(s).
2008-05-08 21:15:43,135 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 4 time(s).
2008-05-08 21:15:44,135 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 5 time(s).
2008-05-08 21:15:45,136 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 6 time(s).
2008-05-08 21:15:46,136 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 7 time(s).
2008-05-08 21:15:47,137 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 8 time(s).
2008-05-08 21:15:48,138 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 9 time(s).
2008-05-08 21:15:49,138 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master.visid.com/192.168.46.242:54310. Already tried 10 time(s).
2008-05-08 21:15:50,141 ERROR org.apache.hadoop.dfs.DataNode: java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:519)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:161)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:578)
        at org.apache.hadoop.ipc.Client.call(Client.java:501)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
        at org.apache.hadoop.dfs.$Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:291)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:278)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:315)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:260)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:207)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:162)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2512)
        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:2456)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2477)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:2673)

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
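The retries above end in NoRouteToHostException, which means the TCP connect to 192.168.46.242:54310 itself fails. For reference, here is a quick way to test that path from slave3 (just a sketch; nc is assumed to be installed, and the host and port are taken from the log above):

```shell
# Probe the NameNode RPC endpoint from slave3.
# Host and port are the ones in the retry messages above.
# "reachable" means the TCP connect works; "No route to host"
# usually points at a firewall or a wrong interface instead.
nc -z -w 3 master.visid.com 54310 && echo reachable || echo unreachable
```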
On master.visid.com I run jps and all services are running:
3984 DataNode
4148 SecondaryNameNode
4373 TaskTracker
4461 Jps
3873 NameNode
4246 JobTracker
but on slave3.visid.com nothing is running; jps shows no output.

Here is my config file, hadoop-site.xml:
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/nutch/filesystem/hadoop-datastore/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>master.visid.com:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>master.visid.com:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

And here is my configuration in hadoop-env.sh:

export JAVA_HOME=/opt/jdk1.5.0_10/
export HADOOP_PID_DIR=/nutch/filesystem/hadoop_pids/

My installation path is /nutch/filesystem/.

I have this in /etc/hosts:

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
127.0.0.1       localhost.localdomain   localhost       master.visid.com
::1     localhost.localdomain   localhost       master.visid.com
192.168.46.242  master.visid.com
192.168.46.243  slave2.visid.com
192.168.46.244  slave3.visid.com
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
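For reference, here is a quick way to check whether a cluster hostname is also mapped to loopback (a sketch; the sample file just reproduces the entries above, and on a real node the grep would point at /etc/hosts itself). If a cluster name resolves to 127.0.0.1, a daemon can bind to loopback instead of the LAN address, which would match the `slave3.visid.com/127.0.0.1` line in the startup log:

```shell
# Sketch: detect cluster hostnames pinned to loopback in /etc/hosts.
# The sample file reproduces the entries from this post; on a real
# node, run the grep against /etc/hosts instead.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1       localhost.localdomain   localhost       master.visid.com
::1     localhost.localdomain   localhost       master.visid.com
192.168.46.242  master.visid.com
192.168.46.243  slave2.visid.com
192.168.46.244  slave3.visid.com
EOF
# Any line printed here means a cluster hostname also resolves to
# loopback, so daemons may bind to 127.0.0.1 rather than the LAN IP:
grep -E '^(127\.0\.0\.1|::1).*visid\.com' /tmp/hosts.sample
```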

Carlos Barcos

