Posted to common-user@hadoop.apache.org by souravm <SO...@infosys.com> on 2008/09/16 08:04:23 UTC

Need help in hdfs configuration fully distributed way in Mac OSX...

Hi All,

I'm facing a problem in configuring hdfs in a fully distributed way in Mac OSX.

Here is the topology -

1. The namenode is in machine 1
2. There is 1 datanode in machine 2

Now when I execute start-dfs.sh from machine 1, it connects to machine 2 (after asking for the password for machine 2) and starts the datanode on machine 2 (as the console message says).

However -
1. When I go to http://machine1:50070 - it does not show the datanode at all. It says 0 datanodes configured.
2. In the log file on machine 2 what I see is -
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = rc0902b-dhcp169.apple.com/17.229.22.169
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.17.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 684969; compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
************************************************************/
2008-09-15 18:54:44,626 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 1 time(s).
2008-09-15 18:54:45,627 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 2 time(s).
2008-09-15 18:54:46,628 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 3 time(s).
2008-09-15 18:54:47,629 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 4 time(s).
2008-09-15 18:54:48,630 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 5 time(s).
2008-09-15 18:54:49,631 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 6 time(s).
2008-09-15 18:54:50,632 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 7 time(s).
2008-09-15 18:54:51,633 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 8 time(s).
2008-09-15 18:54:52,635 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 9 time(s).
2008-09-15 18:54:53,640 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 10 time(s).
2008-09-15 18:54:54,641 INFO org.apache.hadoop.ipc.RPC: Server at /17.229.23.77:9000 not available yet, Zzzzz...

....... and this retrying keeps repeating


The hadoop-site.xml files are like this -

1. In machine 1
-
<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/Users/souravm/hdpn</value>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


2. In machine 2

<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://<machine1 ip>:9000</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/Users/nirdosh/hdfsd1</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

The slaves file in machine 1 has a single entry - <user name>@<ip of machine2>

The exact steps I did -

1. Reformatted the namenode on machine 1
2. Executed start-dfs.sh on machine 1
3. Checked whether the datanode shows up at http://<machine 1 ip>:50070

Any pointer to resolve this issue would be appreciated.

Regards,
Sourav



**************** CAUTION - Disclaimer *****************
This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely 
for the use of the addressee(s). If you are not the intended recipient, please 
notify the sender by e-mail and delete the original message. Further, you are not 
to copy, disclose, or distribute this e-mail or its contents to any other person and 
any such actions are unlawful. This e-mail may contain viruses. Infosys has taken 
every reasonable precaution to minimize this risk, but is not liable for any damage 
you may sustain as a result of any virus in this e-mail. You should carry out your 
own virus checks before opening the e-mail or attachment. Infosys reserves the 
right to monitor and review the content of all messages sent to or from this e-mail 
address. Messages sent to or from this e-mail address may be stored on the 
Infosys e-mail system.
***INFOSYS******** End of Disclaimer ********INFOSYS***

RE: Need help in hdfs configuration fully distributed way in Mac OSX...

Posted by souravm <SO...@infosys.com>.
Hi Mafish,

Thanks for your suggestions.

Finally I could resolve the issue. The hadoop-site.xml on the namenode had fs.default.name set to localhost, whereas on the datanode it was the actual IP. I changed localhost to the actual IP on the namenode and it started working.
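For completeness, a sketch of what the namenode's hadoop-site.xml looks like after this change. Only fs.default.name differs from the earlier config; <machine1 ip> is the same placeholder for machine 1's real IP that the datanode config above uses:

```xml
<configuration>

  <property>
    <name>fs.default.name</name>
    <!-- must be the address the datanodes dial, not localhost -->
    <value>hdfs://<machine1 ip>:9000</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/Users/souravm/hdpn</value>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```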

Regards,
Sourav

-----Original Message-----
From: Mafish Liu [mailto:mafish@gmail.com]
Sent: Tuesday, September 16, 2008 7:37 PM
To: core-user@hadoop.apache.org
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...


Re: Need help in hdfs configuration fully distributed way in Mac OSX...

Posted by Mafish Liu <ma...@gmail.com>.
Hi, souravm:
  I can't tell exactly what's wrong with your configuration from your post, and I guess the possible causes are:

  1. Make sure the firewall on the namenode is off, or that port 9000 is open in your firewall configuration.

  2. Namenode. Check the namenode startup log to see if the namenode started correctly, or try running 'jps' on your namenode to see if there is a process called "NameNode".

Hope this helps.
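Point 1 can be verified from the datanode machine before touching any Hadoop config. A minimal sketch with plain sockets (the `is_port_open` helper is illustrative, not part of Hadoop):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        # create_connection resolves the host and attempts a TCP connect
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# Run on the datanode against the namenode's RPC address, e.g.:
# print(is_port_open("17.229.23.77", 9000))
```

If this returns False from the datanode but True on the namenode itself, a firewall or a loopback-only bind address is the likely culprit.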





-- 
Mafish@gmail.com
Institute of Computing Technology, Chinese Academy of Sciences, Beijing.

RE: Need help in hdfs configuration fully distributed way in Mac OSX...

Posted by souravm <SO...@infosys.com>.
Hi,

The namenode in machine 1 has started. I can see the following log.

2008-09-16 07:23:46,321 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2008-09-16 07:23:46,325 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: localhost/127.0.0.1:9000
2008-09-16 07:23:46,327 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2008-09-16 07:23:46,329 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2008-09-16 07:23:46,404 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=souravm,souravm,_lpadmin,_appserveradm,_appserverusr,admin
2008-09-16 07:23:46,405 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup
2008-09-16 07:23:46,405 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
2008-09-16 07:23:46,473 INFO org.apache.hadoop.fs.FSNamesystem: Finished loading FSImage in 112 msecs
2008-09-16 07:23:46,475 INFO org.apache.hadoop.dfs.StateChange: STATE* Leaving safe mode after 0 secs.
2008-09-16 07:23:46,475 INFO org.apache.hadoop.dfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2008-09-16 07:23:46,480 INFO org.apache.hadoop.dfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2008-09-16 07:23:46,486 INFO org.apache.hadoop.fs.FSNamesystem: Registered FSNamesystemStatusMBean
2008-09-16 07:23:46,561 INFO org.mortbay.util.Credential: Checking Resource aliases
2008-09-16 07:23:46,627 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2008-09-16 07:23:46,907 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@cf7fd0
2008-09-16 07:23:46,937 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
2008-09-16 07:23:46,938 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
2008-09-16 07:23:46,938 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
2008-09-16 07:23:46,939 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50070
2008-09-16 07:23:46,939 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@dd725b
2008-09-16 07:23:46,940 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up at: 0.0.0.0:50070
2008-09-16 07:23:46,940 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2008-09-16 07:23:46,942 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2008-09-16 07:23:46,943 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2008-09-16 07:23:46,944 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting

Is there a specific way to provide the master name in the masters file (in hadoop/conf) on the datanode? I've currently specified <username>@<namenode server ip>. I'm thinking there might be a problem, as in the datanode's log file I can see the message '2008-09-16 14:38:51,501 INFO org.apache.hadoop.ipc.RPC: Server at /192.168.1.102:9000 not available yet, Zzzzz...'
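One detail in the namenode log above stands out: "Namenode up at: localhost/127.0.0.1:9000". A server that binds via the name localhost listens only on the loopback interface, so a datanode on another machine dialing the namenode's real IP would get no answer, which would explain the endless retries. A minimal sketch of the effect with plain Python sockets (nothing here is Hadoop-specific):

```python
import socket

# Bind a listener the way "localhost" resolves: to the loopback interface only.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # loopback only; remote hosts cannot reach this
srv.listen(1)
port = srv.getsockname()[1]

# A connection from the same machine over loopback succeeds...
cli = socket.create_connection(("127.0.0.1", port), timeout=2.0)
print("loopback connect ok")
# ...but a client on another machine dialing <this host's real IP>:port
# would be refused, because nothing listens on the external interface.
cli.close()
srv.close()
```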

Any help ?

Regards,
Sourav


________________________________________
From: Samuel Guo [guosijie@gmail.com]
Sent: Tuesday, September 16, 2008 5:49 AM
To: core-user@hadoop.apache.org
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...

check the namenode's log in machine1 to see if your namenode started
successfully :)

On Tue, Sep 16, 2008 at 2:04 PM, souravm <SO...@infosys.com> wrote:

> Hi All,
>
> I'm facing a problem in configuring hdfs in a fully distributed way in Mac
> OSX.
>
> Here is the topology -
>
> 1. The namenode is in machine 1
> 2. There is 1 datanode in machine 2
>
> Now when I execute start-dfs.sh from machine 1, it connects to machine 2
> (after it asks for password for connecting to machine 2) and starts datanode
> in machine 2 (as the console message says).
>
> However -
> 1. When I go to http://machine1:50070 - it does not show the data node at
> all. It says 0 data node configured
> 2. In the log file in machine 2 what I see is -
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = rc0902b-dhcp169.apple.com/17.229.22.169
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.17.2.1
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r
> 684969; compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
> ************************************************************/
> 2008-09-15 18:54:44,626 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 1 time(s).
> 2008-09-15 18:54:45,627 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 2 time(s).
> 2008-09-15 18:54:46,628 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 3 time(s).
> 2008-09-15 18:54:47,629 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 4 time(s).
> 2008-09-15 18:54:48,630 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 5 time(s).
> 2008-09-15 18:54:49,631 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 6 time(s).
> 2008-09-15 18:54:50,632 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 7 time(s).
> 2008-09-15 18:54:51,633 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 8 time(s).
> 2008-09-15 18:54:52,635 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 9 time(s).
> 2008-09-15 18:54:53,640 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: /17.229.23.77:9000. Already tried 10 time(s).
> 2008-09-15 18:54:54,641 INFO org.apache.hadoop.ipc.RPC: Server at /
> 17.229.23.77:9000 not available yet, Zzzzz...
>
> ....... and this retrying keeps on repeating
>
>
> The hadoop-site.xml files are as follows -
>
> 1. In machine 1
> -
> <configuration>
>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://localhost:9000</value>
>  </property>
>
>   <property>
>    <name>dfs.name.dir</name>
>    <value>/Users/souravm/hdpn</value>
>  </property>
>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>localhost:9001</value>
>  </property>
>  <property>
>    <name>dfs.replication</name>
>    <value>1</value>
>  </property>
> </configuration>
>
>
> 2. In machine 2
>
> <configuration>
>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://<machine1 ip>:9000</value>
>  </property>
>  <property>
>    <name>dfs.data.dir</name>
>    <value>/Users/nirdosh/hdfsd1</value>
>  </property>
>  <property>
>    <name>dfs.replication</name>
>    <value>1</value>
>  </property>
> </configuration>
>
> The slaves file in machine 1 has single entry - <user name>@<ip of
> machine2>
>
> The exact steps I did -
>
> 1. Reformat the namenode in machine 1
> 2. execute start-dfs.sh in machine 1
> 3. Then I try to see whether the datanode is created through http://<machine
> 1 ip>:50070
>
> Any pointer to resolve this issue would be appreciated.
>
> Regards,
> Sourav
>
>
>
> **************** CAUTION - Disclaimer *****************
> This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended
> solely
> for the use of the addressee(s). If you are not the intended recipient,
> please
> notify the sender by e-mail and delete the original message. Further, you
> are not
> to copy, disclose, or distribute this e-mail or its contents to any other
> person and
> any such actions are unlawful. This e-mail may contain viruses. Infosys has
> taken
> every reasonable precaution to minimize this risk, but is not liable for
> any damage
> you may sustain as a result of any virus in this e-mail. You should carry
> out your
> own virus checks before opening the e-mail or attachment. Infosys reserves
> the
> right to monitor and review the content of all messages sent to or from
> this e-mail
> address. Messages sent to or from this e-mail address may be stored on the
> Infosys e-mail system.
> ***INFOSYS******** End of Disclaimer ********INFOSYS***
>


Re: Need help in hdfs configuration fully distributed way in Mac OSX...

Posted by Mafish Liu <ma...@gmail.com>.
Hi:
  You need to configure your nodes so that node 1 can connect to node 2
over SSH without a password.
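
A minimal sketch of that passwordless SSH setup (assumptions: OpenSSH on both Macs, and `user@machine2` is a placeholder for the actual account and host of the datanode):

```shell
#!/bin/sh
# On machine 1 (the namenode): generate an RSA key pair with an empty
# passphrase, if one does not already exist.
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"

# Append the public key to machine 2's authorized_keys, creating the
# directory with the permissions sshd requires (adjust user/host).
cat "$HOME/.ssh/id_rsa.pub" | \
  ssh user@machine2 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
    cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'

# Verify: this should now log in without prompting for a password.
ssh user@machine2 true
```

With this in place, start-dfs.sh can launch the remote datanode without stopping at a password prompt.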




-- 
Mafish@gmail.com
Institute of Computing Technology, Chinese Academy of Sciences, Beijing.