Posted to hdfs-user@hadoop.apache.org by Alan Miller <Al...@synopsys.com> on 2012/08/08 09:02:23 UTC

datanode startup before hostname is resolvable

For development I run CDH4 on my local machine, but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address is resolvable.
From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log
    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ....
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ....
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname


I'm on Fedora 16/x86_64.

Regards,
Alan
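The UnknownHostException in the log above means the JVM could not resolve the machine's own hostname at the moment the DataNode started. As an illustration (not part of the original thread), the failing lookup can be reproduced from a shell with `getent`, which consults /etc/hosts and then DNS in the order given by nsswitch.conf:

```shell
#!/bin/sh
# Sketch: reproduce the lookup the DataNode performs at startup.
# getent uses the system resolver (files first, then DNS).
check_resolvable() {
    getent hosts "$1" > /dev/null 2>&1
}

host="$(hostname)"
if check_resolvable "$host"; then
    echo "resolvable: $host"
else
    echo "NOT resolvable: $host -- the DataNode would hit UnknownHostException"
fi
```

Running this early in boot (for example from rc.local) would show whether the race described here is present on a given machine.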


Re: datanode startup before hostname is resolvable

Posted by Michael Segel <mi...@hotmail.com>.
On Aug 8, 2012, at 9:52 AM, Alan Miller <Al...@synopsys.com> wrote:

> Actually (from day to day)  I don’t get a NEW IP address.
>  
Right...

> The OS just can’t resolve the hostname when the DataNode starts up.
> NameNode, JobTracker & TaskTracker services all start successfully as is.
>  
> > Take out the boot up starting of the cluster and start the cluster manually.
> Sorry, but that’s a silly suggestion as it’s worse than what I do now.
>  
> As soon as I login, the first thing I do is sudo hadoop-hdfs-datanode start
> and I’m ready to go.
>  
> I think all we need to do is postpone the datanode startup for a few seconds.
>  
> Alan

Alan, 

While you may find the suggestion silly, I think you should realize that it's actually something you really want to do when you're running a real cluster. 
Case in point. You've got to take a server down due to a hardware problem. You don't want it to join the cluster at start up because the odds are you want to actually run some diagnostics before releasing the server. 
Not to mention that there are a lot of IT shops where there are two different teams managing the cluster. One team manages the physical machine and OS, while the other manages the cluster. 

Of course you happen to be running a pseudo cluster on a machine that you turn on and off frequently. 
I'm running a pseudo cluster on one of my Linux servers that I try not to power cycle unless I have to.  Even on this machine, I don't have a problem. 
But then again, I'm running CentOS.

Sorry I couldn't be more helpful...

-Mike


>  
> From: Michel Segel [mailto:michael_segel@hotmail.com] 
> Sent: Wednesday, August 08, 2012 12:48 PM
> To: user@hadoop.apache.org
> Cc: user@hadoop.apache.org
> Subject: Re: datanode startup before hostname is resolvable
>  
> So you're running a pseudo cluster...
>  
> Take out the boot up starting of the cluster and start the cluster manually.
> Even w DHCP, you shouldn't always get a new ip address because your lease shouldn't expire that quickly... 
>  
> Manually start Hadoop...
> 
> 
> Sent from a remote device. Please excuse any typos...
>  
> Mike Segel
> 
> On Aug 8, 2012, at 2:43 AM, Alan Miller <Al...@synopsys.com> wrote:
> 
> Sure but like I said, I’m on DHCP so my IP always changes.
>  
> In my config files I tried using “localhost4” and “127.0.0.1” but in
> both cases it still uses my FQ hostname instead of 127.0.0.1
> E.g.:
>   STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13
>   STARTUP_MSG:   args = []
>   STARTUP_MSG:   version = 2.0.0-cdh4.0.1
>  
> From: /etc/hadoop/conf/core-site.xml
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost4:8020</value>
>   </property>
>  
> From: /etc/hadoop/conf/mapred-site.xml
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost4:8021</value>
>   </property>
>  
> Alan
> From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com] 
> Sent: Wednesday, August 08, 2012 9:19 AM
> To: user@hadoop.apache.org
> Subject: RE: datanode startup before hostname is resolvable
>  
> I had a similar problem under different circumstances. I added the hostname and ip in /etc/hosts file
>  
> From: Alan Miller [mailto:Alan.Miller@synopsys.com] 
> Sent: Wednesday, August 08, 2012 12:32 PM
> To: user@hadoop.apache.org
> Subject: datanode startup before hostname is resolvable
>  
> For development I run CDH4 on my local machine, but I notice that I have to
> manually start the datanode (sudo service hadoop-hdfs-datanode start)
> after each reboot.
>  
> Looks like the datanode process is getting started before my DHCP address is resolvable.
> From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log
>  
>     2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>     ….
>     STARTUP_MSG: Starting DataNode
>     STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
>     ….
>     2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
>     SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname
>  
>  
> I’m on Fedora 16/x86_64.
>  
> Regards,
> Alan
>  
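The /etc/hosts workaround Ananda suggests in the quoted message can be sketched as below. The hostnames are the thread's own placeholders and the helper function is illustrative, not from the thread; for the real fix, apply it to /etc/hosts as root. Pinning the hostname to the loopback address keeps it resolvable even before DHCP has configured the interface:

```shell
#!/bin/sh
# Sketch of the /etc/hosts workaround: map the local hostname to
# 127.0.0.1 so it resolves before the network is up.
add_hosts_entry() {
    # add_hosts_entry <hosts-file> <fqdn> <shortname>
    file="$1"; fqdn="$2"; name="$3"
    # Append only if the short name is not already mapped.
    grep -qw "$name" "$file" || \
        printf '127.0.0.1\t%s %s\n' "$fqdn" "$name" >> "$file"
}

# Placeholders from the thread; run against /etc/hosts as root:
# add_hosts_entry /etc/hosts myhostname.mycompany.com myhostname
```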


RE: datanode startup before hostname is resolvable

Posted by Alan Miller <Al...@synopsys.com>.
Actually (from day to day) I don’t get a NEW IP address.

The OS just can’t resolve the hostname when the DataNode starts up.
NameNode, JobTracker & TaskTracker services all start successfully as is.

> Take out the boot up starting of the cluster and start the cluster manually.
Sorry, but that’s a silly suggestion as it’s worse than what I do now.

As soon as I log in, the first thing I do is sudo service hadoop-hdfs-datanode start
and I’m ready to go.

I think all we need to do is postpone the datanode startup for a few seconds.

Alan

From: Michel Segel [mailto:michael_segel@hotmail.com]
Sent: Wednesday, August 08, 2012 12:48 PM
To: user@hadoop.apache.org
Cc: user@hadoop.apache.org
Subject: Re: datanode startup before hostname is resolvable

So you're running a pseudo cluster...

Take out the boot up starting of the cluster and start the cluster manually.
Even w DHCP, you shouldn't always get a new ip address because your lease shouldn't expire that quickly...

Manually start Hadoop...


Sent from a remote device. Please excuse any typos...

Mike Segel

On Aug 8, 2012, at 2:43 AM, Alan Miller <Al...@synopsys.com> wrote:
Sure but like I said, I’m on DHCP so my IP always changes.

In my config files I tried using “localhost4” and “127.0.0.1” but in
both cases it still uses my FQ hostname instead of 127.0.0.1
E.g.:
  STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13
  STARTUP_MSG:   args = []
  STARTUP_MSG:   version = 2.0.0-cdh4.0.1

From: /etc/hadoop/conf/core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost4:8020</value>
  </property>

From: /etc/hadoop/conf/mapred-site.xml
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost4:8021</value>
  </property>

Alan
From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com]
Sent: Wednesday, August 08, 2012 9:19 AM
To: user@hadoop.apache.org
Subject: RE: datanode startup before hostname is resolvable

I had a similar problem under different circumstances. I added the hostname and IP to the /etc/hosts file.

________________________________
From: Alan Miller [mailto:Alan.Miller@synopsys.com]
Sent: Wednesday, August 08, 2012 12:32 PM
To: user@hadoop.apache.org
Subject: datanode startup before hostname is resolvable

For development I run CDH4 on my local machine, but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address is resolvable.
From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log

    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ….
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ….
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname


I’m on Fedora 16/x86_64.

Regards,
Alan
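Alan's idea of postponing the datanode start until the hostname resolves could be sketched as a small wait loop. The 30-second cap and the place it would be hooked in are assumptions, not something the thread settled on:

```shell
#!/bin/sh
# Sketch of the "postpone startup" idea: poll until a name resolves
# through the system resolver, or give up after a timeout.
wait_for_name() {
    # wait_for_name <name> [seconds, default 30]
    name="$1"; tries="${2:-30}"
    while [ "$tries" -gt 0 ]; do
        getent hosts "$name" > /dev/null 2>&1 && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# Hypothetical boot hook, run before the init script:
# wait_for_name "$(hostname)" 30 && service hadoop-hdfs-datanode start
```

Wired in ahead of the hadoop-hdfs-datanode init script, this would delay the start only as long as resolution actually takes, rather than by a fixed sleep.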


RE: datanode startup before hostname is resovable

Posted by Alan Miller <Al...@synopsys.com>.
Actually (from day to day)  I don’t get a NEW IP address.

The OS just can’t resolve the hostname when the DataNode starts up.
NameNode, JobTracker & TaskTracker services all start successfully as is.

> Take out the boot up starting of the cluster and start the cluster manually.
Sorry, but that’s a silly suggestion as it’s worse than what I do now.

As soon as I login, the first thing I do is sudo hadoop-hdfs-datanode start
and I’m ready to go.

I think all we need to do is postpone the datanode startup for a few seconds.

Alan

From: Michel Segel [mailto:michael_segel@hotmail.com]
Sent: Wednesday, August 08, 2012 12:48 PM
To: user@hadoop.apache.org
Cc: user@hadoop.apache.org
Subject: Re: datanode startup before hostname is resovable

So you're running a pseudo cluster...

Take out the boot up starting of the cluster and start the cluster manually.
Even w DHCP, you shouldn't always get a new ip address because your lease shouldn't expire that quickly...

Manually start Hadoop...


Sent from a remote device. Please excuse any typos...

Mike Segel

On Aug 8, 2012, at 2:43 AM, Alan Miller <Al...@synopsys.com>> wrote:
Sure but like I said, I’m on DHCP so my IP always changes.

In my config files I tried using “localhost4” and “127.0.0.1” but in
both cases it still uses my FQ hostname instead of 127.0.0.1
E.g.:
  STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13<http://myhostname.mycompany.com/10.11.12.13>
  STARTUP_MSG:   args = []
  STARTUP_MSG:   version = 2.0.0-cdh4.0.1

From: /etc/hadoop/conf/core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost4:8020</value>
  </property

From: /etc/hadoop/conf/mapred-site.xml
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost4:8021</value>
  </property>

Alan
From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com]<mailto:[mailto:Ananda.Murugan@honeywell.com]>
Sent: Wednesday, August 08, 2012 9:19 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: RE: datanode startup before hostname is resovable

I had a similar problem under different circumstances. I added the hostname and ip in /etc/hosts file

________________________________
From: Alan Miller [mailto:Alan.Miller@synopsys.com]<mailto:[mailto:Alan.Miller@synopsys.com]>
Sent: Wednesday, August 08, 2012 12:32 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: datanode startup before hostname is resovable

For development I run CDH4 on my local machine  but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address Is resolvable.
From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log

    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ….
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ….
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname


I’m on Fedora 16/x86_64.

Regards,
Alan


RE: datanode startup before hostname is resovable

Posted by Alan Miller <Al...@synopsys.com>.
Actually (from day to day)  I don’t get a NEW IP address.

The OS just can’t resolve the hostname when the DataNode starts up.
NameNode, JobTracker & TaskTracker services all start successfully as is.

> Take out the boot up starting of the cluster and start the cluster manually.
Sorry, but that’s a silly suggestion as it’s worse than what I do now.

As soon as I login, the first thing I do is sudo hadoop-hdfs-datanode start
and I’m ready to go.

I think all we need to do is postpone the datanode startup for a few seconds.

Alan

From: Michel Segel [mailto:michael_segel@hotmail.com]
Sent: Wednesday, August 08, 2012 12:48 PM
To: user@hadoop.apache.org
Cc: user@hadoop.apache.org
Subject: Re: datanode startup before hostname is resovable

So you're running a pseudo cluster...

Take out the boot up starting of the cluster and start the cluster manually.
Even w DHCP, you shouldn't always get a new ip address because your lease shouldn't expire that quickly...

Manually start Hadoop...


Sent from a remote device. Please excuse any typos...

Mike Segel

On Aug 8, 2012, at 2:43 AM, Alan Miller <Al...@synopsys.com>> wrote:
Sure but like I said, I’m on DHCP so my IP always changes.

In my config files I tried using “localhost4” and “127.0.0.1” but in
both cases it still uses my FQ hostname instead of 127.0.0.1
E.g.:
  STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13<http://myhostname.mycompany.com/10.11.12.13>
  STARTUP_MSG:   args = []
  STARTUP_MSG:   version = 2.0.0-cdh4.0.1

From: /etc/hadoop/conf/core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost4:8020</value>
  </property

From: /etc/hadoop/conf/mapred-site.xml
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost4:8021</value>
  </property>

Alan
From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com]<mailto:[mailto:Ananda.Murugan@honeywell.com]>
Sent: Wednesday, August 08, 2012 9:19 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: RE: datanode startup before hostname is resovable

I had a similar problem under different circumstances. I added the hostname and ip in /etc/hosts file

________________________________
From: Alan Miller [mailto:Alan.Miller@synopsys.com]<mailto:[mailto:Alan.Miller@synopsys.com]>
Sent: Wednesday, August 08, 2012 12:32 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: datanode startup before hostname is resovable

For development I run CDH4 on my local machine  but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address Is resolvable.
From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log

    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ….
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ….
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname


I’m on Fedora 16/x86_64.

Regards,
Alan


RE: datanode startup before hostname is resovable

Posted by Alan Miller <Al...@synopsys.com>.
Actually (from day to day)  I don’t get a NEW IP address.

The OS just can’t resolve the hostname when the DataNode starts up.
NameNode, JobTracker & TaskTracker services all start successfully as is.

> Take out the boot up starting of the cluster and start the cluster manually.
Sorry, but that’s a silly suggestion as it’s worse than what I do now.

As soon as I login, the first thing I do is sudo hadoop-hdfs-datanode start
and I’m ready to go.

I think all we need to do is postpone the datanode startup for a few seconds.

Alan

From: Michel Segel [mailto:michael_segel@hotmail.com]
Sent: Wednesday, August 08, 2012 12:48 PM
To: user@hadoop.apache.org
Cc: user@hadoop.apache.org
Subject: Re: datanode startup before hostname is resovable

So you're running a pseudo cluster...

Take out the boot up starting of the cluster and start the cluster manually.
Even w DHCP, you shouldn't always get a new ip address because your lease shouldn't expire that quickly...

Manually start Hadoop...


Sent from a remote device. Please excuse any typos...

Mike Segel

On Aug 8, 2012, at 2:43 AM, Alan Miller <Al...@synopsys.com>> wrote:
Sure but like I said, I’m on DHCP so my IP always changes.

In my config files I tried using “localhost4” and “127.0.0.1” but in
both cases it still uses my FQ hostname instead of 127.0.0.1
E.g.:
  STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13<http://myhostname.mycompany.com/10.11.12.13>
  STARTUP_MSG:   args = []
  STARTUP_MSG:   version = 2.0.0-cdh4.0.1

From: /etc/hadoop/conf/core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost4:8020</value>
  </property

From: /etc/hadoop/conf/mapred-site.xml
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost4:8021</value>
  </property>

Alan
From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com]<mailto:[mailto:Ananda.Murugan@honeywell.com]>
Sent: Wednesday, August 08, 2012 9:19 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: RE: datanode startup before hostname is resovable

I had a similar problem under different circumstances. I added the hostname and ip in /etc/hosts file

________________________________
From: Alan Miller [mailto:Alan.Miller@synopsys.com]<mailto:[mailto:Alan.Miller@synopsys.com]>
Sent: Wednesday, August 08, 2012 12:32 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: datanode startup before hostname is resovable

For development I run CDH4 on my local machine  but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address Is resolvable.
From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log

    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ….
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ….
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname


I’m on Fedora 16/x86_64.

Regards,
Alan
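A sketch of the "postpone the startup" idea (the service name comes from the CDH4 packages; the 30-second cap is arbitrary): wait until the local hostname resolves, then run the init script.

```shell
#!/bin/sh
# Poll until the local hostname resolves (the same lookup the DataNode
# performs at startup), up to a given number of one-second tries.
wait_for_host() {
    name="$1"
    tries="${2:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if getent hosts "$name" >/dev/null 2>&1; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Example use (from e.g. /etc/rc.d/rc.local or a login script):
# wait_for_host "$(hostname)" 30 && sudo service hadoop-hdfs-datanode start
```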


Re: datanode startup before hostname is resovable

Posted by Michel Segel <mi...@hotmail.com>.
So you're running a pseudo cluster...

Take out the boot up starting of the cluster and start the cluster manually.
Even with DHCP, you shouldn't get a new IP address on every boot, because your lease shouldn't expire that quickly... 

Manually start Hadoop...


Sent from a remote device. Please excuse any typos...

Mike Segel

On Aug 8, 2012, at 2:43 AM, Alan Miller <Al...@synopsys.com> wrote:

> Sure but like I said, I’m on DHCP so my IP always changes.
>  
> In my config files I tried using “localhost4” and “127.0.0.1” but in
> both cases it still uses my FQ hostname instead of 127.0.0.1
> E.g.:
>   STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13
>   STARTUP_MSG:   args = []
>   STARTUP_MSG:   version = 2.0.0-cdh4.0.1
>  
> From: /etc/hadoop/conf/core-site.xml
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost4:8020</value>
>   </property>
>  
> From: /etc/hadoop/conf/mapred-site.xml
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost4:8021</value>
>   </property>
>  
> Alan
> From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com] 
> Sent: Wednesday, August 08, 2012 9:19 AM
> To: user@hadoop.apache.org
> Subject: RE: datanode startup before hostname is resovable
>  
> I had a similar problem under different circumstances. I added the hostname and ip in /etc/hosts file
>  
> From: Alan Miller [mailto:Alan.Miller@synopsys.com] 
> Sent: Wednesday, August 08, 2012 12:32 PM
> To: user@hadoop.apache.org
> Subject: datanode startup before hostname is resovable
>  
> For development I run CDH4 on my local machine  but I notice that I have to
> manually start the datanode (sudo service hadoop-hdfs-datanode start)
> after each reboot.
>  
> Looks like the datanode process is getting started before my DHCP address is resolvable.
> From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log
>  
>     2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>     ….
>     STARTUP_MSG: Starting DataNode
>     STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
>     ….
>     2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
>     SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname
>  
>  
> I’m on Fedora 16/x86_64.
>  
> Regards,
> Alan
>  
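On Fedora, Michel's suggestion of taking the DataNode out of the boot sequence can be done with the stock SysV tools (service name as shipped by the CDH4 packages):

```shell
# keep the DataNode from starting automatically at boot...
sudo chkconfig hadoop-hdfs-datanode off
# ...and start it by hand once the network is up
sudo service hadoop-hdfs-datanode start
```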

RE: datanode startup before hostname is resovable

Posted by "Chandra Mohan, Ananda Vel Murugan" <An...@honeywell.com>.
I had a similar problem under different circumstances. I added the hostname and IP to the /etc/hosts file.
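On a single dev box that fix can look like the lines below in /etc/hosts (hostnames taken from the thread; note that dhclient or NetworkManager hooks may rewrite this file on some setups):

```
127.0.0.1   localhost localhost.localdomain localhost4
127.0.0.1   myhostname myhostname.mycompany.com
```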

________________________________
From: Alan Miller [mailto:Alan.Miller@synopsys.com]
Sent: Wednesday, August 08, 2012 12:32 PM
To: user@hadoop.apache.org
Subject: datanode startup before hostname is resovable

For development I run CDH4 on my local machine  but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address is resolvable.
From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log

    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ....
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ....
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname


I'm on Fedora 16/x86_64.

Regards,
Alan

