Posted to hdfs-user@hadoop.apache.org by "S.L" <si...@gmail.com> on 2014/08/03 23:57:38 UTC
Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Hi All,
I am trying to set up an Apache Hadoop 2.3.0 cluster. I have a master and
three slave nodes; the slave nodes are listed in the
$HADOOP_HOME/etc/hadoop/slaves file, and I can telnet from the slaves to the
master namenode on port 9000. However, when I start the datanode on any of
the slaves, I get the following exception:
2014-08-03 08:04:27,952 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-1086620743-170.75.152.162-1407064313305 (Datanode
Uuid null) service to server1.dealyaft.com/170.75.152.162:9000
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
Datanode denied communication with namenode because hostname cannot be
resolved .
The following are the contents of my core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://server1.mydomain.com:9000</value>
  </property>
</configuration>
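(A side note on that property name: on Hadoop 2.x, fs.default.name is
deprecated in favor of fs.defaultFS, which takes the same value:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://server1.mydomain.com:9000</value>
</property>

Both names still work on 2.3.0; this is unrelated to the registration error.)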
Also, in my hdfs-site.xml I am not setting any value for the dfs.hosts or
dfs.hosts.exclude properties.
Thanks.
Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Posted by hadoop hive <ha...@gmail.com>.
Remove the entry from the excludes file (the one referenced by dfs.hosts.exclude), if there is one.
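For reference, an excludes entry only matters if hdfs-site.xml on the
namenode actually points at such a file; a typical wiring is sketched below
(the path is an illustrative assumption, not something from this thread),
and the file lists one hostname per line:

<property>
  <name>dfs.hosts.exclude</name>
  <!-- illustrative path; use whatever your hdfs-site.xml references -->
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>

Since the original post sets neither dfs.hosts nor dfs.hosts.exclude, an
excludes entry is unlikely to be the cause here.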
Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Posted by Wellington Chevreuil <we...@gmail.com>.
You should have /etc/hosts properly configured on all your cluster nodes.
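For example, an /etc/hosts along these lines on every node would let each
machine resolve every other by name (only 170.75.152.162 and
server1.dealyaft.com appear in the log above; the slave names and addresses
are placeholders):

127.0.0.1        localhost localhost.localdomain
170.75.152.162   server1.dealyaft.com   server1
# placeholder slave entries; substitute your real addresses and hostnames
170.75.152.163   slave1.dealyaft.com    slave1
170.75.152.164   slave2.dealyaft.com    slave2
170.75.152.165   slave3.dealyaft.com    slave3

The entries on the master matter most for this error: the namenode
reverse-resolves a registering datanode's IP to a hostname, and that is the
lookup the DisallowedDatanodeException reports as failing.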
Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Posted by "S.L" <si...@gmail.com>.
When you say the /etc/hosts file, do you mean only on the master, or on both
the master and the slaves?
Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Posted by Satyam Singh <sa...@ericsson.com>.
You have not put the namenode URI's hostname in the /etc/hosts file, so it
cannot be resolved to an IP address, and your namenode would not have started
either. The preferred practice is to start your cluster through the
start-dfs.sh command; it implicitly starts the namenode first and then all of
its datanodes.
Also, make sure you have used IP addresses in the slaves file; if not, then
also add entries for those hostnames to the /etc/hosts file.
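As an illustration (these hostnames are placeholders), the slaves file and
start-up would look like:

# $HADOOP_HOME/etc/hadoop/slaves -- one worker host per line
slave1.dealyaft.com
slave2.dealyaft.com
slave3.dealyaft.com

# run on the master; starts the namenode first, then a datanode
# on every host listed in the slaves file
$HADOOP_HOME/sbin/start-dfs.sh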
BR,
Satyam
Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Posted by "S.L" <si...@gmail.com>.
The contents are:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.
Posted by Ritesh Kumar Singh <ri...@gmail.com>.
Check the contents of the '/etc/hosts' file.
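If fixing name resolution is genuinely not possible, a commonly cited
namenode-side workaround is to relax the reverse-DNS check in hdfs-site.xml
(verify the property against your version's hdfs-default.xml before relying
on it, and note that it weakens datanode registration checks):

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>

With the default of true, the namenode must be able to resolve a registering
datanode's IP back to a hostname, which is the check failing in the log above.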