Posted to mapreduce-user@hadoop.apache.org by Ed Sweeney <ed...@falkonry.com> on 2014/07/26 21:30:36 UTC

Datanode denied communication with namenode

All,

New AWS cluster with Cloudera 4.3 RPMs.

dfs.hosts contains 3 host names; they all resolve from each of the 3 hosts.

The datanode on the same machine as the namenode starts fine (once I
added its fully qualified hostname to the dfs.hosts file).
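For anyone hitting the same setup, this is roughly what the include-file wiring looks like - the hostnames below are placeholders, not my actual nodes:

```
# dfs.hosts: one fully qualified hostname per line
namenode.example.internal
datanode1.example.internal
datanode2.example.internal
```

The file is referenced from hdfs-site.xml via the dfs.hosts property, and the namenode only admits datanodes whose names match an entry.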

The 2 remote datanodes both get the error below.

org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
Datanode denied communication with namenode because hostname cannot be
resolved (ip=10.0.7.61, hostname=10.0.7.61):
DatanodeRegistration(0.0.0.0,
datanodeUuid=de84029d-107b-4c80-b503-c990a3621a40,

It is an AWS VPC, so there is no reverse DNS, and I don't want to add
anything to the /etc/hosts files - I shouldn't have to, since the long
and short names all resolve properly.
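To make the asymmetry concrete: forward resolution (name to IP) is what "the names all resolve" tests, but at registration time the namenode does a reverse lookup (IP to name) on the connecting address, which is exactly what a VPC without PTR records can't answer. A quick sketch of the two lookups (using localhost so it runs anywhere; substitute your datanode's name/IP to reproduce the failure):

```python
import socket

# Forward lookup: name -> IP. This is the direction that works in the VPC.
ip = socket.gethostbyname("localhost")
print("forward:", ip)

# Reverse lookup: IP -> name. This is what the namenode attempts when a
# datanode registers; with no PTR record it raises socket.herror, and the
# namenode falls back to using the bare IP as the "hostname".
try:
    name, _, _ = socket.gethostbyaddr(ip)
    print("reverse:", name)
except socket.herror:
    print("reverse lookup failed - the VPC situation")
```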

Seeing that the hostname field in the error message contains the IP
address, I tried setting dfs.client.use.datanode.hostname = true, but
there was no change.

Any help appreciated!

-Ed

Re: Datanode denied communication with namenode

Posted by Ed Sweeney <ed...@falkonry.com>.
Worked around it for now by telling the namenode to skip reverse DNS
checks in AWS:

dfs.namenode.datanode.registration.ip-hostname-check=false
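In case it helps anyone searching later, this is the hdfs-site.xml form of that setting on the namenode (property name as above; everything else is standard boilerplate):

```xml
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```

Restart the namenode after the change. Note this disables a sanity check, so only use it where forward DNS is trustworthy, as inside a VPC.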

On Sat, Jul 26, 2014 at 12:47 PM, hadoop hive <ha...@gmail.com> wrote:
> Did you allowed RPC and TCP communication in you security group, which you
> have added to you hosts.
>
> Please also check your exclude file and third point is to increase your dn
> heapsize and start it.
>
> Thanks
>
> On Jul 27, 2014 1:01 AM, "Ed Sweeney" <ed...@falkonry.com> wrote:
>>
>> All,
>>
>> New AWS cluster with Cloudera 4.3 RPMs.
>>
>> dfs.hosts contains 3 host names, they all resolve from each of the 3
>> hosts.
>>
>> the datanode on the same machine as the namenode starts fine (once I
>> added it's longname hostname to dfs.hosts file).
>>
>> the 2 remote datanodes both get the error below.
>>
>> org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
>> Datanode denied communication with namenode because hostname cannot be
>> resolved (ip=10.0.7.61, hostname=10.0.7.61):
>> DatanodeRegistration(0.0.0.0,
>> datanodeUuid=de84029d-107b-4c80-b503-c990a3621a40,
>>
>> It is AWS VPC so no reverse dns and I don't want to add anything to
>> the /etc/hosts files - shouldn't have to since the long and short
>> names all resolve properly.
>>
>> Seeing hostname field in the error message has the ip field, I tried
>> using dfs.client.use.datanode.hostname = true but no change.
>>
>> Any help appreciated!
>>
>> -Ed

Re: Datanode denied communication with namenode

Posted by hadoop hive <ha...@gmail.com>.
Did you allow RPC and TCP communication in the security group you have
attached to your hosts?

Please also check your exclude file; a third point is to increase your
DN heap size and restart it.
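A quick way to test the security-group point from a datanode is a plain TCP connect to the namenode RPC port (8020 by default in CDH 4; the hostname below is a placeholder for your namenode's address):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unresolvable hosts.
        return False

print(port_open("namenode.example.internal", 8020))
```

If this prints False from a datanode, the security group (or DNS) is the problem before HDFS configuration even enters the picture.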

Thanks
On Jul 27, 2014 1:01 AM, "Ed Sweeney" <ed...@falkonry.com> wrote:

> All,
>
> New AWS cluster with Cloudera 4.3 RPMs.
>
> dfs.hosts contains 3 host names, they all resolve from each of the 3 hosts.
>
> the datanode on the same machine as the namenode starts fine (once I
> added it's longname hostname to dfs.hosts file).
>
> the 2 remote datanodes both get the error below.
>
> org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
> Datanode denied communication with namenode because hostname cannot be
> resolved (ip=10.0.7.61, hostname=10.0.7.61):
> DatanodeRegistration(0.0.0.0,
> datanodeUuid=de84029d-107b-4c80-b503-c990a3621a40,
>
> It is AWS VPC so no reverse dns and I don't want to add anything to
> the /etc/hosts files - shouldn't have to since the long and short
> names all resolve properly.
>
> Seeing hostname field in the error message has the ip field, I tried
> using dfs.client.use.datanode.hostname = true but no change.
>
> Any help appreciated!
>
> -Ed
>
