Posted to mapreduce-user@hadoop.apache.org by 麦树荣 <sh...@qunar.com> on 2014/07/14 05:46:57 UTC
Re: DisallowedDatanodeException: Datanode denied communication with namenode
hi,
your problem is here:
172.16.XXX.XX1 172.16.XXX.XX1
172.16.XXX.XX2 172.16.XXX.XX2
172.16.XXX.XX3 172.16.XXX.XX3
172.16.XXX.XX4 172.16.XXX.XX4
Nodes in a Hadoop cluster should be identified by hostname, not by IP address.
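For example, a minimal sketch of what the /etc/hosts entries on every node might look like (the names hadoop-master and hadoop-worker1 through hadoop-worker3 are hypothetical; substitute your actual node names, and make sure each node's own `hostname` matches its entry):

```
172.16.XXX.XX1   hadoop-master
172.16.XXX.XX2   hadoop-worker1
172.16.XXX.XX3   hadoop-worker2
172.16.XXX.XX4   hadoop-worker3
```

The slaves file and the fs.default.name value would then reference these hostnames rather than raw IP addresses.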
From: xeon Mailinglist [mailto:xeonmailinglist@gmail.com]
Sent: February 7, 2014 2:54
To: user@hadoop.apache.org
主题: DisallowedDatanodeException: Datanode denied communication with namenode
I am trying to launch the datanodes in Hadoop MRv2, and I get the error below. I looked at the Hadoop conf files and /etc/hosts, and everything looks OK. What is wrong in my configuration?
org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0, storageID=DS-1286267910-172.16.XXX.XXX-50010-1391710467907, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-86007361-15b7-4022-ac5f-52ca83d98373;nsid=1884118048;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:631)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3398)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
The /etc/hosts file is properly configured, and so is Hadoop's slaves file. Here are my conf files:
172:~/Programs/hadoop/etc/hadoop# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property> <name>fs.default.name</name> <value>hdfs://172.16.YYY.YYY:9000</value> </property>
<property> <name>hadoop.tmp.dir</name> <value>/tmp/hadoop-temp</value> </property>
<!-- property><name>hadoop.proxyuser.xeon.hosts</name><value>*</value></property>
<property><name>hadoop.proxyuser.xeon.groups</name><value>*</value></property-->
</configuration>
172:~/Programs/hadoop/etc/hadoop# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property> <name>dfs.replication</name> <value>1</value> </property>
<property> <name>dfs.permissions</name> <value>false</value> </property>
<property> <name>dfs.name.dir</name> <value>/tmp/data/dfs/name/</value> </property>
<property> <name>dfs.data.dir</name> <value>/tmp/data/dfs/data/</value> </property>
</configuration>
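As an aside: if correct forward and reverse hostname resolution cannot be arranged, Hadoop 2.x also exposes dfs.namenode.datanode.registration.ip-hostname-check, which controls whether the namenode resolves a registering datanode's IP back to a hostname. A sketch of relaxing it (a workaround only, not a substitute for proper DNS or /etc/hosts entries):

```xml
<!-- hdfs-site.xml on the namenode; disables the reverse-DNS check
     during datanode registration (Hadoop 2.x). Workaround only. -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```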
172:~/Programs/hadoop/etc/hadoop# cat mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/root/Programs/hadoop/logs/history/done</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/root/Programs/hadoop/logs/history/intermediate-done-dir</value>
</property>
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapred.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
</configuration>
I don't use any dfs_hosts_allow.txt file. I also believe /etc/hosts is OK because I can reach all the nodes with ssh. Just for the record, each hostname is set to the node's IP address. Here is /etc/hosts:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.16.XXX.XX1 172.16.XXX.XX1
172.16.XXX.XX2 172.16.XXX.XX2
172.16.XXX.XX3 172.16.XXX.XX3
172.16.XXX.XX4 172.16.XXX.XX4
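To confirm the resolution problem the reply points at, each node's forward and reverse lookups can be checked by hand (a sketch assuming standard Linux tools; the cluster hostnames are whatever you put in /etc/hosts):

```shell
# Print this node's short and fully qualified hostname.
hostname
hostname -f

# Forward lookup: does this node's name resolve to the expected IP?
getent hosts "$(hostname -f)" || echo "hostname does not resolve -- fix /etc/hosts"

# Reverse lookup for one of the datanode IPs (replace with a real address):
# getent hosts 172.16.XXX.XX1
```

If the forward lookup returns nothing, or returns a loopback address instead of the node's cluster IP, the namenode will not be able to match the registering datanode against its expected hostname.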