Posted to common-dev@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2008/09/30 20:39:44 UTC
[jira] Commented: (HADOOP-4313) ease-of-use: missing fs.default.name should be caught and give a helpful message
[ https://issues.apache.org/jira/browse/HADOOP-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635805#action_12635805 ]
Allen Wittenauer commented on HADOOP-4313:
------------------------------------------
I was playing around with setting up a single node HDFS. As part of this experiment, I set the following in my hadoop-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/grid/3/tmp</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/grid/3/hadoop/var/hdfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/grid/3/hadoop/var/hdfs/data</value>
</property>
I clearly forgot to set fs.default.name before formatting the name node. Upon running start-all.sh, I was greeted with:
2008-09-30 18:14:07,565 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.NullPointerException
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:132)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:130)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:134)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:235)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:205)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1199)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1154)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1162)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1284)
with a similar error in the name node log.
This particular NPE is extremely unhelpful in letting someone know they forgot something in the config. Also, as a side note, the description says:
<description>The name of the default file system. Either the
literal string "local" or a hdfs://host:port for NDFS.</description>
Actually, using 'local' reports that it is deprecated, so we probably shouldn't offer it as an option in the config file. :)
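A fix could validate the setting before using it and fail fast with a clear message. Here is a minimal, hypothetical sketch of such a guard (ConfigCheck and requireFsDefaultName are illustration names, not actual Hadoop APIs, and java.util.Properties stands in for Hadoop's Configuration class):

```java
public class ConfigCheck {
    // Hypothetical sketch, not the actual Hadoop patch: fail fast with a
    // clear message when fs.default.name is missing, instead of letting
    // NetUtils.createSocketAddr hit a NullPointerException later.
    public static String requireFsDefaultName(java.util.Properties conf) {
        String target = conf.getProperty("fs.default.name");
        if (target == null || target.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "fs.default.name is not set; add it to hadoop-site.xml, "
                + "e.g. hdfs://host:port");
        }
        return target;
    }

    public static void main(String[] args) {
        // Simulate the misconfiguration from this report: no fs.default.name.
        java.util.Properties conf = new java.util.Properties();
        try {
            requireFsDefaultName(conf);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A check like this at daemon startup would turn the stack trace above into a one-line pointer at the missing property.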
> ease-of-use: missing fs.default.name should be caught and give a helpful message
> ---------------------------------------------------------------------------------
>
> Key: HADOOP-4313
> URL: https://issues.apache.org/jira/browse/HADOOP-4313
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: Allen Wittenauer
> Priority: Trivial
>
> Starting a new data node and name node with the default fs.default.name can trigger a null pointer exception with no helpful information as to why. Instead, it should suggest checking fs.default.name.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.