Posted to hdfs-dev@hadoop.apache.org by "Eli Collins (Created) (JIRA)" <ji...@apache.org> on 2012/02/05 00:09:55 UTC

[jira] [Created] (HDFS-2893) The 2NN won't start if dfs.namenode.secondary.http-address is default or specified with a wildcard IP and port

The 2NN won't start if dfs.namenode.secondary.http-address is default or specified with a wildcard IP and port
--------------------------------------------------------------------------------------------------------------

                 Key: HDFS-2893
                 URL: https://issues.apache.org/jira/browse/HDFS-2893
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 0.23.1
            Reporter: Eli Collins
            Priority: Critical


Looks like DFSUtil address matching doesn't find a match if the http-address is specified using a wildcard IP and a port. It should return 0.0.0.0:50090 in this case, which would allow the 2NN to start.
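For illustration, here is a minimal Java sketch of the intended check (the class and method names are hypothetical, not the actual DFSUtil code): a wildcard IP should be accepted as matching the local host rather than rejected.

{noformat}
import java.net.InetSocketAddress;

public class WildcardMatchSketch {
  // Hypothetical helper: decide whether a configured host:port should be
  // accepted for this host. A wildcard IP (0.0.0.0) binds every interface,
  // so it should be returned as-is (e.g. 0.0.0.0:50090) instead of failing
  // the comparison against concrete local addresses.
  static boolean matchesLocalHost(InetSocketAddress addr) {
    return addr.getAddress() != null && addr.getAddress().isAnyLocalAddress();
    // A real implementation would additionally compare non-wildcard
    // addresses against the local interfaces.
  }

  public static void main(String[] args) {
    InetSocketAddress addr = new InetSocketAddress("0.0.0.0", 50090);
    System.out.println(matchesLocalHost(addr)); // prints "true"
  }
}
{noformat}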

Also, unless http-address is explicitly configured in hdfs-site.xml, the 2NN will not start, since DFSUtil#getSecondaryNameNodeAddresses does not fall back to the default value. This may confuse people who expect the default to be used.
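A minimal sketch of the fallback, assuming Hadoop's Configuration#get(name, defaultValue) API (the class and method names below are illustrative, not the actual DFSUtil code; the key and default value are the ones shown in this report):

{noformat}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;

public class SecondaryAddressSketch {
  static final String KEY = "dfs.namenode.secondary.http-address";
  static final String DEFAULT_ADDR = "0.0.0.0:50090"; // the shipped default

  // Hypothetical variant of getSecondaryNameNodeAddresses: fall back to
  // the default instead of returning nothing when the key is unset.
  static InetSocketAddress getSecondaryHttpAddress(Configuration conf) {
    // Configuration#get(name, defaultValue) returns defaultValue when the
    // property is absent from hdfs-site.xml.
    String addr = conf.get(KEY, DEFAULT_ADDR);
    int colon = addr.lastIndexOf(':');
    return new InetSocketAddress(addr.substring(0, colon),
        Integer.parseInt(addr.substring(colon + 1)));
  }
}
{noformat}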

{noformat}
hadoop-0.23.1-SNAPSHOT $ cat /home/eli/hadoop/conf3/hdfs-site.xml
...
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>0.0.0.0:50090</value>
  </property>
</configuration>

hadoop-0.23.1-SNAPSHOT $ ./bin/hdfs --config ~/hadoop/conf3 getconf -secondarynamenodes
0.0.0.0
hadoop-0.23.1-SNAPSHOT $ ./sbin/start-dfs.sh 
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/eli/hadoop/dirs3/logs/eli/hadoop-eli-namenode-eli-thinkpad.out
localhost: starting datanode, logging to /home/eli/hadoop/dirs3/logs/eli/hadoop-eli-datanode-eli-thinkpad.out
Secondary namenodes are not configured.  Cannot start secondary namenodes.
{noformat}

This works if, e.g., localhost:50090 is used.
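For reference, the working variant is the same property as above with a concrete host substituted for the wildcard:

{noformat}
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>localhost:50090</value>
  </property>
{noformat}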

We should also update the HDFS user guide to remove the reference to the masters file, since it's no longer used to configure which hosts the 2NN runs on.


[jira] [Resolved] (HDFS-2893) The start/stop scripts don't start/stop the 2NN when using the default configuration

Posted by "Eli Collins (Resolved) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HDFS-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HDFS-2893.
-------------------------------

      Resolution: Fixed
    Hadoop Flags: Reviewed

Thanks, Todd and ATM. I committed this and merged it. I didn't run test-patch since this change only touches the start/stop scripts; I tested the generated tarball from branch-23 by hand.
                
> The start/stop scripts don't start/stop the 2NN when using the default configuration
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-2893
>                 URL: https://issues.apache.org/jira/browse/HDFS-2893
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.23.1
>            Reporter: Eli Collins
>            Assignee: Eli Collins
>            Priority: Minor
>         Attachments: hdfs-2893.txt
>
>
> HDFS-1703 changed the behavior of the start/stop scripts so that the masters file is no longer used to indicate which hosts to start the 2NN on. When using start-dfs.sh, the 2NN is now started only on hosts where dfs.namenode.secondary.http-address is configured with a non-wildcard IP, which means you cannot start a 2NN whose http-address is specified with a wildcard IP. We should allow a 2NN to be started with the default config, i.e. start-dfs.sh should start a NN, 2NN, and DN. The packaging already works this way (it doesn't use start-dfs.sh; it uses hadoop-daemon.sh directly without first checking getconf), so let's bring start-dfs.sh in line with that behavior.
