Posted to dev@ambari.apache.org by "Ivan Mitic (JIRA)" <ji...@apache.org> on 2014/11/10 05:53:33 UTC

[jira] [Created] (AMBARI-8244) Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs

Ivan Mitic created AMBARI-8244:
----------------------------------

             Summary: Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
                 Key: AMBARI-8244
                 URL: https://issues.apache.org/jira/browse/AMBARI-8244
             Project: Ambari
          Issue Type: Bug
          Components: stacks
    Affects Versions: 2.0.0
            Reporter: Ivan Mitic


Right now, changing the default file system does not work with the HDP 2.0.6+ stacks. Given that it might be common to run HDP against some other file system in the cloud, adding support for this would be very useful. One alternative is to define a separate stack for other file systems; however, since I found only two minor bugs standing in the way, I would rather extend the existing code.

Bugs:
 - One issue is in the Nagios install scripts, which assume that {{fs.defaultFS}} contains the namenode port number (illustrated in the sketch below).
 - Another issue is in the HDFS install scripts, where the {{hadoop dfsadmin}} command only works when hdfs is the default file system.
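
To make the first issue concrete, here is a minimal illustration (the parsing function and example values are hypothetical, not the actual Nagios script): taking the port from {{fs.defaultFS}} breaks as soon as the default file system is not hdfs.

{code}
# Hypothetical illustration, not the actual Ambari/Nagios code:
# when fs.defaultFS is not an hdfs:// URI, it carries no namenode port.
from urlparse import urlparse  # Python 2, as used by the Ambari agent scripts

def namenode_port_from_default_fs(fs_default_fs):
    parsed = urlparse(fs_default_fs)
    return parsed.port  # None for e.g. a wasb:// or s3:// default file system

print namenode_port_from_default_fs("hdfs://nn.example.com:8020")                 # 8020
print namenode_port_from_default_fs("wasb://data@account.blob.core.windows.net")  # None
{code}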

The fix for both places is to extract the namenode address/port from {{dfs.namenode.rpc-address}} if it is defined, and to use that instead of relying on {{fs.defaultFS}}.
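
A minimal sketch of that lookup order (the helper name and the config dictionaries are hypothetical; this is the idea behind the fix, not the actual patch):

{code}
# Hypothetical helper sketching the proposed lookup order:
# prefer dfs.namenode.rpc-address when set, fall back to fs.defaultFS otherwise.
def get_namenode_address(hdfs_site, core_site):
    rpc_address = hdfs_site.get('dfs.namenode.rpc-address')
    if rpc_address:
        return rpc_address  # e.g. "nn.example.com:8020"
    # Fallback: only meaningful while hdfs is still the default file system.
    return core_site['fs.defaultFS'].replace('hdfs://', '')

# The resolved address can then be passed explicitly, e.g.
#   hadoop dfsadmin -fs hdfs://<namenode_address> -safemode get
# instead of relying on the default file system.
{code}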

I haven't included any tests yet (this is my first Ambari patch and I am not sure what is appropriate, so please comment).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)