Posted to dev@ambari.apache.org by "Jayush Luniya (JIRA)" <ji...@apache.org> on 2015/03/18 07:21:38 UTC
[jira] [Updated] (AMBARI-8244) Ambari HDP 2.0.6+ stacks do not work
with fs.defaultFS not being hdfs
[ https://issues.apache.org/jira/browse/AMBARI-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jayush Luniya updated AMBARI-8244:
----------------------------------
Fix Version/s: (was: 2.0.0)
2.1.0
> Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
> ---------------------------------------------------------------------
>
> Key: AMBARI-8244
> URL: https://issues.apache.org/jira/browse/AMBARI-8244
> Project: Ambari
> Issue Type: Bug
> Components: stacks
> Affects Versions: 2.0.0
> Reporter: Ivan Mitic
> Assignee: Ivan Mitic
> Labels: HDP
> Fix For: 2.1.0
>
> Attachments: AMBARI-8244.2.patch, AMBARI-8244.3.patch, AMBARI-8244.4.patch, AMBARI-8244.5.patch, AMBARI-8244.patch
>
>
> Right now, changing the default file system does not work with the HDP 2.0.6+ stacks. Given that it is common to run HDP against other file systems in the cloud, adding support for this would be very useful. One alternative is a separate stack definition for other file systems; however, since I noticed that only two minor bug fixes are needed to support this, I would rather extend the existing code.
> Bugs:
> - One issue is in the Nagios install scripts, which assume that {{fs.defaultFS}} contains the NameNode port number.
> - Another issue is in the HDFS install scripts, where the {{hadoop dfsadmin}} command works only when HDFS is the default file system.
> The fix in both places is to extract the NameNode address/port from {{dfs.namenode.rpc-address}} when it is defined, and use that instead of relying on {{fs.defaultFS}}.
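> The resolution logic described above can be sketched roughly as follows. This is a minimal illustration, not the actual patch: the helper name and the dict-shaped config inputs (standing in for Ambari's parsed hdfs-site/core-site properties) are hypothetical; only the Hadoop property names are real.

```python
from urllib.parse import urlparse

def namenode_address(hdfs_site, core_site):
    """Return (host, port) for the NameNode RPC endpoint.

    Prefer dfs.namenode.rpc-address when it is defined; fall back to
    fs.defaultFS only when the default file system is actually HDFS.
    hdfs_site/core_site are plain dicts of property name -> value
    (a hypothetical stand-in for Ambari's parsed *-site configs).
    """
    rpc = hdfs_site.get("dfs.namenode.rpc-address")
    if rpc:
        host, _, port = rpc.partition(":")
        return host, int(port) if port else 8020  # 8020: default HDFS RPC port
    fs = core_site.get("fs.defaultFS", "")
    parsed = urlparse(fs)
    if parsed.scheme == "hdfs":
        return parsed.hostname, parsed.port or 8020
    raise ValueError("fs.defaultFS is not HDFS and dfs.namenode.rpc-address is unset")
```

> With this shape, a cluster whose {{fs.defaultFS}} points at a cloud file system still resolves the NameNode correctly as long as {{dfs.namenode.rpc-address}} is set.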
> Haven't included any tests yet (my first Ambari patch, not sure what is appropriate, so please comment).
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)