Posted to issues@ambari.apache.org by "Attila Magyar (JIRA)" <ji...@apache.org> on 2018/07/19 08:31:00 UTC
[jira] [Resolved] (AMBARI-24273) hadoop-env is not regenerated when OneFS is used as a FileSystem
[ https://issues.apache.org/jira/browse/AMBARI-24273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Attila Magyar resolved AMBARI-24273.
------------------------------------
Resolution: Fixed
> hadoop-env is not regenerated when OneFS is used as a FileSystem
> ----------------------------------------------------------------
>
> Key: AMBARI-24273
> URL: https://issues.apache.org/jira/browse/AMBARI-24273
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.7.0
> Reporter: Attila Magyar
> Assignee: Attila Magyar
> Priority: Major
> Labels: pull-request-available
> Fix For: 2.7.1
>
> Time Spent: 1h
> Remaining Estimate: 0h
>
> The before-ANY/shared_initialization.py hook only regenerates hadoop-env if there is a namenode or dfs_type is set to HCFS:
> {code}
> def hook(self, env):
>   import params
>   env.set_params(params)
>   setup_users()
>   if params.has_namenode or params.dfs_type == 'HCFS':
>     setup_hadoop_env()
>   setup_java()
> {code}
> This assumption no longer holds, because the latest ambari-server sets dfs_type as follows:
> {code}
> Map<String, ServiceInfo> serviceInfos = ambariMetaInfo.getServices(stackId.getStackName(), stackId.getStackVersion());
> for (ServiceInfo serviceInfoInstance : serviceInfos.values()) {
>   if (serviceInfoInstance.getServiceType() != null) {
>     LOG.debug("Adding {} to command parameters for {}", serviceInfoInstance.getServiceType(),
>         serviceInfoInstance.getName());
>     clusterLevelParams.put(DFS_TYPE, serviceInfoInstance.getServiceType());
>     break;
>   }
> }
> {code}
> This iterates over all of the stack services and finds HDFS first, so dfs_type ends up as HDFS instead of HCFS.
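The failure mode described above can be sketched in a few lines of Python. This is an illustrative simulation only: the service names, ordering, and serviceType values are assumptions standing in for whatever the real stack metainfo provides, not code from Ambari itself.

```python
# Illustrative sketch of the dfs_type selection bug (AMBARI-24273).
# Service map contents are hypothetical; real values come from the stack metainfo.

def pick_dfs_type(services):
    """Mimics the Java loop: the first service with a non-null
    serviceType wins, regardless of which filesystem is in use."""
    for service in services.values():
        if service.get("serviceType") is not None:
            return service["serviceType"]
    return None

# A stack that ships metainfo for both HDFS and a OneFS-style HCFS service;
# iteration happens to reach HDFS first.
stack_services = {
    "HDFS": {"serviceType": "HDFS"},
    "ONEFS": {"serviceType": "HCFS"},
}

dfs_type = pick_dfs_type(stack_services)

# The hook condition from shared_initialization.py then evaluates False:
# there is no NAMENODE component when OneFS provides the filesystem,
# and dfs_type is "HDFS" rather than "HCFS", so hadoop-env is skipped.
has_namenode = False
regenerate = has_namenode or dfs_type == "HCFS"
print(dfs_type, regenerate)  # HDFS False
```

The sketch shows why the `break` on the first non-null serviceType is the root of the problem: the chosen value depends on iteration order rather than on the filesystem the cluster actually uses.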
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)