Posted to issues@ambari.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2018/07/18 22:17:00 UTC

[jira] [Commented] (AMBARI-24273) hadoop-env is not regenerated when OneFS is used as a FileSystem

    [ https://issues.apache.org/jira/browse/AMBARI-24273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548478#comment-16548478 ] 

Hudson commented on AMBARI-24273:
---------------------------------

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9614 (See [https://builds.apache.org/job/Ambari-trunk-Commit/9614/])
AMBARI-24273. hadoop-env is not regenerated when OneFS is used as a (github: [https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=e6061fd4bc3ca87cd09cfad08b1233afd9779982])
* (edit) ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java


> hadoop-env is not regenerated when OneFS is used as a FileSystem
> ----------------------------------------------------------------
>
>                 Key: AMBARI-24273
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24273
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.7.0
>            Reporter: Attila Magyar
>            Assignee: Attila Magyar
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.1
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The before-ANY/shared_initialization.py hook only regenerates hadoop-env if there is a namenode or dfs_type is set to HCFS:
> {code}
>   def hook(self, env):
>     import params
>     env.set_params(params)
>     setup_users()
>     if params.has_namenode or params.dfs_type == 'HCFS':
>       setup_hadoop_env()
>     setup_java()
> {code}
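The guard above boils down to a small predicate. A minimal sketch (the parameter names mirror the `params` module used by the hook; the standalone function is an illustration, not Ambari code) shows that on a OneFS cluster, which has no namenode, everything hinges on the value of dfs_type:

```python
# Sketch of the guard in before-ANY/shared_initialization.py:
# hadoop-env is regenerated only when this predicate is true.
def should_setup_hadoop_env(has_namenode, dfs_type):
    return has_namenode or dfs_type == 'HCFS'

# OneFS cluster: no namenode, so dfs_type decides everything.
print(should_setup_hadoop_env(False, 'HCFS'))  # True  - hadoop-env regenerated
print(should_setup_hadoop_env(False, 'HDFS'))  # False - hadoop-env skipped
```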
> This check no longer works, because the latest ambari-server sets dfs_type as follows:
> {code}
>     Map<String, ServiceInfo> serviceInfos = ambariMetaInfo.getServices(stackId.getStackName(), stackId.getStackVersion());
>     for (ServiceInfo serviceInfoInstance : serviceInfos.values()) {
>       if (serviceInfoInstance.getServiceType() != null) {
>         LOG.debug("Adding {} to command parameters for {}", serviceInfoInstance.getServiceType(),
>             serviceInfoInstance.getName());
>         clusterLevelParams.put(DFS_TYPE, serviceInfoInstance.getServiceType());
>         break;
>       }
>     }
> {code}
> This iterates over all of the stack services and finds HDFS first, so dfs_type ends up being HDFS instead of HCFS.
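The effect of the break-on-first-match loop can be sketched in Python (the service names, types, and iteration order below are illustrative stand-ins for what `ambariMetaInfo.getServices()` returns, not the actual stack definition):

```python
# Hypothetical stack services mapped to their declared service type.
# Dict insertion order stands in for the iteration order of the Java map.
services = {'HDFS': 'HDFS', 'ONEFS': 'HCFS', 'YARN': None}

dfs_type = None
for name, service_type in services.items():
    if service_type is not None:
        dfs_type = service_type  # first non-null service type wins
        break                    # OneFS's 'HCFS' type is never reached

print(dfs_type)  # 'HDFS'
```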



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)