Posted to dev@ambari.apache.org by "Antonenko Alexander (JIRA)" <ji...@apache.org> on 2015/03/18 23:07:38 UTC

[jira] [Assigned] (AMBARI-8620) weird directory suggestions upon Docker containers

     [ https://issues.apache.org/jira/browse/AMBARI-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Antonenko Alexander reassigned AMBARI-8620:
-------------------------------------------

    Assignee: Antonenko Alexander  (was: jun aoki)

> weird directory suggestions upon Docker containers
> --------------------------------------------------
>
>                 Key: AMBARI-8620
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8620
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.0.0
>            Reporter: jun aoki
>            Assignee: Antonenko Alexander
>              Labels: patch
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-8620.patch, screenshot-1.png, screenshot-2.png
>
>
> The Ambari cluster install wizard recommends some directory settings (NameNode directories, ZooKeeper directory, etc.) based on the directories mounted on the Linux system.
> The recommendation logic is, briefly:
> 1. Hit the cluster hosts API, e.g. http://host:8080/api/v1/clusters/cluster1/hosts/agent1.mydomain.com
> {code}
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com",
> Hosts: {
> cluster_name: "aaa",
> cpu_count: 8,
> disk_info: [
> {
> available: "5911904",
> used: "3737524",
> percent: "39%",
> size: "10190100",
> type: "rootfs",
> mountpoint: "/"
> },
> {
> available: "5911904",
> used: "3737524",
> percent: "39%",
> size: "10190100",
> type: "ext4",
> mountpoint: "/"
> },
> {
> available: "4005892",
> used: "0",
> percent: "0%",
> size: "4005892",
> type: "tmpfs",
> mountpoint: "/dev"
> },
> {
> available: "65536",
> used: "0",
> percent: "0%",
> size: "65536",
> type: "tmpfs",
> mountpoint: "/dev/shm"
> },
> {
> available: "22421136",
> used: "15874140",
> percent: "42%",
> size: "38295276",
> type: "xfs",
> mountpoint: "/etc/resolv.conf"
> },
> {
> available: "22421136",
> used: "15874140",
> percent: "42%",
> size: "38295276",
> type: "xfs",
> mountpoint: "/etc/hostname"
> },
> {
> available: "22421136",
> used: "15874140",
> percent: "42%",
> size: "38295276",
> type: "xfs",
> mountpoint: "/etc/hosts"
> }
> ],
> host_health_report: "",
> host_name: "agent1.mydomain.com",
> host_state: "HEALTHY",
> host_status: "UNHEALTHY",
> ip: "172.17.0.8",
> last_agent_env: {
> stackFoldersAndFiles: [ ],
> alternatives: [ ],
> existingUsers: [ ],
> existingRepos: [
> "unable_to_determine"
> ],
> installedPackages: [ ],
> hostHealth: {
> activeJavaProcs: [ ],
> agentTimeStampAtReporting: 1418160197099,
> serverTimeStampAtReporting: 1418160197173,
> liveServices: [
> {
> desc: "ntpd is stopped ",
> name: "ntpd",
> status: "Unhealthy"
> }
> ]
> },
> umask: 18,
> transparentHugePage: "",
> iptablesIsRunning: true,
> reverseLookup: true
> },
> last_heartbeat_time: 1418160197173,
> last_registration_time: 1418097149332,
> maintenance_state: "OFF",
> os_arch: "x86_64",
> os_type: "centos6",
> ph_cpu_count: 8,
> public_host_name: "agent1.mydomain.com",
> rack_info: "/default-rack",
> total_mem: 8011788,
> desired_configs: {
> capacity-scheduler: {
> default: "version1"
> },
> cluster-env: {
> default: "version1"
> },
> core-site: {
> default: "version1"
> },
> ganglia-env: {
> default: "version1"
> },
> hadoop-env: {
> default: "version1"
> },
> hadoop-policy: {
> default: "version1"
> },
> hdfs-log4j: {
> default: "version1"
> },
> hdfs-site: {
> default: "version1"
> },
> mapred-env: {
> default: "version1"
> },
> mapred-site: {
> default: "version1"
> },
> nagios-env: {
> default: "version1"
> },
> pig-env: {
> default: "version1"
> },
> pig-log4j: {
> default: "version1"
> },
> pig-properties: {
> default: "version1"
> },
> tez-env: {
> default: "version1"
> },
> tez-site: {
> default: "version1"
> },
> yarn-env: {
> default: "version1"
> },
> yarn-log4j: {
> default: "version1"
> },
> yarn-site: {
> default: "version1"
> },
> zoo.cfg: {
> default: "version1"
> },
> zookeeper-env: {
> default: "version1"
> },
> zookeeper-log4j: {
> default: "version1"
> }
> }
> },
> host_components: [
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/DATANODE",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "DATANODE",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/GANGLIA_MONITOR",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "GANGLIA_MONITOR",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/GANGLIA_SERVER",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "GANGLIA_SERVER",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/HDFS_CLIENT",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "HDFS_CLIENT",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/MAPREDUCE2_CLIENT",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "MAPREDUCE2_CLIENT",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/NAGIOS_SERVER",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "NAGIOS_SERVER",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/NAMENODE",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "NAMENODE",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/NODEMANAGER",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "NODEMANAGER",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/PIG",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "PIG",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/SPARK_CLIENT",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "SPARK_CLIENT",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/TEZ_CLIENT",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "TEZ_CLIENT",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/YARN_CLIENT",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "YARN_CLIENT",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/ZOOKEEPER_CLIENT",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "ZOOKEEPER_CLIENT",
> host_name: "agent1.mydomain.com"
> }
> },
> {
> href: "http://ambari_automation_centos7:8080/api/v1/clusters/aaa/hosts/agent1.mydomain.com/host_components/ZOOKEEPER_SERVER",
> HostRoles: {
> cluster_name: "aaa",
> component_name: "ZOOKEEPER_SERVER",
> host_name: "agent1.mydomain.com"
> }
> }
> ]
> }
> {code}
> 2. Filter out the "/", "/home", and "/boot" mount points
> 3. Filter out the devtmpfs, tmpfs, and vboxsf file system types (a rough sketch of this filtering is given below)
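> A minimal, illustrative sketch of this suggestion logic (the names below, such as suggestDirectories, are hypothetical and not the actual ambari-web code):
> {code}
> // Sketch in TypeScript, assuming disk_info entries shaped like the API output above.
> interface DiskInfo {
>   type: string;
>   mountpoint: string;
> }
>
> // Mount points and file system types excluded by steps 2 and 3.
> const IGNORED_MOUNTPOINTS = ["/", "/home", "/boot"];
> const IGNORED_FS_TYPES = ["devtmpfs", "tmpfs", "vboxsf"];
>
> // Hypothetical helper: turn a host's disk_info into suggested directories
> // by appending a component-specific suffix to each surviving mount point.
> function suggestDirectories(diskInfo: DiskInfo[], suffix: string): string[] {
>   return diskInfo
>     .filter(d => IGNORED_MOUNTPOINTS.indexOf(d.mountpoint) === -1)
>     .filter(d => IGNORED_FS_TYPES.indexOf(d.type) === -1)
>     .map(d => d.mountpoint.replace(/\/+$/, "") + suffix);
> }
>
> // Two entries taken (abbreviated) from the disk_info shown above.
> const hostDiskInfo: DiskInfo[] = [
>   { type: "ext4", mountpoint: "/" },
>   { type: "xfs", mountpoint: "/etc/resolv.conf" },
> ];
>
> // "/" is dropped by the mount point filter, but Docker's bind mount
> // /etc/resolv.conf passes both filters, so this prints
> // [ '/etc/resolv.conf/hadoop/hdfs/namenode' ], the weird path reported below.
> console.log(suggestDirectories(hostDiskInfo, "/hadoop/hdfs/namenode"));
> {code}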
> The problem is that, in a Docker environment, the xfs mounts Docker creates for files such as /etc/resolv.conf and /etc/hostname survive this filtering, so the suggested directories get concatenated onto them.
> As a result, the recommended directory paths are weird, e.g.:
> /etc/resolv.conf/hadoop/hdfs/namenode
> /etc/hostname/hadoop/hdfs/namenode
> /etc/hosts/hadoop/hdfs/namenode
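> One possible guard, sketched here only as an assumption (not necessarily what the attached AMBARI-8620.patch does), would be to also skip the files Docker bind-mounts into every container:
> {code}
> // Hypothetical extra exclusion list for Docker's single-file bind mounts.
> const DOCKER_FILE_MOUNTS = ["/etc/resolv.conf", "/etc/hostname", "/etc/hosts"];
>
> // Applied on top of the filters in the sketch above:
> //   .filter(d => DOCKER_FILE_MOUNTS.indexOf(d.mountpoint) === -1)
> {code}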



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)