Posted to user@hadoop.apache.org by SeyyedAhmad Javadi <sj...@cs.stonybrook.edu> on 2018/04/19 22:59:22 UTC

Regarding Docker container run_time setup

Hi All,

I am following the guide below to set up the Docker container runtime, but I
am running into some errors that are non-trivial, at least at my level. Would
you please comment if you have any idea about the root cause?
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html

After the error, I have provided the config files and Dockerfile, as well as
the output of `docker inspect` on the image (should it have null for
Entrypoint and Cmd?).

I have three nodes (1 RM and 2 NMs), and the default LCE works fine.

********************submit job script
vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=hadoop-ubuntu,YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE=false,YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=host"

#vars="YARN_CONTAINER_RUNTIME_TYPE=default"
hadoop jar
/home/ubuntu/hadoop-3.1.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar
pi -Dyarn.app.mapreduce.am.env=$vars -Dmapreduce.map.env=$vars
-Dmapreduce.reduce.env=$vars 2 10
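For clarity, the vars string above is just comma-separated KEY=VALUE pairs that YARN splits and injects into the container environment. A quick sketch of that structure (Python, purely illustrative; not YARN's actual parsing code):

```python
# Sketch of the comma-separated env string passed via
# -Dyarn.app.mapreduce.am.env / -Dmapreduce.map.env (illustrative only).
docker_env = {
    "YARN_CONTAINER_RUNTIME_TYPE": "docker",
    "YARN_CONTAINER_RUNTIME_DOCKER_IMAGE": "hadoop-ubuntu",
    "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE": "false",
    "YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK": "host",
}

# Join into the single comma-separated form used on the command line.
vars_string = ",".join(f"{k}={v}" for k, v in docker_env.items())

# Splitting it back recovers each KEY=VALUE pair intact.
parsed = dict(pair.split("=", 1) for pair in vars_string.split(","))
assert parsed == docker_env
print(vars_string)
```

Note this only works because none of the values here contain a comma.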

******************** AM log on one of the nodes
2018-04-19 18:55:46,311 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
Auth successful for appattempt_1524178188987_0001_000001 (auth:SIMPLE)
2018-04-19 18:55:46,515 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Start request for container_1524178188987_0001_01_000001 by user ubuntu
2018-04-19 18:55:46,617 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Creating a new application reference for app application_1524178188987_0001
2018-04-19 18:55:46,634 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524178188987_0001 transitioned from NEW to INITING
2018-04-19 18:55:46,634 INFO
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu
IP=130.245.127.176    OPERATION=Start Container Request
TARGET=ContainerManageImpl    RESULT=SUCCESS
APPID=application_1524178188987_0001
CONTAINERID=container_1524178188987_0001_01_000001
2018-04-19 18:55:46,635 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Adding container_1524178188987_0001_01_000001 to application
application_1524178188987_0001
2018-04-19 18:55:46,649 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524178188987_0001 transitioned from INITING to
RUNNING
2018-04-19 18:55:46,655 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524178188987_0001_01_000001 transitioned from NEW to
LOCALIZING
2018-04-19 18:55:46,655 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_INIT for appId application_1524178188987_0001
2018-04-19 18:55:46,698 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
Created localizer for container_1524178188987_0001_01_000001
2018-04-19 18:55:46,898 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
Writing credentials to the nmPrivate file
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524178188987_0001_01_000001.tokens
2018-04-19 18:55:50,371 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524178188987_0001_01_000001 transitioned from
LOCALIZING to SCHEDULED
2018-04-19 18:55:50,374 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
Starting container [container_1524178188987_0001_01_000001]
2018-04-19 18:55:50,479 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524178188987_0001_01_000001 transitioned from
SCHEDULED to RUNNING
2018-04-19 18:55:50,481 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Starting resource-monitoring for container_1524178188987_0001_01_000001
2018-04-19 18:55:51,842 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Container container_1524178188987_0001_01_000001 succeeded
2018-04-19 18:55:51,844 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524178188987_0001_01_000001 transitioned from RUNNING
to EXITED_WITH_SUCCESS
2018-04-19 18:55:51,844 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Cleaning up container container_1524178188987_0001_01_000001
2018-04-19 18:55:51,957 INFO
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Removing
Docker container : container_1524178188987_0001_01_000001
2018-04-19 18:55:56,963 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Could not get pid for container_1524178188987_0001_01_000001. Waited for
5000 ms.
2018-04-19 18:55:56,963 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Unable to obtain pid, but docker container request detected. Attempting to
reap container container_1524178188987_0001_01_000001
2018-04-19 18:55:59,395 INFO
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting
absolute path :
/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524178188987_0001/container_1524178188987_0001_01_000001
2018-04-19 18:55:59,395 INFO
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu
OPERATION=Container Finished - Succeeded    TARGET=ContainerImpl
RESULT=SUCCESS    APPID=application_1524178188987_0001
CONTAINERID=container_1524178188987_0001_01_000001
2018-04-19 18:55:59,403 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524178188987_0001_01_000001 transitioned from
EXITED_WITH_SUCCESS to DONE
2018-04-19 18:55:59,404 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Removing container_1524178188987_0001_01_000001 from application
application_1524178188987_0001
2018-04-19 18:55:59,404 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Stopping resource-monitoring for container_1524178188987_0001_01_000001
2018-04-19 18:55:59,405 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_STOP for appId application_1524178188987_0001
2018-04-19 18:56:00,412 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed
completed containers from NM context:
[container_1524178188987_0001_01_000001]
2018-04-19 18:56:13,455 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524178188987_0001 transitioned from RUNNING to
APPLICATION_RESOURCES_CLEANINGUP
2018-04-19 18:56:13,457 INFO
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting
absolute path :
/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524178188987_0001
2018-04-19 18:56:13,459 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event APPLICATION_STOP for appId application_1524178188987_0001
2018-04-19 18:56:13,470 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524178188987_0001 transitioned from
APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2018-04-19 18:56:13,470 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
Scheduling Log Deletion for application: application_1524178188987_0001,
with delay of 10800 seconds

**********************




yarn-site.xml: configured as described in the guide linked above
container-executor.cfg:

yarn.nodemanager.linux-container-executor.group=ubuntu
min.user.id=0
#feature.tc.enabled=1
#feature.docker.enabled=1
allowed.system.users=ubuntu
# The configs below deal with settings for Docker
[docker]
module.enabled=true
docker.privileged-containers.enabled=true
docker.binary=/usr/bin/docker
docker.allowed.capabilities=SYS_CHROOT,MKNOD,SETFCAP,SETPCAP,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,FOWNER,SETUID,DAC_OVERRIDE,KILL,NET_BIND_SERVICE
#docker.allowed.devices=## comma-separated list of devices that can be mounted into a container
docker.allowed.networks=bridge,host,none
docker.allowed.ro-mounts=/sys/fs/cgroup,/tmp/hadoop-ubuntu/nm-local-dir
docker.privileged-containers.registries=local
#docker.host-pid-namespace.enabled=false
docker.allowed.rw-mounts=/home/ubuntu/hadoop-3.1.0,/home/ubuntu/hadoop-3.1.0/logs
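As I understand it (treat this as an assumption on my part), container-executor only permits `-v` bind mounts whose source path falls under one of the allowed ro-mounts/rw-mounts prefixes above. Roughly the following check, sketched in Python with made-up names, not the actual C implementation:

```python
import os

# Hedged sketch of the allow-list check applied to requested bind mounts;
# the function and the "ro can also come from the rw list" detail are my
# assumptions, not verified against the container-executor source.
ALLOWED_RW = ["/home/ubuntu/hadoop-3.1.0", "/home/ubuntu/hadoop-3.1.0/logs"]
ALLOWED_RO = ["/sys/fs/cgroup", "/tmp/hadoop-ubuntu/nm-local-dir"]

def mount_permitted(source: str, mode: str) -> bool:
    """Return True if `source` sits under an allowed prefix for `mode`."""
    allowed = ALLOWED_RW if mode == "rw" else ALLOWED_RW + ALLOWED_RO
    norm = os.path.normpath(source)
    return any(norm == p or norm.startswith(p + "/") for p in allowed)

# The NM's filecache mount in the logs below is under an ro prefix.
assert mount_permitted("/tmp/hadoop-ubuntu/nm-local-dir/filecache", "ro")
# An arbitrary path outside the lists would be rejected.
assert not mount_permitted("/etc", "rw")
```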


Dockerfile:
FROM ubuntu:16.04
#RUN rm /bin/sh && ln -s /bin/bash /bin/sh
SHELL ["/bin/bash", "-c"]

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y  software-properties-common && \
#    apt-get install -y --no-install-recommends apt-utils && \
#    apt-get install -y  curl && \
    add-apt-repository ppa:webupd8team/java -y && \
    apt-get update && \
    echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
    apt-get install -y oracle-java8-installer && \
#    apt-get install -y ssh && \
#    apt-get install -y rsync && \
    apt-get install -y vim && \
    apt-get clean

ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
ENV PATH $PATH:$JAVA_HOME/bin

# HADOOP
ARG HADOOP_ARCHIVE=http://mirror.cc.columbia.edu/pub/software/apache/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz

ENV HADOOP_HOME /usr/local/hadoop
ENV HADOOP_COMMON_PATH /usr/local/hadoop
ENV HADOOP_HDFS_HOME /usr/local/hadoop
ENV HADOOP_MAPRED_HOME /usr/local/hadoop
ENV HADOOP_YARN_HOME /usr/local/hadoop
ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop

# download and extract hadoop, set JAVA_HOME in hadoop-env.sh, update path
RUN wget $HADOOP_ARCHIVE && \
  tar -xzf hadoop-3.1.0.tar.gz && \
  mv hadoop-3.1.0 $HADOOP_HOME

ADD rm-hadoop-config/* $HADOOP_HOME/etc/hadoop/

ENV PATH $PATH:$HADOOP_COMMON_PATH/bin

WORKDIR $HADOOP_COMMON_PATH

# Declare user
RUN groupadd -g 1000 ubuntu && \
    useradd -r -u 1000  -g 1000 ubuntu
USER ubuntu
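To double-check my Entrypoint/Cmd question, I ran the inspect output below through a short script; here is the sketch against a trimmed sample of the JSON (sample abbreviated by hand from the real output):

```python
import json

# Abbreviated sample of `docker inspect local/hadoop-ubuntu`; only the
# fields relevant to the Entrypoint/Cmd question are kept.
sample = """
[{"Config": {"User": "ubuntu",
             "Cmd": ["/bin/bash"],
             "Entrypoint": null,
             "WorkingDir": "/usr/local/hadoop"}}]
"""

config = json.loads(sample)[0]["Config"]

# My understanding (an assumption): YARN's docker runtime supplies its own
# launch command, so a null Entrypoint should be acceptable here.
print("Entrypoint:", config["Entrypoint"])
print("Cmd:", config["Cmd"])
assert config["Entrypoint"] is None
```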


~/hadoop-common$ docker inspect local/hadoop-ubuntu
[
    {
        "Id":
"sha256:d8335693084b5823675056b7d649b13d04a7e3c3b63688f83e9807405506b088",
        "RepoTags": [
            "hadoop-ubuntu:latest",
            "local/hadoop-ubuntu:latest"
        ],
        "RepoDigests": [],
        "Parent":
"sha256:f9b92fa15eadd74b9e3712ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
        "Comment": "",
        "Created": "2018-04-19T15:16:52.903547395Z",
        "Container":
"4fcff6fa65639fe0e2a9379a335a401a8cde0c0e1eaaa2dd98d35ced402c29e3",
        "ContainerConfig": {
            "Hostname": "4fcff6fa6563",
            "Domainname": "",
            "User": "ubuntu",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [

"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/local/hadoop/bin",
                "JAVA_HOME=/usr/lib/jvm/java-8-oracle",
                "HADOOP_HOME=/usr/local/hadoop",
                "HADOOP_COMMON_PATH=/usr/local/hadoop",
                "HADOOP_HDFS_HOME=/usr/local/hadoop",
                "HADOOP_MAPRED_HOME=/usr/local/hadoop",
                "HADOOP_YARN_HOME=/usr/local/hadoop",
                "HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop"
            ],
            "Cmd": [
                "/bin/bash",
                "-c",
                "#(nop) ",
                "USER ubuntu"
            ],
            "ArgsEscaped": true,
            "Image":
"sha256:f9b92fa15eadd74b9e3712ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
            "Volumes": null,
            "WorkingDir": "/usr/local/hadoop",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {},
            "Shell": [
                "/bin/bash",
                "-c"
            ]
        },
        "DockerVersion": "18.03.0-ce",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "ubuntu",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [

"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/local/hadoop/bin",
                "JAVA_HOME=/usr/lib/jvm/java-8-oracle",
                "HADOOP_HOME=/usr/local/hadoop",
                "HADOOP_COMMON_PATH=/usr/local/hadoop",
                "HADOOP_HDFS_HOME=/usr/local/hadoop",
                "HADOOP_MAPRED_HOME=/usr/local/hadoop",
                "HADOOP_YARN_HOME=/usr/local/hadoop",
                "HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "ArgsEscaped": true,
            "Image":
"sha256:f9b92fa15eadd74b9e3712ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
            "Volumes": null,
            "WorkingDir": "/usr/local/hadoop",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null,
            "Shell": [
                "/bin/bash",
                "-c"
            ]
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 2058935914,
        "VirtualSize": 2058935914,
        "GraphDriver": {
            "Data": null,
            "Name": "aufs"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [

"sha256:fccbfa2912f0cd6b9d13f91f288f112a2b825f3f758a4443aacb45bfc108cc74",

"sha256:e1a9a6284d0d24d8194ac84b372619e75cd35a46866b74925b7274c7056561e4",

"sha256:ac7299292f8b2f710d3b911c6a4e02ae8f06792e39822e097f9c4e9c2672b32d",

"sha256:a5e66470b2812e91798db36eb103c1f1e135bbe167e4b2ad5ba425b8db98ee8d",

"sha256:a8de0e025d94b33db3542e1e8ce58829144b30c6cd1fff057eec55b1491933c3",

"sha256:7e9a788452589001d42e7995dc0583bcca1e6f7780a301066ee0d6668aaf9c91",

"sha256:65fac5b99df0506e1d204d9024264105ab2fe142ea970f0eba630669dc606055",

"sha256:cd0da20f97700ab8e2ec37c464fcb8864cba86672aa96e7fc40e2572228895e2",

"sha256:21094366bc298af4a7e83680cf9d732ce69cb92b11a3b20005b12c15bac3e486"
            ]
        },
        "Metadata": {
            "LastTagTime": "2018-04-19T17:20:41.054733026-04:00"
        }
    }
]




Best,
Ahmad

Re: Regarding Docker container run_time setup

Posted by Shane Kumpf <sh...@gmail.com>.
Hi Ahmad,

This looks to be a classpath related issue. IIUC, Hadoop is installed at
/home/ubuntu/hadoop-3.1.0 on the host and at /usr/local/hadoop within the
image/container? I noticed "ADD rm-hadoop-config/* $HADOOP_HOME/etc/hadoop/"
in the Dockerfile. Is rm-hadoop-config a copy of the configuration from the
host?

I'm wondering if the configuration in the container may be referring to
/home/ubuntu/hadoop-3.1.0 for the classpaths instead of /usr/local/hadoop?
If that is the case, it may be easier to have both the container and host
refer to the same installation path by updating your Dockerfile to use
/home/ubuntu/hadoop-3.1.0. If not, could you share your mapred-site.xml and
yarn-site.xml from the host and in the container?

Alternatively, you could bind mount the host's Hadoop binaries/configs into
the container at the same path. See the "Using Bind Mounted Docker Volumes"
in the documentation for more on that feature [1].

Thanks,
-Shane

[1] http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/
DockerContainers.html
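For reference, a minimal mapred-site.xml fragment along those lines might look like the below; the path is illustrative and should be whatever the install path is as seen inside the container (ideally the same as on the host):

```xml
<!-- Illustrative only: point the MR framework at the Hadoop install
     path as seen inside the container (here /usr/local/hadoop). -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
```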


On Thu, Apr 19, 2018 at 7:44 PM, SeyyedAhmad Javadi <
sjavadi@cs.stonybrook.edu> wrote:

> Hi Shane,
>
> Sorry, I realize I missed your suggestion in my last test; I have since
> updated to local/hadoop-ubuntu:latest and am now working on the following
> error. I confirmed that mapred-site.xml within the container image has the
> information this error is asking for.
>
> Do you think the container is being launched, but the application master and
> map/reduce tasks hit obstacles running successfully within the container?
>
> Many thanks,
> Ahmad
>
> *************************************RM node,
> submission*************************
> $ ./run_hadoop_job_docker_image_v1.sh
> rm: `/user/ubuntu/teraOutput': No such file or directory
> Number of Maps  = 2
> Samples per Map = 10
> Wrote input for Map #0
> Wrote input for Map #1
> Starting Job
> 2018-04-19 21:38:38,757 INFO client.RMProxy: Connecting to ResourceManager
> at bay1-vm1/130.245.127.176:8032
> 2018-04-19 21:38:39,412 INFO mapreduce.JobResourceUploader: Disabling
> Erasure Coding for path: /tmp/hadoop-yarn/staging/ubuntu/.staging/job_
> 1524187975341_0003
> 2018-04-19 21:38:39,999 INFO input.FileInputFormat: Total input files to
> process : 2
> 2018-04-19 21:38:40,141 INFO mapreduce.JobSubmitter: number of splits:2
> 2018-04-19 21:38:40,205 INFO Configuration.deprecation:
> yarn.resourcemanager.system-metrics-publisher.enabled is deprecated.
> Instead, use yarn.system-metrics-publisher.enabled
> 2018-04-19 21:38:40,411 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_1524187975341_0003
> 2018-04-19 21:38:40,413 INFO mapreduce.JobSubmitter: Executing with
> tokens: []
> 2018-04-19 21:38:40,665 INFO conf.Configuration: resource-types.xml not
> found
> 2018-04-19 21:38:40,666 INFO resource.ResourceUtils: Unable to find
> 'resource-types.xml'.
> 2018-04-19 21:38:40,766 INFO impl.YarnClientImpl: Submitted application
> application_1524187975341_0003
> 2018-04-19 21:38:40,823 INFO mapreduce.Job: The url to track the job:
> http://bay1-vm1:8088/proxy/application_1524187975341_0003/
> 2018-04-19 21:38:40,824 INFO mapreduce.Job: Running job:
> job_1524187975341_0003
> 2018-04-19 21:38:56,990 INFO mapreduce.Job: Job job_1524187975341_0003
> running in uber mode : false
> 2018-04-19 21:38:56,993 INFO mapreduce.Job:  map 0% reduce 0%
> 2018-04-19 21:38:57,013 INFO mapreduce.Job: Job job_1524187975341_0003
> failed with state FAILED due to: Application application_1524187975341_0003
> failed 2 times due to AM Container for appattempt_1524187975341_0003_000002
> exited with  exitCode: 1
> Failing this attempt.Diagnostics: [2018-04-19 21:38:54.466]Exception from
> container-launch.
> Container id: container_1524187975341_0003_02_000001
> Exit code: 1
> Shell output: main : command provided 4
> main : run as user is ubuntu
> main : requested yarn user is ubuntu
> Creating script paths...
> Creating local dirs...
> Getting exit code file...
> Changing effective user to root...
> Launching docker container...
> Docker run command: /usr/bin/docker --config='/tmp/hadoop-ubuntu/
> nm-local-dir/nmPrivate/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001' run --name='container_1524187975341_0003_02_000001'
> --user='1000:1000' -d --workdir='/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/appcache/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001' --net='host'
> -v '/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-
> ubuntu/nm-local-dir/filecache:ro' -v '/tmp/hadoop-ubuntu/nm-local-
> dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/filecache:ro' -v '/home/ubuntu/hadoop-3.1.0/
> logs/userlogs/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/
> userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001'
> -v '/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-
> local-dir/usercache/ubuntu/appcache/application_1524187975341_0003'
> --cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD'
> --cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID'
> --cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID'
> --cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID'
> --cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE'
> --group-add '1000' --group-add '4' --group-add '24' --group-add '27'
> --group-add '30' --group-add '46' --group-add '110' --group-add '115'
> --group-add '116' --group-add '117' 'local/hadoop-ubuntu:latest' 'bash'
> '/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003/container_1524187975341_
> 0003_02_000001/launch_container.sh'
> Inspecting docker container...
> Docker inspect command: /usr/bin/docker inspect --format {{.State.Pid}}
> container_1524187975341_0003_02_000001
> pid from docker inspect: 2519
> Writing to cgroup task files...
> Writing pid file...
> Writing to tmp file /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001/
> container_1524187975341_0003_02_000001.pid.tmp
> Waiting for docker container to finish.
> Obtaining the exit code...
> Docker inspect command: /usr/bin/docker inspect --format
> {{.State.ExitCode}} container_1524187975341_0003_02_000001
> Exit code from docker inspect: 1
> Wrote the exit code 1 to /tmp/hadoop-ubuntu/nm-local-
> dir/nmPrivate/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001/container_1524187975341_0003_
> 02_000001.pid.exitcode
>
>
> [2018-04-19 21:38:54.473]Container exited with a non-zero exit code 1.
> Error file: prelaunch.err.
> Last 4096 bytes of prelaunch.err :
> Last 4096 bytes of stderr :
> Error: Could not find or load main class org.apache.hadoop.mapreduce.
> v2.app.MRAppMaster
>
> Please check whether your etc/hadoop/mapred-site.xml contains the below
> configuration:
> <property>
>   <name>yarn.app.mapreduce.am.env</name>
>   <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
> directory}</value>
> </property>
> <property>
>   <name>mapreduce.map.env</name>
>   <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
> directory}</value>
> </property>
> <property>
>   <name>mapreduce.reduce.env</name>
>   <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
> directory}</value>
> </property>
>
> [2018-04-19 21:38:54.475]Container exited with a non-zero exit code 1.
> Error file: prelaunch.err.
> Last 4096 bytes of prelaunch.err :
> Last 4096 bytes of stderr :
> Error: Could not find or load main class org.apache.hadoop.mapreduce.
> v2.app.MRAppMaster
>
> Please check whether your etc/hadoop/mapred-site.xml contains the below
> configuration:
> <property>
>   <name>yarn.app.mapreduce.am.env</name>
>   <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
> directory}</value>
> </property>
> <property>
>   <name>mapreduce.map.env</name>
>   <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
> directory}</value>
> </property>
> <property>
>   <name>mapreduce.reduce.env</name>
>   <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
> directory}</value>
> </property>
>
> For more detailed output, check the application tracking page:
> http://bay1-vm1:8088/cluster/app/application_1524187975341_0003 Then
> click on links to logs of each attempt.
> . Failing the application.
> 2018-04-19 21:38:57,042 INFO mapreduce.Job: Counters: 0
> Job job_1524187975341_0003 failed!
> runtime in seconds: 25
> **************************************************
>
> ***************************1 of the NM logs********************
> 2018-04-19 21:38:49,056 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
> Auth successful for appattempt_1524187975341_0003_000002 (auth:SIMPLE)
> 2018-04-19 21:38:49,068 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.ContainerManagerImpl: Start request for
> container_1524187975341_0003_02_000001 by user ubuntu
> 2018-04-19 21:38:49,073 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.ContainerManagerImpl: Creating a new
> application reference for app application_1524187975341_0003
> 2018-04-19 21:38:49,074 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
> USER=ubuntu    IP=130.245.127.176    OPERATION=Start Container Request
> TARGET=ContainerManageImpl    RESULT=SUCCESS    APPID=application_1524187975341_0003
> CONTAINERID=container_1524187975341_0003_02_000001
> 2018-04-19 21:38:49,074 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.ApplicationImpl: Application
> application_1524187975341_0003 transitioned from NEW to INITING
> 2018-04-19 21:38:49,075 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.ApplicationImpl: Adding
> container_1524187975341_0003_02_000001 to application
> application_1524187975341_0003
> 2018-04-19 21:38:49,076 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.ApplicationImpl: Application
> application_1524187975341_0003 transitioned from INITING to RUNNING
> 2018-04-19 21:38:49,079 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.ContainerImpl: Container
> container_1524187975341_0003_02_000001 transitioned from NEW to LOCALIZING
> 2018-04-19 21:38:49,079 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for
> appId application_1524187975341_0003
> 2018-04-19 21:38:49,080 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.localizer.ResourceLocalizationService:
> Created localizer for container_1524187975341_0003_02_000001
> 2018-04-19 21:38:49,101 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.localizer.ResourceLocalizationService:
> Writing credentials to the nmPrivate file /tmp/hadoop-ubuntu/nm-local-
> dir/nmPrivate/container_1524187975341_0003_02_000001.tokens
> 2018-04-19 21:38:52,446 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.ContainerImpl: Container
> container_1524187975341_0003_02_000001 transitioned from LOCALIZING to
> SCHEDULED
> 2018-04-19 21:38:52,446 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.scheduler.ContainerScheduler: Starting
> container [container_1524187975341_0003_02_000001]
> 2018-04-19 21:38:52,528 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.ContainerImpl: Container
> container_1524187975341_0003_02_000001 transitioned from SCHEDULED to
> RUNNING
> 2018-04-19 21:38:52,529 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting
> resource-monitoring for container_1524187975341_0003_02_000001
> 2018-04-19 21:38:53,608 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
> Docker inspect output for container_1524187975341_0003_02_000001:
> ,bay14-vm1
>
> 2018-04-19 21:38:53,609 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl:
> container_1524187975341_0003_02_000001's ip = 130.245.127.158, and
> hostname = bay14-vm1
> 2018-04-19 21:38:54,463 WARN org.apache.hadoop.yarn.server.
> nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor:
> Shell execution returned exit code: 1. Privileged Execution Operation
> Stderr:
>
> Stdout: main : command provided 4
> main : run as user is ubuntu
> main : requested yarn user is ubuntu
> Creating script paths...
> Creating local dirs...
> Getting exit code file...
> Changing effective user to root...
> Launching docker container...
> Docker run command: /usr/bin/docker --config='/tmp/hadoop-ubuntu/
> nm-local-dir/nmPrivate/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001' run --name='container_1524187975341_0003_02_000001'
> --user='1000:1000' -d --workdir='/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/appcache/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001' --net='host'
> -v '/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-
> ubuntu/nm-local-dir/filecache:ro' -v '/tmp/hadoop-ubuntu/nm-local-
> dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/filecache:ro' -v '/home/ubuntu/hadoop-3.1.0/
> logs/userlogs/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/
> userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001'
> -v '/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-
> local-dir/usercache/ubuntu/appcache/application_1524187975341_0003'
> --cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD'
> --cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID'
> --cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID'
> --cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID'
> --cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE'
> --group-add '1000' --group-add '4' --group-add '24' --group-add '27'
> --group-add '30' --group-add '46' --group-add '110' --group-add '115'
> --group-add '116' --group-add '117' 'local/hadoop-ubuntu:latest' 'bash'
> '/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003/container_1524187975341_
> 0003_02_000001/launch_container.sh'
> Inspecting docker container...
> Docker inspect command: /usr/bin/docker inspect --format {{.State.Pid}}
> container_1524187975341_0003_02_000001
> pid from docker inspect: 2519
> Writing to cgroup task files...
> Writing pid file...
> Writing to tmp file /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001/
> container_1524187975341_0003_02_000001.pid.tmp
> Waiting for docker container to finish.
> Obtaining the exit code...
> Docker inspect command: /usr/bin/docker inspect --format
> {{.State.ExitCode}} container_1524187975341_0003_02_000001
> Exit code from docker inspect: 1
> Wrote the exit code 1 to /tmp/hadoop-ubuntu/nm-local-
> dir/nmPrivate/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001/container_1524187975341_0003_
> 02_000001.pid.exitcode
>
> Full command array for failed execution:
> [/home/ubuntu/hadoop-3.1.0/bin/container-executor, ubuntu, ubuntu, 4,
> application_1524187975341_0003, container_1524187975341_0003_02_000001,
> /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003/container_1524187975341_0003_02_000001,
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001/launch_container.sh,
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001/
> container_1524187975341_0003_02_000001.tokens,
> /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001/
> container_1524187975341_0003_02_000001.pid, /tmp/hadoop-ubuntu/nm-local-dir,
> /home/ubuntu/hadoop-3.1.0/logs/userlogs, /tmp/hadoop-ubuntu/nm-docker-
> cmds/docker.container_1524187975341_0003_02_0000017038281858825700163.cmd,
> cgroups=none]
> 2018-04-19 21:38:54,464 WARN org.apache.hadoop.yarn.server.
> nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
> Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.privileged.PrivilegedOperationException: ExitCodeException
> exitCode=1:
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(
> PrivilegedOperationExecutor.java:180)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.runtime.DockerLinuxContainerRuntime.launchContainer(
> DockerLinuxContainerRuntime.java:897)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(
> DelegatingLinuxContainerRuntime.java:141)
>     at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.
> launchContainer(LinuxContainerExecutor.java:545)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:511)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:304)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:101)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=1:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
>     at org.apache.hadoop.util.Shell.run(Shell.java:902)
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
> Shell.java:1227)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(
> PrivilegedOperationExecutor.java:152)
>     ... 10 more
> 2018-04-19 21:38:54,464 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
> Docker command used: run cap-add=SYS_CHROOT,MKNOD,
> SETFCAP,SETPCAP,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,
> FOWNER,SETUID,DAC_OVERRIDE,KILL,NET_BIND_SERVICE cap-drop=ALL detach=true
> docker-command=run docker-config=/tmp/hadoop-
> ubuntu/nm-local-dir/nmPrivate/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001 group-add=1000,4,24,27,30,46,110,115,116,117
> image=local/hadoop-ubuntu:latest launch-command=bash,/tmp/
> hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003/container_1524187975341_
> 0003_02_000001/launch_container.sh name=container_1524187975341_0003_02_000001
> net=host ro-mounts=/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/
> hadoop-ubuntu/nm-local-dir/filecache,/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/filecache rw-mounts=/home/ubuntu/hadoop-
> 3.1.0/logs/userlogs/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001:/home/ubuntu/
> hadoop-3.1.0/logs/userlogs/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001,/tmp/hadoop-
> ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_
> 1524187975341_0003:/tmp/hadoop-ubuntu/nm-local-dir/
> usercache/ubuntu/appcache/application_1524187975341_0003 user=1000:1000
> workdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/
> appcache/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001
> 2018-04-19 21:38:54,465 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
> Exit code from container container_1524187975341_0003_02_000001 is : 1
> 2018-04-19 21:38:54,465 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
> Exception from container-launch with container ID:
> container_1524187975341_0003_02_000001 and exit code: 1
> org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException:
> Launch container failed
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.runtime.DockerLinuxContainerRuntime.launchContainer(
> DockerLinuxContainerRuntime.java:904)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(
> DelegatingLinuxContainerRuntime.java:141)
>     at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.
> launchContainer(LinuxContainerExecutor.java:545)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:511)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:304)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:101)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Exception from container-launch.
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Container id: container_1524187975341_0003_02_000001
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Exit code: 1
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Shell output: main : command provided 4
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> main : run as user is ubuntu
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> main : requested yarn user is ubuntu
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Creating script paths...
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Creating local dirs...
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Getting exit code file...
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Changing effective user to root...
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Launching docker container...
> 2018-04-19 21:38:54,465 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Docker run command: /usr/bin/docker --config='/tmp/hadoop-ubuntu/
> nm-local-dir/nmPrivate/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001' run --name='container_1524187975341_0003_02_000001'
> --user='1000:1000' -d --workdir='/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/appcache/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001' --net='host'
> -v '/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-
> ubuntu/nm-local-dir/filecache:ro' -v '/tmp/hadoop-ubuntu/nm-local-
> dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/
> nm-local-dir/usercache/ubuntu/filecache:ro' -v '/home/ubuntu/hadoop-3.1.0/
> logs/userlogs/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/
> userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001'
> -v '/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-
> local-dir/usercache/ubuntu/appcache/application_1524187975341_0003'
> --cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD'
> --cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID'
> --cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID'
> --cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID'
> --cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE'
> --group-add '1000' --group-add '4' --group-add '24' --group-add '27'
> --group-add '30' --group-add '46' --group-add '110' --group-add '115'
> --group-add '116' --group-add '117' 'local/hadoop-ubuntu:latest' 'bash'
> '/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/
> application_1524187975341_0003/container_1524187975341_
> 0003_02_000001/launch_container.sh'
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Inspecting docker container...
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Docker inspect command: /usr/bin/docker inspect --format {{.State.Pid}}
> container_1524187975341_0003_02_000001
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> pid from docker inspect: 2519
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Writing to cgroup task files...
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Writing pid file...
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Writing to tmp file /tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_
> 1524187975341_0003/container_1524187975341_0003_02_000001/
> container_1524187975341_0003_02_000001.pid.tmp
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Waiting for docker container to finish.
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Obtaining the exit code...
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Docker inspect command: /usr/bin/docker inspect --format
> {{.State.ExitCode}} container_1524187975341_0003_02_000001
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Exit code from docker inspect: 1
> 2018-04-19 21:38:54,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Wrote the exit code 1 to /tmp/hadoop-ubuntu/nm-local-
> dir/nmPrivate/application_1524187975341_0003/container_
> 1524187975341_0003_02_000001/container_1524187975341_0003_
> 02_000001.pid.exitcode
> 2018-04-19 21:38:54,466 WARN org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch: Container launch
> failed : Container exited with a non-zero exit code 1.
> 2018-04-19 21:38:54,475 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.ContainerImpl: Container
> container_1524187975341_0003_02_000001 transitioned from RUNNING to
> EXITED_WITH_FAILURE
> 2018-04-19 21:38:54,476 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up
> container container_1524187975341_0003_02_000001
> 2018-04-19 21:38:54,597 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
> Removing Docker container : container_1524187975341_0003_02_000001
> 2018-04-19 21:38:56,643 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Skipping
> monitoring container container_1524187975341_0003_02_000001 since CPU
> usage is not yet available.
> 2018-04-19 21:38:56,869 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
> Deleting absolute path : /tmp/hadoop-ubuntu/nm-local-
> dir/usercache/ubuntu/appcache/application_1524187975341_
> 0003/container_1524187975341_0003_02_000001
> 2018-04-19 21:38:56,869 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
> USER=ubuntu    OPERATION=Container Finished - Failed
> TARGET=ContainerImpl    RESULT=FAILURE    DESCRIPTION=Container failed with
> state: EXITED_WITH_FAILURE    APPID=application_1524187975341_0003
> CONTAINERID=container_1524187975341_0003_02_000001
> 2018-04-19 21:38:56,871 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.ContainerImpl: Container
> container_1524187975341_0003_02_000001 transitioned from
> EXITED_WITH_FAILURE to DONE
> 2018-04-19 21:38:56,872 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.ApplicationImpl: Removing
> container_1524187975341_0003_02_000001 from application
> application_1524187975341_0003
> 2018-04-19 21:38:56,877 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping
> resource-monitoring for container_1524187975341_0003_02_000001
> 2018-04-19 21:38:56,877 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for
> appId application_1524187975341_0003
> 2018-04-19 21:38:57,888 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl:
> Removed completed containers from NM context: [container_1524187975341_0003_
> 02_000001]
> 2018-04-19 21:38:57,888 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.ApplicationImpl: Application
> application_1524187975341_0003 transitioned from RUNNING to
> APPLICATION_RESOURCES_CLEANINGUP
> 2018-04-19 21:38:57,889 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
> Deleting absolute path : /tmp/hadoop-ubuntu/nm-local-
> dir/usercache/ubuntu/appcache/application_1524187975341_0003
> 2018-04-19 21:38:57,889 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for
> appId application_1524187975341_0003
> 2018-04-19 21:38:57,890 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.ApplicationImpl: Application
> application_1524187975341_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP
> to FINISHED
> 2018-04-19 21:38:57,890 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
> Scheduling Log Deletion for application: application_1524187975341_0003,
> with delay of 10800 seconds
>
> On Thu, Apr 19, 2018 at 9:28 PM, Shane Kumpf <shane.kumpf.apache@gmail.com
> > wrote:
>
>> Hello Ahmad,
>>
>> The image being used is not privileged/untrusted based on the settings in
>> container-executor.cfg. In container-executor.cfg you have set
>> docker.privileged-containers.registries=local, but the image name
>> variable in the job is using "hadoop-ubuntu:latest". Based on that
>> setting, YARN is expecting the image to be in the "local" namespace. Can
>> you set YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=local/hadoop-ubuntu:latest
>> and see if that resolves the issue?
>>
>> Thanks,
>> -Shane
>>
>> On Thu, Apr 19, 2018 at 4:59 PM, SeyyedAhmad Javadi <
>> sjavadi@cs.stonybrook.edu> wrote:
>>
>>> Hi All,
>>>
>>> I am following the below guide to setup Docker container run_time but
>>> face some non-trivial errors at least for my level. Would you please
>>> comment if you have some idea about the root-cause?
>>> http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yar
>>> n-site/DockerContainers.html
>>>
>>> After the error, I have provided the config files and Dockerfile as
>>> well as Docker image inspect command results (should have null for Entry
>>> Point and CMS?).
>>>
>>> I have three nodes, 1 RM and 2 NMs  and default LCE works fine.
>>>
>>> ********************submit job script
>>> vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNT
>>> IME_DOCKER_IMAGE=hadoop-ubuntu,YARN_CONTAINER_RUNTIME_DOCKER
>>> _RUN_OVERRIDE_DISABLE=false,YARN_CONTAINER_RUNTIME_DOCKER_CO
>>> NTAINER_NETWORK=host"
>>>
>>> #vars="YARN_CONTAINER_RUNTIME_TYPE=default"
>>> hadoop jar /home/ubuntu/hadoop-3.1.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar
>>> pi -Dyarn.app.mapreduce.am.env=$vars -Dmapreduce.map.env=$vars
>>> -Dmapreduce.reduce.env=$vars 2 10
>>>
>>> ******************** AM Log in one the nodes
>>> 2018-04-19 18:55:46,311 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
>>> Auth successful for appattempt_1524178188987_0001_000001 (auth:SIMPLE)
>>> 2018-04-19 18:55:46,515 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.ContainerManagerImpl: Start request for
>>> container_1524178188987_0001_01_000001 by user ubuntu
>>> 2018-04-19 18:55:46,617 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.ContainerManagerImpl: Creating a new
>>> application reference for app application_1524178188987_0001
>>> 2018-04-19 18:55:46,634 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.application.ApplicationImpl: Application
>>> application_1524178188987_0001 transitioned from NEW to INITING
>>> 2018-04-19 18:55:46,634 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
>>> USER=ubuntu    IP=130.245.127.176    OPERATION=Start Container Request
>>> TARGET=ContainerManageImpl    RESULT=SUCCESS
>>> APPID=application_1524178188987_0001    CONTAINERID=container_15241781
>>> 88987_0001_01_000001
>>> 2018-04-19 18:55:46,635 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.application.ApplicationImpl: Adding
>>> container_1524178188987_0001_01_000001 to application
>>> application_1524178188987_0001
>>> 2018-04-19 18:55:46,649 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.application.ApplicationImpl: Application
>>> application_1524178188987_0001 transitioned from INITING to RUNNING
>>> 2018-04-19 18:55:46,655 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.container.ContainerImpl: Container
>>> container_1524178188987_0001_01_000001 transitioned from NEW to
>>> LOCALIZING
>>> 2018-04-19 18:55:46,655 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for
>>> appId application_1524178188987_0001
>>> 2018-04-19 18:55:46,698 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.localizer.ResourceLocalizationService:
>>> Created localizer for container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:46,898 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.localizer.ResourceLocalizationService:
>>> Writing credentials to the nmPrivate file /tmp/hadoop-ubuntu/nm-local-di
>>> r/nmPrivate/container_1524178188987_0001_01_000001.tokens
>>> 2018-04-19 18:55:50,371 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.container.ContainerImpl: Container
>>> container_1524178188987_0001_01_000001 transitioned from LOCALIZING to
>>> SCHEDULED
>>> 2018-04-19 18:55:50,374 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.scheduler.ContainerScheduler: Starting
>>> container [container_1524178188987_0001_01_000001]
>>> 2018-04-19 18:55:50,479 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.container.ContainerImpl: Container
>>> container_1524178188987_0001_01_000001 transitioned from SCHEDULED to
>>> RUNNING
>>> 2018-04-19 18:55:50,481 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting
>>> resource-monitoring for container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:51,842 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.launcher.ContainerLaunch: Container
>>> container_1524178188987_0001_01_000001 succeeded
>>> 2018-04-19 18:55:51,844 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.container.ContainerImpl: Container
>>> container_1524178188987_0001_01_000001 transitioned from RUNNING to
>>> EXITED_WITH_SUCCESS
>>> 2018-04-19 18:55:51,844 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up
>>> container container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:51,957 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.LinuxContainerExecutor: Removing Docker container :
>>> container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:56,963 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.launcher.ContainerLaunch: Could not get
>>> pid for container_1524178188987_0001_01_000001. Waited for 5000 ms.
>>> 2018-04-19 18:55:56,963 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.launcher.ContainerLaunch: Unable to obtain
>>> pid, but docker container request detected. Attempting to reap container
>>> container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:59,395 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.LinuxContainerExecutor: Deleting absolute path :
>>> /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/ap
>>> plication_1524178188987_0001/container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:59,395 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
>>> USER=ubuntu    OPERATION=Container Finished - Succeeded
>>> TARGET=ContainerImpl    RESULT=SUCCESS    APPID=application_1524178188987_0001
>>> CONTAINERID=container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:59,403 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.container.ContainerImpl: Container
>>> container_1524178188987_0001_01_000001 transitioned from
>>> EXITED_WITH_SUCCESS to DONE
>>> 2018-04-19 18:55:59,404 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.application.ApplicationImpl: Removing
>>> container_1524178188987_0001_01_000001 from application
>>> application_1524178188987_0001
>>> 2018-04-19 18:55:59,404 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping
>>> resource-monitoring for container_1524178188987_0001_01_000001
>>> 2018-04-19 18:55:59,405 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for
>>> appId application_1524178188987_0001
>>> 2018-04-19 18:56:00,412 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM
>>> context: [container_1524178188987_0001_01_000001]
>>> 2018-04-19 18:56:13,455 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.application.ApplicationImpl: Application
>>> application_1524178188987_0001 transitioned from RUNNING to
>>> APPLICATION_RESOURCES_CLEANINGUP
>>> 2018-04-19 18:56:13,457 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.LinuxContainerExecutor: Deleting absolute path :
>>> /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/ap
>>> plication_1524178188987_0001
>>> 2018-04-19 18:56:13,459 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP
>>> for appId application_1524178188987_0001
>>> 2018-04-19 18:56:13,470 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.application.ApplicationImpl: Application
>>> application_1524178188987_0001 transitioned from
>>> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
>>> 2018-04-19 18:56:13,470 INFO org.apache.hadoop.yarn.server.
>>> nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
>>> Scheduling Log Deletion for application: application_1524178188987_0001,
>>> with delay of 10800 seconds
>>>
>>> **********************
>>>
>>>
>>>
>>>
>>> yarn-site.xml: according to the above link
>>> container-executor.cfg:
>>>
>>> yarn.nodemanager.linux-container-executor.group=ubuntu
>>> min.user.id=0
>>> #feature.tc.enabled=1
>>> #feature.docker.enabled=1
>>> allowed.system.users=ubuntu
>>> # The configs below deal with settings for Docker
>>> [docker]
>>> module.enabled=true
>>> docker.privileged-containers.enabled=true
>>> docker.binary=/usr/bin/docker
>>> docker.allowed.capabilities=SYS_CHROOT,MKNOD,SETFCAP,SETPCAP
>>> ,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,FOWNER,SETUID,DAC_
>>> OVERRIDE,KILL,NET_BIND_SERVICE
>>> #docker.allowed.devices=## comma seperated list of devices that can be
>>> mounted into a container
>>> docker.allowed.networks=bridge,host,none
>>> docker.allowed.ro-mounts=/sys/fs/cgroup,/tmp/hadoop-ubuntu/nm-local-dir
>>> docker.privileged-containers.registries=local
>>> #docker.host-pid-namespace.enabled=false
>>> docker.allowed.rw-mounts=/home/ubuntu/hadoop-3.1.0,/home/ubu
>>> ntu/hadoop-3.1.0/logs
>>>
>>>
>>> Dockerfile:
>>> FROM ubuntu:16.04
>>> #RUN rm /bin/sh && ln -s /bin/bash /bin/sh
>>> SHELL ["/bin/bash", "-c"]
>>>
>>> RUN apt-get update && \
>>>     apt-get upgrade -y && \
>>>     apt-get install -y  software-properties-common && \
>>> #    apt-get install -y --no-install-recommends apt-utils && \
>>> #    apt-get install -y  curl && \
>>>     add-apt-repository ppa:webupd8team/java -y && \
>>>     apt-get update && \
>>>     echo oracle-java7-installer shared/accepted-oracle-license-v1-1
>>> select true | /usr/bin/debconf-set-selections && \
>>>     apt-get install -y oracle-java8-installer && \
>>> #    apt-get install -y ssh && \
>>> #    apt-get install -y rsync && \
>>>     apt-get install -y vim && \
>>>     apt-get clean
>>>
>>> ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
>>> ENV PATH $PATH:$JAVA_HOME/bin
>>>
>>> # HADOOP
>>> ARG HADOOP_ARCHIVE=http://mirror.cc.columbia.edu/pub/software/ap
>>> ache/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz
>>>
>>> ENV HADOOP_HOME /usr/local/hadoop
>>> ENV HADOOP_COMMON_PATH /usr/local/hadoop
>>> ENV HADOOP_HDFS_HOME /usr/local/hadoop
>>> ENV HADOOP_MAPRED_HOME /usr/local/hadoop
>>> ENV HADOOP_YARN_HOME /usr/local/hadoop
>>> ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
>>>
>>> # download and extract hadoop, set JAVA_HOME in hadoop-env.sh, update
>>> path
>>> RUN wget $HADOOP_ARCHIVE && \
>>>   tar -xzf hadoop-3.1.0.tar.gz && \
>>>   mv hadoop-3.1.0 $HADOOP_HOME
>>>
>>> ADD rm-hadoop-config/* $HADOOP_HOME/etc/hadoop/
>>>
>>> ENV PATH $PATH:$HADOOP_COMMON_PATH/bin
>>>
>>> WORKDIR $HADOOP_COMMON_PATH
>>>
>>> # Declare user
>>> RUN groupadd -g 1000 ubuntu && \
>>>     useradd -r -u 1000  -g 1000 ubuntu
>>> USER ubuntu
>>>
>>>
>>> ~/hadoop-common$ docker inspect local/hadoop-ubuntu
>>> [
>>>     {
>>>         "Id": "sha256:d8335693084b5823675056
>>> b7d649b13d04a7e3c3b63688f83e9807405506b088",
>>>         "RepoTags": [
>>>             "hadoop-ubuntu:latest",
>>>             "local/hadoop-ubuntu:latest"
>>>         ],
>>>         "RepoDigests": [],
>>>         "Parent": "sha256:f9b92fa15eadd74b9e3712
>>> ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
>>>         "Comment": "",
>>>         "Created": "2018-04-19T15:16:52.903547395Z",
>>>         "Container": "4fcff6fa65639fe0e2a9379a335a4
>>> 01a8cde0c0e1eaaa2dd98d35ced402c29e3",
>>>         "ContainerConfig": {
>>>             "Hostname": "4fcff6fa6563",
>>>             "Domainname": "",
>>>             "User": "ubuntu",
>>>             "AttachStdin": false,
>>>             "AttachStdout": false,
>>>             "AttachStderr": false,
>>>             "Tty": false,
>>>             "OpenStdin": false,
>>>             "StdinOnce": false,
>>>             "Env": [
>>>                 "PATH=/usr/local/sbin:/usr/loc
>>> al/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-ora
>>> cle/bin:/usr/local/hadoop/bin",
>>>                 "JAVA_HOME=/usr/lib/jvm/java-8-oracle",
>>>                 "HADOOP_HOME=/usr/local/hadoop",
>>>                 "HADOOP_COMMON_PATH=/usr/local/hadoop",
>>>                 "HADOOP_HDFS_HOME=/usr/local/hadoop",
>>>                 "HADOOP_MAPRED_HOME=/usr/local/hadoop",
>>>                 "HADOOP_YARN_HOME=/usr/local/hadoop",
>>>                 "HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop"
>>>             ],
>>>             "Cmd": [
>>>                 "/bin/bash",
>>>                 "-c",
>>>                 "#(nop) ",
>>>                 "USER ubuntu"
>>>             ],
>>>             "ArgsEscaped": true,
>>>             "Image": "sha256:f9b92fa15eadd74b9e3712
>>> ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
>>>             "Volumes": null,
>>>             "WorkingDir": "/usr/local/hadoop",
>>>             "Entrypoint": null,
>>>             "OnBuild": null,
>>>             "Labels": {},
>>>             "Shell": [
>>>                 "/bin/bash",
>>>                 "-c"
>>>             ]
>>>         },
>>>         "DockerVersion": "18.03.0-ce",
>>>         "Author": "",
>>>         "Config": {
>>>             "Hostname": "",
>>>             "Domainname": "",
>>>             "User": "ubuntu",
>>>             "AttachStdin": false,
>>>             "AttachStdout": false,
>>>             "AttachStderr": false,
>>>             "Tty": false,
>>>             "OpenStdin": false,
>>>             "StdinOnce": false,
>>>             "Env": [
>>>                 "PATH=/usr/local/sbin:/usr/loc
>>> al/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-ora
>>> cle/bin:/usr/local/hadoop/bin",
>>>                 "JAVA_HOME=/usr/lib/jvm/java-8-oracle",
>>>                 "HADOOP_HOME=/usr/local/hadoop",
>>>                 "HADOOP_COMMON_PATH=/usr/local/hadoop",
>>>                 "HADOOP_HDFS_HOME=/usr/local/hadoop",
>>>                 "HADOOP_MAPRED_HOME=/usr/local/hadoop",
>>>                 "HADOOP_YARN_HOME=/usr/local/hadoop",
>>>                 "HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop"
>>>             ],
>>>             "Cmd": [
>>>                 "/bin/bash"
>>>             ],
>>>             "ArgsEscaped": true,
>>>             "Image": "sha256:f9b92fa15eadd74b9e3712
>>> ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
>>>             "Volumes": null,
>>>             "WorkingDir": "/usr/local/hadoop",
>>>             "Entrypoint": null,
>>>             "OnBuild": null,
>>>             "Labels": null,
>>>             "Shell": [
>>>                 "/bin/bash",
>>>                 "-c"
>>>             ]
>>>         },
>>>         "Architecture": "amd64",
>>>         "Os": "linux",
>>>         "Size": 2058935914,
>>>         "VirtualSize": 2058935914,
>>>         "GraphDriver": {
>>>             "Data": null,
>>>             "Name": "aufs"
>>>         },
>>>         "RootFS": {
>>>             "Type": "layers",
>>>             "Layers": [
>>>                 "sha256:fccbfa2912f0cd6b9d13f9
>>> 1f288f112a2b825f3f758a4443aacb45bfc108cc74",
>>>                 "sha256:e1a9a6284d0d24d8194ac8
>>> 4b372619e75cd35a46866b74925b7274c7056561e4",
>>>                 "sha256:ac7299292f8b2f710d3b91
>>> 1c6a4e02ae8f06792e39822e097f9c4e9c2672b32d",
>>>                 "sha256:a5e66470b2812e91798db3
>>> 6eb103c1f1e135bbe167e4b2ad5ba425b8db98ee8d",
>>>                 "sha256:a8de0e025d94b33db3542e
>>> 1e8ce58829144b30c6cd1fff057eec55b1491933c3",
>>>                 "sha256:7e9a788452589001d42e79
>>> 95dc0583bcca1e6f7780a301066ee0d6668aaf9c91",
>>>                 "sha256:65fac5b99df0506e1d204d
>>> 9024264105ab2fe142ea970f0eba630669dc606055",
>>>                 "sha256:cd0da20f97700ab8e2ec37
>>> c464fcb8864cba86672aa96e7fc40e2572228895e2",
>>>                 "sha256:21094366bc298af4a7e836
>>> 80cf9d732ce69cb92b11a3b20005b12c15bac3e486"
>>>             ]
>>>         },
>>>         "Metadata": {
>>>             "LastTagTime": "2018-04-19T17:20:41.054733026-04:00"
>>>         }
>>>     }
>>> ]
>>>
>>>
>>>
>>>
>>> Best,
>>> Ahmad
>>>
>>>
>>
>

Re: Regarding Docker container run_time setup

Posted by SeyyedAhmad Javadi <sj...@cs.stonybrook.edu>.
Hi Shane,

Sorry, I now realize that I missed your suggestion in my last test. I have
since updated the image name to local/hadoop-ubuntu:latest and am now
working on the following error. I confirmed that mapred-site.xml within the
container image has the information this error is asking for.
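
For reference, this is a sketch of what the updated vars line in my submit
script looks like now (the only change from the earlier script is the
local/ prefix on the image name):

```shell
# Updated submit-script variables: image is now referenced through the
# "local" namespace so it matches docker.privileged-containers.registries=local
# in container-executor.cfg.
vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=local/hadoop-ubuntu:latest,YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE=false,YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=host"
echo "$vars"
```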

Do you think the container is being launched, but the application master and
the map/reduce tasks are hitting obstacles that keep them from running
successfully within the container?

Many thanks,
Ahmad
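
To make sure I understood your earlier point about the registry namespace,
here is a small stand-alone sketch of the check as I understand it (my own
illustration, not YARN's actual code): an image name is only considered
trusted if the part before its first slash matches an entry in
docker.privileged-containers.registries.

```shell
# Illustration only: why "hadoop-ubuntu" fails the trusted-registry check
# when registries=local, while "local/hadoop-ubuntu:latest" passes.
registries="local"
for image in "hadoop-ubuntu" "local/hadoop-ubuntu:latest"; do
  prefix="${image%%/*}"   # text before the first slash (whole name if no slash)
  if [ "$prefix" != "$image" ] \
     && printf '%s\n' "$registries" | tr ',' '\n' | grep -qx "$prefix"; then
    echo "$image: trusted"
  else
    echo "$image: untrusted"
  fi
done
```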

*************************************RM node,
submission*************************
$ ./run_hadoop_job_docker_image_v1.sh
rm: `/user/ubuntu/teraOutput': No such file or directory
Number of Maps  = 2
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Starting Job
2018-04-19 21:38:38,757 INFO client.RMProxy: Connecting to ResourceManager
at bay1-vm1/130.245.127.176:8032
2018-04-19 21:38:39,412 INFO mapreduce.JobResourceUploader: Disabling
Erasure Coding for path:
/tmp/hadoop-yarn/staging/ubuntu/.staging/job_1524187975341_0003
2018-04-19 21:38:39,999 INFO input.FileInputFormat: Total input files to
process : 2
2018-04-19 21:38:40,141 INFO mapreduce.JobSubmitter: number of splits:2
2018-04-19 21:38:40,205 INFO Configuration.deprecation:
yarn.resourcemanager.system-metrics-publisher.enabled is deprecated.
Instead, use yarn.system-metrics-publisher.enabled
2018-04-19 21:38:40,411 INFO mapreduce.JobSubmitter: Submitting tokens for
job: job_1524187975341_0003
2018-04-19 21:38:40,413 INFO mapreduce.JobSubmitter: Executing with tokens:
[]
2018-04-19 21:38:40,665 INFO conf.Configuration: resource-types.xml not
found
2018-04-19 21:38:40,666 INFO resource.ResourceUtils: Unable to find
'resource-types.xml'.
2018-04-19 21:38:40,766 INFO impl.YarnClientImpl: Submitted application
application_1524187975341_0003
2018-04-19 21:38:40,823 INFO mapreduce.Job: The url to track the job:
http://bay1-vm1:8088/proxy/application_1524187975341_0003/
2018-04-19 21:38:40,824 INFO mapreduce.Job: Running job:
job_1524187975341_0003
2018-04-19 21:38:56,990 INFO mapreduce.Job: Job job_1524187975341_0003
running in uber mode : false
2018-04-19 21:38:56,993 INFO mapreduce.Job:  map 0% reduce 0%
2018-04-19 21:38:57,013 INFO mapreduce.Job: Job job_1524187975341_0003
failed with state FAILED due to: Application application_1524187975341_0003
failed 2 times due to AM Container for appattempt_1524187975341_0003_000002
exited with  exitCode: 1
Failing this attempt.Diagnostics: [2018-04-19 21:38:54.466]Exception from
container-launch.
Container id: container_1524187975341_0003_02_000001
Exit code: 1
Shell output: main : command provided 4
main : run as user is ubuntu
main : requested yarn user is ubuntu
Creating script paths...
Creating local dirs...
Getting exit code file...
Changing effective user to root...
Launching docker container...
Docker run command: /usr/bin/docker
--config='/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001'
run --name='container_1524187975341_0003_02_000001' --user='1000:1000' -d
--workdir='/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001'
--net='host' -v
'/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-ubuntu/nm-local-dir/filecache:ro'
-v
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:ro'
-v
'/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001'
-v
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003'
--cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD'
--cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID'
--cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID'
--cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID'
--cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE'
--group-add '1000' --group-add '4' --group-add '24' --group-add '27'
--group-add '30' --group-add '46' --group-add '110' --group-add '115'
--group-add '116' --group-add '117' 'local/hadoop-ubuntu:latest' 'bash'
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001/launch_container.sh'

Inspecting docker container...
Docker inspect command: /usr/bin/docker inspect --format {{.State.Pid}}
container_1524187975341_0003_02_000001
pid from docker inspect: 2519
Writing to cgroup task files...
Writing pid file...
Writing to tmp file
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid.tmp
Waiting for docker container to finish.
Obtaining the exit code...
Docker inspect command: /usr/bin/docker inspect --format
{{.State.ExitCode}} container_1524187975341_0003_02_000001
Exit code from docker inspect: 1
Wrote the exit code 1 to
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid.exitcode


[2018-04-19 21:38:54.473]Container exited with a non-zero exit code 1.
Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below
configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
directory}</value>
</property>

[2018-04-19 21:38:54.475]Container exited with a non-zero exit code 1.
Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below
configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution
directory}</value>
</property>

For more detailed output, check the application tracking page:
http://bay1-vm1:8088/cluster/app/application_1524187975341_0003 Then click
on links to logs of each attempt.
. Failing the application.
2018-04-19 21:38:57,042 INFO mapreduce.Job: Counters: 0
Job job_1524187975341_0003 failed!
runtime in seconds: 25
**************************************************
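
To dig further I am also checking the per-container error files on the
NodeManager host, since that is where prelaunch.err/stderr end up. A sketch
(the application and container ids below are copied from this run and change
on every attempt):

```shell
#!/bin/sh
# Print the tail of the failed container's error files from the NodeManager
# log directory. Ids are from the run above; substitute the current ones.
LOGDIR=/home/ubuntu/hadoop-3.1.0/logs/userlogs
APP=application_1524187975341_0003
CID=container_1524187975341_0003_02_000001
for f in prelaunch.err stderr syslog; do
  p="$LOGDIR/$APP/$CID/$f"
  echo "== $p =="
  [ -f "$p" ] && tail -n 50 "$p" || echo "(not found)"
done
```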

***************************One of the NM logs********************
2018-04-19 21:38:49,056 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
Auth successful for appattempt_1524187975341_0003_000002 (auth:SIMPLE)
2018-04-19 21:38:49,068 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Start request for container_1524187975341_0003_02_000001 by user ubuntu
2018-04-19 21:38:49,073 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Creating a new application reference for app application_1524187975341_0003
2018-04-19 21:38:49,074 INFO
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu
IP=130.245.127.176    OPERATION=Start Container Request
TARGET=ContainerManageImpl    RESULT=SUCCESS
APPID=application_1524187975341_0003
CONTAINERID=container_1524187975341_0003_02_000001
2018-04-19 21:38:49,074 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524187975341_0003 transitioned from NEW to INITING
2018-04-19 21:38:49,075 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Adding container_1524187975341_0003_02_000001 to application
application_1524187975341_0003
2018-04-19 21:38:49,076 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524187975341_0003 transitioned from INITING to
RUNNING
2018-04-19 21:38:49,079 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524187975341_0003_02_000001 transitioned from NEW to
LOCALIZING
2018-04-19 21:38:49,079 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_INIT for appId application_1524187975341_0003
2018-04-19 21:38:49,080 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
Created localizer for container_1524187975341_0003_02_000001
2018-04-19 21:38:49,101 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
Writing credentials to the nmPrivate file
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/container_1524187975341_0003_02_000001.tokens
2018-04-19 21:38:52,446 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524187975341_0003_02_000001 transitioned from
LOCALIZING to SCHEDULED
2018-04-19 21:38:52,446 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler:
Starting container [container_1524187975341_0003_02_000001]
2018-04-19 21:38:52,528 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524187975341_0003_02_000001 transitioned from
SCHEDULED to RUNNING
2018-04-19 21:38:52,529 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Starting resource-monitoring for container_1524187975341_0003_02_000001
2018-04-19 21:38:53,608 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
Docker inspect output for container_1524187975341_0003_02_000001: ,bay14-vm1

2018-04-19 21:38:53,609 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
container_1524187975341_0003_02_000001's ip = 130.245.127.158, and hostname
= bay14-vm1
2018-04-19 21:38:54,463 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor:
Shell execution returned exit code: 1. Privileged Execution Operation
Stderr:

Stdout: main : command provided 4
main : run as user is ubuntu
main : requested yarn user is ubuntu
Creating script paths...
Creating local dirs...
Getting exit code file...
Changing effective user to root...
Launching docker container...
Docker run command: /usr/bin/docker
--config='/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001'
run --name='container_1524187975341_0003_02_000001' --user='1000:1000' -d
--workdir='/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001'
--net='host' -v
'/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-ubuntu/nm-local-dir/filecache:ro'
-v
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:ro'
-v
'/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001'
-v
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003'
--cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD'
--cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID'
--cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID'
--cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID'
--cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE'
--group-add '1000' --group-add '4' --group-add '24' --group-add '27'
--group-add '30' --group-add '46' --group-add '110' --group-add '115'
--group-add '116' --group-add '117' 'local/hadoop-ubuntu:latest' 'bash'
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001/launch_container.sh'

Inspecting docker container...
Docker inspect command: /usr/bin/docker inspect --format {{.State.Pid}}
container_1524187975341_0003_02_000001
pid from docker inspect: 2519
Writing to cgroup task files...
Writing pid file...
Writing to tmp file
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid.tmp
Waiting for docker container to finish.
Obtaining the exit code...
Docker inspect command: /usr/bin/docker inspect --format
{{.State.ExitCode}} container_1524187975341_0003_02_000001
Exit code from docker inspect: 1
Wrote the exit code 1 to
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid.exitcode

Full command array for failed execution:
[/home/ubuntu/hadoop-3.1.0/bin/container-executor, ubuntu, ubuntu, 4,
application_1524187975341_0003, container_1524187975341_0003_02_000001,
/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001,
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/launch_container.sh,
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.tokens,
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid,
/tmp/hadoop-ubuntu/nm-local-dir, /home/ubuntu/hadoop-3.1.0/logs/userlogs,
/tmp/hadoop-ubuntu/nm-docker-cmds/docker.container_1524187975341_0003_02_0000017038281858825700163.cmd,
cgroups=none]
2018-04-19 21:38:54,464 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
Launch container failed. Exception:
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
ExitCodeException exitCode=1:
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(DockerLinuxContainerRuntime.java:897)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
    at
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:545)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:511)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
    at org.apache.hadoop.util.Shell.run(Shell.java:902)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
    ... 10 more
2018-04-19 21:38:54,464 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime:
Docker command used: run
cap-add=SYS_CHROOT,MKNOD,SETFCAP,SETPCAP,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,FOWNER,SETUID,DAC_OVERRIDE,KILL,NET_BIND_SERVICE
cap-drop=ALL detach=true docker-command=run
docker-config=/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001
group-add=1000,4,24,27,30,46,110,115,116,117
image=local/hadoop-ubuntu:latest
launch-command=bash,/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001/launch_container.sh
name=container_1524187975341_0003_02_000001 net=host
ro-mounts=/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-ubuntu/nm-local-dir/filecache,/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache
rw-mounts=/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001,/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003
user=1000:1000
workdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001
2018-04-19 21:38:54,465 WARN
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code
from container container_1524187975341_0003_02_000001 is : 1
2018-04-19 21:38:54,465 WARN
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exception
from container-launch with container ID:
container_1524187975341_0003_02_000001 and exit code: 1
org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException:
Launch container failed
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(DockerLinuxContainerRuntime.java:904)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
    at
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:545)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:511)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from
container-launch.
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id:
container_1524187975341_0003_02_000001
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 1
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Shell output:
main : command provided 4
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : run as
user is ubuntu
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main :
requested yarn user is ubuntu
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Creating
script paths...
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Creating local
dirs...
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Getting exit
code file...
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Changing
effective user to root...
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Launching
docker container...
2018-04-19 21:38:54,465 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Docker run
command: /usr/bin/docker
--config='/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001'
run --name='container_1524187975341_0003_02_000001' --user='1000:1000' -d
--workdir='/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001'
--net='host' -v
'/tmp/hadoop-ubuntu/nm-local-dir/filecache:/tmp/hadoop-ubuntu/nm-local-dir/filecache:ro'
-v
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/filecache:ro'
-v
'/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001:/home/ubuntu/hadoop-3.1.0/logs/userlogs/application_1524187975341_0003/container_1524187975341_0003_02_000001'
-v
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003:/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003'
--cap-drop='ALL' --cap-add='SYS_CHROOT' --cap-add='MKNOD'
--cap-add='SETFCAP' --cap-add='SETPCAP' --cap-add='FSETID'
--cap-add='CHOWN' --cap-add='AUDIT_WRITE' --cap-add='SETGID'
--cap-add='NET_RAW' --cap-add='FOWNER' --cap-add='SETUID'
--cap-add='DAC_OVERRIDE' --cap-add='KILL' --cap-add='NET_BIND_SERVICE'
--group-add '1000' --group-add '4' --group-add '24' --group-add '27'
--group-add '30' --group-add '46' --group-add '110' --group-add '115'
--group-add '116' --group-add '117' 'local/hadoop-ubuntu:latest' 'bash'
'/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001/launch_container.sh'

2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Inspecting
docker container...
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Docker inspect
command: /usr/bin/docker inspect --format {{.State.Pid}}
container_1524187975341_0003_02_000001
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: pid from
docker inspect: 2519
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Writing to
cgroup task files...
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Writing pid
file...
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Writing to tmp
file
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid.tmp
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Waiting for
docker container to finish.
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Obtaining the
exit code...
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Docker inspect
command: /usr/bin/docker inspect --format {{.State.ExitCode}}
container_1524187975341_0003_02_000001
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code from
docker inspect: 1
2018-04-19 21:38:54,466 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Wrote the exit
code 1 to
/tmp/hadoop-ubuntu/nm-local-dir/nmPrivate/application_1524187975341_0003/container_1524187975341_0003_02_000001/container_1524187975341_0003_02_000001.pid.exitcode
2018-04-19 21:38:54,466 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Container launch failed : Container exited with a non-zero exit code 1.
2018-04-19 21:38:54,475 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524187975341_0003_02_000001 transitioned from RUNNING
to EXITED_WITH_FAILURE
2018-04-19 21:38:54,476 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Cleaning up container container_1524187975341_0003_02_000001
2018-04-19 21:38:54,597 INFO
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Removing
Docker container : container_1524187975341_0003_02_000001
2018-04-19 21:38:56,643 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Skipping monitoring container container_1524187975341_0003_02_000001 since
CPU usage is not yet available.
2018-04-19 21:38:56,869 INFO
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting
absolute path :
/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003/container_1524187975341_0003_02_000001
2018-04-19 21:38:56,869 WARN
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu
OPERATION=Container Finished - Failed    TARGET=ContainerImpl
RESULT=FAILURE    DESCRIPTION=Container failed with state:
EXITED_WITH_FAILURE    APPID=application_1524187975341_0003
CONTAINERID=container_1524187975341_0003_02_000001
2018-04-19 21:38:56,871 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
Container container_1524187975341_0003_02_000001 transitioned from
EXITED_WITH_FAILURE to DONE
2018-04-19 21:38:56,872 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Removing container_1524187975341_0003_02_000001 from application
application_1524187975341_0003
2018-04-19 21:38:56,877 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Stopping resource-monitoring for container_1524187975341_0003_02_000001
2018-04-19 21:38:56,877 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_STOP for appId application_1524187975341_0003
2018-04-19 21:38:57,888 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed
completed containers from NM context:
[container_1524187975341_0003_02_000001]
2018-04-19 21:38:57,888 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524187975341_0003 transitioned from RUNNING to
APPLICATION_RESOURCES_CLEANINGUP
2018-04-19 21:38:57,889 INFO
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting
absolute path :
/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524187975341_0003
2018-04-19 21:38:57,889 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event APPLICATION_STOP for appId application_1524187975341_0003
2018-04-19 21:38:57,890 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
Application application_1524187975341_0003 transitioned from
APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2018-04-19 21:38:57,890 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
Scheduling Log Deletion for application: application_1524187975341_0003,
with delay of 10800 seconds

On Thu, Apr 19, 2018 at 9:28 PM, Shane Kumpf <sh...@gmail.com>
wrote:

> Hello Ahmad,
>
> The image being used is not privileged/untrusted based on the settings in
> container-executor.cfg. In container-executor.cfg you have set
> docker.privileged-containers.registries=local, but the image name
> variable in the job is using "hadoop-ubuntu:latest". Based on that
> setting, YARN is expecting the image to be in the "local" namespace. Can
> you set YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=local/hadoop-ubuntu:latest
> and see if that resolves the issue?
>
> Thanks,
> -Shane
>
> On Thu, Apr 19, 2018 at 4:59 PM, SeyyedAhmad Javadi <
> sjavadi@cs.stonybrook.edu> wrote:
>
>> Hi All,
>>
>> I am following the below guide to setup Docker container run_time but
>> face some non-trivial errors at least for my level. Would you please
>> comment if you have some idea about the root-cause?
>> http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yar
>> n-site/DockerContainers.html
>>
>> After the error, I have provided the config files and Dockerfile as  well
>> as Docker image inspect command results (should have null for Entry Point
>> and CMS?).
>>
>> I have three nodes, 1 RM and 2 NMs  and default LCE works fine.
>>
>> ********************submit job script
>> vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNT
>> IME_DOCKER_IMAGE=hadoop-ubuntu,YARN_CONTAINER_RUNTIME_DOCKER
>> _RUN_OVERRIDE_DISABLE=false,YARN_CONTAINER_RUNTIME_DOCKER_
>> CONTAINER_NETWORK=host"
>>
>> #vars="YARN_CONTAINER_RUNTIME_TYPE=default"
>> hadoop jar /home/ubuntu/hadoop-3.1.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar
>> pi -Dyarn.app.mapreduce.am.env=$vars -Dmapreduce.map.env=$vars
>> -Dmapreduce.reduce.env=$vars 2 10
>>
>> ******************** AM Log in one the nodes
>> 2018-04-19 18:55:46,311 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
>> Auth successful for appattempt_1524178188987_0001_000001 (auth:SIMPLE)
>> 2018-04-19 18:55:46,515 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.ContainerManagerImpl: Start request for
>> container_1524178188987_0001_01_000001 by user ubuntu
>> 2018-04-19 18:55:46,617 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.ContainerManagerImpl: Creating a new
>> application reference for app application_1524178188987_0001
>> 2018-04-19 18:55:46,634 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.application.ApplicationImpl: Application
>> application_1524178188987_0001 transitioned from NEW to INITING
>> 2018-04-19 18:55:46,634 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
>> USER=ubuntu    IP=130.245.127.176    OPERATION=Start Container Request
>> TARGET=ContainerManageImpl    RESULT=SUCCESS
>> APPID=application_1524178188987_0001    CONTAINERID=container_15241781
>> 88987_0001_01_000001
>> 2018-04-19 18:55:46,635 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.application.ApplicationImpl: Adding
>> container_1524178188987_0001_01_000001 to application
>> application_1524178188987_0001
>> 2018-04-19 18:55:46,649 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.application.ApplicationImpl: Application
>> application_1524178188987_0001 transitioned from INITING to RUNNING
>> 2018-04-19 18:55:46,655 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.container.ContainerImpl: Container
>> container_1524178188987_0001_01_000001 transitioned from NEW to
>> LOCALIZING
>> 2018-04-19 18:55:46,655 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for
>> appId application_1524178188987_0001
>> 2018-04-19 18:55:46,698 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.localizer.ResourceLocalizationService:
>> Created localizer for container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:46,898 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.localizer.ResourceLocalizationService:
>> Writing credentials to the nmPrivate file /tmp/hadoop-ubuntu/nm-local-di
>> r/nmPrivate/container_1524178188987_0001_01_000001.tokens
>> 2018-04-19 18:55:50,371 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.container.ContainerImpl: Container
>> container_1524178188987_0001_01_000001 transitioned from LOCALIZING to
>> SCHEDULED
>> 2018-04-19 18:55:50,374 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.scheduler.ContainerScheduler: Starting
>> container [container_1524178188987_0001_01_000001]
>> 2018-04-19 18:55:50,479 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.container.ContainerImpl: Container
>> container_1524178188987_0001_01_000001 transitioned from SCHEDULED to
>> RUNNING
>> 2018-04-19 18:55:50,481 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting
>> resource-monitoring for container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:51,842 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.launcher.ContainerLaunch: Container
>> container_1524178188987_0001_01_000001 succeeded
>> 2018-04-19 18:55:51,844 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.container.ContainerImpl: Container
>> container_1524178188987_0001_01_000001 transitioned from RUNNING to
>> EXITED_WITH_SUCCESS
>> 2018-04-19 18:55:51,844 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up
>> container container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:51,957 INFO org.apache.hadoop.yarn.server.
>> nodemanager.LinuxContainerExecutor: Removing Docker container :
>> container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:56,963 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.launcher.ContainerLaunch: Could not get pid
>> for container_1524178188987_0001_01_000001. Waited for 5000 ms.
>> 2018-04-19 18:55:56,963 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.launcher.ContainerLaunch: Unable to obtain
>> pid, but docker container request detected. Attempting to reap container
>> container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:59,395 INFO org.apache.hadoop.yarn.server.
>> nodemanager.LinuxContainerExecutor: Deleting absolute path :
>> /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524178188987_0001/container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:59,395 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
>> USER=ubuntu    OPERATION=Container Finished - Succeeded
>> TARGET=ContainerImpl    RESULT=SUCCESS    APPID=application_1524178188987_0001
>> CONTAINERID=container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:59,403 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.container.ContainerImpl: Container
>> container_1524178188987_0001_01_000001 transitioned from
>> EXITED_WITH_SUCCESS to DONE
>> 2018-04-19 18:55:59,404 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.application.ApplicationImpl: Removing
>> container_1524178188987_0001_01_000001 from application
>> application_1524178188987_0001
>> 2018-04-19 18:55:59,404 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping
>> resource-monitoring for container_1524178188987_0001_01_000001
>> 2018-04-19 18:55:59,405 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for
>> appId application_1524178188987_0001
>> 2018-04-19 18:56:00,412 INFO org.apache.hadoop.yarn.server.
>> nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM
>> context: [container_1524178188987_0001_01_000001]
>> 2018-04-19 18:56:13,455 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.application.ApplicationImpl: Application
>> application_1524178188987_0001 transitioned from RUNNING to
>> APPLICATION_RESOURCES_CLEANINGUP
>> 2018-04-19 18:56:13,457 INFO org.apache.hadoop.yarn.server.
>> nodemanager.LinuxContainerExecutor: Deleting absolute path :
>> /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1524178188987_0001
>> 2018-04-19 18:56:13,459 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for
>> appId application_1524178188987_0001
>> 2018-04-19 18:56:13,470 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.application.ApplicationImpl: Application
>> application_1524178188987_0001 transitioned from
>> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
>> 2018-04-19 18:56:13,470 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
>> Scheduling Log Deletion for application: application_1524178188987_0001,
>> with delay of 10800 seconds
>>
>> **********************
>>
>>
>>
>>
>> yarn-site.xml: according to the above link
>> container-executor.cfg:
>>
>> yarn.nodemanager.linux-container-executor.group=ubuntu
>> min.user.id=0
>> #feature.tc.enabled=1
>> #feature.docker.enabled=1
>> allowed.system.users=ubuntu
>> # The configs below deal with settings for Docker
>> [docker]
>> module.enabled=true
>> docker.privileged-containers.enabled=true
>> docker.binary=/usr/bin/docker
>> docker.allowed.capabilities=SYS_CHROOT,MKNOD,SETFCAP,SETPCAP,FSETID,CHOWN,AUDIT_WRITE,SETGID,NET_RAW,FOWNER,SETUID,DAC_OVERRIDE,KILL,NET_BIND_SERVICE
>> #docker.allowed.devices=## comma separated list of devices that can be mounted into a container
>> docker.allowed.networks=bridge,host,none
>> docker.allowed.ro-mounts=/sys/fs/cgroup,/tmp/hadoop-ubuntu/nm-local-dir
>> docker.privileged-containers.registries=local
>> #docker.host-pid-namespace.enabled=false
>> docker.allowed.rw-mounts=/home/ubuntu/hadoop-3.1.0,/home/ubuntu/hadoop-3.1.0/logs
>>
>>
>> Dockerfile:
>> FROM ubuntu:16.04
>> #RUN rm /bin/sh && ln -s /bin/bash /bin/sh
>> SHELL ["/bin/bash", "-c"]
>>
>> RUN apt-get update && \
>>     apt-get upgrade -y && \
>>     apt-get install -y  software-properties-common && \
>> #    apt-get install -y --no-install-recommends apt-utils && \
>> #    apt-get install -y  curl && \
>>     add-apt-repository ppa:webupd8team/java -y && \
>>     apt-get update && \
>>     echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
>>     apt-get install -y oracle-java8-installer && \
>> #    apt-get install -y ssh && \
>> #    apt-get install -y rsync && \
>>     apt-get install -y vim && \
>>     apt-get clean
>>
>> ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
>> ENV PATH $PATH:$JAVA_HOME/bin
>>
>> # HADOOP
>> ARG HADOOP_ARCHIVE=http://mirror.cc.columbia.edu/pub/software/apache/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz
>>
>> ENV HADOOP_HOME /usr/local/hadoop
>> ENV HADOOP_COMMON_PATH /usr/local/hadoop
>> ENV HADOOP_HDFS_HOME /usr/local/hadoop
>> ENV HADOOP_MAPRED_HOME /usr/local/hadoop
>> ENV HADOOP_YARN_HOME /usr/local/hadoop
>> ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
>>
>> # download and extract hadoop, set JAVA_HOME in hadoop-env.sh, update path
>> RUN wget $HADOOP_ARCHIVE && \
>>   tar -xzf hadoop-3.1.0.tar.gz && \
>>   mv hadoop-3.1.0 $HADOOP_HOME
>>
>> ADD rm-hadoop-config/* $HADOOP_HOME/etc/hadoop/
>>
>> ENV PATH $PATH:$HADOOP_COMMON_PATH/bin
>>
>> WORKDIR $HADOOP_COMMON_PATH
>>
>> # Declare user
>> RUN groupadd -g 1000 ubuntu && \
>>     useradd -r -u 1000  -g 1000 ubuntu
>> USER ubuntu
>>
>>
>> ~/hadoop-common$ docker inspect local/hadoop-ubuntu
>> [
>>     {
>>         "Id": "sha256:d8335693084b5823675056b7d649b13d04a7e3c3b63688f83e9807405506b088",
>>         "RepoTags": [
>>             "hadoop-ubuntu:latest",
>>             "local/hadoop-ubuntu:latest"
>>         ],
>>         "RepoDigests": [],
>>         "Parent": "sha256:f9b92fa15eadd74b9e3712ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
>>         "Comment": "",
>>         "Created": "2018-04-19T15:16:52.903547395Z",
>>         "Container": "4fcff6fa65639fe0e2a9379a335a401a8cde0c0e1eaaa2dd98d35ced402c29e3",
>>         "ContainerConfig": {
>>             "Hostname": "4fcff6fa6563",
>>             "Domainname": "",
>>             "User": "ubuntu",
>>             "AttachStdin": false,
>>             "AttachStdout": false,
>>             "AttachStderr": false,
>>             "Tty": false,
>>             "OpenStdin": false,
>>             "StdinOnce": false,
>>             "Env": [
>>                 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/local/hadoop/bin",
>>                 "JAVA_HOME=/usr/lib/jvm/java-8-oracle",
>>                 "HADOOP_HOME=/usr/local/hadoop",
>>                 "HADOOP_COMMON_PATH=/usr/local/hadoop",
>>                 "HADOOP_HDFS_HOME=/usr/local/hadoop",
>>                 "HADOOP_MAPRED_HOME=/usr/local/hadoop",
>>                 "HADOOP_YARN_HOME=/usr/local/hadoop",
>>                 "HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop"
>>             ],
>>             "Cmd": [
>>                 "/bin/bash",
>>                 "-c",
>>                 "#(nop) ",
>>                 "USER ubuntu"
>>             ],
>>             "ArgsEscaped": true,
>>             "Image": "sha256:f9b92fa15eadd74b9e3712ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
>>             "Volumes": null,
>>             "WorkingDir": "/usr/local/hadoop",
>>             "Entrypoint": null,
>>             "OnBuild": null,
>>             "Labels": {},
>>             "Shell": [
>>                 "/bin/bash",
>>                 "-c"
>>             ]
>>         },
>>         "DockerVersion": "18.03.0-ce",
>>         "Author": "",
>>         "Config": {
>>             "Hostname": "",
>>             "Domainname": "",
>>             "User": "ubuntu",
>>             "AttachStdin": false,
>>             "AttachStdout": false,
>>             "AttachStderr": false,
>>             "Tty": false,
>>             "OpenStdin": false,
>>             "StdinOnce": false,
>>             "Env": [
>>                 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/local/hadoop/bin",
>>                 "JAVA_HOME=/usr/lib/jvm/java-8-oracle",
>>                 "HADOOP_HOME=/usr/local/hadoop",
>>                 "HADOOP_COMMON_PATH=/usr/local/hadoop",
>>                 "HADOOP_HDFS_HOME=/usr/local/hadoop",
>>                 "HADOOP_MAPRED_HOME=/usr/local/hadoop",
>>                 "HADOOP_YARN_HOME=/usr/local/hadoop",
>>                 "HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop"
>>             ],
>>             "Cmd": [
>>                 "/bin/bash"
>>             ],
>>             "ArgsEscaped": true,
>>             "Image": "sha256:f9b92fa15eadd74b9e3712ee379b56bc30a73492db2fd4e7b7f4f1d74e5671f2",
>>             "Volumes": null,
>>             "WorkingDir": "/usr/local/hadoop",
>>             "Entrypoint": null,
>>             "OnBuild": null,
>>             "Labels": null,
>>             "Shell": [
>>                 "/bin/bash",
>>                 "-c"
>>             ]
>>         },
>>         "Architecture": "amd64",
>>         "Os": "linux",
>>         "Size": 2058935914,
>>         "VirtualSize": 2058935914,
>>         "GraphDriver": {
>>             "Data": null,
>>             "Name": "aufs"
>>         },
>>         "RootFS": {
>>             "Type": "layers",
>>             "Layers": [
>>                 "sha256:fccbfa2912f0cd6b9d13f91f288f112a2b825f3f758a4443aacb45bfc108cc74",
>>                 "sha256:e1a9a6284d0d24d8194ac84b372619e75cd35a46866b74925b7274c7056561e4",
>>                 "sha256:ac7299292f8b2f710d3b911c6a4e02ae8f06792e39822e097f9c4e9c2672b32d",
>>                 "sha256:a5e66470b2812e91798db36eb103c1f1e135bbe167e4b2ad5ba425b8db98ee8d",
>>                 "sha256:a8de0e025d94b33db3542e1e8ce58829144b30c6cd1fff057eec55b1491933c3",
>>                 "sha256:7e9a788452589001d42e7995dc0583bcca1e6f7780a301066ee0d6668aaf9c91",
>>                 "sha256:65fac5b99df0506e1d204d9024264105ab2fe142ea970f0eba630669dc606055",
>>                 "sha256:cd0da20f97700ab8e2ec37c464fcb8864cba86672aa96e7fc40e2572228895e2",
>>                 "sha256:21094366bc298af4a7e83680cf9d732ce69cb92b11a3b20005b12c15bac3e486"
>>             ]
>>         },
>>         "Metadata": {
>>             "LastTagTime": "2018-04-19T17:20:41.054733026-04:00"
>>         }
>>     }
>> ]
>>
>>
>>
>>
>> Best,
>> Ahmad
>>
>>
>

Re: Regarding Docker container run_time setup

Posted by Shane Kumpf <sh...@gmail.com>.
Hello Ahmad,

The image being used is treated as untrusted based on the settings in
container-executor.cfg. There you have set
docker.privileged-containers.registries=local, but the image name variable
in the job is using "hadoop-ubuntu:latest". Based on that setting, YARN is
expecting the image to be in the "local" namespace. Can you set
YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=local/hadoop-ubuntu:latest and see if
that resolves the issue?
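
As a sketch of that change (hypothetical shell, reusing the names from this
thread; the `local/` tag and the example jar path are assumptions), the image
would first be tagged into the trusted "local" namespace and the job
resubmitted with the fully qualified name:

```shell
# Tag the image into the trusted namespace first (run once on each NM host,
# outside this snippet):
#   docker tag hadoop-ubuntu:latest local/hadoop-ubuntu:latest

# Build the runtime env string with the fully qualified image name so it
# matches docker.privileged-containers.registries=local in
# container-executor.cfg.
image="local/hadoop-ubuntu:latest"
vars="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=${image},YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE=false,YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK=host"
echo "$vars"

# Then resubmit with the updated env string:
# hadoop jar /home/ubuntu/hadoop-3.1.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar \
#   pi -Dyarn.app.mapreduce.am.env=$vars -Dmapreduce.map.env=$vars \
#   -Dmapreduce.reduce.env=$vars 2 10
```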

Thanks,
-Shane

On Thu, Apr 19, 2018 at 4:59 PM, SeyyedAhmad Javadi <
sjavadi@cs.stonybrook.edu> wrote:
