Posted to issues@mesos.apache.org by "Gilbert Song (JIRA)" <ji...@apache.org> on 2016/08/08 17:57:20 UTC

[jira] [Comment Edited] (MESOS-6004) Tasks fail when provisioning multiple containers with large docker images using copy backend

    [ https://issues.apache.org/jira/browse/MESOS-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412159#comment-15412159 ] 

Gilbert Song edited comment on MESOS-6004 at 8/8/16 5:56 PM:
-------------------------------------------------------------

Thanks [~mito]. We need to fix this issue. Most likely the image is so large that downloading/copying it takes too long. Could you please:

1. Just out of curiosity, could you test with the local puller and the overlay backend (`--docker_registry=/path/to/your/image/tarballs/folder` and `--image_provisioner_backend=overlay`)? I want to know whether you still see the scheduling issue.

2. Attach the GLOG_v=1 log; it should be a reasonable size if you wrap it in `noformat`.
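The suggested test can be sketched as an agent invocation like the one below. This is only an illustration, not the reporter's actual setup: the master address and the tarball directory are placeholders, and the overlay backend requires overlayfs support in the agent's kernel.

```shell
# Sketch: use the local puller (--docker_registry pointing at a local
# directory of docker image tarballs) together with the overlay
# provisioner backend instead of the default 'copy' backend.
# Paths and the master address below are placeholders.
mesos-agent \
  --master=zk://<zk-host>:2181/mesos \
  --containerizers=mesos \
  --image_providers=docker \
  --docker_registry=/path/to/your/image/tarballs/folder \
  --image_provisioner_backend=overlay \
  --work_dir=/mnt/mesos
```

With the overlay backend the container rootfs is assembled from the image layers via a union mount rather than a full file-by-file copy, so provisioning time should no longer scale with image size.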



>  Tasks fail when provisioning multiple containers with large docker images using copy backend
> ---------------------------------------------------------------------------------------------
>
>                 Key: MESOS-6004
>                 URL: https://issues.apache.org/jira/browse/MESOS-6004
>             Project: Mesos
>          Issue Type: Bug
>          Components: containerization, docker
>    Affects Versions: 0.28.2, 1.0.0
>         Environment: h4. Agent Platform
> - Ubuntu 16.04
> - AWS g2.x2large instance
> - Nvidia support enabled
> h4. Agent Configuration
> {noformat}
> --containerizers=mesos,docker
> --docker_config=<docker auth json>
> --docker_store_dir=/mnt/mesos/store/docker
> --executor_registration_timeout=3mins
> --hostname=<aws public dns>
> --image_providers=docker
> --image_provisioner_backend=copy
> --isolation=filesystem/linux,docker/runtime,cgroups/devices,gpu/nvidia
> --switch_user=false
> --work_dir=/mnt/mesos
> {noformat}
> h4. Framework
> - custom framework written in python
> - using unified containerizer with docker images
> h4. Test Setup
> * 1 master
> * 1 agent
> * 5 tasks scheduled at the same time:
> ** resources: cpus: 0.1, mem: 128
> ** command: `echo test`
> ** docker image: custom docker image, based on nvidia/cuda ~5gb
> ** the same docker image was used for all tasks and was already pulled.
>            Reporter: Michael Thomas
>              Labels: containerizer, docker, performance
>
> When scheduling more than one task on the same agent, all tasks fail, as the containers seem to be destroyed during provisioning.
> Specifically, the errors on the agent logs are:
> {noformat}
>  E0808 15:53:09.691315 30996 slave.cpp:3976] Container 'eb20f642-bb90-4293-8eec-6f1576ccaeb1' for executor '3' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 failed to start: Container is being destroyed during provisioning
> {noformat}
> and 
> {noformat}
> I0808 15:52:32.510210 30999 slave.cpp:4539] Terminating executor ''2' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001' because it did not register within 3mins
> {noformat}
> As the default provisioning method {{copy}} is being used, I assume this is due to the provisioning of multiple containers taking too long and the agent will not wait. For large images, this method is simply not performant.
> The issue did not occur when only one task was scheduled.
> Increasing the {{executor_registration_timeout}} parameter seemed to help a bit, as it allowed at least 2 tasks to be scheduled at the same time, but it still fails with more (5 in this case).
> h4. Complete logs
> (with GLOG_v=0, as with 1 it was too long)
> {noformat}
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.661067 30961 main.cpp:434] Starting Mesos agent
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.661551 30961 slave.cpp:198] Agent started on 1)@172.31.23.17:5051
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.661578 30961 slave.cpp:199] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos,docker" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_config="{"auths":{"https:\/\/index.docker.io\/v1\/":{"auth":"dGVycmFsb3VwZTpUYWxFWUFOSXR5","email":"sebastian.gerke@terraloupe.com"}}}" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/mnt/mesos/store/docker" --do
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: cker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="3mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname="ec2-52-59-113-0.eu-central-1.compute.amazonaws.com" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --image_providers="docker" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="filesystem/linux,docker/runtime,cgroups/devices,gpu/nvidia" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://172.31.19.240:2181/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recov
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: er="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="false" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/mnt/mesos"
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.662147 30961 slave.cpp:519] Agent resources: gpus(*):1; cpus(*):8; mem(*):14014; disk(*):60257; ports(*):[31000-32000]
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.662211 30961 slave.cpp:527] Agent attributes: [  ]
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.662230 30961 slave.cpp:532] Agent hostname: ec2-52-59-113-0.eu-central-1.compute.amazonaws.com
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.663354 31000 state.cpp:57] Recovering state from '/mnt/mesos/meta'
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.663918 30995 status_update_manager.cpp:200] Recovering status update manager
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.664131 30996 containerizer.cpp:522] Recovering containerizer
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.664136 31000 docker.cpp:775] Recovering Docker containers
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: 2016-08-08 15:48:32,665:30961(0x7fce36077700):ZOO_INFO@check_events@1728: initiated connection to server [172.31.19.240:2181]
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: 2016-08-08 15:48:32,667:30961(0x7fce36077700):ZOO_INFO@check_events@1775: session establishment complete on server [172.31.19.240:2181], sessionId=0x1566a66ab9b000a, negotiated timeout=10000
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.667783 31000 group.cpp:349] Group process (group(1)@172.31.23.17:5051) connected to ZooKeeper
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.667835 31000 group.cpp:837] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.667857 31000 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.668104 30994 metadata_manager.cpp:251] Successfully loaded 1 Docker images
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.668200 30993 provisioner.cpp:253] Provisioner recovery complete
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.671417 31000 detector.cpp:152] Detected a new leader: (id='64')
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.671563 30998 group.cpp:706] Trying to get '/mesos/json.info_0000000064' in ZooKeeper
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.672611 30996 zookeeper.cpp:259] A new leading master (UPID=master@172.31.19.240:5050) is detected
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761471 30998 slave.cpp:4782] Finished recovery
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761726 30998 slave.cpp:4815] Garbage collecting old agent 524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S2
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761812 30998 slave.cpp:4815] Garbage collecting old agent c9852a23-bc07-422d-8d69-23c167a1924d-S0
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761808 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S2' for gc 6.99999118329778days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761893 30998 slave.cpp:4815] Garbage collecting old agent 524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S1
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761904 31000 gc.cpp:55] Scheduling '/mnt/mesos/meta/slaves/524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S2' for gc 6.99999118289481days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761978 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S0' for gc 6.99999118225778days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.761988 30998 slave.cpp:4815] Garbage collecting old agent 524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S3
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762029 31000 gc.cpp:55] Scheduling '/mnt/mesos/meta/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S0' for gc 6.9999911819437days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762064 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S1' for gc 6.99999118122667days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762105 31000 gc.cpp:55] Scheduling '/mnt/mesos/meta/slaves/524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S1' for gc 6.9999911809037days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762168 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S3' for gc 6.9999911798963days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762207 31000 gc.cpp:55] Scheduling '/mnt/mesos/meta/slaves/524105e7-de7a-43f0-8b28-d3ff3e0c4a44-S3' for gc 6.99999117958222days in the future
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762230 30998 slave.cpp:895] New master detected at master@172.31.19.240:5050
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762212 30993 status_update_manager.cpp:174] Pausing sending status updates
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762277 30998 slave.cpp:916] No credentials provided. Attempting to register without authentication
> Aug  8 15:48:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:32.762344 30998 slave.cpp:927] Detecting new master
> Aug  8 15:48:33 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:33.067215 30999 slave.cpp:1197] Re-registered with master master@172.31.19.240:5050
> Aug  8 15:48:33 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:33.067312 30999 slave.cpp:1233] Forwarding total oversubscribed resources
> Aug  8 15:48:33 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:33.067313 30993 status_update_manager.cpp:181] Resuming sending status updates
> Aug  8 15:48:33 ip-172-31-23-17 mesos-slave[30961]: I0808 15:48:33.067610 31000 slave.cpp:2526] Updated checkpointed resources from  to
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.507480 31000 slave.cpp:1495] Got assigned task 2 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.508069 31000 gc.cpp:83] Unscheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001' from gc
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.508167 30998 slave.cpp:1614] Launching task 2 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.509050 30998 slave.cpp:5674] Launching executor 2 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/2/runs/a58a1b2c-14a4-4ada-a4b3-666d8d077597'
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.509269 30998 slave.cpp:1840] Queuing task '2' for executor '2' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.509430 30993 containerizer.cpp:781] Starting container 'a58a1b2c-14a4-4ada-a4b3-666d8d077597' for executor '2' of framework 'c9852a23-bc07-422d-8d69-23c167a1924d-0001'
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.511643 30995 provisioner.cpp:294] Provisioning image rootfs '/mnt/mesos/provisioner/containers/a58a1b2c-14a4-4ada-a4b3-666d8d077597/backends/copy/rootfses/a688d80b-3eb9-4301-abfc-3e6a742cc8be' for container a58a1b2c-14a4-4ada-a4b3-666d8d077597
> Aug  8 15:49:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:32.663204 30996 slave.cpp:4591] Current disk usage 17.44%. Max allowed age: 5.078955723221944days
> Aug  8 15:49:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:38.511801 30995 slave.cpp:1495] Got assigned task 3 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:38.512176 30995 slave.cpp:1614] Launching task 3 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:38.512511 30995 slave.cpp:5674] Launching executor 3 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/3/runs/eb20f642-bb90-4293-8eec-6f1576ccaeb1'
> Aug  8 15:49:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:38.512737 30995 slave.cpp:1840] Queuing task '3' for executor '3' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:38.512948 30998 containerizer.cpp:781] Starting container 'eb20f642-bb90-4293-8eec-6f1576ccaeb1' for executor '3' of framework 'c9852a23-bc07-422d-8d69-23c167a1924d-0001'
> Aug  8 15:49:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:38.514993 30996 provisioner.cpp:294] Provisioning image rootfs '/mnt/mesos/provisioner/containers/eb20f642-bb90-4293-8eec-6f1576ccaeb1/backends/copy/rootfses/8d2e66c8-f0d6-4891-86e3-7e5c222adfd2' for container eb20f642-bb90-4293-8eec-6f1576ccaeb1
> Aug  8 15:49:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:44.524807 31000 slave.cpp:1495] Got assigned task 4 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:44.525184 31000 slave.cpp:1614] Launching task 4 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:44.525995 31000 slave.cpp:5674] Launching executor 4 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/4/runs/d248d565-e9d1-438e-8524-d71f601ba981'
> Aug  8 15:49:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:44.526221 31000 slave.cpp:1840] Queuing task '4' for executor '4' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:44.526448 30995 containerizer.cpp:781] Starting container 'd248d565-e9d1-438e-8524-d71f601ba981' for executor '4' of framework 'c9852a23-bc07-422d-8d69-23c167a1924d-0001'
> Aug  8 15:49:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:44.528301 30998 provisioner.cpp:294] Provisioning image rootfs '/mnt/mesos/provisioner/containers/d248d565-e9d1-438e-8524-d71f601ba981/backends/copy/rootfses/d05cefcb-6f83-49b6-a1c1-8e53d463712c' for container d248d565-e9d1-438e-8524-d71f601ba981
> Aug  8 15:49:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:50.531419 30996 slave.cpp:1495] Got assigned task 5 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:50.531810 30996 slave.cpp:1614] Launching task 5 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:50.532166 30996 slave.cpp:5674] Launching executor 5 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/5/runs/2f372338-3c40-4463-b61f-4f7e9e42766a'
> Aug  8 15:49:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:50.532412 30996 slave.cpp:1840] Queuing task '5' for executor '5' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:50.532740 30998 containerizer.cpp:781] Starting container '2f372338-3c40-4463-b61f-4f7e9e42766a' for executor '5' of framework 'c9852a23-bc07-422d-8d69-23c167a1924d-0001'
> Aug  8 15:49:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:50.534970 30993 provisioner.cpp:294] Provisioning image rootfs '/mnt/mesos/provisioner/containers/2f372338-3c40-4463-b61f-4f7e9e42766a/backends/copy/rootfses/691d6235-1596-4a5d-aeef-21bf43b2cc03' for container 2f372338-3c40-4463-b61f-4f7e9e42766a
> Aug  8 15:49:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:56.540551 30999 slave.cpp:1495] Got assigned task 6 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:56.540967 30999 slave.cpp:1614] Launching task 6 for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:56.541805 30999 slave.cpp:5674] Launching executor 6 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/6/runs/b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455'
> Aug  8 15:49:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:56.542049 30999 slave.cpp:1840] Queuing task '6' for executor '6' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:49:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:56.542263 30996 containerizer.cpp:781] Starting container 'b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455' for executor '6' of framework 'c9852a23-bc07-422d-8d69-23c167a1924d-0001'
> Aug  8 15:49:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:49:56.543998 30996 provisioner.cpp:294] Provisioning image rootfs '/mnt/mesos/provisioner/containers/b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455/backends/copy/rootfses/3601630b-5b57-4a45-9e2c-cfa79eacf2fd' for container b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455
> Aug  8 15:50:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:50:32.664178 30995 slave.cpp:4591] Current disk usage 31.17%. Max allowed age: 4.117972973602755days
> Aug  8 15:51:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:51:32.665262 30997 slave.cpp:4591] Current disk usage 41.95%. Max allowed age: 3.363669281005290days
> Aug  8 15:52:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:32.510210 30999 slave.cpp:4539] Terminating executor ''2' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001' because it did not register within 3mins
> Aug  8 15:52:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:32.510499 31000 containerizer.cpp:1622] Destroying container 'a58a1b2c-14a4-4ada-a4b3-666d8d077597'
> Aug  8 15:52:32 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:32.665740 30994 slave.cpp:4591] Current disk usage 52.57%. Max allowed age: 2.620120712155822days
> Aug  8 15:52:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:38.512807 30995 slave.cpp:4539] Terminating executor ''3' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001' because it did not register within 3mins
> Aug  8 15:52:38 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:38.513015 30994 containerizer.cpp:1622] Destroying container 'eb20f642-bb90-4293-8eec-6f1576ccaeb1'
> Aug  8 15:52:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:44.526844 30996 slave.cpp:4539] Terminating executor ''4' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001' because it did not register within 3mins
> Aug  8 15:52:44 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:44.527053 30996 containerizer.cpp:1622] Destroying container 'd248d565-e9d1-438e-8524-d71f601ba981'
> Aug  8 15:52:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:50.532806 31000 slave.cpp:4539] Terminating executor ''5' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001' because it did not register within 3mins
> Aug  8 15:52:50 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:50.532981 31000 containerizer.cpp:1622] Destroying container '2f372338-3c40-4463-b61f-4f7e9e42766a'
> Aug  8 15:52:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:56.542345 30995 slave.cpp:4539] Terminating executor ''6' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001' because it did not register within 3mins
> Aug  8 15:52:56 ip-172-31-23-17 mesos-slave[30961]: I0808 15:52:56.542580 30998 containerizer.cpp:1622] Destroying container 'b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455'
> Aug  8 15:53:09 ip-172-31-23-17 mesos-slave[30961]: E0808 15:53:09.691315 30996 slave.cpp:3976] Container 'eb20f642-bb90-4293-8eec-6f1576ccaeb1' for executor '3' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 failed to start: Container is being destroyed during provisioning
> Aug  8 15:53:09 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:09.695711 30996 composing.cpp:541] Container 'eb20f642-bb90-4293-8eec-6f1576ccaeb1' is already destroyed
> Aug  8 15:53:11 ip-172-31-23-17 mesos-slave[30961]: E0808 15:53:11.270226 30993 slave.cpp:3976] Container 'a58a1b2c-14a4-4ada-a4b3-666d8d077597' for executor '2' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 failed to start: Container is being destroyed during provisioning
> Aug  8 15:53:11 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:11.270447 30999 composing.cpp:541] Container 'a58a1b2c-14a4-4ada-a4b3-666d8d077597' is already destroyed
> Aug  8 15:53:11 ip-172-31-23-17 mesos-slave[30961]: E0808 15:53:11.689219 30995 slave.cpp:3976] Container 'd248d565-e9d1-438e-8524-d71f601ba981' for executor '4' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 failed to start: Container is being destroyed during provisioning
> Aug  8 15:53:11 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:11.689400 31000 composing.cpp:541] Container 'd248d565-e9d1-438e-8524-d71f601ba981' is already destroyed
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: E0808 15:53:12.400029 30999 slave.cpp:3976] Container '2f372338-3c40-4463-b61f-4f7e9e42766a' for executor '5' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 failed to start: Container is being destroyed during provisioning
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:12.400218 30994 composing.cpp:541] Container '2f372338-3c40-4463-b61f-4f7e9e42766a' is already destroyed
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:12.799772 30998 provisioner.cpp:434] Destroying container rootfs at '/mnt/mesos/provisioner/containers/eb20f642-bb90-4293-8eec-6f1576ccaeb1/backends/copy/rootfses/8d2e66c8-f0d6-4891-86e3-7e5c222adfd2' for container eb20f642-bb90-4293-8eec-6f1576ccaeb1
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: E0808 15:53:12.799855 30995 slave.cpp:3976] Container 'b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455' for executor '6' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 failed to start: Container is being destroyed during provisioning
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:12.810742 30999 composing.cpp:541] Container 'b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455' is already destroyed
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:12.811067 30998 provisioner.cpp:434] Destroying container rootfs at '/mnt/mesos/provisioner/containers/a58a1b2c-14a4-4ada-a4b3-666d8d077597/backends/copy/rootfses/a688d80b-3eb9-4301-abfc-3e6a742cc8be' for container a58a1b2c-14a4-4ada-a4b3-666d8d077597
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:12.812954 30998 provisioner.cpp:434] Destroying container rootfs at '/mnt/mesos/provisioner/containers/d248d565-e9d1-438e-8524-d71f601ba981/backends/copy/rootfses/d05cefcb-6f83-49b6-a1c1-8e53d463712c' for container d248d565-e9d1-438e-8524-d71f601ba981
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:12.814383 30998 provisioner.cpp:434] Destroying container rootfs at '/mnt/mesos/provisioner/containers/2f372338-3c40-4463-b61f-4f7e9e42766a/backends/copy/rootfses/691d6235-1596-4a5d-aeef-21bf43b2cc03' for container 2f372338-3c40-4463-b61f-4f7e9e42766a
> Aug  8 15:53:12 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:12.815577 30998 provisioner.cpp:434] Destroying container rootfs at '/mnt/mesos/provisioner/containers/b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455/backends/copy/rootfses/3601630b-5b57-4a45-9e2c-cfa79eacf2fd' for container b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:15.995162 31000 slave.cpp:4082] Executor '4' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 has terminated with unknown status
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:15.995355 31000 slave.cpp:3211] Handling status update TASK_FAILED (UUID: 4f428b5a-1538-4f78-8383-6c9b68f67f3f) for task 4 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 from @0.0.0.0:0
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:15.995841 31000 containerizer.cpp:1451] Ignoring update for unknown container: d248d565-e9d1-438e-8524-d71f601ba981
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:15.996011 31000 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 4f428b5a-1538-4f78-8383-6c9b68f67f3f) for task 4 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:15.996346 31000 slave.cpp:3604] Forwarding the update TASK_FAILED (UUID: 4f428b5a-1538-4f78-8383-6c9b68f67f3f) for task 4 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 to master@172.31.19.240:5050
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:15.998983 30993 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 4f428b5a-1538-4f78-8383-6c9b68f67f3f) for task 4 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:15 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:15.999192 30993 slave.cpp:4193] Cleaning up executor '4' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.000690 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/4/runs/d248d565-e9d1-438e-8524-d71f601ba981' for gc 6.9999884190637days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.000754 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/4' for gc 6.99998841775407days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.344812 30995 slave.cpp:4082] Executor '3' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 has terminated with unknown status
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.344981 30995 slave.cpp:3211] Handling status update TASK_FAILED (UUID: c33d9357-e749-4e05-82d2-cb305d1ff0d2) for task 3 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 from @0.0.0.0:0
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:16.346043 30994 containerizer.cpp:1451] Ignoring update for unknown container: eb20f642-bb90-4293-8eec-6f1576ccaeb1
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.346206 30998 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: c33d9357-e749-4e05-82d2-cb305d1ff0d2) for task 3 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.346482 30997 slave.cpp:3604] Forwarding the update TASK_FAILED (UUID: c33d9357-e749-4e05-82d2-cb305d1ff0d2) for task 3 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 to master@172.31.19.240:5050
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.348764 30998 status_update_manager.cpp:392] Received status update acknowledgement (UUID: c33d9357-e749-4e05-82d2-cb305d1ff0d2) for task 3 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.348863 30998 slave.cpp:4193] Cleaning up executor '3' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.348984 30998 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/3/runs/eb20f642-bb90-4293-8eec-6f1576ccaeb1' for gc 6.99999596136593days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.349020 30998 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/3' for gc 6.99999596097185days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.498633 30996 slave.cpp:4082] Executor '2' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 has terminated with unknown status
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.498803 30996 slave.cpp:3211] Handling status update TASK_FAILED (UUID: 3d818e6a-f600-412c-b07d-f21794a0af30) for task 2 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 from @0.0.0.0:0
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.498921 30996 slave.cpp:4082] Executor '5' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 has terminated with unknown status
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.498991 30996 slave.cpp:3211] Handling status update TASK_FAILED (UUID: 16048628-6112-42ed-b3db-5b418a29d40d) for task 5 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 from @0.0.0.0:0
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:16.499202 30996 containerizer.cpp:1451] Ignoring update for unknown container: a58a1b2c-14a4-4ada-a4b3-666d8d077597
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:16.499289 30996 containerizer.cpp:1451] Ignoring update for unknown container: 2f372338-3c40-4463-b61f-4f7e9e42766a
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.499400 30997 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 3d818e6a-f600-412c-b07d-f21794a0af30) for task 2 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.499665 30995 slave.cpp:3604] Forwarding the update TASK_FAILED (UUID: 3d818e6a-f600-412c-b07d-f21794a0af30) for task 2 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 to master@172.31.19.240:5050
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.499678 30997 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 16048628-6112-42ed-b3db-5b418a29d40d) for task 5 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.499928 30997 slave.cpp:3604] Forwarding the update TASK_FAILED (UUID: 16048628-6112-42ed-b3db-5b418a29d40d) for task 5 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 to master@172.31.19.240:5050
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.504577 30994 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 3d818e6a-f600-412c-b07d-f21794a0af30) for task 2 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.504712 30994 slave.cpp:4193] Cleaning up executor '2' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.505421 30999 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/2/runs/a58a1b2c-14a4-4ada-a4b3-666d8d077597' for gc 6.99999415104296days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.505486 30994 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 16048628-6112-42ed-b3db-5b418a29d40d) for task 5 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.505493 30999 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/2' for gc 6.99999415021926days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.505623 30994 slave.cpp:4193] Cleaning up executor '5' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.505748 30993 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/5/runs/2f372338-3c40-4463-b61f-4f7e9e42766a' for gc 6.99999414686222days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.505798 30993 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/5' for gc 6.9999941464237days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.598873 30999 slave.cpp:4082] Executor '6' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 has terminated with unknown status
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.599030 30999 slave.cpp:3211] Handling status update TASK_FAILED (UUID: 62805972-05e7-419f-8ff3-f80a88cf8199) for task 6 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 from @0.0.0.0:0
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: W0808 15:53:16.599324 30996 containerizer.cpp:1451] Ignoring update for unknown container: b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.599454 30996 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 62805972-05e7-419f-8ff3-f80a88cf8199) for task 6 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.599701 30993 slave.cpp:3604] Forwarding the update TASK_FAILED (UUID: 62805972-05e7-419f-8ff3-f80a88cf8199) for task 6 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001 to master@172.31.19.240:5050
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602051 30996 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 62805972-05e7-419f-8ff3-f80a88cf8199) for task 6 of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602171 30996 slave.cpp:4193] Cleaning up executor '6' of framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602313 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/6/runs/b77f9aa9-b2e5-488d-9fdd-fcbd2e5c8455' for gc 6.99999302943407days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602334 30996 slave.cpp:4281] Cleaning up framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602388 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001/executors/6' for gc 6.99999302868148days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602442 31000 gc.cpp:55] Scheduling '/mnt/mesos/slaves/c9852a23-bc07-422d-8d69-23c167a1924d-S1/frameworks/c9852a23-bc07-422d-8d69-23c167a1924d-0001' for gc 6.99999302741926days in the future
> Aug  8 15:53:16 ip-172-31-23-17 mesos-slave[30961]: I0808 15:53:16.602445 30999 status_update_manager.cpp:282] Closing status update streams for framework c9852a23-bc07-422d-8d69-23c167a1924d-0001
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)