Posted to reviews@mesos.apache.org by Greg Mann <gr...@mesosphere.io> on 2020/04/15 21:28:46 UTC

Review Request 72368: Fixed a bug in the agent's draining handler.

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing the agent into a
"draining" state indefinitely. This patch updates the agent's
handler to avoid checkpointing anything to disk in this case.
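The fixed control flow can be sketched as follows. This is a minimal, hypothetical illustration of the behavior described above, not the actual code in src/slave/slave.cpp; the `Agent` struct and member names are stand-ins:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for the checkpointed drain configuration.
struct DrainConfig { std::string marker; };

// Hypothetical stand-in for the agent; the real handler lives in
// src/slave/slave.cpp.
struct Agent {
  std::vector<std::string> tasks;
  std::vector<std::string> operations;
  bool drainConfigCheckpointed = false;

  // Handler for a DrainSlaveMessage. Previously the DrainConfig was
  // checkpointed unconditionally; with the fix, checkpointing is skipped
  // when the agent has nothing to drain, so it does not remain in a
  // "draining" state indefinitely.
  void handleDrainSlaveMessage(const DrainConfig& config) {
    (void)config;
    if (tasks.empty() && operations.empty()) {
      // Nothing to drain: return without persisting any state.
      return;
    }
    drainConfigCheckpointed = true;  // stand-in for writing to disk
  }
};
```

With this shape, an idle agent receiving a drain message stays out of the draining state, while an agent with work still checkpoints the config before draining.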


Diffs
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 


Diff: https://reviews.apache.org/r/72368/diff/1/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220326
-----------------------------------------------------------



Bad patch!

Reviews applied: [72363, 72364, 72368]

Failed command: ['bash', '-c', "set -o pipefail; export OS='ubuntu:16.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/jenkins/buildbot.sh 2>&1 | tee build_72368"]

Error:
ubuntu-16.04: Pulling from mesos/mesos-build
18d680d61657: Pulling fs layer
0addb6fece63: Pulling fs layer
78e58219b215: Pulling fs layer
eb6959a66df2: Pulling fs layer
1105027a5560: Pulling fs layer
c36946130a38: Pulling fs layer
6a6a5e68faab: Pulling fs layer
80e07249924c: Pulling fs layer
c4c63e2501db: Pulling fs layer
668b207c2829: Pulling fs layer
ed76dddad568: Pulling fs layer
c36946130a38: Waiting
6a6a5e68faab: Waiting
80e07249924c: Waiting
ed76dddad568: Waiting
c4c63e2501db: Waiting
668b207c2829: Waiting
eb6959a66df2: Waiting
0addb6fece63: Verifying Checksum
0addb6fece63: Download complete
78e58219b215: Download complete
eb6959a66df2: Download complete
18d680d61657: Verifying Checksum
18d680d61657: Download complete
c36946130a38: Verifying Checksum
c36946130a38: Download complete
6a6a5e68faab: Verifying Checksum
6a6a5e68faab: Download complete
80e07249924c: Verifying Checksum
80e07249924c: Download complete
c4c63e2501db: Verifying Checksum
c4c63e2501db: Download complete
668b207c2829: Download complete
ed76dddad568: Verifying Checksum
ed76dddad568: Download complete
18d680d61657: Pull complete
0addb6fece63: Pull complete
78e58219b215: Pull complete
eb6959a66df2: Pull complete
1105027a5560: Verifying Checksum
1105027a5560: Download complete
1105027a5560: Pull complete
c36946130a38: Pull complete
6a6a5e68faab: Pull complete
80e07249924c: Pull complete
c4c63e2501db: Pull complete
668b207c2829: Pull complete
ed76dddad568: Pull complete
Digest: sha256:fa967cbcfb44f55708a3cbc87f245c6d29dd891464db558af56a03ee321526bb
Status: Downloaded newer image for mesos/mesos-build:ubuntu-16.04
docker.io/mesos/mesos-build:ubuntu-16.04
Cloning into '/tmp/SRC'...
Note: checking out '92efe4b55fbb04531e9086631b54f1dfc350d823'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

autoreconf: Entering directory `.'
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal --warnings=all -I m4
autoreconf: configure.ac: tracing
configure.ac:1873: warning: cannot check for file existence when cross compiling
../../lib/autoconf/general.m4:2788: AC_CHECK_FILE is expanded from...
configure.ac:1873: the top level
configure.ac:1878: warning: cannot check for file existence when cross compiling
../../lib/autoconf/general.m4:2788: AC_CHECK_FILE is expanded from...
configure.ac:1878: the top level
configure.ac:2399: warning: AC_RUN_IFELSE called without default to allow cross compiling
../../lib/autoconf/general.m4:2759: AC_RUN_IFELSE is expanded from...
configure.ac:2399: the top level
autoreconf: running: libtoolize --copy
libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
autoreconf: running: /usr/bin/autoconf --warnings=all
configure.ac:1873: warning: cannot check for file existence when cross compiling
../../lib/autoconf/general.m4:2788: AC_CHECK_FILE is expanded from...
configure.ac:1873: the top level
configure.ac:1878: warning: cannot check for file existence when cross compiling
../../lib/autoconf/general.m4:2788: AC_CHECK_FILE is expanded from...
configure.ac:1878: the top level
configure.ac:2399: warning: AC_RUN_IFELSE called without default to allow cross compiling
../../lib/autoconf/general.m4:2759: AC_RUN_IFELSE is expanded from...
configure.ac:2399: the top level
autoreconf: configure.ac: not using Autoheader
autoreconf: running: automake --add-missing --copy --no-force --warnings=all
configure.ac:50: installing './ar-lib'
configure.ac:34: installing './compile'
configure.ac:24: installing './config.guess'
configure.ac:24: installing './config.sub'
configure.ac:46: installing './install-sh'
configure.ac:46: installing './missing'
3rdparty/Makefile.am:307: warning: source file '$(HTTP_PARSER)/http_parser.c' is in a subdirectory,
3rdparty/Makefile.am:307: but option 'subdir-objects' is disabled
automake: warning: possible forward-incompatibility.
automake: At least a source file is in a subdirectory, but the 'subdir-objects'
automake: automake option hasn't been enabled.  For now, the corresponding output
automake: object file(s) will be placed in the top-level directory.  However,
automake: this behaviour will change in future Automake versions: they will
automake: unconditionally cause object files to be placed in the same subdirectory
automake: of the corresponding sources.
automake: You are advised to start using 'subdir-objects' option throughout your
automake: project, to avoid future incompatibilities.
3rdparty/Makefile.am: installing './depcomp'
3rdparty/Makefile.am:262: warning: variable 'GLOG_LDFLAGS' is defined but no program or
3rdparty/Makefile.am:262: library has 'GLOG' as canonical name (possible typo)
3rdparty/stout/Makefile.am:138: warning: source file 'tests/adaptor_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/archiver_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/base64_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/bits_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/boundedhashmap_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/bytes_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/cache_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/cpp17_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/duration_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/dynamiclibrary_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/error_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/flags_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/gzip_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/hashmap_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/hashset_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/interval_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/ip_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/json_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/jsonify_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/lambda_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/linkedhashmap_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/mac_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/main.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/multimap_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/none_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/numify_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/option_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/copyfile_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/env_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/filesystem_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/process_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/rmdir_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/sendfile_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/signals_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/socket_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/strerror_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/os/systems_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/path_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/protobuf_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/recordio_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/result_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/some_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/strings_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/subcommand_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/svn_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/try_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/uri_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/uuid_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/variant_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:138: warning: source file 'tests/version_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:138: but option 'subdir-objects' is disabled
3rdparty/stout/Makefile.am:193: warning: source file 'tests/proc_tests.cpp' is in a subdirectory,
3rdparty/stout/Makefile.am:193: but option 'subdir-objects' is disabled
autoreconf: Leaving directory `.'
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking target system type... x86_64-pc-linux-gnu
checking whether the C++ compiler works... yes
checking for C++ compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking whether ln -s works... yes
checking for C++ compiler vendor... gnu
checking for a sed that does not truncate output... /bin/sed
checking for C++ compiler version... 5.4.0
checking for C++ compiler vendor... (cached) gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking whether make supports nested variables... yes
checking how to create a pax tar archive... gnutar
checking dependency style of gcc... gcc3
checking dependency style of g++... gcc3
checking whether to enable maintainer-specific portions of Makefiles... yes
checking for ar... ar
checking the archiver (ar) interface... ar
checking how to print strings... printf
checking for a sed that does not truncate output... (cached) /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking the maximum length of command line arguments... 1572864
checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for a working dd... /bin/dd
checking how to truncate binary pipes... /bin/dd bs=4096 count=1
checking for mt... no
checking if : is a manifest tool... no
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking how to run the C++ preprocessor... g++ -E
checking for ld used by g++... /usr/bin/ld -m elf_x86_64
checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking for g++ option to produce PIC... -fPIC -DPIC
checking if g++ PIC flag -fPIC -DPIC works... yes
checking if g++ static flag -static works... yes
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... (cached) GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
configure: creating ./config.lt
config.lt: creating libtool
checking whether to enable GC of unused sections... no
configure: Setting up CXXFLAGS for g++ version >= 4.8
checking whether C++ compiler accepts -fstack-protector-strong... yes
checking whether g++ supports C++11 features by default... no
checking whether g++ supports C++11 features with -std=c++11... yes
checking if compiler needs -Werror to reject unknown flags... no
checking for the pthreads library -lpthreads... no
checking whether pthreads work without any flags... no
checking whether pthreads work with -Kthread... no
checking whether pthreads work with -kthread... no
checking for the pthreads library -llthread... no
checking whether pthreads work with -pthread... yes
checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
checking if more special flags are required for pthreads... no
checking for PTHREAD_PRIO_INHERIT... yes
configure: Setting up build environment for x86_64 linux-gnu
checking for backtrace in -lunwind... no
checking for RAND_poll in -lcrypto... yes
checking openssl/ssl.h usability... yes
checking openssl/ssl.h presence... yes
checking for openssl/ssl.h... yes
checking for SSL_CTX_new in -lssl... yes
checking for main in -lgflags... no
checking for patch... patch
checking fts.h usability... yes
checking fts.h presence... yes
checking for fts.h... yes
checking for library containing fts_close... none required
checking apr_pools.h usability... yes
checking apr_pools.h presence... yes
checking for apr_pools.h... yes
checking for apr_initialize in -lapr-1... yes
checking for curl_global_init in -lcurl... yes
checking for javac... /usr/bin/javac
checking for java... /usr/bin/java
checking value of Java system property 'java.home'... /usr/lib/jvm/java-8-openjdk-amd64/jre
configure: using JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
checking whether or not we can build with JNI... yes
checking for mvn... /usr/bin/mvn
checking for javah... /usr/lib/jvm/java-8-openjdk-amd64/bin/javah
checking whether to enable launcher sealing... no
checking whether to enable the Seccomp isolator... no
checking for sasl_done in -lsasl2... yes
checking SASL CRAM-MD5 support... yes
checking svn_version.h usability... yes
checking svn_version.h presence... yes
checking for svn_version.h... yes
checking for svn_stringbuf_create_ensure in -lsvn_subr-1... yes
checking svn_delta.h usability... yes
checking svn_delta.h presence... yes
checking for svn_delta.h... yes
checking for svn_txdelta in -lsvn_delta-1... yes
checking whether to enable the XFS disk isolator... no
checking zlib.h usability... yes
checking zlib.h presence... yes
checking for zlib.h... yes
checking for deflate, gzread, gzwrite, inflate in -lz... yes
checking C++ standard library for undefined behaviour with selected optimization level... no
checking for python... /usr/bin/python
checking for python version... 2.7
checking for python platform... linux2
checking for python script directory... ${prefix}/lib/python2.7/dist-packages
checking for python extension module directory... ${exec_prefix}/lib/python2.7/dist-packages
checking for python2.7... (cached) /usr/bin/python
checking for a version of Python >= '2.1.0'... yes
checking for a version of Python >= '2.6'... yes
checking for the distutils Python package... yes
checking for Python include path... -I/usr/include/python2.7
checking for Python library path... -L/usr/lib -lpython2.7
checking for Python site-packages path... /usr/lib/python2.7/dist-packages
checking python extra libraries... -lpthread -ldl  -lutil -lm
checking python extra linking flags... -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions
checking consistency of all components of python development environment... yes
checking whether we can build usable Python eggs... cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
yes
checking for an old installation of the Mesos egg (before 0.20.0)... no
checking whether to enable new CLI... no
checking src/common/git_version.hpp presence... no
configure: generating src/common/git_version.hpp
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating mesos.pc
config.status: creating src/Makefile
config.status: creating 3rdparty/Makefile
config.status: creating 3rdparty/libprocess/Makefile
config.status: creating 3rdparty/libprocess/include/Makefile
config.status: creating 3rdparty/stout/Makefile
config.status: creating 3rdparty/stout/include/Makefile
config.status: creating 3rdparty/gmock_sources.cc
config.status: creating bin/mesos.sh
config.status: creating bin/mesos-agent.sh
config.status: creating bin/mesos-local.sh
config.status: creating bin/mesos-master.sh
config.status: creating bin/mesos-slave.sh
config.status: creating bin/mesos-tests.sh
config.status: creating bin/mesos-agent-flags.sh
config.status: creating bin/mesos-local-flags.sh
config.status: creating bin/mesos-master-flags.sh
config.status: creating bin/mesos-slave-flags.sh
config.status: creating bin/mesos-tests-flags.sh
config.status: creating bin/gdb-mesos-agent.sh
config.status: creating bin/gdb-mesos-local.sh
config.status: creating bin/gdb-mesos-master.sh
config.status: creating bin/gdb-mesos-slave.sh
config.status: creating bin/gdb-mesos-tests.sh
config.status: creating bin/lldb-mesos-agent.sh
config.status: creating bin/lldb-mesos-local.sh
config.status: creating bin/lldb-mesos-master.sh
config.status: creating bin/lldb-mesos-slave.sh
config.status: creating bin/lldb-mesos-tests.sh
config.status: creating bin/valgrind-mesos-agent.sh
config.status: creating bin/valgrind-mesos-local.sh
config.status: creating bin/valgrind-mesos-master.sh
config.status: creating bin/valgrind-mesos-slave.sh
config.status: creating bin/valgrind-mesos-tests.sh
config.status: creating src/deploy/mesos-daemon.sh
config.status: creating src/deploy/mesos-start-agents.sh
config.status: creating src/deploy/mesos-start-cluster.sh
config.status: creating src/deploy/mesos-start-masters.sh
config.status: creating src/deploy/mesos-start-slaves.sh
config.status: creating src/deploy/mesos-stop-agents.sh
config.status: creating src/deploy/mesos-stop-cluster.sh
config.status: creating src/deploy/mesos-stop-masters.sh
config.status: creating src/deploy/mesos-stop-slaves.sh
config.status: creating include/mesos/version.hpp
config.status: creating src/java/generated/org/apache/mesos/MesosNativeLibrary.java
config.status: creating src/examples/java/test-exception-framework
config.status: creating src/examples/java/test-executor
config.status: creating src/examples/java/test-framework
config.status: creating src/examples/java/test-multiple-executors-framework
config.status: creating src/examples/java/test-log
config.status: creating src/examples/java/v1-test-framework
config.status: creating src/java/mesos.pom
config.status: creating src/examples/python/test-executor
config.status: creating src/examples/python/test-framework
config.status: creating src/python/setup.py
config.status: creating src/python/cli/setup.py
config.status: creating src/python/interface/setup.py
config.status: creating src/python/native_common/ext_modules.py
config.status: creating src/python/executor/setup.py
config.status: creating src/python/native/setup.py
config.status: creating src/python/scheduler/setup.py
config.status: creating src/common/git_version.hpp
config.status: linking src/python/native_common/ext_modules.py to src/python/executor/ext_modules.py
config.status: linking src/python/native_common/ext_modules.py to src/python/scheduler/ext_modules.py
config.status: executing depfiles commands
config.status: executing libtool commands
configure: Build option summary:
    CXX:        g++
    CXXFLAGS:   -g1 -O0 -Wno-unused-local-typedefs -std=c++11
    CPPFLAGS:   -I/usr/include/subversion-1 -I/usr/include/apr-1 -I/usr/include/apr-1.0	      
    LDFLAGS:    
    LIBS:       -lz -lsvn_delta-1 -lsvn_subr-1 -lsasl2 -lcurl -lapr-1  -lrt
    JAVA_TEST_LDFLAGS: -L/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server -R/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server -Wl,-ljvm
    JAVA_JVM_LIBRARY:  /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so

make  dist-gzip am__post_remove_distdir='@:'
make[1]: Entering directory '/tmp/SRC/build'
if test -d "mesos-1.10.0"; then find "mesos-1.10.0" -type d ! -perm -200 -exec chmod u+w {} ';' && rm -rf "mesos-1.10.0" || { sleep 5 && rm -rf "mesos-1.10.0"; }; else :; fi
test -d "mesos-1.10.0" || mkdir "mesos-1.10.0"
 (cd 3rdparty && make  top_distdir=../mesos-1.10.0 distdir=../mesos-1.10.0/3rdparty \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[2]: Entering directory '/tmp/SRC/build/3rdparty'
 (cd stout && make  top_distdir=../../mesos-1.10.0 distdir=../../mesos-1.10.0/3rdparty/stout \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[3]: Entering directory '/tmp/SRC/build/3rdparty/stout'
 (cd include && make  top_distdir=../../../mesos-1.10.0 distdir=../../../mesos-1.10.0/3rdparty/stout/include \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[4]: Entering directory '/tmp/SRC/build/3rdparty/stout/include'
make[4]: Leaving directory '/tmp/SRC/build/3rdparty/stout/include'
make  \
  top_distdir="../../mesos-1.10.0" distdir="../../mesos-1.10.0/3rdparty/stout" \
  dist-hook
make[4]: Entering directory '/tmp/SRC/build/3rdparty/stout'
cp -r ../../../3rdparty/stout/3rdparty ../../mesos-1.10.0/3rdparty/stout/
make[4]: Leaving directory '/tmp/SRC/build/3rdparty/stout'
make[3]: Leaving directory '/tmp/SRC/build/3rdparty/stout'
 (cd libprocess && make  top_distdir=../../mesos-1.10.0 distdir=../../mesos-1.10.0/3rdparty/libprocess \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[3]: Entering directory '/tmp/SRC/build/3rdparty/libprocess'
 (cd include && make  top_distdir=../../../mesos-1.10.0 distdir=../../../mesos-1.10.0/3rdparty/libprocess/include \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[4]: Entering directory '/tmp/SRC/build/3rdparty/libprocess/include'
make[4]: Leaving directory '/tmp/SRC/build/3rdparty/libprocess/include'
make  \
  top_distdir="../../mesos-1.10.0" distdir="../../mesos-1.10.0/3rdparty/libprocess" \
  dist-hook
make[4]: Entering directory '/tmp/SRC/build/3rdparty/libprocess'
cp -r ../../../3rdparty/libprocess/3rdparty ../../mesos-1.10.0/3rdparty/libprocess/
make[4]: Leaving directory '/tmp/SRC/build/3rdparty/libprocess'
make[3]: Leaving directory '/tmp/SRC/build/3rdparty/libprocess'
make[2]: Leaving directory '/tmp/SRC/build/3rdparty'
 (cd src && make  top_distdir=../mesos-1.10.0 distdir=../mesos-1.10.0/src \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[2]: Entering directory '/tmp/SRC/build/src'
../3rdparty/protobuf-3.5.0/src/protoc -I../../include -I../../src -I../3rdparty/csi-0.2.0 -I../3rdparty/csi-1.1.0 -I../3rdparty/protobuf-3.5.0/src  --cpp_out=. ../../src/tests/common/http_tests.proto
/bin/bash: ../3rdparty/protobuf-3.5.0/src/protoc: No such file or directory
Makefile:15651: recipe for target 'tests/common/http_tests.pb.cc' failed
make[2]: *** [tests/common/http_tests.pb.cc] Error 127
make[2]: Leaving directory '/tmp/SRC/build/src'
Makefile:885: recipe for target 'distdir' failed
make[1]: *** [distdir] Error 1
make[1]: Leaving directory '/tmp/SRC/build'
Makefile:984: recipe for target 'dist' failed
make: *** [dist] Error 2
Untagged: mesos/mesos-build:ubuntu-16.04
Untagged: mesos/mesos-build@sha256:fa967cbcfb44f55708a3cbc87f245c6d29dd891464db558af56a03ee321526bb
Deleted: sha256:e895c0531b9a9a288fabe479a49f7059aed83645351ac99ec2ea2616822c9f97
Deleted: sha256:09119b0b314a69ba6ec6251f2e89f4199fe1e874c84f9abf07dcbd23dbc3f1c1
Deleted: sha256:562fe6af5a3883058be9c784a839392215ed4185a21d21f1d99df0d17f3ae6e7
Deleted: sha256:0dcdfbe322a4f332f73ac70905d557300ec6dae3bd19586758772c750c7b4a19
Deleted: sha256:c66652d605f01094e2db53d62505dbd524e076d5aa69b89d5e620003803eb149
Deleted: sha256:a2cf79bfb9593c05fd7142ddb49afef77ea4ad5d2464e841f09fe62ffee396e0
Deleted: sha256:f7a904214b390f39d98573882f631dc908df8b2b540cf04e1062f8182c1efffd
Deleted: sha256:ce1f6fcaa83dfce189d76e08f184085732eab4eeb2562d2399953958405c5bec
Deleted: sha256:cce92fda689ab9033f0b8db214bc63edd1ae3e05831a0f3a9418976d7dc7ccdd
Deleted: sha256:d22094bbd65447c59a42c580eaa3a44cee9cd855f00905f59409be21bcefc745
Deleted: sha256:b8976847450013f3eb5e9a81a5778f73ed7bef67e6393049712ef17102b4b7b7
Deleted: sha256:b8c891f0ffec910a12757d733b178e3f62d81dbbde2b31d3b754071c416108ed

Full log: https://builds.apache.org/job/Mesos-Reviewbot-Linux/8856/console

- Mesos Reviewbot


On April 15, 2020, 2:28 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated April 15, 2020, 2:28 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/1/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.

> On April 17, 2020, 3:40 p.m., Andrei Sekretenko wrote:
> > Btw, why did you mark this patch as depending on https://reviews.apache.org/r/72364/ ? 
> > 
> > I would say this is the other way round: after this one is committed, I'll be able to properly use the offer in the test in r72364.

Whoops sorry, I intended to remove the dependency before submitting.


- Greg


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220348
-----------------------------------------------------------


On April 17, 2020, 6:03 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated April 17, 2020, 6:03 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/1/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Andrei Sekretenko <as...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220348
-----------------------------------------------------------


Ship it!




Btw, why did you mark this patch as depending on https://reviews.apache.org/r/72364/ ? 

I would say this is the other way round: after this one is committed, I'll be able to properly use the offer in the test in r72364.

- Andrei Sekretenko


On April 15, 2020, 9:28 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated April 15, 2020, 9:28 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/1/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Andrei Sekretenko <as...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220641
-----------------------------------------------------------


Ship it!




Ship It!

- Andrei Sekretenko


On May 5, 2020, 5:52 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated May 5, 2020, 5:52 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
> and its functionality is moved into the test
> `SlaveTest.DrainAgentKillsRunningTask`. The running task in
> the latter test allows us to verify agent API outputs both
> before and after the task's terminal update is acknowledged.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
>   src/tests/slave_tests.cpp 5ad04b2fc494cb1120d2dc68a8cd40e5a27c129e 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/4/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated May 6, 2020, 12:24 a.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.

The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
and its functionality is moved into the test
`SlaveTest.DrainAgentKillsRunningTask`. The running task in
the latter test allows us to verify agent API outputs both
before and after the task's terminal update is acknowledged.


Diffs
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
  src/tests/slave_tests.cpp 5ad04b2fc494cb1120d2dc68a8cd40e5a27c129e 


Diff: https://reviews.apache.org/r/72368/diff/4/


Testing (updated)
-------

This patch is tested in https://reviews.apache.org/r/72364/.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated May 5, 2020, 5:52 p.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description (updated)
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.

The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
and its functionality is moved into the test
`SlaveTest.DrainAgentKillsRunningTask`. The running task in
the latter test allows us to verify agent API outputs both
before and after the task's terminal update is acknowledged.


Diffs
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
  src/tests/slave_tests.cpp 5ad04b2fc494cb1120d2dc68a8cd40e5a27c129e 


Diff: https://reviews.apache.org/r/72368/diff/4/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated May 5, 2020, 5:51 p.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description (updated)
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.


Diffs (updated)
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
  src/tests/slave_tests.cpp 5ad04b2fc494cb1120d2dc68a8cd40e5a27c129e 


Diff: https://reviews.apache.org/r/72368/diff/4/

Changes: https://reviews.apache.org/r/72368/diff/3-4/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.

> On May 5, 2020, 12:27 p.m., Andrei Sekretenko wrote:
> > src/tests/slave_tests.cpp
> > Lines 12227-12228 (patched)
> > <https://reviews.apache.org/r/72368/diff/3/?file=2230178#file2230178line12324>
> >
> >     I'm not sure if these assertions add any value: these properties of the V1 API are not specific to agent draining and are covered by corresponding tests. Maybe we can just remove them to improve readability?

Good call, removed.


> On May 5, 2020, 12:27 p.m., Andrei Sekretenko wrote:
> > src/tests/slave_tests.cpp
> > Lines 12229 (patched)
> > <https://reviews.apache.org/r/72368/diff/3/?file=2230178#file2230178line12326>
> >
> >     Given that `drainConfig` in the `EXPECT_EQ` below is different from the default value of `DrainConfig`, isn't this assertion redundant as well?

I think it's a bit better to explicitly assert here that the field has been set, since not doing so would require the reader to know something about the field's default value in order to reason about this code.


- Greg


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220634
-----------------------------------------------------------


On May 5, 2020, 5:52 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated May 5, 2020, 5:52 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
> and its functionality is moved into the test
> `SlaveTest.DrainAgentKillsRunningTask`. The running task in
> the latter test allows us to verify agent API outputs both
> before and after the task's terminal update is acknowledged.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
>   src/tests/slave_tests.cpp 5ad04b2fc494cb1120d2dc68a8cd40e5a27c129e 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/4/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Andrei Sekretenko <as...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220634
-----------------------------------------------------------


Fix it, then Ship it!





src/tests/slave_tests.cpp
Lines 12227-12228 (patched)
<https://reviews.apache.org/r/72368/#comment309201>

    I'm not sure if these assertions add any value: these properties of the V1 API are not specific to agent draining and are covered by corresponding tests. Maybe we can just remove them to improve readability?



src/tests/slave_tests.cpp
Lines 12229 (patched)
<https://reviews.apache.org/r/72368/#comment309202>

    Given that `drainConfig` in the `EXPECT_EQ` below is different from the default value of `DrainConfig`, isn't this assertion redundant as well?



src/tests/slave_tests.cpp
Lines 12252 (patched)
<https://reviews.apache.org/r/72368/#comment309203>

    The purpose of waiting for `removeFramework` doesn't look obvious at all - can you also explain it in this comment or nearby?


- Andrei Sekretenko


On May 5, 2020, 5:22 a.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated May 5, 2020, 5:22 a.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
> and its functionality is moved into the test
> `SlaveTest.DrainAgentKillsRunningTask`. The running task in
> the latter test allows us to verify agent API outputs both
> before and after the task's terminal update is acknowledged.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
>   src/tests/slave_tests.cpp 6b264d067accb9800ba75ea8f5d27c1e4c5593db 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/3/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated May 5, 2020, 5:22 a.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.

The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
and its functionality is moved into the test
`SlaveTest.DrainAgentKillsRunningTask`. The running task in
the latter test allows us to verify agent API outputs both
before and after the task's terminal update is acknowledged.


Diffs (updated)
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
  src/tests/slave_tests.cpp 6b264d067accb9800ba75ea8f5d27c1e4c5593db 


Diff: https://reviews.apache.org/r/72368/diff/3/

Changes: https://reviews.apache.org/r/72368/diff/2-3/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated May 5, 2020, 5:10 a.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description (updated)
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.

The `SlaveTest.DrainInfoInAPIOutputs` test is also removed
and its functionality is moved into the test
`SlaveTest.DrainAgentKillsRunningTask`. The running task in
the latter test allows us to verify agent API outputs both
before and after the task's terminal update is acknowledged.


Diffs (updated)
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
  src/tests/slave_tests.cpp 6b264d067accb9800ba75ea8f5d27c1e4c5593db 


Diff: https://reviews.apache.org/r/72368/diff/2/

Changes: https://reviews.apache.org/r/72368/diff/1-2/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Andrei Sekretenko <as...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220377
-----------------------------------------------------------


Fix it, then Ship it!





src/slave/slave.cpp
Lines 1077 (patched)
<https://reviews.apache.org/r/72368/#comment308736>

    Please make sure that you have dealt with `SlaveTest.DrainInfoInAPIOutputs`, which expects an empty agent to be draining.
    
    I've seen this at some point but completely forgot about that; thankfully, this test showed up in the CI results.


- Andrei Sekretenko


On April 17, 2020, 6:04 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated April 17, 2020, 6:04 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/1/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/#review220354
-----------------------------------------------------------



Bad patch!

Reviews applied: [72368]

Failed command: ['bash', '-c', "set -o pipefail; export OS='ubuntu:16.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/jenkins/buildbot.sh 2>&1 | tee build_72368"]

Error:
...<truncated>...
1/slave(1243)/api/v1/resource_provider
I0417 20:08:45.543876  3078 process.cpp:3671] Handling HTTP event for process 'slave(1243)' with path: '/slave(1243)/api/v1/resource_provider'
I0417 20:08:45.546689  3083 hierarchical.cpp:1853] Performed allocation for 1 agents in 1.043737ms
I0417 20:08:45.547475  3084 master.cpp:9485] Sending offers [ ec1af400-0652-47dc-8154-65cae0ec4668-O3 ] to framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:45.548245  3084 sched.cpp:934] Scheduler::resourceOffers took 101674ns
I0417 20:08:45.550959  3081 process.cpp:3671] Handling HTTP event for process 'master' with path: '/master/api/v1'
I0417 20:08:45.552739  3079 http.cpp:1436] HTTP POST for /master/api/v1 from 172.17.0.2:42466
I0417 20:08:45.553037  3079 http.cpp:277] Processing call CREATE_VOLUMES
I0417 20:08:45.554137  3079 master.cpp:3705] Authorizing principal 'test-principal' to perform action CREATE_VOLUME on object {"value":"storage/default-role","resource":{"provider_id":{"value":"dd37db72-ac43-4b0c-9ca2-20bc1b234d1d"},"name":"disk","type":"SCALAR","scalar":{"value":2048.0},"reservations":[{"type":"DYNAMIC","role":"storage"},{"type":"DYNAMIC","role":"storage/default-role","principal":"test-principal"}],"disk":{"persistence":{"id":"b851db44-97a0-4edf-ac0a-2175d5ff70bb","principal":"test-principal"},"volume":{"mode":"RW","container_path":"volume"},"source":{"type":"MOUNT","mount":{"root":"./csi/org.apache.mesos.csi.test/local/mounts"},"vendor":"org.apache.mesos.csi.test.local","id":"/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c","profile":"test"}}}}
I0417 20:08:45.555987  3085 sched.cpp:960] Rescinded offer ec1af400-0652-47dc-8154-65cae0ec4668-O3
I0417 20:08:45.556063  3085 sched.cpp:971] Scheduler::offerRescinded took 21563ns
I0417 20:08:45.556623  3084 hierarchical.cpp:1576] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test)]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test)]:2048, offered or allocated: {}) on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 from framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:45.556998  3071 master.cpp:11784] Removing offer ec1af400-0652-47dc-8154-65cae0ec4668-O3
I0417 20:08:45.558696  3084 hierarchical.cpp:1625] Framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 filtered agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 for 5secs
I0417 20:08:45.561342  3077 master.cpp:11649] Sending operation '' (uuid: d8fbd4eb-142c-4b80-a540-fc42ee17b5e8) to agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 at slave(1243)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:45.562094  3081 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:45.565274  3075 provider.cpp:498] Received APPLY_OPERATION event
I0417 20:08:45.565318  3075 provider.cpp:1351] Received CREATE operation '' (uuid: d8fbd4eb-142c-4b80-a540-fc42ee17b5e8)
I0417 20:08:45.569677  3080 master.cpp:5533] Processing REVIVE call for framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:45.570154  3073 hierarchical.cpp:1721] Unsuppressed offers and cleared filters for roles { storage/default-role } of framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:45.571521  3073 hierarchical.cpp:1853] Performed allocation for 1 agents in 1.16854ms
I0417 20:08:45.571988  3073 hierarchical.cpp:1853] Performed allocation for 1 agents in 215591ns
I0417 20:08:45.572480  3072 master.cpp:9485] Sending offers [ ec1af400-0652-47dc-8154-65cae0ec4668-O4 ] to framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:45.573199  3074 sched.cpp:934] Scheduler::resourceOffers took 99374ns
I0417 20:08:45.585371  3085 http.cpp:1436] HTTP POST for /slave(1243)/api/v1/resource_provider from 172.17.0.2:42458
I0417 20:08:45.586519  3083 slave.cpp:8714] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 88dd6059-795d-4933-82b0-f6db4b2b5600) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I0417 20:08:45.586731  3083 slave.cpp:9167] Updating the state of operation with no ID (uuid: 88dd6059-795d-4933-82b0-f6db4b2b5600) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I0417 20:08:45.586793  3083 slave.cpp:8921] Forwarding status update of operation with no ID (operation_uuid: 88dd6059-795d-4933-82b0-f6db4b2b5600) for an operator API call
I0417 20:08:45.587167  3076 master.cpp:11301] Updating the state of operation '' (uuid: 88dd6059-795d-4933-82b0-f6db4b2b5600) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I0417 20:08:45.587764  3080 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:45.677860  3083 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: 12c3b6f4-83ca-497b-82d5-61fd3f2b8e86) for operation UUID d8fbd4eb-142c-4b80-a540-fc42ee17b5e8 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:45.677932  3083 status_update_manager_process.hpp:414] Creating operation status update stream d8fbd4eb-142c-4b80-a540-fc42ee17b5e8 checkpoint=true
I0417 20:08:45.677938  3075 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I0417 20:08:45.678161  3083 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: 12c3b6f4-83ca-497b-82d5-61fd3f2b8e86) for operation UUID d8fbd4eb-142c-4b80-a540-fc42ee17b5e8 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:45.729996  3083 status_update_manager_process.hpp:528] Forwarding operation status update OPERATION_FINISHED (Status UUID: 12c3b6f4-83ca-497b-82d5-61fd3f2b8e86) for operation UUID d8fbd4eb-142c-4b80-a540-fc42ee17b5e8 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:45.730404  3083 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 44d70ab4-5cd2-415e-8177-6ec4a3a7be8f) for stream 88dd6059-795d-4933-82b0-f6db4b2b5600
I0417 20:08:45.730491  3083 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: 44d70ab4-5cd2-415e-8177-6ec4a3a7be8f) for operation UUID 88dd6059-795d-4933-82b0-f6db4b2b5600 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:45.731089  3076 http_connection.hpp:131] Sending UPDATE_OPERATION_STATUS call to http://172.17.0.2:37151/slave(1243)/api/v1/resource_provider
I0417 20:08:45.732040  3080 process.cpp:3671] Handling HTTP event for process 'slave(1243)' with path: '/slave(1243)/api/v1/resource_provider'
I0417 20:08:45.763470  3083 status_update_manager_process.hpp:490] Cleaning up operation status update stream 88dd6059-795d-4933-82b0-f6db4b2b5600
I0417 20:08:45.773573  3078 http.cpp:1436] HTTP POST for /slave(1243)/api/v1/resource_provider from 172.17.0.2:42458
I0417 20:08:45.774724  3081 slave.cpp:8714] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: d8fbd4eb-142c-4b80-a540-fc42ee17b5e8) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I0417 20:08:45.774926  3081 slave.cpp:9167] Updating the state of operation with no ID (uuid: d8fbd4eb-142c-4b80-a540-fc42ee17b5e8) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I0417 20:08:45.774994  3081 slave.cpp:8921] Forwarding status update of operation with no ID (operation_uuid: d8fbd4eb-142c-4b80-a540-fc42ee17b5e8) for an operator API call
I0417 20:08:45.775399  3084 master.cpp:11301] Updating the state of operation '' (uuid: d8fbd4eb-142c-4b80-a540-fc42ee17b5e8) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I0417 20:08:45.775899  3077 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:45.815140  3072 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I0417 20:08:45.815307  3081 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 12c3b6f4-83ca-497b-82d5-61fd3f2b8e86) for stream d8fbd4eb-142c-4b80-a540-fc42ee17b5e8
I0417 20:08:45.815387  3081 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: 12c3b6f4-83ca-497b-82d5-61fd3f2b8e86) for operation UUID d8fbd4eb-142c-4b80-a540-fc42ee17b5e8 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:45.901582  3081 status_update_manager_process.hpp:490] Cleaning up operation status update stream d8fbd4eb-142c-4b80-a540-fc42ee17b5e8
I0417 20:08:45.955536  3084 process.cpp:3671] Handling HTTP event for process 'master' with path: '/master/api/v1'
I0417 20:08:45.957799  3079 http.cpp:1436] HTTP POST for /master/api/v1 from 172.17.0.2:42468
I0417 20:08:45.958112  3079 http.cpp:277] Processing call DESTROY_VOLUMES
I0417 20:08:45.958931  3079 master.cpp:3705] Authorizing principal 'test-principal' to perform action DESTROY_VOLUME on object {"value":"test-principal","resource":{"provider_id":{"value":"dd37db72-ac43-4b0c-9ca2-20bc1b234d1d"},"name":"disk","type":"SCALAR","scalar":{"value":2048.0},"reservations":[{"type":"DYNAMIC","role":"storage"},{"type":"DYNAMIC","role":"storage/default-role","principal":"test-principal"}],"disk":{"persistence":{"id":"b851db44-97a0-4edf-ac0a-2175d5ff70bb","principal":"test-principal"},"volume":{"mode":"RW","container_path":"volume"},"source":{"type":"MOUNT","mount":{"root":"./csi/org.apache.mesos.csi.test/local/mounts"},"vendor":"org.apache.mesos.csi.test.local","id":"/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c","profile":"test"}}}}
I0417 20:08:45.960378  3075 sched.cpp:960] Rescinded offer ec1af400-0652-47dc-8154-65cae0ec4668-O4
I0417 20:08:45.960464  3075 sched.cpp:971] Scheduler::offerRescinded took 27275ns
I0417 20:08:45.960948  3074 master.cpp:11784] Removing offer ec1af400-0652-47dc-8154-65cae0ec4668-O4
I0417 20:08:45.961266  3081 hierarchical.cpp:1576] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test),b851db44-97a0-4edf-ac0a-2175d5ff70bb:volume]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test),b851db44-97a0-4edf-ac0a-2175d5ff70bb:volume]:2048, offered or allocated: {}) on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 from framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:45.963676  3081 hierarchical.cpp:1625] Framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 filtered agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 for 5secs
I0417 20:08:45.967262  3077 master.cpp:11649] Sending operation '' (uuid: 38ff6f69-f943-4469-8b4b-567e67db5c65) to agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 at slave(1243)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:45.967996  3084 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:45.971371  3083 provider.cpp:498] Received APPLY_OPERATION event
I0417 20:08:45.971424  3083 provider.cpp:1351] Received DESTROY operation '' (uuid: 38ff6f69-f943-4469-8b4b-567e67db5c65)
I0417 20:08:45.976856  3076 hierarchical.cpp:1853] Performed allocation for 1 agents in 1.797303ms
I0417 20:08:45.977612  3077 master.cpp:9485] Sending offers [ ec1af400-0652-47dc-8154-65cae0ec4668-O5 ] to framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:45.978328  3073 sched.cpp:934] Scheduler::resourceOffers took 82460ns
I0417 20:08:46.098735  3086 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: 860d3418-7314-47c7-af83-b7a2d85281f3) for operation UUID 38ff6f69-f943-4469-8b4b-567e67db5c65 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.098816  3086 status_update_manager_process.hpp:414] Creating operation status update stream 38ff6f69-f943-4469-8b4b-567e67db5c65 checkpoint=true
I0417 20:08:46.099149  3086 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: 860d3418-7314-47c7-af83-b7a2d85281f3) for operation UUID 38ff6f69-f943-4469-8b4b-567e67db5c65 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.179642  3086 status_update_manager_process.hpp:528] Forwarding operation status update OPERATION_FINISHED (Status UUID: 860d3418-7314-47c7-af83-b7a2d85281f3) for operation UUID 38ff6f69-f943-4469-8b4b-567e67db5c65 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.180804  3071 http_connection.hpp:131] Sending UPDATE_OPERATION_STATUS call to http://172.17.0.2:37151/slave(1243)/api/v1/resource_provider
I0417 20:08:46.182118  3079 process.cpp:3671] Handling HTTP event for process 'slave(1243)' with path: '/slave(1243)/api/v1/resource_provider'
I0417 20:08:46.185715  3078 process.cpp:3671] Handling HTTP event for process 'master' with path: '/master/api/v1'
I0417 20:08:46.187979  3073 http.cpp:1436] HTTP POST for /master/api/v1 from 172.17.0.2:42470
I0417 20:08:46.188256  3073 http.cpp:277] Processing call UNRESERVE_RESOURCES
I0417 20:08:46.189232  3073 master.cpp:3705] Authorizing principal 'test-principal' to perform action UNRESERVE_RESOURCES on object {"value":"test-principal","resource":{"provider_id":{"value":"dd37db72-ac43-4b0c-9ca2-20bc1b234d1d"},"name":"disk","type":"SCALAR","scalar":{"value":2048.0},"reservations":[{"type":"DYNAMIC","role":"storage"},{"type":"DYNAMIC","role":"storage/default-role","principal":"test-principal"}],"disk":{"source":{"type":"MOUNT","mount":{"root":"./csi/org.apache.mesos.csi.test/local/mounts"},"vendor":"org.apache.mesos.csi.test.local","id":"/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c","profile":"test"}}}}
I0417 20:08:46.190575  3084 sched.cpp:960] Rescinded offer ec1af400-0652-47dc-8154-65cae0ec4668-O5
I0417 20:08:46.190651  3084 sched.cpp:971] Scheduler::offerRescinded took 21672ns
I0417 20:08:46.191105  3085 hierarchical.cpp:1576] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test)]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test)]:2048, offered or allocated: {}) on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 from framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:46.191360  3082 master.cpp:11784] Removing offer ec1af400-0652-47dc-8154-65cae0ec4668-O5
I0417 20:08:46.192574  3085 hierarchical.cpp:1625] Framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 filtered agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 for 5secs
I0417 20:08:46.195493  3072 master.cpp:11649] Sending operation '' (uuid: 8810d2ef-02fc-4a65-a4a0-378f5861cc32) to agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 at slave(1243)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:46.196256  3074 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:46.199508  3075 provider.cpp:498] Received APPLY_OPERATION event
I0417 20:08:46.199558  3075 provider.cpp:1351] Received UNRESERVE operation '' (uuid: 8810d2ef-02fc-4a65-a4a0-378f5861cc32)
I0417 20:08:46.206406  3072 hierarchical.cpp:1853] Performed allocation for 1 agents in 1.277014ms
I0417 20:08:46.207190  3078 master.cpp:9485] Sending offers [ ec1af400-0652-47dc-8154-65cae0ec4668-O6 ] to framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:46.207926  3078 sched.cpp:934] Scheduler::resourceOffers took 139075ns
I0417 20:08:46.225637  3073 http.cpp:1436] HTTP POST for /slave(1243)/api/v1/resource_provider from 172.17.0.2:42458
I0417 20:08:46.226799  3084 slave.cpp:8714] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 38ff6f69-f943-4469-8b4b-567e67db5c65) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I0417 20:08:46.227046  3084 slave.cpp:9167] Updating the state of operation with no ID (uuid: 38ff6f69-f943-4469-8b4b-567e67db5c65) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I0417 20:08:46.227124  3084 slave.cpp:8921] Forwarding status update of operation with no ID (operation_uuid: 38ff6f69-f943-4469-8b4b-567e67db5c65) for an operator API call
I0417 20:08:46.227516  3086 master.cpp:11301] Updating the state of operation '' (uuid: 38ff6f69-f943-4469-8b4b-567e67db5c65) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I0417 20:08:46.228113  3085 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:46.291939  3084 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: 32110269-6dd9-4eea-ac30-1800ccf7826c) for operation UUID 8810d2ef-02fc-4a65-a4a0-378f5861cc32 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.292009  3084 status_update_manager_process.hpp:414] Creating operation status update stream 8810d2ef-02fc-4a65-a4a0-378f5861cc32 checkpoint=true
I0417 20:08:46.292007  3075 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I0417 20:08:46.292341  3084 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: 32110269-6dd9-4eea-ac30-1800ccf7826c) for operation UUID 8810d2ef-02fc-4a65-a4a0-378f5861cc32 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.378366  3084 status_update_manager_process.hpp:528] Forwarding operation status update OPERATION_FINISHED (Status UUID: 32110269-6dd9-4eea-ac30-1800ccf7826c) for operation UUID 8810d2ef-02fc-4a65-a4a0-378f5861cc32 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.378671  3084 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 860d3418-7314-47c7-af83-b7a2d85281f3) for stream 38ff6f69-f943-4469-8b4b-567e67db5c65
I0417 20:08:46.378741  3084 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: 860d3418-7314-47c7-af83-b7a2d85281f3) for operation UUID 38ff6f69-f943-4469-8b4b-567e67db5c65 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.379142  3079 http_connection.hpp:131] Sending UPDATE_OPERATION_STATUS call to http://172.17.0.2:37151/slave(1243)/api/v1/resource_provider
I0417 20:08:46.380118  3080 process.cpp:3671] Handling HTTP event for process 'slave(1243)' with path: '/slave(1243)/api/v1/resource_provider'
I0417 20:08:46.421425  3074 http.cpp:1436] HTTP POST for /slave(1243)/api/v1/resource_provider from 172.17.0.2:42458
I0417 20:08:46.422535  3081 slave.cpp:8714] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 8810d2ef-02fc-4a65-a4a0-378f5861cc32) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I0417 20:08:46.422734  3081 slave.cpp:9167] Updating the state of operation with no ID (uuid: 8810d2ef-02fc-4a65-a4a0-378f5861cc32) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I0417 20:08:46.422786  3081 slave.cpp:8921] Forwarding status update of operation with no ID (operation_uuid: 8810d2ef-02fc-4a65-a4a0-378f5861cc32) for an operator API call
I0417 20:08:46.423153  3075 master.cpp:11301] Updating the state of operation '' (uuid: 8810d2ef-02fc-4a65-a4a0-378f5861cc32) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I0417 20:08:46.423720  3079 slave.cpp:4557] Ignoring new checkpointed resources and operations identical to the current version
I0417 20:08:46.425357  3071 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I0417 20:08:46.428515  3084 status_update_manager_process.hpp:490] Cleaning up operation status update stream 38ff6f69-f943-4469-8b4b-567e67db5c65
I0417 20:08:46.428773  3084 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 32110269-6dd9-4eea-ac30-1800ccf7826c) for stream 8810d2ef-02fc-4a65-a4a0-378f5861cc32
I0417 20:08:46.428879  3084 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: 32110269-6dd9-4eea-ac30-1800ccf7826c) for operation UUID 8810d2ef-02fc-4a65-a4a0-378f5861cc32 on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.462370  3084 status_update_manager_process.hpp:490] Cleaning up operation status update stream 8810d2ef-02fc-4a65-a4a0-378f5861cc32
I0417 20:08:46.505476  3079 master.cpp:1407] Framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151 disconnected
I0417 20:08:46.505529  3079 master.cpp:3326] Deactivating framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:46.505967  3080 hierarchical.cpp:813] Deactivated framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:46.506386  3071 slave.cpp:1000] Agent terminating
I0417 20:08:46.506518  3080 hierarchical.cpp:1576] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test)]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_YpaXWN/2GB-0e489544-23aa-4c74-b223-d45e52fbe22c,test)]:2048, offered or allocated: {}) on agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 from framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:46.506732  3079 master.cpp:11784] Removing offer ec1af400-0652-47dc-8154-65cae0ec4668-O6
I0417 20:08:46.506832  3079 master.cpp:3310] Disconnecting framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:46.507027  3079 master.cpp:1422] Giving framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151 0ns to failover
I0417 20:08:46.507594  3071 manager.cpp:127] Terminating resource provider dd37db72-ac43-4b0c-9ca2-20bc1b234d1d
I0417 20:08:46.508003  3071 master.cpp:1292] Agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 at slave(1243)@172.17.0.2:37151 (2f50ab4c41c0) disconnected
I0417 20:08:46.508030  3071 master.cpp:3361] Disconnecting agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 at slave(1243)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:46.508103  3071 master.cpp:3380] Deactivating agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 at slave(1243)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:46.508327  3071 master.cpp:9269] Framework failover timeout, removing framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
I0417 20:08:46.508369  3071 master.cpp:10285] Removing framework ec1af400-0652-47dc-8154-65cae0ec4668-0000 (default) at scheduler-969dfec5-d98d-4866-9739-1ce02380bf2f@172.17.0.2:37151
E0417 20:08:46.508667  3081 http_connection.hpp:449] End-Of-File received
I0417 20:08:46.508688  3080 hierarchical.cpp:1156] Agent ec1af400-0652-47dc-8154-65cae0ec4668-S0 deactivated
I0417 20:08:46.508930  3086 hierarchical.cpp:1777] Allocation paused
I0417 20:08:46.509341  3081 http_connection.hpp:217] Re-detecting endpoint
I0417 20:08:46.509558  3086 hierarchical.cpp:757] Removed framework ec1af400-0652-47dc-8154-65cae0ec4668-0000
I0417 20:08:46.509673  3086 hierarchical.cpp:1787] Allocation resumed
I0417 20:08:46.509965  3081 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I0417 20:08:46.510053  3081 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I0417 20:08:46.510077  3085 provider.cpp:488] Disconnected from resource provider manager
I0417 20:08:46.510155  3081 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:37151/slave(1243)/api/v1/resource_provider
I0417 20:08:46.510246  3082 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:46.511797  3077 containerizer.cpp:2623] Destroying container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE in RUNNING state
I0417 20:08:46.511898  3077 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from RUNNING to DESTROYING after 2.487750016secs
I0417 20:08:46.512369  3077 launcher.cpp:161] Asked to destroy container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:46.513473  3076 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:37151/slave(1243)/api/v1/resource_provider
I0417 20:08:46.514199  3078 provider.cpp:476] Connected to resource provider manager
I0417 20:08:46.514722  3086 hierarchical.cpp:1853] Performed allocation for 1 agents in 219926ns
I0417 20:08:46.514909  3073 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:37151/slave(1243)/api/v1/resource_provider
E0417 20:08:46.515538  3073 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Cannot process 'SUBSCRIBE' call as the driver is in state SUBSCRIBING
I0417 20:08:46.515867  3087 process.cpp:2781] Returning '404 Not Found' for '/slave(1243)/api/v1/resource_provider'
E0417 20:08:46.517081  3082 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Received '404 Not Found' ()
I0417 20:08:46.566203  3072 hierarchical.cpp:1853] Performed allocation for 1 agents in 192879ns
I0417 20:08:46.616559  3081 containerizer.cpp:3159] Container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE has exited
I0417 20:08:46.618165  3084 hierarchical.cpp:1853] Performed allocation for 1 agents in 198974ns
I0417 20:08:46.618358  3077 provisioner.cpp:652] Ignoring destroy request for unknown container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:46.621284  3075 container_daemon.cpp:189] Invoking post-stop hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.621556  3085 service_manager.cpp:723] Disconnected from endpoint 'unix:///tmp/mesos-csi-2G13L4/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:46.621884  3085 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.625033  3087 process.cpp:2781] Returning '404 Not Found' for '/slave(1243)/api/v1'
I0417 20:08:46.637209  3070 master.cpp:1140] Master terminating
I0417 20:08:46.638021  3083 hierarchical.cpp:1132] Removed all filters for agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
I0417 20:08:46.638046  3083 hierarchical.cpp:1008] Removed agent ec1af400-0652-47dc-8154-65cae0ec4668-S0
[       OK ] CSIVersion/StorageLocalResourceProviderTest.OperatorOperationsWithResourceProviderResources/v1 (2373 ms)
[ RUN      ] CSIVersion/StorageLocalResourceProviderTest.Update/v0
I0417 20:08:46.653872  3070 cluster.cpp:186] Creating default 'local' authorizer
I0417 20:08:46.658635  3083 master.cpp:443] Master 86205872-5b8f-448c-8c57-5d4e2e4e99eb (2f50ab4c41c0) started on 172.17.0.2:37151
I0417 20:08:46.658664  3083 master.cpp:446] Flags at startup: --acls="" --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="50ms" --allocator="hierarchical" --authenticate_agents="true" --authenticate_frameworks="true" --authenticate_http_frameworks="true" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authentication_v0_timeout="15secs" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/HikpZD/credentials" --filter_gpu_resources="true" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_operator_event_stream_subscribers="1000" --max_unreachable_tasks_per_framework="1000" --memory_profiling="false" --min_allocatable_resources="cpus:0.01|mem:32" --port="5050" --publish_per_framework_metrics="true" --quiet="false" --recovery_agent_removal_limit="100%" --registry="in_memory" --registry_fetch_timeout="1mins" --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400" --registry_store_timeout="100secs" --registry_strict="false" --require_agent_domain="false" --role_sorter="drf" --root_submissions="true" --version="false" --webui_dir="/tmp/SRC/build/mesos-1.10.0/_inst/share/mesos/webui" --work_dir="/tmp/HikpZD/master" --zk_session_timeout="10secs"
I0417 20:08:46.658975  3083 master.cpp:495] Master only allowing authenticated frameworks to register
I0417 20:08:46.658989  3083 master.cpp:501] Master only allowing authenticated agents to register
I0417 20:08:46.658998  3083 master.cpp:507] Master only allowing authenticated HTTP frameworks to register
I0417 20:08:46.659006  3083 credentials.hpp:37] Loading credentials for authentication from '/tmp/HikpZD/credentials'
I0417 20:08:46.659265  3083 master.cpp:551] Using default 'crammd5' authenticator
I0417 20:08:46.659482  3083 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
I0417 20:08:46.659715  3083 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
I0417 20:08:46.660025  3083 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
I0417 20:08:46.660259  3083 master.cpp:632] Authorization enabled
I0417 20:08:46.660671  3084 hierarchical.cpp:567] Initialized hierarchical allocator process
I0417 20:08:46.660758  3071 whitelist_watcher.cpp:77] No whitelist given
I0417 20:08:46.664078  3085 master.cpp:2165] Elected as the leading master!
I0417 20:08:46.664116  3085 master.cpp:1661] Recovering from registrar
I0417 20:08:46.664376  3076 registrar.cpp:339] Recovering registrar
I0417 20:08:46.665243  3076 registrar.cpp:383] Successfully fetched the registry (0B) in 0ns
I0417 20:08:46.665398  3076 registrar.cpp:487] Applied 1 operations in 51149ns; attempting to update the registry
I0417 20:08:46.666160  3076 registrar.cpp:544] Successfully updated the registry in 0ns
I0417 20:08:46.666337  3076 registrar.cpp:416] Successfully recovered registrar
I0417 20:08:46.666889  3082 hierarchical.cpp:606] Skipping recovery of hierarchical allocator: nothing to recover
I0417 20:08:46.666875  3084 master.cpp:1814] Recovered 0 agents from the registry (144B); allowing 10mins for agents to reregister
W0417 20:08:46.672979  3070 process.cpp:2877] Attempted to spawn already running process files@172.17.0.2:37151
I0417 20:08:46.674422  3070 containerizer.cpp:317] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
W0417 20:08:46.675021  3070 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges
W0417 20:08:46.675048  3070 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges
W0417 20:08:46.675065  3070 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges
I0417 20:08:46.675093  3070 provisioner.cpp:294] Using default backend 'copy'
I0417 20:08:46.677762  3070 cluster.cpp:533] Creating default 'local' authorizer
I0417 20:08:46.680213  3072 slave.cpp:279] Mesos agent started on (1244)@172.17.0.2:37151
I0417 20:08:46.680243  3072 slave.cpp:280] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/HikpZD/JJUp4k/store/appc" --authenticate_http_readonly="true" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authentication_timeout_max="1mins" --authentication_timeout_min="5secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_destroy_timeout="1mins" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/HikpZD/JJUp4k/credential" --default_role="*" --disallow_sharing_agent_ipc_namespace="false" --disallow_sharing_agent_pid_namespace="false" --disk_profile_adaptor="org_apache_mesos_UriDiskProfileAdaptor" --disk_watch_interval="1mins" --docker="docker" --docker_ignore_runtime="false" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/HikpZD/JJUp4k/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --docker_volume_chown="false" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/HikpZD/JJUp4k/fetch" --fetcher_cache_size="2GB" --fetcher_stall_timeout="1mins" --frameworks_home="/tmp/HikpZD/JJUp4k/frameworks" --gc_delay="1weeks" --gc_disk_headroom="0.1" --gc_non_executor_container_sandboxes="false" --help="false" --hostname_lookup="true" --http_command_executor="false" --http_credentials="/tmp/HikpZD/JJUp4k/http_credentials" --http_executor_domain_sockets="false" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/tmp/SRC/build/mesos-1.10.0/_build/sub/src" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --memory_profiling="false" --network_cni_metrics="true" --network_cni_root_dir_persist="false" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resource_provider_config_dir="/tmp/HikpZD/resource_provider_configs" --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --runtime_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_5jWOMU" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu" --zk_session_timeout="10secs"
I0417 20:08:46.680706  3072 credentials.hpp:86] Loading credential for authentication from '/tmp/HikpZD/JJUp4k/credential'
I0417 20:08:46.680862  3072 slave.cpp:312] Agent using credential for: test-principal
I0417 20:08:46.680889  3072 credentials.hpp:37] Loading credentials for authentication from '/tmp/HikpZD/JJUp4k/http_credentials'
I0417 20:08:46.681161  3072 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly'
I0417 20:08:46.681645  3072 disk_profile_adaptor.cpp:82] Creating disk profile adaptor module 'org_apache_mesos_UriDiskProfileAdaptor'
I0417 20:08:46.683452  3083 uri_disk_profile_adaptor.cpp:305] Updated disk profile mapping to 1 active profiles
I0417 20:08:46.683588  3072 slave.cpp:627] Agent resources: [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
I0417 20:08:46.683990  3072 slave.cpp:635] Agent attributes: [  ]
I0417 20:08:46.684013  3072 slave.cpp:644] Agent hostname: 2f50ab4c41c0
I0417 20:08:46.684221  3075 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:46.684237  3081 task_status_update_manager.cpp:181] Pausing sending task status updates
I0417 20:08:46.686014  3086 state.cpp:68] Recovering state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/meta'
I0417 20:08:46.686324  3074 slave.cpp:7723] Finished recovering checkpointed state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/meta', beginning agent recovery
I0417 20:08:46.687064  3073 task_status_update_manager.cpp:207] Recovering task status update manager
I0417 20:08:46.687665  3076 containerizer.cpp:822] Recovering Mesos containers
I0417 20:08:46.688144  3076 containerizer.cpp:1162] Recovering isolators
I0417 20:08:46.689275  3077 containerizer.cpp:1201] Recovering provisioner
I0417 20:08:46.690182  3071 provisioner.cpp:518] Provisioner recovery complete
I0417 20:08:46.690516  3082 hierarchical.cpp:1853] Performed allocation for 0 agents in 95169ns
I0417 20:08:46.691200  3076 composing.cpp:343] Finished recovering all containerizers
I0417 20:08:46.691529  3083 slave.cpp:8205] Recovering executors
I0417 20:08:46.691676  3083 slave.cpp:8358] Finished recovery
I0417 20:08:46.692651  3071 task_status_update_manager.cpp:181] Pausing sending task status updates
I0417 20:08:46.692690  3086 slave.cpp:1471] New master detected at master@172.17.0.2:37151
I0417 20:08:46.692708  3075 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:46.692849  3086 slave.cpp:1536] Detecting new master
I0417 20:08:46.698310  3082 slave.cpp:1563] Authenticating with master master@172.17.0.2:37151
I0417 20:08:46.698415  3082 slave.cpp:1572] Using default CRAM-MD5 authenticatee
I0417 20:08:46.698792  3085 authenticatee.cpp:121] Creating new client SASL connection
I0417 20:08:46.699174  3085 master.cpp:9676] Authenticating slave(1244)@172.17.0.2:37151
I0417 20:08:46.699339  3072 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(2107)@172.17.0.2:37151
I0417 20:08:46.699721  3079 authenticator.cpp:98] Creating new server SASL connection
I0417 20:08:46.700109  3076 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
I0417 20:08:46.700152  3076 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
I0417 20:08:46.700291  3076 authenticator.cpp:204] Received SASL authentication start
I0417 20:08:46.700371  3076 authenticator.cpp:326] Authentication requires more steps
I0417 20:08:46.700549  3074 authenticatee.cpp:259] Received SASL authentication step
I0417 20:08:46.700732  3074 authenticator.cpp:232] Received SASL authentication step
I0417 20:08:46.700774  3074 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '2f50ab4c41c0' server FQDN: '2f50ab4c41c0' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
I0417 20:08:46.700791  3074 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
I0417 20:08:46.700845  3074 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
I0417 20:08:46.700876  3074 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '2f50ab4c41c0' server FQDN: '2f50ab4c41c0' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
I0417 20:08:46.700899  3074 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
I0417 20:08:46.700922  3074 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I0417 20:08:46.700959  3074 authenticator.cpp:318] Authentication success
I0417 20:08:46.701103  3073 authenticatee.cpp:299] Authentication success
I0417 20:08:46.701208  3077 master.cpp:9708] Successfully authenticated principal 'test-principal' at slave(1244)@172.17.0.2:37151
I0417 20:08:46.701287  3075 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(2107)@172.17.0.2:37151
I0417 20:08:46.701669  3071 slave.cpp:1663] Successfully authenticated with master master@172.17.0.2:37151
I0417 20:08:46.702168  3071 slave.cpp:2113] Will retry registration in 1.228533ms if necessary
I0417 20:08:46.702432  3081 master.cpp:6174] Received register agent message from slave(1244)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:46.702913  3081 master.cpp:3705] Authorizing principal 'test-principal' to perform action REGISTER_AGENT on ANY object
I0417 20:08:46.703541  3079 master.cpp:6241] Authorized registration of agent at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:46.703660  3079 master.cpp:6353] Registering agent at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0) with id 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0
I0417 20:08:46.704536  3076 registrar.cpp:487] Applied 1 operations in 296546ns; attempting to update the registry
I0417 20:08:46.705168  3083 slave.cpp:2113] Will retry registration in 18.066335ms if necessary
I0417 20:08:46.705364  3076 registrar.cpp:544] Successfully updated the registry in 742144ns
I0417 20:08:46.705421  3078 master.cpp:6167] Ignoring register agent message from slave(1244)@172.17.0.2:37151 (2f50ab4c41c0) as registration is already in progress
I0417 20:08:46.705597  3073 master.cpp:6401] Admitted agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:46.706483  3073 master.cpp:6446] Registered agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0) with cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I0417 20:08:46.706568  3075 slave.cpp:1696] Registered with master master@172.17.0.2:37151; given agent ID 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0
I0417 20:08:46.706744  3081 task_status_update_manager.cpp:188] Resuming sending task status updates
I0417 20:08:46.706740  3071 hierarchical.cpp:959] Added agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 (2f50ab4c41c0) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (offered or allocated: {})
I0417 20:08:46.707157  3075 slave.cpp:1731] Checkpointing SlaveInfo to '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/meta/slaves/86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0/slave.info'
I0417 20:08:46.707191  3071 hierarchical.cpp:1853] Performed allocation for 1 agents in 160079ns
I0417 20:08:46.707295  3080 status_update_manager_process.hpp:385] Resuming operation status update manager
I0417 20:08:46.708496  3075 slave.cpp:1783] Forwarding agent update {"operations":{},"resource_providers":{},"resource_version_uuid":{"value":"WAakmPOcSYGna8KMPEiv+Q=="},"slave_id":{"value":"86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0"},"update_oversubscribed_resources":false}
I0417 20:08:46.709280  3075 master.cpp:7584] Ignoring update on agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0) as it reports no changes
I0417 20:08:46.713639  3077 process.cpp:3671] Handling HTTP event for process 'slave(1244)' with path: '/slave(1244)/api/v1'
I0417 20:08:46.715052  3075 http.cpp:1436] HTTP POST for /slave(1244)/api/v1 from 172.17.0.2:42480
I0417 20:08:46.715574  3075 http.cpp:2828] Processing GET_CONTAINERS call
I0417 20:08:46.721940  3075 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.724624  3080 process.cpp:3671] Handling HTTP event for process 'slave(1244)' with path: '/slave(1244)/api/v1'
I0417 20:08:46.725847  3078 http.cpp:1436] HTTP POST for /slave(1244)/api/v1 from 172.17.0.2:42482
I0417 20:08:46.726532  3078 http.cpp:3289] Processing LAUNCH_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.727639  3076 http.cpp:3399] Creating sandbox '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.728528  3081 containerizer.cpp:1397] Starting container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:46.729172  3081 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from STARTING to PROVISIONING after 271104ns
I0417 20:08:46.729990  3081 containerizer.cpp:1575] Checkpointed ContainerConfig at '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_5jWOMU/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE/config'
I0417 20:08:46.730046  3081 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PROVISIONING to PREPARING after 877056ns
I0417 20:08:46.733429  3071 containerizer.cpp:2101] Launching 'mesos-containerizer' with flags '--help="false" --launch_info="{"command":{"arguments":["/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin","--api_version=v0","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_BA7Ed4","--available_capacity=0B","--volumes=","--forward=unix:///tmp/HikpZD/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin"},"environment":{"variables":[{"name":"MESOS_SANDBOX","type":"VALUE","value":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"},{"name":"CSI_ENDPOINT","type":"VALUE","value":"unix:///tmp/mesos-csi-xMl6C2/endpoint.sock"}]},"task_environment":{},"working_directory":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"}" --pipe_read="111" --pipe_write="112" --runtime_directory="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_5jWOMU/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE" --unshare_namespace_mnt="false"'
I0417 20:08:46.743770  3071 launcher.cpp:145] Forked child with pid '14677' for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.744956  3078 hierarchical.cpp:1853] Performed allocation for 1 agents in 209711ns
I0417 20:08:46.760084  3071 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PREPARING to ISOLATING after 30.03008ms
I0417 20:08:46.761399  3071 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from ISOLATING to FETCHING after 1.308928ms
I0417 20:08:46.761662  3071 fetcher.cpp:369] Starting to fetch URIs for container: org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE, directory: /tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_NdAevu/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:46.762756  3080 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from FETCHING to RUNNING after 1.332992ms
I0417 20:08:46.765053  3084 container_daemon.cpp:140] Invoking post-start hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:46.765185  3084 service_manager.cpp:703] Connecting to endpoint 'unix:///tmp/mesos-csi-xMl6C2/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:46.796815  3071 hierarchical.cpp:1853] Performed allocation for 1 agents in 209541ns
I0417 20:08:46.848311  3081 hierarchical.cpp:1853] Performed allocation for 1 agents in 169082ns
I0417 20:08:46.900115  3072 hierarchical.cpp:1853] Performed allocation for 1 agents in 286732ns
I0417 20:08:46.951196  3081 hierarchical.cpp:1853] Performed allocation for 1 agents in 154874ns
I0417 20:08:47.002813  3072 hierarchical.cpp:1853] Performed allocation for 1 agents in 122772ns
I0417 20:08:47.054575  3082 hierarchical.cpp:1853] Performed allocation for 1 agents in 152592ns
I0417 20:08:47.077095  3081 service_manager.cpp:545] Probing endpoint 'unix:///tmp/mesos-csi-xMl6C2/endpoint.sock' with CSI v1
I0417 20:08:47.080025  3073 service_manager.cpp:532] Probing endpoint 'unix:///tmp/mesos-csi-xMl6C2/endpoint.sock' with CSI v0
I0417 20:08:47.081353 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/Probe call
I0417 20:08:47.084103  3071 container_daemon.cpp:171] Waiting for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.086876 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/GetPluginCapabilities call
I0417 20:08:47.087606  3074 process.cpp:3671] Handling HTTP event for process 'slave(1244)' with path: '/slave(1244)/api/v1'
I0417 20:08:47.089046  3079 http.cpp:1436] HTTP POST for /slave(1244)/api/v1 from 172.17.0.2:42484
I0417 20:08:47.089587  3079 http.cpp:3513] Processing WAIT_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.090910 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/GetPluginInfo call
I0417 20:08:47.091217 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/GetPluginInfo call
I0417 20:08:47.093084  3083 v0_volume_manager.cpp:628] NODE_SERVICE loaded: {}
I0417 20:08:47.093641  3083 v0_volume_manager.cpp:628] CONTROLLER_SERVICE loaded: {}
I0417 20:08:47.096002 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ControllerGetCapabilities call
I0417 20:08:47.099897 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Node/NodeGetCapabilities call
I0417 20:08:47.103559 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Node/NodeGetId call
I0417 20:08:47.105995  3080 hierarchical.cpp:1853] Performed allocation for 1 agents in 172385ns
I0417 20:08:47.106088  3082 provider.cpp:676] Recovered resources '{}' and 0 operations for resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test'
I0417 20:08:47.106299  3076 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:47.106633  3079 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.109760  3075 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.110486  3071 provider.cpp:476] Connected to resource provider manager
I0417 20:08:47.111209  3082 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.112449  3081 process.cpp:3671] Handling HTTP event for process 'slave(1244)' with path: '/slave(1244)/api/v1/resource_provider'
I0417 20:08:47.114298  3077 http.cpp:1436] HTTP POST for /slave(1244)/api/v1/resource_provider from 172.17.0.2:42488
I0417 20:08:47.115083  3073 manager.cpp:779] Subscribing resource provider {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin","--api_version=v0","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_BA7Ed4","--available_capacity=0B","--volumes=","--forward=unix:///tmp/HikpZD/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}
I0417 20:08:47.148541  3079 slave.cpp:8714] Handling resource provider message 'SUBSCRIBE: {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"id":{"value":"8baf0dd7-0179-4387-8ae0-5fdf4a1cac65"},"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin","--api_version=v0","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_BA7Ed4","--available_capacity=0B","--volumes=","--forward=unix:///tmp/HikpZD/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}'
I0417 20:08:47.150292  3085 provider.cpp:498] Received SUBSCRIBED event
I0417 20:08:47.150334  3085 provider.cpp:1309] Subscribed with ID 8baf0dd7-0179-4387-8ae0-5fdf4a1cac65
I0417 20:08:47.151130  3075 status_update_manager_process.hpp:314] Recovering operation status update manager
I0417 20:08:47.157441  3071 hierarchical.cpp:1853] Performed allocation for 1 agents in 188249ns
I0417 20:08:47.198319  3076 provider.cpp:790] Reconciling storage pools and volumes
I0417 20:08:47.200598 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ListVolumes call
I0417 20:08:47.203868  3082 provider.cpp:2217] Sending UPDATE_STATE call with resources '{}' and 0 operations to agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0
I0417 20:08:47.204243  3072 http_connection.hpp:131] Sending UPDATE_STATE call to http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.204282  3082 provider.cpp:748] Resource provider 8baf0dd7-0179-4387-8ae0-5fdf4a1cac65 is in READY state
I0417 20:08:47.204360  3076 status_update_manager_process.hpp:385] Resuming operation status update manager
I0417 20:08:47.205183  3084 provider.cpp:1235] Updating profiles { test } for resource provider 8baf0dd7-0179-4387-8ae0-5fdf4a1cac65
I0417 20:08:47.205649  3081 process.cpp:3671] Handling HTTP event for process 'slave(1244)' with path: '/slave(1244)/api/v1/resource_provider'
I0417 20:08:47.206658  3072 provider.cpp:790] Reconciling storage pools and volumes
I0417 20:08:47.207448  3085 http.cpp:1436] HTTP POST for /slave(1244)/api/v1/resource_provider from 172.17.0.2:42486
I0417 20:08:47.208104  3083 manager.cpp:1011] Received UPDATE_STATE call with resources '[]' and 0 operations from resource provider 8baf0dd7-0179-4387-8ae0-5fdf4a1cac65
I0417 20:08:47.208326  3081 slave.cpp:8714] Handling resource provider message 'UPDATE_STATE: 8baf0dd7-0179-4387-8ae0-5fdf4a1cac65 {}'
I0417 20:08:47.208443  3081 slave.cpp:8834] Forwarding new total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I0417 20:08:47.208619 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ListVolumes call
I0417 20:08:47.209022  3074 hierarchical.cpp:1853] Performed allocation for 1 agents in 134004ns
I0417 20:08:47.209167 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/GetCapacity call
I0417 20:08:47.210309  3071 hierarchical.cpp:1110] Grew agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 by {} (total), {  } (used)
I0417 20:08:47.210748  3071 hierarchical.cpp:1067] Agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 (2f50ab4c41c0) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I0417 20:08:47.215685  3077 hierarchical.cpp:1853] Performed allocation for 1 agents in 156017ns
I0417 20:08:47.216150  3086 provider.cpp:790] Reconciling storage pools and volumes
I0417 20:08:47.218396 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ListVolumes call
I0417 20:08:47.218894 14681 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/GetCapacity call
I0417 20:08:47.229238  3078 slave.cpp:1000] Agent terminating
I0417 20:08:47.230119  3078 manager.cpp:127] Terminating resource provider 8baf0dd7-0179-4387-8ae0-5fdf4a1cac65
I0417 20:08:47.230603  3085 master.cpp:1292] Agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0) disconnected
I0417 20:08:47.230648  3085 master.cpp:3361] Disconnecting agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.230756  3085 master.cpp:3380] Deactivating agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 at slave(1244)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.230933  3086 hierarchical.cpp:1156] Agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0 deactivated
E0417 20:08:47.231218  3073 http_connection.hpp:449] End-Of-File received
I0417 20:08:47.231745  3073 http_connection.hpp:217] Re-detecting endpoint
I0417 20:08:47.232239  3073 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I0417 20:08:47.232319  3073 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I0417 20:08:47.232355  3077 provider.cpp:488] Disconnected from resource provider manager
I0417 20:08:47.232426  3073 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.232537  3078 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:47.233978  3072 containerizer.cpp:2623] Destroying container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE in RUNNING state
I0417 20:08:47.234031  3072 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from RUNNING to DESTROYING after 15.471314944secs
I0417 20:08:47.234470  3072 launcher.cpp:161] Asked to destroy container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.235685  3080 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.236405  3085 provider.cpp:476] Connected to resource provider manager
I0417 20:08:47.237067  3073 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:37151/slave(1244)/api/v1/resource_provider
I0417 20:08:47.237917  3087 process.cpp:2781] Returning '404 Not Found' for '/slave(1244)/api/v1/resource_provider'
E0417 20:08:47.239274  3074 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Received '404 Not Found' ()
I0417 20:08:47.262423  3084 hierarchical.cpp:1853] Performed allocation for 1 agents in 170641ns
I0417 20:08:47.311748  3075 containerizer.cpp:3159] Container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE has exited
I0417 20:08:47.313459  3079 provisioner.cpp:652] Ignoring destroy request for unknown container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.314172  3078 hierarchical.cpp:1853] Performed allocation for 1 agents in 189069ns
I0417 20:08:47.316468  3075 container_daemon.cpp:189] Invoking post-stop hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.316763  3075 service_manager.cpp:723] Disconnected from endpoint 'unix:///tmp/mesos-csi-xMl6C2/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.317111  3081 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.320349  3087 process.cpp:2781] Returning '404 Not Found' for '/slave(1244)/api/v1'
I0417 20:08:47.331890  3070 master.cpp:1140] Master terminating
I0417 20:08:47.332231  3078 hierarchical.cpp:1132] Removed all filters for agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0
I0417 20:08:47.332257  3078 hierarchical.cpp:1008] Removed agent 86205872-5b8f-448c-8c57-5d4e2e4e99eb-S0
[       OK ] CSIVersion/StorageLocalResourceProviderTest.Update/v0 (692 ms)
[ RUN      ] CSIVersion/StorageLocalResourceProviderTest.Update/v1
I0417 20:08:47.347837  3070 cluster.cpp:186] Creating default 'local' authorizer
I0417 20:08:47.354643  3081 master.cpp:443] Master eea7e54d-7420-4eff-bdb1-ff94fe19e933 (2f50ab4c41c0) started on 172.17.0.2:37151
I0417 20:08:47.354688  3081 master.cpp:446] Flags at startup: --acls="" --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="50ms" --allocator="hierarchical" --authenticate_agents="true" --authenticate_frameworks="true" --authenticate_http_frameworks="true" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authentication_v0_timeout="15secs" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/RztO1D/credentials" --filter_gpu_resources="true" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_operator_event_stream_subscribers="1000" --max_unreachable_tasks_per_framework="1000" --memory_profiling="false" --min_allocatable_resources="cpus:0.01|mem:32" --port="5050" --publish_per_framework_metrics="true" --quiet="false" --recovery_agent_removal_limit="100%" --registry="in_memory" --registry_fetch_timeout="1mins" --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400" --registry_store_timeout="100secs" --registry_strict="false" --require_agent_domain="false" --role_sorter="drf" --root_submissions="true" --version="false" --webui_dir="/tmp/SRC/build/mesos-1.10.0/_inst/share/mesos/webui" --work_dir="/tmp/RztO1D/master" --zk_session_timeout="10secs"
I0417 20:08:47.355211  3081 master.cpp:495] Master only allowing authenticated frameworks to register
I0417 20:08:47.355237  3081 master.cpp:501] Master only allowing authenticated agents to register
I0417 20:08:47.355257  3081 master.cpp:507] Master only allowing authenticated HTTP frameworks to register
I0417 20:08:47.355269  3081 credentials.hpp:37] Loading credentials for authentication from '/tmp/RztO1D/credentials'
I0417 20:08:47.355619  3081 master.cpp:551] Using default 'crammd5' authenticator
I0417 20:08:47.355928  3081 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
I0417 20:08:47.356225  3081 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
I0417 20:08:47.356501  3081 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
I0417 20:08:47.356758  3081 master.cpp:632] Authorization enabled
I0417 20:08:47.357192  3072 hierarchical.cpp:567] Initialized hierarchical allocator process
I0417 20:08:47.357275  3077 whitelist_watcher.cpp:77] No whitelist given
I0417 20:08:47.360702  3078 master.cpp:2165] Elected as the leading master!
I0417 20:08:47.360734  3078 master.cpp:1661] Recovering from registrar
I0417 20:08:47.361013  3086 registrar.cpp:339] Recovering registrar
I0417 20:08:47.361961  3086 registrar.cpp:383] Successfully fetched the registry (0B) in 0ns
I0417 20:08:47.362130  3086 registrar.cpp:487] Applied 1 operations in 52285ns; attempting to update the registry
I0417 20:08:47.362964  3086 registrar.cpp:544] Successfully updated the registry in 0ns
I0417 20:08:47.363163  3086 registrar.cpp:416] Successfully recovered registrar
I0417 20:08:47.363762  3072 master.cpp:1814] Recovered 0 agents from the registry (144B); allowing 10mins for agents to reregister
I0417 20:08:47.363775  3080 hierarchical.cpp:606] Skipping recovery of hierarchical allocator: nothing to recover
W0417 20:08:47.369352  3070 process.cpp:2877] Attempted to spawn already running process files@172.17.0.2:37151
I0417 20:08:47.370693  3070 containerizer.cpp:317] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
W0417 20:08:47.371243  3070 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges
W0417 20:08:47.371269  3070 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges
W0417 20:08:47.371284  3070 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges
I0417 20:08:47.371313  3070 provisioner.cpp:294] Using default backend 'copy'
I0417 20:08:47.374042  3070 cluster.cpp:533] Creating default 'local' authorizer
I0417 20:08:47.376471  3082 slave.cpp:279] Mesos agent started on (1245)@172.17.0.2:37151
I0417 20:08:47.376509  3082 slave.cpp:280] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/RztO1D/d8rZos/store/appc" --authenticate_http_readonly="true" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authentication_timeout_max="1mins" --authentication_timeout_min="5secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_destroy_timeout="1mins" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/RztO1D/d8rZos/credential" --default_role="*" --disallow_sharing_agent_ipc_namespace="false" --disallow_sharing_agent_pid_namespace="false" --disk_profile_adaptor="org_apache_mesos_UriDiskProfileAdaptor" --disk_watch_interval="1mins" --docker="docker" --docker_ignore_runtime="false" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/RztO1D/d8rZos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --docker_volume_chown="false" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/RztO1D/d8rZos/fetch" --fetcher_cache_size="2GB" --fetcher_stall_timeout="1mins" --frameworks_home="/tmp/RztO1D/d8rZos/frameworks" --gc_delay="1weeks" --gc_disk_headroom="0.1" --gc_non_executor_container_sandboxes="false" --help="false" --hostname_lookup="true" --http_command_executor="false" --http_credentials="/tmp/RztO1D/d8rZos/http_credentials" --http_executor_domain_sockets="false" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/tmp/SRC/build/mesos-1.10.0/_build/sub/src" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --memory_profiling="false" --network_cni_metrics="true" --network_cni_root_dir_persist="false" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resource_provider_config_dir="/tmp/RztO1D/resource_provider_configs" --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --runtime_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_NHBfi0" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby" --zk_session_timeout="10secs"
I0417 20:08:47.377135  3082 credentials.hpp:86] Loading credential for authentication from '/tmp/RztO1D/d8rZos/credential'
I0417 20:08:47.377328  3082 slave.cpp:312] Agent using credential for: test-principal
I0417 20:08:47.377358  3082 credentials.hpp:37] Loading credentials for authentication from '/tmp/RztO1D/d8rZos/http_credentials'
I0417 20:08:47.377602  3082 http.cpp:1296] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly'
I0417 20:08:47.378062  3082 disk_profile_adaptor.cpp:82] Creating disk profile adaptor module 'org_apache_mesos_UriDiskProfileAdaptor'
I0417 20:08:47.379771  3081 uri_disk_profile_adaptor.cpp:305] Updated disk profile mapping to 1 active profiles
I0417 20:08:47.379922  3082 slave.cpp:627] Agent resources: [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
I0417 20:08:47.380215  3082 slave.cpp:635] Agent attributes: [  ]
I0417 20:08:47.380235  3082 slave.cpp:644] Agent hostname: 2f50ab4c41c0
I0417 20:08:47.380481  3076 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:47.380491  3078 task_status_update_manager.cpp:181] Pausing sending task status updates
I0417 20:08:47.382580  3073 state.cpp:68] Recovering state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/meta'
I0417 20:08:47.382791  3084 slave.cpp:7723] Finished recovering checkpointed state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/meta', beginning agent recovery
I0417 20:08:47.383443  3077 task_status_update_manager.cpp:207] Recovering task status update manager
I0417 20:08:47.383997  3080 containerizer.cpp:822] Recovering Mesos containers
I0417 20:08:47.384403  3080 containerizer.cpp:1162] Recovering isolators
I0417 20:08:47.384685  3081 hierarchical.cpp:1853] Performed allocation for 0 agents in 93786ns
I0417 20:08:47.385576  3084 containerizer.cpp:1201] Recovering provisioner
I0417 20:08:47.386353  3074 provisioner.cpp:518] Provisioner recovery complete
I0417 20:08:47.387284  3075 composing.cpp:343] Finished recovering all containerizers
I0417 20:08:47.387625  3080 slave.cpp:8205] Recovering executors
I0417 20:08:47.387756  3080 slave.cpp:8358] Finished recovery
I0417 20:08:47.388684  3086 slave.cpp:1471] New master detected at master@172.17.0.2:37151
I0417 20:08:47.388713  3072 task_status_update_manager.cpp:181] Pausing sending task status updates
I0417 20:08:47.388757  3074 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:47.388847  3086 slave.cpp:1536] Detecting new master
I0417 20:08:47.397240  3085 slave.cpp:1563] Authenticating with master master@172.17.0.2:37151
I0417 20:08:47.397370  3085 slave.cpp:1572] Using default CRAM-MD5 authenticatee
I0417 20:08:47.397732  3076 authenticatee.cpp:121] Creating new client SASL connection
I0417 20:08:47.398109  3076 master.cpp:9676] Authenticating slave(1245)@172.17.0.2:37151
I0417 20:08:47.398285  3073 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(2108)@172.17.0.2:37151
I0417 20:08:47.398622  3081 authenticator.cpp:98] Creating new server SASL connection
I0417 20:08:47.398902  3075 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
I0417 20:08:47.398941  3075 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
I0417 20:08:47.399096  3078 authenticator.cpp:204] Received SASL authentication start
I0417 20:08:47.399188  3078 authenticator.cpp:326] Authentication requires more steps
I0417 20:08:47.399370  3080 authenticatee.cpp:259] Received SASL authentication step
I0417 20:08:47.399535  3080 authenticator.cpp:232] Received SASL authentication step
I0417 20:08:47.399585  3080 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '2f50ab4c41c0' server FQDN: '2f50ab4c41c0' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
I0417 20:08:47.399612  3080 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
I0417 20:08:47.399667  3080 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
I0417 20:08:47.399698  3080 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '2f50ab4c41c0' server FQDN: '2f50ab4c41c0' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
I0417 20:08:47.399715  3080 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
I0417 20:08:47.399729  3080 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I0417 20:08:47.399757  3080 authenticator.cpp:318] Authentication success
I0417 20:08:47.399979  3083 authenticatee.cpp:299] Authentication success
I0417 20:08:47.400010  3084 master.cpp:9708] Successfully authenticated principal 'test-principal' at slave(1245)@172.17.0.2:37151
I0417 20:08:47.400035  3072 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(2108)@172.17.0.2:37151
I0417 20:08:47.400490  3086 slave.cpp:1663] Successfully authenticated with master master@172.17.0.2:37151
I0417 20:08:47.400914  3086 slave.cpp:2113] Will retry registration in 10.607454ms if necessary
I0417 20:08:47.401226  3079 master.cpp:6174] Received register agent message from slave(1245)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.401733  3079 master.cpp:3705] Authorizing principal 'test-principal' to perform action REGISTER_AGENT on ANY object
I0417 20:08:47.402405  3075 master.cpp:6241] Authorized registration of agent at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.402542  3075 master.cpp:6353] Registering agent at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0) with id eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0
I0417 20:08:47.403344  3080 registrar.cpp:487] Applied 1 operations in 280216ns; attempting to update the registry
I0417 20:08:47.404170  3080 registrar.cpp:544] Successfully updated the registry in 748032ns
I0417 20:08:47.404402  3072 master.cpp:6401] Admitted agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.405301  3072 master.cpp:6446] Registered agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0) with cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I0417 20:08:47.405335  3085 slave.cpp:1696] Registered with master master@172.17.0.2:37151; given agent ID eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0
I0417 20:08:47.405488  3072 task_status_update_manager.cpp:188] Resuming sending task status updates
I0417 20:08:47.405483  3086 hierarchical.cpp:959] Added agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 (2f50ab4c41c0) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (offered or allocated: {})
I0417 20:08:47.405839  3086 hierarchical.cpp:1853] Performed allocation for 1 agents in 138078ns
I0417 20:08:47.405900  3085 slave.cpp:1731] Checkpointing SlaveInfo to '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/meta/slaves/eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0/slave.info'
I0417 20:08:47.405977  3086 status_update_manager_process.hpp:385] Resuming operation status update manager
I0417 20:08:47.407158  3085 slave.cpp:1783] Forwarding agent update {"operations":{},"resource_providers":{},"resource_version_uuid":{"value":"0Bb3Cji+RJmyuzzJ/ZD6EA=="},"slave_id":{"value":"eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0"},"update_oversubscribed_resources":false}
I0417 20:08:47.407891  3084 master.cpp:7584] Ignoring update on agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0) as it reports no changes
I0417 20:08:47.412372  3079 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1'
I0417 20:08:47.413771  3084 http.cpp:1436] HTTP POST for /slave(1245)/api/v1 from 172.17.0.2:42496
I0417 20:08:47.414317  3084 http.cpp:2828] Processing GET_CONTAINERS call
I0417 20:08:47.435703  3084 hierarchical.cpp:1853] Performed allocation for 1 agents in 149851ns
I0417 20:08:47.443729  3072 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.446856  3083 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1'
I0417 20:08:47.448081  3085 http.cpp:1436] HTTP POST for /slave(1245)/api/v1 from 172.17.0.2:42498
I0417 20:08:47.448717  3085 http.cpp:3289] Processing LAUNCH_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.450060  3073 http.cpp:3399] Creating sandbox '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.450897  3072 containerizer.cpp:1397] Starting container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.451452  3072 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from STARTING to PROVISIONING after 217088ns
I0417 20:08:47.476943  3072 containerizer.cpp:1575] Checkpointed ContainerConfig at '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_NHBfi0/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE/config'
I0417 20:08:47.476994  3072 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PROVISIONING to PREPARING after 25.543936ms
I0417 20:08:47.480180  3075 containerizer.cpp:2101] Launching 'mesos-containerizer' with flags '--help="false" --launch_info="{"command":{"arguments":["/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin","--api_version=v1","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_FB0N45","--available_capacity=0B","--volumes=","--forward=unix:///tmp/RztO1D/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin"},"environment":{"variables":[{"name":"MESOS_SANDBOX","type":"VALUE","value":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"},{"name":"CSI_ENDPOINT","type":"VALUE","value":"unix:///tmp/mesos-csi-oKZIoh/endpoint.sock"}]},"task_environment":{},"working_directory":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/containers/o
 rg-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"}" --pipe_read="112" --pipe_write="114" --runtime_directory="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_NHBfi0/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE" --unshare_namespace_mnt="false"'
I0417 20:08:47.490624  3075 launcher.cpp:145] Forked child with pid '14690' for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.491935  3075 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PREPARING to ISOLATING after 14.916096ms
I0417 20:08:47.492002  3085 hierarchical.cpp:1853] Performed allocation for 1 agents in 361875ns
I0417 20:08:47.493786  3075 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from ISOLATING to FETCHING after 1.85088ms
I0417 20:08:47.494082  3075 fetcher.cpp:369] Starting to fetch URIs for container: org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE, directory: /tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_rjnwby/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.495194  3076 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from FETCHING to RUNNING after 1.378304ms
I0417 20:08:47.498194  3085 container_daemon.cpp:140] Invoking post-start hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.498417  3082 service_manager.cpp:703] Connecting to endpoint 'unix:///tmp/mesos-csi-oKZIoh/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.542845  3083 hierarchical.cpp:1853] Performed allocation for 1 agents in 155919ns
I0417 20:08:47.593966  3071 hierarchical.cpp:1853] Performed allocation for 1 agents in 168133ns
I0417 20:08:47.644876  3076 hierarchical.cpp:1853] Performed allocation for 1 agents in 141794ns
I0417 20:08:47.696704  3071 hierarchical.cpp:1853] Performed allocation for 1 agents in 201782ns
I0417 20:08:47.748425  3086 hierarchical.cpp:1853] Performed allocation for 1 agents in 172166ns
I0417 20:08:47.799660  3079 hierarchical.cpp:1853] Performed allocation for 1 agents in 146389ns
I0417 20:08:47.819979  3073 service_manager.cpp:545] Probing endpoint 'unix:///tmp/mesos-csi-oKZIoh/endpoint.sock' with CSI v1
I0417 20:08:47.822021 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/Probe call
I0417 20:08:47.825147  3072 container_daemon.cpp:171] Waiting for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.827950 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/GetPluginCapabilities call
I0417 20:08:47.829027  3079 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1'
I0417 20:08:47.830485  3080 http.cpp:1436] HTTP POST for /slave(1245)/api/v1 from 172.17.0.2:42502
I0417 20:08:47.831065  3080 http.cpp:3513] Processing WAIT_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:47.832800 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/GetPluginInfo call
I0417 20:08:47.833094 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/GetPluginInfo call
I0417 20:08:47.834795  3071 v1_volume_manager.cpp:649] NODE_SERVICE loaded: {}
I0417 20:08:47.835330  3071 v1_volume_manager.cpp:649] CONTROLLER_SERVICE loaded: {}
I0417 20:08:47.837517 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ControllerGetCapabilities call
I0417 20:08:47.841384 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Node/NodeGetCapabilities call
I0417 20:08:47.844916 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Node/NodeGetInfo call
I0417 20:08:47.847352  3082 provider.cpp:676] Recovered resources '{}' and 0 operations for resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test'
I0417 20:08:47.847517  3075 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:47.847923  3072 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.850515  3076 hierarchical.cpp:1853] Performed allocation for 1 agents in 192937ns
I0417 20:08:47.851198  3079 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.851918  3086 provider.cpp:476] Connected to resource provider manager
I0417 20:08:47.852622  3080 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.853760  3077 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1/resource_provider'
I0417 20:08:47.855661  3071 http.cpp:1436] HTTP POST for /slave(1245)/api/v1/resource_provider from 172.17.0.2:42506
I0417 20:08:47.856392  3078 manager.cpp:779] Subscribing resource provider {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin","--api_version=v1","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_FB0N45","--available_capacity=0B","--volumes=","--forward=unix:///tmp/RztO1D/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}
I0417 20:08:47.883471  3072 slave.cpp:8714] Handling resource provider message 'SUBSCRIBE: {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"id":{"value":"5404607d-71cd-47d3-9c41-e5a866001d87"},"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin","--api_version=v1","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_FB0N45","--available_capacity=0B","--volumes=","--forward=unix:///tmp/RztO1D/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/tmp/SRC/build/mesos-1.10.0/_build/sub/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}'
I0417 20:08:47.885326  3073 provider.cpp:498] Received SUBSCRIBED event
I0417 20:08:47.885377  3073 provider.cpp:1309] Subscribed with ID 5404607d-71cd-47d3-9c41-e5a866001d87
I0417 20:08:47.886173  3075 status_update_manager_process.hpp:314] Recovering operation status update manager
I0417 20:08:47.902267  3086 hierarchical.cpp:1853] Performed allocation for 1 agents in 128357ns
I0417 20:08:47.925195  3080 provider.cpp:790] Reconciling storage pools and volumes
I0417 20:08:47.927474 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ListVolumes call
I0417 20:08:47.930748  3084 provider.cpp:2217] Sending UPDATE_STATE call with resources '{}' and 0 operations to agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0
I0417 20:08:47.931147  3084 provider.cpp:748] Resource provider 5404607d-71cd-47d3-9c41-e5a866001d87 is in READY state
I0417 20:08:47.931145  3077 http_connection.hpp:131] Sending UPDATE_STATE call to http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.931200  3080 status_update_manager_process.hpp:385] Resuming operation status update manager
I0417 20:08:47.932075  3078 provider.cpp:1235] Updating profiles { test } for resource provider 5404607d-71cd-47d3-9c41-e5a866001d87
I0417 20:08:47.932600  3086 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1/resource_provider'
I0417 20:08:47.933372  3085 provider.cpp:790] Reconciling storage pools and volumes
I0417 20:08:47.934630  3085 http.cpp:1436] HTTP POST for /slave(1245)/api/v1/resource_provider from 172.17.0.2:42504
I0417 20:08:47.935319  3075 manager.cpp:1011] Received UPDATE_STATE call with resources '[]' and 0 operations from resource provider 5404607d-71cd-47d3-9c41-e5a866001d87
I0417 20:08:47.935546  3072 slave.cpp:8714] Handling resource provider message 'UPDATE_STATE: 5404607d-71cd-47d3-9c41-e5a866001d87 {}'
I0417 20:08:47.935652  3072 slave.cpp:8834] Forwarding new total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I0417 20:08:47.935717 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ListVolumes call
I0417 20:08:47.936290 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/GetCapacity call
I0417 20:08:47.937072  3071 hierarchical.cpp:1110] Grew agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 by {} (total), {  } (used)
I0417 20:08:47.937604  3071 hierarchical.cpp:1067] Agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 (2f50ab4c41c0) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I0417 20:08:47.942543  3077 hierarchical.cpp:1853] Performed allocation for 1 agents in 170230ns
I0417 20:08:47.943012  3072 provider.cpp:790] Reconciling storage pools and volumes
I0417 20:08:47.945139 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ListVolumes call
I0417 20:08:47.945812 14694 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/GetCapacity call
I0417 20:08:47.956429  3070 slave.cpp:1000] Agent terminating
I0417 20:08:47.957378  3070 manager.cpp:127] Terminating resource provider 5404607d-71cd-47d3-9c41-e5a866001d87
I0417 20:08:47.957870  3071 master.cpp:1292] Agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0) disconnected
I0417 20:08:47.957906  3071 master.cpp:3361] Disconnecting agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.957993  3071 master.cpp:3380] Deactivating agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 at slave(1245)@172.17.0.2:37151 (2f50ab4c41c0)
I0417 20:08:47.958218  3074 hierarchical.cpp:1156] Agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0 deactivated
E0417 20:08:47.958439  3081 http_connection.hpp:449] End-Of-File received
I0417 20:08:47.958927  3081 http_connection.hpp:217] Re-detecting endpoint
I0417 20:08:47.959434  3081 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I0417 20:08:47.959528  3081 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I0417 20:08:47.959590  3086 provider.cpp:488] Disconnected from resource provider manager
I0417 20:08:47.959625  3081 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.959753  3086 status_update_manager_process.hpp:379] Pausing operation status update manager
I0417 20:08:47.961274  3076 containerizer.cpp:2623] Destroying container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE in RUNNING state
I0417 20:08:47.961339  3076 containerizer.cpp:3321] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from RUNNING to DESTROYING after 15.466194688secs
I0417 20:08:47.961824  3076 launcher.cpp:161] Asked to destroy container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:47.963152  3078 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.963891  3083 provider.cpp:476] Connected to resource provider manager
I0417 20:08:47.964542  3081 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:37151/slave(1245)/api/v1/resource_provider
I0417 20:08:47.965481  3087 process.cpp:2781] Returning '404 Not Found' for '/slave(1245)/api/v1/resource_provider'
E0417 20:08:47.966794  3075 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Received '404 Not Found' ()
I0417 20:08:47.989934  3086 hierarchical.cpp:1853] Performed allocation for 1 agents in 158189ns
I0417 20:08:48.039063  3073 containerizer.cpp:3159] Container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE has exited
I0417 20:08:48.040809  3073 provisioner.cpp:652] Ignoring destroy request for unknown container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:48.041712  3076 hierarchical.cpp:1853] Performed allocation for 1 agents in 145299ns
I0417 20:08:48.043942  3081 container_daemon.cpp:189] Invoking post-stop hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:48.044189  3082 service_manager.cpp:723] Disconnected from endpoint 'unix:///tmp/mesos-csi-oKZIoh/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I0417 20:08:48.044462  3073 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I0417 20:08:48.047641  3087 process.cpp:2781] Returning '404 Not Found' for '/slave(1245)/api/v1'
I0417 20:08:48.059909  3070 master.cpp:1140] Master terminating
I0417 20:08:48.060330  3076 hierarchical.cpp:1132] Removed all filters for agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0
I0417 20:08:48.060359  3076 hierarchical.cpp:1008] Removed agent eea7e54d-7420-4eff-bdb1-ff94fe19e933-S0
[       OK ] CSIVersion/StorageLocalResourceProviderTest.Update/v1 (726 ms)
[----------] 54 tests from CSIVersion/StorageLocalResourceProviderTest (94847 ms total)

[----------] Global test environment tear-down
[==========] 2320 tests from 223 test cases ran. (1142798 ms total)
[  PASSED  ] 2319 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] SlaveTest.DrainInfoInAPIOutputs

 1 FAILED TEST
  YOU HAVE 34 DISABLED TESTS

I0417 20:08:48.153097  3087 process.cpp:935] Stopped the socket accept loop
Makefile:15957: recipe for target 'check-local' failed
make[4]: *** [check-local] Error 1
make[4]: Leaving directory '/tmp/SRC/build/mesos-1.10.0/_build/sub/src'
Makefile:15073: recipe for target 'check-am' failed
make[3]: *** [check-am] Error 2
make[3]: Leaving directory '/tmp/SRC/build/mesos-1.10.0/_build/sub/src'
Makefile:15077: recipe for target 'check' failed
make[2]: *** [check] Error 2
make[2]: Leaving directory '/tmp/SRC/build/mesos-1.10.0/_build/sub/src'
Makefile:785: recipe for target 'check-recursive' failed
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory '/tmp/SRC/build/mesos-1.10.0/_build/sub'
Makefile:991: recipe for target 'distcheck' failed
make: *** [distcheck] Error 1
Untagged: mesos/mesos-build:ubuntu-16.04
Untagged: mesos/mesos-build@sha256:fa967cbcfb44f55708a3cbc87f245c6d29dd891464db558af56a03ee321526bb
Deleted: sha256:e895c0531b9a9a288fabe479a49f7059aed83645351ac99ec2ea2616822c9f97
Deleted: sha256:09119b0b314a69ba6ec6251f2e89f4199fe1e874c84f9abf07dcbd23dbc3f1c1
Deleted: sha256:562fe6af5a3883058be9c784a839392215ed4185a21d21f1d99df0d17f3ae6e7
Deleted: sha256:0dcdfbe322a4f332f73ac70905d557300ec6dae3bd19586758772c750c7b4a19
Deleted: sha256:c66652d605f01094e2db53d62505dbd524e076d5aa69b89d5e620003803eb149
Deleted: sha256:a2cf79bfb9593c05fd7142ddb49afef77ea4ad5d2464e841f09fe62ffee396e0
Deleted: sha256:f7a904214b390f39d98573882f631dc908df8b2b540cf04e1062f8182c1efffd
Deleted: sha256:ce1f6fcaa83dfce189d76e08f184085732eab4eeb2562d2399953958405c5bec
Deleted: sha256:cce92fda689ab9033f0b8db214bc63edd1ae3e05831a0f3a9418976d7dc7ccdd
Deleted: sha256:d22094bbd65447c59a42c580eaa3a44cee9cd855f00905f59409be21bcefc745
Deleted: sha256:b8976847450013f3eb5e9a81a5778f73ed7bef67e6393049712ef17102b4b7b7
Deleted: sha256:b8c891f0ffec910a12757d733b178e3f62d81dbbde2b31d3b754071c416108ed

Full log: https://builds.apache.org/job/Mesos-Reviewbot-Linux/8908/console

- Mesos Reviewbot


On April 17, 2020, 6:04 p.m., Greg Mann wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72368/
> -----------------------------------------------------------
> 
> (Updated April 17, 2020, 6:04 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-10118
>     https://issues.apache.org/jira/browse/MESOS-10118
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, when the agent had no tasks or operations and
> received a `DrainSlaveMessage`, it would checkpoint the
> `DrainConfig` to disk, implicitly placing it into a "draining"
> state indefinitely. This patch updates the agent's handler to
> avoid checkpointing anything to disk in this case.
> 
> 
> Diffs
> -----
> 
>   src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 
> 
> 
> Diff: https://reviews.apache.org/r/72368/diff/1/
> 
> 
> Testing
> -------
> 
> Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.
> 
> 
> Thanks,
> 
> Greg Mann
> 
>


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated April 17, 2020, 6:04 p.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description (updated)
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.
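
The guard described above can be sketched roughly as follows. This is a
simplified, hypothetical model, not the actual `src/slave/slave.cpp` code:
the struct, field, and method names (`Agent`, `hasTasksOrOperations`,
`handleDrainSlaveMessage`) are illustrative stand-ins for the agent's real
framework/operation bookkeeping.

```cpp
#include <cassert>

// Hypothetical, simplified model of the agent's DrainSlaveMessage handler.
// In the real patch the agent inspects its task and operation state; here a
// single boolean stands in for that check.
struct Agent {
  bool hasTasksOrOperations;
  bool drainConfigCheckpointed = false;

  // Returns true if the DrainConfig was checkpointed to disk.
  bool handleDrainSlaveMessage() {
    if (!hasTasksOrOperations) {
      // Nothing to drain: checkpointing the DrainConfig here would leave
      // the agent in a "draining" state indefinitely (MESOS-10118), so the
      // fixed handler skips the checkpoint entirely.
      return false;
    }

    // Normal path: there is work to drain, so the DrainConfig is
    // checkpointed and draining proceeds.
    drainConfigCheckpointed = true;
    return true;
  }
};
```

An idle agent (no tasks or operations) now returns to service without a
stale checkpoint, while a busy agent still persists the `DrainConfig` so
draining survives an agent restart.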


Diffs
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 


Diff: https://reviews.apache.org/r/72368/diff/1/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann


Re: Review Request 72368: Fixed a bug in the agent's draining handler.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72368/
-----------------------------------------------------------

(Updated April 17, 2020, 6:03 p.m.)


Review request for mesos, Andrei Sekretenko and Benjamin Mahler.


Bugs: MESOS-10118
    https://issues.apache.org/jira/browse/MESOS-10118


Repository: mesos


Description
-------

Previously, when the agent had no tasks or operations and
received a `DrainSlaveMessage`, it would checkpoint the
`DrainConfig` to disk, implicitly placing it into a "draining"
state indefinitely. This patch updates the agent's handler to
avoid checkpointing anything to disk in this case.


Diffs
-----

  src/slave/slave.cpp 1a32c813eeacf21b903db3f9602d034e7fb085c0 


Diff: https://reviews.apache.org/r/72368/diff/1/


Testing
-------

Ran the test in https://reviews.apache.org/r/72364/ with this patch applied and verified that the final task reached TASK_RUNNING.


Thanks,

Greg Mann