Posted to reviews@mesos.apache.org by Joseph Wu <jo...@mesosphere.io> on 2019/10/24 01:06:58 UTC

Review Request 71666: WIP: SSL Wrapper: Implemented send/recv and shutdown.

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs
-----

  3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/1/


Testing
-------

Successfully fetched from a webpage:
```
  http::URL url = http::URL(
      "https",
      "www.google.com",
      443);

  Future<http::Response> response = http::get(url);
  AWAIT_READY(response);
  EXPECT_EQ(http::Status::OK, response->code);
```


Thanks,

Joseph Wu


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218520
-----------------------------------------------------------



Patch looks great!

Reviews applied: [71659, 71660, 71661, 71662, 71663, 71664, 71665, 71666]

Passed command: export OS='ubuntu:14.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/docker-build.sh

- Mesos Reviewbot


On Nov. 6, 2019, 3:41 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 6, 2019, 3:41 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/4/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> [  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
> [  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218597
-----------------------------------------------------------



Patch looks great!

Reviews applied: [71659, 71660, 71661, 71662, 71663, 71664, 71665, 71666]

Passed command: export OS='ubuntu:14.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/docker-build.sh

- Mesos Reviewbot


On Nov. 11, 2019, 11:41 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 11, 2019, 11:41 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/5/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> [  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
> [  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Greg Mann <gr...@mesosphere.io>.

> On Dec. 5, 2019, 6:38 p.m., Greg Mann wrote:
> > 3rdparty/libprocess/src/ssl/socket_wrapper.hpp
> > Lines 94 (patched)
> > <https://reviews.apache.org/r/71666/diff/6/?file=2174527#file2174527line94>
> >
> >     In this case, we don't really need an actor context, since there isn't any actor state associated with the compute thread. We really just want some context (any context) to dispatch the SSL-related functions onto, right?
> >     
> >     It would make a bit more sense to me to dispatch these functions without specifying an actor, so that libprocess can run them wherever it pleases.
> >     
> >     We could consider updating `loop()` to dispatch in all cases, even when no pid is specified. However, I do wonder if we're unknowingly depending on the existing behavior somewhere. In any case, changing loop to always `dispatch()` the iterate and body seems more desirable to me?
> >     
> >     However, the `loop()` calls below aren't strictly necessary I think. We could accomplish the same thing with dispatches and chained continuations, so we could also just use `dispatch()` directly instead of `loop()`, that might be the simplest thing to do.
> >     
> >     WDYT?
> 
> Joseph Wu wrote:
>     I think a UPID/actor is required for any dispatching/looping on libprocess worker threads, so this variable would still exist regardless of how the loops are implemented.
>     
>     The alternative is to run everything on the event loop thread (or special threads we spin up/acquire out of band?).

Ah I was remembering the version of `defer()` which can be invoked without a pid: https://github.com/apache/mesos/blob/925ad30c0f3b249afe74bdeb1921c5fdbf1c5886/3rdparty/libprocess/include/process/defer.hpp#L275-L283

Actually, I wish we had an overload of `dispatch()` that did something similar. In any case, the `defer()` overload might work here, WDYT?


- Greg


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218948
-----------------------------------------------------------


On Nov. 20, 2019, 12:29 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 20, 2019, 12:29 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/6/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.

> On Dec. 5, 2019, 10:38 a.m., Greg Mann wrote:
> > 3rdparty/libprocess/src/ssl/socket_wrapper.hpp
> > Lines 94 (patched)
> > <https://reviews.apache.org/r/71666/diff/6/?file=2174527#file2174527line94>
> >
> >     In this case, we don't really need an actor context, since there isn't any actor state associated with the compute thread. We really just want some context (any context) to dispatch the SSL-related functions onto, right?
> >     
> >     It would make a bit more sense to me to dispatch these functions without specifying an actor, so that libprocess can run them wherever it pleases.
> >     
> >     We could consider updating `loop()` to dispatch in all cases, even when no pid is specified. However, I do wonder if we're unknowingly depending on the existing behavior somewhere. In any case, changing loop to always `dispatch()` the iterate and body seems more desirable to me?
> >     
> >     However, the `loop()` calls below aren't strictly necessary I think. We could accomplish the same thing with dispatches and chained continuations, so we could also just use `dispatch()` directly instead of `loop()`, that might be the simplest thing to do.
> >     
> >     WDYT?
> 
> Joseph Wu wrote:
>     I think a UPID/actor is required for any dispatching/looping on libprocess worker threads, so this variable would still exist regardless of how the loops are implemented.
>     
>     The alternative is to run everything on the event loop thread (or special threads we spin up/acquire out of band?).
> 
> Greg Mann wrote:
>     Ah I was remembering the version of `defer()` which can be invoked without a pid: https://github.com/apache/mesos/blob/925ad30c0f3b249afe74bdeb1921c5fdbf1c5886/3rdparty/libprocess/include/process/defer.hpp#L275-L283
>     
>     Actually, I wish we had an overload of `dispatch()` that did something similar. In any case, the `defer()` overload might work here, WDYT?
> 
> Joseph Wu wrote:
>     That overload of `defer()` ends up running things on a thread_local UPID:
>     ```
>     // Per thread executor pointer. We use a pointer to lazily construct the
>     // actual executor.
>     extern thread_local Executor* _executor_;
>     
>     #define __executor__                                                    \
>       (_executor_ == nullptr ? _executor_ = new Executor() : _executor_)
>     ```
>     
>     In this case, I think we would end up constructing a single `__executor__` object on the EventLoop thread (since that is where the `defer()` is called), so all socket IO would end up deferred onto the same UPID.
> 
> Greg Mann wrote:
>     Yea, I think you're right. That seems a bit better to me than constructing a one-off UPID specifically for SSL work, WDYT?

I mean the _same_ UPID (only one).  Just one actor would handle all the SSL work.


- Joseph


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218948
-----------------------------------------------------------


On Nov. 19, 2019, 4:29 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 19, 2019, 4:29 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/6/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Greg Mann <gr...@mesosphere.io>.

> On Dec. 5, 2019, 6:38 p.m., Greg Mann wrote:
> > 3rdparty/libprocess/src/ssl/socket_wrapper.hpp
> > Lines 94 (patched)
> > <https://reviews.apache.org/r/71666/diff/6/?file=2174527#file2174527line94>
> >
> >     In this case, we don't really need an actor context, since there isn't any actor state associated with the compute thread. We really just want some context (any context) to dispatch the SSL-related functions onto, right?
> >     
> >     It would make a bit more sense to me to dispatch these functions without specifying an actor, so that libprocess can run them wherever it pleases.
> >     
> >     We could consider updating `loop()` to dispatch in all cases, even when no pid is specified. However, I do wonder if we're unknowingly depending on the existing behavior somewhere. In any case, changing loop to always `dispatch()` the iterate and body seems more desirable to me?
> >     
> >     However, the `loop()` calls below aren't strictly necessary I think. We could accomplish the same thing with dispatches and chained continuations, so we could also just use `dispatch()` directly instead of `loop()`, that might be the simplest thing to do.
> >     
> >     WDYT?
> 
> Joseph Wu wrote:
>     I think a UPID/actor is required for any dispatching/looping on libprocess worker threads, so this variable would still exist regardless of how the loops are implemented.
>     
>     The alternative is to run everything on the event loop thread (or special threads we spin up/acquire out of band?).
> 
> Greg Mann wrote:
>     Ah I was remembering the version of `defer()` which can be invoked without a pid: https://github.com/apache/mesos/blob/925ad30c0f3b249afe74bdeb1921c5fdbf1c5886/3rdparty/libprocess/include/process/defer.hpp#L275-L283
>     
>     Actually, I wish we had an overload of `dispatch()` that did something similar. In any case, the `defer()` overload might work here, WDYT?
> 
> Joseph Wu wrote:
>     That overload of `defer()` ends up running things on a thread_local UPID:
>     ```
>     // Per thread executor pointer. We use a pointer to lazily construct the
>     // actual executor.
>     extern thread_local Executor* _executor_;
>     
>     #define __executor__                                                    \
>       (_executor_ == nullptr ? _executor_ = new Executor() : _executor_)
>     ```
>     
>     In this case, I think we would end up constructing a single `__executor__` object on the EventLoop thread (since that is where the `defer()` is called), so all socket IO would end up deferred onto the same UPID.

Yea, I think you're right. That seems a bit better to me than constructing a one-off UPID specifically for SSL work, WDYT?


- Greg


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218948
-----------------------------------------------------------


On Nov. 20, 2019, 12:29 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 20, 2019, 12:29 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/6/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.

> On Dec. 5, 2019, 10:38 a.m., Greg Mann wrote:
> > 3rdparty/libprocess/src/ssl/socket_wrapper.hpp
> > Lines 94 (patched)
> > <https://reviews.apache.org/r/71666/diff/6/?file=2174527#file2174527line94>
> >
> >     In this case, we don't really need an actor context, since there isn't any actor state associated with the compute thread. We really just want some context (any context) to dispatch the SSL-related functions onto, right?
> >     
> >     It would make a bit more sense to me to dispatch these functions without specifying an actor, so that libprocess can run them wherever it pleases.
> >     
> >     We could consider updating `loop()` to dispatch in all cases, even when no pid is specified. However, I do wonder if we're unknowingly depending on the existing behavior somewhere. In any case, changing loop to always `dispatch()` the iterate and body seems more desirable to me?
> >     
> >     However, the `loop()` calls below aren't strictly necessary I think. We could accomplish the same thing with dispatches and chained continuations, so we could also just use `dispatch()` directly instead of `loop()`, that might be the simplest thing to do.
> >     
> >     WDYT?
> 
> Joseph Wu wrote:
>     I think a UPID/actor is required for any dispatching/looping on libprocess worker threads, so this variable would still exist regardless of how the loops are implemented.
>     
>     The alternative is to run everything on the event loop thread (or special threads we spin up/acquire out of band?).
> 
> Greg Mann wrote:
>     Ah I was remembering the version of `defer()` which can be invoked without a pid: https://github.com/apache/mesos/blob/925ad30c0f3b249afe74bdeb1921c5fdbf1c5886/3rdparty/libprocess/include/process/defer.hpp#L275-L283
>     
>     Actually, I wish we had an overload of `dispatch()` that did something similar. In any case, the `defer()` overload might work here, WDYT?

That overload of `defer()` ends up running things on a thread_local UPID:
```
// Per thread executor pointer. We use a pointer to lazily construct the
// actual executor.
extern thread_local Executor* _executor_;

#define __executor__                                                    \
  (_executor_ == nullptr ? _executor_ = new Executor() : _executor_)
```

In this case, I think we would end up constructing a single `__executor__` object on the EventLoop thread (since that is where the `defer()` is called), so all socket IO would end up deferred onto the same UPID.


- Joseph


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218948
-----------------------------------------------------------


On Nov. 19, 2019, 4:29 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 19, 2019, 4:29 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/6/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.

> On Dec. 5, 2019, 10:38 a.m., Greg Mann wrote:
> > 3rdparty/libprocess/src/ssl/socket_wrapper.hpp
> > Lines 94 (patched)
> > <https://reviews.apache.org/r/71666/diff/6/?file=2174527#file2174527line94>
> >
> >     In this case, we don't really need an actor context, since there isn't any actor state associated with the compute thread. We really just want some context (any context) to dispatch the SSL-related functions onto, right?
> >     
> >     It would make a bit more sense to me to dispatch these functions without specifying an actor, so that libprocess can run them wherever it pleases.
> >     
> >     We could consider updating `loop()` to dispatch in all cases, even when no pid is specified. However, I do wonder if we're unknowingly depending on the existing behavior somewhere. In any case, changing loop to always `dispatch()` the iterate and body seems more desirable to me?
> >     
> >     However, the `loop()` calls below aren't strictly necessary I think. We could accomplish the same thing with dispatches and chained continuations, so we could also just use `dispatch()` directly instead of `loop()`, that might be the simplest thing to do.
> >     
> >     WDYT?

I think a UPID/actor is required for any dispatching/looping on libprocess worker threads, so this variable would still exist regardless of how the loops are implemented.

The alternative is to run everything on the event loop thread (or special threads we spin up/acquire out of band?).


- Joseph


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218948
-----------------------------------------------------------


On Nov. 19, 2019, 4:29 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 19, 2019, 4:29 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/6/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218948
-----------------------------------------------------------




3rdparty/libprocess/src/ssl/socket_wrapper.hpp
Lines 94 (patched)
<https://reviews.apache.org/r/71666/#comment306923>

    In this case, we don't really need an actor context, since there isn't any actor state associated with the compute thread. We really just want some context (any context) to dispatch the SSL-related functions onto, right?
    
    It would make a bit more sense to me to dispatch these functions without specifying an actor, so that libprocess can run them wherever it pleases.
    
    We could consider updating `loop()` to dispatch in all cases, even when no pid is specified. However, I do wonder if we're unknowingly depending on the existing behavior somewhere. In any case, changing loop to always `dispatch()` the iterate and body seems more desirable to me?
    
    However, the `loop()` calls below aren't strictly necessary I think. We could accomplish the same thing with dispatches and chained continuations, so we could also just use `dispatch()` directly instead of `loop()`, that might be the simplest thing to do.
    
    WDYT?


- Greg Mann


On Nov. 20, 2019, 12:29 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 20, 2019, 12:29 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/6/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review219053
-----------------------------------------------------------


Ship it!




Ship It!

- Greg Mann


On Dec. 16, 2019, 10:05 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Dec. 16, 2019, 10:05 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/openssl_socket.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/openssl_socket.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/8/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Dec. 16, 2019, 2:05 p.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Changes
-------

Comment tweak.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/openssl_socket.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/openssl_socket.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/8/

Changes: https://reviews.apache.org/r/71666/diff/7-8/


Testing
-------

```
cmake --build . --target libprocess-tests
libprocess-tests
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
```


Thanks,

Joseph Wu


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.

> On Dec. 13, 2019, 11:41 a.m., Greg Mann wrote:
> > 3rdparty/libprocess/src/ssl/openssl_socket.cpp
> > Lines 478 (patched)
> > <https://reviews.apache.org/r/71666/diff/7/?file=2183605#file2183605line478>
> >
> >     Is it possible that a client-initiated renegotiation will lead to an SSL_ERROR_WANT_READ result here? Would we hang in that case?
> >     
> >     Same question for an SSL_ERROR_WANT_WRITE result in `recv()`.

https://issues.apache.org/jira/browse/MESOS-10070

We have reason to believe the OpenSSL library may handle this transparently for us, but a unit test would be needed to confirm it.


- Joseph


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review219029
-----------------------------------------------------------


On Dec. 16, 2019, 2:05 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Dec. 16, 2019, 2:05 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/openssl_socket.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/openssl_socket.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/8/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Greg Mann <gr...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review219029
-----------------------------------------------------------




3rdparty/libprocess/src/ssl/openssl_socket.cpp
Lines 478 (patched)
<https://reviews.apache.org/r/71666/#comment307057>

    Is it possible that a client-initiated renegotiation will lead to an SSL_ERROR_WANT_READ result here? Would we hang in that case?
    
    Same question for an SSL_ERROR_WANT_WRITE result in `recv()`.



3rdparty/libprocess/src/ssl/openssl_socket.cpp
Lines 683 (patched)
<https://reviews.apache.org/r/71666/#comment307056>

    Maybe s/Server sockets/Listening sockets/
    
    Since accepted server-side sockets will create a UPID.


- Greg Mann


On Dec. 10, 2019, 11:55 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Dec. 10, 2019, 11:55 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/openssl_socket.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/openssl_socket.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/7/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Dec. 10, 2019, 3:55 p.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Changes
-------

Rebase on earlier rename.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/openssl_socket.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/openssl_socket.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/7/

Changes: https://reviews.apache.org/r/71666/diff/6-7/


Testing
-------

```
cmake --build . --target libprocess-tests
libprocess-tests
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
```


Thanks,

Joseph Wu


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Nov. 19, 2019, 4:29 p.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Changes
-------

* Added a "compute_thread" so that calls to `SSL_read` and `SSL_write` can be spread onto (many) libprocess worker threads rather than the (one) event loop thread.
* Changed shutdown to happen asynchronously but accept dirty shutdowns.
* Guarded against new send/recv calls after shutdown.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/6/

Changes: https://reviews.apache.org/r/71666/diff/5-6/


Testing (updated)
-------

```
cmake --build . --target libprocess-tests
libprocess-tests
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
```


Thanks,

Joseph Wu


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218610
-----------------------------------------------------------



Notes from an offline review session:


3rdparty/libprocess/src/ssl/socket_wrapper.cpp
Lines 464-465 (patched)
<https://reviews.apache.org/r/71666/#comment306402>

    Consider adding UPIDs to spread the load (of SSL encryption/decryption) off the event loop thread.
    
    Because `io::read` and `io::write` always complete on the event loop thread, the continuations from these loops will also run on the event loop thread.  In practice, this means all the SSL encryption/decryption happens in the same thread.  There is no benefit to forcing everything onto the same thread, as OpenSSL is threadsafe (some caveats on older versions though).



3rdparty/libprocess/src/ssl/socket_wrapper.cpp
Lines 657-663 (patched)
<https://reviews.apache.org/r/71666/#comment306401>

    Consider not blocking, but instead just trying to send the shutdown bits once.


- Joseph Wu


On Nov. 11, 2019, 11:41 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 11, 2019, 11:41 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/5/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> [  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
> [  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218700
-----------------------------------------------------------



Bad patch!

Reviews applied: [71666, 71665, 71664, 71663, 71662, 71661, 71660, 71764, 71659]

Error:
2019-11-19 22:48:17 URL:https://reviews.apache.org/r/71665/diff/raw/ [16358/16358] -> "71665.patch" [1]
error: patch failed: 3rdparty/libprocess/src/ssl/socket_wrapper.cpp:30
error: 3rdparty/libprocess/src/ssl/socket_wrapper.cpp: patch does not apply

- Mesos Reviewbot


On Nov. 11, 2019, 7:41 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 11, 2019, 7:41 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/5/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> [  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
> [  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218629
-----------------------------------------------------------



Patch looks great!

Reviews applied: [71659, 71764, 71660, 71661, 71662, 71663, 71664, 71665, 71666]

Passed command: export OS='ubuntu:14.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/docker-build.sh

- Mesos Reviewbot


On Nov. 11, 2019, 7:41 p.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Nov. 11, 2019, 7:41 p.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/5/
> 
> 
> Testing
> -------
> 
> ```
> cmake --build . --target libprocess-tests
> libprocess-tests
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> [  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
> [  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Nov. 11, 2019, 11:41 a.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Changes
-------

Reflected a tweak made in the previous review to handle EOF from `recv`-ing.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/5/

Changes: https://reviews.apache.org/r/71666/diff/4-5/


Testing
-------

```
cmake --build . --target libprocess-tests
libprocess-tests
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
[  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
[  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
```


Thanks,

Joseph Wu


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Nov. 5, 2019, 7:41 p.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Changes
-------

Adjusted shutdown logic to avoid deadlocks with the ProcessManager/SocketManager.  (Basically, don't use process::loop here)


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/4/

Changes: https://reviews.apache.org/r/71666/diff/3-4/


Testing (updated)
-------

```
cmake --build . --target libprocess-tests
libprocess-tests
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
[  FAILED  ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = "SSL"
[  FAILED  ] Encryption/NetSocketTest.EOFAfterRecv/0, where GetParam() = "SSL"
```


Thanks,

Joseph Wu


Re: Review Request 71666: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Nov. 5, 2019, 5:57 p.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Changes
-------

Changed lambda captures of `this` to a weak pointer instead.


Summary (updated)
-----------------

SSL Wrapper: Implemented send/recv and shutdown.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/3/

Changes: https://reviews.apache.org/r/71666/diff/2-3/


Testing (updated)
-------

```
cmake --build . --target libprocess-tests
libprocess-tests --gtest_filter="SSLTest.*"
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
[  FAILED  ] SSLTest.ShutdownThenSend
```


Thanks,

Joseph Wu


Re: Review Request 71666: WIP: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218472
-----------------------------------------------------------



Patch looks great!

Reviews applied: [71659, 71660, 71661, 71662, 71663, 71664, 71665, 71666]

Passed command: export OS='ubuntu:14.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/docker-build.sh

- Mesos Reviewbot


On Oct. 31, 2019, 1:35 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Oct. 31, 2019, 1:35 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/2/
> 
> 
> Testing
> -------
> 
> Successfully fetched from a webpage:
> ```
>   http::URL url = http::URL(
>      "https",
>      "www.google.com",
>      443);
> 
>   Future<http::Response> response = http::get(url);
>   AWAIT_READY(response);
>   EXPECT_EQ(http::Status::OK, response->code);
> ```
> 
> Running libprocess-tests yields:
> ```
> [  FAILED  ] SSLTest.SilentSocket (hangs indefinitely)
> [  FAILED  ] SSLTest.ValidDowngrade
> [  FAILED  ] SSLTest.ValidDowngradeEachProtocol
> [  FAILED  ] SSLTest.ShutdownThenSend
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>


Re: Review Request 71666: WIP: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Joseph Wu <jo...@mesosphere.io>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/
-----------------------------------------------------------

(Updated Oct. 30, 2019, 6:35 p.m.)


Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.


Bugs: MESOS-10010
    https://issues.apache.org/jira/browse/MESOS-10010


Repository: mesos


Description
-------

This completes a fully functional client-side SSL socket.

Needs a bit of cleanup and more error handling though.


Diffs (updated)
-----

  3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
  3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 


Diff: https://reviews.apache.org/r/71666/diff/2/

Changes: https://reviews.apache.org/r/71666/diff/1-2/


Testing (updated)
-------

Successfully fetched from a webpage:
```
  http::URL url = http::URL(
     "https",
     "www.google.com",
     443);

  Future<http::Response> response = http::get(url);
  AWAIT_READY(response);
  EXPECT_EQ(http::Status::OK, response->code);
```

Running libprocess-tests yields:
```
[  FAILED  ] SSLTest.SilentSocket (hangs indefinitely)
[  FAILED  ] SSLTest.ValidDowngrade
[  FAILED  ] SSLTest.ValidDowngradeEachProtocol
[  FAILED  ] SSLTest.ShutdownThenSend
```


Thanks,

Joseph Wu


Re: Review Request 71666: WIP: SSL Wrapper: Implemented send/recv and shutdown.

Posted by Mesos Reviewbot <re...@mesos.apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71666/#review218381
-----------------------------------------------------------



Bad patch!

Reviews applied: [71659, 71660, 71661, 71662, 71663, 71664, 71665, 71666]

Failed command: ['bash', '-c', "set -o pipefail; export OS='ubuntu:14.04' BUILDTOOL='autotools' COMPILER='gcc' CONFIGURATION='--verbose --disable-libtool-wrappers --disable-parallel-test-execution' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1'; ./support/docker-build.sh 2>&1 | tee build_71666"]

Error:
...<truncated>...
incipal","role":"storage/default-role","type":"DYNAMIC"}],"scalar":{"value":2048.0},"type":"SCALAR"}]'
I1024 04:09:16.837707 18970 sched.cpp:960] Rescinded offer 4e513900-c18f-44fd-b61e-94980d41083c-O3
I1024 04:09:16.837848 18970 sched.cpp:971] Scheduler::offerRescinded took 43513ns
I1024 04:09:16.838539 18964 hierarchical.cpp:1566] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test)]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test)]:2048, offered or allocated: {}) on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 from framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:16.838770 18959 master.cpp:12706] Removing offer 4e513900-c18f-44fd-b61e-94980d41083c-O3
I1024 04:09:16.840548 18964 hierarchical.cpp:1615] Framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 filtered agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 for 5secs
I1024 04:09:16.844607 18958 master.cpp:12571] Sending operation '' (uuid: 431a5832-b5dc-4070-8864-e5905e28b81e) to agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 at slave(1245)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:16.845479 18958 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:16.848281 18963 master.cpp:6412] Processing REVIVE call for framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:16.849131 18960 hierarchical.cpp:1711] Unsuppressed offers and cleared filters for roles { storage/default-role } of framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:16.849989 18969 provider.cpp:498] Received APPLY_OPERATION event
I1024 04:09:16.850067 18969 provider.cpp:1351] Received CREATE operation '' (uuid: 431a5832-b5dc-4070-8864-e5905e28b81e)
I1024 04:09:16.851294 18960 hierarchical.cpp:1843] Performed allocation for 1 agents in 1.70449ms
I1024 04:09:16.852298 18959 master.cpp:10409] Sending offers [ 4e513900-c18f-44fd-b61e-94980d41083c-O4 ] to framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:16.852977 18965 sched.cpp:934] Scheduler::resourceOffers took 71323ns
I1024 04:09:16.863402 18955 http.cpp:1115] HTTP POST for /slave(1245)/api/v1/resource_provider from 172.17.0.2:36920
I1024 04:09:16.865109 18953 slave.cpp:8483] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I1024 04:09:16.865459 18953 slave.cpp:8936] Updating the state of operation with no ID (uuid: e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I1024 04:09:16.865547 18953 slave.cpp:8690] Forwarding status update of operation with no ID (operation_uuid: e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73) for an operator API call
I1024 04:09:16.866223 18972 master.cpp:12223] Updating the state of operation '' (uuid: e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I1024 04:09:16.867074 18968 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:16.950788 18959 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: a12dedde-ff62-4851-ab61-9768f77a0a3d) for operation UUID 431a5832-b5dc-4070-8864-e5905e28b81e on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:16.950886 18959 status_update_manager_process.hpp:414] Creating operation status update stream 431a5832-b5dc-4070-8864-e5905e28b81e checkpoint=true
I1024 04:09:16.951009 18969 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I1024 04:09:16.951305 18959 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: a12dedde-ff62-4851-ab61-9768f77a0a3d) for operation UUID 431a5832-b5dc-4070-8864-e5905e28b81e on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.000857 18959 status_update_manager_process.hpp:528] Forwarding operation status update OPERATION_FINISHED (Status UUID: a12dedde-ff62-4851-ab61-9768f77a0a3d) for operation UUID 431a5832-b5dc-4070-8864-e5905e28b81e on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.001210 18959 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 4fa85ad1-d990-474b-a124-8edffee181d5) for stream e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73
I1024 04:09:17.001314 18959 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: 4fa85ad1-d990-474b-a124-8edffee181d5) for operation UUID e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73 on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.002012 18973 http_connection.hpp:131] Sending UPDATE_OPERATION_STATUS call to http://172.17.0.2:42005/slave(1245)/api/v1/resource_provider
I1024 04:09:17.003497 18950 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1/resource_provider'
I1024 04:09:17.047626 18951 http.cpp:1115] HTTP POST for /slave(1245)/api/v1/resource_provider from 172.17.0.2:36920
I1024 04:09:17.049250 18957 slave.cpp:8483] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 431a5832-b5dc-4070-8864-e5905e28b81e) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I1024 04:09:17.049624 18957 slave.cpp:8936] Updating the state of operation with no ID (uuid: 431a5832-b5dc-4070-8864-e5905e28b81e) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I1024 04:09:17.049728 18957 slave.cpp:8690] Forwarding status update of operation with no ID (operation_uuid: 431a5832-b5dc-4070-8864-e5905e28b81e) for an operator API call
I1024 04:09:17.050470 18968 master.cpp:12223] Updating the state of operation '' (uuid: 431a5832-b5dc-4070-8864-e5905e28b81e) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I1024 04:09:17.051097 18959 status_update_manager_process.hpp:490] Cleaning up operation status update stream e7bbe480-05f9-4f0e-9b53-7d2bb48e9a73
I1024 04:09:17.051389 18961 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:17.093894 18962 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I1024 04:09:17.094228 18966 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: a12dedde-ff62-4851-ab61-9768f77a0a3d) for stream 431a5832-b5dc-4070-8864-e5905e28b81e
I1024 04:09:17.094384 18966 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: a12dedde-ff62-4851-ab61-9768f77a0a3d) for operation UUID 431a5832-b5dc-4070-8864-e5905e28b81e on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.135694 18966 status_update_manager_process.hpp:490] Cleaning up operation status update stream 431a5832-b5dc-4070-8864-e5905e28b81e
I1024 04:09:17.191262 18951 process.cpp:3671] Handling HTTP event for process 'master' with path: '/master/api/v1'
I1024 04:09:17.194038 18971 http.cpp:1115] HTTP POST for /master/api/v1 from 172.17.0.2:36928
I1024 04:09:17.194308 18971 http.cpp:263] Processing call DESTROY_VOLUMES
I1024 04:09:17.195261 18971 master.cpp:3983] Authorizing principal 'test-principal' to destroy volumes '[{"disk":{"persistence":{"id":"010cad7b-e875-4a88-8977-ac7449dddfda","principal":"test-principal"},"source":{"id":"/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84","mount":{"root":"./csi/org.apache.mesos.csi.test/local/mounts"},"profile":"test","type":"MOUNT","vendor":"org.apache.mesos.csi.test.local"},"volume":{"container_path":"volume","mode":"RW"}},"name":"disk","provider_id":{"value":"11fbbf8d-67b7-4da2-81bb-ece0f8e7ad2d"},"reservations":[{"role":"storage","type":"DYNAMIC"},{"principal":"test-principal","role":"storage/default-role","type":"DYNAMIC"}],"scalar":{"value":2048.0},"type":"SCALAR"}]'
I1024 04:09:17.197199 18969 sched.cpp:960] Rescinded offer 4e513900-c18f-44fd-b61e-94980d41083c-O4
I1024 04:09:17.197397 18969 sched.cpp:971] Scheduler::offerRescinded took 108181ns
I1024 04:09:17.198186 18956 master.cpp:12706] Removing offer 4e513900-c18f-44fd-b61e-94980d41083c-O4
I1024 04:09:17.198129 18960 hierarchical.cpp:1566] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test),010cad7b-e875-4a88-8977-ac7449dddfda:volume]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test),010cad7b-e875-4a88-8977-ac7449dddfda:volume]:2048, offered or allocated: {}) on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 from framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:17.200817 18960 hierarchical.cpp:1615] Framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 filtered agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 for 5secs
I1024 04:09:17.205857 18965 master.cpp:12571] Sending operation '' (uuid: 84384e74-15f5-47d8-9bb6-a86e3cab2344) to agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 at slave(1245)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.206737 18966 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:17.211086 18952 hierarchical.cpp:1843] Performed allocation for 1 agents in 1.813751ms
I1024 04:09:17.211603 18971 provider.cpp:498] Received APPLY_OPERATION event
I1024 04:09:17.211673 18971 provider.cpp:1351] Received DESTROY operation '' (uuid: 84384e74-15f5-47d8-9bb6-a86e3cab2344)
I1024 04:09:17.212340 18959 master.cpp:10409] Sending offers [ 4e513900-c18f-44fd-b61e-94980d41083c-O5 ] to framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:17.213408 18970 sched.cpp:934] Scheduler::resourceOffers took 132511ns
I1024 04:09:17.329574 18969 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: e96e4c19-e47f-4709-9ead-c259d8fa5a49) for operation UUID 84384e74-15f5-47d8-9bb6-a86e3cab2344 on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.329668 18969 status_update_manager_process.hpp:414] Creating operation status update stream 84384e74-15f5-47d8-9bb6-a86e3cab2344 checkpoint=true
I1024 04:09:17.330063 18969 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: e96e4c19-e47f-4709-9ead-c259d8fa5a49) for operation UUID 84384e74-15f5-47d8-9bb6-a86e3cab2344 on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.379720 18969 status_update_manager_process.hpp:528] Forwarding operation status update OPERATION_FINISHED (Status UUID: e96e4c19-e47f-4709-9ead-c259d8fa5a49) for operation UUID 84384e74-15f5-47d8-9bb6-a86e3cab2344 on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.380975 18973 http_connection.hpp:131] Sending UPDATE_OPERATION_STATUS call to http://172.17.0.2:42005/slave(1245)/api/v1/resource_provider
I1024 04:09:17.382381 18964 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1/resource_provider'
I1024 04:09:17.386802 18955 process.cpp:3671] Handling HTTP event for process 'master' with path: '/master/api/v1'
I1024 04:09:17.389070 18967 http.cpp:1115] HTTP POST for /master/api/v1 from 172.17.0.2:36930
I1024 04:09:17.389468 18967 http.cpp:263] Processing call UNRESERVE_RESOURCES
I1024 04:09:17.390410 18967 master.cpp:3875] Authorizing principal 'test-principal' to unreserve resources '[{"disk":{"source":{"id":"/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84","mount":{"root":"./csi/org.apache.mesos.csi.test/local/mounts"},"profile":"test","type":"MOUNT","vendor":"org.apache.mesos.csi.test.local"}},"name":"disk","provider_id":{"value":"11fbbf8d-67b7-4da2-81bb-ece0f8e7ad2d"},"reservations":[{"role":"storage","type":"DYNAMIC"},{"principal":"test-principal","role":"storage/default-role","type":"DYNAMIC"}],"scalar":{"value":2048.0},"type":"SCALAR"}]'
I1024 04:09:17.392415 18952 sched.cpp:960] Rescinded offer 4e513900-c18f-44fd-b61e-94980d41083c-O5
I1024 04:09:17.392516 18952 sched.cpp:971] Scheduler::offerRescinded took 28442ns
I1024 04:09:17.393165 18959 hierarchical.cpp:1566] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test)]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage),(DYNAMIC,storage/default-role,test-principal)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test)]:2048, offered or allocated: {}) on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 from framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:17.393447 18961 master.cpp:12706] Removing offer 4e513900-c18f-44fd-b61e-94980d41083c-O5
I1024 04:09:17.395400 18959 hierarchical.cpp:1615] Framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 filtered agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 for 5secs
I1024 04:09:17.399453 18969 master.cpp:12571] Sending operation '' (uuid: fc71d3d2-1e02-430c-b79d-634c61da75ad) to agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 at slave(1245)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.400280 18960 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:17.404453 18955 provider.cpp:498] Received APPLY_OPERATION event
I1024 04:09:17.404523 18955 provider.cpp:1351] Received UNRESERVE operation '' (uuid: fc71d3d2-1e02-430c-b79d-634c61da75ad)
I1024 04:09:17.407121 18972 hierarchical.cpp:1843] Performed allocation for 1 agents in 1.216514ms
I1024 04:09:17.407675 18951 master.cpp:10409] Sending offers [ 4e513900-c18f-44fd-b61e-94980d41083c-O6 ] to framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:17.408283 18951 sched.cpp:934] Scheduler::resourceOffers took 80694ns
I1024 04:09:17.427634 18956 http.cpp:1115] HTTP POST for /slave(1245)/api/v1/resource_provider from 172.17.0.2:36920
I1024 04:09:17.429145 18973 slave.cpp:8483] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 84384e74-15f5-47d8-9bb6-a86e3cab2344) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I1024 04:09:17.429489 18973 slave.cpp:8936] Updating the state of operation with no ID (uuid: 84384e74-15f5-47d8-9bb6-a86e3cab2344) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I1024 04:09:17.429586 18973 slave.cpp:8690] Forwarding status update of operation with no ID (operation_uuid: 84384e74-15f5-47d8-9bb6-a86e3cab2344) for an operator API call
I1024 04:09:17.430261 18965 master.cpp:12223] Updating the state of operation '' (uuid: 84384e74-15f5-47d8-9bb6-a86e3cab2344) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I1024 04:09:17.431108 18962 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:17.506042 18951 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: 5a4a2522-25e0-406e-b3c7-d0d4edb5a055) for operation UUID fc71d3d2-1e02-430c-b79d-634c61da75ad on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.506170 18951 status_update_manager_process.hpp:414] Creating operation status update stream fc71d3d2-1e02-430c-b79d-634c61da75ad checkpoint=true
I1024 04:09:17.506309 18955 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I1024 04:09:17.506789 18951 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: 5a4a2522-25e0-406e-b3c7-d0d4edb5a055) for operation UUID fc71d3d2-1e02-430c-b79d-634c61da75ad on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.556231 18951 status_update_manager_process.hpp:528] Forwarding operation status update OPERATION_FINISHED (Status UUID: 5a4a2522-25e0-406e-b3c7-d0d4edb5a055) for operation UUID fc71d3d2-1e02-430c-b79d-634c61da75ad on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.556695 18951 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: e96e4c19-e47f-4709-9ead-c259d8fa5a49) for stream 84384e74-15f5-47d8-9bb6-a86e3cab2344
I1024 04:09:17.556802 18951 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: e96e4c19-e47f-4709-9ead-c259d8fa5a49) for operation UUID 84384e74-15f5-47d8-9bb6-a86e3cab2344 on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.557334 18970 http_connection.hpp:131] Sending UPDATE_OPERATION_STATUS call to http://172.17.0.2:42005/slave(1245)/api/v1/resource_provider
I1024 04:09:17.558418 18959 process.cpp:3671] Handling HTTP event for process 'slave(1245)' with path: '/slave(1245)/api/v1/resource_provider'
I1024 04:09:17.603652 18950 http.cpp:1115] HTTP POST for /slave(1245)/api/v1/resource_provider from 172.17.0.2:36920
I1024 04:09:17.605124 18958 slave.cpp:8483] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: fc71d3d2-1e02-430c-b79d-634c61da75ad) for framework  (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)'
I1024 04:09:17.605542 18958 slave.cpp:8936] Updating the state of operation with no ID (uuid: fc71d3d2-1e02-430c-b79d-634c61da75ad) for an operation API call (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)
I1024 04:09:17.605639 18958 slave.cpp:8690] Forwarding status update of operation with no ID (operation_uuid: fc71d3d2-1e02-430c-b79d-634c61da75ad) for an operator API call
I1024 04:09:17.606158 18957 master.cpp:12223] Updating the state of operation '' (uuid: fc71d3d2-1e02-430c-b79d-634c61da75ad) for an operator API call (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED)
I1024 04:09:17.606283 18951 status_update_manager_process.hpp:490] Cleaning up operation status update stream 84384e74-15f5-47d8-9bb6-a86e3cab2344
I1024 04:09:17.607051 18968 slave.cpp:4352] Ignoring new checkpointed resources and operations identical to the current version
I1024 04:09:17.657438 18962 provider.cpp:498] Received ACKNOWLEDGE_OPERATION_STATUS event
I1024 04:09:17.657742 18956 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 5a4a2522-25e0-406e-b3c7-d0d4edb5a055) for stream fc71d3d2-1e02-430c-b79d-634c61da75ad
I1024 04:09:17.657889 18956 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FINISHED (Status UUID: 5a4a2522-25e0-406e-b3c7-d0d4edb5a055) for operation UUID fc71d3d2-1e02-430c-b79d-634c61da75ad on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.707314 18956 status_update_manager_process.hpp:490] Cleaning up operation status update stream fc71d3d2-1e02-430c-b79d-634c61da75ad
I1024 04:09:17.750463 18965 master.cpp:1411] Framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005 disconnected
I1024 04:09:17.750525 18965 master.cpp:3356] Deactivating framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:17.751109 18960 hierarchical.cpp:813] Deactivated framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:17.751709 18949 slave.cpp:924] Agent terminating
I1024 04:09:17.751760 18960 hierarchical.cpp:1566] Recovered ports(allocated: storage/default-role):[31000-32000]; disk(allocated: storage/default-role)(reservations: [(DYNAMIC,storage)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test)]:2048; cpus(allocated: storage/default-role):2; mem(allocated: storage/default-role):1024; disk(allocated: storage/default-role):1024 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage)])[MOUNT(org.apache.mesos.csi.test.local,/tmp/CSIVersion_StorageLocalResourceProviderTest_OperatorOperationsWithResourceProviderResources_v1_9tOuB1/2GB-8ec4d43b-5633-47b6-85bd-6e180958bc84,test)]:2048, offered or allocated: {}) on agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 from framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:17.751927 18965 master.cpp:12706] Removing offer 4e513900-c18f-44fd-b61e-94980d41083c-O6
I1024 04:09:17.752041 18965 master.cpp:3333] Disconnecting framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:17.752123 18965 master.cpp:1426] Giving framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005 0ns to failover
I1024 04:09:17.752997 18949 manager.cpp:163] Terminating resource provider 11fbbf8d-67b7-4da2-81bb-ece0f8e7ad2d
I1024 04:09:17.753644 18956 master.cpp:1296] Agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 at slave(1245)@172.17.0.2:42005 (af3ba927af2a) disconnected
I1024 04:09:17.753697 18956 master.cpp:3391] Disconnecting agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 at slave(1245)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.753814 18956 master.cpp:3410] Deactivating agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 at slave(1245)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.754053 18960 hierarchical.cpp:1146] Agent 4e513900-c18f-44fd-b61e-94980d41083c-S0 deactivated
I1024 04:09:17.754163 18956 master.cpp:10195] Framework failover timeout, removing framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
I1024 04:09:17.754237 18956 master.cpp:11197] Removing framework 4e513900-c18f-44fd-b61e-94980d41083c-0000 (default) at scheduler-fae671da-508d-496e-9b21-5f0e8463bce6@172.17.0.2:42005
E1024 04:09:17.754396 18958 http_connection.hpp:452] End-Of-File received
I1024 04:09:17.754936 18967 hierarchical.cpp:1767] Allocation paused
I1024 04:09:17.755093 18958 http_connection.hpp:217] Re-detecting endpoint
I1024 04:09:17.755785 18967 hierarchical.cpp:757] Removed framework 4e513900-c18f-44fd-b61e-94980d41083c-0000
I1024 04:09:17.755882 18967 hierarchical.cpp:1777] Allocation resumed
I1024 04:09:17.756017 18958 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I1024 04:09:17.756053 18970 provider.cpp:488] Disconnected from resource provider manager
I1024 04:09:17.756146 18958 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:42005/slave(1245)/api/v1/resource_provider
I1024 04:09:17.756256 18970 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:17.756515 18958 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I1024 04:09:17.758455 18973 containerizer.cpp:2620] Destroying container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE in RUNNING state
I1024 04:09:17.758543 18973 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from RUNNING to DESTROYING after 2.685379968secs
I1024 04:09:17.759218 18973 launcher.cpp:161] Asked to destroy container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:17.760370 18952 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:42005/slave(1245)/api/v1/resource_provider
I1024 04:09:17.761265 18967 provider.cpp:476] Connected to resource provider manager
I1024 04:09:17.762015 18954 hierarchical.cpp:1843] Performed allocation for 1 agents in 313156ns
I1024 04:09:17.762020 18959 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:42005/slave(1245)/api/v1/resource_provider
E1024 04:09:17.762781 18959 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Cannot process 'SUBSCRIBE' call as the driver is in state SUBSCRIBING
I1024 04:09:17.763013 18974 process.cpp:2781] Returning '404 Not Found' for '/slave(1245)/api/v1/resource_provider'
E1024 04:09:17.764221 18958 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Received '404 Not Found' ()
I1024 04:09:17.813241 18960 hierarchical.cpp:1843] Performed allocation for 1 agents in 251043ns
I1024 04:09:17.863765 18953 containerizer.cpp:3156] Container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE has exited
I1024 04:09:17.864959 18972 hierarchical.cpp:1843] Performed allocation for 1 agents in 194348ns
I1024 04:09:17.866255 18956 provisioner.cpp:652] Ignoring destroy request for unknown container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:17.869783 18951 container_daemon.cpp:189] Invoking post-stop hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:17.870127 18955 service_manager.cpp:723] Disconnected from endpoint 'unix:///tmp/mesos-csi-w0KWHF/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:17.870468 18953 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:17.874270 18974 process.cpp:2781] Returning '404 Not Found' for '/slave(1245)/api/v1'
I1024 04:09:17.886777 18949 master.cpp:1137] Master terminating
I1024 04:09:17.888247 18956 hierarchical.cpp:1122] Removed all filters for agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
I1024 04:09:17.888293 18956 hierarchical.cpp:998] Removed agent 4e513900-c18f-44fd-b61e-94980d41083c-S0
[       OK ] CSIVersion/StorageLocalResourceProviderTest.OperatorOperationsWithResourceProviderResources/v1 (2578 ms)
[ RUN      ] CSIVersion/StorageLocalResourceProviderTest.Update/v0
I1024 04:09:17.912412 18949 cluster.cpp:177] Creating default 'local' authorizer
I1024 04:09:17.921221 18973 master.cpp:440] Master 1f6d5661-2513-4c8b-a928-a37b9ccf4556 (af3ba927af2a) started on 172.17.0.2:42005
I1024 04:09:17.921269 18973 master.cpp:443] Flags at startup: --acls="" --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="50ms" --allocator="hierarchical" --authenticate_agents="true" --authenticate_frameworks="true" --authenticate_http_frameworks="true" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authentication_v0_timeout="15secs" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/ZQHg6K/credentials" --filter_gpu_resources="true" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_operator_event_stream_subscribers="1000" --max_unreachable_tasks_per_framework="1000" --memory_profiling="false" --min_allocatable_resources="cpus:0.01|mem:32" --port="5050" --publish_per_framework_metrics="true" --quiet="false" --recovery_agent_removal_limit="100%" --registry="in_memory" --registry_fetch_timeout="1mins" --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400" --registry_store_timeout="100secs" --registry_strict="false" --require_agent_domain="false" --role_sorter="drf" --root_submissions="true" --version="false" --webui_dir="/mesos/mesos-1.10.0/_inst/share/mesos/webui" --work_dir="/tmp/ZQHg6K/master" --zk_session_timeout="10secs"
I1024 04:09:17.921993 18973 master.cpp:492] Master only allowing authenticated frameworks to register
I1024 04:09:17.922027 18973 master.cpp:498] Master only allowing authenticated agents to register
I1024 04:09:17.922044 18973 master.cpp:504] Master only allowing authenticated HTTP frameworks to register
I1024 04:09:17.922073 18973 credentials.hpp:37] Loading credentials for authentication from '/tmp/ZQHg6K/credentials'
I1024 04:09:17.922570 18973 master.cpp:548] Using default 'crammd5' authenticator
I1024 04:09:17.922911 18973 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
I1024 04:09:17.923321 18973 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
I1024 04:09:17.923615 18973 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
I1024 04:09:17.923918 18973 master.cpp:629] Authorization enabled
I1024 04:09:17.924507 18959 hierarchical.cpp:567] Initialized hierarchical allocator process
I1024 04:09:17.924568 18962 whitelist_watcher.cpp:77] No whitelist given
I1024 04:09:17.929411 18961 master.cpp:2169] Elected as the leading master!
I1024 04:09:17.929476 18961 master.cpp:1665] Recovering from registrar
I1024 04:09:17.929770 18954 registrar.cpp:339] Recovering registrar
I1024 04:09:17.931078 18954 registrar.cpp:383] Successfully fetched the registry (0B) in 0ns
I1024 04:09:17.931311 18954 registrar.cpp:487] Applied 1 operations in 74396ns; attempting to update the registry
I1024 04:09:17.932485 18954 registrar.cpp:544] Successfully updated the registry in 0ns
I1024 04:09:17.932726 18954 registrar.cpp:416] Successfully recovered registrar
I1024 04:09:17.933549 18957 master.cpp:1818] Recovered 0 agents from the registry (144B); allowing 10mins for agents to reregister
I1024 04:09:17.933634 18955 hierarchical.cpp:606] Skipping recovery of hierarchical allocator: nothing to recover
W1024 04:09:17.943511 18949 process.cpp:2877] Attempted to spawn already running process files@172.17.0.2:42005
I1024 04:09:17.945814 18949 containerizer.cpp:318] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
W1024 04:09:17.946749 18949 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges
W1024 04:09:17.946801 18949 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges
W1024 04:09:17.947046 18949 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges
I1024 04:09:17.947134 18949 provisioner.cpp:294] Using default backend 'copy'
I1024 04:09:17.951319 18949 cluster.cpp:524] Creating default 'local' authorizer
I1024 04:09:17.955313 18963 slave.cpp:267] Mesos agent started on (1246)@172.17.0.2:42005
I1024 04:09:17.955366 18963 slave.cpp:268] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/ZQHg6K/0peXDd/store/appc" --authenticate_http_readonly="true" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authentication_timeout_max="1mins" --authentication_timeout_min="5secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_destroy_timeout="1mins" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/ZQHg6K/0peXDd/credential" --default_role="*" --disallow_sharing_agent_ipc_namespace="false" --disallow_sharing_agent_pid_namespace="false" --disk_profile_adaptor="org_apache_mesos_UriDiskProfileAdaptor" --disk_watch_interval="1mins" --docker="docker" --docker_ignore_runtime="false" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/ZQHg6K/0peXDd/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --docker_volume_chown="false" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/ZQHg6K/0peXDd/fetch" --fetcher_cache_size="2GB" --fetcher_stall_timeout="1mins" --frameworks_home="/tmp/ZQHg6K/0peXDd/frameworks" --gc_delay="1weeks" --gc_disk_headroom="0.1" --gc_non_executor_container_sandboxes="false" --help="false" --hostname_lookup="true" --http_command_executor="false" --http_credentials="/tmp/ZQHg6K/0peXDd/http_credentials" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/mesos/mesos-1.10.0/_build/src" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --memory_profiling="false" --network_cni_metrics="true" --network_cni_root_dir_persist="false" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resource_provider_config_dir="/tmp/ZQHg6K/resource_provider_configs" --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --runtime_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_cSUiuB" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ" --zk_session_timeout="10secs"
I1024 04:09:17.956171 18963 credentials.hpp:86] Loading credential for authentication from '/tmp/ZQHg6K/0peXDd/credential'
I1024 04:09:17.956477 18963 slave.cpp:300] Agent using credential for: test-principal
I1024 04:09:17.956526 18963 credentials.hpp:37] Loading credentials for authentication from '/tmp/ZQHg6K/0peXDd/http_credentials'
I1024 04:09:17.956915 18963 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly'
I1024 04:09:17.957603 18963 disk_profile_adaptor.cpp:82] Creating disk profile adaptor module 'org_apache_mesos_UriDiskProfileAdaptor'
I1024 04:09:17.957896 18962 hierarchical.cpp:1843] Performed allocation for 0 agents in 138609ns
I1024 04:09:17.960202 18953 uri_disk_profile_adaptor.cpp:305] Updated disk profile mapping to 1 active profiles
I1024 04:09:17.960338 18963 slave.cpp:615] Agent resources: [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
I1024 04:09:17.960768 18963 slave.cpp:623] Agent attributes: [  ]
I1024 04:09:17.960796 18963 slave.cpp:632] Agent hostname: af3ba927af2a
I1024 04:09:17.961102 18956 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:17.961158 18969 task_status_update_manager.cpp:181] Pausing sending task status updates
I1024 04:09:17.963766 18966 state.cpp:67] Recovering state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/meta'
I1024 04:09:17.964145 18970 slave.cpp:7492] Finished recovering checkpointed state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/meta', beginning agent recovery
I1024 04:09:17.965149 18960 task_status_update_manager.cpp:207] Recovering task status update manager
I1024 04:09:17.965939 18951 containerizer.cpp:821] Recovering Mesos containers
I1024 04:09:17.966601 18951 containerizer.cpp:1161] Recovering isolators
I1024 04:09:17.967984 18953 containerizer.cpp:1200] Recovering provisioner
I1024 04:09:17.969260 18950 provisioner.cpp:518] Provisioner recovery complete
I1024 04:09:17.970558 18960 composing.cpp:339] Finished recovering all containerizers
I1024 04:09:17.971004 18952 slave.cpp:7974] Recovering executors
I1024 04:09:17.971217 18952 slave.cpp:8127] Finished recovery
I1024 04:09:17.972507 18973 task_status_update_manager.cpp:181] Pausing sending task status updates
I1024 04:09:17.972535 18962 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:17.972501 18952 slave.cpp:1351] New master detected at master@172.17.0.2:42005
I1024 04:09:17.972774 18952 slave.cpp:1416] Detecting new master
I1024 04:09:17.978224 18972 slave.cpp:1443] Authenticating with master master@172.17.0.2:42005
I1024 04:09:17.978382 18972 slave.cpp:1452] Using default CRAM-MD5 authenticatee
I1024 04:09:17.978955 18965 authenticatee.cpp:121] Creating new client SASL connection
I1024 04:09:17.979508 18969 master.cpp:10594] Authenticating slave(1246)@172.17.0.2:42005
I1024 04:09:17.979746 18953 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(2106)@172.17.0.2:42005
I1024 04:09:17.980216 18956 authenticator.cpp:98] Creating new server SASL connection
I1024 04:09:17.980650 18961 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
I1024 04:09:17.980705 18961 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
I1024 04:09:17.980957 18961 authenticator.cpp:204] Received SASL authentication start
I1024 04:09:17.981086 18961 authenticator.cpp:326] Authentication requires more steps
I1024 04:09:17.981334 18961 authenticatee.cpp:259] Received SASL authentication step
I1024 04:09:17.981631 18963 authenticator.cpp:232] Received SASL authentication step
I1024 04:09:17.981694 18963 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'af3ba927af2a' server FQDN: 'af3ba927af2a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
I1024 04:09:17.981721 18963 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
I1024 04:09:17.981801 18963 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
I1024 04:09:17.981858 18963 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'af3ba927af2a' server FQDN: 'af3ba927af2a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
I1024 04:09:17.981885 18963 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
I1024 04:09:17.981909 18963 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I1024 04:09:17.981946 18963 authenticator.cpp:318] Authentication success
I1024 04:09:17.982192 18964 authenticatee.cpp:299] Authentication success
I1024 04:09:17.982282 18968 master.cpp:10626] Successfully authenticated principal 'test-principal' at slave(1246)@172.17.0.2:42005
I1024 04:09:17.982312 18954 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(2106)@172.17.0.2:42005
I1024 04:09:17.982848 18970 slave.cpp:1543] Successfully authenticated with master master@172.17.0.2:42005
I1024 04:09:17.983485 18970 slave.cpp:1993] Will retry registration in 15.215071ms if necessary
I1024 04:09:17.983810 18959 master.cpp:7083] Received register agent message from slave(1246)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.984294 18959 master.cpp:4189] Authorizing agent providing resources 'cpus:2; mem:1024; disk:1024; ports:[31000-32000]' with principal 'test-principal'
I1024 04:09:17.985386 18962 master.cpp:7150] Authorized registration of agent at slave(1246)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.985551 18962 master.cpp:7262] Registering agent at slave(1246)@172.17.0.2:42005 (af3ba927af2a) with id 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0
I1024 04:09:17.986657 18967 registrar.cpp:487] Applied 1 operations in 361832ns; attempting to update the registry
I1024 04:09:17.987764 18967 registrar.cpp:544] Successfully updated the registry in 994048ns
I1024 04:09:17.988087 18953 master.cpp:7310] Admitted agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 at slave(1246)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:17.989349 18953 master.cpp:7355] Registered agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 at slave(1246)@172.17.0.2:42005 (af3ba927af2a) with cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I1024 04:09:17.989543 18958 slave.cpp:1576] Registered with master master@172.17.0.2:42005; given agent ID 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0
I1024 04:09:17.989745 18963 task_status_update_manager.cpp:188] Resuming sending task status updates
I1024 04:09:17.989717 18950 hierarchical.cpp:955] Added agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 (af3ba927af2a) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (offered or allocated: {})
I1024 04:09:17.990234 18958 slave.cpp:1611] Checkpointing SlaveInfo to '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/meta/slaves/1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0/slave.info'
I1024 04:09:17.990267 18950 hierarchical.cpp:1843] Performed allocation for 1 agents in 206477ns
I1024 04:09:17.990339 18968 status_update_manager_process.hpp:385] Resuming operation status update manager
I1024 04:09:17.991955 18958 slave.cpp:1663] Forwarding agent update {"operations":{},"resource_providers":{},"resource_version_uuid":{"value":"SJ+wQniBQbanxu6sIQPTLg=="},"slave_id":{"value":"1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0"},"update_oversubscribed_resources":false}
I1024 04:09:17.993094 18966 master.cpp:8474] Ignoring update on agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 at slave(1246)@172.17.0.2:42005 (af3ba927af2a) as it reports no changes
I1024 04:09:17.997215 18967 process.cpp:3671] Handling HTTP event for process 'slave(1246)' with path: '/slave(1246)/api/v1'
I1024 04:09:17.998831 18968 http.cpp:1115] HTTP POST for /slave(1246)/api/v1 from 172.17.0.2:36938
I1024 04:09:17.999531 18968 http.cpp:2146] Processing GET_CONTAINERS call
I1024 04:09:18.007282 18951 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.009789 18956 hierarchical.cpp:1843] Performed allocation for 1 agents in 208664ns
I1024 04:09:18.011082 18953 process.cpp:3671] Handling HTTP event for process 'slave(1246)' with path: '/slave(1246)/api/v1'
I1024 04:09:18.012740 18957 http.cpp:1115] HTTP POST for /slave(1246)/api/v1 from 172.17.0.2:36940
I1024 04:09:18.014060 18957 http.cpp:2606] Processing LAUNCH_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.015558 18970 http.cpp:2710] Creating sandbox '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.016655 18952 containerizer.cpp:1396] Starting container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:18.017642 18952 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from STARTING to PROVISIONING after 427008ns
I1024 04:09:18.018743 18952 containerizer.cpp:1574] Checkpointed ContainerConfig at '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_cSUiuB/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE/config'
I1024 04:09:18.018827 18952 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PROVISIONING to PREPARING after 1.189888ms
I1024 04:09:18.023128 18969 containerizer.cpp:2100] Launching 'mesos-containerizer' with flags '--help="false" --launch_info="{"command":{"arguments":["/mesos/mesos-1.10.0/_build/src/test-csi-plugin","--api_version=v0","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_Mi03an","--available_capacity=0B","--volumes=","--forward=unix:///tmp/ZQHg6K/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/mesos/mesos-1.10.0/_build/src/test-csi-plugin"},"environment":{"variables":[{"name":"MESOS_SANDBOX","type":"VALUE","value":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"},{"name":"CSI_ENDPOINT","type":"VALUE","value":"unix:///tmp/mesos-csi-nliRLG/endpoint.sock"}]},"task_environment":{},"working_directory":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"}" --pipe_read="96" --pipe_write="97" --runtime_directory="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_cSUiuB/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE" --unshare_namespace_mnt="false"'
I1024 04:09:18.040844 18969 launcher.cpp:145] Forked child with pid '786' for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.042178 18969 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PREPARING to ISOLATING after 23.348224ms
I1024 04:09:18.044036 18969 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from ISOLATING to FETCHING after 1.84192ms
I1024 04:09:18.044544 18957 fetcher.cpp:369] Starting to fetch URIs for container: org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE, directory: /tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_6DGFkZ/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:18.046007 18958 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from FETCHING to RUNNING after 1.901056ms
I1024 04:09:18.051121 18959 container_daemon.cpp:140] Invoking post-start hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.051561 18962 service_manager.cpp:703] Connecting to endpoint 'unix:///tmp/mesos-csi-nliRLG/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:18.061393 18967 hierarchical.cpp:1843] Performed allocation for 1 agents in 310560ns
I1024 04:09:18.113020 18968 hierarchical.cpp:1843] Performed allocation for 1 agents in 171038ns
I1024 04:09:18.164542 18970 hierarchical.cpp:1843] Performed allocation for 1 agents in 320572ns
I1024 04:09:18.216125 18962 hierarchical.cpp:1843] Performed allocation for 1 agents in 210311ns
I1024 04:09:18.268100 18953 hierarchical.cpp:1843] Performed allocation for 1 agents in 318313ns
I1024 04:09:18.319370 18957 hierarchical.cpp:1843] Performed allocation for 1 agents in 199011ns
I1024 04:09:18.371202 18964 hierarchical.cpp:1843] Performed allocation for 1 agents in 199980ns
I1024 04:09:18.422751 18951 hierarchical.cpp:1843] Performed allocation for 1 agents in 289396ns
I1024 04:09:18.474273 18968 hierarchical.cpp:1843] Performed allocation for 1 agents in 270585ns
I1024 04:09:18.525945 18973 hierarchical.cpp:1843] Performed allocation for 1 agents in 326239ns
I1024 04:09:18.578022 18972 hierarchical.cpp:1843] Performed allocation for 1 agents in 293196ns
I1024 04:09:18.611189 18951 service_manager.cpp:545] Probing endpoint 'unix:///tmp/mesos-csi-nliRLG/endpoint.sock' with CSI v1
I1024 04:09:18.614604 18950 service_manager.cpp:532] Probing endpoint 'unix:///tmp/mesos-csi-nliRLG/endpoint.sock' with CSI v0
I1024 04:09:18.616012   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/Probe call
I1024 04:09:18.619781 18954 container_daemon.cpp:171] Waiting for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.623502   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/GetPluginCapabilities call
I1024 04:09:18.624570 18966 process.cpp:3671] Handling HTTP event for process 'slave(1246)' with path: '/slave(1246)/api/v1'
I1024 04:09:18.626438 18967 http.cpp:1115] HTTP POST for /slave(1246)/api/v1 from 172.17.0.2:36942
I1024 04:09:18.627208 18967 http.cpp:2824] Processing WAIT_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.629580   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/GetPluginInfo call
I1024 04:09:18.630045 18969 hierarchical.cpp:1843] Performed allocation for 1 agents in 238009ns
I1024 04:09:18.630079   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Identity/GetPluginInfo call
I1024 04:09:18.632308 18958 v0_volume_manager.cpp:628] NODE_SERVICE loaded: {}
I1024 04:09:18.633039 18958 v0_volume_manager.cpp:628] CONTROLLER_SERVICE loaded: {}
I1024 04:09:18.636062   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ControllerGetCapabilities call
I1024 04:09:18.641331   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Node/NodeGetCapabilities call
I1024 04:09:18.646450   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Node/NodeGetId call
I1024 04:09:18.649929 18959 provider.cpp:676] Recovered resources '{}' and 0 operations for resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test'
I1024 04:09:18.650161 18955 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:18.650660 18953 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.655129 18970 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.656173 18952 provider.cpp:476] Connected to resource provider manager
I1024 04:09:18.657073 18951 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.658609 18950 process.cpp:3671] Handling HTTP event for process 'slave(1246)' with path: '/slave(1246)/api/v1/resource_provider'
I1024 04:09:18.661173 18955 http.cpp:1115] HTTP POST for /slave(1246)/api/v1/resource_provider from 172.17.0.2:36946
I1024 04:09:18.662284 18969 manager.cpp:813] Subscribing resource provider {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/mesos/mesos-1.10.0/_build/src/test-csi-plugin","--api_version=v0","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_Mi03an","--available_capacity=0B","--volumes=","--forward=unix:///tmp/ZQHg6K/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/mesos/mesos-1.10.0/_build/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}
I1024 04:09:18.682061 18966 hierarchical.cpp:1843] Performed allocation for 1 agents in 282946ns
I1024 04:09:18.695065 18965 slave.cpp:8483] Handling resource provider message 'SUBSCRIBE: {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"id":{"value":"0f2c4465-2138-4d13-858b-6507714fe566"},"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/mesos/mesos-1.10.0/_build/src/test-csi-plugin","--api_version=v0","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v0_Mi03an","--available_capacity=0B","--volumes=","--forward=unix:///tmp/ZQHg6K/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/mesos/mesos-1.10.0/_build/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}'
I1024 04:09:18.696818 18954 provider.cpp:498] Received SUBSCRIBED event
I1024 04:09:18.696867 18954 provider.cpp:1309] Subscribed with ID 0f2c4465-2138-4d13-858b-6507714fe566
I1024 04:09:18.697921 18955 status_update_manager_process.hpp:314] Recovering operation status update manager
I1024 04:09:18.734112 18959 hierarchical.cpp:1843] Performed allocation for 1 agents in 205614ns
I1024 04:09:18.736759 18969 provider.cpp:790] Reconciling storage pools and volumes
I1024 04:09:18.739578   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ListVolumes call
I1024 04:09:18.743198 18967 provider.cpp:2217] Sending UPDATE_STATE call with resources '{}' and 0 operations to agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0
I1024 04:09:18.743571 18963 http_connection.hpp:131] Sending UPDATE_STATE call to http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.743662 18967 provider.cpp:748] Resource provider 0f2c4465-2138-4d13-858b-6507714fe566 is in READY state
I1024 04:09:18.743734 18968 status_update_manager_process.hpp:385] Resuming operation status update manager
I1024 04:09:18.744637 18959 provider.cpp:1235] Updating profiles { test } for resource provider 0f2c4465-2138-4d13-858b-6507714fe566
I1024 04:09:18.745031 18969 process.cpp:3671] Handling HTTP event for process 'slave(1246)' with path: '/slave(1246)/api/v1/resource_provider'
I1024 04:09:18.746387 18952 provider.cpp:790] Reconciling storage pools and volumes
I1024 04:09:18.747460 18954 http.cpp:1115] HTTP POST for /slave(1246)/api/v1/resource_provider from 172.17.0.2:36944
I1024 04:09:18.748245 18950 manager.cpp:1045] Received UPDATE_STATE call with resources '[]' and 0 operations from resource provider 0f2c4465-2138-4d13-858b-6507714fe566
I1024 04:09:18.748502 18968 slave.cpp:8483] Handling resource provider message 'UPDATE_STATE: 0f2c4465-2138-4d13-858b-6507714fe566 {}'
I1024 04:09:18.748622 18968 slave.cpp:8603] Forwarding new total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I1024 04:09:18.749658   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ListVolumes call
I1024 04:09:18.750255   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/GetCapacity call
I1024 04:09:18.750278 18972 hierarchical.cpp:1100] Grew agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 by {} (total), {  } (used)
I1024 04:09:18.750697 18972 hierarchical.cpp:1057] Agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 (af3ba927af2a) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I1024 04:09:18.757864 18968 hierarchical.cpp:1843] Performed allocation for 1 agents in 139724ns
I1024 04:09:18.758261 18972 provider.cpp:790] Reconciling storage pools and volumes
I1024 04:09:18.760386   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/ListVolumes call
I1024 04:09:18.760792   790 test_csi_plugin.cpp:1915] Forwarding /csi.v0.Controller/GetCapacity call
I1024 04:09:18.771663 18949 slave.cpp:924] Agent terminating
I1024 04:09:18.772692 18949 manager.cpp:163] Terminating resource provider 0f2c4465-2138-4d13-858b-6507714fe566
I1024 04:09:18.773316 18954 master.cpp:1296] Agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 at slave(1246)@172.17.0.2:42005 (af3ba927af2a) disconnected
I1024 04:09:18.773408 18954 master.cpp:3391] Disconnecting agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 at slave(1246)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:18.773584 18954 master.cpp:3410] Deactivating agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 at slave(1246)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:18.773826 18960 hierarchical.cpp:1146] Agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0 deactivated
E1024 04:09:18.774129 18959 http_connection.hpp:452] End-Of-File received
I1024 04:09:18.774888 18959 http_connection.hpp:217] Re-detecting endpoint
I1024 04:09:18.775560 18959 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I1024 04:09:18.775635 18959 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I1024 04:09:18.775722 18959 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.775791 18953 provider.cpp:488] Disconnected from resource provider manager
I1024 04:09:18.776021 18970 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:18.777995 18965 containerizer.cpp:2620] Destroying container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE in RUNNING state
I1024 04:09:18.778080 18965 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from RUNNING to DESTROYING after 15.73216384secs
I1024 04:09:18.778720 18965 launcher.cpp:161] Asked to destroy container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:18.780517 18951 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.781504 18971 provider.cpp:476] Connected to resource provider manager
I1024 04:09:18.782344 18968 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:42005/slave(1246)/api/v1/resource_provider
I1024 04:09:18.783339 18974 process.cpp:2781] Returning '404 Not Found' for '/slave(1246)/api/v1/resource_provider'
E1024 04:09:18.784734 18963 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Received '404 Not Found' ()
I1024 04:09:18.803916 18964 hierarchical.cpp:1843] Performed allocation for 1 agents in 207875ns
I1024 04:09:18.853718 18952 containerizer.cpp:3156] Container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE has exited
I1024 04:09:18.855743 18969 hierarchical.cpp:1843] Performed allocation for 1 agents in 158158ns
I1024 04:09:18.856235 18973 provisioner.cpp:652] Ignoring destroy request for unknown container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:18.859939 18956 container_daemon.cpp:189] Invoking post-stop hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.860271 18950 service_manager.cpp:723] Disconnected from endpoint 'unix:///tmp/mesos-csi-nliRLG/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:18.860630 18957 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:18.864760 18974 process.cpp:2781] Returning '404 Not Found' for '/slave(1246)/api/v1'
I1024 04:09:18.877449 18949 master.cpp:1137] Master terminating
I1024 04:09:18.878165 18960 hierarchical.cpp:1122] Removed all filters for agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0
I1024 04:09:18.878206 18960 hierarchical.cpp:998] Removed agent 1f6d5661-2513-4c8b-a928-a37b9ccf4556-S0
[       OK ] CSIVersion/StorageLocalResourceProviderTest.Update/v0 (986 ms)
[ RUN      ] CSIVersion/StorageLocalResourceProviderTest.Update/v1
I1024 04:09:18.900259 18949 cluster.cpp:177] Creating default 'local' authorizer
I1024 04:09:18.908680 18969 master.cpp:440] Master 9d678145-1151-4f89-90ca-8c448e3896d2 (af3ba927af2a) started on 172.17.0.2:42005
I1024 04:09:18.908723 18969 master.cpp:443] Flags at startup: --acls="" --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="50ms" --allocator="hierarchical" --authenticate_agents="true" --authenticate_frameworks="true" --authenticate_http_frameworks="true" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authentication_v0_timeout="15secs" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/saknqs/credentials" --filter_gpu_resources="true" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_operator_event_stream_subscribers="1000" --max_unreachable_tasks_per_framework="1000" --memory_profiling="false" --min_allocatable_resources="cpus:0.01|mem:32" --port="5050" --publish_per_framework_metrics="true" --quiet="false" --recovery_agent_removal_limit="100%" --registry="in_memory" --registry_fetch_timeout="1mins" --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400" --registry_store_timeout="100secs" --registry_strict="false" --require_agent_domain="false" --role_sorter="drf" --root_submissions="true" --version="false" --webui_dir="/mesos/mesos-1.10.0/_inst/share/mesos/webui" --work_dir="/tmp/saknqs/master" --zk_session_timeout="10secs"
I1024 04:09:18.909279 18969 master.cpp:492] Master only allowing authenticated frameworks to register
I1024 04:09:18.909301 18969 master.cpp:498] Master only allowing authenticated agents to register
I1024 04:09:18.909315 18969 master.cpp:504] Master only allowing authenticated HTTP frameworks to register
I1024 04:09:18.909330 18969 credentials.hpp:37] Loading credentials for authentication from '/tmp/saknqs/credentials'
I1024 04:09:18.909788 18969 master.cpp:548] Using default 'crammd5' authenticator
I1024 04:09:18.910101 18969 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
I1024 04:09:18.910437 18969 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
I1024 04:09:18.910710 18969 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
I1024 04:09:18.910975 18969 master.cpp:629] Authorization enabled
I1024 04:09:18.911653 18963 hierarchical.cpp:567] Initialized hierarchical allocator process
I1024 04:09:18.911746 18962 whitelist_watcher.cpp:77] No whitelist given
I1024 04:09:18.916569 18957 master.cpp:2169] Elected as the leading master!
I1024 04:09:18.916636 18957 master.cpp:1665] Recovering from registrar
I1024 04:09:18.916934 18951 registrar.cpp:339] Recovering registrar
I1024 04:09:18.918339 18951 registrar.cpp:383] Successfully fetched the registry (0B) in 0ns
I1024 04:09:18.918560 18951 registrar.cpp:487] Applied 1 operations in 63536ns; attempting to update the registry
I1024 04:09:18.919669 18951 registrar.cpp:544] Successfully updated the registry in 0ns
I1024 04:09:18.919911 18951 registrar.cpp:416] Successfully recovered registrar
I1024 04:09:18.920660 18970 master.cpp:1818] Recovered 0 agents from the registry (144B); allowing 10mins for agents to reregister
I1024 04:09:18.920730 18968 hierarchical.cpp:606] Skipping recovery of hierarchical allocator: nothing to recover
W1024 04:09:18.930626 18949 process.cpp:2877] Attempted to spawn already running process files@172.17.0.2:42005
I1024 04:09:18.932947 18949 containerizer.cpp:318] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
W1024 04:09:18.933936 18949 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges
W1024 04:09:18.933993 18949 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges
W1024 04:09:18.934271 18949 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges
I1024 04:09:18.934350 18949 provisioner.cpp:294] Using default backend 'copy'
I1024 04:09:18.938491 18949 cluster.cpp:524] Creating default 'local' authorizer
I1024 04:09:18.942694 18955 slave.cpp:267] Mesos agent started on (1247)@172.17.0.2:42005
I1024 04:09:18.942750 18955 slave.cpp:268] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/saknqs/zExt13/store/appc" --authenticate_http_readonly="true" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authentication_timeout_max="1mins" --authentication_timeout_min="5secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_destroy_timeout="1mins" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/saknqs/zExt13/credential" --default_role="*" --disallow_sharing_agent_ipc_namespace="false" --disallow_sharing_agent_pid_namespace="false" --disk_profile_adaptor="org_apache_mesos_UriDiskProfileAdaptor" --disk_watch_interval="1mins" --docker="docker" --docker_ignore_runtime="false" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/saknqs/zExt13/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --docker_volume_chown="false" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/saknqs/zExt13/fetch" --fetcher_cache_size="2GB" --fetcher_stall_timeout="1mins" --frameworks_home="/tmp/saknqs/zExt13/frameworks" --gc_delay="1weeks" --gc_disk_headroom="0.1" --gc_non_executor_container_sandboxes="false" --help="false" --hostname_lookup="true" --http_command_executor="false" --http_credentials="/tmp/saknqs/zExt13/http_credentials" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/mesos/mesos-1.10.0/_build/src" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --memory_profiling="false" --network_cni_metrics="true" --network_cni_root_dir_persist="false" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resource_provider_config_dir="/tmp/saknqs/resource_provider_configs" --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --runtime_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6b3wBp" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL" --zk_session_timeout="10secs"
I1024 04:09:18.943948 18955 credentials.hpp:86] Loading credential for authentication from '/tmp/saknqs/zExt13/credential'
I1024 04:09:18.944211 18955 slave.cpp:300] Agent using credential for: test-principal
I1024 04:09:18.944254 18955 credentials.hpp:37] Loading credentials for authentication from '/tmp/saknqs/zExt13/http_credentials'
I1024 04:09:18.944607 18955 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly'
I1024 04:09:18.945199 18955 disk_profile_adaptor.cpp:82] Creating disk profile adaptor module 'org_apache_mesos_UriDiskProfileAdaptor'
I1024 04:09:18.945479 18962 hierarchical.cpp:1843] Performed allocation for 0 agents in 141872ns
I1024 04:09:18.947857 18961 uri_disk_profile_adaptor.cpp:305] Updated disk profile mapping to 1 active profiles
I1024 04:09:18.947938 18955 slave.cpp:615] Agent resources: [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
I1024 04:09:18.948323 18955 slave.cpp:623] Agent attributes: [  ]
I1024 04:09:18.948359 18955 slave.cpp:632] Agent hostname: af3ba927af2a
I1024 04:09:18.948632 18966 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:18.948673 18972 task_status_update_manager.cpp:181] Pausing sending task status updates
I1024 04:09:18.951310 18960 state.cpp:67] Recovering state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/meta'
I1024 04:09:18.951711 18973 slave.cpp:7492] Finished recovering checkpointed state from '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/meta', beginning agent recovery
I1024 04:09:18.952664 18959 task_status_update_manager.cpp:207] Recovering task status update manager
I1024 04:09:18.953481 18953 containerizer.cpp:821] Recovering Mesos containers
I1024 04:09:18.954198 18953 containerizer.cpp:1161] Recovering isolators
I1024 04:09:18.955638 18966 containerizer.cpp:1200] Recovering provisioner
I1024 04:09:18.956945 18956 provisioner.cpp:518] Provisioner recovery complete
I1024 04:09:18.958309 18954 composing.cpp:339] Finished recovering all containerizers
I1024 04:09:18.958854 18970 slave.cpp:7974] Recovering executors
I1024 04:09:18.959048 18970 slave.cpp:8127] Finished recovery
I1024 04:09:18.960331 18963 task_status_update_manager.cpp:181] Pausing sending task status updates
I1024 04:09:18.960407 18958 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:18.960412 18969 slave.cpp:1351] New master detected at master@172.17.0.2:42005
I1024 04:09:18.960629 18969 slave.cpp:1416] Detecting new master
I1024 04:09:18.968104 18962 slave.cpp:1443] Authenticating with master master@172.17.0.2:42005
I1024 04:09:18.968281 18962 slave.cpp:1452] Using default CRAM-MD5 authenticatee
I1024 04:09:18.968781 18953 authenticatee.cpp:121] Creating new client SASL connection
I1024 04:09:18.969265 18966 master.cpp:10594] Authenticating slave(1247)@172.17.0.2:42005
I1024 04:09:18.969530 18972 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(2107)@172.17.0.2:42005
I1024 04:09:18.970007 18957 authenticator.cpp:98] Creating new server SASL connection
I1024 04:09:18.970361 18955 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
I1024 04:09:18.970410 18955 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
I1024 04:09:18.970590 18955 authenticator.cpp:204] Received SASL authentication start
I1024 04:09:18.970693 18955 authenticator.cpp:326] Authentication requires more steps
I1024 04:09:18.970890 18960 authenticatee.cpp:259] Received SASL authentication step
I1024 04:09:18.971094 18960 authenticator.cpp:232] Received SASL authentication step
I1024 04:09:18.971145 18960 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'af3ba927af2a' server FQDN: 'af3ba927af2a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
I1024 04:09:18.971168 18960 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
I1024 04:09:18.971256 18960 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
I1024 04:09:18.971323 18960 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'af3ba927af2a' server FQDN: 'af3ba927af2a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
I1024 04:09:18.971364 18960 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
I1024 04:09:18.971396 18960 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I1024 04:09:18.971439 18960 authenticator.cpp:318] Authentication success
I1024 04:09:18.971648 18973 authenticatee.cpp:299] Authentication success
I1024 04:09:18.971823 18951 master.cpp:10626] Successfully authenticated principal 'test-principal' at slave(1247)@172.17.0.2:42005
I1024 04:09:18.971882 18959 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(2107)@172.17.0.2:42005
I1024 04:09:18.972412 18964 slave.cpp:1543] Successfully authenticated with master master@172.17.0.2:42005
I1024 04:09:18.973039 18964 slave.cpp:1993] Will retry registration in 1.067668ms if necessary
I1024 04:09:18.973304 18968 master.cpp:7083] Received register agent message from slave(1247)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:18.973839 18968 master.cpp:4189] Authorizing agent providing resources 'cpus:2; mem:1024; disk:1024; ports:[31000-32000]' with principal 'test-principal'
I1024 04:09:18.975018 18967 master.cpp:7150] Authorized registration of agent at slave(1247)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:18.975069 18958 slave.cpp:1993] Will retry registration in 25.145046ms if necessary
I1024 04:09:18.975178 18967 master.cpp:7262] Registering agent at slave(1247)@172.17.0.2:42005 (af3ba927af2a) with id 9d678145-1151-4f89-90ca-8c448e3896d2-S0
I1024 04:09:18.975736 18967 master.cpp:7076] Ignoring register agent message from slave(1247)@172.17.0.2:42005 (af3ba927af2a) as registration is already in progress
I1024 04:09:18.976236 18962 registrar.cpp:487] Applied 1 operations in 364132ns; attempting to update the registry
I1024 04:09:18.977402 18962 registrar.cpp:544] Successfully updated the registry in 1.03296ms
I1024 04:09:18.977720 18956 master.cpp:7310] Admitted agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 at slave(1247)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:18.978988 18956 master.cpp:7355] Registered agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 at slave(1247)@172.17.0.2:42005 (af3ba927af2a) with cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I1024 04:09:18.979122 18952 slave.cpp:1576] Registered with master master@172.17.0.2:42005; given agent ID 9d678145-1151-4f89-90ca-8c448e3896d2-S0
I1024 04:09:18.979177 18960 hierarchical.cpp:955] Added agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 (af3ba927af2a) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (offered or allocated: {})
I1024 04:09:18.979362 18970 task_status_update_manager.cpp:188] Resuming sending task status updates
I1024 04:09:18.979823 18960 hierarchical.cpp:1843] Performed allocation for 1 agents in 243543ns
I1024 04:09:18.979972 18952 slave.cpp:1611] Checkpointing SlaveInfo to '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/meta/slaves/9d678145-1151-4f89-90ca-8c448e3896d2-S0/slave.info'
I1024 04:09:18.980082 18954 status_update_manager_process.hpp:385] Resuming operation status update manager
I1024 04:09:18.982084 18952 slave.cpp:1663] Forwarding agent update {"operations":{},"resource_providers":{},"resource_version_uuid":{"value":"5Hby7i5XTXKBNTkTPaHpJg=="},"slave_id":{"value":"9d678145-1151-4f89-90ca-8c448e3896d2-S0"},"update_oversubscribed_resources":false}
I1024 04:09:18.982970 18964 master.cpp:8474] Ignoring update on agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 at slave(1247)@172.17.0.2:42005 (af3ba927af2a) as it reports no changes
I1024 04:09:18.988222 18957 process.cpp:3671] Handling HTTP event for process 'slave(1247)' with path: '/slave(1247)/api/v1'
I1024 04:09:18.990188 18960 http.cpp:1115] HTTP POST for /slave(1247)/api/v1 from 172.17.0.2:36954
I1024 04:09:18.990974 18960 http.cpp:2146] Processing GET_CONTAINERS call
I1024 04:09:18.997052 18953 hierarchical.cpp:1843] Performed allocation for 1 agents in 196701ns
I1024 04:09:18.998772 18962 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.002461 18956 process.cpp:3671] Handling HTTP event for process 'slave(1247)' with path: '/slave(1247)/api/v1'
I1024 04:09:19.004098 18973 http.cpp:1115] HTTP POST for /slave(1247)/api/v1 from 172.17.0.2:36956
I1024 04:09:19.005520 18973 http.cpp:2606] Processing LAUNCH_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.006927 18968 http.cpp:2710] Creating sandbox '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.008074 18950 containerizer.cpp:1396] Starting container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:19.009107 18950 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from STARTING to PROVISIONING after 375040ns
I1024 04:09:19.010351 18950 containerizer.cpp:1574] Checkpointed ContainerConfig at '/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6b3wBp/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE/config'
I1024 04:09:19.010468 18950 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PROVISIONING to PREPARING after 1.349888ms
I1024 04:09:19.015172 18972 containerizer.cpp:2100] Launching 'mesos-containerizer' with flags '--help="false" --launch_info="{"command":{"arguments":["/mesos/mesos-1.10.0/_build/src/test-csi-plugin","--api_version=v1","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_nxDHL6","--available_capacity=0B","--volumes=","--forward=unix:///tmp/saknqs/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/mesos/mesos-1.10.0/_build/src/test-csi-plugin"},"environment":{"variables":[{"name":"MESOS_SANDBOX","type":"VALUE","value":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"},{"name":"CSI_ENDPOINT","type":"VALUE","value":"unix:///tmp/mesos-csi-fnTgeG/endpoint.sock"}]},"task_environment":{},"working_directory":"/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE"}" --pipe_read="96" --pipe_write="97" --runtime_directory="/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6b3wBp/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE" --unshare_namespace_mnt="false"'
I1024 04:09:19.032505 18972 launcher.cpp:145] Forked child with pid '799' for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.033958 18972 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from PREPARING to ISOLATING after 23.496192ms
I1024 04:09:19.036012 18972 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from ISOLATING to FETCHING after 2.055936ms
I1024 04:09:19.036448 18959 fetcher.cpp:369] Starting to fetch URIs for container: org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE, directory: /tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_6JwBbL/containers/org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:19.038462 18971 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from FETCHING to RUNNING after 2240us
I1024 04:09:19.043032 18966 container_daemon.cpp:140] Invoking post-start hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.043357 18967 service_manager.cpp:703] Connecting to endpoint 'unix:///tmp/mesos-csi-fnTgeG/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:19.048612 18965 hierarchical.cpp:1843] Performed allocation for 1 agents in 301616ns
I1024 04:09:19.100785 18954 hierarchical.cpp:1843] Performed allocation for 1 agents in 306302ns
I1024 04:09:19.151988 18971 hierarchical.cpp:1843] Performed allocation for 1 agents in 267999ns
I1024 04:09:19.203825 18958 hierarchical.cpp:1843] Performed allocation for 1 agents in 203594ns
I1024 04:09:19.255259 18970 hierarchical.cpp:1843] Performed allocation for 1 agents in 201158ns
I1024 04:09:19.307304 18959 hierarchical.cpp:1843] Performed allocation for 1 agents in 304283ns
I1024 04:09:19.358796 18961 hierarchical.cpp:1843] Performed allocation for 1 agents in 211371ns
I1024 04:09:19.410020 18962 hierarchical.cpp:1843] Performed allocation for 1 agents in 286799ns
I1024 04:09:19.462121 18956 hierarchical.cpp:1843] Performed allocation for 1 agents in 311699ns
I1024 04:09:19.513550 18951 hierarchical.cpp:1843] Performed allocation for 1 agents in 250104ns
I1024 04:09:19.565554 18953 hierarchical.cpp:1843] Performed allocation for 1 agents in 262793ns
I1024 04:09:19.603178 18962 service_manager.cpp:545] Probing endpoint 'unix:///tmp/mesos-csi-fnTgeG/endpoint.sock' with CSI v1
I1024 04:09:19.605777   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/Probe call
I1024 04:09:19.610059 18970 container_daemon.cpp:171] Waiting for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.614065   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/GetPluginCapabilities call
I1024 04:09:19.615272 18968 process.cpp:3671] Handling HTTP event for process 'slave(1247)' with path: '/slave(1247)/api/v1'
I1024 04:09:19.617228 18966 http.cpp:1115] HTTP POST for /slave(1247)/api/v1 from 172.17.0.2:36958
I1024 04:09:19.617605 18958 hierarchical.cpp:1843] Performed allocation for 1 agents in 252921ns
I1024 04:09:19.617908 18966 http.cpp:2824] Processing WAIT_CONTAINER call for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.620259   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/GetPluginInfo call
I1024 04:09:19.620736   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Identity/GetPluginInfo call
I1024 04:09:19.622721 18972 v1_volume_manager.cpp:649] NODE_SERVICE loaded: {}
I1024 04:09:19.623303 18972 v1_volume_manager.cpp:649] CONTROLLER_SERVICE loaded: {}
I1024 04:09:19.626118   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ControllerGetCapabilities call
I1024 04:09:19.631070   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Node/NodeGetCapabilities call
I1024 04:09:19.635505   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Node/NodeGetInfo call
I1024 04:09:19.638748 18969 provider.cpp:676] Recovered resources '{}' and 0 operations for resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test'
I1024 04:09:19.638990 18952 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:19.639485 18973 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.643867 18971 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.644877 18950 provider.cpp:476] Connected to resource provider manager
I1024 04:09:19.645927 18958 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.647524 18955 process.cpp:3671] Handling HTTP event for process 'slave(1247)' with path: '/slave(1247)/api/v1/resource_provider'
I1024 04:09:19.649863 18954 http.cpp:1115] HTTP POST for /slave(1247)/api/v1/resource_provider from 172.17.0.2:36962
I1024 04:09:19.650789 18969 manager.cpp:813] Subscribing resource provider {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/mesos/mesos-1.10.0/_build/src/test-csi-plugin","--api_version=v1","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_nxDHL6","--available_capacity=0B","--volumes=","--forward=unix:///tmp/saknqs/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/mesos/mesos-1.10.0/_build/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}
I1024 04:09:19.669553 18961 hierarchical.cpp:1843] Performed allocation for 1 agents in 269407ns
I1024 04:09:19.690212 18967 slave.cpp:8483] Handling resource provider message 'SUBSCRIBE: {"default_reservations":[{"role":"storage","type":"DYNAMIC"}],"id":{"value":"aa7510c9-cb26-4faf-be4a-c84a73cd248a"},"name":"test","storage":{"plugin":{"containers":[{"command":{"arguments":["/mesos/mesos-1.10.0/_build/src/test-csi-plugin","--api_version=v1","--work_dir=/tmp/CSIVersion_StorageLocalResourceProviderTest_Update_v1_nxDHL6","--available_capacity=0B","--volumes=","--forward=unix:///tmp/saknqs/mock_csi.sock","--create_parameters=","--volume_metadata="],"shell":false,"value":"/mesos/mesos-1.10.0/_build/src/test-csi-plugin"},"resources":[{"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"}],"services":["CONTROLLER_SERVICE","NODE_SERVICE"]}],"name":"local","type":"org.apache.mesos.csi.test"},"reconciliation_interval_seconds":15.0},"type":"org.apache.mesos.rp.local.storage"}'
I1024 04:09:19.692260 18956 provider.cpp:498] Received SUBSCRIBED event
I1024 04:09:19.692312 18956 provider.cpp:1309] Subscribed with ID aa7510c9-cb26-4faf-be4a-c84a73cd248a
I1024 04:09:19.693233 18960 status_update_manager_process.hpp:314] Recovering operation status update manager
I1024 04:09:19.721439 18952 hierarchical.cpp:1843] Performed allocation for 1 agents in 254143ns
I1024 04:09:19.731765 18962 provider.cpp:790] Reconciling storage pools and volumes
I1024 04:09:19.734839   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ListVolumes call
I1024 04:09:19.738620 18957 provider.cpp:2217] Sending UPDATE_STATE call with resources '{}' and 0 operations to agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0
I1024 04:09:19.739044 18955 http_connection.hpp:131] Sending UPDATE_STATE call to http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.739130 18957 provider.cpp:748] Resource provider aa7510c9-cb26-4faf-be4a-c84a73cd248a is in READY state
I1024 04:09:19.739209 18970 status_update_manager_process.hpp:385] Resuming operation status update manager
I1024 04:09:19.740172 18952 provider.cpp:1235] Updating profiles { test } for resource provider aa7510c9-cb26-4faf-be4a-c84a73cd248a
I1024 04:09:19.740504 18969 process.cpp:3671] Handling HTTP event for process 'slave(1247)' with path: '/slave(1247)/api/v1/resource_provider'
I1024 04:09:19.741966 18959 provider.cpp:790] Reconciling storage pools and volumes
I1024 04:09:19.742810 18958 http.cpp:1115] HTTP POST for /slave(1247)/api/v1/resource_provider from 172.17.0.2:36960
I1024 04:09:19.743593 18966 manager.cpp:1045] Received UPDATE_STATE call with resources '[]' and 0 operations from resource provider aa7510c9-cb26-4faf-be4a-c84a73cd248a
I1024 04:09:19.743909 18957 slave.cpp:8483] Handling resource provider message 'UPDATE_STATE: aa7510c9-cb26-4faf-be4a-c84a73cd248a {}'
I1024 04:09:19.744071 18957 slave.cpp:8603] Forwarding new total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I1024 04:09:19.745178   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ListVolumes call
I1024 04:09:19.746052   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/GetCapacity call
I1024 04:09:19.746949 18962 hierarchical.cpp:1100] Grew agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 by {} (total), {  } (used)
I1024 04:09:19.747550 18962 hierarchical.cpp:1057] Agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 (af3ba927af2a) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]
I1024 04:09:19.754691 18960 hierarchical.cpp:1843] Performed allocation for 1 agents in 116611ns
I1024 04:09:19.755285 18953 provider.cpp:790] Reconciling storage pools and volumes
I1024 04:09:19.758219   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/ListVolumes call
I1024 04:09:19.758780   803 test_csi_plugin.cpp:1915] Forwarding /csi.v1.Controller/GetCapacity call
I1024 04:09:19.769263 18973 slave.cpp:924] Agent terminating
I1024 04:09:19.770362 18973 manager.cpp:163] Terminating resource provider aa7510c9-cb26-4faf-be4a-c84a73cd248a
I1024 04:09:19.770977 18959 master.cpp:1296] Agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 at slave(1247)@172.17.0.2:42005 (af3ba927af2a) disconnected
I1024 04:09:19.771039 18959 master.cpp:3391] Disconnecting agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 at slave(1247)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:19.771168 18959 master.cpp:3410] Deactivating agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 at slave(1247)@172.17.0.2:42005 (af3ba927af2a)
I1024 04:09:19.771370 18966 hierarchical.cpp:1146] Agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0 deactivated
E1024 04:09:19.771724 18957 http_connection.hpp:452] End-Of-File received
I1024 04:09:19.772398 18957 http_connection.hpp:217] Re-detecting endpoint
I1024 04:09:19.773092 18957 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I1024 04:09:19.773202 18957 http_connection.hpp:338] Ignoring disconnection attempt from stale connection
I1024 04:09:19.773344 18963 provider.cpp:488] Disconnected from resource provider manager
I1024 04:09:19.773416 18957 http_connection.hpp:227] New endpoint detected at http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.773625 18964 status_update_manager_process.hpp:379] Pausing operation status update manager
I1024 04:09:19.775729 18970 containerizer.cpp:2620] Destroying container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE in RUNNING state
I1024 04:09:19.775821 18970 containerizer.cpp:3318] Transitioning the state of container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE from RUNNING to DESTROYING after 15.737575936secs
I1024 04:09:19.776482 18970 launcher.cpp:161] Asked to destroy container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:19.778416 18950 http_connection.hpp:283] Connected with the remote endpoint at http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.779302 18960 provider.cpp:476] Connected to resource provider manager
I1024 04:09:19.780176 18968 http_connection.hpp:131] Sending SUBSCRIBE call to http://172.17.0.2:42005/slave(1247)/api/v1/resource_provider
I1024 04:09:19.781008 18974 process.cpp:2781] Returning '404 Not Found' for '/slave(1247)/api/v1/resource_provider'
E1024 04:09:19.782124 18967 provider.cpp:721] Failed to subscribe resource provider with type 'org.apache.mesos.rp.local.storage' and name 'test': Received '404 Not Found' ()
I1024 04:09:19.800313 18971 hierarchical.cpp:1843] Performed allocation for 1 agents in 228714ns
I1024 04:09:19.850057 18952 containerizer.cpp:3156] Container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE has exited
I1024 04:09:19.852030 18966 hierarchical.cpp:1843] Performed allocation for 1 agents in 172838ns
I1024 04:09:19.852353 18954 provisioner.cpp:652] Ignoring destroy request for unknown container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:19.856062 18955 container_daemon.cpp:189] Invoking post-stop hook for container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.856354 18973 service_manager.cpp:723] Disconnected from endpoint 'unix:///tmp/mesos-csi-fnTgeG/endpoint.sock' of CSI plugin container org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE
I1024 04:09:19.856765 18959 container_daemon.cpp:121] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-local--CONTROLLER_SERVICE-NODE_SERVICE'
I1024 04:09:19.860632 18974 process.cpp:2781] Returning '404 Not Found' for '/slave(1247)/api/v1'
I1024 04:09:19.875108 18949 master.cpp:1137] Master terminating
I1024 04:09:19.875887 18963 hierarchical.cpp:1122] Removed all filters for agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0
I1024 04:09:19.875931 18963 hierarchical.cpp:998] Removed agent 9d678145-1151-4f89-90ca-8c448e3896d2-S0
[       OK ] CSIVersion/StorageLocalResourceProviderTest.Update/v1 (992 ms)
[----------] 54 tests from CSIVersion/StorageLocalResourceProviderTest (109190 ms total)

[----------] Global test environment tear-down
[==========] 2319 tests from 222 test cases ran. (1223888 ms total)
[  PASSED  ] 2318 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] RoleTest.RolesEndpointContainsConsumedQuota

 1 FAILED TEST
  YOU HAVE 34 DISABLED TESTS

I1024 04:09:20.040026 18974 process.cpp:935] Stopped the socket accept loop
make[4]: *** [check-local] Error 1
make[4]: Leaving directory `/mesos/mesos-1.10.0/_build/src'
make[3]: *** [check-am] Error 2
make[3]: Leaving directory `/mesos/mesos-1.10.0/_build/src'
make[2]: *** [check] Error 2
make[2]: Leaving directory `/mesos/mesos-1.10.0/_build/src'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/mesos/mesos-1.10.0/_build'
make: *** [distcheck] Error 1
+ docker rmi --force mesos-1571880616-26467
Untagged: mesos-1571880616-26467:latest

Full log: https://builds.apache.org/job/Mesos-Reviewbot-Linux/4509/console

- Mesos Reviewbot


On Oct. 24, 2019, 1:06 a.m., Joseph Wu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71666/
> -----------------------------------------------------------
> 
> (Updated Oct. 24, 2019, 1:06 a.m.)
> 
> 
> Review request for mesos, Benno Evers, Benjamin Mahler, Greg Mann, and Till Toenshoff.
> 
> 
> Bugs: MESOS-10010
>     https://issues.apache.org/jira/browse/MESOS-10010
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This completes a fully functional client-side SSL socket.
> 
> Needs a bit of cleanup and more error handling though.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/ssl/socket_wrapper.hpp PRE-CREATION 
>   3rdparty/libprocess/src/ssl/socket_wrapper.cpp PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/71666/diff/1/
> 
> 
> Testing
> -------
> 
> Successfully fetched from a webpage:
> ```
>   http::URL url = http::URL(
>      "https",
>      "www.google.com",
>      443);
> 
>   Future<http::Response> response = http::get(url);
>   AWAIT_READY(response);
>   EXPECT_EQ(http::Status::OK, response->code);
> ```
> 
> 
> Thanks,
> 
> Joseph Wu
> 
>