Posted to issues@mesos.apache.org by "Joseph Wu (JIRA)" <ji...@apache.org> on 2016/02/01 19:41:40 UTC

[jira] [Issue Comment Deleted] (MESOS-4554) Investigate test suite crashes after ZK socket disconnections.

     [ https://issues.apache.org/jira/browse/MESOS-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Wu updated MESOS-4554:
-----------------------------
    Comment: was deleted

(was: A similar segfault appeared again, this time during {{MasterMaintenanceTest.InverseOffersFilters}} (which appeared to be near successful completion).

Separated out the ZooKeeper segfault:
{code}
*** Aborted at 1454095687 (unix time) try "date -d @1454095687" if you are using GNU date ***
PC: @     0x2b2faec908dd (unknown)
*** SIGSEGV (@0x51) received by PID 30503 (TID 0x2b307fe5f700) from PID 81; stack trace: ***
    @     0x2b2fbb1e50b7 os::Linux::chained_handler()
    @     0x2b2fbb1e9219 JVM_handle_linux_signal
    @     0x2b2faea47340 (unknown)
    @     0x2b2faec908dd (unknown)
    @     0x2b2faec86681 (unknown)
    @     0x2b2faecdd98e (unknown)
    @     0x2b2faecdd8cf (unknown)
    @     0x2b2facc1b0d0 handle_socket_error_msg
    @     0x2b2facc1bf6b zookeeper_process
    @     0x2b2facc2209b do_io
    @     0x2b2faea3f182 start_thread
    @     0x2b2faed4f47d (unknown)
make[4]: *** [check-local] Segmentation fault
make[4]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
make[3]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
make[3]: *** [check-am] Error 2
make[2]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
make[2]: *** [check] Error 2
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/mesos/mesos-0.28.0/_build'
make: *** [distcheck] Error 1
{code})
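As the crash banner itself suggests, the abort timestamp can be decoded with GNU date to correlate the segfault with the CI build logs; a minimal sketch (the output format string is just an illustrative choice):

{code}
# Decode the unix timestamp printed by the crash handler (requires GNU date).
date -u -d @1454095687 '+%Y-%m-%d %H:%M:%S UTC'
# 2016-01-29 19:28:07 UTC
{code}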

> Investigate test suite crashes after ZK socket disconnections.
> --------------------------------------------------------------
>
>                 Key: MESOS-4554
>                 URL: https://issues.apache.org/jira/browse/MESOS-4554
>             Project: Mesos
>          Issue Type: Bug
>            Reporter: Anand Mazumdar
>              Labels: flaky-test, mesosphere
>
> Showed up on ASF CI:
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu:14.04,label_exp=docker%7C%7CHadoop/1579/console
> The test crashed with the following logs:
> {code}
> [ RUN      ] ContentType/ExecutorHttpApiTest.DefaultAccept/1
> I0129 02:00:35.137161 31926 leveldb.cpp:174] Opened db in 118.902333ms
> I0129 02:00:35.187021 31926 leveldb.cpp:181] Compacted db in 49.836241ms
> I0129 02:00:35.187088 31926 leveldb.cpp:196] Created db iterator in 33825ns
> I0129 02:00:35.187109 31926 leveldb.cpp:202] Seeked to beginning of db in 7965ns
> I0129 02:00:35.187121 31926 leveldb.cpp:271] Iterated through 0 keys in the db in 6350ns
> I0129 02:00:35.187165 31926 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
> I0129 02:00:35.188433 31950 recover.cpp:447] Starting replica recovery
> I0129 02:00:35.188796 31950 recover.cpp:473] Replica is in EMPTY status
> I0129 02:00:35.190021 31949 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (11817)@172.17.0.3:60904
> I0129 02:00:35.190569 31958 recover.cpp:193] Received a recover response from a replica in EMPTY status
> I0129 02:00:35.190994 31959 recover.cpp:564] Updating replica status to STARTING
> I0129 02:00:35.191522 31953 master.cpp:374] Master 823f2212-bf28-4dd6-959d-796029d32afb (90665f991b70) started on 172.17.0.3:60904
> I0129 02:00:35.191640 31953 master.cpp:376] Flags at startup: --acls="" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="true" --authenticate_http="true" --authenticate_slaves="true" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/B9O6zq/credentials" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" --quiet="false" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="25secs" --registry_strict="true" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/mesos/mesos-0.28.0/_inst/share/mesos/webui" --work_dir="/tmp/B9O6zq/master" --zk_session_timeout="10secs"
> I0129 02:00:35.191926 31953 master.cpp:421] Master only allowing authenticated frameworks to register
> I0129 02:00:35.191936 31953 master.cpp:426] Master only allowing authenticated slaves to register
> I0129 02:00:35.191943 31953 credentials.hpp:35] Loading credentials for authentication from '/tmp/B9O6zq/credentials'
> I0129 02:00:35.192229 31953 master.cpp:466] Using default 'crammd5' authenticator
> I0129 02:00:35.192366 31953 master.cpp:535] Using default 'basic' HTTP authenticator
> I0129 02:00:35.192530 31953 master.cpp:569] Authorization enabled
> I0129 02:00:35.192719 31950 whitelist_watcher.cpp:77] No whitelist given
> I0129 02:00:35.192756 31957 hierarchical.cpp:144] Initialized hierarchical allocator process
> I0129 02:00:35.194291 31955 master.cpp:1710] The newly elected leader is master@172.17.0.3:60904 with id 823f2212-bf28-4dd6-959d-796029d32afb
> I0129 02:00:35.194335 31955 master.cpp:1723] Elected as the leading master!
> I0129 02:00:35.194350 31955 master.cpp:1468] Recovering from registrar
> I0129 02:00:35.194545 31958 registrar.cpp:307] Recovering registrar
> I0129 02:00:35.220226 31948 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 29.150097ms
> I0129 02:00:35.220262 31948 replica.cpp:320] Persisted replica status to STARTING
> I0129 02:00:35.220484 31959 recover.cpp:473] Replica is in STARTING status
> I0129 02:00:35.221220 31954 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (11819)@172.17.0.3:60904
> I0129 02:00:35.221539 31959 recover.cpp:193] Received a recover response from a replica in STARTING status
> I0129 02:00:35.221871 31954 recover.cpp:564] Updating replica status to VOTING
> I0129 02:00:35.245329 31949 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 23.326002ms
> I0129 02:00:35.245367 31949 replica.cpp:320] Persisted replica status to VOTING
> I0129 02:00:35.245522 31955 recover.cpp:578] Successfully joined the Paxos group
> I0129 02:00:35.245800 31955 recover.cpp:462] Recover process terminated
> I0129 02:00:35.246181 31951 log.cpp:659] Attempting to start the writer
> I0129 02:00:35.247228 31953 replica.cpp:493] Replica received implicit promise request from (11820)@172.17.0.3:60904 with proposal 1
> I0129 02:00:35.270472 31953 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 23.225846ms
> I0129 02:00:35.270510 31953 replica.cpp:342] Persisted promised to 1
> I0129 02:00:35.271306 31957 coordinator.cpp:238] Coordinator attempting to fill missing positions
> I0129 02:00:35.272373 31949 replica.cpp:388] Replica received explicit promise request from (11821)@172.17.0.3:60904 for position 0 with proposal 2
> I0129 02:00:35.295600 31949 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 23.181008ms
> I0129 02:00:35.295639 31949 replica.cpp:712] Persisted action at 0
> I0129 02:00:35.296815 31950 replica.cpp:537] Replica received write request for position 0 from (11822)@172.17.0.3:60904
> I0129 02:00:35.296879 31950 leveldb.cpp:436] Reading position from leveldb took 43203ns
> I0129 02:00:35.320659 31950 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 23.753935ms
> I0129 02:00:35.320699 31950 replica.cpp:712] Persisted action at 0
> I0129 02:00:35.321394 31950 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
> I0129 02:00:35.345837 31950 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 24.358655ms
> I0129 02:00:35.345877 31950 replica.cpp:712] Persisted action at 0
> I0129 02:00:35.345898 31950 replica.cpp:697] Replica learned NOP action at position 0
> I0129 02:00:35.346683 31950 log.cpp:675] Writer started with ending position 0
> I0129 02:00:35.347913 31957 leveldb.cpp:436] Reading position from leveldb took 55621ns
> I0129 02:00:35.349047 31947 registrar.cpp:340] Successfully fetched the registry (0B) in 154.395904ms
> I0129 02:00:35.349185 31947 registrar.cpp:439] Applied 1 operations in 46347ns; attempting to update the 'registry'
> I0129 02:00:35.350008 31952 log.cpp:683] Attempting to append 170 bytes to the log
> I0129 02:00:35.350132 31957 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
> I0129 02:00:35.351042 31953 replica.cpp:537] Replica received write request for position 1 from (11823)@172.17.0.3:60904
> I0129 02:00:35.370906 31953 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 19.829257ms
> I0129 02:00:35.370946 31953 replica.cpp:712] Persisted action at 1
> I0129 02:00:35.371840 31952 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
> I0129 02:00:35.396082 31952 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 24.218894ms
> I0129 02:00:35.396122 31952 replica.cpp:712] Persisted action at 1
> I0129 02:00:35.396144 31952 replica.cpp:697] Replica learned APPEND action at position 1
> I0129 02:00:35.397250 31954 registrar.cpp:484] Successfully updated the 'registry' in 47.99104ms
> I0129 02:00:35.397452 31954 registrar.cpp:370] Successfully recovered registrar
> I0129 02:00:35.397678 31946 log.cpp:702] Attempting to truncate the log to 1
> I0129 02:00:35.397881 31956 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
> I0129 02:00:35.398066 31951 master.cpp:1520] Recovered 0 slaves from the Registry (131B) ; allowing 10mins for slaves to re-register
> I0129 02:00:35.398111 31957 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover
> I0129 02:00:35.398982 31955 replica.cpp:537] Replica received write request for position 2 from (11824)@172.17.0.3:60904
> I0129 02:00:35.421293 31955 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 22.286476ms
> I0129 02:00:35.421339 31955 replica.cpp:712] Persisted action at 2
> I0129 02:00:35.422046 31944 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
> I0129 02:00:35.446316 31944 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 24.246177ms
> I0129 02:00:35.446406 31944 leveldb.cpp:399] Deleting ~1 keys from leveldb took 84415ns
> I0129 02:00:35.446466 31944 replica.cpp:712] Persisted action at 2
> I0129 02:00:35.446491 31944 replica.cpp:697] Replica learned TRUNCATE action at position 2
> I0129 02:00:35.452579 31957 slave.cpp:192] Slave started on 372)@172.17.0.3:60904
> I0129 02:00:35.452620 31957 slave.cpp:193] Flags at startup: --appc_store_dir="/tmp/mesos/store/appc" --authenticatee="crammd5" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/credential" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_auth_server="https://auth.docker.io" --docker_kill_orphans="true" --docker_puller_timeout="60" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher_dir="/mesos/mesos-0.28.0/_build/src" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resources="cpus:2;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM"
> I0129 02:00:35.453012 31957 credentials.hpp:83] Loading credential for authentication from '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/credential'
> I0129 02:00:35.453191 31957 slave.cpp:323] Slave using credential for: test-principal
> I0129 02:00:35.453368 31957 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000]
> Trying semicolon-delimited string format instead
> I0129 02:00:35.453853 31957 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0129 02:00:35.453938 31957 slave.cpp:471] Slave attributes: [  ]
> I0129 02:00:35.453953 31957 slave.cpp:476] Slave hostname: 90665f991b70
> I0129 02:00:35.454794 31950 state.cpp:58] Recovering state from '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/meta'
> I0129 02:00:35.455080 31948 status_update_manager.cpp:200] Recovering status update manager
> I0129 02:00:35.455225 31926 sched.cpp:222] Version: 0.28.0
> I0129 02:00:35.455535 31956 slave.cpp:4495] Finished recovery
> I0129 02:00:35.455798 31945 sched.cpp:326] New master detected at master@172.17.0.3:60904
> I0129 02:00:35.455879 31945 sched.cpp:382] Authenticating with master master@172.17.0.3:60904
> I0129 02:00:35.455904 31945 sched.cpp:389] Using default CRAM-MD5 authenticatee
> I0129 02:00:35.455943 31956 slave.cpp:4667] Querying resource estimator for oversubscribable resources
> I0129 02:00:35.456167 31950 authenticatee.cpp:121] Creating new client SASL connection
> I0129 02:00:35.456218 31953 status_update_manager.cpp:174] Pausing sending status updates
> I0129 02:00:35.456219 31956 slave.cpp:795] New master detected at master@172.17.0.3:60904
> I0129 02:00:35.456298 31956 slave.cpp:858] Authenticating with master master@172.17.0.3:60904
> I0129 02:00:35.456323 31956 slave.cpp:863] Using default CRAM-MD5 authenticatee
> I0129 02:00:35.456490 31948 authenticatee.cpp:121] Creating new client SASL connection
> I0129 02:00:35.456492 31956 slave.cpp:831] Detecting new master
> I0129 02:00:35.456588 31946 master.cpp:5521] Authenticating scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904
> I0129 02:00:35.456686 31956 slave.cpp:4681] Received oversubscribable resources  from the resource estimator
> I0129 02:00:35.456805 31953 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(804)@172.17.0.3:60904
> I0129 02:00:35.456878 31946 master.cpp:5521] Authenticating slave(372)@172.17.0.3:60904
> I0129 02:00:35.457124 31953 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(805)@172.17.0.3:60904
> I0129 02:00:35.457157 31948 authenticator.cpp:98] Creating new server SASL connection
> I0129 02:00:35.457373 31946 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> I0129 02:00:35.457381 31951 authenticator.cpp:98] Creating new server SASL connection
> I0129 02:00:35.457491 31946 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0129 02:00:35.457598 31946 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> I0129 02:00:35.457612 31951 authenticator.cpp:203] Received SASL authentication start
> I0129 02:00:35.457635 31946 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0129 02:00:35.457680 31951 authenticator.cpp:325] Authentication requires more steps
> I0129 02:00:35.457767 31954 authenticator.cpp:203] Received SASL authentication start
> I0129 02:00:35.457768 31948 authenticatee.cpp:258] Received SASL authentication step
> I0129 02:00:35.457830 31954 authenticator.cpp:325] Authentication requires more steps
> I0129 02:00:35.457885 31948 authenticator.cpp:231] Received SASL authentication step
> I0129 02:00:35.457918 31948 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '90665f991b70' server FQDN: '90665f991b70' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> I0129 02:00:35.457933 31948 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> I0129 02:00:35.457954 31959 authenticatee.cpp:258] Received SASL authentication step
> I0129 02:00:35.457993 31948 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0129 02:00:35.458031 31948 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '90665f991b70' server FQDN: '90665f991b70' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> I0129 02:00:35.458050 31948 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0129 02:00:35.458065 31948 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0129 02:00:35.458096 31948 authenticator.cpp:317] Authentication success
> I0129 02:00:35.458112 31944 authenticator.cpp:231] Received SASL authentication step
> I0129 02:00:35.458142 31944 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '90665f991b70' server FQDN: '90665f991b70' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> I0129 02:00:35.458173 31944 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> I0129 02:00:35.458206 31954 authenticatee.cpp:298] Authentication success
> I0129 02:00:35.458256 31957 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904
> I0129 02:00:35.458206 31944 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0129 02:00:35.458360 31944 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '90665f991b70' server FQDN: '90665f991b70' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> I0129 02:00:35.458382 31944 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0129 02:00:35.458397 31944 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0129 02:00:35.458489 31944 authenticator.cpp:317] Authentication success
> I0129 02:00:35.458623 31953 sched.cpp:471] Successfully authenticated with master master@172.17.0.3:60904
> I0129 02:00:35.458649 31953 sched.cpp:780] Sending SUBSCRIBE call to master@172.17.0.3:60904
> I0129 02:00:35.458653 31956 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(804)@172.17.0.3:60904
> I0129 02:00:35.458673 31951 authenticatee.cpp:298] Authentication success
> I0129 02:00:35.458709 31952 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(372)@172.17.0.3:60904
> I0129 02:00:35.458906 31955 slave.cpp:926] Successfully authenticated with master master@172.17.0.3:60904
> I0129 02:00:35.458983 31956 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(805)@172.17.0.3:60904
> I0129 02:00:35.459033 31955 slave.cpp:1320] Will retry registration in 7.075135ms if necessary
> I0129 02:00:35.459128 31953 sched.cpp:813] Will retry registration in 86.579738ms if necessary
> I0129 02:00:35.459193 31950 master.cpp:4235] Registering slave at slave(372)@172.17.0.3:60904 (90665f991b70) with id 823f2212-bf28-4dd6-959d-796029d32afb-S0
> I0129 02:00:35.459489 31950 master.cpp:2278] Received SUBSCRIBE call for framework 'default' at scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904
> I0129 02:00:35.459513 31950 master.cpp:1749] Authorizing framework principal 'test-principal' to receive offers for role '*'
> I0129 02:00:35.459516 31959 registrar.cpp:439] Applied 1 operations in 62499ns; attempting to update the 'registry'
> I0129 02:00:35.459766 31956 master.cpp:2349] Subscribing framework default with checkpointing disabled and capabilities [  ]
> I0129 02:00:35.460095 31955 log.cpp:683] Attempting to append 339 bytes to the log
> I0129 02:00:35.460192 31948 hierarchical.cpp:265] Added framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.460247 31956 sched.cpp:707] Framework registered with 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.460314 31958 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3
> I0129 02:00:35.460388 31948 hierarchical.cpp:1403] No resources available to allocate!
> I0129 02:00:35.460449 31948 hierarchical.cpp:1498] No inverse offers to send out!
> I0129 02:00:35.460402 31956 sched.cpp:721] Scheduler::registered took 136519ns
> I0129 02:00:35.460482 31948 hierarchical.cpp:1096] Performed allocation for 0 slaves in 158218ns
> I0129 02:00:35.461187 31944 replica.cpp:537] Replica received write request for position 3 from (11829)@172.17.0.3:60904
> I0129 02:00:35.467929 31954 slave.cpp:1320] Will retry registration in 14.701381ms if necessary
> I0129 02:00:35.468183 31952 master.cpp:4223] Ignoring register slave message from slave(372)@172.17.0.3:60904 (90665f991b70) as admission is already in progress
> I0129 02:00:35.483300 31959 slave.cpp:1320] Will retry registration in 8.003223ms if necessary
> I0129 02:00:35.483500 31946 master.cpp:4223] Ignoring register slave message from slave(372)@172.17.0.3:60904 (90665f991b70) as admission is already in progress
> I0129 02:00:35.491843 31945 slave.cpp:1320] Will retry registration in 52.952447ms if necessary
> I0129 02:00:35.491962 31948 master.cpp:4223] Ignoring register slave message from slave(372)@172.17.0.3:60904 (90665f991b70) as admission is already in progress
> I0129 02:00:35.503868 31944 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 42.66008ms
> I0129 02:00:35.503917 31944 replica.cpp:712] Persisted action at 3
> I0129 02:00:35.504838 31953 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0
> I0129 02:00:35.545286 31955 slave.cpp:1320] Will retry registration in 298.440134ms if necessary
> I0129 02:00:35.545500 31957 master.cpp:4223] Ignoring register slave message from slave(372)@172.17.0.3:60904 (90665f991b70) as admission is already in progress
> I0129 02:00:35.545524 31953 leveldb.cpp:341] Persisting action (360 bytes) to leveldb took 40.663886ms
> I0129 02:00:35.545560 31953 replica.cpp:712] Persisted action at 3
> I0129 02:00:35.545584 31953 replica.cpp:697] Replica learned APPEND action at position 3
> I0129 02:00:35.547586 31945 registrar.cpp:484] Successfully updated the 'registry' in 87.995136ms
> I0129 02:00:35.547767 31949 log.cpp:702] Attempting to truncate the log to 3
> I0129 02:00:35.547906 31954 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4
> I0129 02:00:35.548713 31945 slave.cpp:3435] Received ping from slave-observer(343)@172.17.0.3:60904
> I0129 02:00:35.549018 31957 replica.cpp:537] Replica received write request for position 4 from (11830)@172.17.0.3:60904
> I0129 02:00:35.549124 31956 slave.cpp:970] Registered with master master@172.17.0.3:60904; given slave ID 823f2212-bf28-4dd6-959d-796029d32afb-S0
> I0129 02:00:35.549049 31953 master.cpp:4303] Registered slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0129 02:00:35.549175 31956 fetcher.cpp:81] Clearing fetcher cache
> I0129 02:00:35.549362 31954 status_update_manager.cpp:181] Resuming sending status updates
> I0129 02:00:35.549350 31959 hierarchical.cpp:473] Added slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 (90665f991b70) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: )
> I0129 02:00:35.549720 31956 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/meta/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/slave.info'
> I0129 02:00:35.550135 31956 slave.cpp:1029] Forwarding total oversubscribed resources 
> I0129 02:00:35.550341 31949 master.cpp:4644] Received update of slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70) with total oversubscribed resources 
> I0129 02:00:35.550400 31959 hierarchical.cpp:1498] No inverse offers to send out!
> I0129 02:00:35.550475 31959 hierarchical.cpp:1116] Performed allocation for slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 in 1.046149ms
> I0129 02:00:35.550946 31956 hierarchical.cpp:531] Slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 (90665f991b70) updated with oversubscribed resources  (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000])
> I0129 02:00:35.550976 31949 master.cpp:5350] Sending 1 offers to framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 (default) at scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904
> I0129 02:00:35.551132 31956 hierarchical.cpp:1403] No resources available to allocate!
> I0129 02:00:35.551187 31956 hierarchical.cpp:1498] No inverse offers to send out!
> I0129 02:00:35.551225 31956 hierarchical.cpp:1116] Performed allocation for slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 in 229801ns
> I0129 02:00:35.551635 31951 sched.cpp:877] Scheduler::resourceOffers took 155532ns
> I0129 02:00:35.553310 31944 master.cpp:3136] Processing ACCEPT call for offers: [ 823f2212-bf28-4dd6-959d-796029d32afb-O0 ] on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70) for framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 (default) at scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904
> I0129 02:00:35.553364 31944 master.cpp:2823] Authorizing framework principal 'test-principal' to launch task 558bdc51-38dc-48e3-9b81-ad42b942050c as user 'mesos'
> W0129 02:00:35.554951 31944 validation.cpp:404] Executor default for task 558bdc51-38dc-48e3-9b81-ad42b942050c uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases.
> W0129 02:00:35.555004 31944 validation.cpp:416] Executor default for task 558bdc51-38dc-48e3-9b81-ad42b942050c uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases.
> I0129 02:00:35.555403 31944 master.hpp:176] Adding task 558bdc51-38dc-48e3-9b81-ad42b942050c with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 (90665f991b70)
> I0129 02:00:35.555660 31944 master.cpp:3621] Launching task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 (default) at scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70)
> I0129 02:00:35.556067 31948 slave.cpp:1360] Got assigned task 558bdc51-38dc-48e3-9b81-ad42b942050c for framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.556691 31948 slave.cpp:1479] Launching task 558bdc51-38dc-48e3-9b81-ad42b942050c for framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.557307 31948 paths.cpp:472] Trying to chown '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/frameworks/823f2212-bf28-4dd6-959d-796029d32afb-0000/executors/default/runs/92af8a30-2bb0-48fc-874a-e854ff82225b' to user 'mesos'
> I0129 02:00:35.580426 31948 slave.cpp:5281] Launching executor default of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 with resources  in work directory '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/frameworks/823f2212-bf28-4dd6-959d-796029d32afb-0000/executors/default/runs/92af8a30-2bb0-48fc-874a-e854ff82225b'
> *** Aborted at 1454032835 (unix time) try "date -d @1454032835" if you are using GNU date ***
> I0129 02:00:35.582674 31948 exec.cpp:134] Version: 0.28.0
> PC: @     0x2b342648a8dd (unknown)
> I0129 02:00:35.582969 31958 exec.cpp:184] Executor started at: executor(123)@172.17.0.3:60904 with pid 31926
> I0129 02:00:35.583271 31948 slave.cpp:1697] Queuing task '558bdc51-38dc-48e3-9b81-ad42b942050c' for executor 'default' of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.583444 31948 slave.cpp:748] Successfully attached file '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/frameworks/823f2212-bf28-4dd6-959d-796029d32afb-0000/executors/default/runs/92af8a30-2bb0-48fc-874a-e854ff82225b'
> I0129 02:00:35.583636 31948 slave.cpp:2642] Got registration for executor 'default' of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 from executor(123)@172.17.0.3:60904
> I0129 02:00:35.584103 31950 exec.cpp:208] Executor registered on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0
> I0129 02:00:35.584170 31950 exec.cpp:220] Executor::registered took 39070ns
> I0129 02:00:35.584476 31948 slave.cpp:1862] Sending queued task '558bdc51-38dc-48e3-9b81-ad42b942050c' to executor 'default' of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 at executor(123)@172.17.0.3:60904
> I0129 02:00:35.584918 31944 exec.cpp:295] Executor asked to run task '558bdc51-38dc-48e3-9b81-ad42b942050c'
> I0129 02:00:35.585036 31944 exec.cpp:304] Executor::launchTask took 93979ns
> I0129 02:00:35.585160 31944 exec.cpp:517] Executor sending status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.585564 31956 slave.cpp:3001] Handling status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 from executor(123)@172.17.0.3:60904
> I0129 02:00:35.585914 31944 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.585973 31944 status_update_manager.cpp:497] Creating StatusUpdate stream for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.586472 31944 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 to the slave
> I0129 02:00:35.586774 31953 slave.cpp:3353] Forwarding the update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 to master@172.17.0.3:60904
> I0129 02:00:35.587050 31953 slave.cpp:3247] Status update manager successfully handled status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.587118 31953 slave.cpp:3263] Sending acknowledgement for status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 to executor(123)@172.17.0.3:60904
> I0129 02:00:35.587172 31948 master.cpp:4789] Status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 from slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70)
> I0129 02:00:35.587226 31948 master.cpp:4837] Forwarding status update TASK_RUNNING (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.587316 31953 exec.cpp:341] Executor received status update acknowledgement a9be0e7b-c011-4099-aba9-c914c911d7a9 for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.587404 31948 master.cpp:6445] Updating the state of task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING)
> I0129 02:00:35.587743 31953 sched.cpp:985] Scheduler::statusUpdate took 84229ns
> I0129 02:00:35.588039 31957 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 38.985866ms
> I0129 02:00:35.588095 31957 replica.cpp:712] Persisted action at 4
> I0129 02:00:35.588568 31948 master.cpp:3947] Processing ACKNOWLEDGE call a9be0e7b-c011-4099-aba9-c914c911d7a9 for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 (default) at scheduler-93e745f0-0e48-4a8f-b227-93569976c5e8@172.17.0.3:60904 on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0
> I0129 02:00:35.588979 31950 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0
> I0129 02:00:35.589004 31957 status_update_manager.cpp:392] Received status update acknowledgement (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.589700 31954 slave.cpp:2411] Status update manager successfully handled status update acknowledgement (UUID: a9be0e7b-c011-4099-aba9-c914c911d7a9) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.590204 31948 process.cpp:3141] Handling HTTP event for process 'slave(372)' with path: '/slave(372)/api/v1/executor'
> I0129 02:00:35.590828 31951 http.cpp:190] HTTP POST for /slave(372)/api/v1/executor from 172.17.0.3:52186
> I0129 02:00:35.591156 31951 slave.cpp:2475] Received Subscribe request for HTTP executor 'default' of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 at executor(123)@172.17.0.3:60904
> I0129 02:00:35.593617 31948 master.cpp:1025] Master terminating
> W0129 02:00:35.593758 31948 master.cpp:6497] Removing task 558bdc51-38dc-48e3-9b81-ad42b942050c with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70) in non-terminal state TASK_RUNNING
> I0129 02:00:35.594292 31958 hierarchical.cpp:505] Removed slave 823f2212-bf28-4dd6-959d-796029d32afb-S0
> I0129 02:00:35.594655 31948 master.cpp:6540] Removing executor 'default' with resources  of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 on slave 823f2212-bf28-4dd6-959d-796029d32afb-S0 at slave(372)@172.17.0.3:60904 (90665f991b70)
> I0129 02:00:35.595233 31955 hierarchical.cpp:326] Removed framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.595634 31944 slave.cpp:3481] master@172.17.0.3:60904 exited
> W0129 02:00:35.595664 31944 slave.cpp:3484] Master disconnected! Waiting for a new master to be elected
> *** SIGSEGV (@0xd0) received by PID 31926 (TID 0x2b34f7a20700) from PID 208; stack trace: ***
>     @     0x2b34330d90b7 os::Linux::chained_handler()
>     @     0x2b34330dd219 JVM_handle_linux_signal
>     @     0x2b3426241340 (unknown)
>     @     0x2b342648a8dd (unknown)
>     @     0x2b3426480681 (unknown)
>     @     0x2b34264d798e (unknown)
>     @     0x2b34264d78cf (unknown)
>     @     0x2b342402a520 handle_socket_error_msg
> I0129 02:00:35.646546 31950 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 57.450403ms
> I0129 02:00:35.646659 31950 leveldb.cpp:399] Deleting ~2 keys from leveldb took 64897ns
> I0129 02:00:35.646680 31950 replica.cpp:712] Persisted action at 4
>     @     0x2b342402b3bb zookeeper_process
> I0129 02:00:35.646709 31950 replica.cpp:697] Replica learned TRUNCATE action at position 4
> I0129 02:00:35.648186 31948 slave.cpp:3481] executor(123)@172.17.0.3:60904 exited
> I0129 02:00:35.648372 31952 slave.cpp:3816] Executor 'default' of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 exited with status 0
> I0129 02:00:35.648558 31952 slave.cpp:3001] Handling status update TASK_FAILED (UUID: 7584c149-3dfe-47d7-9f91-ddfa2c3b4e38) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 from @0.0.0.0:0
> I0129 02:00:35.648720 31952 slave.cpp:5591] Terminating task 558bdc51-38dc-48e3-9b81-ad42b942050c
> I0129 02:00:35.649889 31948 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 7584c149-3dfe-47d7-9f91-ddfa2c3b4e38) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.649999 31952 slave.cpp:667] Slave terminating
> I0129 02:00:35.650074 31952 slave.cpp:2078] Asked to shut down framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 by @0.0.0.0:0
> I0129 02:00:35.650120 31948 status_update_manager.cpp:374] Forwarding update TASK_FAILED (UUID: 7584c149-3dfe-47d7-9f91-ddfa2c3b4e38) for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 to the slave
> I0129 02:00:35.650193 31952 slave.cpp:2103] Shutting down framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.650257 31952 slave.cpp:3920] Cleaning up executor 'default' of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000 (via HTTP)
> I0129 02:00:35.650461 31957 gc.cpp:54] Scheduling '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/frameworks/823f2212-bf28-4dd6-959d-796029d32afb-0000/executors/default/runs/92af8a30-2bb0-48fc-874a-e854ff82225b' for gc 6.9999924726163days in the future
> I0129 02:00:35.650661 31952 slave.cpp:4008] Cleaning up framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.650661 31955 gc.cpp:54] Scheduling '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/frameworks/823f2212-bf28-4dd6-959d-796029d32afb-0000/executors/default' for gc 6.99999247009185days in the future
> I0129 02:00:35.650902 31955 status_update_manager.cpp:282] Closing status update streams for framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.650960 31955 status_update_manager.cpp:528] Cleaning up status update stream for task 558bdc51-38dc-48e3-9b81-ad42b942050c of framework 823f2212-bf28-4dd6-959d-796029d32afb-0000
> I0129 02:00:35.651008 31948 gc.cpp:54] Scheduling '/tmp/ContentType_ExecutorHttpApiTest_DefaultAccept_1_r4GUhM/slaves/823f2212-bf28-4dd6-959d-796029d32afb-S0/frameworks/823f2212-bf28-4dd6-959d-796029d32afb-0000' for gc 6.99999246598519days in the future
>     @     0x2b34240319cb do_io
>     @     0x2b3426239182 start_thread
> [       OK ] ContentType/ExecutorHttpApiTest.DefaultAccept/1 (637 ms)
>     @     0x2b342654947d (unknown)
> {code}
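> Note that the first trace above is interleaved with regular glog output from other threads. A small hedged helper (hypothetical, not part of Mesos) to separate the stack frames (lines beginning with {{@}}) from the log lines when reading such dumps:
> {code}
import re

# Stack-trace frames printed by the signal handler look like:
#     @     0x2b342402a520 handle_socket_error_msg
STACK_FRAME = re.compile(r'^\s*@\s+0x[0-9a-f]+\b')

def split_trace(lines):
    """Partition mixed output into (stack_frames, log_lines)."""
    frames, logs = [], []
    for line in lines:
        (frames if STACK_FRAME.match(line) else logs).append(line)
    return frames, logs
> {code}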
> Another test, {{MasterMaintenanceTest.InverseOffersFilters}}, failed right after completion with a similar stack trace:
> {code}
> *** Aborted at 1454095687 (unix time) try "date -d @1454095687" if you are using GNU date ***
> PC: @     0x2b2faec908dd (unknown)
> *** SIGSEGV (@0x51) received by PID 30503 (TID 0x2b307fe5f700) from PID 81; stack trace: ***
>     @     0x2b2fbb1e50b7 os::Linux::chained_handler()
>     @     0x2b2fbb1e9219 JVM_handle_linux_signal
>     @     0x2b2faea47340 (unknown)
>     @     0x2b2faec908dd (unknown)
>     @     0x2b2faec86681 (unknown)
>     @     0x2b2faecdd98e (unknown)
>     @     0x2b2faecdd8cf (unknown)
>     @     0x2b2facc1b0d0 handle_socket_error_msg
>     @     0x2b2facc1bf6b zookeeper_process
>     @     0x2b2facc2209b do_io
>     @     0x2b2faea3f182 start_thread
>     @     0x2b2faed4f47d (unknown)
> make[4]: *** [check-local] Segmentation fault
> make[4]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
> make[3]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
> make[3]: *** [check-am] Error 2
> make[2]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
> make[2]: *** [check] Error 2
> make[1]: *** [check-recursive] Error 1
> make[1]: Leaving directory `/mesos/mesos-0.28.0/_build'
> make: *** [distcheck] Error 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)