Posted to builds@mesos.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/01/29 22:30:57 UTC

Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,centos:7,docker||Hadoop #1587

See <https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=docker%7C%7CHadoop/1587/>

------------------------------------------
[...truncated 160338 lines...]
I0129 21:25:03.723858  2437 gc.cpp:54] Scheduling '/tmp/ContentType_SchedulerTest_Message_1_Lzy3mL/slaves/8a701c8d-f1ae-4771-8035-fb31ee473eb0-S0/frameworks/8a701c8d-f1ae-4771-8035-fb31ee473eb0-0000/executors/default' for gc 6.99999162292741days in the future
I0129 21:25:03.723948  2445 status_update_manager.cpp:282] Closing status update streams for framework 8a701c8d-f1ae-4771-8035-fb31ee473eb0-0000
I0129 21:25:03.724000  2445 status_update_manager.cpp:528] Cleaning up status update stream for task e970a59a-ec88-4e4d-8556-976f77aaf408 of framework 8a701c8d-f1ae-4771-8035-fb31ee473eb0-0000
I0129 21:25:03.724017  2449 gc.cpp:54] Scheduling '/tmp/ContentType_SchedulerTest_Message_1_Lzy3mL/slaves/8a701c8d-f1ae-4771-8035-fb31ee473eb0-S0/frameworks/8a701c8d-f1ae-4771-8035-fb31ee473eb0-0000' for gc 6.99999162097185days in the future
[       OK ] ContentType/SchedulerTest.Message/1 (105 ms)
[ RUN      ] ContentType/SchedulerTest.Request/0
I0129 21:25:03.732007  2415 leveldb.cpp:174] Opened db in 3.325621ms
I0129 21:25:03.732858  2415 leveldb.cpp:181] Compacted db in 793277ns
I0129 21:25:03.732902  2415 leveldb.cpp:196] Created db iterator in 18269ns
I0129 21:25:03.732925  2415 leveldb.cpp:202] Seeked to beginning of db in 1869ns
I0129 21:25:03.733099  2415 leveldb.cpp:271] Iterated through 0 keys in the db in 520ns
I0129 21:25:03.733157  2415 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
I0129 21:25:03.733574  2438 recover.cpp:447] Starting replica recovery
I0129 21:25:03.733788  2438 recover.cpp:473] Replica is in EMPTY status
I0129 21:25:03.735548  2446 master.cpp:374] Master a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e (c17944d9f94b) started on 172.17.0.4:47940
I0129 21:25:03.735993  2440 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (14168)@172.17.0.4:47940
I0129 21:25:03.736377  2446 master.cpp:376] Flags at startup: --acls="" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="false" --authenticate_http="true" --authenticate_slaves="true" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/DNRJ3F/credentials" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" --quiet="false" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="25secs" --registry_strict="true" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/mesos/mesos-0.28.0/_inst/share/mesos/webui" --work_dir="/tmp/DNRJ3F/master" --zk_session_timeout="10secs"
I0129 21:25:03.736716  2436 recover.cpp:193] Received a recover response from a replica in EMPTY status
I0129 21:25:03.737700  2446 master.cpp:423] Master allowing unauthenticated frameworks to register
I0129 21:25:03.737716  2446 master.cpp:426] Master only allowing authenticated slaves to register
I0129 21:25:03.737726  2446 credentials.hpp:35] Loading credentials for authentication from '/tmp/DNRJ3F/credentials'
I0129 21:25:03.738097  2446 master.cpp:466] Using default 'crammd5' authenticator
I0129 21:25:03.738242  2436 recover.cpp:564] Updating replica status to STARTING
I0129 21:25:03.738252  2446 master.cpp:535] Using default 'basic' HTTP authenticator
I0129 21:25:03.738392  2446 master.cpp:569] Authorization enabled
I0129 21:25:03.738894  2441 hierarchical.cpp:144] Initialized hierarchical allocator process
I0129 21:25:03.738963  2441 whitelist_watcher.cpp:77] No whitelist given
I0129 21:25:03.739233  2436 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 695883ns
I0129 21:25:03.739264  2436 replica.cpp:320] Persisted replica status to STARTING
I0129 21:25:03.739522  2436 recover.cpp:473] Replica is in STARTING status
I0129 21:25:03.740592  2449 master.cpp:1710] The newly elected leader is master@172.17.0.4:47940 with id a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e
I0129 21:25:03.740624  2449 master.cpp:1723] Elected as the leading master!
I0129 21:25:03.740644  2449 master.cpp:1468] Recovering from registrar
I0129 21:25:03.741812  2434 registrar.cpp:307] Recovering registrar
I0129 21:25:03.742508  2439 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (14170)@172.17.0.4:47940
I0129 21:25:03.742787  2434 recover.cpp:193] Received a recover response from a replica in STARTING status
I0129 21:25:03.743249  2438 recover.cpp:564] Updating replica status to VOTING
I0129 21:25:03.744009  2438 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 564080ns
I0129 21:25:03.744042  2438 replica.cpp:320] Persisted replica status to VOTING
I0129 21:25:03.744277  2435 recover.cpp:578] Successfully joined the Paxos group
I0129 21:25:03.744607  2435 recover.cpp:462] Recover process terminated
I0129 21:25:03.745507  2435 log.cpp:659] Attempting to start the writer
I0129 21:25:03.747033  2438 replica.cpp:493] Replica received implicit promise request from (14171)@172.17.0.4:47940 with proposal 1
I0129 21:25:03.747503  2438 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 429775ns
I0129 21:25:03.747532  2438 replica.cpp:342] Persisted promised to 1
I0129 21:25:03.748580  2438 coordinator.cpp:238] Coordinator attempting to fill missing positions
I0129 21:25:03.750001  2449 replica.cpp:388] Replica received explicit promise request from (14172)@172.17.0.4:47940 for position 0 with proposal 2
I0129 21:25:03.750668  2449 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 621770ns
I0129 21:25:03.750708  2449 replica.cpp:712] Persisted action at 0
I0129 21:25:03.752346  2438 replica.cpp:537] Replica received write request for position 0 from (14173)@172.17.0.4:47940
I0129 21:25:03.752424  2438 leveldb.cpp:436] Reading position from leveldb took 45286ns
I0129 21:25:03.752825  2438 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 355265ns
I0129 21:25:03.752849  2438 replica.cpp:712] Persisted action at 0
I0129 21:25:03.753584  2438 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
I0129 21:25:03.754206  2438 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 593179ns
I0129 21:25:03.754232  2438 replica.cpp:712] Persisted action at 0
I0129 21:25:03.754251  2438 replica.cpp:697] Replica learned NOP action at position 0
I0129 21:25:03.755219  2436 log.cpp:675] Writer started with ending position 0
I0129 21:25:03.756616  2435 leveldb.cpp:436] Reading position from leveldb took 27251ns
I0129 21:25:03.757902  2439 registrar.cpp:340] Successfully fetched the registry (0B) in 16.039936ms
I0129 21:25:03.758038  2439 registrar.cpp:439] Applied 1 operations in 34861ns; attempting to update the 'registry'
I0129 21:25:03.759372  2439 log.cpp:683] Attempting to append 170 bytes to the log
I0129 21:25:03.759659  2447 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
I0129 21:25:03.760488  2446 replica.cpp:537] Replica received write request for position 1 from (14174)@172.17.0.4:47940
I0129 21:25:03.761023  2446 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 491354ns
I0129 21:25:03.761112  2446 replica.cpp:712] Persisted action at 1
I0129 21:25:03.762333  2448 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
I0129 21:25:03.762855  2448 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 431449ns
I0129 21:25:03.762943  2448 replica.cpp:712] Persisted action at 1
I0129 21:25:03.763062  2448 replica.cpp:697] Replica learned APPEND action at position 1
I0129 21:25:03.764732  2445 log.cpp:702] Attempting to truncate the log to 1
I0129 21:25:03.765103  2445 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
I0129 21:25:03.766185  2445 replica.cpp:537] Replica received write request for position 2 from (14175)@172.17.0.4:47940
I0129 21:25:03.766418  2448 registrar.cpp:484] Successfully updated the 'registry' in 8.279808ms
I0129 21:25:03.766618  2448 registrar.cpp:370] Successfully recovered registrar
I0129 21:25:03.767092  2448 master.cpp:1520] Recovered 0 slaves from the Registry (131B) ; allowing 10mins for slaves to re-register
I0129 21:25:03.767318  2442 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover
I0129 21:25:03.767412  2445 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.110188ms
I0129 21:25:03.767726  2445 replica.cpp:712] Persisted action at 2
I0129 21:25:03.768554  2443 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
I0129 21:25:03.768949  2443 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 362877ns
I0129 21:25:03.769006  2443 leveldb.cpp:399] Deleting ~1 keys from leveldb took 31442ns
I0129 21:25:03.769040  2443 replica.cpp:712] Persisted action at 2
I0129 21:25:03.769062  2443 replica.cpp:697] Replica learned TRUNCATE action at position 2
I0129 21:25:03.778211  2415 scheduler.cpp:154] Version: 0.28.0
I0129 21:25:03.779119  2443 scheduler.cpp:236] New master detected at master@172.17.0.4:47940
I0129 21:25:03.780131  2443 scheduler.cpp:298] Sending SUBSCRIBE call to master@172.17.0.4:47940
I0129 21:25:03.781904  2436 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler'
I0129 21:25:03.782382  2449 http.cpp:503] HTTP POST for /master/api/v1/scheduler from 172.17.0.4:40320
I0129 21:25:03.782588  2449 master.cpp:1972] Received subscription request for HTTP framework 'default'
I0129 21:25:03.782658  2449 master.cpp:1749] Authorizing framework principal 'test-principal' to receive offers for role '*'
I0129 21:25:03.782938  2449 master.cpp:2063] Subscribing framework 'default' with checkpointing disabled and capabilities [  ]
I0129 21:25:03.783398  2449 master.hpp:1658] Sending heartbeat to a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e-0000
I0129 21:25:03.783416  2436 hierarchical.cpp:265] Added framework a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e-0000
I0129 21:25:03.783502  2436 hierarchical.cpp:1403] No resources available to allocate!
I0129 21:25:03.783565  2436 hierarchical.cpp:1498] No inverse offers to send out!
I0129 21:25:03.783604  2436 hierarchical.cpp:1096] Performed allocation for 0 slaves in 164959ns
I0129 21:25:03.784330  2434 scheduler.cpp:457] Enqueuing event SUBSCRIBED received from master@172.17.0.4:47940
I0129 21:25:03.784822  2434 scheduler.cpp:457] Enqueuing event HEARTBEAT received from master@172.17.0.4:47940
I0129 21:25:03.785562  2434 scheduler.cpp:298] Sending REQUEST call to master@172.17.0.4:47940
I0129 21:25:03.785827  2441 master_maintenance_tests.cpp:177] Ignoring HEARTBEAT event
I0129 21:25:03.786998  2436 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler'
I0129 21:25:03.787304  2443 http.cpp:503] HTTP POST for /master/api/v1/scheduler from 172.17.0.4:40321
I0129 21:25:03.787402  2443 master.cpp:2717] Processing REQUEST call for framework a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e-0000 (default)
I0129 21:25:03.787556  2447 hierarchical.cpp:589] Received resource request from framework a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e-0000
I0129 21:25:03.788048  2447 master.cpp:1025] Master terminating
I0129 21:25:03.788244  2443 hierarchical.cpp:326] Removed framework a85fe6ef-2243-41e6-b7a1-3eaf2b1b2c2e-0000
E0129 21:25:03.788923  2435 scheduler.cpp:431] End-Of-File received from master. The master closed the event stream
[       OK ] ContentType/SchedulerTest.Request/0 (66 ms)
[ RUN      ] ContentType/SchedulerTest.Request/1
I0129 21:25:03.797405  2415 leveldb.cpp:174] Opened db in 2.647289ms
I0129 21:25:03.798316  2415 leveldb.cpp:181] Compacted db in 864503ns
I0129 21:25:03.798377  2415 leveldb.cpp:196] Created db iterator in 33699ns
I0129 21:25:03.798398  2415 leveldb.cpp:202] Seeked to beginning of db in 2166ns
I0129 21:25:03.798429  2415 leveldb.cpp:271] Iterated through 0 keys in the db in 492ns
I0129 21:25:03.798490  2415 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
I0129 21:25:03.799962  2441 recover.cpp:447] Starting replica recovery
I0129 21:25:03.800261  2441 recover.cpp:473] Replica is in EMPTY status
I0129 21:25:03.801952  2440 master.cpp:374] Master d93c864a-57de-4656-a59a-e9944586c2f0 (c17944d9f94b) started on 172.17.0.4:47940
I0129 21:25:03.802278  2440 master.cpp:376] Flags at startup: --acls="" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="false" --authenticate_http="true" --authenticate_slaves="true" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/3aFYSj/credentials" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" --quiet="false" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="25secs" --registry_strict="true" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/mesos/mesos-0.28.0/_inst/share/mesos/webui" --work_dir="/tmp/3aFYSj/master" --zk_session_timeout="10secs"
I0129 21:25:03.803225  2440 master.cpp:423] Master allowing unauthenticated frameworks to register
I0129 21:25:03.803243  2440 master.cpp:426] Master only allowing authenticated slaves to register
I0129 21:25:03.803253  2440 credentials.hpp:35] Loading credentials for authentication from '/tmp/3aFYSj/credentials'
I0129 21:25:03.803625  2440 master.cpp:466] Using default 'crammd5' authenticator
I0129 21:25:03.804235  2449 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (14183)@172.17.0.4:47940
I0129 21:25:03.804321  2440 master.cpp:535] Using default 'basic' HTTP authenticator
I0129 21:25:03.804522  2440 master.cpp:569] Authorization enabled
I0129 21:25:03.805060  2449 recover.cpp:193] Received a recover response from a replica in EMPTY status
I0129 21:25:03.805647  2434 whitelist_watcher.cpp:77] No whitelist given
I0129 21:25:03.805762  2441 hierarchical.cpp:144] Initialized hierarchical allocator process
I0129 21:25:03.806457  2449 recover.cpp:564] Updating replica status to STARTING
I0129 21:25:03.807387  2435 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 544429ns
I0129 21:25:03.807497  2435 replica.cpp:320] Persisted replica status to STARTING
I0129 21:25:03.807857  2435 recover.cpp:473] Replica is in STARTING status
I0129 21:25:03.808290  2440 master.cpp:1710] The newly elected leader is master@172.17.0.4:47940 with id d93c864a-57de-4656-a59a-e9944586c2f0
I0129 21:25:03.808323  2440 master.cpp:1723] Elected as the leading master!
I0129 21:25:03.808348  2440 master.cpp:1468] Recovering from registrar
I0129 21:25:03.808653  2440 registrar.cpp:307] Recovering registrar
I0129 21:25:03.809598  2440 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (14185)@172.17.0.4:47940
I0129 21:25:03.810201  2449 recover.cpp:193] Received a recover response from a replica in STARTING status
I0129 21:25:03.810657  2442 recover.cpp:564] Updating replica status to VOTING
I0129 21:25:03.811329  2449 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 450684ns
I0129 21:25:03.811357  2449 replica.cpp:320] Persisted replica status to VOTING
I0129 21:25:03.811466  2442 recover.cpp:578] Successfully joined the Paxos group
I0129 21:25:03.811827  2442 recover.cpp:462] Recover process terminated
I0129 21:25:03.812357  2434 log.cpp:659] Attempting to start the writer
I0129 21:25:03.813952  2434 replica.cpp:493] Replica received implicit promise request from (14186)@172.17.0.4:47940 with proposal 1
I0129 21:25:03.814558  2434 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 544854ns
I0129 21:25:03.814599  2434 replica.cpp:342] Persisted promised to 1
I0129 21:25:03.816249  2444 coordinator.cpp:238] Coordinator attempting to fill missing positions
I0129 21:25:03.817891  2439 replica.cpp:388] Replica received explicit promise request from (14187)@172.17.0.4:47940 for position 0 with proposal 2
I0129 21:25:03.818724  2439 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 519660ns
I0129 21:25:03.818770  2439 replica.cpp:712] Persisted action at 0
I0129 21:25:03.820261  2439 replica.cpp:537] Replica received write request for position 0 from (14188)@172.17.0.4:47940
I0129 21:25:03.820348  2439 leveldb.cpp:436] Reading position from leveldb took 39568ns
I0129 21:25:03.820988  2439 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 469728ns
I0129 21:25:03.821030  2439 replica.cpp:712] Persisted action at 0
I0129 21:25:03.822151  2442 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
I0129 21:25:03.823731  2442 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.388275ms
I0129 21:25:03.823778  2442 replica.cpp:712] Persisted action at 0
I0129 21:25:03.823810  2442 replica.cpp:697] Replica learned NOP action at position 0
I0129 21:25:03.824764  2441 log.cpp:675] Writer started with ending position 0
I0129 21:25:03.826340  2442 leveldb.cpp:436] Reading position from leveldb took 59283ns
I0129 21:25:03.829782  2436 registrar.cpp:340] Successfully fetched the registry (0B) in 20.98304ms
I0129 21:25:03.829948  2436 registrar.cpp:439] Applied 1 operations in 41763ns; attempting to update the 'registry'
I0129 21:25:03.834053  2436 log.cpp:683] Attempting to append 170 bytes to the log
I0129 21:25:03.834347  2436 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
I0129 21:25:03.835553  2438 replica.cpp:537] Replica received write request for position 1 from (14189)@172.17.0.4:47940
I0129 21:25:03.836732  2438 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 1.124096ms
I0129 21:25:03.836778  2438 replica.cpp:712] Persisted action at 1
I0129 21:25:03.837939  2438 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
I0129 21:25:03.838721  2438 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 746304ns
I0129 21:25:03.838752  2438 replica.cpp:712] Persisted action at 1
I0129 21:25:03.838781  2438 replica.cpp:697] Replica learned APPEND action at position 1
I0129 21:25:03.841599  2444 registrar.cpp:484] Successfully updated the 'registry' in 11.56608ms
I0129 21:25:03.841804  2444 registrar.cpp:370] Successfully recovered registrar
I0129 21:25:03.842021  2438 log.cpp:702] Attempting to truncate the log to 1
I0129 21:25:03.842356  2438 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
I0129 21:25:03.842808  2444 master.cpp:1520] Recovered 0 slaves from the Registry (131B) ; allowing 10mins for slaves to re-register
I0129 21:25:03.842923  2436 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover
I0129 21:25:03.843724  2442 replica.cpp:537] Replica received write request for position 2 from (14190)@172.17.0.4:47940
I0129 21:25:03.844264  2442 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 494001ns
I0129 21:25:03.844293  2442 replica.cpp:712] Persisted action at 2
I0129 21:25:03.845108  2442 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
I0129 21:25:03.845626  2442 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 483285ns
I0129 21:25:03.845839  2442 leveldb.cpp:399] Deleting ~1 keys from leveldb took 60038ns
I0129 21:25:03.845953  2442 replica.cpp:712] Persisted action at 2
I0129 21:25:03.846070  2442 replica.cpp:697] Replica learned TRUNCATE action at position 2
I0129 21:25:03.853407  2415 scheduler.cpp:154] Version: 0.28.0
I0129 21:25:03.854439  2438 scheduler.cpp:236] New master detected at master@172.17.0.4:47940
I0129 21:25:03.858309  2434 scheduler.cpp:298] Sending SUBSCRIBE call to master@172.17.0.4:47940
I0129 21:25:03.860339  2440 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler'
I0129 21:25:03.860879  2449 http.cpp:503] HTTP POST for /master/api/v1/scheduler from 172.17.0.4:40322
I0129 21:25:03.861227  2449 master.cpp:1972] Received subscription request for HTTP framework 'default'
I0129 21:25:03.861292  2449 master.cpp:1749] Authorizing framework principal 'test-principal' to receive offers for role '*'
I0129 21:25:03.861618  2442 master.cpp:2063] Subscribing framework 'default' with checkpointing disabled and capabilities [  ]
I0129 21:25:03.862102  2437 hierarchical.cpp:265] Added framework d93c864a-57de-4656-a59a-e9944586c2f0-0000
I0129 21:25:03.862190  2437 hierarchical.cpp:1403] No resources available to allocate!
I0129 21:25:03.862216  2442 master.hpp:1658] Sending heartbeat to d93c864a-57de-4656-a59a-e9944586c2f0-0000
I0129 21:25:03.862340  2437 hierarchical.cpp:1498] No inverse offers to send out!
I0129 21:25:03.862551  2437 hierarchical.cpp:1096] Performed allocation for 0 slaves in 421278ns
I0129 21:25:03.862977  2442 scheduler.cpp:457] Enqueuing event SUBSCRIBED received from master@172.17.0.4:47940
I0129 21:25:03.863443  2437 scheduler.cpp:457] Enqueuing event HEARTBEAT received from master@172.17.0.4:47940
I0129 21:25:03.864063  2437 scheduler.cpp:298] Sending REQUEST call to master@172.17.0.4:47940
I0129 21:25:03.864563  2438 master_maintenance_tests.cpp:177] Ignoring HEARTBEAT event
I0129 21:25:03.865859  2440 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler'
I0129 21:25:03.866154  2445 http.cpp:503] HTTP POST for /master/api/v1/scheduler from 172.17.0.4:40323
I0129 21:25:03.866361  2445 master.cpp:2717] Processing REQUEST call for framework d93c864a-57de-4656-a59a-e9944586c2f0-0000 (default)
I0129 21:25:03.866485  2447 hierarchical.cpp:589] Received resource request from framework d93c864a-57de-4656-a59a-e9944586c2f0-0000
I0129 21:25:03.866775  2449 master.cpp:1025] Master terminating
I0129 21:25:03.867036  2446 hierarchical.cpp:326] Removed framework d93c864a-57de-4656-a59a-e9944586c2f0-0000
E0129 21:25:03.868137  2445 scheduler.cpp:431] End-Of-File received from master. The master closed the event stream
[       OK ] ContentType/SchedulerTest.Request/1 (80 ms)
[----------] 22 tests from ContentType/SchedulerTest (4204 ms total)

[----------] Global test environment tear-down
[==========] 969 tests from 127 test cases ran. (371788 ms total)
[  PASSED  ] 968 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] ShasumTest.SHA512SimpleFile

 1 FAILED TEST
  YOU HAVE 9 DISABLED TESTS

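The single failure in the run above is ShasumTest.SHA512SimpleFile, which, going by its name, checks the SHA-512 digest of a small file. As a hedged, stand-alone sketch of what such a check amounts to (the helper name `sha512_of_file` and the sample content are illustrative, not Mesos's actual test code):

```python
import hashlib
import os
import tempfile

def sha512_of_file(path, chunk_size=8192):
    """Stream a file through SHA-512 and return the hex digest."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a small "simple file" and verify the streamed digest matches
# hashlib's one-shot digest of the same bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello world\n")
    path = f.name
try:
    digest = sha512_of_file(path)
    assert digest == hashlib.sha512(b"hello world\n").hexdigest()
finally:
    os.unlink(path)
```

A flaky mismatch in a test like this usually points at digest formatting or file I/O rather than the hash primitive itself.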
make[4]: *** [check-local] Error 1
make[4]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
make[3]: *** [check-am] Error 2
make[3]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
make[2]: *** [check] Error 2
make[2]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/mesos/mesos-0.28.0/_build'
make: *** [distcheck] Error 1
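The cascade of `Error 1` / `Error 2` lines above is how GNU make reports a failing nested target: each sub-make's non-zero exit is surfaced by its parent until the top-level `distcheck` fails. A minimal stand-alone demo of that propagation (the `/tmp/mesos-make-demo` path and makefiles are illustrative):

```shell
# Build a two-level makefile tree whose inner 'check' target fails,
# mirroring the check-local -> check-am -> check -> distcheck cascade.
demo=/tmp/mesos-make-demo
mkdir -p "$demo/src"
printf 'check:\n\tfalse\n' > "$demo/src/Makefile"
printf 'check:\n\t$(MAKE) -C src check\n' > "$demo/Makefile"

# The inner 'false' fails; each make level prints an '*** Error' line
# and exits with status 2.
make -C "$demo" check
echo "top-level make exited with $?"
```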
+ docker rmi mesos-1454100653-8437
Untagged: mesos-1454100653-8437:latest
Deleted: 56843bcb8431aa6e75fd2c16b1cfeb04bf95aef590312e078c08f83d268873eb
Deleted: e2eb495ca9730facba159ae1e866d696808085bfbca87081667f35a7e31cf83a
Deleted: 1d67ef86c0ee98bdf7448b565b493a044b282b1551f05ea35441a70d51d89f90
Deleted: 0f534fc80e12edf0f2a091778376854bb45208c5116018fc1eaf67b819efcd00
Deleted: 8b8e476ed78b8080de130c0c2624d39fd2c1139ca44673cfbceeb998f3b5a139
Deleted: 644b3a25c2067fd37e9b133cc68fa23079eec3f3867e7795796cdb0970fa47a0
Deleted: dbce68e3f2a604976b09e6b73c6156738387e6fcd023f4de52eee2738277e9f0
Deleted: 0b200bb95bc5bba10c1d02edae3c14733d81941da6343ccc9ddc54408a8cb5e8
Deleted: b8c7e83314f28b99c1f55d160fe213b15b9a875ed95e3ec19701be9a0ba142d0
Deleted: 70905922a9ee342ab7071031ff9b8d3d0115c491a4c1e1b97b482442af6a09ec
Deleted: d5ade59ba765852960ce8e86630d3da50e6c52870bfe9912a132ebfeade9f867
Deleted: 5765ab690810c6589ef805dda56d004b12a92db8d52a53ea8ba79d7ce52a4a82
Deleted: dbb765ce6aaf6ebc6fb5e42216dc16ca2f0278b8bfad3e80373571a2e5a1aca0
Build step 'Execute shell' marked build as failure

Jenkins build is back to normal : Mesos » gcc,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,centos:7,docker||Hadoop #1589

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=docker%7C%7CHadoop/1589/changes>


Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,centos:7,docker||Hadoop #1588

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=docker%7C%7CHadoop/1588/>

------------------------------------------
[...truncated 162388 lines...]
rm -f slave/.deps/.dirstamp
rm -f slave/.dirstamp
rm -f slave/container_loggers/.deps/.dirstamp
rm -f examples/*.lo
rm -f slave/container_loggers/.dirstamp
rm -f slave/containerizer/.deps/.dirstamp
rm -f slave/containerizer/.dirstamp
rm -f exec/*.o
rm -f slave/containerizer/mesos/.deps/.dirstamp
rm -f exec/*.lo
rm -f slave/containerizer/mesos/.dirstamp
rm -f slave/containerizer/mesos/isolators/cgroups/.deps/.dirstamp
rm -f slave/containerizer/mesos/isolators/cgroups/.dirstamp
rm -f executor/*.o
rm -f slave/containerizer/mesos/isolators/filesystem/.deps/.dirstamp
rm -f slave/containerizer/mesos/isolators/filesystem/.dirstamp
rm -f slave/containerizer/mesos/isolators/namespaces/.deps/.dirstamp
rm -f slave/containerizer/mesos/isolators/namespaces/.dirstamp
rm -f executor/*.lo
rm -f slave/containerizer/mesos/isolators/network/.deps/.dirstamp
rm -f slave/containerizer/mesos/isolators/network/.dirstamp
rm -f slave/containerizer/mesos/isolators/posix/.deps/.dirstamp
rm -f files/*.o
rm -f slave/containerizer/mesos/isolators/posix/.dirstamp
rm -f slave/containerizer/mesos/provisioner/.deps/.dirstamp
rm -f slave/containerizer/mesos/provisioner/.dirstamp
rm -f files/*.lo
rm -f slave/containerizer/mesos/provisioner/appc/.deps/.dirstamp
rm -f slave/containerizer/mesos/provisioner/appc/.dirstamp
rm -f slave/containerizer/mesos/provisioner/backends/.deps/.dirstamp
rm -f hdfs/*.o
rm -f slave/containerizer/mesos/provisioner/backends/.dirstamp
rm -f slave/containerizer/mesos/provisioner/docker/.deps/.dirstamp
rm -f slave/containerizer/mesos/provisioner/docker/.dirstamp
rm -f hdfs/*.lo
rm -f slave/qos_controllers/.deps/.dirstamp
rm -f slave/qos_controllers/.dirstamp
rm -f slave/resource_estimators/.deps/.dirstamp
rm -f slave/resource_estimators/.dirstamp
rm -f health-check/*.o
rm -f state/.deps/.dirstamp
rm -f state/.dirstamp
rm -f tests/.deps/.dirstamp
rm -f hook/*.o
rm -f tests/.dirstamp
rm -f tests/common/.deps/.dirstamp
rm -f tests/common/.dirstamp
rm -f tests/containerizer/.deps/.dirstamp
rm -f hook/*.lo
rm -f tests/containerizer/.dirstamp
rm -f uri/.deps/.dirstamp
rm -f uri/.dirstamp
rm -rf ../include/mesos/.libs ../include/mesos/_libs
rm -f uri/fetchers/.deps/.dirstamp
rm -f internal/*.o
rm -f uri/fetchers/.dirstamp
rm -f usage/.deps/.dirstamp
rm -f usage/.dirstamp
rm -f v1/.deps/.dirstamp
rm -f internal/*.lo
rm -f v1/.dirstamp
rm -f version/.deps/.dirstamp
rm -f version/.dirstamp
rm -f java/jni/*.o
rm -f java/jni/*.lo
rm -rf ../include/mesos/authentication/.libs ../include/mesos/authentication/_libs
rm -f jvm/*.o
rm -rf ../include/mesos/authorizer/.libs ../include/mesos/authorizer/_libs
rm -rf ../include/mesos/containerizer/.libs ../include/mesos/containerizer/_libs
rm -f jvm/*.lo
rm -f examples/java/*.class
rm -rf ../include/mesos/docker/.libs ../include/mesos/docker/_libs
rm -f watcher/.deps/.dirstamp
rm -f jvm/org/apache/*.o
rm -f watcher/.dirstamp
rm -f java/jni/org_apache_mesos*.h
rm -rf ../include/mesos/executor/.libs ../include/mesos/executor/_libs
rm -f jvm/org/apache/*.lo
rm -f zookeeper/.deps/.dirstamp
rm -rf ../include/mesos/fetcher/.libs ../include/mesos/fetcher/_libs
rm -f zookeeper/.dirstamp
rm -rf ../include/mesos/maintenance/.libs ../include/mesos/maintenance/_libs
rm -f launcher/*.o
rm -rf ../include/mesos/master/.libs ../include/mesos/master/_libs
rm -rf ../include/mesos/module/.libs ../include/mesos/module/_libs
rm -f linux/*.o
rm -rf ../include/mesos/quota/.libs ../include/mesos/quota/_libs
rm -rf ../include/mesos/scheduler/.libs ../include/mesos/scheduler/_libs
rm -rf ../include/mesos/slave/.libs ../include/mesos/slave/_libs
rm -f linux/*.lo
rm -rf ../include/mesos/uri/.libs ../include/mesos/uri/_libs
rm -rf ../include/mesos/v1/.libs ../include/mesos/v1/_libs
rm -rf ../include/mesos/v1/executor/.libs ../include/mesos/v1/executor/_libs
rm -f linux/routing/*.o
rm -rf ../include/mesos/v1/scheduler/.libs ../include/mesos/v1/scheduler/_libs
rm -f linux/routing/*.lo
rm -rf authentication/cram_md5/.libs authentication/cram_md5/_libs
rm -f linux/routing/diagnosis/*.o
rm -rf authentication/http/.libs authentication/http/_libs
rm -f linux/routing/diagnosis/*.lo
rm -f linux/routing/filter/*.o
rm -f linux/routing/filter/*.lo
rm -rf authorizer/.libs authorizer/_libs
rm -f linux/routing/link/*.o
rm -rf authorizer/local/.libs authorizer/local/_libs
rm -rf common/.libs common/_libs
rm -f linux/routing/link/*.lo
rm -f linux/routing/queueing/*.o
rm -f linux/routing/queueing/*.lo
rm -f local/*.o
rm -rf docker/.libs docker/_libs
rm -f local/*.lo
rm -f log/*.o
rm -rf examples/.libs examples/_libs
rm -rf exec/.libs exec/_libs
rm -f log/*.lo
rm -f log/tool/*.o
rm -f log/tool/*.lo
rm -f logging/*.o
rm -rf executor/.libs executor/_libs
rm -f logging/*.lo
rm -rf files/.libs files/_libs
rm -rf hdfs/.libs hdfs/_libs
rm -f master/*.o
rm -f master/*.lo
rm -f master/allocator/*.o
rm -rf hook/.libs hook/_libs
rm -f master/allocator/*.lo
rm -rf internal/.libs internal/_libs
rm -rf java/jni/.libs java/jni/_libs
rm -f master/allocator/mesos/*.o
rm -rf jvm/.libs jvm/_libs
rm -rf jvm/org/apache/.libs jvm/org/apache/_libs
rm -f master/allocator/mesos/*.lo
rm -rf linux/.libs linux/_libs
rm -f master/allocator/sorter/drf/*.o
rm -f master/allocator/sorter/drf/*.lo
rm -rf linux/routing/.libs linux/routing/_libs
rm -f messages/*.o
rm -rf linux/routing/diagnosis/.libs linux/routing/diagnosis/_libs
rm -rf linux/routing/filter/.libs linux/routing/filter/_libs
rm -rf linux/routing/link/.libs linux/routing/link/_libs
rm -f messages/*.lo
rm -rf linux/routing/queueing/.libs linux/routing/queueing/_libs
rm -f module/*.o
rm -rf local/.libs local/_libs
rm -f module/*.lo
rm -rf log/.libs log/_libs
rm -f sched/*.o
rm -rf log/tool/.libs log/tool/_libs
rm -f sched/*.lo
rm -rf logging/.libs logging/_libs
rm -rf master/.libs master/_libs
rm -f scheduler/*.o
rm -f scheduler/*.lo
rm -f slave/*.o
rm -f slave/*.lo
rm -f slave/container_loggers/*.o
rm -f slave/container_loggers/*.lo
rm -f slave/containerizer/*.o
rm -f slave/containerizer/*.lo
rm -f slave/containerizer/mesos/*.o
rm -f slave/containerizer/mesos/*.lo
rm -f slave/containerizer/mesos/isolators/cgroups/*.o
rm -rf master/allocator/.libs master/allocator/_libs
rm -f slave/containerizer/mesos/isolators/cgroups/*.lo
rm -rf master/allocator/mesos/.libs master/allocator/mesos/_libs
rm -rf master/allocator/sorter/drf/.libs master/allocator/sorter/drf/_libs
rm -f slave/containerizer/mesos/isolators/filesystem/*.o
rm -rf messages/.libs messages/_libs
rm -f slave/containerizer/mesos/isolators/filesystem/*.lo
rm -rf module/.libs module/_libs
rm -rf sched/.libs sched/_libs
rm -f slave/containerizer/mesos/isolators/namespaces/*.o
rm -rf scheduler/.libs scheduler/_libs
rm -rf slave/.libs slave/_libs
rm -f slave/containerizer/mesos/isolators/namespaces/*.lo
rm -f slave/containerizer/mesos/isolators/network/*.o
rm -f slave/containerizer/mesos/isolators/network/*.lo
rm -f slave/containerizer/mesos/isolators/posix/*.o
rm -f slave/containerizer/mesos/isolators/posix/*.lo
rm -f slave/containerizer/mesos/provisioner/*.o
rm -f slave/containerizer/mesos/provisioner/*.lo
rm -f slave/containerizer/mesos/provisioner/appc/*.o
rm -f slave/containerizer/mesos/provisioner/appc/*.lo
rm -rf slave/container_loggers/.libs slave/container_loggers/_libs
rm -f slave/containerizer/mesos/provisioner/backends/*.o
rm -f slave/containerizer/mesos/provisioner/backends/*.lo
rm -f slave/containerizer/mesos/provisioner/docker/*.o
rm -rf slave/containerizer/.libs slave/containerizer/_libs
rm -f slave/containerizer/mesos/provisioner/docker/*.lo
rm -f slave/qos_controllers/*.o
rm -f slave/qos_controllers/*.lo
rm -rf slave/containerizer/mesos/.libs slave/containerizer/mesos/_libs
rm -f slave/resource_estimators/*.o
rm -f slave/resource_estimators/*.lo
rm -rf slave/containerizer/mesos/isolators/cgroups/.libs slave/containerizer/mesos/isolators/cgroups/_libs
rm -f state/*.o
rm -rf slave/containerizer/mesos/isolators/filesystem/.libs slave/containerizer/mesos/isolators/filesystem/_libs
rm -rf slave/containerizer/mesos/isolators/namespaces/.libs slave/containerizer/mesos/isolators/namespaces/_libs
rm -f state/*.lo
rm -rf slave/containerizer/mesos/isolators/network/.libs slave/containerizer/mesos/isolators/network/_libs
rm -rf slave/containerizer/mesos/isolators/posix/.libs slave/containerizer/mesos/isolators/posix/_libs
rm -rf slave/containerizer/mesos/provisioner/.libs slave/containerizer/mesos/provisioner/_libs
rm -f tests/*.o
rm -rf slave/containerizer/mesos/provisioner/appc/.libs slave/containerizer/mesos/provisioner/appc/_libs
rm -rf slave/containerizer/mesos/provisioner/backends/.libs slave/containerizer/mesos/provisioner/backends/_libs
rm -rf slave/containerizer/mesos/provisioner/docker/.libs slave/containerizer/mesos/provisioner/docker/_libs
rm -rf slave/qos_controllers/.libs slave/qos_controllers/_libs
rm -rf slave/resource_estimators/.libs slave/resource_estimators/_libs
rm -rf state/.libs state/_libs
rm -rf uri/.libs uri/_libs
rm -rf uri/fetchers/.libs uri/fetchers/_libs
rm -rf usage/.libs usage/_libs
rm -rf v1/.libs v1/_libs
rm -rf version/.libs version/_libs
rm -rf watcher/.libs watcher/_libs
rm -rf zookeeper/.libs zookeeper/_libs
rm -f tests/common/*.o
rm -f tests/containerizer/*.o
rm -f uri/*.o
rm -f uri/*.lo
rm -f uri/fetchers/*.o
rm -f uri/fetchers/*.lo
rm -f usage/*.o
rm -f usage/*.lo
rm -f v1/*.o
rm -f v1/*.lo
rm -f version/*.o
rm -f version/*.lo
rm -f watcher/*.o
rm -f watcher/*.lo
rm -f zookeeper/*.o
rm -f zookeeper/*.lo
rm -rf ../include/mesos/.deps ../include/mesos/authentication/.deps ../include/mesos/authorizer/.deps ../include/mesos/containerizer/.deps ../include/mesos/docker/.deps ../include/mesos/executor/.deps ../include/mesos/fetcher/.deps ../include/mesos/maintenance/.deps ../include/mesos/master/.deps ../include/mesos/module/.deps ../include/mesos/quota/.deps ../include/mesos/scheduler/.deps ../include/mesos/slave/.deps ../include/mesos/uri/.deps ../include/mesos/v1/.deps ../include/mesos/v1/executor/.deps ../include/mesos/v1/scheduler/.deps authentication/cram_md5/.deps authentication/http/.deps authorizer/.deps authorizer/local/.deps cli/.deps common/.deps docker/.deps examples/.deps exec/.deps executor/.deps files/.deps hdfs/.deps health-check/.deps hook/.deps internal/.deps java/jni/.deps jvm/.deps jvm/org/apache/.deps launcher/.deps linux/.deps linux/routing/.deps linux/routing/diagnosis/.deps linux/routing/filter/.deps linux/routing/link/.deps linux/routing/queueing/.deps local/.deps log/.deps log/tool/.deps logging/.deps master/.deps master/allocator/.deps master/allocator/mesos/.deps master/allocator/sorter/drf/.deps messages/.deps module/.deps sched/.deps scheduler/.deps slave/.deps slave/container_loggers/.deps slave/containerizer/.deps slave/containerizer/mesos/.deps slave/containerizer/mesos/isolators/cgroups/.deps slave/containerizer/mesos/isolators/filesystem/.deps slave/containerizer/mesos/isolators/namespaces/.deps slave/containerizer/mesos/isolators/network/.deps slave/containerizer/mesos/isolators/posix/.deps slave/containerizer/mesos/provisioner/.deps slave/containerizer/mesos/provisioner/appc/.deps slave/containerizer/mesos/provisioner/backends/.deps slave/containerizer/mesos/provisioner/docker/.deps slave/qos_controllers/.deps slave/resource_estimators/.deps state/.deps tests/.deps tests/common/.deps tests/containerizer/.deps uri/.deps uri/fetchers/.deps usage/.deps v1/.deps version/.deps watcher/.deps zookeeper/.deps
rm -f Makefile
make[2]: Leaving directory `/mesos/mesos-0.28.0/_build/src'
rm -f config.status config.cache config.log configure.lineno config.status.lineno
rm -f Makefile
make[1]: Leaving directory `/mesos/mesos-0.28.0/_build'
if test -d "mesos-0.28.0"; then find "mesos-0.28.0" -type d ! -perm -200 -exec chmod u+w {} ';' && rm -rf "mesos-0.28.0" || { sleep 5 && rm -rf "mesos-0.28.0"; }; else :; fi
==============================================
mesos-0.28.0 archives ready for distribution: 
mesos-0.28.0.tar.gz
==============================================
+ docker rmi mesos-1454104055-28274
Error response from daemon: conflict: unable to delete ee3e49bbf0d4 (must be forced) - image is being used by stopped container 007a621048ef
Error: failed to remove images: [mesos-1454104055-28274]
Build step 'Execute shell' marked build as failure
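The failure above is the standard Docker conflict: `docker rmi` refuses to delete an image while any container (even a stopped one) still references it. A minimal cleanup sketch for the post-build step, assuming the image name from the log and a Docker CLI that supports the `ancestor` filter, would remove the leftover containers first:

```shell
#!/bin/sh
# Sketch of a cleanup step that avoids the "image is being used by stopped
# container" conflict. IMAGE is taken from the log above; everything else is
# illustrative, not the actual Jenkins job configuration.
IMAGE=mesos-1454104055-28274

# Remove any (stopped) containers that were created from this image.
# -a includes stopped containers; -q prints only IDs; xargs -r skips the
# docker rm call entirely when no containers match.
docker ps -aq --filter "ancestor=${IMAGE}" | xargs -r docker rm

# With no containers referencing it, the image can now be removed
# without resorting to --force.
docker rmi "${IMAGE}"
```

Using `docker rmi -f` alone would also clear the error, but it leaves the stopped container (`007a621048ef` in this log) behind; removing the container first keeps the build host clean.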