Posted to user@mesos.apache.org by "Hepple, Robert" <RH...@tnsi.com> on 2015/01/30 06:06:27 UTC

Is mesos spamming me?

I have a single mesos master and 19 slaves. I have several jenkins
servers making on-demand requests using the jenkins-mesos plugin - it
all seems to be working correctly: mesos slaves are assigned to the
jenkins servers, they execute jobs and eventually they detach.

Except.

Except the jenkins servers are getting spammed about every 1 or 2
seconds with this in /var/log/jenkins/jenkins.log:

Jan 30, 2015 2:59:15 PM org.jenkinsci.plugins.mesos.JenkinsScheduler matches
WARNING: Ignoring disk resources from offer
Jan 30, 2015 2:59:15 PM org.jenkinsci.plugins.mesos.JenkinsScheduler matches
INFO: Ignoring ports resources from offer
Jan 30, 2015 2:59:15 PM org.jenkinsci.plugins.mesos.JenkinsScheduler matches
INFO: Offer not sufficient for slave request:
[name: "cpus"
type: SCALAR
scalar {
  value: 1.6
}
role: "*"
, name: "mem"
type: SCALAR
scalar {
  value: 455.0
}
role: "*"
, name: "disk"
type: SCALAR
scalar {
  value: 32833.0
}
role: "*"
, name: "ports"
type: RANGES
ranges {
  range {
    begin: 31000
    end: 32000
  }
}
role: "*"
]
[]
Requested for Jenkins slave:
  cpus: 0.2
  mem:  704.0
  attributes:  


The mesos master side is also hitting the logs with, e.g.:

I0130 14:59:43.789172 10828 master.cpp:2344] Processing reply for offers: [ 20150129-120204-1408111020-5050-10811-O665754 ] on slave 20150129-120204-1408111020-5050-10811-S2 at slave(1)@172.17.238.75:5051 (ci00bldslv15v.ss.corp.cnp.tnsi.com) for framework 20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at scheduler-1aab9acc-fba9-4123-b1ac-56ce74c0365b@172.17.152.201:54503
I0130 14:59:43.789654 10828 master.cpp:2344] Processing reply for offers: [ 20150129-120204-1408111020-5050-10811-O665755 ] on slave 20150129-120204-1408111020-5050-10811-S13 at slave(1)@172.17.238.98:5051 (ci00bldslv12v.ss.corp.cnp.tnsi.com) for framework 20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at scheduler-1aab9acc-fba9-4123-b1ac-56ce74c0365b@172.17.152.201:54503
I0130 14:59:43.790004 10828 master.cpp:2344] Processing reply for offers: [ 20150129-120204-1408111020-5050-10811-O665756 ] on slave 20150129-120204-1408111020-5050-10811-S11 at slave(1)@172.17.238.95:5051 (ci00bldslv11v.ss.corp.cnp.tnsi.com) for framework 20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at scheduler-1aab9acc-fba9-4123-b1ac-56ce74c0365b@172.17.152.201:54503
I0130 14:59:43.790349 10828 master.cpp:2344] Processing reply for offers: [ 20150129-120204-1408111020-5050-10811-O665757 ] on slave 20150129-120204-1408111020-5050-10811-S7 at slave(1)@172.17.238.108:5051 (ci00bldslv19v.ss.corp.cnp.tnsi.com) for framework 20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at scheduler-1aab9acc-fba9-4123-b1ac-56ce74c0365b@172.17.152.201:54503
I0130 14:59:43.790670 10828 master.cpp:2344] Processing reply for offers: [ 20150129-120204-1408111020-5050-10811-O665758 ] on slave 20150129-120204-1408111020-5050-10811-S14 at slave(1)@172.17.238.78:5051 (ci00bldslv06v.ss.corp.cnp.tnsi.com) for framework 20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at scheduler-1aab9acc-fba9-4123-b1ac-56ce74c0365b@172.17.152.201:54503
I0130 14:59:43.791192 10828 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S2 from framework 20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.791507 10828 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S13 from framework 20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.791857 10828 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S11 from framework 20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.792145 10828 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S7 from framework 20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.792417 10828 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1.6; mem(*):455; disk(*):32833; ports(*):[31000-32000] (total allocatable: cpus(*):1.6; mem(*):455; disk(*):32833; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S14 from framework 20150129-120204-1408111020-5050-10811-0001


Is that normal? Certainly it's not desirable, especially as jenkins is
also throwing a new config.xml file into the config-history directory on
every iteration and filling up the disk:

jenkins/config-history/config/2015-01-30_10-49-10
jenkins/config-history/config/2015-01-30_10-49-11
jenkins/config-history/config/2015-01-30_10-49-12
jenkins/config-history/config/2015-01-30_10-49-13
jenkins/config-history/config/2015-01-30_10-49-14
jenkins/config-history/config/2015-01-30_10-49-15
jenkins/config-history/config/2015-01-30_10-49-16
jenkins/config-history/config/2015-01-30_10-49-17

Any advice? I'm not too concerned about the log spamming, but the
version history spamming is serious.


Thanks


Bob

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Fri, 2015-01-30 at 10:00 +0100, Geoffroy Jabouley wrote:
> Hello
> 
> 
> The message means that the received resource offer from Mesos cluster
> does not meet your jenkins slave requirements (memory or cpu). This is
> normal message.
> 

This puzzles me as the jenkins-mesos plugin is only specifying 0.1 CPU
and 512MB RAM. Our slaves have 2GB RAM each - so they should certainly
be 'adequate' - why would they be rejected? How can I make them
'acceptable'?

> 
> 
> 
> you can filter logs from specific classes in Jenkins
> 
>      1. from the webUI, in the "jenkins_url"/log/levels panel, set the
>         logging level for org.jenkinsci.plugins.mesos.JenkinsScheduler
>         to WARNING
>      2. use a logging.properties file
> 

The logs are not what worries us - it's having Jenkins throw a
config-history version every time it talks to the mesos master. Also,
the mesos slaves often go 'offline'.


Thanks


Bob


Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Mon, 2015-02-02 at 17:41 -0800, Vinod Kone wrote:
> 
> On Mon, Feb 2, 2015 at 5:34 PM, Hepple, Robert <RH...@tnsi.com>
> wrote:
>         No - it is not 'restricted' to a label but the only executors
>         this
>         jenkins has are 'mesos' ones.
>         
> 
> You need to restrict it to 'mesos'. That is how the plugin is
> implemented. It will look into offers from mesos *iff* it has any jobs
> labeled 'mesos'.

OK - so I tried that and it went through! Thanks for that. 

It's a bit of unexpected behaviour really, or am I being unreasonable?

I'm 100% sure that that is not the behaviour I experienced before, e.g.
on our build server:

mesos slaves have the labels "mesos RHEL6" and we have jobs with the
"Restrict where this project can be run" set to "!dedicated" - i.e.
without a specific mesos label. These have run on mesos. Definitely.
Positively. Let me double-check again - yup!

Perhaps it takes the set of known executor labels (in our case something
like "Master RHEL5 RHEL6 PERF PROD dedicated swarm mesos") and does a
!dedicated on that. So as long as there are jobs labelled 'mesos'
somewhere, then !dedicated will work. Hmmm. I'll try that out on the test
server ... 

later ... 

No, it doesn't work with "!dedicated" while another job restricted to
"mesos" works fine. Something deeper.


BTW - thanks to mesos we are getting our "build world" down from 4 hours
to 20m!!! Hope I can shake out these relatively minor problems so that
we can really get it humming.



Cheers


Bob




Re: Is mesos spamming me?

Posted by Vinod Kone <vi...@apache.org>.
On Mon, Feb 2, 2015 at 5:34 PM, Hepple, Robert <RH...@tnsi.com> wrote:

> No - it is not 'restricted' to a label but the only executors this
> jenkins has are 'mesos' ones.
>

You need to restrict it to 'mesos'. That is how the plugin is implemented.
It will look into offers from mesos *iff* it has any jobs labeled 'mesos'.
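
In job config.xml terms that is the usual "Restrict where this project
can be run" setting - a minimal sketch for a freestyle job, using the
stock Jenkins job elements rather than anything plugin-specific:

<assignedNode>mesos</assignedNode>
<canRoam>false</canRoam>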

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Mon, 2015-02-02 at 17:20 -0800, Vinod Kone wrote:
> 
> On Mon, Feb 2, 2015 at 5:04 PM, Hepple, Robert <RH...@tnsi.com>
> wrote:
>         As I say, it can't seem to get a slave started, the jenkins
>         job is just
>         hanging waiting for an executor (this test server presently
>         only has
>         mesos slaves as executors).
>         
> 
> Did you set the "mesos" label on the job?

No - it is not 'restricted' to a label but the only executors this
jenkins has are 'mesos' ones.

From the logs: I still have only one jenkins server actively making
requests on framework 20150202-114251-1408111020-5050-25794-0000 and the
mesos cluster is idle:

/var/log/jenkins/jenkins.log:

Feb 03, 2015 11:29:58 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 3
Feb 03, 2015 11:29:59 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 5
Feb 03, 2015 11:30:00 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 5
Feb 03, 2015 11:30:01 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 4
Feb 03, 2015 11:30:02 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 2
Feb 03, 2015 11:30:03 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 3
Feb 03, 2015 11:30:04 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 5
Feb 03, 2015 11:30:05 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 5
Feb 03, 2015 11:30:06 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 4

mesos-master.INFO: very chatty log!
I0203 11:29:58.519286 25808 master.cpp:3843] Sending 2 offers to framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:58.519865 25808 master.cpp:3843] Sending 3 offers to framework 20150202-114251-1408111020-5050-25794-0001 (Jenkins Scheduler) at scheduler-20f04e73-8a14-4ed6-97c1-05049c636b88@172.17.238.100:44828
I0203 11:29:58.522814 25809 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254435 ] on slave 20150129-120204-1408111020-5050-10811-S3 at slave(1)@172.17.238.105:5051 (ci00bldslv16v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:58.523115 25809 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254436 ] on slave 20150129-120204-1408111020-5050-10811-S12 at slave(1)@172.17.238.84:5051 (ci00bldslv08v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:58.523596 25809 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254437 ] on slave 20150129-120204-1408111020-5050-10811-S8 at slave(1)@172.17.238.104:5051 (ci00bldslv09v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0001 (Jenkins Scheduler) at scheduler-20f04e73-8a14-4ed6-97c1-05049c636b88@172.17.238.100:44828
I0203 11:29:58.523980 25809 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254438 ] on slave 20150129-120204-1408111020-5050-10811-S16 at slave(1)@172.17.238.90:5051 (ci00bldslv14v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0001 (Jenkins Scheduler) at scheduler-20f04e73-8a14-4ed6-97c1-05049c636b88@172.17.238.100:44828
I0203 11:29:58.524466 25809 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254439 ] on slave 20150129-120204-1408111020-5050-10811-S2 at slave(1)@172.17.238.75:5051 (ci00bldslv15v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0001 (Jenkins Scheduler) at scheduler-20f04e73-8a14-4ed6-97c1-05049c636b88@172.17.238.100:44828
I0203 11:29:58.524162 25807 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1861; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1861; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S3 from framework 20150202-114251-1408111020-5050-25794-0000
I0203 11:29:58.525182 25807 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1863; disk(*):32833; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1863; disk(*):32833; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S12 from framework 20150202-114251-1408111020-5050-25794-0000
I0203 11:29:58.525475 25807 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1863; disk(*):32833; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1863; disk(*):32833; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S8 from framework 20150202-114251-1408111020-5050-25794-0001
I0203 11:29:58.525740 25807 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1.8; mem(*):1157; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):1.8; mem(*):1157; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S16 from framework 20150202-114251-1408111020-5050-25794-0001
I0203 11:29:58.526021 25807 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1861; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1861; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S2 from framework 20150202-114251-1408111020-5050-25794-0001
I0203 11:29:59.523082 25810 master.cpp:3843] Sending 3 offers to framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:59.523748 25810 master.cpp:3843] Sending 5 offers to framework 20150202-114251-1408111020-5050-25794-0001 (Jenkins Scheduler) at scheduler-20f04e73-8a14-4ed6-97c1-05049c636b88@172.17.238.100:44828
I0203 11:29:59.526736 25806 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254440 ] on slave 20150129-120204-1408111020-5050-10811-S15 at slave(1)@172.17.238.86:5051 (ci00bldslv10v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:59.527063 25806 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254441 ] on slave 20150129-120204-1408111020-5050-10811-S5 at slave(1)@172.17.238.93:5051 (ci00bldslv13v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:59.527453 25806 master.cpp:2344] Processing reply for offers: [ 20150202-114251-1408111020-5050-25794-O254442 ] on slave 20150129-120204-1408111020-5050-10811-S7 at slave(1)@172.17.238.108:5051 (ci00bldslv19v.ss.corp.cnp.tnsi.com) for framework 20150202-114251-1408111020-5050-25794-0000 (Jenkins Scheduler) at scheduler-7ef71bb0-cb91-400b-85b6-faa4f88953bd@172.17.152.168:49673
I0203 11:29:59.527925 25806 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1861; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1861; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S15 from framework 20150202-114251-1408111020-5050-25794-0000
... repeats endlessly ...

Re: Is mesos spamming me?

Posted by Vinod Kone <vi...@apache.org>.
On Mon, Feb 2, 2015 at 5:04 PM, Hepple, Robert <RH...@tnsi.com> wrote:

> As I say, it can't seem to get a slave started, the jenkins job is just
> hanging waiting for an executor (this test server presently only has
> mesos slaves as executors).
>

Did you set the "mesos" label on the job?

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Mon, 2015-02-02 at 10:43 -0800, Adam Bordelon wrote:
> The Jenkins framework may not be properly declining offers, nor
> merging new offers with other outstanding offers from the same slave.
> For example, if two smaller (455MB) tasks on the same slave complete
> in sequence, your Jenkins framework could get two separate 455MB
> offers, but you could merge them and launch a single task that uses
> 910MB.
> 
> 
> On Sun, Feb 1, 2015 at 11:50 PM, Dick Davies <di...@hellooperator.net>
> wrote:
>         The offer is only for 455 Mb of RAM. You can check that in the
>         slave UI,
>         but it looks like you have other tasks running that are using
>         some of that
>         1863Mb.
>         
>         On 2 February 2015 at 05:11, Hepple, Robert <RH...@tnsi.com>
>         wrote:
>         
>         > Yeah but ... the slave is reporting 1863Mb RAM and 2 CPUS -
>         so how come
>         > that is rejected by jenkins which is asking for the default
>         0.1 cpu and
>         > 512Mb RAM???
>         >

At the jenkins server end of things, I have specified 1 executor per
slave - so I don't think we should be getting this sort of interference
between tasks.

The mesos cluster is completely idle at the moment. I have our main
build server under no (mesos) load - and mysteriously it stopped getting
spammed with resourceOffers yesterday:

Feb 02, 2015 11:42:34 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 2

The last config file change was thrown at 2015-02-03_00-15-39

I'll see what happens next time we have a build storm and mesos is put
under load again.
===================================

Meanwhile, I have another test jenkins server that can't get a mesos
slave for love or money.

Its framework on the mesos server is reporting the following
continuously:

I0203 10:36:51.974747 25811 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1863; disk(*):32961; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1863; disk(*):32961; ports(*):[31000-32000]) on slave 20150129-120204-1408111020-5050-10811-S18 from framework 20150202-114251-1408111020-5050-25794-0001

At the jenkins end, this is posted every 1-4 secs:

Feb 03, 2015 10:59:05 AM org.jenkinsci.plugins.mesos.JenkinsScheduler resourceOffers
INFO: Received offers 19

It is _not_ throwing config file changes!!

As I say, it can't seem to get a slave started, the jenkins job is just
hanging waiting for an executor (this test server presently only has
mesos slaves as executors).



Sorry for all the noise on this and thanks for the kind responses from
all so far!!


Cheers


Bob

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Tue, 2015-02-03 at 02:33 +0000, Hepple, Robert wrote:
> On Mon, 2015-02-02 at 11:43 -0800, Vinod Kone wrote:
> > The "config-history" thing is interesting. Do you happen to know when
> > it gets generated? Are the config.xml files different in each of those
> > directories?
> 
> I did a bit more digging and reduced the number of config.xml change
> files to about 4500 by removing all those with no actual change with:
> 
> cd /mnt/storage/jenkins/config-history/config
> ls -r1| ( PREV=; while read DIR; do [[ "$PREV" ]] || { PREV=$DIR; continue; }; diff -q $PREV/config.xml $DIR/config.xml && rm -rf $PREV; PREV=$DIR; done )

In case anyone needs it, this gets rid of jenkins config.xml change
files that only differ in their mesos slaves:

cd /mnt/storage/jenkins/config-history/config
ls -r1| ( PREV=; while read DIR; do [[ "$PREV" ]] || { PREV=$DIR; continue; }; if diff -q <( xmlstarlet ed --delete //slaves/org.jenkinsci.plugins.mesos.MesosSlave $PREV/config.xml) <( xmlstarlet ed --delete //slaves/org.jenkinsci.plugins.mesos.MesosSlave $DIR/config.xml ) >/dev/null; then echo "deleting $PREV"; rm -rf $PREV; else echo "keeping $DIR"; fi; PREV=$DIR; done )

Hope it helps someone!


Cheers


Bob

Re: Is mesos spamming me?

Posted by Vinod Kone <vi...@apache.org>.
Can you ask on jenkins user list? You'll probably get quicker response
there.

On Mon, Feb 2, 2015 at 6:33 PM, Hepple, Robert <RH...@tnsi.com> wrote:

> On Mon, 2015-02-02 at 11:43 -0800, Vinod Kone wrote:
> > The "config-history" thing is interesting. Do you happen to know when
> > it gets generated? Are the config.xml files different in each of those
> > directories?
>
> I did a bit more digging and reduced the number of config.xml change
> files to about 4500 by removing all those with no actual change with:
>
> cd /mnt/storage/jenkins/config-history/config
> ls -r1| ( PREV=; while read DIR; do [[ "$PREV" ]] || { PREV=$DIR;
> continue; }; diff -q $PREV/config.xml $DIR/config.xml && rm -rf $PREV;
> PREV=$DIR; done )
>
> that left me with actual changes caused by mesos slaves coming and
> going: eg:
>
> [snip: the full config.xml diff, shown in the original message below]
>
> ... it's not really desirable to have these automatic and temporary
> changes recorded in config-history - is there some way to stop them?
>
> Cheers
>
>
>
> Bob
>

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Mon, 2015-02-02 at 11:43 -0800, Vinod Kone wrote:
> The "config-history" thing is interesting. Do you happen to know when
> it gets generated? Are the config.xml files different in each of those
> directories?

I did a bit more digging and reduced the number of config.xml change
files to about 4500 by removing all those with no actual change with:

cd /mnt/storage/jenkins/config-history/config
ls -r1| ( PREV=; while read DIR; do [[ "$PREV" ]] || { PREV=$DIR; continue; }; diff -q $PREV/config.xml $DIR/config.xml && rm -rf $PREV; PREV=$DIR; done )

That left me with actual changes caused by mesos slaves coming and
going, e.g.:

[jenkins@ci01bldmst01v config]$ diff <( xmlstarlet fo 2015-02-03_00-15-35/config.xml) <( xmlstarlet fo 2015-02-03_00-11-28/config.xml ) 
396c396,497
<   <slaves/>
---
>   <slaves>
>     <org.jenkinsci.plugins.mesos.MesosSlave plugin="mesos@0.5.0">
>       <name>mesos-jenkins-125029f9-ce19-4ac8-a095-62ead45a9f34</name>
>       <description>mesos RHEL6</description>
>       <remoteFS>jenkins</remoteFS>
>       <numExecutors>1</numExecutors>
>       <mode>NORMAL</mode>
>       <retentionStrategy class="org.jenkinsci.plugins.mesos.MesosRetentionStrategy">
>         <idleTerminationMinutes>3</idleTerminationMinutes>
>       </retentionStrategy>
>       <launcher class="org.jenkinsci.plugins.mesos.MesosComputerLauncher">
>         <state>INIT</state>
>         <name>mesos-jenkins-125029f9-ce19-4ac8-a095-62ead45a9f34</name>
>       </launcher>
>       <label>mesos RHEL6</label>
>       <nodeProperties/>
>       <userId>anonymous</userId>
>       <slaveInfo reference="../../../clouds/org.jenkinsci.plugins.mesos.MesosCloud/slaveInfos/org.jenkinsci.plugins.mesos.MesosSlaveInfo"/>
>       <cpus>0.2</cpus>
>       <mem>640</mem>
>     </org.jenkinsci.plugins.mesos.MesosSlave>
>     <org.jenkinsci.plugins.mesos.MesosSlave plugin="mesos@0.5.0">
>       <name>mesos-jenkins-7a325b77-1515-4105-8bda-ce8c90515e6c</name>
>       <description>mesos RHEL6</description>
>       <remoteFS>jenkins</remoteFS>
>       <numExecutors>1</numExecutors>
>       <mode>NORMAL</mode>
>       <retentionStrategy class="org.jenkinsci.plugins.mesos.MesosRetentionStrategy">
>         <idleTerminationMinutes>3</idleTerminationMinutes>
>       </retentionStrategy>
>       <launcher class="org.jenkinsci.plugins.mesos.MesosComputerLauncher">
>         <state>INIT</state>
>         <name>mesos-jenkins-7a325b77-1515-4105-8bda-ce8c90515e6c</name>
>       </launcher>
>       <label>mesos RHEL6</label>
>       <nodeProperties/>
>       <userId>anonymous</userId>
>       <slaveInfo reference="../../../clouds/org.jenkinsci.plugins.mesos.MesosCloud/slaveInfos/org.jenkinsci.plugins.mesos.MesosSlaveInfo"/>
>       <cpus>0.2</cpus>
>       <mem>640</mem>
>     </org.jenkinsci.plugins.mesos.MesosSlave>
>     <org.jenkinsci.plugins.mesos.MesosSlave plugin="mesos@0.5.0">
>       <name>mesos-jenkins-e52d98cf-fcd5-4b31-8fe3-5b48d58913b8</name>
>       <description>mesos RHEL6</description>
>       <remoteFS>jenkins</remoteFS>
>       <numExecutors>1</numExecutors>
>       <mode>NORMAL</mode>
>       <retentionStrategy class="org.jenkinsci.plugins.mesos.MesosRetentionStrategy">
>         <idleTerminationMinutes>3</idleTerminationMinutes>
>       </retentionStrategy>
>       <launcher class="org.jenkinsci.plugins.mesos.MesosComputerLauncher">
>         <state>INIT</state>
>         <name>mesos-jenkins-e52d98cf-fcd5-4b31-8fe3-5b48d58913b8</name>
>       </launcher>
>       <label>mesos RHEL6</label>
>       <nodeProperties/>
>       <userId>anonymous</userId>
>       <slaveInfo reference="../../../clouds/org.jenkinsci.plugins.mesos.MesosCloud/slaveInfos/org.jenkinsci.plugins.mesos.MesosSlaveInfo"/>
>       <cpus>0.2</cpus>
>       <mem>640</mem>
>     </org.jenkinsci.plugins.mesos.MesosSlave>
>     <org.jenkinsci.plugins.mesos.MesosSlave plugin="mesos@0.5.0">
>       <name>mesos-jenkins-a5e52cdb-ac59-402d-bd69-6eeb284a3fbd</name>
>       <description>mesos RHEL6</description>
>       <remoteFS>jenkins</remoteFS>
>       <numExecutors>1</numExecutors>
>       <mode>NORMAL</mode>
>       <retentionStrategy class="org.jenkinsci.plugins.mesos.MesosRetentionStrategy">
>         <idleTerminationMinutes>3</idleTerminationMinutes>
>       </retentionStrategy>
>       <launcher class="org.jenkinsci.plugins.mesos.MesosComputerLauncher">
>         <state>INIT</state>
>         <name>mesos-jenkins-a5e52cdb-ac59-402d-bd69-6eeb284a3fbd</name>
>       </launcher>
>       <label>mesos RHEL6</label>
>       <nodeProperties/>
>       <userId>anonymous</userId>
>       <slaveInfo reference="../../../clouds/org.jenkinsci.plugins.mesos.MesosCloud/slaveInfos/org.jenkinsci.plugins.mesos.MesosSlaveInfo"/>
>       <cpus>0.2</cpus>
>       <mem>640</mem>
>     </org.jenkinsci.plugins.mesos.MesosSlave>
>     <org.jenkinsci.plugins.mesos.MesosSlave plugin="mesos@0.5.0">
>       <name>mesos-jenkins-d62cea3b-cee1-427b-85cb-8326bb8ed868</name>
>       <description>mesos RHEL6</description>
>       <remoteFS>jenkins</remoteFS>
>       <numExecutors>1</numExecutors>
>       <mode>NORMAL</mode>
>       <retentionStrategy class="org.jenkinsci.plugins.mesos.MesosRetentionStrategy">
>         <idleTerminationMinutes>3</idleTerminationMinutes>
>       </retentionStrategy>
>       <launcher class="org.jenkinsci.plugins.mesos.MesosComputerLauncher">
>         <state>INIT</state>
>         <name>mesos-jenkins-d62cea3b-cee1-427b-85cb-8326bb8ed868</name>
>       </launcher>
>       <label>mesos RHEL6</label>
>       <nodeProperties/>
>       <userId>anonymous</userId>
>       <slaveInfo reference="../../../clouds/org.jenkinsci.plugins.mesos.MesosCloud/slaveInfos/org.jenkinsci.plugins.mesos.MesosSlaveInfo"/>
>       <cpus>0.2</cpus>
>       <mem>640</mem>
>     </org.jenkinsci.plugins.mesos.MesosSlave>
>   </slaves>

... it's not really desirable to have these automatic and temporary
changes recorded in config-history - is there some way to stop them?

Cheers



Bob

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Mon, 2015-02-02 at 11:43 -0800, Vinod Kone wrote:
> The "config-history" thing is interesting. Do you happen to know when
> it gets generated? Are the config.xml files different in each of those
> directories?

Hi,

Jenkins (version 1.583) is throwing a new config.xml version about once
a second. There is no diff between the old and new config files. The
history.xml file shows just a timestamp diff, e.g.:

[jenkins@ci01bldmst01v config]$ diff 2015-02-03_00-15-38/config.xml 2015-02-03_00-15-39/config.xml 
[jenkins@ci01bldmst01v config]$ diff 2015-02-03_00-15-38/history.xml 2015-02-03_00-15-39/history.xml 
6c6
<   <timestamp>2015-02-03_00-15-38</timestamp>
---
>   <timestamp>2015-02-03_00-15-39</timestamp>

I currently have 12502 versions of my config file in 1.5GB!!! I'm gonna
have to think up a way to clean it all up soon.


Cheers


Bob

Re: Is mesos spamming me?

Posted by Vinod Kone <vi...@apache.org>.
The "config-history" thing is interesting. Do you happen to know when it
gets generated? Are the config.xml files different in each of those
directories?

On Mon, Feb 2, 2015 at 10:43 AM, Adam Bordelon <ad...@mesosphere.io> wrote:

> The Jenkins framework may not be properly declining offers, nor merging
> new offers with other outstanding offers from the same slave. For example,
> if two smaller (455MB) tasks on the same slave complete in sequence, your
> Jenkins framework could get two separate 455MB offers, but you could merge
> them and launch a single task that uses 910MB.
>
> On Sun, Feb 1, 2015 at 11:50 PM, Dick Davies <di...@hellooperator.net>
> wrote:
>
>> The offer is only for 455 Mb of RAM. You can check that in the slave UI,
>> but it looks like you have other tasks running that are using some of that
>> 1863Mb.
>>
>> On 2 February 2015 at 05:11, Hepple, Robert <RH...@tnsi.com> wrote:
>>
>> > Yeah but ... the slave is reporting 1863Mb RAM and 2 CPUS - so how come
>> > that is rejected by jenkins which is asking for the default 0.1 cpu and
>> > 512Mb RAM???
>> >
>> >
>> > Thanks
>> >
>> >
>> > Bob
>>
>
>

Re: Is mesos spamming me?

Posted by Adam Bordelon <ad...@mesosphere.io>.
The Jenkins framework may not be properly declining offers, nor merging new
offers with other outstanding offers from the same slave. For example, if
two smaller (455MB) tasks on the same slave complete in sequence, your
Jenkins framework could get two separate 455MB offers, but you could merge
them and launch a single task that uses 910MB.

On Sun, Feb 1, 2015 at 11:50 PM, Dick Davies <di...@hellooperator.net> wrote:

> The offer is only for 455 Mb of RAM. You can check that in the slave UI,
> but it looks like you have other tasks running that are using some of that
> 1863Mb.
>
> On 2 February 2015 at 05:11, Hepple, Robert <RH...@tnsi.com> wrote:
>
> > Yeah but ... the slave is reporting 1863Mb RAM and 2 CPUS - so how come
> > that is rejected by jenkins which is asking for the default 0.1 cpu and
> > 512Mb RAM???
> >
> >
> > Thanks
> >
> >
> > Bob
>

Re: Is mesos spamming me?

Posted by Dick Davies <di...@hellooperator.net>.
The offer is only for 455MB of RAM. You can check that in the slave UI,
but it looks like you have other tasks running that are using some of that
1863MB.
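
For example (a sketch, assuming the default slave port of 5051;
<slave-host> is a placeholder for one of your build slaves), the same
numbers are visible in the slave's state endpoint:

curl http://<slave-host>:5051/state.json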

On 2 February 2015 at 05:11, Hepple, Robert <RH...@tnsi.com> wrote:

> Yeah but ... the slave is reporting 1863Mb RAM and 2 CPUS - so how come
> that is rejected by jenkins which is asking for the default 0.1 cpu and
> 512Mb RAM???
>
>
> Thanks
>
>
> Bob

Re: Is mesos spamming me?

Posted by Vinod Kone <vi...@apache.org>.
On Mon, Feb 2, 2015 at 5:18 PM, Hepple, Robert <RH...@tnsi.com> wrote:

> ... whatever all that means!! And why would it be requesting 704Mb and
> 0.2 CPUs? Where do those numbers come from? Adding "Jenkins Slave Memory
> in MB" and "Jenkins Executor Memory in MB" comes to 640Mb
>

The extra overhead is for the JVM itself (currently it is set at 10% of
the requested memory). In other words, when the Jenkins slave is started
by the Mesos slave it runs "java -Xmx 640MB ....... slave.jar", but the
container in which the Jenkins slave is spawned has a resource limit of
704MB to account for the JVM's own overhead. The math is here:
https://github.com/jenkinsci/mesos-plugin/blob/master/src/main/java/org/jenkinsci/plugins/mesos/JenkinsScheduler.java#L273
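
A sketch of that arithmetic in shell, using the plugin defaults quoted
earlier in the thread (the variable names are illustrative, not the
plugin's own):

SLAVE_MEM=512        # "Jenkins Slave Memory in MB"
EXECUTOR_MEM=128     # "Jenkins Executor Memory in MB"
MAX_EXECUTORS=1      # "Maximum number of Executors per Slave"
JVM_MEM=$(( SLAVE_MEM + MAX_EXECUTORS * EXECUTOR_MEM ))   # 640 -> java -Xmx
TOTAL_MEM=$(( JVM_MEM + JVM_MEM / 10 ))                   # 640 + 64 = 704MB requested from Mesos
echo "JVM heap: ${JVM_MEM}MB, Mesos request: ${TOTAL_MEM}MB"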

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Mon, 2015-02-02 at 08:47 +0100, Geoffroy Jabouley wrote:
> Hello
> 
> 
> let's have a look at the message displayed in Jenkins log:
> 
> INFO: Offer not sufficient for slave request:
> [name: "cpus"
> type: SCALAR
> scalar {
>   value: 1.6
> }
> role: "*"
> 
> ==> The Mesos slave is currently offering 1.6 CPU resources
> 
> name: "mem"
> type: SCALAR
> scalar {
>   value: 455.0
> }
> role: "*"
> ==> The Mesos slave is currently offering 455MB of RAM resources
> 
> 
>  name: "disk"
> type: SCALAR
> scalar {
>   value: 32833.0
> }
> role: "*"
> ==> The Mesos slave is currently offering 32GB of Disk resources
> 
> , name: "ports"
> type: RANGES
> ranges {
>   range {
>     begin: 31000
>     end: 32000
>   }
> }
> role: "*"
> ==> The Mesos slave is currently offering ports between 31000 & 32000
> (default)
> 
> ]
> []
> Requested for Jenkins slave:
>   cpus: 0.2
>   mem:  704.0
> 
> 
> 
> ==> Your Jenkins slave is requesting 0.2 CPU and 704 MB of RAM
> 
> 
> 
> So for me it is normal that your Jenkins slave request cannot be
> fullfilled by Mesos, at least by this mesos slave, as it only has
> 455MB of RAM to offer and you need 704MB.

Hi Geoffroy,


Thanks for taking the time to explain this but I think I'm getting more
confused.

These slaves are all single-tasked as far as Jenkins is concerned -
there is only 1 executor per slave, so a slave should not (surely) be
offered if it's already running a job. They all have 3GB RAM so I don't
know how a slave could have only 455MB RAM available.

The jenkins plugin settings are set to the defaults:

Jenkins Slave CPUs                     0.1
Jenkins Slave Memory in MB             512
Maximum number of Executors per Slave  1
Jenkins Executor CPUs                  0.1
Jenkins Executor Memory in MB          128

... whatever all that means!! And why would it be requesting 704MB and
0.2 CPUs? Where do those numbers come from? Adding "Jenkins Slave Memory
in MB" and "Jenkins Executor Memory in MB" comes to 640MB.


Thanks


Bob
> 
> 
> FYI, the requested memory for a Jenkins slave is derived from the
> following calculation: Jenkins Slave Memory in MB + (Maximum number of
> Executors per Slave * Jenkins Executor Memory in MB).
> Maybe that's why you are seeing 704MB here and not 512MB as expected.
> 
> 
> 
> 
> But if you have several other Mesos slaves each offering 2CPU/2GB RAM,
> then this should not be a problem and the Jenkins slave should be
> created on another Mesos slave (log message is something like "offers
> match")
> 
> 
> 
> Are there any other "apps" running on your Mesos slave (another
> jenkins slave, a jenkins master, ...) that would consume missing
> resources?
> 
> 
> 
> 
> 2015-02-02 6:11 GMT+01:00 Hepple, Robert <RH...@tnsi.com>:
>         [snip: the earlier exchange, quoted in full; those messages
>         appear separately later in this archive]

Re: Is mesos spamming me?

Posted by Geoffroy Jabouley <ge...@gmail.com>.
Hello

let's have a look at the message displayed in Jenkins log:

INFO: Offer not sufficient for slave request:
[name: "cpus"
type: SCALAR
scalar {
  value: 1.6
}
role: "*"
==> The Mesos slave is currently offering 1.6 CPU resources

name: "mem"
type: SCALAR
scalar {
  value: 455.0
}
role: "*"
==> The Mesos slave is currently offering 455MB of RAM resources


 name: "disk"
type: SCALAR
scalar {
  value: 32833.0
}
role: "*"
==> The Mesos slave is currently offering 32GB of Disk resources

, name: "ports"
type: RANGES
ranges {
  range {
    begin: 31000
    end: 32000
  }
}
role: "*"
==> The Mesos slave is currently offering ports between 31000 & 32000
(default)

]
[]



Requested for Jenkins slave:
  cpus: 0.2
  mem:  704.0

==> Your Jenkins slave is requesting 0.2 CPU and 704 MB of RAM


So for me it is normal that your Jenkins slave request cannot be fulfilled
by Mesos, at least by this mesos slave, as it only has 455MB of RAM to
offer and you need 704MB.

FYI, the requested memory for a Jenkins slave is derived from the following
calculation: Jenkins Slave Memory in MB + (Maximum number of Executors per
Slave * Jenkins Executor Memory in MB).
Maybe that's why you are seeing 704MB here and not 512MB as expected.
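
(With the plugin defaults quoted elsewhere in the thread that works out to
512 + (1 * 128) = 640MB; the step from 640MB to the 704MB actually
requested is the ~10% JVM overhead explained elsewhere in the thread.)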


But if you have several other Mesos slaves each offering 2CPU/2GB RAM, then
this should not be a problem and the Jenkins slave should be created on
another Mesos slave (log message is something like "offers match")

Are there any other "apps" running on your Mesos slave (another jenkins
slave, a jenkins master, ...) that would consume missing resources?



2015-02-02 6:11 GMT+01:00 Hepple, Robert <RH...@tnsi.com>:

> [snip: the earlier exchange, quoted in full; those messages appear
> separately later in this archive]

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Sun, 2015-02-01 at 21:02 -0800, Vinod Kone wrote:
> 
> 
> 
> On Sun, Feb 1, 2015 at 8:58 PM, Vinod Kone <vi...@gmail.com>
> wrote:
>         By default mesos slave leaves some RAM and CPU for system
>         processes. You can override this behavior by --resources flag.
>         

Yeah but ... the slave is reporting 1863MB RAM and 2 CPUs - so how come
that is rejected by jenkins which is asking for the default 0.1 cpu and
512MB RAM???


Thanks


Bob

>         [snip: earlier messages quoted in full; they appear separately
>         later in this archive]

Re: Is mesos spamming me?

Posted by Vinod Kone <vi...@apache.org>.
On Sun, Feb 1, 2015 at 8:58 PM, Vinod Kone <vi...@gmail.com> wrote:

> By default mesos slave leaves some RAM and CPU for system processes. You
> can override this behavior by --resources flag.
>
> On Sun, Feb 1, 2015 at 6:05 PM, Hepple, Robert <RH...@tnsi.com> wrote:
>
> [snip: that message appears in full in the next post below]

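A sketch of that override (the flag takes the same resource-string syntax
that appears in the master logs elsewhere in the thread; <master-host> and
the totals here are illustrative, not recommendations):

mesos-slave --master=<master-host>:5050 --resources='cpus:2;mem:2048'
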
Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Fri, 2015-01-30 at 10:00 +0100, Geoffroy Jabouley wrote:
> Hello
> 
> 
> The message means that the received resource offer from Mesos cluster
> does not meet your jenkins slave requirements (memory or cpu). This is
> normal message.
> 

... and here's another thing - the mesos master registers the slave as
having 2 CPUs and 1.8GB RAM:

I0202 11:43:47.623059 25809 hierarchical_allocator_process.hpp:442] Added slave 20150129-120204-1408111020-5050-10811-S18 (ci00bldslv02
v.ss.corp.cnp.tnsi.com) with cpus(*):2; mem(*):1863; disk(*):32961; ports(*):[31000-32000] (and cpus(*):2; mem(*):1863; disk(*):32961; 
ports(*):[31000-32000] available)


Re: Is mesos spamming me?

Posted by Geoffroy Jabouley <ge...@gmail.com>.
Hello

The message means that the resource offer received from the Mesos cluster
does not meet your jenkins slave requirements (memory or CPU). This is a
normal message.
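
Concretely, in the log excerpt in the original message the offer carries
mem: 455.0 while the Jenkins slave requests mem: 704.0, so the offer is
declined on memory alone (the cpus: 1.6 offered would have covered the
cpus: 0.2 requested).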


You can filter logs from specific classes in Jenkins in two ways:

   1. from the web UI, in the "jenkins_url"/log/levels panel, set the
   logging level for org.jenkinsci.plugins.mesos.JenkinsScheduler to
   WARNING (see the script-console sketch after this list)
   2. use a logging.properties file
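
For the first option, the same change can also be made at runtime from the
Jenkins script console (a sketch; it only uses the standard
java.util.logging API, and it does not persist across a Jenkins restart):

  import java.util.logging.Level
  import java.util.logging.Logger

  // Silence INFO chatter from the mesos plugin's scheduler logger
  def sched = Logger.getLogger('org.jenkinsci.plugins.mesos.JenkinsScheduler')
  sched.setLevel(Level.WARNING)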


We use the second solution.


Content of the logging.properties file is:

--------------------------
# Global logging handlers
handlers=java.util.logging.ConsoleHandler

# Define custom logger for Jenkins mesos plugin (too verbose!)
# (the ".handlers" suffix is needed for java.util.logging to pick this up)
org.jenkinsci.plugins.mesos.JenkinsScheduler.handlers=java.util.logging.ConsoleHandler
org.jenkinsci.plugins.mesos.JenkinsScheduler.useParentHandlers=FALSE
org.jenkinsci.plugins.mesos.JenkinsScheduler.level=WARNING

# Define common logging configuration
java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
--------------------------

The Jenkins instance is then started using:

java -Djava.util.logging.config.file=/path/to/logging.properties -jar
$HOME/jenkins.war
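
If Jenkins runs from distribution packages rather than a bare jenkins.war,
the same property can usually go into the service's Java options instead.
A sketch, assuming an RPM-style install whose defaults file is
/etc/sysconfig/jenkins (the path is an assumption about your install):

  JENKINS_JAVA_OPTIONS="-Djava.util.logging.config.file=/path/to/logging.properties"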


One drawback of this solution is that it also filters other interesting
logs from the mesos JenkinsScheduler class...

Hope this helps
Regards

2015-01-30 6:06 GMT+01:00 Hepple, Robert <RH...@tnsi.com>:

> I have a single mesos master and 19 slaves. I have several jenkins
> servers making on-demand requests using the jenkins-mesos plugin - it
> all seems to be working correctly, mesos slaves are assigned to the
> jenkins servers, they execute jobs and eventually they detach.
>
> Except.
>
> Except the jenkins servers are getting spammed about every 1 or 2
> seconds with this in /var/log/jenkins/jenkins.log:
>
> [... jenkins.log and mesos master log excerpts snipped - quoted in full
> earlier in the thread ...]
>
> Is that normal? Certainly it's not desirable especially as jenkins is
> also throwing a new config.xml file into the config-history directory on
> every iteration and filling up the disc!!!!:
>
> jenkins/config-history/config/2015-01-30_10-49-10
> jenkins/config-history/config/2015-01-30_10-49-11
> jenkins/config-history/config/2015-01-30_10-49-12
> jenkins/config-history/config/2015-01-30_10-49-13
> jenkins/config-history/config/2015-01-30_10-49-14
> jenkins/config-history/config/2015-01-30_10-49-15
> jenkins/config-history/config/2015-01-30_10-49-16
> jenkins/config-history/config/2015-01-30_10-49-17
>
> Any advice? I'm not too concerned about the log spamming, but the
> version history spamming is serious.
>
>
> Thanks
>
>
> Bob

Re: Is mesos spamming me?

Posted by "Hepple, Robert" <RH...@tnsi.com>.
On Fri, 2015-01-30 at 05:06 +0000, Hepple, Robert wrote:
> I have a single mesos master and 19 slaves. I have several jenkins
> servers making on-demand requests using the jenkins-mesos plugin - it
> all seems to be working correctly, mesos slaves are assigned to the
> jenkins servers, they execute jobs and eventually they detach.
> 
> [... rest of original message snipped - quoted in full earlier in the
> thread ...]

Also, I notice that after a build-storm on jenkins, there are 38 'idle'
mesos slaves visible and another 18 'offline'. It just doesn't look
right.



Cheers


Bob
