Posted to dev@spark.apache.org by shane knapp <sk...@berkeley.edu> on 2014/12/09 19:49:00 UTC

adding new jenkins worker nodes to eventually replace existing ones

i just turned up a new jenkins slave (amp-jenkins-worker-01) to make sure
builds run properly on it.  these machines have half the ram, the same number of
processors and more disk, which will hopefully help us achieve more than
the ~15-20% system utilization we're getting on the current
amp-jenkins-slave-{01..05} nodes.

instead of 5 super beefy slaves w/16 workers each, we're planning on 8 less
beefy slaves w/12 workers each (96 total executors vs the current 80).  this
should cut down on the build queue without hurting build times.

i'll keep a close eye on amp-jenkins-worker-01 before i start releasing the
other seven into the wild.

there should be minimal user impact, but if i happen to miss something,
please don't hesitate to let me know!

thanks,

shane

Re: adding new jenkins worker nodes to eventually replace existing ones

Posted by shane knapp <sk...@berkeley.edu>.
forgot to install git on this node.  /headdesk

i retriggered the failed spark prb jobs.

On Tue, Dec 9, 2014 at 10:49 AM, shane knapp <sk...@berkeley.edu> wrote: