Posted to dev@mesos.apache.org by Chris Lambertus <cm...@apache.org> on 2019/06/10 17:57:34 UTC

ACTION REQUIRED: disk space on jenkins master nearly full

Hello,

The jenkins master is nearly full.

The workspaces listed below need significant size reduction within 24 hours, or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. Please also rename the Mesos “Packaging” job to include the project name (mesos-packaging).

It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:

https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
https://issues.jenkins-ci.org/browse/JENKINS-35642
https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
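
For multibranch pipeline jobs, the articles above suggest the retention policy generally has to come from the Jenkinsfile itself, since the per-branch job configuration is regenerated on each branch indexing run. A minimal declarative sketch; the retention numbers here are illustrative only, not an Infra recommendation:

```groovy
// Sketch: declare build retention in the Jenkinsfile so it survives
// branch re-indexing, instead of relying on the UI checkbox.
pipeline {
    agent any
    options {
        // Keep at most 30 days / 20 builds of history,
        // and artifacts for only the 10 most recent builds.
        buildDiscarder(logRotator(daysToKeepStr: '30',
                                  numToKeepStr: '20',
                                  artifactDaysToKeepStr: '10',
                                  artifactNumToKeepStr: '10'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps here'
            }
        }
    }
}
```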



NB: I have not fully vetted the above information; I have simply noticed that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.


If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on. 

I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.


594G    Packaging
425G    pulsar-website-build
274G    pulsar-master
195G    hadoop-multibranch
173G    HBase Nightly
138G    HBase-Flaky-Tests
119G    netbeans-release
108G    Any23-trunk
101G    netbeans-linux-experiment
96G     Jackrabbit-Oak-Windows
94G     HBase-Find-Flaky-Tests
88G     PreCommit-ZOOKEEPER-github-pr-build
74G     netbeans-windows
71G     stanbol-0.12
68G     Sling
63G     Atlas-master-NoTests
48G     FlexJS Framework (maven)
45G     HBase-PreCommit-GitHub-PR
42G     pulsar-pull-request
40G     Atlas-1.0-NoTests



Thanks,
Chris
ASF Infra

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
Any23,

I have removed all builds older than 120 days from /x1/jenkins/jenkins-home/jobs/Any23-trunk/modules/org.apache.any23$apache-any23-service/builds

There were builds dating back to 2014. Please evaluate your jenkins jobs, remove any that are no longer being used, and configure the ones that remain to discard old builds. PMC, could you please ensure this happens?
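
The kind of pruning described above can be approximated with find's mtime test. A sketch, not the exact command Infra ran; the listing is separated from the deletion so the candidates can be reviewed first:

```shell
#!/bin/sh
# Sketch: list build directories whose modification time is older than a
# cutoff (120 days in the case above). Deletion is left as a commented-out
# second step so the dry-run output can be reviewed first.

# Print immediate subdirectories of $1 older than $2 days.
list_old_builds() {
    find "$1" -mindepth 1 -maxdepth 1 -type d -mtime +"$2" -print
}

# Usage against the path from the message above (layout assumed):
#   list_old_builds '/x1/jenkins/jenkins-home/jobs/Any23-trunk/modules/org.apache.any23$apache-any23-service/builds' 120
# Then, once the output looks right:
#   list_old_builds <same path> 120 | xargs rm -rf
```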

Thanks,
Chris



	

> On Jun 14, 2019, at 6:09 PM, Chris Lambertus <cm...@apache.org> wrote:
> 
> All,
> 
> Thanks to those who have addressed this so far. The immediate storage issue has been resolved, but some builds still need to be fixed to ensure the build master does not run out of space again anytime soon.
> 
> Here is the current list of builds storing over 40GB on the master:
> 
> 597G    Packaging
> 204G    pulsar-master
> 199G    hadoop-multibranch
> 108G    Any23-trunk
> 93G     HBase Nightly
> 88G     PreCommit-ZOOKEEPER-github-pr-build
> 71G     stanbol-0.12
> 64G     Atlas-master-NoTests
> 50G     HBase-Find-Flaky-Tests
> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
> 
> 
> If you are unable to reduce the size of your retained builds, please let me know. I have added some additional project dev lists to the CC as I would like to hear back from everyone on this list as to the state of their stored builds.
> 
> Thanks,
> Chris
> 
> 
> 
> 
>> On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:
>> 
>> Hello,
>> 
>> The jenkins master is nearly full.
>> 
>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
>> 
>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
>> 
>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>> https://issues.jenkins-ci.org/browse/JENKINS-35642
>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>> 
>> 
>> 
>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working. 
>> 
>> 
>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on. 
>> 
>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
>> 
>> 
>> 594G    Packaging
>> 425G    pulsar-website-build
>> 274G    pulsar-master
>> 195G    hadoop-multibranch
>> 173G    HBase Nightly
>> 138G    HBase-Flaky-Tests
>> 119G    netbeans-release
>> 108G    Any23-trunk
>> 101G    netbeans-linux-experiment
>> 96G     Jackrabbit-Oak-Windows
>> 94G     HBase-Find-Flaky-Tests
>> 88G     PreCommit-ZOOKEEPER-github-pr-build
>> 74G     netbeans-windows
>> 71G     stanbol-0.12
>> 68G     Sling
>> 63G     Atlas-master-NoTests
>> 48G     FlexJS Framework (maven)
>> 45G     HBase-PreCommit-GitHub-PR
>> 42G     pulsar-pull-request
>> 40G     Atlas-1.0-NoTests
>> 
>> 
>> 
>> Thanks,
>> Chris
>> ASF Infra
> 


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Enrico Olivelli <eo...@gmail.com>.
Il sab 15 giu 2019, 19:36 Patrick Hunt <ph...@apache.org> ha scritto:

> On Sat, Jun 15, 2019 at 10:29 AM Enrico Olivelli <eo...@gmail.com>
> wrote:
>
> > Il sab 15 giu 2019, 18:18 Patrick Hunt <ph...@apache.org> ha scritto:
> >
> > > Narrowing this down to just the ZK folks.
> > >
> > > We're currently discarding the builds after 90 days for both of the
> jobs.
> > > Perhaps we can narrow down to 60? The PRs link to these builds, are
> they
> > > valuable after that point (vs just retriggering the build if missing)?
> > >
> > > I also notice that "PreCommit-ZOOKEEPER-github-pr-build-maven" is
> saving
> > > all artifacts, rather than a subset (e.g. the logs) as is being done by
> > > "PreCommit-ZOOKEEPER-github-pr-build" job. Perhaps we can update that?
> > > Enrico or Norbert any insight?
> > >
> >
> > I had enabled archiving in order to track some issue I can't recall.
> > We should only keep logs in case of failure
> >
> > I think that 30 days is enough, but I am okay with 60.
> > We are now working at a faster pace and a precommit run more than one
> month
> > ago is probably out of date.
> >
> >
> Sounds like 30 for both is fine then.
>

Okay, I will update the config.
I will also disable the Ant-based jobs and clear all of the unused workspaces.


Enrico


> Patrick
>
>
> >
> > Enrico
> >
> >
> > > Patrick
> > >
> > > On Fri, Jun 14, 2019 at 6:09 PM Chris Lambertus <cm...@apache.org>
> wrote:
> > >
> > >> All,
> > >>
> > >> Thanks to those who have addressed this so far. The immediate storage
> > >> issue has been resolved, but some builds still need to be fixed to
> > ensure
> > >> the build master does not run out of space again anytime soon.
> > >>
> > >> Here is the current list of builds storing over 40GB on the master:
> > >>
> > >> 597G    Packaging
> > >> 204G    pulsar-master
> > >> 199G    hadoop-multibranch
> > >> 108G    Any23-trunk
> > >> 93G     HBase Nightly
> > >> 88G     PreCommit-ZOOKEEPER-github-pr-build
> > >> 71G     stanbol-0.12
> > >> 64G     Atlas-master-NoTests
> > >> 50G     HBase-Find-Flaky-Tests
> > >> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
> > >>
> > >>
> > >> If you are unable to reduce the size of your retained builds, please
> let
> > >> me know. I have added some additional project dev lists to the CC as I
> > >> would like to hear back from everyone on this list as to the state of
> > their
> > >> stored builds.
> > >>
> > >> Thanks,
> > >> Chris
> > >>
> > >>
> > >>
> > >>
> > >> > On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org>
> wrote:
> > >> >
> > >> > Hello,
> > >> >
> > >> > The jenkins master is nearly full.
> > >> >
> > >> > The workspaces listed below need significant size reduction within
> 24
> > >> hours or Infra will need to perform some manual pruning of old builds
> to
> > >> keep the jenkins system running. The Mesos “Packaging” job also needs
> > to be
> > >> corrected to include the project name (mesos-packaging) please.
> > >> >
> > >> > It appears that the typical ‘Discard Old Builds’ checkbox in the job
> > >> configuration may not be working for multibranch pipeline jobs. Please
> > >> refer to these articles for information on discarding builds in
> > multibranch
> > >> jobs:
> > >> >
> > >> >
> > >>
> >
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> > >> > https://issues.jenkins-ci.org/browse/JENKINS-35642
> > >> >
> > >>
> >
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> > >> >
> > >> >
> > >> >
> > >> > NB: I have not fully vetted the above information, I just notice
> that
> > >> many of these jobs have ‘Discard old builds’ checked, but it is
> clearly
> > not
> > >> working.
> > >> >
> > >> >
> > >> > If you are unable to reduce your disk usage beyond what is listed,
> > >> please let me know what the reasons are and we’ll see if we can find a
> > >> solution. If you believe you’ve configured your job properly and the
> > space
> > >> usage is more than you expect, please comment here and we’ll take a
> > look at
> > >> what might be going on.
> > >> >
> > >> > I cut this list off arbitrarily at 40GB workspaces and larger. There
> > >> are many which are between 20 and 30GB which also need to be
> addressed,
> > but
> > >> these are the current top contributors to the disk space situation.
> > >> >
> > >> >
> > >> > 594G    Packaging
> > >> > 425G    pulsar-website-build
> > >> > 274G    pulsar-master
> > >> > 195G    hadoop-multibranch
> > >> > 173G    HBase Nightly
> > >> > 138G    HBase-Flaky-Tests
> > >> > 119G    netbeans-release
> > >> > 108G    Any23-trunk
> > >> > 101G    netbeans-linux-experiment
> > >> > 96G     Jackrabbit-Oak-Windows
> > >> > 94G     HBase-Find-Flaky-Tests
> > >> > 88G     PreCommit-ZOOKEEPER-github-pr-build
> > >> > 74G     netbeans-windows
> > >> > 71G     stanbol-0.12
> > >> > 68G     Sling
> > >> > 63G     Atlas-master-NoTests
> > >> > 48G     FlexJS Framework (maven)
> > >> > 45G     HBase-PreCommit-GitHub-PR
> > >> > 42G     pulsar-pull-request
> > >> > 40G     Atlas-1.0-NoTests
> > >> >
> > >> >
> > >> >
> > >> > Thanks,
> > >> > Chris
> > >> > ASF Infra
> > >>
> > >>
> >
>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Patrick Hunt <ph...@apache.org>.
On Sat, Jun 15, 2019 at 10:29 AM Enrico Olivelli <eo...@gmail.com>
wrote:

> Il sab 15 giu 2019, 18:18 Patrick Hunt <ph...@apache.org> ha scritto:
>
> > Narrowing this down to just the ZK folks.
> >
> > We're currently discarding the builds after 90 days for both of the jobs.
> > Perhaps we can narrow down to 60? The PRs link to these builds, are they
> > valuable after that point (vs just retriggering the build if missing)?
> >
> > I also notice that "PreCommit-ZOOKEEPER-github-pr-build-maven" is saving
> > all artifacts, rather than a subset (e.g. the logs) as is being done by
> > "PreCommit-ZOOKEEPER-github-pr-build" job. Perhaps we can update that?
> > Enrico or Norbert any insight?
> >
>
> I had enabled archiving in order to track some issue I can't recall.
> We should only keep logs in case of failure
>
> I think that 30 days is enough, but I am okay with 60.
> We are now working at a faster pace and a precommit run more than one month
> ago is probably out of date.
>
>
Sounds like 30 for both is fine then.

Patrick


>
> Enrico
>
>
> > Patrick
> >
> > On Fri, Jun 14, 2019 at 6:09 PM Chris Lambertus <cm...@apache.org> wrote:
> >
> >> All,
> >>
> >> Thanks to those who have addressed this so far. The immediate storage
> >> issue has been resolved, but some builds still need to be fixed to
> ensure
> >> the build master does not run out of space again anytime soon.
> >>
> >> Here is the current list of builds storing over 40GB on the master:
> >>
> >> 597G    Packaging
> >> 204G    pulsar-master
> >> 199G    hadoop-multibranch
> >> 108G    Any23-trunk
> >> 93G     HBase Nightly
> >> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >> 71G     stanbol-0.12
> >> 64G     Atlas-master-NoTests
> >> 50G     HBase-Find-Flaky-Tests
> >> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
> >>
> >>
> >> If you are unable to reduce the size of your retained builds, please let
> >> me know. I have added some additional project dev lists to the CC as I
> >> would like to hear back from everyone on this list as to the state of
> their
> >> stored builds.
> >>
> >> Thanks,
> >> Chris
> >>
> >>
> >>
> >>
> >> > On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:
> >> >
> >> > Hello,
> >> >
> >> > The jenkins master is nearly full.
> >> >
> >> > The workspaces listed below need significant size reduction within 24
> >> hours or Infra will need to perform some manual pruning of old builds to
> >> keep the jenkins system running. The Mesos “Packaging” job also needs
> to be
> >> corrected to include the project name (mesos-packaging) please.
> >> >
> >> > It appears that the typical ‘Discard Old Builds’ checkbox in the job
> >> configuration may not be working for multibranch pipeline jobs. Please
> >> refer to these articles for information on discarding builds in
> multibranch
> >> jobs:
> >> >
> >> >
> >>
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >> > https://issues.jenkins-ci.org/browse/JENKINS-35642
> >> >
> >>
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >> >
> >> >
> >> >
> >> > NB: I have not fully vetted the above information, I just notice that
> >> many of these jobs have ‘Discard old builds’ checked, but it is clearly
> not
> >> working.
> >> >
> >> >
> >> > If you are unable to reduce your disk usage beyond what is listed,
> >> please let me know what the reasons are and we’ll see if we can find a
> >> solution. If you believe you’ve configured your job properly and the
> space
> >> usage is more than you expect, please comment here and we’ll take a
> look at
> >> what might be going on.
> >> >
> >> > I cut this list off arbitrarily at 40GB workspaces and larger. There
> >> are many which are between 20 and 30GB which also need to be addressed,
> but
> >> these are the current top contributors to the disk space situation.
> >> >
> >> >
> >> > 594G    Packaging
> >> > 425G    pulsar-website-build
> >> > 274G    pulsar-master
> >> > 195G    hadoop-multibranch
> >> > 173G    HBase Nightly
> >> > 138G    HBase-Flaky-Tests
> >> > 119G    netbeans-release
> >> > 108G    Any23-trunk
> >> > 101G    netbeans-linux-experiment
> >> > 96G     Jackrabbit-Oak-Windows
> >> > 94G     HBase-Find-Flaky-Tests
> >> > 88G     PreCommit-ZOOKEEPER-github-pr-build
> >> > 74G     netbeans-windows
> >> > 71G     stanbol-0.12
> >> > 68G     Sling
> >> > 63G     Atlas-master-NoTests
> >> > 48G     FlexJS Framework (maven)
> >> > 45G     HBase-PreCommit-GitHub-PR
> >> > 42G     pulsar-pull-request
> >> > 40G     Atlas-1.0-NoTests
> >> >
> >> >
> >> >
> >> > Thanks,
> >> > Chris
> >> > ASF Infra
> >>
> >>
>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Enrico Olivelli <eo...@gmail.com>.
Il sab 15 giu 2019, 18:18 Patrick Hunt <ph...@apache.org> ha scritto:

> Narrowing this down to just the ZK folks.
>
> We're currently discarding the builds after 90 days for both of the jobs.
> Perhaps we can narrow down to 60? The PRs link to these builds, are they
> valuable after that point (vs just retriggering the build if missing)?
>
> I also notice that "PreCommit-ZOOKEEPER-github-pr-build-maven" is saving
> all artifacts, rather than a subset (e.g. the logs) as is being done by
> "PreCommit-ZOOKEEPER-github-pr-build" job. Perhaps we can update that?
> Enrico or Norbert any insight?
>

I had enabled archiving in order to track some issue I can't recall.
We should only keep logs in case of failure.

I think that 30 days is enough, but I am okay with 60.
We are now working at a faster pace and a precommit run more than one month
ago is probably out of date.


Enrico


> Patrick
>
> On Fri, Jun 14, 2019 at 6:09 PM Chris Lambertus <cm...@apache.org> wrote:
>
>> All,
>>
>> Thanks to those who have addressed this so far. The immediate storage
>> issue has been resolved, but some builds still need to be fixed to ensure
>> the build master does not run out of space again anytime soon.
>>
>> Here is the current list of builds storing over 40GB on the master:
>>
>> 597G    Packaging
>> 204G    pulsar-master
>> 199G    hadoop-multibranch
>> 108G    Any23-trunk
>> 93G     HBase Nightly
>> 88G     PreCommit-ZOOKEEPER-github-pr-build
>> 71G     stanbol-0.12
>> 64G     Atlas-master-NoTests
>> 50G     HBase-Find-Flaky-Tests
>> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
>>
>>
>> If you are unable to reduce the size of your retained builds, please let
>> me know. I have added some additional project dev lists to the CC as I
>> would like to hear back from everyone on this list as to the state of their
>> stored builds.
>>
>> Thanks,
>> Chris
>>
>>
>>
>>
>> > On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:
>> >
>> > Hello,
>> >
>> > The jenkins master is nearly full.
>> >
>> > The workspaces listed below need significant size reduction within 24
>> hours or Infra will need to perform some manual pruning of old builds to
>> keep the jenkins system running. The Mesos “Packaging” job also needs to be
>> corrected to include the project name (mesos-packaging) please.
>> >
>> > It appears that the typical ‘Discard Old Builds’ checkbox in the job
>> configuration may not be working for multibranch pipeline jobs. Please
>> refer to these articles for information on discarding builds in multibranch
>> jobs:
>> >
>> >
>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>> > https://issues.jenkins-ci.org/browse/JENKINS-35642
>> >
>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>> >
>> >
>> >
>> > NB: I have not fully vetted the above information, I just notice that
>> many of these jobs have ‘Discard old builds’ checked, but it is clearly not
>> working.
>> >
>> >
>> > If you are unable to reduce your disk usage beyond what is listed,
>> please let me know what the reasons are and we’ll see if we can find a
>> solution. If you believe you’ve configured your job properly and the space
>> usage is more than you expect, please comment here and we’ll take a look at
>> what might be going on.
>> >
>> > I cut this list off arbitrarily at 40GB workspaces and larger. There
>> are many which are between 20 and 30GB which also need to be addressed, but
>> these are the current top contributors to the disk space situation.
>> >
>> >
>> > 594G    Packaging
>> > 425G    pulsar-website-build
>> > 274G    pulsar-master
>> > 195G    hadoop-multibranch
>> > 173G    HBase Nightly
>> > 138G    HBase-Flaky-Tests
>> > 119G    netbeans-release
>> > 108G    Any23-trunk
>> > 101G    netbeans-linux-experiment
>> > 96G     Jackrabbit-Oak-Windows
>> > 94G     HBase-Find-Flaky-Tests
>> > 88G     PreCommit-ZOOKEEPER-github-pr-build
>> > 74G     netbeans-windows
>> > 71G     stanbol-0.12
>> > 68G     Sling
>> > 63G     Atlas-master-NoTests
>> > 48G     FlexJS Framework (maven)
>> > 45G     HBase-PreCommit-GitHub-PR
>> > 42G     pulsar-pull-request
>> > 40G     Atlas-1.0-NoTests
>> >
>> >
>> >
>> > Thanks,
>> > Chris
>> > ASF Infra
>>
>>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Patrick Hunt <ph...@apache.org>.
Narrowing this down to just the ZK folks.

We're currently discarding builds after 90 days for both of the jobs.
Perhaps we can narrow that down to 60? The PRs link to these builds; are they
still valuable after that point (vs. just retriggering the build if missing)?

I also notice that "PreCommit-ZOOKEEPER-github-pr-build-maven" is saving
all artifacts rather than a subset (e.g. the logs), as is done by the
"PreCommit-ZOOKEEPER-github-pr-build" job. Perhaps we can update that?
Enrico or Norbert, any insight?
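
Restricting archiving to a subset is usually a small change in the pipeline's post section. A hedged sketch for a declarative Jenkinsfile; the glob pattern is illustrative, as the actual log paths in the ZooKeeper maven job may differ:

```groovy
// Sketch: archive only the test logs, and only when the build did not
// succeed, instead of archiving every artifact on every run.
// This fragment belongs inside a declarative pipeline { ... } block.
post {
    unsuccessful {
        archiveArtifacts artifacts: '**/target/surefire-reports/*.txt',
                         allowEmptyArchive: true
    }
}
```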

Patrick

On Fri, Jun 14, 2019 at 6:09 PM Chris Lambertus <cm...@apache.org> wrote:

> All,
>
> Thanks to those who have addressed this so far. The immediate storage
> issue has been resolved, but some builds still need to be fixed to ensure
> the build master does not run out of space again anytime soon.
>
> Here is the current list of builds storing over 40GB on the master:
>
> 597G    Packaging
> 204G    pulsar-master
> 199G    hadoop-multibranch
> 108G    Any23-trunk
> 93G     HBase Nightly
> 88G     PreCommit-ZOOKEEPER-github-pr-build
> 71G     stanbol-0.12
> 64G     Atlas-master-NoTests
> 50G     HBase-Find-Flaky-Tests
> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
>
>
> If you are unable to reduce the size of your retained builds, please let
> me know. I have added some additional project dev lists to the CC as I
> would like to hear back from everyone on this list as to the state of their
> stored builds.
>
> Thanks,
> Chris
>
>
>
>
> > On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:
> >
> > Hello,
> >
> > The jenkins master is nearly full.
> >
> > The workspaces listed below need significant size reduction within 24
> hours or Infra will need to perform some manual pruning of old builds to
> keep the jenkins system running. The Mesos “Packaging” job also needs to be
> corrected to include the project name (mesos-packaging) please.
> >
> > It appears that the typical ‘Discard Old Builds’ checkbox in the job
> configuration may not be working for multibranch pipeline jobs. Please
> refer to these articles for information on discarding builds in multibranch
> jobs:
> >
> >
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> > https://issues.jenkins-ci.org/browse/JENKINS-35642
> >
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >
> >
> >
> > NB: I have not fully vetted the above information, I just notice that
> many of these jobs have ‘Discard old builds’ checked, but it is clearly not
> working.
> >
> >
> > If you are unable to reduce your disk usage beyond what is listed,
> please let me know what the reasons are and we’ll see if we can find a
> solution. If you believe you’ve configured your job properly and the space
> usage is more than you expect, please comment here and we’ll take a look at
> what might be going on.
> >
> > I cut this list off arbitrarily at 40GB workspaces and larger. There are
> many which are between 20 and 30GB which also need to be addressed, but
> these are the current top contributors to the disk space situation.
> >
> >
> > 594G    Packaging
> > 425G    pulsar-website-build
> > 274G    pulsar-master
> > 195G    hadoop-multibranch
> > 173G    HBase Nightly
> > 138G    HBase-Flaky-Tests
> > 119G    netbeans-release
> > 108G    Any23-trunk
> > 101G    netbeans-linux-experiment
> > 96G     Jackrabbit-Oak-Windows
> > 94G     HBase-Find-Flaky-Tests
> > 88G     PreCommit-ZOOKEEPER-github-pr-build
> > 74G     netbeans-windows
> > 71G     stanbol-0.12
> > 68G     Sling
> > 63G     Atlas-master-NoTests
> > 48G     FlexJS Framework (maven)
> > 45G     HBase-PreCommit-GitHub-PR
> > 42G     pulsar-pull-request
> > 40G     Atlas-1.0-NoTests
> >
> >
> >
> > Thanks,
> > Chris
> > ASF Infra
>
>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Rupert Westenthaler <ru...@gmail.com>.
Hi all,

I deleted the old (retained) builds for stanbol-0.12. I hope this
fixes the issue.

I was unable to find out how to determine the size of the current build, so
I decided to keep the most recent build (from mid-June 2017). If we are
still over the limit, we could also delete that one and keep only the most
recent build of 1.0.0 (trunk).

best
Rupert

On Sat, 15 Jun 2019 at 03:09, Chris Lambertus <cm...@apache.org> wrote:
>
> All,
>
> Thanks to those who have addressed this so far. The immediate storage issue has been resolved, but some builds still need to be fixed to ensure the build master does not run out of space again anytime soon.
>
> Here is the current list of builds storing over 40GB on the master:
>
> 597G    Packaging
> 204G    pulsar-master
> 199G    hadoop-multibranch
> 108G    Any23-trunk
> 93G     HBase Nightly
> 88G     PreCommit-ZOOKEEPER-github-pr-build
> 71G     stanbol-0.12
> 64G     Atlas-master-NoTests
> 50G     HBase-Find-Flaky-Tests
> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
>
>
> If you are unable to reduce the size of your retained builds, please let me know. I have added some additional project dev lists to the CC as I would like to hear back from everyone on this list as to the state of their stored builds.
>
> Thanks,
> Chris
>
>
>
>
> > On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:
> >
> > Hello,
> >
> > The jenkins master is nearly full.
> >
> > The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
> >
> > It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
> >
> > https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> > https://issues.jenkins-ci.org/browse/JENKINS-35642
> > https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >
> >
> >
> > NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
> >
> >
> > If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
> >
> > I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
> >
> >
> > 594G    Packaging
> > 425G    pulsar-website-build
> > 274G    pulsar-master
> > 195G    hadoop-multibranch
> > 173G    HBase Nightly
> > 138G    HBase-Flaky-Tests
> > 119G    netbeans-release
> > 108G    Any23-trunk
> > 101G    netbeans-linux-experiment
> > 96G     Jackrabbit-Oak-Windows
> > 94G     HBase-Find-Flaky-Tests
> > 88G     PreCommit-ZOOKEEPER-github-pr-build
> > 74G     netbeans-windows
> > 71G     stanbol-0.12
> > 68G     Sling
> > 63G     Atlas-master-NoTests
> > 48G     FlexJS Framework (maven)
> > 45G     HBase-PreCommit-GitHub-PR
> > 42G     pulsar-pull-request
> > 40G     Atlas-1.0-NoTests
> >
> >
> >
> > Thanks,
> > Chris
> > ASF Infra
>


-- 
| Rupert Westenthaler             rupert.westenthaler@gmail.com
| Bodenlehenstraße 11                              ++43-699-11108907
| A-5500 Bischofshofen
| REDLINK.CO ..........................................................................
| http://redlink.co/

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
All,

Thanks to those who have addressed this so far. The immediate storage issue has been resolved, but some builds still need to be fixed to ensure the build master does not run out of space again anytime soon.

Here is the current list of builds storing over 40GB on the master:

597G    Packaging
204G    pulsar-master
199G    hadoop-multibranch
108G    Any23-trunk
93G     HBase Nightly
88G     PreCommit-ZOOKEEPER-github-pr-build
71G     stanbol-0.12
64G     Atlas-master-NoTests
50G     HBase-Find-Flaky-Tests
42G     PreCommit-ZOOKEEPER-github-pr-build-maven


If you are unable to reduce the size of your retained builds, please let me know. I have added some additional project dev lists to the CC as I would like to hear back from everyone on this list as to the state of their stored builds.

Thanks,
Chris




> On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:
> 
> Hello,
> 
> The jenkins master is nearly full.
> 
> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
> 
> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
> 
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> https://issues.jenkins-ci.org/browse/JENKINS-35642
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> 
> 
> 
> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working. 
> 
> 
> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on. 
> 
> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
> 
> 
> 594G    Packaging
> 425G    pulsar-website-build
> 274G    pulsar-master
> 195G    hadoop-multibranch
> 173G    HBase Nightly
> 138G    HBase-Flaky-Tests
> 119G    netbeans-release
> 108G    Any23-trunk
> 101G    netbeans-linux-experiment
> 96G     Jackrabbit-Oak-Windows
> 94G     HBase-Find-Flaky-Tests
> 88G     PreCommit-ZOOKEEPER-github-pr-build
> 74G     netbeans-windows
> 71G     stanbol-0.12
> 68G     Sling
> 63G     Atlas-master-NoTests
> 48G     FlexJS Framework (maven)
> 45G     HBase-PreCommit-GitHub-PR
> 42G     pulsar-pull-request
> 40G     Atlas-1.0-NoTests
> 
> 
> 
> Thanks,
> Chris
> ASF Infra
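For context, the per-job figures in lists like the one above are directory totals on the build master. A minimal sketch of how such a listing can be produced, assuming the jobs directory path quoted later in this thread (/x1/jenkins/jenkins-home/jobs):

```shell
# Sketch: list the largest Jenkins job directories, biggest first,
# mirroring the tables posted in this thread. The path is an
# assumption taken from the paths quoted elsewhere in the thread.
JOBS_DIR=/x1/jenkins/jenkins-home/jobs
du -sh "$JOBS_DIR"/* 2>/dev/null | sort -rh | head -20
```

(`sort -rh` sorts the human-readable sizes from `du -h`; both are GNU options, which is consistent with the Linux master described here.)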


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Alex Harui <ah...@adobe.com.INVALID>.
I deleted 3 FlexJS builds that we haven’t used in a year and a half.

On 6/10/19, 2:29 PM, "Chris Lambertus" <cm...@apache.org> wrote:

    Matteo,
    
    pulsar-website cleaned up nicely. pulsar-master is still problematic: despite the job having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it appears the Maven module-level ‘discard old builds’ setting is not working either. I have not yet found any suggested solutions to this.
    
    -Chris
    
    
    
    > On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
    > 
    > Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly).
    > 
    > -Chris
    > 
    > 
    >> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
    >> 
    >> For pulsar-website-build and pulsar-master, the "discard old builds"
    >> wasn't set unfortunately. I just enabled it now. Not sure if there's a
    >> way to quickly trigger a manual cleanup.
    >> 
    >> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
    >> used (since we switched to multiple smaller PR validation jobs a while
    >> ago). I have removed the Jenkins job. Hopefully that should take care
    >> of cleaning all the files.
    >> 
    >> 
    >> Thanks,
    >> Matteo
    >> 
    >> --
    >> Matteo Merli
    >> <mm...@apache.org>
    >> 
    >> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
    >>> 
    >>> Hello,
    >>> 
    >>> The jenkins master is nearly full.
    >>> 
    >>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
    >>> 
    >>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
    >>> 
    >>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
    >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
    >>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
    >>> 
    >>> 
    >>> 
    >>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
    >>> 
    >>> 
    >>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
    >>> 
    >>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
    >>> 
    >>> 
    >>> 594G    Packaging
    >>> 425G    pulsar-website-build
    >>> 274G    pulsar-master
    >>> 195G    hadoop-multibranch
    >>> 173G    HBase Nightly
    >>> 138G    HBase-Flaky-Tests
    >>> 119G    netbeans-release
    >>> 108G    Any23-trunk
    >>> 101G    netbeans-linux-experiment
    >>> 96G     Jackrabbit-Oak-Windows
    >>> 94G     HBase-Find-Flaky-Tests
    >>> 88G     PreCommit-ZOOKEEPER-github-pr-build
    >>> 74G     netbeans-windows
    >>> 71G     stanbol-0.12
    >>> 68G     Sling
    >>> 63G     Atlas-master-NoTests
    >>> 48G     FlexJS Framework (maven)
    >>> 45G     HBase-PreCommit-GitHub-PR
    >>> 42G     pulsar-pull-request
    >>> 40G     Atlas-1.0-NoTests
    >>> 
    >>> 
    >>> 
    >>> Thanks,
    >>> Chris
    >>> ASF Infra
    > 
    
    


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Maxim Solodovnik <so...@gmail.com>.
Hello Chris,

could you please take a look at
https://builds.apache.org/view/M-R/view/OpenMeetings/job/openmeetings

It has a rule to keep the last 3 builds, but somehow it keeps 5.
Manual deletion fails with this stack trace:

java.io.IOException: openmeetings #2860:
/x1/jenkins/jenkins-home/jobs/openmeetings/builds/2860 looks to have
already been deleted; siblings: [2864, 2869, lastSuccessfulBuild,
lastStableBuild, 1696, 1699, 2867, lastUnstableBuild, 2868, 1700,
legacyIds, lastFailedBuild, 1698, lastUnsuccessfulBuild, 2866]
	at hudson.model.Run.delete(Run.java:1564)
	at hudson.maven.MavenModuleSetBuild.delete(MavenModuleSetBuild.java:450)
	at hudson.model.Run.doDoDelete(Run.java:2298)
	at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627)
	at org.kohsuke.stapler.Function$MethodFunction.invoke(Function.java:396)



On Sat, 15 Jun 2019 at 09:19, Chris Lambertus <cm...@apache.org> wrote:

> Hmm. Upon further investigation, I do see that the
> pulsar-master/modules/*/builds directories contain current builds between
> 13 and 15 June, and then hundreds of builds from 2018 and 2017. It looks
> like these are indeed “orphaned” builds, and your ‘discard old builds’
> configuration is working properly.
>
> I have manually removed all builds older than 14 days from the
> pulsar-master/modules directory, and your usage is looking good now:
>
> root@jenkins02:/x1/jenkins/jenkins-home/jobs# du -sh pulsar-master
> 23G     pulsar-master
>
> I really appreciate your attention to this. I’ll check back again in a few
> weeks time to make sure that the builds are getting pruned as intended.
>
>
> I will also look for this pathology in other build directories too, in
> case the problem is with jenkins rather than the build config.
>
> -Chris
>
>
>
> > On Jun 14, 2019, at 6:15 PM, Matteo Merli <ma...@gmail.com>
> wrote:
> >
> > Hi Chris,
> > sorry, I lost the updates on this thread.
> >
> > After applying the "discard old builds" check, I saw all the old stuff
> > going away. Even now I don't see any of the old builds, from the
> > Jenkins UI.
> > https://builds.apache.org/job/pulsar-master/
> >
> > Is it possible that maybe Jenkins failed to cleanup these, for some
> > reason? In any case, please go ahead and remove those directories.
> >
> > Matteo
> > --
> > Matteo Merli
> > <ma...@gmail.com>
> >
> > On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <cm...@apache.org> wrote:
> >>
> >> Matteo,
> >>
> >> pulsar-website cleaned up nicely. pulsar-master is still problematic -
> despite having run a few minutes ago, there are still builds dating back to
> 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it
> also appears that maven module ‘discard old builds’ is not working either.
> I have not yet found any suggested solutions to this.
> >>
> >> -Chris
> >>
> >>
> >>
> >>> On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
> >>>
> >>> Outstanding, thanks. I believe the job cleanup runs when the next
> build runs. You could manually trigger a build to test, or we can check
> next time the build runs automatically (presuming it runs nighty.)
> >>>
> >>> -Chris
> >>>
> >>>
> >>>> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> >>>>
> >>>> For pulsar-website-build and pulsar-master, the "discard old builds"
> >>>> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> >>>> way to quickly trigger a manual cleanup.
> >>>>
> >>>> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> >>>> used (since we switched to multiple smaller PR validation jobs a while
> >>>> ago). I have removed the Jenkins job. Hopefully that should take care
> >>>> of cleaning all the files.
> >>>>
> >>>>
> >>>> Thanks,
> >>>> Matteo
> >>>>
> >>>> --
> >>>> Matteo Merli
> >>>> <mm...@apache.org>
> >>>>
> >>>> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org>
> wrote:
> >>>>>
> >>>>> Hello,
> >>>>>
> >>>>> The jenkins master is nearly full.
> >>>>>
> >>>>> The workspaces listed below need significant size reduction within
> 24 hours or Infra will need to perform some manual pruning of old builds to
> keep the jenkins system running. The Mesos “Packaging” job also needs to be
> corrected to include the project name (mesos-packaging) please.
> >>>>>
> >>>>> It appears that the typical ‘Discard Old Builds’ checkbox in the job
> configuration may not be working for multibranch pipeline jobs. Please
> refer to these articles for information on discarding builds in multibranch
> jobs:
> >>>>>
> >>>>>
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>>>>
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>>>>
> >>>>>
> >>>>>
> >>>>> NB: I have not fully vetted the above information, I just notice
> that many of these jobs have ‘Discard old builds’ checked, but it is
> clearly not working.
> >>>>>
> >>>>>
> >>>>> If you are unable to reduce your disk usage beyond what is listed,
> please let me know what the reasons are and we’ll see if we can find a
> solution. If you believe you’ve configured your job properly and the space
> usage is more than you expect, please comment here and we’ll take a look at
> what might be going on.
> >>>>>
> >>>>> I cut this list off arbitrarily at 40GB workspaces and larger. There
> are many which are between 20 and 30GB which also need to be addressed, but
> these are the current top contributors to the disk space situation.
> >>>>>
> >>>>>
> >>>>> 594G    Packaging
> >>>>> 425G    pulsar-website-build
> >>>>> 274G    pulsar-master
> >>>>> 195G    hadoop-multibranch
> >>>>> 173G    HBase Nightly
> >>>>> 138G    HBase-Flaky-Tests
> >>>>> 119G    netbeans-release
> >>>>> 108G    Any23-trunk
> >>>>> 101G    netbeans-linux-experiment
> >>>>> 96G     Jackrabbit-Oak-Windows
> >>>>> 94G     HBase-Find-Flaky-Tests
> >>>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >>>>> 74G     netbeans-windows
> >>>>> 71G     stanbol-0.12
> >>>>> 68G     Sling
> >>>>> 63G     Atlas-master-NoTests
> >>>>> 48G     FlexJS Framework (maven)
> >>>>> 45G     HBase-PreCommit-GitHub-PR
> >>>>> 42G     pulsar-pull-request
> >>>>> 40G     Atlas-1.0-NoTests
> >>>>>
> >>>>>
> >>>>>
> >>>>> Thanks,
> >>>>> Chris
> >>>>> ASF Infra
> >>>
> >>
>
>

-- 
WBR
Maxim aka solomax
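Chris's manual removal of builds older than 14 days from pulsar-master/modules (described in his quoted message above) can be approximated with find. This is a hedged sketch: the path and the 14-day threshold come from that message, and -mtime keys on directory modification time, which may not exactly match Jenkins' own build timestamps:

```shell
# Sketch: locate per-module build records not touched in 14 days.
# Run the -print form first as a dry run; only add deletion once
# the listed directories look right.
JOBS_DIR=/x1/jenkins/jenkins-home/jobs
find "$JOBS_DIR"/pulsar-master/modules/*/builds \
    -mindepth 1 -maxdepth 1 -type d -mtime +14 -print
# Deletion variant (same find, plus -exec):
#   ... -mtime +14 -exec rm -rf {} +
```

Note this only removes build records on disk; as the orphaned-builds discussion above shows, Jenkins may still hold references until the job next runs or is reloaded.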

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Maxim Solodovnik <so...@gmail.com>.
Hello Chris,

could you please take a look at
https://builds.apache.org/view/M-R/view/OpenMeetings/job/openmeetings

It has a rule to keep the last 3 builds, but somehow it keeps 5.
Manual delete fails with this stack trace:

java.io.IOException: openmeetings #2860:
/x1/jenkins/jenkins-home/jobs/openmeetings/builds/2860 looks to have
already been deleted; siblings: [2864, 2869, lastSuccessfulBuild,
lastStableBuild, 1696, 1699, 2867, lastUnstableBuild, 2868, 1700,
legacyIds, lastFailedBuild, 1698, lastUnsuccessfulBuild, 2866]
	at hudson.model.Run.delete(Run.java:1564)
	at hudson.maven.MavenModuleSetBuild.delete(MavenModuleSetBuild.java:450)
	at hudson.model.Run.doDoDelete(Run.java:2298)
	at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627)
	at org.kohsuke.stapler.Function$MethodFunction.invoke(Function.java:396)



On Sat, 15 Jun 2019 at 09:19, Chris Lambertus <cm...@apache.org> wrote:

> Hmm. Upon further investigation, I do see that the
> pulsar-master/modules/*/builds directories contain current builds between
> 13 and 15 June, and then hundreds of builds from 2018 and 2017. It looks
> like these are indeed “orphaned” builds, and your ‘discard old builds’
> configuration is working properly.
>
> I have manually removed all builds older than 14 days from the
> pulsar-master/modules directory, and your usage is looking good now:
>
> root@jenkins02:/x1/jenkins/jenkins-home/jobs# du -sh pulsar-master
> 23G     pulsar-master
>
> I really appreciate your attention to this. I’ll check back again in a few
> weeks time to make sure that the builds are getting pruned as intended.
>
>
> I will also look for this pathology in other build directories too, in
> case the problem is with jenkins rather than the build config.
>
> -Chris
>
>
>
> > On Jun 14, 2019, at 6:15 PM, Matteo Merli <ma...@gmail.com>
> wrote:
> >
> > Hi Chris,
> > sorry, I lost the updates on this thread.
> >
> > After applying the "discard old builds" check, I saw all the old stuff
> > going away. Even now I don't see any of the old builds, from the
> > Jenkins UI.
> > https://builds.apache.org/job/pulsar-master/
> >
> > Is it possible that maybe Jenkins failed to clean these up, for some
> > reason? In any case, please go ahead and remove those directories.
> >
> > Matteo
> > --
> > Matteo Merli
> > <ma...@gmail.com>
> >
> > On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <cm...@apache.org> wrote:
> >>
> >> Matteo,
> >>
> >> pulsar-website cleaned up nicely. pulsar-master is still problematic -
> despite having run a few minutes ago, there are still builds dating back to
> 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it
> also appears that maven module ‘discard old builds’ is not working either.
> I have not yet found any suggested solutions to this.
> >>
> >> -Chris
> >>
> >>
> >>
> >>> On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
> >>>
> >>> Outstanding, thanks. I believe the job cleanup runs when the next
> build runs. You could manually trigger a build to test, or we can check
> next time the build runs automatically (presuming it runs nightly.)
> >>>
> >>> -Chris
> >>>
> >>>
> >>>> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> >>>>
> >>>> For pulsar-website-build and pulsar-master, the "discard old builds"
> >>>> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> >>>> way to quickly trigger a manual cleanup.
> >>>>
> >>>> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> >>>> used (since we switched to multiple smaller PR validation jobs a while
> >>>> ago). I have removed the Jenkins job. Hopefully that should take care
> >>>> of cleaning all the files.
> >>>>
> >>>>
> >>>> Thanks,
> >>>> Matteo
> >>>>
> >>>> --
> >>>> Matteo Merli
> >>>> <mm...@apache.org>
> >>>>
> >>>> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org>
> wrote:
> >>>>>
> >>>>> Hello,
> >>>>>
> >>>>> The jenkins master is nearly full.
> >>>>>
> >>>>> The workspaces listed below need significant size reduction within
> 24 hours or Infra will need to perform some manual pruning of old builds to
> keep the jenkins system running. The Mesos “Packaging” job also needs to be
> corrected to include the project name (mesos-packaging) please.
> >>>>>
> >>>>> It appears that the typical ‘Discard Old Builds’ checkbox in the job
> configuration may not be working for multibranch pipeline jobs. Please
> refer to these articles for information on discarding builds in multibranch
> jobs:
> >>>>>
> >>>>>
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>>>>
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>>>>
> >>>>>
> >>>>>
> >>>>> NB: I have not fully vetted the above information, I just notice
> that many of these jobs have ‘Discard old builds’ checked, but it is
> clearly not working.
> >>>>>
> >>>>>
> >>>>> If you are unable to reduce your disk usage beyond what is listed,
> please let me know what the reasons are and we’ll see if we can find a
> solution. If you believe you’ve configured your job properly and the space
> usage is more than you expect, please comment here and we’ll take a look at
> what might be going on.
> >>>>>
> >>>>> I cut this list off arbitrarily at 40GB workspaces and larger. There
> are many which are between 20 and 30GB which also need to be addressed, but
> these are the current top contributors to the disk space situation.
> >>>>>
> >>>>>
> >>>>> 594G    Packaging
> >>>>> 425G    pulsar-website-build
> >>>>> 274G    pulsar-master
> >>>>> 195G    hadoop-multibranch
> >>>>> 173G    HBase Nightly
> >>>>> 138G    HBase-Flaky-Tests
> >>>>> 119G    netbeans-release
> >>>>> 108G    Any23-trunk
> >>>>> 101G    netbeans-linux-experiment
> >>>>> 96G     Jackrabbit-Oak-Windows
> >>>>> 94G     HBase-Find-Flaky-Tests
> >>>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >>>>> 74G     netbeans-windows
> >>>>> 71G     stanbol-0.12
> >>>>> 68G     Sling
> >>>>> 63G     Atlas-master-NoTests
> >>>>> 48G     FlexJS Framework (maven)
> >>>>> 45G     HBase-PreCommit-GitHub-PR
> >>>>> 42G     pulsar-pull-request
> >>>>> 40G     Atlas-1.0-NoTests
> >>>>>
> >>>>>
> >>>>>
> >>>>> Thanks,
> >>>>> Chris
> >>>>> ASF Infra
> >>>
> >>
>
>

-- 
WBR
Maxim aka solomax

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
Hmm. Upon further investigation, I do see that the pulsar-master/modules/*/builds directories contain current builds between 13 and 15 June, and then hundreds of builds from 2018 and 2017. It looks like these are indeed “orphaned” builds, and your ‘discard old builds’ configuration is working properly. 

I have manually removed all builds older than 14 days from the pulsar-master/modules directory, and your usage is looking good now:

root@jenkins02:/x1/jenkins/jenkins-home/jobs# du -sh pulsar-master
23G	pulsar-master
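
A sketch of the kind of pruning described here (assumed commands, not necessarily what Infra actually ran), based on the Maven job layout mentioned in this thread, `<job>/modules/<module>/builds/<number>`:

```shell
# Remove per-module build directories older than a cutoff, for a Maven
# job whose builds live at <job>/modules/<module>/builds/<number>.
prune_old_builds() {
    # $1: path to the job's modules directory; $2: age cutoff in days
    find "$1" -mindepth 3 -maxdepth 3 -path '*/builds/*' \
         -type d -mtime "+$2" -exec rm -rf {} +
}

# Dry-run first by replacing '-exec rm -rf {} +' with '-print', e.g.:
#   prune_old_builds /x1/jenkins/jenkins-home/jobs/pulsar-master/modules 14
```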

I really appreciate your attention to this. I’ll check back again in a few weeks time to make sure that the builds are getting pruned as intended.


I will also look for this pathology in other build directories too, in case the problem is with jenkins rather than the build config.

-Chris



> On Jun 14, 2019, at 6:15 PM, Matteo Merli <ma...@gmail.com> wrote:
> 
> Hi Chris,
> sorry, I lost the updates on this thread.
> 
> After applying the "discard old builds" check, I saw all the old stuff
> going away. Even now I don't see any of the old builds, from the
> Jenkins UI.
> https://builds.apache.org/job/pulsar-master/
> 
> Is it possible that maybe Jenkins failed to clean these up, for some
> reason? In any case, please go ahead and remove those directories.
> 
> Matteo
> --
> Matteo Merli
> <ma...@gmail.com>
> 
> On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <cm...@apache.org> wrote:
>> 
>> Matteo,
>> 
>> pulsar-website cleaned up nicely. pulsar-master is still problematic - despite having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it also appears that maven module ‘discard old builds’ is not working either. I have not yet found any suggested solutions to this.
>> 
>> -Chris
>> 
>> 
>> 
>>> On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
>>> 
>>> Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly.)
>>> 
>>> -Chris
>>> 
>>> 
>>>> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
>>>> 
>>>> For pulsar-website-build and pulsar-master, the "discard old builds"
>>>> wasn't set unfortunately. I just enabled it now. Not sure if there's a
>>>> way to quickly trigger a manual cleanup.
>>>> 
>>>> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
>>>> used (since we switched to multiple smaller PR validation jobs a while
>>>> ago). I have removed the Jenkins job. Hopefully that should take care
>>>> of cleaning all the files.
>>>> 
>>>> 
>>>> Thanks,
>>>> Matteo
>>>> 
>>>> --
>>>> Matteo Merli
>>>> <mm...@apache.org>
>>>> 
>>>> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
>>>>> 
>>>>> Hello,
>>>>> 
>>>>> The jenkins master is nearly full.
>>>>> 
>>>>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
>>>>> 
>>>>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
>>>>> 
>>>>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>>>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
>>>>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>>>>> 
>>>>> 
>>>>> 
>>>>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
>>>>> 
>>>>> 
>>>>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
>>>>> 
>>>>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
>>>>> 
>>>>> 
>>>>> 594G    Packaging
>>>>> 425G    pulsar-website-build
>>>>> 274G    pulsar-master
>>>>> 195G    hadoop-multibranch
>>>>> 173G    HBase Nightly
>>>>> 138G    HBase-Flaky-Tests
>>>>> 119G    netbeans-release
>>>>> 108G    Any23-trunk
>>>>> 101G    netbeans-linux-experiment
>>>>> 96G     Jackrabbit-Oak-Windows
>>>>> 94G     HBase-Find-Flaky-Tests
>>>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
>>>>> 74G     netbeans-windows
>>>>> 71G     stanbol-0.12
>>>>> 68G     Sling
>>>>> 63G     Atlas-master-NoTests
>>>>> 48G     FlexJS Framework (maven)
>>>>> 45G     HBase-PreCommit-GitHub-PR
>>>>> 42G     pulsar-pull-request
>>>>> 40G     Atlas-1.0-NoTests
>>>>> 
>>>>> 
>>>>> 
>>>>> Thanks,
>>>>> Chris
>>>>> ASF Infra
>>> 
>> 


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Matteo Merli <ma...@gmail.com>.
Hi Chris,
sorry, I lost the updates on this thread.

After applying the "discard old builds" check, I saw all the old stuff
going away. Even now I don't see any of the old builds, from the
Jenkins UI.
https://builds.apache.org/job/pulsar-master/

Is it possible that maybe Jenkins failed to clean these up, for some
reason? In any case, please go ahead and remove those directories.
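
For reference, the "discard old builds" check enabled in the job UI can also be pinned in the Jenkinsfile itself, which is how multibranch jobs pick it up per branch; a minimal sketch in standard declarative pipeline syntax, with purely illustrative retention numbers (not the values the Pulsar jobs use):

```groovy
// Sketch: Jenkinsfile equivalent of ticking "Discard old builds".
// The retention values below are illustrative only.
pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '14'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps go here'
            }
        }
    }
}
```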

Matteo
--
Matteo Merli
<ma...@gmail.com>

On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <cm...@apache.org> wrote:
>
> Matteo,
>
> pulsar-website cleaned up nicely. pulsar-master is still problematic - despite having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it also appears that maven module ‘discard old builds’ is not working either. I have not yet found any suggested solutions to this.
>
> -Chris
>
>
>
> > On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
> >
> > Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly.)
> >
> > -Chris
> >
> >
> >> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> >>
> >> For pulsar-website-build and pulsar-master, the "discard old builds"
> >> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> >> way to quickly trigger a manual cleanup.
> >>
> >> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> >> used (since we switched to multiple smaller PR validation jobs a while
> >> ago). I have removed the Jenkins job. Hopefully that should take care
> >> of cleaning all the files.
> >>
> >>
> >> Thanks,
> >> Matteo
> >>
> >> --
> >> Matteo Merli
> >> <mm...@apache.org>
> >>
> >> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
> >>>
> >>> Hello,
> >>>
> >>> The jenkins master is nearly full.
> >>>
> >>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
> >>>
> >>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
> >>>
> >>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>>
> >>>
> >>>
> >>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
> >>>
> >>>
> >>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
> >>>
> >>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
> >>>
> >>>
> >>> 594G    Packaging
> >>> 425G    pulsar-website-build
> >>> 274G    pulsar-master
> >>> 195G    hadoop-multibranch
> >>> 173G    HBase Nightly
> >>> 138G    HBase-Flaky-Tests
> >>> 119G    netbeans-release
> >>> 108G    Any23-trunk
> >>> 101G    netbeans-linux-experiment
> >>> 96G     Jackrabbit-Oak-Windows
> >>> 94G     HBase-Find-Flaky-Tests
> >>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >>> 74G     netbeans-windows
> >>> 71G     stanbol-0.12
> >>> 68G     Sling
> >>> 63G     Atlas-master-NoTests
> >>> 48G     FlexJS Framework (maven)
> >>> 45G     HBase-PreCommit-GitHub-PR
> >>> 42G     pulsar-pull-request
> >>> 40G     Atlas-1.0-NoTests
> >>>
> >>>
> >>>
> >>> Thanks,
> >>> Chris
> >>> ASF Infra
> >
>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Matteo Merli <ma...@gmail.com>.
Hi Chris,
sorry, I lost the updates on this thread.

After applying the "discard old builds" check, I saw all the old stuff
going away. Even now I don't see any of the old builds, from the
Jenkins UI.
https://builds.apache.org/job/pulsar-master/

Is it possible that maybe Jenkins failed to cleanup these, for some
reason? In any case, please go ahead and remove those directories.

Matteo
--
Matteo Merli
<ma...@gmail.com>

On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <cm...@apache.org> wrote:
>
> Matteo,
>
> pulsar-website cleaned up nicely. pulsar-master is still problematic - despite having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it also appears that maven module ‘discard old builds’ is not working either. I have not yet found any suggested solutions to this.
>
> -Chris
>
>
>
> > On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
> >
> > Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nighty.)
> >
> > -Chris
> >
> >
> >> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> >>
> >> For pulsar-website-build and pulsar-master, the "discard old builds"
> >> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> >> way to quickly trigger a manual cleanup.
> >>
> >> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> >> used (since we switched to multiple smaller PR validation jobs a while
> >> ago). I have removed the Jenkins job. Hopefully that should take care
> >> of cleaning all the files.
> >>
> >>
> >> Thanks,
> >> Matteo
> >>
> >> --
> >> Matteo Merli
> >> <mm...@apache.org>
> >>
> >> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
> >>>
> >>> Hello,
> >>>
> >>> The jenkins master is nearly full.
> >>>
> >>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
> >>>
> >>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
> >>>
> >>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>>
> >>>
> >>>
> >>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
> >>>
> >>>
> >>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
> >>>
> >>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
> >>>
> >>>
> >>> 594G    Packaging
> >>> 425G    pulsar-website-build
> >>> 274G    pulsar-master
> >>> 195G    hadoop-multibranch
> >>> 173G    HBase Nightly
> >>> 138G    HBase-Flaky-Tests
> >>> 119G    netbeans-release
> >>> 108G    Any23-trunk
> >>> 101G    netbeans-linux-experiment
> >>> 96G     Jackrabbit-Oak-Windows
> >>> 94G     HBase-Find-Flaky-Tests
> >>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >>> 74G     netbeans-windows
> >>> 71G     stanbol-0.12
> >>> 68G     Sling
> >>> 63G     Atlas-master-NoTests
> >>> 48G     FlexJS Framework (maven)
> >>> 45G     HBase-PreCommit-GitHub-PR
> >>> 42G     pulsar-pull-request
> >>> 40G     Atlas-1.0-NoTests
> >>>
> >>>
> >>>
> >>> Thanks,
> >>> Chris
> >>> ASF Infra
> >
>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Matteo Merli <ma...@gmail.com>.
Hi Chris,
sorry, I lost the updates on this thread.

After applying the "discard old builds" check, I saw all the old stuff
going away. Even now I don't see any of the old builds, from the
Jenkins UI.
https://builds.apache.org/job/pulsar-master/

Is it possible that maybe Jenkins failed to cleanup these, for some
reason? In any case, please go ahead and remove those directories.

Matteo
--
Matteo Merli
<ma...@gmail.com>

On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <cm...@apache.org> wrote:
>
> Matteo,
>
> pulsar-website cleaned up nicely. pulsar-master is still problematic - despite having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it also appears that maven module ‘discard old builds’ is not working either. I have not yet found any suggested solutions to this.
>
> -Chris
>
>
>
> > On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
> >
> > Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly.)
> >
> > -Chris
> >
> >
> >> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> >>
> >> For pulsar-website-build and pulsar-master, the "discard old builds"
> >> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> >> way to quickly trigger a manual cleanup.
> >>
> >> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> >> used (since we switched to multiple smaller PR validation jobs a while
> >> ago). I have removed the Jenkins job. Hopefully that should take care
> >> of cleaning all the files.
> >>
> >>
> >> Thanks,
> >> Matteo
> >>
> >> --
> >> Matteo Merli
> >> <mm...@apache.org>
> >>
> >> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
> >>>
> >>> Hello,
> >>>
> >>> The jenkins master is nearly full.
> >>>
> >>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
> >>>
> >>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
> >>>
> >>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>>
> >>>
> >>>
> >>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
> >>>
> >>>
> >>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
> >>>
> >>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
> >>>
> >>>
> >>> 594G    Packaging
> >>> 425G    pulsar-website-build
> >>> 274G    pulsar-master
> >>> 195G    hadoop-multibranch
> >>> 173G    HBase Nightly
> >>> 138G    HBase-Flaky-Tests
> >>> 119G    netbeans-release
> >>> 108G    Any23-trunk
> >>> 101G    netbeans-linux-experiment
> >>> 96G     Jackrabbit-Oak-Windows
> >>> 94G     HBase-Find-Flaky-Tests
> >>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >>> 74G     netbeans-windows
> >>> 71G     stanbol-0.12
> >>> 68G     Sling
> >>> 63G     Atlas-master-NoTests
> >>> 48G     FlexJS Framework (maven)
> >>> 45G     HBase-PreCommit-GitHub-PR
> >>> 42G     pulsar-pull-request
> >>> 40G     Atlas-1.0-NoTests
> >>>
> >>>
> >>>
> >>> Thanks,
> >>> Chris
> >>> ASF Infra
> >
>

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Alex Harui <ah...@adobe.com.INVALID>.
I deleted 3 FlexJS builds that we haven’t used in a year and a half.

On 6/10/19, 2:29 PM, "Chris Lambertus" <cm...@apache.org> wrote:

    Matteo,
    
    pulsar-website cleaned up nicely. pulsar-master is still problematic - despite having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it also appears that maven module ‘discard old builds’ is not working either. I have not yet found any suggested solutions to this.
    
    -Chris
    
    
    
    > On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
    > 
    > Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly.)
    > 
    > -Chris
    > 
    > 
    >> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
    >> 
    >> For pulsar-website-build and pulsar-master, the "discard old builds"
    >> wasn't set unfortunately. I just enabled it now. Not sure if there's a
    >> way to quickly trigger a manual cleanup.
    >> 
    >> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
    >> used (since we switched to multiple smaller PR validation jobs a while
    >> ago). I have removed the Jenkins job. Hopefully that should take care
    >> of cleaning all the files.
    >> 
    >> 
    >> Thanks,
    >> Matteo
    >> 
    >> --
    >> Matteo Merli
    >> <mm...@apache.org>
    >> 
    >> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
    >>> 
    >>> Hello,
    >>> 
    >>> The jenkins master is nearly full.
    >>> 
    >>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
    >>> 
    >>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
    >>> 
    >>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
    >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
    >>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
    >>> 
    >>> 
    >>> 
    >>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
    >>> 
    >>> 
    >>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
    >>> 
    >>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
    >>> 
    >>> 
    >>> 594G    Packaging
    >>> 425G    pulsar-website-build
    >>> 274G    pulsar-master
    >>> 195G    hadoop-multibranch
    >>> 173G    HBase Nightly
    >>> 138G    HBase-Flaky-Tests
    >>> 119G    netbeans-release
    >>> 108G    Any23-trunk
    >>> 101G    netbeans-linux-experiment
    >>> 96G     Jackrabbit-Oak-Windows
    >>> 94G     HBase-Find-Flaky-Tests
    >>> 88G     PreCommit-ZOOKEEPER-github-pr-build
    >>> 74G     netbeans-windows
    >>> 71G     stanbol-0.12
    >>> 68G     Sling
    >>> 63G     Atlas-master-NoTests
    >>> 48G     FlexJS Framework (maven)
    >>> 45G     HBase-PreCommit-GitHub-PR
    >>> 42G     pulsar-pull-request
    >>> 40G     Atlas-1.0-NoTests
    >>> 
    >>> 
    >>> 
    >>> Thanks,
    >>> Chris
    >>> ASF Infra
    > 
    
    



Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
Matteo,

pulsar-website cleaned up nicely. pulsar-master is still problematic - despite having run a few minutes ago, there are still builds dating back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so it also appears that maven module ‘discard old builds’ is not working either. I have not yet found any suggested solutions to this.

-Chris



> On Jun 10, 2019, at 11:14 AM, Chris Lambertus <cm...@apache.org> wrote:
> 
> Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly.)
> 
> -Chris
> 
> 
>> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
>> 
>> For pulsar-website-build and pulsar-master, the "discard old builds"
>> wasn't set unfortunately. I just enabled it now. Not sure if there's a
>> way to quickly trigger a manual cleanup.
>> 
>> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
>> used (since we switched to multiple smaller PR validation jobs a while
>> ago). I have removed the Jenkins job. Hopefully that should take care
>> of cleaning all the files.
>> 
>> 
>> Thanks,
>> Matteo
>> 
>> --
>> Matteo Merli
>> <mm...@apache.org>
>> 
>> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
>>> 
>>> Hello,
>>> 
>>> The jenkins master is nearly full.
>>> 
>>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
>>> 
>>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
>>> 
>>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
>>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>>> 
>>> 
>>> 
>>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
>>> 
>>> 
>>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
>>> 
>>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
>>> 
>>> 
>>> 594G    Packaging
>>> 425G    pulsar-website-build
>>> 274G    pulsar-master
>>> 195G    hadoop-multibranch
>>> 173G    HBase Nightly
>>> 138G    HBase-Flaky-Tests
>>> 119G    netbeans-release
>>> 108G    Any23-trunk
>>> 101G    netbeans-linux-experiment
>>> 96G     Jackrabbit-Oak-Windows
>>> 94G     HBase-Find-Flaky-Tests
>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
>>> 74G     netbeans-windows
>>> 71G     stanbol-0.12
>>> 68G     Sling
>>> 63G     Atlas-master-NoTests
>>> 48G     FlexJS Framework (maven)
>>> 45G     HBase-PreCommit-GitHub-PR
>>> 42G     pulsar-pull-request
>>> 40G     Atlas-1.0-NoTests
>>> 
>>> 
>>> 
>>> Thanks,
>>> Chris
>>> ASF Infra
> 



Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nightly.)

-Chris
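
Matteo asked above whether a manual cleanup can be triggered quickly. Besides re-running the job, a Jenkins administrator can prune old builds from the Script Console; a sketch of that approach, assuming admin access (the job name and 30-day cutoff are illustrative, not commands Infra actually ran):

```groovy
// Run from the Jenkins Script Console ($JENKINS_URL/script); requires admin.
// Job name and age cutoff are illustrative.
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName('pulsar-master')
def cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000  // 30 days

job.builds.findAll { it.getTimeInMillis() < cutoff }.each { build ->
    println "Deleting ${build.getFullDisplayName()}"
    build.delete()
}
```

For Maven-style jobs, the per-module build records mentioned later in this thread may need the same treatment on each module, since deleting the parent builds does not necessarily remove them.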


> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> 
> For pulsar-website-build and pulsar-master, the "discard old builds"
> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> way to quickly trigger a manual cleanup.
> 
> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> used (since we switched to multiple smaller PR validation jobs a while
> ago). I have removed the Jenkins job. Hopefully that should take care
> of cleaning all the files.
> 
> 
> Thanks,
> Matteo
> 
> --
> Matteo Merli
> <mm...@apache.org>
> 
> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
>> 
>> Hello,
>> 
>> The jenkins master is nearly full.
>> 
>> The workspaces listed below need significant size reduction within 24 hours or Infra will need to perform some manual pruning of old builds to keep the jenkins system running. The Mesos “Packaging” job also needs to be corrected to include the project name (mesos-packaging) please.
>> 
>> It appears that the typical ‘Discard Old Builds’ checkbox in the job configuration may not be working for multibranch pipeline jobs. Please refer to these articles for information on discarding builds in multibranch jobs:
>> 
>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>> https://issues.jenkins-ci.org/browse/JENKINS-35642
>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>> 
>> 
>> 
>> NB: I have not fully vetted the above information, I just notice that many of these jobs have ‘Discard old builds’ checked, but it is clearly not working.
>> 
>> 
>> If you are unable to reduce your disk usage beyond what is listed, please let me know what the reasons are and we’ll see if we can find a solution. If you believe you’ve configured your job properly and the space usage is more than you expect, please comment here and we’ll take a look at what might be going on.
>> 
>> I cut this list off arbitrarily at 40GB workspaces and larger. There are many which are between 20 and 30GB which also need to be addressed, but these are the current top contributors to the disk space situation.
>> 
>> 
>> 594G    Packaging
>> 425G    pulsar-website-build
>> 274G    pulsar-master
>> 195G    hadoop-multibranch
>> 173G    HBase Nightly
>> 138G    HBase-Flaky-Tests
>> 119G    netbeans-release
>> 108G    Any23-trunk
>> 101G    netbeans-linux-experiment
>> 96G     Jackrabbit-Oak-Windows
>> 94G     HBase-Find-Flaky-Tests
>> 88G     PreCommit-ZOOKEEPER-github-pr-build
>> 74G     netbeans-windows
>> 71G     stanbol-0.12
>> 68G     Sling
>> 63G     Atlas-master-NoTests
>> 48G     FlexJS Framework (maven)
>> 45G     HBase-PreCommit-GitHub-PR
>> 42G     pulsar-pull-request
>> 40G     Atlas-1.0-NoTests
>> 
>> 
>> 
>> Thanks,
>> Chris
>> ASF Infra


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check next time the build runs automatically (presuming it runs nighty.)

-Chris


> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> 
> For pulsar-website-build and pulsar-master, the "discard old builds"
> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> way to quickly trigger a manual cleanup.
> 
> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> used (since we switched to multiple smaller PR validation jobs a while
> ago). I have removed the Jenkins job. Hopefully that should take care
> of cleaning all the files.
> 
> 
> Thanks,
> Matteo
> 
> --
> Matteo Merli
> <mm...@apache.org>
> 
> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
>> 
>> Hello,
>> 
>> The jenkins master is nearly full.
>> 


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
Outstanding, thanks. I believe the job cleanup runs when the next build does. You could manually trigger a build to test, or we can check the next time the build runs automatically (presuming it runs nightly).

-Chris


> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mm...@apache.org> wrote:
> 
> For pulsar-website-build and pulsar-master, the "discard old builds"
> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> way to quickly trigger a manual cleanup.
> 
> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> used (since we switched to multiple smaller PR validation jobs a while
> ago). I have removed the Jenkins job. Hopefully that should take care
> of cleaning all the files.
> 
> 
> Thanks,
> Matteo
> 
> --
> Matteo Merli
> <mm...@apache.org>
> 
> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Matteo Merli <mm...@apache.org>.
For pulsar-website-build and pulsar-master, the "discard old builds"
wasn't set unfortunately. I just enabled it now. Not sure if there's a
way to quickly trigger a manual cleanup.

Regarding "pulsar-pull-request": this was an old Jenkins job no longer
used (since we switched to multiple smaller PR validation jobs a while
ago). I have removed the Jenkins job. Hopefully that should take care
of cleaning all the files.


Thanks,
Matteo

--
Matteo Merli
<mm...@apache.org>
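
For multibranch pipeline jobs, one workaround discussed in the linked articles is to declare the retention policy in the Jenkinsfile itself rather than relying only on the folder-level UI checkbox. A minimal declarative-pipeline sketch (the retention numbers here are illustrative, not project policy):

```groovy
// Sketch: enforce build/artifact retention from the Jenkinsfile so every
// generated branch job inherits it. Numbers below are examples only.
pipeline {
    agent any
    options {
        // Keep at most 10 builds, and artifacts from at most 5 of them.
        buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps go here'
            }
        }
    }
}
```

Note that the policy only takes effect on a branch job the next time that job actually runs, which matches the "trigger a build to test" suggestion above.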

On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Chris Lambertus <cm...@apache.org>.
All,

Thanks to those who have addressed this so far. The immediate storage issue has been resolved, but some builds still need to be fixed to ensure the build master does not run out of space again anytime soon.

Here is the current list of builds storing over 40GB on the master:

597G    Packaging
204G    pulsar-master
199G    hadoop-multibranch
108G    Any23-trunk
93G     HBase Nightly
88G     PreCommit-ZOOKEEPER-github-pr-build
71G     stanbol-0.12
64G     Atlas-master-NoTests
50G     HBase-Find-Flaky-Tests
42G     PreCommit-ZOOKEEPER-github-pr-build-maven
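
A per-job usage list like the one above can be regenerated at any time with standard tools; a sketch, assuming a default Jenkins home layout (the `/var/lib/jenkins` path is an assumption — adjust `JENKINS_HOME` for the actual master):

```shell
# Rank Jenkins job directories by disk usage, largest first.
# JENKINS_HOME default below is illustrative, not the ASF master's path.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
du -sh "$JENKINS_HOME"/jobs/* 2>/dev/null | sort -rh | head -n 20
```

`sort -rh` sorts the human-readable sizes (`594G`, `96G`, …) numerically in reverse, so the top offenders come first.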


If you are unable to reduce the size of your retained builds, please let me know. I have added some additional project dev lists to the CC as I would like to hear back from everyone on this list as to the state of their stored builds.

Thanks,
Chris




> On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cm...@apache.org> wrote:


RE: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Eric Barboni <sk...@apache.org>.
Hi,
 The problematic Apache NetBeans builds are now configured to discard all but the 5 most recent builds.

 Are the workspace statistics easy to get and parse, so we could build a website from them? (That may help on the frontend.)

Best Regards
Eric



-----Message d'origine-----
De : Robert Munteanu <ro...@apache.org> 
Envoyé : mardi 11 juin 2019 10:55
À : builds@apache.org
Objet : Re: ACTION REQUIRED: disk space on jenkins master nearly full

Hi again,

On Mon, 2019-06-10 at 10:57 -0700, Chris Lambertus wrote:
> 68G     Sling

The Sling github folder now retains only the last 10 builds for each job. Since this is a GH folder with ~300 jobs, I did not trigger them all just to make the cleanup take effect.

If you have certain jobs that stand out as being large, let me know and I'll retrigger them. Note that we don't rely on the workspace data for anything important, so it should be fine to just wipe all of it if you need the space now.

Thanks,

Robert



Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Robert Munteanu <ro...@apache.org>.
Hi again,

On Mon, 2019-06-10 at 10:57 -0700, Chris Lambertus wrote:
> 68G     Sling

The Sling github folder now retains only the last 10 builds for each
job. Since this is a GH folder with ~300 jobs, I did not trigger them
all just to make the cleanup take effect.

If you have certain jobs that stand out as being large, let me know and
I'll retrigger them. Note that we don't rely on the workspace data for
anything important, so it should be fine to just wipe all of it if you
need the space now.

Thanks,

Robert







Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Peter Somogyi <ps...@apache.org>.
Thanks Josh for acting on this!

On Tue, Jun 11, 2019 at 3:15 AM Sean Busbey <bu...@apache.org> wrote:

> We used to have a build step that compressed our logs for us. I don't think
> Jenkins can read the test results if we compress the XML files from surefire,
> so I'm not sure how much space we can save. That's where I'd start, though.
>
> On Mon, Jun 10, 2019, 19:46 张铎(Duo Zhang) <pa...@gmail.com> wrote:
>
> > Does surefire have some options to truncate the test output if it is too
> > large? Or does Jenkins have options to truncate or compress a file when
> > archiving?
> >
> > Josh Elser <el...@apache.org> 于2019年6月11日周二 上午8:40写道:
> >
> > > Just a cursory glance at some build artifacts showed just test output
> > > which sometimes extended into the multiple megabytes.
> > >
> > > So everyone else knows, I just chatted with ChrisL in Slack and he
> > > confirmed that our disk utilization is down already (after
> HBASE-22563).
> > > He thanked us for the quick response.
> > >
> > > We should keep pulling on this thread now that we're looking at it :)
> > >
> > > On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:
> > > > Oh, it is the build artifacts, not the jars...
> > > >
> > > > Most of our build artifacts are build logs, but maybe the problem is
> > that
> > > > some of the logs are very large if the test hangs...
> > > >
> > > > 张铎(Duo Zhang) <pa...@gmail.com> 于2019年6月11日周二 上午8:16写道:
> > > >
> > > >> For flakey we just need the commit id in the console output then we
> > can
> > > >> build the artifacts locally. +1 on removing artifacts caching.
> > > >>
> > > >> Josh Elser <el...@apache.org> 于2019年6月11日周二 上午7:50写道:
> > > >>
> > > >>> Sure, Misty. No arguments here.
> > > >>>
> > > >>> I think that might be a bigger untangling. Maybe Peter or Busbey
> know
> > > >>> better about how these could be de-coupled (e.g. I think flakies
> > > >>> actually look back at old artifacts), but I'm not sure off the top
> of
> > > my
> > > >>> head. I was just going for a quick fix to keep Infra from doing
> > > >>> something super-destructive.
> > > >>>
> > > >>> For context, I've dropped them a note in Slack to make sure what
> I'm
> > > >>> doing is having a positive effect.
> > > >>>
> > > >>> On 6/10/19 7:34 PM, Misty Linville wrote:
> > > >>>> Keeping artifacts and keeping build logs are two separate things.
> I
> > > >>> don’t
> > > >>>> see a need to keep any artifacts past the most recent green and
> most
> > > >>> recent
> > > >>>> red builds. Alternately if we need the artifacts let’s have
> Jenkins
> > > put
> > > >>>> them somewhere rather than keeping them there. You can get back to
> > > >>> whatever
> > > >>>> hash you need within git to reproduce a build problem.
> > > >>>>
> > > >>>> On Mon, Jun 10, 2019 at 2:26 PM Josh Elser <el...@apache.org>
> > wrote:
> > > >>>>
> > > >>>>> https://issues.apache.org/jira/browse/HBASE-22563 for a quick
> > > bandaid
> > > >>> (I
> > > >>>>> hope).
> > > >>>>>
> > > >>>>> On 6/10/19 4:31 PM, Josh Elser wrote:
> > > >>>>>> Eyes on.
> > > >>>>>>
> > > >>>>>> Looking at master, we already have the linked configuration, set
> > to
> > > >>>>>> retain 30 builds.
> > > >>>>>>
> > > >>>>>> We have some extra branches which we can lop off (branch-1.2,
> > > >>>>>> branch-2.0, maybe some feature branches too). A quick fix might
> be
> > > to
> > > >>>>>> just pull back that 30 to 10.
> > > >>>>>>
> > > >>>>>> Largely figuring out how this stuff works now, give me a shout
> in
> > > >>> Slack
> > > >>>>>> if anyone else has cycles.
> > > >>>>>>
> > > >>>>>> On 6/10/19 2:34 PM, Peter Somogyi wrote:
> > > >>>>>>> Hi,
> > > >>>>>>>
> > > >>>>>>> HBase jobs are using more than 400GB based on this list.
> > > >>>>>>> Could someone take a look at the job configurations today?
> > > >>> Otherwise, I
> > > >>>>>>> will look into it tomorrow morning.
> > > >>>>>>>
> > > >>>>>>> Thanks,
> > > >>>>>>> Peter
> > > >>>>>>>
> > > >>>>>>> ---------- Forwarded message ---------
> > > >>>>>>> From: Chris Lambertus <cm...@apache.org>
> > > >>>>>>> Date: Mon, Jun 10, 2019 at 7:57 PM
> > > >>>>>>> Subject: ACTION REQUIRED: disk space on jenkins master nearly
> > full
> > > >>>>>>> To: <bu...@apache.org>
> > > >>>>>>> Cc: <de...@mesos.apache.org>, <de...@pulsar.apache.org>

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Sean Busbey <bu...@apache.org>.
We used to have a build step that compressed our logs for us. I don't think
Jenkins can read the test results if we compress the XML files from surefire,
so I'm not sure how much space we can save. That's where I'd start, though.
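
The compression step could look like the following sketch: gzip only the bulky plain-text surefire output and leave the XML result files readable by Jenkins. The path pattern and 1M size threshold are illustrative assumptions, not the old build step itself:

```shell
# Sketch: compress large plain-text surefire logs in place, skipping the
# *.xml result files since Jenkins parses those for the test report.
# The path pattern and the 1M threshold are examples, not project config.
find . -path '*/surefire-reports/*.txt' -size +1M -exec gzip -9 {} +
```

Anything compressed this way stays retrievable (gunzip on demand) but no longer counts at full size against the master's disk.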

On Mon, Jun 10, 2019, 19:46 张铎(Duo Zhang) <pa...@gmail.com> wrote:

> Does surefire have some options to truncate the test output if it is too
> large? Or jenkins has some options to truncate or compress a file when
> archiving?
>
> Josh Elser <el...@apache.org> 于2019年6月11日周二 上午8:40写道:
>
> > Just a cursory glance at some build artifacts showed just test output
> > which sometimes extended into the multiple megabytes.
> >
> > So everyone else knows, I just chatted with ChrisL in Slack and he
> > confirmed that our disk utilization is down already (after HBASE-22563).
> > He thanked us for the quick response.
> >
> > We should keep pulling on this thread now that we're looking at it :)
> >
> > On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:
> > > Oh, it is the build artifacts, not the jars...
> > >
> > > Most of our build artifacts are build logs, but maybe the problem is
> that
> > > some of the logs are very large if the test hangs...
> > >
> > > 张铎(Duo Zhang) <pa...@gmail.com> 于2019年6月11日周二 上午8:16写道:
> > >
> > >> For flakey we just need the commit id in the console output then we
> can
> > >> build the artifacts locally. +1 on removing artifacts caching.
> > >>
> > >> Josh Elser <el...@apache.org> 于2019年6月11日周二 上午7:50写道:
> > >>
> > >>> Sure, Misty. No arguments here.
> > >>>
> > >>> I think that might be a bigger untangling. Maybe Peter or Busbey know
> > >>> better about how these could be de-coupled (e.g. I think flakies
> > >>> actually look back at old artifacts), but I'm not sure off the top of
> > my
> > >>> head. I was just going for a quick fix to keep Infra from doing
> > >>> something super-destructive.
> > >>>
> > >>> For context, I've dropped them a note in Slack to make sure what I'm
> > >>> doing is having a positive effect.
> > >>>
> > >>> On 6/10/19 7:34 PM, Misty Linville wrote:
> > >>>> Keeping artifacts and keeping build logs are two separate things. I
> > >>> don’t
> > >>>> see a need to keep any artifacts past the most recent green and most
> > >>> recent
> > >>>> red builds. Alternately if we need the artifacts let’s have Jenkins
> > put
> > >>>> them somewhere rather than keeping them there. You can get back to
> > >>> whatever
> > >>>> hash you need within git to reproduce a build problem.
> > >>>>
> > >>>> On Mon, Jun 10, 2019 at 2:26 PM Josh Elser <el...@apache.org>
> wrote:
> > >>>>
> > >>>>> https://issues.apache.org/jira/browse/HBASE-22563 for a quick
> > bandaid
> > >>> (I
> > >>>>> hope).
> > >>>>>
> > >>>>> On 6/10/19 4:31 PM, Josh Elser wrote:
> > >>>>>> Eyes on.
> > >>>>>>
> > >>>>>> Looking at master, we already have the linked configuration, set
> to
> > >>>>>> retain 30 builds.
> > >>>>>>
> > >>>>>> We have some extra branches which we can lop off (branch-1.2,
> > >>>>>> branch-2.0, maybe some feature branches too). A quick fix might be
> > to
> > >>>>>> just pull back that 30 to 10.
> > >>>>>>
> > >>>>>> Largely figuring out how this stuff works now, give me a shout in
> > >>> Slack
> > >>>>>> if anyone else has cycles.
> > >>>>>>
> > >>>>>> On 6/10/19 2:34 PM, Peter Somogyi wrote:
> > >>>>>>> Hi,
> > >>>>>>>
> > >>>>>>> HBase jobs are using more than 400GB based on this list.
> > >>>>>>> Could someone take a look at the job configurations today?
> > >>> Otherwise, I
> > >>>>>>> will look into it tomorrow morning.
> > >>>>>>>
> > >>>>>>> Thanks,
> > >>>>>>> Peter
> > >>>>>>>
> > >>>>>>> ---------- Forwarded message ---------
> > >>>>>>> From: Chris Lambertus <cm...@apache.org>
> > >>>>>>> Date: Mon, Jun 10, 2019 at 7:57 PM
> > >>>>>>> Subject: ACTION REQUIRED: disk space on jenkins master nearly
> full
> > >>>>>>> To: <bu...@apache.org>
> > >>>>>>> Cc: <de...@mesos.apache.org>, <de...@pulsar.apache.org>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> Hello,
> > >>>>>>>
> > >>>>>>> The jenkins master is nearly full.
> > >>>>>>>
> > >>>>>>> The workspaces listed below need significant size reduction
> within
> > 24
> > >>>>>>> hours
> > >>>>>>> or Infra will need to perform some manual pruning of old builds
> to
> > >>>>>>> keep the
> > >>>>>>> jenkins system running. The Mesos “Packaging” job also needs to
> be
> > >>>>>>> corrected to include the project name (mesos-packaging) please.
> > >>>>>>>
> > >>>>>>> It appears that the typical ‘Discard Old Builds’ checkbox in the
> > job
> > >>>>>>> configuration may not be working for multibranch pipeline jobs.
> > >>> Please
> > >>>>>>> refer to these articles for information on discarding builds in
> > >>>>>>> multibranch
> > >>>>>>> jobs:
> > >>>>>>>
> > >>>>>>>
> > >>>>>
> > >>>
> >
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> > >>>>>>>
> > >>>>>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> > >>>>>>>
> > >>>>>
> > >>>
> >
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> NB: I have not fully vetted the above information, I just notice
> > that
> > >>>>>>> many
> > >>>>>>> of these jobs have ‘Discard old builds’ checked, but it is
> clearly
> > >>> not
> > >>>>>>> working.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> If you are unable to reduce your disk usage beyond what is
> listed,
> > >>>>> please
> > >>>>>>> let me know what the reasons are and we’ll see if we can find a
> > >>>>> solution.
> > >>>>>>> If you believe you’ve configured your job properly and the space
> > >>> usage
> > >>>>> is
> > >>>>>>> more than you expect, please comment here and we’ll take a look
> at
> > >>> what
> > >>>>>>> might be going on.
> > >>>>>>>
> > >>>>>>> I cut this list off arbitrarily at 40GB workspaces and larger.
> > There
> > >>> are
> > >>>>>>> many which are between 20 and 30GB which also need to be
> addressed,
> > >>> but
> > >>>>>>> these are the current top contributors to the disk space
> situation.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> 594G    Packaging
> > >>>>>>> 425G    pulsar-website-build
> > >>>>>>> 274G    pulsar-master
> > >>>>>>> 195G    hadoop-multibranch
> > >>>>>>> 173G    HBase Nightly
> > >>>>>>> 138G    HBase-Flaky-Tests
> > >>>>>>> 119G    netbeans-release
> > >>>>>>> 108G    Any23-trunk
> > >>>>>>> 101G    netbeans-linux-experiment
> > >>>>>>> 96G     Jackrabbit-Oak-Windows
> > >>>>>>> 94G     HBase-Find-Flaky-Tests
> > >>>>>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> > >>>>>>> 74G     netbeans-windows
> > >>>>>>> 71G     stanbol-0.12
> > >>>>>>> 68G     Sling
> > >>>>>>> 63G     Atlas-master-NoTests
> > >>>>>>> 48G     FlexJS Framework (maven)
> > >>>>>>> 45G     HBase-PreCommit-GitHub-PR
> > >>>>>>> 42G     pulsar-pull-request
> > >>>>>>> 40G     Atlas-1.0-NoTests
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> Thanks,
> > >>>>>>> Chris
> > >>>>>>> ASF Infra
> > >>>>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>
> > >
> >
>

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by "张铎 (Duo Zhang)" <pa...@gmail.com>.
Does surefire have an option to truncate the test output if it is too
large? Or does Jenkins have an option to truncate or compress files when
archiving?
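For reference, Surefire doesn't truncate output as far as I know, but it can redirect each test's stdout/stderr into per-test files instead of the console log, which keeps the archived console small. A sketch of the relevant plugin configuration (option names from the Surefire docs; values are illustrative, not taken from our poms):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- write each test's output to target/surefire-reports/<test>-output.txt
         instead of dumping it into the console log that Jenkins archives -->
    <redirectTestOutputToFile>true</redirectTestOutputToFile>
    <!-- trim stack traces in the reports down to the relevant frames -->
    <trimStackTrace>true</trimStackTrace>
  </configuration>
</plugin>
```

The per-test files would still need pruning or compression on the Jenkins side, but at least the console log stays small.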

Josh Elser <el...@apache.org> wrote on Tue, Jun 11, 2019 at 8:40 AM:


Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Josh Elser <el...@apache.org>.
Just a cursory glance at some build artifacts showed mostly test output,
which sometimes ran to multiple megabytes.

So everyone else knows, I just chatted with ChrisL in Slack and he 
confirmed that our disk utilization is down already (after HBASE-22563). 
He thanked us for the quick response.

We should keep pulling on this thread now that we're looking at it :)

On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by "张铎 (Duo Zhang)" <pa...@gmail.com>.
Oh, it is the build artifacts, not the jars...

Most of our build artifacts are build logs, but maybe the problem is that
some of the logs get very large when a test hangs...

张铎(Duo Zhang) <pa...@gmail.com> wrote on Tue, Jun 11, 2019 at 8:16 AM:


Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by "张铎 (Duo Zhang)" <pa...@gmail.com>.
For the flaky jobs we just need the commit id in the console output; then
we can build the artifacts locally. +1 on removing artifacts caching.

Josh Elser <el...@apache.org> wrote on Tue, Jun 11, 2019 at 7:50 AM:


Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Josh Elser <el...@apache.org>.
Sure, Misty. No arguments here.

I think that might be a bigger untangling. Maybe Peter or Busbey know 
better about how these could be de-coupled (e.g. I think flakies 
actually look back at old artifacts), but I'm not sure off the top of my 
head. I was just going for a quick fix to keep Infra from doing 
something super-destructive.

For context, I've dropped them a note in Slack to make sure what I'm 
doing is having a positive effect.

On 6/10/19 7:34 PM, Misty Linville wrote:

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Misty Linville <mi...@apache.org>.
Keeping artifacts and keeping build logs are two separate things. I don’t
see a need to keep any artifacts past the most recent green and most recent
red builds. Alternatively, if we need the artifacts, let’s have Jenkins put
them somewhere else rather than keeping them there. You can get back to
whatever hash you need within git to reproduce a build problem.
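One way to get that in multibranch jobs, since the job-level ‘Discard Old Builds’ checkbox apparently doesn’t stick, is to declare the retention policy in the Jenkinsfile itself. A sketch using the Declarative Pipeline options block (the numbers are illustrative, not our current settings):

```groovy
pipeline {
  agent any
  options {
    // every branch job generated from this Jenkinsfile inherits the policy:
    // keep the last 10 build records, but artifacts from only the last 2
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '2'))
  }
  stages {
    stage('build') {
      steps {
        sh 'mvn -B clean verify'
      }
    }
  }
}
```

That way the policy lives in source control and applies to every branch and PR job, instead of depending on per-job UI configuration.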

On Mon, Jun 10, 2019 at 2:26 PM Josh Elser <el...@apache.org> wrote:

> https://issues.apache.org/jira/browse/HBASE-22563 for a quick bandaid (I
> hope).
>
> On 6/10/19 4:31 PM, Josh Elser wrote:
> > Eyes on.
> >
> > Looking at master, we already have the linked configuration, set to
> > retain 30 builds.
> >
> > We have some extra branches which we can lop off (branch-1.2,
> > branch-2.0, maybe some feature branches too). A quick fix might be to
> > just pull back that 30 to 10.
> >
> > Largely figuring out how this stuff works now, give me a shout in Slack
> > if anyone else has cycles.
> >
> > On 6/10/19 2:34 PM, Peter Somogyi wrote:
> >> Hi,
> >>
> >> HBase jobs are using more than 400GB based on this list.
> >> Could someone take a look at the job configurations today? Otherwise, I
> >> will look into it tomorrow morning.
> >>
> >> Thanks,
> >> Peter
> >>
> >> ---------- Forwarded message ---------
> >> From: Chris Lambertus <cm...@apache.org>
> >> Date: Mon, Jun 10, 2019 at 7:57 PM
> >> Subject: ACTION REQUIRED: disk space on jenkins master nearly full
> >> To: <bu...@apache.org>
> >> Cc: <de...@mesos.apache.org>, <de...@pulsar.apache.org>
> >>
> >>
> >> [quoted message trimmed; full text at the top of this thread]
> >>
>

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Josh Elser <el...@apache.org>.
https://issues.apache.org/jira/browse/HBASE-22563 for a quick bandaid (I 
hope).

On 6/10/19 4:31 PM, Josh Elser wrote:
> Eyes on.
> 
> Looking at master, we already have the linked configuration, set to 
> retain 30 builds.
> 
> We have some extra branches which we can lop off (branch-1.2, 
> branch-2.0, maybe some feature branches too). A quick fix might be to 
> just pull back that 30 to 10.
> 
> Largely figuring out how this stuff works now, give me a shout in Slack 
> if anyone else has cycles.
> 
> On 6/10/19 2:34 PM, Peter Somogyi wrote:
>> Hi,
>>
>> HBase jobs are using more than 400GB based on this list.
>> Could someone take a look at the job configurations today? Otherwise, I
>> will look into it tomorrow morning.
>>
>> Thanks,
>> Peter
>>
>> ---------- Forwarded message ---------
>> From: Chris Lambertus <cm...@apache.org>
>> Date: Mon, Jun 10, 2019 at 7:57 PM
>> Subject: ACTION REQUIRED: disk space on jenkins master nearly full
>> To: <bu...@apache.org>
>> Cc: <de...@mesos.apache.org>, <de...@pulsar.apache.org>
>>
>>
>> [quoted message trimmed; full text at the top of this thread]
>>

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Josh Elser <el...@apache.org>.
Eyes on.

Looking at master, we already have the linked configuration, set to 
retain 30 builds.

We have some extra branches which we can lop off (branch-1.2, 
branch-2.0, maybe some feature branches too). A quick fix might be to 
just pull back that 30 to 10.
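Since the job-config checkbox may not apply to multibranch pipelines, the discard policy can instead be set from the Jenkinsfile itself via the `properties` step, which is the approach the CloudBees article linked earlier describes. A sketch of pulling the retention back from 30 to 10 (the artifact count is an assumption):

```groovy
// Scripted-pipeline form; in a multibranch job, each branch applies
// this to itself on its next run.
properties([
    buildDiscarder(logRotator(numToKeepStr: '10',        // pulled back from 30
                              artifactNumToKeepStr: '1'))
])
```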

Largely figuring out how this stuff works now, give me a shout in Slack 
if anyone else has cycles.

On 6/10/19 2:34 PM, Peter Somogyi wrote:
> Hi,
> 
> HBase jobs are using more than 400GB based on this list.
> Could someone take a look at the job configurations today? Otherwise, I
> will look into it tomorrow morning.
> 
> Thanks,
> Peter
> 
> ---------- Forwarded message ---------
> From: Chris Lambertus <cm...@apache.org>
> Date: Mon, Jun 10, 2019 at 7:57 PM
> Subject: ACTION REQUIRED: disk space on jenkins master nearly full
> To: <bu...@apache.org>
> Cc: <de...@mesos.apache.org>, <de...@pulsar.apache.org>
> 
> 
> [quoted message trimmed; full text at the top of this thread]
> 

Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Peter Somogyi <ps...@apache.org>.
Hi,

HBase jobs are using more than 400GB based on this list.
Could someone take a look at the job configurations today? Otherwise, I
will look into it tomorrow morning.

Thanks,
Peter

---------- Forwarded message ---------
From: Chris Lambertus <cm...@apache.org>
Date: Mon, Jun 10, 2019 at 7:57 PM
Subject: ACTION REQUIRED: disk space on jenkins master nearly full
To: <bu...@apache.org>
Cc: <de...@mesos.apache.org>, <de...@pulsar.apache.org>


[forwarded message trimmed; full text at the top of this thread]

Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Robert Munteanu <ro...@apache.org>.
Hi,

(keeping build@apache.org only)

On Mon, 2019-06-10 at 10:57 -0700, Chris Lambertus wrote:
> 96G     Jackrabbit-Oak-Windows

I've enabled build timeouts and limited history to last 10 builds. I've
also wiped out the workspace, for good measure. That should hopefully
keep this build under control.
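For anyone making the same change elsewhere, both of the settings Robert mentions fit in one declarative `options` block; the timeout value below is illustrative, since the actual limit isn't stated above:

```groovy
options {
    // Abort runaway builds instead of letting them pile up output.
    timeout(time: 2, unit: 'HOURS')   // 2h is an assumed value
    // Limit history to the last 10 builds, as described above.
    buildDiscarder(logRotator(numToKeepStr: '10'))
}
```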

Thanks,

Robert


Re: ACTION REQUIRED: disk space on jenkins master nearly full

Posted by Matteo Merli <mm...@apache.org>.
For pulsar-website-build and pulsar-master, the "discard old builds"
option unfortunately wasn't set. I've just enabled it. I'm not sure if
there's a way to quickly trigger a manual cleanup.
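One way to force a cleanup immediately, rather than waiting for the next run, is to call `Job.logRotate()` from the Jenkins script console; it applies a job's configured discard policy on the spot. A sketch, not tested against this instance (the `pulsar-` name filter is a hypothetical example):

```groovy
import jenkins.model.Jenkins
import hudson.model.Job

// Apply each matching job's configured 'discard old builds' policy now.
Jenkins.instance.getAllItems(Job.class)
       .findAll { it.fullName.startsWith('pulsar-') }  // hypothetical filter
       .each { job -> job.logRotate() }
```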

Regarding "pulsar-pull-request": this was an old Jenkins job that is no
longer used (we switched to multiple smaller PR validation jobs a while
ago). I have removed the Jenkins job; hopefully that will take care of
cleaning up all its files.


Thanks,
Matteo

--
Matteo Merli
<mm...@apache.org>

On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <cm...@apache.org> wrote:
>
> [quoted message trimmed; full text at the top of this thread]