Posted to dev@tomee.apache.org by Richard Zowalla <rz...@apache.org> on 2022/04/20 07:37:21 UTC

Fwd: ci-builds all 3.6TB disk is full!

Hi,

It seems we are the "top" consumer, with 1.6TB of disk usage on the CI
infrastructure.

I looked at some of our jobs and found that some of them have no
retention policy in place. For newly created jobs, I added a policy
similar to what we had in the past. It looks like the retention policy
is not copied when cloning jobs.
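
For pipeline jobs, such a policy boils down to a buildDiscarder option
in the Jenkinsfile. A minimal sketch, with illustrative numbers rather
than our exact settings:

// Minimal sketch of a build/artifact retention policy in a declarative
// Jenkinsfile. The numbers here are illustrative, not our actual values.
pipeline {
    agent any
    options {
        // Keep at most 5 build records; keep artifacts from the last build only.
        buildDiscarder(logRotator(numToKeepStr: '5', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('build') {
            steps {
                sh 'mvn -B clean install' // placeholder for the real build step
            }
        }
    }
}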

In addition, I asked Gavin if he can provide a "du" listing for our
jobs so we can dig into this issue further.
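
Something like the following should give a per-job breakdown, sorted
largest first (the jobs path under JENKINS_HOME is an assumption about
the layout on ci-builds):

# Per-job disk usage for the TomEE folder, largest first.
# The path below is an assumption; adjust to the controller's real layout.
du -sh "$JENKINS_HOME"/jobs/TomEE/jobs/* | sort -rh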

Regards
Richard

-------- Original Message --------
From: Gavin McDonald <gm...@apache.org>
Reply-To: builds@apache.org, gmcdonald@apache.org
To: builds <bu...@apache.org>
Subject: ci-builds all 3.6TB disk is full!
Date: Wed, 20 Apr 2022 09:27:28 +0200

> Hi All,
> 
> Seems we need to do another cull of projects storing way too much
> data.
> 
> Below is every project above 1GB. Just FYI: 1GB is fine, and likely
> 50GB is fine, but above that it's just too much. I will be removing
> 1TB of data from wherever I can get it.
> 
> Please look after your jobs, and your fellow projects, by limiting
> what you keep.
> 
> 1.6T    Tomee
> 451G    Kafka
> 303G    james
> 176G    carbondata
> 129G    Jackrabbit
> 71G     Brooklyn
> 64G     Sling
> 64G     Netbeans
> 60G     Ranger
> 38G     AsterixDB
> 33G     OODT
> 29G     Tika
> 27G     Syncope
> 24G     Atlas
> 20G     IoTDB
> 18G     CXF
> 16G     POI
> 11G     Solr
> 11G     Mesos
> 8.7G    Royale
> 7.8G    Lucene
> 7.6G    MyFaces
> 7.6G    Directory
> 6.4G    OpenJPA
> 6.0G    ManifoldCF
> 5.9G    ActiveMQ
> 5.7G    Logging
> 5.6G    Archiva
> 5.5G    UIMA
> 5.3G    ctakes
> 4.7G    Heron
> 4.6G    Jena
> 4.0G    OpenOffice
> 3.8G    Cloudstack
> 3.4G    Shiro
> 2.5G    Qpid
> 2.1G    JSPWiki
> 2.1G    JMeter
> 2.0G    JClouds
> 1.8G    Santuario
> 1.8G    OpenMeetings
> 1.8G    Camel
> 1.7G    Karaf
> 1.7G    HttpComponents
> 1.7G    Ant
> 1.5G    Tapestry
> 1.5G    Commons
> 1.3G    DeltaSpike
> 1.2G    Rya
> 1.2G    Aries
> 1.2G    Accumulo
> 1.1G    PDFBox
> 
> -- 
> 
> *Gavin McDonald*
> Systems Administrator
> ASF Infrastructure Team


Re: ci-builds all 3.6TB disk is full!

Posted by Richard Zowalla <rz...@apache.org>.
Thanks for the listing. Retention policies are now in place for our
daily deploy jobs, so we shouldn't produce such a huge amount of data
over time anymore.

Once the retention kicks in, we should be fine.
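
One way to double-check that no job slipped through is the script
console. A minimal sketch, assuming our jobs live under a "TomEE/"
folder on the controller:

// Script console sketch: list every job under the TomEE folder that
// still has no build discarder configured. The 'TomEE/' prefix is an
// assumption about how the jobs are organised.
Jenkins.instance.getAllItems(hudson.model.Job.class)
    .findAll { it.fullName.startsWith('TomEE/') && it.buildDiscarder == null }
    .each { println it.fullName }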

Regards
Richard


On Wed, 20 Apr 2022 at 09:41 +0200, Gavin McDonald wrote:
> [quoted reply trimmed; Gavin's message is archived in full below]

Re: ci-builds all 3.6TB disk is full!

Posted by Gavin McDonald <gm...@apache.org>.
Thanks Richard,

On Wed, Apr 20, 2022 at 9:37 AM Richard Zowalla <rz...@apache.org> wrote:

> Hi,
>
> It seems we are the "top" consumer, with 1.6TB of disk usage on the CI
> infrastructure.
>
> I looked at some of our jobs and found that some of them have no
> retention policy in place. For newly created jobs, I added a policy
> similar to what we had in the past. It looks like the retention policy
> is not copied when cloning jobs.
>
> In addition, I asked Gavin if he can provide a "du" listing for our
> jobs so we can dig into this issue further.
>

Here is your listing:

834G master-deploy

of which:

445G org.apache.tomee$apache-tomee
111G org.apache.tomee$tomee-embedded
60G org.apache.tomee$openejb-standalone
44G org.apache.tomee$tomee-plume-webapp
39G org.apache.tomee$tomee-plus-webapp
36G org.apache.tomee$tomee-microprofile-webapp
26G org.apache.tomee$tomee-webapp
25G org.apache.tomee$openejb-lite
5.4G org.apache.tomee$tomee-webaccess
5.1G org.apache.tomee$taglibs-shade
4.4G org.apache.tomee$openejb-provisionning
4.0G org.apache.tomee$openejb-itests-app
3.4G org.apache.tomee$openejb-ssh
3.3G org.apache.tomee$arquillian-tomee-moviefun-example
3.1G org.apache.tomee$cxf-shade
2.3G org.apache.tomee$openejb-core

597G jakarta-deploy

of which:

354G org.apache.tomee$apache-tomee
73G org.apache.tomee$tomee-plume-webapp
66G org.apache.tomee$tomee-plus-webapp
60G org.apache.tomee$tomee-microprofile-webapp
42G org.apache.tomee$tomee-webprofile-webapp
4.3G org.apache.tomee$jakartaee-api
65M org.apache.tomee$tomee-project
44M org.apache.tomee$tomee
38M org.apache.tomee$transform
20M org.apache.tomee.jakarta$apache-tomee
7.2M org.apache.tomee.bom$jaxb-runtime
6.5M org.apache.tomee.bom$boms

63G tomee-8.x-deploy
25G jakarta-wip-9.x-deploy
23G master-build-full
7.0G site-publish
4.2G tomee-8.x-sanity-checks
3.1G tomee-7.0.x
2.5G master-sanity-checks
2.1G pull-request
2.0G tomee-8.x-build-full
2.0G TOMEE-3872
1.7G master-pull-request
1.1G tomee-8.x-owasp-check
1.1G master-owasp-check
1.1G master-build-quick
945M tomee-8.x-build-quick
35M tomee-jakartaee-api-master
27M tomee-patch-plugin-deploy
428K clean-repo
256K tomee-jenkins-pipelines


> Regards
> Richard
>
> [quoted original message trimmed; see the first message in this thread]

-- 

*Gavin McDonald*
Systems Administrator
ASF Infrastructure Team