Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:38:20 UTC

[jira] [Resolved] (SPARK-15619) spark builds filling up /tmp

     [ https://issues.apache.org/jira/browse/SPARK-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-15619.
----------------------------------
    Resolution: Incomplete

> spark builds filling up /tmp
> ----------------------------
>
>                 Key: SPARK-15619
>                 URL: https://issues.apache.org/jira/browse/SPARK-15619
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>            Reporter: shane knapp
>            Priority: Minor
>              Labels: bulk-closed
>
> Spark builds aren't cleaning up /tmp after they run. It's hard to pinpoint EXACTLY what is left there by the Spark builds (other builds are also guilty of this), but a quick perusal of the /tmp directory during some Spark builds shows myriad empty directories being created and a massive pile of shared object libraries being dumped there.
> $ for x in $(cat jenkins_workers.txt ); do echo $x; ssh $x "ls -l /tmp/*.so | wc -l"; done
> amp-jenkins-worker-01
> 0
> ls: cannot access /tmp/*.so: No such file or directory
> amp-jenkins-worker-02
> 22312
> amp-jenkins-worker-03
> 39673
> amp-jenkins-worker-04
> 39548
> amp-jenkins-worker-05
> 39577
> amp-jenkins-worker-06
> 39299
> amp-jenkins-worker-07
> 39315
> amp-jenkins-worker-08
> 38529
> To help combat this, I set up a cron job on each worker that runs tmpwatch during system downtime on Sundays to clean up files older than a week.
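
The weekly cleanup described in the ticket can be sketched as follows. This is an assumption-laden illustration, not the actual Jenkins configuration: the ticket only says the job runs "during system downtime on Sundays" and removes "files older than a week", so the schedule, path, and retention below are guesses. tmpwatch takes an age in hours (7 days = 168), and the find(1) line shows the equivalent selection logic for systems without tmpwatch installed.

```shell
#!/bin/sh
# Hypothetical sketch of the weekly /tmp cleanup (names and schedule are
# assumptions, not taken from the real Jenkins worker config).
# List regular files under TARGET_DIR not modified in the last 7 days --
# the same set of files the cron'd tmpwatch run would remove.
TARGET_DIR="${TARGET_DIR:-/tmp}"
find "$TARGET_DIR" -xdev -type f -mtime +7 -print

# A corresponding /etc/cron.d entry might look like (Sundays at 03:00,
# a guess at "system downtime"):
# 0 3 * * 0 root /usr/sbin/tmpwatch 168 /tmp
```

Note that tmpwatch defaults to access time rather than modification time when deciding what is "old", so the find sketch above is only an approximation of its behavior.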



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org