Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2023/06/09 13:08:00 UTC

[jira] [Updated] (HADOOP-17386) fs.s3a.buffer.dir to be under Yarn container path on yarn applications

     [ https://issues.apache.org/jira/browse/HADOOP-17386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-17386:
------------------------------------
    Fix Version/s: 3.3.9

> fs.s3a.buffer.dir to be under Yarn container path on yarn applications
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-17386
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17386
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Steve Loughran
>            Assignee: Monthon Klongklaew
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.9
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> # fs.s3a.buffer.dir defaults to hadoop.tmp.dir, which is /tmp or similar
> # we use this for storing file blocks during upload
> # staging committers use it for all the files in a task, which can be a lot more
> # a lot of systems don't clean up /tmp until reboot, so if they stay up for a long time they accrue files written through the s3a staging committer by spark containers which fail
> Fix: use ${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a as the default, so that env.LOCAL_DIRS, when set, takes precedence over hadoop.tmp.dir. YARN-deployed apps will then buffer under their container's local directories; when the app container is destroyed, so is the directory. See the sketch below.
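> As a rough illustration only (the description text here is mine, not the shipped wording), the new default would look something like this in core-default.xml or a core-site.xml override, relying on Hadoop Configuration's ${env.VAR:-fallback} expansion:
>
>   <property>
>     <name>fs.s3a.buffer.dir</name>
>     <value>${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a</value>
>     <description>Local directory used for buffering S3A block uploads and
>       staging committer files. Resolves to the YARN container's LOCAL_DIRS
>       when that environment variable is set, otherwise falls back to
>       hadoop.tmp.dir.</description>
>   </property>
>
> Anyone who has explicitly set fs.s3a.buffer.dir keeps their own value; the change only affects the default.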



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
