Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/05/15 14:45:00 UTC

[jira] [Created] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

Steve Loughran created HADOOP-15469:
---------------------------------------

             Summary: S3A directory committer commit job fails if _temporary directory created under dest
                 Key: HADOOP-15469
                 URL: https://issues.apache.org/jira/browse/HADOOP-15469
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.1.0
         Environment: spark test runs
            Reporter: Steve Loughran
            Assignee: Steve Loughran


The directory staging committer fails in commitJob() if any temporary files or directories have been created under the destination. Spark jobs can create such a directory when staging output destined for absolute paths.
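
To make the failure mode concrete, here is a minimal sketch of the triggering layout using only the Hadoop FileSystem API; the bucket/path names and the standalone driver class are illustrative placeholders, not taken from the Spark or committer code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReproduceLayout {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical job destination; s3a://bucket/output stands in for the real dest.
    Path dest = new Path("s3a://bucket/output");
    FileSystem fs = dest.getFileSystem(conf);

    // Spark's absolute-path output support creates a _temporary dir under dest
    // before commitJob() runs; after this, the destination directory "exists"
    // even though it holds no visible output files.
    fs.mkdirs(new Path(dest, "_temporary/0"));

    // The directory staging committer's commitJob() then sees an existing
    // destination and fails the job.
  }
}
{code}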

This is because commitJob() checks whether the dest dir exists at all, rather than whether it contains any non-hidden files.
As the comment says, "its kind of superfluous". More specifically, it means jobs which would commit successfully with the classic committer & overwrite=false will fail here.
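
For illustration, a check on non-hidden children rather than bare existence might look like the sketch below; the class and method names are hypothetical, and the only convention relied on is the MapReduce one of treating names starting with "_" or "." as hidden:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

/** Illustrative helper, not the committer's actual code. */
public final class DestChecks {

  /** Visible by MapReduce convention: name does not start with "_" or ".". */
  private static final PathFilter VISIBLE = path -> {
    String name = path.getName();
    return !name.startsWith("_") && !name.startsWith(".");
  };

  /**
   * The current check is effectively fs.exists(dest); this variant only
   * rejects a destination which already holds visible output, so a dest
   * containing nothing but _temporary would pass.
   */
  public static boolean containsVisibleFiles(FileSystem fs, Path dest)
      throws IOException {
    if (!fs.exists(dest)) {
      return false;               // nothing there at all
    }
    FileStatus[] children = fs.listStatus(dest, VISIBLE);
    return children.length > 0;   // only non-hidden entries count
  }
}
{code}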

Proposed fix: remove the check



