Posted to issues@beam.apache.org by "Pawel Bartoszek (JIRA)" <ji...@apache.org> on 2018/11/09 14:15:00 UTC

[jira] [Created] (BEAM-6031) Add retry logic to S3FileSystem

Pawel Bartoszek created BEAM-6031:
-------------------------------------

             Summary: Add retry logic to S3FileSystem 
                 Key: BEAM-6031
                 URL: https://issues.apache.org/jira/browse/BEAM-6031
             Project: Beam
          Issue Type: Bug
          Components: io-java-aws
    Affects Versions: 2.8.0, 2.7.0
            Reporter: Pawel Bartoszek
            Assignee: Ismaël Mejía


S3FileSystem should have some retry behaviour if an ObjectsDelete request fails. We have seen a case in our job where one item in a delete batch could not be deleted due to an S3 InternalError, which caused the whole job to restart. The source code I am referring to:

[https://github.com/apache/beam/blob/8a88e72f293ef7f9be6c872aa0dda681458c7ca5/sdks/java/io/amazon-web-services/src/main/java/org/apache/beam/sdk/io/aws/s3/S3FileSystem.java#L633]


Retry logic might be worth adding to the other S3 calls in S3FileSystem as well.
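To illustrate the kind of change being requested, below is a minimal sketch of a generic retry wrapper that S3 calls such as deleteObjects could be routed through. This is not Beam's actual implementation; the helper name, attempt count, and backoff values are all illustrative, and the S3 call is stood in by a plain Callable so the sketch is self-contained.

```java
import java.util.concurrent.Callable;

/** Illustrative retry helper of the kind S3FileSystem could wrap its
 *  S3 calls in. Names and parameters are assumptions, not Beam APIs. */
public class RetrySketch {

  /** Runs the call, retrying up to maxAttempts times with exponential
   *  backoff; rethrows the last failure if every attempt fails. */
  static <T> T callWithRetries(Callable<T> call, int maxAttempts, long initialBackoffMillis)
      throws Exception {
    Exception lastFailure = null;
    long backoff = initialBackoffMillis;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.call();
      } catch (Exception e) {
        lastFailure = e;
        if (attempt < maxAttempts) {
          Thread.sleep(backoff);
          backoff *= 2; // double the wait between attempts
        }
      }
    }
    throw lastFailure;
  }

  public static void main(String[] args) throws Exception {
    // Simulate a deleteObjects call that fails twice with a transient
    // "InternalError" before succeeding on the third attempt.
    int[] calls = {0};
    String result =
        callWithRetries(
            () -> {
              calls[0]++;
              if (calls[0] < 3) {
                throw new RuntimeException("S3 InternalError (transient)");
              }
              return "deleted";
            },
            5,   // maxAttempts (illustrative)
            1);  // initialBackoffMillis (illustrative)
    System.out.println(result + " after " + calls[0] + " attempts");
  }
}
```

In a real fix, the retry would likely only cover errors S3 documents as retryable (such as InternalError or SlowDown) rather than every exception, and a partial batch failure from deleteObjects would retry only the failed keys.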



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)