Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/02/19 16:50:00 UTC

[jira] [Resolved] (HADOOP-15216) S3AInputStream to handle reconnect on read() failure better

     [ https://issues.apache.org/jira/browse/HADOOP-15216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-15216.
-------------------------------------
    Resolution: Duplicate

> S3AInputStream to handle reconnect on read() failure better
> -----------------------------------------------------------
>
>                 Key: HADOOP-15216
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15216
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Priority: Major
>
> {{S3AInputStream}} handles any IOE by closing the stream and re-invoking the read once, with
> * no backoff
> * no abort of the HTTPS connection, which is simply returned to the pool; if httpclient hasn't noticed the failure, the same broken connection may be handed back to the caller on the next read
> Proposed
> * switch to invoker
> * an explicit retry policy for the stream (EOF => throw, timeout => close, sleep, retry, etc.); a rough sketch of such a loop follows below
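> As a rough illustration of that loop only: the sketch below shows a retry with exponential backoff that aborts the failed connection instead of returning it to the pool. The stream interface, class names and policy values are hypothetical, not the existing S3AInputStream or Invoker code.
> {code:java}
> // Hypothetical sketch only: the stream interface, names and policy values
> // below are illustrative, not the real S3AInputStream/Invoker code.
> import java.io.EOFException;
> import java.io.IOException;
> 
> public class RetryingReader {
> 
>   /** Minimal view of the operations the retry loop needs. */
>   interface ReopenableStream {
>     int read() throws IOException;
>     void abortConnection();                  // drop the broken HTTP connection
>     void reopenAtCurrentPosition() throws IOException;
>   }
> 
>   private static final int MAX_ATTEMPTS = 3;
>   private static final long INITIAL_BACKOFF_MS = 100;
> 
>   /**
>    * Read one byte, retrying transient IOExceptions with exponential backoff.
>    * EOF is terminal; other failures abort the connection (rather than
>    * returning it to the pool) and reopen the stream before retrying.
>    */
>   int readWithRetry(ReopenableStream stream) throws IOException {
>     long backoffMs = INITIAL_BACKOFF_MS;
>     IOException lastFailure = null;
>     for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
>       try {
>         return stream.read();
>       } catch (EOFException e) {
>         throw e;                             // EOF => throw, never retry
>       } catch (IOException e) {
>         lastFailure = e;
>         stream.abortConnection();            // don't return a broken connection to the pool
>         if (attempt < MAX_ATTEMPTS) {
>           sleep(backoffMs);
>           backoffMs *= 2;                    // exponential backoff between attempts
>           stream.reopenAtCurrentPosition();
>         }
>       }
>     }
>     throw lastFailure;
>   }
> 
>   private static void sleep(long millis) {
>     try {
>       Thread.sleep(millis);
>     } catch (InterruptedException e) {
>       Thread.currentThread().interrupt();
>     }
>   }
> }
> {code}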
> We could also think about extending the fault injection to intermittently inject stream read failures, though that would need something in S3AInputStream to (optionally) wrap the HTTP input streams with the failing stream; a possible wrapper is sketched below.
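> Again purely as a sketch with hypothetical names, not anything in the current S3A fault-injection code, such a wrapper could be a FilterInputStream that throws on a random subset of reads:
> {code:java}
> import java.io.FilterInputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.util.Random;
> 
> /** Hypothetical wrapper that fails a configurable fraction of reads. */
> public class FaultInjectingInputStream extends FilterInputStream {
> 
>   private final double failureProbability;
>   private final Random random = new Random();
> 
>   public FaultInjectingInputStream(InputStream in, double failureProbability) {
>     super(in);
>     this.failureProbability = failureProbability;
>   }
> 
>   @Override
>   public int read() throws IOException {
>     maybeFail();
>     return super.read();
>   }
> 
>   @Override
>   public int read(byte[] b, int off, int len) throws IOException {
>     maybeFail();
>     return super.read(b, off, len);
>   }
> 
>   /** Throw an IOException on a randomly chosen subset of read calls. */
>   private void maybeFail() throws IOException {
>     if (random.nextDouble() < failureProbability) {
>       throw new IOException("injected read failure");
>     }
>   }
> }
> {code}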



