Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2017/02/24 14:07:45 UTC
[jira] [Resolved] (HADOOP-12376) S3NInputStream.close() downloads the remaining bytes of the object from S3
[ https://issues.apache.org/jira/browse/HADOOP-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran resolved HADOOP-12376.
-------------------------------------
Resolution: Won't Fix
Closing as a wontfix, as there is a solution: move to S3A. S3A has a lot of logic about when to skip forwards vs. close the stream on a seek, seek optimisation for the different IO policies, metrics on all of this, etc.
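For context, the skip-forwards-vs-close decision mentioned above can be sketched in plain Java. This is an illustrative sketch, not Hadoop source: the class name, method name, and cutoff value are hypothetical (the real behaviour depends on the configured S3A input policy). The idea is that a short forward seek is served by reading and discarding bytes on the already-open connection, while a long hop (or any backwards seek) closes the connection and reopens at the new offset with a ranged GET.

```java
/**
 * Illustrative sketch (not Hadoop source) of an S3A-style forward-seek
 * decision: skip forwards on the open connection for short hops, close
 * and reopen with a ranged GET for long or backwards hops.
 */
public class SeekPolicy {
    // Hypothetical cutoff; the real value depends on the input policy.
    static final long SKIP_LIMIT = 64 * 1024;

    /** True if seeking to targetPos should reuse the stream by skipping. */
    static boolean skipForward(long currentPos, long targetPos) {
        long gap = targetPos - currentPos;
        // Backwards seeks (gap <= 0) always force a reopen; forward seeks
        // skip only when the gap is small enough to be cheaper than a GET.
        return gap > 0 && gap <= SKIP_LIMIT;
    }
}
```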
> S3NInputStream.close() downloads the remaining bytes of the object from S3
> --------------------------------------------------------------------------
>
> Key: HADOOP-12376
> URL: https://issues.apache.org/jira/browse/HADOOP-12376
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.6.0, 2.7.1
> Reporter: Steve Loughran
> Assignee: Ajith S
>
> This is the same as HADOOP-11570; possibly the swift code has the same problem.
> Apparently (as raised on the ASF lists), when you close an s3n input stream, it
> reads through the remainder of the file. This kills performance on partial reads of large files.
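The trap described above, and the shape of the fix, can be sketched in plain Java. This is an illustrative sketch, not Hadoop or AWS SDK source; the class name and threshold are hypothetical. On close(), instead of unconditionally reading to end-of-object, the stream either drains a small remaining tail (so the underlying HTTP connection can be returned to the pool) or aborts the connection outright when too much of the object is left unread.

```java
import java.io.IOException;
import java.io.InputStream;

/**
 * Illustrative sketch (not Hadoop code) of a close() strategy that
 * avoids the S3N bug: never read to end-of-object unconditionally.
 */
public class CloseStrategy {
    // Hypothetical threshold: drain at most this many bytes on close;
    // beyond it, aborting the connection is cheaper than reading on.
    static final long DRAIN_LIMIT = 16 * 1024;

    /** Decide whether close() should abort rather than drain. */
    static boolean shouldAbort(long bytesRemaining) {
        return bytesRemaining > DRAIN_LIMIT;
    }

    /**
     * Close the stream: drain a small tail (letting the HTTP connection
     * be reused), or abort when the unread tail is large.
     */
    static void close(InputStream in, long bytesRemaining, Runnable abort)
            throws IOException {
        if (shouldAbort(bytesRemaining)) {
            abort.run();  // e.g. abort the underlying HTTP request
        } else {
            byte[] buf = new byte[8192];
            while (in.read(buf) >= 0) { /* drain the small remainder */ }
        }
        in.close();
    }
}
```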
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)