Posted to common-issues@hadoop.apache.org by "Justin Uang (JIRA)" <ji...@apache.org> on 2019/02/21 16:10:01 UTC

[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

     [ https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Justin Uang updated HADOOP-16132:
---------------------------------
    Description: 
I noticed that I get 150MB/s when I use the AWS CLI
{code:java}
aws s3 cp s3://<bucket>/<key> - > /dev/null{code}
vs 50MB/s when I use the S3AFileSystem
{code:java}
hadoop fs -cat s3://<bucket>/<key> > /dev/null{code}
Looking into the AWS CLI code, the [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py] logic is quite clever: it downloads the next several parts in parallel using range requests, then buffers them in memory to reorder them and expose a single contiguous stream. I translated that logic to Java and modified the S3AFileSystem to do the same thing, and I am able to achieve 150MB/s download speeds as well. It is mostly done, but I have some things to clean up first.
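
For illustration, here is a minimal sketch of the approach in Java. This is not the actual patch: the {{readRange}} helper, the part size, and the unbounded in-flight list are placeholders, and a real implementation would cap the number of in-flight parts to bound buffer memory.
{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRangeDownload {

  private static final long PART_SIZE = 8L * 1024 * 1024; // 8 MB parts; tunable

  // Hypothetical helper standing in for a ranged S3 GET of
  // [start, start + len): in S3A this would be an HTTP request with a
  // "Range: bytes=..." header against the object store.
  static byte[] readRange(String bucket, String key, long start, long len) {
    throw new UnsupportedOperationException("ranged GET goes here");
  }

  // Opens the object as one contiguous stream backed by parallel
  // range requests that are reassembled in order.
  static InputStream open(String bucket, String key, long objectLen,
      int parallelism) {
    ExecutorService pool = Executors.newFixedThreadPool(parallelism);
    final List<Future<byte[]>> parts = new ArrayList<>();
    // Kick off the range requests up front; a production version would
    // limit in-flight parts to bound buffer memory.
    for (long off = 0; off < objectLen; off += PART_SIZE) {
      final long start = off;
      final long len = Math.min(PART_SIZE, objectLen - off);
      parts.add(pool.submit(() -> readRange(bucket, key, start, len)));
    }
    pool.shutdown();
    // SequenceInputStream pulls parts lazily; Future.get() blocks until
    // part i has arrived, so consumption overlaps with download while
    // the bytes are still exposed strictly in order.
    return new SequenceInputStream(new Enumeration<InputStream>() {
      private int i = 0;
      @Override public boolean hasMoreElements() { return i < parts.size(); }
      @Override public InputStream nextElement() {
        try {
          return new ByteArrayInputStream(parts.get(i++).get());
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }
    });
  }
}
{code}
The key property is that parts download in parallel but are handed to the caller strictly in order, so a plain sequential read (e.g. {{hadoop fs -cat}}) gets the parallel throughput without any API change.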

It would be great to get some other eyes on it to see what we need to do to get it merged.


> Support multipart download in S3AFileSystem
> -------------------------------------------
>
>                 Key: HADOOP-16132
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16132
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Justin Uang
>            Priority: Major
>


