Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2022/03/28 18:27:00 UTC

[jira] [Comment Edited] (HADOOP-18028) High performance S3A input stream with prefetching & caching

    [ https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17513468#comment-17513468 ] 

Steve Loughran edited comment on HADOOP-18028 at 3/28/22, 6:26 PM:
-------------------------------------------------------------------

Other issues/changes to consider:
* S3InMemoryInputStream to support direct byte buffers. Not sure of the benefit here, given the HTTP API is all on-heap.
* org.apache.hadoop.fs.common.Validate to invoke o.a.h.util.Preconditions, or better: move the methods there for better adoption.
* Code to use Objects.requireNonNull over Validate when assigning params. This will change the exception class raised on null params, so will break tests; see the first sketch after this list.
* Tune classnames, e.g. S3InputStream -> S3ABufferedStream, S3Reader -> StoreBlockReader, S3File -> OpenS3File. I think we just want to get the S3 prefixes off, as all too often that means an AWS SDK class, not something in our own code.
* prefetchBlockSize to use longBytesOption so that you can set a value like "64M"; see the second sketch below.
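For the Objects.requireNonNull point, a rough sketch of what parameter assignment could look like; the class and field names here are made up for illustration, not taken from the PR. If Validate currently raises IllegalArgumentException on a null argument, switching to requireNonNull changes that to NullPointerException, hence the test breakage.

{code:java}
import java.util.Objects;

// Hypothetical class, purely to show the pattern.
public class BlockCacheConfig {

  private final String bucket;
  private final int blockSize;

  public BlockCacheConfig(String bucket, int blockSize) {
    // Before (roughly): Validate.checkNotNull(bucket, "bucket"); this.bucket = bucket;
    // After: null-check and assign in one step. Throws NullPointerException
    // rather than the exception Validate raises, so tests asserting on the
    // exception class will need updating.
    this.bucket = Objects.requireNonNull(bucket, "bucket");
    this.blockSize = blockSize;
  }
}
{code}

For the prefetch block size, a sketch of reading the option through S3AUtils.longBytesOption(), assuming its usual (conf, key, default, min) signature, so that suffixed values such as "64M" parse; the key and default names below are placeholders, not the ones in the PR.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3AUtils;

public final class PrefetchOptions {

  // Placeholder constants; pick up whatever the PR actually defines.
  public static final String PREFETCH_BLOCK_SIZE_KEY = "fs.s3a.prefetch.block.size";
  public static final long PREFETCH_BLOCK_SIZE_DEFAULT = 8 * 1024 * 1024;

  private PrefetchOptions() {
  }

  /**
   * Read the prefetch block size, accepting suffixed values such as "64M"
   * and enforcing a minimum of one byte.
   */
  public static long getPrefetchBlockSize(Configuration conf) {
    return S3AUtils.longBytesOption(conf,
        PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_SIZE_DEFAULT, 1);
  }
}
{code}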



> High performance S3A input stream with prefetching & caching
> ------------------------------------------------------------
>
>                 Key: HADOOP-18028
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18028
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Bhalchandra Pandit
>            Assignee: Bhalchandra Pandit
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique for vastly improving read throughput when reading from the S3 file system. It not only helps the sequential read case (like reading a SequenceFile) but also significantly improves read throughput of a random access case (like reading Parquet). This technique has been very useful in significantly improving efficiency of the data processing jobs at Pinterest. 
>  
> I would like to contribute that feature to Apache Hadoop. More details on this technique are available in this blog I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org