Posted to common-issues@hadoop.apache.org by "Sneha Vijayarajan (Jira)" <ji...@apache.org> on 2020/10/08 09:04:00 UTC

[jira] [Comment Edited] (HADOOP-17296) ABFS: Allow Random Reads to be of Buffer Size

    [ https://issues.apache.org/jira/browse/HADOOP-17296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209897#comment-17209897 ] 

Sneha Vijayarajan edited comment on HADOOP-17296 at 10/8/20, 9:03 AM:
----------------------------------------------------------------------

[~mukund-thakur] - 

Readahead.range will provide a static increase on top of whatever size the requested read is, which makes the read issued to the store differ in size from the request. 
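
To make the contrast concrete, a minimal sketch of how the two approaches size a store read (names and logic here are illustrative only, not the actual AbfsInputStream code):

    // Illustrative comparison of store-read sizing; hypothetical names,
    // not the real AbfsInputStream implementation.
    public class ReadSizingSketch {

        // readahead.range: a fixed extra amount on top of the caller's
        // request, so the store read still varies with the request size.
        static long withReadaheadRange(long requestedLen, long range) {
            return requestedLen + range;
        }

        // The config in this Jira: always fetch a full buffer instead.
        static long withFullBufferRead(long requestedLen, long bufferSize) {
            return Math.max(requestedLen, bufferSize);
        }

        public static void main(String[] args) {
            long kb = 1024, mb = 1024 * kb;
            System.out.println(withReadaheadRange(64 * kb, 64 * kb));  // 131072 (128 KB)
            System.out.println(withFullBufferRead(64 * kb, 4 * mb));   // 4194304 (4 MB)
        }
    }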

The specific case mentioned in the description was a pattern observed for a Parquet file with a very small row group size, which I gather isn't an optimal structure for a Parquet file. The Gen1 job run was more performant because it was reading a full buffer, and a buffer-sized read ended up covering more row groups.

Gen2's random read logic ended up triggering more IOPs because, when the pattern is random, it reads only the requested bytes. To confirm that it was the randomness of the read pattern that led to the high job runtime and extra IOPs, forcing Gen2 to read a full buffer like Gen1 helped. 
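
For illustration, with hypothetical numbers: say the row groups are 256 KB and the buffer is 4 MB. Reading only the requested bytes costs roughly one store call per row group, while a single full-buffer read covers around 16 adjacent row groups in one call.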

But reading a full buffer for every random read is definitely not ideal, especially for a blocking read call from the application. Hence the configs that enforce a full buffer read will be set to false by default. We get similar asks for comparisons between Gen1 and Gen2 on the same workloads, and the configs in this Jira will give a Gen1 customer migrating to Gen2 the same overall I/O pattern as Gen1, and hence the same perf characteristics.
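
For a Gen1 customer opting in, a minimal sketch of the relevant settings; fs.azure.read.alwaysReadBufferSize is the switch proposed in this Jira and fs.azure.read.request.size is the existing ABFS read buffer size (4 MB default), so verify both names against the committed docs:

    import org.apache.hadoop.conf.Configuration;

    public class Gen1LikeRandomReads {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Force random reads to fetch a full buffer, matching Gen1.
            conf.setBoolean("fs.azure.read.alwaysReadBufferSize", true);
            // Existing ABFS read buffer size; 4 MB is the default.
            conf.setLong("fs.azure.read.request.size", 4 * 1024 * 1024);
        }
    }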

Readahead.range, which reads a consistent amount of data ahead on top of the varying requested read size, is definitely a better solution for performant random reads on Gen2, and we should pursue that. And this change won't override anything once the range update lands. 

 

 



> ABFS: Allow Random Reads to be of Buffer Size
> ---------------------------------------------
>
>                 Key: HADOOP-17296
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17296
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>              Labels: abfsactive
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are requested when the read pattern is random. 
> It was observed in some Spark jobs that though the reads are random, the next read doesn't skip far ahead and could have been served by the earlier read, had that read been done at buffer size. As a result the job triggered a higher count of read calls, which led to a higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, the jobs fared well. 
> In this Jira we provide a config to control whether a random read is of requested size or buffer size.


