Posted to common-issues@hadoop.apache.org by "Daniel Carl Jones (Jira)" <ji...@apache.org> on 2022/06/23 15:39:00 UTC
[jira] [Commented] (HADOOP-18246) Remove lower limit on s3a prefetching/caching block size
[ https://issues.apache.org/jira/browse/HADOOP-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17558139#comment-17558139 ]
Daniel Carl Jones commented on HADOOP-18246:
--------------------------------------------
I think it'd be reasonable to allow a block size as small as 1 byte - that may not be performant or safe for production, but it is up to the user's discretion.
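To make the proposal concrete, here is a minimal sketch of the validation pattern behind a call like intOption(conf, key, defVal, min) - this is not the actual S3AUtils implementation, just an illustration under the assumption that values below the minimum are rejected with an exception. The class and method names are hypothetical; only PREFETCH_BLOCK_DEFAULT_SIZE and the 8MB floor come from the issue.

```java
// Hypothetical sketch (not Hadoop source): illustrates an intOption-style
// lower-bound check and how lowering the floor to 1 byte would behave.
public class PrefetchBlockSizeSketch {
    // 8MB, the current default and current minimum per the issue.
    static final int PREFETCH_BLOCK_DEFAULT_SIZE = 8 * 1024 * 1024;

    // Returns the configured value (or the default if unset, modeled here
    // as <= 0), rejecting anything below 'min'.
    static int intOption(int configuredValue, int defaultValue, int min) {
        int v = (configuredValue > 0) ? configuredValue : defaultValue;
        if (v < min) {
            throw new IllegalArgumentException(
                "Value of prefetch block size " + v + " is below minimum " + min);
        }
        return v;
    }

    public static void main(String[] args) {
        // With the current 8MB floor, a 1MB block size is rejected.
        boolean rejected = false;
        try {
            intOption(1024 * 1024, PREFETCH_BLOCK_DEFAULT_SIZE,
                PREFETCH_BLOCK_DEFAULT_SIZE);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println("1MB rejected under 8MB floor: " + rejected);

        // With the floor lowered to 1 byte, the same value is accepted.
        int v = intOption(1024 * 1024, PREFETCH_BLOCK_DEFAULT_SIZE, 1);
        System.out.println("1MB accepted under 1-byte floor: " + v);
    }
}
```

Under this reading, removing the lower limit is just a matter of passing a smaller minimum (e.g. 1) instead of PREFETCH_BLOCK_DEFAULT_SIZE at the call site.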
> Remove lower limit on s3a prefetching/caching block size
> --------------------------------------------------------
>
> Key: HADOOP-18246
> URL: https://issues.apache.org/jira/browse/HADOOP-18246
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Daniel Carl Jones
> Assignee: Daniel Carl Jones
> Priority: Minor
>
> The minimum allowed block size currently is {{PREFETCH_BLOCK_DEFAULT_SIZE}} (8MB).
> {code:java}
> this.prefetchBlockSize = intOption(
>     conf, PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);{code}
> [https://github.com/apache/hadoop/blob/3aa03e0eb95bbcb066144706e06509f0e0549196/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487-L488]
> Why is this the case and should we lower or remove it?
--
This message was sent by Atlassian Jira
(v8.20.7#820007)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org