Posted to common-issues@hadoop.apache.org by "Viraj Jasani (Jira)" <ji...@apache.org> on 2023/05/10 23:47:00 UTC

[jira] [Commented] (HADOOP-18291) SingleFilePerBlockCache does not have a limit

    [ https://issues.apache.org/jira/browse/HADOOP-18291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17721569#comment-17721569 ] 

Viraj Jasani commented on HADOOP-18291:
---------------------------------------

{quote}you'd maybe want a block cache - readers would lock their block before a read; unlock after. Use an LRU policy for recycling blocks, with unbuffer/close releasing all blocks of a caller.
{quote}
If jobs using S3A prefetching get aborted without calling s3afs#close, and the prefetched block files are kept on EBS volumes that could be accessed again by a new VM instance or container that resumes the jobs, we might also want to consider deleting all old local block files as part of s3afs#initialize.
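A minimal sketch of what that initialize-time cleanup could look like; the cache directory location and the fs-cache-* file name pattern are assumptions for illustration, not the actual constants used by the prefetching code:

{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class StaleBlockFileCleaner {

  /**
   * Best-effort deletion of block files left behind by a previous process
   * that was aborted before s3afs#close could run. Intended to be invoked
   * from s3afs#initialize.
   */
  public static void deleteStaleBlockFiles(Path cacheDir) throws IOException {
    if (!Files.isDirectory(cacheDir)) {
      return;
    }
    // "fs-cache-*" is an assumed naming pattern for prefetched block files.
    try (DirectoryStream<Path> entries =
        Files.newDirectoryStream(cacheDir, "fs-cache-*")) {
      for (Path blockFile : entries) {
        try {
          Files.deleteIfExists(blockFile);
        } catch (IOException e) {
          // A single undeletable file should not fail initialize; log and move on.
        }
      }
    }
  }

  public static void main(String[] args) throws IOException {
    // Example: clean whatever directory the cache writes to, here java.io.tmpdir.
    deleteStaleBlockFiles(Paths.get(System.getProperty("java.io.tmpdir")));
  }
}
{code}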

> SingleFilePerBlockCache does not have a limit
> ---------------------------------------------
>
>                 Key: HADOOP-18291
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18291
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Priority: Major
>
> Currently there is no limit on the size of the disk cache. This means we could end up with a large number of files on disk, especially for access patterns that are very random and do not always read the block fully. 
>  
> e.g.:
> in.seek(5);
> in.read();
> in.seek(blockSize + 10); // block 0 gets saved to disk as it's not fully read
> in.read();
> in.seek(2 * blockSize + 10); // block 1 gets saved to disk
> ... and so on
>  
> The in-memory cache is bounded and by default has a limit of 72MB (9 blocks). When a block is fully read and a seek is issued, it's released [here|https://github.com/apache/hadoop/blob/feature-HADOOP-18028-s3a-prefetch/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3CachingInputStream.java#L109]. We can also delete the on-disk file for the block at that point, if it exists. 
>  
> Also, maybe add an upper limit on disk space and, when this limit is reached, delete the file which stores the data of the block furthest from the current block (similar to the in-memory cache); a rough sketch follows below. 
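
A rough sketch of the distance-based eviction suggested above, with the block's on-disk file deleted on eviction; all class and method names here are hypothetical and not part of the actual SingleFilePerBlockCache API:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

public final class BoundedDiskBlockCache {

  private final long maxBytes;                        // upper limit on disk usage
  private final Map<Integer, Path> blockFiles = new HashMap<>();
  private long usedBytes;

  public BoundedDiskBlockCache(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  /** Register a newly cached block file, evicting far-away blocks if over budget. */
  public synchronized void put(int blockNumber, Path file, int currentBlock)
      throws IOException {
    blockFiles.put(blockNumber, file);
    usedBytes += Files.size(file);
    while (usedBytes > maxBytes && blockFiles.size() > 1) {
      evictFurthestFrom(currentBlock);
    }
  }

  /** Evict the cached block whose index is furthest from the current block. */
  private void evictFurthestFrom(int currentBlock) throws IOException {
    int victim = blockFiles.keySet().stream()
        .max(Comparator.comparingInt(b -> Math.abs(b - currentBlock)))
        .orElseThrow(IllegalStateException::new);
    Path file = blockFiles.remove(victim);
    usedBytes -= Files.size(file);
    Files.deleteIfExists(file);
  }
}
{code}

Here, put would be called wherever the prefetcher finishes writing a block file; passing the current block lets eviction keep blocks near the read position, mirroring the in-memory cache's policy.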



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org