Posted to issues@ozone.apache.org by "Wei-Chiu Chuang (Jira)" <ji...@apache.org> on 2022/09/06 19:23:00 UTC
[jira] [Commented] (HDDS-3748) Consider reusing bytebuffer in FilePerBlockStrategy::readChunk
[ https://issues.apache.org/jira/browse/HDDS-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600952#comment-17600952 ]
Wei-Chiu Chuang commented on HDDS-3748:
---------------------------------------
The stacktrace has changed quite a bit, but I believe HDDS-7117 identified the same issue and [~szetszwo] is solving it using a different approach.
> Consider reusing bytebuffer in FilePerBlockStrategy::readChunk
> --------------------------------------------------------------
>
> Key: HDDS-3748
> URL: https://issues.apache.org/jira/browse/HDDS-3748
> Project: Apache Ozone
> Issue Type: Improvement
> Reporter: Rajesh Balamohan
> Priority: Major
> Labels: performance
> Attachments: Screenshot 2020-06-08 at 10.18.03 AM.png
>
>
> [https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/FilePerBlockStrategy.java#L148]
>
> {code:java}
> long len = info.getLen();
> long offset = info.getOffset();
> ByteBuffer data = ByteBuffer.allocate((int) len);
> ChunkUtils.readData(chunkFile, data, offset, len, volumeIOStats);
> {code}
>
> Instead of allocating a new buffer on every readChunk call, it may be possible to reuse one via a thread-local ByteBuffer; sizes/limits could be adjusted on an as-needed basis.
> This is to reduce memory pressure on the DataNode (DN).
> !Screenshot 2020-06-08 at 10.18.03 AM.png|width=1022,height=668!
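A minimal sketch of the thread-local reuse idea described above. This is not the actual Ozone patch; the class and method names (ChunkBufferPool, get) and the 4 MB default capacity are illustrative assumptions, not from the codebase:

{code:java}
import java.nio.ByteBuffer;

public class ChunkBufferPool {
  // Hypothetical default capacity; a real implementation would likely
  // derive this from the configured chunk size.
  private static final int DEFAULT_CAPACITY = 4 * 1024 * 1024;

  // One buffer per reader thread, allocated lazily on first use.
  private static final ThreadLocal<ByteBuffer> BUFFER =
      ThreadLocal.withInitial(() -> ByteBuffer.allocate(DEFAULT_CAPACITY));

  // Returns this thread's buffer, cleared and limited to 'len' bytes.
  // Grows (and caches) a larger buffer only when the request exceeds
  // the current capacity, so the common case allocates nothing.
  static ByteBuffer get(int len) {
    ByteBuffer buf = BUFFER.get();
    if (buf.capacity() < len) {
      buf = ByteBuffer.allocate(len);
      BUFFER.set(buf);
    }
    buf.clear();
    buf.limit(len);
    return buf;
  }
}
{code}

With such a pool, the readChunk snippet above would call ChunkBufferPool.get((int) len) instead of ByteBuffer.allocate((int) len). The caveat is that the buffer must not escape the request (e.g. be handed to an async response path) while the thread moves on to the next read, which is likely why HDDS-7117 takes a different approach.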
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org