Posted to common-issues@hadoop.apache.org by "Yi Liu (JIRA)" <ji...@apache.org> on 2014/11/26 14:51:14 UTC

[jira] [Comment Edited] (HADOOP-11339) Reuse buffer for Hadoop RPC

    [ https://issues.apache.org/jira/browse/HADOOP-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14226074#comment-14226074 ] 

Yi Liu edited comment on HADOOP-11339 at 11/26/14 1:50 PM:
-----------------------------------------------------------

For the buffer, the rpc data size varies from call to call, so it is hard to reuse a single fixed-size byte array.
In my initial patch, I write into a chunked byte array: when a new rpc message is larger than the current capacity, we allocate an additional chunk. The chunked byte array itself is shared (reused) across calls.

Thoughts?
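The reuse scheme above could be sketched roughly as follows. This is an illustrative sketch only, not code from the patch; the class name ChunkedBuffer, the CHUNK_SIZE constant, and all method names are hypothetical. The idea it demonstrates is the one described: capacity grows one chunk at a time only when a larger message arrives, and reset() keeps the chunks so later rpcs reuse them instead of allocating fresh heap buffers.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a reusable chunked byte buffer (names are illustrative).
public class ChunkedBuffer {
    private static final int CHUNK_SIZE = 64 * 1024; // assumed chunk size
    private final List<byte[]> chunks = new ArrayList<>();
    private int size; // bytes written for the current rpc

    // Allocate additional chunks only when 'needed' exceeds current capacity.
    public void ensureCapacity(int needed) {
        while (chunks.size() * CHUNK_SIZE < needed) {
            chunks.add(new byte[CHUNK_SIZE]);
        }
    }

    // Copy rpc bytes into the chunked storage, spanning chunk boundaries.
    public void write(byte[] src, int off, int len) {
        ensureCapacity(size + len);
        int remaining = len;
        int srcPos = off;
        while (remaining > 0) {
            byte[] chunk = chunks.get(size / CHUNK_SIZE);
            int chunkOff = size % CHUNK_SIZE;
            int n = Math.min(CHUNK_SIZE - chunkOff, remaining);
            System.arraycopy(src, srcPos, chunk, chunkOff, n);
            srcPos += n;
            size += n;
            remaining -= n;
        }
    }

    // Reset for the next rpc WITHOUT freeing chunks, so allocation is amortized.
    public void reset() {
        size = 0;
    }

    public int size() { return size; }
    public int capacity() { return chunks.size() * CHUNK_SIZE; }
}
```

With this shape, a connection handler would call reset() between rpcs; the backing chunks survive, so only a message larger than anything seen before causes allocation.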


was (Author: hitliuyi):
For the buffer, the rpc data size varies from call to call, so it is hard to reuse a single fixed-size byte array.
In my initial patch, I write into a chunked byte array: when a new rpc message is larger than the current capacity, we allocate an additional chunk.

Thoughts?

> Reuse buffer for Hadoop RPC
> ---------------------------
>
>                 Key: HADOOP-11339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11339
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc, performance
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>
> For Hadoop RPCs, we will try to reuse the available connections.
> But when we process each rpc on the same connection, we allocate a fresh heap byte buffer to store the rpc bytes data. The rpc message may be very large, e.g., a datanode block report.
> This can trigger a full GC, as discussed in HDFS-7435.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)