Posted to common-issues@hadoop.apache.org by "Suresh Srinivas (JIRA)" <ji...@apache.org> on 2013/06/29 00:01:20 UTC
[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable
[ https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695847#comment-13695847 ]
Suresh Srinivas commented on HADOOP-9676:
-----------------------------------------
+1 for the patch.
Could you move all the dataLength checks into a static method and add a test for it?
> make maximum RPC buffer size configurable
> -----------------------------------------
>
> Key: HADOOP-9676
> URL: https://issues.apache.org/jira/browse/HADOOP-9676
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: HADOOP-9676.001.patch
>
>
> Currently the RPC server just allocates however much memory the client asks for, without validating. It would be nice to make the maximum RPC buffer size configurable. This would prevent a rogue client from bringing down the NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers. It would also make it easier to debug issues with super-large RPCs or malformed headers, since OOMs can be difficult for developers to reproduce.
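A minimal sketch of the kind of check being discussed: validating the client-supplied dataLength against a configurable cap before allocating any buffer, factored into a static method as suggested above. The class name, method name, and config key shown here are illustrative assumptions, not taken verbatim from the attached patch.

```java
// Sketch (hypothetical names): reject bad RPC data lengths before allocation,
// so a rogue client cannot force the server to allocate, say, a 2 GB buffer.
public final class RpcLengthCheck {

    // Assumed default cap; in Hadoop this would come from the server's
    // Configuration (e.g. a key like "ipc.maximum.data.length").
    public static final int DEFAULT_MAX_DATA_LENGTH = 64 * 1024 * 1024; // 64 MB

    private RpcLengthCheck() {}

    /**
     * Validates a client-supplied RPC data length. Kept static so it can be
     * unit-tested directly, per the review comment above.
     *
     * @throws IllegalArgumentException if the length is negative or exceeds
     *         the configured maximum
     */
    public static void checkDataLength(int dataLength, int maxDataLength) {
        if (dataLength < 0) {
            throw new IllegalArgumentException(
                "RPC data length is negative: " + dataLength);
        }
        if (dataLength > maxDataLength) {
            throw new IllegalArgumentException("RPC data length " + dataLength
                + " exceeds maximum allowed " + maxDataLength);
        }
    }

    public static void main(String[] args) {
        checkDataLength(1024, DEFAULT_MAX_DATA_LENGTH); // small request: OK
        boolean rejected = false;
        try {
            // a 2 GB-scale request is rejected before any allocation happens
            checkDataLength(Integer.MAX_VALUE, DEFAULT_MAX_DATA_LENGTH);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected ? "rejected" : "accepted");
    }
}
```

Because the check runs on the length field parsed from the wire, it also catches malformed headers whose length bytes decode to garbage, addressing the debuggability point above.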
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira