Posted to hdfs-dev@hadoop.apache.org by "Lohit Vijayarenu (JIRA)" <ji...@apache.org> on 2013/11/11 23:16:18 UTC

[jira] [Created] (HDFS-5499) Provide way to throttle per FileSystem read/write bandwidth

Lohit Vijayarenu created HDFS-5499:
--------------------------------------

             Summary: Provide way to throttle per FileSystem read/write bandwidth
                 Key: HDFS-5499
                 URL: https://issues.apache.org/jira/browse/HDFS-5499
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Lohit Vijayarenu


In some cases it might be worthwhile to throttle read/write bandwidth on a per-JVM basis so that clients do not spawn too many threads and start data movement that causes other JVMs to starve. The ability to throttle read/write bandwidth per FileSystem would help avoid such issues. 

The challenge seems to be how well this can fit into the FileSystem code. If one enables throttling around the FileSystem APIs, then any hidden data transfers within the cluster that use them might also be affected, e.g. copying the job jar during job submission, localizing resources for the distributed cache, and such. 
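As a rough illustration of the kind of throttling proposed here, the sketch below wraps an InputStream and sleeps whenever reads get ahead of a configured bytes-per-second budget. This is only a hypothetical sketch, not an actual Hadoop API; the class name and rate-accounting scheme are assumptions for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Hypothetical sketch of a read-bandwidth throttle: sleeps before each read
 * so that cumulative bytes read never exceed maxBytesPerSec on average.
 * Not an actual HDFS class; illustration only.
 */
public class ThrottledInputStream extends FilterInputStream {
    private final long maxBytesPerSec;
    private final long startTime = System.nanoTime();
    private long bytesRead = 0;

    public ThrottledInputStream(InputStream in, long maxBytesPerSec) {
        super(in);
        this.maxBytesPerSec = maxBytesPerSec;
    }

    /** Sleep long enough that bytesRead / elapsed stays under the cap. */
    private void throttle() throws IOException {
        double elapsedSec = (System.nanoTime() - startTime) / 1e9;
        double expectedSec = (double) bytesRead / maxBytesPerSec;
        long sleepMs = (long) ((expectedSec - elapsedSec) * 1000);
        if (sleepMs > 0) {
            try {
                Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while throttling", e);
            }
        }
    }

    @Override
    public int read() throws IOException {
        throttle();
        int b = super.read();
        if (b != -1) bytesRead++;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        throttle();
        int n = super.read(buf, off, len);
        if (n > 0) bytesRead += n;
        return n;
    }

    public static void main(String[] args) throws IOException {
        // Drain 4096 bytes at a cap of 8192 B/s; should take roughly half a second.
        long t0 = System.nanoTime();
        try (InputStream in =
                new ThrottledInputStream(new ByteArrayInputStream(new byte[4096]), 8192)) {
            byte[] buf = new byte[1024];
            while (in.read(buf) != -1) { /* drain */ }
        }
        double sec = (System.nanoTime() - t0) / 1e9;
        System.out.println("drained 4096 bytes under an 8 KB/s cap in ~" +
                String.format("%.1f", sec) + "s");
    }
}
```

A per-FileSystem version would presumably hang a shared rate accounter off the FileSystem instance so all streams opened through it draw from one budget, which is exactly where the second concern above bites: internal transfers (job-jar copies, distributed-cache localization) opened through the same FileSystem would be throttled too.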



--
This message was sent by Atlassian JIRA
(v6.1#6144)