Posted to user@hadoop.apache.org by John Lilley <jo...@redpoint.net> on 2013/09/21 15:55:35 UTC

connection overload strategies

If my YARN application tasks are all reading/writing HDFS simultaneously and some node is unable to honor a connection request because it is overloaded, what happens?  I've seen HDFS attempt to retry connections.
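For reference, the client-side retry knobs I've been looking at are roughly the ones below; this is just a minimal sketch assuming Hadoop 2.x property names, with the documented default values rather than recommendations:

    import org.apache.hadoop.conf.Configuration;

    public class HdfsRetryTuning {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Number of times the IPC client retries establishing a server
            // connection before giving up (default 10 in Hadoop 2.x).
            conf.setInt("ipc.client.connect.max.retries", 10);
            // Milliseconds the client waits between those connection attempts.
            conf.setInt("ipc.client.connect.retry.interval", 1000);
            // Separate retry budget used when the connection attempt times out.
            conf.setInt("ipc.client.connect.max.retries.on.timeouts", 45);
            // Retries for writing a block to the DataNodes before the client
            // reports the failure to the application.
            conf.setInt("dfs.client.block.write.retries", 3);
            System.out.println("ipc.client.connect.max.retries = "
                    + conf.getInt("ipc.client.connect.max.retries", -1));
        }
    }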
For that matter, how does MR under YARN deal with connection overload during the shuffle phase, where such overload seems especially likely?
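On the shuffle side, these are the properties I assume govern fetch retries and the server-side connection limit; again just a sketch with Hadoop 2.x (MRv2) names and default values, not a tested configuration:

    import org.apache.hadoop.conf.Configuration;

    public class ShuffleOverloadTuning {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Concurrent fetch connections each reducer opens to map outputs.
            conf.setInt("mapreduce.reduce.shuffle.parallelcopies", 5);
            // How long a fetcher waits to connect to a NodeManager's shuffle
            // service before counting it as a fetch failure.
            conf.setInt("mapreduce.reduce.shuffle.connect.timeout", 180000);
            // Socket read timeout for pulling the map output itself.
            conf.setInt("mapreduce.reduce.shuffle.read.timeout", 180000);
            // Upper bound on the delay before a reducer retries a failed fetch.
            conf.setInt("mapreduce.reduce.shuffle.retry-delay.max.ms", 60000);
            // Server-side cap on simultaneous shuffle connections the
            // NodeManager accepts (0 means no limit).
            conf.setInt("mapreduce.shuffle.max.connections", 0);
            System.out.println("shuffle parallel copies = "
                    + conf.getInt("mapreduce.reduce.shuffle.parallelcopies", -1));
        }
    }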
Thanks
John