Posted to user@hadoop.apache.org by "Ly, Kiet" <ki...@g.harvard.edu> on 2019/02/19 00:36:44 UTC
hdfs remote copy timeout
Whenever my Hadoop cluster is under heavy load, I can't copy a file from
HDFS (using hdfs dfs -copyToLocal) to my desktop. Is there anything that I can
tune in the DataNode to avoid the socket timeout?
Re: hdfs remote copy timeout
Posted by Ayush Saxena <ay...@gmail.com>.
Hi Kiet
You can try increasing the timeout using the configuration
dfs.client.socket-timeout
The default is 60000 milliseconds (60 seconds); you can increase it as needed.
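For example, a sketch of how you might raise the timeout in the client's hdfs-site.xml (the value of 300000 ms here is just an illustrative choice, not a recommendation):

```xml
<!-- hdfs-site.xml on the client running copyToLocal -->
<property>
  <name>dfs.client.socket-timeout</name>
  <!-- Read timeout in milliseconds; default is 60000 (60 s).
       300000 (5 min) is an example value for a heavily loaded cluster. -->
  <value>300000</value>
</property>
```

Alternatively, client-side settings like this can usually be overridden per command with the generic -D option, e.g. hdfs dfs -D dfs.client.socket-timeout=300000 -copyToLocal <src> <dst>.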
Thanks
-Ayush
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org