Posted to hdfs-dev@hadoop.apache.org by "Zheng Shao (JIRA)" <ji...@apache.org> on 2010/10/26 22:20:19 UTC
[jira] Created: (HDFS-1479) Massive file deletion causes some timeouts in writers
Massive file deletion causes some timeouts in writers
-----------------------------------------------------
Key: HDFS-1479
URL: https://issues.apache.org/jira/browse/HDFS-1479
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 0.20.2
Reporter: Zheng Shao
Assignee: Zheng Shao
Priority: Minor
When we do a massive deletion of files, we see timeouts in writers that are writing to HDFS. This does not happen on all DataNodes, but it happens regularly enough that we would like to fix it.
{code}
yyy.xxx.com: 10/10/25 00:55:32 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-5459995953259765112_37619608java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.10.10.10:56319 remote=/10.10.10.10:50010]
{code}
This is caused by the default setting of AsyncDiskService, which starts up to 4 threads per volume to delete files, so a massive delete can saturate a disk and starve concurrent block writes on the same volume.
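One possible mitigation (a hypothetical sketch, not the actual Hadoop patch) is to bound the deletion pool to a single thread per volume so deletes cannot monopolize a disk. The class name `ThrottledDeleteService` and the constant `THREADS_PER_VOLUME` below are illustrative assumptions, not names from the HDFS codebase:

{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: one small, bounded deletion pool per volume so that
// massive deletes cannot saturate a disk and starve concurrent writers.
public class ThrottledDeleteService {
    // 1 thread per volume instead of AsyncDiskService's default of up to 4.
    private static final int THREADS_PER_VOLUME = 1;
    private final Map<String, ExecutorService> executors = new HashMap<>();

    // Lazily create a fixed-size executor for each volume root.
    private synchronized ExecutorService executorFor(String volume) {
        return executors.computeIfAbsent(volume,
                v -> Executors.newFixedThreadPool(THREADS_PER_VOLUME));
    }

    // Queue a file for asynchronous deletion on its volume's executor.
    public void deleteAsync(String volume, File file) {
        executorFor(volume).execute(() -> {
            if (!file.delete()) {
                System.err.println("Failed to delete " + file);
            }
        });
    }

    // Shut down all per-volume executors and wait for queued deletes.
    public synchronized void shutdown() throws InterruptedException {
        for (ExecutorService e : executors.values()) {
            e.shutdown();
            e.awaitTermination(60, TimeUnit.SECONDS);
        }
    }
}
{code}

With a single deletion thread per volume, a burst of deletes is serialized per disk rather than fanned out, trading deletion latency for steadier write throughput.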
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.