Posted to hdfs-dev@hadoop.apache.org by "Brahma Reddy Battula (JIRA)" <ji...@apache.org> on 2017/04/27 14:05:04 UTC
[jira] [Created] (HDFS-11711) DN should not delete the block on "Too many open files" exception
Brahma Reddy Battula created HDFS-11711:
-------------------------------------------
Summary: DN should not delete the block on "Too many open files" exception
Key: HDFS-11711
URL: https://issues.apache.org/jira/browse/HDFS-11711
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
*We saw the following scenario in one of our customer environments:*
* While the job client was writing {{"job.xml"}}, there were pipeline failures, so the block ended up written to only one DN.
* When the mapper read {{"job.xml"}}, the DN hit a {{"Too many open files"}} exception (the process had exceeded the system's open-file limit) and deleted the block. With the only replica gone, the mapper could not read the file and the job failed (see the sketch below).
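The point of the fix is that the DataNode should distinguish environmental errors such as {{"Too many open files"}} (EMFILE) from genuine replica corruption before invalidating a block. Below is a minimal, illustrative sketch of that distinction; it is not actual DataNode code, and the helper names ({{serveBlock}}, {{isTransientResourceError}}, {{markBlockCorrupt}}) are hypothetical:

{code:java}
import java.io.FileInputStream;
import java.io.IOException;

public class BlockReadSketch {

    /** Heuristic: "Too many open files" (EMFILE/ENFILE) is a resource limit
     *  on the DataNode host, not evidence that the block replica is bad. */
    static boolean isTransientResourceError(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Too many open files");
    }

    static void serveBlock(String blockPath) throws IOException {
        try (FileInputStream in = new FileInputStream(blockPath)) {
            // ... stream block bytes to the client ...
        } catch (IOException e) {
            if (isTransientResourceError(e)) {
                // Environmental failure: fail this read, but keep the replica.
                // Deleting it here can destroy the only copy of the block.
                throw e;
            }
            // Otherwise treat the replica as suspect (e.g. checksum mismatch).
            markBlockCorrupt(blockPath);   // hypothetical helper
            throw e;
        }
    }

    static void markBlockCorrupt(String blockPath) {
        // Placeholder for reporting/invalidating a genuinely corrupt replica.
        System.err.println("Reporting corrupt replica: " + blockPath);
    }
}
{code}

The key design point is that a resource-exhaustion error says nothing about the health of the on-disk replica, so the safe action is to fail the read and keep the data; deleting on any read-side IOException risks destroying the last replica, which is exactly what happened in this scenario.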
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org