Posted to hdfs-dev@hadoop.apache.org by "Harshakiran Reddy (Jira)" <ji...@apache.org> on 2020/08/27 10:04:00 UTC
[jira] [Created] (HDFS-15543) RBF: Writes should be allowed when a
subcluster is unavailable for RANDOM mount points with fault tolerance
enabled
Harshakiran Reddy created HDFS-15543:
----------------------------------------
Summary: RBF: Writes should be allowed when a subcluster is unavailable for RANDOM mount points with fault tolerance enabled
Key: HDFS-15543
URL: https://issues.apache.org/jira/browse/HDFS-15543
Project: Hadoop HDFS
Issue Type: Bug
Components: rbf
Affects Versions: 3.1.1
Environment: FI_MultiDestination_client]# hdfs dfsrouteradmin -ls /test_ec
Mount Table Entries:
Source      Destinations                              Owner  Group     Mode       Quota/Usage
/test_ec    hacluster->/tes_ec,hacluster1->/tes_ec    test   ficommon  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
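For reference, a mount entry like the one listed above would typically be created with the Router admin CLI. This is a sketch based on the environment output; the `-order RANDOM` and `-faulttolerant` flags are the dfsrouteradmin options for random destination ordering and fault-tolerant mounts, and the paths mirror the listing above (verify against your own setup):

```shell
# Create a fault-tolerant RANDOM mount point spanning two subclusters
# (source and destination paths taken from the mount table listing above)
hdfs dfsrouteradmin -add /test_ec hacluster,hacluster1 /tes_ec \
    -order RANDOM -faulttolerant

# Verify the resulting entry
hdfs dfsrouteradmin -ls /test_ec
```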
Reporter: Harshakiran Reddy
A RANDOM mount point should allow creating new files when one subcluster is down, as long as fault tolerance is enabled. Here, however, the write fails.
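The expected behavior can be sketched as follows (a hypothetical helper, not the actual Router code): with fault tolerance enabled, a write through a RANDOM mount point should fall back to a healthy subcluster rather than failing outright.

```python
import random

def write_random_mount(destinations, try_write):
    """Attempt a write against the destinations in random order,
    falling back to the next subcluster on failure -- the behavior
    a fault-tolerant RANDOM mount point is expected to provide.

    destinations: list of subcluster names (e.g. from the mount table)
    try_write:    callable that performs the write or raises IOError
    """
    errors = []
    for dest in random.sample(destinations, len(destinations)):
        try:
            return try_write(dest)
        except IOError as e:
            errors.append((dest, e))  # subcluster unavailable; try the next one
    raise IOError(f"all subclusters failed: {errors}")
```

Under this model, a single unavailable subcluster (as in the reported scenario) would be skipped and the write would succeed on the remaining one.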
File write throws the exception:
2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[DISK]
2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Unable to create new block.
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block locations. Source file "/test_ec/f1._COPYING_" - Aborting...block==null
put: Could not get block locations. Source file "/test_ec/f1._COPYING_" - Aborting...block==null