Posted to hdfs-dev@hadoop.apache.org by "ZanderXu (Jira)" <ji...@apache.org> on 2022/05/28 01:24:00 UTC

[jira] [Created] (HDFS-16598) All datanodes [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]] are bad. Aborting...

ZanderXu created HDFS-16598:
-------------------------------

             Summary: All datanodes [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]] are bad. Aborting...
                 Key: HDFS-16598
                 URL: https://issues.apache.org/jira/browse/HDFS-16598
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: ZanderXu
            Assignee: ZanderXu


org.apache.hadoop.hdfs.testPipelineRecoveryOnRestartFailure failed with a stack trace like the following:
{code:java}
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1667)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1601)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1587)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1371)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:674)
{code}

After tracing the root cause, I found that this bug was introduced by [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]: when pipeline recovery fails, the block GS (generation stamp) held by the client may be smaller than the GS on the DataNode, so the DataNode rejects the client's subsequent recovery attempts and the stream aborts.
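
To make the suspected root cause concrete, here is a minimal, self-contained sketch of that GS comparison. This is not the actual HDFS implementation; the Replica class, the checkGenerationStamp method, and the GS values are all hypothetical, standing in for the real DataNode-side recovery check:
{code:java}
import java.io.IOException;

// Illustrative sketch only -- NOT the actual HDFS code. All class and method
// names here are hypothetical; it only demonstrates the kind of generation
// stamp (GS) comparison that makes a DataNode reject a stale client.
public class GenStampMismatchSketch {

  /** Hypothetical replica state held by a DataNode. */
  static class Replica {
    final long blockId;
    final long generationStamp;

    Replica(long blockId, long generationStamp) {
      this.blockId = blockId;
      this.generationStamp = generationStamp;
    }
  }

  /**
   * Hypothetical check a DataNode might run when a client asks to recover a
   * pipeline: a client whose GS is older than the replica's GS is refused.
   */
  static void checkGenerationStamp(Replica replica, long clientGs)
      throws IOException {
    if (clientGs < replica.generationStamp) {
      throw new IOException("Client GS " + clientGs
          + " is older than replica GS " + replica.generationStamp);
    }
  }

  public static void main(String[] args) throws IOException {
    // The DataNode has already bumped the replica's GS to 1002 during a
    // previous (failed) recovery, but the client still holds GS 1001.
    Replica replica = new Replica(1073741825L, 1002L);

    checkGenerationStamp(replica, 1002L); // in-sync client: accepted

    try {
      checkGenerationStamp(replica, 1001L); // stale client: refused
    } catch (IOException e) {
      // From the client's perspective, every DataNode that refuses it looks
      // "bad", so DataStreamer eventually aborts with the error above.
      System.out.println("Rejected: " + e.getMessage());
    }
  }
}
{code}
The real check lives in the DataNode/DataStreamer recovery path; the sketch only mirrors its effect, namely that a client left holding a stale GS after a failed recovery is turned away by every remaining DataNode.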
