Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/02/27 13:25:04 UTC

[GitHub] [hadoop] ayushtkn commented on a change in pull request #2721: HDFS-15856: Make recover the pipeline in same packet exceed times for…

ayushtkn commented on a change in pull request #2721:
URL: https://github.com/apache/hadoop/pull/2721#discussion_r584121631



##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##########
@@ -1263,14 +1265,18 @@ private boolean processDatanodeOrExternalError() throws IOException {
       packetSendTime.clear();
     }
 
-    // If we had to recover the pipeline five times in a row for the
+    // If we had to recover the pipeline exceed times which
+    // defined in maxPipelineRecoveryRetries in a row for the

Review comment:
       nit:
   There's a grammatical error here; can we change it to:
   ``
   If we had to recover the pipeline more than the value
   defined by maxPipelineRecoveryRetries in a row for the
   ``
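
For context, the wording under review documents a retry guard along these
lines. The following is a minimal sketch of the pattern, not the actual
DataStreamer code: the class and member names (PipelineRecoveryGuard,
recoveriesForCurrentPacket, the two callbacks) are illustrative assumptions;
only maxPipelineRecoveryRetries and the "same packet" semantics come from
the patch.

    import java.io.IOException;

    // Minimal sketch of the guard described above; names and structure
    // are assumptions, not the actual DataStreamer internals.
    class PipelineRecoveryGuard {
      // Limit taken from dfs.client.pipeline.recovery.max-retries (default 5).
      private final int maxPipelineRecoveryRetries;
      private int recoveriesForCurrentPacket = 0;

      PipelineRecoveryGuard(int maxPipelineRecoveryRetries) {
        this.maxPipelineRecoveryRetries = maxPipelineRecoveryRetries;
      }

      // Called each time the pipeline is rebuilt while resending the same packet.
      void onRecoveryForSamePacket() throws IOException {
        if (++recoveriesForCurrentPacket > maxPipelineRecoveryRetries) {
          // Repeated failures on one packet suggest the data is corrupt
          // locally or is being corrupted in transmission, so give up.
          throw new IOException("Failed to recover the pipeline more than "
              + maxPipelineRecoveryRetries
              + " times in a row for the same packet");
        }
      }

      // Called once the packet is acknowledged, ending the streak.
      void onPacketAcked() {
        recoveriesForCurrentPacket = 0;
      }
    }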
   

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
##########
@@ -4352,6 +4352,17 @@
   </description>
 </property>
 
+<property>
+  <name>dfs.client.pipeline.recovery.max-retries</name>
+  <value>5</value>
+  <description>
+    If we had to recover the pipeline exceed times which
+    this value defined in a row for the same packet,
+    this client likely has corrupt data or corrupting
+    during transmission.

Review comment:
       Same as above.
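
The property's default of 5 preserves the previously hard-coded limit of
five recoveries in a row. A client needing more headroom could override it
in its Configuration; a short sketch, where only the property name and its
default come from the patch and the value 10 is an arbitrary example:

    import org.apache.hadoop.conf.Configuration;

    public class RecoveryRetriesExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Raise the limit above the default of 5 from hdfs-default.xml.
        conf.setInt("dfs.client.pipeline.recovery.max-retries", 10);
        // A DFS client built from this conf would then tolerate up to 10
        // consecutive pipeline recoveries for the same packet.
        System.out.println(
            conf.getInt("dfs.client.pipeline.recovery.max-retries", 5));
      }
    }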




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


