Posted to common-issues@hadoop.apache.org by "sodonnel (via GitHub)" <gi...@apache.org> on 2023/05/24 08:38:21 UTC

[GitHub] [hadoop] sodonnel commented on a diff in pull request #5687: HDFS-17024. Potential data race introduced by HDFS-15865.

sodonnel commented on code in PR #5687:
URL: https://github.com/apache/hadoop/pull/5687#discussion_r1203703794


##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:
##########
@@ -916,7 +917,7 @@ void waitForAckedSeqno(long seqno) throws IOException {
     try (TraceScope ignored = dfsClient.getTracer().
         newScope("waitForAckedSeqno")) {
       LOG.debug("{} waiting for ack for: {}", this, seqno);
-      int dnodes = nodes != null ? nodes.length : 3;
+      int dnodes = nodes.length > 0 ? nodes.length : 3;

Review Comment:
   I think there is still a risk of a race here. In the original problem, the `nodes != null` check evaluated to true, then `nodes` was set to null by another thread, and the subsequent `nodes.length` threw a NullPointerException.
   
   In this changed code, `nodes.length` is read twice. You could evaluate it to 3 in the condition, then `nodes` is swapped to the new `EMPTY_DATANODES`, and the second read in the ternary returns zero. That sets `dnodes` to zero rather than 3, which will probably trip up the `getDatanodeWriteTimeout` method it is passed to.
   
   To make it safe, you probably have to:
   
   ```
   int currentNodes = nodes.length;
   int dnodes = currentNodes > 0 ? currentNodes : 3;
   ```
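   
   Or, as a minimal alternative sketch (assuming `nodes` is a `volatile DatanodeInfo[]` field and `EMPTY_DATANODES` is the sentinel introduced by this PR), you could snapshot the reference itself, which also keeps the original null guard:
   
   ```
   // Read the volatile field once; later uses of the local cannot be
   // affected by another thread swapping the field to EMPTY_DATANODES.
   final DatanodeInfo[] current = nodes;
   int dnodes = (current != null && current.length > 0) ? current.length : 3;
   ```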



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: common-issues-help@hadoop.apache.org