Posted to commits@uniffle.apache.org by ck...@apache.org on 2022/11/12 07:07:28 UTC

[incubator-uniffle] 01/04: Fix potential missing reads of exclude nodes (#269)

This is an automated email from the ASF dual-hosted git repository.

ckj pushed a commit to branch branch-0.6
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit fa894ba3c7fc59cbf0940f91b07a925462a1530d
Author: Junfan Zhang <zu...@apache.org>
AuthorDate: Tue Oct 18 19:12:03 2022 +0800

    Fix potential missing reads of exclude nodes (#269)
    
    ### What changes were proposed in this pull request?
    Record the modification time observed before parsing the file, to fix potential missing reads of exclude nodes
    
    ### Why are the changes needed?
    I found that if the exclude-nodes file is updated with new nodes after `parseExcludeNodesFile(hadoopFileSystem.open(hadoopPath));` runs, the current implementation may record a modification time that already reflects that update, so the newly added nodes won't be recognized on the next refresh.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    By hand
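
    The pattern the fix establishes can be sketched as follows. This is a hypothetical, simplified model (the `NodesFile` interface and `ExcludeNodesRefresher` class are illustrative, not the actual `SimpleClusterManager` API): capture the modification time once before parsing, and record exactly that captured value afterwards. If the file changes while or right after it is parsed, the next poll sees a newer modification time and re-reads the file instead of silently skipping it.

    ```java
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    public class ExcludeNodesRefresher {
        // Last modification time we have fully consumed; -1 means "never read".
        private final AtomicLong excludeLastModify = new AtomicLong(-1);
        private volatile Set<String> excludeNodes = ConcurrentHashMap.newKeySet();

        /** Minimal stand-in for a file with content and a modification time. */
        public interface NodesFile {
            long modificationTime();
            Set<String> readNodes();
        }

        public void refresh(NodesFile file) {
            // Capture the modification time ONCE, before reading the content.
            long latestModificationTime = file.modificationTime();
            if (excludeLastModify.get() != latestModificationTime) {
                excludeNodes = file.readNodes();
                // Record the time observed before the read, never a fresher one:
                // if the file changed mid-read, the next refresh() re-parses it.
                excludeLastModify.set(latestModificationTime);
            }
        }

        public Set<String> getExcludeNodes() {
            return excludeNodes;
        }
    }
    ```

    The key design choice is that the recorded timestamp is always the one fetched before parsing; re-reading `getModificationTime()` afterwards could swallow an update that landed between the two calls.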
---
 .../java/org/apache/uniffle/coordinator/SimpleClusterManager.java    | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java b/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java
index c60a0d8d..ea65c398 100644
--- a/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java
+++ b/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java
@@ -131,9 +131,10 @@ public class SimpleClusterManager implements ClusterManager {
       Path hadoopPath = new Path(path);
       FileStatus fileStatus = hadoopFileSystem.getFileStatus(hadoopPath);
       if (fileStatus != null && fileStatus.isFile()) {
-        if (excludeLastModify.get() != fileStatus.getModificationTime()) {
+        long latestModificationTime = fileStatus.getModificationTime();
+        if (excludeLastModify.get() != latestModificationTime) {
           parseExcludeNodesFile(hadoopFileSystem.open(hadoopPath));
-          excludeLastModify.set(fileStatus.getModificationTime());
+          excludeLastModify.set(latestModificationTime);
         }
       } else {
         excludeNodes = Sets.newConcurrentHashSet();