Posted to commits@uniffle.apache.org by ro...@apache.org on 2022/10/18 11:12:09 UTC
[incubator-uniffle] branch master updated: Fix potential missing reads of exclude nodes (#269)
This is an automated email from the ASF dual-hosted git repository.
roryqi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git
The following commit(s) were added to refs/heads/master by this push:
new b845381e Fix potential missing reads of exclude nodes (#269)
b845381e is described below
commit b845381ebf3ee53377d7500a1124e1fb8c958870
Author: Junfan Zhang <zu...@apache.org>
AuthorDate: Tue Oct 18 19:12:03 2022 +0800
Fix potential missing reads of exclude nodes (#269)
### What changes were proposed in this pull request?
Use the modification time read before parsing to fix potential missed reads of exclude nodes.
### Why are the changes needed?
I found that if the file is updated with new nodes after `parseExcludeNodesFile(hadoopFileSystem.open(hadoopPath));` has run but before the modification time is recorded, the current implementation stores the newer timestamp without having parsed the newer contents, so the newly added nodes won't be recognized.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By hand
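The race fixed by this patch can be sketched with a minimal, self-contained model. Note this is an illustration, not the actual Uniffle code: the class and field names are hypothetical, `parseExcludeNodesFile(...)` is replaced by a comment, and the file's modification time (normally obtained from `FileStatus.getModificationTime()`) is simulated with a plain field whose mid-refresh update stands in for a concurrent writer.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, simplified model of the exclude-nodes refresh loop.
public class ExcludeNodesRace {
    static final AtomicLong excludeLastModify = new AtomicLong(0);
    // Simulated file modification time; bumped below to model a writer
    // updating the file while a refresh is in progress.
    static volatile long fileModificationTime = 100;

    // Buggy variant: the modification time is read a second time after
    // parsing, so an update that lands in between is recorded as "seen"
    // even though its contents were never parsed.
    static boolean refreshBuggy() {
        if (excludeLastModify.get() != fileModificationTime) {
            // parseExcludeNodesFile(...) would run here on the t=100 contents.
            fileModificationTime = 200; // concurrent update lands mid-refresh
            excludeLastModify.set(fileModificationTime); // stores 200: t=200 contents are skipped
            return true;
        }
        return false;
    }

    // Fixed variant (the pattern of the patch): capture the modification
    // time once, parse, then store the captured value. A concurrent update
    // leaves the timestamps unequal, so the next poll re-parses the file.
    static boolean refreshFixed() {
        long latestModificationTime = fileModificationTime; // single read
        if (excludeLastModify.get() != latestModificationTime) {
            // parseExcludeNodesFile(...) would run here on the t=100 contents.
            fileModificationTime = 200; // concurrent update lands mid-refresh
            excludeLastModify.set(latestModificationTime); // stores 100: mismatch survives
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        refreshBuggy(); // parses the t=100 version but records t=200
        System.out.println("buggy re-parses the update: " + refreshBuggy());

        excludeLastModify.set(0);
        fileModificationTime = 100;
        refreshFixed(); // parses the t=100 version and records t=100
        System.out.println("fixed re-parses the update: " + refreshFixed());
    }
}
```

Running the sketch, the buggy variant never re-parses the file updated mid-refresh, while the fixed variant catches it on the next poll.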
---
.../java/org/apache/uniffle/coordinator/SimpleClusterManager.java | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java b/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java
index cc9b553e..0ba266cf 100644
--- a/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java
+++ b/coordinator/src/main/java/org/apache/uniffle/coordinator/SimpleClusterManager.java
@@ -141,9 +141,10 @@ public class SimpleClusterManager implements ClusterManager {
Path hadoopPath = new Path(path);
FileStatus fileStatus = hadoopFileSystem.getFileStatus(hadoopPath);
if (fileStatus != null && fileStatus.isFile()) {
- if (excludeLastModify.get() != fileStatus.getModificationTime()) {
+ long latestModificationTime = fileStatus.getModificationTime();
+ if (excludeLastModify.get() != latestModificationTime) {
parseExcludeNodesFile(hadoopFileSystem.open(hadoopPath));
- excludeLastModify.set(fileStatus.getModificationTime());
+ excludeLastModify.set(latestModificationTime);
}
} else {
excludeNodes = Sets.newConcurrentHashSet();